Statistically evaluating library association conferences: are we getting our value for our library budget expenditures?

The Bottom Line

ISSN: 0888-045X

Article publication date: 1 December 2001

Citation

Pors, N.O. (2001), "Statistically evaluating library association conferences: are we getting our value for our library budget expenditures?", The Bottom Line, Vol. 14 No. 4. https://doi.org/10.1108/bl.2001.17014daf.002

Publisher: Emerald Group Publishing Limited

Copyright © 2001, MCB UP Limited


Statistically evaluating library association conferences: are we getting our value for our library budget expenditures?

Keywords: Professional associations, Statistical forecasting, Sampling frame, Libraries, Conferences

Introduction

It is well known that many libraries all over the world conduct surveys of their users from time to time. It is also a fact that only a very small minority of these studies deserves the label "research". Most of the surveys are simply data gathering and there is, of course, nothing wrong with that. They are often conducted for administrative or political reasons. It is normally impossible to generalize from these surveys, usually because of limitations in the sampling frame or the measurement instrument, and the presentation of data is often purely descriptive. Research, in contrast, is concerned with associations, causal inferences, explanations, generalization, models, theory development and so on.

In between lies the majority of studies in our field: studies that use some of the instruments of research, such as valid sampling frames, pre-tested measurement instruments, statistical analysis and some conception of causal relations, associations between variables and the formulation of hypotheses. Even with these measures, there are quite a lot of pitfalls in analyzing variables and numbers. I will try to demonstrate some of these by giving examples from a piece of unpublished research. It will hopefully show how far one can get with rather simple statistical techniques.

Given the substantial amount of library budget that professional association activities absorb, it should be valuable for financial managers to see how well survey techniques describe the conference-going experience. In the following case, I will use the evaluations of the IFLA conferences as an example of some of these problems. The Department of Library and Information Management, Royal School of Library and Information Science, has since 1997 conducted the formal evaluation of the conferences on behalf of IFLA. The evaluations have been conducted as combinations of interviews and questionnaires. This has given us the unique opportunity to compare some of the variables on a longitudinal basis. The evaluation reports have been mainly descriptive, presenting summaries of a rather large number of variables.

The questionnaire has been nearly the same for all the conferences, which has made some comparisons possible. The measurement instrument is directed towards eliciting information about the participants' satisfaction with various aspects of the conferences; the measurements concern the participants' perception of dimensions of quality. Most of the questions have to be answered on a five-point scale ranging from excellent to poor. Excellent is given a value of 5 and poor a value of 1, leaving 3 as the middle value. In total, the questionnaire consisted of more than 60 questions. I will analyze just a couple of variables for the purpose of this column.
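The scoring scheme just described can be illustrated with a small sketch. The delegate names, aspect names and ratings below are hypothetical, not taken from the IFLA questionnaire; the sketch merely shows how several five-point item scores are averaged into one composite quality variable of the kind analyzed in this column.

```python
from statistics import mean

# Hypothetical responses: each delegate rates several aspects of the
# conference on the column's five-point scale (5 = excellent, 1 = poor).
responses = {
    "delegate_1": {"plenary": 4, "posters": 5, "workshops": 4},
    "delegate_2": {"plenary": 2, "posters": 3, "workshops": 2},
}

def composite_score(ratings):
    """Average the item scores into one composite quality variable."""
    return mean(ratings.values())

scores = {name: composite_score(items) for name, items in responses.items()}
```

Composite variables of this kind (one number per delegate, still on the 1-5 scale) are what the ANOVA and correlation analyses below operate on.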

We have quite a lot of information about the individual respondents. They were asked about their gender, nationality, professional occupation, number of IFLA conferences attended, function at the Jerusalem conference, etc. These variables form the backbone of the analysis of the perceptions of quality in relation to the different composite variables. Overall, the evaluation of the IFLA conference in Jerusalem was positive, but comparisons with the evaluations of the previous conferences showed that it scored a bit lower on most of the variables.

To dig deeper into the data and offer some explanations for this, I will use different statistical methods of analysis to provide a deeper understanding of what is hiding behind numbers that look so seductive. Comparisons or benchmarking techniques seldom look behind the numbers and figures. Put bluntly, I simply ask: was the overall quality of the conference a bit lower, or did other factors play a role in the participants' perception of the conference?

Analysis of selected variables

First of all, there is no difference in perception according to either gender or age. Table I shows the perception in relation to nationality, and the result is statistically significant. It simply means that there are differences in evaluation according to nationality. The Australian and UK delegates were the least satisfied. I have used a simple ANOVA test to obtain the results in Table I.
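The one-way ANOVA used here can be sketched in a few lines. The grouped scores below are invented for illustration and do not reproduce Table I; the function computes the classic F statistic (between-group mean square divided by within-group mean square), which is the quantity a statistics package reports before looking up the p-value.

```python
from statistics import mean

def one_way_anova_f(groups):
    """F statistic for a one-way ANOVA.

    groups is a list of lists of composite scores, one inner list per
    nationality group. A large F means the group means differ by much
    more than the spread inside the groups would explain.
    """
    all_scores = [x for g in groups for x in g]
    grand_mean = mean(all_scores)
    k, n = len(groups), len(all_scores)
    ss_between = sum(len(g) * (mean(g) - grand_mean) ** 2 for g in groups)
    ss_within = sum((x - mean(g)) ** 2 for g in groups for x in g)
    return (ss_between / (k - 1)) / (ss_within / (n - k))

# Hypothetical composite evaluations grouped by nationality.
australia_uk = [2.8, 3.0, 3.1, 2.9]
usa = [3.9, 4.1, 4.0, 3.8]
eastern_europe = [4.4, 4.6, 4.5, 4.3]
f_stat = one_way_anova_f([australia_uk, usa, eastern_europe])
```

With these made-up groups the F statistic is very large, mirroring the article's finding of a statistically significant difference between nationalities; the p-value would then be read from an F distribution with (k-1, n-k) degrees of freedom.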

There is a slight tendency, significant at the 0.10 alpha level but not at the 0.05 level, for the evaluation to decline as the number of conferences attended increases. There is no statistical association with background variables like gender and age. The next question concerns the different presentations at the conference. Again we use a composite variable that includes evaluations of plenary sessions, poster sessions, workshops and discussion groups. There is no statistically significant difference according to gender or age. The same is true for the number of conferences attended.

Table II shows the evaluations in relation to nationality, and an ANOVA test demonstrates a statistically significant association between evaluation and nationality. What is more interesting is that a test between the delegates' professional function or occupation and their perception of the quality of the presentations does not show any significant differences.

Table I Overall evaluation of the conference (composite variable) in relation to nationality

Table II Evaluation of the quality of presentations (composite variable) in relation to nationality

What we have seen here is that nationality seems to be a decisive factor in quality measures. This is another way of emphasizing that the composition of delegates in relation to nationality is an extremely important factor to take into account. Put another way, it looks as if the different nationalities have different standards for judging quality.

In some of the earlier reports, it was stated that there was a relationship between the judgment of these variables and the perception of the quality of the conference. If that is true, it amounts to saying that people tend to judge the various aspects of a conference as a whole, or that the variables interact (Egholm et al., 2000). This relationship is easily documented in Table III. I have chosen to run a series of correlation analyses on the four variables concerned with the overall impression of the conference and the variable concerned with the hotel accommodation.

There is a very strong co-variation among the variables. It simply means that people who judge the hotel as poor have a very strong tendency to judge the conference as rather poor. When one is working with a large set of variables, it is always a good idea to investigate the associations between them, since they influence the analysis and will be a very important part of drawing conclusions.
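The correlation analysis behind Table III can be sketched as follows. The paired ratings are hypothetical, not the IFLA data; the function computes the ordinary Pearson correlation coefficient between a delegate's hotel rating and his or her overall rating of the conference, which is the measure a correlation matrix is built from.

```python
from math import sqrt
from statistics import mean

def pearson_r(xs, ys):
    """Pearson correlation coefficient between two paired rating series."""
    mx, my = mean(xs), mean(ys)
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sqrt(sum((x - mx) ** 2 for x in xs))
    sy = sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

# Hypothetical paired five-point ratings from eight delegates:
# hotel accommodation vs. overall impression of the conference.
hotel = [1, 2, 2, 3, 4, 4, 5, 5]
overall = [2, 2, 3, 3, 4, 5, 4, 5]
r = pearson_r(hotel, overall)
```

A value of r close to 1, as in this made-up example, is the kind of strong positive co-variation the column describes: delegates unhappy with the hotel tend to rate the whole conference lower.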

Table III A correlation matrix of selected variables of quality

What we see here is simply that variables interact and different dimensions of a conference influence one another when people express satisfaction. The result is not surprising, but it shows clearly that the evaluation of a professional conference can be heavily influenced by factors having nothing to do with the content of the conference. In this respect, conference participation can be seen as a service transaction. Content and context interact in the judgment of the delegates. In a statistical sense, it simply means that it is difficult to isolate a single variable's influence on the dependent variable.

Next, the service level was measured in relation to the registration desk, information desk, tour desk, directional signs, food service, airport arrival and the volunteers. Two phenomena stand out in the evaluation of the service level in Jerusalem: the evaluation of the food services and of the airport arrival. It is important to note that the average of the composite variable is lowered by the evaluation of these two services.

We have again run an ANOVA test between the dependent variable and the background variables (Table IV). Again there are no differences in evaluation according to gender, age or professional function or occupation, but there is a marked difference in relation to the number of conferences attended and, again, to nationality. We see a clear indication that perceptions of service quality change with the number of previous conferences in which one has participated.

Table IV Number of conferences attended and evaluation of service level

Furthermore, the findings show consistent patterns for the two groups tested in relation to their evaluations of the IFLA conferences in 1998 and 1999. The participants from Australia/UK judged all the conferences more negatively than participants from the USA. I have also tested the proportion of the two groups at this and the previous conferences: a Chi-square test indicates that the relative proportion of the two groups has been constant. In Table V we see a t-test of the two groups at the Jerusalem conference. The difference in their evaluation of the conference as a whole is statistically significant. The preliminary conclusion one can draw is that the perceived quality of the Jerusalem conference was slightly lower than that of the previous conferences.
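The group comparison in Table V can be sketched with a two-sample t statistic. The scores below are invented, and the column does not say which variant of the test was run; as an assumption I use Welch's version, which does not require the two groups to have equal variances.

```python
from math import sqrt
from statistics import mean, variance

def welch_t(a, b):
    """Welch's t statistic for comparing two independent group means."""
    var_a, var_b = variance(a), variance(b)  # sample variances (n - 1)
    std_err = sqrt(var_a / len(a) + var_b / len(b))
    return (mean(a) - mean(b)) / std_err

# Hypothetical composite evaluations for the two groups compared in Table V.
australia_uk = [2.9, 3.1, 3.0, 2.8, 3.2]
usa = [3.8, 4.0, 4.1, 3.9, 4.2]
t_stat = welch_t(australia_uk, usa)
```

A strongly negative t, as with these made-up scores, corresponds to the article's finding: the Australia/UK group rated the conference significantly lower than the USA group. The p-value would then be read from a t distribution with Welch-adjusted degrees of freedom.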

Conclusion

Overall, we see a conference that has been evaluated very well. Only a few of the many possible factors received an evaluation below average or adequate. The evaluation reports of the last four conferences have been mainly descriptive and have emphasized comparisons with previous years. In a way, they have merely taken the temperature of the delegates' perceptions.

Table V Comparison between two groups' evaluation of the conference

There is another factor that influences the evaluation: the nationality of the delegates. In this evaluation we see different patterns of grading according to nationality. It looks as if delegates from the third world and from Eastern European countries on average judge the different aspects of the conference more positively than delegates from Western Europe. It also emerges from this analysis that the number of IFLA conferences in which an individual delegate has participated influences the judgment.

As Dean Roberta Shaffer and I conclude our series on statistics, we both hope that we have emphasized the importance and necessity of numerical literacy in our profession. I sincerely hope that this small analysis of a few variables has convinced you how exciting statistical analysis can be. In my research on IFLA, I have touched upon different statistical tests, the consequences of comparing data, and co-variation between theoretically independent variables such as the quality of a hotel and the judgment of a conference, and we have looked for the most convincing independent factor explaining the level of satisfaction with a conference. The last result was achieved inductively, by playing with the numbers. The result is important in relation to the evaluation of satisfaction measures for our professional conferences. It also shows that the composition of the participants in relation to nationality and frequency of participation in earlier conferences is extremely important for the outcome of the evaluation. It could even be used to formulate hypotheses, which are the building-blocks for designing worthwhile association meetings.

What began as a simple descriptive evaluation of a conference turned into some rather interesting questions, simply by looking for interactions using simple statistical analysis. These techniques could easily form the backbone of a more focused future piece of research, while adding to the contributions of statistical analysis to our profession.

Niels Ole Pors, Department of Library and Information Management, Royal School of Library and Information Science, Denmark

References and further reading

Egholm, C., Johannsen, C.G. and Moring, C. (2000), "Evaluation of the 63rd IFLA Council and General Conference", IFLA Journal, Vol. 24 No. 1, pp. 49-56.

Pors, N.O. (2000), Jerusalem and Before: Evaluation of the IFLA Conference in 2000, The Royal School of Library and Information Science, Denmark.
