Insights from an online CIT study
Roediger Voss, Center for Strategic Management, HWZ University of Applied Sciences of Zurich, Zurich, Switzerland
Thorsten Gruber, Manchester Business School, The University of Manchester, Manchester, UK
Alexander Reppel, Royal Holloway, University of London, Egham, UK
Purpose – This paper aims to explore satisfactory and dissatisfactory student-professor encounters in higher education from a student's perspective. The critical incident technique (CIT) is used to categorise positive and negative student-professor interactions and to reveal quality dimensions of professors.
Design/methodology/approach – An exploratory study using an online application of the well-established CIT method was conducted. The study took place at a large European university. A total of 96 students took part in the study on a voluntary basis and reported 164 incidents. Respondents were aged between 19 and 24 years (x=23.2) and slightly more female students (52 per cent) filled in the online CIT questionnaire than male students (48 per cent). On average, every student provided 1.7 incidents.
Findings – The results of the critical incident sorting process support previous classification systems that used three major groups to thoroughly represent the domain of (un)satisfactory student-professor encounters. The results of the CIT study also revealed ten quality dimensions of professors, corroborating previous research in this area.
Research limitations/implications – Owing to the exploratory nature of the study and the scope and size of its student sample, the results outlined are tentative in nature. The research study also only investigates the experiences of one stakeholder group.
Practical implications – Gaining knowledge of students' classroom experiences should be beneficial for professors to design their teaching programmes. Based on the results, universities might consider the introduction of student contracts or student satisfaction guarantees to manage student expectations effectively.
Originality/value – The paper is the first to successfully apply an online version of the CIT to higher education services. It shows that the CIT method is a useful tool for exploring student-professor encounters in higher education and will hopefully open up an area of research and methodology that could reap considerable further benefits for researchers interested in this field.
Customer services quality; Higher education; Critical incident technique; Students; Academic staff.
International Journal of Educational Management
Emerald Group Publishing Limited
Increasingly, higher education institutions are realising that higher education could be regarded as a business-like service industry (Davis and Swanson, 2001; DeShields et al., 2005). In this regard, Frankel and Swanson (2002) point to the similarities between education and services in their delivery and evaluation processes. Further, Eagle and Brennan (2007) describe higher education as a complex service and for Hennig-Thurau et al. (2001, p. 332), educational services “fall into the field of services marketing”. The latter, however, also maintain that educational services differ from other professional services in the following ways: educational services play an important role in students' lives, and students have to show motivation and intellectual skills to attain their goals. Likewise, Cooper (2007) stresses that educational success depends on the efforts of both parties involved, namely students and universities as service providers. Both groups may also be affected significantly by quality uncertainty and informational asymmetry. Moreover, students have to be willing to take responsibility for their own education and cannot merely consume the service offered (Svensson and Wood, 2007). Consequently, this paper does not regard students as customers but as partners (Clayson and Haley, 2005). Students “are one of a set of partners” (Clayson and Haley, 2005, p. 6) and universities should advance the interests of all stakeholders involved, i.e. students, faculty, teaching staff, parents, government, and society in general. Nevertheless, as “partners” or “co-creators of value” (Vargo and Lusch, 2004, 2006), students can expect to receive a beneficial learning experience in general and valuable student-professor encounters in particular.
Similar to a service encounter, the interaction between students and professors in a classroom is a form of human behaviour that is limited in scope and that has clear roles for the participating actors who pursue a purpose (Czepiel et al., 1986). Moreover, educational services can be described by several service characteristics: each student has his/her unique demands and needs and makes his/her own experiences. Educational services are also predominately intangible, heterogeneous, and perishable in nature. Further, the professor's teaching efforts are simultaneously “produced” and “consumed” with both professor and student being part of the teaching experience (Shank et al., 1995). Thus, findings from the services literature should be applicable to the context of higher education in general and to the student-professor encounter in particular.
This paper investigates the nature of quality in higher education and focuses on exploring satisfactory and dissatisfactory student-professor encounters from a student's perspective. The paper begins by reviewing the literature on quality in higher education services and the important role of professors. It then describes a study that uses an online version of the critical incident technique (CIT) (Flanagan, 1954) to categorise positive and negative student-professor encounters, to reveal quality dimensions of professors, and to examine which attributes of professors are likely to cause satisfaction and which dimensions predominately lead to dissatisfaction. The paper then concludes with a summary of findings and suggestions for further research.
Quality in higher education services
According to Harvey and Green (1993), quality in higher education is a multifaceted and complex concept and a single appropriate definition of quality is lacking. Thus, there is no consensus concerning “the best way to define and measure service quality” (Clewes, 2003, p. 71). Following a learning-oriented approach, quality has recently often been interpreted as the transformation of students (Srikanthan and Dalrymple, 2007; Harvey and Knight, 1996) with the aim of students' transcendental self-development (Gibbs, 2008). Every stakeholder in higher education (e.g. students, government, and professional bodies), however, views quality differently, depending on their specific needs and wants. This paper is only concerned with one particular stakeholder in higher education: students.
In the services literature, the focus is on perceived quality, which results from the comparison of service expectations with perceptions of actual performance (Zeithaml et al., 1990). Applied to the context of higher education, O'Neill and Palmer (2004, p. 42) defined service quality as “the difference between what a student expects to receive and his/her perceptions of actual delivery”. Browne et al. (1998) and Guolla (1999) pointed out that students' perceived service quality is an antecedent to student satisfaction.
Positive perceptions of service quality can result in student satisfaction, and satisfied students may help attract new students by engaging in positive word-of-mouth (WOM) communication and may return to the university themselves to take further courses (Guolla, 1999; Wiers-Jenssen et al., 2002; Mavondo et al., 2004; Schertzer and Schertzer, 2004; Marzo-Navarro et al., 2005a, c; Helgesen and Nesset, 2007). Previous research by Guolla (1999) already indicated that course satisfaction is positively related to learning. Finally, Elliott and Shin (2002) showed that student satisfaction also has a positive impact on fundraising and student motivation. For professors to create satisfaction, however, they need to know what their students expect and experience (Davis and Swanson, 2001), which again stresses the importance of investigating student perceptions of classroom encounters.
The crucial role of professors
Oldfield and Baron (2000) identified higher education as a “pure” service and stressed the importance of the quality of personal contacts. Based on these findings, the underlying assumption of this paper is that, for students, the qualities and behaviours of professors have a significant impact on their perceptions of service quality. This assumption is supported by several research findings in the services literature. For example, Hartline and Ferrell (1996) maintained that the attitudes and behaviours of frontline employees have a strong impact on customers' perceptions of service quality. Studies also point to the important role the human interaction element plays in determining whether the delivered service is considered satisfactory (Chebat and Kollias, 2000). Finally, Bitner et al. (1994) showed that the nature of the interpersonal interaction between the customer and the contact employee often affects service satisfaction.
In the context of higher education, findings by authors such as Harnash-Glezer and Meyer (1991) and Hill et al. (2003) also stressed the importance of teaching staff and reported that the quality of the professor is among the most important factors in the provision of high-quality education. Finally, Pozo-Munoz et al. (2000) and Marzo-Navarro et al. (2005b) posited that teaching staff are the main actors in a university, exercising the largest positive influence on student satisfaction. Thus, the behaviours and attitudes of professors should be the primary determinant of students' perceptions of service quality in higher education. Knowing more about student experiences may enable professors to adapt their attitudes and behaviour to their students' underlying needs, which should positively influence students' perceived service quality and their satisfaction levels.
Service quality in higher education – the student's perspective
Oldfield and Baron (2000) believed that there is a tendency to investigate service quality in higher education from an organizational perspective. Instead of collecting data based upon what universities believe their students regard as important, institutions should focus on what their students really want. Likewise, Joseph et al. (2005, p. 67) pointed to the heavy reliance of higher education service quality researchers on input from academic insiders while excluding the input of the students themselves. They feared that conventional approaches would leave “decisions about what constitutes quality of service (e.g. such as deciding what is ‘most important’ to students) exclusively in the hands of administrators and/or academics”. Joseph et al. (2005) therefore suggested that academic administrators should concentrate on recognising student needs. Similarly, Rowley (1997, p. 11) believed that researchers should try to reveal the most important quality dimensions from a student's point of view, as these dimensions are “most likely to have an impact on their overall satisfaction”.
Aim of the study
On the basis of these findings, this paper focuses on the service quality elements in higher education that students themselves regard as important. Given the need for more research on classroom service encounters (Swanson and Frankel, 2002), the research study is exploratory in nature. More specifically, it uses a semi-standardized qualitative technique, the CIT, as O'Neill and Palmer (2004, p. 41) suggest that qualitative methods “provide an interesting insight into the mindset of individual students”. The major aim of this paper is to explore satisfactory and dissatisfactory student-professor encounters that students have experienced. These experiences may improve or weaken the student's learning experience. Knowing what students regard as satisfactory and dissatisfactory student-professor interactions helps professors improve the classroom experience by, for example, changing course policies, improving interpersonal skills, or simply gaining a better understanding of the student's perspective (Davis and Swanson, 2001). The collected student-professor incidents will be categorised and quality dimensions of professors will be developed, especially by examining which attributes of professors are likely to cause dissatisfaction and which predominately lead to satisfaction. Knowing what attributes of professors are desired by students may improve the overall education process (Faranda and Clarke, 2004). The following section describes the qualitative research method used in the study in more detail and explains its appropriateness.
Methodology – the CIT
Flanagan (1954) describes the CIT as “a procedure for gathering certain important facts concerning behaviour in defined situations” (p. 335). CIT has been used across a wide range of disciplines and in recent years, it has been used extensively in the service literature to explore sources of satisfaction and dissatisfaction in service interaction situations in a variety of contexts (Roos, 2002; Gremler, 2004). In this context, Gremler (2004, p. 77) points out that “the CIT method has been accepted as an appropriate method for use in service research”.
In the higher education literature, researchers have used CIT to investigate (dis)satisfactory professor/student interactions (Swanson and Davis, 2000; Davis and Swanson, 2001; Frankel and Swanson, 2002; Swanson and Frankel, 2002; Swanson et al., 2005). These authors developed a classification scheme that shows strong similarities to the system developed by service marketing authors such as Bitner et al. (1990, 1994).
CIT is a powerful qualitative research method to collect, analyse, and classify observations of human behaviour that allows researchers to gain valuable insights into phenomena that have not been well documented (Gremler, 2004). It helps reveal perceptions of quality and sources of satisfaction/dissatisfaction based on negative and positive incidents (Edvardsson and Roos, 2001). In this context, a critical incident can be described as any observable human activity that deviates significantly from what is normal or expected (Flanagan, 1954) and that contributes significantly, either positively or negatively, to the phenomenon or activity under study (Bitner et al., 1990). These incidents determine whether an individual leaves a situation satisfied or dissatisfied. Gremler (2004) points out that researchers using CIT are not required to follow a strict set of principles but can use a flexible set of rules that can be adapted to the particular research situation. Thus, in the context of this study, a critical incident is defined as either a positive or a negative interaction between a professor and a student during a lecture that is particularly memorable to the student and that leads to either a positive or a negative disconfirmation of student expectations.
For a CIT study, respondents are asked to recall positive and/or negative incidents relating to the specific experience being studied. CIT reflects the normal way that people think as respondents can tell a story using their own words without being forced into a pre-existing framework (Stauss and Weinlich, 1997). The collected accounts provide researchers with “rich details of firsthand experiences” (Bitner et al., 1994, p. 97). Researchers do not have to develop hypotheses before using CIT as concepts and theories will emerge from the identified patterns in the responses of participants. Thus, CIT is a qualitative research method that is used “primarily for theory development” (Gremler, 2004, p. 77). In this context, Bitner et al. (1990, p. 73) maintain that CIT enables researchers “to increase knowledge of a phenomenon about which relatively little has been documented and/or to describe a real-world phenomenon based on a thorough understanding”.
The research study – collecting CIT data online
Researchers can collect CIT data in several ways. Traditionally, researchers conduct interviews or hand out questionnaires. More recently, the CIT method has also been conducted online using web-based CIT questionnaires (Meuter et al., 2000; Warden et al., 2003). This approach has several benefits: researchers do not have to tape and transcribe CIT interviews or questionnaires as the collected data are already in electronic form. Further, the whole interviewing process may be less stressful and more convenient for respondents as they can fill in the CIT questionnaire either at home or at work in a familiar and non-threatening environment (Wood et al., 2004). Moreover, Edvardsson and Strandvik (2000, p. 83) criticize the traditional CIT method for collecting “top-of-the mind memories of service interactions that are socially acceptable to report”. An online approach can address this concern effectively as the anonymous online situation means that participants are not influenced by an interviewer's appearance, tone of voice, and body language as they could be during CIT interviews. Thus, social desirability bias and especially interviewer/interviewee bias should not occur (Miller and Dickson, 2001; Gunter et al., 2002; Duffy et al., 2005). Further, according to Joinson (2001) and Hanna et al. (2005), respondents are also willing to reveal more personal information and deeper feelings in computer-mediated communication than in traditional face-to-face discussions due to visual anonymity and higher levels of private self-awareness. As respondents are also less inhibited online, they are willing to state their opinions more directly than in a traditional interviewing environment (Tse, 1999; Pincott and Branthwaite, 2000; Sweet, 2001). On the basis of these findings, it was decided to collect the CIT data using an online CIT questionnaire.
The address of the web site hosting the CIT online questionnaire was mentioned in five business and economics education courses with a total of 332 postgraduate students at a large European university. The researchers who carried out the study are affiliated with different universities and had no previous contact with the students. During the five courses, access codes (e.g. “51122”) were handed out to each student, which they had to type in on the first online screen to start the CIT study. These access codes were necessary to make sure that only students of this particular university would fill in the questionnaire and to prevent “random walk-ins” by individuals who were not part of the population of interest but who discovered the web site by chance (Meuter et al., 2000). The access codes also ensured that every student filled in the questionnaire only once. The online questionnaire began by asking respondents to give details regarding age, gender, and course of study.
As mentioned, the main purpose of our study was to record student-professor interactions in the lecture theatre or seminar room that deviated from what students expected in either a positive or a negative way. These incidents had to be memorable enough to be recalled. Thus, students had to think of a specific situation in which they were extremely satisfied or dissatisfied with the teaching experience and the professor. In particular, students were asked the following questions, which were based on the questions used in Bitner et al.'s (1990) CIT study:
- Briefly describe the incident.
- When and where did the incident happen?
- What was done or said during the interaction?
- What resulted that made you feel extremely satisfied or dissatisfied with the professor in the particular situation?
For each question, students could type in their answers in a large textbox. It was decided to ask students to think of both positive and negative critical incidents as Gremler (2004) reports that the majority of CIT studies collect a mix of both incident groups. Respondents could describe up to three positive and/or negative incidents using their own words.
As the incidents may have taken place a long time before data collection, respondents' perceptions may have been reinterpreted or modified (Johnston, 1995). We addressed this concern by asking respondents to recall incidents from within the last six months, following recommendations by authors such as Keaveney (1995) and Sweeney and Lapp (2004). Figure 1 presents a screenshot from the study that shows both the CIT questions and the available textboxes for respondents to answer.
Respondents were then asked several quantitative questions to gain a better understanding of the relevance of the incidents and the subsequent student behaviour. First, respondents could rate the severity of the incident, which we measured with a five-point Likert scale that ran from 1 (incident was not important) to 5 (incident was very important). We also wanted to know whether respondents had told anyone about the incident and, if so, whom. Finally, respondents who had reported the incident to others were asked whether they engaged in positive or negative WOM communication.
Characteristics of sample
Out of the 332 business and economics education students, 96 students took part in the study on a voluntary basis, which equals a response rate of 29 per cent. These respondents reported 164 incidents. Respondents were aged between 19 and 24 years (X=23.2) and slightly more female students (52 per cent) filled in the online CIT questionnaire than male students (48 per cent). There are no clear rules regarding the minimum number of respondents and reported incidents to collect: Gremler (2004) notes that the number of respondents varies significantly, ranging from nine to 3,852. Similarly, he states that the number of usable critical incidents ranged from 22 to 2,505. Lockwood (1994) suggests that CIT studies should involve at least 100 incidents, as this sample size allows researchers to develop reliable categories. Thus, the sample size of this research study is sufficiently large and comparable to earlier exploratory CIT studies (Sweeney and Lapp, 2004). Each student provided between one and four incidents, with an average of 1.7 incidents.
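The headline sample figures follow from a few lines of arithmetic. The sketch below (in Python, purely for illustration; the variable names are ours, not the study's) shows how the 29 per cent response rate and the average of 1.7 incidents per respondent derive from the raw counts reported above.

```python
# Sample arithmetic for the study's headline figures (counts taken from the
# text; variable names are illustrative, not from the original study).
invited = 332        # postgraduate students across the five courses
respondents = 96     # students who took part voluntarily
incidents = 164      # critical incidents reported in total

response_rate = respondents / invited            # ~0.289, i.e. 29 per cent
incidents_per_student = incidents / respondents  # ~1.71, reported as 1.7
```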
Classification of incidents
For researchers to uncover emerging themes or patterns, the collected CIT data have to be interpreted and incidents have to be sorted into groups with similar topics (Keaveney, 1995; Stauss and Weinlich, 1997). According to Bitner et al. (1990, p. 74), the main goal of the necessary content analysis is to make the data “useful for answering the research questions while sacrificing as little detail and comprehensiveness as possible”. By classifying and categorising single incidents into a more general schema, a certain level of abstraction can be reached that is required for further analysis. The incident classification system used for categorising the collected incidents is based on the scheme used in studies by authors such as Swanson and Davis (2000). This classification system was chosen as it offers a useful framework for examining interactions experienced by students that may positively or negatively influence their learning experience. Further, the classification scheme proved to be reliable and valid for investigating the issue of satisfaction/dissatisfaction in a university setting. Finally, a recent study by Swanson et al. (2005) also showed that the classification scheme can be generalised to an international context.
The critical incidents were then further analysed to reveal crucial quality dimensions of professors. Following Johnston's (1995) approach, each incident was numbered and condensed using several keywords and phrases that summed up the student's experience. Two sets of cards were used, one for the descriptions of positive incidents and one for the anecdotes of negative incidents. These incident summaries were then classified using the list of quality dimensions identified by authors such as Andreson (2000), Sander et al. (2000), Lammers and Murphy (2002), Hill et al. (2003), Brown (2004), Swanson et al. (2005) and Voss and Gruber (2006).
As the classification procedure is largely subjective, it was decided to have two researchers familiar with the classification scheme act as judges and code the incidents independently. Incidents were read and sorted until similar incidents were assigned to distinct, meaningful categories. Sorting continued until incidents in one category were more similar to each other than they were to incidents in another category. Disagreements between the judges were discussed and resolved mutually.
The reliability of the coding procedure was assessed in two ways. First, intrajudge reliability, which measures how consistently a coder assigns incidents to a particular category over time (Weber, 1985), was examined by coding the 164 incidents twice over a two-month period. Intrajudge reliability was 94 per cent, exceeding the 80 per cent cutoff suggested by Weber (1985). Second, interjudge reliability, which is the degree to which both judges agree that an incident should be classified into a particular category, was measured by calculating Cohen's κ (Cohen, 1960). Cohen's κ is a conservative reliability statistic that corrects for the likelihood of a coincidental agreement between judges (Bitner et al., 1994) and was found to be 0.82 for the satisfying and 0.83 for the dissatisfying incidents.
In general, interjudge reliabilities above 0.80 are considered to be satisfactory (Ronan and Latham, 1974; Bitner et al., 1990; Keaveney, 1995). Landis and Koch (1977) suggest that a κ higher than 0.6 should be regarded as acceptable and Gremler (2004) reports that the average of the κ mentioned in previous CIT studies is 0.745. Thus, the κ of our research study indicates a high level of interjudge reliability.
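Cohen's κ itself is straightforward to compute: it compares the judges' observed agreement p_o with the agreement p_e expected by chance from each judge's marginal category frequencies, κ = (p_o − p_e)/(1 − p_e). The Python sketch below illustrates the calculation on hypothetical codings of ten incidents into the three incident groups; the codings are invented for illustration and are not the study's data.

```python
# Illustrative computation of Cohen's kappa (Cohen, 1960) for two judges
# sorting incidents into categories. The codings below are hypothetical.
from collections import Counter

def cohens_kappa(codes_a, codes_b):
    """Chance-corrected agreement between two coders."""
    assert len(codes_a) == len(codes_b)
    n = len(codes_a)
    # Observed agreement: proportion of incidents both judges coded alike.
    p_o = sum(a == b for a, b in zip(codes_a, codes_b)) / n
    # Expected chance agreement from each judge's marginal distribution.
    freq_a, freq_b = Counter(codes_a), Counter(codes_b)
    p_e = sum(freq_a[c] * freq_b.get(c, 0) for c in freq_a) / n ** 2
    return (p_o - p_e) / (1 - p_e)

# Hypothetical codings of ten incidents into the three groups (1, 2, 3).
judge1 = [1, 1, 2, 3, 3, 3, 2, 1, 3, 2]
judge2 = [1, 1, 2, 3, 3, 2, 2, 1, 3, 2]
kappa = cohens_kappa(judge1, judge2)  # 9/10 raw agreement, kappa ~0.85
```

Because κ discounts agreement that could have occurred by chance, it is always at most the raw agreement rate, which is why it is considered a conservative statistic.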
Results and discussion
Classification of incidents
The sorting process confirms the three major groups suggested by Swanson and Davis (2000), which account for all satisfactory and dissatisfactory incidents. The fact that no new categories emerged during the sorting and classification process can be considered a good indicator of the high content validity of the applied critical incident classification system (Keaveney, 1995). Together with the described intrajudge and interjudge reliabilities, we can be confident that the classification scheme accurately represents the domain of (un)satisfactory student-professor encounters. The following three incident categories were confirmed:
- Group 1. Professor response to service delivery system failures. This category includes incidents that are directly linked to failures in the core services that students would expect to receive. Typical incidents are delayed services, professors who are not available during office hours, who refuse to answer student questions, and who come late to a scheduled meeting with a student. Professors who do not explain why these incidents occur tend to create dissatisfaction, while thorough explanations are likely to lead to satisfactory student recollections.
- Group 2. Professor response to students' needs and requests. Students sometimes have special requests or desire specific outcomes that suit their needs. The requests could, for example, be the result of a student mistake, such as missing an exam, or a preference for a certain type of assessment (e.g. 100 per cent exam instead of 80 per cent exam and 20 per cent group assignment). Flexibility on the part of the professor would be a potential source of student satisfaction, while unwillingness to accommodate students could cause dissatisfaction.
- Group 3. Unprompted and unsolicited professor action. This category comprises incidents that students do not normally expect from professors but that were memorable either in a positive or a negative way. Possible sources of satisfaction are professors who demonstrate enthusiasm and/or are perceived to be fair. Typical dissatisfactory incidents relate to professors who are unable to control their temper, are impatient with students, or are rude.
The distribution of incidents across the three incident groups is illustrated in Table I.
Out of the 164 answers, more related to negative incidents (95) than to positive ones (69). By far the largest number of both satisfactory and dissatisfactory incidents were categorized in Group 3, with the next largest proportion falling into Group 2, followed by Group 1. This distribution of incidents corroborates previous work by Swanson and Davis (2000) and Davis and Swanson (2001). Illustrative quotes for each category are given in Table II.
Respondents could also rate the importance of the incident. Students regarded 78.1 per cent of the reported incidents as either important or very important (categories four and five of the Likert scale). There were no significant differences between positive and negative incidents. This result is not surprising as the CIT method particularly asks respondents to recall incidents that they regard as critical.
Students were then asked if they told anyone about the incident and engaged in WOM communication: 55.1 per cent of the positive incidents but 75.8 per cent of the negative incidents were reported. This result supports previous research that found that dissatisfied individuals are generally believed to engage in considerably greater WOM than satisfied individuals (Richins, 1983; Schlossberg, 1991). The following table shows to whom respondents reported their experienced incidents (multiple answers were permitted).
As Table III reveals, for both positive and negative incidents, respondents informed mainly fellow students, followed by friends and parents.
Quality dimensions of professors
Most CIT studies focus predominately on the categories that emerge after content analysis and their characteristics (Meuter et al., 2000). As mentioned, this study, however, not only classifies incidents into distinct categories but also explores which quality dimensions of professors are referred to in the incidents. Based on the analysis of the incident summaries, ten attributes were identified, which are presented with definitions in Table IV.
Table V shows the relative frequency of positive and negative incidents for each quality dimension. For example, 16 positive incidents referred to the helpfulness of the professor (64 per cent), whereas this attribute was mentioned in only nine negative incidents (36 per cent). The most frequently mentioned quality dimension for both positive and negative incidents is “teaching skills”, which supports findings by Sander et al. (2000) who stressed the importance of this attribute of good teaching staff. While respondents particularly pointed out the helpfulness, openness and enthusiasm of professors in the positive incidents, they mainly stressed the lack of friendliness and fairness in the negative incidents.
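The relative frequencies in Table V derive directly from the raw incident counts per dimension. As a minimal illustration (in Python, using the helpfulness counts quoted above; the variable names are ours):

```python
# Relative frequency of positive vs negative mentions for one quality
# dimension ("helpfulness"), as reported in the text for Table V.
positive_mentions = 16
negative_mentions = 9
total = positive_mentions + negative_mentions

share_positive = positive_mentions / total  # 0.64, i.e. 64 per cent
share_negative = negative_mentions / total  # 0.36, i.e. 36 per cent
```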
Two attributes of professors were only mentioned in negative incidents: expertise and reliability. Thus, students either only remembered situations in which professors showed a remarkable lack of competence or were particularly unreliable or they just have not experienced professors with outstanding competence and reliability yet. All the other quality dimensions were mentioned in both satisfying and dissatisfying incidents to varying degrees as Figure 2 shows. Illustrative quotes for each professor attribute are shown in Table VI.
Our findings are similar to previous research that indicated the importance of these attributes of professors (Feldmann, 1976; Braskamp et al., 1981; Patrick and Smart, 1998; Willcoxson, 1998; O'Toole et al., 2000; Desai et al., 2001). In particular, Swanson et al. (2005) found that professors should be knowledgeable, empathetic, friendly, helpful, reliable, responsive, and expressive. Similarly, Mersha and Adlakha (1992, p. 39) suggest that the professor's “willingness to correct errors, knowledgeability, thoroughness/accuracy of service and consistency/reliability” are the most important attributes of good service quality for colleges/universities. By contrast, reluctance to correct errors, lack of knowledge, an indifferent or ‘I don't care’ attitude, and rudeness were mentioned as the most important indicators of poor service quality. Faranda and Clarke (2004) stressed the importance of personality factors such as approachability, friendliness, being receptive to student suggestions, sense of humor, and enthusiasm. Hill et al. (2003) reported that students want professors to be knowledgeable, well-organized, encouraging, helpful, sympathetic, and caring about students' individual needs. Students at the beginning of their university life wanted professors to be approachable, to have good teaching skills, and to be knowledgeable, enthusiastic, and organized (Sander et al., 2000). Lammers and Murphy (2002) pointed out that students think highly of professors who are knowledgeable, enthusiastic about their subject, inspiring, and helpful. Andreson (2000) found that students want professors to be enthusiastic, caring, and interested in the students' progress. Research by Brown (2004) and Voss et al. (2007) indicated that competent professors know their subject, are approachable, and are willing to answer questions. They should also show flexibility, be willing to explain things in different ways, and treat their students as individuals.
Further, McElwee and Redman (1993) pointed out that reliability is a factor that significantly influences students' perceptions of service performance. Professors should turn up to classes on time and keep records of student performance. Empathy is also an attribute of teaching staff that authors such as Elton (1996) found to be important to students. Finally, the important role of expertise supports findings by authors such as Ramsden (1991), Husbands (1998), Patrick and Smart (1998) and Pozo-Munoz et al. (2000), who also stressed the importance of this quality dimension.
Limitations and directions for further research
Like all research studies, this project has several limitations. First of all, as the CIT is a qualitative research method, the findings presented here are tentative in nature and are not meant to be generalisable (Meuter et al., 2000). The findings, however, provide a first valuable insight into the nature of the phenomenon under investigation – the analysis of satisfying and dissatisfying incidents in higher education and the development of quality dimensions of professors. Further research should deepen knowledge of this topic.
As the research study involved postgraduate students from one university, the results cannot be generalised to the student population as a whole. We are aware that we only had access to one group of students (business and economics education students) at one university. However, generalisability can never be achieved in any single study; it is an empirical question that requires comparisons across different studies (Greenberg, 1987). Thus, what is now needed is similar research with different sample populations. Results from these studies could then be compared and differences and similarities revealed.
Researchers interested in the measurement of service quality in higher education should also take the perspectives of other stakeholders (e.g. students' families, the government) into consideration (Rowley, 1997). Thus, fellow researchers could investigate the (deviations of) expectations of other stakeholder groups. Further research, for example, could investigate whether student expectations differ greatly from what professors believe students want. In the services literature, Mattila and Enz (2002) reported a large gap between customer and employee perceptions regarding service quality expectations. Thus, fellow researchers could hand out CIT questionnaires to both students and their professors or ask both parties to fill in an online questionnaire. Researchers could then compare the results to highlight different views. Insights gained should help make professors aware of differing perceptions and identify areas for appropriate training. In the context of service quality in higher education, initial research results already indicate that a perception gap exists (Swanson and Frankel, 2002). Further, Shank et al. (1995) found that service delivery expectations are lower among professors than among their students.
Further research could also explore gender differences with regard to student-professor interactions in the lecture theatre or seminar room that deviate from student expectations in either a positive or a negative way. Previous consumer research studies have already identified differences between male and female information processing and decision-making styles (Iacobucci and Ostrom, 1993). Iacobucci and Ostrom (1993), however, also propose that gender differences in expectations exist only in the short term and would even out in the long term, i.e. men and women would come to expect the same things from the service provider (core and relational aspects), instead of women prioritizing relational aspects and men focusing on the core aspects of service delivery. Thus, it would be interesting to carry out a longitudinal study to explore whether gender differences truly persist in the long term or are just a short-term phenomenon.
Implications and conclusions
This paper explored the online application of the established CIT to investigate student-professor encounters in higher education. A total of 96 students were asked to think about deviations of expectations and to recall positive and negative interactions with professors. Based on the collected data, satisfactory and dissatisfactory incidents were categorised and quality dimensions of professors were revealed. The results of the critical incident sorting process support the classification system previously developed by Swanson and Davis (2000) that uses three major groups to thoroughly represent the domain of (un)satisfactory student-professor encounters. The results of the CIT study also revealed ten quality dimensions of professors, corroborating previous research in this area.
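The paper reports the critical incident sorting process only narratively, but CIT studies conventionally verify such sorting by having two judges categorise the incidents independently and computing Cohen's (1960) kappa, interpreted against the Landis and Koch (1977) benchmarks; both works appear in the reference list. As a purely illustrative sketch (the category labels and incident assignments below are invented for demonstration and are not taken from the study's data):

```python
from collections import Counter

def cohens_kappa(judge_a, judge_b):
    """Cohen's (1960) kappa: agreement between two judges sorting the
    same incidents into nominal categories, corrected for the agreement
    expected by chance from each judge's marginal frequencies."""
    assert len(judge_a) == len(judge_b)
    n = len(judge_a)
    observed = sum(a == b for a, b in zip(judge_a, judge_b)) / n
    freq_a, freq_b = Counter(judge_a), Counter(judge_b)
    expected = sum(freq_a[c] * freq_b.get(c, 0) for c in freq_a) / n ** 2
    return (observed - expected) / (1 - expected)

# Hypothetical sorting of ten incidents into three placeholder groups.
a = ["course", "professor", "professor", "service", "course",
     "professor", "service", "course", "professor", "professor"]
b = ["course", "professor", "service", "service", "course",
     "professor", "service", "course", "professor", "course"]
kappa = cohens_kappa(a, b)  # roughly 0.70 here
# Landis and Koch (1977) read 0.61-0.80 as "substantial" agreement.
```

Two of the ten illustrative incidents are sorted differently, giving 80 per cent raw agreement but a lower kappa once chance agreement is removed, which is precisely why CIT studies report kappa rather than raw percentages.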
Such knowledge of (deviations of) student expectations as a form of student feedback should also be beneficial for curriculum development (McCuddy et al., 2008). Previous research (Rolfe, 2002) already indicated that students frequently criticise professors for offering courses that are too theory-laden and that do not pay sufficient attention to vocational aspects. Thus, professors who are open to suggestions and criticism (“openness”) should cover topics in the curriculum that are beneficial for students in their preparation for their profession. Professors could, for example, provide assignments that are directly relevant to work and use thought-provoking case studies from the business world. Professors could also stress linkages between theory and practice more and invite guest speakers who are eager to share valuable experiences with students.
The revealed importance of personality factors underscores the strong need for professors to maintain personal interactions with students, build strong relationships, and treat students with respect. Students apparently desire professors who sustain the human interface within marketing education (Faranda and Clarke, 2004) and who get along well with them (Foote et al., 2003). Fortunately, the role of creating rapport with students has been receiving increasing attention in the marketing education (Faranda and Clarke, 2004) and (services) marketing literature (Gremler and Gwinner, 2008) recently.
Universities might also consider the introduction of “student contracts” (Rowley, 1997) or “student satisfaction guarantees” (McCollough and Gremler, 1999a, b; Gremler and McCollough, 2002; Lawrence and McCollough, 2004) to manage student expectations effectively. A student satisfaction guarantee, for example, could tangibilise the offered educational services, signal the quality of the educational experience to current students, and help attract new students. Previous research by McCollough and Gremler (1999a) shows that satisfaction guarantees can positively influence student confidence in professors and help set clear expectations that both parties involved, students and professors, will work hard. Satisfaction guarantees used as a pedagogical device set performance standards and help increase the accountability of both professors and students. They also have a positive impact on student evaluations of professors and courses without losing rigour in the classroom (Gremler and McCollough, 2002). In this connection, the CIT method helps professors identify satisfactory and dissatisfactory deviations of expectations from a student's point of view, and the satisfaction guarantee could, for example, cover the revealed quality dimensions of professors.
Professors could also directly ask students on the first day of the course to list everything they expect from the course and the teaching staff regarding course operation and learning outcomes. This exercise could help professors adjust unrealistic expectations and review learning objectives. At the end of term, professors could examine if the course has met the goals of the course (Appleton-Knapp and Krentler, 2006). This procedure could also be beneficial for reducing the probability of students experiencing dissatisfactory student-professor encounters.
As partners or co-creators of value in higher education, students can expect to receive a good service (i.e. good quality teaching). This good service, however, should always be seen as a “means to an end”, with the end being the creation of more knowledgeable and capable individuals. Thus, professors should give students a beneficial learning experience and valuable student-professor interactions, but it would not be in the interest of all stakeholders involved to allow students to dictate, for example, what grades they should receive, even if students want that (Clayson and Haley, 2005). We therefore agree with Desai et al. (2001, p. 143), who posit that professors can be more service oriented “without giving the store away”.
This study shows that the online application of the CIT method is a useful tool for examining student-professor encounters in higher education. Future researchers could build on this work by testing the online application of the CIT method in further investigations of higher education services.
Figure 1 Screenshot of online CIT questions
Figure 2 Continuum of quality dimensions of professors
Table I Classification by type of incident outcome
Table II Illustrative quotes for critical incident groups
Table III Receiver of WOM communication
Table IV Definitions of quality dimensions
Table V Frequency of positive and negative incidents
Table VI Examples for quality dimensions of professors
Andreson, L. (2000), "Teaching development in higher education as scholarly practice: a reply to Rowland et al. turning academics into teachers", Teaching in Higher Education, Vol. 5 No.1, pp.23-31.
Appleton-Knapp, S.L., Krentler, K.A. (2006), "Measuring student expectations and their effects on satisfaction: the importance of managing student expectation", Journal of Marketing Education, Vol. 28 No.3, pp.254-64.
Bitner, M.J., Booms, B.H., Mohr, L.A. (1994), "Critical service encounters: the employee's viewpoint", Journal of Marketing, Vol. 58 No.4, pp.95-106.
Bitner, M.J., Booms, B.H., Tetreault, M.S. (1990), "The service encounter: diagnosing favorable and unfavorable incidents", Journal of Marketing, Vol. 54 No.1, pp.71-84.
Braskamp, L.A., Ory, J.C., Pieper, D.M. (1981), "Student written comments: dimensions of instructional quality", Journal of Educational Psychology, Vol. 73 pp.65-70.
Brown, N. (2004), "What makes a good educator? The relevance of meta programmes", Assessment & Evaluation in Higher Education, Vol. 29 No.5, pp.515-33.
Browne, B., Kaldenberg, D., Browne, W., Brown, D. (1998), "Student as customers: factors affecting satisfaction and assessments of institutional quality", Journal of Marketing for Higher Education, Vol. 8 No.3, pp.1-14.
Chebat, J-C., Kollias, P. (2000), "The impact of empowerment on customer contact employees' roles in service organizations", Journal of Service Research, Vol. 3 No.1, pp.66-81.
Clayson, D.E., Haley, D.A. (2005), "Marketing models in education: students as customers, products, or partners", Marketing Education Review, Vol. 15 No.1, pp.1-10.
Clewes, D. (2003), "A student-centred conceptual model of service quality in higher education", Quality in Higher Education, Vol. 9 No.1, pp.69-85.
Cohen, J. (1960), "A coefficient of agreement for nominal scales", Educational and Psychological Measurement, Vol. 20 No.1, pp.37-46.
Cooper, P. (2007), "Knowing your ‘lemons’: quality uncertainty in UK higher education", Quality in Higher Education, Vol. 13 No.1, pp.19-29.
Czepiel, J.A., Solomon, M.R., Surprenant, C.F., Gutman, E.G. (1986), "The service encounter: an overview", in Czepiel, J.A., Solomon, M.R., Surprenant, C.F. (Eds), The Service Encounter: Managing Employee/Customer Interaction in Service Business, Lexington Books, Lexington, MA, pp.3-16.
Davis, J.C., Swanson, S.T. (2001), "Navigating satisfactory and dissatisfactory classroom incidents", Journal of Education for Business, Vol. 76 No.5, pp.245-50.
Desai, S., Damewood, E., Jones, R. (2001), "Be a good teacher and be seen as a good teacher", Journal of Marketing Education, Vol. 23 No.2, pp.136-44.
DeShields, O.W. Jr, Kara, A., Kaynak, E. (2005), "Determinants of business student satisfaction and retention in higher education: applying Herzberg's two-factor theory", International Journal of Service Industry Management, Vol. 19 No.2, pp.128-39.
Duffy, B., Smith, K., Terhanian, G., Bremer, J. (2005), "Comparing data from online and face-to-face surveys", International Journal of Market Research, Vol. 47 No.6, pp.615-39.
Eagle, L., Brennan, R. (2007), "Are students customers? TQM and marketing perspectives", Quality Assurance in Education, Vol. 15 No.1, pp.44-60.
Edvardsson, B., Roos, I. (2001), "Critical incident techniques – towards a framework for analysing the criticality of critical incidents", International Journal of Service Industry Management, Vol. 12 No.3, pp.251-68.
Edvardsson, B., Strandvik, T. (2000), "Is a critical incident critical for a customer relationship?", Managing Service Quality, Vol. 10 No.2, pp.82-91.
Elliott, K.M., Shin, D. (2002), "Student satisfaction: an alternative approach to assessing this important concept", Journal of Higher Education Policy and Management, Vol. 24 No.2, pp.197-209.
Elton, L. (1996), Criteria for Teaching Competence and Teaching Excellence in Higher Education, Falmer Press, London.
Faranda, W.T., Clarke, I. (2004), "Student observations of outstanding teaching: implications for marketing education", Journal of Marketing Education, Vol. 26 No.3, pp.271-81.
Feldmann, K.A. (1976), "The superior college teacher from the students' view: a review and analysis", Research in Higher Education, Vol. 6 No.3, pp.223-74.
Flanagan, J.C. (1954), "The critical incident technique", Psychological Bulletin, Vol. 51 No.4, pp.327-58.
Foote, D.A., Harmon, S.K., Mayo, D.T. (2003), "The impacts of instructional style and gender role attitude on students' evaluation of faculty", Marketing Education Review, Vol. 13 No.2, pp.9-19.
Frankel, R., Swanson, S.R. (2002), "The impact of faculty-student interactions on teaching behaviour: an investigation of perceived student encounter orientation, interactive confidence, and interactive practice", Journal of Education for Business, Vol. 78 No.2, pp.85-91.
Gibbs, P. (2008), "Marketers and educationalists – two communities divided by time?", International Journal of Educational Management, Vol. 22 No.3, pp.269-78.
Greenberg, J. (1987), "The college sophomore as guinea pig: setting the record straight", The Academy of Management Review, Vol. 12 No.1, pp.157-9.
Gremler, D.D. (2004), "The critical incident technique in service research", Journal of Service Research, Vol. 7 No.1, pp.65-89.
Gremler, D.D., Gwinner, K.P. (2008), "Rapport-building behaviors used by retail employees", Journal of Retailing, Vol. 84 No.3, pp.308-24.
Gremler, D.D., McCollough, M.A. (2002), "Student satisfaction guarantees: an empirical examination of attitudes, antecedents, and consequences", Journal of Marketing Education, Vol. 24 No.2, pp.150-60.
Gunter, B., Nicholas, D., Huntington, P., Williams, P. (2002), "Online versus offline research: implications for evaluating digital media", Aslib Proceedings, Vol. 54 No.4, pp.229-39.
Guolla, M. (1999), "Assessing the teaching quality to student satisfaction relationship: applied customer satisfaction research in the classroom", Journal of Marketing Theory and Practice, Vol. 7 No.3, pp.87-97.
Hanna, R.C., Weinberg, B., Dant, R.P., Berger, P.D. (2005), "Do internet-based surveys increase personal self-disclosure?", Database Marketing & Customer Strategy Management, Vol. 12 No.4, pp.342-56.
Harnash-Glezer, M., Meyer, J. (1991), "Dimensions of satisfaction with collegiate education", Assessment & Evaluation in Higher Education, Vol. 16 No.2, pp.95-107.
Hartline, M.D., Ferrell, O.C. (1996), "The management of customer-contact service employees: an empirical investigation", Journal of Marketing, Vol. 60 pp.52-70.
Harvey, L., Green, D. (1993), "Defining quality", Assessment & Evaluation in Higher Education, Vol. 18 No.1, pp.9-34.
Harvey, L., Knight, P.T. (1996), Transforming Higher Education, The Society for Research into Higher Education and Open University Press, Buckingham.
Helgesen, Ø., Nesset, E. (2007), "What accounts for students' loyalty? Some field study evidence", International Journal of Educational Management, Vol. 21 No.2, pp.126-43.
Hennig-Thurau, T., Langer, M.F., Hansen, U. (2001), "Modeling and managing student loyalty: an approach based on the concept of relationship quality", Journal of Service Research, Vol. 3 No.4, pp.331-44.
Hill, Y., Lomas, L., MacGregor, J. (2003), "Students' perceptions of quality in higher education", Quality Assurance in Education, Vol. 11 No.1, pp.15-20.
Husbands, C.T. (1998), "Implications for the assessment of the teaching competence of staff in higher education of some correlates of students' evaluations of different teaching styles", Assessment & Evaluation in Higher Education, Vol. 23 No.2, pp.117-39.
Iacobucci, D., Ostrom, A. (1993), "Gender differences in the impact of core and relational aspects of services on the evaluation of service encounters", Journal of Consumer Psychology, Vol. 2 No.3, pp.257-86.
Johnston, R. (1995), "The determinants of service quality: satisfiers and dissatisfiers", International Journal of Service Industry Management, Vol. 6 No.5, pp.53-71.
Joinson, A.N. (2001), "Self-disclosure in computer-mediated communication: the role of self-awareness and visual anonymity", European Journal of Social Psychology, Vol. 31 pp.177-92.
Joseph, M., Yakhou, M., Stone, G. (2005), "An educational institution's quest for service quality: customers' perspective", Quality Assurance in Education, Vol. 13 No.1, pp.66-82.
Keaveney, S.M. (1995), "Customer switching behaviour in service industries: an exploratory study", Journal of Marketing, Vol. 59 No.2, pp.71-82.
Lammers, W., Murphy, J. (2002), "A profile of teaching techniques used in the university classroom", Active Learning in Higher Education, Vol. 3 pp.54-67.
Landis, J.R., Koch, G.G. (1977), "The measurement of observer agreement for categorical data", Biometrics, Vol. 33 No.1, pp.159-74.
Lawrence, J.J., McCollough, M.A. (2004), "Implementing total quality management in the classroom by means of student satisfaction guarantees", Total Quality Management, Vol. 15 No.2, pp.235-54.
Lockwood, A. (1994), "Using service incidents to identify quality improvements points", International Journal of Contemporary Hospitality Management, Vol. 6 No.1/2, pp.75-80.
McCollough, M.A., Gremler, D.D. (1999a), "Guaranteeing student satisfaction: an exercise in treating students as customers", Journal of Marketing Education, Vol. 21 No.2, pp.118-30.
McCollough, M.A., Gremler, D.D. (1999b), "Student satisfaction guarantees: an empirical investigation of student and faculty attitudes", Marketing Education Review, Vol. 9 No.2, pp.53-64.
McCuddy, M.R., Pinar, M., Gingerich, E.F.R. (2008), "Using student feedback in designing student-focused curricula", International Journal of Educational Management, Vol. 22 No.7, pp.611-37.
McElwee, G., Redman, T. (1993), "Upward appraisal in practice", Education+Training, Vol. 35 No.2, pp.27-31.
Marzo-Navarro, M., Pedraja-Iglesias, M., Rivera-Torres, M.P. (2005a), "A new management element for universities: satisfaction with the offered courses", International Journal of Educational Management, Vol. 19 No.6, pp.505-26.
Marzo-Navarro, M., Pedraja-Iglesias, M., Rivera-Torres, M.P. (2005b), "Determinants of satisfaction with university summer courses", Quality in Higher Education, Vol. 11 No.3, pp.239-49.
Marzo-Navarro, M., Pedraja-Iglesias, M., Rivera-Torres, M.P. (2005c), "Measuring customer satisfaction in summer courses", Quality Assurance in Education, Vol. 13 No.1, pp.53-65.
Mattila, A.S., Enz, C. (2002), "The role of emotions in service encounters", Journal of Service Research, Vol. 4 No.4, pp.268-77.
Mavondo, F.T., Tsarenko, Y., Gabbott, M. (2004), "International and local student satisfaction: resources and capabilities perspective", Journal of Marketing for Higher Education, Vol. 14 No.1, pp.41-60.
Mersha, T., Adlakha, V. (1992), "Attributes of service quality: the consumers' perspective", International Journal of Service Industry Management, Vol. 3 No.3, pp.34-45.
Meuter, M.L., Ostrom, A.L., Roundtree, R.I., Bitner, M.J. (2000), "Self-service technologies: understanding customer satisfaction with technology-based service encounters", Journal of Marketing, Vol. 64 No.3, pp.50-64.
Miller, T.W., Dickson, P.R. (2001), "Online market research", International Journal of Electronic Commerce, Vol. 5 No.3, pp.139-67.
Oldfield, B.M., Baron, S. (2000), "Student perceptions of service quality in a UK university business and management faculty", Quality Assurance in Education, Vol. 8 No.2, pp.85-95.
O'Neill, M.A., Palmer, A. (2004), "Importance-performance analysis: a useful tool for directing continuous quality improvement in higher education", Quality Assurance in Education, Vol. 12 No.1, pp.39-52.
O'Toole, D., Spinelli, M., Wetzel, J. (2000), "The important learning dimensions in the school of business: a survey of students and faculty", Journal of Education for Business, Vol. 75 pp.338-42.
Patrick, J., Smart, R. (1998), "An empirical evaluation of teacher effectiveness: the emergence of three critical factors", Assessment & Evaluation in Higher Education, Vol. 23 No.2, pp.165-78.
Pincott, G., Branthwaite, A. (2000), "Nothing new under the sun?", International Journal of Market Research, Vol. 42 No.2, pp.137-55.
Pozo-Munoz, C., Rebolloso-Pacheco, E., Fernandez-Ramirez, B. (2000), "The ‘ideal teacher’. Implications for student evaluation of teacher effectiveness", Assessment & Evaluation in Higher Education, Vol. 25 No.3, pp.253-63.
Ramsden, P. (1991), "A performance indicator of teaching quality in higher education: the course experience questionnaire", Studies in Higher Education, Vol. 16 No.2, pp.129-50.
Richins, M.L. (1983), "Negative word-of-mouth by dissatisfied consumers: a pilot study", Journal of Marketing, Vol. 47 No.1, pp.68-78.
Rolfe, H. (2002), "Students demands and expectations in an age of reduced financial support: the perspectives of lecturers in four English universities", Journal of Higher Education Policy and Management, Vol. 24 No.2, pp.171-82.
Ronan, W.W., Latham, G.P. (1974), "The reliability and validity of the critical incident technique: a closer look", Studies in Personnel Psychology, Vol. 6 No.1, pp.53-64.
Roos, I. (2002), "Methods of investigating critical incidents", Journal of Services Research, Vol. 4 No.3, pp.193-204.
Rowley, J. (1997), "Beyond service quality dimensions in higher education and towards a service contract", Quality Assurance in Education, Vol. 5 No.1, pp.7-14.
Sander, P., Stevenson, K., King, M., Coates, D. (2000), "University students' expectations of teaching", Studies in Higher Education, Vol. 25 No.3, pp.309-23.
Schertzer, C.B., Schertzer, S.M.B. (2004), "Student satisfaction and retention: a conceptual model", Journal of Marketing for Higher Education, Vol. 14 No.1, pp.79-91.
Schlossberg, H. (1991), "Customer satisfaction: not a fad, but a way of life", Marketing News, Vol. 25 No.20, pp.18-21.
Shank, M.D., Walker, M., Hayes, T. (1995), "Understanding professional service expectations: do we know what our students expect in a quality education?", Journal of Professional Services Marketing, Vol. 13 No.1, pp.71-83.
Srikanthan, G., Dalrymple, J.F. (2007), "A conceptual overview of a holistic model for quality in higher education", International Journal of Educational Management, Vol. 21 No.3, pp.173-93.
Stauss, B., Weinlich, B. (1997), "Process-oriented measurement of service quality – applying the sequential incident technique", European Journal of Marketing, Vol. 31 No.1, pp.33-55.
Svensson, G., Wood, G. (2007), "Are university students really customers? When illusion may lead to delusion for all!", International Journal of Educational Management, Vol. 21 No.1, pp.17-28.
Swanson, S.R., Davis, J.C. (2000), "A view from the aisle: classroom successes, failures and recovery strategies", Marketing Education Review, Vol. 10 No.2, pp.17-25.
Swanson, S.R., Frankel, R. (2002), "A view from the podium: classroom successes, failures, and recovery strategies", Marketing Education Review, Vol. 12 No.2, pp.25-35.
Swanson, S.R., Frankel, R., Sagan, M. (2005), "Exploring the impact of cultural differences", Marketing Education Review, Vol. 15 No.3, pp.37-48.
Sweeney, J.C., Lapp, W. (2004), "Critical service quality encounters on the web: an exploratory study", Journal of Services Marketing, Vol. 18 No.4, pp.276-89.
Sweet, C. (2001), "Designing and conducting virtual focus groups", Qualitative Market Research, Vol. 4 No.3, pp.130-5.
Tse, A.C.B. (1999), "Conducting electronic focus group discussions among Chinese respondents", Journal of the Market Research Society, Vol. 41 No.4, pp.407-15.
Vargo, S.L., Lusch, R.F. (2004), "Evolving to a new dominant logic for marketing", Journal of Marketing, Vol. 68 pp.1-17.
Vargo, S.L., Lusch, R.F. (2006), "Service-dominant logic: what it is, what it is not, what it might be", in Lusch, R.F., Vargo, S.L. (Eds), The Service-Dominant Logic of Marketing: Dialog, Debate and Directions, M.E. Sharpe, Armonk, NY, pp.43-56.
Voss, R., Gruber, T. (2006), "The desired teaching qualities of lecturers in higher education – a means end analysis", Quality Assurance in Education, Vol. 14 No.3, pp.217-42.
Voss, R., Gruber, T., Szmigin, I. (2007), "Service quality in higher education: the role of student expectations", Journal of Business Research, Vol. 60 No.9, pp.949-59.
Warden, C.A., Liu, T.-C., Huang, C.-T., Lee, C.-H. (2003), "Service failures away from home: benefits in intercultural service encounters", International Journal of Service Industry Management, Vol. 14 No.4, pp.436-57.
Weber, R.P. (1985), Basic Content Analysis, Sage, London.
Wiers-Jenssen, J., Stensaker, B., Grogaard, J.B. (2002), "Student satisfaction: towards an empirical deconstruction of the concept", Quality in Higher Education, Vol. 8 No.2, pp.183-95.
Willcoxson, L. (1998), "The impact of academics' learning and teaching preferences on their teaching practice: a pilot study", Studies in Higher Education, Vol. 23 No.1, pp.59-70.
Wood, R.T.A., Griffiths, M.D., Eatough, V. (2004), "Online data collection from video game players: methodological issues", CyberPsychology & Behavior, Vol. 7 No.5, pp.511-8.
Zeithaml, V.A., Parasuraman, A., Berry, L.L. (1990), Delivering Quality Service: Balancing Customer Perceptions and Expectations, The Free Press, New York, NY.
About the authors
Roediger Voss is a Professor of Business Administration at the HWZ University of Applied Sciences of Zurich, Center for Strategic Management. He received his PhD from the University of Education Ludwigsburg. His research interests include marketing of higher education, services marketing, and consumer behaviour. His work has been published and/or is forthcoming in journals such as Journal of Business Research, Journal of Marketing Management, Journal of Services Marketing, International Journal of Public Sector Management, Qualitative Market Research, Journal for Quality Assurance in Education, and Management Services.
Thorsten Gruber is a Lecturer in Marketing in the Manchester Business School, University of Manchester. Prior to that, he was engaged in postdoctoral research at the Birmingham Business School, University of Birmingham and a part-time Visiting Lecturer at the University of Education Ludwigsburg. He received his PhD and MBA from the University of Birmingham. His research interests include consumer complaining behaviour, services marketing, and the development of qualitative online research methods. His work has been published and/or is forthcoming in journals such as Journal of Business Research, Industrial Marketing Management, Journal of Marketing Management, Journal of Services Marketing, International Journal of Public Sector Management, Journal of Business and Industrial Marketing, Managing Service Quality, Qualitative Market Research, Journal of Product and Brand Management, Journal for Quality Assurance in Education, and Management Services. Thorsten Gruber is the corresponding author and can be contacted at: email@example.com
Alexander Reppel (University of Wuppertal, Germany), MBA International Business (University of Birmingham), PhD Commerce (University of Birmingham), is a Lecturer in Marketing at Royal Holloway, University of London. Alexander is a full member of the Academy of Marketing. His main research interests are in relationship marketing in consumer markets, marketing ethics, and consumer data management practices. Alexander Reppel is also involved in the development of innovative online research methods, such as interviewer- and non-interviewer-based online laddering techniques. He has published in journals such as the European Journal of Marketing, Industrial Marketing Management, Journal of Marketing Management, International Journal of Service Industry Management, Journal of Product & Brand Management, and Qualitative Market Research, among others.