Greg J. Sears, Sprott School of Business, Carleton University, Ottawa, Canada
Haiyan Zhang, Kenexa High Performance Institute, Minneapolis, Minnesota, USA
Willi H. Wiesner, DeGroote School of Business, McMaster University, Hamilton, Canada
Rick D. Hackett, DeGroote School of Business, McMaster University, Hamilton, Canada
Yufei Yuan, DeGroote School of Business, McMaster University, Hamilton, Canada
Purpose – Based on theories of media richness and procedural justice, the authors aim to examine the influence of videoconferencing (VC) technology on applicant reactions and interviewer judgments in the employment interview, the most commonly used employee selection device.
Design/methodology/approach – MBA students participated in simulated VC and face-to-face (FTF) interviews. Applicant perceptions of procedural justice and interviewer characteristics were collected. Interviewers provided ratings of affect toward the applicant, perceived applicant competence, overall interview performance, as well as an overall hiring recommendation.
Findings – Applicants perceived VC interviews as offering less of a chance to perform and as yielding less selection information. They also viewed VC interviews as less job-related than FTF interviews and had significantly less favorable evaluations of their interviewer (on personableness, trustworthiness, competence, and physical appearance) in VC interviews. Finally, applicants in VC interviews received lower ratings of affect (likeability) and lower interview scores, and were less likely to be recommended for the position.
Research limitations/implications – The authors' findings suggest that VC technology can adversely affect both applicant reactions and interviewer judgments. They propose several precautionary steps to help minimize the risks associated with conducting VC interviews.
Originality/value – The authors extend prior research concerning the use of VC interviews by directly assessing applicant perceptions of both procedural justice and of interviewer characteristics associated with the probability that job offers will be accepted. They also add to the literature in showing that VC interviews tend to result in less favorable evaluations of applicants than FTF interviews.
Personnel selection; Employment interview; Videoconferencing; Applicant reactions; Interviewer judgments; Internet technology; Employment; Video; Interviews.
Emerald Group Publishing Limited
Over the last decade, the use of information technology in human resource management practices has soared in organizations (Viswesvaran, 2003; Gainey and Klaas, 2008; Cascio and Aguinis, 2011). Perhaps nowhere is this trend more apparent than in employee recruitment and selection, where managers and HR practitioners have faced mounting pressure to expand the scope of these activities while also optimizing efficiency and containing costs (Chapman and Webster, 2003; Sackett and Lievens, 2008). In response to these demands, many organizations have adopted computer and web-based technology in a number of their front-line recruitment activities, including posting job advertisements, collecting and managing applicant information (e.g. resumes), and conducting preliminary screening (Chapman and Webster, 2003).
Echoing this trend, a growing number of organizations have begun employing videoconference (VC) interviews as a supplement or alternative to traditional, face-to-face (FTF) interviews (Chapman and Rowe, 2001; Huffcutt and Culbertson, 2010). Relative to face-to-face interviews, VC interviews enable organizations to expand applicant outreach to more geographically dispersed regions, to more efficiently handle large and increasing volumes of applicants, and to realize substantial savings with respect to applicant and recruiter travel costs (Chapman, 1999; Chapman and Webster, 2003). Despite its increasing usage, very limited research has investigated the impact of VC technology on applicant perceptions and interviewer judgments in the employment interview.
To help fill this void, we examined applicants' perceptions of procedural justice and interviewer characteristics in FTF versus VC interviews. While applicant reactions to VC interviews have been studied (e.g. Chapman et al., 2003; Straus et al., 2001), our investigation is unique in examining the three dimensions of procedural justice (Bauer et al., 2001; Gilliland, 1993) most relevant to VC interviews (i.e. job-relatedness, chance to perform, and selection information). Furthermore, we focused on characteristics most strongly associated with applicant attraction outcomes in our comparison of FTF and VC interviewing on applicant perceptions of the interviewer (e.g. Chapman et al., 2005).
To help resolve conflicting findings in the literature, we also compared interviewers' evaluations of applicant performance in VC and FTF interviews. While some studies suggest that applicants receive comparable evaluations in both interview media (Straus et al., 2001), others find that applicants fare better in VC interviews (Chapman and Rowe, 2001; Chapman and Webster, 2001). We add to these findings by collecting interviewer judgments concerning: affect toward the interviewee, interviewee competence, overall interviewee performance, and overall hiring recommendation.
To summarize, we extend the research on applicant reactions and interviewer evaluations in the VC interview by incorporating a broader range of evaluative criteria than is typical of past studies. Given conflicting results, coupled with recent advances in – and enhanced user familiarity with – VC technology (e.g. de Lind van Wijngaarden et al., 2010; Ham, 2009; Weekes, 2008), applicant reactions to – and interviewer judgments from – VCs warrant further, more comprehensive evaluation.
In the following sections key differences between VC and FTF interviews are highlighted. Hypotheses are then presented with respect to: applicant perceptions of procedural justice; applicant perceptions of interviewer characteristics; and interviewer evaluations of applicants.
VC and FTF interviews
There are at least three meaningful distinctions between VC and FTF interviews that are likely to influence both applicant reactions and interviewer judgments. Firstly, VC interviews often display a noticeable delay or lack of synchronization between audio and video signals due to signal compression (Angiolillo et al., 1997; Mayer-Patel, 2007). This results in changes to the surface structure of conversations, such as increased “turn taking”, fewer interruptions, and a lengthening of the time a participant speaks in any one exchange (O'Conaill et al., 1993; Sellen, 1995). Conversations become less fluid, with applicants reporting difficulty in both regulating and understanding them (Straus et al., 2001).
Media richness theory also highlights differences across interview modes in that communication technologies differ in their capability to transmit information (Daft and Lengel, 1986). For example, although both VC and FTF interviews transmit visual and aural data, communication is less rich in the former, resulting in lost opportunities to observe applicants' nonverbal behaviors, such as eye contact and body language. This is an important difference since nonverbal cues influence interviewer evaluations in FTF interviews (e.g. Arvey and Campion, 1982; DeGroot and Motowidlo, 1999; Liden et al., 1993), yet these cues are likely to be less pronounced in VC interviews (Kroeck and Magnusen, 1997).
Finally, in VC interviews, only the torso and head of both the interviewer and applicant are typically visible. Since physical appearance can appreciably influence interpersonal judgments from the interview (Barrick et al., 2009; DeGroot and Motowidlo, 1999; Hosoda et al., 2003) it is likely that the transmission of restricted images in VC interviews alters the evaluation process relative to a FTF context (e.g. Storck and Sproull, 1995).
In summary, relative to FTF assessments, a VC interview is characterized by reduced clarity and immediacy of verbal exchanges, restricted transmission of nonverbal cues, and truncated images of both parties. VC interviews are therefore likely to alter applicant and interviewer perceptions of each other, and of the interview process (e.g. Chapman and Rowe, 2001; Chapman et al., 2003; Straus et al., 2001).
Applicant perceptions of procedural justice
Applicant reactions to the interview influence both their view of the organization and their likelihood of accepting a job offer (e.g. Ryan and Ployhart, 2000; Truxillo and Bauer, 2011). Although applicant perceptions of procedural justice are important to understanding the processes involved (Gilliland, 1993; Hausknecht et al., 2004; Ployhart and Ryan, 1998), only a few studies have assessed whether procedural justice perceptions differ by interview media. For example, Chapman et al. (2003) found that FTF interviews were typically seen as being fairer than those conducted by telephone or via VC. Building on this finding, we compared the perceived fairness of VC and FTF interviews in terms of the three specific components of procedural justice most strongly related to applicant reactions to selection processes (Celani et al., 2008; Hausknecht et al., 2004; Truxillo and Bauer, 2011) namely, the chance to perform, job relatedness, and selection information (cf. Gilliland, 1993).
Chance to perform refers to being given adequate opportunity to demonstrate one's knowledge, skills, and abilities (Gilliland, 1993). Importantly, constraints on the fluidity and richness of communications in VC interviews relative to FTF interviews may hamper applicants' ability to fully express themselves verbally and non-verbally (Sellen, 1995; Straus et al., 2001), resulting in lowered perceptions of chance to perform.
Job relatedness refers to the extent to which job-relevant information is assessed (Gilliland, 1993). Since VC interviews are relatively novel, some less “tech savvy” applicants may perceive more “tech savvy” individuals as gaining an unfair competitive advantage with VC interviews, especially when technology is not central to the job opening. They may also view the VC interview as less job-related in that, relative to FTF interviews, it is more difficult to assess the “soft skills” (e.g. interpersonal) that can have a strong influence on interviewer evaluations of candidates (e.g. Huffcutt et al., 2001).
Selection information involves the information, communication and explanations offered to applicants concerning the selection process (Gilliland, 1993). Central to applicant perceptions of selection information is their level of familiarity with the selection tool (Celani et al., 2008; Gilliland, 1993). Indeed, Arvey and Sackett (1993) suggest that applicants unfamiliar with a selection method may be more inclined to attribute their performance to the selection method used, especially if they performed poorly. Importantly, VC interviews may be seen as providing less information than FTF interviews due to their relative novelty and the lack of applicant familiarity involved (Chapman et al., 2003; Straus et al., 2001). We therefore propose the following:
H1. Applicants will report less favorable perceptions of procedural justice in VC than in FTF interviews with respect to: (a) chance to perform, (b) job-relatedness, and (c) selection information.
Applicant perceptions of recruiter characteristics
Signaling theory posits that when complete information concerning an object, issue, or event is not readily accessible, decision-makers use the limited information available to make the required inferences (e.g. Spence, 1973; Highhouse and Hause, 1995). Drawing on signaling theory, Rynes et al. (1991) suggest that applicants in a selection context view the interviewer as a signal of what it would be like to work for the organization and closely attend to their behavioral cues. Likewise, interviewer behavior is of strong interest to applicants to the extent that it conveys cues about the likelihood of receiving a job offer (Rynes and Miller, 1983; Turban and Dougherty, 1992). Two studies have compared applicant perceptions of recruiter (interviewer) characteristics in VC and FTF interviews. Straus et al. (2001) reported that applicants perceive interviewers as more “likeable” in FTF interviews. On the other hand, Chapman and Rowe (2002) did not find differences in ratings of interviewer friendliness by interview media, though interviewers were perceived as being more effective in FTF interviews.
In view of these inconsistent findings, our interest was in the three recruiter characteristics most strongly associated with applicant attraction and job acceptance intentions (Chapman et al., 2005), namely personableness, trustworthiness, and competence. We also examined applicants' assessments of interviewer physical appearance, since beauty is often perceived favorably (Eagly et al., 1991) and physical attractiveness can positively influence perceptions of professionalism, social and intellectual competence, and likeability (Feingold, 1992; Hosoda et al., 2003; Langlois et al., 2000). Considerable research suggests that these effects carry over to the employment interview (e.g. Barrick et al., 2009; Heilman and Saruwatari, 1979; Kinicki and Lockwood, 1985; Morrow, 1990).
We expect some of the constraints associated with the VC interview documented earlier (e.g. the restricted image, filtering of non-verbal cues, potential distortion/exaggeration of certain physical features; Sellen, 1995) to negatively impact applicant ratings of recruiter (interviewer) characteristics. Furthermore, relative to FTF contexts, the potential “depersonalizing” influence of VC technology on rater perceptions and judgments (e.g. Rice, 1993; Storck and Sproull, 1995) may exacerbate these effects. Thus, we submit the following hypothesis:
H2. Applicants will provide lower evaluations of interviewers in VC relative to FTF interviews with respect to: (a) personableness, (b) trustworthiness, (c) competence, and (d) physical appearance.
Interviewer judgments of applicants
Only two studies have examined the influence of VC technology on interviewer evaluations of applicants. Straus et al. (2001) found that interviewers reported less communication understanding and conversation fluency in VC relative to FTF interviews; however, interviewer ratings of applicants' overall ability (the primary measure of an applicant's suitability to be hired) did not differ between media. In contrast, Chapman and Rowe (2001) found that interviewers ascribed higher ratings of performance to applicants in VC than in FTF interviews, using an overall measure of interview performance derived from an aggregation of competency ratings. Chapman et al. (Chapman and Rowe, 2001; Chapman and Webster, 2001) inferred that a primary mechanism underlying this effect may have been rater overcorrection in which interviewers considered VC applicants to be at a disadvantage, and compensated accordingly.
Considering that the Chapman et al. research was reported more than a decade ago, current interviewers and applicants should have greater familiarity with videoconference technology (Ham, 2009; Weekes, 2008), leaving interviewers less inclined to automatically view VC interviewees as disadvantaged. On the other hand, interviewer evaluations in general have strong affective and cognitive underpinnings (Howard and Ferris, 1996; Keenan, 1978; Posthuma et al., 2002), such that applicant non-verbal behaviors (e.g. eye contact, smiling, nodding, body position, hand movement, physical appearance) are associated with interviewer perceptions of likeability and credibility (e.g. Arvey and Campion, 1982; Dipboye, 1992; DeGroot and Motowidlo, 1999; Gifford et al., 1985). As such, we expect that the lower levels of immediacy, fluidity, and clarity in VC interviews relative to FTF interviews may alter interviewer perceptions of visual and auditory cues, ultimately negatively affecting interviewer perceptions of applicants (Daft and Lengel, 1986; O'Conaill et al., 1993; Sellen, 1995; Yuan et al., 2003). We therefore propose the following:
H3. Interviewers will provide lower ratings of applicants in VC relative to FTF interviews with respect to: (a) overall interview performance, (b) affect toward the applicant, (c) perceived applicant competence, and (d) overall hiring recommendation.
Participants and design
Participants were full-time MBA students (52 served as applicants and 52 as interviewers) at a medium-sized university who were recruited using class announcements and an electronic advertisement distributed to enrollees of career services workshops (providing training on job search, interviewing, and other work-related skills). Participation was voluntary and provided interview experience in preparation for co-op work term placements. Participants were advised that they would be randomly assigned to serve either as an interviewer or as an applicant.
A repeated measures design was used in which each applicant was interviewed using both a FTF and VC interview. Applicants had a different interviewer in each trial. Also, each interviewer conducted both a FTF and a VC interview. The order of interview medium administration was fully counterbalanced (VC first, FTF second vs FTF first, VC second) for both interviewers and applicants. Further, each interview contained parallel question sets relating to two different jobs as project managers in different fictitious organizations. Interview questions for each job assessed the same competencies (e.g. decision-making, communication, interpersonal skills) and were assigned to the FTF and VC interview conditions on a random basis.
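For illustration, the fully counterbalanced assignment described above can be sketched in code. This is our own simplified sketch (the function and variable names are hypothetical, and it does not enforce every constraint of the actual scheduling, such as balancing each interviewer's own media order):

```python
import itertools
import random

def assign_counterbalanced(applicant_ids, interviewer_ids, seed=42):
    """Illustrative sketch: give each applicant two interviews (FTF and VC),
    alternating which medium comes first across applicants, and drawing a
    different interviewer for each of an applicant's two trials."""
    rng = random.Random(seed)
    # alternate the two order conditions across successive applicants
    orders = itertools.cycle([("FTF", "VC"), ("VC", "FTF")])
    pool = list(interviewer_ids)
    rng.shuffle(pool)
    schedule = []
    for applicant, order in zip(applicant_ids, orders):
        # draw two distinct interviewers for this applicant's two trials
        i1, i2 = pool[0], pool[1]
        pool = pool[2:] + [i1, i2]  # rotate so interviewer load stays balanced
        for trial, (medium, interviewer) in enumerate(zip(order, (i1, i2)), start=1):
            schedule.append({"applicant": applicant, "trial": trial,
                             "medium": medium, "interviewer": interviewer})
    return schedule
```

In the resulting schedule, half the applicants meet the VC medium first and half meet it second, mirroring the counterbalancing used in the study.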
While questions have been raised regarding the generalizability of findings from laboratory experiments to field settings (e.g. Gordon et al., 1986), there is very little evidence to suggest that the direction or magnitude of results obtained in laboratory experiments significantly differs from those obtained in the field (Anderson et al., 1999; Highhouse and Gillespie, 2009). Indeed, the degree of similarity between a sample and the population of interest does not appear to affect one's ability to identify relationships between variables, as long as that sample is unbiased on factors that are integral to the research question (e.g. Calder et al., 1981; Farber, 1952; Highhouse and Gillespie, 2009; Kruglanski, 1975). In this study, we follow previous research on the VC interview (e.g. Chapman and Webster, 2001; Straus et al., 2001) in conducting simulated interviews, which enable enhanced control over potential confounding variables (e.g. order of interview administration, type of positions, differences between interviewers/applicants) and higher internal validity – a critical prerequisite for external validity (Hogarth, 2005; Lucas, 2003; Thye, 2000).
With respect to participant characteristics, the mean age of applicants was 26.5 years, 66 percent were male, and their average full-time work experience was 4.5 years. The mean age of interviewers was 27.2 years, 43 percent were male, and their average full-time work experience was 5.8 years. Applicants reported having previously participated in an average of 8.7 interviews, compared to 12.1 among interviewers. Participants reported some familiarity with VC technology: 46.2 percent of applicants and 63.5 percent of interviewers reported having experience using VC technology (e.g. online software such as Skype or NetMeeting).
Videoconference interview system
A customized web-based videoconference interview interface was developed for the study using the Microsoft NetMeeting videoconferencing platform. Computers were equipped with the videoconference interview program, two audio speakers, a 17-inch computer screen displaying the head and torso of each party, and a web camera situated on top of the computer monitor. The computers were located in two separate offices on campus that were connected through the university's local area network.
Procedure
Prior to each interview, the applicants and interviewers completed a short demographic questionnaire assessing gender, age, ethnicity, work experience, and interview experience. Upon arrival at the interview session, both interviewers and applicants were given an overview of the study and a job description summarizing the nature of the organization and the position. Interviewers arrived 15 minutes in advance of applicants (40 minutes in advance of the scheduled interview sessions) so that detailed information could be provided on proper interviewing procedures and techniques (e.g. guidelines on conducting the interview, specific questions to be asked, competency descriptions, evaluation guidelines for each question). The actual interviews were 25-35 minutes in duration and consisted of 12 situational interview questions designed to assess decision-making, initiative, interpersonal skills, communication skills, and overall presentation. An example of an interview question (for initiative) is:
You have just been assigned to a management team consisting of six people from different functional areas. They seem to know each other a little, but you have never met any of them before. The production manager who assembled this team started the meeting by describing the project objectives she wants the team to accomplish and then leaves the room to let the team get organized and start working. What would you do?
After each interview, applicants completed a questionnaire measuring their perceptions of procedural justice associated with the interview, and of their interviewer's characteristics. Interviewers evaluated the candidate following a review of their answers. Both interviewers and applicants were debriefed following their participation and were given feedback concerning techniques that could be used to enhance their performance in structured situational interviews.
Procedural justice
Using items from the selection procedural justice scale (SPJS; Bauer et al., 2001), three dimensions (“rules”) of procedural justice were assessed: chance to perform (three items), selection information (three items), and job-relatedness (one item). For example: “I was able to show my skills and abilities throughout this interview” (chance to perform), “I understood in advance what the interviewing process would be like” (selection information), and “A person who scores well on this interview will do the job well” (job relatedness). Applicants rated their level of agreement with each item using a five-point Likert scale (1=totally disagree to 5=totally agree). Cronbach's alpha was 0.91 for chance to perform and 0.87 for selection information.
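For readers who wish to reproduce reliability estimates of this kind, Cronbach's alpha can be computed directly from item-level responses. The sketch below uses made-up ratings, not the study's data:

```python
def cronbach_alpha(items):
    """Cronbach's alpha for a list of item-score columns (one list per item):
    alpha = k/(k-1) * (1 - sum(item variances) / variance of total scores)."""
    k = len(items)
    n = len(items[0])

    def var(xs):
        # population variance; the 1/n convention cancels in the ratio
        m = sum(xs) / len(xs)
        return sum((x - m) ** 2 for x in xs) / len(xs)

    totals = [sum(col[i] for col in items) for i in range(n)]
    return k / (k - 1) * (1 - sum(var(col) for col in items) / var(totals))

# hypothetical responses from five applicants on a three-item scale
items = [[4, 5, 3, 4, 2],
         [4, 4, 3, 5, 2],
         [5, 4, 3, 4, 1]]
print(round(cronbach_alpha(items), 2))
```

Because the items here move together across respondents, the estimate is high; uncorrelated items would pull it toward zero.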
Perceived interviewer characteristics
Based on research concerning recruiter characteristics (e.g. Chapman et al., 2005; Harris and Fink, 1987) and the role of visual cues in selection (e.g. Arvey and Campion, 1982; DeGroot and Motowidlo, 1999; Gifford et al., 1985), applicants evaluated their interviewers on four especially salient recruiter characteristics: personableness, trustworthiness, competence, and overall physical appearance. A five-point scale (1=poor to 5=excellent) was used.
Total interview score (overall competency ratings)
Interviewers assessed applicants' interview performance on five competency dimensions commonly assessed in interviews (e.g. Einhorn, 1981; Huffcutt et al., 2001) and deemed especially relevant to the target project manager position. Decision-making, initiative, interpersonal skills, communication skills, and overall presentation (i.e. composure, discipline) were rated using a five-point rating scale (1=poor to 5=excellent). These ratings were then summed to form an overall interview score (Cronbach's alpha=0.78).
Affect toward applicant and perceived competence
Scales from Howard and Ferris's (1996) examination of social and situational influences on interview decisions were used. Both affect (four items; alpha=0.74) and competence (three items; alpha=0.72) were rated using a seven-point Likert scale (1=totally disagree to 7=totally agree). For example: “If I had the opportunity I would socialize with this applicant after working hours” (affect toward applicant); and “The applicant does not seem to know what he is talking about” (perceived competence).
Overall hiring recommendation
Consistent with previous research (e.g. Chapman and Webster, 2001), interviewers recorded their overall hiring recommendation using a ten-point scale (ranging from 10 to 100 percent) assessing how likely they were to hire the candidate.
Descriptive statistics and preliminary analyses
Table I shows the descriptive statistics for all study variables. No significant differences among the dependent variables were found due to the interview question set, or the applicant and interviewer demographic characteristics (e.g. gender, age, ethnicity, work experience, interview experience, and prior use of VC technology); therefore, these variables are not examined further.
Applicant perceptions of procedural justice in the interview
Two MANCOVAs were conducted using the general linear model procedure to test the influence of interview medium (VC vs FTF) on applicant perceptions of procedural justice and evaluations of interviewer (recruiter) characteristics. In these analyses, interview medium was entered as a between-subjects factor, with interview order (FTF conducted first or second) included as a covariate. Results are reported in Tables II and III, respectively. Consistent with our predictions, applicants reported significantly less favorable perceptions of procedural justice in VC interviews relative to FTF interviews. VC interviews were perceived by applicants as providing less of a chance to perform (F (1, 101)=16.68, p<0.01) and less selection information (F (1, 101)=4.98, p<0.05), and were also viewed as less job-related (F (1, 101)=11.66, p<0.01) than FTF interviews. These results support H1 (a, b, and c), which proposed that applicants would report lower ratings of procedural justice in VC interviews relative to FTF interviews.
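To illustrate the logic behind each univariate F of this kind, the sketch below computes an ANCOVA-style F-test for a two-level factor (medium) with a covariate (order) by comparing nested least-squares models. The data are synthetic and the function name is ours; the study itself ran multivariate analyses via the general linear model procedure:

```python
import numpy as np

def ancova_f(y, medium, order):
    """F-test for a two-level factor controlling for a covariate, via nested
    models: full = intercept + order + medium; reduced = intercept + order."""
    y = np.asarray(y, dtype=float)
    X_full = np.column_stack([np.ones_like(y), order, medium])
    X_red = X_full[:, :2]

    def rss(X):
        beta, *_ = np.linalg.lstsq(X, y, rcond=None)
        resid = y - X @ beta
        return float(resid @ resid)

    df1 = 1                          # one parameter dropped from the full model
    df2 = len(y) - X_full.shape[1]   # residual degrees of freedom
    F = (rss(X_red) - rss(X_full)) / df1 / (rss(X_full) / df2)
    return F, df1, df2

# toy data: VC (medium=1) ratings shifted below FTF (medium=0)
rng = np.random.default_rng(0)
medium = np.repeat([0, 1], 50)
order = np.tile([0, 1], 50)
y = 3.8 - 0.4 * medium + 0.1 * order + rng.normal(0, 0.5, 100)
F, df1, df2 = ancova_f(y, medium, order)
```

With a genuine mean difference built into the toy data, F comes out well above 1; the p-value would then be read from the F distribution with (df1, df2) degrees of freedom.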
Turning to applicants' assessments of interviewers, overall, applicants in VC interviews (vs FTF) provided significantly lower evaluations of their interviewers on key recruiter characteristics (see Table III). Specifically, in accordance with H2 (a, b, c, and d), applicants in VC interviews (vs FTF) rated their interviewers lower in terms of their personableness (F (1, 101)=10.78, p<0.01), trustworthiness (F (1, 101)=7.02, p<0.01), competence (F (1, 101)=12.64, p<0.01), and overall appearance (F (1, 101)=9.71, p<0.01).
All told, these findings support the proposition that VC technology may negatively affect applicants' perceptions in the interview. Results from this study suggest that this effect applies not only to applicants' perceptions of the fairness of the interview, but also extends to their impressions and interpersonal judgments regarding their interviewers.
Interviewer evaluations of applicants
To analyze the effect of interview medium (VC vs FTF) on interviewer evaluations of applicants, a final MANCOVA was performed with interview medium entered as a between-subjects factor and applicant trial (first or second interview conducted) entered as a covariate. As displayed in Table IV, results from these analyses indicate that interviewers provided significantly lower evaluations of applicants in VC interviews (vs FTF) with respect to ratings of their overall interview performance (competency ratings; F (1, 101)=7.20, p<0.01), interviewers' affect toward the applicant (F (1, 101)=5.79, p<0.05), and their overall hiring recommendation (F (1, 101)=4.57, p<0.05). Interviewers' ratings of perceived competence, however, did not differ between interview media (F (1, 101)=2.30, p>0.05). Overall, these findings provide partial support for H3 and the prediction that interviewers will administer less favorable evaluations to applicants in VC interviews. Support was found for H3(a), (b), and (d), but not (c).
We build on previous research concerning applicant reactions in VC interviews by directly assessing applicants' perceptions of procedural justice (Gilliland, 1993) and by focusing on their perceptions of recruiter characteristics that tend to be most strongly associated with job acceptance decisions. Consistent with media richness theory, the use of VC technology may result in applicants viewing both the fairness of the interview and the characteristics of the interviewer in a less favorable light. Our findings also add to the literature on interviewer evaluations, suggesting that applicants may be evaluated less favorably in VC interviews than in FTF interviews.
With respect to procedural justice, applicants viewed VC technology as not only hindering their interview performance (i.e. less “chance to perform”, less “selection information”) but also as potentially weakening the validity of the interview (i.e. less “job-related”). It may be that restrictions in the flow of conversation and in the projection of non-verbal cues combine to compromise applicants' perceived ability to “put their best foot forward”. Indeed, applicants may feel more anxious in these interviews (Straus et al., 2001), potentially hindering their perceptions of interview fairness and validity. Inasmuch as the interview is preferred over other selection methods primarily because of its face validity and capacity to provide applicants with a “voice” in the selection process (Hausknecht et al., 2004; Lievens et al., 2005; Truxillo and Bauer, 2011), lower ratings of VC interviews on these justice dimensions are a cause for concern.
The negative reactions to VC interviews also extend to applicant perceptions of interviewers: applicants viewed interviewers in VC interviews as not only less personable, but also less competent and less trustworthy. Indeed, VC media have been described as placing a technological “barrier” between conversants (e.g. Short et al., 1976), which may diminish the level of “social presence” in the interview (e.g. Rice, 1993). Thus, interviewer behaviors reflective of warmth and friendliness may become less salient, negatively influencing applicant perceptions of interviewer characteristics such as their personableness and trustworthiness.
Consistent with the notion that the restricted images and filtering of non-verbal cues in VC interviews can result in distorted or misleading impressions, interviewers were also rated lower by applicants in terms of their physical appearance in VC versus FTF contexts. Given the importance of physical appearance in interpersonal judgments (Eagly et al., 1991; Feingold, 1992; Hosoda et al., 2003; Langlois et al., 2000), coupled with the potential for interviewers to perform less effectively in the VC interview as a result of being distracted by the technology (Chapman and Rowe, 2002), these factors may have contributed to lower evaluations of interviewers in VC interviews on dimensions such as perceptions of the interviewer's competence. In all, although our findings are tentative, and multi-item measures of recruiter attributes require further development (e.g. Breaugh, 2012; Chapman et al., 2005), it appears that VC technology may negatively influence both applicant perceptions of the fairness of the interview and their impressions of the interviewer.
Although there were no differences in interviewer perceptions of applicant competence by medium, applicants in VC interviews did receive lower ratings of affect (likeability), lower interview scores, and were less likely to be recommended for hire. The notion that VC technology affects interviewer judgments in part through diminished affect toward the applicant is consistent with “social presence” theory (e.g. Rice, 1993). Indeed, an analysis of the dimensions assessed in the interview revealed that while interviewer ratings of applicant decision-making (F (1, 101)=3.14, p>0.05) and communication skills (F (1, 101)=1.29, p>0.05) did not discernibly differ between media, ratings of interpersonal skills (F (1, 101)=4.51, p<0.05), initiative (F (1, 101)=4.71, p<0.05) and overall presentation (F (1, 101)=5.16, p<0.05) were significantly lower in VC interviews. Thus, the competencies associated with fostering interpersonal attraction may be most crucial to media-based differences in interviewer ratings.
Limitations and future research
The generalizability of our findings can be questioned since they were based on simulated interviews using university students. It is important to note, however, that our participants were in the process of applying for jobs through their Co-operative Education MBA program and that they received feedback on their interviewing skills and performance. As such, there was reason for participants to take the simulated interviews seriously. The students also had regular experience with "high-stakes" interviews: both applicants and interviewers reported substantial previous full-time work experience (M interviewers=5.8 years; M applicants=4.5 years) and prior interview experience (M interviewers=12.1 interviews; M applicants=8.7 interviews). Moreover, consistent with prior VC interview research using student samples (e.g. Chapman and Webster, 2001; Straus et al., 2001), the experimental setting and materials (e.g. job descriptions, interview questions and procedure) were designed to closely emulate an actual job interview. Our interviews were highly structured, and interviewers received substantial pre-interview training consisting of an overview of the interviewing protocol, the specific structured questions, and guidelines for evaluating responses. While there is some evidence that findings from laboratory experiments involving students may, in some cases, differ from those of field studies of non-students (e.g. Peterson, 2001), this pattern is less evident in research on the employment interview (e.g. Bernstein et al., 1975; Pulakos et al., 1996), which includes a vast literature investigating interviewer and applicant decision-making in simulated settings (see Arvey and Campion, 1982; Harris, 1989; Macan, 2009; Posthuma et al., 2002; Schmitt, 1976).
In this research, we enlisted MBA students with significant prior work experience who were in the process of applying for work term placements, and who were subjected to a highly standardized and structured interviewing process. These factors should enhance the generalizability of findings from this study (e.g. Campbell, 1986; Stone-Romero, 2008; Straus et al., 2001).
Another methodological concern is our relatively small sample size. A larger sample would have enabled more elaborate model-testing involving additional potential moderators. Nevertheless, our sample size was comparable to that of other laboratory studies in the area (e.g. Straus et al., 2001), and the enhanced control over potential confounding variables afforded by the simulated interview setting, coupled with the use of a repeated-measures design, provided sufficient statistical power to test our primary hypotheses.
Finally, although a broad range of outcomes concerning applicant reactions and interviewer judgments in VC interviews was investigated, it is important to acknowledge issues we did not examine. For example, we did not directly investigate the validity of the VC interview. Thus, despite finding that VC interviews tended to elicit less favorable applicant reactions (including perceptions of job-relatedness) and lower interviewer evaluations than FTF interviews, it is nonetheless possible that VC interviews have comparable or even higher validities than FTF interviews. For example, the task-media fit hypothesis (McGrath and Hollingshead, 1993) proposes that the use of media richer than what the task requires may act as a "distraction" (Mennecke et al., 2000), resulting in an improper fit between medium and task. In the context of the employment interview, redundant information unrelated to the job may be transmitted FTF that, in effect, diverts the participants from the task of exchanging and evaluating job-related information. Consistent with this premise, Chapman and Rowe (2001) noted that some interviewers felt better able to focus on the content of applicants' verbal responses in VC interviews because the auditory and visual restrictions of the VC technology limited potentially biasing non-verbal information. In this respect, VC interviews may yield judgments that are at least as valid as those from FTF interviews. Further research on the comparative validity of the two media is therefore a high priority. Moreover, given recent findings demonstrating the positive effects of interviewer training on rating accuracy in the interview (Melchers et al., 2011; Powell and Goffin, 2009), the role of variations in interviewer training methods (e.g. frame-of-reference training, training on VC technology) in interviewer judgments and VC interview validity requires investigation.
Since our study is the first to report that VC interviews can result in lower applicant evaluations than FTF interviews, it may be that the rater overcorrection effect (Chapman and Rowe, 2001; Chapman and Webster, 2001) identified in the early days of web-technology use no longer applies (Ham, 2009; Weekes, 2008; Winkler, 2006). In any case, research is required to identify the boundary conditions associated with interviewer overcorrection effects. Likewise, given our media effect on interviewer affect toward applicants, coupled with research suggesting that perceptions of likeability influence interpersonal judgments to a greater extent in VC than in FTF interviews (e.g. Ferran and Watts, 2008), research should further examine factors such as affect and physical appearance as potential moderators of the influence of VC technology on interviewer evaluations. Given the impact of applicant discomfort and uneasiness on interviewer assessments (e.g. McCarthy and Goffin, 2004; Straus et al., 2001), research on interview anxiety in VC interviews is also called for.
Finally, to further enhance generalizability and advance the literature on VC interviews, we encourage field research that examines VC interviews in different organizations and for different types of jobs. Indeed, there is evidence to suggest that applicants for higher level positions prefer to have more face-to-face contact in selection processes (e.g. Martin and Nagao, 1989). Thus, research should examine potential differences in applicant reactions and interviewer evaluations as a function of job type/level and other organizational factors (e.g. the use of VC technology in the firm). We recommend the use of both quantitative and qualitative research methods in this research (e.g. direct observation, one-on-one research interviews; Creswell, 2009) to further explore the influence of applicant and interviewer perceptual processes on decision-making in the VC interview.
Applied implications and conclusion
In contrast to recent studies yielding a generally positive portrait of technology use in the administration of certain recruitment and selection methods (e.g. Lievens and Chapman, 2011; Sackett and Lievens, 2008), our results suggest that choices of interview medium must be made with care. The many advantages of using VC interviews (e.g. Chapman, 1999; Chapman and Webster, 2003) must be balanced against the potential drawbacks, especially with respect to applicant attraction. Moreover, our results caution against varying the interview medium within the applicant pool for a given job, since VC-based evaluations may be systematically biased downward. Should an organization use VC interviews, it should take deliberate steps to offset the potential negative effects. For example, to improve applicant perceptions of fairness, employers should provide applicants with detailed information about the VC interview process in advance, affording them sufficient time to review the information and adjust. Given applicants' relative unfamiliarity with VC interviews, organizations may also consider providing additional pre-interview preparation materials (e.g. practical tips on how to prepare for VC interviews and how they differ from FTF interviews) to ensure a more level playing field for all applicants, irrespective of their interview experience and familiarity with VC technology. To counteract potential negative effects on applicant interest and attraction to the organization, the employer may also elect to present an informational video, including segments portraying existing employees, to add a more "personal touch" to the selection process. Interviewers should be well trained both in the use of VC technology in the interview and in the specific characteristics and behaviors that enhance applicant attraction to the organization (e.g. Chapman et al., 2005; Rynes and Miller, 1983; Turban and Dougherty, 1992).
Finally, given the state of VC interview research (e.g. Chapman and Rowe, 2001, 2002; Chapman et al., 2003; Straus et al., 2001), we advise limiting the use of VC interviews to the initial stages of the selection process to screen out clearly unsuitable candidates. Given the high costs associated with inaccurate hiring decisions (e.g. Hunter and Schmidt, 1983), particularly in positions providing secure, long-term employment (e.g. where it is difficult to dismiss underperforming employees), we do not recommend using VC interviews to discriminate among top candidates at later stages of a selection process, when more nuanced distinctions must be made. Rather, when VC interviews are used for screening, we recommend that FTF interviews be used later in the process to mitigate potential negative applicant reactions. Furthermore, in cases where the selection ratio is high, or a small pool of candidates is vying for a given position, we would not normally recommend the use of a VC interview (unless the primary objective is to increase the size of the applicant pool). In such cases, certain benefits of using a VC interview over an FTF interview (e.g. cost savings on applicant/recruiter travel) would be greatly diminished without mitigating the potential risks involved (e.g. negative applicant reactions).
While the VC interview potentially offers many practical advantages for organizations, our findings raise concerns regarding the level of acceptance of this medium and its comparability to more traditional FTF options. Further study of the manner in which contextual and dispositional factors combine to influence VC interview validity is needed to strengthen the confidence with which this burgeoning selection method can be generally recommended for operational use.
Table I Descriptive statistics and zero-order correlations for all variables
Table II Descriptive and test statistics for applicants' perceptions of procedural justice in VC and FTF interviews
Table III Descriptive and test statistics for applicants' assessment of interviewer characteristics in VC and FTF interviews
Table IV Descriptive and test statistics for interviewers' evaluations of applicants in VC and FTF interviews
Random assignment was used except in three cases in which the interviewers did not appear on schedule. In these cases, PhD students specializing in HRM served as the interviewers. Removing these cases from the data set did not impact any of the findings reported below.
One item from the perceived competence scale (Howard and Ferris, 1996) was not included in our measure of this construct due to its lack of relevance to the focal position of project manager.
Anderson, C.A., Lindsay, J.J., Bushman, B.J. (1999), "Research in the psychological laboratory: truth or triviality?", Current Directions in Psychological Science, Vol. 8 pp.3-9.
Angiolillo, J.S., Blanchard, H.E., Israelski, E.W., Maine, A. (1997), "Technology constraints of video-mediated communication", in Finn, K., Sellen, A., Wilbur, S. (Eds),Video-mediated Communication, Lawrence Erlbaum, Mahwah, NJ, pp.51-73.
Arvey, R.D., Campion, J.E. (1982), "The employment interview: a summary and review of recent research", Personnel Psychology, Vol. 35 pp.281-322.
Arvey, R.D., Sackett, P.R. (1993), "Fairness in selection: current developments and perspectives", in Schmitt, N., Borman, W. (Eds),Personnel Selection, Jossey-Bass, San Francisco, pp.171-202.
Barrick, M., Shaffer, J.A., DeGrassi, S.W. (2009), "What you see may not be what you get: relationships among self-presentation tactics and ratings of interview and job performance", Journal of Applied Psychology, Vol. 94 pp.1394-1411.
Bauer, T.N., Truxillo, D.M., Sanchez, R.J., Craig, J.M., Ferrara, P., Campion, M.A. (2001), "Applicant reactions to selection: development of the selection procedural justice scale (SPJS)", Personnel Psychology, Vol. 54 pp.387-419.
Bernstein, V., Hakel, M.D., Harlan, A. (1975), "The college student as interviewer: a threat to generalizability?", Journal of Applied Psychology, Vol. 60 No.2, pp.266-268.
Breaugh, J.A. (2012), "Employee recruitment: current knowledge and suggestions for future research", in Schmitt, N. (Eds),The Oxford Handbook of Personnel Assessment and Selection, Oxford University Press, New York, NY, pp.68-87.
Calder, B.J., Phillips, L.W., Tybout, A.M. (1981), "Designing research for application", Journal of Consumer Research, Vol. 8 pp.197-207.
Campbell, J. (1986), "Labs, fields, and straw issues", in Locke, E.A. (Eds),Generalizing from Laboratory to Field Settings, Heath, Lexington, MA, pp.269-279.
Cascio, W., Aguinis, H. (2011), Applied Psychology in Human Resource Management, Prentice Hall, Englewood Cliffs, NJ.
Celani, A., Deutsch-Salamon, S., Singh, P. (2008), "In justice we trust: a model of the role of trust in the organization in applicant reactions to the selection process", Human Resource Management Review, Vol. 18 pp.63-76.
Chapman, D.S. (1999), "Expanding the search for talent: adopting technology-based strategies for campus recruiting and selection", Journal of Cooperative Education, Vol. 34 pp.35-41.
Chapman, D.S., Rowe, P.M. (2001), "The impact of videoconference technology, interview structure, and interviewer gender on interviewer evaluations in the employment interview: a field experiment", Journal of Occupational and Organizational Psychology, Vol. 74 pp.279-298.
Chapman, D.S., Rowe, P.M. (2002), "The influence of videoconference technology and interview structure on the recruiting function of the employment interview: a field experiment", International Journal of Selection and Assessment, Vol. 10 pp.185-197.
Chapman, D.S., Webster, J. (2001), "Rater correction processes in applicant selection using videoconference technology: the role of attributions", Journal of Applied Social Psychology, Vol. 31 pp.2518-2537.
Chapman, D.S., Webster, J. (2003), "The use of technologies in the recruiting, screening, and selection processes for job candidates", International Journal of Selection and Assessment, Vol. 11 pp.113-119.
Chapman, D.S., Uggerslev, K.L., Webster, J. (2003), "Applicant reactions to face-to-face and technology-mediated interviews: a field investigation", Journal of Applied Psychology, Vol. 88 pp.944-953.
Chapman, D.S., Uggerslev, K.L., Carroll, S.A., Piasentin, K.A., Jones, D. (2005), "Applicant attraction to organizations and job choice: a meta-analytic review of the correlates of recruiting outcomes", Journal of Applied Psychology, Vol. 90 pp.928-944.
Creswell, J.W. (2009), Research Design: Qualitative, Quantitative, and Mixed Methods Approaches, Sage Publications, Thousand Oaks, CA.
Daft, R.L., Lengel, R.H. (1986), "Organizational information requirements, media richness, and structural design", Management Science, Vol. 32 pp.554-571.
de Lind van Wijngaarden, A.J., Erman, B., Matthews, E.P., Sharp, R., Sutter, E. (2010), "Multi-stream videoconferencing over a peer-to-peer network", Bell Labs Technical Journal, Vol. 15 pp.229-243.
DeGroot, T., Motowidlo, S.J. (1999), "Why visual and vocal interview cues can affect interviewers' judgments and predict job performance", Journal of Applied Psychology, Vol. 84 pp.986-993.
Dipboye, R.L. (1992), Selection Interviews: Process Perspectives, South-Western, Cincinnati, OH.
Eagly, A.H., Ashmore, R.D., Makhijani, M.G., Longo, L.C. (1991), "What is beautiful is good, but..: a meta-analytic review of research on the physical attractiveness stereotype", Psychological Bulletin, Vol. 110 pp.109-128.
Einhorn, L.J. (1981), "An inner view of the job interview: an investigation of successful communication behaviours", Communication Education, Vol. 30 pp.217-228.
Farber, M.L. (1952), "The college student as laboratory animal", American Psychologist, Vol. 7 pp.102.
Feingold, A. (1992), "Good-looking people are not what we think", Psychological Bulletin, Vol. 111 pp.304-341.
Ferran, C., Watts, S. (2008), "Videoconferencing in the field: a heuristic processing model", Management Science, Vol. 54 pp.1565-1578.
Gainey, T.W., Klaas, B.S. (2008), "The use and impact of E-HR", People and Strategy, Vol. 31 pp.50-55.
Gifford, R., Ng, C.F., Wilkinson, M. (1985), "Nonverbal cues in the employment interview: links between applicant qualities and interviewer judgments", Journal of Applied Psychology, Vol. 70 pp.729-736.
Gilliland, S.W. (1993), "The perceived fairness of selection systems: an organizational justice perspective", Academy of Management Review, Vol. 18 pp.694-734.
Gordon, M.E., Slade, L.A., Schmitt, N. (1986), "The ‘science of the sophomore’ revisited: from conjecture to empiricism", Academy of Management Review, Vol. 11 pp.191-207.
Ham, C. (2009), "Videoconference pioneers: what the wider enterprises can learn from HR", Strategic HR Review, Vol. 8 pp.36-37.
Harris, M.M. (1989), "Reconsidering the employment interview: a review of recent literature and suggestions for future research", Personnel Psychology, Vol. 42 pp.691-726.
Harris, M.M., Fink, L.S. (1987), "A field study of applicant reactions to employment opportunities: does the recruiter make a difference?", Personnel Psychology, Vol. 40 pp.765-784.
Hausknecht, J.P., Day, D.V., Thomas, S.C. (2004), "Applicant reactions to selection procedures: an updated model and meta-analysis", Personnel Psychology, Vol. 57 pp.639-683.
Heilman, M.E., Saruwatari, L.R. (1979), "When beauty is beastly: the effects of appearance and sex on evaluations of job applicants for managerial and non-managerial jobs", Organizational Behaviour and Human Decision Processes, Vol. 23 pp.360-372.
Highhouse, S., Gillespie, J.Z. (2009), "Do samples really matter that much?", in Lance, C.E., Vandenberg, R.J. (Eds),Statistical and Methodological Myths and Urban Legends: Doctrine, Verity, and Fable in the Organizational and Social Sciences, Lawrence Erlbaum, Mahwah, NJ, pp.249-267.
Highhouse, S., Hause, E.L. (1995), "Missing information in selection: an application of the Einhorn-Hogarth ambiguity model", Journal of Applied Psychology, Vol. 80 pp.86-93.
Hogarth, R.B. (2005), "The challenge of representativeness design in psychology and economics", Journal of Economic Methodology, Vol. 12 pp.253-263.
Hosoda, M., Stone-Romero, E.F., Coats, G. (2003), "The effects of physical attractiveness on job-related outcomes: a meta-analysis of experimental studies", Personnel Psychology, Vol. 56 pp.431-462.
Howard, J.L., Ferris, G.R. (1996), "The employment interview context: social and situational influences on interviewer decisions", Journal of Applied Social Psychology, Vol. 26 pp.112-136.
Huffcutt, A.I., Culbertson, S.S. (2010), "Interviews", in Zedeck, S. (Eds),APA Handbook of Industrial-Organizational Psychology, American Psychological Association, Washington, DC, pp.185-203.
Huffcutt, A., Conway, J.M., Roth, P.L., Stone, N.S. (2001), "Identification and meta-analytic assessment of psychological constructs measured in employment interviews", Journal of Applied Psychology, Vol. 86 pp.897-913.
Hunter, J.E., Schmidt, F. (1983), "Quantifying the effects of psychological interventions on employee job performance and work-force productivity", American Psychologist, Vol. 38 No.4, pp.473-478.
Keenan, A. (1978), "The selection interview: candidates' reactions and interviewers' judgments", British Journal of Social and Clinical Psychology, Vol. 17 pp.201-209.
Kinicki, A.J., Lockwood, C.A. (1985), "The interview process: an examination of factors recruiters use in evaluating job applicants", Journal of Vocational Behavior, Vol. 26 pp.117-125.
Kroeck, K.G., Magusen, K.O. (1997), "Employer and job candidate reactions to videoconference job interviewing", International Journal of Selection and Assessment, Vol. 5 pp.137-142.
Kruglanski, A.W. (1975), "The two meanings of external invalidity", Human Relations, Vol. 66 pp.373-382.
Langlois, J.H., Kalakanis, L., Rubenstein, A.J., Larson, A., Hallam, M., Smoot, M. (2000), "Maxims or myths of beauty? A meta-analytic and theoretical review", Psychological Bulletin, Vol. 126 pp.390-423.
Lian, S., Zhang, Y. (2009), Handbook of Research on Secure Multimedia Distribution, Information Science Reference, Hershey, PA.
Liden, R.C., Martin, C.L., Parsons, C.K. (1993), "Interviewer and applicant behaviors in employment interviews", Academy of Management Journal, Vol. 36 pp.372-386.
Lievens, F., Chapman, D.S. (2009), "Recruitment and selection", in Wilkinson, A., Redman, T., Snell, S., Bacon, N. (Eds),The SAGE Handbook of Human Resource Management, Sage, London, pp.133-154.
Lievens, F., Highhouse, S., De Corte, W. (2005), "The importance of traits and abilities in supervisors' hirability decisions as a function of method of assessment", Journal of Occupational and Organizational Psychology, Vol. 78 pp.453-470.
Lucas, J.W. (2003), "Theory-testing, generalization, and the problem of external validity", Sociological Theory, Vol. 21 pp.236-253.
McCarthy, J., Goffin, R. (2004), "Measuring job interview anxiety: beyond weak knees and sweaty palms", Personnel Psychology, Vol. 57 pp.607-637.
McGrath, J.E., Hollingshead, A.B. (1993), "Putting the group back in group support systems: some theoretical issues about dynamic processes in groups with technological enhancements", in Jessup, L.M., Valacich, J.S. (Eds),Group Support Systems: New Perspectives, Macmillan, New York, NY, pp.78-96.
Macan, T. (2009), "The employment interview: a review of current studies and directions for future research", Human Resource Management Review, Vol. 19 pp.203-218.
Martin, C.L., Nagao, D.H. (1989), "Some effects of computerized interviewing on job applicant responses", Journal of Applied Psychology, Vol. 74 pp.72-80.
Mayer-Patel, K. (2007), "Videoconferencing", in Bidgoli, H. (Eds),Handbook of Computer Networks: Distributed Networks, Network Planning, Control, Management, and New Trends and Applications, Wiley & Sons, Hoboken, NJ, pp.755-767.
Melchers, K.G., Leinhardt, N., Von Aarburg, M., Kleinmann, M. (2011), "Is more structure really better? A comparison of frame-of-reference training and descriptively anchored rating scales to improve interviewers' rating quality", Personnel Psychology, Vol. 64 pp.53-87.
Mennecke, B.L., Valacich, J.S., Wheeler, B.C. (2000), "The effects of media and task on user performance: a test of the task-media fit hypothesis", Group Decision and Negotiation, Vol. 9 pp.507-529.
Morrow, P.C. (1990), "Physical attractiveness and selection decision-making", Journal of Management, Vol. 16 pp.45-60.
O'Conaill, B., Whittaker, S., Wilbur, S. (1993), "Conversations over video conferences: an evaluation of the spoken aspects of video-mediated communication", Human-Computer Interaction, Vol. 8 pp.389-428.
Peterson, R.A. (2001), "On the use of college students in social science research: insights from a second-order meta-analysis", Journal of Consumer Research, Vol. 28 pp.450-461.
Ployhart, R.E., Ryan, A.M. (1998), "Applicants' reactions to the fairness of selection procedures: the effects of positive rule violations and time of measurement", Journal of Applied Psychology, Vol. 83 pp.3-16.
Posthuma, R.A., Morgeson, F.P., Campion, M.A. (2002), "Beyond employment interview validity: a comprehensive narrative review of recent research and trends over time", Personnel Psychology, Vol. 55 pp.1-81.
Powell, D.M., Goffin, R.D. (2009), "Assessing personality in the employment interview: the impact of training on rater accuracy", Human Performance, Vol. 22 pp.450-465.
Pulakos, E.D., Schmitt, N., Whitney, D., Smith, M. (1996), "Individual differences in interviewer ratings: the impact of standardization, consensus discussion, and sampling error on the validity of a structured interview", Personnel Psychology, Vol. 49 pp.85-102.
Rice, R.E. (1993), "Media appropriateness: using social presence theory to compare traditional and new organizational media", Human Communication Research, Vol. 19 pp.451-484.
Ryan, A.M., Ployhart, R.E. (2000), "Applicants' perceptions of selection procedures and decisions: a critical review and agenda for the future", Journal of Management, Vol. 26 pp.565-605.
Rynes, S.L., Miller, H.E. (1983), "Recruiter and job influences on candidates for employment", Journal of Applied Psychology, Vol. 68 pp.147-154.
Rynes, S.L., Bretz, R.D. Jr, Gerhart, B. (1991), "The importance of recruitment in job choice: a different way of looking", Personnel Psychology, Vol. 44 pp.487-521.
Sackett, P., Lievens, F. (2008), "Personnel selection", Annual Review of Psychology, Vol. 59 pp.419-450.
Schmitt, N. (1976), "Social and situational determinants of interview decisions: implications for the employment interview", Personnel Psychology, Vol. 29 pp.79-101.
Sellen, A.J. (1995), "Remote conversation: the effects of mediating talk with technology", Human Computer Interaction, Vol. 10 pp.401-444.
Short, J., Williams, E., Christie, B. (1976), The Social Psychology of Telecommunications, John Wiley & Sons, London.
Spence, A.M. (1973), "Job market signaling", Quarterly Journal of Economics, Vol. 87 pp.355-374.
Stone-Romero, E.F. (2008), "The relative validity and usefulness of various empirical research designs", in Rogelberg, S.G. (Eds),Handbook of Research Methods in Industrial and Organizational Psychology, Blackwell Publishing, Oxford, pp.77-98.
Storck, J., Sproull, L. (1995), "Through a glass darkly: what do people learn in videoconferences?", Human Communication Research, Vol. 22 pp.197-219.
Straus, S.G., Miles, J.A., Levesque, L.L. (2001), "The effects of videoconference, telephone, and face-to-face media on interviewer and applicant judgements in employment interviews", Journal of Management, Vol. 27 pp.363-381.
Thye, S.R. (2000), "Reliability in experimental sociology", Social Forces, Vol. 78 pp.1277-1309.
Truxillo, D.M., Bauer, T.N. (2011), "Applicant reactions to organizations and selection systems", in Zedeck, S. (Eds),APA Handbook of Industrial and Organizational Psychology, American Psychological Association, Washington, DC, pp.379-397.
Turban, D.B., Dougherty, T.W. (1992), "Influences of campus recruiting on applicant attraction to firms", Academy of Management Journal, Vol. 35 pp.739-765.
Viswesvaran, C. (2003), "Introduction to the special issue: role of technology in shaping the future of staffing and assessment", International Journal of Selection and Assessment, Vol. 11 pp.107-112.
Weekes, S. (2008), "Lights, camera, action! Now clients get web interviews", Recruiter, August, Vol. 6 pp.13.
Winkler, C. (2006), "Job tryouts go virtual", HR Magazine, Vol. 51 pp.131-134.
Yuan, Y., Head, M., Du, M. (2003), "The effects of multimedia communication on web-based negotiation", Group Decision and Negotiation, Vol. 12 pp.89-109.
About the authors
Dr Greg J. Sears is Associate Professor of Human Resource Management and Organizational Behavior at the Sprott School of Business, Carleton University. Prior to joining the school, he was a Personnel Psychologist with the Public Service Commission of Canada where he worked in the areas of staffing policy, employee selection, and management development. Dr Sears' primary areas of research include personnel selection, leadership, workplace diversity, and the role of personality in workplace behavior. Greg J. Sears is the corresponding author and can be contacted at: firstname.lastname@example.org
Dr Haiyan Zhang is a researcher with the Kenexa High Performance Institute, studying employee attitudes and behaviors with the Institute's WorkTrends™ survey of 35,000 employees worldwide. Her areas of expertise include survey design and research, recruitment and selection, organizational justice, and cross-cultural human resources management. Dr Zhang received a PhD in Human Resources Management from the DeGroote School of Business at McMaster University. Prior to her PhD studies, Haiyan worked as vice general manager at Tianjin Hi-Tech Consulting where she led various large-scale research and management consulting projects.
Dr Willi H. Wiesner is Associate Professor of Human Resources and Management at the DeGroote School of Business, McMaster University. He has served as Institute Coordinator and Chair of the Canadian Society of Industrial and Organizational Psychology and as Chair of the Human Resources and Management Area of the DeGroote School of Business at McMaster University from 1997 until 2008. Dr Wiesner advises firms in both the private and public sectors and gives workshops on employee selection, performance appraisal, work team effectiveness, and other human resources areas. His recent research and publication activities have focused on employment interviewing and selection, group decision making, and work team effectiveness.
Dr Rick D. Hackett is Professor and Canada Research Chair of Organizational Behavior and Human Performance at the DeGroote School of Business, McMaster University. He is past Editor-in-Chief of the Canadian Journal of Administrative Sciences, Fellow of the Canadian Psychological Association, and past President of the Canadian Society for Industrial and Organizational Psychology. From 2001 to 2003, Dr Hackett was Visiting Scholar at the Hong Kong University of Science and Technology. As President of Hackett and Associates Human Resources Consultants, Inc., he advises firms in both the public and private sectors on HR assessment and selection.
Dr Yufei Yuan is Professor of Information Systems at DeGroote School of Business, McMaster University. He received his PhD in Computer Information Systems from The University of Michigan and a B.S. in Mathematics from Fudan University in China. His research interests are in the areas of mobile commerce, emergency response systems, web-based negotiation support systems, security and privacy, business models of electronic commerce, fuzzy logic and expert systems, matching problems, and information systems in health care.