Intention to dropout and study satisfaction: testing item bias and structural invariance of measures for South African first-year university students

Karina Mostert (Management Cybernetics Research Entity, Faculty of Economic and Management Sciences, North-West University, Potchefstroom, South Africa)
Clarisse van Rensburg (Management Cybernetics Research Entity, Faculty of Economic and Management Sciences, North-West University, Potchefstroom, South Africa)
Reitumetse Machaba (Management Cybernetics Research Entity, Faculty of Economic and Management Sciences, North-West University, Potchefstroom, South Africa)

Journal of Applied Research in Higher Education

ISSN: 2050-7003

Article publication date: 4 April 2023


Abstract

Purpose

This study examined the psychometric properties of intention to drop out and study satisfaction measures for first-year South African students. The factorial validity, item bias, measurement invariance and reliability were tested.

Design/methodology/approach

A cross-sectional design was used. For the study on intention to drop out, 1,820 first-year students participated, whilst 780 first-year students participated in the study on satisfaction with studies. Confirmatory factor analysis (CFA), differential item functioning (DIF), measurement invariance and internal consistency were used to test the scales.

Findings

A one-factor structure was confirmed for both scales. For the intention to drop out scale, Items 3 and 4 showed statistically significant item bias; however, these differences had no practical impact. Sufficient measurement invariance was established, except for scalar invariance across language groups. No problematic items were identified for the study satisfaction scale.

Practical implications

In essence, this study provides evidence for two short, culturally sensitive measures that can serve as valid and practically valuable tools for measuring intention to drop out and study satisfaction across contextual boundaries in diverse and multicultural contexts.

Originality/value

This study contributes to the limited research on bias and invariance analyses for scales that can be used in interventions to identify students at risk of leaving university, using psychometric analyses to ensure that both scales are applicable in diverse and multicultural settings.

Citation

Mostert, K., van Rensburg, C. and Machaba, R. (2023), "Intention to dropout and study satisfaction: testing item bias and structural invariance of measures for South African first-year university students", Journal of Applied Research in Higher Education, Vol. ahead-of-print No. ahead-of-print. https://doi.org/10.1108/JARHE-04-2022-0126

Publisher

Emerald Publishing Limited

Copyright © 2023, Karina Mostert, Clarisse van Rensburg and Reitumetse Machaba

License

Published by Emerald Publishing Limited. This article is published under the Creative Commons Attribution (CC BY 4.0) licence. Anyone may reproduce, distribute, translate and create derivative works of this article (for both commercial and non-commercial purposes), subject to full attribution to the original publication and authors. The full terms of this licence may be seen at http://creativecommons.org/licences/by/4.0/legalcode


Introduction

The transitioning process of first-year university students is often regarded as a stressful experience (Van Zyl and Dhurup, 2018). This transition is particularly problematic in South Africa and is associated with exceptionally high dropout rates (Van Zyl, 2017). In response to these challenges, two initiatives have been established in South Africa: the South African National Resource Centre (SANRC) for the First-Year Experience and Students in Transition and the Siyaphumelela ("We Succeed") student success initiative. The SANRC aims to improve student success through the development and distribution of research on the first-year experience (Nyar, 2020), whilst the Siyaphumelela initiative aims to expand evidence-based postsecondary student success strategies across Higher Education Institutions (HEIs) in South Africa. Based on the focus of these initiatives, two essential constructs to consider in research on first-year students are the intention to drop out and study satisfaction.

Retention of students has been considered a quality indicator for many universities (Bernardo et al., 2022). Therefore, HEIs must identify students who intend to drop out and intervene before they leave university and do not return. Intention to drop out can be described as a gradual process of goal disengagement (Ghassemi et al., 2017), in which students experience conflict with a previously held goal (i.e. to graduate from university), disengage from that goal and eventually abandon it (i.e. drop out of university) (Scheunemann et al., 2022). Study satisfaction refers to the extent to which students positively evaluate various aspects of their studies, such as their major, study conditions and whether expectations have been fulfilled (Scheunemann et al., 2022; Westermann and Heise, 2018); it can be conceptualised as a student's level of satisfaction with, general experience of, or attitude towards their academic studies or the university (Duque, 2014).

Several studies have investigated the relationship between intention to drop out and study satisfaction. Scheunemann et al. (2022) position intention to drop out as a mediator between the internal and external causes of student dropout and actual dropout, viewing study satisfaction as a possible determinant of the intention to drop out. The results of their three-wave longitudinal study showed a dynamic interplay between variables in the dropout process and that dropout intention is significantly related to study satisfaction. Similarly, Bernardo et al. (2022) position study satisfaction and expectations of the course of study as predictors of intention to drop out. Their findings emphasise that multiple variables influence intentions to drop out, both directly and indirectly. These findings align with Duque's (2014) conceptual framework, derived from a literature synthesis, on the relationship between students' satisfaction, perceived learning outcomes and dropout intentions.

Our study positions intention to drop out and study satisfaction slightly differently than the studies mentioned above, since it forms part of a larger research project called StudyWell: Student Well-being and Success. The StudyWell project utilises the leading approach in occupational health and well-being research, the Job Demands-Resources (JD-R) theory (c.f. Bakker and Demerouti, 2017; Bakker et al., 2023). One of the assumptions of JD-R theory is that two processes underlie well-being. The health-impairment process occurs when individuals experience severe demands, which may lead to exhaustion, health problems and unfavourable outcomes for the organisation, such as employee turnover. The motivational process occurs when resources are available to deal with the effect of demands and to foster creativity and motivation, such as employee engagement, which may lead to positive outcomes for the individual and the organisation (e.g. good performance). This approach enables the investigation of theoretically and empirically neglected reciprocal relations between the negative and positive outcomes of students' health-impairment and motivational processes (such as intention to drop out and study satisfaction). As such, integrating the streams of dropout literature with an integrated well-being theory, such as JD-R theory, may allow linking different aspects of students' lives (their demands, resources and well-being) to essential outcomes for the student and the university (Duque, 2014; Scheunemann et al., 2022).

Periodic assessments are needed to accurately establish and measure students' intention to drop out of the university and their satisfaction with their studies (Băcilă et al., 2014). However, psychological testing is governed in South Africa by the Employment Equity Act No. 55 of 1998, Section 8 (President of the Republic of South Africa, 1998). According to this Act, assessments are prohibited unless they can scientifically be proven reliable and valid, can be applied fairly to all ethnic groups and cultures and are not biased against any person or group.

Item bias refers to item-level irregularities that cause the meaning of an item, or multiple items, not to be understood identically across different cultures or groups. An item is biased when score differences occur not because of actual differences in the measured underlying construct but because of item-level incongruities (Van de Vijver and Tanzer, 2004).

Establishing the configural invariance of measures is essential to investigate if the factor structure fits the data equally in all groups (i.e. has the same pattern across sub-groups). Configural invariance shows to what extent the factor structure can be replicated in the same way across different groups. Metric invariance is an essential property of a scale that indicates whether each unit of measurement (i.e. each item) contributes equally to the latent construct across different sub-groups. Scalar invariance refers to establishing whether a test score has the same meaning in terms of how it is interpreted, regardless of the cultural background (Van de Vijver and Tanzer, 2004).

In addition to adhering to legislation, establishing measurement invariance is also essential for practical reasons, because inaccurate assessment may influence the valid interpretation and correct estimation of effects in research (Teresi and Fleishman, 2007). Many decisions are based on individual and group differences. Ensuring equivalent measurement is essential before making comparisons, because a lack of equivalence (or invariance) makes group comparisons ambiguous (Gregorich, 2006; Teresi and Fleishman, 2007). As a result, flawed instruments may lead to suboptimal decisions (Teresi and Fleishman, 2007) and may impact policy planning and the implementation of interventions (Perkins et al., 2006).

This study aimed to test the psychometric properties of two short measures, intention to drop out and study satisfaction, to establish whether these measures are valid, reliable, unbiased and invariant for different language, campus and gender groups in a sample of first-year university students in South Africa.

Method

Research procedure and participants

An ethical application was submitted and approved and a formal ethics number was obtained. The goal and purpose, confidentiality and anonymity regarding personal information and the possible value to the university and students were explained. Emphasis was placed on participation being voluntary. The data collection was part of the larger StudyWell project, where intention to drop out was included in one study (Study 1) as an outcome of the health-impairment process and study satisfaction was included in another study (Study 2) as an outcome of the motivational process as described in JD-R theory (Bakker and Demerouti, 2017; Bakker et al., 2023).

Data were collected from the three campuses of the university. The university was formed by merging a historically black university and a historically white university as part of the South African government's plan to transform higher education. The merger formed three campuses, each with a unique and diverse culture hosting students from different cultures and language groups.

The sample in Study 1 consisted of 1,820 research participants between the ages of 17 and 24. In terms of language, 39% were Afrikaans-speaking, followed by Setswana (27%), Sesotho (9.2%) and English (7.3%); the remaining 14.8% of the sample spoke one of the other official languages of South Africa or another language. The largest proportion of participants (53.8%) studied at campus 2, followed by 28.2% at campus 1 and 17.3% at campus 3. Most research participants were female (65.2%; 33.7% were male). The sample in Study 2 consisted of 780 research participants, of whom the majority (73.7%) were between 18 and 20 years old. Regarding language, 38.8% indicated that they spoke Afrikaans, 33.1% Setswana and 6.2% Sesotho, three of the 11 official languages of South Africa. Most of the sample studied at either campus 2 (50.5%) or campus 1 (38.3%), with the smallest number of participants (9.7%) studying at campus 3. Concerning gender, the sample comprised 61.8% female and 38.2% male participants.

Measuring instruments

Intention to drop out

The work-related scale of intention to leave the organisation, developed by Sjöberg and Sverke (2000), was adapted to measure intention to drop out in the student context ("If it was up to me, I would quit my studies and do what I want"; "I feel that I want to leave the university before I finish my studies"; "I want to quit my studies"; and "If I was completely free to choose I would leave the university and find a job"). All items are scored on a 5-point Likert-type scale ranging from 1 (strongly disagree) to 5 (strongly agree). Sjöberg and Sverke (2000) confirmed the internal consistency of the scale, obtaining a Cronbach's alpha coefficient of 0.83.

Study satisfaction

The job satisfaction scale, developed by Hellgren et al. (1997), was adapted to measure study satisfaction. The work-related scale originally consisted of three items and a fourth item was added. These four items were adapted to fit the student context (i.e. “I enjoy my studies”; “I am content with my studies”; “I am satisfied with my studies”; and “I am happy in my studies”). The scale was scored on a five-point Likert-type scale that ranges from 1 (Strongly disagree) to 5 (Strongly agree). Hellgren et al. (1997) confirmed the scale's internal consistency, obtaining a Cronbach's alpha coefficient of 0.86.

Statistical analysis

Mplus 8.6 (Muthén and Muthén, 2021) was used to conduct the statistical analyses. Confirmatory factor analysis (CFA) was used to test factorial validity and invariance. Maximum likelihood estimation was used, with the covariance matrix as input (Muthén and Muthén, 2014). The following fit indices were considered to assess the models' goodness-of-fit: the χ² statistic, the comparative fit index (CFI), the Tucker–Lewis index (TLI), the root mean square error of approximation (RMSEA) and the standardised root mean square residual (SRMR). For CFI and TLI, values of 0.90 and above are considered an acceptable fit (Byrne, 2001; Hoyle, 1995). For RMSEA, a value of 0.05 or less indicates a good fit, whereas values between 0.05 and 0.08 indicate an acceptable model fit (Chen et al., 2008). The guidelines of DiStefano et al. (2009) were followed to interpret the factor loadings of items.
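As a rough illustration, the goodness-of-fit guidelines above can be expressed as a small decision helper. This is a sketch of the cited cut-offs, not the authors' Mplus analysis; the function name is our own, and the example values are the fit indices reported for the one-factor intention to drop out model.

```python
# Sketch of the CFI/TLI/RMSEA cut-offs described in the text
# (Byrne, 2001; Hoyle, 1995; Chen et al., 2008).

def assess_fit(cfi, tli, rmsea):
    """Return a verdict per index based on the cited guidelines."""
    verdict = {}
    # CFI and TLI: acceptable at 0.90 and above
    verdict["cfi"] = "acceptable" if cfi >= 0.90 else "poor"
    verdict["tli"] = "acceptable" if tli >= 0.90 else "poor"
    # RMSEA: <= 0.05 good; between 0.05 and 0.08 acceptable
    if rmsea <= 0.05:
        verdict["rmsea"] = "good"
    elif rmsea <= 0.08:
        verdict["rmsea"] = "acceptable"
    else:
        verdict["rmsea"] = "poor"
    return verdict

# Fit indices reported for the intention to drop out scale:
print(assess_fit(cfi=0.988, tli=0.965, rmsea=0.058))
```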

Differential item functioning (DIF) was used to test for item bias, using the lordif package (Choi et al., 2011) in RStudio (https://www.rstudio.com/). The following models were estimated and compared to test for uniform and non-uniform bias, using ordinal logistic regression to generate three likelihood-ratio χ² statistics (Choi et al., 2011):

  • Model 0: logit P(uk) = αk

  • Model 1: logit P(uk) = αk + β1 * ability

  • Model 2: logit P(uk) = αk + β1 * ability + β2 * group

  • Model 3: logit P(uk) = αk + β1 * ability + β2 * group + β3 * ability * group

Bias is identified when statistically significant differences are detected by comparing the log-likelihood values of the models (p < 0.01). Uniform bias is detected by comparing logistic Models 1 and 2 (χ²₁₂; df = 1), whereas non-uniform bias is detected by comparing logistic Models 2 and 3 (χ²₂₃; df = 1). Total bias (or DIF) is identified by comparing logistic Models 1 and 3 (χ²₁₃; df = 2) (Choi et al., 2011). The pseudo-McFadden R² statistic was used to test the magnitude of the DIF, classified as negligible (<0.13), moderate (between 0.13 and 0.26) or large (>0.26) (Zumbo, 1999). Additionally, the practical significance of uniform DIF was determined from the difference in the β1 coefficient between Models 1 and 2, with a 10% difference indicating a practically meaningful effect (Crane et al., 2004); lower thresholds of 5% and even 1% are also used (Crane et al., 2007). A threshold of 5% was used in this study.
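The likelihood-ratio comparisons above can be sketched as follows. This is an illustrative sketch of the model-comparison logic, not the lordif implementation itself; the function names and the log-likelihood values in the example are hypothetical, and it assumes Models 1–3 have already been fitted.

```python
# Sketch of the Model 1/2/3 likelihood-ratio DIF tests described in the text.
from math import erfc, exp, sqrt

def chi2_sf(x, df):
    """Chi-square survival function for df = 1 or 2, the only cases
    needed for the three model comparisons."""
    if df == 1:
        return erfc(sqrt(x / 2.0))  # P(Z^2 > x)
    if df == 2:
        return exp(-x / 2.0)        # chi-square(2) is exponential
    raise ValueError("only df = 1 or 2 supported in this sketch")

def flag_dif(ll1, ll2, ll3, alpha=0.01):
    """Given log-likelihoods of Models 1-3 for one item, return which
    DIF types are statistically significant at p < alpha."""
    uniform_p = chi2_sf(2 * (ll2 - ll1), df=1)      # Model 1 vs Model 2
    nonuniform_p = chi2_sf(2 * (ll3 - ll2), df=1)   # Model 2 vs Model 3
    total_p = chi2_sf(2 * (ll3 - ll1), df=2)        # Model 1 vs Model 3
    return {"uniform": uniform_p < alpha,
            "non-uniform": nonuniform_p < alpha,
            "total": total_p < alpha}

# Hypothetical fitted log-likelihoods for one item:
print(flag_dif(ll1=-950.0, ll2=-941.5, ll3=-941.2))
```

In practice the flagged items would then be checked against the pseudo-McFadden R² magnitude classes and the Δβ1 threshold before being treated as practically biased.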

Measurement invariance was investigated for language (Afrikaans, Sesotho, Setswana and English), campus (three campuses) and gender (male and female) groups. Multigroup analysis was used, comprising: (1) the configural invariance model (the baseline model for the more constrained models, testing whether the factor structure is analogous across groups); (2) the metric invariance model (testing whether the factor loadings are invariant across groups); and (3) the scalar invariance model (testing whether both the factor loadings and the item intercepts are invariant across groups) (Preti et al., 2013). CFI and RMSEA were used as cut-off points: CFI is considered adequate above 0.90 and better above 0.95, and RMSEA should be < 0.08, preferably < 0.05 (Van De Schoot et al., 2012). However, as recommended by Shi et al. (2019), changes in CFI (ΔCFI) were used: a ΔCFI larger than 0.01 between two nested models indicates that the added group constraints have led to a poorer fit, i.e. the more constrained model is rejected. Where full invariance could not be confirmed, item parameters were freed to achieve partial invariance (Preti et al., 2013).
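A minimal sketch of the ΔCFI decision rule described above (not the Mplus procedure itself): walk through the nested models and reject any model whose CFI drops by more than 0.01 relative to the previous, less constrained model. The example uses the CFI values reported in the results for the intention to drop out scale across language groups.

```python
# Sketch of the Delta-CFI rule for nested invariance models (Shi et al., 2019).

def check_invariance(cfi_by_model, threshold=0.01):
    """cfi_by_model: ordered dict of model name -> CFI, from least to most
    constrained. A drop in CFI larger than `threshold` rejects the model."""
    decisions = {}
    models = list(cfi_by_model)
    for prev, curr in zip(models, models[1:]):
        delta = cfi_by_model[curr] - cfi_by_model[prev]
        decisions[curr] = "supported" if delta > -threshold else "rejected"
    return decisions

# CFI values reported for the intention to drop out scale (language groups):
cfi = {"configural": 0.984, "metric": 0.980, "scalar": 0.953}
print(check_invariance(cfi))
```

Here metric invariance holds (ΔCFI = −0.004) but scalar invariance is rejected (ΔCFI = −0.027), matching the partial scalar invariance step reported in the results.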

Cronbach's alpha coefficient was used to determine the reliability of the scales. The coefficient typically ranges between 0 and 1. George and Mallery (2003) provide the following rules of thumb for interpreting Cronbach's alpha: α > 0.90 – excellent; α > 0.80 – good; α > 0.70 – acceptable; α > 0.60 – questionable; α > 0.50 – poor; and α < 0.50 – unacceptable.
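The standard Cronbach's alpha formula, together with the George and Mallery (2003) labels above, can be sketched as follows. This is a generic illustration, not the authors' software; the respondent scores in the example are hypothetical.

```python
# Sketch: Cronbach's alpha = k/(k-1) * (1 - sum of item variances / total variance).
from statistics import variance

def cronbach_alpha(items):
    """items: list of k lists, each holding one item's scores across the
    same respondents (in the same order)."""
    k = len(items)
    item_vars = sum(variance(scores) for scores in items)
    totals = [sum(resp) for resp in zip(*items)]  # per-respondent sum score
    return (k / (k - 1)) * (1 - item_vars / variance(totals))

def label(alpha):
    """George and Mallery (2003) rules of thumb."""
    for cutoff, name in [(0.9, "excellent"), (0.8, "good"), (0.7, "acceptable"),
                         (0.6, "questionable"), (0.5, "poor")]:
        if alpha > cutoff:
            return name
    return "unacceptable"

# Hypothetical 1-5 Likert scores for a 4-item scale, 5 respondents:
scores = [[4, 5, 3, 4, 2], [4, 4, 3, 5, 2], [5, 5, 3, 4, 1], [4, 5, 2, 4, 2]]
a = cronbach_alpha(scores)
print(round(a, 2), label(a))
```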

Results

Psychometric analyses for the intention to drop out scale

Factorial validity

The results showed that a one-factor model was an excellent fit to the data (χ² = 8.723, df = 2, CFI = 0.988, TLI = 0.965, RMSEA = 0.058, SRMR = 0.019). The factor loadings are presented in Table 1 below. All items had acceptable and statistically significant factor loadings (λ), ranging from 0.66 to 0.88.

Item bias

Uniform, non-uniform and total bias were tested for the intention to drop out scale. The results are presented in Table 2.

First, items are flagged when statistically significant differences are detected (items marked in italic in Table 2). As shown in Table 2, Item 3 displayed language and campus-related DIF, whilst Item 4 displayed DIF for language, campus and gender groups. Four visual graphs per item are provided below that display additional diagnostic information to interpret the bias detected in these items. The upper-left graph shows the item characteristic curves for the different sub-groups (i.e. different language, campus or gender groups). The lower-left graph shows the item response functions for the sub-group parameter estimates (slope and category threshold values for each sub-group). The upper-right graphs display the absolute difference between the item characteristic curves of the different groups. The lower-right graph shows the absolute difference between the item characteristic curves of the sub-groups weighted by the score distribution (Choi et al., 2011).

Table 2 and Figure 1 show that Item 3 displayed uniform, non-uniform and total bias for the different language groups. The top left plot in Figure 1 shows that the slope of the function for the Afrikaans group was slightly higher than for the other language groups. The bottom left plot shows that the category threshold values for the Afrikaans group differ noticeably from those of the other groups. The top right plot shows a difference in the item-true-score functions; however, this difference is negligible, as seen in the density-weighted impact (bottom right plot). Based on this information, pseudo-McFadden R² values < 0.13 and a difference in the β1 coefficient smaller than 5%, the magnitude or practical impact of DIF for Item 3 can be classified as negligible. Similarly, Figure 2 shows that Item 4 displayed uniform and total bias. Although noticeable differences between groups can be seen in the graphs, these differences also have no practical impact, with pseudo-McFadden R² values < 0.13 and a Δβ1 coefficient smaller than 5%.

Regarding campus, Items 3 and 4 were flagged as showing statistically significant bias: Item 3 with uniform and total bias and Item 4 with uniform, non-uniform and total bias. Some differences between the campus groups can be seen in the plots (Figures 3 and 4), specifically Campus 1 (dark black line) scoring somewhat higher or lower than the other groups. However, the density-weighted impact in the bottom right plot, pseudo-McFadden R² values < 0.13 and Δβ1 coefficients smaller than 5% indicate that the practical effect is, again, negligible.

For male and female students, Item 4 showed statistically significant bias. The item-true-score functions (upper-left graph) show that male students are prone to endorse Item 4 with higher categories than female students with the same overall intention to drop out. Again, as the density-weighted impact shows, this effect is barely noticeable and, therefore, negligible (see Figure 5).

Measurement invariance

Table 3 shows the measurement invariance across the language, campus and gender groups included for the intention to drop out scale.

Table 3 shows that the intention to drop out scale was invariant regarding configural, metric and scalar invariance for language, campus and gender groups, except scalar invariance for language groups. The ΔCFI value > 0.01 between the two nested models showed that scalar invariance could not be confirmed for language groups. Partial scalar invariance was achieved by releasing the intercepts for Items 3 and 4 in the Afrikaans group and the intercept of Item 4 in the other groups.

Internal consistency

A Cronbach's alpha coefficient of 0.85 demonstrated acceptable reliability (α ≥ 0.70) for the intention to drop out scale.

Psychometric properties of the study satisfaction scale

Factorial validity

The results showed a good fit to the data (χ2 = 0.646; df = 2; CFI = 1.000; TLI = 1.000; RMSEA = 0.000; SRMR = 0.004). Table 4 shows the results for the standardised loadings of the items for the latent variables of the scale. All items had acceptable and statistically significant factor loadings (λ) ranging from 0.753 to 0.870.

Item bias

DIF analyses were used to test for item bias. The results are shown in Table 5.

No uniform or non-uniform bias was found in the items of the study satisfaction scale across the different language, campus and gender groups. In addition, the changes in the beta coefficients across all groups were well below the 5% cut-off set for this study, demonstrating that the items are not biased across the different groups.

Measurement invariance

Measurement invariance (configural, metric and scalar) was tested between the different language, campus and gender groups. Table 6 shows the results.

The results in Table 6 show that the study satisfaction scale has configural, metric and scalar invariance across the different language, campus and gender groups, with CFI scores ranging from 0.985 to 1.000. This indicates strong measurement invariance (Van De Schoot et al., 2012).

Internal consistency

Cronbach's coefficient alpha was 0.90, indicating acceptable reliability (α ≥ 0.70).

Discussion and practical implications

The results showed that a one-factor model for each scale represented an excellent fit to the data. Regarding item bias, Items 3 and 4 of the intention to drop out scale showed some statistically significant bias; however, these differences were negligible and had no practical impact. No bias was detected in any of the study satisfaction scale's items. Configural, metric and scalar invariance were tested. Although the intercepts for Items 3 and 4 in one language group had to be released to reach scalar invariance for the intention to drop out scale (implying that means can still be compared across language groups, if required), the findings indicate that both scales have configural invariance (same one-factor structure), metric invariance (similar factor loadings) and scalar invariance (similar intercepts) across the different groups. Both scales also demonstrated good internal consistency.

These results emphasise the importance for HEIs of investing in the multicultural assessment of measures in cross-cultural settings. Even though South Africa is a very diverse country where multicultural assessment is guided by legislation, migration and globalisation are a reality for many HEIs worldwide (Maringe and Foskett, 2010). There has been a significant upsurge in the number of international students at HEIs in many countries (IIE Open Doors / Enrollment Trends, 2020), which has created linguistically and culturally diverse student groups and gives rise to various opportunities for cultural constructions and re-constructions (Wang and Sun, 2022). Using measures that take cultural factors into account could contribute to credible practices that are rigorous, unbiased, have interpretive power and enable accurate interpretation and intervention for student success initiatives (Lacko et al., 2022).

Few measures have been established to assist with dropout preventative interventions (Bernardo et al., 2022), specifically for diverse settings. The two scales presented in this study could be used as short and valid measures across contextual boundaries and as practically valuable tools to measure intention to drop out and study satisfaction in diverse and multicultural contexts. In addition, investment in student success initiatives and interventions at tertiary level should ideally transfer to students' employability, employment and general functioning after graduation. From an institutional perspective, it is essential to track graduates' employment destinations and functioning in a continual cycle, from the time students enter university until they exit, to fine-tune and improve intervention effectiveness where necessary (Jackson et al., 2013). To accomplish this, a fine-grained and aligned implementation of a questionnaire methodology is necessary (Manathunga et al., 2009). Since intention to drop out and study satisfaction are similar to the work-related concepts of intention to leave the organisation and job satisfaction, two widely used scales from occupational psychology were adapted for the student context in our study. The advantage of this approach is that it provides systematic consistency between the questionnaires administered to students and to graduates.

Limitations and recommendations

Our study had several limitations that should be mentioned and that provide ideas for future research. Because this study was part of a larger initiative for first-year students, the results apply specifically to South African students. Another limitation concerns the language groups included in our sample, which limits international generalisation and generalisation to other language groups in South Africa, a country with 11 official languages (Statistics South Africa, 2018). Future researchers should include samples representing other South African language groups, or English as a language group, for cross-cultural comparisons.

Although using scales developed to measure intention to leave and job satisfaction (work-related scales) and adapting them for the student context can be beneficial (as explained above), the questions can seem too straightforward and present-generation students may not express their true feelings on such questions. Future research could explore redesigning the questionnaires by asking the questions more indirectly to obtain true intentions and feelings. These scales were chosen because they are short and concise, characteristics that are beneficial when students have to complete long questionnaires. However, future research could include scales explicitly designed for students that are more comprehensive and could enable researchers to link student motivations as a precursor to their ultimate actions (e.g. as outlined in the studies of Bernardo et al., 2022, Duque, 2014 and Scheunemann et al., 2022).

Finally, we used a cross-sectional design and two different samples. As a result, the relationship between intention to drop out and study satisfaction could not be examined. Future researchers should explore how the intention to drop out and study satisfaction scales fit within the larger nomological net of first-year university students.

Figures

Figure 1: Graphical display of Item 3 with respect to language

Figure 2: Graphical display of Item 4 with respect to language

Figure 3: Graphical display of Item 3 with respect to campuses

Figure 4: Graphical display of Item 4 with respect to campuses

Figure 5: Graphical display of Item 4 with respect to gender

Standardised factor loadings for the latent variables of the intention to drop out scale

Item | Item text | Loading | S.E. | p
Item 1 | If it was up to me, I would quit my studies and do what I want | 0.669 | 0.028 | 0.000
Item 2 | I feel that I want to leave the university before I finish my studies | 0.886 | 0.020 | 0.000
Item 3 | I want to quit my studies | 0.867 | 0.021 | 0.000
Item 4 | If I was completely free to choose I would leave the university and find a job | 0.757 | 0.027 | 0.000

Note(s): S.E. = standard error and all p-values <0.001

Source(s): Authors' own work

Summary of the DIF analyses for the intention to drop out scale

Group | Item | χ²₁₂ | χ²₂₃ | χ²₁₃ | Δβ1 | R²₁₂ | R²₁₃ | R²₂₃
Language | Item 1 | 0.0659 | 0.7469 | 0.2088 | 0.0019 | 0.0019 | 0.0022 | 0.0003
Language | Item 2 | 0.9110 | 0.2701 | 0.6151 | 0.0012 | 0.0002 | 0.0016 | 0.0014
Language | Item 3 | 0.0006 | 0.0007 | 0.0000 | 0.0200 | 0.0071 | 0.0142 | 0.0071
Language | Item 4 | 0.0000 | 0.1753 | 0.0000 | 0.0185 | 0.0096 | 0.0111 | 0.0016
Campus | Item 1 | 0.8373 | 0.8219 | 0.9453 | 0.0004 | 0.0001 | 0.0002 | 0.0001
Campus | Item 2 | 0.7608 | 0.7933 | 0.9083 | 0.0010 | 0.0002 | 0.0003 | 0.0002
Campus | Item 3 | 0.0001 | 0.9261 | 0.0007 | 0.0201 | 0.0074 | 0.0075 | 0.0001
Campus | Item 4 | 0.0000 | 0.0072 | 0.0000 | 0.0197 | 0.0104 | 0.0133 | 0.0029
Gender | Item 1 | 0.4716 | 0.1180 | 0.2274 | 0.0005 | 0.0001 | 0.0008 | 0.0000
Gender | Item 2 | 0.8245 | 0.3366 | 0.6149 | 0.0000 | 0.0000 | 0.0003 | 0.0000
Gender | Item 3 | 0.9482 | 0.8942 | 0.9891 | 0.0000 | 0.0000 | 0.0000 | 0.0000
Gender | Item 4 | 0.0011 | 0.2040 | 0.0022 | 0.0046 | 0.0031 | 0.0036 | 0.0000

Note(s): χ²₁₂ = chi-square of Model 1 compared to Model 2; χ²₁₃ = chi-square of Model 1 compared to Model 3; χ²₂₃ = chi-square of Model 2 compared to Model 3; Δβ1 = change in beta coefficient; R²₁₂ = pseudo-McFadden R² of Model 1 compared to Model 2; R²₁₃ = pseudo-McFadden R² of Model 1 compared to Model 3; and R²₂₃ = pseudo-McFadden R² of Model 2 compared to Model 3

Source(s): Authors' own work

Summary of measurement invariance analyses for the intention to drop out scale

Group | Model | χ² | df | CFI | ΔCFI | RMSEA | ΔRMSEA
Language | Configural | 24.39 | 8 | 0.984 | – | 0.074 | –
Language | Metric | 37.30 | 17 | 0.980 | −0.004 | 0.056 | −0.018
Language | Scalar | 73.13 | 26 | 0.953 | −0.027 | 0.069 | 0.013
Language | Partial scalar | 62.94 | 25 | 0.974 | −0.006 | 0.057 | 0.001
Campus | Configural | 99.69 | 54 | 0.993 | – | 0.072 | –
Campus | Metric | 121.03 | 62 | 0.991 | −0.002 | 0.076 | 0.004
Campus | Scalar | 147.30 | 79 | 0.989 | −0.002 | 0.073 | −0.003
Gender | Configural | 30.47 | 4 | 0.984 | – | 0.074 | –
Gender | Metric | 39.54 | 7 | 0.970 | −0.006 | 0.072 | −0.014
Gender | Scalar | 48.84 | 10 | 0.965 | −0.005 | 0.066 | −0.006

Note(s): χ2 = chi-square; df = degrees of freedom; CFI = comparative fit index; ΔCFI = delta (change in) CFI; RMSEA = Root mean square error of approximation; ΔRMSEA = delta (change in) RMSEA

Source(s): Authors' own work
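The invariance decisions behind a table like this rest on comparing each more constrained model with the previous one. The sketch below applies widely cited cutoffs (ΔCFI no worse than −0.01 and ΔRMSEA no larger than +0.015, in the spirit of the fixed-cutoff literature such as Chen et al.); these thresholds are assumptions for illustration, not necessarily the authors' exact decision rule.

```python
# Illustrative measurement invariance check. The cutoffs are commonly
# used conventions, assumed here for the sketch, not necessarily the
# authors' exact criteria.

def step_supported(delta_cfi, delta_rmsea,
                   cfi_cut=-0.01, rmsea_cut=0.015):
    """True if the more constrained model fits about as well as the
    less constrained one (invariance step supported)."""
    return delta_cfi >= cfi_cut and delta_rmsea <= rmsea_cut

# Language group, intention to drop out scale (values from the table):
print(step_supported(-0.004, -0.018))  # metric step: supported
print(step_supported(-0.027, 0.013))   # full scalar step: rejected
print(step_supported(-0.006, 0.001))   # partial scalar step: supported
```

Under these assumed cutoffs the sketch reproduces the pattern reported in the findings: metric and partial scalar invariance hold across language groups, while full scalar invariance does not.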

Standardised factor loadings for the latent variables of the study satisfaction scale

Item | Item text | Loading | S.E. | p
Item 1 | I enjoy my studies | 0.842 | 0.021 | 0.000
Item 2 | I am content with my studies | 0.753 | 0.021 | 0.000
Item 3 | I am satisfied with my studies | 0.858 | 0.021 | 0.000
Item 4 | I am happy in my studies | 0.870 | 0.022 | 0.000

Note(s): S.E. = standard error and all p-values < 0.001

Source(s): Authors' own work
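Standardised loadings like those above also determine a composite reliability estimate. The sketch below computes McDonald's omega from the four study satisfaction loadings under the assumption of a one-factor model with uncorrelated residuals, taking each residual variance as 1 − λ² (which holds for a standardised solution); it is an illustration, not the authors' reported reliability coefficient.

```python
# McDonald's omega from the standardised loadings of the study
# satisfaction scale, assuming a single factor with uncorrelated
# residuals (residual variance = 1 - loading**2).
loadings = [0.842, 0.753, 0.858, 0.870]

sum_loadings = sum(loadings)                       # sum of lambdas
residual_var = sum(1 - l ** 2 for l in loadings)   # sum of residuals
omega = sum_loadings ** 2 / (sum_loadings ** 2 + residual_var)
print(round(omega, 2))
```

With these loadings omega is approximately 0.90, consistent with a scale whose internal consistency is adequate.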

Summary of the DIF analyses for the study satisfaction scale

Group | Item | χ²₁₂ | χ²₁₃ | χ²₂₃ | Δβ₁ | R²₁₂ | R²₁₃ | R²₂₃
Language | Item 1 | 0.1657 | 0.2148 | 0.3327 | 0.0076 | 0.0027 | 0.0044 | 0.0017
Language | Item 2 | 0.1899 | 0.2798 | 0.4165 | 0.0055 | 0.0025 | 0.0039 | 0.0013
Language | Item 3 | 0.1691 | 0.2707 | 0.4468 | 0.0078 | 0.0027 | 0.0039 | 0.0012
Language | Item 4 | 0.1811 | 0.3870 | 0.6950 | 0.0131 | 0.0026 | 0.0032 | 0.0006
Campus | Item 1 | 0.8639 | 0.7468 | 0.4388 | 0.0016 | 0.0002 | 0.0013 | 0.0011
Campus | Item 2 | 0.9440 | 0.4402 | 0.1620 | 0.0011 | 0.0001 | 0.0026 | 0.0025
Campus | Item 3 | 0.8622 | 0.9301 | 0.7540 | 0.0013 | 0.0002 | 0.0006 | 0.0004
Campus | Item 4 | 0.8573 | 0.9717 | 0.9003 | 0.0014 | 0.0002 | 0.0004 | 0.0001
Gender | Item 1 | 0.4462 | 0.7468 | 0.9524 | 0.0022 | 0.0003 | 0.0003 | 0.0000
Gender | Item 2 | 0.1131 | 0.2792 | 0.8405 | 0.0004 | 0.0014 | 0.0015 | 0.0000
Gender | Item 3 | 0.6203 | 0.8559 | 0.7978 | 0.0016 | 0.0001 | 0.0002 | 0.0000
Gender | Item 4 | 0.1608 | 0.2733 | 0.4282 | 0.0045 | 0.0012 | 0.0016 | 0.0004

Note(s): χ²₁₂ = chi-square of model 1 compared to model 2; χ²₁₃ = chi-square of model 1 compared to model 3; χ²₂₃ = chi-square of model 2 compared to model 3; Δβ₁ = change in beta coefficient; R²₁₂ = pseudo-McFadden R² of model 1 compared to model 2; R²₁₃ = pseudo-McFadden R² of model 1 compared to model 3 and R²₂₃ = pseudo-McFadden R² of model 2 compared to model 3

Source(s): Authors' own work

Summary of measurement invariance analyses for the study satisfaction scale

Group | Model | χ² | df | CFI | ΔCFI | RMSEA | ΔRMSEA
Language | Configural | 7.91 | 8 | 1.000 | – | 0.000 | –
Language | Metric | 21.30 | 17 | 0.994 | −0.006 | 0.040 | 0.040
Language | Scalar | 34.90 | 26 | 0.988 | −0.006 | 0.046 | 0.006
Campus | Configural | 18.55 | 6 | 0.985 | – | 0.091 | –
Campus | Metric | 16.80 | 12 | 0.994 | 0.009 | 0.040 | −0.051
Campus | Scalar | 22.48 | 18 | 0.994 | 0.000 | 0.031 | −0.009
Gender | Configural | 6.75 | 4 | 0.996 | – | 0.042 | –
Gender | Metric | 12.93 | 7 | 0.991 | −0.005 | 0.047 | 0.005
Gender | Scalar | 17.75 | 10 | 0.989 | −0.002 | 0.045 | −0.002

Note(s): χ² = chi-square; df = degrees of freedom; CFI = comparative fit index; ΔCFI = delta (change in) CFI; RMSEA = root mean square error of approximation and ΔRMSEA = delta (change in) RMSEA

Source(s): Authors' own work

References

Băcilă, M., Pop, M.C., Scridon, M.A. and Ciornea, R. (2014), “Development of an instrument for measuring student satisfaction in business educational institutions”, Contemporary Priorities in Business Education, Vol. 16 No. 37, pp. 841-856.

Bakker, A.B. and Demerouti, E. (2017), “Job demands–resources theory: taking stock and looking forward”, Journal of Occupational Health Psychology, Vol. 22 No. 3, pp. 273-285.

Bakker, A.B., Demerouti, E. and Sanz-Vergel, A. (2023), “Job demands–resources theory: ten years later”, Annual Review of Organizational Psychology and Organizational Behavior, Vol. 10, pp. 25-53, doi: 10.1146/annurev-orgpsych-120920-053933.

Bernardo, A.B., Galve-González, C., Núñez, J.C. and Almeida, L.S. (2022), “A path model of university dropout predictors: the role of satisfaction, the use of self-regulation learning strategies and students’ engagement”, Sustainability (Switzerland), Vol. 14 No. 3, doi: 10.3390/su14031057.

Byrne, B.M. (2001), Structural Equation Modelling with AMOS: Basic Concepts, Applications, and Programming, Lawrence Erlbaum, Mahwah.

Chen, F., Curran, P.J., Bollen, K.A., Kirby, J. and Paxton, P. (2008), “An empirical evaluation of the use of fixed cutoff points in RMSEA test statistic in structural equation models”, Sociological Methods & Research, Vol. 36 No. 4, pp. 462-494, doi: 10.1177/0049124108314720.

Choi, S.W., Gibbons, L.E. and Crane, P.K. (2011), “lordif: an R package for detecting differential item functioning using iterative hybrid ordinal logistic regression/item response theory and Monte Carlo simulations”, Journal of Statistical Software, Vol. 39 No. 8, pp. 1-30.

Crane, P.K., Van Belle, G. and Larson, E.B. (2004), “Test bias in a cognitive test: differential item functioning in the CASI”, Statistics in Medicine, Vol. 23 No. 2, pp. 241-256, doi: 10.1002/sim.1713.

Crane, P.K., Gibbons, L.E., Ocepek-Welikson, K., Cook, K., Cella, D., Narasimhalu, K., Hays, R.D. and Teresi, J.A. (2007), “A comparison of three sets of criteria for determining the presence of differential item functioning using ordinal logistic regression”, Quality of Life Research, Vol. 16 No. 1, pp. 69-84, doi: 10.1007/s11136-007-9185-5.

DiStefano, C., Zhu, M. and Mîndrilã, D. (2009), “Understanding and using factor scores: considerations for the applied researcher”, Practical Assessment, Research and Evaluation, Vol. 14, doi: 10.7275/DA8T-4G52.

Duque, L.C. (2014), “A framework for analysing higher education performance: students' satisfaction, perceived learning outcomes, and dropout intentions”, Total Quality Management and Business Excellence, Vol. 25 Nos 1-2, pp. 1-21, doi: 10.1080/14783363.2013.807677.

George, D. and Mallery, P. (2003), SPSS for Windows Step by Step: A Simple Guide and Reference, 11.0 Update, 4th ed., Allyn & Bacon, Boston.

Ghassemi, M., Bernecker, K., Herrmann, M. and Brandstätter, V. (2017), “The process of disengagement from personal goals: reciprocal influences between the experience of action crisis and appraisals of goal desirability and attainability”, Personality and Social Psychology Bulletin, Vol. 43 No. 4, pp. 524-537, doi: 10.1177/0146167216689052.

Gregorich, S.E. (2006), “Do self-report instruments allow meaningful comparisons across diverse population groups? Testing measurement invariance using the confirmatory factor analysis framework”, Medical Care, Vol. 44 No. 11 (Suppl. 3), pp. 78-94, doi: 10.1097/01.mlr.0000245454.12228.8f.

Hellgren, J., Sjöberg, A. and Sverke, M. (1997), “The interactive effect of job involvement and organisational commitment on job turnover revisited: a note on the mediating role of turnover intention”, Scandinavian Journal of Psychology, Vol. 41, pp. 247-252.

Hoyle, R.H. (1995), “The structural equation modelling approach: basic concepts and fundamental issues”, in Hoyle, R.H. (Ed.), Structural Equation Modelling: Concepts, Issues and Applications, Sage, Thousand Oaks, pp. 1-15.

IIE Open Doors / Enrollment Trends (2020), “International students enrollment trends, 1948/49-2019/20”, Open Doors Report on International Educational Exchange, available at: https://opendoorsdata.org/data/international-students/enrollment-trends/ (accessed 30 November 2020).

Jackson, D., Sibson, R. and Riebe, L. (2013), “Delivering work-ready business graduates—keeping our promises and evaluating our performance”, Journal of Teaching and Learning for Graduate Employability, Vol. 4 No. 1, pp. 2-22, doi: 10.21153/jtlge2013vol4no1art558.

Lacko, D., Čeněk, J., Točík, J., Avsec, A., Đorđević, V., Genc, A., Haka, F., Šakotić-Kurbalija, J., Mohorić, T., Neziri, I. and Subotić, S. (2022), “The necessity of testing measurement invariance in cross-cultural research: potential bias in cross-cultural comparisons with individualism–collectivism self-report scales”, Cross-Cultural Research, Vol. 56 Nos 2-3, pp. 228-267, doi: 10.1177/10693971211068971.

Manathunga, C., Pitt, R. and Critchley, C. (2009), “Graduate attribute development and employment outcomes: tracking PhD graduates”, Assessment and Evaluation in Higher Education, Vol. 34 No. 1, pp. 91-103, doi: 10.1080/02602930801955945.

Maringe, F. and Foskett, N. (2010), “Introduction: globalisation and universities”, in Maringe, F. and Foskett, N. (Eds), Globalisation and Internationalisation in Higher Education: Theoretical, Strategic and Management Perspectives, Continuum, London.

Muthén, L.K. and Muthén, B.O. (2014), Mplus User’s Guide, 8th ed., Muthén & Muthén, Los Angeles, CA.

Muthén, L.K. and Muthén, B.O. (2021), Mplus User's Guide, 8th ed., Los Angeles, available at: https://www.statmodel.com/download/usersguide/MplusUserGuideVer_8.pdf

Nyar, A. (2020), “South Africa’s first-year experience: consolidating and deepening a culture of national scholarship”, Journal of Student Affairs in Africa, Vol. 8 No. 2, pp. ix-x, doi: 10.24085/jsaa.v8i2.4445.

Perkins, A.J., Stump, T.E., Monahan, P.O. and McHorney, C.A. (2006), “Assessment of differential item functioning for demographic comparisons in the MOS SF-36 health survey”, Quality of Life Research, Vol. 15 No. 3, pp. 331-348.

Preti, A., Vellante, M., Gabbrielli, M., Lai, V., Muratore, T., Pintus, E., Pintus, M., Sanna, S., Scanu, R., Tronci, D., Corrias, I., Petretto, D.R. and Carta, M.G. (2013), “Confirmatory factor analysis and measurement invariance by gender, age and levels of psychological distress of the short TEMPS-A”, Journal of Affective Disorders, Vol. 151, pp. 995-1002.

President of the Republic of South Africa (1998), “Employment equity act (act no. 55 of 1998)”, Government Gazette, No. 19370, available at: https://www.gov.za/sites/default/files/gcis_document/201409/a55-980.pdf

Scheunemann, A., Schnettler, T., Bobe, J., Fries, S. and Grunschel, C. (2022), “A longitudinal analysis of the reciprocal relationship between academic procrastination, study satisfaction, and dropout intentions in higher education”, European Journal of Psychology of Education, Vol. 37 No. 4, pp. 1141-1164, doi: 10.1007/s10212-021-00571-z.

Shi, D., Lee, T. and Maydeu-Olivares, A. (2019), “Understanding the model size effect on SEM fit indices”, Educational and Psychological Measurement, Vol. 79 No. 2, pp. 310-334, doi: 10.1177/0013164418783530.

Sjöberg, A. and Sverke, M. (2000), “The interactive effect of job involvement and organisational commitment on job turnover revisited: a note on the mediating role of turnover intention”, Scandinavian Journal of Psychology, Vol. 41 No. 3, pp. 247-252, doi: 10.1111/1467-9450.00194.

Statistics South Africa (2018), “Improving lives through data ecosystems”, Statistics South Africa, available at: https://www.statssa.gov.za/?p=11341

Teresi, J.A. and Fleishman, J.A. (2007), “Differential item functioning and health assessment”, Quality of Life Research, Vol. 16, pp. 33-42, doi: 10.1007/s11136-007-9184-6.

Van De Schoot, R., Lugtig, P. and Hox, J. (2012), “A checklist for testing measurement invariance”, European Journal of Developmental Psychology, Vol. 9 No. 4, pp. 1-7, doi: 10.1080/17405629.2012.686740.

Van de Vijver, F.J.R. and Tanzer, N.K. (2004), “Bias and equivalence in cross-cultural assessment: an overview”, Revue Européenne de Psychologie Appliquée, Vol. 54, pp. 119-135, doi: 10.1016/j.erap.2003.12.004.

Van Zyl, A. (2017), “The first year experience in higher education in South Africa: a good practices guide”, Report by the Fundani Centre for Higher Education and Training at the Cape Peninsula University of Technology, available at: https://heltasa.org.za/wp-content/uploads/2016/04/TDG-FYE-Good-Practices-Guide-24-5-5-17-final-2.pdf

Van Zyl, Y. and Dhurup, M. (2018), “Self-efficacy and its relationship with satisfaction with life and happiness among university students”, Journal of Psychology in Africa, Vol. 28 No. 5, pp. 389-393, doi: 10.1080/14330237.2018.1528760.

Wang, X. and Sun, W. (2022), “Unidirectional or inclusive international education? An analysis of discourses from US international student services office websites”, Journal of Diversity in Higher Education, Vol. 15 No. 5, pp. 617-629, doi: 10.1037/dhe0000357.

Westermann, R. and Heise, E. (2018), “Studienzufriedenheit [study satisfaction]”, in Rost, D.H., Sparfeldt, J.R. and Buch, S. (Eds), Handwörterbuch Pädagogische Psychologie [Handbook of Educational Psychology], Beltz, Weinheim, Basel, pp. 818-825.

Zumbo, B.D. (1999), A Handbook on the Theory and Methods of Differential Item Functioning (DIF): Logistic Regression Modelling as a Unitary Framework for Binary and Likert-type (Ordinal) Item Scores, Directorate of Human Resources Research and Evaluation, Department of National Defense, Ottawa, ON.

Further reading

Al-Sheeb, B.A., Abdulwahed, M.S. and Hamouda, A.M. (2018), “Impact of first-year seminar on student engagement, awareness, and general attitudes toward higher education”, Journal of Applied Research in Higher Education, Vol. 10 No. 1, pp. 15-30, doi: 10.1108/JARHE-01-2017-0006.

Araque, F., Roldán, C. and Salguero, A. (2009), “Factors influencing university dropout rates”, Computers and Education, Vol. 53, pp. 563-574.

Bobko, P. (2001), Correlation and Regression: Applications for Industrial Organisational Psychology and Management, Sage Publications, Thousand Oaks.

Cleary, T.A. and Hilton, T.L. (1968), “An investigation of item bias”, Educational and Psychological Measurement, Vol. 28 No. 1, pp. 61-75, doi: 10.1177/001316446802800106.

Darlaston-Jones, D., Cohen, L., Haunold, S., Young, A. and Drew, N. (2003), “The retention and persistence support (RAPS) project: a transition initiative”, Issues in Educational Research, Vol. 13 No. 2, pp. 1-12.

Duffy, M.K., Lee, K. and Adair, E.A. (2021), “Workplace envy”, Annual Review of Organizational Psychology and Organizational Behavior, Vol. 8, pp. 19-44, doi: 10.1146/annurev-orgpsych-012420-055746.

Essack, S.Y., Naidoo, I. and Barnes, G. (2010), “Government funding as leverage for quality teaching and learning: a South African perspective”, Higher Education Management Policy, Vol. 22 No. 3, pp. 1-12.

Ke, H., Junfeng, D. and Xiaojing, L. (2022), “International students' university choice to study abroad in higher education and influencing factors analysis”, Frontiers in Psychology, Vol. 13, doi: 10.3389/fpsyg.2022.1036569.

Kizito, R., Munyakazi, J. and Basuayi, C. (2016), “Factors affecting student success in a first-year mathematics course: a South African experience”, International Journal of Mathematical Education in Science and Technology, Vol. 47 No. 1, pp. 100-119, doi: 10.1080/0020739X.2015.1057247.

Locke, E.A. (1976), “The nature and causes of job satisfaction”, in Dunnette, M. (Ed.), Handbook of Industrial and Organisational Psychology, Rand McNally, Chicago, pp. 1297-1349.

McDowell, I. and Newell, C. (1996), Measuring Health: A Guide to Rating Scales and Questionnaires, 2nd ed., Oxford University Press Incorporate, available at: https://oxford.universitypressscholarship.com/view/10.1093/acprof:oso/9780195165678.001.0001/acprof-9780195165678

Mkonto, N. (2018), “Monitoring student (dis)engagement: retention officers' experiences at the cape peninsula university of technology”, Journal of Student Affairs in Africa, Vol. 6 No. 1, pp. 65-76, doi: 10.24085/jsaa.v6i1.3066.

Nunnally, J.C. and Bernstein, I.H. (1994), Psychometric Theory, 3rd ed., McGraw-Hill.

Wach, F.-S., Karbach, J., Ruffing, S., Brünken, R. and Spinath, F.M. (2016), “University students' satisfaction with their academic studies: personality and motivation matter”, Frontiers in Psychology, Vol. 7 No. 55, pp. 1-12, doi: 10.3389/fpsyg.2016.00055.

Westermann, R., Heise, E., Spies, K. and Trautwein, U. (1996), “Identifikation und erfassung von komponenten der studienzufriedenheit [Identifying and assessing components of student satisfaction]”, Psychologie in Erziehung und Unterricht, Vol. 43 No. 1, pp. 1-22.

Young, D.G. (2016), “The case for an integrated approach to transition programmes at South Africa's higher education institutions”, Journal of Student Affairs in Africa, Vol. 4 No. 1, pp. 17-32, doi: 10.14426/jsaa.v4i1.142.

Yueh-Ching, C. (2022), “Mobilising multi-semiotics in intercultural communication: Asian international students' experience of using English as a multilingual franca in a Taiwanese university”, Taiwan Journal of TESOL, Vol. 19 No. 1, doi: 10.30397/tjtesol.202204_19(1).0003.

Zijlstra, J. (2020), “Stepwise student migration: a trajectory analysis of Iranians moving from Turkey to Europe and North America”, Geographical Research, Vol. 58 No. 4, pp. 403-415, doi: 10.1111/1745-5871.12434.

Acknowledgements

Acknowledgement and thanks are given to Prof. L.T. de Beer for his assistance with the statistical analyses and the interpretation of the results.

Funding: The material described in this article is based on work supported by 1) the office of the Deputy Vice-Chancellor: Teaching and Learning at the North-West University; and 2) the National Research Foundation, under reference number RA180103297058 (Grant No.: 118953). The views and opinions expressed in this research are those of the researcher and do not necessarily reflect the opinions or views of the funders.

Corresponding author

Karina Mostert is the corresponding author and can be contacted at: Karina.Mostert@nwu.ac.za

About the authors

Karina Mostert is Professor in Industrial Psychology in the Faculty of Economic and Management Sciences at the North-West University, Potchefstroom Campus, South Africa. She has conducted research on occupational health and well-being, focusing on subjective well-being, burnout, engagement, work–home interference and strengths use. Her research focus has shifted to the health and well-being of university students, with a specific emphasis on their experiences, engagement and the psychological resources that can assist in optimal functioning and performance. She leads the project “StudyWell: Student Well-Being and Success” at the North-West University, South Africa. The project aims to develop a valid, reliable, culturally sensitive, online, data-informed monitoring tool for student well-being, informed by in-depth qualitative investigation, to assess and proactively monitor the study climate and the individual traits, states and behaviour of students, thereby informing targeted and cost-effective interventions.

Clarisse van Rensburg is currently completing her Industrial Psychology internship at a retail company in South Africa and previously worked in higher education at Varsity College as a senior planning and scheduling coordinator. She completed her Master's degree in Industrial and Organisational Psychology at the North-West University (Potchefstroom Campus), South Africa. Her research interests are life satisfaction, first-year students and psychometrics.

Reitumetse Machaba is currently an Organisational Development Consultant at Sanlam Corporate and previously worked in higher education at the North-West University, focusing on organisational development and organisational culture initiatives. She completed her Master's degree in Industrial Psychology at the North-West University (Potchefstroom Campus), South Africa. Her research interests are first-year students and psychometrics.
