About the Author(s)


Gina Görgens-Ekermans
Department of Industrial Psychology, Faculty of Economic and Management Sciences, Stellenbosch University, Cape Town, South Africa

Valerio Ghezzi
Department of Psychology, Sapienza University of Rome, Rome, Italy

Tahira M. Probst
Department of Psychology, College of Arts and Sciences, Washington State University, Vancouver, United States of America

Claudio Barbaranelli
Department of Psychology, Sapienza University of Rome, Rome, Italy

Laura Petitta
Department of Psychology, Sapienza University of Rome, Rome, Italy

Lixin Jiang
School of Psychology, University of Auckland, Auckland, New Zealand

Sanman Hu
School of Business, Huaqiao University, Quanzhou, China

Citation


Görgens-Ekermans, G., Ghezzi, V., Probst, T.M., Barbaranelli, C., Petitta, L., Jiang, L., & Hu, S. (2024). Measurement invariance of cognitive and affective job insecurity: A cross-national study. African Journal of Psychological Assessment, 6(0), a147. https://doi.org/10.4102/ajopa.v6i0.147

Note: Additional supporting information may be found in the online version of this article as Online Appendix 1.

Original Research

Measurement invariance of cognitive and affective job insecurity: A cross-national study

Gina Görgens-Ekermans, Valerio Ghezzi, Tahira M. Probst, Claudio Barbaranelli, Laura Petitta, Lixin Jiang, Sanman Hu

Received: 01 Aug. 2023; Accepted: 24 Feb. 2024; Published: 25 Apr. 2024

Copyright: © 2024. The Author(s). Licensee: AOSIS.
This is an Open Access article distributed under the terms of the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.

Abstract

Empirical evidence of established measurement invariance of job insecurity measures may enhance the practical utility of job insecurity as a valid predictor when it is used across different cross-national samples. This study investigated the measurement invariance of the nine-item versions of the Job Security Index (a measure of cognitive job insecurity) and the Job Security Satisfaction Scale (a measure of affective job insecurity) across four countries (i.e. the United States, N = 486; China, N = 629; Italy, N = 482; and South Africa, N = 345). Based on a novel bifactor-(S-1) model approach, we found evidence for partial metric, partial scalar and partial strict invariance of our substantive bifactor-(S-1) structure. The results extend measurement invariance research on job insecurity and have clear pragmatic implications (e.g. for scaling units and measurement bias in cross-national interpretations).

Contribution: This research provides evidence to support the applied use of cross-national comparisons of job insecurity scores across the nationalities included in this study. Theoretically, this research advances the debate about the nature of the relationship between cognitive and affective job insecurity, suggesting that in this cross-national dataset, a model where cognitive job insecurity is specified as the reference domain outperforms a model where affective job insecurity is assigned this status. Practically, it demonstrates that it is sensible and necessary to differentiate between cognitive and affective job insecurity and include measures of both constructs in future research on the construct.

Keywords: bifactor-(S−1) model; cross-national comparison; cognitive job insecurity; affective job insecurity; measurement invariance.

Introduction

Globalisation, technological advances and changing government policies regarding work and labour relations have led to an increase in organisational restructuring, downsizing and mergers (Hirsch & De Soucey, 2006; László et al., 2010). At the same time, the nature of work arrangements has shifted away from permanent full-time positions towards an increasing reliance on precarious employment relationships in a so-called gig economy (e.g. part-time, contingent and/or independent work contracts) (MacDonald & Giazitzoglu, 2019). For example, in 2018, it was estimated that informal work accounted for roughly 61% of the global workforce (ILO, 2018). A 2014 survey of corporate executives in 27 countries (Oxford Economics, 2014) found that a large majority (83%) intended to increase their reliance on contingent, intermittent and contract employees. Indeed, non-standard precarious work arrangements accounted for 60% of all new jobs created in Organisation for Economic Co-operation and Development (OECD) countries from 2007 to 2013, and one-third of all jobs in total (Kalleberg, 2018). While the gig economy and non-standard work arrangements might allow greater flexibility in where, when and how work is completed, collectively these global trends, compounded by intermittent economic downturns and financial crises, have led to pervasive job insecurity among today’s workers worldwide (Probst et al., 2023).

In addition, the outbreak of the global coronavirus disease 2019 (COVID-19) pandemic caused a historic number of job losses worldwide (e.g. Blustein et al., 2020), with some regions being more affected than others (i.e. higher in America, compared to Europe and Central Asia; Eurofound, 2021), and unemployment trends surpassing the 2008–2009 financial crisis (i.e. global labour force participation rates down by 2.2%, compared to 0.2%; ILO, 2021a). More specifically, in January 2021, the International Labour Organization (ILO) estimated a global loss of 114 million jobs (relative to 2019), reflecting the 2020 impact of the pandemic (ILO, 2021a), a trend which worsened into 2021, with 137 million job losses recorded in the third quarter of 2021 (ILO, 2021b). Moreover, pre-pandemic trends (e.g. ILO, 2013) towards reduced youth employment opportunities and a narrower range of job types, employment opportunities and job conditions, driven by the ongoing financial crisis (especially in Europe), were exacerbated by the pandemic. These fundamental changes in the nature of work, coupled with economic and financial crises, have led some researchers (e.g. Jiang & Probst, 2014; Kalleberg, 2013) to argue that job insecurity is a critical and ubiquitous stressor in today’s workplaces.

Nearly four decades of research on job insecurity has provided a strong body of evidence underscoring the numerous adverse consequences associated with this workplace stressor. A comprehensive meta-analysis summarising this body of literature (Jiang & Lavaysse, 2018) encompassed 535 independent samples, with sample sizes up to N = 300 000 employees. The meta-analytic results provided compelling evidence of adverse effects of job insecurity on a wide range of job-related (e.g. commitment, absenteeism, safety behaviours, accidents, motivation and citizenship behaviours) and individual outcomes (e.g. physical and mental health). Moreover, a meta-analysis by Sverke et al. (2019) of 119 samples covering 106 studies provided strong evidence of the pervasive effect of job insecurity on a range of job performance outcomes (e.g. task performance, counterproductive work behaviour and contextual performance).

These adverse effects of job insecurity have been observed in research conducted around the world and in numerous different settings. For example, two large-scale European studies (László et al., 2010; Probst & Jiang, 2017) on pooled data from 19 and 16 European countries, respectively, investigated job insecurity in relation to health and job stress. Many comprehensive studies on United States (US) workers (e.g. Jiang & Probst, 2014; Lawrence & Kacmar, 2017; Probst et al., 2021) exist. In the last 7 years, research on German (e.g. Barrech et al., 2018; Helbling & Kanji, 2018), Swedish (Låstad et al., 2018), Canadian (Watson & Osberg, 2018), Chinese (Lin et al., 2021; Wang et al., 2018), Italian (Chirumbolo et al., 2021; Probst et al., 2018), Taiwanese (Hsieh & Huange, 2018), Turkish (Bitmiş & Ergeneli, 2015), Flemish (Griep et al., 2021), Swiss (Sender et al., 2016) and South African (e.g. De Beer et al., 2016; Smit et al., 2016) employees have been published. The global reach of research efforts on job insecurity is further evidenced in a very recent meta-analysis by Jiang et al. (2021) encompassing 429 samples from 39 countries which investigated the sources of job insecurity through a resources-demands perspective.

Clearly, job insecurity is a pervasive workplace stressor with numerous adverse outcomes observed across different national and cultural settings. Yet, despite the clear relevance of job insecurity globally, there have been surprisingly few attempts to determine whether the measurement of the job insecurity construct is invariant across those different settings. Importantly, when measurement instruments are transported from one country to another, the comparability of those psychological measurements across different groups should be investigated. More specifically, tests of bias and equivalence should routinely be conducted, as such investigations have both theoretical and practical relevance. The presence of construct, method or item bias could express itself in structural, metric and/or scalar non-equivalence of the given instruments.

Comporting with Cheung and Rensvold’s (2002) admonition regarding the importance of assessing measurement equivalence, we argue that the measurement invariance of job insecurity measures should be assessed before attempting to compare or interpret mean differences on the latent trait across groups. If measurement invariance or equivalence assumptions remain untested, the practical utility of job insecurity as a valid predictor when used across different cross-national groups may be questionable. Therefore, the purpose of the current study was to evaluate the measurement invariance across four countries (China, Italy, South Africa and the US) of two widely used measures of job insecurity (the Job Security Index, JSI, and the Job Security Satisfaction scale, JSS), initially developed by Probst (2003) for use in the United States with an English-speaking population. Our results could help inform future comparative international research and facilitate accurate interpretation of cross-country comparisons of the job security construct when measured with these scales. Below, we begin by discussing historical and more recent conceptual definitions and operationalisations of the job insecurity construct, with a focus on the measures being tested in the current study. Next, we summarise cross-cultural and cross-national studies of job insecurity, while noting the few instances where measurement equivalence has been explicitly assessed. Finally, we present the measurement invariance hypotheses being tested in the current study.

Conceptualisations and operationalisations of the job insecurity construct

The absence of a coherent conceptual definition and operationalisation of the job insecurity construct has plagued research endeavours in this field since the early seminal studies (Probst, 2003). Specifically, researchers have long-debated: (1) whether job insecurity should be conceptualised and measured as a subjective experience (i.e. something that is in the eye of the beholder) or an objective state (e.g. threatened by a layoff or a contingent employment status); (2) whether job insecurity is best conceptualised as a unidimensional vs. multidimensional construct and (3) whether it is theoretically or practically meaningful to differentiate between cognitive and affective insecurity.

While some disciplines (e.g. economics) prefer to conceptualise job insecurity as an objective state, researchers within the field of psychology have traditionally viewed it as best understood and described as a subjective phenomenon (e.g. Greenhalgh & Rosenblatt, 1984; Probst, 2002; Sverke, Hellgren, & Naswall, 2002). Using this perspective, job insecurity has been defined (and will be defined in the current study) as the subjective perception that the future of one’s job is unstable or at risk, regardless of actual objective levels of job security (Probst et al., 2018).

Researchers next grappled with whether to view the construct as unidimensional versus multidimensional. Unidimensional definitions and measures of job insecurity tend to take a global approach encompassing the perceived job security associated with the totality of one’s job (e.g. expectations of a change in their job for the worse). On the other hand, multidimensional measures (e.g. Hellgren et al., 1999) explicitly differentiate between quantitative job insecurity (i.e. threats of job loss) and qualitative job insecurity (i.e. threats to valued features of one’s job). While there are pros and cons to each approach, Reisel and Banai (2002) found that the threat of job loss was a better predictor of employee outcomes than a threat to their job features; moreover, a global (unidimensional) measure of job insecurity generally explained more variance than a multidimensional measure. Thus, in the current study, we focused on unidimensional global measures of job insecurity, specifically one that assesses cognitive job insecurity and the other measuring affective job insecurity.

Since the initial distinction between cognitive and affective job insecurity proposed by Borg and Elizur (1992), research has indeed increasingly suggested that there is value in this approach (Huang et al., 2010; Jiang & Probst, 2014; Jiang et al., 2020; Probst, 2003; Reisel & Banai, 2002). Whereas cognitive job insecurity reflects an employee’s appraisal of the future (in)stability of his or her job, affective job insecurity reflects employee affective reactions regarding those perceived levels of job insecurity (Probst, 2003). Thus, cognitive job insecurity reflects perceptions regarding the likelihood of negative changes to one’s job (e.g. with respect to job loss or loss of valued job features), whereas affective job insecurity refers to emotional reactions to that potential loss (e.g. concern, worry and anxiety). Empirical evidence suggests that cognitive and affective job insecurity have differential relationships with antecedents and consequences and supports the validity of making this distinction (Bazzoli & Probst, 2023; Huang et al., 2010; Jiang & Lavaysse, 2018; Jiang et al., 2021; Probst, 2003).

To address these issues, Probst (2003) developed and validated the Job Security Index (JSI) and Job Security Satisfaction (JSS) scale. Both measures assume that job insecurity is a subjective phenomenon and best measured using a global unidimensional approach (i.e. encompassing the entirety of the job, rather than job loss vs. loss of certain job features). However, the scales differentiate between the cognitive and affective aspects of the construct. Whereas the Job Security Index assesses ‘an individual’s cognitive appraisal of the future of his or her job with respect to the perceived level of stability and continuance of that job’ (Probst, 2003, p. 452), the Job Security Satisfaction scale was developed to assess ‘an individual’s attitudes regarding that level of job security (i.e. their affective reactions)’ (Probst, 2003, p. 452). This distinction between cognitive and affective job insecurity has increasingly been adopted by researchers in this domain (Jiang & Probst, 2014). Since their development and validation, the JSI and JSS scales have been used in more than 40 studies to date and in countries as varied as China, Nigeria, the US, Italy, Chile and Turkey.

Despite the extent to which these scales have been used, there has been limited empirical assessment of the measurement invariance of the scales across different cultural settings. However, as we will review below, this is not a unique shortcoming specific to these particular scales, but rather descriptive of much of the cross-cultural and cross-national studies on job insecurity (regardless of the measures used).

Cross-cultural and cross-national studies on job security

In a meta-analysis of job insecurity, Jiang and Lavaysse (2018) identified 435 published articles with 535 independent samples. A number of these studies investigated aspects of the job insecurity construct across different language, culture and/or national contexts. For example, Roll et al. (2015) investigated the relationship between job insecurity and performance across two cross-national samples (i.e. Germany and China), while König et al. (2011) conducted a Swiss–US comparison of the correlates of job insecurity. Earlier studies include a cross-national study on job insecurity conducted on data from Israel, the Netherlands and the United Kingdom (UK) (van Vuuren et al., 1991), while a 30-country study on its relationship with high involvement work systems was conducted by Bacon and Blyton (2001). In 1993, Orpen published differential correlations on the relationship between job insecurity and psychological well-being among black and white South African employees (Orpen, 1993). Lee et al. (2008) developed a measure of job insecurity and validated it on data from China and the US.

Assessing measurement invariance across groups

Only a few of the studies listed above conducted rigorous invariance or equivalence tests on the respective instruments utilised in their studies. For example, König et al. (2011) conducted a composite assessment (i.e. tested one measurement model) of the measurement equivalence of the job insecurity, job satisfaction, organisational commitment, turnover intention and uncertainty avoidance scales included in their study. Item parcels were constructed, and three models were tested (unrestricted mean and factor loadings, restricted factor loadings, and restricted mean and factor loadings). Configural invariance was obtained, but metric and scalar invariance were not. Unfortunately, because the authors only tested the measurement equivalence of all scales used in their study combined, it is unknown whether the non-invariance was a result of the job insecurity measures. In addition, in the study by Lee et al. (2008) on the development of a cross-culturally appropriate measure of job insecurity, only individual country CFA results of the job insecurity measure in the two separate US and Chinese samples were reported. Because no further measurement invariance or equivalence tests across the groups were conducted, it is not known whether that measure of job insecurity functioned the same across the two groups.

In a South African study conducted by Pienaar et al. (2013), sufficient factor structure equivalence (by calculating Tucker’s Φ coefficient of congruence, see Van de Vijver & Leung, 1997) of a shortened version of the De Witte (2000) job insecurity measure was reported between black and white respondents. Further, more stringent invariance tests were also conducted by testing for factor loading and intercept invariance. The authors report support for weak factorial invariance by race (ΔCFI = 0.007) and conclude that when the intercepts were also constrained across groups, ‘total changes in CFI were slightly above the cut-off value of 0.01 for race (ΔCFI = 0.017)’ (Pienaar et al., 2013, p. 13).

Finally, Vander Elst et al. (2014) conducted a series of measurement invariance tests on the four-item Job Insecurity Scale (JIS) developed by De Witte (2000). Data derived from the translated versions were obtained from five West European countries and languages (i.e. Belgium [Flemish], the Netherlands [Dutch], Spain [Spanish], Sweden [Swedish] and the UK [English]). The results revealed evidence of full configural and metric invariance, as well as partial scalar and error variance invariance. Full factor variance invariance was also evident. The authors concluded that construct validity of the different translations of the JIS exists (Vander Elst et al., 2014). More recently, Shoss et al. (2023) reported metric invariance of the JIS (as part of a larger measurement model) using data from the US, the UK and Belgium.

While the Vander Elst et al. (2014) study represents perhaps the most rigorous test of invariance to date, the De Witte JIS differs from the scales examined in the current study in that the De Witte JIS is a global unidimensional measure that contains items reflecting both cognitive and affective insecurity. On the other hand, the Probst (2003) Job Security Index and Job Security Satisfaction measures respond to calls and empirical evidence suggesting that these two forms of job insecurity reflect different unique constructs and should be measured and modelled as such (Jiang & Lavaysse, 2018).

The present study

We argue that invariance research (at different levels) is needed to validate the cross-national use of job insecurity measures. Employing psychological measures in distinct contexts (e.g. cultural or language groups) requires the separation of cultural bias (e.g. construct, method, item bias; Van de Vijver & Tanzer, 2004) from true construct variance in the data attained from such measures, as observed group differences may be as a result of measurement bias and not real underlying differences (Cheung & Rensvold, 2002). This may impede consistent and reliable cross-study and cross-country comparisons (e.g. Vander Elst et al., 2014). To this end, this study aims to add to the scant body of knowledge of cross-cultural invariance analysis on two well-validated and widely used job insecurity measures by examining the configural, metric, scalar and strict invariance (e.g. Vandenberg & Lance, 2000) of the Job Security Index and the Job Security Satisfaction scales (JSI and JSS; Probst, 2003) measurement models over four cross-national samples.

Method

Study design

A quantitative, cross-sectional research design was employed in this research. Data collection took place from 2015 to 2019 (i.e. October 2017 – November 2017 in Italy, July 2017 – August 2017 in China, May 2015 in the US, and June 2019 – July 2019 in SA). All the datasets (China: N = 629; Italy: N = 482; South Africa [SA]: N = 345; US: N = 486) contained anonymous data.

Participants

A description of the different subsamples in terms of available matched demographic information (only age and gender were available in the SA sample) is contained in Table 1. Table S1 of Online Appendix 1 reports frequencies regarding the employee distribution across industry sectors for the Chinese, Italian and US samples. As can be noted, the SA and Italian samples were (on average) slightly older than the others, while all samples were fairly balanced in terms of gender composition. With regards to Chinese, Italian and US sample comparisons, the Chinese sample displayed (on average) a higher number of years of education than the others, while the proportion of US employees unemployed during the past 5 years was slightly higher than for the Chinese and the Italian samples. In terms of job contracts and job types, we observed a significantly lower proportion of employees with permanent arrangements for the Chinese sample and a higher proportion of part-time employees for the Italian sample than for the others. Finally, both Chinese and US employees reported a higher number of working hours in a typical week than the Italian sample.

TABLE 1: Sub-sample characteristics and cross-cultural comparisons.
Instruments

Cognitive job insecurity

Cognitive job insecurity was assessed using the 9-item version of the Job Security Index (JSI; Probst, 2003). This scale was developed to evaluate ‘the perceived stability and continuance of one’s job as one knows it’ (p. 452). Participants rated a list of adjectives and phrases concerned with the future of their job using a 3-point scale (yes = 3, don’t know = 2, no = 0). These response options were modelled after the Job Descriptive Index, since prior research (e.g. Hanisch, 1992) indicated that this format allows even respondents with very low reading ability to comprehend and discriminate among the categories; additionally, the asymmetrical 3/2/0 scoring is based on analyses by Hanisch (1992) indicating that the ‘don’t know’ response is not neutral, but rather is psychometrically closer to a ‘yes’ (i.e. higher insecurity) response than a ‘no’ response. Four items were negatively worded (e.g. ‘Unpredictable’, ‘Up in the air’), while five were positively worded (e.g. ‘Stable’, ‘My job is almost guaranteed’), and the order of presentation was mixed within the scale to avoid potential response biases. Items were recoded as necessary such that higher scores reflect greater cognitive job insecurity.
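The scoring and recoding logic described above can be sketched as follows. This is a minimal illustration, assuming that reverse-coding a positively worded item swaps the ‘yes’ (3) and ‘no’ (0) values while leaving ‘don’t know’ at 2, consistent with its non-neutral status; the item names in the comments are examples taken from the scale.

```python
# Sketch of JSI-style item scoring (yes = 3, don't know = 2, no = 0),
# recoded so that higher scores always reflect greater job insecurity.
# The exact reverse-coding map for positively worded items is an assumption.

RAW_SCORES = {"yes": 3, "don't know": 2, "no": 0}

def score_item(response: str, positively_worded: bool) -> int:
    """Score one item. Positively worded items (e.g. 'Stable') are
    reverse-coded by swapping the 'yes' (3) and 'no' (0) values;
    'don't know' remains 2."""
    value = RAW_SCORES[response]
    if positively_worded:
        value = {3: 0, 2: 2, 0: 3}[value]
    return value

# 'Unpredictable' (negatively worded) answered 'yes' scores 3 (insecure);
# 'Stable' (positively worded) answered 'yes' scores 0 (secure).
```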

Affective job insecurity

Affective job insecurity was assessed using the 9-item version of the Job Security Satisfaction scale (JSS; Probst, 2003). This scale was developed to capture the ‘evaluative responses one might have to a perceived level of job security’ (p. 455). Participants rated a list of adjectives and phrases concerned with the stability of their job using the same response format as the JSI. Four items were negatively worded (e.g. ‘Upsetting how little job security I have’, ‘Unacceptably low’), while five were positively worded (e.g. ‘Looks optimistic’, ‘Never been more secure’). As with the JSI items, the order of presentation of positive and negative JSS items was balanced within the scale, and items were recoded as needed such that higher scores reflect greater affective job insecurity.

Procedure

A convenience sampling method was employed for both the SA and Italian data collection, and no incentives were offered to participants in these two countries. Online data collection was used in the US, China and SA, while a paper-based survey was distributed in Italy. In the US, participants were recruited via an online human subjects crowdsourcing platform (i.e. Amazon Mechanical Turk) as part of a larger research project on the antecedents, moderators and outcomes of job insecurity. Only individuals with an established track record of providing high-quality data in previous crowd-sourced tasks (i.e. ‘high reputation’ participants; see Peer et al., 2014) were recruited. In addition, to circumvent potential self-selection based on prospective participants’ existing perceptions of the constructs being measured, the recruitment materials indicated only that participants needed to be currently employed and would be responding to a survey about their ‘work environment’. Upon completion of the survey, a small incentive ($2.00) was offered for participation. A similar strategy was followed in China by recruiting employees of Chinese enterprises through a well-known Chinese online survey platform (sojiang.com). Respondents received a small reward ($2.83) for taking part in the survey.

In all the samples, the language of administration was the official national language of the country (i.e. English in the United States, Chinese in China and Italian in Italy). Because of the multilingual environment in South Africa (11 official national languages), only English and Afrikaans versions of the tests were administered to participants that indicated English or Afrikaans as their first (i.e. home) language. Translation and back translation procedures (Behling & Law, 2000) were utilised to create Afrikaans, Chinese and Italian versions of the JSI and JSS.

Data analysis

A series of alternative factorial structures of the JSI and JSS measures were initially fitted separately for each country sample (see Figure 1). In line with Probst (2003), the first model posited a single factor underlying all JSI and JSS items (M1), while a second model posited two distinct, correlated latent common causes underlying, respectively, the JSI and JSS items (M2). While in the first model the single latent variable may be interpreted as a general dimension of job insecurity, in the second model the posited latent variables are explicitly modelled to distinguish between cognitive job insecurity (CJI) and affective job insecurity (AJI).

FIGURE 1: Conceptual bifactor-(S–1) model for the JSI and JSS Items. (a) Model 1 (M1) – Single factor, (b) Model 2 (M2) – Two oblique factors, (c) Model 3 (M3) – Bifactor-(S-1) structure (Cognitive JI as referent factor) and (d) Model 4 (M4) – Bifactor-(S-1) structure (Affective JI as referent factor).

Following this, two further alternative factorial structures were hypothesised and tested within the bifactor modelling approach (for an extensive review, see Reise, 2012). In this regard, it is important to note that the great majority of bifactor models proposed in the literature consist of fully symmetrical bifactor structures of psychometric measures (Eid et al., 2017). In such models, all indicators load onto a general (G) factor, there are as many specific factors (S) as there are specific constructs or facets to be modelled, and both G and S factors are specified as orthogonal. As recently evidenced by Eid et al. (2017), fully symmetrical bifactor models have several theoretical and empirical flaws.

Firstly, S factors should be specified to model all specific constructs and facets under investigation only if they are structurally interchangeable. Specifically, two S factors may be considered structurally interchangeable when they are randomly drawn from their universe of units (e.g. method factors associated with two randomly selected colleagues evaluating one’s job performance). In our case, CJI and AJI are clearly not structurally interchangeable, as they represent two constructs explicitly proposed to assess different aspects of the broader concept of job insecurity (Cheng & Chan, 2008; Jiang & Lavaysse, 2018).

Secondly, fully symmetrical bifactor models generally produce estimates at odds with researchers’ expectations, such as issues with model convergence, negative variances of S factors and/or unexpected non-significant (even negative) factor loadings on the S factors (Eid et al., 2017; Heinrich et al., 2020), which are all typical signs of empirical non-identification of the estimated model (Bollen, 1989).

Thirdly, a limitation associated with fully symmetrical bifactor models is that the interpretation of both G and S factors is unclear (Bonifay et al., 2017), as the items are not clearly defined in terms of conditional expectations with respect to both general and specific latent variables (Eid et al., 2017). In order to overcome this limitation, Eid et al. (2017) proposed a modified version of the typical bifactor factorial structure, namely the bifactor-(S-1) model. Unlike fully symmetrical bifactor models, the bifactor-(S-1) factorial structure posits a reference general (G) factor and a number of specific factors (S) equal to the number of facets to be modelled minus one (for recent applications, see Burns et al., 2019; Heinrich et al., 2020). In this case, G and S factors are theoretically and mathematically defined as orthogonal, but S factors are allowed to covary with one another. In this sense, the choice of the reference domain is crucial, because different options lead to different interpretations of both G and S factors and can alter the final estimates and overall fit of the model (Heinrich et al., 2020, 2021). The interpretation of both G and S factors in the bifactor-(S-1) model is well-defined and straightforward. Specifically, the G factor represents the domain for which no S factor has been specified, while the interpretation of S factors is conditional on G: they represent residual latent constructs reflecting that part of the shared variance between the items which is independent of the referent domain.

This methodological framework allowed us to formulate the third and the fourth alternative models (i.e. M3 and M4) of our set. In M3, CJI was specified as the referent domain (G), while AJI represents the S factor, and this formalisation of referent and specific domains was inverted in M4. Although M3 reflects the most theoretically sound model (Jiang & Lavaysse, 2018; Lazarus & Folkman, 1984; Weiss & Cropanzano, 1996), M4 also represents a plausible option.
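As a concrete illustration, M3 can be written out in lavaan-style syntax. The sketch below builds the specification string programmatically; the item labels jsi1–jsi9 and jss1–jss9 are assumptions (the study itself estimated the models in Mplus), and it serves only to make the loading pattern of the bifactor-(S-1) structure explicit.

```python
# Illustrative lavaan-style specification of M3: CJI as the reference (G)
# domain and AJI as the single specific (S) factor.
# Item labels jsi1-jsi9 / jss1-jss9 are hypothetical placeholders.

jsi_items = " + ".join(f"jsi{i}" for i in range(1, 10))
jss_items = " + ".join(f"jss{i}" for i in range(1, 10))

model_m3 = f"""
# All 18 items load on G, interpreted as cognitive job insecurity (the referent)
G =~ {jsi_items} + {jss_items}
# Only JSS items load on the specific factor: residual affective variance
# that is independent of the referent domain
S_AJI =~ {jss_items}
# G and S are orthogonal by definition in the bifactor-(S-1) structure
G ~~ 0*S_AJI
"""
```

Swapping the roles of the JSI and JSS items in this specification yields M4, in which AJI serves as the referent domain instead.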

As the JSI and JSS items use a three-option response format, they were treated as ordered categorical variables (Flora & Curran, 2004). Thus, all models were tested using the weighted least squares mean- and variance-adjusted (WLSMV) estimator (Muthén & Muthén, 1998–2018) with a pairwise deletion strategy to handle missing data. Overall model fit was evaluated using multiple indices (Hu & Bentler, 1999a, 1999b; Kline, 2015): (1) the WLSMV χ2 test statistic; (2) the root mean square error of approximation (RMSEA); (3) the Comparative Fit Index (CFI) and Tucker–Lewis Index (TLI) and (4) the standardised root mean square residual (SRMR). As not all models were nested (e.g. M3 and M4 were not nested within M2), the best-fitting model was determined by comparing information criteria within each country sample. Specifically, Akaike’s Information Criterion (AIC; Akaike, 1973), the Bayesian Information Criterion (BIC; Schwarz, 1978) and the sample-size-adjusted BIC (ABIC; Sclove, 1987) were computed using formulas adapted from Yamaoka et al. (1978), based on the minimum value of the WLSMV fitting function. The two candidate models displaying the lowest values on the AIC, BIC and ABIC were then compared further. A ∆AIC > 10 indicates considerably less support for the model with the higher AIC (Burnham & Anderson, 2004), while ∆BIC and ∆ABIC > 6 provide strong evidence for rejecting the model with the higher value on these information criteria (if > 10, such evidence may be interpreted as very strong; see Kass & Raftery, 1995).
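To illustrate how these criteria trade off fit against model complexity, a minimal sketch follows. The fit-function values and parameter counts are made-up illustrations, not the WLSMV quantities from our analyses, and the standard likelihood-based formulas are used for simplicity:

```python
import math

def aic(minus2ll: float, k: int) -> float:
    """Akaike (1973): penalises -2*log-likelihood by 2 per free parameter."""
    return minus2ll + 2 * k

def bic(minus2ll: float, k: int, n: int) -> float:
    """Schwarz (1978): the penalty grows with the log of the sample size."""
    return minus2ll + k * math.log(n)

def abic(minus2ll: float, k: int, n: int) -> float:
    """Sclove (1987): sample-size-adjusted BIC with a softer penalty term."""
    return minus2ll + k * math.log((n + 2) / 24)

# Hypothetical comparison of two candidate models fit to the same sample (n = 482):
m_a = {"minus2ll": 900.0, "k": 40}   # more parsimonious model
m_b = {"minus2ll": 895.0, "k": 48}   # slightly better raw fit, more parameters
delta_bic = bic(m_b["minus2ll"], m_b["k"], 482) - bic(m_a["minus2ll"], m_a["k"], 482)
# delta_bic > 6 is strong evidence against the model with the higher BIC
```

Here the small improvement in raw fit of the second model does not compensate for its eight extra parameters, so the BIC favours the more parsimonious model.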

Once the most appropriate model was established for each country sample, cross-cultural measurement invariance of the JSI and JSS was tested using a build-up strategy (Millsap & Yun-Tein, 2004). This procedure consists of testing a series of hierarchically nested models (i.e. configural, metric, scalar and strict invariance models) and comparing their fit to evaluate whether the psychometric properties of the measures under investigation generalise across samples. For this purpose, we used the THETA parameterisation of Mplus 8.4 (Muthén & Muthén, 1998–2018), and all latent variables were scaled by fixing their first factor loading to unity.
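The build-up logic, in which each level retains all constraints from the previous one and adds a new set of cross-group equality constraints, can be summarised schematically (the labels below are illustrative shorthand, not Mplus syntax):

```python
# Hierarchy of nested invariance models: each level keeps every constraint
# from the previous level and adds new cross-group equality constraints.
INVARIANCE_STEPS = [
    ("configural", "same factor structure; parameters free across groups"),
    ("metric",     "factor loadings equated across groups"),
    ("scalar",     "loadings + item thresholds equated across groups"),
    ("strict",     "loadings + thresholds + residual variances equated"),
]

def constraints_at(level: str) -> list[str]:
    """Cumulative constraint sets implied by the build-up strategy."""
    added = {
        "configural": [],
        "metric": ["loadings"],
        "scalar": ["loadings", "thresholds"],
        "strict": ["loadings", "thresholds", "residual_variances"],
    }
    if level not in added:
        raise ValueError(f"unknown invariance level: {level}")
    return added[level]
```

Because the models are nested in this cumulative way, a significant worsening of fit at any step localises the source of non-invariance to the parameter set added at that step.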

Given that the ∆WLSMVχ2 test statistic is prone to inflated Type I error rates (see Sass et al., 2014), statistical comparison between adjacent nested models was carried out by evaluating the differences in their alternative fit indices (i.e. ∆RMSEA, ∆CFI and ∆TLI). Moreover, as many cut-offs have been proposed in the literature for establishing different levels of invariance when using WLSMV estimators (for an overview, see Sass et al., 2014; Svetina et al., 2020), we set the rejection criteria to the most conservative available values: ∆RMSEA ≤ –0.007 (Meade et al., 2008), ∆CFI ≤ –0.002 (Svetina & Rutkowski, 2017) and ∆TLI < –0.001 (Marsh et al., 2010). If the full set of equality constraints specified in a given model is not tenable, partial invariance can still be pursued (Byrne et al., 1989; Vandenberg & Lance, 2000). In this case, modification indices were inspected and equality constraints were released one by one (Millsap & Yun-Tein, 2004) until the more restrictive model no longer differed significantly in fit from the less restrictive one. In line with recent simulation studies (Lai et al., 2021), we required that no more than one-third of the total parameters be noninvariant in order to claim partial cross-cultural measurement invariance of the study measures.
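A minimal sketch of these two decision rules, applying the cut-offs exactly as stated above (the sign convention, deltas computed as the more restrictive minus the less restrictive model, is our illustrative assumption):

```python
def invariance_rejected(delta_rmsea: float, delta_cfi: float, delta_tli: float) -> bool:
    """Reject the more restrictive model if any delta crosses its stated cut-off."""
    return delta_rmsea <= -0.007 or delta_cfi <= -0.002 or delta_tli < -0.001

def partial_invariance_tenable(n_noninvariant: int, n_total: int) -> bool:
    """Lai et al. (2021)-style rule: at most one-third of parameters noninvariant."""
    return n_noninvariant <= n_total / 3

# e.g. a drop in CFI of 0.005 when adding equality constraints triggers rejection:
rejected = invariance_rejected(delta_rmsea=0.0, delta_cfi=-0.005, delta_tli=0.0)
```

In practice, each rejection would be followed by inspecting modification indices and releasing the single most offending constraint, then re-evaluating, until the deltas fall within the cut-offs and the one-third ceiling is respected.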

Ethical considerations

In the United States, Italy and China, the respective Institutional Review Boards classified the studies as exempt because of the anonymity of the data and the low risk to participants. The South African study was classified as low risk because of the anonymity of the respondents, but a full review by the relevant higher education institution’s research ethics committee was still required before clearance was obtained. Ethical clearance to conduct the study was obtained from the Research Ethics Committee: Social, Behavioural and Education Research at Stellenbosch University in the Western Cape (Project ID#8713). This clearance incorporated the ethics approvals granted by the other international institutions.

Participants indicated their consent with an online consent statement. The anonymity of reported responses, as well as the secure password protection of data, was communicated to participants.

Results

The proportion of responses in each answer category for the JSI and JSS items is provided in Online Appendix 1 (Table S2). Table 2 shows the fit indices of the alternative models tested separately for each country sample. As can be noted from inspection of the information criteria, the two best-fitting models were M3 and M4 for the Chinese, Italian and SA samples, while for the US data they were M2 and M3. For the Italian, SA and US data, the M3 model showed consistently lower values on all information criteria than the second-best-fitting model (albeit for the US data, the ∆BIC between M2 and M3 was only slightly lower than 1). For the Chinese data, M4 seemed to outperform M3 when the ∆AIC and ∆ABIC were inspected, while the ∆BIC showed the opposite pattern. However, the estimates provided by M4 highlighted some unexpected results: specifically, two items of the JSS measure loaded negatively on the specific factor, and the variance of the S factor was not statistically significant, suggesting that the residual CJI factor is substantively meaningless in that sample.

TABLE 2: Overall fit indices of the alternative factorial models for each country sample.

Given these results, M3 was retained as the final model for all country samples. As can be noted from Table 2, the fit indices of the M3 model ranged from acceptable (RMSEA) to good (CFI and TLI) in all country samples. Consistent with the bifactor-(S-1) modelling approach, the interpretation of the G and S factors specified in M3 is clear and theoretically well defined. On the one hand, the G factor represents the referent domain of the JSI and JSS measures, reflecting the cognitive component of job insecurity perceptions underlying the entire set of items. On the other hand, the S factor reflects the affective (evaluative) component of job insecurity that is independent of the referent (cognitive) domain. More specifically, the latent score on the S factor can be interpreted as the degree to which an individual shows higher (or lower) affective evaluative reactions to job insecurity compared to other individuals with the same latent score on the G (cognitive) factor.

Table 3 shows the results of the measurement invariance tests of M3 across the country samples. As can be noted, full metric invariance was not supported by the data (for the full pattern of fixed, invariant and non-invariant parameters, see Table S3 of Online Appendix 1). Specifically, modification indices progressively suggested releasing two factor loadings on the G factor in the Chinese sample (for items JSI7 and JSI9), four in the Italian sample (for items JSI5, JSI6, JSI8 and JSS6) and one in both the SA and US samples (for JSS6). Moreover, a single factor loading on the S factor was released in the Italian sample (for JSS8). After releasing these constraints, partial metric invariance was reached. We then tested the partial metric, full scalar model. In this case too, some constraints on item thresholds led to a significant worsening in model fit. Specifically, ten thresholds were released in the Chinese sample, nine in the Italian sample, six in the SA sample and five in the US sample (see Table S3 of Online Appendix 1). After releasing these constraints, the partial metric, partial scalar model was tenable. (As the residual variance of JSS7 was no longer significant for the Italian sample, this parameter was fixed to 0 for that group [Bollen, 1989].) Finally, we imposed equality constraints on the items’ residual variances across the country samples. A single equality constraint was released in each of the Chinese, Italian and SA samples (for items JSS3, JSS2 and JSS6, respectively), while no additional constraint needed to be released in the US sample. Overall, we reached partial metric, partial scalar and partial strict invariance of our substantive bifactor-(S-1) structure. As can be noted from Table 4, the proportion of non-invariant parameters was consistently low in all country samples.
This evidence supported the cross-cultural generalisability of the final factorial structure of the JSI and JSS items, highlighting very similar psychometric properties across the country samples involved in the present study. Moreover, as fewer than one-third of the parameters were noninvariant (see Lai et al., 2021), we can conclude that the measurement properties of the study scales were substantially generalisable across countries.

TABLE 3: Measurement invariance of the final bifactor-(S-1) Model (M3).
TABLE 4: Number of non-invariant parameters from the most restrictive measurement invariance model (4a).

Finally, we calculated consistency (Con), specificity (Spec) and reliability (Rel) coefficients (Eid et al., 2017; Heinrich et al., 2020) for each item from the most restrictive measurement invariance model (Table 5). While consistency coefficients reflect the proportion of each item’s true score variance attributable to individual differences in the CJI referent domain (G), specificity coefficients reflect the extent to which that true score variance is accounted for by the AJI residual factor (S) after controlling for G. Rel coefficients represent the proportion of the observed item variance accounted for by error-free individual differences on both G and S factors. As can be noted, the G factor accounts for the largest proportion of the items’ true score variance in all country samples (average Con coefficients ranged from 0.721 to 0.884), but the S factor represented a significant unique source of true score variability in all cases (average Spec coefficients ranged from 0.107 to 0.279). Finally, a very high proportion of the items’ observed variance was accounted for by reliable individual differences attributable to the latent factors (average Rel coefficients ranged between 0.650 and 0.863 across country samples). Overall, we can conclude that CJI represents a strong common referent domain underlying both the JSI and JSS items, but the AJI residual factor provided unique and valuable added information regarding the shared variability among the JSS items in all country samples.
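A minimal numeric sketch of how these coefficients relate to the model parameters (the loadings and residual variance below are made-up values, not estimates from Table 5; factor variances are fixed to 1 for simplicity):

```python
def bifactor_s1_coefficients(lam_g: float, lam_s: float, resid_var: float,
                             var_g: float = 1.0, var_s: float = 1.0) -> dict:
    """Item-level consistency, specificity and reliability in a bifactor-(S-1) model.

    Con + Spec = 1 partitions the item's true score variance between the
    referent (G) and residual (S) factors; Rel is true over total variance.
    """
    true_var = lam_g**2 * var_g + lam_s**2 * var_s
    total_var = true_var + resid_var
    return {
        "con": lam_g**2 * var_g / true_var,
        "spec": lam_s**2 * var_s / true_var,
        "rel": true_var / total_var,
    }

# Hypothetical AJI item loading on both the G and the S factor:
coefs = bifactor_s1_coefficients(lam_g=0.8, lam_s=0.4, resid_var=0.2)
# A referent-domain (CJI) item has lam_s = 0, hence con = 1 and spec = 0.
```

This also makes explicit why Con and Spec in Table 5 sum to approximately one for the JSS items: they partition the same pool of true score variance.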

TABLE 5: Estimates from the final multi-group model (4a) and consistency, specificity and reliability coefficients.
Discussion
Theoretical contribution and practical measurement implications

Consistent and reliable cross-study and country comparisons on job insecurity hinge on assessments of measurement invariance in this domain (e.g. Vander Elst et al., 2014). The purpose of this study was to investigate the measurement invariance of the JSI and JSS over four samples, each derived from one country on four different continents (the US, Italy, China and SA). Partial metric, partial scalar and partial strict invariance for a bifactor-(S-1) model (M3) were achieved, rendering meaningful cross-national group comparisons permissible for this model. The results make several contributions to the current job insecurity literature.

Firstly, a series of competing models was tested. Overall, these represented a unique approach to investigating the cognitive–affective job insecurity relationship. Although the cognitive–affective distinction is well supported (e.g. Jiang & Probst, 2014; Probst, 2003, 2008; Reisel & Banai, 2002), the distinct nature of their interrelationship is less often reported. While Eid et al. (2017, p. 555) state that the bifactor-(S-1) model is mainly applicable when a ‘clear candidate for a reference domain’ exists, we argued in favour of testing competing models, given that no clear theoretical justification for either model existed from previous studies. The results revealed that the consistently best-fitting model (i.e. Model 3) represented affective job insecurity as conditional on cognitive job insecurity, providing additional support for the distinctiveness of these two constructs and clarification of the complex relationship between them (e.g. Jiang & Lavaysse, 2018).

Theoretically, this result suggests that in this cross-national dataset, a model in which cognitive job insecurity is specified as the reference domain outperforms a model in which affective job insecurity is assigned this status. Practically, it suggests that across all samples, interpretations of affective job insecurity scores hinge upon levels of cognitive job insecurity.

Theoretically, this result also aligns with both cognitive appraisal theory (Lazarus & Folkman, 1984) and affective events theory (Weiss & Cropanzano, 1996), when applied to the job insecurity domain. Moreover, it suggests that this theoretical interpretation may replicate across different cross-national cultural contexts. That is, the potential threat of job loss (i.e. cognitive job insecurity) could, as suggested by cognitive appraisal theory, confidently be assigned the status of primary appraisal: the process through which the meaning and significance of an event (i.e. potential job loss) is recognised (Lazarus, 1991). Affective job insecurity, thereafter, represents a secondary appraisal (Smith & Pope, 1992) contingent on the coping resources envisioned to mitigate the severity of the threat inherent in the job loss appraisal. Similarly, as predicted by affective events theory (Weiss & Cropanzano, 1996), emotional or affective reactions (such as anxiety or worry) stem from the cognitive appraisal of events (evaluated for their relevance to well-being) as proximal causes. Recently, Charkhabi (2019, p. 2) argued that ‘actual job insecurity and appraisal of job insecurity, are two distinct constructs’ and showed that a hindrance appraisal of job insecurity mediates the relationship between quantitative job insecurity and emotional exhaustion. Moreover, strong evidence of the mediating effect of AJI in the relationships between CJI and a broad spectrum of workplace outcomes (see Jiang & Lavaysse, 2018) further underscores this notion.

This study provides strong cross-national evidence for the notion that ‘AJI can be considered as an emotional reaction to CJI’ (Jiang & Lavaysse, 2018, p. 2316). It furthermore underscores the notion that disentangling AJI and CJI in studies of JI may provide a stronger theoretical approach to understanding the psychological mechanism driving the outcomes of JI. For example, a recent longitudinal study (Griep et al., 2021) revealed that perceptions of job insecurity influenced mental health complaints when job insecurity was persistent. However, the JI measure utilised in that study (Vander Elst et al., 2014) appears to reflect CJI only, and therefore may have missed the additional benefits offered by recognising CJI and AJI as two separate appraisals with unique explanatory power in the stressor appraisal process.

The measurement invariance results indicated that partial metric, partial scalar and partial strict invariance for the substantive bifactor-(S-1) Model 3 emerged, conditional on certain parameters being freely estimated across groups, allowing specific conclusions regarding the translated versions of the JSI and JSS. More specifically, sufficient evidence emerged supporting the factorial structure of Model 3 (i.e. the configural invariance model) across the four samples. This implies that the manifest measures induced similar conceptual frames of reference in each of the groups (Riordan & Vandenberg, 1994; Vandenberg & Lance, 2000).

The partial metric invariance results suggest similarity of the scaling units across countries, implying that meaningful interpretation of item scores across countries is permissible. That is, the majority of the observed scores on the translated JSI and JSS items are similarly calibrated to the two job insecurity factor scores across countries, given the unique relationship posited between them in the bifactor-(S-1) model. Moreover, the results revealed evidence of partial strict invariance (i.e. error variance invariance), implying invariance of the measurement errors of the translated JSI and JSS versions and a partial lack of measurement bias. In conclusion, these results suggest that the Italian, Chinese, English and Afrikaans versions of the cognitive and affective job insecurity measures are invariant, a permissible conclusion as at least partial invariance of the parameters was found (see Milfont & Fischer, 2010; Vandenberg & Lance, 2000). This conclusion is further supported by recent simulation studies (Lai et al., 2021) suggesting that having fewer than one-third noninvariant parameters does not affect the overall validity of a psychometrically sound measure.

This study extends measurement invariance research on job insecurity in several ways. Firstly, it represents (to the knowledge of the authors) the first attempt to conduct invariance analyses on the two forms of job insecurity, reflecting their distinctiveness but also their significant interrelatedness. The only other study of this nature was conducted on a unidimensional global measure of job insecurity (i.e. Vander Elst et al., 2014). Secondly, the asymmetrical rating scale employed by the JSI and JSS is based on prior research by Hanisch (1992) indicating that a ‘don’t know’ response is psychometrically closer to a negative response (i.e. reflecting greater job insecurity) than equidistant between the negative and positive response options. Because simulation studies (e.g. Bovaird & Koziol, 2012) indicate that three-point response scales may not be treated as approximately continuous, suitable estimation techniques (i.e. weighted least squares estimators) are required, and were employed in our study, to handle the resulting ordinal data. Thirdly, this study employed a bifactor-(S-1) modelling approach (Heinrich et al., 2020) to circumvent theoretical and empirical weaknesses of fully symmetrical bifactor models (e.g. Eid et al., 2017), while extending this application in terms of its theoretical contribution to the cognitive and affective job insecurity literature. Lastly, this study answers the call for measurement invariance studies across a diverse set of language and cultural groups (Bazzoli & Probst, 2023) on the JSI and JSS. It enhances confidence in the use of these measures, including in the African context, where measurement equivalence is a particularly pressing issue, given the diversity of multilingual and cultural groups to which Western-developed (etic) measures are often applied.

Limitations

Despite these contributions, some limitations must be acknowledged. Firstly, the cross-sectional nature of the data imposes some constraints on the conclusions that can be derived from it. No information on the stability of the reliability and validity of the measures over time was included. Moreover, future studies should directly investigate potential sources of cross-national non-invariance of some parameters. A future cross-national longitudinal invariance study would strengthen the practical and theoretical implications of this research and could facilitate investigations into the differential prediction of work-related outcomes (e.g. job performance) by job insecurity across cultures. Secondly, we were limited in the comparisons that could be made regarding the composition of each country’s sample because of the kinds of demographic information that could be collected in each country. For example, temporary and permanent employees may interpret job insecurity differently, based on perceived psychological contract breach (e.g. De Cuyper & De Witte, 2006). Thirdly, no information about the representativeness of the samples for the respective countries is available. The South African data, for example, only contain responses from Afrikaans and English first-language respondents, thereby omitting a rather large portion of the population. Future research should attempt to use representative samples matched on sociodemographic characteristics and type of job contract. Lastly, differences in administration mode (i.e. hard copy versus online) may have introduced administration bias, which may in turn have resulted in method bias (Van de Vijver & Poortinga, 1997).

Acknowledgements

Competing interests

The authors declare that they have no financial or personal relationships that may have inappropriately influenced them in writing this article.

Authors’ contributions

G.G.-E. conceived the project. V.G., C.B. and L.P. conducted the statistical analyses. G.G.-E., T.M.P. and V.G. wrote the original draft and L.J. and S.H. reviewed and edited the manuscript. All authors contributed data to the study.

Funding information

The US data collection was funded by an Edward R. Meyer Professorship granted to the third author.

Data availability

Derived data supporting the findings of this study are available from the corresponding author, G. G.-E., on request.

Disclaimer

The views and opinions expressed in this article are those of the authors and do not necessarily reflect the official policy or position of any affiliated agency of the authors and the publisher.

References

Akaike, H. (1973). Information theory and an extension of the maximum likelihood principle. In B.N. Petrov, & F. Caski (Eds.), Proceedings of the second international symposium on information theory (pp. 267–281). Akademiai Kiado.

Bacon, N., & Blyton, P. (2001). High involvement work systems and job insecurity in the international iron and steel industry. Canadian Journal of Administrative Sciences, 18(1), 5–16. https://doi.org/10.1111/j.1936-4490.2001.tb00239.x

Barrech, A., Baumert, J., Gündel, H., & Ladwig, K. (2018). The impact of job insecurity on long-term self-rated health – Results from the prospective population-based MONICA/ KORA study. Public Health, 18, 754–764. https://doi.org/10.1186/s12889-018-5621-4

Bazzoli, A., & Probst, T.M. (2023). Psychometric properties of the shortened and rescaled versions of the job security index and job security satisfaction scale. Applied Psychology, 72(2), 832–848. https://doi.org/10.1111/apps.12397

Behling, O., & Law, K.S. (2000). Translating questionnaires and other research instruments: Problems and solutions. SAGE Publications Inc.

Bitmiş, M.G., & Ergeneli, A. (2015). How psychological capital influences burnout: The mediating role of job insecurity. Procedia – Social and Behavioral Sciences, 207, 363–368. https://doi.org/10.1016/j.sbspro.2015.10.106

Blustein, D.L., Duffy, R., Ferreira, J.A., Cohen-Scali, V., Cinamon, R.G., & Allan, B.A. (2020). Unemployment in the time of COVID-19: A research agenda. Journal of Vocational Behavior, 119, 103436. https://doi.org/10.1016/j.jvb.2020.103436

Bollen, K. (1989). Structural equations with latent variables. Wiley.

Bonifay, W., Lane, S.P., & Reise, S.P. (2017). Three concerns with applying a bifactor model as a structure of psychopathology. Clinical Psychological Science, 5(1), 184–186. https://doi.org/10.1177/2167702616657069

Borg, I., & Elizur, D. (1992). Job insecurity: Correlates, moderators and measurement. International Journal of Manpower, 13(2), 13–26. https://doi.org/10.1108/01437729210010210

Bovaird, J.A., & Koziol, N.A. (2012). Measurement models with ordered categorical indicators. In R.H. Hoyle (Ed.), Handbook of structural equation modeling (pp. 495–512). Guilford.

Burnham, K.P., & Anderson, D.R. (2004). Multimodel inference: Understanding AIC and BIC in model selection. Sociological Methods & Research, 33(2), 261–304. https://doi.org/10.1177/0049124104268644

Burns, G.L., Geiser, C., Servera, M., Becker, S.P., & Beauchaine, T.P. (2019). Application of the bifactor S–1 model to multisource ratings of ADHD/ODD symptoms: An appropriate bifactor model for symptom ratings. Journal of Abnormal Child Psychology, 48, 1–14. https://doi.org/10.1007/s10802-019-00608-4

Byrne, B.M., Shavelson, R.J., & Muthén, B. (1989). Testing for the equivalence of factor covariance and mean structures: The issue of partial measurement invariance. Psychological Bulletin, 105(3), 456–466. https://doi.org/10.1037/0033-2909.105.3.456

Charkhabi, M. (2019). Quantitative job insecurity and well-being: Testing the mediating role of hindrance and challenge appraisals. Frontiers in Psychology, 9, Article 2776. https://doi.org/10.3389/fpsyg.2018.02776

Cheng, G.H.L., & Chan, D.K.S. (2008). Who suffers more from job insecurity? A meta-analytic review. Applied Psychology: An International Review, 57(2), 272–303. https://doi.org/10.1111/j.1464-0597.2007.00312.x

Cheung, G.W., & Rensvold, R.B. (2002). Evaluating goodness-of-fit indexes for testing measurement invariance. Structural Equation Modeling, 9(2), 233–255. https://doi.org/10.1207/S15328007SEM0902_5

Chirumbolo, A., Callea, A., & Urbini, F. (2021). The effect of job insecurity and life uncertainty on everyday consumptions and broader life projects during COVID-19 pandemic. International Journal of Environmental Research and Public Health, 18(10), 5363. https://doi.org/10.3390/ijerph18105363

De Beer, L.T., Rothmann, S., & Pienaar, J. (2016). Job insecurity, career opportunities, discrimination and turnover intention in post-apartheid South Africa: Examples of informative hypothesis testing. International Journal of Human Resource Management, 27(4), 427–439. https://doi.org/10.1080/09585192.2015.1020446

De Cuyper, N., & De Witte, H. (2006). The impact of job insecurity and contract type on attitudes, well-being and behavioural reports: A psychological contract perspective. Journal of Occupational and Organizational Psychology, 79(3), 395–409. https://doi.org/10.1348/096317905X53660

De Witte, H. (2000). Arbeitsethos en job onzekerheid: Meting en gevolgen voor welzijn, tevredenheid en inzet op het werk (Work ethic and job insecurity: Assessment and consequences for well-being, satisfaction and performance at work), In R. Bouwen, K. De Witte, H. De Witte & T. Taillieu (Eds.), Van Groep tot Gemeenschapp (From Group to Community). Leuven, Belgium: Garant.

Eid, M., Geiser, C., Koch, T., & Heene, M. (2017). Anomalous results in G-factor models: Explanations and alternatives. Psychological Methods, 22(3), 541–562. https://doi.org/10.1037/met0000083

Eurofound. (2021). COVID-19: Implications for employment and working life. Publications Office of the European Union. Retrieved from https://www.eurofound.europa.eu/publications/report/2021/covid-19-implications-for-employment-and-working-life

Flora, D.B., & Curran, P.J. (2004). An empirical evaluation of alternative methods of estimation for confirmatory factor analysis with ordinal data. Psychological Methods, 9(4), 466–491. https://doi.org/10.1037/1082-989X.9.4.466

Greenhalgh, L., & Rosenblatt, Z. (1984). Job insecurity: Toward conceptual clarity. Academy of Management, 9(3), 438–448. https://doi.org/10.2307/258284

Griep, Y., Lukic, A., Kraak, J.M., Bohle, S.A.L., Jiang, L., Vander Elst, T., & De Witte, H. (2021). The chicken or the egg: The reciprocal relationship between job insecurity and mental health complaints. Journal of Business Research, 126, 170–186. https://doi.org/10.1016/j.jbusres.2020.12.045

Hanisch, K.A. (1992). The job descriptive index revisited: Questions about the question mark. Journal of Applied Psychology, 77(3), 377–382. https://doi.org/10.1037//0021-9010.77.3.377

Heinrich, M., Geiser, C., Zagorscak, P., Burns, G.L., Bohn, J., Becker, S.P., Eid, M., Beauchaine, T.P., & Knaevelsrud, C. (2021). On the meaning of the ‘p factor’ in symmetrical bifactor models of psychopathology: Recommendations for future research from the bifactor-(S–1) perspective. Assessment, 30(3), 107319112110602. https://doi.org/10.1177/10731911211060298

Heinrich, M., Zagorscak, P., Eid, M., & Knaevelsrud, C. (2020). Giving g a meaning: An application of the bifactor-(S-1) approach to realize a more symptom-oriented modeling of the beck depression inventory–II. Assessment, 27(7), 1429–1447. https://doi.org/10.1177/1073191118803738

Helbling, L., & Kanj, S. (2018). Job insecurity: Differential effects of subjective and objective measures on life satisfaction trajectories of workers aged 27–30 in Germany. Social Indicators Research, 137, 1145–1162. https://doi.org/10.1007/s11205-017-1635-z

Hellgren, J., Sverke, M., & Isaksson, K. (1999). A two-dimensional approach to job insecurity: Consequences for employee attitudes and well-being. European Journal of Work and Organizational Psychology, 8(2), 179–195. https://doi.org/10.1080/135943299398311

Hirsch, P.M., & De Soucey, M. (2006). Organizational restructuring and its consequences: Rhetorical and structural. Annual Review of Sociology, 32, 171–189. https://doi.org/10.1146/annurev.soc.32.061604.123146

Hsieh, H., & Huang, J. (2017). Core self-evaluations and job and life satisfaction: The mediating and moderated mediating role of job insecurity. The Journal of Psychology, 151(3), 282–298. https://doi.org/10.1080/00223980.2016.1270888

Hu, L., & Bentler, P.M. (1999a). Covariance structure analysis: Conventional criteria versus new alternatives. Structural Equation Modeling, 6(1), 1–55. https://doi.org/10.1080/10705519909540118

Hu, L.T., & Bentler, P.M. (1999b). Cutoff criteria for fit indexes in covariance structure analysis: Conventional criteria versus new alternatives. Structural Equation Modeling: A Multidisciplinary Journal, 6(1), 1–55. https://doi.org/10.1080/10705519909540118

Huang, G.H., Lee, C., Ashford, S., Chen, Z., & Ren, X. (2010). Affective job insecurity. A mediator of cognitive job insecurity and employee outcomes relationships. International Studies of Management and Organizations, 40(1), 20–39. https://doi.org/10.2753/IMO0020-8825400102

International Labour Organization (ILO). (2013). Global employment trends for youth 2013. A generation at risk. International Labour Organization.

International Labour Organization (ILO). (2018). Women and men in the informal economy: A statistical picture. International Labour Organization.

International Labour Organization (ILO). (2021a). ILO monitor: COVID-19 and the world of work (7th edn.). Updated estimates and analysis. ILO Press. Retrieved from https://www.ilo.org/wcmsp5/groups/public/@dgreports/@dcomm/documents/briefingnote/wcms_767028.pdf

International Labour Organization (ILO). (2021b). ILO monitor: COVID-19 and the world of work (8th edn.). Updated estimates and analysis. ILO Press. Retrieved from https://www.ilo.org/global/topics/coronavirus/impacts-andresponses/WCMS_824092/lang--en/index.htm

Jiang, L., & Lavaysse, L.M. (2018). Cognitive and affective job insecurity: A meta-analysis and a primary study. Journal of Management, 44(6), 2307–2342. https://doi.org/10.1177/0149206318773853

Jiang, L., & Probst, T.M. (2014). Organizational communication: A buffer in times of job insecurity? Economic and Industrial Democracy, 35(3), 557–579. https://doi.org/10.1177/0143831X13489356

Jiang, L., Hu, S., Näswall, K., López Bohle, S., & Wang, H-J. (2020). Why and when cognitive job insecurity relates to affective job insecurity? A three-study exploration of negative rumination and the tendency to negative gossip. European Journal of Work and Organizational Psychology, 29(5), 678–692. https://doi.org/10.1080/1359432X.2020.1758669

Jiang, L., Xu, X., & Wang, H.-J. (2021). A resources–demands approach to sources of job insecurity: A multilevel meta-analytic investigation. Journal of Occupational Health Psychology, 26(2), 108–126. https://doi.org/10.1037/ocp0000267

Kalleberg, A.L. (2013). Book review symposium: Response to reviews of Arne L Kalleberg, Good jobs, bad jobs: The rise of polarized and precarious employment systems in the United States, 1970s to 2000s. Work, Employment & Society, 27(5), 896–898.

Kalleberg, A.L. (2018). Precarious lives: Job insecurity and well-being in rich democracies. Polity Press.

Kass, R.E., & Raftery, A.E. (1995). Bayes factors. Journal of the American Statistical Association, 90(430), 773–795. https://doi.org/10.1080/01621459.1995.10476572

Kline, R.B. (2015). Principles and practice of structural equation modeling (4th edn.). Guilford Press.

König, C., Probst, T., Staffen, S., & Graso, M. (2011). A Swiss–US comparison of the correlates of job insecurity. Applied Psychology: An International Review, 60, 141–159. https://doi.org/10.1111/j.1464-0597.2010.00430.x

Lai, M.H.C., Liu, Y., & Tse, W.W.-Y. (2021). Adjusting for partial invariance in latent parameter estimation: Comparing forward specification search and approximate invariance methods. Behavior Research Methods, 54(1), 414–434. https://doi.org/10.3758/s13428-021-01560-2

Låstad, L., Näswall, K., Berntson, E., Seddigh, A., & Sverke, M. (2018). The roles of shared perceptions of individual job insecurity and job insecurity climate for work- and health-related outcomes: A multilevel approach. Economic and Industrial Democracy, 39(3), 422–438. https://doi.org/10.1177/0143831X16637129

László, K.D., Pikhart, H., Kopp, M.S., Bobak, M., Pajak, A., Malyutina, S., Salavecz, G., & Marmot, M. (2010). Job insecurity and health: A study of 16 European countries. Social Science & Medicine, 70, 867–874. https://doi.org/10.1016/j.socscimed.2009.11.022

Lawrence, E.R., & Kacmar, K.M. (2016). Exploring the impact of job insecurity on employees’ unethical behavior. Business Ethics Quarterly, 27(1), 39–70. https://doi.org/10.1017/beq.2016.58

Lazarus, R.S. (1991). Emotion and adaptation. Oxford University Press.

Lazarus, R.S., & Folkman, S. (1984). Stress, appraisal, and coping. Springer.

Lee, C., Bobko, P., Ashford, S., Chen, Z., & Ren, X. (2008). Cross-cultural development of an abridged job insecurity measure. Journal of Organizational Behavior, 29(3), 373–390. https://doi.org/10.1002/job.513

Lin, W., Shao, Y., Li, G., Guo, Y., & Zhan, X. (2021). The psychological implications of COVID-19 on employee job insecurity and its consequences: The mitigating role of organization adaptive practices. Journal of Applied Psychology, 106(3), 317–329. https://doi.org/10.1037/apl0000896

MacDonald, R., & Giazitzoglu, A. (2019). Youth, enterprise and precarity: Or, what is, and what is wrong with, the ‘gig economy’? Journal of Sociology, 55(4), 724–740. https://doi.org/10.1177/1440783319837604

Marsh, H.W., Lüdtke, O., Muthén, B., Asparouhov, T., Morin, A.J.S., Trautwein, U., & Nagengast, B. (2010). A new look at the big five factor structure through exploratory structural equation modeling. Psychological Assessment, 22(3), 471–491. https://doi.org/10.1037/a0019227

Meade, A.W., Johnson, E.C., & Braddy, P.W. (2008). Power and sensitivity of alternative fit indices in tests of measurement invariance. Journal of Applied Psychology, 93(3), 568–592. https://doi.org/10.1037/0021-9010.93.3.568

Milfont, T.L., & Fischer, R. (2010). Testing measurement invariance across groups: Applications in cross-cultural research. International Journal of Psychological Research, 3(1), 111–121. https://doi.org/10.21500/20112084.857

Millsap, R.E., & Yun-Tein, J. (2004). Assessing factorial invariance in ordered-categorical measures. Multivariate Behavioral Research, 39(3), 479–515. https://doi.org/10.1207/S15327906MBR3903_4

Muthén, L.K., & Muthén, B.O. (1998–2018). Mplus user’s guide (8th edn.). Muthén & Muthén.

Orpen, C. (1993). Correlation between job insecurity and psychological well-being among white and black employees in South Africa. Perceptual and Motor Skills, 76(3), 885–886. https://doi.org/10.2466/pms.1993.76.3.885

Oxford Economics. (2014). Workforce 2020. Retrieved from https://ler.illinois.edu/wpcontent/uploads/2015/01/LERSlides-Bringing-the-Future-into-Focus.pdf

Peer, E., Vosgerau, J., & Acquisti, A. (2014). Reputation as a sufficient condition for data quality on Amazon Mechanical Turk. Behavior Research Methods, 46, 1023–1031. https://doi.org/10.3758/s13428-013-0434-y

Pienaar, J., De Witte, H., Hellgren, J., & Sverke, M. (2013). The cognitive/affective distinction of job insecurity: Validation and differential relations. Southern African Business Review, 17(2), 1–22.

Probst, T.M. (2002). The impact of job insecurity on employee work attitudes, job adaptation, and organizational withdrawal behaviors. In J.M. Brett & F. Drasgow (Eds.) The psychology of work: Theoretically based empirical research (pp. 141–168). Lawrence Erlbaum Associates.

Probst, T.M. (2003). Development and validation of the job security index and the job security satisfaction scale: A classical test theory and IRT approach. Journal of Occupational and Organizational Psychology, 76(4), 451–467. https://doi.org/10.1348/096317903322591587

Probst, T.M., & Jiang, L. (2017). European flexicurity policies: Multilevel effects on employee psychosocial reactions to job insecurity. Safety Science, 100(Part A), 83–90. https://doi.org/10.1016/j.ssci.2017.03.010

Probst, T.M., Bazzoli, A., Jenkins, M.R., Jiang, L., & Bohle, S.L. (2021). Coping with job insecurity: Employees with grit create I-deals. Journal of Occupational Health Psychology, 26(5), 437–447. https://doi.org/10.1037/ocp0000220

Probst, T.M., Jiang, L., & Benson, W.L. (2018). Job insecurity and anticipated job loss: A primer and exploration of possible interventions. In U. Klehe & E. van Hooft (Eds.), The Oxford handbook of job loss and job search (pp. 31–53). Oxford University Press.

Probst, T.M., Petitta, L., Barbaranelli, C., & Lavaysse, L.M. (2018). Moderating effects of contingent work on the relationship between job insecurity and employee safety. Safety Science, 106, 285–293. https://doi.org/10.1016/j.ssci.2016.08.008

Probst, T.M., Selenko, E., & Shoss, M. (2023). Is job insecurity still relevant? Unpacking the meaning of ‘job’ and ‘insecurity’ in today’s economy. In N. De Cuyper, E. Selenko, M.C. Euwema, & W. Schaufeli (Eds.), Job insecurity, precarious employment and burnout: Facts and fables in work psychology research (pp. 68–86). Edward Elgar.

Reise, S.P. (2012). The rediscovery of bifactor measurement models. Multivariate Behavioral Research, 47(5), 667–696. https://doi.org/10.1080/00273171.2012.715555

Reisel, W.D., & Banai, M. (2002). Comparison of a multidimensional and a global measure of job insecurity: Predicting job attitudes and work behaviors. Psychological Reports, 90(3), 913–922. https://doi.org/10.2466/pr0.2002.90.3.913

Riordan, C.M., & Vandenberg, R.J. (1994). A central question in cross-cultural research: Do employees of different cultures interpret work-related measures in an equivalent manner? Journal of Management, 20(3), 643–671. https://doi.org/10.1016/0149-2063(94)90007-8

Roll, L.C., Siu, O., Li, S., & De Witte, H. (2015). Job insecurity: Cross-cultural comparison between Germany and China. Journal of Organizational Effectiveness: People and Performance, 2(1), 36–54. https://doi.org/10.1108/JOEPP-01-2015-0002

Sass, D.A., Schmitt, T.A., & Marsh, H.W. (2014). Evaluating model fit with ordered categorical data within a measurement invariance framework: A comparison of estimators. Structural Equation Modeling: A Multidisciplinary Journal, 21(2), 167–180. https://doi.org/10.1080/10705511.2014.882658

Schwarz, G. (1978). Estimating the dimension of a model. The Annals of Statistics, 6(2), 461–464. https://doi.org/10.1214/aos/1176344136

Sclove, S.L. (1987). Application of model-selection criteria to some problems in multivariate analysis. Psychometrika, 52(3), 333–343. https://doi.org/10.1007/BF02294360

Sender, A., Arnold, A., & Staffelbach, B. (2017). Job insecurity as a threatened resource: Reactions to job insecurity in culturally distinct regions. The International Journal of Human Resource Management, 28(17), 2403–2429. https://doi.org/10.1080/09585192.2015.1137615

Shoss, M., Van Hootegem, A., Selenko, E., & De Witte, H. (2023). The job insecurity of others: On the role of perceived national job insecurity during the COVID-19 pandemic. Economic and Industrial Democracy, 44(2), 385–409. https://doi.org/10.1177/0143831X221076176

Smit, N.W.H., De Beer, L.T., & Pienaar, J. (2016). Work stressors, job insecurity, union support, job satisfaction and safety outcomes within the iron ore mining environment. SA Journal of Human Resource Management/SA Tydskrif vir Menslikehulpbronbestuur, 14(1), a719. https://doi.org/10.4102/sajhrm.v14i1.719

Smith, C.A., & Pope, L.K. (1992). Appraisal and emotion: The interactional contributions of dispositional and situational factors. In M.S. Clark (Ed.), Emotion and social behavior (pp. 32–62). Sage Publications, Inc.

Sverke, M., Hellgren, J., & Näswall, K. (2002). No security: A meta-analysis and review of job insecurity and its consequences. Journal of Occupational Health Psychology, 7(3), 242–264. https://doi.org/10.1037/1076-8998.7.3.242

Sverke, M., Låstad, L., Hellgren, J., Richter, A., & Näswall, K. (2019). A meta-analysis of job insecurity and employee performance: Testing temporal aspects, rating source, welfare regime, and union density as moderators. International Journal of Environmental Research and Public Health, 16(14), 2536. https://doi.org/10.3390/ijerph16142536

Svetina, D., & Rutkowski, L. (2017). Multidimensional measurement invariance in an international context: Fit measure performance with many groups. Journal of Cross-Cultural Psychology, 48(7), 991–1008. https://doi.org/10.1177/0022022117717028

Svetina, D., Rutkowski, L., & Rutkowski, D. (2020). Multiple-group invariance with categorical outcomes using updated guidelines: An illustration using Mplus and the lavaan/semTools packages. Structural Equation Modeling: A Multidisciplinary Journal, 27(1), 111–130. https://doi.org/10.1080/10705511.2019.1602776

Van de Vijver, F., & Leung, K. (1997). Methods and data analysis for cross-cultural research. In J.W. Berry, Y.H. Poortinga & J. Pandey (Eds.), Handbook of cross-cultural psychology: Theory and method (2nd edn., Vol. 1, pp. 257–300). Allyn & Bacon.

Van de Vijver, F., & Poortinga, Y.H. (1997). Towards an integrated analysis of bias in cross-cultural assessment. European Journal of Psychological Assessment, 13, 21–39.

Van de Vijver, F., & Tanzer, N.K. (1997). Bias and equivalence in cross-cultural assessment: An overview. European Review of Applied Psychology, 47, 263–279.

Van Vuuren, T., Klandermans, B., Jacobson, D., & Hartley, J. (1991). Employees’ reactions to job insecurity. In J. Hartley, D. Jacobson, B. Klandermans & T. Van Vuuren (Eds.), Job insecurity: Coping with jobs at risk (pp. 79–103). Sage.

Vandenberg, R.J., & Lance, C.E. (2000). A review and synthesis of the measurement invariance literature: Suggestions, practices, and recommendations for organizational research. Organizational Research Methods, 3(1), 4–70. https://doi.org/10.1177/109442810031002

Vander Elst, T., De Witte, H., & De Cuyper, N. (2014). The job insecurity scale: A psychometric evaluation across five European countries. European Journal of Work and Organizational Psychology, 23, 364–380. https://doi.org/10.1080/1359432X.2012.745989

Wang, X., Zheng, Q., Huang, Z., & Chen, H. (2018). Effect of construal level and job insecurity on responses to perceived external employability. Social Behavior and Personality, 46(8), 1359–1372. https://doi.org/10.2224/sbp.4892

Watson, B., & Osberg, L. (2018). Job insecurity and mental health in Canada. Applied Economics, 50(38), 4137–4152. https://doi.org/10.1080/00036846.2018.1441516

Weiss, H.M., & Cropanzano, R. (1996). Affective Events Theory: A theoretical discussion of the structure, causes and consequences of affective experiences at work. In B.M. Staw & L.L. Cummings (Eds.), Research in organizational behavior: An annual series of analytical essays and critical reviews (Vol. 18, pp. 1–74). Elsevier Science/JAI Press.

Yamaoka, K., Nakagawa, T., & Uno, T. (1978). Application of Akaike’s information criterion (AIC) in the evaluation of linear pharmacokinetic equations. Journal of Pharmacokinetics and Biopharmaceutics, 6(2), 165–175. https://doi.org/10.1007/BF01117450
