Abstract
This study aimed to confirm the psychometric validity of the French version of the Brief Resilience Scale (BRS-F), and to evaluate its psychometric properties for the first time using item response theory, specifically the partial credit model (PCM). It also aimed to evaluate item invariance by exploring possible differential item functioning (DIF). The 3708 participants were recruited during a cross-sectional observational study among university students in the French region of Lorraine. The results of the classical test theory method demonstrated an overall good fit of the scale to the data, with SB-χ2 = 80.84, df = 9, χ2/df = 9.0, p < 0.001, root mean square error of approximation (RMSEA) (90% CI) = 0.046 [0.039; 0.058], standardized root mean square residual (SRMR) = 0.018, and comparative fit index (CFI) = 0.991, as well as good internal consistency (Cronbach’s α = 0.86). The PCM also yielded a good fit, with good internal consistency (PSR = 0.88) and overall good infit indices. A slight irregularity in response distribution was observed with the “Neutral” category, but it did not threaten the structural integrity of the scale. No statistically significant DIF was observed when tested for gender (male vs. female) or academic level (undergraduate vs. graduate). Overall, the BRS-F was a good fit for the population of university students in France in a cross-sectional design.
Introduction
In March 2020, the World Health Organization (WHO) declared the outbreak of SARS-CoV-2 a pandemic1. Governments around the world undertook measures to slow the spread of the virus, many among them choosing to put their countries on lockdown. This was also the case for France, which announced a two-month-long lockdown period starting on March 17th 20202. This public health measure meant that schools, universities, and most workplaces were closed practically overnight, with little to no time to prepare for a transition into an online setting. In the particular case of universities, this occurred mid-semester, leaving students and faculty members to fend for themselves in an increasingly complicated social and sanitary context.
Prior to the pandemic, university students were already described as a vulnerable population3,4. Whether in terms of mental health or finances, university life is often marked by lingering dependence on the family home combined with a newly acquired sense of autonomy5. The WHO reported in 2016 that the prevalence of any mental health disorder among the student population amounted to 20.3%, with anxiety disorders being the most prevalent at 11.7%6. These difficulties were only exacerbated by the stressful and anxiety-inducing context of the COVID-19 pandemic. In France, studies have reported these numbers doubling, with approximately 25% of students reporting moderate to severe anxiety symptoms at the end of the first lockdown7.
To prevent such a high prevalence of mental health disorders in future pandemics or other crises, psychologists have looked to resilience as a potential resource8. Psychological resilience is commonly defined as the ability to bounce back after a stressful event or overcome adversity9,10,11, such as the COVID-19 pandemic and its societal consequences. Scarce information had been collected about the resilience of university students in France prior to the pandemic, but one study conducted during COVID-19 among medical students in Paris found that their levels of resilience were normal on average, with a mean score of 3.14 (SD = 0.837) on the Brief Resilience Scale (BRS)12. The sample consisted of 2nd- to 4th-year medical students who had volunteered at the Bicêtre Hospital Center during the epidemic peak. The authors of the study suggest that psychological distress may have been mitigated in part by factors such as pedagogical support, supervision, integration into a team, fighting isolation, responding to a sense of usefulness, maintaining links to the faculty of medicine, and preserving at least a minimum quality of life. According to them, these elements promote resilience and help reduce the degree of distress experienced by medical students12.
Given its utility in assessing resilience, the BRS has been widely studied and adapted in diverse contexts. It was first developed in the United States and validated in four samples: two samples of undergraduate students, one sample of cardiac rehabilitation patients, and one sample of women with fibromyalgia and their healthy controls13. The BRS has since been translated into French and validated twice: first in Switzerland in 2019, in a population of midwives working at the University Hospitals in the French-speaking part of the country14, and then in Belgium in 2021, among survivors of the 2016 Brussels terrorist attacks15. Although the two validation studies did not examine the exact same translation of the original BRS, both yielded satisfactory results in terms of the unifactorial structure of the French BRS, as well as negative correlations between resilience measurements and mental health parameters.
However, there are a number of arguments to be made in favor of a new validation study in this particular context. Firstly, the Jacobs and Horsch study14 was conducted in a sample of midwives, and the Leys et al. study15 in a sample of terrorist attack survivors. Our study was conducted in a sample of university students in a COVID-19 pandemic context, and while these are all essentially subsets of the adult general population, it is reasonable to assume that university students would perceive certain items differently than, for example, medical workers would. Secondly, the parameters studied in both validation studies do not go further than a confirmatory factor analysis (CFA) and correlation analyses in the case of Leys et al., and principal component analysis (PCA) and Cronbach’s alpha in the Jacobs and Horsch study. While these parameters used to be studied systematically and were considered sufficient in the past, recent developments in psychometric validation techniques have moved well beyond Classical Test Theory (CTT) methods. Indeed, it has become increasingly common to use Item Response Theory (IRT) in validation studies in order to gain a more complete understanding of scale validity, notably through item-by-item analyses and by exploring the fit of response modes in Likert-type scales16. Whereas CTT mainly focuses on factor structure, IRT assumes that an individual’s response to an item is a direct product of two parameters: their trait level and the difficulty of the item17, thus providing a detailed view of each item’s performance.
Furthermore, the invariance of items on a scale is another parameter that we are nowadays able to study, thanks to Differential Item Functioning (DIF) analyses, which can also be conducted within the Rasch family of models18. DIF analysis examines whether items function similarly across groups, ensuring fairness in responses. DIF is essentially a violation of the item invariance condition, whereby two individuals with the same level of the latent trait respond differently to an item19. This difference in responses is usually due to a difference in the perception of a certain item, often because the individuals belong to two different groups (e.g. male/female) and thus do not have similar experiences of the phenomenon we seek to measure.
Considering new methodological possibilities and the lack of studies conducting IRT analyses on the elements of the BRS scale, this study aimed to validate the French version of the BRS in a university student population from Lorraine using both CTT and IRT. Additionally, it sought to evaluate item invariance through DIF analyses.
Methods
Study design and population
A cross-sectional observational study, "Feelings and psychological impact of the COVID-19 epidemic among Students in the Grand Est area (PIMS-CoV19)", was conducted via an online questionnaire at the end of the first French lockdown, from May 7th to May 17th 20207.
University students were invited to participate voluntarily through targeted outreach to the University of Lorraine and Sciences Po College in Nancy, Lorraine, Grand Est Region, France, in order to constitute the study sample. The Grand Est Region was particularly affected by the COVID-19 pandemic, with 19.6 cases per 100,000 inhabitants during the 7-day rolling period of the survey20.
Detailed information regarding the purpose of the study was sent to all students, as well as a form for providing informed consent to participate in the study. The survey was completed anonymously to ensure the confidentiality and reliability of the data. All procedures were conducted following the principles of the Declaration of Helsinki. When we launched the study, our target was the entire student population in Lorraine, i.e. more than 50,000 students. It was estimated that 5–10% of the students would respond to the survey, i.e. somewhere between 2,500 and 5,000 students, which should have ensured ample statistical power.
Data collection
We collected the data via an online survey21. Our team contacted the deans’ office of the University of Lorraine and the Sciences Po College in Nancy, which made sure to forward the survey link to the entire student population via their university e-mail addresses. The questions included in the survey concerned the sociodemographic characteristics of students, their living and learning conditions during lockdown, the threat to health posed by COVID-19 in their immediate environment, the impact of lockdown and related measures on their living and learning conditions, as well as self-administered questionnaires considering their health status measures7.
Measures
Resilience
The BRS was used to assess resilience levels13. Resilience is defined as the ability to bounce back or recover from stress. Subjects are asked to what extent they agree with the six items on the scale, expressing their (dis)agreement on a 5-point Likert scale ranging from 1 = strongly disagree, 2 = disagree, 3 = neutral, 4 = agree, to 5 = strongly agree. The score is obtained by reverse-coding the negatively worded items (2, 4, and 6), summing the responses to all six items, and dividing by six. Smith et al. (2013) proposed an interpretation framework with validated cutoff points at 3 and 4.3: scores below 3 indicate low resilience levels, scores between 3 and 4.3 (inclusive) indicate normal resilience levels, and scores above 4.3 indicate high resilience levels22. The original version of the BRS presented good internal consistency (Cronbach’s α = 0.80–0.91) and good test-retest reliability, with intra-class correlations (ICC) of 0.62 and 0.6913.
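Under these scoring rules, the calculation can be sketched as follows. This is a minimal illustration assuming the standard BRS coding, in which items 2, 4, and 6 are negatively worded and reverse-scored; the band labels follow the Smith et al. (2013) interpretation framework.

```python
# Minimal sketch of BRS scoring; reverse-coded items and band labels
# follow Smith et al. (2008, 2013), not estimates from this study.

def brs_score(responses):
    """Mean BRS score from six 1-5 Likert responses, in item order 1-6."""
    if len(responses) != 6 or not all(1 <= r <= 5 for r in responses):
        raise ValueError("BRS requires six responses coded 1-5")
    reverse = {1, 3, 5}  # zero-based positions of items 2, 4 and 6
    recoded = [6 - r if i in reverse else r for i, r in enumerate(responses)]
    return sum(recoded) / 6

def brs_category(score):
    """Interpretation bands with cutoffs at 3 and 4.3."""
    if score < 3.0:
        return "low"
    if score <= 4.3:
        return "normal"
    return "high"

# A respondent who agrees with the positive items and disagrees with the
# negative ones obtains the maximum score.
print(brs_category(brs_score([5, 1, 5, 1, 5, 1])))  # -> high
```

Note that without the reverse-coding step, agreement with the negatively worded items would inflate the apparent resilience score.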
We used the Jacobs and Horsch version of the French BRS, as our study was conducted in 2020, prior to the publication of the Leys et al. translation in 2021. The BRS was translated into French using forward-backward translation and cultural adaptation14. A native French-speaking mental health expert conducted the forward translation, and an independent English-speaking translator performed the back translation. Discrepancies were reviewed by a bilingual author, and the translation was piloted with three midwives, requiring no further adjustments. The French version of the BRS (BRS-F) was validated in a sample of midwives; a unifactorial structure was confirmed, and the BRS-F demonstrated good reliability (Cronbach’s α = 0.84) and correlations with mental health symptoms14.
Sociodemographic characteristics, living and learning conditions
Collected data included students’ age, gender, study discipline, study level (undergraduate, graduate, other), living and learning conditions during lockdown, as well as information on any family members or close contacts suffering from COVID-19.
Descriptive analyses
Continuous variables were described by the mean and standard deviation, while categorical variables were summarized by percentages. Questionnaire scores were calculated according to their respective scoring indications. The descriptive analyses were performed using SAS 9.4 software (SAS Inst., Cary, NC, USA).
Psychometric validation
A combination of CTT and IRT was applied to examine the psychometric properties of the BRS-F scale. The statistical analyses were performed using the Validscale23 and PCM modules24 developed in Stata and accessed via the PRO-online website25 for the purpose of this study.
CTT
The structure of the BRS-F questionnaire was studied using a CFA, performed with covariance-based structural equation modeling and maximum likelihood estimation with Satorra-Bentler correction. The CFA tested the adequacy of the predefined unidimensional factor structure model, evaluated using several indices: the root mean square error of approximation (RMSEA), standardized root mean square residual (SRMR), and comparative fit index (CFI). The RMSEA is a parsimony-adjusted index, with values closer to 0 representing a better fit. The SRMR is the square root of the average squared difference between the residuals of the sample covariance matrix and those implied by the hypothesized model. The CFI compares the fit of a target model to the fit of an independence model. The models were considered a good fit if RMSEA < 0.08, SRMR ≤ 0.07, and CFI > 0.926. Internal consistency reliability was evaluated by Cronbach’s alpha coefficient (α), considered good from 0.8 to 0.89, and excellent if > 0.927.
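For readers who wish to verify the reliability computation, Cronbach's alpha can be reproduced directly from a persons-by-items score matrix. The sketch below uses simulated Likert data purely for illustration; no values from this study are involved.

```python
import numpy as np

def cronbach_alpha(items):
    """Cronbach's alpha for an (n_persons, n_items) matrix of item scores."""
    items = np.asarray(items, dtype=float)
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1)       # variance of each item
    total_var = items.sum(axis=1).var(ddof=1)   # variance of the sum score
    return (k / (k - 1)) * (1.0 - item_vars.sum() / total_var)

# Simulated data for illustration only: six items driven by one latent
# trait, discretized to the 1-5 Likert range.
rng = np.random.default_rng(0)
trait = rng.normal(size=500)
noise = rng.normal(scale=0.8, size=(500, 6))
data = np.clip(np.round(3 + trait[:, None] + noise), 1, 5)
alpha = cronbach_alpha(data)  # high, since all items share the trait
```

The formula makes the intuition explicit: alpha approaches 1 as the items' shared variance dominates their individual variances.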
IRT
IRT is a broad framework used to model the relationship between individuals’ latent traits and their test responses, considering both the individual’s trait level and item characteristics28. IRT includes various models that differ in complexity according to the number of parameters used to describe items29. The Rasch model is a simple, one-parameter IRT model focusing only on item difficulty30. More complex IRT models extend the Rasch model to include additional parameters such as item discrimination and guessing28. The Partial Credit Model (PCM) is a Rasch family model used specifically for polytomous items – items with more than two response categories (e.g. Likert scales) – extending the Rasch framework to handle ordered categories. The PCM models the probability of each possible response based on the difficulty of the item and the thresholds between the different response categories30.
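As an illustration of the PCM response mechanism described above, the category probabilities for a single item can be computed directly from a person location θ and the item's step difficulties. The thresholds below are made-up values for a 5-category Likert item, not estimates from this study.

```python
import numpy as np

def pcm_probs(theta, thresholds):
    """PCM category probabilities for one polytomous item.

    theta: person location on the latent trait; thresholds: ordered step
    difficulties delta_1..delta_m for an item with m + 1 categories.
    Returns P(X = 0), ..., P(X = m).
    """
    steps = np.concatenate(([0.0], theta - np.asarray(thresholds, float)))
    logits = np.cumsum(steps)             # cumulative (theta - delta_j) sums
    num = np.exp(logits - logits.max())   # shift for numerical stability
    return num / num.sum()

# Hypothetical 5-category item with four symmetric thresholds:
p = pcm_probs(theta=0.0, thresholds=[-1.5, -0.5, 0.5, 1.5])
```

With symmetric thresholds and θ = 0, the middle category is the most probable, and increasing θ shifts probability mass toward higher response categories, which is the mechanism that category characteristic curves visualize.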
The PCM was used to confirm the unidimensionality of the scale. The Person Separation Reliability (PSR) was calculated as an indicator of internal consistency reliability. The PSR was considered acceptable if > 0.7, and individual item fit residual statistics were considered acceptable when values ranged from 0.90 to 1.10 for outfit and from 0.97 to 1.03 for infit statistics31.
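The PSR can be read as the proportion of observed variance in the person estimates that is not attributable to measurement error. A minimal sketch, with made-up person locations and standard errors rather than values from this study:

```python
import numpy as np

def person_separation_reliability(theta_hat, se):
    """PSR = (observed variance - mean error variance) / observed variance."""
    theta_hat = np.asarray(theta_hat, dtype=float)
    error_var = np.mean(np.asarray(se, dtype=float) ** 2)
    observed_var = theta_hat.var(ddof=1)
    return (observed_var - error_var) / observed_var

# Hypothetical person estimates with uniform standard errors of 0.5:
psr = person_separation_reliability([-1.0, 0.0, 1.0], [0.5, 0.5, 0.5])
print(round(psr, 2))  # -> 0.75
```

Like Cronbach's alpha, the PSR approaches 1 when the spread of person estimates is large relative to their measurement error.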
Item invariance evaluation
An analysis of potential DIF was conducted using the lordif package in R32, both on gender (male/female) and level of academic studies (undergraduate/graduate). The lordif package combines IRT and logistic regression to detect DIF. DIF detection was assessed based on the likelihood ratio χ2 test, whereas the magnitude of DIF was evaluated using the pseudo-R2 statistic, with values < 0.02 considered negligible32.
Results
Sociodemographic, living and learning characteristics
The sociodemographic and academic characteristics, as well as living conditions, are described in Table 1. The mean (SD) age of the sample was 21.7 (4.0) years, and 70.7% of the participants declared being female. A majority of students reported living with their parents during the lockdown (62.9%), while only 13.6% lived alone. During the same period, 59.1% reported living in an urban area and 40.8% in a rural environment. Regarding the level of academic studies, 59.3% were enrolled in an undergraduate program and 39.6% in a graduate program. Further characteristics of this sample, including resilience predictors, have been described in a separate analysis33.
Resilience levels
Levels of reported resilience are described in Table 2. The overall mean (SD) resilience score was 3.2 (0.9), indicating normal resilience levels on average in this study sample, with 50.6% of students reporting normal resilience levels and 37.3% reporting low resilience levels. Table 3 presents the response rates for each item, according to the response category.
Dimensionality and other psychometric properties of BRS-F
When testing the unidimensionality of the BRS-F scale, a CFA showed acceptable standardized factor loadings for all items (> 0.6), with the highest factor loading observed for item #5 (0.795) and the lowest standardized loading observed for item #1 (0.622). The goodness-of-fit indices were above the threshold of acceptability, with Satorra-Bentler-scaled-χ2 = 80.84, df = 9, χ2/df = 9.0, p < 0.001, and RMSEA (90% CI) = 0.046 [0.039; 0.058], SRMR = 0.018, and CFI = 0.991. The internal consistency was good, with the Cronbach’s alpha at α = 0.86.
The internal consistency reliability was reassessed via the PSR, which was acceptable at 0.88. The individual item fit statistics were slightly weaker for items #1, #4 and #5, with standardized infit statistics of 1.133, 1.080, and 0.873, respectively.
As shown in Fig. 1, the information curve and the curve representing the density of the latent trait were almost perfectly superimposed, indicating that our questionnaire was well adapted to measuring the latent trait, in this case resilience. The positions of the response-mode thresholds indicated a similar level of difficulty among all items on the scale. Furthermore, the Wright map demonstrated overlaps between the thresholds of response modes #2 and #3, indicating a potential problem with response mode #3 (“Neutral”), particularly in items #1 through #4.
Upon further inspection of the “Neutral” response mode, we observed that it was frequently selected, with 16.29–24.57% of participants choosing this category across the six items. Table 3 provides a detailed breakdown of response rates for each item, highlighting the frequent use of the “Neutral” response mode. While the category characteristic curves (CCCs) showed that the “Neutral” probability curve was lower than the other response curves, its position remained equidistant from “Disagree” and “Agree” (Fig. 2).
DIF of the BRS-F items
DIF analyses revealed minor differences across gender (male vs. female) in all six BRS-F items, but these were not statistically significant. Furthermore, when DIF was tested according to the level of studies (undergraduate vs. graduate), item #2 (“I have a hard time making it through stressful events.”) was flagged for DIF, but, as with gender, the difference was not statistically significant. In summary, no statistically significant DIF was observed across gender or academic level, supporting the scale’s fairness across these subgroups.
Discussion
This study aimed to confirm the psychometric validity of the French version of the BRS scale in the university student population in Lorraine, using both CTT and IRT – the latter being a novel approach for this scale. The BRS-F measures resilience as the self-perceived ability to bounce back after a stressful event13. We showed that the BRS-F has a sound unifactorial structure, with overall good goodness-of-fit indices and good internal consistency. The standardized infit statistics were slightly above (for items #1 and #4) or below (for item #5) the suggested reference values, which encouraged us to pursue the secondary aim of this paper: studying the item invariance of the scale by testing for DIF. These analyses did not detect any statistically significant DIF in this sample, either by gender or by academic level.
The French version of the BRS presented a sound unifactorial structure, similarly to previous validations13,14,34,35. The goodness-of-fit indices suggested an overall good fit of the scale to the data. Compared to previous validations of this scale, the BRS-F showed a similar level of internal consistency, at 0.88 for the PSR and 0.86 for Cronbach’s α, versus α = 0.80–0.91 in Smith et al.13, α = 0.84 in Jacobs and Horsch14, as well as α = 0.8434 and α = 0.8535 in Brazilian and German samples, respectively. While previous studies validated the BRS in diverse cultural contexts, such as the Brazilian and German populations, our study is the first to apply IRT to assess item and response modality functioning, providing deeper insight into the scale’s performance. While pioneering this type of analysis for the BRS means we have no literature to compare our results to, our findings confirm the overall conclusions obtained by the CTT method in previous studies and attest to the good fit of the BRS scale to the French university student population.
The analysis of response modes showed slight irregularities concerning the “Neutral” response mode. Upon further inspection of this modality, we observed that it was frequently used, even though in this population it did not seem to provide much information on the trait level. The relatively frequent use of this modality could be due to several factors. Firstly, as a “middle response option”, it allows participants to express themselves even if they do not relate to the other response modes of the scale. In a sample of relatively young participants, it is conceivable that a certain portion of them had not yet encountered a particularly stressful event that they could recall when replying to some of the questions. This could also mean that they may not yet have had the opportunity to hone and develop their resilience as a capacity, which in turn could explain the relatively high proportion of participants with low resilience levels (37.3%). On the other hand, the “Neutral” response does not provide researchers with ample information on the levels of the measured construct. However, the decision to keep this modality was based on three considerations: (1) the modality was not dysfunctional in a way that would justify regrouping it with another response modality, nor did it pose a problem to the structure of the questionnaire; (2) the frequency with which this modality was chosen could not be neglected, suggesting that participants found it correctly reflected their perception; (3) while it did not provide much information in this population, we cannot exclude the possibility that it would be more informative in another population. Thus, removing this modality could potentially be detrimental to the structure of the scale. In conclusion, while the “Neutral” response mode was frequently selected, it did not disrupt the scale’s structure or functionality.
Future studies might explore whether alternative phrasing or additional guidance for this response option could yield more nuanced data.
The strengths of this study include the number of participants, which not only allowed for sound statistical power in our analyses, but also ensured a distribution covering all levels of the latent trait. Another strength lies in the methodological setup of this study, as the combination of CTT and IRT allowed us to assess psychometric validity at the dimension level, as well as to zoom in on individual items and response modalities. Coupled with item invariance assessment via DIF analysis, this exhaustive methodological approach enabled an analysis of the BRS-F scale in much greater detail than any other validation published to date. The methodological rigor of combining CTT, IRT, and DIF analysis not only ensured robust psychometric validation but also established the scale’s fairness across subgroups, making it a reliable tool for diverse university populations.
Regarding the potential limitations of this study, the cross-sectional approach did not allow us to assess longitudinal indicators, such as test-retest reliability or the response shift effect. Furthermore, in a design such as a cross-sectional self-administered questionnaire investigating the psychological impact of the COVID-19 pandemic, participation bias may have influenced the results, as students particularly affected by the pandemic’s psychological impact may have been more likely to respond, potentially skewing resilience level distributions. In addition, the majority of participants in this study were female, reflecting a notable overrepresentation of women, which Lippa has described as an overall higher participation rate of women in psychological research36 compared to their male counterparts. Similar challenges have been observed in other psychometric validation studies, where higher percentages of female respondents have even been known to prevent the examination of gender differences37. However, the elements put forward in this validation are largely sufficient for cross-sectional use in this population, with the large number of participants being an argument for the representativeness of our sample.
In conclusion, we have demonstrated that the BRS-F is well adapted for cross-sectional use in the French university student population, by assessing the psychometric validity of the scale using the CTT and IRT methods, as well as confirming the invariance of its items across different sub-groups. While population research using the BRS-F to assess levels of resilience is highly recommended in view of the scale’s excellent psychometric properties, future research should prioritize longitudinal assessments to evaluate the invariance of the BRS-F over time, test-retest reliability, and its sensitivity to interventions designed to improve resilience. Furthermore, this validated version holds promise as a tool for evaluating interventions aimed at enhancing student resilience, thus supporting public health initiatives and decision-making.
Data availability
The datasets used and analysed during the current study are available from the corresponding author on reasonable request.
References
Note from the editors: World Health Organization declares novel coronavirus (2019-nCoV) sixth public health emergency of international concern. Euro. Surveill. 25, 200131e (2020).
Légifrance. Arrêté Du 14 Mars 2020 Portant Diverses Mesures Relatives à La Lutte Contre La Propagation Du Virus Covid-19. (2020).
Browning, M. H. E. M. et al. Psychological impacts from COVID-19 among university students: risk factors across seven states in the United States. PLoS One 16, e0245327 (2021).
Russell, E. C., Abidogun, T. M., Lindley, L. L. & Griffin, K. W. Impact of the COVID-19 pandemic on university students’ psychological distress, well-being, and utilization of mental health services in the United States: populations at greatest risk. Front. Public. Health 12, 1442773 (2024).
Lewis, J., West, A., Roberts, J. & Noden, P. Parents’ involvement and university students’ independence. Fam. Relat. Soc. 4, 417–432 (2015).
Auerbach, R. P. et al. Mental disorders among college students in the World Health Organization World Mental Health surveys. Psychol. Med. 46, 2955–2970 (2016).
Bourion-Bédès, S. et al. Psychological impact of the COVID-19 outbreak on students in a French region severely affected by the disease: results of the PIMS-CoV 19 study. Psychiatr. Res. 295, 113559 (2021).
Masten, A. S. Ordinary magic: resilience processes in development. Am. Psychol. 56, 227–238 (2001).
Garmezy, N. Resilience in children’s adaptation to negative life events and stressed environments. Pediatr. Ann. 20, 459–460 (1991).
Felten, B. S. & Hall, J. M. Conceptualizing resilience in women older than 85: overcoming adversity from illness or loss. J. Gerontol. Nurs. 27, 46–53 (2001).
Bonanno, G. A. Loss, trauma, and human resilience: have we underestimated the human capacity to thrive after extremely aversive events? Am. Psychol. 59, 20–28 (2004).
Rolland, F. Détresse et résilience des étudiants En médecine de Paris-Saclay Lors de La première vague de La pandémie de COVID-19. Ann. Médico-psychol. Revue Psychiatr. 181, 304–311 (2023).
Smith, B. W. et al. The brief resilience scale: assessing the ability to bounce back. Int. J. Behav. Med. 15, 194–200 (2008).
Jacobs, I. & Horsch, A. Psychometric properties of the French brief resilience scale. Eur. J. Health Psychol. 26, 1–9 (2019).
Leys, C. et al. Resilience predicts lower anxiety and depression and greater recovery after a vicarious trauma. IJERPH 18, 12608 (2021).
Nima, A. A., Cloninger, K. M., Persson, B. N., Sikström, S. & Garcia, D. Validation of subjective well-being measures using item response theory. Front. Psychol. 10, 3036 (2020).
Jabrayilov, R., Emons, W. H. M. & Sijtsma, K. Comparison of classical test theory and item response theory in individual change assessment. Appl. Psychol. Meas. 40, 559–572 (2016).
Hagquist, C. & Andrich, D. Recent advances in analysis of differential item functioning in health research using the Rasch model. Health Qual. Life Outcomes 15, 181 (2017).
Zumbo, B. D. Three generations of DIF analyses: considering where it has been, where it is now, and where it is going. Lang. Assess. Q. 4, 223–233 (2007).
COVID-19 Tableau de Bord Des Données Régionales Au 02 Juin 2020. https://www.grand-est.ars.sante.fr/ (2020).
LimeSurvey. LimeSurvey GmbH, Hamburg, Germany.
Smith, B. W., Epstein, E. M., Ortiz, J. A., Christopher, P. J. & Tooley, E. M. The foundations of resilience: what are the critical resources for bouncing back from stress? in Resilience in Children, Adolescents, and Adults (eds Prince-Embury, S. & Saklofske, D. H.) 167–187 https://doi.org/10.1007/978-1-4614-4939-3_13 (Springer, 2013).
Perrot, B., Bataille, E. & Hardouin, J. B. Validscale: A command to validate measurement scales. Stata J. 18, 29–50 (2018).
Hardouin, J. B. & Blanchin, M. PCM: Stata module to estimate the parameters of a partial credit model (PCM) or of a rating scale model (RSM). Statistical Software Components S459223, Boston College Department of Economics.
Hardouin, J. B., Blanchin, M., Perrot, B. & Sébille, V. Pro-online—Easy PRO analyses.
Byrne, B. M. Structural Equation Modeling with EQS https://doi.org/10.4324/9780203726532 (Routledge, 2013).
Petersen, J. H. Quality of life. Assessment, analysis and interpretation. Statist Med. 20, 2214–2216 (2001).
Embretson, S. E. & Reise, S. P. Item Response Theory https://doi.org/10.4324/9781410605269 (Psychology, 2013).
Wright, B. & Stone, M. Best Test Design (1979).
Andrich, D. Rasch Models for Measurement. 95 (Sage Publications, Inc, 1988).
Prieto, L., Alonso, J. & Lamarca, R. Classical test theory versus Rasch analysis for quality of life questionnaire reduction. Health Qual. Life Outcomes. 1, 27 (2003).
Choi, S. W., Gibbons, L. E. & Crane, P. K. Lordif: an R package for detecting differential item functioning using iterative hybrid ordinal logistic regression/item response theory and Monte Carlo simulations. J. Stat. Soft. 39, (2011).
Todorović, A. et al. Factors associated with low levels of resilience among French university students during COVID-19 lockdown: the results of the PIMS-CoV19 study.
Coelho, L. D. H., Hanel, G. H. P., Medeiros Cavalcanti, P., Teixeira Rezende, T. & Veloso Gouveia, V. Brief resilience scale: testing its factorial structure and invariance in Brazil. Univ. Psychol. 15, 397 (2016).
Chmitorz, A. et al. Population-based validation of a German version of the brief resilience scale. PLoS One 13, e0192761 (2018).
Lippa, R. A. Gender, Nature, and Nurture (Lawrence Erlbaum Associates, 2005).
Soraci, P. et al. Validation and psychometric evaluation of the Italian version of the fear of COVID-19 scale. Int. J. Ment. Health Addict. 20, 1913–1922 (2022).
Acknowledgements
The authors thank all the participants of the study for their valuable input and cooperation. We would like to thank Dr Myriam Blanchin for her technical assistance with the PRO-online platform. This research is a part of a PhD project funded by the French Network of Doctoral programmes in Public Health (RDSP), coordinated by EHESP French School of Public Health.
Author information
Authors and Affiliations
Contributions
Conceptualization, A.T., C.B., H.R., and S.B.-B; methodology A.T., C.B., H.R., and S.B.-B; validation, A.T., C.B., and S.B.-B.; formal analysis, A.T. and H.R.; investigation, C.B. and S.B.-B.; data curation, A.T. and H.R.; writing—original draft preparation, A.T.; writing—review and editing, A.T., C.B., H.R., and S.B.-B.; supervision, C.B. and S.B.-B.; project administration, C.B. and S.B.-B. All authors have read and agreed to the published version of the manuscript.
Corresponding author
Ethics declarations
Competing interests
The authors declare no competing interests.
Ethics approval and consent to participate
All participants received detailed information on the purpose of the study and provided online informed consent to participate. The survey was anonymous to ensure the confidentiality and reliability of the data, adhering to ethical standards. All procedures were performed in accordance with the principles of the Declaration of Helsinki and the study protocol was approved by the Institutional Review Board (Comité National de l’Informatique et des Libertés - registration 2,220,408).
Additional information
Publisher’s note
Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.
Rights and permissions
Open Access This article is licensed under a Creative Commons Attribution-NonCommercial-NoDerivatives 4.0 International License, which permits any non-commercial use, sharing, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if you modified the licensed material. You do not have permission under this licence to share adapted material derived from this article or parts of it. The images or other third party material in this article are included in the article’s Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article’s Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by-nc-nd/4.0/.
About this article
Cite this article
Todorović, A., Bourion-Bédès, S., Rousseau, H. et al. Psychometric validation and item invariance of the French version of the Brief Resilience Scale in a sample of French university students following the first COVID-19 lockdown. Sci Rep 15, 11753 (2025). https://doi.org/10.1038/s41598-025-94935-w