About the Author(s)


Erica Munnik
Department of Psychology, University of the Western Cape, Cape Town, South Africa

Mario R. Smith
Department of Psychology, University of the Western Cape, Cape Town, South Africa

Citation


Munnik, E., & Smith, M.R. (2019). Methodological rigour and coherence in the construction of instruments: The emotional social screening tool for school readiness. African Journal of Psychological Assessment, 1(0), a2. https://doi.org/10.4102/ajopa.v1i0.2

Original Research

Methodological rigour and coherence in the construction of instruments: The emotional social screening tool for school readiness

Erica Munnik, Mario R. Smith

Received: 30 Oct. 2018; Accepted: 11 May 2019; Published: 24 June 2019

Copyright: © 2019. The Author(s). Licensee: AOSIS.
This is an Open Access article distributed under the terms of the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.

Abstract

The need for a contextually relevant and empirically grounded measure of emotional social competence in Grade R children was identified in the literature. The aim of this study was to develop a contextually relevant instrument for emotional social competence in preschool children. The study adopted a four-phase approach, with each phase using distinct methodological approaches. This article reports on the use of multiple research methods to achieve methodological rigour and coherence in the construction. Phase 1 used systematic review methodology to establish a theoretical foundation for the instrument. The results identified two domains and nine subdomains that formed the theoretical model for the instrument. In phase 2, stakeholder perceptions of emotional and social competence were identified through concept mapping to increase contextual relevance and sensitivity. The results highlighted that early stimulation and contextual factors impacted school readiness and needed to be included. The construction of the instrument incorporated the findings from the first two phases. The draft instrument was presented to a panel of experts, using the Delphi technique, for validation of content and scalar decisions in phase 3. The results supported the proposed format and content of the screening tool. The resulting instrument was piloted in phase 4 with survey research. Good internal consistency was reported and the factor structure was supported. The multiphase methodology provided an overarching framework with methodological rigour and coherence. The grounding in the literature, stakeholder consultation and rigorous validation processes enhanced the resultant instrument. The articulation of one phase into the next ensured methodological coherence.

Keywords: E3SR; test construction; systematic review; concept mapping; Delphi study; survey research.

Introduction

Test construction process and models

Foxcroft (2011) identified that a lack of methodological coherence and scientific rigour in the construction and validation phases has resulted in many instruments being perceived as inadequate. Evidence of the strategies and the rigour in the process of test construction is essential to ensure that instruments are deemed adequate, reliable and valid for use in applied contexts (Foxcroft, 2013). Thus, a need exists for rigorous studies in scale construction that employ coherent design principles. This manuscript reports on the use of multiple methodologies to strengthen methodological rigour and coherence in the construction of instruments. The construction process of the emotional social screening tool for school readiness (E3SR) is used as an illustrative case study.

Theoretical framework

DeVellis (2016) conceptualised scale construction as a continuous, well-designed process with four distinct steps, namely (1) theoretical foundation, (2) scale construction, (3) structural validation and (4) preparation of manuals. This framework provides an overarching model for the process of scale construction that underscores the methodological decisions that must be made in order to develop a sound scale. Each step entails a series of activities that pursue the aim of that step and feed into the overarching model.

The first step in DeVellis’s model entails the establishment of a theoretical foundation. Three core activities are included here. The first is a thorough consultation of the literature to identify current thinking and theory about the construct, available instruments, domains included and definitions used. The second is stakeholder consultation; a major concern in scale construction is the extent to which such consultation takes place. It is particularly important for enhancing contextual relevance and sensitivity in construct definition (Foxcroft, 2011) and also increases buy-in with users (Kline, 2015). The third is the development of theoretical and operational definitions for the proposed scale from the literature and the stakeholder consultation.

Step 2 is focused on scale construction. In this step, the selection of items, pre-testing and revision of the scale receive attention. During test construction, scalar decisions are often made without due consideration, or without the recognition that they constitute methodological decisions, and are frequently neither reported explicitly nor interrogated sufficiently. This step must therefore ensure that important aspects such as the user group, target group and scoring values are appropriate for the intended scale.

The third step is focused on the structural validation of the scale. This step usually includes piloting of the newly constructed instrument. Piloting is often performed with conveniently selected samples without due consideration for the methodological or design principles guiding this kind of research process. The resulting data set is then used to establish the psychometric properties of the scale. Typical techniques include Cronbach’s alpha for internal consistency, factor analytic methods for construct validation and, where possible, convergent or discriminant analyses for criterion referencing. The challenge is that these methods are often applied without testing whether the data set satisfies the requirements for the respective statistical analysis or data reduction, as recommended by Kline (2015). In addition, statistical techniques are applied at a technical level without using theoretical formulations to guide the analytic process (Kline, 2015).
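To make the internal consistency check concrete, the sketch below computes Cronbach’s alpha from an item-response matrix. It is a minimal Python illustration with hypothetical ratings, not an analysis from the study; the function name and data are assumptions.

```python
import numpy as np

def cronbach_alpha(items: np.ndarray) -> float:
    """Cronbach's alpha for an (n_respondents, n_items) score matrix."""
    items = np.asarray(items, dtype=float)
    k = items.shape[1]                         # number of items in the (sub)scale
    item_vars = items.var(axis=0, ddof=1)      # variance of each item
    total_var = items.sum(axis=1).var(ddof=1)  # variance of the summed scale
    return (k / (k - 1)) * (1 - item_vars.sum() / total_var)

# Hypothetical 5-point ratings from six respondents on a four-item subscale.
scores = np.array([
    [4, 5, 4, 5],
    [3, 3, 4, 3],
    [5, 5, 5, 4],
    [2, 3, 2, 3],
    [4, 4, 5, 4],
    [3, 4, 3, 3],
])
print(f"alpha = {cronbach_alpha(scores):.3f}")
```

For a multidimensional instrument with several subdomains, the same routine would be applied per subscale.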

The fourth step in the model entails the writing of manuals. Particular attention is paid to technical details about the scale construction, guidelines for administration and use, and instructions for scoring and interpretation. A particular challenge is that many scales are constructed without the subsequent preparation of manuals detailing appropriate use and construction.

The four steps in this model provide a coherent process that culminates in the accurate conceptualisation, construction and documentation of the scale (DeVellis, 2016). The model also underscores that construction is a continuous process (DeVellis, 2016). The revision and ongoing refinement of the instrument follow the same four-step process, which makes the model cyclical and continuous. The challenge is often that instruments are used with expanding populations and samples without using the feedback loop that contributes to further refinement. The scale becomes a means to an end without attention to the ongoing construction process. This is evidenced by the lack of reporting on the psychometric properties of scales when they are used in subsequent studies (Foxcroft, 2004). In short, this model provides a logical process for scale construction, but leaves the operationalisation of the steps to the developers. Research and development in scale construction often lacks rigorous attention to methodological principles at the various stages. Thus, there is a need to demonstrate how multiple methodologies can be harnessed to strengthen the activities within each of the steps. Figure 1 illustrates the model proposed by DeVellis (2016).

FIGURE 1: Model of test construction.

Application of methodology

Step 1: Theoretical foundation

From the above discussion, two activities emerged as key considerations in the first step. The first consideration is that the existing body of literature must be consulted and consolidated to identify definitions of the identified construct, and scales measuring the identified construct in part or as a whole. Narrative literature reviews are limited in that they do not provide a systematic, replicable process for filtering through the body of literature. In summative literature reviews, researchers often read specific sources at the expense of more comprehensive searches. The traditional approach to this step can be strengthened with the use of secondary research methods that specifically attempt to filter the body of literature following a specified set of procedures (Gough, Oliver, & Thomas, 2017). For example, scoping reviews and systematic reviews are recognised research methods that provide a rigorous process for the identification of literature reporting on a particular construct. Scoping reviews are recommended when researchers want to obtain an overview of the available literature reporting on a particular construct (Grant & Booth, 2009). Systematic reviews are reportedly the highest form of evidence and provide a critical appraisal of the literature to identify good-quality research from which information about the construct can be extracted. Wardlaw (2010) can be consulted for a comprehensive summary of systematic review methodology. The primary consideration is to strengthen this step by replacing narrative reviews with scoping or systematic reviews. A major advantage is that scale developers can identify existing measures and extract information about definitions and theoretical formulations of the construct.

The second consideration is to consult stakeholders about the constructs under study. Through this consultation, the contextual relevance of the construct can be enhanced. Stakeholder consultation should follow a rigorous methodological process and can draw on existing methodologies that have demonstrated efficacy in this regard. Concept mapping is recommended for distilling the perceptions of a variety of people into one coherent whole (see Pokharel, 2009, for a comprehensive overview). Concept mapping can draw on qualitative methods if more exploratory work is required, or quantitative methods if the construct needs further development (Novak & Cañas, 2006). Through the use of methods like concept mapping, important insights can be gained for consideration in the development of the construct.

The combination of consolidation of the literature and stakeholder consultation can strengthen the resulting theoretical and operational definitions of the construct under study. The use of these methodologies then operationalises at least two activities in this first step. Each of these activities will be informed by well-established methodologies that lend rigour and methodological coherence to the establishment of a theoretical foundation for the proposed scale.

Step 2: Scale construction

The primary consideration in this step can be summarised in two activities. The first activity would be to make scalar decisions explicit. Scalar decisions such as the intended user group of the scale, administration guidelines, scoring keys and the selection of items should all be documented clearly and the decisions substantiated. This process of careful and explicit documentation will become the basis of a draft manual that will be finalised in the final step. The primary consideration is to strengthen this step through improved, motivated documentation of decisions, which ensures engagement with a more systematic and methodical process of decision-making.

The second activity entails testing the scalar decisions and the pool of draft items against an external panel. This testing can be performed through established techniques such as Delphi studies. The Delphi method is an iterative process to collect and distil the anonymous judgements of experts using a series of data collection and analysis techniques interspersed with feedback (Boulkedid, Abdoul, Loustau, Sibony, & Alberti, 2011). This type of research method employs a qualitative methodology in which an interactive panel of experts is invited to share their expertise and opinions and to work towards consensus about a set of indicators. By employing this methodology, one is able to facilitate an organised discussion that analyses information individually, but also as a set. The steps that are usually followed include (1) identification of a clearly defined research problem, rationale, aim and objectives, (2) the selection of expert panellists, (3) the development of a stimulus document, (4) dissemination of information (the stimulus document) in various rounds and (5) analysis of feedback after each round, with incorporation of the feedback into the next rounds until consensus is reached. See Boulkedid et al. (2011) for a comprehensive review of the Delphi methodology.

Delphi studies present stimulus documents in an iterative process to a panel. The panel provides feedback after each round and revisions are made until there is consensus on the items presented. The draft formulated in the first activity of this step can be used as the stimulus document. Panels are carefully constituted and can include experts and/or stakeholder groups who can provide input from identified vantage points. Delphi studies are well documented as an effective method to establish content validity (Hasson, Keeney, & McKenna, 2000). The resultant document will be a more refined version that is ready for piloting.

The combination of documenting scalar decisions and Delphi techniques can significantly enhance the construction process. It provides the constructor with an opportunity to record initial considerations and expands the construction team through the inclusion of the Delphi panel. Thus, the end product is strengthened through the introduction of rigorous methods and more explicit reporting that can provide insight into how the resultant scale performs.

Step 3: Structural validation

The primary consideration in this step includes two activities. The first activity is conducting a pilot study of the newly constructed instrument. The pilot study should be conceptualised well to ensure that methodological decisions such as sampling are taken into consideration. Survey research provides a well-established framework for pilot studies that can enhance the methodological rigour of the pilot study and the quality of the resulting data set.

The second activity is the calculation of psychometric properties. This process should be guided by a strong theoretical formulation. Developers must identify whether they are testing a theoretical model or exploring how items load onto factors in a more organic process. The former would set out to test a theoretical model that has been conceptualised a priori. The latter uses a pool of items and examines how items would load onto factors and the number of factors in the solution. Thus, the data reduction process is not merely a technical exercise, but a well thought out analytic process that follows a broader theoretical underpinning.

The resulting data must be tested to determine whether they conform to the requirements for the selected analysis or data reduction. Testing the assumptions underpinning inferential statistics and data reduction must be treated as an empirical question in its own right. This is an important step to ensure that the data support the selected analysis. Establishing the psychometric properties of the scale can proceed with a greater measure of confidence if the assumptions for the data analyses or data reduction were tested. Pilot studies that are more formalised and incorporate good-practice methodological principles can strengthen this step substantially. It shifts the focus from the technical aspects of establishing psychometric properties to the overall scientific and empirical value of the pilot study.
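As one way of making these assumptions empirical questions, the sketch below checks the factorability of a data set with Bartlett’s test of sphericity and the Kaiser-Meyer-Olkin (KMO) measure of sampling adequacy. It assumes the Python factor_analyzer package and uses simulated data purely for illustration:

```python
import numpy as np
import pandas as pd
from factor_analyzer.factor_analyzer import (
    calculate_bartlett_sphericity,
    calculate_kmo,
)

# Simulated item responses (rows = protocols, columns = items); illustrative only.
rng = np.random.default_rng(0)
data = pd.DataFrame(rng.integers(1, 6, size=(200, 12)),
                    columns=[f"item_{i}" for i in range(1, 13)])

# Bartlett's test of sphericity: the correlation matrix should differ
# significantly from an identity matrix (p < .05) before factoring proceeds.
chi2, p_value = calculate_bartlett_sphericity(data)

# KMO: overall values of roughly .6 and above are commonly treated as the
# minimum sampling adequacy for factor analysis.
kmo_per_item, kmo_overall = calculate_kmo(data)

print(f"Bartlett chi2 = {chi2:.1f}, p = {p_value:.4f}")
print(f"Overall KMO = {kmo_overall:.3f}")
```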

Step 4: Revision and manuals

The primary consideration here is to strengthen this step through two activities. The first activity prioritises the production of technical and instructional manuals. Instructional manuals ensure that the resultant instrument is used appropriately, while technical manuals capture the scientific rigour of the construction process and provide a template for other researchers.

The second activity entails further piloting and refinement either by the primary developers or by other researchers who may use the scale. Primary developers should actively conduct further research on the structural validity of the scale and refine as indicated. Permission should be granted to other researchers to use the scale with the proviso that feedback is provided about the psychometric properties of the scale in subsequent studies. This ensures that there is clear commitment to continued refinement of the instrument. Figure 2 illustrates the link between methods, activities and steps in the model.

FIGURE 2: Steps of test construction and methodological choices.

Ethical considerations

Project registration and ethics clearance were granted by the Senate Research Committee of the University of the Western Cape (Ethical Clearance number: 14/2/8).

Illustrative case: The emotional social screening tool for school readiness

The E3SR was developed as part of a doctoral study by Munnik (2018). The aim was to develop an instrument that could assess emotional and social competence in Grade R children as part of school readiness assessment. The aims and objectives of the study reflected the first three steps of the DeVellis (2016) framework. The conceptual framework articulated into a four-phase study. The phases were conceptualised as separate studies with independent methodologies. The results of each phase fed into the succeeding phase to form a coherent whole, resulting in the prototype of the E3SR. A comprehensive discussion of the results can be accessed in the unpublished thesis of Munnik (2018). The phases are described below for illustrative purposes rather than as a detailed discussion of the results.

Step 1: Establish a theoretical foundation

The first step included two activities that articulated into two separate phases.

Phase 1: Consolidation of the literature

The first phase corresponded to the first activity, namely the consolidation of the body of literature reporting on emotional social competence. Systematic review methodology was adopted to conduct two reviews focusing on (1) definitions of emotional social readiness in pre-schoolers and (2) instruments measuring emotional social competence in pre-schoolers, respectively. The reviews took place at four levels: (1) identification of articles with specific keywords or phrases, (2) screening or filtering of the identified articles by abstract, (3) appraisal of the identified articles with a quality appraisal tool and (4) summation of the articles by means of data extraction and meta-synthesis. The Smith Franciscus Swartbooi (SFS) scoring system was used to evaluate the identified studies for methodological quality (Smith, Franciscus, Swartbooi, Munnik, & Jacobs, 2015). The Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA) informed the filtration process used to consolidate the literature (Liberati et al., 2009). The reviews were conducted by a team of reviewers. Team meetings were facilitated in which reviewers discussed their assessments, and after each operational step reviewers were provided an opportunity to calibrate their findings.
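A minimal sketch of how such a four-level filtration can be tracked is given below. Only the 68 identified titles and seven included articles are taken from the first review reported below; the intermediate counts are hypothetical:

```python
from dataclasses import dataclass

@dataclass
class ReviewLevel:
    name: str
    retained: int

# PRISMA-style tally across the four review levels. The counts at the two
# middle levels are hypothetical placeholders.
levels = [
    ReviewLevel("identified by keyword search", 68),
    ReviewLevel("retained after abstract screening", 25),  # hypothetical
    ReviewLevel("retained after quality appraisal", 11),   # hypothetical
    ReviewLevel("included in the final summation", 7),
]

previous = None
for level in levels:
    note = "" if previous is None else f" ({previous - level.retained} excluded)"
    print(f"{level.name}: {level.retained}{note}")
    previous = level.retained
```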

The first review identified existing definitions of emotional social readiness in pre-schoolers from good-quality literature. Peer-reviewed, full-text articles published between January 2003 and December 2013 were identified from a comprehensive search across eight databases selected for their relevance to psychology and education, as well as from reference mining and grey literature. A total of 68 titles were identified, of which seven articles were included in the final summation. Theoretical and operational definitions and their underpinning behaviours or attributes were extracted. The results indicated that there is no consensus on the definition of emotional and social competence in preschool children.

The second review identified instruments purported to measure emotional or social readiness or competence as part of school readiness. Peer-reviewed, full-text articles with a quantitative design published between January 2002 and December 2012 were identified from a comprehensive search across eight databases. Four articles were included in the final summation from 282 titles. Four instruments were identified and data were extracted that included (1) a description of the identified instrument, (2) type of instrument, (3) aim of the instrument, (4) target group, (5) theoretical and operational definitions, (6) sample items in domains, (7) administration, (8) language of construction and (9) psychometric properties. The review indicated the need for a single-form, strengths-based screening instrument rather than a diagnostic tool. The results indicated that ease of administration and interpretation would allow for a wider application across the health professions. An integrated instrument would thus be more applicable and beneficial in the South African context. The review identified the lack of psychometrically sound, contextually appropriate measures for school readiness, and more specifically for emotional or social readiness as a domain of school readiness.

Phase 2: Stakeholder consultation

Concept mapping was used to consult stakeholders about their perceptions of school readiness and emotional social competence as a domain of school readiness. Five focus groups were conducted with a purposive sample of 23 educators, 9 professionals and 9 parents. Two semi-structured interviews were conducted with an educator and a paediatrician, respectively, who were unable to attend the focus groups. Participants were recruited from a mixture of socio-economic areas to provide a cross-section of contextual considerations at play. Data collection and analysis happened concurrently until saturation was reached (Creswell, 2007). The conventions of reflexivity, dependability and trustworthiness of data were adhered to. Thematic analysis informed by Braun and Clarke (2006) was used and produced four core themes. The results were used to develop an interpretable conceptual framework, expressed in the language of the participants. This resulted in a more nuanced and contextualised understanding of emotional social readiness. The resultant concept map illustrated that understandings of children’s emotional social readiness cannot be separated from the systems within which they function. Societal, community, educational and familial systems act as the overarching framework and influence children’s emotional social readiness before school entry.

The findings from phases 1 and 2 formed the basis for developing theoretical and operational definitions of emotional and social competence as primary domains. Nine subdomain definitions were operationalised for (1) emotional maturity, (2) emotional management, (3) independence, (4) positive sense of self, (5) mental well-being and alertness, (6) social skills or confidence, (7) pro-social behaviour, (8) compliance to rules and (9) communication. These definitions formed the basis for a contextually sensitive theoretical model for the proposed instrument.
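Expressed as a simple data structure, the theoretical model amounts to a mapping from primary domains to subdomains. In the sketch below, the grouping of the nine subdomains under the two primary domains is an illustrative assumption; the article lists the subdomains without specifying that split:

```python
# Illustrative domain -> subdomain mapping for the E3SR theoretical model.
# The assignment of subdomains to the two primary domains is assumed.
E3SR_MODEL = {
    "emotional competence": [
        "emotional maturity",
        "emotional management",
        "independence",
        "positive sense of self",
        "mental well-being and alertness",
    ],
    "social competence": [
        "social skills or confidence",
        "pro-social behaviour",
        "compliance to rules",
        "communication",
    ],
}

for domain, subdomains in E3SR_MODEL.items():
    print(f"{domain}: {len(subdomains)} subdomains")
```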

Step 2: Scale construction

Step 2 was achieved through the third phase of the study. Phase 3 had two sub-phases that corresponded to the two activities identified in the second step. Sub-phase A entailed the construction of the proposed measure. Sub-phase B entailed a Delphi study.

Sub-phase A entailed the development of the draft screening tool and a pool of test items. The developmental phase included steps as proposed by Foxcroft (2013) and Taguma (2000). Firstly, the intended aim or purpose of the tool was established. Secondly, the constructs were defined and operationalised and a pool of items generated. Thirdly, decisions were made about the content and format of the test. All of these steps resulted in the prototype.

Sub-phase B entailed external validation, which was performed through the Delphi method with a panel of 11 experts. The stimulus document included questions about the test construction (scalar) choices, such as domain identification, theoretical and operational definitions and item writing. It comprised three sections: (1) the aim and core constructs (i.e. aim, purpose, target population, theoretical and operational definitions), (2) the instrument (i.e. composition of the demographic section and the proposed items of the E3SR) and (3) technical aspects of the prototype (i.e. type of scale, scoring and general administration prompts).

Revisions were based on the feedback of the panellists. If consensus was reached (above 70% agreement), the prompt or item was retained and not included again in subsequent rounds. Stimulus prompts were revised if the level of agreement was between 50% and 70%. Items that obtained levels of agreement below 35% were revised, replaced or omitted. The replacement and revised stimulus prompts were included in the subsequent rounds. Qualitative data were also obtained that assisted with the revision or refinement of constructs and/or items. During round 1, consensus was reached on the majority of questions about the form and function of the prototype. The majority of the items (n = 74) were retained in their original format, 20 items were retained in revised form and 28 items were omitted. Seven new items were included in round 2. Reversed items initially scored poorly, but were retained and flagged as such in the second round. Consensus was reached on the form, function and content of the proposed screening tool after the second round, at which point the Delphi was concluded. The Delphi study established face and content validity. The findings were incorporated into a pilot version of the screening instrument, now named the E3SR.
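The decision rules applied in each round can be summarised compactly. The thresholds in the sketch below follow the percentages reported above; how the unreported 35%–50% band was handled is an assumption, flagged in the code:

```python
def delphi_decision(agreement: float) -> str:
    """Classify a stimulus prompt by the proportion of panel agreement (0-1)."""
    if agreement > 0.70:
        return "retain; drop from subsequent rounds (consensus reached)"
    if agreement >= 0.50:
        return "revise and resubmit in the next round"
    if agreement >= 0.35:
        # The 35%-50% band is not specified in the article; grouping it
        # with 'revise' is an assumption.
        return "revise and resubmit in the next round (assumed)"
    return "revise, replace or omit"

# Hypothetical panel of 11 experts endorsing (1) or rejecting (0) an item.
votes = [1, 1, 1, 0, 1, 1, 1, 1, 0, 1, 1]
agreement = sum(votes) / len(votes)  # 9/11, roughly 0.82
print(delphi_decision(agreement))    # -> retain; consensus reached
```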

Step 3: Structural validation

The third step was achieved through phase 4, which entailed a pilot study that aimed to establish the psychometric properties of the instrument. The first activity consisted of a cross-sectional survey conducted with a local sample of 26 preschool teachers in the Western Cape region of South Africa, who completed 493 protocols in which they assessed preschool-aged children for emotional and social competence. The survey included a biographic questionnaire and the E3SR.

The second activity comprised advanced statistical analysis to determine the psychometric properties of the scale. Reliability was assessed through internal consistency: the nine sub-scales showed good to excellent Cronbach’s alphas, ranging from 0.794 to 0.951. Construct validity was established using data reduction methods. The assumptions for data reduction were tested and the results indicated that the data would support factor analytic methods. Confirmatory factor analyses supported the theoretical nine-factor solution of the E3SR, whilst exploratory factor analyses yielded an improved seven-factor model. The results suggested revisions to increase the model fit.
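For illustration, an exploratory factor analysis of this kind might be run as follows, again assuming the Python factor_analyzer package and simulated data. This is a sketch of the technique, not the study’s actual analysis pipeline:

```python
import numpy as np
import pandas as pd
from factor_analyzer import FactorAnalyzer

# Simulated protocol data: rows = rated children, columns = E3SR-style items.
rng = np.random.default_rng(42)
data = pd.DataFrame(rng.integers(1, 6, size=(493, 20)),
                    columns=[f"item_{i}" for i in range(1, 21)])

# Exploratory factor analysis with an oblique rotation, since subdomains of
# emotional social competence are plausibly correlated.
efa = FactorAnalyzer(n_factors=7, rotation="oblimin")
efa.fit(data)

eigenvalues, _ = efa.get_eigenvalues()
loadings = pd.DataFrame(efa.loadings_, index=data.columns)

print("First seven eigenvalues:", np.round(eigenvalues[:7], 2))
print(loadings.round(2).head())
```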

Step 4: Refinement and revision

The fourth step will be achieved through a postdoctoral study. The first activity will entail the revision of the E3SR based on the recommendations of Munnik (2018). The revised instrument will be piloted with new samples and its psychometric properties established. The second activity will entail the finalisation of instructional and technical manuals, as well as the copyrighting of the E3SR. Thereafter, permission can be granted to other researchers to use the E3SR in their studies, with the agreement that information about the scale’s performance in those studies is fed back to the scale developer. In this way, the E3SR will be refined and revised.

Discussion

Munnik (2018) used the theoretical formulation of DeVellis (2016) to construct a screening tool for emotional and social competence in preschool children. The first three steps of the model articulated into a four-phased study that contributed to the empirical underpinning of the construction process. Methodological rigour was applied to the conceptualisation of the instrument through well-established methodologies such as systematic review, concept mapping, the Delphi study and survey research. A theoretical model was developed for the proposed scale from the theoretical foundation established through the consolidation of the literature (systematic reviews in phase 1) and stakeholder consultation (concept mapping in phase 2). The contextual sensitivity and relevance of the theoretical model were enhanced through consultation with stakeholder groups in the conceptualisation phase. This process also increased buy-in through stakeholder consultation, consistent with the recommendation by Pokharel (2009).

The conceptual model was operationalised in the construction phase through scalar decisions and item writing, resulting in a prototype. The prototype was subjected to a Delphi process that provided expert validation. The panel of multidisciplinary experts in the Delphi also represented different cultural groupings, providing a second opportunity for enhancing contextual sensitivity. The expert panel endorsed all scalar decisions, and face and content validity were established in only two rounds, attesting to the enhanced quality of the prototype that resulted from the more rigorous conceptualisation process.

The pilot study used a robust design with a larger than recommended sample to establish construct validity through the combination of exploratory factor analysis (EFA) and confirmatory factor analysis (CFA). Exploratory factor analysis provided insight into sources of variance, whereas CFA tested the theoretical model underpinning the instrument. The theoretical model for the E3SR was adopted and revisions were recommended. Further refinement will draw on the proposed revisions, and the revised instrument will be piloted with new samples. This will be operationalised through a postdoctoral study aimed at refinement and the preparation of the manuals.

Significance of the study

The present study contributed to the limited body of research conducted in South Africa on test construction in general, and on the design of instruments to measure emotional social skills or competencies as a domain of school readiness in particular (e.g. Bustin, 2007). The present study also contributed to addressing the lack of reliable and valid instruments resulting from adaptation, poor test design or inadequate piloting (Laher & Cockcroft, 2014).

This multi-method approach was not mixed methodology. It constituted methodological triangulation between theory and method. This increased the methodological rigour and enhanced the resultant screening tool. The multi-method approach could act as a blueprint or framework for test construction in education and psychology. Clinicians usually use a variety of diagnostic and screening tools without an appreciation and acknowledgement of the methodological and conceptual underpinning of the instrument. Research and development is often dictated by clinical interest and a focus on content. This study would assist clinicians with shifting to a more balanced position where they are able to use empirical methods in test construction. It also provides a way for clinicians to evaluate new and existing instruments in that the model highlights the important psychometric aspects one has to consider in selecting a test.

The present study was a collaboration between the Department of Education and the University of the Western Cape, demonstrating the powerful outcomes that can emerge from such collaborative initiatives.

This study forged important stakeholder relationships that paved the way for further adaptation and refinement of the resultant screening tool, ongoing collaborative research and knowledge exchange, as well as knowledge translation of assessment principles and developmental milestones for the target group. Ultimately, this process increases the likelihood of adoption into a variety of health professional practices.

The operationalisation of DeVellis’s model through multiple methods might make the theoretical and methodological underpinnings of test construction accessible, understandable and easier to use in scale construction. This, in turn, can foster more effective use and application of instruments, and promote construction and adaptation studies.

Concluding remarks

The methodological choices in the case study contributed to the establishment of a contextually appropriate screening tool designed in and for the South African context. The construction of the E3SR illustrated how various methodologies can be used to strengthen the overall design. Clear methodological processes with sound methodological decisions enhance the end product without compromising the process of research. The case underscores the importance of explicit methodological decisions and the benefits of using theoretical frameworks. The four-phase study with its respective methodologies proved to be a thorough process that contributed to methodological rigour and coherence, despite being time-consuming. The rigour of the empirical process followed during construction provided a strong foundation for the screening instrument, ultimately increasing the confidence with which the instrument can be applied in practice.

Acknowledgements

We thank the National Research Foundation (NRF) for financial support of the research project. The research has not been commissioned nor does it represent the opinions of the NRF. No conditions or prohibitions were placed on the study or dissemination protocol because of the funding.

Competing interests

The authors declare that they have no financial or personal relationships that may have inappropriately influenced them in writing this article.

Authors’ contributions

Both authors participated in the conceptualisation, design, composition, writing and critical revision of the manuscript.

Funding

This research was supported via two grants from the NRF. The first grant was awarded in the Thutuka PhD funding track from 2014–2016 and the second grant was awarded in the NRF Sabbatical grant for completion of PhD track in 2018.

Data availability statement

Data sharing is not applicable to this article as no new data were created or analysed in this study.

Disclaimer

This research has not been commissioned nor does it represent the opinions of the NRF or any affiliated agency of the authors.

References

Boulkedid, R., Abdoul, H., Loustau, M., Sibony, O., & Alberti, C. (2011). Using and reporting the Delphi method for selecting healthcare quality indicators: A systematic review. PLoS One, 6(6), e20476. https://doi.org/10.1371/journal.pone.0020476

Braun, V., & Clarke, V. (2006). Using thematic analysis in psychology. Qualitative Research in Psychology, 3(2), 77–101. https://doi.org/10.1191/1478088706qp063oa

Bustin, C. (2007). The development and validation of a social emotional school readiness scale (Doctoral dissertation). University of the Free State.

Creswell, J.W. (2007). Qualitative inquiry and research design: Choosing among five approaches (2nd edn.). Thousand Oaks, CA: Sage.

DeVellis, R.F. (2016). Scale development: Theory and applications (vol. 26). Los Angeles, CA: Sage.

Foxcroft, C.D. (2004). Planning a psychological test in the multicultural South African context. South African Journal of Industrial Psychology, 30(4), 8–15. https://doi.org/10.4102/sajip.v30i4.171

Foxcroft, C.D. (2011). Ethical issues related to psychological testing in Africa: What I have learned (so far). Online Readings in Psychology and Culture, 2(2), 7. https://doi.org/10.9707/2307-0919.1022

Foxcroft, C.D. (2013). Developing a psychological measure. In C. Foxcroft & G. Roodt (Eds.), Introduction to psychological assessment in the South African context (4th edn., pp. 69–81). Cape Town: Oxford University Press.

Gough, D., Oliver, S., & Thomas, J. (Eds.). (2017). An introduction to systematic reviews. London: Sage.

Grant, M.J., & Booth, A. (2009). A typology of reviews: An analysis of 14 review types and associated methodologies. Health Information and Libraries Journal, 26(2), 91–108. https://doi.org/10.1111/j.1471-1842.2009.00848.x

Hasson, F., Keeney, S., & McKenna, H. (2000). Research guidelines for the Delphi survey technique. Journal of Advanced Nursing, 32(4), 1008–1015. https://doi.org/10.1046/j.1365-2648.2000.t01-1-01567.x

Kline, P. (2015). A handbook of test construction (psychology revivals): Introduction to psychometric design. London: Routledge.

Laher, S., & Cockcroft, K. (2014). Psychological assessment in post-apartheid South Africa: The way forward. South African Journal of Psychology, 44(3), 303–314. https://doi.org/10.1177/0081246314533634

Liberati, A., Altman, D.G., Tetzlaff, J., Mulrow, C., Gøtzsche, P.C., Ioannidis, J.P., & Moher, D. (2009). The PRISMA statement for reporting systematic reviews and meta-analyses of studies that evaluate health care interventions: Explanation and elaboration. PLoS Medicine, 6(7), e1000100. https://doi.org/10.1371/journal.pmed.1000100

Munnik, E. (2018). The development of a screening tool for assessing emotional social competence in preschoolers as a domain of school readiness (Doctoral dissertation). University of the Western Cape. Retrieved from http://hdl.handle.net/11394/6099.

Novak, J.D., & Cañas, A.J. (2006). The theory underlying concept maps and how to construct them (Technical Report No. IHMC Cmap Tools 2006-01). Pensacola, FL: Institute for Human and Machine Cognition.

Pokharel, B. (2009). Concept mapping in social research. Tribhuvan University Journal, 26(1), 1–6.

Smith, M.R., Franciscus, G., Swartbooi, C., Munnik, E., & Jacobs W. (2015). The SFS scoring system. In M.R. Smith (Ed., Chair), Symposium on Methodological Rigour and Coherence: Deconstructing the Quality Appraisal Tool in Systematic Review Methodology conducted at the 21st National Conference of the Psychological Association of South Africa, South Africa.

Taguma, J. (2000). Steps in test construction. Paper presented at the Annual Meeting of the Southwestern Psychological Association, 20–22 April, Texas A&M University, Dallas, TX.

Wardlaw, J.M. (2010). Advice on how to write a systematic review. Retrieved from http://www.sbirc.ed.ac.uk/documents/advice%20on%20how%20to%20write%20a%20systematic%20review.pdf.


 
