CHAPTER 2 METHODOLOGY OVERVIEW OF THE PIRLS AND prePIRLS 2011 STUDY

Secondary analysis of the prePIRLS 2011 data was used in this study to investigate the effect of language of instruction and contextual factors on learner performance on the higher-order reading comprehension processes. The national project was conducted at the Centre for Evaluation and Assessment at the University of Pretoria under the auspices of the International Association for the Evaluation of Educational Achievement (IEA), the international governing body. The IEA has conducted regular international assessments of reading literacy for more than 50 years (Mullis & Martin, 2013), having pioneered international comparative assessment of educational achievement in the 1960s to gain a deeper understanding of policy effects across countries’ different systems of education.
This chapter describes the research methods used in the international study: instrument development, sampling, data collection, scoring, data capture, and the scaling of the data to produce plausible values. The background to the prePIRLS 2011 project is outlined first (2.1), followed by the international research design (2.2) and the PIRLS and prePIRLS reading purposes and reading comprehension processes (2.3). Section 2.4 explains the PIRLS contextual questionnaire framework, and the research methods are then outlined (2.5).

2.1 BACKGROUND TO prePIRLS 2011

PIRLS (Progress in International Reading Literacy Study) was inaugurated in 2001 as a follow-up to the IEA’s 1991 Reading Literacy Study. Conducted at five-year intervals, PIRLS 2011 is the third assessment in the current trend series, following PIRLS 2001 and PIRLS 2006, with PIRLS 2016 underway at the time of writing.
In 2011, nationally representative samples of learners in 50 education systems, a total of 325 000 learners, participated in the various assessment options included in PIRLS. All of the countries, institutions and agencies involved work collaboratively on each round of the assessment, which is directed by the PIRLS International Study Center at Boston College. PIRLS focuses on the achievement of learners in their fourth year of schooling and on the experiences they have at home and at school in learning to read. It assesses learners at this point because it is here that they have learned how to read and are now reading to learn. PIRLS therefore assesses how well learners can read with understanding and use their reading to gain new insights and information (Mullis & Martin, 2013).
The prePIRLS assessment was initiated for countries whose performance in the previous studies had been low. It was developed to give learners in low-performing countries an assessment pitched at a more appropriate level than PIRLS, so that their levels of reading literacy could be ascertained (Howie et al., 2012). PrePIRLS reflects the same conception of reading as PIRLS but is less difficult, testing the basic reading skills that are a prerequisite for PIRLS (Mullis & Martin, 2013). The prePIRLS 2011 instruments use simpler vocabulary than PIRLS 2011; the texts are easier and shorter, with simpler grammar and syntax, and place less emphasis on higher-order reading skills (Mullis, Martin, Foy & Drucker, 2012).

2.2 THE INTERNATIONAL RESEARCH DESIGN FOR prePIRLS 2011

PIRLS 2011 has a survey research design. Its conceptual framework places a child’s reading literacy development within a specific context: the framework (Figure 2.1) represents the relationship between the national and community reading literacy contexts, as well as the interaction between the home and school contexts within which learners’ reading behaviour and attitudes develop. Reading outcomes are a result of these relationships and in turn have an effect on the national context. The home-school context lies within specific community and national contexts, and the design of the study takes into account that reading literacy develops, and may be enhanced, because of these relationships (Howie et al., 2012).
In order to provide an important context for interpreting the reading achievement results, background information is published together with the PIRLS achievement results. Learners, their parents, their teachers, and their school principals are asked to complete questionnaires about their home, school, and classroom contexts for learning to read (Mullis & Martin, 2013).
The PIRLS 2011 study requires the target grade to be the one that represents four years of schooling, counting from the first year of ISCED Level 1 (Mullis, Martin, Kennedy, Trong & Sainsbury, 2009). All participating countries included a study population that fulfilled this criterion. Because the sample needed to be representative across the eleven language groups, the South African Grade 4 prePIRLS sample was particularly large. The Grade 5 sample in PIRLS was much smaller, as it only included learners in schools in which the Language of Learning and Teaching (LoLT) in Grades 1-3 was Afrikaans and/or English (Howie et al., 2012).
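To illustrate the kind of stratification this implies, the sketch below shows a proportional allocation of a school sample across language strata. It is a minimal illustration only: the stratum names and counts are hypothetical, and the actual prePIRLS 2011 design was a two-stage stratified cluster sample rather than a simple proportional draw.

```python
# Minimal sketch of proportional allocation across language strata.
# Stratum counts and the total sample size are hypothetical; the actual
# prePIRLS 2011 design sampled schools first, then intact classes.

schools_per_stratum = {  # hypothetical counts of eligible schools
    "Afrikaans": 1200,
    "English": 1500,
    "isiZulu": 3800,
    "isiXhosa": 3100,
    # ... the remaining language strata would be listed here
}
total_sample = 400  # hypothetical total number of schools to draw

total_schools = sum(schools_per_stratum.values())
allocation = {
    stratum: round(total_sample * count / total_schools)
    for stratum, count in schools_per_stratum.items()
}
print(allocation)  # schools to sample per stratum, proportional to size
```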
The PIRLS 2011 Assessment Framework (Mullis, Martin, Kennedy, Trong & Sainsbury, 2009) illustrates what is meant by the contexts that develop children’s reading literacy (Figure 2.1).

The prePIRLS 2011 reading scales

Having been developed specifically for a developing context, the prePIRLS assessment is a more accessible test than the PIRLS assessment, though it follows the same design. It consists of eight reading passages and accompanying questions, and each learner writes only part of the assessment according to a systematic booklet assembly and rotation procedure. Following data collection, learner responses are placed on a common reading achievement scale using item response theory methods, providing an overall picture of the assessment results for each country. The PIRLS achievement scale was established in 2001 so that 100 points on the scale equalled one standard deviation across all of the countries that participated in 2001, and the scale midpoint of 500 equalled the international average across those countries. Using passages administered in both the 2001 and 2006 assessments as a basis for linking the two sets of results, the 2006 data was also placed on this scale so that countries could gauge changes in learners’ reading achievement since 2001. Following a similar procedure, the PIRLS 2011 data was likewise placed on the PIRLS scale. The prePIRLS 2011 scales for the purposes of reading include:

  • Overall reading literacy scale,
  • Reading for literary experience, and
  • Reading to acquire and use information.

In addition, the two scales for the processes of reading comprehension are:

  • Retrieving explicitly stated information, and
  • Inferencing, interpreting and evaluating (Mullis & Martin, 2013).
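As a rough illustration of how such a scale metric is set, the sketch below standardises a set of latent ability estimates and maps them onto a scale with midpoint 500 and standard deviation 100. This is a deliberate simplification: the operational scaling sets the metric across all 2001 countries using weighted item response theory estimates, not a plain z-transformation of a small sample.

```python
import statistics

# Minimal sketch: map latent ability estimates (thetas) onto a reporting
# scale with midpoint 500 and standard deviation 100, as in the PIRLS
# metric. The thetas below are invented for illustration.
thetas = [-1.2, -0.4, 0.0, 0.3, 0.9, 1.5]

mu = statistics.mean(thetas)
sigma = statistics.stdev(thetas)

scale_scores = [500 + 100 * (t - mu) / sigma for t in thetas]
print([round(s) for s in scale_scores])
```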

The scale ‘Retrieving explicitly stated information’ consists of items from the Focus on and retrieve explicitly stated information comprehension process. The scale ‘Inferencing, interpreting and evaluating’ combines items from the Make straightforward inferences, Interpret and integrate ideas and information, and Examine and evaluate content, language and textual elements processes.
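Concretely, this amounts to a many-to-one mapping from the four comprehension processes onto the two reporting scales, as sketched below with invented item labels.

```python
# Mapping from the four comprehension processes to the two reporting
# scales described above. Item labels are invented for illustration.
process_to_scale = {
    "Focus on and retrieve explicitly stated information":
        "Retrieving explicitly stated information",
    "Make straightforward inferences":
        "Inferencing, interpreting and evaluating",
    "Interpret and integrate ideas and information":
        "Inferencing, interpreting and evaluating",
    "Examine and evaluate content, language and textual elements":
        "Inferencing, interpreting and evaluating",
}

items = [  # (invented item label, its comprehension process)
    ("item_01", "Focus on and retrieve explicitly stated information"),
    ("item_02", "Make straightforward inferences"),
    ("item_03", "Interpret and integrate ideas and information"),
]

by_scale = {}
for item_id, process in items:
    by_scale.setdefault(process_to_scale[process], []).append(item_id)
print(by_scale)  # items grouped under their reporting scale
```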

Plausible values

PrePIRLS uses a matrix sampling technique for the administration of the assessment. The prePIRLS 2011 assessment consisted of six blocks of reading passages and their accompanying items, amounting to four hours of testing time in total. These six test blocks were distributed across nine booklets, with each block appearing in three booklets to enable linking between the various blocks. Booklets are distributed among learners in participating classrooms so that the groups of learners completing each booklet are approximately equivalent in terms of ability. PIRLS uses item response theory scaling methods to assemble a comprehensive picture of the reading achievement of a country’s entire fourth-grade learner population by pooling individual learners’ responses to the booklets they were assigned (Mullis & Martin, 2013).
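A small sketch can make the rotation concrete. Assuming, purely for illustration, that each booklet carries two of the six blocks (9 booklets × 2 slots = 18 = 6 blocks × 3 appearances), the code below checks that a hypothetical assignment gives every block exactly three appearances and that shared blocks link all booklets onto one chain. The specific pairing is invented; the published technical documentation gives the actual assignment.

```python
from collections import Counter

# Hypothetical booklet design: 9 booklets, each carrying 2 of 6 blocks
# (A-F), so that every block appears in exactly 3 booklets. The pairing
# is invented for illustration only.
booklets = [
    ("A", "B"), ("B", "C"), ("C", "D"), ("D", "E"), ("E", "F"),
    ("F", "A"), ("A", "C"), ("B", "E"), ("D", "F"),
]

# Each block must appear exactly three times for balanced coverage.
counts = Counter(block for pair in booklets for block in pair)
assert all(n == 3 for n in counts.values()), counts

# Shared blocks must chain the booklets together so all responses can be
# linked onto a common scale: treat blocks as nodes, booklets as edges.
reachable = {"A"}
changed = True
while changed:
    changed = False
    for a, b in booklets:
        if (a in reachable) != (b in reachable):  # exactly one end reached
            reachable |= {a, b}
            changed = True
assert reachable == set(counts), "design is not fully linked"
print("balanced and linked:", dict(counts))
```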
Accuracy of measurement at the individual level is improved (conversely, the amount of measurement error is reduced) by increasing the number of items measuring the latent trait. Using the matrix-sampling design, PIRLS requires far fewer responses from each sampled learner while still covering a wide range of content. The plausible values approach uses all available data, learners’ responses to the items they were administered together with all background data, to estimate the characteristics of learner populations and subpopulations directly, and these imputed scores are then used in analyses. As explained above for the scales, the plausible value ‘Inferencing, interpreting and evaluating’ draws on the ‘Make straightforward inferences’, ‘Interpret and integrate ideas and information’ and ‘Examine and evaluate content, language and textual elements’ reading comprehension processes, while the plausible value ‘Retrieving explicitly stated information’ consists of that single reading comprehension process (Mullis & Martin, 2013).
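In practical terms, any analysis of these data has to be run once per plausible value and the results combined. The sketch below shows the standard combination (Rubin’s) rules for a mean estimated from five plausible values. The data are invented; the variable names ASRREA01 to ASRREA05 follow the naming convention of the PIRLS student data files for the overall reading scale, and the unweighted example omits the sampling weights and jackknife replication that a full analysis would apply.

```python
import statistics

# Minimal sketch of combining estimates across five plausible values
# using Rubin's rules. Scores are invented; a real analysis would use
# sampling weights and jackknife replication for the sampling variance.
pv_scores = {
    "ASRREA01": [480.0, 512.3, 455.1, 530.8],  # hypothetical learners
    "ASRREA02": [478.2, 515.0, 452.9, 528.4],
    "ASRREA03": [482.5, 509.7, 457.3, 533.1],
    "ASRREA04": [479.1, 513.8, 454.0, 529.9],
    "ASRREA05": [481.4, 511.2, 456.6, 531.5],
}

m = len(pv_scores)
estimates = [statistics.mean(v) for v in pv_scores.values()]
# Simple (unweighted) sampling variance of each mean: s^2 / n.
samp_vars = [statistics.variance(v) / len(v) for v in pv_scores.values()]

point = statistics.mean(estimates)        # combined point estimate
within = statistics.mean(samp_vars)       # average sampling variance
between = statistics.variance(estimates)  # imputation (between-PV) variance
total_var = within + (1 + 1 / m) * between  # Rubin's total variance

print(f"mean = {point:.1f}, s.e. = {total_var ** 0.5:.2f}")
```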
The Cronbach’s Alpha reliability coefficient for the prePIRLS 2011 instruments is reported as 0.93 (Foy, Martin, Mullis & Stanco, 2011), well within the acceptable range.
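For reference, the coefficient itself is straightforward to compute from item-level scores. The sketch below implements the standard formula, alpha = k/(k-1) * (1 - sum of item variances / variance of total scores), on an invented score matrix; it is not the reported prePIRLS computation, which was run on the full international item data.

```python
import statistics

def cronbach_alpha(scores):
    """Standard Cronbach's alpha: rows are learners, columns are items."""
    k = len(scores[0])  # number of items
    item_vars = [statistics.variance(col) for col in zip(*scores)]
    total_var = statistics.variance([sum(row) for row in scores])
    return (k / (k - 1)) * (1 - sum(item_vars) / total_var)

# Invented 0/1 item scores for five learners on four items.
scores = [
    [1, 1, 1, 0],
    [1, 0, 1, 1],
    [0, 0, 1, 0],
    [1, 1, 1, 1],
    [0, 0, 0, 0],
]
print(round(cronbach_alpha(scores), 2))  # 0.79 for this invented matrix
```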

2.3 PIRLS AND prePIRLS READING PURPOSES AND COMPREHENSION PROCESSES

The PIRLS 2016 framework informs the processes assessed in this study. PIRLS focuses on two overarching purposes for reading: for literary experience, and to acquire and use information. The assessment integrates four broad-based comprehension processes within each of the two purposes: ‘Focus on and retrieve explicitly stated information’, ‘Make straightforward inferences’, ‘Interpret and integrate ideas and information’, and ‘Evaluate and critique content and textual elements’. The purposes for reading and the processes of comprehension do not function in isolation from one another or from the context in which learners live and learn (Mullis & Martin, 2013).

CHAPTER 1 CONTEXT AND AIM OF STUDY
1.1 INTRODUCTION
1.2 THE RESEARCH PROBLEM
1.3 PURPOSE, SIGNIFICANCE AND MAIN RESEARCH QUESTION OF THE STUDY
1.4 KEY CONCEPTS
1.5 STRUCTURE OF THE THESIS
CHAPTER 2 METHODOLOGY OVERVIEW OF THE PIRLS AND prePIRLS 2011 STUDY
2.1 BACKGROUND TO prePIRLS 2011
2.2 THE INTERNATIONAL RESEARCH DESIGN FOR prePIRLS 2011
2.3 PIRLS AND prePIRLS READING PURPOSES AND COMPREHENSION PROCESSES
2.4 THE PIRLS CONTEXTUAL QUESTIONNAIRES
2.5 RESEARCH METHODS IN prePIRLS 2011
CHAPTER 3 LITERATURE REVIEW: LANGUAGE, CULTURAL CAPITAL AND TESTING HIGHER ORDER READING COMPREHENSION
3.1 THEORETICAL BACKGROUND OF LANGUAGE OF INSTRUCTION IN SCHOOLS
3.2 CULTURAL CAPITAL AND ACCESS TO DOMINANT DISCOURSE
3.3 THE IMPACT OF SOCIO-ECONOMIC STATUS ON LEARNING
3.4 UNDERSTANDING HIGHER-ORDER THINKING
3.5 TESTING HIGHER-ORDER READING COMPREHENSION
3.6 COGNITIVE AND METACOGNITIVE READING STRATEGIES
3.7 THE DEVELOPMENT OF METACOGNITION THROUGH SOCIAL PROCESSES
3.8 THE CONNECTION BETWEEN READING AND WRITING FOR HIGHER-LEVEL LANGUAGE DEVELOPMENT
3.9 VALIDITY IN LANGUAGE AND LITERACY ASSESSMENT
3.10 SUMMARY OF THE LITERATURE REVIEW
CHAPTER 4 CONCEPTUAL FRAMEWORK AND METHODOLOGY FOR THE STUDY
4.1 CONCEPTUAL FRAMEWORK
4.2 RESEARCH DESIGN AND SPECIFIC RESEARCH QUESTIONS
4.3 RESEARCH PARADIGM
4.4 RESEARCH METHODS
4.5 ISSUES OF VALIDITY AND RELIABILITY
4.6 ETHICAL CONSIDERATIONS
CHAPTER 5 EXPLORING CONSTRUCT VALIDITY OF THE HIGHER-ORDER READING COMPREHENSION PROCESSES IN CONTEXT
5.1 VALIDATING THE CONSTRUCT ‘HIGHER-ORDER READING COMPREHENSION’
5.2 RELIABILITY ANALYSIS OF THE READING PROCESSES
5.3 EXAMINING THE VALIDITY OF THE CONSTRUCT
5.4 A CONTENT ANALYSIS OF OUTLIER ITEMS
5.5 EXAMINING ITEM DIFFICULTY
5.6 FACTORS INFLUENCING ITEM DIFFICULTY
5.7 MODELLING THE PROCESSES TO THE HIGHER- AND LOWER-ORDER SUBSCALES
5.8 COMPARING ITEM DIFFICULTY FOR ENGLISH L1 AND ENGLISH L2 LEARNERS
5.9 DISCUSSION OF THE VALIDITY OF THE HIGHER-ORDER AND LOWER-ORDER CONSTRUCT
CHAPTER 6 COMPARING THE EFFECT OF LANGUAGE OF INSTRUCTION ON LEARNER PERFORMANCE
6.1 COMPARISON OF THE LANGUAGE OF INSTRUCTION MODELS
6.2 THE EFFECT ON PERFORMANCE OF THE DIFFERENT LANGUAGE OF INSTRUCTION MODELS
6.3 COMPARING PERFORMANCE OF AFRICAN LANGUAGES L1 AND ENGLISH L2 LEARNERS
6.4 COMPARING THE EFFECT OF ACCESS TO TEXT AT HOME AND SCHOOL ON PERFORMANCE
6.5 COMPARING PERFORMANCE FOR AFRICAN LANGUAGES L1 AND ENGLISH L2 LOWEST SES GROUPS
6.6 EXAMINING THE EFFECT OF ENGLISH L2 INSTRUCTION AND ACCESS TO TEXT ON PERFORMANCE IN LOW SES SCHOOLS
6.7 DISCUSSION OF THE COMPARISON OF THE DIFFERENT LANGUAGE OF INSTRUCTION MODELS
CHAPTER 7 EXPLORING THE EFFECT OF THE LANGUAGE OF INSTRUCTION AND HOME AND SCHOOL FACTORS ON PERFORMANCE
7.1 THE RESULTS OF THE NULL MODEL ACROSS THE LANGUAGE GROUPINGS
7.2 THE EFFECT ON PERFORMANCE OF ATTENDING AN AFRICAN LANGUAGE, ENGLISH AND AFRIKAANS LOLT SCHOOL
7.3 DESCRIPTION OF THE HOME AND SCHOOL MEDIATORY CONTEXTS
7.4 COMPARING THE EFFECT OF THE LANGUAGE OF INSTRUCTION GROUPINGS WITH THE MEDIATORY CONTEXTS
7.5 COMPARING THE EFFECT OF HOME LANGUAGE INSTRUCTION ON PERFORMANCE
7.6 COMPARING THE EFFECT OF ENGLISH L2 AND AFRICAN LANGUAGES L1 INSTRUCTION
7.7 DISCUSSION OF THE EFFECT OF DIFFERENT LANGUAGES OF INSTRUCTION AND MEDIATORY CONTEXTS
CHAPTER 8 FINDINGS, CONCLUSIONS AND RECOMMENDATIONS
8.1 SUMMARY OF THE RESEARCH
8.2 SUMMARY OF KEY FINDINGS
8.3 REFLECTIONS ON THE CONCEPTUAL FRAMEWORK
8.4 REFLECTIONS ON THE METHODOLOGY
8.5 MAIN CONCLUSIONS
8.6 RECOMMENDATIONS
8.7 CLOSING THOUGHTS
REFERENCES
APPENDICES