Latent trait
One of the basic assumptions of the Rasch model is that a relatively stable latent trait underlies test results (Boone & Rogan, 2005). For this reason, the model is also sometimes called the ‘latent trait model’. Latent trait models focus on the interaction of a person with an item, rather than on total test score (Wright & Stone, 1979). They use total test scores, but the mathematical model commences with a modelling of a person’s response to an item. They are concerned with how likely a person $v$ of ability $\beta_v$ on the ‘latent trait’ is to answer correctly, or partially correctly, an item $i$ of difficulty $\delta_i$.
The latent trait, or theoretical construct of concern to the tester, is an underlying, unobservable characteristic of an individual which cannot be measured directly but which explains the scores attained on a specific test pertaining to that attribute (Andrich & Marais, 2006). For instance, in this study, the latent trait is the mathematical performance of first-year tertiary students.
When items are conceived of as located, according to difficulty level, along a latent trait, the number of items a person answers correctly can vary according to the difficulties of the particular items included in the test. The relationship between person ability and total score is not linear. This non-linearity means that test scores are not on an interval scale unless the items are evenly spaced in terms of difficulty. With a test designed according to the strategy of traditional test theory this would be unlikely to be the case, because of the tendency to pick items clustered around the middle of the difficulty range, with only a few out towards the 0.8 and 0.2 levels of difficulty.

In latent trait models, the construct or latent trait is conceived as a single dimension along which items can be located in terms of their difficulty ($\delta_i$) and persons can be located in terms of their ability ($\beta_v$). If the person’s ability $\beta_v$ is above the item’s difficulty $\delta_i$, we would expect the probability of the person answering item $i$ correctly to be greater than 0.5, i.e. if $(\beta_v - \delta_i) > 0$, then $P\{\chi_{vi} = 1\} > 0.5$. If the person’s ability is below the item’s difficulty, we would expect the probability of a correct response to be less than 0.5, i.e. if $(\beta_v - \delta_i) < 0$, then $P\{\chi_{vi} = 1\} < 0.5$. In the intermediate case, where the person’s ability and the item’s difficulty are at the same point on the scale, the probability of a successful response would be 0.5, i.e. if $(\beta_v - \delta_i) = 0$, then $P\{\chi_{vi} = 1\} = 0.5$.
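These three cases follow from the functional form of the dichotomous Rasch model, which in the notation above can be stated (a standard formulation; cf. Wright & Stone, 1979) as

\[
P\{\chi_{vi} = 1\} = \frac{e^{\beta_v - \delta_i}}{1 + e^{\beta_v - \delta_i}},
\]

so that $(\beta_v - \delta_i) > 0$, $= 0$ and $< 0$ yield success probabilities above, at, and below 0.5 respectively.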
Family of Rasch models
The responses of individual persons to individual items provide the raw data. Through the application of the Rasch model, raw scores undergo a logarithmic transformation that renders an interval scale with equal intervals, expressed in log-odds units, or logits (Linacre, 1994). The Rasch model takes the raw data and produces from them item calibrations and person measures, resulting in the following (a brief sketch of the underlying log-odds transformation follows this list):
● valid items which can be demonstrated to define a variable
● valid response patterns which can be used to locate persons on the variable
● test-free measures that can be used to characterise persons in a general way
● linear measures that can be used to study growth and to compare groups (Bond & Fox, 2007).
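As an illustrative sketch only (the helper name below is hypothetical and not part of the cited sources), the log-odds transformation underlying these logit measures can be written in a few lines of Python; actual Rasch estimation iterates well beyond this first step, calibrating items and measuring persons jointly:

```python
import math

def raw_score_to_logit(raw_score: int, max_score: int) -> float:
    """Convert a raw test score into log-odds units (logits).

    Illustrative only: real Rasch estimation jointly adjusts these
    values for the difficulties of the items actually taken.
    Assumes 0 < raw_score < max_score (extreme scores have no
    finite logit).
    """
    p = raw_score / max_score        # proportion of items correct
    return math.log(p / (1 - p))     # natural-log odds: the "logit"

print(raw_score_to_logit(30, 40))   # ln(0.75/0.25) ≈ 1.0986 logits
print(raw_score_to_logit(20, 40))   # ln(1.0) = 0.0 logits
```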
Through the years the Rasch model has been developed into a family of models, addressing not only dichotomies but also, inter alia, rating scales and partial credit.
Rasch-Andrich rating scale model
Andrich (as cited in Linacre, 2007, p. 7), in a conceptual breakthrough, recognised that a rating scale, for example a Likert-type scale, could be considered as a series of Rasch dichotomies. Linacre (2007) makes the point that, as in Rasch’s original dichotomous model, a person’s ability or attitude is represented by $\beta_v$, whereas $\delta_i$ is the item difficulty or the ‘difficulty to endorse’. The difficulty or endorsability value is the ‘balance point’ of the item according to Bond and Fox (2007, p. 8), and is situated at the point where the probability of observing the highest category is equal to the probability of observing the lowest category (Linacre, 2007).
In the Rasch-Andrich rating scale model, a Rasch-Andrich threshold, $F_x$, is also located on the latent variable. This ‘threshold’ or ‘step’ is, according to Linacre (2005), the point on the latent variable (relative to the item difficulty) where the probability of being observed in category $x$ equals the probability of being observed in the previous category $x-1$. A threshold, in other words, is the transition between two categories. Wright and Mok (in Smith & Smith, 2004) are of the opinion that if Likert scale items have the same response categories, it is quite reasonable to assume that the thresholds would be the same for all items.
According to Linacre (2005), the Rasch-Andrich rating scale model specifies the probability, $P_{vix}$, that person $v$ of ability $\beta_v$ is observed in category $x$ of a rating scale applied to item $i$ with difficulty level $\delta_i$, as opposed to the probability $P_{vi(x-1)}$ of being observed in category $x-1$. In a Likert scale, $x$ could represent ‘Strongly Agree’ and $x-1$ would then be the previous category, ‘Agree’.
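In this notation, a standard way of writing the Rasch-Andrich rating scale model (following Linacre’s formulation, with symbols as above) is as the log-odds of adjacent categories:

\[
\log\!\left(\frac{P_{vix}}{P_{vi(x-1)}}\right) = \beta_v - \delta_i - F_x .
\]

When $(\beta_v - \delta_i) = F_x$, the two adjacent categories are equally probable, which is precisely the threshold property described above.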
CHAPTER 1: INTRODUCTION
1.1 Purpose of study
1.2 Statement of problem
1.3 Significance of the study
1.4 Context of this study
1.5 Outline of study
CHAPTER 2: LITERATURE REVIEW
2.1 Terminology
2.2 The changing nature of university assessment in the South African context
2.3 Assessment models in mathematics education
2.4 Assessment taxonomies
2.5 Assessment purposes
2.6 Shifts in assessment
2.7 Assessment approaches
2.8 Question formats
2.9 Constructed response questions and provided response questions
2.10 Multiple choice questions
2.11 Good mathematics assessment
2.12 Good mathematics questions
2.13 Confidence
CHAPTER 3: RESEARCH DESIGN AND METHODOLOGY
3.1 Research design
3.2 Research questions
3.3 Qualitative research methodology
3.4 Quantitative research methodology
3.5 Reliability, validity, bias and research ethics
CHAPTER 4: QUALITATIVE INVESTIGATION
4.1 Qualitative data analysis
4.2 Qualitative investigation
CHAPTER 5: THEORETICAL FRAMEWORK
5.1 Mathematics assessment components
5.2 Defining the parameters
5.3 Model for measuring a good question
CHAPTER 6: RESEARCH FINDINGS
6.1 Qualitative data analysis
6.2 Data description
6.3 Component analysis
6.4 Results
CHAPTER 7: DISCUSSION AND CONCLUSIONS
7.1 Good and poor quality mathematics questions
7.2 A comparison of PRQs and CRQs in the mathematics assessment components
7.3 Conclusions
7.4 Addressing the research questions
7.5 Limitations of study
7.6 Implications for further research
REFERENCES