THE NEED FOR FOCUSING ON LEARNING POTENTIAL


The psychometric measurement of intelligence is nearing its centenary, but in many ways the tests used today still resemble the first intelligence test constructed in the early 1900s. Although this has led people such as Sternberg (1997a) to consider the contemporary cognitive testing industry a glaring exception to the rapid general rate of technological development, it will be shown that some of the ideas of the early theorists and practitioners can only now, with the availability of recent theoretical, psychometric and technological developments, be fully utilised. As in many other sciences, early ideas often provide the basis for further research and development many years later. One such example in psychology, concerning the measurement of intelligence, is the work of Alfred Binet, who, together with his colleague Theodore Simon, developed the first psychometric intelligence test in 1905.

Binet’s legacy

The Binet-Simon test (Binet & Simon, 1915) was an international breakthrough and Binet became the founder figure in intelligence testing. However, the fulfilment of the promise of Binet’s early work on the measurement of intelligence has only now been made possible by the development of item response theory (IRT).

Psychological testing of cognitive ability is generally used to make decisions about individuals and comparisons between people. As indicated in the previous chapter, this field has been beset with controversy, and the nature (heritability) versus nurture (modifiability) debate continues. Standard tests of cognitive ability measure the products of prior learning. Hence they rely heavily on the assumption that all examinees have had comparable opportunities to acquire the skills and abilities being measured. This assumption does not hold, however, when people from different socioeconomic and cultural backgrounds are compared. In the case of people from disadvantaged backgrounds, for example, their abilities are likely to be underestimated, thus jeopardising the goal of fair evaluation (Campione & Brown, 1987; Hugo & Claassen, 1991).

The concept of “general intelligence” or “g” has been used for many years. While many researchers and theorists agree that it is problematic to try to represent intelligence by means of a single score, it is also not practically feasible to attempt to measure all possible kinds of intelligence(s) that can be defined. What exactly should be measured therefore presents a very real dilemma for the development of psychometric instruments for the measurement of cognitive ability. The “g” factor can be extracted from the correlations between any large and diverse collection of mental ability tests; typical IQ scores, obtained from a combination of subtest scores, thus reflect a measure of “g”. Compared with other factors, it explains a large proportion of the total variance in the test scores. IQ scores and “g” have been regarded as useful for explaining differences between individuals in terms of cognitive ability and performance. Standard cognitive tests address the need for measuring instruments that allow both the evaluation of and comparison between individuals in terms of cognitive ability. However, growing dissatisfaction with standard tests of cognitive ability in multicultural settings has focused attention on alternative approaches, such as the dynamic measurement of learning potential.
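To make the extraction of “g” concrete, the following minimal sketch simulates a small battery of subtests and approximates the general factor by the first principal component of their correlation matrix; the data, loadings and sample size are invented for illustration and are not taken from the thesis.

```python
# Illustrative only: simulate six subtests that share a common factor,
# then approximate "g" as the first principal component of their
# correlation matrix.
import numpy as np

rng = np.random.default_rng(0)
n_examinees, n_subtests = 500, 6

g = rng.normal(size=(n_examinees, 1))                  # latent general ability
loadings = np.array([0.80, 0.70, 0.75, 0.60, 0.65, 0.70])
scores = g * loadings + rng.normal(scale=0.5, size=(n_examinees, n_subtests))

R = np.corrcoef(scores, rowvar=False)                  # subtest intercorrelations
eigenvalues, eigenvectors = np.linalg.eigh(R)          # eigenvalues in ascending order

first_pc = eigenvectors[:, -1]                         # largest component ~ "g"
first_pc *= np.sign(first_pc.sum())                    # fix the arbitrary sign

print("proportion of variance explained by g:", round(eigenvalues[-1] / n_subtests, 2))
print("subtest loadings on g:", np.round(first_pc, 2))
```

Because all the simulated subtests correlate positively, the first component accounts for a large share of the total variance, mirroring the observation above that “g” dominates the variance in diverse test batteries.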

Correlations with other cognitive tests

One method of evaluating construct validity is to correlate a test with other tests that measure the same or a similar construct. According to Anastasi and Urbina (1997), the correlations between tests that measure approximately the same general area of behaviour should be moderately high, but not too high, since extensive overlap without an added advantage implies needless duplication. In the case of the LPCAT, the focus on the dynamic measurement of learning potential using only nonverbal, figural item content, together with the test-train-test computerised adaptive test administration, makes a unique contribution. Since the LPCAT learning potential measures lie in the domain of general nonverbal reasoning ability, correlation with existing standard cognitive tests, and in particular with their nonverbal scores, provides useful information about its construct validity.
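The correlational check itself is straightforward, as the following minimal sketch shows; the scores are simulated and the variable names are hypothetical stand-ins for LPCAT and comparison-test data.

```python
# Hypothetical data: correlate a new test's scores with an established
# nonverbal test measuring a similar construct.
import numpy as np
from scipy.stats import pearsonr

rng = np.random.default_rng(1)
ability = rng.normal(size=300)                           # shared latent trait
new_test = ability + rng.normal(scale=0.6, size=300)     # stand-in for the new measure
nonverbal_test = ability + rng.normal(scale=0.7, size=300)

r, p = pearsonr(new_test, nonverbal_test)
print(f"r = {r:.2f}, p = {p:.4f}")
# Following the heuristic above: a moderately high r supports construct
# validity, whereas an r close to 1.0 would suggest the new test merely
# duplicates the existing one.
```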

Correlations with criterion measures

Criterion-related validity refers to the effectiveness of a test in estimating performance on some other outcome measure of the same construct. The two types of criterion-related validity of interest are concurrent validity and predictive validity. A criterion should itself be reliable if it is to be a useful index of what the test measures. The validity coefficient will be diminished to the extent that the reliability of the test or the criterion is low, since the validity coefficient can never exceed the square root of the product of the test reliability and the criterion reliability, that is, r_xy ≤ √(r_xx × r_yy) (Gregory, 1996). There is no general answer to the question of how high a validity coefficient should be, and the less overlap there is in content between the test and the criterion, the lower the expected validity coefficient will be. The latter is important to keep in mind when interpreting the LPCAT results, where performance on a test with nonverbal, figural content is compared with different measures of academic performance. Where possible, these correlations are presented for the total group as well as for the two language groups separately.

The culture-fair measurement of cognitive ability has been a contentious issue for many years. In South Africa, political and social changes in recent times have brought new opportunities and challenges in many spheres, including the field of psychometric testing and specifically cognitive assessment. The need for measures that take the diversity of our population into account and that also make provision for differences in educational and socioeconomic background has been emphasised by researchers, the profession and legislation (Claassen, 1997; Employment Equity Act, 1998; Foxcroft, 1997; Owen, 1998; Shuttleworth-Jordan, 1996; South African Professional Board of Psychology, 1998). There is thus an urgent need for instruments that are designed and developed for cross-cultural use and for which empirical studies of test bias and cultural appropriateness are undertaken. Such instruments need to make allowance for the diversity of the population in terms of educational and socioeconomic background. At the same time, they need to ensure that the scarce resources available for training and development are utilised in such a way that opportunities can be provided to those who have been most disadvantaged, while the standards and success rates of training and development programmes are maintained. With regard to practical considerations, ease of administration and test administration time should be optimised.

Recent developments in cognitive ability testing, which allow for training within the test administration and thereby make the measurement of learning potential possible, seem particularly suitable for multicultural testing of cognitive ability. Such dynamic tests use a test-train-test strategy which, in a manner of speaking, levels the playing field for people from diverse backgrounds and so brings about more equitable testing. A test that includes some training benefits examinees by allowing them to improve on their initial performance. Dynamic testing has increasingly received attention in international and national research. The basic idea is that learning potential encompasses both the present level of performance and the capacity to improve on it when relevant training is provided.
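To make the test-train-test idea concrete, the following minimal sketch contrasts simulated pretest and posttest scores; the data are invented for illustration and do not reflect the LPCAT’s actual scales or scoring rules.

```python
# Illustrative test-train-test logic: the pretest score reflects present
# performance, the posttest score reflects performance after relevant
# training, and the difference indicates capacity to improve.
import numpy as np

rng = np.random.default_rng(2)
pretest = rng.normal(loc=-0.5, scale=1.0, size=100)      # initial ability estimates
gain = np.abs(rng.normal(loc=0.4, scale=0.3, size=100))  # improvement after training
posttest = pretest + gain

print("mean pretest :", round(pretest.mean(), 2))
print("mean posttest:", round(posttest.mean(), 2))
print("mean gain    :", round((posttest - pretest).mean(), 2))
```

The gain scores carry the information that a static, once-off test cannot supply: two examinees with identical pretest scores may differ markedly in how much they improve once training is given.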
While the present project has made certain contributions to the field of dynamic assessment, there are aspects that can be improved upon. These may be addressed by future research.

In the present project, only three of the nine provinces of South Africa were involved in the item analysis test administration. Although the sample sizes were adequate for three-parameter IRT item analysis purposes, and although there is no reason to believe that the pupils from these provinces differ in ability level from pupils in other provinces, a more representative sample, in terms of both provincial and cultural representation, would add to the solidity of the psychometric evidence for the utility of the LPCAT. A more thorough investigation of the effect of training on LPCAT results, using larger and more representative samples, is also recommended. The samples used to investigate this particular aspect of the LPCAT were small, and testing took place during the school examination period, which could have affected the findings. It is furthermore recommended that the LPCAT results be validated against practical training criteria or a combination of academic and practical training results, since the use of academic criterion measures alone does not address the need to identify people for practical training as well.

The effect of language proficiency on academic performance has been clearly indicated in the results of the present project. The effect of providing language proficiency training prior to, or concurrently with, academic training needs to be investigated. Particularly since more than 76 percent of the South African population has an African home language, while the majority of training is provided in English and Afrikaans, such an investigation is important for the planning of future course content. For people who receive training in a language other than their first language, and in which they are not adequately proficient, language proficiency training may enable them to derive the full benefit of the training provided. It is vital to investigate whether the learning potential indicated in a nonverbal, general reasoning domain can be brought to full development in academic performance if the required level of language proficiency is attained. On the whole, the results provided here support the psychometric soundness as well as the internal and external validity of the LPCAT, and addressing the issues mentioned above will provide valuable additional information in the field of dynamic assessment.

South Africa has been through tumultuous times and in the immediate future will need to focus on development at both individual and national level. Training and development have been identified as important priorities. Considering the cost of training, successful outcomes are important, which indicates a need for cognitive assessment to select people for such training. However, with large differences in socioeconomic and educational background hampering the test performance of disadvantaged individuals, and language proficiency further impacting negatively on their performance, the use of standard cognitive assessment instruments seems problematic, however well they may predict future performance. In the spirit of transformation and development, a need was identified to focus on learning potential as a broader concept, one that includes the present level of performance but at the same time takes into consideration the potential future level of performance if further training is provided.
Learning potential measures are regarded as more equitable, since they take into account the differences in background with which examinees come to the testing situation. Given the sociopolitical history of South Africa and the consequent differential impact of education and developmental opportunities on disadvantaged groups, differences between the culture groups can be expected to remain in the foreseeable future. However, as living conditions and educational opportunities improve for the disadvantaged, these differences can at the same time be expected to diminish over time.
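Since both the item analyses and the adaptive administration of the LPCAT rest on the three-parameter (3PL) IRT model mentioned above, a minimal sketch of the 3PL item characteristic curve is included here as a point of reference; the parameter values in the example are arbitrary.

```python
# Three-parameter logistic (3PL) model: probability of a correct response
# given ability theta, item discrimination a, difficulty b and
# pseudo-guessing c (logistic metric, scaling constant D omitted).
import math

def p_correct_3pl(theta: float, a: float, b: float, c: float) -> float:
    """P(theta) = c + (1 - c) / (1 + exp(-a * (theta - b)))."""
    return c + (1.0 - c) / (1.0 + math.exp(-a * (theta - b)))

# An examinee of average ability (theta = 0) answering an item of average
# difficulty (b = 0) with discrimination 1.2 and a guessing floor of 0.2:
print(round(p_correct_3pl(theta=0.0, a=1.2, b=0.0, c=0.2), 2))  # -> 0.6
```

In computerised adaptive testing, curves such as this one are used to select, at each step, the item that is most informative at the examinee’s current ability estimate.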


1.1 INTRODUCTION
1.2 THE RESEARCH PROBLEM IN HISTORICAL PERSPECTIVE 
1.3 AIMS OF THE STUDY
1.4 DIVISION OF CHAPTERS
CHAPTER 2 MEASUREMENT OF INTELLIGENCE
2.1 INTRODUCTION
2.2 THE HISTORY OF INTELLIGENCE AND ITS MEASUREMENT
2.3 CHANGES IN INTELLIGENCE TEST SCORES OVER TIME
2.4 CONCLUSION
CHAPTER 3 THE MEASUREMENT OF LEARNING POTENTIAL
3.1 INTRODUCTION 
3.2 THE HISTORY OF DYNAMIC ASSESSMENT 
3.3 THE NEED FOR FOCUSING ON LEARNING POTENTIAL
3.4 VYGOTSKY’S ZONE OF PROXIMAL DEVELOPMENT AS A THEORETICAL BASE
3.5 OPERATIONALISATION OF DYNAMIC ASSESSMENT AND LEARNING POTENTIAL
3.6 DIFFERENT APPROACHES TO DYNAMIC ASSESSMENT AND THE MEASUREMENT OF LEARNING POTENTIAL
3.7 PROBLEM AREAS AND POSSIBLE SOLUTIONS: VYGOTSKY REVISITED
3.8 A PROPOSED NEW APPROACH TO DYNAMIC ASSESSMENT AND THE MEASUREMENT OF LEARNING POTENTIAL
3.9 CONCLUSION
CHAPTER 4 ITEM RESPONSE THEORY AND COMPUTERISED ADAPTIVE TESTING
4.1 INTRODUCTION
4.2 GENERAL FEATURES AND LIMITATIONS OF CLASSICAL TEST THEORY
4.3 A BRIEF HISTORY OF IRT
4.4 PRINCIPLES AND THEORETICAL CONCEPTS OF IRT
4.5 ADVANTAGES OF IRT
4.6 COMPUTERISED ADAPTIVE TESTING (CAT)
4.7 ADVANTAGES OF USING CAT FOR DYNAMIC ASSESSMENT AND THE MEASUREMENT OF LEARNING POTENTIAL
4.8 CONCLUSION
CHAPTER 5 CONSTRUCTION OF THE LEARNING POTENTIAL COMPUTERISED ADAPTIVE TEST (LPCAT)
5.1 INTRODUCTION
5.2 DEFINING THE TEST
5.3 CHOICE OF SCALING METHOD
5.4 LPCAT ITEMS AND PRACTICE EXAMPLES
5.5 ITEM ANALYSIS ADMINISTRATION
5.6 TEST INFORMATION FUNCTION
5.7 CONSTRUCTION OF THE FINAL COMPUTERISED ADAPTIVE LPCAT
5.8 CONCLUSION 
CHAPTER 6 PROCEDURE FOR EVALUATING THE VALIDITY OF THE LPCAT
6.1 INTRODUCTION 
6.2 AN OVERVIEW OF VALIDITY EVALUATION IN GENERAL
6.3 PLANNED VALIDITY EVALUATION OF THE LPCAT
6.4 GENERAL VALIDITY OF THE LPCAT
6.5 VALIDITY OF THE LPCAT-1
6.6 VALIDITY OF THE LPCAT-2 
6.7 ADDITIONAL VALIDITY INFORMATION
6.8 DATA CAPTURING AND STATISTICAL ANALYSIS
CHAPTER 7 VALIDITY RESULTS FOR THE LPCAT
7.1 INTRODUCTION
7.2 EMPIRICAL VALIDITY RESULTS FOR THE LPCAT-1
7.3 EMPIRICAL VALIDITY RESULTS FOR THE LPCAT-2
7.4 INTEGRATED SUMMARY OF LPCAT VALIDITY RESULTS FOR GROUPS 1 TO 5 
7.5 ADDITIONAL EVIDENCE FOR THE VALIDITY OF THE LPCAT 
CHAPTER 8 DISCUSSION AND RECOMMENDATIONS
8.1 INTRODUCTION
8.2 MEASUREMENT OF INTELLIGENCE
8.3 MEASUREMENT OF LEARNING POTENTIAL
8.4 IRT AND CAT
8.5 CONSTRUCTION OF THE LPCAT
8.6 PROCEDURE FOR EVALUATING THE VALIDITY OF THE LPCAT
8.7 DISCUSSION AND INTERPRETATION OF THE RESULTS
8.8 CRITICAL EVALUATION AND RECOMMENDATIONS
8.9 CONCLUSION
REFERENCE LIST
APPENDIX A
APPENDIX B
APPENDIX C 

THE CONSTRUCTION AND EVALUATION OF A DYNAMIC COMPUTERISED ADAPTIVE TEST FOR THE MEASUREMENT OF LEARNING POTENTIAL
