Mastery or mere results?
Once again the Caribbean Secondary Education Certificate (CSEC) results have been published, showing Jamaica’s performance in the regional exams. These results have become perennial indicators of our educational growth and development. Let me say from the outset that public examinations, such as those provided by the Caribbean Examinations Council (CXC), can provide useful accountability data and enhance teaching and learning. Still, I wish to ask certain questions about whether the rise in CSEC results reflects mastery of the criterion domains to the desired level.
Based on the Education 20/20 publication of the 2014 CSEC results, there has been a clear rise in CSEC passes in English and mathematics. That is, students in our public schools have shown signs of improvement in these key areas. The traditional high schools are still in the lead, and the upgraded high schools are steadily improving.
The Ministry of Education has added its voice to this “encouraging improvement” in the performance of the nation’s children. Several school leaders, teachers, and students shared their secrets for continued or improved success. In light of the many challenges that face our educational system, such as limited funding and resources, the CSEC results do show positive signs of improvement.
Criterion-referenced measurement
CXC uses criterion-referenced measurement. What matters in this approach is the inference drawn about, or the interpretation of, a test taker’s score. It is not about the tests and the resultant rankings, but about the score-based inferences. This distinction is important for our understanding of these results.
Many definitions of criterion-referenced measurement exist, but the one which best describes CXC’s experience is given by Glaser and Nitko (1971, p 653). They define a criterion-referenced test as one “that is deliberately constructed so as to yield measurements that are directly interpretable in terms of specified performance standards. The performance standards are usually specified by defining some domain of tasks that the student should perform. Representative samples of tasks from this domain are organised into the test. Measurements are taken and are used to make a statement about the performance of each individual relative to that domain.”
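To make that definition concrete, here is a minimal sketch, in Python, of the procedure it describes: a hypothetical domain of 50 skills, a 10-item test sampled from it, and a score interpreted relative to the domain. The domain, the sample size, and the student’s responses are all illustrative assumptions, not drawn from any actual CXC syllabus.

```python
import random

# Hypothetical criterion domain: 50 discrete skills a student should be
# able to perform. This is an illustrative stand-in, not a CXC syllabus.
domain = [f"skill_{i:02d}" for i in range(1, 51)]

# A representative sample of tasks from the domain is organised into the test.
random.seed(1)
test_items = random.sample(domain, 10)

# Suppose the student answers seven of the ten sampled items correctly.
items_correct = set(test_items[:7])

# The measurement is interpreted relative to the domain itself: an estimate
# of the proportion of the domain the student has mastered.
estimated_mastery = len(items_correct) / len(test_items)
print(f"Estimated domain mastery: {estimated_mastery:.0%}")
```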
There are also norm-referenced measurement methods, which aim chiefly at comparing test takers with one another. That is, each test taker’s score is compared with, or “referenced to”, the scores earned by previous test takers, usually known as the norm group (Popham 2014). The CSEC results, by contrast, are about ascertaining whether test takers have mastered the criterion domains to the desired level. However, it is not the norm for CSEC results to be associated with mastery of the criterion domains which define these scores.
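The contrast between the two approaches can also be sketched in a few lines of Python. The scores, the 70-mark mastery cut, and the norm group below are invented for illustration and do not reflect CXC’s actual standards; the point is only how the same scores are read under each approach.

```python
# Made-up scores for three test takers; the 70-mark mastery cut score and
# the norm-group scores below are illustrative assumptions, not CXC standards.
scores = {"Ann": 82, "Bob": 65, "Cam": 74}

# Criterion-referenced reading: compare each score with a fixed performance
# standard defined by the criterion domain.
MASTERY_CUT = 70
for name, score in scores.items():
    status = "mastered" if score >= MASTERY_CUT else "not yet mastered"
    print(f"{name} has {status} the domain (score {score})")

# Norm-referenced reading: compare each score with the scores of a norm
# group, reporting how the test taker ranks rather than what was mastered.
norm_group = [55, 60, 62, 68, 71, 75, 80, 88]
for name, score in scores.items():
    pct = 100 * sum(s < score for s in norm_group) / len(norm_group)
    print(f"{name} outperformed {pct:.0f}% of the norm group")
```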
Even in the Education 20/20 publication, this connection was not made. It is the position of this writer that the rise in CSEC results must be interpreted in terms of mastery of the criterion domains to the desired level. For example, what role does mastery of the criterion domains play in understanding the CSEC results? What about informing the teaching and learning process in our schools? How does mastery of the criterion domains guide school leaders and teachers in delivering the knowledge and skills within the curricula?
What is the link between criterion-referenced measurement and teaching and learning?
Ranking schools by their English and mathematics passes, without paying close attention to whether test takers mastered the criterion domains to the desired level, is seriously misguided. The purpose of criterion-referenced measurement is to tie down the skills or knowledge being assessed so that teachers can target instruction. Thus, criterion-referenced measurement revolves around clear descriptions of what a test is measuring. If teachers possess a clear picture of what their students are supposed to be able to do when instruction is over, those teachers will be more likely to design and deliver properly focused instruction (Popham 2014).
At present, it would appear that the teaching and learning process has been informed by strategies other than mastery of the criterion domains. So, if the criterion domains are not driving the teaching and learning process, what is? What is guiding instruction, classroom assessment, and student learning? Could it be a situation where what is tested influences what is taught?
Is it that teachers are teaching to the test, that is, reallocating time and resources mainly toward tested content? Do teachers neglect the criterion domains in their delivery? Does instruction narrow because the demands of securing high results in these exams hang in the balance? Do past papers become the standard mode of instruction, instead of the knowledge and skills stipulated by the curricula?
Also, do teachers realise that the criterion domains are not only the basis for deciding whether test takers have achieved mastery to the desired level, but are the very elements that give the CSEC exams their distinctive characteristics? Are we depriving our children of the opportunity for an authentic learning experience by side-stepping the criterion domains?
It behoves school leaders, teachers, and all other stakeholders to understand the importance of criterion-referenced measurement in relation to the CSEC results and, by extension, the teaching and learning process. It is paramount that they ensure that students come to a working understanding of mastery of the criterion domains within the context of CSEC and its implications for building critical skills, innovation, and creativity.
Even though some might be of the view that an increase in CSEC results automatically means mastery of the criterion domains to the desired level, this is not so. We must remember that CSEC results are about the inference drawn from, or the interpretation of, the test takers’ scores in relation to the criterion domains, without regard to the distribution of scores achieved by other test takers.
Let us be mindful that an increase in CSEC results does not necessarily mean mastery of the criterion domains. For the CSEC results to stay true to their objectives, they must reflect score-based inferences, not tests. It is mastery of the criterion domains to the desired level that drives high-quality CSEC results, not mere passes within a range and comparisons among schools. This should be the desire of all education stakeholders.
Oswald Leon Jr is an educational measurement specialist and a CXC assistant examiner. Send comments to: oswald.leon@yahoo.com.
The interpretation of a test taker’s scores is important to the understanding of whether learning has been effective.