For some time now, many educators have contended that our current school-year-based system, in which students are expected to accomplish a certain set of objectives in a certain arbitrary unit of time, is fundamentally broken. One current, and controversial, approach that seeks to fix it is “competency-based learning.” A recent thought-paper published by iNACOL lists three components of such systems:
- Students advance upon mastery
- Explicit and measurable learning objectives are established that empower students
- Assessment is a meaningful and a positive learning experience for students
The main idea here is that students advance as they master new concepts and gain relevant new knowledge, rather than pass or fail a course based on whether they have mastered the unit in, say, 10 weeks. Individual standards for progress, smaller, more modular units of learning, tests that lead to recommendations instead of just diagnoses — all represent a dramatic shift in the way we’re used to approaching education, but they also raise questions about implementation. How do we measure students’ “mastery” as they progress through a course? How can we ensure that recommendations are accurate, and that they actually lead to learning improvements that let students advance?
Adaptive learning — being able to provide the right instruction, of the right type, to each student, at the right time — is on the verge of providing a solution.
Adaptivity has long been an unrealized dream of educators, but over the past decade advances in technology, cognitive psychology, and educational measurement have converged to enable a directed learning experience that is unique to each student.
People sometimes confuse adaptive testing and adaptive learning. Although the two can be used together, they are not the same thing. Adaptive testing generally works by selecting the next test question, based on a student’s prior performance, in such a way as to minimize the measurement error of an exam. Adaptive learning works by selecting the next learning object, so a student’s experience is tailored in a way that maximizes potential learning gains.
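To make the distinction concrete, here is a minimal sketch of the item-selection step in adaptive testing. It assumes a simple Rasch (1PL) response model and a toy item bank of my own invention; real adaptive tests use richer models and constraints, but the core idea, picking the unadministered question that is most informative at the current ability estimate, is the same.

```python
import math

def rasch_prob(theta, difficulty):
    """Probability of a correct response under the Rasch (1PL) model."""
    return 1.0 / (1.0 + math.exp(-(theta - difficulty)))

def item_information(theta, difficulty):
    """Fisher information of an item at ability theta; higher values
    mean the item does more to reduce measurement error."""
    p = rasch_prob(theta, difficulty)
    return p * (1.0 - p)

def next_item(theta, item_difficulties, administered):
    """Select the most informative remaining item -- the core loop
    of an adaptive test."""
    candidates = [i for i in range(len(item_difficulties))
                  if i not in administered]
    return max(candidates,
               key=lambda i: item_information(theta, item_difficulties[i]))

# Toy item bank: difficulties on the same logit scale as ability.
bank = [-2.0, -1.0, 0.0, 1.0, 2.0]
print(next_item(0.3, bank, administered={2}))  # prints 3
```

Under this model, information peaks when item difficulty matches the student's ability, so with the medium item already used, the selector reaches for the next-closest difficulty. Adaptive learning swaps the objective: instead of minimizing measurement error, the scoring function would rank candidate learning objects by expected learning gain.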
Being able to recommend to a student, or parent, or instructor what a student should do next depends in part on being able to estimate, at varying grain sizes, what level of proficiency a student has currently attained. This sounds obvious, because it is. But the mechanisms by which one arrives at those estimates are anything but obvious.
At Knewton, our adaptive learning platform builds on the work of many top-notch researchers in the field of educational measurement, some of whom we’ll profile in later posts. One productive area of research over the past several years has been the creation and evaluation of a variety of psychometric models that, based on a student’s performance on a set of activities or assessments, support fine-grained identification of a student’s strengths and weaknesses.
These models have been given various labels: diagnostic classification models, cognitive diagnosis models, multiple classification models, and the like. What they all have in common is that they provide statistical frameworks for transforming responses to test questions into concept-level profiles that indicate a student’s areas of strength and weakness. These profiles, when coupled with other information about a student, and other information about the relationships between various lessons, can then be used to identify which lessons will provide the most value to a given student, with a particular profile, at a given time.
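The flavor of these models can be illustrated with a deliberately simplified sketch. The Q-matrix below (which items require which concepts), the item names, and the mastery threshold are all illustrative assumptions, not any particular published model; real diagnostic classification models estimate concept mastery statistically rather than by raw proportion correct.

```python
# Toy concept-profile estimator in the spirit of diagnostic
# classification models. The Q-matrix maps each item to the
# concepts it requires (an illustrative assumption).
Q_MATRIX = {
    "q1": {"fractions"},
    "q2": {"fractions", "decimals"},
    "q3": {"decimals"},
    "q4": {"ratios"},
}

def concept_profile(responses, q_matrix, threshold=0.6):
    """Turn item responses (True = correct) into a per-concept
    strength/weakness flag, based on the fraction of relevant
    items answered correctly."""
    totals, correct = {}, {}
    for item, concepts in q_matrix.items():
        if item not in responses:
            continue  # unseen items contribute no evidence
        for c in concepts:
            totals[c] = totals.get(c, 0) + 1
            correct[c] = correct.get(c, 0) + int(responses[item])
    return {c: correct[c] / totals[c] >= threshold for c in totals}

profile = concept_profile(
    {"q1": True, "q2": True, "q3": False, "q4": False}, Q_MATRIX)
print(profile)  # fractions mastered; decimals and ratios flagged weak
```

A profile like this is exactly the kind of fine-grained signal that, combined with the relationships between lessons, lets a recommendation engine pick the lesson with the most value for that student at that moment.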
The shift to competency-based learning is already underway. Whether it makes its way into classrooms in this decade is an open question. But whether it becomes available to students is not — the technical and technological advances I mentioned earlier now enable us to create online adaptive learning environments that provide not only the above-mentioned key components, but also a tailored, engaging, and enriching experience, to any student with online access.