
How does Knewton’s Proficiency Model estimate student knowledge in alta?
Michael Binger | April 2, 2018
Accurately estimating a student’s knowledge is one of the core challenges of adaptive learning.
By understanding what a student knows and doesn’t know, adaptive learning technology can deliver a learning experience that helps the student achieve mastery. Understanding a student’s knowledge state is also essential for delivering accurate, useful analytics to students and instructors.
We refer to our data-driven mathematical model for estimating a student’s knowledge state as Knewton’s Proficiency Model. This model lies at the core of our ability to deliver lasting learning experiences to students using alta.
How does our Proficiency Model estimate student knowledge? Answering that question begins by looking at its inputs, which include:
- The observed history of a student’s interactions, including which questions the student answered correctly and incorrectly, the instructional material they studied, and when they performed these activities.
- Content properties, such as the difficulty of a question the student is answering.
- The structure of the Knewton Knowledge Graph, in particular the prerequisite relationships between learning objectives.
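In code, one might picture these inputs as simple records. The sketch below is purely illustrative — the class and field names are ours, not Knewton’s actual schema.

```python
from dataclasses import dataclass
from datetime import datetime
from typing import Optional

# Hypothetical record shapes for the model's inputs.
# Names are illustrative only, not Knewton's actual schema.

@dataclass
class Interaction:
    student_id: str
    item_id: str              # a question or a piece of instructional material
    learning_objective: str   # the Knowledge Graph node the item addresses
    correct: Optional[bool]   # None for instructional material (nothing to answer)
    timestamp: datetime       # when the student performed the activity

@dataclass
class ItemProperties:
    item_id: str
    difficulty: float         # calibrated from many students' responses
```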
The model’s outputs represent the student’s proficiencies in all of the learning objectives in the Knowledge Graph at a given point in time. So what’s in between the inputs and the outputs?

Knewton’s Proficiency Model
A basis in Item Response Theory
The foundation for our Proficiency Model is a well-known educational testing theory known as Item Response Theory (IRT).
One important aspect of IRT is that it benefits from network effects — that is, we learn more about the content and the students interacting with it as more people use the system. When a student answers a difficult question correctly, the model’s estimated proficiency for that student should be higher than it would be if the student had correctly answered an easy question. But how can we determine each question’s difficulty level? Only by observing how large numbers of diverse students performed when responding to those questions.
With this data in hand, we can infer student proficiency — or weakness — more accurately and efficiently, and deliver content that is targeted and effective.
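To make this concrete, here is a minimal sketch of the standard two-parameter logistic (2PL) IRT model — textbook IRT, not Knewton’s production code. A student’s proficiency and an item’s difficulty together determine the probability of a correct response:

```python
import math

def p_correct(theta: float, difficulty: float, discrimination: float = 1.0) -> float:
    """2PL IRT: probability that a student with proficiency `theta`
    answers an item of the given difficulty correctly."""
    return 1.0 / (1.0 + math.exp(-discrimination * (theta - difficulty)))

# A correct answer on a hard item is more surprising, so it should move
# our proficiency estimate more than a correct answer on an easy item:
print(p_correct(theta=0.0, difficulty=2.0))   # ~0.12: unlikely unless proficiency is high
print(p_correct(theta=0.0, difficulty=-2.0))  # ~0.88: expected even at average proficiency
```

The difficulty parameter here is exactly what the network effect calibrates: the more students answer an item, the better its difficulty estimate becomes.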
Moving beyond the limits of IRT
Because IRT was designed for adaptive testing — an environment in which a student’s knowledge is assumed to remain fixed — it does not meet all of the requirements of adaptive learning, an environment in which the student’s knowledge is continually changing. In a model based on IRT, a student’s older responses carry the same weight in the proficiency estimate as their more recent responses. While this is fine in a testing environment, in which students aren’t typically provided feedback or instruction, it becomes a problem in an adaptive learning environment.
In an adaptive learning environment, we inherently expect that students’ knowledge will change. As a result, we want to give more weight to recent responses than older ones — allowing for the possibility of an “Aha!” moment along the way.
To correct for the limitations of IRT, Knewton has built temporal models that weight a student’s recent responses more heavily than their older ones when determining proficiency, providing a more accurate and dynamic picture of the student’s knowledge state.
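One simple way to realize this idea is an exponential decay on the age of each response. The sketch below is our illustration of the principle — the half-life and the scoring rule are assumptions, not Knewton’s actual temporal model:

```python
import math
from typing import List, Tuple

def weighted_proficiency(
    responses: List[Tuple[float, bool]],  # (age_in_days, was_correct)
    half_life_days: float = 7.0,          # assumed decay rate, for illustration
) -> float:
    """Crude proficiency score in [0, 1]: a decay-weighted fraction of
    correct responses, so recent answers dominate older ones."""
    decay = math.log(2) / half_life_days
    num = sum(math.exp(-decay * age) * correct for age, correct in responses)
    den = sum(math.exp(-decay * age) for age, _ in responses)
    return num / den if den else 0.5  # no data: fall back to a neutral estimate

# An "Aha!" moment: early wrong answers followed by recent correct ones.
history = [(30.0, False), (20.0, False), (3.0, True), (1.0, True)]
print(round(weighted_proficiency(history), 2))  # ~0.9: recent successes dominate
```

An unweighted model would score this history at 0.5; the decay-weighted version recognizes that the student’s recent work reflects their current knowledge state.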
Accounting for relationships between learning objectives
Adaptive learning requires constant, granular assessment on multiple learning objectives embedded in the learning experience. However, traditional IRT also does not account for the relationships between learning objectives. As discussed above, these relationships are an important part of the Knewton Knowledge Graph.
To remedy this shortcoming of IRT, Knewton has developed a novel way to incorporate these relationships in a Bayesian modeling framework, allowing us to combine prior beliefs about proficiency on related topics with the evidence provided by the student’s responses. This leads to so-called proficiency propagation, or the flow of proficiency throughout the Knowledge Graph.
What does this look like in practice? If, in the Knowledge Graph below, a student is making progress toward the learning objective of “Solve word problems by subtracting two-digit numbers,” our Proficiency Model infers a high proficiency on that learning objective. The model also infers a high proficiency on the related learning objectives (“Subtract two-digit numbers” and “Subtract one-digit numbers”), even without direct evidence. The basic idea: If two learning objectives are related and a student masters one of them, there’s a good chance the student has also mastered the others.

A Knewton Knowledge Graph.
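The toy sketch below makes this concrete for the three objectives in the example. It pushes proficiency along prerequisite edges with a fixed attenuation per hop — Knewton’s actual Bayesian update is more sophisticated, so treat the damping factor and the traversal as our own simplification:

```python
from collections import deque
from typing import Dict, List

# Toy prerequisite graph from the example above. Edges point from a
# prerequisite to the objective that builds on it.
PREREQ_OF: Dict[str, List[str]] = {
    "subtract one-digit numbers": ["subtract two-digit numbers"],
    "subtract two-digit numbers": ["solve word problems by subtracting two-digit numbers"],
}

def propagate(evidence: Dict[str, float], damping: float = 0.8) -> Dict[str, float]:
    """Spread proficiency from objectives with direct evidence to their
    prerequisites, attenuating by `damping` per hop. A full Bayesian
    treatment would compute a posterior; this is only a sketch."""
    # Reverse the edges: mastering an objective implies its prerequisites.
    implies: Dict[str, List[str]] = {}
    for prereq, dependents in PREREQ_OF.items():
        for dep in dependents:
            implies.setdefault(dep, []).append(prereq)

    proficiency = dict(evidence)
    queue = deque(evidence.items())
    while queue:
        node, score = queue.popleft()
        for prereq in implies.get(node, []):
            inferred = score * damping
            if inferred > proficiency.get(prereq, 0.0):
                proficiency[prereq] = inferred
                queue.append((prereq, inferred))
    return proficiency

# Strong evidence on the word-problem objective lifts both subtraction skills,
# even though the student was never directly assessed on them.
print(propagate({"solve word problems by subtracting two-digit numbers": 0.9}))
```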
The effectiveness of Knewton’s Proficiency Model
The many facets of the Proficiency Model – IRT-based network effects, temporal effects, and the Knowledge Graph structure – combine to produce a highly accurate picture of a student’s knowledge state. We use this picture to provide content that will increase that student’s level of proficiency. It’s also the basis of the actionable analytics we provide to students and instructors.
How effective is the Proficiency Model in helping students master learning objectives? In his post “Interpreting Knewton’s 2017 Student Mastery Results,” fellow Knerd Andrew D. Jones presents data showing that Knewton’s Proficiency Model helps students achieve mastery — and that mastery, as determined by the Proficiency Model, has a positive impact on students’ academic performance.