While it’s relatively straightforward to make simple differentiated learning apps, it’s extremely difficult and expensive to build proficiency-based adaptive learning apps. The difference is data — specifically, each student’s concept-level proficiency data.
A true model of proficiency can estimate what students know, how prepared they are for further instruction or assessment, and how their abilities will evolve over time. Concept-level proficiency data is not a record of what a student did (that is covered by observable metrics like time taken and test scores), but a measure of what the system is confident the student knows, at a granular level. Collecting such data requires large pools of “normed” content, which in turn requires infrastructure to passively, algorithmically, and inexpensively norm content at scale, as well as infrastructure to make sense of and act on the resulting data.
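To make the idea of concept-level proficiency concrete, here is a minimal sketch using Bayesian Knowledge Tracing, a standard technique for estimating what a student knows from observed responses. The parameter names and values are illustrative assumptions, not Knewton’s actual model:

```python
# Illustrative sketch: Bayesian Knowledge Tracing (BKT), a common way to
# estimate concept-level proficiency from a sequence of responses.
# All parameters here are hypothetical defaults, not Knewton's model.

def bkt_update(p_know, correct, p_slip=0.1, p_guess=0.2, p_learn=0.15):
    """Update the probability that a student knows a concept
    after observing one response (correct or incorrect)."""
    if correct:
        num = p_know * (1 - p_slip)
        den = num + (1 - p_know) * p_guess
    else:
        num = p_know * p_slip
        den = num + (1 - p_know) * (1 - p_guess)
    posterior = num / den
    # Account for the chance the student learned the concept
    # during this practice opportunity.
    return posterior + (1 - posterior) * p_learn

# Proficiency evolves as evidence accumulates:
p = 0.3  # prior belief that the student knows the concept
for response in [True, True, False, True]:
    p = bkt_update(p, response)
```

The key point the sketch illustrates: proficiency is an inferred, continuously updated belief about the student, not a raw log of what she did.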
Building a self-contained adaptive app with these infrastructures is difficult, expensive, functionally constrained, and unscalable. Knewton addresses this conundrum. Knewton has built the necessary infrastructures to gather, multiply, process, and act on student proficiency data. Anyone who wants to build true adaptive learning apps can plug into the Knewton network and build on top of our infrastructures, rather than having to do the complex work themselves.
Today, the Knewton platform comprises three main parts:
- Data Collection Infrastructure: Collects and processes huge amounts of proficiency data.
- Inference Infrastructure: Expands the data set and generates insights from the collected data.
- Personalization Infrastructure: Harnesses the combined data power of the entire network to find the optimal strategy for each student for every concept she learns.
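The three parts above can be pictured as stages in a pipeline. The sketch below is a deliberately simplified illustration of that flow; every name and the naive averaging model are hypothetical, not Knewton’s API or algorithms:

```python
# Hypothetical pipeline sketch of the three infrastructures.
# Names and logic are illustrative only.

from dataclasses import dataclass

@dataclass
class StudentEvent:
    student_id: str
    concept: str
    correct: bool

def collect(events):
    """Data Collection: group raw response events by (student, concept)."""
    grouped = {}
    for e in events:
        grouped.setdefault((e.student_id, e.concept), []).append(e.correct)
    return grouped

def infer(grouped):
    """Inference: derive a (crude) proficiency estimate per concept."""
    return {key: sum(obs) / len(obs) for key, obs in grouped.items()}

def personalize(proficiencies, student_id):
    """Personalization: recommend the student's weakest concept next."""
    mine = {c: p for (s, c), p in proficiencies.items() if s == student_id}
    return min(mine, key=mine.get) if mine else None

events = [
    StudentEvent("amy", "fractions", True),
    StudentEvent("amy", "fractions", False),
    StudentEvent("amy", "decimals", True),
]
next_concept = personalize(infer(collect(events)), "amy")  # → "fractions"
```

In a real system each stage would be far more sophisticated, but the shape is the same: raw events in, proficiency estimates in the middle, per-student recommendations out.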