In most classrooms, every student gets the same assignments, which a teacher has usually planned out months or weeks in advance. Students in classrooms that use Knewton-powered adaptive products have a totally different experience: we are able to figure out what each student in a class knows, and what she’s struggling with. Given this information, and the goals she’s working towards in a given class, Knewton recommends the best activities for that individual student to work on next, in real time. So in a class of 30 students, every student might be concentrating on different types of questions at various levels of difficulty while working toward the same goal.

It’s understandably difficult for most teachers, students, and parents to picture what that might look like in practice.

To show what Knewton actually does, let's look at a few interactive animations that demonstrate the power of Knewton adaptive learning. These visualizations represent real students using a Knewton-powered course.

For the first example, let’s take a look at three real elementary-school students in the same class. Student privacy is important to us, so we don’t know these students’ names; however, to make it easier to talk about them, let’s call the student on the left Amy, the student in the middle Bill, and the student on the right, Chad.

All three students are working toward the same goals: interpreting multiplication equations (green), multiplying by 1-digit numbers (yellow), multiplying by 2-digit numbers (blue), and solving multiplication word problems (pink).

Mastering these four concepts also depends on some prerequisite topics (shown in grey).

For instance, students need to understand "multiplying by 1-digit numbers" before they can learn to "multiply by 2-digit numbers."
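These prerequisite relationships can be pictured as a small directed graph. Here is a minimal sketch — the graph below is hypothetical and only illustrates the idea, not Knewton's actual knowledge graph:

```python
# Hypothetical prerequisite graph for this example (not Knewton's actual model).
# Each concept maps to the concepts a student should master first.
prerequisites = {
    "interpret multiplication equations": [],
    "multiply by 1-digit numbers": [],
    "multiply by 2-digit numbers": ["multiply by 1-digit numbers"],
    "solve multiplication word problems": ["multiply by 1-digit numbers",
                                           "multiply by 2-digit numbers"],
}

def ready_to_learn(concept, mastered):
    """A concept is ready to learn once all of its prerequisites are mastered."""
    return all(p in mastered for p in prerequisites[concept])
```

Under this sketch, a student who has mastered 1-digit multiplication is ready for 2-digit multiplication, but not yet for word problems.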

The video below shows how each student progresses through the course material.

When you press the play button, boxes will appear under each student’s graph. Each box represents a different question that the student answered. The color of the box tells you what concept that question was from. Green check marks represent questions that were answered correctly on the first try, while red x’s represent questions that were answered incorrectly on the first try. Use the back and forward buttons to skip through the students’ histories.

Notice how Knewton recommends the same first three questions to all three students? That’s because this is their first time using a Knewton-powered product and we’re trying to learn a little bit about their current level of knowledge.

Let’s dig a bit into why Knewton makes the recommendations it does for each individual student. On the third question, for example, we see that Bill is struggling with multiplying by 2-digit numbers. While Amy and Chad move on to a new topic (interpreting multiplication equations), Bill keeps working on multiplying by 2-digit numbers. That’s differentiation in action!

On the fourth question, Knewton notices that Chad seems to be struggling with interpreting multiplication equations, so he’ll continue to work on questions related to that concept, while Amy continues through the assignment.

Notice how Knewton guides Bill and Chad to keep working on the concept that they’re struggling with until they start to understand and get questions right, eventually mastering that concept.

Knewton’s differentiated instruction lets students focus on the concepts that they are struggling with, rather than bogging them down with busywork on topics on which they have already demonstrated understanding. In addition to supporting students who are falling behind with targeted remediation, Knewton also gives advanced students the opportunity to move forward at their own pace — Amy is a great example of this.

In the second example, we illustrate how Knewton takes a student’s past work into account when providing recommendations. Again, for privacy reasons we don’t know these students’ names or where they’re from, but for this example, we’ll call the student on the left Mary and the student on the right Joe.

Like the students in the first example, Mary and Joe are also working to master multiplication word problems. However, whereas all three of the students from Example 1 were in the same class, Mary and Joe are in different classes. On a previous day, Mary was assigned lessons that covered the three prerequisite concepts. Her responses to those questions are shown as grey boxes. This is Joe’s first interaction with Knewton, so he has no prerequisite work.

When both students start working on this goal, Mary immediately begins on the goal concepts. Joe, meanwhile, starts with the prerequisites, because Knewton first verifies that he is proficient in them before teaching him the goal concepts. Given what Knewton already knows about Mary and Joe, it recommends prerequisites to Joe but not to Mary — the system knows that she’s already mastered those earlier concepts!

Even though Mary misses more questions than Joe on this assignment, she is able to finish it faster because of her earlier prerequisite work. Joe gets extra help because he hasn’t learned the prerequisites yet.

In this way, Knewton again helps students who have demonstrated proficiency to finish their work quickly — so they have more time to learn new things. We also make sure that students who are new to the material feel comfortable with prerequisite concepts, ensuring that they are primed to excel at their goal.

These two examples are just a snapshot of the way Knewton continuously targets and differentiates instruction to meet every student’s needs. With the help of Knewton-powered products, teachers can ensure that every student works on material that supports in-class instruction and helps them move forward toward learning goals.

These days, when people talk about artificial intelligence, there’s a lot of excitement around deep learning. AlphaGo, the algorithmic player that defeated 9-dan Go master Lee Sedol, incorporates deep learning, which meant that its programmers didn’t need to hand-craft Go strategy. They gave AlphaGo a large number of Go games, and it worked out strong play on its own.

Deep learning has also shown impressive results in areas from computer vision to bioinformatics to linguistics. Deep learning helps Facebook understand the words people post there in more than 20 languages, and Amazon uses it to have conversations through Echo.

So deep learning is proving to be a popular way to understand how people write, speak, see, and play, but how good is it at modeling how people learn?

Last year, a team led by Chris Piech of Stanford University applied deep learning to this problem, training a recurrent neural network in an approach they call Deep Knowledge Tracing. The idea is that, just as AlphaGo didn’t need to be taught how to play, Deep Knowledge Tracing can make sense of what’s being learned without human help. Using a public data set from ASSISTments, which guides students through math problem-solving, Deep Knowledge Tracing showed promising initial results.

There are other ways of modeling what students know. Item Response Theory, for example, has been around since the 1950s. It has been extended over the last decade to incorporate how people learn over time as well as expert human knowledge about the hierarchy of concepts being learned.
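The specific models in the comparison aren't spelled out here, but the core of classic Item Response Theory is compact enough to sketch. In the one-parameter (Rasch) model, the probability of a correct answer depends only on the gap between the student's proficiency and the item's difficulty:

```python
import math

def p_correct(theta, b):
    """One-parameter (Rasch) IRT model: probability that a student with
    proficiency `theta` answers an item of difficulty `b` correctly."""
    return 1.0 / (1.0 + math.exp(-(theta - b)))
```

When proficiency exactly matches difficulty, the model predicts a 50% chance of a correct answer. The temporal and hierarchical extensions mentioned below build on this core by letting proficiency change over time and by linking related concepts.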

What’s the best way to predict what students know and don’t know, based on their previous answers and interactions?

Four Knewton data scientists — Kevin Wilson, Yan Karklin, Bojian Han, and Chaitanya Ekanadham — took a closer look at Deep Knowledge Tracing, comparing it with three models of how people learn built upon Item Response Theory. In addition to a classic Item Response Theory (IRT) model, the Knewton data science team used a temporal IRT model (called TIRT in the accompanying charts) and a hierarchical one (shown as HIRT).

The Knewton team used three collections of anonymized student interaction data: the ASSISTments set, the Bridge to Algebra 2006–2007 set from the KDD Cup, and millions of anonymized student interactions collected by Knewton.

With all three data sets, the Knewton team found that the Item Response Theory methods “consistently matched or outperformed” Deep Knowledge Tracing. Not only were the Item Response Theory approaches better at predicting what people know, they were easier to build and tune, “making them suitable candidates for real-world applications” such as adaptive learning platforms.

With Deep Knowledge Tracing, meanwhile, the Knewton team “found that computation time and memory load were prohibitive when training on tens of thousands of items. These issues could not be mitigated by reducing dimensionality without significantly impairing performance.”

In other words, deep learning still has a way to go to match established ways of modeling student learning.

For more details, read “Back to the Basics: Bayesian extensions of IRT outperform neural networks for proficiency estimation” or visit the International Educational Data Mining Society conference in Raleigh on July 1.

And if you want to reproduce our results, you can find code, links to the data sets, and instructions on GitHub.

For the first time in six years, the U.S. Department of Education has issued a National Education Technology Plan. So much has changed since its last comprehensive overview of the state of educational technology. As the latest plan puts it, the conversation has shifted from *whether* technology should be used in learning to *how* it can improve learning to ensure that all students have access to high-quality educational experiences.

I’ve witnessed a similar shift in my five and a half years at Knewton. In the early days, when Knewton was alone in the field, we had to explain what adaptive learning was and that it was actually possible to implement. Now most education companies — and everyone from Barack Obama to Mark Zuckerberg — are talking about the value of personalized education, and terms like adaptive and personalized get applied to a wide spectrum of approaches and tools.

Likewise, educators and publishers from around the world now understand the promise and value of adaptive learning, and they see that its moment has arrived. We’ve seen particular interest from the advanced education markets of northern Europe and Asia. In China, for example, where national exams can determine your future, 7% of disposable income is spent on education, as compared to 2% of disposable income in the U.S. Chinese students turn to supplemental programs, both in person and online.

Take 17zuoye, a digital learning platform for K–12 students that began as an extracurricular learning application. Now teachers recommend 17zuoye as a supplement to the work they do in class. With an audience of 14 million students and 700 thousand teachers, 17zuoye has turned to Knewton to make its math and English language training programs adaptive.

Global interest in adaptive learning is also reflected in our most recent round of funding, which included investment from TAL Education Group, another Chinese K–12 education company, and EDBI, the corporate investment arm of the Singapore Economic Development Board. This infusion of capital is helping Knewton grow our teams to work closely with our global partners as they ready their products for launch.

Our partners serve everyone from Chinese children learning English to Spanish-speaking teenagers learning algebra or adults preparing for the Brazilian bar exam. These education companies are eager to release their adaptive learning products, to learn from the market and make the most of this moment of opportunity. To get their products to market more quickly, they are turning to companies like Knewton and also making connections with each other. For example, Gyldendal Denmark has partnered with Gyldendal Norway (a completely independent company) to bring the Norwegian publisher’s adaptive learning materials to students in Denmark. Our established Norwegian partner gains distribution, while Gyldendal Denmark can bring adaptive learning to market more quickly and at lower cost.

The shift to digital educational materials from printed textbooks is a global phenomenon, and we’re seeing plenty of examples here in the United States, from Khan Academy to AltSchool. With Pearson starting to integrate Knewton into K–12 products, the big three American educational publishers have all acknowledged that adaptive learning is essential to their future. And our partner Waggle, which makes a smart, responsive practice application for grades 2–8, has shown truly impressive results, improving outcomes from Oklahoma City to West Palm Beach to a high-poverty urban school in Baltimore.

Educational institutions are also taking the initiative in bringing adaptive learning to their students. The Florida Virtual School, which is America’s first and largest online public school district, will use Knewton to power adaptive course materials for more than 200,000 students over the next few years.

We’re also hearing from every kind of institution of higher education, from public and private universities with four-year programs to community colleges to the for-profit sector. Administrators and faculty alike are hungry for learning products with a high-quality user interface, diverse and deep content, and data-driven adaptive learning. With more than 4,000 degree-granting institutions in the United States and international students flocking to study here, this country is well positioned to lead the way in adaptive learning at the post-secondary level and help more people fulfill their potential and find their way in the working world.

It has been fascinating to see the different ways adaptive learning has taken root in different places, and I look forward to seeing how it develops as its adoption spreads and accelerates and as examples from around the world inform and inspire each other. One thing is for certain: As the report from Washington, D.C., says, adaptive learning is no longer a matter of *whether*. It’s all about *how*.

Measuring how much time each student spends on each item helps teachers better understand how their students are working and whether they are engaged by the material.

When students are disengaged, their interactions with Knewton can sometimes reflect that. For example, when students move through coursework much faster than they usually do but without better performance, they might not be paying much attention, and are just clicking to get to the next thing. We call such behavior “spamming.”

Everything is relative: If a student is generally a fast worker, a fast response doesn’t count as spamming. Similarly, we take into account how long we expect each item to keep a student’s attention. Spending 28 seconds watching a 30-second video is generally considered working, while 28 seconds on a 5-minute video might suggest something else.

Once Knewton’s algorithm establishes a baseline expectation for each student and each piece of content, we can look at how long each interaction took and, by way of illustration, assign it a working-or-spamming probability.
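Knewton's actual model isn't described here, but the idea can be sketched with a toy heuristic: compare each response time against the time the content is expected to hold that student's attention.

```python
def spam_probability(response_seconds, expected_seconds):
    """Toy heuristic (not Knewton's actual algorithm): the further a
    response falls below the expected engagement time, the more likely
    it is spamming. Returns a score in [0, 1]."""
    ratio = response_seconds / expected_seconds
    return max(0.0, min(1.0, 1.0 - ratio))
```

With the examples above: 28 seconds on a 30-second video scores near 0 (working), while 28 seconds on a 5-minute video scores above 0.9 (likely spamming).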

Since Knewton is integrated into different learning applications aimed at different age groups, we wanted to see whether spamming rates vary by grade level. They don’t appear to: users working on elementary school materials (most of them, presumably, elementary school students) and users of college coursework answered unexpectedly quickly at about the same rate. We also looked at whether students were more likely to spam on certain days of the week, but we didn’t see a Friday effect with spamming like we do with performance.

In addition, we examined whether different types of questions affected how likely students were to spam a particular question. The Knewton open platform, for example, has some questions that are multiple choice and others that require students to type in an answer. Working interactions were almost evenly split between free response and multiple choice, but spamming occurred disproportionately on free-response questions — students were roughly three times as likely to spam on them.

Understanding this kind of spamming behavior augments our sense of what keeps each individual student engaged and, more broadly, contributes to our understanding of how students interact with learning applications. Knowing whether students are working productively enables Knewton and its partners to make better applications that help students remain in a learning flow and help spamming students get back on track.

Further analysis shows that Knewton-powered adaptive assignments for struggling students narrow the gap between them and high-performing students on subsequent assignments. Closing this gap is one of the biggest challenges instructors can face in the classroom.

In “Reducing the Gap: How Adaptive Follow-Ups Help Struggling Students,” Hillary Green-Lerman and Kevin Wilson of Knewton looked at 48,202 students who used an online homework tool for college-level science textbooks in the spring of 2014. Beyond ordinary homework assignments, students who didn’t show mastery of the concepts they were learning received adaptive follow-up assignments powered by Knewton. These adaptive assignments present a personalized sequence of questions designed to address each student’s individual strengths and weaknesses.

Our research team found that students who were assigned an adaptive follow-up after struggling on a first assignment showed improvements of between four and 12 percentage points on subsequent assignments relative to their classmates who did not have adaptive follow-up assignments.

Students with lower scores have more room to improve than high-performing ones, so Green-Lerman and Wilson corrected for initial differences in grade distributions between the higher- and lower-performing students. Even with this correction, they saw an average improvement of three points, and as much as eight points:

The online homework tool discussed in this study makes use of only a small portion of what Knewton can do to improve learning outcomes. Knewton’s research team continues to validate the efficacy of adaptive learning, and plans to continue to share its findings.

To read the full study, sign up below to download “Reducing the Gap.”

Santillana has named the product A2O, a play on the concept of *aprendizaje líquido*, or liquid learning: the idea that students can flow through their lessons in their own way. A2O adapts in real-time to address each student’s needs and helps teachers see exactly where students need support. Analytics give teachers unprecedented visibility into the learning process.

Teachers participating in the pilot over the next six months will be incorporating A2O into their algebra curriculum. Students will use A2O in the classroom and at home. The pilot covers four to six weeks of secondary-level algebra, and Santillana hopes to expand to other topics in math and other subjects.

Throughout the product development process, the Santillana team has received invaluable insights and feedback from teachers in A2O’s target market. We’re incredibly proud of the work Santillana has done and eager to see the product in the hands of as many students and teachers as possible. Interested teachers can still sign up for the pilot.

With over 28 million students in 22 countries, Santillana has an unparalleled understanding of Spanish and Latin American education. Still, teacher and student feedback will be invaluable when Santillana expands its adaptive learning offering.

In October 2014, Knewton announced a partnership with Santillana, the leading educational publisher in Spain and Latin America, to develop adaptive learning products for the Spanish-speaking world. Santillana is now launching a pilot of an algebra product for secondary-level students aged 12 and 13 and their teachers.


The Knewton data science team went back to look at those nearly 13,000 elementary school students to examine how the runup to a holiday affects student performance.

Because of the Friday effect, the best comparison for Thanksgiving week scores is to how the same students did on all other Mondays, Tuesdays, and Wednesdays that year. For non-holiday weeks, their average score during the beginning of the week was 74 percent. Before Thanksgiving, it was 67 percent.

That’s an even larger drop in performance than the Friday effect.

Now let’s see if there’s a similarly large drop on the Fridays before a long weekend. Might students show a similar slump before a shorter holiday?

The chart below shows average student scores on Fridays before a two-day weekend as well as on the Fridays before Martin Luther King Jr. Day and Memorial Day.

There’s almost no difference between a normal Friday and the one before Martin Luther King, Jr., Day. However, there is a significant drop in average score before Memorial Day.

The final hypothesis we tested was whether students’ scores changed before Winter Break. For each weekday, the average score for the rest of the school year is shown in blue and the average score for the week before Winter Break in green.

The fall-off gets worse each day. By the Friday before Winter Break, students are scoring more than 15 percentage points lower than they would on an ordinary Friday — or about 20 points lower than on the average weekday.

Having observed billions of interactions involving millions of students, Knewton can test how timing affects student performance. In order to design a reliable study, the data science team focused on a single product to guarantee a uniform population: similar ages, coursework, and context. So we chose a supplemental online tool for learning elementary school math in the United States. In the 2014–15 school year, 12,695 students used this tool.

When you look at how many math problems they did each day, you can see the rhythms of the week and the school year:

We can also see the rhythms of the school day. The next chart shows, hour by hour, how many questions students answered on an average weekday. The dark line represents the mean number of questions answered, and the lighter areas represent the standard error of the mean — i.e., how much that number tends to vary.
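For reference, the standard error of the mean is just the sample standard deviation divided by the square root of the number of observations. A minimal sketch, with made-up hourly counts:

```python
import math

def standard_error(values):
    """Standard error of the mean: sample standard deviation / sqrt(n)."""
    n = len(values)
    mean = sum(values) / n
    variance = sum((v - mean) ** 2 for v in values) / (n - 1)  # sample variance
    return math.sqrt(variance / n)

# Hypothetical question counts for one hour across five weekdays:
counts = [120, 130, 125, 115, 135]
```

The shaded band in the chart is the hourly mean plus or minus this quantity.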

Most of the extra math work happened during the school day, between 8 a.m. and 3 p.m., with a substantial tail of activity until 9 p.m.

That rhythm isn’t the same each day: On Fridays, for instance, almost no one used the math program after 3 p.m.:

Now that you know a little bit about our dataset, let’s come back to our original question. When students did these online math problems, how often were their answers correct, and did their percentage of correct answers vary from day to day? The chart below shows a simple average of students’ scores on each weekday.

These students scored an average of about five percentage points lower on Fridays. (Notice that the y-axis begins at 64 percentage points rather than at zero.)

This chart, however, doesn’t say whether the *same* students scored better on Mondays and worse on Fridays. Or is one set of students working on Mondays and scoring well, while a different set of students works on Fridays and scores poorly? In the following animation, each colored line represents the usage patterns of a randomly selected student:

Our initial assumption was that students would use computers at the same time each week. In fact the data show that students tended to go online once a week, but on a different day each week. Their varied schedules offer a convenient way to explore whether the observed “Friday effect” can be explained by differences in the students, or by the day of the week.

So we looked at the students who, over the course of the year, did these math problems at least once on each weekday:

The results of this analysis are similar to what we saw already: Students didn’t perform as well on Fridays.

One final way to verify the measurement is to compute the average score for each student who had activity on all five weekdays and compare that to her average score for a particular weekday. For instance, a student might get an overall average score of 70 percent, but on Wednesdays, her average score might be 72 percent. That means that, on Wednesdays, she scores 2 percentage points higher than her average across all weekdays. Let’s call it her *normalized score difference*.
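In code, the normalized score difference might look like this sketch (the data layout is hypothetical):

```python
def normalized_score_differences(scores_by_weekday):
    """For one student, compare each weekday's average score to the
    student's average across all weekdays, in percentage points.
    `scores_by_weekday` maps weekday name -> list of scores (0-100)."""
    day_means = {day: sum(s) / len(s) for day, s in scores_by_weekday.items()}
    overall = sum(day_means.values()) / len(day_means)
    return {day: mean - overall for day, mean in day_means.items()}
```

For the student in the example — 70 percent overall but 72 percent on Wednesdays — the Wednesday entry comes out to +2 percentage points.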

Here is the average of normalized score differences for all the students:

Something is happening on Fridays. Students scored somewhere between 4 and 5 percentage points lower than on other weekdays. We don’t know exactly why, however.

We were startled by the strength of the Friday effect and the clarity of the data. While these results hold true for these particular students, preliminary analysis of several other groups suggests that the Friday effect may be more widespread. So doing work on a Thursday instead of a Friday could be the difference between an A- and a B+.

It’s never easy to show the efficacy of a given set of instructional materials, since many other factors contribute to how students learn, from how the teacher runs the classroom to what students had for breakfast. Beyond that, the impact of Knewton can be difficult to isolate, since Knewton adaptive learning technology is only one component of any of the digital learning products we power.

As Knewton powers more products and serves more personalized recommendations, however, our data scientists can identify more examples that show how Knewton improves student outcomes.

In “The Improvement Index: Evaluating Academic Gains in College Students Using Adaptive Lessons,” Illya Bomash and Chris Kish of Knewton look at interactions involving approximately 288,000 students. They found that students performed better on average in college-level science courses with Knewton-powered adaptive assignments than in those without.

During the summer and fall semesters of 2013, these students studied college-level physics, biology, chemistry, and anatomy and physiology. Their textbooks came with a Knewton-powered online homework tool.

Knewton didn’t power the entire tool. Instructors assigned homework as usual for the students to complete online. On top of that, however, instructors had the option of enabling Knewton-powered adaptive follow-up assignments for students whose homework didn’t show mastery of the scientific concepts.

During these semesters, about 6,400 courses used the online homework tool. About a quarter of those courses offered Knewton-powered adaptive assignments. The rest didn’t.

Bomash and Kish compared these two groups. Their analysis found that students performed better in courses with Knewton-powered adaptive assignments:

The improvement increases with more use of adaptive assignments. We saw a peak average score difference of four percentage points.

They explored several other possible explanations for the difference — perhaps the better performing group were better prepared, or more motivated because they didn’t want extra homework — and ruled those out in their analysis. What’s more, results from courses in 2014 show consistent improvement in scores:

In courses that relied most on Knewton-powered assignments, students spent less than 25 minutes per week on average doing this additional coursework.

To read the full study, sign up below to download “The Improvement Index.”

A child’s education is composed of many short, singular experiences, which can sometimes be hard to connect. This is a reality with which educators (and learners) are very familiar — there’s so much to cover, and only so much student attention to go around.

A middle school science syllabus may call for only two to three lessons per week. Add in some homework assignments, and you’ve got just a few bursts of student focus over seven days. Here’s a visualization of a typical student’s work patterns — this student has science lessons on Tuesdays and Thursdays, and does a bit of homework each weekend:

And that fortnight’s activity pattern is typical throughout the semester:

Teachers know that a certain portion of each lesson must be focused on refreshing last week’s material, to bridge the gaps between work sessions.

In addition to bridging temporal gaps, educators also need to bridge conceptual gaps. For a few weeks, concepts may generally build on each other; in other lessons, brand new topics will be introduced; and in still other lessons, new ideas will draw on the material from two months ago. That’s a lot to keep track of.

Bridging all those gaps is what good teachers do, at the micro scale in the classroom in the moment, and at the macro scale through lesson plans and curriculum design.

Ideally, the tools students and teachers use would also help bridge those gaps. Educators face a dual challenge: they must 1) make the most of those brief spikes in student focus, and 2) help students make sense of all those disparate experiences.

Knewton has partnered with Muzzy Lane, a leading educational game producer, to explore how the combination of adaptivity and games can help address this challenge. By creating games that engage and inspire students, Muzzy Lane helps teachers make the most of those bursts of student focus. Meanwhile, Knewton provides tools that enable students and teachers to make sense of all those disparate learning experiences over time and across concepts. Knewton brings each student’s learning tendencies, as well as past and current performance, together to provide a unified picture of that student’s current mental state.

In the course of playing educational games, students reveal a lot of information about their learning tendencies and knowledge. But typically, this information is trapped within the game — instructors may have access to summaries of student performance, but that’s it. By combining rich game content with Knewton technology, Muzzy Lane and Knewton give instructors access to more detailed information about student performance. A digital product might include a few educational games, as well as “traditional” instructional and assessment material (think multiple-choice quizzes, essays graded by instructors, video lessons, text instructional content). Knewton can integrate information from a student’s experience in the game along with all the results from other course assessments and activities — meaning teachers get a much fuller and more accurate picture of student performance.

In Feed the Fox, a game prototype developed by Muzzy Lane and Knewton, students construct food webs in various biomes to learn about species classification, the environmental pressures different biomes present, and how organisms are related in an ecosystem. Each student action in the game gives Knewton a hint about one, or perhaps several, of her conceptual understandings (or misunderstandings).

Feed the Fox, and games like it, are intended to be part of a larger digital curriculum, also drawing on many different kinds of content, ranging from traditional instructional text and assessment questions to video and interactive activities. Knewton has the same access to the data generated inside the game as it does to students’ responses to assessment items and interactions with instructional content in other parts of the course.

As a student plays the game, her activity is communicated to Knewton. When she moves a species card onto the board, for example, she demonstrates her current understanding of species classification (consumer vs. producer) and the boreal forest biome (i.e., what types of species thrive in the conditions of this particular biome).

That’s really cool — with a single action, Knewton gets multiple pieces of assessment information.

And similarly, as the student draws connections between species, she demonstrates her understanding of the overall food web and the relationships between species:

Feed the Fox provides Knewton with dense, concentrated information about student proficiency. Students tend to be more motivated and focused in a game environment than within traditional assignments, and therefore maintain their focus for a longer period of time. The rich assessment information from the game provides direct evidence of the student’s proficiency on the relevant concepts in the game, as well as indirect evidence of her proficiency on related concepts not in the game.

Games also provide a unique opportunity to evaluate student proficiency unclouded by test anxiety or the incentive to cheat. Since traditional assessment can cause anxiety for many students, the “stealth assessment” in games may allow for a more comprehensive understanding of student ability. When games are commingled with traditional assessments, students produce more, and more varied, data related to their proficiency. For example, a student who excels in the game but misses the mark with other formats might not be struggling with the underlying concepts but instead might have issues with motivation, confidence, or boredom. Information about a student’s performance across modalities can position a teacher to more thoughtfully intervene while also facilitating richer recommendations by the Knewton engine.

Different types of content play different roles in a course. A whole year’s course will probably never be composed solely of games, nor should it be. But well designed games can provide uniquely rich, dense assessment material that serves to engage students more effectively than traditional content. Incorporating these games into an adaptive course can benefit instructors and students alike — helping teachers support students more effectively, providing students increased motivation to tackle course material, and improving both student and teacher insight into learning.
