In most classrooms, every student gets the same assignments, which a teacher has usually planned out months or weeks in advance. Students in classrooms that use Knewton-powered adaptive products have a totally different experience: we are able to figure out what each student in a class knows, and what she’s struggling with. Given this information, and the goals she’s working towards in a given class, Knewton recommends the best activities for that individual student to work on next, in real time. So in a class of 30 students, every student might be concentrating on different types of questions at various levels of difficulty while working toward the same goal.

It’s understandably difficult for most teachers, students, and parents to picture what that might look like in practice.

To help picture what Knewton is actually doing, let’s look at a few interactive animations that demonstrate the power of Knewton adaptive learning. These visualizations represent real students using a Knewton-powered course.

For the first example, let’s take a look at three real elementary-school students in the same class. Student privacy is important to us, so we don’t know these students’ names; however, to make it easier to talk about them, let’s call the student on the left Amy, the student in the middle Bill, and the student on the right, Chad.

All three students are working on the same goals: interpreting multiplication equations (green), multiplying by 1-digit numbers (yellow), multiplying by 2-digit numbers (blue), and solving multiplication word problems (pink).

In order to understand these four concepts, it’s also relevant to know some prerequisite topics (shown in grey).

For instance, students need to understand “multiplying by 1-digit numbers” in order to learn how to “multiply by 2-digit numbers.”
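As a toy illustration (not Knewton’s actual implementation), this kind of prerequisite relationship can be modeled as a small directed graph, where a concept becomes recommendable only once its prerequisites are mastered. The concept names and the all-prerequisites-mastered rule below are simplifying assumptions:

```python
# Toy prerequisite graph for the four goal concepts in this example.
# Names and the mastery rule are illustrative, not Knewton's model.
PREREQS = {
    "multiply_1_digit": [],
    "interpret_multiplication_equations": [],
    "multiply_2_digit": ["multiply_1_digit"],
    "multiplication_word_problems": ["multiply_1_digit", "multiply_2_digit"],
}

def recommendable(concept, mastered):
    """A concept can be worked on only if all its prerequisites are mastered."""
    return all(p in mastered for p in PREREQS[concept])

# A student who has mastered 1-digit multiplication may move on to
# 2-digit multiplication, but not yet to word problems.
mastered = {"multiply_1_digit"}
print(recommendable("multiply_2_digit", mastered))              # True
print(recommendable("multiplication_word_problems", mastered))  # False
```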

The video below shows how each student progresses through the course material.

When you press the play button, boxes will appear under each student’s graph. Each box represents a different question that the student answered. The color of the box tells you what concept that question was from. Green check marks represent questions that were answered correctly on the first try, while red x’s represent questions that were answered incorrectly on the first try. Use the back and forward buttons to skip through the students’ histories.

Notice how Knewton recommends the same first three questions to all three students? That’s because this is their first time using a Knewton-powered product and we’re trying to learn a little bit about their current level of knowledge.

Let’s dig a bit into why Knewton makes the recommendations it does for each individual student. On the third question, for example, we see that Bill is struggling with multiplying 2-digit numbers. While Amy and Chad move on to a new topic (interpreting multiplication equations), Bill keeps working on multiplying 2-digit numbers. That’s differentiation in action!

On the fourth question, Knewton notices that Chad seems to be struggling with interpreting multiplication equations, so he’ll continue to work on questions related to that concept, while Amy continues through the assignment.

Notice how Knewton guides Bill and Chad to keep working on the concept that they’re struggling with until they start to understand and get questions right, eventually mastering that concept.

Knewton’s differentiated instruction lets students focus on the concepts that they are struggling with, rather than bogging them down with busywork on topics on which they have already demonstrated understanding. In addition to supporting students who are falling behind with targeted remediation, Knewton also gives advanced students the opportunity to move forward at their own pace — Amy is a great example of this.

In the second example, we illustrate how Knewton takes a student’s past performance into account when providing recommendations. Again, for privacy reasons we don’t know these students’ names or where they’re from, but for this example, we’ll call the student on the left Mary and the student on the right Joe.

Like the students in the first example, Mary and Joe are also working to master multiplication word problems. However, whereas all three of the students from Example 1 were in the same class, Mary and Joe are in different classes. On a previous day, Mary was assigned lessons that covered the three prerequisite concepts. Her responses to those questions are shown as grey boxes. This is Joe’s first interaction with Knewton, so he has no prerequisite work.

When both students start working on this goal, Mary immediately begins with the goal concepts. Joe, meanwhile, starts with the prerequisites, because Knewton wants to verify that he is proficient in them before teaching him the goal concepts. Because of what Knewton already knows about Mary and Joe, it recommends prerequisites to Joe but not to Mary — the system knows that she has already mastered those earlier concepts!

Even though Mary misses more questions than Joe on this assignment, she is able to finish it faster because of her earlier prerequisite work. Joe gets extra help because he hasn’t learned the prerequisites yet.

In this way, Knewton again helps students who have demonstrated proficiency to finish their work quickly — so they have more time to learn new things. We also make sure that students who are new to the material feel comfortable with prerequisite concepts, ensuring that they are primed to excel at their goal.

These two examples are just a snapshot of the way Knewton continuously targets and differentiates instruction to meet every student’s needs. With the help of Knewton-powered products, teachers can ensure that every student works on material that supports in-class instruction and helps them move forward toward learning goals.

This summer, I traveled to Nairobi to represent Knewton at Education Innovation Africa, a gathering of educators, businesspeople, and government officials from Kenya and other African nations working toward the fourth of 17 Sustainable Development Goals: providing “inclusive and quality education for all” by 2030.

Knewton is proud to be a part of a growing wave of education innovation in Africa, supporting students and teachers while working toward our mission of personalizing education for everyone. Knewton entered the African market through a partnership with Top Dog Education, which has launched adaptive learning products for students in South Africa studying math and science in grades 4–12. Top Dog Math and Top Dog Science are both powered by Knewton. Top Dog is looking to bring these adaptive learning applications to students in other English-speaking African countries, including Nigeria, Zambia, and Kenya.

When it comes to technology, Kenya is a hub of innovation that understands both the needs and constraints of its society. M-PESA, a digital currency transmitted over mobile phones, has become a standard way of doing business.

This spirit of innovation extends to the field of education. The Kenyan government is rolling out a digital literacy program, bringing laptops and tablets to cities and the countryside, and it is embracing cloud computing to lower costs while expanding access.

Kenya is at one end of the spectrum of a large and diverse continent. As a whole, sub-Saharan Africa is struggling to provide children with even the most basic education. The region has half of the world’s 60 million out-of-school children of primary school age, according to the World Bank, and nine of the ten lowest national enrollment rates in the world. In many places, girls lack the same opportunities as boys.

Going to school doesn’t necessarily mean children are learning, as qualified teachers are hard to find. Teacher absenteeism is rampant: On any given day, 30 percent of teachers in Kenya do not show up for school. Only one in four sub-Saharan children attend secondary school, which limits the ability to train more teachers who can educate future generations.

The promise of digital technology to improve education was a recurring theme at the Education Innovation Africa conference. In Africa, as anywhere, students can benefit from adaptive course materials that can meet their individual needs at any given moment. With learning analytics, teachers can better support their students. Any classroom will have students with a range of knowledge and skills, but the need to differentiate instruction is greater when class sizes are larger and several grades share one teacher and one room. Mozambique, for example, has 55 students per teacher, down from 65 a decade ago.

Adaptive learning has even more to offer Africa given the incomplete educational infrastructure and long-term teacher shortages. Places that never had roads or telephone lines have seen widespread adoption of mobile phones, which will allow the delivery of adaptive learning to places where textbooks are scarce. There is nothing like studying one-on-one with a teacher who understands you. But for students without access to a teacher, self-paced adaptive learning is far better than nothing. It’s a promising option for national education systems in Africa working toward Sustainable Development Goals.

Technology was only part of the agenda at the innovation conference, and it will be only part of the solution. As Knewton’s Jose Ferreira has written, innovation that lowers barriers to education comes in many forms.

But as more classrooms and families in Africa get access to the internet and as the barriers to delivering quality course materials crumble, African students can access the educational resources they need. Publishers and content providers in the region will understand the needs of and constraints on their students, and Knewton stands ready to power the digital learning products that help every student in Africa achieve their full potential.

*Eva-Maria Olbers works at Knewton in business development, focusing on Europe, the Middle East, and Africa.*

Knewton is a pioneer in adaptive learning. Back in 2008, when Knewton started building an adaptive learning platform, hardly anybody had heard of adaptive learning beyond a handful of academic specialists and researchers.

Google “adaptive learning” today, and you’ll find 565,000 search results.

It’s gratifying to see more people talking about adaptive learning and more companies committing themselves to personalizing education.

At the same time, “adaptive” and “personalized” have become education buzzwords. These terms get used so often and for such a wide range of products and services that you could almost think that any learning tool with a digital component could be considered “adaptive.”

So what does adaptive learning mean?

At the 2016 ASU–GSV Summit, Knewton president and COO David Liu gave a great answer to that question. To hear it, start the embedded video at the 30-minute mark.

Or read the transcript below, which has been condensed and edited for clarity.

—

When we think of “adaptive,” it’s real science. It is a real practice. It takes real expertise and experience and large, large data sets.

I’m not going to get too technical, but let me just break it down in this way:

You have to understand and have real data on content. You really have to have a detailed understanding of how the content is working: Is the instructional content teaching what it was intended to teach? Is the assessment accurate in terms of what it’s supposed to assess? Can you calibrate that content at scale so you’re putting the right thing in front of a student, once you understand the state of that student?

If you don’t truly understand data at that level from that content, you’re making guesses. People will call that adaptive, because something is changing, and that’s completely irresponsible. Adaptive learning means understanding at a very granular level, if required, what each piece of content is supposed to be doing.

And doing it at scale: I’m talking about millions of pieces of content.

And doing it in real time.

On the other side of the equation, you really have to understand student proficiency. Again, not guessing because they got a question either right or wrong, which is adaptive testing — that’s been around for decades. It’s actually understanding and being able to predict how that student is going to perform, based upon what they’ve done and based upon that content that I talked about before. And if you understand how well the student is performing against that piece of content, then you can actually begin to understand what that student needs to be able to move forward.

And that’s all in the context of this teaching environment.…

It is very important for people to understand what “adaptive” really is. It is absolutely data-driven. It is absolutely data-driven at scale.

You have to understand content *and* proficiency of students. And if you don’t, you can build any kind of recommendation engine you want, but you’re literally spitting out randomized answers, and that’s completely irresponsible.

These days, when people talk about artificial intelligence, there’s a lot of excitement around deep learning. AlphaGo, the algorithmic player that defeated 9-dan Go master Lee Sedol, incorporates deep learning, which meant that its programmers didn’t need to hand-code winning strategies. They gave AlphaGo a large collection of Go matches to study, and it figured out how to play well on its own.

Deep learning has also shown impressive results in areas from computer vision to bioinformatics to linguistics. Deep learning helps Facebook understand the words people post there in more than 20 languages, and Amazon uses it to have conversations through Echo.

So deep learning is proving to be a popular way to understand how people write, speak, see, and play, but how good is it at modeling how people learn?

Last year, a team led by Chris Piech of Stanford University trained a recurrent neural network to do deep learning — or what they call Deep Knowledge Tracing. The idea is that, just as you don’t need to teach AlphaGo how to play the game on its own, Deep Knowledge Tracing can make sense of what’s being learned without human help. Using a public data set from ASSISTments, which guides students through math problem-solving, Deep Knowledge Tracing showed promising initial results.
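The core idea of a Deep Knowledge Tracing-style model can be sketched as a recurrent network that reads a student’s interaction history and outputs, per skill, the probability of answering the next item correctly. The sketch below is purely illustrative — the dimensions, encoding, and random weights are placeholder assumptions, not the trained model from the Stanford paper:

```python
import numpy as np

# Illustrative DKT-style forward pass (random, untrained weights).
rng = np.random.default_rng(0)
n_skills, hidden = 5, 16

def encode(skill, correct):
    """One-hot over (skill, correctness) pairs: first n_skills slots for
    incorrect answers, the next n_skills slots for correct ones."""
    x = np.zeros(2 * n_skills)
    x[skill + (n_skills if correct else 0)] = 1.0
    return x

W_xh = rng.normal(0, 0.1, (hidden, 2 * n_skills))
W_hh = rng.normal(0, 0.1, (hidden, hidden))
W_hy = rng.normal(0, 0.1, (n_skills, hidden))

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# Run the RNN over one student's history of (skill, was_correct) events;
# the final output is a per-skill probability of getting the next item right.
h = np.zeros(hidden)
history = [(0, True), (1, False), (1, True)]
for skill, correct in history:
    h = np.tanh(W_xh @ encode(skill, correct) + W_hh @ h)
preds = sigmoid(W_hy @ h)
print(preds.shape)  # (5,)
```

In the real model the weights are learned by gradient descent on large interaction logs; the point here is only the shape of the computation.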

There are other ways of modeling what students know. Item Response Theory, for example, has been around since the 1950s. It has been extended over the last decade to incorporate how people learn over time as well as expert human knowledge about the hierarchy of concepts being learned.

What’s the best way to predict what students know and don’t know, based on their previous answers and interactions?

Four Knewton data scientists — Kevin Wilson, Yan Karklin, Bojian Han, and Chaitanya Ekanadham — took a closer look at Deep Knowledge Tracing, comparing it with three models of how people learn built upon Item Response Theory. In addition to a classic Item Response Theory (IRT) model, the Knewton data science team used a temporal IRT model (called TIRT in the accompanying charts) and a hierarchical one (shown as HIRT).

The Knewton team used three collections of anonymous student interaction data, including ASSISTments, the Bridge to Algebra 2006–2007 data set from the KDD Cup, and millions of anonymized student interactions collected by Knewton.

With all three data sets, the Knewton team found that the Item Response Theory methods “consistently matched or outperformed” Deep Knowledge Tracing. Not only were the Item Response Theory approaches better at predicting what people know, they were easier to build and tune, “making them suitable candidates for real-world applications” such as adaptive learning platforms.

With Deep Knowledge Tracing, meanwhile, the Knewton team “found that computation time and memory load were prohibitive when training on tens of thousands of items. These issues could not be mitigated by reducing dimensionality without significantly impairing performance.”

In other words, deep learning still has a way to go to match established ways of modeling student learning.

For more details, read “Back to the Basics: Bayesian extensions of IRT outperform neural networks for proficiency estimation” or visit the International Educational Data Mining Society conference in Raleigh on July 1.

And if you want to reproduce our results, you can find code, links to the data sets, and instructions on GitHub.

For the first time in six years, the U.S. Department of Education issued a National Education Technology Plan. So much has changed in the six years since the last comprehensive overview of the state of educational technology. As the latest plan puts it, the conversation has shifted from *whether* technology should be used in learning to *how* it can improve learning to ensure that all students have access to high-quality educational experiences.

I’ve witnessed a similar progressive shift in my five and a half years at Knewton. In the early days, when Knewton was alone in the field, we had to explain what adaptive learning was and that it was actually possible to implement. Now most companies in education, and everyone from Barack Obama to Mark Zuckerberg, are talking about the value of personalized education, and terms like adaptive and personalized get used for a wide spectrum of approaches and tools.

Likewise, educators and publishers from around the world now understand the promise and value of adaptive learning, and they see that its moment has arrived. We’ve seen particular interest from the advanced education markets of northern Europe and Asia. In China, for example, where national exams can determine your future, 7% of disposable income is spent on education, as compared to 2% of disposable income in the U.S. Chinese students turn to supplemental programs, both in person and online.

Take 17zuoye, a digital learning platform for K–12 students that began as an extracurricular learning application. Now teachers recommend 17zuoye as a supplement to the work they do in class. With an audience of 14 million students and 700,000 teachers, 17zuoye has turned to Knewton to make its math and English language training programs adaptive.

Global interest in adaptive learning is also reflected in our most recent round of funding, which included investment from TAL Education Group, another Chinese K–12 education company, and EDBI, the corporate investment arm of the Singapore Economic Development Board. This infusion of capital is helping Knewton grow our teams to work closely with our global partners as they ready their products for launch.

Our partners serve everyone from Chinese children learning English to Spanish-speaking teenagers learning algebra or adults preparing for the Brazilian bar exam. These education companies are eager to release their adaptive learning products, to learn from the market and make the most of this moment of opportunity. To get their products to market more quickly, they are turning to companies like Knewton and also making connections with each other. For example, Gyldendal Denmark has partnered with Gyldendal Norway (a completely independent company) to bring the Norwegian publisher’s adaptive learning materials to students in Denmark. Our established Norwegian partner gains distribution, while Gyldendal Denmark can bring adaptive learning quicker to market and at lower cost.

The shift to digital educational materials from printed textbooks is a global phenomenon, and we’re seeing plenty of examples here in the United States, from Khan Academy to AltSchool. With Pearson starting to integrate Knewton into K–12 products, the big three American educational publishers have all acknowledged that adaptive learning is essential for their future. And our partner Waggle, which makes a smart, responsive practice application for grades 2–8, has shown truly impressive results, improving outcomes from Oklahoma City to West Palm Beach to a high-poverty urban school in Baltimore.

Educational institutions are also taking the initiative in bringing adaptive learning to their students. The Florida Virtual School, which is America’s first and largest online public school district, will use Knewton to power adaptive course materials for more than 200,000 students over the next few years.

We’re also hearing from every kind of institution of higher education, from public and private universities with four-year programs to community colleges to the for-profit sector. Administrators and faculty alike are hungry for learning products with a high-quality user interface, diverse and deep content, and data-driven adaptive learning. With more than 4,000 degree-granting institutions in the United States and international students flocking to study here, this country is well positioned to lead the way in adaptive learning at the post-secondary level and help more people fulfill their potential and find their way in the working world.

It has been fascinating to see the different ways adaptive learning has taken root in different places, and I look forward to seeing how it develops as its adoption spreads and accelerates and as examples from around the world inform and inspire each other. One thing is for certain: As the report from Washington, D.C., says, adaptive learning is no longer a matter of *whether*. It’s all about *how*.

Measuring how much time each student spends on each item allows us to help teachers better understand how their students are working and whether they are engaged by the material.

When students are disengaged, their interactions with Knewton can sometimes reflect that. For example, when students move through coursework much faster than they usually do but without better performance, they might not be paying much attention, and are just clicking to get to the next thing. We call such behavior “spamming.”

Everything is relative: If a student is generally a fast worker, a fast response doesn’t count as spamming. Similarly, we take into account how long we expect each item to keep a student’s attention. Spending 28 seconds watching a 30-second video is generally considered working, while 28 seconds on a 5-minute video might suggest something else.

Once Knewton’s algorithm establishes a baseline expectation for each student and each piece of content, we can look at how long each interaction took and, by way of illustration, assign it a working-or-spamming probability.
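A simplified sketch of this kind of timing heuristic is below. The cutoff fraction, the power-law form, and the specific numbers are illustrative assumptions for exposition, not Knewton’s production model:

```python
def spam_probability(response_secs, expected_secs, cutoff=0.25, steepness=4):
    """Illustrative heuristic (not Knewton's production model): responses
    far below a cutoff fraction of the expected time look spam-like;
    responses near or above it look like genuine work."""
    ratio = response_secs / (cutoff * expected_secs)
    return 1.0 / (1.0 + ratio ** steepness)

# 28 seconds on a 30-second video looks like working...
print(round(spam_probability(28, 30), 2))   # 0.01
# ...while 28 seconds on a 5-minute video looks like spamming.
print(round(spam_probability(28, 300), 2))  # 0.98
```

In practice the expected time would itself be estimated per student and per piece of content, as described above, rather than fixed by hand.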

Since Knewton is integrated into different learning applications aimed at different age groups, we wanted to see whether spamming rates vary by grade level. It turns out that users working on elementary school materials (most of them, presumably, elementary school students) and users of college coursework answered unexpectedly quickly at about the same rate. We also looked at whether students were more likely to spam on certain days of the week, but we didn’t see a Friday effect with spamming like we do with performance.

In addition, we examined whether the type of question affected how likely students were to spam it. The Knewton open platform, for example, has some questions that are multiple choice and others that require students to type in an answer. Working interactions were almost evenly split between free response and multiple choice, but spamming occurred disproportionately on free-response questions: students were roughly three times as likely to spam on them.

Understanding this kind of spamming behavior augments our sense of what keeps each individual student engaged and, more broadly, contributes to our understanding of how students interact with learning applications. Knowing whether students are working productively enables Knewton and its partners to make better applications that help students remain in a learning flow and help spamming students get back on track.

Further analysis shows that Knewton-powered adaptive assignments for struggling students narrow the gap between them and high-performing students on subsequent assignments. Closing this gap is one of the biggest challenges instructors can face in the classroom.

In “Reducing the Gap: How Adaptive Follow-Ups Help Struggling Students,” Hillary Green-Lerman and Kevin Wilson of Knewton looked at 48,202 students who used an online homework tool for college-level science textbooks in the spring of 2014. Beyond ordinary homework assignments, students who didn’t show mastery of the concepts they were learning received adaptive follow-up assignments powered by Knewton. These adaptive assignments present a personalized sequence of questions designed to address each student’s individual strengths and weaknesses.

Our research team found that students who were assigned an adaptive follow-up after struggling on a first assignment showed improvements of between four and 12 percentage points on subsequent assignments relative to their classmates who did not have adaptive follow-up assignments.

Students with a lower score will have more room to improve than high-performing ones. So Green-Lerman and Wilson corrected for initial differences in grade distributions between the higher- and lower-performing students. When taking this correction into account, they still see an average improvement of three points, and as much as eight points:

The online homework tool discussed in this study makes use of only a small portion of what Knewton can do to improve learning outcomes. Knewton’s research team continues to validate the efficacy of adaptive learning, and plans to continue to share its findings.

To read the full study, sign up below to download “Reducing the Gap.”

Santillana has named the product A2O, a play on the concept of *aprendizaje líquido*, or liquid learning: the idea that students can flow through their lessons in their own way. A2O adapts in real-time to address each student’s needs and helps teachers see exactly where students need support. Analytics give teachers unprecedented visibility into the learning process.

Teachers participating in the pilot over the next six months will be incorporating A2O into their algebra curriculum. Students will use A2O in the classroom and at home. The pilot covers four to six weeks of secondary-level algebra, and Santillana hopes to expand to other topics in math and other subjects.

Throughout the product development process, the Santillana team has received invaluable insights and feedback from teachers in A2O’s target market. We’re incredibly proud of the work Santillana has done and eager to see the product in the hands of as many students and teachers as possible. Interested teachers can still sign up for the pilot.

With over 28 million students in 22 countries, Santillana has an unparalleled understanding of Spanish and Latin American education. Still, teacher and student feedback will be invaluable when Santillana expands its adaptive learning offering.

In October 2014, Knewton announced a partnership with Santillana, the leading educational publisher in Spain and Latin America, to develop adaptive learning products for the Spanish-speaking world. Santillana is now launching a pilot of a secondary-level algebra product aimed at 12- and 13-year-old students and their teachers.


The Knewton data science team went back to look at those nearly 13,000 elementary school students to examine how the runup to a holiday affects student performance.

Because of the Friday effect, the best comparison for Thanksgiving week scores is to how the same students did on all other Mondays, Tuesdays, and Wednesdays that year. For non-holiday weeks, their average score during the beginning of the week was 74 percent. Before Thanksgiving, it was 67 percent.

That’s an even larger drop in performance than the Friday effect.

Now let’s see if there’s a similarly large drop on the Fridays before a long weekend. Might students show a similar slump before a shorter holiday?

The chart below shows average student scores on Fridays before a two-day weekend as well as on the Fridays before Martin Luther King Jr. Day and Memorial Day.

There’s almost no difference between a normal Friday and the one before Martin Luther King, Jr., Day. However, there is a significant drop in average score before Memorial Day.

The final hypothesis we tested was whether students’ scores changed before Winter Break. For each weekday, the average score for the rest of the school year is in blue and the average score for the week before Winter Break is in green.

The fall-off gets worse each day. By the Friday before Winter Break, students are scoring more than 15 percentage points lower than they would on an ordinary Friday — or about 20 points lower than on the average weekday.

Having observed billions of interactions involving millions of students, Knewton can test how timing affects student performance. In order to design a reliable study, the data science team focused on a single product to guarantee a uniform population: similar ages, coursework, and context. So we chose a supplemental online tool for learning elementary school math in the United States. In the 2014–15 school year, 12,695 students used this tool for learning math.

When you look at how many math problems they did each day, you can see the rhythms of the week and the school year:

We can also see the rhythms of the school day. The next chart shows, hour by hour, how many questions students answered on an average weekday. The dark line represents the mean number of questions answered, and the lighter areas represent the standard error of the mean — i.e., how much that number tends to vary.

Most of the extra math work happened during the school day, between 8 a.m. and 3 p.m., with a substantial tail of activity until 9 p.m.

That rhythm isn’t the same each day: On Fridays, for instance, almost no one used the math program after 3 p.m.:

Now that you know a little bit about our dataset, let’s come back to our original question. When students did these online math problems, how often were their answers correct, and did their percentage of correct answers vary from day to day? The chart below shows a simple average of students’ scores on each weekday.

These students scored an average of about five percentage points lower on Fridays. (Notice that the y-axis begins at 64 percentage points rather than at zero.)
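A per-weekday average like this one is just the fraction of first-try answers that were correct, grouped by day. Here is a minimal sketch with hypothetical answer records (the real dataset is far larger):

```python
from collections import defaultdict

# Hypothetical answer log: (weekday, correct_on_first_try) rows.
answers = [
    ("Mon", True), ("Mon", True), ("Mon", False),
    ("Fri", True), ("Fri", False), ("Fri", False),
]

# weekday -> [number correct, number attempted]
totals = defaultdict(lambda: [0, 0])
for day, correct in answers:
    totals[day][0] += int(correct)
    totals[day][1] += 1

# Average score per weekday, as a percentage.
pct_correct = {day: 100 * c / n for day, (c, n) in totals.items()}
```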

This chart, however, doesn’t tell us whether the *same* students scored better on Mondays and worse on Fridays, or whether one set of students works on Mondays and scores well while a different set works on Fridays and scores poorly. In the following animation, each colored line represents the usage patterns of a randomly selected student:

Our initial assumption was that students would use computers at the same time each week. In fact the data show that students tended to go online once a week, but on a different day each week. Their varied schedules offer a convenient way to explore whether the observed “Friday effect” can be explained by differences in the students, or by the day of the week.

So we looked at the students who, over the course of the year, did these math problems at least once on each weekday:

The results of this analysis are similar to what we saw already: Students didn’t perform as well on Fridays.
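The filtering step described above can be sketched simply: keep only students who were active on every weekday at some point in the year, so any remaining weekday effect can’t be explained by a different mix of students on each day. The data below is hypothetical.

```python
# Hypothetical activity log: student_id -> set of weekdays on which
# that student answered at least one question during the year.
activity = {
    "s1": {"Mon", "Tue", "Wed", "Thu", "Fri"},
    "s2": {"Mon", "Wed"},
    "s3": {"Mon", "Tue", "Wed", "Thu", "Fri"},
}

WEEKDAYS = {"Mon", "Tue", "Wed", "Thu", "Fri"}

# Keep only students whose activity covers all five weekdays.
all_weekday_students = {
    student for student, days in activity.items() if days >= WEEKDAYS
}
```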

One final way to verify the measurement is to compute the average score for each student who had activity on all five weekdays and compare that to her average score for a particular weekday. For instance, a student might get an overall average score of 70 percent, but on Wednesdays, her average score might be 72 percent. That means that, on Wednesdays, she scores 2 percentage points higher than her average across all weekdays. Let’s call it her *normalized score difference*.

Here is the average of normalized score differences for all the students:

Something is happening on Fridays. Students scored somewhere between 4 and 5 percentage points lower than on other weekdays. We don’t know exactly why, however.
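The *normalized score difference* defined above can be computed per student and then averaged across students. Here is a minimal sketch for a single hypothetical student whose weekday averages are invented for illustration:

```python
from statistics import mean

# Hypothetical per-weekday average scores (percent) for one student
# who was active on all five weekdays.
scores = {"Mon": 72, "Tue": 71, "Wed": 72, "Thu": 70, "Fri": 65}

# The student's overall average across all five weekdays.
overall = mean(scores.values())

# Normalized score difference: weekday average minus overall average.
# A value of +2 on Wednesday means the student scores 2 percentage
# points above her own weekly baseline on Wednesdays.
normalized = {day: s - overall for day, s in scores.items()}
```

The chart then averages these per-student differences over all students with activity on every weekday.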

We were startled by the strength of the Friday effect and the clarity of the data. While these results hold true for these particular students, preliminary analysis of several other groups shows that the Friday effect may be more widespread. So doing work on a Thursday instead of a Friday could be the difference between an A- and a B+.

It’s never easy to show the efficacy of a given set of instructional materials, since many other factors contribute to how students learn, from how the teacher runs the classroom to what they had for breakfast. Beyond that, the impact of Knewton can be difficult to isolate, since Knewton adaptive learning technology is only one component of any of the digital learning products we power.

As Knewton powers more products and serves more personalized recommendations, however, our data scientists can identify more examples that show how Knewton improves student outcomes.

In “The Improvement Index: Evaluating Academic Gains in College Students Using Adaptive Lessons,” Illya Bomash and Chris Kish of Knewton look at interactions involving approximately 288,000 students. They found that students performed better on average in college-level science courses with Knewton-powered adaptive assignments than in those without.

During the summer and fall semesters of 2013, these students studied college-level physics, biology, chemistry, and anatomy and physiology. Their textbooks came with a Knewton-powered online homework tool.

Knewton didn’t power the entire tool. Instructors assigned homework as usual for the students to complete online. On top of that, however, instructors had the option of enabling Knewton-powered adaptive follow-up assignments for students whose homework didn’t show mastery of the scientific concepts.

During these semesters, about 6,400 courses used the online homework tool. About a quarter of those courses offered Knewton-powered adaptive assignments. The rest didn’t.

Bomash and Kish compared these two groups. Their analysis found that students performed better in courses with Knewton-powered adaptive assignments:

The improvement increased with greater use of adaptive assignments, peaking at an average score difference of four percentage points.

They explored several other possible explanations for the difference — perhaps the better-performing group was better prepared, or more motivated because its members didn’t want extra homework — and ruled those out in their analysis. What’s more, results from courses in 2014 show consistent improvement in scores:

In courses that relied most on Knewton-powered assignments, students spent less than 25 minutes per week on average doing this additional coursework.

To read the full study, sign up below to download “The Improvement Index.”