One of the core components of Knewton adaptivity is the *knowledge graph*. In general, a graph is composed of nodes and edges. In our case, the nodes represent independent *concepts*, and the edges represent *prerequisite* relationships between concepts. An edge between concepts A and B (A → B) can be read as *Concept A is prerequisite to concept B*. This means that the student generally must know concept A before being able to understand concept B. Consider the example portion of a knowledge graph below:

In math-speak this is a directed acyclic graph (DAG). We already covered what the “graph” part means. The “directed” part just means that the edges are directed, so that “A prerequisite to B” does **not** mean “B prerequisite to A” (we instead say “B *postrequisite* to A”). This is in contrast to undirected edges in social networks where, for example, “A is friends with B” *does* imply “B is friends with A”. The “acyclic” part of DAG means there are no cycles. A simple cycle would involve A → B → C → A. This would imply that you need to know A to know B, B to know C, and then C to know A! This is a horrible catch-22. You can never break the cycle and learn these concepts! Disallowing cycles in the graph allows us to represent a course, without contradictions, as starting with more basic concepts, and leading to more advanced concepts as the student progresses (this progression is top-to-bottom in the graph above).
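To make the DAG idea concrete, here is a minimal sketch using Python's standard-library `graphlib`. The concept names are illustrative, borrowed from the examples later in this post; `static_order()` lists concepts so that every prerequisite comes before its postrequisites, and it refuses (raises `CycleError`) if the graph contains a cycle:

```python
from graphlib import TopologicalSorter

# A tiny knowledge graph: each concept maps to its list of prerequisites.
# Concept names mirror the examples later in this post.
prereqs = {
    "Understand whole numbers": [],
    "Add whole numbers": ["Understand whole numbers"],
    "Multiply whole numbers": ["Add whole numbers"],
}

# static_order() yields concepts with prerequisites first; it raises
# CycleError if the graph is not acyclic.
order = list(TopologicalSorter(prereqs).static_order())
print(order)  # basic concepts come before advanced ones
```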

Another crucial aspect of the knowledge graph is the content: i.e. the assessing questions and the instructional material. Each concept has a number of such content pieces attached, though we don’t show them in the picture above. You can think of them as living inside the node.

Of course, we can never know exactly what you know; that’d be creepy! Instead we *estimate* the student knowledge state using a mathematical model called the **Proficiency Model**. This takes, as inputs, the observed history of a student’s interactions, the graph structure, and properties of the content (question difficulty, etc.) and outputs the student’s proficiency in all the concepts in the graph at a given point in time. This is summarized below:

Abstractly, *proficiency* on a concept refers to the ability for a student to perform tasks (such as answer questions correctly) related to that concept. Thus, we can use the estimated values of the proficiencies to *predict* whether the student answers future questions correctly or not. Comparing our predictions to reality provides valuable feedback that allows us to constantly update and improve our model and assumptions.
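One standard way to score such probabilistic predictions against what actually happened is log loss (mean negative log-likelihood). This is a generic metric, not Knewton-specific; the sketch below just illustrates the "comparing our predictions to reality" feedback loop:

```python
import math

def log_loss(y_true, p_pred):
    """Mean negative log-likelihood of observed right/wrong answers
    (1 = correct, 0 = incorrect) under predicted probabilities.
    Lower is better; a generic metric, not Knewton's actual one."""
    eps = 1e-12  # clamp to avoid log(0)
    return -sum(
        y * math.log(max(p, eps)) + (1 - y) * math.log(max(1 - p, eps))
        for y, p in zip(y_true, p_pred)
    ) / len(y_true)

# Confident, well-calibrated predictions score lower (better)
# than an uninformative 50/50 guess.
print(log_loss([1, 0, 1], [0.9, 0.2, 0.8]))
print(log_loss([1, 0, 1], [0.5, 0.5, 0.5]))
```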

The foundation for our Proficiency Model is a well-tested educational testing theory known as Item Response Theory (IRT). One important aspect of IRT is that it accounts for *network effects*— we learn more about the content and the students as more people use the system, leading to better and better student outcomes. IRT also serves as a foundation for our Proficiency Model on which we can build additional features.
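Basic IRT can be sketched in a few lines. This is the textbook two-parameter logistic (2PL) form, not necessarily the exact variant Knewton uses:

```python
import math

def p_correct(theta, difficulty, discrimination=1.0):
    """2PL IRT: probability that a student with proficiency `theta`
    answers an item of the given difficulty correctly. Standard
    textbook form, shown for illustration only."""
    return 1.0 / (1.0 + math.exp(-discrimination * (theta - difficulty)))

# A student whose proficiency exactly matches the item difficulty
# has a 50% chance of answering correctly.
print(p_correct(theta=0.0, difficulty=0.0))  # 0.5
```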

One thing that basic IRT does not include is any notion of temporality. Thus older responses count the same as newer responses. This is fine in a testing environment, where “older” responses mean “generated 20 minutes ago”, but isn’t great in a learning environment. In a learning environment, we (obviously) expect that students will be learning, so we don’t want to overly penalize them for older work when in fact they may have had an “Aha!” moment. To remedy this, we’ve built temporal models into IRT that make more recent responses count more towards your proficiency estimate than older responses on a concept*.
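As a toy illustration of temporality, here is a hypothetical exponential-decay weighting. The half-life value is made up for the example; the post does not specify Knewton's actual temporal model:

```python
def response_weight(age_days, half_life_days=7.0):
    """Hypothetical weighting: a response's influence on the
    proficiency estimate halves every `half_life_days`, so recent
    responses count more than older ones."""
    return 0.5 ** (age_days / half_life_days)

print(response_weight(0))   # 1.0  (today's response counts fully)
print(response_weight(7))   # 0.5
print(response_weight(14))  # 0.25
```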

Another thing that basic IRT does not account for is instructional effects. Consider the following example. Alice got 2 questions wrong, watched an informative video on the subject, and then got one question right. Under basic IRT we’d infer that her proficiency was the same as Bob who got the same 2 questions wrong, did **not** watch the video, and then got one question correct. This doesn’t seem accurate. We should take Alice’s instructional interaction into account when inferring her knowledge state and deciding what’s best for her to work on next. We have extended IRT to take into account instructional effects.

Finally, basic IRT does not account for multiple concepts, nor their interrelationships in the knowledge graph. This will be the main focus of the rest of this post.

The titular question of this post: “What does knowing something tell us about knowing a related concept?” is answered through *Proficiency Propagation*. This refers to how proficiency flows (propagates) to different concepts in the knowledge graph.

To motivate why proficiency propagation is important, let’s consider two different scenarios.

First, consider the example shown below, where the only activity we’ve observed from Alice is that she performed well (a ✔ indicates a correct response) on several more advanced concepts.

We can’t know everything Alice has ever done in this course; she may have done a lot of work offline and answered tons of “*Add whole numbers*” questions correctly. Since we don’t have access to this information, we have to make our best inference. Note that all three concepts Alice excelled at are reliant upon “*Add whole numbers*” as a prerequisite. Let’s revisit the definition of the prerequisite relationship. We say “A is prerequisite to B” (A → B) if A must be mastered in order to understand B. In other words:

Concept B is mastered ⇒ Concept A is mastered

In our case, there are three different “concept B’s” that Alice has clearly mastered. Thus, by definition of the prerequisite relationship Alice almost certainly has mastered “*Add whole numbers*” (it’s the concept A). So let’s paint that green, indicating likely mastery.

By similar reasoning, if Alice has mastered “*Add whole numbers*”, then she has likely mastered its prerequisite “*Understand the definition of whole numbers and their ordering*”. However, we might be slightly less certain about this inference, since it is more indirect and relies on a chain of reasoning. So let’s paint that slightly less bright green:

What about the remaining two concepts? First consider “Multiply whole numbers”. Alice has mastered its prerequisite, which is promising. But she may have never received any instruction on multiplication, and may have never even heard of such a thing! On the other hand, she may be a prolific multiplier, having done lots of work on it in an offline setting. In this case, we don’t have the definition of “prerequisite” working in our favor giving us a clean inference. But certainly if we had to guess we’d say Alice is more likely to have mastered “Multiply whole numbers” than someone else who we have no info on. Thus, we give Alice a small benefit of the doubt proficiency increase from the baseline. Similar considerations apply to the last, most advanced concept:

Let’s summarize the lessons we’ve learned:

- Mastery (i.e. correct responses) propagates strongly ‘backwards’ to prerequisites.
- As we get further from direct evidence in the prerequisite chain, there is more uncertainty. Thus we infer slightly less mastery.
- Mastery propagates weakly ‘forwards’ to postrequisites.

Now let’s consider Bob, who has struggled on “Add whole numbers”, getting 3 incorrect:

Recall our deconstruction of the prerequisite relationship A → B:

Concept B is mastered ⇒ Concept A is mastered

Unfortunately, this doesn’t directly help us here, because Bob hasn’t mastered any concepts as far as we know. However, the contrapositive is exactly what we need:

Concept A is **not** mastered ⇒ Concept B is **not** mastered

Let’s take “struggling on” to be equivalent to “not mastered” for our purposes to get:

Struggling on Concept A ⇒ Struggling on Concept B

Thus, we now know that struggling-ness propagates strongly down to the postrequisites of “Add whole numbers”!

What about “*Understand the definition of whole numbers and their ordering*”? Similarly to the flipped situation of propagating mastery to postrequisites, we cannot make any strong pedagogical inferences just from the prerequisite relationship. However, we can still assert that it is more likely that Bob is struggling on it given we’ve seen him struggle on “Add whole numbers” than if we hadn’t seen him struggle on that concept:

Let’s summarize what we’ve learned about propagation of struggling-ness:

- Struggling (i.e. incorrect responses) propagates strongly forwards to postrequisites.
- As we get further from direct evidence in the postrequisite chain, there is more uncertainty. Thus we infer slightly less struggling.**
- Struggling propagates weakly backwards to prerequisites.

Notice these rules are just the mirror-opposites of the ones for propagating mastery! And all of this comes simply from the definition of “prerequisite-ness”, and some pedagogical reasoning.
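The mirror-image rules can be sketched as a toy propagation routine along a single prerequisite chain. The decay constants here are invented purely for illustration; the real model is considerably more involved:

```python
def propagate_mastery(chain, evidence_idx, signal=1.0,
                      back_decay=0.9, fwd_decay=0.3):
    """Toy asymmetric propagation along a prerequisite chain
    (index 0 = most basic concept). Mastery evidence at
    `evidence_idx` propagates strongly backwards to prerequisites
    and weakly forwards to postrequisites, decaying at each hop.
    The decay constants are made-up illustrative values."""
    scores = [0.0] * len(chain)
    scores[evidence_idx] = signal
    for i in range(evidence_idx - 1, -1, -1):      # backwards: strong
        scores[i] = scores[i + 1] * back_decay
    for i in range(evidence_idx + 1, len(chain)):  # forwards: weak
        scores[i] = scores[i - 1] * fwd_decay
    return dict(zip(chain, scores))

chain = ["Understand whole numbers", "Add whole numbers",
         "Multiply whole numbers"]
print(propagate_mastery(chain, evidence_idx=1))
# prerequisites get most of the credit; postrequisites a small boost
```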

While we now have a nice picture of how we want proficiency propagation to behave, that doesn’t count much unless we can rigorously define a mathematical model capturing this behavior, and code up an algorithm to efficiently compute proficiencies in real time for all possible cases. As they say, the devil is in the details. To give a flavor of what’s involved, here are some of the technical details our mathematical model and algorithm must obey:

- Convexity: This essentially means that the proficiencies are efficiently and reliably computable.
- Strong propagation of mastery up to prerequisites, and of struggling-ness down to postrequisites, with a slight decay in propagation strength at each ‘hop’ in the graph.
- Weak propagation of mastery down to postrequisites, and of struggling-ness up to prerequisites, with a large decay in propagation strength at each ‘hop’ in the graph.
- The above two points imply *asymmetric propagation*: the impact of a response on neighboring proficiencies is asymmetric, always being stronger in one direction in the graph than the other.
- All of this proficiency propagation must also play nicely with the aforementioned IRT model and the extensions to include temporality and instructional effects.

Coming up with a well-defined mathematical model encoding asymmetric strong propagation is a challenging and fun problem. Come work at Knewton if you want to learn more details!

So what good exactly does having this fancy proficiency model do us? At the end of the day, students care about being served a good educational experience (and ultimately, progressing forward through their schooling), and in Knewton-land that inevitably means getting served good recommendations. Certainly, having a pedagogically-sound and accurate proficiency model does not automatically lead to good recommendations. But having a bad proficiency model almost certainly will lead to bad recommendations. A good proficiency model is necessary, but not sufficient for good recommendations.

Our recommendations rely on models built “on-top” of the Proficiency Model, and answer questions such as:

- What are useful concepts to work on next?
- Has the student mastered the goal material?
- How much instructional gain will this material yield for the student?
- How much will this piece of material improve our understanding of the student’s knowledge state and therefore what she should focus on next?

All of these questions can only be answered when equipped with an accurate understanding of the student’s knowledge state. As an example, consider Alice again. If we had a bare-bones proficiency model that did not propagate her mastery to “Add whole numbers”, we might consider this a valid concept to recommend material from. This could lead to a frustrating experience, and the feeling that Knewton was broken: “Why am I being recommended this basic stuff that I clearly already know?!”

At the end of the day, it’s user experience stories like this that motivate much of the complex data analysis and mathematical modeling we do at Knewton. And it’s what motivates us to keep pushing the limit on how we can best improve student learning outcomes.

\* There are other temporal effects that kick in if you’ve seen the same question more than once recently.

\*\* There is a whole other layer of complexity in our Proficiency Model that we’ve glossed over. We actually estimate both a student’s proficiency and a measure of our confidence in that estimate. These are the proficiency mean and variance, which can be combined to obtain confidence intervals, for example. For the purposes of this blog post, we are only considering the propagation of proficiency means.

*This post was written by Michael Binger, a data scientist at Knewton.*

Teachers, parents, and students have strong and sometimes contradictory opinions about the value of homework, but answers to these and other questions are not easy to come by. A lot depends on contextual information about individual students, their teachers, and the subject matter, and this information is difficult to gather and analyze.

We do know, however, that some students are using Knewton-powered learning applications to do homework, and the anonymized data that Knewton collects can shed light on its impact on student performance.

More than 100,000 elementary school students using one of our partner applications gave about 18.5 million answers to math questions over the course of the 2015-16 school year. What does this wealth of data tell us about doing homework?

It turns out that good students do, in fact, do their homework.

Math problems answered during the school day — between 8 a.m. and 3 p.m. — are presumably done in class, while answers submitted between 3 p.m. and 9 p.m. are most likely homework.
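That cutoff rule is easy to state in code. A small sketch, using the 8 a.m. / 3 p.m. / 9 p.m. boundaries from the paragraph above:

```python
from datetime import datetime

def work_type(ts):
    """Classify a response timestamp using the post's cutoffs:
    8 a.m.-3 p.m. is presumed classwork, 3 p.m.-9 p.m. homework."""
    if 8 <= ts.hour < 15:
        return "class"
    if 15 <= ts.hour < 21:
        return "homework"
    return "other"

print(work_type(datetime(2016, 3, 1, 10, 30)))  # class
print(work_type(datetime(2016, 3, 1, 17, 0)))   # homework
```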

The graph below shows when during the day students did their math problems over the course of the 2015-2016 school year:

Most of the work in this Knewton-powered partner application happened in school. About one-sixth of the work, however — about 3 million answers to math problems — got done as homework.

When these students did homework, they answered math problems correctly more often than they did at school. Before 3 p.m., about 65% of answers were correct. Between 3 p.m. and 9 p.m., more than 80% of answers were correct. You can see how their performance improves after school:

(The y-axis begins at 50, not zero.)

What’s going on here? There are a couple of possible explanations:

- Doing work outside of school helped the students *get more questions right* than they did at school. Maybe they had more time to think, or felt less pressure, or had to contend with fewer distractions, or got help from parents, older siblings, or the Internet.
- The other possibility is that stronger students *did more homework* than lower-performing students did. Since the percentage of correct responses is an average of the efforts of 100,000 students, the diligence of the strongest students could lift the entire group’s performance above 80%.

If this second possibility is true, a seemingly impressive statistical gain masks the fact that, for many of these students, homework isn’t making a difference. It’s a lot like when you have a group of five people and one of them gets $100: the group’s average wealth goes up $20, but four of them don’t see any benefit.

So which is it: that students do better when they do homework, or that hard-working whiz kids are making everyone look better?

In an attempt to find out which possibility is more likely, we sorted the 100,000 students into five groups of equal size, based on how often they answered correctly — kind of like a teacher grading on a curve.
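The grouping step can be sketched as follows, with synthetic random scores standing in for the real (anonymized) data:

```python
import random

# Synthetic stand-in data: 100 students with a percent-correct score each.
random.seed(0)
students = [{"id": i, "pct_correct": random.random()} for i in range(100)]

# Sort descending by score and slice into five equal-size groups,
# like grading on a curve: Group A = top fifth, Group E = bottom fifth.
ranked = sorted(students, key=lambda s: s["pct_correct"], reverse=True)
group_size = len(ranked) // 5
groups = {label: ranked[i * group_size:(i + 1) * group_size]
          for i, label in enumerate("ABCDE")}

print(len(groups["A"]), len(groups["E"]))  # 20 20
```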

The average student in the highest performing group (in orange, let’s call it Group A) answered 94% of math questions correctly. The students in the lowest performing group (light blue, Group E) gave right answers only 24% of the time.

How many math problems were each of these five groups doing, and was it school work or homework?

At school, there is a peculiar relationship between practice and proficiency. The lowest-scoring group of students, Group E, does the least amount of work. Group D, which performs better, does more work than Group E. Groups B and C, which perform better still, do the most work at school.

However, the strongest students, Group A, are barely working in school more than Group E, the lowest-scoring group.

The data can’t explain *why* this relationship exists, but it’s easy to imagine the classrooms they describe. Math teachers know that different concepts will come naturally to some children, while for other students greater effort is required. Some students succeed without trying very hard, others slack off, and still others struggle despite diligent practice.

When it comes to homework, however, we clearly see that the higher a group’s score was, the more homework they did. The result is striking: The five groups are in alphabetical order with Group A, the top performers, doing the most homework, and Group E doing the least. This suggests that stronger students simply do more homework than their peers.

How does each group perform over the course of the day?

The performance of each group improves during homework time, but these increases are too slight to suggest that homework makes students perform better. And the differences between groups are bigger than differences within each group. Scores for Group E fell off around 7 p.m.: Maybe they were in an afterschool program until their parents picked them up after work? This drop is statistically significant, but you can see that the shaded areas, showing the margin of error, get bigger as fewer students work on math.

We still don’t know whether the stronger students are performing better *because* they did their homework, or that they did their homework because they’re stronger students. So this finding will not resolve the debates over which approaches to homework are productive and beneficial, or whether it all amounts to busywork.

Still, aggregated student data does contain valuable lessons about how students learn. We’ve seen that student performance after school is much higher because well-performing students tend to do their homework, not because students tend to perform better after school hours.

So your teachers are correct when they say that good students do their homework.

*Ruben Naeff is a data scientist at Knewton*.

This summer, I traveled to Nairobi to represent Knewton at Education Innovation Africa, a gathering of educators, businesspeople, and government officials from Kenya and other African nations working toward the fourth of 17 Sustainable Development Goals: providing “inclusive and quality education for all” by 2030.

Knewton is proud to be a part of a growing wave of education innovation in Africa, supporting students and teachers while working toward our mission of personalizing education for everyone. Knewton entered the African market through a partnership with Top Dog Education, which has launched adaptive learning products for students in South Africa studying math and science in grades 4–12. Top Dog Math and Top Dog Science are both powered by Knewton. Top Dog is looking to bring these adaptive learning applications to students in other English-speaking African countries, including Nigeria, Zambia, and Kenya.

When it comes to technology, Kenya is a hub of innovation that understands both the needs and constraints of its society. M-PESA, a digital currency transmitted over mobile phones, has become a standard way of doing business.

This spirit of innovation extends to the field of education. The Kenyan government is rolling out a digital literacy program, bringing laptops and tablets to cities and the countryside, and it is embracing cloud computing to lower costs while expanding access.

Kenya is at one end of the spectrum of a large and diverse continent. As a whole, sub-Saharan Africa is struggling to provide children with even the most basic education. The region has half of the world’s 60 million out-of-school children of primary school age, according to the World Bank, and nine of the ten lowest national enrollment rates in the world. In many places girls often lack the same opportunities as boys.

Going to school doesn’t necessarily mean children are learning, as qualified teachers are hard to find. Teacher absenteeism is rampant: On any given day, 30 percent of teachers in Kenya do not show up for school. Only one in four sub-Saharan children attend secondary school, which limits the ability to train more teachers who can educate future generations.

The promise of digital technology to improve education was a recurring theme at the Education Innovation Africa conference. In Africa, as anywhere, students can benefit from adaptive course materials that can meet their individual needs at any given moment. With learning analytics, teachers can better support their students. Any classroom will have students with a range of knowledge and skills, but the need to differentiate instruction is greater when class sizes are larger and several grades share one teacher and one room. Mozambique, for example, has 55 students per teacher, down from 65 a decade ago.

Adaptive learning has even more to offer Africa given the incomplete educational infrastructure and long-term teacher shortages. Places that never had roads or telephone lines have seen widespread adoption of mobile phones, which will allow the delivery of adaptive learning to places where textbooks are scarce. There is nothing like studying one-on-one with a teacher who understands you. But for students without access to a teacher, self-paced adaptive learning is far better than nothing. It’s a promising option for national education systems in Africa working toward Sustainable Development Goals.

Technology was only part of the agenda at the innovation conference, and it will be only part of the solution. As Knewton’s Jose Ferreira has written, innovation that lowers barriers to education comes in many forms.

But as more classrooms and families in Africa get access to the internet and as the barriers to delivering quality course materials crumble, African students can access the educational resources they need. Publishers and content providers in the region will understand the needs of and constraints on their students, and Knewton stands ready to power the digital learning products that help every student in Africa achieve their full potential.

*Eva-Maria Olbers works at Knewton in business development, focusing on Europe, the Middle East, and Africa.*

Knewton is a pioneer in adaptive learning. Back in 2008, when Knewton started building an adaptive learning platform, hardly anybody had heard of adaptive learning beyond a handful of academic specialists and researchers.

Google “adaptive learning” today, and you’ll find 565,000 search results.

It’s gratifying to see more people talking about adaptive learning and more companies committing themselves to personalizing education.

At the same time, “adaptive” and “personalized” have become education buzzwords. These terms get used so often and for such a wide range of products and services that you could almost think that any learning tool with a digital component could be considered “adaptive.”

So what does adaptive learning mean?

At the 2016 ASU–GSV Summit, Knewton president and COO David Liu gave a great answer to that question. To hear it, start the embedded video at the 30-minute mark.

Or read the transcript below, which has been condensed and edited for clarity.

—

When we think of “adaptive,” it’s real science. It is a real practice. It takes real expertise and experience and large, large data sets.

I’m not going to get too technical, but let me just break it down in this way:

You have to understand and have real data on content. You really have to have a detailed understanding of how the content is working: Is the instructional content teaching what it was intended to teach? Is the assessment accurate in terms of what it’s supposed to assess? Can you calibrate that content at scale so you’re putting the right thing in front of a student, once you understand the state of that student?

If you don’t truly understand data at that level from that content, you’re making guesses. People will call that adaptive, because something is changing, and that’s completely irresponsible.

Adaptive learning means understanding, at a very granular level if required, what each piece of content is supposed to be doing.

And doing it at scale: I’m talking about millions of pieces of content.

And doing it in real time.

On the other side of the equation, you really have to understand student proficiency. Again, not guessing because they got a question either right or wrong, which is adaptive testing — that’s been around for decades. It’s actually understanding and being able to predict how that student is going to perform, based upon what they’ve done and based upon that content that I talked about before. And if you understand how well the student is performing against that piece of content, then you can actually begin to understand what that student needs to be able to move forward.

And that’s all in the context of this teaching environment.…

It is very important for people to understand what “adaptive” really is. It is absolutely data-driven. It is absolutely data-driven at scale.

You have to understand content *and* proficiency of students.

And if you don’t, you can build any kind of recommendation engine you want, but you’re literally spitting out randomized answers, and that’s completely irresponsible.

Knewton is working with higher ed institutions to bring personalized learning directly to university students and instructors. Leading that effort is Knewton’s Vice President for Higher Education Markets, Jason Jordan.

Jason recently participated in a podcast for Inside Higher Ed with Rod Murray, the executive director of academic technology at the University of the Sciences, and they discussed:

- the distinction between “personalized” learning and adaptive learning
- how Knewton works in general, and for higher education
- what Knewton supports in higher education and the challenges Knewton is trying to solve
- whether or not Knewton can be applied to other competency-based education, such as corporate training

If you’re interested in learning more about these things, you can either listen to the podcast on insidehighered.com or read the transcript below. This transcript has been edited for clarity.

*Before I get into the weeds with Knewton, why don’t you tell me a little bit about yourself and how you became connected to Knewton.*

Sure. I’ve been in the higher education industry for a little over 25 years. I was with Pearson for the bulk of that time. And as part of my role at Pearson, I became aware of Knewton and actually formed a partnership with them on behalf of Pearson. And so I got to know the company pretty early on when they were one of the very few players in adaptive learning. And that’s how I came to know them and came to know what the company does.

*Great. You know, some of my audience, especially in higher ed, I think, may not be all that familiar with adaptive or personalized learning. First of all, are they synonymous, adaptive and personalized learning? Can you tell us a little bit more about what that really means?*

That’s a great question. So, you know, the way I think about it is, adaptive capability is just that, it’s a capability that we now have through the use of technology. Personalized learning is a learning technique that really lowers the instructor-student ratio and gives the student the opportunity to learn at his or her own pace using the material that is best for them to understand the concept at hand.

*That certainly helps. I think my curiosity is, how does it really work? I mean, I know a little bit about it and I understand that it can be used pretty successfully in math. Because math, I guess, seems to be a little bit easier to handle in terms of personalizing the instruction. How does it really work behind the scenes?*

Sure. So different companies approach adaptive learning in different ways. The way Knewton approaches adaptive learning is through the use of a learning graph. And you’re absolutely right, math is extremely well suited for the adaptive learning process because math is very structured and it’s kind of easy to break down into very atomic parts.

So what Knewton does, the way Knewton handles adaptive learning, is we tag the content at the very atomic level. So this is about three levels below the learning objective level. We actually tag really at the question level. And we graph that content onto a system that looks at mathematics as a whole. And the students, as they are going through our adaptive process, are being served up parts of that graph based on what we think is going to be their optimal learning path.

The idea is that all components of mathematics will be part of the Knewton graph. And students move around that graph based on their strengths and weaknesses. The interesting part about what Knewton does, which I think is another differentiator, is that we actually use crowdsourced data to help determine what the next recommendation should be. We try to match the student’s capabilities that we’re currently serving with past students who have been successful at that atomic particle level on our graph. And we try to classify them by students who have been successful before them to really give them a very personalized experience through the material.

*Interesting. So, for example, if I’m in a learning module on my LMS, and I’m in a Knewton module, I’m going to, you know, say a math question is posed, is there then instruction going along with that or is it pretty much problem and a solution, problem and solution. Is there interspersed with didactic content? How does that work?*

So your experience would be, in a Knewton product, as a student, you’d enter the module and you would get a very brief, probably two-paragraph explanation of what your experience was about to be. And then you would start answering some questions.

I like to think of it as you were as a student, way back when, when you and I were doing it pencil and paper in elementary school or whatever. Sometimes you had a tutor sitting beside you and so the first thing you did was you started answering questions. And when you got to a question that maybe you were having some trouble with, the tutor would help you through some narrative content.

And that’s basically the way Knewton works too. As long as you are answering the questions that you need to answer to satisfy mastery of the content, you will be in answering question mode. But when you start to struggle, we begin to serve up some narrative content to help you better understand the material, to aid in your learning, and to help you progress through the material.

So the narrative is interspersed but very much in a just-in-time manner. So Rod, if you and I were working through the same content and I started struggling before you, I might get some narrative that was unique to my problems, the issues that I was having with the material; and you would breeze through that part of the program if you had mastery of it.

*I’m trying to imagine, as the instructor, how I would produce instructional content, say even in math. Would it just be a matter of asking a question that requires a calculation, like I would normally do on, let’s say, a multiple choice exam, or would I have to really break it down? Like, if you’re doing a test where your work in progress, your notes, and your calculations are going to be graded. So, I can imagine having to break it down, like you say, at a very granular level to help coach the student. Am I on the right track?*

That’s right. You’re on the right track, you absolutely are. So, what the system does is exactly that. You know, once the student begins struggling, we try to zero in on the actual issue that the student is having. And we do that by going more and more granular to the material until we find that part of the material that the student is struggling with. And so, at that point, we’ll serve up some more narrative content to see if we can get the student back on track and to master that material, or we may serve up some remedial content to take them back on that particular subject matter to fill in the learning gap that they may be missing.

*Right. So, I can see how it could be used in math and physics and some scientific disciplines. Are there any other subject areas that you can point to that are not math that seem to work well?*

Yeah. We really feel pretty strongly that Knewton is flexible enough to go across the general education curriculum. As a matter of fact, when a student enters Knewton, let’s say they enter in a math course. Most students in this country are either taking math or freshman composition, many times both their freshman semester. And so, if they experience Knewton at that level, they actually build what we refer to as a student profile in the system and that profile tracks across all content verticals and follows that student; anytime that they are using a Knewton-powered product, that student profile grows.

So we can actually help students not only in math and science and engineering; we can also help them in psychology, economics, and certainly the quantitative side of business. We’re building a big OER content foundation in math but also in English, and as we add more of the English content, we think we’ll be able to help them in subjects like history or political science as well.

*Wow, that would be impressive. I’m just thinking how it sounds like some technology regarding artificial intelligence or almost neural networks could help you in that regard. Is that something that’s in the mix?*

That is not currently on our roadmap but I could see, you know, in the future, that would be an interesting partnership, perhaps. I don’t know that that would be part of our core competency at Knewton. I could certainly see how partnering with an AI-style engine could really help students learn, absolutely.

*Have you received much pushback from faculty? You know, our faculty, sometimes, push back at new technologies. And when we first started doing lecture capture, they didn’t even want to be recorded. What kind of roadblocks or pushback have you gotten?*

I think, once again, we’re sort of asking faculty to take a bigger risk and trust the technology as part of their classroom experience. But if they do that, the payoff is that we feel we can really improve students’ success rates and increase retention rates within courses and across campuses. The faculty have to kind of trust that Knewton is taking all their students on a pathway that will ultimately get them to succeed in their course goals.

And one of the things that we have tried to do with the product is to put some anchors back to the instructor’s syllabus so they can see at general checkpoints that the students are progressing and that everybody is syncing up to their syllabus.

It must be so hard, as an instructor, to go into a classroom full of new students that you’ve never seen before and try to determine at what point they’re all starting from. It’s such a mixture of skills. And we really see Knewton as a tool for faculty that they can use to determine where their students are and to really get everybody rowing in the same direction and on the same page in terms of the progress through the material.

You know, my real hope is that Knewton can really assist the faculty in remediating students and filling in knowledge gaps, which will then free them up to teach more of the higher-level concepts. When you survey faculty or talk to them, that’s what they aspire to do, but they seem to run out of time before they can get to connecting the concepts and helping students understand those higher-order ideas. Hopefully, Knewton will be a tool to help them get to that point. And by trying Knewton out and seeing that it can save that kind of time in the classroom, hopefully they’ll find the barrier to entry lower.

*Yeah. That’s really interesting. Now, does Knewton come as sort of a prepackaged curriculum or is it more of a toolkit that faculty need to work with to build their own content?*

We actually will do both. We’ve had longstanding relationships with many of our publishing partners. Pearson and many other companies around the world utilize Knewton to power their adaptive capability.

The other thing that we have done to aid faculty members is we have curated a lot of Open Educational Resource content. We’ve tagged the content. We’ve ingested it into our system. We have psychometrically “normed the content,” if you will. And we’re offering that up to faculty as a way to take advantage of some really low-cost options if they want it either as a supplement to what they’re currently doing or as a technology that they would use every day as part of their class.

*I can really see this as being an important component of the increase in competency-based education. Is this how you see it going in the future?*

Yeah. I certainly think Knewton is tailor-made for competency-based education. We’re already a mastery-based system. Because we break the content down into really atomic portions, it’s very easy for us to align to a competency-based system. So, we’re excited about the competency-based movement and we look forward to it growing.

We also feel like we’re very well positioned in any teaching style: emporium model, lecture model, “co-requisite-style” model… We feel like the system is flexible enough to adapt to any of those models. But, yes, “competency-based” is something that we think we could really drive.

*Where do you see your main market and where is it now and where is it moving? K–12, higher ed, or both equally?*

I think that we’re going to see a span between K–12 and higher ed. I am certainly focused on the higher ed markets. But there is a growing number of students that are kind of between K–12 and higher ed, that’s the college readiness market. I think that we can really help those kids not get caught into the spiral of developmental education where they burn through so much of their financial aid and many of them don’t persist.

I think we could help through the use of Knewton technology because we can remediate in a very “just-in-time” manner. We can help more of those students enter at the college level and provide remediation on the fly, personalized to just their needs, so the instructor doesn’t have to take time within the syllabus to work that in. I think that’s a big potential for Knewton.

*There are certainly a lot of challenges in higher education, both from the standpoint of faculty having to learn and deal with new technologies and in terms of colleges and universities competing for a diminishing demographic of students. What challenges or opportunities do you see? You’ve touched on this a little bit, but is there anything else you can say about how Knewton is going to address, sort of, the future challenges in higher ed?*

I think one big challenge in higher ed — it’s today’s challenge — but it’s going to be a challenge as we go forward too, is affordability. College is so expensive. And in particular, you know if you are a community college student, sometimes the amount of money you’re paying for your materials in college is close to your tuition bill. I think that we all want that to change.

I think everybody in the industry recognizes that the cost of a college education has gotten out of hand and it’s causing access problems for some students, especially students on the lower socioeconomic scale who just can’t afford to take the leap and go to college because they’re afraid it’s going to put them in too much of a financial bind.

And so, I think Knewton is well positioned to address that affordability issue, number one. Because we’re taking advantage of some OER content and really making it easier to use, we feel that customers who are concerned about affordability will find Knewton very attractive.

I also think that as the enrollments continue to decline around North America, that retention of students and students’ persistence rates are going to become a much bigger deal to college administrators. They are competing for a smaller pie and they’re also going to need to keep the students that they already have worked so hard to get to enroll in their colleges. And so I think that is a wonderful trend for Knewton’s business outlook. Because we feel that we can really raise retention and persistence rates of students and we think that more and more value is going to be placed on that by college administrators in the future.

*I can second that. That’s a big issue, I know, here and a lot of places.*

Absolutely.

*It also occurs to me that your tools would be great for corporate training. Are you seeing that as being a good market for you?*

It’s so interesting that you say that. We’re currently in partnership discussions with several corporate training entities and we believe exactly the same thing. You know, it’s so interesting, Rod, it seems to be that competency-based education has been around for a long time in the corporate training world, you know?

*Right.*

So much of it is based on competency. And it just seems like a natural fit for Knewton to go into corporate training through partnerships with some established companies, and that’s what we’re pursuing.

*Before we wind down, I was wondering if you can tell us anything more about the future of Knewton, anything that we should be looking for? Do you have new releases on a periodic basis? What can you tell us about the future?*

Sure. So, we’re on a constant product-improvement cycle right now; we release new builds of our courses about every two weeks. I think what you’re going to see from Knewton is that we’re going to continue to curate as much of the OER content out there as we can put into our system. And I think over time that will help us expand our product offerings and take us across different content verticals, so we’ll have more product offerings for colleges and universities.

*Well, this has been great. I’ve gotten a much better look into what Knewton does and I wish you all the success in the future.*

Well, thank you so much.

Find Rod Murray on Twitter: @rodspods. And for more of his podcasts, visit RodsPulsePodcast.com.

These days, when people talk about artificial intelligence, there’s a lot of excitement around deep learning. AlphaGo, the algorithmic player that defeated 9-dan Go master Lee Sedol, incorporates deep learning, which meant that its programmers didn’t need to teach AlphaGo the rules of Go. They gave AlphaGo a lot of Go matches, and it figured out the rules on its own.

Deep learning has also shown impressive results in areas from computer vision to bioinformatics to linguistics. Deep learning helps Facebook understand the words people post there in more than 20 languages, and Amazon uses it to have conversations through Echo.

So deep learning is proving to be a popular way to understand how people write, speak, see, and play, but how good is it at modeling how people learn?

Last year, a team led by Chris Piech of Stanford University trained a recurrent neural network to do deep learning — or what they call Deep Knowledge Tracing. The idea is that, just as AlphaGo’s programmers didn’t need to teach it the rules of the game, Deep Knowledge Tracing can make sense of what’s being learned without human help. Using a public data set from ASSISTments, which guides students through math problem-solving, Deep Knowledge Tracing showed promising initial results.
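Piech et al.’s exact architecture isn’t reproduced here, but the core recurrent idea behind Deep Knowledge Tracing can be sketched as follows. This is a minimal illustration: the untrained, randomly initialized weights stand in for a fitted model, and the layer sizes and helper names (`encode`, `dkt_step`) are our own, not the paper’s.

```python
import numpy as np

rng = np.random.default_rng(0)
n_items, hidden = 5, 8

def encode(item, correct):
    """One-hot encoding of an (item, correct/incorrect) interaction,
    in the style of the Deep Knowledge Tracing input representation."""
    x = np.zeros(2 * n_items)
    x[item + (n_items if correct else 0)] = 1.0
    return x

# Randomly initialized weights stand in for trained parameters.
W_x = rng.normal(scale=0.1, size=(hidden, 2 * n_items))
W_h = rng.normal(scale=0.1, size=(hidden, hidden))
W_y = rng.normal(scale=0.1, size=(n_items, hidden))

def dkt_step(h, item, correct):
    """One vanilla-RNN step: fold the latest interaction into the
    hidden state, then predict P(correct) for every item."""
    h = np.tanh(W_x @ encode(item, correct) + W_h @ h)
    p = 1.0 / (1.0 + np.exp(-(W_y @ h)))  # per-item probabilities
    return h, p

h = np.zeros(hidden)
for item, correct in [(0, True), (1, False), (0, True)]:
    h, p = dkt_step(h, item, correct)

# `p` now holds a predicted probability of success for each item.
```

Training would fit the weight matrices by gradient descent on the observed response sequences; the point of the sketch is only that the whole student history is compressed into one hidden vector with no hand-built concept hierarchy.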

There are other ways of modeling what students know. Item Response Theory, for example, has been around since the 1950s. It has been extended over the last decade to incorporate how people learn over time as well as expert human knowledge about the hierarchy of concepts being learned.

What’s the best way to predict what students know and don’t know, based on their previous answers and interactions?

Four Knewton data scientists — Kevin Wilson, Yan Karklin, Bojian Han, and Chaitanya Ekanadham — took a closer look at Deep Knowledge Tracing, comparing it with three models of how people learn built upon Item Response Theory. In addition to a classic Item Response Theory (IRT) model, the Knewton data science team used a temporal IRT model (called TIRT in the accompanying charts) and a hierarchical one (shown as HIRT).
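The team’s specific Bayesian extensions aren’t spelled out here, but the classic two-parameter logistic (2PL) model underlying all three IRT variants is simple to state: the probability of a correct response grows with the gap between student proficiency and item difficulty. A minimal sketch (the function name and default values are illustrative):

```python
import math

def irt_2pl(theta, difficulty, discrimination=1.0):
    """Two-parameter logistic IRT: probability that a student with
    proficiency `theta` answers an item of the given difficulty
    correctly. `discrimination` controls how sharply the item
    separates weaker from stronger students."""
    return 1.0 / (1.0 + math.exp(-discrimination * (theta - difficulty)))

# A student whose proficiency equals the item's difficulty: 50/50.
p_even = irt_2pl(theta=0.0, difficulty=0.0)

# A stronger student facing the same item does much better.
p_strong = irt_2pl(theta=2.0, difficulty=0.0)
```

The temporal and hierarchical variants extend this core by letting `theta` drift over time and by tying item parameters to an expert-supplied concept hierarchy, respectively.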

The Knewton team used three collections of anonymous student interaction data, including ASSISTments, the Bridge to Algebra 2006–2007 data set from the KDD Cup, and millions of anonymized student interactions collected by Knewton.

With all three data sets, the Knewton team found that the Item Response Theory methods “consistently matched or outperformed” Deep Knowledge Tracing. Not only were the Item Response Theory approaches better at predicting what people know, they were easier to build and tune, “making them suitable candidates for real-world applications” such as adaptive learning platforms.

With Deep Knowledge Tracing, meanwhile, the Knewton team “found that computation time and memory load were prohibitive when training on tens of thousands of items. These issues could not be mitigated by reducing dimensionality without significantly impairing performance.”

In other words, deep learning still has a way to go to match established ways of modeling student learning.

For more details, read Back to the Basics: Bayesian extensions of IRT outperform neural networks for proficiency estimation or visit the International Educational Data Mining Society conference in Raleigh on July 1.

And if you want to reproduce our results, you can find code, links to the data sets, and instructions on GitHub.

For the first time in six years, the U.S. Department of Education has issued a National Education Technology Plan. So much has changed in the six years since the previous plan’s comprehensive overview of the state of educational technology. As the latest plan puts it, the conversation has shifted from *whether* technology should be used in learning to *how* it can improve learning to ensure that all students have access to high-quality educational experiences.

I’ve witnessed a similar progressive shift in my five and a half years at Knewton. In the early days, when Knewton was alone in the field, we had to explain what adaptive learning was and that it was actually possible to implement. Now, most companies in education, and everyone from Barack Obama to Mark Zuckerberg, are talking about the value of personalized education, and terms like adaptive and personalized get used for a wide spectrum of approaches and tools.

Likewise, educators and publishers from around the world now understand the promise and value of adaptive learning, and they see that its moment has arrived. We’ve seen particular interest from the advanced education markets of northern Europe and Asia. In China, for example, where national exams can determine your future, 7% of disposable income is spent on education, as compared to 2% of disposable income in the U.S. Chinese students turn to supplemental programs, both in person and online.

Take 17zuoye, a digital learning platform for K–12 students that began as an extracurricular learning application. Now teachers recommend 17zuoye as a supplement to the work they do in class. With an audience of 14 million students and 700 thousand teachers, 17zuoye has turned to Knewton to make its math and English language training programs adaptive.

Global interest in adaptive learning is also reflected in our most recent round of funding, which included investment from TAL Education Group, another Chinese K–12 education company, and EDBI, the corporate investment arm of the Singapore Economic Development Board. This infusion of capital is helping Knewton grow our teams to work closely with our global partners as they ready their products for launch.

Our partners serve everyone from Chinese children learning English to Spanish-speaking teenagers learning algebra or adults preparing for the Brazilian bar exam. These education companies are eager to release their adaptive learning products, to learn from the market and make the most of this moment of opportunity. To get their products to market more quickly, they are turning to companies like Knewton and also making connections with each other. For example, Gyldendal Denmark has partnered with Gyldendal Norway (a completely independent company) to bring the Norwegian publisher’s adaptive learning materials to students in Denmark. Our established Norwegian partner gains distribution, while Gyldendal Denmark can bring adaptive learning quicker to market and at lower cost.

The shift to digital educational materials from printed textbooks is a global phenomenon, and we’re seeing plenty of examples here in the United States, from Khan Academy to AltSchool. With Pearson starting to integrate Knewton into K–12 products, the big three American educational publishers have all acknowledged that adaptive learning is essential for their future. And our partner Waggle, which makes a smart, responsive practice application for grades 2–8, has shown truly impressive results, improving outcomes from Oklahoma City to West Palm Beach to a high-poverty urban school in Baltimore.

Educational institutions are also taking the initiative in bringing adaptive learning to their students. The Florida Virtual School, which is America’s first and largest online public school district, will use Knewton to power adaptive course materials for more than 200,000 students over the next few years.

We’re also hearing from every kind of institution of higher education, from public and private universities with four-year programs to community colleges to the for-profit sector. Administrators and faculty alike are hungry for learning products with a high-quality user interface, diverse and deep content, and data-driven adaptive learning. With more than 4,000 degree-granting institutions in the United States and international students flocking to study here, this country is well positioned to lead the way in adaptive learning at the post-secondary level and help more people fulfill their potential and find their way in the working world.

It has been fascinating to see the different ways adaptive learning has taken root in different places, and I look forward to seeing how it develops as its adoption spreads and accelerates and as examples from around the world inform and inspire each other. One thing is for certain: As the report from Washington, D.C., says, adaptive learning is no longer a matter of *whether*. It’s all about *how*.

Measuring how much time each student spends on each item allows us to help teachers understand better how their students are working and whether they are engaged by the material.

When students are disengaged, their interactions with Knewton can sometimes reflect that. For example, when students move through coursework much faster than they usually do but without better performance, they might not be paying much attention, and are just clicking to get to the next thing. We call such behavior “spamming.”

Everything is relative: If a student is generally a fast worker, a fast response doesn’t count as spamming. Similarly, we take into account how long we expect each item to keep a student’s attention. Spending 28 seconds watching a 30-second video is generally considered working, while 28 seconds on a 5-minute video might suggest something else.

Once Knewton’s algorithm establishes a baseline expectation for each student and each piece of content, we can look at how long each interaction took and, by way of illustration, assign it a working-or-spamming probability.
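Knewton’s actual scoring model isn’t published here, but the idea of comparing each interaction against a baseline can be sketched with a simple logistic score on log response time. Every name and parameter value below (`working_probability`, `spam_ratio`, `steepness`) is hypothetical, not Knewton’s real formula:

```python
import math

def working_probability(time_spent, expected_time,
                        spam_ratio=0.5, steepness=1.5):
    """Illustrative working-vs-spamming score: interactions much
    shorter than the expected time (here, below roughly half of it)
    lean toward spamming. The ratio is taken on a log scale so that
    "twice as fast" means the same thing for short and long items."""
    gap = math.log(time_spent / expected_time) - math.log(spam_ratio)
    return 1.0 / (1.0 + math.exp(-steepness * gap))

# 28 seconds on a 30-second video: leans toward working.
p_short_video = working_probability(28, expected_time=30)

# 28 seconds on a 5-minute (300-second) video: leans toward spamming.
p_long_video = working_probability(28, expected_time=300)
```

In practice the `expected_time` baseline would itself be estimated per student and per item, so that habitually fast workers aren’t mislabeled.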

Since Knewton is integrated into different learning applications aimed at different age groups, we wanted to see whether spamming rates vary by grade level, but users working on elementary school materials (most of them, presumably, elementary school students) and users of college coursework answered unexpectedly quickly at about the same rate. We also looked at whether students were more likely to spam on certain days of the week, but we didn’t see a Friday effect with spamming like we do with performance.

In addition, we examined whether different types of questions affected how likely students were to spam a particular question. The Knewton open platform, for example, has some questions that are multiple choice and others that require students to type in an answer. Working interactions were almost evenly split between free-response and multiple-choice questions, but spamming occurred disproportionately on free-response questions: students were roughly three times as likely to spam on them.

Understanding this kind of spamming behavior augments our sense of what keeps each individual student engaged and, more broadly, contributes to our understanding of how students interact with learning applications. Knowing whether students are working productively enables Knewton and its partners to make better applications that help students remain in a learning flow and help spamming students get back on track.

Further analysis shows that Knewton-powered adaptive assignments for struggling students narrow the gap between them and high-performing students on subsequent assignments. Closing this gap is one of the biggest challenges instructors can face in the classroom.

In “Reducing the Gap: How Adaptive Follow-Ups Help Struggling Students,” Hillary Green-Lerman and Kevin Wilson of Knewton looked at 48,202 students who used an online homework tool for college-level science textbooks in the spring of 2014. Beyond ordinary homework assignments, students who didn’t show mastery of the concepts they were learning received adaptive follow-up assignments powered by Knewton. These adaptive assignments present a personalized sequence of questions designed to address each student’s individual strengths and weaknesses.

Our research team found that students who were assigned an adaptive follow-up after struggling on a first assignment showed improvements of between four and 12 percentage points on subsequent assignments relative to their classmates who did not have adaptive follow-up assignments.

Students with a lower score have more room to improve than high-performing ones, so Green-Lerman and Wilson corrected for initial differences in grade distributions between the higher- and lower-performing students. Taking this correction into account, they still see an average improvement of three points, and as much as eight.

The online homework tool discussed in this study makes use of only a small portion of what Knewton can do to improve learning outcomes. Knewton’s research team continues to validate the efficacy of adaptive learning, and plans to continue to share its findings.

To read the full study, sign up below to download “Reducing the Gap.”

Santillana has named the product A2O, a play on the concept of *aprendizaje líquido*, or liquid learning: the idea that students can flow through their lessons in their own way. A2O adapts in real-time to address each student’s needs and helps teachers see exactly where students need support. Analytics give teachers unprecedented visibility into the learning process.

Teachers participating in the pilot over the next six months will be incorporating A2O into their algebra curriculum. Students will use A2O in the classroom and at home. The pilot covers four to six weeks of secondary-level algebra, and Santillana hopes to expand to other topics in math and other subjects.

Throughout the product development process, the Santillana team has received invaluable insights and feedback from teachers in A2O’s target market. We’re incredibly proud of the work Santillana has done and eager to see the product in the hands of as many students and teachers as possible. Interested teachers can still sign up for the pilot.

With over 28 million students in 22 countries, Santillana has an unparalleled understanding of Spanish and Latin American education. Still, teacher and student feedback will be invaluable when Santillana expands its adaptive learning offering.

In October 2014, Knewton announced a partnership with Santillana, the leading educational publisher in Spain and Latin America, to develop adaptive learning products for the Spanish-speaking world. Santillana is now launching a pilot of a secondary-level algebra product aimed at students ages 12 and 13 and their teachers.

