Knerd Story – David Simon, Baton Rouge Community College

David Simon
Baton Rouge Community College
Mathematics

Background

For a long time, I'd been thinking about making videos of myself teaching, but I never got around to it. The move to remote learning in spring 2020 was the motivation I needed to get going on making these teaching videos. By the end of that first week of remote learning, I had a rig set up with my tripod, a selfie stick, and my phone, and I just started making videos and created a YouTube channel. The channel now has about 200 videos.

Why did you decide to use Knewton Alta?

As a college, we had decided to pilot different products in different courses. In the fall of 2019, I was piloting Knewton Alta along with a competing product. In the beginning, I really got into Knewton Alta. I just felt that it flowed better, was easier for students to latch on to, and linked up with Canvas much more easily. It was working great. Cut to March 2020: I was starting my video project in the middle of the semester and working with multiple software platforms in different courses.
I quickly realized I didn't want my videos to be connected to any particular textbook, so I decided not to use textbook problems in my videos. Instead, I used my own problems, didn't have to follow along with textbook chapters, and could let things unfold in my own way. I started incorporating Knewton, which I felt fit in with my plan more seamlessly. I had links to OpenStax textbooks, and Knewton worked better with my videos. So I stuck with it.

What features of Knewton Alta do you like?

The fact that the homework is interactive and can pinpoint for students where they are is so important. Knewton Alta can tell them, “You need to practice this topic some more…” Instead of just being the typical “Do questions one through five and hopefully I can give you feedback.”
Knewton Alta can give students that individualized feedback. That's what I need. And I know some of the other software does that too, but Knewton was simpler and it just worked out better.
This interactive stuff, this is the future. Especially with all the video conferencing technology and all the remote learning that’s happening. I don’t think education is ever going back to the way it was. Nobody’s going to let that happen.
I like that Knewton Alta is software where you never see a negative or a downside, because it never takes points away. That's one thing that helps motivate students, too. They can lose confidence if they keep getting hammered for getting things wrong.
Knewton Alta's adaptability is important. I tried a couple of other adaptive software products. A competitor product was also good, but Knewton's interface just seemed to flow better. The problem choices were a bit more varied. I felt like I could do more in it. I could make some of the problems in it. And there were just too many blockades with some of the other software, like not linking with Canvas or making students log in somewhere else. Knewton Alta and I jibe like puzzle pieces: my tap dancing and its adaptability. And I just kept getting such good feedback from the students about it, too.

What has been the impact of using Knewton Alta?

Well, things have been developing since Spring 2020. I kept piloting Knewton Alta at my school until I was able to show everyone how well it was working. I used to have a 40 or 50% pass rate. Now I'm up to a 70% pass rate.
This semester (Fall 2021) we actually went ahead and adopted it for the college algebra and trigonometry classes too. And it seems to be going really well. There are always pitfalls when lots of people are using things, because some people are good at the technology and some people aren't. I've tried to help as much as I can. And then my Customer Success Manager for Knewton Alta fills in most of the gaps. And I'm hoping that it can keep going. I know sometimes people have a tendency to regress: they want to go back to something they know, to the solutions and partners we've used in the past. And I say no, we can't do that.

Implementation Strategy

I teach a topic and assign a Knewton Alta homework on the topic (or sub-topic). I try to cut the homework up into manageable chunks of about half an hour each. I do this if the Knewton topic is too big. It also allows students to get quick feedback and encouragement. Then even if one weekly Canvas module has seven or eight assignments, students know each one will take about 30 minutes.
Many educators, myself included, would love to be able to help individual students. Back in the "little red schoolhouse" days you had individual attention. It would be great to be able to stand next to a student and let them know I'm helping them today, until they understand the material like the back of their hand. We just can't do that. Every semester I've got upwards of 200 students at a small college.
We're getting technology like Knewton Alta right when we need it, or maybe a little later than we needed it.

And I don’t know if a lot of us realized that until something like Knewton Alta came along. But now we can figure out where the problems are, where students’ difficulties are, and try to smooth them out. And that’s what we’ve been needing.

Connecting our Knerds: Day of Knerdvocate Collaboration

Earlier this summer, Knewton hosted its first national Knerd Camp, bringing together adopters from across the country to discuss all things alta with our staff knerds.

Knewton’s “knerdvocates” played a key role in making the meeting a success. Knerdvocates are educators who use alta to drive student success in their own courses and help others do the same in theirs through peer coaching, best practices, and thought leadership.

During the two-day Knerd Camp, instructors were able to experience a day in the life of a Knewton Knerd in New York City. Sharing Knewton's vision of "putting achievement in reach for everyone," attendees discussed their teaching challenges, offered advice, and collaborated with each other and staff Knerds on even better ways to help students using alta. They also got a behind-the-scenes view of the data science behind alta and enjoyed a preview of the newly enhanced interface with Knewton's product developers, data scientists, and support team.

Instructor Shawn Shields commented, "I really learned a lot from this event in terms of how it works and hearing others' experiences and best practices. I ended up with quite a few good ideas from others that I can modify and add to my course," confirming how important it is to give educators opportunities for peer-to-peer coaching in a comfortable, positive setting.

Passion was also a recurring theme, with Knerdvocates inspired by the passion of the Knerd learning community. "I loved the opportunity to talk with the Knerds, who were so passionate about what they do… The Knerds genuinely care about what they do. You can't fake that kind of passion and dedication," said instructor Melanie Yosko.

Knerd Camp was rounded out with a cruise on the Hudson to visit New York City's famous landmarks and dinner at the suitably named tapas restaurant, alta.

Building on the success of our first national Knerd Camp, Knewton is planning to expand the program with a series of regional Knerd Camps for instructors who are interested in learning more about alta. Keep your eyes open for one in your area.

 

Interpreting Knewton’s 2017 Student Mastery Results

This post was developed with Illya Bomash, Knewton’s Managing Data Scientist.

Results. Efficacy. Outcomes.

Student success is the ultimate goal of learning technology. Despite this, there exists a startling lack of credible data available to instructors and administrators that speaks to the impact of ed-tech on learning and academic performance.

To provide instructors and administrators with greater transparency into the effectiveness of alta and the Knewton adaptive technology that powers it, we analyzed the platform results of students using alta. These results represent our effort to validate our measure of mastery (more on that to come) and provide instructors and administrators with much-needed transparency regarding the impact of alta on student achievement.

Here, we provide context and explanation that we hope will leave educators and those in the ed-tech community with a clearer picture of how we arrived at these results, and why they matter.

Our data set

The findings in this report are drawn from the results of 11,586 students who cumulatively completed more than 130,000 assignments and 17,000 quizzes in alta in 2017.

This data set includes all of alta’s 2017 spring and summer student interactions. Only cases in which the relevant calculations are impossible have been excluded — such as quiz scores for a course in which the instructor chose not to administer quizzes. So while these results aren’t from randomized, controlled trials, they do paint an accurate portrait of student performance across alta users, making use of as much of our student data as possible.

Why mastery?

Our adaptive technology is based on the premise that if a student masters the concepts tied to the learning objectives of their course, that student will succeed in the course and be prepared to succeed in future courses. It’s also based on the premise that Knewton’s mathematical model of student knowledge states — which we frequently refer to as Knewton’s proficiency model — can determine when a student has reached mastery.

This basis in mastery manifests itself in how students experience alta: every assignment that a student encounters in alta is tied to learning objectives that have been selected by the instructor for their course. A student "completes" an alta assignment when our proficiency model calculates that the student has mastered all of the learning objectives covered in that assignment.

Our 2017 Mastery Results seek to clarify two things: the frequency with which students achieve mastery in alta, and the later performance of students who have (and have not) achieved mastery, as determined by our proficiency model.

Controlling for students’ initial ability level

In this analysis, we wanted to assess the impact of mastery across the full spectrum of student ability levels. To capture a sense of each student’s initial proficiency, we aggregated the first two questions each student answered across all of the concepts he or she encountered in the course. The percentage of those questions the student answered correctly provides a naive but reasonable estimate of how well the student knew the material entering the course.

We looked at the distribution of this score across all of our students, tagging each student’s history with a label corresponding to where they fell among all users.

Note: Knewton’s proficiency model neither uses this measure nor tags students with any kind of “ability label.” Our adaptive technology calculates a detailed, individualized portrait of each student’s proficiency levels across a wide range of concepts after each student interaction. But for the sake of this comparative impact analysis, we’ve chosen to use these distinctions as a tool to compare students of similar initial abilities.
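As a concrete illustration of this kind of initial-ability estimate, here is a minimal sketch in Python; the column names, data layout, and quartile labels are assumptions made for the example, not Knewton's actual schema or grouping.

```python
# Minimal sketch, assuming a response log with columns
# (student_id, concept_id, is_correct, timestamp); these names are illustrative.
import pandas as pd

def initial_ability_labels(responses: pd.DataFrame) -> pd.Series:
    """Score each student by the fraction of their first two answers per concept
    that were correct, then bucket students by where they fall in the overall
    distribution (here, quartiles; used only for comparison, never by the model)."""
    first_two = (
        responses.sort_values("timestamp")
        .groupby(["student_id", "concept_id"])
        .head(2)                          # first two answers on each concept
    )
    score = first_two.groupby("student_id")["is_correct"].mean()
    return pd.qcut(score, 4, labels=["bottom", "lower-middle", "upper-middle", "top"])
```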

Our findings

Students of all ability levels achieved mastery with alta at high rates

Analyzing students’ assignment completion revealed that with alta, students achieve mastery at high rates. As seen in Figure 1, across all students, 87% of the time, students working on an assignment in alta achieved mastery. Even among students who struggled to complete a particular assignment, 82% eventually reached mastery.

Achieving mastery with alta makes a positive impact on students’ academic performance

We know that with alta, students are highly likely to achieve mastery. But what is the impact of that mastery? When our model indicates that a student has mastered the material, how well does the student perform on future assignments, quizzes, and tests?

For any given level of initial ability, Knewton's adaptive learning technology is designed to facilitate reaching mastery effectively for any student willing to put in the time and effort. To validate Knewton's measure of mastery, we compared the performance of students who mastered prerequisite learning objectives (for adaptive assignments) and target learning objectives (for quizzes) through alta with students of similar initial ability who did not master these concepts.

Mastery improves the quiz scores for students of all ability levels

Figure 2 shows average Knewton quiz scores for students who did and did not reach mastery of the quiz learning objectives on prior adaptive assignments. Quiz takers who mastered at least ¾ of the quiz learning objectives through previous adaptive work went on to achieve substantially higher quiz scores than similarly skilled peers who mastered ¼ or fewer of the learning objectives.
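The comparison behind Figure 2 can be sketched roughly as follows; the table layout, column names, and thresholds below are illustrative assumptions, not the exact analysis pipeline.

```python
# Rough sketch of the Figure 2 style comparison: average quiz score by
# initial-ability label and by how much of the quiz's learning objectives the
# student had already mastered. Column names are assumed for illustration.
import pandas as pd

def quiz_score_comparison(quizzes: pd.DataFrame) -> pd.DataFrame:
    def mastery_group(frac: float) -> str:
        if frac >= 0.75:
            return ">= 3/4 mastered"
        if frac <= 0.25:
            return "<= 1/4 mastered"
        return "middle"

    grouped = (
        quizzes.assign(group=quizzes["fraction_objectives_mastered"].map(mastery_group))
        .groupby(["ability_label", "group"])["quiz_score"]
        .mean()
    )
    return grouped.unstack("group")   # one row per ability label, one column per group
```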

Mastery levels the playing field for struggling students

Putting in the work to reach mastery on the relevant adaptive assignments increased initially struggling students’ average quiz scores by 38 percentage points, boosting scores for these students above the scores of otherwise advanced students who skipped the adaptive work.

Mastery improves future platform performance

Students who master the learning objectives on earlier assignments also tend to perform better on later, more advanced assignments.

Assignment completion

As Figure 3 shows, controlling for overall student skill levels, students who mastered ¾ of the learning objectives prerequisite to any given assignment tended to complete the assignment at much higher rates than students who did not. This is the virtuous cycle of mastery: the more students master, the better prepared they are for future learning.

Work to completion

Mastery of an assignment's learning objectives also saves students time. When students began an assignment after having mastered most of its prerequisites, they tended to require significantly fewer questions to complete it. For students who mastered at least ¾ of the prerequisites to any given adaptive assignment, completing the assignment took 30-45% fewer questions than it did for students who did not (see Figure 3). Mastery helps students of all abilities learn faster, and struggling students see the biggest gains: for these students, prerequisite mastery shortens the average follow-on assignment by more than 40%.

The road ahead

Any self-reported efficacy results will be met with a certain amount of scrutiny. While we’ve attempted to be as transparent as we can be about our data, we understand that some will question the validity of our data or our approach to presenting it.

It’s our hope that, if nothing else, the reporting of our results will inspire others in the ed-tech community to present their own with the same spirit of transparency. In many ways, these results are intended not as a definitive end-point but as the start of a more productive conversation about the impact of technology on learning outcomes.

Lastly, while our 2017 Student Mastery Results are encouraging, we know that they exist in a world that is constantly changing. The challenges in higher education are becoming greater and more complex. The student population is growing increasingly diverse. Our technology and our approach to learning are evolving.

This year, we plan to update these numbers periodically and provide the results of other analyses with the goal of providing greater transparency into the effectiveness of alta and deeper insights into how students learn.

View full mastery results

Putting achievement within reach through the corequisite model for redesign

For decades, educators and policymakers have been looking for ways to remedy the epidemic of incoming freshmen who require extra preparation for college-level coursework yet end up languishing in courses that don't earn them college credit.

The number of students placed into non-credit-bearing "prerequisite" courses who fail to ever enter, let alone pass, a credit-bearing course is staggering. Ninety-six percent of colleges enrolled students who required remediation during the 2014-2015 academic year, and more than 200 schools placed more than half of their incoming students into at least one remedial course.

But fewer than one in four students in remediation at two-year colleges ever makes it to a credit-bearing course. This comes at a cost to taxpayers of $7 billion annually.

To address this challenge, colleges and universities have increasingly turned to “redesign,” a shorthand term for the process by which they reconceive instruction for an entire course area to improve student outcomes and cut down on costs for students.

There are many configurations of redesign. While all have led to some level of student gains, the corequisite model has produced results that merit closer attention.

Corequisites: An overview

The corequisite model dispenses with prerequisite courses and replaces them with college-level courses that include "just-in-time" support for students who require it. By providing extra support within the framework of a college-level course, the corequisite model promises to accelerate students' progress and increase their chances of success.

In Georgia, a state that had used a traditional, prerequisite model, only 21% of developmental-level students went on to complete the related, college-level math course. After transitioning to corequisites, that number leapt to 64%. The results were even more dramatic in Tennessee, where the number of students requiring remediation in math who went on to complete a credit-bearing course exploded, going from 12% to 63%.

More states are dipping their toes into corequisite waters. This past June, Texas Governor Greg Abbott mandated that the state's public colleges and universities enroll 75% of their developmental-level students in a corequisite course by 2020. In the parlance of the tech community, that's a big lift.

Personalizing corequisite instruction with Knewton’s adaptive technology

For states and institutions seeking to give students who require extra support the skills they need while keeping them on pace to earn their degree on time, the corequisite model of redesign shows promise. But still, corequisites present a challenge: providing personalized instruction to students whose skill levels may vary greatly at the start of the course.

Looking to power their corequisite redesign efforts with adaptive technology, colleges are making Knewton a key part of their corequisite courses.

Knewton, which provides all students with an adaptive, personalized path to mastery and offers "just-in-time" support when needed, is a perfect fit for corequisite courses, which must achieve dual goals: providing developmental-level students with prerequisite skills while helping all students achieve the learning objectives of a college-level course.

And at $44 for two years of access, Knewton’s alta is in line with one of the goals of corequisites: making college more affordable.

While Knewton fits perfectly within any college-level course that includes both well-prepared and underprepared students, we've created corequisite versions of some courses to reflect their unique structure, which varies based on approach. Our corequisite courses support a blended approach with "just-in-time" support, a targeted approach with developmental math review available to be assigned at the beginning of every chapter, and a compressed approach that includes four weeks of developmental math and eight weeks of college-level material.

Looking ahead

New approaches to redesign and corequisites are constantly emerging. Because of how our content and technology is built, we’re ready to help institutions and instructors seize opportunities to help students succeed by quickly designing solutions that meet their needs.

And because corequisites are rapidly expanding into new course areas, we’re constantly adding to our roster of courses that include developmental support. By 2018, we will offer corequisite versions of the following courses:

We’re excited by Knewton’s ability to support the corequisite model of redesign and to bring you the results of Knewton implementations in corequisite courses. In the meantime, we’ll be working hard to put achievement in reach for all learners by making more corequisite courses available.

What does knowing something tell us about a related concept?

At Knewton, we’ve built an adaptive learning platform that powers digital education around the world based on cutting-edge algorithms that leverage the diverse datasets we receive. One of the core data-driven models that powers everything we do is our Proficiency Model, which we use to infer each student’s knowledge state. We do this by combining a “knowledge graph”, time-tested psychometric models, and additional pedagogically motivated modeling. We’ll show you how the relationships in the knowledge graph get realized in Knewton’s Proficiency Model and answer the question: “What does knowing something tell us about knowing a related concept?” This has important pedagogical consequences, as well as an enormous impact on how our recommendations get served (and how confident we can be in their accuracy!).

The Knowledge Graph

One of the core components of Knewton adaptivity is the knowledge graph. In general, a graph is composed of nodes and edges. In our case, the nodes represent independent concepts, and the edges represent prerequisite relationships between concepts. An edge between concepts A and B (A → B) can be read as "concept A is prerequisite to concept B." This means that the student generally must know concept A before being able to understand concept B. Consider the example portion of a knowledge graph below:

In math-speak this is a directed acyclic graph (DAG). We already covered what the “graph” part means. The “directed” part just means that the edges are directed, so that “A prerequisite to B” does not mean “B prerequisite to A” (we instead say “B postrequisite to A”). This is in contrast to undirected edges in social networks where, for example, “A is friends with B” does imply “B is friends with A”. The “acyclic” part of DAG means there are no cycles. A simple cycle would involve A → B → C → A. This would imply that you need to know A to know B, B to know C, and then C to know A! This is a horrible catch-22. You can never break the cycle and learn these concepts! Disallowing cycles in the graph allows us to represent a course, without contradictions, as starting with more basic concepts, and leading to more advanced concepts as the student progresses (this progression is top-to-bottom in the graph above).

Another crucial aspect of the knowledge graph is the content: i.e. the assessing questions and the instructional material. Each concept has a number of such content pieces attached, though we don’t show them in the picture above. You can think of them as living inside the node.
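To make this concrete, here is a minimal sketch of a knowledge graph as a directed acyclic graph of concepts, each carrying its own questions and instructional content; the class and method names are illustrative, not Knewton's internal data structures. The cycle check enforces the "acyclic" requirement discussed above.

```python
# Illustrative knowledge graph: concepts are nodes, a directed edge A -> B means
# "A is prerequisite to B", and each concept carries its own content.
from dataclasses import dataclass, field

@dataclass
class Concept:
    name: str
    questions: list = field(default_factory=list)       # assessing items
    instruction: list = field(default_factory=list)     # instructional material

class KnowledgeGraph:
    def __init__(self):
        self.concepts: dict[str, Concept] = {}
        self.postreqs: dict[str, set[str]] = {}          # A -> {B}: A is prerequisite to B

    def add_concept(self, name: str) -> None:
        self.concepts[name] = Concept(name)
        self.postreqs[name] = set()

    def add_prerequisite(self, a: str, b: str) -> None:
        """Record that concept `a` is prerequisite to concept `b` (a -> b)."""
        self.postreqs[a].add(b)
        if self._has_cycle():
            self.postreqs[a].remove(b)
            raise ValueError(f"{a} -> {b} would create a cycle; the graph must stay acyclic")

    def _has_cycle(self) -> bool:
        # Depth-first search with three colors: reaching a "gray" node again means a cycle.
        WHITE, GRAY, BLACK = 0, 1, 2
        color = {c: WHITE for c in self.concepts}

        def visit(node: str) -> bool:
            color[node] = GRAY
            for nxt in self.postreqs[node]:
                if color[nxt] == GRAY or (color[nxt] == WHITE and visit(nxt)):
                    return True
            color[node] = BLACK
            return False

        return any(color[c] == WHITE and visit(c) for c in self.concepts)
```

Calling `add_prerequisite("Add whole numbers", "Multiply whole numbers")`, for example, records the A → B edge, and any edge that would close a cycle is rejected, which keeps the graph free of the catch-22 described above.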

How do we know what you know?

Of course, we can never know exactly what you know (that'd be creepy!). Instead, we estimate the student's knowledge state using a mathematical model called the Proficiency Model. This takes as inputs the observed history of a student's interactions, the graph structure, and properties of the content (question difficulty, etc.), and outputs the student's proficiency in all the concepts in the graph at a given point in time. This is summarized below:

 

Abstractly, proficiency on a concept refers to a student's ability to perform tasks (such as answering questions correctly) related to that concept. Thus, we can use the estimated proficiencies to predict whether the student will answer future questions correctly. Comparing our predictions to reality provides valuable feedback that allows us to constantly update and improve our model and assumptions.

The foundation for our Proficiency Model is a well-tested educational testing theory known as Item Response Theory (IRT). One important aspect of IRT is that it accounts for network effects: we learn more about the content and the students as more people use the system, leading to better and better student outcomes. IRT also serves as a base on which we can build additional features into our Proficiency Model.
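For reference, the classic two-parameter logistic item response function at the core of standard IRT looks like the sketch below; it is shown only to fix ideas about what "proficiency" and "difficulty" mean in this family of models, and Knewton's production model extends it in the ways described in this post.

```python
# Standard 2PL item response function (illustration only, not Knewton's full model).
import math

def p_correct(theta: float, difficulty: float, discrimination: float = 1.0) -> float:
    """Probability that a student with proficiency `theta` answers an item of the
    given difficulty correctly; discrimination controls how sharply the item
    separates students above and below its difficulty."""
    return 1.0 / (1.0 + math.exp(-discrimination * (theta - difficulty)))

# e.g. p_correct(theta=0.5, difficulty=0.0) is roughly 0.62: a student a bit above
# the item's difficulty is a modest favorite to answer correctly.
```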

One thing that basic IRT does not include is any notion of temporality. Thus older responses count the same as newer responses. This is fine in a testing environment, where “older” responses mean “generated 20 minutes ago”, but isn’t great in a learning environment. In a learning environment, we (obviously) expect that students will be learning, so we don’t want to overly penalize them for older work when in fact they may have had an “Aha!” moment. To remedy this, we’ve built temporal models into IRT that make more recent responses count more towards your proficiency estimate than older responses on a concept*.
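One simple way to get this kind of recency effect, shown purely as an illustration (the decay form and half-life are assumptions, not Knewton's actual temporal model), is to weight each response's contribution by an exponential decay in its age:

```python
# Illustrative recency weighting: a response `half_life_days` old counts half as
# much as one answered just now. The functional form is an assumption.

def recency_weight(age_in_days: float, half_life_days: float = 14.0) -> float:
    return 0.5 ** (age_in_days / half_life_days)

def weighted_correct_rate(responses: list[tuple[bool, float]]) -> float:
    """`responses` holds (is_correct, age_in_days) pairs for one concept."""
    total = sum(recency_weight(age) for _, age in responses)
    correct = sum(recency_weight(age) for was_right, age in responses if was_right)
    return correct / total if total else 0.0
```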

Another thing that basic IRT does not account for is instructional effects. Consider the following example. Alice got two questions wrong, watched an informative video on the subject, and then got one question right. Under basic IRT we'd infer that her proficiency was the same as Bob's, who got the same two questions wrong, did not watch the video, and then got one question correct. This doesn't seem accurate. We should take Alice's instructional interaction into account when inferring her knowledge state and deciding what's best for her to work on next. We have extended IRT to take instructional effects into account.

Finally, basic IRT does not account for multiple concepts, nor their interrelationships in the knowledge graph. This will be the main focus of the rest of this post.

Proficiency propagation

The titular question of this post, "What does knowing something tell us about knowing a related concept?", is answered through Proficiency Propagation. This refers to how proficiency flows (propagates) to different concepts in the knowledge graph.

To motivate why proficiency propagation is important, let’s consider two different scenarios.

Propagating mastery

First, consider the example shown below, where the only activity we’ve observed from Alice is that she performed well (a ✔ indicates a correct response) on several more advanced concepts.

We can't know everything Alice has ever done in this course; she may have done a lot of work offline and answered tons of "Add whole numbers" questions correctly. Since we don't have access to this information, we have to make our best inference. Note that all three concepts Alice excelled at rely on "Add whole numbers" as a prerequisite. Let's revisit the definition of the prerequisite relationship. We say "A is prerequisite to B" (A → B) if A must be mastered in order to understand B. In other words:

Concept B is mastered ⇒ Concept A is mastered

In our case, there are three different “concept B’s” that Alice has clearly mastered. Thus, by definition of the prerequisite relationship Alice almost certainly has mastered “Add whole numbers” (it’s the concept A). So let’s paint that green, indicating likely mastery.

By similar reasoning, if Alice has mastered “Add whole numbers”, then she has likely mastered its prerequisite “Understand the definition of whole numbers and their ordering”. However, we might be slightly less certain about this inference, since it is more indirect and relies on a chain of reasoning. So let’s paint that slightly less bright green:

What about the remaining two concepts? First consider "Multiply whole numbers". Alice has mastered its prerequisite, which is promising. But she may have never received any instruction on multiplication, and may have never even heard of such a thing! On the other hand, she may be a prolific multiplier, having done lots of work on it in an offline setting. In this case, we don't have the definition of "prerequisite" working in our favor to give us a clean inference. But certainly, if we had to guess, we'd say Alice is more likely to have mastered "Multiply whole numbers" than someone we have no information on. Thus, we give Alice a small benefit-of-the-doubt proficiency increase over the baseline. Similar considerations apply to the last, most advanced concept:


Let's summarize the lessons we've learned: mastery propagates strongly to prerequisites (this follows directly from the definition of the prerequisite relationship); the inference weakens as the chain of prerequisites grows longer; and mastery propagates only weakly to postrequisites, as a small benefit of the doubt.

Propagating struggling

Now let’s consider Bob, who has struggled on “Add whole numbers”, getting 3 incorrect:

Recall our deconstruction of the prerequisite relationship A → B:

Concept B is mastered ⇒ Concept A is mastered

Unfortunately, this doesn’t directly help us here, because Bob hasn’t mastered any concepts as far as we know. However, the contrapositive is exactly what we need:

Concept A is not mastered ⇒ Concept B is not mastered

Let’s take “struggling on” to be equivalent to “not mastered” for our purposes to get:

Struggling on Concept A ⇒ Struggling on Concept B

Thus, we now know that struggling-ness propagates strongly down to the postrequisites of “Add whole numbers”!

What about “Understand the definition of whole numbers and their ordering”? Similarly to the flipped situation of propagating mastery to postrequisites, we cannot make any strong pedagogical inferences just from the prerequisite relationship. However, we can still assert that it is more likely that Bob is struggling on it given we’ve seen him struggle on “Add whole numbers” than if we hadn’t seen him struggle on that concept:

Let's summarize what we've learned about propagation of struggling-ness: struggling propagates strongly to postrequisites (via the contrapositive of the prerequisite relationship), and only weakly to prerequisites, as a modest increase in how likely we think it is that the student is struggling there as well.

Notice these rules are just the mirror-opposites of the ones for propagating mastery! And all of this comes simply from the definition of “prerequisite-ness”, and some pedagogical reasoning.
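A toy sketch can capture these mirror-image rules: positive (mastery) evidence flows strongly toward prerequisites and weakly toward postrequisites, negative (struggling) evidence does the reverse, and the signal fades with distance. The decay factors and scoring scheme below are illustrative assumptions, not Knewton's actual propagation model.

```python
# Toy asymmetric propagation over the prerequisite graph. `prereqs[c]` holds c's
# prerequisites, `postreqs[c]` its postrequisites. Evidence values are signed:
# positive for observed mastery, negative for observed struggling.

def propagate(evidence: dict[str, float],
              prereqs: dict[str, set[str]],
              postreqs: dict[str, set[str]],
              strong: float = 0.8,
              weak: float = 0.2,
              depth: int = 3) -> dict[str, float]:
    scores = dict(evidence)

    def spread(node: str, value: float, neighbors: dict[str, set[str]],
               factor: float, steps: int) -> None:
        if steps == 0:
            return
        for nxt in neighbors.get(node, ()):
            passed = value * factor          # signal fades with each hop
            if abs(passed) > abs(scores.get(nxt, 0.0)):
                scores[nxt] = passed         # keep the strongest signal per concept
            spread(nxt, passed, neighbors, factor, steps - 1)

    for concept, value in evidence.items():
        if value > 0:    # mastery: strong toward prerequisites, weak toward postrequisites
            spread(concept, value, prereqs, strong, depth)
            spread(concept, value, postreqs, weak, depth)
        else:            # struggling: the mirror opposite
            spread(concept, value, postreqs, strong, depth)
            spread(concept, value, prereqs, weak, depth)
    return scores
```

Feeding in Alice's three mastered concepts as positive evidence, for instance, gives "Add whole numbers" a strong positive score and its own prerequisite a somewhat smaller one, while the postrequisites get only a slight benefit of the doubt, mirroring the fading shades of green and the small baseline bumps in the example above.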

Details details

While we now have a nice picture of how we want proficiency propagation to behave, that doesn't count for much unless we can rigorously define a mathematical model capturing this behavior and code up an algorithm to efficiently compute proficiencies in real time for all possible cases. As they say, the devil is in the details. To give a flavor of what's involved, here are some of the technical details our mathematical model and algorithm must obey:

Coming up with a well-defined mathematical model encoding asymmetric strong propagation is a challenging and fun problem. Come work at Knewton if you want to learn more details!

Putting it all together

So what good exactly does having this fancy proficiency model do us? At the end of the day, students care about being served a good educational experience (and ultimately, progressing forward through their schooling), and in Knewton-land that inevitably means getting served good recommendations. Certainly, having a pedagogically-sound and accurate proficiency model does not automatically lead to good recommendations. But having a bad proficiency model almost certainly will lead to bad recommendations. A good proficiency model is necessary, but not sufficient for good recommendations.

Our recommendations rely on models built on top of the Proficiency Model, and answer questions such as:

All of these questions can only be answered when equipped with an accurate understanding of the student’s knowledge state. As an example, consider Alice again. If we had a bare-bones proficiency model that did not propagate her mastery to “Add whole numbers”, we might consider this a valid concept to recommend material from. This could lead to a frustrating experience, and the feeling that Knewton was broken: “Why am I being recommended this basic stuff that I clearly already know?!”

At the end of the day, it’s user experience stories like this that motivate much of the complex data analysis and mathematical modeling we do at Knewton. And it’s what motivates us to keep pushing the limit on how we can best improve student learning outcomes.

*There are other temporal effects that kick in if you've seen the same question more than once recently.

** There is a whole other layer of complexity in our Proficiency Model that we’ve glossed over. We actually estimate a student’s proficiency and a measure of our confidence in that estimate. These are the proficiency mean and variance, and can be combined to obtain confidence intervals, for example. For the purposes of this blog post, we are only considering the propagation of proficiency means.

This post was written by Michael Binger, a data scientist at Knewton.

Introducing alta!

Today, we’re excited to launch alta, Knewton’s fully adaptive courseware for higher education.

You can explore this site (or read our press release) for more details, but here are a few things I’m especially excited to call out:

Our CEO, Brian Kibby, sees alta as part of a movement for better results and lower costs for college students.

“Students and instructors have been taken for granted by textbook publishers for too long. They deserve a better experience at a more affordable price,” said Knewton CEO Brian Kibby. “We designed every aspect of alta to empower instructors to put achievement within reach for their students, from its affordability and accessibility to its ability to help all learners achieve mastery.”

We look forward to bringing you more updates about alta in the weeks ahead!