
Knewton Raises $51 Million

Knewton has raised $51 million to accelerate growth in the coming year.

Raising another round of funding wasn’t an entirely obvious decision for us. We had a couple of profitable months at the end of 2013, and expected to be profitable all of 2014. But Knewton has lofty goals. We want to bring predictive analytics and personalized recommendations to publishers, schools, and, ultimately, students around the globe. We want to make it possible for any content creator to create high-quality adaptive experiences. We want to launch a public portal so that anyone anywhere can use Knewton to learn anything.

Every entrepreneur longs for the day when the company becomes profitable. Profitability means freedom and security, and maybe even an end to sleepless nights. But, for Knewton, this happens to be the exact moment we want to step hard on the gas. We want to invest in our platform and increase headcount dramatically in the next two years.

When we decided to raise this round, we called a few investors who had previously expressed interest in Knewton. We decided to go with Atomico, an international investment firm led by Skype co-founder Niklas Zennström. There were several reasons we chose Atomico to lead the round:

  • Strong personal chemistry between our team and theirs
  • Strong global presence in Europe, Asia, and Latin America — we are doing a lot of things in Europe right now
  • They have local staff, whose expertise and connections we can draw on, in each of the international regions we are focused on
  • They are founders who have experience scaling tech companies from inception to global platform

We also decided to make some room for a fund that focuses on edtech – Michael Moe’s GSV Capital. I’ve known Mike for years and felt he would add value during this rapid growth phase of ours. Atomico and GSV were joined by our existing investors – Accel Partners, Bessemer Venture Partners, First Round Capital, FirstMark Capital, and Founders Fund — along with debt financing by Silicon Valley Bank.

Knewton will use this capital to invest significantly in product enhancements and international growth. While this new funding wasn’t strictly necessary, the idea of growing aggressively now was too appealing to pass up.

For more on the announcement, check out this blog post from Atomico. 

Heavy Duty Infrastructure for the Adaptive World

The adaptive learning landscape has changed dramatically since Knewton was founded nearly six years ago. Back then, my pedantic lecturing about adaptive learning was met mostly with blank stares. The term itself was virtually unknown in the market.

In April, I wrote a post predicting that in the next few years all learning materials will become digital and adaptive. Knewton is premised on this revolution. We envision a world of adaptive learning apps, a world where every app maker is by definition an adaptive app maker.

But there’s a potential obstacle to such a world. While it’s relatively straightforward to make simple differentiated learning apps, it’s extremely difficult and expensive to build proficiency-based adaptive learning. There are many apps on the market today that offer rich learning experiences, with wonderful instructional design, content, and pedagogy. But without proficiency-based adaptivity, these apps are severely limited.

The difference is data. Specifically, each student’s concept-level proficiency data.

By that I mean something quite specific, which goes well beyond “observable data” like test scores or time taken. Capturing a student’s performance on a test or assignment does not take into account the difficulty of the material, the concepts it relates to, or a student’s prior experience with similar content. A true model of proficiency can estimate what students know, how prepared they are for further instruction or assessment, and how their abilities evolve over time.

Concept-level proficiency data is not what a student did, but what we are confident that they know, at a granular level. Extracting what students know from what students do is extremely difficult and absolutely critical.
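
To make this concrete, here is a minimal sketch of the general idea using a textbook Rasch (1PL) item response model. Our production models are far more sophisticated, so treat this purely as an illustration: given items whose difficulties have already been normed, a student’s proficiency can be estimated by maximum likelihood from the pattern of right and wrong answers.

```python
import math

def rasch_p_correct(theta, difficulty):
    """Probability that a student with proficiency `theta` answers an item
    of the given (already normed) difficulty correctly, under a Rasch model."""
    return 1.0 / (1.0 + math.exp(-(theta - difficulty)))

def estimate_proficiency(responses, lr=0.1, steps=500):
    """Maximum-likelihood proficiency estimate via gradient ascent.
    `responses` is a list of (item_difficulty, answered_correctly) pairs."""
    theta = 0.0
    for _ in range(steps):
        # Gradient of the Bernoulli log-likelihood with respect to theta.
        grad = sum(correct - rasch_p_correct(theta, difficulty)
                   for difficulty, correct in responses)
        theta += lr * grad
    return theta

# Two students with the same raw score (2 of 3 correct) receive different
# proficiency estimates once item difficulty is taken into account.
print(estimate_proficiency([(-1.0, 1), (0.0, 1), (2.0, 0)]))   # harder items
print(estimate_proficiency([(-2.0, 1), (-1.0, 1), (0.0, 0)]))  # easier items
```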

To get this kind of proficiency data, it is essential to have large pools of “normed” content. To get normed content, you need infrastructure to passively, algorithmically, and inexpensively norm content at scale. Then you need infrastructure to make sense of and take action on the resulting data. There are no shortcuts.

Without each of those infrastructures, you have, at best, good guesses.[1] For instance, some apps have a pre-determined decision tree, with a simple hurdle rate made up by some content editor, that says something like: “Students who get 8 out of 10 questions right on this algebra quiz can move on; otherwise give them more algebra questions.” There are a number of problems with simple rules-based systems like this: they can’t control for differences in English language skill; they’re fundamentally arbitrary; and they’re often used for endless drilling rather than learning. But the biggest problem is that no infrastructure is involved that can produce any actual student proficiency data. It’s all unnormed practice questions and guesswork. That content editor might make some pretty good guesses, some not so good, but either way the error rate of those guesses compounds exponentially[2] from one to the next.
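
To see why, here is roughly what such a hardcoded rule amounts to in code. This is a hypothetical example, not drawn from any particular product; note that nothing in it measures item difficulty, controls for guessing, or estimates what the student actually knows.

```python
def next_step(num_correct, total=10, hurdle=8):
    """A typical hardcoded 'adaptivity' rule: an arbitrary hurdle rate.
    It emits a routing decision but no estimate of student proficiency,
    so errors in the editor's guess propagate to every downstream concept."""
    return "move_on" if num_correct >= hurdle else "more_algebra_questions"
```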

This isn’t to say that there aren’t terrific apps like this out there today. I just wouldn’t call such apps “adaptive learning.”

To me, the term “adaptive learning” can only mean learning that’s based on rigorous estimates of each student’s proficiency on each concept. After all, “adaptive learning” grew out of, and is a play on, “adaptive testing,” which is based on the coarser, but similar, notion of construct-level proficiency data. Instead, I would call proficiency-less apps “differentiated learning” — an instructional designer somewhere has made some (hopefully) intelligent guesses that will differentiate each student’s path based on observable data.

Note that you don’t necessarily need the Knewton platform to make a true adaptive learning app. Before we built our platform, Knewton produced an adaptive learning GMAT prep application. We wrote the questions ourselves and normed them by paying randomly recruited students (via Amazon’s “Mechanical Turk” and Craigslist) to answer each question, then cleaning the resulting data with successive rounds of testing, analysis, and evaluation.
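
For illustration only, a deliberately naive first pass at norming might look like the sketch below, which converts each item’s smoothed proportion-correct into a difficulty on a logit scale. Real calibration iterates jointly with ability estimates over successive rounds of cleaning, as described above; this is not our actual method.

```python
import math

def naive_difficulty(item_responses, eps=0.5):
    """Crude first-pass item difficulty from raw norming data: the negative
    logit of the (smoothed) proportion of recruited test-takers who answered
    the item correctly. Harder items get fewer correct answers, hence a
    higher difficulty on the logit scale."""
    n = len(item_responses)
    p = (sum(item_responses) + eps) / (n + 2 * eps)  # smoothed proportion correct
    return -math.log(p / (1 - p))

# An item that most of the norming pool answered correctly is "easy"
# (negative difficulty on the logit scale).
print(naive_difficulty([1, 1, 1, 1, 0]))
```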

We know firsthand that producing a self-contained adaptive learning app is painful, expensive, functionally constrained, and unscalable. But building a platform to scalably norm assessment items, extract proficiency data, confidently infer cascades of additional data, and optimize learning based on those data is vastly more complex and expensive. Such a platform has hundreds of critical components, all of which have to be built just right and must interact with each other in exactly the right way.

Once you have normed items, you need infrastructures that can use those items to generate insights about students and content. And then you need infrastructures to turn that insight into great product features to help students, teachers, and parents — features like recommendations, predictive analytics, and unified learning histories across apps. Building such a platform requires that you know exactly what you’re building before you even start, and have the world’s top data scientists and software developers to build it. It simply makes no sense for any one company to build all of that just to power its own apps.

I created Knewton to solve this conundrum. Knewton has built the necessary infrastructures to gather, multiply, process, and act on student proficiency data. Anyone who wants to build true adaptive learning apps, without doing all the painful and expensive work on a one-off basis, can plug into our network and build on top of our infrastructures.

Subtle Art vs. Heavy Machinery

Each third-party app we power brings its own core competencies in the subtle arts of content creation, pedagogy, and user experience, while outsourcing the heavy machinery of its personalization infrastructure to Knewton. Today, the Knewton platform comprises three main parts:

Data Collection Infrastructure: Collects and processes huge amounts of proficiency data.

  • Adaptive ontology: Maps the relationships between individual concepts, then integrates desired taxonomies, objectives, and student interactions.
  • Model computation engine: Processes data from real-time streams and parallel distributed cluster computations for later use.

Inference Infrastructure: Further expands the data set and generates insights from the collected data.

  • Psychometrics engine: Evaluates student proficiencies, content parameters, efficacy, and more. Exponentially increases each student’s data set through inference.
  • Learning strategy engine: Evaluates students’ sensitivities to changes in teaching, assessment, pacing, and more.
  • Feedback engine: Unifies this data and feeds results back into the adaptive ontology.

Personalization Infrastructure: Takes the combined data power of the entire network to find the optimal strategy for each student for every concept she learns.

  • Recommendations engine: Provides ranked suggestions of what a student should do next, balancing goals, student strengths and weaknesses, engagement, and other factors.
  • Predictive analytics engine: Predicts student metrics such as the rate and likelihood of achieving instructor-created goals (e.g., how likely is a student to pass an upcoming test with at least a 70%?), expected score, proficiency on concepts and taxonomies (e.g., state standards), and more. (A rough sketch of one such prediction follows this list.)
  • Unified learning history: A private account that enables students to connect learning experiences across disparate learning apps, subject areas, and time gaps to allow for a “hot start” in any subsequent Knewton-powered app.
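
As the rough sketch promised above, here is one way a pass-probability prediction could be computed, given per-item correctness probabilities already derived upstream from concept-level proficiency estimates. This is a hypothetical Monte Carlo illustration, not our production model.

```python
import random

def prob_pass(p_correct_per_item, threshold=0.7, trials=20000):
    """Monte Carlo estimate of the chance a student scores at or above
    `threshold` on an upcoming test. Each entry in `p_correct_per_item`
    is the student's estimated probability of answering that item correctly."""
    n = len(p_correct_per_item)
    passes = 0
    for _ in range(trials):
        score = sum(1 for p in p_correct_per_item if random.random() < p)
        if score / n >= threshold:
            passes += 1
    return passes / trials

# A ten-question quiz: the student is strong on most concepts, weak on two.
print(prob_pass([0.9, 0.85, 0.9, 0.8, 0.95, 0.9, 0.85, 0.5, 0.4, 0.9]))
```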

Schools used to do everything themselves: teachers, materials, cafeterias (if any), technology (if any), etc. Similarly, factories used to do everything on site. Eventually people realized that an ecosystem of just-in-time parts providers resulted in far better quality and lower cost manufacturing. Schools today are increasingly moving in that direction, with outsourced content, content management, food services, academic services, etc. This way schools can do what they do best — teach, administer, offer next-step guidance, and foster community.

In a much smaller way, Knewton is trying to contribute to that ecosystem. We don’t do the sexy stuff — content, instructional design, pedagogy, etc. — but we do help the creative geniuses who excel at those arts to bring the most powerful vision of their products to life.


  [1] Strictly speaking, even normed content yields only estimates of proficiency — they’re just estimates that one can have a lot of confidence in, and that get better with more data.

  [2] Because every cluster of concepts has other clusters dependent upon it, suboptimizing the learning around any one cluster suboptimizes that student’s learning around every dependent cluster.

A Direction for Online Courses

According to The New York Times, 2012 was the “Year of the MOOC” (“massive open online class”), but it could have more accurately been called “the year elite colleges embraced online courses.” While MOOCs are important, they are merely a subset of a much larger phenomenon. The continued growth and acceptance of online courses — whether in primary, secondary, or higher education; credit bearing or not; private or open; paid or free; delivered by for-profit or not-for-profit institutions — is a seismic shift in the education universe.

The MOOC Ecosystem

Today, you can find MOOCs in nearly every higher education field of study. Some facilitators, like Coursera and Udacity, are for-profit and venture-funded — they will one day need to pursue revenue and liquidity opportunities for their investors. Others, like edX and Khan Academy (not technically a MOOC platform), are not-for-profit — meaning they will be generally capital-constrained compared to any for-profit players who figure out a successful revenue model.

To date, MOOCs offer only part of what constitutes a course: lectures. Almost all of the additional learning that students normally are expected to do on their own — by studying a carefully curated textbook and other materials, with professionally created scope and sequence, instructional design, assessment items, and production values — is missing. Other supporting services that schools and universities provide — both academic services like libraries and tutors, as well as non-academic services — are also missing. All of these other essential elements that constitute a course cost money. MOOCs don’t give away these other components, so they don’t give away courses. They give away lectures.

But that’s still pretty awesome. Because they’re massive and online, it is possible for the first time to share the best teachers in the world with anyone. (Strictly speaking, it’s those who think they are the best teachers, vetted by a market-choice dynamic.) This in itself is revolutionary. But it is also incomplete. For all the massive size and bureaucracy of the global education system, students primarily do just two things that drive learning outcomes: 1) attend classes and 2) read or interact with texts and other materials and supporting services. There are those who call MOOCs “massive open online courses.” They are wrong to do so. I am hopeful that they will continue to evolve, but to date, MOOCs are classes or lectures only.

Small wonder then that MOOCs have, at times, struggled with quality and retention. Take San Jose State University’s pioneering partnership with MOOC facilitator Udacity. Six months after it began, San Jose State announced it would “pause” the partnership after trial courses reported failure rates ranging from 56 to 76 percent. Completion rates for MOOCs have been similarly problematic. Half of those who sign up don’t even attend the first class. One study found the average churn rate for MOOCs to be 93%.

This isn’t to say that MOOCs aren’t a great social good. People seem generally to conflate the social value represented by MOOCs with their commercial value. The former, at least, will be tremendous. They are a game-changer where the alternative is no classes at all. Tom Friedman wrote about MOOCs that “nothing has more potential to lift people out of poverty — by providing them an affordable education to get a job or improve in the job they have.” Khadijah Niazi, the 11-year-old Pakistani girl who completed MOOC facilitator Udacity’s physics class with highest distinction, is a particularly heartwarming story. But until these classes evolve into courses and the churn rate goes down, MOOCs’ promise of widespread disruption will go unfulfilled. No one ever disrupted anything with a 93% churn rate.

The Non-MOOC Landscape

The improvements — such as high-quality textbooks, materials, and supporting services — needed to turn MOOCs from lectures into fully developed courses cost money. In response, some MOOC facilitators are beginning to offer non-MOOCs, sometimes called SPOCs — “small, private online courses.” Udacity partnered with Georgia Tech to offer a master’s in computer science priced around $7,000.

The program is neither “massive” nor “open.” It is, however, the future. Within a decade, virtually every large university in the United States, and many elsewhere as well, will offer online courses — for credit and for fee. These courses will be particularly useful to students who don’t already have access to comparable courses.

Some very large and prestigious public institutions such as Arizona State University, Penn State, and University of Maryland have been doing this for years. Even institutions once slow to innovate are now rapidly following suit. According to a study by the Babson Survey Research Group, as of 2012, all but 13.5% of institutions had some online offerings. Furthermore, many schools that once offered only individual online courses now offer complete online degree programs — 62.4% in 2012 compared to 34.5% in 2002. It’s not just public and for-profit universities increasing their presence in the online space. From 2002 to 2012, the share of private nonprofit institutions with online degree programs more than doubled, from 22.1% to 48.4%.

In this new landscape have come new educational opportunities and models. Some small schools, like Southern New Hampshire University, focus on assembling the best educational experience possible while maintaining ties to bricks-and-mortar community and regional businesses. Meanwhile, large flagship institutions like ASU, Penn State, and SUNY offer relatively affordable education in order to serve hundreds of thousands and eventually perhaps millions of students. Even elite institutions, most of which now offer online credits only, will one day begin offering online degrees (especially to overseas students) once it becomes clear that doing so won’t dilute their brand (i.e., because enough of their competitors are already doing so).

This probably represents a deep threat to for-profit universities. It is already possible to enroll in high-quality online degree programs at prestigious state universities with fairly open admissions standards. Given the critical importance of institutional brand when selecting a university program, it is hard to imagine that students who can get into either will choose a lower-brand for-profit over a higher-brand state university. The for-profit universities have just a few years — until there is widespread market awareness that these not-for-profit degree programs exist — to improve and in some cases reinvent their operations.

Business Models

MOOCs have been the poster child as of late for online education. But the real reason people are so excited about MOOCs is not because of what they are in and of themselves, but because of what they represent. Education has always had an access problem. Everyone intuitively understands that creating video versions of the world’s great lecture experiences represents the beginning of a solution to this problem.

For many working adults with families, these courses represent an exciting way to improve one’s professional value and get promoted or find new work. When coupled with innovations in Internet infrastructure and hardware, MOOCs also provide an opportunity for students in developing countries to access educational experiences from top-tier schools for the first time.

But the commercial value of MOOCs has not yet been proven. I don’t believe their value lies in corporate recruiting of top students. That feels too goofy and niche to justify much excitement around their commercial value. But I do believe that MOOCs’ value-add may lie in serving as a lead-generation mechanism for SPOCs provided by institutions. Today, everyone who has left, or is otherwise outside of, the formal K-20 education system is largely invisible to that system. Yet the market of people who would love to take an occasional free course from reputable schools probably numbers in the billions. Even if you only sign up and don’t show up, that expression of interest is still valuable. That school (or will it be the MOOC facilitator?) now knows that you are in the market for, say, some training in statistics. They can start offering you courses with full materials and support services, for credit or for a certificate, and for a fee. MOOCs can render this vast after-market visible, for the first time ever, to the formal K-20 education system.

It will be these high-production value, for-credit online courses that will play the central role in the ongoing educational revolution. It will be the institutions themselves who are the great disruptors. From K-12 through the university level, schools will offer high quality online classes (the market will quickly sort the good from the bad). They will trade credits more easily than they do today, so that students at comparable schools can easily take live online courses from any school. This will allow schools to focus on what they’re best at, and give students subject matter and lifestyle choices they have never had before. It has been predicted that half of higher education will one day be delivered live online, and perhaps a quarter of K-12. If that’s true, it will, at global scale, dramatically improve opportunities for students and create a new trillion dollar industry in the process.

Don’t Defund Humanities: They’re Crucial to the Economy, Too


Everyone is talking about the crisis in STEM education — the shortfall of qualified scientists, technologists, engineers, and mathematicians to fill the jobs of the future. The relative lack of STEM graduates in the U.S. is frequently cited as a threat to the country’s global standing.

But lately there has been a growing outcry from the “other side of campus” — humanities and social science departments wondering where their advocates have gone. Some argue that the STEM crisis is overblown – that in fact there are more STEM graduates than jobs. Others just want their fair share of resources. According to a New Republic article from earlier this month, while the “U.S. government spends more than U.S. $3 billion each year on 209 STEM-related initiatives overseen by 13 federal agencies,” the proposed 2014 budgets of the National Endowment for the Arts and the National Endowment for the Humanities are $145.5 million each.

But a liberal arts education provides excellent training in at least three crucial areas – communications skills, critical thinking skills, and learning about other cultures and ideas. As any nation’s economy increasingly becomes a global knowledge economy, these skills only grow more important. All knowledge economies – the USA among them – want more high-skill workers, of any type.

So then what’s going on here? In an environment that clearly has use for both types of skills, why focus so much more on STEM?

I think the real answer is that communications skills, critical thinking skills, and multicultural proficiency are much less measurable than STEM skills.

Human beings (even very smart ones) tend to be very bad at assessing and considering hidden costs and hidden opportunities, and so over-index towards transparent costs and opportunities. Both on a national basis as well as for individual careers, STEM tends to be associated with much more transparent costs and opportunities than humanities.

But is outcome transparency the metric that we should use to govern choices? If one thing tends to be easy to measure and another hard, does it make sense to choose the easier one simply on that basis?

Last month, I wrote about the coming disruption of higher education. Specifically, I argued that the most important role of the university — making graduates employable — will soon be disrupted as learning outcomes become more transparent (the question is: how much?). I was careful to note that students expect universities to deliver both short-term and long-term employability. STEM skills are great for short-term employability. But we also want universities to prepare graduates for the long term – to teach them how to learn.

According to a study by Cathy Davidson, co-director of the MacArthur Foundation’s annual Digital Media and Learning Competitions, 65% of today’s elementary school students will end up doing jobs that don’t exist yet. In addition to proficiency in science and technology, our workers need skills that can translate from job to job or industry to industry in a dynamic economy – skills like communication, strategy, and interpersonal and multicultural fluency.

STEM skills are immediately and transparently applicable to a number of tasks when one enters the workforce. They provide a great foundation for early-stage career jobs. But no matter how strong one’s technical skills, it is very difficult in most organizations to advance beyond a certain point without strong communication skills, interpersonal skills, and good judgment.

We might test this hypothesis by looking at which background, STEM vs. non-STEM, tends to be better represented in the ranks of corporate upper management. We can’t just look at data on senior executives as a whole, because selection bias exists in the pool of talent available for promotion at most companies. That is, if a company employs more non-STEM workers than STEM workers to begin with, that imbalance will naturally carry into upper management as workers are promoted. However, it would be possible to find a careful sample of companies that have roughly equal numbers of STEM and non-STEM workers, performing tasks of roughly equal importance in aggregate, and where both sides have roughly equal opportunities for advancement. It would be very interesting to design a study around all the companies nationally that fit that profile, and see who gets promoted more to upper management. (It would also be interesting to see whether that changes over time, or from country to country.) My bet is that the non-STEM workers, as a group, tend to be better represented in upper management.

Humanities majors excel even in technology, where there are far more STEM-trained workers than not. Just off the top of my head I can name Reid Hoffman, Peter Thiel, and Chris Dixon – like me, all philosophy majors. At Knewton, we have plenty of programmers and data scientists who graduated with liberal arts degrees. One of our data scientists, John Davies, studied English at Harvard. He finds his education directly relevant to his job: “I wrote my thesis on Paradise Lost. One crucial skill I developed while studying literature, and especially while writing my thesis, was extracting structure from complicated systems. And that’s exactly what I do here – try and find the fundamental structures that explain how education works.” (Plus, he adds, “I’m good at catching typos in other people’s code.”)

As we make choices – personal career choices, or national policy choices, and everything in between – about promoting STEM and defunding non-STEM, are we to some degree choosing early or obvious needs over later or less obvious (but just as important) needs? As a society, we seem to blindly accept that since STEM is good we need more of it. And STEM is good. But are we making the right choices accordingly? There must be tests, like the one I tried to devise above, that could use data to assess the possible consequences of defunding non-STEM. Let’s have that discussion – before we see even more schools cut back further on their humanities studies.

Lorenzo Received by the Liberal Arts Procession – Botticelli, photo from Scott MacLeod Liddle on Flickr

Disrupting Higher Ed: Thoughts from the Knewton Symposium

A few weeks ago, Knewton hosted our first annual Symposium on the future of higher education — a gathering of senior leadership from some of the world’s biggest and most innovative online universities, both public and private. We were joined by Harvard Business School professor Clayton Christensen, the father of “disruptive innovation” theory, who shared his thoughts on the coming disruption of higher education.

What is disruption?

Clayton began by providing a few familiar examples of disruptive innovation, i.e., “a process by which a product or service takes root initially in simple applications at the bottom of the market and then relentlessly moves up-market, eventually displacing established competitors.” Steel mini mills are his classic example. When mini mills first surfaced in the 1960s, they were very efficient but produced steel inferior to that of traditional integrated mills. Only the low-margin rebar market would use the mini mill steel. Instead of building mini mills themselves, the traditional mills simply ceded this sector to focus on higher-margin products.

Soon the mini mills took over the rebar sector and pushed out the traditional mills. But with so much production, the price of rebar dropped. So the mini mills improved their technology enough to produce angle iron and bars and rods, both higher-quality and higher-margin products.

Before long the same thing happened with these products: mini mills took over, integrated mills focused further up-market, and the price of angle iron and bars and rods dropped. This cycle repeated twice more, first with structural steel and then with sheet steel (the last refuge of the traditional mill). The once low-end disruptors had pushed their way to the top and knocked the integrated mills out entirely.

What makes an industry ripe for disruption?

Every industry has a “technological core” that defines its product delivery. In the case of a disruptive innovation like the mini mill, a new technological core emerges. This enables the disruptor to start at the bottom of the market and, as profit and scale allow for further refinements to the technological core, move up-market. Without a disruptive new technological core, industries — even those that seem ripe for it — cannot be disrupted.

Take hotels: in order to move up-market, a hotel chain like Holiday Inn would have to replicate the amenities and services of, say, the Four Seasons. But that would require entirely new facilities, check-in procedures, staff and staff training, etc. Despite the fact that the Holiday Inn and the Four Seasons offer the same type of product — lodging — there is virtually no overlap between their technological cores.

Historically, the same was true for higher ed. For a community college to become the equivalent of an elite four-year institution would require it to replicate the four-year college’s services — from the quality of professors, to range of courses, to availability of campus housing, etc.

But now online course delivery has the potential to become education’s disruptive new technological core. Like all early-stage disruptive innovations, online learning has heretofore focused its attention on the fringes of the market — students who, because of geography or age or financial limitations, wouldn’t otherwise have access to comparable bricks-and-mortar educational opportunities.

The many “jobs” of bundled experiences

To play out how the theory of disruption might apply to the university, Clayton compares it to another large, multifaceted business undergoing disruption: the newspaper. For instance, the New York Times has many roles, or “jobs” in Clayton’s lexicon. It entertains. It delivers current events knowledge. It delivers business knowledge. It kills time in waiting rooms. It allows people to buy and sell things. It’s a place to find employment.

Each of these jobs has been disrupted by new players that focus on just one of these things. A TV or iPad is better entertainment, Bloomberg has better business news, Craigslist is where you sell things, job websites are where you find employment, etc.

Just like the New York Times, the university serves many purposes. Its primary “jobs” are facilitating learning, increasing employability, and providing a coming of age experience. Secondary “jobs” include research, networking, extracurricular activities, sports and entertainment, study abroad, and job placement.

Some of the secondary “jobs” of the university have already begun to be disrupted, with the advent of alternate study abroad programs, extracurricular activities, networking channels, career assistance, etc. But the most important role of the university — the all-important thing that drives employer and student selection patterns — is employability, both immediate prospects upon graduation and long-term employability (given that careers and industries evolve). The coming of age experience is crucial, but comparable from campus to campus. Students seek out elite schools as a stepping-stone to elite careers.

Could this, the most important “job” of the university, ever be disrupted? Clayton’s theory holds that whoever does the job best will, over time, build the brand and become synonymous with the job. For example, if I tell you that you have 72 hours to completely furnish and move into a new apartment, what is the first thing that comes to mind? One company has done that job so well that all anyone thinks of in this situation is Ikea.

But Clayton doesn’t get specific with regard to elite university brands, and universities are unique in the power of their brands. When I was in business school, I was taught that Coca-Cola was the world’s most powerful brand. But that’s plain wrong: elite universities are the world’s most powerful brands. No one ultimately cares all that much whether their daughter has a Coke or a Pepsi, do they? But they would do nearly anything to send her to an elite university.

In a competition between Clayton Christensen’s theory of disruptive innovation versus the world’s most powerful brands, who will win? A few of us at Knewton have spent the weeks since our Symposium trying to answer this for ourselves. Here is what we came up with…

Goldman Sachs doesn’t care about Harvard

After graduating from Harvard Business School, I went to work at Goldman Sachs. With few exceptions, all 120 or so members of my associate class were fellow graduates of Harvard, Wharton, and a handful of other top-tier schools. Most came from the top five programs as ranked by US News and Businessweek. Programs five through 10 each had one or two graduates represented. Programs 10 through 20 had no more than one each. There was no one from outside of the top 20. Then, as now, Goldman (and employers like it) used elite-brand MBAs as a proxy for ability.

What if employers instead had access to far more accurate, real-time, and comprehensive data sets around learning outcomes upon which to base hiring decisions? A revolution in educational data mining is underway that will make this a reality. Within a decade, prospective employees will be able to show down to the atomic concept what they know, how quickly they learned it, and how well they retained it.

Goldman Sachs would know whether an HBS recruit was in fact the best, say, at finance concepts or derivatives trading concepts. Goldman would inevitably start seeing recruits from outside of the top five ranked programs who know way more than I did about every type of finance concept or derivatives concept, and who learned them faster and retained them longer. What are the odds that the graduates who are the top in the nation in mastery of those concepts all went to Harvard or Wharton? Educational data mining will soon be able to prove to employers in many fields if they are overpaying for brand or, even worse, hiring the wrong people.

Goldman Sachs doesn’t intrinsically care about Harvard. They care about finding the best person for the job. Elite brand degrees have just traditionally been the best proxy metrics for that, because precise metrics weren’t heretofore available.

What can be measured about individual students can also be measured about schools and departments. Employers (or graduate school admissions offices) will know which school consistently produces the best chemists or derivatives traders or engineers. How likely is it that the Harvard experience is actually number one at teaching everything? Like the New York Times, it’s really good in many fields, but is it actually the best in any? If, as an employer, I know that I can get better computer scientists at School X than at Harvard, and better biological engineers at School Y, I’m going to tilt my recruiting efforts to those schools.

As for students, they’re certain to follow the jobs. They don’t intrinsically care about Harvard either.

Transparency is the bane of brands

For this scenario to materialize, two things must happen. First, employers need to have access to these very deep data sets — and they need to trust them. This seems nearly inevitable.

Second, in order to change hiring behavior, each data set must correlate strongly and obviously to the specific skills of a given job. That is, if I’m a recruiter and I look at two academic profiles — one from a student at an elite brand school, and a stronger profile from a student at a non-elite brand — I will be convinced by the weight of evidence (more concepts learned, faster, at a deeper level of proficiency, retained longer, synthesized better) to hire the applicant from the non-elite brand. This will begin in industries in which job performance is easier to measure, and where even minor differences matter to employers, such as computer science, accounting, finance, math, engineering, science, and medicine.

It’s already starting to happen at Knewton (and across the computer science sector). Several of our tech leads never graduated from college; we hired them because they were able to prove via alternative credentialing that they could do the necessary work exceptionally well. SiteAdvisor and Hunch co-founder Tom Pinckney didn’t graduate from high school; instead, he sent the MIT admissions office lines of code he’d written (and he got in).

The higher the stakes of a given job, the faster employers will catch on. Transparent and high-stakes fields like medicine, where lives are on the line, will be early. If the head of a medical practice can more accurately identify the most qualified recent med school grads, it is a moral imperative (and a business one as well, given malpractice costs) that she hire that person, regardless of where he went to school.

Lower-stakes but still quantifiable jobs, especially those including a business development component, will take a little longer. A Big Four accounting firm will presumably just hire the best candidates it can, regardless of brand. But the owner of a small accounting firm in Des Moines might hire a Harvard grad for brand prestige even if he knows that a U of Iowa grad is technically a bit more skilled. Until the brand erodes beyond a certain point, some employers might rely on the Harvard brand in and of itself to generate some additional business.

The power of university alumni networks might also delay the disruption: graduates may favor slightly less qualified alums of their own schools over candidates from other schools. But as data-driven brand erosion continues within a career field, these biases will break down too. The brand of their alma mater is important to people, but they won’t prop up the brand all by themselves if they think another candidate is stronger.

What about “fuzzier” professions? It is much harder to correlate an English major’s academic performance with the skill set of, say, a copywriter or marketing coordinator. Is this where the disruption ends? As the more quantifiable career sectors start peeling away, will elite brands recalibrate to focus on students seeking these more creative pursuits? I don’t think so. Once we’ve reached this tipping point, there’s no going back. When it becomes conventional wisdom that Harvard, while very good at a lot of things, is not actually the best at preparing students for anything in particular, the brand as a whole will erode even amongst employers in difficult-to-measure fields. The elites ceded the most creative fields — film, design, fashion, dance — to specialist schools long ago anyway. With the exception perhaps of law, the elite schools have focused on exactly those careers — medicine, finance, business, engineering, etc. — where they are the most likely to be disrupted first.

Sometimes the world really is coming to an end

It’s impossible to know the extent to which university brand power will erode. We know that accurate, real-time, and comprehensive data sets around learning outcomes are imminent and inevitable. But we don’t know how strongly and obviously these data sets will correlate to the specific skills of most entry level jobs in the minds of employers. The greater the overall correlation, the more quickly the power of the university brand will decline.

So if you’re the president of Harvard, what do you do? Focus on teaching the most measurable skills as well as possible? Or the softest skills? Disrupt yourself now? Or hope that others don’t disrupt you later?

Here is the advice I would offer to any university president whose institution has a strong brand.

1. Acknowledge and admit that the very long run of universities being immune to commoditization is coming to an end.

The strength of the elite higher ed brands is primarily due to the following three things (strength of network is a secondary thing that will only delay the inevitable if the fundamentals break):

    • The historical, totally arbitrary capacity constraints of delivering higher ed at scale
    • The extreme lack of transparency of the quality of the product
    • The extremely high-stakes nature of the outcome

Two of those things are ending. The third — the extremely high stakes nature of the outcome — is just gasoline that drives up the strength of the elite brand when outcomes aren’t measurable but will just as certainly drive it down when the outcomes are. Commoditization is coming. Brand will decline. Now it’s just a question of how much.

2. Start measuring.

The age of learning outcome transparency is nearly upon us. You can’t improve your processes around driving higher outcomes if you don’t measure those outcomes. Get ahead of the curve and get control of your data.

3. Proactively defend your turf on the most measurable fields like finance, pre-med, and computer programming.

Yes, this is just delaying the inevitable. But since it so happens that the most measurable fields also happen to be many of the most remunerative, you must delay this commoditization for as long as possible and maintain relevance after it has happened. Do so by delivering the best learning product you can. Stop muddling teaching and research. Great researchers should pay for themselves via research only — not by doing a crappy job of teaching to an audience that won’t be captive for much longer. Get the best teachers you can to teach those subjects. Stop with the ancillary fluff. Focus relentlessly on learning outcomes. If something doesn’t improve your students’ learning outcomes, stop doing it.

4. Focus on what you’re best at.

Things you’re not good at, and will never be good at, are ultimately just going to hurt your brand and divert resources and attention from where they’re needed most.

5. Get online, now.

Start offering degrees or, at the very least, courses for-credit (and for a fee). Even Harvard could admit five totally different freshman classes every year from the United States alone without compromising its standards. Offering credits, and even online degrees, to the most talented kids in India, China, Korea, Russia, and the Middle East — many of whom can pay full-fare — will not sully the brand of your degree and will be strategically imperative. Get online now while your brand is still strong and parlay it into global online education dominance.

6. Consider setting up a self-governing offshoot.

In their extensive research, Clayton and his team have found that the only entities to survive disruptive innovation were those who set up fully autonomous divisions that were free to disrupt the rest of the organization (e.g., IBM). Imagine an “XYZ Online University, powered by Harvard,” capable of creating and delivering on a new business model while taking advantage of the Harvard brand — but still independent and free to build on top of a new technological core.

Yes, it will take unbelievable political capital to pull off these kinds of reforms. It will take tremendous courage. Many will oppose you. Their arguments will seem reasonable. They will argue that key constituencies will revolt. (This is true, but it can be managed, and it must be managed, because the alternative is much worse.) They will call for gradualism — which appears safer and more doable now, but will cost your institution strength and position later. And they will resort to ridicule; they will say these ideas are unproven. (Ask the rest of the internet if its disruptive power over distribution and data mining is unproven.) They will say that you are Chicken Little, that the brand is so strong that it can’t erode. (This is just perceptual bias talking, as that brand strength is all they have ever known.)

Do it anyway. The survival of your institution is at stake.

 

(Many thanks to all the very busy leaders and executives who attended the Knewton Higher Ed Symposium and to Clayton Christensen for spending so much time with us there. Thanks also to Clay’s Disrupting Class co-author Michael Horn, of the Clayton Christensen Institute, for listening to our ideas and suggesting numerous improvements and areas for further thought.)

Big Data in Education: The 5 Types That Matter

Big data in education is a hot topic, and getting hotter. Proponents tout its potential for reform. Detractors raise privacy concerns. Skeptics don’t see the point of it all.

Few people seem to have a clear understanding of what big data in education means, its scope, what will inevitably result, or even the differences between fundamental types of data. The responsibility for clarifying and communicating this understanding starts with the organizations building data platforms or applications.

Take a recent example. The Gates-funded initiative inBloom recently received scathing critiques alleging that it would share confidential information without parental permission, along with other security concerns. InBloom’s mistake, in my opinion, was that it held personally identifiable information (PII) but didn’t communicate a transparent payoff to users. For an education company to get big data right, it needs to be on the opposite side of both of those issues: avoid holding unnecessary PII and communicate clearly how its service makes transparent good use of users’ data.

(For the record: Knewton doesn’t hold any PII unless a user is able to consent and wants us to use the information for a specific reason: to create a private learning profile that can be carried by that user from app to app.)

Education has always had the capacity to produce a tremendous amount of data, more than maybe any other industry. First, academic study requires many hours of schoolwork and homework, 5+ days per week, for years. These extended interactions with materials produce a huge quantity of information. Second, education content is tailor-made for big data, generating cascade effects of insights thanks to the high correlation between concepts.

Only recently have advances in technology and data science made it possible to unlock these vast data sets. The benefits range from more effective self-paced learning to tools that enable instructors to pinpoint interventions, create productive peer groups, and free up class time for creativity and problem solving.

At Knewton, we divide educational data into five types: one pertaining to student identity and onboarding, and four student activity-based data sets that have the potential to improve learning outcomes. They’re listed below in order of how difficult they are to attain:

1) Identity Data: Who are you? Are you allowed to use this application? What admin rights do you have? What district are you in? How about demographic info?

2) User Interaction Data: User interaction data includes engagement metrics, click rate, page views, bounce rate, etc. These metrics have long been the cornerstone of internet optimization for consumer web companies, which use them to improve user experience and retention.

This is the easiest to collect of the data sets that affect student outcomes. Everyone who creates an online app can and should get this for themselves.

3) Inferred Content Data: How well does a piece of content “perform” across a group, or for any one subgroup, of students? What measurable student proficiency gains result when a certain type of student interacts with a certain piece of content? How well does a question actually assess what it intends to?

Efficacy data on instructional materials isn’t easy to generate — it requires algorithmically normed assessment items. However, it’s now possible for even small companies to “norm” small quantities of items. (Years ago, before we developed more sophisticated methods of norming items at scale, Knewton did so using Amazon’s “Mechanical Turk” service.) Then, by splitting up instructional content and measuring (via the normed items) the resulting proficiency gains of the students using each pool, it’s possible to tease out differences in content efficacy.
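
As a toy illustration of that last step (a simplified sketch, not our actual pipeline), comparing the mean proficiency gains of randomly split student groups gives a first read on relative content efficacy.

```python
from statistics import mean

def efficacy_gap(gains_pool_a, gains_pool_b):
    """Compare two pools of instructional content by the mean proficiency
    gain (post minus pre, measured with normed items) of the randomly
    split student groups that used each pool. A real analysis would also
    control for starting proficiency and test statistical significance."""
    return mean(gains_pool_a) - mean(gains_pool_b)

# Proficiency gains (in logits) for students randomly assigned to each pool.
gains_a = [0.6, 0.4, 0.7, 0.5, 0.8]
gains_b = [0.3, 0.2, 0.5, 0.4, 0.3]
print(efficacy_gap(gains_a, gains_b))  # positive means pool A looks more effective
```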

4) System-Wide Data: Rosters, grades, disciplinary records, and attendance information are all examples of system-wide data. Assuming you have permission (e.g. you’re a teacher or principal), this information is easy to acquire locally for a class or school. But it isn’t very helpful at small scale because there is so little of it on a per-student basis.

At very large scale it becomes more useful, and inferences that may help inform system-wide recommendations can be teased out. But even a lot of these inferences are tautological (e.g. “if we improve system-wide student attendance rates we boost learning outcomes”); unreliable (because they hopelessly muddle correlation and causation); or unactionable (because they point to known, societal problems that no one knows how to solve). So these data sets — which are extremely wide but also extremely shallow on a per-student basis — should only be used with many grains of salt.

5) Inferred Student Data: Exactly what concepts does a student know, at exactly what percentile of proficiency? Was an incorrect answer due to a lack of proficiency, or forgetfulness, or distraction, or a poorly worded question, or something else altogether? What is the probability that a student will pass next week’s quiz, and what can she do right this moment to increase it?

Inferred student data are the most difficult type of data to generate — and the kind Knewton is focused on producing at scale. Doing so requires low-cost algorithmic assessment norming at scale. Without normed items, you don’t have inferred student data; you only have crude guesswork at best. You also need sophisticated database architecture and tagging infrastructure, complex taxonomic systems, and groundbreaking machine learning algorithms. To build it, you need teams of teachers, course designers, technologists, and data scientists. Then you need a lot of content and an even bigger number of engaged students and instructors interacting with that content. No one would build this system to get inferred student data for just one application — it would be much too expensive. Knewton can only accomplish it by amortizing, over every app our platform supports, the cost of creating these capabilities. To our knowledge, we’re the only ones out there doing it.
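
One standard technique from the published literature for this kind of inference is Bayesian Knowledge Tracing, which explicitly models guessing and slipping. The sketch below shows a single update step; it illustrates the general idea and is not a description of our production models.

```python
def bkt_update(p_know, correct, guess=0.2, slip=0.1, learn=0.15):
    """One Bayesian Knowledge Tracing step: update the probability that a
    student knows a concept after observing a single answer. The `guess`
    and `slip` parameters let the model separate true proficiency from
    lucky guesses and careless errors."""
    if correct:
        posterior = (p_know * (1 - slip)) / (p_know * (1 - slip) + (1 - p_know) * guess)
    else:
        posterior = (p_know * slip) / (p_know * slip + (1 - p_know) * (1 - guess))
    # Account for the chance the student learned the concept on this step.
    return posterior + (1 - posterior) * learn

p = 0.3                      # prior probability the concept is known
for answer in [1, 1, 0, 1]:  # observed right/wrong answers on one concept
    p = bkt_update(p, answer)
print(p)                     # posterior probability the concept is known
```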

Educators are sometimes skeptical of adaptive apps because almost all of them go straight from gathering user interaction data to making recommendations, using simple rules engines with no inferred content data or inferred student data. (It is precisely because we envisioned a world in which everyone would try to build these apps that we created Knewton — so that app makers could all build them on top of low cost, yet highly accurate inferred content data and inferred student data.)

Big data is going to impact education in a big way. It is inevitable. It has already begun. If you’re part of an education organization, you need to have a vision for how you will take advantage of big data. Wait too long and you’ll wake up to find that your competitors (and the instructors that use them) have left you behind with new capabilities and insights that seem almost magical.

No one will build functionality to acquire all five of the above data sets. Most institutions will build none. Yet every institution must have an answer for all five. The answer will come from assembling an overall platform that uses the best solution for each major data set.

It is incumbent upon the organizations building these solutions to make them as easy to integrate as possible, so that institutions can get the most value from them. Even more importantly, we must all commit to the principle that the data ultimately belong to the students and the schools. We are merely custodians, and we must do our utmost to safeguard them while providing maximum openness for those to whom they belong.

Why Every Education Leader Must be a Tech Visionary

Education, like many industries before it, is now having its internet moment.

There are two great phases unfolding. The first is the shift to digital materials for use either in blended learning courses or as a replacement for the printed textbook. This shift is now well underway in the U.S. Before long, there will be no more printed textbooks.

The second phase is the shift of part of every student’s coursework to purely online formats. This phase is now beginning to seriously pick up steam, as evidenced by increasing numbers of for-credit online courses, MOOCs, and archived video lesson repositories like Khan Academy. And what we’re seeing now is only the beginning.

There are so many implications of all these changes that one can be forgiven for thinking it is hopeless to make sense of them. But the alternative — not worrying about it at all — probably isn’t the right answer either. I try in this newsletter to break down one implication at a time. Today I’d like to discuss how this coming world of digital education is changing the roles of everyone in the education ecosystem — in particular its leaders.

Leadership positions in education, whether at universities or learning companies, have recently undergone a crucial change (though not everyone has realized it yet). Namely, every education leadership position must now include as part of its skill-set the role of “tech visionary.”

By “tech visionary,” I don’t mean that education leaders must dream up their own new tech-enabled products. Far from it. But it is absolutely critical that a leader in education has a strong, informed opinion about where technology will lead the industry in the next few years, and that he or she plans accordingly.

What’s at stake?

A lack of technology vision could result in a series of small, “good enough” decisions that satisfy today’s needs but ultimately lock you into a structurally inferior system or strategy. It could mean that your big product launch falls flat, or that your institution suddenly faces strong new competitive threats that come out of nowhere, with no obvious way to respond. It could mean you find yourself seriously out of position as advancements gain steam. And since sudden technological innovation can lead to runaway marketplace dynamics where the strong get stronger, being out of position could turn into a death spiral.

Education didn’t use to work this way. There may have been unpredictable one-off events, but there were no system-wide surprises. But that isn’t how digital industries — which education is now becoming — work. A technology wave can take years or decades to develop, but when it crests it reshapes everything in its path. It is unstoppable, but with some intelligent foresight it is partially predictable.

Take the current — and still incipient — wave of online courses and big data.

MOOCs are too important to ignore. In addition to their social utility, they add real value to the system by (re)capturing a non-traditional market. The billions of people who have left the formal K-20 education system are largely invisible to that system, but some of those people would love to take online courses from reputable schools. With MOOCs, they can, and schools can then start marketing for-credit or for-certificate courses to this great untapped demographic. But to bet that colleges will put themselves out of business by offering free MOOCs for credit at scale alongside their traditional fare would be naïve.

However, transferable for-credit (and fee-based) online courses will soon be a staple of the average college student’s diet. In classic disruptive innovation style, this may initially gain widespread traction with courses a given student’s school doesn’t offer, and which hence are only available to that student online from other institutions. The implications of this, should it come to pass, are huge: an enormous new market of online courses that brings high-margin revenue and rapid growth to institutions that start offering them early, and declining enrollments to those that do not.

Another big change: as education content migrates from printed textbooks to tablets and smartphones, the efficacy of any particular set of education materials will become accurately measurable for each student. Gone are the days when education courses and products of middling quality could be compensated for with stronger execution in sales and marketing. In an industry as high stakes as education, transparent outcomes will create intense competitive pressure on product quality.

The education ecosystem is just beginning to be transformed by this new wave of digital technologies. Education leadership today tends to be strong in areas like campus management, fundraising, brand management, and textbook sales. These men and women are good at running huge, asset- and human-resources-intensive operations. These are extremely valuable abilities, to be sure, but these leaders must now add technology vision to the mix.

Having a technology vision is tough, but it’s possible. Managing to that vision is even harder. You have to be smart and fearless. It takes years to know with absolute certainty whether a major tech bet — a university’s course delivery ecosystem, a publisher’s platform, a company’s training tools — was the right one. You have to have a strong opinion today about where the world is headed, make your bets accordingly, and live with them past the point at which success or failure is already locked in.

That takes real vision.

The Coming Adaptive World

This is the year of adaptive learning. Everyone is fired up about it, from Arne Duncan and Bill Gates to individual teachers and students the world over. Ironically, as the idea of adaptive learning becomes more popular, confusion about it is increasing exponentially. So please bear with a little introspection; I think now is the time to clarify matters. Let me start with an analogy…

In 2006, my old Harvard Business School classmate Andy Jassy realized that all computing would ultimately move to the cloud. A senior manager at Amazon, he got approval from Jeff Bezos to launch Amazon Web Services, which is today the leader in cloud computing infrastructure. Startups, and even many big companies like Netflix, outsource their hosting infrastructure to AWS.

Knewton’s goal is to be like AWS for education. We’ve created a shared data infrastructure platform that makes it fast and easy for anyone to build extremely powerful adaptive learning applications with Knewton. As our platform gets stronger over time, with more features and more data, every product built using our platform automatically gets stronger too.

Despite our constant protestations to the contrary, observers often confuse Knewton with the many adaptive learning app makers who are now popping up. Or they confuse app makers with platforms. Or they think we’re all competitors.

In fact, it is Knewton’s mission to help all these adaptive learning app makers.

Probably thanks to Google and Facebook, it’s become fashionable in tech circles to describe oneself as a platform, regardless of the word’s actual meaning. To be a platform simply means that one’s technology is not an end-to-end solution but instead powers other applications and businesses. There’s nothing innately glamorous about platforms. Most platforms are largely anonymous and pretty boring.

There will soon be lots of wonderful adaptive learning apps: adaptive quizzing apps, flashcard apps, textbook apps, simulation apps — if you can imagine it, someone will make it. In a few years, every education app will be adaptive. Everyone will be an adaptive learning app maker.

Knewton doesn’t create these apps — we work with partners to help them create their own. We create no content, nor do we claim unique expertise in instructional design, cognitive science, or pedagogy. But we do help make everything from your content to your pedagogy better by optimizing it with deep student proficiency data.

Knewton isn’t even, ultimately, an adaptive learning platform. Adaptive learning is merely one (completely awesome) feature our platform makes possible. Knewton is an infrastructure platform that consolidates data science, statistics, psychometrics, content graphing, and tagging in one place, and allows for the consolidation and pooling of student proficiency data.

This infrastructure unlocks for the first time the vast quantities of data that students have always produced — data that make adaptive apps exponentially more powerful. The infrastructure is also extremely complex and expensive. Sure, it’s straightforward enough to wire up a simple, self-contained adaptive app based on a predetermined, limited decision tree. But how much better would that app be if it contained an effectively unlimited amount of back-end content? If all of its assessment items had been algorithmically “normed” so that they yielded exact concept-level proficiency data for each student? If the app “pre-acted” to the learning modalities of each student? Or if it “started hot,” so that from day one of a new course all of a student’s prior concept proficiencies and learning styles had been preloaded?
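A toy sketch of that difference, purely illustrative and not a description of Knewton’s actual algorithms: a decision-tree app branches on a raw score, while a proficiency-based app maintains a per-concept ability estimate from items whose difficulty has been normed, here using a one-parameter Rasch model.

```python
# Toy contrast (illustrative only, not Knewton's algorithms): a decision-tree
# app branches on raw scores, while a proficiency-based app maintains a
# per-concept ability estimate from items "normed" with known difficulty.

import math

def tree_next_item(raw_score: float) -> str:
    """Decision-tree adaptivity: a fixed branch on an observable score."""
    return "remedial_video" if raw_score < 0.6 else "challenge_quiz"

def p_correct(ability: float, difficulty: float) -> float:
    """Rasch (1-parameter IRT) model: P(correct | ability, item difficulty)."""
    return 1.0 / (1.0 + math.exp(-(ability - difficulty)))

def update_ability(ability: float, difficulty: float,
                   correct: bool, lr: float = 0.4) -> float:
    """One gradient step on the Rasch log-likelihood after a response.

    Unlike a raw score, this accounts for how hard each item was.
    """
    return ability + lr * ((1.0 if correct else 0.0) - p_correct(ability, difficulty))

# A student answers three items of increasing difficulty.
ability = 0.0  # prior estimate for this concept
for difficulty, correct in [(-1.0, True), (0.0, True), (1.5, False)]:
    ability = update_ability(ability, difficulty, correct)

print(f"Estimated proficiency: {ability:.2f}")
# The next item can now be chosen to match the estimate, e.g. the item
# whose difficulty is closest to `ability` (maximally informative).
```

Because the update accounts for item difficulty, two students with the same raw score can end up with very different proficiency estimates — exactly the information a raw-score decision tree throws away.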

Knewton makes possible all these things and more. Today, Knewton functionality includes pinpoint student proficiency measurement, content efficacy measurement (yes, we can tell you how effective your content is), student engagement optimization, atomic-concept adaptive learning, and concept-level analytics. Next year we’re adding “adaptive tutoring,” which combines the wisdom of crowds with Knewton’s network to find the perfect people online right now to give you real-time help.

We also provide scalability, distribution (if you have a great app, we’ll promote it to our partners), and network effects (the combined power of all the data helps each student learn each concept). And we do it without storing any personally identifiable information (“PII”) unless a student wants us to have it.
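On the PII point, here is one common pseudonymization pattern, shown purely as an assumption about how such a pipeline could work rather than as Knewton’s actual design: key every learning event by an opaque identifier derived from the partner’s own student ID, so proficiency data can be pooled without the platform ever holding a name or email.

```python
# One common pattern (an assumption for illustration, not Knewton's design):
# key all learning events by an opaque, partner-scoped pseudonym so the
# platform pools proficiency data without ever storing names or emails.

import hashlib
import hmac

def pseudonymize(partner_secret: bytes, student_id: str) -> str:
    """Derive a stable, opaque learner key from the partner's own student ID.

    HMAC keeps the mapping one-way for anyone without the partner's secret,
    so the platform can link a learner's events without holding PII.
    """
    return hmac.new(partner_secret, student_id.encode("utf-8"),
                    hashlib.sha256).hexdigest()

SECRET = b"partner-held-secret"  # held by the partner app, not the platform

event = {
    "learner": pseudonymize(SECRET, "student-42@example.edu"),
    "item_id": "algebra.linear_equations.q17",
    "correct": True,
}
print(event["learner"][:16], "...")  # opaque key; no name or email stored
```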

All of this is so costly and complicated that no one has ever tried, or would ever try, to build it all just to power one app. It would be as if each automobile manufacturer felt it had to build and maintain all the nation’s highways. Knewton can accomplish it only by amortizing the extraordinary cost of creating these features over every app we support.

Besides, student proficiency data are much stronger together than apart. A closed, isolated app can by definition never have more than a fraction of the proficiency data that Knewton’s open platform has. And even that assumes the app contains “normed” assessment items (norming that Knewton performs for free); otherwise it generates no proficiency data at all.

To make all this stuff work requires hundreds of millions in financial capital and an unreal degree of human capital. Knewton is lucky enough to attract top data scientists, psychometricians, and software engineers from around the world — people who choose Knewton over offers from incredible companies like Google, Palantir, and top Wall Street firms.

It’s been a struggle for us to get where we are today. Until recently, only large learning companies and university systems could use the Knewton platform. But now our enterprise API is flexible enough for a much wider audience. We’re happy to partner with anybody — even so-called “competitors.” We can’t quite say “yes” to everyone who wants to work with us yet, but our capacity is growing by leaps and bounds every day. We just issued our 200 millionth recommendation (suggesting the optimal next bit of real-time content for a student) and will be into the billions by the end of the year.

Ultimately, all learning materials will be digital and they will all be adaptive. Big companies, start-ups, schools, and individual teachers will make them. We hope to enable them, and can’t wait to see the amazing things people create. AWS made life a lot easier for everyone else on the Internet. If Knewton can accomplish something similar for education, we’ll feel like we did a pretty useful thing.

Why Materials Costs Aren’t the Problem in Education

Students spend a lot of their day learning.

They spend six or more hours in bricks-and-mortar classrooms each day, listening to teachers, talking with peers, and working with textbooks/software/technology (collectively, “materials”). Then they spend a few more hours working through materials after school. Some students learn more in the classroom environment; others learn more by using materials to teach themselves.

Despite how huge and complex the education system is, students do primarily just those two things: attend classes and work with materials.

For all kinds of reasons I won’t go into here, it’s very difficult to innovate on the bricks-and-mortar classroom side of the system, especially at scale. But it’s eminently possible to innovate on the materials side. In fact, we’re currently in an innovation boom.

Just as this innovation boom has begun to gather momentum, though, materials creators have come under growing criticism over the price of their products. People have long complained about textbook prices, but over the last few years some have increasingly argued that the high cost of education ought to be addressed by lowering materials costs.

But materials are only around 1 or 2 percent of global education expenditure. Bricks-and-mortar classrooms — and all the costs associated with operating them — make up the rest.

We would all like education to be more affordable, but focusing on reducing materials cost is pointless. It’s far too small an expenditure. In fact, it’s worse than pointless — it’s dangerous. It would deter investment in innovation in exactly that part of the education ecosystem where much of the innovation at scale is occurring.

Why, then, do some people insist on lowering systemic costs by focusing on materials? Perhaps they are unaware of the enormous skew in relative cost between these two sides of the education system. Perhaps they think the cost of bricks-and-mortar classrooms cannot possibly be changed (if so, keep an eye on Khan Academy, MOOCs, and online courses). Perhaps they reflexively dislike materials since their providers tend to be for-profit entities. Or they may do so for their own perceived self-interest, opposing innovation in order to protect the status quo.

On an ROI basis, the materials industry is fantastically productive. It produces or facilitates an extraordinary percentage of learning, yet accounts for only 1 percent or so of the cost. Materials creators operate efficient and relatively low-margin businesses. And with digitally delivered products, price per unit will likely decline (after a period of intense investment over the next few years).

The best way to lower systemic costs in education is to average down the cost of bricks-and-mortar classrooms with digital products and courses — exactly what the materials industry is focused on right now. The world’s major learning companies are now or will soon be making more revenue from technology and services than from printed textbooks. Online courses will, over time, tend to reduce the systemic costs of bricks-and-mortar classrooms. This will be true even in primary and secondary education, which tend to be very cost-efficient. It will be even more so in higher ed, which is not as efficient and, more importantly, is priced on the basis of scarcity rather than operating cost.

If anyone is hoping to see materials diminish as a percentage of total education expenditure, the next couple of decades are likely to be disappointing. Assuming a significant increase in technology-driven education, materials will instead increase their overall share of the global education system while tending to lower the overall cost for each student. For those of us who would like to see dramatic innovation in education, that is a very good thing.

This article originally appeared in the March issue of Knerd Dispatch, the Knewton newsletter.

Is Ed Tech in a Bubble?

Education and edtech are proving big draws at this year’s World Economic Forum Annual Meeting, with nearly a dozen panels and related events. I’ve been lobbying for this kind of commitment to education here since I started coming to Davos three years ago, but this is the first year I’ve seen it happen (there were no panels on education in the last two years).

This new focus is reflective of a much larger trend. In the past few years, the edtech industry has grown incredibly — as has the amount of money flowing into the industry. These investments have prompted continual debate about the “edtech bubble.” Namely, are we in one, and if we are, when is it going to burst?

I’ll let you know my opinion — but first a little context. Back in 2007, when I first started raising money for Knewton, there was really no venture capital (“VC”) backed edtech at all. There were Chegg and BookRenter, which were more distribution companies packing books into boxes than tech companies. There was me trying to raise VC for Knewton, and Princeton Review founder John Katzman trying at the same time to raise VC for his new startup 2Tor (now 2U). Grockit was out on the West Coast. (I’m sure there were others, but those were the ones on my radar.)

At the time, not many VCs were into edtech either. A notable example: When I reached out to Fred Wilson of Union Square Ventures, he declined even to look at Knewton, saying that the education industry was something his fund hadn’t been able to get excited about. (A year later, USV famously changed its mind, deciding education would be one of the most interesting places to invest.)

In 2009, Kno and Inkling launched, along with a number of other startups. It was a sizable increase in the number of VC-backed edtech companies. That same year, Arizona State University and GSV Advisors hosted a small edtech conference bringing together startups and investors. The ASU conference started almost by accident, but it proved a surprise hit, and attendance has doubled every year. One year after the ASU conference launched, Goldman Sachs concluded that education would be one of the great growth industries of the next 20 years. They teamed up with Stanford University to host their first Global Education Conference. A year later, that conference doubled, too.

Today, the level of activity is at an all-time high. Hence all the questions about whether we’re in an edtech bubble. Some people aren’t even asking — they just take it for granted that we are.

But I can’t quite get comfortable saying that edtech is in a bubble.

Sure, lots of money is suddenly flowing into the space relative to before, but not relative to the size of the industry. And many of the companies getting funded will disappear within two years. They won’t demonstrate enough traction, either with their product or, more likely, with market adoption, and will fail to secure the all-important follow-on round of financing. This is already happening to some early edtech startups, and it will happen to many more in the next few years.

But a bubble means something quite different. For a bubble to exist, money must pour into a sector beyond that sector’s reasonable ability to return positive value to investors. In other words, if the edtech companies funded over the last five years, together with those that will be funded over the next few, have a reasonable chance as an asset class of producing a positive return on investment, then we are not in a bubble. (That’s true even if only a few companies, or even just one, deliver the bulk of the sector’s returns.)
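A toy calculation with made-up numbers shows why concentrated returns and a bubble are different things:

```python
# Illustrative arithmetic (hypothetical numbers): a sector can return
# positive value to investors even if most funded companies fail.

invested_per_company = 10.0   # $M invested in each startup
outcomes = [0.0] * 95 + [5.0] * 4 + [400.0]   # 95 zeros, 4 modest exits, 1 big winner

total_invested = invested_per_company * len(outcomes)          # $1,000M
total_returned = sum(multiple * invested_per_company for multiple in outcomes)

print(f"Invested: ${total_invested:.0f}M, returned: ${total_returned:.0f}M")
# Invested: $1000M, returned: $4200M -- a positive return for the asset
# class despite a 95% failure rate and one company producing ~95% of it.
```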

And, in my opinion, that is exactly what is going to happen. Why?

First, the industry is massive. It’s so massive that virtually nobody I’ve met truly grasps how big it is. It’s beyond their frame of reference. The total amount of money (both public and private) spent annually exceeds all spending, both online and offline, of every other information industry combined: that is, all media, entertainment, games, news, software, Internet and mobile media, e-tailing, etc.

Right now, much of the cost of education goes to wastage: things that don’t necessarily improve the final product but are necessary for distribution. In fact, nearly all of the money spent on education goes to distribution. And yet education is arguably the most poorly distributed very large industry in the world. Even in rich nations, there’s very little choice of subject matter (what languages did they teach at your primary or secondary school?) or choice of teachers. In poor nations, of course, the story is infinitely worse, with most kids not exceeding a 6th-grade level of education. Distribution of online learning, by contrast, is pretty cheap.

The entire developing world will leapfrog heavy education infrastructure much as it leapfrogged landline telephony (which is inexpensive compared to education infrastructure) for cell phones. I’m not saying developing nations won’t build schools. Of course they will, and of course they should. But they will significantly average down their costs by relying heavily on online education. They will democratize access to high-quality teachers, from their own nations and abroad, and to powerful new educational technologies. Knewton is committed to providing our analytics and adaptive learning technologies at low or no cost throughout the developing world, in keeping with our strong social mission.

So online education will revolutionize distribution, providing increased quality, choice, and access to teachers. Costs to individual students will in many cases come down, while whole new markets simultaneously open at scale. Bottom line: the education industry will grow, and margins will explode even as unit prices drop (albeit unevenly).

The shift of education from analog to digital is a one-time event in the history of the human race. At scale, it will have as big an effect on the world as indoor plumbing or electricity. Nearly every human being receiving as much education as she wants, and as her ability permits, will transform quality of life and global GDP within one generation, with crazily exponential effects to follow. Massive pools of human talent will be unlocked. How many of these better-educated people will raise better-educated kids? How many more great minds — future Einsteins, Curies, Da Vincis, Pasteurs, MLKs, and McCartneys — will the world produce when we can quadruple the number of high school graduates? What kind of extraordinary growth will result from the contributions of even one such transcendent genius?

It’s hard to look at the scope of likely transformation and get particularly concerned that too much money is flowing into edtech. This money is fueling the technologies and business models that will power this once-in-history transformation. There will be some hugely influential movements emerging over the next decade and some very large companies built.

As with all such great moments in history, many efforts will fail. That was true of residential electricity 100 years ago and of the Internet 15 years ago. For education, much of the long-lasting success will cluster around a few dozen companies, including some existing titans that are transforming their businesses and some new entrants pioneering powerful technologies. So get involved. If you have a killer product idea and a clear path to market, start your own company. If not, join a company you think does. This isn’t a bubble — it’s just the beginning.

This article also appears in the January issue of Knerd Dispatch, the Knewton newsletter.