The Knewton Blog


Defining Critical Thinking

Posted in Ed Tech on December 8, 2010



I had the pleasure recently of spending a little time with Simon Lebus, the Chief Executive of Cambridge Assessment (CA). The last time I checked, CA was Europe’s largest assessment agency. During part of that conversation, Simon pointed me to a research report CA produced in 2008 on critical thinking. The research methodology combined expert judgment with a review of relevant literature in order to produce consensus about what critical thinking is (and is not) and about what skills it comprises.

The stated goal of the report “was to create a definition and taxonomy for Critical Thinking in order to support validity arguments about Critical Thinking tests and exams administered by Cambridge Assessment,” and while it may serve its purpose for CA, it does something of broader significance: it provides a taxonomy of sufficient granularity to undergird adaptive learning.

The taxonomy includes 5 high-level constructs: Analysis, Evaluation, Inference, Synthesis/Construction, and Self-Reflection/Self-Correction. At the second level, there are 26 concepts, including, for example, recognizing arguments and explanations, judging significance, considering the impact of further evidence on an argument, and making and justifying rational decisions. The CA report expands on this basic structure by providing further descriptors of each second-level concept. In the case of Evaluation, there are 9 sub-concepts.

By my estimate, these sub-concept descriptions could be deconstructed to generate roughly 100 finer-grained third-level concepts. "Detecting errors in reasoning," for instance, is a concept built on more specific concepts related to recognizing common fallacies: mistaking correlation for causation, ad hominem attacks, straw man arguments, etc. These, in turn, could be deconstructed to generate many more specific concepts beyond that third level.

What results is a tree that starts to define the hierarchy of skills involved in critical thinking mastery.
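As a rough sketch (illustrative nesting only, not the report's exact structure), one slice of that tree might be represented like this, using the concept names mentioned above:

```python
from dataclasses import dataclass, field

@dataclass
class Concept:
    """A node in the critical-thinking taxonomy."""
    name: str
    children: list["Concept"] = field(default_factory=list)

# A tiny slice of the hierarchy, built from concepts named in the CA report
# and the fallacy examples above; a full tree would hold hundreds of nodes.
critical_thinking = Concept("Critical Thinking", [
    Concept("Evaluation", [
        Concept("Detecting errors in reasoning", [
            Concept("Mistaking correlation for causation"),
            Concept("Ad hominem attacks"),
            Concept("Straw man arguments"),
        ]),
    ]),
    Concept("Analysis"),
    Concept("Inference"),
    Concept("Synthesis/Construction"),
    Concept("Self-Reflection/Self-Correction"),
])

def print_tree(node: Concept, depth: int = 0) -> None:
    """Print the hierarchy with one level of indentation per depth."""
    print("  " * depth + node.name)
    for child in node.children:
        print_tree(child, depth + 1)

print_tree(critical_thinking)
```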

One can continue this deconstruction process as far, and only as far, as data supports — it is of little use to go below the lowest level at which students can be reasonably evaluated or instructed.

This kind of concept structure, with a defined set of relationships between various concepts, is one component of Knewton’s adaptive learning platform. By measuring student proficiencies at a fine-grained level — and by refining those estimates in the context of other student data — we can select appropriate instructional content for each student.
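To make that concrete, here is a minimal sketch of how proficiency estimates over fine-grained concepts might drive content selection. The scores, the threshold, and the content titles are invented for illustration; this is not a description of Knewton's actual platform.

```python
# Estimated proficiency (0.0-1.0) for one student on fine-grained concepts.
# All values and content titles below are hypothetical.
proficiencies = {
    "Mistaking correlation for causation": 0.85,
    "Ad hominem attacks": 0.72,
    "Straw man arguments": 0.31,
}

# Hypothetical instructional content keyed by concept.
content_library = {
    "Straw man arguments": "Lesson: spotting misrepresented positions",
    "Ad hominem attacks": "Lesson: separating arguers from arguments",
    "Mistaking correlation for causation": "Lesson: correlation vs. causation",
}

def recommend(proficiencies: dict[str, float], threshold: float = 0.6) -> list[str]:
    """Recommend content for the concepts where estimated proficiency is weakest."""
    weak = sorted(
        (c for c, p in proficiencies.items() if p < threshold),
        key=lambda c: proficiencies[c],
    )
    return [content_library[c] for c in weak if c in content_library]

print(recommend(proficiencies))
# ['Lesson: spotting misrepresented positions']
```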

If a student needs help with the "Evaluation" part of critical thinking, for instance, her issue might be that she has trouble recognizing logical fallacies generally, or circular arguments in particular. Tracking that kind of information can give educators tools to differentiate their instruction in a way that up to now hasn't been possible.

Taxonomies like the one above have been developed for many different domains, and there are often heated debates about which structures best represent a domain for a given purpose. We think these debates are valuable, and that they ultimately result in clearer pictures of knowledge domains.

In cases where the dust has not yet settled, however, we can still provide adaptive learning. With the Knewton platform, our approach is to map to multiple structures simultaneously and to provide recommendations based on these multiple sources of information.
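As a hedged sketch of that idea, one simple way to combine competing structures is to blend proficiency estimates across them, say with a weighted average, and recommend against the blended picture. The taxonomies, weights, and numbers below are invented for illustration.

```python
# Hypothetical proficiency estimates for the same student under two
# competing taxonomies of the same domain.
taxonomy_a = {"Detecting errors in reasoning": 0.40, "Judging significance": 0.75}
taxonomy_b = {"Detecting errors in reasoning": 0.55, "Judging significance": 0.70}

def blend(estimates: list[dict[str, float]], weights: list[float]) -> dict[str, float]:
    """Weighted average of proficiency estimates from multiple concept structures."""
    blended: dict[str, float] = {}
    total = sum(weights)
    for estimate, weight in zip(estimates, weights):
        for concept, score in estimate.items():
            blended[concept] = blended.get(concept, 0.0) + weight * score
    return {concept: score / total for concept, score in blended.items()}

print(blend([taxonomy_a, taxonomy_b], weights=[0.5, 0.5]))
# {'Detecting errors in reasoning': 0.475, 'Judging significance': 0.725}
```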

My conversation with Simon was exciting because every advance in the taxonomy of knowledge domains makes personalized learning more possible. The more we know about hierarchies of mastery, the better adaptivity we can provide.