When the Ladder Evolves
Rethinking Bloom’s Taxonomy in the Age of AI
Everywhere I look, we stand at a fascinating crossroads. On one hand, the enduring legacy of Bloom’s Taxonomy—an educational framework that has guided decades of learning—reveals the timeless nature of human cognition. On the other, the rise of generative AI has enabled machines to traverse every rung of that very ladder, from mere recall to the lofty heights of creative synthesis. Yet, as AI’s capabilities expand, so too do the risks associated with its origin, transparency, and governance. In particular, the phenomenon of “sovereign AI”—the risks that emerge when organizations use free or open source AI of uncertain origin—forces us to ask: When every cognitive task can be automated, how do we ensure that our digital “partners” are as trustworthy as they are talented?
Let’s explore how the promise of agility and cost savings can mask underlying risks tied to the software’s origin and governance, and what that means for educators, policymakers, and organizations navigating a transformed knowledge economy.
Because whether you agree with me or not, there is one fact: education is changing.
I. Bloom’s Taxonomy: A Historical Beacon
In 1956, Benjamin Bloom and his colleagues revolutionized education by classifying learning objectives into hierarchical levels: Knowledge, Comprehension, Application, Analysis, Synthesis, and Evaluation. Over time, this framework evolved—most notably in 2001, when the revised taxonomy reframed these levels as Remember, Understand, Apply, Analyze, Evaluate, and Create. More than a mere checklist of skills, Bloom’s Taxonomy became a lens through which educators could view and design the learning process, ensuring that instruction moved from basic fact-recall toward deeper, transformative thinking.
Bloom’s work has endured because it captures the very essence of human learning—a process that builds from simple memorization to sophisticated creativity. Its principles have not only structured classrooms but have also provided a foundation for assessing cognitive rigor.
Blooming the Taxonomy
Imagine learning as if you’re climbing a ladder. Each step on the ladder represents a different kind of thinking—a way of processing information that gets more complex as you move upward. This ladder is known as Bloom’s Taxonomy, and it was created to help us understand and organize learning.
What Is Bloom’s Taxonomy?
Bloom’s Taxonomy is a framework that breaks down the process of learning into six main levels. Think of it as a recipe for how we understand, apply, and create knowledge. The original version was introduced in 1956 by educational psychologist Benjamin Bloom and his team. In 2001, it was updated to use action words, making it easier to see learning as a process you actively engage in.
The Six Levels
Remembering
This is the first step—like the foundation of a house. At this level, you simply recall or remember facts and basic concepts. For example, remembering the capital of France or the formula for water is part of this step. It’s all about memorizing information.
Understanding
Once you remember something, the next step is to understand it. This means you can explain ideas in your own words. For instance, after learning what photosynthesis is, you could explain how plants make their food using sunlight.
Applying
Now that you understand the information, you can use it in real-life situations. Imagine learning a math formula and then using it to solve a problem on your homework. That’s applying what you’ve learned.
Analyzing
At this level, you break information into parts to see how it fits together. For example, if you read a story, you might analyze the characters’ motives or the plot structure. This is like taking apart a machine to see how each piece works.
Evaluating
This step is all about making judgments. You assess the quality or value of information. Imagine reading two different reviews of a movie and deciding which one makes the better argument. Evaluating involves thinking critically about what you’ve learned.
Creating
At the top of the ladder is creating, where you put everything together to form something new. This might be writing your own story, designing a project, or even developing a new idea. Creating is where all your knowledge and critical thinking come together to produce original work.
It Blooming Matters
Bloom’s Taxonomy helps educators design lessons and assessments that gradually challenge students to think in more sophisticated ways. It’s like building a strong foundation before constructing a tall building. When each level is mastered, learners are better prepared to tackle complex problems and think creatively.
For casual learners, it’s a handy way to understand your own learning process. When you study, ask yourself: Am I just trying to remember facts, or am I also understanding, applying, and even creating new ideas? This framework can guide you to become a more active and engaged learner.
Bloom’s Taxonomy isn’t just for classrooms—it can be applied to almost any learning or problem-solving situation. It shows that learning is not a one-step process but a journey that builds upon itself, step by step. And there are little tricks you can use at each step, such as Anki cards for remembering—which I may cover in future articles.
In short, think of Bloom’s Taxonomy as your personal ladder to better understanding. Whether you’re reading a book, solving a problem, or coming up with new ideas, remember that every great achievement starts with taking one step at a time.
The ladder is the key insight for unlocking the knowledge economy and AI.
II. AI’s Journey Through Bloom’s Ladder
Should you or your firm benchmark each level of the taxonomy?
Today’s AI systems, particularly generative models like Claude, Gemini, or ChatGPT, are capable of remarkable feats. They can retrieve vast amounts of factual data in seconds, synthesize information into coherent narratives, and even generate creative content that sometimes mimics human ingenuity. On Bloom’s ladder, AI excels at “remembering” and “applying”—tasks that require speed, consistency, and pattern recognition. Yet, when we reach the levels of “evaluating” and “creating,” the limitations become apparent. AI can remix and recombine existing data, but its ability to inject genuine insight or ethical judgment remains shallow compared to human capacity.
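If you do decide to benchmark your models rung by rung, the shape of the exercise is simple to sketch. The harness below is a minimal illustration, not a real evaluation: the task list, the `toy_grade` function, and the scores are hypothetical placeholders, and a serious benchmark would use curated tasks with rubric-based or human grading:

```python
from collections import defaultdict

# Bottom-to-top order of the revised taxonomy.
LEVELS = ["remember", "understand", "apply", "analyze", "evaluate", "create"]

def benchmark(tasks, grade):
    """tasks: iterable of (level, prompt, model_answer); grade returns 0.0-1.0.
    Returns the mean score per Bloom level, reported bottom-to-top."""
    scores = defaultdict(list)
    for level, prompt, answer in tasks:
        scores[level].append(grade(prompt, answer))
    return {lvl: sum(s) / len(s) for lvl in LEVELS if (s := scores[lvl])}

# Toy run: pretend the model aces recall but struggles to create.
tasks = [
    ("remember", "Capital of France?", "Paris"),
    ("create", "Propose an original proof technique.", "a remix of known ideas"),
]

def toy_grade(prompt, answer):
    # Hypothetical grader: perfect on the recall item, weak elsewhere.
    return 1.0 if answer == "Paris" else 0.3

print(benchmark(tasks, toy_grade))  # {'remember': 1.0, 'create': 0.3}
```

The per-level breakdown is the point: a single aggregate score hides exactly the gap this section describes, where “remember” and “apply” saturate while “evaluate” and “create” lag.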
This duality—where machines can traverse nearly every cognitive task yet struggle with the most nuanced levels—raises an unsettling question: If AI can perform each level of Bloom’s Taxonomy, does the framework become a mirror reflecting our own cognitive biases, or does it reveal a chasm between automated efficiency and human creativity?
Well…
III. Navigating the Cognitive Ladder: Hybrid, Human, and AI Approaches
Imagine a future where training modules are designed around Bloom’s levels and where AI and human intellect collaborate seamlessly. For tasks that involve rote memorization or routine application, AI can serve as a tireless partner, offering rapid feedback and extensive data processing. However, as tasks demand deeper understanding, critical analysis, and creative innovation, human judgment becomes indispensable.
There will be models of education that seize that economic opportunity. The incentives are huge. Faster time to learn is faster time to market. And in capitalism, speed wins.
This hybrid model is not merely theoretical—it is already shaping the way organizations and educational institutions design curricula. By strategically aligning tasks with the appropriate cognitive challenge, we can delegate routine functions to AI while preserving the irreplaceable value of human insight. In doing so, we create a symbiotic environment where efficiency and originality coexist, but also capital markets provide rich incentives to foster generational solutions.
And where there’s money? There’s government.
IV. The Sovereign AI Challenge: Risks and International Considerations
Yet, while we celebrate AI’s prowess in navigating Bloom’s Taxonomy, we must also confront a more troubling aspect: the rise of sovereign AI. As highlighted in a recent article from “Letters from the Machine” (you ARE a subscriber, right?), the allure of free or open source AI is undeniable. Organizations are drawn to the cost-effectiveness, customizability, and rapid prototyping that international AI tools promise. But beneath the surface, several risks loom large:
Origin Matters
The source of open source AI can be murky. What if the code originates from a country blacklisted for acts of terrorism or state-sponsored malfeasance? Relying on such software could jeopardize your organization’s compliance and reputation. The question isn’t just whether AI is free—it’s whether the origin of that AI aligns with your strategic and ethical mandates.
Transparency and Trust
While a project may seem robust due to a large community or corporate backing, deeper scrutiny might reveal conflicting licenses, unknown contributors, or vulnerabilities hidden in layers of code. In sensitive sectors where data security and reliability are paramount, any ambiguity about the AI’s provenance is a red flag.
Licensing and Compliance Pitfalls
Open source licenses vary widely and can introduce unexpected obligations. Some licenses require that any derivative work also be open-sourced, which might conflict with proprietary strategies. For organizations bound by strict compliance or export control regulations, these pitfalls are not just technical details—they’re strategic risks.
Risk Mitigation Through Governance
With engineers already using international AI in their toolkits, the boardroom must ask tough questions. How will investors and regulators react when they learn that your organization is leveraging foreign AI? Are you prepared to justify that decision with rigorous oversight, robust cybersecurity protocols, and a formal approval process?
Integrating these concerns into our broader discussion of cognitive capabilities, we see that while AI can climb Bloom’s ladder with astonishing speed, the “sovereign” nature of some of these tools poses a strategic dilemma. The very efficiency that makes AI attractive also raises profound questions about security, compliance, and the ethics of relying on code whose origins may be as international as they are obscure.
Let’s keep it simple. Do you agree with the economic policies of China and Russia? How would an AI trained on those cultural artifacts influence the taxonomy?
V. Implications for the Knowledge Economy
Nothing is ever so clear cut, is it?
As AI lowers the cost of knowledge production, traditional value structures are upended. The phenomenon is reminiscent of Jevons Paradox: increased efficiency leads to increased consumption. In the knowledge economy, if AI can automate the lower levels of Bloom’s Taxonomy, the real value shifts to those skills that remain uniquely human—critical judgment, ethical evaluation, and genuine creativity.
However, when organizations embrace international (or sovereign) AI without sufficient vetting, they risk commoditizing their intellectual assets. The danger is twofold: not only might the quality of AI outputs be compromised by hidden vulnerabilities, but reliance on such tools can also obscure accountability. In essence, the future of work will be defined by who can best integrate human insight with machine efficiency—while managing the inherent risks of unsanctioned, foreign AI.
The question you need to explore is whether how best to integrate human insight is fundamentally a human concern, or one the AI is well equipped to handle.
Compiled so hard that the Cognitive Ladder is Obsolete?
Maybe it was just a late night with pizza and soda.
If AI can traverse every level of Bloom’s Taxonomy, does that render the ladder itself obsolete? Or does it reveal a new form of hybrid cognition that blends machine efficiency with human creativity?
For decades, Bloom’s Taxonomy has served as a roadmap for education—guiding us from rote memorization to sophisticated creation. Yet today’s AI is capable of “remembering,” “understanding,” even “applying” and “analyzing” at speeds unimaginable to a human. But does this mean the ladder is outdated? Not quite. Many thought leaders now argue that what we are witnessing is not obsolescence but evolution. The taxonomy is morphing into a framework for “hybrid cognition.” In this model, AI efficiently handles the heavy lifting of data retrieval and standard application while humans infuse the process with nuanced interpretation, ethical judgment, and genuine creativity.
Recent discussions in academic and industry circles—even those with a controversial edge—suggest that this hybrid model may redefine “learning” itself. Instead of a sequential climb, the cognitive process becomes a network of interrelated tasks where human insight and machine speed coalesce. As one provocative article noted, “the ladder is less a hierarchy now than a web—a tapestry where the threads of efficiency and intuition are woven together”.
Is it true that the traditional steps of Bloom’s Taxonomy aren’t being discarded; they are being reinterpreted? In a world of hybrid cognition, the taxonomy becomes a tool to orchestrate collaborative intelligence rather than a measure of isolated human effort. It’s an interesting thesis.
🚀 or Valuing Creativity
That’s why you subscribe, right?
Creativity is such a key component of the human condition, is it not? How do we value creativity in an era where machines can generate content at scale? Is the true competitive edge shifting from production to curation and contextual interpretation?
The unprecedented scale of AI-generated content challenges the very notion of creativity. Machines can now churn out essays, art, and even strategic proposals—often indistinguishable from human output. However, true creativity is not simply about generating content; it’s about injecting originality, context, and emotional resonance. In the emerging knowledge economy, the competitive edge is increasingly about curating and interpreting machine outputs rather than just producing them.
Recent controversial perspectives from creative industries suggest that “the art of the future lies in the ability to sift through vast AI-generated information, identifying the truly innovative ideas and placing them into meaningful context”. In other words, while machines can deliver quantity, the scarcity—and thus the value—of genuine creative insight may well shift to human curators and interpreters. This transformation forces us to rethink metrics of innovation. It’s no longer enough to ask, “Can the machine produce this?” Instead, we must ask, “Can we, as humans, elevate and adapt this content into something that resonates on a deeper level?”
Can we direct the AI?
In the future, creativity will be prized not for raw production but for the discernment and contextual intelligence applied to machine-generated ideas. The shift from manufacturing output to curating wisdom is one such argument. But perhaps there’s something even more profound lurking a little deeper.
The Hidden Costs of Neighbors
What are the hidden costs of relying on international, free, or open source AI? How do we safeguard our organizations when the software’s origin may pose strategic risks? Consider not just learning in the classroom, but also the impact of training your staff.
The allure of free and open source AI is undeniable—rapid deployment, cost savings, and the promise of cutting-edge innovation. Yet, as we’ve seen in our discussion on sovereign AI, the origins of such software carry weighty implications. When an AI tool comes from an international source, questions of security, licensing, and even political risk come into play. For example, an open source project maintained by a small group with unclear oversight may expose your organization to vulnerabilities or compliance nightmares.
Recent analyses warn that “the hand-waving nature of free AI can mask significant hidden risks—from conflicting licensing models to uncertain origins that may conflict with your organization’s ethical or regulatory obligations”. Executives must ask hard questions: Who are the developers? What jurisdictions govern the code? Can you rely on a volunteer-driven project to secure critical data? In a competitive market, such questions aren’t mere details—they’re strategic imperatives.
Organizations must establish robust governance frameworks and perform rigorous audits of international or open source AI tools. The hidden costs—security vulnerabilities, licensing pitfalls, and geopolitical risks—demand proactive oversight and a thorough vetting process.
Definitions and Values
In a future where every routine cognitive task is automated, how do we redefine the essence of learning and innovation? Is it all dollars per novel token trained?
Imagine a world where AI takes over every predictable, routine cognitive task—fact-checking, data processing, even basic analysis. In such a future, the essence of learning would no longer be about acquiring information at speed; it would be about developing the capacity for critical, creative, and ethical thought. The focus shifts from “what do you know?” to “how do you think?” and “what do you do with that knowledge?”
Some thinkers suggest that this future demands an educational revolution. Instead of teaching students to memorize and repeat information, curricula must evolve to emphasize problem solving, strategic thinking, and collaborative innovation. As one controversial article recently argued, “the future of learning lies in cultivating a mindset that questions, critiques, and ultimately reimagines the role of knowledge in a digital era”.
Do we even know what knowledge really means when it’s so abundant?
This redefinition of learning isn’t merely academic—it’s essential for thriving in an age where human creativity and judgment are the ultimate differentiators. Educational institutions and organizations alike must redesign assessment methods and innovation models to reward not just efficiency but the ability to navigate complexity, ethical dilemmas, and creative challenges.
There are those who argue the essence of learning and innovation will be redefined to prioritize higher-order cognitive skills—critical evaluation, ethical reasoning, and creative synthesis. As routine tasks are automated, the true value of education will be measured by our ability to ask the right questions and forge new paths of human ingenuity.
Maybe that’s all reality ever was and why we’re all here in the first place: to figure out the right question to ask at the right time.
So now what?
The questions we’ve explored are not merely your weekend puzzles; they are YOUR questions. In a future where routine tasks are automated, our competitive edge will be defined by the uniquely human capacity for critical thought, ethical judgment, and creative synthesis.
Thanks for stopping by. To ensure you’re keeping up with the latest in AI, all you need to do is subscribe to this newsletter. Plus, it helps me calibrate the content to the letters that resonate most with YOU.
Next up…
Citation: Vibrant images of learning used in this article were produced by Grok2.