by Polly Watt, Learning Network Director
Introduction
I’ve always preferred learning by doing. Give me a capstone project, a simulation, a build challenge – anything hands-on – and I’m all in. Give me a multiple-choice quiz, and I’ll probably just check the answers with AI and move on. Multiple-choice is not a particularly useful form of assessment, especially once you’ve written enough of them yourself to know their limits. I also think it’s one of the areas where the learning profession is shooting itself in the foot. I’m fully aware of the reasons why multiple-choice quizzes end up in courses, but in my opinion, they really shouldn’t.
I’ve spent a considerable amount of time exploring alternative assessment methods, like voice-bot assessments, which have you thinking on your feet. If you have to explain what you’ve learnt, and you can do it reasonably well, it’s a much better clue as to what you’ve taken away from a piece of learning content.
The hands-on experience I want to discuss today comes from a course I’m currently taking, from AI Build Lab. It’s intensely practical – probably the most hands-on learning you can possibly imagine using just a computer and a bunch of tools. This isn’t about memorising facts. It is about trying, failing, adjusting, and trying again – real learning, the messy kind. More reminiscent of the kind of learning you did as a kid.
It isn’t a course written by instructional designers or learning professionals. Instead, it’s been put together by people who are absolutely passionate about the subject, and they share that passion in multiple live sessions a week and in carefully documented processes that actually work, because they did it themselves. They want everyone to finish, and so, many of the participants are doing do-overs, because:
- There is just so much to get your head around on the first pass.
- They actually want to make progress and be able to create, use and benefit from agents!
In this article, I want to share two reflections from this course:
➔ First, how building an agent mirrors how we really learn – often a messy, iterative process, perhaps more akin to how children explore and figure things out than the structured paths we sometimes design.
➔ Second, the course has made me reflect on which parts of my own work (L&D and other) could potentially be done independently by an agent, helping to free up some time and cognitive energy.
⸻
Iteration as Messy, Real Learning
Before you start building an AI agent, you might imagine creating a perfect plan or map. The reality? You begin with an incomplete idea. You sketch out a workflow, test something, it half-works (or breaks completely), and you adjust. You test, tweak, evaluate, repeat – again and again.
Let’s be honest – it’s not less work. It’s a time-sucking, bug-hunting, instruction-rewriting kind of effort. But it teaches you fast. In real life, we don’t move from “novice” to “expert” just by reading a slide deck or following a linear path. We learn by doing, failing, reflecting, getting feedback, and trying again. It can feel chaotic, but it’s effective.
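To make that loop concrete, here is a minimal sketch of the test-tweak-evaluate cycle in Python. It’s illustrative only: run_agent stands in for whatever framework or model call you’re using, and the test cases are invented.

```python
# A minimal, hypothetical sketch of the build-test-tweak loop.
# run_agent() is a stand-in for whatever framework you're using;
# the test cases are invented for illustration.

test_cases = [
    {"input": "Summarise this meeting transcript", "must_include": ["action items"]},
    {"input": "Draft a reply declining the invite", "must_include": ["thank", "decline"]},
]

def evaluate(run_agent):
    """Run every test case and report the ones the agent failed."""
    failures = []
    for case in test_cases:
        output = run_agent(case["input"]).lower()
        missing = [term for term in case["must_include"] if term not in output]
        if missing:
            failures.append((case["input"], missing))
    return failures

# The human half of the loop: read the failures, rewrite the
# agent's instructions, run evaluate() again - repeat until clean.
```

The code is trivial; the habit isn’t. Checking every change against the same small set of cases is what turns “it half-works” into visible, fixable gaps.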
Once you see what agents can (sort of) do, your brain starts scanning your workload: ‘Wait… could an agent handle that?’ But more than that, the process of building them forces us to confront and embrace iterative, experience-first learning – exactly the kind of learning we often say we want to create for others. It’s a powerful reminder that deep learning is rarely as neat and tidy as we learning designers like to pretend.
When I built my first email agent from the course, the initial result was… accurate, but jarringly blunt. No greeting. No sign-off. Just a couple of terse sentences. It wasn’t until I saw it “in action” that I realised the gaps. I hadn’t just forgotten the necessary human touch; I’d also failed to provide sufficient context and instruction. Iteration isn’t just about fixing bugs; it’s about discovering what you didn’t even realise you needed to design for in the first place.
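To show the kind of gap I mean, compare a bare instruction with one that supplies context and style. This is an illustrative sketch rather than the actual course agent, and generate is a hypothetical stand-in for whichever model call your framework makes:

```python
# Illustrative only - `generate` is a hypothetical model call,
# not part of any particular framework.

BLUNT_INSTRUCTIONS = "Reply to the email below."

BETTER_INSTRUCTIONS = """\
You draft email replies on behalf of a Learning & Development director.
Context: recipients are colleagues and external partners.
Style: warm and professional. Always open with a greeting,
acknowledge the sender's point, and close with a sign-off.
Keep replies under 150 words.
"""

def draft_reply(generate, email_text: str) -> str:
    # My first attempt looked like the blunt version; iteration is
    # what surfaced the missing context and tone guidance.
    return generate(system=BETTER_INSTRUCTIONS, user=email_text)
```

Nothing in the second prompt is clever. It’s simply the context a human assistant would already have – which is exactly what I hadn’t thought to write down.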
⸻
Rethinking Learning Design: What Could an AI Agent Do Independently?
Once you grasp what agents can do, even imperfectly at first, you start mentally spotting tasks they could potentially handle independently, freeing up your own time and mental bandwidth for higher-value, uniquely human work.
In practice, the most effective approach probably isn’t building one giant, all-knowing agent, but rather creating a suite of smaller, specialised agents. Each one does one job – and does it well, in the blink of an eye. No multitasking here. These agents can then be linked into a workflow, coordinated by a central “orchestrator” agent that directs traffic, deciding when and how to trigger each specialist.
This modular approach makes sense because specialised agents tend to be faster, more accurate within their domain, and easier to improve. Each agent stays focused, making the whole system more robust and adaptable. If one component needs an update, you can refine or replace it without disrupting everything else – much like organising complex projects within human teams.
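As a rough sketch of what that modular wiring can look like – every name below is invented, and each “agent” is reduced to a plain function where a real one would wrap its own model, prompt, and tools:

```python
# A toy version of the orchestrator pattern. All agent names are
# invented; each function stands in for a full specialised agent.

def research_agent(brief: str) -> str:
    return f"[sources gathered for: {brief}]"

def drafting_agent(sources: str) -> str:
    return f"[draft built from: {sources}]"

def review_agent(draft: str) -> str:
    return f"[review notes on: {draft}]"

def orchestrator(brief: str) -> str:
    """Directs traffic: each specialist does one job, in sequence."""
    sources = research_agent(brief)
    draft = drafting_agent(sources)
    notes = review_agent(draft)
    # A real orchestrator would loop back to drafting_agent until
    # the review passes; a single pass keeps the sketch short.
    return draft + "\n" + notes
```

The payoff is exactly the modularity described above: you can rewrite review_agent tomorrow without touching research or drafting.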
Here’s what a semi-autonomous agent-driven learning design process could look like:
Start with the Learning Design Orchestrator Agent, which kicks off the process based on a trigger (e.g. a request for a new learning module tied to a business goal). It manages handoffs, timing, and approvals across all the other agents (a rough code sketch of this wiring follows the list):
1. Content Discovery and Structure
- The Content Research Agent pulls together internal and external knowledge.
- The SME Content Structuring Agent turns messy notes or transcripts into a coherent draft framework.
- Together, they provide the raw material and structure.
2. Define the Learning Goal
- The Learning Objective Agent uses performance data and business needs to define measurable learning outcomes.
3. Design the Experience
- The Storyboard Design Agent maps out learning sequences based on the objectives and structured content.
- The Scenario Generation Agent adds decision points and workplace scenarios to bring content to life.
4. Format and Presentation
- The Media Recommendation Agent suggests the best formats (videos, infographics, job aids, interactive tools).
- The Multimedia Concept Agent adds creative flavour and production ideas.
5. Make it Adaptive and Personalised
- The Adaptive Pathway Logic Agent builds in logic for adjusting difficulty and pacing.
- The Personalisation Engine ensures learners get a tailored experience based on their profile or performance.
6. Build Feedback Loops
- The Assessment Item Generation Agent creates aligned assessments (MCQs, voice bots, etc.).
- The Feedback Analysis Agent collects learner responses, flags issues, and recommends improvements.
- The Learning Analytics Agent turns all of this into dashboards that help designers iterate and refine.
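For the technically curious, here is one hypothetical way the whole pipeline could be wired up. Every agent is reduced to a named stand-in; a real system would give each its own model, prompts, tools, and human approval gates:

```python
# Hypothetical wiring of the pipeline above. Each entry in the
# `agents` dict would be a real specialised agent; here they are
# just callables that read the running spec and add an artifact.

from dataclasses import dataclass, field

@dataclass
class ModuleSpec:
    trigger: str                     # e.g. a request tied to a business goal
    artifacts: dict = field(default_factory=dict)

PIPELINE = [
    "content_research", "sme_structuring",          # 1. discovery and structure
    "learning_objective",                           # 2. define the goal
    "storyboard_design", "scenario_generation",     # 3. design the experience
    "media_recommendation", "multimedia_concept",   # 4. format and presentation
    "adaptive_pathway", "personalisation_engine",   # 5. adaptive and personalised
    "assessment_generation", "feedback_analysis",   # 6. feedback loops
    "learning_analytics",
]

def learning_design_orchestrator(trigger: str, agents: dict) -> ModuleSpec:
    """Owns the hand-offs and ordering; in reality it would also
    pause for human approval between steps."""
    spec = ModuleSpec(trigger)
    for name in PIPELINE:
        spec.artifacts[name] = agents[name](spec)   # pass running state along
    return spec
```

Calling this needs a dictionary of twelve specialist callables – which is rather the point: each one can be built, tested, and replaced on its own.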
⸻
Conclusion
Building AI agents is in itself a powerful form of learning. It’s fast, it’s often messy, it demands reflection – and it forces you to think deeply not just about what people need to learn, but about how they actually experience the learning process.
It also challenges you to look at your own work through a different lens: Which parts are routine and repeatable, potentially suitable for independent agent assistance? Which parts demand your unique human insight, creativity, and connection? And how can we, as L&D professionals, build smarter systems – incorporating both human expertise and AI capabilities – that free us to do our best, most impactful work?
Even a small experiment – a clunky first agent, a rough prototype that requires significant refinement – can teach you more about learning, technology, and your own professional practice than you might expect. The AI Build Lab course, while a demanding and iterative experience, certainly provided that for me.
The future of learning isn’t just about doing more. It’s about doing the right things better – and strategically leveraging tools like AI agents to help with the rest.
So, what task would your first agent tackle?