Reinventing-the-Wheel Learning in the Age of AI: Lessons from Building an AI Agent with AI
It’s Christmas Eve. I hope everyone is having a good holiday season. I attended a conference in Nashville, Tennessee last week[1] and got a little taste of American Christmas.
In this post, I’d like to share my current thoughts on what reinventing-the-wheel learning should look like in the age of AI, based on my experience actually trying it out.
What Is Reinventing-the-Wheel Learning?
Here, I use this term to refer to “a learning method where you reimplement a well-established technology from scratch, step by step.”
I believe this learning method is common among software engineers. It’s an extremely powerful way to build a foundational understanding of a technology. However, since I couldn’t find a single term to describe it, I’ll call it “reinventing-the-wheel learning” in this article.
There are many self-build tutorials available online and in books for practicing reinventing-the-wheel learning across various technologies. Off the top of my head, here are some examples:
- Build Your Own C Compiler (Japanese)
- Build Your Own OS (Japanese)
- Build Your Own Computer System (Japanese edition of “The Elements of Computing Systems”)
- Build Your Own Wasm Runtime (Japanese)
- Build Your Own Kubernetes Scheduler (Japanese)
- Build Your Own Deep Learning Framework (Japanese edition of “Deep Learning from Scratch”)
This approach is also common in university courses. For example, the Processor & Compiler Lab, a well-known course in the University of Tokyo’s Department of Information Science, follows this approach. I personally experienced building a processor from scratch and a programming language implementation at Kyoto University[2].
These self-build tutorials walk you through implementing concepts step by step with explanations along the way. Since implementing everything from scratch would be too difficult, they typically focus on a minimal set of important concepts. Learners read the tutorials, understand the concepts, and code along to see things work.
On the other hand, for technologies without existing self-build tutorials, it’s difficult to even know what concepts to learn or how to structure the implementation, making reinventing-the-wheel learning quite challenging.
Even when tutorials do exist, if the prerequisites don’t match the learner’s background, it can be hard to get started. For example, if the tutorial uses a programming language you’re unfamiliar with, you first need to learn that language. There’s also the natural language barrier – tutorials written in languages other than your native tongue can be difficult to follow.
How to Approach Reinventing-the-Wheel Learning in the Age of AI
This led me to think: by fully leveraging AI, we could freely generate self-build tutorials that don’t yet exist and practice reinventing-the-wheel learning for any technology, tailored to our own prerequisites. In the approach I’ll introduce here, you use Deep Research to learn the overview of the technology and the implementation plan, then use a Coding Agent to deepen understanding while having it implement the code.
First, ask AI about the overview of the technology and the steps to implement it. Use a Deep Research-class AI for broad investigation.
I want to deepen my understanding of [target technology] by implementing it from scratch.
Please design a step-by-step implementation plan and tutorial.
[Conditions such as programming language, prerequisites, etc.]
Use the response and further conversation with the AI to get a rough understanding of the technology.
Then, move on to implementation right away. Since the AI handles the implementation as well, use a Coding Agent.
Write instructions for the Coding Agent like this:
I want to build [target technology] from scratch for learning purposes. Here are the prerequisites:
[Conditions such as programming language, prerequisites, etc.]
Please follow these guidelines:
- Do not write any code beyond what I explicitly instruct, so we can implement step by step.
- Before each step, create a new design document in the docs/design directory.
- Explain and walk me through the code every time you update a file.
- [Other general implementation guidelines]
Then, instruct the agent to proceed with implementation following the steps planned with Deep Research. For each step, discuss with Deep Research and the Coding Agent what to implement, and choose what you want to build. At each step, have the AI create design documents beforehand and explain the implemented code, asking questions as you go to deepen your understanding of the technology.
Building an AI Agent with AI
I actually tried building an AI Agent from scratch using AI[3]. While there are some existing tutorials for building AI Agents, none of them met all of the following criteria, so I decided to fully leverage AI for learning:
- Focused on learning (not building a product with AI Agents)
- Explanations available in Japanese
- Can be implemented in Go, which I’m proficient in
I asked Gemini Deep Research the following:
I want to deepen my understanding of AI Agents by implementing one from scratch.
Please design a step-by-step implementation plan and tutorial.
I'll use OpenAI's API for the LLM. I want to implement it in Go.
The full output report is too long to include here, but in summary:
- The basic pattern is ReAct (Reason + Act), which iterates through Thought, Action, and Observation from an Input to produce an Output (the Final Answer)
  - Thought: analyze the current task and history, and plan what to do next
  - Action: call an (external) tool with specific parameters (the Final Answer is also treated as a tool)
  - Observation: get the tool execution result
- There are two implementation approaches for tool usage:
  - Use the OpenAI Tool Calling API: the API guarantees valid tool responses
  - Implement ReAct Text Parsing yourself: since the LLM generates tool responses as plain text, they may not always be valid
- Implementation steps (for ReAct Text Parsing):
  - Design the system prompt
  - Build the Text Parser
  - Implement the ReAct Loop
  - Implement various tools
  - (Further steps) Implement multi-agent systems
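The ReAct Text Parsing approach above can be sketched roughly as follows in Go. This is a minimal illustration, not the actual implementation from my project: the `Thought:`/`Action: tool[input]`/`Final Answer:` text format, the tool names, and the stubbed `callLLM` (which returns canned responses so the sketch runs offline) are all assumptions.

```go
package main

import (
	"fmt"
	"regexp"
	"strings"
)

// actionRe matches lines like "Action: echo[hello]" in the model's text output.
var actionRe = regexp.MustCompile(`Action:\s*(\w+)\[(.*)\]`)

// tools maps a tool name to its implementation.
var tools = map[string]func(input string) string{
	"echo": func(input string) string { return input },
}

// callLLM stands in for a real OpenAI API call; it returns canned
// ReAct-formatted text so this sketch runs without network access.
func callLLM(history string) string {
	if strings.Contains(history, "Observation:") {
		return "Thought: I have the result.\nFinal Answer: hello"
	}
	return "Thought: I should use the echo tool.\nAction: echo[hello]"
}

// runReAct iterates Thought -> Action -> Observation until a Final Answer,
// appending each model output and tool observation to the history.
func runReAct(question string, maxSteps int) string {
	history := "Question: " + question
	for i := 0; i < maxSteps; i++ {
		out := callLLM(history)
		history += "\n" + out
		if idx := strings.Index(out, "Final Answer:"); idx >= 0 {
			return strings.TrimSpace(out[idx+len("Final Answer:"):])
		}
		if m := actionRe.FindStringSubmatch(out); m != nil {
			name, input := m[1], m[2]
			obs := "unknown tool"
			if f, ok := tools[name]; ok {
				obs = f(input)
			}
			history += "\nObservation: " + obs
		}
	}
	return "(no final answer)"
}

func main() {
	fmt.Println(runReAct("say hello", 5)) // hello
}
```

Because the LLM's text may be malformed, a real parser would also need to handle outputs that match neither pattern (for example, by feeding an error observation back to the model) — which is exactly the validity caveat noted above.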
I chose to implement ReAct Text Parsing myself since I wanted to build from the ground up. With a rough understanding in hand, I moved on to actual implementation using a Coding Agent. I used Claude Code for this.
First, I wrote a CLAUDE.md file like the following. In practice, I refined and updated CLAUDE.md as implementation progressed.
I want to build an AI Agent from scratch for learning purposes. Here are the prerequisites:
- CLI tool
- Implement in Go
- Use OpenAI's API for the LLM
- Adopt ReAct Text Parsing
Please follow these guidelines:
- Do not write any code beyond what I explicitly instruct, so we can implement step by step.
- Before each step, create a new design document in Japanese in the docs/design directory.
- Explain and walk me through the code every time you update a file.
- Read CLAUDE.md before starting each step.
- Write appropriate test code. However, do not make external calls (such as to the OpenAI API). Create mocks as needed, but do not write meaningless tests.
- Verify that make build and make test succeed at the end of each step.
- Update the following files to their latest state at the end of each step:
  - README.md
  - docs/features.md: description of each CLI tool feature
  - docs/package-dependencies.md: Go package/directory dependency diagram in mermaid format
- Run git commit and git push after completing each step.
Then, following the plan from Deep Research, I instructed the agent to proceed with the following steps. I discussed with the Coding Agent what steps to take and chose what to implement based on what I wanted to learn:
- Project setup
- Implement OpenAI API calls
- Implement chat interface
- Implement ReAct Parsing
- Integrate ReAct with chat
- Implement tool invocation and execution
- Implement various tools (text processing, file operations, command execution, etc.)
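For the tool invocation and execution steps, one common shape — and roughly what the steps above suggest — is a small `Tool` interface plus a registry that dispatches parsed actions. The names below are illustrative assumptions, not the actual code:

```go
package main

import (
	"errors"
	"fmt"
	"strings"
)

// Tool is the minimal contract each agent tool implements.
type Tool interface {
	Name() string
	Run(input string) (string, error)
}

// UpperTool is a trivial text-processing tool for the sketch.
type UpperTool struct{}

func (UpperTool) Name() string                  { return "upper" }
func (UpperTool) Run(in string) (string, error) { return strings.ToUpper(in), nil }

// Registry dispatches a parsed Action to the matching tool by name.
type Registry struct{ tools map[string]Tool }

func NewRegistry(ts ...Tool) *Registry {
	m := make(map[string]Tool)
	for _, t := range ts {
		m[t.Name()] = t
	}
	return &Registry{tools: m}
}

func (r *Registry) Dispatch(name, input string) (string, error) {
	t, ok := r.tools[name]
	if !ok {
		return "", errors.New("unknown tool: " + name)
	}
	return t.Run(input)
}

func main() {
	r := NewRegistry(UpperTool{})
	out, _ := r.Dispatch("upper", "hello")
	fmt.Println(out) // HELLO
}
```

New tools (file operations, command execution, and so on) then only need to satisfy the interface and be registered, which keeps each implementation step small.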
In the end, I had a working CLI tool that could interactively invoke simple tools and return results. I had also wanted to implement Deep Research and MCP tool calling, but haven’t gotten to those yet.
Learning Efficiency
In total, it took roughly 8 hours from the initial research to getting a rough understanding of AI Agents.
This learning method lets you proceed in a way that aligns with your existing knowledge. For example, since I was already familiar with Go CLI tool development patterns, I could easily follow the implementation. Because there’s no mismatch with your prerequisites, I believe learning efficiency improves.
Having the Coding Agent handle all the implementation also contributes to efficiency. Some might argue that typing out all the code yourself (i.e., “code transcription”) helps internalize the learning better. However, I believe what matters is understanding the code, not writing it. Instead of transcribing, I make sure to thoroughly understand every piece of code the AI outputs. This includes not just reading the code, but also having the AI explain it and discussing it.
Of course, if you have the time, there’s nothing wrong with transcribing the code yourself. It might even be more enjoyable from the perspective of building something with your own hands. Personally, my past tendency has been to start with the intention of transcribing but gradually get lazy and resort to copy-pasting, so I don’t worry too much about transcription.
Can You Gain Sufficient Knowledge?
This approach has a slight drawback: there’s no guarantee that the AI is providing sufficient knowledge. It’s possible that there are still fundamental things about AI Agents that I should know. There’s no way to discover that within this learning method alone. For example, the initial research didn’t cover how to compress and maintain context.
You have to trust the AI’s judgment about how deep is deep enough. However, the same could be said for traditional self-build tutorials – you can only trust that the tutorial covers enough. Additionally, with better-crafted prompts, you may be able to dig deeper into the AI’s knowledge to some extent.
Applying This to Technologies with No Existing Tutorials
For building an AI Agent, there are already some tutorials available at least in English/Python. The AI may have referenced these during its research.
So, can this approach work for technologies with absolutely no existing tutorials? As a test, I asked Deep Research to investigate “building a Zig compiler in Go”[4]. It proposed the following steps:
- Project setup
- Implement Lexer (tokenization)
- Implement Parser (AST construction)
- Implement ZIR (Lowering)
- Implement Sema (interpretation and AIR generation)
- Implement CodeGen (QBE output)
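The Lexer step, for instance, could start as small as the following Go sketch, which recognizes just identifiers, a few Zig keywords, and single-character punctuation. This is my own illustration of how such a step might begin, not output from Deep Research; number, string, and comment handling are deliberately omitted.

```go
package main

import (
	"fmt"
	"unicode"
)

type TokenKind int

const (
	Ident TokenKind = iota
	Keyword
	Punct
)

type Token struct {
	Kind TokenKind
	Text string
}

// keywords holds a handful of Zig keywords for the sketch.
var keywords = map[string]bool{"const": true, "fn": true, "return": true, "pub": true}

// Lex splits Zig-like source into identifier/keyword/punctuation tokens,
// skipping whitespace; real lexing (numbers, strings, comments) is omitted.
func Lex(src string) []Token {
	var toks []Token
	runes := []rune(src)
	for i := 0; i < len(runes); {
		r := runes[i]
		switch {
		case unicode.IsSpace(r):
			i++
		case unicode.IsLetter(r) || r == '_':
			j := i
			for j < len(runes) && (unicode.IsLetter(runes[j]) || unicode.IsDigit(runes[j]) || runes[j] == '_') {
				j++
			}
			word := string(runes[i:j])
			kind := Ident
			if keywords[word] {
				kind = Keyword
			}
			toks = append(toks, Token{kind, word})
			i = j
		default:
			toks = append(toks, Token{Punct, string(r)})
			i++
		}
	}
	return toks
}

func main() {
	for _, t := range Lex("const x = 1;") {
		fmt.Printf("%d %q\n", t.Kind, t.Text)
	}
}
```

Each later step (Parser, ZIR, Sema, CodeGen) would grow from this kind of minimal seed, with the AI explaining the design before each expansion.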
This is only a surface-level evaluation since I didn’t actually attempt the build, but it appears to correctly combine general compiler implementation techniques with Zig compiler architecture specifics. I’d like to try applying this to other technologies in the future.
This approach may be difficult to apply to technologies that don’t consist entirely of software implementation. This method works precisely because it can fully leverage a Coding Agent when everything is software-based. It may not be well-suited for technologies that require hardware, or infrastructure and cloud resource technologies where there’s value in experimenting through a GUI console. That said, perhaps recent AI-powered browsers could make it applicable to those domains as well.
Is Reinventing-the-Wheel Learning Still Necessary in the Age of AI?
After actually going through this process by trial and error, I felt it ended up closely resembling a product development workflow. In product development, you also start with a broad investigation and design, then design feature by feature and implement while understanding the code.
If that’s the case, when you want to learn a prerequisite technology before starting product development, you might not need to go through reinventing-the-wheel learning at all. You could simply learn the technology as you develop the actual product.
On the other hand, I believe reinventing-the-wheel learning remains effective when the goal is to broadly learn a technology as foundational knowledge.
Another key difference between reinventing-the-wheel learning and product development lies in how much you apply the brakes. In product development, there’s growing discussion around how to delegate work to AI while maintaining quality without applying too many brakes[5]. In reinventing-the-wheel learning, on the other hand, it’s crucial to apply the brakes thoroughly and ensure you understand every piece of code the AI outputs.
Conclusion
I’ve briefly summarized my thoughts while building an AI Agent from scratch. Since I’ve only tried this once through trial and error, there may well be better approaches. I’d like to experiment with other technologies going forward.
Whether you’re about to try this for the first time after reading this article or are already practicing it, I’d love to hear your thoughts and experiences.
2. Many of these university course materials are publicly available online, making them excellent learning resources.
3. My original motivation was to learn about AI Agents, which led me to start building one. As I progressed, I realized this approach could be generalized, which led to writing this article.
4. As far as I could find, there are no existing tutorials for building a Zig compiler available on the internet.
5. This presentation was very insightful on the topic of brakes and quality in AI-assisted development: https://speakerdeck.com/watany/its-only-the-end-of-special-time
