Why Systems Thinking Still Matters in the Age of AI
FM (Friday Morning) Reflection #42
Building things with AI often starts simple. Recently, I created a small, focused app using AI tools – a ‘callback finder’ to help me connect ideas in new Friday Morning Reflections to relevant past posts. I needed this because Substack doesn’t provide a tool to do this natively, and I wanted to see how AI could help me.
I was pleasantly surprised at the result, and because my request was specific and contained, the AI coding assistant handled it well.
It felt like progress and a glimpse into a future where the power to innovate with technology becomes more accessible. The experience inspired me to explore the idea that AI could give rise to a new generation of creator-innovators, upending the built-in-a-garage-by-a-gang-of-guys tech-bro founder mythology that started with Jobs and Woz and continues to this day. I wrote about this practical application of AI in From Creator to Architect: The New Entrepreneurial Frontier, and now you’ve also seen a callback in action.
My initial success naturally led to the next question: what else could it do?
When I wanted to expand my simple tool by giving it a new problem to solve, the assistant diligently got to work with an enthusiastic “That sounds like a great addition!”
I repeated this several times and was impressed with each turn.
But then I wanted to bundle the whole thing together, adding features and functions that would unify everything into a cohesive toolset. Things got complicated fast. The assistant started to “forget” design patterns we had cheerfully agreed on just minutes earlier. It tried to re-implement things we’d already built and suggested code that conflicted with previous work.
It quickly became clear that while the current generation of AI agent-driven builders is great at tackling specific and focused tasks, the tools often struggle with the bigger picture – the intricate connections needed to create a working system.
This experience highlighted a crucial gap I’m observing not just in AI tools, but in how we’re increasingly approaching problem-solving: a lack of systems thinking. We’re easily seduced by the quick win, the simple point solution, potentially overlooking the complex, interconnected nature of the challenges we face.
Seeing the Forest, Not Just the Trees
Systems thinking is the ability to see the whole picture – the relationships, the patterns, the unintended consequences – rather than just focusing on individual parts. It’s understanding that changing one element can ripple through the entire system in unexpected ways.
Building a robust software application isn’t just about writing code for individual features; it’s about architecting how those features interact, designing the workflow, ensuring security, planning for scalability, and maintaining consistency. It requires understanding the forest, not just planting individual trees.
Current AI coding assistants often seem to operate at the tree level. They can generate code snippets, troubleshoot specific errors, or even scaffold a basic application. But ask them to manage the complexity of a larger system – maintaining architectural integrity, remembering security protocols across different modules, adapting to evolving requirements – and their limitations become apparent.
They lack true “systems awareness.” They can help you get across town, but they can’t yet navigate the complexities of a cross-country journey. They can help you draw a picture of a bridge’s superstructure, but they can’t yet architect and build the entire bridge to safely carry traffic.
The Human Parallel: Rushing to Quick Fixes
I see a parallel trend in human behavior – perhaps the models are just mimicking us.
The urge to jump in and solve problems quickly with point solutions is well known. If you’ve worked in technology for a while, you’ve undoubtedly seen Band-Aid or duct-tape-and-baling-wire contraptions deployed to solve today’s immediate problem, only to fall over later because they were brittle and never designed to handle the next exception.
Now this tendency is being paired with a rush to implement AI solutions, often driven by a desire for quick results or a fear of falling behind, and amplified by advertising from AI companies and wannabes: “Just let the agent loose and it’ll solve the problem.”
The ease with which AI can generate an answer – a piece of code, a summary, a plan – can obscure the difficulty of generating the right answer, one that is robust, secure, and considers the broader system. It touches on an idea I’ve explored before: the risk of being generally right but specifically wrong.
AI coding tools, in their current state, often produce outputs that seem plausible and generally related to the request, but might be subtly flawed or lack the nuanced understanding required for complex, real-world applications.
This can result in apps that look impressive on the surface but fail to account for the intricate realities of the operating environment and the actual jobs to be done. I believe this is why we’re seeing so many headlines right now about AI solutions that don’t pay off.
When implemented properly, AI can and will drive significant value.
But we’re going to have to figure out how to guide both AI assistants and vibe-coding colleagues on the finer points of software architecture, how to develop with AI using robust frameworks, and how to engineer for scalability.
Why AI Struggles with the System
Why do these tools falter with complexity? Part of it lies in their current architecture.
Context windows have limits, meaning the AI can only “remember” a certain amount of information at once. Longer, more complex projects quickly exceed these limits.
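To make that concrete, here is a minimal sketch in Python of the sliding-window trimming that effectively happens when a conversation outgrows the model’s budget. The history entries are invented for illustration, and word counts stand in for real tokenization:

```python
# A minimal sketch of why long projects get "forgotten": once a
# conversation exceeds the context budget, the oldest turns drop off.
# Word count is a crude stand-in for real tokenization.

def trim_to_budget(turns: list[str], budget: int = 50) -> list[str]:
    """Keep only the most recent turns that fit within the budget."""
    kept, used = [], 0
    for turn in reversed(turns):       # walk from newest to oldest
        cost = len(turn.split())
        if used + cost > budget:
            break                      # everything older is dropped
        kept.append(turn)
        used += cost
    return list(reversed(kept))

history = [
    "Decision: all modules share one auth layer and one error format.",
    "Built the callback finder endpoint.",
    "Added tag-based search across past posts.",
    "Refactored the ranking logic.",
    "Bundled everything into a single toolset with shared navigation.",
]
print(trim_to_budget(history, budget=30))
```

Run it and the earliest entry – the architectural decision – is the first thing to fall out of scope, which is exactly the kind of “forgetting” I ran into.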
Coordinating multiple specialized AI agents to work together coherently is still a significant challenge – though the approach of agents ‘talking with each other’ to self-correct shows potential.
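As a rough illustration of that self-correction idea, here is a minimal sketch of a propose-and-critique loop. The propose and critique callables are placeholders for real model calls, and the “OK” convention is invented for the example – this shows the coordination pattern, not any particular framework’s API:

```python
from typing import Callable

# A minimal propose-and-critique loop: one "agent" drafts code, a second
# reviews it against the system constraints, and the first revises.
# `propose` and `critique` are placeholders for real LLM calls.

def propose_and_critique(propose: Callable[[str], str],
                         critique: Callable[[str], str],
                         task: str,
                         max_rounds: int = 3) -> str:
    draft = propose(task)
    for _ in range(max_rounds):
        feedback = critique(draft)
        if feedback.strip().upper() == "OK":   # critic is satisfied
            return draft
        # Feed the critique back so the next draft addresses it.
        draft = propose(f"{task}\n\nRevise the draft to address:\n{feedback}")
    return draft                               # best effort after max_rounds
```

Even this simple pattern only reaches as far as the critic’s own context does – it catches more local mistakes, but it doesn’t, by itself, give either agent a view of the whole system.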
Model training data, while vast, doesn’t inherently equip them with the architectural foresight or deep understanding of system dynamics that comes from human experience. They are incredibly sophisticated pattern matchers, but not (yet) true system architects.
Cultivating Systems Thinking
So, how do we navigate this? How do we leverage the speed and power of AI without sacrificing the depth and rigor of systems thinking? It requires a conscious effort from us – the users, developers, and stakeholders involved in building these tools.
Here’s how we can Do Good by Doing Better:
Use AI Strategically, Not Blindly: Treat AI coding assistants like very capable, but sometimes forgetful, junior developers. Use them for specific tasks, code generation, debugging, and brainstorming, but maintain human oversight of the overall architecture, security, and system integrity. Don’t just “vibe code” an enterprise application into existence. This strategic approach to human-AI collaboration is essential.
Define the System First: Before deploying AI, take the time to map the system yourself. Understand the components, the interactions, the potential failure points, and the security considerations. Provide this architectural context to the AI, piece by piece if necessary – see the sketch after this list.
Prioritize Robustness over Speed: Resist the pressure for immediate AI-driven solutions if they compromise quality or systemic integrity. It’s better to integrate AI thoughtfully into a well-designed system than to quickly build a fragile system that will break in real-world use.
Develop Your Own Systems Thinking: Recognize that AI is a tool, not a replacement for critical thought. Invest time in learning and applying systems thinking principles yourself. Ask “what if?” questions and consider second- and third-order effects.
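On the second point, a low-tech but effective habit is to keep a short written system map and feed it back to the assistant with every request. Here is a minimal sketch in Python, assuming a plain prompt-string workflow – the brief’s contents and the framed_request helper are illustrative, not any particular tool’s API:

```python
# A low-tech sketch of "define the system first": keep a short,
# authoritative system map and prepend it to every request, so the
# assistant never has to infer (or re-invent) the architecture.

ARCHITECTURE_BRIEF = """\
System map (authoritative -- do not contradict):
- Components: web UI, callback-finder service, posts index.
- All services share ONE auth layer; never create a new one.
- Errors: every endpoint returns {"error": code, "detail": message}.
- Flag any new dependency explicitly before using it.
"""

def framed_request(task: str) -> str:
    """Wrap a task in the standing architectural context."""
    return f"{ARCHITECTURE_BRIEF}\nTask: {task}"

print(framed_request("Add an endpoint that lists callbacks for a post."))
```

The code is trivial by design; the point is that the architectural decisions live in one authoritative place that you control, rather than in the assistant’s fading memory.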
Bridging the Gap
The current limitations of AI in handling complex systems aren’t a reason to dismiss the technology, but a reminder to use it more wisely. AI assistants can dramatically accelerate specific parts of the development process, freeing us up to focus on higher-level architectural and strategic thinking.
Simple apps, like my callback finder, show the power of focused AI assistance. The challenge lies in scaling that assistance effectively to complex, mission-critical systems.
We are still at the beginning of this road. These tools will undoubtedly become more capable, their context windows will expand, and agent coordination will improve. But the need for human-led systems thinking – that holistic, critical, and forward-looking perspective – will remain paramount. By consciously integrating this human strength with the computational power of AI, we can build solutions that are not just fast, but also robust, reliable, and truly impactful.