Go Beyond the Code

AI Can Code, But Can It Build?

July 10, 2025

The Illusion of Simplicity

As AI-assisted development evolves, it sparks both curiosity and caution. The potential is remarkable: simply describe what you need in plain language, and the machine generates working software.

But what sounds like the end of traditional coding often reveals something more profound: software only becomes meaningful when it's built with intent, not just executed. Like any iconic creation, its value comes not from how it's built, but from why. Tools enable the outcome, but the purpose must come from us.


Why Simplicity Comes at a Cost

AI coding solutions, such as Cursor, Bolt, and Lovable, make it easier to get started. They lower the barrier by allowing anyone to prompt in plain English. However, this accessibility conceals a key flaw: these tools don’t understand intent. They recognize patterns, but when you go beyond templates or tutorials, they struggle.

This isn't a new idea. The pursuit of abstraction has been ongoing for decades, from early programming languages like Fortran and Lisp to the rise of Model-Driven Architecture and BPM tools. Each promised to close the gap between business logic and technical implementation. Each eventually collided with complexity.

There's a reason "prompt engineering" has become so central: it's essentially the new process of gathering requirements. And it turns out, defining what you want an app to do, with precision and depth, is just as tough as building it yourself. So, where does this friction start to show up most clearly?


Where the Magic Fails

• They can’t adapt. They recall. Most AI coding tools operate within the boundaries of their training data. Ask for something outside the familiar, and you're back to manual work. According to an arXiv study, LLMs like Codex perform best when tasks closely match their training data; however, their performance drops significantly when faced with unfamiliar or domain-specific logic.

• Saying “Build a to-do app” is simple. But defining user flows, exception handling, and performance constraints is real engineering. As Andrej Karpathy, a co-founder of OpenAI, noted during a Y Combinator event (quoted in Business Insider), large language models can generate thousands of lines of code, but “developers must remain vigilant.” Human clarity, oversight, and judgment are still essential to make that output work.

• Great at tweaks, not systems. Minor refactors? Sure. But building a scalable product still requires real engineers who understand architecture, trade-offs, and edge cases. WIRED reports that GitHub Copilot is effective for suggesting small code snippets but falls short on architectural coherence and long-term maintainability.


What This Means for Teams

The hype mirrors what we have seen in past trends, such as Model-Driven Development or BPM. Lots of promise, limited reach. We believe AI will increase the volume of app development rather than reduce the need for developers. If anything, it will expose how crucial thoughtful design and engineering still are.

McKinsey estimates that generative AI can automate up to 20% of software development tasks, mostly in boilerplate and repetitive logic. But that still leaves 80% in the hands of engineers who need to reason, architect, and adapt. The practical implication? Teams that treat AI tools as copilots—not drivers—will move faster, with fewer crashes.


So, What Now?

One option is to constrain the playground: for internal tools built on a fixed tech stack (e.g., a Java backend and React frontend), these tools might work well. But the bigger challenge remains: translating messy, human intent into buildable logic.

Clarity, objectives, and constant revision, not processing power, are what software demands most. The real challenge is that humans often struggle to articulate exactly what they want. Requirements shift. Context changes. Priorities evolve mid-sprint. And no model, no matter how advanced, can infer what hasn't been truly defined. That’s not a prompt problem. That’s a product problem.

Even in tightly scoped environments, AI-generated code needs clear direction, defined objectives, and continuous oversight. Without that, it mirrors the ambiguity of its input, resulting in code that may run, but rarely performs, and almost never aligns with true intent.
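A tiny illustration of that failure mode, using a hypothetical prompt: asked to “remove duplicates from a list of email addresses,” a model can return code that runs cleanly yet silently encodes a decision nobody made. Below, the naive version treats case as significant, so two spellings of the same address both survive; the second version is what someone with domain awareness writes:

```python
def dedupe_naive(emails: list[str]) -> list[str]:
    # Runs fine and looks plausible -- but compares strings exactly,
    # so "Alice@Example.com" and "alice@example.com" are kept as distinct.
    return list(dict.fromkeys(emails))


def dedupe_with_intent(emails: list[str]) -> list[str]:
    # Normalize before comparing (email domains are case-insensitive),
    # and keep the first spelling seen to preserve the original input.
    seen: set[str] = set()
    result: list[str] = []
    for email in emails:
        key = email.strip().lower()
        if key not in seen:
            seen.add(key)
            result.append(email)
    return result
```

Both functions “work.” Only one of them does what the user actually meant, and nothing in the prompt distinguishes them. That gap is exactly what direction, objectives, and oversight exist to close.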

Capability, security, and performance don’t come out of the box. They require intentional design, critical thinking, and domain awareness. Until we bridge that gap, engineers remain essential.


Closing Thought

We believe that AI won't replace developers. But it will raise the bar. The future will reward those who can distinguish between generating output and building with intent.

The challenge isn't just technical; it's conceptual. Knowing what to build, and why, remains the hardest part of software development. And that’s something no autocomplete can solve.

Esteban Robles Luna
Co-Founder - CEO & Solver
