10 posts tagged with "agentic-development"

The Most Important Decision You'll Make as an Engineer This Year

· 6 min read
Pedro Arantes
CTO | Product Developer

The single most important decision an engineer must make today is a binary choice regarding their role in the software development lifecycle.

Option A: Continue reviewing code line-by-line as if a human wrote it. This path guarantees you become the bottleneck, stifling your team's throughput.

Option B: Evolve your review policy, relinquishing low-level implementation control to AI to unlock high-level architectural velocity.

If you choose Option A, this article is not for you. You will likely continue to drown in an ever-increasing tide of pull requests until external metrics force a change.

If you choose Option B, you are ready for a paradigm shift. However, blindly "letting AI code" without a governance system invites chaos. You need robust strategies to maintain system quality without scrutinizing every line of implementation. Here are the five strategies that make Option B a reality.

Why Problem Structure is the First Question You Should Ask When Building with AI

· 4 min read

In the rush to build "agentic" systems, most teams jump straight to prompting strategies, tool calling, retrieval setups, or orchestration frameworks. That's putting the cart before the horse.

The very first question you must answer, before any architecture diagram or code, is this: Is the problem (or each sub-problem) you're trying to solve well-structured or ill-structured?

This single classification determines whether traditional deterministic software, probabilistic AI (LLMs and other ML models), or a hybrid of both is the right approach.
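The classification above can be sketched as a simple dispatcher: well-structured sub-problems go to deterministic code, ill-structured ones to a model. This is a minimal illustration only; `call_llm`, the task shape, and the flat-rate tax rule are all hypothetical stand-ins, not anything prescribed by the post.

```python
def call_llm(prompt: str) -> str:
    """Hypothetical LLM call; a real system would invoke a model API."""
    return f"LLM draft for: {prompt}"

def eval_tax(income: float) -> float:
    """Deterministic rule (illustrative): flat 10% below 50k, 20% above."""
    rate = 0.10 if income < 50_000 else 0.20
    return round(income * rate, 2)

def solve(task: dict) -> str:
    """Route by problem structure: deterministic code vs. probabilistic AI."""
    if task["structure"] == "well":
        # Well-structured: known inputs, known rules, a verifiable answer.
        return str(eval_tax(task["income"]))
    # Ill-structured: ambiguous goals, no single correct answer.
    return call_llm(task["description"])
```

For example, `solve({"structure": "well", "income": 40_000})` runs the deterministic rule, while an ill-structured task like naming a product is handed to the model.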

From Reviewer to Architect: Escaping the AI Verification Trap

· 6 min read

There's a moment every engineering manager experiences after adopting AI coding tools. The initial excitement—"We're shipping features twice as fast!"—slowly curdles into a disturbing realization: "Wait, why are my senior engineers spending hours manually testing for regressions that proper automated tests could catch in seconds?"

This is the AI Verification Trap, and there's only one way out.

The AI Collaboration Paradox: Why Being Smart Isn't Enough Anymore

· 9 min read

Two engineers join your team on the same day. Both have stellar résumés. Both ace the technical interviews. Both score in the 95th percentile on algorithmic problem-solving.

Six months later, one is shipping 3x more features than the other.

The difference? It's not intelligence. It's not work ethic. It's not even technical depth.

It's something we're just beginning to measure: collaborative ability with AI.

And it's exposing an uncomfortable truth: in the age of AI agents, being smart isn't enough anymore.

Compounding AI Outputs: Building a Memory for Your System

· 4 min read

In the early stages of AI adoption, most teams treat AI agents as isolated tools: you give the agent a prompt, get a result, and the context vanishes. This "Task → Prompt → Result → Forget" cycle is inefficient because it fails to capture the intelligence generated during the interaction.

To truly leverage AI in product development, we must shift to a system where outputs are compounded. This means designing workflows where the insight from one agent becomes the context for the next, creating a shared "Memory Layer" that accumulates value over time.
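A minimal sketch of that shared "Memory Layer", assuming a simple list-backed store: each agent's output is recorded, and the accumulated notes are prepended to the next agent's prompt so insight compounds across tasks. `run_agent` is a hypothetical placeholder for a real model call.

```python
class MemoryLayer:
    """Accumulates insights so later agents inherit earlier context."""

    def __init__(self) -> None:
        self.notes: list[str] = []

    def record(self, insight: str) -> None:
        self.notes.append(insight)

    def as_context(self) -> str:
        return "\n".join(f"- {n}" for n in self.notes)

def run_agent(task: str, memory: MemoryLayer) -> str:
    """Hypothetical agent: builds a prompt from memory, records its result."""
    prompt = f"Context:\n{memory.as_context()}\nTask: {task}"
    result = f"done: {task}"  # stand-in for a real model response
    memory.record(f"{task} -> {result}")
    return prompt  # returned so the compounding context is visible
```

Running "draft spec" and then "write tests" against the same `MemoryLayer` shows the second prompt already containing the first task's outcome.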

Mastering the Context Window: Why Your AI Agent Forgets (and How to Fix It)

· 5 min read

AI agents are transforming how we write code, but they are not magic. They operate within a strict constraint that many developers overlook until it bites them: the context window.

If you treat an AI session like an infinite conversation, you will eventually hit a wall where the model starts "forgetting" your initial instructions, hallucinating APIs, or reverting to bad patterns. This isn't a bug; it's a fundamental limitation of the technology. Success in agentic development requires treating context as a scarce, economic resource.
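Treating context as a budget can be sketched as follows, assuming a crude 4-characters-per-token estimate (not a real tokenizer): pinned system instructions always stay, and the oldest conversational turns are dropped first when the budget is exceeded.

```python
def estimate_tokens(text: str) -> int:
    """Rough heuristic: ~4 characters per token; not a real tokenizer."""
    return max(1, len(text) // 4)

def fit_to_budget(system: str, turns: list[str], budget: int) -> list[str]:
    """Keep the system prompt plus as many recent turns as fit the budget."""
    kept: list[str] = []
    used = estimate_tokens(system)
    for turn in reversed(turns):  # walk newest-first
        cost = estimate_tokens(turn)
        if used + cost > budget:
            break  # oldest remaining turns fall out of context
        kept.append(turn)
        used += cost
    return [system] + list(reversed(kept))
```

With a budget of 25 tokens and three 40-character turns, only the two newest turns survive alongside the system prompt, which is exactly the "forgetting" behavior the post describes.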

The 80% Rule: Why Your AI Agents Should Only Speak When Confident

· 4 min read

We've all been there: You ask your AI coding assistant for a solution to a tricky bug. It responds instantly, with absolute certainty, providing a code snippet that looks perfect. You copy it, run it, and... nothing. Or worse, a new error.

The AI wasn't lying to you. It was hallucinating. It was "confidently wrong."

In our Agentic Development Principles, we call this The Corollary of Confidence-Qualified Output. But in practice, we just call it The 80% Rule.
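A minimal sketch of confidence-qualified output, assuming the agent can attach a confidence score to each answer (a real system might derive one from log-probabilities or self-evaluation; the threshold and messages here are illustrative, not a prescribed API):

```python
CONFIDENCE_THRESHOLD = 0.8  # the "80%" in the rule's name

def qualified_answer(answer: str, confidence: float) -> str:
    """Return the answer only when confident; otherwise escalate."""
    if confidence >= CONFIDENCE_THRESHOLD:
        return answer
    return f"UNSURE ({confidence:.0%}): escalating to human review"
```

A 92%-confident answer passes through unchanged; a 40%-confident one is flagged instead of being stated as fact, which is the opposite of "confidently wrong."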