
Tazuna UX

10 min read
Ennio Lopes
Product Engineering

Most interfaces still treat interaction as a chain of commands. The user asks, the system responds. The user clicks, the product reacts. The user hesitates, and the interface compensates with more controls, more explanation, more prompts, more noise.

That model is starting to fail.

As software becomes more adaptive, more ambient, and more AI-driven, the central challenge is no longer only usability. It is calibration: how present should the system be, when should it lead, when should it wait, and how can it help without becoming the thing the user has to manage?

A useful answer comes from an old Japanese concept with surprisingly modern implications: Tazuna.

In Japanese, tazuna means reins. Lexus turned the idea into a cockpit design philosophy centered on a direct and intuitive connection between driver and machine, with a simple principle: “hands on the wheel, eyes on the road.” The design goal was not spectacle. It was to minimize unnecessary hand, eye, and head movement so the driver could stay oriented toward the real task.

That is the leap worth making for UX.

Tazuna should not be treated as an aesthetic reference or a vague metaphor for minimalism. It is more useful than that. It can become a serious interaction model for product design, especially in systems shaped by AI.

My argument is simple:

The next generation of interfaces should be designed to guide without grabbing, assist without interrupting, and amplify intent without destabilizing the user’s sense of control.

That is what I call Tazuna UX.

The Missing Layer in Design Systems: Semantic Contract

8 min read
Ennio Lopes
Product Engineering

Most design systems do not fail because they lack components. They fail because they never define the layer that tells tokens, components, and product code how meaning should flow through the system.

Teams usually do the visible work. They define foundations, create tokens, build component libraries, and document usage. But when the semantic contract remains implicit, the system slowly drifts.

APIs stop lining up. Variants stop meaning the same thing across components. Tokens get bypassed. Over time, the design system collapses into a themed styling library rather than a true design language.

The failure is not visual first. It is semantic.

The Economics of Closed Loop Codebases for AI Agents

6 min read
Pedro Arantes
CTO | Product Developer

In the rush to adopt AI coding tools, engineering teams are rediscovering a fundamental principle of Control Engineering, now codified as The Principle of Automated Closed Loops: Open-loop systems are unstable, and human feedback is the most expensive way to close the loop.

The key insight is that codebases need to be structured with tests and verification mechanisms to provide the feedback signals that make those agent loops effective.

When we talk about "AI Agents," we are really talking about control systems. The Agent is the controller, your codebase is the plant, and the goal is a stable, functioning feature. But most current implementations—prompting ChatGPT, hitting copy-paste, and manually debugging—are economically broken. They rely on the most expensive resource you have (senior engineering attention) to do the job of a simple feedback sensor.
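The control-loop framing above can be sketched in a few lines. This is an illustrative toy, not the article's implementation: `generate_patch`, `apply_patch`, and `run_tests` are hypothetical hooks standing in for an LLM call, a workspace edit, and a test suite.

```python
# Sketch: the agent is the controller, the codebase is the plant,
# and automated tests are the cheap feedback sensor.
def closed_loop(task, generate_patch, apply_patch, run_tests, max_iterations=5):
    """Iterate until the test 'sensor' reports the setpoint (all tests green)."""
    feedback = ""
    for attempt in range(1, max_iterations + 1):
        patch = generate_patch(task, feedback)  # controller computes an action
        apply_patch(patch)                      # actuate on the plant
        passed, feedback = run_tests()          # sense the plant's new state
        if passed:
            return attempt  # loop closed without human attention
    return None  # escalate to a human only after cheap iterations fail
```

The economic point is in the last line: senior engineering attention enters the loop only when the automated sensor cannot close it.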

The Most Important Decision You'll Make as an Engineer This Year

6 min read
Pedro Arantes
CTO | Product Developer

The single most important decision an engineer must make today is a binary choice regarding their role in the software development lifecycle.

Option A: Continue reviewing code line-by-line as if a human wrote it. This path guarantees you become the bottleneck, stifling your team's throughput.

Option B: Evolve your review policy, relinquishing low-level implementation control to AI to unlock high-level architectural velocity.

If you choose Option A, this article is not for you. You will likely continue to drown in an ever-increasing tide of pull requests until external metrics force a change.

If you choose Option B, you are ready for a paradigm shift. However, blindly "letting AI code" without a governance system invites chaos. You need robust strategies to maintain system quality without scrutinizing every line of implementation. Here are the five strategies that make Option B a reality.

Technical Debt as Leverage in the Age of AI

6 min read
Pedro Arantes
CTO | Product Developer

Technical debt is often viewed solely as a negative consequence of poor engineering—a mess that needs to be cleaned up. However, at ttoss, we view technical debt through a different lens: as financial leverage.

This is especially true in the age of AI. As code generation becomes a commodity, the ability to strategically incur and repay debt defines the velocity of a team. When used consciously, technical debt allows us to ship faster, learn earlier, and capture market opportunities. When accumulated unconsciously, it becomes entropy that grinds development to a halt.

The difference between leverage and negligence lies in how we manage it.

Why Problem Structure is the First Question You Should Ask When Building with AI

4 min read
Pedro Arantes
CTO | Product Developer

In the rush to build "agentic" systems, most teams jump straight to prompting strategies, tool calling, retrieval setups, or orchestration frameworks. That's putting the cart before the horse.

The very first question you must answer, before any architecture diagram or code, is this: Is the problem (or each sub-problem) you're trying to solve well-structured or ill-structured?

This single classification determines whether traditional deterministic software, probabilistic AI (LLMs and other ML models), or a hybrid of both is the right approach.
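The classification can be made concrete with a small dispatcher. This is a hedged sketch, not the article's method: the task names and the `call_llm` hook are illustrative assumptions. A well-structured sub-problem (fixed rules, one correct answer) goes to deterministic code; an ill-structured one (no single right answer) goes to a model.

```python
def compute_invoice_total(line_items):
    # Well-structured: fixed arithmetic rules, fully testable.
    return sum(qty * price for qty, price in line_items)

def summarize_complaint(text, call_llm):
    # Ill-structured: many acceptable outputs, so a probabilistic model fits.
    return call_llm(f"Summarize this customer complaint: {text}")

def handle(task, payload, call_llm=None):
    """Hybrid routing: classify the sub-problem first, then pick the tool."""
    if task == "invoice_total":
        return compute_invoice_total(payload)          # deterministic path
    if task == "summarize":
        return summarize_complaint(payload, call_llm)  # probabilistic path
    raise ValueError(f"unknown task: {task}")
```

The design choice is that the classification happens before any model is invoked, so deterministic sub-problems never pay the cost or accept the variance of an LLM call.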

From Reviewer to Architect: Escaping the AI Verification Trap

6 min read
Pedro Arantes
CTO | Product Developer

There's a moment every engineering manager experiences after adopting AI coding tools. The initial excitement—"We're shipping features twice as fast!"—slowly curdles into a disturbing realization: "Wait, why are my senior engineers spending hours manually testing for regressions that proper automated tests could catch in seconds?"

This is the AI Verification Trap, and there's only one way out.

From Scripter to Architect in the Age of AI

4 min read
Pedro Arantes
CTO | Product Developer

For decades, the job of a software engineer was to write the "happy path." We spent our days scripting the exact sequence of events: fetch data, transform it, display it. We were the authors of the flow.

With the rise of Applied AI, that role is fundamentally changing. When an LLM generates the logic, we are no longer the scripters. We are the architects of the boundaries.
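"Architecting the boundary" can be sketched as a contract check on whatever the model produces. The scenario and field names here are hypothetical, chosen only to illustrate the shift: we no longer script how the discount is decided, we define what any proposed discount must satisfy before it crosses into the system.

```python
def validate_discount(output: dict) -> dict:
    """Boundary contract for an LLM-proposed discount; rejects anything unsafe."""
    if set(output) != {"percent", "reason"}:
        raise ValueError("unexpected fields")          # shape of the contract
    percent = output["percent"]
    if not isinstance(percent, (int, float)) or not 0 <= percent <= 30:
        raise ValueError("discount outside allowed range")  # business invariant
    if not isinstance(output["reason"], str) or not output["reason"].strip():
        raise ValueError("missing justification")      # auditability requirement
    return output  # only contract-conforming output crosses the boundary
```

The happy path is now the model's job; the engineer's job is that this function, and others like it, make the unhappy paths impossible.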