Haroon Choudery · 7 min read

Six Things I Keep Telling Every Company I Work With

I’ve spent the last few months working with companies trying to figure out what AI actually changes for their business. CTOs, founders, ops leaders, product teams. Different industries, different stages, different budgets.

The same conversations keep happening.

Not the same questions, exactly. The questions are always specific to the company. But the same underlying mistakes. The same assumptions that sound reasonable until you’re three months into a project that isn’t working.

So I started writing them down. Not as a manifesto or a philosophy. Just the things I find myself saying in almost every engagement, usually within the first week.

Here are six of them.

1. Good decisions outlast any model.

The first thing most companies want to talk about is which model to use. GPT-4o or Claude? Should we fine-tune? What about open-source?

These are fine questions. They’re also the wrong place to start.

The model you pick today will not be the model you’re running in 18 months. OpenAI, Anthropic, Google, and a dozen open-source projects are shipping new capabilities every few weeks. Whatever you choose right now will be superseded.

What won’t be superseded is the abstraction layer you build around it.

The companies that are in the best position aren’t the ones who picked the “best” model. They’re the ones who built clean separation between their application logic and the model layer. They can swap providers in an afternoon, not a quarter.
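
To make that concrete, here's a minimal sketch of that separation, assuming the current OpenAI and Anthropic Python SDKs. The `CompletionProvider` interface and the `summarize` function are illustrative names, not a prescription:

```python
from typing import Protocol

class CompletionProvider(Protocol):
    """Anything that turns a prompt into text. Application code
    depends on this interface, never on a vendor SDK directly."""
    def complete(self, prompt: str) -> str: ...

class OpenAIProvider:
    def __init__(self, model: str = "gpt-4o"):
        from openai import OpenAI  # vendor SDK stays inside the adapter
        self._client = OpenAI()
        self._model = model

    def complete(self, prompt: str) -> str:
        resp = self._client.chat.completions.create(
            model=self._model,
            messages=[{"role": "user", "content": prompt}],
        )
        return resp.choices[0].message.content

class AnthropicProvider:
    def __init__(self, model: str = "claude-3-5-sonnet-latest"):
        import anthropic
        self._client = anthropic.Anthropic()
        self._model = model

    def complete(self, prompt: str) -> str:
        resp = self._client.messages.create(
            model=self._model,
            max_tokens=1024,
            messages=[{"role": "user", "content": prompt}],
        )
        return resp.content[0].text

def summarize(provider: CompletionProvider, text: str) -> str:
    # Application logic only sees the interface. Swapping vendors
    # means swapping one constructor call, not rewriting this code.
    return provider.complete(f"Summarize in two sentences:\n\n{text}")
```

Application code written against the interface doesn't care which adapter sits behind it. That seam is what makes the afternoon swap possible.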

I keep saying this because the alternative is painful. I’ve seen teams build entire systems tightly coupled to a specific model’s quirks, only to have the next model version break half their prompts.

Simon Willison put it well: apply “boring technology” principles. Innovate on your unique selling points. Use tested, swappable solutions for everything else.

The model is a dependency. The architecture is the asset.

2. Start flexible, crystallize over time.

This one comes from building Autoblocks, the AI evaluation platform. I watched hundreds of teams build agent systems, and the ones that succeeded all followed the same pattern, whether they knew it or not.

They started with agents doing a lot. Exploring. Figuring out the right workflow, the right sequence of operations, the right tool calls.

Then, as patterns stabilized, they hardened those patterns into deterministic code. Fast, reliable, predictable. The agent kept handling the parts that were genuinely novel. Everything else became infrastructure.

I call this crystallization.

The mistake most teams make is picking one extreme. Either they try to build a fully deterministic system from day one, which means they’re guessing at the workflow before they understand it.

Or they leave everything agent-directed forever, which means they’re paying for latency, cost, and unpredictability on tasks that stopped being unpredictable months ago.

The right systems evolve. They start flexible and become more deterministic as you learn what the stable patterns actually are.
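
A hypothetical sketch of what crystallization looks like in code. The task names and the `agent.run` interface are invented for illustration:

```python
def extract_invoice_fields(payload: dict) -> dict:
    # A workflow that used to be agent-directed, now hardened into
    # deterministic code once the pattern stabilized.
    return {"vendor": payload["vendor"], "total": payload["line_totals"][-1]}

# Task types whose workflows have stabilized enough to crystallize.
DETERMINISTIC_HANDLERS = {
    "invoice_extraction": extract_invoice_fields,
}

def handle(task_type: str, payload: dict, agent) -> dict:
    handler = DETERMINISTIC_HANDLERS.get(task_type)
    if handler is not None:
        return handler(payload)           # fast, cheap, testable
    return agent.run(task_type, payload)  # genuinely novel: let the agent explore
```

Over time, entries migrate into the handler table and the agent's surface area shrinks to the work that's actually novel.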

3. Context is the system.

This is the one nobody wants to hear.

Gartner predicts that through 2026, organizations will abandon 60% of AI projects that aren't supported by AI-ready data. A recent survey found that 81% of AI professionals say their company has significant data quality issues.

And yet 85% say leadership isn’t addressing it.

Most AI failures aren’t model failures. They’re context failures.

Bad retrieval. Stale data. Missing business logic. No single source of truth.

The model is doing exactly what you asked it to do, with exactly the information you gave it. The information was just wrong.

I’ve walked into companies where the AI system is pulling from three different knowledge bases, two of which haven’t been updated since Q3 of last year. The team is debugging prompts. The prompts are fine. The context is garbage.

Simon Willison said it best: “Most of the craft of getting good results from an LLM comes down to managing its context.”

He’s right. And most companies treat context as an afterthought.

When I start an engagement, the first thing I look at isn’t the model, the prompts, or the agent architecture. It’s the context layer.

Where does the system’s knowledge live? How current is it? Who maintains it? Is there a single source of truth, or five conflicting ones?
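
That audit doesn't need to be sophisticated. Even a crude freshness check surfaces most of the problems; here's a hypothetical sketch, assuming a simple inventory of knowledge sources:

```python
from datetime import datetime, timedelta, timezone

MAX_AGE = timedelta(days=90)  # illustrative threshold; pick one and enforce it

def audit_context_sources(sources: list[dict]) -> list[str]:
    """Flag knowledge sources that are stale or unowned.
    `sources` is a hypothetical inventory: name, last_updated, owner."""
    problems = []
    now = datetime.now(timezone.utc)
    for src in sources:
        if now - src["last_updated"] > MAX_AGE:
            problems.append(f"{src['name']}: stale since {src['last_updated']:%Y-%m-%d}")
        if not src.get("owner"):
            problems.append(f"{src['name']}: nobody owns it")
    return problems
```

Run something like this before touching a single prompt. It usually explains more of the system's behavior than the prompts do.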

If you get context right, a mediocre model will outperform a frontier model running on bad data. Every time.

4. Use the simplest thing that works.

There’s a paper, Agentless, that compared a simple three-phase approach (localize the problem, generate a fix, validate it) against complex autonomous agent systems for software engineering tasks.

The simple approach got 96 correct fixes on SWE-bench Lite. It cost $0.70 per attempt.

The question they posed: “Do we really have to employ complex autonomous software agents?”

Often, no.
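
The shape of that pipeline fits in a few lines. This is a sketch of the structure, not the paper's implementation; the three phases are passed in as plain callables:

```python
from typing import Callable, Iterable

def fix_issue(
    issue: str,
    localize: Callable[[str], Iterable[str]],   # issue -> suspect files
    propose_patch: Callable[[str, str], str],   # (issue, file) -> candidate patch
    validates: Callable[[str], bool],           # patch -> do the tests pass?
) -> list[str]:
    # Phase 1: narrow the codebase down to the likely culprits.
    suspects = localize(issue)
    # Phase 2: generate a candidate fix for each suspect location.
    candidates = [propose_patch(issue, f) for f in suspects]
    # Phase 3: keep only the candidates that survive validation.
    return [patch for patch in candidates if validates(patch)]
```

No loop, no planner, no self-reflection. Three functions in a row, and each one is easy to test on its own.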

I see this constantly. A company wants to “build an AI agent” for something that’s really just a well-structured API call. Or an automation. Or a cron job.

Not everything needs an agent. Not everything needs an LLM. The right answer is the one that solves the problem with the least complexity that still handles the edge cases.

This isn’t anti-AI. I’m an AI consultant. I want you to use AI where it creates real leverage. But the key word is “leverage,” not “everywhere.”

The most valuable thing I sometimes tell a client is “you don’t need AI for this.” It builds more trust than any demo ever could. And it means that when I do say “this is where an agent creates 10x leverage,” they believe me.

5. Build with your team. They own everything.

This one is about how we work, not just what we build.

When Seeko does a Build engagement, we’re writing code alongside the client’s engineers. Not handing over a repo at the end. Not building behind a wall and doing a reveal.

Working in their codebase, on their infrastructure, with their team learning the patterns by doing the work.

When we leave, the team should be better than when we arrived. And they should own every line of code, every system, every piece of architecture.

This is the opposite of how most consulting works. The traditional model creates dependency.

You hire the firm, they build something you don’t fully understand, and then you need them forever to maintain it.

I don’t want clients to need me forever. I want clients to tell other people they don’t need me anymore because their team can do it themselves now.

That’s the referral that actually matters.

6. Recommend honestly.

I’ve worked with Claude, GPT-4o, open-source models, Cursor, various agent frameworks. I don’t have a preferred vendor. I have preferences for specific use cases.

When a client asks “which model should we use?”, the answer depends on their cloud infrastructure (AWS pushes toward Anthropic via Bedrock, Azure toward OpenAI), their security requirements, their use case (tool use, long context, speed, cost), and whether they need a multi-model strategy.

Most serious companies will. Different models for different tasks. A routing layer that picks the right one based on the job.
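
That routing layer can start embarrassingly simple. A hypothetical sketch, with the table entries standing in for whatever your own evals actually show:

```python
# Task profiles and model names here are purely illustrative,
# not recommendations. Your eval results fill in this table.
ROUTES = {
    "long_context": {"provider": "anthropic", "model": "claude-3-5-sonnet-latest"},
    "cheap_classification": {"provider": "openai", "model": "gpt-4o-mini"},
    "tool_use": {"provider": "openai", "model": "gpt-4o"},
}
DEFAULT = {"provider": "openai", "model": "gpt-4o-mini"}

def route(task_profile: str) -> dict:
    """Pick a provider/model pairing based on the job, not vendor loyalty."""
    return ROUTES.get(task_profile, DEFAULT)
```

The point is the seam: when a better model ships, you update a table entry, not your application code.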

The honest answer is rarely “use X for everything.” And I think companies can tell the difference between an advisor who recommends based on what they’ve actually seen work and one who’s incentivized by a partnership deal.

I’d rather be the first one.

Why these six?

I could have listed more. There are plenty of tactical principles about eval infrastructure, about team structure, about how to run agent-native development practices.

But these six keep coming up because they’re the ones that sit underneath everything else.

Get the architecture right, and the model choice is easy. Get the context layer right, and the prompts almost write themselves. Use the simplest thing that works, and you ship faster with fewer regrets.

Hamel Husain made a point that the biggest mistake in AI engineering is the “tools first” mindset, where teams get caught up in frameworks and architecture while neglecting to understand what’s actually working. I think he’s right.

These principles aren’t about tools. They’re about thinking clearly before you build.

If any of them resonate, or if you’re in the middle of figuring this out for your own company, I’d like to hear what you’re working on.

Thinking through the same questions?

We help companies figure out what AI actually changes for their business — and build the systems to act on it. Strategy, architecture, automations.

Tell us what you're working on →