The Case For Humans
AI made building cheap. Humans make winning possible.
For the past 3 months I’ve been building a product with AI, full send.
Agents crawling.
Agents writing copy.
Agents generating code.
Agents doing the boring stuff I used to procrastinate on.
I spent about $20,000 on AI credits.
And the craziest part is that it worked.
There are real advantages to going solo with AI:
Speed: you can run a dozen parallel workstreams at once.
Coherence: one decision maker, one consistent taste.
No incentive games: no promo ladders, no résumé projects, no “stakeholder alignment” theater.
As I relied more and more on AI, what broke was not my ability to create fast. What broke was the belief that humans are optional.
I’m bullish on humans for three reasons:
Accountability
Differentiation
Community
1) Accountability
Accountability is the difference between shipping and spinning.
Working in a big public tech company, I saw the same pattern over and over:
Projects with one clear owner moved fast, made decisions, and were fun to be on.
Projects with no real owner drifted. Decisions got made by committee. Everyone had input. Nobody had responsibility. They often died slowly, even when the market signal was there.
This is not a “big company problem.” This is a human behavior problem.
When responsibility is shared, responsibility dilutes. Social psychology has documented this for decades as diffusion of responsibility. The more bystanders there are, the less any one person feels on the hook.
That’s why there are operating frameworks designed to force clarity. They exist because ambiguity is the default.
Mechanisms like:
A DRI, one directly responsible individual, so ownership is never ambiguous.
Single-threaded leadership, so one person can actually deliver outcomes without getting coupled to ten other teams.
Now here’s the twist.
AI does not dilute accountability. AI deletes it.
Your agent can generate code, run tests, and open PRs. It can tell you something is “correct” in the narrow sense of passing a unit test.
But it does not own the outcome. If the user hates the UX, the agent does not care. If there’s a security bug and you get exploited, the agent does not apologize. If the site goes down, the agent does not wake up at 3 a.m.
You do.
Building Lazyweb.com made this visceral.
I’m moving fast. Looking back at the last couple months, what I shipped would have taken me years pre-AI. But I also realized what I miss most is not more horsepower.
It’s shared ownership.
Because no matter how productive AI makes you, there is only so much accountability one person can hold at once.
More leverage creates more levers. More levers mean more tradeoffs, more tough calls, and more need for accountability.
2) Differentiation
When software becomes instant, customers stop rewarding effort and start rewarding difference.
Speed is an advantage. Sometimes it’s the advantage. The trap is thinking speed turns into a moat.
AI is making it easier to ship, faster than ever. That creates real first-mover opportunities. It also makes it easy to confuse “I shipped fast” with “I built something that users want.”
ChatGPT is the cleanest example.
When it first hit, expectations were low because the breakthrough was the headline. It hallucinated, it was rough, and we still loved it because it felt like the future showed up early. You could put a wrapper on an AI model, tweak a few system prompts, and make hundreds of thousands a month.
Fast-forward to now and “a chat UI on top of a model” is not impressive. People expect the whole bundle. Web search, tool use, deep research, strong UX, constant iteration. The floor rose, so the applause stopped.
This is not just an AI thing. This is the pattern for every category that gets easier.
When something moves from expensive to cheap, the customer stops admiring the fact that you did it. They assume you did it. Then they start judging the parts that are harder to copy: reliability, trust, distribution, taste, brand, and how well it fits into their real life.
You can see this playing out in AI video right now.
Early models got forgiven for being janky because the headline was "it works at all."
Then the floor rose, and nobody claps for the floor.
So differentiation moves up the stack.
You need either:
A continuous train of breakthroughs
Something genuinely new. Rare, and often temporary.
Or a sustaining moat
A moat is not a feature list. A moat is a benefit plus a barrier. You get value, and competitors cannot easily replicate it.
If you want the simple map, here are the seven classic moats (or “powers”) and what they look like in the real world:
Scale economies
You get cheaper as you get bigger. New entrants cannot match your unit economics.
Network effects
The product gets more valuable as more people use it. Users pull in more users.
Switching costs
Leaving hurts. The product knows your workflow, your history, your preferences. The longer you use it, the harder it is to replace.
Brand
People choose you even when alternatives are “good enough.”
Cornered resource
You own something others cannot easily buy. Data, distribution, access, talent, a channel, a contract.
Process power
Your operating system is the advantage. Culture, cadence, and execution that is hard to copy.
Counter-positioning
You make a move incumbents cannot copy without breaking their existing business model.
Now connect this to AI coding.
People see agents generate code 10x faster and assume the job of engineering is typing.
It isn’t.
Engineering is judgment under constraints:
what to build
what not to build
what tradeoff is acceptable
what failure mode will kill trust
what “good” even means
Agent loops can search, test, refactor, and iterate. They can be insanely helpful. They still do not automatically produce a moat, because a moat is not “more output.” A moat is a hard-to-copy advantage that usually comes from choices that feel non-obvious at the time.
So yes, an agent can generate infinite code.
But it cannot generate infinite advantage.
Because advantage comes from non-obvious calls:
the thing you do that looks weird until it works
the constraint you embrace that others avoid
the wedge that breaks an incumbent’s position
the taste to kill 90% of what is easy so you can ship the 10% that matters
AI is leverage.
Yet humans are still the differentiation engine because humans can choose to be different on purpose.
3) Community
AI will personalize everything. Humans will fight to keep something shared.
We are walking into a one-to-one world.
AI can get to know you, learn your preferences, and generate content tailored to you forever. That is genuinely useful.
The trade is that personalization deletes common ground, and shared experiences are one of the main ways humans bond.
Even if you do not talk, just experiencing something alongside someone else makes it feel more intense. People rate shared experiences as more pleasant or more unpleasant than the same experience alone.
When emotions synchronize during shared experiences, feelings of social connection increase.
This is why news matters beyond “information.” A big story becomes a shared object. Everyone reacts, argues, jokes, and forms opinions. Shared narrative creates shared dialogue. Shared dialogue creates community.
Same with movies.
A big movie is a communal object. You watch it, your friends watch it, the internet watches it, and now you have something to talk about.
Now imagine the AI future where Netflix generates a different movie for you and a different movie for your friend, each perfectly optimized to your tastes.
Cool, until you realize you cannot talk about it.
There is no “did you see that scene?” because their scene never happened.
So yes, AI will flood the world with personalized content.
And that will make the demand for shared experiences stronger, not weaker, because shared experiences are how humans create tribes, status, inside jokes, meaning, and memory.
This is also why community becomes an even bigger moat for companies.
In a world where content is infinite, trust is scarce.
Nielsen found that people trust recommendations from people they know more than any other channel.
Even inside companies, this matters.
If you “run” 100 agents but you have no camaraderie, no shared wins, no shared pain, no shared story, work becomes sterile. You lose one of the biggest motivators, the feeling that you are doing something hard together.
Maybe we do not need 80,000 people to build an e-signature company.
But we will always need humans for the part AI cannot generate.
Shared meaning.
Closing
I’m not making the case against AI.
I’m making the case for humans, precisely because AI works.
AI makes output abundant.
It makes speed cheap.
It makes average easy.
So the scarce things become more valuable:
Ownership
Judgment
Trust
Community
AI can generate infinite work.
It cannot generate someone who cares if it wins.
And by “win,” I mean the real win. Shipping something people actually want, keeping it reliable, earning trust, surviving competition, and building something that lasts.
That part is and will forever be human.
Until next time,
Ali Abouelatta
