Tommy Geoco discovers what fuels the internet’s most interesting designers and builders.


Techniques / Tommy Geoco / May 12, 2026

Your team isn't AI-installed

In this post

001. The AI-installed company

Happy Tuesday.

I talked to Tom Krcha about Pencil, then kept seeing the same pattern show up everywhere else: the teams getting real work from agents are rebuilding the production floor around them.

The durable layer is the context everyone works from, even when the canvas changes.

That sounds small until you look around:

  • Linear is publishing agent-delegation numbers.

  • Shopify is running River in public Slack channels.

  • Ramp built Glass as an internal Claude coworker connected to company systems.

  • Box, Zapier, LangChain, and Microsoft are all saying different versions of the same thing: prototypes got easy, installation got exposed.

The hard part is installation. And companies are starting to share how they’re doing it.

– Tommy (@designertom)

TOGETHER WITH DSCOUT

Most "AI research" tools right now are wrappers. Dscout's AI Studio is the one that actually does the work.

Describe your goals → it drafts the study, screener, and logic. AI Moderator runs interview-style sessions across time zones with real participants, not fake users.

Then you ask your data questions in chat and get sourced answers back.

If you're shopping AI research tools, start here.

See how it works →

The AI-installed company

AI can generate impressive output.

The question now is whether your team has installed the context, memory, permissions, workflow redesign, and human review layer agents need before they can do real work.

That is the difference between an AI-decorated company and an AI-installed company: one bought licenses, the other rebuilt the production floor.

You can see the split in the companies publishing installation evidence instead of marketing hype.

Linear shared that resolved work moved from 5,697 in Q4 2025 to 10,254 in Q1 2026, up 80%. Bug resolutions nearly doubled, from 1,278 to 2,483. Agent delegation moved from 10.1% in February to 24.4% month-to-date in April, and in April Linear’s agent was solving 57% of reported bugs.

That changes the argument: the proof is no longer a cool prototype, it's work moving through a production system.

Shopify’s River is even more revealing because it lives in public Slack, not private DMs. It reads code, runs tests, opens PRs, queries the data warehouse, and looks at production traces. Over the last 30 days, 5,938 employees used it across 4,450 Slack channels. In one week, it authored 1,870 PRs in Shopify’s main monorepo.

If the agent learns in the open, the organization learns with it.

Ramp’s Glass points at the same thing from another angle. Ramp says AI usage is up 6,300% year over year, 99.5% of the team is active on AI tools, and 800+ builders shipped 1,500+ internal apps in six weeks.

Again: this is a company system, with a chatbot-shaped entry point.

So the checklist I’d use with any design team is pretty blunt:

1. What context can the agent actually see?
Tom Krcha’s version is that shared context matters more than shared canvas. Pencil can run headless from the terminal, create PDFs and screenshots, and still tap the design system and codebase.

LangChain makes the same point from the harness side: memory is not a feature bolted onto agents. It becomes the lock-in layer. If your team’s context lives in private docs, stale design files, and tribal memory, the agent is mostly guessing with confidence.

2. What is it allowed to touch?
Aaron Levie has been unusually plain about this: enterprise agents need secure data, access controls, entitlements, scopes, monitoring, logging, agent-readable process docs, and redesigned workflows. Boring words. Load-bearing words. This is where a lot of AI adoption quietly dies, because the company wants agentic output without agentic permissions.
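To make those "boring words" concrete, here is a minimal sketch of what scoped, logged tool access for an internal agent could look like. Everything here is hypothetical: the `AgentSession` class, scope names, and tool names are illustrative, not any real Box, Ramp, or vendor API.

```python
# Hypothetical sketch: scope-checked tool access with an audit trail.
# All names here are illustrative, not a real agent framework's API.
from dataclasses import dataclass, field

@dataclass
class AgentSession:
    agent_id: str
    scopes: set[str] = field(default_factory=set)
    audit_log: list[str] = field(default_factory=list)

    def call_tool(self, tool: str, required_scope: str) -> str:
        # Every call is checked against granted scopes and logged,
        # so monitoring and entitlements exist by construction.
        if required_scope not in self.scopes:
            self.audit_log.append(f"DENIED {tool} (missing {required_scope})")
            raise PermissionError(f"{self.agent_id} lacks scope {required_scope}")
        self.audit_log.append(f"OK {tool}")
        return f"{tool} executed"

session = AgentSession("internal-agent", scopes={"invoices:read"})
session.call_tool("list_invoices", "invoices:read")      # allowed
try:
    session.call_tool("pay_invoice", "invoices:write")   # denied: no write scope
except PermissionError:
    pass
print(session.audit_log)
```

The point of the sketch is the shape, not the code: the permission check and the log line live in the same choke point, so the agent cannot act without leaving a trail.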

3. Did the workflow change, or did you just add a button?
Wade Foster's Zapier read is useful here. Fully agentic reliability is tricky in enterprises, so wrapping the agent in deterministic workflow steps improves reliability and keeps costs under control. That is enterprise AI without being performative: agents, systems, and review points chained into something that survives contact with the business.

4. Who owns the last 20%?
Tom’s Pencil answer was simple: agents are great for stochastic exploration, but designers still need the chisel. He has seen users start with open exploration, then manually polish the last stretch because that is where authorship, taste, and intent come back in.

Microsoft’s WorkLab report says 86% of AI users treat AI output as a starting point and stay responsible for the thinking. Good. That should be the default.

The uncomfortable self-diagnosis is that a lot of teams are judging AI by the wrong proof.

A frontier-model demo proves the ceiling. A personal prototype proves curiosity. Installation proves the team can use agents where work actually happens.

Can the agent read the right context, act with the right permissions, leave an auditable trail, survive partial reliability, and still hand the chisel back before the work ships?

That is the production floor now, and there’s a lot of sawdust.

Tom had a line near the end of our conversation that I liked: taste comes from participation. The agent can start the idea, wander in directions you would not have found, and hand you something surprisingly close. But if you stop putting your hands in the clay, you lose the feel for what needs to change.

So watch the companies publishing installation evidence.

The boring plumbing.

If you want the human version of this argument, watch or listen to today’s State of Play conversation with Tom Krcha from Pencil.dev. And hit reply with the installation question your team cannot answer yet.

See you next time,

Tommy



Founder

Tommy Geoco

After selling my startup in 2015, I worked in Silicon Valley supporting many shapes of work: design teams of one, leading design ops, taking ideas from 0 to 1, scaling teams, and supporting product growth.