Trevexia
What I learned building an agency around a model I did not yet trust.
In the spring of 2024 I had a problem I did not want. I was running outreach for two of my own ventures and a fund I was advising, and the work was eating about thirty hours a week. List building, copy-drafting, sequence management, reply triage — the bookkeeping of attention. I knew the work was structured because I had structured it, and I knew that anything that structured had a shape that a model could fit. I started writing an agent for myself one Sunday afternoon and shipped the first version eight days later, on a Monday at 3:14 in the morning. It replaced about a third of my manual outreach work in the first week.
This is what Trevexia is. The eight-day agent has since been rebuilt three times, but the question it answered for me was: how much of cold outreach is craft, and how much is bookkeeping that the craft has nothing to do with? The answer, by the end of the first month, was an embarrassing ratio. The craft is the writing voice, the discrimination of which signals actually mean an account is in market, and the willingness to walk away from a prospect that looked good on paper. The rest is a state machine.
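To make the state-machine claim concrete, here is a minimal sketch. The states and transitions are my own invention for illustration; nothing in this piece describes Trevexia's actual sequence model.

```python
from enum import Enum, auto

# Illustrative only: these states and transitions are a guess at the
# shape of the bookkeeping, not Trevexia's actual sequence states.
class Touch(Enum):
    QUEUED = auto()
    SENT = auto()
    AWAITING_REPLY = auto()
    FOLLOW_UP_DUE = auto()
    REPLIED = auto()
    BOUNCED = auto()

TRANSITIONS = {
    Touch.QUEUED: {Touch.SENT},
    Touch.SENT: {Touch.AWAITING_REPLY, Touch.BOUNCED},
    Touch.AWAITING_REPLY: {Touch.REPLIED, Touch.FOLLOW_UP_DUE},
    Touch.FOLLOW_UP_DUE: {Touch.SENT},   # back into the sequence
    Touch.REPLIED: set(),                # craft takes over here
    Touch.BOUNCED: set(),
}

def advance(state: Touch, nxt: Touch) -> Touch:
    """The whole job of the bookkeeping half: refuse illegal moves."""
    if nxt not in TRANSITIONS[state]:
        raise ValueError(f"illegal transition: {state.name} -> {nxt.name}")
    return nxt
```

Everything in that table is mechanical. The two terminal states are where the craft begins.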
I built the agency around that distinction. Trevexia takes the bookkeeping and gives back the craft. The agent handles list construction, signal monitoring, sequence orchestration, reply classification, calendar coordination. A human — a writer, with taste — handles the voice. The two halves talk through a queue.
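In code, the seam between the halves can be as small as two queues. This is a minimal sketch under my own assumptions about the task shape; none of these names come from the actual system.

```python
from collections import deque
from dataclasses import dataclass, field

# Hypothetical task shape: the essay names the fields' roles
# (account, signal, voice context), not their schema.
@dataclass
class BriefingTask:
    account: str
    signal: str      # what suggests the account is in market
    archetype: str   # which voice context the copy must fit

@dataclass
class Seam:
    to_operator: deque = field(default_factory=deque)  # agent -> human
    to_agent: deque = field(default_factory=deque)     # human -> agent

    def agent_push(self, task: BriefingTask) -> None:
        """Bookkeeping half: list, signal, and state work ends here."""
        self.to_operator.append(task)

    def operator_submit(self, task: BriefingTask, copy: str) -> None:
        """Craft half: voice-correct copy goes back for orchestration."""
        self.to_agent.append((task, copy))
```

The point of the seam is that neither half reaches past it: the agent never writes, the operator never orchestrates.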
The mistake I made for the first six months was assuming the operator would be the bottleneck. I designed the system as if a salesperson could absorb the output of the agent and stay in flow. What actually happened was that the agent surfaced ten times more in-market accounts than the operator had ever seen, and the operator drowned. The craft side was not the bottleneck. The craft side was the surface that had to expand to meet a pipeline now an order of magnitude wider than what it was used to.
(The lesson generalizes. Whenever a model meaningfully lifts the throughput of one half of a workflow, the other half becomes the new constraint inside a week. The work of operating with an agent is mostly the work of redesigning around the new constraint, not of celebrating the lift.)
The fix was unglamorous. We had to rewrite the briefing layer so a single operator could produce voice-correct copy for a thousand accounts in an afternoon. The agent fragments a single founder's voice into the eight or ten archetypal contexts the pipeline will hit that week — vertical, role, signal type, time-since-last-touch — and the operator writes against those eight or ten templates rather than against each individual account. The reduction in degrees of freedom is what makes the human side viable at scale. The agent does not write better copy than a good operator. It reduces the surface area on which a good operator has to act.
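A sketch of the fragmentation step, using the four dimensions named above. Only the dimensions come from this essay; the bucketing cut points and field names are invented for illustration.

```python
from collections import defaultdict

def touch_bucket(days: int | None) -> str:
    # Invented cut points, for illustration.
    if days is None:
        return "never"
    return "recent" if days <= 30 else "lapsed"

def archetype(account: dict) -> tuple:
    """Key an account by the four context dimensions the operator
    actually writes against: vertical, role, signal, recency."""
    return (
        account["vertical"],
        account["role"],
        account["signal_type"],
        touch_bucket(account.get("days_since_last_touch")),
    )

def fragment(accounts: list[dict]) -> dict[tuple, list[dict]]:
    """Collapse a thousand accounts into the eight or ten contexts
    the pipeline will hit this week."""
    buckets: dict[tuple, list[dict]] = defaultdict(list)
    for acct in accounts:
        buckets[archetype(acct)].append(acct)
    return dict(buckets)
```

The operator writes `len(buckets)` templates instead of `len(accounts)` emails. That reduction is the whole fix.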
The thing I did not expect, but which I am now most proud of, is the signal layer. The original agent was sequence-first: feed it a list, get a sequence. Within a quarter it was clear that the better question was not "what do we send" but "what is this account doing right now that says it is in motion." Funding events. Hiring shifts. Page reads. Job-post deltas. The agent reads the world and the operator reads the agent. The sequence is downstream. Most of the agencies I had benchmarked against were starting from the sequence and working backward. They were optimizing the wrong half of the problem.
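A sketch of what "the agent reads the world" might reduce to: weighted, time-decayed events per account. The signal kinds are the ones named above; the weights and half-life are assumptions of mine, not the system's.

```python
from dataclasses import dataclass
from datetime import datetime, timedelta

# Assumed weights: the essay names the signals, not their values.
WEIGHTS = {
    "funding_event": 3.0,
    "hiring_shift": 2.0,
    "job_post_delta": 1.0,
    "page_read": 0.5,
}

@dataclass
class Signal:
    account: str
    kind: str
    observed_at: datetime

def in_motion(signals: list[Signal], now: datetime,
              half_life: timedelta = timedelta(days=14)) -> float:
    """Score what the account is doing *now*: each signal's weight
    halves every `half_life`, so old events fade out of the model."""
    score = 0.0
    for s in signals:
        age_in_half_lives = (now - s.observed_at) / half_life
        score += WEIGHTS.get(s.kind, 0.0) * 0.5 ** age_in_half_lives
    return score
```

The sequence consumes this score; it does not produce it. That ordering is the difference between signal-first and sequence-first.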
There is a quieter learning, harder to defend on a pitch deck, that I keep coming back to. Doing outreach with an agent makes you read the world differently. You start to see prospects as time series rather than as records. The state of an account on the day you touch it matters more than any static feature of the account. The agent's discipline is to maintain a model of the world that is at most two hours old. The operator's discipline is to act on that model with a voice that does not sound automated, because nothing the agent ships actually is.
What did not work — and the parts of the architecture I have rebuilt twice — was anything that asked the model to make a judgment a clear-eyed human would not yet trust the model to make. The original system let the agent decide which leads to drop. The drop rate looked sensible in aggregate, but the residual was where the largest deals had been. We pulled the model back out of that decision. The operator drops, the agent prepares. A pattern: every place I have given the model autonomy on the discrimination step, the cost has been the long tail. Models in 2026 are still excellent at the modal case and unreliable at the edge. Outbound is a business of edges.
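The boundary we ended with, sketched: the agent may propose a drop and must carry the evidence, and only the operator can commit one. Again, the shapes are hypothetical.

```python
from dataclasses import dataclass
from typing import Callable

@dataclass
class DropProposal:
    account: str
    rationale: str   # the agent's case, e.g. "no signal in 90 days"

def review_drops(proposals: list[DropProposal],
                 operator_decides: Callable[[DropProposal], bool]) -> list[str]:
    """The discrimination step stays human. The model prepares the
    evidence; the long-tail judgment is never delegated to it."""
    return [p.account for p in proposals if operator_decides(p)]
```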
Trevexia is now eighteen months into the version of itself I would actually defend. It is one operator, three engineers, a queue, a writing system, a signal system, and a model that handles the parts of the job a model can be trusted with. The thing I built it to find out was whether a single person with the right pipeline could outperform a ten-person outbound team. The answer turned out to be yes on the modal account and no on the long-tail one, which is the answer most honest tooling stories produce. The work now is the long tail.
The case for writing about it here, on a publication that is otherwise about the discipline of attention, is that the agency is the practical expression of the same question. What part of a workflow rewards the bookkeeping a model is good at, and what part rewards the attention a person is good at, and how do you build a thing that lets each do its job without contaminating the other. That is the underlying problem. The sales agency is a case study in it, not the answer.
Live at trevexia.com.
