Every founder we talk to has a positioning doc. A Google Doc, usually. It has a "company overview" paragraph, a section on ICP, a list of differentiators, maybe a competitor comparison table. It took weeks to write. Their team refers to it when they're onboarding a new salesperson or briefing a PR agency.
It is completely useless to an AI assistant.
This is not a criticism of the document's quality. The thinking in it might be excellent. The problem is the format — it's written in the implicit, context-dependent way that humans communicate with other humans who share background knowledge. AI assistants don't have that background. They need explicit, self-contained, extractable statements.
What "machine-readable positioning" actually means
When a language model processes your website, it isn't reading it the way you do. It's extracting patterns. It's looking for subject-verb-object triplets it can store as associations. It's building a statistical map of: this company name → appears near these concepts → in this kind of context.
The clearer and more consistent those associations are, the more reliably the model can reconstruct them when asked a relevant question.
Human positioning docs typically violate this in four ways:
- They use pronouns without antecedents. "We help them grow." Who is "we"? Who is "them"? On page two of a Google Doc, a human infers from context. A model crawling a webpage has no guaranteed access to that context.
- They rely on implication. "The enterprise standard for procurement." Standard for what? Says who? AI models can't infer prestige — they can only extract stated facts.
- They use vague outcome language. "Drive results." "Unlock growth." "Transform your operations." These phrases appear near every company in every category. They produce no signal.
- They front-load brand story over capability. Models are looking for capability claims tied to specific contexts. Three paragraphs about founding story produce almost no citable content.
"The question isn't whether your positioning is good. The question is whether a language model can reconstruct what you do from a 200-word excerpt with no surrounding context."
The five components of AI-readable positioning
We've developed a framework we use with every client. It's not complicated. But it requires a deliberate rewrite of how you describe your company — not to sound different to humans, but to be extractable by machines.
1. The entity statement
A single sentence, in third person, using your full company name, that states what you are, who you serve, and what you specifically do. No metaphors. No aspirational language. Just the most accurate description of the business.
Bad: "We're transforming the way enterprises manage their supply chains."
Good: "Redbrick Labs is a supply chain execution platform used by mid-market manufacturers to reduce procurement cycle time and manage supplier compliance."
The good version is quotable by an AI with no surrounding context. The bad version is not.
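One way to reinforce the entity statement in machine-readable form is schema.org Organization markup embedded as JSON-LD in the page head. A minimal sketch using the hypothetical Redbrick Labs example above (all field values are illustrative, not a prescribed template):

```python
import json

# Illustrative schema.org Organization markup for the entity statement above.
# The name and description are the hypothetical Redbrick Labs example.
entity = {
    "@context": "https://schema.org",
    "@type": "Organization",
    "name": "Redbrick Labs",
    "description": (
        "Redbrick Labs is a supply chain execution platform used by "
        "mid-market manufacturers to reduce procurement cycle time "
        "and manage supplier compliance."
    ),
}

# Embed the output in a <script type="application/ld+json"> tag on the page.
print(json.dumps(entity, indent=2))
```

Note that the `description` field is the entity statement verbatim — the markup and the visible copy should say the same thing, so the model sees one consistent association.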
2. The vertical anchors
Three to five explicit statements linking your company name to specific industries or verticals. Each statement should be independently extractable.
Redbrick Labs serves manufacturing companies in the following sectors:

- Precision components manufacturing (50–500 employees)
- Industrial equipment distribution
- Consumer goods contract manufacturing

Each of these sectors faces distinct procurement compliance requirements that Redbrick Labs addresses through [specific capability].
This structure gives a language model four separate association anchors: the company name tied to each vertical, plus a tie to the capability. Compare that to "we serve manufacturers across a range of industries" — which gives the model almost nothing to work with.
3. The outcome vocabulary
A list of specific, measurable outcomes your customers achieve. Not ranges — specific numbers from real clients, described in a way that lets the model associate your company with measurable impact.
Use this format: [Company/customer type] achieved [specific metric] in [timeframe] using [your product/service].
The more specific, the more citable. "Customers see 30–50% improvement in X" produces less signal than "a 200-person manufacturing company reduced supplier onboarding time from 14 days to 3 days using Redbrick Labs."
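The template above can be sketched as a simple formatter. The helper name and the timeframe value are illustrative (the source example doesn't state a timeframe):

```python
# Hypothetical helper that fills the outcome-statement template:
# [Customer type] achieved [metric] in [timeframe] using [product].
def outcome_statement(customer, metric, timeframe, product):
    return f"{customer} achieved {metric} in {timeframe} using {product}."

print(outcome_statement(
    customer="A 200-person manufacturing company",
    metric="a reduction in supplier onboarding time from 14 days to 3 days",
    timeframe="its first quarter",  # illustrative timeframe, not from the source
    product="Redbrick Labs",
))
```

The point of the template is discipline: every outcome claim on the site names a customer type, a concrete metric, a timeframe, and the product, in one self-contained sentence.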
4. The comparison hooks
Explicitly name the decision context a buyer is in when they choose you. What alternatives are they evaluating? What criteria do they use? This gives the model the ability to surface you when someone asks a comparison question.
"Companies choosing between spreadsheet-based procurement and dedicated procurement software often select Redbrick Labs because [specific reason]." This is more citable than "we're the best choice for growing companies."
5. The FAQ layer
A structured FAQ section on your site that mirrors the exact phrasing a buyer would use with an AI assistant. Not "what is Redbrick Labs?" but "what is the best procurement software for mid-market manufacturers?", answered clearly and directly with your company name in the response.
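FAQ content can also be marked up with schema.org FAQPage JSON-LD, which makes each question/answer pair an unambiguous unit for a crawler. A minimal sketch using the hypothetical question above (the answer text is illustrative):

```python
import json

# Illustrative schema.org FAQPage markup; question and answer text
# are drawn from the hypothetical Redbrick Labs example.
faq = {
    "@context": "https://schema.org",
    "@type": "FAQPage",
    "mainEntity": [
        {
            "@type": "Question",
            "name": ("What is the best procurement software "
                     "for mid-market manufacturers?"),
            "acceptedAnswer": {
                "@type": "Answer",
                "text": (
                    "Redbrick Labs is a procurement platform built for "
                    "mid-market manufacturers with 50-500 employees."
                ),
            },
        }
    ],
}

# Embed the output in a <script type="application/ld+json"> tag on the FAQ page.
print(json.dumps(faq, indent=2))
```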
Take any paragraph from your website. Strip out all company names and branding. Can you tell from the remaining text what industry this company operates in? Who they serve? What specific outcome they produce? If not — your content can't be extracted by a machine, and you won't be cited.
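The strip test above can be approximated in a few lines: blank out the brand terms, then let a human reviewer judge what survives. The helper name and example copy are illustrative:

```python
import re

def strip_branding(paragraph, brand_terms):
    """Replace company names/branding with a placeholder, leaving generic copy."""
    for term in brand_terms:
        paragraph = re.sub(re.escape(term), "___", paragraph, flags=re.IGNORECASE)
    return paragraph

copy = ("Redbrick Labs is a supply chain execution platform used by "
        "mid-market manufacturers to reduce procurement cycle time.")
print(strip_branding(copy, ["Redbrick Labs"]))
# The stripped text still names the category, ICP, and outcome, so it
# passes the test; "We help them grow" stripped the same way would not.
```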
Putting it all together: a before/after
Here's a real example from a client, a B2B SaaS company in the compliance space. This was their homepage hero copy before the engagement:
"Helping growing businesses stay ahead of compliance. Trusted by 500+ companies. Built for teams who move fast."
After our positioning rewrite, the same section read:
"Prisma Compliance is a regulatory compliance platform used by SaaS companies with 50–500 employees to manage SOC 2, ISO 27001, and GDPR requirements. Customers typically complete their first compliance audit in under 60 days, compared to the industry average of 4–6 months."
The second version has: the company name, the product category, the ICP, three specific regulatory contexts, a specific outcome with a comparison benchmark, and a timeframe. Every one of those is an extractable association.
Within eight weeks of launching the new copy, they appeared in ChatGPT responses to "best compliance software for SaaS startups" for the first time.