Building AI into your SaaS product

April 18, 2026
10 min read

The first four parts of this series were about internal AI. What it does to your engineering team, your productivity, your architecture, your risk posture. This part is about the bigger question.

What does AI mean for the product you’re building and the business you’re running?

AI as a product feature

Customers have gotten smarter. They can tell the difference between an AI feature that does something useful and one that exists because the slide deck needed a bullet point.

If you’re building a SaaS product in 2026, the question is “where does AI create genuine value in our specific domain, and what does it cost to deliver at the margins our business requires?”

Build vs buy vs rent

Hitting a foundation model API is fast, but it ties your cost structure to a provider’s pricing decisions.

Fine tuning on your domain data is slower but gives you control over data and inference costs. However, it means buying infrastructure and hiring ML expertise most SaaS teams don’t have.

The foundation model providers are shipping the wrapper layer themselves. Claude has tool use, computer use, MCP, extended thinking. OpenAI has assistants, function calling, GPTs, custom instructions. The features that startups spent the last two years wrapping are becoming native capabilities of the models.

If your product’s pitch is “we call the API and present it nicely”, the provider is actively building that presentation layer for you. That’s a model destined to fail, and a growing number of AI startups are discovering it the hard way.

API integration works when your value sits above the model. Domain logic, proprietary data, workflow orchestration the provider is not capable of owning. If your value is the integration layer, the provider is going to eat your lunch.

The margin problem

If your product charges per seat and your costs are per token, you’re sitting on a cost structure primed to blow up.

Most customers use AI features lightly. Your highest value customers, the ones with the most data and the most complex workflows, will hit AI features the hardest. Pricing on average usage breaks when applied to power users.

Inference costs don’t scale linearly with users or revenue. Finance needs new models for AI cost forecasting, like tokens spent per user action or cost per workflow completion.
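To make metrics like these concrete, here is a minimal sketch. Everything in it is an invented assumption for illustration: the per-token prices are placeholders, not any provider’s actual rates, and the workflow shape is made up.

```python
# Sketch: per-workflow AI cost metrics, with hypothetical token prices.
# Prices below are illustrative placeholders, not any provider's real rates.

PRICE_PER_1M_INPUT = 3.00    # USD per 1M input tokens (assumed)
PRICE_PER_1M_OUTPUT = 15.00  # USD per 1M output tokens (assumed)

def call_cost(input_tokens: int, output_tokens: int) -> float:
    """Cost of a single model call in USD."""
    return (input_tokens * PRICE_PER_1M_INPUT
            + output_tokens * PRICE_PER_1M_OUTPUT) / 1_000_000

def workflow_cost(calls: list[tuple[int, int]]) -> float:
    """Total cost of one workflow: a sequence of (input, output) token counts."""
    return sum(call_cost(i, o) for i, o in calls)

# One hypothetical "summarise ticket" workflow: three calls, growing context.
calls = [(2_000, 300), (4_500, 500), (8_000, 1_200)]
print(f"cost per workflow completion: ${workflow_cost(calls):.4f}")
```

Tracking cost per workflow completion, rather than cost per user, is what lets finance see the power-user problem before it shows up in the margin.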

It gets worse when you look at the mechanics. As your heaviest users work with more data, their prompts get larger and their context windows grow.

When you move from “user asks, model answers” to agentic workflows where the AI takes multiple steps on its own, costs per user action multiply. Agentic workflows have a fundamentally different cost structure from the chatbots most teams are shipping today.
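A toy calculation shows why the cost structure is different. In a single-turn chat you pay for one prompt and one answer; in an agent loop, every step re-sends the accumulated context, so input tokens grow with each step. The token counts below are invented for illustration.

```python
# Sketch: why agentic workflows cost more than "user asks, model answers".
# Each agent step re-sends the growing context. Numbers are assumptions.

def chat_cost_tokens(prompt: int, answer: int) -> int:
    """Single-turn chatbot: one prompt in, one answer out."""
    return prompt + answer

def agent_cost_tokens(prompt: int, step_output: int, steps: int) -> int:
    """Agent loop: every step re-sends the prompt plus all prior outputs."""
    total = 0
    context = prompt
    for _ in range(steps):
        total += context + step_output  # input + output for this step
        context += step_output          # this step's output joins the context
    return total

print(chat_cost_tokens(2_000, 500))      # one turn
print(agent_cost_tokens(2_000, 500, 8))  # far more than 8x a single turn
```

With these made-up numbers, an 8-step agent run consumes well over ten times the tokens of the single turn, not eight times, because the context compounds.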

And then there’s the mammoth in the room: AI providers are currently pricing below their actual inference costs to gain market share. Anthropic, OpenAI and others are burning through capital to drive adoption. That means the token prices you’re modelling today are artificially low. If your unit economics barely work at current rates, a provider repricing to sustainable margins could break your pricing entirely.
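A simple sensitivity sketch makes the repricing risk concrete. All figures here (seat revenue, other COGS, inference spend) are invented placeholders; the point is the shape of the curve, not the numbers.

```python
# Sketch: gross-margin sensitivity to provider repricing.
# All figures are per customer per month and purely illustrative.

SEAT_REVENUE = 50.00          # what the customer pays (assumed)
OTHER_COGS = 8.00             # hosting, support etc. (assumed)
INFERENCE_COST_TODAY = 12.00  # token spend at current prices (assumed)

def gross_margin(reprice_multiplier: float) -> float:
    """Gross margin as a fraction, after token prices are multiplied."""
    inference = INFERENCE_COST_TODAY * reprice_multiplier
    return (SEAT_REVENUE - OTHER_COGS - inference) / SEAT_REVENUE

for m in (1.0, 1.5, 2.0, 3.0):
    print(f"{m:.1f}x token prices -> {gross_margin(m):.0%} gross margin")
```

With these assumptions, a 3x repricing takes the customer from a healthy margin to barely breaking even. Run the same loop with your own numbers before you commit to a price point.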

When we modelled our AI feature costs per customer, we found that our 10% heaviest users accounted for 80% of inference costs.
Pareto, anyone?
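If you want to run the same check on your own usage data, the measurement is a few lines. The cost list below is made up; plug in per-user inference spend from your own billing or telemetry.

```python
# Sketch: measuring inference-cost concentration across users.
# The `costs` list is invented; substitute your own per-user spend.

def top_decile_share(costs: list[float]) -> float:
    """Fraction of total cost attributable to the top 10% of users."""
    ranked = sorted(costs, reverse=True)
    k = max(1, len(ranked) // 10)
    return sum(ranked[:k]) / sum(ranked)

# 20 hypothetical users: two heavy agentic users, the rest light chat usage.
costs = [400.0, 380.0] + [10.0] * 18
print(f"top 10% of users -> {top_decile_share(costs):.0%} of inference cost")
```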

Differentiation decays faster than you think

AI features have a shorter half-life than most product features. Something that felt genuinely differentiated six months ago is now table stakes every competitor offers.

The lasting competitive advantage is your proprietary training data and deep integration into your customers’ workflows.
A case in point: Anthropic is shipping “SaaS killer” features on a weekly basis.

When we looked at adding AI to cloud cost analysis, we realised the real value was in how we generate models tailored to each customer’s specific utilization.

Agentic AI is the real shift

Most current SaaS AI features are reactive. A user asks, the chatbot responds. Agentic AI is different. It runs workflows and operates with some degree of autonomy.

If your product roadmap still treats AI as a “feature” and not as autonomous workflows embedded across your product, you’re planning for yesterday. In a year, the products that matter will be the ones where AI agents are running meaningful parts of the customer’s business.

An AI that writes a summary can be wrong without catastrophic consequences. An AI that sends emails or moves money needs guardrails, audit trails and human oversight at the right points. That has to be designed from the start.
We design our AI approaches accordingly. Non production critical changes can be applied and remediated by agents. Anything business critical needs additional human scrutiny.

AI across the SaaS business

The dynamics playing out in engineering are playing out in every function. As I covered in Part 4, the companies getting value from AI are the ones using AI across the whole lifecycle.

Sales and revenue

Same pattern as engineering. Salespeople are using AI to reduce admin burden and focus on relationships and strategy. Buyers, in turn, can spot AI generated outreach that’s off tone or factually wrong.

The data quality problem shows up here too. AI powered lead scoring, personalisation and forecasting are only as good as the CRM data feeding them. Incomplete or inconsistent CRM data gets amplified, not fixed, by AI.

Distributed teams as collaboration infrastructure

For SaaS companies with distributed teams, AI’s biggest operational impact may be as collaboration infrastructure rather than just a productivity tool.

AI can capture tribal knowledge that used to live only in co located conversations. It can surface relevant history without needing to ping someone across time zones.
Regional proximity still has a lot of value, but AI lowers the burden of time zone differences.
For SaaS companies scaling globally, that distinction matters. Organisations that design intentionally for distributed work (explicit comms norms, documentation culture, async first processes) will compound AI’s benefits. Those that treat remote work as a second class citizen will find AI amplifies the dysfunction.

HR and people ops

AI is already doing onboarding automation and employee self service. The interesting challenge for HR is one the function didn’t expect. Managing a hybrid workforce that includes AI agents with defined roles, responsibilities and something resembling performance metrics.

AI is flattening organisational structures. The leadership compression is real. Engineers at heart who moved to management roles now have real tailwinds and are active code contributors.
Myself included.

Finance

Finance is seeing real gains in automation, with anomaly detection in expenses and transactions. Well defined, high volume tasks where AI genuinely earns its keep.

The harder conversation is what AI does to your financial model if you have AI features in your SaaS product. Inference costs are variable COGS that behave differently from anything finance teams have modelled before. Usage driven, but not directly correlated with revenue. Exposed to model provider pricing changes outside your control and creating margin risk at the top end of your customer usage distribution.
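A sketch of what monitoring that top-end margin risk can look like. The customer names and figures are invented; the point is flagging accounts where inference spend outgrows the revenue that funds it.

```python
# Sketch: flagging customers whose AI usage erodes margin.
# Records below are invented; in practice they come from billing data.

CUSTOMERS = {
    "acme":    {"revenue": 500.0, "inference": 40.0},
    "globex":  {"revenue": 900.0, "inference": 700.0},  # heavy agent user
    "initech": {"revenue": 300.0, "inference": 15.0},
}

def at_risk(customers: dict, max_cogs_ratio: float = 0.5) -> list[str]:
    """Customers whose inference spend exceeds the allowed share of revenue."""
    return [name for name, c in customers.items()
            if c["inference"] / c["revenue"] > max_cogs_ratio]

print(at_risk(CUSTOMERS))  # ['globex']
```

The 50% threshold is an arbitrary placeholder; the useful part is that the check runs per customer, because averages hide exactly the accounts that matter.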

If you’re building AI into your product, your finance team needs to understand this before you scale. The unit economics conversation needs to happen in product, engineering and finance together. Doing it sequentially doesn’t work. By the time finance catches up, you’ve already priced yourself into a problem.

We have our internal finops practice, and AI unit economics are top of our agenda, in parallel with the legal and compliance ramifications.

When AI actually pays for itself

Both sides of the AI market are leaning on subsidies right now.
Foundation model providers are pricing below their cost of inference to capture share.
Customers are funding their AI spend with productivity stories that work on paper.
Neither of these is sustainable and the adjustment is going to reshape how every leader thinks about AI ROI.

The provider path to viability

Foundation providers get to sustainable margins in a few ways:
Smaller specialised models handling the easy work and larger models reserved for genuinely hard problems, e.g. Opus does the thinking and Sonnet does the, uh, doing.
Better inference infrastructure and cheaper silicon, following the same cost curves we’ve seen play out before: 3dfx, Intel vs AMD, ARM vs Intel.

The customer path to viability

Too many AI business cases I see land with one implicit assumption: if AI makes engineers X% more productive, we can avoid hiring (or even cut) Y people, and the tool pays for itself.
That sums up most of the finance conversations I’ve sat in.
And if you fall into the trap of the initial tailwinds, where AI lowers the barrier for simple scaffolding work but gets much slower and more expensive as complexity rises, the productivity forecast will not hold.

Where I’m seeing AI be genuinely viable is through outcomes the business has already invested in: increased reliability, better performance and reduced technical debt.

Does viability have to come from headcount reductions?

If you apply AI across the whole lifecycle (requirements, testing, operations, support), you will need fewer people doing repetitive work.
In our own setup the work that was once four engineers chasing incidents at night is now one senior reviewing AI generated remediation PRs in the morning. The other three engineers stayed on the team though. They moved to building the machine that operates the AI orchestration, rather than being the orchestration.

When cost avoidance is the only driver, the first foundation provider price change puts the ROI model on trial, and it rarely survives.

We run our AI finops reviews alongside our legal and compliance reviews.
Inference cost per workflow and sensitivity to provider repricing all live on the same dashboards. Headcount reductions and infra savings sit alongside those numbers rather than carrying the business case on their own.

The organisations that come out of this cycle ahead will be the ones whose AI economics work when the subsidies end and whose AI narrative works when the “we fired the team” story no longer ages well.

The SaaSpocalypse question

Everything above assumes AI changes how SaaS products work. But does AI make SaaS products unnecessary entirely?

If an AI agent can process your data, run your workflows, generate your reports and talk to your customers, what is the SaaS product actually providing?

SaaS products exist because they package domain logic into software that’s easier to use than building it yourself. Foundation models increasingly reason about domain logic from context alone. Give an agent access to a company’s data and a clear set of instructions, and it can approximate a lot of what many SaaS products do.

The value of the packaged software decreases as the capability of the general purpose agent increases.

Products with deep regulatory compliance, ecosystem integrations and reliability at enterprise scale are genuinely hard to replace with an agent. The products most at risk are the ones that are essentially CRUD apps with a nice UI or simple foundation model wrappers.

The SaaSpocalypse framing is useful because it forces the right question about your product: “What does our software do that an AI agent with access to the same data cannot?” It is a challenge every SaaS provider is facing, so getting ahead of it will be pivotal in determining who survives and who doesn’t.

What I actually think is going to happen

Nobody knows what things will look like in 6/12/18 months from now.

What’s visible right now is the foundation model providers are absorbing all the GPT wrapper businesses.

Inference costs are being kept artificially low by the frontier model providers; once they change their pricing models, there will be a ripple effect.

AI is an amplifier across your entire business. Make sure what you’re amplifying is worth amplifying.


I’m still working this one out in real time. If you’re a SaaS leader wrestling with the same questions, I’d genuinely like to hear how you’re approaching it. What’s moving the needle, and what you’ve tried that didn’t.