AI-Built Integrations Don’t Scale: Why You Need Integration Infrastructure (2026)

Ampersand Blog: Writings from the founding team

CRM
20 min read
May 6, 2026

AI Coding Agents Built Your First CRM Integration in a Weekend. Here's Why That Won't Scale

Why AI coding agents make the first integration easy, but scaling multi-tenant, enterprise-grade integrations still requires real infrastructure


Chris Lopez

Founding GTM


There is a sentence we hear from engineering leaders almost every week now: "Our CTO built that integration in a couple of days using Claude." Sometimes it is Cursor. Sometimes it is the in-house agent the team built on top of an open-source coding model. The integration in question is usually a CRM, sometimes Attio or HubSpot, sometimes a niche vertical CRM that the customer has been asking about for months. The story arrives with a kind of triumphant shrug, like the question of integrations has been quietly solved by tooling.

It has not. What has been solved is the cost of writing the first version of one connector. Everything else that makes an integration enterprise-grade lives outside that weekend project: the part that determines whether your customers stay or churn, whether your product gets disqualified at procurement, and whether your engineers spend Q3 chasing OAuth refresh bugs instead of shipping features. This post is about the difference between a working integration and an integration platform, and why teams that build their first connector with an AI agent still end up needing real integration infrastructure to scale.

The theme is not new. We have been writing about why building integrations in-house breaks down at scale since long before AI coding agents got good. But the cost curve has shifted, and it is worth being precise about what that means and what it does not.

The new economics of the first integration

A few years ago, asking an engineering team to build a Salesforce or HubSpot connector from scratch was a one-quarter project at minimum. You needed to study the API, design your data model, plan auth, build the OAuth dance, set up token refresh, work out paginated reads, decide between webhooks and polling, define your retry policy, store credentials safely, and then build a UI for your customer to install the thing. None of that was free, and most teams underestimated how long any single piece of it would take.

Modern coding agents collapse a meaningful portion of that work. A senior engineer pairing with Claude or Cursor can sketch a working OAuth flow against the major CRM APIs in a few hours, get pagination and basic CRUD wired up the same afternoon, and have a demo running by end of day. We have seen teams stand up their first Attio connector or first Pipedrive connector in two or three days of focused work, with most of the keyboard time spent reviewing the agent's output rather than typing.

This is real and it changes how engineering leaders think about integrations. It is no longer obviously irrational to say "we will just build that one ourselves." The objection you used to make, that the engineering cost was too high, has lost much of its force. Some of the math has shifted.

What has not shifted is the cost of running an integration in production for thousands of customers across many tenants for many years. That is where AI-generated first drafts run out of road, and where the question of native product integrations versus DIY integrations becomes a different question entirely.

The deal-blocking problem nobody is measuring

Before going deeper into the technical scaling story, there is a commercial truth worth naming. Engineering leaders we have advised often discover, only after the fact, that integrations have been silently disqualifying them from deals they never knew existed.

The pattern goes like this. A buyer evaluating two SaaS products checks a feature comparison page and a list of supported integrations. If product A integrates with the buyer's system of record (NetSuite, Salesforce, HubSpot, Microsoft Dynamics, SAP, Sage, whatever) and product B does not, product B is often eliminated before any conversation happens. The product team at B never sees the lost opportunity. There is no inbound. There is no rejection email. The deal simply does not appear.

We have watched companies look at their pipeline and say "we are not losing deals over the missing NetSuite integration" because they cannot see what they cannot see. When they finally ship that connector and start showing up in evaluations, suddenly NetSuite-using prospects are inbound. The connector is not a feature. It is a precondition for participation in those deals.

This is the part that "I will build it when we need it" gets wrong. By the time the integration becomes a clear blocker in your pipeline, you are already months behind buyers who have it. And building reactively means building under deadline pressure, which is exactly when the shortcuts that cost you the next two years of engineering time get baked in. There is a whole category of deal you never compete in until your integration matrix is wide enough to make you visible. That is a strategic problem, not an engineering one.

The corollary is also true: native product integrations create defensible distribution. When your product is the one that lights up cleanly in the buyer's existing system of record, you stop competing on feature parity and start competing on workflow lock-in. We have seen teams shift from a flat feature contest to a clear win when they finally added the right vertical-specific integrations to their stack.

What the AI weekend project leaves behind

Now to the technical scaling story. Imagine you have shipped that first AI-assisted Attio connector. It works for three pilot customers. Your CTO has signed off. Champagne. Here is what is waiting for you in the next twelve months.

Auth that does not stay solved

OAuth tokens expire. Refresh tokens get revoked when users change passwords, leave their company, or trigger a security policy in their CRM admin. Some providers rotate refresh tokens on every refresh, which means a lost refresh requires re-auth. Some providers expire tokens on a schedule that varies by enterprise tier. Salesforce's session timeout, HubSpot's app scopes, NetSuite's token-based authentication, Microsoft Dynamics 365's certificate auth, each has its own quirks that you will only discover when something breaks at 2am for a customer in another timezone.

A weekend project handles auth for one customer in one happy-path scenario. Production handles auth for thousands of customers across hundreds of edge cases, with a credentials store that must be encrypted, audited, rotated, and monitored. As we have argued before, auth and token management isn't an integration. It is an entire infrastructure problem that sits underneath every integration. Solving it once for one CRM is not the same as solving it consistently across every system you support.
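The rotation quirk is the one that bites most often in practice. Here is a minimal sketch of the refresh path in Python, with a hypothetical `refresh_fn` standing in for the provider's token endpoint; the names and store are illustrative, not any particular platform's API. The key discipline is persisting the rotated refresh token before the old one is discarded, because providers that rotate invalidate the old token on use.

```python
import time
from dataclasses import dataclass

@dataclass
class TokenRecord:
    access_token: str
    refresh_token: str
    expires_at: float  # unix timestamp

class TokenStore:
    """Per-tenant credential store. In production this must be
    encrypted at rest, access-controlled, and audited."""
    def __init__(self):
        self._records = {}

    def get(self, tenant_id):
        return self._records[tenant_id]

    def put(self, tenant_id, record):
        # Persist the NEW refresh token before the old one is dropped:
        # losing a rotated refresh token forces the customer to re-auth.
        self._records[tenant_id] = record

def get_valid_access_token(store, tenant_id, refresh_fn, skew=60):
    """Return a usable access token, refreshing proactively if the
    current one expires within `skew` seconds."""
    rec = store.get(tenant_id)
    if rec.expires_at - time.time() > skew:
        return rec.access_token
    # refresh_fn calls the provider's token endpoint; it may return a
    # rotated refresh token alongside the new access token.
    new_access, new_refresh, expires_in = refresh_fn(rec.refresh_token)
    store.put(tenant_id, TokenRecord(
        access_token=new_access,
        refresh_token=new_refresh or rec.refresh_token,
        expires_at=time.time() + expires_in,
    ))
    return new_access
```

Even this sketch glosses over the hard parts: concurrent refreshes racing each other, detecting revocation versus transient failure, and alerting a human when re-auth is the only way out.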

Multi-tenancy at the integration layer

The first version your AI agent writes almost always assumes a single tenant. Variables get hardcoded. Test credentials are checked in. Customer-scoping happens implicitly because there is only one customer. The moment you have ten customers, you need a layer that takes "customer A's HubSpot connection" and routes the right token, the right rate limits, the right field mappings, and the right error handling per tenant.

This is one of the hardest problems in integration engineering. We covered the architecture in detail in building multi-tenant CRM integrations at scale, but the short version is: every assumption you made when there was one customer breaks at ten. Connection storage breaks. Rate limiting breaks. Backfill jobs starve each other. Error logs become unreadable. And no AI coding agent is going to flag this for you, because the prompt was "build a Salesforce connector," not "design a multi-tenant integration platform."
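The missing tenant-scoping layer can be sketched as a registry that resolves everything per tenant before a single API call is made. The names here (`TenantConnection`, `ConnectionRegistry`) are illustrative, not a real API; the point is that token, quota, and mappings travel together, keyed by tenant.

```python
from dataclasses import dataclass, field

@dataclass
class TenantConnection:
    """Everything that must be resolved per tenant before an API call.
    A single-tenant build hardcodes all of this."""
    tenant_id: str
    provider: str            # e.g. "hubspot"
    access_token: str
    requests_per_10s: int    # per-tenant quota, not a global constant
    field_mappings: dict = field(default_factory=dict)

class ConnectionRegistry:
    def __init__(self):
        self._by_tenant = {}

    def register(self, conn: TenantConnection):
        self._by_tenant[(conn.tenant_id, conn.provider)] = conn

    def resolve(self, tenant_id: str, provider: str) -> TenantConnection:
        # Every sync job, webhook handler, and write path goes through
        # this lookup, so nothing leaks across tenants.
        return self._by_tenant[(tenant_id, provider)]
```

In a real system the registry is backed by an encrypted store and the quota numbers come from the provider's plan tier per customer, but the shape is the same: no code path touches a provider without resolving the tenant context first.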

Schema drift and field mapping

Every CRM lets your customers create custom fields, custom objects, and custom workflows. The HubSpot Deals object that one customer cares about has nine custom fields. The next customer has thirty-two. The next customer has renamed Deal Stage to Pipeline Phase and added a required custom property called Account Tier. Your integration cannot ship as a hardcoded schema. It needs to discover the customer's actual schema at install time, let the customer (or your team) map their fields to your data model, and update those mappings when the customer adds new fields.

This is the field mapping problem, and it is one of the deepest integration problems in the industry. We wrote about it in the context of AI agents specifically in field mapping is how AI agents learn enterprise reality, but the same logic applies to any product that reads from or writes to a customer's CRM. The first integration your AI agent writes almost certainly has a fixed schema. Production needs dynamic field mapping per tenant.
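A minimal sketch of per-tenant mapping at read time, with invented field names for illustration. The mapping dict itself would come from schema discovery at install time plus the customer's choices, not from code:

```python
def apply_field_mapping(raw_record: dict, mapping: dict) -> dict:
    """Translate a provider record into the product's data model using
    a per-tenant mapping discovered at install time. Unmapped custom
    fields are preserved under 'custom' rather than silently dropped."""
    mapped = {}
    custom = {}
    for provider_field, value in raw_record.items():
        internal = mapping.get(provider_field)
        if internal is not None:
            mapped[internal] = value
        else:
            custom[provider_field] = value
    if custom:
        mapped["custom"] = custom
    return mapped

# Hypothetical tenant who renamed Deal Stage to Pipeline Phase:
tenant_mapping = {"pipeline_phase": "deal_stage", "account_tier": "tier"}
record = {"pipeline_phase": "negotiation", "account_tier": "enterprise",
          "region": "EMEA"}
# apply_field_mapping(record, tenant_mapping)
# → {"deal_stage": "negotiation", "tier": "enterprise",
#    "custom": {"region": "EMEA"}}
```

The hard parts are everything around this function: discovering the schema, building the mapping UI, and detecting when a customer adds or renames a field after install.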

Rate limits, backfills, and write traffic

Once you have real customers, you have to deal with how many API calls you are making, when, and on whose behalf. Salesforce will rate-limit you per org. HubSpot has separate quotas for read and write. NetSuite throttles aggressively. Backfilling a new customer who has eight years of CRM history can take days and can saturate quota, breaking real-time sync for that customer until the backfill is done. Bulk writes need to be batched correctly to avoid 429s, and they need to retry with exponential backoff that respects the provider's Retry-After header.

The weekend project does not deal with any of this. Production cannot ignore it. Sometimes it gets buried under the headline that "the integration works." It works for one customer with a small dataset.
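The retry discipline described above fits in a few lines once you name it. This is a sketch, not a library: `call` stands in for any provider request and is assumed to return a status code, headers, and body.

```python
import random
import time

def send_with_backoff(call, max_attempts=5, base_delay=1.0):
    """Retry a provider API call on 429s, honoring Retry-After when the
    provider sends it and falling back to exponential backoff with
    jitter. `call` returns (status_code, headers, body)."""
    for attempt in range(max_attempts):
        status, headers, body = call()
        if status != 429:
            return status, body
        retry_after = headers.get("Retry-After")
        if retry_after is not None:
            # Respect the provider's explicit wait instruction.
            delay = float(retry_after)
        else:
            # Exponential backoff plus jitter to avoid thundering herds.
            delay = base_delay * (2 ** attempt) + random.uniform(0, base_delay)
        time.sleep(delay)
    raise RuntimeError(f"rate limited after {max_attempts} attempts")
```

What the sketch omits is the multi-tenant part: backoff state has to be tracked per tenant and per quota bucket, and a backfill job must yield quota to real-time sync for the same customer rather than starve it.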

The gap between sync and search

Real-time sync sounds like the right answer until you actually try it. Subscribing to the Salesforce Streaming API or Gmail Pub/Sub gives you a firehose. Translating that firehose into your product's data model, deduplicating events, handling out-of-order delivery, and recovering from extended outages without dropping events is a genuinely hard distributed systems problem. Most weekend projects pick polling instead, which is easier to reason about but introduces a different set of issues: every 10 minutes of polling latency is 10 minutes of stale data, and polling thousands of tenants at the same interval carries serious infrastructure cost.
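A minimal sketch of idempotent event application, assuming the provider exposes a per-record version or modified timestamp (an assumption that does not hold for every provider). Duplicates are dropped by event id; stale out-of-order updates are dropped by version:

```python
class EventApplier:
    """Apply CRM change events idempotently. Deduplicates by event id
    and ignores out-of-order updates using a per-record version."""
    def __init__(self):
        self._seen_ids = set()
        self._versions = {}   # record_id -> last applied version
        self.state = {}       # record_id -> latest payload

    def apply(self, event: dict) -> bool:
        eid = event["event_id"]
        rid = event["record_id"]
        ver = event["version"]
        if eid in self._seen_ids:
            return False  # duplicate delivery (at-least-once semantics)
        self._seen_ids.add(eid)
        if ver <= self._versions.get(rid, -1):
            return False  # stale out-of-order update
        self._versions[rid] = ver
        self.state[rid] = event["payload"]
        return True
```

In production the seen-id set and version map live in durable storage with expiry, and outage recovery means reconciling against a full read of the provider, not replaying a queue you may have lost.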

We have a longer post on the patterns here in how AI agents break every integration pattern that worked for traditional SaaS, and the through-line is: the data flow problem looks easy when you only have one customer, and it gets exponentially harder as you scale.

The build-vs-buy math is no longer about the first connector

The traditional build-versus-buy debate around integration platforms was about the first connector. AI coding agents have made the first connector cheap. The math now is about everything else.

| Concern | AI-assisted in-house build | Native Product Integrations Platform |
| --- | --- | --- |
| Time to first working connector | 2 to 5 days | 1 to 3 days |
| Time to second connector | 2 to 5 days each | Often hours, with shared infra |
| Multi-tenant credential storage | Build it, audit it, rotate keys | Managed |
| OAuth refresh and re-auth flows | Build per provider | Managed across providers |
| Field mapping UI for customers | Custom build (weeks) | Built in |
| Rate limit handling and Retry-After | Per-provider logic | Centralized |
| Bulk write batching and backfill | Custom orchestration | Managed |
| Logs, alerting, error dashboards | Build with your observability stack | Built in |
| Adding a new system of record | Repeat the process | Add a YAML config |
| Replatform when a provider changes | Engineering project | Patch the platform |
| Engineering cost in year 2 | High and growing | Low and flat |

The interesting numbers are not in the first row. They are in the last row. Year-two integration cost is where the build-it-yourself path quietly compounds. Every connector you ship is another set of tokens to rotate, another set of webhooks to debug, another set of edge cases waiting to surface during your customer's quarterly close. By the time you have eight or twelve integrations live, the maintenance cost dominates whatever you saved by building the first one yourself.

Why integration infrastructure is the actual unit of work

The right framing is not "do we build the connector or buy it." It is "what is our integration infrastructure, and where does it live." Integration infrastructure is the layer that holds credentials, manages refresh, schedules reads, batches writes, surfaces errors, and routes data per tenant. Every connector you ship sits on top of that infrastructure. The infrastructure is what gets reused. The connector itself, especially in the AI-coding-agent era, is the easy part.

We have a primer on this idea in what is integration infrastructure, and the core argument is that once you separate the infrastructure from the per-provider logic, you can ship integrations at the speed of YAML rather than at the speed of new engineering projects. That is the only model that scales as you add ten, twenty, fifty integrations to your matrix.

This is also what makes the comparison with embedded iPaaS misleading. Embedded iPaaS products like Paragon, Merge, Workato Embedded, and Prismatic try to abstract integrations into low-code workflow builders. The promise is "ship integrations without engineers." The reality is that anything sufficiently deep (custom objects, dynamic field mappings, real-time sync, write-through orchestration) ends up requiring engineering work anyway, and the workflow builder gets in the way more than it helps. We covered this in why migrating from embedded iPaaS to Native Product Integrations reduces engineering overhead, and the pattern we see again and again is teams that started on an embedded iPaaS hitting a ceiling around customer count or use-case complexity, then migrating to integration-as-code on a native product integrations platform.

The right comparison for AI-coding-agent-built integrations is not "AI-built versus embedded iPaaS" or "AI-built versus integration platform." It is "AI-built first connector versus the next eighteen months of operating that connector in production for hundreds of customers." That is the part where leverage matters.

How Ampersand approaches this

Ampersand is a deep integration platform for product developers. The thesis is that integrations should be defined as code, version-controlled, deployed through CI/CD, and run on managed infrastructure that you do not maintain. You write a YAML file that describes the integration: which provider, which objects, which fields, which sync strategy, which write actions. Ampersand handles the rest, including managed authentication with automatic token refresh, multi-tenant credential storage, scheduled reads with backfill, bulk write optimization, on-demand read and write API endpoints, custom objects and dynamic field mapping, and dashboards with logs, alerting, error handling, and quota management.
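As a rough illustration of what integration-as-code looks like, here is a hypothetical config fragment. The key names below are invented for this sketch and are not Ampersand's actual schema; the real data model is documented in the Ampersand documentation.

```yaml
# Illustrative shape only: key names are hypothetical, not
# Ampersand's actual schema.
provider: hubspot
read:
  objects:
    - objectName: contact
      schedule: "*/10 * * * *"   # scheduled reads, with backfill on install
      requiredFields:
        - fieldName: email
        - fieldName: lifecyclestage
      optionalFieldsAuto: all    # let each customer map their custom fields
write:
  objects:
    - objectName: deal           # exposes an on-demand write endpoint
```

The payoff of this shape is that the config is version-controlled and reviewable like any other code, while the auth, scheduling, batching, and error handling underneath it are shared across every connector.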

We support hundreds of systems of record, including NetSuite, SAP, Sage, Salesforce, HubSpot, Marketo, Microsoft Dynamics 365, Zendesk, Gong, and many more via open-source connectors. The platform is GDPR compliant and ISO certified, which matters as soon as you start selling into enterprise.

The customer pattern we see most often is a team that has either built one or two integrations in-house already (sometimes with AI-coding-agent help, increasingly so) and is starting to feel the year-two maintenance cost, or a team about to build their third integration who has decided they do not want to keep paying that cost in perpetuity. They keep their existing AI-assisted first connector for as long as they want, and they ship the rest on Ampersand. The leverage point is not the first connector, it is the third, fifth, and tenth.

The Ampersand customer base includes teams like 11x, where Muizz Matemilola from engineering put it plainly: "Using Ampersand, we cut our AI phone agent's response time from 60 seconds to 5." That kind of result comes from offloading the integration infrastructure problem entirely, not from re-solving it for each new connector. John Pena, CTO at Hatch (a Yelp company), summarized the maintenance side similarly: "Ampersand lets our team focus on building product instead of maintaining integrations. We went from months of maintenance headaches to just not thinking about it." The thread between those two quotes is the same: the engineering team gets back the time it would have spent operating integrations, and that time turns into product velocity.

You can read more about how this works in practice on the how it works page, and the Ampersand documentation walks through the YAML data model, the auth patterns, and the deployment flow in detail. If you want to talk through an architecture decision with one of our engineers, the main Ampersand site has the team's contact information.

The Ampersand sell

Here is the direct version of the argument. AI coding agents have made the first integration cheaper to build. They have not made the next eighteen months of operating that integration cheaper. The work that determines whether your integrations scale is still there, and it is still the most expensive part: multi-tenancy, auth refresh, field mapping, rate limits, backfills, error handling, dashboards, alerting, and replatforming when providers break their API.

Ampersand is the integration infrastructure layer that holds all of that for you. You write a YAML file. We run the integration. Your team focuses on the product, not on the connector. When a provider changes their API, we patch it. When a customer needs a new field mapping, you change a config. When you want to add NetSuite or SAP or Microsoft Dynamics, you add another config rather than another engineering project.

If your product needs deep, bi-directional, enterprise-grade integrations and you want them shipped fast and maintained forever, that is the offer. The platform is enterprise-grade from day one, the pricing is friendly, and the support is high-touch. If you want to go deeper on the architecture, the Ampersand documentation is the right next step. If you want a sense of the customer-facing surface, the how it works page walks through the developer experience end to end.

FAQ

Will AI coding agents eventually solve the entire integration problem?

They will keep getting better at writing connector code, including handling auth flows, pagination, and basic transformations. What they will not do, at least not in any near-term horizon, is solve the operational and infrastructure layer underneath integrations: multi-tenant credential storage, organization-level rate limit awareness, dynamic field mapping at runtime, backfill orchestration, error budgets, schema drift detection, and the dashboards your support team needs at 2am. Those are platform problems, not code-generation problems. Even if your AI agent writes perfect first-draft code, it will not run that code at scale across thousands of tenants on infrastructure it does not own. That is what integration infrastructure is for.

When does it actually make sense to build an integration in-house with AI assistance?

Build in-house when the integration is genuinely a one-off, unlikely to need updates, sits in a domain where you have deep expertise, and is for a small number of tenants. Build on a platform when the integration will need to evolve, when you have many tenants, when the system of record is one your customers care about commercially, and when you cannot afford to be on call for it forever. Most product teams overestimate the first category and underestimate the second.

How is this different from embedded iPaaS?

Embedded iPaaS products are designed to abstract integrations into low-code workflow builders aimed at non-engineers. They optimize for breadth of pre-built connectors and low-effort UI. Native product integrations platforms like Ampersand are designed for engineers and product teams who want integrations as code, version-controlled, deployed through CI/CD, with deep API access and custom data models. The former is cheaper for shallow workflows. The latter is the only viable path for integrations that touch custom objects, dynamic schemas, or real-time bi-directional sync. The deeper your product needs to go into the customer's system of record, the more the embedded iPaaS abstraction starts costing you, and the more you end up writing custom code to work around it.

What about deal disqualification? How do I tell which integrations are deal-blockers?

The most reliable signal is your sales team's notes on lost deals and disqualified prospects. Look for "no NetSuite integration," "we use HubSpot, are you compatible," and similar phrases. Look at what your direct competitors integrate with. Look at the systems of record dominant in your ICP's industry. Then look at the integrations missing from your matrix. The gap between those two lists is your deal-blocker list. The harder gap to see is the deals you never get into because the buyer eliminates you before any conversation happens. The fix for that is widening your integration matrix proactively, especially around the systems of record that show up in procurement evaluations.

Does an integration platform make my product more vendor-locked?

The opposite, in practice. The lock-in risk in integrations comes from custom code that nobody on your team remembers how to maintain, hardcoded auth flows, and credentials stored in places that were appropriate three years ago and are not now. A well-designed integration platform with a YAML configuration model gives you a portable, version-controlled definition of every integration you run. If you ever migrate, you migrate the configurations, not the underlying plumbing.

How do native product integrations affect the AI agent strategy on top of my product?

If you are building AI agents, especially ones that act on customer data in CRMs or ERPs, your agent's effectiveness is bounded by the depth of your integration. Shallow integrations give the agent a partial picture. Deep, bi-directional integrations with custom objects and dynamic field mapping give the agent the full enterprise reality of the customer's data. We covered this in why your AI agent's memory is only as good as your field mapping strategy, and the principle is simple: the agent cannot reason about data it cannot see. Native product integrations are the substrate the agent runs on.

Conclusion

AI coding agents have rewritten the cost of the first integration. That is real, and worth celebrating. They have not rewritten the cost of running integrations at scale, which is where most of the engineering pain lives, and where teams that scale integrations year over year either invest in real infrastructure or pay a maintenance tax that compounds.

The right question for product and engineering leaders is not "can we build this connector with Claude or Cursor." The answer is yes, increasingly. The right question is "do we want to be in the business of operating integration infrastructure for the next five years, or do we want to ship integrations as code on infrastructure someone else maintains so our team can focus on the product." That second answer is the one Ampersand exists for. If you want to learn more, the Ampersand site is the place to start, and the documentation is the fastest path from "I am thinking about this" to "I have a YAML config running in production with multiple enterprise customers."

The first connector is now the easy part. The next eighty are the part that decides whether your product gets to grow.
