
Replatforming a Customer-Facing Salesforce Integration: OAuth Token Migration, Field Mapping, and Backfill Patterns That Don't Break Your Users
A step-by-step guide to replatforming Salesforce integrations without breaking OAuth, field mappings, or customer data sync

Chris Lopez
Founding GTM
Every week, we meet another engineering leader whose team is staring down the same problem: a customer-facing Salesforce integration that worked at 20 customers, broke at 200, and is now blocking the roadmap at 500. The integration was probably written two or three years ago by an engineer who has since moved on. It bundles OAuth, webhook handling, field mapping, backfills, and sync retries into a single service that nobody wants to touch. And now, with a growing enterprise pipeline, the team has to replatform it: quickly, quietly, and without asking a single existing customer to click "Reauthorize" in Salesforce.
This is the replatforming problem. It doesn't get written about much, because most integration content treats integrations as greenfield builds. They almost never are. Teams inherit a working-but-fragile CRM integration and then have to migrate off it while it's still in production, still taking writes, and still tied to hundreds of customer-specific field mappings. In our work with dozens of B2B SaaS engineering teams over the past year, we've seen a repeatable pattern for how to do this cleanly, and a longer list of ways teams get it wrong. This post is the playbook we now hand to any product builder shipping Native Product Integrations into Salesforce and replacing something that already exists.
Why replatforming a customer-facing Salesforce integration is harder than building one
A greenfield Salesforce integration has three hard things: OAuth, schema mapping, and sync. A replatform has those three plus four more: preserving the existing OAuth app identity, importing existing tokens, migrating each customer's field mappings, and handling the fact that some customers are on a nightly cron and some are on a real-time subscribe pattern, often without documentation of which is which.
The engineering leaders we work with typically describe the problem in roughly this order. First, they discover their legacy integration stored OAuth tokens in a way that is uniquely painful to migrate (encrypted in a Postgres column with a now-rotated KEK, or pinned to an internal identity model that doesn't cleanly map to the new system). Second, they realize that "field mapping" isn't one concept in their codebase: it's five. Some mappings are hardcoded, some are in a YAML file, some are in a database table, some are in a customer admin UI, and some only exist as tribal knowledge in the customer success team's heads. Third, they find out that their Salesforce connected app (the one OAuth tokens are issued against) is registered with a client ID that every existing customer has already authorized. If they move to a new connected app, every customer has to reauthorize. If they don't, the new system needs to present itself as the old one to Salesforce. Fourth, they discover that backfills are non-trivial: replaying the full history of every customer on day one will melt the API quota, but not replaying enough will leave data gaps.
The stakes on this are high and usually under-appreciated. A forced re-authentication flow on a B2B SaaS integration is not a login page; it's a customer-facing ticket. Each reauth becomes an email to a Salesforce admin who may not be the same person who originally connected the integration, which becomes a Slack thread with the CS team, which becomes a week-long delay. If you have a few hundred customers connected, replatforming with forced reauth is a quarter-long project that looks to your customers like an outage.
The hidden architecture of a customer-facing Salesforce integration
Before we get into how to replatform, it's worth laying out what a mature customer-facing Salesforce integration actually contains. Most teams underestimate this because the naive version looks simple: OAuth to Salesforce, read some objects, write some objects. Done.
The real architecture has at least a dozen moving parts:

- The Salesforce External Client App (or legacy connected app) that holds the client ID and client secret, registered in a Salesforce Dev Hub.
- The OAuth authorization code flow your product initiates when a customer connects.
- The refresh token loop that runs whenever an access token expires (usually every two hours), plus the retry/backoff logic for when Salesforce rate-limits the refresh endpoint itself.
- The token storage model, which must encrypt at rest, support rotation, and be queryable by your sync workers without leaking tokens into logs.
- The object and field catalog, pulled from Salesforce's Metadata API or Tooling API, which has to be refreshed periodically because customers add custom fields.
- The field mapping store, which captures per-customer decisions about which Salesforce fields map to which of your product's data model fields.
- The sync engine (typically a combination of scheduled polls for reads, subscribe/CDC streams for real-time, and API calls out for writes) with idempotency keys, dedup logic, and a retry queue.
- The webhook delivery layer that pushes Salesforce changes into your product.
- A dashboard that lets customers see the status of their integration, with logs, error surfaces, and re-run buttons.
- The alerting layer that tells your on-call engineer when a customer's refresh token has been revoked.
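The refresh loop's backoff behavior is worth making concrete. Here is a minimal sketch of exponential backoff with jitter around a token refresh call; `fake_refresh` and the retry parameters are invented for the example, not any platform's actual implementation:

```python
import random
import time

def refresh_with_backoff(refresh_fn, max_attempts=5, base_delay=1.0):
    """Call refresh_fn until it succeeds, backing off exponentially with
    full jitter when the token endpoint rate-limits or errors out."""
    for attempt in range(max_attempts):
        try:
            return refresh_fn()
        except RuntimeError:  # stand-in for a 429/5xx from the token endpoint
            if attempt == max_attempts - 1:
                raise
            # Full jitter avoids synchronized retry storms across sync workers
            delay = random.uniform(0, base_delay * (2 ** attempt))
            time.sleep(delay)

# Simulated refresh that fails twice, then succeeds
calls = {"n": 0}
def fake_refresh():
    calls["n"] += 1
    if calls["n"] < 3:
        raise RuntimeError("429 Too Many Requests")
    return {"access_token": "00Dxx-example", "expires_in": 7200}

token = refresh_with_backoff(fake_refresh, base_delay=0.01)
```

The jitter detail matters in multi-tenant sync fleets: without it, hundreds of workers whose tokens expired at the same moment retry in lockstep.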
Every one of these pieces exists whether you built it or bought it. When we talk about why building integrations in-house breaks down at scale, this is what we mean. A team can own one or two of these components well. Owning all twelve for one system of record, and then doing it again for HubSpot, Dynamics, NetSuite, and Marketo, is a full-time integration platform team, which is not what most B2B SaaS companies want to build.
The first rule of replatforming: reuse the existing OAuth app
The single most important decision in a Salesforce integration replatform is what to do about the OAuth app. Get this wrong and every customer reauthorizes. Get it right and the migration is invisible.
The principle is simple: the OAuth app identity is what the customer authorized, not your infrastructure. When a customer clicked "Allow" on the Salesforce consent screen three years ago, they authorized a specific client ID (the client ID of your connected app) to access their Salesforce org. That grant is durable. The refresh token issued against that grant is still valid, provided the app hasn't been deleted and the token hasn't been revoked. What this means operationally is: if you can take the existing connected app's client ID and client secret and plug them into your new integration infrastructure, every customer's existing refresh token continues to work. No reauthorization needed. You just need an endpoint on the new system that accepts "here are my existing tokens, register them against this customer" and a provider app configuration that matches the original client credentials.
We see teams get this wrong in a couple of ways. The most common is assuming a new integration platform requires a new connected app. It doesn't; any serious Native Product Integrations platform should let you bring your own Salesforce External Client App. The second is creating a new connected app for "clean separation" and then getting stuck with a migration that requires customer action. The third is not realizing that Salesforce distinguishes between legacy connected apps and External Client Apps, and that the migration path between those app types has its own quirks. Modern Salesforce tenants are being pushed toward External Client Apps with second-generation managed packages and namespace registries, which has caught more than one engineering team off guard when their legacy connected app stops behaving as expected after a Salesforce org migration.
The migration flow we recommend looks like this:

1. Identify whether your existing app is a legacy connected app or an External Client App, and whether your organization owns it. (If it's owned by a third-party integration vendor, you have a harder problem: you'll need to negotiate token export or accept a reauth flow.)
2. Register that app's client ID and client secret as the provider app inside your new integration platform.
3. Import the existing tokens in bulk using the platform's connection import endpoint, tagging each connection with the customer ID it belongs to.
4. Import the existing field mappings and installation configuration, so customer-specific decisions about which fields to sync and what to rename them to are preserved.
5. Gate the cutover behind a feature flag so you can migrate customers cohort by cohort.

When a customer is flipped over, the old integration stops polling and the new one picks up from a known watermark. No reauthorization, no customer-visible change.
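The bulk token import step can be sketched as a small script. The endpoint name (POST /connection) is the one referenced later in this post, but the payload shape below is an illustrative assumption, not a documented API contract; check your platform's docs for the real field names:

```python
def build_connection_import(customer_id, provider, refresh_token, instance_url):
    """Build one connection-import payload, tagging the imported token with
    the customer it belongs to so the new system can route syncs without
    any customer-facing reauthorization."""
    return {
        "groupRef": customer_id,        # your internal customer identifier
        "provider": provider,           # e.g. "salesforce"
        "refreshToken": refresh_token,  # exported from the legacy token store
        "instanceUrl": instance_url,    # the customer's Salesforce org URL
    }

# Tokens exported from the legacy system (values truncated for illustration)
legacy_tokens = [
    ("cust_001", "5Aep861-example", "https://acme.my.salesforce.com"),
    ("cust_002", "5Aep972-example", "https://globex.my.salesforce.com"),
]

payloads = [
    build_connection_import(cid, "salesforce", tok, url)
    for cid, tok, url in legacy_tokens
]

# The actual import loop would POST each payload, e.g.:
# for p in payloads:
#     requests.post(f"{API_BASE}/connection", json=p, headers=auth_headers)
```

Because the provider app is configured with the original client credentials, the refresh loop on the new system presents the same client ID Salesforce already knows, and the imported refresh tokens keep working.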
This is the same pattern we recommend for HubSpot, Marketo, Dynamics, and any other OAuth-based system of record. Auth and token management isn't an integration; it's the load-bearing substrate underneath one. Treating OAuth as a first-class, migratable concern is the difference between a replatform that ships in a quarter and one that ships in a year.
Field mapping: required, optional, and the "custom field slot" pattern
Once OAuth is handled, the next hardest problem is field mapping. And the hardest part of field mapping isn't the technical schema; it's the product decision about how much flexibility to expose to your customers.
The teams we've advised generally converge on a three-tier model. The first tier is required fields: the handful of Salesforce fields that your product cannot function without. For an Account object, that's almost always Name, and often Id and a location field. Required fields are hardcoded in your integration's schema definition and appear as non-optional in the customer's mapping UI. The second tier is optional-but-enumerated fields: a curated list of additional fields your product can make use of if the customer chooses to map them. Billing city, industry, annual revenue, account type: these are the "we know what to do with this if you give it to us" fields. The third tier is what we call the custom field slot pattern: a small number of slots (typically three to five) that the customer can map to any custom Salesforce field in their org. Your product treats these as opaque pass-throughs. You don't know what they mean, but you store them, sync them, and pass them back when the customer asks.
The custom field slot pattern is worth dwelling on because it's the compromise that works. Exposing every Salesforce field as optional is overwhelming for most customers and makes your product's data model balloon with fields you can't reason about. Exposing only required and enumerated fields is too rigid, because enterprise customers will always have at least one custom field (ABM_Tier__c, Account_Rating__c, Lead_Score_V2__c) that they want to sync. Slots split the difference: you get a bounded number of pass-through fields that your product can use for display or downstream personalization, without committing to supporting infinite schema variation.
For teams on the receiving end of this migration, this maps cleanly onto Ampersand's AMP YAML model. Required fields are declared in the read action's requiredFields block. Enumerated optional fields are declared as named mappings (first_custom_field, second_custom_field, etc., or better, as meaningful slot names like abm_tier, account_rating). A fully-open "customer can pick any field" UI is configured by setting optionalFields: auto, which tells the UI library to fetch every available Salesforce field and render it for selection. The strength of this model is that a V1 release can start prescriptive (required fields only, no custom fields) and then add enumerated fields or slots in later releases without breaking existing mappings. Field mapping is how AI agents learn enterprise reality, and the same is true of any product doing downstream work with the synced data: the mappings are what give each field meaning.
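A sketch of how the three tiers might look in an AMP-style YAML read action. The requiredFields block and optionalFields: auto come from the model described above; the surrounding object layout and the slot names (abm_tier) are illustrative, so consult the platform docs for the exact schema:

```yaml
read:
  objects:
    - objectName: account
      destination: accountWebhook
      schedule: "*/30 * * * *"
      requiredFields:
        # Tier 1: hardcoded fields the product cannot function without
        - fieldName: name
        - fieldName: id
        # Tier 3: a custom field slot the customer maps to any custom field
        - mapToName: abm_tier
          mapToDisplayName: ABM Tier
          prompt: Which field marks this account's ABM tier?
      # Tier 2 (or fully open): render remaining Salesforce fields for selection
      optionalFields: auto
```

Shipping V1 with only the requiredFields block and adding the slot mappings and optionalFields later is what keeps the progression non-breaking for existing installations.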
Backfills, filters, and the dance between them
The third piece of the replatforming puzzle is historical data. Every customer-facing integration eventually has to answer the question: when a new customer connects, how much of their Salesforce history do we pull in?
The naive answer ("all of it") is usually wrong. Large Salesforce orgs have tens of millions of account records, many more contacts, and even more activity history. A full-history backfill on day one can exhaust the customer's API quota, trigger Salesforce's governor limits, and in the worst case, get your integration IP blocklisted. Even when it succeeds, it loads 99% of data that will never be looked at (records about churned customers, archived accounts, and decade-old contacts) into your product's database, costing money and slowing queries.
The better answer is a bounded backfill with customer-configurable extension. In the AMP YAML model, this is expressed as backfill: { defaultPeriod: { days: N } } rather than fullHistory: true. For an Account read action, most customers do well with a 30-to-90-day initial window. If they want more history later, they can request it, and your product can trigger an expanded backfill asynchronously.
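Spelled out in a spec fragment, using the keys quoted above (the surrounding structure is illustrative):

```yaml
read:
  objects:
    - objectName: account
      destination: accountWebhook
      schedule: "0 */6 * * *"
      backfill:
        defaultPeriod:
          days: 90   # bounded initial window instead of fullHistory: true
```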
The backfill problem is tightly entangled with filtering, which is the other half of "how much data do we pull in." A filter narrows which records are synced based on customer-specific criteria: "only accounts where Account_Type__c is 'Prospect' or 'Customer' and not 'Archived'." Filters operate at a different layer than backfill period (backfill is about time, filters are about selection) but in practice, customers think of them together, because the thing they really care about is "don't sync useless data."
There's an important architectural distinction here that teams miss. Backfill period is typically a global configuration set in your integration's YAML and applied to every customer. Filters are per-customer configuration, set in each customer's installation config. The reason matters: every customer gets the same backfill window because it's a constraint on your sync workers, but every customer has different filter needs because "useless data" means different things in different orgs. Modern platforms expose this by separating the static integration definition (the YAML) from the per-customer installation record, and by providing endpoints (like POST /installation and PATCH /installation) for managing that per-customer state programmatically.
The best flow we've seen for a customer-facing setup looks like this. Start with a pre-built mapping UI that captures the customer's field mapping in a step-through wizard. On the final step, before the initial backfill is triggered, show the customer a filter-building UI and a backfill period selector. When they hit "Finish," call the update-installation endpoint with the filters and backfill period, then call the trigger-read endpoint to kick off the first sync. This gives you full customer control without any of the platform infrastructure being custom-built. It's the hybrid of pre-built and headless: pre-built for the parts that are commoditized (OAuth, object mapping) and headless for the part that differentiates your product (how your customers want to filter and schedule their data).
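The finish-step wiring can be sketched as follows. The installation-update and trigger-read endpoints are the ones described above, but the payload shape here is an illustrative assumption, not a documented contract:

```python
def build_installation_patch(filters, backfill_days):
    """Build the per-customer installation config captured on the wizard's
    final step: their record filters plus their chosen backfill window."""
    return {
        "config": {
            "read": {
                "objects": {
                    "account": {
                        "filters": filters,  # per-customer selection criteria
                        "backfill": {"defaultPeriod": {"days": backfill_days}},
                    }
                }
            }
        }
    }

patch = build_installation_patch(
    filters={"Account_Type__c": ["Prospect", "Customer"]},
    backfill_days=60,
)

# On "Finish", apply the config, then kick off the first sync, e.g.:
# requests.patch(f"{API_BASE}/installation/{installation_id}", json=patch, ...)
# requests.post(f"{API_BASE}/read/trigger", json={"objectName": "account"}, ...)
```

Ordering matters here: the config patch has to land before the read is triggered, so the first sync runs with the customer's filters and window already applied.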
Why real-time subscribe is worth the complexity (most of the time)
A question we field a lot: should a new customer-facing Salesforce integration run on a schedule (daily, hourly) or on real-time subscribe / Change Data Capture?
The short answer is: subscribe, unless you have a specific reason not to. Salesforce's Platform Events and CDC pipelines, exposed through the Streaming API, let your integration receive record-level change events within seconds of the change occurring in Salesforce. The operational cost is low once it's set up (a single streaming connection per customer, managed by your integration platform) and the product experience is categorically better. Customer updates a contact in Salesforce; your product sees it five seconds later; your downstream AI agent, workflow, or dashboard acts on fresh data.
Compare this to the scheduled-poll alternative, which has been the default for most customer-facing CRM integrations for a decade. Scheduled polls are easy to reason about: cron runs, worker hits Salesforce's REST API, diff gets computed, updates get applied. But polls have a latency floor equal to the poll interval (a daily cron means up to 24 hours of stale data) and they burn API quota on every run, including runs where nothing has changed. They also create a class of bugs where updates get lost if a poll fails and the watermark isn't advanced correctly.
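That lost-update bug class comes down to watermark discipline: advance the watermark only after every fetched change has been durably applied. A minimal sketch, where the fetch and apply callables are stand-ins for real Salesforce API and database code:

```python
def poll_once(fetch_changes, apply_change, watermark):
    """One poll cycle: fetch records modified since `watermark`, apply them,
    and advance the watermark ONLY if every record was applied. Advancing it
    unconditionally is the classic way polled syncs silently drop updates."""
    changes = fetch_changes(since=watermark)
    for record in changes:
        apply_change(record)  # raises on failure, so the watermark stays put
    if changes:
        return max(r["last_modified"] for r in changes)
    return watermark

# Simulated run: two records modified after the current watermark
records = [
    {"id": "001A", "last_modified": "2025-01-02T00:00:00Z"},
    {"id": "001B", "last_modified": "2025-01-03T00:00:00Z"},
]
applied = []
new_watermark = poll_once(
    fetch_changes=lambda since: [r for r in records if r["last_modified"] > since],
    apply_change=applied.append,
    watermark="2025-01-01T00:00:00Z",
)
```

If `apply_change` fails partway through, the next poll re-fetches from the old watermark, which is why the apply step also needs idempotency keys so replayed records don't double-apply.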
The customers we've seen move from polls to subscribe describe the change in the same way: "our product feels alive now." This is especially true for AI-native products, where response time is the core experience. One of the teams on the Ampersand platform, 11x, runs AI phone agents that rely on real-time CRM data to answer inbound calls. Their engineering lead, Muizz Matemilola, put it this way: "Using Ampersand, we cut our AI phone agent's response time from 60 seconds to 5." The difference isn't a faster model; it's an integration that doesn't make the model wait for data.
That said, there are real cases where scheduled polls are the right choice. Batch analytics pipelines that only care about daily aggregates, regulated products that have audit requirements around when data was accessed, and integrations with systems of record that don't expose a streaming API all justify a polling approach. The right framing isn't "subscribe or poll." It's "subscribe for real-time objects, poll for batch objects, and make the choice per read action." Modern Native Product Integrations platforms let you declare this per object in the same YAML file, so the decision is legible to anyone reading the integration spec.
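A per-action split might look like this in an integration spec. The read/subscribe division follows the model described above; the event-level keys are illustrative and may differ from any specific platform's schema:

```yaml
read:
  objects:
    - objectName: opportunity        # batch analytics: a daily poll is enough
      destination: opportunityWebhook
      schedule: "0 3 * * *"          # 3am daily
subscribe:
  objects:
    - objectName: contact            # feeds real-time features: sync in seconds
      destination: contactWebhook
      createEvent:
        enabled: always
      updateEvent:
        enabled: always
```

Keeping both modes in one spec is what makes the latency decision legible: a reviewer can see at a glance which objects are real-time and which are batch.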
Industry context: the shift from iPaaS to Integration Infrastructure
Everything above sits inside a broader shift that's been accelerating for the last two years. The iPaaS model (workflow automation tools like Workato, Tray, or Zapier, originally designed for internal IT teams) was never a great fit for customer-facing, embedded Product Integrations. It's the wrong abstraction: iPaaS sells "connect these two apps with a visual flow builder," but what a B2B SaaS product actually needs is "let my customers connect their Salesforce to my product, with my branding, my UI, my data model, and my SLA." Those are two completely different products.
The market has woken up to this distinction. A wave of platforms (Ampersand, Merge, Paragon, Nango, Prismatic, and others) have emerged explicitly for product developers rather than internal automators. The shape of these platforms varies: some are unified API aggregators that normalize across providers, some are embedded iPaaS workflow engines, and some, like Ampersand, are integration-as-code platforms designed to give engineering teams a declarative, version-controlled way to build deep, bi-directional Native Product Integrations. The tradeoffs between these shapes are real and matter. We've written more about this in our post on the best tools for CRM integration in 2026, including when a unified API is the right choice versus when deep per-vendor integration is worth the extra work.
The broader industry data backs this up. Gartner's research on integration platforms has noted a sharp split between integration spend that serves internal IT and integration spend that is embedded in customer-facing software. Analysts are increasingly treating the two categories separately, which reflects what engineering leaders have known for a while: you can't run a customer-facing integration on an IT-facing tool.
The other context worth flagging is the rise of AI-native products and what they demand of integration infrastructure. An AI agent calling into a CRM to answer a question in real time has categorically different requirements than a nightly data pipeline. It needs sub-second reads, real-time writes, semantic field mapping that AI can reason about, and the ability to handle schema variation across customers without breaking. This is why we believe Integration Infrastructure is becoming a distinct product category. It's not iPaaS, it's not unified API, it's not a workflow engine. It's the substrate AI-native products stand on when they interact with enterprise systems of record.
How Ampersand solves the replatforming problem
The teams we work with who are replatforming customer-facing Salesforce integrations generally land on the same architecture after going through the decision tree above. Declarative integration spec in YAML, checked into git, deployed through CI/CD. OAuth handled by the platform, with support for bringing your own connected app and importing existing tokens so no customer has to reauthorize. Field mapping declared in the YAML and captured per-customer in an installation config, with pre-built UI components that can be styled to match product branding. Backfill and filter decisions exposed to customers through a hybrid flow: pre-built UI for mapping, headless API for the last-mile configuration. Real-time subscribe for objects where latency matters, scheduled reads for the rest. A dashboard with logs, error surfacing, and retry controls. Managed auth refresh, quota-aware retry, and all the operational plumbing that teams shouldn't be writing themselves.
That architecture is what Ampersand ships out of the box. Ampersand is an integration-as-code platform for product developers building native, bi-directional, enterprise-grade integrations into Salesforce, HubSpot, Marketo, NetSuite, Dynamics 365, SAP, Sage, Zendesk, Gong, and hundreds of other systems of record through open-source connectors. It supports Direct Integrations that preserve the customer's existing OAuth app identity, which is exactly what makes a non-disruptive replatform possible. It exposes a declarative AMP YAML for defining read, write, and subscribe actions, with full support for required fields, optional enumerated fields, custom field slots, customer-specific filters, and backfill periods. It ships both a pre-built React UI library for standard onboarding and a headless UI library for advanced, per-customer flows, and they can be used together in the hybrid pattern described above.
Under the hood, Ampersand handles OAuth token refresh, retry/backoff, quota management, idempotency, watermark tracking for backfills, and webhook delivery. The dashboard surfaces installations, logs, errors, and configuration per customer. For teams migrating off legacy integration code, there are dedicated endpoints (POST /connection to import existing tokens and POST /installation to import existing field mappings) that enable the invisible-cutover pattern without writing custom migration tooling.
John Pena, CTO at Hatch (a Yelp company), summarizes the value prop for teams on the other side of that decision: "Ampersand lets our team focus on building product instead of maintaining integrations. We went from months of maintenance headaches to just not thinking about it." That's the outcome we care about: engineering teams shipping product, not staring at token refresh dashboards at 2am.
Build vs. Buy vs. Ampersand: how the choices compare
Below is the comparison we've found most useful when engineering leaders are evaluating how to ship or replatform a customer-facing Salesforce integration.
| Dimension | Roll your own | Generic iPaaS / Workflow | Unified API | Ampersand (Native Product Integrations) |
|---|---|---|---|---|
| OAuth app identity | You own it fully; you implement refresh, retry, storage | Usually abstracted; hard to bring your own app | Abstracted; often can't bring your own app | Bring your own connected app; import existing tokens |
| Field mapping | Custom UI + custom storage; heavy lift | Limited to provider's mapping model | Normalized across providers; less per-object fidelity | Declarative YAML + pre-built/headless UI; per-object fidelity |
| Backfill control | Custom logic; easy to get wrong at scale | Typically limited; hard to tune per customer | Often black-box; limited customer control | Per-action backfill period; per-customer override via API |
| Real-time subscribe | Custom streaming pipeline | Webhook-based; often shallow | Varies; often polling-first | First-class subscribe actions in YAML |
| Migration from legacy | Full custom engineering lift | Requires customer reauth almost always | Requires customer reauth usually | Import tokens and installations programmatically |
| Time to first customer in production | Months to quarters | Weeks, but shallow | Weeks, but normalized | Days to weeks, and deep |
| Ongoing maintenance cost | Full team-owned | Shared with vendor | Shared with vendor | Managed infrastructure |
| Fit for AI-native and vertical-specific integrations | Good, but expensive | Poor | Moderate | Strong |
The honest read is that roll-your-own is the right choice only if integrations are the core differentiator of your product. For most B2B SaaS companies, they're an enabling capability: customers expect them, but your moat isn't in your OAuth refresh loop. Generic iPaaS and unified APIs each have their place, but neither fits the replatforming scenario well, because both typically break OAuth continuity on migration. That's why more product teams are landing on purpose-built Integration Infrastructure designed for Native Product Integrations.
The Ampersand pitch, stated plainly
If you are replatforming a customer-facing Salesforce integration (or any customer-facing CRM, ERP, HRIS, or accounting integration), Ampersand is the fastest, least-disruptive way to do it. You keep your existing OAuth app. You import your existing tokens. You define your integration in YAML, check it into git, and deploy it with our CLI or through our GitHub Action as part of your CI/CD. You get a pre-built, customizable React UI library for the standard onboarding flow and a headless UI library for custom paths. You get real-time subscribe, scheduled reads, on-demand writes, bulk write optimization, and an operational dashboard with logs, alerting, quota management, and error handling. You get GDPR compliance and ISO certification from day one. You get Direct Integrations into the systems of record your customers actually use: Salesforce, HubSpot, Marketo, Dynamics 365, NetSuite, SAP, Sage, Zendesk, Gong, and hundreds more. You get Vertical-specific integrations into Life Sciences, Health Care, and accounting systems. And you get a team of engineers (ours) on call to help you ship.
The shortest path to understanding what this looks like in practice is to read our how-it-works page, or to browse the Ampersand docs and see the AMP YAML model, the UI library, and the platform APIs in one place. If you'd rather talk through your specific replatform scenario with an engineer, you can book a 30-minute call with me directly and we'll walk through your architecture, your migration constraints, and whether Ampersand is the right fit. We've also written more on why multi-tenant CRM integrations break at scale when teams try to use traditional integration platforms, and if that matches what you're seeing in your own codebase, it's worth the read.
FAQ
How do I migrate existing Salesforce OAuth tokens to a new integration platform without forcing customer re-authentication?
The short answer: reuse your existing Salesforce connected app or External Client App as the provider app in the new system, then import the existing refresh tokens through a programmatic endpoint. In Ampersand, this means configuring your provider app with the original client ID and client secret, then using the connection-creation API to register each customer's existing tokens. The refresh loop will then use your original app's credentials, and Salesforce will treat each call as if it's coming from the same authorized app the customer originally connected. No consent screen, no re-auth, no customer-facing change. The prerequisite is that your organization owns the existing connected app; if it's owned by a third party, you'll need to negotiate token export.
What's the difference between a Salesforce connected app and an External Client App, and does it matter for migration?
Salesforce is in the middle of a multi-year push to move customers from legacy connected apps to External Client Apps, which are registered through the Dev Hub with second-generation managed packages and namespace registries. Functionally, they behave similarly from an OAuth perspective (both issue client IDs and support the same authorization grants) but External Client Apps integrate more cleanly with Salesforce's packaging and deployment model and are what Salesforce recommends for new builds. For a migration, the practical implication is that if your existing app is a legacy connected app, you have a choice: keep it and continue to use its client ID, or migrate to an External Client App (which does require some customer-facing work). Most teams we've advised keep the legacy app through the replatform and migrate to External Client App as a separate, later project.
How do I design field mapping so my V1 is simple but I can add flexibility later?
Start prescriptive. Declare a small set of required fields that your product cannot function without, and a slightly larger set of enumerated optional fields that have known product value. Do not expose arbitrary custom field selection in V1; it creates support surface you're not ready for. Once you have V1 in production, layer in a custom field slot pattern with three to five named slots that customers can map to any Salesforce custom field they want. Your product treats those slots as opaque pass-throughs, which keeps your data model stable while giving enterprise customers an escape hatch. The declarative YAML model makes this progression easy: V1 ships with hardcoded required and optional fields, V2 adds slot definitions, and V3 can add optionalFields: auto for the subset of customers who want maximum flexibility. Existing installations don't break when you add new optional fields.
When should I use scheduled reads vs. real-time subscribe actions?
Use subscribe for any object where customer-visible latency matters and for any object that feeds an AI agent, workflow engine, or customer-facing dashboard. Use scheduled reads for batch analytics objects, historical archives, and objects that don't change often. Most mature customer-facing integrations end up using both (subscribe for Accounts and Contacts, scheduled reads for Opportunities or custom analytics objects) and modern integration platforms let you declare this per object in the same integration spec. The deciding question is usually: if this data is stale by 12 hours, does a customer notice? If yes, subscribe. If no, schedule.
How do I let customers configure their own filters and backfill period without building that UI from scratch?
The pattern that works is a hybrid of pre-built and headless. Use a pre-built UI library (like Ampersand's) for the commoditized parts of onboarding: OAuth, object selection, required/optional field mapping. Then use a headless API to add a final custom step where the customer builds filters and picks a backfill duration. When they finish that step, your code calls the platform's installation-update endpoint with their filter config and backfill period, then calls the trigger-read endpoint to kick off the initial sync. This gives customers the flexibility they expect without you owning any of the underlying integration infrastructure. The key operational detail is to disable automatic backfill in the global integration spec when you do this; otherwise the pre-built step will trigger a full backfill before the customer's filters are applied.
Is Ampersand a fit if my integration needs are vertical-specific, covering Life Sciences, Health Care, or accounting?
Yes. Ampersand supports Vertical-specific integrations into Life Sciences CRMs, Health Care systems of record, and accounting platforms (NetSuite, SAP, Sage) with the same declarative model and operational guarantees as the horizontal CRM integrations. The platform is GDPR compliant and ISO certified, which matters for regulated verticals. If you're building in a vertical where the system of record isn't in our supported list, we also support custom connectors through the open-source connector framework, so the AMP YAML, UI library, and platform APIs all work the same way against your custom integration.
Wrapping up
Replatforming a customer-facing Salesforce integration is one of those projects that looks scary on the surface and is entirely tractable once you decompose it. The OAuth migration is solvable by reusing the existing connected app and importing existing tokens. The field mapping migration is solvable with a three-tier model (required, enumerated optional, and custom field slots) that gives you a clean V1 and a clear path to more flexibility. The backfill and filter complexity is solvable with a hybrid pre-built-plus-headless UI flow that puts the customer-specific decisions where they belong: in a per-customer installation config, not in your global integration spec. And the real-time vs. scheduled question is solvable by making it per-action rather than per-integration.
What makes all of this hard, historically, is that every piece had to be built from scratch by each engineering team. That's what's changed in the last two years. Native Product Integrations platforms like Ampersand collapse the decision space from "design and build all of it" to "declare what you want in YAML, plug in your existing OAuth app, and ship." The result is that teams stop owning integration infrastructure as a permanent tax on the roadmap and start shipping integrations as a product capability.
If you're in the middle of a replatform, or about to start one, or just trying to figure out whether your current integration architecture is going to scale, go read the Ampersand docs, browse the Ampersand website, or book time with me to walk through your specific situation. The integration debt trap is real, and it's escapable, but it's escapable fastest when you treat integrations as infrastructure, not as feature work.