Last updated: April 2026

PCG handles data conversion, migration, and integration for organizations replacing legacy systems, connecting disconnected platforms, or moving years of operational data to a new architecture. Every project begins with a source audit and ends with a reconciliation report confirming that the data that left the source arrived correctly at the destination. PCG has completed hundreds of conversion, migration, and integration projects since 1995 with zero data loss on record.[1]

What is the difference between data conversion, data migration, and data integration?

These three terms are frequently used interchangeably, but they describe distinct operations with different scopes and different risk profiles. Understanding which one your situation requires determines what the project actually involves.

Data Conversion

Transforming data from one format to another so it can be used by a different system. The data stays in the same location or moves to a staging environment. The operation is a translation, not a transfer.

Conversion addresses structural differences: date formats, field naming conventions, data types, code values that mean different things in different systems.

Example: Converting an Access database's date fields from MM/DD/YYYY text to ISO 8601 format before importing into SQL Server.
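A conversion like the date example above can be sketched in a few lines of Python. The function name and formats here are illustrative, not PCG's actual tooling:

```python
from datetime import datetime

def to_iso8601(us_date: str) -> str:
    """Convert an MM/DD/YYYY text date to ISO 8601 (YYYY-MM-DD).

    Raises ValueError on malformed input rather than passing bad
    data through silently -- failed rows should be held for review,
    not guessed at.
    """
    return datetime.strptime(us_date.strip(), "%m/%d/%Y").date().isoformat()

print(to_iso8601("04/07/2026"))  # -> 2026-04-07
```

The strict parse is deliberate: a conversion that silently accepts malformed dates just moves the quality problem into the new system.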

Data Migration

Moving data from one system to another, typically as part of a platform replacement. Migration includes conversion but goes further: it involves data cleansing, quality validation, reconciliation, and typically a one-time or phased transfer of historical records.

Migration ends with the data fully resident in the new system and the source system retired or placed in read-only status.

Example: Moving ten years of Access database records to a new SQL Server application, including data cleaning and post-migration reconciliation.

Data Integration

Creating an ongoing connection between two or more systems so data flows between them automatically without manual intervention. Integration is an ongoing operation, not a one-time transfer. Both systems remain active.

Integration can be real-time, scheduled batch, or event-triggered depending on the systems involved and the frequency requirements.

Example: Connecting an inventory system to an accounting platform so purchase receipts automatically update both systems without staff re-entering data.
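The batch variant of that pattern can be sketched as follows. The two "systems" here are plain Python structures standing in for real APIs or databases, and the function name is hypothetical; the point is the idempotency check that keeps a re-run from double-posting:

```python
# Minimal sketch of a scheduled batch integration between an
# inventory system and an accounting ledger (both represented as
# in-memory structures for illustration).

def sync_receipts(inventory: list, ledger: dict, synced_ids: set) -> int:
    """Copy unposted purchase receipts into the ledger.

    Tracking already-synced IDs makes the batch idempotent: a
    re-run after a partial failure never posts a receipt twice.
    """
    posted = 0
    for receipt in inventory:
        if receipt["id"] in synced_ids:
            continue  # already posted in an earlier run
        ledger[receipt["id"]] = {"amount": receipt["amount"], "source": "inventory"}
        synced_ids.add(receipt["id"])
        posted += 1
    return posted

receipts = [{"id": "R-1", "amount": 120.0}, {"id": "R-2", "amount": 75.5}]
ledger, seen = {}, set()
sync_receipts(receipts, ledger, seen)  # first run posts both receipts
sync_receipts(receipts, ledger, seen)  # second run posts nothing new
```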

What are the four risks that cause data migration projects to fail, and how does PCG address each one?

The four categories below account for the majority of data migration failures. Each one is preventable with the right planning and methodology. PCG's process addresses all four before any data moves.

Risk 1: Inadequate planning and mapping

Migrations that begin without a complete inventory of the source data, a defined target schema, and a field-level mapping between the two produce disorganized results at the destination. Data lands in the wrong fields, relationships break, and staff spend weeks manually correcting what should have arrived correctly. Inadequate planning also produces timeline surprises: a migration scoped for a weekend window that requires three weeks to complete because nobody measured the actual data volume before starting.

PCG begins every migration with a source audit that produces a complete field inventory, a documented mapping between source and destination schemas, and an identified list of data quality issues that need resolution before migration begins. Timeline estimates are based on actual measured data volumes and schema complexity, not on assumptions. No data moves until the mapping is reviewed and approved.

Risk 2: Silent data loss and corruption

Data can be silently dropped or corrupted during migration when records fail validation rules at the destination, when referential integrity constraints reject records that arrived out of sequence, or when transformation errors produce null values in required fields. The loss is often not discovered until weeks after migration, when a downstream process fails for an unknown reason, by which point the source system may no longer be accessible for comparison.

PCG validates every record against destination schema rules before it moves. Records that would fail at the destination are held in a quarantine log rather than dropped silently. A post-migration reconciliation report compares source record counts and key field values against destination records after every migration run. No migration is considered complete until reconciliation confirms zero unresolved discrepancies. The quarantine log documents every held record with the specific rule it failed, so nothing disappears without a traceable reason.
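The validate-or-quarantine pattern described above can be sketched like this. The validation rule and field names are illustrative examples, not PCG's actual rule set:

```python
# Sketch: every record either loads or lands in the quarantine log
# with the specific rule it failed -- nothing is dropped silently.

def validate_record(rec: dict, required: list) -> str:
    """Return the name of the failed rule, or an empty string on pass."""
    for field in required:
        if rec.get(field) in (None, ""):
            return f"null_required_field:{field}"
    return ""

def migrate(records: list, required: list):
    loaded, quarantine = [], []
    for rec in records:
        failure = validate_record(rec, required)
        if failure:
            # Held with a traceable reason, available for reconciliation.
            quarantine.append({"record": rec, "reason": failure})
        else:
            loaded.append(rec)
    # Reconciliation invariant: every source record is accounted for.
    assert len(loaded) + len(quarantine) == len(records)
    return loaded, quarantine
```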

Risk 3: Format and structural differences between systems

Source and destination systems store the same information differently. Date formats differ. Code values that mean "active" in one system mean something else in another. Fields that were one composite value in the source need to be split into multiple fields at the destination. Referential constraints that were not enforced in the source will reject records at the destination if the referenced parent records have not yet been loaded. Ignoring these differences produces data that appears to have transferred but behaves incorrectly in the new system.

PCG writes explicit transformation rules for every field that requires conversion: date format standardization, code value translation, field splits and merges, null handling for required destination fields, and load sequencing to satisfy referential constraints. These rules are tested against a representative sample of real source data before the full migration runs, surfacing edge cases in the actual data rather than in theory.
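Three of the rule types listed above, in a minimal Python sketch. The code set, field names, and defaults are hypothetical examples:

```python
# Illustrative transformation rules: code-value translation, a
# composite-field split, and null handling for a required
# destination field. Unknown codes raise instead of guessing.

STATUS_MAP = {"A": "active", "I": "inactive", "T": "terminated"}

def transform(src: dict) -> dict:
    # Code value translation: a KeyError here surfaces an unmapped code
    # during sample testing rather than loading a wrong value.
    status = STATUS_MAP[src["status_code"]]
    # Field split: one composite "Last, First" value becomes two fields.
    last, _, first = src["full_name"].partition(", ")
    # Null handling: the required destination field gets an explicit,
    # documented default instead of an accidental null.
    region = src.get("region") or "UNSPECIFIED"
    return {"status": status, "first_name": first,
            "last_name": last, "region": region}
```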

Risk 4: Destination environment failures at production scale

Destination environments that appear adequate in testing can behave differently under the full volume of production data. Index performance degrades at scale. Storage estimates based on compressed source data do not account for destination format overhead. Hardware and software configurations that performed well for the legacy system may require adjustment for the new one. Discovering these issues after migration, when the source system has been decommissioned, produces the worst possible recovery scenario.

PCG tests the full migration against a parallel destination environment that mirrors the production configuration before any live cutover. Performance under real data volumes, index behavior at production scale, and storage requirements are all verified in the parallel environment. The parallel environment also serves as the rollback option if anything unexpected surfaces during final cutover. The source system is never decommissioned until the parallel environment has been validated and the production cutover confirmed.

What types of conversion, migration, and integration projects does PCG handle?

Project types and typical timelines[2]:

Access to SQL Server migration (4-10 weeks)
Schema mapping, data type conversion, referential integrity enforcement, front-end rebuild or connection. Full historical data migration with reconciliation.

Legacy ERP to modern platform (8-20 weeks)
Sage, Great Plains, Peachtree, or other legacy ERP data extracted, cleaned, and migrated to a new architecture. Business logic preserved in the new system.

Spreadsheet to database conversion (2-6 weeks)
Excel or multi-spreadsheet operational data normalized, cleaned, and loaded into a properly structured relational database. Reporting rebuilt in the new platform.

On-premises to cloud migration (4-12 weeks)
Existing SQL Server or Access data migrated to Azure SQL, AWS RDS, or another hosted platform. Application layer updated to connect to the new back end.

Real-time system integration (2-8 weeks)
API development or direct database connection between two active systems. Ongoing automated data flow replaces manual export and re-entry.

File format conversion (1-4 weeks)
Data in proprietary formats, flat files, XML, or legacy database exports converted to modern structured formats compatible with current systems.

What does PCG's migration process look like from start to finish?

Every migration PCG executes follows the same core methodology regardless of the platforms involved. The specific technical approach varies by source and destination system, but the process discipline is constant because the failure modes are constant.

1. Source Audit and Field Inventory

PCG documents every table, field, data type, relationship, and constraint in the source system. This includes fields that exist in the schema but contain no data, fields that contain data in formats that differ from their defined type, and relationships that exist in the data but are not enforced by the schema. The audit also identifies data quality issues that need correction before migration: duplicates, nulls in required fields, values that do not match defined code sets, and referential integrity violations that the source system never caught.
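An automated first pass over a source audit can be sketched as below. An in-memory SQLite database stands in for the legacy source (in practice this would run against Access or SQL Server); the checks mirror the audit items above: field inventory read from the schema itself, nulls in required fields, and duplicate key values the source never caught.

```python
import sqlite3

# Hypothetical legacy table with the quality problems described above.
conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE customers (id INTEGER, name TEXT, email TEXT);
    INSERT INTO customers VALUES (1, 'Acme', NULL), (1, 'Acme Co', 'a@x.com');
""")

# Field inventory taken from the database structure, not documentation.
fields = [row[1] for row in conn.execute("PRAGMA table_info(customers)")]

# Data quality checks: nulls in a required field, unenforced key duplicates.
null_emails = conn.execute(
    "SELECT COUNT(*) FROM customers WHERE email IS NULL").fetchone()[0]
dup_ids = conn.execute(
    "SELECT COUNT(*) FROM (SELECT id FROM customers"
    " GROUP BY id HAVING COUNT(*) > 1)").fetchone()[0]

print(fields)                 # ['id', 'name', 'email']
print(null_emails, dup_ids)   # 1 1
```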

2. Schema Mapping and Transformation Rule Development

PCG maps every source field to its destination equivalent, documents every transformation required to convert source format to destination format, and writes the conversion rules explicitly before any code is written. The mapping is reviewed and approved by your team before development begins. Fields that have no direct destination equivalent are evaluated for elimination, consolidation, or creation of new destination fields. The approved mapping becomes the specification that governs the entire migration build.
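A field mapping of this kind is most useful when it is an explicit, reviewable specification rather than logic scattered through a script. A minimal sketch, with invented field names:

```python
# Each source field maps to a destination field with a named
# transformation, or is explicitly eliminated -- never silently skipped.
MAPPING = {
    "CustName":  {"dest": "customer_name", "transform": "trim"},
    "CustSince": {"dest": "customer_since", "transform": "mmddyyyy_to_iso"},
    "FaxNumber": {"dest": None},  # eliminated, with documented sign-off
}

def unmapped_fields(source_fields: list, mapping: dict) -> list:
    """Every source field must appear in the approved mapping.

    An unmapped field is a specification gap to resolve in review,
    not a judgment call to make at load time.
    """
    return [f for f in source_fields if f not in mapping]

unmapped_fields(["CustName", "CustSince", "FaxNumber", "Notes"], MAPPING)
# -> ["Notes"], flagged before any code is written against it
```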

3. Test Migration Against Real Sample Data

PCG runs the migration process against a representative sample of your actual source data before running the full production migration. Testing against real data surfaces format inconsistencies, unexpected null values, referential integrity failures, and edge cases that do not appear in synthetic test records. Every issue identified during sample testing is resolved and the migration re-run before the full production dataset is processed.

4. Full Migration with Parallel Validation

The full production migration runs against a parallel environment that mirrors the live destination. PCG monitors the migration in real time, reviewing the quarantine log for records that failed validation and the transformation log for any conversion errors. After the full migration completes, PCG produces a reconciliation report comparing source and destination record counts and key field values. No cutover decision is made while open discrepancies exist above the agreed accuracy threshold.
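A reconciliation check of the kind described can be sketched as a count comparison plus an order-independent checksum over a key field. The key and record shapes here are illustrative:

```python
import hashlib

def key_checksum(records: list, key: str) -> str:
    """Order-independent checksum over one key field."""
    digest = hashlib.sha256()
    for value in sorted(str(r[key]) for r in records):
        digest.update(value.encode())
    return digest.hexdigest()

def reconcile(source: list, dest: list, key: str) -> list:
    """Return a list of discrepancies; cutover only when it is empty."""
    discrepancies = []
    if len(source) != len(dest):
        discrepancies.append(f"count: {len(source)} source vs {len(dest)} dest")
    if key_checksum(source, key) != key_checksum(dest, key):
        discrepancies.append(f"checksum mismatch on key field '{key}'")
    return discrepancies

src = [{"id": 1}, {"id": 2}]
assert reconcile(src, [{"id": 2}, {"id": 1}], "id") == []        # load order is irrelevant
assert reconcile(src, [{"id": 1}], "id")[0].startswith("count")  # a dropped record is caught
```

In practice the same comparison runs over several key fields, not just one, but the principle is identical: the report is computed from both sides, not asserted.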

5. Cutover, Post-Migration Verification, and Handoff

Once reconciliation confirms the migration is complete and accurate, the cutover to the production destination is executed. The source system remains accessible in read-only mode during a defined post-migration verification period. PCG delivers the migration documentation: field mapping specifications, transformation rules, reconciliation reports, and the data quality issue log with resolutions. Your team has a complete record of what moved, how it was transformed, and what was corrected during migration.

What makes PCG's approach to integration different from using a middleware platform?

Off-the-shelf middleware platforms like MuleSoft, Zapier, or Boomi handle the common case: connecting two well-documented systems that both have APIs, using pre-built connectors that the platform maintains. For organizations running legacy systems without APIs, systems with non-standard data formats, or integrations with compliance and regulatory requirements that generic platforms cannot address, PCG builds the integration directly.

  • When the source system has no API. PCG accesses the underlying database directly using ODBC or native database drivers, or builds a custom extraction layer for systems that expose no programmatic interface at all. Legacy applications built before REST APIs existed are not a barrier to integration.
  • When an API needs to be built, not just connected. If your organization needs to expose data to an external system and no outbound API currently exists, PCG writes the API. This is common for organizations that need to provide data feeds to regulatory platforms, client portals, or partner systems that require a specific data format and delivery mechanism.
  • When the integration carries compliance or audit trail requirements. Generic middleware platforms do not maintain the audit trail documentation that regulatory environments require. PCG builds compliance logging, change tracking, and data lineage documentation into integrations that operate in regulated industries.
  • When the integration needs to survive system updates without breaking. Platform-dependent connectors break when either connected system updates its API version. PCG builds integrations with version tolerance and error handling that surfaces failures before they affect operations, and provides ongoing support when connected systems change.
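The first case above, direct-database extraction from an API-less system, follows a simple pattern: pull only rows not yet sent downstream, then mark them. In production the connection would go through an ODBC or native driver (for example, pyodbc against Access or SQL Server); an in-memory SQLite database stands in here so the sketch is self-contained, and the table and column names are hypothetical.

```python
import sqlite3

# Stand-in for a legacy database reached via ODBC in a real engagement.
legacy = sqlite3.connect(":memory:")
legacy.executescript("""
    CREATE TABLE orders (order_id INTEGER, total REAL, exported INTEGER DEFAULT 0);
    INSERT INTO orders (order_id, total) VALUES (1001, 49.99), (1002, 120.00);
""")

def extract_unexported(conn) -> list:
    """Pull rows not yet sent downstream, then mark them exported.

    Marking rows (or tracking a high-water-mark timestamp) keeps
    repeated runs from re-sending the same records.
    """
    rows = conn.execute(
        "SELECT order_id, total FROM orders WHERE exported = 0").fetchall()
    conn.execute("UPDATE orders SET exported = 1 WHERE exported = 0")
    return [{"order_id": oid, "total": total} for oid, total in rows]

batch = extract_unexported(legacy)  # two orders on the first pull
again = extract_unexported(legacy)  # empty on the second
```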

[1] Zero data loss claim based on PCG QA process records across conversion, migration, and integration projects, 1995-2026. All projects include pre-migration validation, quarantine logging for failed records, and post-migration reconciliation before handoff.

[2] Project timeline estimates based on PCG conversion, migration, and integration project records, 2015-2026. Timelines vary based on source data volume, schema complexity, and data quality issues identified during audit.

Frequently Asked Questions

How does PCG prevent data loss during a migration?

PCG's migration process includes three safeguards that collectively prevent data loss. Pre-migration validation catches records that would fail destination schema rules before they move. A quarantine log holds failed records for review rather than silently dropping them. Post-migration reconciliation compares source record counts and key field values against destination records after every migration run. No migration is considered complete until reconciliation confirms that what left the source arrived correctly at the destination.

Can PCG migrate a system when the original developer is gone and no documentation exists?

Yes. PCG reverse-engineers the source system schema from the database structure rather than from documentation. The source audit maps every table, field, relationship, and constraint directly from the database itself, so documentation that is missing or no longer reflects the current state of the system is not a blocker. The absence of the original developer extends the audit phase but does not prevent a complete and accurate migration.

What happens to our historical data during a system replacement?

PCG migrates the complete historical record as part of every system replacement engagement. Historical data is cleaned, corrected for quality issues identified during the source audit, and mapped to the destination schema before import. Data stored inconsistently in the legacy system due to lack of enforcement is standardized during migration. What arrives in the new system is more structurally complete than what the legacy system held, and the migration documentation provides a complete audit trail of every transformation applied.

Can PCG build an API for a system that does not have one?

Yes. PCG builds REST APIs for existing systems that need to expose data to external applications, partner platforms, or regulatory reporting systems. The API is scoped during the integration assessment: which data sets need to be exposed, in what format, with what authentication model, and with what rate limits. PCG also builds inbound APIs for systems that need to receive data from external sources in a defined format.

How long does a data migration take?

Simple migrations involving clean source data, straightforward schema mapping, and limited data volume typically run two to six weeks from source audit to completed reconciliation. Complex migrations involving legacy systems with undocumented schemas, significant data quality issues, large historical data volumes, or multiple source systems typically run eight to twenty weeks. PCG provides a timeline estimate after the source audit, not before it.

Can we keep using our current system while the migration is in progress?

Yes. PCG's methodology runs the migration against a parallel destination environment while the source system remains the operational master. Your team continues working on the existing system throughout the migration and validation period. The cutover to the new system happens only after reconciliation confirms accuracy and your team has approved the results. The source system remains accessible in read-only mode during the post-cutover verification period.

What deliverables do we receive at the end of a migration?

PCG delivers the migrated data in the destination system, the migration documentation package, and a reconciliation report confirming data integrity. The documentation package includes the source audit results, the field mapping specification, the transformation rules applied during conversion, the quarantine log with resolution notes for every record that failed validation, and the post-migration reconciliation report. This documentation gives your team and any future developer a complete record of what was migrated and how.

How much do these projects cost?

Simple migrations between two well-documented systems with clean source data typically run between $3,000 and $10,000. Complex migrations involving legacy systems, significant data quality remediation, or large historical data volumes typically run between $10,000 and $40,000. System integration projects run between $3,000 and $25,000 depending on the number of systems involved, the availability of APIs, and the real-time versus batch requirements. PCG provides a fixed-price estimate after the source audit for all project types.

About the Author
Allison Woolbert, CEO and Senior Systems Architect, Phoenix Consultants Group

Allison has been executing data conversions, migrations, and integrations since the early 1980s, predating PCG's founding in 1995. Her migration work spans legacy database transfers for ExxonMobil, Nabisco, and AXA Financial, EPA compliance system deployments, healthcare staffing platform migrations, and hundreds of Access-to-SQL-Server migrations across 30 years of database work.

The consistent finding across those engagements: the migrations that fail do so at the validation stage, not the transfer stage. Data that arrives at the destination without pre-migration validation and post-migration reconciliation looks complete until something downstream breaks because a required field was silently dropped. PCG does not skip those steps.
