Summary
Any software system that survives long enough will require some form of migration or evolution. When that system is responsible for the data layer, the process becomes even more challenging. Sriram Panyam has been involved in several projects that required migration of large volumes of data in high traffic environments. In this episode he shares some of the valuable lessons that he learned about how to make those projects successful.
Announcements
Hello and welcome to the Data Engineering Podcast, the show about modern data management.

Data lakes are notoriously complex. For data engineers who battle to build and scale high quality data workflows on the data lake, Starburst is an end-to-end data lakehouse platform built on Trino, the query engine Apache Iceberg was designed for, with complete support for all table formats including Apache Iceberg, Hive, and Delta Lake. Trusted by teams of all sizes, including Comcast and Doordash. Want to see Starburst in action? Go to dataengineeringpodcast.com/starburst and get $500 in credits to try Starburst Galaxy today, the easiest and fastest way to get started using Trino.

This episode is supported by Code Comments, an original podcast from Red Hat. As someone who listens to the Data Engineering Podcast, you know that the road from tool selection to production readiness is anything but smooth or straight. In Code Comments, host Jamie Parker, Red Hatter and experienced engineer, shares the journeys of technologists from across the industry and their hard-won lessons in implementing new technologies. I listened to the recent episode "Transforming Your Database" and appreciated the valuable advice on how to approach the selection and integration of new databases in applications and the impact on team dynamics. There are 3 seasons of great episodes, and new ones are landing everywhere you listen to podcasts. Search for "Code Comments" in your podcast player or go to dataengineeringpodcast.com/codecomments today to subscribe. My thanks to the team at Code Comments for their support.

Your host is Tobias Macey and today I'm interviewing Sriram Panyam about his experiences conducting large scale data migrations and the useful strategies that he learned in the process.

Interview
Introduction

- How did you get involved in the area of data management?
- Can you start by sharing some of your experiences with data migration projects?
- As you have gone through successive migration projects, how has that influenced the ways that you think about architecting data systems?
- How would you categorize the different types and motivations of migrations?
- How does the motivation for a migration influence the ways that you plan for and execute that work?
- Can you talk us through one or two specific projects that you have taken part in?

Part 1: The Triggers

Section 1: Technical Limitations Triggering Data Migration
- Scaling bottlenecks: performance issues with databases, storage, or network infrastructure
- Legacy compatibility: difficulties integrating with modern tools and cloud platforms
- System upgrades: the need to migrate data during major software changes (e.g., a SQL Server version upgrade)

Section 2: Types of Migrations for Infrastructure Focus
- Storage migration: moving data between systems (HDD to SSD, SAN to NAS, etc.)
- Data center migration: physical relocation or consolidation of data centers
- Virtualization migration: moving from physical servers to virtual machines (or vice versa)

Section 3: Technical Decisions Driving Data Migrations
- End-of-life support: forced migration when older software or hardware is sunsetted
- Security and compliance: adopting new platforms with better security postures
- Cost optimization: potential savings of cloud vs. on-premise data centers

Part 2: Challenges (and Anxieties)

Section 1: Technical Challenges
- Data transformation challenges: schema changes and complex data mappings
- Network bandwidth and latency: transferring large datasets efficiently
- Performance testing and load balancing: ensuring new systems can handle the workload
- Live data consistency: maintaining data integrity while updates occur in the source system
- Minimizing lag: techniques to reduce delays in replicating changes to the new system
- Change data capture: identifying and tracking changes to the source system during migration

Section 2: Operational Challenges
- Minimizing downtime: strategies for service continuity during migration
- Change management and rollback plans: dealing with unexpected issues
- Technical skills and resources: in-house expertise, data teams, and external help

Section 3: Security & Compliance Challenges
- Data encryption and protection: methods for both in-transit and at-rest data
- Meeting audit requirements: documenting data lineage and the chain of custody
- Managing access controls: adjusting identity and role-based access to the new systems
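To make the "change data capture" and "minimizing lag" challenges above concrete, here is a minimal sketch of watermark-based incremental change capture: the source table carries a last-modified column, and the migrator repeatedly pulls only rows changed since its last watermark. The table, column names, and timestamps are hypothetical, and production migrations typically prefer log-based CDC (e.g. Debezium, discussed later in the outline) because polling can miss in-flight updates:

```python
# Watermark-based change capture sketch (hypothetical schema: a `users`
# table with an integer `updated_at` column). Standard library only.
import sqlite3

def capture_changes(conn, since):
    """Return rows modified after the watermark, plus the new watermark."""
    rows = conn.execute(
        "SELECT id, name, updated_at FROM users "
        "WHERE updated_at > ? ORDER BY updated_at",
        (since,),
    ).fetchall()
    # Advance the watermark to the newest change we have seen.
    new_watermark = rows[-1][2] if rows else since
    return rows, new_watermark

# Demo with an in-memory stand-in for the source database.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (id INTEGER PRIMARY KEY, name TEXT, updated_at INTEGER)")
conn.executemany("INSERT INTO users VALUES (?, ?, ?)",
                 [(1, "ann", 100), (2, "bob", 150), (3, "cat", 200)])

changes, wm = capture_changes(conn, since=100)
print(len(changes), wm)  # -> 2 200  (only rows changed after t=100)
```

Each sync cycle feeds `new_watermark` back in as `since`, so repeated calls replicate only the delta; this is the "incremental" half of the incremental-vs-full tradeoff discussed in Part 3.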
Part 3: Patterns

Section 1: Infrastructure Migration Strategies
- Lift and shift: migrating as-is vs. modernizing and re-architecting during the move
- Phased vs. big bang approaches: tradeoffs in risk vs. disruption
- Tools and automation: using specialized software to streamline the process
- Dual writes: managing updates to both old and new systems for a time
- Change data capture (CDC) methods: log-based vs. trigger-based approaches for tracking changes
- Data validation & reconciliation: ensuring consistency between source and target

Section 2: Maintaining Performance and Reliability
- Disaster recovery planning: failover mechanisms for the new environment
- Monitoring and alerting: proactively identifying and addressing issues
- Capacity planning: forecasting growth to scale the new infrastructure

Section 3: Data Consistency and Replication
- Replication tools: strategies and specialized tooling
- Data synchronization techniques: pros and cons of different methods (incremental vs. full)
- Testing/verification: strategies for validating data correctness in a live environment
- Implications of large-scale systems and environments
- Comparison of interesting strategies: DBLog, Debezium, Databus, GoldenGate, etc.
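The "dual writes" and "data validation & reconciliation" patterns above can be sketched together. This is an illustrative toy, not a production recipe: plain dicts stand in for the old and new datastores, and all names are hypothetical. The key ideas are that the old system stays the source of truth while the new one receives best-effort copies, and that an order-independent checksum lets you cheaply confirm the two sides agree:

```python
# Dual-write plus reconciliation sketch. Dicts stand in for the old and
# new datastores; everything here is hypothetical illustration.
import hashlib
import json

old_store, new_store = {}, {}

def dual_write(key, value):
    """Write to the system of record first, then best-effort to the new system."""
    old_store[key] = value          # old system remains the source of truth
    try:
        new_store[key] = value      # a real failure here would be queued for replay
    except Exception:
        pass                        # in practice: log, retry, and repair later

def checksum(store):
    """Order-independent digest of a store's contents, for reconciliation."""
    blob = json.dumps(sorted(store.items())).encode()
    return hashlib.sha256(blob).hexdigest()

for k, v in [("a", 1), ("b", 2), ("c", 3)]:
    dual_write(k, v)

# Reconciliation: matching checksums mean the stores hold identical data.
print(checksum(old_store) == checksum(new_store))  # -> True
```

A checksum mismatch tells you the stores diverged but not where; real reconciliation jobs typically hash per-partition or per-key-range so a repair only has to rescan the ranges that disagree.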
- What are the most interesting, innovative, or unexpected approaches to data migrations that you have seen or participated in?
- What are the most interesting, unexpected, or challenging lessons that you have learned while working on data migrations?
- When is a migration the wrong choice?
- What are the characteristics or features of data technologies and the overall ecosystem that can reduce the burden of data migration in the future?

Contact Info
- LinkedIn

Parting Question
From your perspective, what is the biggest gap in the tooling or technology for data management today?

Closing Announcements
- Thank you for listening! Don't forget to check out our other shows. Podcast.__init__ covers the Python language, its community, and the innovative ways it is being used. The Machine Learning Podcast helps you go from idea to production with machine learning.
- Visit the site to subscribe to the show, sign up for the mailing list, and read the show notes.
- If you've learned something or tried out a project from the show then tell us about it! Email [email protected] with your story.

Links
- DagKnows
- Google Cloud Dataflow
- Seinfeld Risk Management
- ACL == Access Control List
- LinkedIn Databus - Change Data Capture
- Espresso Storage
- HDFS
- Kafka
- Postgres Replication Slots
- Queueing Theory
- Apache Beam
- Debezium
- Airbyte
- [Fivetran](https://fivetran.com)
- Designing Data Intensive Applications by Martin Kleppmann (affiliate link)
- Vector Databases
- Pinecone
- Weaviate
- LAMP Stack
- Netflix DBLog

The intro and outro music is from The Hug by The Freak Fandango Orchestra / CC BY-SA
Sponsored By:
- Red Hat Code Comments Podcast: 
Putting new technology to use is an exciting prospect. But going from purchase to production isn’t always smooth—even when it’s something everyone is looking forward to. Code Comments covers the bumps, the hiccups, and the setbacks teams face when adjusting to new technology—and the triumphs they pull off once they really get going. Follow Code Comments [anywhere you listen to podcasts](https://link.chtbl.com/codecomments?sid=podcast.dataengineering).
Starburst: This episode is brought to you by Starburst - an end-to-end data lakehouse platform for data engineers who are battling to build and scale high quality data pipelines on the data lake. Powered by Trino, the query engine Apache Iceberg was designed for, Starburst is an open platform with support for all table formats including Apache Iceberg, Hive, and Delta Lake.
Trusted by the teams at Comcast and Doordash, Starburst delivers the adaptability and flexibility a lakehouse ecosystem promises, while providing a single point of access for your data and all of your data governance, allowing you to discover, transform, govern, and secure everything in one place. Want to see Starburst in action? Try Starburst Galaxy today, the easiest and fastest way to get started using Trino, and get $500 of credits free. Go to [dataengineeringpodcast.com/starburst](https://www.dataengineeringpodcast.com/starburst)
Support Data Engineering Podcast