Are you looking to migrate to Aurora Postgres?

Rajesh Saluja
May 3, 2021

What are we trying to solve?

Are you looking to change your RDBMS flavor and adopt AWS Aurora Postgres to take advantage of features such as automatic switchover to another availability zone with a warm cache (reducing MTTD and MTTR), freedom from traditional log-buffer bottlenecks, faster vacuuming, or overall cost savings?

Or are you still deciding which RDBMS flavor should run your transactional database? For example, you may want to move a workload off Oracle or SQL Server onto Aurora Postgres and need to understand the challenges you might face along the way. This blog covers the criteria, methods, and recommendations for migrating from Oracle or SQL Server to Aurora Postgres. It focuses on the database stack, with database engineers, administrators, database managers, and architects as the target audience.

Criteria

The list of criteria will vary with business requirements, but the criteria below were chosen to cover the majority of use cases. These are the key questions to answer for an RDBMS flavor migration.

  • How do we perform the schema conversion?
  • What is the optimal method for data movement with minimal latency?
  • How do we perform data validation, and repair data in case of discrepancies?
  • What should the rollback strategy be if the target RDBMS doesn’t perform well?
  • What horsepower (instance size) does the target need?
  • What is required to convert the application code?
  • Support for partitioning types and features similar to Oracle’s, such as the database administrator’s favorite, automated interval partitioning
  • Support for global indexes on partitioned tables
  • How will Oracle MERGE statements be handled on Aurora Postgres?
  • What to do if the application has triggers that update a certain column after any update/insert on a row?
  • How to generate a seconds-level report similar to Oracle’s Active Session History on Aurora Postgres?
  • How to handle foreign keys referencing partitioned tables?
  • Ability to support currently used data types
  • Ability to switch over with minimal downtime in case of failure, and to fail over cross-region with minimal interruption
  • Ability to execute DML efficiently with minimal overhead
  • Support for various object types and their functionality
  • Scalability and cost effectiveness
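Several of these criteria can be prototyped up front. For instance, the trigger pattern above — updating a certain column whenever a row is inserted or updated — maps naturally to a BEFORE row trigger in Postgres. A minimal sketch, assuming a hypothetical `orders` table with a `last_modified` column:

```sql
-- Hedged sketch; table and column names are assumptions.
-- A BEFORE row trigger can modify NEW in place, so no second UPDATE
-- (and no extra row version for vacuum to clean up) is needed.
CREATE OR REPLACE FUNCTION touch_last_modified() RETURNS trigger AS $$
BEGIN
  NEW.last_modified := now();  -- mutate the row before it is written
  RETURN NEW;
END;
$$ LANGUAGE plpgsql;

CREATE TRIGGER trg_touch_last_modified
  BEFORE INSERT OR UPDATE ON orders
  FOR EACH ROW EXECUTE FUNCTION touch_last_modified();
-- On Postgres versions before 11, use EXECUTE PROCEDURE instead.
```

Preferring a BEFORE trigger over an AFTER trigger here avoids issuing a second UPDATE against the same row, which matters on Postgres because every update creates a new row version.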

Success Metrics

Having noted the key criteria, the next step is to determine the basic and must-have functionality needed to change RDBMS flavor, how that functionality will be tested, and the success metric for each criterion. Sequence your testing to start with the basics and ramp up to the must-have functionality that supports your business case.

In our proof of concept we chose Oracle and SQL Server as the source RDBMSs and tested migration to Aurora Postgres against the key criteria above.

Recommendations and Lesson Learned

Below is a summary of the processes, features, methods, and lessons learned/recommendations for migrating to Aurora Postgres.

Conclusion

There will be many other criteria, driven by your business needs, that determine the migration path. But based on our testing with Oracle RDS and SQL Server on EC2 as sources and Aurora Postgres as the target, we learned that there is no one-stop shop for logical replication that supports data movement, validation, and repair. Any DDL applied on the source must be migrated to the target manually. Moving data back from Aurora Postgres to Oracle/SQL Server (for rollback purposes) is not straightforward for a partitioned table. Some features, such as database logon triggers and the conversion of Oracle MERGE to ON CONFLICT on Postgres, require different approaches to serve the same needs.
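As a concrete illustration of the MERGE conversion, an Oracle upsert typically becomes an INSERT … ON CONFLICT on Postgres. A minimal sketch with a hypothetical `accounts` table (this form requires a unique constraint or index on `id`):

```sql
-- Oracle source (sketch):
-- MERGE INTO accounts a
-- USING (SELECT :id AS id, :balance AS balance FROM dual) s
-- ON (a.id = s.id)
-- WHEN MATCHED THEN UPDATE SET a.balance = s.balance
-- WHEN NOT MATCHED THEN INSERT (id, balance) VALUES (s.id, s.balance);

-- Postgres equivalent; EXCLUDED refers to the row proposed for insertion.
INSERT INTO accounts (id, balance)
VALUES (:id, :balance)
ON CONFLICT (id)
DO UPDATE SET balance = EXCLUDED.balance;
```

Note that ON CONFLICT only covers the matched-update / not-matched-insert shape; MERGE statements with DELETE clauses or extra conditions have to be rewritten as writable CTEs or procedural code.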

You also need to be very prudent when adopting partitioned tables on Postgres and when handling the queries that touch them. Lastly, if you are a fan of Oracle AWR and ASH, you may miss the ability to quickly compare two points in time and to validate current versus previously executed execution plans for a query; as a workaround, you need to build your own snapshot repository. On the flip side, Aurora Postgres is failure resilient with a warm cache, performant once queries are optimized, offers cross-region replication out of the box, and is cost effective.
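The snapshot repository mentioned above can be as simple as sampling `pg_stat_activity` on a schedule. A minimal sketch, where the table name and sampling policy are assumptions:

```sql
-- One-time setup: a history table shaped like pg_stat_activity,
-- plus a sample timestamp. WHERE false copies structure but no rows.
CREATE TABLE IF NOT EXISTS ash_history AS
SELECT now() AS sample_time, * FROM pg_stat_activity WHERE false;

-- Run every few seconds (e.g. via pg_cron where available,
-- or an external scheduler) to capture non-idle sessions:
INSERT INTO ash_history
SELECT now(), *
FROM pg_stat_activity
WHERE state <> 'idle';
```

With samples in place, comparing two points in time becomes an ordinary GROUP BY over `sample_time` windows — a rough stand-in for ASH, though without the plan history that AWR provides.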

Author: Rajesh Saluja

Principal Big Data Engineer at Small Business and Self Employed Group, Intuit. https://www.linkedin.com/in/rajeshsaluja/