Google Cloud SQL - move instance from one project to another with zero downtime?


What is the easiest way to move a Google Cloud SQL instance (Postgres 9.6) from one Google project to another with minimum or zero downtime? The instance size is about 20 GB.

There is a service called "Migration job" that looks very relevant: https://cloud.google.com/database-migration/docs/postgres/create-migration-job . But I cannot tell whether it can be used to move an instance from one Google project to another.

Simply restoring from a backup is not really an option in my case because I want to achieve the minimum possible downtime, so I'm looking for something like two running instances with real-time synced data.

P.S. I also have a VM configured with PgBouncer.

CodePudding user response:

Yes, Database Migration Service can be used to move a Cloud SQL instance from one GCP project to another. It is cheaper than the export/import approach described below, and although it requires more setup, it should also be faster. You create a connection profile for the existing Cloud SQL instance and a Cloud SQL target in the destination project; once everything is set up, most of the migration runs automatically. The procedure is well documented in the Database Migration Service documentation.
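As a rough sketch of what that setup can look like with the gcloud CLI (the project, region, profile, and job names here are placeholders, and flag spellings may vary between gcloud versions, so verify each command with --help before running):

    # Connection profile describing the existing (source) Cloud SQL Postgres
    # instance; run this in the destination project.
    gcloud database-migration connection-profiles create postgresql source-profile \
        --region=us-central1 \
        --host=SOURCE_INSTANCE_IP \
        --port=5432 \
        --username=postgres \
        --password=SOURCE_PASSWORD

    # destination-profile is assumed to be a Cloud SQL connection profile
    # created beforehand in the destination project (creating it provisions
    # the target instance; see the DMS docs).

    # A CONTINUOUS job copies the initial data, then streams changes until
    # you promote the destination, which keeps the cutover window small.
    gcloud database-migration migration-jobs create pg-move-job \
        --region=us-central1 \
        --type=CONTINUOUS \
        --source=source-profile \
        --destination=destination-profile

    gcloud database-migration migration-jobs start pg-move-job --region=us-central1

    # When replication lag is near zero, promote the destination and repoint
    # clients (e.g. the PgBouncer config) at it.
    gcloud database-migration migration-jobs promote pg-move-job --region=us-central1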

Developers sometimes want to migrate their relational database with "zero" downtime. While downtime can be reduced, migration cannot be done without any impact on applications (that is, truly zero downtime), because replication inevitably introduces replication lag.

The instant the decision is made to "migrate" all applications from one replica to another, applications (and therefore developers) have to wait (that is, incur downtime) for at least as long as the replication lag before using the new database; a query for checking that lag is sketched after the list below. In practice, the downtime is a few orders of magnitude higher (minutes to hours) because:

  • Database queries can take multiple seconds to complete, and in-flight queries must be completed or aborted at the time of migration.
  • The database has to be "warmed up" if it has substantial buffer memory, which is common in large databases.
  • If database shards have duplicate tables, some writes may need to be paused while the shards are being migrated.
  • Applications must be stopped at the source and restarted in GCP, and connections to the GCP database instance must be established.
  • Network routes to the applications must be rerouted. Depending on how DNS entries are set up, this can take some time.
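As a minimal sketch of the lag check mentioned above, assuming direct psql access to the source instance (host, user, and database names are placeholders; these are the PostgreSQL 9.6 function/column names, renamed to pg_current_wal_lsn()/replay_lsn in PostgreSQL 10+):

    # Replication lag in bytes for every replica attached to the source.
    psql -h SOURCE_HOST -U postgres -d mydb -c "
      SELECT application_name,
             client_addr,
             pg_xlog_location_diff(pg_current_xlog_location(),
                                   replay_location) AS lag_bytes
      FROM pg_stat_replication;"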

All of these can be reduced with some planning and some "cost" (for example, certain operations not being permitted for some time before/after the migration).

Decreasing the load on the source DB until the migration completes might also help, making the downtime less disruptive.

Other considerations (a gcloud sketch follows the list):

  1. Increase the machine type to increase network throughput.
  2. Increase SSD size for higher IOPS/MBps.
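Both adjustments can be made with gcloud, roughly as follows (the instance name, tier, and size are placeholders; changing the tier restarts the instance, and storage can only be increased, never shrunk):

    # Move to a larger machine type for more network throughput (restarts the instance).
    gcloud sql instances patch my-instance --tier=db-custom-4-16384

    # Grow the SSD; on Cloud SQL, larger disks get higher IOPS/MBps.
    gcloud sql instances patch my-instance --storage-size=100GB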


The most intuitive way would be to export the data from the Cloud SQL instance to a GCS bucket and import it into a new instance in the new project. This implies some downtime, and you would have to manually create the instance in the target project with the same configuration as the original; it does require some manual steps, but it is a simple and verifiable way to copy the data to an instance in a different project.
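A minimal sketch of that path (the bucket, instance, database, and project names are placeholders; each instance's service account must first be granted read/write access to the bucket):

    # In the source project: dump one database to a GCS bucket.
    gcloud sql export sql source-instance gs://my-transfer-bucket/dump.sql.gz \
        --database=mydb --project=source-project

    # In the target project: create a matching instance, then import.
    gcloud sql instances create target-instance \
        --database-version=POSTGRES_9_6 --tier=db-custom-2-8192 \
        --project=target-project
    gcloud sql import sql target-instance gs://my-transfer-bucket/dump.sql.gz \
        --database=mydb --project=target-project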
