@devgraph · Aug 13, 2021 · Originally posted on www.devgraph.com
The average cost of database downtime worldwide has risen from $300,000 to almost $1,000,000 between 2014 and 2020, according to Gartner and Statista, respectively.
Downtime is one of the main risks that accompany database migrations to the cloud. But with the right tools in hand, you can eliminate it and arrive safely at the advantages of cloud computing.
Cloud Databases Are Key to Your Business Success
By the end of 2021, 59% of all companies will be using the cloud for their main workloads, according to IDG. This shift is driven mostly by three main factors.
On your way to a cloud database, you should be aware of the obstacles.
Database downtime can be unnerving even for your long-standing customers and a deal-breaker for new ones. When your application does not show up-to-date information, users cannot work with it properly. Downtime can have various causes, for instance application errors.
Applications remain bound to their databases through the SQL queries that run in the background. Once you've renamed an entity in the application source code, you have to reflect that change in the database or in the queries that the application sends to it.
Such changes are easily overlooked, and even when they are not, implementing them takes time.
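To make this concrete, here is a small, hypothetical sketch (SQLite with made-up table and column names) of how a rename in one layer breaks a query in the other when the change is not propagated:

```python
import sqlite3

# Hypothetical scenario: the application renamed "customer" to "client",
# but one background query still references the old column name.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE orders (id INTEGER PRIMARY KEY, client_name TEXT)")
conn.execute("INSERT INTO orders (client_name) VALUES ('Acme')")

try:
    # This query was not updated after the rename and now fails at runtime.
    conn.execute("SELECT customer_name FROM orders").fetchall()
except sqlite3.OperationalError as err:
    print(f"Query broken by the rename: {err}")

# The fix has to be applied to the query (or the schema) as well:
print(conn.execute("SELECT client_name FROM orders").fetchall())
```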
Ideally, to prevent database downtime, no adjustment in one layer should ever influence the other, especially during the cloud transition.
Thus, the first step on the way to a seamless database migration to the cloud is decoupling the application from the database. It can be done by creating an additional tier between the database and the application, making them independent from each other and more robust against failures.
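As a rough illustration of such an intermediate tier, the sketch below hides the physical database behind a thin class so the backend can be swapped without touching application code. The class name and connection details are hypothetical, not any real product's API:

```python
import sqlite3

class DatabaseTier:
    """Thin indirection layer between the application and the database."""

    def __init__(self, dsn: str):
        self._dsn = dsn
        self._conn = sqlite3.connect(dsn)

    def query(self, sql: str, params: tuple = ()):
        # The application only ever calls this method; it never sees the connection.
        return self._conn.execute(sql, params).fetchall()

    def switch_backend(self, new_dsn: str) -> None:
        # Re-point the tier at another database (e.g. the cloud copy)
        # without changing any application code that calls query().
        self._conn.close()
        self._dsn = new_dsn
        self._conn = sqlite3.connect(new_dsn)

db = DatabaseTier(":memory:")
db.query("CREATE TABLE t (x INTEGER)")
db.switch_backend(":memory:")  # the calling code above stays unchanged
```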
But before we proceed with this, let's take some time to go through the database migration issue in detail and outline possible solutions for the pitfalls that may be waiting for you during the transition period. As a bonus, we'll show you a shortcut.
Cloud Migration: Database-Related Obstacles
If you decide to move your application and/or the underlying database to the cloud, the whole thing may get tricky. First of all, by moving we actually mean creating a copy of the original database and transferring the data into the new one.
Basically, there are three widely used scenarios for doing this. Their main differences are the duration of a possible downtime and the amount of business risk you have to accept in exchange for reducing that downtime.
As we mentioned before, a database receives many queries that may be updating and reading the same records with minimal time lag. During the migration, the challenge is to create a complete copy of your data that catches the most recent updates.
Offline copy migration solves this issue by simply shutting down the database (and, consequently, the application) for the time it takes to generate a copy by means of a simple export and import of the data. During this downtime, the original database is disconnected from the application and the new copy is connected to it.
In reality, this may mean a few hours of database and application unavailability. It may work if you migrate an internal application used only by your employees and you do it on a weekend or during night hours. For commercial applications, however, this may lead to reduced revenue and a poor customer experience.
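A bare-bones sketch of this offline approach is shown below, with SQLite standing in for both databases; in production the same export-and-import step would be done with your database's native dump and restore tools while the application is stopped:

```python
import sqlite3

source = sqlite3.connect(":memory:")
source.execute("CREATE TABLE users (id INTEGER PRIMARY KEY, name TEXT)")
source.execute("INSERT INTO users (name) VALUES ('Ada'), ('Grace')")
source.commit()

# --- downtime starts: the application is disconnected from the source ---
target = sqlite3.connect(":memory:")
target.executescript("\n".join(source.iterdump()))  # export + import in one go
# --- downtime ends: the application is reconnected, now pointing at the target ---

print(target.execute("SELECT name FROM users").fetchall())
```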
In the second scenario, you create a read replica of your original database. The master copy keeps receiving updates and streams them to the replica. Once you are ready to flip the switch, you swap the two, granting write access to the former replica.
Indeed, in this scenario you still have some short downtime. You also run the risk that a few last-minute queries handled by the original database won't show up in the new one. This may be acceptable for small businesses, internally used applications, or applications with clear daily traffic peaks and troughs.
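The sketch below is a deliberately simplified, toy illustration of that cutover, using plain dictionaries and a list as the "change log"; real replication and promotion are handled by the database engine itself:

```python
# Toy model: primary streams its changes to the replica, then roles are swapped.
primary = {"data": {1: "alice", 2: "bob"}, "writable": True}
replica = {"data": {}, "writable": False}
change_log = list(primary["data"].items())  # changes streamed from the primary

# Continuous replication: apply the primary's changes to the replica.
for key, value in change_log:
    replica["data"][key] = value

# Cutover: a brief pause in writes, then the roles are swapped.
primary["writable"] = False          # short downtime starts here
assert replica["data"] == primary["data"], "last-minute changes may be missing"
replica["writable"] = True           # the former replica now accepts writes
```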
In the third, master/master scenario, unlike the previous method, you write data to both databases and can "turn off" either of them at any time. The databases need to be synced to keep their records identical.
Here, a lot can get lost in transit: you never know which database is the most up to date, so the synchronization process requires continuous monitoring. Together with the complexity of the maintenance, this can lead you to postpone the final switch.
The downtime is close to zero but the risks associated with this strategy are the highest among the three scenarios.
Still, the temptation of a quick switch is high, and there is a solution for this: incremental batch reading. This method adds a new column to each table in the database that identifies which records have already been synced and which have not. This allows you to at least spot the gaps and eliminate them later with a manual one-time synchronization.
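Here is a minimal sketch of that idea, assuming a hypothetical `synced` flag column on a made-up `orders` table (SQLite is used only to keep the example self-contained):

```python
import sqlite3

source = sqlite3.connect(":memory:")
target = sqlite3.connect(":memory:")

source.execute("CREATE TABLE orders (id INTEGER PRIMARY KEY, total REAL, synced INTEGER DEFAULT 0)")
target.execute("CREATE TABLE orders (id INTEGER PRIMARY KEY, total REAL)")
source.executemany("INSERT INTO orders (total) VALUES (?)", [(10.0,), (25.5,), (7.3,)])

BATCH_SIZE = 2

def sync_batch() -> int:
    """Copy one batch of not-yet-synced rows and mark them as synced."""
    rows = source.execute(
        "SELECT id, total FROM orders WHERE synced = 0 LIMIT ?", (BATCH_SIZE,)
    ).fetchall()
    for row_id, total in rows:
        target.execute("INSERT OR REPLACE INTO orders (id, total) VALUES (?, ?)", (row_id, total))
        source.execute("UPDATE orders SET synced = 1 WHERE id = ?", (row_id,))
    source.commit()
    target.commit()
    return len(rows)

while sync_batch():
    pass  # keep syncing until no unsynced rows remain

print(target.execute("SELECT COUNT(*) FROM orders").fetchone())
```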
Apart from the difficult strategic choice, there are other factors that can influence your application downtime and, therefore, your approach to database migration. We gathered the three most common challenges and a few life hacks for dealing with them.
Preparing a database migration is often a good time to take inventory: for instance, to rethink the way your data is organized in the database and to redesign it accordingly. This manifests itself in database schema changes.
As a result, your source and target databases have different schemas. The initial setup of a replica, as well as routine synchronization between the two, will require a schema translation tool. Alternatively, you can use an extract-transform-load (ETL) tool, provided that you selected the first or second scenario.
When you move the entire application to the cloud, you may decide to do some refactoring that changes how your application works. Consequently, the application data will change. For instance, instead of using UTC in a time column, you start using your local time format. Again, you'll need a synchronization and ETL solution to transfer the records correctly.
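As a hedged illustration of such a transform step, the sketch below converts a UTC timestamp column to local time while copying records; the record layout, column name, and time zone are assumptions made for the example:

```python
from datetime import datetime, timezone
from zoneinfo import ZoneInfo

LOCAL_TZ = ZoneInfo("Europe/Berlin")  # assumed local time zone

def transform_row(row: dict) -> dict:
    """Extract -> transform -> load step for a single record."""
    utc_ts = datetime.fromisoformat(row["created_at"]).replace(tzinfo=timezone.utc)
    row["created_at"] = utc_ts.astimezone(LOCAL_TZ).isoformat()
    return row

source_rows = [{"id": 1, "created_at": "2021-08-13T10:00:00"}]
target_rows = [transform_row(dict(r)) for r in source_rows]
print(target_rows)
```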
Indeed, cloud migration often means a functional upgrade. For instance, your old database may have lacked many-to-many relationships between tables, did not allow triggers or materialized views, or was not very good at partitioning.
The new cloud database may offer all of this. The question is how to match the data, especially if you choose the master/master scenario, where you have to perform two-way updates all the time.
In very rare cases, it can be the other way around and you perform a functional downgrade. If, due to cost factors, you decide to adopt a cloud database that offers only the essentials, you have to cover the discrepancies before the final switch happens: for instance, solve the many-to-many issue by adding a few more tables and establishing connections between them, or get rid of old data instead of partitioning the tables.
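For instance, a missing many-to-many relationship is commonly worked around with a junction table; the sketch below shows this with made-up table names (SQLite syntax, for illustration only):

```python
import sqlite3

db = sqlite3.connect(":memory:")
db.executescript("""
    CREATE TABLE students (id INTEGER PRIMARY KEY, name TEXT);
    CREATE TABLE courses  (id INTEGER PRIMARY KEY, title TEXT);
    -- The junction table turns one many-to-many relation into two one-to-many relations.
    CREATE TABLE enrollments (
        student_id INTEGER REFERENCES students(id),
        course_id  INTEGER REFERENCES courses(id),
        PRIMARY KEY (student_id, course_id)
    );
""")
```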
A Smooth Migration: Managing Your Replicas Successfully
The master-and-replica setup is not limited to database migration. You can use the master (the primary database) and one or more replicas, also called secondaries or slaves, for other purposes. For instance, you might want to split the traffic between them, writing data only to the master and reading data only from the replicas, to enhance your database performance.
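A minimal sketch of such read/write splitting is shown below; the router class, server names, and the naive SELECT check are illustrative assumptions, and products like ScaleArc do this transparently and far more robustly:

```python
import itertools

class ReadWriteRouter:
    def __init__(self, primary, replicas):
        self.primary = primary
        self._replica_cycle = itertools.cycle(replicas)

    def route(self, sql: str):
        """Send writes to the primary, spread reads across the replicas."""
        is_read = sql.lstrip().upper().startswith("SELECT")
        return next(self._replica_cycle) if is_read else self.primary

router = ReadWriteRouter(primary="primary-db", replicas=["replica-1", "replica-2"])
print(router.route("SELECT * FROM orders"))          # -> replica-1
print(router.route("UPDATE orders SET total = 0"))   # -> primary-db
```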
When you manage your master and secondary databases in a clever way, this helps you not only speed up the routine work of your application but also secure your cloud migration.
The magic cure is a database load balancer. It can be applied to databases without a master-slave relationship and to any database that resides on more than one server.
As the name suggests, it balances the load on your database. Load balancers are middleware: they sit between applications and database server farms.
With a load balancer, your application gets a single, universal endpoint and your database resources are used more efficiently. The result is higher query throughput and lower latency.
The load balancer distributes queries across servers using one of several routing methods.
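Which methods are available depends on the product; purely as an illustration, the sketch below shows two commonly used strategies, round-robin and least-connections, with made-up server names:

```python
import itertools

servers = {"db-1": 0, "db-2": 0, "db-3": 0}  # server -> active connections
round_robin = itertools.cycle(servers)

def pick_round_robin() -> str:
    """Each query goes to the next server in a fixed rotation."""
    return next(round_robin)

def pick_least_connections() -> str:
    """Each query goes to the server currently handling the fewest connections."""
    return min(servers, key=servers.get)

target = pick_least_connections()
servers[target] += 1  # track the new connection
print(pick_round_robin(), target)
```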
When you want to migrate your database to the cloud, a load balancer can be used in the following scenario. Not every load balancer is made for this, however, so we will explain how ScaleArc performs the task.
Usually, your database is tightly coupled to the application. You can make this coupling looser and more flexible by putting ScaleArc between the database and the application, and then proceed with the migration.
Inside ScaleArc, you create a cluster, or use an existing one, for the source and target databases. Then you add a read-and-write target database to that cluster. It is put into standby mode and won't receive any traffic until the cutover.
ScaleArc will move all the data to the target database and keep syncing new data. Once you are ready to make the target your primary database, switch ScaleArc's connection over to the new database. The old one will still be there, but it no longer receives any traffic; your application reads from and writes to the former replica.
Since ScaleArc remains connected to your application during the whole migration period, this method has no downtime.
The ScaleArc console has an intuitive cockpit where you can configure your database clusters before migration. We reduced the number of configuration parameters to the necessary minimum, so you need just a couple of minutes to fill in the form and start the migration process.
A few steps in the ScaleArc console are enough to begin your migration.
While the migration is running, you will be notified by email about important status changes and about its completion. You can also check the progress manually in the console and monitor how many tables have already been written to the new database.
Last but not least, you can schedule the cutover time.
ScaleArc: More Than Just Balancing
With ScaleArc, you have a full grip on your migration risks. A seamless switchover is made possible thanks to the integration with Microsoft SQL Server™ AlwaysOn technology and MySQL™ automatic failover.
As we mentioned before, ScaleArc's main purpose is routine database load balancing. We would like to highlight a few of its advantages to show you why this can be of great help to you even after you've finished your migration.
Load balancing works in three main directions: it prevents maintenance downtime, doubles your website performance, and boosts your revenue. ScaleArc stands out among other tools because it offers a few particular features on top of the basic functionality.
If you have data-heavy applications, such as webshops or booking or fintech platforms, you may keep one or a few secondary standby databases that can support the main one when it is down. Outside of database migration, a switch from the primary to a secondary database is called a failover.
ScaleArc offers an automated failover process. This means that not only do you get notified when a database is interrupted, but ScaleArc automatically redirects the traffic to the secondary databases. Users won't even notice the difference and can keep working with your application normally.
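Conceptually, automatic failover boils down to a health check plus a redirect, as in the simplified sketch below; the check, server names, and selection logic are placeholders, not ScaleArc's internal mechanism:

```python
def is_healthy(server: str) -> bool:
    # Placeholder health check; in practice this would be a ping or login probe.
    return server != "primary-db"  # simulate a primary outage

def pick_target(primary: str, standbys: list[str]) -> str:
    if is_healthy(primary):
        return primary
    for standby in standbys:
        if is_healthy(standby):
            return standby  # traffic is transparently redirected here
    raise RuntimeError("no healthy database available")

print(pick_target("primary-db", ["standby-1", "standby-2"]))
```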
During a failover, the primary and secondary databases are not synchronized, which leads to a replication lag. Once the failover ends, the primary database receives a flood of new requests, which can result in service outages.
It is important to queue the incoming requests to prevent further trouble. The issue is addressed by surge queuing, which ScaleArc offers as a feature.
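The idea behind surge queuing can be illustrated with a toy sketch like the one below, where queued requests are released in small batches instead of hitting the database all at once; the queue, batch size, and request names are assumptions, not ScaleArc's implementation:

```python
from collections import deque

surge_queue: deque[str] = deque()
MAX_CONCURRENT = 2  # how many queries the database may receive per drain step

def enqueue(request: str) -> None:
    surge_queue.append(request)

def drain_step() -> list[str]:
    """Release at most MAX_CONCURRENT queued requests to the database."""
    batch = []
    while surge_queue and len(batch) < MAX_CONCURRENT:
        batch.append(surge_queue.popleft())
    return batch

for r in ["q1", "q2", "q3", "q4", "q5"]:
    enqueue(r)
while surge_queue:
    print("sending to database:", drain_step())
```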
Write requests are very important since they can change the results returned by a read request that follows. That's why it is often better to separate the two types of requests completely and perform write operations only on the primary database, while using the secondary ones for read requests.
This way, the information gets updated more quickly, and the feature boosts the overall performance of your application.
ScaleArc will collect inactive connections and store them temporarily in a pool instead of closing them immediately. This works faster than establishing a completely new connection every time.
Multiplexing goes beyond that and allows a single connection to be reused by multiple clients.
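Here is a minimal connection-pool sketch, assuming SQLite connections and a fixed pool size for illustration; multiplexing (sharing one connection between clients) is not shown:

```python
import sqlite3
from collections import deque

class ConnectionPool:
    def __init__(self, dsn: str, size: int = 3):
        self._dsn = dsn
        self._idle = deque(sqlite3.connect(dsn) for _ in range(size))

    def acquire(self) -> sqlite3.Connection:
        # Reuse an idle connection if one exists; open a new one only when the pool is empty.
        return self._idle.popleft() if self._idle else sqlite3.connect(self._dsn)

    def release(self, conn: sqlite3.Connection) -> None:
        self._idle.append(conn)  # keep the connection for the next caller

pool = ConnectionPool(":memory:")
conn = pool.acquire()
conn.execute("SELECT 1")
pool.release(conn)  # returned to the pool, not closed
```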
ScaleArc caches (saves) responses to the most common queries. This allows quicker delivery of content, since a query does not need to be processed anew when it arrives again. This reduces waiting time for your applications, making them more user-friendly.
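A hedged sketch of this kind of response caching is shown below: results of repeated read queries are kept in memory with a time-to-live, so a second identical query skips the database. The cache structure and TTL are assumptions made for illustration:

```python
import time

CACHE_TTL_SECONDS = 30
_cache: dict[str, tuple[float, list]] = {}

def cached_query(sql: str, run_query) -> list:
    """Return a cached result if it is still fresh, otherwise hit the database."""
    now = time.monotonic()
    if sql in _cache and now - _cache[sql][0] < CACHE_TTL_SECONDS:
        return _cache[sql][1]            # cache hit: no database round trip
    result = run_query(sql)              # cache miss: query the database
    _cache[sql] = (now, result)
    return result

print(cached_query("SELECT * FROM products", lambda q: ["row1", "row2"]))
print(cached_query("SELECT * FROM products", lambda q: ["row1", "row2"]))  # served from cache
```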
Indeed, ScaleArc is a highly transparent tool that allows you to collect data about your database performance and all failovers and downtimes.
At ScaleArc, we have thought about your need for a real-time view of all processes. We provide a live monitor for your database load. You can break down query traffic, handpick problematic queries and candidates for caching, and perform bucketing on your databases.
You have all your log data in one place. You do not have to generate logs for each database and then bring them together in a third-party analytics tool.
Moreover, you can retrieve historical data at any time and create forecasts based on it.
If you still want to use your own tool for analyzing logs, ScaleArc supports REST API integrations with monitoring and management tools. You can customize your log analytics to catch weak points.