I met with a customer today who described for me the challenges they had in their previous 10g to 11g Oracle database upgrade. Their requirements boiled down to this:
• The business couldn't afford a lengthy cutover time for the upgrade.
• The business couldn't afford any data loss.
• The business had to be able to roll back the upgrade in the event of a failure.
• The 8-10 downstream systems had to be upgraded very soon afterward.
To meet these requirements, they had to make a variety of difficult choices that exacerbated all of the limitations and bottlenecks an upgrade can pose. Instead of upgrading their 10g database in place, they had to make a full copy, upgrade that copy to 11g, and figure out how to ship the accumulated changes from the old 10g to the new 11g, a window during which both databases were essentially down. And then, once the cutover was complete, there was still the job of taking a backup of the new 11g database that could be used to create all of the downstream systems. They faced most of the typical bottlenecks of an upgrade:
1. For databases in the 5 TB+ range, the time it takes to run a database upgrade can be significant.
2. An upgrade is typically a one-way transformation on a physical file system.
3. Downstream systems either go through the upgrade as well or have to be restored or cloned from the "new" database, which can also be very expensive.
What’s the real bottleneck?
An Oracle database is just a collection of blocks. Even if you go from 10g to 11g, typically you're only changing a few of those blocks. The reason we're faced with choices such as upgrade-or-re-clone for our downstream environments is that we just don't have the data agility to rapidly reproduce the change – we are forced to pay a tax in time, copy, or transmission to make it happen. But, again, the real change in the data is minimal. What's the real bottleneck? Data Agility.
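To make that concrete, here is a minimal Python sketch (not a Delphix tool) that estimates how much of a datafile an upgrade actually touched, by hashing fixed-size blocks of a "before" and "after" copy. The 8 KB block size and the file names are assumptions for illustration only; adjust them to your own datafiles.

```python
import hashlib

BLOCK_SIZE = 8192  # bytes; a common Oracle data block size (assumption)

def block_hashes(path: str) -> list[bytes]:
    """Return a digest for each fixed-size block in the file."""
    hashes = []
    with open(path, "rb") as f:
        while chunk := f.read(BLOCK_SIZE):
            hashes.append(hashlib.sha256(chunk).digest())
    return hashes

def changed_fraction(before_path: str, after_path: str) -> float:
    """Fraction of blocks whose contents differ between the two copies."""
    before = block_hashes(before_path)
    after = block_hashes(after_path)
    total = max(len(before), len(after))
    changed = sum(1 for a, b in zip(before, after) if a != b)
    changed += abs(len(before) - len(after))  # blocks added or removed
    return changed / total if total else 0.0

if __name__ == "__main__":
    # Hypothetical file names, purely for illustration.
    pct = changed_fraction("users01_10g.dbf", "users01_11g.dbf") * 100
    print(f"Upgrade touched roughly {pct:.1f}% of blocks")
```

Run something like this against a rehearsal copy and you will typically see that the upgrade rewrote only a small slice of the blocks – which is exactly why re-copying entire multi-TB databases to propagate it is such a poor trade.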
Delphix to the rescue
I like to think of a database upgrade as made up of three distinct process steps. First, there's the rehearsal, where you create copies of your existing database and rehearse the upgrade until you're confident that everything will go smoothly. Second, there's the cutover, where you either quiesce and convert in place, or stand up a new target, quiesce, and apply the delta to it. And third, there's the propagation, where you take the newly minted environment and copy it around to Prod Support, Reporting, Dev, Test, QA, etc., to bring everyone up to the same version.
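As a rough illustration of that three-step shape, here is a hedged Python sketch. Every helper in it (provision_clone, run_upgrade, validate, rewind, refresh_from) is a hypothetical stub standing in for whatever your tooling provides – none of these are real Delphix or Oracle APIs; the point is the shape of the workflow, not the calls.

```python
# Hypothetical stubs so the sketch runs; substitute your own tooling.
def provision_clone(source: str) -> str:
    print(f"provisioning thin clone of {source}")   # minutes, not hours
    return f"{source}-clone"

def run_upgrade(db: str) -> bool:
    print(f"running upgrade scripts on {db}")
    return True

def validate(db: str) -> bool:
    print(f"validating {db}")
    return True

def rewind(db: str) -> None:
    print(f"rewinding {db} to its pre-upgrade state")

def refresh_from(clone: str, source: str) -> None:
    print(f"refreshing {clone} from upgraded {source}")

def rehearse(source: str, max_attempts: int = 5) -> None:
    """Step 1: iterate on a disposable clone until the upgrade is clean."""
    clone = provision_clone(source)
    for _ in range(max_attempts):
        if run_upgrade(clone) and validate(clone):
            return                      # keep the proven runbook
        rewind(clone)                   # reset beats restore-from-backup
    raise RuntimeError("rehearsal never passed validation")

def propagate(source: str, downstream: list[str]) -> None:
    """Step 3: point each downstream virtual copy at the new version."""
    for clone in downstream:
        refresh_from(clone, source)

if __name__ == "__main__":
    rehearse("prod10g")
    propagate("prod11g", ["dev", "test", "qa", "reporting"])
```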
Delphix has several powerful features that cut through the noise and get to the data agility problem: Virtual-to-Physical, Any Point In Time Database Clones on Demand, Database Rewind, and upgradeable source and virtual databases.
Consider this same client's situation if they had used Delphix. Since Delphix keeps a near-real-time version of the database to be upgraded on tap, and can spin up a copy of that database in minutes, it's easy to reduce the cycle time of each testing iteration. So the big gorilla in the room – the time it takes to roll back and reset for each rehearsal – simply goes away with Delphix. Second, if you're using Delphix to virtualize all of the downstream copies of the databases, they take up minimal space both before and after the upgrade (again, since the upgrade typically doesn't change more than a small percentage of blocks). Third, if you upgrade your primary data source from 10g to 11g, then the operation to "upgrade" the virtual downstream systems can literally be a couple of clicks and a few minutes.
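To see why the virtual copies stay small, consider a toy copy-on-write block store. This is a deliberate simplification, not Delphix's actual implementation: each clone records only the blocks it has changed and shares everything else with the base snapshot.

```python
class ThinClone:
    """A toy copy-on-write clone over a shared base snapshot."""

    def __init__(self, base: list[bytes]):
        self.base = base                    # shared, read-only blocks
        self.delta: dict[int, bytes] = {}   # only blocks this clone changed

    def read(self, block_no: int) -> bytes:
        return self.delta.get(block_no, self.base[block_no])

    def write(self, block_no: int, data: bytes) -> None:
        self.delta[block_no] = data         # base stays untouched

    def private_blocks(self) -> int:
        return len(self.delta)

if __name__ == "__main__":
    # A hypothetical 1,000-block "database"; an upgrade that rewrites
    # only dictionary metadata touches a handful of those blocks.
    base = [bytes(8192) for _ in range(1000)]
    clone = ThinClone(base)
    for block_no in (0, 1, 2, 512):         # blocks the "upgrade" changed
        clone.write(block_no, b"\x01" * 8192)
    print(f"clone stores {clone.private_blocks()} of {len(base)} blocks "
          f"({clone.private_blocks() / len(base):.1%})")
```

In this toy model, the clone pays storage only for the four rewritten blocks; scale that up and you get the "minimal space both before and after the upgrade" behavior described above.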
So What?
In my experience, the vast majority of the time people spend on their upgrade projects is not the execution of the actual upgrade script; it's mostly migration and cutover – moving bits, synchronizing bits, and so on. When you see these things as the Data Agility problems they are, and find ways to optimize those operations with a tool like Delphix, you realize that the only real bottleneck left is the actual upgrade script – and that's the one thing you can't change anyway.
The power of this approach to upgrading databases is significant. I can recall a customer who had put together an entire six-week timeline to prepare, test, and execute the cutover of a large multi-TB data warehouse. With Delphix, that entire operation was complete in six hours. With thin cloning from Delphix, you can remove the bottlenecks related to your Data Agility and focus on the true bottleneck, reducing your Total Cost of Data by delivering your upgrades faster and at a fraction of the cost you pay today.