by Woody Evans
You are paying a data tax: the cost of moving and copying data, over and over again. Moving the data IS the big gorilla, and this gorilla of a data tax is hitting your bottom line hard. When moving data is too hard, the data in non-production systems such as reporting, development, or QA grows stale, and the older the data, the less actionable intelligence your BI or analytics can give you. The less intelligence, the more missed revenue. The longer it takes you to match data from system A and system B, the more opportunities your customer has to get frustrated that the left hand hasn't talked to the right hand, that you have no intimacy with them, that you don't even seem to know them. The longer it takes to test your systems (because it takes so long to reload the data), the fewer real features make it to market, and the more you put your market share at risk.
Business skeptics are saying to themselves that data processes are just a rounding error in most of their project timelines, and that surely their IT has developed processes to fix that. That's the fundamental mistake. The very large and often hidden data tax lies in all the ways we've optimized our software, data protection, and decision systems around the expectation that data is simply not agile. The belief that there is no agility problem is part of the problem.
How big is the data tax? One way to measure it is to look at the improvement in project timelines at companies that have eliminated the tax by implementing a data virtualization appliance (DVA) and creating an agile data platform (ADP). Agile data is data delivered to the exact spot it's needed, just in time, and with far less time, cost, and effort. By comparing productivity rates after implementing an ADP with those before, we can put a price on the data tax of going without one. IT experts building mission-critical systems for Fortune 500 companies have seen real project returns of 20-50% productivity gains after implementing an ADP. That's a big tax to pay. The data tax is real, and once you understand how real it is, you realize how many of your key business decisions and strategies are shaped by the agility of the data in your applications.
Data Agility by the Numbers
Let's take a high-level look at the kinds of cost, revenue, and risk impact that agile data can have on your business in four key areas: Business Level Initiatives, Application Operations, IT Operations, and IT Systems. In each case, we're incurring costs we could avoid, missing revenue we could capture, or accepting business risk we don't have to.
Business Level
At the business level, we've lived with the constraint of slow data for a long time. We may offshore the data management task of our nightly BI refresh. We may only allow business users to run their reports at night, because the cost to copy data off production is too high and the need for fresh data never goes away. We may live with week-old data in our BI because our network is built for data protection and can only handle a full copy during off hours. To get features out the door, we may spend less time testing, or simply accept that there will be more errors post-production and ignore that cost because it is borne by operations. But what if data were agile?
If data were agile, then instead of paying two full-time offshore resources for data that is already a day old when we get it the next morning, we could have minutes-old data in minutes, automatically. With agile data, margins go up and revenue opportunities multiply (wouldn't it be good for Walmart in California to know that Lego Batman sold like hotcakes as soon as it hit the shelves in New York that morning?). Multiply that by 100 applications, and you're talking about real money and competitive advantage.

Instead of running 5 tests in two weeks (because it takes me 2 days to roll back after each of my 1-hour tests) and paying the cost of bugs slipping into production, what if I could run 15 tests in those same two weeks and ship no bugs at all? Costs fall, quality rises, customer satisfaction rises, competitive position strengthens. Even better, what if I could get more features into my release because I knew my quality testing was robust enough to handle it, and I had enough time in my schedule? How much is having new features faster worth?

What about really big problems, like consolidating data center real estate or moving to the cloud? If you can non-disruptively collect the data and easily, repeatably present it in the target data center, you take huge chunks out of these migration timelines. With data so easy to move on demand, you also neutralize the hordes of users who insist that there isn't enough time, or that it's too hard or too risky. Moving the data IS the big gorilla, and eliminating the data tax is crucial to the success of your company. If huge databases can be ready at target data centers in minutes, the rest of the excuses are flimsy. We know from experience that some data center consolidations carry $1B+ price tags. Taking even 30% of the cost out of that, and cutting the timeline, is a powerful way to improve margin.
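For readers who want to check the testing numbers, here is a back-of-the-envelope sketch in Python. The 1-hour test, 2-day rollback, and two-week window come straight from the example above; the 15-minute virtual rollback and the roughly 4 hours of per-cycle prep and analysis in the fast case are illustrative assumptions (in the slow case, that work fits inside the 2-day wait).

# Back-of-the-envelope check of the testing arithmetic above.
# The 1-hour test, 2-day rollback, and two-week window come from
# the example; the 15-minute rollback and ~4 hours of per-cycle
# prep/analysis for the agile case are assumptions, not measurements.

WORKDAY_HOURS = 8
window = 10 * WORKDAY_HOURS              # two working weeks = 80 hours

slow_cycle = 1 + 2 * WORKDAY_HOURS       # 1h test + 2-day rollback = 17h
fast_cycle = 1 + 0.25 + 4                # 1h test + 15m rollback + prep

print(int(window // slow_cycle))         # 4 -> roughly the 5 tests above
print(int(window // fast_cycle))         # 15 tests in the same window

The exact figures matter less than the shape of the math: when the rollback dominates the cycle, cutting it from days to minutes roughly triples test throughput before counting any other gains.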
When the cost of getting fresh data falls precipitously, better decisions and faster delivery mean better margins, higher profitability, stronger revenue growth, and faster time to market. And for those capable and willing to change the way they do business to take maximum advantage, it means better EPS, higher EBITDA, and a business ready to succeed.
Application Operations
Forrester estimates that companies will spend $97.5B on application outsourcing and management in 2013 (Forrester publication 83061). For large ecosystems, such as Oracle E-Business Suite or SAP, data management can sprawl, with each landscape consuming dozens of databases, and landscapes being built not only for lifecycle copies for current development (Dev, Test, QA, etc.) but for multiple streams of development (Release 2.1, 2.2, 2.3, 2.4, etc.).
For application teams, data constraints are often masked as something else. For example, maybe the application is only allocated so much storage, so there can only be so many full-size copies, so the developers make do with stripped-down copies (which can produce unexpected results in production) or shared copies (which often involve all sorts of extra reset/rollback operations, as well as freezes and holds so that one developer doesn't clobber another). We trade productivity for cost in this phase. But the primary cost sink is the data: storing it, moving it, copying it, waiting to be able to use it. So the business responds. Sometimes it lives with the risk of subsets and slower test cycles by pushing the timeline: saving cost at the expense of delivering to market quickly. Sometimes it invests in a lot more hardware (and, for many of our customers, a runaway storage bill): delivering quickly at the expense of higher cost and lower margin. Sometimes it just squeezes testing: delivering lower-quality applications sooner at the cost of higher remediation and lower customer satisfaction. The point is that data is the big culprit in all of these.
Agile data, virtualized data, uses a small footprint. A truly agile data platform can deliver full-size datasets more cheaply than subsets. It can move the time or location pointer on its data very rapidly, and can store any version that's needed in a library at remarkably low cost. And it can massively improve application quality by making it reliable and dead simple to return one or many databases to a common baseline in a very short amount of time. Applications delivered with agile data can afford many more full-size virtual copies, eliminating the wait time, extra work, and side effects caused by sharing. With the cost of data falling so dramatically, the business can radically increase its utilization of existing hardware and storage, delivering much more rapidly without additional cost. An agile data platform presents data so rapidly and reliably that the data becomes commoditized, and servers that sit idle because it would just take too long to rebuild them can now switch roles on demand.
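For the technically curious, here is a minimal Python sketch of the copy-on-write idea behind such a platform: many full-size "virtual copies" share one set of read-only blocks, each copy records only the blocks it has changed, and rolling back to a baseline costs time proportional to the changes, not to the size of the database. The names here (GoldenImage, VirtualCopy, snapshot, rollback) are illustrative, not any vendor's actual API.

# Minimal sketch of copy-on-write data virtualization: virtual copies
# share one immutable source dataset and store only their own deltas.
# Illustrative only; real platforms work at the storage-block level.

class GoldenImage:
    """One shared, read-only source dataset (e.g., a production copy)."""
    def __init__(self, blocks):
        self.blocks = dict(blocks)   # block_id -> data, never mutated

class VirtualCopy:
    """A full-size 'copy' that stores only its deltas against the image."""
    def __init__(self, image):
        self.image = image
        self.deltas = {}             # block_id -> locally changed data
        self.baseline = {}           # snapshot of deltas to roll back to

    def read(self, block_id):
        # Reads fall through to the shared image unless changed locally.
        if block_id in self.deltas:
            return self.deltas[block_id]
        return self.image.blocks[block_id]

    def write(self, block_id, data):
        # Writes never touch the shared image; only the delta map grows.
        self.deltas[block_id] = data

    def snapshot(self):
        # Capture a baseline: cheap, since only deltas are recorded.
        self.baseline = dict(self.deltas)

    def rollback(self):
        # Return to baseline in time proportional to the deltas,
        # not to the size of the database.
        self.deltas = dict(self.baseline)


if __name__ == "__main__":
    prod = GoldenImage({1: "orders", 2: "customers", 3: "inventory"})
    dev = VirtualCopy(prod)          # "full-size" copy, near-zero footprint
    dev.snapshot()                   # baseline before a destructive test
    dev.write(2, "customers-mangled-by-test")
    assert dev.read(2) == "customers-mangled-by-test"
    dev.rollback()                   # minutes (here: instant), not days
    assert dev.read(2) == "customers"

Because a rollback only swaps a small delta map, returning a multi-terabyte database to its baseline takes roughly the same effort as returning a tiny one, which is what turns the 2-day reset in the earlier testing example into minutes.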
Next post: IT Operations and IT Systems.