Getting the most out of the cloud? Get rid of your legacy apps!

Organisations must ‘move to the public cloud’: it offers tremendous business opportunities and is a key component of an effective digital strategy. The key question is what you move and what you replace.

‘Lifting and shifting’ your traditional IT may bring some business benefits but may also turn into a costly disappointment. Raising your organisational maturity, acquiring new technical skills and selecting the appropriate provider and platform are critical success factors. But don't expect too much. You must put effort into replacing your legacy applications and use the public cloud differently if you want to capture significant business benefits!

The cloud is omnipresent in the communications and offerings of every IT supplier and is on the agenda of every CIO. For good reason, as cloud technologies can deliver significant benefits to organisations. It has even become a ‘qualitative benchmark’ in boardrooms: ‘the move to the cloud’ serves as an indicator of the organisation’s transition to a more modern and strategic use of IT.

Almost every organisation is ‘moving to the cloud’ these days, but many are not yet able to extract significant business value. In some cases, organisations have not ‘moved’ beyond the infrastructural levels of IT, and business benefits are limited. In other cases, organisations are working on adapting their structure and culture and expect the targeted benefits to materialise later.

For many organisations though, it is important to check certain assumptions and clarify some expectations about the benefits of such a move. Working with ‘cloud native’ IT - for example with smartphone apps - is a lot easier than moving your existing IT to the cloud. More explicitly: lifting and shifting your IT to the cloud will not, by itself, improve reliability, agility, flexibility, time to market or cost effectiveness. On the contrary, it may even become a risky, costly and disappointing exercise.

The public cloud was built by providers such as Google and Amazon to give their software-based services unique and attractive characteristics. As a next step, their infrastructure designs and their software tools were made available to third parties. Although similar engineering philosophies were applied, each company built a unique technical solution, fit for its purpose. There is no such thing as a ‘standard’, ‘public’ cloud.

Furthermore, the technologies used are very different from the traditional solutions from IT vendors like SAP, Microsoft, Oracle and IBM and the ecosystem of applications built around their platforms. The differences are significant even when using the public cloud only as ‘Infrastructure as a Service’. Running your existing IT in the public cloud is not a simple next step in a logical evolution of outsourcing and offshoring.

This engineering misfit needs to be addressed, or ‘lifting and shifting’ will become a very costly disappointment. Finding solutions to the issues that arise challenges the traditional wisdom in IT departments. Your own organisation often lacks the appropriate engineering expertise: engineering for the public cloud is radically different from running or outsourcing your own IT. Furthermore, when IT departments start working with third party service providers, more controls and more mature and formal ways of working and collaborating are needed. It is also important to understand that the public cloud providers’ primary target customer is the developer community - in contrast to hosting centres and traditional outsourcing parties, who are accustomed to, and try to accommodate, your particular circumstances, your ways of working and your legacy technologies.

Turning these issues into recommendations, our advice is fourfold:

  • Make sure you raise your organisational (process) maturity to an adequate level before moving to the public cloud. Also raise your technical expertise to the appropriate level before assessing and considering such a move.
  • You need to select the right provider and cloud platform to minimise technical and process mismatches with your legacy. Some mismatches may prove fatal and force you to abort the move. But even if you succeed, don’t expect too much: a move of your legacy applications to the (public or private) cloud will only unlock a limited amount of the significant potential of public cloud technologies.
  • Many problems emanate from your legacy applications. Trying to fit square pegs into round holes will not work well - read our technical notes further below. You must put effort into replacing your legacy applications rather than trying to continue running them in the (public) cloud.
  • The public cloud is ideal for developing and running modern applications; you must build your new systems and functionalities in it *. Tremendous possibilities are within your reach today and new ones are introduced regularly!

Organisations must ‘move to the public cloud’: it delivers significant business benefits and is a key component of an effective IT strategy. The key question is what you move and what you replace.

Below, we elaborate on the ‘technological’ aspects that underpin our arguments. It is worth reading, not just for IT professionals.

* Alphabet's Eric Schmidt was interviewed this week at Start Up Fest Europe. His advice: "Anybody who's coming in with plans based on the earlier architectures, you're not going to make it. You need to be on scalable plans that solve real problems."

The arguments made above are based on the following observations about the technological differences between the public cloud and traditional platforms and environments. Not addressing these differences adequately will impact your business negatively, degrading user experience, reliability, cost effectiveness, et cetera.

  • Public cloud infrastructure technology is designed to run applications architected according to scale out principles (‘you simply need to deploy twice as many -low end- servers when you need to process twice the workload’). Legacy applications are architected according to scale up principles (‘you need a -higher quality- server that is twice as fast when you need to process twice the workload’).

    When having to scale up a legacy application in the public cloud - in order to accommodate an increased workload, for example when more users or devices must tap into your systems - notable performance issues arise quickly, or costs rise disproportionately (or both).
  • Most companies apply server virtualisation technologies to utilise hardware more efficiently and lower maintenance costs. Most public cloud providers use virtualisation solutions different from the ones used in-house. Although virtualisation vendors typically provide tools to import a virtual machine from a different vendor’s system, these tools are far from robust, especially for virtual machines running Microsoft Windows. As a consequence, organisations end up rebuilding each (!) Windows server in the new public cloud environment: a labour intensive, costly and time consuming exercise (even if you use automation tooling).
  • The benefits of virtualisation technologies are based on the consolidation of physical resources. As a consequence, application instructions are processed in a serial fashion on fewer, shared physical resources. This leads to queuing of instructions, and additional specific (and expensive) measures are always needed to prevent a serious degradation in performance. Without such measures, users experience frequent, smaller hiccups, for example lasting up to 15 seconds. It is often quite difficult to detect and address these issues, as many monitoring tools lack a sufficient level of granularity in their measurements.

    Public cloud scale out environments apply virtualisation technologies. When lifting and shifting scale up applications to the scale out public cloud, performance and the user experience are likely to degrade. If this leads to a notable deterioration of the user experience, resolving such problems in the public cloud is even more challenging than resolving them in your own data centre. As a customer you do not have sufficient access to the underlying technologies, and the cloud provider may not even identify (or acknowledge) the problems if the monitoring tooling fails to measure any issues. And finally, the public cloud provider may not want to implement the specific additional measures required to address your particular legacy problems. Note that scale out applications are much less susceptible to such queuing bottlenecks.
  • Disaster recovery in traditional environments is usually based on storage replication. You will experience significant difficulties moving this set up to the public cloud. Disaster recovery for public cloud scale out applications is addressed entirely differently: by having the application run simultaneously in more than one data centre. This will not work for traditional applications. When moving to the public cloud you will have to forgo recovery capabilities or find a less robust, laborious workaround.
  • Most applications run software simultaneously on user devices and on back end servers. For the application to function and perform smoothly, the time delay (‘latency’) between the two parts of the same application must stay below a certain level. Latency depends mostly on the physical distance between the user’s device and the back end servers. If you move to the public cloud, this distance usually increases significantly. Many legacy applications are less tolerant of higher latencies, and their performance and the user experience suffer disproportionately.

    If you need to retain these applications and want to run them in the public cloud, you must reduce the latency by also moving the user device part of the application to servers in the public cloud data centre. There are various technical solutions to achieve this; they are typically complex and costly, and they slow down the migration, affecting the business case negatively.
  • Deploying additional IT resources is an increasingly critical capability with the ever rising use of IT. Virtualisation and automation tooling have made it much easier to deploy additional resources. However, the management practice of deploying IT resources effectively and efficiently has become more complex. In practice, it is quite common that the business needs more IT resources - and needs them faster - than IT can provision.

    If IT capacity management practices are not sufficiently mature, the physical machines become oversubscribed and, as a consequence, notable performance issues arise for users. The public cloud, with its agility, deals with this fluctuating or growing demand without any problem. In fact, it is made so easy that a different problem arises quickly: a seemingly uncontrollable growth in costs.
  • A large number of existing maintenance tasks are not eliminated by moving to the public cloud (unless you build apps with the latest public cloud technologies that reduce this load): for example, the tasks of managing and maintaining development, testing and production environments, specific network security zones and intrusion detection systems - tasks that are necessary to safeguard availability, reliability and performance.

    We have seen many business cases that centre around optimistic assumptions about both the elimination of tasks, people and related costs, and the improvement in the quality of these unavoidable maintenance tasks. These business cases hardly ever come true, and assumptions about either cost or quality improvements need substantial, disappointing adjustments.
  • Traditional back-up practices with retention policies of up to - for example - seven years are rarely a standard service delivered by public cloud providers. If you choose to simply mimic your existing practices, (storage) costs will explode and you may experience notable performance issues as well. It is possible to arrange back-up within a cloud based data centre, but it will require dramatic technical and procedural changes, with significant change management challenges and effort.
  • Many organisations lack the infrastructure development expertise needed to engineer and make the move to the public cloud safely and successfully. There is a substantial difference in the required skill set, experience and profile between an engineer maintaining traditional systems and a solution engineer able to design effective solutions for the public cloud.

    Lacking such skills increases the risk that many problems - some of which are mentioned here - are overlooked, that the migrated legacy applications get bogged down in technical problems with negative business implications, or even that the move ends in disillusionment and disinvestment.
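The latency observation above can be made concrete with a small back-of-the-envelope sketch. All the numbers in it (round trips per screen, on-premise and cloud latencies) are illustrative assumptions, not measurements; the point is simply that waiting time scales with the number of network round trips, so a ‘chatty’ legacy application is hit far harder by a distant data centre than a modern application making a handful of coarse-grained calls.

```python
# Back-of-the-envelope sketch of latency impact (hypothetical numbers).
# Waiting time per screen refresh = round trips x network latency.

def screen_wait_time(round_trips: int, latency_s: float) -> float:
    """Time spent purely waiting on the network for one screen refresh."""
    return round_trips * latency_s

ON_PREM_LATENCY = 0.002   # assumed: 2 ms to a server in your own data centre
CLOUD_LATENCY = 0.030     # assumed: 30 ms to a distant public cloud region

LEGACY_ROUND_TRIPS = 200  # assumed: chatty legacy client-server protocol
MODERN_ROUND_TRIPS = 5    # assumed: a few coarse-grained API calls

print(screen_wait_time(LEGACY_ROUND_TRIPS, ON_PREM_LATENCY))  # ~0.4 seconds
print(screen_wait_time(LEGACY_ROUND_TRIPS, CLOUD_LATENCY))    # ~6 seconds
print(screen_wait_time(MODERN_ROUND_TRIPS, CLOUD_LATENCY))    # ~0.15 seconds
```

Under these assumed numbers, the same legacy screen that felt instant on-premise takes roughly six seconds of pure network waiting from the cloud, while the modern application barely notices the move. That is the mechanism behind the ‘performance suffers disproportionately’ observation.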