When discussing the impact of technology on the organization, we’ve typically done so in terms of platforms and infrastructure: on-premise, off-premise, cloud, data centers, networks, edge. And we might measure value and effectiveness in terms of cost optimization, agility, speed to market, security, compliance, control and choice. What this focus overlooks is what’s actually driving business decisions today: applications, something that, until a few years ago, most people outside of the IT department didn’t really think about.
Everything changed when we, the consumers, got our hands on the iPhone and its App Store. Now, only a handful of years later, and with app marketplaces for every operating system, enterprises are thinking ‘app first’. But not all applications are created equal, and each app’s value must be measured by how core it is to the business.
So, what’s mission critical, what’s business critical and what’s customer facing? It’s this prioritization of applications that ultimately informs IT decisions, whether it’s a mission-critical app that must deliver complete security without compromising performance, or a customer-facing service, such as a retailer’s mobile commerce offering, that needs the scalability to absorb major spikes in use without constantly consuming vast amounts of resource. The type of application is also a major factor: if a bespoke app has sat at the core of your business for many years, like an automated pricing tool for a logistics company, simply lifting and shifting it to the cloud will not work. With access to its data so critical, the decision may be made to keep it in its existing environment for the time being.
These are all factors that influence the criteria for choosing the right platform. The challenge is that with each application requiring different operating systems and platforms, and no one platform yet able to offer every benefit without being prohibitively expensive, many organizations find themselves with a multitude of infrastructures and a complex application estate hosted in all sorts of places. Unfortunately, many of these applications are unable to move easily across platforms and clouds to where they would be best located and used. Respondents to a recent VMware survey highlighted significant challenges with this situation: integrating legacy systems (57%) and understanding new technologies (54%) were two of the biggest obstacles organizations needed to overcome to get the best performance out of this mix of infrastructures. But is there a way of managing this complex landscape with more ease?
Delivering a better experience across multiple platforms
Having a clear strategy and defined approach is key. Take a retail bank, for example. With physical branches as well as mobile applications and online banking services, its infrastructure will mostly be a mix of on-premise systems and private cloud. With security, regulatory compliance and governance so critical, the unwieldy nature of these systems means that going with tried and trusted approaches is usually more straightforward. However, with new entrants and digital-native disruptors using public cloud providers, unencumbered by legacy systems, established players need to find a way to respond quickly. Banks such as Capital One and the World Bank are deploying public cloud computing for development and testing. In this way, they enjoy the benefits of flexibility, scalability and agility without significant investment, whilst experimenting or using applications that do not draw on legacy data.
For instance, trialling the use of blockchain to streamline letters of credit could require significant resource. As it is a pilot, however, the bank may be reluctant to commit to the investment in a fully private cloud environment. Deploying on a public cloud becomes attractive: it provides the necessary infrastructure, the pilot can be run, and if it is deemed a success the decision can be made to move the application over to a private cloud environment. In doing so, the bank has been able to develop, deploy and test quickly, turning around results that allow a decision to be made and, potentially, a new product to be released to the market. If the pilot has not been a success, investment in permanent resource has not been lost.
Another opportunity for a clearly defined approach and strategy is the opening up of banking. Driven by the likes of the Open Banking initiative in the UK and the EU’s revised Payment Services Directive (PSD2), more financial institutions are giving API access to third-party developers to build applications and services that consumers or businesses can use to manage their finances across multiple providers. The aim is to provide greater transparency and flexibility to customers, ultimately delivering a better experience. What it means for banks and other financial service providers is having the infrastructure in place to share relevant data easily and securely. Again, a mix of private and public cloud environments can support the development of third-party apps without exposing core data or mission-critical services to security risks or non-compliance.
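To make that idea concrete, here is a minimal sketch of what third-party API access might look like from a developer’s side. The base URL and token are hypothetical, and the request shape is loosely modelled on the UK Open Banking account-information specification; a real integration would also involve an OAuth 2.0 consent flow and certificate-based authentication before any data call is permitted.

```python
import requests

# Hypothetical Open Banking-style endpoint and token; real providers
# require a customer consent flow and mutual TLS before any call.
BASE_URL = "https://api.examplebank.com/open-banking/v3.1"
ACCESS_TOKEN = "<token obtained via the customer's consent flow>"

def list_accounts():
    """Fetch the accounts a customer has consented to share."""
    response = requests.get(
        f"{BASE_URL}/aisp/accounts",
        headers={"Authorization": f"Bearer {ACCESS_TOKEN}"},
        timeout=10,
    )
    response.raise_for_status()
    return response.json()["Data"]["Account"]

if __name__ == "__main__":
    for account in list_accounts():
        print(account["AccountId"], account.get("Nickname", ""))
```

The point of the sketch is the separation it implies: the third party only ever touches a consented, public-facing API layer, while the bank’s core systems and data remain in the private environment behind it.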
Managing talent and avoiding silos
But what does this mean for the bank’s technology team? For starters, it raises the prospect of needing teams with multiple skillsets or, more likely, separate teams focused on separate platforms. That public cloud might be from AWS, for example, which requires a different skillset to the one needed to operate the private cloud, which again might not be relevant for the team managing the legacy infrastructure. IT has long been plagued by silos of teams working on individual, proprietary technology, and left unchecked, this issue will only be exacerbated by the demands of multi-platform infrastructure. The whole point of a multi-cloud environment, being able to securely move applications from one environment to another depending on requirements at the time, becomes much more complicated if siloed teams struggle to work together.
And these demands are only going to increase. As more and more enterprises accelerate their digital transformation agendas, they are faced with the challenge of repurposing their sprawling application estates to meet their digital requirements without compromising security. Many are already harnessing multi-cloud environments to enable transformation. The same VMware survey mentioned earlier found that 80% of respondents saw improved innovation as one of the benefits of multi-cloud. It makes sense: being able to get the best out of multiple types of environment sounds like exactly what most enterprises need to do to unlock the opportunities of digitization.
Understanding what you need to achieve
For a multi-cloud deployment to work, enterprises need to understand what they fundamentally require and have the hybrid cloud infrastructure to run and manage those requirements across all environments and devices. The environments used are ultimately the support, the enabler, not the objective itself; that lies with the applications.
Yet the application estate should also be in a constant state of evolution. As enterprises continue to digitally transform, they need to continually review and reform their application estate: an ongoing process of choosing which applications are redundant, which need to be retrofitted, which can be completely transformed into cloud-native apps, and which need to be kept in legacy environments for a little longer, all whilst being able to manage and move workloads as required. By following this approach, and by working with partners with the experience and skills to deliver infrastructure that can efficiently run different platforms, enterprises can deliver an effective app-first approach, across any number of environments, to drive their digital business goals forward.
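As an illustration of that ongoing triage, the sketch below encodes one possible set of rules as a simple decision function. The categories and criteria are hypothetical, chosen for clarity rather than as a prescribed methodology; real estate reviews weigh many more factors.

```python
from dataclasses import dataclass
from enum import Enum

class Disposition(Enum):
    RETIRE = "retire"      # redundant, switch it off
    RETROFIT = "retrofit"  # adapt it for a new platform
    REBUILD = "rebuild"    # transform it into a cloud-native app
    RETAIN = "retain"      # keep it in its legacy environment for now

@dataclass
class App:
    name: str
    business_critical: bool
    still_used: bool
    tied_to_legacy_data: bool

def triage(app: App) -> Disposition:
    """Hypothetical rules for a periodic application-estate review."""
    if not app.still_used:
        return Disposition.RETIRE
    if app.tied_to_legacy_data:
        return Disposition.RETAIN
    if app.business_critical:
        return Disposition.REBUILD
    return Disposition.RETROFIT

# Example: a bespoke pricing tool bound to legacy data stays put for now.
print(triage(App("pricing-tool", True, True, True)))  # Disposition.RETAIN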