21 Apr 2021
The role of colocation is changing: it is finding a natural place between edge cloud environments on one side and the hyperscale cloud on the other.
The coronavirus pandemic caused a surge in demand for business and consumer cloud services as millions of people were forced to stay at home. In Europe this has also led to a boom in the colocation market, with providers racing to meet demand.
Computer Weekly reports that 95MW of new colocation capacity came online in Europe during the fourth quarter of 2020 and real estate consultancy CBRE expects more than 415MW of new supply to be added across the continent in 2021. Similar trends are likely to play out in other regions.
Part of the reason for that is the changing role of colocation, which is becoming a bridge between the hyperscale cloud at one end of the network and, at the other, edge cloud environments. There are other advantages too, which make colocation an increasingly popular option.
As 5G rollouts take shape around the world, the long-promised benefits of smart environments, such as cities and factories, become achievable. Everything from parking spaces and recycling bins to industrial machinery and vehicles can be connected to the internet thanks to the speed and capacity of 5G. However, for this to work safely, it needs to be done at low latency. An autonomous vehicle that is relying on sending and receiving real-time traffic data to plan its route cannot afford delays while this information updates.
For that reason, edge cloud environments - high-density micro servers, often liquid-cooled, located at the edge of the network and close to the source of the data they process - will be vital. For something like autonomous vehicles, which can range across an entire city, this means deploying many edge servers so that they can manage and control local applications at the low latency required. However, they won’t have the capacity to process everything, and edge servers certainly won’t replace public cloud services.
What of the other end of the scale? Massive, ‘hyperscale’ data centres will continue to leverage the efficiencies that come from their size to run huge platforms like Facebook and Google and all the cloud services we rely on every day. There aren’t many data centres in the world that qualify as hyperscale. Around 500 meet the broadly accepted definition of more than 5,000 servers and 10,000 square feet, and more are springing up every year in new regions. And that is before the anticipated westward expansion of Chinese hyperscale developments across 2020-2030.
There is a natural gap between these two ends of the spectrum. On the one hand, there is a need for data from the edge to be collected and sent to the cloud for data analytics and other uses that can’t be handled at the edge. Much of this collection will happen at colocation data centres before the data is sent to the cloud as a batch. On the other hand, specialised areas such as supercomputing and GPU-accelerated AI and machine learning, where infrastructure optimisation, interconnectivity and locality are crucial, need to be brought down to the local level. It can’t all be done in hyperscale data centres. High-performance colocation fits naturally into this gap.
It wasn’t so long ago that many enterprises had their own on-premise data centres. As these data centres age, many companies are reluctant to replace them because of the high capital expenditure (CAPEX) involved. Roughly four years ago the answer to this problem was 'cloud first', and whole estates migrated upwards into the cloud, apparently solving every computing problem. What some companies have learnt over that period, however, is that a proportion of compute doesn't fit neatly in the hyperscale cloud.
'Cloud repatriation' has become a fast-growing trend: according to a recent IDC survey, a significant number of organisations – more than half – are pulling some workloads back out of the cloud to colocation or on-premise environments, citing regulatory compliance, risk when scaling, troubleshooting and optimisation, and hidden costs – especially where niche computing like HPC and Grid is concerned.
This is where colocation really adds benefit, because the costs can still shift to OPEX and because, as Gartner puts it, colocation “offers higher availability, reliability, certified building tier levels, energy efficient, dedicated facilities management and the ability to scale”.
This is especially appealing to companies with data that can’t go into the cloud, whether for reasons of security, governance or data size, because of hardware optimisation requirements, or because they want greater oversight and more fine-grained control than a traditional hosting model or cloud provider can offer.
A side benefit is improved sustainability. On-premise compute is often inefficient, both because older technology tends to consume more power and because smaller, 'legacy' data centres are harder to keep cool, making a low PUE (power usage effectiveness) difficult to maintain. In contrast, industrial-scale campuses, like Kao Data’s, are designed for maximum efficiency and benefit from their larger scale.
More importantly, though, data centres like Kao Data provide rapid, low latency access to supercomputing resources at a local level, where they are more accessible for organisations and developing ecosystems that themselves are not at hyperscale level.
The future is likely to involve a hybrid model, with organisations choosing their partners so that they can bring their enterprise to the network, rather than the other way round. Colocation will play a vital role in that, enabling a scalable, agile infrastructure that becomes a bridge between the hyperscale cloud on one side and the edge cloud on the other side.