Some cloud-based AI systems are returning to on-premises data centers

As a concept, artificial intelligence is very old. My first job out of college nearly 40 years ago was as an AI systems developer working in Lisp. Many of the concepts from that era are still in use today. The difference is that it is now about a thousand times cheaper to build, deploy, and operate AI systems for all kinds of business purposes.

Cloud computing has revolutionized AI and machine learning, not because the hyperscalers invented the technology, but because they made it affordable. Nevertheless, I and others are seeing a shift in thinking about where to host AI/ML processing and its associated data. Using the public cloud providers has been pretty much a no-brainer in recent years. Today, the value of hosting AI/ML, and the data it requires, on public cloud providers is being questioned. Why?

Cost, of course. Many companies have built groundbreaking AI/ML systems in the cloud, and when the bills arrive at the end of the month, they quickly learn that hosting AI/ML systems, along with terabytes or petabytes of data, is pricey. In addition, data egress and ingress charges (most notably, what you pay to send data from your cloud provider to your own data center or to another cloud provider) add significantly to that bill.
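
To make the math concrete, here is a minimal back-of-the-envelope sketch in Python. Every rate below is an illustrative assumption, not any provider's published price; the point is simply that egress shows up as its own line item on top of compute and storage.

```python
# Back-of-the-envelope estimate of a monthly cloud AI/ML bill.
# All rates are illustrative assumptions, not real provider prices.

GPU_HOURLY_RATE = 3.00       # assumed $/hour for one GPU instance
STORAGE_RATE_PER_TB = 25.00  # assumed $/TB-month of object storage
EGRESS_RATE_PER_TB = 90.00   # assumed $/TB of outbound data transfer

def monthly_bill(gpu_instances, hours, storage_tb, egress_tb):
    """Sum the three big line items: compute, storage, and egress."""
    compute = gpu_instances * hours * GPU_HOURLY_RATE
    storage = storage_tb * STORAGE_RATE_PER_TB
    egress = egress_tb * EGRESS_RATE_PER_TB
    return compute + storage + egress

# Example: 8 GPU instances running all month (~730 hours), 500 TB of
# training data in storage, and 20 TB a month sent back on premises.
print(f"${monthly_bill(8, 730, 500, 20):,.2f} per month")  # $31,820.00
```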

Companies are looking at more cost-effective options, including managed service providers and colocation providers (colos), or even moving those systems to the old server room down the hall. Those in the last group are returning to owned platforms largely for two reasons.

First, the cost of traditional compute and storage equipment has fallen sharply over the past five years. If you've never used anything other than cloud-based systems, let me explain. We used to go into rooms called data centers, where we could physically touch our computing equipment, equipment we had to buy outright before we could use it. I'm just kidding.

When it comes to renting versus buying, many are finding that traditional approaches, even with the burden of maintaining their own hardware and software, now cost far less than ever-increasing cloud bills.
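
One way to test that for your own workload is a simple break-even calculation. The figures below are hypothetical, chosen only to show the shape of the math:

```python
# Rough rent-versus-buy break-even sketch.
# All figures are hypothetical assumptions, not vendor quotes.

CLOUD_MONTHLY = 30_000.00      # assumed steady-state monthly cloud bill
HARDWARE_UPFRONT = 400_000.00  # assumed one-time server/storage purchase
ON_PREM_MONTHLY = 8_000.00     # assumed power, space, and staff per month

monthly_savings = CLOUD_MONTHLY - ON_PREM_MONTHLY
breakeven_months = HARDWARE_UPFRONT / monthly_savings

# With these numbers, owning pays for itself in about 18 months;
# every month after that is savings.
print(f"Break-even after about {breakeven_months:.1f} months")
```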

Second, many enterprises experience latency in the cloud. The delays occur because most enterprises reach cloud-based systems over the open internet, and the multitenant model means you share processors and storage with many other customers at the same time. Depending on what your particular cloud-based AI/ML system does, occasional latency can translate into many thousands of dollars in lost revenue per year.
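
Whether that latency costs you money depends entirely on the workload, but the arithmetic is easy to sketch. All of the inputs below are hypothetical assumptions about a single revenue-generating AI/ML service:

```python
# Illustrative estimate of annual revenue lost to occasional latency.
# Every input is a hypothetical assumption, not measured data.

REQUESTS_PER_DAY = 200_000   # assumed AI/ML inference requests per day
SLOW_SHARE = 0.02            # assumed 2% of requests hit internet delays
ABANDON_RATE = 0.10          # assumed 10% of slow requests are abandoned
REVENUE_PER_REQUEST = 0.50   # assumed average revenue tied to a request

lost_per_year = (REQUESTS_PER_DAY * 365 * SLOW_SHARE
                 * ABANDON_RATE * REVENUE_PER_REQUEST)

print(f"Estimated lost revenue: ${lost_per_year:,.0f} per year")  # $73,000
```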

Many of the AI/ML systems available from cloud providers are also available for traditional systems. If you're not locked into an AI/ML stack that runs on only one cloud provider, migrating from the cloud to a local server is cheaper and faster, closer to a straightforward lift-and-shift.

What is the gist of this? Cloud computing will continue to grow; traditional systems whose hardware we own and maintain, not so much, and that trend will not slow down. However, some systems, especially AI/ML systems that churn through large amounts of data and processing and happen to be latency sensitive, will not be as cost effective in the cloud. The same may hold for some larger analytical applications, such as data lakes and data lakehouses.

Some enterprises could cut the annual cost of hosting on a public cloud provider in half by repatriating an AI/ML system to on-premises hardware. That business case is just too compelling to ignore, and many won't.

Cloud providers could lower prices so that these workloads are not too expensive to run on public clouds. Otherwise, many such workloads may never be built there in the first place, which I suspect is already happening. It is no longer always a good idea to run AI/ML in the cloud.

Copyright © 2022 IDG Communications, Inc.
