September 27, 2021 Timothy Prickett Morgan
It is no secret to readers of The Four Hundred that we are big proponents of so-called cloud computing, which includes not just access to slices of servers but also storage to keep the data for those slices and networking to link them to the world and, if multiple slices share work, to link them to storage and to each other.
We never liked the term “cloud,” because it connotes a fuzzy kind of infrastructure when quite the opposite is true. We still don’t like calling it cloud computing, but language is created by consensus, not by fiat, so sometimes we have to yield. But there was a better metaphor, and one we might want to revive if this term can shake off some of its own bad connotations.
Way back in the dawn of time in 2003, when Big Blue launched its “Supercomputing On Demand” service and standards for what academics were calling “grid computing” were evolving to let computing centers interoperate and share work, the term we came up with to describe what was happening was the obvious and far more accurate “utility computing.” And as we pointed out at the time, almost two decades ago, it was not entirely obvious how the “On Demand” model being espoused by the major IT platform providers differed from the Application Service Provider (ASP) wave, which started as the client/server revolution of the late 1980s and early 1990s merged with the Internet software stack of the mid-to-late 1990s and for the first time allowed companies to use applications remotely, under a subscription model that looked like electricity service, telephone service, or cable service. This has since evolved into what we now know as Software as a Service, or SaaS, which is all well and good for those companies that can get by using code designed for some kind of class average across industries and company sizes.
But as AS/400 and IBM i shops know perhaps better than any other base, true differentiation in the market comes from crafting applications that specifically match the needs of the business. There was never a question that IT matters, a proposition that caused a tempest in a teacup when Nicholas Carr wrote “IT Doesn’t Matter” for the Harvard Business Review around the same time that IBM started its On Demand effort under new chief executive officer Sam Palmisano. A few months later, after online retailer Amazon.com noticed that people could build rudimentary applications on top of the APIs it had opened up on its online store, Andy Jassy, now chief executive officer at Amazon, took control of what would become Amazon Web Services, today the world’s largest, most complex, most complete, and arguably most expensive public cloud, which has managed to attain millions of unique customers.
It is not lost on us that many of the attributes of the original AS/400 platform – an integrated stack of operating systems, databases, file systems, and programming runtimes all running on highly available, distributed computing hardware – are embodied by the AWS cloud and its followers, such as Microsoft Azure and Google Cloud. In fact, in 2012, we quipped that it should be called AWS/400; at that time, only six years after its launch, AWS had about the same revenue stream and the same customer count as the original AS/400 base at its peak, a peak that, by the way, took IBM 29 years to reach after the launch of the System/3 in 1969.
Despite the success of AWS and its imitators and the realization of something that looks like the utility model that we and others conceived of two decades ago – a kind of return with a new twist to the early days of the shared computing, service bureau model that IBM started off with mainframes in the 1960s – we are simultaneously perplexed that “cloud” has not taken off in the IBM i base and also not surprised because the cloud, as it is currently delivered by the many excellent providers in the market, is missing a few vital things.
The first thing to remember is that cloud is a consumption model for a highly scalable platform that has utility pricing and a shared service bureau to bring the price down – well, down more than it might otherwise be, but it still ain’t cheap. But cloud is not a panacea. The world’s largest clouds have very sophisticated and scalable infrastructure, which can be made to run some of the biggest distributed computing jobs on the planet. While this is intellectually interesting, it just doesn’t matter to a lot of companies, which is why there are still many tens of millions of companies that are buying their own infrastructure and installing it in their own dataclosets and datacenters.
Most IBM i shops have persistent databases with fairly consistent workloads. Yes, they have processing peaks during key buying seasons, and they have peaks at the end of the week, the end of the month, the end of the quarter, and the end of the year, too. But there are ways of buying utility-style capacity on a temporary basis with the Capacity Upgrade On Demand (CUoD) features of IBM’s Power Systems to deal with this, or just simply overprovisioning the server from the get-go to deal with peaks. This may not be the most efficient way to use capital, but it works, and firing up cloud capacity 24×7 for the five or six or seven years that many IBM i on Power Systems shops keep their machines is far more expensive.
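To see why the always-on math tends to favor owned iron, here is a back-of-envelope sketch in Python. Every dollar figure in it – the hourly instance rate, the server purchase price, the maintenance contract – is a hypothetical placeholder we made up for illustration, not a quote from IBM or any cloud provider:

```python
# Hypothetical comparison: a cloud instance running 24x7 versus an
# overprovisioned on-premises server bought outright and kept for years.
# All dollar figures are illustrative assumptions, not real prices.

HOURS_PER_YEAR = 24 * 365


def cloud_cost(hourly_rate: float, years: float) -> float:
    """Total cost of an always-on cloud instance over its lifespan."""
    return hourly_rate * HOURS_PER_YEAR * years


def on_prem_cost(purchase_price: float, annual_maintenance: float, years: float) -> float:
    """Purchase price plus yearly maintenance and support over the lifespan."""
    return purchase_price + annual_maintenance * years


years = 6  # in the middle of the five-to-seven-year keep cycle cited above
cloud = cloud_cost(hourly_rate=5.00, years=years)  # assumed $5/hour instance
owned = on_prem_cost(
    purchase_price=120_000,     # assumed up-front server price
    annual_maintenance=15_000,  # assumed annual support contract
    years=years,
)

print(f"Cloud 24x7 over {years} years: ${cloud:,.0f}")
print(f"Owned server over {years} years: ${owned:,.0f}")
```

With those made-up numbers, the always-on instance comes out well ahead of the owned box in total spend; swap in real quotes for your own workload and the crossover point moves, but the shape of the calculation stays the same.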
Moreover, IBM i shops have long since figured out how to make use of that excess capacity when it is not needed for running online transaction processing (OLTP) workloads, supporting partitions with other infrastructure workloads like file serving or Web serving or even analytics and batch processing. And at some point, we suspect that future Power Systems machines will be running machine learning training models by night and applying machine learning inference by day, embedded in the applications themselves.
The point is, while the cloud “utility” model is attractive from an intellectual standpoint, and being able to scale workloads up and down – and to turn them off and therefore not pay for them when you are not using them – is truly revolutionary, it just isn’t all that valuable for IBM i shops. And as evidence, all we need to do is talk to the big clouds. IBM has 125 customers on its Power Systems Virtual Server cloud instances, and the other true cloud providers have several dozens to hundreds of their own. There are even more companies running what are really hosted IBM i instances, which are not utility as we have defined it – capacity you can turn on and turn off at will. Call it 500 to 1,000 true cloud customers and maybe several thousand hosted customers, against an IBM i base that numbers somewhere between 120,000 and 150,000 unique customers, depending on who you ask.
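Just to make the gap concrete, running the figures above through a quick penetration calculation – using only the ranges cited in this article, nothing else – puts true cloud adoption at well under one percent of the base:

```python
# Penetration math using the ranges cited above: 500 to 1,000 true
# cloud customers against an installed base of 120,000 to 150,000.
true_cloud_low, true_cloud_high = 500, 1_000
base_low, base_high = 120_000, 150_000

# Worst case: fewest cloud customers over the largest base estimate.
lowest = true_cloud_low / base_high * 100
# Best case: most cloud customers over the smallest base estimate.
highest = true_cloud_high / base_low * 100

print(f"True cloud penetration: {lowest:.2f}% to {highest:.2f}% of the base")
```

Even the most generous reading of those ranges lands below one percent, which is the whole argument in a single number.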
This is after a decade and a half of pushing very hard by many companies, many of which are listed in the Related Stories section below. And while many of these companies have been successful, it is hard to say that cloud has taken the IBM i base by storm the way it has for other customers. We are beginning to think that IBM i shops need something that feels like cloud in terms of the operational expense pricing model, but that really is a combination of hosting plus managed services layered on top of it to solve real problems.
Think about it. The public clouds are successful because developers needed a cheap place to try out new ideas and new services to make new kinds of applications, and when their companies were successful – think of Netflix running on AWS – they needed to scale like crazy as well as increase their application scope to try to make some money. The big clouds solved the infrastructure problems of millions of developers and for several thousand and now several tens of thousands of enterprises. While there are some companies that have gone “all in” with AWS and other clouds, this is a lot rarer than anyone wants to talk about. IBM is right that hybrid cloud models, mixing on premises and cloud infrastructure, are the future for most companies.
IBM i shops are not fearful, but they are conservative. There is a lot of talk about how IBM i shops are afraid of change, afraid of loss of control, and afraid of the lack of security out there on the cloud. They aren’t afraid of change – most IT managers, system administrators, and programmers in the IBM i space have seen so much change over their many decades in the business that it would make your head spin if you were born after 1990. They are not believers in change for the sake of change – no question about that. So let’s just put to bed the idea that IBM i shops are afraid of anything.
They surely are skeptical of some of the claims people make about cloud being cheaper than on premises infrastructure, and from the survey data that we have seen, they are indeed worried about security and performance on what is in essence a shared utility. They have data sovereignty issues – many of them compelled by law in financial services, insurance, healthcare, and other industries. They rightly worry about connectivity between their users and the systems running in the cloud, and because of the pricing complexity of cloud services, they worry how they can budget the costs.
There is a lot to worry about, and no one wants to go first to find out about the differences between on premises and the cloud the hard way. And even though they pay a premium for their IBM i on Power Systems iron, they can’t get nickel-and-dimed to death on a cloud – or dollared or ten-dollared, for that matter. They want to bring order to the financing of IT, but they don’t want to lose control of IT. That is taking it too far, and that is why we are seeing so many datacenter repatriations after a wave of all-in cloud customer stories.
But we think the issue of resistance to the cloud among the vast majority of IBM i customers is even larger than all of this. After watching this for years, we have come to the conclusion that IBM i shops want a full, vertically integrated experience out of their infrastructure provider. This is the ideology that the AS/400 represented and that the IBM i platform continues. And we think they want to throw back all the way to what IBM originally delivered with the System/360 mainframe, when capacity on the machines was rented, often located in a service bureau because few companies could afford to buy mainframes, and Big Blue provided all kinds of training and programming services to help customers get the full use of the capacity they bought. The capacity was expensive and the help was free.
These days, the capacity is nearly free thanks to Moore’s Law, and the help that IBM i shops, with an aging population and a large technical debt, so desperately need is too expensive. Something has to give, and someone needs to provide a vertically integrated set of hardware, software, and services that helps customers get their platforms and the applications that run on them all modernized. Updating the hardware is necessary, but not sufficient. We need a utility model for application programming and modernization as much as we need a utility model for hardware capacity and technical support. And anyone who can bring these all together will probably be able to get IBM i shops excited about what will still very likely be called the cloud. But we will all know it is more than that.