In this 100th episode of Burning Platform, we host George Gilbert, Principal Analyst at TechAlpha Partners. He has been a long-time tech financial and industry analyst who has more recently focused on the evolution of platforms, and I invited him to discuss the likely impact of Generative AI on hyperscalers, ISVs, and outsourcers.
I have known George since his Wall Street days in the 90s, and every conversation with him gets me energized. This one was no different. To start with, he is extremely excited about Gen AI:
“Gen AI is the biggest accelerant this industry has ever seen. We've never had a new technology that accelerated both demand and supply…
It allows software to be used in new contexts and by new people, because you can interact with it in natural language. Soon it'll be like an agent, like a Siri you can give instructions to, and it'll go off and do stuff on your behalf. That's accelerating the demand side.
Gen AI's first real killer application, other than as a conversational interface, is writing software. It doesn't do it completely for you, but it dramatically raises the productivity of developers, and not just with code completion: it helps write the glue code between the building-block services of the hyperscalers, and the code to deploy, monitor, and soon remediate applications. It doesn't turn Infrastructure as a Service completely into Platform as a Service, but it does start to close that gap and reduce the complexity.”
With that as a nice setup, we discuss the world of platforms, the coopetition between hyperscalers and ISVs, and the coming pendulum swing from buying to building applications, especially industry- and company-specific ones. We discuss the likely impact on outsourcers as low-code development becomes increasingly ubiquitous, and the AI use cases likely to get prioritized.
He also touches on the economics of AI infrastructure:
“You know, the scarcity of NVIDIA GPUs is exactly like Cisco in the late 90s. People are double- and triple-ordering because they want to make sure they get some allocation. There's a lot of work going on to make the inference costs come down, way down. We're already hearing rumors of order cuts to NVIDIA, at least, you know, at the margin. These could be just rumors, but we're gonna solve that problem. That's not where the value is being created. And by the way, we saw from Google that they anticipated this problem like 10 years ago and started designing their own TPUs, so they're not constrained by the GPU shortage.”
We cover a lot more ground in this 30-minute episode. Agree or disagree with us, it will make you think about other angles on the rapidly evolving world of Gen AI, and AI more broadly.