As we have moved to virtual briefings, I have increasingly been excerpting short video segments (with permission) as part of my Analyst Cam series.
Fred Laluyaux, President and CEO of Aera Technology, presents on the Aera Decision Cloud™
How many personal decisions do you make in a day? According to this article in the Harvard Business Review, the average adult makes 33,000 to 35,000 decisions each day. Many of them are what it calls “autopilot” decisions. It encourages daily journaling to interrupt such decisions and to “live our lives more intentionally.”
Could you extrapolate that to an enterprise and its millions of daily decisions – some “autopilot”, some relatively simple yes/no, others much more complex, requiring interaction with a chain of other processes or groups?
That is Aera’s mission - digitizing, augmenting, and automating enterprise decision making. It aims to become a virtual member of your team, a digital analyst – delivering well-researched and informed business recommendations and, going even further, taking action autonomously to execute them once a decision has been made.
Fred, who has spent three decades “in the world of data analytics, applications, ERP, BI and of course, AI” at Anaplan, Business Objects, SAP and elsewhere, spends 40 minutes talking about the Decision Intelligence branch of AI. He talks about relatively simple decisions that can be processed using traditional CPUs, and others far more complex that require GPUs. He contrasts finance processes used only periodically over the course of a year with one at Unilever, where daily ingestion of retail store-sourced SKU data drives demand forecasts. He talks about big swings at companies that went from inventory shortages during COVID to surplus inventories. The customer examples he provides are complex, global ones.
I was particularly fascinated by the platform feature of a Control Room where all kinds of decisions can be tracked and continually improved. You can ask questions like “Is there a bottleneck somewhere? Is the quality of data degrading? Is the quality of the algorithm increasing? Why? Why are people rejecting some recommendations? They may have a very good reason to do so.”
With the volume of decision data the platform is accumulating, you can vary the blend of man and machine in next-best actions.
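To make that idea concrete, here is a minimal, hypothetical sketch of how such a blend might be tuned. Nothing here comes from Aera’s actual product – the Recommendation class, route function, and thresholds are all invented for illustration – but it shows how accumulated decision data (say, how often humans accepted similar past recommendations) could govern what gets auto-executed versus routed to a person:

```python
# Hypothetical sketch: routing recommendations between automation and human review.
# All names and thresholds are illustrative assumptions, not Aera's API.

from dataclasses import dataclass

@dataclass
class Recommendation:
    action: str
    confidence: float              # model's confidence in this recommendation
    historical_accept_rate: float  # share of similar past recommendations humans accepted

def route(rec: Recommendation, auto_threshold: float = 0.9) -> str:
    """Decide whether to execute automatically or send to a human reviewer."""
    # Automate only when the model is confident AND humans have historically
    # agreed with similar recommendations; otherwise keep a person in the loop.
    if rec.confidence >= auto_threshold and rec.historical_accept_rate >= auto_threshold:
        return "auto_execute"
    return "human_review"

# Raising or lowering auto_threshold shifts the man/machine blend.
print(route(Recommendation("expedite_po_1234", confidence=0.95, historical_accept_rate=0.92)))  # auto_execute
print(route(Recommendation("cancel_po_5678", confidence=0.95, historical_accept_rate=0.60)))    # human_review
```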
Very nicely done. Using the lexicon of the HBR article, it would allow employees and groups to “live their corporate lives more intentionally.”
A new business model for AI?
This post is inspired by Frank Slootman, CEO of Snowflake. In an interview with CNBC, he commented:
“AI is not going to be cheap. I mean, somebody's paying for these wonderful Nvidia results. There needs to be a business model that's associated, you know, with the technology. One of the great things about Search, when it showed up was not only that it was great technology, but they also had a business model that paid for it. We need that here as well. Otherwise, it's fun and games, and it's an expensive hobby, that's not going to last…”
He was alluding to another Nvidia blowout quarter. Jensen Huang, CEO of Nvidia, explained some of the trends driving its demand:
“The world has something along the lines of about a trillion dollars’ worth of data centers installed in the cloud and enterprise and otherwise…And that trillion dollars of data centers is in the process of transitioning into accelerated computing and generative AI.”
This, of course, raises the question – is ALL the data in ALL those data centers fodder for processing with Nvidia products? It could be - but not necessarily at today’s premium prices. This documentary does a nice job describing Nvidia’s evolution from gaming chips to a powerhouse in generative AI. It touches on “premium” use cases in healthcare, design, etc. that nicely leverage its technology.
My recommendation would be that vendors poll their customers for use cases where they are willing to pay premium pricing. And think broader - the investment is not just going to be around the GPUs, LLMs and other plumbing. Vendors will need to accumulate data around high-value applications. That will call for hiring unique domain experts and gathering plenty of domain-specific data not easily accessible today in the cloud. It is squirreled away somewhere in that trillion dollars’ worth of data centers, and, even worse, in corporate spreadsheets.
Let’s face it – the vast majority of enterprise data in the cloud today is back-office accounting, HCM, procurement and basic CRM data. If you expect premium pricing, it will likely come from operational data unique to verticals and countries. To access that data, you will have to acquire specialist vendors, as Oracle did with Cerner. Not cheap – Oracle paid $28 billion for a small fraction of largely US patient-specific data, and it will have to navigate HIPAA and other privacy constraints around that data. Another option – come up with incentives for some of the most innovative customers in each domain to share their data to train your machines. Either way, be prepared to invest.
For many vendors, predictive AI may offer more value to customers than generative AI. If you can preclude unplanned shutdowns of expensive assets with preventive maintenance AI, or reduce production and logistics footprint, waste, and scrap through better demand forecasting AI, that may be exactly what your customers need.
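The preventive maintenance case can be surprisingly simple in spirit. Here is a minimal, hypothetical sketch – the readings, threshold, and function names are invented for illustration, and real systems would use far richer models – of flagging an expensive asset for inspection when its sensor readings drift well outside their recent baseline:

```python
# Hypothetical sketch: flag an asset for inspection when the latest sensor
# reading is a statistical outlier versus its recent baseline.
# Data and threshold are illustrative assumptions only.

from statistics import mean, stdev

def needs_maintenance(readings: list[float], z_threshold: float = 3.0) -> bool:
    """Return True if the latest reading is an outlier relative to the prior baseline."""
    baseline, latest = readings[:-1], readings[-1]
    mu, sigma = mean(baseline), stdev(baseline)
    if sigma == 0:
        return False
    return abs(latest - mu) / sigma > z_threshold

# Example: hourly vibration readings (mm/s) ending in a sharp spike.
vibration = [2.1, 2.0, 2.2, 2.1, 2.3, 2.2, 2.1, 4.8]
if needs_maintenance(vibration):
    print("Schedule an inspection before the next planned run.")
```

The point is not the statistics; it is that avoiding one unplanned shutdown of a costly asset is value a customer can price, which is exactly the kind of “premium” use case discussed above.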
I like Frank’s call for a business model for AI. I think vendors should start by finding out what customers are willing to pay for as “premium” use cases, then work backwards and figure out whether/how to acquire the premium infrastructure, domain knowledge and the data to train machines for those ambitious use cases.
That would be a business model better aligned with customer value and far easier to sell.
August 27, 2023 in AI, ML, Analytics, Big Data, Industry Commentary