We have had many previous hype cycles around AI. As I wrote in Silicon Collar: “Since the 1950s! That is when Alan Turing defined his famous test to measure a machine's ability to exhibit intelligent behavior equivalent to that of a human. In 1959, we got excited when Allen Newell and his colleagues coded the General Problem Solver. In 1968, Stanley Kubrick sent our minds into overdrive with HAL in his movie, 2001: A Space Odyssey. We applauded when IBM’s Deep Blue supercomputer beat Grandmaster Garry Kasparov at chess in 1997. We were impressed in 2011 when IBM’s Watson beat human champions at Jeopardy! and again in 2016 when Google's AlphaGo showed it had mastered Go, the ancient board game. Currently, we are so excited about Amazon's Echo digital assistant/home automation hub and its ability to recognize the human voice, that we are saying a machine has finally passed the Turing Test. Almost.”
The good news is that over those seven decades, the AI community has gifted us a wide range of big words: deep learning, neural networks, cognitive computing and natural language processing.
Yale computer science professor David Gelernter thinks we have only scratched the surface. In his book The Tides of Mind, he describes “the spectrum of consciousness,” which is “essentially a range of mental states through which all humans cycle each day. The cyclical element is crucial and underlies his metaphor of tidal motion. At the upper end of the spectrum, mental high tide, we are focused on the outer world, biased towards logical and abstract reasoning, and more likely to remember our experiences later. But as we drift down through the middle and into the lower reaches of the spectrum, we become increasingly conscious of the inner worlds of memory, prefer narrative to logic, and cross eventually into the difficult-to-remember realms of dreams.”
But we are told this time it is different: we can train machines on the gobs of Big Data we have been collecting, and we have computing power in the cloud like we have never had before.
But do we really have enough of each? Listening to Ginni Rometty, CEO of IBM, at Dreamforce last month (see the interview below – Marc Benioff has always been a very respectful interviewer, and Ginni is in fine form), I have my doubts. Ginni makes the point that 80% of data is not searchable. I would argue that the ratio for enterprise data is even higher. Most useful data is locked up in on-prem systems, and getting permission to pull broad samples of such data will be no easy feat. Then you have the issue of semantics and master data rationalization across the multiple sources of data. We are only now seeing early versions of natural user interfaces, built by training machines on decades of voice/accent data collected by the likes of Nuance in transcriptions, the voice mails we have been leaving each other since the early 1980s, and the photos we have been uploading to the cloud.
Ginni is also preparing us for a world of quantum computing, because for a growing range of scenarios even today’s supercomputers (or cloud infrastructure) are not enough. Take the trajectory of weather forecasting. Decade after decade, data sets have become larger, with satellites, buoy sensors and Hurricane Hunter-type sorties collecting data on moisture, temperature, wind speed and other metrics. The modeling has become far more complex, so the thirst for ever-increasing compute capacity continues. Forecasts keep getting better and better – but tell that to many of my fellow Floridians who evacuated multiple times as Hurricane Irma made a mockery of most forecasts. As a renowned meteorologist said, “I was very surprised not by how one model was going back and forth -- but by how all the models were going back and forth.” Exactly. Now wait till Salesforce's Einstein and IBM's Watson start contradicting each other :) Customers are being told they need both.
Ginni does not overtly talk about it, but 50% of IBM's revenues come from services. Go ask early Watson healthcare clients how much of the tab was in services. IBM's interest in Salesforce's AI tool, Einstein, is being driven by its Bluewolf services unit. Amazon offers its engineers as consultants via its ML Solutions Lab, Google via its Machine Learning Advanced Solutions Lab. We will need highly skilled labor to train the machines, and we will also need tactical labor to keep refining the data. Amazon has its talent networks – the Kindle author, Flex courier, Mechanical Turk and fulfillment networks it can mine. Apple has its iTunes and iOS networks to draw from.
Enough usable data, enough compute, enough talent? Take a hard look at each through the lens of your AI project.
Far more tactical implementations of Robotic Process Automation (RPA) are promising. RPA is not machine learning; it uses software bots to mimic human activity, like logging in to a system and copying and pasting data across systems. The payback is much lower than what AI/ML promises, but it removes the drudgery and increases the productivity of many white-collar jobs.
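To make the mechanics concrete, here is a minimal sketch in Python of what such a bot does under the hood. Everything in it (the systems, URLs, credentials and field names) is hypothetical, and commercial RPA tools like Automation Anywhere typically drive the actual application screens rather than calling APIs directly; the point is simply that the bot is scripted copy-and-paste work, not machine learning.

```python
# Hypothetical RPA-style bot: log in to one system, read records,
# and re-key them into another. No learning involved -- just the
# rote work a clerk would otherwise do by hand. All URLs, credentials
# and field names are made up for illustration.
import requests

LEGACY_URL = "https://legacy.example.com"  # hypothetical source system
ERP_URL = "https://erp.example.com"        # hypothetical target system

def run_invoice_bot():
    session = requests.Session()

    # Step 1: log in the way a human clerk would, but via the login endpoint.
    session.post(f"{LEGACY_URL}/login", data={"user": "bot", "password": "secret"})

    # Step 2: pull the invoices a clerk would otherwise read off the screen.
    invoices = session.get(f"{LEGACY_URL}/api/invoices?status=unposted").json()

    # Step 3: re-key each invoice into the target system -- the copy/paste step.
    for inv in invoices:
        payload = {
            "vendor": inv["vendor_name"],
            "amount": inv["total"],
            "currency": inv["currency"],
            "reference": inv["invoice_number"],
        }
        resp = session.post(f"{ERP_URL}/api/journal-entries", json=payload)
        resp.raise_for_status()  # fail loudly if the target system rejects the entry

if __name__ == "__main__":
    run_invoice_bot()
```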
Mihir Shukla of RPA vendor Automation Anywhere told me in an interview for Silicon Collar:
“Think of the impact of this on real estate requirements. I can take what would need offices in six large buildings and put them on two racks. I can help consolidate global offices. A cognitive bot can process an invoice in 30 languages—I don't need centers in the Philippines and other centers in Japan to deliver diverse language skills. Think of the employee recruiting and onboarding time we can help save.”
It is a more traditional definition of the savings that come with automation, and far more realistic. But even there, the pitch from many services firms pushing RPA is a bit of a red herring when it comes to implementation costs.
Of course, many companies are being deluded by the promises of AI (and broader automation). J.P. Gownder, a Forrester analyst, says that if you don’t have realistic expectations, “You’ll be inclined to automate too many roles — at the expense of both customer and employee experience. You won’t hire the right mix of new roles — or any new people at all.” In other words, besides seeing poor payback from your AI projects, you may hurt other critical parts of your business.
On the flip side, we may not need to worry as much when Elon Musk says AI is more dangerous than North Korea, or when Prof. Hawking says AI will end mankind. The bad guys will face the same data, compute and talent shortages that AI for good is facing.
Tone down your AI expectations. And get ready for the next generation of AI big words.
December 06, 2017 in Industry Commentary, Silicon Collar