I had a chance to talk to Larry Bridge, SVP of Healthcare Strategy at Cognizant, about the sector and how healthcare vertical solutions have evolved over his 30+ year career.
Every industry is unique, but it would seem the healthcare vertical, particularly in the US, is truly unique. Is that why mainstream technology vendors struggle to get much traction here?
We always say the healthcare market has evolved rather than been engineered. I think that brings a lot of the complexity that everybody who plays in it has to deal with.
We get this, I'll bet, 20 times a year: one of our big partners from banking and financial services says, "Hey, healthcare is a big, growing business. We've got a play to make," and they can't wait to have a meeting. They want to end the two-hour meeting with a go-to-market plan. Five meetings later, you get that glassy-eyed, overwhelmed look: "I didn't know it was this complex." "I didn't know it changed so often."
They walked in on day one saying, "This will be a great two-hour meeting, and we're going to walk away with access to a $3 trillion market." In fact, a dozen meetings later, not only are they no further along than they were, but the effort is kind of stymied by the complexity.
Whether it's Oracle, whether it's Salesforce which has invested a lot in healthcare, whether it's Google or Apple - a lot of investment in healthcare is taking place. The nontraditional vendors certainly are building healthcare-specific solutions but, in every case, what we have found is they lack the depth of industry specificity, knowledge, subject matter expertise and compliance. Compliance is huge in healthcare: CMS security rules, HIPAA, the new CMS interoperability rules that just came out. Well, they actually aren't even published yet - the rule just got passed and now the details are being worked out for implementation in 2020.
The complexity of the industry, I think, has kept some of the more generic platform solutions from really gaining traction in healthcare. It's not that they don't have good assets - we use all of them in combination with our core systems - but it's still the industry-specific cores that are ruling the day.
Remember when the Affordable Care Act came in several years ago? Then it was all going to be done away with and we were going to repeal and replace. Well, that didn't happen. Now there are the new CMS transparency rules. All of these changes require changes to the underlying platforms and services.
What you have to put into these systems from a regulatory and compliance perspective is huge: keeping up with the new rules, making sure you're HIPAA compliant, the whole security side of things when you're dealing with healthcare data, the chain of control and the sign-offs for people who can access certain types of data at certain times. All very complex and evolving; I think that's one of the unique things that applies across all three of those customer groups.
How do you segment the large industry opportunities?
I break it into, generally, three big client groups. Each one of those client groups has a core platform, and it's very specific to what they do.
In the hospital and health systems, it's the big EHRs, the electronic health records. Cerner and Epic Systems are the top two. Allscripts is number three. They're the ones that really dominate in that area. Now, we will help install Epic and Cerner and other EHRs and we sell services into that segment, but we don't own a software platform there.
If you move to the other side, payers, the insurance companies, that's where we dominate with our Facets and QNXT solutions. Epic does have a payer core system offering tied into its EHR called Tapestry, but the only place it really wins is when the health plan is part of a big health system that is already on Epic. Then they'll use Tapestry to extend into the insurance side of the business, but it's much less functional than what you would get from our core payer systems.
Then there are office-based physicians. Their core system would be the practice management system. You may have heard of, and maybe you're familiar with, eClinicalWorks, Greenway Health and companies like that. They are all PM vendors. There are literally hundreds of them out there; in fact, 300+ of the PM vendors are our partners. Think about a provider practice with anywhere from two to, let's call it, 50 physicians: beyond the practice management system itself, there is submitting your claims, reconciling what comes back to your accounts receivable system, handling rejections, things like that. Almost everybody is buying a revenue cycle management and EDI service to go with it. These are things we (Cognizant) sell using those channel partners.
We also do revenue cycle management in hospitals and health systems. That is pure services as opposed to the software. We work with the software they have and provide those services around it.
How has Cognizant's healthcare business and market evolved?
When you think of Cognizant's healthcare business, particularly when it comes to some of these core platforms, the real driver here was the acquisition of TriZetto (5 years ago) and the use of TriZetto's platforms as core operating systems for health insurance companies.
TriZetto's platforms, two big ones, Facets and QNXT, are like SAP would be to a manufacturing company. It's essentially the core engine that the company runs on. Selling software in that area was TriZetto's history. We still do that as part of Cognizant.
Now, we're bringing them to market increasingly as part of Cognizant and its platform-based solutions. It is the TriZetto software platforms, but with services around them as well. I would say our new clients are probably evenly split between those who want to license the software and those who want it as part of a more comprehensive, end-to-end, platform-based solution. It's definitely trending toward the latter, particularly for specialized lines of business like Medicare Advantage where people are really looking for a business outcome, as opposed to just the software.
The place where Cognizant has the software that really dominates is in the payer and health insurance market. The larger payers, the larger health insurance companies--Aetna, United, Cigna, companies like that--by and large are software buyers. They will also buy a lot of services, but they generally don't buy the complete end-to-end solution whereas, when you move down the line to the midsized and smaller buyers, it's just the opposite. They're almost always buying an integrated platform-based solution, and especially when they get into things like a new line of business like Medicare Advantage.
If I were a midsized insurer that sold generally to employer groups - employer-based insurance - and I decided to get into the Medicare business, which for insurers is called Medicare Advantage, or Medicare Part C, there is a whole host of new rules, regulations, reporting, compliance, submissions and reconciliations I would have to learn to get into that business and perform successfully. For anybody smaller than the largest companies out there, to figure that out, to build up the department and the compliance team, to go through the work - I mean, they would pay double compared to having somebody like us bring an integrated solution to them.
When we bring an end-to-end client onto one of our Medicare Advantage platform-based solutions, right off the bat we save them about 30% to 40% compared to doing it themselves. Then, over time, say over a five- to seven-year contract, they'd end up paying about half of what it would cost to do it themselves, just because there's so much you have to do in terms of rules, regs, subject matter expertise, configuration, submissions back and forth to the government, and so on. In that market, it is very much an integrated services and software play for the midsized to small plans. It's still a software play at the larger end.
What's it like to be a software company like TriZetto inside a big services company like Cognizant?
TriZetto was a fairly large software company - about $800 million of revenue when it was acquired. There was enough heft to the company to allow the software business to sustain itself.
It's been a blending of cultures. Software brings you repeatability and scale, but you have a lot less opportunity to customize your solutions than you would as a pure services company. I think finding the balance between the two was a key part of a successful integration.
What we found, I think, is that we're well positioned to bring the best of both worlds. We can serve all the segments, and it really allows us to go in and tailor the solution to each client's need. If you're one of those larger companies, if you have an IT shop of 2,000 people, and you're really buying 50 software products and then knitting them together yourself to develop what you think is the best solution, fantastic. We're happy to sell you the software and support you in doing that.
If you're a midsized or small company and you're trying to get into a line of business, or you're trying to figure out your business going forward, and you don't have the scale, the subject matter expertise or the infrastructure, then our integrated software and service solutions - our platform-based solutions, as we refer to them - are the best fit. We can serve both, so I think it's been a very positive combination for Cognizant, but also for TriZetto as well.
It depends on who you look at, but if you take the largest players and probably the biggest competitors in the healthcare market, you'd be looking at Cognizant, Accenture, and probably Optum, the software and services arm of UnitedHealth Group. If you look at all three of us, Cognizant is very much invested in that software, services, integrated solution type of model where we can do everything we've been talking about. Optum is very similar. They don't have the core platform that we have, but they have so many of the surrounding software pieces, the consulting and things like that. They're coming in very much with that "I've got software. I've got services." message. We find Optum does a lot of work around our platform and the things that augment it.
Optum is a partner of ours. They're a client. They're a competitor. We partner with Optum on certain pieces of software that enhance the services we offer. Optum is definitely investing in that software, services, consulting type combination, as we are.
Accenture leads with consulting services and strategy. They partner with a number of different software vendors. I haven't seen the investment from them in healthcare software that either Optum or Cognizant has made but, in the end, through partnerships, they're still trying to deliver across that spectrum.
We think we have an advantage because we own the entire suite, including the core systems on the payer side; that gives us an edge. But I think you'd find us very similar to the others in, for instance, the provider and health system market. None of the three of us owns a core health system software asset, but we're all building services, software, capabilities and consulting around the provider market.
What disruptive changes do you foresee?
If you think of healthcare broadly, and this is at the highest level, for every dollar spent on healthcare, $0.85 goes to actually providing care - the doctors, the hospitals, the drugs, the treatment - and $0.15 goes to administering care: managing the provider office, running the health insurance company, things like that.
Almost everything that we've done in the past has been focused on that $0.15. How do you make the administration more efficient? How do you make it more seamless? How do you help the day-to-day operations run differently?
We are pivoting to expand into the cost-of-care area. How do we bring value-based care models or bundled payment models to market efficiently? What technology can help? Think of us as taking technology and applying it to the care side.
Companies like Optum are also expanding into this area; Optum is buying up provider practices. For us, at least right now, we probably do not want to be in hands-on patient services, but there are a lot of things we can do to help that care delivery process take place more efficiently.
We can help bring these new care models to market. We can provide the tools that allow for digital engagement: a member using their mobile phone to stay active and involved in their healthcare, or that phone connecting to a device that measures lung function, heart function, breathing, exercise and things like that. There are a lot of technology-based solutions we can deliver on that cost-of-care side.
The second thing is looking at the real impact of newer forms of technology and what they can do to deliver a better healthcare model. Two big ones come to mind: AI and blockchain. Artificial intelligence - we all read about it every day. We hear about it. A lot of hype.
Well, there are a handful of valid use cases in there - virtual clinical assistants for providers, as an example. If I'm a doctor at the point of care diagnosing a patient with a specific disease, do I need to rely 100% on my own medical knowledge to tell that patient what to do? What if I had a real-time virtual clinical assistant that could tell me the best practice around this type of disease: what kind of drugs, what type of follow-up, what type of specialist might want to be involved? That concept, even to the point where these assistants act as chatbots, learning and consulting with the doctor, if you will, at the point of care - that's just one tiny example of a very valid use case in healthcare.
The second is blockchain. It's probably the most overhyped technology out there. Out of the hundred use cases that might apply to healthcare, there are probably four or five that make sense. But those four or five can be game-changers in the way they change how workflow happens within healthcare and in what they can do to dramatically reduce cost, improve security or improve accuracy.
Those are two key examples. We've got teams focused on both of these areas, healthcare AI and healthcare blockchain, that have done well and scaled, and that have the potential to really define some of what's coming. Emerging technology as it applies to healthcare, along with technology-based solutions for the cost of care - those are two big areas where we see opportunity for expansion and are investing.
Plex ML project - something every software vendor/systems integrator could emulate
Machine Learning is a hot topic. Every software vendor is talking about ML and AI in their products. Plex applied ML to something very different - an understanding of how their customers configure parameters in the Plex system. This conversation with Jerry Foster, CTO, describes the project.
It has always bothered me that after millions of ERP, CRM and other enterprise projects and upgrades, we cannot get predictability and savings from automation in these projects. I think every software vendor and SI can build on this exercise around other SaaS products to automate configuration and related templates. Another interesting area Jerry describes is how rethinking customizations and the UX can dramatically reduce configuration change effort.
When your competitors talk about machine learning, it’s typically in terms of customer data and different functional areas like accounts payable. Your project looked at implementation configurations, which is a unique approach. Can you share your perspective on Machine Learning?
Just like with any transformative technology, everyone likes to ask, "What is your machine learning strategy?" or "What's your IoT strategy?" But you can really go down a rabbit trail because those technologies are so big. So we've always tried to ask “What problems are we going to solve for our customers?” which could even mean solving an internal challenge that ultimately helps our customers.
At Plex, we didn't want to just start building machine learning models willy-nilly. Instead, my team and I discussed areas where we could apply machine learning to solve a problem while also gaining our own internal knowledge of what an ML project entails.
Can you share some background on how this specific Machine Learning initiative began?
Last year, our VP of Services told me that about 15% of incoming customer care tickets deal exclusively with the configuration setup of our system.
Because the Plex Manufacturing Cloud is a Software-as-a-Service product, everyone is using the same codebase, which means we use configuration settings (also known as customer settings) that determine how the software behaves and functions.
Settings have business flow and visual ramifications. For example, some of those settings determine actual workflow, which may be different for a food and beverage customer versus a discrete manufacturing customer, while others are as simple as showing certain columns on the screen; if the setting is turned off, those columns are no longer visible.
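To make that concrete, here is a minimal, hedged sketch of the two kinds of settings Jerry describes. The setting names and behavior are made up for illustration; they are not actual Plex customer settings.

```python
# Hypothetical illustration of workflow vs. visual configuration settings
# (names and logic are invented, not real Plex settings).
customer_settings = {
    "receiving_requires_inspection": True,    # workflow setting: adds an inspection step
    "show_cost_columns_on_inventory": False,  # visual setting: shows or hides cost columns
}

def visible_columns(base_columns, settings):
    """Drop cost-related columns when the visual setting is turned off."""
    if settings.get("show_cost_columns_on_inventory"):
        return base_columns
    return [c for c in base_columns if not c.startswith("cost_")]

print(visible_columns(["part_no", "quantity", "cost_unit", "cost_total"], customer_settings))
```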
Because the system is so full-featured, there are a large number of settings so it would be unreasonable to expect our customers or even our Customer Care team to fully understand all of the interdependencies of the settings and the unintended side effects from turning settings on and off.
Obviously, when engineers build these systems, they try to take that into account. They will add a new setting and see how it works with all other similar settings. But over time, you can't ascertain all of the effects of all of those settings.
Our customers are trying to figure out how to optimize their settings, so when I was pondering that stat – 15% of our tickets are related to configuration settings – I realized this would be a great place to build a machine learning model that could help us understand what the net effect is.
Our desire was to ask, "How could a customer use this model to compare it against their settings?" and be able to say, "Here are the settings you should change in order to have the optimum configuration."
Once you identified your objective, where did you start? What technologies did you use?
We put together a team of engineers and gave them this assumption that there is a data set that could be analyzed to determine whether or not a customer's settings are configured correctly.
The team came up with a framework and a set of tools, which included Azure Machine Learning Studio because, in our opinion, Microsoft has done a really good job providing the necessary tools to very quickly get up to speed on a machine learning project. We also used Neo4j as our graph database and a notebook environment, which is a way to pull together any Python scripts that you are using to augment your machine learning project.
From there, the team put these tools together and started building out their models.
Once you had the tools, what was the process like?
There were actually three phases to the project. The first is feature discovery, which is basically asking, "What are the data sets that we need to test our assumption?" The second phase was building the knowledge capture system, which actually gathers all of the data from the feature discovery. Then, of course, the final phase is building a recommendation system.
The first phase – feature discovery – was determining which data sets were the most important. One thing I learned is that feature discovery is just as much art as it is science. You really have to have people on your team that know the data.
We put together a set of data that we could use to test our thesis and started to build some models around it. We found that once we had a set of features and a set of data points, it wasn't enough to determine which data points to use; we needed to establish just how much data to use as well.
For instance, if we were too restrictive in our data sets that we were using, the models couldn't make any inferences from that. It was basically regurgitating what you put in. But if we were too broad in the amount of data, data sets, and features that we were putting into the model, it just came up with all sorts of crazy inferences and dependencies that had no meaning.
We had to keep refining, which is a big part of the machine learning process. We really had to learn to accept that there is an iterative nature to determining which data could best give us the results we were looking for.
Once we had a set of data points and features that we felt were representative and returned some good initial results, we built a knowledge capture system using Neo4j, Excel spreadsheets, and Azure Data Factory. This system pulled in the data, adjusted it, and used the models that we had built in the ML framework to start analyzing the data.
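As a rough illustration of what such a knowledge capture step could look like, here is a hedged sketch using the official Neo4j Python driver. The node labels, relationship names, connection details and data are all assumptions for the example, not Plex's actual schema or pipeline.

```python
# Hedged sketch: load each customer's setting values into Neo4j as a small graph.
# Labels, property names, credentials and data are illustrative only.
from neo4j import GraphDatabase

driver = GraphDatabase.driver("bolt://localhost:7687", auth=("neo4j", "password"))

def load_settings(customer_id, settings):
    """Create a Customer node and one HAS_SETTING relationship per setting value."""
    with driver.session() as session:
        session.run("MERGE (c:Customer {id: $cid})", cid=customer_id)
        for name, value in settings.items():
            session.run(
                "MATCH (c:Customer {id: $cid}) "
                "MERGE (s:Setting {name: $name}) "
                "MERGE (c)-[r:HAS_SETTING]->(s) "
                "SET r.value = $value",
                cid=customer_id, name=name, value=value,
            )

load_settings("customer-001", {"enable_lot_tracking": True, "show_cost_columns": False})
driver.close()
```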
The next part was actually consolidating and analyzing the data using mainly Azure ML as well as some Power BI and Excel along with Neo4j for some of the visual representation.
Can you explain how this applied to the configuration setting problem you were trying to solve?
As I mentioned, our customers turn settings on and off in order to configure the system to meet their needs.
To make that process easier, Plex has a set of customer setting templates – 12 or 13 primary templates based on size, industry, facility count, etc. – that we use to categorize our customers, which then informs new customer implementations. When a new customer is brought into the fold, it’s more efficient to associate them with an appropriate template and copy the template settings to their instance.
For this project, we wanted the algorithm to pretend we didn't have those templates and determine on its own how it would group our customers. We fed the data into the model and it began to cluster our customers and match them with a particular customer setting template. Using a mechanism called similarity analysis, the model then made recommendations based on the delta between a customer and its neighbors, using their common template as a baseline.
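The conversation doesn't name the specific algorithms, but a minimal sketch of this kind of clustering plus similarity analysis, run on a hypothetical binary settings matrix with scikit-learn, might look like the following. The data, feature count and number of clusters are invented for illustration.

```python
# Illustrative sketch only: cluster customers by their on/off settings and find
# each customer's most similar peers. Data and cluster count are hypothetical.
import numpy as np
from sklearn.cluster import KMeans
from sklearn.metrics.pairwise import cosine_similarity

# Rows = customers, columns = configuration settings (1 = enabled, 0 = disabled).
rng = np.random.default_rng(42)
settings_matrix = rng.integers(0, 2, size=(200, 50))

# Let the model form its own customer groupings, ignoring the existing templates.
kmeans = KMeans(n_clusters=14, n_init=10, random_state=0)
cluster_labels = kmeans.fit_predict(settings_matrix)

# Similarity analysis: for one customer, find its nearest neighbors within its cluster.
customer_idx = 0
same_cluster = np.where(cluster_labels == cluster_labels[customer_idx])[0]
sims = cosine_similarity(settings_matrix[[customer_idx]], settings_matrix[same_cluster])[0]
neighbors = same_cluster[np.argsort(sims)[::-1][1:6]]  # top 5 peers, excluding itself
print("Cluster:", cluster_labels[customer_idx], "Nearest peers:", neighbors)
```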
Can you show me how this played out in the model?
The image below is from Neo4j and it actually shows the clustering of customers matched to a particular template. It first came up with its own clustering, and then it attempted to match those clusters with our templates.
This next image shows the model’s clusters. It identified 14 broad groups completely disconnected from our templates.
Then what we told the model to do on this next screen was to try and match its clusters with our templates, and it actually grouped our templates into those four groups that you see highlighted right there: 0, 7, 9, and 11.
So the model reclassified your templates?
Yes, the machine learning algorithm essentially said four of the groups it had defined (Groups 0, 7, 9, and 11) could match up with one or more of Plex's existing templates.
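As a rough sketch of how model-derived clusters might be matched back to existing templates, assuming each template can be represented as a settings vector, something like the following could work. The data, threshold and shapes below are hypothetical; this is not Plex's actual method.

```python
# Hypothetical sketch: match model-derived cluster centroids against existing
# setting templates. All data is synthetic and the cutoff is arbitrary.
import numpy as np
from sklearn.metrics.pairwise import cosine_similarity

rng = np.random.default_rng(7)
cluster_centroids = rng.random((14, 50))               # 14 clusters over 50 settings
template_vectors = rng.integers(0, 2, size=(13, 50))   # 13 existing setting templates

sim = cosine_similarity(cluster_centroids, template_vectors)

MATCH_THRESHOLD = 0.8  # arbitrary cutoff for "this cluster corresponds to a template"
for cluster_id, row in enumerate(sim):
    best = int(np.argmax(row))
    if row[best] >= MATCH_THRESHOLD:
        print(f"Cluster {cluster_id} matches template {best} (similarity {row[best]:.2f})")
    else:
        print(f"Cluster {cluster_id} has no strong template match - possible missing template")
```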
What else does this tell you?
All in all, what you see here is that basically six of its groups matched our templates. As you can see in this next image, the model matched two of its groupings - 3 and 8 - with a few templates that we don't typically use as much. The model also has two groupings, 2 and 6, that it couldn't match with any of our templates.
One of the main findings coming out of this is that we are basically missing two templates. In other words, there is a set of customers who, when they onboard as new customers, are being assigned a template that, although it may not be bad, is probably not the optimum template they should be getting.
This final graph illustrates a finding that was consistent across the board: the number of times our customer settings were changed dropped off a cliff in about 2015. That was a completely unexpected finding from the study.
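A chart like that could come from a simple aggregation over a settings audit log. Here is a hedged sketch of that kind of query; the column names and data are illustrative, not Plex's actual tables.

```python
# Hypothetical sketch: count setting changes per year from an audit log.
import pandas as pd

audit_log = pd.DataFrame({
    "setting_name": ["a", "b", "a", "c", "b", "a"],
    "changed_at": pd.to_datetime([
        "2013-05-01", "2014-02-11", "2014-07-23",
        "2015-01-09", "2016-03-30", "2018-11-02",
    ]),
})

changes_per_year = audit_log.groupby(audit_log["changed_at"].dt.year).size()
print(changes_per_year)
```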
This finding validated the work that we've been doing since 2014 to reduce our overall dependence on settings, including our new user interface, which is much less settings-dependent than Classic. It gave us confidence we were on the right path.
This is also really cool because it wasn't actually part of our initial hypothesis. It wasn't what my team was trying to find or determine; it was just an unintended side effect of the work they were doing. Of course, one of the advantages of doing a project like this is that you start looking at your data in detail and you come across insights you had never expected.
Those were the two main results: identifying configuration setting templates that we had missed, and showing us that the work we had done in reducing the number of settings had paid off.
What action will you take from here?
One of the follow-throughs from this, obviously, will be to look at the model's groupings that do not map to a Plex template and ask: what are the characteristics of the customers in those groups, and why did we miss them? That is one of the main follow-ups we're doing right now and in the upcoming months - determining which customers are in those groups and what they have in common so we can come up with a proper template for them.
The second action is to follow through with the recommendation system because that's really where the productization of this is going. If I'm an existing customer, I want to know if my settings are optimum. I want to know, based on all the customers that are like me, which of my settings are different, and why do I have those settings different than all of the other customers who are like me?
We already have the "like me" part done. Now we actually have to productize that so we can provide a tool, and we want to provide that tool in two different contexts. One is an internal tool for our support team, so that when they get an incoming call that says, "Hey, I've got a problem," they can run it against the settings optimization tool and say, "You know what? You might want to check these settings. I've noticed they're set differently for you than for all the other customers like you." Then the next iteration, or the second context, would be a self-service tool where the customer could just go onto the Plex menu and analyze their own settings.
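A hedged sketch of the kind of "customers like me" comparison such a tool might perform is below; the setting names, data, and agreement threshold are hypothetical, not the actual Plex implementation.

```python
# Illustrative sketch: flag settings where a customer differs from the majority
# of similar customers. Setting names and data are hypothetical.
import numpy as np

def settings_to_review(customer, peers, setting_names, agreement=0.8):
    """Return settings where at least `agreement` of similar customers share a
    value that differs from this customer's value."""
    peers = np.asarray(peers)
    recommendations = []
    for i, name in enumerate(setting_names):
        peer_values = peers[:, i]
        majority_value = int(round(peer_values.mean()))
        share = (peer_values == majority_value).mean()
        if share >= agreement and customer[i] != majority_value:
            recommendations.append((name, customer[i], majority_value, share))
    return recommendations

setting_names = ["enable_lot_tracking", "show_cost_columns", "require_approval"]
customer = [1, 0, 0]
peers = [[1, 1, 0], [1, 1, 1], [1, 1, 0], [1, 1, 0], [1, 1, 0]]

for name, mine, theirs, share in settings_to_review(customer, peers, setting_names):
    print(f"{name}: you have {mine}, {share:.0%} of similar customers have {theirs}")
```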
Were you happy with the results of this project?
Overall, I was very pleased with the outcome of the project and how the team worked. It was exciting to see. One of the interesting things about how far machine learning has come from where it was five or ten years ago is that the tools are so strong that you don't need a team of data scientists. You need a team of smart engineers and a couple of people who know the data.
Yes, a data scientist is obviously going to help, but to get 70% or 80% of the way there, in my opinion, you don't really need one. Everything we did here was done without a data scientist involved. That, to me, was a real encouragement.
The last takeaway for me is that once you start to understand what's behind machine learning, you can use it as a mechanism to solve problems. In other words, it's not just this one machine learning project that the labs did; it's how we can analyze any problem moving forward.
It's going to be a long journey, but it's very exciting. I'm really excited to see where we can go with this.