Plex ML project - something every software vendor/systems integrator could emulate
Machine Learning is a hot topic. Every software vendor is talking about ML and AI in their products. Plex applied ML to something very different - an understanding of how their customers configure parameters in the Plex system. This conversation with Jerry Foster, CTO, describes the project.
It has always bothered me that after millions of ERP, CRM and other enterprise projects and upgrades, we cannot get predictability and savings from automation in these projects. I think every software vendor and SI can build on this exercise around other SaaS products to automate configuration and related templates. Another interesting area Jerry describes is how rethinking customizations and the UX can dramatically reduce configuration change effort.
When your competitors talk about machine learning, it’s typically in terms of customer data and different functional areas like accounts payable. Your project looked at implementation configurations, which is a unique approach. Can you share your perspective on Machine Learning?
Just like with any transformative technology, everyone likes to ask, "What is your machine learning strategy?" or "What's your IoT strategy?" But you can really go down a rabbit trail because those technologies are so big. So we've always tried to ask “What problems are we going to solve for our customers?” which could even mean solving an internal challenge that ultimately helps our customers.
At Plex, we didn't want to just start building machine learning models willy-nilly. Instead, my team and I discussed areas where we could apply machine learning to solve a real problem while also building our own internal knowledge of what an ML project entails.
Can you share some background on how this specific Machine Learning initiative began?
Last year, our VP of Services told me that about 15% of incoming customer care tickets deal exclusively with the configuration setup of our system.
Because the Plex Manufacturing Cloud is a Software-as-a-Service product, everyone is using the same codebase, which means we use configuration settings (also known as customer settings) that determine how the software behaves and functions.
Settings have both business-flow and visual ramifications. For example, some settings determine actual workflow, which may be different for a food and beverage customer versus a discrete manufacturer, while others are as simple as showing certain columns on a screen; turn the setting off and those columns are no longer visible.
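To make that idea concrete, here is a purely hypothetical sketch of what a handful of customer settings might look like; the setting names are invented for illustration and are not actual Plex settings.

```python
# Hypothetical illustration only -- these setting names are invented, not real Plex settings.
customer_settings = {
    "require_lot_tracking": True,    # workflow-level: changes how inventory moves are recorded
    "enable_quality_holds": False,   # workflow-level: inserts an extra approval step in production
    "show_supplier_columns": True,   # cosmetic: toggles visibility of certain columns on a screen
}
```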
Because the system is so full-featured, there are a large number of settings, so it would be unreasonable to expect our customers, or even our Customer Care team, to fully understand all of the interdependencies among the settings and the unintended side effects of turning them on and off.
Obviously, when engineers build these systems, they try to take that into account. They will add a new setting and see how it works with all other similar settings. But over time, you can't ascertain all of the effects of all of those settings.
Our customers are trying to figure out how to optimize their settings, so when I was pondering that stat – 15% of our tickets are related to configuration settings – I realized this would be a great place to build a machine learning model that could help us understand what the net effect is.
Our desire was to ask, "How could a customer use this model to evaluate their settings?" and be able to say, "Here are the settings you should change in order to have the optimum configuration."
Once you identified your objective, where did you start? What technologies did you use?
We put together a team of engineers and gave them the working assumption that there is a data set that could be analyzed to determine whether or not a customer's settings are configured correctly.
The team came up with a framework and a set of tools, which included Azure Machine Learning Studio because, in our opinion, Microsoft has done a really good job providing the tools you need to get up to speed on machine learning very quickly. We also used Neo4j as our graph database, along with notebooks as a way to pull together the Python scripts that augment the project.
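As a rough sketch of how such a stack might fit together, the notebook-style snippet below loads a flat export of settings and writes it into Neo4j as a graph. The file name, column names, credentials, and graph model are assumptions for illustration, not Plex's actual pipeline.

```python
# A notebook-style sketch: load a flat (customer, setting, value) export and push it
# into Neo4j as a graph. File name, columns, and credentials are assumed for illustration.
import pandas as pd
from neo4j import GraphDatabase

settings = pd.read_csv("customer_settings.csv")  # assumed columns: customer_id, setting_key, value

driver = GraphDatabase.driver("bolt://localhost:7687", auth=("neo4j", "password"))

def load_setting(tx, customer_id, setting_key, value):
    # Customers and settings become nodes; HAS_SETTING relationships carry the values.
    tx.run(
        "MERGE (c:Customer {id: $cid}) "
        "MERGE (s:Setting {key: $key}) "
        "MERGE (c)-[r:HAS_SETTING]->(s) SET r.value = $value",
        cid=customer_id, key=setting_key, value=value,
    )

with driver.session() as session:
    for row in settings.itertuples():
        session.execute_write(load_setting, row.customer_id, row.setting_key, row.value)
driver.close()
```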
From there, the team put these tools together and started building out their models.
Once you had the tools, what was the process like?
There were actually three phases to the project. The first was feature discovery, which is basically asking, "What are the data sets we need to test our assumption?" The second phase was building the knowledge capture system, which gathers all of the data identified during feature discovery. The final phase, of course, was building the recommendation system.
The first phase – feature discovery – was determining which data sets were the most important. One thing I learned is that feature discovery is just as much art as it is science. You really have to have people on your team that know the data.
We put together a set of data that we could use to test our thesis and started to build some models around it. We found that having a set of features and a set of data points wasn't enough; beyond deciding which data points to use, we needed to establish just how much data to use as well.
For instance, if we were too restrictive in the data sets we used, the models couldn't make any inferences; they basically regurgitated what we put in. But if we were too broad in the amount of data, the data sets, and the features we fed into the model, it came up with all sorts of crazy inferences and dependencies that had no meaning.
We had to keep refining, which is a big part of the machine learning process. We really had to learn to accept that there is an iterative nature to determining which data could best give us the results we were looking for.
Once we had a set of data points and features that we felt were representative and returned some good initial results, we built a knowledge capture system using Neo4j, Excel spreadsheets, and Microsoft's Azure Data Factory. This system pulled in the data, adjusted it, and used the models we had built in the ML framework to start analyzing the data.
The next part was consolidating and analyzing the data, mainly using Azure ML, along with some Power BI and Excel, and Neo4j for some of the visual representation.
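A plausible version of that consolidation step, under the same assumed file and column names as the sketch above, is to pivot the long settings export into one row per customer with a column per setting before handing it to the analysis tools.

```python
# Pivot the long (customer, setting, value) export into a customer-by-setting matrix
# that downstream models can consume. Same assumed file and column names as above.
import pandas as pd

settings = pd.read_csv("customer_settings.csv")
feature_matrix = settings.pivot(index="customer_id", columns="setting_key", values="value")
feature_matrix = feature_matrix.fillna(0)               # treat missing settings as "off"
feature_matrix.to_csv("customer_feature_matrix.csv")    # handed off to Azure ML / Power BI
```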
Can you explain how this applied to the configuration setting problem you were trying to solve?
As I mentioned, our customers turn settings on and off in order to configure the system to meet their needs.
To make that process easier, Plex has a set of customer setting templates – 12 or 13 primary templates based on size, industry, facility count, etc. – that we use to categorize our customers, which then informs new customer implementations. When a new customer is brought into the fold, it’s more efficient to associate them with an appropriate template and copy the template settings to their instance.
For this project, we wanted the algorithm to pretend we didn't have those templates and determine on its own how it would group our customers. We fed the data into the model and it began to cluster our customers and match them with a particular customer setting template. Using a mechanism called similarity analysis, the model then made recommendations based on the delta between a customer and its neighbors, using their common template as a baseline.
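For readers who want to see the shape of this approach, here is a minimal sketch of clustering plus a "delta from my neighbors" similarity check over a customer-by-setting matrix. It is illustrative only, with assumed file and column names, and is not Plex's actual model.

```python
# A minimal sketch: cluster customers by their settings, then flag the settings where a
# customer diverges most from the other customers in its cluster. Illustrative only.
import pandas as pd
from sklearn.cluster import KMeans

X = pd.read_csv("customer_feature_matrix.csv", index_col="customer_id")  # 0/1 setting values

# Let the algorithm form its own groups, ignoring the hand-built templates.
X["cluster"] = KMeans(n_clusters=14, n_init=10, random_state=0).fit_predict(X)

def recommend(customer_id, top_n=5):
    """Return the settings where this customer differs most from 'customers like me'."""
    peers = X[X["cluster"] == X.loc[customer_id, "cluster"]].drop(columns="cluster")
    me = peers.loc[customer_id]
    consensus = peers.drop(index=customer_id).mean()   # fraction of peers with each setting on
    delta = (consensus - me).abs().sort_values(ascending=False)
    return delta.head(top_n)
```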
Can you show me how this played out in the model?
The image below is from Neo4j and it actually shows the clustering of customers matched to a particular template. It first came up with its own clustering, and then it attempted to match those clusters with our templates.
This next image shows the model’s clusters. It identified 14 broad groups completely disconnected from our templates.
Then what we told the model to do on this next screen was to try and match its clusters with our templates, and it actually grouped our templates into those four groups that you see highlighted right there: 0, 7, 9, and 11.
So the model reclassified your templates?
Yes, the machine learning algorithm essentially said that four of the groups it had defined (Groups 0, 7, 9, and 11) could match up with one or more of Plex's existing templates.
What else does this tell you?
All in all, what you see here is that six of its groups matched our templates. As you can see in this next image, the model matched two of its groupings – 3 and 8 – with a few templates that we don't typically use as much. The model also had two groupings, 2 and 6, that it couldn't match with any of our templates.
One of the main findings coming out of this is that we are basically missing two templates. In other words, there is a set of customers who, when they onboard, are assigned a template that, although it may not be bad, is probably not the optimum template for them.
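One way a "missing templates" observation can fall out of this kind of analysis, sketched here with assumed file and column names, is to cross-tabulate the model's clusters against the templates customers were actually assigned and look for clusters with no dominant template.

```python
# Cross-tabulate model clusters against assigned templates; clusters whose best-matching
# template covers only a minority of members suggest a template that doesn't exist yet.
# File and column names are assumptions for illustration.
import pandas as pd

clusters = pd.read_csv("customer_clusters.csv", index_col="customer_id")     # column: cluster
templates = pd.read_csv("customer_templates.csv", index_col="customer_id")   # column: template

overlap = pd.crosstab(clusters["cluster"], templates.loc[clusters.index, "template"])
print(overlap)

coverage = overlap.max(axis=1) / overlap.sum(axis=1)
print(coverage[coverage < 0.5].sort_values())   # candidates for missing templates
```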
This final graph illustrates a finding that was consistent across the board: the number of times our customer settings were changed dropped off a cliff in about 2015. That was a completely unexpected finding from the study.
This finding validated the work that we've been doing since 2014 to reduce our overall dependence on settings, including our new user interface, which is much less settings-dependent than Classic. It gave us confidence we were on the right path.
This is also really cool because it wasn't actually in our initial hypothesis. It wasn't what my team was trying to find or determine; it was just a side effect of the work they were doing. Of course, one of the advantages of starting a project like this is that you look at your data in detail and come across insights you never expected – things the initial algorithms showed us that weren't even part of our initial thesis.
Those were the two main results: identifying configuration setting templates that we had missed, and showing us that the work we had done in reducing the number of settings had paid off.
What action will you take from here?
One of the follow-throughs from this, obviously, will be to look at the model's groupings that don't map to a Plex template and ask: what are the characteristics of the customers in those groups, and why did we miss them? Over the coming months we're working to determine which customers are in those groups and what they have in common so we can come up with a proper template for them.
The second action is to follow through on the recommendation system, because that's really where the productization of this is going. If I'm an existing customer, I want to know if my settings are optimum. I want to know, based on all the customers that are like me, which of my settings are different, and why I have them set differently than all of the other customers who are like me.
We already have the "like me" part done. Now we have to productize that into a tool, and we want to provide that tool in two different contexts. One is an internal tool for our support team, so that when they get an incoming call that says, "Hey, I've got a problem," they can run it against the settings optimization tool and say, "You know what? You might want to check these settings. I've noticed yours are set differently from all the other customers like you." Then the next iteration, or second context, would be a self-service tool that lets a customer go onto the Plex menu and analyze their own settings.
Were you happy with the results of this project?
Overall, I was very pleased with the outcome of the project and how the team worked. It was exciting to see. One of the interesting things about how far machine learning has come from where it was five or ten years ago is that the tools are so strong that you don't need a team of data scientists. You need a team of smart engineers and a couple of people who know the data.
Yes, a data scientist is obviously going to help, but to get 70% or 80% of the way there, in my opinion, you don't really need one. There was no data scientist involved in anything we did here. That, to me, was a real encouragement.
The last takeaway for me is that once you start to understand what's behind machine learning, you start to use it as a mechanism to solve problems. In other words, it's not just this one machine learning project that the labs did; it's how we can analyze any problem moving forward.
It's going to be a long journey, but it's very exciting. I'm really excited to see where we can go with this.