In a February episode of Burning Platform, we hosted a back and forth on architectural trends in the industry with Sayan Chakraborty, EVP of Product and Technology at Workday. Here we have him back to present more specifically on Workday’s architectural journey. He presents only five slides but makes powerful commentary.
He presents some high-level philosophy and discusses how Workday has integrated several acquisitions – some analytical, others transaction-processing heavy. But I really liked his description of how Workday’s architecture has evolved over 15 years.
"Today, we have a highly scalable platform that can support millions of employees on a single instance of Workday. We have elastic capability, horizontally scaling elastic capability because you have business cycles. You do payroll on particular days and times. You run specific reports at times during the month or during the fiscal year.
If we look back 15 years ago, we would see a lot of general compute engines in the Workday stack. Today, you see a lot of highly specialized compute engines, and we send the right workload to the right engine.
- If you've got an analytics workload, it will go to the Prism technology that came in through the Platfora acquisition, which is designed and optimized for that type of workload.
- We have the OMS transaction service in the middle of this diagram that handles our high-performance transaction workloads.
- We've got integration servers that handle the ins and outs, as you connect Workday to systems outside of Workday.
- You have ML workloads that move into our ML environment that have a different kind of usage rate, memory consumption, CPU usage than these other workloads.
We can optimize for all of those, and I think that's really the dramatic difference. If you time-traveled back 15 years and showed this slide to architects at Workday, what they would be surprised by is the specialization and the elasticity and the horizontal scale that's now ...
The age when you would do machine learning processing at a hardware level on standard CPUs from Intel or AMD is essentially over. The industry realized that there were these devices, these GPUs that had been developed primarily for gaming, and realized that, hey, these are really, really interesting because of the way they tackle the processing problem, and we can leverage that capability – essentially, parallel processing versus serial processing (parallel at scale) – and those kinds of problems are the kinds of problems we have in deep learning networks or other kinds of workloads within the ML AI capability."
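The core idea in the talk – "send the right workload to the right engine" – can be sketched as a simple dispatch table. This is purely illustrative: the engine labels come from the talk, but the `Workload` type, the `route` function, and the mapping itself are hypothetical stand-ins, not Workday's actual routing logic.

```python
from dataclasses import dataclass

# Engine labels taken from the talk; the mapping itself is a
# hypothetical illustration, not Workday's real architecture.
ENGINES = {
    "analytics": "Prism",               # analytics workloads (Platfora lineage)
    "transaction": "OMS",               # high-performance transaction service
    "integration": "IntegrationServer", # connecting Workday to outside systems
    "ml": "MLEnvironment",              # ML workloads with different resource profiles
}

@dataclass
class Workload:
    name: str
    kind: str  # one of the keys in ENGINES

def route(workload: Workload) -> str:
    """Dispatch a workload to the engine specialized for its kind."""
    try:
        return ENGINES[workload.kind]
    except KeyError:
        raise ValueError(f"no engine for workload kind {workload.kind!r}")

print(route(Workload("monthly-report", "analytics")))  # Prism
print(route(Workload("payroll-run", "transaction")))   # OMS
```

The point of the specialization he describes is exactly this separation: each workload kind has a different usage rate, memory, and CPU profile, so each engine can be tuned (and scaled elastically) for its own profile rather than one general engine serving all of them.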
25 minutes of wizardry. Very well done.