How Optimization Powers Decision Intelligence with Duke Perrucci & Ed Klotz, CEO and Senior Mathematical Optimization Specialist at Gurobi Optimization
Duke Perrucci is the CEO at Gurobi Optimization. Prior to being appointed CEO, Duke had served as CRO and COO since 2018. Perrucci has over 25 years of experience in sales, marketing, and analytics roles. Before joining Gurobi, he served at Cambridge Analytica, FocusVision, and Unilever. He also spent nine years with Information Resources, Inc., where he worked across the entire PepsiCo enterprise.
Dr. Ed Klotz is a Senior Mathematical Optimization Specialist at Gurobi Optimization. Klotz has over 30 years of experience in the mathematical optimization software industry. He is a technical expert who has helped customers solve some of the world’s most challenging mathematical optimization problems. Dr. Klotz works closely with Gurobi's customers to support them in implementing and utilizing mathematical optimization in their organizations. He also interacts heavily with the R&D team based on his experiences with the customers.

Richie helps individuals and organizations get better at using data and AI. He's been a data scientist since before it was called data science, and has written two books and created many DataCamp courses on the subject. He is a host of the DataFramed podcast, and runs DataCamp's webinar program.
Key Quotes
Even if you do a really good job and get within 1-2% of optimal on these large scale problems, a 1% improvement is a huge amount of money in terms of savings or increased profit or whatever your goal is.
You can make a good decision or you can make the best decision, the optimal decision. And we like to think that there's a big difference between those two.
Key Takeaways
Decision intelligence is not just about making good decisions, but about finding the optimal decision by mathematically modeling problems and using solvers to maximize or minimize desired outcomes.
Machine learning and optimization complement each other; machine learning can provide accurate forecasts that serve as inputs for optimization models, enhancing decision-making processes.
Optimization models need to be adaptable to changing business conditions, and it's important to work iteratively with stakeholders to refine models and incorporate new information.
Transcript
Richie: Hi there, Duke and Ed, welcome to the show.
Ed: Hello.
Duke: Thanks for having us, Richie. Glad to be here.
Richie: Yeah, I'm very excited for this. So, first of all, I guess, we're talking about decision intelligence, but that's a pretty broad term. Can you tell me what does decision intelligence mean to you? Maybe Duke, do you want to go first?
Duke: I guess the working definition out there is that decision intelligence is the application of AI to some decision-making process. For Gurobi, it's a little bit different than that. We focus more on the optimization of that decision. So with decisions, you can make a good decision or you can make the best decision, the optimal decision.
And we like to think that there's a big difference between those two. So in our world, you've got something you're trying to minimize or maximize. A lot of our customers are trying to maximize revenue or maximize profit. Then you've got variables, the elements of the problem that you're trying to solve.
Let's say you're blending ice cream; variables could be the cost of vanilla extract or the cost of cacao, something simple like that. Then you've got constraints, the things that you can't break, right? Going back to ice cream, you can't have more than 40 percent cacao in that chocolate ice cream, or it would taste disgusting.
So, what our software does is it takes problems like that, problems where you're trying to minimize something or maximize something, and it finds the best solution, the solution that's going to maximize revenue or maximize profit or minimize waste. So we live in that space of decision intelligence. And it's all we do. It's all we do at Gurobi. We build one piece of software, known as a solver, and we help to make better decisions for companies around the world.
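Duke's objective/variables/constraints framing can be sketched as a tiny linear program. The example below uses SciPy's freely available `linprog` as a stand-in for an enterprise solver, and all of the ingredient costs and blend limits are invented for illustration:

```python
from scipy.optimize import linprog

# Variables: kilograms of vanilla extract, cacao, and cream in a 100 kg batch.
# Objective: minimize ingredient cost (illustrative prices per kg).
cost = [3.0, 5.0, 1.0]  # vanilla, cacao, cream

# Constraints (all illustrative):
#   vanilla + cacao + cream == 100   (batch size)
#   cacao <= 40                      (no more than 40% cacao)
#   cacao >= 20, vanilla >= 5        (minimum flavor content)
A_eq = [[1, 1, 1]]
b_eq = [100]
A_ub = [[0, 1, 0],    # cacao <= 40
        [0, -1, 0],   # -cacao <= -20   (i.e. cacao >= 20)
        [-1, 0, 0]]   # -vanilla <= -5  (i.e. vanilla >= 5)
b_ub = [40, -20, -5]

res = linprog(cost, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=b_eq,
              bounds=[(0, None)] * 3, method="highs")
vanilla, cacao, cream = res.x
print(f"cost={res.fun:.0f}, vanilla={vanilla:.0f}, cacao={cacao:.0f}, cream={cream:.0f}")
```

A production blending model would have many more ingredients and constraints, but the three building blocks Duke names, an objective, decision variables, and constraints, are exactly the pieces that appear here.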
Richie: I really like the idea of having an optimal decision because you think, well, okay, do I make a good decision or not? But if, you know, there's sort of some mathematically best decision you can make, then that seems like a very cool sort of thing to have. Alright, nice. So, I'd love to hear more examples of when you might care about having some sort of optimal decision.
So you mentioned a few things, like you're trying to improve your revenue or save on costs or something like that. Can you talk me through some concrete examples?
Duke: So we work with customers across more than 40 industries, so the problems are really far-reaching and pretty dynamic. It could be everything from minimizing the amount of forest that you need to cut down to produce a certain amount of wood, all the way to gene sequencing and the maximization of portfolios in the financial space.
It really is pretty simple: any problem you can define mathematically, you can solve using an optimization solver. And you might think, oh, well, that's tricky. How do you define problems mathematically? It's really not that tricky. I'll give you two examples.
An easy example: let's say you owned a McDonald's, right? And you needed to schedule the workforce. It sounds kind of easy, right? You have three shifts. You need enough people at the register, enough people at the fryer, enough people at the grill. How hard could that be? But then you think about all of the constraints that are part of that, right?
You can't have single parents working at night, or else they're going to be frustrated and they're not going to stick around. You can't go into overtime, right? Because that costs the business a lot of money, and it robs you of profit. You can't break OSHA guidelines. You know, it starts to get a little hairy, honestly, right?
And that's a fairly straightforward problem. You can use optimization to solve it. You can come up with the optimal schedule that's going to keep employees happy, maintain or drive your profit or revenue, and probably give you better customer satisfaction scores, because you'll have happier employees.
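A toy version of that rostering problem can be written as a small mixed-integer program. This sketch uses SciPy's `milp` (available from SciPy 1.9) rather than a commercial solver, and every number in it (wages, shift demands, the two-shift cap) is made up for illustration:

```python
import numpy as np
from scipy.optimize import milp, LinearConstraint, Bounds

# A toy shift-rostering model: 3 workers x 3 shifts (morning, evening, night).
# x[w, s] = 1 if worker w covers shift s.
n_workers, n_shifts = 3, 3
wage = [10, 12, 11]                    # cost per shift for each worker (illustrative)
c = np.repeat(wage, n_shifts)          # flattened objective, index = w * n_shifts + s

demand = [2, 2, 1]                     # staff required on each shift
cover = np.zeros((n_shifts, n_workers * n_shifts))
for s in range(n_shifts):
    cover[s, s::n_shifts] = 1          # sum over workers for shift s
load = np.zeros((n_workers, n_workers * n_shifts))
for w in range(n_workers):
    load[w, w * n_shifts:(w + 1) * n_shifts] = 1  # shifts taken by worker w

ub = np.ones(n_workers * n_shifts)
ub[0 * n_shifts + 2] = 0               # worker 0 cannot take the night shift

res = milp(
    c,
    constraints=[
        LinearConstraint(cover, lb=demand, ub=np.inf),  # meet each shift's demand
        LinearConstraint(load, lb=0, ub=2),             # at most 2 shifts per worker (no overtime)
    ],
    integrality=np.ones_like(c),       # all decisions are integral (binary via bounds)
    bounds=Bounds(lb=0, ub=ub),
)
print(res.success, res.fun)
```

The "hairy" constraints Duke lists (no nights for some workers, no overtime) become one bound and one inequality each, which is why adding a new rule to a model like this is cheap.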
That's an easy one. Then you've got these really complex ones: power delivery in the United States. There are these organizations called ISOs. They create the power and distribute it all over the U.S. They're responsible for huge amounts of territory, obviously, since there are only eight of them.
And that is a really, really complex and difficult problem to solve: figuring out how much power to produce the next day to satisfy all of the power needs on that particular, you know, 5,000-square-mile grid. It's a problem so complex that many times you have to simplify it in order to solve it, even with an enterprise-grade solver like Gurobi.
So you can try optimization on just about anything, and it's fascinating to see how our clients will use it for just about any sort of problem. As long as they've got good data that supports the problem, just like any other analytics project, you've got to have good data. But you can unleash optimization on just about anything.
Richie: That does sound pretty cool, that there are so many varied examples. And I do like the one about employee scheduling, because that's something a lot of people worry about: you've got to have the right shifts at the right time, and then people complain about stuff.
So that's going to help employee happiness, I think, having that done for you, rather than just being a purely monetary benefit. Ed, did you have any more examples that you wanted to share of cases where optimization is useful?
Ed: First of all, I'll just reiterate the electric power example, and I think we'll probably come back to that again. But a couple of others: package delivery. Think about companies like UPS or Amazon. You need to deliver the packages, and the packages you need to deliver and their locations vary from day to day.
And then another one we like to talk about, because it resonates with a lot of people, is NFL scheduling. The NFL works with a company that uses Gurobi to generate candidate schedules for the regular season each year, and the use of optimization has really helped them look at many more schedules than they used to be able to.
And then I'll just mention one other thing, because optimization is a somewhat overloaded word. People talk about optimizing things even though that might just mean they looked at a process and came up with some improvements. The algorithms that Gurobi uses can find provably optimal solutions to these problems.
And when the problems are so hard that there's no way to find an optimal solution, it can still give guidance as to how good a solution might be: say, that it's within 5 percent of optimal.
Richie: Okay, that's interesting. So, I'm now curious as to how well this optimization software is going to work compared to, say, a human expert for a given problem. Like, Do humans get close to optimal solutions ever?
Duke: Yeah, well, it's funny that you asked that, Richie. We built a game called the Burrito Game to demonstrate to people new to optimization just how hard it is for a human being to come anywhere near optimal. The Burrito Game is pretty straightforward: you have a fictitious town, and you have to park burrito trucks outside of different buildings, each with a certain number of employees.
And the idea is to maximize your profit. We take it to trade shows and we let data scientists play around with it, and it's near impossible to come close to that optimal solution, while the solver figures it out in a millisecond. People will sit there and really try to back into it and figure it out and use all the math that they can.
And you just can't compete. You can probably come up with a good solution, but the difference between that good solution and an optimal solution, for a lot of our customers, is millions of dollars. It's not insignificant.
Ed: One last thing about the Burrito Game: it's not just people outside our company who have trouble finding the optimal solution. Even those of us in the company who have optimization in our DNA will struggle to come up with the truly optimal solution. And then I just want to reiterate what Duke said, because it's incredibly important: even if you do a really good job and get within 1 or 2 percent of optimal on these large-scale problems,
a 1 percent improvement is a huge amount of money in terms of savings or increased profit or whatever your goal is.
Richie: Yeah, I can certainly see how, once you're running things at scale, particularly in these supply chain cases where there are very narrow margins, a very small improvement toward the optimal solution is going to be a lot of value.
All right. So I'd just like to talk a bit about the relationship between optimization and machine learning, because machine learning is the other area where data-driven decision making is big: you're trying to make predictions about things. Can you talk me through how the two fields are related?
Ed: First of all, to get into this, maybe some of the audience is familiar with Davenport's Competing on Analytics book. He has a great diagram that separates analytics into descriptive, predictive, and prescriptive analytics. In terms of the difference between optimization and machine learning, machine learning is in that predictive analytics category.
It's great for making predictions. Optimization is better for making decisions. And part of the reason for this is that machine learning needs a training set. If you have an optimization problem where the number of possible decisions you can make is way above the trillions, it's very hard to come up with a suitable training set
that you can train your machine learning model on to get an accurate model that tells you how to decide things. On the other hand, machine learning is great for things like forecasting demand. So if you have a manufacturing process where your decisions involve producing, selling, and putting stuff into inventory, machine learning can do a great job of telling you: here's a really accurate demand forecast.
Use that in the planning model that you solve with optimization technology, and you have a more accurate optimization model. The net result is that these two technologies work together to make each one more effective than it would be if you used them individually.
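Ed's forecast-then-optimize pipeline can be sketched in a few lines. Here a simple linear trend stands in for the machine learning model (a real system would use something far richer), and the forecast feeds a small production-planning LP solved with SciPy's `linprog`; all costs and capacities are invented:

```python
import numpy as np
from scipy.optimize import linprog

# Step 1: a stand-in "machine learning" forecast, a linear trend fitted to
# past demand. A real pipeline would use a proper ML model here.
history = np.array([100, 104, 108, 112], dtype=float)
t = np.arange(len(history))
slope, intercept = np.polyfit(t, history, 1)
demand = intercept + slope * np.arange(len(history), len(history) + 3)  # next 3 periods

# Step 2: feed the forecast into a production-planning LP.
# Variables: production p1..p3, then end-of-period inventory i1..i3.
c = [2, 2, 2, 0.5, 0.5, 0.5]      # unit production cost, unit holding cost (illustrative)
A_eq = [[1, 0, 0, -1, 0, 0],      # p1 - i1      = demand[0]  (no starting stock)
        [0, 1, 0, 1, -1, 0],      # p2 + i1 - i2 = demand[1]
        [0, 0, 1, 0, 1, -1]]      # p3 + i2 - i3 = demand[2]
bounds = [(0, 150)] * 3 + [(0, None)] * 3   # capacity of 150 units per period
res = linprog(c, A_eq=A_eq, b_eq=demand, bounds=bounds, method="highs")
print(res.x[:3], res.fun)
```

The point of the sketch is the hand-off: the forecast becomes the right-hand side of the optimization model's constraints, which is exactly the "ML provides input to optimization" pattern Ed describes.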
Richie: So it sounds like you can then use your machine learning models, if I've understood it correctly, to provide some of the constraints on what's going into your optimization model.
Ed: It's going to provide input on the constraints to the optimization model. That's exactly right.
Richie: So I guess the next problem is. You've got some sort of technical answer from your optimization model. How do you go from I've got this answer to I can make some sort of decision?
Ed: You're absolutely right. Solution acceptance is really critical, and as an optimization practitioner you have to be connected to the people in the business process who are actually going to look at the answer. What that means is you need to translate it from the vocabulary of the optimization model and the mathematics to the business process as the boots on the ground see it, in their vocabulary, so that they can comprehend it.
And it's important to keep in mind that, fundamentally, if you're going to come up with a proposal that's going to, say, save money, it's going to be different from the current operation in some way that could be counterintuitive. So you need to think very hard about not only doing the translation, but having good explanations for when the boots on the ground raise questions or challenge you and say, well, this doesn't make sense,
why should we do it this way? So that you can have an answer, not in terms of some equation, but in terms of the building blocks of the process, to say: this is why it's going to save money. There was a guy, the late Gene Woolsey, who was an extremely successful practitioner.
And I'm paraphrasing a quote of his, but it was: an optimal solution that is not accepted is not optimal. So that applies to the question you ask.
Richie: Yeah, it does seem like a lot of change management needs to happen, especially if the solution is something unintuitive that a human wouldn't have come up with; that sounds like it's going to be a tricky ask. Okay, so maybe we'll back up a bit.
Suppose you're convinced that making use of optimization techniques is a good idea. Where do you get started as a business? What's step one if you want to pursue this?
Duke: Let's see. So step one is obviously that you've identified the problem you want to solve. Either the line of business has come to you and said, hey, we need some help with some analytics on this, we need to improve profitability of the production line, or something like that. Then you've got to make sure that you've got access to good, hygienic data
that's going to fuel the process. Most companies out there now have vast amounts of data, and it's usually well aligned and stored properly, so that's not as big a problem as it was a few years back. Once you have that data, then you have to start building that digital twin, right?
That's the way we tend to think about these models: you're taking a process in the real world and you're trying to describe it in a digital sense. And that takes a little bit of time. It could take a month, it could take two months, depending on how complex the process you're trying to model is, and it's a very iterative process.
I think Ed described it really well when he said, hey, you've got to work closely with the line of business. Because you've got to understand the process, you've got to understand the jargon, you've got to understand the constraints that they have to deal with and the complexity of those. And honestly, you also have to build trust with that line of business, because they have to trust the product that you're producing.
Once you have that model built and refined a little bit, then you need a solver. That's what Gurobi is: a library of thousands of algorithms designed to solve problems of this type. The model calls the solver, and the solver then solves it for you and tells you whether you've achieved an optimal decision or come close to one.
Then once you have that, some customers just do that. They solve it on their own laptop and they say, alright, we've got to build a factory over here, because that's the optimal location, and they don't have to build an application around it. Most of our customers then have to get IT or DevOps involved to wrap an application around it and figure out what sort of server architecture they need to deploy the thing properly to the users, and then you're off to the races.
And as the problem changes, let's say you added two factories or two warehouses you didn't have before, it's just simple adjustments that you make to the model to account for that, which is very different from a heuristics-based approach.
You know, if you're using heuristics, then boy, you might have to tear that whole thing apart and rebuild it if you add one new factory or one new warehouse. But working with a solver, you just have to make some small tweaks within the model.
Richie: That sounds very cool. I like that you said you can basically get started with this on a laptop. It's not crazy computation; it's something an individual could do.
Duke: Yeah, absolutely.
Richie: All right. So I guess I'm curious as to who'd be involved in this then. Is this something that a data team would take on then, or does this go beyond that?
Duke: Yeah, I guess it depends on the size of the customer. We've got small customers where the CEO is involved, you know, the CEO has a PhD in mathematics and is geeking out over it, so they get involved. Most of the time, though, it's owned by a center of excellence. The data scientists, the operations researchers, the analytics people, they are typically called in to be the experts on the project, and they're the ones that typically call us and say, hey, we need a solver.
So typically it's that team, but the budget typically resides outside of that team. It's a VP or an SVP of, it could be operations, it could be digital transformation, it could be supply chain, a bunch of different titles. But it's always closely run by the analytics professionals.
They're the ones that really know how to use the software. They're the ones that really know how to build that digital twin. And they're the ones that have gone through the charred ashes of projects that they didn't model quite so well, so they've learned a lot over time. That's typically how these projects get done.
Richie: So, you mentioned charred ashes. It sounds like there is scope for things going wrong. I guess if your model doesn't represent the real world in some way, then you're going to get the wrong answer somehow. Can you talk me through what happens there and how you avoid it?
Duke: Yeah, I guess there are a few common problems that we see. That's definitely one of them: somebody might be new to optimization and still learning the ropes, and they might come back with an infeasible solution, or there might be some numerical issues or something like that.
And that's why we have a support team, a whole team of 20 PhDs that will get on the phone with you and troubleshoot all that, because we want you to be successful with optimization. But yes, building the best representation of that process is really important. And some of it is a little bit of science and art,
right? If you've been doing this for 20 or 30 years like Ed has, a lot of these things come really naturally to you, but if you're new to it, there's a bit of nuance. So it takes a little practice to build a good model, and we're certainly there to help you
do that. After that, one of the biggest issues we run into is people using open source solvers. The problem is they'll use an open source solver, which just has a very different performance level than an enterprise-grade solver like Gurobi.
It won't solve the problem, and then they'll walk away from optimization. We don't want that to happen to anybody, because optimization is such a dynamic and powerful tool. We don't want people to explore open source tools, hit a really hairy problem that is difficult to solve, have it fail,
and then say, oh, optimization is not a solution for me. That's actually one of the bigger problems we run into on a daily basis. But that's really it. Build a good model, build trust with the team that you're building the model for, and make sure you understand the problem very clearly.
Use an enterprise-grade solver to solve that problem, and then develop a good relationship with your IT department, because they're the ones that are going to deploy this thing. So that's important too.
Ed: Yeah, I'd like to add a couple of quick things. There's another good Gene Woolsey quote that I think very much applies to what Duke said about making sure you get an accurate digital twin. It is, again, about making sure you're connected to the boots on the ground. The paraphrase of Woolsey was: you can't do any math until they let you drive the forklift unsupervised.
By really learning how the nuts-and-bolts operation goes, you get a good digital twin. So that's one thing. I also can't reiterate enough Duke's comment about getting good data. It was one sentence in a fairly long list of things, but for many projects it dominates the amount of time spent; the modeling part is easy.
Getting good data is a major challenge, despite all the really good tools we now have for processing data. And then the last one, which is somewhat related to the open source solver aspect: especially with the rise of quantum computers, there are these so-called NP-hard problems, which a lot of publicity says are just so hard that they're impossible to solve.
Just like we don't want people to give up on optimization because an open source solver didn't meet their needs, we don't want them to give up on optimization just because their model falls into this NP-hard class. Because while it is true that in the worst case these problems are intractable, in fact Gurobi can solve plenty of significant instances of them to optimality, and even when it can't, it can often give very good solutions.
So that's another misconception out there that we want to dispel.
Richie: So since you both complained about open source solvers, are we talking about SciPy in particular here?
Ed: No, we are not. Actually, SciPy and NumPy are really good, but they are not so much solvers; they are more linear algebra and matrix manipulators. So they are not the kind of optimizer that we talk about. And it's not so much that you should never use them; it's that if they don't meet your needs, you should not give up on the whole optimization technology.
Richie: All right. Okay. So it sounds like there's going to be some sort of threshold between "I can get started with free tools" and "I need some sort of enterprise solution." So, are there any easy optimization problems for people who are just interested in the field and want to get started?
Duke: We actually spent a lot of time and energy building tools that help people get started. So we have a bunch of Jupyter Notebook examples that are, you could say, pretty straightforward optimization problems, different types of problems that you find out in the world on a daily basis.
We built these Jupyter Notebook examples to show people how to get a model off the ground. We've also got courses that we've built; one is out on Udemy, to help data scientists get this skill under their belt. We've got a bunch on our website as well. But yeah, there are seemingly easy optimization problems to start with.
You know, the scheduling of employees at McDonald's is probably one of the easier ones; it's really not that hard. I mean, I took an operations research class in business school, and we had to create a fairly straightforward model. I think we had to give school students the right amount of nutrition within the cost constraints.
So it was like, hey, you get a hot dog this day, you get a pizza that day. Pretty straightforward stuff, and it wasn't that hard to get started. But it does get a little more complex, no question about it; it takes time and it takes practice. But yeah, there are lots of easy problems you can start with to cut your teeth on.
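The school-lunch exercise Duke remembers is the classic "diet problem," one of the oldest linear programs: choose servings of each food to meet nutrition minimums at minimum cost. A minimal sketch with SciPy's `linprog`, using invented prices and nutrition values:

```python
from scipy.optimize import linprog

# The classic diet problem, with made-up numbers.
foods = ["hot dog", "pizza slice", "salad"]
cost = [1.50, 2.00, 2.50]            # price per serving (illustrative)
# Rows: calories and protein (g) per serving (illustrative values).
nutrition = [[300, 350, 150],
             [10, 15, 5]]
minimum = [2000, 55]                 # daily requirements

# linprog minimizes subject to <= constraints, so negate to express "at least".
A_ub = [[-v for v in row] for row in nutrition]
b_ub = [-m for m in minimum]
res = linprog(cost, A_ub=A_ub, b_ub=b_ub,
              bounds=[(0, 5)] * len(foods),  # at most 5 servings of any one food
              method="highs")
for food, servings in zip(foods, res.x):
    print(f"{food}: {servings:.2f} servings")
```

This is the kind of "seemingly easy" starter model Duke describes: a real menu-planning model would add more nutrients, variety constraints, and multiple days, but the structure stays the same.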
Richie: Okay. Yeah, I do really like that employee scheduling example; it's relatable, and it seems like you can probably get something useful with a relatively small number of employees. It doesn't have to be hundreds or thousands of people; since people complain a lot, I guess it's going to be a hard problem even at small scales.
All right. So I guess the next thing is, once you've created your first model, business conditions change quite rapidly. How do you deal with things changing with your optimization models?
Ed: It's pretty easy to modify the model to effect a change. So in terms of changing the model to reflect some new activity, that's not a problem. But in terms of how often you need to change the model, that's really going to vary. Let's go back to package delivery. Say you're a driver for a company that delivers packages.
You've got your territory, and the locations of the packages are going to change on a daily basis. So in that sense, that model is going to change every day and you're going to rerun it. The application is going to be very good at just taking those changes as changes in the input data, and it's going to be very easy to deal with.
On the other hand, if you're running some sort of sports scheduling model like the NFL does, once that final schedule comes out in mid-May, that's it. They can't change it, so they're not going to run it again. The next time they run it is going to be basically after the Super Bowl is over, when they need to think about setting up schedules for the following season. And then they are going to run it a lot, to generate as many candidate schedules as they think might be worth using, over the following month or two before they have to announce the final schedule.
So two dramatically different ways to deal with changes in the model, depending on the specific business.
Richie: Okay. Yeah. So it seems like it depends a lot on your business problem. Sometimes you're going to have to rerun and update these models on a daily basis. Do you ever go any more frequently than that? Do you have to do real-time updates or anything?
Ed: Oh yeah. Think of some sort of high-frequency trading application on the stock market, where the situation is so fluid it changes perhaps even in milliseconds, and certainly in seconds or minutes.
Richie: Actually, that sounds very difficult, but also kind of fun. Do you want to talk me through how you even go about attempting that sort of thing?
Ed: Gurobi can solve a lot of significant problems in milliseconds. And if a customer comes to us and tells us that that's what they're doing, we might tell them to configure the solver in certain ways that you would not use if you're solving something that is going to take 30 minutes to reach the optimal solution.
Because, to a certain extent, we assume the problem is going to be hard, and we do a fair bit of heavy lifting at the beginning to help us solve it faster later. But if you need to solve it in milliseconds, you want to disable some of that setup effort, because you need solutions really fast.
Richie: All right. So basically, as long as you plan for it, it's a fairly standard thing to solve; you just need the right infrastructure and processes in place. Okay. Actually, on the subject of processes, going back a few minutes, we were talking about how change management is one of the hard parts of this, because the model comes up with, okay, you've got to change everything about how you're working, or the process needs to be different.
You're going to be doing different tasks now. Do you have any advice on how to communicate the results of this, and how to influence your colleagues to make sure it gets adopted?
Duke: I would say, after talking to customers, one of the best things you can do, obviously, we talked about building trust with the people that own that process; that's fundamentally at the core of it. Beyond that, one of the things that helps is to run different scenarios, right? So don't just give the optimal solution, or what you think is the best solution.
Run a few different scenarios. With those different scenarios, people start to get a more intuitive sense of what the solver is doing. If one of the scenarios you run is sending out 15 trucks instead of nine, and you see this wild swing in the different variables and results, that gives you a little more explanatory power, right?
It's not so much a black box anymore; people start to see the sensitivities around the changes that you're making, and that tends to get people feeling a little more comfortable with it. That's probably the best thing you can do as an analytics person: play out different scenarios so people can see the sensitivity of the model.
Beyond that, Ed, do you have any ideas on it?
Ed: I think that's right. The only thing I would add is that for the person who needs to do this explaining, Groby has a lot of functionality or internal tools that we've implemented that facilitate this process. So I don't think we want to go into a technical deep dive about what these features do, but I'm just going to name them.
And then say that they're out there. So we have the notion of multi objective optimization, which can help generate these sort of different scenario based solutions. So for example, think about a portfolio model where you have a collection of assets and you might say, well, I want to maximize the return on the portfolio, but then you might say, well, wait a minute the maximal return probably entails a huge amount of risk so then you say, well, wait a minute what I really want to do is amongst all the.
Portfolio configurations that are within 1 percent of the best possible return, give me the one with minimal risk. And so the multi objective feature can help you with that. then we have the notion of a solution pool where we give you more solutions than just the single optimal one. So that again facilitates this notion of scenarios.
And then we have something called an infeasibility finder, which actually does more than explain infeasibilities. You can use it as an oracle that explains almost anything about your model, as long as you cast the question as an infeasible model. So it's good at answering a very specific type of question, but you can modify that question to get all sorts of insights into your model,
about why it behaves in a certain way, so that when you're challenged by the boots on the ground, you can come up with answers that will resonate with the people who are wondering why this solution is so good.
Richie: I do like this idea of just trying different scenarios. And I guess if you come up with some stupid scenarios, like, say, okay, what if we send out every single truck we have, then it's going to start getting people thinking about what is possible, what isn't possible, and where the boundaries of what you can do are.
All right, super.
Ed: Sorry to interrupt. I think you've touched on another benefit of optimization there, Richie, which is that customers will develop insight into the workings of their model by going through the process. Even if you just formulated a model and never actually ran it or implemented the solution, going through that process has educational benefit: you will learn things about your operations that could be beneficial.
Duke: Yeah, that's a great point. I mean, sometimes it's these old salts who have been working on the process for years, and some of it is intuition. Until you extract all of that and actually build it into a model, there isn't institutional knowledge on that process.
By building the model and extracting all that information, the stuff that just lives in people's heads because they've got 30 years of experience on that production line, there's business benefit in that by itself, without even running a solver against it, because you're codifying all of that knowledge in one place, which rarely happens.
Richie: That's interesting. So it does sound like a lot of the challenge in creating these things is just understanding what the viable constraints are, because a lot of it is in people's heads. Okay. Do you have any advice, then, on how you go about capturing this knowledge?
What do you do to get stuff out of people's heads and into a model?
Ed: Yeah, I mean, if you have these super experienced employees, it's mostly about communicating and getting a sense of the process. There's also an iterative side to it. You create the model, and the old salt says, no, no, no, you're doing it all wrong. But that elicits a response from him or her that helps you refine the model. So this is part of the model development process, and you should make that clear to the customer or prospect: hey, we're probably not going to get it absolutely right on the first iteration; there's an iterative communication process. Then they don't get upset when the first model doesn't exactly represent their process.
So the old salt tells you what you missed, and maybe it's because they never told you any of that, so you never had the information. But that's part of the process. You don't finger-point; you just say, okay, we'll incorporate that. So you go through an iteration, and again, that's part of the educational process, both for the model developer and for the people who want to apply the model.
So there's learning there.
Richie: So, lots of talking with colleagues. This sounds incredibly difficult, speaking to people over and over again. No, that does seem like a very sensible process, having that iterative cycle of getting more information from your colleagues, updating your model, and going back. One thing I'm curious about: you said these models can come up with answers that humans often can't get to themselves.
So do you have any examples of cases where the model came up with some crazy optimization, where all the humans thought, well, this is stupid, because they wouldn't have got there themselves, but then it actually worked?
Ed: I don't have a specific one, but I do believe they're out there. But again, what happens is you then have to be ready to explain what looks crazy. There's going to be something. It can be some very complicated chain of interactions. Well, okay, let's think about a supply chain.
So you've got warehouses, and you've got products that you're selling. It makes sense, intuitively, to always ship from the warehouse that's closest to the person ordering the product. But it might actually be a really bad idea to do that, because there's only one unit left of the product that person wants in the closest warehouse.
And you know there are going to be a bunch of people nearby asking for this thing in the next three weeks. So you're better off shipping it from a warehouse that's farther away but has a ton of inventory of this product, to help you deal with shortages. Maybe you then also want a shipment from that high-inventory warehouse to replenish the inventory at the closer one.
Those are things you might not think about doing. That one is probably simple enough that the old salts have figured out what to do, but it's a fairly simple example of something counterintuitive: you don't just ship based on nearest proximity and lowest shipping cost if you're going to incur dramatic shortfalls in your ability to fulfill demand later.
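Ed's warehouse trade-off can be made concrete with a toy calculation. All figures below are invented: one order arrives now, five more local orders are expected near the almost-empty NEAR warehouse, and we compare the greedy nearest-warehouse rule against saving NEAR's last unit for the local demand to come:

```python
# One order arrives now; five more local orders will arrive near the
# NEAR warehouse within three weeks. NEAR has only 1 unit; FAR has
# plenty. All costs are hypothetical shipping costs per order.
NEAR_STOCK = 1
COST_NOW_FROM_NEAR, COST_NOW_FROM_FAR = 3.0, 6.0       # current order
COST_LOCAL_FROM_NEAR, COST_LOCAL_FROM_FAR = 2.0, 10.0  # future local orders
FUTURE_LOCAL_ORDERS = 5

def plan_cost(ship_now_from_near: bool) -> float:
    """Total cost of serving every order under one shipping policy."""
    near = NEAR_STOCK
    if ship_now_from_near:
        cost, near = COST_NOW_FROM_NEAR, near - 1
    else:
        cost = COST_NOW_FROM_FAR
    # Serve future local orders from NEAR while stock lasts, else from FAR.
    served_near = min(near, FUTURE_LOCAL_ORDERS)
    cost += served_near * COST_LOCAL_FROM_NEAR
    cost += (FUTURE_LOCAL_ORDERS - served_near) * COST_LOCAL_FROM_FAR
    return cost

greedy = plan_cost(True)       # nearest-warehouse rule: 3 + 5*10 = 53
lookahead = plan_cost(False)   # save NEAR's last unit: 6 + 2 + 4*10 = 48
print("greedy:", greedy, "look-ahead:", lookahead)
```

The greedy choice is cheaper for the current order in isolation, yet more expensive overall. That is the kind of cross-time interaction an optimization model accounts for automatically and a locally sensible rule of thumb misses.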
Richie: I think this is maybe something that's very common in every business. You often think, well, my colleagues are doing something really, really stupid, but you don't have the context that your colleagues have, because they know there's some other, bigger problem being solved there.
And so, yeah, I can certainly see how you end up with non-intuitive solutions around that. You might think, okay, I should optimize for lower shipping costs, but actually you need to worry about supply management across a lot of warehouses.
Ed: Right. So that was a simple example, but I think you can imagine how it becomes much more complicated, where the trade-off is not as direct and there's some long sequence through the supply chain where there's a benefit, or a potential disaster.
Richie: Okay, I like that. So, really, the idea is: business is quite complex, and I'd rather not have to think about what the best solution is, so let's outsource it to software. I think that's the theme of today's episode. All right. So, just to wrap up, what are you most excited about in the world of optimization?
Duke: For me, it's the impact of large language models. There's always been an obstacle to the democratization of optimization, because not that many people are aware of it, and not that many people know how to build a model. We've started exploring, as has academia,
the ways that LLMs could help build models for people. They're never going to be perfect, right? Because every model is a bespoke model for your business process. But boy, you can have the foundation of a model built by an LLM, and that gets you jump-started. It gets you over that initial hurdle, and I think it's just going to get better and better over time.
So I'm really excited about that, because if you can get optimization into more people's hands, without needing this deep expertise, then you literally could optimize the world, right? There's so much that can be done with optimization to make the world a better place, like reducing our use of natural resources.
That's how a lot of our customers are using Gurobi, and the more people who can do that, the better for the planet. That's what excites me.
Richie: You know, I'm amazed that somehow generative AI creeps into every episode we do. We did quite well to avoid talking about it for 40 minutes or so, but it had to show up eventually. And I agree that having a lower barrier to entry seems like it's going to be incredibly useful.
Ed: Just to give an example of what Duke described: I was with a professor a couple of months ago, and we discussed one formulation of a model. The next day, before I left, I came back to his office early in the morning, and he said, well, I thought about your model, and that challenging aspect we were discussing,
and I thought about a reformulation. It wasn't ChatGPT; I think it was something called Pierre, or something like that, some alternative to ChatGPT, that he used to write a program to do this reformulation. It wasn't like he just said, write a program to do this. He had to give it some guidance.
And it gave him an accurate formulation, written in a programming language, that worked right out of the box. It was correct the first time through. So this is not some vague notion that LLMs are going to have an impact in this regard; they already are. Then, in terms of my own answer to your question, being the more techie guy, it's really about the continued progress in the algorithms and in the computers on which they run.
I think one good example is in the electric power industry. The AC optimal power flow problem, where AC stands for alternating current, is what operators really want to solve, but for the longest time they've been settling for the DC, or direct current, version of the problem as a good-enough approximation.
Recent progress is taking the computational state of the art to the point where I think they're going to be able to solve the AC version very soon, and this will have a big impact. That's the kind of thing that really excites me.
Richie: That sounds like a very cool problem. So what's the impact once we solve it?
Ed: Well, the ISOs, the independent system operators that Duke referred to earlier, can basically do a better job of managing the knobs of the power plants to provide power and meet demand at lower cost.
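The dispatch problem Ed alludes to can be sketched as a toy economic dispatch with invented numbers: a cheap remote generator is capped by a transmission-line limit, so the operator must blend it with a pricier local unit to meet demand at minimum cost. The real AC optimal power flow adds nonlinear, nonconvex power-flow physics on top of this, which is what makes it so hard:

```python
# Toy economic dispatch: meet demand at minimum cost subject to
# generator capacities and a transmission-line limit on the cheap
# remote unit. All figures are hypothetical.
DEMAND = 100.0                         # MW to serve
CHEAP_COST, CHEAP_MAX = 20.0, 120.0    # $/MWh, capacity of remote plant
LINE_LIMIT = 60.0                      # MW the line from it can carry
LOCAL_COST, LOCAL_MAX = 50.0, 80.0     # $/MWh, capacity of local plant

def dispatch(demand: float):
    """Merit-order dispatch; optimal for this two-generator toy case."""
    cheap = min(demand, CHEAP_MAX, LINE_LIMIT)  # line limit binds first
    local = min(demand - cheap, LOCAL_MAX)
    assert cheap + local >= demand - 1e-9, "demand cannot be met"
    return cheap, local, cheap * CHEAP_COST + local * LOCAL_COST

cheap_mw, local_mw, cost = dispatch(DEMAND)
print(f"remote: {cheap_mw} MW, local: {local_mw} MW, cost ${cost}")
# Without the line limit, the remote plant alone would be cheapest;
# the network constraint is what forces the costlier blend.
```

Even in this toy, the binding constraint is the network, not the generators, which hints at why better modeling of the grid's physics (the AC version) translates directly into cheaper, more reliable dispatch.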
Richie: No more power blackouts, lower power bills. Those sound like good things to have. Wonderful. All right, cool. Excellent. I feel like I've learned a lot today, which is always a good sign. So thank you so much for your time, Duke, and thank you so much for your time, Ed. That was interesting stuff.
Duke: Yeah, thank you, Richie. It's been a lot of fun. Appreciate it.
Ed: Thanks, Richie. It was nice to meet you.