People maintain control in spite of our new machine overlords.
Operations Research (O.R.) projects often try to take control of the entire problem, and they fail when the engineer or manager with ultimate control cannot validate, verify, or sometimes even follow the solution the software recommends. So we add graphics and simple explanations of the results, hoping the person with the power gets comfortable and follows the recommendation. But when that person doesn’t have a Ph.D. in O.R., they still don’t completely trust the solution. Why is that?
- Sometimes it is because they know something the software doesn’t.
- Maybe it is because there are requirements or constraints they just can’t articulate.
- Perhaps it is because there are unpredictable events that the user believes could happen and would cause the software to do something very bad.
All these possible reasons, and more, make it difficult to trust the software. Even when no person, not even an expert, could do a better job of finding the right solution, the user still doesn’t trust it. So the solution is ignored again. I’ve seen this time and time again.
It seems time to try an alternate approach: use our O.R. skills to convert the problem into one the engineer or manager has a fighting chance to solve, and to make sense of the results. Use our applied mathematics skills to clarify the problem, not just optimize it. When O.R. succeeds, I contend, it is because the real work does exactly that: it converts the problem to help the person solve it, rather than trying to take the reins from the person.
After enough time validating the results and building trust, maybe they will ask for the decision to be automated. But not right away, not all the time, and never without an option to take back the reins.