Good Design Rules

Axiomatic Design is a general-purpose method for designing engineering systems that are, among other things, highly effective. Its axioms can be summarized as follows.

1) Maintain the independence of the functional requirements, so that each one can be controlled by setting a design parameter. When that isn’t possible, try to create a design in which, if a design parameter impacts more than one functional requirement, another design parameter can correct the other(s) independently. The idea is to have a design parameter that can be used to precisely meet each functional requirement, which means you need at least one design parameter for each functional requirement.
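In Axiomatic Design terms, this axiom is usually checked with a design matrix: rows are functional requirements (FRs), columns are design parameters (DPs), and a nonzero entry means that DP affects that FR. A diagonal matrix is uncoupled (ideal), a triangular one is decoupled (the "another parameter can correct the others" case), and anything else is coupled. Here is a minimal sketch of that check; the matrix values are made-up placeholders, not from any real design:

```python
import numpy as np

# Toy design matrix: rows = functional requirements (FRs),
# columns = design parameters (DPs). Nonzero entry = that DP
# affects that FR. (Illustrative values only.)
A = np.array([
    [1, 0, 0],
    [2, 1, 0],   # DP1 also disturbs FR2, but DP2 can correct it
    [0, 3, 1],
])

def classify(design_matrix):
    """Classify a square design matrix per Axiomatic Design:
    uncoupled (diagonal), decoupled (triangular), or coupled."""
    off_diag = design_matrix - np.diag(np.diag(design_matrix))
    if not off_diag.any():
        return "uncoupled"   # exactly one DP per FR, fully independent
    upper = np.triu(design_matrix, k=1)
    lower = np.tril(design_matrix, k=-1)
    if not upper.any() or not lower.any():
        return "decoupled"   # FRs can still be met by fixing DPs in order
    return "coupled"         # no ordering of DPs satisfies all FRs

print(classify(A))  # → decoupled
```

A decoupled matrix still satisfies the first axiom as stated above: set the design parameters in the right order, and each functional requirement can be met precisely.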

2) Minimize the information content of your design, so you have the simplest design that meets the requirements, which tends to make it cheaper to build, and sometimes more reliable too.
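Axiomatic Design gives "information content" a precise meaning: I = log2(1/p), where p is the probability that a design parameter actually satisfies its functional requirement, summed over all requirements. A quick sketch with made-up probabilities shows why fewer, more robust parts score better:

```python
import math

def information_content(success_probs):
    """Total information content in bits: sum of log2(1/p) over the
    probability p that each functional requirement is satisfied.
    Lower is simpler (and preferred by the second axiom)."""
    return sum(math.log2(1.0 / p) for p in success_probs)

# Illustrative placeholder probabilities, not measured values:
simple = information_content([0.99, 0.99])            # two robust parts
complicated = information_content([0.9, 0.9, 0.9, 0.9])  # four shakier parts

print(round(simple, 3))       # 0.029 bits
print(round(complicated, 3))  # 0.608 bits
```

The design with fewer, higher-probability elements carries less information content, which is exactly the kind of simplicity the axiom rewards.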

In the Information and Communications Technology (ICT) industry, we’re seeing these axioms at work. Software-Defined Networking (SDN) and Network Function Virtualization (NFV) are technology trends that are solving real problems in the industry, and their effectiveness comes from decoupling design parameters. By separating the hardware from the software in NFV, or the control plane from the forwarding plane in SDN, the network engineer gains an additional degree of freedom: an added design parameter. As a result, what was once coupled and difficult to control is now more precisely controllable. A better design for networks becomes possible, in line with the first axiom above.

And by using commodity hardware (such as Cloud and data center technologies) for many parts of our design, we are applying the second axiom as well.

But I believe there is at least one more axiom at play here. The two axioms of design assume the functional requirements are identifiable, so the design can be set to meet them. But ICT systems (and arguably many other things as well) have to meet changing requirements. If not immediately, then at least over time, the use cases, applications, and services they must support are in flux. Nobody knows what the next killer app will be, but we know there will be one.

Therefore, I suggest there is at least one more axiom to add here.

3) Whenever possible, maximize future options.

In other words, be flexible to change. Do not commit to proprietary solutions if you can avoid it. And if you must go with an inflexible option, make sure you can separate it from your design should a better option appear in the future. A system or network design that is open to future options is more likely to meet future requirements, survive catastrophic events, grow to meet unpredictable future demands, and remain maintainable under uncertainty. All else being equal, it is always good to have the design with the best set of future options. And often, as any financial analyst will tell you, a real option has value, so sometimes it is worth purchasing; in this case, that means it is sometimes worth spending more on a design to secure that future option. (Real Options Analysis is an approach that applies here, and I encourage everyone reading this to learn more about it.)
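The "real option has value" point can be made numerically. The sketch below uses made-up payoffs and probabilities (none of these figures come from the post) to compare a rigid design against a flexible one that costs more up front but carries an expansion option:

```python
# Illustrative real-options arithmetic with placeholder numbers:
# paying extra now for a modular design buys the *option* to expand
# later, which pays off only in the good demand scenario.

def expected_value(p_up, payoff_up, payoff_down):
    """Expected payoff over a two-scenario (up/down) future."""
    return p_up * payoff_up + (1 - p_up) * payoff_down

# Assume demand surges with probability 0.4.
# Rigid design: locked in, same hardware either way.
rigid = expected_value(0.4, payoff_up=100, payoff_down=60)

# Flexible design: costs 10 more up front, but if demand surges we can
# exercise the expansion option for a payoff of 160 instead of 100.
flexible = expected_value(0.4, payoff_up=160, payoff_down=60) - 10

print(rigid)     # ≈ 76: expected value without the option
print(flexible)  # ≈ 90: the option is worth more than its 10-unit price
```

Under these assumed numbers the option adds 24 in expected value for a price of 10, so the more expensive flexible design wins. Real Options Analysis formalizes exactly this kind of comparison.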

This idea applies not only to the system design, but also to the process of designing the system. When designing a system, do not commit to a design parameter until you must. That way, you keep the most options available to absorb schedule and requirement disruptions.

I suggest we call it the robust design axiom.

About Rupe

Dr. Jason Rupe wants to make the world more reliable, even though he likes to break things. He received his BS (1989) and MS (1991) degrees in Industrial Engineering from Iowa State University, and his Ph.D. (1995) from Texas A&M University. He worked on research contracts at Iowa State University for CECOM on the Command & Control Communication and Information Network Analysis Tool, and conducted research on large-scale systems and network modeling for Reliability, Availability, Maintainability, and Survivability (RAMS) at Texas A&M University. He has taught quality and reliability at these universities, published several papers in respected technical journals, reviewed books, and refereed publications and conference proceedings.

He is a Senior Member of IEEE and of IIE. He has served as Associate Editor for IEEE Transactions on Reliability, and currently works as its Managing Editor. He has served as Vice-Chair for RAMS, on the program committee for DRCN, and on the committees of several other reliability conferences because free labor is always welcome. He has also served on the advisory board for IIE Solutions magazine, as an officer for the IIE Quality and Reliability division, and in various local chapter positions for IEEE and IIE.

Jason has worked at USWEST Advanced Technologies, and has held various titles at Qwest Communications Intl., Inc., most recently as Director of the Technology Modeling Team, Qwest’s Network Modeling and Operations Research group for the CTO. He has always been those companies’ reliability lead. Occasionally, he can be found teaching as an Adjunct Professor at Metro State College of Denver. Jason is the Director of Operational Modeling (DOM) at Polar Star Consulting, where he helps government and private industry plan and build high-performing, reliable networks and services. He holds two patents. If you read this far, congratulations for making it to the end!

This entry was posted in Engineering Consulting, IT and Telecommunications, ORMS, Quality, RAMS - all the -ilities.