Rocky Mountain Information Security Conference (RMISC)

I recently attended the Rocky Mountain Information Security Conference (RMISC), an impressive and unique gathering that prompted several observations worth sharing.

First, about the conference. Looking around the room, I saw about 1000 attendees. This conference is in its 10th year, and was started by the Denver chapters of two Information Security associations: the Information Systems Audit and Control Association (ISACA) and the Information Systems Security Association (ISSA). While it may have started small, there is nothing small about two local chapters holding a conference of about 1000 attendees.

What brings so many attendees to a conference held by local chapters? It seems there are at least two drivers: a large community of security practitioners in the area, and a strong program. Not only does the greater Denver area have a lot of companies and professionals working in Information Security, but the profession is also a critical contributor on many hot topics in engineering and society. Further, this conference brought some important names in InfoSec to the keynotes: John McAfee, Gene Spafford, Dave Cullinane, and Chris Wysopal. The technical content was sound as well, with four sessions in each of eight tracks, for 32 separate presentations by researchers and practitioners, ranging from use-case experiences to emerging concerns in InfoSec.

But there were several points of note that struck me while attending this conference.

  • Reliability and InfoSec are more than kindred spirits: the reliability community should have been at the forefront of InfoSec, and should have driven its progression, but it’s not too late to help. I say this because so much of what was discussed at this conference, by the keynote presenters and the contributed presentations alike, was almost identical to what I saw being discussed a few decades ago in reliability. And many of the techniques used to mitigate InfoSec issues are adopted from tools born out of the reliability community. We’ve seen this happen time and time again, of course. The general skills of reliability are adopted by a context and profession that needs them, and adapted to its own use. Unfortunately, the reliability experts aren’t always coming along to help speed the development and share the knowledge. I witnessed a large room of practitioners discuss ways to capture risk sources in a risk assessment framework that was no different from an FMECA (a minimal scoring sketch appears after this list). But the discussion was about the mechanics of what works, and an experienced reliability engineer could have provided the answer before the question even came up, well before the first attempt to capture risk in an InfoSec context.
  • When corporations truly need a skill set, and see clearly the value that skill set contributes to their business, they hire people with that skill set in large enough numbers to support a community. Denver and InfoSec are a clear example. How did that happen? Where was the tipping point? And how can the reliability community learn from it, or from our own examples? While members of the IEEE Reliability Society may clearly see that reliability is the mechanism for developing research into marketable products, and for engineering better generally, it is rare to see any local community with a large number of researchers or professionals who see themselves as working in reliability. There is a disconnect somewhere.
  • Local chapters can do big things, like hold a quality conference with 1000 attendees. It takes a strong community to do that, with corporate sponsors, and relevant program content. But it can be done, and done well. RMISC is a great example of that. Knowing what is possible, how do we help our local chapters take steps toward that level of growth?
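To make the FMECA parallel from the first point concrete, here is a minimal sketch of FMECA-style risk ranking applied to InfoSec risk sources. The risk items and the 1-10 scores are hypothetical, chosen only to illustrate the Risk Priority Number (RPN) mechanics that both communities end up reinventing.

```python
# Toy FMECA-style ranking of InfoSec risk sources.
# Items and scores are hypothetical, for illustration only.
risks = [
    # (risk source, severity, occurrence, detection difficulty), each scored 1-10
    ("Phishing credential theft", 8, 7, 4),
    ("Unpatched server exploit",  9, 5, 3),
    ("Insider data exfiltration", 9, 3, 8),
]

# RPN = severity x occurrence x detection; higher means address it sooner.
for name, sev, occ, det in sorted(risks, key=lambda r: r[1] * r[2] * r[3], reverse=True):
    print(f"RPN {sev * occ * det:4d}  {name}")
```

The mechanics are identical whether the rows are component failure modes or InfoSec threat sources, which is exactly the point.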

One idea seems to be common among these points: partnerships. As we recognize that the market for our capabilities is broad and interdisciplinary, we can spread value more widely and grow in very important ways. I would like to find ways for the Society to do more outreach to other disciplines, and to support local chapters in expanding their horizons as well. By finding opportunities to add value outside our immediate disciplines, we spread knowledge, add value, and grow the community. While it can be done at all levels of interaction, it has to be done locally.


Expanding the Customer Base and Product Offerings through Emerging Communications Technologies

Software Defined Networks, Network Function Virtualization, and the Resource Decoupling Disruption

Software Defined Networking (SDN) and Network Function Virtualization (NFV) are two trends in the Information and Communications Technologies (ICT) industry that are relieving constraints for carriers, enterprises, and customers alike. Most of the deployments are limited to data centers, Cloud architectures, and Software Defined Wide Area Networks (SD-WAN). As the advantages from these technologies are proven, their applications will spread further. And as the technology becomes more common, its acceptance will spread as well.

As we experience these waves of change in the industry, it is important to understand the value, and the responsibility, that comes with these trends. As we gain deeper control of network capabilities, we gain the responsibility to manage those capabilities too. As with any wave of change in any industry, those who do not move will lose business, and those who can adjust will have the advantage.

There are several use cases described by standards bodies and vendors alike that are driving adoption now. But what is most interesting at this time, early in 2016, are the applications that are yet to come. Those who see the advantage and position themselves now will hold the prime real estate in the new industry.

New Use Cases Emerging with Yield Management

Here are two new use cases that are about to be discovered and realized by those in a position to create them. These applications have one thing in common: yield management.

  1. NFV – An NFV provider can be the seller of network functions on demand, for those who can access the seller’s store, and for those products the seller can resell. Though there are many models for making this a profitable business, a simple one is a network provider placing a data center on its network so that all its customers can access network functions as needed, on demand. As part of a premium service, special firewalls, walled gardens, deep packet inspection, Virtual Private Network (VPN) management, and WAN acceleration capabilities can all be provided. These functions can all be shared resources across the customer base, and used only as needed. The usage can be orchestrated through a customer portal, automatically triggered according to Service Level Agreements (SLA), or both. Those who can access the network functions can now be customers, and can afford products they otherwise would not be able to purchase. And the owner of the network functions can share them across more customers, getting higher utilization, and therefore more revenue.
  2. SDN – At the same time, a network provider can offer bandwidth on demand for customers and potential customers who can reach its network, so a provider with capacity at key points can manage that capacity across multiple customers at a premium price. Again through an orchestrator product, SLAs can trigger automatic adjustments to the services, or customers can orchestrate their service through a portal. The capacity of the network can be dynamically shared, with pricing that fluctuates. As customers can more freely manage their usage, each individual customer can spend less on capacity than would otherwise be required, and adjust their bandwidth on a given network as demand changes. As a result, more customers can be served with existing capacity. Each customer may spend less than otherwise, but the provider will have customers it otherwise would not be able to serve at the previous prices, and as a result make more revenue off of the same capacity compared to a traditional WAN. A minimal sketch of such demand-based pricing follows this list.
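To illustrate the yield-management idea in the SDN case, here is a minimal sketch of utilization-based spot pricing for on-demand bandwidth. The pricing rule and every number in it are hypothetical; a real orchestrator would price against SLAs, demand forecasts, and contracts rather than a single formula.

```python
def spot_price(base_price: float, utilization: float, sensitivity: float = 2.0) -> float:
    """Raise the per-Gbps price as link utilization climbs past 50%.

    base_price is the price per Gbps-hour at low utilization. The rule itself
    is a made-up example of demand-based pricing, not any provider's tariff.
    """
    return base_price * (1.0 + sensitivity * max(0.0, utilization - 0.5))

for util in (0.30, 0.60, 0.80, 0.95):
    print(f"utilization {util:.0%}: {spot_price(10.0, util):6.2f} per Gbps-hour")
```

The point is simply that capacity which would otherwise sit idle can be sold cheaply, while scarce capacity commands a premium.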

Just as the airline industry experienced in the 1990s, the telecommunications industry is poised to experience a disruption as a result of yield management. The airlines that could price according to demand, so that they could fully book their flights, were able to increase utilization of their resources, gain market share as a result, and therefore increase their competitive advantage. Margins shrank across the industry, but the battle was inevitable, and those who could differentiate in this environment became leaders. The capabilities presented by NFV and SDN enable yield management of network functions and network capacity in ways never before possible. These resources now expire rapidly over time, as yesterday’s capacity cannot be sold today, and pricing can fluctuate rapidly with demand, because demand can be met rapidly through orchestration.

Differentiation will Win

But one key thing is different in the ICT industry compared with the airlines: differentiation. By decoupling network functions and network capacity from the orchestration and control of those resources, we can be much more flexible. An airline customer could never order the service of one airline on an airplane provided by a different company, on a route run by yet a third company. But in a fully capable NFV and SDN enabled ICT environment, a customer could potentially order network capacity from one company, with services orchestrated by a second company, using network functions provided by several other companies, all on demand, with rapidly fluctuating usage over short periods of time, as needed.

A key differentiator is to control resources needed by many potential markets, and to provide wide access to them, increasing demand and enabling the seller to charge more. A network provider that owns resources many others need can charge a premium for that capacity, and can increase revenue by raising utilization, offering the capacity to a larger number of customers competing for the same supply. Adding more network functions for this customer base makes those services easy to sell as well. The net result is an increase in the customer base, and a broader set of products to sell to it.


November 30, 6pm: IEEE Reliability Society Denver/Pikes Peak Chapter meeting

You are cordially invited to attend our November chapter meeting on November 30, starting at 6pm. The event will be held at the CU Student Commons Building, University of Colorado, Denver Downtown Campus. Detailed room info and parking instructions will be sent out soon.

We look forward to meeting you there!


Nondestructive Evaluation (NDE) refers to methods used to test a part, material, or system without impairing its future usefulness. Structural health monitoring (SHM) is a relatively new paradigm (the term SHM became popular after the early 2000s) that offers an automated method for tracking the health of a structure by combining damage detection algorithms with structural monitoring systems. Both terms are generally applied to nonmedical investigations of material integrity and have recently been extended to biomedical applications. The research hypothesis of LEAP, the research group established and led by Dr. Deng, is that by integrating advanced modeling, breakthroughs in sensor design, and new data analytics methods, the optimal NDE solutions can be found. The value of LEAP’s research lies in ensuring the safety of complex engineered systems and critical infrastructures, such as aircraft, nuclear reactors, bridges, pipelines, and ships, which are vital to society’s industrial and economic prosperity. Recent advances in innovative NDE imaging and sensing for critical infrastructure safety will be covered and discussed in this seminar talk.

Bio. Dr. Yiming Deng is an assistant professor in the Electrical Engineering Department at the University of Colorado Denver. He obtained his Ph.D. degree at Michigan State University in 2009 and his B.S. degree at Tsinghua University, Beijing, China, both in Electrical Engineering. Dr. Deng has over 10 years of experience in the development of sensors and instrumentation and innovative numerical modeling for NDE and SHM. His current research focuses on novel NDE sensing and imaging systems, involving the development of nondestructive and nonintrusive sensing arrays, high-fidelity multi-physics modeling for understanding imaging physics, and experimental prototyping and validation. He has secured over $1.6M in external research funding from USDOT/PHMSA, NIH, DOD, NSFC, ANST, and private industry. Dr. Deng is a Senior Member of IEEE. He also serves as a panelist for NSF, DOE, and HKRGC, and as an ad-hoc reviewer for over 20 international journals.


Good Design Rules

Axiomatic Design is a general purpose method for designing engineering systems that are highly effective (among other things). The axioms described in this approach are essentially as follows.

1) Maintain independence of the functional requirements so that you can set a design parameter to control each one. When that isn’t possible, try to create a design in which, if a design parameter impacts more than one functional requirement, there is another design parameter that can correct the other(s) independently. The idea is to have a design parameter that can be used to precisely meet each functional requirement, so you need at least one design parameter for each functional requirement.
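In the usual Axiomatic Design notation (due to Suh), this first axiom is about the shape of the design matrix that maps design parameters (DPs) to functional requirements (FRs). A minimal two-requirement illustration:

```latex
% Uncoupled design: each design parameter controls exactly one functional
% requirement, the ideal case under the independence axiom.
\[
\begin{Bmatrix} FR_1 \\ FR_2 \end{Bmatrix}
= \begin{bmatrix} A_{11} & 0 \\ 0 & A_{22} \end{bmatrix}
  \begin{Bmatrix} DP_1 \\ DP_2 \end{Bmatrix}
\]
% Decoupled design: DP_1 affects both requirements, but DP_2 can still be
% used to correct FR_2 independently if the parameters are set in order.
\[
\begin{Bmatrix} FR_1 \\ FR_2 \end{Bmatrix}
= \begin{bmatrix} A_{11} & 0 \\ A_{21} & A_{22} \end{bmatrix}
  \begin{Bmatrix} DP_1 \\ DP_2 \end{Bmatrix}
\]
```

A full matrix, with nonzero terms off the diagonal in both directions, is a coupled design, which is what the axiom tells us to avoid.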

2) Minimize the information content in your design, so you have the simplest design, which tends to make it cheaper to build, and sometimes more reliable too.

In the Information and Communications Technology (ICT) industry, we’re experiencing a byproduct of these axioms. Software Defined Networking (SDN) and Network Function Virtualization (NFV) are technology trends that are solving problems in the industry. Their effectiveness comes from a decoupling of design parameters. By separating the hardware from the software in NFV, or the control plane from the forwarding plane in SDN, the network engineer gains an additional degree of freedom, an added design parameter. As a result, what was once coupled and difficult to control is now more precisely controllable. A better design for networks is possible, in line with the first axiom above.

And by using commodity hardware (such as Cloud and data center technologies) for many parts of our design, we are following the second axiom as well.

But I believe there is at least one more axiom at play here. The two axioms of design assume the functional requirements are identifiable, so the design can be set to meet them. But ICTs (and arguably many other things as well) have to meet changing requirements. At least over time, if not immediately, use cases, applications, and the services they must support are in flux. Nobody knows what the next killer app will be, but we know there will be one.

Therefore, I suggest there is at least one more axiom to add here.

3) Whenever possible, maximize future options.

In other words, be flexible to change. Do not commit to proprietary solutions if you can avoid it. And if you must go with an inflexible option, make sure you can separate it from your design should a better option appear in the future. A system or network design that is open to future options is more likely to meet future requirements, survive catastrophic events, grow to meet unpredictable future demands, and be maintainable against uncertainty. It is always good to have a design with the optimal set of future options, all else being equal. And often, as any financial analyst will tell you, a real option has value, so sometimes options are worth purchasing. In this case, that means it is sometimes worth spending more on a design to have that future option. (Real Options Analysis is an approach that applies here, and I encourage everyone reading this to learn more about it.)
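As a toy illustration of why a real option has value, consider a design decision under an uncertain future requirement. Every number below is made up; the point is only that the flexible path avoids the downside scenario, and that avoided loss is the value of the option.

```python
# Hypothetical payoffs for a design decision under an uncertain future requirement.
p_need = 0.6                # probability the new requirement materializes
payoff_if_needed = 120.0    # value if we committed and the requirement arrives
payoff_if_not = -40.0       # sunk cost if we committed and it never arrives

commit_now = p_need * payoff_if_needed + (1 - p_need) * payoff_if_not
keep_option = p_need * payoff_if_needed + (1 - p_need) * 0.0   # defer: walk away at no cost

print(f"commit now:          {commit_now:6.1f}")
print(f"keep option open:    {keep_option:6.1f}")
print(f"value of the option: {keep_option - commit_now:6.1f}")  # what flexibility is worth
```

If retaining that flexibility, say by avoiding a proprietary lock-in, costs less than the option value computed here, it is worth paying for.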

This idea applies not only to the system design, but also to the process of designing the system. When designing a system, do not commit a design parameter until you must. That way, you always have the most options available to you for absorbing schedule and requirement disruptions.

I suggest we call it the robust design axiom.


Shiny Apps using R Studio

Shiny Apps are a fun, easy way to share results, analyses, and even capabilities using R and a little bit of coding. Some examples I have created are in these links. The first three run right off the link, but the last one needs a network model uploaded to run. If you need the example files to try the last one, just let me know with a post and I will share them with you.



IEEE Reliability Society Denver Section Meeting on September 8th, Golden, CO

Upcoming Meetings and Events

2014 September Meeting: “IT and Telecommunications Reliability Concerns and Assurance Science”

Speaker: Jason Rupe, Ph.D., Sr. Member IEEE

Date and Time: Monday, September 8th, 2014, 6:00pm – 8:00pm

Location: Petroleum Hall in the Green Center on the campus of Colorado School of Mines in Golden, Colorado USA
Building #27 on the map here
This talk will outline the book chapter I am currently writing, which is intended to cover IT and Telecommunications Reliability Concerns and the Assurance Sciences. I’ll outline basic concepts familiar to us all, discuss how to break networks apart into pieces we can understand, how to handle those pieces conceptually, and how to pull them back together into an understanding of networks we can use to do great things. Then I’ll handle some loose ends and leave plenty of time for discussion, feedback, and anything else we want to cover as a group.


Explaining Reliability

Most everyone knows what reliability means in its precise sense, and we can refer to definitions almost anywhere that are all consistent. But I know that some of us who work in the reliability fields often struggle to explain what the field is about, and what is included in it or excluded from it.

I have a proposed definition for the reliability field: the study, management, and reduction of the variability in supply to meet uncontrolled demand.

Now some people will react to that definition by saying something like “I am a reliability engineer, but I don’t do all that.” But I think what they do will fit within that definition.

Others will say that definition is rather broad, and doesn’t address components and systems, which is really what reliability as a profession addresses. But I suggest that those components and systems, and indeed even those systems-of-systems, networks, and other such creations, are all supplying some function or capability, or are themselves being supplied for one, and therefore the focus is really on the supply. That supply is impacted by our creations, so we often turn toward those creations to address the real issue, which is variability in supply.
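Here is a minimal sketch of that framing: demand is uncontrolled, and reliability work shows up as reducing the variability of supply. The distributions and numbers are arbitrary; the point is only that shrinking supply variability raises the probability of meeting demand without adding any mean capacity.

```python
import numpy as np

rng = np.random.default_rng(seed=1)
demand = rng.normal(loc=100, scale=15, size=100_000)   # uncontrolled demand

# Same mean supply, progressively less variability (the reliability lever).
for sigma in (25, 10, 2):
    supply = rng.normal(loc=110, scale=sigma, size=100_000)
    print(f"supply sigma {sigma:2d}: P(supply >= demand) = {np.mean(supply >= demand):.3f}")
```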

As the managing editor for IEEE Transactions on Reliability, and as a consultant in the networks and systems reliability field (among other fields I support), I think I’ll start testing this definition, or similar forms of the idea, with people outside the engineering and scientific fields, to see how well it conveys what the profession is really about. I suspect it will work better than the definitions we often point to in dictionaries and textbooks.


Net2Plan is worth a look

I recently found a network planning tool called Net2Plan that is publicly available under a GNU public license, and I suggest that anyone working on telecommunications or IT networks give it a try. A copy is, of course, freely available online.

This is the tool I always envisioned creating if I ever had the time and some programmers far more skilled than I am. These guys in Spain did a great job with it too. Drs. Pablo Pavón Mariño and José Luis Izquierdo Zaragoza approached the problem from the right direction, creating a tool that teaches as well as provides valuable results. Rather than use off-the-shelf tools that must be purchased, this modeling environment is “publicly-available open-source” software “published under LGPL license,” which means students and users can solve problems using a very effective tool without relying on a third party to maintain it. And because it is completely open and configurable, it is far more useful to the trained engineer, plus it requires a level of understanding of your network, problem, and solution that a COTS tool may not require. Some COTS tools are very good at leading you to a solution, but without helping you understand the problem. Net2Plan is built to ensure that doesn’t have to happen.

You can configure your own optimization problems, build your own reporting engines, and even create your own models within the environment. While it is set up to work with the optimization engine GLPK, you can easily tie it into your own, and it is set up to work with others, including the common CPLEX engine. The authors also contributed the Java Optimization Modeler (JOM), which allows Java programs, Net2Plan included, to interface with these engines. If you can code well in Java, you can open existing algorithms to see how they work, and build your own versions. The authors even have a YouTube channel with a video showing how to do this. Plus, the file formats for loading and saving networks are human readable, and easy to parse into other models you may build in other software.
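As a quick example of how easy the saved networks are to work with, here is a sketch that reads a saved design into Python. I am assuming the saved file is XML with node and link elements; the exact element and attribute names may differ in your version of Net2Plan, so inspect a saved file first and adjust.

```python
import xml.etree.ElementTree as ET

# Assumes an XML-based saved design with <node> and <link> elements;
# element and attribute names are a guess, so check a real saved file first.
tree = ET.parse("exampleDesign.n2p")
root = tree.getroot()

nodes = root.findall(".//node")
links = root.findall(".//link")
print(f"{len(nodes)} nodes, {len(links)} links")

for link in links[:5]:
    print(link.attrib)   # e.g. origin node, destination node, capacity
```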

I installed the environment easily on a Windows 7 system, and have been testing routines since. To build your own highly capable version in your Windows system, follow these directions.

  1. Download and install the latest Java JDK. As of this writing, jdk-7u55 is the latest, and works nicely, though the Net2Plan website says you can use a few generations back with success.
  2. Download and unpack GLPK. I used winglpk-4.54. Simply unzip it where you want to keep it, and remember the path to the glpk_4_54 file you will use for optimization (the 32-bit or 64-bit version, depending on your operating system).
  3. Download and install Net2Plan.
  4. Double-click the Net2Plan.jar file, or build a shortcut to wherever you plan to access it. Once open, go to File, then Options, and then the second tab on the window that opens, which holds the Solver options. Under the glpk library section, enter the full path to your GLPK instance. Click Save.

Once you have completed these steps, you should have a working version of Net2Plan complete with a link to the GLPK solver, which will work to run many of the included Algorithms for network planning analysis.

Note that some routines rely on the IPOPT solver, which I have not yet installed and tested. I expect that will come soon.

There are five different modeling interfaces to explore, but start with an offline model so you can see how to build a model in the GUI, then test out some of the algorithms and the reports you can generate from that model. Be sure to save your model for later, unless you used one of the many included models for testing. Once you have a good understanding of this offline modeling capability, explore the traffic matrix design capability, the resilience simulation models, the CAC simulation models, and the traffic simulation models.

Having this capability readily available is a great advantage. And if you are familiar with the subject, this tool will be very easy to use, plus help you explore designs and plan your own network. I encourage you to give it a try, even if you are just curious. I think you’ll have fun and learn something too!





Reliable information, reliable decisions, reliable systems, reliable teams

I don’t attend many meetings these days, but when I do, I’m reminded of the failure modes of meetings, where groupthink can take over, leading to poor decisions by the group. But like any reliable human system, the group can do some simple things to correct the problem before a failure.

Systems commonly trade off performance for reliability. In telecommunications networks, we can trade off latency for assured packet delivery through Forward Error Correction (FEC). Rather than make a hurried left turn into traffic, we can slow down and wait for a lower risk opportunity to make the turn. Group meetings are similar. We can slow down the meeting a bit by checking our assertions with facts and data.

In a group, we can take time to check our assumptions. Often, in the dynamics of a group meeting, people make assertions that may not be true, though widely held. The other day, I met with a large group of reliability experts, some of the best there are on the planet. I made a statement about the importance of a particular field of research, and one of the best experts on reliability, and on this specific field as well, questioned that assumption, asking for evidence. That challenge led to some serious thought. And while another participant offered some support for my statement before I was able to, the question was still provoking. What if my statement had not been well supported? How often do we check our ideas for support? How often does a group make a decision off of an assertion that lacks evidence? Fortunately, in our story, someone questioned the evidence, and the evidence was presented. But how often do groups not behave that way? I could assert that it happens most of the time, but I leave it to you to check your own experience. Fortunately, any of us can commit to taking on the role of questioning the assertions that members make, and taking the time to make sure decisions are evidence based.

When we are in charge of inviting attendees to a meeting, how often do we consider the group dynamics and try to design the right group for a decision? Sometimes we have to invite process owners, peers, interested parties, and others due to organizational requirements. But what if we considered the group to be a system or a network, and tried to design that team for reliability? What if we added redundant components to the group so that the right experts were in the room, along with the right points of view and the right support for the decisions the group needs to make? When the dynamics of a group are not ideal, try to build some redundancy into that group to help it succeed at its mission. If one person must participate but tends to disrupt the discussion with their own agenda, offer them a limited amount of time to address their concerns, and invite someone else who can address that need so that the meeting can focus on its main goal. Build redundancy into the group system. If one person you must invite always makes claims that you believe are not correct, gather the evidence that supports your point and meet with that person beforehand to address the disagreement, a form of proactive maintenance.

Not only can we structure our teams for reliability, but we can also use the dynamics of effective teams to create reliable systems. Think about the different roles that group members take, and consider analogies. The team leader usually acts to break ties. A redundant voting system needs an odd number of voting members to break ties in its decision process; a small sketch of the voting math follows. It is always good to have one person take notes, but a second is better in case the first misses a key point; redundancy is often important in any system. Think about the dynamics of effective teams you have been a member of, and how you can leverage that design in other applications.
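To make the voting analogy concrete, here is a minimal sketch of the standard k-out-of-n reliability calculation, with a made-up component reliability. Three voters with a 2-out-of-3 majority outperform any single voter, as long as the individuals are reasonably reliable and fail independently.

```python
from math import comb

def k_out_of_n(k: int, n: int, p: float) -> float:
    """Probability that at least k of n independent components (each reliability p) succeed."""
    return sum(comb(n, i) * p**i * (1 - p)**(n - i) for i in range(k, n + 1))

p = 0.95   # hypothetical reliability of one decision maker or component
print(f"single component:    {p:.4f}")
print(f"2-out-of-3 majority: {k_out_of_n(2, 3, p):.4f}")   # about 0.9928, better than one alone
```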

Back to the group, checking our assertions is almost always the right thing to do. I encourage everyone to ask for evidence behind any assertion presented in the group that might be important to the decision process for the group. Reliable information enables reliable decisions by the group. Reliable decisions can yield reliable designs, reliable systems, and reliable results.



I have a cold. Did I fail?

When a person gets ill, say from a cold like the one I have right now, we tend to slow down, maybe have trouble focusing, or otherwise struggle to perform certain tasks. Clearly we function, but not as well as we would like, and perhaps not as well as usual. This is a degraded state, clearly. But that doesn’t mean we failed, as in the usual binary-state sense of success or failure, functioning or failed, and so on.

Without our systems to tell us to slow down because we aren’t feeling well, we would run the risk of pushing ourselves beyond our limits, perhaps to the point of actual physical failure (say, hospitalized or bedridden), and perhaps to a state from which we cannot recover (death). But we have those feedback systems to tell us when to reduce the stress and take time to heal, or enter a repair state.

Prognostics and Health Management (PHM) is the engineering field that is working to develop these feedback systems. In some cases, repair requires replacement, so the indicators suggest a replacement instead of a repair. But these systems can indicate that an engineering system is ill in some sense, in a degraded state. While Telecommunications and IT systems have had this advantage for a very long time, in many ways and on many parts of these complex systems, not all engineering systems could be built cost-effectively with such monitoring. But with ubiquitous communications and nanotechnologies leading to inexpensive early warning systems, more engineering systems can take advantage of such solutions, leading to more reliable engineering systems.

When we get ill and are functioning in a degraded state, we may fail at certain tasks. From a mission point of view, we can fail due to being in the degraded state. What we are usually able to do, we can’t do while we’re ill. I might not make it to work if my cold gets bad enough. I’m only degraded, but the tasks I intended to do don’t get done, so the mission of work fails. That is, unless someone can take my place. As PHM develops more fully, we will be able to clearly determine when systems or parts of a system are ill, and replace them with like parts. The missions of those systems will therefore fail less often, with less cost for maintenance and fewer mission failures.
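A minimal sketch of that multi-state view, with entirely made-up transition rates: an Up/Degraded/Failed continuous-time Markov model, where PHM shows up as a higher repair rate out of the degraded state (fix it while it is only “ill”) and therefore less time spent failed.

```python
import numpy as np

# Hypothetical transition rates (per day) for an Up / Degraded / Failed model.
lam_ud = 0.02   # Up -> Degraded (onset of the "cold")
lam_df = 0.05   # Degraded -> Failed (pushing through it until collapse)
mu_du  = 0.20   # Degraded -> Up (early intervention, driven by PHM indicators)
mu_fu  = 0.10   # Failed -> Up (full repair or replacement)

Q = np.array([
    [-lam_ud,             lam_ud,   0.0    ],   # Up
    [ mu_du,  -(mu_du + lam_df),    lam_df ],   # Degraded
    [ mu_fu,              0.0,     -mu_fu  ],   # Failed
])

# Steady-state probabilities: solve pi Q = 0 with the probabilities summing to 1.
A = np.vstack([Q.T, np.ones(3)])
b = np.array([0.0, 0.0, 0.0, 1.0])
pi, *_ = np.linalg.lstsq(A, b, rcond=None)
print(dict(zip(["Up", "Degraded", "Failed"], np.round(pi, 4))))
```

Raising mu_du, that is, catching the degraded state earlier, shifts probability out of the Failed state, which is exactly the payoff described above.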
