Failure Is an Option: The Way To High-Performance Innovation

The three keys to innovation are to seek new ideas, test these ideas on a scale where failure is survivable, and continuously monitor these trials for feedback. The three keys come from Tim Harford’s book, Adapt: Why Success Always Starts With Failure. Harford argues that the world is too complex for top-down “big project” innovation based purely on expert judgment. The best path to innovation is to try a lot of ideas simultaneously (even if they contradict each other), build in robust feedback loops, and use the winning ideas to start a new round of trials.

Harford’s three keys are not a new method of innovation; they describe the oldest method of innovation around – evolution. Nature is continually creating variations of species and then selecting the species that best survive current conditions. Harford applies that concept to organizations to see if a similar process determines which companies succeed and which close. The organizations that best survive a continually changing business environment combine incremental improvement with the occasional long-shot idea to propel them into a better part of the business landscape ahead of their competitors.
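To make that loop concrete, here is a minimal sketch in Python of the variation-selection cycle: pilot many ideas at a survivable scale, keep the few that score well, and seed the next round with variations of the winners. Everything here (the idea-as-a-dict representation, the vary and score_in_pilot names, the toy parameters) is my own illustration, not anything from Harford’s book.

```python
import random

def vary(idea):
    # Hypothetical "variation" step: nudge one numeric parameter of an idea,
    # where an idea is just a dict of parameters for this sketch.
    key = random.choice(list(idea))
    tweaked = dict(idea)
    tweaked[key] = idea[key] * random.uniform(0.8, 1.2)
    return tweaked

def adapt_cycle(ideas, score_in_pilot, rounds=5, keep=3, variants=4):
    """Variation-selection loop: pilot many ideas at a survivable scale,
    keep the winners, and use them to start a new round of trials.
    score_in_pilot stands in for whatever feedback loop measures results."""
    winners = list(ideas)
    for _ in range(rounds):
        # 1. Test every idea on a small, survivable scale and gather feedback.
        scored = sorted(ideas, key=score_in_pilot, reverse=True)
        # 2. Most pilots are expected to fail; keep only the few that worked.
        winners = scored[:keep]
        # 3. Use the winning ideas to seed the next round of trials.
        ideas = [vary(w) for w in winners for _ in range(variants)]
    return winners

# Toy usage: a dozen pilot programs described by two parameters, scored by
# a made-up metric that prefers a budget near 6 and a staff size near 3.
pilots = [{"budget": random.uniform(1, 10), "staff": random.uniform(1, 5)}
          for _ in range(12)]
best = adapt_cycle(pilots, lambda i: -abs(i["budget"] - 6) - abs(i["staff"] - 3))
```

The point of the sketch is only that most pilots are expected to fail and the process still moves forward, which is exactly the attitude toward failure the rest of this post argues for.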

So, what does this have to do with government agencies? Harford flatly states this innovation method will not work in government agencies because of several barriers. First, there is not enough time for political appointees to see these experiments through before a new administration comes into office. Second, the process depends on many failures for innovation, but failure carries a high stigma in government. Third, it is difficult to demonstrate that a policy innovation had an effect because government lacks robust feedback loops.


Harford’s opinion about government innovation is overstated. There have been numerous government projects that have been extremely innovative: the Hoover Dam, rural electrification, the Interstate Highway System, the Moon landings, the Space Shuttle, the Internet, and so on. When you examine how these projects were developed, you see that the agencies behind them tried many ideas and learned from those trials. NASA has a fantastic knowledge management culture, and DARPA’s successful record of innovation is built on trying many long-shot ideas at once.

What holds the government back from being even more innovative is the stigma of failure. Many agency cultures are too cautious because of constant external scrutiny and the internal cultural practices of not sticking your neck out and waiting out the latest change effort. Often, this caution is well-warranted. Many people depend on government agencies, and thus agencies cannot fail in their primary missions of delivering Social Security checks, defending the nation, or enforcing laws and regulations.

But failure to innovate will also lead to mission failure for agencies. In the sixth chapter, Harford describes how the 2008 economic meltdown was inevitable, given the tight coupling of economic institutions and the government’s failure to prevent financial organizations from becoming too entangled. He argues that accidents will inevitably occur in any complex, tightly coupled system and that our failure-prevention efforts often only increase the probability of failure. What is needed are the twin strategies of placing buffers between parts of the system and setting up feedback loops that warn us of emerging failures.

The government must continually innovate so it can continuously deliver on its mission. That means the culture must change so agencies accept the small failures that teach them how to avoid the massive failures that cripple an agency and harm the people it serves. Whether we call it “experimentation,” “pilot tests,” or some other euphemism, the better the government is at innovation, the better it can serve its citizens.

Decision Intelligence Plus Knowledge Management Plus Foresight

I’ve just started reading Link: How Decision Intelligence Connects Data, Actions, and Outcomes for a Better World. It’s a great read, and I encourage you to learn more about the emerging field of decision intelligence.


As I read the book, I’m taking notes that will eventually turn into a model combining knowledge management and foresight with decision intelligence. There are some powerful parallels here, and decision intelligence seems well suited to knowledge automation.

How to Fail at Developing Training Courses and Products

I’m reading the second edition of Marty Cagan’s Inspired: How to Create Tech Products Customers Love. In chapter six, Cagan describes what he believes are the root causes of failed product efforts. As I read the chapter, I could see parallels to bad training programs and courses. Let’s work through the list:

Ideas – Ideas can come from internal stakeholders or executives. Sometimes, ideas come from customers. Wherever the ideas come from, there is usually no strategic vision or mission to help determine which ones to implement. Even when there is a strategic vision or mission, many organizations lack a way to assess which ideas are the best to pursue.

“Biz Case” – But let’s say there is a way to objectively determine the best ideas to pursue. An idea is suggested, and then management wants to see a business case. The purpose of the business case is to estimate how much the design will cost and how much money the idea will make. The problem is that it is far too early to know either. Other than past performance from similar courses or programs, there is no data to justify the projections in the business case.

Roadmap – After an optimistic business case, marketing and sales hurry to list features that will attract customers. Cagan writes that the Roadmap phase runs into two inconvenient truths. The first is that half of the ideas will not work. The second is that many of the features that do work will take several iterations to get right.

Requirements – The Roadmap features drive the requirements, and this is when the instructional design team is finally brought in. Design decisions that should have been made by the instructional design team at the beginning of the process are instead made halfway through, after the major feature decisions and business requirements have already been set.

Design, Build, and Test – Assumptions made in the business case and the Roadmap have come back to haunt the team. Customer feedback is giving mixed signals, and the instructional design team is most likely fighting with the marketing team. I can tell you from experience that clashes between the marketing team and the instructional design team are brutal and counterproductive.

Deploy – Now is the time to deploy the training program and/or course(s). As is usual practice, the evaluations are added on at the last minute and without much thought. Typically, the evaluations are Kirkpatrick Levels One and Two, which measure whether the learner liked the training and whether the learner believed they learned anything. If you are lucky, there may be an attempt at Kirkpatrick Level Three, which is often a survey of the learner’s supervisor to see if the learner’s behavior has changed.

SAM Model

The above is why I moved from standard Instructional Systems Design (ISD) to the Successive Approximation Model (SAM). Like agile project management, SAM uses iterations to prototype the programs and courses. Each iteration is checked against customer demands and refined as the instructional design team gathers feedback. Having built courses using traditional ISD, I much prefer SAM. I believe you will too once you have used it to create a training program or course that meets your learners’ needs.
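As a rough sketch of that rhythm (not Allen’s formal SAM phases, and with callables I made up for illustration), the loop is essentially: build a small prototype, review it with learners and stakeholders, and fold their feedback into the next approximation.

```python
def sam_style_iteration(build_prototype, gather_feedback, revise, max_rounds=3):
    """Sketch of an iterative prototype-review-refine cycle in the spirit of SAM.
    The callables are placeholders for whatever your team actually does:
      build_prototype() -> a rough draft of the course,
      gather_feedback(prototype) -> issues raised by learners and stakeholders,
      revise(prototype, issues) -> the next, better approximation."""
    prototype = build_prototype()
    for _ in range(max_rounds):
        # Review each approximation with real learners early, not just the design team.
        issues = gather_feedback(prototype)
        if not issues:
            break  # close enough to move toward development and rollout
        prototype = revise(prototype, issues)
    return prototype
```

The contrast with waterfall-style ISD is the point: feedback arrives every round instead of after deployment.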

Eight Reasons Why Your Collaboration System Is Failing

At the beginning of this year, I swore off using Slack. My resolution amazed my friends, who extolled the virtues of Slack. But Slack isn’t the first collaboration app to be proclaimed the “next big thing.” I remember back in the early ’90s when computer-supported cooperative work applications were all the rage (remember when Lotus Notes was first rolled out?). Organizations threw a lot of money and resources at early collaboration systems, but many were failures from the beginning.

The failure of many new collaboration systems to catch on was perplexing because software packages for individuals and organizations were doing well. What was it about developing software for groups that made it so different from developing software for individuals and organizations?

In 1994, Dr. Jonathan Grudin published an article that answered that question with the simple observation that groups are just different from individuals and organizations. How they are different is explained in his eight challenges for groupware developers:


Who Does the Work and Who Gets the Benefits? Ideally, the labor of operating and maintaining the groupware application should be spread roughly equally among the group members. That ideal division of labor is rarely the case. Consider a project management application that team members must update regularly with progress reports, performance data, and other information. A good deal of the team members’ time goes into compiling information and feeding the system, while the project manager spends a minimal amount of time reading the reports the system generates. The team members see only a burden, so they soon avoid the extra work; the reports degrade, the project manager stops relying on the system for information, and soon no one is using the software.

Critical Mass of Users. The collaboration software field is filled with many competing platforms. Many offer similar features, and each has an enthusiastic community of supporters. In large government agencies, you can see several collaboration systems in various pockets of the organization that don’t communicate outside of their pocket. Ironically, the systems that exist to promote collaboration often reinforce organizational silos as each group argues that its system is the best solution.

Social, Political, and Motivational Factors. Dr. Grudin gives a great example of this challenge when he describes the failure of meeting management software. It assigned meeting rooms based on priority but quickly became useless because no one wanted to admit that their meeting was anything but “high priority” (see the sketch after this list). As Dr. Grudin explains, collaboration software can only model a rational workplace, but actual workplaces are much more complicated because of organizational culture.

Exception Handling. We rarely work exactly the way our documented processes describe. Collaboration software built only around documented office procedures is too rigid to handle the flexibility that real work frequently requires. Just think of how often you don’t have a typical day at work and have to improvise a workable solution. Now, imagine trying to program that into software.

Decreasing Communication and Coordination Load. Organizations search for ways to reduce the communication and coordination needed to do the job. How often have you said that you could get more done if you were not interrupted so often? Of these interruptions, how many were due to email, phone calls, a colleague stopping by to talk, etc.? Sometimes you can over-collaborate, and this often results from poorly designed groupware.

Hard to Evaluate Groupware. It is challenging to test groupware because group dynamics are so hard to replicate. It can take several weeks of careful observation to understand how a group works, and software designers rarely have the time or expertise to evaluate how their software will aid collaboration. Often the groupware vendor blames poor adoption on inadequate user training and ships the same software with better tutorials and help aids, never realizing that the fundamental problem is that people don’t like collaborating the way the system forces them to collaborate.

Intuitive Decision Making. Because of the nature of our work, we often must decide based on little evidence, and thus we rely heavily on our intuition. Groupware applications rarely support intuitive decision making but force users to input significant data so a fully reasoned decision can be made. Often, we do not have the necessary data, and a quick decision must be made. Thus, we abandon the groupware application to use a simple spreadsheet or other individual application to support our intuition.

Managing Acceptance of the Groupware. Too often, I have seen a collaboration solution launched where the users are expected to adapt themselves to how the software works rather than the software adapting to the way the group works. A collaboration system at my work is universally despised because it practically handcuffs a group of users to a cumbersome, protracted, and painful process. I’ve only used the system once, but that was enough to make me avoid ever clicking on the program icon again.
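To make the third challenge (social, political, and motivational factors) concrete, here is a toy Python version of a priority-based room scheduler along the lines of the one Dr. Grudin describes; the data and field names are my own invention. Once every request self-reports “high” priority, the sort contributes nothing and the scheduler quietly degenerates into first-come, first-served.

```python
from collections import Counter

PRIORITY_RANK = {"high": 0, "medium": 1, "low": 2}

def assign_rooms(requests, rooms):
    # Sort by self-reported priority; Python's sort is stable, so ties
    # fall back to submission order, i.e. first-come, first-served.
    ranked = sorted(requests, key=lambda r: PRIORITY_RANK[r["priority"]])
    return {req["meeting"]: room for req, room in zip(ranked, rooms)}

# Everyone claims top priority, because nobody will admit otherwise.
requests = [
    {"meeting": "budget review", "priority": "high"},
    {"meeting": "weekly status sync", "priority": "high"},
    {"meeting": "offsite planning", "priority": "high"},
]

print(Counter(r["priority"] for r in requests))   # Counter({'high': 3})
print(assign_rooms(requests, rooms=["Room A", "Room B"]))
# {'budget review': 'Room A', 'weekly status sync': 'Room B'}
# The "rational" priority model never changes the outcome, which is the
# social dynamic the software's designers did not account for.
```

The software is not buggy; its rational model of the workplace is simply defeated by how people actually behave.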

Despite these principles being 25 years old, I still see the same mistakes repeated in today’s collaboration tools. I also see companies that have put these principles into practice and made excellent collaboration software that has endured and grown in popularity. I suspect that Google’s engineers must have memorized these principles when they developed Google Docs.

I leave a final exercise for the reader: how many of these principles does SharePoint violate (if any)? Or does SharePoint violate new principles of collaboration software?
