Dec 16

Eleven Years Ago . . . Searching for a Winning Democratic Strategy

I was revisiting some blog posts I had written back in February of 2005. These posts were inspired by the disastrous loss by Kerry/Edwards. That was another election where the projections had Kerry winning - right up until he lost on election night. The usual pundits trotted out their pet theories to explain the loss and how the Democrats could win in 2008.

If you remember, 2008 was also an election where Hillary Clinton was expected to win easily.

Now, this election is unique. Hillary did win the popular vote - overwhelmingly. And, the vote margins in the key electoral college states are razor thin. I don't think that the recounts will overturn the election. However, I do think that Trump's margin of victory will shrink substantially.

When I have talked to colleagues about this election, I refuse to point to one cause or even a few causes. I believe that there are many causes that aligned in a highly improbable way to give Trump just enough votes to tip the key states.

If I had to pick a single cause, I would go back to what I wrote over ten years ago: the Democratic Party has run out of ideas.

In future posts, I will write about a strategy that will help the Democratic Party win the ideas race with the Republican Party.

I call it the ION Strategy - Independence, Opportunity, and National. Like the ion engine used on space probes, this strategy is slow at first but builds up speed over time to become the fastest method of propulsion we have today.

Jun 16

Bad Management Week 2016 - Drifting into Failure

Yesterday, I wrote about my research that led to the Framework for Analyzing Organizational Failure. Since I created the Framework back in 2005, I have seen it validated in a number of organizational failures. So, in 2010, I started work on expanding the paper into a book. During the course of my research, one of the leading thinkers in the field of failure analysis published a book updating many of the theories I used in creating the Framework. The basic components of the Framework still hold, but his concept of drift has led me to envision a Framework 2.0.

Dr. Sidney Dekker has written many influential books on failure analysis and has held several international teaching positions. His latest book, Drift into Failure (2011), is both a reflection on his past research and an argument that complexity theory has created a need for a new way of analyzing failure. His main argument is simple to understand: our organizations and technology have become increasingly complex, but our understanding of why things fail doesn't reflect that complexity (p. 7).

We are victims of a worldview in which we assume that people make rational choices, that every cause has a clear and direct effect, and that failures happen because of a "broken component" in the system and/or an irrational decision. In our hunt for the cause of failure, we look for the "bad actor" who broke the component in the system (Dekker, 2011, p. 3).

This worldview is dangerous because it blinds us to the complexity of organizations and technologies while leading us on a chase for someone to blame. Think of Enron, the BP Gulf Disaster, and the 2008 mortgage meltdown. The news was full of experts pointing their fingers at executives, brokers, buyers, and industry practices, all in a quest to find the bad actor who broke the part that brought down the entire system. Once we THINK we have found the bad actor or broken part, we believe we have fixed the problem. And then the next oil spill happens, another firm defrauds the public, or we face another financial crisis.

This is using hindsight as foresight, and that never works. In examining the BP Gulf Disaster and Enron, Dr. Dekker demonstrates that the decisions made locally by actors, given the knowledge they had at the time, were rational decisions. Yes, corners were cut, but each cut seemed far too small to affect a system as large as BP, with its thousands of employees and oil wells. As I explained in part one, these decisions can create latent conditions that accumulate and erode the system to the point where one small accident reverberates throughout the system and triggers a chain of increasingly larger failures.

This is called the normalization of deviance, and it was this very practice that led to the destruction of the Space Shuttle Challenger and the Space Shuttle Columbia. From the very first flights, there was damage to the O-rings and damage from foam strikes. Even so, NASA continually increased the tolerance for the damage so that the shuttles could keep flying.

Normalization of deviance is just one symptom of drift. According to Dr. Dekker, there are five features in drift:
1)    Uncertainties in the environment, scarcity of resources, and pressures to produce lead organizations to make decisions that sacrifice minor safety concerns. During the BP drilling that led to the disaster, oil rig workers would do "good enough" work just to meet the tight production deadlines. Each of these shortcuts was very minor but. . .
2)    Drift occurs in small steps. A little shortcut here and a little shortcut there add up, making the system more vulnerable to accidents.
3)    Despite the large number of interacting components and size of systems, these complex systems are very sensitive to initial conditions. This is all due to path dependency. Choosing a particular software platform gives me some advantages but I am also locked in by the limitations of that platform. Thus, the choice of which radio system to use by the New York Police Department and the New York Fire Department had a profound effect on rescue operations during 9/11.
4)    Unruly technology. Think of it this way: we know how to make aircraft that fly, but hardly anyone has actually figured out how to make a medium-size airline profitable. Our technology is not limited just to the mechanical and computational; it also includes the social. We cannot fully comprehend how our technologies interact with each other or the effects their interactions have.
5)    Complex systems often capture the protective structure that is supposed to keep them from failing. Again, the BP Gulf Disaster provides a great example: the government agency designed to oversee offshore oil drilling was compromised by the lucrative practice of regulators becoming lobbyists for the very companies they were supposed to oversee.

Dr. Dekker closes his book with two warnings. One, complexity is inevitable, and thus we need to learn how to manage and prevent failure in complex systems. Two, our current worldview of bad actors breaking components is blinding us to the real underlying causes of failure in complex systems. In my final post in this series, I will outline a new theory on dealing with failure in complex systems.

Dekker, S. (2011). Drift into failure: From hunting broken components to understanding complex systems. Burlington, VT: Ashgate Publishing Company.

Jun 16

Bad Management Week 2016 - How Organizations Fail

Back in 2005, I presented a “Framework for Analyzing Organizational Failure” after my dissertation adviser doubted that I could find a general explanation for how government organizations fail. After an extensive review of the literature and an in-depth study of four major government failures (the Oakland Development Authority, the Navy’s A-12 project, the Challenger accident, and the Columbia accident), I created this three-level model. Much of the model is based on Roberto’s (2000) analysis of a failed Everest expedition (the “Into Thin Air” expedition).

Seven years later, I find that the framework is still useful in understanding how organizations fail. In part one of this three-part series, I will explain the framework. For part two, I will talk more about the effects of complexity on organizational failure and how organizations will drift into failure even if they are performing their mission effectively. Part three will conclude with a strategy to avoid having the organization drift into failure.

The first concept to understand is the difference between "latent conditions" and "active failures." Active failures are the triggers for the actual failure. For example, it was the blast of rocket exhaust through the O-ring that caused the eventual explosion and breakup of the Challenger shuttle. But years before the accident, latent conditions, such as the use of solid rocket boosters (SRBs) on a manned spacecraft and the continuing acceptance of ever more destructive O-ring damage from the SRBs, set the stage for the eventual failure.

Throughout the framework, you can see how each level contributes latent conditions that make the destructive impact of an active failure more probable. On level one, the "Leaders" level, the management of the organization makes decisions based on their perceptions of the organization. Because of the complexity of the organization and inherent cognitive biases, leadership decisions tend to be flawed, and the resulting latent conditions accumulate. Leaders also have a direct effect on the second level ("Teams") if they impose their ideas onto the Teams without allowing feedback in return.

The two major problems that lead to the creation of more latent conditions and active failures are "deindividuation" and "groupthink." Deindividuation occurs when team members no longer feel engaged with the organization and begin to emotionally and intellectually divest themselves from their work. Put a group of deindividuated employees together and you will have groupthink. Warning signals are ignored out of fear of upsetting the leaders or because the team members just don't care anymore about what happens to the organization.

The third level is the organizational level. Imagine the assets of the organization behind a wall of defenses. The assets could be a space shuttle, the creation of a new development agency, or a successful acquisition contract. If you view an active failure as an arrow shot toward the defensive walls, then you can understand how latent conditions allow a sharp failure to penetrate all of the defenses and damage the assets. Thanks to latent conditions, holes develop in the defense walls and, if the holes line up just right, the sharp failure flies through the holes right into the assets. You can patch the walls but latent conditions still rain down from the upper two levels. Even regular maintenance can introduce new holes in the defenses.
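The image of holes lining up echoes James Reason's Swiss cheese model of accident causation, and a small Monte Carlo sketch can show why accumulating latent conditions is so dangerous. This is my own toy illustration, not part of the original framework: each defense layer has some number of "holes" (latent conditions) at random positions, and an active failure penetrates only when the layers' holes happen to align.

```python
import random

def breach_probability(layers=4, holes_per_layer=2, positions=100, trials=10_000):
    """Estimate the chance an active failure passes through every defense layer.

    Each layer has `holes_per_layer` holes out of `positions` possible spots;
    the failure gets through only if some position is open in ALL layers."""
    breaches = 0
    for _ in range(trials):
        # Sample the hole positions independently for each layer.
        layer_holes = [set(random.sample(range(positions), holes_per_layer))
                       for _ in range(layers)]
        # A breach occurs when the layers share at least one open position.
        if set.intersection(*layer_holes):
            breaches += 1
    return breaches / trials

random.seed(42)
# Accumulating latent conditions = more holes per layer.
for holes in (1, 5, 10, 20):
    p = breach_probability(holes_per_layer=holes)
    print(f"{holes:2d} holes/layer -> breach probability ~ {p:.4f}")
```

With one hole per layer a breach is vanishingly rare, but the probability climbs steeply as holes accumulate, which is why latent conditions matter long before any single accident occurs.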

It is organizational complexity that prevents Leaders and Teams from fully understanding the impact of their decisions and from seeing the accumulation of latent conditions until it is too late. In the second posting of this series on organizational failure, we will examine how complexity causes us to misunderstand how organizations work and how organizations inevitably drift into failure.

Jun 16

Celebrating Bad Management Day? Let's Celebrate Bad Management Week!

John Hollon's 2015 article arguing for establishing a "Bad Management Day" came at a fortuitous time: it was just a few months past the tenth anniversary of my general framework on organizational failure. Back then, I was just finishing up my PhD coursework in public administration and had decided to do a deep investigation of why government projects failed. From that work, in March 2005, I presented my general framework.

Seven years later, I reexamined my framework to see how well it stood up against new organizational failures. I would say that the framework is still a robust explainer for organizational failure. I am planning to revisit the framework this summer to develop it further. And to develop an organizational success framework.

So, for the next five days leading up to June 25th and the 140th anniversary of Little Bighorn, I will be revisiting some of my favorite articles on organizational failure, closing with a more hopeful article on Bad Management Day's Eve.

What you see below is the Little Bighorn graveyard. Here is hoping that the last casualty of that day is bad management.


Mar 16

Introducing the New Organizational Model

The diagram above is my new organizational model, which I have referred to in several previous postings. I developed it after several years of reflection and study, starting with my MBA work in 2001. I was especially inspired by my PhD work developing a new model of public leadership and, later, by my study of the lean startup movement.

The new organization is designed to be agile in every aspect, from work products to leadership to workforce. The organization is also transparent and designed for maximum information flow. Finally, the mission, vision, and strategy are baked into all that the organization does and drive the organization forward.

I will expand upon various components in future postings, but, for now, I want to give an overview of the complete model.

Starting with the upper box with the five chief officers: a common theme in organizational studies is the danger of silos and fiefdoms. There is also the problem of forming a senior leadership team that works together for the good of the entire organization. Therefore, in the new organization, there are only five chief officers, who form the senior leadership team.

  • The Chief Executive Officer (CEO) – chairs the senior leadership team and is responsible for keeping the organization aligned with the mission and vision by keeping the strategy engine working effectively.
  • The Chief Alliance Officer (CAO) – combines the traditional functions of the chief human resources officer and chief information officer. Responsible for managing the organizational talent and the organizational APIs platform.
  • The Chief Knowledge Officer (CKO) – responsible for managing the knowledge and learning workflow of the organization. Also oversees the training and development of the organization’s talent.
  • The Chief Brand Officer (CBO) – responsible for overseeing the organization’s brand: internally and externally. Helps the CEO manage the public-facing side of the organization’s mission and vision.
  • The Chief P4 Officer (CPO) – oversees the portfolios, programs, projects, and processes of the organization’s Business Engine.

In the middle of the model is the “Business Engine.” The Business Engine is where the work of the organization is done. Instead of a factory floor with fixed production lines, the Business Engine is a makerspace with both a physical and a virtual presence. Work is performed by a network of project teams that are loosely organized into portfolios and programs. There are few fixed processes, and these processes will be heavily automated using artificial intelligence systems built on blockchain technologies and deep learning algorithms. The teams will use agile project management, human-centered design, and adaptive case management to manage the work.

Surrounding the Business Engine are four critical components. The most important component, of course, is the “Talent” box with the four types of employees. These types are based on the Alliance model of employer-employee agreements. At the bottom is the Organizational APIs Platform, in which the core APIs that run the business infrastructure are available for the talent and teams to build their personalized tools and apps upon. Surrounding the Business Engine on both sides are open data streams that provide the organization’s performance metrics and allow for easy knowledge-sharing and collaboration in the organization. Embedded in the Business Engine are strategy information radiators (Ambient Strategy) that provide constantly updated information on how well the organization is fulfilling the mission, vision, and strategic goals.

Pulling the organization forward is the “Strategy Engine.” On top of the Strategy Engine is the “Mission and Vision” alignment compass, which helps align all of the organization’s activities toward the mission, vision, and strategic goals. What powers the planning process for the Strategy Engine are the twin concepts of organizational agility and organizational health.

Much of this model is borrowed and much is new. I don’t believe there is an organization that fully follows this model, but I believe many organizations could benefit from adopting parts of it. I look forward to expanding upon the various parts of the new organizational model. I welcome your comments, criticisms, and suggestions.

Mar 16

From Hierarchies to Network of Teams

Deloitte just released its 2016 Human Capital Trends report and it is outstanding! What I especially like is the realization that organizational design is the top HR topic among executives and HR practitioners.

I have found similar results in my research on the new public organization model. Hierarchical models just can't meet the demands for organizational agility and organizational health. In my model, there are programs, projects, and processes. The programs and projects are handled by teams that constantly change and re-form as the organization's strategies and needs change. This way, team members can rotate through roles as program directors, project leads, and project team members.

As to processes, I envision a fusion of human workers and artificial intelligence agents. For the purely algorithmic portion of processes, I see a combination of AI agents and blockchain technology. For any exceptions to the processes, adaptive case management will be used to signal for human intervention and refinement of the process.
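That hand-off pattern can be sketched in a few lines of code. This is a minimal, hypothetical illustration of my own (no real adaptive case management or AI framework is assumed, and all names are invented): an automated agent executes the cases its rules cover, exceptions are escalated to a human queue, and a human resolution can be promoted into a new rule that refines the process.

```python
from dataclasses import dataclass

@dataclass
class Case:
    case_id: int
    category: str          # e.g. "refund", "renewal", or something unexpected
    resolution: str = ""

class ProcessEngine:
    """Routes cases: algorithmic handling for known categories,
    human escalation (the adaptive case management path) for exceptions."""
    def __init__(self):
        # Rules the automated agent knows how to execute.
        self.rules = {"refund": "issue refund", "renewal": "auto-renew"}
        self.human_queue: list[Case] = []

    def handle(self, case: Case) -> Case:
        if case.category in self.rules:
            case.resolution = self.rules[case.category]   # automated path
        else:
            self.human_queue.append(case)                 # escalate to a person
            case.resolution = "escalated"
        return case

    def human_resolves(self, case: Case, resolution: str, add_rule: bool = False):
        """A human closes the exception; optionally the process is refined
        so future cases of this category are handled automatically."""
        case.resolution = resolution
        self.human_queue.remove(case)
        if add_rule:
            self.rules[case.category] = resolution

engine = ProcessEngine()
print(engine.handle(Case(1, "refund")).resolution)     # handled by the agent
odd = engine.handle(Case(2, "dispute"))                # no rule, so a human steps in
print(odd.resolution)
engine.human_resolves(odd, "partial credit", add_rule=True)
print(engine.handle(Case(3, "dispute")).resolution)    # the process has learned
```

The design point is the feedback loop: every human intervention is an opportunity to shrink the exception space, so the purely algorithmic portion of the process grows over time.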

The best analogy is to think of the organization as a network of teams that work off an organizational IT/analytics platform to build new applications. The closest organizational design that I have seen to what I envision is a makerspace.

Mar 16

Four Scenarios for the Future of the Federal Government – Five Years Later

In 2010, I published the following four scenarios on GovLoop:

SteamGov – The Federal government still uses large, centralized IT architectures and the average Federal worker’s work technology is less capable than the worker’s personal technology.

Google.Gov – The Federal government is greatly reduced in size while almost all government services are provided through contractors.

LabGov – State and local governments take the lead in using the latest open-source technologies, agile project management, and other innovations to more effectively and efficiently deliver government services. This causes a shift in the balance of power between the Federal government and the states as citizens demand the Federal government allow the states to provide services that once were the purview of the Federal government.

InnoGov – The Federal government establishes a DARPA-like institution to seek out innovative Gov 2.0 projects and accelerate the adoption of new open-source technologies and agile management techniques. By 2014, the Federal government is the leading innovator in IT and management practices and helps to revitalize the private and non-profit sectors with its technology/best practices transfer programs.


Almost four years ago, I revisited the four scenarios. At that time, I wrote that the Federal government seemed to be heading toward InnoGov because of the launch of the GSA’s Digital Services Innovation Center. Even so, many parts of the Federal government were still stuck in SteamGov. Since that time, there has been more progress toward InnoGov with the establishment of 18F and agency innovation labs such as Health and Human Services’ IdeaLab and the Office of Personnel Management’s Innovation Lab. Most of the Federal government is still stuck in SteamGov, but good progress has been made.

I am still undecided on whether the ultimate scenario will be InnoGov or LabGov. The state, metropolitan, and local governments are making incredible gains in technological innovation. Judging from what I read in GovTech, I am still betting on LabGov being the dominant scenario in 2020.

Feb 16

The Constructal Law in Organizational Design

A key component of my new theory of public administration and my new organizational design is the Constructal Law. First proposed back in 1996 by Adrian Bejan, the Constructal Law states:

"For a finite-size system to persist in time (to live), it must evolve in such a way that it provides easier access to the imposed currents that flow through it."

The Constructal Law is a law of physics and refers to natural systems. However, if one defines the imposed currents as data, information, and knowledge, then you can see the application to organizations, especially in the communication channels for data, information, and knowledge. The tricky part is that the organization's environment changes over time, and thus the organization's internal configuration of flows needs to evolve to keep access to data, information, and knowledge optimal.

The organization's internal configuration will evolve according to the dictates of the Constructal Law. The great insight the Constructal Law offers organizational theory is that this evolution can be directed rather than unguided.
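To make the idea concrete, here is a toy simulation of my own (an illustration of the principle, not Bejan's mathematics): an organization with several communication channels repeatedly widens whichever channel is most congested relative to the demand flowing through it, so the configuration evolves to ease the imposed flows.

```python
def evolve_channels(demand, capacity, steps=50, increment=1.0):
    """Toy constructal-style evolution: at each step, invest capacity in the
    channel with the worst congestion (demand divided by capacity)."""
    capacity = list(capacity)
    for _ in range(steps):
        congestion = [d / c for d, c in zip(demand, capacity)]
        worst = congestion.index(max(congestion))
        capacity[worst] += increment   # widen the channel where flow is most resisted
    return capacity

# Three information channels with unequal demand and equal starting capacity.
demand = [90.0, 30.0, 10.0]
final = evolve_channels(demand, capacity=[10.0, 10.0, 10.0])
print(final)  # the high-demand channels end up with proportionally more capacity
```

The directed version of this evolution is simply choosing where the next increment of capacity goes instead of letting congestion decide; either way, the configuration that persists is the one that eases access to the flows imposed on it.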