In performance cultures, people often become attached to best practices. The risk is that once we’ve declared a routine the best, it becomes frozen in time. We preach about its virtues and stop questioning its vices, no longer curious about where it’s imperfect and where it could improve. Organizational learning should be an ongoing activity, but best practices imply it has reached an endpoint. We might be better off looking for better practices.
At NASA, although teams routinely debriefed after both training simulations and significant operational events, what sometimes stood in the way of exploring better practices was a performance culture that held people accountable for outcomes. Every time they delayed a scheduled launch, they faced widespread public criticism and threats to funding. Each time they celebrated a flight that made it into orbit, they were encouraging their engineers to focus on the fact that the launch succeeded rather than on the faulty processes that could jeopardize future launches. That left NASA rewarding luck and repeating problematic practices, failing to rethink what qualified as an acceptable risk. It wasn’t for a lack of ability. After all, these were rocket scientists. As Ellen Ochoa observes, “When you are dealing with people’s lives hanging in the balance, you rely on following the procedures you already have. This can be the best approach in a time-critical situation, but it’s problematic if it prevents a thorough assessment in the aftermath.”
Focusing on results might be good for short-term performance, but it can be an obstacle to long-term learning. Sure enough, social scientists find that when people are held accountable only for whether the outcome was a success or failure, they are more likely to continue with ill-fated courses of action. Exclusively praising and rewarding results is dangerous because it breeds overconfidence in poor strategies, incentivizing people to keep doing things the way they’ve always done them. It isn’t until a high-stakes decision goes horribly wrong that people pause to reexamine their practices.
We shouldn’t have to wait until a space shuttle explodes or an astronaut nearly drowns to determine whether a decision was successful. Along with outcome accountability, we can create process accountability by evaluating how carefully different options are considered as people make decisions. A bad decision process is based on shallow thinking. A good process is grounded in deep thinking and rethinking, enabling people to form and express independent opinions. Research shows that when we have to explain the procedures behind our decisions in real time, we think more critically and process the possibilities more thoroughly.
Process accountability might sound like the opposite of psychological safety, but they’re actually independent. Amy Edmondson finds that when psychological safety exists without accountability, people tend to stay within their comfort zone, and when there’s accountability but not safety, people tend to stay silent in an anxiety zone. When we combine the two, we create a learning zone. People feel free to experiment—and to poke holes in one another’s experiments in service of making them better. They become a challenge network.
One of the most effective steps toward process accountability that I’ve seen is at Amazon, where important decisions aren’t made based on simple PowerPoint presentations. They’re informed by a six-page memo that lays out a problem, the different approaches that have been considered in the past, and how the proposed solutions serve the customer. At the start of the meeting, to avoid groupthink, everyone reads the memo silently. This isn’t practical in every situation, but it’s paramount when choices are both consequential and irreversible. Long before the results of the decision are known, the quality of the process can be evaluated based on the rigor and creativity of the author’s thinking in the memo and on the thoroughness of the discussion that ensues in the meeting.
In learning cultures, people don’t stop keeping score. They expand the scorecard to consider processes as well as outcomes:
Even if the outcome of a decision is positive, it doesn’t necessarily qualify as a success. If the process was shallow, you were lucky. If the decision process was deep, you can count it as an improvement: you’ve discovered a better practice. If the outcome is negative, it’s a failure only if the decision process was shallow. If the result was negative but you evaluated the decision thoroughly, you’ve run a smart experiment.
The ideal time to run those experiments is when decisions are relatively inconsequential or reversible. In too many organizations, leaders look for guarantees that the results will be favorable before testing or investing in something new. It’s the equivalent of telling Gutenberg you’d only bankroll his printing press once he had a long line of satisfied customers—or announcing to a group of HIV researchers that you’d only fund their clinical trials after their treatments worked.
Requiring proof is an enemy of progress. This is why companies like Amazon use a principle of disagree and commit. As Jeff Bezos explained it in an annual shareholder letter, instead of demanding convincing results, experiments start with asking people to make bets. “Look, I know we disagree on this but will you gamble with me on it?” The goal in a learning culture is to welcome these kinds of experiments, to make rethinking so familiar that it becomes routine.
Process accountability isn’t just a matter of rewards and punishments. It’s also about who has decision authority. In a study of California banks, executives often kept approving additional loans to customers who’d already defaulted on a previous one. Since the bankers had signed off on the first loan, they were motivated to justify their initial decision. Interestingly, banks were more likely to identify and write off problem loans when they had high rates of executive turnover. If you’re not the person who greenlit the initial loan, you have every incentive to rethink the previous assessment of that customer. If they’ve defaulted on the past nineteen loans, it’s probably time to adjust. Rethinking is more likely when we separate the initial decision makers from the later decision evaluators.
Hayley Lewis, sketchnote summary of A Spectrum of Reasons for Failure. Illustration drawn May 2020. London, United Kingdom. Copyright © 2020 by HALO Psychology Limited.
For years, NASA had failed to create that separation. Ellen Ochoa recalls that traditionally “the same managers who were responsible for cost and schedule were the ones who also had the authority to waive technical requirements. It’s easy to talk yourself into something on a launch day.”
The Columbia disaster reinforced the need for NASA to develop a stronger learning culture. On the next space shuttle flight, a problem surfaced with the sensors in an external engine tank. It reoccurred several more times over the next year and a half, but it didn’t create any observable problems. In 2006, on the day of a countdown in Houston, the whole mission management team held a vote. There was overwhelming consensus that the launch should go forward. Only one outlier voted no: Ellen Ochoa.
In the old performance culture, Ellen might’ve been afraid to vote against the launch. In the emerging learning culture, “it’s not just that we’re encouraged to speak up. It’s our responsibility to speak up,” she explains. “Inclusion at NASA is not only a way to increase innovation and engage employees; it directly affects safety since people need to feel valued and respected in order to be comfortable speaking up.” In the past, the onus would’ve been on her to prove it was not safe to launch. Now the onus was on the team to prove it was safe to launch. That meant approaching their expertise with more humility, their decision with more doubt, and their analysis with more curiosity about the causes and potential consequences of the problem.
After the vote, Ellen received a call from the NASA administrator in Florida, who expressed surprising interest in rethinking the majority opinion in the room. “I’d like to understand your thinking,” he told her. They went on to delay the launch. “Some people weren’t happy we didn’t launch that day,” Ellen reflects. “But people did not come up to me and berate me in any way or make me feel bad. They didn’t take it out on me personally.” The following day all the sensors worked properly, but NASA ended up delaying three more launches over the next few months due to intermittent sensor malfunctions. At that point, the manager of the shuttle program called for the team to stand down until they identified the root cause. Eventually they figured out that the sensors were working fine; it was the cryogenic environment that was causing a faulty connection between the sensors and computers.