
Follow-up Q & A to the Webinar “Getting to Everyday Improvement: How to Connect the Science and Culture of Problem Solving”

Judy Worth and Beau Keyte

The Lean Enterprise Institute's webinar on Getting to Everyday Improvement drew thousands of Lean Thinkers from around the world and hundreds of questions that we didn’t have time to answer. Most of the questions fell into a few broad categories:

  • How does the value-stream improvement methodology described by the presenters compare to six sigma?
  • How do you run “small reversible experiments” as part of the improvement process?
  • How do you “socialize” or engage stakeholders in the improvement process?
  • How does the methodology work in a healthcare setting? (Presenters Judy Worth and Beau Keyte are co-authors, with a team of six other lean practitioners, of Perfecting Patient Journeys, which describes the methodology’s lean healthcare application in detail.)

What follows are answers from presenters Judy Worth and Beau Keyte to questions along these main themes. Visit LEI’s Webinar Library for the archived audio and slide deck with the live presentation and Q&A.

Q: Please differentiate between the P in PDCA [plan-do-check-act] and the DMA in six sigma DMAIC [define, measure, analyze, improve and control]. I have seen PDCA fail multiple times because no process understanding was shared first. Can PDCA REALLY be more rapid than DMAIC without trade-offs in sustainable results?

A: If you have seen Brian Joiner's book on Fourth Generation Management (where he suggests that PDCA really starts with Check) or our own PDCA diagram (which includes "Grasp the Situation" as the starting point for the PDCA cycle and "Grasp the Situation" again embedded in the center of each phase of the PDCA cycle), you will recognize that many PDCA proponents share your concern with a focus on clearly understanding what's actually happening before you initiate problem solving.

That means not only looking at data but going to the gemba and observing firsthand -- as a prerequisite to developing a hypothesis about what changes might improve the performance of a process or value stream. (The hypothesis can take the form of a clear problem statement, a value proposition or SIPOC, perhaps current and future-state maps, and an improvement plan, including small, reversible experiments.) Our hypothesis is that the failure you have observed is less about the model and more about a failure to fully understand the first phase of the PDCA model and to spend adequate time there. Where we may differ is on the appropriate timing of when to initiate a root cause analysis. And, again, for us this depends on whether you are trying to do system-level problem solving for a value stream or group of processes (which is where our book begins) or problem solving inside a specific work process.

Q: Any advice on how to generate the time to perform experiments? We struggle with being able to get everyone in the same room.

A: Most of us know and believe that the time dedicated to continuous improvement is as important as the time dedicated to doing the work! We have seen a variety of ways that successful organizations have dedicated time, from scheduling things in advance and backfilling the gap in front-line capacity to dedicating specific time every week for workers to problem solve. Ultimately, it is up to management to see the value in making problem solving an expectation and investing the time to do it.

Q: How frequently should you make changes, and what happens if they don't work?

A: The focus of changes (experiments) is to improve performance toward something the organization wants or needs. In terms of frequency, it would be a combination of the organizational urgency and the human capacity to run experiments.

We've seen several experiments run in a single day for something small in scope, and others with broader scopes run and iterate over a much longer period of time. We have seen teams run several progressive experiments over a few weeks to reach the goal they needed to meet. But experiments should be occurring all the time as a way of learning how to improve.

In terms of outcomes, if the experiment makes no progress toward the target condition, the team review after the experiment should be designed to dive into why it didn't work and how you should adjust your thinking and learning to create a new experiment. Time for a new hypothesis!

Q: How would you compare and contrast the small reversible experiments with what might be typically used in a six sigma/design of experiments (DOE) approach?

A: The experiments we are talking about are designed after some problem solving by the people responsible for the work: this could be something to address work or management process issues. Once people consider changes in their process, they are the ones who design and socialize the experiments. It's been our experience that DOEs are very useful for solving some problems but can be an example of overproduction for others.

In comparing DOEs to what we discussed, DOEs typically require "thinking and designing" not owned by the people inside the work, which can dilute buy-in, engagement, and sustainability. What we discussed is a straightforward effort to get any team to design an experiment and learn from it without resorting to training everyone on DOEs. This instills scientific problem solving at all levels of the organization.

Q: Do you have a recommended way for starting to use this approach?  For example problem selection: type, size, etc.?  Or is this a small reversible experiment also?

A: You may have answered your own question! I recently had a conversation with a continuous improvement manager at one of the big companies that put six sigma on the map. He told me that his whole job was organizing and running five-day kaizen events, and he was perplexed about how to change direction toward small reversible experiments (SREs). We then discussed how to design some SREs with the goal of disconnecting from five-day kaizens. We effectively figured out how to use SREs to improve the process of continuous improvement!

Q: How do you know when to use experiments and when it is a just-do-it situation? In other words, when is an issue too small for this procedure?

A: Any change you make should be validated to confirm that it made the necessary improvement. A just-do-it is still an experiment: it has a hypothesis, a plan, an implementation, and a review of the result, even if small and informal. The biggest question in any just-do-it is how you confirm that it had only the consequences you intended: did you get everything you thought you would, with no side effects and no need to "adjust" and run another experiment?

Q: The idea of small reversible experiments seems contrary to the idea of small incremental improvements using standardized work. I thought you needed to have a standard in place before making a change?

A: SREs are completely aligned with standard work: they only work when a new way of doing things (an experimental standard) is thought through and tested. To create an experiment, you need a baseline of a current condition to measure the change against: it's possible (and probable) that there is significant variation in how some work is currently approached. The experiment could be a test of a standard way of approaching the work, resulting in a new standard or new standard work.

Q: How can you manage to get time to work with a group that has people who work in urgent patient care situations?

A: We tested this methodology in 70 hospital EDs in Michigan, and each ED had to figure out how and when to experiment within the context of their particular situation.  One of the common themes was to see the experiment as a "progressive" process: the first experiment would begin during low predicted patient volumes and only run for a couple of hours. They would review results, then run it for a longer period of time and/or in heavier predicted patient volumes. And, they kept on testing different and longer times until they were happy with the results.

A great side benefit of this is the number of staff who engaged in all the different experimental cycles: in some cases they got upwards of 50% of the staff engaged before settling on the new standard. 

Q: Is there a potential problem when system changes are made from only a few experiments?

A: There certainly could be. However, if the experimenters do a good job of “grasping the situation” before designing an experiment or series of experiments, the risk can be minimized.

For example, teams who are following the grasp the situation-PDCA model might start an experiment to reduce ED length of stay (LOS) by looking at trend data to determine what the census looks like in the ED 80% of the time for day, evening and night shifts (by day of the week). Then, they design a series of small experiments that start by testing a particular intervention at a time when the census is likely to be low, working out the kinks in the experiment, and then progressively trying the experiment again under varying and increasingly complex conditions. One of their typical findings is that one-size-fits-all interventions rarely work in all situations. Through SREs they might learn that Plan A might work under certain conditions, that Plan B needs to be implemented under a different set of conditions, and that they need to develop a system to signal when it is time to toggle back and forth between Plan A and Plan B.
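The Plan A/Plan B toggle described above can be sketched as a tiny signal function. This is a minimal, hypothetical sketch: the plan names, the census threshold, and the function name are illustrative assumptions, not details from the presenters' experiments; a real team would derive the trigger point from its own trend data.

```python
# Hypothetical sketch: a signal for toggling between two separately tested plans.
# LOW_CENSUS_LIMIT is an assumed trigger point a team might find via SREs.

LOW_CENSUS_LIMIT = 25  # patients in the ED; illustrative value only

def select_plan(current_census: int) -> str:
    """Return which validated plan to run, based on the observed ED census."""
    if current_census <= LOW_CENSUS_LIMIT:
        return "Plan A"  # the intervention validated under low-census conditions
    return "Plan B"      # the intervention validated under high-census conditions

print(select_plan(18))  # -> Plan A
print(select_plan(40))  # -> Plan B
```

The point of the sketch is only that the toggle itself becomes standard work: an explicit, agreed signal rather than an ad hoc judgment call.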

Q: Is this gradual approach best suited for healthcare due to the potential risk to patients?

A: We're not sure what you mean by the link between "gradual approach" and "potential risk to patients," but we'd be happy to try to clarify in further conversation. That said, we start with the assumption that patient safety ALWAYS trumps efficiency. That means if there's a fire, you have to put it out before you worry about fire prevention. However, our experience has been that the typical organizational approaches to problems -- "jumping to solutions" and implementing "blanket solutions" -- are much more likely to cause unintended negative consequences, including patient risk, than the approach we are advocating.

Q: With all the variables that impact operations, how can you tell if the experiment you are doing was successful, or if it was some other factor that resulted in either success or failure?

A: As with any experiment, you can draw conclusions only if you can test the variables you want to consider one at a time. Although that's not always easy to do, you will see some typical strategies to address this issue in our responses to some of the other webinar questions, e.g., looking for trend data to define "typical conditions" and trying out your proposed intervention under varying and progressively more complex conditions.  And it also means re-running experiments when you think you have some confounding variables to sort out.

Q: Just a comment: it is very important to clearly state how the experiment will benefit the people who are testing the new system in the short term and long term. I've found this is a very effective way to begin.

A: We agree!  We think it's always a good idea to develop and circulate an elevator speech that incorporates benefit to patients and "testers" into an explanation of what you propose to do, how, and why.  And, for groups that are just starting out with this approach to process improvement, we encourage them to start with projects that will benefit the people who are doing the work that needs improvement.

Paul O’Neill, the former U.S. Treasury Secretary, started a very successful turnaround effort at Alcoa by focusing on reducing workplace injuries before he started working on improving the bottom line.  Funny thing -- by making a safer workplace, he actually improved the bottom line!

Q: What are some of the ground rules or "must-be-agreed-to" values in order to proceed in a positive and value-adding way?

A: Here are a few we've heard various organizations use:  (1) Focus on fixing processes rather than finding blame (e.g., it's the "Five Whys, not the Five Whos").  (2) Make sure you have clearly defined the problem and that others agree with that definition before you start trying to fix the problem (e.g., "No 'why' before its time").  (3) Stay focused on the observable, measurable facts of the situation, not assumptions and conclusions. (4) The people who are closest to the work where the problem is occurring typically know more about the problem and potential solutions than anyone else and need to be engaged in the problem solving efforts. (5) It's the role of leaders to coach others in problem solving and help them develop their problem solving abilities, rather than do the problem solving themselves.

Q: Isn't there value in engaging folks daily with the use of simple Five Whys to begin reflection and development of a problem-solving culture?

A: We typically reserve use of the Five Whys for a very specific method of problem solving our colleagues learned while working at Toyota. That said, there's potentially a lot to be gained by doing an organizational reflection that looks at typical behavior within the organization, underlying assumptions, associated mental models ... and the conflict between stated organizational values and how people within the organization really behave. (Case in point: one of the webinar presenters recently visited a facility where "Trust in Employees" was posted as a core organizational value but where no one was allowed to bring pencils into the facility out of fear that they might change data on hard copy documents.) There is a lot of good information in the work of Chris Argyris and Peter Senge on the value of leading this type of organizational reflection.

Q: Sometimes Plan-Do-Study-Act (PDSA) efforts seem to run forever. What do you say about doing multiple small reversible experiments at the same time? Is this discouraged?

A: We're not sure what you mean by PDSAs running on forever. If you believe in true continuous improvement, then the PDSA effort never really ends. If you're talking about PDSA as a project, however, that's a different story. If you visit a Toyota facility, you will see lots of small SREs occurring on a routine basis. What's critical is to make sure you run your SREs in a way that you don't confuse cause and effect or limit your ability to draw conclusions.

Q: A concern I have with the example of constraining experiments to a particular day of the week is that it potentially introduces bias into an experiment. If processes have variation based on days of the week this could be a fatal error. How do you address these concerns?

A: Great question. If we'd had a little more time, we would have explained in more detail the "suck it up Wednesday" experiments that we mentioned. "Suck it up Wednesdays" referred to standard work for initiating new experiments. However, that does not mean that once an initial SRE was run, no further experimentation was done under different conditions (e.g., different shifts, different days of the week, different staffing patterns, etc.). Please refer to our answer about the risk of drawing conclusions from only a few experiments.

Q: In terms of socialization … in a company that is nationwide with lean being implemented in separate offices, how do you "share" what problems are in flight (to avoid duplication of efforts), status of problems or testing/experiments, or merely sharing solutions to be implemented across the organization?

A: We recently asked this question during a visit to a Toyota manufacturing facility and were told that internally (inside the facility), representatives from different parts of the plant meet on a weekly basis to share problems, new ideas, experiments in progress, etc.  They also contribute to a newsletter that gets circulated across plants that do similar types of manufacturing.

In healthcare and elsewhere, we've seen annual process excellence seminars, fairs, and forums with employee presentations, poster sessions, and prizes awarded for good ideas. However, any mechanisms you use need to fit your own organizational culture. And, as noted during the webinar, you have to be cautious about simply sharing solutions. Just because a solution works well in one setting doesn't mean it's a good fit for what appears to be a similar problem elsewhere. (There's that nagging little problem of root cause(s).)

Q: Would you advise using the A3 method as a visual tool for socialization?

A: Definitely yes for organizations that are learning to use A3s.  We also find current and future-state value stream maps, glass walls (project tracking boards and dashboards), and similar tools can be helpful.  And don't forget your organization's press room/communications group. 

Q: The presenters never mentioned the A3 problem-solving model and I am wondering if they could explain any differences that they see in the method they are describing.  I am not sure that there are major differences from my experience.

A: Perhaps we're dealing with semantics here, but our perspective is that the A3 is a tool to facilitate PDCA problem solving. The thinking -- actually the scientific method -- is identical. And SREs are an additional tool that can be used to validate the hypotheses developed in A3 thinking just as the A3 document itself is a tool for socialization.

Q: Our organization is using a DMAIC approach. Am I correct in seeing that your description of PDCA fits into the "I" (improvement) portion of that methodology?

A: We would probably say it's more accurate to describe the DMAIC approach and PDCA problem solving as two different ways to describe and implement the scientific method.  (See our previous answer to the request to differentiate the P in PDCA from the DMA in six sigma DMAIC.)

Q: How is experimentation different than incremental continuous quality improvements both of which are simple and rapid without the high structure of six sigma?

A: They are really not different as long as you remember two points: you don't implement a proposed "improvement" until you have validated it (that's the experiment) and you accept that sometimes what you think is an improvement doesn't turn out to be one when you test it -- and that's okay as long as you learn from it.

Q: Can you please review socialization briefly?

A: Socialization is the process of engaging key stakeholders (people who have to approve, support, implement or know) in reaching agreement about how to define a problem, prioritize the need for addressing the problem, identifying potential ways to address/solve the problem, etc. The end result should be stakeholder buy in -- with consensus or at least agreement to proceed with active support.

Q: A big complaint we've had with this kind of problem solving is how much time it takes to make a change.  How do you recommend explaining the reasoning behind this type of thinking to the lay people that are involved?

A: We're not sure we understand the complaint about how much time it takes to make a change. The time involved is really determined by the scope of the change. Small changes can be made quickly -- even with socialization and experimentation. Large-scale changes take time regardless of what method you're using. Research on compliance says that people are more likely to do as requested when they are given a rationale.

Q: Do you have any effective ways to measure the quality of the socialization communication?

A: The most effective way is to observe and maybe even document the behaviors that follow the socialization and see if they are consistent with what you hoped to achieve.  Do people ask questions, do they request additional information, do they allocate time and attention as requested, do they follow through on commitments, do they advocate with others on behalf of the effort you are undertaking, do they behave consistently (walk their talk with you)?

Q: Any advice on when you should move to the experiment stage of problem solving? For instance, intuitive, even guesswork, approaches solve many problems, and this is generally a fast, easy solution. And what is your advice for when a leader of a team should say, "Hold up, let's try a formal problem-solving approach, which will probably include an experiment"?

A: As stated above in the answer to the “just-do-it” question, even that approach really involves use of the scientific method and experimentation -- just informally rather than formally. Deciding when to go formal depends on several variables such as, how complex is the problem, how many people (or functions, disciplines, professions) are involved, what kinds of resources are needed to support the experiment, etc. The bigger and more complex the problem and greater the resource needs (including personnel time), the more likely you are to need to use more formality in your experiment.  Also, the more skeptical the audience for your results, the tighter your experiments may need to be.

Q: What level of importance do you place on measurement? Oftentimes data is manual and time-consuming to collect. Do you require measurement or just encourage engagement and improvement? Many improvements seem to be common sense but may not be easy to measure.

A: In a recent column in the Wall Street Journal, Bill Gates focused on the importance of measurement in solving huge problems like disease, poverty, and hunger.  Citing William Rosen, author of The Most Powerful Idea in the World, Gates said, "Following the path of the steam engine long ago, thanks to measurement, progress isn't 'doomed to be rare and erratic.' We can, in fact, make it commonplace."

Like Gates, we think measurement is fundamental to any effort at improvement.  We also think that, unless we make the data measurement simple and easy to do in the context of performing the work, most people will not do it--or do it well.  In our experience, data collection can be as simple as writing tic marks on a sheet of paper with a date, time and initials.
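The tic-marks-on-paper approach described above translates directly into a very small digital tally. This is a hypothetical sketch, assuming illustrative category names and initials; only the idea of a tally recorded with date, time, and initials comes from the answer above.

```python
from collections import Counter
from datetime import datetime

# Hypothetical sketch of tic-mark data collection as a digital tally.
tally = Counter()
log = []  # one entry per tic mark: (category, timestamp, initials)

def tic(category: str, initials: str) -> None:
    """Record one tic mark with date/time and initials, as on the paper sheet."""
    tally[category] += 1
    log.append((category, datetime.now().isoformat(timespec="minutes"), initials))

# Illustrative usage with made-up categories and initials:
tic("interruption", "JW")
tic("interruption", "BK")
tic("rework", "JW")
print(dict(tally))  # -> {'interruption': 2, 'rework': 1}
```

The design choice mirrors the advice in the text: keep data collection simple enough that it happens in the flow of the work, and keep the raw log so the team can review it during the experiment debrief.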

In addition, finding what to measure as an indicator of improvement is frequently an art in itself. Sometimes the measures are indirect.  For example, the measure of effectiveness of education on a change in an HR policy could be something like the number of calls to the helpline asking for clarification of the policy. Presumably, "improvements" to the education would result in fewer calls (or changes in the nature of the calls).

Q: How does this differ from Mike Rother's [improvement] kata?

A: If by "this" you mean the SREs, then one way to think of them is as both a practice to use inside the PDCA process for doing problem solving and continuous improvement at the work process level and inside the PDCA process for developing problem solving/CI capability in people working at the process level.  They are in effect how you learn your way from current condition to target condition.

Here's how SREs inside the katas might work: You define a new target condition for a process and try it out. Then, unless you have set your target too low, you inevitably encounter a barrier. To remove or address the barrier, you propose a countermeasure. You then try the countermeasure in an SRE, which will reveal whether the countermeasure is effective in addressing the barrier, whether you need to modify the countermeasure to address the barrier, or whether you need to try a different countermeasure.  It is this ongoing cycle which creates continuous improvement inside the process.
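The cycle just described -- try a countermeasure as an SRE, adopt it if it addresses the barrier, otherwise discard it and try another -- can be sketched as a loop. This is a minimal toy sketch, not the authors' method as implemented anywhere: the process condition is reduced to a single number, and the candidate countermeasures and all function names are illustrative assumptions.

```python
# Toy sketch of the SRE loop inside the improvement kata.
# "Reversible" here means an unhelpful trial is simply discarded.

def improvement_kata(current, target, countermeasures, max_cycles=20):
    """Iterate small reversible experiments until the target condition is met."""
    for _ in range(max_cycles):
        if current >= target:          # target condition reached
            return current
        for countermeasure in countermeasures:
            trial = countermeasure(current)   # run the SRE
            if trial > current:               # experiment validated:
                current = trial               # adopt as the new standard
                break
            # else: reverse (discard) the trial and try the next countermeasure
    return current

# Illustrative usage: two candidate countermeasures, only one of which helps.
steps = [lambda c: c - 1, lambda c: c + 2]
print(improvement_kata(current=0, target=6, countermeasures=steps))  # -> 6
```

The structure, not the arithmetic, is the point: each pass through the loop is one grasp-the-situation/experiment/review cycle, and progress toward the target condition accumulates only through validated changes.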

Q: How do you proceed when an experiment ISN’T reversible? Do you proceed, or do you find one that is reversible?

A: This is a good question: it might depend on the risk involved.  What's the downside if it doesn't work?  If the downside appears to be significant (from a technical OR social perspective) it might be wise to continue to think about other experiments.  Another alternative is to consider ways to break up the experiment into smaller elements to test portions that might be reversible. 

Q: How do you reconcile the need to do small, quick experiments with a potentially long, drawn-out socialization process? Especially when you uncover significant disagreements on what the problem is and the approach for addressing it as part of socialization.

A: A critical element of success in change is the buy-in and engagement of the people doing the work. These people don't have to always agree, but they need to be heard and acknowledged. We have been in situations where there is heavy disagreement and the teams decided to design experiments to test both sets of ideas: "let's try both (one at a time) and see what happens." And, the teams made sure that the people with strong views not only had a hand in designing the experiment but running it.

Related Content



  • It's Not All About the Data on Value-Stream Maps: An Interview with Judy Worth
    Data is a critical component of value-stream mapping. But it's not the only component you should be concerned about. All too often we find mappers getting caught up in their efforts to gather data, which can lead to overload and inaccurate mapping. LEI faculty member Judy Worth has seen this many times - here are her perspectives on the root causes and possible countermeasures.
  • 10 Tips for Getting the Most Value from Value Stream Mapping
    Most people see only a small amount of what is to be gained from value stream mapping. "Such an exercise can be a powerful organizational development tool," says Judy Worth, "as well as one for improving value stream performance on quality, efficiency, and safety." Before you embark, keep these 10 key things in mind.
  • Value Stream Maps and Battle Plans - Are They Worth Nothing?
“I’m reluctant to say maps are nothing, but there’s a difference between maps and mapping," Judy Worth says, paraphrasing Eisenhower’s insight that battle plans meant nothing, but PLANNING for battle was indispensable. “An awful lot of the benefit that comes out of value-stream maps comes from the process of mapping with other people."