Creating flow is critical to building a smooth and effective R&D organization. However, while Lean manufacturing makes physical work flow, none of the value in R&D comes from things. R&D value arises from useful knowledge: a manufacturing technique, a product feature, trade secrets that enable your process, and so on. Knowledge is not tangible or visible like materials or parts, so it can be harder to think about flow in an R&D environment.
Luckily, the idea of single-piece flow can help us, even in intangible, invisible spaces. Experiments, for example, deliver knowledge. You could run 100 experiments, designing, building and testing them one after another, or you could design all 100 experiments, then build all 100 prototypes in the lab, then test each prototype. We know which choice a Lean practitioner would make. If you run your design-build-test learning cycles one after another, each will generate a single piece of knowledge. If you batch the experiments, however, you can design quite quickly, build prototypes with minimal setup changes, and use very high-speed test equipment, but you will have to wait until all the experiments are done before you learn anything.
While the first approach builds knowledge “slowly,” one piece at a time, at apparently high cost, it provides tremendous intellectual flexibility. If you learn something in your first three experiments, you have saved the time and effort of 97 experimental setups before setting out in a new direction. That rapidly reduces your investment in failed experiments and frees your time to find breakthroughs. The second, batched approach, by contrast, is fraught with danger.
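The arithmetic behind this trade-off can be made explicit with a minimal sketch. The time units and the stopping point below are purely illustrative assumptions, not figures from the text; the point is only that one-piece flow lets you stop as soon as you learn something, while the batch must finish every phase before any learning occurs.

```python
# Illustrative model: 100 planned experiments, each needing design, build,
# and test effort. All numbers are hypothetical time units.

DESIGN, BUILD, TEST = 1, 2, 1  # effort per experiment (assumed)
N = 100                        # planned experiments
STOP_AFTER = 3                 # suppose the answer appears in experiment 3

# One-piece flow: design-build-test each experiment in turn, stop on insight.
one_piece_time = STOP_AFTER * (DESIGN + BUILD + TEST)

# Batch: design all 100, then build all 100, then test all 100, then learn.
batch_time = N * (DESIGN + BUILD + TEST)

print(one_piece_time)  # 12
print(batch_time)      # 400
```

Under these assumed numbers, one-piece flow delivers the insight after 12 units of work and leaves 97 setups unbuilt; the batch consumes all 400 units before anyone learns anything.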
The “high throughput” batch method also suffers hidden costs. If you have a design flaw in your experimental approach, you won’t find it until the end. Worse, despite the speed of the machines, the fact that every one of your peer scientists is doing the same thing means you will get results back in months, not minutes. Chances are, you will have forgotten why you designed that experiment and will start over from scratch. This scenario in fact played out in the pharmaceutical industry: where once hundreds of experiments were needed to identify viable drug candidates, the advent of “high throughput” approaches increased the number of necessary experiments to more than 20,000, often at a one- to two-year time penalty.
Of course, the law of Lean – one-piece flow of experiments – applies directly.
Another way people stop the flow of knowledge is by developing very complicated prototypes that test many things at once. Prototypes come in all shapes and levels of complexity, but imagine building an entire moon rocket in which every one of its 3 million components is experimental. This is, in essence, a very large batch, in which many types of knowledge are tested at one time.
Two difficulties appear from this approach:
- If everything works, your prototype will merely be slow to build, allowing fast-cycle competitors to pass you in features, quality and process.
- If the prototype fails, however, you may get no useful information at all! With all of the complicated interfaces, parts and potential failure modes, chances are very low that you will be able to tell what worked, what failed or where to go next. Another complete investment failure disguised as a cost savings.
The fix for this is to prototype the smallest possible item in the cheapest and fastest way. Ensure each prototype tests only a small number of things, like interfaces or integration, so that the effects of a failure can be separated from one another. Sub-assemblies fully tested and debugged before full-scale prototyping will not confuse the full-scale results; parts and sub-systems fully tested and debugged before sub-assembly testing ensure that sub-assembly prototypes generate useful information, and so on down to raw-material testing. Think of this as convergent flow, in which each level of complexity is debugged on the way to building the next.
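The convergent-flow discipline above can be sketched as a simple gate: no level of complexity is built until every level below it has passed its tests. The level names and the `converge` helper are hypothetical illustrations, not part of any standard Lean tooling.

```python
# A minimal sketch of "convergent flow": each level of complexity must be
# fully proven before the next level is built, so any failure is isolated
# to a single level. Level names are illustrative.

LEVELS = ["raw materials", "parts", "sub-assemblies", "full prototype"]

def converge(test_results):
    """Return the first unproven level, given {level: all_tests_passed}.
    Building stops there; debug that level before moving up in complexity.
    Returns None once every level is proven."""
    for level in LEVELS:
        if not test_results.get(level, False):
            return level
    return None
```

For example, `converge({"raw materials": True})` points you at `"parts"`: the full prototype is not even considered until the levels beneath it have stopped generating surprises.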
Finally, there is the practice of scheduling experimental time, especially in testing. Testing slots may be few and far between, tests may be expensive to run, and if you miss a slot it may be ages before you get another chance. In such environments, scientists and engineers fall into the bad habit of scheduling test time well in advance and then building their work to hit a testing slot. Since they do not know whether they will have a prototype ready in time, they schedule a few extra slots “just to be sure.” If their work is not done, they put literally any prototype they have into the slot, whether or not it will give useful information. After all, testing slots are valuable… Crazy doesn’t begin to describe it. The end result: roughly three times the total testing capacity required for the same information, slots that go unused anyway, and tremendous amounts of testing performed on things nobody cares about.
The easy way out of scheduling is to set up the testing unit as a pacemaker process, using a modified “first-in, first-out” testing regimen with the following rules:
- Every team gets to put any prototype in the queue that will test a valuable insight.
- No team may put any prototype into the queue that is not ready to test a valuable insight.
- Regular prototypes go first-in, first-tested. Very hot items (no more than 10% of the queue!) take the next available testing slot.
- No prototype is so important that it bumps other tests already in progress.
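The four rules above amount to a small scheduling protocol, and a minimal sketch makes them concrete. The class and method names (`TestQueue`, `submit`, `next_slot`) are hypothetical, and the 10% cap is modeled here as a fraction of currently queued items; the text does not specify how the cap is measured.

```python
from collections import deque

class TestQueue:
    """Sketch of the modified first-in, first-out testing regimen."""

    HOT_LIMIT = 0.10  # "very hot" items capped at 10% (assumed: of the queue)

    def __init__(self):
        self.regular = deque()
        self.hot = deque()

    def submit(self, prototype, ready=True, hot=False):
        # Rule: nothing enters the queue unless it is ready to test
        # a valuable insight.
        if not ready:
            raise ValueError("prototype not ready to test a valuable insight")
        if hot:
            queued = len(self.regular) + len(self.hot)
            if queued and (len(self.hot) + 1) / (queued + 1) > self.HOT_LIMIT:
                raise ValueError("hot items exceed 10% of the queue")
            self.hot.append(prototype)
        else:
            self.regular.append(prototype)

    def next_slot(self):
        # Hot items take the next open slot; regular items go first-in,
        # first-tested. Tests already in progress are never bumped,
        # because dispatch happens only when a slot opens up.
        if self.hot:
            return self.hot.popleft()
        if self.regular:
            return self.regular.popleft()
        return None
```

In use, a team would call `submit` only when a prototype is genuinely ready, and the testing unit would call `next_slot` each time a machine frees up; nothing in the design allows an in-progress test to be interrupted.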
For one particular pharmaceutical organization, this approach cut the total number of tests in a major operating unit by 60%, and cut the wait for an open slot from over six months to approximately one month.
The results will be the same in tire testing or automobile crash testing.
Flow in R&D is as critical as in manufacturing. It is just a little more difficult to see, which is why, if you create R&D flow, your company will begin to put more, and more innovative, new products and processes into the market ahead of your competitors.