Management: Designing the System Where People and AI Work Together 

Executive Leadership

By Art Smalley

February 9, 2026

AI accelerates thinking, but without management systems balancing fast generation with slow reflection, organizations risk scaling bad assumptions instead of real learning.

In previous articles in this four-part series, we explored why the impact of AI varies so widely across organizations. The technology itself is increasingly accessible and powerful, yet results remain uneven. Some teams see meaningful gains, while others struggle or stall. The difference is not the tools. As discussed earlier, technology is the easiest part of the equation, and behavior, the skills and habits needed to use AI effectively, must be deliberately built over time. 

That brings us to the final and often most overlooked factor in the equation: management. 

Impact = Technology × Behavior × Management 

Not management as gatekeeper or approver of work, but management as system designer. The way leaders design routines, standards, feedback loops, and escalation paths largely determines whether AI becomes a force for learning and improvement, or a source of confusion and wasted effort. 

The Fast vs. Slow Paradox 

AI dramatically accelerates thinking. With modern language models, an individual can generate ideas, draft explanations, explore alternatives, and test hypotheses in seconds. I have experienced this firsthand building lean coaching tools over the past year. It used to take me weeks to build something as simple as a basic website; today, that same work can be done in hours, or at most a couple of days. 

More recently, I took LEI’s Lean Lexicon and turned it into a working web application in roughly two days. It includes definitions, examples, images, one-point lessons, and an interactive prompt to help people explore concepts in context. That kind of speed would have been unthinkable not long ago. We will likely show it at the LEI Lean Summit in Houston, and I think people will like it. 

This fast-thinking capability lowers the cost of experimentation and makes it easier to learn by trying. 

At the same time, this speed introduces new risks. AI systems can be confidently wrong. They hallucinate. They produce plausible but flawed reasoning. I have seen a model generate a perfectly structured root-cause analysis that sounds authoritative but misses a critical physical constraint. If you do not catch it, you are moving fast in the wrong direction. 

This tension resembles what psychologist Daniel Kahneman described as two modes of thinking: fast, intuitive System 1 thinking and slow, deliberate System 2 thinking.[i] Both are necessary. Problems arise when one dominates without the other. 

AI behaves the same way. Used well, it accelerates exploration and learning. Used poorly, it creates noise. The management challenge is not choosing between fast and slow, but designing systems that intentionally combine both. The same speed that lets you build in days can just as easily scale a bad assumption. 

Harnesses for AI and Humans 

In practice, no one expects raw AI output to be sufficient on its own. Serious AI deployments rely on harnesses, structures that guide and constrain fast generation so it becomes useful. These include prompting, context engineering, access to reliable data, and feedback loops. Without them, AI produces inconsistent results. With them, the same technology becomes far more dependable. 

I learned this through direct experience. When I first started building lean AI tools, raw model output was maybe 60% useful. Once I added structured prompts, Toyota-based problem-solving frameworks, and feedback mechanisms, the same model consistently jumped to 80-90% and even surpassed that level. The model did not change. The “harness” did. That is both a management insight and a technical one. 
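The shape of such a harness can be sketched in code. This is an illustrative sketch only: `ask_model` is a hypothetical stand-in for any LLM API, and the template is a generic problem-solving frame, not the author's actual Toyota-based framework. What it shows is the structure the article describes: prompt framing, a deliberate check on the output, and a feedback loop when the check fails.

```python
# Minimal sketch of an AI "harness": prompt structure, an output check,
# and a feedback loop. `ask_model` is a hypothetical stand-in for a real
# LLM API call; the template and section names are illustrative only.

PROBLEM_SOLVING_TEMPLATE = (
    "Background: {background}\n"
    "Problem statement: {problem}\n"
    "Respond with sections: CAUSE, EVIDENCE, COUNTERMEASURE."
)

REQUIRED_SECTIONS = ("CAUSE", "EVIDENCE", "COUNTERMEASURE")


def ask_model(prompt: str) -> str:
    # Stand-in for a real model call (hypothetical).
    return "CAUSE: ...\nEVIDENCE: ...\nCOUNTERMEASURE: ..."


def check_output(text: str) -> list[str]:
    # The slow, deliberate step: verify structure before trusting output.
    return [s for s in REQUIRED_SECTIONS if s not in text]


def harnessed_call(background: str, problem: str, max_retries: int = 2) -> str:
    prompt = PROBLEM_SOLVING_TEMPLATE.format(background=background, problem=problem)
    for _ in range(max_retries + 1):
        answer = ask_model(prompt)
        missing = check_output(answer)
        if not missing:
            return answer
        # Feedback loop: tell the model what was missing and try again.
        prompt += f"\nYour last answer omitted: {', '.join(missing)}."
    # Escalation path: repeated failure goes to a human, not into the work.
    raise ValueError("Output failed structural check; escalate to a reviewer.")
```

The point is not the specific checks but their existence: the raw model call is wrapped in structure that the model itself does not provide, which is exactly where the quality jump comes from.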

The same principle applies to human work. People benefit from fast thinking, too, trying ideas quickly, sketching solutions, and exploring alternatives. But unmanaged fast thinking leads to inconsistency, rework, and false confidence. Lean organizations learned long ago that improvement comes from systems that support learning, including problem framing, standard work, PDCA, coaching, and escalation. 

AI gives everyone on your team the ability to think fast. The question is whether your management systems provide the slow, deliberate counterweight that turns speed into learning and sustained results. 

Three Common Management Mistakes 

When organizations struggle with AI, their responses tend to fall into three predictable management mistakes. Each is well intended. Each produces disappointing results for different reasons. 

  • Lock everything down: Require approvals. Route experimentation through IT. Prohibit external tools. This response is driven by legitimate concerns about security, misinformation, and liability. But when it takes weeks to get approval to try something, people stop experimenting. Learning by doing quickly dies at work.

    This often shows up as a blanket ban: no information, especially sensitive information, can leave the four walls of the building. There is truth to the concern. Nuclear secrets, product designs, cost data, personally identifiable information, and health data absolutely require strict controls. But much of the work where AI could add value — such as training, reviewing, problem solving, kaizen, and coaching — does not involve proprietary intellectual property at that level. Sorry, but most company data is not that special. Treating all information as equally sensitive stymies speed and learning and promotes shadow usage as individuals turn to personal tools outside the system. 
  • Mandate AI use without discrimination: In some organizations, particularly in software and technical fields, some leaders have begun requiring AI usage. The message, sometimes explicit and sometimes implied, is that if you are not using AI, you will not be working there. The intent is speed and competitiveness.

    The problem is that AI is genuinely ready for some tasks, such as boilerplate code, exploratory analysis, and draft generation, but not yet reliable for others, especially safety-critical, highly technical, or tightly constrained work. When usage is mandated without clear guidance, problem classification, or quality checks, people either trust AI where it should not be trusted or quietly work around the mandate to get the job done correctly. 
  • Provide tools and remove all structure: At the other extreme, some organizations simply buy Microsoft Copilot, give everyone access, and tell people to experiment. This sounds empowering, but without shared routines and standards, learning remains individual and fragile. One person figures out something useful. Another gets inconsistent results and gives up. A third never knows what is possible. There is no mechanism to capture, standardize, or spread what works. This is lean wallpaper applied to AI. Everyone has access, but AI is not integrated into the work. 

In all three cases, the failure is the same. Management treated AI as a tool decision rather than a system design problem. 

What Good Management Looks Like 

The inverse of those three mistakes is not complicated. It means clarifying which problems are ready for AI and which are not before handing people a tool. It means establishing lightweight routines for experimentation and reflection, not one-time training sessions but repeated PDCA cycles applied to AI use itself. It means treating standards as enablers rather than constraints, for example requiring that a problem-solving report run through a coaching check before submission. It means building feedback loops for both the humans and the AI tools, because prompts need refinement just as skills do. And it means designing escalation paths where bad AI output leads to learning, not blame. 

None of that is revolutionary. It is basic management discipline applied to a new capability. The challenge is that most organizations skip it, treating AI as a procurement decision rather than a system design problem. 

Closing the Loop 

In the first article of this series, I laid out the premise that getting results from AI requires three factors — technology, behavior, and management. I argued that technology is the easy part of the equation. Behavior, the skills and habits that make technology effective, must be deliberately built over time. Management is what ties the two together, designing the systems in which people and tools actually produce results. The equation remains:  

Impact = Technology × Behavior × Management 

If any factor approaches zero, the product collapses. That was true for andon boards in the 1960s, for ERP systems in the 1990s, and it will be true for AI in the 2020s. The pattern repeats because the underlying logic has not changed. Technology creates potential. Behavior converts potential into action. Management sustains action into results. 
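The multiplicative logic can be made concrete with a toy calculation. The 0-to-1 scoring scale here is hypothetical, not from the article; it simply shows why a product, unlike a sum, collapses when any single factor is weak.

```python
# Toy illustration of the impact equation (hypothetical 0.0-1.0 scale):
# because the factors multiply rather than add, one weak factor drags the
# whole result toward zero, no matter how strong the others are.

def impact(technology: float, behavior: float, management: float) -> float:
    """Impact = Technology x Behavior x Management."""
    return technology * behavior * management

balanced = impact(0.9, 0.9, 0.9)   # solid effort on all three factors
lopsided = impact(1.0, 1.0, 0.05)  # best-in-class tools, neglected management
```

Here `balanced` is roughly 0.73 while `lopsided` is 0.05: moderate, even investment across all three factors beats perfect technology paired with absent management.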

What has changed is the clock speed. 

At Toyota, the progression from installing an andon system to developing the full chain of human response — team member to team leader to group leader to maintenance to engineering — unfolded over years. The technology evolved incrementally. Leaders had time to observe, coach, and adjust. 

AI compresses that timeline. The technology is improving month to month rather than decade to decade. People are expected to adopt new tools while the tools themselves are still changing. The gap between what AI can do and what organizations know how to do with it is widening faster than most management systems can close it. 

It is worth noting that Toyota and Denso are not standing still. In 2025, five Toyota Group companies launched the Toyota Software Academy to develop AI and software skills across organizations, with roughly 100 training courses. Toyota simultaneously launched its Global AI Accelerator (GAIA) to expand AI research, development, and implementation across 11 categories ranging from manufacturing and vehicle engineering to knowledge retention and office productivity. Toyota explicitly rooted GAIA in its longstanding practice of jidoka: automation with a human touch.[ii] Denso, for its part, partnered with the University of Tokyo on a program to enhance lean manufacturing with AI, specifically targeting the transfer of tacit knowledge from experienced engineers to newer workers.[iii] 

Having spent years working for Toyota, I can infer from pattern and from what these companies have reported publicly. The structure of these initiatives suggests a management approach that mirrors the fast-and-slow dynamic described in this article: rapid, controlled experimentation at the local level, where engineers and team members try AI tools on real problems, and slower, more deliberate decisions at the management level about how to standardize, scale, and integrate what works. That combination, fast learning within a disciplined management structure, is not new for Toyota. It is how they have always absorbed new technology. AI just raises the clock speed. 

That is the new challenge. Not a new formula, but a faster one. The same three variables, multiplying together, but with less room for the slow institutional learning that past technologies allowed. 

I do not think the answer is simply to speed up management to match AI. Hasty management systems are fragile ones. The answer is what Toyota figured out decades ago with production: build stable principles that can absorb rapid change. Standardized work does not resist variation; it absorbs it and creates the next new standard. PDCA does not slow you down; it keeps speed from becoming reckless. Coaching does not compete with tools; it teaches people how to use them. 

The organizations that will benefit most from AI are not the ones moving fastest. They are the ones whose management systems can learn at the speed this technology demands, without losing the discipline that makes learning stick. 

We are early in this experiment. Most organizations are still at the stage of buying the technology and hoping for results. The ones that pull ahead will be those that invest equally in the behaviors and management systems that turn capability into performance. That has always been the difference between organizations that sustain improvement and those that do not. AI does not change that. It just raises the stakes. 

Technology is powerful. Behavior takes time. Management makes it last. 

Humans + AI > Problems. But only if all three parts of the impact equation are in play. 

I will be hosting a half-day workshop at the LEI Lean Summit in Houston, March 12-13. The general topic is Problem Solving and AI: how you can harness AI to accelerate your problem solving without letting it do the thinking for you. I'll cover the "Five Levels of AI Framework" and give some examples you can try out using different tools at each of the five levels. We'll start with some basic prompt-engineering advice and show some more advanced features you can build on your own. For the workshop you will need to bring a laptop with some type of large language model (LLM) access and a real problem to work on. Hope to see you there! 
