The New Seven QC Tools and Project Management Methods: From Chaotic Thinking to Efficient Project Execution

  

The new seven QC tools: A thinking management tool from chaos to clarity

  

Preface: Why do we need the new QC techniques?

  In 1972, Professor Yoshinobu Nayatani of the Japanese Union of Scientists and Engineers (JUSE) stood at a crossroads in the transformation of Total Quality Management (TQM) and identified a key pain point: the existing seven QC tools were excellent at verifying results with numerical data, but offered little help in organizing the qualitative, verbal information that dominates planning and problem definition.

  Therefore, Nayatani integrated thinking methods from ergonomics and systems science and distilled them into the new seven QC tools. The core breakthrough is the shift from "data-driven result verification" to "thinking-driven process organization": the old tools use data to prove what is right, while the new tools use structured thinking to clarify why it is right and how to do it right.

  

Core positioning of the new seven QC tools: The "four-step engine" of the thinking process

  The essence of the new tools is a "thinking management toolkit" that addresses four problems on the path from chaos to implementation.

  1. Clarify the problem: Use the affinity diagram (from scattered to classified) and the relationship diagram (from classification to causality) to transform vague complaints into specific problems.

  2. Generate solutions: Use system diagrams (hierarchical expansion of objectives and means) and matrix diagrams (matching multi-dimensional factors) to transform abstract goals into executable countermeasures.

  3. Control the progress: Use PDPC (Dynamic Risk Anticipation) and Arrow Diagram (Critical Path Management) to transform the static plan into dynamic execution capable of dealing with risks.

  4. Mine the key factors: Use the matrix data analysis method (statistical factor analysis) to identify the most important associations among many factors.

  

The core value of the new seven QC tools: The leap from "experience" to "system"

  The new methods earn their place as the "thinking engine" of TQM because they resolve four dilemmas that enterprises commonly face:

  

1. From chaos to clarity: Quickly locate the problem

  For example, in the early stage of software development, the team may raise 10 scattered issues: "Requirements change frequently", "Interface definitions are chaotic", "The test environment is unstable", "Developers don't understand the business", etc. Use an affinity diagram to group similar issues into three categories: "Requirement management", "Document management", and "Resource allocation". Then, use a relationship diagram to identify the causal chain of "Frequent requirement changes → Chaotic interface definitions → Incomplete test coverage → Many bugs". Instantly, the issues change from "vague" to "specific".

  

2. From ideas to solutions: Efficiently generate countermeasures

  For example, to solve the problem of "numerous software bugs", use the system diagram to break the goal of "reducing bugs" into "optimizing requirement review", "strengthening unit testing", and "improving bug tracking". Then break each measure into sub-steps (e.g., "optimizing requirement review" → "adding customer representatives", "formulating review checklists"). Score them against criteria such as achievability and effectiveness, and select the two highest-scoring solutions, "requirement review optimization" (10 points) and "unit testing strengthening" (9 points), to avoid spreading resources thinly.

  

3. From comprehensive to focused: Focus on core actions

  The new methods reject trying to tackle everything at once. For example, after evaluating with a system diagram, the team finds that "optimizing requirement review" has the highest input-output ratio (investing 20% of the resources removes 30% of the bugs), so 80% of the effort goes into that measure instead of being spread across everything.

  

4. From individuals to teams: Activate full - staff participation

  Every step of the new methods draws on the whole team: the affinity diagram collects everyone's opinions, the relationship diagram maps cause and effect together, and the system diagram is used to evaluate the means jointly. In one team, an employee's suggestion that "the requirement change process is cumbersome" was classified under "process management"; during implementation, that employee participated actively in the optimization precisely because it was their own idea.

  

Tool by tool: The core logic and application of the new seven QC tools

  

I. Affinity Diagram Method: The Thinking Storage Box for Transforming Scattered Ideas into Clear Categories

  The affinity diagram method, also known as the KJ method, was proposed by the Japanese anthropologist Kawakita Jiro. While studying indigenous cultures, he discovered that his method of finding patterns in scattered field notes could be applied to business management. Its core is the "affinity induction of language materials": collect "non-data language materials" such as the team's experiences and ideas, and through the process of "writing cards → explaining → categorizing", gather scattered points into logical categories.

  

Application scenarios and examples

  In the early stage of a software project, the team raised 10 questions: "The requirement document is often revised", "The interface definition is in a mess", "The test environment is unstable", "Developers don't understand the business", "The bug feedback is slow", etc. Use the affinity diagram to handle them:

  1. Collect ideas: Everyone writes down the problems on sticky notes and pastes them on the whiteboard.

  2. Explain ideas: Take turns to explain the content of the sticky notes (for example, "The requirements document is often revised" means "The client puts forward new requirements every week, and the document is not updated synchronously").

  3. Classification and Affinity: Group similar sticky notes into 4 categories - "Requirement Management" (frequent changes in requirement documents, unclear customer requirements), "Document Management" (chaotic interface definitions, slow document updates), "Process Management" (unstable test environment, chaotic go-live processes), and "Personnel and Technology" (developers not understanding the business, incomplete unit test coverage).

  In the end, the problem changed from "ten scattered points" to "four core areas", and the team no longer had to "rush around blindly".
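  The classification step above can be sketched in a few lines of Python. The issue-to-category assignments below are the hypothetical ones the team agreed on during discussion; the code only performs the mechanical grouping:

```python
from collections import defaultdict

# Hypothetical issue-to-category assignments from the team's discussion
assignments = {
    "Requirement document often revised": "Requirement Management",
    "Unclear customer requirements": "Requirement Management",
    "Chaotic interface definitions": "Document Management",
    "Slow document updates": "Document Management",
    "Unstable test environment": "Process Management",
    "Chaotic go-live process": "Process Management",
    "Developers don't understand the business": "Personnel and Technology",
    "Incomplete unit test coverage": "Personnel and Technology",
}

def affinity_groups(assignments):
    """Group scattered issues into the categories the team agreed on."""
    groups = defaultdict(list)
    for issue, category in assignments.items():
        groups[category].append(issue)
    return dict(groups)

groups = affinity_groups(assignments)
for category, issues in groups.items():
    print(f"{category}: {issues}")
```

  Ten scattered sticky notes collapse into four named categories, which is exactly the "from scattered to classified" move the affinity diagram makes.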

  

II. Relationship diagram method: A networked logic mirror from classification to causality

  The core of the relationship diagram method is "visualization of multi-factor causal relationships" - unlike the fishbone diagram (linear causality), it represents network causality and is suitable for dealing with complex problems where "factors influence each other" (e.g., "more requirement changes → coding errors → incomplete test coverage → more bugs", and "incomplete test coverage" in turn "covers up requirement errors").

  

Application scenarios and examples

  A certain team needs to solve the problem of "high failure rate of software launches" and lists six factors: "frequent requirement changes", "non-standard coding", "incomplete test coverage", "slow bug tracking", "ineffective review", and "lack of development experience". Use the relationship diagram to handle it:

  1. List factors: Write down all possible reasons.

  2. Draw causal arrows: "More requirement changes → Non-standard coding", "Non-standard coding → Incomplete test coverage", "Incomplete test coverage → Failed launch", "Ineffective review → More requirement changes";

  3. Find the key point: Mark "more frequent requirement changes" with double circles — it is the starting point of multiple causal chains and the root cause.

  Through the relationship diagram, the team instantly understood that "to solve the problem of failed launches, the issue of requirement changes must be solved first".
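  Once the arrows are listed, a mechanical heuristic can help spot chain starting points: look for factors with outgoing arrows but no incoming ones. A minimal sketch with the example's arrows (note that including the "ineffective review → requirement changes" arrow makes "ineffective review" the furthest-upstream driver, one step behind the double-circled factor):

```python
# Causal arrows from the team's relationship diagram (cause -> effect)
edges = [
    ("Frequent requirement changes", "Non-standard coding"),
    ("Non-standard coding", "Incomplete test coverage"),
    ("Incomplete test coverage", "Failed launch"),
    ("Ineffective review", "Frequent requirement changes"),
]

def root_causes(edges):
    """Factors with outgoing arrows but no incoming ones: chain starting points."""
    sources = {cause for cause, _ in edges}
    targets = {effect for _, effect in edges}
    return sorted(sources - targets)

print(root_causes(edges))  # ['Ineffective review']
```

  In practice the team also weighs how many causal chains pass through a factor, which is why "frequent requirement changes" earns the double circle; the heuristic is a starting point for discussion, not a verdict.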

  

III. System Diagram Method: A "Countermeasure Generator" for Hierarchical Expansion of Objectives and Means

  The core of the system diagram method is the "tree-like expansion of purpose and means": take the goal to be achieved (e.g., "reduce bugs") as the root node, break it down into first-level means (e.g., "optimize requirement review"), then into second-level means (e.g., "add customer representatives", "develop a review checklist"), forming a "goal → means → sub-means" hierarchy, and finally select the best options against evaluation criteria.

  

Application scenarios and examples

  A certain team needs to solve the problem of "high failure rate of software going live", with the goal of "reducing the failure rate from 15% to 5%".

  1. Set the goal: Clearly define "reduce the go-live failure rate".

  2. Break down first-level means: "Optimize the go-live process", "Strengthen pre-launch verification", "Improve the rollback plan";

  3. Break means into tasks: Optimize the go-live process → Develop a go-live checklist, clarify roles and responsibilities, conduct a pre-launch rehearsal.

  4. Evaluate and select the best: Score using "Achievability (◎ = 3 points), Effectiveness (◎ = 3 points), Measurability of effects (○ = 2 points)":

  - "Develop a go-live checklist": 3 + 3 + 2 = 8 points;

  - "Strengthen pre - launch verification": 2 + 3 + 3 = 8 points;

  - "Improve the rollback plan": 1 + 3 + 2 = 6 points.

  Finally, "Develop a go-live checklist" and "Strengthen pre-launch verification", each scoring ≥ 7 points, were selected as the core measures.
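  The evaluate-and-select step is mechanical enough to express in code. A minimal sketch using the scores from the example (double circle = 3, circle = 2) and the ≥ 7-point cutoff:

```python
# Evaluation scores per measure: (achievability, effectiveness, measurability),
# mapped from the symbols in the text (double circle = 3, circle = 2)
scores = {
    "Develop a go-live checklist": (3, 3, 2),
    "Strengthen pre-launch verification": (2, 3, 3),
    "Improve the rollback plan": (1, 3, 2),
}

def select_measures(scores, threshold=7):
    """Total each measure's scores and keep those meeting the threshold."""
    totals = {measure: sum(s) for measure, s in scores.items()}
    return {measure: total for measure, total in totals.items() if total >= threshold}

selected = select_measures(scores)
print(selected)  # the two 8-point measures survive; the 6-point one is dropped
```

  The threshold itself is a team decision; the code only makes the cutoff explicit and repeatable.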

  

IV. Arrow Diagram Method: The Progress Navigator for Critical Path Management

  The arrow diagram method, also known as PERT/CPM (Program Evaluation and Review Technique / Critical Path Method), centers on task dependencies and the critical path: decompose the project into tasks, use arrows to represent the dependencies (e.g., "system design can only start after requirement analysis is complete"), calculate the "critical path" (the path with the longest total time), and identify the core tasks that drive the schedule.

  

Core differences from the Gantt chart

  A Gantt chart is a task list on a timeline: suitable for simple projects, but it cannot show dependencies. An arrow diagram can:

  Present dependencies: Avoid logical errors such as "doing system design before requirement analysis".

  Find the critical path: For example, if a project has three paths and the path Requirement analysis → System design → Coding → Integration testing → Go-live takes the longest (28 days), that path is the critical path.

  Optimize the schedule: Compress the critical path (for example, reduce "coding" from 10 days to 8 days), shortening the total duration to 26 days.

  

Application examples

  Arrow diagram of a software project:

  - Critical path: Task A (Requirement analysis, 5 days) → Task B (System design, 7 days) → Task C (Coding, 10 days) → Task E (Integration testing, 5 days) → Task F (Go live, 1 day) (Total time: 28 days);

  - Non-critical path: Task A → Task B → Task D (Unit testing, 3 days) → Task E → Task F (Total time: 21 days).

  Management priorities:

  Monitor the critical path: Tasks A, B, C, E, and F must be completed on time; otherwise, the total time will be delayed.

  Optimize key tasks: Add 2 developers to Task C (coding) and reduce the time from 10 days to 8 days.

  Parallel non-critical tasks: Task D (unit testing) can be carried out in parallel with "document writing" without affecting the progress.
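  The forward pass that finds the critical path can be sketched directly from the task table above. The durations and dependencies are the example's; the fixed iteration order stands in for a general topological sort:

```python
# Task durations (days) and predecessor lists from the example project
durations = {"A": 5, "B": 7, "C": 10, "D": 3, "E": 5, "F": 1}
depends_on = {"A": [], "B": ["A"], "C": ["B"], "D": ["B"],
              "E": ["C", "D"], "F": ["E"]}

def critical_path(durations, depends_on):
    """Longest-path computation over the dependency graph (forward pass)."""
    finish, via = {}, {}
    for task in ("A", "B", "C", "D", "E", "F"):  # topological order
        preds = depends_on[task]
        start = max((finish[p] for p in preds), default=0)
        # Remember which predecessor finishes last: it lies on the longest path
        via[task] = max(preds, key=lambda p: finish[p], default=None)
        finish[task] = start + durations[task]
    # Walk back from the final task along the latest-finishing predecessors
    path, task = [], "F"
    while task is not None:
        path.append(task)
        task = via[task]
    return list(reversed(path)), finish["F"]

path, total = critical_path(durations, depends_on)
print(path, total)  # ['A', 'B', 'C', 'E', 'F'] 28
```

  Shortening C to 8 days and rerunning gives 26 days, matching the optimization described earlier; any task not on the returned path (here, D) carries float.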

  

V. Matrix Diagram Method: The Key Combination Finder for Multi - dimensional Factor Matching

  The core of the matrix diagram method is "cross - matching of multi - dimensional factors": list the factors of two or more dimensions (such as "quality characteristics" and "technical measures", "customer needs" and "product functions") in a matrix, and find out the "strongly associated combinations".

  For example, if a mobile phone manufacturer wants to improve "reliability", Dimension 1 is "quality characteristics" (reliability, usability, security) and Dimension 2 is "technical measures" (redundant design, user testing, encryption algorithms). The matrix diagram shows:

  - "Reliability" is strongly associated with "redundant design".

  - "Security" is strongly correlated with "encryption algorithms".

  So, resources were invested in "redundant design" and "encryption algorithms", resulting in a 20% increase in reliability and a 25% increase in security.
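  A matrix diagram is essentially a sparse table of (row, column) → strength entries, so extracting the strong combinations is a short filter. A minimal sketch with hypothetical symbol assignments echoing the example:

```python
# Matrix diagram entries: (quality characteristic, technical measure) -> strength.
# Symbol mapping is hypothetical: "strong" for a double circle, "medium" for a circle.
matrix = {
    ("Reliability", "Redundant design"): "strong",
    ("Reliability", "User testing"): "medium",
    ("Security", "Encryption algorithms"): "strong",
    ("Usability", "User testing"): "strong",
}

def strong_pairs(matrix):
    """Pick out the strongly associated combinations worth investing in."""
    return sorted(pair for pair, strength in matrix.items() if strength == "strong")

for characteristic, measure in strong_pairs(matrix):
    print(f"{characteristic} <-> {measure}")
```

  The value of the matrix form is that every cross-combination gets an explicit judgment, so weakly related pairs are consciously ruled out rather than silently ignored.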

  

VI. PDPC Method: The Plan Insurance Policy for Dynamic Risk Pre-judgment

  The core of the PDPC method (Process Decision Program Chart) is "dynamic risk pre-judgment": when formulating a plan, anticipate the risks of each step (such as "server downtime during launch" and "data migration failure") and formulate "if → then" countermeasures in advance, so that the plan can cope with risks.

  For example, the launch process of a certain software:

  1. Step 1: Back up data → Risk "Backup failure" → Countermeasure "Back up using two tools and verify integrity";

  2. Step 2: Stop the old system → Risk "The old system cannot be stopped" → Countermeasure "Notify the customer in advance and extend the downtime";

  3. Step 3: Migrate data → Risk "Data inconsistency" → Countermeasure "Compare the old and new data after migration. If they are inconsistent, roll back."

  Even if risks occur, they can be quickly addressed to avoid a "failed launch."
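  The "if → then" structure of a PDPC maps naturally onto one branch per step. A minimal sketch with the steps, risks, and countermeasures from the example, plus a hypothetical `execute` helper that simulates a run in which one anticipated risk materializes:

```python
# PDPC sketch: each step carries an anticipated risk and its countermeasure,
# so execution can branch instead of stalling when the risk occurs.
plan = [
    {"step": "Back up data",
     "risk": "Backup failure",
     "countermeasure": "Back up with two tools and verify integrity"},
    {"step": "Stop the old system",
     "risk": "Old system cannot be stopped",
     "countermeasure": "Notify the customer and extend the downtime window"},
    {"step": "Migrate data",
     "risk": "Data inconsistency",
     "countermeasure": "Compare old and new data; roll back on mismatch"},
]

def execute(plan, failures):
    """Run each step; on an anticipated failure, apply its countermeasure."""
    log = []
    for item in plan:
        if item["step"] in failures:
            log.append(f"{item['step']}: RISK '{item['risk']}' -> {item['countermeasure']}")
        else:
            log.append(f"{item['step']}: OK")
    return log

for line in execute(plan, failures={"Migrate data"}):
    print(line)
```

  The point of the structure is that the branch for each risk is decided before launch day, while heads are still cool, not improvised during the incident.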

  

VII. Matrix Data Analysis Method: Statistically Driven Key Factor Mining

  The core of the matrix data analysis method is "to process quantitative data using statistical methods": When there is quantitative data for the factors in the matrix (such as "quality characteristic scores" and "investment in technical measures"), factor analysis and principal component analysis are used to identify the "most core associations".

  For example, an enterprise collects data on "quality characteristic scores" (reliability: 8 points, ease of use: 7 points, security: 9 points) and "technical measure investment" (redundant design: 100,000 yuan, encryption algorithm: 120,000 yuan). Factor analysis shows that:

  - The correlation coefficient between "security" and "investment in encryption algorithms" is 0.9 (strong correlation);

  - The correlation coefficient between "reliability" and "redundancy design investment" is 0.85 (strong correlation).

  Therefore, "encryption algorithm" and "redundancy design" were taken as key areas for investment. The investment increased by 15%, and the quality score improved by 20%.
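  A correlation coefficient needs repeated observations, not the single figures quoted above. The sketch below therefore assumes hypothetical monthly data for one investment-characteristic pair and computes the Pearson coefficient with NumPy:

```python
import numpy as np

# Hypothetical monthly observations: encryption-algorithm investment
# (10k yuan) versus the security score measured in the same month.
encryption_investment = np.array([8, 9, 10, 11, 12, 13], dtype=float)
security_score = np.array([7.0, 7.4, 7.9, 8.3, 8.8, 9.0])

# Pearson correlation coefficient between the two series
r = np.corrcoef(encryption_investment, security_score)[0, 1]
print(f"correlation: {r:.2f}")
```

  A coefficient near 1 supports treating that measure as a key investment area, as in the example; with real data, the other characteristic-measure pairs would be computed the same way and compared.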

  

The essence of the new seven QC tools - "systematic management" of thinking

  The new seven QC tools are not to "replace the old ones" but to "complement them".

  - The old method is a tool for verifying results (using data to prove whether it has been done well).

  - The new approach is a thinking tool for the process (using systems to clarify what to do and why to do it).

  From "problem clarification" to "solution generation", from "progress control" to "risk prediction", the new approach covers the entire thinking process "from idea to result", enabling enterprises to shift from "experience-driven" to "system-driven" and from "ambiguous decision-making" to "logical decision-making".

  For example, after one enterprise adopted the new methods, problem diagnosis time fell by 50%, countermeasure effectiveness rose by 40%, and the project cycle shortened by 20%. This is the power of thinking tools: they turn "can't think it through" into clarity, and "struggling to execute" into efficient execution.

  

I. Critical Path: The Master Switch of Project Duration

  The critical path is the longest path of operations in the project network plan. It is not the "most important link" but the "core chain that determines the total project duration". For example, in a construction project, the duration of the path "foundation excavation → main body pouring → roof capping → external wall insulation" (assume 180 days) is the total project duration; if "main body pouring" is delayed by 5 days, delivery of the entire project is postponed by 5 days. In contrast, a non-critical path (such as "indoor hydropower installation") has "free float": even if it starts 3 days late, the total project duration is unaffected.

  The operations on the critical path are called "critical operations", which are essentially "rigid tasks" without time buffer. For example, the "core equipment debugging" on the factory production line must be completed on the day when the raw materials arrive; otherwise, all subsequent processes will be blocked. Another example is the "stage construction" for a concert. If it is not completed before the rehearsal, the opening ceremony process will completely collapse.

  For managers, the critical path is the "anchor point for grasping key points": To shorten the total project duration, one must never waste resources on non - critical activities (such as "office decoration"), but rather focus on the "weak points" of the critical path. For example, in a construction project, if "main body pouring" is the bottleneck, increase the number of formworks or extend the working hours to directly compress the duration of the critical path.

  

II. Arrow Diagram: The Visual Skeleton of Project Planning

  The arrow diagram (also known as the "arrow graph method") is a tool that uses arrow lines and nodes to present the dependency relationships between operations. Its core value is to "transform an abstract plan into a traceable link". The specific functions can be broken down into four steps:

  

1. Force the plan into fine granularity

  The arrow diagram requires clarifying the preconditions and outputs of each operation. For example, when preparing an exhibition, "confirming exhibitors" must be completed before "designing the booth layout", and "setting up the booth" must precede "moving in the exhibits". Sorting out such strong dependencies transforms a vague "preparation plan" into concrete steps like "confirm exhibitors on Days 1-5 → design the layout on Days 6-10 → set up the booth on Days 11-15", closing the "taken-for-granted" loopholes.

  

2. The "vulnerability detector" in the planning stage

  When refining a plan, the arrow diagram can expose logical contradictions. For example, a project plan states that "equipment commissioning starts on Day 10", but the preceding task "equipment arrival" will be completed on Day 12. The arrow diagram will directly show a "logical error", forcing the team to adjust the sequence (either advance the equipment arrival or postpone the commissioning).
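  This kind of contradiction check is easy to automate once dates and dependencies are explicit. A minimal sketch using the example's figures (commissioning planned for day 10, equipment arriving on day 12):

```python
# Scheduled project-day numbers and one dependency, as in the example:
# commissioning is planned for day 10 but the equipment only arrives on day 12.
schedule = {
    "Equipment arrival": {"finish": 12},
    "Equipment commissioning": {"start": 10},
}
dependencies = [("Equipment arrival", "Equipment commissioning")]

def logic_errors(schedule, dependencies):
    """Flag successors scheduled to start before a predecessor finishes."""
    errors = []
    for pred, succ in dependencies:
        if schedule[succ]["start"] < schedule[pred]["finish"]:
            errors.append(f"'{succ}' starts on day {schedule[succ]['start']} "
                          f"but '{pred}' finishes on day {schedule[pred]['finish']}")
    return errors

for err in logic_errors(schedule, dependencies):
    print("logic error:", err)
```

  The fix is then a human decision, advance the arrival or postpone the commissioning, but the contradiction surfaces in the planning stage instead of on site.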

  

3. The "compass for adaptability" in the implementation stage

  There are always surprises during project implementation. For example, an exhibition's "exhibit transportation" is delayed by two days. The arrow diagram can quickly determine whether "exhibit transportation" is on the critical path: if it is, activate the alternative logistics; if not (for example, "promotional material design" has a three-day buffer), adjust the time for "promotional material printing" without affecting the overall progress.

  

4. The Delay Locator for Big Plans

  The larger the plan, the more obvious the arrow diagram's value. For example, preparation for the Olympic Games involves thousands of steps. The arrow diagram can quickly show that a one-day delay in the "opening ceremony rehearsal" directly affects the "opening ceremony process on the day", while a two-day delay in "volunteer training" has no impact. Managers can immediately allocate resources to the critical delays.

  

III. PDPC Method: The "Dynamic Decision-Making Technique" for Coping with Emergencies

  The core of the PDPC method (Process Decision Program Chart method) is to "anticipate all risks in advance and prepare multiple sets of plans" – not to "stick to the original plan to the end", but to follow the cycle of "plan → anticipate obstacles → alternative plans → execute → revise". For example, in a software development project, the original plan is to use the React framework. However, anticipating compatibility issues with old browsers, an alternative plan using the Vue framework is prepared in advance. When problems arise, a direct switch can be made without delaying the progress.

  

Implementation steps of the PDPC method

  1. Cross-departmental brainstorming: Convene representatives from development, testing, product, and customers to discuss the topic (e.g., "solve the problem of slow user login") so that risks are covered from all perspectives.

  2. Diverge candidate measures: Encourage unconventional ideas, such as "optimizing database queries", "adding cache servers", and "simplifying the login process".

  3. Anticipate obstacles and responses: For each measure, ask "What if it doesn't work?" For example, if "the system is still slow after optimizing the database", then start "adding a cache server"; if "the cost of the cache server is high", then switch to "simplifying the login process".

  4. Classify and rank measures: Sort by "urgency + implementation difficulty": prioritize "optimize the database" (urgent and easy), followed by "add cache" (important but difficult), and finally "simplify the process" (as a fallback).

  5. Assign responsibilities and deadlines: Clearly define who will do it and when it will be done. For example, Zhang San is responsible for "optimizing the database" and should complete it within 3 days; Li Si is responsible for "adding cache" and should finish preparations within 5 days.

  6. Revise the diagram dynamically: Review weekly. If the "optimize the database" solution resolves the problem, suspend the "add cache" measure; if users raise a new requirement for "SMS login", add an alternative plan for "SMS interface debugging".
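  Step 4's ranking rule (urgency first, then implementation difficulty) can be expressed as a sort key. A minimal sketch with hypothetical 1-3 ratings for the three measures from the example:

```python
# Hypothetical ratings on a 1-3 scale. Rank by higher urgency first;
# among equally urgent measures, the easier one (lower difficulty) first.
measures = [
    {"name": "Optimize the database", "urgency": 3, "difficulty": 1},
    {"name": "Add a cache server", "urgency": 2, "difficulty": 3},
    {"name": "Simplify the login process", "urgency": 1, "difficulty": 2},
]

ordered = sorted(measures, key=lambda m: (-m["urgency"], m["difficulty"]))
for rank, m in enumerate(ordered, start=1):
    print(rank, m["name"])
```

  Writing the rule down as a key function forces the team to agree on how the two criteria trade off, which is the real point of the classification step.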

  

Characteristics of the PDPC method

  Global perspective: Be able to see all possibilities of the "main line + side lines" and avoid "going all the way to the end on one single path".

  Time tracking: Arrange measures in chronological order to clearly understand "where we are now".

  Dynamic adjustment: Add new obstacles (such as policy changes and supplier delays) at any time to keep the plan effective.

  

IV. Matrix Diagram Method: The "Relationship Map" for Multi - factor Association

  The essence of the matrix diagram is to present the relationship between problems and factors in a two-dimensional table. By marking the intersections with symbols (◎ high correlation, ○ medium correlation, X no correlation), key points can be located quickly. The most commonly used L-shaped matrix diagram lists "quality problems" (such as "surface scratches", "dimensional deviation", "functional failure") on the left and "possible causes" (such as "low raw-material hardness", "high machine pressure", "improper operation") on the top. Once the intersections are marked, it is immediately visible that "dimensional deviation" has a ◎ correlation with "high machine pressure" and "surface scratches" has a ◎ correlation with "low raw-material hardness": these are the core causes to solve first.

  

The core value of the matrix diagram

  Break the "single causality" thinking: It's not "one problem corresponding to one cause", but "multiple problems corresponding to multiple causes". For example, "functional failure" may be related to "poor component soldering" and "printed circuit board design defects". A matrix diagram can list all these relationships clearly.

  Sort out the "phenomenon → problem → cause" chain: For example, "users complain that it is not easy to use" (phenomenon) → "functional failure" (problem) → "poor soldering" (cause). The matrix diagram visualizes the three-layer logic and avoids treating symptoms rather than root causes.

  

V. Matrix data analysis method: The data microscope for quantifying associations

  The matrix data analysis method replaces the "symbols" in the matrix diagram with "data" and condenses complex variables into "composite factors" through statistical methods (such as principal component analysis) to locate problems more accurately. For example, when producing mobile phone chargers, instead of using "◎" to represent the relationship between "slow charging" and "charging head power", use "correlation coefficient 0.9" (highly positively correlated), and the conclusion is more objective.

  

Core method: Principal component analysis

  Condense multiple variables (such as charger power, circuit-board resistance, and connector insertion cycles) into a few composite variables (such as "charging efficiency" and "durability"). For example, when analyzing a mobile phone's battery life, the variables include "battery capacity", "screen brightness", "CPU power consumption", and "5G usage time". Principal component analysis can extract two core factors, "hardware power consumption" (battery + CPU) and "usage habits" (screen + 5G), which point directly to the optimization direction.
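  Principal component analysis itself is a short computation: eigen-decompose the correlation matrix of the variables and keep the components with the largest eigenvalues. A minimal sketch with hypothetical observations for the battery-life example, using NumPy only (a library such as scikit-learn would normally be used in practice):

```python
import numpy as np

# Hypothetical observations for 6 phone models: columns are battery
# capacity (mAh), CPU power draw (W), screen brightness (nits), 5G hours.
X = np.array([
    [4000, 5.0, 300, 2.0],
    [4500, 4.5, 350, 1.5],
    [3800, 6.0, 400, 3.0],
    [5000, 4.0, 320, 1.0],
    [4200, 5.5, 380, 2.5],
    [4700, 4.2, 310, 1.2],
])

def principal_components(X, k=2):
    """PCA via eigen-decomposition of the correlation matrix."""
    corr = np.corrcoef(X, rowvar=False)       # 4x4 correlation matrix
    eigvals, eigvecs = np.linalg.eigh(corr)   # eigenvalues in ascending order
    order = np.argsort(eigvals)[::-1][:k]     # indices of the top-k components
    explained = eigvals[order] / eigvals.sum()  # share of total variance
    return eigvecs[:, order], explained

components, explained = principal_components(X)
print("variance explained by the top 2 components:", explained.round(2))
```

  Interpreting the components (e.g., one loading mainly on battery and CPU as "hardware power consumption", the other on screen and 5G as "usage habits") remains a human step; the decomposition only supplies the loadings.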

  

Typical applications

  Complex process analysis: There are dozens of parameters (temperature, pressure, time) in chip manufacturing. Principal component analysis is used to find that "temperature fluctuation" is the key factor affecting the yield rate (correlation coefficient: 0.85).

  Market positioning: Users' demands for mobile phones (price, performance, appearance, battery life). Through principal component analysis, these demands can be divided into "cost - performance ratio" (price + performance) and "user experience" (appearance + battery life), which helps enterprises position "high - cost - performance models" or "high - end experience models".

  Classification of sensory characteristics: The "sweetness", "acidity" and "aroma" of food are classified into "sweet - sour balanced type" and "aroma - dominant type" through principal component analysis to assist product classification.

  

Usage scenarios

  The matrix data analysis method is a tool for complex problems and is rarely used in ordinary projects. For example, for defect analysis on a small production line, a matrix diagram is sufficient. However, in scenarios that require quantification, such as high - end manufacturing and market research, it can play the role of a data microscope.

  

VI. Application Examples of Matrix Data Analysis Method

  Taking a "user needs survey" as an example, mark the associations between collection methods and usage purposes with symbols (● commonly used, ◎ used, ○ rarely used, X not used):

  Collection method \ Purpose              Understanding  Summarizing  Breaking the  Participating  Implementing
                                           things         ideas        routine       in plans       policies

  Factual data (sales data)                    ●              ◎            ●             ○              ○
  Opinion materials (user comments)            X              ●            ◎             ●              ●
  Hypothetical materials (brainstorming)       X              ●            ●             ○              ◎

  For example, when it comes to "understanding things" (such as "which region has high sales volume"), using factual data (sales data) is the most accurate; when it comes to "summarizing ideas" (such as "users care most about battery life"), using opinion materials (user reviews) is the most effective; when it comes to "breaking the routine" (such as "launching a foldable screen"), using hypothetical materials (brainstorming) can most effectively stimulate divergence.