I. Application boundary of control charts: The minimalist choice anchored in "value orientation"
In the factory scenario, there is a constant tension between "multiple parts, multiple processes, and multiple quality characteristics" and "the continuous manpower needed to maintain control charts". Every control chart that is established requires regular data collection, fluctuation analysis, and verification of effects, and this cost cannot be ignored. Therefore, the application of control charts must return to two core premises:
First, economic necessity: focus only on the quality characteristics that directly affect customer value or bottom-line cost. For example, the thickness of an automobile brake disc (which affects braking performance and is directly related to customer safety and complaints), or the defective soldering rate of electronic chips (which can lead to the scrapping of finished products and increase material and rework costs). If such characteristics fluctuate too much, they directly cause high losses and are worth monitoring with control charts.
Second, practical feasibility: cover only the links where process fluctuations can actually be controlled. For example, the mold temperature in injection molding (which directly affects the dimensional stability of products) and the screw torque on the assembly line (which affects the tightness of products): fluctuations in such processes can be reduced by adjusting parameters, so control charts can play a role. By contrast, indicators such as "workshop floor cleanliness" or "employee attendance rate", which are hard to quantify or have no substantial impact on the product, do not need control charts.
The more crucial understanding is that the control chart is a "point tool" in the quality tool chain, rather than a "comprehensive solution." Its function is to "monitor process stability," not to "solve all quality problems." Trying to use the control chart to cover all processes will only disperse energy and increase useless costs.
II. The core value of the percentage control chart: Psychological guidance far surpasses statistical calculation
The uniqueness of percentage control charts (such as defective rate, qualification rate, and defect rate) lies in the fact that the data is intuitive and easy to understand - employees can directly associate the "control limits" with "their own work results" (for example, "the defective rate is less than 2% today" = "my operation meets the standard"). This correlation makes its value lean more towards "psychological guidance" than "technical analysis".
When management selects the control limits, they often do not rely solely on the "3σ theoretical limit" from statistical formulas, but instead on an "achievable goal" verified by experience. For example, if the defective rate of a process has been stable between 1.2% and 1.8% over the past three months, management will set the upper control limit at 1.8% (instead of the statistically calculated 2.5%). The reason is simple: when employees see a goal that "has been achieved before", they gain the confidence that "I can achieve it too", and then actively adjust their operations (inspecting raw materials more carefully, standardizing processes).
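For reference, the textbook 3σ limits for a percentage (p) chart can be computed directly. A minimal Python sketch (the function name and all numbers are illustrative, not from the source) shows why a statistical limit can come out much looser than an experience-based one:

```python
import math

def p_chart_limits(p_bar: float, n: int) -> tuple[float, float]:
    """3-sigma control limits for a p-chart (fraction defective).

    p_bar: long-run average defective rate; n: subgroup size.
    """
    sigma = math.sqrt(p_bar * (1 - p_bar) / n)
    lcl = max(0.0, p_bar - 3 * sigma)  # a proportion cannot go below 0
    ucl = min(1.0, p_bar + 3 * sigma)
    return lcl, ucl

# With p_bar = 1.5% and subgroups of 200 parts, the statistical UCL is
# roughly 4.1% -- far looser than an experience-based limit like 1.8%.
lcl, ucl = p_chart_limits(0.015, 200)
print(f"LCL={lcl:.4f}, UCL={ucl:.4f}")
```

The gap between the two limits is exactly the trade-off the text describes: the 3σ limit answers "is the process statistically stable?", while the experience-based limit answers "what target will employees believe in?".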
This kind of "psychological suggestion" is even more effective in scenarios dominated by human factors, such as controlling the customer complaint rate in catering stores (service attitude, food delivery speed) or the misassembly rate on a manual assembly line (employee proficiency, collaboration process). Here the function of the control chart is no longer "statistical judgment" but "clarifying the goal": employees know "what level counts as good", so they naturally move towards it.
Practice has repeatedly shown that the success of a control chart essentially lies in conveying the idea that "a stable process means reliable quality", not in implementing complex statistical techniques. Teams that focus on helping employees understand that a stable process reduces rework and complaints are far more likely to succeed than those that haggle over the drawing format of control charts.
III. Measurement system variation: The invisible judgment risk of control charts
The logic of the control chart is to "use data to determine whether the process is stable". However, if the measured data itself is unreliable (i.e., "measurement system variation"), all judgments will be invalid. The sources of measurement system variation include: insufficient accuracy of measuring tools (for example, using a caliper with an accuracy of ±0.5mm to measure a part with a tolerance of ±0.1mm), operational errors (significant differences in the measurement results of the same part by different employees), and environmental impacts (expansion of measuring tools caused by temperature changes).
This type of variation leads directly to two misjudgment risks:
- Mistaking a stable process for an abnormal one: for example, measurement errors push data points beyond the control limits even though the actual part dimensions are fully qualified.
- Missing real anomalies: for example, the measurement system's accuracy is too low to capture a slow drift in part size, so the opportunity to adjust is lost.
Therefore, MSA (Measurement System Analysis) must be carried out before implementing the control chart, to verify the measurement system's "repeatability" (the consistency of the same person measuring the same part with the same gauge) and "reproducibility" (the consistency of different people measuring the same part). Only when the variation of the measurement system is much smaller than the variation of the process itself (a common rule of thumb is a ratio below 10%) do the control chart's judgments make sense.
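The "much smaller than the process variation" requirement can be expressed as a simple variance-ratio check. This is a simplified sketch of that check only, not a full gauge R&R study, and the threshold and numbers are illustrative:

```python
def msa_variance_check(measurement_sd: float, process_sd: float,
                       threshold: float = 0.10) -> bool:
    """Return True if measurement variation is small enough relative to
    the total observed variation (the <10% rule of thumb from the text).

    Variances add: total_sd^2 = process_sd^2 + measurement_sd^2.
    """
    total_sd = (process_sd ** 2 + measurement_sd ** 2) ** 0.5
    return measurement_sd / total_sd < threshold

# Hypothetical example: process sd 0.05 mm, gauge sd 0.004 mm
# -> ratio about 8%, acceptable.
print(msa_variance_check(0.004, 0.05))   # True
# A coarse caliper with sd 0.02 mm on the same process fails the check.
print(msa_variance_check(0.02, 0.05))    # False
```

In practice the measurement and process standard deviations themselves come from a designed repeatability/reproducibility study; this check is only the final acceptance step.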
IV. Dynamic Update of Control Limits: The Core Requirement Synchronized with the Process
The limits of the control chart are calculated based on the stable state of the current process (for example, the 3σ limits are calculated using the mean and standard deviation of the past 20 sets of data). However, the process cannot remain unchanged forever. Changes in raw materials, equipment upgrades, process adjustments, personnel changes, and even equipment wear after long-term operation will all cause the "benchmark state" of the process to change.
If the limits are not updated in a timely manner, two failure scenarios arise:
- The old control limits are too wide. After an equipment upgrade, for example, the process fluctuation becomes smaller, but the old limits still reflect the earlier, larger fluctuations, so the control chart cannot capture new minor abnormalities (say the upper control limit of the defective rate was originally 2% and the rate is now stable at 1%: employees will ignore a slight rise to 1.2%).
- The old control limits are too narrow. When raw material quality deteriorates, for example, process fluctuations intensify, but the old limits still reflect the earlier, smaller fluctuations, so the control chart alarms frequently (say the original upper control limit was 1% and the rate now stabilizes at 1.5%: employees may come to ignore real problems because of the "crying wolf" effect).
Therefore, when significant changes occur in process factors (such as raw material replacement or major equipment overhaul), or after the process has been running for more than 3 to 6 months, data must be recollected, process stability must be analyzed, and new control limits must be calculated. The "effectiveness" of the control chart always depends on "matching the current process".
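Once a new stable run of data has been collected after a process change, relimiting is mechanical. A sketch for an individuals (X) chart, using the standard moving-range estimate of σ (the function name and data values are made up for illustration):

```python
import statistics

def individuals_limits(recent: list[float]) -> tuple[float, float, float]:
    """Recompute 3-sigma limits for an individuals (X) chart from the
    most recent stable run, using the moving-range estimate of sigma.
    """
    center = statistics.fmean(recent)
    # Average moving range between consecutive points, divided by the
    # d2 constant for subgroups of size 2 (1.128), estimates sigma.
    mr = [abs(b - a) for a, b in zip(recent, recent[1:])]
    sigma_hat = statistics.fmean(mr) / 1.128
    return center - 3 * sigma_hat, center, center + 3 * sigma_hat

# After an equipment upgrade, relimit from the latest 20+ stable points
# rather than keeping limits derived from the old, wider process.
new_data = [10.02, 9.98, 10.01, 10.00, 9.99, 10.03, 10.01, 9.97,
            10.00, 10.02, 9.99, 10.01, 10.00, 9.98, 10.02, 10.01,
            9.99, 10.00, 10.03, 9.98]
lcl, cl, ucl = individuals_limits(new_data)
print(f"LCL={lcl:.3f}, CL={cl:.3f}, UCL={ucl:.3f}")
```

The same pattern applies to subgrouped x̄-R charts or p-charts; what matters is that the limits are always derived from data representing the process as it runs now.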
V. Avoid over-interpretation: The "minimalist principle for detecting abnormalities" of control charts
A common misunderstanding of control charts is over - interpreting the data. For example, stopping the machine immediately when a single point goes beyond the control limits, or concluding that the process is "out of control" when three consecutive points show an upward trend. In fact, the fluctuations in control charts can be divided into two categories:
- Random variation: caused by inherent, unavoidable factors in the process (such as minor differences in raw material composition or small changes in operating force). This type of fluctuation is normal and requires no adjustment.
- Special-cause variation: caused by identifiable and removable factors (such as equipment failure, a substandard raw material batch, or operator violations). Only this type of fluctuation requires intervention.
The consequence of over-interpretation is over-adjustment. Frequently adjusting the process in an attempt to eliminate "random variation" will instead undermine stability (for example, repeatedly tweaking machine parameters leads to more severe fluctuations).
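The claim that chasing random variation makes things worse can be demonstrated with a small simulation in the spirit of Deming's funnel experiment (the function and parameters here are illustrative). The operator who "corrects" every random deviation by shifting the setpoint ends up with roughly double the variance of the hands-off operator:

```python
import random

def simulate(n: int, overadjust: bool, seed: int = 42) -> float:
    """Return the variance of n produced values around a target of 0.0,
    with or without over-adjustment (compensating each reading's
    deviation by shifting the setpoint)."""
    rng = random.Random(seed)
    offset = 0.0
    values = []
    for _ in range(n):
        x = offset + rng.gauss(0, 1)   # common-cause noise only
        values.append(x)
        if overadjust:
            offset -= x                # "correct" the random deviation
    mean = sum(values) / n
    return sum((v - mean) ** 2 for v in values) / n

hands_off = simulate(10_000, overadjust=False)
tampered = simulate(10_000, overadjust=True)
print(f"variance hands-off: {hands_off:.2f}, with tampering: {tampered:.2f}")
# Theory: compensating each deviation roughly doubles the variance
# (2 * sigma^2), because each output now carries two noise terms.
```

This is exactly the "turning the tool into a burden" failure mode: every adjustment injects yesterday's noise into today's output.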
The correct approach is to strictly follow the statistical rules for identifying abnormalities (such as the "abnormal patterns" in GB/T 4091) rather than making subjective judgments. For example, intervention should only be carried out when clear abnormalities such as "8 consecutive points on one side of the center line" or "2 out of 3 points exceeding the 2σ limit" occur.
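Rule-based signals like these can be checked mechanically rather than by eye. A sketch of the three rules just named (the rule formulations follow common SPC practice; adapt the exact rule set to GB/T 4091 as needed):

```python
def find_signals(points: list[float], center: float, sigma: float) -> list[str]:
    """Flag only rule-based signals: a point beyond the 3-sigma limits,
    8 consecutive points on one side of the center line, and 2 of 3
    successive points beyond the same 2-sigma limit."""
    signals = []
    # Rule: single point beyond a 3-sigma control limit.
    for i, x in enumerate(points):
        if abs(x - center) > 3 * sigma:
            signals.append(f"point {i}: beyond 3-sigma limit")
    # Rule: 8 consecutive points on one side of the center line.
    for i in range(len(points) - 7):
        window = points[i:i + 8]
        if all(x > center for x in window) or all(x < center for x in window):
            signals.append(f"points {i}-{i + 7}: 8 on one side of center")
    # Rule: 2 of 3 successive points beyond the same 2-sigma limit.
    for i in range(len(points) - 2):
        window = points[i:i + 3]
        if sum(x > center + 2 * sigma for x in window) >= 2 \
                or sum(x < center - 2 * sigma for x in window) >= 2:
            signals.append(f"points {i}-{i + 2}: 2 of 3 beyond 2-sigma")
    return signals

# Random variation inside the limits raises no alarm...
print(find_signals([0.2, -0.5, 0.8, -1.1, 0.3], 0.0, 1.0))  # []
# ...while a sustained run on one side of the center line does.
print(find_signals([0.4, 0.6, 0.2, 0.9, 0.5, 0.3, 0.7, 0.1], 0.0, 1.0))
```

Encoding the rules this way removes the temptation to react to any single point that merely "looks high", which is the subjective judgment the text warns against.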
In short, when making judgments using control charts, one should "focus on major issues and overlook minor fluctuations." Its value lies in "monitoring whether the process is stable" rather than "optimizing every minor difference." Excessive adjustment will only turn the tool into a burden.