The influence of perfectionism on the quality cost of project management
Quality costs are divided into four categories:
- Prevention costs: upfront investment to avoid defects, such as process optimization and employee training.
- Appraisal costs: investment in inspection to verify compliance, such as testing and incoming inspection.
- Internal failure costs: the cost of dealing with defects found during production, such as rework and scrap.
- External failure costs: losses caused by defects after delivery, such as warranty claims and recalls.
A perfectionist's pursuit of "zero defects" directly drives up two of these categories: excessive prevention (repeatedly optimizing processes and adding extra training to eliminate every potential risk) and excessive appraisal (layering inspections and high-frequency tests to guarantee compliance). Perfectionists also tolerate very few minor defects in production and rework frequently, which can inflate internal failure costs as well. External failure costs, however, are the least likely to be excessive: the heavy prevention and appraisal intercept the vast majority of defects, so very few defective products reach the market and the corresponding external losses stay low. Perfectionism is therefore least likely to lead to excessive external failure costs.
Key elements of Deming's quality improvement strategy
Deming's quality philosophy centers on the idea that "the system dominates quality", and its key elements focus on three points:
1. Management takes overall responsibility for quality (Element I): Deming emphasized that the root cause of quality problems lies with management. Management needs to build a system that supports quality (such as reasonable processes and sufficient resources) instead of shifting the responsibility to employees.
2. Most problems stem from the system rather than people (Element III): He held that roughly 94% of problems are due to system defects (such as inefficient processes and vague standards) and only 6% to human error. Even conscientious employees find it difficult to produce high-quality results in a poor system.
3. Understand variation using statistical methods (Element IV): The core of quality improvement is to "reduce process variation", and statistical process control (SPC) is the key tool. By monitoring the process mean and range through control charts, it is possible to distinguish between "inherent system variation (common causes)" and "abnormal fluctuations (special causes)", thus achieving precise improvement.
It should be noted that "defining performance goals" (Element II) is not Deming's proposition. He opposed management by objectives (MBO), believing that rigid goals lead employees to sacrifice quality to hit the targets (for example, skipping inspections to reach a production quota). Therefore, Deming's core elements are I, III, and IV.
Process qualification tool for variable data
Process data is divided into two categories: variable data (continuous, such as length and weight, with specific measurable values) and attribute data (discrete, such as qualified/unqualified, only countable).
Tools suitable for variable data should be able to analyze distribution characteristics and process stability.
X̄-R control chart: The core variable-data control chart. The X̄ chart monitors the process mean (e.g., the average length of a batch of products), while the R chart monitors the process range (e.g., the length spread within the same batch). Together they determine whether the process is stable.
Histogram: It shows the distribution pattern of continuous data (e.g., whether the length follows a normal distribution) and helps identify process abnormalities (e.g., a bimodal distribution may result from the mixed output of two pieces of equipment).
The c chart (count of defects, such as the number of scratches per phone) and the p chart (fraction nonconforming, such as the nonconforming proportion of each batch) are both attribute-data charts and are unsuitable for variable data. Therefore, the qualification tools for variable data are the X̄-R control chart and the histogram.
Quantitative calculation logic of quality cost
A factory produces 200,000 units per month, with a non-conformity rate of 1.5% (3,000 non-conforming products per month). The final inspection catches 1/10 (300) of them. Among the 2,700 non-conforming products that are not caught, only 1/25 are returned for warranty (108). The cost of handling each detected defect is $50.
Calculation steps:
1. Total number of detected defects: 300 defects captured during inspection + 108 defects from warranty returns = 408 defects.
2. Total quality cost: 408 units × $50 per unit = $20,400.
The "quality cost" here only includes the cost of handling the detected defects. Although there are implicit losses for the undetected and unreturned defects (2,592 in total), the question does not require their calculation.
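For readers who want to reproduce the arithmetic, here is a minimal Python sketch (the variable names are ours; the values come from the problem statement):

```python
# Quality-cost arithmetic for the detected defects.
monthly_units = 200_000
nonconformity_rate = 0.015
cost_per_defect = 50  # $ per detected defect

nonconforming = monthly_units * nonconformity_rate        # 3,000 per month
caught_at_inspection = nonconforming / 10                 # 300 caught by final inspection
escaped = nonconforming - caught_at_inspection            # 2,700 reach customers
warranty_returns = escaped / 25                           # 108 returned under warranty

detected_total = caught_at_inspection + warranty_returns  # 408 defects handled
quality_cost = detected_total * cost_per_defect           # $20,400
print(f"Detected defects: {detected_total:.0f}, quality cost: ${quality_cost:,.0f}")
```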
Cost changes when the non-conformity rate approaches zero
The changing trends of the four types of quality costs vary significantly.
Prevention cost: It rises as the non-conformity rate decreases (more resources are needed to optimize the process), but the rate of increase gradually slows down (after exceeding the optimal level, the return on additional investment decreases).
Appraisal cost: It decreases slowly as the non-conformity rate decreases (the inspection frequency can be reduced after the process becomes stable).
Internal failure cost: It decreases as the non-conformity rate decreases (fewer defects lead to lower rework costs), but the rate of decrease is limited.
External failure costs: They drop sharply as the non-conformity rate decreases. When the non-conformity rate approaches zero, almost no defects flow into the market, and costs such as warranty and recall fall from a high level to nearly zero.
Therefore, the external failure cost is the cost that decreases most significantly when the non-conformity rate approaches zero.
Analysis scope of design review
Design review is a crucial step in verifying the rationality of the design. The core analysis content includes:
Manufacturing cost: Evaluate the manufacturability of the design (e.g., complex structures increase mold costs).
On-site maintenance cost: Evaluate the difficulty of maintenance during the customer's use (e.g., whether the vulnerable parts are easy to replace).
Performance compliance: Verify whether the design meets the technical requirements (e.g., whether the battery life meets the standard).
The customer's demand for the product, however, is not analyzed in the design review. Customer demand is the "input" to the design (e.g., "the weight of the mobile phone ≤ 200 g"), and the core of the design review is to verify whether the design output meets the input requirements, not to analyze the customer demand itself. Therefore, the design review does not analyze the customer's demand for the product.
Limitations of histograms in process control
A histogram is a tool for presenting data distribution (such as the distribution of the lengths of a batch of products), but it cannot reflect variation in the time dimension. It discards the time order of the data, showing only the overall distribution and hiding any trend in the data over time.
For example, a histogram of product lengths from a production line may show a normal distribution (meeting requirements), while a control chart of the same data reveals that the mean has been rising steadily for the past 3 hours (process drift). Because it ignores time, a histogram cannot monitor the dynamic stability of a process, and the core of process control is identifying anomalies in the time dimension. Therefore, the main limitation of the histogram is that it does not consider the time factor.
Selection of graphical tools for sequential operations
Core uses of the candidate charts:
Histogram: Display the data distribution (e.g., weight distribution).
Scatter plot: Show the correlation between variables (e.g., the relationship between temperature and output).
Flowchart: Display the sequential steps and logic of a process (e.g., "Order reception → Raw material inspection → Production → Delivery").
Relations diagram: Show causal relationships (e.g., "Equipment failure → Decrease in production volume → Delivery delay").
The core of a sequential operation is the order of its steps, so a flowchart is the best choice: it clearly presents the process logic with boxes (steps) and arrows (flow direction).
Selection of graphical tools for variable correlation
When variable y increases or decreases as x changes (e.g., "as the temperature rises, the output increases"), it is necessary to show the strength of the linear relationship between the two. The uses of the candidate tools:
Control chart: Monitor process variation (e.g., mean stability).
Pareto chart: Identify major problems (e.g., 80% of defects come from 20% of causes).
Scatter plot: Use the horizontal axis (x) and the vertical axis (y) to display the distribution of data points, and judge the correlation based on the trend of the points (for example, "from the lower left to the upper right" indicates a positive correlation).
Relations diagram: Show causal relationships (e.g., "Insufficient training → Operational error → Defect").
Therefore, the scatter plot is the best tool for showing the correlation between variables.
Core definition of the quality information system
The core of the Quality Information System (QIS) is to support decision-making. It is "a system that collects, stores, analyzes, and summarizes quality data to assist organizations in making quality decisions."
Difference from other options: QIS is not a repository of historical data (Option 1), nor a collection of management reports (Option 2), nor an indicator tracking tool (Option 4). Instead, it is a system that drives decision-making through data analysis. For example, by analyzing defect data, it can identify a certain piece of equipment as the main source of problems and assist management in making the decision to replace the equipment.
The influence of translation transformation on the correlation coefficient
The correlation coefficient (r) measures the strength of the linear relationship between two variables. Its calculation formula is "the covariance divided by the product of the standard deviations of the two variables".
When the weight of each unit is reduced by 0.5 ounces (a translation transformation), the mean of the weights will decrease by 0.5, but both the covariance and the standard deviation remain unchanged. The covariance measures "the degree to which variables deviate from the mean". After translation, the difference between each data point and the mean remains the same. The standard deviation measures "the degree of data dispersion". After translation, the relative degree of dispersion remains unchanged. Therefore, the translation transformation does not affect the correlation coefficient, and the correlation coefficient between length and weight is still 0.27.
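A quick numerical check of this invariance, using synthetic data (the seed, sizes, and coefficients below are illustrative, not the length/weight data that produced r = 0.27):

```python
import numpy as np

rng = np.random.default_rng(0)
length = rng.normal(10.0, 1.0, 100)                 # hypothetical lengths
weight = 0.3 * length + rng.normal(0.0, 1.0, 100)   # weights loosely tied to length

r_before = np.corrcoef(length, weight)[0, 1]
r_after = np.corrcoef(length, weight - 0.5)[0, 1]   # subtract 0.5 oz from every weight
print(r_before, r_after)  # equal: a translation leaves r unchanged
```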
I. Calculation of the sample size for the average lifespan of air-conditioning units
A certain air conditioner manufacturer needs to estimate the average lifespan of its units from installation to replacement. The allowable error (margin of error) is E = 0.5 years (the maximum deviation between the estimate and the true mean), at a 95% confidence level (a 95% probability that the estimate lies within ±0.5 years of the true mean). The lifespan of the units follows a normal distribution with population standard deviation σ = 6.0 years (obtained from historical data or pre-tests).
The calculation formula for the sample size is:
$$n = \left( \frac{z \cdot \sigma}{E} \right)^2$$
Among them, $z$ is the standard normal quantile corresponding to the confidence level. At the 95% confidence level, the two-sided quantile is $z = 1.96$ (from the standard normal distribution table).
Substitute the values and calculate:
$$n = \left( \frac{1.96 \times 6.0}{0.5} \right)^2 = 23.52^2 \approx 553.2$$
Rounding up (sample sizes are always rounded up), 554 units must be sampled to meet the error and confidence requirements.
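A minimal sketch of the same computation in Python:

```python
import math

z = 1.96      # two-sided 95% quantile
sigma = 6.0   # population standard deviation (years)
E = 0.5       # allowable error (years)

n = (z * sigma / E) ** 2  # 553.19...
print(math.ceil(n))       # 554: round the sample size up, never down
```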
II. Calculation of the expected frequency of cell I in the contingency table
A certain contingency table is used to analyze the association between "Alternative (A/B)" and "Result (X/Y)". Its structure is as follows:

|              | Alternative A | Alternative B | Row total |
|--------------|---------------|---------------|-----------|
| Result X     | cell I        | cell II       | 80        |
| Result Y     | cell III      | cell IV       | 120       |
| Column total | 130           | 70            | 200       |
The calculation logic of the expected frequency in the contingency table is: The expected frequency of a cell = (Total of the corresponding row × Total of the corresponding column) / Total number of observations (the theoretical value assuming the independence of rows and columns).
For cell I (corresponding to Alternative A, Result X):
$$\text{Expected frequency}_I = \frac{\text{Row total for Result X} \times \text{Column total for Alternative A}}{\text{Total number of observations}} = \frac{80 \times 130}{200} = 52$$
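The same logic in a short Python sketch (the dictionary layout is just one convenient way to encode the margins):

```python
# Expected frequency under independence: (row total * column total) / grand total.
row_totals = {"X": 80, "Y": 120}
col_totals = {"A": 130, "B": 70}
grand_total = 200

expected = {
    (result, alt): row_totals[result] * col_totals[alt] / grand_total
    for result in row_totals for alt in col_totals
}
print(expected[("X", "A")])  # cell I -> 52.0
```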
III. Background of the assessment of the key component quality system
A company intends to certify the qualification of its sister company for producing key components. There are 3 mandatory control characteristics for this component:
1. Process temperature: 195±5°F (i.e., the temperature should be between 190°F and 200°F);
2. Component mass: 100±7 grams (that is, the mass should be between 93 and 107 grams);
3. Chemical composition: ≤3% (directly affects the performance of the component).
However, the review found three critical deficiencies in the sister company's quality system:
- There is no calibration procedure for measuring equipment (the accuracy of temperature and mass measurement cannot be guaranteed).
- The mass of each component is measured only once per shift (too little data to reflect process fluctuation).
- The chemical composition analysis equipment has failed (it is impossible to verify whether the key indicators comply with the regulations).
To verify capability, the team took 30 new samples (across multiple shifts) and compared them with 30 historical samples. The core data are as follows:
| Characteristic       | New samples (n = 30)                 | Historical samples (n = 30)        |
|----------------------|--------------------------------------|------------------------------------|
| Temperature (°F)     | Mean 194.4, standard deviation 3.58  | Mean 195.0, standard deviation 0.9 |
| Mass (grams)         | Mean 98.3, standard deviation 4.13   | Mean 100.0, standard deviation 2.37 |
| Chemical composition | Mean 2.62%, standard deviation 0.29% | No valid data                      |
IV. Selection of test methods for equality of means
To verify "whether the means of the new sample and the historical sample are consistent", the applicable scenarios of the optional methods are as follows:
1. Grubbs test: It is only used to identify outliers in univariate data (e.g., the temperature of a certain sample is much higher than other values) and does not involve mean comparison.
2. t-test: Used precisely for comparing the means of two independent samples (e.g., in this question, "the mean temperature of the new samples" vs. "the mean temperature of the historical samples"), regardless of the sample size (small samples require the normality assumption; large samples relax it).
3. Chi-square test: It is used to analyze the association of categorical variables (e.g., the relationship between "shift" and "nonconforming rate") and is not suitable for continuous variables (temperature, mass).
4. Dixon test: Similar to the Grubbs test, it is used to detect outliers in univariate data and does not involve mean comparison.
Therefore, the most suitable method is the t-test (if the sample size is extremely large, the z-test can also be used, but this option is not available).
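As an illustration, the comparison can be run directly from the summary statistics with SciPy (a Welch test, shown here as a sketch assuming SciPy is available; equal variances are not assumed because the two standard deviations differ markedly):

```python
from scipy.stats import ttest_ind_from_stats

# Welch's two-sample t-test from the temperature summary statistics above.
t_stat, p_value = ttest_ind_from_stats(
    mean1=194.4, std1=3.58, nobs1=30,  # new samples
    mean2=195.0, std2=0.9, nobs2=30,   # historical samples
    equal_var=False,                   # the variances clearly differ
)
print(t_stat, p_value)  # |t| is well below any common critical value
```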
V. Z-statistic of the mean temperature difference and decision-making
Assume that the temperatures of the new sample and the historical sample come from the same population (null hypothesis $H_0: \mu_1 = \mu_2$). We calculate the z-statistic (with sample sizes of n = 30, the sampling distribution of the mean is approximately normal):
$$z = \frac{\bar{x}_1 - \bar{x}_2}{\sqrt{\frac{\sigma_1^2}{n_1} + \frac{\sigma_2^2}{n_2}}}$$
Substitute the temperature data:
- The mean of the new sample $\bar{x}_1 = 194.4$, the standard deviation $\sigma_1 = 3.58$, and the sample size $n_1 = 30$.
- The mean of the historical sample $\bar{x}_2 = 195.0$, the standard deviation $\sigma_2 = 0.9$, and the sample size $n_2 = 30$.
Calculation process:
1. Numerator (mean difference): $194.4 - 195.0 = -0.6$;
2. Denominator (standard error): $\sqrt{\frac{3.58^2}{30} + \frac{0.9^2}{30}} = \sqrt{\frac{12.8164 + 0.81}{30}} \approx 0.674$;
3. z statistic: $\frac{-0.6}{0.674} \approx -0.89$.
Decision rule: At significance level α = 5% (two-tailed test), the critical values are ±1.96. If the z-statistic falls within [-1.96, 1.96], we fail to reject the null hypothesis (no significant difference).
Here z = -0.89 falls within that range, so the null hypothesis is not rejected: there is no significant difference between the mean temperatures of the new and historical samples.
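The same computation as a short Python sketch:

```python
import math

x1, s1, n1 = 194.4, 3.58, 30  # new samples
x2, s2, n2 = 195.0, 0.9, 30   # historical samples

se = math.sqrt(s1**2 / n1 + s2**2 / n2)  # standard error, ~0.674
z = (x1 - x2) / se                       # ~ -0.89
print(f"z = {z:.2f}; reject H0 only if |z| > 1.96")
```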
VI. Statistical significance of a sample size of 30
The core function of extracting 30 samples for each characteristic is to utilize the Central Limit Theorem (CLT).
Regardless of the shape of the overall distribution, when the sample size n ≥ 30, the distribution of the sample mean approximately follows a normal distribution (Gaussian distribution).
Option analysis:
1. Dodge-Romig Sampling Plan: It is a type of acceptance sampling (e.g., determining the sample size to meet AOQL or LTPD) and has nothing to do with the "process verification" in this question.
2. Approximate Gaussian distribution: It fully conforms to the conclusion of the central limit theorem, which is the core significance of a sample size of 30.
3. Bernoulli Process Theorem: It describes the probability distribution of independent binary trials (such as "qualified/unqualified") and has nothing to do with the continuous variables (temperature, mass) in this question.
4. Pearson's skewness coefficient: It is used to measure the asymmetry of data distribution, and calculating it does not require a fixed sample size of 30.
Therefore, the main function of a sample size of 30 is to allow an approximate Gaussian distribution.
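A quick simulation of the theorem (the exponential population and the sample counts are illustrative choices, not part of the problem):

```python
import numpy as np

rng = np.random.default_rng(1)
# A heavily skewed population (exponential), far from normal.
population = rng.exponential(scale=2.0, size=100_000)

# Means of repeated samples of size 30 are approximately normally distributed (CLT).
sample_means = [rng.choice(population, size=30).mean() for _ in range(2_000)]
print(np.mean(sample_means), np.std(sample_means))  # ~2.0 and ~2.0/sqrt(30)
```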
VII. Calculation of the fluctuation of a single unit in the X̄-R control chart
In the X̄-R (mean-range) control chart, it is necessary to estimate the process spread of a single unit, that is, the standard deviation $\sigma$ of the process.
The calculation formula is:
$$\hat{\sigma} = \frac{\bar{R}}{d_2}$$
- $\bar{R}$: The average of all subgroup ranges (average range);
- $d_2$: A control chart constant that depends on the subgroup size n and is read from the control chart constant table (e.g., for n = 5, $d_2 \approx 2.326$).
Logic: The range $R$ is the difference between the maximum and minimum values within a subgroup, and $\bar{R}$ reflects the average fluctuation within the subgroup; $d_2$ is the correction factor for converting the average range to the standard deviation (since $E[R] = d_2\sigma$).
Therefore, the fluctuation of a single unit in the X̄-R control chart is calculated as $\bar{R}/d_2$.
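A short sketch of the estimate on simulated subgroups (the true sigma of 2.0 and the subgroup layout are illustrative assumptions):

```python
import numpy as np

D2 = 2.326  # control chart constant for subgroups of size 5

rng = np.random.default_rng(2)
subgroups = rng.normal(100.0, 2.0, size=(25, 5))  # 25 subgroups of 5 units each

ranges = subgroups.max(axis=1) - subgroups.min(axis=1)  # R per subgroup
r_bar = ranges.mean()                                   # average range
sigma_hat = r_bar / D2                                  # sigma-hat = R-bar / d2
print(f"R-bar = {r_bar:.3f}, sigma-hat = {sigma_hat:.3f}")  # near the true sigma of 2.0
```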
Core strategy for preventing material batch confusion: Material and status control
The mixing or misplacement of material batches is essentially a problem of lost identity or ambiguous status. Material and status control addresses the risk at the source through full-chain identification and dynamic traceability: it assigns each batch of material a unique batch number (tied to the supplier, production time, and specification), locks the storage location with fixed storage positions (location codes, shelf labels), and makes the current state of the material explicit with visual status labels (pending inspection/qualified/nonconforming/on hold). For example, a semiconductor factory controls wafer batches through a triple mechanism of "Lot ID + machine position + status light" to ensure that different batches never mix.
Why are the other options not applicable? The operator checklist relies on manual verification, making it prone to missed inspections due to fatigue. The Material Review Board (MRB) is a decision-making body for handling non-conforming products and does not involve prevention. Statistical Process Control (SPC) focuses on monitoring process variations and has nothing to do with material batch management. Only material and status control is the underlying logic of "preventing problems before they occur".
Comprehensive verification of the design: Definition and value of design review
A "formal, documented, cross-functional evaluation of a design" is precisely the definition of a design review. It is a "physical examination" covering the entire design lifecycle, with three core objectives:
1. Requirement alignment: Verify whether the design outputs (drawings, specifications) match the inputs (customer requirements, regulations, standards). For example, the pixel design of a mobile phone camera needs to meet the customer requirement of "clear photos".
2. Risk exploration: Identify potential problems from the cross-functional perspectives of design, manufacturing, quality, and customers, such as the "assemblability" of parts (whether they can be easily connected with other components) and "maintainability" (whether they can be easily replaced in case of failure).
3. Solution output: Propose improvements for the problems, such as adjusting the wall thickness of plastic parts (to solve the problem of injection molding shrinkage) and optimizing the circuit layout (to improve EMC performance).
Compared with the other options: a quality review focuses on the quality of the final product and does not cover the design process; a design inspection is a partial dimensional check and lacks comprehensiveness; FMEA focuses on failure mode analysis rather than systematic review. Design review is the only full-process verification tool "from requirements to implementation".
Grading of the strictness of manufacturing tolerances: The essence of critical tolerances
When the manufacturing control tolerance is stricter than the product requirement, the tolerance is a critical tolerance: it directly determines the safety or core function of the product, and an out-of-tolerance condition can lead to product failure or regulatory non-compliance. For example, for the webbing thickness of a car seat belt, the product requirement is 2±0.5 mm while manufacturing control holds 2±0.2 mm; out of tolerance, the belt may fail to withstand the impact load (a safety risk). Likewise, the needle-tube diameter of a medical syringe carries a tighter manufacturing tolerance, since exceeding it would cause dosage errors (a regulatory risk).
How the other classifications differ: major tolerances affect performance but do not cause failure (e.g., the surface roughness of a phone case); non-functional tolerances do not affect use (e.g., a chamfer inside a part); end-use tolerances are the customer's requirements in use (e.g., the hole spacing of furniture) and are not the focus of manufacturing control. Critical tolerances are the red lines that must be held absolutely.
Selection of control charts for monitoring the number of non-conforming products: Application logic of the np chart
To track the "average number of nonconforming items over time", the np chart is the best choice: it monitors the count of nonconforming items under a fixed sample size. For example, if 1,000 parts are produced every day (a fixed sample size), the np chart records the number of nonconforming items each day (e.g., 5 on Monday, 3 on Tuesday), and process stability is judged against the control limits (say UCL = 10, LCL = 0). If a day's count exceeds the UCL, the process is abnormal (e.g., impure raw materials, equipment failure).
If the sample size varies (e.g., 500 to 1,000 pieces produced daily), choose the p chart (fraction nonconforming) instead. The core logic: counts/rates of nonconforming items correspond to the np/p charts, while counts of defects correspond to the c/u charts.
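As a sketch, the np chart center line and control limits follow the standard formula $n\bar{p} \pm 3\sqrt{n\bar{p}(1-\bar{p})}$; the daily counts below are hypothetical:

```python
import math

n = 1000                                  # parts inspected per day (fixed)
daily_nonconforming = [5, 3, 7, 4, 6, 5]  # hypothetical daily counts

np_bar = sum(daily_nonconforming) / len(daily_nonconforming)  # center line
p_bar = np_bar / n
half_width = 3 * math.sqrt(np_bar * (1 - p_bar))

ucl = np_bar + half_width
lcl = max(0.0, np_bar - half_width)  # the LCL is floored at zero
print(f"CL = {np_bar:.2f}, UCL = {ucl:.2f}, LCL = {lcl:.2f}")
```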
Relationship between reference measurement and the accuracy of measuring tools
Accuracy is the deviation between a gage's measured value and the true value, and it must be verified through reference measurement (using a standard with a known value). For example, if a micrometer reads 10.02 mm when checked against a 10 mm gauge block, the accuracy error is 0.02 mm. Linearity is the change in accuracy across the measuring range (e.g., accurate at 5 mm but inaccurate at 10 mm); stability is the drift of accuracy over time (e.g., month-to-month change in readings of the same gauge block); repeatability is the variation across multiple measurements by the same person. The core of reference measurement is comparison with the true value, so it corresponds to accuracy.
Variation between operators of measuring instruments: Definition of reproducibility
The average variation when different operators measure the same part with the same gage is called reproducibility. For example, when two operators measure the diameter of the same shaft and get 20.01 mm and 20.03 mm respectively, that difference is reproducibility: it reflects the influence of the "person" on the measurement result. Repeatability is the variation in multiple measurements by the same person (such as the spread of three measurements by one operator); linearity is the change in accuracy across the measuring range; accuracy is the deviation from the true value. Reproducibility is the variation among "different people" and is one of the core indicators in a gage GR&R (repeatability and reproducibility) study.
Analysis tool for potential system failures: Fault Tree Analysis (FTA)
Fault Tree Analysis (FTA) is a top-down tool for studying potential failures. Starting from a top event (a system failure, such as an aircraft engine shutdown), it uses logic gates (AND/OR) to trace back to basic events (such as a fuel pump failure or a sensor false alarm), covering all possible failure paths. For example, when analyzing "the car fails to start", FTA decomposes it into "the battery is dead" (OR), "the starter malfunctions" (OR), and "the fuel system fails" (AND: the pump is broken + the fuel tank is empty), thereby exhausting the potential causes.
Limitations of the other tools: failure analysis examines failures that have already occurred (such as metallographic analysis after a part breaks); reliability allocation assigns system targets to subsystems; Pareto analysis identifies the main problems (80% of nonconformities come from 20% of the causes). FTA is the only systematic method that covers all potential failures.
Topics and correct answers
1. Prevent batch mixing of materials: Material and status control
2. Comprehensive review of the design: Design review
3. Tighter manufacturing tolerances: Critical tolerance
4. Monitor the average number of nonconforming products: np chart (fixed sample size) / p chart (variable sample size)
5. Determine the characteristics of the measuring tool by reference measurement: accuracy
6. Measurement variation between operators: Reproducibility
7. Potential failure analysis: Fault tree analysis