Detection uses an inverted scale: lower numbers = better detection capability. A rating of 1 (Almost Certain) means controls will almost certainly catch the defect. A rating of 10 (Almost Impossible) means controls cannot detect the failure before customer impact.

Detection Scale Overview

The 11-level enumeration (0–10) maps control types, inspection methods, and technological approaches to a standardized rating:
| Rating | Level | Control Type | Detection Method | Probability | Example |
| --- | --- | --- | --- | --- | --- |
| 0 | Unanalyzed | (default) | Not yet assessed | TBD | New failure mode |
| 1 | Almost Certain | Error-proofing (Poka-Yoke) | Design prevents defect creation | 91–100% | Mechanical keying prevents wrong assembly |
| 2 | Very High | Automated detection + prevention | Automated sensor halts process | 81–90% | Automated vision system stops production line |
| 3 | High | Automated in-process detection | Automated measurement at processing station | 71–80% | In-process torque verification with auto-reject |
| 4 | Moderately High | Variable gauging (in-process or post) | Measurement with tolerance bands | 61–70% | CMM (Coordinate Measuring Machine) verification |
| 5 | Moderate | Attribute gauging (100% inspection) | Manual pass/fail gauging | 51–60% | Go/No-Go gauge, 100% parts inspected |
| 6 | Low | Statistical Process Control | SPC trending and Cpk monitoring | 41–50% | Control chart alerts before limits exceeded |
| 7 | Very Low | Double visual inspection | Redundant operator visual checks | 31–40% | Two independent visual inspections |
| 8 | Remote | Single visual inspection | One operator visual check | 21–30% | Visual inspection for visible defects only |
| 9 | Very Remote | Inadequate controls | Random/sporadic checks | 11–20% | Occasional spot-check without standards |
| 10 | Almost Impossible | No detection controls | No testing or controls planned | 0–10% | No inspection; defect certain to reach customer |
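For scripting against exported Risksheet data, the scale above can be captured as a simple lookup (a minimal sketch; the level names come from the table, the helper itself is illustrative):

```python
# Detection scale lookup (level names copied from the table above).
DETECTION_LEVELS = {
    0: "Unanalyzed",
    1: "Almost Certain",
    2: "Very High",
    3: "High",
    4: "Moderately High",
    5: "Moderate",
    6: "Low",
    7: "Very Low",
    8: "Remote",
    9: "Very Remote",
    10: "Almost Impossible",
}

def detection_level(rating: int) -> str:
    """Return the level name for a detection rating, validating the 0-10 range."""
    if rating not in DETECTION_LEVELS:
        raise ValueError(f"Detection rating must be 0-10, got {rating}")
    return DETECTION_LEVELS[rating]
```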

Detection Levels in Detail

Rating 10: Almost Impossible (No Controls)

Definition: No detection controls are available, no testing is planned or possible, and defects will almost certainly reach the customer. When to use: Only in scenarios where:
  • Detection is physically impossible (e.g., field-only failure modes)
  • No process control point exists for measurement
  • Customer is the only entity that can observe the failure
Mitigation strategy: Since detection is not viable, focus shifts entirely to prevention:
  • Redesign to eliminate the failure mode root cause
  • Upgrade material or component specifications
  • Implement design redundancy or fail-safe mechanisms
  • Upgrade the DFMEA with enhanced design robustness
Upgrade path: Move to Rating 9 by establishing any control point, even if sporadic.

Rating 9: Very Remote (Inadequate Controls)

Definition: Controls probably will not detect the failure. Only random or sporadic checks are performed, with no documented inspection standards or measurement procedures. Characteristics:
  • Absence of structured inspection process
  • No baseline detection standards
  • Human judgment-dependent (high variability)
Distinction from Rating 10:
  • Rating 10 = no controls exist at all
  • Rating 9 = controls exist but are insufficient (inadequate frequency, training, or consistency)
Upgrade path: Establish Statistical Process Control (SPC) charting to move to Rating 6, or implement double visual inspection with documented criteria for Rating 7.

Rating 8: Remote (Poor Visual Detection)

Definition: Controls have a poor chance of detecting the failure. Relies on single visual inspection by an operator without measurement devices or redundant checks. Limitations:
  • Operator fatigue: Detection rates degrade over long shifts
  • Subjective criteria: Appearance-based pass/fail varies between inspectors
  • No quantification: Visual defects are not measured (e.g., surface finish is “acceptable” judgment)
  • Escape risk: Defects at specification limits may appear acceptable to tired or untrained inspectors
Upgrade path to Rating 7: Implement double visual inspection (two independent observers), or switch to objective measurement. Upgrade path to Rating 4 or better: Implement variable gauging (measurement instruments) or automated detection.

Rating 7: Very Low (Double Visual Inspection)

Definition: Controls have a low chance of detecting the failure. Two independent visual inspections by operators provide redundancy to compensate for individual human error. When acceptable:
  • Cosmetic defects where visual detection is the only practical method
  • Dimensional features outside SPC capability or where measurement is cost-prohibitive
  • Safety-critical requirements that require human judgment (e.g., weld appearance assessment by trained technician)
When insufficient:
  • Functional or safety-critical defects where objective measurement is required
  • High-volume production where inspection fatigue is likely
Key requirement: Both inspectors must use the same documented visual standards; ideally, a reference sample or photograph is provided. Upgrade path: Implement variable gauging (Rating 4–5) or in-process automated detection (Rating 2–3).
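The redundancy benefit of a second inspector can be estimated under an independence assumption (illustrative only; real inspectors looking at the same parts are rarely fully independent):

```python
def combined_detection(p_single: float, inspectors: int = 2) -> float:
    """Probability that at least one of N independent inspectors catches the defect."""
    return 1 - (1 - p_single) ** inspectors

# If a single inspector catches 60% of defects, two independent checks
# raise the combined catch rate to 1 - 0.4**2 = 84%.
```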

Rating 6: Low (Statistical Process Control)

Definition: Controls may detect the failure using Statistical Process Control (SPC) charting or process capability monitoring. Mechanism:
  • Periodic sampling (e.g., every 5th part) is measured and plotted on control charts
  • Trends are detected before specification limits are exceeded
  • Cpk (process capability index) calculations quantify sustained process performance
  • Alerts trigger corrective action when trends approach limits
Relationship to Occurrence:
  • SPC enables proactive detection of process drift
  • For failure modes with a high Occurrence rating, improving Detection through SPC can reduce RPN significantly
  • Assumes external SPC tool integration (e.g., manufacturing execution system data feed to Polarion)
Limitation: “May detect” means sampling, not 100% inspection—defects in non-sampled parts still escape. Upgrade path to Rating 4: Implement variable gauging with 100% post-process inspection instead of sampling. Upgrade path to Rating 3: Implement in-process gauging so defects are detected during production before parts are finished.
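The Cpk calculation mentioned above can be sketched as follows (a minimal illustration using the standard Cpk formula; the sample data and specification limits are invented):

```python
import statistics

def cpk(samples: list[float], lsl: float, usl: float) -> float:
    """Process capability index: min distance from mean to a spec limit, in units of 3 sigma."""
    mean = statistics.mean(samples)
    sigma = statistics.stdev(samples)  # sample standard deviation
    return min(usl - mean, mean - lsl) / (3 * sigma)

# Invented sample of five measurements against invented limits 44.5-45.5 mm:
measurements = [45.0, 45.1, 44.9, 45.0, 45.0]
print(round(cpk(measurements, lsl=44.5, usl=45.5), 2))
```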

Rating 5: Moderate (100% Manual Attribute Gauging)

Definition: Controls have a moderate chance of detecting the failure through 100% manual attribute gauging (pass/fail). Characteristics:
  • Attribute data: Defect classification as pass/fail, not measured values
  • 100% inspection: Every part is checked (vs. sampling in Rating 6)
  • Manual process: Operator uses pass/fail gauge (e.g., Go/No-Go plug gauge, visual master reference)
  • No trend data: Cannot calculate Cpk or monitor process trends over time
Distinction from Rating 6 (SPC):
  • Rating 6 = sampling + trend analysis
  • Rating 5 = 100% inspection but no statistical analysis
Distinction from Rating 4 (Variable Gauging):
  • Rating 4 = measured values (e.g., 45.2 mm ± 0.5 mm)
  • Rating 5 = pass/fail only (e.g., fits/doesn’t fit)
Upgrade path: Transition to Rating 4 by implementing variable measurement (e.g., replace Go/No-Go with digital caliper), which enables Cpk calculation and trend analysis.
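The attribute-versus-variable distinction can be shown in code (hypothetical tolerances; the point is the information each method yields):

```python
NOMINAL, TOL = 45.2, 0.5  # hypothetical spec: 45.2 mm +/- 0.5 mm

def attribute_gauge(part_fits: bool) -> str:
    """Rating 5: Go/No-Go result only - no measured value is recorded."""
    return "pass" if part_fits else "fail"

def variable_gauge(measured_mm: float) -> tuple[str, float]:
    """Rating 4: records the measured value, enabling Cpk and trend analysis."""
    verdict = "pass" if abs(measured_mm - NOMINAL) <= TOL else "fail"
    return verdict, measured_mm
```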

Rating 4: Moderately High (Variable Gauging)

Definition: Controls have a good chance of detecting the failure using variable gauging (measurement with tolerance bands). Characteristics:
  • Variable data: Actual measured values (e.g., 45.18 mm, 45.25 mm, 45.19 mm)
  • Tolerance bands: Values compared to upper and lower specification limits (USL/LSL)
  • Post-process or in-station: Measurement occurs after part processing or at the processing station
  • Data analysis capability: Measured values enable Cpk calculation, statistical trending, and root cause analysis
Timing distinction:
  • Post-process (Rating 4): Defects are detected after processing is complete—parts are already non-conforming
  • In-process (Rating 3): Measurement during processing allows real-time corrective action
Upgrade path to Rating 3: Move measurement station to in-process location (during processing) so defects are caught before parts are finished and can be immediately corrected.

Rating 3: High (In-Process Automated Detection)

Definition: Controls have a high probability of detecting the failure. Automated measurement is integrated at the processing station with real-time detection and alerts. Characteristics:
  • In-process timing: Measurement occurs during part production, not after
  • Automated sensors: No operator manual intervention (e.g., laser gauging, vision systems)
  • Real-time alerts: Out-of-tolerance parts trigger immediate operator notification
  • Defects still created: Automation detects but does not prevent—the out-of-spec part still exists
Advantages over Rating 4:
  • Earlier detection = fewer bad parts produced before discovery
  • Automation reduces human error in gauging
  • Real-time feedback enables rapid corrective action
Limitation: Detection without prevention—defective parts still enter the process; automated reaction (Rating 2) is needed for prevention. Upgrade path to Rating 2: Implement automated detection + automatic rejection/rework (e.g., automated part eject, process shutdown, or material rework loop).

Rating 2: Very High (Automated Detection + Prevention)

Definition: Controls almost certainly detect the failure. Automated detection is integrated with the production process and automatically prevents non-conforming parts from proceeding downstream. Key distinction from Rating 3:
  • Rating 3 = automated detection + alert (human must react)
  • Rating 2 = automated detection + automatic reaction (no human delay)
Examples:
  • Automated vision system detects undersized bore and automatically triggers spindle retract + part eject
  • In-process torque sensor detects under-torqued fastener and stops the pneumatic tool
  • Automated leak test system vents and reworks pressure vessel if leak detected
Mechanism: Defects are still created, but automated controls ensure non-conforming parts are:
  • Ejected before downstream assembly
  • Automatically reworked in-place
  • Flagged for segregation and manual rework
Limitation: Prevention still requires design change (Rating 1)—this rating only prevents customer reach, not creation.
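The Rating 3 versus Rating 2 distinction above comes down to who reacts to the detection signal; a hypothetical sketch (eject_part and notify_operator stand in for line-controller callbacks):

```python
def handle_out_of_tolerance(rating: int, eject_part, notify_operator) -> str:
    """Dispatch an out-of-tolerance event: Rating 3 alerts a human, Rating 2 reacts automatically.

    eject_part and notify_operator are hypothetical callbacks into the line controller.
    """
    if rating == 2:
        eject_part()       # automatic reaction: the part never proceeds downstream
        return "ejected"
    notify_operator()      # Rating 3: detection + alert, a human must react
    return "alerted"
```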

Rating 1: Almost Certain (Error-Proofing / Poka-Yoke)

Definition: The best possible detection rating (lowest number). Process or product design makes it physically impossible to create a discrepant part (poka-yoke principle). Characteristics:
  • Design-based prevention: Mechanical, electrical, or logical design eliminates the failure mode
  • Error-proofing (Poka-Yoke): Japanese manufacturing principle meaning “mistake-proofing”
  • No inspection needed: Detection is guaranteed by design, not by controls or gauging
  • 100% effectiveness: No defects can be created; customer reach is impossible
Examples:
  • Keyed connectors prevent reversed wire assembly (mechanical keying)
  • Modular fixture nests ensure correct part orientation before operation
  • Software logic prevents invalid parameter combination before sending command
  • Mechanical stops physically limit motion range to safe bounds
Relationship to DFMEA:
  • Rating 1 often requires design change or design revalidation in DFMEA
  • Eliminates the failure mode root cause rather than controlling its effects
  • Typically the highest implementation effort (design modification), but also the greatest risk reduction
When to target Rating 1:
  • For high-Severity failure modes (where detection alone is insufficient)
  • For safety-critical characteristics (SC classification)
  • When Risk Priority Numbers are unacceptably high even with other controls
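The "software logic prevents invalid parameter combination" example above can be sketched as follows (the command, parameters, and safety rule are all hypothetical):

```python
def send_move_command(speed_mm_s: float, clamp_engaged: bool) -> str:
    """Error-proofing in software: an invalid combination cannot reach the machine.

    Hypothetical rule: motion commands are rejected unless the clamp is engaged
    and speed is within a safe range, so the failure mode cannot be created.
    """
    if not clamp_engaged:
        raise ValueError("Refusing move: clamp not engaged")
    if not 0 < speed_mm_s <= 100:
        raise ValueError("Refusing move: speed outside safe range 0-100 mm/s")
    return f"MOVE {speed_mm_s}"  # only valid commands are ever emitted
```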

Detection Rating Progression

Teams often upgrade detection ratings progressively as process capability improves and new measurement equipment becomes available. The following flowchart shows typical upgrade sequences:
  • Rating 10 (No Controls) — No detection method available
    • Add any control → Rating 9 (Sporadic)
    • Add inspection standards → Rating 8 (Visual, single)
  • Path A: Visual Inspection
    • Rating 8 → Double check → Rating 7 (Visual, double)
    • Rating 7 → Add measurement → Rating 4 (Variable, post-process)
    • Rating 4 → Move in-station → Rating 3 (Automated in-process)
    • Rating 3 → Add auto-react → Rating 2 (Auto detect + prevent)
    • Rating 2 → Redesign → Rating 1 (Error-proofing)
  • Path B: Statistical Process Control
    • Rating 8 → Implement SPC → Rating 6 (SPC charting)
    • Rating 6 → 100% variable measurement → Rating 4
    • Rating 4 → Move in-process → Rating 3
    • Rating 3 → Add auto-react → Rating 2

Detection in FMEA Workflow

FMEA Severity × Occurrence × Detection Matrix

The Detection rating combines with Failure Mode Severity and Occurrence to calculate Action Priority (AP) or Risk Priority Number (RPN):
RPN = Severity × Occurrence × Detection

Example:
  Severity = 8 (high functional impact)
  Occurrence = 4 (moderate probability)
  Detection = 3 (high automated detection)
  
  RPN = 8 × 4 × 3 = 96 (moderate priority)
If detection were upgraded to Rating 1 (error-proofing):
  RPN = 8 × 4 × 1 = 32 (much lower priority)
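The calculation above, as a small helper (a sketch; the range check reflects the 1–10 rating scales):

```python
def rpn(severity: int, occurrence: int, detection: int) -> int:
    """Risk Priority Number: Severity x Occurrence x Detection, each on a 1-10 scale."""
    for name, value in {"severity": severity, "occurrence": occurrence,
                        "detection": detection}.items():
        if not 1 <= value <= 10:
            raise ValueError(f"{name} must be 1-10, got {value}")
    return severity * occurrence * detection

print(rpn(8, 4, 3))  # 96, as in the example above
print(rpn(8, 4, 1))  # 32 after upgrading detection to error-proofing
```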

Post-Mitigation Tracking

In FMEA workflows, Detection is assessed twice:
| Phase | Field | When | Purpose |
| --- | --- | --- | --- |
| Pre-mitigation | `premitigationFMDetection` | Initial assessment | Baseline risk quantification |
| Post-mitigation | `postmitigationFMDetection` | After risk controls implemented | Validates control effectiveness |
Teams improve Detection by implementing risk controls:
  • Upgrade process controls (SPC, gauging)
  • Redesign product for error-proofing
  • Add redundant inspection steps
  • Implement automated detection systems
Example workflow:
  1. Identify failure mode: “Fastener under-torqued”
  2. Assess pre-mitigation Detection = 8 (visual-only)
  3. Plan risk control: “Implement torque verification gauge”
  4. Implement control and retest
  5. Update post-mitigation Detection = 4 (variable measurement)
  6. Recalculate RPN with new Detection rating
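The workflow above can be modeled as a small record that recalculates RPN once the post-mitigation rating is set (the field names follow premitigationFMDetection/postmitigationFMDetection; the class itself is illustrative):

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class FailureModeRisk:
    """Illustrative pre/post-mitigation Detection tracking for one failure mode."""
    name: str
    severity: int
    occurrence: int
    premitigation_detection: int                     # maps to premitigationFMDetection
    postmitigation_detection: Optional[int] = None   # maps to postmitigationFMDetection

    def rpn(self, phase: str = "pre") -> int:
        detection = (self.postmitigation_detection
                     if phase == "post" else self.premitigation_detection)
        if detection is None:
            raise ValueError("Post-mitigation detection not yet assessed")
        return self.severity * self.occurrence * detection

# The fastener example: visual-only (8) upgraded to variable measurement (4).
fm = FailureModeRisk("Fastener under-torqued", severity=8, occurrence=4,
                     premitigation_detection=8)
fm.postmitigation_detection = 4
print(fm.rpn("pre"), fm.rpn("post"))  # 256 128
```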

Implementation in TestAuto2

Risksheet Configuration

Detection ratings appear as enumeration dropdowns in FMEA Risksheets:
# System FMEA Risksheet column configuration
column_id: failureMode-detection
label: "Detection"
type: enum
source: failureMode-detection-enum
read_only: false
required: true  # Must be set before row is complete
formula_dependencies:
  - action_priority  # AP formula recalculates when detection changes

Polarion Custom Field

<!-- .polarion/tracker/fields/failureMode-detection-enum.xml -->
<customField id="failureMode-detection" type="enum">
  <enumeration>
    <enumValue id="0" name="Unanalyzed" />
    <enumValue id="1" name="Almost Certain" />
    <enumValue id="2" name="Very High" />
    <enumValue id="3" name="High" />
    <enumValue id="4" name="Moderately High" />
    <enumValue id="5" name="Moderate" />
    <enumValue id="6" name="Low" />
    <enumValue id="7" name="Very Low" />
    <enumValue id="8" name="Remote" />
    <enumValue id="9" name="Very Remote" />
    <enumValue id="10" name="Almost Impossible" />
  </enumeration>
</customField>

Failure Mode Work Item Type

When you create a Failure Mode in a System FMEA, Design FMEA, or Process FMEA Risksheet:
  1. Detection rating is mandatory — form validation requires a rating before the row is considered complete
  2. AP formula auto-calculates — Action Priority = f(Severity, Occurrence, Detection) updates automatically when you change this field
  3. Post-mitigation tracking — separate field captures detection rating after risk controls are implemented
  4. Traceability to Risk Controls — failure modes link to Risk Control work items that document which detection methods are implemented
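As a rough sketch of how an AP-style formula might classify a row, consider the following. The thresholds here are invented purely for illustration; the actual Action Priority logic in AIAG-VDA FMEA is a published lookup table over Severity, Occurrence, and Detection, and TestAuto2's formula may differ:

```python
def action_priority_sketch(severity: int, occurrence: int, detection: int) -> str:
    """Invented RPN-threshold classifier, standing in for the real AP lookup table."""
    score = severity * occurrence * detection
    if severity >= 9 or score >= 250:
        return "High"     # safety-critical severity dominates regardless of score
    if score >= 90:
        return "Medium"
    return "Low"
```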