Why Action Priority Replaced RPN
The RPN approach suffered from several critical flaws that AIAG-VDA 2019 addressed:
Mathematical Ambiguity: An RPN of 120 could result from (S=10, O=3, D=4), (S=6, O=4, D=5), or (S=4, O=6, D=5), three combinations that represent fundamentally different risk profiles. A catastrophic failure with low probability (S=10, O=3, D=4) carries different implications than a minor failure with high occurrence (S=4, O=6, D=5), yet both yield RPN=120. Action Priority distinguishes these scenarios by prioritizing with severity-first logic.
Threshold Trap: Organizations arbitrarily set RPN thresholds (e.g., “RPN > 100 requires action”) without considering that severity dominates risk significance. A failure mode with S=9 but RPN=90 might be ignored, while S=5 with RPN=150 triggers action — backwards from safety engineering principles.
False Precision: Multiplying ordinal scales (1-10 ratings) produces meaningless numbers. The difference between RPN 100 and 200 has no interpretable meaning, yet teams spent hours debating whether an RPN of 96 required mitigation.
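The ambiguity is easy to demonstrate. The snippet below (illustrative only) computes RPN for three different S-O-D profiles that all collapse to the same number:

```javascript
// Three S-O-D profiles that all multiply to RPN 120, showing why the
// product alone cannot rank risk.
const rpn = (s, o, d) => s * o * d;

const profiles = [
  { label: "catastrophic, unlikely", s: 10, o: 3, d: 4 },
  { label: "moderate, occasional",   s: 6,  o: 4, d: 5 },
  { label: "minor, frequent",        s: 4,  o: 6, d: 5 },
];

for (const p of profiles) {
  console.log(`${p.label}: RPN = ${rpn(p.s, p.o, p.d)}`); // 120 every time
}
```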
Action Priority solves these issues by explicitly acknowledging that severity, occurrence, and detection are not interchangeable. The methodology uses severity as the primary driver, then considers occurrence and detection to refine priority classification into actionable categories.
How Action Priority Works
The Three-Category System
| Action Priority | Required Response |
|---|---|
| H (High) | Prevention/detection controls MUST be improved. Immediate action plan required with assigned ownership. |
| M (Medium) | Prevention/detection controls SHOULD be improved. Action recommended with timeline based on resources. |
| L (Low) | Prevention/detection controls MAY be improved if cost-effective. Monitor through periodic review. |
Unlike RPN thresholds, these categories align with engineering decision-making: H demands action, M recommends improvement, L accepts current risk with monitoring.
Decision Matrix Logic
Action Priority determination follows a structured table lookup defined in AIAG-VDA FMEA Handbook Chapter 5. The exact algorithm varies by FMEA type (Design vs Process), but follows this general pattern:
A failure mode with S=10 (safety hazard) receives High priority even if occurrence is O=1 (extremely unlikely) and detection is D=1 (error-proofed). This reflects the engineering principle that catastrophic consequences demand prevention/detection improvements regardless of probability — a core tenet of functional safety standards like ISO 26262.
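A band-based sketch of that pattern is shown below. This is NOT the official AIAG-VDA table (which covers every S/O/D combination and differs between DFMEA and PFMEA); the band cuts and cell values are illustrative approximations that only show the severity-first lookup structure:

```javascript
// Simplified band-based AP lookup -- illustrative only, not the handbook table.
const sBand = (s) => (s >= 9 ? "S9-10" : s >= 7 ? "S7-8" : s >= 4 ? "S4-6" : "S1-3");
const oBand = (o) => (o >= 8 ? "high" : o >= 4 ? "med" : "low");

// Detection omitted for brevity; the real table refines M/L cells with D.
const apTable = {
  "S9-10": { high: "H", med: "H", low: "H" }, // severity dominates outright
  "S7-8":  { high: "H", med: "M", low: "M" },
  "S4-6":  { high: "M", med: "M", low: "L" },
  "S1-3":  { high: "M", med: "L", low: "L" },
};

const lookupAP = (s, o) => apTable[sBand(s)][oBand(o)];

console.log(lookupAP(10, 1)); // "H": extremely unlikely, yet still High
```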
Severity-First Philosophy
The Action Priority methodology explicitly rejects the equivalence assumption of RPN. Consider two failure modes:
| Failure Mode | S | O | D | RPN | AP |
|---|---|---|---|---|---|
| Sensor housing crack (cosmetic) | 4 | 10 | 5 | 200 | M |
| ECU safety monitor failure (ASIL D) | 10 | 2 | 4 | 80 | H |
Under RPN, the cosmetic defect (RPN=200) would receive priority over the safety-critical failure (RPN=80). Action Priority correctly classifies the ASIL D hazard as High priority because severity dominates — even though its RPN is lower.
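The inversion can be shown by ranking the two rows above by RPN versus by AP. The record fields below are illustrative, with AP values taken from the table:

```javascript
// Rank the two failure modes above by RPN (product) and by AP (category).
const modes = [
  { name: "Sensor housing crack",       s: 4,  o: 10, d: 5, ap: "M" },
  { name: "ECU safety monitor failure", s: 10, o: 2,  d: 4, ap: "H" },
];

const byRPN = [...modes].sort((a, b) => b.s * b.o * b.d - a.s * a.o * a.d);
const apRank = { H: 0, M: 1, L: 2 };
const byAP = [...modes].sort((a, b) => apRank[a.ap] - apRank[b.ap]);

console.log(byRPN[0].name); // cosmetic defect ranks first: wrong priority
console.log(byAP[0].name);  // safety-critical failure ranks first
```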
Pre-Mitigation vs Post-Mitigation AP
TestAuto2 implements the complete AIAG-VDA lifecycle by tracking both initial risk (before controls) and residual risk (after controls).
Many organizations define formal acceptance criteria such as “No failure modes with Post-Mitigation AP = High” or “All ASIL C/D failure modes must achieve Post-Mitigation AP ≤ M”. TestAuto2’s FMEA dashboards highlight unacceptable residual risk using traffic-light color coding (red for H, orange for M, green for L).
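An acceptance gate like the first criterion reduces to a simple filter. The `postMitigationAP` field name below is illustrative, not TestAuto2's actual schema:

```javascript
// Hypothetical gate for "no failure modes with Post-Mitigation AP = High".
const residualHigh = (modes) =>
  modes.filter((fm) => fm.postMitigationAP === "H");

const fmea = [
  { id: "FM-01", postMitigationAP: "L" },
  { id: "FM-02", postMitigationAP: "H" },
  { id: "FM-03", postMitigationAP: "M" },
];

console.log(residualHigh(fmea).map((fm) => fm.id)); // ["FM-02"] fails the gate
```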
Integration with ISO 26262 ASIL
Action Priority complements but does not replace ASIL classification. The integration follows this logic:
| ASIL Level | Minimum AP Requirement (Post-Mitigation) |
|---|---|
| ASIL D | Must achieve L (strict controls required) |
| ASIL C | Must achieve M or better |
| ASIL B | Must achieve M or better (context-dependent) |
| ASIL A | May accept H if justified |
| QM | No formal AP requirement |
A failure mode linked to an ASIL D safety goal that shows Post-Mitigation AP = H indicates insufficient risk controls — the design does not meet functional safety requirements. This triggers mandatory design iteration.
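The table above becomes a numeric comparison once APs are ranked. Field and function names below are illustrative:

```javascript
// Sketch of the ASIL gate: rank APs so "M or better" is a numeric check.
const apRank = { L: 0, M: 1, H: 2 };               // lower = less residual risk
const maxAllowedAP = { D: "L", C: "M", B: "M", A: "H", QM: "H" };

const meetsAsil = (asil, postAP) =>
  apRank[postAP] <= apRank[maxAllowedAP[asil]];

console.log(meetsAsil("D", "H")); // false: triggers mandatory design iteration
console.log(meetsAsil("C", "M")); // true: residual risk acceptable
```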
Common Misconceptions
“We can calculate AP from S-O-D using a formula”: Action Priority is determined by table lookup, not arithmetic. TestAuto2’s Risksheet configurations embed the AIAG-VDA decision tables as JavaScript formulas, but these implement lookup logic, not multiplication.
“AP is just RPN with different thresholds”: AP fundamentally differs by using severity-first, multi-dimensional classification. Two failure modes with identical RPN can have different AP values based on their S-O-D distribution.
“Low AP means we can ignore the failure mode”: Low AP means current controls are adequate, not that the failure mode is unimportant. Periodic review remains required to ensure process stability.
“We can assign AP values manually without S-O-D ratings”: Action Priority must be derived from severity, occurrence, and detection. Manual override breaks traceability and violates AIAG-VDA methodology. TestAuto2 enforces this through Risksheet formula bindings.
Progressive FMEA Workflow
TestAuto2 supports phased FMEA execution using Action Priority filtering:
Phase 1 — Address High Priority:
- Filter FMEA Risksheet view to show only Pre-Mitigation AP = H
- Design prevention/detection controls for all high-priority failure modes
- Update occurrence/detection ratings, calculate Post-Mitigation AP
- Verify all Post-Mitigation AP values ≤ M
Phase 2 — Review Medium Priority:
- Filter to Pre-Mitigation AP = M
- Evaluate cost-benefit of additional controls
- Implement economically justified improvements
- Document acceptance rationale for residual M risks
Phase 3 — Monitor Low Priority:
- Document existing controls for Pre-Mitigation AP = L failure modes
- Establish periodic review cadence (quarterly/annual)
- Track process capability indices (Cpk) to detect degradation
This staged approach aligns resource allocation with risk significance, ensuring critical failure modes receive immediate attention while managing overall project timelines.
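The phase filters above amount to selecting by AP value. The `preMitigationAP` field name is illustrative, not TestAuto2's actual schema:

```javascript
// Phased filtering sketch: Phase 1 works the High set, later phases widen it.
const inPhase = (modes, ap) => modes.filter((m) => m.preMitigationAP === ap);

const fmea = [
  { id: "FM-01", preMitigationAP: "H" },
  { id: "FM-02", preMitigationAP: "M" },
  { id: "FM-03", preMitigationAP: "L" },
];

console.log(inPhase(fmea, "H").map((m) => m.id)); // ["FM-01"] for Phase 1
```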
Visual Indicators in TestAuto2
TestAuto2 uses color-coded traffic lights to communicate Action Priority at a glance:
- 🟢 Green (L): Acceptable risk, current controls effective
- 🟠 Orange (M): Recommended improvement, evaluate cost-benefit
- 🔴 Red (H): Mandatory action required, assign owner and deadline
Risksheet columns, PowerSheet cells, and dashboard KPI cards all use this consistent color scheme. Summary reports display AP distribution histograms showing the count of failure modes in each category, enabling executive-level risk visibility without drilling into individual FMEA rows.
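A distribution summary of the kind those histograms display is a simple count per category; the `ap` field name is illustrative:

```javascript
// Count failure modes per AP category, as a dashboard histogram would.
const distribution = (modes) =>
  modes.reduce(
    (acc, m) => { acc[m.ap] += 1; return acc; },
    { H: 0, M: 0, L: 0 }
  );

console.log(distribution([{ ap: "H" }, { ap: "L" }, { ap: "L" }]));
// { H: 1, M: 0, L: 2 }
```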
Practical Application
To apply Action Priority methodology in TestAuto2:
1. Define Failure Mode: Identify how the component/function can fail (Define Failure Modes)
2. Rate Severity: Assess effect consequences using 1-10 scale (Assess Severity, Occurrence, Detection)
3. Rate Occurrence: Evaluate failure frequency before controls (FMEA Occurrence Enumeration)
4. Rate Detection: Assess control effectiveness at catching failures (FMEA Detection Enumeration)
5. Calculate Pre-Mitigation AP: Automatic via Risksheet formula (DFMEA Risksheet Configuration)
6. Design Controls: Implement prevention/detection improvements for H/M priorities (Link to Risk Controls)
7. Calculate Post-Mitigation AP: Update O/D ratings, verify risk reduction (Track Post-Mitigation Ratings)
8. Validate Acceptance: Confirm residual risk meets organizational criteria (Generate FMEA Reports)
By replacing mathematical artifice with engineering judgment, Action Priority transforms FMEA from a compliance exercise into a genuine risk management tool — one that guides teams toward meaningful design improvements rather than chasing arbitrary numerical targets.