Why SOTIF Matters

Traditional functional safety assumes that if all components work as designed, the system is safe. SOTIF challenges this assumption by asking: What if the design itself is insufficient? Consider an autonomous emergency braking (AEB) system that works flawlessly in sunny conditions but fails to detect pedestrians in heavy fog. The camera isn’t malfunctioning—it’s operating within its design specification. The radar correctly processes returns—exactly as programmed. Yet the combined system creates a hazardous situation because the intended functionality has inherent limitations that weren’t adequately addressed during design.
SOTIF doesn’t replace ISO 26262—it complements it. ISO 26262 asks “What if the sensor fails?” while SOTIF asks “What if the sensor works perfectly but doesn’t see everything it needs to?” Both analyses are required for automated driving functions.

The SOTIF Risk Space Model

SOTIF categorizes system behavior into four quadrants based on two dimensions: known vs. unknown scenarios, and safe vs. unsafe behavior. This creates a framework for systematically reducing risk:
  • Safe & Known: Scenarios where the system operates correctly and safely. This is the target Operational Design Domain (ODD).
  • Unsafe & Known: Identified edge cases where the system’s limitations cause unsafe behavior. Example: AEB struggles with low-contrast objects in dawn lighting. These must be mitigated through design improvements or operational constraints.
  • Safe & Unknown: Scenarios not yet encountered during validation, but in which the system would behave safely. Goal: minimize this quadrant through comprehensive scenario discovery.
  • Unsafe & Unknown: The most dangerous quadrant: scenarios neither discovered nor tested, where the system fails unsafely. SOTIF requires evidence that this quadrant has been reduced to an acceptable level.
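The two-dimensional quadrant mapping above can be sketched directly in code. This is an illustrative model only; the enum and function names are assumptions, not part of ISO 21448 or TestAuto2.

```python
from enum import Enum

class Quadrant(Enum):
    """The four SOTIF risk quadrants (illustrative labels)."""
    KNOWN_SAFE = "known-safe"          # target ODD behavior
    KNOWN_UNSAFE = "known-unsafe"      # identified limitations to mitigate
    UNKNOWN_SAFE = "unknown-safe"      # untested but benign scenarios
    UNKNOWN_UNSAFE = "unknown-unsafe"  # undiscovered hazards to minimize

def classify(known: bool, safe: bool) -> Quadrant:
    """Map a scenario's two SOTIF dimensions onto its quadrant."""
    if known:
        return Quadrant.KNOWN_SAFE if safe else Quadrant.KNOWN_UNSAFE
    return Quadrant.UNKNOWN_SAFE if safe else Quadrant.UNKNOWN_UNSAFE
```

The point of the model is that validation work moves scenarios leftward (unknown to known) while mitigation work moves them upward (unsafe to safe).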

SOTIF in TestAuto2 Implementation

TestAuto2 integrates SOTIF analysis into the existing hazard identification workflow through dedicated work item types and traceability patterns:

Hazard Classification

The hazard work item type includes a hazardCategory field that distinguishes SOTIF scenarios from traditional malfunction scenarios:
| Category          | ISO Standard | Focus                                      |
| ----------------- | ------------ | ------------------------------------------ |
| Functional Safety | ISO 26262    | Component/subsystem malfunctions           |
| SOTIF Limitation  | ISO 21448    | Design limitations, sensor insufficiencies |
| Cybersecurity     | ISO 21434    | Intentional attacks, unauthorized access   |
| Environmental     | Various      | External conditions (weather, road, EMI)   |
This categorization drives workflow routing—SOTIF hazards require scenario-based analysis rather than FMEA-style failure mode decomposition.
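A minimal sketch of that routing, assuming a hazard work item is represented as a dictionary whose hazardCategory field selects the analysis method. The routing table mirrors the categories above; the dict layout is an illustrative assumption, not the TestAuto2 schema.

```python
# Maps each hazardCategory value to the analysis workflow it triggers.
ANALYSIS_METHOD = {
    "Functional Safety": "FMEA failure-mode decomposition",     # ISO 26262
    "SOTIF Limitation":  "scenario-based analysis",             # ISO 21448
    "Cybersecurity":     "threat analysis and risk assessment",  # ISO 21434
    "Environmental":     "environmental condition analysis",
}

def route_hazard(hazard: dict) -> str:
    """Return the analysis workflow for a hazard based on its category."""
    try:
        return ANALYSIS_METHOD[hazard["hazardCategory"]]
    except KeyError as exc:
        raise ValueError(
            f"Unknown hazardCategory: {hazard.get('hazardCategory')!r}"
        ) from exc
```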

Operational Phase Mapping

The operationalPhase field on hazards maps directly to SOTIF’s Operational Design Domain (ODD) concept:
| Operational Phase     | SOTIF ODD Element             |
| --------------------- | ----------------------------- |
| Ignition              | System initialization state   |
| Normal Driving        | Nominal operation conditions  |
| Low-Speed Maneuvering | Urban/parking scenarios       |
| Highway Cruising      | High-speed straight roads     |
| Emergency Maneuvers   | Limit-case vehicle dynamics   |
| Parking               | Standstill state transitions  |
| Maintenance           | Service mode limitations      |
Each operational phase represents a different set of environmental conditions, sensor performance characteristics, and expected behaviors. SOTIF requires demonstrating safety across all defined operational phases—or explicitly excluding phases from the ODD and preventing system activation.
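The activation-gating requirement at the end of that paragraph can be sketched as follows. The phase names come from the table above; which phases sit inside vs. outside the ODD is an illustrative assumption (here Maintenance is treated as explicitly excluded).

```python
# Phases declared inside the ODD (assumption for illustration).
IN_ODD = {
    "Ignition", "Normal Driving", "Low-Speed Maneuvering",
    "Highway Cruising", "Emergency Maneuvers", "Parking",
}
# Phases explicitly excluded from the ODD: activation must be blocked.
EXCLUDED = {"Maintenance"}

def may_activate(operational_phase: str) -> bool:
    """A phase outside the declared ODD must prevent system activation.
    Unrecognized phases are treated as outside the ODD (fail-safe default)."""
    if operational_phase in EXCLUDED:
        return False
    return operational_phase in IN_ODD
```

Defaulting unknown phases to "not activatable" matches the SOTIF stance that anything outside the demonstrated ODD is unvalidated.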

Scenario-Based Validation

SOTIF hazards link to validationTestCase work items that represent scenario-based testing rather than unit-level verification. These test cases capture:
  • Environmental Conditions: Weather, lighting, road surface, traffic density
  • Sensor Performance: Detection range, false positive/negative rates, latency
  • System Response: Actual behavior vs. intended behavior in the scenario
  • Outcome: Safe/unsafe, expected/unexpected
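The four field groups above can be sketched as a dataclass. The field names and the `is_sotif_concern` helper are illustrative assumptions, not the validationTestCase schema.

```python
from dataclasses import dataclass

@dataclass
class ValidationTestCase:
    """Sketch of the scenario-based fields a validationTestCase captures."""
    test_id: str
    environment: dict        # weather, lighting, road surface, traffic density
    sensor_performance: dict  # detection range, FP/FN rates, latency
    intended_behavior: str
    actual_behavior: str
    outcome_safe: bool
    outcome_expected: bool

    def is_sotif_concern(self) -> bool:
        # Unsafe *or* unexpected outcomes both warrant SOTIF follow-up:
        # an unexpected-but-safe result may point at an unknown scenario.
        return (not self.outcome_safe) or (not self.outcome_expected)
```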
The Requirements Traceability Report in TestAuto2 tracks the coverage chain:
Hazard (SOTIF) → Safety Goal → System Requirement → Validation Test Case
      ↓                                                      ↓
hazardCategory:                                 Scenario-based validation with
"SOTIF Limitation"                              environmental parameters and
                                                sensor performance data
This differs from ISO 26262’s verification chain, which follows a V-Model from requirements down to design elements and back up through component testing.

SOTIF Analysis Workflow in TestAuto2

1. Scenario Discovery

Create hazard work items with hazardCategory = “SOTIF Limitation”. Document:
  • Triggering Conditions: Environmental factors (fog density, sunlight angle, wet pavement reflections)
  • Sensor Limitations: Detection range degradation, classification errors, latency increases
  • Functional Insufficiency: What the system cannot do (e.g., “Cannot detect pedestrians wearing radar-absorbent clothing at >30m range in rain”)
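Step 1 can be sketched as a small constructor that assembles the documented fields into a hazard work item. The dict layout and function name are illustrative assumptions.

```python
def new_sotif_hazard(title, triggering_conditions, sensor_limitations,
                     insufficiency):
    """Assemble a SOTIF hazard work item per the scenario-discovery step."""
    return {
        "workItemType": "hazard",
        "hazardCategory": "SOTIF Limitation",
        "title": title,
        # Environmental factors that trigger the limitation:
        "triggeringConditions": list(triggering_conditions),
        # How sensor performance degrades under those conditions:
        "sensorLimitations": list(sensor_limitations),
        # What the system cannot do, stated concretely:
        "functionalInsufficiency": insufficiency,
    }
```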

2. Known vs. Unknown Classification

Use the hazardDescription field to indicate whether the scenario was:
  • Known from design: Acknowledged limitation documented in functional specification
  • Discovered during testing: Unexpected behavior found during validation
  • Field incident: Real-world occurrence after deployment
This classification feeds into the SOTIF risk quadrant model and determines prioritization.
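A minimal sketch of how that classification could feed prioritization: each discovery class maps to whether the scenario started out known and to a triage priority (0 = most urgent). The priority ordering is an illustrative assumption.

```python
# Discovery classes from step 2, with assumed triage metadata.
DISCOVERY = {
    "known from design":         {"was_known": True,  "priority": 2},
    "discovered during testing": {"was_known": False, "priority": 1},
    "field incident":            {"was_known": False, "priority": 0},
}

def triage(discovery: str) -> dict:
    """Return quadrant/priority metadata for a documented unsafe scenario."""
    info = DISCOVERY[discovery]
    # Once documented, the scenario sits in the known-unsafe quadrant;
    # what differs is where it started and how urgently to mitigate it.
    return {"quadrant": "known-unsafe", **info}
```

A field incident ranks highest because it escaped both design analysis and validation, i.e. it was an unknown-unsafe scenario until it occurred.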

3. Safety Goal Derivation

Link SOTIF hazards to safetyGoal work items that specify:
  • ODD Constraints: “System shall deactivate when fog reduces visual range below 200m”
  • Performance Requirements: “System shall maintain ≥95% pedestrian detection rate in dawn/dusk lighting”
  • Degradation Strategy: “System shall alert driver and reduce speed when sensor confidence <80%”
Unlike ISO 26262’s ASIL-driven goals, SOTIF safety goals focus on demonstrating acceptable risk thresholds through statistical validation.
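The three example goals above can be sketched as boolean checks. Thresholds come from the bullet list; the function names are assumptions, and the fog constraint is interpreted as deactivation once visual range drops below 200 m.

```python
def odd_constraint_met(visual_range_m: float) -> bool:
    """True while the fog-related ODD constraint still allows activation."""
    return visual_range_m >= 200.0

def detection_goal_met(dawn_dusk_detection_rate: float) -> bool:
    """Performance goal: >=95% pedestrian detection in dawn/dusk lighting."""
    return dawn_dusk_detection_rate >= 0.95

def needs_degradation(sensor_confidence: float) -> bool:
    """Degradation strategy: alert driver and reduce speed below 80% confidence."""
    return sensor_confidence < 0.80
```

In practice the detection-rate goal is checked statistically over a validation campaign, not per-frame; the per-call form here is just to make the thresholds concrete.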

4. Validation Evidence

Create validationTestCase work items linked to SOTIF safety goals. Each test case documents:
  • Scenario ID: Reference to scenario database (e.g., OpenSCENARIO file)
  • Expected Behavior: System specification for this scenario
  • Actual Behavior: Observed system response during testing
  • Pass/Fail Criteria: Quantitative thresholds (detection rate, reaction time, false alarm rate)
The Safety Readiness Scorecard dashboard tracks SOTIF validation coverage as the percentage of identified scenarios with passing test evidence.
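That scorecard metric, the percentage of identified scenarios with passing test evidence, can be sketched as follows. The dict keys are illustrative assumptions.

```python
def sotif_coverage(scenarios: list[dict]) -> float:
    """Percentage of identified scenarios with passing test evidence."""
    if not scenarios:
        return 0.0  # no identified scenarios yet: report zero coverage
    passing = sum(1 for s in scenarios if s.get("evidence") == "pass")
    return 100.0 * passing / len(scenarios)
```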

SOTIF Risk Argument Structure

ISO 21448 requires demonstrating that unsafe unknown scenarios have been reduced to an acceptable level. TestAuto2 supports this through traceability evidence:
| SOTIF Argument Element   | TestAuto2 Evidence          |
| ------------------------ | --------------------------- |
| Scenario Coverage        | Hazard count by category    |
| Validation Completeness  | Test case linkage %         |
| Field Exposure           | Use step operational hours  |
| Acceptable Residual Risk | Risk matrix classification  |
| ODD Compliance           | Operational phase mapping   |
The HAZID Risk Matrix Report visualizes SOTIF hazards using initial severity vs. likelihood, colored by whether sufficient validation evidence exists. Red cells indicate gaps requiring additional scenario testing.
SOTIF doesn’t require analyzing every conceivable scenario—that’s impossible. Instead, it requires a justified argument that your scenario discovery process was comprehensive enough to capture the significant risks. TestAuto2’s traceability model supports this by linking hazards to their discovery method (simulation, field testing, expert review, etc.).
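The matrix described above places each hazard by initial severity vs. likelihood and colors the cell by evidence status. A minimal sketch, assuming integer severity/likelihood scales and a simple red/green scheme (red = gap requiring more scenario testing):

```python
def matrix_cell(severity: int, likelihood: int, has_evidence: bool) -> dict:
    """Place a hazard in the risk matrix and color it by evidence status."""
    return {
        "row": severity,      # initial severity axis
        "col": likelihood,    # likelihood axis
        "color": "green" if has_evidence else "red",  # red marks a gap
    }
```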

Integration with ISO 26262 Workflows

TestAuto2’s unified data model allows SOTIF and ISO 26262 analyses to coexist and cross-reference:
  • Shared Risk Controls: A riskControl work item can mitigate both a malfunction-based failure mode and a SOTIF limitation scenario
  • Combined Coverage: The FMEA Coverage Report includes both design failure modes (ISO 26262) and SOTIF scenarios in the gap analysis
  • Traceability Chains: Both SOTIF validation test cases and ISO 26262 verification test cases link to the same system requirements, enabling unified coverage tracking
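The shared-risk-control idea above can be sketched as a query over hazard-to-control links: a control referenced from both a Functional Safety hazard and a SOTIF Limitation hazard is shared evidence. The tuple layout is an illustrative assumption.

```python
def shared_controls(links: list[tuple[str, str, str]]) -> set[str]:
    """Given (hazard_id, hazard_category, control_id) links, return the
    riskControl items referenced by both ISO 26262 and SOTIF hazards."""
    by_category: dict[str, set[str]] = {}
    for _hazard_id, category, control_id in links:
        by_category.setdefault(category, set()).add(control_id)
    return (by_category.get("Functional Safety", set())
            & by_category.get("SOTIF Limitation", set()))
```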
The Standards Compliance Overview dashboard shows separate compliance percentages for ISO 26262 (malfunction-based) and ISO 21448 (limitation-based) workflows while highlighting shared evidence artifacts.