Residual Risk Evaluation – Necessary Iteration

Posted: September 5, 2024


The cybersecurity standard ISO/SAE 21434 describes the development workflow of the automotive concept phase. Cybersecurity requirements are derived from cybersecurity goals, based on the risk treatment decisions of the TARA (Threat Analysis and Risk Assessment) process. However, the standard does not specify how to evaluate whether the implemented requirements are effective and adequate. To address this, the risks must be re-evaluated with the implemented requirements taken into account.

SystemWeaver’s cybersecurity module provides not only the initial risk evaluation but also a second evaluation after the cybersecurity requirements have been defined. This enables residual risk evaluation already in the early concept phase, as a confirmation of the adequacy of the planned controls, rather than deferring it to verification and validation.
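The re-evaluation step can be pictured as re-running the risk determination with the lowered attack feasibility that a deployed control is expected to achieve. The sketch below is purely illustrative: the rating scales and the additive risk matrix are simplified assumptions, not the normative tables of ISO/SAE 21434, and the feasibility reduction attributed to the control is hypothetical.

```python
# Illustrative sketch of a TARA risk re-evaluation step.
# The rating scales and risk matrix below are simplified assumptions,
# not the normative tables of ISO/SAE 21434.

IMPACT = {"negligible": 1, "moderate": 2, "major": 3, "severe": 4}
FEASIBILITY = {"very low": 1, "low": 2, "medium": 3, "high": 4}

def risk_value(impact: str, feasibility: str) -> int:
    """Combine impact and attack-feasibility ratings into a 1-5 risk
    value using a simple additive matrix (assumed for illustration)."""
    score = IMPACT[impact] + FEASIBILITY[feasibility]
    return min(5, max(1, score - 2))  # maps the 2..8 range onto 1..5

# Initial evaluation of a threat scenario, before any controls.
initial = risk_value("major", "high")

# Residual evaluation: a cybersecurity requirement (e.g. message
# authentication) is assumed to lower feasibility from "high" to "low".
residual = risk_value("major", "low")

print(initial, residual)  # the residual value should be lower
```

If the residual value is still above the accepted threshold, the requirement set is revised and the evaluation repeated, which is the iteration the article's title refers to.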

