Grounded AI in System Engineering: Letting the SystemWeaver Model Do the Heavy Lifting
Posted: January 21, 2026
We often see AI initiatives in engineering begin by pointing an LLM at documents, while the richest context – the structured system and safety data in MBSE/PLM tools – ends up as an afterthought. Our experience in safety-critical development is that the real asset is the system model you already keep in SystemWeaver: a typed, versioned graph of requirements, functions, hazards, components, tests and variants.
Because that information is stored in a structured graph, it’s a very good fit for AI and LLM-based assistants. You can ask concrete questions like “what is impacted if this requirement changes?” or “which tests cover this safety goal?” and follow the real links in the model instead of hoping a text search will find the right paragraph. The work we’ve done around ISO/SAE 21434 and TARA, our Web TARA application with an AI co-pilot, and our Enhancing TARA with GenAI article all build on the same idea: let the SystemWeaver model be the reliable source of truth, and let AI help with suggestions, drafting and prioritisation on top of it.
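To make that concrete, here is a minimal sketch of such an impact query over a small in-memory snapshot of a model. The item ids, link types and data structures are purely illustrative, not SystemWeaver's actual API; in a real setup the graph would be read from the tool itself.

```python
# Minimal sketch of answering "what is impacted if this requirement changes?"
# by walking typed links in an in-memory snapshot of the model.
from collections import deque

# Hypothetical export: item id -> (type, name), plus typed links (from, link_type, to).
items = {
    "REQ-42": ("Requirement", "Max braking latency"),
    "FUN-7":  ("Function", "Brake-by-wire control"),
    "TST-3":  ("TestCase", "Latency regression test"),
    "SG-1":   ("SafetyGoal", "Avoid unintended braking"),
}
links = [
    ("REQ-42", "allocatedTo", "FUN-7"),
    ("TST-3",  "verifies",    "REQ-42"),
    ("REQ-42", "refines",     "SG-1"),
]

def impacted_by(item_id: str) -> list[tuple[str, str]]:
    """Breadth-first walk over typed links, in both directions,
    returning (link_type, item_id) pairs reachable from the changed item."""
    seen, out, queue = {item_id}, [], deque([item_id])
    while queue:
        current = queue.popleft()
        for src, link_type, dst in links:
            for neighbour in (dst,) if src == current else (src,) if dst == current else ():
                if neighbour not in seen:
                    seen.add(neighbour)
                    out.append((link_type, neighbour))
                    queue.append(neighbour)
    return out

print(impacted_by("REQ-42"))
# e.g. [('allocatedTo', 'FUN-7'), ('verifies', 'TST-3'), ('refines', 'SG-1')]
```

Because the links are typed, the same walk can be filtered, for example following only "verifies" links when asking which tests cover a safety goal.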
We’ve also started exploring this in practice together with partners. A Chalmers master’s thesis, done in collaboration with SystemWeaver, looked at using LLMs plus retrieval to support cybersecurity requirements work. The results were encouraging – the model could propose reasonable requirements – but they also confirmed something important for us: the outputs still need to be reviewed, and completeness can’t be guaranteed. That matches how we think AI should be used in SystemWeaver: as a helpful assistant that is always grounded in the model, never as an automatic decision-maker.
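To illustrate the retrieval half of that setup, the toy sketch below picks the model items most similar to a query before anything is sent to an LLM. The embed() function, the item texts and the scoring are placeholders we made up for illustration, not the thesis implementation or a SystemWeaver interface.

```python
# A minimal retrieval step: embed model items, pull the nearest ones for a
# query, and pass only those to the LLM as context.
import math

def embed(text: str) -> list[float]:
    # Placeholder: a real setup would call an embedding model here.
    # This toy version hashes character trigrams into a small vector.
    vec = [0.0] * 64
    for i in range(len(text) - 2):
        vec[hash(text[i:i + 3]) % 64] += 1.0
    norm = math.sqrt(sum(v * v for v in vec)) or 1.0
    return [v / norm for v in vec]

corpus = {
    "REQ-101": "The gateway shall authenticate diagnostic sessions.",
    "REQ-102": "CAN messages shall be rate-limited per source node.",
    "HAZ-9":   "Spoofed brake request on the chassis bus.",
}

def retrieve(query: str, k: int = 2) -> list[str]:
    """Return the ids of the k items most similar to the query (cosine similarity)."""
    q = embed(query)
    return sorted(
        corpus,
        key=lambda item_id: -sum(a * b for a, b in zip(q, embed(corpus[item_id]))),
    )[:k]

context_ids = retrieve("draft a requirement mitigating spoofed brake requests")
# The prompt would then quote these items verbatim, so every suggestion
# can be traced back to a reviewed model element instead of thin air.
```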
In a recent collaboration with a global automotive customer, we’ve taken a similar approach to “everyday” requirements work. Engineers select a set of SystemWeaver requirements and ask an assistant to do three things: suggest clearer wording (closer to INCOSE-style patterns), highlight possible conflicts or duplicates, and sketch a few candidate test ideas that can be linked back under the right items in the model. Sometimes the suggestions are genuinely useful and save time; sometimes they miss the point or over-simplify the intent. The important part is that the conversation stays inside the system model – with real IDs, links and variants – and that the engineers remain firmly in charge of what is accepted or discarded.
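In rough pseudocode, that workflow could look like the sketch below. The llm() call and the data shapes are stand-ins invented for illustration; the point it tries to capture is that every suggestion carries a real model id, and nothing is linked back into the model without an explicit human accept.

```python
# Sketch of the review-gated workflow: suggestions reference real model ids
# and are only applied after an engineer flips the accepted flag.
from dataclasses import dataclass

@dataclass
class Suggestion:
    target_id: str          # the model item the suggestion refers to
    kind: str               # "rewording" | "conflict" | "test-idea"
    text: str
    accepted: bool = False  # flipped only by a human reviewer

def llm(prompt: str) -> list[Suggestion]:
    # Placeholder for a real model call; shown here with a canned answer.
    return [
        Suggestion("REQ-42", "rewording",
                   "The system shall apply brake torque within 50 ms of pedal input."),
        Suggestion("REQ-42", "test-idea",
                   "Measure pedal-to-torque latency at min/max supply voltage."),
    ]

def propose(selected: dict[str, str]) -> list[Suggestion]:
    """Ask for rewordings, conflicts and test ideas, quoting each requirement
    with its id so the answers stay anchored in the model."""
    prompt = "For each requirement, suggest INCOSE-style wording, conflicts, tests:\n"
    prompt += "\n".join(f"{rid}: {text}" for rid, text in selected.items())
    return llm(prompt)

def apply_accepted(suggestions: list[Suggestion]) -> None:
    for s in suggestions:
        if s.accepted:                      # the engineer stays in charge
            print(f"link under {s.target_id}: [{s.kind}] {s.text}")

suggestions = propose({"REQ-42": "Brakes should react fast when pedal is pressed."})
suggestions[0].accepted = True              # reviewer accepts the rewording only
apply_accepted(suggestions)
```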
The same pattern shows up in other areas too, from change impact to SDV testing. In our collaboration with RemotiveLabs, for example, SystemWeaver holds the system and requirements model while their environment runs the virtual tests. It’s easy to see how AI assistants can sit between the two: surfacing affected tests when an architecture changes, or highlighting coverage gaps in a safety chain. We don’t see SystemWeaver as “an AI product”; we see it as the system-engineering backbone that makes useful, auditable AI possible when you decide the time is right.
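As a final illustration, such an assistant could join the model's verification links with the latest results from a virtual test run and flag safety goals that lack a passing test. The interfaces below are hypothetical, not something SystemWeaver or RemotiveLabs actually exposes.

```python
# Sketch of the "assistant between the two" idea: join the model's
# verification links with virtual-run results to surface coverage gaps.

# From the system-model side: which test verifies which safety goal.
verifies = {"TST-3": "SG-1", "TST-4": "SG-1", "TST-5": "SG-2"}

# From the test-environment side: the latest virtual-run results.
latest_results = {"TST-3": "failed", "TST-4": "passed"}  # TST-5 never ran

def coverage_gaps() -> dict[str, list[str]]:
    """Safety goals with no passing test in the latest run, with the reasons."""
    gaps: dict[str, list[str]] = {}
    for test, goal in verifies.items():
        status = latest_results.get(test, "not run")
        if status != "passed":
            gaps.setdefault(goal, []).append(f"{test}: {status}")
    # Drop goals that have at least one passing test.
    covered = {g for t, g in verifies.items() if latest_results.get(t) == "passed"}
    return {g: reasons for g, reasons in gaps.items() if g not in covered}

print(coverage_gaps())  # {'SG-2': ['TST-5: not run']}
```

The same join, run whenever an architecture element changes, is what would let an assistant surface the affected tests automatically.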