VAIVA GmbH - Safe Mobility


VAIVA: central project partner in the VDA's KI Absicherung project

Theresa Ley

by Christian

The KI Absicherung project came to an end with the grand closing event in Berlin on June 23, 2022. In four subprojects, a methodology for the systematic identification, detection and mitigation of functional weaknesses in the application of Artificial Intelligence for autonomous driving was developed. With its safety expertise, VAIVA was instrumental in the success of the project and, as the central coordinator, prepared the final safety argumentation for the project.

KI Absicherung belongs to the AI family of projects, together with its three sister projects (KI Wissen, KI Data-Tooling, KI Delta-Learning). This AI family is part of the VDA flagship initiative for autonomous and connected driving. KI Absicherung is the first project of the AI family to be completed, and it now hands over its results and data to its sister projects.

The application of Artificial Intelligence (AI) is considered a key technology for autonomous driving. In the KI Absicherung project, AI and safety experts from industry, together with academic partners, developed a methodology that systematically identifies, captures and mitigates the weaknesses of AI functions. This was demonstrated using the exemplary function of pedestrian detection. The goal was to achieve an industry-wide consensus on the methodology for safeguarding AI functions. To this end, a consortium of 24 partners worked together, consisting of OEMs, suppliers, technology providers, and scientific institutes and universities.

In KI Absicherung, several AI functions were developed (especially the Single Shot Detector, SSD, a 2D bounding-box detector for pedestrians), which served as the basis for the project. The (further) development of this exemplary AI function took place in subproject 1 (SP1). In SP2, synthetic data was generated, with which the AI function from SP1 was trained. Synthetic data has the great advantage that it can be generated systematically and in large quantities: it is not necessary to drive long distances to capture rare scenarios in real data. Furthermore, test scenarios can be varied arbitrarily in order to systematically probe weaknesses of the AI function. However, the transferability of synthetic data to the real world remains an open question and challenge.
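The systematic variation of synthetic test scenarios can be illustrated with a minimal sketch. The parameter names below (weather, time of day, occlusion level) are invented for illustration and do not reflect the actual KI Absicherung data-generation pipeline; the point is that every combination can be enumerated deterministically, which is impractical with real-world recordings.

```python
from itertools import product

# Hypothetical scenario parameters -- chosen for illustration only;
# the real synthetic-data pipeline uses its own parameterization.
weather = ["clear", "rain", "fog"]
time_of_day = ["day", "dusk", "night"]
occlusion = [0.0, 0.25, 0.5, 0.75]  # fraction of the pedestrian occluded

# Enumerate every combination so that no corner case is skipped --
# the key advantage of systematically generated synthetic data.
scenarios = [
    {"weather": w, "time_of_day": t, "occlusion": o}
    for w, t, o in product(weather, time_of_day, occlusion)
]

print(len(scenarios))  # 3 * 3 * 4 = 36 scenario variants
```

Each generated scenario dictionary would then drive one rendering run of the synthetic-data tool, and the AI function's detection performance can be compared across the grid to localize weaknesses.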

In SP3, methods and measures were developed and defined to systematically investigate the AI function. This is particularly important when using Artificial Intelligence, since Deep Neural Networks (DNNs) are a black box whose decisions cannot be traced directly. Within the project, so-called DNN-specific Safety Concerns were formulated, which address the most important weaknesses of DNNs. The methods and tests from SP3 provide evidence for the assurance strategies from SP4, putting the safety argumentation on a sound basis. This assurance strategy was created in the so-called Goal Structuring Notation (GSN), a clear, graphical representation. All other parts of the project come together in the safety argumentation: the AI function as the central element that is argued to be safe, the data for its training and testing, and the methods for testing the weaknesses of the AI function as evidence.

The activities of VAIVA (still ASTech during the project) focused on the safety aspects and thus on SP4, as well as on work as a safety expert for DNN-specific Safety Concerns. VAIVA was the central coordinator for the preparation of the final safety argumentation, one of the central deliverables of the project.

The role of artificial intelligence will continue to grow due to its immense possibilities, also and especially in the automotive sector, not least for autonomous driving. The challenge, however, is not only to develop AI-based functions, but above all to ensure that they also meet the very strict safety standards of the automotive sector. The KI Absicherung project has laid the foundations for how AI can be described and argued as safe in the future.

For VAIVA, Christian was responsible, among other things, for coordinating the safety argumentation and sat on the steering committee of the project together with Nicole. In addition, Timo, with the support of Yanjie, acted as a safety expert and created part of the safety argumentation.