Trustworthy artificial intelligence

Designing safe artificial intelligence systems and applications

Artificial intelligence (AI) methods are used primarily to perform highly complex tasks, such as processing natural language or classifying objects in images. These methods not only enable substantially higher levels of automation, but also open up completely new fields of application.

The term "artificial intelligence" is now used primarily in the context of machine learning, for example in neural networks, decision trees or support vector machines. However, it also covers a large number of other applications, such as expert systems or knowledge graphs.

A feature common to all these methods is their ability to solve problems by modelling concepts that are generally associated with intelligent behaviour. The use of artificial intelligence enables concepts such as learning, planning, sensing, communicating and cooperating to be transferred to technical systems. With these capabilities, completely new intelligent systems and applications can be realized. Artificial intelligence is therefore often seen as the key technology of the future.

Protective devices and control systems based upon artificial intelligence have already made fully automated vehicles and robots possible. Assistive systems capable of recognizing hazardous situations can also help to prevent accidents.

However, the use of systems, particularly machines, that are based upon AI methods also changes the physical and mental stresses to which workers are exposed. In order to prevent the use of this technology from giving rise to new hazards, or to reduce such hazards, trustworthy artificial intelligence is required.

The concept of trustworthiness in this sense extends well beyond safety and includes the following essential characteristics:

  • Reliability
    The system or application must continue to function correctly under all anticipated ambient conditions.
  • Robustness
    The system or application must be able to function correctly even under the influence of disturbances (internal and external faults and distortions) or system faults, and must not enter a dangerous state (a simple illustrative check is sketched after this list).
  • Resistance to attack
    The system or application must be capable of withstanding external attacks.
  • Transparency
    The actions of the system or application and the results it delivers must be transparent, comprehensible and logical.
  • Predictability
    The actions of intelligent systems that interact or cooperate with human beings must be predictable.
  • Data security
    The data and privacy of all parties involved must be protected over the entire life cycle of the system or application.
  • Protection against misuse and incorrect use
    The system or application must be protected against foreseeable misuse and incorrect use by operators.
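
How such properties can be verified in practice is itself an open question. As a rough illustration only, and not a method of the IFA, the following Python sketch probes one narrow aspect of robustness: how often a classifier's prediction remains unchanged when its inputs are perturbed with small random noise. The classifier, data set and noise level are arbitrary placeholders chosen purely for the example.

  # Minimal sketch of an empirical robustness check (illustrative assumptions only):
  # perturb each input with small Gaussian noise and measure how often the
  # model's predicted class changes. Classifier and data are placeholders.
  import numpy as np
  from sklearn.datasets import make_classification
  from sklearn.ensemble import RandomForestClassifier

  # Illustrative stand-in for a trained AI component
  X, y = make_classification(n_samples=500, n_features=8, random_state=0)
  model = RandomForestClassifier(random_state=0).fit(X, y)

  def prediction_stability(model, X, noise_scale=0.05, n_trials=20, seed=0):
      """Fraction of samples whose predicted class never changes under noise."""
      rng = np.random.default_rng(seed)
      baseline = model.predict(X)
      stable = np.ones(len(X), dtype=bool)
      for _ in range(n_trials):
          perturbed = X + rng.normal(scale=noise_scale, size=X.shape)
          stable &= (model.predict(perturbed) == baseline)
      return stable.mean()

  print(f"Prediction stability under noise: {prediction_stability(model, X):.2%}")

A stability value close to 100 % indicates insensitivity only to this particular kind of perturbation; it says nothing about the other characteristics listed above, such as transparency or resistance to deliberate attacks.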

Owing to the rapid pace at which modern artificial intelligence methods are developing, foremost among them deep learning, it is still largely unclear, however, exactly how these properties can be achieved in a trustworthy artificial intelligence system.

The Institute for Occupational Safety and Health of the German Social Accident Insurance (IFA) is therefore working on concepts by which safety and health at work can not only be maintained when such technology is used, but also enhanced by its use.

Activities for this purpose include the following:

  • Research into concepts of trustworthy artificial intelligence
  • Consulting regarding the development of safe AI systems and applications
  • Consulting regarding the development of AI-based assistive systems
  • Participation in the development of national and international standards

Publications

Steimers, A.; Schneider, M.: Sources of Risk of AI Systems. Int. J. Environ. Res. Public Health 2022, 19, 3641. https://doi.org/10.3390/ijerph19063641

Contact

Thomas Bömer

Accident Prevention: Digitalisation - Technologies

Tel: +49 30 13001-3530
Fax: +49 30 13001-38001


Paul-Martin Fechtner

Accident Prevention: Digitalisation - Technologies

Tel: +49 30 13001-3561