Science Year 2024: Freedom

The German Federal Ministry of Education and Research (BMBF) is using the motto “freedom” for the current Science Year. We will be demonstrating the links between freedom and trustworthiness in the field of artificial intelligence with the “Who's deciding here?!” exhibit, which will be touring Germany on the MS Wissenschaft floating science center starting in May 2024.

Industry demand for AI solutions is high. Yet these applications and systems can only be used in series production and/or quality assurance if they are highly reliable. The new possibilities of generative AI therefore also bring new challenges: As a user, can I rely on the information provided by a chatbot that explains how to operate a machine? As a developer or assessor, how do I recognize systematic vulnerabilities in an AI model? How can I guarantee that my data is kept safe?

Trustworthiness and security are important success factors for companies.

We at Fraunhofer have therefore developed solutions and tools that are powerful, trustworthy and reliable, as well as compliant with European data protection standards.

“Who's deciding here?!”

Who should decide: human or AI?

Science Year 2024 from the German Federal Ministry of Education and Research (BMBF) focuses on freedom in all its aspects. This has led Fraunhofer to pursue a number of questions that tie in with freedom of choice. How do technologies such as artificial intelligence influence our freedom? How trustworthy are the AI systems that are used in more and more safety- and security-related areas — and the decisions they make?  

Technologies such as artificial intelligence (AI) are increasingly being used in safety- and security-related areas, whether as driving assistants in autonomous vehicles, in robotic systems for safe production, in applications for detecting fake news or as (decision) support in medicine. Should AI make the wrong decision, the consequences could be far-reaching. In the form of a game, the exhibit shows examples of areas where AI can already provide us humans with good support. The big challenge is being able to assess the performance of AI. AI is getting better and better, but it is not yet perfect: some of the examples therefore also illustrate typical challenges that need to be considered when developing and using AI.

Visit the “Who's deciding here?!” exhibit on the MS Wissenschaft. From May 14, 2024, the floating science center will be touring Germany and Austria. Once again, it's not (just) about looking and marveling but also about trying things out and taking part.

Use Cases for Trustworthy AI

Discover our examples of powerful and trustworthy AI systems from various application areas. All use cases address the same question: How can AI technology be used safely and securely?

3 questions for Sebastian Schmidt, a data scientist for Trustworthy AI

Sebastian Schmidt, data scientist for trustworthy AI, Fraunhofer IAIS

How does AI affect our freedom?
AI is already changing our society and our world of work. On the one hand, we gain freedom as AI simplifies processes and tasks.

On the other hand, there is a risk of making ourselves dependent on the capabilities of the models. On a personal level, we might unlearn specific skills and lose our independence. On a geopolitical level, we might rely too much on model operators outside Europe.

Whether we gain or lose freedom ultimately depends on how we deal with this new technology.

How trustworthy are the AI systems that are used in more and more safety- and security-related areas — and the decisions they make? And what is Fraunhofer doing to increase the trustworthiness of AI systems?
With the increased use of AI, we are seeing more and more cases where the ill-considered use of AI has drastic consequences — be it data protection gaps, hallucinating models or discrimination by AI.

We offer, on the one hand, solutions that address these problems on a technical level and, on the other, a trustworthiness check that safeguards the use of existing AI systems. Testing and safeguarding also open up new application scenarios in which deploying an untested system would be inconceivable.

Fraunhofer is also part of OpenGPT-X, a project in which we are training our own language model on European languages and to European standards in order to reduce our dependence on US providers.

How do you personally view the topic of freedom and autonomy? What opportunities and threats do you see with regard to AI and its effects?
Personally, I am convinced that these new technologies have great potential to fundamentally change our work and current professions and create new freedoms. The use of competitive, smaller open-source models can help to avoid new dependencies.

At the same time, AI creates new threats to human autonomy that need to be countered: deep fakes and automatically generated fake news, new vulnerabilities that could be exploited for cyberattacks, and the use of AI for surveillance and in war, especially in this geopolitically tense situation.

AI research for German technological sovereignty

Through their solutions, our Fraunhofer researchers are making an important contribution to unlocking the potential of AI in the real world. “The current development of generative AI is a prime example of the huge potential and the many challenges surrounding this forward-looking technology in terms of security, transparency, and privacy. Many of the technologies and solutions we see today originate outside Europe. The research done by the Fraunhofer-Gesellschaft is helping to maintain and grow the technological sovereignty and independence of German companies,” says Dr. Sonja Holl-Supra, Managing Director of the Fraunhofer BIG DATA AI Alliance, which comprises over 30 Fraunhofer institutes and brings together Fraunhofer’s expertise across the field of AI.



AI Assessment Catalog

The AI Assessment Catalog is a free, structured guide to designing trustworthy artificial intelligence. It sets out a four-stage procedure for the assessment of AI applications, supporting developers during design and assessors during evaluation and quality assurance.


Certified Security Champion training course

We offer comprehensive training in the field of secure software development. With the certified Security Champion training course, software developers learn how to consistently take security into account in their day-to-day work.

The course comprises several weeks of intensive training with specialist lectures, interactive exercises, self-study phases, tool demonstrations and best-practice examples from industry. It concludes with a final discussion and an exam.

  • Duration: 13 weeks, part-time (approx. 98 hours)
  • Course language: German or English
  • Number of attendees: 6–12
