Projects are expected to contribute to one of the following outcomes:
Trustworthy AI solutions need to be robust, safe and reliable when operating in real-world conditions. They need to be able to provide adequate, meaningful and complete explanations when relevant, or insights into causality; to account for concerns about fairness and remain robust when dealing with such issues in real-world conditions; and to stay aligned with rights and obligations around the use of AI systems in Europe. Advances across these areas can help create human-centric AI[1], which reflects the needs and values of European citizens and contributes to an effective governance of AI technologies.
The need for transparent and robust AI systems has become more pressing with the rapid growth and commercialisation of generative AI systems based on foundation models. Despite their impressive capabilities, trustworthiness remains an unresolved, fundamental scientific challenge. Due to the intricate nature of generative AI systems, understanding or explaining the rationale behind their outputs is generally not possible with current explainable AI methods. Moreover, these models tend to 'hallucinate', generating non-factual or inaccurate information, further compromising their reliability.
To achieve robust and reliable AI, novel approaches are needed to develop methods and solutions that work under conditions other than the model-ideal, while also being aware of when these conditions break down. To achieve trustworthiness, AI systems should be sufficiently transparent and capable of explaining how the system has reached a conclusion in a way that is meaningful to the user, enabling safe and secure human-machine interaction, while also indicating when the limits of operation have been reached.
The purpose is to advance AI algorithms, and innovations based on them, that can perform safely under a wide variety of circumstances, operate reliably in real-world conditions, and predict when these operational circumstances are no longer valid. The research should aim at advancing robustness and explainability for a broad generality of solutions, while keeping any loss in accuracy and efficiency acceptable, and with known verifiability and reproducibility. The focus is on extending the general applicability of explainability and robustness of AI systems through foundational AI and machine learning research. To this end, the following methods may be considered, although proposals are not restricted to them:
Multidisciplinary research activities should address all of the following:
All proposals are expected to embed mechanisms to assess and demonstrate progress (with qualitative and quantitative KPIs, benchmarking and progress monitoring), and to share communicable results with the European R&D community through the AI-on-demand platform, the Digital Industrial Platform for Robotics, or other public community resources, in order to maximise re-use of results, whether by developers or for uptake, and to optimise the efficiency of funding. Sharing results and best practice in this way enhances the European AI, Data and Robotics ecosystem and possible sector-specific forums.
In order to achieve the expected outcomes, international cooperation is encouraged, in particular with Canada and India.
Specific Topic Conditions: Activities are expected to start at TRL 2-3 and achieve TRL 4-5 by the end of the project – see General Annex B.
[1] A European approach to artificial intelligence | Shaping Europe’s digital future (europa.eu)
[2] Research should complement, build upon, and collaborate with projects funded under topic HORIZON-CL4-2023-HUMAN-01-03: Natural Language Understanding and Interaction in Advanced Language Technologies.