At Finality, our mission is to push the boundaries of artificial intelligence and its safety in the context of reinforcement learning. We are building a leading-edge doctoral team to master an overarching theory of prompt and safe learning and to advance the understanding of AI behaviours, limits, and tradeoffs in this critical area.
Our Goal
Our goal is to develop a doctoral team capable of mastering an overarching theory of prompt and safe learning. By mapping the perimeter of safe AI-based Reinforcement Learning (RL), we will define the boundaries of this approach and identify its tradeoffs and performance limits, ensuring the reliability and safety of AI systems in high-stakes applications.
Key Areas of Focus
- Safe AI Integration: We aim to define clear safety protocols and boundaries for AI in reinforcement learning.
- Performance Limits: Identifying the performance tradeoffs in AI-based approaches to ensure sustainable and effective learning in real-world scenarios.
- Cross-disciplinary Research: By combining AI theory with practical applications, we bridge gaps across various fields, promoting innovation and responsible AI development.
Beneficiaries & Collaborators
Our work wouldn’t be possible without the invaluable contributions from our beneficiaries and research partners. Together, we are shaping the future of AI, driving innovation, and ensuring that the technology is developed responsibly.
Join Us on Our Journey
If you share our passion for pushing the boundaries of AI safety and reinforcement learning, get in touch with us to learn more about collaboration opportunities and research initiatives, or apply for one of the 15 Doctoral Candidate positions and contribute to this exciting journey.
Acknowledgement
This project has received funding from the European Union’s Horizon 2020 research and innovation programme under Grant Agreement No. 101168816.