
Description

In the GRAIL project, we are working towards a toolbox to assist companies in developing and using GenAI responsibly. This toolbox contains methods, guidelines, and functionalities covering three topics: governance, application, and evaluation of GenAI. Expertise and tools are co-developed as much as possible within GRAIL use case projects, in close collaboration with stakeholders. These use cases address concrete challenges faced by stakeholders, so that the resulting solutions are directly relevant and applicable. This collaborative, use-case-driven approach ensures that the toolbox not only reflects real-world needs but also delivers meaningful and lasting impact.

Problem Context

Public and private organizations face various challenges when working with Generative AI:

  • Governance is challenging because it requires balancing the risks and limitations of GenAI systems against the organizational values and interests that guide responsible development, deployment, and integration.
  • When interacting with GenAI, some users are tempted to blindly accept and use the outputs of such applications, while others are too skeptical and disregard their added value altogether. Organizations therefore need to balance human strengths in judgment – such as context awareness and expertise – with the strengths and limitations of GenAI.
  • Another challenge lies in the evaluation of GenAI output, because of the variability in answers, the dependence on prompting, and the opacity of data and models. To apply GenAI responsibly, organizations need robust and reliable evaluation methods.

Solution

We are developing methods, guidelines, and GenAI functionalities, building towards a comprehensive toolbox that helps organizations develop and deploy Generative AI responsibly. We help organizations demystify the complexities of GenAI governance, implement sustainable governance structures, and define where, how, when, and with whom to start their GenAI governance journey. We offer tools that enhance critical thinking and help users navigate and mitigate GenAI's risks effectively, for example by formulating better prompts that increase the likelihood of correct, reliable, and relevant GenAI outputs. Lastly, we help developers and researchers choose and apply robust and reliable evaluation methods based on state-of-the-art insights into GenAI evaluation.
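As a purely illustrative sketch (not part of the GRAIL toolbox itself), the snippet below shows one very simple way to probe the output variability mentioned above: sample a model several times for the same prompt and measure how often the answers agree. The generate function is a hypothetical stand-in for whichever GenAI API an organization uses, and exact-match comparison only makes sense for short, closed-form answers.

    from collections import Counter
    from typing import Callable, List

    def consistency_score(generate: Callable[[str], str], prompt: str, n_samples: int = 5) -> float:
        """Sample the model n_samples times for the same prompt and return the share
        of answers matching the most common answer (1.0 = fully consistent).
        Only meaningful for short, closed-form answers that can be compared literally."""
        answers: List[str] = [generate(prompt).strip().lower() for _ in range(n_samples)]
        most_common_count = Counter(answers).most_common(1)[0][1]
        return most_common_count / n_samples

    # Hypothetical usage, assuming generate() wraps the GenAI API in use:
    # score = consistency_score(generate, "Which EU regulation governs high-risk AI systems?")
    # if score < 0.8:
    #     print("Low consistency - treat this output with extra scrutiny.")

A consistency check like this is only one ingredient of a robust evaluation; it says nothing about whether the most common answer is actually correct, which is why the toolbox also covers other evaluation methods.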

Results

The tools we develop through researching GenAI in real-world use cases will be bundled into a comprehensive toolbox. For each pillar – governance, application, and evaluation – there will be multiple tools that organizations can use to support responsible GenAI development and deployment.

Contact

  • Lieke Dom, Consultant Digital Governance & Regulation, e-mail: lieke.dom@tno.nl
© 2025 TNO; Version: v3.1.1; Publication date: 2025-11-17