See you on March 31, 2026 @ Hi!Site BXL

Beyond Red Teaming: Building a comprehensive monitoring and evaluation framework for conversational AI

As enterprises rush to deploy AI agents in customer-facing operations, most rely on one-time audits and manual testing. This approach offers limited coverage, surfaces problems only after the damage is done, and typically focuses on safety while ignoring accuracy, user experience, and operational efficiency.

This session presents a practical framework for continuous AI monitoring and evaluation, drawn from experience deploying automated testing infrastructure across major European enterprises. Attendees will learn why point-in-time testing isn't enough, how to implement live monitoring that catches failures before they become widespread, and what the EU AI Act actually requires in terms of ongoing oversight.

Event Timeslots (1)

@Collab 10
-
by Maria Yolanda Bilé Nlang

Maria Yolanda Bilé Nlang

// Founder @ BriseAI
Maria Yolanda Bilé Nlang is an AI Engineer and the founder of Brise, a Brussels-based startup providing an AI safety and performance testing infrastru...