From Training to Production: Lightweight ML Validation and Monitoring
Speaker
Gift Ojeabulu
Gift has spent the last 6+ years at the intersection of AI/ML, software engineering, developer advocacy, and community building. Most recently, he worked as an AI developer advocate and content lead at Iterative.ai, the team behind the popular open-source AI tools DVC and CML. He has built and scaled thriving AI communities, notably as co-founder of D.C.A, now the largest Data and AI community of Black professionals worldwide.
Gift is a visionary data scientist whose work is transforming Africa's technological landscape. As co-founder of Data Community Africa, an advisory board member at DevNetwork (Artificial Intelligence), and an AI developer advocate, he has emerged as a pivotal figure in democratizing data and AI across the continent.
His crowning achievement, the African Data Community Newsletter, has become a beacon of knowledge sharing, reaching over 2,500 subscribers across 45 countries and 8 U.S. states. This initiative has inspired his involvement with DatafestAfrica, now one of the continent's premier data and AI conferences, spanning 4 conferences and 5+ hackathons in less than 4 years and bringing together practitioners, researchers, and enthusiasts from across the globe.
In Lagos, Gift's leadership of the MLOps community has revolutionized how organizations approach machine learning operations. Under his guidance, the community has become a hub for innovation in practical MLOps and Large Language Models (LLMs), fostering collaboration between industry leaders and emerging talents. His emphasis on open-source AI development has created new pathways for African developers to contribute to global technological advancement.
Through strategic initiatives and unwavering dedication, Gift Ojeabulu continues to architect the future of Africa's data and AI ecosystem. His work exemplifies how individual leadership can catalyze continental transformation, making advanced technology accessible to communities that have historically been underserved in the global tech landscape.
Abstract
Catch silent ML failures before they reach production! This session demonstrates lightweight, practical techniques to validate and monitor ML models across their lifecycle. Using tools like DeepChecks and Evidently AI, attendees will learn how to detect data drift, evaluate model quality, and ensure trust in their models with minimal effort.
Description
Machine learning models can perform well in training but fail silently in production due to data drift, label issues, or edge cases. Ensuring trust in ML models requires validation before deployment and monitoring after deployment, yet many teams struggle to implement these efficiently.
In this 30-minute session, we explore practical, lightweight approaches to validate and monitor ML models across their lifecycle. Using DeepChecks, participants will learn to detect common pitfalls such as feature distribution mismatches, label inconsistencies, and train-test leakage during training. We then demonstrate how Evidently AI can be applied for production monitoring, tracking data and prediction drift, and detecting performance degradation in real time.
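To give a feel for the pre-deployment side, the sketch below runs DeepChecks' built-in train-test validation suite on a tabular dataset. It is a minimal illustration, not the session's exact workflow: the file names, column names, label, and model are placeholder assumptions, and the specific checks reported depend on the DeepChecks version installed.

```python
# A minimal sketch of pre-deployment validation with DeepChecks (tabular API).
# File names, column names, the "target" label, and the model are hypothetical.
import pandas as pd
from sklearn.ensemble import RandomForestClassifier
from deepchecks.tabular import Dataset
from deepchecks.tabular.suites import train_test_validation

train_df = pd.read_csv("train.csv")  # hypothetical training split
test_df = pd.read_csv("test.csv")    # hypothetical hold-out split

# Wrap the dataframes so DeepChecks knows the label and categorical features.
train_ds = Dataset(train_df, label="target", cat_features=["country"])
test_ds = Dataset(test_df, label="target", cat_features=["country"])

model = RandomForestClassifier().fit(
    train_df.drop(columns="target"), train_df["target"]
)

# The suite bundles checks such as feature drift, label drift, and train-test leakage.
result = train_test_validation().run(
    train_dataset=train_ds, test_dataset=test_ds, model=model
)
result.save_as_html("validation_report.html")  # review failed checks before deploying
```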
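For the production-monitoring side, here is a comparable sketch using Evidently's report presets, assuming the Report / metric-preset API from the 0.4.x releases (import paths differ in other versions). The reference and production dataframes, their file names, and the presence of "target"/"prediction" columns are all assumptions for illustration.

```python
# A minimal sketch of production drift monitoring with Evidently presets.
# Assumes the Report / metric_preset API from Evidently 0.4.x; import paths
# differ in other releases. File and column names are hypothetical, and the
# target-drift preset expects a "target" or "prediction" column to be present.
import pandas as pd
from evidently.report import Report
from evidently.metric_preset import DataDriftPreset, TargetDriftPreset

reference = pd.read_csv("reference.csv")  # snapshot of training-time data
current = pd.read_csv("production.csv")   # recent batch of production data

# Compare the live batch against the reference data the model was trained on.
report = Report(metrics=[DataDriftPreset(), TargetDriftPreset()])
report.run(reference_data=reference, current_data=current)

report.save_html("drift_report.html")  # shareable HTML drift report
summary = report.as_dict()             # programmatic access, e.g. for alerting
```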
Attendees will leave with actionable workflows and best practices for building robust, trustworthy ML systems without complex pipelines. By the end, participants will understand how to maintain model reliability from training through production, ensuring data-driven decisions are based on models they can trust.