This session provides a concise overview of EU AI Act compliance tailored for Python developers building AI systems. We'll explore key compliance areas, including risk management, robust model testing, cybersecurity, and explainability, highlighting practical tools and popular Python libraries.
In this talk, you'll learn how to align Python-based AI system development with the EU AI Act's regulatory requirements. We'll begin with a brief introduction to the Act's core concepts, risk tiers, and certification expectations, emphasizing what matters most for practical compliance.
The session will address key considerations specifically for large language models (LLMs) and machine learning classification models, highlighting best practices in training, validation, and testing. We'll discuss handling imbalanced datasets, selecting loss functions suited to them, such as focal loss, and hyperparameter tuning techniques that maintain robustness and prevent overfitting.
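To give a flavor of the imbalanced-data discussion, here is a minimal NumPy sketch of binary focal loss (Lin et al., 2017); the example labels and probabilities are illustrative, not from the talk's datasets:

```python
import numpy as np

def focal_loss(y_true, p_pred, alpha=0.25, gamma=2.0, eps=1e-7):
    """Binary focal loss: down-weights well-classified examples so
    training focuses on the hard, often minority-class, ones."""
    p = np.clip(p_pred, eps, 1 - eps)
    # p_t is the predicted probability of the true class
    p_t = np.where(y_true == 1, p, 1 - p)
    alpha_t = np.where(y_true == 1, alpha, 1 - alpha)
    return -np.mean(alpha_t * (1 - p_t) ** gamma * np.log(p_t))

y = np.array([1, 1, 0, 0])
easy = np.array([0.95, 0.9, 0.1, 0.05])   # confident, correct predictions
hard = np.array([0.55, 0.6, 0.45, 0.4])   # uncertain predictions

# The (1 - p_t)^gamma factor shrinks the loss on easy examples far more
# than plain cross-entropy would, so hard examples dominate the gradient.
easy_loss = focal_loss(y, easy)
hard_loss = focal_loss(y, hard)
```

The `gamma` exponent controls how aggressively easy examples are discounted; `gamma=0` with `alpha=0.5` recovers ordinary weighted cross-entropy.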
A critical segment of the talk covers model testing and evaluation from the perspective of fairness, robustness, transparency, and ethical alignment. You'll learn how benchmark datasets and widely adopted libraries such as Evidently, SHAP, and DeepEval can support effective compliance assessments.
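As one concrete fairness check of the kind these libraries report, a minimal sketch of the demographic parity difference, computed here by hand on made-up predictions (the data and threshold are purely illustrative):

```python
import numpy as np

def demographic_parity_difference(y_pred, group):
    """Absolute gap in positive-prediction rates between two groups.
    A value near 0 suggests decisions are independent of group
    membership; fairness toolkits report related metrics out of the box."""
    rates = [y_pred[group == g].mean() for g in np.unique(group)]
    return abs(rates[0] - rates[1])

# Hypothetical binary predictions for applicants from two groups.
y_pred = np.array([1, 1, 0, 1, 0, 0, 0, 1])
group = np.array([0, 0, 0, 0, 1, 1, 1, 1])

gap = demographic_parity_difference(y_pred, group)
```

Here group 0 receives positive predictions 75% of the time versus 25% for group 1, giving a gap of 0.5, which would warrant investigation under most fairness policies.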
Cybersecurity will also be addressed, particularly the challenges posed by adversarial attacks in both image-based and language-based AI systems. We’ll illustrate strategies for adversarial training and defense, introducing Foolbox for image models and TextAttack for language models.
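Libraries like Foolbox automate strong attack suites; the core idea can be shown in a few lines with the Fast Gradient Sign Method against a toy logistic-regression model (weights and inputs below are invented for illustration):

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def fgsm_perturb(x, w, b, y, eps=0.25):
    """FGSM: nudge each input feature one step in the direction that
    increases the model's loss, bounded by eps per feature."""
    p = sigmoid(x @ w + b)        # model's predicted probability
    grad_x = (p - y) * w          # gradient of cross-entropy w.r.t. x
    return x + eps * np.sign(grad_x)

w, b = np.array([2.0, -1.0]), 0.0
x, y = np.array([1.0, 0.5]), 1    # input correctly classified as class 1

x_adv = fgsm_perturb(x, w, b, y)

# The small perturbation lowers the model's confidence in the true class;
# adversarial training folds such perturbed examples back into training.
clean_conf = sigmoid(x @ w + b)
adv_conf = sigmoid(x_adv @ w + b)
```

Real attacks on image and language models iterate this idea under tighter norm constraints, which is where dedicated libraries earn their keep.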
Explainability remains essential for regulatory compliance, so we'll briefly discuss popular tools like SHAP, LIME, and Captum, emphasizing their role in maintaining transparency and interpretability of AI decisions.
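As a taste of model-agnostic explainability, a minimal permutation-importance sketch: shuffle one feature and measure the accuracy drop. SHAP and LIME provide finer-grained, per-prediction attributions built on related perturbation ideas (the stand-in "model" below is invented for illustration):

```python
import numpy as np

def permutation_importance(model, X, y, feature, rng):
    """Accuracy drop when one feature's values are randomly shuffled:
    a simple, model-agnostic measure of how much the model relies on it."""
    base = np.mean(model(X) == y)
    X_perm = X.copy()
    X_perm[:, feature] = rng.permutation(X_perm[:, feature])
    return base - np.mean(model(X_perm) == y)

rng = np.random.default_rng(42)
X = rng.normal(size=(200, 2))
y = (X[:, 0] > 0).astype(int)                # label depends only on feature 0

model = lambda X: (X[:, 0] > 0).astype(int)  # stand-in for a trained model

imp_0 = permutation_importance(model, X, y, 0, rng)  # decisive feature
imp_1 = permutation_importance(model, X, y, 1, rng)  # irrelevant feature
```

Shuffling the decisive feature degrades accuracy while shuffling the irrelevant one changes nothing, which is exactly the signal regulators expect explainability evidence to surface.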
Lastly, we'll highlight the importance of continuous monitoring for data drift and performance degradation, showcasing Evidently AI and MLflow as effective tools for ongoing compliance.
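One drift statistic that monitoring tools such as Evidently compute automatically is the population stability index (PSI); a hand-rolled NumPy version on synthetic data shows the idea (distributions and the 0.1/0.25 rule-of-thumb thresholds are illustrative):

```python
import numpy as np

def population_stability_index(ref, cur, bins=10):
    """PSI compares a feature's serving-time distribution against the
    training reference. Rule of thumb: < 0.1 stable, > 0.25 drifted."""
    edges = np.histogram_bin_edges(ref, bins=bins)
    cur = np.clip(cur, edges[0], edges[-1])   # keep outliers in edge bins
    eps = 1e-6                                # avoid log(0) on empty bins
    p = np.histogram(ref, bins=edges)[0] / len(ref) + eps
    q = np.histogram(cur, bins=edges)[0] / len(cur) + eps
    return float(np.sum((p - q) * np.log(p / q)))

rng = np.random.default_rng(7)
reference = rng.normal(0.0, 1.0, size=5000)  # training-time distribution
stable = rng.normal(0.0, 1.0, size=5000)     # same distribution in prod
drifted = rng.normal(1.5, 1.0, size=5000)    # mean-shifted distribution

psi_stable = population_stability_index(reference, stable)
psi_drifted = population_stability_index(reference, drifted)
```

Running such checks on a schedule, and logging the results alongside model versions in a tracker like MLflow, turns drift detection into the kind of auditable evidence the Act's ongoing-monitoring obligations call for.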
Attendees will leave the session with clear, practical approaches for developing compliant, robust, and transparent AI systems using Python, ready to effectively navigate the EU AI Act.
Dr. Valentas Gružauskas is an Associate Professor at Vilnius University and CEO of AI Conformity & Research Consulting, specializing in AI governance, conformity assessment, and regulatory compliance. He is actively engaged in AI standardization, serving as a member of the National Standardization Technical Committee on Information Technology and as an expert in the Work Group on AI Standards under the AI Office. Dr. Gružauskas leads and contributes to multiple AI research projects, including those focused on remote sensing, large language models (LLMs), and AI applications in supply chain management and social services.