AI and Agency: As Developers, We Decide The Future
Speaker
Tadas Korris
Tadas Korris is a software engineer at Mozilla, currently working on the Sync Backend Storage team, which is responsible for safely and securely synchronizing browser data for millions of Firefox users. His work focuses on building reliable, privacy-preserving systems that users can trust with their most sensitive data.
Previously, Tadas worked on Mozilla’s Contextual Services team, where he contributed to the Merino service, which provides private, contextual suggestions while maintaining strong privacy guarantees. Across his roles, he has been a strong advocate for online privacy and security, with hands-on experience developing secure software throughout its lifecycle. Tadas presented at PyConLT in 2024, discussing the Merino service in detail.
Tadas is deeply engaged in the ethics of AI and emerging technologies and has given talks on the dangers of unchecked automation and the steps technologists can take to protect democratic institutions and social trust in tech. He has presented this work at Mozilla Festival 2025 in Barcelona, the IIA International Conference in Amsterdam, and various conferences in Canada.
Tadas was born to a Lithuanian-Canadian family in Toronto and grew up in Edmonton. He maintained close ties to the local Lithuanian community, learning the language and participating in cultural and community activities from an early age. He cherishes his Lithuanian heritage and has gotten quite good at making vegan versions of almost all Lithuanian dishes. His Močiutė Emilija would be proud!
Before transitioning into software engineering, Tadas began his professional career as a classical musician. He earned both his Bachelor’s and Master’s degrees from the Manhattan School of Music in New York City. In 2018, he completed a diploma in Web and Software Development from the University of North Carolina and began working in the technology sector. He joined Mozilla in 2022 and continues to perform as a regular substitute musician in several orchestras.
Abstract
AI systems are not neutral; they encode values, often in ways that are opaque and removed from the people they affect. As developers, we make decisions that shape who has power, who is surveilled, who is replaced, and who gets a voice, often without intending to.
This talk frames AI ethics as a software engineering problem, not just a philosophical one. We’ll examine how everyday technical choices can unintentionally reinforce authoritarian tendencies around disinformation, manipulation, and harmful automated decision-making.
Description
This session equips developers with practical ways to recognize ethical and systemic risks as engineering problems. We will examine how everyday choices, from sourcing data and programming default behaviors to designing algorithms, encode values into AI systems.
Rather than framing ethics as an external constraint or an impediment to progress, the talk explores how transparency and human agency should be part of the development process. Attendees will learn how to identify harmful defaults early, introduce auditability and safeguards, and question the claims of neutrality often attached to AI systems. The goal isn’t to stop building AI, but to build it deliberately, with an awareness of how technical decisions have social consequences.