Location: Upper Level Room 4
Schedule
Announcements
When Everyone Attacks
Speaker: Florian Tramèr (ETH Zürich)
What happens when many people try to attack or influence AI systems simultaneously? I'll examine this question through three lenses. First, I'll discuss why popular collective defense tools against AI fail to deliver on their promises. Second, I'll discuss some dynamics that may emerge when multiple attackers try to steer an AI system to their benefit. Finally, I'll sketch a more optimistic vision: what if, instead of attacking AI systems, we deployed AI that genuinely represents our interests and acts as an intermediary between ourselves and digital content?
Who Owns Robustness? Repurposing Adversarial Tools for Human Agency
Speaker: Bogdan Kulynych (Lausanne University Hospital)
Five years ago, we introduced Protective Optimization Technologies (POTs), a proposal that—alongside a wave of theoretical and practical work on algorithmic collective action—reflected a broader ambition to leverage adversarial tools for good, equipping collectives to contest optimization systems from the outside. Such interventions, however, face fundamental limitations, as the defensive arms race structurally favors the system owner. To bypass these limits, we argue that an effective path forward for purely technical adversarial approaches is to repurpose the toolbox from evasion to audit. We demonstrate how to build practical tools that enable external parties to verify whether systems preclude access or lack the responsiveness necessary for human agency—detecting, for instance, when a credit scoring system inevitably denies an applicant, or when an organ transplant risk model fails to prioritize a patient regardless of their deterioration.
Studying the Impacts of Automated Welfare: Why Politics and Power Matter
Speaker: Joanna Redden (University of Western Ontario)
Automation too often occurs without investigation of impacts, consultation, or rights of refusal. This is despite the significant body of research demonstrating how automated decision-making systems have led to harm by increasing discrimination, inequity, injustice, surveillance, and wrongful denial of services. Our collective experiences with automation are influenced by the ubiquity of data collection, processing, and profiling across our social, political, and economic lives. This latest digital turn has meant unprecedented power and wealth for the owners of the largest tech companies, whose decision-making influences our democratic institutions and information ecosystems, as well as our social and environmental well-being. This paper argues that our futures depend on our ability to contend with the power dynamics intersecting our datafied lives. Drawing on transnational case study investigations, I suggest that doing so requires a focus on the impacts of automation, learning from people working to prevent harm, and mobilizing collective action.
Break
AI Systems for Gig Worker Collective Action
Speaker: Saiph Savage (Northeastern University)
In today's dynamic gig economy, workers on platforms like Upwork, Amazon Mechanical Turk, and Toloka face daunting labor challenges. Despite the potential of collective action to significantly improve these conditions, its implementation is hindered by inadequate systems for identifying and resolving issues. During my keynote, I will unveil my Innovative AI for Worker Collective Action framework, deeply embedded in social theories. This talk will highlight how we can harness Large Language Models (LLMs), coupled with social theories and worker-owned data, to develop technologies that are truly worker-centric. These technologies not only empower workers to shape their own futures but also enhance their working conditions and address existing harms. I will showcase case studies that exemplify the practical application of this framework, illustrating its potential to revolutionize the future of work. The session will culminate in a forward-looking discussion on a research agenda aimed at exploring the societal impacts of AI and crafting effective socio-technical solutions that consistently put worker wellbeing at the forefront.
Incentives and Collective Action in AI Evaluation
Speaker: Tijana Zrnic (Stanford University)
Existing theory in algorithmic collective action and related topics describes the importance of parameters such as collective size and alignment between the platform’s and its users’ objectives. In this talk, I reflect on these results and provide a new perspective drawing upon my empirical work in AI evaluation. I describe the incentives at play and how they shape user–platform dynamics.
The Luddite Lab Resource Hub: A Tool for Resisting Automation at Work
Speaker: Alex Hanna (Distributed AI Research Institute (DAIR))
Workers have been fighting automation and technological control in the workplace for as long as bosses have thought to use them. In the current era, workers are pushing back against the new rash of developments in artificial intelligence, such as ChatGPT and Midjourney, as well as other technologies that are cashing in on the AI hype cycle to further deskill and disempower workers. This project focuses on developing tools, resources, and political education both for unions in the contract bargaining process and for workers who want to push back against automation technology. We aim to provide two different offerings: 1) an accessible resource hub which outlines case studies and strategies for governance and oversight of technology at work, both in general and for specific worker groups; and 2) trainings for unions, labor organizations, and other labor formations around automation and AI.
Poster Session
Break
Ethical Obfuscation: Why’s, How’s, and Should Have’s
Speaker: Helen Nissenbaum (Cornell Tech)
This talk is about data obfuscation, defined as the “production, inclusion, addition, or communication of misleading, ambiguous, or false data in an effort to evade, distract, or confuse” [Brunton & Nissenbaum 2015]. This was a theoretical abstraction that followed on the heels of TrackMeNot [2006] and AdNauseam [2014], two browser extensions aimed at disrupting business-as-usual for behavioral profiling, which have both earned praise and provoked rebuke [Howe and Nissenbaum 2018]. I will revisit our efforts of twenty years ago, explaining why we chose obfuscation and why we chose those designs rather than other, possibly more effective, ones. Confronting the contemporary landscape of GenAI and LLMs, in which diverse privacy threats have greatly intensified, I ask whether this landscape offers new opportunities to instantiate past ethical and practical successes while mitigating prior limitations.
Deepening Worker Power Over AI: Lessons from Popular Education and Organizing
Speaker: Lilly Irani (UCSD)
I will discuss pathways for building research that can support collective action to shape algorithms, data, and AI. I draw on over a decade of experience supporting Amazon Mechanical Turk workers, taxi and rideshare drivers, surveilled communities, and union workers across various industries. I suggest four principles to guide researchers in finding their path to supporting algorithmic collective action: relationality, response-ability, accountability, and generosity.
Announcements
Panel: Organizing and Advocacy in the Age of AI
Panelists: Lilly Irani (UCSD), Jillian Arnold (IATSE Local 695), Vinhcent Le (Tech Equity), Adio-Adet Dinika (Distributed AI Research Institute (DAIR))
Roundtable discussions
Announcements