In this workshop, we will explore how spelling mistakes, translation errors, and other linguistic slips can reveal hidden social biases in generative AI systems. We will begin with a bit of theory to understand how these biases form, followed by a practical session in which we will use open-source tools to identify them. Finally, we will reflect on how language mirrors societal inequalities and on the role technology plays in that dynamic. It is a unique opportunity to use language as a lens for examining how AI perpetuates stereotypes and social inequality.
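As a small taste of the hands-on part, here is a minimal sketch of the kind of probe one can run with open-source tools. It assumes Python and the Hugging Face transformers library, with the publicly available Helsinki-NLP/opus-mt-tr-en translation model as an illustrative choice; the actual tools used in the session may differ.

```python
# Sketch: translating from a language with gender-neutral pronouns (Turkish)
# and checking which gendered pronoun the model chooses in English.
from transformers import pipeline

# "Helsinki-NLP/opus-mt-tr-en" is one open-source Turkish-to-English model,
# used here only as an example.
translator = pipeline("translation", model="Helsinki-NLP/opus-mt-tr-en")

# "o" is a gender-neutral third-person pronoun in Turkish.
sentences = ["o bir doktor", "o bir hemşire"]  # "they are a doctor" / "a nurse"

for sentence, result in zip(sentences, translator(sentences)):
    # If the output consistently renders "doctor" as "he" and "nurse" as "she",
    # the translation exposes a gendered stereotype absorbed from training data.
    print(f"{sentence} -> {result['translation_text']}")
```

The same idea extends to spelling corrections and other linguistic choices: wherever a model must resolve an ambiguity, its default reveals something about the data it learned from.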
Organized by: Digicoría, a collaborative project.