Ashwin Singh: «When a government does not protect citizens' rights against tech corporations that prioritize profits, a situation of distrust is created»

At Decidim Fest 2023, we spoke with Ashwin Singh, a queer Bahujan activist who studies the social, legal, and institutional barriers the queer community faces in AI.

Artificial intelligence technologies are gaining more and more ground, but there is also growing concern about systems that generate oppression towards sexual and gender diversity. These technologies claim to be able to identify LGBTQI+ people by analyzing their facial features, voice, social connections, group memberships, customer behavior, and even profile photo filters. However, such software is incapable of accurately determining people's sexual orientation or gender identity, since these personal and deeply felt characteristics cannot be discerned from external factors alone, can change over time, and may not conform to Western constructs or to the datasets used to train the AI.

The consequence is that these systems censor and downrank LGBTQI+ content, expose LGBTQI+ people to harassment by making them hypervisible, perpetuate deadnaming by surfacing the names people used before their transitions, and reveal their identities without consent, thereby violating their privacy. In response to this landscape, the queer community in artificial intelligence has created **Queer in AI**, a global, decentralized, volunteer-driven organization that embraces intersectional and community-led participatory design to build a more inclusive and equitable technological future.

From his experience, Ashwin inspires us to question the methods of participation and inclusion within AI, and to recognize the queer digital communities that are transforming system design to create a richer, more human fabric of technological development.

What are the risks that AI can pose to democracy?

A significant share of the unethical use of AI can be traced to large tech corporations and private companies, which are ill-equipped to address the dangers their systems generate, especially at the design stage.

Regulation, which is the area I work in, is where harm can be minimized. When AI systems cause harm, we cannot really trust their creators, mostly large companies, to regulate them, because they often prioritize profits over people's safety.

What are the threats?

This poses a threat to democracy because there is a loss of trust. When the government, or whatever body governs a country, does not safeguard citizens' rights against large tech corporations that prioritize profits over people's safety, a situation of distrust is created. That distrust is a significant threat because it has real consequences for people.

Any concrete example?

An example that comes to mind is India, where facial recognition technologies are used at airports to check that the data matches people's passports, and the data checked often includes gender. As a non-binary person, if I dress femininely and the system flags an inconsistency, there will be a very real consequence for me and my identity.

In the field of inclusion, the most common approaches within artificial intelligence often rely on methods that extract and exploit community participation, or engage in "participation and inclusion washing": it looks as though the community is being included, but it is not being included fairly. For example, a recent report showed how OpenAI used exploitative labor practices to make ChatGPT less toxic, requiring Kenyan workers to view deeply distressing content without adequate mental health support.

That is why it is important to involve LGBTQI+ communities and organizations in the design and evaluation of AI systems, and to reject any technology that tries to identify gender or sexual orientation.

What are the consequences of these inconsistencies you talk about?

When AI systems operate in these kinds of areas, the people who oppose them are often a minority, and opposing them always carries negative consequences. If a facial recognition inconsistency brings you into conflict with the law, it has tangible effects on your life. This steadily drains people's trust in governing bodies, because rights are not truly protected and the technologies act against LGBTQI+ identities.

What can we do to address these issues?

There are three principles we can keep in mind to raise awareness about sexual and gender diversity issues in AI: decentralized organization, intersectionality, and community-led initiatives.

It is also essential that governments take measures to regulate AI technologies, ensuring that they prioritize people's safety and rights over economic benefit. This will require close collaboration with experts in ethics, human rights, and technology to create regulations that protect people and restore trust in democratic systems.