
Hall of Mirrors: Recoding policy for AI commons from the margins

Grace Leonora Turtle

Description

The pervasive integration of artificial intelligence (AI) technologies into society’s cultural fabric is challenging policymakers, advocacy groups and platforms like Decidim to rethink policy, regulation and governance of AI as a commons. The growing popularity of generative AI tools, in particular, raises questions: How does AI mirror the world back to us? What kinds of statistically diverse representations might exist, or be missing, within these reflections? How are they governed, contested and mediated?

Our workshop extends from a collaborative project between the DCODE Network and Open Future, which have been exploring the AI commons and open-source AI tools (e.g., Stable Diffusion) and their effect on (mis)representation, identification and the potential for counter-representation of statistically diverse minority or underrepresented communities, cultures, stories and places. As a starting point, we leverage the knowledge of Open Future and their understanding of openness, transparency, trust and participation in the making of policies that support AI development, deployment and use. We then consider novel modes of policymaking and democratic data governance that could interface with the emerging landscape of the AI commons from the margins, where algorithmic biases and other issues emerging through the informatics of domination can be challenged and contested.

Workshop duration: 1.5 hours

Outcomes

  1. Create and share an AI commons policy canvas.
  2. Identify signposts for the Decidim community to engage with the AI commons from the margins.
  3. Conceptualise intersectional values around AI systems and common resources that can represent more culturally diverse and plural realities.


Methodology

We work with José Muñoz’s (1999) theory of disidentification and the performance of politics, in which disidentification is characterised by the scrambling of encoded meaning. Disidentifying the production process attempts to decode and recode the conventional, colonial, normative and dominant practices and cultures surrounding the AI commons from minority perspectives.

Introduction (10 minutes)

  • Welcome and introductions 
  • Workshop overview and objective setting 

Sense (20 minutes)

  • In teams, participants will make sense of signals and trends that point to emerging social and political discourse surrounding AI, from data portability and interoperability to regulatory proposals for AI commons. This could include collective decision-making rights, stewardship, regulations and other alternative models that can adapt to different communities and contexts. 

Shape (30 minutes)

  • In teams, participants will prototype an AI commons policy canvas that can be published and used by the Decidim community and, by extension, other policymaking communities of practice and advocacy groups engaged with AI commons and democratic data governance. 

Signpost (20 minutes)

  • As a group, participants will discuss signposts for the Decidim community to engage with the AI commons from the margins, identifying policy considerations for collective intelligence, decision-making rights and practices from underrepresented and marginalised queer, Trans*, black, brown, interracial and intercultural groups.

Discussion (10 minutes)

  • Reflection, conclusion and next steps 

This workshop takes one step towards disidentifying imaginaries of the AI commons, towards a world where marginalised lives, politics and possibilities are representable within the AI commons. Furthermore, it works towards creating space to discuss counter-knowledge and to recode AI imaginaries and developments concerning policymaking and democratic data governance.


Target Audience

This workshop is open to individuals interested in AI commons practices, policymaking and democratic data governance. In particular, we are interested in inviting and hosting queer, Trans*, black, brown and interracial technologists, designers, artists and citizens whose work and interests intersect with the themes discussed in this workshop.

Language of the session: English, with Spanish and Catalan translation if needed.

Logistical needs: Screen and projector.

Maximum number of participants: 20

Workshop facilitators: 2 facilitators, 1 person documenting (video, photos and transcription)


*Trans is an inclusive term to refer to diverse identities within the gender identity spectrum.

DCODE is an EU Horizon 2020 Marie Sklodowska-Curie Innovative Training Network (ITN) set to rethink the digital transformation of society, addressing themes of inclusive digital futures, trusted interactions, sustainable socio-economic models, democratic data governance, and future design practices.
