Call for Papers! (UN)STABLE DIFFUSIONS: GENERAL-PURPOSE ARTIFICIAL INTELLIGENCE’S PUBLICITIES, PUBLICS, AND PUBLICIZATIONS!

BACKGROUND TO THIS SPECIAL ISSUE

The recent release of so-called “general-purpose artificial intelligence” (GPAI) systems has prompted a public panic that AI-generated fiction is now indistinguishable from fact. General-purpose AI includes OpenAI’s ChatGPT as well as numerous text-to-image generation tools such as DALL-E and Stable Diffusion. GPAI systems are already being deployed to support and entrench existing asymmetries of power and wealth. For instance, the online news outlet CNET recently disclosed that it had been publishing stories written by an AI and edited by humans for months.

The current concern over ChatGPT marks an important moment in AI’s publicity and publicization, an instance of what Noortje Marres describes as “material things” acting as “crucial tools or props for the performance of public involvement in an issue” (Marres, 2010, p. 179). Amidst countless opinion pieces and hot takes on GPAI, this special issue details how scandal, silence, and hype operate to promote and publicize AI. We seek interventions that question AI’s publicity and promotion, as well as new strategies for engaging with AI’s powerful social and political influence.

Concern over generative AI, we argue, is limited by publicity around these systems that has been framed by hype, silence, or scandal. Publicity refers to the relations between affected peoples and matters of shared concern. Historically, these relations have been mediated by the press, but GPAI’s rise coincides with uncertainty about journalism’s status and with a rise of direct, one-step-flow effects, whether citizen-to-citizen communication or, in the case of ChatGPT, a direct link between system and user.

Scholars have observed that publicity around AI follows four distinct patterns:

Hype. This discourse is the most prominent. Concepts like the fourth industrial revolution and disruption function as self-fulfilling prophecies, with the consequence that technology always arrives as good news. The launch of ChatGPT is a hallmark of this hyped mode, fitting a well-worn “normative framework of publicity [that is] drained of its critical value, and convert[ed] from a democratic asset to a democratic liability” (Barney, 2008, p. 92). Opening AI to the public, as in the case of ChatGPT, is portrayed as a good in itself, no matter the consequences for society, even as this publicization shifts the focus to acceptance and inevitability.

Silence. Where it is not positive, AI coverage is marked by gaps and aporias: closures, in effect, that arise when aspects of AI remain too uncontroversial to report, a pattern also traced in broader work on the logics of AI imaginaries. The result is that AI-related issues seldom enter the political information cycle, as in the case of ChatGPT’s potential violations of privacy and copyright law.

Scandal. Scandals, or what we refer to as proofs of social transgressions, are a pronounced feature of contemporary technology coverage and governance. Scandals, which we stress do not necessarily involve opportunities for public engagement or democratic praxis, result from a mutually reinforcing relationship between newsrooms seeking easy, high-engagement stories and the affordances of social media, and largely function as a distraction from other tasks.

Inevitability. AI discourses are dominated by technology firms, government representatives, AI investors, global management consultancies, and think tanks. These voices profess a faith in data-driven systems to address social problems while also increasing efficiency and productivity. Such discourses reinforce the idea that the increasing use of AI applications across all spheres of life is inevitable, while sidelining or ignoring meaningful engagement with the ways these applications cause harm.

Our special issue seeks interventions focused on:

  1. Critical and comparative studies of AI’s publicities with regard to the launch and hype of AI. We particularly welcome papers that focus on cases outside the Global North;
  2. Ethnographic, discursive, or engaged research with AI’s publics, such as those forming around AutoGPT, HustleGPT, or other communities organized around the use of, misuse of, or resistance to GPAI;
  3. Interventions or reflections on critical practices, such as community engagement and mobilization, futures literacy, or capacity building, for better publicizations of AI that de-center the strategic futuring employed by large technology firms.

SUBMISSION & KEY DATES

Please submit an extended abstract (1,000 words) by 1 August 2023. Accepted full papers are due 1 December 2023. Planned publication: Spring 2024.

