Led by the School of Humanities, Creative Industries and Social Sciences, this international seminar series features speakers from the University of Newcastle alongside leading international scholars in sociology, digital humanities, communications, media studies, and education, responding to key social issues of our time: automation, democracy, wellness and conspiracies, material inequalities, science and nature, and youth digital identities. The series is designed to showcase the importance of social science research in leading responses to these issues.
As everyday life has been digitalised, more and more of the things we do, like and consume are being automated. Life under the data gaze means predictive analytics drive social, political and cultural practices, sometimes in ways that are beyond our ken. This has implications for how humans live and make choices; how notions of creativity, authenticity and autonomy are challenged; and how existing inequalities are automated and exacerbated. This seminar series will present cutting-edge research about these developments, how they work, and how we need to think about their implications.
Promotional culture on digital platforms as automated loops and sequences
Nicholas Carah (UQ) and Lauren Hayden (UQ)
One of the paradoxes of the mode of branding that digital platforms have engineered is that at the same time as brands become a ubiquitous part of our public and intimate lives, they also disappear from view. Where once brands stabilised meaning around a sign, they now operate as open-ended, modular, calculative processes. Where they once appeared before mass audiences, they now flow in ephemeral, customised feeds. Where they once segmented and targeted subjects, they now find us via our inscrutable associations and proximities with others. How do brands on digital platforms operate as open-ended and calculative techno-cultural processes? In this talk we explore how digital platforms operationalise promotional culture as automated loops and sequences. To develop this account, we draw on 3,646 screenshots and interviews collected from research participants. By observing how brands are animated in participants’ feeds (via their screenshots), we apprehend how the algorithmic models that classify users and anticipate their interests work to further entrench brands in our digital worlds and close off opportunities to reject consumer culture. A user’s resistance to one piece of branded communication only results in the recommendation of another, pre-selected as suitable for that individual by the platform model, which works through the network of concurrently looping brands and extracts value from every interaction.
Nicholas Carah is Director of the Centre for Digital Cultures & Societies and Professor in the School of Communication and Arts at The University of Queensland. He leads UQ's partnership in the Australian Internet Observatory and is an Associate Investigator in the ARC Centre of Excellence for Automated Decision-Making and Society. Nic is the author of Media and Society: Power, Platforms & Participation (2021), Brand Machines, Sensory Media and Calculative Culture (2016), Pop Brands: branding, popular music and young people (2010), and editor of Digital Intimate Publics and Social Media (2018) and Conflict in My Outlook (2022).
Lauren Hayden is a PhD candidate at the University of Queensland developing participatory methods that enable digital platform observability with a focus on digital advertising and algorithmic cultures. Her work critically examines how platform cultures are influenced by digital advertising through an examination of alcohol promotion and online expression of drinking culture. As a research assistant with the ARC Centre of Excellence for Automated Decision-Making and Society, Lauren has developed innovative approaches to collecting and analysing social media advertising data.
Are some things (still) unrepresentable?
Thao Phan (ANU) and Fabian Offert (University of California)
“Are some things unrepresentable?” asks a 2011 essay by Alexander Galloway. It responds to a similarly titled, earlier text by the philosopher Jacques Rancière examining the impossibility of representing political violence, with the Shoah as its anchor point. How, and how much, political violence can be represented, asks Rancière? What visual modes, asks Galloway, can be used to represent the unrepresentable? In this talk, we examine two contemporary artistic projects that deal with this problem of representation in the age of artificial intelligence.
Exhibit.ai, the first project, was conceived by the prominent Australian law firm Maurice Blackburn and focuses on the experiences of asylum seekers incarcerated in one of Australia’s infamous “offshore processing centers.” It attempts to bring ‘justice through synthesis’, to mitigate forms of political erasure by generating an artificial record using AI imagery. Calculating Empires: A Genealogy of Power and Technology, 1500-2025, the second project, is a “large-scale research visualization exploring the historical and political dependence of AI on systems of exploitation” in the form of a room-sized flow chart.
On the surface, the two projects could not be more different: the first uses AI image generators to create photorealistic depictions of political violence as a form of nonhuman witnessing (Richardson), while the second uses more-or-less traditional forms of data visualization and information aesthetics to render visible the socio-technical ‘underbelly’ of artificial intelligence. And yet, as we argue, both projects construct a highly questionable representational politics of artificial intelligence, in which a tool that is itself unrepresentable for technical reasons becomes an engine of ethical and political representation. While images today are said to be “operational”, meaning that they no longer function as primarily indexical objects, AI images (arguably the most operational of images) are now asked to do the representational (and profoundly political) work of exposing regimes of power, exploitation, and violence.
Thao Phan is a feminist science and technology studies (STS) researcher who specialises in the study of gender and race in algorithmic culture. She is a Lecturer in Sociology (STS) at the Research School of Social Sciences at the Australian National University (ANU) in Canberra, Australia. Thao has published on topics including whiteness and the aesthetics of AI, big-data-driven techniques of racial classification, and the commercial capture of AI ethics research. She is the co-editor of the volumes An Anthropogenic Table of Elements (University of Toronto Press) and Economies of Virtue: The Circulation of 'Ethics' in AI (Institute of Network Cultures), and her writing appears in journals such as Big Data & Society, Catalyst: Feminism, Theory, Technoscience, Science as Culture, and Cultural Studies.
Fabian Offert is Assistant Professor for the History and Theory of the Digital Humanities at the University of California, Santa Barbara, with a special interest in the epistemology and aesthetics of computer vision and machine learning. His current book project focuses on "Machine Visual Culture" in the age of foundation models. He is principal investigator of the international research project "AI Forensics" (2022-25), funded by the Volkswagen Foundation, and was principal investigator of the UCHRI multi-campus research group "Critical Machine Learning Studies" (2021-22), which aimed to establish a materialist perspective on artificial intelligence. Before joining the faculty at UCSB, he served as postdoctoral researcher in the German Research Foundation’s special interest group "The Digital Image", associated researcher in the Critical Artificial Intelligence Group (KIM) at Karlsruhe University of Arts and Design, and Assistant Curator at ZKM Karlsruhe, Germany. Website: https://zentralwerkstatt.org.
Degenerative Music: Listening with and against algorithmic aberrations
Joel Stern (RMIT)
Generative AI platforms like Suno and Udio promise a future where “anyone can make great music” regardless of skills, experience or knowledge by simply using a prompt interface. While this notion radically redefines what it means to create music in a conventional sense, it aligns, weirdly, and perhaps unintentionally, with certain avant-garde and experimental music traditions, which foreground de-skilling (no instrument needed…) and conceptual purity (…just imagination). Further, when we listen to AI-generated music in 2024, despite promises to the contrary, we don’t hear seamless genre replication or polished production. Instead, what stands out are aberrations—glitches, artifacts, and strange affectations—what we might call sonic disaggregations or degenerations. These imperfections are not merely flaws; they are the defining features of AI music. Rather than focusing on AI’s ability to faithfully replicate musical conventions, this talk proposes that the medium specificity of AI music lies in its errors and mutations, its absence of human intentionality, and the ‘lack of shame’ that often accompanies creative choices. While these qualities preclude (at least for now) AI-generated music from being seen as “authentic” popular music, they fulfil long-held avant-garde desires to replace aesthetic choices with automated processes, structures, mechanisations and prompts.
Dr Joel Stern is an Associate Investigator at the RMIT University node of the ARC Centre of Excellence for Automated Decision-Making & Society (ADM+S), and a researcher, curator, and artist living in Naarm/Melbourne, Australia. Informed by his background in DIY and experimental music scenes, Stern’s work focuses on how social, political, and technical practices of sound and listening inform and shape our contemporary worlds.