This article examines the 2015 film Ex Machina as a cultural text that exemplifies the technologization of gender within algorithmic culture. Analysing different textual elements (the narrative diegesis, the marketing material, and the digital techniques used in the VFX post-production process), I argue that gender is consistently figured as a kind of technology. That is, gender is systematised, codified, and reduced to a programmed set of instructions that can be used by machines to manipulate and deceive. I argue that understanding gender through its figuration with the technological, specifically through code and algorithms, raises pertinent issues concerning surveillance, race, and bias. This is reflected in the film through a problematic representation of racialised figures, particularly techno-Orientalist tropes of labouring Asian bodies.
This forum piece examines the emergence of AI ethics as an industrial agenda within Big Tech. We argue that the public demand for Big Tech companies to behave more ethically has, in recent years, resulted in a growing supply of ethics experts and formalised expertise. This expertise is produced and circulated as a form of capital in what we call an economy of virtue. In this economy, reputations are traded and ethical practice is produced in line with commercial decision-making.

We focus our discussion on how this changing landscape affects the labour of those engaged, outlining a set of dilemmas and paradoxes for researchers invested in AI ethics and social responsibility. Drawing on examples from Google, ACM FAccT, and Minderoo’s Frontier Technology, we outline three different ‘positions’ for researchers within an economy of virtue: 1) working on the inside, 2) working adjacently, and 3) working outside. Significantly, we situate these positions within the context of rapidly declining funding for universities in Australia and globally. We argue that this funding landscape creates structural conditions that push researchers to participate in this economy of virtue, and conclude with a call to address the ethical compromises imposed by precarious working conditions.
This commentary uses Paul Gilroy's controversial claim that new technoscientific processes are instituting an 'end to race' as a provocation to discuss the epistemological transformation of race in algorithmic culture. We situate Gilroy's provocation within the context of an abolitionist agenda against racial-thinking, underscoring the relationship between his post-race polemic and a post-visual discourse. We then discuss the challenges of studying race within regimes of computation, which rely on structures that are, for the most part, opaque; in particular, modes of classification that operate through proxies and abstractions and that figure racialized bodies not as single, coherent subjects, but as shifting clusters of data. We argue that in this new regime, race emerges as an epiphenomenon of processes of classifying and sorting – what we call 'racial formations as data formations'. This discussion is significant because it raises new theoretical, methodological and political questions for scholars of media and critical algorithmic studies. It asks: how are we supposed to think, to identify and to confront race and racialisation when they vanish into algorithmic systems that are beyond our perception? What becomes of racial formations in post-visual regimes?
Between 2016 and 2020, Facebook allowed advertisers in the United States to target their advertisements using three broad ‘ethnic affinity’ categories: African American, U.S.-Hispanic, and Asian American. Superficially, these categories were supposed to allow advertisers to target demographic groups without using data about users’ race, which Facebook explicitly does not collect. This article uses the life and death of Facebook’s ‘ethnic affinity’ categories to argue that they exemplify a novel mode of racialisation made possible by machine learning techniques.

Adopting Wendy H. K. Chun’s conceptualisation of race ‘and/as’ technology as an analytical frame, this article focuses on what ‘ethnic affinity’ categories do with race. ‘Ethnic affinity’ categories worked by analysing users’ preferences and behaviour: they were supposed to capture an ‘affinity’ for a broad demographic group, rather than registering membership of that group. That is, they were supposed to allow advertisers to ‘personalise’ content for users depending on behaviourally determined affinities. We argue that, in effect, Facebook’s ethnic affinity categories were supposed to operationalise a ‘post-racial’ mode of categorising users. But the paradox of personalisation is that in order to apprehend users as individuals, platforms must first assemble them into groups based on their likenesses with other individuals.

Even in the absence of data on a user’s race—even after the demise of the categories themselves—users can still be subject to techniques of inclusion or exclusion for discriminatory ends. The inductive machine learning techniques that platforms like Facebook employ to classify users generate proxies, like racialised preferences or language use, as racialising substitutes. We conclude that Facebook’s ethnic affinity categories in fact typify novel modes of racialisation that are often elided by the claim that using complex machine learning techniques to attend to our preferences will inaugurate a post-racial present. Discrimination is not personalisation’s accidental product; it is its very condition of possibility. Like that of Facebook’s ethnic affinity categories, its death has been greatly exaggerated.
This short essay on digital assistants and accent regionalisation was written for the UQ Art Museum's online exhibition 'Conflict in My Outlook.' The exhibition and provocation essays are available to access online: https://www.conflictinmyoutlook.online/provocations
In the 150 years since its construction by the Russian chemist Dmitry Mendeleev, the periodic table of chemical elements has become both a ubiquitous and iconic expression of scientific thought. In its tidy arrangement of substances, identities, atomic weights, and other properties, it illustrates the chemical sciences’ ontological argument about the elemental and universal structure of nature. As Michelle Murphy (2017, 495) has argued, this same "functionalist bent" leads to the problematic governance of chemicals as discrete entities, obscuring the complexity and inequity of our respective entanglements with toxicity and its infrastructures. The essays in this series take an ironic stance towards the functionalism and naturalism of the chemical sciences, nominating materials, beings, forces, and other entities that are elemental to our present anthropogenic predicaments. From sodium fluoroacetate, a common pesticide used to "control" rodent populations in countries such as Aotearoa New Zealand, to malhar, an Indian raga that is believed to inspire the clouds to rain, these critical and creative essays respond to a dire need to question universalist classificatory systems and to theorize the elemental in situated ways.
This article examines the figuration of the home automation device Amazon Echo and its digital assistant Alexa. While most readings of gender and digital assistants choose to foreground the figure of the housewife, I argue that Alexa is instead figured on domestic servants. I examine commercials, Amazon customer reviews, and reviews from tech commentators to make the case that the Echo is modeled on an idealized image of domestic service. It is my contention that this vision functions in various ways to reproduce a relation between device/user that mimics the relation between servant/master in nineteenth-and twentieth-century American homes. Significantly, however, the Echo departs from this historical parallel through its aesthetic coding as a native-speaking, educated, white woman. This aestheticization is problematic insofar as it decontextualizes and depoliticizes the historic reality of domestic service. Further, this figuration misrepresents the direction of power between user and devices in a way that makes contending with issues such as surveillance and digital labor increasingly difficult.
We live, contends Alexander Galloway, in an algorithmic culture. Algorithms are now inescapably embedded into everyday life, transforming processes and objects from cultural artefacts into “smart” systems. But unlike most algorithms, which are obscured behind the black box of post-industrial processes, intelligent personal assistant softwares such as Apple's Siri are imbued with voice and personality. That is, they are given a materiality and tangibility. This paper aims to interrogate the nature of this materiality, and specifically, the manifestation of the gendered voice. It is my contention that the gendered voice of Siri is symptomatic of the difficulties in performing trust and transparency in what is essentially an intangible process. As Christian Sandvig has argued, transparency and trust are processes that must be seen in order to be believed, but the issue with algorithms is that for the most part they can't be seen. Thus for these “robots,” the performance of human sociality, specifically the use of language, humour, and the presentation of gender, are cunning manoeuvres that contribute to the performance of trust in the theatre of persuasion. Continuing Sandvig's trajectory, this research seeks to explore the relationship between gender, sociality, and immediacy in these artificial systems.
Introduction to Platform: Journal of Media and Communication special issue on A Manifesto for Cyborgs at 30. Full issue available online: http://platformjmc.com/
Overington, C. & Phan, T. (2016). Domesticating drone technologies: Commercialisation, banalisation, and reconfiguration of ‘ways of seeing’. In R. Tippet & H. Randall-Moon (eds) Security, race, biopower: Essays on technology and corporeality. Sydney: Palgrave Macmillan.
The production of the #cokedrone YouTube advertisement by Coca-Cola Singapore in 2014 is a manifestation of the broader trend toward the domestication of drone technologies within city spaces, indicating a prolonged desire to eschew the drone's relationship to violence. This article seeks to briefly provide one interpretation of this ad and the broader contextual implications of drones in cities. We argue that while a variety of strategies are clearly deployed within this ad—the redesign of the body of the drone, the attempt to negate the relation between drones and violence, and finally, the reconfiguration of the drone eye from the eye that ‘watches’ to the eye that ‘sees’—the overall implications of drone technologies within city spaces warrant further investigation. Because drone technologies are easily adaptable to changing environments, often concealing their security and surveillance capabilities, commercial and domestic participants in this trend must be critically aware of the potential consequences underlying the normalisation of drones as part of everyday life in the city.
Paper delivered at Code 2k18, Swinburne University, http://www.code2k18.com/
Paper delivered at the QUT At Home With Media Symposium, Nov 2-3 2017.

This paper examines the role of nostalgia and class privilege in advertisements for the digital domestic assistant, the Amazon Echo. By analysing the promotional material since its release, I map the ways in which commercial discourse portrays American middle-class family subjectivities at the historical moment in which AI technologies enter the home and become “everyday”. It is my contention that subjectivities of class, which are predicated on an exploitative relation, are romanticised and commodified within consumer culture in such a way that denies the pain and historical context of that exploitation. Whereas the bourgeois middle-class home has historically been a site for the exploitation of working-class (and generally racialised) women's labour, the middle-class homes represented in the Echo discourse eschew this history by displacing the figure of the domestic helper with the consensually “servile” digital assistant Alexa.
Anatomy of the Image: Perspectives on the (Bio)medical Body in Science, Literature, Culture and Politics International Conference. Melbourne, Australia. February 16, 2017.
Technicity, Temporality, Embodiment: The 10th International Somatechnics Conference. Byron Bay, Australia. November 2016.
2016 IEEE conference on Norbert Wiener in the 21st century. Melbourne, Australia. July 14, 2016

Abstract:
As areas of research, Artificial Intelligence and gender studies rarely meet. On the one side, AI researchers often assume that, because theirs is the pursuit of the nature of mind and not body, issues such as gender are negligible in their own practice. On the other, feminist discourses often treat AI as coterminous with what Donna Haraway (1985) calls the ‘demonology of technology’—a disregard for science and technology grounded in an assertion of ‘masculine culture’ and the military bias of ‘Big Science’. Cybernetics is one such field that has come under serious scrutiny, as charges laid against the liberal humanist subject in discourses of computing are coterminously charged against cybernetics and Norbert Wiener as its ‘steersman’. Indeed, encounters between AI and feminism are today immanent and also unavoidable. In an effort to mediate these discourses, this project situates itself on the cusp. It asks, what can feminist inquiry and studies of gender contribute to the understanding of Artificial Intelligence: a) as a field of research steeped within discourses of science that make particular claims to the nature of thinking and the self, b) as a cultural figure with a rich mythology in the cultural imaginary, and c) as a panoply of objects newly crossing the boundaries into everyday life.
After Biopolitics: 29th Annual SLSA Conference. Houston, Texas. November 15, 2015.
Society for Social Studies of Science (4S) Annual Meeting. Denver, Colorado. November 11, 2015.
PopCAANZ 7th International Conference. Sydney, Australia. June 29, 2016.
Paper delivered at the Australian Women's History Network Conference, Intersections in History. Melbourne, Australia. April 2016.
Paper delivered at the 2015 NACADA International Conference: Partnering for Student Success. Melbourne, Australia. June 24, 2015.
Short research presentation delivered at Research Unit in Public Cultures: White Night Research Workshop. Melbourne, Australia. April 1, 2015.
Paper delivered at Deletion—Deviation: The Perversions of Science Fiction. Melbourne, Australia. February 19, 2015
Paper delivered at Space, Race, Bodies: Geocorpographies of City, Nation, Empire conference on December 8, 2015 in Dunedin, New Zealand
Paper delivered at ANZCA 2014, Melbourne, Australia
In her seminal essay, "Race and/as Technology", Wendy H. K. Chun proposed that race could be understood as a technology: that is, as neither biological nor cultural, human nor machine, mediated nor environmental, visible nor invisible, but as a category that organises all of these dualisms and many more. Race, she argued, can be thought of as a "technique that one uses, even as one is used by it" (38). Using this essay and its concerns as a touchstone, this panel will ask how we might understand race and/as technology in the present. What is the relationship between visibility/invisibility and race, and how is it mediated? How are emergent technologies of control, such as facial recognition, racialised? How might we think categories (like race, person, or population) after Artificial Intelligence, machine learning, and big data? How might these technologies be understood historically? What is the relationship to related categories, such as gender, class or ability, in the technologisation of race? How might analyses of culture help us to understand race and/as technology? If we adopt Chun's idea that this approach "displaces ontological questions of race" (56), what different things might race be made to do, in the present? We want to bring together scholars working in any discipline touched by these questions to think through what it might mean to consider race and/as technology today.
Short introduction to the film "Donna Haraway: Storytelling for Earthly Survival" delivered at the Environmental Film Festival Australia, October 17, 2017
http://www.effa.org.au/melbourne-program-2017/2017/10/17/donna-haraway-storytelling-for-earthly-survival
Platform: Journal of Media and Communication
An interdisciplinary journal for early career researchers and graduate students

Abstracts due: Friday 27th of February, 2015
Volume Editor: Thao Phan
This chapter explores the tensions in transition as Unmanned Aerial Vehicles (commonly known as drones) cross over from military contexts into urban city-spaces. Tracing the history of drones through stages of conflict, this chapter argues that the expanding use of drones in everyday life is demonstrative of broader trends in the banalisation of surveillance technologies. By engaging with Hardt and Negri’s concept of Empire, Arendt’s banalisation of violence, and Berger’s ‘ways of seeing’, we demonstrate via a comparative analysis with closed-circuit television how banal spaces may be co-opted using such technologies and how such processes can be exploited in service of biopolitical ends.
In this paper, we examine how generative machine learning systems produce a new politics of visual culture. We focus on DALL•E 2 and related models as an emergent approach to image-making that operates through the cultural techniques of feature extraction and semantic compression. These techniques, we argue, are inhuman, invisual, and opaque, yet are still caught in a paradox that is ironically all too human: the consistent reproduction of whiteness as a latent feature of dominant visual culture. We use OpenAI's failed efforts to 'debias' their system as a critical opening to interrogate how systems like DALL•E 2 dissolve and reconstitute politically salient human concepts like race. This example vividly illustrates the stakes of this moment of transformation, when so-called foundation models reconfigure the boundaries of visual culture and when 'doing' anti-racism means deploying quick technical fixes to mitigate personal discomfort, or more importantly, potential commercial loss.