NEON LULLABIES

A Cyberpunk Novel out Sept. 22, 2025

Are you dreaming? Are you sleeping? Have you fallen for their lullabies, or will you join the Awakening?

Supplemental Readings

The following materials are supplemental readings that I've found useful in reflecting on the themes and world of Neon Lullabies. I've included the excerpts from each that I find most poignant, along with a full citation for each. If you'd like help accessing any of these materials, please feel free to reach out to me on Instagram or Discord; my contact information is listed here.

Data Colonialism: Rethinking Big Data’s Relation to the Contemporary Subject

"...Ordinary social interaction has come to contribute to surplus value as a factor of production, just like seed or manure ... But what human beings do when they are tracked and when data are extracted from them during social interactions is not a new type of labor, although it can be appropriated, abstracted, and commodified all the same. The implications of this therefore extend beyond labor to many other aspects of life which until now were not regarded as economic "relations" at all, but come to be incorporated within a vastly expanded production process. These new types of social relations implicate human beings in processes of data extraction, but in ways that do not [at first sight] seem extractive. That is the key point: the audacious yet largely disguised corporate attempt to incorporate all of life, whether or not conceived by those doing it as "production," into an expanded process for the generation of surplus value. The extraction of data from bodies, things, and systems create new possibilities for managing everything. This is the new and distinctive role of platforms and other environments of routine data extraction. If successful, this transformation will leave no discernable "outside" to capitalist production: everyday life will have become directly incorporated into the capitalist process of production.
"As this transformation cannot even start without the appropriative moment of data colonialism, it is important to consider how dispossession feels from the perspective of the objects of that appropriation: human beings." (p. 343)

"In contemporary data relations, it is not even the uniquely identifiable individual who is the object of data tracking and nudging. The subject is reached under conditions which need not involve their naming or even their identification. You or we will be uniquely identified by multiple corporations using different sets of data features, each sufficient to prompt a distinctive action. Data scholars call these sets of data points our "data doubles.” Management theorists Alaimo and Kallinikos (2016) note, in an analysis of retail fashion platforms, that data doubles are the building blocks for new "social objects": data-rich constructs arranged in complex categories which corporations can target and influence. Media platforms like Netflix are based on the structuring of content production and marketing around the data doubles produced through relentless data harvesting and processing, all of which suggests customization and convenience for the user. But there is nothing comforting about this. Even though the new social knowledge is produced through operations that bypass human beings, it is actual human beings, not "doubles," who are tethered to the discriminations that such knowledge generates. It is a real person who gets offered a favorable price in the supermarket, an opportunity for social housing, or a legal penalty, all based on algorithmic reasoning. Human inputs are only part of the territory that data colonialism seeks to annex to human capital. Machine-to-machine connections significantly deepen the new web of social knowledge production. Consider the fast-growing "Internet of Things." The goal is clear: to install into every tool for human living the capacity to continuously and autonomously collect and transmit data within privately controlled systems of uncertain security." (p. 344)

Couldry, Nick, and Ulises A. Mejias. “Data Colonialism: Rethinking Big Data’s Relation to the Contemporary Subject.” Television & New Media, vol. 20, no. 4, 2 Sept. 2018, pp. 336–349, https://doi.org/10.1177/1527476418796632.

OpenAI Used Kenyan Workers on Less Than $2 Per Hour to Make ChatGPT Less Toxic

"Even as the wider tech economy slows down amid anticipation of a downturn, investors are racing to pour billions of dollars into “generative AI,” the sector of the tech industry of which OpenAI is the undisputed leader. Computer-generated text, images, video, and audio will transform the way countless industries do business, the most bullish investors believe, boosting efficiency everywhere from the creative arts, to law, to computer programming. But the working conditions of data labelers reveal a darker part of that picture: that for all its glamor, AI often relies on hidden human labor in the Global South that can often be damaging and exploitative. These invisible workers remain on the margins even as their work contributes to billion-dollar industries."

"In February 2022 ... Sama began pilot work for a separate project for OpenAI: collecting sexual and violent images—some of them illegal under U.S. law—to deliver to OpenAI. The work of labeling images appears to be unrelated to ChatGPT. In a statement, an OpenAI spokesperson did not specify the purpose of the images the company sought from Sama, but said labeling harmful images was “a necessary step” in making its AI tools safer. (OpenAI also builds image-generation technology.) In February, according to one billing document reviewed by TIME, Sama delivered OpenAI a sample batch of 1,400 images. Some of those images were categorized as “C4”—OpenAI’s internal label denoting child sexual abuse—according to the document. Also included in the batch were “C3” images (including bestiality, rape, and sexual slavery,) and “V3” images depicting graphic detail of death, violence or serious physical injury, according to the billing document. OpenAI paid Sama a total of $787.50 for collecting the images, the document shows."

"But the need for humans to label data for AI systems remains, at least for now. 'They're impressive, but ChatGPT and other generative models are not magic—they rely on massive supply chains of human labor and scraped data, much of which is unattributed and used without consent,' Andrew Strait, an AI ethicist, recently wrote on Twitter. 'These are serious, foundational problems that I do not see OpenAI addressing.'"

Perrigo, Billy. “OpenAI Used Kenyan Workers on Less Than $2 Per Hour: Exclusive.” TIME, 18 Jan. 2023, time.com/6247678/openai-chatgpt-kenya-workers/. Accessed 13 June 2025.

Risk and the Future of AI: Algorithmic Bias, Data Colonialism, and Marginalization

"Algorithmic bias is the phenomenon by which an algorithm may perform particularly poorly on a population subgroup if it was not exposed to that subgroup's data during algorithm development and training (Lovejoy, Arora, Buch, & Dayan, 2022). One area where this emerging issue has been highlighted is through health data poverty, whereby marginalised patient groups tend to be underrepresented in data used for health research (Ibrahim, Liu, Zariffa, Morris, & Denniston, 2021). As such, concerns have been raised about how AI and machine learning may be exacerbating healthcare inequalities by underperforming in marginalised patient groups (Panch, Mattie, & Atun, 2019) posing new risks for medical science being developed as well as future treatment effectiveness. For example, algorithmic bias has been well-described in the context of ophthalmological care, which already exhibits social injustice, based on geography within and between countries as well as by socioeconomic status and ethnicity (Campbell et al., 2021). One of the first real-world implementations of AI in healthcare is the use of machine learning algorithms to diagnose diabetic retinopathy from retinal fundus photos (Ting et al., 2017). While some literature has suggested that the capability of AI to interpret these images may exceed that of human ophthalmologists, recent studies note AI risks systematically underperforming for margin- alised patient subgroups, if they are not adequately represented in the data used to train the algorithms (Chu, Squirrell, Phillips, & Vaghefi, 2020). Recently, Burlina, Joshi, Paul, Pacheco, and Bressler (2021) constructed a dataset of retinal fundus photos that specifically excluded dark-skinned patients, and trained a machine learning algorithm on this biased dataset to detect diabetic retinopathy (Burlina et al., 2021). They found that the machine learning algorithm was only 60.5% accurate at detecting retinopathy in dark-skinned patients, compared to 73.0% accurate in light-skinned patients. This is because there are physiological differences between dark-skinned and light-skinned fundi that a machine learning algorithm would not understand if it had not been exposed to the data."

Arora, A., et al. “Risk and the Future of AI: Algorithmic Bias, Data Colonialism, and Marginalization.” Information and Organization, vol. 33, no. 3, 24 Aug. 2023, https://doi.org/10.1016/j.infoandorg.2023.100478.
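
For readers curious about what the subgroup performance gap described in that excerpt looks like in practice, here is a minimal, hypothetical sketch of my own (not code from Arora et al. or Burlina et al.) showing how a trained classifier's accuracy can be compared across patient subgroups. The model, test data, and label names are placeholders assumed for the example.

    # Hypothetical sketch: comparing a trained classifier's accuracy across
    # subgroups, in the spirit of the experiment quoted above.
    # `model`, `X_test`, `y_test`, and the subgroup labels are placeholders.
    import numpy as np
    from sklearn.metrics import accuracy_score

    def accuracy_by_subgroup(model, X_test, y_test, subgroup_labels):
        """Return a dict mapping each subgroup to the model's accuracy on it."""
        results = {}
        for group in np.unique(subgroup_labels):
            mask = subgroup_labels == group
            results[group] = accuracy_score(y_test[mask], model.predict(X_test[mask]))
        return results

    # A model trained without dark-skinned fundus images might report something
    # like {"dark": 0.605, "light": 0.730}, mirroring the gap quoted above.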

Further Readings

The following readings are less directly related to my process in creating Neon Lullabies, but I still consider them essential and highly recommend them.

Culture and Imperialism, Edward W. Said (1993)

The Dispossessed, Ursula K. Le Guin (1974)

"Can the Subaltern Speak?", Gayatri Chakravorty Spivak (1988)