Digital Rights Archive Newsletter - Fifth Edition
It’s the enduring irony of our current moment: we live in an increasingly datafied society, and yet the exact nature of the foundational element of this society – data – remains shrouded in mystery for many people. In his (deservedly) award-winning book, Innovation in Real Places: Strategies for Prosperity in an Unforgiving World, Dan Breznitz wryly remarks: “The reality is that we do not even have a decent understanding of how data should be used, who should use it, what technologies it might spawn, who should regulate it, who it should be regulated for, or how it should be regulated.”
As a description of the data-governance policy debate, and even the commercial tech mainstream, Breznitz is surely correct. But as this newsletter has highlighted over the past several months, this lack of consensus is not for want of investigation by academics and other experts.
Much of the faith in data, data-driven governance, and tools like ChatGPT stems from the persistent belief that data is a neutral, superior form of knowledge. However, as several of the articles featured in this month’s newsletter attest (to say nothing of a deep, longstanding literature on the subject), this is very much not the case. Data is not neutral. It is shaped by the contexts within which it is generated, by the people and organizations that deploy it, and by the uses to which it is put. Robert W. Cox famously wrote, “Theory is always for someone, and for some purpose.” The same can be said of data.
On the question of what data is, Christoph Stach helpfully unpacks and critiques the idea that data is the new oil: important work, given that our opinions about what data is shape what we will (and will not) do with it. Meanwhile, embracing their inner Cox, a number of articles hammer home the idea that data is always for someone, and for some purpose.
These include Brishen Rogers’s discussion of how data and surveillance shape labour relations; the Alan Turing Institute’s examination of data justice and the relationship between justice and data-driven tech; Sophie Toupin’s fascinating exploration of feminist artificial intelligence; Nur Ahmed, Muntasir Wahed and Neil C. Thompson’s warnings about industry influence on the development of artificial intelligence (an imprecise term that we should probably try to avoid in scholarly writing, as an aside); and Saffron Huang and Divya Siddarth on how generative AI requires us to reconsider the appropriateness of dominant, individual- and privacy-focused ideas of data rights and protection. Meanwhile, Ulises A. Mejias continues his fruitful engagement with the concept of “data colonialism”, which is all about the purposes to which data is put, and for whom.
These articles are joined by several other fascinating pieces that touch on the politics of digital technology: Keldon Bester on how to update Canadian competition policy for a digitally driven economy; and Jessa Lingel on her book, The Gentrification of the Internet: How to Reclaim Our Digital Freedom, nicely complemented by Andrea Monti’s book The Digital Rights Delusion: Humans, Machines and the Technology of Information.
Taken together, all these books, videos, articles and podcasts raise an important question in reply to Breznitz’s observation that we don’t know how best to regulate data. Given the depth of analysis on offer here, and elsewhere throughout our database, it’s hard not to wonder whether the actual problem is one of willful blindness regarding the reality that data is never neutral. To admit this reality would be to complicate everything from algorithmic regulation to the irresponsible roll-out of “generative AI” like ChatGPT. To fully take on board that data, as a human creation, cannot yield the dispassionate, objective, deeper knowledge that its proponents so desire, and that it simply pushes politics and questions of bias underground, would be to severely restrict the ambitions and possibilities of the data-driven society itself.
Which raises the question: is it that we don’t know how to regulate data, or that we, as a society, don’t want to know?
- Blayne Haggart
Data Is the New Oil – Sort of: A View on Why This Comparison Is Misleading and Its Implications for Modern Data Administration
Christoph Stach | Future Internet
This article studies and discusses the characteristics of data that make it such a challenging resource to handle. To enable appropriate data provisioning, it introduces a holistic research concept, from data source to data sink, that respects the processing requirements of data producers as well as the quality requirements of data consumers and, moreover, ensures trustworthy data administration.
How Platforms Govern: Social Regulation in Digital Capitalism
Petter Törnberg | Big Data & Society
This paper situates digital capitalism as a continuation of longer-running post-Fordist trends of financialisation, digitalisation, and privatisation. As the platform model is founded on monopolising regulation, platforms come into direct competition with states and public institutions, which they pursue through a set of distinct techno-political strategies to claim power to govern. While the digital proprietary markets are continuities of existing trends, they bring new pressures and affordances, thus producing discontinuities in social regulation.
Workplace Data Is a Tool of Class Warfare
Brishen Rogers | Boston Review
Technology is a product of our social knowledge and past labour, and it holds out the promise of basic material security and freedom from drudgery for all. We chafe against technologically mediated orders because we are social beings who need respect, community, and space for creativity. Those instincts could be the basis for a new politics of technology, which ultimately seeks to subject production to real democratic control. Data-driven technologies could then help meet social needs and enable us to choose our labours, making them a source of dignity and self-worth.
Uncovering Data Justice
David Leslie, Jean L. Fendji, Shmyla Khan, Osama Manzar, Angeline Wairegi | The Alan Turing Institute
Societies’ increasing use of technology through the years has massively changed the way people live and move in the world. But the human impact of data-driven technologies is far from universally positive. How can data-driven technologies be deployed in ways that are compatible with the values of social justice?
Generative AI and the Digital Commons
Saffron Huang, Divya Siddarth | The Collective Intelligence Project
Existing conceptions of data rights and protection and copyright or licensing-based models offer some instructive priors, but are ill-suited for the issues that may arise from models trained on commons-based data. Forward-looking proposals include investments in standardised dataset/model disclosure and other kinds of transparency when it comes to generative models’ training and capabilities, consortia-based funding for monitoring/standards/auditing organisations, and structures for shared ownership based on individual or community provision of fine-tuning data.
Rethinking Canada’s Competition Policy in a Digital Economy
Keldon Bester | Centre for International Governance Innovation
In January 2023, international and Canadian experts attended a virtual workshop to explore implications of digital data for markets and rethink the policy frameworks underlying those markets in light of the Government of Canada’s ongoing consultation on the Competition Act. Their discussion centred on two themes: first, policy actions that international peers have taken and what Canada might learn from them, and second, how regulatory coordination and coherence across interlocking frameworks (possibly in tension) might play a role in addressing challenges, both present and future, in a digital economy. This report presents the key takeaways from their discussion.
Shaping Feminist Artificial Intelligence
Sophie Toupin | New Media & Society
This article examines the historical and contemporary shaping of feminist artificial intelligence (FAI). It begins by looking at the microhistory of FAI through the writings of Alison Adam and her graduate students to enrich the plural histories of AI and to write back feminist history into AI. Then, it explores contemporary examples of how FAI is being shaped today and how it deploys a multiplicity of meanings.
The Gentrification of the Internet: How to Reclaim Our Digital Freedom
Jessa Lingel | New Books Network
The internet has become a battleground. Although it was unlikely to live up to the hype and hopes of the 1990s, only the most skeptical cynics could have predicted the World Wide Web as we know it today: commercial, isolating, and full of, even fueled by, bias. This was not inevitable. Jessa Lingel argues that much like our cities, the internet has become gentrified, dominated by the interests of business and capital rather than the interests of the people who use it.
The Growing Influence of Industry in AI Research: Industry Is Gaining Control Over the Technology’s Future
Nur Ahmed, Muntasir Wahed, Neil C. Thompson | Science
Industry’s AI successes are easy to see on the news, but those headlines are the heralds of a much larger, more systematic shift as industry increasingly dominates the three key ingredients of modern AI research: computing power, large datasets, and highly skilled researchers. This domination of inputs is translating into AI research outcomes: Industry is becoming more influential in academic publications, cutting-edge models, and key benchmarks.
The Digital Rights Delusion: Humans, Machines and the Technology of Information
Andrea Monti | Routledge India
This book examines the ever-increasing impact of technology on our lives and explores a range of legal and constitutional questions that this raises. It considers the extent to which concepts such as 'cyberspace' and 'digital rights' advance or undermine our understanding of this development and proposes a number of novel approaches to the effective protection of our rights in this rapidly evolving environment.
The People vs. The Algorithmic State: How Government Is Aiding Big Tech’s Extractivist Agenda, and What We Can Do About It
Ulises A. Mejias | PolicyLink
This paper asks what happens when the state – whose purpose is to guarantee the rights of citizens – becomes a political and business partner with corporations whose profit model hinges on exploitative data-driven advertising, platform services, and gig work. The framework of ‘data colonialism’ is used to examine the historical roots of these new forms of extractivism. Suggestions are reviewed for how the public can help prevent the deployment of discriminatory algorithms, and hold states and corporations accountable when this happens.