Digital Rights Archive Newsletter - Thirteenth edition
Writing in 1958, the great Canadian economist John Kenneth Galbraith coined the phrase “the conventional wisdom” to describe “the ideas which are esteemed at any time for their acceptability.” The phrase immediately leapt to mind upon seeing Philipp Staab, Marc Pirogan and Dominik Piétron’s article on German technological sovereignty (paywalled, sadly). Their article serves as yet another reminder of how thoroughly the conventional wisdom of the past 40 years has been displaced by a new set of ideas. On economic policy, the conventional wisdom has gone from emphasizing markets, free cross-border flows of everything and limited state economic regulation (a phrase which, granted, means something different in Germany than in the United States) to a focus on state economic intervention and the promotion of state sovereignty, in both its digital and economic flavours. For long-term economic-policy watchers, it’s been quite the ride.
Might we also be witnessing a change in the conventional wisdom surrounding generative AI, and the data-driven economy generally? A few pieces this month suggest the bloom may be coming off the generative AI rose, at least in some circles. In his article Unlearning Machines, Rob Lucas turns his eye to the dark social side of machine learning. The article also doubles as a book review of Matteo Pasquinelli’s The Eye of the Master: A Social History of Artificial Intelligence, which sounds like a fascinating read. Meanwhile, self-described artists/programmers Ting-Chun Liu and Leon-Etienne Kühr go deep into text-to-image models, highlighting, among other things, how these models often draw on “the same few datasets, models, and algorithms,” biases introduced by image labelling (“text-image pairs”), “questions about the models’ tendencies to default to learned patterns,” and the biases that come with coding images. There’s a lot here; check it out.
Then, focusing on the material aspect of the data-driven economy, tech podcaster Paris Marx interviews Sebastián Lehuedé, a lecturer in ethics, AI and society at King’s College London, about “how to stop a data centre.” Not the kind of thing you talk about unless you’re very concerned about how the data-driven economy is developing.
It’s fascinating to watch one form of the conventional wisdom supplant another. But it’s a whole other thrill to be present the moment a new conventional wisdom comes into being. Those of us at the 2018 Internet Governance Forum (IGF) annual meeting in Paris witnessed just that, when French President Emmanuel Macron, in his keynote address, highlighted three models for global internet governance. His typology – Chinese authoritarianism, the US free market model, and EU human rights-focused governance – has become the dominant way of thinking about internet governance.
(As speeches go, it was an all-timer. Most speakers at events like this pander to the assembled great and good. Not that day. Instead, Macron deployed his Obama-level oratorical skills to promote the inevitability of state internet regulation, to an auditorium filled with people holding very strong opinions about the inadvisability of such regulation. This was politics on the highest difficulty setting. The moment he finished, a colleague turned to me and said, “We have to talk about that speech.” We did, and the speech inspired a co-edited volume, by Natasha Tusikov, Jan Aart Scholte and yours truly, on the state’s role in internet governance.)
We weren’t the only ones paying attention. On the TrustTalk podcast, Anu Bradford, promoting her latest book, Digital Empires: The Global Battle to Regulate Technology, unpacks these models and their consequences. Though, as a citizen of a smaller country, I’m often left wondering how other regions are regulating the digital economy, along the lines of Universidad de los Andes professor Jean-Marie Chenou’s work on varieties of digital capitalism (also paywalled – sorry). Maybe there are more than three ways to regulate big tech? Time to revise Macron’s conventional wisdom?
I’ve long thought (as have many others) that the problems with recommendation algorithms stem primarily from companies’ business models and the incentives they face as for-profit corporations. So it was fascinating to read the Knight First Amendment Institute’s account of how the non-profit BBC tackles recommender algorithms. They face all the same technical problems with these systems. But, as you might guess, it matters what you’re optimizing for, even if one’s choice of goals is always open to debate and difficult to realize.
As always, we have several other fascinating articles and videos for you, including Melanie Brusseler and Mathew Lawrence on the political economy of the energy transition, a discussion with Jostein Hauge on his book, The Future of the Factory: How Megatrends are Changing Industrialization, and a book-length examination of Decidim, a collective platform for participatory democracy. Will any of these books and thinkers spark a rethink of the conventional wisdom in their subject areas? Stranger things have happened, and it’s only February.
- Blayne Haggart
Technological Sovereignty in Germany: Techno-Industrial Policy as a Form of Economic Statecraft?
Philipp Staab, Marc Pirogan, Dominik Piétron | Global Political Economy
This article discusses Germany's shift towards strategic techno-industrial policies for technological sovereignty amidst global economic competition and foreign big tech influence. It critiques Germany's traditional ordoliberal stance, emphasizing commercial competitiveness over security. Despite efforts, regulatory and institutional barriers hinder effective state interventions, limiting technological sovereignty goals.
Recommenders With Values: Developing recommendation engines in a public service organization
Alessandro Piscopo, Anna McGovern, Lianne Kerlin, North Kuras, James Fletcher, Calum Wiggins, Megan Stamper | Knight First Amendment Institute at Columbia University
This piece advocates for ethical recommendation engines in public service media, focusing on transparency, accountability, and user-centric design. It highlights challenges of bias, privacy, and fairness, proposing principles and practices for responsible recommender systems in public service organizations.
Unlearning Machines
Rob Lucas | Sidecar
The article critiques machine learning's limitations, emphasizing its reinforcement of biases and ignorance of historical contexts. It advocates for "unlearning" biased data patterns and incorporating critical perspectives into AI development to mitigate social inequalities and promote ethical AI practices.
Twinned Transition
Melanie Brusseler, Mathew Lawrence | Common Wealth
A discussion on the concept of a "twinned transition", advocating for the integration of ecological and economic transitions. The article proposes a framework that aligns environmental sustainability with social justice, emphasizing the need for systemic changes in production, consumption, and governance to address climate change and inequality.
How to Stop a Data Center
Sebastián Lehuedé | Disconnect
An interview on how communities can organize to stop data centers, focusing on their material footprint – above all their power and cooling demands – and their environmental consequences. The conversation covers the activist, legal, and regulatory avenues available for challenging data center construction and operation.
The Future of the Factory
Jostein Hauge | UCL Institute for Innovation and Public Purpose
The panel discusses how Megatrends – trends within the domains of technology, economy, society, and ecology that have a global impact – are changing the ability of the manufacturing sector to serve as the engine of growth, changing traditional ideas of technological progress, and changing growth and development opportunities in both the global South and the global North.
Self-cannibalizing AI
Ting-Chun Liu, Leon-Etienne Kühr | media.ccc.de
The talk explores how generative AI models learn from each other's output, leading to a kind of self-cannibalism in the creative process. It investigates biases in text-to-image models, revealing complex algorithms built on limited datasets. Experiments show how datasets are filtered for aesthetics and NSFW content, delve into CLIP's role in text-image pairing, and examine Stable Diffusion's image generation, highlighting its tendency to default to learned patterns.
Decidim, a Technopolitical Network for Participatory Democracy: Philosophy, Practice and Autonomy of a Collective Platform in the Age of Digital Intelligence
Xabier E. Barandiaran, Antonio Calleja-López, Arnau Monterde, Carol Romero | Springer
This book explains the philosophy, design principles, and community organization of the Decidim project as a public-common, free and open digital infrastructure for participatory democracy.
Digital Divides: Navigating Tech, Trust, and Power
Anu Bradford | TrustTalk
This episode focuses on how different regions regulate the digital economy, exploring the contrast between the US's market-driven model prioritizing free speech and innovation, China's state-driven model using technology for surveillance and control, and the EU's rights-driven model emphasizing individual rights and democratic values.