Dominique Cardon
Professor of Sociology and Director of the Médialab at Sciences Po

Dominique Cardon is Professor of Sociology and Director of the Sciences Po Médialab. His work focuses on the transformation of the public sphere and the uses of new technologies. He has published articles on the place of new technologies in the alter-globalization movement, on alternative media, and on processes of bottom-up innovation in the digital world. His recent research analyzes the power of algorithms in the classification of digital information. His work seeks to articulate the sociology of science and technology with a sensitive approach to the transformations of contemporary social worlds. He is currently studying the social effects of the generalization of machine-learning techniques across an ever-increasing number of everyday situations.
He is the author of La démocratie Internet (Paris, Seuil/République des idées, 2010); with Fabien Granjon, Médiactivistes (Paris, Presses de Sciences Po, 2010); with Antonio Casilli, Qu’est-ce que le digital labor ? (Paris, Ina Éditions, 2015); and À quoi rêvent les algorithmes (Paris, Seuil, 2015). In English: “Deconstructing the algorithm: four types of digital information calculations”, in Robert Seyfert and Jonathan Roberge (eds), Algorithmic Cultures: Essays on Meaning, Performance, and New Technologies, New York, Routledge, 2016, pp. 95-110.
As the historiography of statistics has shown, the deployment of these vast apparatuses to quantify societies emerged hand in hand with the probabilistic understanding that, even if social phenomena are not ruled deterministically, it is nevertheless possible to interpret society on the basis of observable regularities [1]. As Ian Hacking has shown, the development of statistics cannot be disconnected from the rise of democratic, liberal societies, in which, for institutions seeking to govern societies, individual freedom and autonomy are compensated by the production of objectively derived regularities [2]. The probabilistic paradigm thus replaced natural laws and their inherent causalism, offering a technique for reducing uncertainty which, at the close of a vast program of classification and categorization of populations, produced an image of the distribution and regularity of more or less statistically normal behaviors. The vast enterprise of recording, quantifying, and measuring society that unfolded over the nineteenth century thus established the credibility of social statistics and, more broadly, trust in numbers [3].
Practically, through the investment in and maintenance of a codified system of regular recording, and epistemologically, through the distribution of statistical occurrences around mean values, the frequentist method in social statistics thus contributed to making “constant causes” more robust. Embedded in institutional and technical apparatuses, these causes acquired a kind of exteriority. They became the trusted basis on which one could establish correlations about nearly any social phenomenon, and infer causes as well.
Today it is a different method and model, that of the probability of causes, which so-called Bayesian techniques, long marginalized in the history of statistical methods, propose to reopen, making it possible once again for “accidental” rather than “constant” causes to become the basis of new kinds of statistical inference.
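As a reminder of what the “probability of causes” involves, Bayes’ rule, stated here in standard notation rather than drawn from Cardon’s text, reverses the frequentist direction of inference: it computes the probability of a hypothesized cause given the observed data from the probability of observing those data under that cause.

P(\text{cause} \mid \text{data}) = \frac{P(\text{data} \mid \text{cause}) \, P(\text{cause})}{P(\text{data})}

The prior P(\text{cause}) can be revised as new recordings accumulate, which is why such techniques lend themselves to the continual, context-dependent updating described below.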
To broadly characterize the historical turn brought about by this new mode of statistical reasoning, one could say that it replaces the normal distribution with the “empty matrix” [4]. In digitized environments, the proliferation of data recordings leads to a massive increase in the number of variables available for computation. Even though the matrices in which these variables are computed remain largely empty, calculations proceed on the assumption that, in certain contexts, rare and improbable variables may have an effect on certain correlations. This paradigm thus revives inductive techniques of data analysis and avoids reducing and stabilizing the space of relevant variables. Causes become inconstant and are combined by the computer in shifting ways, depending on the local objectives of the various users who seek to predict their environment. This shift towards personalized prediction implies that the causes of individual behaviors become much more uncertain.
The recording of multiple, disparate behaviors may, in certain circumstances and depending on the context, produce a causality sufficient to explain the acts of individuals in a relevant manner.
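To make the “empty matrix” image concrete, the following sketch, a purely illustrative assumption rather than anything drawn from Cardon’s text, builds such a matrix in Python: thousands of individuals (rows) described by a very large number of behavioral variables (columns), of which only a tiny fraction of cells are actually filled.

    from scipy.sparse import random as sparse_random

    # Hypothetical dimensions: 10,000 individuals, 100,000 behavioral variables,
    # with only 0.01% of the cells carrying a recorded event.
    n_individuals, n_variables = 10_000, 100_000
    traces = sparse_random(n_individuals, n_variables, density=0.0001,
                           format="csr", random_state=0)

    total_cells = n_individuals * n_variables
    print(f"{traces.nnz:,} recorded events out of {total_cells:,} cells "
          f"({traces.nnz / total_cells:.4%} filled)")
    # Each variable is informed by only a handful of events, yet inductive
    # machine-learning techniques search these sparse traces for locally
    # useful correlations rather than stable, population-wide regularities.

Run as written, the sketch reports roughly 100,000 recorded events spread across one billion cells, which is the sense in which statisticians of big data speak of an “empty” matrix (see note 4).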
1. Porter (Theodore M.), The Rise of Statistical Thinking 1820-1900, Princeton, Princeton University Press, 1986; Gigerenzer (Gerd), Swijtink (Zeno), Porter (Theodore), Daston (Lorraine), Beatty (John), Krüger (Lorenz), The Empire of Chance. How Probability Changed Science and Everyday Life, Cambridge, Cambridge University Press, 1989.
2. Hacking (Ian), L’émergence de la probabilité, Paris, Seuil, 2002.
3. Porter (Theodore M.), Trust in Numbers. The Pursuit of Objectivity in Science and Public Life, Princeton, Princeton University Press, 1995.
4. A phrase used by statisticians of big data to underline the fact that they have at their disposal many variables in columns and statistical events in rows, but very few statistical events to actually inform each variable.

SPEAKER SESSIONS

2017 EDITION
Data and Society
Wednesday, Nov 22, 2017, 14:15-16:00