A major ethical challenge today is not to think of technology simply as an object in itself, but to strive to situate it in the social, economic, political and industrial environments in which it is deployed. This means keeping a watchful eye on a world that is changing at high speed and producing intense tensions and contradictions. At the same time, there is a strong demand in society for reflexivity and distance.
This demand for meaning is perceptible, among other things, in phenomena that reveal how difficult it has become to find one's bearings in a world increasingly organized under constant time pressure. In this respect, burn-out is highly revealing of a world that generates logics of permanent mobilization, a phenomenon intensified by digital media when they are not approached critically. Hence the situations of psychological exhaustion we are now witnessing.
Like natural resources, human and cognitive resources are not inexhaustible. This observation coincides with a demand for distance that prompts us to ask the following questions: How can we take care of the human being in the multitude of interactions that are made possible in our hypermodern societies? How can we support more rational and responsible technological practices?
The expansion of digital technology in our lives calls for forms of responsibility that concern the meaning we wish to preserve in the development of our societies. In this respect, ethical questioning is crucial insofar as it concerns the way in which we intend to participate (in a consumerist or responsible manner) in our complex technological environments.
With the advent of digitization, there is so much scope for exploiting the traces we leave behind in the course of our activities that we need to take extra care to preserve the values we hold dear in order to promote living together. Among these, trust is ethically of the highest order. It is an essential component of our individual and collective existence. It guarantees freedom of being and of action; it is also an essential condition for any exchange.
But trust is being eroded in an age when disempowerment seems to have become a rule of conduct, amply facilitated by technological architectures that produce certain effects of impunity. We also know that too much acceleration in communication, that is, too much information in constant flow, is likely to create an intense blurring of meaning. One of the most disorientating factors today is of course the spread of fake news, facilitated by the fact that “the negativity bias generally overrides the confirmation bias”1.
As a result, we are becoming increasingly accustomed to living in regimes of mistrust and suspicion that insidiously undermine the equilibrium of our societies. And yet we know that trust is absolutely crucial to our common future and to the definition of desirable futures: “Without trust, the network of human commitments collapses, making the world an even more dangerous and frightening place”2. But are the digital metamorphosis and the capitalist logic really compatible with such ambitions?
Forms of regulation have admittedly emerged on a European scale, in particular with the GDPR (General Data Protection Regulation), which incorporates a right to dereferencing that creates the necessary conditions for the exercise of consent and control in the management of individual data3. One difficulty, however, lies in the link between a definition of the private sphere and the introduction of control mechanisms by the individual himself. To what extent is the individual inclined to want to trigger such a mechanism? Under what conditions is such an exercise in regulation possible?
With regard to the AI Act, we can see that the effects of deterritorialization inherent in globalized capitalism are blurring the contours of responsibility (whether legal or moral). But while trust must more than ever become a crucial issue of legal standardization in the deployment of the technologies themselves (with strong guarantees in terms of cybersecurity), it is difficult to overlook the fact that it must continue to rest on tangible, face-to-face experiences, on spoken words and on a certain moral constancy in our interactions with others.
The more we witness the proliferation of flow-based logics and online spheres of exchange, the more the learning of otherness (which living together, in principle, stimulates) needs to be the object of specific care. Faced with certain logics of (screen) excess, we need to invest in spaces where it becomes possible to nurture exchanges between the various players in the economy and in innovation. A design for coexistence is undoubtedly at stake.
So far, we have been too passive in the face of this gradual erosion of trust. In this respect, it is instructive and promising to see that, over the last few years, young generations of citizens, entrepreneurs, engineers and designers have been taking up the questions posed by our flow societies in order to invent more creative, alternative ways of interacting with our digital environments, for example in Fab Labs.
At a time when the logic of predation is particularly intense, one thing is certain: trust in the age of flows cannot be decreed; it must be organized with sensitivity. It also needs to rest on coherent approaches that can be understood by as many people as possible.
From this perspective, a new kind of knowledge production is needed, one that is more cross-disciplinary and interdisciplinary, enabling a better understanding of the technological objects themselves, at a time when AI systems are tending to impoverish the very exercise of our free will. In the face of these trends, we need to learn to “visualize networks without panicking”4, using digital tools that enable us to sharpen our gaze on invisible phenomena that escape our perception, such as the networks through which our personal data can be exploited.
Data visualization and graphic design can help us to gain a better understanding of our digital environments. Without such efforts to represent, design and interpret our online activities, we will undoubtedly be less inclined to evolve with foresight in our network societies. In any case, restoring confidence in our digital interactions will surely require an effort to design architectures that are not only more secure, but also more readable by their users. A fundamental question arises in our screen-based societies: What is happening to the culture of the written word, a traditional attribute of the elite and a vector of power in the “old world”?
Is it doomed to decline in an internationalized world where the flow of images, pushed forward by platform algorithms, reigns supreme? Are we witnessing, as Virginie Martin and I wondered in our recent book Vertigineux réseaux. Enjeux éthiques, cliniques et politiques5, the advent of a cold technicism, with machines and their codes constraining our thinking and formatting it by imposing drastic editorial standards, at the risk of simplifying and distorting the original message? And to what extent can we allow algorithms to manage the hierarchy of information, at the risk of impacting the political balance itself?
As the French philosopher Anne Alombert has described, whereas the Web was based on hypertext links, enabling intentional browsing, recommendation algorithms now direct users. It is as if the ‘web’, a sort of rhizome, had been transformed into a silo. Such structural realities should certainly encourage us today to create alternative algorithmic policies that are more open and contributory: “This transition from automatic and private recommendations (based on the choices of companies and the quantification of views) to contributory and citizen-based recommendations (based on the interpretations of citizens and the quality of content) is not only more than desirable, it is entirely possible”6.
The societal challenge is all the more crucial in that real power seems to have shifted away from those who control the distribution infrastructures (the companies that own the social networks decide what can and cannot be seen, thereby privatizing moderation according to their own standards and values) towards those who can develop the tools to decipher and interpret them.
At a time when the logic of data predation is being exacerbated, a minimum of trust and transparency must be restored to the design of algorithms: this can be achieved, for example, through initiatives aimed at tackling the biases they may convey. From this perspective, a new, more collaborative and transdisciplinary production of knowledge is needed on a European scale, reviving a certain experience of the common good.
From an ethical point of view, there is also the challenge of enabling more virtuous conceptions of technology at a time when AI systems tend to undermine the very exercise of free will. Faced with these trends, we need to learn to develop a certain digital intelligence, while preventing the networks from producing irreversible effects of proletarianization7, a term used by the French philosopher of technology Bernard Stiegler to designate, among other things, the loss of knowledge and specific skills.
Confronted with the opacity of algorithms and the risk of a fragmentation of the commons, one possible alternative is to learn to understand these systems better: to decipher their logic (via initiatives like AlgorithmWatch), to regulate them (with frameworks like the DSA, the European Digital Services Act) and to explore alternatives (e.g. Mastodon or Wikipedia).
This work of transparency and collective design would enable us not to passively suffer the effects of these digital architectures, but to think of them as spaces to be reinvested. Without this effort, we would remain at the mercy of a dizzying technological whirlwind. Beyond a certain moral panic that tends to overwhelm us, these are ways of developing a more serene relationship with our digital technologies, aiming to achieve a better understanding of the new environments they are shaping.
1 David Chavalarias, Toxic data. Comment les réseaux manipulent nos opinions, Flammarion, 2022, p. 91.
2 Zygmunt Bauman, Wasted Lives: Modernity and its Outcasts, Cambridge: Polity, 2004, p. 169.
3 Pierre-Antoine Chardel & Armen Khatchatourov, “The Ethical Challenges of Digital Identity”, The Conversation, 11/06/2019: https://theconversation.com/the-ethical-challenges-of-digital-identity-126564
4 Philippe Rivière, “Visualiser les réseaux sans paniquer”, in Olaf Avenati & Pierre-Antoine Chardel (Eds.), Datalogie. Formes et imaginaires du numérique, Paris: Éditions Loco / ESAD de Reims, 2016, pp. 84-95.
5 Pierre-Antoine Chardel & Virginie Martin (Eds.), Vertigineux réseaux. Enjeux éthiques, cliniques et politiques, EMS Éditions, 2025.
6 Anne Alombert, “Assurer nos libertés à l’ère de l’intelligence artificielle”, Conseil national du numérique (CNNum), March 2024.
7 See: https://www.researchgate.net/publication/261858142_Proletarianisation
Pierre-Antoine Chardel is a French philosopher and sociologist specializing in the ethical and political dimensions of digital technologies. He is a Full Professor at Institut Mines-Télécom Business School and Institut Polytechnique de Paris, and senior researcher at the Laboratory of Political Anthropology (UMR 8177, CNRS/EHESS).
His recent works include Socio-philosophie des technologies numériques (Presses des Mines, 2022), which examines the transformative effects of digital technologies on society, and L’empire du signal. De l’écrit aux écrans (CNRS Editions, 2020), exploring the shift from written culture to screen-based communication.
In collaboration with Virginie Martin, he co-edited Vertigineux réseaux. Enjeux éthiques, cliniques et politiques (EMS Éditions, 2025), a collective volume that critically analyzes the ethical, clinical, and political challenges posed by pervasive digital networks. Through his interdisciplinary approach, Chardel offers a nuanced critique of technological modernity, advocating for renewed forms of ethical and political consideration.