Digitalized democracies: What's next?

While the digital world continues to make our societies better connected, it has also created huge challenges in the information space which, to this day, remain unaddressed.

A few decades ago, the information space referred to the competition of ideas among experts in the field. Today, it reflects the competition of practically all citizens across the world who decide to invest resources and time into building an online presence – with no expertise or experience required to become influential.

This allows for a free flow of disinformation, hate speech and polarizing content that undermines trust in society. For a functioning democracy, which requires informed discussion, this is a problem. That is why we must regulate the information space by following a few rules:

  1. Private companies cannot have control over the content citizens see
    The decision whether citizens consume content produced by an expert or by a disinformation spreader is mostly made by social media platforms.[1] While they argue this selection is based on users’ behaviour and that manipulative or hateful content gets demoted or excluded from advertising, research [2] suggests otherwise.

    As social media become key sources of information, these platforms must be treated like any other media. Just as with TV or radio, independent boards and councils with the necessary language skills and social and political awareness should be responsible for approving what content reaches hundreds of thousands of citizens online.
  2. Proactively limit the interference of third-state actors who treat us as enemies
    There are instances when we can limit freedom of speech – when the speech originates from third-state actors who treat us as enemies and use it to attack us.

    Just as we banned RT and Sputnik in defence against war propaganda following the invasion of Ukraine, similar steps should follow for other channels with clear links to hostile third-state actors.

    No society is fully resilient against having its public opinion swayed in an undesirable direction.[3]
  3. Slow down those who continue breaking the rules
    Plenty of rules are already set out in our own laws and by the platforms themselves, but the lack of online enforcement remains an issue. While the upcoming EU legislation addresses parts of this, limiting the virality of problematic content before it is taken down remains unaddressed.

    Accounts or pages that break the rules should be subject to algorithmic assessment, with advanced NLP screening of their content, before being allowed to post again.
  4. Turn off comments in a crisis
    Right after the invasion of Ukraine, a huge influx of bots and trolls infecting online discussion was recorded across many countries. Disabling comments en bloc for a limited period would help curb influence operations and facilitate a shift away from the algorithmic set-up that promotes content with the highest number of comments and shares.
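To make rules 3 and 4 concrete, the mechanism could be sketched roughly as follows. This is a minimal illustration, not any platform's actual system: the account model, the keyword list standing in for an NLP classifier, and the scoring weights are all hypothetical.

```python
from dataclasses import dataclass

# Hypothetical stand-in for an advanced NLP model scoring content.
FLAGGED_TERMS = {"attack", "enemy propaganda"}

def looks_problematic(text: str) -> bool:
    """Toy placeholder: flags text containing known problematic terms."""
    lowered = text.lower()
    return any(term in lowered for term in FLAGGED_TERMS)

@dataclass
class Account:
    name: str
    strikes: int = 0  # prior rule violations recorded against the account

def submit_post(account: Account, text: str) -> str:
    """Rule 3: accounts with prior violations are screened before publishing;
    accounts in good standing publish immediately."""
    if account.strikes > 0 and looks_problematic(text):
        return "held for review"
    return "published"

def rank_score(likes: int, comments: int, shares: int,
               crisis_mode: bool = False) -> int:
    """Rule 4: in crisis mode, comment and share volume no longer boosts
    content, since comments are disabled en bloc (weights are illustrative)."""
    if crisis_mode:
        return likes
    return likes + 2 * comments + 3 * shares
```

The design point is that screening applies only to repeat offenders, so ordinary speech is not delayed, while crisis mode removes the engagement incentive that bot and troll activity exploits.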

Implementing these rules consistently across all platforms is a crucial prerequisite. It requires the creation of boards and working groups of experts with swift, active information exchange. While these processes are already being set up, the war, in which we can easily distinguish right from wrong, allows us to be bolder: to fix some flaws of the current information space and limit the impact of those who, willingly or unwillingly, contribute to them.





(Summary of her talk at the International Digital Security Forum (IDSF) in Vienna, June 2022)

This article appeared in the OVE Informationstechnik newsletter, with a focus on social media. The newsletter, published in June 2022, is available here as a complete document.

If you want to stay up to date, sign up for the newsletter here.

Dominika Hajdu
Policy Director
Centre for Democracy & Resilience