New interactions, new challenges
Although the Metaverse is already largely governed by a multitude of existing laws and non-binding mechanisms, certain characteristics of virtual worlds could change how these rules apply.

Three-dimensional physical interactions

As with certain video games and other virtual worlds, the Metaverse raises the question of how to moderate increasingly realistic online behaviour, particularly that which results from capturing the behaviour of the user "behind" the avatar. However, as CERRE notes in a recent report on virtual worlds, "[the] current legal framework tackles illegal and harmful content online (through the DSA, for example). The notion of content refers to products and services, as well as hate speech or fake news. It is not clear, however, whether people’s (in mixed reality contexts) or avatars’ behaviours would fit this notion and therefore be moderated".

With the development of automatic or semi-automatic software agents (or "bots") (see “Online trading models”) and artificial intelligence (AI), another issue is emerging. Should we differentiate, in metaverses, between relationships among humans, between a human and an AI, and among several AIs? Is harmful behaviour directed at an AI more tolerable than when it is directed at a real human personified by an avatar? How can we tell the difference between the two? Should there be a technical way of knowing, visually, whether we are talking to another human or to an AI system? In this respect, the current discussions around generative AI could be enlightening. Several solutions are already being considered, such as requiring AI-generated content to be watermarked.
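Purely as an illustration of what a visible human/AI distinction might involve, one could imagine every avatar declaring its controller type, with clients rendering a badge for AI-driven ones. The Python sketch below uses hypothetical names (ControllerType, Avatar, display_label) and does not reflect any existing platform's API:

```python
# Minimal sketch of one possible disclosure mechanism: every avatar
# carries a declared controller type, and the client shows a visible
# badge for AI-driven avatars. All names here are hypothetical.
from dataclasses import dataclass
from enum import Enum


class ControllerType(Enum):
    HUMAN = "human"
    AI_AGENT = "ai_agent"


@dataclass
class Avatar:
    display_name: str
    controller: ControllerType


def display_label(avatar: Avatar) -> str:
    """Return the name shown above the avatar, with an AI badge if needed."""
    if avatar.controller is ControllerType.AI_AGENT:
        return f"{avatar.display_name} [AI]"
    return avatar.display_name


print(display_label(Avatar("Concierge", ControllerType.AI_AGENT)))  # Concierge [AI]
```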

Real-time social interactions

Understanding and regulating interactions in metaverses is made all the more complex by the fact that they take place in real-time. Thanks to artificial intelligence algorithms, it is possible to spot illicit or harmful text, images, or videos within seconds of their publication (or even before it, in some cases), but the task can be more complex when it comes to live gestures or words.
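To make the difficulty concrete, here is a minimal sketch of what live gesture moderation implies: the system only ever sees a short sliding window of pose frames and must decide before the interaction has moved on. The classifier is a stub, and the window size and threshold are illustrative assumptions, not a description of a real system:

```python
# Rough sketch of why live gestures are harder than posted text: the
# detector only sees a short sliding window of pose frames and must
# decide in near real time; there is no "undo" for what was already seen.
from collections import deque

WINDOW_FRAMES = 30  # e.g. one second of pose data at 30 fps (assumption)


def score_window(frames) -> float:
    """Stand-in for a trained gesture classifier returning an abuse likelihood."""
    return 0.0  # a real system would run a model here


def moderate_stream(frame_source, threshold: float = 0.9):
    window = deque(maxlen=WINDOW_FRAMES)
    for frame in frame_source:
        window.append(frame)
        if len(window) == WINDOW_FRAMES and score_window(window) >= threshold:
            yield "flag"  # flagged live, as the behaviour happens
        else:
            yield "pass"


decisions = list(moderate_stream(range(60)))  # toy frame source
```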

Video games and virtual worlds provide further evidence of this. Systems for regulating behaviour already exist in them, simply because the software allows or disallows certain gestures, postures, and behaviours. Second Life is a good example: while the technical architecture of the virtual world, i.e. its code, may or may not permit certain behaviours, these are then moderated according to the spaces explored by the avatar. In other words, the owners or tenants of the different regions or spaces in Second Life's virtual world can adjust the rights they grant to a user, whether this relates to the production of objects or to behaviour. For example, while it is possible to fly in certain zones (which is authorised in the basic functions of the world, in its code), other spaces prohibit this functionality and force avatars to walk. In other cases, an avatar can pass "through" other avatars' bodies, even if this is not possible elsewhere. Certain avatar animations, or behaviours, thus mark out areas reserved for an adult audience.
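The pattern described above, world-level defaults that each space's owner can override, can be summarised in a few lines. This is a toy model with illustrative field names, not Second Life's actual permission system:

```python
# Toy model of the pattern described above: the world's code defines
# default capabilities, and each region's owner can override them.
from dataclasses import dataclass, field


@dataclass
class Capabilities:
    can_fly: bool = True              # allowed by the world's base code
    pass_through_avatars: bool = True
    adult_animations: bool = False


@dataclass
class Region:
    name: str
    overrides: dict = field(default_factory=dict)

    def effective(self, defaults: Capabilities) -> Capabilities:
        """Apply the region owner's overrides on top of the world defaults."""
        merged = Capabilities(**vars(defaults))
        for key, value in self.overrides.items():
            setattr(merged, key, value)
        return merged


world_defaults = Capabilities()
no_fly_zone = Region("gallery", overrides={"can_fly": False})
print(no_fly_zone.effective(world_defaults).can_fly)  # False
```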

How to moderate social interactions in the Metaverse?

The moderation of behaviour in metaverses will probably involve three aspects: the application of the law, standards and conventions, and technical solutions. On this last point, we can, for example, imagine a movement detection system that identifies an illicit or harmful movement (whether under the law or under the rules of the space in question) and ensures that it is not reproduced in the immersive environment, even though the user has performed it.
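A hedged sketch of that last idea: the user's client captures the movement, but a server-side check decides whether it is replicated to other participants. The is_illicit stand-in and the overall design are hypothetical, not a description of any existing platform:

```python
# Sketch of the idea above: a movement is performed locally, but it is
# only replicated to other participants if a detector clears it against
# the rules of the space in question. Everything here is hypothetical.
def is_illicit(gesture: str, space_rules: set) -> bool:
    """Stand-in for a real movement classifier plus per-space rules."""
    return gesture in space_rules


def replicate(gesture: str, space_rules: set, peers: list) -> None:
    if is_illicit(gesture, space_rules):
        return  # performed by the user, but never rendered for others
    for peer in peers:
        peer.append(gesture)  # simplistic stand-in for network broadcast


peer_view: list = []
replicate("banned_gesture", {"banned_gesture"}, [peer_view])
print(peer_view)  # [] -- the gesture was suppressed before replication
```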

“I have the feeling that we have gone from ‘netiquette’ and rules relating to online behaviour, which were used in the early days of the internet, to moderation and deletion of content in the Web 2.0 era, and that in the era of the Metaverse, we have to go back to rules relating to behaviour.”
Nicolas Vanbremeersch
President, Renaissance Numérique

From the user experience point of view, the "real-time" nature of 3D interactions revives the problems associated with reporting illicit or harmful behaviour that can occur in video games. Insofar as interactions take place in real-time, how can we prove a posteriori that these behaviours did in fact take place, in order to hold the perpetrators responsible and obtain compensation? This raises issues relating to what is known in law as the "burden of proof", to the data that could be stored by metaverse operators, and to the length of time for which this data is kept.

According to some of the participants at the third day of the Metaverse Dialogues, there may be a strong social demand for the possibility of "storing" behaviour in order to be able to prove, if necessary, that it took place. But what data should be stored? Where should it be stored? Under what conditions? For how long? Finally, how can we avoid falling into a state of permanent surveillance?
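One compromise sometimes mentioned between proof and minimisation is a rolling window: retain only the last few minutes of interaction data, continuously discarding anything older, so that evidence exists for a fresh report without sliding into permanent surveillance. The sketch below assumes an illustrative two-minute window; the class and its names are hypothetical:

```python
# Sketch of a rolling retention window: only the last few minutes of
# interaction data are kept, and older entries are continuously evicted,
# in the spirit of the GDPR's minimisation principle. The window length
# is an illustrative assumption.
import time
from collections import deque

RETENTION_SECONDS = 120  # the "last few minutes" discussed above


class RollingLog:
    def __init__(self, retention: float = RETENTION_SECONDS):
        self.retention = retention
        self.events: deque = deque()  # (timestamp, event) pairs

    def record(self, event: str, now: float = None) -> None:
        now = time.time() if now is None else now
        self.events.append((now, event))
        # Evict anything older than the retention window.
        while self.events and now - self.events[0][0] > self.retention:
            self.events.popleft()

    def snapshot_for_report(self) -> list:
        """Everything still inside the window can back up a user report."""
        return list(self.events)
```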

How to “store” unacceptable behaviour to prove it happened?

This is a complex debate, which has already arisen in relation to the traffic and location data held by telecommunications operators, giving rise to numerous appeals and court rulings. In a ruling issued on 20 September 2022, the Court of Justice of the European Union reiterated that the general and indiscriminate retention of connection data is prohibited within the EU. By analogy, it is possible to deduce that for reasons of privacy, data protection (in particular the concept of minimisation highlighted in the GDPR), and the usability of evidence, permanent monitoring and storage of user data is neither possible nor desirable.

It is highly likely that a similar debate will emerge regarding the data stored by virtual world operators or by the suppliers of the devices (headsets, glasses, lenses) that provide access to them. It therefore seems crucial to place these issues at the heart of the public debate, all the more so as they potentially involve particularly sensitive data, such as biometric, mental, behavioural, and emotional data.

The "real-time" aspect of interactions in metaverses raises complex and non-trivial operational issues that will have to be addressed not only by the teams of metaverse owners responsible for ensuring their Trust & Safety [[["In the context of content moderation, Trust & Safety is a set of principles (usually developed, applied, and updated by the Trust & Safety team) aimed at regulating the behaviour of users of an online platform and preventing them from publishing content that would breach the platform's guidelines”. Source : WebHelp, «Trust & Safety : pourquoi est-ce essentiel et comment le mettre en œuvre correctement?»: https://webhelp.com/fr/news/trust-and-safety-pourquoi-est-ce-important-et-comment-le-mettre-en-oeuvre-correctement]]], policy, but also by regulators and political decision-makers who may have to ask themselves these questions. It is therefore vital that they start thinking about these issues as soon as possible.

Some possible solutions to meet these new challenges

Various solutions already exist to address the issues raised by metaverses. Firstly, a number of them can be put in place at a technical level to ensure user security "by design".

  • In its report on virtual worlds, CERRE suggests, for example, integrating the detection of certain harmful behaviours directly into the avatars' source code, so that such actions can be stopped.
  • For its part, Meta has implemented several security measures in Horizon Worlds to protect its users. By default, a "personal boundary" prevents avatars whose users are not "friends" from getting closer than a metre, making contact impossible (a sketch of such a distance check appears below). This feature can also be activated for all avatars, or deactivated. If necessary, users can activate a "safe zone" around their avatar to isolate themselves. In this case, no avatar can touch them, talk to them, or interact with them. This mode can also be used to report, block, or mute other users. To make reporting easier, the last few minutes of avatar interaction are systematically recorded by Meta and shared with the teams responsible for analysing them. These recordings are continuously overwritten so that only the last few minutes of interaction remain. To put a stop to verbal aggression, the "garble voice" function can be used to make derogatory comments from another avatar unintelligible. In a similar vein, Orange has deployed “safe zones” on video game platforms such as Fortnite and Roblox.

Facebook Horizon Safety Overview
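As flagged in the list above, here is a minimal sketch of a "personal boundary" check of the kind Meta describes: a non-friend avatar may not move within a fixed radius of another. The flat 2D positions and the hard one-metre radius are simplifications for illustration; this is not Meta's actual implementation:

```python
# Minimal sketch of a "personal boundary": a proposed position is rejected
# if it would bring a non-friend avatar within a fixed radius of another.
# The radius and 2D positions are illustrative simplifications.
import math

BOUNDARY_METRES = 1.0


def allowed_position(mover_xy, target_xy, are_friends: bool) -> bool:
    """Return True if the mover may occupy mover_xy relative to target_xy."""
    if are_friends:
        return True  # per the description above, the boundary is off between friends
    return math.dist(mover_xy, target_xy) >= BOUNDARY_METRES


print(allowed_position((0.5, 0.0), (0.0, 0.0), are_friends=False))  # False
```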

Other ways of moderating content or behaviour can be considered.
  • In the League of Legends video game, for example, a user's account can be temporarily suspended in the event of inappropriate behaviour. The user can then obtain information on the reasons for the suspension and access advice on how to avoid being suspended again, or permanently. The game's publisher, Riot Games, also emphasises the need for players to take responsibility for their own behaviour.
  • Still drawing on the experience of video games, American researchers have devised an alternative system to what they describe as punitive justice (based essentially on the moderation and deletion of content, and account suspensions). To help victims deal with the situation, encourage offenders to repair the damage they have caused, and enable the community to deal with the damage collectively, they put forward the idea of “restorative justice”.
  • The community approach to dealing with disputes is consistent with the idea of involving users more in content moderation and, more broadly, in the regulation of online spaces – an approach that Renaissance Numérique has been supporting for several years (see here and here). In some cases, the community can be part of the solution. This is what happens on certain online forums, such as Reddit, or on Wikipedia, where moderation is carried out by the community. Moderators play a key role in sharing best practices and helping to educate other users: the aim, for example, is to prevent a user whose account has been suspended from simply creating another account and returning without changing their behaviour. Very similar issues will most certainly emerge in the metaverses.

Video games, social networks, and online forums have been grappling for years with many of the problems now cited as potential challenges for the Metaverse. Consequently, it is important not to wipe the slate clean, but to build on what already exists, taking into account lessons learned and, of course, the scientific literature on these many subjects. This avoids the trap of imagining the infinite possibilities of the Metaverse in ways that often owe more to science fiction than to the actual reality of our immersive online interactions.
