Who will establish the ethics and morality of the Metaverse?
Date of publication: 2022-11-16
The age of the metaverse has not yet arrived, and when it does, it will by no means be confined to a single domain controlled by any one company.
The metaverse is not a new concept. Science fiction writer Neal Stephenson coined the term in his 1992 novel Snow Crash, which depicts a hypercapitalist dystopia in which humanity collectively chooses to live in virtual environments. So far, people’s experience of virtual worlds has been no less dystopian. Most immersive digital environments feature bullying, harassment, virtual sexual assault, and the like, all traceable to digital platforms built to “move fast and break things.” None of this should come as a surprise. The ethics of new technologies have always lagged behind the innovation itself. That is why an independent body should develop regulations for virtual worlds, and the sooner the better.
Ethicists also took notice of the surge of corporate and government interest in the field after a major breakthrough in artificial intelligence image recognition in 2012. They have continued to publish articles drawing attention to the dangers of training AI on biased data, and have developed a new vocabulary for embedding the values we wish to preserve into new AI applications.
Thanks to these warnings, we know that AI is in effect “automating inequality” and perpetuating racial bias in law enforcement, as SUNY Albany professor Virginia Eubanks has pointed out. To draw attention to the issue, Joy Buolamwini, a computer scientist at the MIT Media Lab, founded the Algorithmic Justice League in 2016.
This was the first wave of efforts to draw public attention to the ethics of artificial intelligence. But the attention it garnered was quickly undercut by calls for self-regulation within the AI industry. AI developers hoped to assuage public concern by rolling out technical toolkits and conducting internal and third-party assessments. That has not worked, because the business models of most AI companies plainly conflict with the ethical standards the public expects them to uphold.
To take the most familiar example, Twitter and Facebook do not really prevent users from abusing their platforms for all kinds of misconduct, because doing so would reduce “engagement” and, of course, the platforms’ revenue. Likewise, these and other technology companies that have used value extraction and economies of scale to achieve near-monopoly positions in their markets will not willingly give up the power they have acquired.
More recently, a number of corporate consultancies and programs have begun to specialize in the ethical management of AI as a way of addressing reputational and operational risk. AI developers at big tech companies are being pushed to consider questions such as whether a feature should be enabled by default or opt-in, whether it is appropriate to delegate a given task to AI, and whether the data used to train the AI can be trusted. To this end, many tech companies have established nominally independent ethics committees. The reliability of this form of moral self-governance is called into question, however, whenever researchers concerned with the ethics and societal implications of AI are reassigned or pushed out.
To lay a sound moral foundation for the metaverse, institutional oversight must be established before industry self-regulation becomes the norm. We must also remember that the metaverse differs fundamentally from AI. AI is largely concerned with corporations’ internal operations, whereas the metaverse is destined to be consumer-centric, which means it will carry all sorts of behavioral risks that most people have not yet considered.
Because U.S. telecommunications law provides the basis for regulating social media, the social media regulatory model is being adopted by default as the governance model for the metaverse. This should worry us all.
While we can easily foresee many violations in the digital environment, our experience with social media suggests that we may be underestimating the magnitude of these acts and their ripple effects.
Rather than repeat past mistakes, we would do better to overestimate these risks. A fully digital environment opens the way to even more exhaustive data collection, including of personal biometric data. And since no one really knows how to deal with these problems in a digital environment, regulatory sandboxes should be used to provide isolated environments for testing programs before they are allowed to roll out at scale.
It is not yet possible to predict every moral challenge the metaverse will raise, but the clock is ticking. Without effective independent oversight, this new digital world will almost certainly run unbridled, reproducing the abuses and inequities that social media and artificial intelligence have already demonstrated, while introducing risks we cannot yet imagine.
Source: Reference News, under the title “Who will establish the ethics and morality of the Metaverse?” The authors are Dr. Josh Enzminger of the Institute for Innovation and Public Purpose, University College London; Mark Esposito, professor at Hult International Business School; and Terrence Esposito, professor at Hult International Business School London. If there is any infringement, please contact us for deletion.