Chen Li


Safety Issues of the Metaverse

In this interview about the Metaverse, Lex does not mention safety issues, not even once. I understand how excited he must be about crossing the uncanny valley, but this is highly unprofessional, and the interview is essentially a commercial.

I don’t think I have to stress how absurd this is:

  • Scanning your head in great detail, because they “want to capture your facial expressions”.
  • Scanning your house (and possibly your family and friends).

Potentially, anybody could use your avatar to say anything, and with the help of voice imitation, it would even sound like you. Deepfakes still require the target’s pictures or videos for generation, but now you, the target, are handing this information over yourself. I’m not saying the company will do it. Leakage is all you need.

Privacy, copyright, and labeling slavery have always been major ethical problems for Machine Learning. By contrast, classifying patterns of Gravitational Waves or reconstructing neutrino events doesn’t raise these ethical problems, because fewer human factors are involved. And even if the Neural Networks went crazy in the process, nobody would get hurt. One example of such craziness is a model whose “disease detection” actually depended on how old the picture was; see AI does not exist but it will ruin everything anyway - YouTube.
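
As a minimal sketch of that failure mode (a hypothetical toy example in Python, not the actual case from the video): a classifier latches onto a spurious feature, here the age of the scan, instead of the real symptom, then collapses once that correlation disappears at deployment.

# Toy sketch of shortcut learning: "scan_age" happens to correlate with the
# label in training data, so the model learns it instead of the symptom.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 2000
disease = rng.integers(0, 2, n)                  # ground-truth label
symptom = disease + rng.normal(0, 1.5, n)        # weak real signal
scan_age = disease + rng.normal(0, 0.2, n)       # strong spurious signal
X_train = np.column_stack([symptom, scan_age])

model = LogisticRegression().fit(X_train, disease)
print("weights (symptom, scan_age):", model.coef_)        # scan_age dominates

# At deployment the correlation breaks: scan age no longer tracks the disease.
scan_age_test = rng.normal(0.5, 0.2, n)
X_test = np.column_stack([symptom, scan_age_test])
print("accuracy once the shortcut disappears:", model.score(X_test, disease))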

I read this meme somewhere:

  • Me: My dad says you spy on people.
  • Mark Zuckerberg: He’s not your dad.