Some thoughts on AI risk:

  • The #1 AI risk isn’t some AI becoming a mad dictator and killing humans, but powerful AI tools making it easier than ever for a human dictatorship* to harm us.
  • Because of this, AI risk is not a good reason to pause the development of AI: we cannot afford to fall behind our enemies in AI development.
  • That said, there is also a risk that the next dictator to hurt us will be a domestic threat. Perhaps even someone we willingly elected into power.
  • What worries me the most is that, if tomorrow we had to choose between a government that would use AI in empowering ways and one that would use it in dangerous ways, it isn’t evident to me that we would elect the former.
  • The most common argument against AI risk is, “it is possible to create a safe and wise AI.” I have two rebuttals: (1) wise according to whom? (2) the question is not whether it is possible for a “wise researcher” to create a safe AI, but whether it is possible for a criminal to create an unsafe one.
  • Due to the previous points, managing AI risk is not a computer-science problem but a social-governance one.
  • Today more than ever, it is important to reduce the chances that a population elects a psychopath into power or otherwise slides into dictatorship. This requires four elements: (1) a governance system that makes it hard for a wannabe dictator to become one, (2) the social and educational conditions that help the population spot psychopaths and psychopathy and keep them away from power, (3) making sure that those in power are competent, especially in risk management, to prevent accidental harm, and (4) making sure that if, somehow, someone incompetent or evil does get into power, there are mechanisms to *ensure* their swift removal (not just make it possible).

I hope that the points above become part of the public discussion and, in particular, that managing AI risk isn’t considered a computer-science-only problem.

Otherwise, we will have a major blind spot.


(*: I would count “dystopian bureaucracies” as a form of dictatorship too. The label may not be formally correct, but they are similar enough for the purposes of this article.)