Oliver Muszynski

Why I am concerned about AI safety

2023-06-03

Here is a fundamental lesson about AI safety that I learned from Elon Musk during his interview with Tucker Carlson.

Think about it:

Humans are currently the smartest species on Earth.

With the development of AI, something vastly smarter than humans comes along, and it is hard to predict what happens after that. We face a singularity: like a black hole, it is hard to predict anything beyond it.

AI can be dangerous to the public in various ways. It is arguably more dangerous than mismanaged aircraft design or faulty car production, which can cause many accidents: mismanaged AI has the potential to destroy civilization. Its danger may even be greater than that of nuclear weapons.

Here is an alarming quote I took from a documentary:

"50% of AI researchers believe there's a 10% or greater chance that humans go extinct from our inability to control AI."

What we need to do now is establish a regulatory framework to mitigate these risks.

There are already US agencies that oversee things that could cause public harm, like the Food and Drug Administration, the Federal Aviation Administration, and the FCC.

Why don’t we establish agencies that oversee AI?

The starting point for regulation could be a group that first seeks insight into AI, then solicits opinions from the industry, and finally proposes rules.

Under such regulation, we at least have a chance of advanced AI being beneficial to humanity.