You can see the future first in San Francisco.
(c) Leopold Aschenbrenner, a former safety researcher at OpenAI
Leopold A. made news after his scandalous dismissal from OpenAI in 2024. He claimed he was fired after raising concerns about the company's security policies and its general direction on safety and alignment. His internal memo (which later grew into a longer series of essays, Situational Awareness) was brought to the OpenAI board and, he believes, contributed to his termination. The official reason, however, was that the memo had been shared with outside parties, which is prohibited by company confidentiality policy.
The main concern raised by Leopold A. was the prospect of a bidding war for AGI: a race between global superpowers like the U.S., Russia, and China.
In this piece I will summarize the main concerns and scenarios proposed by Leopold A. and try to paint a more nuanced picture of the AI alignment problem.
____ is the future not only of Russia but of all of mankind. There are huge opportunities, but also threats that are difficult to foresee today. Whoever becomes the leader in this sphere will become the ruler of the world.
(c) Vladimir Putin
The New York Times used this quote in its quiz “AI or Nuclear Weapons?”
If your initial guess was nuclear weapons: yeah, we are doomed. It’s AI. This is why many researchers are raising concerns about the commercialization of AI and the “race” for AI. Imagine nuclear weapons being discovered in San Francisco (wait…) and then being “bid” over for rapid scaling, all by private actors. AI has yet to be treated as a national, state-level technology.
Here is a human-made (by me) outline of the risks of AI → AGI development.
https://miro.com/app/board/uXjVKiysQpY=/?share_link_id=636178510834