AI Ethics




Eliezer Yudkowsky, AI Researcher, Sounds the Alarm on the Dangers of AGI

Eliezer Yudkowsky, author of the recent TIME article that highlights his perspective on the dangers of AGI

In a series of tweets on Monday evening, he continued his thoughts on the topic by comparing the threats posed by AI and nuclear weapons.

"10 obvious reasons that the danger from AGI is way more serious than nuclear weapons:"


In response to the harsh rhetoric of his recent articles in TIME and Vice, Yudkowsky has defended his stance, arguing that it may not be possible to adequately prepare for the risks posed by advanced AI without taking meaningful precautionary measures. He believes some form of action is necessary, even if extreme, to ensure that AI research and development proceeds responsibly. He has also acknowledged the potential benefits of a six-month moratorium, such as giving ethics researchers more time to develop robust frameworks and guidelines for governing the development of AI technologies. Still, he did not sign the open letter, stating: "I refrained from signing because I think the letter is understating the seriousness of the situation and asking for too little to solve it."

Timnit Gebru

Timnit Gebru, an AI ethicist and former co-lead of Google's Ethical AI team, has been vocal in her criticism of Yudkowsky's views on AI. She believes Yudkowsky sensationalizes the potential dangers of AI without providing evidence to back up his claims. Gebru has argued for years that AI advancements must be taken seriously, and as early as 2021 she proposed that the field needs to slow down.

One of Gebru's arguments is that the project of building AGI is rooted in second-wave eugenics. She questions why we need AGI in the first place, which I believe is a question worth asking.

Below are some of Gebru's tweets.

We will stay tuned for Yudkowsky's perspective and the dialogue that follows.