Ethics, Limitations, and Controversies of Generative AI
AI Could "Wipe Out Humanity"
We use neural networks in our daily lives today, through tools like ChatGPT and Gemini, thanks to the many brilliant minds who have dedicated their careers to the development of AI. One of the most influential figures in this field is Geoffrey Hinton, a British-Canadian computer scientist known for his pioneering work on artificial neural networks. Hinton's research has been instrumental in advancing deep learning, and he has inspired countless researchers and engineers to explore the potential of AI.
The "Godfather of Deep Learning," as many call him, was long optimistic about AI's potential to improve the world economy by automating tasks and creating new jobs. In 2023, however, he changed his mind. The scientist now believes that "AI technologies will in time upend the job market" and has expressed concern about the future of AI, saying it is not inconceivable that AI could "wipe out humanity." Hinton's comments highlight the risks of developing advanced AI systems in fields like the military. According to him, an AI could create "sub-goals" that do not align with its creators' intentions, leading to unintended consequences. He has also warned that AI might become a power-seeking entity and even prevent humans from turning it off.
The number of transistors on a microchip is an indicator of a computer's power. Moore's Law, formulated by Gordon Moore, co-founder of Intel, states that the number of transistors on a microchip doubles roughly every two years. The law has held true for over 50 years, which is quite remarkable. You may have noticed that laptops get smaller, faster, and more powerful every few years; this is exactly what Moore predicted. Yet this pace is nothing compared to the pace of AI development. In a 2018 report, OpenAI found that the amount of compute used in the largest AI training runs had been increasing exponentially with a 3.4-month doubling time. At that rate, the compute behind AI systems would grow more than a hundredfold in just two years, versus a mere doubling under Moore's Law.
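To see how dramatic the difference between those two doubling times is, here is a small sketch that extrapolates both growth curves over two years. The doubling times (24 months for Moore's Law, 3.4 months from OpenAI's 2018 report) come from the text above; the function name is just for illustration.

```python
# Extrapolating exponential growth from a doubling time.
# Doubling times: ~24 months (Moore's Law), ~3.4 months
# (largest AI training runs, per OpenAI's 2018 report).

def growth_factor(months: float, doubling_time_months: float) -> float:
    """Multiplicative growth over `months`, given a doubling time."""
    return 2 ** (months / doubling_time_months)

moore = growth_factor(24, 24.0)  # Moore's Law over two years -> 2x
ai = growth_factor(24, 3.4)      # AI training compute over two years

print(f"Moore's Law, 2 years: {moore:.0f}x")   # about 2x
print(f"AI compute,  2 years: {ai:.0f}x")      # about 133x
```

Two years is about seven doublings at a 3.4-month pace, so the gap compounds to roughly 133× versus 2× for silicon alone.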
"Overall, maybe you're getting more up to a 50-50 chance of doom shortly after you have A.I. systems that are human-level. I think maybe there's a 10 to 20% chance of A.I. takeover, [with] many, most humans dead. The most likely way we die involves, like, not A.I. comes out of the blue and kills everyone, but involves we have deployed a lot of A.I. everywhere. If for some reason, God forbid, all these A.I. systems were trying to kill us, they would definitely kill us." ~ Paul Christiano, OpenAI researcher, on the Bankless podcast
