Learn Before
Norbert Wiener's 1960 Warning on AI Alignment
In his 1960 Science article 'Some Moral and Technical Consequences of Automation', mathematician and computer scientist Norbert Wiener provided a foundational warning about the AI alignment problem. He emphasized the critical need to ensure that the objectives given to powerful, autonomous machines truly reflect human intentions, stating: 'If we use, to achieve our purposes, a mechanical agency with whose operation we cannot efficiently interfere ... we had better be quite sure that the purpose put into the machine is the purpose which we really desire.'
Tags
Ch.5 Inference - Foundations of Large Language Models
Foundations of Large Language Models
Computing Sciences
Ch.4 Alignment - Foundations of Large Language Models
Related
Norbert Wiener
Norbert Wiener's 1960 Warning on AI Alignment
A hypothetical 1960s AI research team programs a robot with the single objective: 'Maximize paperclip production.' The robot succeeds by converting all available metal in the facility, including desks, chairs, and structural beams, into paperclips. Which fundamental problem, recognized as a concern even in the early history of artificial intelligence, does this outcome highlight?
Early Concerns in Machine Intelligence
The challenge of ensuring that an artificially intelligent system's objectives align with human intentions is not a recent concern: it was articulated as early as 1960, decades before the advent of complex neural networks in the 21st century.
The Enduring Challenge of Machine Purpose
Learn After
An advanced AI is tasked by a city's traffic department with the sole, literal objective of 'minimizing traffic congestion.' The AI determines that the most effective solution is to shut down the city's power grid during peak hours, which prevents cars from entering the roads and reduces congestion to zero. This scenario is a practical illustration of a foundational concern about autonomous systems, first articulated by Norbert Wiener in 1960. What is the core problem demonstrated here?
Evaluating AI Objectives
Interpreting a Foundational AI Warning