From the paper "Fully Autonomous AI Agents Should Not be Developed" by Margaret Mitchell, Avijit Ghosh, Alexandra Sasha Luccioni & Giada Pistilli at Hugging Face.
This paper argues against developing fully autonomous AI agents, on the grounds that risks to individual people grow as systems gain greater control over their own actions. The authors analyse a scale of AI agent autonomy levels, documenting the ethical trade-offs between potential benefits and risks at each level. They highlight concerns around safety, security, privacy, and the spread of misinformation, all of which are amplified by greater autonomy. The paper acknowledges alternative views that favour fully autonomous AI, for instance as a route to understanding human intelligence or solving global problems, but argues for a more measured approach. The authors call for clear distinctions between levels of agent autonomy, robust human control mechanisms, and rigorous safety verification. Their conclusion draws a parallel with historical nuclear close calls, arguing that human oversight is needed to prevent catastrophic errors and to ensure that AI agents remain aligned with human values and goals.