Eliezer Yudkowsky is not a household name, but he will be soon. Raised an Orthodox Jew, Yudkowsky (“Yud” to his friends and acolytes) did not attend high school or college, but nevertheless managed to become staggeringly proficient in mathematics, physics, computer science, epistemology, ethics, and – most importantly – artificial intelligence.
Yudkowsky is the founder of the Machine Intelligence Research Institute (MIRI), an organization devoted to protecting the human race from artificial intelligence. He isn’t optimistic. Many AI researchers put the odds that AI will wipe out the human race at 10% or higher, but Yudkowsky is even more pessimistic: he thinks the end of humanity is not merely highly probable, but imminent. In an interview with The Guardian last February, he predicted that humanity is more likely to have five years left than fifty. It’s bleak, but, as he said in another recent interview, he wants to go down fighting: “I’ll keep on fighting until the end, which I wouldn’t do if I had literally zero hope. I could still be wrong about something in a way that makes this problem somehow much easier than it currently looks. I think that’s how you go down fighting with dignity.”