Last week’s spectacular OpenAI fight was reportedly a donnybrook between “Effective Altruism” and “Effective Accelerationism”—two schools of philosophy founded on the nonsensical faith, absent any evidence, that godlike artificial intelligence (AI) beings are imminent, and arguing over the best way to prepare for that day.

Cory Doctorow:

This “AI debate” is pretty stupid, proceeding as it does from the foregone conclusion that adding compute power and data to the next-word-predictor program will eventually create a conscious being, which will then inevitably become a superbeing. This is a proposition akin to the idea that if we keep breeding faster and faster horses, we’ll get a locomotive….

But for people who don’t take any of this mystical nonsense about spontaneous consciousness arising from applied statistics seriously, these two sides are nearly indistinguishable, sharing as they do this extremely weird belief. The fact that they’ve split into warring factions on its particulars is less important than their unified belief in the certain coming of the paperclip-maximizing apocalypse….

Pluralistic: The real AI fight

Left out of this argument are the real present-day abuses of artificial intelligence and automation, an omission that (Cory says, quoting Molly White) “is incredibly convenient for the powerful individuals and companies who stand to profit from AI.”

AI and automation can be used for a great deal of good and a great deal of evil, and they are already being used for both, Cory says. We need to focus the discussion on that.

Like Cory, I think it’s entirely possible that we will achieve human-level AI one day, and that such an AI might become superintelligent. That might happen today, in a thousand years, or never. The human race has other things to worry about now.