The real (economic) AI apocalypse is nigh

Cory Doctorow: " … a third of the stock market is tied up in seven AI companies that have no way to become profitable and … this is a bubble that’s going to burst and take the whole economy with it…. "

I firmly believe the (economic) AI apocalypse is coming. These companies are not profitable. They can’t be profitable. They keep the lights on by soaking up hundreds of billions of dollars in other people’s money and then lighting it on fire. Eventually those other people are going to want to see a return on their investment, and when they don’t get it, they will halt the flow of billions of dollars. Anything that can’t go on forever eventually stops.

Cory’s advice to Cornell University, during a visit to lecture there:

I told them that they should be planning to absorb the productive residue that will be left behind after the bubble bursts:

https://locusmag.com/feature/commentary-cory-doctorow-what-kind-of-bubble-is-ai/

Plan for a future where you can buy GPUs for ten cents on the dollar, where there’s a buyer’s market for hiring skilled applied statisticians, and where there’s a ton of extremely promising open source models that have barely been optimized and have vast potential for improvement.

There are plenty of useful things you can do with AI. But AI is, as Princeton’s Arvind Narayanan and Sayash Kapoor (authors of AI Snake Oil) put it, a normal technology:

https://knightcolumbia.org/content/ai-as-normal-technology

That doesn’t mean “nothing to see here, move on.” It means that AI isn’t the bow-wave of “impending superintelligence.” Nor is it going to deliver “humanlike intelligence.”

It’s a grab-bag of useful (sometimes very useful) tools that can sometimes make workers' lives better, when workers get to decide how and when they’re used.

That’s what a big institution should do. But what about individuals? That’s something I’ve been thinking about, and getting nowhere with.