@atomicpoet I lost a longer post wherein I detailed all the ways LLMs have shown themselves to be unreliable technology that's incapable of cause-and-effect reasoning. Can't be arsed to re-write it but, short version: these things have routinely given people advice that, if followed, could or did kill them; they don't know the difference between information in their training data and information they invented; and due to their fundamental architecture they will never be capable of anything more than generating convincingly lifelike language. A computer program that could reason would be a game changer. That is not what LLMs are, but it is what they are being marketed as. As more and more high-profile failures occur, and as these companies *continue* to not be profitable, that is going to become apparent even to the tech-illiterate C-suites that have drunk the Kool-Aid.
I've lived through a couple of technological sea changes. The internet becoming ubiquitous, computers getting tiny... those were things that literally changed the world, and *nobody* had to say "You'd better get a smartphone or you'll be left behind"; it was self-evident. *Nobody* had to mandate the use of tablets in business. People bought their own and used them. People actually *wanted* these things because they had actual uses. They *actually* saved time. LLMs pretend to save time. Sure, you summarized hundreds of pages of text down to their salient points in a few seconds, except one of those points is a lie and you don't know which one. You prototyped a software project in an hour, but it's riddled with security vulnerabilities and, moreover, is unmaintainable because you can't explain code you didn't write to a colleague. LLMs as they exist today are snake oil, plain and simple, and no amount of new data centres is going to change that. There is no "there" there. It's hype.
So, agree to disagree that the tech is revolutionary. Brass tacks, nuts and bolts, it isn't. Machine learning and the ability to easily discern patterns in data, that's a game-changer of a sort, except it's been around a long time; it's not new. The new hotness is LLMs, and frankly, they suck. They're flashy, they can pull off some neat tricks, but there's no killer app, because the fundamental flaws are always gonna getcha. Billions of dollars and probably hundreds of person-years went into GPT-5, and it still couldn't accurately count the number of Bs in the word "blueberry". It just doesn't pass the smell test. Yes, lots of people are excited and jumping on board, but a reckoning is coming.
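For contrast, the letter-counting task that tripped up GPT-5 is the kind of thing conventional code does trivially and deterministically. A minimal sketch in Python (variable names are my own, just for illustration):

```python
# Counting occurrences of a letter in a word: exact, explainable,
# and the same answer every single time it runs.
word = "blueberry"
count = word.lower().count("b")
print(f"Number of Bs in {word!r}: {count}")  # prints 2 (b-l-u-e-b-e-r-r-y)
```

That's the point: a one-line deterministic computation versus a statistical text generator that can confidently get it wrong.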
Also, agree to disagree about whether culture matters. What others think of us is humanity's great obsession. We invented morality and taboos. The universe is indifferent if you punch your AI-loving boss in the face, but it is *very* socially frowned upon, and because of that, there are consequences for doing it. Culture doesn't come from nowhere; it emerges from history, our subjective embodied experience, the collective unconscious... and all those things are subject to change when humans put their minds to it. Lots of things have been "inevitable": monarchy, slavery... we changed them by resisting them, by shouting about them, and in some cases by beheading the folks responsible for them. Culture, meaning what people say and do and our memes (I don't mean internet memes, I mean actual viral ideas), matters a great deal.
As for how to handle things going forward? My personal plan is to resist AI in every business context where it's introduced to me. I will do everything in my personal power to spread knowledge about the human and environmental costs of this technology, and about how it cannot be trusted for mission-critical tasks. I will share the studies that show it doesn't increase profitability, and the articles about the suicides, poisonings, and deepfake porn videos in high schools that are enabled by giving AI companies money. I will share the analyses that show this is an economic bubble. In short, I will do my damnedest to make it unpopular, and I will try to convince others to do the same.