It's been just over two years and two months since ChatGPT launched, and in that time we've seen Large Language Models (LLMs) blossom from a novel concept into one of the most craven cons of the 21st century — a cynical bubble inflated by OpenAI CEO Sam Altman built to sell […]
I refuse to sit here and pretend that any of this matters. OpenAI and Anthropic are not innovators, and are antithetical to the spirit of Silicon Valley. They are management consultants dressed as founders, cynical con artists raising money for products that will never exist while peddling software that destroys our planet and diverts attention and capital away from things that might solve real problems.
I'm tired of the delusion. I'm tired of being forced to take these men seriously. I'm tired of being told by the media and investors that these men are building the future when the only things they build are mediocre and expensive. There is no joy here, no mystery, no magic, no problems solved, no lives saved, and very few lives changed other than new people added to Forbes' Midas list.
None of this is powerful, or impressive, other than in how big a con it’s become. Look at the products and the actual outputs and tell me — does any of this actually feel like the future? Isn’t it kind of weird that the big, scary threats they’ve made about how AI will take our jobs never seem to translate to an actual product? Isn’t it strange that despite all of their money and power they’re yet to make anything truly useful?
My heart darkens, albeit briefly, when I think of how cynical all of this is. Corporations are building products that don't really do much, sold on the idea that one day they might, peddled by reporters who want to believe their narratives, and in some cases actively champion them. The damage will be tens of thousands of people fired, long-term environmental and infrastructural chaos, and a profound depression in Silicon Valley that I believe will dwarf the dot-com bust.
And when this all falls apart — and I believe it will — there will be a very public reckoning for the tech industry.
The author seems to think that OpenAI having an unsustainable business model means generative AI is a con. Generative AI doesn’t mean OpenAI 🤦♂️ There is a good chance that the VC funds invested in OpenAI will have evaporated in 5 years. But generative AI will exist in 5 years, it will be orders of magnitude more useful, and it will help solve many problems.
The level of user sophistication required to use modern "AI" in a productive, useful way puts it squarely beyond the reach of the masses. AlphaFold is fucking awesome; ChatGPT o3 is nothing but a gimmick.
100%, and like any tool, it can be used poorly, resulting in AI bit rot, bugs, unmaintainable code, etc. But when used well, given appropriate context, by users who know what good solutions look like, it can increase developer efficiency.
It will. So has just about every other major technical development ever. Eventually those lost jobs should be replaced by even more jobs made possible by the new technology, but in the meantime it will suck.
That's how you know it's not just a gimmick. How many jobs did blockchain replace? Just about zero. How many jobs did computers or the Internet or the mechanical loom or the freaking steam engine replace? Tons.
Except genAI has proven no purpose. This is like saying, "look at how many jobs bankers replaced! We just used to eat for free; now we have to work our entire lives for it or starve!"
Generative AI has spawned an awful lot of AI slop, and companies are forcing incomplete products on users. But don't judge the technology by shitty implementations. There are loads of use cases where, used correctly, generative AI brings value: document discovery in legal proceedings, for example.
It is the best option for certain use cases. OpenAI, Anthropic, etc. sell tokens, so they have a clear incentive to promote LLM reasoning as an everything solution. LLM reasoning is normally an inefficient use of processor cycles for most use cases. However, because it is so flexible, it is still the best option in many cases, even though it's inefficient from a cycle perspective, because the current alternatives are even more inefficient (from a cycle or human-time perspective).
Identifying typos in a project update is a task that LLMs can efficiently solve.
Yes I think it's a good option for spell check, or for detecting when the word it sees seems unlikely given the context.
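That "unlikely given the context" idea can be sketched with a toy model. The snippet below flags words whose probability given the preceding word is low, using a tiny hand-built bigram table as a stand-in for an LLM's next-token probabilities; the corpus, function name, and threshold are invented for illustration, not any real system's API.

```python
from collections import defaultdict

# Toy bigram model standing in for an LLM's next-token probabilities.
# A real system would query a language model; this only illustrates the
# idea of flagging words that are unlikely given their context.
corpus = (
    "the project is on track the team shipped the release "
    "the release is stable the team is on track"
).split()

# Count how often each word follows each preceding word.
follows = defaultdict(lambda: defaultdict(int))
for prev, word in zip(corpus, corpus[1:]):
    follows[prev][word] += 1

def unlikely_words(text, threshold=0.05):
    """Flag words whose probability given the previous word is below threshold."""
    words = text.lower().split()
    flagged = []
    for prev, word in zip(words, words[1:]):
        total = sum(follows[prev].values())
        if total == 0:
            continue  # no data for this context, so we can't judge
        prob = follows[prev][word] / total
        if prob < threshold:
            flagged.append(word)
    return flagged

# "realease" never follows "the" in the corpus, so it gets flagged.
print(unlikely_words("the team shipped the realease"))
```

An actual LLM-based checker would do the same thing with a far richer context window and vocabulary, which is why it catches errors a dictionary-based spell check misses (real words used in the wrong place).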
For things where it's generating text, or categorizing things, it might be the easiest option, or currently the cheapest option. But I don't think it's the best option if you consider everyone involved.
But I don’t think it’s the best option if you consider everyone involved.
Can you expand on this? Do you mean from an environmental perspective because of the resource usage, social perspective because of jobs losses, and / or other groups being disadvantaged because of limited access to these tools?
Basically the LLM may make people's jobs easier, for instance someone can get a meeting summary with less effort, but they produce worse results if you consider everyone affected by the work product, like considering whose views are underrepresented in the summary. Or, if you're using it to categorize text, you can't find out why it is producing incorrect results and improve it the way you could with other machine learning techniques. I think Emily Bender can do a better job explaining it than I can:
Check out the part where she talks about the problems with relying on LLMs to generate meeting summaries and with using them to classify customer support calls as "resolved" or "not resolved". I tried to link close to that second part since the video is long.
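The debuggability point above can be made concrete with a toy example. A tiny hand-rolled Naive Bayes classifier for a "resolved" / "not resolved" task lets you read off exactly which words push a call toward each label and retrain when a weight looks wrong, which is what a black-box LLM classifier doesn't give you. The training snippets and labels here are invented for illustration.

```python
import math
from collections import Counter

# Invented toy training data for a "resolved" / "not resolved" classifier.
training = [
    ("issue fixed customer happy", "resolved"),
    ("problem solved thanks", "resolved"),
    ("still broken please help", "not_resolved"),
    ("no fix yet escalating", "not_resolved"),
]

counts = {"resolved": Counter(), "not_resolved": Counter()}
for text, label in training:
    counts[label].update(text.split())

vocab = set(counts["resolved"]) | set(counts["not_resolved"])

def word_weight(word):
    """Log-odds of the word toward 'resolved', with add-one smoothing."""
    p_res = (counts["resolved"][word] + 1) / (sum(counts["resolved"].values()) + len(vocab))
    p_not = (counts["not_resolved"][word] + 1) / (sum(counts["not_resolved"].values()) + len(vocab))
    return math.log(p_res / p_not)

def classify(text):
    """Sum per-word log-odds; positive total means 'resolved'."""
    score = sum(word_weight(w) for w in text.split() if w in vocab)
    return "resolved" if score > 0 else "not_resolved"

# The model is inspectable: print each word's contribution to a decision.
for w in "still broken".split():
    print(w, round(word_weight(w), 2))
print(classify("problem fixed"))
```

When this model misclassifies a call, you can see which word weights caused it and fix the training data; with an LLM prompt you mostly get an answer with no usable explanation of why.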
I agree and I think this comes back to execution of the technology as opposed to the technology itself. For context, I work as an ML engineer and I’ve been concerned with bias in AI long before ChatGPT. I’m interested in other folks perspectives on this technology. The hype and spin from tech companies is a frustrating distraction from the real benefits and risks of AI.