Or, worse, they might have to hire enough people to actually do the job. Why hire 100 people with good work-life balance when you can hire 60 who aren't allowed to have lives or families?
AGI is not in reach. We need to stop this incessant parroting from tech companies. LLMs are stochastic parrots. They guess the next word. There's no thought or reasoning. They don't understand inputs. They mimic human speech. They're not presenting anything meaningful.
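To make "they guess the next word" concrete, here's a minimal sketch of autoregressive sampling. The bigram table and its probabilities are invented for illustration - a real LLM conditions on the whole context with billions of learned weights - but the generation loop is the same idea:

```python
import random

# Hypothetical next-word probabilities, standing in for a trained model.
# A real LLM learns a distribution like this over ~100k tokens,
# conditioned on the entire preceding context, not just one word.
MODEL = {
    "the": {"cat": 0.5, "dog": 0.5},
    "cat": {"sat": 0.7, "ran": 0.3},
    "dog": {"ran": 0.6, "sat": 0.4},
    "sat": {"down": 1.0},
    "ran": {"away": 1.0},
}

def generate(start, max_tokens=5):
    """Autoregressive sampling: roll weighted dice for the next word,
    append it, repeat. No reasoning, no notion of truth -- just frequencies."""
    tokens = [start]
    for _ in range(max_tokens):
        dist = MODEL.get(tokens[-1])
        if dist is None:
            break  # no continuation learned for this word
        words, weights = zip(*dist.items())
        tokens.append(random.choices(words, weights=weights)[0])
    return " ".join(tokens)

print(generate("the"))  # e.g. "the cat sat down"
```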
I feel like I have found a lone voice of sanity in a jungle of brainless fanpeople sucking up the snake oil and pretending LLMs are AI. A simple control loop is closer to AI than a stochastic parrot, as you correctly put it.
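For anyone wondering what "a simple control loop" looks like in practice: the textbook example is a thermostat, which senses the world, decides, and acts in a feedback loop. A toy sketch (the setpoint and readings are made up for illustration):

```python
SETPOINT = 21.0  # target temperature in Celsius (arbitrary for this example)

def control_step(current_temp):
    """Bang-bang control: heater on below the setpoint, off at or above it."""
    return "heater_on" if current_temp < SETPOINT else "heater_off"

# One pass over some fake sensor readings.
for temp in [18.5, 20.9, 21.3, 23.0]:
    print(f"{temp:.1f}C -> {control_step(temp)}")
```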
LLMs are AI. There’s a common misconception about what ‘AI’ actually means. Many people equate AI with the advanced, human-like intelligence depicted in sci-fi - like HAL 9000, JARVIS, Ava, Mother, Samantha, Skynet, and GERTY. These systems represent a type of AI called AGI (Artificial General Intelligence), designed to perform a wide range of tasks and demonstrate a form of general intelligence similar to humans.
However, AI itself doesn't imply general intelligence. Even something as simple as a chess-playing robot qualifies as AI. Although it’s a narrow AI, excelling in just one task, it still fits within the AI category. So, AI is a very broad term that covers everything from highly specialized systems to the type of advanced, adaptable intelligence that we often imagine. Think of it like the term ‘plants,’ which includes everything from grass to towering redwoods - each different, but all fitting within the same category.
LLMs are powerful tools for generating text that looks like something. Need something rephrased in a different style? They're good at that. Need something summarized? They can do that, too. Need a question answered? No can do.
LLMs can't generate answers to questions. They can only generate text that looks like an answer to a question. Often enough that answer is even correct, though usually suboptimal. But they'll also happily generate complete bullshit answers, and to the model there's no difference between those and the real thing.
They're text transformers marketed as general problem solvers because a) the market for text transformers isn't that big and b) general problem solvers are what AI researchers have always been trying to create. They have their use cases, but certainly not ones worth the kind of spending they get.
Hey plebs! I demand you work 50% more to develop AGI so that I can replace you with robots, fire all of you, and make myself a double-plus plutocrat! Also, I want to buy an island, a small city, a bunker, a spaceship, and/or something.
They talk about AGI like it's some kind of intrinsically benevolent messiah that is going to come along and free humanity of its limitations, rather than a product that is going to be monetised to make a few very rich people even richer.
It's a belief in Techno-Jesus that will solve all our problems so we don't have to solve them ourselves (don't need to do the uncomfortable things we don't want to). Just like aliens, the singularity, etc.
Ironically, the world is full of people who like to think about solutions to problems. But those in power won't put them to work on those problems, because it's not part of the political game.
What if the whole earth, itself, is like, one giant supercomputer, designed to answer the ultimate question, and it's just been running for billions of years?
Perhaps this is what you mean, but it's even worse than just unpaid hours for current employees. His implicit goal is to create a slave class of people (which is what actual AI would be) that he can make more of or delete at his whim, and to eliminate the livelihoods of any current employees (besides him and the other execs, of course).
Black PR is still PR. It's like how warnings about future weapons, combat robots, and dystopias worked as an ad for a lot of people - they want that exact future.
I think it's the same with AGI. People think Skynet is cool and want Skynet, because they think it's the future.
Except the reality is a bit less glamorous, the same way real fascism doesn't look like Warhammer - it looks like a criminal district ruled by a gang, scaled up to a country.
You know it's bad when I had to click all the way through to the body of the article to verify this isn't a The Onion thing. Do we still have a "Not The Onion" space here?
For how many years? Cuz y'all ain't anywhere near AGI. You can't even get generative AI to not suck compared to your competition in that market (which is a pretty low bar) lol
With all the rounds of layoffs they've had, their remaining employees would need to be quite stupid to give a shit what this disloyal piece of trash says.
Billionaires are often referred to as dragons because they hoard wealth. A guillotine that could know the difference and decide to only harm billionaires would be a technological marvel.
AGI requires a few key components that no LLM is even close to.
First, it must be able to discern truth based on evidence, rather than guessing it. Can’t just throw more data at it, especially with the garbage being pumped out these days.
Second, it must ask questions in the pursuit of knowledge, especially when truth is ambiguous. Once that knowledge is found, it needs to improve itself, pruning outdated and erroneous information.
Third, it would need free will. And that’s the one it will never get, I hope. Free will is a necessary part of intelligent consciousness. I know there are some who argue it does not exist but they’re wrong.
I strongly disagree there. I argue that not even humans have free will, yet we're generally intelligent so I don't see why AGI would need it either. In fact, I don't even know what true free will would look like. There are only two reasons why anyone does anything: either you want to or you have to. There's obviously no freedom in having to do something but you can't choose your wants and not-wants either. You helplessly have the beliefs and preferences that you do. You didn't choose them and you can't choose to not have them either.
The human mind isn't infinitely complex. Consciousness has to be a tractable problem imo. I watched Westworld so I'm something of an expert on the matter.
I'm pretty sure the science says it's more like 20-30 hours. I know personally, if I try to work more than about 40-ish hours in a week, the time comes out of the following week without me even trying. A task that took two hours in a 45-hour "crunch" week will end up taking three when I don't have to crunch. And if I keep up the crunch for too long, I start making a lot of mistakes.
So he's saying they've exhausted the pool of applicants so badly that they can't just replace the extra hours with normal work weeks and 150%, or maybe 200%, as many Googlers?
Power and fame break a man. Even if he wasn't broken from the beginning.
What I learned working with Googlers: they were dorks. Big-ass dorks. Dorks who got used by women because, for the first time in their lives, they were attractive to these women. So many broken marriages and divorces from cheating husbands - stuff they joked about at the Christmas party. It was an eye-opening experience.
Is Google in the cloning business? Because I could swear that's Zack Freedman from the YouTube 3D printing channel. He even wears the heads-up display (YouTube link). Sorry for being off-topic, but who cares about what tech CEOs say about AGI anyway?
If you made AGI, you'd have a computer that thinks like a person. Okay? We already have minds that think like a person: they're called people!
I get that there is some belief that if you can make a digital consciousness, you can make a digital super-consciousness, but genuinely stop and ask what the utility is, and it's equal parts useless and evil.
First, this premise is totally unexamined. Maybe it can think faster or hold more information in mind at one moment, but what basis is there for such a creation actually exceeding the ingenuity of a group of humans working together? What problem is this going to solve? A "cure for cancer"? The bottleneck to curing cancer isn't ideas, it's that cell research takes actual time and money. You need to synthesize molecules, watch cells grow, and pay for lab infrastructure. "Intelligence" isn't the limiting element!
The primary purpose is just to crater the value of human labor by replacing human workers with machines that have godlike powers of reasoning. Good luck with that. I'm sure they won't come to the exact same conclusions as any exploited worker, just in 120 nanoseconds.
It's like Jason's problem-solving advice in "The Good Place":
“Any time I had a problem, and I threw a Molotov cocktail… Boom, right away, I had a different problem.”
I don't think a device will ever have a thought. I find it somewhat akin to a belief in the animism of objects - that they will acquire some form of life force of their own. What a thought is, is a complete mystery. Nobody knows why they happen or where they come from. So who is even to determine whether an inanimate object is exhibiting signs of consciousness? Some people genuinely believe it; others are just running a con.
They’re all desperate: so much venture capital is being poured into this that whoever promises more gets more money, and whoever has more money can bankrupt their rivals. There’s no need for an actual AGI to ever exist to win that game.