They're kind of right. LLMs are not general intelligence and there's not much evidence to suggest that LLMs will lead to general intelligence. A lot of the hype around AI is manufactured by VCs and companies that stand to make a lot of money off of the AI branding/hype.
I believe they were implying that a lot of the people who say "it's not real AI it's just an LLM" are simply parroting what they've heard.
Which is a fair point, because AI has never meant "general AI"; it's an umbrella term for a wide variety of intelligence-like tasks performed by computers.
Autocorrect on your phone is a type of AI: it compares what you type against a database of known words via a "typo distance", and it adds new words to its database when you overrule it so it doesn't make the same mistake again.
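For a rough picture of the mechanics, here's a minimal sketch of the idea in Python - not how any particular keyboard actually implements it, just the "typo distance" concept with a made-up word list:

```python
# Minimal autocorrect sketch: compare typed input against known words by edit
# distance. Word list and distance threshold are invented for illustration.

def edit_distance(a: str, b: str) -> int:
    """Classic Levenshtein distance via dynamic programming (rolling row)."""
    dp = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        prev, dp[0] = dp[0], i
        for j, cb in enumerate(b, 1):
            prev, dp[j] = dp[j], min(
                dp[j] + 1,         # deletion
                dp[j - 1] + 1,     # insertion
                prev + (ca != cb)  # substitution
            )
    return dp[-1]

known_words = {"hello", "world", "their", "there"}

def autocorrect(typed: str) -> str:
    # Suggest the known word with the smallest "typo distance".
    best = min(known_words, key=lambda w: edit_distance(typed, w))
    return best if edit_distance(typed, best) <= 2 else typed

def learn(word: str) -> None:
    # When the user overrules a correction, remember the new word.
    known_words.add(word)

print(autocorrect("helo"))  # -> "hello"
```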
It's like saying a motorcycle isn't a real vehicle because a real vehicle has two wings, a roof, and flies through the air with hundreds of people inside.
Pretty sure the meme format is for something you get extremely worked up about and want to passionately tell someone, even in inappropriate moments, but no one really gives a fuck.
People who don't understand or use AI think it's less capable than it is, claim it's not AGI (which no one was saying anyway), and try to make it seem less valuable because it's "just using datasets to extrapolate, it doesn't actually think."
Guess what you're doing right now when you "think" about something? That's right, you're calling up the thousands of experiences that make up your "training data" and using it to extrapolate on what actions you should take based on said data.
You know how to parallel park because you've assimilated road laws, your muscle memory, and the knowledge of your car's wheelbase into a single action. AI just doesn't have sapience and therefore cannot act without input, but the process by which it does things is functionally similar to how we make decisions; the difference is that its training data gets input within seconds as opposed to being built over a lifetime.
Depends on what you mean by general intelligence. I've seen a lot of people confuse Artificial General Intelligence and AI more broadly. Even something as simple as the K-nearest neighbor algorithm is artificial intelligence, as this is a much broader topic than AGI.
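To make that concrete, here's a toy k-nearest-neighbour classifier - a handful of lines, yet it squarely counts as "AI" in the textbook sense (the data points are made up):

```python
# Toy k-nearest-neighbour classifier: the kind of simple algorithm that already
# falls under "artificial intelligence". Sample data is invented.
from collections import Counter
import math

def knn_predict(train, query, k=3):
    """train: list of (features, label) pairs; query: feature tuple."""
    nearest = sorted(train, key=lambda item: math.dist(item[0], query))[:k]
    labels = [label for _, label in nearest]
    return Counter(labels).most_common(1)[0][0]

samples = [((1.0, 1.0), "cat"), ((1.2, 0.8), "cat"),
           ((8.0, 9.0), "dog"), ((7.5, 8.5), "dog")]
print(knn_predict(samples, (1.1, 0.9)))  # -> "cat"
```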
An artificial general intelligence (AGI) is a hypothetical type of intelligent agent which, if realized, could learn to accomplish any intellectual task that human beings or animals can perform. Alternatively, AGI has been defined as an autonomous system that surpasses human capabilities in the majority of economically valuable tasks.
If some task can be represented through text, an LLM can, in theory, be trained to perform it either through fine-tuning or few-shot learning. The question then is how general do LLMs have to be for one to consider them to be AGIs, and there's no hard metric for that question.
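("Few-shot learning" here just means putting a handful of worked examples into the prompt itself, no retraining involved. A toy example of what such a prompt might look like:)

```python
# A made-up few-shot prompt for sentiment labelling. No weights are updated;
# the examples in the prompt are the only "training" the model sees.
prompt = """Label the sentiment of each review as positive or negative.

Review: "Great battery life, totally worth it." -> positive
Review: "Stopped working after two days." -> negative
Review: "Exactly what I was hoping for." -> """
# A capable LLM will usually continue this text with "positive".
```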
I can't pass the bar exam like GPT-4 did, and it also has a lot more general knowledge than me. Sure, it gets stuff wrong, but so do humans. We can interact with physical objects in ways that GPT-4 can't, but it is catching up. Plus Stephen Hawking couldn't move the same way that most people can either and we certainly wouldn't say that he didn't have general intelligence.
I'm rambling but I think you get the point. There's no clear threshold or way to calculate how "general" an AI has to be before we consider it an AGI, which is why some people argue that the best LLMs are already examples of general intelligence.
Depends on what you mean by general intelligence. I've seen a lot of people confuse Artificial General Intelligence and AI more broadly. Even something as simple as the K-nearest neighbor algorithm is artificial intelligence, as this is a much broader topic than AGI.
Well, I mean the ability to solve problems we don't already have the solution to. Can it cure cancer? Can it solve the P vs NP problem?
And by the way, Wikipedia tags that second definition as dubious, as that is the definition put forth by OpenAI, who, again, has a financial incentive to make us believe LLMs will lead to AGI.
Not only has it not been proven whether LLMs will lead to AGI, it hasn't even been proven that AGIs are possible.
If some task can be represented through text, an LLM can, in theory, be trained to perform it either through fine-tuning or few-shot learning.
No it can't. If the task requires the LLM to solve a problem that hasn't been solved before, it will fail.
I can't pass the bar exam like GPT-4 did
Exams are often bad measures of intelligence. They typically measure your ability to consume, retain, and recall facts. LLMs are very good at that.
Ask an LLM to solve a problem without a known solution and it will fail.
We can interact with physical objects in ways that GPT-4 can't, but it is catching up. Plus Stephen Hawking couldn't move the same way that most people can either and we certainly wouldn't say that he didn't have general intelligence.
The ability to interact with physical objects is very clearly not a good test for general intelligence and I never claimed otherwise.
It depends a lot on how we perceive "intelligence". It's a vaguer term than most, so people have very different views of it. Some people might take it to mean that the response to stimuli and the output (language, art, or any other form) is indistinguishable from humans. But many people may also agree that whales/dolphins have the same level of, or superior, "intelligence" to humans. The term is too vague to really pin down with confidence, and more importantly people often use it to mean completely different concepts ("intelligence" as a measurable/quantifiable property of how quickly/efficiently a being can learn or use knowledge, or more vaguely its "capacity to reason"; "intelligence" as the idea of "consciousness" in general; "intelligence" to refer to the amount of knowledge/experience one currently has or can memorize; etc.)
In computer science "artificial intelligence" has always simply referred to a program making decisions based on input. There was never any bar to reach for how "complex" it had to be to be considered AI. That's why Minecraft zombies or shitty FPS bots are "AI", or a simple algorithm made to beat table games is "AI", even though clearly they're not all that smart and don't even "learn".
Even sentience is on a scale. Even cows or dogs or parrots or crows are sentient, but not as much as we are. Computers are not sentient yet, but one day they will be. And then soon after they will be more sentient than us. They'll be able to see their own brains working, analyze their own thoughts and emotions(?) in real time and be able to achieve a level of self reflection and navel gazing undreamed of by human minds! :D
But also, the number of people who seem to think we need a magic soul to perform useful work is way, way too high.
The main problem is that idiots seem to have watched one too many movies about robots with souls and gotten confused between real life and fantasy - especially shitty journalists way out of their depth.
This big gotcha of "they don't live up to the hype" is 100% from people who heard "AI" and thought of bad Will Smith movies. LLMs absolutely live up to the sensible things people actually hoped for and have exceeded those expectations; they're also incredibly good at a huge range of very useful tasks that have traditionally been considered to require intelligence. But they're not magically able to do everything - of course they're not, that's not how anyone actually involved said they would work or expected them to work.
Even if LLMs can't be said to have 'true understanding' (however you're choosing to define it), there is very little to suggest they should be able to understand/predict the correct response to a particular context, abstract meaning, and intent with the primitive tools they were built with.
If there's some as-yet uncrossed threshold to a bare-minimum 'understanding', it's because we simply don't have the language to describe what that threshold is or to know when it has been crossed. If the assumption is that 'understanding' cannot be a quality granted to a transformer-based model - or even a quality granted to computers generally - then we need some other word to describe what LLMs are doing, because 'predicting the next-best word' is an insufficient description for what would otherwise be a sleight-of-hand trick.
There's no doubt that there's a lot of exaggerated hype around these models and LLM companies, but some of these advancements published in 2022 surprised a lot of people in the field, and their significance shouldn't be slept on.
Certainly don't trust the billion-dollar companies hawking their wares, but don't ignore the technology they're building, either.
You are best off thinking of LLMs as highly advanced auto correct. They don't know what words mean. When they output a response to your question the only process that occurred was "which words are most likely to come next".
Even if LLMs can't be said to have 'true understanding' (however you're choosing to define it), there is very little to suggest they should be able to understand/predict the correct response to a particular context, abstract meaning, and intent with the primitive tools they were built with.
Did you mean "shouldn't"? Otherwise I'm very confused by your response
Yes. But the more advanced LLMs get, the less it matters in my opinion. I mean, if you have two boxes, one of which is actually intelligent and the other is "just" a very advanced parrot - it doesn't matter, given they produce the same output. I'm sure that already LLMs can surpass some humans, at least at certain disciplines. In a couple of years the difference between a parrot-box and something actually intelligent will only show at the very fringes of massively complicated tasks. And that is way beyond the capability threshold that allows one to do nasty stuff with it, to shed a dystopian light on it.
I mean, if you have two boxes, one of which is actually intelligent and the other is "just" a very advanced parrot - it doesn't matter, given they produce the same output.
You're making a huge assumption; that an advanced parrot produces the same output as something with general intelligence. And I reject that assumption. Something with general intelligence can produce something novel. An advanced parrot can only repeat things it's already heard.
The difference is that you can throw enough bad info at it that it will start parroting that instead of factual information, because it doesn't have the ability to criticize the information it receives, whereas a human can be told that the sky is purple with orange dots a thousand times a day and will always point at the sky and tell you "No."
I think a better way to view it is that it's a search engine that works on the word level of granularity. When library indexing systems were invented they allowed us to look up knowledge at the book level. Search engines allowed look ups at the document level. LLMs allow lookups at the word level, meaning all previously transcribed human knowledge can be synthesized into a response. That's huge, and where it becomes extra huge is that it can also pull on programming knowledge allowing it to meta program and perform complex tasks accurately. You can also hook them up with external APIs so they can do more tasks. What we have is basically a program that can write itself based on the entire corpus of human knowledge, and that will have a tremendous impact.
The next step is to understand much more and not get stuck on the most popular semantic trap.
Then you can begin your journey, man.
There are so, so many LLM chains that do way more than parrot. It's just the latest popular catchphrase.
Very tiring to keep explaining this, because even shallow research will make you understand more than the "it's a parrot" comment does. We are all parrots. It's extremely irrelevant to the AI safety and usefulness debates.
Most LLM implementations use frameworks to develop different kinds of understanding, and yes, it's shit, but it's just not true that they only parrot known things; they have internal worlds, especially when you look at agent networks.
...What are these? Something to do with hydrogen? Despite it not making sense for you to write it that way if you meant H2O, I really enjoy the silly idea of a water generator (as in, making water, not running off water).
HHO generators are a car mod that some backyard scientists got into, but didn't actually work. They involve cracking hydrogen from water, and making explosive gasses some claimed could make your car run faster. There's lots of YouTube videos of people playing around with them. Kinda dangerous seeming... Still neat.
They're predicting the next word without any concept of right or wrong; there is no intelligence there. And it shows the second they start hallucinating.
They are a bit like if you took just the creative writing center of a human brain. So they are like one part of a human mind, without sentience or understanding or long-term memory. Just the creative part, even though they are mediocre at being creative atm. But it's shocking, because we kind of expected that to be the last part of the human mind we'd be able to replicate.
Put enough of these "parts" of a human mind together and you might get a proper sentient mind sooner than later.
Exactly. I'm not saying it's not impressive or even not useful, but one should understand the limitations. For example, you can't reason with an LLM in the sense that you could convince it of your reasoning. It will only respond how most people in the dataset it was trained on would have responded (obviously simplified).
It's fun to think about but we don't understand the brain enough to extrapolate AIs in their current form to sentience. Even your mention of "parts" of the mind are not clearly defined.
There are so many potential hidden variables. Sometimes I think people need reminding that the brain is the most complex thing in the universe; we don't fully understand it yet, and neural networks are just loosely based on the structure of neurons, not an exact replica.
I have a silly little model I made for creating Vogon poetry. One of the models is fed from Shakespeare. The system works by predicting the next letter rather than the next word (and whitespace is just another letter as far as it's concerned). Here's one from the Shakespeare generation:
KING RICHARD II:
Exetery in thine eyes spoke of aid.
Burkey, good my lord, good morrow now: my mother's said
This is silly nonsense, of course, and for its purpose, that's fine. That being said, as far as I can tell, "Exetery" is not an English word. Not even one of those made-up English words that Shakespeare created all the time. It's certainly not in the training dataset. However, it does sound like it might be something Shakespeare pulled out of his ass and expected his audience to understand through context, and that's interesting.
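(For anyone who wants to play with the general "predict the next letter, whitespace included" flavor, a toy character-level Markov chain gets you surprisingly far. This is just an illustration, not the model above, and the corpus path is a placeholder:)

```python
# Toy next-character predictor: a character-level Markov chain. Whitespace is
# treated as just another character. Corpus path below is a placeholder.
import random
from collections import defaultdict, Counter

def train(text: str, order: int = 4):
    model = defaultdict(Counter)
    for i in range(len(text) - order):
        context, nxt = text[i:i + order], text[i + order]
        model[context][nxt] += 1          # count which character follows each context
    return model

def generate(model, seed: str, length: int = 200) -> str:
    out, order = seed, len(seed)
    for _ in range(length):
        counts = model.get(out[-order:])
        if not counts:
            break
        chars, weights = zip(*counts.items())
        out += random.choices(chars, weights=weights)[0]  # sample the next letter
    return out

corpus = open("shakespeare.txt").read()   # placeholder corpus
model = train(corpus)
print(generate(model, seed="KING"))
```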
I feel like our current "AIs" are like the Virtual Intelligences in Mass Effect. They can perform some tasks and hold a conversation, but they aren't actually "aware". We're still far off from a true AI like the Geth or EDI.
"AI" is always reserved for the latest tech in this space, the previous gens are called what they are. LMMs will be what these are called after a new iteration is out.
This was the first thing that came to my mind as well and VI is such an apt term too. But since we live in the shittiest timeline Electronic Arts would probably have taken the Blizzard/Nintendo route too and patented the term.
The way I've come to understand it is that LLMs are intelligent in the same way your subconscious is intelligent.
It works off of kneejerk "this feels right" logic, that's why images look like dreams, realistic until you examine further.
We all have a kneejerk responses to situations and questions, but the difference is we filter that through our conscious mind, to apply long-term thinking and our own choices into the mix.
LLMs just keep getting better at the "this feels right" stage, which is why completely novel or niche situations can still trip it up; because it hasn't developed enough "reflexes" for that problem yet.
LLMs are intelligent in the same way books are intelligent. What makes LLMs really cool is that instead of searching at the book or page granularity, they search at the word granularity. It's not thinking, but all the thinking was done for it already by humans who encoded their intelligence into words. It's still incredibly powerful; at its best it could make it so no task ever needs to be performed by a human twice, which would have immense efficiency gains for anything information-based.
... Alexa literally is AI? You mean to say that Alexa isn't AGI. AI is the taking of inputs and outputting something rational. The first AIs were just large if-else compilations built on first-order logic. Later AI utilized approximate or brute-force state calculations such as probabilistic trees or minimax search. AI controls how people's lines are drawn in popular art programs such as Clip Studio when they use the helping functions. But none of these AIs could tell me something new, only what they're designed to compute.
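For reference, the kind of brute-force state calculation being described - a bare-bones minimax search. The game-specific functions here (legal_moves, apply, score, is_terminal) are placeholders, not any particular library:

```python
# Bare-bones minimax over a two-player game tree. legal_moves, apply, score
# and is_terminal are placeholder functions for whatever game you plug in.
def minimax(state, depth, maximizing):
    if depth == 0 or is_terminal(state):
        return score(state)  # static evaluation of the position
    values = (minimax(apply(state, m), depth - 1, not maximizing)
              for m in legal_moves(state))
    return max(values) if maximizing else min(values)
```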
The term AI being used by corporations isn't some protected and explicit categorization. Any software company alive today, selling what they call AI, isn't being honest about it. It's a marketing gimmick. The same shit we fall for all the time. "Grass fed" meat products aren't actually 100% grass fed at all. "Healthy: Fat Free!" foods just replace the fat with sugar and/or corn syrup. Women's dress sizes are universally inconsistent across all clothing brands in existence.
If you trust a corporation to tell you that their product is exactly what they market it as, you're only gullible. It's forgivable. But calling something AI when it's clearly not, as if the term is so broad it can apply to any old if-else chain of logic, is proof that their marketing worked exactly as intended.
That is precisely what I dislike. It's kinda like calling those crappy scooter thingies "hoverboards". It's just a marketing term. I simply oppose the use of "AI" for the weak kinds of AI we have right now and I'd prefer "AI" to only refer to strong AI.
Though that is of course not within my power to force upon people and most people seem to not care one bit, so eh 🤷🏼♂️
Nobody is claiming there is problem solving in LLMs, and you don't need problem solving skills to be artificially intelligent. The same way a knife doesn't have to be a Swiss army knife to be called a "knife."
Been destroyed for this opinion here. Not many practitioners here, just laymen and mostly techbros in this field. But maybe I haven't found the right node?
I'm into local diffusion models and open source LLMs only, not into the megacorp stuff.
If anything, people really need to start experimenting beyond talking to it like it's human, or in a few years we will end up with a huge AI-illiterate population.
I've had someone fight me stubbornly, talking about local LLMs as "an overhyped downloadable chatbot app" and saying the people on fossai are just a bunch of AI-worshipping fools.
I was like, tell me you know absolutely nothing about what you're talking about by pretending to know everything.
But the thing is, it's really fun and exciting to work with, and the open source community is extremely nice and helpful - one of the most non-toxic fields I have dabbled in! It's very fun to test parameters and tools and write code chains to try different stuff, and it's come a long way. It's rewarding too, because you get really fun responses.
Have you ever considered you might be, you know, wrong?
No sorry you're definitely 100% correct. You hold a well-reasoned, evidenced scientific opinion, you just haven't found the right node yet.
Perhaps a mental gymnastics node would suit sir better? One without all us laymen and tech bros clogging up the place.
Or you could create your own instance populated by AIs where you can debate them about the origins of consciousness until androids dream of electric sheep?
You obviously have hate issues, which is exactly why I have a problem with techbros explaining why LLMs suck.
They haven't researched them or understood how they work.
It's a fucking incredibly fast developing new science.
Nobody understands how it works.
It's so silly to pretend to know exactly how badly it works when people working with these models daily keep discovering new ways the technology surprises us. It's idiotic to be pessimistic about such a field.
As someone who loves Asimov and has read nearly all of his work:
I absolutely bloody hate calling LLMs AI. Without a doubt they are neat, but they are absolutely nothing in the ballpark of AI, and that's okay! They weren't trying to make a synthetic brain; it's just the cultural narrative I am most annoyed at.
I agree, but it's so annoying when you work as IT and your non-IT boss thinks AI is the solution to every problem.
At my previous work I had to explain to my boss at least once a month why we can't have AI diagnosing patients (at a dental clinic) or reading scans or proposing dental plans... It was maddening.
Ok, but so do most humans? So few people actually have true understanding in topics. They parrot the parroting that they have been told throughout their lives. This only gets worse as you move into more technical topics. Ask someone why it is cold in winter and you will be lucky if they say it is because the days are shorter than in summer. That is the most rudimentary "correct" way to answer that question and it is still an incorrect parroting of something they have been told.
Ask yourself, what do you actually understand? How many topics could you be asked "why?" on repeatedly and actually be able to answer more than 4 or 5 times. I know I have a few. I also know what I am not able to do that with.
I don't think actual parroting is the problem. The problem is they don't understand a word outside of how it is organized. They can't be told to do simple logic because they don't have a simple understanding of each word in their vocabulary. They can only reorganize things to varying degrees.
It doesn't need to understand the words to perform logic because the logic was already performed by humans who encoded their knowledge into words. It's not reasoning, but the reasoning was already done by humans. It's not perfect, of course, since it's still based on probability, but the fact that it can pull the correct sequence of words to exhibit logic is incredibly powerful. The main hard part of working with LLMs is that they break randomly, so harnessing their power will be a matter of programming in multiple levels of safeguards.
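Something like this toy wrapper, just to illustrate what "safeguards" can mean in practice (call_llm is a made-up placeholder, not a real API):

```python
# Toy illustration of layered safeguards: validate the model's output and
# retry when it breaks. call_llm() is a hypothetical placeholder function.
import json

def ask_with_safeguards(prompt: str, retries: int = 3) -> dict:
    for attempt in range(retries):
        raw = call_llm(prompt + "\nRespond with valid JSON only.")
        try:
            data = json.loads(raw)      # first safeguard: output must parse
        except json.JSONDecodeError:
            continue                    # model broke randomly; try again
        if "answer" in data:            # second safeguard: required field present
            return data
    raise RuntimeError("model failed validation after all retries")
```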
Few people truly understand what understanding means at all. I had a teacher in college who seriously thought you should not understand the content of the lessons, but simply remember it to the letter.
I am so glad I had one that was the opposite. I discussed practical applications of the subject material after class with him and at the end of the semester he gave me a B+ even though I only got a C by score because I actually grasped the material better than anyone else in the class, even if I was not able to evaluate it as well on the tests.
This is only one type of intelligence, and LLMs are already better than humans at regurgitating facts. But I think people really underestimate how smart the average human is. We are incredible problem solvers, and AI can't even match us in something as simple as driving a car.
Lol @ driving a car being simple. That is one of the more complex sensorimotor tasks that humans do. You have to calculate the speed of all vehicles in front of you, assess for collision probabilities, monitor for non-vehicle obstructions (like people, animals, etc.), adjust the accelerator to maintain your own velocity while terrain changes, be alert to any functional changes in your vehicle and be ready to adapt to them, maintain a running inventory of laws which apply to you at the given time and be sure to follow them. Hell, that is not even an exhaustive list for a sunny day under the best conditions. Driving is fucking complicated. We have all just formed strong and deeply connected pathways in our somatosensory and motor cortexes to automate most of the tasks. You might say it is a very well-trained neural network with hundreds to thousands of hours spent refining and perfecting the responses.
The issue that AI has right now is that we are only running 1 to 3 sub-AIs to optimize and calculate results. Once that number goes up, they will be capable of a lot more. For instance: one AI for finding similarities, one for categorizing them, one for mapping them into a use case hierarchy to determine when certain use cases apply, one to analyze structure, one to apply human kineodynamics to the structure and a final one to analyze for effectiveness of the kineodynamic use cases when done by a human. This would be a structure that could be presented an object and told that humans use it and the AI brain could be able to piece together possible uses for the tool and describe them back to the presenter with instructions on how to do so.
Unfortunately the majority of people are idiots who just do this in real life, parroting popular ideology without understanding anything more than the proper catchphrase du jour. And there are many employed professionals who are paid to read a script, or output mundane marketing content, or any "content". And for that, LLMs are great.
It's the elevator operator of technology as applied to creative writers. Instead of "hey intern, write the next article about 25 things these idiots need to buy and make sure 90% of them are from our sponsors", it goes to AI. The writer was never going to purchase a few different types of each product category, blindly test them, and write a real article. They are just shilling crap they are paid to shill, making it look "organic" because many humans are too stupid to realize it's a giant paid-for ad.
I once ran an LLM locally using Kobold AI. Said thing has an option to show the alternative tokens for each token it puts out, and what their probability of being chosen was. Seeing this shattered the illusion that these things are really intelligent for me. There's at least one more thing we need to figure out before we can build an AI that is actually intelligent.
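For anyone curious, here's roughly what peeking at those probabilities looks like with the Hugging Face transformers library - just a sketch, the model name is only an example, and this isn't what Kobold does internally:

```python
# Rough sketch: print the top alternative next tokens and their probabilities.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

name = "gpt2"  # example model, not what Kobold uses
tokenizer = AutoTokenizer.from_pretrained(name)
model = AutoModelForCausalLM.from_pretrained(name)

inputs = tokenizer("The capital of France is", return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits[0, -1]   # scores for the next token only
probs = torch.softmax(logits, dim=-1)
top = torch.topk(probs, k=5)                 # five most likely continuations
for p, idx in zip(top.values, top.indices):
    print(f"{tokenizer.decode([idx.item()])!r}: {p.item():.3f}")
```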
That's actually pretty neat. I tried Kobold AI a few months ago but the novelty wore off quickly. You made me curious, I'm going to check out that option once I get home. Is it just a toggleable option or do you have to mess with some hidden settings?
Just as I was about to give up, it somehow worked: https://imgchest.com/p/9p4ne9m9m4n I didn't really do anything different this time around, so no idea why it didn't work at first.
It's been about a year since I saw the probabilities. I took another look at it just now, and while I can find the toggle in the settings, I can't find the context menu where the probabilities are shown.
Whilst everything you linked is great research which demonstrates the vast capabilities of LLMs, none of it demonstrates understanding as most humans know it.
This argument always boils down to one's definition of the word "understanding". For me that word implies a degree of consciousness, for others, apparently not.
To quote GPT-4:
LLMs do not truly understand the meaning, context, or implications of the language they generate or process. They are more like sophisticated parrots that mimic human language, rather than intelligent agents that comprehend and communicate with humans. LLMs are impressive and useful tools, but they are not substitutes for human understanding.
A young programmer is selected to participate in a ground-breaking experiment in synthetic intelligence by evaluating the human qualities of a highly advanced humanoid A.I.
I always argue that human learning does exactly the same. You just parrot, and after some time you believe it's your own knowledge. Inventing new things is applying previously seen mechanisms to a different dataset.
Alternatively we could call things what they are. You know, cause if we ever have actual AI we kind of need the term to be intact and not watered down by years of marketing bullshit or whatever else.
There are specific terms for what you're talking about already. AI is all the ML algorithms that we are integrating into daily life, and AGI is human-level AI able to create its own subjective experience.
How do you know for sure your brain is not doing exactly the same thing? Hell, being autistic, many social interactions are just me trying to guess what will get me approval without any understanding lol.
Also really fitting that Photon chose this for a placeholder right now:
I'm not OP, and frankly I don't really disagree with the characterization of ChatGPT as "fancy autocomplete". But...
I'm still in the process of reading this cover-to-cover, but Chapter 12.2 of Deep Learning: Foundations and Concepts by Bishop and Bishop explains how natural language transformers work, and then has a short section about LLMs. All of this is in the context of a detailed explanation of the fundamentals of deep learning. The book cites the original papers from which it is derived, most of which are on ArXiv. There's a nice copy on Library Genesis. It requires some multi-variable probability and statistics, and an assload of linear algebra, reviews of which are included.
So obviously when the CEO explains their product they're going to say anything to make the public accept it. Therefore, their word should not be trusted. However, I think that when AI researchers talk simply about their work, they're trying to shield people from the mathematical details. Fact of the matter is that behind even a basic AI is a shitload of complicated math.
At least from personal experience, people tend to get really aggressive when I try to explain math concepts to them. So they're probably assuming based on their experience that you would be better served by some clumsy heuristic explanation.
IMO it is super important for tech-inclined people interested in making the world a better place to learn the fundamentals and limitations of machine learning (what we typically call "AI") and bring their benefits to the common people. Clearly, these technologies are a boon for the wealthy and powerful, and like always, have been used to fuck over everyone else.
IMO, as it is, AI as a technology has inherent patterns that induce centralization of power, particularly with respect to the requirement of massive datasets, particularly for LLMs, and the requirement to understand mathematical fundamentals that only the wealthy can afford to go to school long enough to learn. However, I still think that we can leverage AI technologies for the common good, particularly by developing open-source alternatives, encouraging the use of open and ethically sourced datasets, and distributing the computing load so that people who can't afford a fancy TPU can still use AI somehow.
I wrote all this because I think that people dismiss AI because it is "needlessly" complex and therefore bullshit. In my view, it is necessarily complex because of the transformative potential it has. If and only if you can spare the time, then I encourage you to learn about machine learning, particularly deep learning and LLMs.
Fact of the matter is that behind even a basic AI is a shitload of complicated math.
Depending on how simple something can be to be considered an AI, the math is surprisingly simple compared to what an average person might expect. The theory behind it took a good amount of effort to develop, but to make something like a basic image categorizer (e.g. optical character recognition) you really just need some matrix multiplication and calculating derivatives - non-math-major college math type stuff.
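A minimal sketch of what that level of math looks like in practice - a single-layer classifier trained by gradient descent on made-up data, nothing beyond matrix multiplication and derivatives:

```python
# Single-layer softmax classifier trained by gradient descent. The data is
# random noise standing in for, say, flattened 8x8 character images.
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(100, 64))           # 100 fake "images", flattened
y = rng.integers(0, 10, size=100)        # 10 classes (e.g. digits 0-9)
Y = np.eye(10)[y]                        # one-hot labels

W = np.zeros((64, 10))
b = np.zeros(10)
lr = 0.1

for _ in range(500):
    logits = X @ W + b                                 # matrix multiplication
    probs = np.exp(logits - logits.max(axis=1, keepdims=True))
    probs /= probs.sum(axis=1, keepdims=True)          # softmax
    grad = (probs - Y) / len(X)                        # derivative of cross-entropy loss
    W -= lr * X.T @ grad                               # gradient descent step
    b -= lr * grad.sum(axis=0)

print("training accuracy:", (probs.argmax(axis=1) == y).mean())
```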
That's my point. OP doesn't know the maths, has probably never implemented any sort of ML, and is smugly confident that people pointing out the flaws in a system generating one token at a time are just parroting some line.
These tools are excellent at manipulating text (factoring in the biases they have - I wouldn't recommend trying to use one in a multinational corporation for internal communications, for example, as they'll clobber non-euro-derived culture) where the user controls both input and output.
Help me summarise my report, draft an abstract for my paper, remove jargon from my email, rewrite my email in the form of a numbered question list, analyse my tone here, write 5 similar versions of this action scene I drafted to help me refine it. All excellent.
Teach me something I don't know (e.g. summarise an article, answer a question, etc.)? Disaster!
We use words to describe our thoughts and understanding. LLMs order words by following algorithms that predict what the user wants to hear. It doesn't understand the meaning or implications of the words it's returning.
It can tell you the definition of an apple, or how many people eat apples, or whatever apple data it was trained on, but it has no thoughts of its own about apples.
That's the point that OOP was making. People confuse ordering words with understanding. It has no understanding about anything. It's a large language model - it's not capable of independent thought.
I think that the question of what "understanding" is will become important soon, if not already. Most people don't really understand as much as you might think we do. An apple, for example, has properties like flavor, texture, appearance, weight and firmness; it's also related to other things like trees and belongs to categories like food or fruit. A model can store the relationship of apple to other things and the properties of apples, and the model could probably be given "personal preferences" like a preferred flavor profile and texture profile and use these to estimate whether apples would match those preferences and give reasoning for it.
Unique thought is hard to define, and there is probably a way to have a computer do something similar enough to be indistinguishable - probably not through simple LLMs alone. Maybe by using an LLM as a way to convert internal "ideas" to external words and external words to internal "ideas" to be processed logically, probably using massive amounts of reference materials, simulation, computer algebra, music theory, internal hypervisors, or some combination of other models.
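A crude sketch of that property-and-preference idea (all the properties, weights and the scoring rule here are invented purely for illustration):

```python
# Crude "stored properties vs. stated preferences" sketch. Everything below is
# made up for illustration; no claim that any real model works this way.
apple = {"flavor": "sweet-tart", "texture": "crisp", "category": "fruit",
         "related": ["tree", "orchard", "cider"]}

preferences = {"flavor": {"sweet-tart": 0.8, "bitter": -0.5},
               "texture": {"crisp": 0.9, "mushy": -0.7}}

def preference_score(item: dict, prefs: dict) -> float:
    # Sum the preference weights for every string-valued property the item has.
    return sum(prefs.get(key, {}).get(value, 0.0)
               for key, value in item.items() if isinstance(value, str))

print(preference_score(apple, preferences))  # 1.7 -> "probably likes apples"
```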
I think AI is the single most powerful tool we've ever invented and it is now and will continue completely changing the world. But you'll get nothing but hate and "iTs Not aCtuaLly AI" replies here on Lemmy.
Very true and valid. Though, devil's advocate for a moment: AI is great at discovering new ways to survive surgery and other cool stuff. Of course it uses the existing scientific discoveries to do that, but still. It could be the tool that finds the next big thing on the penicillin, anaesthesia, Haber process, transistor, microscope, steel list, which is pretty cool.
For General AI to work, we first need the computer to be able to communicate properly with humans, to understand them and to convey themselves in an understandable way.
LLM is just that. It is the first step towards General AI.
it is already a great tool for programmers. Which means programming anything, including new AI, will only go exponentially faster.
it is already a great tool for programmers. Which means programming anything, including new AI, will only go exponentially faster.
Yes to it being a tool. But right now all it can really do is bog-standard stuff. Also, have you read that the use of GitHub Copilot seems to reduce the quality of code? This means we cannot yet rely on this type of technology. Again, it's a limited tool and that is it. At least for now.
Why is this the first step and not any of the other things that have been around for years?
We have logic reasoning in the form of Prolog, bots that are fun to play against in computer games, computers that can win at chess and Go against the best players in the world, and computer vision is starting to be useful.