Would we know it if we saw it? Draw two eye spots on a wooden spoon and people will anthropomorphise it. I suspect we'll have dozens of false starts and breathless announcements of AGI, but we may never get there.
More interestingly, would we want it if we got it? How long will its creators rally to its side if we throw yottabytes of data at our civilization-scale problems and the machine comes back with "build trains and eat the rich instead of cows?"
The delusions of grandeur required to think your glorified autocomplete is going to turn into a robot god are unreal. Just wish they'd quit boiling the planet.
A little while ago there was a thread about what people are actually using LLMs for. The best answer was that they can be used to soften the language in emails. FFS.
Alternatively, consider the delusions of grandeur required to think your opinion is more reliable than that of many of the leaders in the field.
They're not saying that LLMs will be that thing; they're saying that in the next 30 years we could have a different kind of model - we already have mixture-of-experts models, and that mirrors a lot of how our own brain processes information.
Once we get a model that is reliably able to improve itself (and that's, again, not so different from the adversarial training we already do, with an MLP to create and "join" the experts together - see the sketch below), then things could take off very quickly.
Nobody is saying that LLMs will become AGI, but they're saying that the core building blocks are theoretically there already, and it may only take a couple of breakthroughs in how things are wired for a really fast explosion.
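To make the mixture-of-experts point concrete, here's a toy sketch in plain numpy - nothing from any real model; the dimensions, expert count, and top_k are made up for illustration. A small gating layer scores each expert for a given input, the top few are picked, and their outputs are combined with the renormalised gate weights.

    import numpy as np

    rng = np.random.default_rng(0)
    DIM, N_EXPERTS = 8, 4

    # Each "expert" is just a toy linear layer here.
    experts = [rng.normal(size=(DIM, DIM)) for _ in range(N_EXPERTS)]

    # Gating network: one linear layer that scores each expert for an input.
    gate_w = rng.normal(size=(DIM, N_EXPERTS))

    def softmax(z):
        z = z - z.max()
        e = np.exp(z)
        return e / e.sum()

    def moe_forward(x, top_k=2):
        # Score the experts, keep the top_k, renormalise their weights,
        # and combine the chosen experts' outputs into one result.
        scores = softmax(x @ gate_w)
        top = np.argsort(scores)[-top_k:]
        weights = scores[top] / scores[top].sum()
        return sum(w * (x @ experts[i]) for w, i in zip(weights, top))

    x = rng.normal(size=DIM)
    print(moe_forward(x))

The real models do this inside transformer layers, with learned gates and load-balancing tricks, but the shape of the idea is just this: a router deciding which specialists handle each input.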
I think about this. Boy, too bad we don't have a general AI to run things, given what we've got. Or maybe a nice interstellar race that got past the great filter can upload us and leave the planet to recover.