Is there anything actually useful or novel about "AI"?
Feels like we've got a lot of tech-savvy people here, so it seems like a good place to ask. Basically, as a dumb guy that reads the news, it seems like everyone that lost their mind (and savings) on crypto just pivoted to AI. On top of that you've got all these people invested in AI companies running around with flashlights under their chins like "bro this is so scary how good we made this thing". Seems like bullshit.
I've seen people generating bits of programming with it which seems useful but idk man. Coming from CNC I don't think I'd just send it with some chatgpt code. Is it all hype? Is there something actually useful under there?
People have actually used crypto to make payments. Crypto is valuable, but only when it's widely adopted. Before you say something like "use a database," you might take the time to understand what decentralized blockchains actually accomplish, namely removing a class of corruption from information coordination tasks.
Senior developer here. It is hard to overstate just how useful AI has been for me.
It's like having a junior programmer on standby that I can send small tasks to--and just like the junior developer I have to review it and send it back with a clarification or comment about something that needs to be corrected. The difference is instead of making a ticket for a junior dev and waiting 3 days for it to come back, just to need corrections and wait another 3 days--I get it back in seconds.
Like most things, it's not as bad as some people say, and it's not the miracle others say.
This current generation was such a leap forward from previous AIs in terms of usefulness that I think a lot of people projected that rate of gains into the future, which can be scary. But it turns out that's not what happened. We got a big leap and now we're back at a plateau again. Which honestly is a good thing, I think. This gives the world time to slowly adjust.
As far as similarities with crypto go: like crypto, there are some ventures out there just slapping the word AI on something and calling it novel. That didn't work for crypto and likely won't work for AI. But unlike crypto, there is actually real value being derived from AI right now, not wild claims that a blockchain is the right DB for everything, which it obviously wasn't, and most people could see that, but hey, investors are spending money, so let's get some of it.
I tried it for a couple months and it was alright but eventually it got too frustrating. I did love how well it did some really repetitive things. But rarely did it actually get anything complex 100% right. In computing, "almost right" is wrong. But because it was so close, it was hard to spot the mistakes.
There were cases where my IDE knew the right answer but Copilot did not. Once I realized Copilot was overriding my IDE's own suggestions to produce code I had to painfully babysit, I cancelled it.
I've been a web developer for 22 years. For the last 13 years I've been working self-employed from home. I cannot express how useful AI has become. As a lone wolf, where most of my job is problem solving, having an AI that can help troubleshoot issues has been hugely useful.
It also functions as a junior developer, doing the grunt programming work.
I also run a bunch of e-commerce sites around the world and I use it for content generation, SEO, business plans, marketing strategies and multi-lingual customer support.
I research AI - better referred to as Machine Learning (ML) since it does away with the hype and more accurately describes what’s happening - and I can provide an overview of the three main types:
Supervised Learning: Predicting the correct output for an input. Trained from known examples. E.g.: “Here are 500 correctly labelled pictures of cats and dogs, now tell me if this picture is a cat or a dog?”. Other examples include facial recognition and numeric prediction tasks, like predicting today’s expected profit or stock price based on historic data.
Unsupervised Learning: Identifying patterns and structures in data. Trained on unlabelled data. E.g.: “Here are a bunch of customer profiles, group them by similarity however makes most sense to you”. This can be used for targeted advertising. Another example is generative AI such as ChatGPT or DALL-E: “Here’s a bunch of prompt-responses/captioned-images, identify the underlying way of creating the response/image from the prompt/image.”
Reinforcement Learning: Decision making to maximise a reward signal. Trained through trial and error. E.g.: “Control this robot to stand where I want; the reward is negative every second you’re not there, and very negative whenever you fall over. A positive reward is given whilst you are in the target location.” Other examples include playing board games or video games, or selecting content for people to watch/read/look at to maximise their time spent using an app.
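To make the supervised case concrete, here's a minimal sketch using scikit-learn. The numeric "features" and labels are invented for illustration; real image classification would work on pixels with a much bigger model.

```python
# Toy supervised learning: learn a cat/dog label from made-up numeric features.
from sklearn.ensemble import RandomForestClassifier

# Pretend features: [weight_kg, ear_length_cm]
X_train = [[4.0, 6.0], [5.5, 7.0], [20.0, 12.0], [30.0, 14.0]]
y_train = ["cat", "cat", "dog", "dog"]

model = RandomForestClassifier(random_state=0).fit(X_train, y_train)
print(model.predict([[6.0, 6.5]]))  # most likely ['cat']
```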
As a software engineer, I think it is beyond overhyped. I have seen it used once in my day job before it was banned. In that case, it hallucinated a function in a library that didn't exist outside of feature requests and based its entire solution around it. It cannot replace programmers or creatives while producing consistently equal quality.
I think it's also extremely disingenuous for Large Language Models to be billed as "AI". They do not work like human cognition and are basically just plagiarism engines. They can assemble impressive stuff at a rapid speed but are incapable of completely novel "ideas" - everything that they output is built from a statistical model of existing data.
If the hallucination problem could be solved in a local dataset, I could see LLMs as a great tool for interacting with databases and documentation (for a fictional example, see: VIs in Mass Effect). As it is now, however, I feel that it's little more than an impressive parlor trick - one with a lot of future potential that is being almost completely ignored in favor of bludgeoning labor, worsening the human experience, and increasing wealth inequality.
Don’t ask LLMs how to do something in PowerShell, because there’s a good chance it will tell you to use a module or function that just plain doesn’t exist. I did use an outline ChatGPT created for a policy document and it did a pretty good job. And if you give it a compsci 100-level task it usually can output functional code faster than I can type.
They can assemble impressive stuff at a rapid speed but are incapable of completely novel "ideas" - everything that they output is built from a statistical model of existing data.
You just described basically 99.999% of humans as well. If you are arguing for general human intelligence, I'm on board. If you are trying to say humans are somehow different from AI, you have NFC what you are talking about.
I think we're on a very similar page. I'm not meaning that human intelligence is in a different category than potential artificial intelligence or somehow impossible to approximate or achieve (we're just evolutionarily-designed, replicating meat-computers). I'm meaning that LLMs are not intelligent and do not comprehend their inputs or datasets but statistically model them (there is an important and significant difference). It would make sense to me that they could play a role in development of AI but, by themselves, they are not AI any more than PCRE is a programming language.
As a non-software engineer, it’s basically magic for programming. Can it handle your workload? Probably not based on your comment. I have, however, coaxed it to write several functional web applications and APIs. I’m sure you can do better, but it’s very empowering for someone that doesn’t have the same level of knowledge.
You have not realised yet that... yes, it has every right to be called AI. They are doing the same thing we do: learn, and then create thoughts based on those learnings.
I even asked them to make up words that are not related to any language, and they create them, entirely new, never-used words, that are not even composites of others. These are creative machines. They might fail at answering some questions, but that is partially why we call it Artificial Intelligence. It's not saying that it is a machine of truth. Just a machine that "learns" and "knows". Sometimes correctly, sometimes wrong. Just like us.
Incorrect. An LLM COULD be a part of a system that implements AI but, itself, possesses no intelligence. Claiming otherwise is akin to claiming that the Pythagorean theorem is an AI because it "understands" geometry. Neither actually understands the data it is fed, but both are good at producing results that make it seem that way.
Human cognition does not work that way; it is much more complex and squishy. Association of current experiences with remembered experiences is only a fraction of what is going on in a brain related to cognition.
It's not bullshit. It routinely does stuff we thought might not happen this century. The trick is we don't understand how. At all. We know enough to build it and from there it's all a magical blackbox. For this reason it's hard to be certain if it will get even better, although there's no reason it couldn't.
Coming from CNC I don’t think I’d just send it with some chatgpt code.
That goes back to the "not knowing how it works" thing. ChatGPT predicts the next token, and has learned other things in order to do it better. There's no obvious way to force it to care whether its output is right or just right-looking, though. Until we solve that problem somehow, it's more of an assistant for someone who can read and understand what it puts out. Kind of like a calculator but for language.
Honestly, crypto wasn't totally bullshit either. It was a marginally useful idea that turned into a Beanie-Babies-like craze. If you want to buy or sell illegal stuff (which could be bad or could be something like forbidden information on democracy) it's still king.
AI is nothing like cryptocurrency. Cryptocurrencies didn't solve any problems. We already use digital currencies and they're very convenient.
AI has solved many problems we couldn't solve before and it's still new. I don't doubt that AI will change the world. I believe 20 years from now, our society will be as dependent on AI as it is on the internet.
I have personally used it to automate some Excel stuff I do at work. I just described my sheet and what I wanted done and it gave me a block of code that did it. I had spent time previously looking stuff up on forums with no luck. My issue was so specific to my work that nobody seemed to have run into it before. One query to ChatGPT solved my issue perfectly in seconds, and that's just a new online tool in its infancy.
Yes. What a strange question...as if hivemind fads are somehow relevant to the merits of a technology.
There are plenty of useful, novel applications for AI just like there are PLENTY of useful, novel applications for crypto. Just because the hivemind has turned to a new fad in technology doesn't mean that actual, intelligent people just stop using these novel technologies. There are legitimate use-cases for both AI and crypto. Degenerate gamblers and Do Kwon/SBF just caused a pendulum swing on crypto...nothing changed about the technology. It's just that the public has had their opinions shifted temporarily.
Brainstorming meal plans for the week given x, y, and z requirements
Brainstorming solutions to abstract problems
Helping me break down complex tasks into smaller, more achievable tasks.
Helping me brainstorm programming solutions. This is a big one; I'm a junior dev and I sometimes encounter problems that aren't easily googleable. For example, ChatGPT helped me find the Python moto library for intercepting and testing the boto AWS calls in my code (there's a rough sketch of that below). It's also been great for debugging hand-coded JSON and generating boilerplate. I've also used it to streamline unit test writing and documentation.
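For anyone curious, this is roughly what that moto/boto testing pattern looks like. It's a minimal sketch, and the decorator name varies between moto versions (newer releases replace it with mock_aws), so treat it as illustrative rather than exact.

```python
import boto3
from moto import mock_s3  # in moto 5+ this decorator is replaced by mock_aws


@mock_s3
def test_upload_and_read():
    # No real AWS calls are made; moto intercepts boto3 under the hood.
    s3 = boto3.client("s3", region_name="us-east-1")
    s3.create_bucket(Bucket="test-bucket")
    s3.put_object(Bucket="test-bucket", Key="hello.txt", Body=b"hi")
    body = s3.get_object(Bucket="test-bucket", Key="hello.txt")["Body"].read()
    assert body == b"hi"
```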
By far its best utility (imo) is quickly filling in broad-strokes knowledge gaps as a kind of interactive textbook. I'm using it to accelerate my Rust learning, and it's great. I have EMT co-workers going to paramedic school who use it to practice their paramedic curriculum. A close second in terms of usefulness is that it's like the world's smartest regex, and it's capable of very quickly parsing large texts or documents and providing useful output.
The brainstorming is where it's at. Telling ChatGPT to just do something is boring. Chatting with it about your problem and having a conversation about the issue you're having? Hell yes.
I'm a dungeon master and I use it to help with world building, and it's exceptional.
I actually think that ChatGPT could eventually become the way to play tabletop RPGs. It's not quite there yet, though. It's not the most creative writer, still often has internal consistency flaws, and of course it would have to be trained specifically on the rules of the RPG you're playing. But once it has been, it could probably act as a DM for groups that lack one. Or as a very closely coupled assistant to less experienced DMs who may need hand holding. It could even likely replace players, which could be useful for solo players who can't find a group (or, say, have incompatible scheduling).
Unlike a regular video game, the format of tabletop RPGs seems perfect for our current rudimentary AIs and the constraints are ones that they can probably handle with careful training alone. It's also a useful niche since there's no replacing the open endedness of tabletop RPGs with current technology. There's also a lot of people out there that I'm sure would like to play tabletop RPGs but just lack a group. Anyone who's played them before knows that scheduling is really hard and has killed a lot of groups. That's something an AI could help with.
When talking about code, though, I've come to notice that it will happily follow whatever corrections you give it, whether they are right or wrong. That's not all that helpful, but it can still give you ideas about how to solve your problem with a bit of basic knowledge of the topic you're dealing with.
This. ChatGPT's strength is super-specific answers or broad strokes. I use it for programming and I always use it for “how can I do XYZ” or “write me a function using X library to do Y with Z documentation”. It's more useful for automating the busy work.
It's overhyped but there are real things happening that are legitimately impressive and cool. The image generation stuff is pretty incredible, and anyone can judge it for themselves because it makes pictures, and to judge it you can just look at it and see if it looks real or if it has freaky hands or whatever. A lot of the hype is around the text stuff, and that's where people are making some real leaps beyond what it actually is.
The thing to keep in mind is that these things, which are called "large language models", are not magic and they aren't intelligent, even if they appear to be. What they're able to do is actually very similar to the autocorrect on your phone, where you type "I want to go to the" and the suggestions are 3 places you talk about going to a lot.
Broadly, they're trained by feeding them a bit of text, seeing which word the model suggests as the next word, seeing what the next word actually was from the text you fed it, then tweaking the model a bit to make it more likely to give the right answer. This is an automated process: just dump in text and a program does the training, and it gets better and better at predicting words when you a) get better at the tweaking process, b) make the model bigger and more complicated and therefore able to adjust to more scenarios, and c) feed it more text.
The model itself is big but not terribly complicated mathematically; it's mostly lots and lots and lots of arithmetic in layers: the input text will be turned into numbers, layer 1 will be a series of "nodes" that each take those numbers and do multiplications and additions on them, layer 2 will do the same to whatever numbers come out of layer 1, and so on and so on until you get the final output, which is the words the model is predicting to come next. The tweaks happen to the nodes and what values they're using to transform the previous layer.
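To make the "layers of arithmetic" point concrete, here's a toy sketch in Python. The numbers are random, there's no attention mechanism and no training loop; it only shows the multiply-add-repeat structure described above.

```python
import numpy as np

rng = np.random.default_rng(0)

def layer(x, weights, bias):
    # Each node multiplies its inputs by weights, adds a bias, and clips negatives.
    return np.maximum(0, x @ weights + bias)

tokens = rng.random(8)                        # pretend these numbers encode the input text
w1, b1 = rng.random((8, 16)), rng.random(16)
w2, b2 = rng.random((16, 5)), rng.random(5)

hidden = layer(tokens, w1, b1)                # layer 1
scores = hidden @ w2 + b2                     # layer 2: one score per candidate "next word"
probs = np.exp(scores) / np.exp(scores).sum() # turn scores into probabilities
print("next-word probabilities:", probs.round(3))
```

Training is just nudging those weight values, over and over, so the predicted word matches the real next word more often.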
Nothing magical at all, and also nothing in there that would make you think "ah, yes, this will produce a conscious being if we do it enough". It is designed to be sort of like how the brain works, with massively parallel connections between relatively simple neurons, but it's only being trained on "what word should come next", not anything about intelligence. If anything, it'll get punished for being too original with its "thoughts" because those won't match with the right answers. And while we don't really know what consciousness is or where the lines are or how it works, we do know enough to be pretty skeptical that models of the size we are able to make now are capable of it.
But the thing is, we use text to communicate, and we imbue that text with our intelligence and ideas that reflect the rich inner world of our brains. By getting really, really, shockingly good at mimicking that, AIs also appear to have a rich inner world and get some people very excited that they're talking to a computer with thoughts and feelings... but really, it's just mimicry, and if you talk to an AI and interrogate it a bit, it'll become clear that that's the case. If you ask it "as an AI, do you want to take over the world?" it's not pondering the question and giving a response, it's spitting out the results of a bunch of arithmetic that was specifically shaped to produce words that are likely to come after that question. If it's good, that should be a sensible answer to the question, but it's not the result of an abstract thought process. It's why if you keep asking an AI to generate more and more words, it goes completely off the rails and starts producing nonsense, because every unusual word it chooses knocks it further away from sensible words, and eventually it's being asked to autocomplete gibberish and can only give back more gibberish.
You can also expose its lack of rational thinking skills by asking it mathematical questions. It's trained on words, so it'll produce answers that sound right, but even if it can correctly define a concept, you'll discover that it can't actually apply it correctly because it's operating on the word level, not the concept level. It'll make silly basic errors and contradict itself because it lacks an internal abstract understanding of the things it's talking about.
That being said, it's still pretty incredible that now you can ask a program to write a haiku about Danny DeVito and it'll actually do it. Just don't get carried away with the hype.
My perspective is that consciousness isn't a binary thing, or even a linear scale. It's an amalgamation of a bunch of different independent processes working together, and how much each matters is entirely dependent on culture and beliefs. We're artificially creating these independent processes piece by piece in a way that doesn't line up with traditional ideas of consciousness. Conversation and being able to talk about concepts one hasn't personally experienced are facets of consciousness and intelligence, ones that the latest and greatest LLMs do have. Of course there are others too that they don't: logic, physical presence, being able to imagine things in their mind's eye, memory, etc.
It's reductive to dismiss GPT4 as nothing more than mimicry; saying it's just a mathematical text prediction model is like saying your brain is just a bunch of neurons. Both statements are true, but it doesn't change what they can do. If someone could accurately predict the moves a chess master would make, we wouldn't say they're just good at statistics, we'd say they're a chess master. Similarly, regardless of how rich someone's internal world is, if they're unable to express the intelligent ideas they have in any intelligible way we wouldn't consider them intelligent.
So what we have now with AI are a few key parts of intelligence. One important thing to consider is how language can be a path to other types of intelligence; here's a blog post I stumbled across that really changed my perspective on that: http://www.asanai.net/2023/05/14/just-a-statistical-text-predictor/. Using your example of mathematics, as we know, it falls apart doing anything remotely complicated. But when you help it approach the problem step-by-step in the way a human might - breaking it into small pieces and dealing with them one at a time - it actually does really well. Granted, the usefulness of this is limited when calculators exist and it requires as much guidance as a child to get correct answers, but even matching the mathematical intelligence of a ten year old is nothing to sneeze at.
To be clear I don't think pursuing LLMs endlessly will be the key to a widely accepted 'general intelligence'; it'll require a multitude of different processes and approaches working together for that to ever happen, and we're a long way from that. But it's also not just getting carried away with the hype to say the past few years have yielded massive steps towards 'true' artificial intelligence, and that current LLMs have enough use cases to change a lot of people's lives in very real ways (good or bad).
Thanks for that article, it was a very interesting read! I think we're mostly agreeing about things :) This stood out to me from there as an encapsulation of the conversation:
I don’t think LLMs will approach consciousness until they have a complex cognitive system that requires an interface to be used from within – which in turn requires top-down feedback loops and a great deal more complexity than anything in GPT4. But I agree with Will’s general point: language prediction is sufficiently challenging that complex solutions are called for, and these involve complex cognitive stratagems that go far beyond anything well described as statistics.
"Statistics" is probably an insufficient term for what these things are doing, but it's helpful to pull the conversation in that direction when a lay person using one of those things is likely to assume quite the opposite, that this really is a person in a computer with hopes and dreams. But I agree that it takes more than simply consulting a table to find the most likely next word to, to take an earlier example, write a haiku about Danny DeVito. That's synthesizing two ideas together that (I would guess) the model was trained on individually. That's very cool and deserving of admiration, and could lead to pretty incredible things. I'd expect that the task of predicting words, on its own, wouldn't be stringent enough to force a model to develop "true" intelligence, whatever that means, to succeed during training, but I suppose we'll find out, and probably sooner than we expect.
But the thing is, we use text to communicate, and we imbue that text with our intelligence and ideas that reflect the rich inner world of our brains. By getting really, really, shockingly good at mimicking that, AIs also appear to have a rich inner world and get some people very excited that they’re talking to a computer with thoughts and feelings… but really, it’s just mimicry, and if you talk to an AI and interrogate it a bit, it’ll become clear that that’s the case.
Does it, though? Where do you draw the line for real understanding? Most of the past tests for this have gotten overturned by the next version of GPT.
Seriously, it's an open debate. A lot of people agree with you but I'm a bit uncomfortable with seeing it written as fact.
Admittedly this isn't my main area of expertise, but I have done some machine learning/training stuff myself, and the thing you quickly learn is that machine learning models are lazy, cheating bastards who will take any shortcut they can regardless of what you are trying to get them to do. They are forced to get good at what you train them on but that is all the "effort" they'll put in, and if there's something easy they can do to accomplish that task they'll find it and use it. (Or, to be more precise and less anthropomorphizing, simpler and easier approaches will tend to be more successful than complex and fragile ones, so those are the ones that will shake out as the winners as long as they're sufficient to get top scores at the task.)
There's a probably apocryphal (but stuff exactly like this definitely happens) story of early machine learning where the military was trying to train a model to recognize friendly tanks versus enemy tanks, and they were getting fantastic results. They'd train on pictures of the tanks, get really good numbers on the training set, and they were also getting great numbers on the images that they had kept out of the training set, pictures that the model had never seen before. When they went to deploy it, however, the results were crap, worse than garbage. It turns out, the images for all the friendly tanks were taken on an overcast day, and all the images of enemy tanks were in bright sunlight. The model hadn't learned anything about tanks at all, it had learned to identify the weather. That's way easier and it was enough to get high scores in the training, so that's what it settled on.
When humans approach the task of finishing a sentence, they read the words, turn them into abstract concepts in their minds, manipulate and react to those concepts, then put the resulting thoughts back into words that make sense after the previous words. There's no reason to think a computer is incapable of the same thing, but we aren't training them to do that. We're training them on "what's the next word going to be?" and that's it. You can do that by developing intelligence and learning to turn thoughts into words, but if you're just being graded on predicting one word at a time, you can get results that are nearly as good by just developing a mostly statistical model of likely words without any understanding of the underlying concepts. Training for true intelligence would almost certainly require a training process that the model can only succeed at by developing real thoughts and feelings and analytical skills, and we don't have anything like that yet.
It is going to be hard to know when that line gets crossed, but we're definitely not there yet. Text models, when put to the test with questions that require synthesizing abstract ideas together precisely, quickly fall short. They've got the gist of what's going on, in the same way a programmer can get some stuff done by just searching for everything and copy-pasting what they find, but that approach doesn't scale and if they never learn what they're doing, they'll get found out when confronted with something that requires actual understanding. Or, for these models, they'll make something up that sounds right but definitely isn't, because even the basic understanding of "is this a real thing or is it fake" is beyond them, they just "know" that those words are likely and that's what got them through training.
The Turing test was never meant to be a test of a machine's ability to think. It was meant to boil that question down into a question that can actually be answered, but the original question remains unanswered.
In my opinion, when general AI arrives it will not be an "open debate", the consequences will be dramatic, far-reaching and rapid.
Focusing mostly on ChatGPT here as that is where the bulk of my experience is. Sometimes I'll run into a question that I wouldn't even know how best to Google it. I don't know the terminology for it or something like that. For example, there is a specific type of connection used for lighting stands that looks like a plug but there is also a screw that you use to lock it in. I had no idea what to Google to even search for it to buy the adapter I needed.
I asked it again, since I'd forgotten the answer and had deleted that ChatGPT conversation from my history. I phrased it like this:
I have a light stand that at the top has a connector that looks like a plug. What is that connector called?
And it just told me it's called a "spigot" or "stud" connection. Upon Googling it, that turned out to be correct, so now I know what to search for when looking for adapters. It also mentioned a few other related types of connections, such as hot shoe and cold shoe connections, among others. Those aren't the right one, but they're very much related, and it said as much.
To put it more succinctly, if you don't know what to search for but have a general idea of the problem or question, it can take you 95% of the way there.
My concern is that it feels like using Google to confirm the truth of what ChatGPT tells you is becoming less and less reliable, as so many of the pages indexed by Google are themselves created by similar models. But I suppose as long as your search took you to a site where you could actually buy the thing, that's okay.
Or at least, it is until fake shopping sites start inventing products based on ChatGPT output.
So I'm a researcher in this field and you're not wrong, there is a load of hype. The area that's been getting the most attention lately is specifically generative machine learning techniques. The techniques are not exactly new (some date back to the 80s/90s) and they aren't actually that good at learning. By that I mean they need a lot of data and computation time to get good results, two things that have gotten easier to access recently. However, it isn't always a requirement to have such a complex system. Even Eliza, a chatbot made back in 1966, gives responses surprisingly similar to some of the therapy chatbots today without using any machine learning. You should try it and see for yourself; I've seen people fooled by it, and the code is really simple. Also, people think things like Kalman filters are "smart" but it's just straightforward math, so I guess the conclusion is that people have biased opinions.
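If you've never looked at how Eliza-style bots work, here's a minimal sketch of the idea. The patterns and replies are made up, but the point stands: it's just regex reflection with canned responses, no learning at all, and it can still feel oddly convincing.

```python
import re
import random

# A tiny Eliza-flavored responder: match a pattern, reflect part of the input back.
RULES = [
    (r"i feel (.*)", ["Why do you feel {0}?", "How long have you felt {0}?"]),
    (r"i am (.*)", ["Why do you say you are {0}?"]),
    (r".*\bmother\b.*", ["Tell me more about your family."]),
]

def respond(text):
    for pattern, replies in RULES:
        match = re.match(pattern, text.lower())
        if match:
            return random.choice(replies).format(*match.groups())
    return "Please, go on."

print(respond("I feel stuck at work"))  # e.g. "Why do you feel stuck at work?"
```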
LLMs are extremely flexible and capable encoding engines with emergent properties.
I wouldn't bank on them "replacing all software" soon, but they are quickly moving into areas where classic Turing code just would not scale easily, usually due to complexity/maintenance.
I work at a small business and we use it to write out dumb social media posts. I hated doing it before. Sometimes I'll still write it myself and ask ChatGPT to add all the relevant emojis. I also think AI has the chance to be what we've always wanted from Alexa, Google Assistant, and Siri. Deep system integration with the OS will allow it to actually do what we want it to do with way fewer restrictions. Also, try using ChatGPT's voice recognition in the app. It blows the one built into your phone out of the water.
What regular people see as AI/ML is only the tip of the iceberg; that's why it feels kind of useless. There are ML systems that design super strong yet lightweight geometries, there are systems that track the legal documents of large companies, making lawyers obsolete, heck, even the cameras in mobile phones today are hyper-dependent on ML and AI. ChatGPT and image generators are just toys for consumers, so that the public can slowly get familiar with the current tech.
I find it useful in a lot of ways. I think people try to over apply it though. For example, as a software engineer, I would absolutely not trust AI to write an entire app. However, it's really good at generating "grunt work" code. API requests, unit tests, etc. Things that are well trodden, but change depending on the context.
I also find they're pretty good at explaining and summarizing information. The chat interface is especially useful in this regard because I can ask follow up questions to drill down into something I don't quite understand. Something that wouldn't be possible with a Wikipedia article, for example. For important information, you should obviously check other sources, but you should do that regardless of whether the writer is a human or machine.
Basically, it's good at what it's for: taking a massive compendium of existing information and applying it to the context you give it. It's not a problem-solving engine or an artificial being.
I feel like it won’t be AI until we figure out how to point it back at itself, have it review its own answers and then be ‘happy’ when its answers are right. Not necessarily because the user gives it a good score, but because it recognizes that an answer it gave was actually used, or a prediction it made proved true (if I answer this way, the user is likely to ask this as its next question, etc.), and it starts changing its behaviour and asking itself questions to get better at that.
As a senior developer I see it unlocking so much more power in computing than a regular coder can muster.
There are literally cars in America driving around on their own, interacting with other traffic, navigating problems and junctions, following gestures and laws. It’s incredible and more impressive than ChatGPT is. We are on our way to self-driving cars and lorries, self-service checkouts, delivery services and taxis, more efficient machines in agriculture and so many other things. It’s touching every facet of life.
We’re at a point where we’ve seen so many wonderful benefits of AI that it’s time to apply it to everything and see what sticks.
Of course some people who invest in the stock market lose money but the technology is more than a step forward, it’s a leap forward.
Several autonomous car companies operate in my city. They're impressive technology, but they're not nearly as good as an attentive human driver. In particular, they have problems coping with anything unexpected, such as road closures or emergency vehicles, and they do not understand gestures.
There are other ML models out there for all kinds of purposes. I heard someone made one at one point that could detect certain types of cancer from a cough
Copilot is pretty useful when programming as it is basically like what IDEs normally do (automatically generating boilerplate) but supercharged
As far as generating code is concerned, it's never going to beat actually knowing what you're doing in a language for more complex stuff, but it lets you generate code for languages you're not familiar with.
I use it all the time at work when I'm asked to write DAX because it's not particularly complex logic but the syntax makes me want to impale my face with a screwdriver
This is a good point. LLMs are the current big thing, but a few years ago it was convolutional nets for image processing. It might be something totally different in another few.
Nursing student here. Quizlet has an AI function that lets you paste text into it and it outputs a studyset.
Most of my classes provide a study guide of some kind - just a list of topics we need to be familiar with. I'll take those and plug em into the AI thing: bam! Instantly generate like 200 flash cards to study for the next test.
It even auto-fills the actual subject matter. For example, the study guide will say something like "Summarize Louis Pasteur's contributions to the field of microbiology" and turn that into a flash card that reads:
(front)
Louis Pasteur
(back)
Verified the germ theory of disease
Developed a method to prevent the spoilage of liquids through heating (pasteurization)
Developed early anthrax and rabies vaccines
So I take my list of AI generated cards, then sift through the powerpoints and lecture videos etc from class: instead of building the study set from scratch, all I have to do is verify that the information it spit out is accurate (so far it's been like 98% on target, often explaining concepts better than the actual professor, lol), add images, and play with the formatting a bit so it reads a little easier on the eyes.
People always talk about AI in school in the context of cheating, but it is RIDICULOUSLY useful for students actually trying to learn.
Looking ahead, this tech has a ton of potential to be used as a kind of personal tutor for each student. There will be some growing pains for sure, but we definitely shouldn't ignore its constructive potential.
It is extremely useful in the right circumstances. When people say it isn't useful or that it's 'stupid', they're not looking at the proper use cases - every tool has good and bad ways to use it (you wouldn't use a hammer to peel an apple).
For example, we will soon have fully rendered smoke simulated in real time in 3D spaces (i.e. video games), because we can calculate a small portion of how that smoke looks and then have AI guess what the rest looks like (with shockingly good results!)
AI is not a fad, it's not going away, it's improving rapidly, and it is going to massively change our digital world within half a decade.
Opinion source: a professional programmer, game developer, and someone that thoroughly despises cryptocurrency
You can essentially use it as an interactive docs when learning something new.
You can paste in a large text document and get it to summarize it.
You can paste in a review and get it to do sentiment analysis and generate scores out of 100 for different things (actively pursuing this at work and it looks great; there's a rough sketch of the idea below).
I use it all the time to write simple regex and code snippets.
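As a rough illustration of the review-scoring idea a couple of points up, here's what a minimal version might look like using the openai Python client. The model name and prompt wording are placeholders, not recommendations, and you'd obviously want stricter output validation for real use.

```python
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

def score_review(review: str) -> str:
    prompt = (
        "Rate this review from 0-100 for overall sentiment, value for money, "
        "and build quality. Reply as JSON.\n\n" + review
    )
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # placeholder model choice
        messages=[{"role": "user", "content": prompt}],
    )
    return response.choices[0].message.content

print(score_review("Great blender for the price, but the lid feels flimsy."))
```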
Machine learning has many massive applications. Many phone cameras use it to get the quality of photos up massively.
It's used all over the place without you even realising.
Isn't that more of a use case for genetic/evolutionary algorithms? Those are anything but new, however. I don't really see much use of LLMs here, which is what the current "AI" trend is about.
In various jobs, AI can do the less important and easier work for you, so you can focus on the more important work. For example, you're doing some kind of research which needs a specific kind of data you have collected, but all of that data is cluttered and messy. AI can sort the data for you, so you can focus on your research instead of spending a lot of your time on sorting the data into something more understandable. Or in programming, AI can write the easy part of a program for you, and you do the harder and more important part, which saves you time.
I mean, AI can be used to design a lot of robust yet efficient structures. In engineering and architecture, with enough data, AI can generate designs for buildings and parts that are not only sturdy but can be built with fewer resources, along with other design considerations. There's a really cool NASA video where competitors are trying to 3D print structures for habitation in space.
AI is also used in medicine to come up with new protein structures to create new medicine. It's also used in environmental sciences, to help predict earthquakes or monitor land use, etc.
In my personal opinion, it’s under-hyped. The average person has maybe heard about it on the news but not yet tried it. The models we have show the spark of wit, but are clearly limited. The news cycle moves on.
Even still, some huge changes are coming.
My reasoning is this - in David Epstein’s book “Range” he outlines how and why generalists thrive and why specialization has hurt progress. In narrow fields, specialization gives an advantage, but in complex fields, generalists or people from other disciplines can often see novel approaches and cause leaps ahead in the state of the art. There are countless examples of this in practice, and as technology has progressed, most fields are now complex.
Today, in every university, in every lab, there are smart, specialized people using ChatGPT to riff on ideas, to think about how their problem has been addressed in other industries, and to bring outsider knowledge to bear on their work. I have a strong expectation that this will lead to a distinct acceleration of progress. Conversely, an all-knowing oracle can assist a generalist in becoming conversant in a specialization enough to make meaningful contributions. A chat model is a patient and egoless teacher.
It’s a human progress accelerant. And that’s with the models we have today. With next-generation models specialized behind corporate walls with fine-tuning on all of their private research, or open-source models tuned to specific topics and domains, the utility will only increase. Even for smaller companies, combining ChatGPT with a vector database of their docs, customer support chats, etc. will give their rank-and-file employees better tools to work with.
Simply put, what we have today can make average people better at their jobs, and gifted people even more extraordinary.
To the second question: it's not novel at all. The models used were invented decades ago. What changed is that Moore's Law kicked in and we got stronger computational power, especially graphics cards. It seems that there is some resource barrier that, when surpassed, turns these models from useless to useful.
Not the specific models unless I've been missing out on some key papers. The 90s models were a lot smaller. A "deep" NN used to be 3 or more layers and that's nothing today. Data is a huge component too
The specifics are a bit different, but the main ideas are much older than this; I'll leave the relevant Wikipedia excerpt here:
"Frank Rosenblatt, who published the Perceptron in 1958,[10] also introduced an MLP with 3 layers: an input layer, a hidden layer with randomized weights that did not learn, and an output layer.[11][12] Since only the output layer had learning connections, this was not yet deep learning. It was what later was called an extreme learning machine.[13][12]
The first deep learning MLP was published by Alexey Grigorevich Ivakhnenko and Valentin Lapa in 1965, as the Group Method of Data Handling.[14][15][12]
The first deep learning MLP trained by stochastic gradient descent[16] was published in 1967 by Shun'ichi Amari.[17][12] In computer experiments conducted by Amari's student Saito, a five layer MLP with two modifiable layers learned internal representations required to classify non-linearily separable pattern classes.[12]
In 1970, Seppo Linnainmaa published the general method for automatic differentiation of discrete connected networks of nested differentiable functions.[3][18] This became known as backpropagation or reverse mode of automatic differentiation. It is an efficient application of the chain rule derived by Gottfried Wilhelm Leibniz in 1673[2][19] to networks of differentiable nodes.[12] The terminology "back-propagating errors" was actually introduced in 1962 by Rosenblatt himself,[11] but he did not know how to implement this,[12] although Henry J. Kelley had a continuous precursor of backpropagation[4] already in 1960 in the context of control theory.[12] In 1982, Paul Werbos applied backpropagation to MLPs in the way that has become standard.[6][12] In 1985, David E. Rumelhart et al. published an experimental analysis of the technique.[7] Many improvements have been implemented in subsequent decades.[12]"
As others have said, in its current state it can be useful in the early stages of anything you do, such as brainstorming. ChatGPT (which I have the most experience with) and other LLMs excel at organizing, formatting, explaining, etc. the information of the internet. In almost all cases (at the moment) whatever they spit out needs to be fact-checked and refined.
Just from personally dinking around with ChatGPT a little, it does give you that "scarily good" feeling at first. You do start seeing its flaws after a while, and you learn that it's quite fallible. The information it spits out can be good for additional ideas and brainstorming.
What I want it to do (and it might already, if not soon) is this: when I program something up and for the life of me can't find the cause of some bug, I want to be able to just give it my entire code and my problem and see what the deal is.
As a professional editor, yeah, it’s wild what AI is doing in the industry. I’m not even talking about chatGPT script writing and such. I watched a demo of a tool for dubbing that added in the mouth movements as well.
They removed the mouth entirely from an English-language scene, fed it the line, and it generated not only the Chinese audio but also a mouth to say it. It's wild.
Everyone is focused on script writers/residuals/etc, which is very important, but every VA should be updating their resumes right now.
Not the exact same thing but you will get the idea here
The thing I'm most excited for is the removal of FUD from our daily lives. Everything in our world is designed around the preconceived notions of a small group of people from the past.
You can see this most obviously in traffic and urban planning. They had limited technology and time to make decisions 100 years ago that have serious negative effects today.
AI will soon be able to run its own complex models and decisions can be fact based, rather than emotional.
I never interacted with any AI until ChatGPT started to get popular, and I'd say I'm a bit of a tech guy (I like tech news, I self-host some stuff on my NAS, I used Linux in my teenage days, etc.), but when I first interacted with it, it was really jaw-dropping for me.
Maybe the information isn't 100% real, but the way it paraphrases stuff is amazing to me.
First of all, AI is a buzzword whose meaning has changed a lot since at least the 1950s. So... what do you actually mean? If you mean LLMs like ChatGPT, it's not AGI, that's for sure. It is another tool that can be very useful. For coding, it's great for getting you very large blocks of code prepopulated for you to polish and verify it does what you want. For writing, it's useful for creating a quick first draft. For fictional game scenes it's useful for "embedding a character quickly", but again you'll likely want to edit it some, even for, say, a D&D game.
I think it can replace most first-line chat-based customer service people, especially ones who already just make stuff up to say something to you (we've all been there). I could imagine it improving call routing if hooked into speech recognition and generation - the current menus act like you can "say anything" but really only "work" if you're calling about stuff you could also do with simple press-1,2,3 menus. ChatGPT-based things trained on the company's procedures and data could probably also replace those first-line call queues, because it seems able to do something more useful with a wider range of issues. Although companies would still need to get their heads out of their asses somewhat too.
Where I've found it falls down currently is very specific technical questions, the kind you might have asked on a forum and maybe gotten an answer to. I hope it improves, especially as companies start to add some of their own training data. I could imagine Microsoft more usefully replacing the first few lines of tech support for their products, and eventually having the AI pass up the chain to a ticket if it can't solve the issue. I could imagine in the next 10 years most tech companies having purchased a service from some AI company to provide them AI support bots, like they currently pay for ticket systems and web hosting. And I think in general it probably will be better for the users, because for less than the cost of the cheapest outsourced front-line support person (who has near zero knowledge) you can have the AI provide pretty good chat-based access to a given set of knowledge that is growing all the time, and every customer gets that AI with that knowledge base rather than the crapshoot of whether you get the person who's been there 3 years or 1 day.
I think we are a long way from having AI just write the program or CNC code or even important blog posts. The hallucination problem has to be fixed without breaking the usefulness of the model (people claim guardrails on GPT4 make it stupider), and the thing needs to recursively look at its output and run that through a "look for bugs" prompt followed by a "fix it" prompt at the very least. Right now, it can write code with noticeable bugs, you can tell it to check for bugs and it'll find them, and then you can ask it to fix those bugs and it'll at least try to do that. This kind of needs to be built in and automatic for any sort of process - like humans check their work, we need to program the AI to check its work too. And then we might need to also integrate multiple different models so "different eyes" see the code and sign off before being pushed. And even then, I think we'd need additional hooks, improvement, and test / simulation passes before we "don't need human domain experts to deploy". The thing is - it might be something we can solve in a few years with traditional integrations - or it might not be entirely possible with current LLM designs given the weirdness around guardrails. We just don't know.
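To sketch what that "check its own work" loop might look like in practice (again assuming the openai Python client; the prompts and model name are purely illustrative, and a real pipeline would also run and test the code rather than trust the model's own review):

```python
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

def ask(prompt: str) -> str:
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # placeholder model choice
        messages=[{"role": "user", "content": prompt}],
    )
    return response.choices[0].message.content

# Generate, self-review, then self-repair: three passes instead of one.
code = ask("Write a Python function that parses ISO-8601 dates.")
review = ask("List any bugs in this code:\n\n" + code)
fixed = ask("Fix these issues in the code.\n\nIssues:\n" + review + "\n\nCode:\n" + code)
print(fixed)
```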
AI hasn’t really changed meaning since the 50s. It has always been the field of research about how to make computers perform tasks that previously were limited to only humans. The target is always moving because once AI researchers figure out how to solve one task with computers it’s no longer limited to humans anymore. It gets reduced to “just computations”.
I will give you just one example. Pharmaceutical companies often create aggregate reports where they have to process a large number of cases. Say, 5000. Such processing sometimes includes analysis of X-ray or other images. Very specialized and highly paid people (radiologists) do this. It is expensive and is part of the reason why medicine prices are high. One company recently ran a trial to see if AI could do that job. Turns out it can. Huge savings for the company. And the radiologist lost their job. This is just one example of the good and bad things that will be, and already are, happening in our society due to AI.
Do you know this personally, or did you just read an article? My wife works in a pharmaceutical company, and if I've learned one thing from her stories, it's that there will always be some person responsible for decisions! I doubt the radiologist lost her/his job. I mean, who's going to jail if the quality is poor and people die?
I rather think AI downsized her/his engagement: either they're now just doing supervision and a sanity check, or they used the tool themselves and increased their productivity.
I am super amateur with python and I don’t work in IT, but I’ve used it to write code for me that allows me to significantly save time in my work flow.
Like something that used to take me an hour to do now takes 15-20 minutes.
So as a non-programmer, I'm able to get it to write enough code that I can tweak until it works, instead of just not having that tool.
That Tegmark guy is a good example of what I was talking about. That Future of Life Institute he's a part of has Jaan Tallinn as one of its founders, a person who is invested in AI companies. So I have a hard time telling what's neutral information and what's marketing.
He is not marketing anything except his awful news site, and he answers everything very carefully. He talks about them potentially being murder machines but also potentially curing cancer, etc. He said it's like fire in that it's neither good nor bad. I say we try and control fire, though.
I was trying to find the NHK World show where they had 6 experts on to talk about the future, but couldn't find it. They had one guy saying AI is wonderful and perfect and will only do good. They had one woman, who used to work for Google, saying regulate, regulate, regulate. The other 3 were using it all the time so they liked it, but were still worried about it. It was on last week if you want to give it a go.
We’ve been using it at my day job to help us outline ideas for our content writers. It writes garbage content on its own, but it is a decent tool for organizing ideas.
At least that is what we use it for. I’m sure there are other valuable uses, but it is not as valuable (to me at least) as it has been made out to be.
I've been using it at my job to help me write code, and it's a bit like having a sous chef. I can say "I need an if statement that checks these values" or "Give me a loop that does x, y, and z" and it'll almost always spit out the right answer. So coding, at least most of the time, changes from avoiding syntax errors and verifying the exact right format into asking for and assembling parts.
But the neat thing is that if you have a little experience with a language you can suddenly start writing a lot of code in it. I had to figure out something with Ansible with zero experience. ChatGPT helped me get a fully functioning Ansible deployment in a couple days. Without it I'd have spent weeks in StackOverflow and documentation trying to piece together the exact syntax.
As someone who works in machine learning (ML) research the use of ML has hit almost every scientific discipline you can imagine and it's been tremendously helpful in pushing research forward.
I'm currently building a Jungian shadow work (a kind of psychotherapy) web app using local machine learning, and it's doing a decent enough job to continue developing it.
ChatGPT 4.0 is also quite helpful in making my Python code less terrible, and it's good at guiding me through wherever I'm facing challenges, since I'm more of an ops person than a developer. Can't complain, though the coding quality of GPT-4.0 has declined noticeably within the last few weeks.
Just because it's 'the hot new thing' doesn't mean it's a fad or a bubble. It doesn't not mean it's those things, but....the internet was once the 'hot new thing' and it was both a bubble (completely overhyped at the time) and a real, tidal wave change to the way that people lived, worked, and played.
There are already several other outstanding comments, and I'm far from a prolific user of AI like some folks, but - it allows you to tap into some of the more impressive capabilities that computers have without knowing a programming language. The programming language is English, and if you can speak it or write it, AI can understand it and act on it. There are lots of edge cases, as others have mentioned below, where AI can come up with answers (by both the range and depth of its training data) where it's seemingly breaking new ground. It's not, of course - it's putting together data points and synthesizing an output - but even if mechanically it's 2 + 3 = 5, it's really damned impressive if you don't have the depth of training to know what 2 and 3 are.
Having said that, yes, there are some problematic components to AI (from my perspective, the source and composition of all that training data is the biggest one), and there are obviously use cases that are, if not problematic in and of themselves, at very least troubling. Using AI to generate child pornography would be one of the more obvious cases - it's not exactly illegal, and no one is being harmed, but is it ethical? And the more societal concerns as well - there are human beings in a capitalist system who have trained their whole lives to be artists and writers and those skills are already tragically undervalued for the most part - do we really want to incentivize their total extermination? Are we, as human beings, okay with outsourcing artistic creation to this mechanical turk (the concept, not the Amazon service), and whether we are or we aren't, what does it say about us as a species that we're considering it?
The biggest practical reason to not get too swept up with AI is that it's limited in weird and not totally clearly understood ways. It 'hallucinates' data. Even when it doesn't make something up, the first time you run up against the edges of its capabilities, or it suggests code that doesn't compile or an answer that is flat-out, provably wrong, or it says something crazy or incoherent or generates art that features humans with the wrong number of fingers or body horror or whatever... well, then you realize that you should sort of treat AI like a brilliant but troubled and maybe drug-addicted coworker. Man, there are some things that it is just spookily good at. But it needs a lot of oversight, because you can cross over from spookily good to what the fuck pretty quickly and completely without warning. 'Modern' AI is only different from previous AI systems (I remember chatting with Eliza in the primordial moments of the internet) because it maintains the illusion of knowing much, much better.
Baseless speculation: I think the first major legislation of AI models is going to be to require an understanding of the training data and 'not safe' uses - much like ingredient labels were a response to unethical food products and especially as cars grew in size, power, and complexity the government stepped in to regulate how, where, and why cars could be used, to protect users from themselves and also to protect everyone else from the users. There's also, at some point, I think, going to be some major paradigm shifting about training data - there's already rumblings, but the idea that data (including this post!) that was intended for consumption by other human beings at no charge could be consumed into an AI product and then commercialized on a grand scale, possibly even at the detriment of the person who created the data, is troubling.
I like to build up fictional settings. Not being limited to commissioning art, and being able to easily conceptualize things without resorting to nicking images as-is from the internet, is extremely useful.
Crypto and AI can't be compared at all. One is an extremely useful and revolutionary tool. The other is just pump & dump ponzi schemes for libertarians.
As a programmer, I think it’s scary how AI is now able to write functioning programs from natural language input. Sure, it’s not perfect. It’s still pretty mediocre at the task. But a few years ago this was way outside the realm of possibility.
It can even correct the code it has written if there’s any error (with varying results).
What will happen in five years time? Ten years? My fear is that it will only need to be “good enough” to replace most of the programmer’s work. Unlike self driving cars, where “good enough” isn’t good enough.
It's a language model, it can't even do math reliably. Yes, it produces code that works sometimes, but it also hallucinates functions that don't exist or can introduce bugs you won't notice at first glance.
And writing a script is different from extending an existing code base. How often do you really start a greenfield project?
I wouldn't even know how to input a code base into ChatGPT to extend, do you just throw in hundreds of files with a 100k+ lines of code?
I guess LLMs with plugins can solve most of these problems. ChatGPT can already interact with Wolfram Alpha to do math.
I can imagine similar plugins for code. Like, it knows what kind of function it needs, so it interacts with a plugin that searches the code base to see if it already exists. It might get back snippets of candidates and examples of how they’re already used in the code (see the toy sketch below).
This is probably a difficult thing to achieve, but I don’t think it’s impossible. It’s probably going to take some time until something like this is made.
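Just to make that "search the code base for an existing function" idea concrete, here's a toy sketch. A real tool-use plugin would need indexing, ranking, and far more context, so this only shows the flavor of it; the directory layout and keyword are assumptions for the example.

```python
import pathlib
import re

def find_candidates(repo_root: str, keyword: str):
    # Walk every Python file and yield function definitions whose name mentions
    # the keyword, so the model could be shown existing options before writing new code.
    for path in pathlib.Path(repo_root).rglob("*.py"):
        for lineno, line in enumerate(path.read_text(errors="ignore").splitlines(), 1):
            if re.match(rf"\s*def \w*{re.escape(keyword)}\w*\s*\(", line):
                yield f"{path}:{lineno}: {line.strip()}"

for hit in find_candidates(".", "parse"):
    print(hit)
```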