A survey of more than 2,000 smartphone users by second-hand smartphone marketplace SellCell found that 73% of iPhone users and a whopping 87% of Samsung Galaxy users felt that AI adds little to no value to their smartphone experience.
SellCell only surveyed users with an AI-enabled phone – that's an iPhone 15 Pro or newer, or a Galaxy S22 or newer. The survey doesn't give an exact sample size, but more than 1,000 iPhone users and more than 1,000 Galaxy users were involved.
Further findings show that most users of either platform would not pay for an AI subscription: 86.5% of iPhone users and 94.5% of Galaxy users would refuse to pay for continued access to AI features.
From the data listed so far, it seems that people just aren't using AI. For both iPhone and Galaxy users, only about two-fifths of those surveyed have even tried AI features – 41.6% of iPhone users and 46.9% of Galaxy users.
So, that’s a majority of users not even bothering with AI in the first place and a general disinterest in AI features from the user base overall, despite both Apple and Samsung making such a big deal out of AI.
A 100% accurate AI would be useful. A 99.999% accurate AI is in fact useless, because of the damage that one miss might do.
It's like the French say: Add one drop of wine in a barrel of sewage and you get sewage. Add one drop of sewage in a barrel of wine and you get sewage.
I think it largely depends on what kind of AI we're talking about. iOS has had models that let you extract subjects from images for a while now, and that's pretty nifty. Affinity Photo recently got the same feature. Noise cancellation can also be quite useful.
As for LLMs? Fuck off, honestly. My company apparently pays for MS CoPilot, something I only discovered when the garbage popped up the other day. I wrote a few random sentences for it to fix, and the only thing it managed to consistently do was screw the entire text up. Maybe it doesn't handle Swedish? I don't know.
One of the examples I sent to a friend is as follows (originally in Swedish):
Microsoft CoPilot is an incredibly poor product. It has a tendency to make up entirely new, nonsensical words, as well as completely mangle the grammar. I really don't understand why we pay for this. It's very disappointing.
And CoPilot was like "yeah, let me fix this for you!"
Microsoft CoPilot is a comedy show without a manuscript. It makes up new nonsense words as though were a word-juggler on circus, and the grammar becomes mang like a bulldzer over a lawn. Why do we pay for this? It is buy a ticket to a show where actosorgets their lines. Entredibly disappointing.
The problem really isn't the exact percentage, it's the way it behaves.
It's trained to never say no. It's trained to never be unsure. In many cases an answer of "You can't do that" or "I don't know how to do that" would be extremely useful. But, instead, it's like an improv performer always saying "yes, and" then maybe just inventing some bullshit.
I don't know about you guys, but I frequently end up going down rabbit holes where there are literally zero google results matching what I need. What I'm looking for is so specialized that nobody has taken the time to write up an indexable web page on how to do it. And, that's fine. So, I have to take a step back and figure it out for myself. No big deal. But, Google's "helpful" AI will helpfully generate some completely believable bullshit. It's able to take what I'm searching for and match it to something similar and do some search-and-replace function to make it seem like it would work for me.
I'm knowledgeable enough to know that I can just ignore that AI-generated bullshit, but I'm sure there are a lot of other, more gullible or optimistic people who will take that AI garbage at face value and waste all kinds of time trying to get it working.
To me, the best way to explain LLMs is to say that they're these absolutely amazing devices that can be used to generate movie props. You're directing a movie and you want the hero to pull up a legal document submitted to a US federal court? It can generate one in seconds that would take your writers hours. It's so realistic that you could even have your actors look at it and read from it and it will come across as authentic. It can generate extremely realistic code if you want a hacking scene. It can generate something that looks like a lost Shakespeare play, or an intercept from an alien broadcast, or medical charts that look like exactly what you'd see in a hospital.
But, just like you'd never take a movie prop and try to use it in real life, you should never actually take LLM output at face value. And that's hard, because it's so convincing.
We're not talking about an AI running a nuclear reactor, this article is about AI assistants on a personal phone. 0.001% failure rates for apps on your phone isn't that insane, and generally the only consequence of those failures would be you need to try a slightly different query. Tools like Alexa or Siri mishear user commands probably more than 0.001% of the time, and yet those tools have absolutely caught on for a significant amount of people.
The issue is that the failure rate of AI is high enough that you have to vet the outputs, which typically requires about as much work as doing the task yourself. And using AI for creative things like art or videos is a fun novelty, but not something you do regularly, so your phone pushing apps you only want once in a blue moon is annoying. If AI were actually so useful that you could query it with anything and get back exactly what you wanted 99.999% of the time, it would absolutely become much more useful.
Nothing is "100% accurate" to begin with. Humans spew constant FUD and outright malicious misinformation. Just do some googling for anything medical, for example.
So either we acknowledge that everything is already "sewage" and this changes nothing or we acknowledge that people already can find value from searching for answers to questions and they just need to apply critical thought toward whether I_Fucked_your_mom_416 on gamefaqs is a valid source or not.
Which gets to my big issue with most of the "AI Assistant" features. They don't source their information. I am all for not needing to remember the magic incantations to restrict my searches to a single site or use boolean operators when I can instead "ask jeeves" as it were. But I still want the citation of where information was pulled from so I can at least skim it.
90% is not good enough to be a primary feature that discourages inspection (like a naive chatbot).
What we have now is like...I dunno, anywhere from <1% to maybe 80% depending on your use case and definition of accuracy, I guess?
I haven't used Samsung's stuff specifically. Some web search engines do cite their sources, and I find that to be a nice little time-saver. With the prevalence of SEO spam, most results have like one meaningful sentence buried in 10 paragraphs of nonsense. When the AI can effectively extract that tiny morsel of information, it's great.
Ideally, I don't ever want to hear an AI's opinion, and I don't ever want information that's baked into the model from training. I want it to process text with an awareness of complex grammar, syntax, and vocabulary. That's what LLMs are actually good at.
For real. If a human performs task X with 80% accuracy, an AI needs to perform the same task with 80.1% accuracy to be a better choice - not 100%. Furthermore, we should consider how much time it would take for a human to perform the task versus an AI. That difference can justify the loss of accuracy. It all depends on the problem you're trying to solve. With that said, it feels like AI on mobile devices hardly solves any problems.
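That accuracy-versus-time tradeoff is easy to put numbers on. Here's a toy back-of-the-envelope sketch in Python – every figure in it is invented purely for illustration, not taken from any real benchmark:

```python
# Toy comparison of human vs. AI on the same task.
# Every number here is invented purely for illustration.

def expected_cost(accuracy: float, seconds_per_attempt: float,
                  seconds_per_failure: float) -> float:
    """Expected time per task: the attempt itself, plus the cleanup
    cost of a failure weighted by how often failures happen."""
    return seconds_per_attempt + (1 - accuracy) * seconds_per_failure

# A human at 80% accuracy taking 5 minutes vs. an AI at 80.1%
# answering in 5 seconds, with a 10-minute cleanup cost either way.
human = expected_cost(accuracy=0.80, seconds_per_attempt=300, seconds_per_failure=600)
ai = expected_cost(accuracy=0.801, seconds_per_attempt=5, seconds_per_failure=600)
print(human)  # 420.0
print(ai)     # ~124.4
```

Under these made-up numbers the AI wins comfortably even at nearly identical accuracy, because the attempt is so much cheaper – but crank up `seconds_per_failure` (a missed bug shipping to production, say) and the balance flips fast.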
Perplexity is kinda half-decent with showing its sources, and I do rely on it a lot to get me 50% of the way there, at which point I jump into the suggested sources, do some of my own thinking, and do the other 50% myself.
It's been pretty useful to me so far.
I've realised I don't want complete answers to anything really. Give me a roundabout gist or template, and then tell me where to look for more if I'm interested.
I think you nailed it. In the grand scheme of things, critical thinking is always required.
The problem is that, when it comes to LLMs, people seem to use magical thinking instead. I'm not an artist, so I oohed and aahed at some of the AI art I got to see, especially in the early days, when we weren't flooded with all this AI slop. But when I saw the coding shit it spewed? Thanks, I'll pass.
The only legit use of AI in my field that I know of is a unit test generator, where tests were measured for stability and code-coverage increase before being submitted for dev approval. But actual non-trivial production-grade code? Hell no.
I hate that I can no longer trust what comes out of my phone camera to be an accurate representation of reality. I turn off all the AI enhancement stuff, but who knows what kind of fuckery is baked into the firmware.
NO, I don't want fake AI depth of field. NO, I do not want fake AI "makeup" fixing my ugly face. NO, I do not want AI deleting tourists in the background of my picture of the Eiffel Tower.
NO, I do not want AI curating my memories and reality. Sure, my vacation photos have shitty lighting and bad composition. But they are MY photos and MY memories of something I experienced personally. AI should not be "fixing" that for me.
@9488fcea02a9@ForgottenFlux I remember reading a whole article about how Samsung now just shoves a hi-res picture of the moon on top of pictures you take with the moon in them, so it looks like the phone takes impressive photos. Not sure if the scandal meant they removed that "feature" or not.
I feel like I'm back in the years of "You really want a 3D TV, right? Right? 3D is what you've been waiting for, right?" all over again, but with a different technology.
It will be VR's turn again next.
I admit I'm really rooting for affordable, real-world, daily-use AR though.
AR pretty much will happen, in my opinion as someone who roughly works in the field. It's probably going to be the next smartphone-level revolution, within two decades.
I'm not commenting on whether it would be good or bad for society, especially with our current societal situation and capitalism and stuff, but I'm confident it will happen either way and change the world drastically again.
This is what happens when companies prioritize hype over privacy and try to monetize every innovation. Why pay €1,500 for a phone only to have basic AI features? AI should solve real problems, not be a cash grab.
Imagine if AI actually worked for users:
Show me all settings to block data sharing and maximize privacy.
Explain how you optimized my battery last week and how much time it saved.
Automatically silence spam calls without selling my data to third parties.
Detect and block apps that secretly drain data or access my microphone.
Automatically organize my photos by topic without uploading them to the cloud.
Do everything I could do with Tasker, just by saying it in plain words.
Or the shitty notification summary. If someone wrote something to me, then it’s important enough for me to read it. I don’t need 3 bullet points with distorted info from AI.
Who the fuck needs AI to SUMMARIZE an EMAIL, GOOGLE?
The executives who don't do any real work, pretend they do (chiefly to themselves), and make ALL of the purchasing decisions despite again not doing any real work.
I don't think it's meant to be useful... for us, that is. Just another tool to control and brainwash people. I already see a segment of the population treating corporate AI as an authority figure in their lives. Now imagine kids growing up with AI and never knowing a world without it. Having memories of a time before the internet is a good way to relate/empathize, at least I think so.
How could it not be this way? Algorithms trained people. They're trained to be fed info from the rich and never seek anything out on their own. I'm not really sure if the corps did it on purpose or not, at least at first. Just money pursuit until powerful realizations were made. I look at the declining quality of Google/Youtube search results. As if they're discouraging seeking out information on your own. Subtly pushing the path of least resistance back to the algorithm or now perhaps a potentially much more sinister "AI" LLM chatbot. Or I'm fucking crazy, you tell me.
Like, we talk about the dead internet. Except... nothing is actually stopping us from ditching corporate websites and just going back to smaller, privately owned or donation-run forums.
Big part of why I'm happy to be here on the newfangled fediverse, even if it hasn't exploded in popularity at least it has like-minded people, or you wouldn't be here.
Check out debate boards. Full of morons using ChatGPT to speak for them and they'll both openly admit it and get mad at you for calling it dehumanizing and disrespectful.
/tinfoil hat
Edit to add more old man yells at clouds(ervers) detail, apologies. Kinda chewing through these complex ideas on the fly.
AI is a waste of time for me; I don't want it on my phone, I don't want it on my computer, and I block it every time I have the chance. But I might be old-fashioned in that I don't like algorithms recommending anything to me either. I never cared what the all-seeing machine has to say.
My kids' school just did a survey, and part of it included questions about teaching technology with a big focus on the use of AI. My response was "No", full stop. They need to learn how to do traditional research first so that they can spot-check the error-ridden results generated by AI. Damn it school, get off the bandwagon.
I say this as an education major and former teacher. That being said, please keep fighting your PTA on this.
We didn't get actually useful information in high school, partially because our parents didn't think there was anything wrong with the curriculum.
I'm absolutely certain that there are multiple subjects you may have skipped out on if you'd had any idea that civics, shop, home economics, and maybe accounting were going to be the closest classes to "real-world skills that all non-collegiate-educated people still need to know."
And what exactly is the difference between researching shit sources on plain internet and getting the same shit via an AI, except manually it takes 6 hours and with AI it takes 2 minutes?
I think the fact someone would need to explain this to you makes it pointless to try and explain it to you. I can't tell whether you're honestly asking a question or just searching for a debate to attempt to justify your viewpoint.
I do not need it, and I hate how it's constantly forced upon me.
Current AI feels like the Metaverse. There's no demand or need for it, yet they're trying their damnedest to shove it into anything and everything, like it's a new miracle answer to every problem, even ones that don't exist yet.
And all I see it doing is making things worse. People use it to write essays in school; that just makes them dumber, because they don't have to show they understand the topic they're writing about. And considering AI doesn't exactly have a flawless record when it comes to accuracy, relying on it for anything is just not a good idea currently.
If they write essays with it and the teacher is not checking their actual knowledge, the teacher is at fault, not the AI. AI is literally just a tool, like a pen or a ruler in school. Except much much bigger and much much more useful.
It is extremely important to teach children how to handle AI properly and responsibly, or else they will be fucked in the future.
I agree it is a tool, and they should be taught how to use it properly, but I disagree that is like a pen or a ruler. It's more like a GPS or Roomba. Yes, they are tools that can make your life easier, but it's better to learn how to read a map and operate a vacuum or a broom than to be taught to rely on the tool doing the hard work for you.
The AI thing I'd really like is an on-device classifier that decides with reasonably high reliability whether I would want my phone to interrupt me with a given notification or not. I already don't allow useless notifications, but a message from a friend might be a question about something urgent, or a cat picture.
What I don't want is:
Ways to make fake photographs
Summaries of messages I could just skim the old fashioned way
Easier access to LLM chatbots
It seems like those are the main AI features bundled on phones now, and I have no use for any of them.
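For what it's worth, even a first pass at that "interrupt me or not" notification filter is simple to sketch. A real version would be a small trained on-device model; this toy keyword scorer – where every hint and weight is invented – only illustrates the shape of the idea:

```python
# Toy sketch of an on-device "interrupt me or not" notification filter.
# A real version would be a small trained model; this keyword scorer
# (hints and weights all invented) only illustrates the idea.

URGENT_HINTS = {"urgent": 3, "asap": 3, "call me": 2, "help": 2, "?": 1}

def should_interrupt(message: str, threshold: int = 2) -> bool:
    """Score a notification's text; interrupt only above the threshold."""
    text = message.lower()
    score = sum(w for hint, w in URGENT_HINTS.items() if hint in text)
    return score >= threshold

print(should_interrupt("Can you call me ASAP? It's urgent"))  # True
print(should_interrupt("look at this cat"))                   # False
```

The cat picture still arrives, it just doesn't buzz – which is exactly the behavior the keyword list alone can't do reliably, and why you'd want an actual learned classifier for the real thing.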
You mean paying money to people to actually program. In fair exchange for their labor and expertise, instead of stealing it from the internet? What are you, a socialist?
I love the AI classifiers that keep me safe from spam or that help me categorise pictures.
I love the AI based translators that allow me to write in virtually any language almost like a real speaker.
What I hate is these super-advanced stochastic parrots that manage to pass the Turing test, so people assume they think.
I am pretty sure that if they had asked specifically about LLMs/chatbots, the percentage of people not caring would be even higher.
"AI" (as in LLMs for the sake of having LLMs accessible on your phone) is so fucking useless...
From a technical standpoint it's pretty cool, I love playing around with Ollama on my PC every now and then.
But the average Joe seems to think it's some magic being with absolute fucking knowledge you can talk to using your phone. Apart from being stupid, I think this might actually endanger human capabilities like critical thinking as well as reasoning and creativity.
So many people use "Chat Jippity" to look up stuff. I know Google is enshittified.. but OH MY GOD.
After a period when mostly relevant information was available to everyone, the zone was flooded with advertising and fake news, and now the fake news is generated directly on the user's device.. no interaction or connection to anyone else necessary.
As an Android user (Pixel), I've only ever opened AI by accident. My work PC is a Mac, and it force-re-enables Apple Intelligence after every update. I dutifully go into settings and disable that shit. While summarizing things is something AI can be good at, I generally want to actually read the detail of work communications since, as a software engineer, detail is a teeeny bit important.
Much like certain other trends like 3D TVs, this helps us see how often "visionaries" at the top of a company are charmed by ideas that no one on the ground is interested in. Same with blockchain, cryptocurrency, and so many other buzzwords.
So maybe I'll mention it again: The Accountable Capitalism Act would require 40% of a company's board be made up of democratically voted employees, who can provide more practical input about how top-level decisions would affect the people working there.
I can see why people thought 3D TVs were a great idea, until they actually experienced it for themselves. It also didn't help that so much content wasn't genuinely shot in 3D, either, but altered in post.
I could actually see 3D TVs taking off, even with the requirement for glasses. At the time, there was a fad for 3D movies in theaters. But, they needed to have gotten with content creators so that there was a reason to own one. There was no content, so no one invested, so probably in a year or two there's going to be some Youtubers making videos of "I finally found Sony's forgotten 3D TV."
They just need to capitalize on the surveillance capabilities: find a way to convince users they need access to everything on their phones in order to sell them first-class convenience. Once you've done that, there's plenty of money to be made.
People here like to shit on AI, but it has its use cases. It's nice that I can search for "horse" in Google Photos and get back all my pictures of horses, and it's also really great for creating small scripts. I, however, do not need an LLM chatbot on my phone, and I really don't want it everywhere in every fucking app with a subscription model.
people wouldn't shit on AI if it were actually replacing our jobs without taking our pay and creating a system of resource management free from human greed and error.
The only thing is, Google Photos did that before the new AI was installed. Now I have to press two extra buttons to get to the old search method instead of using the new AI, because the AI gives me the most bizarre results when I use it.
Most of the identification of things like 'horses' falls in line with the identification of things like 'crosswalks' and 'motorcycles' – in other words, the majority of the words associated with particular images in Google Maps comes from people like us filling out CAPTCHAs, not from AI.
Not only that, but Google assistant is getting consistently less reliable. Like half the time now I ask it a question and it just does an image search or something or completely misunderstands me in some other manner. They deserted working, decent tech for unreliable, unwanted tech because ???
Profit potential. Think of AI as one big data collector to sell you shit. It is significantly better at learning things about you than any metadata or cookies ever could.
If you think of this AI push as "trying to make a better product" it will not make much sense. If you think of the AI push as "how do I collect more data on all my users and better directly influence their choices" it makes a lot more sense.
I don't think the LLM spouting nonsense responses actively contributes much to collecting and learning about user data. Regular search queries and other behaviors (click tracking, etc.) already do this well enough, and have most likely been using loads of machine learning for many years now.
Not sure if Google Lens counts as AI, but Circle to Search is a cool feature. And on Samsung specifically there is Smart Select that I occasionally use for text extraction, but I suppose it is just OCR.
From the Galaxy AI-branded features, I have tested only Drawing assist, which is an image generator. Fooled around with it for 5 minutes and have not touched it again. I am using the Samsung keyboard, and I know it has some kind of text-generator thing, but I haven't even bothered to try it.
Certainly counts – Samsung has a few features, like grabbing text from images, that I found useful.
My problem with them is it's all online stuff, and I'd like that sort of thing to be processed on-device, but that's just me.
I think folks are often thinking AI is only the crappy image generation or the chatbots that get shoved at them. AI is used in a lot of different things; the only difference is that those implementations, like Drawing assist or that text-grabbing feature, are actually useful and well done.
AI is not there to be useful for you. It is there to be useful for them. It is a perfect tool for capturing every last little thought you have and directing you perfectly toward what they can sell you.
It's basically one big way to sell you shit. I promise we will follow the same path as most tech. It'll be useful for some stuff, and in this case it's being heavily forced upon us whether we like it or not. Then its usefulness will be slowly diminished as it's used more heavily to capitalize on your data, thoughts, writings, and code, and to learn how to suck every last dollar from you whether you're at work or at home.
It's why DeepSeek spent so little and works better. They literally were just focusing on the tech.
All these billions are not just being spent on hardware or better optimized software. They are being spent on finding the best ways to profit from these AI systems. It's why they're being pushed into everything.
You won't have a choice on whether you want to use it or not. It'll soon be the only way to interact with most systems, even if it doesn't make sense.
Mark my words: the day is coming when Google drops standard search on their home page and it's a fucking AI chatbot by default. We are not far off from that.
Yes, it seems like no one even read the damn user agreement. AI just adds another level to our surveillance state. It's only there to collect information about you and to figure out the inner workings of its users' minds to sell ads. Gemini even listens to your conversations if you have the quick-access toggle enabled.
DeepSeek cost so little because they were able to use the billions that OpenAI and others spent and fed that into their training. DeepSeek would not exist (or would be a lot more primitive) if it weren't for OpenAI.
That's not how these models work. It's not like OpenAI was sharing all their source code. If anything OpenAI benefits from DeepSeek because they released their entire code.
OpenAI is an ironic name now ever since Microsoft became a majority share holder. They are anything but "open".
AI sucks and is a waste of humanity's resources. I hate how everything runs on buzzwords and industry trends. This shit needs to stop and just focus on simplicity and reliability. We need to stop trying to sell new things every cycle.
On Samsung they got rid of a perfectly good screenshot tool and replaced it with one that has AI; it's slower, clunky, and not as good. I just want them to revert it. If I wanted AI, I'd download an app.
You are thinking about Smart Select? I just take fullscreen screenshot and then crop it if I need part of it. Did it even when I had previous Smart Select version. Overall I think new version with all previous 4 select options bundled in 1 is better.
Yes, Smart Select. I do that now, but taking a full screenshot and cropping it is slower for me than the old Smart Select. I hate this new version, it's slower and doesn't work the same, we should get the option to pick, but they forced the upgrade and I have no choice.
I've been on my iPhone 12 since it came out in Sept 2020 (I bought it on Halloween 2020 lol), and apart from battery health being at 77%, I have NO reason to upgrade. Even then, I'll change the battery when it gets to 70% and... that's it.
Phones just aren't exciting anymore. I used to watch so many phone reviews on YouTube, and now they are all just... the same. Folding phones aren't that interesting to me. I saw that there is a new battery technology, but that's like the only fun new feature I'm interested in.
Most performance upgrades aren't used in the real world, and AI suuuuucks.
I recently sidegraded from an iPhone 12 to an Xperia as a toy to tinker around with, and I disabled Gemini on my phone not long after it let me join the beta.
Everything seemed half-baked. Not only were the answers meh, it felt like an invasion of privacy after reading the user agreement. Gemini can't even play a song on your phone or get you directions home. What an absolute joke.
Ironically, on my Xperia 1 VI (which I specifically chose as my daily driver because of all the compromises on flagship phones from other brands), I had the only experience where I actually felt like a smartphone feature based on machine learning helped me, even though Sony's phones had practically no marketing with AI buzzwords at all.
Sony actually trained a machine learning model for automatically identifying face and eye location for human and animal subjects in the built-in camera app, in order to keep the face of your subject in focus at all times regardless of how they move around. Allegedly it's a very clever solution, trained to identify skeletal position and, from that, head and eye positions. It works particularly well when your subject moves around quickly, which is where this is especially helpful.
And it works so incredibly well, wayyyyy better than any face tracking I had on any other smartphone or professional camera, it made it so so much easier for me to take photos and videos of my super active kitten and pet mice lol
That's pretty neat – I think that's a great example of machine learning being useful for everyday activities. Face detection on cameras has been a big issue ever since the birth of digital photography. I'm using a Japanese 5 III that I picked up for $130 and it's been great. I've heard of being able to sideload camera apps from other Xperias onto the 5 III, so I'll give it a try.
I think Sony makes great hardware, and their phones have some classy designs; I'm also a fan of their DSLRs. I've always admired their phones going back to the Ericsson Walkman era – their designs have aged amazingly. I appreciate how close to stock Sony's Xperia phones are; I don't like heavy UIs and bloatware you can't remove. My last Android phone, a Galaxy S III, was terrible in that regard and put me off buying another Android until recently. I was actually thinking about getting a 1 VI as my next phone and installing Lineage on it now that I'm ready to commit.
Tbf, most people have no clue how to use it, nor do they even understand what "AI" is.
I just taught my mom how to use Circle to Search, and it's a real game changer for her. She can quickly look up on-screen items (like plants she's reading about) from an image, and the on-screen translation is incredible.
Also, Circle to Search gets around link and text-copy blocking, giving you back the same freedoms you had on a PC.
Personally, I'd never go back to a phone without Circle to Search – it's so underrated and a giant shift in smartphone capabilities.
It's very likely that we'll have full live screen-reading assistants in the near future, which can perform Circle to Search-like functions and even live visual modifications. It's easy to dismiss this as a gimmick, but there's a lot of incredible potential here, especially for casual and older users.
Google Lens already did that though, all you need is decent OCR and an image classification model (which is a precursor to the current "AI" hype, but actually useful).
Doesn't help that I don't know what this "AI" is supposed to be doing on my phone.
Touch up a few photos on my phone? OK, go ahead; I'll turn it off when I want a pure photography experience (or use a DSLR).
Text prediction? Yeah, why not. I mean, is it just little things like that?
So it feels like either these companies don't know how to use "AI" or they don't know how to market it... or, more likely, they know one way to market it and the marketing department is driving the development. I'm sure there are good uses, but it seems like they don't want to put in the work and just give us useless ones.
I recently got Apple Intelligence on my phone, and I had to Google around to see what it really does. I couldn't quite figure it out, to be honest. I think it is related to Siri somehow (which I have turned off, because why would that be on?) and apparently it could tie into an Apple Watch (which I don't have), so I eventually concluded that it doesn't do anything as of right now. Might be wrong though.
But imagine!!! What if AI could write your text messages for you and convincingly hold phone calls??? Then you wouldn't have to use your phone to interact with human beings at all!!!
I only really use AI shit on my work computer (because hooray, I have a Copilot license), and it's only marginally better than doing searches myself. It's nice when it works, because it lets me save time researching things, but I CONSTANTLY have to ask "are you sure that's real?" because it just fucking makes up random command flags based on the prompt.
And its only marginally better because fucking search engines have their head so far up their ass they can see their tonsils. Godsdammit I want working search engines back.
I planned to skip this generation, assuming this would be the year of useless AI cramming, even though my phone was getting old. But Samsung was so desperate to sell S25s that upgrading was essentially cheaper than staying with my current model. Bought it, and turned all that mess off.
Although I think Steve Jobs was a real piece of shit, his product instincts were often on point, and his message in this video really stuck with me. I think companies shoehorning AI into everything would do well to start with something useful they want to enable and work backwards to the technology, as he described here:
AI is useless for most people because it does not solve any problems for everyday people. The most common use is to make their emails sound less angry and frustrated.
AI is useful for tech people, makes reading documentation or learning anything new a million times better. And when the AI does get something wrong, you'll know eventually because what you learned from the AI won't work in real life, which is part of the normal learning process anyways.
It is great as a custom tutor, but other than that it really doesn't make anything of substance by itself.
The fact that I can't trust the AI message to be remotely factual makes that sort of use case pointless to me. If I grep and sift through docs, I'll have better comprehension of what I'm trying to figure out. With AI slop, I just end up having to hunt for what it messed up, without any context, wasting my time and patience.
One part that really stuck with me is that the data in the model is more like a fading memory, but the stuff in the context window is more like working memory. Since I learned that, I tend to put as much information as possible into the context window before asking questions about it. This improved the results drastically and reduced hallucinations.
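The "stuff the context window first, then ask" habit described above can be sketched in a few lines. This is a hedged illustration, not any real library's API: `build_prompt` and `MAX_CONTEXT_CHARS` are made-up names, and real models budget in tokens rather than characters.

```python
# Sketch of stuffing source material into the context window before the question.
# MAX_CONTEXT_CHARS is a made-up rough budget; real limits are token-based.
MAX_CONTEXT_CHARS = 12_000

def build_prompt(documents: list[str], question: str) -> str:
    """Concatenate as much source material as fits, then append the question."""
    context_parts: list[str] = []
    used = 0
    for doc in documents:
        if used + len(doc) > MAX_CONTEXT_CHARS:
            break  # stop before overflowing the (approximate) budget
        context_parts.append(doc)
        used += len(doc)
    context = "\n\n---\n\n".join(context_parts)
    return (
        "Answer using ONLY the material below.\n\n"
        f"{context}\n\n"
        f"Question: {question}\n"
    )

prompt = build_prompt(["The S25 launched in 2025."], "When did the S25 launch?")
```

The instruction line at the top is doing the same job the commenter describes: grounding the answer in the supplied text instead of the model's fading "memory".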
I started self-hosting AI to learn more about it, and I have come to the conclusion that whether it's bad or not really depends on the AI.
For instance, Google's AI results are just literal dog shit. It is just so factually bad it's incredible, even. Microsoft also sucks. And this is why everyone dislikes AI: the two most common ways people see AI (Google Search and Windows 11) are just complete horse shit. They should not have rolled them out; absolutely disastrous decision on their part, all because they were feeling FOMO. For example, I asked Microsoft's AI if the 'New' Outlook had this feature from 'classic' Outlook. It said it did. An hour later, I found it didn't actually have that feature. Fucking ridiculous that Microsoft's own AI doesn't know their own software. Embarrassing. Did they not give it their own documentation?
But 'dedicated' AI like ChatGPT and DeepSeek I can trust to be factual with about a 95% success rate. Current events are its worst subject.
It's really pointless to most people, though it has its use cases. It was just a hype train everyone got on a few years ago, like many did with blockchain: another nice technology, but only for certain use cases. I don't want or need an always-on AI to search through my phone and spy on me. I've already had overbearing exes try that. It's actually a big reason I'm considering switching to a Pixel 10 as my next phone, installing GrapheneOS, and calling it a day as my daily driver.
At work we deal with valuable information and we gotta be careful what we ask. We'll probably end up with a total ban on these things at work.
At home we don't give a fuck what your AI does. I just wanna relax and do nothing for as long as I can. So offload your AI onto a local system that doesn't talk to your server, and then we'll talk.
In my office there's one prototype model under testing that nobody uses and that does nothing useful. Anything else is outright banned; we handle way too much sensitive information. Office and Outlook often glitch when they try to open Copilot and it gets immediately slapped silly to shut up. The blinking blank windows are annoying though. IT had to send a special communication to all staff explaining that it was normal behavior.
Yeah, but the amount of energy these autocorrect search bars use is absolutely insane and disgusting, and people are going without because of it. And, literally, given the study, most people don't use it regularly. It's a cool novel tool, but really it's just fancy Google.
Not sure students are necessarily benefiting? The point of education isn't to hand in completed assignments. Although my wife swears that the Duolingo AI is genuinely helping her with learning French, so I guess maybe, depending on how it's being used.
The consumer-side AI that a handful of multi-billion-dollar companies keep peddling to us is just a way for them to attempt to justify AI to us. Otherwise, it consumes MASSIVE amounts of our energy capacities and is primarily being used in ways that harm us.
And, of course, there's nothing they direct at us that isn't ultimately (and solely) for their benefit--our every use of their AI helps train their models, and eventually it will simply be groups of billionaires competing against one another to form the most powerful model that allows them to dominate us and their competitors.
As long as this technology remains determined by those whose entire existence is organized around domination, it will be a net harm to all of us. We'd have to free it from their grip to make it meaningful in our daily lives.
Maybe if it was able to do anything useful (like tell me where a specific setting is on my phone when I can't remember its name but know what it does), people would consider these things slightly helpful. But instead of making targeted models that know device-specific information, the companies insist on making generic models that do almost nothing well.
If the model was properly integrated into the assistant, AND the assistant properly integrated into the phone, AND the assistant had competent scripting abilities (looking at you, Google, filth that broke scripts relying on recursion), then it would probably be helpful for smart home management by being able to correctly answer "are there lights on in rooms I'm not in?" and respond with something like "yes, there are 3 lights on. Do you want me to turn them off?" But it seems that the companies want their products to fail. Heck, the assistant can't even do a simple on-device task like "take a one minute video and send it to friend A" or "strobe the flashlight at 70 BPM" or "does epubfile_on_device mention the cheeto in office", or even just know how it is being run (Gemini, when run from the Google Assistant, doesn't).
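The "lights on in rooms I'm not in" question is trivially answerable once the integration exists, which is the comment's point. Here is a hedged sketch under the assumption that the smart-home API can report light states and room occupancy; the dicts and the `lights_to_turn_off` name are hypothetical stand-ins, not a real assistant API.

```python
# Hypothetical check: which rooms have a light on and nobody in them?
def lights_to_turn_off(lights_on: dict[str, bool], occupied: set[str]) -> list[str]:
    """Return rooms with a light on that are not occupied, sorted by name."""
    return sorted(room for room, on in lights_on.items()
                  if on and room not in occupied)

rooms = lights_to_turn_off(
    {"kitchen": True, "bedroom": False, "hall": True},
    occupied={"kitchen"},
)
# rooms == ["hall"], so the assistant could answer:
# "Yes, there is 1 light on. Do you want me to turn it off?"
```

The logic is a one-liner; the hard part, as the comment argues, is wiring the assistant to real device state at all.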
edit: I suppose it might be useful to waste someone else's time.
I want a voice assistant that can set timers for me and search the internet maybe play music from an app I select. I only ever use it when I am cooking something and don't have my hands free to do those things.
Everybody hates AI, and these companies keep trying to push it because they're so desperate for investors. Oh, I want to be a fly on the wall of a meeting room when the bubble finally pops.
Am I crazy? I've got this thing writing code and putting together website listings. I ask it certain things before Google and just have it give me the source. I use it to sum up huge documents to quickly analyze them before I go through them. Feels like how Google felt when it first came out. Y'all using the same AI?
I've asked Bing GPT to find me 4K laptops and it proceeded to list 5 laptops that weren't 4K. Asked for the heaviest Pokémon and it responded Wailord, which has never been correct. Had GPT (not Bing) attempt to write an AHK script for me to add forwards and backwards media keys; it failed. I asked it to fix it, and it said what was broken and why it didn't work, then "fixed" it by giving me the exact code that didn't work the first time.
It's consistently wrong for me, so I now just skip it, because if I have to double-check everything it says anyway, I might as well just do the research myself.
You're not crazy. AI is a useful tool I use daily for quickly summarizing things and for writing code that would otherwise be tedious as hell. I also use it for tips on certain issues in code, for learning.
It's possible that people don't realize what is AI and what is just AI marketing speak out there nowadays.
For a fully automated Her-like experience, or Iron Man-style Jarvis? That would be rad. But we're not really close to that at all. It sort of exists with LLM chat, but the implementation on phones is not even close to being there.
Personally, I am just not going to use the smallest screen I own to do most of the tasks they are pushing AI for. They can keep making them bigger and it’s still just going to be a phone first. If this is what they want then why can’t I just have the Watch and an iPad?
I’m a software engineer and GitHub Copilot as an AI pair programmer has vastly improved my productivity. Also, I use ChatGPT extensively to help with miscellaneous stuff. Apart from these two, I don’t really find other AI implementations useful.
I don’t use the A.I. features on iOS or Android — I have both for developer reasons — but I do like the new Siri animation better than the old one. So, not a total waste of time and money. More of a 99.999% waste of time and money.
Maybe it’s useful for people who work in marketing or whatever. Like you write some copy and you ask it to rewrite it in different tones and send them all to your client to see what vibe they want. But I already include the exact right amount of condescension expected in an email from a developer.
Yeah? Well I fucking love it on my iPhone. Its summaries have been amazing, almost prescient. No, Siri hasn't turned my phone into a Holodeck yet, but I'm okay with that.
I use chatgpt for things like debugging error codes but I have to be explicit with as much detail as possible or it will give me all sorts of inapplicable crap
The only thing I want AI (on my phone) to do is limit my notifications and make calendar events for me. I don't want to ask questions. I don't want to start conversations.
I want to open my phone and have 1 summary notification of things I received and things to do. I want the spammy ones to just be auto filtered because I never click on them.
I'd also love if I could choose when to manage all of these notifications with my AI assistant. The only back and forth I'd like is around scheduling if I need to make changes.
The only Galaxy AI feature I find even a bit amusing is Portrait Studio, which can turn a photo of someone into an AI-generated comic or 3D picture. But only as long as it remains free; it's not something worth paying for.
Just look at Smart Speakers. Basically the early AI at home. People just used them to set timers and ask about the weather. Even though it was capable of much more. Google and others were unable to monetize them for this reason and have mostly given up.
(Protip: if you have a Google speaker and kids, ask about the animal of the day. It was added during COVID times for kids learning at home.)
But people also aren't used to AI yet. Most will still google for something, some already skip that step and have ChatGPT search and summarize. I would not be surprised if the internet of the future is just plain text files for the AI agents to scrape.
I think the article is missing the point on two levels.
First is the significance of this data, or rather lack of significance. The internet existed for 20-some years before the majority of people felt they had a use for it. AI is similarly in a finding-its-feet phase where we know it will change the world but haven't quite figured out the details. After a period of increased integration into our lives it will reach a tipping point where it gains wider usage, and we're already very close to that.
Also they are missing what I would consider the two main reasons people don't use it yet.
First, many people just don't know what to do with it (as was the case with the early internet). The knowledge/imagination/interface/tools aren't mature enough so it just seems like a lot of effort for minimal benefits. And if the people around you aren't using it, you probably don't feel the need.
Second reason is that the thought of it makes people uncomfortable or downright scared. Quite possibly with good reason. But even if it all works out well in the end, what we're looking at is something that will drive the pace of change beyond what human nature can easily deal with. That's already a problem in the modern world but we aint seen nothing yet. The future looks impossible to anticipate, and that's scary. Not engaging with AI is arguably just hiding your head in the sand, but maybe that beats contemplating an existential terror that you're powerless to stop.
Even for the above it isn't useful. Professors have been abusing it because they are too lazy to check someone's writing, and the AI has mistakenly assumed papers were written by AI. Medical use would be just as problematic; it would be weird to use it for a diagnosis without discernment, without ruling out other diseases with similar symptoms or results.
The only AI thing I use on my Fold is the photo cropping; definitely nifty to just pull out a subject. It's not perfect ofc, but way easier than manually trying to cut it out lol.
I like the idea of generating emojis with AI on phones. All the other use cases Apple has presented seem useless to me. I was really hoping it would be something, anything, but it was just underwhelming. And then Apple didn't even have it ready for the iPhone 16 at launch but said the phone was built for Apple Intelligence..? Seems kinda rushed and half-baked to me. I also like using Copilot in VS Code. It's proven to be pretty good at helping me debug.
It would have to have a 'use' to qualify as anything else. It takes longer to ask it to do anything than it does to just do it yourself. Plus they want you to call it up by their removed brand name, 'hey, gemini' or 'okay, google' is cringey AF.
I can't wait until you get dumb Siri for free but it only tells the time, and the paid version costs 25 a month but it also sets alarms.
If you're talking about removed, what would you prefer I use, and how long until that word becomes a slur? You know it wasn't long ago removed was the polite term, and mongoloid before that. It doesn't matter what word you use: if the meaning has negative connotations, some asshole like you decides to take their turn at policing speech to the benefit of nobody.
In any case, I think you're wasting your time.