Why do you hate AI?

"garbage account"
AI doesn't exist. This is like asking an atheist why they hate god.
If you're talking about LLMs and the like, they're unpopular on Lemmy because tech people are over represented here and tech people understand how these technologies work and why all the hype isn't just false, but malicious.
You mean a bunch of advertising and media companies that control and gatekeep the news are hyping something that's making them trillions of dollars? That seems... so unbelievable!
Isn't the "AI" boom making very little for the companies hawking it and making trillions for the hardware providers? I feel like it's analogous to the people that sold shovels & pickaxes during the gold rush.
if ai doesn't exist then who is playing against me when i set the other civilizations to ai in rise of nations
Oh, that's me. Microsoft gave me backdoor access to your computer so I can play against you.
I know you're meming, but in Civilization (as in most games), you're playing against predefined scripts and algorithmic rules, and at higher difficulty levels the computer opponent gets cheaper resource costs than the player because it can't compete with a skilled human otherwise (it literally cheats).
No LLM, no neural network, no deep learning... not 'AI' in the modern sense that's being discussed here.
Today my boss asked me why Gemini suggested made-up columns when he was trying to query our database. I just told him it also makes up fake tables.
This shit is half-baked and really never should have been foisted on the public.
This shit has cost the investors untold money, and it was promised to revolutionize the world, so by golly it will, by force if it must.
AI has existed for decades. The chessbot on Atari is an AI.
What doesn't exist is AGI but that's not synonymous with AI. Most people just don't know the right terms here and bunch it all together as if it was all one thing.
If one is expecting a large language model designed to generate natural-sounding language to be generally intelligent like a sci-fi AI assistant, then no wonder they find it lacking. They're expecting autonomous driving from a cruise control.
It's not only tech people who "hate" AI.
--- signed, half a tech person
Also not a tech person, but I am an artist. I used to consider going into digital art, but now I'm grateful I didn't and instead have honed ... I guess you can call it "manual" art? As a way to say things I make with my hands? Maybe "analog" or "traditional" art?
Point is, I haven't seen an AI create a pencil drawing or an acrylic painting. I get the feeling that as people tire of AI-generated images, they may find renewed interest in these distinctly human-made art forms. I suppose we'll have to wait and see. For now, AI may try to steal forms and ideas, but picking up a pencil or a paintbrush and creating something on a canvas is still out of its reach (thank goodness).
If artificial intelligence doesn't exist, explain Reddit.
Checkmate, atheist.
No sign of intelligence there.
Game, set and match... believer?
Natural stupidity.
It's a masterclass in externalities: local communities face the consequences of the resource consumption by the data centers.
Job loss: we're all told the "knowledge market" is where you deserve a good salary. AI threatens a lot of that work. Blue-collar factory work will go as soon as AI can be properly integrated with nimble robotics that aren't quite there yet. With how disgusting our society is (in the US), there will be no consideration for the people who can no longer find work.
Wealth built on theft and gambling: people like Altman become fabulously wealthy with a system that makes 0 profit and has been built off of the stolen works of millions of people.
Capacity to manipulate: we've had enough trouble with bad faith actors on the Internet with real people. Now we're going to have an endless army of "intelligent" actors that will be weaponized against populations worldwide to secure the position of the ultra wealthy over all of our governments.
I didn't have much of a positive outlook on our future before, but now I REALLY don't because of this...
I think the job loss part is a capitalism problem, not just an AI problem.
If we automate work, the people should get the benefits of the automation, they shouldn't have to be worried that they won't have a job.
Yet AI in the form of LLMs sucks dick, costs us way too much in global warming, and provides very little concrete value.
We need a revolution yesterday.
No one has convinced me how it is good for the general public. It seems like it will benefit corpos and governments, to the detriment of the general public.
It's just another thing they'll use to fuck over the average person.
It COULD help the average person, but we'll always fuck it up before it gets to that point.
You could build an app that teaches. Pick the curriculum, pick the tests, pick the training material for the users, and use the LLM to intermediate between your courseware and the end users.
LLMs are generally very good at explaining specific questions they have a lot of training on, and they're pretty good at dumbing it down when necessary.
Imagine an open-source, free college course where everyone gets as much time as they need and aren't embarrassed to ask whatever questions come to their minds in the middle of the lesson. Imagine more advanced students in a class not being held back because some slower students didn't understand a reading assignment. It wouldn't be hard to out-teach an average community college class.
But free college that doesn't need a shit ton of tax money? Who profits off that? We can't possibly make that.
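As a rough illustration, not a real product, here's a minimal sketch of that "LLM intermediating between your courseware and the end users" loop. It assumes the OpenAI Python client; the model name, lesson file, and prompt wording are all just placeholders I made up:

```python
# Hypothetical tutoring loop: the courseware (lesson text + scope) is fixed by you,
# and the LLM only mediates between that material and the student.
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment


def load_lesson(path: str) -> str:
    """Read the lesson material you picked for this unit of the curriculum."""
    with open(path, encoding="utf-8") as f:
        return f.read()


def tutor_session(lesson_path: str, model: str = "gpt-4o-mini") -> None:
    lesson = load_lesson(lesson_path)
    # The system prompt pins the model to *your* courseware instead of the open web.
    history = [{
        "role": "system",
        "content": (
            "You are a patient tutor. Answer only using the lesson below, "
            "explain at the student's level, and never shame them for asking.\n\n"
            f"LESSON:\n{lesson}"
        ),
    }]
    while True:
        question = input("student> ").strip()
        if question.lower() in {"quit", "exit"}:
            break
        history.append({"role": "user", "content": question})
        reply = client.chat.completions.create(model=model, messages=history)
        answer = reply.choices[0].message.content
        history.append({"role": "assistant", "content": answer})
        print(f"tutor> {answer}")


if __name__ == "__main__":
    tutor_session("lesson_01_intro.txt")  # placeholder lesson file
```

The point isn't that this replaces a teacher; it's that the curriculum and material stay under your control and the model is only the explainer.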
How about a code tool that doesn't try to write your code for you, but watches over what you're doing and points out possible problems? What if you strapped it onto a compiler and got warnings that you have dangerous vectors left open, or notes where buffer overflows aren't checked?
Reading medical images is a pretty decisive win. The machine going back behind the doctor and pointing out what it sees based on the history of thousands of patient images is not bad. Worst case the doctors get a little less good at doing it unassisted, but the machines don't get tired and usually don't have bad days.
The problem is capitalism. You can't have anything good for free because it's worth money. And we've put ALL the money into the tech and investors will DEMAND returns.
Imagine an open-source, free college course where everyone gets as much time as they need and aren't embarrassed to ask whatever questions come to their minds in the middle of the lesson.
My impression of the average student today is that they lack so much curiosity, partly because of YouTube-Shorts-induced ADHD and partly because ChatGPT just answers all of their homework questions for them with no effort at all, that a course like this would be functionally useless.
This is not an issue of capitalism, detestable as it is: young people are using AI to offload the mental burden of learning. Removing money incentives doesn't fix this.
I'll say two things I have actually found ChatGPT useful for: helping me flesh out NPCs in the tabletop RPG campaign I'm running, and diagnosing tech problems. That's it. I've tried to program with it, have it make professional documents, search things for me, and all of it sucks compared to just doing it myself. Definitely not worth pouring a significant chunk of global GDP into.
It's more like the opposite. There's not much evidence of it saving money or increasing productivity for companies to the extent that it covers the cost of running it, whereas for the general population it can be helpful with stuff like writing assistance, but I bet most people use it like I do, which is entertainment. ChatGPT has 800 million weekly users - people clearly are getting some value from it.
Of those 800 million, how many are paying? That number could easily be over-represented by people doing things without real value to them. I also don't know how many of those users need professional help, whether for severe social anxiety or because they find intimacy in a chatbot.
Like, you're right there has to be some value to it but I just can't see trillions of USD in value.
I don't hate AI, LLMs are incredibly powerful tools that have an incredibly wide range of uses. The technology itself is something that's very exciting and promising.
What I do hate is how they're being used by large corporations. A small handful of big tech companies (Google, Microsoft, Facebook, OpenAI, etc.) decided to take this technology and pursue it in the greediest ways possible.
When you put all of this together, then it's easy to understand why people hate AI. This is what people oppose, and rightfully so. These corporations created a massive bubble and put our economy at risk of a major recession, they're destabilizing our infrastructure, destroying our environment, they're corrupting our government, they're forcing tens of thousands of people into dire financial situations by laying them off, they're eroding our privacy and rights, and they're harming our mental health... and for what? I'll tell you, all of this is done so a few greedy billionaires could squeeze a few more dollars out of everything so they could buy their 5th yacht, 9th private jet, or 7th McMansion. Fuck them all.
When people say "I fucking hate AI", 99% of the time they mean "I fucking hate AI™©®". They don't mean the technology behind it.
To add to your good points, I'm a CS grad who studied neural networks and machine learning years back, and every time I read some idiot claiming something like "this scientific breakthrough has got scientists wondering if we're on the cusp of creating a new species of superintelligence" or "90% of jobs will be obsolete in five years", it annoys me because it's not real, and it's always someone selling something. Today's AI is the same tech they've been working on for 30+ years and incrementally building upon, but as Moore's Law has marched on, we now have the storage pools and computing power to run very advanced models and networks. There is no magic breakthrough, just hype.
The recent advancements are all driven by the $1500 billion spent on grabbing as many resources as they could - all because some idiots convinced them it's the next gold rush. What has that $1500 bil got us? Machines that can answer general questions correctly around 40% of the time, plagiarize art for memes, create shallow corporate content that nobody wants, and write some half-decent code cobbled together from StackOverflow and public GitHub repos.
What a fucking waste of resources.
What's real is the social impacts, the educational impacts, the environmental impacts, the effect on artists and others who have had their work stolen for training, the usability of the Internet (search is fucked now), and what will be very real soon is the global recession/depression it causes as businesses realize more and more that it's not worth the cost to implement or maintain (in all but very few scenarios).
I'm really split with it. I'm not a 10x "rockstar"
<insert modern buzzword>
programmer, but I'm a good programmer. I've always worked at small companies with small teams. I can figure out how to parse requirements, choose libraries/architecture/patterns, and develop apps that work. Using Copilot has sped my work up by a huge amount. I do have 10 YoE from before Copilot existed. I can use it to help write good code much faster. It may not be perfect, but it wouldn't have been perfect without it either. The thing is, I have enough experience to know when it is leading me down the wrong path, and that still happens pretty often. What it helps with is implementing common patterns, especially with common libraries. It basically automates the "google the library docs/stackoverflow and use the code there as a starting point" aspect of programming. (edit: it also helps a lot with logging, writing tests, and rewriting existing code as long as it isn't too wacky, and even then you really need to understand the existing code to avoid a mess of bugs)
But yeah search is completely fucked now. I don't know for sure but I would guess stackoverflow use is way down. It does feel like many people are being pigeonholed into using the LLM tools because they are the only things that sort of work. There's also the vibe coding phenomenon where people without experience will just YOLO out pure tech debt, especially with the latest and greatest languages/libraries/etc where the LLMs don't work very well because there isn't enough data.
I think it's interesting that they can steal all this stuff and yet be unable to figure out how to sell it.
All the money, all the data, all the energy, all the computer power, all the political control. And yet, they can't manage to sell a single dollar worth of their product.
Of course it'll be shittified by commercials in and out of the content, and of course that will lead to paid models, but it's not going to be very profitable, because nobody _really_ needs bad intelligence. "Oh, it costs something? No thanks then, we already have intelligence at home."
Yes yes, the users are the product, yes, but who then is buying that user data? Commercials and stuff yeah yeah, but at what point does any of this manifest itself as a single fucking sales transaction where a real person pays a company for a real product? Fucking never.
The whole thing is worthless.
they can’t manage to sell a single dollar worth of their product.
Ohh don't worry, that's not how this works :)
We're still in the venture capital stage. The companies are circle-jerking, paying each other off with venture funds and stock splits. They don't need to be making money at this point because they're already getting everything they ask for.
Those $50-$200 packages from all the big companies are just there to get people used to the idea. They're making all their money on selling each other useless support chatbots and horrible phone systems, claiming they can reduce their staff by half. Well, they could always reduce their staff by half; customers have had to deal with shitty wait times for years.
You'll pay for AI by the prices of your software rising. Those costs are absorbed and passed on to you as micro-transactions inside your actual subscriptions and payments.
Once they manage to get AI intertwined in every system out there, they're free to collude as a market and raise prices slowly. AI will be the cause of software-price inflation and the hardware shortages that make anyone with a datacenter or enterprise hardware manufacturing capacity very, very rich.
It could even be that in the end this isn't a bubble, it's just a grift and it never pops, but it becomes so expensive that your average person can barely eat if they expect to use software tools for their work.
That's an exceptional pasta - well-worded!
Do you mind if I steal it?
Go right ahead
I don't hate AI. I just hate the untrustworthy rich fucks who are forcing it down everyones throats.
I hate AI because it's replacing jobs (a.k.a, salaries) without us having a social safety net to make it painless.
We've replaced you with AI
-CEO
AI is replacing most of the jobs, and there aren't enough open positions to be filled by the now unemployed.
-Economists
I need food stamps, medical care, housing assistance, and unemployment.
-Me
No! Get a job you lazy welfare queen!
-Politicians
Where? There aren't any.
-Me
Not my problem! Now, excuse me while I funnel more money to my donors.
-The same politicians
The good news is, while automation like robot arms is continuing to replace humans, the AI aspect of it has been catastrophic and early adopters are often seen expressing remorse and reverting changes.
Do you know where I can read some accounts of this? I'm just interested in robotics.
AI
Anything the billionaire cabal pushes on us I automatically hate. Don't even need to know what it is. If they are pushing it you know there is some nefarious shit under the hood.
It's built on 'slave' labor and the illegal use of content, and on top of that it uses unbelievable amounts of power, so environmental concerns go right out of the window at a time when we should be doing everything to avoid that.
Also, AI, even if it's the currently established term, has absolutely nothing to do with either intelligence or sentience, but is being sold as AGI (overpromised).
This has caused huge masses of investment to gather which will pop at some point, causing all of us massive issues due to the missteps of a few.
...on top of that it uses unbelievable amounts of power, so environmental concerns go right out of the window at a time when we should be doing everything to avoid that.
Don't forget that the enormous energy usage is driving up energy costs for absolutely everyone.
Residential retail electricity prices in September were up 7.4%, to about 18 cents per kilowatt hour, according to the most recent data from the Energy Information Administration.
That's on a national basis too. If you happen to live in an area with a lot of data centers, your energy costs have probably risen more than that.
Absolutely. That is by all means not an exhaustive list. The shit pile is much higher than that and I regularly forget half of it.
Oh yeah, and long term it will cause a massive issue in the workforce, since junior positions are now basically removed and supposedly replaced by AI (but that's just a very convenient excuse), and at some point we'll run out of experienced people.
For this, the tech bros envision and sell the upcoming AGI, which, as every expert will currently confirm, will never arrive based on this technology.
But what do they care about messing it up for all of us, they're already the richest there are so sucks to be us.
Is copying an mp3 slavery as well?
They are talking about the 'Mechanical Turks' who step in when the LLM falters and who are paid slave wages.
Nowhere in the mp3 codec (afaik, of course) are there masses of low-wage workers required to do data tagging for it to work. So no.
garbage account // garbage post ?
I dunno who'd ever want to use that font as an override for the default.
maybe it's a genuine question... maybe they're asking why do you hate ai, not why do you hate ai
In any case, it's brilliant satire.
It is definitely very American in its defaulting to racism as the core of its satire.
my take on the problems of AI, from what I can remember:
ofc I'm not stopping anyone from using AI, but the ethics are the elephant in the room with AI, as listed.
sometimes not reliable source of information
Let me fix that!
usually not reliable source of information
It's just good enough for some shallow searches, especially with Google and internet search in general being poisoned by SEOed garbage floating to the top, which nowadays is AI-generated slop on top of that. I often have to go to sites that are old enough to be sure they're not AI generated, or are vetted as not being made by some AI bro as a side hustle towards their first million. One Linux article I'm no longer able to find borked a Linux installation of mine; I had to reinstall my Raspberry Pi, and the then-new installer really didn't want to let me set a different region and language at first, so I had to switch back to English after finishing the setup.
Is that Brian 'Brian Kibler' Kibler of Brian Kibler Gaming?
Looking at the profile pic: yes. The Magic: The Gathering hall-of-famer.
I've played Magic against him a couple times. Great dude.
Also one of the nicest mtg pros out there.
Why do you keep making and deleting so many accounts?
If you kept the same one, a lot of people would block you and stop downvoting every post you make from a new account
what the fuck is this stereotype
It's probably from a redditor who is white and male. Y'know, self-deprecating humor is pretty common among redditors, just like it is here.
Well, it's from a blue-starred Twitter user, which is much, much worse.
just feels wrong
like it's making stereotypes feel normal and creating xenophobia or something
when done humorously, it's fine, but here it just seems serious
Welcome to Lemmy.
I don't hate AI (specifically LLMs and image diffusion thingy) as a technology. I don't hate people who use AI (most of the time).
I do hate almost every part of AI business, though. Most of the AI stuff is hyped by the most useless "luminaries" of the tech sector who know a good profitable grift when they see one. They have zero regard for the legal and social and environmental implications of their work. They don't give a damn about the problems they are causing.
And that's the great tragedy, really: It's a whole lot of interesting technology with a lot of great potential applications. And the industry is getting run into the ground by idiots while chasing an economic bubble that's going to end disastrously. It's going to end up with a tech cycle kind of similar to nuclear power: a few prominent disasters, a whole lot of public resentment and backlash, and it'll take decades until we can start having sensible conversations about it again. If only we'd had a little bit of moderation to begin with!
The only upside AI business has had was that at least it has pretended to give a damn about open source and open access to data, but at this point it's painfully obvious that to AI companies this is just a smoke screen to avoid getting sued over copyright concerns - they'd lock up everything as proprietary trade secrets if they could have their way.
As a software developer, I was at first super excited about genAI stuff because it obviously cut down the time needed to consult references. Now a lot of tech bosses tell coders to use AI tools even in cases where it's making everyone less productive.
As an artist and a writer, I find it incredibly sad that genAI didn't hit the brakes a few years ago. I've been saying this for decades: I love a good computerised bullshit generator. Algorithmically generated nonsense is interesting. Great source of inspiration for your ossified brain cells, fertile grounds for improvement. Now, however, the AI-generated stuff pretends to be as human-like as possible, and it's doing a terrible job at it. Tech bros are half-assedly marketing it as a "tool" for artists, while the studio bosses who buy the tech chuckle at that, knowing they've found a replacement for the artists. (Want to make genAI tools for artists? Keep the output patently unusable out of the box.)
I'm hopeful that when the bubble pops it'll be more like the dot com crash, which is to say that the fallout is mostly of the economic variety rather than the superfund variety. Sure, that'll still suck in the short term. But it will ideally lead to the big players and VC firms backing away and leaving behind an oversupply of infrastructure and talent that can be soaked up at fire sale prices by the smaller, more responsible companies that are willing to stick out the downturn and do the unglamorous work of developing this technology into something that's actually sustainable and beneficial to society.
That's my naive hope. I do recognize that there's an unfortunately high probability that things won't go that way.
The value in LLMs is in the training and the data quality... so it is easy to publish the code and charge for access to the data (DaaS).
I don't hate AI. However, I:
AI only looks good if you're an outsider to the profession. The moment you're even an amateur, you'll see all of its faults. It's just a plagiarizing machine with a built-in contextual search function (is there any AI model that runs as an actual contextual search instead of a wannabe assistant with a flattering personality?) that can make some crappy-looking and weirdly specific clip art, stock music with funny-sounding gimmicks, and buggy code you'd be better off plagiarizing from public-domain-licensed code on GitHub.
The technology is way too resource intensive for the benefit it gives. By resources, I mean environmental and technological. Have you seen the prices of DDR5 RAM? Microsoft is actually working to bring TMI 1 back online. TMI = Three Mile Island, as in a full-sized nuclear reactor that was retired from service in 2019. The only reason they are not bringing TMI 2 back online is because IT F$%KING MELTED DOWN IN 1979.
Add to that that Micron exited the consumer market to provide memory to the AI market only... What the actual F#$k?
Now the bubble has formed, and the people who shoved tens of billions into it are trying to fill that bubble by any means necessary. Which means the entire population of this country is constantly bombarded by it, for purposes it is ill suited to.
When, not if, this bubble pops it's going to be a wild ride.
At some point, we should legislate that all non-production tech businesses have to be energy positive - as in, 'wanna build a data center? It's got to have more solar/wind etc. than it uses, or it's unpermittable.'
I don't hate it, I hate how companies are forcing it in regardless of how stupid it is for the task.
As always, you don't hate X, you hate capitalism. Replace X with almost anything.
HURR DURR HEY GUYS LETS DO SOMETHING VAGUE ABOUT CAPITALISM
Have you seen the RAM prices lately?
.. And NVMe SSDs, and large HDDs.
I bought a Crucial P310 NVMe 2TB card barely three weeks ago for the already-inflated price of $132.58 (not on sale).
The exact same card from the exact same retailer is now $225.13.
70% increase in 21 days.
That's the average amount of inflation we'd have in eighteen years.
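Back-of-the-envelope check, assuming roughly 3% average annual inflation (the 3% figure is my own assumption; the "eighteen years" in the comment is consistent with it):

```python
# Rough sanity check: how does a 70% price jump compare to ordinary inflation?
old_price = 132.58
new_price = 225.13
increase = new_price / old_price - 1
print(f"price increase: {increase:.1%}")  # ~69.8%

annual_inflation = 0.03  # assumed average annual inflation rate
years = 0
total = 1.0
while total < 1 + increase:
    total *= 1 + annual_inflation
    years += 1
print(f"years of ~3% inflation to match: {years}")  # 18 years
```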
My hate is not so much for AI in and of itself; my ire is over the resource use, for one. We were already draining aquifers that took thousands of years to fill, and now we're burning through even more for datacenters (DCs).
To top it off, America's western deserts are the best place for DCs. No natural disasters, stable and predictable weather, tectonically inactive. Every time I've had to pick a primary or backup DC, I'd hit one in Las Vegas or somewhere out west. (This experience was pre-AI.)
These DCs are burning power and causing higher bills to consumers, which is just fucking obscene. States should legislate that DCs have to bring at least some of their own, dedicated renewables, and pay a premium to the power company for the extra stress and maintenance on the grid. These costs should not touch customers, residential or business.
Maybe even worse is the economic aspect. Have a look at the current Buffett Index, the ratio of the total United States stock market to GDP. We topped 200% for the first time, ever. For comparison, the Great Depression and Great Recession were around 120-130%. This "extra" stock market valuation is all due to AI speculation.
So for all the other whining lemmy does about AI, it's the ecosystem and economic disasters it's creating that we'll all remember when the bubble pops.
Why are we asking loaded questions as a first post in a new account?
Are you?
It's a fair question to ask, given the situation with new accounts.
Does this not look suspicious to you?
Huh… so it's getting its advice scraped from Reddit. Now it makes sense where it's getting the idea of telling teens to die by suicide. That place is a shithole where intelligence goes to die.
I'm still waiting for it to appear, and then let's ask them how they like it. It's not like the garbage we have now is really AI.
AI is trained on the Internet. Look at the bullshit on the Internet. AI will take some random schmoe's bullshit opinion and present it as hard fact.
That, and it just re-introduced the problem of being able to see search results without visiting any of the resultant websites. Last time, sites ended up burying answers down the page so the results couldn't be seen in search previews. Making everything shittier. What kind of response is there going to be to AI summaries? Everything will undoubtedly get even shittier as sites try to get people to visit and not just read the AI summary. Hello, even more obfuscation. We're taking the greatest method of spreading information around the globe ever devised and just absolutely filling it to the brim with bullshit.
This is only the beginning. Soon there will be LLMs trained on other LLMs garbage. And those LLMs will also post and write crap on the Internet. The true pinnacle of shite posting
Oh yeah it does. I see Google summarizing, and it's just a mash-up of different BLOG posts presented as truths; that isn't a source. It's basically asking the LLM for its opinions.
Block all Bots
We should program bots to feel pain when they get blocked.
We could call it... "Artificial Emotions," or AE.
I don't hate AI, I hate it being forced down everyone's throat, and I don't trust the companies running it to keep the data they collect safe and private.
Not sure about "hate", but it's clearly a bubble and all the billions are going into AI and not things that could prevent an economic downturn.
Even if you're not opposed to it for copyright or environmental or social reasons, AI is currently wrecking markets and finances for the next decade.
I've seen it successfully perform exactly one task without causing more harm or creating liability for the people using it:
Misinformation campaigns.
And that's exactly how the AI companies are using it to grow exponentially, lying about both its costs and its capabilities.
It's weird that this is somehow an unpopular opinion these days but I don't like being lied to.
I've been hearing the claim occasionally for the last several years now that we've moved into the 'post-truth' age. AI has kind of cemented that for me.
I don't hate it, just how it's being used.
Then again, proper use of AI, if even achievable would most likely result in disaster in some way.
The way "AI" is marketed today isn't real AI, it's just a lazy source copy-pasting bot made for our convenience.
I hate the fact that, thanks to ChatGPT, every twerp out there thinks em-dashes are an automatic sign of something being written by AI...
As a writer and an em-dash enjoyer, hell with that!
I was never pedantic enough to get a real em dash, instead of just a regular dash
Lol. For me I just get impatient with AI. Most of the time the things I tried to use it for, it either couldn't do, did poorly or did wrong. It was simply easier for me to do the shit myself instead of burning the planet by doing 50 prompts to get one useful result.
I tried it, learned about the environmental costs, and then I stopped using it. The only thing it's good for is super basic removed translations, like that one time I didn't know what a particular mathematical formula was called in English and I couldn't find an answer after searching manually for a while.
That's the only time AI was helpful to me.
And honestly, I just don't trust that it can help me with what I need it for 99% of the time, because it just makes shit up constantly. I can't justify using this tool when the only useful result it ever gave me was the correct way to say a mathematical formula in English. That is just too fucking weak. I am much better at looking for answers myself, writing, and drawing things than any AI will ever be, because I know what the fuck I want and I can make it myself, while the AI has no idea what it's doing and only looks good to people who have zero reading comprehension or visual literacy.
Like, I think AI music can be pretty good but I'm also a musical removed and I have no idea how to listen to music like a real musician, so to me it sounds good, but to someone who knows what they are doing, I'm sure AI music sounds like ass.
That's me with AI writing and artwork.
I don't hate LLMs, I hate how they are being used and what they are being used for. I am not a luddite (even if I jokingly claim to be) and do see value in LLMs for some uses. Just not the ones they are primarily being used for in America.
I don't hate it, but my personal experience has been that it's not reliable enough to depend on. I don't like that it's being pushed on us so hard when it will routinely let us down.
Why is being confidently wrong considered an exclusive or primary trait of white males, and why would anyone attribute this behavior primarily to their gender or race?
Life experience. In my experience, men are, generally speaking, more confidently incorrect. On the internet I see this trait most often with US Americans, regardless of skin colour. In real life, I see it often with consumers of tabloids or Russian news.
As an Estonian, I’d just say that Americans have a heavy presence on the internet, that’s all. In my opinion, it’s a human thing, not a matter of race, nationality, or gender. If there were a study showing that one of those groups had a higher prevalence of that behavior, I’d expect it to change over time, just as women and men in South Korea have recently shifted their voting patterns. In other words, the behavior could be tied to temporary cultural/other factors.
The post says nothing about exclusivity. He's just attributing it to his own race, not to the exclusion of others.
Needlessly divisive identity politics that gets spread around a lot in the form of tweets/articles/memes because it's controversial.
We need some “everybody love everybody” and some Jesus-style politics, instead of all this identity politics whatnot.
Needs text alternative or link to source.
Mostly for all the AI haters who can't stop bringing up their hatred of AI. Insufferable.
I don't entirely hate AI.
I like AI for when I want to personally use it for art ideas because I'm not an artist and honestly, paying someone to draw for you can be expensive as it is a luxury. I just don't run anything like a DA account to show it off.
I don't like AI when it has been used to lazily write scripts for movies/shows, to draft essays and be used as a shortcut for someone's work.
I like AI when it can be used as a companion tool.
I don't like AI when it tries being a therapist for serious mental issues.
I don't like AI when it is used as a poor excuse of troubleshooting.
I don't like AI for the damage it is causing to the market of PC memory.
I don't like AI being shoved down my throat, and I don't like the companies you'd least expect to use it suddenly starting to use it.
So with a score of 2-5, if AI was just erased right now, I wouldn't really miss it. But I don't entirely hate it. It is a completely misused and abused kind of tool that's shoved into everyone's lives and marketed as a catch-all solution to nearly all problems, when there is a mountain of evidence and recorded studies saying otherwise.
Don't hate it, but I think it is ultimately wasting too many resources. And it was overhyped; we should have general AI by now, according to Sam Altman. If search was better I wouldn't use AI, but now I use it to replace search, and yes it can be wrong, but so are so many search results.
It poses a threat to the freedom of the people, as well as to human life. Just watch the interviews and listen to what the billionaires say.
Kibler was great.
Is that Brian "Please don't call me Brian "Brian Kibler" Kibler" Kibler?
Because it's everywhere and I'm tired of people trying to make me try it.
I don't.
I like AI. It's a very useful tool.
c/fuck_ai sure does
I'm not c/fuck_ai
Sounds like something someone pretending to be white would say.
Edit: It's established that when someone starts a statement with the words "as a black man", it's really someone white making the statement.
Is the inverse of that not true? If so, why?
Depends on what you classify as AI anymore
I work with video editing software, and you could argue that a lot of features in software like Davinci, Capcut, and Premiere use different helpful tools that may or may not be AI. Artificial intelligence existed long before companies slapped the term onto every product. Nowadays even TVs have "AI" processors, which is just a fancy way of saying they have a CPU.
I do believe it should still be an open-source tool that we can use for good, for example helping fight cancer, but knowing we live on such a planet, it will be used for worse rather than good, and it depends on who trains it and for what purpose, because we train AI.
I do care about art like movies and books, and the movie industry so far seems to be the only one that fights against AI being used as lazy work. There are possibly video games that use AI text-to-speech for NPCs or something, but there are a lot of directors who speak passionately against it, which is good to see.
At the end of the day it's a tool, and it will depend on how we use it. We can use it to do good things, but it can also drive more people out of jobs and help with endless wars. I have a feeling the latter will be true, but we'll see.
I don't. All the anti-intellectual technophobia we're seeing right now will be a small footnote in history.
Sincere question:
Most of the comments here cite reasons for disliking AI that include one or more of the following: environmental degradation, resource consumption, increasing energy/hardware prices, disregarding copyright, disregarding privacy, undermining human artists, mass layoffs, creating a market bubble, throwing education into chaos, monopolization by corporations/billionaires, AI hallucinations/inaccuracy, a product that is overpromising/undelivering, a product that makes generating misinformation easier.
Which of these reasons for disliking AI do you think fall under your assertion of "anti-intellectual technophobia"? They all seem like legitimate, well thought out reasons for disliking something to me, especially when considered together.
If I didn't come to Lemmy I probably wouldn't even know about the loud minority hating it. All the real people in my life either like it or don't have much of an opinion to begin with.
I don't know, I've used chatGPT and have reviewed the sources it uses. Saying Reddit is a "primary source" is blatantly dishonest. Not that dishonesty matters to people like that guy.
that just proves that he is "confidently wrong"
Yes, he is wrong - whether or not he's actually confident about it is a bit harder to discern.
Not sure if this was meant to be a counterpoint to my comment.
Haha racism
Where? Since when is self-deprecating humor a bad thing?
Self deprecating humor is great. Racial stereotypes are not. Saying “all Asians are good at math” is also harmful, for example.
Reverse-racism is not a thing.
Why not? Stereotypes are always harmful. How do you think an Asian kid feels when they suck at math but are told that's not how they should be? How do you think a black kid feels when they don't have a big dick but are told they should?
Nobody said reverse racism