Scientists Need a Positive Vision for AI

... in the United States, public investment in science seems to be redirected and concentrated on AI at the expense of other disciplines. And Big Tech companies are consolidating their control over the AI ecosystem. In these ways and others, AI seems to be making everything worse.
This is not the whole story. We should not resign ourselves to AI being harmful to humanity. None of us should accept this as inevitable, especially those in a position to influence science, government, and society. Scientists and engineers can push AI towards a beneficial path. Here’s how.
The essential point is that, as with the climate crisis, we need a vision of what positive outcomes look like in order to actually get things done, that is, to do things with the technology that would make life better. The authors give a handful of examples and outline broad categories of activities that can help steer what gets done.
You know what else would make life better for people?
Accessible healthcare...
You know why that's better than AI? We don't need to burn the planet down to use it after spending billions to get it going.
I'm not convinced that "AI" is even what it's meant to be. Worse, I think scenarios of success have already been drawn up in stories and science fiction - and AI in 2025 suggests we're not even close.
Now that more information is available concerning the US government's private recollections and assessments of its military activities in Afghanistan, I suspect that this AI push is just another "campaign": another game of sleight of hand, or a pump-and-dump maneuver. The US dollar remains a major reserve currency, but successive governments over the last 20 years have been incompetent, and the country has been mismanaged for far longer than anyone expected.
With the US signalling strongly that it is giving up competing with China on advanced technologies like renewables and batteries, there's little else left besides the promise that AI will somehow swoop in and fix it all. But as netizens already point out, capitalist corporations cannot "benefit" from AI without taking advantage of its core promise: taking jobs away from humans.
Sadly "AI", or whatever you want to call it, is an interesting tool, but that still requires supervision or human oversight. AI is not the magic promised for all the countless billions spent, water burned, and energy depleted. I think the world is starting to grow suspicious, and the US faces a market correction due to fears of the AI bubble.
Perhaps AI's promise remains, but how it's pursued gives the impression of another American scam.
The main goal of artificial intelligence, and this is why so much money is being poured into it, is to make AI the oppressor of ordinary people. This means, for example, improving video surveillance systems, or more precisely, creating AI-powered cameras that can accurately recognize human emotions and predict actions.
I strongly agree. But I also see the pragmatics: we have already spent the billions, there is (anti-labor, anti-equality) demand for AI, and bad actors will spam any system that treats novel text generation as proof of humanity.
So yes, we need a positive vision for AI so we can deal with these problems. For the record, AI has applications in healthcare accessibility: translation and navigation of bureaucracy (including automating the absurd hoops insurance companies insist on; make the insurance companies deal with the slop) come immediately to mind.
I am genuinely curious why you think we need a positive vision for AI.
I say this as someone who regularly uses LLMs for work (mostly as a supplement to web searching) and uses "AI" in other areas as well (low-resolution video upscaling). There are also many other very interesting use cases (often specialized) that tend to be less publicized than LLM-related stuff.
I still don't see why we need a positive vision for AI.
From my perspective, "AI" is a tool; it's not inherently positive or negative. But as things stand right now, the industry is dominated by oligarchs and con-man types (although they of course don't have a monopoly in this area). And since we don't really have a way to rein in the oligarchs (i.e., make them take responsibility for their actions), the discussion around a positive vision almost seems irrelevant. Let's say we do have a positive vision for AI (I am not even necessarily opposed to such a vision); my question would be: so what?
Perhaps we are just talking about different things. :)
P.S. FWIW, I read your replies in this thread.
sunk cost fallacy