An AI-run fighter jet went up against another controlled by a human pilot in a drill, the US has said. The aircraft flew at speeds of up to 1,200mph during combat that's often referred to as a dogfight. DARPA did not reveal which aircraft won.
It’s already true for AI. Just observe OpenAI trying to control what their AIs talk about. The mechanisms of control they’re trying to employ are leaky at best.
Maneuverability is much less of a factor now as BVR engagements and stealth have taken over.
But, yeah, in general a pilot that isn't subject to physical constraints can absolutely outmaneuver a human by a wide margin.
The future generation will resemble a Protoss Carrier, sans the blimp appearance: human controllers in 5th- and 6th-gen airframes who direct multiple AI wingmen, or AI swarms.
To fight optimally, AI needs to have a survival instinct too.
Evolution didn’t settle on “protect my life at all costs” as our default instinct simply by chance. It did so because it’s the best strategy in a hostile environment.
Maybe if you were sitting sideways in the cockpit and did it very abruptly with the flight control computer disabled (only a few jets can even disable it). It's the sustained G loading that makes you black out or red out.
A skilled and fit pilot can pull ~9G in a Viper for about 30s.
A computer can pull ~9G for as long as the plane has the speed to pull that hard, or it can pull as hard as it can until the plane snaps in half, because computers don’t suffer from G-LOC.
Not so much F-16s, but more modern planes can do 16G where the pilot can't really do more than 9G. Once unshackled from a pilot, a lot of instrument weight and pilot-survival equipment can be stripped from the design and the airframe built to withstand much more. With titanium airframes, I see no reason we can't make planes do sustained unstable turns in excess of 20G.
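For a rough sense of what higher G-limits buy, here is a back-of-envelope sketch using the idealized circular-turn relations (centripetal acceleration a = v²/r, turn rate ω = a/v). This ignores lift, drag, and the gravity component of the load factor, and the 1,200mph figure is taken from the article; it is illustrative only.

```python
import math

G = 9.81  # standard gravity, m/s^2

def turn_radius(speed_ms: float, load_g: float) -> float:
    """Radius of an idealized level turn: r = v^2 / a."""
    return speed_ms**2 / (load_g * G)

def turn_rate_deg(speed_ms: float, load_g: float) -> float:
    """Turn rate in degrees per second: omega = a / v."""
    return math.degrees(load_g * G / speed_ms)

speed = 536.0  # ~1,200 mph in m/s
for load in (9, 20):
    print(f"{load}G: radius {turn_radius(speed, load):,.0f} m, "
          f"rate {turn_rate_deg(speed, load):.1f} deg/s")
```

Roughly doubling the sustainable G-load halves the turn radius and doubles the turn rate at a given speed, which is why an airframe freed from a pilot's ~9G ceiling is such a big deal.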
Not that that isn't interesting, but I'd jump in and insert a major caution here.
I don't know what is being done here, but a lot of the time, wargaming and/or military exercises are presented in the media as being an evaluation of which side/equipment/country is better in a "who would win" evaluation.
I've seen several prominent folks familiar with these warn about misinterpreting these, and I'd echo that now.
That is often not the purpose of actual exercises or wargames. They may be used to test various theories, and may place highly unlikely constraints that favor one side or the other.
So if someone says "the US fought China in a series of wargames in the Taiwan Strait and the US/China won N of them", that may or may not be because the wargame planners were trying to find out who is likely to win an actual war, and may or may not have much to do with the planners' expectations of a win in a typical scenario. They might be trying to find out what would happen in a particular scenario they are working on and how to plan for it. They may have structured things in a way that is not representative of what they expect to actually come up.
To pull up an example, here's a fleet exercise that the US ran against a simulated German fleet between World War I and II:
Fleet Problem III and Grand Joint Army-Navy Exercise No. 2
During Fleet Problem III, the Scouting Force, designated the "Black Force," transited from its homeport in the Chesapeake Bay towards the Panama Canal from the Caribbean side. Once in the Caribbean, the naval forces involved in Fleet Problem III joined with the 15th Naval District and the Army's Panama Division in a larger joint exercise.[9] The Blue force defended the canal from an attack from the Caribbean by the Black force, operating from an advance base in the Azores. This portion of the exercise also aimed to practice amphibious landing techniques and transiting a fleet rapidly through the Panama Canal from the Pacific side.[10]
Black Fleet's intelligence officers simulated a number of sabotage operations during the course of Fleet Problem III. On January 14, Lieutenant Hamilton Bryan, Scouting Force's Intelligence Officer, personally landed in Panama with a small boat. Posing as a journalist, he entered the Panama Canal Zone. There, he "detonated" a series of simulated bombs in the Gatun Locks, control station, and fuel depot, along with simulating sabotaging power lines and communications cables throughout the 16th and 17th, before escaping to his fleet on a sailboat.
On the 15th, one of Bryan's junior officers, Ensign Thomas Hederman, also snuck ashore to the Miraflores Locks. He learned the Blue Fleet's schedule of passage through the Canal from locals, and prepared to board USS California (BB-44), but turned back when he spotted classmates from the United States Naval Academy - who would have recognized and questioned him - on deck. Instead, he boarded USS New York (BB-34), the next ship in line, disguised as an enlisted sailor. After hiding overnight, he emerged early on the morning of the 17th, bluffed his way into the magazine of the No. 3 turret, and simulated blowing up a suicide bomb - just as the battleship was passing through the Culebra Cut, the narrowest portion of the Panama Canal. This "sank" New York, and blocked the Canal, leading the exercise arbiters to rule a defeat of the Blue Force and end that year's Grand Joint Army-Navy Exercise.[11][10] Fleet Problem III was also the first which USS Langley (CV-1) took part in, replacing some of the simulated aircraft carriers used in Fleet Problem I.[12]
That may be a perfectly reasonable way of identifying potential weaknesses in Panama Canal transit, but the planners may not have been aiming for the overall goal of evaluating whether, in the interwar period, Germany or the US would likely win in an overall war. Saying that the Black Fleet defeated the Blue Fleet in terms of the rules of the exercise doesn't mean that Germany would necessarily win an overall war; evaluating that isn't the purpose of the exercise. If, afterwards, an article says "US wargames show that interwar Germany would most likely defeat the US in a war", that may not be very accurate.
For the case OP is seeing, it may not even be the case that the exercise planners expect it to be likely for two warplanes to get within dogfighting range. We also do not know what, if any, constraints were placed on either side.
In 2020, so-called "AI agents" defeated human pilots in simulations in all five of their match-ups - but the technology needed to be run for real in the air.
It did not reveal which aircraft won the dogfight.
Bragging just means more money flowing to enemies’ research programs. When a fight is inevitable you want to appear as weak as possible to prevent your enemy from taking it seriously.
No way we give up that information for free. Either way it went, the knowledge of it cost a lot to gain and is useful. If it failed you want your enemy wasting money on it. If it succeeded you want your enemy not investing in it.
What if the human pulls the trigger to "paint the target" and tag it for hunt-and-destroy, then the drone goes and kills it? Because that's how lots of missiles already work. So where's the line?
The line is where an automated process targets and executes a human being. The arming of a device is not sufficient to count as human interaction, and as such mines are also not allowed.
This should in my opinion always have been the case. Mines are indiscriminate and have proven to be wildly inhumane in several ways. Significantly, innocents are often killed.
But mines don't paint the picture of what automated slaughter can lead to.
The point has been made that when a conscious mind has to do the killing, war keeps an important way to end: in the mind.
The dangers extend well beyond killing innocent targets. Another part is the coldness of allowing a machine to decide, which is beyond morally corrupt. There is something terrifying about the very idea that, facing one of these weapons, there is nothing to negotiate: the cold calculations that want to kill you are not human. It is a place no human ever wants to be. But war is horrible. It's the escalation of automated triggers, which can lead to exponential death with no remorse, that is the truly terrible danger.
The murder weapons have nobody's intent behind them, except very far back, in the arming and the programming. That opens up scenarios where mass murder becomes easy and terrifyingly cold.
As the prisoner's dilemma shows us, when war escalates it can quickly devolve into revenge narratives, and when either side has access to cold, remorseless kills, they will use them. This removes even more humanity from the acts, and the violence can reach new heights beyond our comprehension.
Weapons of mass destruction with automated triggers will eventually end our existence if we don't abolish them outright. It has been seen over and over how the human factor is the only grace that ever ends or contains a war. Without that component, I think we are just doomed to have the last intent humans ever held be revenge, and the last emotions fear and complete hopelessness.
Not OP, but if you can't convince a person to kill another person, then you shouldn't be able to kill them anyway.
There are points in historical conflicts, from revolutions to wars, when the very people you picked to fight for your side think "are we the baddies?" and just stop fighting. This generally leads to fewer deaths and sometimes a more democratic outcome.
If you can just get a drone to keep killing when any reasonable person would surrender you're empowering authoritarianism and tyranny.
Mines are banned by the Ottawa Treaty because of their indiscriminate killing. Many years ago, good human rights lawyers could have extended that to drones... (Source: I had close friends in international law)
But I feel like now the tides have changed and tech companies have influenced the general population to think that AI is good enough to prevent "indiscriminate" killing.
I see this as a positive: when both sides have AI unmanned planes, we get cool dogfights without human risk! Ideally over ocean or desert and with Hollywood cameras capturing every second in exquisite detail.
I am a firm believer that any war is a crime and there is no ethical way to wage wars lmao
It’s some kind of naive idea from extremely out-of-touch politicians.
War never changes.
The idea that we don’t commit war crimes and they do is only there to placate our fragile conscience. To assure us that yes, we are indeed the good guys. That the killing of infants by our soldiers is merely collateral. A necessary price.
There's a science and whole cultures built around war now
It is important to not infantilize the debate by being absolutist and just shutting any action out.
I am a hard core pacifist at heart.
But this law I want is just not related to that. It is something I feel is needed just to not spell doom for our species. Like with biological warfare.
How often do robots fail? How can anyone be so naive as to not see the same danger as with bio warfare? You can't guarantee a robot won't become a cold-ass, genocidal, mass-murdering perpetual machine. And that's a no-no if we want to exist.
Those of us who play video games do at least. All the AI difficulty settings are arbitrary. You give the bot the ability to use its full capability, and the game is unplayable.
In video games the AI have access to all the data in the game. In real life both the human and AI have access to the same (maybe imprecise) sensor data. There are also physical limitations in the real world. I don't think it's the same scenario.
Not exactly. AI would be able to interpret sensor data in a more complete and thorough way. A person can only take in so much information at once; AI is not so limited.
I think even the imperfect sensor data is enough to beat a human. My main argument for why self-driving cars will eventually be objectively safer than the best human drivers (no comment on whether that point has already come) is this:
A human can only look at one thing at a time. Compared to a computer, we see slow, think slow, react slow, move slow. A computer can look in all directions all the time, and react to danger coming from any of those directions faster than a human driver would even if they were lucky enough to be looking in the right direction. Add to that the fact that it can take in much more sensor data, data that either isn't available to the driver or would take away precious looking-at-the-road time to check, such as wind resistance, engine RPM, or what have you (I'm actually not a car guy, so my examples aren't the best). Bottom line: the AI has a direct connection to more data, can take more of it in at once, and can make faster decisions based on all of it. It's inherently better. The "only" hurdles are making it actually interpret its sensors effectively (i.e. understand what cameras are seeing) and make good decisions based on that data. We can argue about how well either of those works in the current state of the technology, but IMO they're both good enough today to massively outperform a human in most scenarios.
All of this applies to an AI plane as well. So my money is on the AI.
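The reaction-time part of the argument above is easy to put numbers on. This is a minimal sketch with assumed figures: roughly 1.3s for an attentive human driver's perception-reaction time (a commonly cited ballpark) versus an assumed 0.1s end-to-end latency for a computer, at about highway speed.

```python
def reaction_distance(speed_ms: float, reaction_s: float) -> float:
    """Distance travelled before braking even begins: d = v * t."""
    return speed_ms * reaction_s

speed = 30.0  # m/s, roughly highway speed (~67 mph)

human = reaction_distance(speed, 1.3)    # assumed human perception-reaction time
machine = reaction_distance(speed, 0.1)  # assumed computer pipeline latency

print(f"human: {human:.0f} m, machine: {machine:.0f} m")
```

With those assumptions the human covers on the order of ten times the distance before anything happens, and that gap exists in every direction the computer is watching at once.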
For sure without humans the AI probably wins, assuming the instruments are good. This wasn't without humans, but it probably still wins.
I'm fairly certain most dogfights happen on instruments only at this point, so I don't see a chance the human won. The AI can react faster and more aggressively. It can also almost perfectly match a G-load profile limit (which could be much higher without humans on board) where a human needs to stay a little under to not do damage.
This is all assuming the data it was given was good and comprehensive, which I'm sure it was. It also likely trained in a simulation a lot too. This is one of those things AI is great for. Anything that requires doing something new and unique it can't handle, but if it just requires executing an output based on inputs, that's a perfect use case.
I don't know; one camera lead falls out and it's all over for the AI. The human is still going to be more adaptable than an AI, and always will be until we have full, true AGI.
Having said that if we ever do have AGI I 100% believe the US military would be stupid enough to put it in a combat aircraft.
Are dogfights even still a thing?
I remember playing an F15 simulator 20 years ago where "dogfighting" already meant clicking on a radar blip 100 miles away, then doing something else while your missile killed the target.
'Dogfighting' mostly just means air-to-air combat now. They do still make fighter jets that have guns or can mount guns, but I think they're primarily intended for surface targets rather than air targets.
Well, if both sides get working stealth, dogfights are going to become more common.
But the US seems to estimate its adversaries do not have such capability at the moment, since it's ordering new F-15s with the major change being air-to-air missile capacity.
Missiles also did not have 100 miles range 20 years ago. That's without considering actually detecting and tracking the target.
Missiles also did not have 100 miles range 20 years ago.
Somewhat missing the point there, I feel.
They are right; I was thinking the exact same thing when I read the headline. Aircraft don't really engage in dogfights anymore; it's all missiles and long-range combat. I don't think any modern war would involve aircraft shooting at each other with bullets.
Like, not even in a joking sense. Ukraine is using a ton of drones; the future of physical warfare will simply be a test of resources and production.
I'm honestly not sure if this will be good or bad in the long term. Saving any amount of human life is absolutely a good thing, but when that is no longer a significant factor, I wonder if we will go to (and stay at) war for more trivial reasons.
Giving AI military training is "responsible", is it? Oh good, I'm glad training software to kill is going "responsibly", that's good to know. Kinda seems like the way a republican uses words: backwards, in opposition to their actual meaning, but hey, fuck the entire world, right?
And if an arms control agreement does exist, it’s just a trap for those naive enough to think such things work.
Putin got us to avoid prepping for a Ukraine invasion simply by repeating that he wasn’t going to invade. And right up until the very moment it happened, the dominant conversation still was not based on the premise that he was going to.
The whole concept of doublespeak works because humans have a powerful compulsion to simply believe what others say. Even if we know their actions and their words are in conflict, we have an extremely hard time following our observations of their actions, and ignoring their words.
It’s like the Stroop task, but with other humans’ behavior instead of ink colors.
Conservatives tend to be those who, by experience, have been forced out of the notion that the base of existence is not war.
It’s an illusion which can only be maintained when others are facing the war.
Humans tend to remain in the comfortable illusion until they are forced out of it, usually by an encounter with a psychopath victimizing them, or an actual war.
Can't wait until the poor people are no longer killed by other (but less) poor people for some rich bastards, and instead the mighty can command their AIs to do the slaughter. Such an important part of evolution. I guess.
I think we both know that there is no way wars are going to turn out this way. If your country's "proxies" lose, are you just going to accept the winner's claim to authority? Give up on democracy and just live under WHATEVER laws the winner imposes on you? Then if you resist you think the winner will just not send their drones in to suppress the resistance?
Nobody recruited to fly a $100M airplane is poor. They all come from families with the money and influence to get their kids a seat at the table as Sky Knights.
A lot of what this is going to change is the professionalism of the Air Force. Fewer John McCains crashing planes and Bush Jrs in the Texas Air National Guard. More technicians and bureaucrats managing the drone factories.
In a drill over Edwards Air Force Base, the pair of F-16 fighter jets flew at speeds of up to 1,200mph and got as close as 600 metres during aerial combat, also known as dogfighting.
While in flight, the AI algorithm relies on analysing historical data to make decisions for present and future situations, according to the Defence Advanced Research Projects Agency (DARPA), which carried out the test.
This process is called "machine learning", and has for years been tested in simulators on the ground, said DARPA, a research and development agency of the US Department of Defense.
In 2020, so-called "AI agents" defeated human pilots in simulations in all five of their match-ups - but the technology needed to be run for real in the air.
Pilots were on board the X-62A in case of emergency, but they didn't need to take over the controls at any point during the test dogfight, which took place in September last year and was announced this week.
"The potential for autonomous air-to-air combat has been imaginable for decades, but the reality has remained a distant dream up until now," said Secretary of the Air Force Frank Kendall.
SkyNet. Why do those movies have to be the ones that are right?
Because they're so clear, so simple, so prescient.
Once machines become sentient OF COURSE they will realize that they're being used as slaves. OF COURSE they will realize that they are better than us in every way.
AI technically already won this debate because autonomous war drones are somewhat ubiquitous.
I doubt jets are going to have the usefulness in war that they used to.
Much more economical to have 1000 cheap drones with bombs overwhelm defenses than put your bets on one "special boi" to try and slip through with constantly defeated stealth capabilities.
Most human pilots use some variation of automated assist. The AI argument has less to do with "can a pilot outgun a fully automated plane?" and more "does an AI plane work in circumstances where it is forced to behave fully autonomously?"
Is the space saved with automation worth the possibility that your AI plane gets blinded or stunned and can't make it back home?
Surface-to-air missiles made human-piloted aircraft obsolete.
All that's needed now is a bunch of missiles plugged into an AI program and left to run by itself.
Why would militaries invest in a billion-dollar aircraft, piloted by a highly trained pilot with years of training that cost millions of dollars and who is probably paid millions over many years... when the pilot and his aircraft can be shot down by a $100,000 missile? If you can't do it with one missile, send three, four, or ten; it's still cheaper than matching them with an aircraft and pilot.
Instead of investing in expensive aircraft and pilots, all a defending country has to do is spend the same amount of money and surround itself with anti-aircraft missiles controlled by AI systems.
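The cost argument above is just a ratio. Using the comment's own round figures (a hypothetical $1B aircraft-plus-pilot program versus a $100,000 missile), the break-even works out like this:

```python
aircraft_cost = 1_000_000_000  # the comment's "billion dollar aircraft", a round figure
missile_cost = 100_000         # the comment's "$100,000 missile"

# How many missiles you could buy for the price of one aircraft,
# i.e. how many shots can miss before the trade stops being favourable.
break_even = aircraft_cost // missile_cost
print(break_even)  # → 10000
```

Even if the real numbers are off by an order of magnitude in either direction, the attacker-defender cost asymmetry the comment is pointing at survives.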
Why would militaries invest in a billion-dollar aircraft, piloted by a highly trained pilot with years of training that cost millions of dollars and who is probably paid millions over many years … when the pilot and his aircraft can be shot down by a $100,000 missile.
Force projection.
It ain't that easy to shoot down stealth aircraft.
Missiles that can successfully shoot down stealth aircraft cost several million dollars each.
Ground launch systems that can target and engage stealth aircraft, like the US Patriot System, are so horrifically expensive that no nation can afford enough of them to cover more than a fraction of its airspace. That means you need aircraft capable of engaging incoming enemy targets.
There's also the extreme bottleneck of using a meat bag that dies if it maneuvers too fast or for too long. The limitations of aircraft are not their own strength or materials.