I wonder what the success rate is if you verify using AI-generated faces (assuming they cheap out and you don't have to provide camera access)
A modern day Don Quixote
Not if Anduril or Palantir go anywhere near it
We'll need a private-watching private to stand guard
They will need to start banning PIs who abuse the system with AI slop and waste reviewers' time. Just a one-year ban for the most egregious offenders is probably enough to fix the problem
He was vibe-coding in production. Am I reading that right? Sounds like an intern-level mistake.
Raw-dogging the internet without an adblocker is about as irresponsible as not using contraception
Meta is a security threat outside of Russia too. No privacy should be expected when using their services
Meta argues its AI needs personal information from social media posts to learn ‘Australian concepts’
-and other things sociopaths say
People who don't use LLMs to write code:
You guys are paying for a website, rather than just buying a physical dictionary?
The doctor says he's fit as a horse
Joining a Teams call from the car sounds miserable. Doubly so if there's traffic.
Mercedes‑Benz gleefully describes this as having “the potential to transform the vehicle into a third workspace, complementing the office and the home office.”
How journalists sound when they compare an LLM to specialized software that plays chess
Grace Hopper's explanation about the light-nanosecond is still good for laypersons https://m.youtube.com/watch?v=9eyFDBPk4Yw
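For anyone who wants the number behind her wire demo, the distance falls straight out of the defined speed of light; a quick sketch:

```python
# How far does light travel in one nanosecond?
# The speed of light is exactly 299_792_458 m/s by definition of the metre.
c = 299_792_458      # metres per second
nanosecond = 1e-9    # seconds

distance_cm = c * nanosecond * 100
print(f"Light travels about {distance_cm:.1f} cm in one nanosecond")
# Hopper's demo wire was cut to 11.8 inches (~30 cm): one light-nanosecond.
```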
For sure. There are infinitely many ways to get things wrong in math and physics. Without a fundamental understanding, all they can do is prompt-fondle and roll dice.
My guess is that vibe-physics involves brute-forcing a problem until you find a solution. That method sorta works, but it's wholly inefficient and rarely robust/general enough to be useful.
Not much substance in that article, but I'll remember to avoid installing any web browsers called "Aura"
Orange County, Florida. Not California. I hope they succeed though.
The difference is that a human often has to be held accountable for their mistakes, so most humans apply logic and critical thinking to avoid them, even if that takes longer than an LLM whose "reasoning" is more like a slot machine.
