The joke is also that the burning car was a Tesla, and if Elon could, he'd push a patch to copy/paste his face onto any memories of firefighters found in a Neuralink customer's brain
It is a really interesting, very scary technology that requires a solid institutional foundation to provide trust. Musk degrades trust, he doesn't build it.
Actual augments like this will never work if they phone home to do their job. There could be massive benefits for people with a huge variety of conditions and interests, but if it's corpo-ware that isn't heavily protected by medical review and long-term support, it's junk
Lol that's funny! No one would actually do that to another person! We are completely safe because this degree of selfishness does not exist, that's why I can laugh at it!
I think you people are vastly overestimating how much we actually know about the brain or severely underestimating how freaking complex it is.
The "you" reading this right now, is a fucking stack of six A4 sized sheets, each one nanometers thick, and crumpled into something which, by all appearances, looks to an external observer as an oversized walnut seed, cooled and maintained by a network of 400 miles capillaries, and isolated from the world by the blood brain barrier, which can only be described as a fucking miracle.
No. No one is going to be implanting any memories soon
AI is better at recognizing patterns than we are. The brain may be unfathomable to us, but technology already exists which could recognize the signals in your brain that represent memories and reproduce or alter them.
Neuralink and similar devices are being used right now, today, to record neural activity from animals. The first Neuralink patient is alive and well, meaning it's already being used in humans.
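For anyone wondering what "recognizing the signals" actually means in practice, here's a toy decoding sketch in Python. It classifies simulated firing-rate patterns, not real Neuralink data; every number and name in it is made up for illustration, and real BCI decoding pipelines are far more involved.

```python
# Toy sketch of neural "pattern recognition": decode which of two stimuli a
# simulated population of neurons was responding to. Only illustrates the
# basic idea of classifying recorded activity patterns.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

N_NEURONS = 96          # roughly an electrode-array-sized recording (assumption)
TRIALS_PER_CLASS = 200

# Each "stimulus" drives the population with slightly different mean rates.
base_rates = rng.uniform(2.0, 10.0, size=N_NEURONS)    # spikes/s
tuning_shift = rng.normal(0.0, 1.5, size=N_NEURONS)    # class-dependent offset

def simulate_trials(label):
    """Poisson spike counts over a 1 s window for one stimulus class."""
    rates = np.clip(base_rates + label * tuning_shift, 0.1, None)
    return rng.poisson(rates, size=(TRIALS_PER_CLASS, N_NEURONS))

X = np.vstack([simulate_trials(0), simulate_trials(1)])
y = np.array([0] * TRIALS_PER_CLASS + [1] * TRIALS_PER_CLASS)

X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.25, random_state=0, stratify=y)

decoder = LogisticRegression(max_iter=1000).fit(X_train, y_train)
print(f"decoding accuracy: {decoder.score(X_test, y_test):.2f}")
```

Decoding which stimulus or movement a pattern of activity corresponds to is what today's BCIs do; it's a long way from reading or writing memories, which is the point being argued below.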
Do you really think this technology won't exist in our lifetime?
Yes, absolutely. What you're describing is AGI. If an AI could untangle engrams from branched clusters of extremely plastic neurons, it could understand and improve its own thinking. It would actually be self-aware before it could untangle the mess that our brains are. And I don't see AGI happening with our current material and resource constraints before I die. The gap between seeing which brain regions are active and de novo engram implantation is about as wide as the gap between an LLM and AGI.
Maybe memories are actually really simple. Like the words on a screen. An arrangement of symbols, then a boatload of meaning and interpretation and rationalization. So all you need to do to make memories is to insert a few words. The brain's "memory interpreter" does the rest of the work.
For example, we insert the words "brother appears". Then, for the "new memory", we reference your memories of your brother. His appearance and the sound of his voice. Then we contrive a narrative explaining why "brother" is at this place and time. Etc. Voila! You now have a memory of your brother standing there saying some stuff.
So making a memory wouldn't require some grand, delicate manipulation of brainstuff. Just a simple insertion.
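Here's that "sparse cue + interpreter" idea as a toy sketch in code, purely as a metaphor; the dictionaries and function names are made up for illustration, and this isn't a claim about how engrams actually work.

```python
# Toy illustration of the "insert a few symbols, the brain fills in the rest"
# idea: the implanted "memory" is only a tiny cue, and everything vivid about
# it comes from associations that already exist.
EXISTING_ASSOCIATIONS = {
    "brother": {"face": "<stored face>", "voice": "<stored voice>"},
    "kitchen": {"lighting": "warm", "smell": "coffee"},
}

def implant_cue(*symbols):
    """The hypothetical implant writes only this tiny cue."""
    return {"cue": symbols}

def recall(memory):
    """The brain's 'memory interpreter' expands the cue into a vivid scene."""
    scene = {}
    for symbol in memory["cue"]:
        scene[symbol] = EXISTING_ASSOCIATIONS.get(symbol, "<confabulated detail>")
    scene["narrative"] = ("a plausible story for why "
                          + " and ".join(memory["cue"]) + " go together")
    return scene

print(recall(implant_cue("brother", "kitchen")))
```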
Companies are constantly being called out for selling user data; imagine the shit that will come out if this goes mainstream. Then multiply that by all the stories about Teslas going rogue and you pretty much end up with the worst possible idea ever.