Zalack @ Zalack @startrek.website Posts 0Comments 71Joined 2 yr. ago

Self driving cars could actually be kind of a good stepping stone to better public transit while making more efficient use of existing roadways. You hit a button to request a car, it drives you wherever you need to go, and then gets tasked to pick up the next person. Where you used to need 10 cars for 10 people, you now need one.
Is !lostlemmings a thing anywhere?
American here.
Did we just become best friends!???
At a sketch:
- We know that when the brain chemistry is disrupted, our consciousness is disrupted
- You can test this yourself. Drink some alcohol and your consciousness will be disrupted. Similarly I am on Gabapentin for nerve pain, which works by inhibiting the electrical signals my nerves use to fire, and in turn makes me groggy.
- While we don't know exactly how consciousness works, we have a VERY good understanding of chemistry, which is to say, the strong and weak nuclear forces and electromagnetism (fundamental forces). Literally millions of repeatable experiments that have validated these forces exist and we understand the way they behave.
- Drugs like Gabapentin and Alcohol interact with our brain using these forces.
- If the interaction of these forces being disrupted disrupts our consciousness, it's reasonable to conclude that our consciousness is built on top of, or is an emergent property of, these forces' interactions.
- If our consciousness is made up of these forces, then it cannot be a fundamental force as, by definition, fundamental forces must be the basic building blocks of physics and not derived from other forces.
There are no real assumptions here. It's all a line of logical reasoning based on observations you can do yourself.
Why would you assume consciousness is a fundamental force rather than an emergent property of complex systems built on the forces?
More good options are always a good thing.
The FBI and regular police have very different standards. I definitely think this should be fully investigated like any use of force, but I have more faith that the FBI handled this appropriately than if it had been a local police department.
Not a treasure
Atlas Nodded
Thatsthejoke.jpeg.zip
Permanently Deleted
In many cases it should be fine to point them all at the same server. You'll just need to make sure there aren't any collisions between schema/table names.
I'm not saying there aren't downsides, just that it isn't a totally crazy strategy.
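A toy sketch of the collision-avoidance point, using stdlib sqlite3 as a stand-in for a shared server (all names here are made up for illustration):

```python
import sqlite3

# Two hypothetical services sharing one database, avoiding collisions
# by prefixing table names per service instead of each wanting "users".
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE blog_users (id INTEGER PRIMARY KEY, name TEXT)")
conn.execute("CREATE TABLE wiki_users (id INTEGER PRIMARY KEY, name TEXT)")
conn.execute("INSERT INTO blog_users (name) VALUES ('alice')")
conn.execute("INSERT INTO wiki_users (name) VALUES ('bob')")

# Each service only touches its own namespace, so the shared server
# never sees a name collision.
tables = [row[0] for row in conn.execute(
    "SELECT name FROM sqlite_master WHERE type='table' ORDER BY name")]
print(tables)  # -> ['blog_users', 'wiki_users']
```

On a server like Postgres you'd more likely give each app its own schema rather than prefixing table names, but the idea is the same: namespace per app, one server.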
Same. I write FOSS software in my free time and also paid.
You're being sarcastic but even small fees immediately weed out a ton of cruft.
What about spicy food? Go for the Trifecta!
Take me HOOOAAAAAAMMMMME
I don't know. This would dovetail well with a bunch of studies that have found verbal and physical abuse of retail workers at an all-time high since the pandemic. Similar studies have found the same thing for road rage.
There has always been some fraction of poorly behaved people, but that fraction seems to have become larger since the pandemic, whatever the actual mechanism that caused it is.
I work in the film industry and can say, with certainty, that TNG was not shot with the same consideration.
Television back then knew it was being mastered for SDTV and the artists had a good idea of what it meant they could get away with compared to something that would be screened in 35mm. Final screening medium has always been the most important consideration, not capture medium.
Audiences have also gotten less forgiving of visual quality and less willing to suspend disbelief as the bar for quality has steadily risen. It means that shows are both working on higher-definition target mediums and under more scrutiny than ever.
Like, I love TNG but go watch and tell me that it looks half as good as SNW.
Federation isn't opt-in though. It would be VERY easy to spin up a bunch of instances with millions or billions of fake communities and use them to DDOS a server's search function.
Searching current active subscriptions helps mitigate that vector a little.
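A toy sketch of that mitigation (all community names here are invented for illustration):

```python
# Restrict search to communities this instance's users actually
# subscribe to, so a hostile instance spamming fake communities
# can't bloat the set of candidates the search has to scan.
subscribed = {"startrek@startrek.website", "technology@lemmy.world"}

# Everything the instance has ever federated with, including junk
# a malicious peer could inflate without bound.
known = subscribed | {f"fake{i}@evil.example" for i in range(5)}

def search(query: str) -> list[str]:
    # Candidate set is bounded by subscriptions, not by the total
    # number of federated communities we've ever seen.
    return sorted(c for c in subscribed if query in c)

print(search("startrek"))  # -> ['startrek@startrek.website']
```

The point is just that the work scales with the subscription list, which local users control, rather than with whatever remote instances choose to advertise.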
While that's true, we have to allow for the fact that our own intelligence, at some point, is an encoded model of the world around us. Probably not through something as rigid as precise statistics, but our consciousness is somehow an emergent phenomenon of the chemical reactions in our brains that on their own have no real understanding of the world either.
I do have to wonder if at some point, consciousness will spontaneously emerge as we make these models bigger and more complex and -- maybe more importantly -- start layering specialized models on top of each other that handle specific tasks then hand the result back to another model, creating feedback loops. I'm imagining a neural network that is trained on something extremely abstract like figuring out, from the raw input data, what specialist model would be best suited to process that data, then, based on the result, what model would be best suited to refine that data. Something we train to basically be an executive function with a bunch of sub models available to it.
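A toy stand-in for that executive-plus-specialists idea: a "router" picks which specialist handles the raw input, and passes can be chained so one model's output feeds the next. Everything here is a stub, not a real neural network; the names are invented for illustration.

```python
def router(data: str) -> str:
    # A real system would be a trained classifier over raw input;
    # this stub just checks for digits.
    return "math" if any(ch.isdigit() for ch in data) else "text"

# Specialist "models", each handling one kind of task.
SPECIALISTS = {
    "math": lambda d: f"numbers: {[int(t) for t in d.split() if t.isdigit()]}",
    "text": lambda d: f"summary: {d[:20]}",
}

def executive(data: str, passes: int = 1) -> str:
    # The feedback loop: each pass re-routes the previous result,
    # so specialists can refine each other's output.
    for _ in range(passes):
        data = SPECIALISTS[router(data)](data)
    return data

print(executive("compute 3 plus 4"))  # -> numbers: [3, 4]
```

The interesting behavior would come from the loop, where the executive keeps re-dispatching intermediate results until something stable comes out.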
Could something like that become conscious without realizing it's "communicating" with us? The program executing the LLM might reflexively process data without any concept that it's text, but still be complex enough, when reflecting on its own processes, to reach self-awareness. It wouldn't realize the data represents a link to other conscious beings.
As a metaphor, you could teach a very smart dog how to respond to certain, basic arithmetic problems. They would get stuff wrong the moment you prompted them to do something out of their training, and they wouldn't understand they were doing math even when they got it "right", but they would still be sentient, if not sapient, despite that.
It's the opposite side of the philosophical zombie. A philosophical zombie behaves exactly as a human would, but is a surface-level automaton with no inner life.
But I propose that we also consider the inverse-philosophical zombie, an entity that behaves like an automaton, but has an inner life that has not recognized its input data as evidence of an external world outside its own bounds. Something that might not even recognize it's executing a program, the same way we aren't consciously aware of the chemical reactions our brain is executing to make us think.
I don't believe current LLMs are anywhere near complex enough to give rise to that sort of thing, but they are also still pretty early in their development and haven't started to be heavily layered and interconnected the way I think they'll end up.
At the very least it makes for a fun Sci-fi premise.