Human-like object concept representations emerge naturally in multimodal large language models
Yeah, because it's trained on data produced by humans, who use human-like concepts.
FFS, humanity is cooked if we're already this stupid.
There's nothing we can do if an AI manages to be charismatic enough to maneuver us into a catch-22.
Could these be the first signs of an emergent phenomenon in LLMs? If so, will companies and governments try to prevent it from happening, or will they let it unfold freely?