We've been past chatbots in programming for a while now. It's LLMs with tool-calling capabilities now, working in an agentic loop. LLMs are extrapolators, so the input context matters (extrapolating from missing information leads to hallucinations). With this workflow the LLM can construct its own context by using tools, which leads to better results.
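The loop described above can be sketched roughly like this. Everything here is a stand-in: `fake_model` replaces a real LLM API call, and the dict-based tool-call format is illustrative, not any particular vendor's protocol.

```python
def read_file(path: str) -> str:
    """Tool: lets the model pull file contents into its own context."""
    files = {"app.py": "def add(a, b):\n    return a + b\n"}  # stubbed filesystem
    return files.get(path, "<file not found>")

TOOLS = {"read_file": read_file}

def fake_model(context: list) -> dict:
    """Stand-in for an LLM: first requests a tool, then answers."""
    if not any(msg["role"] == "tool" for msg in context):
        return {"type": "tool_call", "name": "read_file", "args": {"path": "app.py"}}
    return {"type": "answer", "text": "app.py defines add(a, b)."}

def agent_loop(task: str, max_steps: int = 5) -> str:
    context = [{"role": "user", "content": task}]
    for _ in range(max_steps):
        reply = fake_model(context)
        if reply["type"] == "answer":
            return reply["text"]
        # Execute the requested tool and feed the result back into the
        # context, so the model builds the context it needs instead of
        # extrapolating from missing information.
        result = TOOLS[reply["name"]](**reply["args"])
        context.append({"role": "tool", "content": result})
    return "<step limit reached>"
```

The key point is that the model, not the user, decides what goes into the context: each iteration it either asks for more information via a tool or commits to an answer.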
I don't think it is intentional. Sociopaths are unable to simulate another person's emotional state in their mind, so they have trouble predicting emotions, even their own future emotional states. All they can do is project their own emotions onto others.
That site is malicious:
https://news.ycombinator.com/item?id=46624740