Is It Just Me?


One thing I don't get about the fear of AI is how, the moment something adds AI, it's suddenly a privacy nightmare. Yes, in some cases it does make things worse, but in most cases, what was stopping the company from taking your data anyway? LLMs are just algorithms that process data and output something; they don't inherently give firms any additional data. In some cases that does mean data that previously wasn't (or shouldn't be) sent to a server is now being sent, but I keep seeing privacy complaints where I don't understand why AI is the tipping point. If you don't trust the company not to store your data when using AI, why trust it in the first place?
It's more about them feeding it into an LLM, which might then incorporate it into an answer to some random person.
Yeah, but LLMs don't train on data automatically; that takes a separate, dedicated process, and it won't happen just from using them. In that sense, companies can still use your data to train models in the background even if you aren't directly using an LLM, or they can choose not to train on it even when you are. I guess when you are using an LLM there's a bigger incentive for them to train on your data than otherwise, but privacy-wise it seems basically the same thing to me.
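To make the inference-vs-training distinction concrete, here's a minimal sketch (assuming the Hugging Face transformers and PyTorch APIs and the public "gpt2" checkpoint, all chosen purely for illustration, not any particular vendor's setup): a plain generation call never touches the weights, while training is a separate loop someone has to deliberately run.

```python
# Minimal sketch, assuming Hugging Face transformers + PyTorch and the public
# "gpt2" checkpoint purely for illustration.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")

# 1) Inference: a forward pass with no gradients and no weight updates.
#    The prompt is processed and then discarded as far as the model itself is concerned.
prompt = tokenizer("some user data sent as a prompt", return_tensors="pt")
with torch.no_grad():
    out = model.generate(**prompt, max_new_tokens=20)
print(tokenizer.decode(out[0], skip_special_tokens=True))

# 2) Training: a separate, deliberate loop the operator has to run.
#    Only this path can bake data into the model's weights.
optimizer = torch.optim.AdamW(model.parameters(), lr=5e-5)
batch = tokenizer("some user data the company chose to keep for training", return_tensors="pt")
loss = model(**batch, labels=batch["input_ids"]).loss
loss.backward()
optimizer.step()
```

In other words, whether your prompt ever ends up in the second path is a policy decision by whoever runs the model, not a side effect of the first.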
If they're exposing their LLM to the public, there's a higher chance of it leaking training data to the public. You don't know what they trained it with, but there's a chance it's customer data. Sure, they may not train on anything, but why assume they don't? An internal LLM is of lesser concern, because it would probably only show employees data they already have access to.
> If you don't trust the company not to store your data when using AI, why trust it in the first place?
Policies, procedures, and common sense: three things AI is most assuredly not known for respecting. (Not that the whole topic of data privacy isn't a huge issue outside of AI.)