Privacy folks – what's your take on using LLMs at work?
Hey everyone! 👋 I’m building a product called Privacy AI, and I’m trying to learn how people think about data privacy when using AI tools at work — especially in industries like finance, healthcare, or anywhere with sensitive data.

If you:

- Use tools like ChatGPT, Claude, Gemini, etc. for work
- Work in privacy, infosec, compliance, or deal with sensitive data

…I’d love to hear how you’re handling that today. No pitch, no selling — just looking to learn from real experiences. If you’re open to a quick 20-min chat, drop a comment or shoot me a DM. Really appreciate it!
Run your own locally, but beware of copyright violations. My company uses some form of Copilot that is supposedly guaranteed to be copyright-safe. Personally, I think it's very bad policy to make your skills dependent on something proprietary, which is exactly what these LLM providers are trying to do to make back their billions in investment.
This is one of those things I just gave up on, like fully "degoogling". I ran my own ollama server on my machine for a while, for code completion, but it's just garbage without GPU accel, so I just use ChatGPT and pretend I believe their privacy settings.
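For anyone who hasn't tried it: querying a local Ollama server is just an HTTP call to localhost, so nothing leaves the machine. A minimal sketch in Python, assuming Ollama is running on its default port (11434) and a code model like codellama has already been pulled:

    import requests  # plain HTTP client; the request never leaves localhost

    def complete(prompt: str, model: str = "codellama") -> str:
        """Ask a locally hosted Ollama model for a completion."""
        resp = requests.post(
            "http://localhost:11434/api/generate",
            json={"model": model, "prompt": prompt, "stream": False},
            timeout=120,
        )
        resp.raise_for_status()
        return resp.json()["response"]

    print(complete("Write a Python function that reverses a string."))

Without GPU accel this is exactly where the latency pain shows up, especially for inline code completion where you need sub-second responses.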
For what it’s worth, the last company I’d trust for privacy is one that calls itself Privacy AI.
By using a privacy-preserving local LLM like Mozilla's llamafile.
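A llamafile bundles the model weights and server into a single executable, and the built-in server speaks an OpenAI-compatible protocol on localhost. A rough sketch, assuming a llamafile is already running in server mode on its default port (8080); the exact endpoint behavior may vary by llamafile version:

    import requests  # talks only to the local llamafile server, never the cloud

    resp = requests.post(
        "http://localhost:8080/v1/chat/completions",
        json={
            "model": "local",  # placeholder; the local server generally ignores the model name
            "messages": [{"role": "user", "content": "Summarize GDPR in one sentence."}],
        },
        timeout=120,
    )
    resp.raise_for_status()
    print(resp.json()["choices"][0]["message"]["content"])

Because it's OpenAI-compatible, most existing client libraries work against it by just pointing the base URL at localhost.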
I use public LLMs the same way I'd use a search engine: to look up public information only, never proprietary data. Gemini in Google Workspace can be used for proprietary information, but of course only if Gemini suits the task.
As for coding models, my view is that they're directly violating my copyright, especially since I checked one "open training data" corpus and it confirms that my code is part of the corpus, without the license being honored.
No direct answers, just wanted to say perhaps this page might interest you as well:
https://duckduckgo.com/duckduckgo-help-pages/duckai/ai-chat-...
I do not trust an agent near client code or even PII. Multiple VS Code windows still seem to activate from time to time, or after a forced update. This is why I do not activate them on my local machine, only in a VM/container that contains nothing but the codebase under edit.
So your cloud service intends to compete with on-premise hosted internal services on privacy?
We used to call that a honeypot.
I never put any information into an LLM that I wouldn't want publicly known. I'm okay with an LLM knowing the coding projects I am working on.
I asked an LLM if it would keep my data private and it said absolutely. It wouldn't lie to me would it?
I avoid cloud-based stuff. Though I use GitHub Copilot, largely because I've had it since the early days and it's a great tool (and my employer pays for it). But otherwise, local LLM or gtfo.
Honestly? I'm somewhat OK with Mistral, in that it's a French company and therefore has to comply (on paper, anyway) with GDPR. And if they don't, there are serious consequences. But overall: locally. If it is not running on your hardware and it requires an internet connection to work, then the data is not yours. For the time being, building an AI rig at home is relatively cheap on eBay: if you play your cards well, it is doable for 1,000 bucks or less.
> I’m building a product called Privacy AI
Why?
> what's your take on using LLMs at work?
Don't.
>> what's your take on using LLMs at work?
> Don't.
The only privacy-conscious answer.
Do you mean you shouldn’t use LLMs at work at all? Or avoid them only if dealing with truly sensitive data (like healthcare records)?
You shouldn’t use LLMs at work at all.
More power to you but I get so much more done with them that I can’t imagine going back.
> I get so much more done with them
... poorly.
Does the existence of Google or Stack Overflow result in poor work, or is it the way the programmer utilizes them?