Employees are submitting sensitive and privacy-protected information to large language models (LLMs) like ChatGPT, raising concerns that, without proper data security measures, the data could be incorporated into the models and retrieved later.
Cyberhaven detected and blocked requests to input data into ChatGPT from 4.2% of its client companies' workers due to the risk of leaking confidential information, client data, source code, or regulated information to the LLM.