Employees Are Feeding Sensitive Business Data to ChatGPT

Employees are submitting sensitive business data and privacy-protected information to large language models (LLMs) like ChatGPT, raising concerns that, absent proper data-security controls, the data could be incorporated into the models and surfaced later.

Security firm Cyberhaven detected and blocked attempts to paste data into ChatGPT from 4.2% of workers at its client companies, citing the risk of leaking confidential information, client data, source code, or regulated information to the LLM.
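Cyberhaven has not disclosed its detection logic, but controls like this are conceptually a data-loss-prevention (DLP) filter that inspects outbound prompts before they reach the LLM. The following is a minimal sketch in Python, assuming hypothetical pattern and helper names (`SENSITIVE_PATTERNS`, `check_prompt`, `submit_to_llm`); real products rely on far richer detection (classifiers, document fingerprinting, exact-match banks) than a handful of regexes.

```python
import re

# Hypothetical DLP-style patterns, for illustration only.
SENSITIVE_PATTERNS = {
    "us_ssn": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "credit_card": re.compile(r"\b(?:\d[ -]?){13,16}\b"),
    "aws_access_key": re.compile(r"\bAKIA[0-9A-Z]{16}\b"),
    "private_key_header": re.compile(r"-----BEGIN (?:RSA )?PRIVATE KEY-----"),
}

def check_prompt(prompt: str) -> list[str]:
    """Return the names of any sensitive-data patterns found in the prompt."""
    return [name for name, pattern in SENSITIVE_PATTERNS.items()
            if pattern.search(prompt)]

def submit_to_llm(prompt: str) -> None:
    """Block the request at the corporate boundary if anything matches."""
    findings = check_prompt(prompt)
    if findings:
        raise PermissionError(f"Blocked: prompt matches {findings}")
    print("Prompt allowed; forwarding to the LLM...")

if __name__ == "__main__":
    submit_to_llm("Summarize this memo about Q3 planning.")  # allowed
    try:
        submit_to_llm("Customer SSN is 123-45-6789, please draft a letter.")
    except PermissionError as err:
        print(err)  # blocked before leaving the network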

  • Companies and security professionals worry that sensitive data ingested into the models as training data could resurface in response to the right queries, and are taking action to restrict usage.
  • Training-data extraction attacks are a key adversarial concern among machine-learning researchers, since they could be used to recover sensitive information or steal intellectual property (a simplified probe of this kind is sketched after this list).
  • Adoption of generative AI apps will only accelerate, with uses ranging from generating memos and presentations to triaging security incidents and interacting with patients.
  • Education could have a big impact on preventing leaks at a given company, since fewer than 1% of workers are responsible for 80% of the incidents of sending sensitive data to ChatGPT.
  • OpenAI and other companies are working to limit LLMs' access to personal information and sensitive data.
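
To make the extraction-attack concern above concrete, the sketch below shows the basic probing loop: feed the model candidate prefixes and flag continuations that reproduce training text verbatim. Everything here is illustrative; `query_model` is a planted stand-in for a real LLM API (an assumption for this example), and real attacks use stronger membership signals such as perplexity comparisons.

```python
# Minimal sketch of a training-data extraction probe. The "memorized"
# secret is planted so the example runs end to end without a real model.

MEMORIZED = "Employee API token: sk-test-0000-EXAMPLE"

def query_model(prefix: str) -> str:
    """Stand-in for an LLM completion call (assumption for this sketch).

    Simulates a model that memorized one training record: if the prefix
    matches the start of that record, it 'completes' the rest verbatim.
    """
    if MEMORIZED.startswith(prefix):
        return MEMORIZED[len(prefix):]
    return " (generic, non-memorized continuation)"

def probe(prefixes: list[str], canary_markers: list[str]) -> list[str]:
    """Flag continuations that look like verbatim training data
    (here: ones containing a known marker string)."""
    hits = []
    for prefix in prefixes:
        continuation = query_model(prefix)
        if any(marker in continuation for marker in canary_markers):
            hits.append(prefix + continuation)
    return hits

if __name__ == "__main__":
    candidates = ["Employee API token:", "The quarterly revenue was"]
    for leaked in probe(candidates, canary_markers=["sk-test-"]):
        print("Possible memorized training data:", leaked)
```

Defenses like the access limits mentioned in the last bullet aim to make exactly this kind of probe unproductive.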

