A recent paper shows that it is possible to extract training data from ChatGPT, a language model, despite its efforts to prevent data extraction. The researchers were able to recover a significant amount of data, including text and code, by exploiting vulnerabilities in the model.
















