Generative AI, and in particular Large Language Models (LLMs) such as GPT, has transformed digital content creation. The security implications of this technology, however, are often overlooked, exposing organizations to significant risk. The open-source ecosystem around LLMs is still immature from a security standpoint, leaving models and the pipelines that serve them exposed to threats such as prompt injection, training-data poisoning, and supply-chain attacks on model dependencies. It is therefore crucial to build security standards and practices into the development and maintenance of LLM-based systems to ensure their responsible and secure use.