Large Number of LLM Servers Reveals Sensitive Corporate, Health, Online Data
LLM automation tools and vector databases can be rife with sensitive data and vulnerable to theft
Hundreds of open source large language model (LLM) builder servers and dozens of vector databases are leaking highly sensitive information to the open web.
As companies rush to integrate AI into their business workflows, they sometimes pay insufficient attention to securing these tools and the information entrusted to them. In a new report, Legit Security researcher Naphtali Deutsch demonstrated as much by scanning the web for two kinds of potentially vulnerable open source software (OSS) AI services: vector databases, which store data for AI tools, and LLM application builders, specifically the open source program Flowise. The investigation unearthed a trove of sensitive personal and corporate data, unknowingly exposed by organizations stumbling to get in on the generative AI revolution.
"A lot of programmers see these tools on the internet, then try to set them up in their environment," Deutsch says, but those same programmers are leaving security considerations behind in the process.
Read the full story on AI Business' sister publication Dark Reading >>>