Skip to content
AI

Lakera Debuts with a Mission to Shield Large Language Models from Harmful Prompts

Swiss startup, Lakera, raises $10M to launch an API focused on protecting generative AI from prompt injections and ensuring data security in LLMs.

Generative AI has evolved into a powerful tool, transforming simple prompts into insightful content. But this power is not without risks. Malicious actors can leverage "prompt injections" to exploit vulnerabilities, gaining unauthorized access to systems and dodging stringent security protocols.

Enter Lakera, a Swiss startup aiming to shield enterprises from these vulnerabilities, including prompt injections and inadvertent data leaks. Supported by a $10 million funding round, Lakera is poised to strengthen the AI landscape's security fabric.

Playing Games with AI: Gandalf’s Role

Lakera's approach is both innovative and engaging. They introduced Gandalf, an interactive game inviting users to challenge an underlying LLM, seeking its concealed password. While appearing as mere entertainment, the game, powered by notable LLMs such as OpenAI’s GPT3.5, serves a dual purpose. It aids in highlighting LLM vulnerabilities and contributes vital insights to Lakera's primary product, Lakera Guard.

Through Gandalf, Lakera has amassed data from 30 million interactions, facilitating the creation of a "prompt injection taxonomy." This taxonomy categorizes potential attacks into ten distinct types, allowing clients to benchmark their prompts against these known threats.

Beyond Just Security: Safeguarding Content and Accuracy

Lakera's ambit extends beyond mere prompt injections. Their focus also encompasses the prevention of unintentional data spills into the public domain, content moderation for child safety, and combatting generative AI-generated misinformation.

A Timely Launch: The EU AI Act

Lakera's debut is perfectly timed, aligning with the impending EU AI Act. This legislation, particularly Article 28b, intends to fortify generative AI models by imposing specific mandates on LLM providers. Notably, Lakera's co-founders have contributed to the Act, offering technical insights to strengthen the proposed regulations.

Filling the Security Gap

Generative AI's rapid adoption has been tempered by enterprises' security apprehensions. Lakera seeks to bridge this gap, providing assurance and tools to mitigate potential risks, making the AI landscape safer for all stakeholders.

Based in Zurich, Lakera's client portfolio is expanding. Cohere, a recognized LLM developer with a $2 billion valuation, stands out as one of its prominent clients. Armed with a $10 million funding cushion, Lakera is poised to shape the future of generative AI security.

Backing Lakera's endeavor are investors including Swiss VC Redalpine, Fly Ventures, Inovia Capital, and various angel investors.

Latest