About Us:
Zenity is the first and only holistic platform built to secure and govern AI Agents from buildtime to runtime. We help organizations defend against security threats, meet compliance requirements, and drive business productivity. Trusted by many of the world’s Fortune 500 companies, Zenity provides centralized visibility, vulnerability assessments, and governance by continuously scanning business-led development environments. We recently raised $38 million in Series B funding, solidifying our position as a leader in the industry and enabling us to accelerate our mission of securing AI Agents everywhere.
About The Role:
This is a research-first role focused on deeply understanding LLM internals to improve the security of AI agents. You’ll design careful experiments on activations and interpretable features (e.g., probing, attribution, ablation/patching, and representation-geometry analyses) to uncover the mechanisms behind jailbreaks, indirect prompt injection, and other attacks, and then translate those insights into signals that can be used to detect and analyze model responses.
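To make one of these experiment types concrete, below is a minimal, hypothetical sketch of activation probing: given cached residual-stream activations labeled benign vs. prompt-injected, fit a simple linear probe per layer and see which layers linearly separate the two classes. The file names, array shapes, and labels are illustrative assumptions, not part of the role description.

```python
# Illustrative linear-probe sketch on cached LLM activations (hypothetical data/paths).
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

# Assumption: activations.npy holds residual-stream activations with shape
# (n_prompts, n_layers, d_model); labels.npy holds 0 = benign, 1 = injected.
acts = np.load("activations.npy")
labels = np.load("labels.npy")

n_prompts, n_layers, d_model = acts.shape

for layer in range(n_layers):
    X = acts[:, layer, :]  # activations at this layer, shape (n_prompts, d_model)
    X_tr, X_te, y_tr, y_te = train_test_split(
        X, labels, test_size=0.25, random_state=0, stratify=labels
    )
    probe = LogisticRegression(max_iter=2000)
    probe.fit(X_tr, y_tr)
    # Held-out accuracy indicates how linearly separable the attack signal is at this layer.
    print(f"layer {layer:2d}: held-out probe accuracy = {probe.score(X_te, y_te):.3f}")
```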
The field of LLM interpretability at scale is exploding, with several major publications in recent months and significant opportunities for innovation.
What You’ll Do