Job Description
Join an early-stage startup that’s building at the frontier of AI and cybersecurity.
They’re on a mission to design autonomous security systems that detect, triage, and remediate software vulnerabilities without human intervention.
This is a greenfield opportunity to help shape the foundational logic of the product’s vulnerability discovery engine. You’d work shoulder-to-shoulder with a world-class AI team to push the boundaries of what "self-healing software" can look like.
Key Responsibilities
- AI-Driven Security Research: Partner with AI engineers to design and build intelligent systems that autonomously detect code vulnerabilities, leveraging large language models (LLMs), industry-leading security tools, supervised fine-tuning, and reinforcement learning to uncover novel security flaws.
- Vulnerability Triage and Analysis: Investigate and validate identified vulnerabilities to assess severity, exploitability, and potential impact.
- Responsible Disclosure: Work with impacted organizations to report vulnerabilities responsibly, adhering to established timelines and industry disclosure standards.
Required Qualifications
- Proven expertise in cybersecurity research and vulnerability analysis.
- Proficiency with static and dynamic code analysis techniques and tools.
- Hands-on experience in penetration testing or red teaming.
- Familiarity with machine learning principles and their security applications.
- Strong communication skills for documenting findings and engaging stakeholders.
- Practical experience managing responsible disclosure workflows.
Preferred Qualifications
- Programming skills in Python, C/C++, or similar languages.
- Active participation in bug bounty programs or security challenges.
- Track record of published security research, CVE contributions, or speaking at security conferences.
- Deep understanding of secure coding practices across various programming languages.