Staff Research Scientist, Applied Machine Learning Security (Agent Systems)

Apple onsite • Cupertino • Full-time
At Apple, we believe privacy is a fundamental human right. Our Security Engineering & Architecture (SEAR) organization is at the forefront of protecting billions of users worldwide, building security into every product, service, and experience we create.

The SEAR ML Security Engineering team combines cutting-edge machine learning with world-class security engineering to defend against evolving threats at unprecedented scale. We're responsible for developing intelligent security systems for Apple Intelligence that protect Apple's ecosystem while preserving the privacy our users expect and deserve.

We're seeking a staff-level ML Security Research Scientist who operates at the intersection of applied research and production impact. You'll lead original security research on agentic ML systems deployed at scale: driving secure agentic design directly into shipping products, identifying real vulnerabilities in tool-using models, and designing adversarial evaluations that reflect actual attacker behavior. You'll work at the boundary between research, platform engineering, and product security, translating findings into architectural decisions, launch requirements, and long-term hardening strategies that protect billions of users. Your impact will be measured by risk reduction in production systems that ship.