Imagine being at the forefront of an evolution where modern AI meets the elegance of Apple silicon. The On-Device Machine Learning team transforms groundbreaking research into practical applications, enabling billions of Apple devices to run powerful AI models locally, privately, and efficiently. We stand at the unique intersection of research, software engineering, hardware engineering, and product development, making Apple the leading destination for machine learning innovation.
Our team builds the essential infrastructure that enables machine learning at scale on Apple devices. This involves onboarding powerful architectures to embedded systems, developing optimization toolkits for model compression and acceleration, building ML compilers and runtimes for efficient execution, and creating comprehensive benchmarking and debugging toolchains. This infrastructure forms the backbone of Apple’s machine learning workflows across Camera, Siri, Health, Vision, and other core experiences, contributing to the overall Apple Intelligence ecosystem.
If you are passionate about the technical challenges of running sophisticated ML models on everything from resource-constrained devices to powerful clusters, and eager to directly impact how machine learning operates across the Apple ecosystem, this role offers a great opportunity to work on the next generation of intelligent experiences on Apple platforms.
We are seeking an experienced ML Infrastructure Engineer to build a best-in-class execution engine and compilation toolchain. This toolchain leverages our compiler infrastructure and an exceptionally efficient, portable, and extensible runtime, and is capable of optimizing and driving ML models efficiently on Apple products and services, current and future.
On-device ML Infrastructure Engineer (Compiler & Runtime)
Apple • Onsite • Cupertino • Full-time

