Multimodal LLMs Research Engineer

Apple • Sunnyvale • Onsite • Full-time
We are seeking exceptional individuals who thrive in collaborative environments and are driven to push the boundaries of what is achievable with multimodal inputs and large language models. Our centralized applied research and engineering group develops cutting-edge Computer Vision and Machine Perception technologies across Apple products, balancing advanced research with product delivery to ensure Apple quality and pioneering experiences.

A successful candidate will possess deep expertise and hands-on experience across the full lifecycle of Multimodal LLM development, from early ideation and data definition through model training and fine-tuning.