Data is a foundational layer at Luma that unlocks advanced capabilities in our foundation models. We tackle core data questions: how different modalities can be combined to enable new behaviors, and the open-ended challenge of what makes multimodal AI systems truly powerful and versatile.
Responsibilities
Identify capability gaps and research solutions
Design datasets and data-mixture ablations to systematically improve model capabilities across vision, audio, and language (see the sketch after this list)
Develop evaluation frameworks and benchmarking approaches for multimodal AI capabilities
Create prototypes and demonstrations that showcase new multimodal capabilities
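To make the data-mixture ablation responsibility concrete, here is a minimal, hypothetical sketch of how such a sweep might be structured in Python. The source names, weight grid, and the eval_model stand-in are illustrative assumptions, not Luma's actual pipeline.

```python
import itertools
import random

# Hypothetical modality sources; names and contents are illustrative only.
SOURCES = {
    "video_text": ["clip_0", "clip_1", "clip_2"],
    "audio_text": ["utt_0", "utt_1"],
    "image_text": ["img_0", "img_1", "img_2", "img_3"],
}

def sample_mixture(weights, n, seed=0):
    """Draw a training mixture of n examples with the given per-source weights."""
    rng = random.Random(seed)
    names = list(SOURCES)
    return [rng.choice(SOURCES[rng.choices(names, weights=weights)[0]]) for _ in range(n)]

def eval_model(mixture):
    """Stand-in for train-then-benchmark; here it just scores mixture diversity."""
    return len(set(mixture)) / len(mixture)

# Ablate a small grid of per-source sampling weights and keep the best score.
grid = list(itertools.product([1, 2, 4], repeat=3))
results = {w: eval_model(sample_mixture(w, n=64)) for w in grid}
best = max(results, key=results.get)
print(f"best (video, audio, image) weights: {best} -> score {results[best]:.3f}")
```

The point of the sketch is the shape of the workflow: fix the seed, vary only the mixture weights, and score every mixture with the same evaluation, so that differences in the result are attributable to the data mix alone.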
Qualifications
Strong programming skills in Python and PyTorch
Experience with large-scale datasets
Experience with multimodal data processing pipelines
Understanding of computer vision, audio processing, and/or natural language processing techniques
(Preferred) Expertise working with interleaved multimodal data (see the sketch after this list)
(Preferred) Hands-on experience with Vision Language Models, Audio Language Models, or generative video models
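As a hypothetical illustration of what working with interleaved multimodal data can involve, the PyTorch sketch below flattens a text/image-interleaved document into a single token stream with image placeholders; the segment schema, the image_token_id value, and the dummy embeddings are assumptions for illustration only.

```python
import torch

# Hypothetical interleaved document: an ordered list of (modality, payload) segments.
# Text payloads are token-id tensors; image payloads are dummy embedding vectors.
doc = [
    ("text", torch.tensor([101, 2023, 2003])),
    ("image", torch.randn(4)),  # stand-in for a precomputed image embedding
    ("text", torch.tensor([1996, 102])),
]

def flatten_interleaved(doc, image_token_id=32000):
    """Flatten segments into one id sequence, splicing a placeholder id per image."""
    ids, images = [], []
    for modality, payload in doc:
        if modality == "text":
            ids.append(payload)
        else:
            ids.append(torch.tensor([image_token_id]))  # marks where the image goes
            images.append(payload)
    return torch.cat(ids), images

ids, images = flatten_interleaved(doc)
print(ids)          # token stream with an image placeholder spliced in
print(len(images))  # image embeddings to inject at the placeholder positions
```

A model's embedding layer would then replace each placeholder position with the corresponding image embedding, which is one common way interleaved vision-language inputs are wired up.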