About Trace Machina:
Trace Machina is revolutionizing the software development lifecycle with NativeLink, a high-performance build caching and remote execution system. NativeLink accelerates software compilation and testing while reducing infrastructure costs, allowing organizations to optimize their build workflows. We work with clients of all sizes to help them scale and streamline their build systems.
As part of our growth, we are looking for a talented and innovative AI Safety Researcher to join our team. In this role, you will research and ensure the safety, robustness, and ethical integrity of AI-driven systems, with a focus on improving the reliability of automated build and testing processes. You will be at the forefront of making sure our systems are secure, fair, and capable of performing in complex environments.
Job Description:
As an AI Safety Researcher at Trace Machina, you will contribute to the AI-powered tools and systems behind NativeLink's build caching and remote execution platform. You will design safe, reliable, and interpretable machine learning models for optimizing build processes while mitigating risks related to automation and AI in the development lifecycle. You will collaborate closely with engineers and product teams to ensure that safety is prioritized throughout the development and deployment of AI-based solutions.
Job Responsibilities:
Conduct research into AI safety, focusing on robustness, fairness, and interpretability of machine learning models used in build systems
Develop algorithms and frameworks that ensure the safe deployment of AI-powered automation in software build, testing, and CI/CD workflows
Work closely with engineering teams to integrate AI safety mechanisms and ensure robust error handling and fault tolerance
Investigate and mitigate risks associated with AI-driven decision-making in distributed build systems, especially in mission-critical operations
Contribute to the development of safety-critical AI models for optimizing performance, caching accuracy, and task coordination across various customer environments
Conduct studies on the ethical implications of AI in software development, ensuring that algorithms used in NativeLink align with responsible AI principles
Perform in-depth testing, model validation, and risk assessment to ensure AI systems meet reliability and safety standards
Collaborate with product managers and engineers to translate research findings into practical tools and features for our customers
Required Skills and Experience:
3+ years of experience in AI/ML research, with a focus on safety, robustness, and interpretability
Strong background in machine learning theory, with practical experience implementing models and algorithms
Expertise in AI safety frameworks, fault tolerance, and risk mitigation strategies for AI systems
Experience with reinforcement learning, adversarial training, and robustness testing of AI models
Proficiency in programming languages such as Python, C++, or Go, with hands-on experience in AI development libraries (e.g., TensorFlow, PyTorch)
Strong understanding of AI ethics, fairness, and the impact of machine learning algorithms in real-world applications
Ability to identify potential safety risks in AI-driven systems and design solutions to address them
Familiarity with distributed systems, cloud infrastructure, and build/test automation frameworks
Excellent problem-solving skills, with the ability to work independently and collaboratively in a fast-paced environment
Nice to Have:
Experience with AI safety standards and best practices for building reliable AI models
Familiarity with the challenges of AI integration into large-scale software systems and CI/CD pipelines
Knowledge of adversarial machine learning techniques and safe exploration methods
Publications in AI safety, robustness, or ethics-related fields
Why Join Trace Machina?
Work at the cutting edge of AI-powered build optimization and testing tools
Contribute to the safety and reliability of AI-driven systems used by industry-leading customers
Collaborate with a dynamic, innovative team dedicated to solving complex problems
Opportunity to shape the future of AI safety in software development
Competitive salary and benefits package
Opportunities for personal and professional development
If you’re passionate about AI safety and want to help shape the future of AI-powered software development systems, we’d love to hear from you!