
FAR.AI

Research Engineer

Remote
2 Locations
80K-175K Annually
Mid level
About Us

FAR.AI is a non-profit AI research institute dedicated to ensuring advanced AI is safe and beneficial for everyone. Our mission is to facilitate breakthrough AI safety research, advance global understanding of AI risks and solutions, and foster a coordinated global response.

Since our founding in July 2022, we've grown quickly to 20+ staff, produced 30 influential academic papers, and established the leading AI safety events for research and international cooperation. Our work is recognized globally, with publications at premier venues such as NeurIPS, ICML, and ICLR, and coverage in the Financial Times, Nature News, and MIT Technology Review.

We drive practical change through red-teaming with frontier model developers and government institutes. Additionally, we help steer and grow the AI safety field by developing research roadmaps with renowned researchers such as Yoshua Bengio, running FAR.Labs (an AI safety-focused co-working space in Berkeley with 40 members), and supporting the community through targeted grants to technical researchers.

About FAR.Research

Our research team likes to move fast. We explore promising research directions in AI safety and scale up only those showing a high potential for impact. Unlike other AI safety labs that take a bet on a single research direction, FAR.AI aims to pursue a diverse portfolio of projects.

Our current focus areas include:

  • Building a science of robustness (e.g. finding vulnerabilities in superhuman Go AIs).

  • Finding more effective approaches to value alignment (e.g. training from language feedback).

  • Advancing model evaluation techniques (e.g. inverse scaling, codebook features, and learned planning); an illustrative sketch follows below.

We also put our research into practice through red-teaming engagements with frontier AI developers, and collaborations with government institutes.
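
The evaluation-techniques item above touches on inspecting what models compute internally (codebook features, learned planning). As a loose, hypothetical illustration rather than FAR.AI tooling, the PyTorch sketch below captures a hidden layer's activations with a forward hook, the basic primitive that such analyses build on; the toy model and data are placeholders.

```python
# Hypothetical sketch, not FAR.AI code: capturing a model's internal
# activations with a PyTorch forward hook, the kind of primitive that
# interpretability-style evaluations build on.
import torch
import torch.nn as nn

model = nn.Sequential(
    nn.Linear(16, 32),
    nn.ReLU(),
    nn.Linear(32, 2),
)

captured = {}

def save_activation(name):
    # Hook that stashes the layer's output for later inspection.
    def hook(module, inputs, output):
        captured[name] = output.detach()
    return hook

# Attach the hook to the hidden ReLU layer (index 1 in the Sequential).
handle = model[1].register_forward_hook(save_activation("hidden"))

x = torch.randn(8, 16)          # placeholder batch
with torch.no_grad():
    _ = model(x)

handle.remove()
hidden = captured["hidden"]
print("hidden activation shape:", tuple(hidden.shape))
print("fraction of active units:", (hidden > 0).float().mean().item())
```

In real projects, hooks like this would typically be attached to transformer layers, with the captured activations fed into probes or sparse feature dictionaries.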

Other FAR Projects

To build a flourishing field of AI safety research, we host targeted workshops and events and operate FAR.Labs, a co-working space in Berkeley. Our previous events include the International Dialogue for AI Safety, which brought together prominent scientists (including two Turing Award winners) from around the globe, culminating in a public statement calling for global action on AI safety research and governance. We also host the semiannual Alignment Workshop, where 150 researchers from academia, industry, and government learn about the latest developments in AI safety and find collaborators. For more information on FAR.AI’s activities, please visit our recent post.

About the Role

You will collaborate closely with research advisers and research scientists inside and outside of FAR.AI. As a research engineer, you will develop scalable implementations of machine learning algorithms and use them to run scientific experiments. You will be involved in the write-up of results and credited as an author in submissions to peer-reviewed venues (e.g. NeurIPS, ICLR, JMLR).
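
For a concrete, if simplified, picture of that workflow, the hypothetical PyTorch sketch below runs a small, seeded training experiment and compares results across seeds. The model, data, and hyperparameters are placeholders, not anything from a real FAR.AI project.

```python
# Hypothetical sketch, not a real FAR.AI experiment: a minimal, seeded
# PyTorch training run of the kind a research engineer might scale up.
import torch
import torch.nn as nn

def run_experiment(seed: int, lr: float = 1e-2, steps: int = 200) -> float:
    torch.manual_seed(seed)                       # reproducibility
    X = torch.randn(512, 10)                      # placeholder dataset
    y = (X.sum(dim=1) > 0).long()                 # synthetic labels
    model = nn.Sequential(nn.Linear(10, 32), nn.ReLU(), nn.Linear(32, 2))
    opt = torch.optim.Adam(model.parameters(), lr=lr)
    loss_fn = nn.CrossEntropyLoss()

    for step in range(steps):
        opt.zero_grad()
        loss = loss_fn(model(X), y)
        loss.backward()
        opt.step()

    accuracy = (model(X).argmax(dim=1) == y).float().mean().item()
    return accuracy

# Run the same experiment across seeds and report the spread, as one would
# when checking whether an effect is robust rather than a fluke.
results = [run_experiment(seed) for seed in range(3)]
print("accuracy per seed:", [round(r, 3) for r in results])
```

Real projects swap in large models, real datasets, and distributed infrastructure, but the basic shape of the work (implement, run across seeds, compare results) carries over.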

While each of our projects is unique, your role will generally offer:

  • Flexibility. You will focus on research engineering but contribute to all aspects of the research project. We expect everyone on the project to help shape the research direction, analyze experimental results, and participate in the write-up of results.

  • Variety. You will work on a project that uses a range of technical approaches to solve a problem. You will also have the opportunity to contribute to different research agendas and projects over time.

  • Collaboration. You will work regularly with collaborators from a range of academic labs and research institutions.

  • Mentorship. You will develop your research taste through regular project meetings and develop your programming style through code reviews.

  • Autonomy. You will be highly self-directed. To succeed in the role, you will likely need to spend part of your time studying machine learning and developing your high-level views on AI safety research.

About You

This role would be a good fit for someone looking to gain hands-on experience with machine learning engineering while testing their personal fit for AI safety research. We imagine interested applicants might be looking to grow an existing portfolio of machine learning research or looking to transition to AI safety research from a software engineering background.

It is essential that you:

  • Have significant software engineering experience or experience applying machine learning methods. Evidence of this may include prior work experience, open-source contributions, or academic publications.

  • Have experience with at least one object-oriented programming language (preferably Python).

  • Are results-oriented and motivated by impactful research.

It is preferable that you have experience with some of the following:

  • Common ML frameworks like PyTorch or TensorFlow.

  • Natural language processing or reinforcement learning.

  • Operating system internals and distributed systems.

  • Publications or open-source software contributions.

  • Basic linear algebra, calculus, probability, and statistics.

About the Projects

As a Research Engineer, you would lead collaborations and contribute to many projects. Examples include:

  • Scaling laws for prompt injections. Will advances in capabilities from increasing model and data scale help resolve prompt injections or “jailbreaks” in language models, or is progress in average-case performance orthogonal to worst-case robustness?

  • Robustness of advanced AI systems. Explore adversarial training, architectural improvements, and other changes to deep learning systems to improve their robustness. We are exploring this in both zero-sum board games and language models; a toy sketch of this kind of evaluation appears after this list.

  • Mechanistic interpretability for mesa-optimization. Develop techniques to identify internal planning in models to effectively audit the “goals” of models in addition to their external behavior.

  • Red-teaming of frontier models. Apply our research insights to test for vulnerabilities and limitations of frontier AI models prior to deployment.
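
As a loose, hypothetical illustration of the robustness item above (not FAR.AI code), the sketch below runs a basic PGD-style adversarial evaluation of a toy PyTorch classifier; the model, data, and attack budget are placeholders.

```python
# Hypothetical sketch, not FAR.AI code: a basic PGD-style adversarial
# evaluation of a toy classifier, to illustrate the flavor of the
# robustness work described above.
import torch
import torch.nn as nn

torch.manual_seed(0)
X = torch.randn(512, 10)                      # placeholder inputs
y = (X.sum(dim=1) > 0).long()                 # synthetic labels
model = nn.Sequential(nn.Linear(10, 32), nn.ReLU(), nn.Linear(32, 2))
loss_fn = nn.CrossEntropyLoss()

# Train the toy model briefly so the robustness comparison is meaningful.
opt = torch.optim.Adam(model.parameters(), lr=1e-2)
for _ in range(300):
    opt.zero_grad()
    loss_fn(model(X), y).backward()
    opt.step()

def pgd_attack(x, y, eps=0.5, alpha=0.1, steps=10):
    """Gradient ascent on the loss, projected back into an L-infinity ball."""
    x_adv = x.clone().detach()
    for _ in range(steps):
        x_adv.requires_grad_(True)
        loss = loss_fn(model(x_adv), y)
        (grad,) = torch.autograd.grad(loss, x_adv)
        with torch.no_grad():
            x_adv = x_adv + alpha * grad.sign()       # ascent step
            x_adv = x + (x_adv - x).clamp(-eps, eps)  # project into the ball
    return x_adv.detach()

clean_acc = (model(X).argmax(dim=1) == y).float().mean().item()
adv_acc = (model(pgd_attack(X, y)).argmax(dim=1) == y).float().mean().item()
print(f"clean accuracy: {clean_acc:.2f}, accuracy under PGD attack: {adv_acc:.2f}")
```

Comparing clean and adversarial accuracy in this way is the simplest version of the gap our robustness research aims to understand and close.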

Logistics

You will be an employee of FAR.AI, a 501(c)(3) research non-profit.

  • Location: Both remote and in-person (Berkeley, CA) are possible. We sponsor visas for in-person employees, and can also hire remotely in most countries.

  • Hours: Full-time (40 hours/week).

  • Compensation: $80,000-$175,000/year depending on experience and location. We will also pay for work-related travel and equipment expenses. We offer catered lunch and dinner at our offices in Berkeley.

  • Application process: A 72-minute programming assessment, a short screening call, two 1-hour interviews, and a 1-2 week paid work trial. If you are not available for a work trial we may be able to find alternative ways of testing your fit.

If you have any questions about the role, please do get in touch at [email protected].

Top Skills

Machine Learning
Natural Language Processing
Python
PyTorch
Reinforcement Learning
TensorFlow
