
Machinify

Sr. Data Engineer

Reposted 25 Days Ago
In-Office or Remote
2 Locations
Mid level
Transform raw data into trusted datasets; build and optimize data pipelines; integrate customer data; ensure data quality and reliability.

Machinify is the leading provider of AI-powered software products that transform healthcare claims and payment operations. Each year, the healthcare industry generates over $200B in claims mispayments, creating incredible waste, friction, and frustration for all participants: patients, providers, and especially payers. Machinify’s revolutionary AI platform has enabled the company to develop and deploy, at light speed, industry-specific products that increase the speed and accuracy of claims processing by orders of magnitude.

Why This Role Matters

As a Data Engineer, you’ll be at the heart of transforming raw external data into powerful, trusted datasets that drive payment, product, and operational decisions. You’ll work closely with product managers, data scientists, subject matter experts, engineers, and customer teams to build, scale, and refine production pipelines — ensuring data is accurate, observable, and actionable.

You’ll also play a critical role in onboarding new customers, integrating their raw data into our internal models. Your pipelines will directly power the company’s ML models, dashboards, and core product experiences. If you enjoy owning end-to-end workflows, shaping data standards, and driving impact in a fast-moving environment, this is your opportunity.

What You’ll Do
  • Design and implement robust, production-grade pipelines using Python, Spark SQL, and Airflow to process high-volume file-based datasets (CSV, Parquet, JSON).

  • Lead efforts to canonicalize raw healthcare data (837 claims, EHR, partner data, flat files) into internal models.

  • Own the full lifecycle of core pipelines — from file ingestion to validated, queryable datasets — ensuring high reliability and performance.

  • Onboard new customers by integrating their raw data into internal pipelines and canonical models; collaborate with SMEs, Account Managers, and Product to ensure successful implementation and troubleshooting.

  • Build resilient, idempotent transformation logic with data quality checks, validation layers, and observability.

  • Refactor and scale existing pipelines to meet growing data and business needs.

  • Tune Spark jobs and optimize distributed processing performance.

  • Implement schema enforcement and versioning aligned with internal data standards.

  • Collaborate deeply with Data Analysts, Data Scientists, Product Managers, Engineering, Platform, SMEs, and AMs to ensure pipelines meet evolving business needs.

  • Monitor pipeline health, participate in on-call rotations, and proactively debug and resolve production data flow issues.

  • Contribute to the evolution of our data platform — driving toward mature patterns in observability, testing, and automation.

  • Build and enhance streaming pipelines (Kafka, SQS, or similar) where needed to support near-real-time data needs.

  • Help develop and champion internal best practices around pipeline development and data modeling.

What You Bring
  • 4+ years of experience as a Data Engineer (or equivalent), building production-grade pipelines.

  • Strong expertise in Python, Spark SQL, and Airflow.

  • Experience processing large-scale file-based datasets (CSV, Parquet, JSON, etc.) in production environments.

  • Experience mapping and standardizing raw external data into canonical models.

  • Familiarity with AWS (or any cloud), including file storage and distributed compute concepts.

  • Experience onboarding new customers and integrating external customer data with non-standard formats.

  • Ability to work across teams, manage priorities, and own complex data workflows with minimal supervision.

  • Strong written and verbal communication skills — able to explain technical concepts to non-engineering partners.

  • Comfortable designing pipelines from scratch and improving existing pipelines.

  • Experience working with large-scale or messy datasets (healthcare, financial, logs, etc.).

  • Experience building or willingness to learn streaming pipelines using tools such as Kafka or SQS.

  • Bonus: Familiarity with healthcare data (837, 835, EHR, UB04, claims normalization).

🌱 Why Join Us
  • Real impact — your pipelines will directly support decision-making and claims payment outcomes from day one.

  • High visibility — partner with ML, Product, Analytics, Platform, Operations, and Customer teams on critical data initiatives.

  • Total ownership — you’ll drive the lifecycle of core datasets powering our platform.

  • Customer-facing impact — you will directly contribute to successful customer onboarding and data integration.

We're hiring across multiple levels for this role. Final level and title will be determined by experience and performance during the interview process.

Equal Employment Opportunity at Machinify

Machinify is committed to hiring talented and qualified individuals with diverse backgrounds for all of its positions. Machinify believes that the gathering and celebration of unique backgrounds, qualities, and cultures enriches the workplace. 

See our Candidate Privacy Notice at: https://www.machinify.com/candidate-privacy-notice/

Top Skills

Airflow
AWS
Kafka
Python
Spark SQL
SQS


