Data Engineer (Databricks Focus)
About the Role
We are seeking an experienced Data Engineer to join our growing data team and play a key role in modernizing our analytics platform. In 2026, we will be executing a large-scale migration and rehydration of ~500 existing PowerBI reports, including re-connecting and optimizing data sources in a new lakehouse environment.
Key Responsibilities
- Design, build, and maintain scalable data pipelines using Databricks (Delta Lake, Unity Catalog, Spark).
- Lead or significantly contribute to the migration and rehydration of approximately 500 PowerBI reports in 2026, including re-pointing and optimizing data sources.
- Implement and maintain CI/CD pipelines for data assets using Databricks Asset Bundles (DAB), GitHub Actions, and other modern DevOps practices.
- Collaborate with data analysts, BI developers, and business stakeholders to ensure data availability, performance, and reliability.
- Optimize ETL/ELT processes for performance, cost, and maintainability.
- Establish best practices for version control, testing, and deployment of notebooks, workflows, and Delta Live Tables.
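Since the responsibilities above center on Databricks Asset Bundles, a minimal `databricks.yml` bundle definition is sketched below for orientation. This is illustrative only; the bundle name, workspace hosts, and notebook path are placeholders, not details from this posting.

```yaml
# databricks.yml — illustrative minimal bundle definition (all names are placeholders)
bundle:
  name: report_rehydration_pipelines

targets:
  dev:
    mode: development
    workspace:
      host: https://adb-1111111111111111.1.azuredatabricks.net  # placeholder dev workspace
  prod:
    mode: production
    workspace:
      host: https://adb-2222222222222222.2.azuredatabricks.net  # placeholder prod workspace

resources:
  jobs:
    nightly_refresh:
      name: nightly-refresh
      tasks:
        - task_key: run_etl
          notebook_task:
            notebook_path: ../src/etl_notebook.py  # placeholder; relative to this file
          # cluster configuration omitted for brevity
```

With a definition like this, jobs, workflows, and notebooks are versioned alongside code and promoted between `dev` and `prod` targets rather than edited by hand in the workspace.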
Required Experience & Skills
- 4+ years of hands-on data engineering experience (5-7+ years of experience overall).
- Strong proficiency in Python and SQL.
- Deep experience with Databricks (workspace administration, cluster management, Delta Lake, Unity Catalog, workflows, and notebooks).
- Proven track record implementing CI/CD for data workloads (preferably using Databricks Asset Bundles and GitHub Actions).
- Solid understanding of Spark (PySpark and/or Spark SQL).
- Experience with infrastructure-as-code and modern data DevOps practices.
- Relevant certifications strongly preferred:
  - Databricks Certified Data Engineer Associate or Professional
  - Azure Data Engineer Associate (DP-203) or equivalent AWS/GCP certifications
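The CI/CD requirement above (Databricks Asset Bundles plus GitHub Actions) typically amounts to a workflow like the sketch below. This is a hedged example, not the team's actual pipeline: the workflow filename, branch, and secret names are assumptions.

```yaml
# .github/workflows/deploy-bundle.yml — illustrative sketch; secret names are placeholders
name: Deploy Databricks bundle
on:
  push:
    branches: [main]

jobs:
  deploy:
    runs-on: ubuntu-latest
    env:
      DATABRICKS_HOST: ${{ secrets.DATABRICKS_HOST }}
      DATABRICKS_TOKEN: ${{ secrets.DATABRICKS_TOKEN }}
    steps:
      - uses: actions/checkout@v4
      - uses: databricks/setup-cli@main   # installs the Databricks CLI
      - name: Validate bundle
        run: databricks bundle validate
      - name: Deploy to prod target
        run: databricks bundle deploy -t prod
```

Validating before deploying catches schema errors in `databricks.yml` in CI instead of at deploy time.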
Nice-to-Have / Bonus Skills
- Experience extracting data from SAP/HANA or S/4HANA systems (via ODP, CDS views, SDA, etc.).
- Previous large-scale PowerBI migration or re-platforming projects.
- Familiarity with Databricks SQL warehouses, Serverless, or Lakehouse Monitoring.
- Experience with dbt, Delta Live Tables, or Lakeflow.
If you have strong experience with Databricks, CI/CD, and Asset Bundles, and are excited about transforming a large PowerBI footprint into a modern Lakehouse architecture, you're a good fit.
Top Skills
Python, SQL, Databricks, Delta Lake, Unity Catalog, Spark, PySpark, Spark SQL, Databricks Asset Bundles, GitHub Actions, CI/CD, Delta Live Tables, PowerBI, dbt, Lakeflow, SAP HANA, S/4HANA, ODP, CDS views, SDA, Azure, AWS, GCP, Databricks SQL Warehouses, Serverless, Lakehouse Monitoring