
Abnormal Security

Data Platform Engineer

Posted 16 Days Ago
Remote
Hiring Remotely in USA
$123K–$145K Annually
Mid level
The Data Platform Engineer will build and maintain data ingestion frameworks, manage ETL workflows, and ensure data reliability and security across diverse sources for analytics and AI.
About the Role

You’ll build, operate, and evolve the end-to-end data platform that powers analytics, automation, and AI use cases. This is a hands-on role spanning cloud infrastructure, ingestion/ETL, and data modeling across a Medallion (bronze/silver/gold) architecture. You’ll partner directly with stakeholders to turn messy source data into trusted datasets, metrics, and data products.

Who you are
  • Pragmatic Builder: You write clear SQL/Python, ship durable systems, and leave pipelines more reliable than you found them.
  • Data-Savvy Generalist: You’re comfortable moving up and down the stack (cloud, pipelines, warehousing, and BI) and picking the right tool for the job.
  • Fundamentals-first & Customer-Centric: You apply strong data modeling principles and optimize the analyst/stakeholder experience through consistent semantics and trustworthy reporting.
  • Low-Ego, High-Ownership Teammate: You take responsibility for outcomes, seek feedback openly, and will roll up your sleeves to move work across the finish line.
  • High-Energy Communicator: You’re comfortable presenting, facilitating discussions, and getting in front of stakeholders to drive clarity and alignment.
  • Self-Starter: You unblock yourself, drive decisions, and follow through on commitments; you bring a strong work ethic and invest in continuous learning.
What you will do 
  • Ingestion & ETL: Build reusable ingestion and ETL frameworks (Python and Spark) for APIs, databases, and un/semi-structured sources; handle JSON/Parquet and evolving schemas (a minimal sketch follows this list).
  • Medallion Architecture: Own and evolve Medallion layers (bronze/silver/gold) for key domains with clear lineage, metadata, and ownership.
  • Data Modeling & Marts: Design dimensional models and gold marts for core business metrics; ensure consistent grain and definitions.
  • Analytics Enablement: Maintain semantic layers and partner on BI dashboards (Sigma or similar) so metrics are certified and self-serve.
  • Reliability & Observability: Implement tests, freshness/volume monitoring, alerting, and runbooks; perform incident response and root-cause analysis (RCA) for data issues.
  • Warehouse & Performance: Administer and tune the cloud data warehouse (Snowflake or similar): compute sizing, permissions, query performance, and cost controls.
  • Standardization & Automation: Build paved-road patterns (templates, operators, CI checks) and automate repetitive tasks to boost developer productivity.
  • AI Readiness: Prepare curated datasets for AI/ML/LLM use cases (feature sets, embeddings prep) with appropriate governance.
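To ground the ingestion and Medallion bullets above, here is a minimal, illustrative PySpark sketch of one bronze-to-silver promotion step. The source name ("orders"), its columns, and the S3 paths are hypothetical placeholders, not details from this posting; a production framework would also add schema contracts, dead-letter handling, and lineage metadata.

```python
# Hypothetical bronze -> silver promotion for one source ("orders").
# Paths, columns, and names are illustrative only.
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("orders_bronze_to_silver").getOrCreate()

# Bronze: raw JSON landed as-is; schema is inferred so new upstream fields
# don't break the read (they simply show up as extra columns).
bronze = spark.read.json("s3://example-bucket/bronze/orders/")

# Silver: typed, deduplicated, conformed records written as Parquet.
silver = (
    bronze
    .withColumn("order_ts", F.to_timestamp("order_ts"))
    .withColumn("order_date", F.to_date("order_ts"))
    .withColumn("amount", F.col("amount").cast("decimal(18,2)"))
    .filter(F.col("order_id").isNotNull())
    .dropDuplicates(["order_id"])
)

(
    silver.write
    .mode("overwrite")
    .partitionBy("order_date")
    .parquet("s3://example-bucket/silver/orders/")
)
```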
Must Haves 
  • 3–5+ years hands-on data engineering experience; strong SQL and Python; experience building data pipelines end-to-end in production.
  • Strong cloud fundamentals (AWS preferred; other major clouds acceptable): object storage, IAM concepts, logging/monitoring, and managed compute.
  • Experience building and operating production ETL pipelines with reliability basics: retries, backfills, idempotency, incremental processing patterns (e.g., SCDs, late-arriving data), and clear operational ownership (docs/runbooks); see the sketch after this list.
  • Solid understanding of Medallion / layered architecture concepts (bronze/silver/gold or equivalent) and experience working within each layer.
  • Strong data modeling fundamentals (dimensional modeling/star schema): can define grain, build facts/dimensions, and support consistent metrics.
  • Working experience in a modern cloud data warehouse (Snowflake or similar): can write performant SQL and understand core warehouse concepts.
  • Hands-on dbt experience: building and maintaining models, writing core tests (freshness/uniqueness/RI), and contributing to documentation; ability to work in an established dbt project.
  • Experience with analytics/BI tooling (Sigma, Looker, Tableau, etc.) and semantic layer concepts; ability to support stakeholders and troubleshoot issues end-to-end.
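As one way to read the reliability must-have above (retries, backfills, idempotency, incremental processing), here is an illustrative PySpark pattern for an idempotent daily incremental load: each run rebuilds exactly one date partition, so reruns and backfills replace data rather than duplicate it. The table names, paths, and run_date parameter are assumptions for the example, not details from the posting.

```python
# Illustrative idempotent incremental load: re-running a day (e.g. a backfill)
# overwrites only that day's partition instead of appending duplicates.
# Table names, paths, and the run_date argument are hypothetical.
import sys
from pyspark.sql import SparkSession, functions as F

run_date = sys.argv[1] if len(sys.argv) > 1 else "2024-01-01"  # passed by the orchestrator

spark = (
    SparkSession.builder
    .appName("fact_daily_orders_incremental")
    .config("spark.sql.sources.partitionOverwriteMode", "dynamic")
    .getOrCreate()
)

silver = spark.read.parquet("s3://example-bucket/silver/orders/")

# Process only the target day; late-arriving rows for earlier days are handled
# by re-running those dates, which stays safe because each run replaces exactly
# the partitions it produces.
daily = (
    silver
    .filter(F.col("order_date") == F.lit(run_date))
    .groupBy("order_date", "customer_id")
    .agg(
        F.sum("amount").alias("order_amount"),
        F.count(F.lit(1)).alias("order_count"),
    )
)

(
    daily.write
    .mode("overwrite")  # with dynamic mode, only the written partitions are replaced
    .partitionBy("order_date")
    .parquet("s3://example-bucket/gold/fact_daily_orders/")
)
```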
Nice to Have 
  • Snowflake administration depth: warehouse sizing and cost management, advanced performance tuning, clustering strategies, and designing RBAC models.
  • Advanced governance & security patterns: masking policies, row-level security, and least-privilege frameworks as a primary implementer/owner.
  • Strong Spark/PySpark proficiency: deep tuning/optimization and large-scale transformations.
  • dbt “platform-level” ownership: CI/CD-based deployments, environment/promotion workflows, advanced macros/packages, and leading large refactors or establishing standards from scratch.
  • Orchestration: Airflow/MWAA DAG design patterns, backfill strategies at scale, dependency management, and operational hardening (see the sketch after this list).
  • Sigma-specific depth: semantic layer/metrics layer architecture in Sigma, advanced dashboard standards, and organization-wide “certified metrics” rollout.
  • Automation / iPaaS experience: Workato (or similar) for business integrations and operational workflows.
  • Infrastructure-as-code: Terraform (or similar) for data/cloud infrastructure provisioning, environment management, and safe change rollout.
  • Data observability & lineage tooling: OpenLineage/Monte Carlo-style patterns, automated lineage hooks, anomaly detection systems.
  • Lakehouse / unstructured patterns: Parquet/Iceberg, event/data contracts, and advanced handling of semi/unstructured sources.
  • AI/ML/LLM data workflows: feature stores, embeddings/RAG prep, and privacy-aware governance.
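For the orchestration item above, this is a minimal, hypothetical Airflow 2.4+ DAG sketch showing the basics the posting calls out: per-day scheduled runs whose logical date (`ds`) drives the load, retries with a delay, and `catchup=True` so missed days can be backfilled. DAG, task, and owner names are made up.

```python
# Hypothetical Airflow 2.4+ DAG illustrating retries, catchup-driven backfills,
# and passing the logical date into a task; names are illustrative only.
from datetime import datetime, timedelta

from airflow import DAG
from airflow.operators.python import PythonOperator


def load_daily_orders(ds: str, **_) -> None:
    # 'ds' is the run's logical date (YYYY-MM-DD); a real task would trigger
    # the incremental load for exactly that day, keeping reruns idempotent.
    print(f"loading orders for {ds}")


with DAG(
    dag_id="daily_orders_pipeline",
    start_date=datetime(2024, 1, 1),
    schedule="@daily",
    catchup=True,  # allows missed days to be rerun as backfills
    default_args={
        "owner": "data-platform",
        "retries": 3,
        "retry_delay": timedelta(minutes=10),
    },
    tags=["example"],
) as dag:
    PythonOperator(
        task_id="load_daily_orders",
        python_callable=load_daily_orders,
    )
```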

#LI-EM4

At Abnormal AI, certain roles are eligible for a bonus, restricted stock units (RSUs), and benefits. Individual compensation packages are based on factors unique to each candidate, including their skills, experience, qualifications, and other job-related factors.

Base salary range:
$123,300–$145,000 USD

Abnormal AI is an equal opportunity employer. Qualified applicants will receive consideration for employment without regard to race, color, religion, sex, national origin, disability, protected veteran status, or other characteristics protected by law.

Top Skills

Airflow
AWS
AWS Glue
CI/CD
Fivetran
GitHub Actions
Python
Snowflake
Spark
SQL
Terraform
