
Versana

Data Platform Engineer

Posted 7 Days Ago
Hybrid
New York, NY
130K-160K Annually
Mid level

About Us:

Versana is an industry-backed data and technology company on a mission to transform the syndicated loan market. By digitally capturing agent banks’ data in real time, Versana provides unprecedented transparency into loan-level details and portfolio positions, bringing efficiency and velocity to the entire market. Through our platform, participants can rest assured they are accessing the loan market’s most credible source of deal information.
About You:

Versana is looking for a motivated mid-level Data Platform Engineer to join our team. This role is primarily focused on data engineering, with a secondary responsibility for reporting and dashboard delivery. Your mission is to transform operational (OLTP) data from multiple source systems into clean, analysis-ready (OLAP) datasets using our lakehouse medallion architecture. You will work closely with seasoned technology leaders and colleagues with diverse experience in a dynamic, agile environment. You’ll mentor team members through code reviews, pairing, documentation, and knowledge-sharing practices, sharing your engineering expertise while learning from their deep domain knowledge. You’ll help standardize data transformations and testing and drive CI/CD practices, all while improving performance, reliability, and observability.
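
To ground the architecture described above, here is a minimal, illustrative dbt-style SQL sketch of a bronze-to-silver medallion transformation. The source, model, and column names are hypothetical, not Versana’s actual schema:

    -- models/silver/stg_positions.sql (hypothetical model)
    -- Bronze holds raw, append-only ingests from OLTP sources; this silver
    -- model types, cleans, and deduplicates them for downstream OLAP use.
    with bronze as (
        select * from {{ source('bronze', 'positions_raw') }}
    ),
    latest as (
        select
            *,
            row_number() over (
                partition by position_id
                order by ingested_at desc
            ) as rn
        from bronze
    )
    select
        position_id,
        facility_id,
        cast(notional_amount as decimal(18, 2)) as notional_amount,
        cast(as_of_date as date) as as_of_date
    from latest
    where rn = 1  -- keep only the most recent record per position

Gold and semantic layers would then aggregate such silver models into the curated, business-facing datasets this role delivers.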

Key Responsibilities:

    Data Engineering (Primary):
    • Design, implement, and operate ELT pipelines to ingest data into the lakehouse.
    • Apply medallion architecture and semantic layering to deliver curated datasets.
    • Build and maintain durable data lakes sourced from operational systems, covering sustainable ingestion, schema evolution, storage formats, partitioning, and performance tuning.
    • Secure data at rest and in transit by enforcing governance policies, including regular access reviews and auditability.
    • Establish and enforce standards for data quality, observability/alerting, and lineage; uphold daily/hourly SLAs for critical reports and datasets (see the test sketch after this list).
    • Drive CI/CD for data pipelines, code reviews, documentation, and environment promotion; mentor and unblock teammates.
    Reporting & Analytics (Secondary):
    • Create and maintain tabular models and dataflows; optimize dataset refreshes, query performance, and usability.
    • Partner with Product to translate requirements into robust models, metrics, and dashboards; collaborate with Application Developers on data contracts and change management for upstream systems.
    • Support operational monitoring and alerting; ensure timely, trusted reporting for leadership and clients.
    • Triage and fulfill ad-hoc data requests while improving self-service patterns and documentation.
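
As a concrete, hedged illustration of the data-quality and SLA standards above: in dbt, a singular test is a SQL file under tests/ that fails when it returns any rows. The model and column names below are hypothetical, and the date arithmetic may need adjusting for the warehouse’s SQL dialect:

    -- tests/assert_positions_fresh.sql (hypothetical singular test)
    -- Fails when the curated positions model has not been refreshed
    -- within its daily SLA window.
    select max(as_of_date) as latest_load
    from {{ ref('stg_positions') }}
    having max(as_of_date) < current_date - interval '1' day

Tests like this typically run in CI before environment promotion, which is how the CI/CD responsibility above is commonly enforced.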

Must Haves:

    • B.S. or B.A. in Computer Science or a related field.
    • 5–7 years in data engineering or analytics engineering.
    • Strong SQL and practical experience with dbt for transformations and testing.
    • Hands-on experience with data modeling and performance optimization.
    • Professional coding experience with both Java and Python.
    • Experience operating data warehouses/lakehouses and building semantic layers.
    • Familiarity with CI/CD (DataOps), version control, and environment promotion.
    • Data quality, observability, and alerting experience with a focus on SLAs and stakeholder trust.
    • Experience with external client reporting, embedded analytics, and multi-tenant considerations (modeling, partitioning, access controls).
    • Demonstrated ability to mentor or coach in software engineering practices.
    • Effective communication and requirements-gathering skills.

Nice to Haves:

    • Cost/performance tuning across data warehouses/lakehouses, query engines, and BI tools; caching strategies and aggregation design.
    • Data documentation, lineage, and cataloging practices; strong habits around reproducibility and testing.
    • Report design and visualization best practices (effective layouts, clear metric definitions, consistent interactions, and performant SQL and DAX queries) for delivering executive-ready dashboards.
    • Experience with any of the following: Microsoft Fabric, Power BI, Dremio, Apache Iceberg, Apache Parquet, Datadog

Equal Opportunity Employer:
 
We are committed to providing equal employment opportunities to all employees and applicants for employment and prohibit discrimination and harassment of any type without regard to race, color, religion, age, sex, national origin, disability status, genetics, protected veteran status, sexual orientation, gender identity or expression, or any other characteristic protected by federal, state or local laws.
 
This policy applies to all terms and conditions of employment, including recruiting, hiring, placement, promotion, termination, layoff, recall, transfer, leaves of absence, compensation and training.

Top Skills

Apache Iceberg
Apache Parquet
Datadog
dbt
Dremio
Java
Microsoft Fabric
Power BI
Python
SQL
