CoreWeave is the top-rated AI cloud for high-performance GPU infrastructure across AI/ML, visual effects, rendering, and real-time inference. Our stack is engineered for speed, scale, and cost-efficiency, making it an unmatched alternative to traditional hyperscalers. At CoreWeave, infrastructure is the product.
About this role:
We're looking for a Senior Engineer to be a driving force on CoreWeave's Benchmarking & Performance team, with a singular focus on our planet-scale performance data warehouse. You will own the architecture and evolution of how we ingest, store, transform, and surface performance data across every data center in our global infrastructure, turning billions of raw events into the trusted, queryable insights that power our engineering and business decisions.
If you believe that the right storage format, the right schema, and the right query engine can turn a mountain of telemetry into a competitive advantage, this role was built for you. You will shape the data foundations that underpin industry-leading benchmark publications, internal performance SLAs, and executive-level reporting—working hand-in-hand with world-class partners and communities to ensure every number we publish is authoritative, reproducible, and actionable.
Key Responsibilities:
- Data Lake Architecture - Design and build our core performance data lake on columnar storage foundations. Select, integrate, and optimize table and file formats (Apache Iceberg, Parquet, Avro) to balance query performance, storage cost, and schema evolution. Implement hot and cold storage tiering strategies that keep recent data instantly queryable while efficiently archiving historical benchmarks at petabyte scale.
- Schema Design & Data Modeling - Define and govern schemas for performance telemetry: latency distributions, throughput metrics, GPU utilization, cost-per-token, and hardware health signals. Establish naming conventions, partitioning strategies, and lifecycle policies that keep the warehouse fast, consistent, and self-documenting as new workloads and hardware generations come online.
- Time-Series & Metrics Infrastructure - Own and extend our time-series database (TSDB) layer. Write and optimize PromQL/MetricsQL queries that power real-time dashboards, alerting, and trend analysis across thousands of GPUs and hundreds of benchmark runs. Bridge the gap between streaming metrics and batch-analytical workloads so engineers get sub-second answers and analysts get complete historical context from the same data.
- BI, Visualization & Data-Driven Processes - Build compelling, self-service BI views and dashboards (Grafana, Looker, or similar) that translate raw performance data into clear stories for engineers, product managers, and executives. Design playbooks and data-driven runbooks that tie benchmark regressions, capacity decisions, and competitive analyses directly to live data. Champion a culture where every performance claim is backed by a reproducible query and a versioned dataset.
- Query Optimization & Performance - Profile and tune query engines against columnar and time-series stores; reduce scan times, optimize join strategies, and introduce materialized views or pre-aggregations where they matter most. Benchmark the benchmarking infrastructure itself—ensuring our data platform meets its own strict P99 latency and freshness SLAs.
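The hot/cold tiering described under Data Lake Architecture comes down to an age-based routing policy. The sketch below is a minimal, illustrative version in Python; the 30-day window and tier names are assumptions for the example, not CoreWeave's actual retention policy.

```python
from datetime import datetime, timedelta, timezone

# Assumption: recent data stays "hot" for 30 days; the real policy is
# not specified in this posting.
HOT_WINDOW = timedelta(days=30)

def storage_tier(event_time: datetime, now: datetime) -> str:
    """Route a benchmark record to the 'hot' (instantly queryable) tier
    or the 'cold' (archival) tier based on its age."""
    return "hot" if now - event_time <= HOT_WINDOW else "cold"
```

In practice the router's output would select a storage backend (e.g. NVMe-backed tables vs. object storage) rather than a string label.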
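The partitioning strategies mentioned under Schema Design & Data Modeling typically show up as a deliberate key ordering in the storage layout. Here is a hedged sketch of a Hive-style partition path builder; the specific keys (metric, date, data center, GPU model) are illustrative assumptions, not CoreWeave's actual schema.

```python
from datetime import date

def partition_path(metric: str, day: date, datacenter: str, gpu_model: str) -> str:
    """Build a Hive-style partition path for a telemetry record.

    Key order is the design decision: date first so time-range queries
    prune whole directories, then site, then hardware SKU.
    """
    return f"metric={metric}/dt={day:%Y-%m-%d}/dc={datacenter}/gpu={gpu_model}"
```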
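The PromQL work described under Time-Series & Metrics Infrastructure often centers on latency quantiles, e.g. the standard idiom `histogram_quantile(0.99, sum by (le) (rate(request_latency_seconds_bucket[5m])))` (the metric name there is hypothetical). As a sketch of what that function computes, here is a simplified Python analogue that linearly interpolates within cumulative histogram buckets, as Prometheus does:

```python
import math

def histogram_quantile(q: float, buckets: list[tuple[float, float]]) -> float:
    """Estimate the q-quantile from cumulative histogram buckets, in the
    spirit of PromQL's histogram_quantile(). `buckets` is a sorted list
    of (upper_bound, cumulative_count) pairs whose last bound is +Inf.
    """
    total = buckets[-1][1]
    rank = q * total
    lo_bound, lo_count = 0.0, 0.0
    for hi_bound, hi_count in buckets:
        if hi_count >= rank:
            if math.isinf(hi_bound):
                # Rank falls in the +Inf bucket: return the last finite bound.
                return lo_bound
            width = hi_count - lo_count
            frac = (rank - lo_count) / width if width else 1.0
            return lo_bound + (hi_bound - lo_bound) * frac
        lo_bound, lo_count = hi_bound, hi_count
    return lo_bound
```

This is a teaching sketch, not the production query path; real PromQL also handles rate windows, label aggregation, and native histograms.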
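The pre-aggregations mentioned under Query Optimization & Performance trade a small, eagerly maintained table for repeated scans of raw events. A toy stand-in for a materialized view, assuming simple (day, latency) events:

```python
from collections import defaultdict

def build_daily_rollup(events):
    """Collapse raw (day, latency_ms) events into per-day (count, mean)
    pairs, so dashboards scan one row per day instead of every raw event."""
    sums = defaultdict(lambda: [0, 0.0])
    for day, latency_ms in events:
        agg = sums[day]
        agg[0] += 1          # event count
        agg[1] += latency_ms  # running total for the mean
    return {day: (n, total / n) for day, (n, total) in sums.items()}
```

Storing count and sum (rather than the mean alone) is what keeps the rollup mergeable across partitions, the same property real pre-aggregation layers rely on.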
Requirements:
- 5+ years of experience building distributed systems, data platforms, or cloud services.
- Strong coding in Python or Go (C++ a plus) and deep familiarity with networked systems and performance.
- Hands-on experience with Kubernetes at production scale, CI/CD, and observability stacks (Prometheus, Grafana, OpenTelemetry).
- Demonstrated expertise with data lake architectures, columnar databases, and modern table and file formats (Iceberg, Parquet, Avro); you understand the trade-offs among them and know when to reach for each.
- Practical experience designing and managing hot/cold storage tiers for large-scale analytical workloads.
- Strong schema design instincts—you think in partitions, sort keys, and evolution strategies, not just tables and columns.
- Working knowledge of time-series databases and fluency in PromQL or MetricsQL for building dashboards, alerts, and ad-hoc analysis.
- Experience building BI views, visualizations, and data-driven playbooks that turn raw data into organizational decision-making tools.
- Strong communicator comfortable collaborating with cross-functional teams and external partners.
Nice to Have:
- Experience with time-series databases, LSM-based storage engines, or custom data pipelines.
- Experience running MLPerf submissions or similar large-scale audited benchmarks.
- Contributions to OSS projects such as Apache Iceberg, Apache Spark, Trino, llm-d, vLLM, or PyTorch.
- Exposure to benchmarking large GPU fleets or multi-region clusters.
- Experience with CUDA kernels, NCCL/SHARP, RDMA/NUMA, or GPU interconnect topologies.
- Familiarity with data cataloging, lineage tools, or data governance frameworks.
Wondering if you’re a good fit? We believe in investing in our people, and value candidates who can bring their own diversified experiences to our teams – even if you aren't a 100% skill or experience match.
Why CoreWeave?
At CoreWeave, we work hard, have fun, and move fast! We're in an exciting stage of hyper-growth that you will not want to miss out on. We're not afraid of a little chaos, and we're constantly learning. Our team cares deeply about how we build our product and how we work together, which is represented through our core values:
- Be Curious at Your Core
- Act Like an Owner
- Empower Employees
- Deliver Best-in-Class Client Experiences
- Achieve More Together
We support and encourage an entrepreneurial outlook and independent thinking. We foster an environment that encourages collaboration and enables the development of innovative solutions to complex problems. As we get set for takeoff, the organization's growth opportunities are constantly expanding. You will be surrounded by some of the best talent in the industry, who will want to learn from you, too. Come join us!
The base salary range for this role is $162,000 to $242,000. The starting salary will be determined based on job-related knowledge, skills, experience, and market location. We strive for both market alignment and internal equity when determining compensation. In addition to base salary, our total rewards package includes a discretionary bonus, equity awards, and a comprehensive benefits program (all based on eligibility).
What We Offer
The range we’ve posted represents the typical compensation range for this role. To determine actual compensation, we review the market rate for each candidate which can include a variety of factors. These include qualifications, experience, interview performance, and location.
In addition to a competitive salary, we offer a variety of benefits to support your needs, including:
- Medical, dental, and vision insurance - 100% paid for by CoreWeave
- Company-paid Life Insurance
- Voluntary supplemental life insurance
- Short and long-term disability insurance
- Flexible Spending Account
- Health Savings Account
- Tuition Reimbursement
- Ability to Participate in Employee Stock Purchase Program (ESPP)
- Mental Wellness Benefits through Spring Health
- Family-Forming support provided by Carrot
- Paid Parental Leave
- Flexible, full-service childcare support with Kinside
- 401(k) with a generous employer match
- Flexible PTO
- Catered lunch each day in our office and data center locations
- A casual work environment
- A work culture focused on innovative disruption
Our Workplace
While we prioritize a hybrid work environment, remote work may be considered for candidates located more than 30 miles from an office, based on role requirements for specialized skill sets. New hires will be invited to attend onboarding at one of our hubs within their first month. Teams also gather quarterly to support collaboration.
California Consumer Privacy Act - California applicants only
CoreWeave is an equal opportunity employer, committed to fostering an inclusive and supportive workplace. All qualified applicants and candidates will receive consideration for employment without regard to race, color, religion, sex, disability, age, sexual orientation, gender identity, national origin, veteran status, or genetic information.
As part of this commitment and consistent with the Americans with Disabilities Act (ADA), CoreWeave will ensure that qualified applicants and candidates with disabilities are provided reasonable accommodations for the hiring process, unless such accommodation would cause an undue hardship. If reasonable accommodation is needed, please contact: [email protected].
Export Control Compliance
This position requires access to export controlled information. To conform to U.S. Government export regulations applicable to that information, applicant must either be (A) a U.S. person, defined as a (i) U.S. citizen or national, (ii) U.S. lawful permanent resident (green card holder), (iii) refugee under 8 U.S.C. § 1157, or (iv) asylee under 8 U.S.C. § 1158, (B) eligible to access the export controlled information without a required export authorization, or (C) eligible and reasonably likely to obtain the required export authorization from the applicable U.S. government agency. CoreWeave may, for legitimate business reasons, decline to pursue any export licensing process.