Location: Downtown Mountain View, CA
Employment Type: Full-time
Work Model: On-site (5 days per week)
Department: Research
Granica is building the next generation of efficient AI infrastructure.
Today’s AI systems are limited not only by model design but by the inefficiency of the data that feeds them. At enterprise scale, redundant data, inefficient representations, and poorly optimized learning pipelines create enormous cost, latency, and energy waste.
Granica’s mission is to eliminate that inefficiency.
We combine advances in information theory, machine learning, and distributed systems to design infrastructure that continuously improves how information is represented, compressed, and used by AI.
Granica’s research effort is led by Prof. Andrea Montanari (Stanford) and focuses on advancing learning systems that operate efficiently on large-scale structured and tabular data.
While much of the AI industry focuses on text and media models, Granica is building systems that learn directly from structured enterprise data—the operational data that powers the global economy.
Granica’s work sits at the intersection of learning theory, AI infrastructure, and large-scale data systems, an area that remains largely unexplored compared to modern LLM development.
The Role
The Research Product Manager sits at the intersection of research, systems engineering, and product strategy.
Your role is to help transform foundational advances in structured AI into production infrastructure and durable platform capabilities.
You will coordinate the path from research insight → experimental system → production deployment, ensuring that modeling advances translate into scalable systems and measurable economic value.
This is not a traditional product management role. It is designed for someone who can:
Understand how large AI systems are trained, deployed, and maintained
Work closely with researchers and engineers on deep technical problems
Translate modeling advances into production systems and platform strategy
You will participate in decisions about:
Which research directions should be productionized
How models are trained, evaluated, and deployed in real systems
Where infrastructure investment produces the highest leverage
How modeling advances translate into platform capabilities
You do not need to be the primary implementer, but you must be able to:
Reason about machine learning systems and large-scale data infrastructure
Challenge assumptions and propose technical alternatives
Make prioritization decisions alongside researchers and engineers
This role works best for candidates comfortable operating close to the technical core of AI systems.
What You’ll Do
Translate research into production systems
Work with Research Scientists and Applied AI Engineers to transform modeling advances into scalable systems
Define how structured AI models are trained, evaluated, and deployed
Design training and evaluation workflows operating over large structured datasets
Define model lifecycle processes including retraining cadence, monitoring, and schema evolution
Help design training infrastructure for large tabular and relational models
Define evaluation harnesses and benchmarks for structured AI systems
Work with engineering teams to optimize data pipelines, training loops, and inference systems
Identify system bottlenecks across compute, storage, and data movement
Identify where modeling improvements create economic advantage
Help define how research capabilities translate into platform features
Model infrastructure trade-offs across compute cost, training efficiency, and performance
Work with leadership to prioritize research directions with the highest long-term impact
Coordinate research priorities with engineering and product strategy
Identify which modeling advances should be productionized and scaled
Ensure the path from prototype → system → platform capability is clear and efficient
What We’re Looking For
Strong technical background in machine learning, distributed systems, or data infrastructure
Ability to engage deeply with researchers and engineers on complex technical topics
Understanding of how modern ML systems are trained, evaluated, and deployed
Familiarity with ML infrastructure, distributed training systems, or data platforms
Ability to reason about data layout, compute scheduling, model lifecycle, and system bottlenecks
Experience working with systems operating on large structured datasets
Ability to translate technical capabilities into platform features and economic value
Comfort operating in research-driven environments with ambiguous problem definitions
Strong communication skills and ability to align research, engineering, and product teams
Experience working in AI infrastructure, ML platforms, or large-scale data systems
Background in computer science, machine learning, mathematics, physics, or engineering
Familiarity with structured data systems such as Parquet, Iceberg, or Delta Lake
Experience supporting research environments such as AI labs or ML infrastructure teams
Experience helping move research prototypes into production systems
This role is not a traditional product management position.
It is not primarily focused on:
Consumer AI products
Prompt engineering or LLM application features
Roadmap coordination or delivery management
Marketing or go-to-market ownership
Instead, this role focuses on translating frontier research into production AI infrastructure and system capabilities.
Successful candidates typically have experience working close to machine learning systems, research teams, or AI infrastructure platforms.
Who Thrives In This Role
People who succeed in this role often come from backgrounds such as:
ML infrastructure engineers who transitioned into product leadership
AI platform or ML systems product managers
Research engineers working closely with ML research teams
Early engineers or technical founders in AI infrastructure startups
Technical operators from research labs translating experiments into production systems
The common thread is the ability to connect research ideas, system architecture, and economic impact.
Why This Role Matters
The world’s most valuable data is structured.
Most AI systems today are not designed to learn from it efficiently.
Granica is building the systems that close this gap.
As a Research Product Manager, you will help define how frontier research becomes durable infrastructure—shaping the systems that enable AI to learn efficiently from the data that runs the global economy.
This role offers:
Direct collaboration with frontier research teams
Ownership of how research becomes production capability
Influence over both technical direction and platform strategy
Competitive salary, meaningful equity, and substantial bonus for top performers
Flexible time off plus comprehensive health coverage for you and your family
Support for research, publication, and deep technical exploration
At Granica, you will shape the fundamental infrastructure that makes intelligence itself efficient, structured, and enduring. Join us to build the foundational data systems that power the future of enterprise AI!