
Calix

Staff Software Engineer - Cloud Platform

Posted 21 Days Ago
Remote
Hiring Remotely in USA
136K-266K
Senior level

Calix provides the cloud, software platforms, systems and services required for communications service providers to simplify their businesses, excite their subscribers and grow their value.

This is a remote-based position in the US.

We, the Cloud Platform Engineering team at Calix, are responsible for the platforms, tools, and CI/CD pipelines at Calix. Our mission is to enable Calix engineers to accelerate the delivery of world-class products while ensuring the high availability and reliability of those platforms.
We are seeking a skilled and experienced GCP Cloud Platform Engineer to join the Cloud Platform team. The ideal candidate will be responsible for managing, optimizing, and maintaining our Looker instance hosted on Google Cloud Platform (GCP). This role involves ensuring the smooth operation of Looker, supporting business intelligence (BI) initiatives, and enabling data-driven decision-making across the organization. The GCP Looker Administrator will work closely with data engineers, analysts, and business stakeholders to deliver scalable and efficient solutions.

We are looking for a GCP Cloud Platform Engineer to design, implement, and manage cloud infrastructure and data pipelines using Google Cloud Platform (GCP) services like Datastream, Dataflow, Apache Flink, Apache Spark, and Dataproc. The ideal candidate will have a strong background in DevOps practices, cloud infrastructure automation, and big data technologies. You will collaborate with data engineers, developers, and operations teams to ensure seamless deployment, monitoring, and optimization of data solutions.
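
For a concrete (and purely illustrative) flavor of this work, the sketch below provisions a small Dataproc cluster with the google-cloud-dataproc Python client. In practice the responsibilities below point to Terraform for infrastructure as code; every project, region, and cluster name here is a placeholder, not a Calix resource.

    # Hypothetical sketch: create a small Dataproc cluster via the Python client.
    # Placeholder identifiers throughout; Terraform is the IaC route named below.
    from google.cloud import dataproc_v1

    project_id = "example-project"
    region = "us-central1"

    client = dataproc_v1.ClusterControllerClient(
        client_options={"api_endpoint": f"{region}-dataproc.googleapis.com:443"}
    )

    cluster = {
        "project_id": project_id,
        "cluster_name": "example-etl-cluster",
        "config": {
            "master_config": {"num_instances": 1, "machine_type_uri": "n1-standard-4"},
            "worker_config": {"num_instances": 2, "machine_type_uri": "n1-standard-4"},
        },
    }

    # create_cluster returns a long-running operation; result() waits for completion.
    operation = client.create_cluster(
        request={"project_id": project_id, "region": region, "cluster": cluster}
    )
    print(f"Created cluster: {operation.result().cluster_name}")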

Responsibilities: 

  • Design and implement cloud infrastructure using Infrastructure as Code (IaC) tools such as Terraform.
  • Automate provisioning and management of Dataproc clusters, Dataflow jobs, and other GCP resources.
  • Build and maintain CI/CD pipelines for deploying data pipelines, streaming applications, and cloud infrastructure.
  • Integrate tools like GitLab CI/CD or Cloud Build for automated testing and deployment.
  • Deploy and manage real-time and batch data pipelines using Dataflow, Datastream, and Apache Flink (a minimal sketch follows this list).
  • Ensure seamless integration of data pipelines with other GCP services like BigQuery, Cloud Storage, and Kafka or Pub/Sub.
  • Implement monitoring and alerting solutions using Cloud Monitoring, Cloud Logging, and Prometheus.
  • Monitor performance, reliability, and cost of Dataproc clusters, Dataflow jobs, and streaming applications.
  • Optimize cloud infrastructure and data pipelines for performance, scalability, and cost-efficiency.
  • Implement security best practices for GCP resources, including IAM policies, encryption, and network security.
  • Ensure observability is an integral part of the infrastructure platforms and provides adequate visibility into their health, utilization, and cost.
  • Collaborate extensively with cross-functional teams to understand their requirements; educate them through documentation and training, and improve adoption of the platforms and tools.
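
To ground the Dataflow item above, here is a minimal, hypothetical sketch in Python (the scripting language named in the qualifications) of a streaming pipeline built with Apache Beam for the Dataflow runner: it reads messages from Pub/Sub and appends them to a BigQuery table. Every project, topic, bucket, and table name is a placeholder, not an actual Calix resource.

    # Hypothetical example: streaming Pub/Sub -> BigQuery pipeline on Dataflow.
    # All resource names below are placeholders.
    import json

    import apache_beam as beam
    from apache_beam.options.pipeline_options import PipelineOptions

    options = PipelineOptions(
        streaming=True,
        runner="DataflowRunner",                  # assumes execution on Dataflow
        project="example-project",                # placeholder project
        region="us-central1",
        temp_location="gs://example-bucket/tmp",  # placeholder bucket
    )

    with beam.Pipeline(options=options) as pipeline:
        (
            pipeline
            | "ReadEvents" >> beam.io.ReadFromPubSub(
                topic="projects/example-project/topics/events")
            | "ParseJson" >> beam.Map(lambda msg: json.loads(msg.decode("utf-8")))
            | "WriteToBigQuery" >> beam.io.WriteToBigQuery(
                "example-project:analytics.events",
                write_disposition=beam.io.BigQueryDisposition.WRITE_APPEND,
                create_disposition=beam.io.BigQueryDisposition.CREATE_NEVER,
            )
        )

The same pipeline can run locally with the DirectRunner for testing before it is promoted through a CI/CD pipeline.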

Qualifications: 

  • 7+ years of overall experience in DevOps, cloud engineering, or data engineering.
  • 3+ years of experience in DevOps, cloud engineering, or data engineering.
  • Proficiency in Google Cloud Platform (GCP) services, including Dataflow, Datastream, Dataproc, BigQuery, and Cloud Storage.
  • Strong experience with Apache Spark and Apache Flink for distributed data processing.
  • Knowledge of real-time data streaming technologies (e.g., Apache Kafka, Pub/Sub).
  • Familiarity with data orchestration tools like Apache Airflow or Cloud Composer.
  • Expertise in Infrastructure as Code (IaC) tools like Terraform or Cloud Deployment Manager.
  • Experience with CI/CD tools like Jenkins, GitLab CI/CD, or Cloud Build.
  • Knowledge of containerization and orchestration tools like Docker and Kubernetes.
  • Strong scripting skills for automation (e.g., Bash, Python).
  • Experience with monitoring tools like Cloud Monitoring, Prometheus, and Grafana (see the sketch after this list).
  • Familiarity with logging tools like Cloud Logging or ELK Stack.
  • Strong problem-solving and analytical skills.
  • Excellent communication and collaboration abilities.
  • Ability to work in a fast-paced, agile environment.
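
As one way of illustrating the monitoring experience above, the hypothetical Python sketch below uses the prometheus_client library to expose a couple of platform health metrics over HTTP for Prometheus to scrape (and Grafana to chart). The metric names, labels, and values are invented for the example; a real exporter would derive them from the GCP APIs rather than random numbers.

    # Hypothetical exporter: publishes made-up platform metrics for Prometheus.
    import random
    import time

    from prometheus_client import Counter, Gauge, start_http_server

    # Placeholder metric names and labels, for illustration only.
    cluster_nodes = Gauge(
        "dataproc_cluster_active_nodes",
        "Active worker nodes per Dataproc cluster",
        ["cluster"],
    )
    pipeline_failures = Counter(
        "dataflow_pipeline_failures_total",
        "Failed Dataflow pipeline runs",
        ["pipeline"],
    )

    def collect_once() -> None:
        # A real exporter would query the GCP APIs here; random values stand in.
        cluster_nodes.labels(cluster="etl-batch").set(random.randint(2, 10))
        if random.random() < 0.05:
            pipeline_failures.labels(pipeline="events-stream").inc()

    if __name__ == "__main__":
        start_http_server(9100)  # serves /metrics for Prometheus to scrape
        while True:
            collect_once()
            time.sleep(30)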

Compensation will vary based on geographical location (see below) within the United States. Individual pay is determined by the candidate's location of residence and multiple factors, including job-related skills, experience, and education.


Different pay ranges apply to specific locations. The average base pay range (or OTE range for sales) in the U.S. for the position is listed below.

San Francisco Bay Area Only:

156,400.00 - 265,700.00 USD Annual

All Other Locations:

136,000.00 - 231,000.00 USD Annual

Top Skills

BigQuery
Cloud SQL
Data Studio
GCP
Grafana
JavaScript
Kafka
Kubernetes
Looker
Prometheus
Pub/Sub
Python
SQL
Terraform


