Data Engineer at Clutter
Clutter is an on-demand storage and technology company based in Los Angeles that is disrupting the $50B/year self-storage and moving industries. We’ve built an end-to-end logistics and supply chain platform that enables us to offer consumers a much more convenient solution at price parity with the incumbents. We’ve raised $300M from a number of VCs, including SoftBank, Sequoia, Atomico and GV (formerly Google Ventures). We have 500+ team members and tens of thousands of customers in 7 major markets across the US with plans to be in 50+ markets, domestically and internationally, within the next 5 years!
At Clutter, we're fortunate to be providing a consumer value proposition that people love and one that makes economic sense - a true product/market fit that few startups ever find. To deliver on our promise to consumers, team members and investors, we've focused on hiring, training and retaining exceptional individuals. This means that we have a very thorough interview process and maintain high performance expectations, but we'll always be transparent with you and respectful of your time.
As a Data Engineer, your work will directly drive key product and business decisions. You will build data pipelines, reliably move data across systems, and build tools to empower our Product Analysts and Data Scientists, while working closely with our software engineering team to identify and fill existing gaps.
As a Data Engineer, you will:
- Architect and implement robust ETL pipelines that serve a diverse set of business domains
- Use data to drive growth across our business, for example by leveraging geospatial data to increase field operations efficiency and to improve storage utilization and load times in our warehouses
- Communicate data-driven findings to stakeholders to drive meaningful, actionable decisions
Core Skills We Look For:
- At least three years of data engineering experience
- Proficiency with Python, Java, Scala or similar programming languages
- Strong understanding of messaging/queuing systems or stream processing technologies such as Kafka or Kinesis
- Strong product understanding and the ability to crystallize vague requirements into deliverables with sustained business impact
Pluses include any of the following:
- Experience with big data technologies such as Spark and Hadoop
- Experience with lambda architecture and merging streaming data technologies (Kafka, SQS, Kinesis) with batch processing in a data warehouse
- Familiarity with the data science and machine learning lifecycle
- BS or MS degree in Computer Science or a related technical field