Business Intelligence Engineer
Clutter is an on-demand storage and technology company based in Los Angeles that is disrupting the $50B/year self-storage and moving industries. We’ve built an end-to-end logistics and supply chain platform that enables us to offer consumers a much more convenient solution at price parity with the incumbents. We’ve raised $300M from a number of VCs, including SoftBank, Sequoia, Atomico and GV (formerly Google Ventures). We have 500+ team members and tens of thousands of customers in 7 major markets across the US with plans to be in 50+ markets, domestically and internationally, within the next 5 years!
At Clutter, we're fortunate to be providing a consumer value proposition that people love and one that makes economic sense - a true product/market fit that few startups ever find. To deliver on our promise to consumers, team members and investors, we've focused on hiring, training and retaining exceptional individuals. This means that we have a very thorough interview process and maintain high performance expectations, but we'll always be transparent with you and respectful of your time.
As a Business Intelligence Engineer, your work will directly drive key product and business decisions. You will build data pipelines, reliably move data across systems, and build tools that empower our Analysts and Data Scientists, while working closely with our software engineering team to identify and fill gaps in our data.
As a Business Intelligence Engineer, you will:
- Architect and implement robust ETL pipelines that serve several diverse business domains
- Drive growth across our business by leveraging geospatial data to increase field operations efficiency and to improve storage utilization and load times in our warehouses
- Communicate data-driven insights to stakeholders in a manner that is meaningful and actionable
Core Skills We Look For:
- At least three years of business intelligence engineering experience
- Experience building data models, infrastructure, and ETL/ELT pipelines for reporting, analytics, and data science
- Confidence in writing complex SQL queries
- Strong understanding of query performance in MPP/cloud data warehouse solutions (Snowflake, Redshift, BigQuery)
- Business acumen in understanding BI reporting requirements
Pluses include any of the following:
- Experience programming in Python, Ruby, or Java
- Experience with workflow management tools (Airflow, Oozie, Azkaban, UC4)
- Experience with streaming data technologies (Kafka, SQS, Kinesis) and merging with batch processing in a data warehouse
- Familiarity with BI visualization tools (Tableau, Looker, Periscope)
- BS or MS degree in Computer Science or a related technical field