Principal Data Engineer at Albert
Things you're good at
- Ownership: Diving in and taking ownership of projects, then driving them to completion in a methodical, organized, independent manner, all while communicating plans and progress effectively.
- Shipping: Delivering great products that you're proud of on a regular basis.
- Architecture: Getting it done is important. Getting it done in a way that will scale is equally important.
- Collaboration: We bring the best out of each other. We're looking for people who will bring the best out of all of us.
- Communication: You're excellent at communicating technical topics concisely and practically, both verbally and in writing, in order to get buy-in from team members and move projects along effectively.
- Organization: We value keeping everything well-organized and well-documented, whether it's code or internal docs.
Responsibilities
- Build scalable ETL data pipelines, ingesting terabytes of data from internal and external sources.
- Design and maintain data storage solutions such as data lakes and data warehouses that allow for large-scale analytics processing.
- Build self-service analytics solutions for non-technical consumers such as: charting dashboards, scheduled transformations, and data scientist notebooks.
- Work closely with product engineering teams to ensure consistent data modeling across services.
Requirements
- Bachelor's degree
- 5+ years of experience building scalable data pipelines.
- Highly proficient in Python or Java.
- Experience with data warehouses such as Redshift, Snowflake, and BigQuery.
- Experience with data streaming solutions such as Kafka and Kinesis.
- Familiar with distributed data processing technologies such as Presto, Spark, and Hadoop.
- Familiar with cloud-hosted services such as AWS and GCP.
Benefits
- Competitive salary and meaningful equity
- Health, vision, and dental insurance
- Meals provided
- Monthly wellness stipend
- 401k match