Whip Media Group’s products, including Mediamorph, TV Time and TheTVDB, offer a data-driven integrated cloud solution that empowers the world’s leading entertainment organizations to efficiently acquire, distribute and monetize their content. Together, our companies track billions of consumer actions and financial transactions that accelerate innovation for buyers and sellers of content. Whip Media Group has offices in Los Angeles, New York City and London.
Whip Media Group is hiring a Head of Data Engineering - based out of our Santa Monica office - to act as the primary driver of all data engineering initiatives for the organization. You will join a dynamic, fast-paced environment and work with cross-functional teams to design and build data-driven solutions that drive the company and our clients forward.
The ideal candidate is a player/coach with significant experience leading a small group of engineers working across business intelligence, analytics, data science, and data products. You must have strong, hands-on technical expertise across technologies including Python, Kafka, Spark, and Kubernetes, as well as a proven ability to craft robust, scalable solutions for our clients.
What will you do?
- Lead large-scale projects that integrate our data and machine learning applications into both enterprise and consumer-facing products (e.g., our flagship DemandIQ integration and proprietary NN architectures)
- Act as a player/coach: hire for and lead a team of ~4 data engineers while providing thought leadership and championing best-practice solutions for the team.
- Take the lead in driving a culture of high quality, innovation, and incremental experimentation.
- Apply flexible, scalable methodologies to a broad set of problems across the Data Engineering organization.
- Provide strong thought leadership and establish processes that lead to sound implementation architecture.
- Continuously improve existing ETL and streaming systems
- Implement reliable and efficient systems spanning technologies like Spark and Kafka
What do you need?
- 6+ years working in formal engineering environments and 2+ years working on and leading large-scale data projects
- 3+ years of team management experience
- Experience with technologies like Kafka, Spark, and Kubernetes
- Knowledge of how to automate ETL with scheduling systems such as Airflow
- 3+ years of experience working in Python
- Deep understanding of technical debt in data systems
- Extensive experience designing and implementing large-scale data projects from basic requirements
- Extensive experience in maintaining key data systems over long periods of time
- BS or MS degree in Computer Science, Math, Statistics or a related technical field
- Startup experience is a plus