Tinder brings people together. With tens of millions of users, hundreds of millions of downloads, 2 billion swipes per day, hundreds of thousands of requests per second, 20 million matches per day, and a presence in every country on Earth, our reach is expansive and rapidly growing.
Tinder is looking for an engineer who's eager to design and implement the next generation of our data platform. Our growing business always has new needs for actionable information, which gives us an exciting roadmap with high-value products ready to be designed and many more waiting to be identified. This role involves working with large-scale data pipelines built on near-real-time, batch, and lambda architectures.
This role can be located in Los Angeles, CA, or Palo Alto, CA.
What You'll Do:
- Work on pipelines ingesting more than 1 GB of data per second
- Design, implement, and own software solutions for ingesting and exposing high volume data feeds
- Coordinate with engineering, analytics, and other teams to assess the cost and value of existing and potential projects
- Research and evaluate new technologies in the big data space to guide our continuous improvement
- Collaborate with cross-functional engineers across the company to tune the performance of large data applications and drive cost savings
What We're Looking For:
- 3+ years of experience working with a distributed data-processing framework such as Spark, Flink, Kafka Streams, Dask, or Hadoop
- 1+ years of experience with ETL job orchestration, preferably with a feature-rich tool such as Airflow, Argo, or Rundeck
- 2+ years of experience working with Scala, Python, Java, or C#
- 2+ years ensuring the reliability of software and the integrity of large data stores in a production environment
- Exposure to a variety of databases and NoSQL systems (Redshift, Snowflake, Druid, DynamoDB, Cassandra, Kafka, Elasticsearch, Hive, etc.)