Sr. Data Engineer, Data Streaming
Overview and Responsibilities
The Sr. Data Engineer, Data Streaming will support and enhance our Kafka- and Kinesis-based streaming data collection platforms. The ideal candidate has strong experience with Kafka (open-source or Confluent) and AWS Kinesis, including Kafka components (Topics, Producers/Consumers, KStreams, KTables, and KSQL) and Kinesis components (Data Streams, Firehose, and Data Analytics), and has worked with streaming systems that handle very high-volume, high-velocity data. The role requires a strong understanding of streaming architectures, along with experience with AWS, CI/CD tools, and infrastructure as code.
You will work as part of the Data Engineering team responsible for managing data feeds and collecting analytical streaming data from our client software. In this fast-paced environment, you will deal with high-volume, high-velocity data and draw on strong analytical, problem-solving, organizational, and prioritization skills. This is a critical role with a wide range of responsibilities, including:
- Demonstrated expertise with streaming data
- Deep level understanding of Kafka
- Ability to assume ownership and lead simultaneous projects
- Strong time-management, prioritization, and organizational skills
- Problem-solving and investigation skills
- Strong interpersonal skills, with the ability to cultivate relationships and negotiate with internal clients
- Ability to meet deadlines and partner timelines
- Strong written and oral communication skills
- Bachelor's degree or equivalent
Basic Qualifications
We believe the right individual will have the following skills and experience to be successful in this role:
- Programming languages: Java, Python, or Scala
- Kafka (Topics, Producer/Consumer, KStream, KTable, KSQL)
- AWS Kinesis (Data Streams, Firehose, and Data Analytics)
- SQL and familiarity with modern cloud databases like Snowflake
- Frameworks: Spring, Apache Flink, JUnit, Mockito
- Cloud technologies: AWS (VPC, EC2, Kubernetes/EKS, Lambda, CloudWatch, etc.)
- Deployment Tools: Jenkins, Docker, Helm, Terraform
- Monitoring tools: Grafana, Prometheus
- CI/CD tools
- Strong analytical and troubleshooting skills
- At least 7 years of hands-on experience with data streaming systems and 10 or more years of data engineering experience
- Effective, clear communication of open project risks and team dependencies