
Sand Technologies

Data Engineer

Remote
2 Locations
Mid level

ABOUT SAND

Sand Technologies is a global leader in digital transformation, empowering leading organisations and governments worldwide to achieve their digital aspirations. 

We offer a comprehensive suite of services, including enterprise AI solutions, data science, software engineering, and IoT, delivered from our centres in the Americas, Europe, and Africa. 

Our training programmes, in partnership with organisations like the Mastercard Foundation, Amazon Web Services, Holberton, and ALX, cultivate the next generation of agile digital leaders.

Through recent strategic acquisitions, Sand Technologies has further strengthened its capabilities in advanced analytics and intelligent software development, enhancing our ability to solve our clients' most pressing challenges across the telecom, utilities, healthcare, and insurance industries. 

We believe in harnessing technology to deliver real impact and value, helping organisations bridge the gap between their current reality and digital future.

ABOUT THE ROLE

Sand Technologies focuses on cutting-edge cloud-based data projects, leveraging tools such as Databricks, dbt, Docker, Python, SQL, and PySpark, to name a few. We work across a variety of data architectures, including data mesh, lakehouse, data vault, and data warehouse. Our data engineers create pipelines that support our data scientists and power our front-end applications, which means we do data-intensive work for both OLTP and OLAP use cases. Our environments are primarily cloud-native, spanning AWS, Azure, and GCP, but we also work on systems built exclusively on self-hosted open-source services. We maintain a code-first, data-as-a-product mindset at all times, where testing, reliability, and a keen eye on performance are non-negotiable.

JOB SUMMARY

A Data Engineer's primary role is to design, build, and maintain scalable data pipelines and infrastructure to support data-intensive applications and analytics solutions. They collaborate closely with data scientists, analysts, and software engineers to ensure efficient data processing, storage, and retrieval for business insights and decision-making. Their expertise in data modelling, ETL (Extract, Transform, Load) processes, and big data technologies makes it possible to develop robust and reliable data solutions.

RESPONSIBILITIES

  1. Data Pipeline Development: Design, implement, and maintain scalable data pipelines for ingesting, processing, and transforming large volumes of data from various sources, using tools such as Databricks, Python, and PySpark (see the sketch after this list).
  2. Data Modeling: Design and optimize data models and schemas for efficient storage, retrieval, and analysis of structured and unstructured data.
  3. ETL Processes: Develop and automate ETL workflows to extract data from diverse sources, transform it into usable formats, and load it into data warehouses, data lakes, or lakehouses.
  4. Big Data Technologies: Utilize big data technologies such as Spark, Kafka, and Flink for distributed data processing and analytics.
  5. Cloud Platforms: Deploy and manage data solutions on cloud platforms such as AWS, Azure, or Google Cloud Platform (GCP), leveraging cloud-native services for data storage, processing, and analytics.
  6. Data Quality and Governance: Implement data quality checks, validation processes, and data governance policies to ensure accuracy, consistency, and compliance with regulations.
  7. Monitoring, Optimization and Troubleshooting: Monitor data pipelines and infrastructure performance, identify bottlenecks and optimize for scalability, reliability, and cost-efficiency. Troubleshoot and fix data-related issues.
  8. DevOps: Build and maintain basic CI/CD pipelines, commit code to version control and deploy data solutions.
  9. Collaboration: Collaborate with cross-functional teams, including data scientists, analysts, and software engineers, to understand requirements, define data architectures, and deliver data-driven solutions.
  10. Documentation: Create and maintain technical documentation, including data architecture diagrams, ETL workflows, and system documentation, to facilitate understanding and maintainability of data solutions.
  11. Best Practices: Continuously learn and apply best practices in data engineering and cloud computing.
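
To give a flavour of the pipeline work described in items 1 and 3, here is a minimal PySpark batch-ETL sketch. The bucket path, column names, and target table are hypothetical placeholders, and the Delta format assumes a Databricks or Delta Lake environment; this illustrates the pattern rather than a prescribed implementation.

```python
# Minimal batch ETL sketch: extract raw CSVs, transform, load to a lakehouse table.
# All paths, columns, and table names below are hypothetical placeholders.
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("orders_etl").getOrCreate()

# Extract: read raw files from cloud object storage.
raw = spark.read.option("header", True).csv("s3://example-bucket/raw/orders/")

# Transform: enforce types, drop incomplete rows, derive a partition column.
clean = (
    raw.withColumn("order_ts", F.to_timestamp("order_ts"))
       .withColumn("amount", F.col("amount").cast("double"))
       .dropna(subset=["order_id", "order_ts"])
       .withColumn("order_date", F.to_date("order_ts"))
       .withColumn("_loaded_at", F.current_timestamp())
)

# Load: append to a partitioned Delta table for downstream analytics.
(clean.write.format("delta")
      .mode("append")
      .partitionBy("order_date")
      .saveAsTable("analytics.orders"))
```

In practice a job like this would also carry the data quality checks from item 6 and run under version control and CI/CD as in item 8.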

QUALIFICATIONS

  • Proven experience as a Data Engineer, or in a similar role, with hands-on experience building and optimizing data pipelines and infrastructure.
  • Proven experience working with Big Data and the tools used to process it.
  • Strong problem-solving and analytical skills with the ability to diagnose and resolve complex data-related issues.
  • Solid understanding of data engineering principles and practices.
  • Excellent communication and collaboration skills to work effectively in cross-functional teams and communicate technical concepts to non-technical stakeholders.
  • Ability to adapt to new technologies, tools, and methodologies in a dynamic and fast-paced environment.
  • Ability to write clean, scalable, robust code using Python or similar programming languages. A background in software engineering is a plus.

DESIRABLE LANGUAGES/TOOLS

  • Proficiency in programming languages such as Python, Java, Scala, or SQL for data manipulation and scripting.
  • Strong understanding of data modelling concepts and techniques, including relational and dimensional modelling.
  • Experience in big data technologies and frameworks such as Databricks, Spark, Kafka, and Flink.
  • Experience in using modern data architectures, such as lakehouse.
  • Experience with CI/CD pipelines and version control systems like Git.
  • Knowledge of ETL tools and technologies such as Apache Airflow, Informatica, or Talend.
  • Knowledge of data governance and best practices in data management.
  • Familiarity with cloud platforms and services such as AWS, Azure, or GCP for deploying and managing data solutions.
  • SQL for database management and querying.
  • Apache Spark for distributed data processing.
  • Apache Spark Streaming, Kafka, or similar for real-time data streaming (see the streaming sketch after this list).
  • Experience using data tools in at least one cloud service (AWS, Azure, or GCP), e.g. S3, EMR, Redshift, Glue, Azure Data Factory, Databricks, BigQuery, Dataflow, or Dataproc.
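
For the real-time streaming bullet above, a minimal sketch using Spark's Structured Streaming API against Kafka might look like the following. The broker address, topic name, and checkpoint path are illustrative assumptions, and the job requires the spark-sql-kafka connector package on the classpath.

```python
# Minimal streaming sketch: consume a Kafka topic with Spark Structured Streaming.
# Broker, topic, and checkpoint location are hypothetical placeholders.
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("events_stream").getOrCreate()

# Source: subscribe to a Kafka topic (requires the spark-sql-kafka package).
events = (
    spark.readStream.format("kafka")
         .option("kafka.bootstrap.servers", "broker:9092")
         .option("subscribe", "events")
         .load()
)

# Kafka delivers key/value as binary; decode the value payload to a string.
decoded = events.select(
    F.col("value").cast("string").alias("payload"),
    F.col("timestamp"),
)

# Sink: console output for demonstration; a real pipeline would write to a table.
query = (
    decoded.writeStream.format("console")
           .option("checkpointLocation", "/tmp/checkpoints/events")
           .outputMode("append")
           .start()
)
query.awaitTermination()
```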

Top Skills

Apache Airflow
AWS
Azure
Databricks
dbt
Docker
Flink
GCP
Kafka
PySpark
Python
Spark
SQL
