Job Description
This role is for engineers who:
- Own data pipelines end-to-end
- Care deeply about data correctness
- Can debug real production issues, not just monitor systems
You’ll work on high-scale, event-driven pipelines and ensure data is reliable, accurate, and trusted across the business.
Responsibilities
- Own and operate data pipelines (ingestion → transformation → serving)
- Debug and resolve data issues in production (latency, inconsistencies, failures)
- Build scalable pipelines using Spark, Airflow, dbt, Kafka/CDC
- Ensure data quality and reliability across systems
- Work closely with product and analytics teams to ensure correct business metrics
- Improve automation, monitoring, and observability of data systems
Ideal Profile
- You have at least 4 years' experience in data engineering / data platform roles
- Strong hands-on experience with SQL (advanced), Spark/PySpark, and Airflow (or similar)
- Experience building and debugging production-grade data pipelines
- Strong understanding of data modeling, ETL/ELT systems, and data quality challenges
- Ownership mindset: someone who fixes problems end-to-end
Are you interested in this position?
Apply by clicking on the “Apply Now” button below!
Apply Now