Data Engineer I

Posted: April 24, 2026
Application ends: July 23, 2026

Job Description

Job Summary

The Data Engineer I will support the design, development, and maintenance of data pipelines and transformation workflows that power acquisition reporting and marketing analytics. You will work across AWS, Databricks, Unity Catalog, Snowflake, and Airflow to help build reliable and scalable solutions for ingesting and preparing marketing platform data.

You will collaborate with senior engineers, analytics partners, and marketing stakeholders to ensure data accuracy, consistency, and timely delivery for downstream dashboards and reporting. This role involves hands-on development, troubleshooting, and contributing to the ongoing modernization of our data ecosystem.

Responsibilities and Duties of the Role

  • Assist in building and maintaining ETL/ELT pipelines for acquisition reporting using Databricks, PySpark, SQL, and Unity Catalog under the guidance of senior engineers.
  • Support the migration of existing Snowflake SQL scripts and transformations into Databricks Unity Catalog by updating queries, validating outputs, and helping implement governance best practices.
  • Contribute to developing ingestion processes for marketing vendor data, including data parsing, normalization, and quality validations (see the sketch after this list).
  • Implement and maintain foundational data quality checks, monitoring alerts, and issue triage workflows using Databricks, Snowflake, Airflow, and internal tooling.
  • Partner with the Data Reliability Engineering team to assist with SLA monitoring, simple incident troubleshooting, and logging improvements.
  • Collaborate with analytics and marketing partners to understand data requirements and ensure accuracy of datasets used in dashboards and reporting.
  • Support performance tuning, logging improvements, and general pipeline reliability work.
  • Participate in engineering best practices, including code reviews, documentation, and contributing to shared frameworks and tools.
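
For candidates curious what this work looks like in practice, the following is a minimal, illustrative PySpark sketch of the ingestion-and-validation pattern described above. It is not code from our stack; the bucket path, table name, column names, and thresholds are all hypothetical.

    from pyspark.sql import SparkSession, functions as F

    spark = SparkSession.builder.getOrCreate()

    # Ingest semi-structured vendor data landed in S3 (hypothetical path).
    raw = spark.read.json("s3://example-bucket/marketing/vendor_x/2026-04-24/")

    # Normalize: standardize column names and types for downstream reporting.
    normalized = (
        raw.withColumnRenamed("campaignId", "campaign_id")
           .withColumn("spend_usd", F.col("spend").cast("double"))
           .withColumn("event_date", F.to_date("eventDate", "yyyy-MM-dd"))
    )

    # Foundational quality checks: fail the batch on null keys or negative
    # spend rather than publishing bad data to reporting tables.
    null_keys = normalized.filter(F.col("campaign_id").isNull()).count()
    bad_spend = normalized.filter(F.col("spend_usd") < 0).count()
    if null_keys or bad_spend:
        raise ValueError(
            f"Quality check failed: {null_keys} null keys, {bad_spend} negative spend rows"
        )

    # Publish to a Unity Catalog table (three-level name; illustrative only).
    normalized.write.mode("append").saveAsTable("marketing.acquisition.vendor_x_daily")

In this role you would build and extend pipelines of this shape under the guidance of senior engineers, typically orchestrated by Airflow.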

Required Experience/Skills/Training

  • Strong proficiency in SQL (analytical SQL, complex joins, window functions; see the sketch after this list).
  • Hands-on experience with PySpark and/or Spark SQL in production.
  • Good understanding of data modeling, ETL/ELT design patterns, and distributed data processing.
  • Experience building pipelines in Databricks, including Delta Lake, Unity Catalog, data governance, and Lakehouse patterns.
  • Experience in AWS (S3, IAM, EC2, Glue, Lambda, or related services).
  • Experience with Airflow or similar orchestration tools.
  • Experience building robust ingestion pipelines and working with semi-structured and columnar file formats (JSON, CSV, Parquet).
  • Experience with Git/GitHub, CI/CD, and modern DevOps practices.
  • Excellent communication skills and ability to work with cross‑functional partners.
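
As a concrete illustration of the window-function skill called out above: deduplicating vendor rows by keeping only the most recent record per key is a routine pattern in marketing data pipelines. This is a hedged sketch, not production code; the table name and the "ingested_at" audit column are assumptions.

    from pyspark.sql import SparkSession, functions as F
    from pyspark.sql.window import Window

    spark = SparkSession.builder.getOrCreate()
    events = spark.table("marketing.acquisition.vendor_x_daily")  # illustrative name

    # Rank rows within each (campaign, day) group, newest ingestion first,
    # then keep only the top-ranked row per group.
    latest_first = Window.partitionBy("campaign_id", "event_date").orderBy(
        F.col("ingested_at").desc()  # "ingested_at" is an assumed audit column
    )
    deduped = (
        events.withColumn("rn", F.row_number().over(latest_first))
              .filter(F.col("rn") == 1)
              .drop("rn")
    )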

Required Education

  • Bachelor’s degree in Computer Science, Information Systems, Software Engineering, Advanced Mathematics, Statistics, Data Engineering, or a comparable field of study, and/or equivalent work experience.

Are you interested in this position?

Apply by clicking on the “Apply Now” button below!
