Job Description
Responsibilities:
- Data organisation and analysis: handling structured, semi-structured, unstructured, and image-based data
- Building data systems and pipelines
- Preparing data for prescriptive and predictive modelling
- Designing end-to-end pipelines within a hybrid big data architecture using cloud-native AWS services, Python, SQL, etc.
- Leveraging AWS Cloud services for building data pipelines and ensuring effective monitoring
- Collaborating with data scientists and architects across various projects
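At its simplest, the end-to-end pipeline work described above follows an extract-transform-load sequence. The sketch below illustrates that shape in plain Python; the payload, field names, and in-memory "warehouse" are hypothetical stand-ins for an S3 object and a warehouse table:

```python
import json

# Hypothetical raw payload standing in for an S3 object or database extract.
RAW = '[{"id": 1, "amount": "10.5"}, {"id": 2, "amount": "3.0"}]'

def extract(raw: str) -> list:
    """Parse the raw payload into a list of record dicts."""
    return json.loads(raw)

def transform(records: list) -> list:
    """Cast string amounts to floats so the data is model-ready."""
    return [{**r, "amount": float(r["amount"])} for r in records]

def load(records: list, sink: list) -> None:
    """Append records to the sink (a stand-in for a warehouse table)."""
    sink.extend(records)

warehouse = []
load(transform(extract(RAW)), warehouse)
```

In a production pipeline each stage would typically be a separate task in an orchestrator, with S3, Glue, or Redshift replacing the in-memory pieces.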
What we expect:
- 7+ years of Data Engineering experience with the technologies and tools listed below
- In-depth knowledge of and hands-on experience with Python (Pandas, boto3), SQL, Spark, Databricks, and DWH solutions
- Proficient use of orchestrators such as Airflow
- Expertise with AWS services essential for Data Engineering: AWS Glue, AWS Lambda, S3, RDS, Redshift, Athena, SQS, SNS, etc.
- Ability to write robust, production-quality Python code
- Deep understanding of pub-sub architecture
- Solid analytical and problem-solving skills
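The pub-sub expectation above can be illustrated without any AWS infrastructure. This is a minimal in-process sketch of the pattern, where one published message fans out to every subscriber of a topic, much as an SNS topic fans out to multiple SQS queues; the `Broker` class and topic name are illustrative, not part of any real library:

```python
from collections import defaultdict

class Broker:
    """Minimal in-memory broker illustrating the pub-sub pattern."""

    def __init__(self):
        # Maps each topic name to the callbacks subscribed to it.
        self._subscribers = defaultdict(list)

    def subscribe(self, topic, callback):
        self._subscribers[topic].append(callback)

    def publish(self, topic, message):
        # Fan the message out to every subscriber of the topic.
        for callback in self._subscribers[topic]:
            callback(message)

# Two independent consumers subscribe to the same topic.
broker = Broker()
received_a, received_b = [], []
broker.subscribe("orders", received_a.append)
broker.subscribe("orders", received_b.append)
broker.publish("orders", {"order_id": 1})
```

The key property, as in SNS/SQS, is that the publisher never addresses consumers directly; adding a consumer requires no change to the publishing side.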
Nice to have:
- Experience with Amazon DynamoDB, Aurora (including its serverless nuances), ElastiCache, Kinesis, and Neptune
- Familiarity with AWS Networking
- Familiarity with Azure
- Experience with Snowflake
Are you interested in this position?
Apply by clicking on the “Apply Now” button below!
Apply Now