Job Description
What You’ll Do:
- Design and implement efficient architectures for high-load, enterprise-scale applications and ‘big data’ pipelines on AWS.
- Lead data migration from various sources into scalable Data Lakes on AWS.
- Orchestrate and build robust ETL/ELT processes to transform and load data into target data marts.
- Implement and manage secure data access controls leveraging AWS Lake Formation.
- Architect and develop real-time data ingestion pipelines to process high-volume streams, detect anomalies, and enable windowed analytics, delivering insights to systems like Elasticsearch.
- Analyze project requirements, define scope, estimate effort, and identify the optimal technology stack and tools.
- Design and implement optimal data architectures and migration strategies on the AWS platform.
- Develop new solution modules, re-architect existing components, and refactor program code for improved performance and scalability.
- Define infrastructure requirements and collaborate with DevOps engineers on provisioning.
- Monitor and analyze data pipeline performance, recommending and implementing necessary infrastructure adjustments.
- Communicate project updates and challenges clearly to clients.
- Collaborate closely with internal and external development and analytical teams to deliver high-quality data solutions.
What You’ll Bring:
- Proven hands-on experience designing efficient architectures for high-load enterprise-scale applications or ‘big data’ pipelines.
- Deep practical experience with the AWS data toolset, including but not limited to DMS, Glue, DataBrew, EMR, and the Schema Conversion Tool (SCT).
- Demonstrated experience in implementing end-to-end big data architectures and pipelines on AWS.
- Hands-on experience with message queuing, stream processing technologies, and highly scalable ‘big data’ stores.
- Advanced knowledge and practical experience working with both SQL and NoSQL databases.
- Proven track record in re-designing and re-architecting large, complex business applications with a focus on data.
- Strong self-management and self-organizational skills, with the ability to drive tasks to completion independently.
- Experience with one or more of the following:
  - Strong proficiency in Python and PySpark, particularly for developing AWS Glue jobs.
  - Expertise with big data tools such as Kafka, Spark, and Hadoop (HDFS, YARN, Tez, Hive, HBase).
  - Experience with stream-processing systems such as Kinesis Data Streams, Spark Streaming, Kafka Streams, and Kinesis Data Analytics.
  - Solid understanding and practical experience with AWS cloud services, including EMR, RDS, MSK, Redshift, DocumentDB, and Lambda.
  - Familiarity with message queue systems such as ActiveMQ, RabbitMQ, and Amazon SQS.
  - Experience with federated identity (SSO) services such as Okta and Amazon Cognito.
Ideally, You Also Have:
- 5+ years of progressive experience in a Data, Cloud, or Software Engineer role, coupled with a degree in Computer Science, Statistics, Informatics, Information Systems, Mathematics, or a related quantitative field.
- Valid AWS certifications, such as AWS Certified Data Engineer – Associate or AWS Certified Machine Learning – Specialty.
Are you interested in this position?
Apply by clicking the “Apply Now” button below!
Apply Now