Team Lead
Infogain Corporation
- Noida, Gautam Buddha Nagar, Uttar Pradesh
- Not Disclosed
- Full-time
- Permanent
Job Summary
Team Lead position in Noida, India, requiring skills in Apache Hive, Microservices Architecture, .NET Web API (RESTful APIs), ASP.NET SignalR, TPF Assembly, ZEnterprise114, Lambda Test, Data Modeling, and ETL.
Posted on: September 09, 2022

ROLES & RESPONSIBILITIES
- 1. Project Role: AWS Glue Application Developer
- 2. Project Role Description: Design, build, and configure applications to meet business process and application requirements.
- 3. Work Experience: 4-6 years
- 4. Work Location: Offshore/On-site
- 5. Must-Have Skills: AWS, Glue, DMS, data integration, and DataOps
- Job Requirements:
- 6. Key Responsibilities: 5 years of work experience with ETL, data modelling, and data architecture. Proficient in ETL optimization, design, coding, and tuning of big data processes using PySpark. Extensive experience building data platforms on AWS using core services (Step Functions, EMR, Lambda, Glue, Athena, Redshift, Postgres, RDS, etc.) and designing/developing data engineering solutions, orchestrated with Airflow.
- 7. Technical Experience: Hands-on experience developing a data platform and its components (data lake, cloud data warehouse, APIs, batch and streaming data pipelines). Experience building data pipelines and applications to stream and process large datasets at low latency.
- 1. Enhancements, new development, defect resolution, and production support of big data ETL development using AWS-native services.
- 2. Create data pipeline architecture by designing and implementing data ingestion solutions.
- 3. Integrate data sets using AWS services such as Glue, Lambda functions, and Airflow.
- 4. Design and optimize data models on AWS Cloud using AWS data stores such as Redshift, RDS, S3, and Athena.
- 5. Author ETL processes using Python and PySpark.
- 6. Build Redshift Spectrum direct transformations and data models using data in S3.
- 7. Monitor ETL processes using CloudWatch events.
- 8. Work in collaboration with other teams; good communication is a must.
- 9. Must have experience using AWS service APIs, the AWS CLI, and SDKs.
- 8. Professional Attributes: Experience operating very large data warehouses or data lakes. Expert-level skills in writing and optimizing SQL. Extensive real-world experience designing technology components for enterprise solutions and defining solution architectures and reference architectures, with a focus on cloud technology.
- 1. Must have 4+ years of big data ETL experience using Python, S3, Lambda, DynamoDB, Athena, and Glue in an AWS environment.
- 2. Expertise in S3, RDS, Redshift, Kinesis, and EC2 clusters is highly desired.
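The responsibilities above center on authoring extract-transform-load (ETL) pipelines in Python/PySpark against AWS services. As an illustrative sketch only, the basic shape of such a pipeline might look like the following. All function names and the in-memory source/sink are hypothetical stand-ins; a real Glue job would read from S3 via PySpark DataFrames and write to Redshift or Athena-backed tables.

```python
def extract(source):
    """Pull raw records from a source (stand-in for reading S3 objects)."""
    return list(source)

def transform(records):
    """Clean and reshape records (stand-in for PySpark DataFrame transforms)."""
    out = []
    for r in records:
        if r.get("amount") is None:  # drop incomplete rows
            continue
        out.append({
            "id": r["id"],
            "amount": round(float(r["amount"]), 2),
            "currency": r.get("currency", "USD").upper(),
        })
    return out

def load(records, sink):
    """Write transformed records to a sink (stand-in for a Redshift/S3 write)."""
    sink.extend(records)
    return len(records)

raw = [
    {"id": 1, "amount": "10.50", "currency": "usd"},
    {"id": 2, "amount": None},
    {"id": 3, "amount": "7", "currency": "eur"},
]
warehouse = []
loaded = load(transform(extract(raw)), warehouse)
print(loaded)  # 2 rows survive the cleaning step
```

In a production Glue or Airflow setting, each stage would be a separate task or job step so that failures surface in CloudWatch per stage rather than in one opaque script.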
EXPERIENCE
- 8 - 11 Years
SKILLS
- Primary Skill: Data Engineering
- Sub Skill(s): Apache Hive
- Additional Skill(s): Microservices Architecture, .NET Web API (RESTful APIs), ASP.NET SignalR, TPF Assembly, ZEnterprise114, Lambda Test, Data Modeling, ETL
Experience Required :
4 to 6 Years
Vacancy :
2 - 4 Hires