Architect - Cloud Data Engineering at Apexon

Job Summary

Primary: strong program management skills with deep Snowflake and AWS experience, including managing large-scale data platform implementations involving Snowflake, SAP, and Oracle systems. Good communication and interpersonal skills are essential; this is a client-facing role, and the candidate must have a strong foothold in Snowflake and AWS. The architect helps direct the project and proactively identifies risks and the associated mitigation plans.

Responsibilities

- Prepare, manage, and supervise efficient data pipeline architectures.
- Build and deploy ETL/ELT data pipelines that begin with data ingestion and carry out various data-related tasks (a brief sketch follows this list).
- Handle and source data from different sources according to business requirements.
- Work in teams to create algorithms for data storage, data collection, data accessibility, data quality checks, and, preferably, data analytics.
- Connect with data scientists and create the infrastructure required to identify, design, and deploy internal process improvements.
- Access various data resources with tools such as SQL and Big Data technologies to build efficient ETL data pipelines; experience with tools like Snowflake is considered a bonus.
- Build solutions highlighting data quality, operational efficiency, and other features describing the data.
- Create scripts and solutions to transfer data across different spaces.
- Deploy, leverage, and continually train and improve existing machine learning models.
- Identify, design, and implement internal process improvements.
- Automate manual processes to enhance delivery.
- Meet business objectives in collaboration with data science teams and key stakeholders.
- Create reliable pipelines by combining data sources.
- Design data stores and distributed systems.
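By way of illustration, the ETL/ELT responsibility above can be made concrete with a minimal PySpark sketch: read raw JSON landed in S3, apply a simple data quality check, and write curated Parquet back to S3. Every bucket name, path, and column below is a hypothetical placeholder, not something specified by the role.

```python
# Minimal illustrative PySpark ETL sketch; bucket names, paths, and
# columns are hypothetical placeholders.
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("orders-etl").getOrCreate()

# Ingest: raw JSON landed in an S3 raw zone (hypothetical bucket/prefix).
raw = spark.read.json("s3://example-raw-bucket/orders/2024/")

# Data quality check: drop records missing a key or an amount.
clean = raw.dropna(subset=["order_id", "amount"])

# Transform: derive a date column and aggregate per day.
daily = (
    clean.withColumn("order_date", F.to_date("created_at"))
         .groupBy("order_date")
         .agg(
             F.sum("amount").alias("total_amount"),
             F.count("order_id").alias("order_count"),
         )
)

# Load: write partitioned Parquet to the curated zone.
(daily.write.mode("overwrite")
      .partitionBy("order_date")
      .parquet("s3://example-curated-bucket/orders_daily/"))

spark.stop()
```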
Requirements

- Bachelor's or master's degree in Computer Science.
- 5+ years of hands-on software engineering experience.
- Experience setting up the AWS data platform: AWS CloudFormation, development endpoints, AWS Glue, EMR with Jupyter/SageMaker notebooks, Redshift, S3, and EC2 instances.
- Experience with AWS Database Migration Service, AWS Glue, AWS Lambda, Amazon QuickSight, AWS Glue DataBrew, AWS CDK, AWS SAM, and AWS Developer Tools.
- Experience designing, developing, optimizing, and troubleshooting complex data pipelines.
- Experience migrating on-premises relational databases to AWS Aurora, Redshift, or similar.
- Processing and analysing data using tools and technologies such as AWS Data Pipeline, Amazon EMR, Amazon Redshift, and Amazon Athena.
- Track record of successfully building scalable data lake solutions that connect to distributed data storage using multiple data connectors (a brief Snowflake loading sketch follows this list).
- Must have a background in data engineering; data warehouse development experience is ideal.
- Proven work experience with Spark, Python, SQL, and any RDBMS.
- Designing, developing, and managing the data infrastructure for the organization's cloud-based services and applications.
- Using various data sources, such as relational databases, NoSQL databases, and data warehouses.
- Collaborating with other engineers, data scientists, and data analysts to design and create data-driven solutions.
- Experience designing solutions for multiple large data warehouses, with a good understanding of cluster and parallel architecture as well as high-scale or distributed RDBMS.
- Strong database fundamentals, including SQL, performance, and schema design.
- Understanding of a CI/CD framework is an added advantage.
- Ability to interpret and write custom shell scripts; Python scripting is a plus.
- Experience with Git / AWS DevOps.
- Ability to work in a fast-paced agile development environment.
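Given the Snowflake emphasis in the summary, here is a similarly illustrative sketch of the loading side: copying curated Parquet output into a Snowflake table via the snowflake-connector-python package. The connection settings, the external stage @orders_stage, and the target table orders_daily are all hypothetical, and the stage is assumed to already point at the curated S3 prefix.

```python
# Illustrative Snowflake load; account settings, stage, and table names
# are hypothetical placeholders.
import os

import snowflake.connector

conn = snowflake.connector.connect(
    account=os.environ["SNOWFLAKE_ACCOUNT"],
    user=os.environ["SNOWFLAKE_USER"],
    password=os.environ["SNOWFLAKE_PASSWORD"],
    warehouse="ETL_WH",
    database="ANALYTICS",
    schema="CURATED",
)
try:
    # Assumes the external stage @orders_stage already points at the
    # curated S3 prefix written by the upstream pipeline.
    conn.cursor().execute("""
        COPY INTO orders_daily
        FROM @orders_stage/orders_daily/
        FILE_FORMAT = (TYPE = PARQUET)
        MATCH_BY_COLUMN_NAME = CASE_INSENSITIVE
    """)
finally:
    conn.close()
```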

Experience Required :

5+ years

Vacancy :

2 - 4 Hires
