Data Engineer (Java Spring Boot, Kafka, Informatica) Job in Clairvoyant

Job Summary

At Clairvoyant, we're building a thriving big data practice to help enterprises enable and accelerate the adoption of big data and cloud services. In the big data space, we lead and serve as innovators, troubleshooters, and enablers. The big data practice at Clairvoyant focuses on solving our customers' business problems by delivering products designed with best-in-class engineering practices and a commitment to keeping the total cost of ownership to a minimum.

Role: Data Engineer
Location: Pune/Hyderabad (currently remote)

Must-Have:
- Bachelor's degree in Computer Science preferred, or equivalent relevant business experience
- 4+ years of experience in software development, with a delivery track record across a range of roles in a Scrum environment
- 2+ years of development experience with Confluent/Apache Kafka and AWS services, including implementing cloud data solutions
- 2+ years of development experience with Java Spring Boot and Docker containers (an illustrative sketch follows this section)
- Experience with microservice application architecture and developing Java Spring Boot container applications
- Experience with Docker and Kubernetes; having used Rancher as a container orchestration tool is a plus
- Experience with code management tools and change control processes (Git/Bitbucket)
- Strong experience in SQL and ETL development; Confluent KSQL is a plus
- Experience with Continuous Integration/Continuous Deployment (CI/CD) tools such as Azure DevOps
- Knowledge of infrastructure as code (IaC) using CloudFormation or Terraform
- Skilled in Agile development methodologies
- Takes a logical, analytical approach to problem solving and pays close attention to detail
- Excellent oral and written communication skills; experience working collaboratively through discussions, technical or otherwise
- Strong team player, able to work effectively within a team and more broadly with people from a variety of backgrounds and areas across the organization
- Able to provide overlap with the client in the EST time zone

Good to Have:
- Informatica experience

Role & Responsibilities:
- Design, develop, and/or support interfaces and programs for the big data platform; experience with the AWS cloud stack, HDFS, and Hadoop system applications preferred
- Development experience with AWS API Gateway, Lambda, and NoSQL (MongoDB)
- Experience implementing data streaming from legacy systems (MVS DB2/VSAM) and Teradata to a cloud platform is a plus
- Demonstrates a logical, analytical approach to problem solving and pays close attention to detail
- Mentors less experienced developers on best practices in AWS cloud technology

Key Skills: ETL/Kafka/Informatica
Education: BE/B.Tech from a reputed institute
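As an illustration of the Java Spring Boot and Kafka work described above, here is a minimal consumer sketch. It is a sketch only, under stated assumptions: the topic name, group ID, and class names are hypothetical, and it presumes the spring-kafka dependency is on the classpath with a broker configured through standard Spring Boot application properties.

    // Minimal sketch of a Spring Boot Kafka consumer, illustrating the
    // Java Spring Boot + Confluent/Apache Kafka stack used in this role.
    // Topic name, group ID, and class names are hypothetical.
    package com.example.streaming;

    import org.springframework.boot.SpringApplication;
    import org.springframework.boot.autoconfigure.SpringBootApplication;
    import org.springframework.kafka.annotation.KafkaListener;
    import org.springframework.stereotype.Component;

    @SpringBootApplication
    public class StreamingApplication {
        public static void main(String[] args) {
            SpringApplication.run(StreamingApplication.class, args);
        }
    }

    @Component
    class CustomerEventListener {

        // Consumes records from a hypothetical topic; in a real pipeline this
        // method would validate, transform, and load each record into a
        // downstream store (for example, an AWS-hosted database).
        @KafkaListener(topics = "customer-events", groupId = "data-engineering")
        public void onMessage(String record) {
            System.out.println("Received: " + record);
        }
    }

In a deployment of the kind this posting describes, such an application would typically be packaged as a Docker image and run on Kubernetes.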

Experience Required: 4 to 5 years

Vacancy: 2-4 hires
