We Are Hiring Kafka Developers | Immediate Joiners Job in Truetech Solutions

Job Summary

Roles and Responsibilities


The candidate shall:

  • Have a minimum of 5+ years of experience (1-3 years of Kafka experience is mandatory).

  • Demonstrable experience as a Kafka developer, ideally with Kafka Streams.
  • Hands-on experience with big data technologies (Hadoop, Hue, Hive, Impala, Spark), with particular strength in Kafka.
  • Knowledge and experience using key-value databases.
  • Experience developing microservices using Spring, Java/Scala, and OpenShift/Kubernetes, with deployments in Jenkins.

  • Responsible for creating scalable, configurable streaming applications that provide fresh data as part of data services for different applications, usually 24/7 applications with demanding performance requirements.
  • Experience in the big data arena, particularly developing streaming applications, preferably with Kafka Streams.
  • Provide expertise and hands-on experience working with Kafka Connect and the schema registry in a very high-volume environment (~900 million messages).

  • Provide expertise in Kafka brokers, ZooKeeper, KSQL, KStream, and Kafka Control Center.
  • Provide expertise and hands-on experience working with AvroConverter, JsonConverter, and StringConverter.

  • Provide expertise and hands-on experience working with Kafka connectors such as MQ connectors, Elasticsearch connectors, JDBC connectors, FileStream connectors, and JMS source connectors, as well as tasks, workers, converters, and transforms.
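As an illustration of the Kafka Connect skills above, a minimal JDBC source connector configuration might look like the following sketch. The connector name, connection details, and topic prefix are placeholder values, and the Avro converter with a schema registry URL reflects the schema-registry experience mentioned above:

```
# Illustrative JDBC source connector config (placeholder values throughout)
name=jdbc-source-example
connector.class=io.confluent.connect.jdbc.JdbcSourceConnector
tasks.max=1
connection.url=jdbc:postgresql://localhost:5432/exampledb
connection.user=example_user
connection.password=example_password
# Poll new rows by an auto-incrementing id column
mode=incrementing
incrementing.column.name=id
topic.prefix=jdbc-
key.converter=org.apache.kafka.connect.storage.StringConverter
value.converter=io.confluent.connect.avro.AvroConverter
value.converter.schema.registry.url=http://localhost:8081
```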

  • Provide expertise and hands-on experience building custom connectors using Kafka core concepts and the Connect API.
  • Working knowledge of the Kafka REST Proxy.

  • Ensure optimum performance, high availability, and stability of solutions.
  • Create topics, set up redundant clusters, deploy monitoring tools and alerts, and apply best practices.
  • Create stubs for producers, consumers, and consumer groups to help onboard applications from different languages/platforms.
  • Leverage Hadoop ecosystem knowledge to design and develop capabilities for delivering our solutions using Spark, Scala, Python, Hive, Kafka, and other components of the Hadoop ecosystem.
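A producer stub of the kind described above typically starts from a small shared configuration. The sketch below (plain Java, standard library only) builds such a configuration as a `Properties` object; the bootstrap address and serializer class names are illustrative, and the actual `KafkaProducer` construction and `send` calls, which require the kafka-clients library, are omitted:

```java
import java.util.Properties;

public class ProducerStubConfig {
    // Builds the baseline configuration a producer stub would hand to
    // a KafkaProducer. All values here are illustrative placeholders.
    static Properties producerProps(String bootstrapServers) {
        Properties props = new Properties();
        props.put("bootstrap.servers", bootstrapServers);
        props.put("key.serializer",
                "org.apache.kafka.common.serialization.StringSerializer");
        props.put("value.serializer",
                "org.apache.kafka.common.serialization.StringSerializer");
        props.put("acks", "all");                // wait for full ISR acknowledgement
        props.put("enable.idempotence", "true"); // avoid duplicates on retry
        return props;
    }

    public static void main(String[] args) {
        Properties p = producerProps("localhost:9092");
        System.out.println(p.getProperty("acks")); // prints "all"
    }
}
```

Keeping this configuration in one place lets each onboarding team reuse the same durability settings (`acks=all`, idempotence) regardless of the language or platform of the producing application.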

  • Experience with RDBMS systems, particularly PostgreSQL.

  • Use automation tools such as Jenkins and uDeploy for provisioning.
  • Ability to perform data related benchmarking, performance analysis and tuning.

  • Strong skills in in-memory applications, database design, and data integration.


Salary: 5,00,000 - 12,00,000 P.A.

Experience Required: 6 to 10 Years

Vacancy: 2 - 4 Hires
