Lead Data Engineer - SQL/Python/Spark/Databricks, etc. - Job at Enquero


Job Summary

Responsibilities

As a Lead Data Engineer in Enquero's Data & Analytics unit, you will lead a fast-paced team to deliver industry-contextual technology solutions for our Fortune 500 customers, in line with customer requirements. You will leverage your formal education in the relevant field and your professional experience to apply professional concepts to targeted problems and tasks. You will follow the standard practices and procedures applicable to your area of work to complete your tasks and deliverables with high quality under limited supervision.

Success in this role will be defined by your passion for conceptualizing, designing, and building models; your ability to ask the right questions, apply analytical and technical skills, and reliably complete your deliverables with high quality; and your demonstrated ability to achieve this consistently while continuing to improve your knowledge and skills. You will exercise independent judgement within defined policies and procedures to determine appropriate action, and lead a small team to an outcome while being a role model for your team.

  • Perform project analysis and development tasks of an increasingly complex nature, which may require extensive research and analysis.
  • Make design and technical decisions for the application and ensure its high performance.
  • Determine methods and procedures for new tasks, establish them for the assignment, and coordinate activities with other employees while leading a small team and demonstrating good leadership within the team.
  • Work in an agile development environment and ensure process/policy compliance as per the organization's guidelines.
  • Collaborate with leaders, business analysts, project managers, IT architects, technical leads and other developers, along with internal and external customers, to understand requirements and develop solutions that meet business needs.
  • Support code deployments and configuration changes to production and non-production systems, following established procedures.
  • Be a thought leader, understand the latest trends and capabilities to implement modern and successful solutions
  • Contribute to your BU/Practice by:
    • Documenting your learnings from current work and engaging with the external tech community by writing blogs, contributing on GitHub and Stack Overflow, and participating in meetups/conferences
    • Keeping up to date on the latest technologies through technology trainings and certifications
    • Actively participating in organization-level activities and events related to learning, formal training, interviewing, special projects, etc.

Qualifications

REQUIRED/PREFERRED

  • Bachelor's/Master's degree in Computer Science or related disciplines
  • 6-9 years of relevant experience in the field of Data Engineering
  • Must have been part of a minimum of 2 end-to-end big data projects and must have handled defined modules independently.

  • Expert in SQL and skilled in data modelling for relational, analytical, and big data workloads.

  • Advanced programming skills with Python, Scala or Java.

  • Strong knowledge of data structures, algorithms, & distributed systems.

  • Strong experience and deep understanding of Spark internals.

  • Expert in Hive.

  • Hands-on experience with one of the cloud technologies (AWS, Azure, GCP).

  • Hands-on experience with at least one NoSQL database (HBase, Cassandra, MongoDB, etc.).

  • Experience in working with both batch and streaming datasets.

  • Knowledge of at least one ETL tool such as Informatica, Apache NiFi, Airflow, DataStage, etc.

  • Experience working with Kafka or a related message queue technology.

  • Hands-on experience writing shell scripts to automate processes.

  • Knowledge of building RESTful services would be an added advantage.

  • Willingness to learn and adapt.

  • Delivery-focused, with a willingness to work in a fast-paced work environment.

  • Take initiative and be responsible for delivering complex software.

  • Knowledge of building REST API endpoints for data consumption.

  • Experience building self-service tools for analytics would be a plus.

  • Knowledge of ELK stack would be a plus.

  • Knowledge of implementing CI/CD for pipelines is a plus.

  • Knowledge of containerization (Docker/Kubernetes) would be a plus.

  • Experience working with one of the popular public cloud platforms is preferred.

  • Excellent oral and written communication is a must.

  • Well-versed in Agile methodologies, with experience working with Scrum teams.

Experience Required: Fresher

Vacancy: 2 - 4 Hires
