Cloud Engineer cum Hadoop Admin Professional at TransUnion
- Mumbai, Maharashtra
- Not Disclosed
- Full-time
What We'll Bring:
We are looking for a highly skilled SRE with experience in cloud DevOps operations and Hadoop administration to join our dynamic team. Within our SRE team, you'll manage and administer the backbone of our analytics infrastructure. This role works closely with the engineering team and is responsible for capacity planning, service configuration, cluster expansion, monitoring, tuning, and ongoing support of our on-prem and cloud solutions. Responsibilities also include researching and recommending methods and technologies to improve cluster operation and user experience.
Job Description:
Manage the Hadoop distribution on Linux instances, including configuration, capacity planning, expansion, performance tuning and monitoring.
Upgrade MapR clusters to newer versions.
Expertise in managing Hadoop cluster patching and upgrades.
Administer MapR-DB (NoSQL) databases.
Commission and decommission nodes in clusters.
Support OS patching of cluster nodes.
Hands-on experience with Dataproc, Dataflow, GCS, and BigQuery.
Hands-on experience with Terraform, Jenkins, Kubernetes.
Work with data engineering team to support deployment of Spark and Hadoop jobs.
Work with delivery teams to provision users in Hadoop.
Work with end users to troubleshoot and resolve incidents with data accessibility.
Contribute to the architecture design of the cluster to support growing demands and requirements.
Contribute to planning and implementation of software and hardware upgrades.
Perform capacity and infrastructure planning based on current workloads and future requirements.
Recommend and implement standards and best practices related to cluster administration.
Research and recommend automated approaches to cluster administration.
Enable Ranger for role-based access control (RBAC) so that data access privileges follow security policies.
Enable data encryption at rest and in transit with TLS/SSL to meet security standards.
Handle day-to-day operational activities, including incident, problem, and change management.
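To give a flavor of the cloud provisioning work described above, here is a minimal Terraform sketch of a small Dataproc cluster. All names, sizes, and the region are hypothetical illustrations, not details from this posting, and a configured Google provider is assumed.

```hcl
# Minimal sketch (hypothetical names/sizes): a small Dataproc cluster
# managed with Terraform. Assumes the Google provider is already configured.
resource "google_dataproc_cluster" "analytics" {
  name   = "analytics-cluster" # hypothetical cluster name
  region = "us-central1"       # hypothetical region

  cluster_config {
    master_config {
      num_instances = 1
      machine_type  = "n1-standard-4"
    }
    worker_config {
      num_instances = 2
      machine_type  = "n1-standard-4"
    }
  }
}
```

In practice, capacity-planning decisions from the bullets above (node counts, machine types) would be surfaced as variables rather than hard-coded values.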
What You'll Bring:
Bachelor's degree in Computer Science, Information Technology, or related field.
4+ years of administrator experience managing full stack Hadoop distribution (preferably MapR) with technical stack of MapReduce, Spark, Yarn, Hive, HDFS, including installation, configuration, administration, monitoring and optimization.
3+ years of experience implementing and managing Hadoop-related security in Linux environments (Kerberos, SSL, Sentry, Ranger, encryption).
3+ years of data-related benchmarking, performance analysis, and tuning.
Experience in monitoring and optimizing SparkSQL and Hive queries, including memory usage.
Strong knowledge of YARN configuration in multi-tenant environments, including sound knowledge of the YARN fair scheduler.
Strong knowledge of Active Directory/LDAP security integration with big data platforms.
Experience providing 24x7 on-call Hadoop administration and support as part of a team.
Hands-on experience in Monitoring and Reporting on Hadoop resource utilization.
Proficiency with tools such as Splunk and Grafana for performance monitoring.
Moderate experience with Linux and storage, including shell scripting, system monitoring, and troubleshooting.
Knowledge of disaster recovery planning.
Knowledge of data protection regulations.
Knowledge of network concepts and experience managing host level network services.
Ability to troubleshoot and resolve production issues.
Ability to develop and maintain SOP and documentation.
Ability to work in a fast-paced collaborative environment and meet deadlines.
Strong communication and collaboration skills.
Excellent problem-solving skills.
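The multi-tenant YARN fair-scheduler knowledge called for above typically shows up as an allocation file. A minimal sketch (queue names, weights, and limits are hypothetical, not from this posting):

```xml
<!-- Sketch of a fair-scheduler.xml allocation file for a multi-tenant
     YARN cluster; all queue names and values are hypothetical. -->
<allocations>
  <queue name="analytics">
    <weight>2.0</weight>
    <schedulingPolicy>fair</schedulingPolicy>
    <maxRunningApps>20</maxRunningApps>
  </queue>
  <queue name="adhoc">
    <weight>1.0</weight>
    <schedulingPolicy>fifo</schedulingPolicy>
  </queue>
  <queueMaxAppsDefault>10</queueMaxAppsDefault>
</allocations>
```

Weighting queues like this lets a production analytics tenant receive a larger fair share than ad-hoc users without starving either.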
Impact You'll Make:
We're also looking for the preferred skills below. Whether you are proficient or could use some brushing up, we're happy to support your development in:
Certification on GCP (preferably DevOps Engineer).
Experience with on-prem to cloud migration.
Experience with Tableau or Superset.
Linux/shell/python scripting.
Knowledge of Helm and Groovy.
Knowledge of web services, APIs, REST, and RPC.
Basic experience with Python.
Knowledge of RDBMS.
Familiarity with containerization (Docker, Kubernetes) and orchestration.
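Several of these preferred skills (Python scripting, REST APIs, monitoring) come together in routine automation. As a minimal sketch, assuming the standard YARN ResourceManager `/ws/v1/cluster/metrics` JSON payload, a hypothetical helper like the one below could feed resource-utilization reporting:

```python
# Sketch (hypothetical helper): compute cluster memory utilization from a
# YARN ResourceManager /ws/v1/cluster/metrics JSON payload. In production
# the payload would come from an HTTP call such as:
#   requests.get("http://<rm-host>:8088/ws/v1/cluster/metrics").json()

def memory_utilization(metrics: dict) -> float:
    """Return allocated memory as a fraction of total cluster memory."""
    m = metrics["clusterMetrics"]
    total = m["totalMB"]
    if total == 0:
        return 0.0  # avoid division by zero on an empty/unreported cluster
    return m["allocatedMB"] / total

# Sample payload containing only the fields this helper reads
# (values are illustrative, not real cluster data).
sample = {"clusterMetrics": {"allocatedMB": 65536, "totalMB": 262144}}
print(memory_utilization(sample))  # 0.25
```

A helper like this could be scheduled (e.g., via cron or Jenkins) and its output forwarded to Grafana or Splunk, per the monitoring bullets above.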
- TransUnion Job Title: Sr Engineer, Database Engineering
- Qualification: Bachelor's degree in Computer Science, Information Technology, or related field.
- Experience: Minimum 3 years
- Openings: 2 - 4 hires