Technical Lead - Data Engineer
- Pune, Pune Division, Maharashtra
- Not Disclosed
- Full-time
About us
We're weaving a success story for those who are passionate about using technology to be part of a revolution, one that's continuously enabling enterprises to bring more value to their partners and customers alike.
Job Description
The Data Engineering team is one of the core technology teams at Lumiq.ai and is responsible for creating all of our data products and platforms, which scale to any volume of data, users, and processing. The team also interacts with our customers to work out solutions, create technical architectures, and deliver the resulting products.
If you are someone who is always pondering how to make things better, how technologies can interact, how various tools and concepts can help a customer, or how a customer can use our products, then Lumiq is a place full of opportunities for you.
Requirements
Responsibilities
Create and maintain optimal data pipeline architecture.
Assemble large, complex data sets that meet functional / non-functional business requirements.
Build the infrastructure required for optimal extraction, transformation, and loading of data from a wide variety of data sources using open-source and AWS big data technologies.
Build analytics tools that utilize the data pipeline to provide actionable insights into customer acquisition, operational efficiency and other key business performance metrics.
Work with stakeholders including the Executive, Product, Data and Design teams to assist with data-related technical issues and support their data infrastructure needs.
Work with data and analytics experts to strive for greater functionality in our data systems.
Qualifications
We are looking for a candidate with 4+ years of experience in a Data Engineer role who holds a graduate degree in Computer Science, Statistics, Informatics, Information Systems, or another quantitative field.
Advanced working knowledge of SQL and experience with relational databases, as well as familiarity with a variety of other databases.
Experience building and optimizing big data pipelines, architectures and datasets.
Experience performing root cause analysis on internal and external data and processes to answer specific business questions and identify opportunities for improvement.
Experience interacting with customers and various stakeholders.
Strong analytical skills related to working with unstructured datasets.
Experience building processes that support data transformation, data structures, metadata, dependency, and workload management.
Working knowledge of message queuing, stream processing, and highly scalable data lakes.
Strong project management and organizational skills.
Experience supporting and working with cross-functional teams in a dynamic environment.
The candidate should also have experience using the following software/tools:
Big data technologies: Hadoop, Spark, Kafka, etc.
Relational SQL and NoSQL databases, including Postgres and Cassandra.
Data pipeline and workflow management tools: Airflow, NiFi, etc.
Cloud services: AWS (EMR, RDS, Redshift, Glue); Azure (Databricks, Data Factory); GCP (Dataproc, Pub/Sub).
Stream-processing systems: Storm, Spark Streaming, Flink, etc.
Object-oriented, functional, or scripting languages: Python, Java, Scala, etc.
4 to 8 Years
2 - 4 Hires