Global Data Engineer Job at Bayer

POSITION PURPOSE:

The primary focus of this role is to develop, deploy, and support data pipelines, data lakes, and data warehousing that enable Regulatory Sciences to realize value from existing and future data sets. The Data Engineer will focus on extracting internal and external data generated to support global regulatory submissions and transferring it through local and cloud infrastructure into Regulatory-managed data lakes. The Data Engineer will partner with data engineers across the company to assemble the data into warehouse environments for use in data analytics, ML/OR modeling, business intelligence, and simulations.
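
For illustration only, a minimal Python sketch of the data flow described above (stage a raw extract in a cloud data lake, then load a curated copy into a warehouse table). The bucket, file, connection, and table names are hypothetical placeholders, not actual Bayer systems.

    # Minimal sketch: push a raw extract into the lake, then load a curated
    # copy into a warehouse table. All names below are placeholders.
    import boto3
    import pandas as pd
    from sqlalchemy import create_engine

    def extract_to_lake(local_path: str, bucket: str, key: str) -> None:
        """Stage a raw data export (e.g., a study result file) in the data lake."""
        boto3.client("s3").upload_file(local_path, bucket, key)

    def load_to_warehouse(local_path: str, conn_uri: str, table: str) -> None:
        """Append the curated extract to a warehouse table for analytics use."""
        df = pd.read_csv(local_path)
        engine = create_engine(conn_uri)
        df.to_sql(table, engine, if_exists="append", index=False)

    if __name__ == "__main__":
        extract_to_lake("submissions.csv", "example-data-lake", "raw/submissions.csv")
        load_to_warehouse("submissions.csv", "postgresql://user:pass@host/dw", "reg_submissions")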

YOUR TASKS AND RESPONSIBILITIES:

  • Assist in the development of the Regulatory Sciences data warehousing solutions.
  • Partner with Data Science and Information Management colleagues to harmonize the movement of data from systems of record to the data warehousing solutions.
  • Build and maintain core data models and implement ETL/ELT methods (see the orchestration sketch after this list).
  • Work closely with other Crop Science organizations to create a data-centric organization.
  • Design and build new systems and business tools that enable Regulatory Science team members to consume and understand data faster through queries, APIs, and SDKs.
  • Develop and configure infrastructure in cloud environments.
  • Understand the regulatory business model and use data to solve business problems unique to the Regulatory Science teams.
  • Evaluate new technologies (Snowflake, Redshift, Databricks, TetraScience) and follow industry trends, with the goal of providing technical recommendations for connecting to internal and external data systems and instrumentation.
  • Create and maintain design and code documentation in GitHub, SharePoint, and/or another repository.
  • Facilitate and participate in code reviews, retrospectives, functional and integration testing, and other team activities focused on improving the quality of delivery.
  • Champion a culture of continuous improvement and innovation, importing global digital best practices into the region.
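
As referenced in the ETL/ELT item above, a small orchestration sketch of what such a pipeline could look like, assuming Apache Airflow 2.4+ and its TaskFlow API; the task bodies, dataset paths, and daily schedule are illustrative assumptions only.

    # Illustrative Airflow DAG (Airflow 2.4+ TaskFlow API); paths and schedule are placeholders.
    from datetime import datetime
    from airflow.decorators import dag, task

    @dag(schedule="@daily", start_date=datetime(2024, 1, 1), catchup=False, tags=["example"])
    def regulatory_etl():
        @task
        def extract() -> str:
            # Pull the latest export from a system of record (placeholder path).
            return "s3://example-data-lake/raw/latest.parquet"

        @task
        def transform(raw_path: str) -> str:
            # Apply harmonization rules and write a curated copy (placeholder logic).
            return raw_path.replace("/raw/", "/curated/")

        @task
        def load(curated_path: str) -> None:
            # Load the curated dataset into the warehouse (placeholder step).
            print(f"loading {curated_path} into the warehouse")

        load(transform(extract()))

    regulatory_etl()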

WHO YOU ARE:

  • Bachelor's degree in Computer Science, Software Engineering, or a related field with 10 years of professional software engineering experience; or a Master's degree with 8 years of relevant experience; or a PhD with 3 years of relevant experience

Technical knowledge and experience:

  • SQL and NoSQL databases (data warehousing, data modeling, etc.)
  • Experience with big data tools (Spark, Kafka, Flink, Hadoop, BigQuery, etc.); a brief Spark sketch follows this list
  • Knowledge of algorithms and data structures
  • Experience with tools for authoring workflows & pipelines (Airflow, AWS Step Functions, Kubeflow, etc.)
  • Experience with cloud services (AWS, Azure, Google Cloud)
  • Experience with distributed systems
  • Experience with Python, R, Unix shell, etc.
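
For context on the big data and Python skills listed above, a brief PySpark sketch of a typical aggregation job; the input path, column names, and output location are hypothetical.

    # Small PySpark example: aggregate curated study records from the lake.
    # Paths and column names are placeholders.
    from pyspark.sql import SparkSession
    from pyspark.sql import functions as F

    spark = SparkSession.builder.appName("study-aggregates").getOrCreate()

    df = spark.read.parquet("s3a://example-data-lake/curated/study_results/")
    summary = df.groupBy("study_id").agg(F.count("*").alias("record_count"))
    summary.write.mode("overwrite").parquet("s3a://example-data-lake/marts/study_counts/")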

Desired experience (optional):

  • Experience with Snowflake
  • Experience with Databricks
  • Experience working in a GxP environment
  • Experience with dbt
  • Experience with Data Vault 2.0 methodology
  • Experience with user requirement gathering
  • Knowledge of agriculture, life sciences, bioinformatics, biochemistry/chemistry, genetics, biology, or a related discipline
  • Network and database administration
  • Proven ability to plan, schedule, and deliver quality software; DevOps experience preferred
  • Experience running production cloud systems and diagnosing / fixing problems
  • Demonstrated understanding and appreciation of diverse cultural perspectives, enabling effective collaboration and communication within global teams.
