MLOps Engineer Job at Capgemini

MLOps Engineer

Job Summary

Your Role: MLOps Engineer

As an MLOps Engineer at Capgemini Invent, you will work on end-to-end machine learning operations (MLOps) lifecycle projects, focusing on developing, deploying, and managing ML models. You will collaborate with cross-functional teams to ensure that models are not only accurate but also scalable and production-ready, leveraging DevOps best practices and cloud technologies. This is a hands-on role with a strong emphasis on automation, collaboration, and innovation.

Key Responsibilities:

  • End-to-End MLOps Lifecycle: Collaborate with senior engineers and architects to implement the entire MLOps lifecycle, including data ingestion, model training, deployment, and monitoring of ML models in production environments.

  • ML Pipeline Development: Contribute to the building and testing of machine learning pipelines that handle data ingestion, model training, and deployment (see the illustrative pipeline sketch after this list).

  • Model Deployment & Monitoring: Support the deployment and scaling of ML models in production and ensure continuous monitoring to maintain model performance.

  • Collaboration with Teams: Work closely with data scientists, data engineers, and other stakeholders to ensure seamless integration and implementation of MLOps processes across the organization.

  • Code Reviews & Quality Assurance: Participate in code reviews, testing, and quality assurance to ensure the accuracy, reliability, and performance of ML models in production environments.

  • Infrastructure-as-Code (IaC): Gain hands-on experience with IaC and configuration management tools to automate infrastructure deployment and ensure scalable systems.

  • Cloud Resource Management: Manage cloud resources (on platforms like AWS, Azure, or GCP) and assist in implementing cost-optimization strategies.

  • Compliance & Security: Ensure that MLOps processes comply with industry best practices for security, governance, and model integrity.

  • LLM Pipelines: Assist in the implementation, fine-tuning, and deployment of Large Language Models (LLMs), including evaluation and deployment across cloud and on-premises environments.
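
To make the pipeline work above more concrete, here is a minimal, hypothetical sketch of a single training stage: it ingests a CSV dataset, trains a model, gates on a validation metric, and serializes the artifact for a serving step to pick up. The file paths, the "label" column, the accuracy threshold, and the choice of scikit-learn model are illustrative assumptions, not details of the actual role or toolchain.

```python
# Minimal sketch of a train-and-package pipeline stage (illustrative only).
# Assumes a CSV dataset with a "label" column; paths, threshold, and model
# choice are placeholders rather than any prescribed stack.
import joblib
import pandas as pd
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split


def run_training_stage(csv_path: str, model_path: str) -> float:
    # Data ingestion: load the raw dataset.
    df = pd.read_csv(csv_path)
    X, y = df.drop(columns=["label"]), df["label"]

    # Hold out a validation split to act as a simple quality gate.
    X_train, X_val, y_train, y_val = train_test_split(
        X, y, test_size=0.2, random_state=42
    )

    # Model training.
    model = RandomForestClassifier(n_estimators=100, random_state=42)
    model.fit(X_train, y_train)

    # Evaluation: fail the run if accuracy drops below a threshold,
    # so a degraded model never reaches deployment.
    accuracy = accuracy_score(y_val, model.predict(X_val))
    if accuracy < 0.80:
        raise ValueError(f"Validation accuracy {accuracy:.3f} is below the gate")

    # Deployment artifact: serialize the model for the serving step to load.
    joblib.dump(model, model_path)
    return accuracy


if __name__ == "__main__":
    print(run_training_stage("training_data.csv", "model.joblib"))
```

In practice, each of these steps would typically run as a separate, monitored task in a pipeline orchestrator rather than as one script, which is the kind of decomposition this role works on.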

Your Profile:

  • Educational Background: A degree in Computer Science, Data Science, or a related field is preferred, along with a basic understanding of data structures, data modeling, and software architecture.

  • Experience with ML Frameworks: Familiarity with at least one ML framework (e.g., TensorFlow, Keras, scikit-learn) and associated libraries such as NumPy and Pandas.

  • Programming Skills: Proficiency in Python and SQL, with experience in version control tools such as Git.

  • Cloud Platforms: Hands-on experience working with AWS, Azure, or GCP, including cloud resource management and deployment.

  • DevOps/MLOps Knowledge: Knowledge of CI/CD principles, DevOps practices for machine learning, and experience with containerization tools such as Docker and orchestration tools like Kubernetes.

  • LLM & NLP Exposure: Basic understanding of Large Language Models (LLMs), NLP techniques, and familiarity with LLM frameworks such as Hugging Face Transformers or OpenAI API.
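
As a purely illustrative example of the LLM exposure mentioned in the last point, the sketch below loads a small pre-trained model through the Hugging Face Transformers pipeline API and generates a short completion; the model name and prompt are assumptions chosen only to keep the example self-contained.

```python
# Minimal sketch of querying a pre-trained model via Hugging Face Transformers.
# The model ("gpt2") and prompt are illustrative; a real project would choose a
# model, prompt format, and generation settings to match its use case.
from transformers import pipeline

generator = pipeline("text-generation", model="gpt2")
result = generator(
    "MLOps brings DevOps practices to machine learning by",
    max_new_tokens=40,
)
print(result[0]["generated_text"])
```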

What You Will Love About Working Here:

  • Flexible Work Arrangements: We understand the importance of work-life balance. Whether through remote work or flexible hours, we provide the support you need to maintain it.

  • Career Growth: At Capgemini Invent, we invest in your professional development. Our career growth programs and diverse opportunities ensure you can explore various career paths and reach your full potential.

  • Learning & Certifications: Equip yourself with certifications in cutting-edge technologies, including Generative AI, to stay ahead in the rapidly evolving tech landscape.


Qualification: Knowledge of CI/CD principles and DevOps practices for ML.

Experience Required: 4 to 6 Years

Vacancy: 2 to 4 Hires
