Senior MLOps Engineer

Cognism

France
Posted on Sep 26, 2024

Cognism is a market leader in international sales intelligence. Access to our premium data has helped a wide variety of global revenue teams change their approach to prospecting, resulting in predictable and prosperous outcomes.

Following multiple successful funding rounds and the acquisition of Mailtastic (2020), an email signature solution provider, and Kaspr (2022), a Paris-based sales prospecting tool, there has never been a more exciting time to join us.

As we grow, one of our main objectives is to continue hiring individuals who are both a professional and a cultural fit for our company. Our values are at the core of everything we do!

Our people:

  • Are Nice!
  • Are Collaborative. We’re in this together!
  • Are Solution-Focused. For every problem, we’ve got a solution!
  • Are Understanding.
  • Celebrate Individual Contributors.

We are committed to creating a diverse and inclusive global workplace, which encourages you to achieve any goals you may have, while having fun along the way!

Your Role:

Cognism is actively seeking an outstanding MLOps Engineer to join our growing Data team. This is primarily a hands-on engineering and MLOps position, reporting directly to the Engineering Manager in the Data team. The MLOps Engineer at Cognism is entrusted with optimizing and improving the quality of our ML services and products: advising on and enforcing best practices within the Data Science team, and providing the tooling and platforms that ultimately result in more reliable, maintainable, scalable, and faster machine learning workflows. The successful candidate will be at the forefront of our MLOps initiatives, especially during the implementation of our machine learning platform and best practices.

Your Responsibilities:

  • Building and managing automation pipelines to operationalize the ML platform, model training, and model deployment
  • Designing and implementing architectures, services, and pipelines on the AWS cloud that are secure, reliable, scalable, and maintainable
  • Contributing to MLOps best practices within the Data Science team
  • Acting as a bridge between AI, Engineering, and DevSecOps for ML deployment, monitoring, and maintenance
  • Communicating and working closely with the team of Data Scientists to provide tooling and integrate AI/ML models into larger systems and applications
  • Monitoring and maintaining production-critical ML services and workloads

Your Experience:

Required:

  • Strong understanding of cloud architecture and service fundamentals (AWS preferred; GCP or MS Azure also considered)
  • Good understanding of modern MLOps best practices
  • Good understanding of Machine Learning fundamentals
  • Good understanding of Data Engineering fundamentals
  • Experience with Infrastructure as Code (IaC) tools like Terraform, CDK or similar
  • Experience with CI/CD pipelines (GitHub Actions, Circle CI or similar)
  • Basic understanding of networking and security practices on cloud
  • Experience with containerization (Docker, AWS ECS, Kubernetes, or similar)
  • Proficiency reading and writing Python code
  • Experience deploying and monitoring machine learning models on the Cloud in production
  • Fluency in English, good communication skills, and the ability to work in a team
  • Enthusiasm for learning and exploring modern MLOps solutions

Ideal:

  • 3+ years in an MLOps, Machine Learning Engineer, or DevOps role
  • Ability to design and implement cloud solutions and build MLOps pipelines (AWS, MS Azure, or GCP) following best practices
  • Good understanding of software development principles and DevOps methodologies
  • Experience and understanding of MLOps concepts:
    • Experiment Tracking
    • Model Registry & Versioning
    • Model & Data Drift Monitoring
  • Experience working with GPU-based computational frameworks and architectures on the cloud (AWS, GCP, etc.)
  • Knowledge of MLOps and DevOps tools:
    • MLflow
    • Kubeflow, Metaflow, Airflow or similar
    • Visualisation tools – Grafana, QuickSight or similar
    • Monitoring tools – Coralogix or GrafanaCloud or similar
    • ELK stack (Elasticsearch, Logstash, Kibana)
  • Experience working in big data domains (10M+ record scale)
  • Experience with streaming and batch-processing frameworks

Bonus:

  • Experience with MLOps Platforms (SageMaker, VertexAI, Databricks, or other)
  • Experience reading and writing in Scala
  • Knowledge of frameworks such as scikit-learn, Keras, PyTorch, TensorFlow, etc.
  • Experience with SQL, NoSQL databases, data lakehouse

*Please send your CV in English.

We look forward to hearing from you!