Required Qualifications
Bachelor's or Master's degree in Computer Science, Data Engineering, Machine Learning, or a related field.
3+ years of experience in DevOps, MLOps, or similar roles.
Proficiency with containerization (Docker), orchestration (Kubernetes), and Infrastructure-as-Code (Terraform, CloudFormation).
Experience with ML model deployment frameworks (TensorFlow Serving, TorchServe, FastAPI, BentoML, etc.).
Hands-on experience with ML lifecycle tools (MLflow, Kubeflow, DVC, Airflow).
Strong scripting and programming skills (Python, Bash, etc.).
Experience with cloud platforms (AWS/GCP/Azure) and relevant ML services (SageMaker, Vertex AI, etc.).
Familiarity with data engineering workflows, streaming, and data pipelines (Kafka, Spark, etc.).
Strong understanding of CI/CD concepts and tools (GitLab CI, Jenkins, ArgoCD).
Preferred Qualifications
Experience with monitoring and logging tools (Prometheus, Grafana, ELK, Datadog).
Knowledge of model governance, auditing, and regulatory requirements.
Exposure to A/B testing, shadow deployments, and canary releases for ML models.
Certification in cloud platforms or DevOps practices is a plus.