- Develop and optimize machine learning models leveraging NLP, Computer Vision, and GenAI.
- Architect and implement scalable ML pipelines for training, validation, deployment, and monitoring of production models.
- Drive the development of large-scale ML infrastructure, ensuring low-latency inference and efficient resource utilization across cloud and hybrid environments.
- Implement MLOps best practices, automating model training, validation, deployment, and performance monitoring.
- Work closely with data engineers, software engineers, and product teams to ensure seamless integration of ML solutions into production systems.
- Optimize ML models for performance, scalability, and efficiency, leveraging techniques like quantization, pruning, and distributed training.
- Enhance model reliability by implementing automated monitoring, CI/CD pipelines, and versioning strategies.
- Bachelor’s or Master’s degree in Computer Science, Data Science, or a related field. Advanced degrees are a plus.
- 6+ years of hands-on experience in building and deploying machine learning models, with a focus on NLP, Computer Vision, or GenAI solutions.
- Proven experience deploying machine learning models into production environments, ensuring high availability, scalability, and reliability.
- Proficiency with modern ML frameworks (e.g., TensorFlow, PyTorch).
- Experience in building ML pipelines and implementing MLOps practices to automate and scale machine learning workflows.
- Strong programming skills in Python, R, and SQL, and experience with big data technologies (e.g., Spark, Hadoop) for data processing and analytics.
- Basic proficiency in at least one cloud-based ML service (e.g., AWS SageMaker, Azure ML, Google AI Platform) for training, deploying, and scaling machine learning models.
- Hands-on experience with containerization (Docker), orchestration (Kubernetes), and model serving platforms (e.g., Triton Inference Server, ONNX Runtime) for production-ready ML deployments.
- Familiarity with end-to-end ML pipelines, including data collection, feature engineering, model training, and model evaluation.
- Knowledge of model optimization techniques (e.g., quantization, pruning) to improve inference performance on cloud or edge devices.
- Excellent problem-solving skills, with the ability to break down complex document extraction challenges and translate them into scalable ML solutions.
- Strong communication skills, with the ability to articulate ML problems clearly and work autonomously.