Responsibilities:
Architecture Design:
- Design and implement scalable, secure, and high-performance architectures for Generative AI applications.
- Integrate Generative AI models into existing platforms, ensuring compatibility and performance optimization.
Model Development and Deployment:
- Fine-tune pre-trained generative models for domain-specific use cases.
- Define the data collection, sanitization, and data preparation strategy for model fine-tuning.
- Evaluate, select, and deploy appropriate Generative AI and agentic frameworks (e.g., PyTorch, TensorFlow, CrewAI, AutoGen, LangGraph, AgentFlow).
Innovation and Strategy:
- Stay up to date with the latest advancements in Generative AI and recommend innovative applications to solve complex business problems.
- Define and execute the AI strategy roadmap, identifying key opportunities for AI transformation.
Collaboration and Leadership:
- Collaborate with cross-functional teams, including data scientists, engineers, and business stakeholders.
- Mentor and guide team members on AI/ML best practices and architectural decisions.
- Lead a team of data scientists, GenAI engineers, DevOps engineers, and software developers.
Performance Optimization:
- Monitor the performance of deployed AI models and systems, ensuring robustness and accuracy.
- Optimize computational costs and infrastructure utilization for large-scale deployments.
Ethical and Responsible AI:
- Ensure compliance with ethical AI practices, data privacy regulations, and governance frameworks.
- Implement safeguards to mitigate bias, misuse, and unintended consequences of Generative AI.
Required Skills:
- Advanced programming skills in Python and fluency in data processing frameworks like Apache Spark.
- Strong knowledge of foundational LLMs (e.g., OpenAI GPT-4o, o1, Claude, Gemini) as well as open-source models such as Llama 3.2 and Phi.
- Proven track record with event-driven architectures and real-time data processing systems.
- Familiarity with Azure DevOps and other LLMOps tools for operationalizing AI workflows.
- Deep experience with Azure OpenAI Service and vector databases, including API integration, prompt engineering, and model fine-tuning, or equivalent technologies on AWS/GCP.
- Knowledge of containerization technologies such as Kubernetes and Docker.
- Comprehensive understanding of data lakes and strategies for data management.
- Expertise in LLM frameworks including LangChain, LlamaIndex, and Semantic Kernel.
- Proficiency in cloud computing platforms such as Azure or AWS.
- Exceptional leadership, problem-solving, and analytical abilities.
- Superior communication and collaboration skills, with experience managing high-performing teams.
- Ability to operate effectively in a dynamic, fast-paced environment.