AI Engineer - LLM Finetuning
Relevant Experience
- 1 to 5 years
Job Type
- Full-time
- In-office
Location
- Bengaluru
Role Description
You will be part of a cutting-edge AI team focused on developing, fine-tuning, and deploying Large Language Models (LLMs) such as GPT and LLaMA. The role involves working with advanced AI frameworks and techniques, including Retrieval-Augmented Generation (RAG), instruction tuning, and parameter-efficient fine-tuning (PEFT) methods such as LoRA, to solve real-world problems. The ideal candidate is passionate about generative AI and capable of translating business problems into intelligent, scalable AI solutions.
Responsibilities
- Design, develop, and deploy AI solutions built on LLMs such as GPT and LLaMA, along with related frameworks and tools
- Apply RAG techniques to improve response accuracy and relevance in generative AI systems
- Fine-tune models using instruction tuning and parameter-efficient (PEFT) methods such as LoRA to optimize performance
- Collaborate with product, engineering, and data teams to align AI solutions with business requirements
- Analyze large and complex datasets to identify patterns, extract features, and generate actionable insights
- Ensure AI solutions are scalable, secure, and compliant with data privacy standards
- Stay updated with the latest advancements in generative AI and apply them to ongoing projects
- Guide and mentor junior engineers on AI projects and best practices
Experience and Qualifications
- Bachelor's or Master's degree in Computer Science, Artificial Intelligence, Data Science, or a related field
- 1–5 years of experience in AI/ML development, including hands-on work with LLMs and generative AI technologies
- Expertise working with models such as GPT and LLaMA, and with frameworks that support RAG pipelines
- Strong programming skills in Python and experience with ML libraries such as Hugging Face Transformers, PyTorch, TensorFlow, or LangChain
- In-depth knowledge of data structures, algorithms, and best practices in model deployment
- Proven ability to troubleshoot, scale, and maintain AI pipelines in production environments
Desired Skills
- Experience deploying AI solutions in cloud environments (AWS, Azure, or GCP)
- Familiarity with vector databases, embedding models, and semantic search
- Experience in building AI-based APIs or microservices for product integration
- Strong communication and documentation skills
- Ability to handle multiple priorities in a fast-paced, agile work setting