AI Engineer

  • Permanent
  • Full time
  • Remote
  • Engineering / DevOps

About the Role

We are seeking an AI Engineer with deep, hands-on experience in Gemini, Vertex AI, and multi-cloud GenAI platforms to deliver production-grade LLM and RAG solutions at scale.

This is a delivery- and implementation-focused role, with ownership across solutioning, development, and operationalization of AI systems. The role requires strong hands-on coding, architectural thinking, and the ability to adapt as AI technologies evolve rapidly.


Location: Anywhere in India (Fully Remote)
Working Model: Offshore, with 3–4 hours of overlap with US time zones
Contract Duration: 6+ months initially, with strong potential for extension


Role Level & Expectations

Profile: Strong AI Engineer / Senior Individual Contributor

Ownership: Technical solutioning + hands-on development

Leadership: Lead by example (technical depth, architecture decisions), not people management

Focus: Delivery-first, with flexibility to perform research, experimentation, and design as required


Day-to-Day Responsibilities

Build and orchestrate LLM agents using Gemini Pro and LangChain

Design, implement, and operate RAG solutions at scale

Integrate with Gemini and Vertex AI services using API-driven architectures

Debug and optimize end-to-end LLM pipelines, including: chunking strategies, embeddings, retrieval logic, and LLM response behavior

Deliver production-ready AI services, including monitoring, rate limiting, and reliability controls

Participate in solution design and technical decision-making

Continuously evaluate and experiment with new LLMs and features as they become available

Implement AI safety, security, and compliance controls across the AI lifecycle

Collaborate with cross-functional teams across time zones


MUST-HAVE Skills & Experience

Candidates must meet the majority of the following:

GenAI & LLM Engineering

Hands-on experience integrating Gemini Pro 1.x via API endpoints

Strong experience building LLM agents using LangChain or similar frameworks

Advanced understanding of prompt engineering and agent orchestration

Proven ability to debug end-to-end LLM systems, including chunking, embeddings, retrieval, and generation layers

RAG & Vector Search

Hands-on experience with vector databases (Pinecone, Chroma, FAISS)

Strong understanding of RAG architectures at scale

Practical knowledge of RAG evaluation metrics, including: Precision@K, Recall@K, MRR, nDCG, Faithfulness, and Answer Relevance

Solid grasp of vector similarity search fundamentals

Cloud & Multi-Cloud

Strong hands-on experience with Google Cloud Platform, including: storage services, serverless compute, search, transcription, and chat/conversational services

Practical experience in multi-cloud environments

Working experience integrating AWS services alongside GCP

Programming & APIs

Excellent programming skills in Python, PySpark, or Java

Strong experience building scalable, API-based AI solutions

Solid understanding of API rate limiting, performance tuning, and reliability

AI Safety, Security & Compliance

Hands-on experience implementing AI safety and guardrails, including: input validation, PII detection and redaction, reliability and hallucination checks, and compliance controls

Knowledge of multi-layered security, including IAM, data isolation, and audit logging


NICE-TO-HAVE Skills (Strong Plus)

Experience using Terraform or Infrastructure-as-Code tools

Background in data analytics, data engineering, or data science

Knowledge of data warehousing concepts (ETL/ELT, analytics platforms)

Experience evaluating and comparing new LLMs as they enter the market

Exposure to bias mitigation techniques in enterprise AI systems

Experience operating large-scale production RAG systems


What Success Looks Like

You independently design, build, deploy, and debug Gemini-powered AI systems

You contribute meaningfully to solution architecture and technical strategy

You deliver production-ready AI solutions, not prototypes

You adapt quickly as GenAI platforms and models evolve


About Opplane

Opplane specializes in delivering advanced data-driven and AI-powered solutions for financial services, telecommunications, and reg-tech companies, accelerating their digital transformation journeys.
Our leadership team brings together Silicon Valley entrepreneurs and executives from some of the world’s top organizations, including PayPal, Xerox PARC, Amazon, Wells Fargo, and SoFi, with deep expertise in product management, data governance, privacy, machine learning, and risk management.

About the Team & Culture

🌍 Global & Multicultural – Our team unites professionals from around the world, blending diverse ideas and experiences that fuel innovation.
⚡ Startup Energy – We move fast, embrace change, and turn ideas into action in a dynamic and high-impact environment.
💪 Empowered Ownership – Each team member owns their work and contributes directly to our collective success.
🤝 Collaborative & Friendly – We foster an open, approachable culture where collaboration and curiosity thrive.