High-level overview of production work (no proprietary details):
- Unified fragmented LLM calls into a modular RAG pipeline (query optimisation → retrieval → grounded answer synthesis with citations)
- Built a multi-provider LLM client spanning Azure OpenAI + AWS Bedrock
- Added a model router + evaluation instrumentation for quality monitoring
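The three-stage pipeline shape (query optimisation → retrieval → grounded synthesis with citations) can be sketched as below. This is an illustrative stub, not the production code: the names (`optimise_query`, `retrieve`, `synthesise`, `Doc`) are hypothetical, term-overlap scoring stands in for a real vector store, and a template stands in for the LLM synthesis call.

```python
from dataclasses import dataclass

@dataclass
class Doc:
    doc_id: str
    text: str

def optimise_query(raw: str) -> str:
    # Stage 1: normalise the user query before retrieval (stub for real
    # query rewriting / expansion).
    return raw.strip().lower()

def retrieve(query: str, corpus: list[Doc], k: int = 2) -> list[Doc]:
    # Stage 2: rank documents by naive term overlap; a real pipeline would
    # hit a vector index here.
    terms = set(query.split())
    scored = sorted(
        corpus,
        key=lambda d: len(terms & set(d.text.lower().split())),
        reverse=True,
    )
    return scored[:k]

def synthesise(query: str, docs: list[Doc]) -> str:
    # Stage 3: grounded answer with citation markers; a real pipeline would
    # prompt an LLM with the retrieved passages.
    citations = ", ".join(f"[{d.doc_id}]" for d in docs)
    return f"Answer to '{query}' grounded in {citations}"

def rag_pipeline(raw_query: str, corpus: list[Doc]) -> str:
    query = optimise_query(raw_query)
    return synthesise(query, retrieve(query, corpus))
```

Each stage is a plain function with a narrow interface, so stages can be swapped or unit-tested independently — the point of unifying previously fragmented LLM calls behind one pipeline.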
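A minimal sketch of the multi-provider client plus router with evaluation instrumentation, assuming a common `complete(prompt)` interface. The stub classes below only stand in for the real Azure OpenAI and Bedrock SDK calls; the `Router` records per-request metadata (provider, latency, prompt length) of the kind a quality monitor would consume.

```python
import time
from typing import Protocol

class ChatClient(Protocol):
    # Common interface both providers are adapted to.
    def complete(self, prompt: str) -> str: ...

class AzureOpenAIClient:
    # Stub standing in for the Azure OpenAI SDK call.
    def complete(self, prompt: str) -> str:
        return f"azure:{prompt}"

class BedrockClient:
    # Stub standing in for the AWS Bedrock runtime call.
    def complete(self, prompt: str) -> str:
        return f"bedrock:{prompt}"

class Router:
    """Dispatches each request to a named provider and logs eval metadata."""

    def __init__(self, providers: dict[str, ChatClient]):
        self.providers = providers
        self.log: list[dict] = []

    def complete(self, provider: str, prompt: str) -> str:
        client = self.providers[provider]
        start = time.perf_counter()
        out = client.complete(prompt)
        self.log.append({
            "provider": provider,
            "latency_s": time.perf_counter() - start,
            "prompt_len": len(prompt),
        })
        return out
```

Keeping the provider behind a `Protocol` means the pipeline code never imports a vendor SDK directly, and the router's log gives a single place to attach quality metrics.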