Hi! I'm Darsh's AI assistant. Ask me anything about his current work at Kyndryl, research, blogs, or career journey.
AI responses are based on Darsh's public profile and may not capture every detail.
How this works
You're talking to a multi-agent system I built. Two response modes — pick a tab to see how each one is wired up.
~3 s · 1 LLM call · pre-baked context · low cost
Quick mode skips live retrieval. The Worker injects a pre-baked Markdown summary of my CV + public profile into a single GPT-4o-mini call. That summary is regenerated every Monday at 06:00 UTC by a GitHub Action that runs the full RAG + web-search pipeline once and commits the result back to the repo.
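A minimal sketch of what the Quick-mode Worker does per request, assuming the shape described above: the pre-baked summary goes into the system prompt of one chat-completion payload. The constant name, prompt wording, and function name are illustrative assumptions, not the actual code.

```typescript
// Pre-baked Markdown summary of CV + public profile, regenerated weekly
// by the GitHub Action and committed to the repo. Placeholder content.
const PROFILE_SUMMARY = "# Darsh — CV + public profile\n...";

// Build the single-call request body. All context rides in the system
// prompt, so no retrieval or embedding happens at request time.
function buildQuickModeRequest(question: string) {
  return {
    model: "gpt-4o-mini",
    messages: [
      {
        role: "system",
        content:
          "Answer questions about Darsh using only this profile summary:\n\n" +
          PROFILE_SUMMARY,
      },
      { role: "user", content: question },
    ],
  };
}
```

The whole mode is one `fetch` to the chat-completions endpoint with this payload, which is why it stays around three seconds.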
~10 s · multi-agent · parallel sub-agents · live retrieval
Detailed mode runs two sub-agents in parallel. The RAG agent embeds your question with text-embedding-3-small, queries a Cloudflare Vectorize index of my CV + blog posts, and returns raw chunks (no intermediate summarisation, so the orchestrator never loses signal). The web agent fires two Tavily searches simultaneously — one targeted at LinkedIn / ArXiv / Medium, one general. The orchestrator fuses both, prefers RAG when sources disagree, and writes the final answer.
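The concurrency described above can be sketched like this. The types, function names, and the simple concatenation-with-RAG-first fusion rule are assumptions that mirror the prose; the real orchestrator's conflict resolution is an LLM call, not a list merge. Search clients are injected as parameters to keep the sketch self-contained.

```typescript
interface AgentResult {
  source: "rag" | "web";
  snippets: string[];
}

// Web agent: two searches fired simultaneously — one scoped to
// LinkedIn / ArXiv / Medium, one general. `search` stands in for a
// Tavily client call (hypothetical signature).
async function webAgent(
  question: string,
  search: (q: string) => Promise<string[]>
): Promise<AgentResult> {
  const targeted = `${question} site:linkedin.com OR site:arxiv.org OR site:medium.com`;
  const [a, b] = await Promise.all([search(targeted), search(question)]);
  return { source: "web", snippets: [...a, ...b] };
}

// Orchestrator: both sub-agents run concurrently via Promise.all, then
// results are fused with RAG chunks first, modeling the "prefer RAG on
// disagreement" rule as simple precedence.
async function runDetailedMode(
  question: string,
  ragAgent: (q: string) => Promise<AgentResult>,
  web: (q: string) => Promise<AgentResult>
): Promise<string[]> {
  const [rag, webRes] = await Promise.all([ragAgent(question), web(question)]);
  return [...rag.snippets, ...webRes.snippets];
}
```

`Promise.all` is what makes the mode's latency roughly max(RAG, web) rather than their sum; passing raw RAG chunks straight through is the "no intermediate summarisation" choice the text describes.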