Agentic Interview
AI-powered technical interviews with lens-based evaluation and DSPy-optimized scoring. Not a quiz — a structured conversation that reasons about candidates.
The Problem
Technical interviews are inconsistent, undocumented, and prone to interviewer bias. The same candidate gets wildly different results from different interviewers because there's no structured evaluation framework — just gut calls and vibes. Organizations lose good candidates and hire bad ones because the process isn't designed to be reliable.
The Build
A multi-agent system where a QuestionsAgent drives the interview, an EvaluatorAgent scores each answer against keypoints, and an OrchestratorAgent manages session state. Two evaluation modes: fast heuristic matching and LLM-powered semantic scoring via OpenAI or Anthropic. A separate lens analysis pipeline applies configurable analytical frameworks to the full interview transcript — not just individual answers — to surface structured insights about candidate patterns. DSPy optimization pre-compiles the evaluator prompts for consistent, cost-efficient scoring at scale. Full audit trail: every transcript, evaluation, and lens result is persisted to PostgreSQL via SQLAlchemy.
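The fast heuristic mode can be pictured as simple keypoint matching against each answer. This is only a sketch: the function name `score_answer` and the keypoint/result shapes are illustrative assumptions, not the project's actual API.

```python
# Illustrative sketch of the fast heuristic evaluation mode.
# Names and data shapes are assumptions, not the project's real API.

def score_answer(answer: str, keypoints: list[str]) -> dict:
    """Score an answer by how many expected keypoints it mentions."""
    text = answer.lower()
    hits = [kp for kp in keypoints if kp.lower() in text]
    missed = [kp for kp in keypoints if kp not in hits]
    coverage = len(hits) / len(keypoints) if keypoints else 0.0
    return {"matched": hits, "missed": missed, "score": round(coverage, 2)}

result = score_answer(
    "We shard by user id and add a read replica for hot queries.",
    ["shard", "replica", "cache"],
)
# result["score"] -> 0.67 (two of three keypoints matched)
```

The LLM-powered mode would replace the substring check with a semantic judgment per keypoint, keeping the same matched/missed/score output so both modes are interchangeable downstream.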
What Makes It Different
Lens-based analysis is the differentiator. Most AI interview tools grade individual answers. This one applies multiple analytical frameworks to the full conversation — like overlaying different lenses on the same transcript to see what each one reveals. The DSPy-optimized prompts mean the evaluation rubric is genuinely tunable: you can train it on your own scoring preferences, not just use someone else's defaults. Multi-tenant, full audit trail, export to CSV/JSON.
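The lens idea can be sketched as a set of functions that each take the whole transcript and return a structured insight. Everything here is hypothetical (the lens names, the transcript shape, `apply_lenses`); it only illustrates the pattern of overlaying independent frameworks on one conversation.

```python
# Hypothetical sketch of lens-based analysis: each "lens" consumes the
# full transcript, not a single answer. All names are illustrative.
from typing import Callable

Transcript = list[dict]  # e.g. [{"q": ..., "a": ...}, ...]
Lens = Callable[[Transcript], dict]

def depth_lens(transcript: Transcript) -> dict:
    # Crude proxy: do answers get more elaborated as the interview goes on?
    lengths = [len(turn["a"].split()) for turn in transcript]
    return {"lens": "depth", "trend": "deepening" if lengths[-1] > lengths[0] else "flat"}

def hedging_lens(transcript: Transcript) -> dict:
    # Count uncertainty markers across the whole conversation.
    markers = ("maybe", "probably", "i think")
    count = sum(turn["a"].lower().count(m) for turn in transcript for m in markers)
    return {"lens": "hedging", "marker_count": count}

def apply_lenses(transcript: Transcript, lenses: list[Lens]) -> list[dict]:
    # Each lens sees the same transcript; results are independent views.
    return [lens(transcript) for lens in lenses]

transcript = [
    {"q": "How would you scale this?", "a": "Maybe add a cache."},
    {"q": "What breaks first?", "a": "I think the database, probably the write path."},
]
insights = apply_lenses(transcript, [depth_lens, hedging_lens])
```

In the real system each lens would presumably be an LLM call with its own framework prompt, but the composition stays the same: one transcript in, one structured insight out per lens.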