I’ve conducted a lot of interviews in my career. And every single time, after wrapping up a conversation with a candidate, I’m faced with the same task: synthesizing my scattered notes into something coherent enough to share with the recruiting team. It’s not hard work, exactly — just tedious. Especially when you’re coming off back-to-back interviews and need to context-switch between candidates.
So I built a Gemini Gem to handle it.
What Gems actually are
Gemini Gems are Google’s version of custom AI assistants (similar to Projects in Claude or ChatGPT). The basic idea is that you can preload context, instructions, and even a personality so you don’t have to repeat yourself every time you start a new chat. You’re essentially hiring the AI for a specific job to be done.
For this one, I wanted something that would help me articulate interview feedback without making the process feel robotic. So I gave it a friendly, warm tone and a clear set of tasks: ask for the candidate’s name and rating, figure out what dimensions I assessed them on, synthesize my rambling feedback, and then format everything into a polished summary I could send directly to the hiring team.
One thing I appreciate about Gems is that you can share them. If you’ve built something useful — like a product expert assistant for your team — you can just hand people a link instead of making them recreate it from scratch. It becomes a first line of defense for common questions.
Watching it work
I tested this with a deliberately ridiculous example: an interview with Santa Claus for an engineering manager role. (I gave him a “no,” for the record — more on that in a second!)
The Gem started by asking me the basics: candidate name, role, and my overall rating. Then it asked what questions I’d used during the interview. I walked through three behavioral questions I typically ask candidates about project leadership, handling failure, and team culture.
Then came the interesting part. I gave the Gem my actual feedback — stream of consciousness, the way I’d talk through it in a debrief. I mentioned Santa’s strengths (big program leadership experience, modernizing legacy platforms, good instincts on psychological safety), but also my concerns (tendency to lean whimsical when things get tough, shipping to production without full testing, not convinced about his experience with continuous delivery cycles).
The Gem took all of that and turned it into a three-paragraph summary that was ready to send. It captured the nuance, organized the feedback logically, and maintained a professional tone without losing the substance of what I’d said.
What struck me about this
This isn’t a complicated use case. I’m not asking the AI to make decisions or generate creative content. I’m asking it to take my messy verbal feedback and format it into something readable. But that’s exactly the kind of repetitive task where AI excels.
What I found interesting is how this flips the usual dynamic. Normally, I’m asking the AI questions and it’s responding. Here, it’s the other way around — the Gem is interviewing me. That shift changes how the interaction feels. It’s less like using a tool and more like delegating to an assistant who knows the script.
The other thing that stood out: you can adapt this pattern to almost any feedback scenario. Performance reviews, project debriefs, intake forms — anywhere you’re synthesizing observations into structured notes. The core mechanics are the same: the AI asks clarifying questions, collects context, and formats your input into something polished.
Where this fits
I’m not saying this replaces thoughtful evaluation. The Gem isn’t making judgments about candidates — that’s still my job. But it is removing friction from the documentation process. And that matters when you’re trying to maintain consistency across multiple interviews or when the administrative overhead starts eating into the time you’d rather spend on actual evaluation.
I think there’s something worth paying attention to here about how we think about AI assistants. This isn’t about automation for automation’s sake. It’s about identifying the specific, repetitive parts of your workflow where you’re essentially following a script — and letting AI handle those so you can focus on the parts that actually require judgment.
Have you built any custom AI assistants like this for your own workflows? What repetitive tasks have you tried delegating to them, and did they actually stick?









