Building Rubrics


Teams design rubrics differently. Rubrics can be perfunctory, with just a list of questions to ask and simple pass/fail boxes to check, or they can be detailed, including a brief on why each question is asked, descriptions of what different levels of success look like, and what would be expected of candidates at different levels. The more detailed the rubric, the more fair and systematic the process, but the greater the challenge of designing and maintaining it.

Rubrics do not remove the need for flexibility in any given interview. They help you to score a candidate’s progress through a question, and can even allow you to be more flexible by preparing you to pivot when something unexpected comes up.

story “All good questions have variable depth. Like, there’s a point halfway through the question where it’s clear the person’s not going to get through it. So you might have a 20-minute version of a 60-minute question. That can go the other way, ‘This person is going through this so fast, let’s make it harder.’ It’s like a rip cord, where 15 minutes in I know whether I can pivot to the shorter version or longer, or end it, or add the next layer of the onion. It’s about preparing to offer extensible difficulty, variable difficulty.” —Scott Woody, former Director of Engineering, Dropbox

story “One way to increase the reach of a question bank is to structure the rubric to call out what answers you would expect at what candidate level. This can help minimize bias (everyone knows what ‘senior’ means for this), and, where it makes sense, reduces the need to have different question sets for different levels.” —Ryn Daniels, Senior Software Engineer, HashiCorp

A Sample Technical Question Rubric

Question: Write a program that prints out the elements of a binary tree in order.

What we are looking for in this question:

  • The candidate asks appropriate clarifying questions, such as what data type is stored in the tree and what the output format should be.

  • The candidate is able to independently write the initial version of the program without significant interviewer intervention or coaching.

  • There are either no bugs, or the candidate is able to find and fix all of their bugs in a proactive, independent fashion (that is, the interviewer does not have to point out that there is a bug or give them a test case).

  • The candidate uses all language features in an appropriate way—it’s OK if they make syntax errors or don’t know the names of library functions.

  • The candidate is able to accurately describe the Big O performance of their program, and they do not use unnecessary memory or perform inefficient operations such as visiting a node multiple times.

  • An excellent performance: Requires hitting all of the above bullets, and will typically result in a “solid yes” for the candidate.

  • A good performance: Requires at least four of the five bullets—typically, someone can get a “good” rating as long as the issues are largely in making up-front assumptions rather than having significant bugs or logic errors. A good performance will typically result in a “weak yes” for the candidate. The simplicity of this question means that somewhat shaky performance can also result in a “weak no.”

  • A fair performance: The answer fails on multiple topics, such as having multiple bugs and also not being able to describe the Big O performance. A fair performance on a question this easy should result in a “solid no” for the candidate.

  • A poor performance: The candidate cannot complete the problem, even with significant hinting.

important Note that in many technical questions, the rubric will get more technically specific about exactly what kinds of answers are or are not OK for each level. This question happens to be a simple one, so it doesn’t demonstrate much detail.
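A reference solution is a useful companion to a rubric like this, so interviewers share a common baseline. Here is a minimal sketch in Python (the Node class and integer values are illustrative assumptions, not part of the original question):

    from dataclasses import dataclass
    from typing import Optional

    @dataclass
    class Node:
        value: int                      # assuming integer payloads for simplicity
        left: Optional["Node"] = None
        right: Optional["Node"] = None

    def print_in_order(root: Optional[Node]) -> None:
        """Print the elements of a binary tree in order (left, node, right)."""
        if root is None:                # base case: empty subtree
            return
        print_in_order(root.left)       # left subtree first
        print(root.value)               # then the node itself
        print_in_order(root.right)      # then the right subtree

    # Example: the tree (2 <- 4 -> 7) prints 2, 4, 7.
    print_in_order(Node(4, Node(2), Node(7)))

Each node is visited exactly once, so the traversal runs in O(n) time and uses O(h) stack space for a tree of height h; that is the kind of answer the Big O bullet above is looking for.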

Evaluating Coding Questions

When writing rubrics for coding questions, keep in mind that there is a great deal more to assessment than whether or not the candidate solved the problem. Interviewers might want to evaluate the following, for example: Was the code well written? Did the candidate reason through the problem well? Did they do a good job of evaluating their own code’s correctness? Were they able to answer follow-up questions?


However, some things are not appropriate to include in the evaluation of a coding interview.

caution Interviewers should ignore anything that is an artificial result of the environment. If the candidate is writing whiteboard code, this includes the candidate’s handwriting and whether the code was visually messy. Viewing variable naming and duplicated code leniently is also wise. If you have concerns, you can ask the candidate about the choices they made; it’s likely they were just avoiding rewriting code or writing long names.

It’s not appropriate to penalize a candidate harshly just because they have a bug! It’s very hard to write correct code, especially in the context of an interview. You will most likely want to see a strong thought process and an ability to translate ideas to code and model a program’s flow of execution, but mistakes will inevitably happen. You can expect candidates to find bugs when given hints or a test case, though. If someone simply cannot trace through their own code objectively, or cannot find a bug even when it’s been pointed out to them, that indicates a serious issue.
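To make that concrete, here is a hypothetical buggy attempt at the traversal question above (reusing the illustrative Node class from the earlier sketch), along with the kind of test case an interviewer might offer as a hint:

    def print_in_order_buggy(root):
        """Buggy sketch: the right subtree is never visited."""
        if root is None:
            return
        print_in_order_buggy(root.left)
        print(root.value)
        # Bug: the recursive call on root.right is missing.

    # Hint test case: (2 <- 4 -> 7) should print 2, 4, 7,
    # but this version prints only 2 and 4.
    print_in_order_buggy(Node(4, Node(2), Node(7)))

A candidate who takes that output, traces the recursion, and spots the missing right-subtree call is showing exactly the debugging ability you want; one who cannot find the bug even after it is pointed out is showing the serious issue described above.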

A Sample Nontechnical Question Rubric

Question: Tell me about a conflict with a co-worker and how you resolved it.

What we are looking for in this question:

  • Do they identify a real conflict?

  • Can they explain the co-worker’s perspective? (Bonus points if they had to do work to discover that perspective.)

  • Can they explain what the root cause of the disagreement was and what the “right” answer to the conflict should be from a third party’s perspective?

  • Did they resolve the conflict in a constructive, low-drama way? (Alternative, bad options: avoiding facing up to the conflict; escalating prematurely; playing politics to get what they want.)

  • Was the solution they reached actually a good solution to the problem from the perspective of a neutral observer?

When asking this question, if you do not hear some of the elements the rubric is looking for, you should ask follow-up questions to touch on those areas. For example, ask how their co-worker saw the situation if they don’t explain it directly on their own.

  • An excellent performance: This requires covering all of these points, or demonstrating that the candidate could have touched on them, either without prompting or with only light to moderate prompting (for example, a raised eyebrow, a questioning look, or a subtle follow-up question designed to nudge the candidate and see if the answer was top of mind).

  • A good performance: Touches on all of the elements of the answer, but might have required heavy prompting in one area (for example, directly asking what the co-worker’s opinion or viewpoint was, or having to dig deep yourself to understand the root cause).

  • A fair performance: Similar to a “good” performance, but with more prompting needed and generally lower-quality answers, giving the interviewer lower confidence in the response (for example, if the candidate cannot give specifics or they remain vague even after prompting).

  • A poor performance: The candidate plays politics, resolves the question in their favor without accounting for the wider interest, or simply can’t give an example of having dealt with conflict.

important People have varying definitions of what a “conflict” is, so you may consider adjusting the phrasing of this question if you feel you’re not getting the right signal. Ryn Daniels recommends asking the candidate, “Tell me about a time you disagreed with a colleague on a decision” or “Tell me about a time when you changed your mind.” Each of these questions can reveal the same things: how the candidate interacts with others, and whether and to what degree the candidate is self-reflective and flexible in the face of new knowledge.

story In nontechnical interviews, rubrics tend to be referenced less directly because they cover many individual questions (whereas there tends to be one big technical question). Write-ups for these interviews therefore tend to anchor on a meta-rubric of the general qualities you’re looking for and to highlight specific questions where the candidate did poorly. —Alex Allain, Engineering Director, Dropbox

Collecting Interviewer Feedback

Interviewer Write-ups

Ideally, interviewers will record their feedback on a candidate as soon as possible after the interview. The fresher the interview is in the interviewer’s mind, the more complete and objective the feedback is likely to be. And because next steps rely on this information, delays in recording feedback slow down decision-making.

The write-up justifies the decision with concrete evidence, identifying which parts of the rubric were and weren’t met. A sample write-up based on the technical question above might look like this, for a performance evaluation of “fair”:
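    The candidate asked no clarifying questions and jumped straight into a recursive solution. The code had two bugs: the right subtree was never visited, and the empty-tree case was not handled. They did not find either bug on their own; I had to supply a test case for the first and point out the second directly. When asked about performance, they said the traversal was O(n log n) and could not justify that answer. Failing on both correctness and the Big O analysis on a question this simple is a “fair” performance per the rubric, so my recommendation is a solid no.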
