You’re reading an excerpt of The Holloway Guide to Technical Recruiting and Hiring, a book by Osman (Ozzie) Osman and over 45 other contributors. It is the most authoritative resource on growing software engineering teams effectively, written by and for hiring managers, recruiters, interviewers, and candidates. Purchase the book to support the author and the ad-free Holloway reading experience. You get instant digital access, over 800 links and references, commentary and future updates, and a high-quality PDF download.

Mitigating Bias

Bias is a systematic observational error that tends to over-favor or under-favor certain types of candidates. Unlike random noise, it skews evaluations in a consistent direction. For example, likability bias might cause an interviewer to view a friendly candidate as more competent than their performance warrants.

Bias is a fundamental problem in hiring, and one key goal of interviewer training is to reduce its impact on the final outcome. Bias undermines interviewers’ ability to accurately assess candidates’ skills and fitness for a role, and thus increases the odds that you will hire the wrong people and build homogeneous teams.* Structured interviewing helps mitigate bias in interviews by using rubrics and requiring written justifications for decisions. Good training prepares interviewers to expect and detect bias in their own evaluations.

Evaluating Objectively

Even with standardization, interviews unfold fluidly, which makes them one of the hardest settings in which to collect signal while mitigating bias. Good training and practice help interviewers recognize their biases and the ways their evaluations may fall prey to them.

The halo/horn effect describes how a candidate’s performance on one part of a question influences how the interviewer evaluates the rest of their performance. To counter this, the interviewer can use the success criteria from the rubric to evaluate each part of a question separately. Only once a candidate has met the rubric’s requirements on the key elements of a question should the interviewer cut the questioning short and shift to selling.
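To make the per-part idea concrete, a rubric can be represented so that each part of a question is scored independently against its own success criteria, and no part can be skipped or mechanically inherit another part’s score. This is only a sketch; the criteria names and the 1–4 scale here are hypothetical, not from the book.

```python
# Hypothetical rubric: each part of a question has its own success
# criterion and is scored separately, so one part's score cannot
# bleed into another (mitigating the halo/horn effect).

RUBRIC = {
    "problem_restatement": "Restates the problem and clarifies inputs/outputs",
    "working_solution": "Produces a correct solution to the core problem",
    "complexity_analysis": "States time/space complexity accurately",
}

def score_interview(part_scores: dict) -> dict:
    """Check that every rubric part received its own score (1-4 scale)."""
    missing = [part for part in RUBRIC if part not in part_scores]
    if missing:
        raise ValueError(f"Unscored rubric parts: {missing}")
    invalid = {p: s for p, s in part_scores.items() if s not in (1, 2, 3, 4)}
    if invalid:
        raise ValueError(f"Scores must be 1-4: {invalid}")
    # Return scores keyed in rubric order, one independent score per part.
    return {part: part_scores[part] for part in RUBRIC}

scores = score_interview({
    "problem_restatement": 4,
    "working_solution": 3,
    "complexity_analysis": 2,
})
```

Forcing a separate score per criterion means a strong showing on one part (a 4 on the solution) cannot quietly round up a weak showing on another (a 2 on complexity analysis).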

Consistent questioning across candidates also reduces bias: the specific structure and key points should not change from interview to interview. (Small conversational changes are fine.) It helps to give interviewers clear, consistent triggers for when and why to provide hints, to avoid the temptation to tip off candidates the interviewer likes. On the flip side, if a candidate says something that doesn’t seem correct, the interviewer may reasonably give them a chance to correct themselves by probing more deeply in an ad hoc way; it’s important to do this consistently for all candidates.

It’s very easy for new interviewers to get attached to a likable candidate and round up their evaluation. This form of bias is particularly tricky because some candidates are likable or have a story that makes people want to hire them. Studies have also shown that first impressions (even from the first few moments of meeting) can stick with an interviewer for the rest of the interview. It’s critical never to let these emotions influence the hiring decision: it is not your company’s role to employ people just because you like them or because they need a job, and the evaluation process should prevent that. An interviewer should never get so wrapped up in a candidate’s case that they lose objectivity.

Handling Gut Feeling and Intuition

Important: Gut feeling and intuition are valuable, and it is not usually wise to ignore them. However, making a decision based purely on gut feeling or intuition can allow bias to lead to poor outcomes. An unexplained gut feeling indicates there’s either more signal to collect or something to discuss and understand better. A good interviewer understands where their intuition comes from at least well enough to explain it to others or to suggest ways to test whether it is accurate.

An interviewer might say, “I feel like the candidate is inflating his past work and didn’t really do what he said he did in his previous job.” This is a judgment based on intuition rather than fact. Without facts, you can’t rule out bias or determine whether the suspicion reveals a real problem with the candidate. To fact-check the intuition, the interviewer can ask extremely specific questions about past projects: “You mention you designed and deployed the internal evaluation tool at AcmeCorp in Python. Which web framework and database did you use? Why? How did you deploy it?” Vague answers to every question about work the candidate claims to have done likely indicate that the interviewer’s intuition picked up a true negative signal, one that can be documented for others.


The Danger of “Culture Fit”

Screening candidates on how they make you feel and whether they seem like someone you could be friends with is a mistake. It’s poorly correlated with job performance and invites bias.
Ammon Bartram, co-founder, Triplebyte*

If you search the term culture fit, you’ll get a lot of headlines about how important it is to hire for culture and that it’s “hard to define, but everyone knows when it is missing.”* In reality, hiring for culture fit can lead to unfair and biased hiring practices, as well as a company that’s closed off to diversity of all kinds.

Culture fit is shorthand for the many vague assessments or unexamined feelings hiring teams sometimes use to determine whether a candidate “belongs” at a company. While in most cases the intentions behind these assessments are not sinister, testing for this kind of vague quality can devolve into a “friend test” or “likability test” where candidates are assessed not on potential to do the job but on similarity, kinship, or familiarity to the assessing employee.

Danger: Culture-fit assessments are enormous sources of bias, including affinity bias, where interviewers and decision-makers favor candidates who are similar to themselves, and likability bias, where candidates who are pleasant to spend time with are viewed as more competent than those who aren’t. The “person I would enjoy hanging out with” and the “person best equipped for the job” are not the same.

At one point, Stripe had a “Sunday test” to identify candidates “who make others want to be around them,” but ended up moving away from that test since it was commonly misinterpreted.

If your goal is candidate-company fit, rather than looking for culture fit, you can assess for values alignment. A focus on values, instead of on culture, has a better chance of resulting in a diverse group of people who are motivated by the same things.

Shifting our focus from ‘culture fit’ to ‘values fit’ helps us hire people who share our goals, not necessarily our viewpoints or backgrounds.
Aubrey Blanche, Global Head of Diversity and Inclusion, Atlassian*

Having a defined set of values can help you interview and assess candidates in a fair and effective way to find those who will succeed at your company. For example, for companies that value people who focus on supporting the company’s mission, the hiring team can seek candidates who show excitement and passion about that mission.

Additionally, you can assess candidates not just for compatibility with your company values, but also for:

  • Their cultural contribution or culture add. This refers to what new viewpoints and opinions a candidate might bring to your company.

  • Their adaptability to different cultures. Some research has indicated that this type of adaptability is more important than initial fit.*

Anchoring Write-ups in Evidence

The goal of the notes taken during the interview is to allow the reviewer to produce a final write-up that describes what happened, what conclusions the interviewer came to, and why. It’s particularly important for interviewers to:

  • Summarize what happened in the interview so that others can draw their own conclusions.

  • Justify their decision with specific evidence based on what the candidate said or did during the interview and how it relates to the standards outlined in the rubric for the question.

  • Explain why they did not choose other similar options. For example, if an interviewer is a “solid yes” on a candidate, why weren’t they a “strong yes” or a “weak yes”? Forcing a clear justification relative to the alternatives makes the write-up much clearer and avoids fuzzy thinking that leads to bias.

Building rubrics and using them in interviews gives clarity and structure to these notes and prevents interviewers from veering off in unhelpful directions.
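One way to enforce all three elements is to make the write-up form itself reject a decision that lacks a summary, specific evidence, or a comparison against adjacent options. The sketch below assumes hypothetical field names and a five-point decision scale; neither is prescribed by the book.

```python
from dataclasses import dataclass, field

# Hypothetical decision scale, ordered weakest-no to strongest-yes.
DECISIONS = ("strong_no", "weak_no", "weak_yes", "solid_yes", "strong_yes")

@dataclass
class WriteUp:
    """Interview write-up that must anchor its decision in evidence."""
    summary: str                 # what happened, so others can draw conclusions
    decision: str                # one of DECISIONS
    evidence: list = field(default_factory=list)  # what the candidate said/did
    why_not_adjacent: str = ""   # why not the next decision up or down

    def validate(self) -> None:
        if self.decision not in DECISIONS:
            raise ValueError(f"Unknown decision: {self.decision}")
        if not self.summary.strip():
            raise ValueError("Summarize what happened in the interview")
        if not self.evidence:
            raise ValueError("Decision must cite specific evidence from the interview")
        if not self.why_not_adjacent.strip():
            raise ValueError("Explain why the adjacent options were not chosen")

write_up = WriteUp(
    summary="Candidate solved the core problem with one hint on edge cases.",
    decision="solid_yes",
    evidence=["Derived the hashing approach unprompted",
              "Needed a hint on empty-input handling"],
    why_not_adjacent="Not strong_yes: complexity analysis required prompting.",
)
write_up.validate()  # raises ValueError if any required element is missing
```

Rejecting the write-up at submission time, rather than during debrief, forces the justification-relative-to-alternatives while the interview is still fresh in the interviewer’s mind.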

The Medium Engineering team detailed the various technical and nontechnical capabilities they interview for in a three-part guide that provides specific rubrics for evaluating and grading each one.
