The people included in an interview loop convey a great deal about your company to candidates. These people must successfully interview candidates while also positively representing the company. The choice of interview panel members has an outsize effect on the candidate's ultimate assessment of a company and thus plays a significant role in their final decision should you extend an offer.
Your entire team should conduct interviews. Everybody. If you don't want some people to interview, ask yourself why. If you're worried about how they're representing the company, there's a bigger issue at hand. –Marco Rogers, veteran engineering manager*
It's beneficial to have everyone involved in some way in the recruiting process. Exposure to the recruiting process provides an opportunity to familiarize the people who work at your company with the expectations you have for employees, because the interview process filters for people who would do well at your company. That said, successful interviews require that interviewer roles be determined carefully based on the skills of individual interviewers. Some people excel at evaluating a candidate's ability to demonstrate a specific skill set. Other roles are purely recruitment: social events like lunch with candidates or getting on the phone to answer their questions. Not everyone has the skills or qualifications to do both.
Story: "The interview process is an opportunity for everyone to better understand the company's values and how it evaluates, and it helps you wrestle with questions like, 'How do I level up and evaluate my own performance?' The interview process should map to your performance review process. What the performance review grades you on, the interview process should evaluate on. These should be aligned." –Scott Woody, former Director of Engineering, Dropbox
Danger: Note that forcing people to interview candidates without getting their buy-in on the importance of interviewing can be a bad experience for everyone involved. Interviewers will feel like their time is being wasted, and candidates will be exposed to people who don't want to talk to them. This thread on Blind has several comments about how employees who have been forced to conduct interviews against their wishes create a bad experience for candidates: "interviewing candidates became an unpleasant burden no one really enjoys, but [everyone has] to do anyway to report in the next performance cycle." Likewise, letting truly low-quality candidates progress into your funnel wastes interviewers' time and causes them to lose buy-in. Ideally, interviewers will view the process as a high-impact activity and as a privilege they have to earn and maintain.
Also note that while, ideally, everyone should be involved in the interview process to some degree, not everyone is suited to conduct interviews. If an engineer has a poor bedside manner that could sour the candidate's opinion of the company, that person should not be interviewing. Such people often self-identify as being poor interviewers or make it known that they strongly dislike interviewing. Keeping these people as interviewers can be extremely costly: they might take their frustration out on the candidate. Alternatively, some might not be aware that interviewing isn't their strong suit. In these cases, their interview conclusions tend to be either very positive or very negative but offer low signal about why. Internal recruiters often get to know interviewers' styles, so if you're assembling a panel, it can be helpful to talk to them about who might need to be added or removed.
Important: If someone is not happy at the company or appears to be bitter about being asked to interview, they shouldn't be interviewing. Anyone who has given their notice to leave the company should never interview candidates.
Story: "There are certainly people who shouldn't be interviewing, and it's people who are not as self-reflective as they should be: who are not aware of their biases or are unwilling to engage in a conversation about bias. Just being aware goes a long way to reducing their effect. There are so many people who say, 'I don't believe in biases.' That person should not be charged with evaluating talent, assessing skills, or judging others in any way." –Benjamin Reitzammer, freelance CTO
Good interviewers are always communicating their excitement and passion for the company; they are excited about the company and have their own compelling reasons to work there. They likely have a captivating story about why they work there and what will excite a candidate about both the company and the role itself (if the role is on the same team as the interviewer). These personal stories help humanize the company.
Danger: It's important not to oversell your company, however; gross misrepresentations about the day-to-day reality of the company will cause pain and trouble for everyone after a candidate joins and learns the interviewer misled them.
Caution: Ensuring that your interviewers represent the diversity of your company creates a better interview process. There is a balance here. Your intention might be to ensure that you gather a diverse set of opinions on every candidate, but it's critical to avoid overloading underrepresented folks by asking them to take on a disproportionate number of interviews (and thus taking time away from their other day-to-day responsibilities). Creating an inaccurate representation of your company's diversity does no good; misleading candidates can result in low retention rates, not to mention resentment. See Diversity and Inclusion in Tech for more.
It's probably not reasonable to have senior engineers involved in every level of the hiring process, even if they could be calibrated to interview any level of candidate; senior time is worth a lot. Instead, hiring managers may wish to consider how to deploy levels of seniority and determine the best use of senior engineers' time in any given situation.
Having worked with hundreds of colleagues and made numerous hiring decisions, senior engineers are better equipped to measure and evaluate certain signals like trustworthiness and long-term potential or talent. Senior engineers may join later in the hiring process to conduct behavioral interviews or more complex technical non-coding interviews, to assess architectural questions, or to dive deep into why a candidate made certain technical decisions in the past. Seniority at the company is particularly important because it brings a deeper understanding of company values to the process.
On the other hand, many technical skills can be assessed with less experience and only basic training, so it's acceptable for junior engineers to conduct earlier screens. That said, technical assessments can go wrong: junior engineers may not have enough experience to know where they can make high-confidence assessments, where they may have inadvertent bias, or where they may be prone to overgeneralization. The best interviewers are able to put their egos aside and ask questions of the candidate, but not all junior engineers are willing to appear as if there is something they don't understand.
Controversy: Should junior engineers ever interview senior candidates? Asking a junior engineer to evaluate a senior engineer's merit can be fraught. Even if the interviewer has been properly trained, they might misunderstand the abilities necessary to do a job they're unfamiliar with, especially if they are separated from the candidate by fifteen years of experience. Having a junior engineer involved may also hurt both the candidate experience and the candidate's perception of the company; for example, the more senior person might encounter ill-formed questions from a more junior interviewer and make a reverse judgment, like "this person doesn't value me."
Caution: A common pitfall for junior interviewers is to grade candidates' answers based on how they themselves would solve the problem; they may be very literal in interpreting right and wrong responses based on a rubric. Senior and out-of-the-box-thinking candidates often do go "off book" on interview questions, and this can signal serious creativity. However, interviewers should be able to distinguish brilliant out-of-the-box thinking from unwillingness to do necessary laborious or technical work. This isn't always easy and usually requires follow-up questions. Experienced interviewers tend to find these interviews less stressful to conduct than more junior people do, and they are more likely to be able to dig deep enough to get the right signal.
Creating established rubrics for each interview format and making interviewers aware of potential biases will help protect against these pitfalls.
Better candidate experiences result from having some interviewers be individuals the candidate would work with directly if hired. This can give the candidate a sense of the team environment and also what their day-to-day will be like. Interviewers who know that they might work with a given candidate tend to be more invested and engaged.
Some companies compose the interview panel entirely of the team the candidate would be working with. Others include a mix of those who will work closely with the new employee and trusted individuals from other areas of the company. Having more people from the team increases buy-in and incentive alignmentβthe team is part of the process and will have to work with the candidate. The team will best know what the candidate needs to be capable of to succeed. On the other hand, this can lead to situations where separate teams within a company develop different hiring philosophies. It can also make it harder to schedule candidates because it reduces the pool of available interviewers.
The balance between local empowerment of individual teams and global company standards will vary from company to company based on size and culture.
Important: If you are hiring for a senior role with direct reports, it is wise to include one or more of the people who will work for the candidate.
In every interview I've ever had with another company, I've met my potential boss and several peers. But rarely have I met anyone who would be working for me. Google turns this approach upside down. You'll probably meet your prospective manager (where possible; for some large job groups like "software engineer" or "account strategist" there is no single hiring manager) and a peer, but more important is meeting one or two of the people who will work for you. In a way, their assessments are more important than anyone else's; after all, they're going to have to live with you. This sends a strong signal to candidates about Google being nonhierarchical, and it also helps prevent cronyism, where managers hire their old buddies for their new teams. We find that the best candidates leave subordinates feeling inspired or excited to learn from them. –Laszlo Bock, Senior VP of People Operations, Google*
Story: "If being able to guide or mentor others is something you expect of senior engineers, having more junior people interview them can be a good way to figure out how well they can explain things and how they treat people who are junior to them, which can be useful signal." –Ryn Daniels, Senior Software Engineer, HashiCorp
If you decide that it's important to the team or company that everyone be involved in the interview process, keep in mind that everyone needs to be trained. Interviewer training and calibration are necessary for juniors and seniors alike, on a per-role basis.
Story: "It's ridiculous to expect engineers to be competent interviewers with little to no training. To the extent you can be good at interviewing, it comes from repetition, learning the failure modes and bolstering against those. Experience just makes you much better at navigating the myriad situations. If this is your first remote interview, you're not going to be able to tell if it's bad because of the candidate or because it's remote. Training often doesn't exist, or it's done really poorly. If you can't invest in training engineers to interview, you can't expect to get any real signal on candidates." –Scott Woody, former Director of Engineering, Dropbox
Effective interviewing is a learned skill. It requires a mix of technical knowledge, emotional intelligence, and thinking on your feet, while being fair and rational. None of this comes easily, and doing it all at once is really difficult. Having experienced employees who have done interviews elsewhere go through your organizationβs interviewer training will help ensure alignment.
Calibration is the process of developing the ability to accurately assess whether a candidate will succeed in a role. A calibrated interviewer will not only be able to assess a candidate's interview performance but also draw an informed conclusion about whether the candidate should be hired for a given role.
Danger: Candidates may have poor experiences if they are exposed to anyone involved in the interview process who is interviewing for the first time or who is not prepared to assess with respect to the specific role. Whether it's an interviewer, hiring manager, or hiring committee member, without calibration, they will be unable to make a data-based recommendation.
Story: "Uncalibrated people are not aware of common biases. It's as much about self-awareness as it is about training. Broadly, engineers are pretty good at sussing out whether a person is like them, even if they're not particularly trained. Engineers can be trained to be more flexible in who they can evaluate. Backend engineers self-identify as being unable to evaluate frontend. That's common. So how are they going to bridge that gap? If they're trained, they can." –Scott Woody, former Director of Engineering, Dropbox
Calibration occurs through a mix of:
Interview training and shadow interviewing.
Question banks and structured rubrics that specify what level of performance is expected on different questions. Good rubrics draw on the collective experience of seasoned interviewers and recruiters.
Mock internal interviews. You can train new interviewers and test new questions by mock interviewing candidates who are already on your team.
Giving data and feedback to interviewers, like how their ratings for candidates compare to other interviewers' ratings and how predictive their ratings are. This can help interviewers detect things like whether they are too lenient or too strict or are otherwise highly inconsistent with other interviewers.
We don't just look at the candidate side of hiring. Interviewers also receive feedback on their own personal ability to predict whether someone should be hired. Every interviewer sees a record of the interview scores they have given in the past and whether those people were hired or not. –Laszlo Bock, former SVP of People Operations, Google*
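The kind of interviewer feedback described above can be computed from past interview records. Below is a minimal sketch, not drawn from the book, of how a team might calculate each interviewer's average score, leniency relative to peers, and how often their scores agreed with the eventual hire decision; the field names and the 1-to-4 scoring scale are hypothetical.

```python
# A minimal sketch of interviewer calibration feedback: compare each
# interviewer's ratings to their peers' and check how well those ratings
# predicted the final hire decision. Field names are hypothetical.
from collections import defaultdict
from statistics import mean

# Each record: one interview, the 1-4 score given, and the hire outcome.
feedback = [
    {"interviewer": "amara", "score": 4, "hired": True},
    {"interviewer": "amara", "score": 2, "hired": False},
    {"interviewer": "jun",   "score": 4, "hired": False},
    {"interviewer": "jun",   "score": 3, "hired": True},
    # ...more historical interview records...
]

def interviewer_report(records, hire_threshold=3):
    """Per-interviewer average score, leniency vs. peers, and how often a
    passing score (>= hire_threshold) agreed with the actual hire outcome."""
    by_person = defaultdict(list)
    for record in records:
        by_person[record["interviewer"]].append(record)

    overall_avg = mean(r["score"] for r in records)
    report = {}
    for person, rows in by_person.items():
        avg = mean(r["score"] for r in rows)
        agreement = mean(
            (r["score"] >= hire_threshold) == r["hired"] for r in rows
        )
        report[person] = {
            "avg_score": round(avg, 2),
            "leniency_vs_peers": round(avg - overall_avg, 2),
            "outcome_agreement": round(agreement, 2),
        }
    return report

print(interviewer_report(feedback))
```

Even a rough report like this makes it easier to spot interviewers who are consistently more lenient or stricter than their peers, or whose scores rarely line up with eventual outcomes.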
Interviewers and decision-makers perpetually improve their calibration as they gain more experience and interview more candidates, so no one is ever "fully calibrated." But setting some minimum level of calibration helps any person be equipped to draw effective conclusions as they assess a candidate. For example, it helps to know whether the hiring team considers a particular question easy or hard, or whether a particular type of mistake should disqualify a candidate or not.
Danger: If your only source of calibration is the candidates you've seen, you could be at risk of incorrect calibration. This is especially likely to happen with new roles or new interview questions, where your company lacks a baseline. It is also likely if you are letting weak candidates into your funnel, because interviewers might find their perceptions shifting over time. They might then be overly impressed by a candidate who is "above average" (without realizing that the "average" is really poor for this particular search). This is sometimes referred to as "hiring the tallest of the bunch."
One step to help counter this is to incorporate recordings of real-life interviews into your calibration process; interviewing.io published a few mock interviews on their platform.
It's helpful to provide a general presentation or discussion of interviewing to everyone who will be involved. This briefing can be delivered by a hiring manager, an experienced interviewer, or a recruiter and can include:
Why hiring (and interviewing in particular) is so crucial. Your team will spend valuable time interviewing candidates, and they should view it as an important responsibility (and not a chore or a waste of time).
The existence and risks of unconscious bias and how it can affect judgments, as well as strategies for mitigating bias.
The benefits of structured interviewing.
What their role is as interviewers, and how that fits into the hiring process at large.
What the interview process will look like.
The fact that interviewers function as team ambassadors, and that in addition to assessing candidates, their actions may encourage or discourage candidates from joining the company.
What the interviewers' role will be in the hiring decision. For instance, will they need to participate in debrief or huddle sessions after an interview? This information sets the right expectations about how hiring decisions will be made.
The legal considerations for interviewers, including what questions the law restricts interviewers from asking.
Chuck Groom, Director of Engineering at Truva, offers a useful overview of the interview process that provides additional perspective.
An important part of interview training is ensuring interviewers understand the importance of their role in building a team. Keith Adams, Chief Architect at Slack, once interviewed a candidate who greatly impressed him: "I rated him a strong hire." But during the interview debrief, it turned out that the candidate hadn't impressed the rest of the interviewers. Keith made a passionate case anyway, and Facebook hired the candidate, who went on to have massive impact at the company. "To any engineer, interviewing can feel like a 45-minute interruption to your immediate work. But the people you help hire are a huge part of your legacy. Hiring that candidate was the most important thing I did that year," Keith said. (Keith's bar for impactful work is pretty high.)
Next, interviewers benefit from having information specific to each role or type of interview. What are the roles they might help interview for? What types of interviews will they conduct, and what primary competencies will they be assessing? Are there secondary things they may be looking for as well (maybe they are assessing technical skills, but should still note arrogance in a candidate)? Are there predetermined, structured interview questions to follow?
Occasional uncomfortable interactions may help make an interview more functional. We are taught never to interrupt, yet it's not productive to let a candidate go down the wrong conversational rabbit hole; it's appropriate instead to intervene and steer the discussion in a more helpful direction. This can be even harder to do in behavioral interviews. Most people are very good at talking at a high level about their experience but don't go into any significant depth. It's the interviewer's job to help the candidate tell their story, which might mean cutting them off when they go off course. Knowing when to steer more and when to steer less comes with time and experience.
Story: "When I start an interview, I often tell the candidate I want to get certain kinds of signals. This means I might cut off certain directions of discussions or lines of thought to help with that. It's not me being rude; it's just me trying to make the best use of our time." –Scott Woody, former Director of Engineering, Dropbox
A typical interview process may include four to eight interviews, where each interview is between 45 and 60 minutes. That means there's never enough time to get every data point you'd like, and because interviews provide noisy signal, you can only have so much confidence in the information you collect. Therefore, it's important to prioritize both what really matters to you and what you can reasonably assess.
Caution: An interviewer may not like the style or "attitude" of a candidate's response to a question. The candidate may deflect, go on a tangent, or even refuse to answer. This definitely means something is awry, but the cause may not be what you expect. On the interviewer or company side, an excessively junior interviewer may ask a question in a way that doesn't suit a senior candidate; it could be a poorly designed question; or possibly the hiring team failed to convey accurate expectations for the interview to the candidate. On the candidate side, personality style or inadequate technical ability may result in an unwillingness to dive into technical material, even in response to a well-phrased question. The best solution is for interviewers to ask themselves, "What is the signal we want to get here?" and to focus on that. As an interviewer, you help avoid generating such candidate responses by knowing your role in the interviewing process, what signal you want to get, and why you're asking these specific questions.
A good alignment process will usefully communicate to interviewers the scope and nature of the role:
The scope of the role is a reflection of how a role fits into an organization. Scope often aligns to a particular rung of a career ladder and conveys how much responsibility the candidate will be trusted to handle. Senior members of an organization can best discern whether a candidate is well suited to the scope of the role, especially in interviews designed to assess technical skills beyond coding and nontechnical skills.
The nature of the role is a reflection of the specific technical skills that the role requires. Individuals directly familiar with the work and the challenges facing the team can best discern whether a candidate is well suited to the nature of the role.
To reduce noise, Ammon Bartram at TripleByte suggests looking for signal on max skill, not average or minimal skill (or on where a candidate may make mistakes). This approach focuses on "looking for strong reasons to say yes and not worrying so much about technical areas where the candidate was weak."
For an in-depth look at collecting signal, David Anderson's thoughts on hiring for leadership roles at Amazon include a host of specific questions that build on each other, a practice that TripleByte also encourages.
Mock interviews are educational simulations in which individuals act out the roles of interviewer and candidate. Successful mock interviews resemble real interviews as closely as possible. Either potential candidates or potential interviewers can use them to prepare for the real thing. When preparing potential interviewers, companies may film mock interviews for the benefit of other trainees. Best practice includes following an observed mock interview session, whether live or filmed, with a debrief in which senior interviewers discuss what was successful and unsuccessful in the exercise.
Within a technical organization, mock interviews help junior engineers practice their interviewing skills and start to recognize what makes an interview successful or unsuccessful, focusing both on interviewer behavior and on how candidates navigate questions. This means that the person playing the candidate ideally will embody typical responses to the question, both successful and unsuccessful. Debriefs then focus on any pitfalls, patterns, or typical responses, and highlight what success looks like.
Behavioral interviewer training can be more difficult, and practicing with co-workers can be extremely helpful. Trainees practice how to probe into answers deeply, helping to build confidence around steering the candidate. The art is in knowing when to steer, and when you're steering so much that you're making the interview too easy; this comes with time and experience, and mock interviews simulate that.
Shadow interviews are the next step. They provide crucial preparation for less experienced interviewers.
A shadow interview is an opportunity for a new interviewer (the shadow) to observe a more experienced interviewer (the lead) working with a candidate. Later, a trainee may lead an interview with a more experienced interviewer shadowing them in a reverse shadow interview.
In effective training, the shadow takes full part in the non-evaluative parts of the interview, including introductions, selling the candidate on the company, and answering the candidate's questions, and remains attentive during the remainder.
Important: Having two interviewers present might seem awkward or intimidating to candidates. Just as with pair interviewing, it's important to prepare the candidate by letting them know there will be a shadow present, and that the purpose is to help train interviewers and provide a consistent experience for interviewees. Both interviewers can introduce themselves, and then the lead can clarify that one interviewer will conduct the interview and the other will mostly observe. Explaining that the shadow interviewer isn't expected to say much (if anything) will help make sure the candidate doesn't misinterpret their behavior as silent judgment.
After the shadow, the lead and shadow debrief by talking through what they saw, having the trainee share their thoughts about the candidate, and discussing the nuances of the question flow. A trainee might do this five or ten times and then be ready for the reverse shadow interview, with the more experienced interviewer providing one-on-one feedback afterward. Successfully completing the shadow interviewing process will prepare a new interviewer to go solo.
Many companies use similar processes, but Google has published their own shadowing process as part of an extensive online resource for training interviewers, with the caveat that it is designed in line with their own, atypical interviewing philosophy and infrastructure (for example, they use independent hiring committees, which may not be practical for other organizations).
If you're interested in improving as an interviewer, read Alex Allain's work on the Holloway blog.
Good interview questions help sell the specific technical challenges of your team and ensure that you are looking at the right things in hiring. How do you get the signal you need and help to sell candidates on the priorities and competencies of the company?
Structured interviewing is the practice of applying the same assessment methods to review the competencies and traits of every candidate for a given role. This requires a calibrated set of interview questions that interviewers pose consistently to candidates, as well as clear criteria for assessing candidates' responses. In addition, interviewers must have familiarity with the question set and any associated expectations. Studies have shown that structured interviewing more effectively predicts job performance and is less prone to bias than letting interviewers casually decide what questions to ask.*
The purpose of structured interviewing is to improve the signal-to-noise ratio.
Noise is an incidental error that can distract from substantively useful information. Factors that can produce noise in an interview process include an interview starting off on the wrong foot, the candidate having seen a similar problem before, the interviewer's mood, and so on. Any particular candidate's performance assessment can vary from interview to interview, as demonstrated in a 2016 study by interviewing.io, and noise can account for many of those fluctuations.
Conducting multiple structured interviews can minimize noise. Google found that in their process, conducting up to four interviews increased their hiring accuracy; beyond four, additional interviews improved accuracy only marginally.
Structured interviews often pull from question banks to ensure that all candidates are interviewed the same way.
A question bank is a collection of scripted interview topics and problems. An effective question bank also includes examples of expected answers and a rubric by which to assess candidatesβ actual answers. Question banks will be tailored to reflect the nature of a companyβs technical challenges and needs, and they take time to build.
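To make the idea concrete, here is a minimal sketch, not taken from the book or any particular tool, of how a question-bank entry might be stored so that every interviewer sees the same prompt, follow-up questions, and grading rubric; the example question, field names, and role names are hypothetical.

```python
# A sketch of one question-bank entry with expected-performance rubric levels
# attached. All fields and the example question are illustrative assumptions.
QUESTION_BANK = [
    {
        "id": "rate-limiter-design",
        "roles": ["backend", "infrastructure"],
        "prompt": "Design a rate limiter for a public HTTP API.",
        "follow_ups": [
            "How does your design behave when deployed across regions?",
            "What trade-offs does a token bucket make versus a sliding window?",
        ],
        "rubric": {
            1: "Cannot describe a working approach, even with hints.",
            2: "Describes a single-node approach but misses distributed concerns.",
            3: "Working design; reasons about storage, failure modes, and limits.",
            4: "Strong design plus clear trade-off analysis on the follow-ups.",
        },
        "status": "active",  # retire and replace entries to rotate questions
    },
]

def questions_for_role(role, bank=QUESTION_BANK):
    """Return the active questions calibrated for a given role."""
    return [q for q in bank if q["status"] == "active" and role in q["roles"]]
```

Keeping a status field on each entry also supports the question rotation discussed below, since retired questions can be phased out as new ones are calibrated.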
Caution: Structured interviewing has a few downsides. If your questions are too rigid, you might not be able to effectively assess candidates with unique or uncommon skill sets. You can also acclimatize to your own questions, not realizing you have made them too difficult for most candidates. Lastly, every time a candidate performs better on an interview question than anyone you've ever seen, there's a risk of letting that set a new standard that everyone else is judged against. Here, as always, rubrics help.
It is wise to be aware of the possibility that candidates or even teammates may leak the interview questions.
A leaked question is an interview topic or problem that is made available to a candidate before their interview. A candidate may have heard about the question online or from previous candidates. Leaked questions affect a companyβs ability to fairly and effectively assess all candidates.
Repeatedly using the same questions increases the chance of leaks. There is a delicate balance between worrying about leaked questions too much and worrying about them too little. Most people won't leak your questions, and if they do, it'll likely be to a very narrow audience. You're probably at higher risk of mis-signal in situations where a candidate has interviewed at a lot of companies at once, excels at whiteboard coding and storytelling, and may have encountered similar questions elsewhere. That said, people who are good at spotting patterns and adapting like this are probably not the worst people to hire! Being good at ramping up on something is a useful skill for your employees. Nonetheless, with a little work, you can make your questions comparatively leak resistant.
First, rotate your questions. You can do this in a rolling fashion; that is, add a new question, test it for a while, remove the oldest question from the rotation, and repeat. This gives time to calibrate the rubric for the new question.
Second, leaks matter less when you have questions with depth and nuance, rather than relying on questions that can be unlocked with a single trick; people who may be inclined to leak your questions likely can't also leak every detail of the hardest parts. You can increase the depth of your questions by asking candidates to go beyond mechanical answers, with follow-up questions that require an understanding of underlying principles. Answers to the most subtle and nuanced questions can't be faked: if the interviewee can navigate them, it means they really do have command of the material.
"12 Tactics to Perfect your Interviewing Process" (forEntrepreneurs)
Bias is an observational error that tends to over-favor or under-favor certain types of candidates. As a systematic error, it is a common source of noise. For example, likability bias might cause an interviewer to view a friendly candidate as being more competent.
Bias constitutes a fundamental problem in hiring, and one key goal of interviewer training is to reduce the impact of interviewer biases on the final outcome. Bias detracts from interviewers' ability to accurately assess candidates' ability or fitness and thus increases the odds that you will hire the wrong people and create homogeneous teams.* Structured interviewing helps mitigate bias in interviews by using rubrics and requiring written justifications for decisions. Good training prepares interviewers to expect and detect the presence of bias in their own evaluations.
Even with standardization, interviews progress fluidly and thus can provide the greatest challenge in collecting signal in a way that mitigates bias. Good training and practice will help interviewers be aware of their biases and the ways their evaluations may fall prey to those biases.
The halo/horn effect suggests that candidate performance on one part of a question will influence how the interviewer evaluates the rest of the candidate's performance. To counter this, the interviewer can use the success criteria from the rubric to evaluate each part of a question separately. Only when a candidate has met the rubric's requirements on the key elements of a question can the effective interviewer cut the questioning short and shift to selling.
Consistent questioning across candidates also plays a role in reducing bias, with specific structure and key points not changing from interview to interview. (Small conversational changes are fine.) It's helpful to provide interviewers with clear, consistent triggers for when and why to provide hints, to avoid the temptation to tip off candidates the interviewer likes. On the flip side, if a candidate says things that don't seem correct, the interviewer may reasonably give them a chance to correct themselves by probing more deeply in an ad hoc way. It's important to be consistent in doing this for all candidates.
It's very easy for new interviewers to get attached to a likeable candidate and round up on their evaluation. This form of bias is particularly tricky because some candidates are likeable or have a story that makes people want to hire them. Studies have also shown that first impressions (even in the first few moments of meeting) can stick with an interviewer for the rest of the interview. It's critical never to let these emotions influence the hiring decision; it is not your company's role to employ people just because you like them or because they need a job, and the evaluation process ideally prevents that. It is never wise for an interviewer to get so wrapped up in a candidate's case that they lose objectivity.
Important: Gut feeling and intuition are valuable; it is not usually wise to ignore them. However, making a decision based purely on gut feeling or intuition can allow bias to lead to poor outcomes. Unexplained gut feeling indicates there's either more signal to collect or something to discuss and understand better. A good interviewer understands where their intuition is coming from at least well enough to explain it to others or to highlight ways to evaluate whether the intuition is accurate.
An interviewer might say, "I feel like the candidate is inflating his past work and didn't really do what he said he did in the previous job." This is a judgment based on intuition, rather than fact. Without facts, you can't rule out bias or determine if the suspicion reveals a true problem on the part of the candidate. Instead, to fact-check the intuition, the interviewer can ask extremely specific questions about past projects: "You mention you designed and deployed the internal evaluation tool at AcmeCorp in Python. Which web framework and database did you use? Why? How did you deploy it?" Vague answers to every question about work the candidate claims to have done likely indicate that the interviewer's intuition revealed a true negative signal, one that can be documented for others.
Screening candidates on how they make you feel and whether they seem like someone you could be friends with is a mistake. It's poorly correlated with job performance and invites bias. –Ammon Bartram, co-founder, TripleByte*
If you search the term culture fit, you'll get a lot of headlines about how important it is to hire for culture and that it's "hard to define, but everyone knows when it is missing."* In reality, hiring for culture fit can lead to unfair and biased hiring practices, as well as a company that's closed off to diversity of all kinds.
Culture fit is shorthand for the many vague assessments or unexamined feelings hiring teams sometimes use to determine whether a candidate "belongs" at a company. While in most cases the intentions behind these assessments are not sinister, testing for this kind of vague quality can devolve into a "friend test" or "likability test" where candidates are assessed not on potential to do the job but on similarity, kinship, or familiarity to the assessing employee.
Danger: Culture-fit assessments are enormous sources of bias, including affinity bias, where interviewers and decision-makers favor candidates who are similar to them, and likability bias, where candidates who are pleasant to spend time with are viewed as being more competent than those who aren't. The "person I would enjoy hanging out with" and the "person best equipped for a job" are not the same.
At one point, Stripe had a "Sunday test" to identify candidates "who make others want to be around them," but ended up moving away from that test since it was commonly misinterpreted.
If your goal is candidate-company fit, rather than looking for culture fit, you can assess for values alignment. A focus on values, instead of on culture, has a better chance of resulting in a diverse group of people who are motivated by the same things.
Shifting our focus from "culture fit" to "values fit" helps us hire people who share our goals, not necessarily our viewpoints or backgrounds. –Aubrey Blanche, Global Head of Diversity and Inclusion, Atlassian*
Having a defined set of values can help you interview and assess candidates in a fair and effective way to find those who will succeed at your company. For example, for companies that value people who focus on supporting the company's mission, the hiring team can seek candidates who show excitement and passion about that mission.
Additionally, you can assess candidates not just for compatibility with your company values, but also for:
Their cultural contribution or culture add. This refers to what new viewpoints and opinions a candidate might bring to your company.
Their adaptability to different cultures. Some research has indicated that this type of adaptability is more important than initial fit.*
The goal of the notes taken during the interview is to allow the interviewer to produce a final write-up that describes what happened, what conclusions the interviewer came to, and why. It's particularly important for interviewers to:
Summarize what happened in the interview so that others can draw their own conclusions.
Justify their decision with specific evidence based on what the candidate said or did during the interview and how it relates to the standards outlined in the rubric for the question.
Explain why they did not choose other similar options. For example, if an interviewer is a "solid yes" on a candidate, why weren't they a "strong yes" or a "weak yes"? Forcing a clear justification relative to the alternatives makes the write-up much clearer and avoids fuzzy thinking that leads to bias.
Building rubrics and using them in interviews gives clarity and structure to these notes and prevents interviewers from veering off in unhelpful directions.
The Medium Engineering team detailed the various technical and nontechnical capabilities they interview for in a three-part guide that provides specific rubrics for evaluating and grading each one.
It's best if every interviewer knows their exact role in the process. Are they asking a coding question? If so, is there a specific question or a category of questions from which they can draw? If they are asking another type of question, what is their focus?
Coordination among interviewers helps avoid asking the same technical questions multiple times, because repetition reduces the breadth of signal you're able to collect. (Having multiple people ask the same behavioral questions can be helpful and is rarely problematic.) Some applicant tracking systems, like Lever, allow interviewers to attach public notes to a candidate's profile; or people can write down questions they asked and share them on whatever system the team uses.
Caution: Some companies have interviewers huddle between interviews to share directions to probe into, but this can be logistically challenging and can lead to bias. It's better not to use this method unless there is something very specific one interviewer believes another interviewer should probe.
The recruiter or hiring manager can also send an email out to the interviewers the day before the onsite interviews to outline the candidate's background, the role, and what each interviewer should assess. Such messages usually include notes on what the candidate seeks in their next role, what they like about your company or team, and anything that has resonated with them so far.
Important: A big part of coordination is scheduling. Whether all interviews take place on the same day or over a few days may be a matter of candidate preference; you can check with them. Most candidates will not want to stay home from existing jobs over multiple days just to be available for three different half-hour calls, and they shouldn't be expected to. Moreover, it's wise to confirm interviewer schedules in advance and not to reschedule on applicants. Rescheduling gives the applicant a bad impression and has an outsized effect on them if they are currently employed, have family obligations, or are interviewing at multiple companies. Candidates are more likely to drop out if scheduling is messy.
Basic coordination information, such as questions to be asked, can also be included in the calendar invites for the interviews, although to reduce bias and preserve candidate privacy, the reviewing team should limit what background information and personal details it shares in advance with interviewers.
Technical interviews can be conducted in person or remotely, with one interviewer or a small group, synchronously or asynchronously, on a whiteboard, a laptop, or as a take-home test. Which technical interview formats will be most effectiveβand how those interviews should be conductedβdepends on what signals the company finds most useful to gather. Each format collects different information, and each has pros and cons, pitfalls, and associated time and effort investment from both sides. When choosing a format, the company should consider whether it can be scaled and still conducted in a way that is fair to and useful for the candidates.
Deciding on formats can be a balancing act between time savings for the company and interviewers, quality of the signals that are truly predictive, and the need for a positive candidate experience.