Edition 1.0.8. Updated August 24, 2022.
You’re reading an excerpt of The Holloway Guide to Technical Recruiting and Hiring, a book by Osman (Ozzie) Osman and over 45 other contributors, written by and for hiring managers, recruiters, interviewers, and candidates.
Coding questions collect signal on a candidate’s ability as a programmer. Non-coding questions collect most of the other technical signals that help determine whether the candidate has the experience relevant to the role and job level.
It’s best for more senior interviewers to ask non-coding questions and to spend time calibrating on how to evaluate the non-coding portion of interviews, because these interviews:
Require higher-level skills that cannot be evaluated by more junior engineers who lack them.
Require more judgment to assess a candidate’s choices in a context that the candidate is not fully familiar with.
Are simply more difficult to administer, as they are open-ended and require adapting follow-up questions to a wider range of circumstances than can come up in most coding interviews.
Non-coding interviews may also require more shadowing time to ramp up than coding questions, and they are often somewhat more company-specific, because companies assess and value non-coding technical skills differently.
These interviews break down into roughly three categories: those that assess what a candidate has done previously, those that assess specific non-coding technical and domain knowledge, and those that assess a candidate’s ability to think through new problems or situations.
story “If you use interviews for leveling, make sure that they give you enough information for that assessment. If a candidate whizzes through, but you assess them at a low level just because there wasn’t enough challenge to peg them higher, this is a problem.” —Laurie Barth, Staff Engineer, Gatsby
Interviews that assess previous technical work generally follow a rough progression:
The interviews start out with the big picture of a particular situation or project. This involves getting the facts and high-level context.
How well does the candidate understand and describe the business context, the problem that was addressed, and any critical constraints their team was dealing with?
Does the candidate understand the big-picture system architecture and design?
Can they identify what part of the system they worked on, what their own personal contribution was, and what their role was in relation to the rest of the team?
Next, the candidate can pick some specific elements they personally worked on to go deep into. Within those areas, the interviewer can ask the candidate to dig into the details of the system, focusing on decisions they made and why they made those decisions.
What would they do differently if they were working on this project now?
How would their decision have changed if some set of constraints had been different?
It’s common for candidates to talk about a system that they used or interacted with but did not personally build. It is the interviewer’s job to get the candidate to identify their own contribution to the system and then to go deep into the areas where they feel most comfortable assessing technical decisions. It is critical to understand the technical decisions that the candidate made, because the point of the interview is to assess their ability to design a system under real-world constraints.
As part of this, the interviewer can also seek the underlying reasons for decisions. The candidate will hopefully have a compelling reason for each choice, although that could be something pragmatic like “we had to move fast, and we had experience in this stack.” Not every decision needs major deliberation, but as an interviewer, you will be looking to see if the candidate understands every decision they made.
Additionally, because no system design is perfect—and the interviewer may in fact disagree with the design—the interviewer can then ask the candidate what they learned and would do differently. A strong candidate will be able to demonstrate their ability to identify problems, recognize when things didn’t go well, and iterate on a solution.
Finally, by asking the candidate how they would have changed their decision if some constraint had been different (for example, you have a month rather than a quarter to build the system), you get a chance to validate that they understand the decision well enough to adjust it in the face of new information.
caution All of these questions require that a trusted senior engineer exercise a tremendous amount of judgment and have the ability to pattern-match the challenges a candidate faced with their own experiences—which makes these probably the most difficult of all technical interviews to calibrate on.
Non-coding interviews may also focus on assessing specific elements of a candidate’s technical knowledge. For example, you might ask them to explain a complex technical topic or ask questions about their knowledge within a particular domain. The former is similar to an assessment of past work, where you want to validate that the candidate understands the topic well by assessing the underlying reasons for decisions. The latter is more about testing domain knowledge and can be used when filtering out candidates who lack fundamental knowledge in computer science, or when specific domain expertise is relevant to a role.
In the third form of this interview, the interviewer asks the candidate to demonstrate a non-coding technical skill during the interview, so as to give insight into how they think through challenges. These interviews can include all sorts of relevant problems, like:
Designing a new system, architecture, or algorithm at a scope and domain appropriate to the role (sometimes called a System Design Interview).
Product thinking, such as handling ambiguous product specifications, prioritizing what to build, or evaluating trade-offs of implementing different features.
These interviews may be easier to calibrate than ones that evaluate a candidate’s prior work because the problem-solving follows a more consistent flow, and you can write very clear rubrics.
caution At the same time, beware of groupthink and rigidity in assessments. Senior engineers may be wedded to specific architectural patterns, for example, and may penalize candidates who take a different but defensible approach. But there is immense value in having someone come into your organization with fresh perspectives, especially at the senior level.
One skill you will expect candidates to demonstrate is the ability to handle open-ended problems, so it’s wise to frame your questions accordingly, especially for more senior levels. Two sample prompts, with notes on what to look for:
Sample prompt: “Imagine we’ve gone back in time and we’re about to build Acme product again from scratch. We want to get to a prototype within four weeks. It doesn’t have to be perfect, but it needs to have the key parts in place. Can you take five to ten minutes to flesh out for me how you’d approach this?”

What to look for: Has the candidate thought through the product and the key features ahead of time? Are they able to draw a sensible diagram of the major services they would need to have in place and the interfaces by which they might communicate? Are they able to arrive at a decent schema for the service? Extra points if they realize any of the complexities (for example, comments might require editing history to be stored).

Sample prompt: “One of the things about our service is that it pulls from APIs like Twitter and AngelList. They have API quotas that prevent us from pulling in all the data we want. Can you walk me through how we might design a system for prioritizing the most important calls?”

What to look for: Is the solution adaptable to future third-party services? Is the solution adaptable to ongoing changes in API rate limits/quotas? Do they have examples where they’ve worked backward from what the customer is going to care about into their solution?
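The second prompt hinges on prioritizing work under an external quota. As a rough illustration of the kind of core a strong answer might sketch on a whiteboard (all names here are hypothetical, not from the book), a candidate could start from a priority queue that is drained at most `quota` calls per rate-limit window:

```python
import heapq

class QuotaAwareScheduler:
    """Sketch: queue pending API calls by priority when a per-window
    quota limits how many can actually be issued."""

    def __init__(self, quota_per_window):
        self.quota = quota_per_window
        self._heap = []     # entries: (negated priority, insertion order, call)
        self._counter = 0   # tie-breaker preserving FIFO order within a priority

    def submit(self, priority, call):
        # heapq is a min-heap, so negate priority to serve the highest first.
        heapq.heappush(self._heap, (-priority, self._counter, call))
        self._counter += 1

    def next_batch(self):
        # Take at most `quota` calls for this window; the rest wait.
        batch = []
        while self._heap and len(batch) < self.quota:
            _, _, call = heapq.heappop(self._heap)
            batch.append(call)
        return batch

scheduler = QuotaAwareScheduler(quota_per_window=2)
scheduler.submit(1, "refresh follower count")
scheduler.submit(5, "fetch new mentions")
scheduler.submit(3, "sync profile data")
print(scheduler.next_batch())  # ['fetch new mentions', 'sync profile data']
```

The follow-up questions in the rubric map naturally onto this sketch: adapting to new third-party services means parameterizing per-service quotas, and adapting to changing rate limits means making `quota_per_window` dynamic rather than fixed.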
danger There are some interview questions that are designed to see how the candidate reacts to the parameters of the question, like asking an impossible question, never giving enough guidance on what the question is really about for the candidate to succeed, or being incredibly silent or non-interactive during the interview. Anything that feels like it could even possibly be considered a mind game is bad territory to be in.
Not only is this a poor way to treat a candidate (and strong candidates with other options will rightly drop out if treated like this), but an interview must assess candidate-company fit for both sides. Candidates want and need to know how you work and communicate, and it’s your job to demonstrate that to them.