In its eight years of working with hiring teams, Insight has seen its Fellows go through thousands of interviews. In this article, we share the data-driven advice gained from our unique vantage point on hiring data scientists. Whether you’d like to get started interviewing, overhaul your process, or just want to see how your process stacks up against the competition, these are the latest trends we see in the best data science interviews.
Designing a Data Science Interview
Onsite interviews are indispensable, but they are time-intensive for you and your (busy) colleagues. Devoting a half-day to a candidate is a waste of your team's time unless you've already built some confidence in their ability to do the work. For this reason, teams "screen" their candidates with a series of short technical and/or behavioral interviews to gauge their problem-solving ability, experience, and cultural fit. Designing a good screening process (one that successfully narrows down your options and avoids eliminating strong candidates too early) is critical to successful hiring.
The Building Blocks of a Screening Process
Data science teams screen their candidates by choosing from a mix of behavioral screens, technical screens, or take-home assignments.
The Behavioral Screen
- Goal: To assess a candidate’s cultural fit with your team. Importantly, to gauge their understanding of the value of their experience in the context of your needs — do they understand your needs at all?
- Length: 30 to 45 minutes by phone or video, including ample time for candidates to ask questions about the role and team.
- What to evaluate: How will this person contribute to your culture? Will they be a strategic thought partner? How will they interact with product, engineering, sales, or marketing? Do they take and receive feedback well? Do they understand how their role fits in with the rest of the team and company?
- Pro tip: Don’t confuse cultural and behavioral fit with whether you could be friends with this person. You’re not looking for a buddy to hang out with on a Sunday — you’re looking for a collaborator who can work and communicate well with you and your team, as well as anyone else who interacts with your team. The questions they have for you about your team and the role are as important a signal as how they answer your questions.
- Flags to look for: Using “I” when discussing successes and “we” when discussing setbacks or failures. Lack of details in responses, especially if expressing only a vague interest in the role or team. How do they respond when they’re interrupted or taken “off script”?
The Technical Screen
- Goal: To assess a candidate’s technical fluency or maturity to be effective in the role (keeping in mind the team’s available capacity for training and on-boarding).
- Length: 45 minutes by phone or video, opting for a service like coderpad.io if you’d like to involve a live-coding environment. Include ample time for candidates to ask questions about the role.
- What to evaluate: How are they as a collaborative problem-solver: Do they ask clarifying questions? Do they admit when they don’t know something? What do they do when they’re stuck? How are they at explaining technical details?
- Pro tip: Teams hiring their first data scientist will often contract an external technical advisor to conduct this interview.
- Flags to look for: Candidates can get up-to-speed on your production environment or coding best practices through on-boarding, but it is much harder to train them to be better collaborators. Products are shipped by teams, not lone geniuses.
The Take-Home Assignment
- Goal: To assess the technical depth or maturity the candidate will need to be effective in their role, with the training and onboarding your team has the capacity to provide. You’ll often see the name “data challenge” used when the take-home assignment involves machine learning or statistics, or “coding challenge” when the focus is on evaluating a candidate’s software engineering skills.
- Length: Highly variable. Hiring teams will often give our Fellows a take-home assignment with 3–4 days to submit a solution by email, with guidance to spend no more than four hours working toward a solution. Expect candidates to take twice the recommended time to actually produce one.
- What to evaluate: For data challenges, this is your chance to see how the candidate translates an amorphous business problem into a data science problem, and how they communicate their technical results. Evaluating a take-home assignment is difficult, but I’ve found this advice from Ethan Rosenthal to be especially helpful.
- Pro tip: Design a take-home assignment that involves a sample of the data or infrastructure that you’ll need them to work with, and avoid generic Kaggle-like data exercises. Kaggle-like data challenges are easy to game, have widely available “solutions” online, and in our experience will turn most good candidates off of continuing in your interview process. You’ll gain just as much signal on how your candidate defines success as you do on how they build a solution — provide an opportunity to measure both.
- Flags to look for: The path is as important as the destination. If they didn’t see the very natural outliers that occur in your data, or catch the edge cases that plague your day-to-day, and instead bulldozed their way to an answer, they’ll do the same if you hire them. It doesn’t matter how fast of a runner they are if their compass doesn’t work.
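To make the "path versus destination" point concrete, here is a minimal sketch (in Python with pandas; the `session_minutes` column and its values are hypothetical) of the kind of sanity pass a strong candidate runs before modeling, surfacing missing values, impossible values, and outliers rather than bulldozing past them:

```python
import pandas as pd

# Hypothetical take-home data: one column of session durations,
# seeded with the kinds of problems real data tends to contain.
df = pd.DataFrame({
    "session_minutes": [3.0, 5.0, 4.0, 720.0, -1.0, None],
})

col = df["session_minutes"]
q1, q3 = col.quantile(0.25), col.quantile(0.75)
iqr = q3 - q1

# A quick data-quality report: missing values, impossible (negative)
# durations, and values beyond 1.5 * IQR flagged as candidate outliers.
report = {
    "missing": int(col.isna().sum()),
    "negative": int((col < 0).sum()),
    "outliers": int(((col < q1 - 1.5 * iqr) | (col > q3 + 1.5 * iqr)).sum()),
}
print(report)
```

A candidate who surfaces a report like this before fitting anything is demonstrating exactly the working compass described above.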
Screening Data Scientists like a Pro: By the Numbers
If you’re designing an interview process for the first time, it’s tempting to build a long, perfectly precise screening process so that only the most battle-tested candidates reach the onsite. Teams new to hiring often make this first mistake of creating long, multi-stage screening processes. When their favorite candidates are hired out from underneath them, they graduate to the second mistake: optimizing for speed and not screening candidates at all. This creates a revolving door that moves unqualified candidates to onsite interviews swiftly, wasting time and frustrating colleagues.
Our most successful hiring partners have iterated on their hiring process, balancing speed and precision. Here’s what we’re seeing from them:
Among the 9% of teams that conduct two screens, the overwhelming majority (85%) begin with a behavioral screen. Those teams most often follow it with a technical screen (80%), and far fewer with a take-home assignment (20%). The remaining 15% opt for a take-home assignment and a technical screen.
Fewer than 0.5% of teams opt for more than two screens; they are omitted from this analysis.
It’s important to note that our Fellows meet our hiring partners initially in a bespoke small-group setting, and Insight conducts a rigorous technical interview before admitting Fellows to our 7-week Fellowship, so it’s likely this data is slightly skewed.
Screening Data Scientists Like a Pro: Getting Started
The best interview processes are tailored for the requirements of the role. A new hire who will spend more time deploying models than building them should have a different interview experience than a new hire who’s adjacent to product and will be primarily designing and interpreting experiments.
If you’re just getting started, take a moment to assess your need-to-haves and nice-to-haves for the role — if you choose to have a screen, it should focus on your need-to-haves. In our experience, teams:
- Choose a live technical screen for roles where candidates need to have a strict proficiency in your technology stack. If the day-to-day involves writing window functions or multi-table joins, your candidate should be able to write them live in a shared coding environment. If the day-to-day involves collaborating on experiments with a technical product manager, they should be able to design a basic experimental framework to measure changes in a hypothetical product’s KPIs.
- Choose a behavioral screen for roles where you need to gauge the depth and relevancy of a candidate’s experience. If the day-to-day involves tight deadlines with non-technical clients, your candidate should be able to communicate the relevancy of their experience in doing so. More than half of our partners (54%) use this interview as a chance to learn more about their candidates’ research or work experience, while a third (32%) use it as a chance to assess their candidates’ goals and interest in the role.
- Choose a take-home assignment for roles where you’re assessing the curiosity, creativity, and intuition of a candidate. If you hire them, they’re only two or three weeks from working with your data and getting started on their first real project, so they should be able to tackle a bite-sized version of that project in the interview process.
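As a concrete illustration of the live technical screen described above, here is a minimal sketch in Python with pandas of a join-plus-window-function prompt (the tables, columns, and values are hypothetical): merge two tables, then compute a per-user running total, the pandas equivalent of a SQL window function.

```python
import pandas as pd

# Hypothetical screening exercise: given a users table and an orders
# table, compute each user's running spend over time.
users = pd.DataFrame({"user_id": [1, 2], "plan": ["free", "pro"]})
orders = pd.DataFrame({
    "user_id": [1, 1, 2],
    "order_date": pd.to_datetime(["2023-01-01", "2023-02-01", "2023-01-15"]),
    "amount": [10.0, 20.0, 5.0],
})

# Multi-table join: attach each user's plan to their orders.
joined = orders.merge(users, on="user_id", how="left")

# Window-function equivalent: cumulative spend per user, ordered by date.
joined = joined.sort_values(["user_id", "order_date"])
joined["running_total"] = joined.groupby("user_id")["amount"].cumsum()
print(joined[["user_id", "order_date", "amount", "running_total"]])
```

A candidate comfortable with their stack should be able to talk through each step of an exercise like this while writing it live.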
Hiring data scientists can be difficult, but year after year our partners meet and hire hundreds of our Fellows across thousands of interviews. Their interview processes are tailored to the needs of their roles and ensure that the candidates they bring onsite are a good investment of their time. In the next post in this series, we’ll dive into the data and discuss lessons learned on designing a successful onsite interview.
Article by Adam Azzam. Thanks to April Minsky, Holly Szafarek, and Geneviève Smith for feedback on drafts of this.
Insight — a Fellowship for top-tier scientists, engineers, and data professionals — partners with organizations of all sizes to build and scale their tech teams for data science, machine learning, and engineering. They work closely with hiring managers and talent acquisition teams to understand their goals and the challenges of finding top-tier talent, and match them with the best Fellows coming out of their programs. Their unique approach and rigorous admissions process exposes teams to more highly qualified talent, saving them both money and time.