Creating a Code Review Culture, Part 1: Organizations and Authors
Code review is one of the most effective tools we have as engineers to safeguard the quality of our code. It provides a platform for us to communicate our best practices as they relate to the technology we use and an opportunity to teach in a way that will have immediate practical use.
A strong culture of code review ensures that we’re getting all of the benefits of code review. What is a code review culture? Well, let’s start with a simpler question: what is culture? I like the definition from Merriam-Webster: “The set of shared attitudes, values, goals, and practices that characterizes an institution or organization.” Let’s break that down further:
When we talk about attitudes, we’re talking about shared points of view and opinions. This is important because a shared attitude about the importance and goals of code review ensures that engineers get a consistent code review experience every time.
Values are more fundamental than attitudes. They are a shared ranking of what’s important, both in code and in code review. If attitudes are opinions, values are the beliefs that inform those opinions. Having a shared set of written values about code can be immensely helpful in mediating technical debates. Often, strong differences of opinion in technical situations come down to differences in what the engineers think is valuable in code.
Goals are a shared agreement on the purpose of code review. Whereas values dictate the why of what we're trying to achieve, a shared set of goals tells us the what.
And when we talk about practices, we’re talking about a shared set of actions that are performed by code authors and code reviewers.
So now that we have a set of common definitions, how can we use them to create a strong code review culture? Let’s look at that problem from the perspective of engineering organizations, code authors, and code reviewers.
Organizations
One of the most helpful things an engineering organization can do to maintain a strong code review culture is to be intentional about that culture. Here's the thing: culture happens whether we want it to or not. That’s just the nature of humans working together. At some point, shared sets of values, goals, attitudes, and practices will emerge. Engineering organizations are therefore faced with two options: do something to try to influence the culture (i.e. be intentional about it), or sit back and let whatever happens, happen. Of the many ways that we can be intentional about our culture, I’d like to touch on three: communicating culture, establishing communities of experts, and training code reviewers.
Communicating culture starts with codifying that culture. This isn’t an easy process. It involves getting your senior engineering team and engineering leadership—people who normally hold strong opinions—to agree on what that culture should be. But once you’ve gone through that process, you should have a very valuable artifact: a document that describes your engineering culture. That document should be widely disseminated in whatever form is the most effective communication mechanism at your company. As much as possible, your engineers should know what the organization’s value system is when it comes to code; this helps in code review, where oftentimes disagreements on an implementation are really disagreements about engineering values and what to prioritize.
Organizations can also promote a strong code review culture by establishing communities of experts. Two types of experts tend to be helpful in code review: language experts and problem domain experts.
Language experts are adept at a particular programming language and/or ecosystem. These engineers are responsible for steering an engineering organization’s use of a particular language and its associated technologies, as well as providing deep technical advice during code review. They should be able to give suggestions on how to make code more idiomatic, and steer the best practices and guidelines for a language’s use at an organization.
Problem domain experts are engineers who have broad experience in a particular problem domain, such as concurrency programming. They’re often less likely to be steering usage patterns in an organization, but should be sought as experts when dealing with particularly tricky subject areas. Establishing a community of experts isn’t enough—an organization must have a clear path to becoming an expert. Having an unchanging roster of experts can lead to gatekeeping behavior and can be frustrating for individual contributors (ICs) looking to master a particular topic as part of their career growth.
Lastly, engineering organizations should be training code reviewers. Code review is a skill that’s related to, but distinct from, reading and writing code. The method of training will vary by the kind of company you’re at. Some teams may prefer to pair an experienced code reviewer with one who’s newer to the practice, and have regular sessions where they review code together. However, one-on-one mentorship and pairing doesn’t always scale well for training engineers; in those cases, you can run workshops and labs where engineers can dive into new topics. The focus of these sessions should be to establish a methodology for reviewing code with less experienced reviewers.
Code Authors
As the code author, you have the most context for the choices made in the code, and are therefore a large part of what makes a code review successful. How you choose to present the pull request will help or hinder a reviewer’s ability to delve into the code. The theme of this section, then, is that the code author’s job is to make the code reviewer’s life as easy as possible.
There are a few ways we can be helpful as code authors:
Communicate context up front
Make the pull request (PR) a manageable size
Leverage automation to cut down on the amount of line noise the reviewer has to go over
Know when it’s a better idea to take the review offline than to continue it in the PR system
Communicate context up front: One of the hardest parts of reviewing code is gathering the context necessary to understand why the code is the way it is. What may look like an unnecessary complication might be completely justifiable given a set of design constraints. To minimize this friction, it helps to communicate as much context as possible to the code reviewer up front.
Two of the most effective ways to do this are to write useful PR descriptions, and to annotate your PRs. A useful PR description should, at least, answer these questions:
What does this PR do?
Why do we want to do that?
What high level changes did you make to the code to accomplish that goal?
What other information should the reviewer be aware of when looking at this code?
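One way to make answering these questions routine is a PR description template. Here's a minimal sketch — the headings mirror the questions above, and the filled-in answers are purely illustrative:

```markdown
## What does this PR do?
Adds server-side pagination to the orders list endpoint.

## Why?
Loading all orders at once times out for our largest customers.

## How?
- Adds `page` and `page_size` query parameters to `GET /orders`
- Pushes the LIMIT/OFFSET down into the repository layer

## Anything else the reviewer should know?
The default page size of 50 is a guess; happy to tune it in review.
```

Most source control platforms let you check a template like this into the repository so it pre-populates every new PR.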
In addition to adding context to PR descriptions, use your source control platform's commenting mechanism to draw attention to specific lines of code. Use these comments to point out design decisions, especially anywhere you've chosen to accrue technical debt to ship a feature.
Make the PR an easily reviewable size: At a certain size of PR, communicating context has diminishing returns. Massive PRs are hard to make a mental model of and are nearly impossible to review thoroughly. There are a few exceptions, such as when you're deleting code for features that are no longer used. While there's no exact line count for a good PR size, there is a useful heuristic: working in vertical slices.
A vertical slice is the smallest unit of work that ships meaningful functionality to the user. “Meaningful functionality to the user” can mean a new feature, a bug fix, or a usability improvement. For instance, if you're writing a web app, you can break down the implementation of features in a few ways. The most reviewable way is to ship a full, end-to-end implementation of an endpoint. There are a couple of important characteristics of vertical slices:
They don't contain any code that isn't used in the same PR, unless that code is intended to be a public API consumed by others. The design of code that is exercised only by tests is hard to reason about: it's impossible to tell whether the design matches the use case, because there is no concrete use case.
They make a round trip through the whole technology stack. A natural consequence of introducing only code that is consumed in the same PR is that the only dependencies you should have to “mock” are your external ones.
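While there's no magic number for size, it helps to glance at the diff stat before opening the PR. A quick sketch, assuming your base branch is named `main`:

```shell
# Summarize what a PR against main would contain: files touched,
# lines added, lines removed. "main" is an assumption; substitute
# your own base branch name.
git diff --stat main...HEAD | tail -n 1
```

If the summary line reports dozens of files or thousands of lines, that's a hint the branch is carrying more than one vertical slice.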
Automate the nits: One of the things that gets in the way of gleaning context is having to leave nits. Nits are small, nit-picky comments, so named because of the tendency to prefix them with “nit:”. Leaving a nit can break the flow of reviewing code and make it harder to think about code at higher levels of abstraction.
You can make everyone's life easier by automating away as many nits as you can. Most nits tend to be style-based, which can be automated using a linter; if your language of choice has some sort of auto-formatter, even better. Another common nit is asking for test coverage. Generally speaking, new code that's introduced should come with tests, or an explanation in the PR description of why there aren't any. Nits should be the exception, not the rule.
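To make this concrete, here's a deliberately tiny sketch of a pre-commit hook that automates a single nit — flagging trailing whitespace in staged files. In practice you'd delegate to your language's linter or auto-formatter, which covers this and much more; this is only meant to show the shape of the idea:

```shell
#!/bin/sh
# Sketch of a pre-commit hook (saved as .git/hooks/pre-commit).
# Flags staged files that contain trailing whitespace, a classic nit.
# Note: assumes filenames without spaces; a real hook would be more robust.
status=0
for f in $(git diff --cached --name-only --diff-filter=ACM); do
  if grep -qE '[ \t]+$' "$f"; then
    echo "nit: trailing whitespace in $f"
    status=1
  fi
done
exit $status
```

The hook exits non-zero when it finds a problem, which blocks the commit until the nit is fixed — before a human reviewer ever sees it.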
Know when to take it offline: Occasionally, our ability to communicate context in writing isn't enough. There is bound to be miscommunication, or reviewers who mistake the tone of annotations or descriptions. At that point, a valuable skill is to know when to take a review offline.
Offline can mean a few things, depending on how your company works, but broadly speaking it means your company's preferred method of synchronous, face-to-face communication. For companies where the author and reviewer are co-located, this might mean meeting in a room together for an hour. For remote companies, this might mean setting up a remote video chat. The important thing is that you get the other person's tone and body language into the discussion.
There are a couple of reasons you may want to take it offline. First, if the reviewer has buried you under commentary, it can be helpful to set up a pairing session to address it. This often happens when you're an otherwise experienced programmer but new to a language or technology: your first PRs are bound to attract a lot of comments while you get used to the syntax and idioms. Instead of working through them alone, a one-hour pairing session with your reviewer can yield a lot of learning. If your reviewer is an expert in the language you're using, having them see your workflow and walk you through the rationale behind certain idioms teaches far more than silently addressing each comment.
If the reviewer seems antagonistic, you may also want to take it offline. There are generally two flavors of perceived antagonism:
The first comes from the fact that writing English is really hard. Your reviewer might not write English as their primary language, and even if they do, it's easy to write something that will be misinterpreted as irritated or condescending. In this case, taking it offline will allow you to experience their body language and tone of voice, which fills in the gaps that writing can leave.
The other flavor is toxicity. If your reviewer leaves comments that are insulting, berating, or imply you're not a real engineer, don't respond. Screenshot the comment and get your manager involved. Toxicity is not a technical issue, and it won't be solved with a technical discussion; it's a people issue, and at worst it could be an HR issue.
Making code easy to review is important in a strong code review culture, but it’s only half of the equation. In the next part of this series, we’ll look at how you can review code in a way that is thorough and communicates empathy.
Stay tuned!