
Creating a Code Review Culture, Part 2: Code Reviewers

This is part two of a two-part series. In part one we looked at organizational culture, and how code authors can make their code easier to review. In this final part, we’ll look at some strategies for performing more thorough and empathetic code reviews.

Code Reviewers

The job of a code reviewer is to make sure that the code is correct, idiomatic, and up to the standards of the organization. The most effective way to do that is to conduct reviews with a tone of mutual respect. The key here is to assume that people know how to do their jobs, and to communicate as such. A few ways to ensure that you’re communicating with mutual respect are to know when to take a review offline, to know when you shouldn’t review, to include justification for your critiques, and to be as thorough as the PR needs.

Know when to take it offline: One way to engage with the pedagogical aspect of code review is to know when to take it offline. As with being a code author, if you think you’re going to bury someone under commentary, or if you know they’re new to the language, set up a time to pair with them instead of trying to do everything asynchronously. Watching them work will probably allow you to give them tips on how to be more productive in a language that you’re an expert in.

Know when you shouldn’t review: Sometimes, you’re not the best person to give a review. In general, most people who review code don’t claim to know a language that they don’t know. However, it also helps to be honest about our current abilities in languages that we used to know. Someone who hasn’t written in a particular language in many years may have missed key evolutions in that language. The same thing applies when the author is using a complex and unfamiliar library.

Include justification for critique: When you review code, you should always try to include justification for your critique, unless that critique is a nit, a style guide violation, or an obvious bug. Nits and style guide violations tend to overlap, and you shouldn’t have to justify the use of a shared style guide or things like proper spelling. Likewise, pointing out an obvious bug needs no justification. However, when it comes to design choices, you should always explain why you want something changed.

For instance, if you’re going to tell someone writing in Python to use a generator, stop and consider why you’re telling them that, and add that context to the comment. Even better, include a link to supporting documentation that details the differences between generators and other iterable types.
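To make the generator suggestion concrete, here is a minimal sketch (function names invented for illustration) of the kind of difference such a review comment might point at:

```python
def squares_list(n):
    # Builds the entire list in memory before returning anything.
    return [i * i for i in range(n)]

def squares_gen(n):
    # A generator yields one value at a time, so memory use stays
    # flat even when n is very large.
    for i in range(n):
        yield i * i
```

For small inputs the two behave identically; the generator only pays off when the sequence is large or consumed lazily, which is exactly the context worth spelling out in the review comment.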

Considering how your words may be received goes a long way in communicating mutual respect. One way to convey empathy is to ask the author’s opinion of your proposed changes. Unlike imperative statements, asking the author’s opinion invites discussion and thoughtfulness, and communicates that you consider the author worthy of asking their opinion.

Be as thorough as the PR needs: This shows the author of the code that they are worth your time. There are different ways that you can engage with a PR to be thorough, but the one that works best for me is to review in “passes.” The idea is that you read through the code several times; each readthrough is a “pass.” Each “pass” has a theme, and a set of questions to guide you in examining that code. The themes covered here are sizing up, context, relevance, readability, production readiness, naming, gotchas, and language.

There are two types of passes: passes to perform on every PR, and passes to perform when you want to dive deeper. The reason for the separation is that the first set asks big questions, and any red flags they raise should be resolved before moving on to deeper details. Put another way: if the code doesn’t do what it’s supposed to, then it doesn’t matter whether the indentation is correct. Pro tip: these passes make a great checklist.

How thorough you need to be is often dictated by the “shape” of a particular PR. After reviewing code for a while, you’ll start to see patterns in the “shapes” of PRs, which can help you figure out what sort of mentality and depth you need to bring to the review. New features tend to look different from bug fixes, which can look different from one-line changes. New features normally require the most effort, so if you’re sizing up and see that it’s a new feature, make sure that you have adequate time to review the PR. An experienced reviewer can normally review bug fixes fairly quickly, assuming that the PR has tests preventing regression and the reviewer knows the codebase reasonably well. One-line changes are often tempting to approve without much thought, but you should be prepared to slow down and take a few minutes to consider the impact of the change. Is that one line going to break a downstream service? Is it going to seriously impact your service’s latency? Knowing how to quickly figure out how deep you need to go in a review is a skill that develops as you do more code review.

Passes to complete every time

Note: a standalone version of this checklist can be found here.

Sizing up. This is a preparatory pass intended to ease you into the right headspace for reviewing code, and to start setting expectations.

  • What is the general shape of this PR? Is this a new feature? A bug fix? A refactor? A one-line change?

  • Is the PR the right size? If someone has written a PR that’s too large to review, this is the time to ask them to break it up. 

Context. The goal of the context pass is to establish a rich mental model for the PR. Hopefully most of these questions will be answered in the PR description.

  • What is the change being made? If you don’t understand what the changeset is trying to do as part of the larger picture, it’s nearly impossible to review it well.

  • Why is the change being made? Asking why will allow you to gain broader context for your mental model.

  • At a glance, does the PR accomplish the intended change? If you can spot something glaring or incomplete at first glance, comment on that first.

Relevance. The goal of the relevance pass is to connect your mental model of the changeset with its broader context in the engineering organization.

  • Is the change made by this PR necessary? If you have deep knowledge in a particular subject area, you might be able to point out where changes are made that are unnecessary to accomplish the stated goal. All code has to be maintained, which can be costly. Sometimes the most effective thing that code review can accomplish is to prevent unnecessary code from being written.

  • Does this PR duplicate existing functionality? If you see someone trying to duplicate existing functionality, point them to the pre-existing solution. If that solution doesn’t do exactly what they need, see if there’s a way to contribute to the pre-existing solution instead of rolling one’s own.

  • Are there others that should be aware of this PR? Since code review tends to happen in the context of teams, it’s not uncommon for PRs that are important to other teams to go unnoticed. Take a second and tag your point of contact on other teams that might need to know about this PR.

Passes for more detailed review

Not all of these will be relevant in every case. Pick the ones that are.

Readability. The goal of the readability pass is to make sure that the person who reads the code in six months will be able to quickly build a coherent mental model of the code.

  • Is the change reasonably understandable by humans with little or no prior experience in the code base? Pretend you know the language, but not the code base. Would everything read easily to you? If not, why?

  • Are any esoteric language features being used? Esoteric language features, while occasionally useful, often hurt readability, even among language experts. If you see esoteric language features being used, ask if a simpler construct would work. If not, make sure that the feature is commented or otherwise documented to decrease cognitive overhead.
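As a mild Python illustration of trading a clever construct for a simpler one (function names invented for illustration):

```python
from functools import reduce

def total_clever(prices):
    # Technically correct, but the reader has to unpack reduce plus
    # a lambda just to recognize a running total.
    return reduce(lambda acc, p: acc + p, prices, 0)

def total_simple(prices):
    # The same behavior with a construct every reader knows on sight.
    return sum(prices)
```

If the clever version genuinely buys something, a comment explaining why is the minimum the review should ask for.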

Production Readiness. The production readiness pass gives us assurance that the code is secure, documented, and reliable enough to be put into production without fear.

  • How will we know when this code breaks? Normally the answer to this question is something like “If there is an unhandled exception, an alert will be fired that contains a stack trace.” That’s a perfectly reasonable answer. However it’s answered, you need a way of knowing when the code breaks. If you don’t need to know when it breaks, you can probably delete that code.

  • Is there new documentation required by this change? Documentation often becomes stale because it does not evolve along with the code that it documents. Take a second and ask whether new documentation needs to be created or if existing documentation needs to be updated.

  • Are there tests that prevent regression? Most changes should come with a method for automatically verifying changes. If there aren’t tests, there should be an explanation as to why.

  • Is this change secure? Security is hard, and not everyone can be a security expert. However, most experienced programmers should know at least a few patterns of application programming that make your code less safe. If you write database code, you should know what a SQL injection vulnerability looks like. If you write frontend code, you should know what an XSS vulnerability looks like. If you’re ever unsure, or if you’re making potentially risky changes, tag someone from your company’s security team on the review.

Naming. The goal of the naming pass is to make sure that the names of things reflect what they do. 

  • Do the names communicate what things do? Variable and function names should always strive to communicate what they do in a way that’s unambiguous. A common failure mode here is when a function communicates most of what it does, but leaves out an important detail.

  • Are the names of things idiomatic to the language that they’re written in? Every language has idioms for how variable and function names should be written. Violations of these rules cause cognitive overhead by defying the expectations of people reading the code. Worse, in languages like Go where case determines visibility, these violations can alter the design of the code in ways that weren’t intended. 

  • Do names leak implementation details? Names should seek to communicate what they do, not how they do it. GetIPs is a relatively succinct and, depending on the context, decent name for a function. GetIPsByReadingFromAFile gives away way too much. A subtle version of this is when variable names contain their type, for instance “ipArray” instead of “ips.” In statically typed languages, this amounts to line noise, as you’re simply repeating the already-declared type in the name. In dynamically typed languages, you’re pushing against the flexibility of dynamic typing by trying to add a layer of “pseudo” static typing. 
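Echoing the GetIPs example above in Python terms, a small sketch with invented names shows both failure modes at once, the mechanism leaking into the function name and the type leaking into the variable name:

```python
def get_ips_by_splitting_a_string(raw):
    # "by_splitting_a_string" leaks how; "ip_array" restates the type.
    ip_array = raw.split(",")
    return ip_array

def get_ips(raw):
    # Same behavior; the names say what, not how.
    ips = raw.split(",")
    return ips
```

The rename costs nothing and spares every future reader the extra detail.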

Gotchas. The goal of the gotchas pass is to check for common programming oversights. Experienced programmers will have their own list, but I’ve included some of my favorites here. 

  • What are the ways in which added or changed code can break? A common way to get started with this question in dynamic languages is to look at variables and ask if they can be the language’s version of “null.”

  • Is this code subject to any common programming gotchas? This includes things like off-by-one errors, transposition errors, memory leaks, null dereferences, etc.

  • Is spelling correct and consistent? Incorrect or inconsistent spelling makes searching code much harder. One thing to check here is that misspelled names haven’t been propagated by autocomplete. 

  • Are there any dangerous defaults being set? Defaults in code are a really convenient way to save the user of the code from unnecessary typing, but defaults need to be checked to make sure that they won’t blow up unexpectedly when used.
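One concrete Python instance of a dangerous default is the mutable default argument, which is created once at function definition time and then shared across calls. A small sketch (names invented for illustration):

```python
def add_tag_buggy(tag, tags=[]):
    # The default list is shared across every call that omits "tags",
    # so values accumulate between unrelated calls.
    tags.append(tag)
    return tags

def add_tag(tag, tags=None):
    # The conventional fix: default to None, create a fresh list inside.
    if tags is None:
        tags = []
    tags.append(tag)
    return tags
```

This is exactly the kind of thing a gotchas pass catches: the buggy version passes a one-call test and only misbehaves on the second call.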

Language-specific. The goal of the language-specific pass is to ensure that the code is up to the standards set by your organization’s community of experts in that language. 

  • Is the code idiomatic to the language? Non-idiomatic code increases cognitive overhead to read.

  • Are new patterns introduced? Especially when a language is first being used at an organization, new patterns—reusable solutions to general problems—will be introduced. It’s worth considering these new patterns, because they’re bound to be copied by the next person who needs to solve that problem. If a new pattern is introduced where a prescribed pattern already exists, call it out. 

  • Does the code fall into common pitfalls for the language? Every language has some common pitfalls that beginners to the language fall into. Examples of common pitfalls are writing deeply nested list comprehensions in Python, or trying to write one language as though it were another.
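To illustrate the list-comprehension pitfall, here is a small sketch (data and names invented) of the same logic written as a packed comprehension and as explicit loops:

```python
matrix = [[1, 2], [3, 4], [5, 6]]

# Pitfall: nested comprehensions pack several clauses into one line,
# and the clause order is easy to get backwards.
flat_odd_squares = [x * x for row in matrix for x in row if x % 2]

def odd_squares(rows):
    # The same logic as explicit loops reads top to bottom.
    result = []
    for row in rows:
        for x in row:
            if x % 2:
                result.append(x * x)
    return result
```

A single level of comprehension is fine; once it nests, asking for the loop form is a reasonable review comment.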

Conclusion

Ultimately, a strong code review culture is a product of being intentional about how your organization performs code review, and it’s everyone’s responsibility to contribute to that culture. If you are ready to improve your organization’s code review, but are unsure of where to begin, here are some next steps to help you get started:

  • If your team doesn’t do code reviews, start. Talk to your team lead about starting to do code reviews as part of your normal practices, and solicit feedback for all of the code that you write. As with any long-term change effort, the most important step is starting.

  • If you’re considered a technical leader (senior engineer, staff engineer, tech lead, etc.), start an engineering values document. Get your fellow technical leaders to discuss and sign off on it. Publish it internally for everyone to see. 

  • If you already review code, start using a code review checklist. Publish your checklist so that others can use it.

Remember: any large organizational change will take time, but the benefits of continuous feedback are worth it. In contributing to your company’s code review culture, you will be improving its engineering culture as a whole. 

