Bias in Source Code Review Pushback Impacts Asians and Other Groups at Google


You would think that rank-and-file tech positions in Silicon Valley (as opposed to upper and executive management) would be where Asians experience less bias, but that doesn't seem to be the case. Researchers at Google have found that within Google, Asian, Black, Hispanic, female, and older engineers receive more pushback when their source code changes are reviewed than younger White male engineers do.  In their study, published on March 22, 2022 in the Communications of the ACM, authors Emerson Murphy-Hill, Ciera Jaspan, Carolyn Egelman, and Lan Cheng say that they expected groups other than Asians to get more pushback, but they were surprised that Asians got more pushback too.  The authors estimate that this extra pushback costs non-White and non-male engineers more than 1,000 extra engineering hours every day, a productivity loss for Google.

What is code review pushback, and why is extra pushback an issue?  At Google, and at many other places where software is developed, proposed code changes are reviewed by other engineers before they are accepted.  Reviewers can push back on a change by withholding approval until their concerns are addressed, and responding to that pushback may take several rounds of explaining – the cost to Google (and, most likely, to other organizations using similar review processes) being the extra time that Asians and members of other affected groups need to devote to those rounds. Google's open source engineering documentation even has a web page that talks about how to deal with pushback.

The paper's authors say they were surprised by the results, but is this really surprising?  Lots of assumptions are made based solely on names, and previous studies suggest that people with identifiably Asian or other minority-group names can suffer because of those assumptions and prejudices.  The authors suggest anonymizing the author's name during review.  Previous studies of orchestra auditions have shown that anonymization – hiding identifying information like gender – can reduce bias, and using blind reviewing doesn't seem to reduce review quality.
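To make the idea concrete, here is a minimal sketch of one way a review tool could show a stable pseudonym instead of an author's name. This is my own illustration, not anything from the paper or from Google's tooling; the function name, the salt handling, and the pseudonym format are all assumptions.

```python
import hashlib

def pseudonymize(author_id: str, salt: str) -> str:
    """Map a real author identity to a stable, opaque pseudonym.

    The same author always maps to the same pseudonym (for a given
    salt), so reviewers can still follow the conversation across
    review rounds, but the name itself -- and whatever it signals
    about ethnicity or gender -- stays hidden.
    """
    digest = hashlib.sha256((salt + author_id).encode("utf-8")).hexdigest()
    return f"Author-{digest[:8]}"

# Hypothetical example of what a reviewer might see instead of a name.
print(pseudonymize("jane.doe@example.com", salt="per-review-secret"))
# e.g. "Author-3fa1b2c4" (the exact value depends on the salt)
```

Using a salted hash rather than a random label keeps the pseudonym consistent without storing a lookup table; rotating the salt per review would prevent reviewers from recognizing the same pseudonym across unrelated changes.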

Various levels of blind reviewing have been used in academic publishing for some time.  This has its disadvantages too, as sometimes reviewers can figure out who a paper's author is just from the content or references.  Similarly, code reviewers might be able to figure out the ethnicity of the author from grammar and wording, for example by detecting authors for whom English is a second language.

I find this study interesting because it shows that reducing bias has benefits for an organization beyond any particular notion of political correctness. The articles on this subject don't actually say whether anonymization reduces bias in pushback, only that it doesn't harm the actual process of code review, so I'd like to see a follow-up on the actual effect on bias.  I expect this type of anonymization to become more common; I have even seen it used internally at the company where I work.

