Systemic bias and discrimination are a broad problem that affects not just recruiting and hiring, but also people’s willingness to remain in a given industry that does not represent them or treat them fairly. The Kapor Center’s landmark Tech Leavers Study reported in 2017 that nearly 40% of people who left the tech industry cited “unfairness or mistreatment” as the major reason they left, with men of color the most likely to leave due to mistreatment; 78% reported having experienced unfair treatment. In 2016, the departure rate for women was 41%—more than twice that of men, which was 17%.*
Underrepresented men and women of color experience stereotyping at twice the rate of their white and Asian peers, while LGBTQ+ tech leavers report bullying and public humiliation at significantly higher rates than other underrepresented groups. However, 62% of tech leavers said they would have stayed had their employer made efforts to create a more inclusive work environment.
Along with the negative consequences for candidates and employees, homogeneous and inequitable work environments also pose significant risks to organizations.* The Tech Leavers Study concluded that the industry stands to lose more than $16B per year in employee replacement costs.* Companies also may face backlash and negative brand associations for failing to deal with potentially harmful features and unforeseen consequences of their products.*** For example, Facebook’s “real name” policy—which failed to account for the privacy concerns of people from marginalized groups—was so controversial that there’s an extensive Wikipedia page about it. When Twitter was in the running for acquisition, a number of potential buyers apparently balked at the company’s inability to deal with the harassment issues on its platform.
I observed a company of mostly white, affluent iPhone users delay shipping on Android because Android users reportedly earn less money, and later regret the choice after discovering their Android users were more engaged. I’ve watched helplessly as another company used the data they collected on users in discriminatory ways, which erodes not only users’ trust but also the trust of employees who have been subject to discrimination in their own lives. If these teams were more diverse, especially among the leadership, I doubt the same choices would have been made.

Leighton Wallace, engineering manager, Lever*
A dearth of diversity doesn’t just limit the number of innovative products companies can produce and the markets they can reach—it also poses serious risks to underrepresented populations. In 2015, Google was called out for racist image search results, a problem it has not solved. A recent study from Georgia Tech found that the technology used by self-driving cars may detect dark-skinned pedestrians less effectively than light-skinned ones. As Vox reports, this kind of “algorithmic bias” results from many factors, including sources of training data and homogeneous technical research and product development teams. This has implications for the kinds of tools and products we use as consumers, for the work environments of the people at those companies, and for those companies’ ability to innovate and drive change.