
  But just as Cosmo was encouraging more women to seek fat paychecks in this new field, forces were conspiring that would push women out instead. Ironically, the industry started to shut out women when it needed new labor the most. In fact, new technology was often sold as a way to replace women in the office. One ad for Optical Scanning Corporation read, “What has sixteen legs, eight waggly tongues and costs you at least $40,000 a year?” The pictures depicted the answer with tight shots of the legs and mouths of eight female office workers.

  As the computing world was exploding in the 1960s, there were not nearly enough programmers to fill open jobs. Companies were so desperate that recruiters began working to identify the exact skills and personality types that made a great programmer, writes the computing historian Nathan Ensmenger in his 2010 book, The Computer Boys Take Over. At the same time, Ensmenger tells us, the gatekeepers of the industry began to adopt the belief that programming was a “black art” and the best practitioners were born, not trained. As salaries went up, programming started to take on a higher status. Employers realized it was less clerical and more intellectually rigorous than they had originally thought, taking on the prestige of a professional job. “For many in this period the very concept of a professional,” Ensmenger writes, “was synonymous with an all masculine and thus high-status occupation.”

  Ensmenger estimates that by 1962, 80 percent of businesses were using aptitude tests to hire programmers. The IBM Programmer Aptitude Test, which focused on problem-solving skills, became the industry standard; in 1967 alone, the test was taken by 700,000 people. But these tests were also widely compromised, Ensmenger reports; some men shared the answers via college fraternities and Elks lodges. That paved the way for another kind of programmer test, this one focused on personality.

  In the mid-1960s, a large software company called System Development Corporation enlisted two male psychologists to scout new recruits who would enjoy this new mysterious profession. The psychologists, William Cannon and Dallis Perry, profiled 1,378 programmers, only 186 of whom were women, and used their findings to build a “vocational interest scale” that they believed could predict “satisfaction” and therefore success in the field. Based on their survey, they concluded that people who liked solving puzzles of various sorts, from mathematical to mechanical, made for good programmers. That made sense. Their second conclusion was far more speculative.

  Based on the data they had gathered from mostly male programmers, Cannon and Perry decided that satisfied programmers shared one striking characteristic: They “don’t like people.” In their final report they wrote, specifically, that programmers “dislike activities involving close personal interaction; they are generally more interested in things than in people.” Illustrating these personality traits is a cartoon of four men, three of whom are having fun with puzzles or conducting an experiment; the fourth, who’s smoking a cigar, seems angry, presumably to indicate that he doesn’t like, well, people.

  Cannon and Perry declared that their new “Programmer Scale” was more “appropriate” than existing aptitude tests and that it would help schools, vocational counseling centers, and recruiters across the country to screen for the best programmers. Use of their personality test became widespread, which meant that people were being recruited not solely because of their talent or interest level but, at least in part, because of a dubious assumption about what type of personality made for a happy and productive programmer. This was the beginning of a stereotype that persists today. By one estimate, as many as two-thirds of employers relied on a combination of aptitude and personality tests to recruit new candidates by the late 1960s, and such tests remained in use well into the 1980s. We’ll never know how many promising candidates were cast aside simply because their interest in other people disqualified them on a crucial selection criterion. What’s clear is that this criterion inherently favored programmers of a certain gender.

  HOW WOMEN GOT PROFILED OUT

  If you select for an antisocial nerd stereotype, you will hire more men and fewer women; that’s what the research tells us. The prevalence of antisocial personality disorder, for instance, skews male by a ratio of roughly three to one. And many more boys than girls are diagnosed with autism and its milder variant Asperger’s—from two to seven times as many. Some contend that girls and women with autism are underdiagnosed and therefore missing from the statistics, but the research to support a higher incidence among men is compelling.

  In addition to that, our society views antisocial men and women differently. A woman who demonstrates the characteristic of “not liking people” is often pitied or rejected. We’re unlikely to assume that her behavior is a sign of hidden genius that will burst forth in a great achievement. For men, however, being a “lone wolf” is a viable, even admired, persona, even if the guy seems a touch insane—see Beethoven, van Gogh, Einstein, and Tesla, among many others.

  The Cannon-Perry test tipped the scales toward an applicant whose traits are more characteristic of males. In 1968, a computer personnel consultant said at a conference that programmers were “often egocentric, slightly neurotic,” and bordering on “limited schizophrenia,” also noting a high “incidence of beards, sandals, and other symptoms of rugged individualism or nonconformity.” Even then, the peculiarity of programmers was already legendary; today, the term “crazy neckbeard” is still thrown around affectionately to refer to that engineering nerd with unsightly and ungroomed neck hair. In fact, the word “women” or “woman” didn’t appear once in Cannon and Perry’s eighty-two-page paper; they refer to the entire group surveyed as “men.”

  But did the test really cull out people who were potentially better programmers? There is little evidence to support the idea that men who are antisocial are more adept at math or computers. (Nor is there evidence across hundreds of studies that men in general have a statistically meaningful edge on women when it comes to math abilities.) It is also important to remember that “computer talent,” when it comes to complex software development, almost always involves social skills such as being able to work in a group, sharing in decision making, and empathizing with users.

  Although the systematic hiring of this new nerd stereotype made the computer field ever less appealing to women, some persisted. Telle Whitney was one of them. She earned her advanced degree at Caltech and then landed a job at the chip maker Actel. The isolation she felt in her college classes and computer labs continued. She remembers one senior executive who started to ask her a question about her future ambitions, then stopped himself, saying, “Oh yeah, you’re probably going to have babies anyway.”

  In 1986, Whitney befriended another woman in the industry, Anita Borg, who went on to start an electronic mailing list called Systers where women in tech could connect. Together, in 1994, Borg and Whitney launched the Grace Hopper Celebration of Women in Computing conference to honor women’s achievements in computer science. A few years later, Borg founded the Institute for Women and Technology, and when Borg passed away in 2003, Whitney became CEO of the organization, which now bears Borg’s name. In short, they tried to push forward another narrative about women in computing, but the male-centric nerd stereotype proved far too entrenched to change.

  The widespread use of the Cannon-Perry scale had made the reign of the nerds a self-fulfilling prophecy. The “industry selected for antisocial, mathematically inclined males, and therefore antisocial, mathematically inclined males were overrepresented in the programmer population,” writes Ensmenger. “This in turn reinforced the popular perception that programmers ought to be antisocial and mathematically inclined (and therefore male), and so on ad infinitum.”

  Women already working in the field paid the price for not fitting this stereotype. Padmasree Warrior, who joined Motorola as an engineer in its semiconductor factory in 1984, originally wore the colorful saris she had brought from her native India, but she decided it was best to give them up, adopting instead a uniform of black and gray. When she was promoted to chief technology officer, she started dyeing her hair gray to look older. “I was afraid to be who I was, who I wanted to be,” she says. “I wanted to be taken seriously.” She felt she was constantly battling skeptics throughout her career. “People don’t expect you to be competent, somehow,” she says. “There’s always this doubt.”

  Warrior’s story would amaze many of today’s male nerds, who just can’t fathom the idea that the tech industry discriminates against women. Many attest that they were outsiders themselves and wouldn’t have had the power or desire to push out others, least of all women. But regardless of individual men’s intentions, the codification of selecting for antisocial traits solidified the nerd’s hegemony, rippling far beyond who was picked for training and jobs. Once this process got under way, every social environment in computer science—including classes, conferences, labs, and workplaces—began to be filled with and controlled by antisocial men. They became the rank and file; they also became the bosses, teachers, and gatekeepers.

  As nerds reached critical mass, the surrounding culture picked up this narrative. Popular mid-1980s movies such as Revenge of the Nerds, WarGames, and Weird Science publicized and romanticized the stereotype of the awkward boy genius who uses tech savvy to triumph over traditional alpha males and win the affection of attractive women. People who weren’t engineers and didn’t even know any began to think they understood those men who were able to master computers. But for once, popular culture wasn’t in the driver’s seat. While media definitely reinforced the nerd stereotype, movies and TV did not create it. The tech industry did.

  Computers didn’t become a “boy thing” because boys had some innate aptitude that girls lacked. A large study of high schoolers showed that young women have equal competence in the skills needed to use them. The results did, however, show that young women had more fear and less confidence, leading the researchers to conclude that the differences between boys and girls in terms of computer use reflected stereotyping and gender-role socialization.

  The power of those stereotypes was pervasive. Over the next decade, teachers, parents, and children became convinced that computers were indeed a boy thing. And they tailored their own behavior accordingly. As computers entered the home in the 1980s, parents often put them in their sons’ rooms alongside “boy toys” like trucks and trains.

  In toy stores, “computers quickly fell into the boys’ side of the aisle,” says Jane Margolis, who has done some of the most extensive research on the computer science gender gap in schools. “It was everyone’s notion that this is the kind of stuff that boys are interested in, and it was presented that way also by the computer scientists in the field. Women would report that if their families had a computer, it did go into the brother’s room, and there were many informal activities and de facto internships between father and son.”

  This notion proliferated in the classroom. “When they started developing CS departments in universities, it just became a very, very male-identified field,” Margolis adds. “That’s when all the biases about who could do it and be in the program and who this field is made for set in.” These biases seeped into the curriculum and shaped teachers’ expectations of their students, who accepted the assumption of “male excellence and women’s deficiencies.” Female CS students report being discouraged by their teachers, peers, and the curriculum itself. In 1995, women at Carnegie Mellon University were leaving the major at more than twice the rate of men before graduation. The “geek mythology,” as Margolis calls it, was pervasive—students who were surveyed believed that geeks obsessed with computers made the best programmers, yet nearly 70 percent of women didn’t see themselves that way. Women began to question whether they even belonged at all.

  Women and girls got the message then, and they still do.

  In 2013, Sapna Cheryan, professor of psychology at the University of Washington in Seattle, surveyed students to parse out the components of the modern computer-scientist stereotype. She found a widespread belief that good programmers lacked interpersonal skills and were fanatically obsessed with computers to the exclusion of most other life pursuits.

  “These stereotypes are incongruent with characteristics women are expected to and may wish to possess, such as working with and helping others,” Cheryan concluded. “We found that the pervasive ‘computer nerd’ stereotype discourages women from pursuing a major in computer science.” Cheryan also cited a quotation, drawn from Margolis’s research, in which a young female computer science student expressed that perceived distance from tech more simply. “Oh, my gosh, this isn’t for me,” she said. “I don’t dream in code like they do.”

  WOMEN’S NARROW PATH GETS NARROWER

  Shy, antisocial boys in their coding caves weren’t glamorous, but starting in the late 1970s and early 1980s, the computer business suddenly was. It began when Apple released the Apple II and continued when, a few years later, IBM came out with the PC. In 1984, Apple brought the groundbreaking Macintosh to market, and in 1985 Microsoft released Windows 1.0. Thanks to these new machines and the realization that there were fortunes to be made, the field was suddenly heady with excitement.

  As computers gained new status and exploded in popularity, hacker conferences and computer clubs sprang up across the San Francisco Bay Area, and enrollment in computer science classes surged at universities across the country. Demand became so great that some departments began turning students away. There was an overall peak in bachelor’s degrees awarded in computer science in the mid-1980s, and a peak in the percentage of women receiving those degrees at nearly 40 percent. And then there was a steep decline in both. It wasn’t that students were inexplicably abandoning this exciting field. It was that universities couldn’t attract enough faculty to meet growing demand. They increased class size and retrained teachers—even brought in staff from other departments—but when that wasn’t enough, they started restricting admission to students based on grades. At Berkeley, only students with a 4.0 GPA were allowed to major in electrical engineering and computer science. Across the country, the number of degrees granted started to fall.

  Just as computer science was erecting barriers to entry, medicine—an equally competitive and selective field—was lowering them. In the late 1960s and early 1970s, dozens of new medical schools opened across the country, and many of the newly created spots went to women. Standardized entry exams also began to change. In 1977, the MCAT, the test for entrance into medical school, was revamped to reduce cultural and social bias. But the game changer was the implementation of Title IX, which prohibits sex discrimination in educational programs. From then on, if a woman could score high enough on the newly revised MCAT and meet other requirements, med schools could not legally deny her entry, and women poured in.

  Why wasn’t the same progress being made in computer science? Professor Eric Roberts, now at Stanford, was chairing the computer science department at Wellesley when the department instituted a GPA threshold. Of that period he later wrote, “In the 1970s, students were welcomed eagerly into this new and exciting field. Around 1984, everything changed. Instead of welcoming students, departments began trying to push them away.”

  Students who didn’t exactly fit the mold—perhaps because they didn’t have years of computer experience or they didn’t identify with the computer science stereotype—began to understand they were unwanted. Over the next few years, Roberts explains, the idea that computer science was competitive and unwelcoming became widespread and started to have an effect even at institutions without strict grade requirements.

  It was then that computer science became not only nerdy but also elitist, operating on an impossible catch-22: the only way to be a programmer was to already be a programmer. If you learned to program at a young age, that became indicative of a natural affinity with the field. Because more boys entering college had already spent years tinkering with computers and playing video games in their bedrooms, they had an edge that girls did not. “There’s a set of things that caused [boys] to appear to have a superficial advantage, that wasn’t a real advantage,” says longtime University of Washington computer science professor Ed Lazowska. If highly selective universities were deciding whether to give their slots to young men with prior experience or young women without it, one could easily guess who’d win and who’d lose.

  In 1984, Apple released its iconic Super Bowl ad portraying a female actor as the hero taking a sledgehammer to a depressing and dystopian world. It was a grand statement of resistance and freedom. Her image is accompanied by a voice-over intoning, “And you’ll see why 1984 won’t be like 1984.” It’s ironic that the creation of this mythical female heroine coincided with an exodus of women from technology. In a sense, the commercial was right: The technology industry would never be like 1984 again. That year was the high point for the percentage of women earning degrees in computer science. As the number of overall computer science degrees picked back up leading into the dot-com boom, more men than women were filling those coveted seats. In fact, the percentage of women in the field would dramatically decline for the next two and a half decades.

  APPLE UPSETS THE NERD CART

  As women were leaving the tech world, a new type of tech hero was taking center stage. In 1976, Apple was co-founded by Steve Wozniak, your typical nerd, and Steve Jobs, who was not your typical nerd at all. Jobs exuded a style and confidence heretofore unseen in the computer industry. He had few technical skills—Wozniak handled all that—yet Jobs became a new kind of tech rock star. He proved you could rise on the strength of other skills, such as conviction, product vision, marketing genius, and a willingness to take risks. And Jobs did take big risks, investing in software and graphics he believed would compel people to buy the Mac not for their offices but for their homes. His leadership style—described by some as cruel, petulant, ruthless, and selfish—was controversial, but all that was forgiven as he turned out extraordinary products.