Saturday, September 9, 2017

Junk science in Stanford's artificial intelligence gayface study that Newsweek, The Guardian, The Economist and Big Gay missed

Not very well, it can't, Newsweek. Take a class in statistics.
Stanford is about to publish an idiotic artificial intelligence gayface study that claims AI can identify people's sexual orientation from their photographs. (We didn't know Stanford's Graduate School of Business was so well-versed in sociological studies. Did you?)
     GLAAD and HRC are, of course, apoplectic and have Issued A Press Release through Drew Anderson, GLAAD's "Rapid Response" czar.
     The press release ripped the study for being irresponsible, for being white-centric, for ignoring bi's, for getting pictures from dating sites, for not being peer reviewed, and blah blah blah.
     The Stanford Biz School Boy Wizards struck back, accusing HRC and GLAAD of issuing a press release containing "poorly researched opinions of non-scientists" and of being irresponsible themselves ("I'm rubber, you're glue, whatever you say bounces off me and sticks to you.") They also said their study was peer-reviewed, though they didn't say by whom.
     We think the study is crap, although Newsweek, The Guardian and The Economist, among others, completely fell for it.
     Why do we think the study is crap? Because AKSARBENT published a post back in March about research on "gaydar" by the very smart University of Wisconsin at Madison Department of Psychology, that's why.
     The U. of WI busted, once and for all, ALL studies which test gaydar (we include robot gaydar) by using test populations that are 1/2 gay and 1/2 straight.
     Guess what! The Stanford Study was a paired study!

     You'd think GLAAD and HRC would have grabbed on to this fundamental methodology flaw like junkyard dogs, but they completely missed it, as they were undoubtedly planning the menus for their next celebrity fundraising banquets.
     According to the U of WI Department of Psychology, the problem is this: since only 3-8% of the population is gay, any study which shows people (or, presumably, computers) two pictures and says pick the gay one, is bullshit. What a gaydar study (and we can't imagine anything less worth studying) should do is to show someone, or some thing, 100 people (or voices or whatever) and tell them to pick out the 3-8 gay wads.
     After all, even flipping a coin gives you a success rate of 50% if there are only 2 choices.
     William Cox, Assistant Scientist, Department of Psychology, University of Wisconsin-Madison (Go Badgers!), wrote the article debunking the myth of gaydar (and poorly designed gaydar studies), with help from two other Wisconsin psych profs and a graduate student in psychology (Alyssa Bischmann) from THE UNIVERSITY OF NEBRASKA! (Yay! Go Big Red!) We excerpt:
     But as we’ve been able to show in two recent papers, all of these previous studies fall prey to a mathematical error that, when corrected, actually leads to the opposite conclusion: Most of the time, gaydar will be highly inaccurate.
     How can this be, if people in these studies are accurate at rates significantly higher than 50 percent?
      There’s a problem in the basic premise of these studies: Namely, having a pool of people in which 50 percent of the targets are gay. In the real world, only around 3 to 8 percent of adults identify as gay, lesbian or bisexual.
     What does this mean for interpreting the 60 percent accuracy rate? Think about what the 60 percent accuracy means for the straight targets in these studies. If people have 60 percent accuracy in identifying who is straight, it means that 40 percent of the time, straight people are incorrectly categorized. In a world where 95 percent of people are straight, 60 percent accuracy means that for every 100 people, there will be 38 straight people incorrectly assumed to be gay, but only three gay people correctly categorized.
     Therefore, the 60 percent accuracy in the lab studies translates to 93 percent inaccuracy for identifying who is gay in the real world (38 / [38 + 3] = 92.7 percent). Even when people seem gay – and set off all the alarms on your gaydar – it’s far more likely that they’re straight. More straight people will seem to be gay than there are actual gay people in total.
     Guess what AKSARBENT did, kids? We applied the above example to the Stanford AI research and found real-world inaccuracy rates of 71% and 68% in spotting gay women. The Stanford study used photos of 35,000 people. We assumed 17,500 (half) were women, though varying that split wouldn't change the inaccuracy percentage. We also chose 5% (between 3-8%) as the real-world percentage of gay women. We didn't take into account bisexuals, but then, neither did the Stanford Business School tabulators. We're not statisticians, so let us know if our arithmetic is off, but really, it's pretty simple.
      As for men, you can do that exercise yourself, Dear Reader. Click the table below to enlarge.
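If you'd like to redo that exercise (or check our arithmetic), the whole calculation boils down to a few lines. Here's a minimal sketch in Python (the function name is ours) that reproduces the Wisconsin worked example; plug in whatever accuracy and base rate you like:

```python
def real_world_inaccuracy(accuracy, gay_base_rate, population=100):
    """Of everyone flagged as gay, what fraction is actually straight?"""
    gay = population * gay_base_rate
    straight = population - gay
    true_positives = accuracy * gay               # gay people correctly flagged
    false_positives = (1 - accuracy) * straight   # straight people wrongly flagged
    return false_positives / (false_positives + true_positives)

# The Wisconsin example: 60% accuracy in a world that's 95% straight.
# Out of 100 people, 38 straight people are wrongly flagged as gay,
# while only 3 gay people are correctly flagged: 38 / 41 = 92.7%.
print(round(real_world_inaccuracy(0.60, 0.05) * 100, 1))  # 92.7
```

Swap in the Stanford study's reported accuracy figure for men and a 3-8% base rate, and you have the exercise done.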
