As I wrote the other day, more and more human subjects research is being performed by large tech companies. The best way to handle the ethical issues raised by this research is still unclear, and a first step is to get some idea of what these organizations have already published. So here is a brief review of the papers I know about where human subjects experiments have been conducted by companies. I'm only counting studies that have (a) been published in the literature and (b) been run as experiments on users. I could come up with surprisingly few. I'd be interested to see more in the comments if people know about them.
What they did: Randomly varied the emotional content of posts shown in users' news feeds and observed whether the users' own posts shifted emotionally in response.
What they found: That there was almost no real effect on emotion. The effect was statistically significant, likely because of the enormous sample size, but far too small to be scientifically or emotionally meaningful.
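This significant-but-meaningless pattern is what large samples produce: with hundreds of thousands of users per arm, even a trivial mean difference clears conventional significance thresholds. A minimal sketch with hypothetical summary statistics (these are illustrative numbers, not the study's actual data):

```python
import math

# Hypothetical summary statistics (not the study's actual numbers):
# a mean difference of 0.02 "emotional words per post", SD of 2,
# and 300,000 users per arm.
n = 300_000
diff, sd = 0.02, 2.0

se = sd * math.sqrt(2.0 / n)           # standard error of the difference
z = diff / se                          # z-statistic for the two-sample test
p = math.erfc(abs(z) / math.sqrt(2))   # two-sided p-value
d = diff / sd                          # standardized effect size (Cohen's d)

print(f"z={z:.2f}  p={p:.3g}  d={d:.3f}")
# p is well below 0.05, yet d = 0.01 -- one hundredth of a standard deviation.
```

The p-value says "not chance"; the effect size says "not meaningful." Both statements are true at once.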
Company: Not stated but sounds like Reddit
What they did: Randomly upvoted, downvoted, or left alone posts on the social networking site, then observed whether the final rating of posts differed across the three treatments.
What they found: Posts that were artificially upvoted ended up with a final rating score (total upvotes minus total downvotes) about 25% higher than control posts.
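The mechanism at work here is social influence bias, or herding: an early nudge changes how later voters behave. A toy simulation of that idea (my own illustration, with made-up parameters, not the paper's model):

```python
import random

# Toy positive-feedback model of rating herding: each voter upvotes with a
# base probability that is nudged toward the sign of the post's current score.
random.seed(42)

def final_score(initial_vote, n_voters=100, base_p=0.5, herd=0.05):
    score = initial_vote  # +1 treated (upvoted), 0 control, -1 downvoted
    for _ in range(n_voters):
        p = base_p + herd * (1 if score > 0 else -1 if score < 0 else 0)
        score += 1 if random.random() < p else -1
    return score

def mean_score(initial_vote, trials=2000):
    return sum(final_score(initial_vote) for _ in range(trials)) / trials

up, ctrl, down = mean_score(+1), mean_score(0), mean_score(-1)
print(f"upvoted: {up:.1f}  control: {ctrl:.1f}  downvoted: {down:.1f}")
```

Even with a tiny per-voter nudge, the initial vote reliably separates the three arms' average final scores, which is the qualitative pattern the experiment was designed to detect.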
What they did: Using a commercial Facebook app, they identified users who adopted a product and randomized whether messages about that adoption were sent to the users' friends. They then measured whether the friends adopted the product as well.
What they found: Many interesting things. For example: susceptibility to influence decreases with age, people over 31 are stronger influencers, women are less susceptible to influence than men, and so on.
What they did: They developed methods for inferring the causal impact of an ad in a time-series setting. They used data from an advertiser who showed keyword-targeted ads and measured how many visits the advertiser's website received through paid and organic (non-paid) clicks.
What they found: That the ads worked, and, more importantly, that their methods could estimate the causal effect of the ad.
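The core counterfactual idea can be sketched in a few lines: fit the relationship between the outcome and an unaffected control series in the pre-period, extrapolate that fit as the "no-ad" counterfactual, and attribute the post-period gap to the ad. This is a minimal sketch with fabricated toy data and a plain regression, not the authors' actual model:

```python
# Hypothetical daily site-visit data; the "ad" starts at day 30 and adds
# a constant lift of 15 visits per day (so the true total lift is 450).
control = [100 + 2 * t for t in range(60)]     # unaffected comparison series
visits = [0.5 * c + 10 for c in control]       # pre-period relationship
visits = [v + (15 if t >= 30 else 0) for t, v in enumerate(visits)]

pre_x, pre_y = control[:30], visits[:30]

# Ordinary least squares on the pre-period: y = a + b * x
n = len(pre_x)
mx, my = sum(pre_x) / n, sum(pre_y) / n
b = sum((x - mx) * (y - my) for x, y in zip(pre_x, pre_y)) / \
    sum((x - mx) ** 2 for x in pre_x)
a = my - b * mx

# Extrapolate the counterfactual and sum the post-period gap.
counterfactual = [a + b * x for x in control[30:]]
effect = sum(y - yhat for y, yhat in zip(visits[30:], counterfactual))
print(f"estimated total lift: {effect:.1f}")  # -> estimated total lift: 450.0
```

With noiseless toy data the estimate recovers the built-in lift exactly; with real data the interesting work is in modeling the counterfactual's uncertainty, which is what the paper's methods provide.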