jrtom: (Default)

As a framing exercise: if you are an M.D. who is tasked with keeping torture safe, you have a practical problem. Set aside the ethical problem for a moment. How do you know how to do it?

...in order to do the job the medical monitors were given to do, it left them two choices, both of which were awful:

One, they could just wing it. You're talking about techniques that carry high risks of PTSD, but also high risks of physical injury and death...So I think it's certainly possible that while they weren't eagerly looking forward to setting up research they might have been backed into this by saying, let's take notes.

...Now, whether they considered it research or not is irrelevant. There are some crimes for which you must prove intent. Human subject protections have no such qualifier. Particularly when there's risk for injury to the subject, you've crossed that line.

For those concerned about triggering, the article does not discuss any specific techniques, although it does mention one or two.

It would have been even better if he'd had to pass grant proposals out a slot every once in a while, which someone would have to "approve" (by pushing a button) in order for the coin slot to start working again. :)
from [livejournal.com profile] fdmts:

Parachute use to prevent death and major trauma related to gravitational challenge: systematic review of randomised controlled trials

A couple of choice quotes:

It is a truth universally acknowledged that a medical intervention justified by observational data must be in want of verification through a randomised controlled trial.


It is often said that doctors are interfering monsters obsessed with disease and power, who will not be satisfied until they control every aspect of our lives (Journal of Social Science, pick a volume).

The responses (scroll to the bottom and click through) are also worth reading, although some of them are rather pedantic.
From [livejournal.com profile] fdmts: A Face Is Exposed for AOL Searcher No. 4417749

In essence, AOL recently released 20 million anonymized search queries. However, it turns out that it's not that hard to figure out who someone is based on what they're searching for, as the article details.

I deal with this sort of issue on a continuing basis as part of my profession (among other things, I do research on learning models for social network analysis). In some cases, the data is inarguably public: no one really minds if I analyze the network defined by the "writes-a-paper-with" relation. But in other cases, it's been drilled into the heads of researchers--supposedly--that anonymization is required in order to release data, and often in order to get it in the first place.

The problem, of course, is that anonymization clearly wasn't sufficient in this case.

It's a tricky problem; we can't do research if we don't have data to work with, and there are valuable things that can be learned from such data that _don't_ involve violating people's privacy. I guess the question is, if it's necessary to collect such data in the first place, and to study it, is there anything in addition to anonymization that can be done to prevent this sort of 'reverse-engineering' of someone's identity? (Obviously AOL shouldn't have released the data publicly in the first place…but the point is that by current standards they probably thought that it wouldn't do any harm because it was anonymized.) Aggregating it isn't the answer, because then you lose much of the information that made the data valuable in the first place.
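To make the re-identification risk concrete, here's a toy sketch of the linkage attack. All user IDs, queries, and "known facts" below are invented for illustration; the real attack in the article worked the same way, by matching an "anonymized" query history against publicly knowable facts about a person.

```python
# Invented "anonymized" log: real names replaced by opaque numeric IDs,
# but each ID's full query history is kept together.
queries_by_user = {
    1000001: ["plumbers in springfield", "little league schedule springfield",
              "homes for sale maple street springfield"],
    1000002: ["cheap flights to boston", "baseball schedule", "apartment listings"],
    1000003: ["springfield weather", "movie times"],
}

# Side information an attacker might already have about a target person
# (e.g., their town and street, learned from public records).
known_facts = ["springfield", "maple street"]

def candidate_users(logs, facts):
    """Return the IDs whose combined queries mention every known fact."""
    hits = []
    for user_id, queries in logs.items():
        history = " ".join(queries)
        if all(fact in history for fact in facts):
            hits.append(user_id)
    return hits

print(candidate_users(queries_by_user, known_facts))  # -> [1000001]
```

The point of the sketch: the per-query content is harmless, but grouping all queries under one pseudonym turns the history itself into a fingerprint, and a couple of outside facts can narrow millions of IDs down to one.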

They're taking the hobbits to Isengard! (Flash; high res version)

This is funny (and short enough not to be annoying). It's also caused me to think more about solutions to the whole "derivative works" problem.
One of the ironies of the data sets that I study--social networks--is that they are both omnipresent and often difficult to get access to. So my research has been driven, in part at least, by the properties of the data to which I've been able to secure access. Sometimes organizations (companies, for example) will make data sets available to those in academia, and I've benefited from this, but it doesn't happen often. (The fact that a bunch of Enron's corporate emails got dumped on the web has, no joke, changed the course of the field of social network analysis.)

It's just occurred to me that if I take a job in industry, this problem will, in a weird way, invert itself: the company that I work for may be able to give me all sorts of data to work with...but no one else will.



Page generated 18 April 2019 12:27
Powered by Dreamwidth Studios