jrtom: (social scientist)

I haven't entirely decided what I think about what WikiLeaks has been doing, but this is an interesting look into, and analysis of, what Assange has been trying to accomplish via these leaks.

Of minor professional interest to me: either Assange is not familiar with the terminology of social networks, or he assumes that his audience isn't (possibly fair). His strategy is an interesting complement to (or inversion of) the US counterterrorism social-network-based strategy, i.e., attempting to disrupt networks by identifying and removing key actors. Instead, Assange is apparently trying to disrupt the network by making the network itself--that is, the connections that make it something other than a collection of individuals--suspect, or at least less efficient.
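To make the contrast concrete, here's a toy sketch (plain Python, with an invented five-node network; none of this is from the article) of the two disruption strategies: removing a key actor fragments the graph outright, while leaving every actor in place but raising the cost of each tie degrades the network's overall efficiency without changing its topology.

```python
# Toy network (hypothetical): nodes are actors, edges are working ties.
graph = {
    "A": {"B", "C", "D"},  # "A" is the key actor / hub
    "B": {"A"},
    "C": {"A"},
    "D": {"A", "E"},
    "E": {"D"},
}

def component_sizes(adj):
    """Sizes of connected components, found by depth-first search."""
    seen, sizes = set(), []
    for start in adj:
        if start in seen:
            continue
        stack, size = [start], 0
        while stack:
            node = stack.pop()
            if node in seen:
                continue
            seen.add(node)
            size += 1
            stack.extend(adj[node] - seen)
        sizes.append(size)
    return sorted(sizes, reverse=True)

def remove_actor(adj, target):
    """Counterterrorism-style disruption: delete a key actor outright."""
    return {n: nbrs - {target} for n, nbrs in adj.items() if n != target}

def efficiency(adj, tie_cost=1.0):
    """Average inverse path cost over all ordered pairs. Making every tie
    'suspect' (tie_cost > 1) leaves the structure intact but degrades this."""
    nodes, total, pairs = list(adj), 0.0, 0
    for src in nodes:
        dist, frontier = {src: 0}, [src]
        while frontier:  # breadth-first distances from src
            nxt = []
            for n in frontier:
                for m in adj[n]:
                    if m not in dist:
                        dist[m] = dist[n] + 1
                        nxt.append(m)
            frontier = nxt
        for dst in nodes:
            if dst != src:
                pairs += 1
                if dst in dist:
                    total += 1.0 / (tie_cost * dist[dst])
    return total / pairs

print(component_sizes(graph))                     # [5]: one connected network
print(component_sizes(remove_actor(graph, "A")))  # [2, 1, 1]: fragmented
print(efficiency(graph))                # baseline efficiency
print(efficiency(graph, tie_cost=2.0))  # same structure, half as efficient
```

The point of the second strategy is visible in the last two lines: nothing about the graph changes, yet every interaction becomes more expensive.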

EDIT: I do look forward to seeing what David Brin (_The Transparent Society_) has to say about this; I'll be watching http://davidbrin.blogspot.com to see when he posts something.
jrtom: (Default)

A nice presentation (from Googler Paul Adams) on designing for the social web, focusing on some common practices, why they're problematic, and some options for what to do instead. (I'm personally very happy that he included the observation that 'friends' is a term that is very badly (over)used in the context of social networking sites.)
jrtom: (Default)
Modelling Opinion Formation with Physics Tools: Call for Closer Link with Reality

In essence, the author is calling out physicists for building models for the social sciences without actually knowing anything about the social sciences. An occasionally entertaining read, and one with a larger message for anyone who builds models: sanity-check them against the phenomena they're intended to represent.

EDIT: fixed the link so that the anchor text and URL are each now in their proper place. :P :)
jrtom: (Default)
From [livejournal.com profile] fdmts: A Face Is Exposed for AOL Searcher No. 4417749

In essence, AOL recently released 20 million anonymized search queries. However, it turns out that it's not that hard to figure out who someone is based on what they're searching for, as the article details.

I deal with this sort of issue on a continuing basis as part of my profession (among other things, I do research on learning models for social network analysis). In some cases, the data is inarguably public: no one really minds if I analyze the network defined by the "writes-a-paper-with" relation. But in other cases, it's been drilled into the heads of researchers--supposedly--that anonymization is required in order to release data, and often in order to get it in the first place.

The problem is, of course, that clearly anonymization isn't sufficient in this case.

It's a tricky problem, of course; we can't do research if we don't have data to work with, and there are valuable things that can be learned from such data that _don't_ involve violating people's privacy. I guess the question is: if it's necessary to collect such data in the first place, and to study it, is there anything in addition to anonymization that can be done to prevent this sort of 'reverse-engineering' of someone's identity? (Obviously AOL shouldn't have released the data publicly in the first place… but the point is that by current standards they probably thought that it wouldn't do any harm because it was anonymized.) Aggregating it isn't the answer, because then you lose much of the information that made the data valuable in the first place.
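To illustrate the failure mode, here's a hypothetical sketch in plain Python (log entries and directory invented, loosely modeled on the article's famous example): replacing the user ID with a number does nothing about the quasi-identifiers sitting in the query text itself, which can simply be joined against public records.

```python
# Hypothetical "anonymized" search log: IDs replaced, query text untouched.
log = [
    ("4417749", "landscapers in lilburn ga"),
    ("4417749", "homes sold in shadow lake subdivision"),
    ("4417749", "arnold"),               # searches for her own surname
    ("0000001", "red sox schedule"),
]

# Hypothetical public directory: (full name, town) records.
directory = [
    ("thelma arnold", "lilburn"),
    ("jane doe", "boston"),
]

def reidentify(log, directory):
    """Join each pseudonym's pooled query text against public records:
    a pseudonym whose queries mention both a town and a surname usually
    narrows down to very few real people."""
    pooled = {}
    for uid, query in log:
        pooled[uid] = pooled.get(uid, "") + " " + query
    return {
        uid: [name for name, town in directory
              if town in text and name.split()[-1] in text]
        for uid, text in pooled.items()
    }

print(reidentify(log, directory))
# {'4417749': ['thelma arnold'], '0000001': []}
```

Real de-anonymization is messier than a substring join, of course, but the structural problem is the same: the queries *are* the identifier.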

jrtom: (Default)
I just got an email from the organizers of an academic/professional conference in the field of social network analysis. The gist is that they have too many abstracts for the amount of time that they have, and are therefore trying to figure out what to drop.

In this email, we find the following gem:

It is likely that we will drop some papers from the program because they aren't about social networks, they don't make sense, they have obviously been lifted from the internet, or for some other reason that convinced us that they don't belong on the program.

Now, I realize that this conference has never claimed to have a formal peer review process for inclusion in the program; it's a conference to which one can bring work in progress, and generally work of a speculative nature. I'm generally fine with that; there's a place for such conferences, and I'm glad this one exists. Heck, I presented there last year and probably would be doing so this year if I had more time.

But I mean, seriously, have the organizers not been doing even the minimal checking required to filter out papers that aren't about social networks, or that don't make sense? (I guess this would be the "hemorrhaging edge" . . . . )
jrtom: (Default)
Amygdala: Blue in the Face

This is mostly a placeholder in case I come back to this later, but this blogger suggests that the reason why Bush & co. didn't get the warrants was that they were doing large-scale pattern analysis on the communications of tens of thousands of people (or more) . . . thus making acquiring warrants impractical at best.

This kind of analysis is precisely what I do in my research. I have no doubt whatsoever that I could get a job with the CIA or NSA to simply continue doing what I've been doing. Let me be clear: I don't think that there's anything ethically wrong with the research qua research; the evil, if any, is in how it is used.

But it still itches me.


Page generated 18 April 2019 12:54
Powered by Dreamwidth Studios