This is an interesting briefing (PDF) on Open Source Intelligence gathering on the Internet from the guys who created the great data mining program Maltego. Slides 23 to 26 are the most interesting part. The authors posit a spin on an idea raised a few times on this blog about the creation of imitative virtual social networks. The authors have a quote on slide 23 stating:
"If you can convince an algorithm that you are human, can you convince a human that you are human?" The CAPTCHA software is the algorithm: a visual Turing test. Once an automated program passes that test, it is a matter of convincing other humans that it is human. How would it do that? By using human identities data mined from the Internet.
On the 24th slide is a screenshot of an imaginary application called the "Virtual Identity Creator" that creates multiple identities within social networks, email, and blogging software, complete with built-in CAPTCHA circumvention software. The data within the imaginary program would presumably consist of thousands of harvested identities. The authors then go on to posit that you could do the following with such a network of "imaginary virtual friends":
- manipulate ratings of anything
- sway public opinion
- influence political polls
- alter stock prices - directly or indirectly
- perform social denial of service
"Keep in mind that people are flock animals - you just need to be the initial catalyst and get critical mass."

Some thoughts on the briefing. Imitative social networks aren't dependent upon thousands of computers; they are like a Web 2.0 version of botnets (an idea also alluded to before by pdp of the GNUCITIZEN blog). At the technical level, an imitative social botnet needs only one computer, because the nodes are virtual stolen identities. It is also problematic to call it a network from an outsider's viewpoint. A network is an interconnected system of people, and these identities are interconnected only within the imaginary "Virtual Identity Creator" program and within the intent of the social botnet controller. To an outside observer, the program's output is a swarm of unrelated nodes. The controller could add another layer of deception by randomizing the identities he uses for any particular endeavour. For example, today the social botnet controller uses stolen identities 1005, 10001, 78980, et al. to create buzz on trading message boards about a particular stock; tomorrow the botnet uses stolen identities 967, 98764, 433, and so on, to perform a black public relations attack on a high-profile blog.
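The rotation scheme described above can be sketched in a few lines. This is purely illustrative: the identity pool, the `pick_identities` helper, and the campaign names are all hypothetical, standing in for the imaginary "Virtual Identity Creator" program.

```python
import random

# Hypothetical identity pool: integer IDs standing in for thousands of
# harvested identities. Entirely made up for illustration.
identity_pool = list(range(1, 100001))

def pick_identities(pool, n, seed=None):
    """Draw n distinct identities from the pool for one campaign.

    A fresh random draw per campaign means no two endeavours reuse a
    predictable subset, so an observer sees only unrelated accounts.
    """
    rng = random.Random(seed)
    return rng.sample(pool, n)

# Today's stock-buzz campaign and tomorrow's black-PR campaign draw
# different, unconnected subsets of the pool.
stock_buzz_crew = pick_identities(identity_pool, 3, seed=1)
blog_attack_crew = pick_identities(identity_pool, 3, seed=2)
print(stock_buzz_crew)
print(blog_attack_crew)
```

The seed parameter is only there to make the sketch reproducible; a real operator would want the opposite, i.e. draws that are as unlinkable as possible.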
I'd imagine that if such a program were to exist in the future, it would be somewhat cumbersome at first. Stolen identities would eventually be found out, and other identities pushing the same message or goal would then be discovered to be stolen as well. A fix for this might be the creation of networks of nonexistent people, i.e., fake names and personas that are nonetheless "real" insofar as they have other fake virtual friends on networking sites. Alternatively, an imitative network could attack real online personas with accusations that they are the false ones.
There would also be problems with automated message content that may not deceive real humans (similar to the original Turing test). However, I could foresee a synthesis of technologies where identity mining software is combined with text analysis or content analysis to data mine an online persona's idiosyncrasies within their writing. Neal Krawetz has already done some research in this area. If Krawetz's technology were tweaked in the right manner, an imitative social network controller could create virtual versions of "The Talented Mr. Ripley" by imitating both an identity and that identity's writing style.
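To make the idea of mining writing idiosyncrasies concrete, here is a toy stylometry sketch: it profiles texts by their function-word frequencies, a classic authorship signal. This is an illustrative stand-in under my own assumptions, not Krawetz's actual method; the word list, `style_profile`, and `style_distance` are all invented for this example.

```python
import math
import re
from collections import Counter

# A small, fixed list of English function words (an assumed choice;
# real stylometry uses much larger feature sets).
FUNCTION_WORDS = ["the", "and", "of", "to", "a", "in", "that", "is", "it", "for"]

def style_profile(text):
    """Relative frequency of each function word in the text."""
    words = re.findall(r"[a-z']+", text.lower())
    counts = Counter(words)
    total = max(len(words), 1)
    return [counts[w] / total for w in FUNCTION_WORDS]

def style_distance(a, b):
    """Euclidean distance between two style profiles (lower = more alike)."""
    return math.sqrt(sum((x - y) ** 2
                         for x, y in zip(style_profile(a), style_profile(b))))

# A target persona's sentence, an imitation of it, and unrelated spam.
sample = "The market is moving and the analysts say that it is time to buy."
imitation = "The chart is turning and the traders say that it is time to sell."
unrelated = "Buy now!!! Huge gains!!! Don't miss out!!!"

# The imitation scores closer to the sample than the spam does.
print(style_distance(sample, imitation) < style_distance(sample, unrelated))
```

An imitator would run this in reverse: tune generated text until its profile distance to the target persona drops below some threshold.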