After yesterday's post, a few more notes on the attack of the robo-denialists
The story of the army of "sockpuppets" that has invaded the debate on the internet is surfacing just about everywhere. Today George Monbiot writes about it in the "Guardian".
In essence, up to now we have run the debate on the web as if it were one of those student assemblies people held back in '68. We are all equals, everyone has the right to speak, everyone has the right to a reply.
These rules, however, hold for people who are not being paid to act as agents provocateurs and, above all, for people who are actually human beings. They do not hold for robots or for professional disinformers. Unfortunately, we are only now realizing that such robots and disinformers exist, that there are quite a few of them, and that they are steering the debate while giving the impression of being real, disinterested people.
Read these points and shudder (from Monbiot's article, reproduced in full further down):
• Companies today use "persona management software", which multiplies the efforts of each disinformer (astroturfer), creating the impression that there is major popular support for whatever a government or an industry is trying to do.
• This software creates everything a real person would need online: a name, email accounts, web pages and social media. In other words, it automatically generates something that looks like an authentic profile, making it hard to tell the difference between a virtual robot and a real commentator.
• Fake accounts can be kept updated by automatically reposting or linking to content generated elsewhere, reinforcing the impression that the account holders are real and active.
• Human disinformers can then be assigned these "pre-aged" accounts to create a back story, suggesting that they have been busy linking and tweeting for months. No one would suspect that they arrived on the scene for the first time a moment ago, for the sole purpose of attacking an article on climate science or arguing against new controls on salt in junk food.
• With some clever use of social media, disinformers can make it appear as if a persona was actually at a conference and have it introduce itself to key individuals as part of the exercise. There are a variety of social media tricks that can be used to add a level of realness to non-existent people.
Which makes me wonder: could Claudio Costa be a robot too?
___________________________________________________
The need to protect the internet from 'astroturfing' grows ever more urgent | George Monbiot
The tobacco industry does it, the US Air Force clearly wants to ... astroturfing – the use of sophisticated software to drown out real people on web forums – is on the rise. How do we stop it?
Every month more evidence piles up, suggesting that online comment threads and forums are being hijacked by people who aren't what they seem.
The anonymity of the web gives companies and governments golden opportunities to run astroturf operations: fake grassroots campaigns that create the impression that large numbers of people are demanding or opposing particular policies. This deception is most likely to occur where the interests of companies or governments come into conflict with the interests of the public. For example, there's a long history of tobacco companies creating astroturf groups to fight attempts to regulate them.
After I wrote about online astroturfing in December, I was contacted by a whistleblower. He was part of a commercial team employed to infest internet forums and comment threads on behalf of corporate clients, promoting their causes and arguing with anyone who opposed them.
Like the other members of the team, he posed as a disinterested member of the public. Or, to be more accurate, as a crowd of disinterested members of the public: he used 70 personas, both to avoid detection and to create the impression there was widespread support for his pro-corporate arguments. I'll reveal more about what he told me when I've finished the investigation I'm working on.
It now seems that these operations are more widespread, more sophisticated and more automated than most of us had guessed. Emails obtained by political hackers from a US cyber-security firm called HBGary Federal suggest that a remarkable technological armoury is being deployed to drown out the voices of real people.
As the Daily Kos has reported, the emails show that:
• Companies now use "persona management software", which multiplies the efforts of each astroturfer, creating the impression that there's major support for what a corporation or government is trying to do.
• This software creates all the online furniture a real person would possess: a name, email accounts, web pages and social media. In other words, it automatically generates what look like authentic profiles, making it hard to tell the difference between a virtual robot and a real commentator.
• Fake accounts can be kept updated by automatically reposting or linking to content generated elsewhere, reinforcing the impression that the account holders are real and active.
• Human astroturfers can then be assigned these "pre-aged" accounts to create a back story, suggesting that they've been busy linking and retweeting for months. No one would suspect that they came onto the scene for the first time a moment ago, for the sole purpose of attacking an article on climate science or arguing against new controls on salt in junk food.
• With some clever use of social media, astroturfers can, in the security firm's words, "make it appear as if a persona was actually at a conference and introduce himself/herself to key individuals as part of the exercise … There are a variety of social media tricks we can use to add a level of realness to fictitious personas."
Perhaps the most disturbing revelation is this. The US Air Force has been tendering for companies to supply it with persona management software, which will perform the following tasks:
a. Create "10 personas per user, replete with background, history, supporting details, and cyber presences that are technically, culturally and geographically consistent … Personas must be able to appear to originate in nearly any part of the world and can interact through conventional online services and social media platforms."
b. Automatically provide its astroturfers with "randomly selected IP addresses through which they can access the internet" (an IP address is the number which identifies someone's computer), and these are to be changed every day, "hiding the existence of the operation". The software should also mix up the astroturfers' web traffic with "traffic from multitudes of users from outside the organisation. This traffic blending provides excellent cover and powerful deniability."
c. Create "static IP addresses" for each persona, enabling different astroturfers "to look like the same person over time". It should also allow "organisations that frequent same site/service often to easily switch IP addresses to look like ordinary users as opposed to one organisation."
Software like this has the potential to destroy the internet as a forum for constructive debate. It jeopardises the notion of online democracy. Comment threads on issues with major commercial implications are already being wrecked by what look like armies of organised trolls – as you can sometimes see on guardian.co.uk.
The internet is a wonderful gift, but it's also a bonanza for corporate lobbyists, viral marketers and government spin doctors, who can operate in cyberspace without regulation, accountability or fear of detection. So let me repeat the question I've put in previous articles, and which has yet to be satisfactorily answered: what should we do to fight these tactics?