In the future, if you tweet out a photo of a hilarious, meme-tastic kitten, it might be best not to include terms like “white powder,” “dirty bomb,” or “Death to America.”
Since late January, the Federal Bureau of Investigation has been asking the IT industry to help it develop an open-source social-media application that would provide a panoramic real-time picture of any “breaking event, crisis, activity, or natural disaster…in progress in the U.S. or globally,” according to statements released by the agency. Essentially, the bureau wants to crowd-source software that would data-mine Twitter and other websites to scan for—and perhaps predict—mass uprisings, criminal activity, and terror plots.
To make something like what the FBI is looking for, a programmer would have to write a script to yank content from, say, open Facebook profiles and Twitter feeds. Once the data is obtained, it can be quickly searched for key terms. The next step is “geotagging”—tying individual posts to specific geographical locations. But the app would have to deal with more than just keywords. Ideally, the FBI wants a “threat index” that combines multiple metrics such as locations, links, and networks into one waterfall search engine. Think Klout, but souped-up for the NatSec establishment.
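To get a feel for what such a script might look like in its simplest form, here is a minimal, purely illustrative sketch. The post data, keyword list, and scoring scheme are invented for this example; the FBI’s notice asks for something far broader, and nothing here reflects its actual specifications.

```python
# Illustrative sketch only: scan a few hypothetical public posts for
# watchlist keywords and attach a naive "threat score." All data and
# scoring choices are invented, not drawn from the FBI's requirements.
from dataclasses import dataclass

WATCHLIST = {"bomb", "suspicious package", "white powder", "active shoot", "lockdown"}

@dataclass
class Post:
    user: str
    text: str
    lat: float   # geotag latitude, if the account shares location
    lon: float   # geotag longitude

def keyword_hits(post: Post) -> list[str]:
    """Return every watchlist term that appears in the post text."""
    text = post.text.lower()
    return [term for term in WATCHLIST if term in text]

def threat_score(post: Post) -> float:
    """Naive score: one point per keyword hit. A real "threat index"
    would also weight location, links, and the poster's network."""
    return float(len(keyword_hits(post)))

if __name__ == "__main__":
    sample = [
        Post("cat_fan", "hilarious kitten covered in white powder (it's flour)", 38.9, -77.0),
        Post("newsbot", "campus on lockdown after suspicious package report", 41.8, -87.6),
    ]
    for p in sample:
        print(p.user, keyword_hits(p), threat_score(p))
```

Even this toy version makes the core problem obvious: the flour-covered kitten scores a hit right alongside the genuine lockdown report.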
At first glance, the concept seems sensible enough. It’s no surprise the US government would want to use every resource possible to stay ahead of the news and intelligence curve in case a new crisis hits at home or abroad. And because the program would be aimed at monitoring open sources, it might not sound like a major civil-liberties tripwire, since tweets and online forums are usually available for anybody to view.
Still, the idea of Big Brother checking up on whom you’ve friended on Facebook or watching the embarrassing videos you’ve posted on YouTube might be off-putting, even if you’re not a die-hard civil libertarian. Such initiatives are probably legal, says Rebecca Jeschke, a digital-rights analyst at the Electronic Frontier Foundation, but they’re also “creepy.”
The FBI isn’t completely oblivious to such concerns. In a statement sent to Mother Jones, a bureau spokeswoman insisted that the FBI is not looking for a program that would access private data or “focus on specific persons or protected groups.” Instead, she claims, the program would home in on “activities constituting violations of federal criminal law or threats to national security.” The FBI also provided examples of words the application would be built to single out, including “bomb,” “suspicious package,” “white powder,” “active shoot,” and “lockdown.” “Although the FBI has always adapted to meet changes in technology,” the statement reads, “the rule of law, civil liberties, and civil rights will remain our guiding principles.” (They don’t always live up to those.)
Privacy concerns aside, the efficacy of open-source data-mining applications is, at best, questionable. “It reads almost like science fiction,” Mike German, a senior policy counsel for the ACLU’s Washington Legislative Office and former FBI agent, says. “The FBI has this unquenchable thirst for more data…Here they are in this day and age, thinking there is some easy solution to identifying threats against the country. But it’s often foolish for agents to take what they see online and treat it as intelligence. For instance, if you run a search for some of their key terms like ‘lockdown,’ ‘white powder,’ and ‘active shoot,’ you get over 345 million hits. That’s 345 million potential false tips.”
The government has tried this sort of thing before, without much success. The Department of Homeland Security already has several controversial data-mining programs. In 2007, a DHS program known as ADVISE was suspended after an internal audit by the department’s inspector general found that it had dodged a required privacy review. And last September, the Government Accountability Office issued a report (PDF) that urged stronger executive oversight of DHS data-mining to ensure necessary privacy protections. The Defense Intelligence Agency, National Security Agency, and Central Intelligence Agency also have well-documented histories of flirting with large-scale data-mining, with mixed, secret, and often controversial results.
In 2008, a privacy and terrorism commission backed by DHS published a 376-page report titled “Protecting Individual Privacy in the Struggle Against Terrorists” that panned the logic behind post-9/11 data-mining. “Automated identification of terrorists through data mining (or any other known methodology) is neither feasible as an objective nor desirable as a goal of technology development efforts,” the commissioners wrote. “Even in well-managed programs, such tools are likely to return significant rates of false positives, especially if the tools are highly automated.”
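The commission’s false-positive warning comes down to simple base-rate arithmetic. The back-of-the-envelope calculation below uses entirely hypothetical numbers, chosen only to show the shape of the problem, not to estimate real traffic or threat rates.

```python
# Hypothetical base-rate illustration: even a very accurate filter, run
# over an enormous stream of mostly benign posts, produces flags that are
# almost entirely false alarms. All figures below are assumptions.
posts_per_day = 300_000_000      # assume ~300M public posts scanned daily
true_threats = 10                # assume 10 genuinely threatening posts
false_positive_rate = 0.001      # assume 0.1% of benign posts are wrongly flagged
true_positive_rate = 0.99        # assume 99% of real threats are caught

flagged_benign = (posts_per_day - true_threats) * false_positive_rate
flagged_real = true_threats * true_positive_rate

print(f"False alarms per day: {flagged_benign:,.0f}")
print(f"Real threats caught:  {flagged_real:,.1f}")
print(f"Share of flags that are real: {flagged_real / (flagged_real + flagged_benign):.6%}")
```

Under those assumptions, roughly 300,000 posts get flagged every day, and only about ten of them are real—well under a hundredth of a percent—which is the same arithmetic behind German’s “345 million potential false tips.”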
The FBI, however, is undaunted. As of Wednesday, it’s still looking for programmers.
By: Asawin Suebsaeng, April 5, 2012