Bribe or ‘Tax’? NSA Gives $10 Million to RSA for Backdoor Access

Hmm. Hold up. So if we go by this Wikipedia entry…

“Founded as an independent company in 1982, RSA Security, Inc. was acquired by EMC Corporation in 2006 for US$2.1 billion and operates as a division within EMC.”[5]

People need to understand: this means RSA took around 2% of what they’d make in one year. FOR A BACK-DOOR. OMG. Does this not sound more like a tax than a payment (never mind a bribe!)? How much would you care about an extra 2% per year? Exactly. That’s all I got. Someone else needs to close that gap.     -Max

What’s an encryption backdoor cost? When you’re the NSA, apparently the fee is $10 million.

Intentional flaws created by the National Security Agency in RSA’s encryption tokens were discovered in September, thanks to documents released by whistleblower Edward Snowden. It has now been revealed that RSA was paid $10 million by the NSA to implement those backdoors, according to a new report from Reuters.

Two people familiar with RSA’s BSafe software told Reuters that the company had received the money in exchange for making the NSA’s cryptographic formula the default for encrypted key generation in BSafe.

“Now we know that RSA was bribed,” said security expert Bruce Schneier, who has been involved in the Snowden document analysis. “I sure as hell wouldn’t trust them. And then they made the statement that they put customer security first,” he said.

RSA, now owned by computer storage firm EMC Corp, has a long history of entanglement with the government. In the 1990s, the company was instrumental in stopping a government plan to include a chip in computers that would’ve allowed the government to spy on people.

It has also had its algorithms hacked before, as has RSA-connected VeriSign.

The new revelation is important, Schneier said, because it confirms more suspected tactics that the NSA employs.

“You think they only bribed one company in the history of their operations? What’s at play here is that we don’t know who’s involved,” he said.

Other companies that build widely used encryption apparatus include Symantec, McAfee, and Microsoft. “You have no idea who else was bribed, so you don’t know who else you can trust,” Schneier said.

RSA did not return a request for comment, and did not comment for the Reuters story.

via CNet

 

How Online Privacy Tools Are Changing Internet Security

How online privacy tools are changing Internet security and driving the (probably quixotic) quest for anonymity in the digital age.

For many of us, the Internet is like a puppy—lovable by design and fun to play with, but prone to biting. We suspect that our digital footprint is being tracked and recorded (true), mined and sold (super true), but we tolerate these teeth marks because, for many of us, the Internet is irresistible, its rewards greater than its risks. In a 2011 Gallup poll, more than half of those surveyed said they worried about privacy issues with Google, yet 60 percent paid weekly visits to the search giant. As long as we clear our search terms, block cookies, use antivirus software and see that our social media presence isn’t too social, we’ll be OK. Right?

Increasingly, this sense of security is an illusion. “I don’t trust anything on the Internet,” says digital whistleblower John Young. “Cybersecurity is a fiction.” He would know: Young was a seminal member of WikiLeaks and runs Cryptome, a website that posts “documents prohibited by governments worldwide”—think FBI files and manuals detailing how Microsoft spies on us. He argues that the tenuous architecture of the Internet prevents it from being truly secure.

Case in point: Mat Honan, the wired.com writer whose entire digital existence was destroyed by hackers within the span of an hour last August. The cyberbaddies broke into Honan’s Gmail, accessed his Apple ID account and deleted data on his MacBook, iPhone and iPad, including photos of his family. The scariest part of this privacy breach—aside from the fact that its victim is a tech writer (ahem)—is that the hackers hijacked his online world using nothing more than his billing address and the last four digits of his credit card, information that’s relatively easy to obtain online if you know the right tricks. Honan’s story served as yet another reminder that THE INTERNET IS NOT SAFE, PEOPLE.

So is it time to go off the grid? That’s one option. Another is to ditch the puppy analogy and view the Internet the way those who demand higher than average levels of security do: as a giant tracking device that can be outsmarted. Countless tools exist to cloak your digital identity: email encryption services, “meta search engines” that promise private browsing and networks and software that offer a degree of anonymity and, in some cases, entry to previously inaccessible websites. Sounds like the stuff of spy novels, but these tools are available to anyone with an Internet connection.

Of course, the idea of online anonymity clashes with the prevailing “share everything” approach to the Internet—and the moneymaking opportunities therein—which makes it a fascinating and complicated topic. Its opponents say it fosters hate and crime (Mark Zuckerberg’s sister, Randi Zuckerberg, who used to head up marketing at Facebook, famously called for the end of online anonymity earlier this year, stating that “People behave a lot better when they have their real names down”), while privacy champions argue that anonymity grants greater security and freedom of expression. The John Youngs of the world will tell you that being truly unidentifiable online is a fairy tale. But every fairy tale has a lesson, and even if you get hives thinking about trading your identity for a more armored online existence, there’s plenty to learn from the heroes, villains and everyday secret-keepers attempting to go John Doe in the digital realm.

 

Photo by Richard Fleischman.

There’s a famous New Yorker cartoon from 1993 that shows two dogs in front of a computer, one saying to the other, “On the Internet, nobody knows you’re a dog.” This was a novel proposition in the Web’s early days. Liberated from our actual identity, we chatted in forums using ridiculous pseudonyms such as “beaniebabyaddict47” and posted comments as “Anonymous,” our snarky alter ego. Anonymity felt great, even if technically it was just a state of mind. But then social media arrived, and with it the idea that transparency is power. Suddenly, we decided it was important to tell the Internet our real name and what we had for breakfast.

For those who want to keep their breakfast habits a secret, the rise of transparency created new security risks. Enter the digital cloaking device. In 2002, the U.S. Naval Research Lab debuted Tor, one of the more effective “anonymizers” to date. A group of M.I.T. grads developed it with the goal of masking one’s IP address, the string of numbers that reveals a given computer’s physical location (snoops and hackers love your IP because it brings them one step closer to determining the real you).

At the heart of Tor is a concept called “onion routing,” which sends the “packets” of info needed to get from points A to B online on a winding route through a network of randomly selected servers, each one knowing only the packet’s previous and next stops in the chain, thereby hiding the user’s IP and allowing a degree of anonymous Web browsing. Confused? In the simplest terms, Tor separates the origin and destination of your online communication, essentially tunneling you through the Web.
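
The layering idea can be sketched in a few lines of Python. This is a toy illustration only: XOR with a static per-hop key stands in for real encryption (Tor actually negotiates fresh AES keys with each relay in the circuit), but it shows how the client wraps one layer per hop and each relay peels exactly one.

```python
import os

def xor(data: bytes, key: bytes) -> bytes:
    # Toy stream "cipher" for illustration only -- XOR with a repeating
    # key is trivially breakable; real onion routing uses AES.
    return bytes(b ^ key[i % len(key)] for i, b in enumerate(data))

def build_onion(message: bytes, keys) -> bytes:
    # The client encrypts from the exit relay inward, so the first
    # relay's layer ends up outermost.
    packet = message
    for key in reversed(keys):
        packet = xor(packet, key)
    return packet

def relay(packet: bytes, key: bytes) -> bytes:
    # Each relay peels exactly one layer; only the exit node ever
    # sees the plaintext.
    return xor(packet, key)

keys = [os.urandom(16) for _ in range(3)]  # one key per hop
onion = build_onion(b"GET /index.html", keys)
for key in keys:  # the packet hops through the three relays in turn
    onion = relay(onion, key)
assert onion == b"GET /index.html"
```

No single relay holds all three keys, so no single relay can connect the origin to the destination; that is the whole trick.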

The U.S. Navy financed this tunnel to protect government communications, but its code was released to the public because, as Karen Reilly, development director for the nonprofit Tor Project, puts it, “A Navy anonymity network wouldn’t work. The idea is to have many diverse users so that you can’t tell who somebody is just by virtue of them using Tor.” Using seed money from the Electronic Frontier Foundation, a digital rights advocacy group, the Tor Project formed a decade ago to grow Tor’s user base and maintain and improve its network. Today, Reilly estimates that Tor has about half a million daily users and 3,000 to 4,000 “nodes,” volunteer servers that hopscotch you through the network.

Tor is available as a free download on torproject.org. This software includes a Tor-enabled version of the Firefox Web browser that hides your IP address and forces an encrypted connection where available. Sounds great, but like most anonymizing tools, Tor is flawed. It slows Web browsing and, if someone decided to keep an eye on a large enough swath of the Internet, he could theoretically analyze data patterns to guess where the communication originated.

These weaknesses haven’t stopped hundreds of thousands from downloading the service. Reilly says most people use it to protect their browsing because “they think it’s creepy to be tracked. They don’t like the fact that there’s a file on them somewhere being kept by an advertiser who knows what cereal they like to eat.” And there are more weighty reasons to use Tor: Journalists and activists in oppressive regimes use it to circumvent Internet censorship. It’s been reported that Arab Spring revolutionaries tapped Tor to access Facebook and Twitter, both of which were blocked at various points by Egypt, Iran and others (incidentally, Iran has the second-highest number of Tor users; the United States has the most).

Criminals, trolls and other creeps also love Tor—no surprise given their affinity for the Internet in general. In the mood for some heroin? Silk Road is a one-stop online shop for illegal goods that uses Tor to hide its location from users and, ostensibly, law enforcement. Anonymity haters reference nasty sites like these when stating their case, but Reilly thinks this is misguided. “If Tor didn’t exist, criminals would have other options.”

Other options used by both crooks and law-abiders include virtual private networks, which are faster than Tor and sometimes less secure—and generally not free. Like Tor, VPNs provide a secure connection between computers and can be used as a gateway to websites that would otherwise be inaccessible. VPNs are all the rage in China, where government censorship of the Internet is the norm. Mara Hvistendahl, a Shanghai-based correspondent for Science magazine, has experimented with different privacy tools since moving to the city in 2004. She started with Tor, but found it too slow for regular Web browsing, so she switched to VPNs to access Gmail and Google Scholar, sites that have been blocked by Chinese censors. “Every foreign journalist I know in China uses a VPN,” she says. Another VPN user—a China-based English and journalism teacher who spoke to Sky on the condition of you know what—says she pays for a VPN called Astrill to reach Facebook.

Both women mentioned pairing VPNs with other privacy tools. Hvistendahl has heard of reporters combining VPNs, multiple SIM cards and the secure email service Hushmail to protect sources. If it’s true that no online cloaking device is totally effective, this bundling strategy might be our best bet for protecting ourselves online—though good luck trying to convince the average Web user to do it.

“Most people have a difficult time with far-off risk,” says Ashkan Soltani, a former technologist with the Federal Trade Commission’s privacy division who’s currently a privacy/security researcher and consultant. “That’s why we passed seat belt laws. The likelihood of you getting in a car accident is low, but the harm that you might experience in that accident is potentially high. It’s the same online. We’re bad at figuring out how our data could be used against us in the future, so we don’t care.”

We should care, says Lee Tien, senior staff attorney for the Electronic Frontier Foundation, because data privacy laws are “not incredibly strong.” This is an understatement in countries such as China and Iran, where Web users have little or no online freedom. The US has the Wiretap Act and the Stored Communications Act, both of which address basic privacy issues such as police needing an interception order to tap emails. But these laws fail to look at how private corporations handle our digital footprint, and as a result, we’re at the mercy of, say, Facebook’s data policy or Google’s data policy, and we all know that they have our best interests in mind…

But here’s the real stinger: Let’s say you decide to take control of your digital footprint and start using some of the tools mentioned above. Also, you begin paying closer attention to the privacy policies on the various sites you visit, clicking “do not track” when possible and opting out of initiatives such as Google’s targeted ads program, which is based on the content of your email. Congratulations, responsible netizen, you now have more online security than most—have fun on your cumbersome, hard-to-manage, less optimized version of the Internet!

Ken Berman puts it another way: “If you want to be on Facebook, there are certain things—anonymizing tools that prevent tracking, prevent cookies, prevent identifying behavior—that make some of these social media tools difficult to work with.” Berman, an IT security expert who for years worked at the Broadcasting Board of Governors (the United States’ international broadcasting arm), sees two options for Internet users: “Either you say, ‘I give in. I enjoy the Web, so I’ll put up with walking by a store and getting a text message that says go in this store and you’ll get an immediate 10 percent coupon.’ Or you say, ‘No, I don’t want to play in that world, so I’m going to use Tor or a VPN. I’m going to clean up my session every time I log out and not leave any remnants of my behavior.’ I don’t see how there’s anything in between.”

Soltani is more optimistic. He sees a future where governments pass stronger digital privacy laws and geeks build easier-to-use privacy controls that work seamlessly with the slobbering puppy version of the Internet we all love. In the meantime, he’s doing his best to educate as many people as possible on the virtues of proper digital hygiene, whether that means using anonymity tools or simply being more aware of the fact that you leave a data trail wherever you go these days (don’t even get us started on smartphones).

“My big thing is to demystify I.T.,” says Soltani. “It doesn’t help to think of it as magic or something that’s bringing the world to an end. Tech changes the way we interact with one another and our society—and we should be cognizant of that and adjust accordingly.”

For now, it remains to be seen how these changes will affect online anonymity, a concept that begs important questions about what sort of society we want to live in: Is anonymity a right? Should we be able to engage in discourse anonymously? Should beaniebabyaddict47 be allowed to have such an obnoxious alias? Stay tuned. //
With consultation on information systems security from Matt Lange at Milwaukee Area Technical College.

via DeltaSkyMag

Whonix: The Anonymous Operating System

Whonix is an anonymous general-purpose operating system based on VirtualBox, Ubuntu GNU/Linux and Tor. By design, IP and DNS leaks are impossible. Not even malware with root rights can find out the user’s real IP address or location.

Whonix consists of two machines connected through an isolated network. One machine acts as the client, or Whonix-Workstation; the other acts as a proxy, or Whonix-Gateway, which routes all of the Whonix-Workstation’s traffic through Tor. This setup can be implemented through virtualization, physical isolation, or both.

Whonix advantages:

  • All applications, including those that do not support proxy settings, are automatically routed through Tor.
  • Installation of any software package is possible.
  • Safe hosting of hidden services is possible.
  • Protection against side-channel attacks; no IP or DNS leaks are possible. To test for leaks, see LeakTests.
  • Advantage over Live CDs: Tor’s data directory is still available after reboot, thanks to persistent storage. Tor requires persistent storage to save its entry guards.
  • Java, JavaScript, Flash, browser plugins and misconfigured applications cannot leak your real external IP.
  • Whonix even protects against root exploits (malware with root rights) on the Workstation.
  • Uses only free software.
  • Building Whonix from source is easy.
  • Tor+Vidalia and Tor Browser do not run inside the same machine, so an exploit in the browser, for example, can’t affect the integrity of the Tor process.
  • It is possible to use the Whonix setup in conjunction with VPNs, SSH and other proxies (but see the Tor plus VPN/proxies warning). Each can be used as the first chain or the last chain, or both.
  • Loads of optional configurations (additional features / add-ons) are available.
  • Best possible protocol-leak and fingerprinting protection.

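
The strongest claim in that list, protection even against root malware, rests on network topology rather than on any software inside the Workstation. Here is a minimal Python model of the idea; all class names, addresses and return values are invented for illustration, not Whonix’s actual interfaces.

```python
# Toy model of Whonix's two-machine design. The Workstation's only
# network path is the Gateway's internal address, so even code running
# as root on the Workstation never learns the real external IP.

REAL_EXTERNAL_IP = "203.0.113.7"      # known only to the Gateway
GATEWAY_INTERNAL_IP = "192.168.0.10"  # the isolated internal network

class Gateway:
    def send_via_tor(self, payload):
        # The Gateway torifies everything: the destination sees a Tor
        # exit node's address, never REAL_EXTERNAL_IP.
        return {"payload": payload, "source": "tor-exit-node"}

class Workstation:
    def __init__(self, gateway):
        self._gateway = gateway  # the only reachable host

    def visible_addresses(self):
        # Everything a local process -- even root malware -- can see.
        return {GATEWAY_INTERNAL_IP}

    def send(self, payload):
        return self._gateway.send_via_tor(payload)

ws = Workstation(Gateway())
assert REAL_EXTERNAL_IP not in ws.visible_addresses()
assert ws.send("hello")["source"] == "tor-exit-node"
```

The point of the model: leak-proofness is enforced by what the Workstation can reach, not by what its software promises to do.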
How To Secure Your Android Phone Like the NSA

Rejoice, paranoid security fanatics! There’s finally a version of Android that enables your obsessive need to lock and control each and every file on your mobile device. There’s just one catch: you’ve got to trust the National Security Agency to use it. The NSA has released its security-enhanced version of Android, named SE Android… because G-men have slightly less imagination than your average sea sponge. You can download the source code now and compile it on any operating system you want, so long as you want to compile it on Fedora Linux. Other operating systems should work, but haven’t been tested.

To build SE Android, you’ll need to download and compile the latest code from the Android Open Source Project, then apply the custom SE Android code on top of it. So what do the extra bells and whistles do? Basically, every single file and folder that Android has access to can be locked down tight, with considerable encryption put in place to protect them. Network security is enhanced on both WiFi and mobile networks, and the already considerable app permission system is enhanced with multi-level security.
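
The core addition is SELinux-style mandatory access control: a deny-by-default policy that an app (or malware running as that app) cannot loosen. A toy Python sketch of the idea follows; the domains, types and rules are made up for illustration and are not SE Android’s real policy.

```python
# Toy sketch of type enforcement, the SELinux mechanism SE Android
# layers on top of Android's permission model.

ALLOW_RULES = {
    ("untrusted_app", "app_data_file", "read"),
    ("untrusted_app", "app_data_file", "write"),
    ("system_server", "system_data_file", "read"),
}

def mac_allows(domain, file_type, perm):
    # Deny by default: an access succeeds only if an explicit allow
    # rule exists. The policy is fixed at build time, so a process
    # can't grant itself more access -- even running as root.
    return (domain, file_type, perm) in ALLOW_RULES

assert mac_allows("untrusted_app", "app_data_file", "read")
assert not mac_allows("untrusted_app", "system_data_file", "read")
```

Contrast this with ordinary (discretionary) permissions, where the owner of a file can hand out access at will; here, nobody inside the system can.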

Currently SE Android is only intended for emulators and the Nexus S, and don’t expect much support if you intend to expand its horizons. The project wiki assumes that you’re already familiar and comfortable with building Android from source, and know your way around Linux/Unix-based systems. Tin foil hats are sold separately.

via AndroidCommunity

 

Cryptoparty Goes Viral: Pen testers, Privacy Geeks Spread Security to the Masses

Security professionals, geeks and hackers around the world are hosting a series of cryptography training sessions for the general public.

The ‘cryptoparty’ sessions were born in Australia and kicked off last week in Sydney and Canberra, along with two in the US and Germany.

Information security experts and privacy advocates of all political stripes have organised the casual gatherings to teach users how to use cryptography and anonymity tools including Tor, PGP and Cryptocat.

Multiple sessions were proposed in Melbourne, Sydney, Adelaide, Canberra, Perth and two in Queensland. A further 10 were organised across Europe, Asia, Hawaii and North America, while dozens of requests were placed for sessions in other states and countries.

The cryptoparties were born from a Twitter discussion late last month between security researchers and a Sydney mum, privacy and online activist known by her handle Asher Wolf.

For Wolf, the sessions were a way to reignite technical discussions on cryptography.

“A lot of us missed out on Cypherpunk (an electronic technical mailing list) in the nineties, and we hope to create a new entry pathway into cryptography,” Wolf said.

“The Berlin party was taught by hardcore hackers while Sydney had a diverse range of people attending. The idea is to teach people who don’t know crypto how to use it.”

The concept resonated with the online security and privacy community.

It took only hours for about a dozen sessions to spring up around the world on a dedicated wiki page following what was only a casual Twitter exchange between Wolf and others — now cryptoparty organisers.

“When I woke up in the morning, they were all there,” Wolf said.

There was no formal uniformity between each cryptoparty. Some were hands-on, with users practising on laptops and tablets, while others were more theory-based.

Each session runs for around five hours.

The free classes could accommodate a maximum of about 30 to 40 attendees. One of the first parties in the Southeastern US state of Tennessee had more than 100 people turn up to its afterparty, an event complete with music, beer and fire-twirling.

Copyright © SC Magazine, Australia

Encryption Becomes Illegal In the UK: Jail Time For Failure To Provide Keys

There was some surprise in the comments of yesterday’s post over the fact that the United Kingdom has effectively outlawed encryption: the UK will send its citizens to jail for up to five years if they cannot produce the key to an encrypted data set.

First of all, references – the law is here. You will be sent to jail for refusing to give up encryption keys, regardless of whether you have them or not. Five years of jail if it’s a terrorism investigation (or child porn, apparently), two years otherwise. It’s fascinating – there are four excuses that keep coming back for every single dismantling of democracy. It’s terrorism, child porn, file sharing, and organized crime. You cannot fight these by dismantling civil liberties – they’re just used as convenient excuses.

We knew that this was the next step in the cat-and-mouse game over privacy, right? It starts with the government believing it has a right to interfere with any one of your seven privacies if it wants to and finds it practical. The next step, of course, is that the citizens protect themselves from snooping – at which point some bureaucrat will confuse the government’s ability to snoop on citizens’ lives for a right to snoop on citizens’ lives at any time, and create harsh punishments for any citizens who try to keep a shred of their privacy. This is not a remotely dystopic scenario; as we see, it has already happened in the UK.

But it’s worse than that. Much worse. You’re not going to be sent to jail for refusal to give up encryption keys. You’re going to be sent to jail for an inability to unlock something that the police think is encrypted. Yes, this is where the hairs rise on our arms: if you have a recorded file of radio noise from the local telescope that you use for generating random numbers, and the police ask you to produce the decryption key to show them the three documents they believe are inside the encrypted container your radio noise resembles, you will be sent to jail for up to five years for your inability to produce the imagined documents.

But wait – it gets worse still.

The next step in the cat-and-mouse game over privacy is to use steganographic methods to hide the fact that something is encrypted at all. You can easily hide long messages in high-resolution photos today, just to take one example: they will not appear to contain an encrypted message in the first place, but will just look like a regular photo until decoded and decrypted with the proper key. But of course, the government and police are aware of steganographic methods, and know that pretty much any innocent-looking dataset can be used as a container for encrypted data.
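
One common steganographic trick is least-significant-bit encoding: flipping the lowest bit of a pixel value changes the image imperceptibly, so a message can ride along in those bits. Here is a minimal sketch in Python, operating on a raw byte buffer rather than a real image format (real tools would also encrypt the message first and spread it pseudo-randomly through the image):

```python
def hide(pixels: bytearray, message: bytes) -> bytearray:
    # Store each bit of the message (least-significant bit first) in
    # the lowest bit of successive "pixel" bytes.
    out = bytearray(pixels)
    bits = [(byte >> i) & 1 for byte in message for i in range(8)]
    for i, bit in enumerate(bits):
        out[i] = (out[i] & 0xFE) | bit
    return out

def reveal(pixels: bytearray, length: int) -> bytes:
    # Reassemble `length` bytes from the low bits, in the same order.
    bits = [p & 1 for p in pixels[: length * 8]]
    return bytes(
        sum(bits[i * 8 + j] << j for j in range(8)) for i in range(length)
    )

cover = bytearray(range(256)) * 2  # stand-in for raw image data
stego = hide(cover, b"meet at dawn")
assert reveal(stego, 12) == b"meet at dawn"
```

Note what makes this legally troubling under the law above: the stego buffer is statistically almost indistinguishable from the cover, so the same argument that “this photo hides a message” can be made about any photo at all.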

So imagine your reaction when the police confiscate your entire collection of vacation photos, claim that your vacation photos contain hidden encrypted messages (which they don’t), and send you off to jail for five years for being unable to supply the decryption key.

This is not some dystopic pipe dream: this law already exists in the United Kingdom.

 

SOURCE: Falkvinge.net

How to secure your computer and surf fully Anonymous BLACK-HAT STYLE

This is a guide with which even a total noob can get high-class security for his system and complete anonymity online. But it’s not only for noobs; it contains a lot of tips most people will find pretty helpful. It is explained in such detail that even the biggest noobs can do it:

=== The Ultimate Guide for Anonymous and Secure Internet Usage v1.0.1 ===

Table of Contents:

  1.   Obtaining Tor Browser
  2.   Using and Testing Tor Browser for the first time
  3.   Securing Your Hard Drive
  4.   Setting up TrueCrypt, Encrypted Hidden Volumes
  5.   Testing TrueCrypt Volumes
  6.   Securing your Hard Disk
  7.   Temporarily Securing Your Disk, Shredding Free Space
  8.   Installing VirtualBox
  9.   Installing a Firewall
  10.   Firewall Configuration
  11.   Installing Ubuntu
  12.   Ubuntu Initial Setup
  13.   Installing Guest Additions
  14.   Installing IRC (Optional)
  15.   Installing Torchat (Optional)
  16.   Creating TOR-Only Internet Environment
  17.   General Daily Usage

By the time you are finished reading and implementing this guide, you will be able to browse any website securely and anonymously. No one, not even your ISP or a government agent, will be able to see what you are doing online. If privacy and anonymity are important to you, then you owe it to yourself to follow the instructions presented here.

In order to prepare this guide for you, I have used a computer that is running Windows Vista. This guide will work equally well for other versions of Windows. If you use a different operating system, you may need to have someone fluent in that operating system guide you through this process. However, most parts of the process are easily duplicated in other operating systems.

I have written this guide to be as newbie-friendly as possible. Every step is fully detailed and explained. I have tried to keep the instructions as explicit as possible. This way, so long as you patiently follow each step, you will be just fine.

In this guide from time to time you will be instructed to go to certain URLs to download files. You do NOT need TOR to get these files, and using TOR (while possible) will make these downloads very slow.

This guide may appear overwhelming, but every single step is explained thoroughly, and it is just a matter of following along until you are done. Once you are finished, you will have a very secure setup, and it will be well worth the effort. Even though the guide appears huge, the whole process should take at most a few hours, and you can finish it in phases over the course of several days.

It is highly recommended that you close *ALL* applications running on your computer before starting.

SOURCE:
http://www.cyberguerrilla.org/?p=3322

US Police Can Copy Your iPhone’s Contents In Under Two Minutes

It has emerged that Michigan State Police have been using a high-tech mobile forensics device that can extract information from over 3,000 models of mobile phone, potentially grabbing all media content from your iPhone in under two minutes.

The CelleBrite UFED is a handheld device that Michigan officers have been using since August 2008 to copy information from mobile phones belonging to motorists stopped for minor traffic violations. The device can circumvent password restrictions and extract existing, hidden, and deleted phone data, including call history, text messages, contacts, images, and geotags.

 

In short, it can copy everything on your smartphone in a matter of minutes.

Learning that the police had been using mobile forensic devices, the American Civil Liberties Union (ACLU) has issued freedom of information requests which demand that state officials open up the data collected, to better assess if penalised motorists warrant having their data copied.

Michigan State Police were more than happy to provide the information – as long as the ACLU paid $544,680. Obviously not pocket change.

“Law enforcement officers are known, on occasion, to encourage citizens to cooperate if they have nothing to hide,” ACLU staff attorney Mark P. Fancher wrote. “No less should be expected of law enforcement, and the Michigan State Police should be willing to assuage concerns that these powerful extraction devices are being used illegally by honoring our requests for cooperation and disclosure.”

Once the data is obtained, the device’s “Physical Analyzer” can map both existing and deleted locations on Google Earth, plotting location data and image geotags on Google Maps.
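
Turning a photo’s geotag into a map pin takes almost no work: EXIF stores GPS coordinates as degree/minute/second values plus a hemisphere letter, which convert to the decimal degrees mapping tools expect. A small Python sketch (the sample coordinates are invented for illustration):

```python
def dms_to_decimal(degrees, minutes, seconds, ref):
    # EXIF's GPS IFD stores latitude/longitude as degree, minute and
    # second rationals, with "N"/"S" or "E"/"W" as the hemisphere.
    value = degrees + minutes / 60 + seconds / 3600
    return -value if ref in ("S", "W") else value

# Hypothetical geotag pulled from a photo's EXIF block:
lat = dms_to_decimal(42, 19, 48.0, "N")
lon = dms_to_decimal(83, 2, 45.6, "W")
assert round(lat, 3) == 42.33
assert round(lon, 3) == -83.046
```

This is why a two-minute phone dump yields not just your photos but a timestamped trail of where each one was taken.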

The ACLU’s main worry is that the handheld is quietly being used to bypass Fourth Amendment protections against unreasonable searches:

“With certain exceptions that do not apply here, a search cannot occur without a warrant in which a judicial officer determines that there is probable cause to believe that the search will yield evidence of criminal activity.

A device that allows immediate, surreptitious intrusion into private data creates enormous risks that troopers will ignore these requirements to the detriment of the constitutional rights of persons whose cell phones are searched.”

The next time you are in Michigan, be sure to drive carefully!

SOURCE:
http://thenextweb.com/us/2011/04/20/us-police-can-copy-your-iphones-contents-in-under-two-minutes/

An Analysis of Anonymity in the Bitcoin System

Bitcoin is not inherently anonymous. It may be possible to conduct transactions in such a way as to obscure your identity, but, in many cases, users and their transactions can be identified. We have performed an analysis of anonymity in the Bitcoin system and published our results in a preprint on arXiv.
The Full Story

Anonymity is not a prominent design goal of Bitcoin. However, Bitcoin is often referred to as being anonymous. We have performed a passive analysis of anonymity in the Bitcoin system using publicly available data and tools from network analysis. The results show that the actions of many users are far from anonymous. We note that several centralized services, e.g. exchanges, mixers and wallet services, have access to even more information should they wish to piece together users’ activity. We also point out that an active analysis, using, say, marked Bitcoins and collaborating users, could reveal even more details. The technical details are contained in a preprint on arXiv. We welcome any feedback or corrections regarding the paper.
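
A core step in passive analyses of this kind is collapsing public keys into users. The standard multi-input heuristic in the literature assumes all input keys of a single transaction are controlled by one user, so they can be merged with a union-find structure. A sketch with invented transactions follows; this illustrates the general heuristic, not necessarily the exact method of this paper.

```python
# Merge public keys that ever co-sign a transaction's inputs.
parent = {}

def find(key):
    parent.setdefault(key, key)
    while parent[key] != key:
        parent[key] = parent[parent[key]]  # path halving
        key = parent[key]
    return key

def union(a, b):
    parent[find(a)] = find(b)

transactions = [  # invented toy data
    {"inputs": ["keyA", "keyB"], "outputs": ["keyC"]},
    {"inputs": ["keyB", "keyD"], "outputs": ["keyE"]},
    {"inputs": ["keyF"], "outputs": ["keyA"]},
]
for tx in transactions:
    first, *rest = tx["inputs"]
    for key in rest:
        union(first, key)

# keyA, keyB and keyD collapse into one user; keyF stays separate.
assert find("keyA") == find("keyD")
assert find("keyF") != find("keyA")
```

Once keys are clustered this way, every payment any one key ever made or received attaches to the same user, which is how “pseudonymous” activity becomes traceable.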
Case Study: The Bitcoin Theft

To illustrate our findings, we have chosen a case study involving a user who has many reasons to stay anonymous. He is the alleged thief of 25,000 Bitcoins. This is a summary of the victim’s postings to the Bitcoin forums and an analysis of the relevant transactions.

Summary

The victim woke up on the morning of 13/06/2011 to find a large portion of his Bitcoins sent to 1KPTdMb6p7H3YCwsyFqrEmKGmsHqe1Q3jg. The alleged theft occurred on 13/06/2011 at 16:52:23 UTC shortly after somebody broke into the victim’s Slush pool account and changed the payout address to 15iUDqk6nLmav3B1xUHPQivDpfMruVsu9f. The Bitcoins rightfully belong to 1J18yk7D353z3gRVcdbS7PV5Q8h5w6oWWG.

An Egocentric Analysis
Fig. 1: The egocentric user network of the thief.
We consider the user network of the thief. Each vertex represents a user and each directed edge between a source and a target represents a flow of Bitcoins from a public-key belonging to the user corresponding to the source to a public-key belonging to the user corresponding to the target. Each directed edge is colored by its source vertex. The network is imperfect in the sense that there is, at the moment, a one-to-one mapping between users and public-keys. We restrict ourselves to the egocentric network surrounding the thief: we include every vertex that is reachable by a path of length at most two ignoring directionality and all edges induced by these vertices. We also remove all loops, multiple edges and edges that are not contained in some biconnected component to avoid clutter. In Fig. 1, the red vertex represents the thief and the green vertex represents the victim. The theft is the green edge joining the victim and the thief. There are in fact two green edges located nearby in Fig. 1 but only one directly connects the victim to the thief.
Fig. 2: An interesting sub-network induced by the thief, the victim and three other vertices.

Interestingly, the victim and the thief are joined by paths (ignoring directionality) other than the green edge representing the theft. For example, consider the sub-network shown in Fig. 2 induced by the red, green, purple, yellow and orange vertices. This sub-network is a cycle. We contract all vertices whose corresponding public-keys belong to the same user. This allows us to attach values in Bitcoins and timestamps to the directed edges. Firstly, we note that the theft of 25,000 BTC was preceded by a smaller theft of 1 BTC. This was later reported by the victim in the Bitcoin forums. Secondly, using off-network data, we have identified some of the other colored vertices: the purple vertex represents the main Slush pool account and the orange vertex represents the computer hacker group LulzSec (see, for example, their Twitter stream). We note that there has been at least one attempt to associate the thief with LulzSec. This was a fake; it was created after the theft. However, the identification of the orange vertex with LulzSec is genuine and was established before the theft. We observe that the thief sent 0.31337 BTC to LulzSec shortly after the theft but we cannot otherwise associate him with the group. The main Slush pool account sent a total of 441.83 BTC to the victim over a 70-day period. It also sent a total of 0.2 BTC to the yellow vertex over a 2-day period. One day before the theft, the yellow vertex also sent 0.120607 BTC to LulzSec. The yellow vertex represents a user who is the owner of at least five public-keys.
Like the victim, he is a member of the Slush pool, and like the thief, he is a one-time donator to LulzSec. This donation, the day before the theft, is his last known activity using these public-keys.
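The contraction step, merging all vertices whose public-keys are known to belong to one user, can be sketched with a small union-find structure (a hypothetical illustration; the keys and groupings below are made up, and this is not the authors’ actual tool):

```python
# Union-find sketch of vertex contraction: public-keys known to belong to
# the same user collapse into a single user vertex.
parent = {}

def find(k):
    """Return the representative key for k's user, with path compression."""
    parent.setdefault(k, k)
    while parent[k] != k:
        parent[k] = parent[parent[k]]  # compress the path as we walk up
        k = parent[k]
    return k

def union(a, b):
    """Record that keys a and b belong to the same user."""
    parent[find(a)] = find(b)

# Suppose off-network data ties these five (fake) keys to the yellow user:
yellow_keys = ["key1", "key2", "key3", "key4", "key5"]
for k in yellow_keys[1:]:
    union(yellow_keys[0], k)

users = {find(k) for k in yellow_keys}
print(len(users))  # all five keys contract to one user vertex
```

In the real analysis the merge evidence comes from the transactions themselves (for instance, keys appearing together as inputs) and from off-network data such as forum posts.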

A Flow and Temporal Analysis

In addition to visualizing the egocentric network of the thief with a fixed radius, we can follow significant flows of value through the network over time. If a vertex representing a user receives a large volume of Bitcoins relative to their estimated balance, and, shortly after, transfers a significant proportion of those Bitcoins to another user, we deem this interesting. We built a special purpose tool that, starting with a chosen vertex or set of vertices, traces significant flows of Bitcoins over time. In practice we have found this tool to be quite revealing when analyzing the user network.
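A minimal version of such a tracing heuristic might look like the following (a sketch under assumed data structures, not the authors’ actual tool): given timestamped transfers, follow value forward from a seed vertex whenever a recipient re-sends a significant fraction of what it has received.

```python
# Sketch of a significant-flow tracer. Each record is
# (time, sender, receiver, amount); the data is loosely shaped like the
# theft but entirely illustrative.
txs = [
    (0, "thief", "a", 25000),
    (1, "a", "b", 24000),
    (2, "a", "x", 100),      # small siphon, below the threshold
    (3, "b", "c", 23000),
]

def trace(txs, seed, min_fraction=0.5):
    """Follow transfers forward in time from `seed`, keeping only those
    that move at least `min_fraction` of what the sender has received."""
    received = {}
    tainted = {seed}
    flows = []
    for t, u, v, amt in sorted(txs):
        if u not in tainted:
            continue
        if u != seed and amt < min_fraction * received.get(u, 0.0):
            continue  # too small a share: likely change or noise
        flows.append((t, u, v, amt))
        tainted.add(v)
        received[v] = received.get(v, 0.0) + amt
    return flows, tainted

flows, tainted = trace(txs, "thief")
print(len(flows))  # the 100 BTC siphon to "x" is not followed
```

The real tool also tracks balances and timing windows; the single-fraction threshold here is the simplest stand-in for “a significant proportion, shortly after”.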

Fig. 3: A visualization of Bitcoin flow from the theft. The size of a vertex corresponds to its degree in the entire network. The color denotes the volume of Bitcoins — warmer colors have larger volumes flowing through them. We also provide an SVG which contains hyperlinks to the relevant Block Explorer pages.
Fig. 4: An annotated version of Fig. 3.

In the left inset, we can see that the Bitcoins are shuffled between a small number of accounts and then transferred back to the initial account. After this shuffling step, we have identified four significant outflows of Bitcoins that began at 19:49, 20:01, 20:13 and 20:55. Of particular interest are the outflows that began at 20:55 (labeled as 1 in both insets) and 20:13 (labeled as 2 in both insets). These outflows pass through several subsequent accounts over a period of several hours. Flow 1 splits at the vertex labeled A in the right inset at 04:05 the day after the theft. Some of its Bitcoins rejoin Flow 2 at the vertex labeled B. This new combined flow is labeled as 3 in the right inset. The remaining Bitcoins from Flow 1 pass through several additional vertices in the next two days. This flow is labeled as 4 in the right inset.

A surprising event occurs on 16/06/2011 at approximately 13:37. A small number of Bitcoins are transferred from Flow 3 to a heretofore unseen public-key 1FKFiCYJSFqxT3zkZntHjfU47SvAzauZXN. Approximately seven minutes later, a small number of Bitcoins are transferred from Flow 3 to another heretofore unseen public-key 1FhYawPhWDvkZCJVBrDfQoo2qC3EuKtb94. Finally, there are two simultaneous transfers from Flow 4 to two more heretofore unseen public-keys: 1MJZZmmSrQZ9NzeQt3hYP76oFC5dWAf2nD and 12dJo17jcR78Uk1Ak5wfgyXtciU62MzcEc. We have determined that these four public-keys — which receive Bitcoins from two separate flows that split from each other two days previously — are all contracted to the same user in our ancillary network. This user is represented as C.

There are several other examples of interesting flow. The flow labeled as Y involves the movement of Bitcoins through thirty unique public-keys in a very short period of time. At each step, a small number of Bitcoins (typically 30 BTC which had a market value of approximately US$500 at the time of the transactions) are siphoned off. The public-keys that receive the small number of Bitcoins are typically represented by small blue vertices due to their low volume and degree. On 20/06/2011 at 12:35, each of these public-keys makes a transfer to a public-key operated by the MyBitcoin service. Curiously, this public-key was previously involved in another separate Bitcoin theft.

WikiLeaks

WikiLeaks recently advised its Twitter followers that it now accepts anonymous donations via Bitcoin. They also state that “Bitcoin is a secure and anonymous digital currency. Bitcoins cannot be easily tracked back to you, and are a [sic] safer and faster alternative to other donation methods.” They proceed to describe a more secure method of donating Bitcoins that involves the generation of a one-time public-key but the implications for those who donate using the tweeted public-key are unclear. Is it possible to associate a donation with other Bitcoin transactions performed by the same user or perhaps identify them using external information?
Fig. 5: A visualization of the egocentric user network of WikiLeaks. We can identify many of the users in this visualization.

Our tools resolve several of the users with identifying information gathered from the Bitcoin Forums, the Bitcoin Faucet, Twitter streams, etc. These users can be linked either directly or indirectly to their donations. The presence of a Bitcoin mining pool (a large red vertex) and a number of public-keys between it and WikiLeaks’ public-key is interesting. Our point is that, by default, a donation to WikiLeaks’ ‘public’ public-key may not be anonymous.

Conclusion

This is a straightforward passive analysis of public data that allows us to de-anonymize considerable portions of the Bitcoin network. We can use tools from network analysis to visualize egocentric networks and to follow the flow of Bitcoins. This can help us identify several centralized services that may have even more details about interesting users. We can also apply techniques such as community finding, block modeling, network flow algorithms, etc. to better understand the network.
Feedback
We are excited about the Bitcoin project and consider it a remarkable milestone in the evolution of electronic currencies. Our motivation for this work has not been to de-anonymize any individual users; rather it is to illustrate the limits of anonymity in the Bitcoin system. It is important that users do not have a false expectation of anonymity. We welcome any feedback or comments regarding the preprint on arXiv or the details in this post.
Follow-on:
We have written a follow-on blog post: http://anonymity-in-bitcoin.blogspot.com/2011/09/code-datasets-and-spsn11.html where we release some of the data we extracted, in order to allow other researchers to replicate our work or perform follow-on analysis.

SOURCE:
http://anonymity-in-bitcoin.blogspot.com/2011/07/bitcoin-is-not-anonymous.html

By: Fergal Reid and Martin Harrigan, September 30, 2011

Verifying Claims of Full-Disk Encryption in Hard Drive Firmware


Date: Wed, 9 Nov 2011 10:16:11 +0100
From: Eugen Leitl <eugen[at]leitl.org>
To: cypherpunks[at]al-qaeda.net
Subject: Re: [p2p-hackers] Verifying Claims of Full-Disk Encryption in
Hard Drive Firmware

—– Forwarded message from Tom Ritter <tom[at]ritter.vg> —–

From: Tom Ritter <tom[at]ritter.vg>
Date: Tue, 08 Nov 2011 19:51:53 -0500
To: p2p-hackers[at]lists.zooko.com
Subject: Re: [p2p-hackers] Verifying Claims of Full-Disk Encryption in Hard
Drive Firmware
Reply-To: theory and practice of decentralized computer networks <p2p-hackers[at]lists.zooko.com>

—–BEGIN PGP SIGNED MESSAGE—–
Hash: SHA1

After reviewing the FIPS approval document for the drive[1], I’ve tried to put together a complete threat model outlining the major classes of attack on the hard drive in the interest of being rigorous.  I’d like your input to see if I missed any you can think of.  I’ve explicitly excluded DriveTrust (the proprietary stuff) from the threat model, and am only focusing on the ATA Standard.

[1] http://csrc.nist.gov/groups/STM/cmvp/documents/140-1/140sp/140sp1388.pdf

====================

In approximate physical/logical order, this is every attack I can conceive of:

1. The BIOS may have been replaced to record passwords

2. The keyboard or keyboard connection may be tapped/keylogged

3. The physical computer may have been tampered with, physically installing hardware in any of its components

4. The Operating System may have been tampered with

5. The application used to interact with the hard drive (hdparm) may have been subverted

6. The SATA connection to the HDD may have been tapped

7. On the Drive

   1. The hardware of the drive may have been tampered with

   2. Firmware

      1. The firmware may be buggy, allowing code execution on the Hard Disk Drive

      2. The firmware may have been replaced.  Supposedly, replacing the firmware requires that it be signed with a private RSA key AND that the drive have the Load Firmware capability active.  The public key is stored in the system storage area of the media

         1. The firmware may be able to be loaded despite the Load Firmware capability being inactive

         2. The firmware load process may have a bug invalidating the signature check

         3. The malicious firmware may be appropriately signed

         4. The public key in the system storage area may have been replaced, allowing untrustworthy firmware to be loaded

   3. The RAM of the device may be able to be read, allowing unknown compromising vectors

      1. The encryption key may be stored in RAM

      2. The Seed Key and Seed used in the Random Number Generator may be read, allowing any newly generated key to be guessed

      3. Internal states of the encryption process, or other operation of the firmware, may be exposed

   4. System Storage Area – an area of the drive that is supposed to be readable only by the firmware, not by the computer

      1. Secure ID aka Drive Owner (SHA digest)

         1. If the system area is able to be read, an unsalted simple SHA digest may be crackable

         2. If the system area is able to be written, the digest may be replaced with a hash of a known password

         3. If the Drive Owner PIN has not been changed upon initialization, the PIN is printed on the drive

      2. User & Master Passwords (SHA digest)

         1. If the system area is able to be read, an unsalted simple SHA digest may be crackable

         2. If the system area is able to be written, the digest may be replaced with a hash of a known password

      3. User/Master Encryption Keys (plaintext?)

         1. If the system area is able to be read, plaintext storage of the keys allows full data recovery

         2. If the Random Number Generator is not cryptographically secure, the encryption key may follow a guessable pattern

      4. Firmware Public RSA Key

         1. If the system area is able to be written, the firmware key may be replaced and new firmware loaded

   5. User Storage Area – where your data is stored

      1. The data may not be encrypted with AES as promised

      2. The cipher mode may not be suitable for filesystem encryption

      3. The drive may be initialized in a non-random pattern, allowing usage analysis

      4. The ciphertext may be stored in a way allowing block swapping, ciphertext injection, or other damage to the integrity of the ciphertext

   6. The drive may be vulnerable to side-channel attacks

      1. Crypto operations may not be constant-time, leaking data about the key structure or value

      2. The drive may not draw power equally during crypto operations, leaking data about the key structure or value

      3. The drive may not be acoustically silent, leaking information about where on the platters data is being written by listening to drive head movements

      4. The drive may not be protected against induced faults such as power manipulation, temperature extremes, electrical shocks, or physical shocks

8. AT Password Security Protocol

   1. Passwords may be attempted in rapid sequence if a mechanism to reset the module is created
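As a toy illustration of the unsalted-digest worry in the System Storage Area items above (a hypothetical sketch; the wordlist, password, and helper function are made up, and the drive’s real digest format is unknown): without a salt, a leaked digest falls to a straight dictionary attack.

```python
import hashlib

# Hypothetical: an unsalted SHA-1 password digest read out of the
# system storage area. All values here are invented for illustration.
wordlist = ["123456", "password", "hunter2", "letmein"]
target = hashlib.sha1(b"hunter2").hexdigest()  # the leaked digest

def crack_unsalted(target_hex, words):
    """Hash each candidate and compare; no salt means one hash per guess,
    and precomputed tables work across every drive."""
    for w in words:
        if hashlib.sha1(w.encode()).hexdigest() == target_hex:
            return w
    return None

print(crack_unsalted(target, wordlist))
```

A per-drive salt (or a slow KDF such as PBKDF2) would force the attacker to redo this work for every unit, which is why the “unsalted simple SHA” items above matter.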

 

====================

This groups those attacks together, and notes whether I consider them within the realm of testing for the drive.  I’m not sure what will be doable easily or cheaply, but if I can verify the firmware, I’ll try.

Not Considered for Evaluation

1. User Coercion or Cooperation / “Evil Maid” Attacks

2. Hardware tampering or tapping of the Keyboard, Keyboard connection, Computer, SATA connection or HDD

3. Pwnage

   1. Subversion of the Operating System, BIOS, or hdparm

4. Misconfiguration

   1. Not changing the Master or Drive Owner password

   2. Not enabling hard disk security

5. Side Channel Attacks

Considered for Evaluation

1. Buggy firmware

   1. with regards to firmware signature verification

   2. with regards to firmware replacement despite the load firmware capability being disabled

   3. with regards to randomly selecting an encryption key

   4. with regards to proper encryption

   5. with regards to backdoors

   6. with regards to memory trespass or other “standard” vulnerabilities

2. Key Management

   1. plaintext storage of encryption keys in the system area

   2. poor password hashing practices

3. Encryption

   1. lack of encryption of user data

   2. improper cipher mode

   3. patterned initial fill of the disk

   4. lack of ciphertext integrity

4. System Area

   1. ability to read the system area

   2. ability to write the system area

====================

Again, all comments welcome, but I’m particularly interested in talking to:

– Anyone familiar with these Seagate drives or DriveTrust.

– Anyone familiar with BIOS support for the AT Security Spec, who can help me locate a new netbook to work with.

– Anyone familiar with Data Recovery Services who could provide information on disk unlocking, AT password bypass, or moving platters between disks.

– Anyone who has done this before.

– -tom
—–BEGIN PGP SIGNATURE—–

iEYEARECAAYFAk65zqYACgkQJZJIJEzU09sNfwCfX3APmmrtFBke2CI3Ia1Rot+4
cDQAn00ezd8VPehRXAYCIM80bh464I6A
=AwIs
—–END PGP SIGNATURE—–
_______________________________________________
p2p-hackers mailing list
p2p-hackers[at]lists.zooko.com
http://lists.zooko.com/mailman/listinfo/p2p-hackers

—– End forwarded message —–

Eugen* Leitl <a href="http://leitl.org">leitl</a> http://leitl.org
______________________________________________________________
ICBM: 48.07100, 11.36820 http://www.ativel.com http://postbiota.org
8B29F6BE: 099D 78BA 2FD3 B014 B08A  7779 75B0 2443 8B29 F6BE

 


From: Peter Gutmann <pgut001[at]cs.auckland.ac.nz>
To: cypherpunks[at]al-qaeda.net, eugen[at]leitl.org
Subject: Re: [p2p-hackers] Verifying Claims of Full-Disk Encryption in
Hard Drive Firmware

Eugen Leitl <eugen[at]leitl.org> quotes Tom Ritter <tom[at]ritter.vg>:

>After reviewing the FIPs approval document for the drive[1], I’ve tried to
>put together a complete threat model outlining the major classes of attack on
>the hard drive in the interest of being rigorous.

Without wanting to sound too facetious, and mostly out of curiosity, what does FIPS 140 have to do with the threat modelling you’ve done?  It doesn’t address the vast majority of the stuff you’ve listed, so the threat-modelling is kind of a non-sequitur to “starting with FIPS 140”.  If you wanted to deal with this through a certification process you’d have to go with something like the CC (and an appropriate PP), assuming the sheer suckage of working with the CC doesn’t tear a hole in the fabric of space-time in the process.

Peter.

 

 

http://cryptome.org/0005/fulldisk-crypto.htm