August 12, 2013 – Decrypted Matrix Radio: Clapper the Fox, 18 Gun Facts, Matrix Philosophy, IRS AR-15s, Apple goes Orwellian, McCain on Snowden

CIA Director Brennan Confirmed as Reporter Michael Hastings’ Next Target

18 Little-Known Gun Facts That Prove That Guns Make Us Safer

IRS Refuses to Answer Congressman on AR-15s for ‘Standoff Capability’

Philosophy & the Matrix – What is Reality

John McCain: “Young Americans See Edward Snowden as Some Kind of Jason Bourne”

Apple Patents New Orwellian Technology

A Fox Guards the Henhouse: James Clapper (Who Lied About NSA Spying) Set to Lead NSA Investigation

June 3, 2013 – Decrypted Matrix Radio: Dome on Mars, Istanbul on Fire, MI6 Terrorists, iOS Compromised, Supreme Court on DNA, Monsanto & Big Pharma Exposed, Kicking Cancer

City Dome Discovered On Mars In Juventae Chasma, Video & Photos

The Sixth Day of Fire, Tear Gas, and Blood in Istanbul

Woolwich murder: Younger brother of Michael Adebolajo ‘was paid thousands to spy in Middle East’ by MI6

Any iOS Device Can Be Compromised Within One Minute

Supreme Court: Police May Take DNA From Everyone They Arrest

North Carolina Law Would Make It Illegal to Expose Monsanto

Taxpayer dollars used by U.S. government to promote GMOs in other countries

Nutrition Information Every Cancer Patient Should Know

Every Week Night 12-1am EST (9-10pm PST)

April 5, 2013 – Decrypted Matrix Radio: DEA Liars, Protecting from Pesticides, Interspecies Telepathy, Meditation Compassion, Defining WMDs, Gun Control Facts, New Leaks Org

Ways to Protect Children From Pesticides

DEA Accused Of Leaking Misleading Info Falsely Implying That It Can’t Read Apple iMessages

Inter-species Telepathy: Human Thoughts Make Rat Move

Proof: Meditation Makes You More Compassionate

Federal Government Redefines Rocket-Propelled Grenade as “Weapon of Mass Destruction”

Gun Control Info Graphic Explained

Bigger than Wikileaks – the International Consortium of Investigative Journalists (ICIJ) – Offshore Exposé

Every Week Night 12-1am EST (9-10pm PST)

Apple Has Quietly Started Tracking iPhone Users Again, And It’s Tricky To Opt Out

Apple’s launch of the iPhone 5 in September came with a bunch of new commercials to promote the device.

But Apple didn’t shout quite so loud about an enhancement to its new mobile operating system, iOS 6, which also occurred in September: The company has started tracking users so that advertisers can target them again, through a new tracking technology called IFA or IDFA.

Previously, Apple had all but disabled tracking of iPhone users by advertisers when it stopped app developers from utilizing Apple mobile device data via UDID, the unique, permanent, non-deletable serial number that previously identified every Apple device.

For the last few months, iPhone users have enjoyed an unusual environment in which advertisers have been largely unable to track and target them in any meaningful way.

In iOS 6, however, tracking is most definitely back on, and it’s more effective than ever, multiple mobile advertising executives familiar with IFA tell us. (Note that Apple doesn’t mention IFA on its iOS 6 launch page.)

Users can switch off that targeting, but it’s tricky, as we discovered a couple of days ago. At least iOS 6 users can turn tracking off, though, which they couldn’t do before.

Here’s how it works.

IFA or IDFA stands for “identifier for advertisers.” It’s a random, anonymous number that is assigned to a user and their device. It is temporary and can be blocked, like a cookie.

When you look at an app, or browse the web, your presence generates a call for an ad. The publisher’s site that you’re looking at then passes the IFA to the ad server. The advertiser is then able to know that a specific iPhone user is looking at a specific publication and can serve an ad targeting that user. IFA becomes particularly useful, for instance, if an ad server notices that a particular IFA is looking at a lot of different car sites. Perhaps that user is interested in buying a new car. They’ll likely start seeing a lot of car ads on their iPhone.

More importantly, IFA will allow advertisers to track the user all the way to “conversion” — which for most advertisers consists of an app download. Previously, advertisers had no idea whether their ads actually drove people to download apps or buy things. Now IFA will tell them.

The IFA does not identify you personally — it merely provides advertisers with aggregate audience data they can use to target ads.
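
For app developers, the device-side half of this system is Apple’s AdSupport framework: the app (or the ad SDK inside it) reads the identifier and is expected to honor the user’s “Limit Ad Tracking” setting before attaching it to an ad request. Here is a minimal Swift sketch of that flow; the ad-server URL and the “ifa” parameter name are made up for illustration, though ASIdentifierManager itself is the real API (its tracking-enabled flag was deprecated years later, in iOS 14).

```swift
import AdSupport
import Foundation

// Read the advertising identifier (IFA/IDFA), honoring "Limit Ad Tracking".
// Returning nil tells callers to fall back to a non-targeted ad request.
func advertisingIdentifierIfAllowed() -> UUID? {
    let manager = ASIdentifierManager.shared()
    guard manager.isAdvertisingTrackingEnabled else { return nil }
    return manager.advertisingIdentifier
}

// Hypothetical ad call: attach the IFA only when the user allows tracking.
func adRequestURL() -> URL? {
    var components = URLComponents(string: "https://ads.example.com/serve")
    if let ifa = advertisingIdentifierIfAllowed() {
        components?.queryItems = [URLQueryItem(name: "ifa", value: ifa.uuidString)]
    }
    return components?.url
}
```

Note that honoring the flag was a policy requirement, not an OS guarantee: on early iOS versions the identifier remained readable even with the limit switched on, which is one reason the setting only “limits” rather than blocks tracking.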

Tracking is on by default

The new iPhone operating system comes with three things that make tracking easier for advertisers and reduce the likelihood that you’ll opt out.

  • iOS 6 comes in a default “tracking on” position. You have to affirmatively switch it off if you do not want advertisers to see what you’re up to.
  • The tracking control in the iPhone’s settings is NOT where you might expect it, under the “Privacy” menu. Instead, it’s found under “General,” then “About,” then the “Advertising” section of the Settings menu.
  • The tracking control is titled “Limit Ad Tracking,” and must be turned to ON, not OFF, in order to work. That’s slightly confusing — “ON” means ads are off! — so a large number of people will likely get this wrong.

Those three factors combined mean that a huge proportion of iPhone users are unlikely to ever opt out of tracking.

“It’s a really pretty elegant, simple solution,” says Mobile Theory CEO Scott Swanson. “The biggest thing we’re excited about is that it’s on by default, so we expect most people will leave it on.”

(His take on IFA’s capabilities was confirmed by two other mobile ad execs at rival companies.)

Again, IFA doesn’t identify you as a person to advertisers. What it does do, however, is provide advertisers with “a really meaningful inference of behavior,” Swanson says. “We haven’t had access to that information before.”

iPhone Privacy: How To Stop Apple And Advertisers From Tracking You On iOS 6

Your iPhone, I’m sad to say, is not like Las Vegas: What happens there often does not stay there.

Much of your iPhone activity — including your web browsing, app store downloads and more — is transmitted to advertisers through various channels so that they can serve up relevant advertisements and offers for you. Although these services have mechanisms in place to ensure that you can’t be identified, you still might be a little uneasy about all of that information getting sent off to parts unknown (regardless of whether you’re doing anything — ahem — naughty on your iPhone).

Luckily for you and your privacy concerns, Apple has provided users with several ways, especially on iOS 6, to limit the amount of information that gets transmitted to third parties. Unluckily, they are buried deep within the bowels of the iPhone, opaquely worded, and not located where you might think they are.

Consider this, then, to be your privacy itinerary. Here are three settings you should tweak if you want to limit the activity tracking that occurs by default on your iPhone. While the settings won’t completely eliminate the transmission of your iPhone data to often-mysterious parties, they will greatly reduce it.

1. Limit Ad Tracking

The subject of a brief controversy stirred up by Business Insider, Apple recently changed the way it identifies your device, starting with iOS 6, for advertisers that serve you well-aimed ads. To which you might reasonably reply: “Wait a minute — Apple is identifying my device for advertisers so that they can serve me well-aimed ads?!?!”

Welcome to 2012, where pretty much everything with a battery is tracking you, and every site that prompts you to enter a login and password is trying to provide you with relevant ads.

Though Apple insists the Advertising Identifier is non-permanent and cannot be used to determine your identity — and by all indications, the new system is far better than the old one, which really did identify you to an odd extent — Apple still allows you to opt out of the program. To do so, go into Settings, then General, then About, then Advertising. You want to turn Limit Ad Tracking to the “On” position.

(Which is a little confusing, by the way: In order to turn ad tracking off, you have to flick the switch to “On”? How about, next time, if you want to shut something off, you select “Off”? If I want to mute my phone, I don’t have to switch “Turn Volume Off” to the “On” position, do I? Who’s on first?)

No matter: Turn “Limit Ad Tracking” on. You will still see ads on your phone, but they won’t be “targeted” to you based on your activity.

2. Opt Out Of Targeted iAds

To more thoroughly block targeted ads, you can specifically prevent Apple’s own iAd system from tracking your behavior and presenting ads based on that activity.

To do so, open Safari on your iPhone and visit http://oo.apple.com. There you’ll see a screen asking if you want to opt out of Interest Based iAds. If you flip the switch to “Off,” the ads you see will not be based on your history. Instead, they will be general, non-targeted advertisements. (See? Flip the switch to “Off” when you want to turn something off. How hard was that?)

And, as our friends at TUAW helpfully point out, this is not a final decision: If you find yourself yearning for targeted ads, you can bring them back any time you please, by clicking to the site above and turning them back on. Thomas Wolfe was wrong: You can go home again! (If by “home” you mean “digital advertisements conjured by an automated analysis of your smartphone activity.”)

While you’re tweaking your iAds, you might also shut off location-based iAds, or advertisements based on your current location. Go into Settings, then Privacy, then Location Services, then System Settings (at the bottom), then switch “Location-Based iAds” to the “Off” position.

3. Do Not Track

With the Safari browser in iOS 6, Apple also introduces a “Do Not Track” feature, which denies websites you visit the ability to track you both on their page and on other websites you visit when you leave.

I know, I know, it’s a radical concept: Once you leave a website, that website no longer tracks your behavior. It’s like, when I leave my friend’s apartment, do I expect him to secretly embed a spy camera on my backpack so that he can keep an eye on my every movement outside of his home?

Well, maybe. There’s no “Off” switch for my creepy friend William.

Unlike William’s creepiness, however, there is an “Off” switch for website tracking! To enable Do Not Track on your iPhone, you need to turn on “Private Browsing.” Open up Settings, and then go into the Settings for Safari. Switch Private Browsing to “On” and your phone will start sending a Do Not Track message to any website you visit.
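
Under the hood, Do Not Track is nothing more than an extra HTTP request header (“DNT: 1”) that your browser sends along; whether anything actually changes is entirely up to the receiving site. Below is a minimal Swift sketch of the check a cooperating site would perform; the plain header dictionary stands in for whatever request type a real web framework provides.

```swift
import Foundation

// Do Not Track is advisory: the browser adds a "DNT: 1" request header,
// and each site decides whether to honor it. A cooperating server might
// gate its tracking code on the header like this.
func userOptedOutOfTracking(requestHeaders: [String: String]) -> Bool {
    // HTTP header names are case-insensitive, so normalize before comparing.
    for (name, value) in requestHeaders where name.lowercased() == "dnt" {
        return value == "1"
    }
    return false
}

// Example: headers like those Safari sends with Private Browsing enabled.
let headers = ["User-Agent": "Mobile Safari", "DNT": "1"]
if userOptedOutOfTracking(requestHeaders: headers) {
    print("Skipping analytics and third-party trackers for this request")
}
```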

You can learn more about the Do Not Track movement by visiting the official website (which will also tell you whether you have Do Not Track enabled on your browser).

These three tips should put your mind at ease about the extent to which your iPhone behavior is being tracked. Again, it’s not a wholesale solution to your iPhone-tracking concerns, but it will greatly reduce the more suspect, easily-preventable data-collecting activity.

Now, if you’ll excuse me, I have to go check my backpack for spy cameras.

via HuffPo

 

October 15, 2012 – DCMX Radio: Wikileaks and Anonymous Fall-out Continued, High Tech Web Spying, Low-Income Biometric Datamining, Drone Strike Double-Tap

Wikileaks Statement, Anonymous Responds – A Parting of Ways?

Fact of the Day – Bin Laden Family Makes Millions on Defense Industry Boom!

Facebook Spying Methods, Secrecy

Apple’s New iOS 6 Includes New (Ad?) Tracking

Drone Strikes Super Tech Double-Tap

Biometric Privacy Invasion – Now Being Forced in Low-Income Programs


Every Week Night 12-1am EST (9-10pm PST)

Apple’s Secret Plan To Join iPhones With Airport Security

“Currently — as most of us know — TSA agents briefly examine government ID and boarding passes as each passenger presents their documents at a checkpoint at the end of a security line. Thom Patterson writes at CNN that under a 2008 Apple patent application that was approved in July and filed under the working title “iTravel,” a traveler’s phone would automatically send electronic identification to a TSA agent as soon as the traveler gets in line. TSA agents would examine the electronic ID at an electronic viewing station. Next, at the X-ray stations, a traveler’s phone would confirm to security agents that the traveler’s ID had already been checked. Apple’s patent calls for the placement of special kiosks (PDF) around the airport which would automatically exchange data with your phone via a close-range wireless technology called near field communication (NFC). Throughout the process, the phone photo could be displayed on a screen for comparison with the traveler. Facial recognition software could be included in the process. Several experts say a key question that must be answered is: How would you prove that the phone is yours? To get around this problem, future phones or electronic IDs may require some form of biometric security function, including photo, fingerprint and retinal scan comparisons. Of course, there is still a ways to go. If consumers, airlines, airports and the TSA don’t embrace the NFC kiosks, experts say it’s unlikely Apple’s vision would become reality. ‘First you would have to sell industry on Apple’s idea. Then you’d have to sell it to travel consumers,’ says Neil Hughes of Apple Insider. ‘It’s a chicken-and-egg problem.'”

via Slashdot

Keeping the Government Out of Your Smartphone

Smartphones can be a cop’s best friend. They are packed with private information like emails, text messages, photos, and calling history. Unsurprisingly, law enforcement agencies now routinely seize and search phones. This occurs at traffic stops, during raids of a target’s home or office, and during interrogations and stops at the U.S. border. These searches are frequently conducted without any court order.

Several courts around the country have blessed such searches, and so as a practical matter, if the police seize your phone, there isn’t much you can do after the fact to keep your data out of their hands.

However, just because the courts have permitted law enforcement agencies to search seized smartphones doesn’t mean that you—the person whose data is sitting on that device—have any obligation to make it easy for them.

Screen unlock patterns are not your friend

The Android mobile operating system includes the capability to lock the screen of the device when it isn’t being used. Android supports three unlock authentication methods: a visual pattern, a numeric PIN, and an alphanumeric password.

The pattern-based screen unlock is probably good enough to keep a sibling or inquisitive spouse out of your phone (providing they haven’t seen you enter the pattern, and there isn’t a smudge trail from a previous unlock that has been left behind). However, the pattern-based unlock method is by no means sufficient to stop law enforcement agencies.

After five incorrect attempts to enter the screen unlock pattern, Android will reveal a “forgot pattern?” button, which provides the user with an alternate method of gaining access: entering the Google account email address and password that is already associated with the device (for email and the App Market, for example). After the user has incorrectly entered the screen unlock pattern 20 times, the device will lock itself until the user enters a correct username/password.
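
Put another way, the pattern lock implements a simple escalation policy: five failures reveal the account fallback, twenty failures force it. The following sketch models that behavior as just described; it is an illustration of the policy only, not Android’s actual code (and it is written in Swift, for consistency with the other examples here, rather than Android’s Java).

```swift
// Models the pattern-lock fallback behavior described above: after 5
// failed attempts a "forgot pattern?" button appears, and after 20
// failed attempts the Google account login is the only way in.
enum UnlockState {
    case patternOnly                 // keep trying the pattern
    case patternWithAccountFallback  // "forgot pattern?" button shown
    case accountLoginRequired        // locked until Google sign-in
}

func unlockState(afterFailedAttempts attempts: Int) -> UnlockState {
    switch attempts {
    case ..<5:   return .patternOnly
    case 5..<20: return .patternWithAccountFallback
    default:     return .accountLoginRequired
    }
}
```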

What this means is that if provided a valid username/password pair by Google, law enforcement agencies can gain access to an Android device that is protected with a screen unlock pattern. As I understand it, this assistance takes the form of two password changes: one to a new password that Google shares with law enforcement, followed by another that Google does not share with the police. This second password change takes place sometime after law enforcement agents have bypassed the screen unlock, which prevents the government from having ongoing access to new email messages and other Google account-protected content that would otherwise automatically sync to the device.

Anticipatory warrants

As The Wall Street Journal recently reported, Google was served with a search warrant earlier this year compelling the company to assist agents from the FBI in unlocking an Android phone seized from a pimp. According to the Journal, Google refused to comply with the warrant. The Journal did not reveal why Google refused, merely that the warrant had been filed with the court with a handwritten note by an FBI agent stating, “no property was obtained as Google Legal refused to provide the requested information.”

It is my understanding, based on discussions with individuals who are familiar with Google’s law enforcement procedures, that the company will provide assistance to law enforcement agencies seeking to bypass screen unlock patterns, provided that the cops get the right kind of court order. The company insists on an anticipatory warrant, which the Supreme Court has defined as “a warrant based upon an affidavit showing probable cause that at some future time, but not presently, certain evidence of crime will be located at a specific place.”

Although a regular search warrant might be sufficient to authorize the police to search a laptop or other computer, the always-connected nature of smartphones means that they will continue to receive new email messages and other communications after they have been seized and searched by the police. It is my understanding that Google insists on an anticipatory warrant in order to cover emails or other communications that might sync during the period between when the phone is unlocked by the police and the completion of the imaging process (which is when the police copy all of the data off of the phone onto another storage medium).

Presumably, had the FBI obtained an anticipatory warrant in the case that the Wall Street Journal wrote about, the company would have assisted the government in its attempts to unlock the target’s phone.

Praise for Google

The fact that Google can, in some circumstances, provide the government access to data on a locked Android phone should not be taken as evidence that Google is designing government backdoors into its software. If anything, it is a solid example of the fact that when presented with a choice between usability and security, most large companies offering services to the general public tend to lean towards usability (for example, Apple and Dropbox can provide law enforcement agencies access to users’ data stored with their respective cloud storage services).

The existence of the screen unlock pattern bypass is likely there because a large number of consumers forget their screen unlock patterns. Many of those users are probably glad that Google lets them restore access to their device (and any data on it), rather than forcing them to perform a factory reset whenever they forget their password.

However, as soon as Google provides a feature to consumers to restore access to their locked devices, the company can be forced to provide law enforcement agencies access to that same functionality. As the old saying goes, “If you build it, they will come.”

In spite of the fact that Google has prioritized usability over security, Google’s legal team has clearly put their customers’ privacy first.

First, the company has insisted on a stricter form of court order than a plain-vanilla search warrant, and has refused to provide assistance to law enforcement agencies that seek assistance without the right kind of order.

Second, by providing the government access to the Android device via a (temporary) change to the user’s Gmail password, Google has ensured that the target of the surveillance receives an automatic email notice that their password has been changed. Although the email they receive won’t make it explicit that the government has been granted access to their mobile device, it will still serve as a hint to the target that something fishy has happened.

Third, by changing the user’s password a second time, Google has prevented the government from having ongoing, real-time access to the surveillance target’s emails. There is, I believe, no law requiring Google to take this last step—Google has done it to protect the privacy of the user, and to deny the government what would otherwise be an indefinite email wiretap not approved by the courts.

For real protection you need full-disk encryption

Of the three screen lock methods available on Android (pattern, PIN, password), Google only offers a username/password based bypass for the pattern lock. If you’d rather that the police not be able to gain access to your device this way (and are comfortable with the risk of losing your data if you are locked out of your phone), I recommend not using a pattern-based screen lock, and instead using a PIN or password.

However, it’s important to understand that while locking the screen of your device with a PIN or password is a good first step towards security, it is not sufficient to protect your data. Commercially available forensic analysis tools can be used to directly copy all data off of a device and onto external media. To protect against such forensic imaging, it is important to encrypt the data stored on the device.

Since version 3.0 (Honeycomb) of the OS, Android has included support for full disk encryption, but it is not enabled by default. If you want to keep your data safe, enabling this feature is a must.

Unfortunately, Android currently uses the same PIN or password for both the screen unlock and to decrypt the disk. This design decision makes it extremely likely that users will pick a short PIN or password, since they will probably have to enter their screen unlock dozens of times each day. Entering a 16-character password before making a phone call or obtaining GPS directions is too great a usability burden to place on most users.

Using a shorter letter/number PIN or password might be good enough for a screen unlock, but disk encryption passwords must be much, much longer to be able to withstand brute force attacks. Case in point: A tool released at the Defcon hacker conference this summer can crack the disk encryption of Android devices that are protected with 4-6 digit numeric PINs in a matter of seconds.
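
The arithmetic behind that result is worth spelling out: a numeric PIN of length n has only 10^n possibilities, so even a modest offline guessing rate chews through the whole space almost instantly, while every extra random character multiplies the attacker’s work. Here is a back-of-the-envelope Swift sketch, assuming a made-up rate of 100,000 key-derivation attempts per second (real rates depend on the hardware and the key-derivation function):

```swift
import Foundation

// Worst-case brute-force time at an assumed offline guessing rate.
// The rate below is an order-of-magnitude assumption, not a benchmark.
let guessesPerSecond = 100_000.0

func secondsToExhaust(keyspace: Double) -> Double {
    keyspace / guessesPerSecond
}

// Numeric PINs: 10^n candidates fall in seconds.
for digits in 4...6 {
    let seconds = secondsToExhaust(keyspace: pow(10.0, Double(digits)))
    print("\(digits)-digit PIN: \(seconds) seconds worst case")
}

// A 10-character password over ~62 letters and digits resists for ages.
let seconds = secondsToExhaust(keyspace: pow(62.0, 10.0))
print("10-char alphanumeric password: \(seconds / (3600 * 24 * 365)) years worst case")
```

At that assumed rate, a 6-digit PIN falls in about ten seconds, while a random 10-character alphanumeric password would take on the order of hundreds of thousands of years.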

Hopefully, Google’s engineers will at some point add new functionality to Android to let you use a different PIN/password for the screen unlock and full disk encryption. In the meantime, users who have rooted their device can download a third-party app that will allow you to choose a different (and hopefully much longer) password for disk encryption.

What about Apple?

The recent Wall Street Journal story on Google also raises important questions about the phone unlocking assistance Apple can provide to law enforcement agencies. An Apple spokesperson told the Journal that the company “won’t release any personal information without a search warrant, and we never share anyone’s passcode. If a court orders us to retrieve data from an iPhone, we do it ourselves. We never let anyone else unlock a customer’s iPhone.”

The quote from Apple’s spokesperson confirms what others have hinted at for some time: that the company will unlock phones and extract data from them for the police. For example, an anonymous law enforcement source told CNET earlier this year that Apple has for at least three years helped police to bypass the lock code on iPhones seized during criminal investigations.

Unfortunately, we do not know the technical specifics of how Apple retrieves data from locked iPhones. It isn’t clear if they are brute-forcing short numeric lock codes, or if there exists a backdoor in iOS that the company can use to bypass the encryption. Until more is known, the only useful advice I can offer is to disable the “Simple Passcode” feature in iOS and instead use a long, alpha-numeric passcode.

By Chris Soghoian, Principal Technologist and Senior Policy Analyst, ACLU Speech, Privacy and Technology Project at 11:48am

Anonymous Press Release – The Recent Hack Of The FBI Cyber-Crime Division

Greetings World — On September 3, 2012 our comrades in AntiSec released a Press Release here –> http://pastebin.com/nfVT7b0Z

In this release they disclosed the fact that they had hacked the laptop of an FBI agent in the Cyber-Crime division and among the booty taken was a file containing 12 million UDIDs from various Apple products owned by people in the USA. They released evidence of this in the form of 1 million partially redacted entries from the file. The media did their usual idiot dance, latched onto the story and ran without thinking. Then mid-week it was pointed out by their critics that Anonymous could have got that file from many sources. Of course the FBI denied they were hacked, did you honestly think that the FBI Cyber-Crime guys would be like yeah Anonymous hacked us and we are butthurt? Please. Then no sooner does the media turn to this idea that hey, Anonymous could have got this info from some app developer lo and behold an app developer mysteriously discovers that they have been hacked and the data belongs to them. Yeah right. And now the media has come full circle like baying dogs and is reporting this shit as the newest version of reality. Fucking jokers. We have strong reason to believe this company Blue Toad are liars. But even if their data matches the data set obtained from the FBI by AntiSec, this simply points to one possible source where the FBI might have obtained the data. As AntiSec themselves pointed out in their response to the FBI’s lies, no one ever said the FBI got this data from Apple.

http://pastie.org/4678441

Now that the mainstream media is finally catching on that this so-called “Blue Toad” revelation proves nothing, everyone seems completely perplexed. Some tech journalists are demanding hard “proof”. Don’t be fools, that would land a bunch of us in prison and it ain’t going to happen. What AntiSec and Anonymous HAVE provided you is evidence that only has meaning to the FBI Cyber-Crime guys.

These partial IPs for instance:

206.112.75.XX
153.31.184.XX

Has any reporter asked the FBI Cyber-Crime division if these IPs have any meaning to them? No, of course not. They would only deny it or just not answer the question saying it was a “security issue”, right? But it IS your job as a reporter to at least ask. In the initial Press Release, AntiSec provided the name of the Cyber Agent and the make and model of his laptop. “During the second week of March 2012, a Dell Vostro notebook, used by Supervisor Special Agent Christopher K. Stangl from FBI Regional Cyber Action Team and New York FBI Office Evidence Response Team was breached.” Has even ONE reporter contacted Agent Stangl and asked him what make and model laptop he uses for work? Uhmmm, no, of course not. You are all so quick to believe some strange company who conveniently pops up out of the mists (and who we have never even heard of ourselves until today). But what is REALLY incredible is that you would believe a group who is historically PROVEN to be pathological liars and criminals, namely the FBI. AntiSec also provided the method used, and most security “experts” (i.e. White Hat Scum) have grudgingly admitted the hack would be possible using the technique described. AntiSec has even provided the MAC addresses of all the hardware used in the New York office of the Cyber-Crime Division:

http://twitter.yfrog.com/oboljfp

 

Has anyone asked the FBI if these MACS are real? And before you reply “they would just deny it or say no comment” – it is STILL your job as reporters to at least ASK and report their answer to your audience. You have asked for chat logs from the hack. AntiSec has indicated they may provide them after they have thoroughly scrutinized them and redacted shit that can get them V&ed, which will most likely include the forensic “proof” some of you crazy journos are clamoring for. But the bottom line is this. Anonymous and AntiSec have provided FAR more evidence for their side of the story than the FBI has with their two lousy tweets and then a steady stream of “no comments”. The FBI has not provided one shred of evidence for their lying denials. Anonymous and AntiSec have provided what they can, and may provide more in the future.

Here is the latest statement from AntiSec –> http://bit.ly/TQsCc3

 

AntiSec hacked the FBI and found 12 million UDIDs from Apple products on the laptop of a special cyber agent of the FBI. Whether the FBI had these for some tracking scenario as AntiSec opines, or whether they had them to use to crack open Apple stuff they seize when the “suspect” won’t give them the passwords – or whether they had them for some completely un-known nefarious reason, they had them and Anonymous took them. We know this is true, and more importantly the FBI knows this is true. It is not our job to convince either the media or the masses. But the truth is there, if the journalists want to actually WORK for a living and dig for it. Also, that file wasn’t all that AntiSec obtained from Agent Stangl’s laptop. The FBI and all you media journos should….

EXPECT US.

SINCERELY

— Anonymous Anonymous Global — www.AnonymousGlobal.tk

For Messages From AntiSec Follow @AnonymousIRC on Twitter

Anonymous Hackers Claim To Release One Million Apple Devices’ Unique Identifiers Stolen From FBI

Anonymous has a way of releasing massive collections of information that raise many more questions than they answer.

Case in point: On Monday night, the segment of the hacker group that calls itself Antisec announced that it had dumped 1,000,001 unique device identifier numbers or UDIDs for Apple devices–the fingerprints that Apple, apps and ad networks use to identify the iPhone and iPads of individual users–that it claims to have stolen from the FBI. In a long statement posted with links to the data on the upload site Pastebin, the hackers said they had taken the Apple data from a much larger database of more than 12 million users’ personal information stored on an FBI computer.

While there’s no easy way to confirm the authenticity or the source of the released data, I downloaded the encrypted file and decrypted it, and it does seem to be an enormous list of 40-character strings made up of numbers and the letters A through F, just like Apple UDIDs. Each string is accompanied by a longer collection of characters that Anonymous says is an Apple Push Notification token and what appears to be a username and an indication as to whether the UDID is attached to an iPad, iPhone or iPod touch.
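
One reason that shape is easy to eyeball: Apple’s UDID was reportedly computed as a SHA-1 hash of hardware identifiers, and SHA-1 produces 160 bits, which is exactly 40 hex characters. Here is a small Swift sketch of the kind of sanity check one might run over lines of the leaked file; it tests only the format, and obviously cannot distinguish a genuine UDID from random hex.

```swift
import Foundation

// A UDID is a 40-character hexadecimal string (160 bits of SHA-1 output).
// This checks only the shape of a candidate string, nothing more.
func looksLikeUDID(_ candidate: String) -> Bool {
    candidate.range(of: "^[0-9A-Fa-f]{40}$", options: .regularExpression) != nil
}

// Invented sample values, for illustration only:
let samples = [
    "0123456789ABCDEF0123456789ABCDEF01234567",  // plausible UDID shape
    "not-a-udid",                                 // fails the check
]
for sample in samples {
    print(sample, looksLikeUDID(sample))
}
```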

In their message, posted initially in the Anonymous twitter feed AnonymousIRC, the hackers say they used a vulnerability in Java to access the data on an FBI Dell laptop in March of this year. They say the database included not only the UDIDs, but also “user names, name of device, type of device, Apple Push Notification Service tokens, zipcodes, cellphone numbers, addresses, etc.” Anonymous claims that the amount of data about each user was highly variable, and that it only released enough data to the public “to help a significant amount of users to look if their device are listed there or not.”

The Antisec statement also took the opportunity to mock the recent appearance of NSA Director and General Keith Alexander at the hacker conference Defcon, where he made a recruiting pitch to attendees. “It was an amusing hypocritical attempt made by the system to flatter hackers into becoming tools for the state,” Anonymous’ statement reads. “We decided we’d help out Internet security by auditing FBI first.”

If the UDIDs are determined to be real, just what that means about law enforcement and Apple users’ privacy isn’t entirely clear. Much more than passwords or even email addresses, UDIDs are already spread around the Internet by app developers and advertisers–a study by one privacy researcher in 2011 found that 74% of the apps he tested sent a user’s UDID to a remote server. But the same researcher also found that five out of seven social gaming networks he tested allowed users to log in with only their UDID, making a stolen UDID equivalent to a stolen password.

“We never liked the concept of UDIDs since the beginning indeed,” reads the Anonymous statement. “Really bad decision from Apple. Fishy thingie.”

Due perhaps to the privacy concerns around UDIDs’ proliferation, Apple stopped allowing new iOS apps to track UDIDs earlier this year.

Regardless, if the FBI has in fact collected 12 million Apple UDIDs–or even just one million–it will have some explaining to do to privacy advocates. In its release, Anonymous argues that the massive dump of users’ personal information, which it says has been stripped of many of the most identifying details, is designed to raise awareness of the FBI’s alleged gadget-tracking shenanigans. “…We will probably see their damage control teams going hard lobbying media with bullshits to discredit this,” the statement reads at one point. “But well, whatever, at least we tried and eventually, looking at the massive number of devices concerned, someone should care about it.”

For now, Anonymous refuses to answer more questions about its release–at least from the press. Before granting any interviews, it’s demanding that Gawker writer Adrian Chen, who has been especially critical of Anonymous, appear on Gawker’s home page in a “huge picture of him dressing a ballet tutu and shoe on the head.”

SOURCE: Forbes.com

NSA: Security Conguration Recommendations for Apple iOS 5 Devices

>>>>    NSA_Apple_IOS_5_Security_Protocols  <<<<

 

Purpose. This document provides security-related usage and configuration recommendations for Apple iOS devices such as the iPhone, iPad, and iPod touch. This document does not constitute Department of Defense (DoD) or United States Government (USG) policy, nor is it an endorsement of any particular platform; its purpose is solely to provide security recommendations. This guide may outline procedures required to implement or secure certain features, but it is also not a general-purpose configuration manual. The guidance provides recommendations for general-purpose business use of iOS devices for processing data that is UNCLASSIFIED, and possibly Sensitive But Unclassified. Such data may carry various designations such as For Official Use Only, Law Enforcement Sensitive, or Security Sensitive Information. Approval for processing such Sensitive But Unclassified data is dependent upon risk decisions by Designated Approving Authorities (or their analogs in non-DoD entities).

Audience. This guide is primarily intended for network/system administrators deploying Apple’s iOS devices or supporting their integration into enterprise networks. Some information relevant to IT decision makers and users of the devices is also included. Readers are assumed to possess basic network and system administration skills for Mac OS X or Microsoft Windows systems, and they should have some familiarity with Apple’s documentation and user interface conventions.

Scope. Apple’s mobile devices, including the iPhone and iPad, are prominent examples of a new generation of mobile devices that combine into a single device the capabilities of a cellular phone, laptop computer, portable music player, camera, audio recorder, GPS receiver and other electronics. The capabilities of such devices are considerable but, as with any information system, also pose some security risks. Design features can seriously mitigate some risks, but others must be considered as part of a careful, holistic risk decision that also respects the capabilities enabled by such devices. Major risks, and available mitigations, are discussed in Section 1.3.

Security guidance for mobile devices must cut across many previously discrete boundaries between technologies. For example, scrupulous deployment of an iPhone includes consideration of not just the settings on the device itself, but those of the Wi-Fi networks to which it will connect, the VPNs through which it will tunnel, and the servers from which it will receive its configuration. This guide provides recommendations for the settings on an iOS device itself, as well as closely-related information for the network and configuration resources upon which deployment of iOS devices depends.