6 Things Washington Doesn’t Get About Hackers

For a forthcoming book (shameless plug: preorder now!), I spent the last several years interviewing over 100 security researchers, usually self-described as “hackers,” attending security conferences, and watching how these professionals uncovered vulnerabilities and shortcomings in software, computer systems, and everyday devices in order to update and improve them. These ethical, or “white-hat,” hackers are defined primarily by their innate curiosity to discover what new authorized or unauthorized hacks they can accomplish, whether as a hobby or a profession, and their work is usually some mixture of the two. The most simplified way in which this is often explained is “taking something, and making it do something else.”

Hackers are often mistakenly portrayed in popular culture as inarticulate geeks wearing hoodies — or worse, ninja suits — with limited social skills. I have come to appreciate that the very opposite is true. Despite my lacking the technical background required for their profession, I have found that security researchers are more than willing to share their findings, rephrase them repeatedly in simplified terms, discuss their growing concerns about their field, and address the inevitable follow-up questions.

With the annual security conferences in Las Vegas having just concluded, several observations about hackers and how their work is received and acted upon are worth sharing. Given their essential and increasing role in ensuring consumer and public safety, protecting critical infrastructure, and, ultimately, better ensuring national security, here are six things everyone should know about hackers.

1. Your life is improved and safer because of hackers.

The Internet of Things — the ecosystem of Internet-connected devices — is growing exponentially, from 13 billion devices in 2013 to a projected 50 billion or more by 2020, by one estimate. The near ubiquity of chips, sensors, and implants placed into devices will provide users with continuously updated features and conveniences, like smart yoga mats that correct poses and automobile routing notices that help people avoid traffic jams. However, as one hacker explained to me, “What is an expected feature for you is attack surface for me.” Security researchers have successfully hacked — and disclosed their findings to manufacturers before any public revelations — pacemakers, insulin pumps, commercial airliners, industrial control systems for critical infrastructure, hotel key cards, safes, refrigerators, defibrillators, and “smart” rifles. These products were made safer and more reliable only because of the vulnerabilities uncovered by external hackers working pro bono or commissioned by companies, not by in-house software developers or information technology staff.

2. Almost every hack that you read about in your newspaper lacks important context and background.

You’ve read the eye-popping headlines: “Hackers Remotely Kill a Jeep on the Highway — With Me in It,” “How Your Pacemaker Will Get Hacked,” “Skateboards, drones and your brain: everything got hacked,” “Cars can be hacked. What about a plane?” These sensationalized snapshots and accompanying stories give the impression that everything is vulnerable and easily broken into. Yet most attempted cyberbreaches go nowhere and are never demonstrated live at a conference or reported in the news. Successful hacking entails failing, trying something different, failing again, and then discovering a flaw or vulnerability that can be further exploited. Hacks that appear in the media are often the result of extensive work by teams of researchers who have varying skills and a deep knowledge of coding, operating systems, and malware that can be repurposed for their current project.

Take, for example, the widely reported Jeep Cherokee hack. It was conducted by Charlie Miller and Chris Valasek, two of the most technically proficient hackers on Earth. Miller holds a Ph.D. in mathematics, worked at the National Security Agency, and was the first person to remotely hack an iPhone, as well as a dozen other “secure” consumer products. Their Jeep hack was the result of an expensive and extensive three years of research that uncovered a number of vulnerabilities in the cars themselves, as well as in the Sprint cellular network that provides the telematics for the in-car Wi-Fi, real-time traffic updates, and other aspects of remote connectivity. The point is that each publicly reported hack is unique unto itself and has an unreported backstory that is critical to fully comprehending the depth and extent of the uncovered vulnerabilities.

3. Nothing is permanently secured, just temporarily patched.

Hackers experience “constant occupational disappointments and personal/collective joys,” as cultural anthropologist Gabriella Coleman found in her important study of the field. They identify a glaring and obvious weakness, which is then addressed with a software patch, an alteration in network architecture, or perhaps minor changes to IT-team and employee procedures. Yet when the next software glitch appears elsewhere, or an employee opens what he believes is an emergency email about his retirement account but that actually installs undetected malicious code on his computer, new vulnerabilities inevitably reappear. “Cybersecurity on a hamster wheel” is how longtime hacker Dino Dai Zovi described this commonly experienced phenomenon to me.

For example, consider the femtocell, a miniature cell-phone tower that looks like a normal Wi-Fi router. It is used to prevent coverage “dead zones” in rural areas or office buildings, and any cell phone within its vicinity will associate with it without the owner’s knowledge. In 2011, The Hacker’s Choice (THC), a hacking collective, was able to get root access to a Vodafone femtocell by reverse-engineering the administrator password — it was “newsys.” This allowed the THC team to steal the voice, data, and SMS messages from all connecting phones. In 2013, a team at the cybersecurity firm iSEC Partners gained similar access to a Verizon femtocell by exploiting a built-in delay in the boot-up process. At the DEF CON security conference this year, Yuwei Zheng and Haoqi Shan successfully hacked a femtocell in China using a slightly more complicated vulnerability in the boot-up process. I spoke with Zheng and Shan after their presentation, and they explained that the hack took them about a month of work, at night after their day jobs. Inevitably, other femtocells will be hacked, and patched, and hacked again in the future.
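The weakness THC found — a hard-coded factory password that owners never change — is a recurring pattern across embedded devices. A minimal sketch of how an auditor might check a device against a list of known default credentials (the device names and all credentials except “newsys” are invented for illustration; this is not the actual Vodafone firmware or exploit):

```python
# Hypothetical default-credential audit, illustrating the class of
# weakness THC exploited. Device models and most passwords here are
# invented examples, not real product data.

# Known factory defaults an auditor might try first.
DEFAULT_CREDENTIALS = {
    "example-femtocell": [("admin", "newsys"), ("root", "root")],
    "example-router": [("admin", "admin"), ("admin", "password")],
}

def audit_device(model, check_login):
    """Return every default credential pair the device still accepts.

    `check_login` stands in for whatever login interface the device
    exposes (telnet, SSH, web admin page).
    """
    accepted = []
    for username, password in DEFAULT_CREDENTIALS.get(model, []):
        if check_login(username, password):
            accepted.append((username, password))
    return accepted

# Simulated device whose factory password was never changed.
def vulnerable_login(username, password):
    return (username, password) == ("admin", "newsys")

print(audit_device("example-femtocell", vulnerable_login))
# [('admin', 'newsys')]
```

A patch that forces owners to set a new password on first boot closes this particular hole, but, as the repeated femtocell hacks show, only until the next weakness in the boot process or firmware is found.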

4. Hackers continue to face uncertain legal and liability threats.

You would think that manufacturers would welcome somebody discreetly alerting them to a vulnerability in their products, and, indeed, some incentivize this through “bug bounties” that pay hackers who responsibly disclose security shortcomings. However, some manufacturers refuse to acknowledge that the shortcomings exist, threaten to file lawsuits, or report the researchers to law enforcement authorities under the belief that they are being blackmailed. It is important that white-hat hackers are protected and encouraged to do their work, because for every hack that they disclose to manufacturers, there are other government, criminal, or malicious hacking teams that have probably found the same vulnerability, which they have kept to themselves to exploit or sell on the black market.

There are two pressing legal and regulatory concerns. First, hackers argue that the U.S. Computer Fraud and Abuse Act (CFAA), which was passed into law the same year that Matthew Broderick hacked his high school’s computer network to change his grades in Ferris Bueller’s Day Off, is hopelessly out of date and has been abused by prosecutors to go after individuals engaged in non-malicious hacking rather than actual computer crime. The law prohibits anyone from intentionally accessing a computer or computer network “without authorization” or “exceed[ing] authorized access.” In one case, three Massachusetts Institute of Technology students who found vulnerabilities in the Massachusetts Bay Transportation Authority (MBTA) ticketing system that would allow people to obtain free rides were barred by an MBTA restraining order from presenting their findings in 2008 — implying that discussing a hack was equal to undertaking it. More tragically, Aaron Swartz committed suicide in January 2013 while facing up to 35 years in federal prison for 11 purported violations of the CFAA. Sensible proposals to update and reform the CFAA have unfortunately gone nowhere.

Similarly, the Wassenaar Arrangement, a multilateral export-control regime, faces criticism for threatening the effectiveness and efficiency of increasingly commonplace bug-bounty programs. This would not only put successful programs at risk, but would disproportionately hurt independent, self-employed hackers who make a living this way. The U.S. Commerce Department’s Bureau of Industry and Security recently proposed an update to Wassenaar that would require licenses when exporting intrusion software technology — a change that many researchers believe would hinder research and development and slow the process of disclosing vulnerabilities.

5. There is a wide disconnect between cyberpolicy and cybersecurity researchers.

At cybersecurity roundtables and conferences in Washington, generally few people in the room have any technical knowledge or have personally engaged in any sort of hacking. Rather, these events are attended by security generalists (like yours truly) who clumsily transfer concepts from other domains, particularly deterrence theory, which was developed a half-century ago for thinking through U.S.-Soviet Union nuclear war dynamics. “We are trying to bridge the gap by building a network of foreign-policy wonks, reps from the tech companies, and technology experts,” said my colleague Adam Segal, director of the Council on Foreign Relations’ Digital and Cyberspace Policy Program, “but there are still big differences in culture and outlook.”

Meanwhile, hackers hate the very word “cyber” because it is a meaningless prefix for anything related to the Internet and overlooks other aspects impacting computer security, like physical security, social engineering, insider threats, and radio-frequency jamming, hacking, or spoofing. Nevertheless, they will embrace the term reluctantly in order to be listened to in Washington, though they are rarely invited to government or think-tank events, nor would they even know how to be invited. The consequences of this disconnect are evident in policy proposals and debates that rarely take into account responsible hackers’ concerns or the readily available exploits and malware that any malicious hacker could utilize.

There are some hackers and government officials making efforts to bridge this divide. Representatives from the I Am The Cavalry grassroots movement, which focuses on cybersecurity issues that impact public safety and human life in order to ensure that technologies are trustworthy, have given more than 200 briefings on Capitol Hill. Meanwhile, some government officials are engaging directly: Ashkan Soltani, chief technologist of the Federal Trade Commission, is a regular contributor to hacker conversations; Suzanne Schwartz of the Food and Drug Administration called in to thank I Am The Cavalry during the BSides Las Vegas conference; and Randy Wheeler of the Bureau of Industry and Security took tough questions over the phone about the proposed Wassenaar Arrangement changes during an Electronic Frontier Foundation panel at DEF CON. Nevertheless, there are still too few security researchers and government officials willing or courageous enough to communicate in public. While the poisoning of the relationship that resulted from the Edward Snowden disclosures has largely dissipated, far more trust and dialogue are needed as Internet-based threats proliferate.

6. Hackers constitute a distinct community with its own ethics, morals, and values, many of which are tacit, while others are enforced through self-policing.

Predominantly, hackers just want the freedom to do their work and remain private or anonymous from the government or commercial sector if they so choose. They look down on colleagues who claim to have produced “unbreakable” encryption software or mobile devices or who spend too much time bragging in the news rather than demonstrating serious, innovative research in published papers.

Hackers also share a deep appreciation for self-deprecation and black humor. When a speaker canceled at DEF CON, there was a spirited round of “Spot the Fed,” where a moderator who knew several government employees in the audience encouraged others to try to identify them. They were unsuccessful; everybody thought all government and law enforcement workers were men wearing khaki pants (fair enough). There was also a first-time panel appropriately titled Drunk Hacker History. It included garbled tales from prominent hackers and Katie Moussouris of HackerOne singing her own composition: “History of Vuln Disclosure: The Musical.” Hopefully, this panel will appear on YouTube, as many DEF CON talks eventually do.

So the next time you read about hackers in the news or about some remarkable new security flaw in your phone, laptop, or car, consider the professionals who actually make that possible. Given how important the Internet is to everything we do and how all-encompassing it is in our lives, policymakers and citizens need to better understand how hackers think and work. The next time you meet a security researcher, ask them about their profession; they will probably be more than happy to share what they know.
