NSA Dismisses Thousands of Privacy Violations as Minuscule Errors

Mar 26, 2026 | News

Following the explosive revelation that the National Security Agency had committed 2,776 privacy violations in a single year, the agency mounted a public defense that relied heavily on minimization — arguing that the number of errors was trivial compared to the vast scale of its surveillance operations. The response revealed as much about the NSA’s institutional culture as the original audit had revealed about its compliance failures.

The NSA’s Damage Control Strategy

John DeLong, the NSA’s director of compliance, was dispatched to address reporters on a hastily arranged conference call following the Washington Post’s publication of the leaked internal audit. His central argument was one of proportion: approximately 100 analyst errors in database queries occurred against a backdrop of roughly 20 million such queries per month. By this arithmetic, the error rate was statistically negligible — a framing designed to make thousands of privacy violations appear routine and inconsequential.

DeLong characterized the overwhelming majority of violations as “unintentional human or technical errors” and described the number of deliberate violations as “minuscule” — just “a couple over the past decade.” He presented the existence of the audit itself as evidence that the agency’s internal compliance mechanisms were working as intended, detecting and correcting problems before they could cause lasting harm.

The Problem With the Proportionality Argument

The NSA’s proportionality defense contained a fundamental logical flaw that critics were quick to identify. The sheer volume of queries — 20 million per month — was itself a product of the mass surveillance architecture that civil liberties advocates had been challenging. Using the scale of surveillance operations as a denominator to minimize the significance of violations was circular reasoning: the more surveillance the agency conducted, the smaller any given number of errors would appear as a percentage.
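The arithmetic behind both the agency's framing and the critics' objection can be sketched in a few lines. This is an illustration only, using the round figures DeLong cited on the call (about 100 analyst errors against roughly 20 million queries per month); the numbers are his framing, not independently verified totals.

```python
# Figures as publicly cited by the NSA, used here only for illustration.
errors_per_month = 100          # analyst query errors DeLong described
queries_per_month = 20_000_000  # total database queries DeLong described

rate = errors_per_month / queries_per_month
print(f"Claimed error rate: {rate:.5%}")  # about 0.0005%

# The circularity critics identified: hold the number of violations fixed
# and grow the denominator, and the "rate" shrinks without any change in
# the absolute number of people affected.
for scale in (1, 10, 100):
    queries = queries_per_month * scale
    print(f"{queries:>13,} queries/month -> rate {errors_per_month / queries:.5%}")
```

The point of the second loop is the one critics made in prose: because the denominator is the volume of surveillance itself, expanding surveillance mechanically makes any fixed number of violations look smaller.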

Moreover, the framing treated all violations as equivalent, obscuring the range of severity documented in the audit. Some incidents involved technical glitches with minimal impact on individual privacy. Others involved the unauthorized collection of communications from thousands of Americans and green card holders, violations of court orders, and the use of surveillance methods that the Foreign Intelligence Surveillance Court had not been informed about — let alone approved. Averaging these together with minor database errors produced a misleadingly benign picture.

The Roamer Problem and Systemic Gaps

DeLong attempted to further defuse the controversy by noting that 1,904 of the 2,776 incidents involved foreign targets whose cellphones were being monitored abroad. When these individuals traveled to the United States — where individualized warrants are required for surveillance — the NSA’s systems did not immediately cease recording their communications. The agency described these as “roamer” incidents and said it worked to detect the location changes “as soon as we can.”

Rather than reassuring the public, this explanation highlighted a significant systemic vulnerability. The NSA was acknowledging that its technical infrastructure could not reliably distinguish between communications that required a warrant and those that did not. For an agency conducting surveillance on a global scale, the inability to automatically stop recording when a target entered United States territory was not a minor technical limitation — it was a constitutional problem embedded in the architecture of the surveillance system itself.

Concealing Information From Overseers

The leaked documents also revealed that NSA analysts had been instructed to provide only brief, minimal explanations when recording the justification for targeting foreign nationals — using short sentences and avoiding “extraneous information” in reports submitted to oversight bodies. DeLong characterized this as an efficiency measure, claiming it allowed the Justice Department and the Office of the Director of National Intelligence to scan lists of targets more quickly.

This explanation strained credibility. Oversight bodies exist precisely to evaluate whether surveillance targeting is appropriate and legally justified. Encouraging analysts to strip detail from their justifications — regardless of the stated rationale — reduced the ability of overseers to perform meaningful scrutiny. Whether the intent was efficiency or obfuscation, the practical effect was the same: less information flowing to the institutions responsible for holding the NSA accountable.

Compounding the concern, the Washington Post reported that the internal audit had not been shared with the Senate Intelligence Committee. DeLong described it as an internal document used to generate other reports for external overseers, implying that the committee received equivalent information through different channels. Senator Dianne Feinstein, the committee’s chairwoman, offered a more measured assessment, acknowledging that the committee did not receive the same level of detail about overseas surveillance problems and stating that it “can and should do more to independently verify” the NSA’s compliance reporting.

Institutional Accountability and the Trust Deficit

The NSA’s response to the audit revelations illustrated a pattern that had become familiar in the months since the Snowden disclosures began. Rather than acknowledging the systemic nature of the compliance failures, the agency sought to reframe them as isolated, manageable incidents within an otherwise well-functioning system. This approach required the public to accept that an organization conducting mass surveillance in secret, with limited external oversight and a documented history of withholding information from its overseers, could be trusted to police itself.

For many observers, the trust deficit was too large to bridge with statistics and reassurances. The audit had documented not just errors but a culture in which those errors were systematically minimized in external reporting, in which oversight bodies received sanitized information, and in which the agency made unilateral decisions about what its overseers needed to know. Describing these patterns as evidence of a “robust” compliance program required a definition of robustness that most people outside the intelligence community would not recognize.

The Deeper Question of Scale

Beneath the debate over error rates and compliance mechanisms lay a more fundamental question that the NSA’s defenders consistently avoided: whether mass surveillance on this scale was compatible with meaningful privacy protections in the first place. A system that generated 20 million database queries per month and intercepted communications from millions of people worldwide would inevitably produce thousands of violations, no matter how diligent its compliance staff. The errors were not bugs in an otherwise sound system — they were an inherent feature of surveillance conducted at industrial scale.

By framing the discussion in terms of error rates and corrective measures, the NSA and its congressional allies were implicitly asking the public to accept that some level of unauthorized surveillance of Americans was an acceptable cost of the broader intelligence mission. Whether that trade-off was justified remained the central unanswered question of the post-Snowden debate — one that no compliance audit, however thorough, could resolve.
