It’s easy to find people who are mad about one thing or another on the Internet. It’s easy to interact with them, mutually venting frustration, and it’s tempting to feel that you’ve made a difference—that your click-throughs or retweets have spread awareness of your particular cause. And you might be right! If you change the mind of even one person—instilling understanding in place of apathy, or challenging an existing set of beliefs—then you have made a difference. But one of the risks of being mad about things on the Internet is that it can become an echo chamber for people who share the same beliefs, all preaching to the choir.
So if you believe strongly in something, your challenge is not to find others who already share your beliefs, but to find others who don’t and to help them understand what you’re worked up about. This has been my biggest problem when talking to people about mass surveillance. For every one person who thinks state spying is A-OK, there are five more who simply don’t understand the issue. My goal in this post is to lay out some conversation points I frequently use when talking to people about surveillance, and I hope you will find it useful.
One of the most dangerous things about mass surveillance is that it actually has a chilling effect on free speech. People who know they are being watched by the government are afraid to speak their opinions when they go against the political establishment. This is not just a hypothetical—this has been reported on by the UN, and recently the EFF filed testimony from 22 firsthand accounts of how NSA surveillance limited the right to association.
Consider the situation in China, where state surveillance is the rule, not the exception. Access to information is restricted by the Golden Shield Project (a.k.a. the Great Firewall of China), Skype chats are monitored by the government, and dissidents are frequently arrested. People are afraid to speak their minds, leading to a phenomenon Dr. Ronald Deibert describes as “self-censorship” in his book Black Code.
Clearly, the situation is not as bad here in the United States, but we have reason to be concerned. We are told that the government engages in surveillance to protect us from terrorism. The problem is that recently the definition of “terrorism” has been shifting towards “anything the government doesn’t like.” Consider the following examples:
Regardless of how you feel about any of these examples (or their legality), can you with a straight face call any of this “terrorism”? These are people who were acting out of conscience for causes they believed in deeply, and whose actions didn’t endanger any lives.
This is the problem: when the government can perform nearly complete surveillance over all digital communications, and when the government equates “terrorism” to “any kind of activism that challenges the political establishment,” you have the recipe for totalitarianism. And that’s sort of the opposite of our American values.
Is giving up our rights to free speech and privacy a fair price for protection from terrorism? This might be worth discussing if mass surveillance actually protected us from anything, but it doesn’t. NSA surveillance was alive and well at the time of the Boston Marathon bombing, and the best they could do was listen in on past phone conversations to try to find out if suspect Tsarnaev’s wife was involved in the plot. In fact, NSA Director General Keith Alexander admitted to Congress that the NSA’s bulk surveillance program stopped only one or possibly two real terrorist threats (and one of them was some poor slob wiring $8,000 to Al-Shabaab).
Does this sound like a good use of the ten billion dollars the government spends annually on the NSA? It might if you’re one of the tens of thousands of people who work for the Agency or one of its private contractors. But let’s pause for a moment to consider some alternative ways that money could be spent.
How many Americans die every year due to terror attacks?
Let’s be extremely generous and say that, on average, 3,000 Americans die every year from terror attacks (that’s the number killed in the September 11 attacks). Now for some real statistics: in any given year, 32,000 Americans die due to drunk driving, 41,000 die due to breast cancer, and a whopping 231,000 die from diabetes or related complications. From the numbers alone, it seems that we have more pressing issues than terrorism when it comes to protecting American lives. That ten billion dollars in the NSA’s budget could be much better spent addressing any one of them.
The government has yet to prove one credible example of a legitimate terror plot that was prevented thanks to mass surveillance. If they want to insist that spying makes us safer, this would be good information to know.
What we do know is that information collected by the NSA under its counterterrorism authority is routinely shared with the DEA, IRS, and FBI for non-terrorism-related law enforcement. Since it would be unconstitutional for these agencies to use this information in criminal proceedings, they have offices dedicated to “parallel construction” of evidence—to literally constructing criminal cases against people by “fortuitously” happening across other evidence, such as pulling the right car over at the right time. This evidence-laundering is concealed from judges and prosecutors (not to mention defendants).
So mass surveillance is not about terrorism (because “everything is terrorism”), and it is not about protecting us (because it doesn’t protect us). This raises the question: what, exactly, is mass surveillance good for? The unfortunate (and obvious) answer is that surveillance is only good for the people who do the surveillance.
The NSA told its people to use 9/11 as a “sound bite”. They exploited the fear and trust of the American people to push their surveillance agenda and execute a power grab, securing political dominance and a massive budget. This has led to the rise of a whole industry of private intelligence contracting companies, none of whom give a second thought to violating the rights of their fellow Americans, as long as those sweet taxpayer dollars keep rolling in.
Nobody in this picture has any incentive to admit that terrorism might not be that big of a threat, or that the damage to civil liberties and our democracy might be too great. Instead, government agencies push an increasingly authoritarian agenda to criminalize dissent (see above) and private contractors are literally paying off the senators in charge of keeping them in check.
Where does this lead us? The NSA has nearly complete surveillance ability over all domestic communications, and increasingly the government is using this power to control its employees and the press. Specifically, President Obama’s Insider Threat Program encourages government employees to report would-be whistleblowers. Simply buying certain books or DVDs is enough to put federal workers on a watchlist. The threat of ubiquitous surveillance enforces a culture of intimidation and compliance, and workers are afraid to expose government wrongdoing.
Leaks that do make it to press are prosecuted vigorously. Obama’s Justice Department even subpoenaed the work files of Fox News reporter James Rosen, accusing him of being a “co-conspirator” to a felony just by doing his job and reporting the news. A scathing report released by the Committee to Protect Journalists warns that the threat of surveillance and prosecution has made government employees unwilling to talk to journalists, which has severely limited the press’ ability to report on government matters.
If we consider aggressive press coverage of government activities to be at the core of American democracy, this tips the balance heavily in favor of the government.
Fortunately, there is hope. Edward Snowden blew the lid off the mass surveillance industry, and now it’s up to the American people to clean up the mess. Sure, there are powerful and deeply-entrenched interests trying to stop us at every point along the way, but it is the job of activists to expose their corruption and wrongdoing. The way I see it, there are three main strategies, all equally valuable, that we can use to address this problem:
Over the last several months, I’ve met and conversed with many people who share my beliefs about mass surveillance. Last month, thousands of us rallied in Washington DC. The amount of passion and conviction we shared as a group gave me hope. Now it’s time to take this to the next level and convince everyone else why mass surveillance is so dangerous to our democracy! I believe that working together we can create a world without mass surveillance. So let’s do it!
Install now — http://flagger.io
Back when I first started writing my so-called NSA “conspiracy theories,” many of my peers thought I was obsessed and crazy. The idea that the government spies on everything we do, it was just too unbelievable. Right? WRONG! Well who’s the obsessed and crazy one now? Not me! HA.
Our government is broken. Therefore, I have created an app that will spam the NSA with red flags (words like bomb, jihad, and death to America)! Install Flagger and give the NSA the middle finger they deserve. Give corrupt Senator Dianne Feinstein and her cartel of ex-military goon contractors something to cry about as they abuse alcohol and prescription narcotics in a vain and pathetic attempt to forget the truth that they will go down in history as traitors to this country and its constitution.
After a closed Senate briefing today, Senate Intelligence Committee leader Dianne Feinstein talked up the vast system of checks and balances protecting Americans from unwarranted dragnet surveillance by the NSA. She said (emphasis added):
To search the database, you have to have reasonable, articulable cause to believe that that individual is connected to a terrorist group. Then you can query the numbers. There is no content. You have the name, and the number called, whether it’s one number or two numbers. That’s all you have… if you want to collect content, then you get a court order.
What does she mean, “collect content”? This doesn’t appear to be a normal, conversational usage of the word “collect”—a word that Director of National Intelligence James Clapper has a special definition for. He’s been accused of lying in a recent Senate hearing when he denied that the NSA collects data on American citizens. But no, he says, we’re mistaken—see, we just have differing opinions on what the word “collect” means:
To me, collection of U.S. persons’ data would mean taking the book off the shelf and opening it up and reading it.
Let’s assume that Intelligence and Congress are on the same page with this unusual definition (and also assume we can just decide a word means something that it doesn’t). Then, Feinstein’s statement takes on new meaning.
Take the bookshelf metaphor. In order to collect data (a book), we take the book off the shelf, open it up, and read it. But wouldn’t that imply that the book was on the shelf to begin with? Going by Feinstein’s statement, it would seem this way. She’s not talking about getting permission to perform wiretaps on terrorism suspects (something the FBI can already do without assistance from secret courts or the NSA). She’s talking about looking into an existing database for “content,” a database that must store a whole hell of a lot more than just metadata.
But what, specifically, is she talking about? Here’s what Senator Bill Nelson (D-FL) said after the same hearing:
Only when there is probable cause, given from a court order by a federal judge, can they go into the content of phone calls and emails…
Ah, that explains it. So they’re storing the actual content of phone calls and emails in some NSA database somewhere. No big deal, and rest assured, they won’t look at it unless they really don’t like you. I guess that’s what Representative Loretta Sanchez meant when she said that Snowden’s leaks were just the “tip of the iceberg.”
This shouldn’t come as a shock, but look at it for what it is: to date, the government has only acknowledged that they receive (not “collect”) telephone records on millions of American citizens. They have not acknowledged that they also get the content from those phone calls. They’ve noted that the specific FISC order that Snowden leaked does not apply to content, but they’ve stopped short of denying that similar court orders exist that would apply to content. And really, they wish we’d stop asking them about it because it’s classified.
Need more evidence? In a recent interview about the Boston Marathon investigation, former FBI counterterrorism agent Tim Clemente shocked CNN’s Erin Burnett when he nonchalantly revealed that the government could listen in on past phone conversations between suspect Tsarnaev and his wife, or indeed any Americans. When CNN dragged him back in the next day for follow-up questioning, he stuck to his guns, adding that “all digital communications” are recorded and stored, and that “no digital communication is secure.”
It’s kind of ironic. I spent the last two weeks compiling what amounted to a conspiracy theory about how the NSA is spying on Americans. Right before I published, the theory was proven true when the news broke that they really are spying on Americans. By the time I published my story, it was “old news,” irrelevant, drowned out in the noise of what has been the biggest outrage I’ve witnessed in my history of following Internet outrages.
But I’m not disappointed—I’m vindicated. More importantly, I’m glad that people are finally talking about this issue.
One of my key takeaways from the experience was my conversation with Sharon Bradford Franklin, Senior Counsel at The Constitution Project. She offered the most valuable insight I’ve seen: a moderate stance in a debate dominated by extremism. She thinks that surveillance tools can be valuable if they’re used in a way that respects people’s privacy and civil liberties. “And they can do both,” she says. “These are not inconsistent goals.”
I agree. Too often “national security vs. civil liberties” is posed as a false dichotomy, and this line of thinking is potentially destructive for both sides.
I think most reasonable people would agree that wiretaps are okay if they’re authorized by an impartial judge as part of an actual criminal investigation, if they don’t undermine the security of everyone else’s communications, and if they’re targeted at specific suspects rather than applied indiscriminately.
Unfortunately, the government has failed on all three counts, and all very recently. The FBI went judge-shopping to get a surveillance warrant on a journalist it never actually planned to charge with a crime. The FBI wants to make the Internet less secure by installing backdoors in communications software. Oh, and the NSA indiscriminately wiretapped the whole Internet.
The government has abused our trust, all in the name of “national security.” Worst of all, there’s no evidence that any of it makes us safer. If the NSA really had their act together, they would have prevented the Boston Marathon bombing, but alas.
This is a serious problem. Clearly, we need national security. But if we can’t trust the government to protect us without violating our civil rights, we have a moral obligation to hold their authority in check. As good old Ben Franklin put it:
Those who would give up essential liberty to purchase a little temporary safety deserve neither liberty nor safety.
But what’s the solution? In our highly polarized political environment, a severe problem is often met with severe overreaction (remember the PATRIOT Act?). Careful and rational discussion of the problem flies out the window—Americans demand quick action, and get talking points and political posturing instead.
Divisive discourse threatens the integrity of our political process. And this is as much a threat to our national security as any terrorist attack.
The truth is in the middle. We need the government to protect us, and we also need them to respect our civil rights. I agree with Ms. Franklin. This can be done—we just need to take a step back, take a deep breath, and take a careful look at the problems we face. We need to talk to each other.
As far as actual ideas? Here are a few:
What do you think?
Imagine a perfectly ordinary morning. You’re staring at your computer, cup of coffee close at hand, waiting for the cobwebs to clear. Perhaps you’re Googling reviews for the latest Fast and Furious flick. Nothing catches your eye.
Bored, you remember to check Facebook. When all else fails, you can check Facebook. Immediately, as the site comes up, your eyes flick to a little red badge against the familiar blue toolbar. Like a light bulb going off in your head, you experience a slight dopamine rush—you have a new private message!
It’s an old friend from elementary school. She has just opened a coffee shop. "La Bomba Coffee: It’s the bomb!" She wants to know if you could help her publicize it by “liking” the shop’s Facebook page.
The dopamine rush is gone, drowned out once again by the drudgery of a dull morning and a seemingly endless wait for the caffeine to kick in. Dutifully, you go to your friend’s page and click “like.”
Mundane interactions like these are the lifeblood of the Internet companies upon which we increasingly rely. We generate an enormous amount of data about ourselves simply by searching, browsing, clicking “like.” It is through these interactions that companies assemble a profile on each of us—our interests, our relationships, our desires. And in turn, they monetize this information, using it to target advertisements, or keep us engaged in their products (or both).
Most of this information we gladly hand over to companies for the convenience of things like social networks, free email, discussion groups, and cloud storage.
And then there are the Internet advertising networks that track people’s searching and browsing habits as they surf the web. For many ad networks, no information is off-limits: location, health concerns, sexual orientation—all of it is fair game, and all of it is used to show us targeted advertisements for things we’re more likely to buy.
Summed up, all of this information creates a digital reflection of who we are, a reflection of nearly all aspects of our social, professional and private lives. And all of it is beyond our direct control. When we entrust our data to Internet companies, we have to trust them to safeguard it and keep it private.
But the reality is that they can’t keep it private.
The Pentagon called it Total Information Awareness (TIA). It was a post-9/11 plan, run out of DARPA’s Information Awareness Office, to detect terrorist activity by engaging in an unprecedented level of mass surveillance. By aggregating and analyzing huge amounts of data from government and commercial sources—phone records, credit card transactions, travel records, social network data, text messages, and practically any other digital communication—the program’s goal was to look for telltale patterns that could indicate terrorist activity.
Their plan was not very popular. In 2003, after a collective uproar over privacy concerns from citizens, civil rights advocacy organizations, academics and government officials, the plan was renamed Terrorism Information Awareness. This failed to placate critics, and the plan was officially scrapped by Congress shortly afterwards.
But unofficially, the work continued.
In the heart of San Francisco’s SoMa district, among well-appointed high rise office buildings, luxury condos, dimly-lit convenience stores, and cheap ethnic restaurants stands a monolithic concrete structure. Against the surrounding tension between grandiose prosperity and urban decay, the building asserts an almost arrogant contrast—for it is something different entirely.
“I.D.’S MUST BE WORN AT ALL TIMES AND BE COMPLETELY VISIBLE,” warns a sign within the glass of the building’s small entrance lobby.
The building has no other windows. This is a bleak place where it is never truly day, but never night. A lattice of thick concrete slabs extends monotonously to the top of its seventeen stories, casting an imposing, fortress-like presence.
This is 611 Folsom Street, AT&T’s regional Internet Exchange Point (IXP) facility. And it is here where, in 2003, technician Mark Klein discovered a secret domestic surveillance program run by the National Security Agency (NSA).
For some background: IXPs are key choke points in the Internet’s physical infrastructure, supplying connectivity between Internet service companies. If you channel former Senator Ted Stevens’ infamous statement that the Internet is “a series of tubes,” then AT&T’s Folsom Street facility is Northern California’s mother-pipe, a key junction through which nearly all of the region’s Internet data flows.
Klein uncovered evidence that within “Secure Room” 641A, the NSA was using advanced surveillance equipment capable of intercepting and capturing all data as it passed through the facility. Klein later learned from other employees that similar activities were taking place at other IXPs across the country.
“The NSA is getting everything. These are major pipes that carry not just AT&T’s customers, but everybody’s,” Klein said in a 2007 Washington Post interview.
Effectively, this meant that the NSA could have had an unrestricted wiretap on virtually all digital communication within the US. Klein went public with his discovery and was a key witness in a class action lawsuit spearheaded by the Electronic Frontier Foundation (EFF). The lawsuit alleged that AT&T and the NSA were colluding to perform illegal surveillance on American citizens. It ended abruptly in 2008, when President Bush signed the FISA Amendments Act, a law granting AT&T retroactive immunity for any involvement. After all, the surveillance program was in the interest of national security.
In his new cybersecurity book titled Black Code, Dr. Ronald Deibert of Citizen Lab notes that the NSA is formally prohibited from monitoring communications between American citizens, but that their involvement with AT&T strongly suggests that they were ignoring this prohibition.
And, just yesterday, his suspicions were confirmed when The Guardian published a leaked court order from the secretive Foreign Intelligence Surveillance Court (FISC).
According to the top secret document, FISC has authorized the NSA to harvest the phone records of millions of US citizens from Verizon. On an “ongoing, daily basis,” Verizon must produce records of phone calls “wholly within the United States, including local telephone calls.” The records must include “metadata” about the calls including telephone numbers, time and duration of calls, and possibly even location data.
“Requesting metadata ‘including location info’ on all calls by US citizens is putting a GPS tracker on every American,” tweets Dr. Matthew D. Green, Assistant Research Professor at Johns Hopkins University’s Computer Science department.
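Dr. Green’s point is easy to demonstrate. The sketch below uses invented call records (every number, time, and tower label is hypothetical) to show how much a single day of metadata reveals without touching any call content:

```python
# Invented call-detail records (every number, time, and tower label below
# is hypothetical) showing what one day of metadata alone reveals:
# who you called, when, for how long, and roughly where you were.
from datetime import datetime

# (caller, callee, start time, duration in seconds, serving cell tower)
records = [
    ("555-0100", "555-0199", datetime(2013, 6, 5, 8, 55), 40, "tower near home"),
    ("555-0100", "555-0142", datetime(2013, 6, 5, 12, 10), 300, "tower near office"),
    ("555-0100", "555-0187", datetime(2013, 6, 5, 21, 30), 900, "tower near clinic"),
]

# No call content needed: the records alone sketch a day's movements
# and social graph for the subscriber at 555-0100.
for caller, callee, start, dur, tower in records:
    print(f"{start:%H:%M}  {caller} -> {callee}  ({dur}s)  {tower}")
```

Strung together over months, records like these reconstruct routines, relationships, and sensitive visits—which is precisely the sense in which metadata acts as a tracker.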
“This is called protecting America,” says Senator Dianne Feinstein, who explains that the court order is just a renewal of a program that has been in place since 2006.
Since most FISC orders are highly classified, it’s unclear whether similar orders exist for other US telecommunications providers, or to what extent the NSA has been authorized to use other forms of surveillance on US citizens. But former NSA official William Binney thinks the leaked court order is just one part of a bigger picture.
“If Verizon got one, so did everybody else,” Binney told Democracy Now.
Binney has been an outspoken critic of the Agency since quitting in 2001. In a recent RT interview, he indicated his belief that the government is not only continuing its warrantless surveillance program, but stepping it up to a whole new level.
Gathering all of this data would not come without challenges. As more people across the country and around the globe come online, the sheer volume of data flowing through the Internet is increasing exponentially.
In a 2012 Businessweek op-ed, IBM’s Dave Turek estimated that from the beginning of recorded history to 2003, humans generated roughly 5 exabytes—that’s 5 billion gigabytes—of information. By 2011, we were generating that much data every two days, and in 2013, he estimated, we will generate 5 exabytes every 10 minutes. That’s over 250 trillion gigabytes per year.
To put that in perspective, that’s the equivalent of about 7.8 trillion Apple iPods—enough, arranged lengthwise, to extend to the moon and back. To be able to record and store a dataset of this magnitude would require a massive engineering effort. And that is exactly what the NSA has set out to do.
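Turek’s figure is easy to sanity-check with a quick back-of-the-envelope calculation, using the article’s decimal convention that 1 exabyte = 1 billion gigabytes:

```python
# Sanity check of the figures above: 5 exabytes every 10 minutes,
# extrapolated over a year (using the article's convention that
# 1 exabyte = 1 billion gigabytes).
EB_PER_10_MIN = 5
INTERVALS_PER_YEAR = 6 * 24 * 365          # 10-minute intervals in a year

eb_per_year = EB_PER_10_MIN * INTERVALS_PER_YEAR
gb_per_year = eb_per_year * 1_000_000_000

print(f"{eb_per_year:,} EB/year ≈ {gb_per_year / 1e12:.0f} trillion GB/year")
# → 262,800 EB/year ≈ 263 trillion GB/year
```

The extrapolation lands at roughly 263 trillion gigabytes, consistent with the “over 250 trillion gigabytes per year” figure quoted above.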
They call it the Utah Data Center. Nestled in a valley outside of Salt Lake City, the facility’s nondescript name belies the unprecedented scope of its mission and its multi-billion-dollar budget.
Scheduled to be operational in fall of 2013, the Utah Data Center houses 100,000 square feet of server space on its heavily-fortified campus. With a power draw of 65 megawatts, the facility will consume more energy than a small city. And it is here that the NSA is building a computer network capable of storing quadrillions of gigabytes of data, according to estimates. That’s enough space to store all domestic digital communication for years to come.
While the full scope of the Utah Data Center’s mission is classified, the NSA insists that it will not be used to illegally eavesdrop on US citizens.
“Many unfounded allegations have been made about the planned activities of the Utah Data Center,” the Agency said in a press statement, adding, “one of the biggest misconceptions about NSA is that we are unlawfully listening in on, or reading emails of, U.S. citizens. This is simply not the case.”
But someone in the Federal Government is.
In the days and weeks following the Boston Marathon bombing, the citizens and government of the US scrambled for answers. Confused and enraged, the FBI cast a wide net, investigating virtually all of the friends and family of the Tsarnaev brothers for possible involvement.
One of their key “persons of interest” was Katherine Russell, the 24-year-old American widow of deceased suspect Tamerlan Tsarnaev.
In an interview about the investigation, former FBI counterterrorism agent Tim Clemente shocked CNN’s Erin Burnett when he nonchalantly revealed that the government could listen in on past phone conversations between Tsarnaev and Russell, or indeed any Americans.
Clemente, who seemed unfazed by the implications of what he was saying, was dragged back in the next day for follow-up questioning. When pressed for more details, he sighed, closed his eyes, and flippantly reiterated his statement, adding that “all digital communications” are recorded and stored, and that “no digital communication is secure.”
More than likely, the truth behind the NSA’s statement that they perform no “unlawful” monitoring boils down to what, exactly, is lawful.
In a 2012 US Senate report, Senators Ron Wyden (D-OR) and Mark Udall (D-CO) expressed concern about a loophole in the FISA Amendments Act, the same bill that granted AT&T retroactive immunity for its alleged involvement in the NSA’s wiretapping program. Under what some have called a “secret interpretation,” the law could be used to circumvent traditional warrant protections, allowing US citizens to be monitored with no court oversight.
To date, the Bush and Obama administrations have vigorously defended the law, and many of the legal challenges against it have been dismissed on technicalities. The Supreme Court has not yet ruled on its constitutionality.
But, in addition to defending their existing practices, the Federal Government has recently been pushing for even more surveillance authority.
One example is the FBI’s proposed expansion to the Communications Assistance for Law Enforcement Act (CALEA), a federal wiretapping law. Under the new regulations, online service providers would be required to comply with government wiretapping orders, allowing law enforcement officials to monitor user communications. Companies that do not, or cannot, comply with wiretap orders would be fined upwards of $25,000 per day. After 90 days, unpaid fines would double daily.
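To get a feel for how punitive the doubling provision is, here’s a sketch of the accrual (my own model of the schedule as described above; the draft text may compute the doubling differently):

```python
# Back-of-the-envelope model of the proposed fine schedule: $25,000 per
# day of non-compliance, with the unpaid balance doubling daily after
# 90 days. (The exact accrual rules are my assumption; the draft text
# may compute the doubling differently.)
DAILY_FINE = 25_000
GRACE_DAYS = 90

def total_fine(days: int) -> int:
    """Total unpaid fine after `days` days of non-compliance."""
    balance = 0
    for day in range(1, days + 1):
        if day > GRACE_DAYS:
            balance *= 2          # unpaid fines double daily after day 90
        balance += DAILY_FINE
    return balance

print(f"day 90:  ${total_fine(90):,}")    # day 90:  $2,250,000
print(f"day 100: ${total_fine(100):,}")   # already past $2 billion
```

Under this model the doubling turns a $2.25 million liability into billions within ten days, which is presumably the point: it is less a fine than a compliance mandate no company could afford to resist.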
I ask Nadim Kobeissi for his opinion on the FBI’s plan. Nadim is the Special Advisor for the Open Internet Tools Project and developer of Cryptocat, a popular encrypted chat app.
“I think CALEA is a measure that is meant to intimidate proponents of Internet privacy into complying with law enforcement no matter the reason or cost,” he explains. “I think the FBI would have a better, more productive time seeking to learn from technologists, rather than attempt to prosecute their efforts.”
The FBI’s plan has drawn further criticism from the Center for Democracy and Technology (CDT), a leading group of security researchers and civil rights activists. In a statement, they point out that many software products allow the exchange of fully encrypted communication between users (Cryptocat is one such app). This makes it impossible to monitor user communication centrally. In order to comply with wiretapping orders, developers would be forced to install monitoring capabilities, or “backdoors,” in the software that runs on their users’ computers or smartphones.
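The CDT’s point can be illustrated with a toy Diffie-Hellman key exchange (a deliberately tiny sketch, not a real protocol): the server relaying the messages sees only the public values, never the shared key, so there is nothing useful for it to hand over.

```python
# Toy Diffie-Hellman key exchange illustrating why end-to-end encryption
# defeats central monitoring: the relaying server sees only the public
# values A and B, never the shared key. This is a deliberately tiny
# demonstration prime -- real protocols use 2048-bit groups (e.g. RFC 3526).
import secrets

P = 4294967291   # a small prime (2**32 - 5), for demonstration only
G = 5            # generator

# Each user keeps a private exponent and publishes only G**x mod P.
a = secrets.randbelow(P - 2) + 1   # Alice's secret
b = secrets.randbelow(P - 2) + 1   # Bob's secret
A = pow(G, a, P)                   # Alice -> server -> Bob
B = pow(G, b, P)                   # Bob -> server -> Alice

# Both ends derive the same key; the server, holding only A and B,
# cannot feasibly recover it (the discrete-logarithm problem).
key_alice = pow(B, a, P)
key_bob   = pow(A, b, P)
assert key_alice == key_bob
```

Because the key is never centrally known, the only way to comply with a wiretap order is to change the software on the endpoints—exactly the backdoor the CDT warns about.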
Putting surveillance backdoors in communication software would create easy targets for hackers, warns the CDT, lowering “the already low barriers to successful cybersecurity attacks.” And tech-savvy individuals, they point out, will simply switch to unmonitorable software made outside the US.
“Ironically, then, potential terrorists may easily be able to use stronger security than the US government, which is less likely to install non-US [software].”
Nadim is not intimidated by the FBI’s plan. “Cryptocat will never include any backdoors,” he states, bluntly.
Another, more ambitious, push from the federal intelligence and law enforcement community has been the Cyber Intelligence Sharing and Protection Act (CISPA).
This recently-defeated legislation would have allowed Internet companies and government contractors to proactively share “cyber threat information” with each other and government agencies for the stated purpose of protecting computer networks against hackers and other cyber-attacks. CISPA would have overridden all other federal and state laws, granting companies legal immunity for any authorized sharing of cyber threat information.
The bill enraged numerous Internet freedom and civil rights advocacy groups, including the EFF, the American Civil Liberties Union (ACLU), Free Press, and the hacktivist group Anonymous. A common concern was that the bill was overly broad, potentially allowing companies to share private user data with government agencies.
I asked an individual associated with Anonymous to weigh in on CISPA.
Perhaps I should explain. Anonymous is a loosely-associated collective of hackers, political activists and mischief-makers. Anons use pseudonyms and encryption software to obscure their identities online, and can often be seen at political rallies wearing iconic Guy Fawkes masks.
Anonymous has no centralized power structure: each anon has as much authority as the next, so the group can hardly be said to have a singular mission. Nevertheless, anons generally rally against oppressive actions from corporations and governments worldwide, usually through political discourse and activism. But, when sufficiently angry, tactics include hacking, distributed denial of service (DDoS) attacks and extortion.
The individual I interview is known to me only by the Twitter handle @MindDetonat0r, where he or she has been an outspoken critic of the US government’s surveillance programs. We speak via encrypted email.
Private government contractors … were hired to illegally hack and target labor organizers and dissident journalists like Glenn Greenwald; the hypocritical justice department didn’t do anything about this, rule of law in the United State is fiction. CISPA will give immunity to companies and government contractors … to target political dissidents and proletarian organizers. In other words, CISPA will legalize what they are already doing.
MindDetonat0r further believes that CISPA would make it more difficult for people to stay anonymous online, which can actually be a matter of life and death:
There are crazy people out there who kill gays, abortion doctors, etc. If contractors … or companies are immunized under CISPA and leak private information about targeted people they very well could be putting the public in danger.
I also have the opportunity to speak by telephone with Sharon Bradford Franklin, Senior Counsel at The Constitution Project (TCP), who offers a more moderate view.
“We do want the government to be able to protect us and be able to use these surveillance tools. At the same time we want them to respect peoples’ privacy and civil liberties. And they can do both. … These are not inconsistent goals,” she explains with the well-spoken terseness of an experienced attorney used to dealing with the press.
But she is not satisfied with the version of CISPA that passed the House, saying the “safeguards in place are not sufficient.”
One of her chief concerns is that the House rejected an amendment that would have required companies to make reasonable efforts to strip out information unrelated to cyber threats, such as private user data.
Another is that the bill could have allowed companies to share information directly with the NSA. Ms. Franklin explains that an amendment to address this was approved at the last minute, but its wording was so vague that it was unclear whether it would have actually fixed the problem.
Nor are these concerns moot now that CISPA is dead: the US Senate will soon introduce a competing cybersecurity bill.
“We don’t know what it will look like and how much we’ll have to fight that battle over again on the fight for privacy rights and civil liberties,” says Ms. Franklin.
I ask Ms. Franklin how she would explain the problem to an average person, who might not know or care about government surveillance.
“It’s easiest to envision in the video surveillance context where so many jurisdictions are now blanketed with cameras,” she explains. “The government has said that if you’re in a public place you have no reasonable expectation of privacy. And some have said, ‘Well if you’re not doing anything wrong, then you have nothing to worry about.’”
But there are plenty of things people do in their day-to-day lives, like going to AA meetings, fertility clinics, etc. that are all perfectly legal, but “nobody’s business,” she explains. Without adequate safeguards, there’s nothing to prevent government workers from going “back through the footage to compile a digital dossier of someone.”
DARPA imagined a world where municipal surveillance networks, like the one Sharon Bradford Franklin described, would feed into its Total Information Awareness program. Using facial and gait recognition software, analysts could automatically and accurately identify individuals from great distances.
The NSA imagined a world where they could spy on US citizens with impunity. And, to some extent, their dream has become a reality. The Foreign Intelligence Surveillance Court continues to rubber-stamp their domestic surveillance activities in secret. And soon the NSA will have a data center with the capacity to store all domestic Internet communications indefinitely.
Today, the US government is pushing for even more surveillance authority. Bills like CISPA would allow it to secretly harvest private user data from Internet companies, granting those companies legal immunity for breaking privacy agreements with their customers.
All of this is done in the name of “national security,” but is it worth the cost?
Two days ago, Frank La Rue, the United Nations Special Rapporteur on Freedom of Expression and Opinion, released a report about government surveillance and freedom of expression. His report formalized concerns that Internet privacy activists have had for years:
The right to privacy is often understood as an essential requirement for the realization of the right to freedom of expression. Undue interference with individuals’ privacy can both directly and indirectly limit the free development and exchange of ideas. … An infringement upon one right can be both the cause and consequence of an infringement upon the other.
In Black Code, Dr. Ronald Deibert notes that, due to Internet surveillance by the Chinese government, Chinese citizens engage in self-censorship—watching carefully what they say and do online. The constant feeling of being watched, and high-profile arrests of political dissidents have led to a chilling effect on the free exchange we normally associate with the Internet.
While the situation is less extreme here in the US, the Federal Government’s existing surveillance practices have gone too far in many cases. The government has targeted innocent US citizens and journalists, threatening our constitutional rights to privacy, free speech, and a free press. But they’re not apologizing. They’re asking for more power, and they’re asking us to trust them blindly with it.
Imagine a world where you’re just minding your own business, drinking coffee in the morning, and a long-forgotten friend sends you a Facebook message. Your friend happens to be the daughter of a non-US citizen.
"La Bomba Coffee: It’s the bomb!"
Two red flags. At some far-off data center, a server clicks faintly as it logs your interaction. A command is sent out automatically. Suddenly, a whole network of computers springs into action, searching through hundreds of thousands of records spread across years of storage. Little by little, matching data is located: a phone call here, a subway trip there, a Facebook profile, emails, credit card transactions. A digital reconstruction of a human life is pieced together from an enormous set of disjointed information, all of this to answer a single question: who are you?
The report is ready minutes later.
A government worker snickers as he looks over some embarrassing photos in your email. None of your Facebook messages are particularly interesting. You’ve paid your taxes on time. Boring.
"False flag," he mutters, closing the file.
Do you feel safer?
Now that Congress has quietly backed away from CISPA and expansion of the CFAA, the Federal Government has wasted no time in introducing new half-baked Internet regulations. The latest comes courtesy of the FBI. Under their proposed expansion of CALEA, a federal wiretapping law, online service providers would be required to build wiretapping capabilities into their software, allowing law enforcement to secretly monitor user communications.
According to the FBI, these expanded monitoring capabilities are required because child pornographers and terrorists are increasingly “going dark”: instead of calling each other on their wiretapped iPhones, they’re sending encrypted messages over the Internet, which the FBI can’t read. By expanding wiretapping requirements to include online service providers, the FBI reasons, these tech-savvy villains can be brought to justice.
On the surface, this looks like a reasonable proposition. But, as is the case with all technical regulations, the devil is in the details.
For a typical online service provider like Gmail, complying with a wiretapping order would be little trouble. Because a user’s messages are stored centrally on Google’s servers, Google could simply give the FBI access to those servers and be done with it.
But some online service providers let users exchange end-to-end encrypted messages. Although the provider may run central servers that relay these messages, the messages themselves are unreadable to the provider: only the intended recipients can decrypt them. For these providers, the only way to comply with a government wiretapping mandate is to bundle secret monitoring capabilities, or “backdoors,” into the actual apps that run on users’ computers or smartphones.
While this might sound like a crazy conspiracy theory, it is the primary concern of a leading group of computer security researchers, including cryptography legends Bruce Schneier and Phil Zimmermann. Last Friday, the Center for Democracy and Technology released a report condemning the FBI’s plan, warning that requiring software providers to install backdoors on people’s devices would “lower the already low barriers to successful cybersecurity attacks” by giving hackers an easy way to attack apps while remaining undetected.
But this could be exactly what the FBI wants. The FBI’s plan effectively gives developers of encrypted software a choice: build backdoors into their apps, rack up penalties while fighting the mandate, or stop offering encryption altogether.
By making the first two options morally reprehensible and unrealistically burdensome, the FBI might hope that companies will simply stop offering encrypted software to their users, making it much easier to centrally wiretap people’s communications.
Certainly, investors will be less willing to fund startups that are required to install backdoors on users’ devices. If such a backdoor is exploited by hackers, where does the liability fall? It is arguably negligent to include a feature that is unquestionably adverse to every user, regardless of whether the service provider was required by law to do so. Unless the FBI also gives service providers immunity for any damages related to the backdoors, it’s quite likely these companies will be on the losing end of lawsuits when they inevitably get hacked.
More important than concerns about stifling Silicon Valley innovation, though, are the questions the FBI’s proposed regulation raises about the government’s right to eavesdrop on its citizens. Since the 1990s, law-abiding people have taken for granted their ability to exchange encrypted digital communication with complete (or at least pretty good) privacy. Are child pornographers and terrorists a big enough threat to justify taking this away?
I hate to end with a physical analogy, but this is a great way to explain the issue to someone less tech-savvy. Imagine you are a manufacturer of the locks used in bank and casino vaults. You take great pride in your craft, and your lock is secure against all but the most extreme attempts to break it. Now, suppose one day the FBI comes and tells you, “it’s fine and all that you built this vault, but we need you to install a second keyhole so we can open the vault and see if there are terrorists hiding inside.” The key they want is no more sophisticated than a house key, and the second lock could be picked by the most pedestrian of criminals. Is this really a good idea?
Today, I watched C-SPAN and my Twitter feed with increasing horror as the House of Representatives rubber-stamped CISPA, a bill that I fear will destroy the Fourth Amendment of our constitution as it relates to online activity.
CISPA, aka. the Cyber Intelligence Sharing and Protection Act, gives online businesses immunity for sharing private user data with the government, for the stated purpose of protecting our country from so-called “cyber attacks.” Critics believe this is effectively an end-run around the Fourth Amendment, by allowing the government to secretly perform warrantless mass surveillance on citizens via data collected by private companies.
I tend to follow like-minded Internet freedom proponents on Twitter, so I was not surprised when my feed exploded with enraged tweets as the House vote went down. Many of us do not trust that the government has our best interests in mind, especially after Congress tried to ram through freedom-killing bills like SOPA, and more recently the CFAA. I logged off my computer disappointed, but feeling like I did my best to engage in the democratic process (though I wonder just how many of my angry Twitter cohorts actually called their representatives).
A few hours later, when I returned to the “cyber” world, a new grave injustice was exploding on Twitter: the Senate had rejected Obama’s proposal to require expanded background checks for gun purchases. The same people who were furious earlier today about CISPA were now up in arms over the failed gun control legislation. As I watched it all unfold, a nagging feeling crept into the back of my mind that something was wrong, but I couldn’t put my finger on it.
Then it struck me: we’re all a bunch of hypocrites.
How is it that we can be enraged over a privacy bill that erodes the Fourth Amendment, but completely fine with a gun control bill that erodes the Second Amendment? Don’t tell me it’s because gun control is about safety for us or our children. The same argument easily applies to CISPA, which is supposedly about national security. You know, from terrorists, or the Chinese, or something (to the best of my knowledge no one in Congress has actually articulated what a “cyber threat” would look like, but oh well). Do we really think that some civil liberties are better than others?
On the other hand, it’s truly ironic that the same Republican Party that stood against gun control would approve CISPA, in a vote that fell clearly along party lines. For a party that considers itself the vanguard of personal liberties in our country, they sure like to pick and choose which liberties they support.
One of the most dangerous mentalities we face as a nation is that we clearly care more about our politics than our constitution. Whenever the constitution stands against our political goals, we have no problem interpreting it away, or supporting legislation that creates some bureaucratic backdoor solution to side-step it. We saw this in the wake of the September 11 attacks when the government used the people’s fear to ram through two unjustified wars, the TSA, the PATRIOT Act and a fervor about “terrorism” (aka. the new Red Scare) which continues to this day.
I watched the initial news coverage of the Sandy Hook shootings unfold with tears in my eyes, but I view our recent push for gun control legislation as a knee-jerk reaction to a societal problem based on collective fear. Fear is our weakness. Fear is what people smarter than us exploit to keep us in line, with Band-Aid solutions and the PATRIOT Act and divisiveness. Fear is not conducive to the rational discourse we need to address social and economic problems in an effective manner.
Background checks wouldn’t have prevented the Sandy Hook or Columbine massacres. People gave way too much credence to this legislation as if it were some sort of solution to the problem of gun violence in our country, but no one wants to talk about the fact that we are culturally ill as a nation. Recently, there have been too many instances of Americans taking the lives of others. This is not a problem we can legislate away: one person’s decision to kill another involves a long chain of societal and personal failures; possession of a gun is only a small component of this, and the law is the least of it.
Our constitution guarantees our right to privacy. It guarantees our right to keep weapons, a final component of the checks and balances which keep our government in line. Our constitution not only protects, but defines our identity as a nation. And yet here we are, willing to pick and choose which parts of it seem convenient to keep around, blindly reacting to whatever latest threat to our safety, our children, or our money gets dangled in front of us. It’s easy to get behind gun control if you’re willing to employ the same mentality that allowed us to give up so much of our civil liberties and privacy in the wake of the September 11 attacks. This mentality of reacting without thinking threatens the very foundation of what it means to be an American, and I fear this is a worse threat than any terrorist attack.
The CFAA is a horrible law. It’s way too eager to turn basically any minor Internet offense into a federal felony. This is the law the DOJ used to viciously prosecute activist Aaron Swartz (for what many have called a harmless crime) until the day he committed suicide. Now, instead of reforming the law to make it better, the House is trying to ram through a new version that broadly expands both the definition of computer abuse and the penalties. This is unacceptable. I urge anyone reading this to contact their representative and senators to tell them that this shit will not fly. I’ve attached my letter below; feel free to use it as a template.
Dear Representative Honda,
I understand that the House will soon be considering reforms to the Computer Fraud and Abuse Act. As a US citizen and CTO of [unspecified tech company], I am very concerned about the scope of these reforms and potential ramifications to free speech in our country.
This bill seeks to drastically expand the definition of what constitutes computer fraud and abuse and to quadruple prison terms for violators, from 5 years to 20 years. Specifically, the bill defines “exceeding authorized access” as accessing information for an “impermissible purpose.” This effectively criminalizes terms of service (ToS) violations, which would give the government sweeping powers to prosecute people for mundane and unintentional violations of private agreements.
For example, suppose my boyfriend takes a funny picture of my dog, and I post it to my Facebook profile. Since, technically, my boyfriend owns the copyright to that photo, I’m forbidden by Facebook’s terms of service to post it to my account. Under the new reforms to CFAA, I would be committing a felony.
Criminalizing harmless terms of service violations opens up the door to all sorts of judicial abuse, most seriously the persecution of free speech. Just recently, the Department of Justice was relentlessly prosecuting Internet activist Aaron Swartz for a relatively harmless violation of the existing CFAA. After he committed suicide, the DOJ told Congressional investigators that his prosecution was motivated by his political views on copyright. The DOJ successfully used the existing CFAA to silence an individual with an unpopular opinion, and now they want to drastically expand the scope and penalties of this law.
The proposed changes to CFAA will turn too many ordinary individuals into felons, the prosecution of whom will be at the whims of a government that already uses the law as a bludgeon to silence free speech and activism. I know we are better than this as a country, and I hope you will join me in opposing the new CFAA.
Check out the project on GitHub!
Hash functions like SHA-1 are basically fancier, more secure versions of checksum functions. In my last post on parsing ASCII-Armored OpenPGP data, we used a crc24 checksum function to verify our data was legit. That function doesn’t need to operate on the full set of data at any moment: within its loop, it uses one byte at a time to perform a bunch of bitwise operations on a running checksum. At the end, we’re left with a unique-ish sort of “fingerprint” for the data. SHA-1 is very similar, except it operates on blocks of 64 bytes at a time, and its fingerprints are far less likely to collide. The point is, our SHA-1 object shouldn’t need to hold 2MB of data in memory if it only ever cares about 64 bytes at a time. We should be able to stream data into it.
A quick Google search led me to a Stack Overflow thread where other people were wondering the same thing, which linked to a streaming SHA-1 solution hosted on a dude named Paj’s web site. I was glad to know a solution was possible, but I’m anal and like to write my own code, so I built my own! Why is mine better? It’s not. Paj’s is much faster (check out the benchmark demo), but mine isn’t the slowest I’ve seen, and it fits into my app better.
Let me know if you can help me optimize further. Otherwise, stay tuned for more on S2K Specifiers!