Why Silicon Valley's Programmers Must Learn To Play Devil's Advocate With Privacy

One of the benefits of living in Washington, DC is the front row seat it offers to the conversations happening each day around the world in the law enforcement, intelligence and military communities regarding how Silicon Valley’s latest creations can be repurposed into the service of government, from building the ultimate surveillance network to creating so-called “killer robots” to suppressing ideas and speech viewed as threatening. In some cases, the sales, legal and policy sides of companies participate in these discussions, occasionally promoting their companies’ products behind closed doors for purposes they publicly and vocally disavow. In other cases, governments employ vast armies of analysts whose job it is to uncover dual-purpose Valley innovations. In most cases, the engineering teams that are the lifeblood of the Valley’s innovation pipeline are largely unaware of how their creations are being repurposed for harm and expend little effort contemplating architectural designs to mitigate that misuse. What if programmers learned to play devil’s advocate with the privacy and civil liberties implications of their creations and emphasized surveillance-resistant designs?

The world of Washington is often imagined to be as culturally distant from Silicon Valley as it is geographically. Yet as the digital revolution has upended every industry, so too has it come for government. While much of the public attention to digital government has focused on citizen-facing initiatives like “e-government” services, the real revolution has been behind the scenes, as the law enforcement, intelligence and military communities of the world have learned how to harness the vast digital deluge that defines modern life.

It was government that pioneered the interest-based and behavioral data mining we associate today with Silicon Valley, and it is government that is often first in line to discover new ways of harnessing the digital deluge to surveil or suppress its population. For all the public outrage over the commercial surveillance that happens in the public eye, the governmental surveillance machine operating in the shadows is an Orwellian creation so expansive that even Hollywood would have difficulty conjuring something as frightening.

Those outside the Washington orbit can find it difficult to comprehend the degree to which Silicon Valley actively collaborates with the very defense agencies and initiatives it publicly rebukes. This divergence between public and private engagement can often be jarring to observe.

A company renowned for its unwavering and absolute enshrinement of civil liberties, whose CEO takes the stage at a major conference to tout its absolute ban on governmental use of its technology, may on the very same day have a delegation in Washington touting how its facial recognition technology can be used to turn civilian drones into lethal action platforms and offering to customize its system for that precise use case. In similar fashion, on the same day a company has a senior delegation presenting at a human rights conference on how its tools can be used to document human rights abuses in a particular country, another senior delegation from that same company may be meeting with the security services of that very country, selling that country the tools it needs to shield its activities from public scrutiny. Given the size and compartmentalization of large companies, neither delegation may even know about the other’s work.

Given the Valley’s long history of activism, openness and social good within its engineering ranks, compared with Washington’s tradition of secrecy and mission focus, programmers are not always fully aware of the intended purpose of some of the adjustments and innovations they are asked to create. Fully aware that a Silicon Valley engineering team might balk at creating a facial recognition system for a “killer robot,” those commissioning the work might instead task the team with creating a landmark recognition system for a drone-based augmented reality game in which, as the drone flies by famous tourist attractions, it displays imagery of how they looked in the past and autonomously navigates up to them. The company might even publish these results publicly and potentially release a product based on them, but in reality the entire process has been managed to build a dual-purpose technology that just happens to meet exactly the design requirements the government needs for a “killer robot” drone.

While less common than government co-opting, such development misdirection happens every day as companies large and small chase the riches of lucrative government contracts. Most of these efforts involve small tweaks that go little noticed but make products compliant with government requirements. Indeed, some of Silicon Valley’s marquee names can be found attached to many of today’s technology-driven defense projects as the world becomes increasingly data- and AI-driven.

In any given week there are myriad unclassified presentations across DC and in capitals around the world in which Silicon Valley’s biggest names lay out in explicit detail how their tools can be repurposed for law enforcement, intelligence and military use, often in privacy-harming applications that run directly contrary to their corporate mottos.

In an added twist, US research universities are increasingly copying this model, with Proposer Day events filled to the brim with academics seeking defense funding. Ironically, many of these same institutions have publicly condemned intelligence and military repurposing of their research, even while privately seeking to bolster such funding. In some cases, the university researchers appearing at Proposer Days have deep collaborations with Silicon Valley companies in the very areas in which they are proposing projects, creating unwitting conduits for Valley research to flow directly into the hands of government without those companies’ knowledge.

In many cases, companies’ governmental work is entirely unknown to their engineering and privacy staffs, who frequently have no inkling their company is doing a brisk business selling services to government agencies and initiatives the company publicly opposes. Not all corporate privacy staffs carefully scrutinize every new government RFP for potential overlap with their product lines, or monitor their sales teams’ calendars to flag meetings with government-linked entities for further review.

This Washingtonian perspective can offer unique context for understanding the stream of innovations emerging from Silicon Valley. Knowing that a particular platform’s policy and legal staff agreed earlier in the year to implement certain new counter-terrorism features, a mundane technical presentation from its researchers later that year suddenly takes on new meaning when it matches precisely the agreed-upon functionality and the company’s policy staff send a follow-up note drawing the attention of the earlier meeting’s participants to that presentation.

Of course, the reality is that the overwhelming majority of governmental repurposing of the Valley’s innovations comes without the knowledge or cooperation of those companies.

Governments throughout the world employ vast teams of analysts that carefully scrutinize every new innovation for potential intelligence utility. Observing which technologies circulate on unclassified message lists or are singled out in the myriad public talks by defense officials each day across Washington can offer significant clues to the inadvertent dual-purpose nature of many Valley creations.

Just four years after Facebook’s founding, its intelligence utility in constructing large-scale behavioral graphs was already prominent enough to receive mention at the DNI Open Source Conference. Today, despite bans by most platforms on surveillance use, yet another government proposal calls for mining and deep analytics of the major platforms.

The problem is that the Valley typically views security and privacy issues through the narrow lens of cybersecurity, protecting its products and systems from external compromise, without contemplating how they might be misused as-is.

One has to look no further than the warnings of European governments in the leadup to the 2016 US presidential election, which described rapidly evolving Russian digital influence campaigns, outlined the specific methods and tactics through which those campaigns exploited platforms’ legitimate features and cited strong signs the same tactics would be applied to interfere in the American vote.

Unfortunately, most companies saw electoral interference through the narrow lens of cybersecurity, hardening their systems against foreign compromise but doing little to address the scenario of a national-scale influence campaign that involved nothing more than ordinary user accounts.

How might companies mitigate this harm?

The first is to see security as more than cybersecurity.

A company that hardens its smart camera to prevent unauthorized logins can do little about a state adversary that physically intercepts that camera during shipping and solders additional circuitry into it. For every product that contains a microphone, camera or other kind of sensor, there is a government taking it apart to see how it might be repurposed into a monitoring device. For every piece of software or cloud service that relies upon traffic to a distinct domain, there is a government using that traffic to inventory the product’s users. Today such harm is typically viewed as outside a company’s scope to stop, rather than treated as a design challenge the company might at least partially mitigate through trusted systems design or physical changes such as sensors that can be physically unplugged by the user.

The second is to staff their security teams not just with cybersecurity professionals, but with former adversarial analysts drawn from non-cyber intelligence, law enforcement and military backgrounds and from governments across the world, representing diverse perspectives. This globalized perspective is particularly important. Such analysts can help companies think outside the cybersecurity box and view their products through the eyes of an analyst, a vastly different mentality from that found in most cyber teams.

The third is to think far outside the box. In certain sectors of government, there is a long tradition of convening panels of creative thinkers from outside the relevant field, like science fiction authors and Hollywood scriptwriters, and asking them to dream up myriad scenarios few within the government might ever have imagined. The intelligence community similarly has a long history of convening and seeding conferences and forums around topics of interest and bringing in non-traditional speakers from far outside the disciplines and topics in question.

The fourth is for engineers themselves to play devil’s advocate around how their technology can be repurposed for harm and how that harm might be mitigated, at least in part. All technology can be harnessed for ill, but through careful design that harm can be significantly reduced, if not eliminated altogether.

In my own time as a computer science undergraduate, there were no classes that taught students to think adversarially about how their creations might be repurposed for harm. Misuse was largely seen through the lens of cybersecurity, rather than through the inadvertent dual use of uncompromised systems performing exactly as expected. It was not until graduate school, in library and information science, that faculty deeply emphasized how information and tools can be repurposed for harm. Of course, librarians have long been on the forefront of civil liberties issues, from curating rights resources for the public to actively combating governmental efforts to harvest library records.

Even today, as computer science departments have added ethics courses, they have often emphasized ethical conduct by programmers themselves, with fewer spending time exploring how systems can be designed to frustrate governmental repurposing and the design considerations that can enable one application while preventing a nearly identical one.

In many ways, privacy is antithetical to the data-driven mindset promoted in many computer science departments. Students are taught to collect all the data they can at the highest resolution they can in order to leverage its insights. Even within the scope of privacy laws like GDPR, guidance typically revolves around “minimal minimization,” doing only as much as the law requires. The concept of purposeful minimization, in which every single data point is relentlessly scrutinized to see whether it can be removed or its resolution degraded, is not typically emphasized, but it can have enormous influence on the ways in which platforms can be repurposed.
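As a concrete illustration, below is a minimal sketch of what purposeful minimization might look like in practice, using an entirely hypothetical event record rather than any real product’s schema: every field is either dropped outright or degraded to the coarsest resolution the feature can tolerate.

```python
from datetime import datetime, timezone

# Hypothetical event record a product might log "by default" (illustrative only).
raw_event = {
    "user_id": "u-48121",
    "email": "alice@example.com",
    "ip_address": "203.0.113.47",
    "latitude": 40.748441,
    "longitude": -73.985664,
    "timestamp": "2019-06-04T14:23:51+00:00",
    "action": "photo_upload",
}

def minimize(event: dict) -> dict:
    """Scrutinize every field: drop what the feature does not need, degrade what remains."""
    coarse_time = datetime.fromisoformat(event["timestamp"]).astimezone(timezone.utc)
    return {
        # Keep only what the feature actually requires.
        "action": event["action"],
        # Coarsen meter-level coordinates to roughly city-scale resolution (~0.1 degree).
        "latitude": round(event["latitude"], 1),
        "longitude": round(event["longitude"], 1),
        # Truncate the timestamp to the hour; minutes and seconds are discarded.
        "timestamp": coarse_time.replace(minute=0, second=0, microsecond=0).isoformat(),
        # Identity fields (user_id, email, ip_address) never reach storage at all.
    }

print(minimize(raw_event))
```

The point of the exercise is not any particular field list, but the habit it builds: for each data point, ask what breaks if it is removed or degraded, and keep only what survives that question.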

Twitter’s geotagged tweets offer a powerful example of the ways technology companies can design their systems to offer services while mitigating their potential for outside abuse, including surveillance. While a data-minded programmer might design a geotagging system to always collect precise GPS coordinates, on the theory that such high-resolution data could enable a wealth of future analytics and offerings, Twitter instead followed a privacy-protecting approach in which it not only offered its users the option to share coarse city-level location information instead of their precise GPS coordinates, but carefully built its user interface to emphasize this option and help users make an informed decision. Rather than burying this switch deep in a cascade of menus, it put the toggle front and center, making it easy for users to adjust their location privacy with every tweet.

Such a design ensured that users could share their locations when they felt comfortable doing so, while frustrating government surveillance efforts for which GPS coordinates are invaluable. While a typical programmer would emphasize always-on GPS because it maximizes capability, Twitter’s privacy-protecting approach degrades capability in the service of privacy by limiting its services to precisely the information users wish to share at each moment.
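A minimal sketch of that kind of per-tweet, opt-in location design, using hypothetical names and data structures rather than Twitter’s actual API or data model, might look like the following: a coarse place label is the default unit of sharing, and precise coordinates are attached only when the user explicitly chooses them for a single post.

```python
from dataclasses import dataclass
from enum import Enum
from typing import Optional

class LocationChoice(Enum):
    OFF = "off"          # share nothing (the default)
    CITY = "city"        # share only a coarse, city-level place label
    PRECISE = "precise"  # share exact coordinates, opted into per post

@dataclass
class PostLocation:
    place_name: Optional[str] = None   # e.g. "Austin, TX"
    latitude: Optional[float] = None
    longitude: Optional[float] = None

def location_for_post(choice: LocationChoice, lat: float, lon: float,
                      city_label: str) -> PostLocation:
    """Attach only the location the user chose to share for this one post."""
    if choice is LocationChoice.PRECISE:
        return PostLocation(place_name=city_label, latitude=lat, longitude=lon)
    if choice is LocationChoice.CITY:
        # Exact coordinates are discarded before the record is ever stored.
        return PostLocation(place_name=city_label)
    return PostLocation()  # OFF: nothing about location leaves the device

# Device-side coordinates reach the server only when the user explicitly
# opts into precise sharing for this specific post.
print(location_for_post(LocationChoice.CITY, 30.2672, -97.7431, "Austin, TX"))
```

The design choice worth noting is that the coarse option sits on the default path, so precise data simply never exists server-side unless a user deliberately asks for it to.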

In fact, Twitter’s April 2015 change to how it handled geographic information in its JSON records caused great concern among government surveillance efforts that had previously been harnessing such information to map protesters and others, in direct violation of the company’s terms of use. In offering users the ability to share low-resolution locative information and placing them fully in control of the sharing process, Twitter sacrificed an incredible source of information, but in turn put the privacy of its users front and center. With that single design change, the company was able to thwart a thriving misuse of its data.

In the end, if they wish to deter governmental misuse of their platforms, Silicon Valley’s programmers must learn to think beyond cybersecurity and to play devil’s advocate with privacy.