Planning for privacy

A look at the proactive practices and perspectives that can help companies center privacy in product development.
Part of Issue 19: Planning (November 2021)

Two decades ago, only some 40 countries had data privacy laws on their books. Today, that number has grown to 145. This presents both a challenge and an opportunity for software developers. Baking privacy protections into existing products can feel like a never-ending compliance project. But when we address privacy from the start of development and prevent harm early on, we can create software that doesn’t require constant rewrites to keep up with changes in legislation. Better yet, we can build products that meet the needs and expectations of our users.

Today, many tech companies take a reactive approach to privacy. The mitigation of incidents usually falls to teams with backgrounds in information security and risk management, working closely with legal. In such situations, the priority is—understandably—to resolve issues as quickly as possible. But as any privacy professional will tell you, the root causes are often avoidable—if only the company had planned for them. 

To embed a proactive privacy mentality into an organization, Dr. Stephanie Perrin, a data protection expert on the advisory board of the nonprofit Electronic Privacy Information Center and an advisor to Palantir Technologies on privacy and civil liberties since 2019, suggests formalizing internal processes around potential product harms.

“If you have committees that regularly and routinely do privacy impact assessments, if you have a sign-off structure where people have to approve an analysis of a project, if you have assigned accountability, that’s when you see technology developed that is rights-respecting,” she says. 

Becoming a privacy advocate

Some larger tech companies, such as Apple, employ dedicated privacy engineers who can bridge product, engineering, security, and compliance teams and bring to bear deep domain expertise in privacy and data protection laws. But since most organizations lack such teams, individuals are often left with the burden of speaking up and challenging the status quo.

“Laws are helpful, but laws alone don’t solve everything,” says Perrin, who helped draft Canada’s first privacy law in 2001. “You can be in a large government department in a country with a strong privacy law and still be the only privacy advocate because you may be the only one willing to point out what isn’t working. It’s incumbent on people who care to embed their values into the fabric of an organization.”

An effective way to accomplish this is to align privacy objectives with business goals. Suppose leadership wants to reduce cloud spend. You could measure how much it costs to collect and retain individual data elements, then estimate how much it would save the company to delete certain records after, say, a month, or not collect them at all.
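To make that concrete, a back-of-the-envelope sketch might multiply measured data volumes by a storage price. All of the figures below, including the per-gigabyte rate, are illustrative stand-ins rather than real benchmarks:

```python
# Back-of-the-envelope savings from a shorter retention window.
# All figures are illustrative; substitute your own measured volumes
# and your provider's actual storage pricing.

GB_PER_RECORD = 0.002          # average size of one user record, in GB
RECORDS_PER_DAY = 50_000       # new records collected per day
PRICE_PER_GB_MONTH = 0.023     # hypothetical object-storage price, USD

def monthly_storage_cost(retention_days: int) -> float:
    """Steady-state monthly cost of retaining records for `retention_days`."""
    stored_gb = GB_PER_RECORD * RECORDS_PER_DAY * retention_days
    return stored_gb * PRICE_PER_GB_MONTH

current = monthly_storage_cost(365)   # keep everything for a year
proposed = monthly_storage_cost(30)   # delete after a month
print(f"Estimated monthly savings: ${current - proposed:,.2f}")
```

Even a rough number like this turns "we should delete old data" from a compliance request into a line item leadership can weigh.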

User or market research can also help achieve this alignment. If you can demonstrate that your customers have specific privacy expectations, you can show leadership that better privacy practices would represent a competitive advantage.

“You need to be able to articulate the impact of a change on the bottom line,” says Clara Tsao, a senior fellow at the Atlantic Council, a think tank that studies how technology affects geopolitics. Not all privacy gains can be measured, but quantifying your work can be a powerful way to gain buy-in on forward-looking privacy strategies. 

Kicking off the privacy impact assessment

A privacy impact assessment conducted early in the development process can set companies on a proactive path. This critical accountability measure shows regulatory bodies you’ve made a good-faith effort to comply with the law and ensure product safety. It also serves as an early warning system, allowing you to identify potential privacy issues and implement any required changes before you’ve invested significantly in development.

There’s no one way to conduct a privacy impact assessment, but the process typically involves software engineers, product managers, and designers, as well as legal, cybersecurity, and compliance experts. You’ll need to consider the legal frameworks a project is subject to, whether the personal data you gather is necessary to achieve your business goals, and which third parties, if any, data will be shared with and why. For example: A hospital’s job application portal might ask health care workers about their medical histories, but are those same questions necessary for someone applying for an accounting role? If a new customer applies for a credit card with a financial institution, is it clear to the user that some of their personal information will be shared with a credit reporting agency?
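There's no standard format for capturing these answers, but one minimal sketch is a per-field inventory that records each question alongside the data element it concerns. Every name and value below is hypothetical:

```python
from dataclasses import dataclass

@dataclass
class DataElement:
    """One personal-data field under review, annotated with the questions
    a privacy impact assessment asks of it. All names are illustrative."""
    name: str                     # e.g. "medical_history"
    purpose: str                  # why the product collects it
    necessary: bool               # required to meet the business goal?
    legal_frameworks: list[str]   # e.g. ["GDPR", "HIPAA"]
    shared_with: list[str]        # third parties receiving the data
    disclosed_to_user: bool       # is the sharing clear at collection time?

elements = [
    DataElement(
        name="medical_history",
        purpose="screen clinical applicants for role-specific requirements",
        necessary=False,          # not needed for an accounting applicant
        legal_frameworks=["GDPR"],
        shared_with=[],
        disclosed_to_user=True,
    ),
]

# Flag fields that fail the necessity test or hide third-party sharing.
for e in elements:
    if not e.necessary or (e.shared_with and not e.disclosed_to_user):
        print(f"Review before launch: {e.name}")
```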

You’ll also want to produce a taxonomy of when and where analysis is performed on collected data. This step is critical: Even though your organization might be handling personal information fairly and lawfully, third parties with access to your data might not be. Similarly, even if customers realize you’re collecting their personal information, they might not know it’s being shared with third parties, or that those same third parties are sharing it with even more third parties. If so, another entity might be making decisions about your customer without their knowledge. (Look no further than the Facebook and Cambridge Analytica episode for an example of how damaging this can be to a company’s reputation.)
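One way to surface those chains of onward sharing is to record declared disclosures as a graph and walk it transitively; any recipient reachable from your product but absent from your user-facing disclosures deserves scrutiny. A minimal sketch, with hypothetical party names:

```python
# Who shares data with whom, as declared in contracts and data maps.
# Party names are hypothetical.
shares_with = {
    "our_product": ["analytics_vendor", "credit_bureau"],
    "analytics_vendor": ["ad_network"],
    "ad_network": ["data_broker"],
}

def downstream_recipients(origin: str) -> set[str]:
    """Every party that can end up with data originating at `origin`."""
    seen: set[str] = set()
    stack = list(shares_with.get(origin, []))
    while stack:
        party = stack.pop()
        if party not in seen:
            seen.add(party)
            stack.extend(shares_with.get(party, []))
    return seen

# Parties disclosed at collection time vs. who can actually receive data.
disclosed = {"analytics_vendor", "credit_bureau"}
undisclosed = downstream_recipients("our_product") - disclosed
print(f"Recipients never disclosed to users: {sorted(undisclosed)}")
```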

Next, review the promises you’re making to users. Whether they’re implied promises, like a padlock emoji next to a question collecting medical data, written commitments in contracts, or regulatory requirements, you should be using the data you collect in precisely the way you’ve laid out.
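A lightweight way to keep yourself honest here is to diff the purposes you've promised against the purposes observed in your codebase or access logs. A minimal sketch, with hypothetical field and purpose names:

```python
# Purposes promised to users (privacy policy, contracts, UI copy).
promised_purposes = {
    "email": {"account_login", "security_alerts"},
    "medical_history": {"clinical_role_screening"},
}

# Purposes observed in the codebase or access logs.
actual_uses = {
    "email": {"account_login", "marketing_campaigns"},
    "medical_history": {"clinical_role_screening"},
}

for data_field, uses in actual_uses.items():
    broken = uses - promised_purposes.get(data_field, set())
    if broken:
        print(f"{data_field}: used beyond what was promised: {sorted(broken)}")
```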

Privacy advocates must speak up when they identify broken promises or unsecured data and emphasize the urgency of fixing these problems. 

“You need to hone the ability to dissent strongly while remaining able to communicate [productively],” Perrin says. “You want [others] to accept your facts and accept that you understand their pressures and problems, and make clear that you want to help them move in a more ethical direction.”

There are a variety of resources available to help companies conduct a privacy impact assessment. Some regulatory bodies, like the United Kingdom’s Information Commissioner’s Office, publish how-to guides online. Some privacy law firms offer free guidance on scoping for privacy risks. Certain industry bodies, like the International Association of Privacy Professionals, sell inexpensive templates. And some government institutions, like the U.S. Federal Deposit Insurance Corporation, publish their privacy impact assessments online, offering an outline of the process.

Safeguarding human rights and business assets

The next step in the privacy impact assessment is to identify vulnerabilities, no matter how minor or unlikely, for every stage of the product’s life cycle. It’s important to analyze these from both the perspective of your business—what the impact of a privacy incident might be on your reputation, brand value, and finances—and that of the individuals whose data is at risk.

Potential harms to a user can range from inconvenient to distressing to dangerous, and they all need to be considered. What if your social media service permanently shuts down and deletes someone’s only copies of their treasured family photos? What if an abusive partner can track an ex’s location by logging into their ride-sharing app? It’s easy to dismiss a harmful privacy scenario as an edge case, but if you’re building software at scale, such scenarios can become inevitabilities. As Del Harvey, Twitter’s VP of trust and safety, said in a 2014 TED Talk: “Given the scale that Twitter is at, a one-in-a-million chance [of an incident] happens 500 times a day.” (At the time, Twitter was handling roughly 500 million tweets a day, so the arithmetic is straightforward: 500,000,000 × 1/1,000,000 = 500.)

You can categorize privacy risks as high, medium, or low based on the severity of the potential harm and the likelihood of the scenario occurring. This can help you determine which ones to address first. It’s true that there are times when there isn’t an ironclad solution—but if you can’t sufficiently mitigate high-risk data processing activities, it’s best to pause development of the feature in question until you can implement the appropriate safety measures. “Privacy is a distributed problem, so maybe you need to apply the brakes until your company can play [its] part,” Perrin says.
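One rough way to implement that triage is a severity-by-likelihood score. The thresholds below are illustrative and would need calibrating to your own risk appetite:

```python
# Scores run 1 (low) to 3 (high); thresholds here are illustrative.
def risk_level(severity: int, likelihood: int) -> str:
    """Classify a privacy risk from harm severity and probability of occurrence."""
    score = severity * likelihood
    if score >= 6:
        return "high"      # e.g. severe harm that is at all likely
    if score >= 3:
        return "medium"
    return "low"

risks = [
    ("location visible to ex-partner", 3, 2),
    ("stale marketing emails after opt-out", 1, 3),
]
for name, severity, likelihood in risks:
    print(f"{risk_level(severity, likelihood):6} {name}")
```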

“No one gets everything right every time,” Tsao adds. “What’s important to me is that organizations are trying to do the right thing.”

Perrin recommends keeping records of the privacy trade-offs you make during this process. Documenting your decisions provides helpful context if you take action to address a known issue later on. She also emphasizes that while the goal is to address as many privacy risks as you can up front, it’s important to “take the view that it’s never too late” to fix a problem. It might be more expensive or time-consuming, but it’s never a bad idea to retrofit a product to be more respectful of users’ privacy.

Practicing privacy proactively

Creating products that protect users’ rights shouldn’t fall to just one person or team—it’s our responsibility as an industry. And we don’t have to go it alone: We can work with subject matter experts in academia, civil society, professional associations, and at other tech companies to understand how they’d tackle a particular problem. After all, a solution might already exist. As Perrin puts it: “It’s only through frankness and honest engagement that we’re going to solve wicked problems.”

Planning a product that prioritizes privacy from the outset requires hard work and difficult decisions. But by asking tough questions and challenging outdated norms, we can work toward a more proactive privacy stance—one that results in fewer reporting obligations to data protection regulators, lower transaction costs, and a better, safer experience for our users.

About the author

Ayden Férdeline is a public interest technologist and a former technology policy fellow with the Mozilla Foundation. He researches how digital policy-making processes around the world can become more representative and inclusive. He is based in Berlin.

@ferdeline

Artwork by

Elise Vandeplancke

elisevandeplancke.be
