
3 Data Protection Approaches That Go Beyond Consent

As digital financial services (DFS) gain popularity in developing countries, it’s becoming more important than ever for data protection legislation to protect consumers’ rights. As we argued in an earlier blog post, simply obtaining consent is not an effective way to protect data rights. Below are three legal approaches to data protection for the 21st century that go beyond consent. By taking one or more of these approaches, policymakers can avoid the shortcomings of the notice-and-consent model, protect citizens’ personal data, promote trust and confidence in DFS and still leave plenty of room for DFS providers to innovate.

A merchant uses her mobile phone in Jaisalmer, India.
Photo: Chara Lata Sharma, 2017 CGAP Photo Contest

Limiting data use to legitimate purposes

Consumers’ personal data should be processed in beneficial ways that are consistent with the reasonable expectations they have formed based on their relationships with service providers. This can be accomplished by limiting providers to the collection, creation, use and sharing of the data necessary for, or compatible with, disclosed uses. When the data are no longer necessary for those uses, they should not be retained in an identifiable form. A key feature of a legitimate purposes approach is that it cannot be overridden by obtaining individual consent. In other words, everyone benefits from legitimate purposes protections, regardless of which boxes they were required to check before accessing a website, downloading an app or using a digital service.

Legitimate purposes for collecting users’ data could include servicing accounts, fulfilling orders, processing payments, making sure a site or service is working properly, quality control, security measures, auditing or other activities driven by evolving business models. Individuals’ statutory and constitutional rights and freedoms would always be an overriding limitation. Of course, providers would still have to comply with other legal obligations and be permitted to disclose information if necessary to protect someone whose health or safety was threatened. Innovative uses of data would be permitted if they are consistent with the service being provided. Beyond such uses, data could be put to more wide-ranging purposes if they were robustly de-identified to reduce the risk of their being used in ways that harm data subjects.

Establishing a fiduciary relationship between providers and consumers

Another approach that does not rely on consent, currently being considered in both India and the United States, is to protect personal data by creating a fiduciary relationship between individuals and providers. Under Indian law, a fiduciary relationship is “a relationship in which one person is under a duty to act for the benefit of the other.” This duty has been regularly imposed on investment advisers, doctors and lawyers so that they act in the best interests of their clients or patients. It requires them, among other things, to use and disclose data for their clients’ benefit. As Harvard professor Jonathan Zittrain describes the concept: “[It] has a legalese ring to it, but it’s a long-standing, commonsense notion. The key characteristic of fiduciaries is loyalty: They must act in their charges’ best interests, and when conflicts arise, must put their charges’ interests above their own.”

This approach is found not only in draft data protection legislation being considered in India; it also appears in a data protection bill introduced by 15 U.S. senators that would establish duties of care, loyalty and confidentiality for providers. If passed, such legislation would mean that providers could not use data in ways that benefit themselves over their customers, nor sell or share customers’ data with third parties that do not put the customer’s best interests first. To some extent, this approach would rectify the information asymmetry in many markets, where providers have much greater knowledge than their customers about how customers’ data may be used or shared. The approach also recognizes that no one, whether poor consumers in developing countries or anyone else, should be required to give up their data protection rights to use digital services. Instead, legally obligating providers to act in their customers’ best interests can help establish trust and confidence that customers’ data are being used responsibly, making consumers more willing to use new products and services.

Along with Zittrain, Yale professor Jack Balkin has noted the importance of fiduciary duties being a two-way street, one that requires “fairness in both directions — fairness to end users, and fairness to businesses, who shouldn’t have new and unpredictable obligations dropped on them by surprise.” Balkin adds that “information fiduciaries should be able to monetize some uses of personal data, and our reasonable expectations of trust must factor that expectation into account. What information fiduciaries may not do is use the data in unexpected ways to the disadvantage of people who use their services or in ways that violate some other important social norm.”

Appointing learned intermediaries to help consumers

Indian privacy lawyer Rahul Matthan argues that imposing a fiduciary duty on providers to protect personal information may not be enough to ensure compliance and, therefore, proposes that “learned intermediaries” be appointed to audit for and remedy improper data use. Such intermediaries would focus on the ever-growing importance of algorithms in making decisions about people, including the associated risk of unlawful discrimination. Intermediaries would stand in the shoes of consumers in evaluating the use of these unseen and technically complex algorithms.

Accordingly, Matthan proposes that intermediaries first review algorithmic queries to assess whether they are being used improperly (e.g., whether a lender is seeking an evaluation of risk based on prohibited factors such as gender, race or caste). Second, Matthan says, intermediaries should review algorithmic inputs and outputs to assess whether improper discrimination is occurring. Third, if the second step finds potentially problematic results, intermediaries should have greater access to the algorithm itself, or at least tools to probe the algorithm, to see if it is compliant. If shortcomings are detected, instead of just citing the problem, the intermediaries should try to suggest appropriate remedial measures. The theory is that, over time, individuals would be drawn to providers whose algorithms are ultimately found to be fair and compliant.
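
To make Matthan’s second step concrete, below is a minimal sketch, in Python, of the kind of output check an intermediary might run: it compares a model’s approval rates across groups and flags any gap that falls below a common disparate-impact heuristic. The data, the function name and the 0.8 (“four-fifths”) threshold are illustrative assumptions, not part of Matthan’s proposal.

```python
# Hypothetical illustration of a learned intermediary's second audit step:
# compare a model's decisions across groups to spot possible improper
# discrimination. All names, data and thresholds here are assumptions.

def disparate_impact_ratio(decisions, groups, favorable=1):
    """Return (min/max approval-rate ratio, per-group approval rates).

    decisions: iterable of model outputs (1 = approved, 0 = denied)
    groups:    iterable of group labels, aligned with decisions
    """
    counts = {}  # group -> (favorable decisions, total decisions)
    for decision, group in zip(decisions, groups):
        fav, total = counts.get(group, (0, 0))
        counts[group] = (fav + (decision == favorable), total + 1)
    rates = {g: fav / total for g, (fav, total) in counts.items()}
    return min(rates.values()) / max(rates.values()), rates

# Hypothetical loan decisions for two groups the intermediary is comparing.
decisions = [1, 1, 0, 1, 0, 1, 1, 0, 0, 0, 1, 0]
groups    = ["A"] * 6 + ["B"] * 6

ratio, rates = disparate_impact_ratio(decisions, groups)
print(f"approval rates: {rates}, ratio: {ratio:.2f}")

# The 0.8 cutoff is the "four-fifths rule" heuristic from U.S. employment
# law; a real regulator or intermediary might set a different standard.
if ratio < 0.8:
    print("Flag for deeper review, e.g., request access to probe the algorithm.")
```

In Matthan’s framework, such a flag would trigger the third step: deeper access to the algorithm itself and, where shortcomings are confirmed, suggested remedial measures.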

In medicine, physicians have been treated as learned intermediaries between pharmaceutical companies and patients, warning patients about potential risks posed by medications. In the data protection context, learned intermediaries could perform a similar role, warning consumers about dangerous data practices or, more broadly, examining complex privacy notices and industry practices to better educate the public about their meaning and potential shortcomings.

These three supplements to the traditional notice and consent approach to data protection demonstrate ways that individuals can be better protected in our ever more complex digital environment.
