The Implications of Data Tracking on Financial Inclusion

Data privacy on the internet has been generating significant debate and media coverage recently. This is driven in part by a recent push by the World Wide Web Consortium, an international standards body, to develop standards for how users’ data is tracked on the web and, perhaps more importantly, how users are informed that their information is being tracked. There has been contentious pushback from various online advertising firms and their allies against Microsoft’s decision to make “Do Not Track” the default setting in the new version of Internet Explorer. In response, Microsoft has cited its own consumer research indicating that 75% of computer users favored this switch, a finding corroborated by an independent survey conducted by the University of California, Berkeley.

This debate over internet data tracking has important implications for financial inclusion, and in particular for new channels like mobile banking that are expanding access to finance for consumers in low-access markets. Some see the eventual business model for these channels as providing low-cost or free services to consumers and making profits instead by selling consumer data to companies looking to offer these consumers tailored products. As in other industries whose business models depend on access to data, less data could make economically viable business models for financial services harder to build, and in turn reduce access through these channels. From the other perspective, one could argue that protecting consumers’ privacy, keeping them informed, and giving them clear and easy choices is a higher priority than making an easier business case for providers. In fact, even at CGAP we don’t always see this issue from the same perspective, so here we present two different viewpoints: from Rafe Mazer, who focuses on financial consumer protection, and Sarah Rotman, who focuses on product innovation.

Rafe’s View: Make protection the default setting for innovation

While innovation can bring greater access to financial products for the unbanked or underserved, these innovations also present new consumer protection risks for populations that often have little experience with electronic transactions or familiarity with the risks of identity theft. In this context, I think it is important to protect consumers’ privacy first, and then allow them to share data only voluntarily, in a clear and transparent manner. In reviewing the arguments made by the online advertising sector in the Internet Explorer case, the potential revenue losses that greater privacy could create for their businesses are clear. What is less compelling is their case that this also harms consumer welfare by restricting consumers’ ability to access targeted advertising and other benefits of data mining. While it is probably true that fewer consumers would receive targeted advertising, if they still have the option to opt in and receive these benefits, why is this a problem? For me, the risk of a consumer missing out on beneficial targeted advertising or offers is a far lower priority than making sure consumers who do not want their data shared are protected by default. The University of California study found that consumers were often misinformed about how their data and history could be used; a majority believed, for example, that websites with a privacy policy cannot sell user data, when the reverse is actually true.

Following this logic a bit further: if the benefits of this data mining to the consumer are significant, wouldn’t consumers be incentivized to switch the settings on their browsers (or, in our case, their phones) and opt in? What I think is really behind this issue is the power of default options in driving human behaviour. Experiments with financial and non-financial products have shown that most consumers end up going with the default option presented, even when other options would be more beneficial and they should probably be evaluating their choices more carefully. Sometimes it is laziness, sometimes it is because we are overwhelmed by choice, but in all cases the power of the default is significant, and both providers and privacy advocates know it matters more than any determination of how much a consumer benefits from targeted advertising. If this is the case, and the issue really is not about consumer benefits so much as securing a significant, and passive, trove of consumer data to sell, doesn’t that only make the case for consumer privacy even stronger?

Turning to financial inclusion: as smartphones drop in cost and more low-income consumers build larger digital footprints, I can see this issue emerging in markets across the globe, perhaps within the same framework as the current debate over disclosure of data tracking and default privacy settings. Here again I can see arguments for lax privacy settings to facilitate useful data sharing and help generate a business case for banking the poor. But do those ends justify the means of a laissez-faire approach to data privacy? Are we prepared to say that a consumer’s individual right to privacy is less important than growing financial access in places where data mining and cross-marketing are an essential part of a viable business case? I’m not prepared to make that assumption, and so I hope we are more measured in our approach to this issue than some of the voices in online advertising currently attacking Microsoft for merely listening to consumers’ preferences on data privacy.

Sarah’s View: Make open access the default setting for innovation

Over the last several years, CGAP’s mantra to central banks on branchless banking regulation has been to take a proportionate, risk-based approach. That is, don’t regulate based on what the perceived risks of branchless banking might be in one’s own imagination, but rather on the actual risks one sees in the market. I think we at CGAP would do well to take some of our own advice in this discussion of data privacy. Concerns over the consumer protection risks of lax data privacy seem to me to be reactions to the very worst (and, I would argue, rare) scenarios in which very bad people use someone’s very personal data to place them in very harmful situations. That reaction seems disproportionate to me.

Instead, I would argue that the vast majority of data mining is at best extremely beneficial and at worst a slight annoyance for consumers. Data mining results in me getting a few targeted advertisements for products that may or may not be of interest. I’m usually quite impressed with how tailored these ads actually are to my real desires and preferences. Maybe I’ll buy something; probably I won’t, in which case I simply ignore the ads. So remind me again why this calls for such drastic consumer protection measures?

From a financial inclusion perspective, this discussion becomes even more critical. Even if I opt out of certain data-driven advertisements, banks can still access multiple credit scores and loads of other information about me with which to make an informed decision about the products to offer. However, poor, unbanked consumers have no such data with which to prove themselves viable banking clients. The data they are beginning to generate from mobile and, to a lesser extent, internet usage creates, as my colleague Kabir Kumar has written, digital footprints that become powerful tools for the provision of appropriate financial services. In light of the growing emphasis in our industry on client-centric products and services for the poor, I’m perplexed that more data mining isn’t welcomed.

My guess is that it is the sale of such data that makes some people uncomfortable. But here is where the idea of a financially inclusive market ecosystem that works for the poor is so critical. Data is sold to providers so that it can be used to tailor products to the particular needs of a target client base. The businesses that gather such data are also providing a valuable service to consumers. Let’s be clear: this debate is not providers versus consumers. Consumers would be worse off if there weren’t providers of all types smart enough to figure out business models that keep them sustainable in a competitive marketplace. That’s why a default that requires consumers to opt in is so risky. As my colleague Rafe rightly points out above, default options do indeed drive human behaviour. People won’t opt in if the default is to opt out, and with everyone opted out, the entire business-model ecosystem begins to crack as revenues decrease and businesses go bust. We can’t possibly be saying that this would benefit consumers, can we?

The bottom line for me is that our understanding of privacy must keep up with the innovation of our times. We revel in free, open access to a world of information on the internet, but we aren’t willing to reciprocate. The consumer surveys mentioned above tell only one side of the story. It is hardly surprising that when asked whether they want their internet search data shared with third parties, most people say no. I would also say no to such a leading question. But if asked whether they would prefer to have their data shared or to pay $0.50 per Google search, I bet the responses would be quite different.

If my right to privacy keeps me siloed in a world where I have lousy options for managing my money, then I’d say privacy is a small price to pay. The sooner we begin seeing data mining as an opportunity and not some perceived or inflated risk, the sooner the poor will benefit from products tailored to their actual behaviours and needs.
