
Why Digital Finance Supervisors Should Automate Data Collection

The advent of technology solutions for data collection, often called "RegTech," has made it possible for supervisors to collect huge amounts of data from digital financial services (DFS) providers at a more granular level than ever before. The shift toward granular data can help regulators supervise DFS more effectively. But if supervisors collect more data without changing their data collection mechanisms, they could find themselves struggling more than before. Fully automated collection processes and standardization will be essential to using granular data for effective DFS supervision.

Indian merchant uses mobile phone. Photo by Lata Sharma, 2017 CGAP Photo Contest.

Most supervisors today aren’t accustomed to using granular data. Providers’ management information systems (MIS) hold data on individual accounts, clients and agents, including transaction and balance information. Yet, as recent CGAP research shows, most supervisors require providers to transform this data into indicators and report aggregate numbers using pre-defined report templates. Common indicators include total volume of mobile money transactions and total outstanding mobile money issued. These indicators are often broken down by type of mobile money transaction and other sub-categories. While this template-based approach simplifies data collection for supervisors, it places a significant burden on providers, whose MISs often cannot automatically generate information in the format required. It also introduces possibilities for reporting errors and limits the ways supervisors can analyze the data.

There are good reasons for supervisors to switch to granular data despite the challenges it introduces, as countries like Rwanda and Tanzania are already doing. First, granular data can free supervisors from the constraints of the pre-defined indicators found in report templates, allowing a virtually endless array of supervisory analyses. Second, granular data removes the need for supervisors to worry about whether providers are accurately computing indicators, because calculating indicators becomes the supervisor's own responsibility. Supervisors can improve data quality if they do a good job of standardizing the formats and concepts used by providers. Third, granular data is flexible. Data reported once can be shared among different departments, or eventually even different authorities, removing the need for providers to report the same data multiple times. Finally, it can be cheaper and easier for providers to report granular data, meaning data can arrive on supervisors' desks faster.
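To make the second point concrete, here is a minimal sketch of how a supervisor might derive its own aggregate indicators from granular, transaction-level records once they arrive in a standardized format. The dataset, column names and transaction types below are hypothetical illustrations, not an actual reporting standard.

# Hypothetical sketch: the supervisor computes its own indicators from granular,
# transaction-level mobile money data instead of relying on provider-reported
# aggregates. Column names and values are illustrative assumptions.
import pandas as pd

# Granular records as they might arrive from providers in a standardized format.
transactions = pd.DataFrame({
    "provider_id": ["P1", "P1", "P2", "P2", "P2"],
    "txn_type": ["p2p", "cash_in", "p2p", "bill_pay", "cash_out"],
    "amount": [150.0, 500.0, 75.0, 320.0, 200.0],
    "txn_date": pd.to_datetime(
        ["2017-11-03", "2017-11-03", "2017-11-12", "2017-11-20", "2017-11-28"]
    ),
})

# The supervisor, not the provider, derives the aggregate indicators it needs
# and can re-slice the same data by provider, transaction type or period.
transactions["month"] = transactions["txn_date"].dt.to_period("M")
monthly_volume = (
    transactions
    .groupby(["month", "provider_id", "txn_type"], as_index=False)
    .agg(total_value=("amount", "sum"), transaction_count=("amount", "count"))
)
print(monthly_volume)

Because the raw records remain available, a new indicator, say the share of transactions above a given threshold, becomes a new query rather than a new report template sent to every provider.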

Of course, these are only potential benefits. Whether they materialize depends not only on what granular data is collected, but on how it is collected. Requiring providers to report granular data through traditional report templates could backfire. Low DFS data quality is often rooted in providers using manual processes to fill out templates. Requiring providers to fill out and validate thousands of granular data fields in a report template, rather than the few key aggregate indicators they report today, could undermine data quality and drive up costs for providers. For these reasons, the benefits of collecting granular data depend on fully automated data collection methods. They also require supervisors to commit to standardizing data at a far more detailed level than today's aggregate indicators demand.
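As an illustration of what field-level standardization and automated collection could look like in practice, the sketch below validates incoming granular records against a simple schema at the point of submission, so errors are flagged to the provider immediately rather than discovered later in a manually completed template. The field names, allowed values and rules are assumptions made for this example, not an actual supervisory standard.

# Hypothetical illustration of field-level standardization: granular records are
# validated automatically on submission, so errors are caught at the source
# instead of through manual review of report templates. The schema below is an
# illustrative assumption, not a real reporting standard.
from datetime import datetime

REQUIRED_FIELDS = {"provider_id", "txn_id", "txn_type", "amount", "currency", "txn_date"}
ALLOWED_TXN_TYPES = {"cash_in", "cash_out", "p2p", "bill_pay", "merchant_pay"}


def validate_record(record: dict) -> list:
    """Return a list of validation errors for one granular transaction record."""
    errors = []
    missing = REQUIRED_FIELDS - record.keys()
    if missing:
        errors.append(f"missing fields: {sorted(missing)}")
        return errors
    if record["txn_type"] not in ALLOWED_TXN_TYPES:
        errors.append(f"unknown txn_type: {record['txn_type']}")
    if not isinstance(record["amount"], (int, float)) or record["amount"] <= 0:
        errors.append("amount must be a positive number")
    try:
        datetime.strptime(record["txn_date"], "%Y-%m-%d")
    except ValueError:
        errors.append("txn_date must be a valid date in YYYY-MM-DD format")
    return errors


# A record with a negative amount and an impossible date is rejected on arrival,
# with errors the provider can act on immediately.
bad_record = {"provider_id": "P1", "txn_id": "T001", "txn_type": "p2p",
              "amount": -50, "currency": "TZS", "txn_date": "2017-11-31"}
print(validate_record(bad_record))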

All of this will require supervisors and providers to coordinate more closely and invest in the right technology. The experience of Austria's central bank in automating the collection of granular prudential banking data and eliminating report templates can serve as an example to DFS supervisors in emerging markets and developing economies. In the new system, granular data is gathered automatically by banks' systems and sent to AuRep, a company owned by the banks. AuRep automatically prepares granular datasets in standard formats, which the central bank and the financial supervisory authority can access and use at any time. As a result, changes in reporting requirements do not require any changes to report templates or to banks' MISs.

A shift to granular data has the potential to improve DFS data quality and supervision, but only if supervisors and providers work together to implement automated collection mechanisms that minimize costs and the potential for human error.

Resources

Publication

Digital financial services (DFS) have grown considerably in emerging markets and developing economies, where they are instrumental for financial inclusion. DFS supervision needs to ensure that this expansion happens in a way that facilitates sustained, healthy financial inclusion.

Comments

08 January 2018 Submitted by Thomas Timberg

But for microfinance, it is essential to do sample physical verification because of the potential for problems at the base.

12 January 2018 Submitted by Denise Dias

Dear Thomas,
Thank you for your comment. You're absolutely right that supervisors (of any type of institution, not only microfinance institutions) should know what they are getting, that is, the quality of their data. And to improve data quality at the source, supervisors need to go onsite. When an industry becomes regulated, as has happened with MFIs and cooperatives in many countries, the supervisor will need a period of working hard to improve the data before relying on it for offsite analysis.

08 January 2018 Submitted by Calvin Johansson

Moving to greater data granularity will also require financial supervisors to increase their knowledge and skills in analysing the data to generate information that supports more effective and efficient risk assessment of their financial institutions. The knowledge and skills required are both quantitative and qualitative (judgemental). In particular, supervisors need to avoid the "analysis paralysis" that can easily happen with access to large volumes of more granular data.

12 January 2018 Submitted by Denise Dias

Dear Calvin,
Thank you for your comment. Indeed, the higher the granularity and the greater the volume of data, the greater the need for strong analytical skills. The impact of better data will be very limited if analytical skills are poor and there is a lack of capacity or priority for using such data. This is a point we make in our publication, though it was not the publication's focus.
