22 Aug

Big Data, Financial Inclusion and Privacy for the Poor

Guest Post by Dr Katharine Kemp, Research Fellow, UNSW Digital Financial Services Regulation Project

Financial inclusion is not good in itself.

We value financial inclusion as a means to an end. We value financial inclusion because we believe it will increase the well-being, dignity and freedom of poor people and people living in remote areas, who have never had access to savings, insurance, credit and payment services.

It is therefore important to ensure that the way in which financial services are delivered to these people does not ultimately diminish their well-being, dignity and freedom. We already do this in a number of ways – for example, by ensuring providers do not make misrepresentations to consumers, or charge exploitative or hidden rates or fees. Consumers should also be protected from harms that result from data practices tied to the provision of financial services.

Benefits of Big Data and Data-Driven Innovations for Financial Inclusion

“Big data” has become a fixture in any future-focused discussion. It refers to data captured in very large quantities, very rapidly, from numerous sources, where that data is of sufficient quality to be useful. The collected data is analysed, using increasingly sophisticated algorithms, in the hope of revealing new correlations and insights.

There is no doubt that big data analytics and other data-driven innovations can be a critical means of improving the health, prosperity and security of our societies. In financial services, new data practices have allowed providers to serve customers who are poor and those living in remote areas in new and better ways, including by permitting providers to:

  • extend credit to consumers who previously had to rely on expensive and sometimes exploitative informal credit, if any, because they had no formal credit history;
  • identify customers who lack formal identification documents;
  • design new products to fit the actual needs and realities of consumers, based on their behaviour and demographic information; and
  • enter new markets, increasing competition on price, quality and innovation.

But the collection, analysis and use of enormous pools of consumer data have also given rise to concerns about the protection of financial consumers’ data and privacy rights.

Potential Harms from Data-Driven Innovations

Providers now not only collect more information directly from customers, but may also track customers physically (using geo-location data from their mobile phones); track customers’ online browsing and purchases; and engage third parties to combine the provider’s detailed information on each customer with aggregated data from other sources about that customer, including their employment history, income, lifestyle, online and offline purchases, and social media activities.

Data-driven innovations create the risk of serious harms both for individuals and for society as a whole. At the individual level, these risks increase as more data is collected, linked, shared, and kept for longer periods, including the risk of:

  • inaccurate and discriminatory conclusions about a person’s creditworthiness based on insufficiently tested or inappropriate algorithms;
  • unanticipated aggregation of a person’s data from various sources to draw conclusions which may be used to manipulate that person’s behaviour, or adversely affect their prospects of obtaining employment or credit;
  • identity theft and other fraudulent use of biometric data and other personal information;
  • disclosure of personal and sensitive information to governments without transparent process and/or to governments which act without regard to the rule of law; and
  • harassment and public humiliation through the publication of loan defaults and other personal information.

Many of these harms are known to have occurred in various jurisdictions. The reality is that data practices can sometimes lead to the erosion of trust in new financial services and the exclusion of vulnerable consumers.

Even relatively well-meaning and law-abiding providers can cause harm. Firms may “segment” customers and “personalise” the prices or interest rates a particular consumer is charged, based on their location, movements, purchase history, friends and online habits. A person could, for example, be charged higher prices or rates based on the behaviour of their friends on social media.
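To make the mechanism concrete, here is a deliberately toy Python sketch of segment-based rate personalisation. Every signal name and surcharge below is invented for illustration, not drawn from any provider's actual model; the point is only that a borrower can be penalised for their friends' behaviour rather than their own.

```python
# Toy illustration only: how a lender might "personalise" an annual interest
# rate from behavioural segments. All signal names and surcharges are
# invented examples, not real scoring attributes.

def personalised_rate(base_rate, segment_signals):
    """Adjust a base annual rate using boolean segment-membership flags."""
    rate = base_rate
    if segment_signals.get("low_income_postcode"):
        rate += 0.03  # surcharge inferred from the borrower's location
    if segment_signals.get("friends_with_defaults"):
        rate += 0.05  # surcharge inferred from the borrower's social network
    if segment_signals.get("late_night_browsing"):
        rate += 0.01  # surcharge inferred from tracked online habits
    return round(rate, 4)

# Two borrowers with identical repayment histories can face different prices
# purely because of who their friends are.
rate_a = personalised_rate(0.12, {})
rate_b = personalised_rate(0.12, {"friends_with_defaults": True})
```

Note that the borrower never supplies, and may never see, the signals that set their price – which is precisely the notice-and-choice problem taken up later in this post.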

Data practices may also increase the risk of harm to society as a whole. Decisions may be made to the detriment of entire groups or segments of people based on inferences drawn from big data, without the knowledge or consent of those groups. Pervasive surveillance, and even the mere awareness of surveillance, is known to pose threats to freedom of thought, political activity and democracy itself, as individuals are denied the space to create, test and experiment unobserved.

These risks highlight the need for perspective and caution in the adoption of data-driven innovations, and the need for appropriate data protection regulation.

The Prevailing “Informed Consent” Approach to Data Privacy

Internationally, many data privacy standards and regulations are based, at least in part, on the “informed consent” – or “notice” and “choice” – approach to informational privacy. This approach can be seen in the Fair Information Practice Principles that originated in the US in the 1970s; the 1980 OECD Privacy Guidelines; the 1995 EU Data Protection Directive; and the Council of Europe Convention 108.

Each of these instruments recognises consumer consent as a justification for the collection, use, processing and sharing of personal data. The underlying rationale for this approach is based on principles of individual freedom and autonomy. Each individual should be free to decide how much or how little of their information they wish to share in exchange for a given “price” or benefit. The data collector gives notice of how an individual’s data will be treated and the individual chooses whether to consent to that treatment.

This approach has been increasingly criticised as artificial and ineffectual. The central criticisms are that, for consumers, there is no real notice and there is no real choice.

In today’s world of invisible and pervasive data collection and surveillance capabilities, data aggregation, complex data analytics and indefinite storage, consumers no longer know or understand when data is collected, what data is collected, by whom and for what purposes, let alone how it is then linked and shared. Consumers do not read the dense and opaque privacy notices that supposedly explain these matters, and could not read them, given the hundreds of hours this would take. Nor can they understand, compare or negotiate these privacy terms.

These problems are exacerbated for poor consumers who often have more limited literacy, even less experience with modern uses of data, and less ability to negotiate, object or seek redress. Yet we still rely on firms to give notice to consumers of their broad, and often open-ended, plans for the use of consumer data and on the fact that consumers supposedly consented, either by ticking “I agree” or proceeding with a certain product.

The premises of existing regulation are therefore doubtful. At the same time, some commentators question the relevance and priority of data privacy in developing countries and emerging markets.

Is data privacy regulation a “Western” concept that has less relevance in developing countries and emerging markets?

Some have argued that the individualistic philosophy inherent in concepts of privacy has less relevance in countries that favour a “communitarian” philosophy of life. For example, in a number of African countries, “ubuntu” is a guiding philosophy. According to ubuntu, “a person is a person through other persons”. This philosophy values openness, sharing, group identity and solidarity. Is privacy relevant in the context of such a worldview?

Privacy, and data privacy, serve values beyond individual autonomy and control. Data privacy serves values at the very heart of “communitarian” philosophies, including compassion, inclusion, face-saving, dignity, and the humane treatment of family and neighbours. The protection of financial consumers’ personal data is entirely consistent with, and frequently critical to, upholding values such as these, particularly in light of the alternative risks and harms.

Should consumer data protection be given a low priority in light of the more pressing need for financial inclusion?

Some have argued that, while consumer data protection is the ideal, this protection should not have priority over more pressing goals, such as financial inclusion. Providers should not be overburdened with data protection compliance costs that might dissuade them from introducing innovative products to unserved and under-served consumers.

Here it is important to remember how we began: financial inclusion is not an end in itself but a means to other ends, including permitting poor people and those living in remote areas to support their families, prosper, gain control over their financial destinies, and feel a sense of pride and belonging in their broader communities. The harms caused by unregulated data practices work against each of these goals.

If we are in fact permanently jeopardising these goals by permitting providers to collect personal data at will, financial inclusion is not serving its purpose.

Solutions

There will be no panacea, no simple answer to the question of how to regulate for data protection. A good starting place is recognising that consumers’ “informed consent” is most often fictional. Sensible solutions will need to draw on the full “toolkit” of privacy governance tools (Bennett and Raab, 2006), such as appropriate regulators, advocacy groups, self-regulation and regulation (including substantive rules and privacy by design). The solution in any given jurisdiction will require a combination of tools best suited to the context of that jurisdiction and the values at stake in that society.

Contrary to the approach advocated by some, it will not be sufficient to regulate only the use and sharing of data. Limitations on the collection of data must be a key focus, especially in light of new data storage capabilities, the likelihood that de-identified data will be re-identified, and the growing opportunities for harmful and unauthorised access the more data is collected and the longer it is kept.

Big data offers undoubted and important benefits in serving those who have never had access to financial services. But it is not a harmless curiosity to be mined and manipulated at the will of those who collect and share it. Personal information should be treated with restraint and respect, and protected, in keeping with the fundamental values of the relevant society.


References:

Colin J Bennett and Charles Raab, The Governance of Privacy (MIT Press, 2006)

Gordon Hull, “Successful Failure: What Foucault Can Teach Us About Privacy Self-Management in a World of Facebook and Big Data” (2015) 17 Ethics and Information Technology Journal 89

Debbie VS Kasper, “Privacy as a Social Good” (2007) 28 Social Thought & Research 165

Katharine Kemp and Ross P Buckley, “Protecting Financial Consumer Data in Developing Countries: An Alternative to the Flawed Consent Model” (2017) Georgetown Journal of International Affairs (forthcoming)

Alex B Makulilo, “The Context of Data Privacy in Africa,” in Alex B Makulilo (ed), African Data Privacy Laws (Springer International Publishing, 2016)

David Medine, “Making the Case for Privacy for the Poor” (CGAP Blog, 15 November 2016)

Lokke Moerel and Corien Prins, “Privacy for the Homo Digitalis: Proposal for a New Regulatory Framework for Data Protection in the Light of Big Data and the Internet of Things” (25 May 2016)

Office of the Privacy Commissioner of Canada, Consent and Privacy: A Discussion Paper Exploring Potential Enhancements to Consent Under the Personal Information Protection and Electronic Documents Act (2016)

Omri Ben-Shahar and Carl E Schneider, More Than You Wanted to Know: The Failure of Mandated Disclosure (Princeton University Press, 2016)

Productivity Commission, Australian Government, “Data Availability and Use” (Productivity Commission Inquiry Report No 82, 31 March 2017)

Bruce Schneier, Data and Goliath: The Hidden Battles to Collect Your Data and Control Your World (WW Norton & Co, 2015)

Daniel J Solove, “Introduction: Privacy Self-Management and the Consent Dilemma” (2013) 126 Harvard Law Review 1880

10 Aug

Insights from the “Digital Investments Roundtable” hosted by the Future of Finance Initiative

(This post is authored by the Future of Finance Team at the IFMR Finance Foundation).

In the first and second posts of this series on the three Future of Finance Initiative (FFI) workshops hosted in April, we focused on digital payments and digital credit respectively. This blog summarises the key insights from the third workshop on digital investments. The workshop was attended by providers with a strong digital interface from across the investments ecosystem in India. We thank the participants for their frank and open views presented at the discussions.

The retail investments landscape in India is currently being disintermediated, as fintech players rethink the operating model of traditional providers and associated intermediaries. Given this background, and recognising the continuing need for high-quality investment products for rural low-income households, we wanted to understand:

  • How are providers providing solutions relevant to new market segments?
  • Where are the risks and vulnerabilities across the chain of the players and processes associated with making digital investments?

In doing so, we found ourselves asking the following questions of the curated group of participants:

  • How are providers dealing with any issues around (a) segregation of investments advice and product sale, and (b) customer data protection?
  • What are the operational pain points for providers which are either created by or can be solved by policy and regulation intervention?

The first session at the workshop focussed on the current state of digital investments in India and was used to frame the discussion. An interesting visual from this discussion (reproduced below) was the geographic distribution of mutual fund sales in the country, which reveals that Western and Southern states have generated the majority of such investments. It was pointed out that, in contrast, the penetration of life insurance is better in eastern parts of India. However, while the growth of the mutual fund industry has been significant in absolute terms, mutual fund assets as a percentage of GDP remain very low, estimated at 8.4% (as of 2016).[1]

Graphic: Geographic spread of mutual fund products (Source: Association of Mutual Funds in India) (Note: Legend in the graphic pertains to the average assets under management in Rs. crore)

Views on the future trends in the investments space and the role of regulation

Both offline and online consumer interfaces will continue to be critical: There was consensus among the participants that hybrid business models, incorporating both online and offline product distribution channels, would prevail in the near future. It was, however, noted that supporting digital public infrastructure (such as Aadhaar and India Stack) has provided impetus for digital transactions in this space, and that technology has enabled almost real-time access to investments, which was previously not the case.

The need for differentiated KYC processes: Some of the participants questioned the need to complete the full ‘know your customer’ process (including ‘in-person verification’) as a pre-requisite for investing in mutual funds, since investor funds move from a KYC-compliant bank account of the investor to the asset management company. One suggestion in this regard was to make full KYC a pre-requisite for redeeming mutual fund investments instead, and to put in place risk-based KYC processes rather than uniform KYC processes applied irrespective of the nature and amount of the investment.
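The risk-based idea above can be sketched in a few lines of Python. The tiers and the threshold below are invented assumptions for illustration, not SEBI or RBI requirements: light remote e-KYC suffices for small investments, while full KYC (including in-person verification) is required up front only for larger amounts, and always before redemption.

```python
# A minimal sketch of risk-based KYC with invented tiers and thresholds
# (illustrative assumptions only, not actual regulatory requirements).

def required_kyc(action, amount_inr):
    """Return the KYC level a hypothetical risk-based regime might demand."""
    if action == "redeem":
        return "full"             # full KYC as a pre-requisite to redeem
    if amount_inr <= 50_000:      # illustrative small-ticket threshold
        return "basic"            # e.g. remote e-KYC only
    return "full"                 # larger investments: full KYC up front

# A small first-time investment clears on basic e-KYC; the investor completes
# full KYC only when redeeming or when investing a larger amount.
print(required_kyc("invest", 10_000))   # basic
print(required_kyc("redeem", 10_000))   # full
```

The design choice is that verification effort scales with the risk and size of the transaction, instead of imposing the heaviest process uniformly at the point of entry.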

On the role of industry standards and sector practices: The participants noted that there are currently no regulations regarding protection and security of an investor’s personal data which apply to entities operating in this sector. Some of the participants highlighted their internal best practices such as conducting vendor due diligence before sharing personal data and having robust data security protocols driven in part by shareholder requirements (especially for companies which have received venture capital funding).

On complaints mechanisms: The participants agreed on the need to strengthen grievance redressal mechanisms to ensure better investor outcomes. They suggested that investor awareness programmes (which are applicable to product manufacturers) be made outcome-based, for instance by measuring the number of retail investors who take up mutual fund investments as a result of participating in, or being exposed to, such programmes. It should be noted in this regard that the Securities and Exchange Board of India (SEBI) currently requires depositories and asset management companies/registrar and transfer agencies to put in place a ‘proper grievance redressal mechanism’, which must be communicated to investors through the consolidated account statements.[2]

Role of agents and robo-advisory in the context of investment products

On treatment of advice and sale: Participants were keen to discuss the policy focus on separating the advice and sale of investment products, and commented that SEBI should consider regulating the quality of advice provided by agents. It should be noted that SEBI recently put out a Consultation Paper on Amendments/Clarifications to the SEBI (Investment Advisers) Regulations, 2013 in this regard.

Some of the participants took the view that the current Indian market for portfolio advice is not at all data-driven, and that potentially harmful advice is being provided to investors. It was also pointed out that the pass-through of commissions (received by insurance and mutual fund distributors) to investors is a rampant practice in India.

The promise of some of the new developments in digital investments is that richer data flows can improve the range and quality of services in this space.

On considerations for robo-advisory services: The role of robo-advisory, i.e., providing financial advice with minimal human intervention, was also discussed. These advice algorithms could add value by customising advice for consumers. There was recognition that algorithm-based investment advisory, trained on the decisions of human advisors, could retain their biases, which would need to be addressed in the long term, and there were questions around how robo-advisors select the funds they recommend.

On disclosure: There was general consensus in the room that the onus should be on advisors to read offer documents and other disclosures and give informed advice to investors, instead of expecting potential investors to do so themselves.


About the Future of Finance Initiative:

The Future of Finance Initiative (FFI) is housed within IFMR Finance Foundation and aims to promote policy and regulatory strategies that protect citizens accessing finance given the sweeping changes that are reshaping retail financial services in India – including those driven by Indiastack, Payments Banks, mobile usage and the growing P2P market.



[1] Attributed to Mr. NK Prasad, President and CEO at Computer Age Management Services Private Limited. Please see MF investments rising in smaller towns: CAMS, The Tribune, 28 December 2016. Available at: http://www.tribuneindia.com/news/business/mf-investments-rising-in-smaller-towns-cams/342616.html.
[2] As per paragraph 14.3.2.8 of the SEBI Master Circular for Mutual Funds, 2016.

3 Jul

Insights from the “Digital Credit Roundtable” hosted by the Future of Finance Initiative

(This post is authored by the Future of Finance Team at the IFMR Finance Foundation).

In the first post of this series on the three Future of Finance Initiative (FFI) workshops hosted in April, we focused on the workshop on digital payments. This blog summarises the key insights from the second workshop on digital credit. The workshop was attended by providers from across the credit ecosystem in India. We thank the participants for their frank and open views presented at the discussions.

India is one of the most underserved credit markets in the world, with only 15% of households borrowing from formal channels.[1] Emerging digital lending models have the potential to address this gap. These models range from online marketplaces and online lenders (originating loans on behalf of traditional institutions or lending themselves) to P2P players (connecting individual lenders to borrowers via a platform). Given the entry of these new technology-oriented providers and intermediaries, we wanted to understand responses to our core questions from players across the digital credit space:

  • How are providers providing solutions relevant to new market segments?
  • Where are the risks and vulnerabilities across the chain of the players and processes in the digital credit ecosystem?

The Growing Role of Non-bank Entities in Digital Credit

An early insight shared at the workshop was that there is no shortage of demand for, or supply of, credit in India today; rather, the market lacks mechanisms for the appropriate deployment of that supply. It was also emphasised that the role of fintech providers in India is fundamentally different from that in markets like the US: while fintechs in the US focus on a generally well-banked population, often in competition with established banks, Indian fintech firms are also trying to expand the market and provide services to the underserved.

The key question facing the Indian market is whether providers disintermediating the chain of credit will partner with banks or compete with them to provide services to customers. Two market trends were described in this context:

a) P2P lending platforms partnering with banks

Participants reflected that traditional banking is limited by legacy systems and regulations. Some banks have taken a progressive view of the developments, with early trends emerging of P2P platforms tying up with banks to source customers and help with the early stages of the customer verification process. These partnerships are making certain asset classes—such as consumer and SME loans through e-commerce platforms—more accessible to traditional banks.

b) New strategies by digital lenders and P2P platforms to reach customers not previously accessed by traditional lenders

Providers in the digital credit market are also using new strategies to diversify their customer base, such as partnering with e-commerce platforms to use their data, and advertising to target new customers. For instance, some P2P platforms have tie-ups with travel and holiday planning sites to offer loans to vendors listed on the site.[2] These partnerships have opened up access to new SME and consumer loan customers who may not previously have been accessible to lenders.[3]

New Service Providers in the Chain of Digital Credit

Next, the discussion moved on to the range of players in the digital credit scene. To frame the discussion, we presented to the participants a list of the stakeholders involved in the provision of digital credit (Table 1), based on our understanding of the credit ecosystem.

Table 1: Digital Credit Stakeholders

Source: FFI (2017)

The participants observed that the above list is likely to evolve as emerging players involved in providing digital credit and related services are currently discovering and experimenting with different business models.

Despite the changing nature of the industry, participants agreed that the majority of digital credit operations are the same as those in traditional lending. However, certain processes, such as risk origination and risk assessment, have evolved because of increased access to and use of customer data.

Emerging Pain Points for Digital Credit

The discussion moved on to the operational pain points faced by providers and their intermediaries.

Low awareness of data-related risks: The chief concerns of the attendees centred on data protection and privacy. The participants felt that the average Indian consumer’s awareness of data-related risks is minimal, and that educating customers about privacy and data protection issues is crucial; the providers at our workshop took their own roles in this process very seriously. Participants also believed that customer data should not be shared without explicit consent. At the same time, they conceded that consumers often cannot tell what exactly they are consenting to.

Participants also highlighted that risky customer data practices already exist and are not unique to the digital credit space. For instance, participants discussed the large role that Direct Selling Agents (DSAs) have traditionally played in the selling of financial products by contacting potential customers. Currently, DSAs are a weak link when it comes to securing customer data, since there is no clear procedure to monitor and sanction these agents.

New data for credit assessments: Next, the participants discussed the use of alternative-data-based assessments for lower-income customers, to widen the potential to offer credit products to them, since these customers often lack the traditional credit scores that support assessments of creditworthiness. It was emphasised that standardised credit products can lead to financial exclusion because of exclusionary eligibility criteria.
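A deliberately simplified Python sketch of what such an alternative-data assessment might look like. The features, weights and scale are invented for illustration and are not any provider's model; the point is that a thin-file borrower can be scored from non-bureau signals.

```python
# Invented, simplified "alternative data" credit assessment: scoring a
# thin-file borrower from non-traditional signals instead of a bureau score.
# Feature names and weights are illustrative assumptions only.

def alt_data_score(features):
    """Combine alternative-data signals into a 0-100 score."""
    score = 0.0
    score += 40 * features.get("utility_bills_paid_on_time_ratio", 0.0)
    score += 30 * min(features.get("monthly_mobile_topup_inr", 0) / 500, 1.0)
    score += 30 * features.get("ecommerce_repayment_ratio", 0.0)
    return round(score, 1)

borrower = {
    "utility_bills_paid_on_time_ratio": 0.9,  # 90% of bills paid on time
    "monthly_mobile_topup_inr": 250,          # modest but regular top-ups
    "ecommerce_repayment_ratio": 1.0,         # cleared all platform dues
}
score = alt_data_score(borrower)  # 40*0.9 + 30*0.5 + 30*1.0 = 81.0
```

The privacy question raised at the workshop follows directly: each of these input features is itself sensitive personal data about the borrower's daily life.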

In this context, the question of privacy arose – specifically, whether certain types of alternative data could compromise the privacy of individuals and whether this was a valid consideration. Participants’ views were divided on the importance of this question to the end customer – with some musing that privacy could be a “luxury” problem and others priding themselves on placing strong value on their data privacy practices.

Need for standardised borrower assessment, fair lending requirements and front-end provider liability: Typically, assessing a borrower’s creditworthiness involves gauging the ability to repay, the intent to repay, and identity. This process is standardised in countries like the US and the UK. In India, however, there is no standardisation of the borrower assessment process, which exacerbates the challenges of evaluating customers.

In the US, fair lending requirements prevent lenders from discriminating on the basis of pincode, race and similar attributes. Equivalent provisions do not currently exist in India. Participants noted, however, that discrimination in the US can still be implicit within lending practices, hidden in black-box models; partly as a result, American lenders do not share their assessment processes.

All the participants agreed that if any customer harm arises, the customer-facing institution must bear responsibility and liability, irrespective of the disintermediation of the chain of credit in the digital context. There cannot be a situation where responsibility for the customer’s rights is diffused across multiple entities.

Regulators need to factor in market development and stakeholder perspectives: Participants highlighted the need for regulators to let the industry reach a meaningful size and shape before introducing guidelines. If regulation precedes the industry’s development, it can shape the formation of the industry (instead of market forces doing so).

The attendees also remarked that digital lenders have no formal forum for engagement with key regulators, making it difficult for them to provide feedback ex ante on the possible impact of proposed regulation on the market and on customers. One recent initiative that participants discussed was the Digital Lenders Association of India (DLAI), which seeks to work closely with the government, regulators and policymakers on behalf of those involved in the core lending business and facilitators in digital lending.

Overall, the workshop gave us insight into the roles of the various actors in the digital credit ecosystem in India, and their perceptions of managing risks to customers.




[1] See: All-India Debt and Investment Survey (2014), http://mospi.nic.in/sites/default/files/publication_reports/nss_577.pdf
[2] See: http://www.business-standard.com/article/companies/alok-mittal-returns-as-entrepreneur-launches-platform-for-smb-lending-115100100047_1.html
[3] See: http://www.amazon.in/b?ie=UTF8&node=8520691031

23 Jun

Stress Testing Methodology – Brief Comparison Across Regulators

By Nishanth K & Madhu Srinivas, IFMR Finance Foundation

The table below summarises, along some key dimensions, the stress testing methodologies adopted by the central banks in India, the US, the UK and the EU to assess the stability of their banking systems. Note that the stress tests that individual banks conduct themselves, as part of their Internal Capital Adequacy and Assessment Process (ICAAP), do not figure in our comparison. Also, the analysis below is based on the stability/stress test reports of the respective regulators for the year 2016.


Click here for PDF of the infographic.

7 Jun

Aadhaar’s Potential for Financial Inclusion

By Bindu Ananth & Malavika Raghavan, IFMR Finance Foundation

We should care deeply that millions of Indians are still turning to expensive informal financial services in the face of seasonal and volatile incomes, despite years of trying to improve access to basic financial services. Any innovation with a promise to provide disruptive solutions deserves careful attention and a concerted effort to ensure success. It is in this spirit that we approach the Aadhaar debate.

Test and learn—but then evolve

For years, our country’s financial inclusion strategy tried to expand access by opening more bank branches. One reason this has not scaled is that providers face high operating costs for “low-value” services, driven in part by physical “know your customer” (KYC) procedures and paper-based verification of transactions. Previous work by our colleagues Anand Sahasranaman and Deepti George showed that the cost of delivering a rural loan of Rs10,000 through a branch could be Rs4,153 (41.53%) for a public sector bank and Rs3,207 (32.07%) for a private sector bank.

Aadhaar and IndiaStack have held out the promise of overcoming these costs using technology—through e-KYC for users, remote verification of transactions and lowering transaction costs of payments. Taken with other inclusion efforts, we are within striking distance of every Indian having access to a bank account and being able to easily send and receive payments. Not a panacea by any means but a definite milestone for inclusive development.

However, we have also arrived at an inflexion point for the unique identifier (UID) system. If the first part of the task for this system was about technology implementation, now it faces an important next step—creating trust and confidence in that technology and the institutions that administer and oversee Aadhaar. We must have the openness and the humility to leverage the potential of Aadhaar to deliver access to basic services while continuing to work on gaps and weaknesses, some of which we will only learn as we go.

Improving protections for users

We have some specific suggestions that need immediate attention with respect to financial service providers, the Unique Identification Authority of India (Uidai) and users, when considering Aadhaar and its use in digital financial services.

We must make providers liable to put customers back “in the money” for failed/unauthorized transactions: it is important that the users of Aadhaar-linked accounts and Aadhaar-enabled payment processes do not bear the costs of failures in this system as the volume of digital payments increases. The Reserve Bank of India (RBI) has taken the right steps by releasing a draft circular on limiting liability of customers in unauthorized electronic banking transactions. We need to move this into live regulation and extend it appropriately for non-bank providers and third parties.

Over 1.15 billion Aadhaar numbers are now in existence. Such a massive public database containing citizen information needs clear audit and accountability procedures.

We should support an independent observatory to monitor Aadhaar-based transactions: more hard data about the successes and failures of Aadhaar-based transactions will help drive an informed discussion about the system’s efficacy. An independent body monitoring Aadhaar transaction failures and user experiences, and publishing this data periodically, could be a strong accountability mechanism and improve Aadhaar.

We need a “living will” for Uidai: in large-scale projects of this nature, it is helpful to think about worst-case scenarios. In the banking world, “living wills” have been an interesting policy tool to force systemically important institutions to lay down their game plan in the event of bank failure. Similarly, no matter how improbable it might seem today, it would be useful for Uidai to lay out a plan to deal with a severe security breach.

We also need to reform the Aadhaar redress mechanism: currently, the redress and complaints system at Uidai is opaque, which is especially concerning since the Aadhaar Act empowers only Uidai or its officers to initiate proceedings for disclosure or misuse of users’ information. Renuka Sane and Vrinda Bhandari’s writing addresses these lacunae clearly. We need a new framework, and investment, to set out accountability, reporting and performance expectations of Uidai for the Aadhaar grievance process.

We need market conduct oversight for data use by firms across the financial sector: in addition to stronger data protection laws, we need active oversight for firms using personal data. This applies more widely to the financial sector, but we highlight it in this discussion since Aadhaar-seeding of bank accounts is rising, requiring enhanced monitoring to prevent risks, and as more financial firms use IndiaStack as authorized user agencies. We must actively supervise how these firms and government use the Aadhaar system in conjunction with other customer data they hold.

We need to protect the privacy of all residents of India across all platforms, including Aadhaar: the idea that poorer people are less entitled to privacy should be dispelled. Compromising financial privacy could set back wider financial inclusion efforts, if improper disclosure of data leads to denial of credit or reputational harm. This issue goes well beyond Aadhaar, but the ubiquitous use of the Aadhaar number, including for finance, makes this more pressing.

To conclude, a project such as Aadhaar with implications for transforming service delivery must be strengthened in specific ways discussed here so that confidence and trust in the system grows.

This article first appeared in Livemint.