All right. Good morning to everyone, and thank you for tuning in today. My name is Tom Pahl, and I serve as the Bureau's Deputy Director. Welcome to the CFPB Symposium on "Cost-Benefit Analysis in Consumer Financial Protection Regulation." This symposium is the fifth in a series we started last year that explores critical issues in consumer protection policy in today's dynamic financial services marketplace. In a moment, I will have the honor of introducing the Bureau's Director who will deliver recorded opening remarks. The Director regrets that she's unable to attend today's event because she's testifying up on Capitol Hill but asked me to welcome all of you in her absence. Before I begin, let me tell you about what you can expect at today's symposium. Following the Director's remarks, the Bureau's Deputy Assistant Director in the Office of Research, Susan Singer, will moderate the first panel discussion. The panel will consider questions related to how the Bureau should use cost-benefit analysis in our work, including developing consumer financial regulations.
The panel will also consider what practices provide the proper incentives for the best use of cost-benefit analysis. The first panel is scheduled to end about 10:35, after which we will have a 10-minute break. Following the break, a second panel will begin at approximately 10:45 a.m. The second panel will be moderated by Paul Rothstein, Financial Institutions and Regulatory Policy Section Chief in the Bureau's Office of Research. This panel will focus on how the Bureau may help advance the methodology of cost-benefit analysis. This panel may also consider the data and economic models that should be developed for cost-benefit analysis in consumer financial protection regulations, how to address distributional concerns, and how to partner with others in this work. The symposium will conclude at about 11:45 a.m. today. For those unable to watch via WebEx today, a recording will be made available on the Bureau's website. Finally, as a friendly reminder, the views of our panelists today are their views.
They are greatly appreciated and very welcome, yet they do not necessarily represent the views of the Bureau. It's now my honor to introduce Director Kathy Kraninger for her recorded opening remarks. Director Kraninger became the second confirmed Director of the Consumer Financial Protection Bureau in December of 2018. From her early days as a Peace Corps volunteer to her role in establishing the Department of Homeland Security to her policy work at the Office of Management and Budget to CFPB, Director Kraninger has dedicated her career to public service.
It is my privilege to introduce her for her recorded remarks. Isabel, if you could please play the Director's remarks, I would appreciate it. Thank you for joining the Bureau this morning for our continuation of the symposia series. Today we're going to be talking about cost-benefit analysis. Thank you to the panelists who are going to be providing their expert perspectives on this topic and to the staff of the Bureau who have helped us put this event together. The symposia series is an opportunity for us to talk about in a public forum the issues facing the Bureau that are particularly challenging, and this particular topic is a very important one. Cost-benefit analysis plays an important role in the Bureau's rulemaking and is something that Congress directed us to do in Section 1022. So the Office of Research provides the experts internally to work on those rulemakings, and they do that from start to finish.
We also have a requirement to look back on our rulemakings, 5 years after the fact, using the data that we collect along the way to assess whether that rule was truly effective and had the impacts that were intended. So it's an important part of our rulemaking, and we really need to bring cost-benefit analysis considerations into our decision-making more broadly. For example, the research that we do to assess the effectiveness of our financial education efforts: it is an important consideration that I have talked with the staff about quite a bit as to how we can do that, how we can share that with the public, how we can bring academics also into that thinking and get better information out there about what works and what doesn't.
So I'm looking forward to the discussion today and the perspectives that these experts will provide on this important topic, and as with the other symposia, we will come forward with an after-action report and let you know what action we are going to take as a Bureau to follow up. Thank you and have a great day. Thank you, Isabel. At this time, I'd like to welcome Susan Singer and the panelists to our event today. Susan, the floor, or whatever is the WebEx equivalent, is yours.
Thank you very much. Thanks, Tom. Good morning. My name is Susan Singer, and I serve as the Deputy Assistant Director in the Bureau's Office of Research. At this time, I'd like to welcome our panelists: Jerry Ellig, who is the Research Professor at George Washington University Regulatory Studies Center; Stephen W. Hall, Legal Director and Securities Specialist, Better Markets; Brian Hughes, Executive Vice President and Chief Risk Officer, Discover Financial Services; Howell Jackson, Professor of Law, Harvard Law School; Amit Narang, Regulatory Policy Advocate, Public Citizen. So I think now we'll just start diving right into the topics. The first one is the usefulness of cost-benefit analysis in general and in policy development. In your experience or research, does cost-benefit analysis contribute effectively to policy development and consumer protection or financial regulation? For example, does cost-benefit analysis help focus the conversation and provide transparency, or does it narrow the discussion and exclude important issues? Howell, will you start us off? Great. Susan, thank you very much. It's a pleasure to be here, and I appreciate this opportunity to participate in the panel.
Just as a preface, I should say that my remarks here are going to be drawing heavily on a recent paper that I did with Paul Rothstein of the Bureau, who will be moderating, I think, the next panel, and in the paper, we undertook a survey of 72 different consumer protection regulations, including a substantial number from the Bureau, to see how the cost-benefit analysis was structured, with a particular focus on benefit analysis in addition to cost analysis. Probably not a surprise, since I spent some years studying the topic, that I am an enthusiast for cost-benefit analysis. I think it is a desirable adjunct to the regulatory process, and I would just say as a preface, even those who don't formally engage in cost-benefit analysis in policymaking are making implicit judgments that action does more good than harm.
So there's, at a minimum, some sort of intuitive benefit analysis going on in all policy decisions, and I think the structure of cost-benefit analysis just brings rigor and transparency to the process and is inherently a good thing in my view, because it's easy to have assumptions and biases and expectations that more careful scrutiny and empirical analysis will not necessarily bear out or might correct or refine. So I think the process of cost-benefit analysis as required by Section 1022, as the Director mentioned, is a good thing for rulemaking and perhaps other areas. What I would say, though, as a preliminary matter, and I think some of my colleagues will get into it, is there are some inherent problems with cost-benefit analysis, and the chief one is that it's difficult to get good data and good estimates about the range of relevant issues.
And in particular, it's more difficult to get good estimates of the benefits of consumer financial protection regulation than it is to get the costs, because you have, with the industry, a group that is directly affected by regulation and will readily come forward with cost estimates, sometimes pretty expansive cost estimates, and it's just difficult for consumer groups and others to do the same thing for benefits. While I'm strongly in favor of cost-benefit analysis, I think the Bureau and other agencies need to be cognizant of its limitations and structure their processes accordingly, and I'll have a little bit more to say about that. But I think I could be put down as a guarded enthusiast for cost-benefit analysis. Thanks, Howell. Steve? Yes. Thank you, Susan, and thank you especially for including Better Markets in this symposium. It is, indeed, a very important topic, and we've devoted considerable attention to it over the years at Better Markets.
I'd like to share, in answer to this first question, three core points, and then offer just a little bit of elaboration. In our view, the application of quantitative and exhaustive cost-benefit analysis, or the attempt to do so, actually does more harm than good because it's so imprecise, so burdensome, and because it lends itself to litigation and rule challenges. Number two, the industry has largely successfully weaponized this methodology, and that strategy has been based on three mythologies, which I'll briefly describe. And third and finally, it's our view that policymakers can and should address this problem and in effect rein in the infatuation with cost-benefit analysis, and in the meantime, until that day comes, I think agencies can follow certain guidelines or principles to mitigate the impact of it, take advantage of what good it can do, but also limit its impact.
On the first question, as Howell said, there are a number of, I think, widely recognized problems with cost-benefit analysis. It has inherent inaccuracies. It relies so much on assumptions and speculation. It depends on data, which either doesn't exist or is in the hands of the industry and inaccessible, and it's also biased in its outcomes because it doesn't value benefits adequately. It doesn't take into account direct and indirect, tangible and intangible benefits, whereas the costs typically faced by the regulated industry are easier to quantify and monetize. And it actually undermines the development of strong rules. It consumes enormous resources, takes years of analysis and research. It's inherently dilutive, and we've seen several examples of that recently. And it sets, as I said, good rules up for court challenge when the industry isn't happy with the outcome. And that takes me to the second point, which is the weaponization, especially over the last 20 years and largely in the courts.
The industry has been very successful in slowing, weakening, and nullifying rules using cost-benefit analysis. The three mythologies I referred to are, first, that regulation in general threatens to impose crushing burdens on the industry and actually harm consumers by supposedly depriving them of choice and access to services and products. That myth has historically been debunked, but it continues to thrive in this arena. The second myth is that cost-benefit analysis is a very appealing, rational, objective methodology that is available to develop optimally tailored rules, and for the reasons I've already covered, we don't think that's true.
And the third mythology is that Congress actually required agencies, or does require them, to engage in the process, and we've devoted considerable attention, as have some others, to demonstrating that in fact the law is very carefully tailored and frequently, indeed typically, does not require exhaustive cost-benefit analysis. The two examples I'll cite that are especially noteworthy are, first, the SEC's recent promulgation of Regulation Best Interest, a terrible missed opportunity the SEC had to adopt a strong uniform fiduciary duty for financial advisors. Largely predicated on the notion that a strong duty would impose undue burdens on the broker-dealers and actually destroy their business model, the SEC took a severely compromised and weak approach. In my written product, I've also highlighted another very disturbing example, which is the CFPB's own payday underwriting rescission rule, which is largely based on really upending what was an incredibly exhaustive and thorough cost-benefit analysis. The agency now has done a 180 and decided that increased access to payday lenders should be deemed a considerable benefit.
Finally, on the sort of solutions front, at some point, someday, Congress should clarify what the proper duty of the agencies is. They should install leaders at the agencies who really believe in strong regulation in the public interest, and they should seek judges who are not infatuated with this doctrine. In the meantime, agencies should follow the law, do what they are required to do, but also not engage in more cost-benefit analysis than they need to. They should expansively consider the benefits of regulation for the public, they must discount the incessant stream of sky-is-falling predictions from the industry, and they must thoroughly vet the data and the studies that the industry pours into the record to sway the outcome.
Thank you, Steve. Brian? Thank you, Susan, and thanks to the CFPB for holding today's symposium on this important yet challenging topic, as we've heard from the different views. As someone responsible for implementing regulations and serving about 30 million customers, I appreciate the opportunity to participate. I think, as Professor Jackson noted, cost-benefit analysis is happening all the time, whether a person is deciding what to order for lunch or deciding regulatory policy, and those decisions are affected by that person's beliefs, biases, and assumptions. And it's not the analysis, I think, that creates bias. It's the beliefs and assumptions a person uses for the analysis that can create bias and that can keep the analysis from reaching an accurate conclusion. So I don't think it's a myth that the first thing a good cost-benefit analysis does is make those assumptions transparent so that they can be examined, debated, and tested. And in my 30 years in business, I've found cost-benefit analysis essential. You can't quantify everything, I think, but I've found a good analysis can unleash creative thinking and help balance competing interests.
And that's why it's expected by corporate boards and by regulators. At Discover, it's leveraged every day to ensure we best serve our customers, the public, and our business, and I think a good example of the value of a good cost-benefit analysis occurred a few years ago when we decided to become the first bank to offer FICO scores free to our customers. This is something where it was very difficult to anticipate the outcome, but we used a combination of research, testing, and cost-benefit analysis to sort through all the different approaches to how to get a customer to be more aware of their credit and to become a better informed and educated customer. We had options around education, an app for personal finance, an online game or rewards program, so many different ways to try to get a consumer to learn more about their credit. And cost-benefit analysis was essential to the decision to just use the FICO score, put it on a monthly statement, put it online along with a little bit of education, and what we found by doing that is the consumers who engage with it do see improvements.
And I think that the value of a cost-benefit analysis in guiding regulatory policy is very similar. There are many options. There are many consequences. Of course, they are hard to estimate. I would agree the benefits are even harder to estimate, but by using cost-benefit analysis to make the assumptions transparent so that they can be examined and tested, we can find the best outcome across consumers and the business and ensure that we mitigate unintended consequences. Thank you, Brian. Amit? Thank you, Susan, and CFPB for inviting me on behalf of Public Citizen to take part in this important conversation.
Let me first start off by commending the CFPB for its history of rigorous and high-quality research into consumer harms from financial markets and the benefits of protecting consumers. I just want to highlight one example. Actually, it's a recent example, which is why I want to highlight it. Just this month, the CFPB released the results of its Making Ends Meet survey, which shows that consumers are really struggling financially right now, and the consumers that are disproportionately struggling are African Americans and Hispanics. Now, this survey was actually conducted before the pandemic hit. So the situation has likely gotten even worse, and so for folks that haven't seen the survey, I highly recommend you take a look. I think it's strong evidence that the CFPB's mission of protecting consumers by policing the financial markets is needed now more than ever. So the problem as I see it is that while the CFPB has an enormous amount of data and evidence of consumer harms in the absence of strong regulations, and of the benefits to consumers when those regulations are in place, it's challenging for the CFPB to translate those benefits into economic values.
So that often means that cost-benefit analysis downplays or ignores regulatory benefits to consumers just by virtue of their being more difficult to quantify. It's clear that the CFPB does a very good job actually of identifying benefits in qualitative fashion, but my opinion is that those benefits actually become less transparent to the public when it turns to quantitative analysis. And I'd just give one example from my written statement with respect to the recently finalized Home Mortgage Disclosure Act Regulation C rulemaking. In that rulemaking, the CFPB reduced reporting requirements for loan data from financial institutions, and the CFPB did concede essentially that the reducing of those reporting requirements is going to make it more difficult to combat housing discrimination, particularly in vulnerable communities. But those lost benefits were not quantified, as the CFPB was not able to quantify them. So it made it seem in the cost-benefit analysis that a lot of savings to financial institutions outweigh the very real benefits of combatting housing discrimination for vulnerable populations. Again, I think the CFPB has done a good job of rigorous research and analysis into consumer harms and the benefits of consumer protections, and that is where the CFPB should focus its attention.
But the problem is cost-benefit analysis narrows those benefits into economic terms, which causes them to be undervalued in the 1022(b) analysis. Thank you, Amit. Jerry, you're next. Okay. I want to echo everybody else in thanking you all for putting on this symposium. It's great to have agencies do a deeper dive into these kinds of issues when they're trying to figure out what to do. I think benefit-cost analysis can be very useful and helpful to policymakers, especially if we have a realistic understanding of what it can and can't do and use a little bit of common sense.
First off, I think that rather than simply talking about benefits and costs, we ought to be thinking in terms of the paradigm that's used in executive branch agencies, which is regulatory impact analysis. A regulatory impact analysis usually includes a benefit-cost analysis, but it starts with an analysis of the underlying problem that regulators are trying to address. In fact, a lot of the analysis that's in Paul Rothstein's great article is good problem analysis. I know their focus is on how do you figure out benefits, but they're starting where you need to start to figure out whether there are benefits, which is what's the problem we're trying to solve. A regulatory impact analysis may also look at distributional issues, and those are a big issue in financial regulation.
There are certain underrepresented groups, certain groups that have had greater hardships than others that Congress is concerned about. There's room in a regulatory impact analysis to figure out how do the benefits fall on particular groups and how do the costs fall on particular groups. The second thing I think we need to keep in mind is the analysis is not the same thing as the decision. If you read any good textbook on benefit-cost analysis, they will say the job of the analyst is to analyze and do the best job they can of figuring out what are the likely consequences of alternative courses of action. The job of the decision-maker is to make the decision, and I'm afraid even some economists and folks in the legal community who are big supporters of benefit-cost analysis sometimes maybe get a little overenthusiastic and present it as an algorithm that's going to crank out a number that will make the decision for you.
I think that rarely happens in practice, and there's still going to be a lot of room for judgment when policymakers make decisions. But it's judgment informed by better knowledge of potential consequences. And by the way, I just want to address this issue of whether it is easier to estimate costs than benefits. If you look at studies by the Government Accountability Office, if you look at studies by independent scholars that look at the analysis that either independent agencies or executive branch agencies do of regulations, they will often find that more studies have a figure for costs than for benefits. That doesn't mean it's the right figure, because if you go and look at the cost analysis in a number of agency regulatory impact analyses or benefit-cost analyses, what you'll find is the only figure for cost is paperwork cost.
So, yeah, it's easy to come up with a figure for paperwork cost. That doesn't mean that that figure is telling us the entire cost of the regulation. So economic cost is opportunity cost, what are all the good things we give up in order to get the benefits we're trying to create with regulation, and figuring out the actual opportunity cost of regulation involves substantial empirical challenges that are just as big as the challenges involved in trying to figure out what the benefits are and how to put a number on those.
Thank you, Jerry. We're going to move on to our next topic, which is cost-benefit analysis and Bureau structure. The Division of Research, Markets, and Regulations provides the staff for most rulemaking teams and also houses the Office of Research. The Office of Research is accountable for the cost-benefit analysis in each rule. In your experience or research, what are the strengths and weaknesses of this structure? Do you expect it to facilitate high-quality cost-benefit analysis that informs leadership and the public and advances policy development? And on this one, Jerry is going to lead us off. Okay, thank you. I'm glad you're raising this question because a lot of times, we talk about what's the role of economic analysis or how to do it without thinking about what's the institutional framework you need to support objective analysis and ensure that the decision-makers get the results of that analysis and consider it.
And I just completed in 2019 a study for the Administrative Conference of the United States that looked at precisely this issue: how are economists who do work on regulation organized in various government agencies, and how does organization affect the objectivity of the analysis, the quality of the analysis, and the way that it's communicated to decision-makers? So rather than judging the Consumer Financial Protection Bureau, I'm going to describe what we found across other agencies, and then you can kind of hold that up against what CFPB does and figure out for yourselves where are we like other agencies, where are we different, and so forth. The biggest, most important thing is that the organizational structure alone is not sufficient to really guarantee anything.
You need an organizational structure, a set of decision-making authorities, and day-to-day practices and culture that reinforce each other. And when we look across government agencies, we find that in general, the type of structure and management system that gets you higher quality and more objective analysis is when economists are managed by other economists or by other analytical people rather than being managed by the folks who are developing policy and writing the regulation. The most common way agencies achieve that is by having a separate bureau of economics or division of economics headed by someone like a chief economist or other person with an equivalent type of title.
Now, I talked to folks in some agencies where economists are in the program office that develops regulations, and in some cases, folks at those agencies told me, "Well, we don't really have a problem being objective because within that office, we are managed by GS-15 economists who can kind of run interference and protect our independence." So it seems like it's possible to get an honest degree of independence within either structure. The key is economists being managed by economists. Secondly, decision-making authorities. In the agencies that really try to make sure that the results of economic analysis are communicated to decision-makers in undiluted form, the chief economist or the equivalent head of the economists has the ability to make recommendations directly to the ultimate decision-makers.
Sometimes that's even as strong as that economist having signoff authority on regulatory initiatives, equivalent to what, say, the general counsel of the agency would have before they go to the decision-maker. In other cases, it's more of an advisory thing, but the decision-making authorities are key. Finally, you need day-to-day practices and an underlying culture that encourages objective analysis and encourages frank discussion, and particularly when economists are in a separate unit from the one that develops regulations, you need day-to-day practices that, nevertheless, get the economists involved on regulatory development teams from the beginning, even though they do not report to the people on the team that they're working with. And there are agencies that have done this kind of thing. You have a culture that encourages long-term research and development to inform regulatory decisions in the future so that you don't have a situation where an agency has a regulatory initiative and then, all of a sudden, the analysts have to try to figure out how to do some analysis.
It works much better if it's something they've been working on developing and publishing on in the professional literature all along. So there's a lot more in the report than what I've said, but those are some highlights that may be useful. Thank you, Jerry. Amit, when you're ready to go? Sure. Thanks, Susan. So let me just say that in terms of how personnel working on the analysis of costs and benefits are structured within the Bureau, I think the most important factor that is critical to the integrity and credibility of the analysis is its independence from political influence. So that basically means that agency personnel working on the analysis should be essentially walled off or otherwise independent of direct supervision by high-level political officials in the agency. For the CFPB, that would include, of course, the Director as well. With respect to what Professor Ellig was just talking about, I think I actually like the way that CFPB has integrated their economists from the Office of Research and the other officials in the Office of Research with the rulemaking staff.
I think that's a good way of ensuring that the analysis of costs and benefits is also being informed by the statutory responsibility Congress gave the CFPB to protect consumers from financial harm. I worry that if economists are independent of the rulemaking team, they're not going to be informed enough in terms of their analysis as to what is statutorily permissible, what are the bounds that Congress gave CFPB, and that certain decisions and recommendations may be made on economic grounds that simply don't comport with the statutory and legal responsibility that Congress gave the CFPB. So I do like how the staff is structured right now and integrated with the rulemaking staff. Thank you, Amit. Steve, you're next. Thank you, Susan. I think there is, indeed, a tension between ensuring quality of economic analysis on the one hand and making sure it doesn't dominate the outcome on the other. To echo what Jerry said a few minutes ago, the analysis does not equal the regulatory decision, nor should it, I would add.
Bottom line is that the two chief concerns that we have are, number one, that the economists and the economic analysis they produce do not dominate or control the outcome and ultimately shape the rule, and second (and I think Amit referred to this a minute ago), it's equally important that the economic analysis process be insulated from political interference, if you will. And there have been reports of that occurring, of course, at the CFPB in connection with the payday lending rule that rescinded the underwriting requirements. So that's our take. I think Jerry's analysis was very helpful in terms of whether it should be a divisional placement or a functional placement of the economists or a hybrid, and that sort of seemed sensible, again, subject to these caveats that there have to be meaningful limitations on the ultimate impact. What really matters in the first instance is the legal analysis. What is, in fact, the nature of the economic analysis that Congress has intended and that will best serve the process? That should be ultimately the guide.
And if I may, I just want to go back a bit because I think some good and interesting points and valid ones were made earlier by a couple of speakers pointing out that cost-benefit analysis really pervades life, if you will, and I think it's important to highlight the fact that this notion of weighing costs and benefits really falls on a broad spectrum. Yes, indeed, we do it intuitively and unconsciously every day. Then there are qualitative economic analyses and then ultimately quantitative. The quantitative are the most difficult and most pernicious, ultimately, but even the qualitative pose problems. And I'll just end this supplemental point by saying, if you consider the SEC's duty to look at the impact of a rule on efficiency, competition, and capital formation, even if they're not required to quantify, it's still a devilishly difficult problem, and it's one that will almost inevitably be sufficiently malleable to lend itself to court challenge.
Thanks, Steve. And Howell will finish up this question for us. Thanks. Yeah. Just quickly, great to have Jerry's work on this, which I think is very helpful. From my personal perspective, having worked with the Division of Research, Markets, and Regulations, I do think it's advantageous how the economists are integrated with the lawyers as the regulations are written. Everyone at the Bureau knows there's going to have to be a Section 1022 analysis in the Federal Register.
I think it has been advantageous to get the economists involved early on in regulatory design issues, as opposed to what's traditionally sometimes happened, which is the economists from a separate division were just brought in after the fact to bless what was done, which is really not an effective way of getting input. So I think that piece is good. The one drawback I would say, and I think the Director's introductory remarks kind of alluded to this, is that when you just do cost-benefit analysis at the regulation level, there's a tendency to sort of push action into enforcement or education or other things, and the Bureau's structure encourages that a little bit; the Bureau definitely should also look at the impacts of enforcement policies. So sometimes there's a push to do UDAP enforcement as opposed to UDAP rulemaking, and I think you want to get the economists occasionally looking at enforcement and education and other things too.
So getting them out of RMR occasionally, I think, is a valuable thing to think about. Thank you, Howell. We're going to move on to the third topic, which is quantifying and comparing benefits and costs. Section 1022(b)(2) of Dodd-Frank does not require the Bureau to net expected costs against benefits or to provide a table of costs and benefits or to assert that benefits exceed or justify costs, and the Bureau generally does not do so. Further, it is frequently not possible to reliably quantify the costs or benefits of regulatory requirements, although the Bureau's internal policies and procedures state that costs, benefits, and impacts should be quantified to the extent reasonably feasible and appropriate.
In your experience or research, would greater quantification result in more informative cost-benefit analysis, if not generally then under specific conditions? When is quantitative cost-benefit analysis more useful and informative to leadership and the public than a partly quantitative or non-quantitative cost-benefit analysis? And Amit is going to lead us off on this one. Thanks. Okay, great. Thank you, Susan. So I want to start off just by making clear (and I think there's broad agreement on the panel about this) that the CFPB's 1022(b) requirement only requires the agency to consider the costs and benefits.
It does not require the agency to go further with respect to comparing, justifying, or netting out costs and benefits. That's clear when you compare the cost-benefit provisions that apply when the CFPB takes actions under other statutory authorities, rather than rulemaking under 1022, but it's also clear when you compare the 1022(b) language with other statutes that other agencies administer, say, the EPA or the Department of Labor. Those statutes often are silent on cost-benefit analysis, but many also have language that more clearly requires comparing or some form of netting of benefits and costs, and that's quite different from what's in the 1022(b) language.
Just going back to the HMDA example that I was referring to earlier, I think that that rule (and again, it's just one example, but an example of information disclosure) shows that the benefits of information disclosure have proven extremely challenging to reduce to economic terms, for the CFPB but, frankly, also for other agencies whose statutory mission is to provide more information to the public and require more information from regulated entities. There's a wealth of data and evidence that makes clear that this information qualitatively benefits consumers and results in enormous benefits in the HMDA case, of course, in terms of combatting housing discrimination and ensuring that financial institutions are serving underserved communities that are disproportionately harmed.
But, again, the data to translate those very clear benefits of information disclosure into economic values is simply not there. This is a methodological limitation that is inherent in cost-benefit analysis currently. Potentially, in the future, as Steve has said, there could be credible ways to quantify or monetize the benefits of information disclosure to consumers and to the CFPB, frankly, but we're not there yet. And so I think that because of that, the utility of cost-benefit analysis is limited with respect to dealing with issues like information disclosure and the benefits of taking action in that space. Thank you. Howell, you're next. Yeah. So I agree with what was just said about the limitations of cost-benefit analysis, and I think that the approach that the Bureau has taken, which is a pragmatic one of quantify and monetize when possible, has got to be the right one. So any sort of mandatory monetization of net benefit, I think, would be a completely inappropriate direction to go in.
I do think that there are a number of things that the Bureau should be mindful of as it attempts to quantify, particularly benefits. One is to be specific about what the benefits that the agency is focusing on really are. Sometimes in our survey, we found there will be half a dozen or more benefits asserted, and clearly defining what the particular benefit is is helpful for transparency purposes. Oftentimes, to the extent that benefits are quantified, the Bureau and other agencies pick up something that's quantifiable, like the reduction in time spent reviewing documents or maybe a reduction in foreclosures or some other quantifiable item. That doesn't really get to the true benefit, as Amit was kind of referring to. Oftentimes the true benefit is a reduction in financial distress or maybe improved consumer decision-making, and actually sometimes fewer transactions may be better. If consumers are borrowing too much and having too much illiquidity, that could be a problem, and less borrowing may be better as a result. I think it would be very helpful if the Bureau would specify what the ultimate consumer benefit is, be candid when they don't have parameters, sort of set the stage by saying it would have been better with this regulation if we had good estimates for the following items, and kind of create a catalog so outside academics and other experts could try to develop parameters and estimates for future decision-making.
So I think the Bureau should think about the process of identifying what they need a little bit more precisely in the hopes that these parameters and estimates can be developed in the future. So that's one thing I would push the Bureau on here. Thank you. Steve? Thanks, Susan. Another interesting question. I might start out by just pointing out, highlighting, if you will, that in legal terms, it's quite clear Congress made a deliberate decision in Section 1022 to require the agency to "consider the costs and benefits," and that has significance under the law, in the case law in particular, dating all the way back to a 1950 Supreme Court decision that basically said the duty to consider, when imposed on an agency, gives it a lot of leeway and does not require it to quantify, let alone monetize, costs and benefits. And I think the CFPB deserves credit. It has frankly acknowledged that it's not subject to those duties. The problem is that it seeks to undertake them nevertheless, and in our view, there's no persuasive case that undertaking quantification whenever possible really does satisfy a cost-benefit test itself, if you will.
Second, I draw a distinction between quantifying and monetizing. They both pose challenges, but I think it's one thing to say we'll quantify, for example, the universe of consumers or investors, retirees, for example, that will be affected by a rule governing advisors or other products and services. There's naturally some utility in that, if done right. The attempt not just to quantify but to monetize can be much more devilish and in the end counterproductive. Third and finally, I would highlight a problem with data.
It's been referred to before, and one of the ironies that I think it's important to note is the disadvantage that agencies face, largely because it's very difficult to develop data. It's so labor-intensive, and the critical data is often in the hands of the industry, which is not always willing to share it. A classic conundrum, I think, that agencies face is exemplified in the SEC's defeat in court on the so-called "maker-taker" pilot program. What the agency basically was trying to do was say, "Look, we know there's a problem in the way customer orders are executed on our national exchanges, but we want to get more data to understand it better and to optimize the regulatory response." So they put up a pilot program for 2 years to sort of experiment with the impact of different levels of fees and rebates, and the D.C. Circuit invalidated the rule. So just as an agency is trying to develop a more robust database, presumably to help inform quality rulemaking, it gets stymied.
In essence, it can't really win. Finally, I'd just note as to part of this question (and I think others have talked about this in the literature especially) that cost-benefit analysis is a different thing in the scientific realm where natural laws govern. It's much more challenging in the human space and in the financial markets. Thank you. Jerry, you're up. Okay. I mean, on the general issue of quantification, I think, again, there are some commonsense ideas that I would hope people will agree on. One is the idea of proportionality: for a regulation that isn't very big or important, you don't want to do as much analysis as for a regulation that's really big and important, and that principle you can find in Executive Order 12866 guidance for agencies.
So there's that commonsense constraint on how much analysis you do, how much you try to quantify stuff. Another thing, though, I would want to note is you can find examples where agencies have done a reasonably good job of quantifying even the benefits of disclosure regulations. I can point to one which is actually an area now under the CFPB's jurisdiction, but it was originally a HUD regulation back in 2008 that revised the good-faith estimate of closing costs because studies by the Federal Trade Commission, which has jurisdiction over nonbank lenders, and by HUD found that a lot of consumers weren't understanding basic information that was in the disclosure form. And they were likely choosing higher-cost mortgages as a result, and studies by both the FTC and HUD found that you actually could redesign the disclosures in ways that would give consumers a better understanding of what they were getting and that their choices would change.
They would recognize, or do a better job of recognizing, the lower-cost mortgage, and on the basis of that, in 2008, HUD in its regulatory impact analysis did take a shot at quantifying the savings that would result to consumers because they're better informed, and it correctly noted that a chunk of that savings to consumers is a transfer from the mortgage industry to consumers. But in addition, there is a social benefit, which is the expansion of home ownership and the expansion in the number of refinancings that would occur because, in effect, the cost of getting a mortgage loan has gone down when consumers have a better understanding or are able to pick the low-price loan.
I mean, it's in the regulatory impact analysis, public record, and my understanding is, I think, in 2011, when authority over that transferred over to the CFPB, the CFPB basically reaffirmed that HUD regulation, and you all have probably done other things on that topic since then. But the example I'm most familiar with is that regulation from that time period because I was working at the FTC when the FTC started doing the studies that eventually informed the HUD regulation. Now, does that mean you can always quantify the benefits of a regulation or disclosure regulation? No. But it means it's not impossible if you make a good-faith effort, which is what HUD did when it was revising the good-faith estimate. Thank you. So our time is actually getting quite short. We have a little bit more than 10 minutes, and so I'm going to skip down to the very last question, on retrospective reviews, to make sure that all the panelists get an opportunity to weigh in on this question. The topic is the role of retrospective review of the effects of rules in the cost-benefit analysis of subsequent rules. Pursuant to Dodd-Frank Section 1022(d), the Bureau has conducted three retrospective reviews of the effects of significant Bureau rules and is in the process of conducting a fourth.
Two of these reviews have already contributed to the amendment of rules subject to assessment and to the cost-benefit analysis of those amendments. In your experience or research, how might the Bureau best use retrospective review, whether pursuant to Section 1022(d) or discretionary, to improve its practice of ex ante cost-benefit analysis? Should the Bureau seek to identify and explain significant discrepancies between predicted effects and actual effects? And on this one, Howell Jackson will lead us off. Thank you. Again, I think it's admirable, both what Congress did with the retrospective review requirement and the fact that the agency has been engaged in these efforts, and over the weekend I was just reading over one of the recent retrospective reviews, on mortgage servicing, and it's an admirable and serious document. So I think it's good to be doing this.
I have a couple of suggestions, some of which come out of the article I mentioned, and some come from other sources. But one thing that would be helpful at the front end is for the Bureau's original cost-benefit analysis to be more explicit about the likely impact in terms of market participants and consumer decisions because often there is not a clear baseline of expectations for what the regulation was going to do that would make it easier to do the retrospective reviews. And so I think that process will improve now that the Bureau is engaging in some retrospective reviews and running up against a problem.
The other thing I would say about the retrospective reviews (and you can see this in the mortgage servicing review) is I wonder whether the single regulation is the right unit of observation; I know it's what Dodd-Frank calls for, but there's some latitude here. In the mortgage space, the Bureau did a number of different things, such as the ability-to-repay rule. There were lots of changes that happened as a result of the Dodd-Frank Act, and trying to isolate the effect of a single regulation when there are multiple regulations going on is problematic.
So I think it may be more sensible for retrospective reviews to be done on a group of areas. It might even be appropriate to take into account enforcement activities and educational activities, because the ways in which the Bureau is influencing the market are multifaceted, and I think that often makes it more difficult to try to do retrospective reviews around a single issue. If I could, the other thing I would say (and I was going to mention this earlier with respect to the notice-and-comment question that we jumped over) is that one of the things that would enhance retrospective reviews is if the Bureau did front-end market assessments, similar to what the Financial Conduct Authority in the UK has attempted to do, which is to identify where they think there are market failures in consumer financial markets before regulations are adopted, so really to create a baseline and identify specifically the market failures or distributional problems that exist, which would really set the baseline for both regulations and retrospective reviews. Jerry kind of alluded to this, but you've got to have a theory of what the problem is, and often it's going to be economic problems.
Sometimes it's going to be social problems or distributional problems, but at least articulating those up front, I think, would be a good way to bring academics into the process, through the identification of the market failures. And it would make the retrospective reviews, I think, more meaningful to have a baseline. So that would be my largest suggestion here: that in all its major areas of analysis there be baseline market assessments against which the retrospective reviews could be judged. Thank you. Amit, you're up next. Thanks, Susan. And I actually agree with a lot of what Professor Jackson just said. I'll just make a couple of short and basic points. One is that I think the Bureau needs to be careful about ensuring the appropriate role for retrospective review among its broader responsibilities. It should not allow this backward-looking retrospective review, which is a secondary function of the agency, to impede or distract from the primary mission of the agency, which is forward-looking action to protect consumers. As Professor Jackson indicated, the retrospective reviews that have been conducted already have generally been high quality and have shown that the CFPB regulations being reviewed are working as intended and protecting consumers.
My concern is that the retrospective review process be balanced. The impression that some have been given of the process is that it's skewed towards revisiting regulations to find regulatory burdens on financial institutions that are unwarranted and to reduce those. Certainly, retrospective review also offers the potential for finding ways to strengthen or make more effective existing regulations, when the CFPB determines one is working as intended but sees ways to make it stronger.
The retrospective review process should be geared at allowing that to occur as well. Thank you. Brian? Thank you, Susan. I think one of the things that all the panelists have talked about this morning is just how hard it is to do a quality and effective cost-benefit analysis prior to a regulation going into place, even though it is extremely helpful. A retrospective review is a very commonsense approach to identifying where that cost-benefit analysis got things right and where it might have gotten things wrong and then going back and ideally making corrections or adjustments to the rules, something very common in business.
We always do a retrospective analysis, and we're always modifying our products and approaches to increase consumer welfare. So I think it's just common sense to be able to go back and do that retrospective evaluation, and in fact, I would think it could even go further. Right now it's a 5-year mandate. I think the CFPB would benefit from adhering to the broader GPRA process, even though it's not mandated by statute, and following sister agencies in conducting a 10-year lookback on regulations, which would be both a best practice and a way to improve coordination among the agencies.
And I think getting the benefit of that data would increase consumer welfare. This was a point made to me when I was speaking with Cindy Glassman, a former SEC commissioner and a current Discover board member, who is a strong believer in cost-benefit analysis. I think it's a point that the Government Accountability Office has made in recommending that agencies look at cost-benefit analysis prospectively and retrospectively. And lastly, it's a point that I can tell you, as someone responsible for implementing many regulations and looking out for the welfare of consumers, that regulations rarely come off perfectly as intended, and there are a number where, looking at the welfare of the consumer and looking at what the regulatory intent is, I can tell you there are better ways to get a better outcome for consumers by modifying or adjusting some of the regulations, just given how many different factors there are to weigh in their development and the challenges with anticipating all the unintended consequences.
So thank you, Susan. Thank you. Jerry? Well, whenever we talk about the importance of retrospective analysis, it reminds me of an old vaudeville joke. The joke is "Do you smoke after sex?" and the punchline is "I don't know. I never checked." That's the problem with a lot of our regulations. Nobody ever checks afterward to see what the heck happened, and we could have a much better regulatory policy and much better-informed debates in Congress if we did have more and better retrospective analysis.
I think Howell's ideas were expressed very eloquently. I cannot add to that. The only thing I would add is there was a great report for the Administrative Conference of the U.S. done a few years ago on retrospective review by Joseph Aldy, up at the Kennedy School, that I would highly recommend folks at the CFPB and out there in the viewing audience take a look at to get an idea of the current state of retrospective analysis and ideas for improvement. Thanks, Jerry. Steve? Do you have one? Hey, thank you. Yes, I have a few things to share. I think, first of all, it is, of course, a very appealing and, I think, sensible idea: because cost-benefit analysis is so fraught with uncertainty and can be so important in shaping rules, it does make sense in theory at least to go back and assess how well it worked, and that's consistent with our view that it doesn't work well, so let's test that and see.
But there are several problems. I mean, one is the one Amit mentioned, which is that it consumes a lot of time and resources, and it may divert the agency from the still very challenging problems that exist in the financial marketplace. I would add that it's not clear to me how or why the retrospective review itself would be immune from the same biases and challenges that infect quantitative cost-benefit analysis on the front end. Now, there are obviously differences, but I think that's still a serious concern. Then, finally, I'll echo what Howell said when he talked about the advantages of evaluating rules retrospectively in groups. I think there's a lot to be said for that, and the reason is this. One of the things we've argued is that evaluating the benefits of a single rule ignores the fact that often rules are part of a network or web or framework.
And this is true of many of the Dodd-Frank rules. You can't sensibly evaluate the benefits that a rule confers unless you look at the larger collection of which it is a part, and therefore, by the same token and for the same reasons, it makes a lot of sense when you look back to see how a collection of rationally related rules in a particular area, focused on a particular problem, have collectively done their job. That is a better approach. Thank you. I am going to ask my panelists to speak very briefly on the very last question so we can just squeeze it in, in like 2 or 3 minutes.
So the last topic is notice and comment for better cost-benefit analysis of financial regulation. Commenters on proposed Bureau regulations rarely provide detailed comments on the cost-benefit analysis or additional data for use in the cost-benefit analysis. How might the Bureau better use the process of notice and comment to improve cost-benefit analysis of Bureau rules? And we're going to start with Brian. Yeah. Thank you, Susan. I would echo some of the comments by Professors Jackson and Ellig. The notice-and-comment period can be useful, with the Bureau inviting information in, but it's much better to start early in the process, when the alternatives to the regulation are still being considered and costs and benefits can be weighed. And I think by using either notice and comment or, early in the process, outreach to the various stakeholders, including industry, the Bureau can get information and data that it otherwise might not have access to, and it's been done in the past. It was done as part of UDAP, for instance, back when the Fed was thinking about it.
Thank you very much, Brian. Jerry? Yeah. There is published research showing that economic analysis is more thorough when the agency has made use of an advance notice of proposed rulemaking or consultation with stakeholders in advance to try to gather more information before it actually decides what kind of rule it wants to propose. So trying to figure that stuff out ahead of time and maybe using an advance notice to give stakeholders an opportunity to comment before the agency has made decisions would be a good idea.
Thank you. And we're going to finish this question with Howell. Okay. Well, thank you, and thank you for organizing this whole session. I would echo what has already been said. Just speaking as an academic, I don't do anything in 60 or 90 days. I need more advance notice than that, and I think that, as Steve was alluding to, the problems the Bureau is addressing are kind of market-area problems. And having the discussion about the costs, you know, and benefits potentially for the market areas early on in the process is the right time to get certainly academics and I think other participants engaged, with forums, roundtables, symposia. That's the way to get the best data on this particular issue, rather than through the APA notice-and-comment structure. Thank you. In fact, thank you to all the panelists.
I think this has been a great discussion this morning. At this time, we are scheduled for a 10-minute break. My computer clock says 10:39, but let's make it 10:40, and we will reconvene promptly at 10:50. Thank you. Thank you. Thank you, Susan. Yeah. Thanks. It was great. Thanks to everyone. Thank you, Susan. Okay. One minute late. You can all hear me now. Welcome back, everyone. My name is Paul Rothstein, and I serve as the Financial Institutions and Regulatory Policy Section Chief in the Bureau's Office of Research. At this time, I'd like to welcome our panelists for the next discussion, which is on data and methodology for cost-benefit analysis, and since we aren't at a table, I'm just going to give a brief introduction alphabetically. So first, John Coates is the John F. Cogan Professor of Law and Economics at Harvard Law School. John's areas of teaching and research include securities regulation, corporate law, financial institutions, and the legal profession. He has served as a consultant to the Securities and Exchange Commission, the U.S. Treasury, and numerous other governmental and private entities. Mark Cohen is the Justin Potter Professor of American Competitive Enterprise and Professor of Law at Vanderbilt University.
Mark's research focuses broadly on law and economics, including cost-benefit analysis of environmental regulation and enforcement, racial disparities in the auto lending industry, and the cost of crime. He serves on the editorial board of the Journal of Benefit-Cost Analysis and is a university fellow at Resources for the Future. Alex Lee is a Professor of Law at Northwestern University Pritzker School of Law. Alex's interests include securities regulation, administrative law, cost-benefit analysis, and consumer protection law. He is an associate editor of the International Review of Law and Economics and previously served as senior counsel at the Securities and Exchange Commission.
And last, Chris Mayer. Chris is the Paul Milstein Professor of Real Estate and Professor of Finance at Columbia University Graduate School of Business. Chris' research explores a variety of topics in real estate and financial markets, including housing cycles, mortgage markets, debt securitization, and commercial real estate valuation. He is co-director of the Paul Milstein Center for Real Estate, a member of the Urban Institute's Academic Research Committee, and a research associate at the National Bureau of Economic Research. So thank all of you for coming, and we're going to proceed right to the first question, which is very timely given the previous panel's discussion of data to measure the impacts of financial regulation. So Bureau rules may directly affect the operations of providers of consumer financial products and services and the features of those products and services.
Based on your experience or research, how might the Bureau improve its ability to measure the direct impacts of Bureau rules on providers and products as well as consumers? Should the Bureau invest in ongoing industry data collections on operations and costs, or wait until a policy concern arises before doing so? How should the Bureau collect baseline data on consumers and the operations of providers? And first up on this question is Mark Cohen.
Mark? Well, thank you, Paul. Let me get my video back on. There we are, and thank you so much for the invitation. Very interesting early morning session, and I want to follow up on it in my remarks. Listening first to Howell Jackson, who said it's always easier to get costs for companies, which is quite true, and second to Stephen Hall, who claimed that cost-benefit analysis oftentimes harms consumers, I would argue that this is largely because the benefits have not been fully quantified and monetized. But that doesn't mean they can't or shouldn't be. So I want to focus a little bit on the consumer side.
I don't want to focus on the industry side. As was mentioned, the industry has all the incentive in the world to provide data if it's going to show that there are costs involved. So I think where the Bureau needs to focus is on the consumer side. While, again, there are existing datasets that might show us what the average interest rates are or potentially identify overcharges or the costs of things like late payments, servicing errors, or foreclosures, these costs might significantly underestimate the actual consumer harm.
And one of the key tenets of a benefit-cost analysis is that you first have to identify all of the costs and benefits to the extent possible, and then, if possible, quantify and monetize. So I want to focus on two areas, indirect costs and non-monetary costs, and I'll come back to distributional impacts later in the session because that's a specific question that will be addressed. When I think about consumer harm beyond the direct monetary overcharges or costs, it comes from two sources. First, consumers may spend time remedying the harm, perhaps dealing with financial institutions, credit reporting agencies, or law enforcement. These time costs can ultimately be monetized, but first, of course, you need to understand the extent to which consumers are inconvenienced. That's not terribly difficult. Second, in some cases, consumer monetary harm may extend beyond the direct transaction involving the financial institution. For example, in extreme cases, consumers whose credit rating is hurt might ultimately be unable to obtain employment or a loan, and I note that the Bureau has oftentimes identified some of these potential harms in its rulemaking background documents, noting the indirect consequences, for example, of foreclosure on children's health or neighboring home prices.
But none of these harms appear to be quantified, let alone monetized. And I note that there is precedent for government agencies to conduct things like public surveys to obtain data on the incidence of various types of consumer harm like this. For example, the FTC has sponsored a series of consumer fraud surveys. The Bureau of Justice Statistics has begun to include more detailed survey questions about identity theft in its National Crime Victimization Survey. So in addition to the value of time and the indirect monetary harms I talked about, consumers who are victimized by unfair, deceptive, or fraudulent trade practices might suffer from psychological distress. If I go back to the national crime survey questions on identity theft I was talking about, they indicated that 10 percent of victims reported suffering severe distress from the incident, and in extreme cases such as the Madoff scandal, for example, victim impact statements have identified what psychologists have called "fraud trauma syndrome," in some cases leading to suicide.
While the incidence of these extreme outcomes might be relatively rare, they're important to identify and to quantify, both because the impact on individuals might be extreme and because, in the aggregate, they can potentially increase costs significantly. While it might not be feasible to conduct studies of these harms for every regulatory or policy decision, it's possible to estimate the public's willingness to pay to reduce the potential harm from fraudulent activity. Rigorous methodologies have been developed over the years for estimating the monetary value of everything from health impacts and risk of death to environmental amenities.
These methodologies have been adopted and relied upon in regulatory impact analyses outside of the environmental and health areas, by agencies such as the Department of Transportation, the Consumer Product Safety Commission, and even the Department of Justice, and of course, OMB has recognized the efficacy of these approaches and recommended their use in regulatory impact analyses. The DOJ example is probably most relevant to the CFPB, partly because it's a law enforcement agency, and I just want to mention very briefly that in 2012, DOJ issued a final regulatory impact assessment for the Prison Rape Elimination Act, the first-ever regulatory impact analysis of a criminal justice rule.
Following OMB guidelines on conducting benefit-cost analysis, the supporting documentation for this regulation was based on estimates of the cost of a rape, which was between $200,000 and $300,000, based primarily on the willingness-to-pay kinds of surveys that I was talking about. These estimates were largely based on the intangible cost of the rape; if you had only used the out-of-pocket tangible costs, they would have been a few hundred dollars at most. DOJ in its documents indicated this regulation would never have been able to pass a benefit-cost test if it had relied solely on tangible monetary costs. Now, to date, there have only been a few studies estimating the willingness to pay to avoid consumer fraud. I guess I'm probably the only one who's done them, but in one example, in 2011, I estimated the willingness to pay to reduce the risk of a financial fraud was $12,000, compared to the FTC's estimate of $250 for the average victim loss.
So, again, the magnitude is dramatically different. Are these credible losses? Well, of course, there are only a handful of studies, but they are in peer-reviewed journals, and I believe that helps establish their validity. I want to say one last thing before I finish up, and that is in the context of discrimination, because, again, this was mentioned in the first session quite a bit, and I think it's an important issue. So in the context of discrimination, ask a Black borrower who paid more for credit than they otherwise would have if they were white whether the cost to them from discrimination is equal to the monetary difference in cost alone. I would argue that, obviously, the answer is no. There are considerable additional costs. We could call them "indignation costs" or "humiliation costs," and these can be quantified using the techniques I mentioned. I've actually proposed this in a paper on targeted policing and also in the context of mortgage discrimination.
So I think these things, while difficult, can be quantified. Thank you, Mark. John? Thank you. Can you hear me okay? I'm also going to thank everybody and, Paul, you in particular for inviting me to participate, and I'm happy that the staff is focusing on CBA and continuing to engage on these topics, because I think they're complex and difficult and yet important. I'm going to, like Mark, save some remarks on distribution for later and focus initially on a couple of high-level points about data.
One thing, and this echoes some of the first panel's framing, is that I think there is too little use of the data that is currently available to agencies. I'm not necessarily speaking of the CFPB here, because that's not the agency I spend the bulk of my time focusing on, which is the SEC, but possibly the CFPB as well. Using the data to help prioritize the regulatory agenda is ex ante. It's early in the stage. It's not retrospective, but it is, I think, an area where some of the data problems that you're going to hear more about, and which certainly, I believe, exist, are less acute, because the use of the data for that purpose can be relatively simple. It can be simply: how big is an activity, how big is a market, how big is the plausible range of market failure within it. Those questions, at a high level, and especially if you're doing comparisons across markets, don't raise the same kinds of causal inference and reliability problems that use of data for a particular rulemaking generally will. So that's my first point.
The point is that data that already exists can be used earlier in the process than is customarily done. The second point isn't really about data, so I'm cheating a little bit; it's about the next topic, models, really. What I'm going to say is that a lot of CBA can be done without data. In fact, in my view, the most important and fundamental value added currently in financial regulation (it may be different in other areas of regulatory impact) is to use a theoretical framework for cost-benefit analysis without attempting to quantify, but rather to identify possible market failures, to identify the problem that's meant to be solved, to try to identify ranges of options, to try to generate lists of costs and benefits, and to use simple models to identify indirect costs and benefits.
Mark just alluded to some benefits that, he argued, and I would agree, are likely to be more important than the immediately measurable benefits of certain kinds of rules. That's a second point: do the basic conceptual cost-benefit analysis. Jerry earlier referred to this as "regulatory analysis." I'd just say there's some terminological talking past each other sometimes here. I think that's straight-up cost-benefit analysis.
It's just that it's a judgmental, qualitative kind rather than a monetized one. The third point on data, and then I'll subside here, is probably going to be more controversial: not all data that's relevant to a CBA for a given rule should, in my view, be presented. In fact, this is echoed in the internal guidance we heard quoted earlier today that the CFPB has adopted, which is not simply to do the best CBA you can and quantify everything that you can. It's not limited just by feasibility but also by appropriateness. And I want to emphasize that appropriateness might, and frequently will, encompass taking into account the impact that the inclusion of some numbers may have on the public's understanding of the presentation.
All the caveats in the world about the limits of what's being presented will not prevent, in my judgment, the misuse or misunderstanding (or, because of misuse, the exaggerated misunderstanding) of some numbers in a context in which those numbers are potentially misleading. I'll end with an example that comes out of the Securities and Exchange Commission: the recent proxy advisor efforts. They included in their rulemaking last fall data purportedly about "errors" in proxy advisor reports. Those were based on what companies said the proxy advisors were mistaken about. The SEC's presentation, on inspection, really was kind of misleading. I don't think intentionally, but it was. They didn't analyze how often the mistakes were actually factual in nature in a way that could be validated or replicated, and they didn't do any analysis to examine whether the mistakes were, in fact, important or material or had any impact, and yet the numbers were there in a little table. And the table as a result got used both by the agency to justify the rule and by the public in the discussion of the rules.
In the end, the agency had to back away because of the problems of the APA process, but I think they would have been better off just not including it to begin with. And that's a type of data where other data, either not available or at least much more difficult to obtain, would have been necessary to put the presented data in an appropriate context for informing the public. And I'll end on this data point: you have to keep in mind that the point of the analysis, at least as reflected in a published rulemaking, is not simply academic research. It's not even simply to inform the internal deliberations of the agency. It's to inform the public. You could imagine doing some of the analysis and not having it all published, with my caveats in mind, but to the extent it's going to go into a public rulemaking, I think it's incumbent on the authors and the agency itself to think hard about whether they're actually informing the public with the data or potentially misleading it.
And I'll stop there. Thank you, John. Chris? Not hearing you, Chris. How's this? Got it. Ah, excellent. All right. This is at least the fourth or fifth different video conferencing system that I've been using, so remembering which one has which features in which place is hard. Anyway, thanks a lot for having me, Paul. I'm honored to be on with the distinguished group of panelists here, and certainly those on the previous panel. On the question of data, of course, for somebody whose career is as an empirical economist, the answer is "more is better," et cetera. You know, the most data possible would be great, but I started my career at the Boston Fed. You know, I came just after they completed a foundational study on discrimination in mortgage lending, which directly led to the creation of HMDA, and, you know, their additional research is the basis, I think, of some of the current regulation in the housing mortgage market.
I worked with Karl Case, Torregrossa, Alicia Munnell, and others, and really developed a healthy amount of respect for making data available to consumers and businesses. And I remember some of the conversations that took place after that study, in which businesses and other lenders would come in and say, "We don't do that. Like, there's no way that could be happening." And I think, in a way, that study and subsequent work on discrimination in the mortgage market has, in part, helped businesses change their practices. As you guys realize, you know, there's this long line of Chicago School research that basically says, well, discrimination is bad for business.
If you turn away a qualified customer, then, you know, you're losing a qualified customer. What that suggests is that it is often in businesses' interest to serve the broader set of clientele, and making data available is not only about regulation. It's also about helping businesses and industries understand what they're doing, and in a way helping them do better. Maybe that's a bit Pollyannaish about the world, but I think there's very broad value, certainly for policymakers, for researchers, and for the public and companies, in looking at data. What we've seen today is that companies have been making much bigger investments than the government in collecting data, and in collecting large amounts of data, and again, we might not think that all of that data should be used for all the purposes it's being used for, and I definitely understand the privacy issues associated with that. But at the same time, from a policy perspective, there's certainly value in the government collecting it.
Back in 2007, when I was at Columbia, I actually approached the Federal Reserve and asked to come down as a visiting scholar for a year to try and understand what was going on in the mortgage and credit markets. And when I went down to the Federal Reserve, one of the things I found was that the datasets I had built at Columbia, which were purchased from private companies, were better than the datasets the Federal Reserve had to evaluate and study subprime lending; things like deeds records and some of the data on mortgage securitizations, the Fed was only just starting to acquire. What's happened, in a sense, is that data has gone from being something that being inside the government gave you access to, to being something the private sector often knows more about than researchers and policymakers, which is obviously problematic. I want to give a couple of examples of things; I'm going to pivot a little bit after saying that, when we talk about cost-benefit, so much of this data is being collected already that requiring it to go to policymakers in the government seems to me not an unreasonable ask for many companies that have really figured out ways to monetize it.
Again, I'm not criticizing the fact that they're doing it, but I am suggesting that it's important for policymakers to have access to the same kinds of information that others do. I want to end, at least on this question, with a couple of quick examples, and, you know, my focus is particularly, as I say, on mortgages. I want to say a couple of words about HMDA, because I think this is a place where, while data is being collected, it either isn't being fully collected or isn't being put out in a way that researchers want.
And HMDA in particular is a database that is not something available out in the private sector. It's used by a ton of people, and there have been, I think, some attempts to kind of chop down access to, or the quality of, that data. One proposal, exempting lenders that make fewer than 25 mortgages, would result in 1,400 depository institutions, 22 percent of the total, becoming exempt from reporting. It's a little hard to look at the mortgage market when we start to exempt institutions. I worked on a recent paper using 2018 HMDA data with Stephanie Moulton at Ohio State. Our guess is that 10 to 15 percent of loans are missing, and if you think the smaller lenders are not random, then the reporting on this is a problem. And while I think it might have been harder a long time ago to require smaller lenders to report, given the database programs that are widely used today, I think the costs of reporting are materially lower.
So moving back in that direction doesn't seem hard. The second is a proposal to exempt non-natural persons (corporations and partnerships) from HMDA reporting. In the multifamily area, that would get rid of about 80 percent of the loans being reported. Essentially, it would eviscerate what we're getting on multifamily; to really understand the multifamily market, and with all of the concerns about homeownership and people who can't access homeownership, understanding multifamily is a really important issue. There's also data that's collected but not distributed to the public. If we look at the 2018 HMDA data, they reported, for example, the loan-to-value ratio, and we found that data really, really helpful in the paper; there were some inconsistencies, but it was generally quite good. But there's other data, particularly FICO data, that isn't reported, and in particular, we know from the CFPB summary, for example, that FICO data would certainly potentially lead to different conclusions about different racial groups. For example, the median FICO score reported for non-Hispanic whites was 748, for Hispanic whites 710, for Blacks 691, and for Asians 759.
And so today, given all of the concern about underrepresented groups and access to homeownership and lending outcomes, the idea that we wouldn't report FICO score is striking. I remember even back in the early 1990s, when HMDA was first passed, access to that data was considered critical because of the perception that what appeared to be differences in lending based on minority status might well have been based on FICO or credit history. And it's really hard to disentangle those issues without reporting the data. One more example, and then I will stop, is multifamily data, where currently things are grouped. Multifamily HMDA records are grouped into buckets: 5 to 24 units, 25 to 49, 50 to 99, 100 to 149, and 150-plus. With multifamily units, we're not talking about finding really confidential data on an individual homeowner, and the problem is that if you do an analysis using those groups, you end up with a conclusion like the following (and I will quote, and this goes back to John's point about precision): multifamily lending financed or refinanced between 1.5 and 3.4 million housing units in 2018.
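The imprecision being described follows mechanically from the bucket boundaries: with only a bucket per loan, any aggregate unit count can be bounded but not pinned down. A minimal sketch, where the bucket boundaries are the ones quoted above and the loan counts per bucket are hypothetical:

```python
# Sketch: how HMDA's multifamily unit-count buckets blur aggregate totals.
# Bucket boundaries are those quoted in the talk; the loan counts per
# bucket are hypothetical, chosen only to illustrate the arithmetic.

buckets = {            # bucket -> (min_units, max_units) per reported loan
    "5-24":    (5, 24),
    "25-49":   (25, 49),
    "50-99":   (50, 99),
    "100-149": (100, 149),
    "150+":    (150, 300),   # open-ended top bucket: the cap is a guess
}
loan_counts = {"5-24": 30_000, "25-49": 8_000, "50-99": 5_000,
               "100-149": 2_000, "150+": 1_500}   # hypothetical

low = sum(loan_counts[b] * buckets[b][0] for b in buckets)
high = sum(loan_counts[b] * buckets[b][1] for b in buckets)
print(f"units financed: between {low:,} and {high:,}")
# Any analysis built on these buckets can only bound the total, which is
# how a published figure ends up as wide as "1.5 to 3.4 million units."
```

Reporting actual unit counts instead of buckets would collapse this range to a single number, at no apparent privacy cost for multifamily properties.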
The idea that we have groups that don't allow us to differentiate between 1.5 and 3.4 million housing units suggests those groups are meaningfully changing what we're learning from the data. And so I do think there are some places where, even with more widely available data, the CFPB and policymakers can improve what's available in a way that would really help research. Thank you, Chris. Okay. We're going to jump to Question 2, and please remember to mute yourself when you're done speaking. It will go Alex and Mark, and then Chris and John; if you want to speak to Question 2, send me a chat. So Question 2 is on formal models of cost-benefit analysis. Supply and demand are central to cost-benefit analysis, but the supply and demand for consumer financial products and services are dynamic and depend in particular on risks and returns, as perceived by providers and consumers, that may occur in the future. Could simple static models of supply and demand be inappropriate for many purposes?
Can you identify formal models in the academic literature that might provide useful starting points for Bureau research? For example, are the models of risk reduction widely used in the development of social regulation applicable to measuring the costs and benefits of reducing the risks of consumer financial products? Alex? Great. Thank you very much for inviting me to the symposium. I'm very honored to be among these distinguished panelists. Let me begin with the risk reduction models, which Professor Cohen has already discussed at length. Personally, I'm very excited the Bureau is thinking about these models, because they've been used for decades by many different agencies, including the EPA, DOT, the FDA, and so on, but their use has been almost nonexistent among financial regulatory agencies, as far as I know. And there's no reason why that should be the case. I tend to think that with these models the Bureau should be able to justify a lot more rules under cost-benefit analysis. So I would love to see the Bureau become a leader in this area, and maybe the other financial regulatory agencies will follow suit.
With that said, there are a couple of things to keep in mind. First, the conventional methodology doesn't capture negative externalities, as far as I know. If we're talking about, say, the value of a statistical foreclosure avoided, that value won't capture the benefit of avoiding the negative price effects the foreclosure may have on neighboring property values. Those would have to be considered separately, or, better still, even while relying on risk reduction models, the Bureau should note what is not included in the quantification, though here I acknowledge Professor Cohen's concern regarding the danger of including some numbers and not others.
And second, some Bureau rules may be designed to address or correct certain cognitive biases on consumers' part, but this also means that the data gathered to calculate the value of risk reduction in those areas for a rulemaking may also be subject to the same biases, especially if the data is based on surveys. So the Bureau should grapple with how these biases, such as hyperbolic discounting or optimism bias, may affect the value calculations under these models, but I don't think these are insurmountable challenges. But let me actually take a step back and talk about cost-benefit analysis more generally. I think it's really important to remember that there are actually three distinct challenges when it comes to conducting a cost-benefit analysis of financial regulations. Alex? Alex? Lean in a little.
You're coming in and out. Okay. Lean in a little to really capture your voice. Okay. I was going to take a step back and talk about cost-benefit analysis more generally. Am I clear? Better? Okay. So when it comes to a cost-benefit analysis of a financial regulation, there are actually three distinct challenges as far as I can see, and I'd like to think of them as tiered or ordered challenges. First, equilibrium prediction: what will the economy look like once we adopt the rule? Second, the quantification methodology: having figured out the equilibrium, how do we then price all the non-monetary benefits and costs the new equilibrium will exhibit? Third, data availability: given the expected equilibrium and the methodology, how do we then collect the relevant data to make these measurements? These three challenges are ubiquitous; while not every rule will exhibit all three, it's very likely that most significant rules will.
We already talked about data availability and data collection, so let me focus on the first two. Of the first two, it's important to realize that risk reduction models will help the agency address the second challenge but not the first. They will help the agency with the quantification methodology but not the equilibrium prediction problem. And that can be a really hard problem, based on my own experience in SEC rulemaking.
Given a rule proposal, it's not unusual to see commenters expressing polar opposite views, and I don't think those extreme views are all necessarily groundless or agenda-driven. It's that financial regulations really do affect how the market and its participants behave and react in response, and on top of that, one participant's response will depend on how everyone else is behaving in the market, and so on. You really can't tell ahead of time, and so the nature of this problem, I think, is best understood as a game with multiple Nash equilibria: there may be two stable equilibria that may obtain post-adoption. Both are intrinsically plausible ex ante, but it's hard to know which one will materialize. And there are plenty of examples where an agency got the cost-benefit analysis wrong, not because it undervalued or overvalued certain benefits or costs, but because it simply got wrong what the equilibrium would look like.
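The multiple-equilibria structure described here can be illustrated with a toy two-player coordination game; the players, actions, and payoffs below are invented for illustration, not drawn from any actual rulemaking.

```python
# Sketch: a toy coordination game with two pure-strategy Nash equilibria,
# the structure described for market responses to a new rule. Two market
# participants each choose to "comply early" (C) or "wait" (W); the
# hypothetical payoffs depend on what the other player does.

import itertools

payoffs = {  # (row_action, col_action) -> (row_payoff, col_payoff)
    ("C", "C"): (3, 3),   # good equilibrium: everyone adapts, surplus
    ("C", "W"): (0, 1),
    ("W", "C"): (1, 0),
    ("W", "W"): (1, 1),   # bad equilibrium: nobody adapts
}
actions = ["C", "W"]

def is_nash(row, col):
    """True if neither player gains by unilaterally deviating."""
    r, c = payoffs[(row, col)]
    no_row_dev = all(payoffs[(a, col)][0] <= r for a in actions)
    no_col_dev = all(payoffs[(row, a)][1] <= c for a in actions)
    return no_row_dev and no_col_dev

equilibria = [p for p in itertools.product(actions, actions) if is_nash(*p)]
print(equilibria)  # both (C, C) and (W, W) are stable
```

Both outcomes are self-reinforcing once reached, so ex ante analysis cannot say which will materialize, which is exactly the prediction problem the sunset mechanism below is meant to address.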
And the best way to address the multiple equilibria problem is by applying a real-options model of cost-benefit analysis. This is actually a really simple idea that has gotten a lot of traction over the past decade among several prominent law and economics scholars. So imagine we have a proposed rule that is highly controversial, and there's a lot of disagreement over its expected effects. It is possible the rule might end up generating a lot of surplus (the good equilibrium), but it might end up costing a lot (the bad equilibrium).
Now, in a situation like this, at some point, it might not really be productive for the agency to try to persuade everyone just with arguments that the good equilibrium will materialize and not the bad one, and ultimately, the court might not find the agency's explanation satisfactory. Rather, in this situation, what the agency should do is go ahead and adopt the rule, but with a hard sunset. By a hard sunset, I mean that the rule should remain in effect for only 3 to 5 years and should automatically expire unless it is readopted by the Bureau. Now, your first reaction might be that this sets up a costly battle at a future date and will create a lot of uncertainty. All true, but hear me out. Once the rule is adopted, if the bad equilibrium were to materialize in a couple of years, the rule would in fact be abandoned shortly, and there would be no further costs to society thereafter, which means the effects of the rule are reversible.
But if the good equilibrium were to materialize, there would be a strong case to readopt the rule, and with much of the [unclear], and then society will continue to reap the benefits, not just for 3 to 5 years, but indefinitely. There is a very convenient asymmetry of equilibria: the bad state will not persist, but the good state will. And note that the Bureau is already doing this implicitly with all of its significant rules, because the Bureau has a statutory requirement to conduct retrospective reviews under Section 1022(d) anyway. I say implicitly because Section 1022(d) doesn't require an inefficient rule to expire, but that is, of course, the general idea.
And I'm almost done. It turns out that you can actually get a lot of mileage out of this simple sunset provision, and let me just mention a few concrete advantages. First, by including a sunset provision, the Bureau can in good faith address the rule's detractors' concerns and comments, as required by the APA. But the Bureau can do so without compromising the substance of the rule, only its duration, and only probabilistically so, if the Bureau is confident in the rule's outcome. Second, and most importantly, the Bureau can formally incorporate the option value of expiration into a dynamic cost-benefit analysis looking at the net present value of benefits and costs, and this approach will strategically increase the net benefit of adopting the rule and permit the Bureau to be more aggressive in its rulemaking.
In fact, such an approach may even allow the Bureau to proceed with rules that have net negative static values. Third, because a rule with a sunset provision can much more easily pass cost-benefit analysis under the real-options model, if the rule is challenged, courts ought to give the Bureau far more deference in arbitrariness review. And this argument has been formalized by legal scholars. Fourth, interestingly, all else equal, the higher the variance, the greater the net discounted benefits under the real-options model. In some sense, the more controversial the rule is, or the more polarized the commenters' predictions are, the more justified the Bureau will be in moving forward with the rule. And finally, the real-options model can apply to a controversial deregulatory rule as well as to a regulatory rule, although the cost of sunsetting a deregulatory rule might be a little bit higher, because it effectively involves readopting a rule. For these reasons, in terms of addressing the first challenge, the equilibrium prediction problem, I would urge the Bureau to aggressively experiment with rulemaking by building in expirations, thereby committing to an empirically informed, outcome-based rulemaking approach.
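The asymmetry argument above can be sketched as a simple expected-NPV comparison. All numbers here are hypothetical; the point is only that truncating the bad equilibrium at the sunset date, while letting the good equilibrium persist through readoption, can flip the sign of the expected net benefit.

```python
# Sketch of the real-options point: a rule with a hard sunset truncates
# the bad equilibrium while preserving the good one. All numbers are
# hypothetical; annual flows are in $millions, discounted at rate r.

p_good = 0.5          # probability the good equilibrium materializes
good_flow = 100       # annual net benefit in the good state
bad_flow = -120       # annual net cost in the bad state
r = 0.05              # discount rate
sunset_years = 4      # hard sunset within the 3-to-5-year range

def npv(flow, years):
    """Present value of a constant annual flow over `years` years."""
    return sum(flow / (1 + r) ** t for t in range(1, years + 1))

def npv_perpetual(flow):
    """Present value of a constant annual flow in perpetuity."""
    return flow / r

# Permanent rule: both states persist indefinitely.
permanent = (p_good * npv_perpetual(good_flow)
             + (1 - p_good) * npv_perpetual(bad_flow))

# Sunset rule: the bad state lasts only until expiration; the good state
# is readopted and persists.
with_sunset = (p_good * npv_perpetual(good_flow)
               + (1 - p_good) * npv(bad_flow, sunset_years))

print(f"expected NPV, permanent rule: {permanent:,.0f}")
print(f"expected NPV, with sunset:    {with_sunset:,.0f}")
# Under these assumptions the sunset turns a negative expected value into
# a positive one: the bad state will not persist, but the good state will.
```

Raising the spread between the good and bad flows (the "variance") makes the sunset version look even better relative to the permanent one, which matches the fourth advantage claimed above.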
All right. Thank you, Alex. So, Mark, do you want to speak to the question? Yeah, just very briefly. First of all, I think the options approach and a lot of Alex's comments, the sunset provisions, make a lot of sense, but I want to step back to the very first comment that he made. And it will help, I think, for people to understand if you heard my earlier comments about these willingness-to-pay studies. Thinking about the way consumer protection should be valued in a benefit-cost analysis, I think of it as an ex ante risk: consumers considering a proposed financial transaction with an uncertain outcome face some risk that they will not understand the contract provisions, or that they'll be fraudulently sold to, or whatever. Once you start to think that way, you focus on the expected net benefits of regulation, and that's where the kind of analysis we're talking about comes into play.
So the value to consumers from reduced fraud should be based on their willingness to pay to reduce the risk of being a fraud victim, even for those who were never victimized. When you do that kind of valuation (and I think Alex made the point that these methods don't really allow for negative externalities), in fact, I think, in theory, they do. The benefits from consumer regulation can accrue not only to the ultimate victim but to potential victims as well, and the surveys and methods that I'm talking about go out to the public at large under the assumption that respondents will either consider themselves potential victims or simply value a society with less harm. So just as we all benefit from a safer neighborhood and lower fear of crime, when consumers are protected from mortgage fraud or servicing errors, for example, we all benefit from less fear of our own victimization or harm.
We might also benefit from the expectation that our neighborhood property values will not be diminished by widespread foreclosures, et cetera. This approach might also be used to improve our understanding of the benefits of reduced discrimination in lending markets. To the extent that nonminority members of the public value living in a society that's free from racial or ethnic bias in the lending markets, they might be willing to pay some amount to live in such a society. These types of models allow for that and, I think, would provide powerful new evidence on the cost of discrimination, not just to the parties directly involved, but to the public at large. Thank you. John and Chris, did you want to take a minute on this, or shall we go on to the third question? All right. You want to move on? I can loop back briefly as part of my third-question remarks. All right. So the third question is on distributional concerns, and this was addressed somewhat in the first panel.
But I think folks here have a little more to add. So, in rulemaking and other policy, Bureau leadership may be concerned with the distribution of financial benefits and costs among consumers, or between consumers and providers of consumer financial products and services. In contrast, cost-benefit analysis focuses mostly, although not exclusively, on efficiency. Given this difference in focus, are certain aspects of cost-benefit analysis secondary or irrelevant to the Bureau's consideration of costs and benefits? And, John, you're up. Great. Thank you. A perfect setup. I want to suggest that not only is distributional analysis not less important or subsidiary or different from cost-benefit analysis, but it's actually inevitable as part of a well-conducted CBA, and certainly important in most significant rulemaking settings.
On the inevitability point: whenever the behavior involved is complex, which is most of the time for financial regulation, and whenever it's social, which is also most of the time when it comes to finance (as has been alluded to, there are important externalities embedded in financial markets that may not be true of other kinds of human activity), two things follow. There's going to be some distributional impact from the rule: somebody is going to gain, and somebody is going to lose. And even the models of the direct welfare effects are going to be quite contestable. I think I agree in a general way with many of Mark's remarks about the fact of externalities, and I'm open as an academic researcher to willingness-to-pay studies as a way to try to gauge them.
I'm much more skeptical than he is, I think, about whether the numbers that come out of those surveys will be reliable and useful in quite the way he's suggesting, in part because, as I reflect on it myself, I have no idea what I would be willing to pay for some of the kinds of social goods he's referring to. And I'm not sure my preferences would be well formed until I was actually confronted with a real choice in the matter. I like to think I think about these topics a lot, and so I'm not sure that overall public preference formation is well formed there either. What that means is that some of the actual welfare effects are not going to be easy to quantify, and they're going to have the character of being reflected, at least in principle, in a distributional analysis. So what do I mean by that? Think about fraud, to take an example again that Mark has alluded to.
Traditional welfare analysis, at least when narrowly framed, can say that the victim's loss in a fraud is exactly offset by the gain to the fraudster. Are you still able to hear me? Somebody's mic is on that probably shouldn't be. If the loss is exactly offset, that means you have no welfare effect, and yet I don't think it's at all plausible that fraud has no welfare effect. So where is the welfare effect coming from? It's coming from either endogenizing more behavior, taking into account more things that the fraud impacts than the straightforward fraud itself, or externalities that are, again, not captured in the relationship between the fraudster and the victim.
As you start thinking about psychological impacts, family impacts, suicides from the Madoff scandal that Mark alluded to, neighborhood effects, the general willingness to pay for a society that's fair and equal and not discriminating, those kinds of things are all real welfare effects. Each one of them is also easily framed as a distributional impact, because some are going to affect some people in society more than others, and they're also increasingly difficult to model in a way that's not disputable. And it's also increasingly hard to get data, as Alex alluded.
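The narrow-framing point can be made concrete with a toy welfare ledger; all figures below are hypothetical.

```python
# Sketch: in the narrowest framing, fraud is a pure transfer and nets to
# zero; welfare effects only appear once externalities are counted.
# All figures are hypothetical, in dollars.

victim_loss = -10_000
fraudster_gain = +10_000
narrow_welfare = victim_loss + fraudster_gain   # exactly offsetting

# Externalities of the kind listed in the discussion: each category and
# amount is invented for illustration.
externalities = {
    "victim psychological harm": -3_000,
    "family and neighborhood effects": -1_500,
    "eroded trust in the market": -2_000,
}
full_welfare = narrow_welfare + sum(externalities.values())

print(f"narrow framing:     {narrow_welfare}")
print(f"with externalities: {full_welfare}")
# The narrow framing nets to zero; every term that makes the total
# nonzero is also, by its nature, a distributional impact on someone.
```

Each externality line is exactly the kind of quantity that is hard to model and measure, which is why adding them widens the uncertainty of the overall analysis.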
You get the three-layer problem for each one of them. What that means is that fraud, in the first, narrowest instance, is just distribution; to turn it into a general welfare analysis, you need a very general model, or layers of nested models, each of which is going to generate a tremendous amount of uncertainty for each of the three reasons Alex identified: model specification; methodology, in trying to translate often seemingly precise concepts like psychological demoralization into something measurable; and then the measurement problem in the data collection itself. The result is that, predictably, by adding explicit distributional analysis and thinking clearly about what you're doing, you're going to increase the uncertainty of the overall model. I think it's good to do that, because I do think it then gives a more realistic picture of the actual all-in effects of a given rule, but it also, I think, takes you back to my overall skeptical view about quantification generally in this area.
I think it will leave you worrying that the presentation will be fodder for political battles that will distract from what's being added by the analysis. So just to take an example, when the range associated with the costs of an activity, and therefore the benefits, say, of reducing fraudulent activity, runs from negative several hundred billion to positive several hundred billion, as I think is very plausible in the case of major antifraud rules, that very fact alone, when presented that way, can undermine the public's confidence in the whole rulemaking enterprise. And so, in the end, distributional analysis is inevitable. It's important. Distributional analysis, that is, is inevitable and important. It's also something that's top of mind for the political appointees at the top of the agencies. Whether they're quite self-aware of how they're thinking about it or not, it is an important impulse for them.
It also would be an important way in which they communicate with the staff doing the economic analysis because, let's be candid, a political appointee does not want the economic staff to produce an analysis that, while theoretically correct and full of appropriate statements about its limits and qualifications, nevertheless ultimately makes the political and communications task of defending a rule that the political appointee very much believes is a good one harder than necessary. So doing the distributional analysis or not, whether you're explicit about it or not, is in the background of what goes on as the economic staff turns to doing this.
So I actually think, final point on this, that if CBA is to be transparent, if it's actually to inform the public and the political world about a rulemaking, it should be straightforward that distribution has to be part of what's done, that that will necessarily complicate things, and that it may lead the presentation of the analysis to be more qualitative and less quantitative overall than might otherwise be the case. I'll stop. All right. We're going to have to move very briefly then to Alex, Mark, and Chris, if you want to speak to this, and I think, John, you [unclear] to address. So maybe Alex just briefly, and then over to... Sure. First of all, I agree with much of what Professor Coates said. I don't actually read the Section 1022(d) language to be foreclosing distributional considerations. I think what's important is for the Bureau to be fully transparent about its transfers, and here let me make a nuanced point. Everybody knows that the Bureau's core mission is consumer protection [unclear]. No one will fault the Bureau for adopting a rule that leads to monetary [unclear] for consumers, even at the expense of certain [unclear].
That's consistent with the Bureau's mission. But that doesn't mean the Bureau can therefore pass off all such [unclear] welfare, nor do we think the Bureau cannot [unclear] about the rule simply because it failed to [unclear]. Monetary benefits that accrue to consumers, accompanied by the displacement of the same amount from a covered person, should not be acknowledged [unclear] but should be labeled as a [unclear] payment. And let me just note that there's something incredibly forthcoming and virtuous about honestly acknowledging a transfer payment as such, and personally, when I see a cost-benefit analysis [unclear] effect, I'm so impressed with the candor and the meticulousness of the analysis that I'm inclined to believe that you can change it and [unclear]. So I'll just end right there.
Thanks. Mark? Well, in the interest of time, and not to monopolize the conversation, let me just suggest, for those who are interested in this, that John Coates is quite right about a lot of the issues he raised, but before dismissing it all, I think my written remarks will give a little bit more context. The question of estimating these is not as simple as just asking people what they are willing to pay; I want to make that point. It's actually quite a sophisticated analysis. The other thing I will say is that agencies around the government are using this and have used it, and there are hundreds and hundreds of studies. The last point I want to make is that I don't think the CFPB ought to be the one doing the studies on willingness to pay and understanding the nonmonetary costs associated with fraud.
This should be something that's done by researchers, academics, in a peer-reviewed setting. It should be funded externally, maybe by the agency and other agencies as well. But I think that's where the validity comes in, and I'll leave it at that. Thanks. All right. Chris, you can end with Question 4. Will you lead off? Sure. So just a couple of quick things. The first is I think it's impossible to look around our society today and ignore distributional issues. I say that almost as a prima facie comment. In addition, there are externalities; the idea that fraud is a transfer between two people, so that if I steal from Alex, then from a societal standpoint nobody is worse off, I think is hard to accept. And so I really think the fact that there's a widespread perception that markets, and particularly financial markets and data markets, are rigged against many people in society means that we can't just do a standard cost-benefit analysis and call it a day without understanding and addressing that perception, which I think is really important.
So I'll end on that comment. I guess the other thing is we know losses and gains are not valued the same by individuals. My disutility from a loss, if we look at prospect theory, is, you know, considerably larger than the utility somebody gets from a gain. So that's another thing that's really hard to measure but that we know is important. Thank you. All right. I look forward to reading Mark's written comments on contingent valuation in response to John. I expect they will go in that direction. So, Question 4: incentivizing academia and think tanks to conduct foundational research related to cost-benefit analysis.
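The loss-aversion asymmetry from prospect theory mentioned a moment ago has a standard quantitative form: the Kahneman-Tversky value function, under which a loss of a given size is weighted more heavily than an equal-sized gain. A minimal sketch, using their published median parameter estimates (these figures are from the prospect-theory literature, not from this panel):

```python
# Kahneman-Tversky prospect-theory value function, with the 1992
# median parameter estimates (alpha = beta = 0.88, loss-aversion
# coefficient lambda = 2.25). Illustrative only.
def prospect_value(x, alpha=0.88, beta=0.88, lam=2.25):
    """Subjective value of a monetary gain (x > 0) or loss (x < 0)."""
    if x >= 0:
        return x ** alpha
    return -lam * ((-x) ** beta)

gain = prospect_value(100)    # subjective value of gaining $100
loss = prospect_value(-100)   # subjective value of losing $100
# With alpha == beta, the loss weighs lambda (2.25) times the gain.
print(gain, loss, abs(loss) / gain)
```

With these parameters, a $100 loss carries roughly 2.25 times the subjective weight of a $100 gain, which is the asymmetry that makes consumer harm from fraud larger than a symmetric transfer accounting would suggest.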
So cost-benefit analysis of financial regulation relies on basic research on consumer behavior, household finance, and the industrial organization of consumer financial services. However, gaps remain between basic research in those areas and the practical applications of cost-benefit analysis. How can the Bureau encourage researchers to devote their time and other resources to help the Bureau bridge these gaps? Do researchers need more information about specific Bureau needs? Should the Bureau engage in outreach to particular types of non-academic institutions and facilitate different types of collaboration? So, Chris, you're on. Yeah. So I think this is a topic where, in one sense, as a longtime professor who's spent my career using data to do research, you might say, well, it's just obvious: the incentive for me is tenure and writing papers and having an impact on the world. I do think that the process of rulemaking and other things at the Bureau, where they're talking about what the priorities are, does matter, and being transparent about what the Bureau is thinking about is helpful for framing research, because early on in a career as a professor and a researcher, what you care about is getting tenure and getting published.
But over time, as you start to get gray hair, you start to think more not just about what your next published paper is, but how what you do impacts the world. So I think in that sense, the incentives are broad here. I think the starting point, of course, is the first question that you posed to us, which is making data available is an essential part of incentivizing people to do things. The bigger the cost of doing the research, the less likely it is.
I'll also say I'm very optimistic about the role of think tanks. I do a lot of work with the Urban Institute Housing Finance Policy Center, headed by Laurie Goodman and Alanna McGargo. Laurie is a friend. She also helped me with some of the comments that I made earlier on HMDA. She's done an incredible amount of work at the Housing Finance Policy Center. I think it's brought together a broad group of people and, along with other efforts of the Urban Institute, is doing incredibly broad things that help us think about what's going on in society, anything from interpreting the impact of COVID on the ability to pay rent, and estimates of the number of renters who will potentially lose properties during COVID, to a mortgage servicing project, which is taking on the incredibly important issue of how mortgage servicers on the one hand interact with and protect consumer rights, while on the other hand mortgage servicing has to be a financially viable business.
And they have to be able to operate in a way that is predictable and workable. We've seen that a lot of the issues associated with mortgage servicers during COVID have been ones where we needed more light shone on them, and that project at the Urban Institute was way ahead in highlighting some of the kinds of issues that mortgage servicers are facing today and that we're dealing with in forbearance. And certainly the rent issues they worked on have carried over to foreclosure and other issues.
So I do think that by making data available, you help researchers across the spectrum pursue research and comment on questions, and there really is an infrastructure of groups out there that are interested. And I think we've seen, across the spectrum, research done by interest groups, by companies, by foundations, and certainly by academics that addresses questions that are really important to the Bureau and certainly to our society. So I do think this issue of incentivizing is important in one sense, but in another sense, if we make things available, there is funding and interest out there to actually put that data into practice and learn a lot from it on really important issues.
And much more of this research is available than when I started my career quite a while ago, I might add. All right. Mark, you wanted to speak to this question? Yes, just a couple of quick things. I think Chris is quite right, and also, a lot of agencies will have people come in, whether it's for a semester or a postdoc, and even if there's not money, sometimes just being in residence for a summer gives an academic access.
So it's the availability of and access to data. But beyond that, I want to talk very briefly about leverage, and in this context, I think of two things. One is that a number of years ago, I was chairman of the American Statistical Association Committee on Law and Justice, and there was an arrangement with BJS, where there were all these languishing datasets at BJS that nobody was really looking at anymore. For very small dollar awards, essentially money to give junior researchers for some research assistance, et cetera, but structured as a competitive award, they put out a broad call for research projects, and they were able to leverage a huge amount of very interesting research. Not only that, they brought in researchers who had never thought about criminal justice before. There were statisticians who all of a sudden had access to data that they didn't know about, and all of a sudden, that spurred a whole cohort of new researchers.
So I think there are some creative things that could be done. But the last point I want to make is, especially in the context of the kinds of surveys and things that I was talking about, when you start to think about CFPB and fraud and consumer harm and discrimination, et cetera, it's not just CFPB. And so you should think about other agencies. One example: just a few years ago, I got funded by the National Institute of Justice to do something on crime, and we added questions on consumer fraud, which can, of course, be handled criminally or civilly. It can be handled administratively. As you know, various agencies have jurisdiction, and so I think partnering with... And then years ago, we were doing a willingness-to-pay survey, and the Sentencing Commission got wind of what we were about to do.
It was funded by NIJ, and they had some questions that they wanted addressed on preferences for sentencing policy. They added a little bit of money and expanded our survey. So I think that if the Bureau thinks a little bit more expansively, going out to NIJ and other agencies that have similar issues, it may find that it can leverage resources quite a bit. I was wondering, John, did you want to speak to this question, a couple of words? Sorry. I'll very briefly say that I want to go back to the points that were made by Howell and others on this morning's panel and that I echoed at the outset. I think if the agencies framed important quantities of interest that were likely both to inform their priorities and/or be repeatedly useful in ongoing ex ante rulemakings, and possibly retrospectives as well, and identified them as core targets of research interest as such, that would be the most likely way to help young scholars get tenure for publishing about them, which is the main incentive that drives academic work, and then, of course, to work in many of the ways about cooperation that have already been sketched by Mark and others.
But I think just publishing, in a visible way, the targets of research opportunity would itself be enormously impactful. And I'll stop there. All right. Then we've got a couple of seconds, Alex, if you wanted to just... Sure. I'll be very quick. First, if the Bureau doesn't already do this, I hope that it will consider actively encouraging, or perhaps even doing more of, publications by [unclear] comments, whether in law journals or peer-reviewed publications.
Even if the topics of research are not immediately relevant [unclear]. Second, if the Bureau doesn't already do this, I hope that it will have a very quick internal clearance process for using data in collaboration with other economists. There should not be much of a holdup there. And third, again, as Mark Cohen said, the Bureau should fund something like a fellowship [unclear] year or two in a very targeted collaboration opportunity.
And lastly, the Bureau should consider hosting a more targeted symposium concerning the costs and benefits of a specific rule and find a law journal that will host the symposium to make sure that those papers will be published, and law professors who have expertise in the matter should be invited early on, maybe a year before its 1022 [unclear]. All right. Well, thank you very much. Please join me in thanking all our panelists for the thoughtful discussion today. I also want to thank everyone who dialed in and watched via WebEx. For those unable to attend in person or watch via WebEx today, a recording will be made available on the Bureau's website. That concludes today's event. Have a great afternoon.