Behavioral Economics and Biased Regulators

Behavioral economics (BE) examines the implications for decision-making when actors suffer from cognitive flaws documented in the psychological literature. Broadly, BE replaces the assumption of rationality—that errors tend to cancel out over time and across populations, so on average firms and consumers act as if they were rational—with one of “bounded rationality.” When actors are boundedly rational, their cognitive flaws lead to systematic errors and self-control problems. It should come as no surprise that BE has become an increasingly common justification for regulatory intervention.1

There may be serious reasons to be skeptical about the import of BE for public policy.2 But even if one accepts that people systematically err, one must also recognize that any government policy is itself conceived and implemented by people who likely suffer from the same biases. Public choice theory opened up the black box of government decision-making, allowing us to examine the policy choices of rational, self-interested decision-makers. What happens when this rationality assumption is replaced with one of bounded rationality?3 The short answer is that one cannot have any confidence that the policies set by biased regulators are likely to improve welfare.

How Will Biases Affect Regulatory Decisions?

Regulators are likely to use heuristics—mental shortcuts—to arrive at what they consider the optimal long-run policy choice. Behavioral economics demonstrates that these shortcuts, although time-saving, may lead to systematically flawed decision-making. Experimental research has documented several of these flawed heuristics.4

The availability heuristic, for example, causes people to overemphasize recent and particularly salient events when estimating the likelihood and cost of those events occurring in the future.5 The hindsight bias leads people to overestimate the ex ante probability of an event occurring given that it has actually occurred.6 Finally, optimism bias causes individuals to underestimate their own probability of experiencing a bad outcome. In addition, regulators may suffer from myopia, which can arise from a cognitive inability to process life-cycle costs or from self-control problems.7

Regulators who suffer from these cognitive flaws are likely to commit systematic errors when forming policies. Myopic regulators, for example, will focus excessively on short-run considerations, such as measurable increases in activity that are clearly associated with their tenure, rather than on long-run considerations that may favor policies that pay off only after the regulator’s tenure has ended. The availability bias, moreover, will cause regulators to overestimate the future risk of bad outcomes that have recently occurred, and thus to take excessive precautions to avoid them.8 In the context of the quasi-negligence determinations involved in certain consumer protection violations, for example, hindsight bias is likely to cause an agency to look more skeptically at practices that led to harm ex post.9 Finally, optimism bias may cause regulators to hold an unduly optimistic view of the likely success of a policy choice. Apart from flawed heuristics and myopia, there is a class of cognitive errors that tends to wed people irrationally to the status quo.10 The endowment effect, for example, leads experimental subjects to require more compensation to part with an endowment than they are willing to pay to gain it.11

This observed gap between willingness to accept and willingness to pay suggests that people are more averse to losing what they already possess than rational choice theory predicts. Applied to regulatory decision-making, this class of cognitive shortcomings will tend to make policies “sticky” around their initial positions. The direction in which the status quo bias steers policy is theoretically indeterminate and will depend on the initial policy endowment. From this stickiness emerges a path dependency in policy choice, in which policies adopted in the past have a lingering effect on future policy adoption.

Experimental research also suggests that individuals tend to become irrationally wedded to their early impressions of an initially ambiguous situation.12 Confirmation bias arises either because subjects ignore all new evidence once they have made up their minds or because they erroneously interpret evidence contradicting their beliefs as supporting them. In regulatory settings, confirmation bias leads to overconfidence in one’s estimates of optimal policy.

At the operational level, regulators may misread or ignore facts that conflict with the theory of a case or rulemaking initiative. At the policy level, an agency head may misread evidence to confirm priors regarding larger policy choices, such as adopting an interventionist or laissez-faire attitude toward certain business practices. Confirmation bias has an asymmetric effect on policy outcomes; regulators with incorrect priors cause more harm than counterparts who are initially wedded to the correct decision.13