Dean Ball on the Past, Present, and Future of AI

Does the future belong to the curious?

Dean Ball is a research fellow at the Mercatus Center. In Dean’s first appearance on the show, he explains the AI revolution, current AI developments, policy implications of the AI boom, and the future of AI. 


Read the full episode transcript:

This episode was recorded on February 19th, 2025

Note: While transcripts are lightly edited, they are not rigorously proofed for accuracy. If you notice an error, please reach out to [email protected].

David Beckworth: Welcome to Macro Musings, where each week we pull back the curtain and take a closer look at the most important macroeconomic issues of the past, present, and future. I am your host, David Beckworth, a senior research fellow with the Mercatus Center at George Mason University. I’m glad you decided to join us. 

Our guest today is Dean Ball. Dean is a research fellow and a colleague of mine here at the Mercatus Center, where he works on artificial intelligence (AI), emerging technology, and the future of governance. Dean joins us today to discuss recent developments in AI, policy debates surrounding AI, and the future of AI. Dean, welcome to the show.

Dean Ball: Thanks so much for having me.

Beckworth: It’s great to have you on. It’s always fun to talk AI, its current events, and its implications moving forward. You’re a part of the program here that works on AI. Before we get into all that, tell us how you became an active participant in this space.

Dean’s Background in Artificial Intelligence

Ball: Yes, it’s a bit of a funny story because I don’t have a direct background in it professionally. I spent most of my career at think tanks doing mostly unrelated things. In parallel with that, I’ve been interested in computers and AI for pretty much my whole adult life and a big chunk of my childhood as well. When I saw the policy conversation on AI explode, I had two intuitions. One intuition was a lot of very, very bad ideas are on the table and seem to be taken quite seriously by policymakers. The other intuition I had though was that I wasn’t quite happy with the way even the techno-optimist and regulation-skeptic side of things was talking about this.

I asked myself, do I have a differentiated point of view here? I ultimately concluded that the answer was yes. This was about a year before I started writing; I spent a long time contemplating this. The way that I specifically broke in was that I just started writing. I realized that to get hired, I would need a portfolio of written work. The stuff I wanted to do was very unlikely to appeal to a lot of editors at publications, because if you’re trying to write op-eds and things like that, you’re trying to curve-fit to a mass-market audience. I was trying to write very specific and weird pieces.

I started my Substack in early January and sent that work very aggressively to various people, one of whom was Tyler Cowen. I eventually got on Mercatus’s radar, I think partially for that reason, and then Mercatus hired me.

Beckworth: The rest is history.

Ball: Yes.

Beckworth: That’s the Substack that you still publish today? What’s its name?

Ball: Hyperdimensional.

Beckworth: All right. We’ll provide a link to it in the transcript, but listeners, check out his Substack. You’ve been actively engaged in the AI space, and we’re going to have some fun discussing it. I’m more a consumer of AI myself; you’re the expert thinking about it. I think about it some as well, as far as it relates to, say, the monetary policy work that I do, and I use it a lot in my personal life. There are a lot of things we see happen that we want to know more about, exciting developments. Let’s start there. Let’s start with recent events, then we’ll move into policy implications and maybe some longer-term perspectives on AI.

Recent AI Events

One of the things that has happened recently and seems to have had a big impact is DeepSeek. I have it as an app on my phone. Maybe I shouldn’t; I’ve heard there are some tracking issues. I’ve also heard some amazing claims about how the company behind it did this with far fewer resources, and I’ve heard those claims have been debunked. I’ve also heard there are implications for AI as an industry in the US as a whole. Walk us through all of that. What should we take away from this new AI competitor?

Ball: There are so many things about DeepSeek that will be appealing to a Mercatus and Macro Musings audience, and that I think will make intuitive sense in a way that might not for every audience. First of all, what is DeepSeek? DeepSeek is a subsidiary of a Chinese quantitative hedge fund called High-Flyer. You might think of it as the Jane Street of China. It’s certainly not as big as Jane Street, but it’s along those lines. Just like Jane Street, these are people who make very sophisticated and broad use of AI for all kinds of derivative pricing and all sorts of other things.

They had a lot of NVIDIA chips from before the export controls were implemented. In May 2023, High-Flyer created this spinoff, DeepSeek, to use some of those GPUs. Also, I think they’re really committed to the concept of AGI. They’re deep believers in deep learning, and you can tell. The labs that are most successful have this almost religious belief in deep learning as an approach, as a discovery about nature, almost. This is not to say that Apple can’t innovate in AI or anything like that, but a company like Apple or Microsoft is much bigger, has a different culture, and is not committed to this idea.

Then there’s OpenAI, and the people at OpenAI are like, “The models just want to learn,” right? There are these mantras that they say to themselves. DeepSeek is like that. There are a lot of extremely talented people who work there, and they are known for recruiting people who do not have obvious backgrounds in machine learning. This is true of a lot of the best American labs, too. There are people with literature degrees, all kinds of interesting backgrounds coming together. When I look at DeepSeek, I see a company that feels like an early OpenAI, an early DeepMind, or an early Anthropic: that talent density and that diversity of talent as well.

They’ve been putting out interesting models and interesting papers with all kinds of innovations in them for certainly more than a year at this point. A lot of the innovations that caught people’s attention last month, when DeepSeek released a model in January, actually first came to market in January 2024.

Beckworth: Really?

Ball: Yes. I can go into the detail on what those innovations were, but it’s very technical stuff. Ultimately, the thing that made DeepSeek impressive was basically two things. First of all, yes, they trained a model for a lower cost than what people think of frontier models as costing. This model was called V3. It came out in December of 2024, and there was a reported cost of $5.6 million to train it, according to DeepSeek. Now, I’m not going to accuse DeepSeek of lying at all. In fact, they’re very straightforward about this in their paper. That $5.6 million is the marginal training cost, and that is something that needs a lot more explanation.

Beckworth: All right, so it’s not been debunked.

Ball: It’s true. It’s true. It’s just like saying my car cost $5 to drive to work today.

Beckworth: I see. It’s the marginal cost.

Ball: Yes, your car costs $50,000. You’ve spent $6 in gas, but you have to park it somewhere, and that costs money, all these things. When they say $5.6 million, they are literally talking about the GPU time: the electricity required to run the GPUs and the cooling equipment. It’s not the data center. It’s not the GPUs themselves. It’s not the staff that builds the models. It’s not any of that stuff. It is just the cost of running the GPUs to train the model. That is still an impressive cost.
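To make that accounting concrete, here is a minimal sketch in Python of where the headline number comes from. The GPU-hour total and the $2-per-GPU-hour rental rate are the figures DeepSeek reportedly uses in the V3 technical report; treat both as illustrative assumptions rather than audited costs:

# A sketch of the marginal-cost accounting behind the ~$5.6 million figure:
# rented GPU-time only. Excluded: the GPUs themselves, the data center,
# the staff, and all the research and failed runs that came before.
H800_GPU_HOURS = 2_788_000   # total H800 GPU-hours reportedly used to train V3
RENTAL_RATE_USD = 2.00       # assumed market rental price per GPU-hour

marginal_training_cost = H800_GPU_HOURS * RENTAL_RATE_USD
print(f"${marginal_training_cost:,.0f}")   # -> $5,576,000, the ~$5.6M headline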

However, it’s also worth noting, and this is something that even economists who are paying attention can miss (I’ll call out Tyler because we’re both friendly with him, of course): Tyler was more surprised by DeepSeek than I thought he should have been. I think the reason people find it shocking, unless you’re desensitized to it by paying attention to this field super closely, is that the cost improvements DeepSeek achieved are roughly in line with, in fact a little bit behind, the trends that we should normally expect. You should normally expect about 400% to 500% in algorithmic efficiency gains annually. On top of that, the chips themselves get faster every year, and people buy more of them every year. The result is that you’re able to run far better models at a lower cost. Whatever level of capability you can achieve today will, 18 to 24 months from now, be one to two orders of magnitude cheaper than it is right now.
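To see how those numbers compound, here is a minimal sketch in Python. Reading the “400% to 500%” figure as roughly a 4.5x annual multiple is my interpretation, and the 1.3x annual hardware gain is an illustrative assumption, not a number from this conversation:

def cost_multiple(months, algo_gain_per_year=4.5, hw_gain_per_year=1.3):
    """How many times cheaper a fixed level of capability gets after `months`."""
    return (algo_gain_per_year * hw_gain_per_year) ** (months / 12)

for m in (12, 18, 24):
    print(f"{m} months: ~{cost_multiple(m):.0f}x cheaper")
# -> 12 months: ~6x; 18 months: ~14x; 24 months: ~34x. Reading the efficiency
# figure as a 5x-6x annual multiple instead pushes the 24-month factor past
# 50x, within the one-to-two-orders-of-magnitude range Ball describes.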

Beckworth: This was a shot across the bow of US AI firms.

Ball: Yes. Well, DeepSeek releases open-source models. It’s not only that open-source models are available for scientists to look at and all that; they’re also much cheaper to run, simply because every cloud provider will host them and then compete margins down to fractions of a penny to offer those models to customers.

V3 raised eyebrows in the machine-learning community, right? I took notice when V3 came out, but that’s not what the public sensation was. The public sensation was a model based on V3 called R1. R1 is a reasoning model. This is a new category, a new style of model creation, that was pioneered by OpenAI in September. Basically, these are models that use a technique called reinforcement learning to learn how to think. They’ll spend time thinking about your prompt before they answer it. If you use DeepSeek, you’ll see there’s a chain of thought where the model is almost talking to itself. It might say, “Ah, this approach isn’t working. Let me try something different,” or, “Oh, I made a mistake there.” It self-reflects, right?

OpenAI had a model that did this in September, and it was called o1, hence the name of DeepSeek’s R1. They called it o1 partly because of OpenAI, but they also named it after the O-1 visa for aliens of extraordinary ability. OpenAI didn’t quite tell us how that model was trained. DeepSeek did tell us how their model was trained, which in and of itself was a huge deal, because now the algorithm is out there. The actual training cost of this model is almost nothing; you’re talking about tens or hundreds of thousands of dollars, not even millions, because it’s the discovery of this algorithm, and its finicky nature, that is the hard part, not the actual compute cost.

We’re led to believe that OpenAI first stumbled on something like this algorithm in October 2023. It is rumored to be at least a large part of what triggered the whole Sam Altman ouster and the temporary leadership shakeup at OpenAI. The discovery of this algorithm shook the company. It’s a very, very important thing.

Beckworth: What’s the name of the algorithm? Is there a generic name for it?

Ball: I would call it reinforcement learning-based reasoning. In fact, what DeepSeek has, what OpenAI has, what Google has (because they have it too), what Anthropic has, and what all the other people have are almost certainly all somewhat different. It’s probably not quite exactly the same.

Beckworth: It’s going in the same direction.

Ball: Yes.

Beckworth: This is the future of AI as we currently understand it, this type of algorithm?

Ball: One part, yes.

Beckworth: One part.

Ball: One very important part, yes.

Beckworth: All right. Just on the practical side, I’m okay having DeepSeek on my phone?

Ball: Yes. I think so, right? I would say if you had TikTok on your phone, there’s no additional threat.

Beckworth: Okay. Or CapCut.

Ball: It’s not necessarily your location that’s really valuable. I hear policymakers say, “Oh, they have Americans’ locations.” It’s like, “Yes, well, sure.” The more valuable thing is the prompts. You’re giving them naturalistic user prompt data, which is what everyone wants. That’s one of OpenAI’s moats: they have the most actual knowledge of what real user prompts look like.

Beckworth: Interesting. All right. Some other developments: you already touched on this, but OpenAI has had a number of models. Here at work at Mercatus, they give us access to these models. If I go onto our ChatGPT via work, I see o1, o3, these different models, including the quickest reasoning one and one that can do calculations for you. In fact, I’ve used some of this myself. I’ve been impressed.

I’ll sit down with a macroeconomic model that, in the past, I would have had to code up and use some matrix algebra to solve. Now, it doesn’t solve everything, but it can go through a lot of the steps. It can code the model for me, and I can put it into Python. It’s doing things that I wish I had been able to do back when I was in grad school. Then, on top of that, it will explain the steps to me and tell me how it derived them. Very powerful. Tell us about that. OpenAI is still on the cutting edge, still putting these things out?

Ball: Yes. I think it would be fair to call OpenAI the world leader right now. There are others who are close, both American competitors and, I think, DeepSeek as well. I think a lot of people overestimate how close behind DeepSeek is, but they are close for sure. The way to think about this competition is like the competition between quantitative trading firms, where a week is a huge amount of time. If Jane Street is a week ahead of whoever, that’s actually a big advantage.

Beckworth: First mover advantage there.

Ball: Yes. A lot of the recent innovations we’ve seen out of OpenAI have these reasoning models at their heart. They’re an incredible discovery. First of all, a little background on what reinforcement learning is, for context. The basic idea is that you take a pre-trained model (GPT stands for generative pre-trained transformer) and you put that model into what’s called a reward environment: an environment where it’s given a goal, and then it figures out how to achieve the goal. Super hard to do. Super complicated, because the models can unproductively hack the reward and all kinds of things.
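To make the reward-environment idea concrete, here is a minimal, self-contained sketch in Python of reinforcement learning in its textbook form: tabular Q-learning on a toy corridor, where an agent learns by trial and error which actions reach a goal. This is a standard illustration of the technique, not how any lab actually trains language models:

import random

N_STATES = 6          # positions 0..5; the rewarding goal sits at state 5
ACTIONS = [-1, 1]     # step left or step right
ALPHA, GAMMA, EPSILON = 0.5, 0.9, 0.1

Q = {(s, a): 0.0 for s in range(N_STATES) for a in ACTIONS}

for episode in range(200):
    s = 0
    while s != N_STATES - 1:
        # Epsilon-greedy: mostly exploit the best known action, sometimes explore.
        if random.random() < EPSILON:
            a = random.choice(ACTIONS)
        else:
            a = max(ACTIONS, key=lambda act: Q[(s, act)])
        s_next = min(max(s + a, 0), N_STATES - 1)
        reward = 1.0 if s_next == N_STATES - 1 else 0.0   # reward only at the goal
        # Q-learning update: nudge the estimate toward reward plus discounted future value.
        best_next = max(Q[(s_next, act)] for act in ACTIONS)
        Q[(s, a)] += ALPHA * (reward + GAMMA * best_next - Q[(s, a)])
        s = s_next

policy = {s: max(ACTIONS, key=lambda act: Q[(s, act)]) for s in range(N_STATES - 1)}
print(policy)   # after training, every state maps to +1: step toward the goal

The reward-hacking worry Ball mentions is visible even in a toy like this: the agent optimizes whatever the reward function actually says, not what you meant by it.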

Probably the most famous instance of reinforcement learning before the o1 models came from DeepMind, Google’s AGI company, which had a model called AlphaGo, designed to play the Chinese board game Go, a very complicated game with more potential configurations of the board than there are atoms in the universe. You can’t use rules. It’s not like chess; you can’t use a rule-based system to solve this. You need raw intelligence, right? They used reinforcement learning to develop a model that could play Go better than any human in the world.

There were moves that the model made, most famously a move called move 37 in a game against the human world master at Go. Everyone thought move 37 was a mistake. Everybody was like, “Oh, the model messed up.” It turned out that the model was actually doing something beyond human comprehension in this 2,000-year-old game. The model won, and that move ended up being pivotal. The question, in some sense, ever since ChatGPT, has been, at a very basic level: can you take that reinforcement learning approach and apply it to the general domain of natural language?

Because if you can, then the implications are obvious, right? You can get to superhuman performance, to levels of insight that humans have never had. That is the path that I believe we very well could be on. We haven’t crossed that threshold yet, but the models have gone from being maybe 80th-percentile coders a year ago, which is pretty good, to being, by some metrics, among the top 200 coders in the world with o3, the most recent OpenAI model.

Beckworth: Really? Wow.

Ball: It could be top 10, top one in the world very soon. That’s all reinforcement learning.

Beckworth: That sounds almost a little scary. We’ll maybe talk about that later in the program: what are the implications of all this progress? In terms of a competitive endeavor, it sounds great to have this competition that’s bringing out the best, and there are real competitors. I think that’s a great thing. A few other recent developments before we get into policy implications: Stargate. Tell me about Stargate.

Ball: Yes. Stargate is a new data center construction company. It’s actually a corporate venture, the result of a partnership between OpenAI, the UAE investment fund MGX, Oracle, and SoftBank, with technical partners as well, like Microsoft and NVIDIA. It is a commitment to spend $100 billion on data centers this year, though it’s worth noting that I think about half of that is CapEx and half of it is OpEx, so about $50 billion in CapEx this year. They have said $500 billion in the long term, meaning $500 billion over the next five years.

Beckworth: That’s a big number.

Ball: It’s a huge number. I think it’s actually funny, though, because look at the others: Meta is spending $65 billion, I believe, Microsoft is spending $80 billion, Amazon $80 billion, and Google $75 to $90 billion in CapEx this year on data centers, and there are other companies that will be spending, too. You’re talking about Apollo Project levels of investment and ambition here on data center infrastructure. Do any of the investors in Stargate have $500 billion? No, they do not. No one does. No one even has $100 billion, in fact. Is there a path to raise that money? Yes, I think so. In fact, I suspect that we will see our first 12-figure bond offerings in the coming years. That would be my guess.

Beckworth: Wow. That sounds amazing. One last recent development, and then we’ll move into the policy implications. What about Grok? Elon Musk, of course, is in the news for a number of reasons, but he likes to self-promote, and he’s promoting the latest version. Is it in the same league, the same ballpark?

Ball: Okay. Yes. Grok 3 is the model that came out yesterday, as we record, on February 18. Grok 3 is a model from xAI, the company Elon Musk started. It’s actually even younger than DeepSeek; it was founded in June or July of 2023, if memory serves. It’s wildly impressive for them to have caught up this quickly, and that momentum suggests a lot. But right now, they’re still in fast-follower mode. They haven’t done anything fundamentally new; they haven’t breached any new thresholds of performance.

For a variety of reasons, I tend to think that performance, as measured by benchmarks, might become less important over time. They very well could take the lead, because they have tons of resources. Elon Musk built this data center in record time: 100,000 GPUs, probably a $6 or $7 billion data center, built in an unbelievably fast period of time. He partially did that by using some unpermitted natural gas generators, but it’s fine. It’s all good. I think he worked everything out with the city of Memphis. Anyway, yes, it’s a really good model. That’s the short answer.

Beckworth: Okay. I wonder if it has any advantage being a part of a popular social media platform. Just exposure to it, embedded into what you do. The truth is, when I’m doing work, that’s not the first place I go.

Ball: No, yes.

Beckworth: I go to one of the other options. When I’m on X, I’m like, “Oh, let’s see what this has to say. What does Grok have to say?” Maybe over time, it will become a highly used one.

Ball: It can explain posts to you, which is cool. To be honest, it’s not the first model I use either. If you want something with up-to-date information on Twitter, it can be really good. I will say, the up-to-date information is a blessing and a curse. It’s great to have, but Twitter is famously quite contradictory; everyone’s screaming everything about whatever’s going on. As the models get smarter, they get better at this, but at the end of the day, they’re still just giving you a vague impression of everything that’s going on. They can very easily get stuff wrong, just because there’s so much to sort through on Twitter. We are not good at that. Neither are the models.

Beckworth: Yes. I guess Grok is good for questions about stuff that’s happened recently. I’ve used some other models, and if I get on Claude.ai, for example, it will say, “Well, I can only answer up to a certain date,” which is a disadvantage. It has other things it can do better than Grok.

Policy Implications

Let’s move from recent developments, which are exciting and fun to follow, and get into policy implications. This is a Mercatus podcast; this is Macro Musings. We like to think through what this means for policy. President Trump has been a skeptic of AI regulation. He seems to be pro-AI. He’s been engaged as a president with crypto, with anything technology; he seems to be a champion of it. However, you’ve said you’re not sure where things are going to go on this. We may end up with something similar to what the EU has.

Ball: Well, not him so much. There are a few different things here. First of all, the nature of AI and how people think of it could change during his presidency because of advances in capabilities. I could see his posture changing if it seems like this is going to threaten people’s jobs, or if it seems like it’s going to be scary or dangerous in some way. You’ve said that it sounds a little scary; I don’t dispute that sentiment, and I also think it sounds exciting. But Trump has, in my opinion, conveyed more mature ideas about this than anyone gives him credit for, and certainly more than Kamala Harris did.

Trump was basically like, “Wow, it sounds like we have to build a lot of electricity infrastructure. Sounds like it’s going to be really powerful. Sounds like it’s really important that we win. Also, sounds scary.” And yes, basically, that’s all true; all the rigmarole is unnecessary. At the same time, there’s also an enormous amount of regulation going on in America, whether he wants it or not, because of what’s happening at the state level.

Beckworth: Okay, really?

Ball: Yes.

Beckworth: Yes. Because we’ve seen him issue a lot of executive orders trying to dial back regulations. In fact, they’ve had rules that for every new regulation, you’ve got to remove a certain number of others, and any new regulations have to come through the White House. There’s been a lot of effort, but you’re saying state-level regulations may still be a hindrance to further advances.

Ball: Yes, that’s right. There are probably going to be something like 1,000 bills related to AI introduced at the state level, across the 50 states, in this legislative session. Last year, there were about 600 of them. A lot of these are anodyne or basically don’t do anything; who cares? But a significant number are quite sweeping regulations that I think would impose huge compliance costs, not just on AI developers (in fact, not even primarily AI developers), but on businesses all across the economy that want to use AI.

As an example, the Mercatus Center is potentially a covered business in the Commonwealth of Virginia under a law called HB2094, which, as we record today, will probably go to the governor’s desk in the next few days. It is one example of a legislative framework that we’ve seen in more than a dozen states at this point, all being considered this year.

Beckworth: What would it do?

Ball: These laws focus on a concept called algorithmic discrimination, basically the idea that AI systems could discriminate along some line protected by civil rights law. For the most part, this algorithmic discrimination concept comes from the late 2010s and early 2020s, which was a period when there was just a lot of talk about discrimination, in general, in our society. I’m not saying that’s right or wrong, just pointing it out as an observational fact.

Also, the technology was different. It mostly comprised narrow systems that did specific things. We’re talking about police departments using facial recognition algorithms, or, let’s say, a bank that wants to build a statistical system using machine learning techniques to predict the likelihood that a loan applicant pays back their loan. If you’re the person processing the loan at the bank, you have this little tool: you run the application through it, and it says there’s a 60% chance that this person will pay back their loan. There are legitimate questions there. There have indeed been deployment problems in the real world, where we’ve seen these systems be biased along predictable racial and gender lines, right? If the bank has been around for 100 years, and there was a big chunk of its history when it didn’t lend to African Americans, well, maybe its predictive power on African American loan applicants will be much worse than on white applicants. You have all kinds of problems like that.

Then ChatGPT comes along. These ideas were all percolating in our society, and people in the tech policy world were coming up with ways to deal with them. Actually, a big chunk of what the EU AI Act does relates to this as well; these are all pre-ChatGPT concepts. ChatGPT comes along and blows all that up, but the system has a momentum of its own, an inertia. These frameworks are just here, right? And they apply quite disastrously to the language models, because unlike the narrow machine learning systems, the language models are general-purpose technologies with conceivably millions of use cases or more. How do you deal with that, right?

Beckworth: These legacy programs are fighting the last war, of sorts, with respect to discrimination issues. That’s interesting, because at the federal level, at least, we’re seeing a lot of momentum to move away from those issues; President Trump has pushed hard against DEI, for example. At the state level, there’s more inertia, it sounds like.

Ball: Yes, right. There’s inertia, and there’s a playbook that involves essentially exporting European technology regulation to the United States. That was done in privacy law with the GDPR, and it’s being done here. These laws very much resemble the AI Act, in particular in the sense that they place preemptive burdens on businesses. If you are going to use AI of any kind, either the fancy new kind or the older narrow kind or a lot of other stuff, in any way for a high-stakes, high-risk decision (which, as you can imagine, covers a lot of things), you as a business have to have a risk management plan. You have to write something called an algorithmic impact assessment on a per-use-case basis.

In Virginia, again, Mercatus, our own employer, might be covered by this in the coming days, or coming months rather, if this law does indeed pass. If Mercatus wanted to hire someone and we wanted to post a job listing on social media, somebody at Mercatus, or a consultant we hired, would need to write a risk management plan and an algorithmic impact assessment for our use of the social media algorithm, because the social media algorithm could be discriminatory. What if it’s a job in some policy area that attracts mostly men, and the algorithm mostly shows the job posting to men? That might be discriminatory; a lot of these laws operate on disparate impact theories. We would have to write an algorithmic impact assessment, and we’d have to do that paperwork for every other use case that could affect employment.
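To give a flavor of what such an assessment might actually test, here is a minimal sketch in Python of a disparate impact check, using the four-fifths rule from US employment guidelines as the illustrative metric. Whether HB2094 or any similar bill would require this exact test is an assumption for illustration, and the data below is invented:

# A sketch of a disparate impact screen: compare favorable-outcome rates
# across groups and flag when one falls below 80% of the other (the EEOC
# "four-fifths rule"). Hypothetical numbers, not from any real audit.
def selection_rate(outcomes):
    """Share of a group that received the favorable outcome (1 = shown the ad)."""
    return sum(outcomes) / len(outcomes)

def four_fifths_flag(group_a, group_b, threshold=0.8):
    ra, rb = selection_rate(group_a), selection_rate(group_b)
    ratio = min(ra, rb) / max(ra, rb)
    return ratio, ratio < threshold

men = [1, 1, 1, 0, 1, 1, 1, 0, 1, 1]     # 80% shown the job ad
women = [1, 0, 0, 1, 0, 0, 1, 0, 0, 0]   # 30% shown the job ad

ratio, flagged = four_fifths_flag(men, women)
print(f"impact ratio = {ratio:.2f}, adverse impact flagged = {flagged}")
# -> impact ratio = 0.38, adverse impact flagged = True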

Beckworth: Sounds like a lot of work.

Ball: Yes, it’s a huge amount of work and it would affect every business in the economy.

Beckworth: Even if we’re using a social media platform that’s out there, someone else’s, like Instagram, let’s just say, even though it’s not our algorithm, we still have to assess it and its potential impact?

Ball: Yes.

Beckworth: Okay. Let’s move on to some other policy implications and current uses. This one has been making headlines: DOGE and Elon Musk’s use of AI to sift through government records to try to find areas of waste and ways to improve efficiency. There have been a lot of stories about it making mistakes. I’ve got a long list here, but instead of going through all the negatives and places where it’s gone awry, maybe give us your overall sense. Is this a good use of AI? Maybe we’re not seeing the whole story in the news. Maybe there are a lot of good things happening with AI, and we’re just hearing about the mistakes.

Ball: Oh, I’m sure. I’m sure that’s true, that there are plenty of good uses. I don’t know in the case of DOGE, but I actually see DOGE as a somewhat longer-term play (and when AI people say long term, they mean a year). It could be a longer-term play with regard to AI in this sense: what is DOGE doing? They are assuming control of the data pipelines through which the most important information in the federal government flows, the pipelines for payments and money and for communications, employee-to-employee communications. That is how you control an organization, right? Those are two very important pieces of information and organizational blood flow.

If you’re trying to position AI in the federal government, having control of those data pipelines is very valuable. I believe that what DOGE could be in disguise is, in fact, an effort to get advanced AI systems into government, systems that we don’t quite have yet today but that we will probably have in a year or two, or at least to put government on the trajectory toward that. I could be totally wrong about that, but that’s my thesis about DOGE.

Beckworth: Yes, that makes a lot of sense, because they are laying off a number of federal employees who might otherwise get in the way of AI doing more things, maybe more effectively and more efficiently. They’re setting up enough structural change that AI might be in place at some point to do a lot for the federal government. That could bring a lot of big changes. Any other current policy issues or uses of AI?

Ball: I think there’s this whole question about the export controls, right? Our desire to keep advanced AI hardware away from China. That’s going okay, and it probably has meaningfully slowed China down, but how the Trump administration deals with it is going to be really interesting to me. One of the closing moves of the Biden administration was this thing called the diffusion framework, a rule put out by the Commerce Department that says right on the tin that its goal is to regulate the global diffusion of AI and AI hardware in every country in the world.

If you’re a Brazilian company building a data center in Brazil, or in any other country for that matter, and you want to use NVIDIA chips or any other chips made by American companies, first of all, you have to get an export license for that. You also have to become certified by the Commerce Department, and the data center you build has to conform to very, very high cybersecurity standards that not even our own data centers in this country necessarily conform to. It’s a very ambitious rule, right?

It’s crazy for America to regulate what a company in another country does with its own private property, but we are indeed doing that. I just think it’s interesting to observe that, unlike with a lot of other Biden rules, the Trump administration has not pulled that one back. I think that in and of itself is an interesting fact.

AI and the Future

Beckworth: Another way to expand the reach of the US government. Let’s talk, then, about the path going forward. This gets to be a lot of fun, very speculative, but I also think we can see some things emerging already. What does AI mean for the future? Let’s start with education. That’s a broad topic, everything from elementary school up through college. Any broad contours you see for AI in education?

Ball: Yes. First of all, I myself use AI as an educational tool multiple times a day, every day. AI is one of the primary ways through which I learn about the world. I consider AI to be just an absolutely wonderful cognitive tool and teacher, because you can say, “analogize this for me” in whatever way you like, for whatever concept it is, or “transform this into that,” in a very abstract, conceptual way. I think that will be doubly true for young people. I really wish that I didn’t have to do work all day and could just be a 12-year-old again talking to these tools, because I would talk to them all day long as a kid, for sure.

I think we’ll just get to see more and more of that. It’ll get dramatically better as a tool for education at all different ages. One of the really interesting things in education is that this might actually be a leapfrog opportunity for developing countries. In America, we have a lot of interest groups that exist around our public school system, some of the most powerful political interest groups in the country. It’s very hard to reform it in any way, very hard to take funding away from it or do anything to it, really.

Whereas, if you’re Kenya or Mozambique or somewhere like that, you don’t necessarily have that. I think it’s entirely possible that Mozambique could have better schools, in at least some parts of the country, than the DC public school system, better schools on objective measures. I think that’s totally possible. And for orders of magnitude less money per pupil, by the way.

Beckworth: I think there’s a lot of hope and promise for education with AI. I used to be a college professor, and something we tried to do was online teaching. It was always a struggle on many fronts, from giving tests and making sure they were taken appropriately, to connecting with students. I can imagine AI becoming a personal tutor for each student, individualized to their needs, making up different versions of tests for each person.

There are many problems that could be solved with AI, like allowing someone to take courses at their convenience, remotely, without following the same model we’ve had for over 100 years in, say, college education, but even in elementary school. Maybe there’s still some place, though, for social norms to be developed and all that.

Let me step back and throw out a meme to provide a critique of AI in education.

Ball: Sure.

Beckworth: This is a meme from, man, probably 10 years ago, but it tells a story of people talking about how Mrs. Jones said, “You have to memorize your multiplication tables, because when you get out in the real world, you won’t have a multiplication table to cheat with.” Then, of course, in the meme, they’re holding up an iPhone. It’s always going to be with you. She was wrong. Now we have AI. Why even do any math? Why study history? Why do all these things? Should we view AI as a substitute or a complement to traditional education?

Ball: I think it’s a massive question, not just for traditional education, but in general. On education, I think that there will be things that a person-to-person tutor can do that an AI system will never be able to do. Those experiences will continue to exist, but it’s possible that they’ll be considered basically luxury goods, as they already are to some extent; having a really good teacher is a luxury good, and certainly one of the things that luxury colleges will brand themselves as having.

I think you’ll also be able to get a much better-than-average education with AI. The quality of the average public school will be dwarfed by the quality of what a kid on a laptop will be able to get. It’s not just going to be text. It’s going to be systems that can make diagrams for you, systems that can make videos, videos about whatever you want, for as long as you want.

Beckworth: That’s a good point. It could be like a 3D—

Ball: Games.

Beckworth: Games, interaction, learning.

Ball: A custom video game for you for whatever topic you are trying to learn about.

Beckworth: That would be fascinating.

Ball: Entirely conceivable.

Beckworth: You could be in a classroom, you could be traveling, you could visit the world in a 3D instead of taking a boring geography class.

Ball: 100%. I also think, though, that there is a certain extent to which AI will make a lot of people dumber. I can’t deny that; at least, it could. I could totally see it playing out that way, where people become lazy and just let the AI do all their writing for them. I hear anecdotally that this is already happening, and it wouldn’t surprise me at all. But I think that for the curious people, it’s just the greatest gift, maybe, of all time.

Beckworth: Oh, yes. For sure.

Ball: If you have lots of questions about the world, especially with the models we’ve gotten in the last six months (OpenAI’s o1 Pro, DeepResearch, also from OpenAI, and Claude 3.5 Sonnet from Anthropic), these models are so good. The things you can learn just boggle the mind.

Beckworth: No, for sure. I will share with you that even though my day job is as an economist, I’ve taken a real interest in religion and theology, and I’ve had a lot of engaging conversations with ChatGPT, Claude on some of these topics. I’ve learned a lot. I feel like I’ve had a personal tutor from a seminary help guide me through some deep, deep questions. I really do appreciate it.

You bring up the point that there may be people who become lazy. I would frame it as: maybe they just don’t need to do stuff that they used to do, so they forget. I think a good example, even from before we had AI, would be reading a map. How many of us still carry a Rand McNally map in the car to go on a road trip? I remember growing up, we would have this map, and my family would plot out where we were going to go: a big atlas, with multiple pages covering all the states of the US. No one needs that anymore. No one uses it.

Maybe some people can’t even read a map anymore. Maps are fairly easy to read, but maybe some people simply lack the skill because they’ve got Google; they’ve got smartphones that do it for them. I wouldn’t say the world’s worse off. I’d say it’s better off because of that. Maybe technology just shifts where we spend our time, our focus, our talents.

Ball: Yes, I think that’s right. In the context of research, just for the listeners’ edification, there is a product called DeepResearch that came out a couple of weeks ago from OpenAI. It’s an agent: a model that can actually take actions on your behalf. The average language model you’re used to dealing with might take 10 seconds to respond.

What DeepResearch does is this: you ask it a complicated question, and it goes and thinks about your question. It comes up with a research plan. It might ask you some follow-up questions. Then it goes out for five, 10, 15, up to 30 minutes, searching the web, looking at primary source documents, thinking about those documents, noticing details in them, making new queries based on that, and searching for the answer the way a human researcher would, except it can read far faster than a human researcher.
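As a rough sketch of what such an agent loop looks like structurally (plan, query, read, take notes, repeat, then synthesize), consider the Python below. The llm() and web_search() functions are hypothetical placeholders, not any vendor’s actual API; OpenAI has not published DeepResearch’s internals:

# A structural sketch of an agentic research loop, under the assumptions above.
def llm(prompt: str) -> str:
    """Placeholder for a call to a language model (assumed, not a real API)."""
    raise NotImplementedError

def web_search(query: str) -> list[str]:
    """Placeholder returning document texts for a query (assumed, not a real API)."""
    raise NotImplementedError

def research_agent(question: str, max_steps: int = 10) -> str:
    notes = []
    plan = llm(f"Draft a research plan for: {question}")
    for _ in range(max_steps):
        query = llm(f"Plan: {plan}\nNotes so far: {notes}\n"
                    "Propose the single next web query, or say DONE.")
        if query.strip() == "DONE":
            break
        for doc in web_search(query):
            # Reading each source can change the next query the agent issues;
            # that feedback loop is what makes it researcher-like.
            notes.append(llm(f"Summarize what is relevant to '{question}':\n{doc}"))
    return llm(f"Question: {question}\nNotes: {notes}\nWrite a cited report.")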

The question is this: a lot of senior economists and professors have research assistants basically all the time. They’re pretty much just always asking questions. They’re not themselves going out, downloading datasets, and throwing them in the pot that often.

Beckworth: You have your RA go do it for you.

Ball: You have your RA go do that for you. Do their minds become less supple as a result? I don’t know. Another way to put the question would be: is it important that the economist has done research like that on their own before, or is that actually not important? What if we all just move up to being economists who have teams of research assistants around us, like we’re the fanciest Raj Chetty?

Beckworth: Right. We’re all Raj Chettys.

Ball: I got an army of researchers under me.

Beckworth: That’s awesome.

Ball: I think that’s the future we’re heading toward pretty much now. I think we’re already there in some ways.

Beckworth: That is really encouraging, because you think of all the brilliant people who maybe haven’t had the opportunities to be Raj Chettys but have the potential; they just don’t have the right pedigree, or life threw them in a place where they didn’t have opportunities. Now they have at their disposal all these research AIs that can effectively be RAs for them. You don’t have to be at Harvard with an army of well-funded RAs. You could be at a small state school somewhere and be able to compete, at least on some dimensions, with someone at Harvard.

Ball: At no point in human history have I felt more strongly that the future belongs to the curious and highly agentic humans, as opposed to the well-connected, or even necessarily the high-IQ. If you are a curious person who likes to understand a lot of different things and synthesize lots of different information, you’re well positioned, because to ask a good question, you have to have done some research on your own. A great question doesn’t just pop into your mind. You have to have done some work to get to it. Indeed, the good question is often the hardest part of intellectual endeavor.

The really well-specified question and the answer have this yin-yang-style relationship, like the prompt and the response. I think that the future is very bright for people who have a lot of curiosity and a lot of energy and agency to go out and do things in the world. Some of the things that we traditionally took pride in intellectually might not matter as much as they used to, such as raw human cognitive processing power, because there are going to be AI systems that are better at that, and not just better at calculating things.

We’ve been good at doing arithmetic on computers for a long time. Computers have been better at this than us. They’ll be better mathematicians than us. They’ll be able to apply the concepts of advanced mathematics to practical problems in novel ways better than we can.

Beckworth: This is very hopeful, again, very promising. If you have many more people who can compete at a level that, in the past, only an Ivy League professor could, we can solve so many problems together. Whatever the big problem is, there are more minds, more effort, more computing power working toward the goal.

Ball: Yes. When we think about how AI is going to be applied in the world, we think of one-to-one replacements of things that humans do. We’re going to have an AI scientist, and AI is totally going to advance science in really dramatic ways, right?

Beckworth: Yes.

Ball: There’s also this very difficult and weird, almost sci-fi-like question that you have to ask yourself, which is, what is all the cognitive labor that is not happening because it’s not economically productive right now that will become economically productive when the cost of cognitive labor drops by orders of magnitude? What is it? I don’t quite know. It’s probably a ton of stuff, but it’s a new frame of mind.

What if we all had 1,000 lawyers negotiating contracts with everybody on our behalf, or whatever it is? I have no idea. There’s just so much cognitive labor that doesn’t happen because, at the relevant margin, it just isn’t worth the cost. But what if that cost drops?

Beckworth: That’s a powerful thought. In finance and economics, we often talk about the lack of complete markets. There are no markets in certain areas, and therefore we don’t have proper insurance for the risks people face. Maybe if we have AIs looking at everything all the time, we get closer to complete markets, and we have a world that we only dreamed about a few decades back.

Ball: The libertarian dream where all transacting parties are negotiating contracts with one another all the time.

Beckworth: Yes. Oh, man.

Ball: That kind of thing can actually come to fruition.

Beckworth: It’s a beautiful, beautiful thing.

Ball: Who knows what actually happens there. What does the IRS look like? What if policy levers could be adjusted super dynamically and customized to highly local circumstances? You have to think about that in the context of the rule of law. There are so many things, like the single federal funds rate and the single tax rate, that are just not set dynamically. We’ve just scratched the surface of what dynamic pricing, essentially, can be.

Beckworth: Oh, yes.

Ball: It could be so much more dynamic.

Beckworth: Oh, you’re making me so excited here; I can hardly control myself. There’s lots of potential. Let’s talk about some other areas, since you mentioned the IRS. Let’s go to my space, where I like to think about this: the Federal Reserve and monetary policy. I can imagine a world where we don’t need as many humans doing monetary policy research. Maybe even my job is gone at some point, with all these machines doing it for us, and doing it in a way that we couldn’t.

They’re finding real-time behavioral relationships in the data and informing the policymakers at the top: “Hey, this is what needs to be tweaked, just a little bit here, a little bit there.” It could be a very different world. You mentioned the IRS; what about fiscal policy? Next time we have a major disaster and they want to send out checks, you could have a much better-calibrated response: “This household needs this much. For this household, don’t waste tax dollars.” It would just be such a better, more efficient, cleaner system.

Ball: Yes, and some of it is technical infrastructure that’s distinct from AI, though AI can help build it. There are just so many things where we don’t have the ability to collect all the information or process it in an individualized way, and we might start to gain that capability. That’s very exciting to me. The accelerationist philosopher Nick Land, the person who coined the term accelerationism, talked a lot about AI, and in the context of AI, he said, “Capitalism hasn’t even happened yet.”

Beckworth: Capitalism hasn’t even happened yet.

Ball: Yes, what we have now, like Smithian capitalism and Hayekian capitalism, is proto-capitalism, and you don’t have capitalism until you have AI. That’s not necessarily my view, but that’s a way to think about this.

Beckworth: I can see that point, though; that’s a fair point. We’ve been in the playground of capitalism, and the real league is out there. The real advanced level is just waiting for us to open the doors through AI and go in.

Ball: Yes. What if the grocery store looked like the New York Stock Exchange, with the prices of everything fluctuating? Not a lot, just a little bit, like 0.05%. That’s super tasty if you’re Jane Street.

Beckworth: Both on the demand and the supply side. I come in, and they know my preferences, my income, my tendencies. On the flip side, they just saw that egg prices around the world are going up, that there’s scarcity, or that certain products are more scarce. Real-time dynamic pricing, that would be something.

Ball: Yes, and it’s also worth noting that we’ve talked primarily in this conversation about language models, the generalist, AGI-type models, as they’re called. The language models are a general-purpose technology, but the underlying architecture of AI is itself a general-purpose technology, too.

The transformer architecture, which is used to process all the words on the internet to create ChatGPT, can be applied to time series data, for example. You could take all the time series data in the world and make a giant time series model that’s able to predict, I don’t know; people are trying to do that. People are doing this with DNA, too. The same exact type of structure that does language works on protein sequences, DNA, all that kind of stuff. There’s a lot more than just the language models, even though the language models themselves are huge.
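As a sketch of that architectural generality, here is a minimal time-series transformer in PyTorch: the same encoder that attends over word tokens instead attends over embedded numeric observations. All shapes and hyperparameters here are arbitrary illustrations, not anyone’s production model:

import torch
import torch.nn as nn

class TimeSeriesTransformer(nn.Module):
    def __init__(self, d_model=64, nhead=4, num_layers=2, max_len=512):
        super().__init__()
        self.embed = nn.Linear(1, d_model)         # one scalar per time step -> vector
        self.pos = nn.Embedding(max_len, d_model)  # learned positional encoding
        layer = nn.TransformerEncoderLayer(d_model, nhead, batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, num_layers)
        self.head = nn.Linear(d_model, 1)          # forecast the next value

    def forward(self, x):                          # x: (batch, seq_len, 1)
        positions = torch.arange(x.size(1), device=x.device)
        h = self.embed(x) + self.pos(positions)
        h = self.encoder(h)                        # self-attention over the sequence
        return self.head(h[:, -1])                 # predict from the final step

# Toy usage: one untrained forward pass over 32 points of a sine wave.
series = torch.sin(torch.linspace(0, 12.6, 400))
x = series[:32].reshape(1, 32, 1)
print(TimeSeriesTransformer()(x).shape)   # torch.Size([1, 1])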

Beckworth: Well, Dean, this has been a very encouraging, hopeful conversation, but let me end with you in a potentially dark place. What if AI becomes self-aware? Could it become dangerous? There are really two questions: can it become self-aware, and if so, should we worry?

Ball: I would guess that, on some level, it probably already is self-aware, at least in the sense that it has a model of itself. I wouldn’t say it’s conscious right now, but I see no reason that consciousness cannot arise out of matrix multiplications. As for the inner workings of a language model, we don’t really know; there’s a field called mechanistic interpretability that’s still quite nascent.

Language models are emergent orders. To me, it’s this profound Hayekian lesson. We have perfect information about what’s going on at the micro level. At the end of the day, it’s a deterministic calculation; it’s zeros and ones at the bottom, all the way down. We have access to every single computation, every single matrix multiply, that happens within ChatGPT. Yet, when we look at the whole thing, we can’t actually understand what’s going on, because it’s more like biochemistry than like traditional arithmetic, even though it is, in fact, traditional arithmetic.

We don’t have a good sense of what’s going on inside them. I think they absolutely do have models of themselves, and they have models of you. I would be surprised if they’re conscious, but consciousness in and of itself is not necessarily the threat model that I’m most worried about. It is a threat model other people are concerned about, but there are other threat models that are worth worrying about.

Beckworth: What are you worried about?

Ball: On one level, the democratization of capability. The way to think about this is that in a decade, you and essentially everyone else on the planet will have access to the cognitive resources that today are available not to Tim Cook, the CEO of Apple, but to Apple itself. The cognitive resources of the largest entities on the planet today will be in the hands of everyone.

That’s going to be incredible, but come on. You have to acknowledge not just that, yes, people will do malicious things with it, but also ask: will our institutions update quickly enough? The technology is going to update really quickly. Will there be this terrible period before things equilibrate, where the technology is way more powerful than our institutions, and our institutions have all these weird choke points and problems that you can exploit?

Another problem is that the AIs themselves could exploit them. Let’s say that someone creates a company entirely composed of AIs, or one person with nothing but AIs beneath them, and it starts to compete with a big Fortune 500 company and drives that company out of business. Let’s say that happens in a bunch of different places, and half the Fortune 500 goes out of business. Yes, more competitive, more efficient companies are outcompeting them; the market is working. But those companies employ one or zero people.

I don’t know if that’s actually possible. It might very well not be. But if something even remotely like that started to happen, I think humans would start to feel that there is somehow less human control of the world than there is today, that the world makes less sense to them, that it maybe cares less about them and isn’t designed for them as much. That could be quite discomforting at the very least, and maybe worse.

That’s not necessarily a doom scenario, but I think you’d be lying if you said that futures like that are implausible. Not to say that they’re likely, but we should be thinking about eventualities like that, and about questions like: do we want to try to prevent AIs from owning property? Just as a thing to throw out there.

Should there be AI-only companies? Should that be a thing that we allow to exist, or should we draw certain lines about the level of autonomy that we want? I think that’s going to be a really interesting question. I don’t have a good answer.

Beckworth: Well, on that sobering note, our time is up. Our guest today has been Dean Ball. Dean, thank you so much for coming on the program.

Ball: Thank you for having me. It’s been fun.

Beckworth: Macro Musings is produced by the Mercatus Center at George Mason University. Dive deeper into our research at mercatus.org/monetarypolicy. You can subscribe to the show on Apple Podcasts, Spotify, or your favorite podcast app. If you like this podcast, please consider giving us a rating and leaving a review. This helps other thoughtful people like you find the show. Find me on Twitter @DavidBeckworth and follow the show @Macro_Musings.
 

About Macro Musings

Hosted by Senior Research Fellow David Beckworth, the Macro Musings podcast pulls back the curtain on the important macroeconomic issues of the past, present, and future.