Crazy stuff has been happening in the stock markets, some of it apparently driven by hype and fears about AI. I don’t play in the markets — but Barry Ritholtz, whom I’ve known for many years, does; he’s a money manager who’s also civilized (and reads!). So I decided to have him on again. Transcript follows.
. . .
TRANSCRIPT:
Paul Krugman in Conversation with Barry Ritholtz
(recorded 2/26/26)
Paul Krugman: Paul Krugman here. We’re in the middle of a really wild week. It’s currently Wednesday before this show goes up on Saturday. And I’m talking, for the second time, with somebody I’ve known a really long time, Barry Ritholtz, who’s an actual money manager. But also civilized. We’ll talk about all that and so on. But he’s joining me from my homeland, Long Island. And so hi Barry, welcome on again.
Barry Ritholtz: Hey, Paul. How are you? The secret to being a civilized money manager is to grow up lower class. You don’t adopt all of the late stage capitalism affectations.
Krugman: Yeah, but some of the Epstein class also grew up lower class. That didn’t stop them from developing horrific tastes.
Ritholtz: I guess that’s true.
Krugman: But the reason I wanted to talk was just for a little bit of a break from my usual sort of academics and politicians, but also, you know, we started off with a really interesting day in the markets. Citrini Research. I had never heard of them before. I don’t know if you had, but maybe you want to talk a little bit about that whole story and go on from there.
Ritholtz: Sure. So there’s a well-established historical track record of Malthusians and Luddites that fear technological innovation. But at the same time, the reality is, every time there’s a new technology, the nature of both the economy and in particular the labor force changes dramatically. And there are so many phrases we just take for granted. We don’t even think about it. Why do we call the monthly jobs report “non-farm payrolls”? Right? Well, before the Industrial Revolution, 90% of the population worked on the farm. You know, you had a handful of smiths and coopers and other people that weren’t agricultural workers. But pretty much that is what dominated the labor force and the process of moving people from the farms to the cities, from the exurbs to factories. Suddenly you had to keep track of, well, what’s changing? You know, we can’t just show all these jobs lost in the farms because they may not be offset. These family farms run by five, six, eight people, you know, the two oldest kids go off to the factory. What happened? So we’ve seen this happen.
There are genuine changes, but history tells us, at least so far, every one of these fears has been—yes, these have been disruptive. These have sometimes been wrenching changes. But the economy adjusts and it’s very dynamic, and higher production, higher value jobs replace the lower value jobs that have gone away. The argument today is, hey, this artificial intelligence stuff is very different. Someone just said half of the entry level white collar jobs will be gone in five years, and the apocalypse is coming.
I’ve been using Perplexity. I’ve been using ChatGPT. I’ve been using Claude Pro with Opus 4.6. And the things it does are clearly amazing and they’re absolutely going to replace some very entry level jobs. Back in the days when I was an attorney, we didn’t have desktop computers. We had a word processing division. There weren’t executive assistants; there were secretaries. And you would give something in writing to the secretary who would transcribe it, send it to word processing. The next morning you would get a printout. We called it redlining because you would take a red pen and mark it up. All those jobs have been lost. Those jobs are gone, replaced by computers. So how much is AI going to replace these jobs? That’s what has so many people upset.
The challenge we see from all these sort of viral clickbait things—there’s an element of truth. It certainly appeals to people’s fears. But the two dominant narratives about artificial intelligence: either this is a bubble and all these stocks are headed for a crash because this is malinvestment, or this is going to replace every white collar worker and we’re going to have 25% unemployment: both of those can’t be true. And history tells us that those sort of consensus views, that sort of fear, that tends not to be what happens. Usually it’s not even something in the middle. It’s something wholly unexpected. Look at what happened following the dotcom crash. Retail slowly got very damaged, but it’s begun to adapt, and the direct-to-consumer model replaced it. You know, we were over-malled. We had way too much retail compared to other countries. But still, there’s been a giant change. And the same thing’s going to happen here.
Krugman: Okay. So what happened—I think listeners might not know, but over the weekend this firm called Citrini Research, and I haven’t quite figured out who they are or what they do, wrote a beautifully written sort of memo as if from the year 2028, looking back at the crisis of 2026, 2027, that was about AI. And it was interesting because it wasn’t actually either of the usual scenarios. Instead of all the white collar jobs going away or bubble and burst, it was kind of like a displacement story. And I can’t say for certain, but reports suggest that part of the market crash on Monday was driven by this report. I don’t know if that’s actually true. You’re actually in the markets and I’m not, so.
Ritholtz: I’m always fond of pointing out to people that every time there’s an explanation for why the market just did what it did—“Hey, here’s a somewhat after-the-fact rational explanation”—well, if it was that obvious, why didn’t you tell us this before the market crashed? Very often it’s sort of a narrative fallacy hindsight bias combo that allows us to apply a degree of rationality to the uncomfortable reality that, hey, markets are fairly random. You know, A Random Walk Down Wall Street by Burton Malkiel didn’t become a classic by accident. It became a classic because there’s so much truth in the fact that minute to minute, day to day, markets are mostly noise; we only know what the trends look like over long periods of time. But on any given Monday, it could have been, you know, a butterfly flapping its wings in China, for all we know.
Krugman: Yeah, I was wondering, because there was a lot of confidence in those narratives. But I always think of Robert Shiller’s Irrational Exuberance. In 1987 there was Black Monday. And he happened to be in the middle of doing a research project on Wall Street. He had lots of fax numbers (speaking of technological eras) and so he faxed a lot of people and basically asked, “Why are you selling?” And there were all these after-the-fact explanations, but the only consistent reason was, “I’m selling because prices are falling.”
Ritholtz: Right. Trend is a real thing. Momentum is a real thing. Human beings—and I don’t want to make this all about behavioral economics—but human beings are social primates. And when the crowd is selling, “What do they know that I don’t know? Gee, maybe now’s the time to get out before this gets much worse.” That’s kind of what happened. A lot of the ‘87 post-mortems found that there were a handful of accelerants, and go read the book—my favorite book on market crashes, Black Monday by Tim Metz. It goes into all of the portfolio insurance and futures contracts and all these things. It’s almost never one thing. It’s always a perfect storm of all these different things that came together. But Black Monday really explains how things just accelerated and accelerated. And then into Tuesday, it got even worse before it stabilized.
Krugman: Yeah. One of the things that’s actually kind of relevant to your book, How Not to Invest, which we talked about last time back in May—and you said, well, how could it be that all of these things came together? But there’s a selection bias. It’s the days when the market falls by 23% that you notice. And you talk a lot in that book about the illusion of expertise.
Ritholtz: If you ask why somebody gets stuff right, it’s mostly—well, we only looked at the guys who kind of conspicuously got it right, which doesn’t actually mean that there was any real explanation for what they did. Businessweek used to do this big annual forecast where they would ask people where the market’s going to be, where’s interest rates, where’s the price of oil? You ask enough people, by just dumb luck, someone’s going to get it right. And part of the reason we know it’s dumb luck: it’s never the same person year after year. And my favorite part about that is everybody submits their guesses on a spreadsheet, so you could track it day by day. If you compare where things stood a day or a week before the due date with a day or a week after, when the date shows up a year later, a day in either direction changes the list, changes who’s on top. It’s so random. And a year is such a long time for a forecast to go wrong. Just life gets in the way. Nobody had the pandemic in 2020. Nobody had the Ukraine invasion. It’s just these geopolitical events get in the way.
But all that aside, to bring it back to artificial intelligence, look, there’s no doubt this is a major technology, whether you want to call it transformational or generational or just, hey, this stuff’s really cool and you could do amazing, amazing things—it’s going to have a big impact. What we don’t know is how big an impact. And so rather than simply forecast something (and so many forecasters get it wrong), it’s a great device to say, “Let’s look back in time at the crash of ‘26, ‘27.” This way, they’re not going on record in terms of saying there’s a crash, so they can’t be wrong. “We’re just imagining a scenario and we’re playing it out.” I have found that a better way to approach these sorts of things is to say, “Let’s lay out the whole spectrum of possibilities. What is the really good upside possibility? What is the really terrible downside possibility? And let’s work out a few gradations in the middle where most things tend to fall.”
When you said there are days that are down 23%, it’s a day. That was a one-off. Yeah. October 1987 was one day when that happened. But on the other hand, the look back on the crash of ‘26, ‘27 talks about a 25-35% crash—all right, so we had one of those in three weeks in February 2020. We had something obviously much worse in ‘08, ‘09. 2000 was about that in the S&P 500. You look at the tech-heavy Nasdaq, down 82%. That’s four out of five dollars just disappearing. It’s amazing. Down 34% during the pandemic—in a few weeks. In a few months you were not only back to break even, but by the end of the year, from the lows, you were up 69%. So yeah, of course markets go up and down.
The significant thing that we’ve noticed is the Mag Seven stocks, the hyperscalers that everybody was talking about last year. Well, first of all, two of the seven beat the market last year, meaning five of these companies that everybody was telling us are running away with the market underperformed the simple S&P 500 index. And this year I think all seven are underperforming the index. We like to see broad participation. We want to see small caps. We want to see value. Emerging markets, developed ex-U.S. Everything is doing really well except the Mag Seven. So that’s kind of telling us a lot of what’s taking place is these companies that are figuring out, “How could we be more productive, how can we be more efficient, how can we do more with the same resources we have?” Theoretically, you forget the Mag Seven, it’s the Mag 493 that are going to be the big winners of this. You go back to the early ‘80s, it was Hewlett-Packard (now called HP), Compaq, Gateway. Look at all the companies that have gone away. I am certain we will see something similar happen with AI, but none of us are computer companies. Everybody has a computer. None of us are internet companies. We all operate online. I suspect something similar is likely to happen with AI.
Krugman: Got it. A couple of directions I wanted to take it, but let me just say, you are not a nerd, which is actually useful here because you’re actually using AI. The truth is, I’m not using any AI.
Ritholtz: Really?
Krugman: I don’t use any of the chatbots. I don’t use anything at this point, really. But people are telling me that things like Claude are really useful. So what are you using it for? Because you’re somebody who’s not, you know, in love with tech for its own sake. So what do you do?
Ritholtz: If you think I’m not a nerd, it means I’ve done a really good job hiding my love of science fiction and gadgets and nerdery. But the first thing I started using was Gemini. If you read the Cory Doctorow book, Enshittification, it’s been pretty clear Google has been going downhill for a long time. However, their AI project Gemini has been spectacular. So when I search for something, I will basically, instead of hunting through pages and pages of results, I’ll just get the answer I’m looking for. Now, that is very different than when I am doing a deep search going down a rabbit hole. That’s when I want to scan through. But if I’m just looking for a quick question, Gemini is great for that.
The first thing I did about a year ago was add Perplexity to my phone. And it’s become an enormous tool. Some of it is informational, some of it is more detail. There is another Google product called NotebookLM, and it allows you to upload a PDF, no matter what size. A perfect example: the paperback of How Not to Invest comes out in May. And there’s nothing more frustrating than finding typos after you publish a book, right? I read the audio version of the book and I was like, “God damn it, I can’t believe they misspelled this.” So I uploaded it to NotebookLM and said, “Find the grammar errors, find the spelling errors.” And it just gave me a list of 100 things that saved me having to sit down and reread it for the hundredth time over eight or ten hours, and it was amazing. So there is another use.
I’m not a fan of the way AI or Grammarly edits. I want the typos fixed, but leave your dumb prose opinions to yourself. I want my writing to have my own stink on it. Because there’s this tendency now that all this writing has this certain metallic, bad AI flavor to it. It’s very obvious. The SEC requires analysts to say, “I wrote this, I believe this, I have no conflicts.” I think the media is going to have to have people start saying, “Yes, I wrote this. It’s not AI generated. It’s really a person saying these things.”
Claude on a desktop is super powerful. But before I get to Claude, I’m going to go to ChatGPT. This has been the most impressive portable tool. But let me just ask it this: “Tell me about Paul Krugman’s career and some of his more famous publications.” [ChatGPT answers in a lifelike Englishwoman’s voice:] “Paul Krugman is a renowned economist known for his work in international trade and economic geography. He earned the Nobel Prize in Economic Sciences in 2008 for his contributions to new trade theory, which explains how economies of scale and consumer preferences for diverse goods could drive international trade. He’s also well known as a public intellectual and prolific author.”
Krugman: Yeah, all right.
Ritholtz: I’m going to stop that there. But this is Star Trek level stuff. I mean, I didn’t pre-set that. That was just a question. And Victoria—everybody kind of names their chat assistants—Victoria gives me that answer. Anytime I ask a question like that, especially something where there’s a lot of information online, there’s a lot of data, there’s a possibility of getting it wrong, so I have to be aware of that. But with that level of competency, it feels like, all right, we don’t have flying cars yet. We almost have self-driving cars. But this is sci-fi stuff at the highest level.
Krugman: That’s interesting, because a lot of people—and maybe it’s the questions they’re asking—but ChatGPT gets a lot of spitting reactions because of the hallucinations.
Ritholtz: You have to be very aware of how you prompt it and where you are. I asked a question I already knew the answer to, so I could check it by ear. I find I still don’t want to rely on it for anything significant, like if I ask it a question where the outcome is really important. You have to check it yourself.
There is an important footnote to this, and this is very much a counter to a lot of the fears. We have been told for a long time that some of the first jobs to go are things like radiologists reading X-rays and MRIs and CT scans. And there was just a fascinating piece in the Times by a radiologist who said, “Far from putting me out of work, AI has made me more productive, more accurate, more efficient.” So the interesting thing is, look, artificial intelligence is not artificial, and it’s not intelligent. It’s not artificial because it’s based on all this real stuff that’s out there, for better or worse. And it’s not intelligent because it’s just foolishly believing everything it’s working through. It’s making a prediction as to, based on this prior pattern, what’s the next letter, the next word, or if it’s a visual thing, what is this going to be? So the skill comes not in the middle 80%—the rote, boring, grind-it-out reading of the X-ray. The technology is great with that. It’s the edge cases where you need some experience, some insight, some wisdom. So if you can spend more time on the harder cases, at which AI is no better than a person—in many cases worse—it’s taking all the basic stuff off your plate and allowing you to refocus your efforts. Now, the challenge there is, well, you don’t get to that level of expertise without putting in the years, without grinding out those 10,000 hours. So that’s something we have to be cognizant of. How do we make sure that 20 years from now, those radiologists have put in their 10,000 hours?
Krugman: Yeah, I worry about that quite a lot. I guess Gemini sort of comes with Chrome?
Ritholtz: With any Google product, it’s there. And Gemini is going to end up powering Siri. I think after a dozen years of failures, Apple has finally said, “Fine, let’s outsource this.”
Krugman: Yeah. And so I often just turn it off. If you say “minus AI” at the start of your search, it won’t do it. I’ve spent 50 years in this business and I kind of trust my own deep background memory more than I trust AI on anything economics related. But two generations on, will any of them have actually had that kind of experience? It’s a genuine concern.
Ritholtz: Listen, you look at the unemployment rate for recent college grads, for people 20 to 25—it’s double the national rate or higher. And you can’t help but look at some of these AI tools as at least a factor. I don’t want to say it’s all of it, but it’s certainly a factor in this.
The other place these tools are incredibly useful is things like my morning list of reads that I crank out to a big list of people. It’s a mix of some market stuff, some economic stuff, and some things on real estate, maybe some sports, some books or movies. And these are the ten most interesting things I’ve seen. Every time I read something I use Instapaper and I say, “Save this to read later.” And I used to have an assistant assemble all that in a long HTML format, which was tedious and a grind and kind of a pain to do. Now the last thing that happens at 7pm is Claude goes out and gets that, formats it for me and shows it to me. I just take my ten favorite reads and cut and paste them—it takes me three minutes instead of 45 minutes. And in the morning it gives me an updated version, which I then share on Bluesky and LinkedIn. That used to take me 15 minutes, and now it takes me 15 seconds.
So you could see how somebody who used to do that job—and it’s tedious and boring—all of these entry level positions, all of these grind positions that used to be a first rung on the ladder to working your way into a firm and learning the industry, learning the business, getting some skills—I don’t know what happens with that. And it’s absolutely concerning.
Krugman: Yeah. But basically these models are trained on stuff that’s out there, which, when they started, it was all generated by humans.
Ritholtz: That’s right.
Krugman: And as more and more content is actually generated by AI, the sort of slop apocalypse story comes in, where it sort of chokes on its own effluent.
Ritholtz: Yeah, there’s no doubt about it. To me, the two most fascinating aspects of this are: first, if you’re a decent writer and your voice, your tone doesn’t look like AI, your content is going to have value, because people should be able to identify this—at least until AI learns how to imitate somebody, although, you know, there are copyright and name, image, likeness concerns with that. And secondly, a big part of How Not to Invest is kind of calling the media out for being lazy and sloppy. And with AI, we’ve seen so many fake stories, and it’s so easy to be misled. Hopefully the boomerang to that is we all have to be very aware of our sources, very aware of the content we’re consuming, and practice good information hygiene, which has been taken for granted for so long. I’m hoping that all of the deepfakes and all of the things coming out of AI force people to say, “Hey, is this a credible source? Do they have a long history? Are they trustworthy? Are they human?” Hopefully that’s the backlash that leads people to be smarter about everything they consume.
Krugman: Yeah. I really want you to talk about two related stories. One is, I’m actually having a persistent problem of YouTube channels pretending to be me with AI “me” appearing. They pop up and we get them swatted down, but they pop up again. I guess a sophisticated news consumer would say, “This just doesn’t look right,” or not that it doesn’t look right, but it doesn’t sound like me. But some of them have videos that appear to be me talking and they can get 50,000 views before we kill them, and it’s pretty shocking. The other thing, and this is going to be fun, I did talk to an Italian podcast and they asked my permission, and apparently their plan is to have me talking in Italian for the podcast, which I guess you can do. So we’ll see.
Ritholtz: I’d be very curious to listen to that. So first off, your best line of defense against AI slop is going to be AI. You can use Claude or something like that to peruse YouTube and quickly identify fake Paul Krugmans. You know, one of my favorite documentarians is David Attenborough. And he has such a unique and distinct voice and cadence. And I started watching something the other night on YouTube, and about five minutes into it, I heard a “fact,” and I’m like, “Wait, what? That’s bullshit. That’s totally wrong.” I go to the description. Buried in the description is, “This is an AI generated video.” So not only did I not watch that, I left a negative comment. I put a thumbs down. You should say front and center if AI is being used. Maybe that’s where some legislation is needed. I don’t think it violates the First Amendment to force someone to say, “This is AI generated.”
There was another channel with David Attenborough, and they go out of their way to say, “Hey, this isn’t AI generated. This is licensed material from the BBC.” And it’s really clear, there’s a little chyron up in the corner: “NOT AI. This is the real David Attenborough.”
There’s now an arms race between you and the AI bots and how fast your AI bots can counter their AI bots. It’s not that different from the drone wars in Ukraine. It’s move, parry, thrust, countermove. And that’s the only way to keep up. They can create this so fast. You have to use technology to catch up with them and swat them down.
Krugman: Yeah, it’s actually kind of terrifying.
Ritholtz: By the way, Google should be doing a better job. You know, it’s funny. Everybody kind of bet against Google. “Oh, this is the end of search. AI is going to kill it.” And very few of us had the foresight to say, actually, Google is a giant technological brain trust, so why wouldn’t you assume they can figure this out? When you see companies that have gone through the Doctorow enshittification process, like eBay, like Amazon, like Google, it’s because of the profit motive, and they don’t want to spend the time and money doing what they should do. I have no doubt that if YouTube—and they’re owned by Google—wanted to put Gemini to work on this, they could very quickly find out what’s slop, what’s AI, and pull it down, especially if it’s about a public figure or imitating someone’s name, image, likeness. You shouldn’t have to file that copyright infringement notice. I’m sure they’re doing some proactive policing. They’re just not doing enough.
Krugman: Yeah, I’d actually forgotten, of course, that YouTube is owned by Google. I think of it as very different, but it’s actually owned by Google, and presumably using a lot of the same technology.
Ritholtz: Fastest growing video outlet in the world. Netflix, Paramount Plus, HBO, Amazon Prime—forget all that. YouTube is going to be the mac daddy in that space if it’s not there already. The growth of YouTube is just astonishing. I don’t know how many people really started playing with it during the pandemic. But that growth line from 15 years ago to now—I’m waiting for it to plateau and it just doesn’t seem to be slowing down.
Krugman: Yeah. I mean, there are many worlds on YouTube. One of the things about it is that, if you’re disciplined, you can tame the algorithm so it will only show you certain kinds of things. And so I’ve said this now in other contexts: my two iron rules are no politics and no cute animals. Either one of those can clog up your feed with slop for days. But lots of people do get their politics there. YouTube by all accounts makes X look like a sane and reasonable space. And of course the animals and all the other stuff. But I hadn’t even thought about the fact that, of course, this is Google. They have Gemini, they have all these resources. They could basically stop the slop.
Ritholtz: I mean, I don’t know if they could stop 100% of it. I have no doubt they could cut it in half, and with a little bit of effort, probably reduce it 75, 80, 85%. And so you’re just left with some of the most difficult stuff, or the stuff that keeps popping up and popping down. You know, back in the days when blog comments were useful—we didn’t get Nazi posts back then, although there was plenty of anti-Semitism—if someone was really a jerk, I would just block their IP address and I would never hear from them again. I’m sure Google can figure out something similar.
Yeah, there’s technology, there’s VPNs and other ways to get around it, and hence the arms race. But it’s time, it’s effort, it’s money. And, you know, just like we learned the other day that Substack has been monetizing Nazi posts, the reality is all this slop is being monetized and there’s no incentive to stop it other than the takedown notices. If it damaged the brand, if it damaged the subscriber numbers, the user numbers, the advertising numbers, they would stop it in a heartbeat, but they’re monetizing it. So the incentives just aren’t there.
Krugman: That’s interesting. Yeah. I’ve been hearing about, but I haven’t actually looked into the Substack Nazi issue.
Ritholtz: I found it by accident on a Google search 5 or 10 years ago—my name came up in something and I read through this thing. Obviously cowards don’t use their real names. They’re using a fake name. I couldn’t find anything about this person. Sent a note to Substack, never heard back. But it’s there. And that’s not AI. It’s just “We’re going to monetize bad content and throw up a First Amendment defense.” Anybody who publishes a book immediately sees a whole bunch of fake AI versions of their book, workbooks and other things. And again, you know, Amazon should be policing this. Some idiots are buying these fake books that are AI generated. And I found a version of my book by a guy who is supposedly a long-standing financial reporter. You Google this person’s name, you can’t find anything anywhere. The guy is “a long-standing financial reporter” who’s never published an article, has no social media. It’s obviously fake AI stuff, but someone’s paying for this, and Amazon is like, “Okay, we don’t care. Listen, you’ll get 90%. 10% will find its way to the AI generated stuff.” Again, legislation is probably required to force these companies to do what’s right.
Europe is way ahead of the United States in terms of privacy, in terms of not allowing these “algos” to run amok. And was it Australia that just passed the rule where you’ve got to be 16 or 18 to subscribe to TikTok and Instagram? Probably a great idea if we’re concerned about the mental health of teenagers in our country.
Krugman: Yeah, I’m wondering about enforceability, but it might be one of these things where a fairly low barrier is actually enough to make a big difference. But yeah...
Ritholtz: Just saying that it’s dangerous isn’t enough. Listen, we don’t let kids drink or smoke. And the medical evidence is overwhelming that this is damaging to kids. So let them start at 18. Combine that with the recent spread of high schools and middle schools saying “no phones.” Come in, you lock your phone in your locker, you get it back at the end of the day. It’s disruptive. We had centuries of school without cell phones. If your parents need to reach you in an emergency, they’ll call the school. We know what class you’re in. We’ll go get you. There’s just no reason for a kid to be fooling around on TikTok or Instagram instead of studying in class.
Krugman: Yeah, maybe we should invest in turning all of our schools into Faraday cages. That would be doable.
I want to come back a little bit to, what are your scenarios for AI and the economy in the market over the next few years? As you said, we don’t know what we don’t know. But I’m just curious because you’re in the market, but also infinitely more literate about macroeconomics than most of your colleagues. So want to talk to me about it?
Ritholtz: I’m just so very aware of all the things I don’t know. It forces me to have not just humility, but, you know, when you’re mapping out a war game, when you’re thinking about a scenario, it’s “Hey, what don’t we know and what do we know?” So let’s take best case, worst case, and the more likely middle case—the fat part of the bell curve. The best case scenario is AI is a wonderful tool that makes all of us more productive, more efficient. We’re going to be freed up from the boring, grinding stuff, and we’ll all be free to pursue a higher level of work, a more intellectual level of work, a more human level of work, because we’re not doing the rote, mechanical stuff. Maybe not quite Star Trek where nobody has a job and it’s all, you know, guaranteed basic income. But, hey, we’re going to take away a lot of the drudge work, at least from white collar workers. We’ll free ourselves up with that. And therefore, companies are going to become more productive and efficient. The grunt work goes away, and the higher level white collar work is going to get the focus. Hey, you know what? We’re going to be able to do your taxes online for free because AI will figure all this stuff out. So if you have a basic tax return without a lot of moving parts, hey, this will be free. And that’s sort of an interesting thing.
The worst case scenario is S&P 500 profits are at record highs. Those profits are going to be under assault. You’ve had a number of big consulting companies quietly reduce their fees for their biggest clients who are saying, “Why do I need to pay you this much? I can have AI substitute for you.” “No, no, you need us to help shepherd you through the AI transition.” “All right, but I’m not paying you $3 million a year to do this. I’m going to pay you $500,000.” So there’s going to be some pressure on some of these very profitable, maybe excessively profitable companies.
And just recall, the fear 20 years ago was, “Hey, all our legal work, all our tax work, our accounting work, that’s all going to India, where it’s $0.10 on the dollar.” Well, some of the basic stuff did, but it didn’t have as much of an effect on white collar industries as was feared. I think this is going to be more significant than that was, but not as significant as the fears suggest.
We’ve started to see rolling sets of fears. First it was software, and then a subsector of software, the SaaS stuff, which is Software as a Service. We saw Microsoft get hit. We saw Salesforce get hit. All these big companies, until they figure out how to use AI to make what they do more specific, more productive, more efficient. Who else seems to be getting hit? I don’t understand why retail stores would. You would think better business intelligence and a better ability to track all these things would help them.
My partner Josh calls that HALO: Heavy Assets, Light Obsolescence. Meaning if you have things that aren’t subject to this sort of technological disruption, you’re going to tend to do well. So it depends on how much impact you have on the real world versus how purely digital you are. Remember, a couple of decades ago, digital was the future. It was seen as friction-free, so much cheaper to move bits around than atoms. And that was thought to be the next wave, until AI comes along and says, “Hey, maybe your profit margin is too heavy, too fat, and we’re going to replace this.” So now real estate, commodities, oil, energy, even things like data centers are appealing, because they’re not going to get bypassed by AI. And again, the reality probably lies somewhere in the middle.
It’s going to have an effect. Companies are dynamic. They tend to adapt to these things, especially when their stock sells off 20-30%. The board gets nervous. The CEO is wondering when he gets replaced. And so, hey, how do we use these tools to prevent becoming obsolete ourselves? There’s this tendency amongst the people who create clickbait—I don’t even want to call it fearmongering; it’s more “here’s the worst case scenario, be aware of it”—to take this moment in time and just extrapolate straight out to infinity. And what we’ve learned is, you throw a pebble in a pond, and it’s not the first ripples that matter. Those are easy to predict. It’s what happens when they bounce off this rock and then hit another, and another pebble comes in, and you have all these reverberations, and it becomes so challenging to figure out where they end up. This is just the initial state of affairs.
Think back to Y2K. Was Y2K an overblown set of fears, or did everybody respond to that and prevent it from becoming worse because they prepared for it? You can make an argument either way. Something similar is likely to happen with AI. Hey, this is an existential threat. We better get our acts together and figure out a way to add value and use AI. Otherwise we will be obsolete. And so that’s the likely scenario. Just don’t imagine 100% of entry level workers being tossed out of the white collar jobs. Instead, how are companies going to respond? How are they going to use this to justify selling their products, their services? So it’s less likely than our worst fears today. But it’s also less likely to be a perfect Star Trek-like utopia.
Krugman: Yeah. For me, in terms of daily life, the really revolutionary technology has been the Instant Pot, which produces a pretty good version of a lot of stuff.
Just a few more minutes here. You’ve been writing about IEEPA. And by the way, I have to say what a wonderful thing that the law in question basically is a yelp of pain, right? IEEPA! But anyway…
Ritholtz: The amazing thing to me is how long it took the Supreme Court to come to a conclusion that any first-year law student could have told you. Article I, Section 8: “The Congress shall have Power To lay and collect Taxes, Duties, Imposts and Excises.” They don’t name tariffs, but clearly duties and imposts cover them. This was a no-brainer, and it should have been 8-1, 9-0. Sometimes when you have a sweeping decision, a dissent comes out just to say, “Here’s what we want people to think about. There are other use cases that didn’t happen here. But we can’t just make this a 9-0.”
And what’s fascinating to me about the Kavanaugh dissent is simply what a sycophantic, embarrassing... Like, someone has to remind him: you’re not a junior lawyer in the DOJ or the State Department. You’re a sitting Supreme Court justice. For you to write a roadmap for the president to re-implement tariffs... You know, embarrassing may even be too mild a word. Here’s the guy who was supposed to be the heir apparent to Justice Scalia’s intellectual conservative heft, and this was just an embarrassing dissent. Chief Justice Roberts wrote the majority opinion, which went far enough to basically say the president overreached his authority. He should have gone to Congress. He did not. And this is clearly in the Constitution as a matter of separation of powers: the power to tax belongs to Congress.
The other aspect of this that I’m kind of entranced by is two really interesting data points from last year. One is that the U.S. dollar fell 9.4% against a basket of other currencies. That’s the worst year since 2017, when the dollar fell 9.9%. The thing both of those years have in common: they’re both first years of a Trump presidency, and they both reflect a rollout of tariffs. And it’s pretty clear that the global economy is not keen on it. Hold aside the fact that every economist not named Navarro knows that tariffs are a terrible idea. They’re ineffective, they’re inefficient, they’re regressive. They’re a VAT, sort of like Europe’s, only minus the free college education and health care.
So, you know, we saw this in the ‘30s with Smoot-Hawley. It didn’t cause the Depression, but it certainly made it worse. And the Supreme Court went out of its way not to weigh in on the pros and cons of the tariffs. But anybody else who’s looked at this has said it’s a terrible idea. The balance of trade has not gotten better. And that $18 trillion number—every fact checker has said that not only is it made up, it’s double the White House’s own figure of $9 trillion, which was also made up. And I think the Times really blew this headline: why would anybody who cut a deal with the president honor it, now that the deal turns out to be based on unlawful behavior by the president—the Supreme Court said no good? He loses his single biggest hammer.
You know, they’re spinning it. He’s spinning it. They’re rolling out the Section 122 tariffs, which are only good for 150 days and can’t be targeted at any specific country. But to me, it looks very much like they’re trying to put a good face on a tremendous loss. I think the rest of the world is just going to say, “No, we made a deal. But your Supreme Court said ‘no mas.’ And by the way, you’re the guy that keeps ripping up deals. We made this deal because you tore up the other deal. So why are we obligated to honor our contracts when you’re not?”
Krugman: The other data point I just have to bring up is last year, the U.S. was the laggard amongst all stock markets. That’s something people don’t know, even though people like you and Catherine Rampell keep on pulling it up. And it’s amazing, actually. You know, the world kind of decided that stocks were a good thing, but sort of U.S. stocks least of all.
Ritholtz: Yeah. So not only did the U.S. do poorly, but this follows about 15 years of U.S. outperformance. And it’s more than the U.S. doing 17% while the rest of the world does 33% or more. What caused this was what I call the repatriation trade, which we talked about a year ago. Here’s the threat of the worst case scenario, the end of Pax Americana. You saw a mild version of this: overseas investors who own a nice chunk of Treasuries and a nice chunk of U.S. equities basically said, “There’s more risk in the United States than we previously believed because of all these new policies. So we’re not going to just abandon America, but let’s take 10% of our overseas holdings and pare them down. We’re going to sell some Treasuries, we’re going to sell some bonds, we’re going to sell some equities.” And obviously, when you sell that in America, it’s in dollars; then you convert those dollars to your local currency and bring it home, which is why the dollar fell over 9%. Selling dollars and buying your local currency, then bringing it back home and buying your local stocks and bonds: those are the big footprints that were left.
When you see how big a chunk of U.S. bonds the rest of the world owns, and the dollar down almost 10%, that seems to be what’s going on. And you know that’s a problem if it happens again and again and again. I’m not a deficit hawk. I don’t really think the deficit is problematic. But at a certain point it becomes an issue if overseas investors are not helping us pay for our deficit. That’s the risk where yields spike, because the only way you can entice them is to say, “Forget 4%, it’s 7 or 8%.” If you think the housing market sucks now, wait until mortgages are 7, 8, 9%. And that’s a big potential problem. I’m not saying that’s likely to happen, but that’s the risk we’re facing.
Krugman: Okay. And my general verdict is you can’t be AI because you’re not apocalyptic enough and not clickbaity enough, but that was pretty good.
Thanks for talking to me.
Ritholtz: Any time.