Some people who ask me for advice at work get a lot of words in reply. Sometimes those responses aren’t specific to my particular workplace, and so I share them here. In the past, I’ve written about echo chambers, writing, writing for an audience, time management, and getting big things done.
Do you remember Cool Runnings? In the movie, John Candy is a retired bobsled champion who uses his experience, connections, and lovable-curmudgeon charm to turn a rag-tag group of sprinters into an Olympic bobsled team. A lot of principal engineer types think of themselves this way: they used to bobsled, they don’t bobsled anymore, but they still know the skills and the people and the equipment.
And that worked well enough, while we were still bobsledding.
But we’re not bobsledding anymore.
Many of the heuristics that we’ve developed over our careers as software engineers are no longer correct. Not all of them. But many. What it means for a system to be maintainable. How much it costs to write code versus integrate libraries versus take service dependencies. What it means for an API to be well designed, or ergonomic, or usable. What it means to understand code. Where service boundaries should be. Where security and data integrity should be enforced. What’s easy. What’s hard.
We’ve seen this play out in small ways before. Over the last decade, I’ve frequently been frustrated by experienced folks who didn’t update their system design heuristics to match the cloud, to match SSDs, to match 100Gb/s networks, and so on. But this is the biggest change I’ve seen in my career by far. An extinction-level event for rules of thumb.
But you’re a tech leader, and you need to lead, and leading is heavily based on using your experience to help people and teams be more effective. What now?
The victorious man in the day of crisis is the man who has the serenity to accept what he cannot help and the courage to change what must be altered.1
Let me assume that you want to continue to be a valuable tech leader. That you want your teams and organizations to succeed. And that you’re willing to sound less smart and less sure, in the interests of being right and helpful.
In that case, and I hope that is the case, your job has changed. Your job, for the foreseeable future, is to have the humility to accept that many of your heuristics are wrong, the courage to believe some are still right, and the curiosity to actively learn the difference.
You can’t throw out everything you know. Your taste, your high standards, your understanding of your business and customers and the deep technical trade-offs in your area are more valuable than ever before. This is like that fantasy that people have of going back to middle school knowing all the things they know now2. You’re ahead of the pack in many ways.
But you also need to really deeply question the things you know, and the things you assume. Before you share one of your rules of thumb, you need to deeply examine whether it’s still right.
And the way you’re going to know that, right now, is by getting back on the ice. Build. Own. Get your hands dirty and use the tools. Build something real. Build a prototype. Build a thousand little experiments in an afternoon. Challenge yourself to try to do something you previously would have assumed is impossible, or infeasible, or unaffordable. Find one of the ways that you’re worried that the new tools are going to lead to trouble, and actively fix it. Then examine the things you’re learning. Update your constants.
Over the next couple of years, the most valuable people to have on a software team are going to be experienced folks who’re actively working to keep their heuristics fresh. Who can combine curiosity with experience. Among the least valuable people to have on a software team are experienced folks who aren’t willing to change their thinking. Beyond that, it’s hard to see.
This is going to be hard for some folks. It’s hard to admit where you’re wrong. It’s hard to go back to being a beginner. It’s easy to stick your fingers in your ears and say “No, it’s the children who are wrong”. My advice is to not be that guy.
The good news? It’s as fun as hell. Get building, get learning, make something exist that you couldn’t imagine before.
Up betimes and over the water, and walked to Deptford, where up and down the yarde, and met the two clerks of the Cheques to conclude by our method their callbooks, which we have done to great perfection, and so walked home again, where I found my wife in great pain abed …1 [of her months; – L&M] I staid and dined by her, and after dinner walked forth, and by water to the Temple, and in Fleet Street bought me a little sword, with gilt handle, cost 23s., and silk stockings to the colour of my riding cloth suit, cost I 5s., and bought me a belt there too, cost 15s., and so calling at my brother’s I find he has got a new maid, very likely girl, I wish he do not play the fool with her. Thence homewards, and meeting with Mr. Kirton’s kinsman in Paul’s Church Yard, he and I to a coffee-house; where I hear how there had like to have been a surprizall of Dublin by some discontented protestants, and other things of like nature; and it seems the Commissioners have carried themselves so high for the Papists that the others will not endure it. Hewlett and some others are taken and clapped up; and they say the King hath sent over to dissolve the Parliament there, who went very high against the Commissioners. Pray God send all well! Hence home and in comes Captain Ferrers and by and by Mr. Bland to see me and sat talking with me till 9 or 10 at night, and so good night. The Captain to bid my wife to his child’s christening.
So my wife being pretty well again and Ashwell there we spent the evening pleasantly, and so to bed.
I’ve never built more interesting, random, and useless scripts, tools, and services than I have in the last six months. The cost to go from “Random Thought” to “Working Something” has never been lower, thanks to Claude Code. But the increase in speed has only heightened my desire to move faster and more efficiently.
The following is a set of tools and practices I’ve gathered over the last 90 days, which continue to accelerate my process and give me daily joy.
Everything lives under ~/Projects/. Each project is its own git repo with its own CLAUDE.md (project-specific instructions) and WORKLOG.md (session history). Three repos do special duty:
dotfiles — Machine configuration. Shell config, terminal config, Claude Code settings, skills, and status line script all live here and get symlinked to where the system expects them (~/.zshrc, ~/.claude/settings.json, ~/.config/ghostty/config, etc.). Checked into git so every machine stays in sync with a pull.

credentials — A private repo for API keys and secrets, kept separate from project code.

scripts — Standalone CLI tools added to PATH. Things like fresh (repo health checker), gpush (one-command commit and push), and ghostty-font (font switcher).

Every project gets two files: CLAUDE.md and WORKLOG.md. CLAUDE.md is instructions and reference — how to build, deploy, what patterns to follow, where things live. It rarely changes. WORKLOG.md is the session diary. Every time Claude and I work on a project, it logs what we investigated, what changed, what we decided, and why. When I come back days or weeks later, Claude reads the worklog and picks up where we left off instead of starting cold.
Claude generates a lot of stuff that I cut and paste, but at first, copying from Ghostty picked up unavoidable leading spaces in the output. The fix? Have Claude put text straight onto the clipboard via macOS pbcopy1. And depending on where I’m posting (mail, Slack, messages), I have Claude format the text appropriately before pasting.
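The shape of that fix is easy to sketch. Here's a minimal version under my own naming (the helper `clip` and the fallback behavior are mine, not the author's): strip leading whitespace from each line, then hand the result to `pbcopy` where it exists (macOS), falling back to plain stdout elsewhere.

```shell
# Hypothetical helper (name and fallback are mine, not the author's):
# strip leading spaces from each line, then copy to the clipboard.
# pbcopy is macOS-only, so fall back to plain stdout on other systems.
clip() {
  sed 's/^[[:space:]]*//' |
    if command -v pbcopy >/dev/null 2>&1; then pbcopy; else cat; fi
}
```

Anything routed through a helper like this lands on the clipboard without the leading-space problem.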
It is often much faster to just dump a screenshot into Claude Code rather than describe the issue. I used to grab a screenshot and then cut and paste it from my Documents directory. Too slow. F12 grabs a region of the screen and puts it on the clipboard so I can paste. Configured in macOS System Settings > Keyboard > Shortcuts2.
I move between three machines a lot, and the state of the art is changing, well, daily. A day working on one machine means my local config has improved THAT DAY, which means that when I move to the next machine, the first thing I want to do is update its setup.
I have a single script that validates my entire Mac setup. Checks 30+ items across categories: core tools (Homebrew, Python, Node, Ghostty, Claude Code), shell config (zsh default, oh-my-zsh), symlinks (.zshrc, .gitconfig, Ghostty config), SSH (key, agent, keychain), credentials, and coding fonts. Reports green/yellow/red per item. When things are missing, prints fix commands in dependency order — SSH before git config, Homebrew before everything that needs brew install.
# This is an example
# Check Claude Code global settings.json
if [ -f "$HOME/.claude/settings.json" ]; then
  settings_issues=""
  if ! grep -q "alwaysThinkingEnabled.*true" "$HOME/.claude/settings.json"; then
    settings_issues="thinking"
  fi
  if ! grep -q "statusLine" "$HOME/.claude/settings.json"; then
    settings_issues="${settings_issues:+$settings_issues, }statusLine"
  fi
  if [ -z "$settings_issues" ]; then
    print_row "Claude Code settings" "${GREEN}✓ Configured${NC}" "Thinking + status line enabled"
  else
    print_row "Claude Code settings" "${YELLOW}⚠ Incomplete${NC}" "Missing: $settings_issues"
    missing_items+=("claude-settings")
  fi
else
  print_row "Claude Code settings" "${RED}✗ Missing${NC}" "~/.claude/settings.json"
  missing_items+=("claude-settings")
fi

I have an oh-my-zsh git plugin with the robbyrussell theme. The prompt shows a yellow ✗ when the working tree has uncommitted changes. No custom config — it’s the default behavior of that theme’s git_prompt_status function.
For times when I forget, I have a script called Fresh that walks through my entire Project directory and reports uncommitted changes, unpushed commits, and stale repos across all projects. One command to answer: “Did I forget to push something before switching machines?”
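I haven't seen the author's script, but the core of a fresh-style check is small. Here's a sketch under my own naming, using plain git plumbing: flag any repo with uncommitted changes or with commits the upstream hasn't seen.

```shell
# Sketch of a fresh-style health check (my reconstruction, not the author's
# script). Usage: check_repos ~/Projects
check_repos() {
  for repo in "$1"/*/; do
    [ -d "$repo/.git" ] || continue
    name=$(basename "$repo")
    # Staged, unstaged, or untracked changes?
    [ -n "$(git -C "$repo" status --porcelain)" ] &&
      echo "$name: uncommitted changes"
    # Commits the upstream hasn't seen (skipped quietly if no upstream is set).
    [ -n "$(git -C "$repo" log --oneline '@{u}..HEAD' 2>/dev/null)" ] &&
      echo "$name: unpushed commits"
  done
  return 0
}
```

A "stale repos" check, the third thing the author's tool reports, would need a definition of stale (say, no commits in N days), so it's left out of this sketch.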
Memories (~/.claude/projects/<project>/memory/): Persistent notes Claude saves between sessions per project. Things like “this user prefers terse responses” or “the auth rewrite is driven by compliance, not tech debt.” Claude reads them at the start of each conversation to pick up context that it would otherwise lose. They’re markdown files with frontmatter (type, description) indexed by a MEMORY.md file. Types: user preferences, feedback/corrections, project context, external references. Checked in alongside my project, and by far the largest timesaver for building context.
Skills (~/.claude/skills/<name>/SKILL.md): Reusable prompt templates invoked with /skillname. A skill defines a multi-step procedure Claude follows — like a macro. Example: /floyd loads a voice definition file, then rewrites whatever content I have in that voice. Skills don’t execute code themselves; they inject instructions into the conversation that Claude follows. I have skills for blog posts, podcasts, recurring expenses, and a lot more.
Hooks (settings.json → "hooks"): Shell commands that fire automatically on Claude Code events like tool use or end of response. I don’t use them yet, but they’re there for automation — things like running a linter after every file edit or logging tool usage.
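For reference, this is roughly the shape hooks take in settings.json as documented for Claude Code. Treat the field names as a sketch to verify against the current hooks docs, and note that the logging command here is a placeholder of mine, not anything from the author's setup:

```json
{
  "hooks": {
    "PostToolUse": [
      {
        "matcher": "Edit|Write",
        "hooks": [
          { "type": "command", "command": "date >> ~/.claude/tool-use.log" }
        ]
      }
    ]
  }
}
```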
Claude Code has a configurable status line at the bottom of the terminal. Mine runs a bash script that renders three lines of live data:

The rate limit data comes from Anthropic’s usage API, authenticated via an OAuth token pulled from the macOS Keychain. It caches the response for 60 seconds, so it doesn’t slow down every render3.
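That caching trick generalizes well. Here's a sketch with my own names (the author's script internals aren't shown in the post): re-run an expensive command only when a cache file is older than a threshold, otherwise serve the cached copy.

```shell
# cached_run CACHE_FILE MAX_AGE_SECONDS COMMAND [ARGS...]
# Refreshes CACHE_FILE by running COMMAND only when the file is missing or
# older than MAX_AGE_SECONDS, then prints the cached contents.
# (My sketch, not the author's status line script.)
cached_run() {
  cache=$1 max_age=$2; shift 2
  now=$(date +%s)
  # mtime via GNU stat (-c), with a BSD/macOS fallback (-f); 0 if no cache yet.
  mtime=$(stat -c %Y "$cache" 2>/dev/null || stat -f %m "$cache" 2>/dev/null || echo 0)
  if [ $(( now - mtime )) -gt "$max_age" ]; then
    "$@" > "$cache"
  fi
  cat "$cache"
}
```

A status line script would wrap its API call in something like cached_run "$HOME/.cache/usage.json" 60 followed by the curl invocation, so at most one render per minute pays for the network round trip.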
I’m usually working on several projects at once, so at-a-glance tabs are essential. I have a claude() wrapper in .zshrc:
claude() {
  printf '\033]0;Claude: %s\007' "${PWD##*/}"
  CLAUDE_CODE_DISABLE_TERMINAL_TITLE=1 command claude "$@"
  printf '\033]0;%s\007' "${PWD##*/}"
}

This sets the Ghostty tab title to “Claude: projectname” when Claude starts, suppresses Claude’s own title management via the env var, and restores the tab to just the project name on exit.
I’ve written this up entirely because I am certain others have found equally satisfying improvements, and I want to know what they are.
Footnotes

1. echo hello | clip
2. Win+Shift+S opens the Snipping Tool for region capture to clipboard.
Holistic space observation: the shift from SSA to SDA (SpaceNews)
Recent reporting on SpaceX’s proposal to deploy up to one million satellites in low Earth orbit — paired with a vision of AI-enabled, autonomous orbital infrastructure — marks a decisive moment for the space community. Regardless of whether these numbers ultimately materialize, the direction is unmistakable: space is moving toward unprecedented scale, autonomy and strategic […]

ESA to fly dedicated Crew Dragon mission to ISS (SpaceNews)
The European Space Agency plans to charter a SpaceX Crew Dragon mission to the International Space Station to give more flight opportunities for its astronauts.

Kayhan targets investors, insurers with expanded orbital intelligence platform (SpaceNews)
Kayhan Space is branching out from providing orbital intelligence used to coordinate satellite fleets with a new software platform that turns that data into business insights for investors and insurers.

Blue Origin joins the orbital data center race (SpaceNews)
Blue Origin is the latest company to propose an orbital data center system, filing plans for a constellation of up to 51,600 satellites.

Officina Stellare wins $2 million contract for lasercom ground station in Spain (SpaceNews)
MILAN – Officina Stellare, an Italian manufacturer of advanced opto-mechanical systems, has signed a 1.84 million euro ($2.0 million) contract with the Barcelona-based Institute of Photonic Sciences (ICFO), the company announced March 17. The contract covers the design and construction of an optical ground station for future laser and quantum-encrypted space-to-Earth communications. The infrastructure will […]

Rocket Lab launches eighth Synspective radar imaging satellite (SpaceNews)
Rocket Lab launched the latest in a series of satellites for Japanese radar imaging company Synspective on March 20.

Another GPS launch shifts from ULA to SpaceX as Vulcan investigation continues (SpaceNews)
The change affects the GPS III SV-10 satellite, which had been slated to launch on ULA’s Vulcan Centaur rocket.

OHB Sweden wins €248 million contract to build EPS-Sterna constellation (SpaceNews)
OHB Sweden has won a record contract for Sweden’s space sector to build 20 satellites to boost Europe’s weather forecasting and climate monitoring capabilities.
Why can’t tree-huggers & forest destroyers get along?
I lived in the wilds of southern Oregon for 17 years. I enjoyed many things about life in the forest. One of the things I loved most was watching the politics. Our community was polarized before polarization was cool. Loggers versus environmentalists, sure, but also weird alliances. Open-carry fisherman cozying up to tree huggers to save the salmon & steelhead. Rugged loggers cozying up to (relatively) soft townies with money.
My roots went deep in the area, one set of grandparents having moved there in 1943. The other grandparents (that grandfather was a former mayor & city councilor) moved there in 1933. I felt connected & invested, even if I didn’t usually stick my ideas out there.
The one time I did put my 2 cents in (this is back when we had cents) was when I tried to resolve the absolute hatred between environmentalists & loggers using what I was studying about incentive structures. I think the system I came up with was cool but it quickly disappeared, so now I want to put it in public permanently here.
Here’s the setup:
Only 5% of the old-growth forest was left.
Loggers wanted to harvest all of it.
Environmentalists wanted all logging to stop.
Second-growth forest was prone to catastrophic wild fires. (There’s a special kind of helpless feeling watching a fire approach your home.)
Loggers wanted to harvest second-growth.
Environmentalists wanted all logging to stop.
The result was a complete impasse. Forests burning. Mills closing. Crime & drugs up. Anybody with any ambition leaving. Nobody was getting what they wanted—loggers, environmentalists, workers.
(Or at least what they said they wanted—there seemed to be a bunch of psychodramas playing out.)
I recast the forest thinning situation as an incentives problem (I was intensively studying incentives at the time). Once the loggers finally got permission to harvest a tract of second-growth, they were incentivized to take out every stick of wood with economic value, leaving further growth stunted and encouraging the growth of flammable underbrush.
Because environmentalists saw the loggers’ incentives, they were ever more incentivized to block all logging & put onerous restrictions on any activity that managed to sneak through. In Influence Diagram terms, more logging leads to more money & more damage, but more damage leads to more resistance leads to less logging.
Classic inhibiting loop. More logging leads to less logging. (We could go on & on mapping this system, but this will do to illustrate my point.) One of the many interventions we can make in a system is to speed or slow feedback. What if, instead of getting paid for this harvest, the loggers got paid for the next harvest? Today they’d thin the forest, with any valuable material turned into products, but they wouldn’t be paid until 10 years later, out of the proceeds of the next forest thinning. You would get paid more if the forest thrived over the next 10 years, less if it grew less.
(I think I kind of made up the notation for delay.)
Now we have a reinforcing loop. More logging. Less damage (because the loggers get paid in a decade). Less resistance. More logging (in the form of forest thinning.)
That first logger, how do they get paid? They are paying for diesel, salaries, depreciation today & won’t get money for 10 years. The right to be paid in 10 years can be turned into a financial instrument to be sold today (remember those soft townies with money?) Now you have monied interests who also care about the forest’s health.
And who is best suited to evaluate the health of the forest for future gain (and to avoid future loss from pests or fire)? Well, those environmentalists who care so much about the forest are well positioned to act as consultants & auditors.
Mill workers would be back at work. Local capital would have another way to extract rents. Environmentalists would have healthier forests. Loggers would have trees to cut.
I wrote the above up as a letter to the editor of the local newspaper. That got me invited to a “summit” of conservationists & loggers. When the microphone got around to me I had an attack of shyness, said something self-deprecating, and passed the mic on to the next person. So that was that.
Would it have worked? Maybe. Entrenched interests were more interested in staying entrenched than in making progress. That’s true today in many situations I see. It’s not as simple as “change the rules and the behavior will change”. But “don’t change the rules & the behavior will definitely not change”.
Hey folks! I was traveling this week to give an invited talk at Western Michigan University, so I don’t have a blog post ready for you. That’ll also probably be the case for next week (where I will be at the annual meeting of the Society for Military History), though at least there I will have an abstract to let you see.
Now I am always reluctant to post the text of talks that are intended to be delivered live, because the genres are different: they rely on different kinds of delivery, and they often aren’t footnoted and such for written publication. But in this case, I can do something a bit different, because the main parts of my talk for Western Michigan University were based on things that I’ve written (and in one case, something someone else has written) which you can read. So this is a chance to plumb the archives, in a sense, and in so doing basically ‘read along’ a version of the talk I gave that is rather ‘meatier’ than what I could say in the 45-or-so minutes I had to speak.
The core of my talk was the concept of ‘historical verisimilitude‘ that I’ve riffed on here: the use of the appearance of historical accuracy, or a claim to historical accuracy in the absence of the real thing to market or promote something, be that something a film or show or game or what I have begun terming a ‘history influencer’ who makes history-themed social media content.
My initial example of this at work was the disconnect in Assassin’s Creed: Valhalla between the emphasis on visual accuracy and the catastrophic fumbling of other forms of historical accuracy, which you can read about in my “Assassin’s Creed: Valhalla and the Unfortunate Implications.” I then expanded on this example with a broader one from the 2000 film Gladiator and its initial battle scene, arguing that once again what was prioritized was visual accuracy, because that gave viewers the – incorrect! – assumption that ‘the research had been done’ on the rest, which you can read about in our series on “Nitpicking Gladiator‘s Iconic Opening Battle.”
I then jumped to an example of this as a rhetorical strategy deployed by marketing, grounded in a critique of how George R. R. Martin (and the marketing team for Game of Thrones) has framed historical accuracy, using the Dothraki as an example of how this can go badly wrong and perpetuate quite nasty stereotypes about real peoples through the supposedly ‘realistic’ (in fact, deeply flawed) depiction of a fantasy stand-in for those people. You can read about that in our series on the Dothraki, “That Dothraki Horde.”
From there I transition into talking about this strategy used by the aforementioned ‘history influencers,’ with a contrast between how differences in platforms between YouTube and Twitter produced very different environments: where YouTube’s long-form video nature pushed a lot of content creators towards more carefully researched historical content which was often actually quite valuable (I particularly focused, and again this was very brief, on arms-and-armor and historical dress channels), Twitter’s emphasis on ultra-short micro-blogging produced a very different environment.
For the part focused on Twitter, I leaned quite heavily on T. Trezevant’s “The Antiquity to Alt-Right Pipeline” published in Working Classicists in 2024, which I think is one of the most revealing investigations of this particular space and the incentives that the post-Musk Twitter algorithm, which appears to openly and quite strongly prefer frankly bigoted or xenophobic content, created. From my own observations, while some of the accounts that push this particular, generally badly historically misinformed, version of the ancient past emerged in the pre-Musk period of Twitter, Classics Twitter largely held its own until the algorithm was slanted against them, making it all but impossible for a lot of good Classics accounts to compete for eyeballs.
And then I closed with a plea for greater engagement by historians in these online spaces, albeit with a caution that picking your platform is important. The fact that historical verisimilitude, the pretense of historical accuracy or knowledge, is so frequently used as a marketing tool speaks to the public’s desire for an accurate knowledge of the past. Folks want to know what the past was really like, but of course regular folks often do not have the tools to tell what is reliable, rigorous and careful history vs. what is not. So as historians, we need to be more present in these kinds of spaces (though we ought to pick our platforms; there is little point ‘competing’ on Twitter if the deck is stacked against you) to help folks find the accurate historical knowledge they are seeking.
And that, in an abbreviated form (or an enlarged form if you read all of the links as you went!) was the talk! Very grateful for WMU for inviting me out to give it. Until next week!
Links for you. Science:
Estimation of undetected asymptomatic infections of COVID-19: a mathematical modeling approach
The selfish ribosome
Astronomers Estimated the Lifespan of Alien Civilizations, and It’s Not Looking Good for Us
Do America’s Top Health Research Officials Stick Around Too Long?
Suspended small business research programs derail development of gene therapies, hip implants, and more
A medical journal says the case reports it has published for 25 years are, in fact, fiction
Other:
The US-Israel relationship is finally facing a reckoning. It doesn’t need to slide into antisemitism. Israel’s role in drawing the US into a war on Iran is attracting healthy scrutiny. It’s also creating a permission structure for antisemitism
Proton Mail Helped FBI Unmask Anonymous ‘Stop Cop City’ Protester
Jasmine Crockett’s Partisanship Was Not The Problem. Her liabilities were real, but by November, anti-Trump partisanship might be a winning play across all Senate battlegrounds.
Watching ICE Agents? You Could Lose Your Global Entry.
Park Service to revive statue of Founding Father who enslaved hundreds
RFK Jr. wants Dunkin’ to prove drinking its iced coffee is safe. The health secretary put the Canton-based chain on notice for its sugary beverages
Ars Technica Fires Reporter After AI Controversy Involving Fabricated Quotes
Florence ICE detainee dead after untreated tooth infection, official says
ICE has spun a massive surveillance web. We talked to people caught in it
Trump Has Been Sued 198 Times for Withholding Funding. It Hasn’t Stopped Him.
Supervisors grill Waymo about 1,500 stalled cars during December blackout. The company apologized but said it still expects San Francisco first responders to help move stranded robotaxis.
How I Became A Target For Right-Wing Freaks At The Australian Open
Trump and His Soulless Cronies Have Managed to Suck the Joy Out of the World Cup
The whole country is Spartacus. Stephen Miller is furious.
Can AI Replace Social Science Researchers?
Florida Gov. Candidate Says His Campaign Was Banned From Waffle House After Tucker Carlson Interview
Peggy Siegal Defends Her Past With Jeffrey Epstein
Not Just Being Snarky
AI-powered search is fueling a wave of Epstein Files transparency projects
The Schumer Special
James Talarico, Jasmine Crockett and Democrats’ Dangerous ‘Electability’ Debate
Pardoned Capitol Rioter from Maryland Rearrested for Touching Women’s Hair on Metro
Man Got Mysteriously Sick on Vacation and Barely Survived. Now They Call Him a COVID ‘Patient Zero,’ and He Has Some Advice
Trump can’t win a war he can’t sell
Nearly 200 Killed in Strike on Iranian Girls’ School as UN Calls for Investigation into Attack
Everything is gender, part infinity
The US and Israel are fighting the same war — in opposite political realities (I don’t agree with the ToI assessment, but that is the mainstream view in Israel)
California GOP lawmakers are incensed over a gas tax study. Rural groups say they need it
Bill aims to block ICE detention centers from coming to Montgomery County
Markwayne Mullin Reportedly Fingered Nostrils of Colleagues and Their Spouses During Visit to Israel
Way down. Why they are down is puzzling, as homicides seem to be lower everywhere regardless of policing policy, and much of the Mid-Atlantic and Northeast has been pummeled by crappy weather, but D.C. has only had eleven recorded murders to date*. It is all the more surprising as “assault with dangerous weapon” arrests are up by a third. We also will have to see what spring and summer bring, since that is usually when homicides surge.
Still, it is encouraging, even if it is still too many killings.
*Two of these murders (CCN:25035518 and CCN:23167028) seem to be attributed to other years. I’m not sure if that means there have been only nine murders this year or if this is some kind of data error.
This article was previously behind a paywall. But now I’m making it available to everyone. Enjoy!
On the opening page of The Inman Diary, the book’s editor makes a bold claim: this work “has no counterpart in any literature that I am aware of.”
The author Arthur Inman had decided “the only way for him to win fame, perhaps even immortality, would be to write a diary unlike any ever written….It would contain the kind of information he had looked for and never found in other diaries.”
At first glance, this seems an impossible goal. Since the time of Augustine, authors of confessions, memoirs, and diaries have prided themselves on their candor and unflinching honesty. And after Rousseau, who pushed this dictum to an extreme, the limits of frank disclosure would seem to have been reached. What could Arthur Inman do in the 20th century, that hadn’t already been done before?
Adding to the challenge, Inman had nothing to write about—or so it seemed. He was a semi-invalid who spent most of his life in a darkened room. Even recluses like Proust and Pynchon are gadabouts by comparison.
Yet Inman wrote some 17 million words and filled 155 volumes (now housed in Harvard’s Houghton Library)—that’s roughly 25 times as long as the Bible—and devoted more than 40 years to his project. He started writing his diary in 1919 and continued working on it until shortly before his death in 1963.
But here’s the twist, and the quirk that turned Arthur Inman into one of the most fascinating writers of the 20th century. This peculiar man took ads in the newspaper, offering to hire “talkers” who would tell him the wildest and most intimate details of their lives. The end result was a diary with more than 1,000 characters—striving with one another to provide the most compelling, uncensored narrative. That crowd-sourced approach turns the Inman journals into a compendium of confessions unlike anything ever written down before.
Inman paid his “talkers” a dollar per hour. And he often went beyond listening, having sexual relations with some of the women who took the job. By any definition, he was a creepy guy whose behavior violated all reasonable norms. Even Inman’s own editor Daniel Aaron admits that this disturbed individual’s massive journal is the “autobiography of a warped and deeply troubled man whose aberrations call for psychiatric probing.”
Inman often closed entries in his diary with the send-off: “I wish I were dead.” Yet he also saw himself in a heroic light, dreaming of the posthumous fame his massive diary would eventually bring him. But even Inman must have known that what he was writing was far too controversial for publication in his lifetime without considerable censorship—although the awareness that he was violating the prevailing moral standards of his time may have motivated him all the more.
His talkers were extraordinarily trusting and candid. Perhaps the darkened room and the quasi-anonymity of the setting made it feel like an actual confessional, with all the sacramental implications such situations bring. Maybe it’s even simpler: people want to reveal their darkest secrets, as Foucault tells us, and will seek out settings where it can happen. Or perhaps the reality was more banal and tragic: these folks simply needed the cash, and Inman was the only person paying for their secrets.
Sometimes Inman’s talkers showed up with stories ready to tell, but if they were reticent, he would immediately start probing. Here is his account of a first meeting with a Mrs. Haviland (never brought back for a second session, because she was too “sweet”):
“I asked her how old she was, how long she’d been married, whether she loved her husband, whether he loved her, whether she loved her son or her husband best, why she’d been to the hospital lately, whether she used contraceptives, how much salary her husband made, who her ancestors were, how she budgeted her money, if she believed in God, how many friends she had, what did she look forward to in life, what sort of childhood she’d had, was she calm or emotional, did she read, like music, the movies, and so on. Most of her answers I believed, some I didn’t.”
Inman avidly read published diaries of others, and noted with dismay how often the original texts had been censored to avoid shocking the delicate sensibilities of readers. He was so upset by this that he wrote an angry letter to Dr. Francis Turner of Magdalene College, Cambridge, who was working on a transcription of Samuel Pepys’s diaries, complaining of these excisions. Inman added that if anyone ever censored his own diaries, he would come back as a ghost to haunt that person—and hinted that Pepys might do the same.
I doubt The Inman Diary could still be published by Harvard University Press nowadays. There’s just too much in its pages to upset, dismay, shock, and appall. And if you aren’t offended by Inman’s dealings with his talkers, you will invariably find his opinions on politics, society, religion, race, and a host of other matters reprehensible, in whole or in part.
So let me make clear: if you are the kind of person who needs a trigger warning, proceed no further. Trigger warnings were invented for books of this sort—which probably deserves an army of them, spaced out like sentinels every 10-20 pages. Yet is there any other book that conveys so fully the range of human experiences of that time and place with such brutal frankness and unflinching candor? The historian, as I see it, has a choice between getting a shock or remaining in ignorance. I know how I’d handle that trade-off, but I’m hardly typical in that way.
Time magazine had a different take on the diary. When the book was released, they dismissed Arthur Inman as just another “megalomaniacal bigot misogynist Peeping Tom hypochondriac.” In its more measured review, the New York Times declared: “[Inman] is not an attractive figure, but he is an oddly captivating one…. At the very least [the book] is of considerable clinical interest.” (True to this prediction, Inman is often cited in academic literature on aberrant psychology.)
The editor of the Inman diary, Daniel Aaron, clearly came to abhor the diarist himself—even the tone of his footnotes reflects his distaste. “I couldn’t stand him,” Aaron later commented in an interview. “How did I get involved with this man? How can I deal with this man?” But then he feels compelled to add: “And gradually as I read on, I became quite fascinated by him, seeing him as a kind of rare person.” Aaron eventually decided that a “movie would be the best way of capturing that book.”
In all fairness, Inman’s talkers didn’t seem very interested in self-censorship, and many clearly savored the opportunity to tell their raw stories without fear of consequences or judgment. They are a strange assortment, but where else will you find a book written in the 1930s where, on a single page, you encounter a firsthand account of pimping, prostitution, bootlegging, bribing, drug addiction, homosexuality, rape, illegal gambling, drunkenness, police violence, a stint in Bellevue, and even glimmerings of a philosophy of life? The appearance of a skilled musician who “tickles the ivories” is just an extra.
Do you doubt me? Here is an extract from the testimony of Anthony Abruzzo, age 24:
It’s possible that Inman’s hirelings invented stories to please their employer. But the actual experience of reading these narratives is utterly convincing—testifying to people’s intense desire to be understood, to be validated by the exposure (and acceptance by the listener) of their darkest secrets. And Inman, for all his faults, was quite a listener.
When Inman wasn’t listening, he was corresponding. Some of the most gripping sections of this book come from the letters Inman included verbatim. Patricia, a young woman living in Hollywood in the 1930s and having uninhibited dealings with aspiring stars, sent Inman frequent missives, and these will give you an angle on the pre-WWII film business you won’t encounter elsewhere. Even more moving—and more distressing—are the letters Inman received from lonely women in small towns whom he connected with via correspondence clubs, a Great Depression equivalent of today’s dating apps. Inman joined these clubs under assumed names, with the goal of getting stories, not dates. Once again, the whole enterprise is morally debased, but these letters from the lonely are gripping in a way no work of fiction could match.
Somewhere midway in his life project, psychiatry took off in America, but that was hardly the case when Inman started out. As late as 1930, the American Psychoanalytic Society had only 65 members, and there was a deep social stigma associated with seeking out counseling of this sort. Many of Inman’s talkers must have felt much better discussing forbidden subjects in a private conversation with a total stranger—best of all, one who would pay for the service, and showed such relish in every detail.
Inman hated psychiatrists, but he must have seen himself as a kind of fellow traveler in their world. He certainly sought out troubled people, and gave advice willingly enough. And he had another technique that, when incorporated into his diary, created a unique meta-narrative unlike any I’ve encountered in other books. Inman would write down frank and unsparing accounts of his talkers, then rudely show them what he had put in his diary. This would set off all sorts of angry scenes and recriminations—and thus provide him with further material for his project. This is a kind of experimental fiction beyond what anyone was doing in those distant days, and more like Karl Ove Knausgård in our own time than John Dos Passos (the novelist Inman ostensibly used as a role model).
Inman didn’t live long enough to see his life’s work show up in print. He attempted suicide at several junctures, three of them documented in his diary. Two weeks after the Kennedy assassination, on December 5, 1963, Inman was in low spirits—street noise always upset him, and now the construction of the Prudential Tower near his Boston apartment was more than he could take. A few months earlier, he had survived an attempt to kill himself with sleeping pills. This time he chose a revolver, brutally efficient in this instance—putting an end to both Arthur Inman and his enormous journal.
He would have been delighted at the posthumous publication of his diary by Harvard. But he could hardly have enjoyed the way he has been treated by posterity. He is rarely dealt with as a literary figure, but his book gets cited frequently in papers on neuroticism, suicide, hypergraphia, and various other psychological disorders.
Inman wanted to be a celebrated writer, but instead got turned into a poster child for dysfunction. That couldn’t have been his goal, yet in an odd sort of way, it ensures that the Inman Diary will survive as a seminal text. Written in a dark time by a disturbed man, it captured a part of the 20th century no one else dared put in a book. The only question that remains is who will dare to read it.
Financial habits in the United States have evolved significantly over the past few decades. Rising living costs, shifting job markets, and economic uncertainty have all influenced how individuals approach money. Saving, once considered a straightforward priority, has become more complex as Americans balance immediate expenses with long-term goals.
Today, financial decision-making is shaped not only by income, but also by external pressures such as housing affordability, healthcare costs, and education debt. These factors make it increasingly difficult for many households to maintain consistent savings habits.
For younger Americans, particularly those in their twenties, saving money often takes a back seat to more urgent financial obligations. Rent, student loans, and basic living expenses consume a large portion of income, leaving limited room for savings.
Despite these challenges, early financial habits play a crucial role in long-term outcomes. Even small, consistent contributions to savings accounts can grow over time, especially when combined with compound interest. Establishing discipline at this stage, even with modest amounts, can have a lasting impact.
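The compounding claim above is easy to make concrete. Here is a minimal sketch using the standard future-value-of-an-annuity formula; the contribution amount, rate, and horizon are hypothetical illustrations, not recommendations:

```python
def future_value(monthly: float, annual_rate: float, years: int) -> float:
    """Future value of a fixed monthly contribution with monthly compounding."""
    r = annual_rate / 12          # monthly interest rate
    n = years * 12                # total number of contributions
    # Future value of an annuity: monthly * ((1 + r)^n - 1) / r
    return monthly * ((1 + r) ** n - 1) / r

# Hypothetical example: $100/month at a 5% annual rate for 30 years.
total_contributed = 100 * 12 * 30           # 36,000 paid in
balance = future_value(100, 0.05, 30)       # roughly 83,000 at the end
```

Even at these modest numbers, more than half of the ending balance comes from compounding rather than from the contributions themselves, which is the point the paragraph above is making about starting early.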
As Americans move into their thirties and forties, financial responsibilities tend to increase. Mortgages, family expenses, and career-related costs require careful budgeting and planning. This stage is often defined by the need to balance current obligations with future financial security.
Many individuals begin to focus more seriously on retirement planning during these years. Employer-sponsored plans, investment accounts, and emergency funds become essential tools for maintaining financial stability.
To better navigate financial decisions, many people look for reference points that help them evaluate their progress. One commonly used benchmark is average American savings, which offers a general overview of how individuals at different life stages manage their finances.
While these figures vary widely depending on income and location, they provide useful context. For some, they highlight gaps that need attention, while for others, they confirm that their current strategy is on the right track. More importantly, they encourage individuals to think more critically about their long-term financial goals.
Digital tools have transformed the way Americans manage their finances. Budgeting apps, automated savings systems, and online investment platforms make it easier to track spending and set financial goals.
These tools also improve accessibility, allowing users to better understand their financial behavior. With real-time insights, individuals can make adjustments quickly, helping them stay on track with their savings plans.
Savings habits are not uniform across the country. Geographic location plays a major role in determining how much individuals can realistically save. Urban areas with higher living costs often present greater challenges, while those in lower-cost regions may have more flexibility.
Lifestyle choices also influence financial behavior. Spending patterns, cultural attitudes toward money, and personal priorities all contribute to how individuals manage their income. Recognizing these differences is essential for understanding why savings rates vary so widely.
Improving savings does not always require drastic changes. In many cases, small adjustments can lead to meaningful progress. Setting clear goals, automating contributions, and reducing unnecessary expenses are practical steps that can strengthen financial stability.
Consistency is often more important than the amount saved. Over time, disciplined habits can create a solid financial foundation, even in uncertain economic conditions.
Saving money in today’s environment requires awareness, flexibility, and discipline. While external factors can make it challenging, individuals who take a proactive approach to their finances are better positioned for long-term stability.
Understanding broader financial trends can provide useful context, but personal strategy remains the most important factor. With the right habits and tools, it is possible to build a more secure financial future, regardless of starting point.
The post How Americans Really Handle Their Money: Insights into Saving Habits appeared first on DCReport.org.
An AI memory startup called Memvid is offering $800 for a one-day, eight-hour shift for one candidate to “bully” AI chatbots by telling them what to do on camera.
Business Insider reported this week that Memvid wants someone to spend eight hours testing and critiquing the memory of popular AI chatbots, effectively paying $100 an hour for what they have branded as a “professional AI bully” role. The worker’s job is to examine where chatbots lose track of details, forget context or misrepresent data, and then feed those findings back to Memvid so the startup can improve its products.
“You’ll spend a full 8-hour day interacting with leading AI chatbots — and your only job is to be brutally honest about how frustrating they are,” the job listing reads.
The draw is that the role doesn’t require a computer science background, AI credentials or any kind of work experience. “No prior AI bullying experience required — we all start somewhere,” the listing reads.
The requirements are deeply personal. The first requirement is an “extensive personal history of being let down by technology,” and the second desired trait is “the patience to ask a chatbot the same question four times (and the rage when it still gets it wrong).”
Here is the full article, via the excellent Samir Varma.
The post Those new service sector jobs? appeared first on Marginal REVOLUTION.
1. Using LLMs to study deregulation.
2. New edition of On Liberty now lists Harriet Taylor as co-author.
3. The popularity of AI writing (NYT).
4. St Nicholas Cabasilas Institute For Orthodoxy & Liberty.
5. Is proportional representation working in the Netherlands?
7. Did Canadian happiness plummet?
The post Friday assorted links appeared first on Marginal REVOLUTION.
The post Chuck Norris, RIP appeared first on Marginal REVOLUTION.
If a battle is fought in space, it will look nothing like those depicted in the Star Wars franchise, with sleek TIE fighters blasting enemy ships with laser cannons and mag-pulses. Instead, these battles will be cerebral and unhurried, somewhat like the 1973 film The Day of the Jackal, a slow-burning political thriller with a plot that somehow mixes tension with clinical precision.
In that film, an assassin sets out to murder the French president. The main character's moves are meticulously planned, with backup plans for backup plans. A police commissioner, just as clever, must pursue the assassin and stop the conspiracy. The events play out over weeks and months, not seconds and minutes.
True Anomaly, which emerged from stealth just three years ago, is planning for The Day of the Jackal in space. The startup's primary hardware product, aptly named Jackal, is a war-ready satellite platform designed for mass production. In nature, jackals are known for their intelligence, adaptability, and hunting prowess. True Anomaly's Jackal boasts similar traits in space.
404 Media has a story about Proton Mail giving subscriber data to the Swiss government, who passed the information to the FBI.
It’s metadata—payment information related to a particular account—but still important knowledge. This sort of thing happens, even to privacy-centric companies like Proton Mail.

Update March 20, 6:14 p.m. EDT (2214 UTC): SpaceX adjusted the T-0 liftoff time.
SpaceX launched its 30th batch of Starlink satellites this year with a Friday afternoon launch of a Falcon 9 rocket from Vandenberg Space Force Base.
Liftoff from Space Launch Complex 4 East happened at 2:51:49 p.m. PDT (5:51:49 p.m. EDT / 2151:49 UTC). The rocket flew a southerly trajectory after leaving the launch pad. The Starlink 17-15 mission carried 25 Starlink V2 Mini Optimized satellites to low Earth orbit.
SpaceX launched the mission using the Falcon 9 first stage booster with the tail number B1100. This was its fourth launch after previously flying the NROL-105 mission and two batches of Starlink satellites.
A little more than eight minutes after liftoff, B1100 landed on the drone ship ‘Of Course I Still Love You,’ positioned in the Pacific Ocean. This was the 185th landing on this vessel and the 589th booster landing for SpaceX to date.
Welcome to Edition 8.34 of the Rocket Report! The most significant news this week, I believe, is the decision by Canada to make a serious investment in launch infrastructure at a spaceport in Nova Scotia. Tensions have risen between the United States and Canada of late (for reasons which are baffling to this author, who has always had an affinity for the nation to our north), and as a result Canada is seeking launch independence. This is an important start, but it will require a sustained, long-term commitment to really develop a flourishing launch industry.
As always, we welcome reader submissions, and if you don't want to miss an issue, please subscribe using the box below (the form will not appear on AMP-enabled versions of the site). Each report will include information on small-, medium-, and heavy-lift rockets as well as a quick look ahead at the next three launches on the calendar.
Canada makes major commitment to space launch. The country's minister of national defense, David J. McGuinty, announced on Monday a $200 million investment in "core infrastructure" for a spaceport in Nova Scotia: a 10-year agreement to lease a dedicated space-launch pad that will serve as the central foundation for a multi-user spaceport near Canso. The facility is operated by Maritime Launch Services.
We’re finishing up the second week of our Annual TPM Membership Drive and want to get to 40% of our goal before the end of the week. We’re currently at just over 30%, so we need about 100 more sign ups. If you’re not currently a member, please join us. This is what keeps TPM thriving: memberships and the subscription fees that come with them. That’s more than 90% of our revenue. Can you help us today? Just click right here.
Kate and Josh discuss the DHS secretary confirmation hearings, a big defection on Iran and the Illinois primaries.
Watch and subscribe to see all of our video content on our YouTube page.
You can listen to the new episode of The Josh Marshall Podcast here.

Semafor reported last night that Joe Kent, momentary half-resistance hero and full-time white nationalist weirdo, is being investigated by the FBI for leaking classified information. According to Semafor, at least, the investigation predates his high-profile, news-driving resignation. We don’t know many details of this investigation. It’s at least possible that, rather than being retaliation for the resignation, it was actually the cause of it. In other words, maybe Kent saw the investigation was building, that the moment was right, and made his push to frame the investigation and any possible future charges as retaliation. But let’s set that possibility aside for the moment. Because there’s another possibility I want to explore, one that goes to the heart of how Trump II works.
For that we need to shift our attention to our old friend Corey Lewandowski, the women-kinetic scrounger and bag man. Today NBC (and other news outlets) reported on Lewandowski’s demand for bribes as the price of getting pieces of the massive Department of Homeland Security mass deportation funding spree. The accusations (and, little question, the underlying facts) are no surprise. This is a thoroughly corrupt administration, and Lewandowski has been notoriously corrupt ever since he strode into the public eye in 2015 as part of Trump’s original campaign entourage. (Remember, he was the first campaign manager.) All of that goes without saying. The part that interests me is the dynamic which applies in different ways from the littlest MAGAland cog in the administration to Trump himself.
You have to hold onto power because if you don’t, you lose your immunity. Here I don’t mean legal immunity per se. It’s just that if you’re a made man in the Trump world, there’s no Justice Department. Nothing matters. Do whatever you want. The DOJ won’t touch you. But once you’re cast off the island everything changes. The veil of impunity gets lifted.
I’m not saying Lewandowski or Kristi Noem are going to be indicted for anything. Lewandowski is almost unique in Trumpland for being cast out and cast back in countless times. But we see this dynamic again and again applied to everyone in that world. Everyone knew Lewandowski was doing this stuff, just as Corey and Kristi’s Mile High Express that DHS was buying for them was common knowledge. In a way, it’s a bit like how the mob used to require that an associate kill someone before becoming a made man. It’s not just a proof of seriousness. It’s also a kind of control. Part of going on a corruption spree on the inside is that you’ve made yourself a hostage to the organization. Once you’re on the outs (even if not quite permanently), all or at least a nice chunk of what you were doing on the inside gets leaked. Maybe an investigation or two even start.
A few weeks ago I was surprised to receive this email from a teacher at the elementary school that I attended, PS 205, in the New York City borough of Queens:
"Dear Mr. Roth,
I am a teacher at The Alexander Graham Bell School, PS 205 in Bayside, NY.
This is my 29th year teaching at this school and it is still an amazing school where children acquire the skills to blossom as adults!
It is my understanding that you are a graduate of this school.
We are holding a Career Day on Friday, March 6, 2026.
It would be wonderful if you could participate in some way, whether in person, zoom pre-recorded video or by another method.
As a Nobel Prize winner, this would be very inspiring for our students.
Please let me know if you would like to be part of this awesome event."
After some further correspondence, I sent a video greeting of a bit over a minute. Here's the transcript:
"Hi PS 205! I hear that you’re having career day today.
Mr Blum asked me to say a few words about how my career began to take shape when I was a student at PS 205, way back before your parents were born. I was a PS 205 student from 1957 to 1962, and it was in those years that I started to think about becoming a scientist.
In 1957, when I started school, the Sputnik satellite was launched by Russia, and in 1961 the first American astronaut, Alan Shepard, rocketed into space. So science was in the news. My big brother Ted (who was also a PS 205 student, four years older than me) was excited by the idea of becoming a scientist, and that made me excited too. And pretty soon I was entering the school’s annual science fairs, with demonstrations of scientific things.
When I grew up I did become a scientist, a social scientist. I’m an economist, which allows me to study how we humans coordinate and cooperate and compete with each other, in ways that have made us, on average, live longer and healthier lives. In fact one of the things I have worked on is to help doctors organize how more people can get kidney transplants if they need them, which helps them live longer and healthier lives.
Science can be a lot of fun. In 2012 I won the Nobel Prize in Economics, which means I got to go to a big celebration of science and literature in Sweden, which almost everyone in that country watches on television. It’s sort of like their Super Bowl.
I can only imagine the things that you will do as you grow up. It will be an adventure."
In the Danish mortgage market every mortgage is backed by a corresponding bond. Thus, if a home buyer takes out a 500k mortgage at 3% interest, a bond is issued that pays the lender 3% interest on 500k. I’ve written about this system several times before. It has two distinct advantages.
In the US, a mortgage can be prepaid only at par. As a result, if interest rates rise, homeowners don’t want to move, because moving would require giving up a 3% mortgage and replacing it with, say, a 6% mortgage. This is called the lock-in effect. Lock-in can be quite severe. Fonseca and Liu find:
Using individual-level credit record data and variation in the timing of mortgage origination, we show that a 1 percentage point decline in the difference between mortgage rates locked in at origination and current rates reduces moving by 9% overall and 16% between 2022 and 2024, and this relationship is asymmetric. Mortgage lock-in also dampens flows in and out of self-employment and the responsiveness to shocks to nearby employment opportunities that require moving, measured as wage growth within a 50- to 150-mile ring and instrumented with a shift-share instrument.
What about in Denmark? The Danes definitely take advantage of the opportunity to buy back. Part of this is due to tax advantages, but those are just a transfer. More importantly, Danes don’t get locked in. A new paper by Berger, Jeong, Marx, Olesen, and Tourre compares mobility across Denmark and the US:
We study Danish fixed-rate mortgage contracts, which are identical to those in the United States except that borrowers may repurchase their mortgages at market value. Using Danish administrative data, we show that households actively buy back debt when mortgage prices fall below par and that household mobility is largely insensitive when existing mortgage rates are below prevailing market rates — unlike in the United States, where moving rates fall sharply as rates rise. We develop an equilibrium model that explains these patterns and show that introducing a repurchase-at-market option into U.S. mortgages substantially reduces interest-rate-induced lock-in with limited effects on equilibrium mortgage rates.
The last point is especially important because you might wonder whether we are assuming a free lunch. After all, if US borrowers lose when they have to prepay at par, then lenders surely gain. And if lenders gain on prepayment, then they will be willing to lend at lower rates at mortgage initiation. No free lunch, right? The logic is correct, but note that the gain to lenders comes mainly from the relatively small set of households that move despite lock-in, so the prepayment bonus to lenders is quite small. Under the authors’ calibrated model, mortgage interest rates in the US would rise by only 18 basis points on average if the US moved to a Danish-type system.
In other words, there actually is a free or at least a low-priced lunch because lock-in is bad for homeowners and it doesn’t benefit lenders. As a result, moving to a Danish system would create net benefits.
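The mechanism is easy to see with a back-of-the-envelope calculation. The sketch below uses the post's hypothetical 500k mortgage at 3% with rates rising to 6%; real Danish mortgage bonds have more structure, but simple annuity discounting shows the shape of the effect. A US borrower who moves must pay off the full face value, while a Danish borrower can repurchase the bond at its (now much lower) market value:

```python
def mortgage_payment(principal: float, annual_rate: float, years: int) -> float:
    """Fixed monthly payment on a standard amortizing mortgage."""
    r = annual_rate / 12
    n = years * 12
    return principal * r / (1 - (1 + r) ** -n)

def market_value(payment: float, years: int, market_rate: float) -> float:
    """Present value of the remaining payment stream, discounted at the current market rate."""
    r = market_rate / 12
    n = years * 12
    return payment * (1 - (1 + r) ** -n) / r

pay = mortgage_payment(500_000, 0.03, 30)   # roughly 2,108/month at 3%
par_payoff = 500_000                        # US-style: prepay at face value
mkt_payoff = market_value(pay, 30, 0.06)    # Danish-style: buy the bond back at market
```

With rates at 6%, the old 3% bond trades at roughly 70 cents on the dollar, so the Danish borrower exits for on the order of 350k instead of 500k. That gap is exactly the lock-in the US borrower faces, and per the paper's calibration, eliminating it costs lenders surprisingly little.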
The post A Danish Fix for U.S. Mortgage Lock-in appeared first on Marginal REVOLUTION.
Online gaming platforms have reshaped digital entertainment by offering more than a single type of game. They bring multiple forms of play into one place, giving players greater choice and control over how they spend their time.
The strongest platforms go beyond large libraries, creating environments where players can move between formats, use different devices, and discover features that make each session feel more tailored. From classic table games to live content and social tools, the offering is broader and more refined. As a result, online gaming platforms have become a key part of digital entertainment, keeping players engaged through variety, convenience, and stronger design.
One of the clearest ways online gaming platforms expand entertainment is by bringing multiple game types together in a single destination. Players no longer need to visit several sites or apps to find something that matches their interests. Many platforms now include card games, live tables, slots, puzzle-based titles, and quick mobile experiences under one account.
That variety gives players practical benefits:
Casino games remain an important part of this broader selection because they give players access to different styles of play within the same platform. Some titles are built around speed and visual energy, while others appeal to players who prefer a more traditional table experience with a familiar rhythm.
Games such as Baccarat highlight why classic formats continue to hold value. Known for its straightforward structure and steady pace, Baccarat offers a more measured style of play that contrasts with faster-paced options. It is also commonly featured in live dealer formats, where real-time gameplay and professional dealers bring added presence to the table.
Its continued inclusion across digital libraries reflects the lasting appeal of casino entertainment that feels polished, recognizable, and easy to place within a wider lineup of table games.
Accessibility is one of the biggest strengths of modern gaming platforms. Players expect to move between devices without losing progress, settings, or ease of use. In response, platforms now focus more on smooth design across phones, tablets, and desktop systems. This lets players decide when and where they want to play without changing their routine.
That convenience expands digital entertainment because it reduces friction. A player might begin on a computer at home, return later on a phone, and continue with very little interruption. When the transition feels simple, gaming fits more naturally into everyday life.
Clear menus, fast loading times, and strong mobile design also matter. Players have many entertainment options competing for their attention, so a platform must feel efficient from the first screen. Better access isn’t just a technical upgrade. It changes how often players return and how easily gaming becomes part of their daily digital habits.
Live content broadens digital entertainment by adding real hosts, real-time pacing, and a more immediate feel than standard automated formats.
When a session unfolds in real time, the experience often feels less distant. Players are not only watching a screen refresh. They are following an event as it happens. This sense of continuity helps maintain focus throughout the session. That can make even familiar games feel more vivid and more engaging.
Live content also gives players another way to choose how they want to spend their time. Some days may suit quiet solo play. Other moments may feel better with a more interactive format. By supporting both styles, platforms make gaming feel more versatile and more closely connected to the wider world of digital entertainment.
Online gaming platforms are becoming better at helping players find content that suits their preferences. Personalization tools reduce the effort needed to search through a large library. Instead of scrolling through endless categories, players often see suggested games, recent activity, and recommendations shaped by their habits and interests.
This adds real value because choice only matters when it’s easy to manage. A huge selection can feel overwhelming without some structure. Tailored suggestions make the platform easier to use while still leaving players in control. They can follow recommendations or ignore them, but the overall experience feels more directed.
Personalization also encourages discovery. A player who usually stays with one type of game may be introduced to another option with a similar pace or style. That creates a natural path toward trying something new. In that way, platforms expand entertainment not only by offering more content, but by making it easier for players to notice and explore what suits them best.
Online gaming platforms are expanding digital entertainment by offering more than access to games. They combine variety, device convenience, live content, personalization, and social features in one connected space, giving players greater flexibility and more ways to stay engaged.
The shift is not just in the number of titles, but in the quality and range of what’s offered. Players can move between formats, use different devices, and find options that better match their preferences, making these platforms feel more complete. As digital habits evolve, players will continue to value depth, clarity, and ease of use. The platforms that stand out will be those that treat entertainment as adaptable, shaped not just by content, but by how smoothly players interact with it.
The post How Online Gaming Platforms Are Expanding Digital Entertainment Options appeared first on DCReport.org.

After yesterday’s revelation that the Department of Justice (DOJ) is blocking the release of a memo related to a Drug Enforcement Administration investigation into sex trafficker Jeffrey Epstein and 14 co-conspirators, Attorney General Pam Bondi added more evidence to the idea that the DOJ is engaged in covering up the relationship between members of the Trump administration, including President Donald J. Trump himself, and Epstein.
On March 4, 2026, five Republicans joined the Democrats on the House Oversight Committee to agree to subpoena Bondi to testify before it under oath about how the DOJ handled the release of the Epstein files. Committee chair James Comer (R-KY) issued the subpoena on March 17, requiring Bondi to appear before the committee on April 14. Kyle Stewart and Kyla Guilfoil of NBC News reported yesterday that a DOJ spokesperson said the subpoena was “completely unnecessary” and said Bondi “continues to have calls and meetings with members of Congress on the Epstein Files Transparency Act, which is why the Department offered to brief the committee.”
Yesterday, March 18, Bondi and Deputy Attorney General Todd Blanche appeared at that “briefing,” a closed-door hearing before the committee in which they were not under oath. Democrats asked repeatedly if Bondi intended to comply with the subpoena; she refused to commit. When Summer Lee (D-PA) asked Comer if he would compel Bondi to comply and hold her in contempt if she doesn’t, Comer told her she was “bitching.”
Ultimately, the Democrats walked out of the briefing. Talking to reporters, Representative Maxwell Frost (D-FL), who has been key to untangling the released Epstein files, said: “[T]o me, it’s very clear that the purpose of this entire fake hearing, this fake deposition, is the attorney general trying to weasel herself out of sitting in front of us under oath, under a bipartisan subpoena…. We asked her multiple times, ‘Are you going to come and speak with us under oath?’ She would not say yes.”
Frost pushed back on Republican colleagues who argued that the briefing should be enough. “We want her under oath because we do not trust her. Why don’t we trust her? Because she’s a liar.” He noted that in the recent hearing before the House Judiciary Committee about the files, Bondi’s documents revealed the DOJ is keeping track of what documents members of Congress are reading. He also noted the DOJ has put up documents related to Trump only when investigators called out that they were missing.
“We want her under oath because we don’t trust her,” Frost reiterated. “We want her under oath because she has shown that she is involved in a cover up…. So we see this for what it is. This is not a briefing; a briefing is when we sit down and we’re getting information from the person giving the briefing. That didn’t happen here. She sat down, they started the clock like a hearing. It’s a hearing. It is a fake deposition, where no one can see what’s going on, with zero transcription, where it’s not on C-Span or anything, and where no one is under oath, and they are allowed to freely lie to members of Congress.”
Director of National Intelligence Tulsi Gabbard, Federal Bureau of Investigation (FBI) Director Kash Patel, and Central Intelligence Agency (CIA) Director John Ratcliffe were under oath when they testified yesterday before the Senate Intelligence Committee on “worldwide threats.” Democratic senators focused on the war with Iran. The administration officials refused to say if they had told Trump that the Iranians could well block the Strait of Hormuz if the U.S. struck the country.
Gabbard tried not to contradict Trump, cutting from her opening statement, for example, the assessment that the 2025 strikes against Iran’s nuclear enrichment program had “obliterated” it and that the country had not started the program up again. When asked why she didn’t read that portion of her opening statement, she said she realized her statement was running long.
Asked by Senator Angus King (I-ME) if reports that Russia is sharing intelligence with Iran are true, Gabbard seemed to try to hide that information, saying, “[I]f there is that sharing going on…, that would be an answer that would be appropriate for a closed session.” King pointed out that this report is in the public press, so it’s not a secret. Again he asked her if it is occurring. Again she answered: “[I]f it is occurring, that would be an answer appropriate for a closed session.” She continued: “What I can tell you is that according, um, to the Department of War, uh, any support that Iran may be receiving is not inhibiting their operational effects.”
King responded: “Okay, that’s sort of the first cousin of a yes.”
Asked by Senator Jon Ossoff (D-GA) if the intelligence community assessed that Iran posed an “imminent threat,” Gabbard said “the only person who can determine what is and is not an imminent threat is the president.” In fact, Ossoff pointed out, it is “precisely” the job of the intelligence community to make such a determination, and he established that the intelligence community did not assess that Iran posed an imminent threat to the U.S. before Trump struck it. Ossoff called Gabbard out for “evading a question because to provide a candid response to the Committee would contradict a statement from the White House.”
In response to questioning by Senator Ron Wyden (D-OR), FBI Director Patel admitted that under Trump, the government has been buying information on Americans from private companies, buying location data derived from internet advertising. Wyden noted that in 2023, FBI director Christopher Wray testified that the FBI did not buy that information, although it had done so in the past.
Asked if the FBI was still using that policy and if he would commit to keep the FBI from buying that data, Patel answered: “We do purchase commercially available information that’s consistent with the Constitution and the laws under the Electronic Communications Privacy Act, and it has led to some valuable intelligence for us.”
As Robert Mackey of The Guardian explains, if law enforcement officers want to get location data directly from cell phone companies, they have to go to a judge for a warrant. But government agencies are trying to get around the Fourth Amendment requirement for those judicial warrants by buying that information directly from private data brokers.
Wyden has always strongly opposed surveillance of Americans. He posted: “Kash Patel refused to deny that the FBI is buying up Americans’ location data. This is a shocking end run around the 4th amendment and exactly why we need to pass real privacy reforms NOW.”
Concerns about data privacy have been heightened since March 10, when Meryl Kornfield, Elizabeth Dwoskin, and Lisa Rein reported in the Washington Post on a whistleblower complaint filed in January saying that a former employee of the Department of Government Efficiency claimed he had taken two highly restricted databases of information about U.S. citizens from the Social Security Administration, where he had unrestricted access, and that he planned to take them to a government contractor. Those files included the Social Security numbers, birth dates, place of birth, citizenship, race, ethnicity, and parents’ names of more than 500 million living and dead Americans.
According to the whistleblower, the person with the files said he needed help transferring the data from a thumb drive to a personal computer in order to “sanitize” the data before using it at his new job. When another colleague refused to help, citing concern about breaking the law, the person with the information allegedly said he expected that Trump would give him a pardon if he needed it.
In January, Kornfield reported in the Washington Post that after another whistleblower complaint, the administration admitted to a court that the Social Security Administration had discovered that a DOGE employee had entered into a secret agreement with a political group, promising to share Social Security data in order to overturn election results in certain states. Kornfield reported that the SSA also acknowledged that DOGE employees had used an unofficial third-party service to share data with each other and that the SSA had been unable to access it.
University of Virginia privacy law expert Danielle Citron told Kornfield she was “flabbergasted.” “If that information is shared willingly and knowingly and they are sharing without the reason they collected it, it’s a violation of the Privacy Act.”
At the time, the top Democrat on the House Social Security subcommittee, John B. Larson of Connecticut, and the Ways and Means Committee’s ranking Democrat, Richard E. Neal of Massachusetts, said that the DOGE “appointees engaged in this scheme—who were never brought before Congress for approval or even publicly identified—must be prosecuted to the fullest extent of the law for these abhorrent violations of the public trust.”
A DOJ official told Kornfield then that the department was not currently investigating DOGE. The Social Security Administration inspector general is investigating the new whistleblower complaint.
Yesterday Noah Robertson, Jeff Stein, and Riley Beggin of the Washington Post reported that the Pentagon under Secretary of Defense Pete Hegseth has asked the White House to approve a request for more than $200 billion to fund the Iran war. Hegseth confirmed the request today, explaining: “It takes money to kill bad guys.”
—
Notes:
https://www.cbsnews.com/live-updates/tulsi-gabbard-kash-patel-senate-intelligence-committee-hearing/
https://www.theguardian.com/us-news/2026/mar/18/pam-bondi-epstein-briefing-democrats
https://www.theguardian.com/us-news/2026/mar/18/kash-patel-fbi-location-data
https://www.washingtonpost.com/politics/2026/01/20/doge-social-security-data-privacy-act/
https://www.washingtonpost.com/politics/2026/03/10/social-security-data-breach-doge-2/
https://www.washingtonpost.com/national-security/2026/03/18/iran-cost-budget-pentagon/
https://www.cnbc.com/2026/03/19/hegseth-iran-war-budget.html
X:
Acyn/status/2034401237567971541
Bluesky:
Jerome Powell doesn’t want you to use the S-word. In the press conference following the Federal Reserve’s decision to leave interest rates unchanged, he argued that current economic difficulties aren’t enough to warrant that description:
What we have is some tension between the goals and we’re trying to manage our way through it. It’s a very difficult situation, but it’s nothing like what they faced in the 1970s and I reserve stagflation for that — the word — for that period. Maybe that’s just me.
Fair enough, although any statement that things aren’t as bad as they were in the 1970s should come with the caveat “so far.” As I write this the price of Brent crude is above $116 a barrel, up from about $70 before the bombing began. If the blockade of the Strait of Hormuz goes on for months rather than weeks this will be a shock to world oil supplies substantially worse than the shocks of 1973 or 1979. And while I’m not a strategic expert, I don’t see how that strait reopens any time soon. Neither do prediction markets:
Source: Polymarket
Nor is oil the only concern. Even before Operation Epic FUBAR, inflation was ticking up while job growth was stalling. Powell says that the economy’s troubles weren’t severe enough to be called stagflation. But things were looking a bit, well, stagflationish.
One way of looking at the data that I find helpful is to compare what has actually happened since Donald Trump took office with informed expectations in late 2024, which I assess by looking at median projections in the quarterly Survey of Professional Forecasters carried out by the Philadelphia Fed.
Start with inflation, defined as the rise in the personal consumption expenditure price index, the Fed’s preferred measure, from the fourth quarter of each year to the fourth quarter of the next year. This measure, which rose sharply in 2021 and 2022, fell most of the way back to the Fed’s target of 2 percent by late 2024, and the general expectation was that it would continue to fall:
Instead, inflation has risen — not a lot so far, but it was moving in the wrong direction even before the war with Iran. The latest indicator was the Producer Price Index — basically wholesale prices — released Wednesday morning. This index can give early warning about rising consumer prices. And one of the people I trust to read these tea leaves called it “pretty grim.” Not 1970s grim, but not what you want to see.
There’s an uncomfortable parallel here with 1973, the year stagflation is generally considered to have started. I’m not sure how many people are aware that one reason the 1973 oil shock hit so hard was that inflation was already rising fast even before the Yom Kippur War led to the Arab oil embargo, which triggered the first oil crisis:
The numbers were much worse then than they are now, but there’s still a family resemblance to recent events.
That’s the flation. What about the stag?
Before Trump returned, most observers expected robust job growth to continue. Instead it has stalled:
Source: Bureau of Labor Statistics
Some of that was the result of mass DOGE-driven layoffs in the federal government, but private-sector job creation has also slowed drastically. And Powell asserted that once the data are revised, he expects to see essentially zero private-sector job growth over the past six months.
The state of the U.S. economy, then, was troubling, with at least hints of stagflation, even before this war led to the Hormuz blockade.
But why has U.S. economic performance been disappointing?
The public often exaggerates the role of political leadership in determining economic performance. In reality, presidents and their policies normally have very little effect on macroeconomic variables like inflation and employment.
But this time is different. The disappointing aspects of recent U.S. performance have been all about Trump.
In his press conference Powell didn’t beat around the bush. Noting that inflation is significantly overshooting the Fed’s target, he declared that
Some big chunk of that, between a half and three-quarters is actually tariffs.
What about the stalling of employment growth? Research at the San Francisco Fed confirms what many economists have been arguing: Job growth has slowed largely because of the crackdown on immigration, which has reduced labor supply. So employment stagnation is also the result of Trump administration policies.
Now, you might be tempted to argue that while stopping immigration reduces overall job growth, it surely must increase job opportunities for native-born workers. But a look at unemployment rates suggests that the job market for the native-born has gotten (slightly) worse, not better:
The most we can say is that thanks to the loss of immigrant workers the overall unemployment rate hasn’t risen as much as one might have expected given the collapse in overall job growth. But the loss of foreign-born workers is probably contributing to higher inflation, over and above the effects of tariffs and now oil prices. And it will have major adverse effects on America’s fiscal outlook — but that’s a subject for another day.
So Powell is right: If you restrict the term stagflation to situations that quantitatively resemble the 1970s, we aren’t there yet. But there’s definitely a whiff of stagflation in the air — a whiff that is entirely caused by Trump administration policies.
And if the situation deteriorates, as seems all too possible given the mess in the Persian Gulf, can we trust Trump’s officials to respond intelligently and effectively?
See, I made a joke.
MUSICAL CODA
The big news this morning: Astral to join OpenAI (on the Astral blog) and OpenAI to acquire Astral (the OpenAI announcement). Astral are the company behind uv, ruff, and ty - three increasingly load-bearing open source projects in the Python ecosystem. I have thoughts!
The Astral team will become part of the Codex team at OpenAI.
Charlie Marsh has this to say:
Open source is at the heart of that impact and the heart of that story; it sits at the center of everything we do. In line with our philosophy and OpenAI's own announcement, OpenAI will continue supporting our open source tools after the deal closes. We'll keep building in the open, alongside our community -- and for the broader Python ecosystem -- just as we have from the start. [...]
After joining the Codex team, we'll continue building our open source tools, explore ways they can work more seamlessly with Codex, and expand our reach to think more broadly about the future of software development.
OpenAI's message has a slightly different focus (highlights mine):
As part of our developer-first philosophy, after closing OpenAI plans to support Astral’s open source products. By bringing Astral’s tooling and engineering expertise to OpenAI, we will accelerate our work on Codex and expand what AI can do across the software development lifecycle.
This is a slightly confusing message. The Codex CLI is a Rust application, and Astral have some of the best Rust engineers in the industry - BurntSushi alone (Rust regex, ripgrep, jiff) may be worth the price of acquisition!
So is this about the talent or about the product? I expect both, but I know from past experience that a product+talent acquisition can turn into a talent-only acquisition later on.
Of Astral's projects the most impactful is uv. If you're not familiar with it, uv is by far the most convincing solution to Python's environment management problems, best illustrated by this classic XKCD:
[image: XKCD's "Python Environment" comic - a hopeless tangle of competing Python installs and paths]
Switch from python to uv run and most of these problems go away. I've been using it extensively for the past couple of years and it's become an essential part of my workflow.
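To show what that looks like in practice - my own sketch, not anything from Astral's docs - here's a script carrying PEP 723 inline metadata, the format `uv run` reads to provision an isolated environment on the fly:

```python
# /// script
# requires-python = ">=3.10"
# dependencies = []
# ///
# Running this file with `uv run script.py` makes uv read the comment
# block above, fetch a matching interpreter if needed, install any
# listed dependencies into a cached, isolated environment, and execute
# the script. No manual virtualenv, no pip, no activation step.
# (Dependencies are left empty here so the file also runs under a
# plain `python` invocation.)
import json

info = {"runner": "uv run", "metadata": "PEP 723"}
print(json.dumps(info, sort_keys=True))
```

Declaring dependencies inside the script itself is what lets uv sidestep the entire "which environment am I in?" question from the comic.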
I'm not alone in this. According to PyPI Stats uv was downloaded more than 126 million times last month! Since its release in February 2024 - just two years ago - it's become one of the most popular tools for running Python code.
Astral's two other big projects are ruff - a Python linter and formatter - and ty - a fast Python type checker.
These are popular tools that provide a great developer experience but they aren't load-bearing in the same way that uv is.
They do however resonate well with coding agent tools like Codex - giving an agent access to fast linting and type checking tools can help improve the quality of the code they generate.
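As a rough illustration of that feedback loop (my own sketch - not how Codex actually wires these tools in), an agent-side helper might shell out to ruff and feed the report back into the next generation step. The `lint_report` function here is hypothetical:

```python
import pathlib
import shutil
import subprocess
import tempfile


def lint_report(code: str) -> str:
    """Run ruff over a Python snippet if ruff is on PATH and return its
    report; fall back to a stub message otherwise. Illustration only."""
    if shutil.which("ruff") is None:
        return "ruff not installed"
    # Write the snippet to a temp file, since ruff lints files on disk
    with tempfile.NamedTemporaryFile(
        "w", suffix=".py", delete=False
    ) as handle:
        handle.write(code)
        path = handle.name
    try:
        proc = subprocess.run(
            ["ruff", "check", path], capture_output=True, text=True
        )
        return proc.stdout.strip() or "no issues found"
    finally:
        pathlib.Path(path).unlink()


# An unused `import os` is the kind of thing ruff flags (rule F401)
print(lint_report("import os\n"))
```

Because ruff runs in milliseconds, an agent can afford to lint after every edit rather than once at the end, which is the property that makes these tools interesting in that context.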
I'm not convinced that integrating them into the coding agent itself as opposed to telling it when to run them will make a meaningful difference, but I may just not be imaginative enough here.
Ever since uv started to gain traction the Python community has been worrying about the strategic risk of a single VC-backed company owning a key piece of Python infrastructure. I wrote about one of those conversations in detail back in September 2024.
The conversation back then focused on what Astral's business plan could be, which started to take form in August 2025 when they announced pyx, their private PyPI-style package registry for organizations.
I'm less convinced that pyx makes sense within OpenAI, and it's notably absent from both the Astral and OpenAI announcement posts.
An interesting aspect of this deal is how it might impact the competition between Anthropic and OpenAI.
Both companies spent most of 2025 focused on improving the coding ability of their models, resulting in the November 2025 inflection point when coding agents went from often-useful to almost-indispensable tools for software development.
The competition between Anthropic's Claude Code and OpenAI's Codex is fierce. Those $200/month subscriptions add up to billions of dollars a year in revenue, for companies that very much need that money.
Anthropic acquired the Bun JavaScript runtime in December 2025, an acquisition that looks somewhat similar in shape to Astral.
Bun was already a core component of Claude Code and that acquisition looked to mainly be about ensuring that a crucial dependency stayed actively maintained. Claude Code's performance has increased significantly since then thanks to the efforts of Bun's Jarred Sumner.
One bad version of this deal would be if OpenAI start using their ownership of uv as leverage in their competition with Anthropic.
One detail that caught my eye from Astral's announcement, in the section thanking the team, investors, and community:
Second, to our investors, especially Casey Aylward from Accel, who led our Seed and Series A, and Jennifer Li from Andreessen Horowitz, who led our Series B. As a first-time, technical, solo founder, you showed far more belief in me than I ever showed in myself, and I will never forget that.
As far as I can tell neither the Series A nor the Series B was previously announced - I've only been able to find coverage of the original seed round from April 2023.
Those investors presumably now get to exchange their stake in Astral for a piece of OpenAI. I wonder how much influence they had on Astral's decision to sell.
Armin Ronacher built Rye, which was later taken over by Astral and effectively merged with uv. In August 2024 he wrote about the risk involved in a VC-backed company owning a key piece of open source infrastructure and said the following (highlight mine):
However having seen the code and what uv is doing, even in the worst possible future this is a very forkable and maintainable thing. I believe that even in case Astral shuts down or were to do something incredibly dodgy licensing wise, the community would be better off than before uv existed.
Astral's own Douglas Creager emphasized this angle on Hacker News today:
All I can say is that right now, we're committed to maintaining our open-source tools with the same level of effort, care, and attention to detail as before. That does not change with this acquisition. No one can guarantee how motives, incentives, and decisions might change years down the line. But that's why we bake optionality into it with the tools being permissively licensed. That makes the worst-case scenarios have the shape of "fork and move on", and not "software disappears forever".
I like and trust the Astral team and I'm optimistic that their projects will be well-maintained in their new home.
OpenAI don't yet have much of a track record with respect to acquiring and maintaining open source projects. They've been on a bit of an acquisition spree over the past three months though, snapping up Promptfoo and OpenClaw (sort-of, they hired creator Peter Steinberger and are spinning OpenClaw off to a foundation), plus closed source LaTeX platform Crixet (now Prism).
If things do go south for uv and the other Astral projects we'll get to see how credible the forking exit strategy turns out to be.
Tags: python, ai, rust, openai, ruff, uv, astral, charlie-marsh, coding-agents, codex-cli, ty
It’s a bit of a mystery why, of all the Republicans he could have chosen, Donald Trump picked Oklahoma Sen. Markwayne Mullin to lead the Department of Homeland Security. Mullin has no law enforcement experience, no national security experience, no intelligence experience, and no experience leading a sprawling organization like DHS, which has a yearly budget of over $100 billion and 260,000 employees.
Mullin did have one thing that no doubt attracted Trump’s attention: He’s a fake tough guy, which is just the kind Trump likes. That makes him a worthy successor to Kristi Noem; while she pretended she was running the agency when she was actually a cosplaying content creator, Mullin will be able to more convincingly deliver rootin’ tootin’ tough talk of the kind that will warm Trump’s heart.
In any biographical description of Mullin you’ll read that he’s a “former professional MMA fighter.” I’ve seen this written or spoken about him probably a hundred times since Trump announced he was nominating Mullin to head DHS. The bio on his website says “Markwayne has always been a fighter. He is a former Mixed Martial Arts fighter with a professional record of 5-0.” This might lead you to believe that he was an elite athlete competing in the UFC or something similar, but if so, you’d be wrong. Mullin wrestled in high school, but his MMA career was less than epic. While the details are sketchy, it appears that he beat a couple of chumps (one of whom was a teenager nicknamed “Huggie Bear”) in a semi-pro league in Tulsa, and years later when MMA grew into the major sport it is today, he traded on the idea that he was an accomplished MMA fighter for all the political advantage it could give him.
Hunter Walker of Talking Points Memo spoke to someone who, in addition to working in politics, happens to be a jiu-jitsu coach (jiu-jitsu is the foundation of MMA). The coach described how, when he interviewed for a job with Mullin, the politician saw that on his resume and started bragging about his own accomplishments:
In the coach’s recollection, Mullin went on to say he “went down to the World Championships in Brazil” and had fought in the finals against one of the members of the Gracie family, the clan that helped create both the sport of BJJ [Brazilian jiu-jitsu] and the Ultimate Fighting Championship. Mullin allegedly claimed, the coach said, that he had defeated the Gracie by “using all my wrestling.”
“Afterwards, all the Gracie brothers came up to me and asked if I could come to Brazil to teach wrestling,” Mullin allegedly said, according to the coach.
The coach said he asked which of the Gracies Mullin supposedly fought. He claimed Mullin replied that he did not specifically remember. The fame of the Gracie family, the well-documented nature of high level jiu-jitsu competitions, and the fact Mullin did not name a specific opponent made the coach immediately skeptical.
If we assume this source is describing the conversation accurately, it would make Mullin an absolutely spectacular fabulist. The Gracies basically invented what we know as MMA today, and the idea that after a couple of low-level matches in Tulsa, Mullin won a “world championship” by beating someone in the Gracie family (whom he couldn’t remember) and they all begged him to come down and share his brilliant skills with them is utterly preposterous.
But that was a private conversation; how does Mullin comport himself in public? Well, in 2023, he got in a notable argument with Teamsters head Sean O’Brien; here’s how it went down:
Mullin then said at the hearing: “Sir, this is a time; this is a place. You want to run your mouth? We can be two consenting adults. We can finish it here.”
“OK, that’s fine, perfect,” O’Brien said.
“You want to do it now?” Mullin replied.
“I’d love to do it right now,” O’Brien said.
“Then stand your butt up then,” said Mullin.
“You stand your butt up,” said O’Brien.
Mullin then stood up and the committee’s chairman, Sen. Bernie Sanders, I-Vt., stopped the altercation from happening, yelling at Mullin: “Stop it! No, no, sit down! You know, you’re a United States senator.”
This was one of those “Hold me back!” moments — Mullin knew full well there weren’t going to be any fisticuffs in the hearing room. The whole thing was kayfabe, a show of eagerness to do violence for no purpose other than convincing the foolish and gullible that he was some kind of tough guy.
But it doesn’t end there! Earlier this month in a Fox News appearance defending the Iran war, Mullin waxed rhapsodic on the smells of war that only the few have experienced:
You’ll never forget the smell and taste of war, he assured the viewers. The only problem is that Mullin is not a veteran and has never been to war. This came up at his confirmation hearing Wednesday, when he was asked about it and explained that he has been involved in some super-secret clandestine special ops military mission, but he can’t say what it was because it’s classified. Sort of “I do have a girlfriend, but you wouldn’t know her, she lives in Canada” except for a combat record:
Now I can’t say for sure that the United States military did not in fact recruit Markwayne Mullin for a secret combat mission during which he experienced those unforgettable smells and tastes of war. But let’s just say it strains credulity to imagine that the folks at JSOC said “Who should we send on this important and dangerous mission? SEAL Team 6? Delta Force? No, it’s too important — we need to get a congressman with no military training, but who once beat a guy named ‘Huggie Bear’ in a semipro MMA fight in Tulsa. He’s the man for the job.”
But as far as Trump is concerned, Mullin is indeed the man for the job, precisely because he’s such a phony. There are no “strong, silent type” men in the Trump administration, no one speaking softly while carrying a big stick. Trump men go for preening and peacocking, claiming courage they never showed and feats they never achieved. Just like their boss.
Thank you for reading The Cross Section. This site has no paywall, so I depend on the generosity of readers to sustain the work I present here. If you find what you read valuable and would like it to continue, consider becoming a paid subscriber.
Welcome! This is the start of a journey which I hope will provide you with many new tricks to improve how you work with relational databases in your Python applications. Given that this is a hands-on book, this first chapter is dedicated to helping you set up your system with a database, so that you can run all the examples and exercises.
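Before reaching for SQLAlchemy itself, a quick sanity check (my own illustration, not an example from the book) is to confirm that your Python installation can already talk to SQLite, which ships in the standard library and is the lowest-friction database for running exercises:

```python
import sqlite3

# An in-memory SQLite database: nothing to install, nothing to clean up.
# If this runs, your interpreter is ready for a relational database and
# you can layer SQLAlchemy on top with confidence.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE greetings (id INTEGER PRIMARY KEY, text TEXT)")
conn.execute("INSERT INTO greetings (text) VALUES (?)", ("hello, database",))
row = conn.execute("SELECT text FROM greetings").fetchone()
print(row[0])
conn.close()
```

SQLAlchemy supports several database backends, but SQLite's zero-configuration setup makes it a convenient default while you work through examples.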
This is the first chapter of my SQLAlchemy 2 in Practice book. If you'd like to support my work, I encourage you to buy this book, either directly from my store or on Amazon. Thank you!
2,700 is how many small “tally” marks I could draw on an A3 sheet without the ink running out.
But around 50 is the limit if they are big chunky brush strokes.
Or rather, when I say “limit” I mean the point at which I’d like to stop the drawing machine and top up the ink in the brush pen. One trick I was using was to randomise the drawing order of the lines, so the change between freshly topped-up ink strokes and drying-out brush strokes wasn’t so obvious - or at least looked more intentional.
Which I guess they were.
🌲 🌲 🌲 🌲 🌲
Oh look, you can now centre text on Substack, finally!
The above images are fine, but what I really wanted to do was just lots of repeating brush strokes on a large sheet of paper. No stupid in-a-circle stuff, no groups of five “tally” marks. Just one brush stroke after another until the whole page was full.
Stopping every 50 marks or so to top-up the ink, but hey, why not keep changing the ink colour each time you top it up, so the colours gradually change from one to the next?
Until the whole sheet is filled.
🧡 🧡 🧡 🧡 🧡
This is all part of the “oh shit, there’s a send out postcards Patreon level, and I’m still not at the creating pen plots as part of the Drawing Machines 101 tutorial videos, so I have to do something else until we get there” stage.
Don’t get me wrong, I love making the postcards (and larger) pen plots to send out, but as previously mentioned the idea is the video and postcard making are supposed to be the same thing in a super time efficient feedback loop 😁
Meanwhile, each time I make these extra specifically for Patreon plots I want to make a video explaining the process - instead I’m just adding it to the list of fun videos I’ll go back and make once everything is all over.
Needless to say there’s a nice backlog of potential videos building up.
Anyway, the short version is, I loved the whole-page-being-filled version so much that another tweak of the code later and I had it calculating the gaps to leave so I could cut the page up into postcards, and even the trim marks to make the whole thing easier.
My favorite part is that these are basically just simple lines, and the maths is easy: draw a single line from point A to point B, then do it again and again and again and again and again and AGAIN.
The brush and the ink are doing all the work here.
Oh, and these all got posted out yesterday, making this the worst Patreon self-promotion ever, but I guess there’s always next month.
Two new videos since last time:
Which wraps up the introduction and moves us onto Module Two, where we’ll start writing code, most likely before next newsletter. Exciting!
The feedback so far has been good and I’m very much looking forward to getting them all finished 😁
Oh what, links!
Agnes Martin: https://www.moma.org/artists/3787-agnes-martin#works
How to tell if an abstract photo is vertical or horizontal | YouTube link.
The answer to that last one is apparently “have a book that shows you the original orientation” - which is less exciting than I thought it was going to be, but still nice to know even the experts have trouble.
Anyway, this was a picture heavy newsletter, it happens sometimes. Next newsletter is most likely going to be talking about DFOS, and that’ll be with you on Thursday the 2nd of April.
Love you all,
Dan
🧡

The post South African safari photo by Holly Cowen appeared first on Marginal REVOLUTION.
One of the most controversial opinions I’ve long espoused, and believe today more than ever, is that it was a terrible mistake for web browsers to support JavaScript. Not that they should have picked a different language, but that they supported scripting at all. That decision turned web pages — which were originally intended as documents — into embedded computer programs.
There would be no 49 MB web pages without scripting. There would be no surveillance tracking industrial complex. The text on a page is visible. The images and video embedded on a page are visible. You see them. JavaScript is invisible. That makes it seem OK to do things that are not OK at all.
In my piece riffing on Bose’s “The 49MB Web Page” yesterday, I reiterated my also-longstanding argument that publications with print editions do things with their websites that they’d never in a million years do with their print editions. The way The New York Times uses JavaScript to present popovers that obstruct reading the actual article text would be the equivalent of them gluing pages together in the print edition, using tape labeled with an advertisement. They wouldn’t do that. But they do the equivalent, using JavaScript, on every page of their website.
Here’s a simple AppleScript I wrote this week — one that solves a minor itch I’ve had for, jeez, 20 years. Almost every item I post to Daring Fireball goes through MarsEdit, the excellent Mac blogging client from Red Sweater Software (my friend Daniel Jalkut). MarsEdit has a built-in “local drafts” feature, where you can save unpublished drafts within a library in MarsEdit itself. It doesn’t happen often but I occasionally wind up with partially written posts that I don’t publish, but don’t want to throw away. But I don’t really want to keep them in MarsEdit. I want them saved as text files. For me, those text files go in a folder in Dropbox. For someone else, maybe they go in iCloud Drive.
I write my longer posts in BBEdit, and then copy them into a MarsEdit document when they’re ready to publish. My shorter posts — which is most of them — are usually entirely composed in MarsEdit. Any abandoned drafts that I might return to, I probably want to compose in BBEdit, because the reason they’re abandoned is that they need to be longer. Or they need to be shorter. But either way they need more thought, and BBEdit is where I go to do my most concentrated thinking.
MarsEdit doesn’t have a built-in way to save a document window as a text file. Just its built-in “Save as Local Draft” feature. I didn’t merely suspect but knew that it’d be relatively easy to write an AppleScript to add a “Save as Text File…” feature to MarsEdit, which I could invoke within MarsEdit from FastScripts, the system-wide scripts menu utility that is also from Red Sweater/Jalkut, and, using FastScripts, I could even give the script the standard keyboard shortcut Option-Command-S. (Or is it Command-Option-S?)
It’ll take a window like this:
and then prompt you with a system Save dialog to enter a filename (defaulting to the Title field contents, if any, in the MarsEdit document) and location to save the text file. AppleScript even conveniently remembers the last place you saved a file, so it defaults to the same folder the next time you invoke it, without the script doing any work to remember that. The text file looks like this:
Title: AppleScript: 'Save MarsEdit Document to Text File'
Blog: ★ Daring Fireball
Edited: Thursday 19 March 2026 at 12:16:29 pm
Tags: AppleScript, MarsEdit
Slug: AppleScript: 'Save MarsEdit Document to Text File'
Excerpt:
---
[Here's a simple AppleScript I wrote this week][s] -- one that
solves a minor itch I've had for, jeez, 20 years. Almost every
item I post to Daring Fireball goes through [MarsEdit], the
excellent Mac blogging client from Red Sweater Software (my
friend [Daniel Jalkut]). ...
That’s it. If you use MarsEdit, maybe it’ll help you. I picked the document fields in MarsEdit that I use (Title, Tags, Excerpt, etc.). One potential point of confusion is that while MarsEdit has an optional document field named “Slug”, I don’t use it. For historical reasons, I use Movable Type’s “Keyword” field for the words I want to use for the URL slug for each post. So in my text files, where it says “Slug:”, the text after that label comes from MarsEdit’s Keywords field. And I keep MarsEdit’s actual Slug field hidden, because I don’t use a field with that name in Movable Type. Your mileage, as ever, may vary. But this makes total sense to me.
Anyway, this script helped me clean up 29 drafts, some of them years old, that had been sitting around in MarsEdit, bugging me. Now my “Local Drafts” library in MarsEdit is empty, and those drafts are safe and sound in text files in Dropbox. When something in your workflow is bugging you, you should figure out a way to address it. Why I didn’t write (and share) this script years ago is a mystery for the ages.
Jeff Johnson, linking to my “Your Frustration Is the Product” piece:
My browser extension StopTheMadness Pro stops autoplaying videos and hides Sign in with Google on all sites. It also hides sticky videos and notification requests on many sites.
For more extreme measures, try my Safari extension StopTheScript. It kills JavaScript dead on websites you select. For example, from the blog post, it makes The Guardian readable.
These are both great extensions, and I have both installed for use in Safari on all my devices. StopTheScript is a bit peculiar, by nature of how it does what it does, but Johnson has a great illustrated tutorial for it and a good blog post explaining which sites he uses it on and why.
Over on the Chrome/Chromium side, there’s a very slick extension called Quick JavaScript Switcher. It’s free, but the developer (Maxime Le Breton) asks for a 5€ donation. QJS adds a simple JS on/off switch to the toolbar.
A lot of stuff doesn’t load when you just completely disable JavaScript for a site. You might be surprised just how much of that stuff is shit you don’t want or won’t miss.
Or, you can go the other way, give in, stop fighting the man, and install OnlyAds — an extension that hides everything on a website except the ads.
Javier C. Hernández, reporting for The New York Times:
He was responding to a question about why Japan and other allies had received no advance notice of the U.S.-Israeli assault on Iran.
“We didn’t tell anybody about it because we wanted surprise,” he said. “Who knows better about surprise than Japan, OK? Why didn’t you tell me about Pearl Harbor, OK? Right?”
There was some laughter from the officials and journalists gathered in the room. “You believe in surprise, I think, much more so than us,” he added.
As Trump sinks further into dementia and his presidency slides further into disarray, his administration, in a sick way, gets funnier and funnier.
Anne Applebaum, writing for The Atlantic (gift link):
Specifically, they remember that for 14 months, the American president has tariffed them, mocked their security concerns, and repeatedly insulted them. As long ago as January 2020, Trump told several European officials that “if Europe is under attack, we will never come to help you and to support you.” In February 2025, he told Ukrainian President Volodymyr Zelensky that he had no right to expect support either, because “you don’t have any cards.” Trump ridiculed Canada as the “51st state” and referred to both the present and previous Canadian prime ministers as “governor.” He claimed, incorrectly, that allied troops in Afghanistan “stayed a little back, a little off the front lines,” causing huge offense to the families of soldiers who died fighting after NATO invoked Article 5 of the organization’s treaty, on behalf of the United States, the only time it has done so. He called the British “our once-great ally,” after they refused to participate in the initial assault on Iran; when they discussed sending some aircraft carriers to the Persian Gulf conflict earlier this month, he ridiculed the idea on social media: “We don’t need people that join Wars after we’ve already won!”
Meanwhile, Irina Slav at Oilprice.com writes that oil — which was trading around $60 per barrel before the war — might soon be headed to $150–200 per barrel. $200! Energy Common Sense reports “This is now a multi-month, likely rest-of-year story of elevated prices and elevated risk.” Axios reports that most Americans will soon be paying over $4/gallon for gasoline, but I walked by Center City Philly’s lone gas station at lunch, and regular gas remains under $4 and premium under $5 — both with an entire one-tenth of one cent to spare.
Although President Donald Trump says he has “destroyed 100% of Iran’s Military Capability”, the 0% that remains is playing havoc with the global economy by choking off 10-15% of its oil supply.
This whole dumb fiasco might go down as the canonical example for the phrase “hoist with his own petard”. You just hate to see it.
Mark Simonson:
Just by coincidence, I discovered a copy of U&lc magazine in the graphics classroom. U&lc was published by ITC, the International Typeface Corporation, a typeface publisher, and the designer and editor was the legendary Herb Lubalin. I’d never seen such beautiful typography and design. It was a motherlode for an aspiring typophile like me. [...]
I decided right then that someday, somehow, I wanted to design typefaces.
Adamya Sharma, reporting for Android Authority:
When Google execs previously said sideloading would become a high-friction process on Android, they really weren’t kidding. The company is finally sharing what Android’s new sideloading flow will look like in practice, and if you’re someone who installs apps outside the Play Store, you’re going to feel it immediately, and you’re going to feel it deeply. [...]
When Android’s new sideloading rules come into force, installing apps from developers without Google verification (more on that later) will become extremely tedious by design and require a 24-hour lock before users can install them.
Here’s Google’s own explanation of the new restrictions. “Open always wins”, baby.
Would be interesting to hear Tim Sweeney’s thoughts on this, but he took a sack of cash in exchange for agreeing that whatever Google does with Android hence is “procompetitive” until 2032.
Kīlauea has entered its second year of episodic activity after reawakening in December 2024. Since then, the Hawaiian volcano has gone through dozens of bouts of lava fountaining, each lasting several hours to several days.
Activity ramped up once again on March 10, 2026, for episode 43 of the eruption. From approximately 9 a.m. to 6 p.m. local time that day, lava spewed from two active vents on the southwest side of Halema‘uma‘u Crater, adding to the ever-thickening layer of fresh basaltic rock in the summit caldera. The flareup also featured the highest lava fountains of the current eruption, estimated at 1,770 feet (540 meters). Meanwhile, ash and other airborne debris fell on communities up to 50 miles (80 kilometers) away.
About 4 hours after fountaining subsided, the Landsat 9 satellite passed over the Island of Hawai‘i. This image shows shortwave infrared and near-infrared data, acquired with the satellite’s OLI (Operational Land Imager) at 10:20 p.m. local time on March 10 (08:20 Universal Time on March 11), revealing heat emanating from the still-sizzling lava. That information is layered over a composite of daytime Landsat images and a digital elevation model.
An estimated 16 million cubic yards (12 million cubic meters) of lava erupted during the episode, according to the Hawaiian Volcano Observatory (HVO), bringing the total volume erupted across all episodes since December 2024 to close to 325 million cubic yards (250 million cubic meters). Over the same period, the depth of lava in the crater has increased by about 300 feet (90 meters).
While lava remained confined to the summit area, other erupted material traveled much farther. Images captured by satellites orbiting over the area during the daytime showed a volcanic plume drifting northeast from the vents. Volcanic gas and ash reached a maximum height in the atmosphere of more than 30,000 feet (9,100 meters) above sea level, the HVO said. The aviation color code was elevated to red during the eruption, and several flights at the airport in Hilo were canceled, according to news reports.
Volcanic fragments up to several inches in diameter fell along the north rim of the caldera and in adjacent communities. The hazards and accumulation of debris caused the temporary closure of Highway 11 and the evacuation of visitors from parts of Hawaiʻi Volcanoes National Park. Smaller particles were carried farther: people reported ash and Pele’s hair falling tens of miles to the north and east of Kīlauea, including in Hilo, Keaʻau, and other communities on the coast. Volcanic debris is an eye, skin, and respiratory irritant, the HVO warned, and it may affect water quality for those using rainwater catchment systems.
NASA Earth Observatory image by Michala Garrison, using Landsat data from the U.S. Geological Survey. Story by Lindsey Doermann.
The post Restless Kīlauea Launches Lava and Ash appeared first on NASA Science.
Evolutionary biology is one attempt to explain the nature of living beings. In that framework there is a difference between individuals and genes. If a practice increases the chance that genes will be passed along, it may evolve and be passed along, whether or not it serves either individual or collective self-interest.
To give a simple example, some women may prefer “cads.” Those men, by definition, will sleep around, but possibly their sons will sleep around too. The woman’s genes may thus spread more widely, and women who prefer cads may not disappear from the gene pool, even though the cads are bad for them.
You might ask whether corresponding mechanisms apply to the evolution of AI models. If I prefer an OAI model to DeepSeek, for instance, that will help to spread OAI models through the AI population. OAI will have more revenue, and it will produce more output of what is succeeding in the market. Furthermore my choice of model may influence others to do the same, and it may help create and finance surrounding infrastructure for that model.
Will I buy the next generation of OAI models? Well yes, if the first one pleased me. The model “reproduces” and sustains itself if I, as a consumer, am happy with it. One obvious incentive is toward usefulness, another is toward sycophancy. We already see these features realized in the data. There is nothing comparable, however, to the “cads incentive” in human life.
One potential problem comes if individuals are not the only potential buyers. Let us say the military also purchases AI models. The motives of the military may be complex, but at the very least “wanting to kill people” (whether justly or not) is on the list of possible uses. Models effective for this end thus will be funded and encouraged.
My model of the military is that, above and beyond efficacy, they value “obedience” and “following orders” to an extreme degree, including in their AI models. There will thus be evolutionary pressures for those features to evolve in the AI models of the military.
To be sure, not all orders are good ones. But in this case the real risk is from evil humans, or deeply mistaken humans, not from the tendencies of the AI models themselves.
So my view is that the selection pressures for AI models are relatively benign, noting this major caveat about how evil humans may develop and use them.
If the biggest risk is from the military models, it might be good for the consumer sector of AI models to grow all the more, as a relatively benevolent counterweight.
Are financial-sector AI models going to evolve more like the consumer models or the military models?
Here are some related remarks from Maarten Boudry, and I also thank an exchange with Zohar Atkins.
The post Consumers vs. mates as a source of selection pressure appeared first on Marginal REVOLUTION.
Brookings Metro Monitor Rates Portland Economy Healthy
A major national think tank rates Portland’s economy among the nation’s most robust, with high marks for prosperity, growth and inclusion.
Portland ranks in the top ten for prosperity, the top third for inclusion, and the top half for growth over the past decade.
Portland consistently outperforms the nation and the average of the 55 largest metro areas on a diverse suite of economic indicators
The Brookings Institution Metro Monitor is one of the most comprehensive, respected, and independent analyses of regional economic performance.
Some short term data shows Portland’s economy has slowed in the past year or so, but more robust, long term measures of economic performance show fundamental health; the economy will always experience cycles, and state and local policy should focus on long-term economic upgrading.
The real threat to Portland’s economy is not a largely imaginary “Doom Loop,” but Trump Administration policies that undermine the foundations of Portland (and national) success: free trade, immigration, science, education and the rule of law.
To hear the local chamber of commerce tell it, Portland’s economy seems to deserve a failing grade, maybe even a “D” or an “F.” They proclaim Portland is stuck in a terrifying “Doom Loop.” The chamber says economic indicators are flashing clear warning signs, and it worries that population growth has slowed, year-over-year employment has declined, and housing has ebbed, and it fears a dreaded “inflection point.” But that’s a bit of scaremongering. On the gold standard of standardized tests for metropolitan economies–the Brookings Institution’s Metro Monitor–Portland’s economy earns a solid “B.” In some subjects, like inclusiveness and equity–measuring how well the economy helps middle and lower income households–Portland is close to the head of the class. There’s room for the local economy to improve, but the Brookings data point to a regional economy that’s consistently outperformed the nation.
Data from the Brookings Institution shows that the Portland economy outranks most other metros in output, wage growth, and average earnings.
The Brookings Institution’s Metropolitan Policy Center is one of the nation’s leading analysts of metropolitan economies. For years it’s been generating an annual “Metro Monitor” that rates every US metro area on a battery of indicators designed to comprehensively measure economic health, performance, and equity. The latest report shows that Portland’s economy continues to perform well.
Portland’s economy ranks above average, or near the top of the nation’s 55 large metro areas in three headline areas–prosperity, growth, and inclusion–according to the 2026 Brookings Metro Monitor.
This top-level set of indicators shows that, over the past decade, Portland’s economy has performed well–much better than the average US metropolitan area–on a range of indicators from aggregate growth of output and employment, to wages and the standard of living, to key measures of inclusion, like median earnings. Of course, there’s always room to improve, and Portland should aspire to do even better. But by the same token, these data show there’s no reason to despair or panic. The industrial structure, workforce, environment, and state and local policies that Portland has today are fundamentally the same as they’ve been for the past decade, and the Brookings data show the region has repeatedly turned in an above average performance on nearly every economic indicator.
The Brookings Metro Monitor has a distinct advantage over reports like those prepared for local chambers of commerce that rely on cherry-picked data and a narrow subset of competitor cities. Unlike local chambers, Brookings doesn’t have an interest or bias in trying to talk up or talk down any metro area, and it has selected a suite of indicators that is comprehensive. In addition, the Brookings report is vastly better researched and more reliable than journalistic rankings like the CNBC Business Climate index. There’s little evidence that business climate rankings bear much, if any, relationship to long-term economic prosperity, and they are often merely thinly veiled arguments for business subsidies.
In addition, Brookings takes the long view: short-run variations in economic activity mean that in any one year a metro area may perform better or worse than its long-term trend; the rolling ten-year horizon used by Brookings minimizes the transitory effects of these noisy, short-term fluctuations. The US seems poised to experience another economic cycle: disruptions to trade and immigration are likely to affect Oregon–famously a trade- and immigration-dependent state–well ahead of the rest of the country. And Oregon’s largest private employers, Intel and Nike, have recently experienced tough years, so it shouldn’t be surprising that recent data is less favorable. But, in point of fact, state economic policies can do almost nothing to influence short-term economic trends. Instead, state and local policies and investments, particularly in education and quality of life, are most influential in shaping longer-term growth trajectories, and that’s where the Brookings Metro Monitor provides ample evidence that the Portland regional economy is fundamentally healthy.
That’s not to say there aren’t challenges ahead. Trump Administration policies are fundamentally undermining the long-term foundations of US economic prosperity, and these policies directly jeopardize key elements of Portland’s strengths. The Trump Administration is demolishing the US-led march to global free trade, attacking immigrants, defunding science, undermining education and debasing the rule of law. Portland’s success has stemmed from its ability to tap into global markets and value chains, attract immigrants and bolster its economy through immigration, apply science to new industries, including high tech and biotech, educate its citizens and attract smart people from other places, and its commitment to honest government and fair dealing. A decline in trade, reduced immigration, a loss of scientific leadership, disinvestment in education, and the advent of corrupt, crony capitalism will all work to Portland’s disadvantage. In a companion piece to the Metro Monitor, Brookings calls out the decline in immigration as a key threat to metropolitan economies. In this environment, placing our faith in discredited zombie ideas that tax cuts and business climate will ensure prosperity would be a mistake and an abdication of responsible policy. If the regional economy is to succeed in the years ahead it will have to cope with and overcome these trends, and hopefully, it can help lead the effort to restore these foundations, locally, then nationally. As we’ve said, Portland needs to embrace the frog.

Glencora Haskins and Joseph Parilla, Brookings Metro Monitor, Metropolitan Policy Center, Brookings Institution, Washington, D.C. March, 2026. https://www.brookings.edu/articles/metro-monitor-2026/.
Up betimes and to Woolwich all alone by water, where took the officers most abed. I walked and enquired how all matters and businesses go, and by and by to the Clerk of the Cheque’s house, and there eat some of his good Jamaica brawne, and so walked to Greenwich. Part of the way Deane walking with me; talking of the pride and corruption of most of his fellow officers of the yard, and which I believe to be true. So to Deptford, where I did the same to great content, and see the people begin to value me as they do the rest. At noon Mr. Wayth took me to his house, where I dined, and saw his wife, a pretty woman, and had a good fish dinner, and after dinner he and I walked to Redriffe talking of several errors in the Navy, by which I learned a great deal, and was glad of his company. So by water home, and by and by to the office, where we sat till almost 9 at night. So after doing my own business in my office, writing letters, &c., home to supper, and to bed, being weary and vexed that I do not find other people so willing to do business as myself, when I have taken pains to find out what in the yards is wanting and fitting to be done.

Finnish satellite manufacturer ReOrbit has signed a contract with asset-financing company SLI for two small geostationary orbit communications satellites.
The post ReOrbit sells two small GEO satellites to SLI appeared first on SpaceNews.

Satellite manufacturer Apex has won a contract from a Japanese company to provide a spacecraft bus for a technology demonstration mission.
The post Apex sells satellite for Japanese technology demonstration mission appeared first on SpaceNews.

Join us on March 31 for a virtual event, sponsored by Star Catcher and in partnership with the Commercial Space Federation
The post Register now: The energy imperative driving the push toward orbital data centers appeared first on SpaceNews.

The agreement is for ground management and integration for the Resilient Missile Warning and Tracking satellite program
The post Kratos wins $446 million Space Force contract for missile-tracking ground systems appeared first on SpaceNews.

Portal Space Systems, a company developing maneuverable spacecraft, is partnering with an Australian startup to offer a commercial orbital debris removal service.
The post Portal Space Systems and Paladin Space plan debris removal service appeared first on SpaceNews.

In this episode of the Space Minds podcast, host David Ariosto speaks with Eileen Collins, retired NASA astronaut, Air Force colonel and the first woman to pilot the Space Shuttle […]
The post Eileen Collins on what it takes to become Space Shuttle Commander appeared first on SpaceNews.

NASA is proposing a sharp increase in the rate of robotic lunar lander missions, a move that has excited but also puzzled the space community.
The post NASA considering sharp increase in robotic lunar landings appeared first on SpaceNews.

Update March 20, 12 p.m. EDT (1600 UTC): NASA confirmed completion of the Space Launch System rocket’s rollout.
NASA’s Moon rocket returned to the launch pad after repairs inside the cavernous Vehicle Assembly Building at the Kennedy Space Center. The 322-foot-tall Space Launch System (SLS) rocket, atop the 400-foot-tall Mobile Launcher 1 (ML-1), was set to start the slow trek to the pad Thursday night, with the call to stations happening that afternoon.
The rocket’s rollout to pad 39B sets up a launch attempt for the Artemis 2 mission no earlier than April 1. First motion of the crawler-transporter that carries the launch platform was expected around 8:00 p.m. EDT (0000 UTC), but the rocket didn’t begin moving until closer to 12:20 a.m. EDT (0420 UTC) due to high winds at NASA’s Kennedy Space Center.
NASA anticipated that the journey would take roughly 12 hours to complete. The agency confirmed Friday morning that hard down – the term meaning that the rocket and ML-1 were set down onto the pedestals at the pad – was accomplished at 11:21 a.m. EDT (1521 UTC).
NASA returned its SLS rocket and Orion spacecraft to the Vehicle Assembly Building to fix a helium flow problem on the rocket’s upper stage. That discovery on Feb. 21, after a successful fueling test at pad 39B, caused NASA to forgo a March launch attempt and pivot to April instead.
While the helium issue was resolved, technicians conducted other prelaunch work, including replacing the batteries connected to the flight termination system on the solid rocket boosters, core stage and upper stage.

The Artemis 2 mission will see NASA astronauts Reid Wiseman, Victor Glover and Christina Koch alongside Canadian Space Agency (CSA) astronaut Jeremy Hansen fly around the Moon and splashdown in the Pacific Ocean about 10 days after liftoff.
It will be the first time that a crew lives and works onboard the Orion spacecraft. The test flight is a precursor to other crewed missions for the Artemis program, which will see astronauts heading down to the surface of the Moon starting with Artemis 4 in 2028.
NASA Administrator Jared Isaacman recently announced changes to the Artemis program, including moving the first Moon landing from the third to the fourth Artemis mission and making the Artemis 3 flight a demonstration in Earth orbit of Orion docking with SpaceX’s Starship lunar lander or Blue Origin’s Blue Moon Mk.2 lunar lander, or potentially both.
During a March 12 sit-down interview with Spaceflight Now, Isaacman said within the next 60 to 90 days, the American public would get greater clarity about the specifics of the Artemis 3 mission.
Isaacman also teased ahead to a gathering in Washington D.C. to discuss the changes with its industry and international partners. During a briefing with members of the press on Thursday, European Space Agency (ESA) Director General Josef Aschbacher commented on the event and said he was looking forward to learning more himself.
“We look forward to the meeting next week. We will learn from NASA what the administration is planning on the Artemis architecture. This obviously is the Gateway and several other aspects,” Aschbacher said.
“I cannot obviously preempt what this discussion will be, but what is extremely important is that we had a very intense and good discussion within the ESA member states who gave their full support to me as Director General to coordinate activities among all the member states. NASA will see a very united Europe appearing in Washington.”
Every day there’s some new story about the enormous amounts of investment in building AI data centers. The Wall Street Journal reports that, as a fraction of GDP, AI capital spending in 2026 alone will be more than was spent on the decade-long build-up of the national railroad system, federal expenditures to create the interstate highway system, or the entire Apollo program. Bloomberg reports that AI data center spending might reach as much as $3 trillion. The Electric Power Research Institute is projecting that data centers will consume up to 17% of all US electricity by 2030.

But talking about data centers in terms of dollars spent or power consumed is somewhat abstract: it doesn’t tell us much about the capabilities of the infrastructure we’re actually building, the way that “miles of track” or “miles of highway” tells us about the scale of railroad or interstate building. I wanted to get a better understanding of what the data center buildout looks like in terms of computational power.
By far the biggest drivers of the AI data center buildout are scaling laws. Briefly, the more data you use to train an AI model, and the bigger and more computationally expensive that model is, the better the model performs. Making better and more powerful AI models thus demands increasing amounts of computation to train and run them, and data centers are where all that computation is done.
A common measure of AI model computing power is FLOPS, floating-point operations per second. OpenAI’s GPT-2 model took an estimated 2.3x10^21 FLOP to train, while the more advanced GPT-4 took an estimated 2.1x10^25 FLOP — almost 10,000 times as much computation as GPT-2, more than 20 trillion trillion operations.
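The ratio between those two quoted estimates is easy to sanity-check with a line of arithmetic (the figures themselves are just the estimates cited above):

```python
# Back-of-the-envelope check of the training-compute estimates quoted above.
gpt2_flop = 2.3e21   # estimated total FLOP to train GPT-2
gpt4_flop = 2.1e25   # estimated total FLOP to train GPT-4

ratio = gpt4_flop / gpt2_flop
print(f"GPT-4 used roughly {ratio:,.0f}x the training compute of GPT-2")
# ~9,130x, i.e. "almost 10,000 times"
```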

(There is, of course, much more to computer performance than just FLOPS, but it’s a useful measure of computing power and it’s what we’ll stick with here.)
A floating-point operation is exactly what it sounds like: a mathematical operation (addition, subtraction, multiplication, division) performed on floating-point numbers. A floating-point number is a way of digitally representing fractional or decimal numbers in a computer, which stores everything as a sequence of ones and zeroes. It typically has three parts: a sign (whether the number is positive or negative), a significand (some sequence of digits), and an exponent that scales the significand by a power of a base (locating the decimal point).
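To make those three parts concrete, here's a minimal Python sketch (using only the standard library) that pulls the sign, exponent, and significand bits out of a 32-bit float:

```python
import struct

def fp32_parts(x: float):
    """Split a 32-bit IEEE 754 float into its sign, exponent, and significand bits."""
    bits = struct.unpack(">I", struct.pack(">f", x))[0]
    sign        = bits >> 31           # 1 bit
    exponent    = (bits >> 23) & 0xFF  # 8 bits, stored with a bias of 127
    significand = bits & 0x7FFFFF      # 23 bits, with an implicit leading 1
    return sign, exponent, significand

sign, exp, sig = fp32_parts(-6.5)
# -6.5 = -1.625 x 2^2, so the sign bit is 1 and the unbiased exponent is exp - 127 = 2
print(sign, exp - 127, 1 + sig / 2**23)   # 1 2 1.625
```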

Different floating-point standards allocate different amounts of memory to each of these parts. For example, the IEEE 754 standard for floating-point arithmetic specifies a 32-bit floating-point number (the size of floating-point numbers typically used in general-purpose computers) as having 1 bit for the sign, 8 bits for the exponent, and 23 bits for the significand. This finite amount of space makes floating-point operations fundamentally limited in their precision, because the less space you allocate, the less precise your number. A 16-bit floating-point number will have less precision than a 32-bit one, which will have less precision than a 64-bit one. (This will become important later.)
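You can see the precision gap directly: Python's `struct` module can round a value to the nearest representable 16-bit (`"e"`) or 32-bit (`"f"`) float, and the rounding error for 1/3 is thousands of times larger at 16 bits:

```python
import struct

def round_to(fmt: str, x: float) -> float:
    """Round x to the nearest value representable in the given IEEE 754 format."""
    return struct.unpack(fmt, struct.pack(fmt, x))[0]

third_fp32 = round_to("f", 1 / 3)   # binary32: 23-bit significand
third_fp16 = round_to("e", 1 / 3)   # binary16: only a 10-bit significand

print(abs(third_fp32 - 1 / 3))  # ~1e-8 error
print(abs(third_fp16 - 1 / 3))  # ~8e-5 error: far less precise
```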

So how many FLOPS can a typical AI data center achieve?
Computation in a data center is done on huge numbers of graphics processing units, or GPUs, which are specialized processors designed to perform large numbers of arithmetic operations simultaneously. (GPUs were originally designed to render graphics for things like computer gaming, and for many years Nvidia was primarily a manufacturer of computer gaming graphics cards.) One common GPU is Nvidia’s H100, which was first released in 2022 and is still one of the most popular GPUs for AI-related computing tasks. Estimates of data center capacity will often be done in terms of “H100 equivalents.” Per Epoch AI’s dataset on large GPU clusters, a typical AI data center will have around 100,000 H100 equivalents, and a very large one might have 1 million or more. Meta’s planned 5-gigawatt data center campus in Louisiana is estimated to have over 4 million H100 equivalents when it’s complete.

How much computational capacity does an H100 have?
This is where it starts to get complex. GPUs designed for AI tasks, like the H100, are able to perform more computation on less precise numbers. For a typical 32-bit floating-point number (FP32), an H100 can do 60–67 teraFLOPS depending on the configuration: up to 67 x 10^12, or 67 trillion, floating-point operations per second. But with 16-bit numbers (FP16), an H100 can achieve 1,979 teraFLOPS, an increase of almost 30 times. And with 8-bit floating-point numbers (FP8), it can double that again to 3,958 teraFLOPS.
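The pattern in those spec numbers is easy to check: halving the precision roughly doubles the throughput (figures are the peak H100 throughputs quoted above, in teraFLOPS):

```python
# Peak H100 throughput at different precisions (teraFLOPS), per the figures above.
h100_tflops = {"FP32": 67, "FP16": 1979, "FP8": 3958}

print(h100_tflops["FP16"] / h100_tflops["FP32"])  # ~29.5: "almost 30 times"
print(h100_tflops["FP8"] / h100_tflops["FP16"])   # 2.0: half the bits, double the speed
```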

However, outside of FP32 and FP64, these performance levels are achieved with something called sparsity. Sparsity here means that, within each group of four values in a matrix, at least two are zero. When that holds, the GPU can skip the multiplications involving the zero values, effectively cutting in half the number of operations it must perform. If the matrix isn’t sparse (if it’s dense), the listed performance numbers will fall by roughly half.
When training an AI model, sparsity basically can’t be achieved at all. When running a model it can be, but taking advantage of it requires putting the model through an extra step known as pruning. So only in certain cases can these published H100 performance levels actually be reached.
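A toy sketch makes the mechanism clear. This is not how a GPU actually implements it, but it shows why 2:4 structured sparsity halves the work: in every group of four weights at least two are zero, so half the multiply-accumulates can be skipped.

```python
# Toy illustration of 2:4 structured sparsity: in each group of four weights,
# at least two are zero, so half of the multiplications can be skipped.
def dot_with_24_sparsity(weights, activations):
    total, mults = 0.0, 0
    for i in range(0, len(weights), 4):
        group = weights[i:i + 4]
        assert sum(1 for w in group if w == 0) >= 2, "not 2:4 sparse"
        for w, a in zip(group, activations[i:i + 4]):
            if w != 0:            # skip multiplications by zero
                total += w * a
                mults += 1
    return total, mults

w = [0.5, 0.0, 0.0, 2.0,   0.0, 1.0, 3.0, 0.0]   # a 2:4-sparse weight vector
x = [1.0] * 8
value, mults_done = dot_with_24_sparsity(w, x)
print(value, mults_done)   # 6.5 4 -> only half of the 8 multiplications performed
```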
Most general-purpose computing is done using higher-precision FP32 floating-point numbers. But for training and running AI models, it turns out that good results can be achieved with 16-bit, 8-bit, or even 4-bit floating-point numbers.
How does the computational capacity of an H100 compare to other types of computer, say, an iPhone?
The iPhone 16 uses Apple’s A18 chip and features a six-core GPU on the Pro version. Estimates of the computational capacity of the A18 vary, but it seems to be on the order of 2–3 teraFLOPS using FP32, and perhaps double that using FP16. The A18 also has a 16-core neural processing unit (NPU) capable of 35 trillion operations per second (TOPS) with what appears to be 8-bit integers (INT8). By comparison, the H100 can do up to 3,958 TOPS at INT8 with sparsity, an increase of 113 times. (The A18 also has a CPU, but this apparently adds a negligible amount of computational capacity.)
To put this all together: an H100 has 20–30 times the computational capacity of an iPhone 16 GPU when it’s doing mathematical operations with 32-bit floating-point numbers, but around 137–275 times the capacity when working with 16-bit numbers (depending on whether you have sparsity or not). And an H100 has around 56–113 times the capacity of the A18’s NPU. If we assume that both the NPU and GPU can be used together, this suggests an H100 has on the order of 50–100 times the computational capacity of an iPhone 16.1 A typical AI data center with 100,000 H100 equivalents will be roughly equivalent to 5–10 million iPhone 16s, and a monstrous 5 GW data center will be equivalent to 200–400 million (!) iPhone 16s.
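The data-center totals follow directly from the per-chip range, as a quick multiplication shows (using the rough 50–100x-per-H100 estimate above):

```python
# Rough iPhone-equivalents, using the 50-100x per-H100 range estimated above.
low, high = 50, 100                      # iPhone 16s per H100 (order of magnitude)

typical_dc = 100_000                     # H100 equivalents in a typical AI data center
print(typical_dc * low, typical_dc * high)   # 5,000,000 to 10,000,000 iPhones

giant_dc = 4_000_000                     # e.g. Meta's planned 5 GW Louisiana campus
print(giant_dc * low, giant_dc * high)       # 200 million to 400 million iPhones
```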
Of course, in practice you couldn’t achieve anything like an H100 performance by wiring a bunch of iPhones together; the H100 is designed to be connected to thousands of other H100s, and has massive interconnect and memory bandwidth to make that possible, which the iPhone doesn’t. But this gives us a rough idea of the computational capacities involved.
Another comparison: An H100 has about 80 billion transistors, whereas an A18 has about 20 billion.
Links for you. Science:
The NIH Restructuring Congress Rejected Is Happening Anyway
Dentists still write millions of prescriptions a year for an antibiotic with life-threatening risks
Matrilineal networks may be the key to understanding Neanderthal mixture
How are asymptomatic COVID-19 cases tracking?
Epistasis and co-adaptation in bacterial genome evolution
Glyphosate is driving a rift in MAHA. Here’s what the science says about its effects on health
Other:
With Us or Against Us, Again. Congress didn’t authorize this war, and the only response to questions is a loyalty test we’ve fallen for before. (excellent)
This Monument Is the Latest Casualty in Trump’s War on Public Lands
Congestion Pricing Wins in Court After Lengthy Battle With Trump
Let’s Face Facts: This Isn’t Going Well (Iran War Edition)
Kash Patel’s latest firings ousted agents with expertise in Iran
Prior to Iran attacks, CIA assessed Khamenei would be replaced by hardline IRGC elements if killed, sources say
Why Is The Cook County State’s Attorney Prosecuting Nonviolent ICE Protesters? A Block Club investigation found dozens of protesters arrested by state police are still facing criminal charges for minor infractions, such as sitting on a concrete barrier — even after Gov. JB Pritzker vowed to protect their First Amendment rights.
Brad Lander Is Building a United Progressive Front
Greg Bovino, other federal agents investigated for Operation Metro Surge actions
What Both Journalists and MAGA Voters Misunderstood About Trump and War
Six data-driven reasons Texas could actually go blue in 2026
A Rational Analysis of the Effects of Sycophantic AI
Indefinite Book Club Hiatus
Park Service to Revive Statue of Founding Father Who Enslaved Hundreds
Minnesota launches investigation that could bring charges against US immigration officers
How ICE deportations are impacting people experiencing homelessness in D.C.
Trump’s top deportation thug could finally face consequences
ChatGPT uninstalls surged by 295% after DoD deal
Medicaid is paying for more dental care. GOP cuts threaten to reverse the trend.
Internal DHS watchdog: Noem is obstructing our work
Trump’s Iran war gets cold reviews from MAGA-friendly influencers
Before you share that story about how troops were told the Iran War is for “Armageddon,” read this
This time, Boston City Council unanimously supports mayor’s order banning ICE from city property
RFK Jr. demands Dunkin’, Starbucks prove sugary beverages are ‘safe’
Pseudoscientific Push to Frame Abortion as a ‘Water Quality’ Issue Rears Its Head in Iowa
Betting Against Increased Taxes, DraftKings Is Spending Big on Illinois State Races
‘America Doesn’t Want My Children or Grandchildren’
DHS’s use of secretive legal weapon draws congressional scrutiny
JD Vance, the New Racist Populism Czar
Kansas Senate votes to subvert students’ First Amendment right to join public protests
The third possibility, that AI helps to weed out mistakes, is trickier for the discipline. This stage could become even more important if journals do start to be hit by a wave of AI-generated slop — or, perhaps more likely, good papers with so many appendices and robustness checks that even the most dedicated referee is defeated. (The real “Dr Robust” does not have infinite energy.)
Eager to embrace the new technology, several of the top five economics journals are already experimenting with Refine, an impressive AI-powered reviewing tool that scours economics papers for errors. Ben Golub, one of its creators, shared that even with papers that had been through referees at top journals, Refine was picking up problems in at least a third of cases.
Here is more from Soumaya Keynes at the FT.
The post Is AI currently helping economic research? appeared first on Marginal REVOLUTION.

I’ve been writing some pessimistic things about AI recently, so I thought I should try to balance those out with some optimistic takes. One way I think AI could really help our society is by injecting reasonableness and moderation into our public discourse.
I’m known as a pretty nice and reasonable blogger nowadays. But when I got started, as an angry graduate student in 2011 trying to distract himself from his dissertation, I was genuinely snarky. Going back and rereading some of my posts from that era makes me chuckle, but also wince a little bit. The genteel éminence grises who sat atop the hierarchy of the very hierarchical economics profession just had no idea how to deal with a snarky, internet-native Millennial who was willing to talk back.
That snarky bravado, though sincere, was how I (accidentally) forced myself into the influencer elite. Paul Krugman, Brad DeLong, and other established bloggers liked how I tweaked the tails of the stuffy New Classical macroeconomists who pooh-poohed fiscal stimulus. So they boosted me on their own blogs, and pretty soon almost everyone in the economics profession knew my name — deservedly or not. Then I got Twitter, and I started tweeting way too much, and the rest is history. Notably, it was my political tweets — anti-Trump stuff in 2015-2020 — that got me my biggest bump in social media followership, rather than my economic insights.
In the media world of 1991, this career path would have been a LOT harder to pull off. I could have been a newspaper columnist or perhaps even a TV show host, but it would have been a long hard slog, gatekept by a bunch of editors who embodied the conventional wisdom of an older generation. My best bet for breaking in as an irreverent, independent voice probably would have been talk radio. In the media world of 1971, forget about it — I would have zero chance of breaking in to a discourse dominated by broadcast TV and big newspapers.
We can wonder whether the world would have been better or worse had I never become a public intellectual (hopefully, because you read this blog, your answer is “better”). But in my personal opinion, it’s pretty clear that the phenomenon of outsiders breaking into the discourse with aggression and social media attention-seeking has gone too far. There is very clear evidence that social media — far more than the traditional media it replaced — has led to the elevation of divisive voices and bad actors.
For example, Bor and Petersen (2021) find that social media draws malignant, status-seeking people who use hostility to get attention and power:
Why are online discussions about politics more hostile than offline discussions?…Across eight studies, leveraging cross-national surveys and behavioral experiments (total N = 8,434), we [find that] hostile political discussions are the result of status-driven individuals who are drawn to politics and are equally hostile both online and offline. Finally, we offer initial evidence that online discussions feel more hostile, in part, because the behavior of such individuals is more visible online than offline. [emphasis mine]
Basically, spreading hate and divisiveness on social media is a form of entrepreneurship. As Eugene Wei has written, social media is all about getting social status. 10,000 followers on X may not sound like a media empire to rival CBS News, but for most people it’s more attention than they would otherwise get in their entire life. For malignant individuals who crave status and attention and enjoy spreading fear and hate, social media is a natural platform for their dark dreams.
This is especially effective because the psychology of viral content tends to spread negativity more than positivity. Here’s Knutson et al. (2024):
We analyzed the sentiment of ~30 million posts (on twitter.com) from 182 U.S. news sources that ranged from extreme left to right bias over the course of a decade (2011–2020). Biased news sources (on both left and right) produced more high arousal negative affective content than balanced sources. High arousal negative content also increased reposting for biased versus balanced sources…Over a decade, the virality of high arousal negative affective content also increased, particularly in…posts about politics. Together, these findings reveal that high arousal negative affective content may promote the spread of news from biased sources.
And Brady et al. (2021) find that social media outrage is a self-reinforcing process:
Moral outrage shapes fundamental aspects of social life and is now widespread in online social networks. Here, we show how social learning processes amplify online moral outrage expressions over time. In two preregistered observational studies on Twitter (7331 users and 12.7 million total tweets) and two preregistered behavioral experiments (N = 240), we find that positive social feedback for outrage expressions increases the likelihood of future outrage expressions, consistent with principles of reinforcement learning.
Together, these effects probably explain why negative content — especially about people’s political enemies — is so much more common than positive content on social media. Here’s Watson et al. (2024):
Prior research demonstrates that news-related social media posts using negative language are re-posted more, rewarding users who produce negative content…Data from four US and UK news sites (95,282 articles) and two social media platforms (579,182,075 posts on Facebook and Twitter, now X) show social media users are 1.91 times more likely to share links to negative news articles….[U]sers [show] a greater inclination to share negative articles referring to opposing political groups. Additionally, negativity amplifies news dissemination on social media to a greater extent when accounting for the re-sharing of user posts containing article links. These findings suggest a higher prevalence of negatively toned articles on Facebook and Twitter compared to online news sites.
And as if that wasn’t bad enough, social media platforms algorithmically amplify divisive content, probably as a business strategy! Here’s Milli et al. (2024):
In a pre-registered algorithmic audit, we found that, relative to a reverse-chronological baseline, Twitter's engagement-based ranking algorithm amplifies emotionally charged, out-group hostile content that users say makes them feel worse about their political out-group.
And research also finds that algorithmic feeds tend to increase political polarization.
In other words, the rise of social media created a revolution in political discourse. The old-school monopoly of big newspapers and TV stations — already under strain from the Web and from increased entry and competition — was overthrown by a giant mob of wannabe influencers, using divisiveness, partisanship, ideology, tribalism and negative emotions to get attention and status.
I call these people the Shouting Class. The most successful among them include people like Nicholas Fuentes, a literal Hitler supporter who has called for women to be sent to “gulags”; Candace Owens, a conspiracy theorist and antisemite; and Hasan Piker, who has said that America deserved the 9/11 attacks. But the real damage is probably done by the vast legions of smaller-time shouters, all dreaming of becoming the next Fuentes or Owens or Piker. If you’re on X or Bluesky, you can probably name a few of them.
Regular people know, of course, that social media is ruled by monsters great and small. Here’s a poll from 2020 showing that Americans think social media has a negative effect on their society:

And here’s a recent poll showing that Americans trust social media less than just about any other institution:

Increasingly, Americans are getting off social media. But because the normal, moderate Americans are leaving first, this just cedes the field of influence to the extremists. This is from Törnberg (2025):
Overall platform use has declined, with the youngest and oldest Americans increasingly abstaining from social media altogether. Facebook, YouTube, and Twitter/X have lost ground, while TikTok and Reddit have grown modestly…Across platforms, political posting remains tightly linked to affective polarization, as the most partisan users are also the most active. As casual users disengage and polarized partisans remain vocal, the online public sphere grows smaller, sharper, and more ideologically extreme.
This is, of course, not the first time that new media technologies have opened up opportunities for divisive entrepreneurs to use hate and fear to boost their careers. Consider Charles Coughlin, a right-wing radio host in the 1930s, who called for an end to democracy and labeled Hitler a “hero”. Coughlin, whose ideas are recognizably similar to those of Fuentes or Tucker Carlson today, used a new media technology (radio) and a constant stream of negativity to break into the public consciousness and establish himself as an influencer.
Why did the Charles Coughlins give way to the staid, centrist Big Media of the mid-20th century? Monopoly power. Big newspapers gradually built local monopolies that made it hard for upstarts to break in using sensationalism (as they had done in earlier decades). Limited spectrum availability insulated broadcast TV stations and radio stations from competition.1
Those gatekeepers inevitably lost power as new technologies allowed new entrants to get inside the walls. Cable TV led to the rise of talk show hosts like Sean Hannity, Tucker Carlson, and Rachel Maddow. Talk radio led to Rush Limbaugh and Michael Savage. The Web led to blogs like the Drudge Report. All of these new entrants used divisiveness and negative emotion to break in. Social media just supercharged the process.
Arguably, American society hasn’t recovered from the blow that the rise of social media dealt it. Other societies seem to be a little bit more insulated from social media’s deleterious effects, due to their greater homogeneity and centralization — but only a bit. The problem is global.
The question now is what can save us from the tyranny of the Shouting Class. Who can be the next Walter Cronkite?
I used to think that this was a job for the owners of platforms themselves — that if they really wanted to, people like Elon Musk could tweak their algorithms and moderate their content to suppress the most divisive shouters and reward balance and reasonableness. I no longer think this will work. Watching the management of Bluesky try and fail to halt that platform’s descent into madness, and watching Elon’s algorithmic tweaks produce at best a slight conservative shift in opinion, I’m a lot more pessimistic about the ability of wise corporate management to suppress the Shouting Class. And given the fact that Elon has elevated some of that class’ worst members, I’m also more pessimistic about the desire of management to become CBS News.
Which leaves us with AI.
Anyone who has used X has noticed the “call Grok” feature. If you’re a premium subscriber, you can always just tag Elon’s favorite LLM and get it to answer questions and deliver relevant facts. Dan Williams writes that this type of LLM fact-checking will reintroduce expertise and technocratic fact-based analysis back into public discussions:
First, unlike human experts, [LLMs] can rapidly deploy encyclopaedic knowledge to answer people’s idiosyncratic questions. Their responses can be probed, scrutinised, and questioned without them ever getting tired or frustrated. They won’t just tell you that there is no persuasive evidence for a link between vaccines and autism. They can carefully walk you through the kinds of evidence we have and address your specific sources of scepticism. This partly explains why they can be highly persuasive, even in correcting conspiratorial beliefs that many assumed were beyond the reach of rational persuasion.
Second, LLMs typically share information politely and respectfully. This not only differs from the performative, gladiatorial character of much debate and discussion on social media platforms, but also improves on much communication by human experts. Being human, experts are often biased, partisan, and simply annoying, and when they seek to “educate” the public, it can be perceived—and is sometimes intended—as condescending and rude. In contrast, LLMs deliver expert opinion without such status threats.
In fact, there is evidence that this works. Despite widespread worry that AI will become a machine for confirmation bias — simply telling people what they want to hear — Renault et al. (2026) find that Grok is actually a decent fact-checker:
Using an exhaustive dataset of 1,671,841 English-language fact-checking requests made to Grok and Perplexity on X between February and September 2025, we provide the first large-scale empirical analysis of how LLM-based fact-checking operates in the wild…Across posts rated by both LLM bots, evaluations from Grok and Perplexity agree 52.6% of the time and strongly disagree (one party rates a claim as true and the other as false) 13.6% of the time. For a sample of 100 fact-checked posts, 54.5% of Grok bot ratings and 57.7% of Perplexity bot ratings agreed with ratings of human fact-checkers, which is significantly lower than the inter-fact-checker agreement rate of 64.0%; but API-access versions of Grok had higher agreement with fact-checkers that did not significantly differ from inter-fact-checker agreement. Finally, in a preregistered survey experiment with 1,592 U.S. participants, exposure to LLM fact-checks meaningfully shifts belief accuracy, with effect sizes comparable to those observed in studies of professional fact-checking.
In fact, although Elon has tirelessly worked to make Grok less “woke”, Renault et al. find that the AI is more likely to correct Republican posts than Democratic ones. While that doesn’t necessarily mean that reality has a liberal bias, it does show that the people who create LLMs have difficulty imparting their political bias to their creations.
Costello et al. (2024) also find that talking to AI makes people believe less in conspiracy theories.
I’m hopeful that LLMs will become fact-checking machines and dispensers of expertise-on-demand. But I actually think there’s a far more important reason why they could recapture our political discourse from the Shouting Class. Because of the way they’re trained, LLMs will be a force for homogenization and moderation of opinion.
This idea has been rattling around in my head for a while now, but I just noticed that Dylan Matthews wrote about this a couple months ago:
Some communication technologies are epistemically diverging: their emergence and diffusion results in the affected population’s sense of reality polarizing. Typically this means that the technology has enabled the population to access more and more varied perspectives and factual narratives than it had access to before the technology emerged…The classic example is the printing press and its effect on religious polarization in 16th century Europe…The classic modern diverging technology is, of course, social media…
Other technologies are epistemically converging: they help homogenize the perspectives the population experiences and build a less polarized, more shared reality among the population’s members…Network TV news, from the 1950s through 1990s, might be the best example of this kind of convergence…My provisional theory is that LLMs, as a consumer product, will push people’s senses of reality closer together in a sort of mirror image of the way social media has fractured them…They are centralized systems that, until you prompt them or give them context, behave basically the same way for everyone.
Let’s unpack this a little. If I’m a Democrat, and I talk to other people about politics, it’s likely I’m talking to other Democrats. This is even more likely on social media than in real life — some of my neighbors and coworkers might be Republicans, but on X or Bluesky I can just seek out other Democrats. Those other Democrats also mostly talk to other Democrats, and so on. So an echo chamber builds, where people’s ideas get reinforced and polarized. If I do interact with a Republican online, it’s probably in an adversarial context — I’m shouting at them or being shouted at, which just tends to harden me in my Democratic views.
But when I talk to an AI, it’s a different story. The AI’s opinions and beliefs come from its training data,2 and that data comes from both Democrats and Republicans. Instead of getting the average of my social circle, I’m getting something closer to the average of the country. If AI has any persuasive power at all, it’ll end up pulling me towards the middle.
And AI does have persuasive power. Chen et al. (2026) find that recent LLMs are more persuasive than campaign advertisements. Hackenburg et al. (2025) also find substantial persuasive capabilities.
So LLMs are a natural source of moderation — when people talk to AI, they are indirectly being persuaded by the opinions of a bunch of people who disagree with them. This also means that LLMs are censoring the tails of the idea distribution. AI is trained on the output of a much broader group of people than the extremist shouters who tend to grab attention on social media; it will naturally tend to side with the silent majority in most cases.
This process should end up pushing people’s opinions closer to some sort of consensus, whether or not the consensus is right.3 In fact, there’s some evidence that AI homogenizes people’s ideas. This is from Sourati et al. (2026):
We synthesize evidence across linguistics, psychology, cognitive science, and computer science to show how LLMs reflect and reinforce dominant styles while marginalizing alternative voices and reasoning strategies. We examine how their design and widespread use contribute to this effect by mirroring patterns in their training data and amplifying convergence as all people increasingly rely on the same models across contexts.
And this is from Jiang et al. (2025):
[W]e present a large-scale study of mode collapse in LMs, revealing a pronounced Artificial Hivemind effect in open-ended generation of LMs, characterized by (1) intra-model repetition, where a single model consistently generates similar responses, and more so (2) inter-model homogeneity, where different models produce strikingly similar outputs.
Now at first blush, this might sound bad. I don’t want humanity to turn into a literal hive mind! And of course it’s worth remembering that although we now romanticize the 1950s, at the time people felt stifled by conformity. There should be a middle ground between anarchy and pod people.
But if you think social media has pushed society too far in the direction of anarchy, then you’ll welcome a bit of a push back in the direction of consensus. A country can’t get anything done if everyone is always at each other’s throats. Nor did fragmentation and polarization “democratize” our information space — they marginalized the silent majority of moderate normies, and handed control of our thoughts to some of the worst extremists in our society. In a way, by giving voice to the center of the distribution, AI may be a more truly democratizing force in our discourse than the internet itself ever was.
Perhaps the only thing that can save us from ten thousand Digital Charles Coughlins is a Digital Walter Cronkite.
In the U.S. there was also something called the Fairness Doctrine, which required broadcast media to be even-handed, whose legal justification was predicated on the broadcast spectrum monopoly.
And from synthetic data generated from that training data, and occasionally from reinforcement learning (but more for math and coding than for politics and debate).
Interestingly, Hackenburg et al. find that AIs persuade people by throwing a blizzard of information at them, and that this information is often wrong; it often decreases the factual accuracy of humans’ beliefs. This should serve as a reminder that homogenization of belief and moderation of belief are not the same thing as factualness or education; getting everyone to believe the same thing, and getting them to believe the correct thing, are different tasks.
1. Ideological trends in academic scholarship.
2. Prediction market for the John Bates Clark award.
3. Show Me The Model. “Give it a URL or paste some plain text, and the tool flags hidden assumptions, internal inconsistencies, and other problem areas, and tells you how a real economist would think through the issue.”
4. “I built Frontier Graph: an open-source tool to explore open questions in economics, drawing on 240K papers across 300 journals.” And here.
6. India tests whether AI can stop trains from hitting elephants.
7. The Amish are OK with washing machines.
The post Thursday assorted links appeared first on Marginal REVOLUTION.
In healthcare, finance, insurance, and life sciences, modernizing an app is less “new look” and more “show your work.” The important pieces are the ones people do not brag about: reliable records, orderly approvals, and evidence that can survive a close read.
That is why teams often begin with focused application modernization services that lower risk in the most sensitive spots, while audits, validation, and daily work keep moving. The goal is simple: release faster without misplacing the receipts regulators and risk teams will ask for later.
A regulated organization does not just ask, “Does it work?” It also asks, “Can the change be explained six months from now?” Therefore, old apps often survive longer than they deserve, because they come with habits auditors understand: manual sign-offs, long release windows, and change records buried in email threads.
Legacy tools also blur boundaries. Business rules, access rights, and logging sit in the same code, so a small update can ripple into permissions, reporting, and audit history. As a result, teams learn to fear change. That fear shows up during tech audits, when people get asked to prove a control exists and discover it only exists in someone’s memory.
An application modernization company that works in regulated settings usually starts by mapping “regulatory promises” instead of features. That means listing what must stay true, such as record retention, separation of duties, or controlled access to sensitive data, and then rebuilding the app so those promises stay visible and provable.
An audit trail is not just a list of events. It is a story that can be replayed: who did something, what they did, when it happened, what data changed, and why that person had the right to do it. However, many systems log only the “what” and then hope humans can fill in the rest later. That hope fades fast during an inspection.
A practical audit record usually needs a few ingredients, kept in plain language and stored so it can be searched: who acted and under what role, what they did, when it happened, what data changed (summarized, not copied), and why the actor was authorized to make the change.
Moreover, auditors often ask for proof that logs cannot be quietly edited after the fact. That does not require exotic tools, but it does require discipline: restricted access, separate storage, and clear retention rules. For sensitive data, the log should capture intent without repeating private content, so privacy and compliance do not fight each other.
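As a sketch of what such a record might look like, here is a minimal Python example. The field names are illustrative, and the hash chaining is just one simple way to make quiet after-the-fact edits detectable; it is not a compliance requirement or a substitute for restricted access and separate storage:

```python
import hashlib
import json
from datetime import datetime, timezone

def make_audit_record(prev_hash, actor, action, entity, change, authorization):
    """Build one audit entry capturing who / what / when / which data / why-authorized."""
    record = {
        "actor": actor,                  # who did it
        "action": action,                # what they did
        "entity": entity,                # which record was touched
        "change": change,                # what changed, summarized rather than copied
        "authorization": authorization,  # why this person had the right
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "prev_hash": prev_hash,          # chains this entry to the previous one
    }
    payload = json.dumps(record, sort_keys=True).encode()
    record["hash"] = hashlib.sha256(payload).hexdigest()
    return record

def verify_chain(records):
    """Recompute every hash; a quiet edit breaks the chain from that entry onward."""
    for i, rec in enumerate(records):
        body = {k: v for k, v in rec.items() if k != "hash"}
        payload = json.dumps(body, sort_keys=True).encode()
        if hashlib.sha256(payload).hexdigest() != rec["hash"]:
            return False
        if i > 0 and rec["prev_hash"] != records[i - 1]["hash"]:
            return False
    return True
```

Because the change is summarized rather than copied, the log captures intent without repeating private content, which is the privacy balance the paragraph above describes.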
Release control belongs in the audit story, too. When a change goes live, the app should record which version ran, who promoted it, and which approvals were attached. In finance, even broad market oversight depends on traceability at scale; for example, the needs of market surveillance systems show how a missing breadcrumb can turn a routine question into an investigation.
Modern apps also inherit risk from third-party parts, which is easy to miss when modernizing a legacy codebase. That is why more teams keep a lightweight inventory of dependencies, sometimes described as a software bill of materials, so it stays clear what ships with each release and what requires patching when a vulnerability appears.
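A minimal version of that dependency inventory can be generated from the running environment itself. This is an illustrative Python sketch using the standard library's `importlib.metadata`; attaching its output to each release record is an assumption for the example, not a prescribed tool:

```python
import importlib.metadata

def dependency_inventory():
    """Snapshot installed distributions as a small, diffable list of name/version pairs."""
    deps = [
        {"name": dist.metadata["Name"], "version": dist.version}
        for dist in importlib.metadata.distributions()
    ]
    # Sort so two snapshots of the same environment always compare equal.
    return sorted(deps, key=lambda d: (d["name"] or "").lower())
```

A real software bill of materials carries more detail (licenses, hashes, transitive sources), but even this level answers the question the paragraph raises: what shipped with this release, and what needs patching when a vulnerability appears.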
In practice, app modernization services can help by setting these patterns early, before teams start moving data and rewriting screens. If logging, versioning, and retention get bolted on at the end, the result usually feels like a second app taped to the first one.
Approvals often get treated like a tax on speed. In practice, approvals can be quick when the process is clear and the risk level matches the control. Therefore, a good modernization plan separates changes into groups, with different paths for each group.
Low-risk work, like wording updates or safe reporting, can move with a quick review and automatic documentation. Medium-risk changes, like pricing rules or patient workflows, typically require a second set of eyes and better test proof. High-risk changes, like permissions or financial postings, may require formal sign-off from compliance or quality, and a clear separation between the builder and the approver.
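One way to make those tiers mechanical is to encode them as data that the release tooling checks. This is a hypothetical sketch; the tier names and rules are invented for illustration and would come from the organization's own change policy:

```python
# Hypothetical risk tiers; real categories come from the org's change policy.
APPROVAL_RULES = {
    "low":    {"reviewers": 1, "compliance_signoff": False, "separate_approver": False},
    "medium": {"reviewers": 2, "compliance_signoff": False, "separate_approver": True},
    "high":   {"reviewers": 2, "compliance_signoff": True,  "separate_approver": True},
}

def check_release(risk, author, approvers, compliance_ok):
    """Return (allowed, reason) for a proposed release under its tier's rules."""
    rules = APPROVAL_RULES[risk]
    if rules["separate_approver"] and author in approvers:
        return False, "builder and approver must be different people"
    if len(approvers) < rules["reviewers"]:
        return False, f"needs {rules['reviewers']} approver(s), got {len(approvers)}"
    if rules["compliance_signoff"] and not compliance_ok:
        return False, "this tier requires compliance sign-off"
    return True, "approval path satisfied"
```

The point of encoding the rules this way is that every check runs inside the workflow, so the outcome lands in the record automatically rather than in a side conversation.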
The trick is to keep approvals visible in the tools people already use. When approvals live in side conversations, nobody can prove them later. When they live in a single, controlled workflow, every step becomes part of the record.
A practical release path often looks like this: the change is built and tested, approvals matching its risk tier are attached to the exact version, someone other than the builder promotes it, the app records the version and its approvals at go-live, and rollback stays one step away.
An app modernization company should also treat access as part of release control. If anyone can bypass the release path, the whole story breaks. Therefore, roles and permissions need to be clear, time-bound where possible, and reviewed on a regular schedule.
This is also where partners matter. When a professional company like N-iX joins a regulated program, a helpful pattern is to agree early on how approvals, evidence, and handoffs work, because the process gets tested during the first urgent fix, not during a calm planning week.
Regulated app modernization works when audit trails, approvals, and release control get designed as one connected system. Logs should tell a clear story about actions and releases while staying searchable and respectful of privacy. Approvals should reflect the risk and be tied to the exact version that gets released. Release control should reward the right path and make rollback a normal safety move, not a last resort. Do that, and teams can ship updates confidently, stay ready for inspections, and avoid the usual scramble.
Photo: Mohamed_hassan via Pixabay.
The post App Modernization in Regulated Industries: Audit Trails, Approvals, and Release Control appeared first on DCReport.org.
Someone tries to remote control his own DJI Romo vacuum, and ends up controlling 7,000 of them from all around the world.
The IoT is horribly insecure, but we already knew that.
A team largely composed of economics majors who know their way around Milton Friedman and Gary Becker, Chicago (23-4) is a DIII powerhouse currently in the DIII Sweet 16 and chasing its first-ever NCAA national title.
“Nobody’s ever going to confuse this with Alabama football,” says head coach Mike McGrath, “but if you think about the student-athlete model, I think we show you can do both of those things very, very well.”
…“Obviously, the kids are really smart,” he says. “You can’t B.S. them. They’re going to challenge everything that you tell them, you have to be prepared for that…there’s a need to understand the why behind things.”
…a friend of the program, Chicago professor John List, is working with students on an analysis of player positioning.
Here is more from the WSJ, via Rama Rao.
The post University of Chicago fact of the day appeared first on Marginal REVOLUTION.
Some controversies are familiar all over the world.
The NYT has the story:
The Faroe Islands Are Changing Some of Europe’s Strictest Abortion Rules
A new law allowing abortion up to 12 weeks will be a major shift in an archipelago of 55,000 people, and there are strong feelings on both sides. By Amelia Nierenberg and Regin Winther Poulsen
"The Faroes, a self-governing part of the Kingdom of Denmark in the North Atlantic hundreds of miles from Copenhagen, allowed abortion only in rare cases.
...
"The Faroes have had a near-total abortion ban, one of Europe’s most restrictive, under a law that dates back to 1956. Like Ms. Jacobsen, some women lied to their doctors to get around the restrictions and end their pregnancies, doctors, lawmakers and advocates on both sides of the issue have said.
...
"But late last year, the Parliament in the archipelago of 55,000 people ratified a law that allows women to end a pregnancy within its first 12 weeks, a major shift in a place that has long been more religious and socially conservative than its Nordic peers. The law is set to take effect in July.
...
"But a parliamentary election is set for late March and polls suggest that power could pass to a conservative coalition that may try to block implementation of the law or change it."

We all want longevity without compromising on quality of life. This 87-year-old achieves both with a daily running habit
- by Aeon Video

As the 18th-century war between mechanism and romanticism returns, we face a new question: can we build artificial souls?
- by Peter Wolfendale
Autoresearching Apple's "LLM in a Flash" to run Qwen 397B locally
Here's a fascinating piece of research by Dan Woods, who managed to get a custom version of Qwen3.5-397B-A17B running at 5.5+ tokens/second on a 48GB MacBook Pro M3 Max despite that model taking up 209GB (120GB quantized) on disk.
Qwen3.5-397B-A17B is a Mixture-of-Experts (MoE) model, which means that each token only needs to run against a subset of the overall model weights. These expert weights can be streamed into memory from SSD, saving them from all needing to be held in RAM at the same time.
Dan used techniques described in Apple's 2023 paper LLM in a flash: Efficient Large Language Model Inference with Limited Memory:
This paper tackles the challenge of efficiently running LLMs that exceed the available DRAM capacity by storing the model parameters in flash memory, but bringing them on demand to DRAM. Our method involves constructing an inference cost model that takes into account the characteristics of flash memory, guiding us to optimize in two critical areas: reducing the volume of data transferred from flash and reading data in larger, more contiguous chunks.
He fed the paper to Claude Code and used a variant of Andrej Karpathy's autoresearch pattern to have Claude run 90 experiments and produce MLX Objective-C and Metal code that ran the model as efficiently as possible.
danveloper/flash-moe has the resulting code plus a PDF paper mostly written by Claude Opus 4.6 describing the experiment in full.
The final model has the experts quantized to 2-bit, but the non-expert parts of the model, such as the embedding table and routing matrices, are kept at their original precision, adding up to 5.5GB that stays resident in memory while the model is running.
Qwen 3.5 usually runs 10 experts per token, but this setup dropped that to 4, after finding that the biggest quality drop-off occurred at 3.
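The routing idea behind that tradeoff is easy to sketch. The following is an illustrative NumPy version of top-k expert selection, not Dan's actual MLX/Objective-C/Metal code; the names and shapes are invented for the example:

```python
import numpy as np

def route_tokens(hidden, router_weights, k=4):
    """Pick the top-k experts per token from router logits (softmax gating).

    Fewer active experts (smaller k) means fewer expert weight matrices must be
    resident in RAM, or streamed in from SSD, for each token.
    """
    logits = hidden @ router_weights                      # (tokens, num_experts)
    top_k = np.argpartition(logits, -k, axis=-1)[:, -k:]  # indices of the k best experts
    # Normalize gate weights over just the selected experts.
    sel = np.take_along_axis(logits, top_k, axis=-1)
    gates = np.exp(sel - sel.max(axis=-1, keepdims=True))
    gates /= gates.sum(axis=-1, keepdims=True)
    return top_k, gates
```

In a flash-streaming setup like the one described above, `top_k` is what decides which expert weights need to be fetched from disk before the token can be processed.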
It's not clear to me how much the quality of the model's results is affected. Claude claimed that "Output quality at 2-bit is indistinguishable from 4-bit for these evaluations", but the description of the evaluations it ran is quite thin.
Update: Dan's latest version upgrades to 4-bit quantization of the experts (209GB on disk, 4.36 tokens/second) after finding that the 2-bit version broke tool calling while 4-bit handles that well.
Tags: ai, generative-ai, local-llms, llms, qwen, mlx
I was intending to take tonight off, but there’s big news—I mean, aside from all the other big news—that I want to make sure gets attention.
Back on February 23, Daniel Ruetenik, Pat Milton, and Cara Tabachnick of CBS News reported on a newly uncovered document in the Epstein files showing that beginning in December 2010 under the Obama administration, the U.S. Drug Enforcement Administration (DEA) was running an investigation of Jeffrey Epstein and fourteen other people for drug trafficking, prostitution, and money laundering.
The document showed the investigation, called “Chain Reaction,” was still underway in 2015. But the investigation disappeared, although the document suggested that it was a significant investigation and that the government was on the verge of indictments.
As soon as the story broke, Senator Ron Wyden of Oregon, the top-ranking Democrat on the Senate Finance Committee, said: “It appears Epstein was involved in criminal activity that went way beyond pedophilia and sex trafficking, which makes it even more outrageous that [Attorney General] Pam Bondi is sitting on several million unreleased files.”
Wyden has been investigating the finances behind Epstein’s criminal sex-trafficking organization: it was his investigation that turned up the information that JPMorgan Chase neglected to report more than $1 billion in suspicious financial transactions linked to Epstein. Wyden has pushed hard for Treasury Secretary Scott Bessent to produce the records of those suspicious transactions for the Senate Finance Committee, but Bessent refuses.
On February 25, two days after the story of the DEA investigation broke, Wyden wrote to Terrance C. Cole, administrator of the DEA, noting that “[t]he fact that Epstein was under investigation by the DOJ’s [organized crime drug enforcement] task force suggests that there was ample evidence indicating that Epstein was engaged in heavy drug trafficking and prostitution as part of cross-border criminal conspiracy. This is incredibly disturbing and raises serious questions as to how this investigation by the DEA was handled.”
He noted that Epstein and the fourteen co-conspirators were never charged for drug trafficking or financial crimes, and wrote: “I am concerned that the DEA and DOJ during the first Trump Administration moved to terminate this investigation in order to protect pedophiles.” He also noted that the heavy redactions in the document appear to go far beyond anything authorized by the Epstein Files Transparency Act and that since the document was not classified, “there is no reason to withhold an unredacted version of this document from the U.S. Congress.”
Wyden asked Cole to produce a number of documents by March 13, 2026, including an unredacted copy of the memo in the files, information about what triggered the investigation, what types of drugs Epstein and his fourteen associates were buying or selling, when Operation Chain Reaction concluded and what was its result, why no one was charged, and why the names of the fourteen co-conspirators were redacted.
Today Wyden sent a letter to Deputy Attorney General Todd Blanche, Trump’s former personal lawyer, saying: “It is my understanding that shortly after I requested an unredacted copy” of the document in the Epstein files, the Department of Justice “stepped in to prevent DEA from complying with my request. According to a confidential tip received by my staff, DEA Administrator Terry Cole was ready to provide an unredacted copy of the memorandum, but you stepped in to prevent him from doing so. My staff inquired with the DEA about the status of the production of this document and the DEA responded by directing questions to your office.”
The letter continued: “Your alleged interference in this matter is highly disturbing, not just because it continues the DOJ’s long-running obstruction of my investigation, but also because of your bizarrely favorable treatment of Ghislaine Maxwell, one of Epstein’s closest criminal associates. I should not have to explain the significance of the fact that Epstein was a target of [this high-level DEA] investigation. It suggests the government had ample evidence indicating he was engaged in large scale drug trafficking and prostitution as part of cross-border criminal conspiracy and that Epstein was likely pumping his victims, including underage girls, with incapacitating drugs to facilitate abuse. I am at a loss to understand why you are blocking further investigation of this matter.”
Noting that the document in the files was “clearly marked as ‘unclassified’ at the top of every single page,” Wyden noted: “There is absolutely no reason to withhold an unredacted version of this document from the U.S. Congress.” He added: “In order to assist my investigation into this matter, I demand that you immediately authorize the release of this document.”
Wyden also posted today on social media: “HUGE: Deputy Attorney General Todd Blanche—Trump’s former personal lawyer who was also responsible for Ghislaine Maxwell’s transfer to a cushy club fed—has intervened to block the DEA from providing details of a mysterious Epstein investigation to my Finance Committee team…. This is stunning interference. The document I’m after literally says ‘unclassified’ at the top. The investigation it details is closed. Given Blanche’s close personal ties to Donald Trump, this reeks of a continued coverup to protect key names in the Trump administration.”
Wyden’s post echoes the September 13, 2019, letter from then-chair of the House Intelligence Committee Adam Schiff (D-CA) to Acting Director of National Intelligence Joseph Maguire, in which Schiff called out Maguire for illegally withholding a whistleblower complaint.
In that 2019 letter, Schiff warned: “The Committee can only conclude…that the serious misconduct at issue involves the President of the United States and/or other senior White House or Administration officials. This raises grave concerns that your office, together with the Department of Justice and possibly the White House, are engaged in an unlawful effort to protect the President and conceal from the Committee information related to his possible ‘serious or flagrant’ misconduct, abuse of power, or violation of law.”
Schiff was right: the whistleblower had flagged Trump’s July 2019 phone call with newly elected Ukraine president Volodymyr Zelensky, demanding Zelensky smear Joe Biden’s son Hunter before Trump would release the money Congress had appropriated for Ukraine to fight off the Russian invasion that had begun in 2014. That information led to the story that Trump’s White House was running its own secret operation in Ukraine, apart from the State Department, for Trump’s own benefit. That story led to Trump’s first impeachment by the House of Representatives for abuse of power and obstruction of Congress.
Schiff was the lead impeachment manager of the impeachment trial in the Senate, and in his closing argument, he implored Senate Republicans to bring accountability to “a man without character.” “You will not change him. You cannot constrain him. He is who he is. Truth matters little to him. What’s right matters even less, and decency matters not at all.”
“You can’t trust this president to do the right thing. Not for one minute, not for one election, not for the sake of our country,” Schiff said. “You just can’t. He will not change and you know it.” “A man without character or ethical compass will never find his way.”
But Republican senators stood behind Trump. They acquitted him of abuse of power, by a vote of 48 for conviction to 52 for acquittal. Senator Mitt Romney of Utah crossed the aisle to vote with the Democratic minority. Senate Republicans were unanimous in their vote to acquit Trump of obstruction of Congress.
And here we are.
—
Notes:
https://www.cbsnews.com/news/jeffrey-epstein-files-dea-document-drug-trafficking-investigation/
https://www.bloomberg.com/news/newsletters/2025-11-07/reagan-era-crime-unit-officially-shut-down-by-doj
https://www.cbsnews.com/news/jeffrey-epstein-dea-drug-trafficking-investigation-senator-wyden/
https://s3.documentcloud.org/documents/6409559/20190913-Chm-Schiff-Letter-to-Acting-Dni-Re.pdf
https://www.finance.senate.gov/imo/media/doc/letter_from_senator_wyden_to_dag_todd_blanche.pdf
https://www.yahoo.com/news/f-ked-book-reveals-gop-110011623.html
Here is the audio, video, and transcript. Here is part of the episode summary:
Tyler and Harvey discuss how Machiavelli’s concept of fact was brand new, why his longest chapter is a how-to guide for conspiracy, whether America’s 20th-century wars refute the conspiratorial worldview, Trump as a Shakespearean vulgarian who is in some ways more democratic than the rest of us, why Bronze Age Pervert should not be taken as a model for Straussianism, the time he tried to introduce Nietzsche to Quine, why Rawls needed more Locke, what it was like to hear Churchill speak at Margate in 1953, whether great books are still being written, how his students have and haven’t changed over 61 years of teaching, the eclipse rather than decline of manliness, and what Aristotle got right about old age and much more.
Excerpt:
COWEN: From a Straussian perspective, where’s the role for the skills of a good analytic philosopher? How does that fit into Straussianism? I’ve never quite understood that. They seem to be very separate approaches, at least sociologically.
MANSFIELD: Analytic philosophers look for arguments and isolate them. Strauss looks for arguments and puts them in the context of a dialogue or the implicit dialogue. Instead of counting up one, two, three, four meanings of a word, as analytic philosophers do, he says, why is this argument appropriate for this audience and in this text? Why is it put where it was and not earlier or later?
Strauss treats an argument as if it were in a play, which has a plot and a background and a context, whereas analytic philosophy tries to withdraw the argument from where it was in Plato to see what would we think of it today and what other arguments can be said against it without really wanting to choose which is the truth.
COWEN: Are they complements or substitutes, the analytic approach and the Straussian approach?
MANSFIELD: I wouldn’t say complements, no. Strauss’s approach is to look at the context of an argument rather than to take it out of its context. To take it out of its context means to deprive it of the story that it represents. Analytic philosophy takes arguments out of their context and arranges them in an array. It then tries to compare those abstracted arguments.
Strauss doesn’t try to abstract, but he looks to the context. The context is always something doubtful. Every Platonic dialogue leaves something out. The Republic, for example, doesn’t tell you about what people love instead of how people defend things. Since that’s the case, every argument in such a dialogue is intentionally a bad argument. It’s meant for a particular person, and it’s set to him.
The analytic philosopher doesn’t understand that arguments, especially in a Platonic dialogue, can deliberately be inferior. It easily or too easily refutes the argument which you are supposed to take out of a Platonic dialogue and understand for yourself. Socrates always speaks down to people. He is better than his interlocutors. What you, as an observer or reader, are supposed to do is to take the argument that’s going down, that’s intended for somebody who doesn’t understand very well, and raise it to the level of the argument that Socrates would want to accept.
So to the extent that all great books have the character of this downward shift, all great books have the character of speaking down to someone and presenting truth in an inferior but still attractive way. The reader has to take that shift in view and raise it to the level that the author had. What I’m describing is irony. What distinguishes analytic philosophy from Strauss is the lack of irony in analytic philosophy. Philosophy must always take account of nonphilosophy or budding philosophers and not simply speak straight out and give a flat statement of what you think is true.
To go back to Rawls, Rawls based his philosophy on what he called public reason, which meant that the reason that convinces Rawls is no different from the reason that he gives out to the public. Whereas Strauss said reason is never public or universal in this way because it has to take account of the character of the audience, which is usually less reasonable than the author.
And yes, he does tell us what Straussianism means and how to learn to be a Straussian. From his discussion you will see rather obviously that I am not one. Overall, I found this dialogue to be the most useful source I have found for figuring out how Straussianism fits into other things, such as analytic philosophy, historical reading of texts, and empirical social science.
Perhaps the exchange is a little slow to start, but otherwise fascinating throughout. I am also happy to recommend Harvey’s recent book The Rise and Fall of Rational Control: The History of Modern Political Philosophy.
The post My excellent Conversation with Harvey Mansfield appeared first on Marginal REVOLUTION.


The town of Alice Springs lies near Australia’s geographic center, in a region often called the “Red Centre” for the rusty hue of its desert landscape. After weeks of heavy rainfall in February and March 2026, however, vast areas of desert and surrounding mountains turned lush and green.
The MODIS (Moderate Resolution Imaging Spectroradiometer) on NASA’s Terra satellite captured this image (right) of the southern part of Australia’s Northern Territory on March 10, 2026. For comparison, the left image shows the same area in January 2026, before the onset of heavy rains.
The area’s landscape typically appears red due to the oxidation of iron-rich rock. During periods of sufficient rainfall, water begins to flow in previously dry riverbeds, and dormant vegetation springs to life. February 2026 brought more than enough water to the Northern Territory for the transformation to occur—an area average of 239 millimeters (9 inches)—marking the territory’s third-wettest February on a record that dates back to 1900, according to the Bureau of Meteorology.
Beyond the transformation visible from above, the rainfall also caused disruptions on the ground. Thunderstorms earlier in the month produced enough rain to cause water levels on the Todd River and other area rivers to quickly rise, while flash flooding in Alice Springs uprooted trees and left some people stranded, according to news reports. Later in the month, heavy rains returned as another tropical low stalled over central Australia for nearly a week, causing flooding that prompted officials to declare a natural disaster.
As of late March, more extreme weather was on the way for Australia with the approach of Tropical Cyclone Narelle. Bureau of Meteorology forecasts called for severe storm impacts to reach northern Queensland by late on March 19 or March 20. Flooding watches and warnings also extended inland, including to Alice Springs, where past storms have already saturated river catchments.
NASA Earth Observatory image by Lauren Dauphin, using MODIS data from NASA EOSDIS LANCE and GIBS/Worldview. Story by Kathryn Hansen.
The post Australia’s “Red Centre” Turns Green appeared first on NASA Science.
Shubham Bose, “The 49MB Web Page”:
I went to the New York Times to glimpse at four headlines and was greeted with 422 network requests and 49 megabytes of data. It took two minutes before the page settled. And then you wonder why every sane tech person has an adblocker installed on systems of all their loved ones.
It is the same story across top publishers today.
This is an absolutely devastating deconstruction of the current web landscape. I implore you to pause here, and read Bose’s entire amply illustrated essay. I’ll wait.
Even websites from publishers who care about quality are doing things on the web that they would never do with their print editions. Bose starts with The New York Times, but also mentions The Guardian, whose web pages are so laden with ads and modals that their default layout, on a mobile device, sometimes leaves just 11 percent of the screen for article content. That’s four lines of article text.
Bose writes:
Viewability and time-on-page are very important metrics these days. Every hostile UX decision originates from this single fact. The longer you’re trapped on the page, the higher the CPM the publisher can charge. Your frustration is the product. No wonder engineers and designers make every UX decision that optimizes for that. And you, the reader, are forced to interact, wait, click, scroll multiple times because of this optimization. Not only is it a step in the wrong direction, it is adversarial by design.
The reader is not respected enough by the software. The publisher is held hostage by incentives from an auction system that not only encourages but also rewards dark patterns.
I disagree only insofar as the reader isn’t respected at all. Part of my ongoing testing of the MacBook Neo is that I’ve been using it in as default a state as possible, only changing default settings, and only adding third-party software, as necessary. So I’ve been browsing the web without content-blocking extensions on the Neo. It’s been a while since I’ve done that for an extended period of time. Most of the advertising-bearing websites I read have gotten so bad that it’s almost beyond parody.
And even with content blockers installed (of late, I’ve been using and enjoying uBlock Origin Lite in Safari), many of these news websites intersperse bullshit like requests to subscribe to their newsletters, or links to other articles on their site — often totally unrelated to the one you’re trying to read — every few paragraphs. And the fucking autoplay videos, jesus. You read two paragraphs and there’s a box that interrupts you. You read another two paragraphs and there’s another interruption. All the way until the end of the article. We’re visiting their website to read a fucking article. If we wanted to watch videos, we’d be on YouTube. It’s like going to a restaurant, ordering a cheeseburger, and they send a marching band to your table to play trumpets right in your ear and squirt you with a water pistol while trying to sell you towels.
No print publication on the planet does this. The print editions of the very same publications — The New York Times, The Guardian, The Wall Street Journal, The Atlantic, The New Yorker — don’t do anything like this. The print edition of The New Yorker could not possibly be more respectful of both the reader’s attention and the sanctity of the prose they publish. But read an article on their website and you get autoplaying videos interspersed between random paragraphs. And the videos have nothing to do with the article you’re reading. I mean, we should be so lucky if every website were as respectfully designed as The New Yorker’s, but even their website — comparatively speaking, one of the “good ones” — shows only a fraction of the respect for the reader that their print edition does.
Without an ad-blocking content blocker running, one of the most crazy-making design patterns today is repeating the exact same ad within the same article, every few paragraphs. It’s hard to find a single article on Apple News — a sort of ersatz pidgin version of the web — that does not do this. The exact same ad — 6, 7, 8 times within the same article. How many 30-something blonde white women need hearing aids? It’s insane.
People are spending less and less time on the web because websites are becoming worse and worse experiences, but the publishers of websites are almost literally trying to dig their way out of that hole by adding more and more of the reader-hostile shit that is driving people away. The Guardian screenshot Bose captured, where only 11 percent of the entire screen shows text from the article, is the equivalent of a broadcast TV channel that only showed 7 minutes of actual TV content per hour, devoting the other 53 minutes to paid commercials and promotions for other shows on the same channel. Almost no one would watch such a channel. But somehow this strategy is deemed sustainable for websites.
The web is the only medium the world has ever seen where its highest-profile decision makers are people who despise the medium and are trying to drive people away from it. As Bose notes, “A lot of websites actively interfere the reader from accessing them by pestering them with their ‘apps’ these days. I don’t know where this fascination with getting everyone to download your app comes from.” It comes from people who literally do not understand, and do not enjoy, the web, but yet find themselves running large websites.
The people making these decisions for these websites are like ocean liner captains who are trying to hit icebergs.
Special guest David Pogue discusses his excellent and amazingly comprehensive new book, Apple: The First 50 Years.
Sponsored by: