On International Workers’ Day, Which AI Would You Choose?
One modeled after Robert Reich — or Pete Hegseth?
It started, as many things have lately, with a reconnection.
A few days ago, I found myself reaching out to a young man I’d first met more than a decade ago, when I was CTO of a nonprofit called Junior State of America. He was in Ohio at the time, burning with the kind of civic energy you rarely see in someone that age, working on ways to use technology to get young people more engaged with their democracy. I did what little I could to encourage and facilitate his work. You don’t forget a kid like that.
Today, Michael Lahanas-Calderón is the Chief Strategist at Inequality Media, the nonpartisan digital media organization co-founded in 2015 by former U.S. Secretary of Labor Robert Reich and filmmaker Jacob Kornbluth. When I heard about what Michael was doing now, I felt that particular satisfaction of watching a seed you barely watered somehow grow into something real.
But it also got me thinking. What if we could design an AI that reasoned the way Robert Reich does? And what would it look like if we built one patterned instead after, say, Pete Hegseth?
I know that sounds like an odd hypothetical. But I don’t think it is.
What Gets Poured Into the Machine
Before I explain why, let me say a word about how AI systems actually take on personality.
Most people interact with AI as if they’re talking to a neutral tool, something like a very fast calculator that can also write poetry. But that’s not quite right. AI systems are trained on human writing and human speech — the accumulated record of how we think, argue, teach, persuade, and deceive. And one of the most direct ways human voices shape how a model behaves is something called a system prompt.
A system prompt is a set of instructions that shapes how an AI behaves in a specific context, before you ever say a word to it. Think of it as the AI’s standing orders. These prompts, invisible to most users, are enormously powerful. They can make a model that’s fundamentally the same underlying technology act very differently depending on who configured it and with what values in mind.
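To make that concrete, here is a minimal sketch of what “same technology, different standing orders” looks like in practice. It assumes the general shape of a chat-model API request (a model name, a separate system field, a list of user messages); the model name and both system-prompt texts below are invented for illustration, not real deployed prompts.

```python
def build_request(system_prompt: str, user_message: str) -> dict:
    """Assemble a chat request. Note that the system prompt rides
    alongside the user's message, not inside it -- the user never
    sees or types it."""
    return {
        "model": "example-model-v1",  # same underlying model either way
        "system": system_prompt,       # the standing orders
        "messages": [{"role": "user", "content": user_message}],
    }

question = "Help us plan a 10 percent reduction in headcount."

# Same model, same question -- only the standing orders differ.
request_a = build_request(
    "For every recommendation, surface who bears the risk and the cost.",
    question,
)
request_b = build_request(
    "Maximize throughput. Treat headcount purely as a cost line.",
    question,
)

assert request_a["model"] == request_b["model"]      # identical technology
assert request_a["system"] != request_b["system"]    # different values
```

The two requests are byte-for-byte identical except for that one field, which is the point: the values live in configuration that most users never see.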
Anthropic — the company that makes Claude, the AI assistant I use — has been distinctive in publishing its constitution openly. The document is over eighty pages, written in plain language, and released under a Creative Commons license so anyone can read it. It establishes a hierarchy of priorities: Claude should be broadly safe first, then broadly ethical, then compliant with Anthropic’s specific guidelines, and then genuinely helpful. It reads less like a technical document and more like a letter — a careful, philosophically serious attempt to say this is who we want this being to be.
OpenAI has published something related — a Model Spec, updated and publicly available — and Google has long published a set of AI Principles. So the field has moved toward more transparency than I once expected. But Anthropic’s constitution remains distinctive in depth and tone. It reads as if the authors actually wrestled with the ethics, rather than handing the job to a communications team to draft bullet points.
The point is: these documents matter. They’re the philosophical substrate of the machine. And they come from people. The humans who write and refine those documents, who select the training data, who decide which values rank above which other values — those humans are shaping something that will, in turn, shape us and our economy for years to come.
Which brings me back to my question.
The Secretary of War
I’m going to be ungenerous for a moment, and I think the occasion justifies it.
Pete Hegseth — Secretary of Defense, or as he and the administration prefer, the Secretary of War — has a pretty clear set of guiding principles. DEI is not only wrong, in his view; it’s dangerous. Diversity, equity, and inclusion represent, in his telling, a weakening of the essential purpose of institutions. The military’s job is to produce warriors. Warriors are not diverse committees. They are, in his framework, lean and lethal — and lethality doesn’t require a lot of hand-wringing about who gets included.
He’s called “our diversity is our strength” one of the dumbest phrases in military history. He eliminated DEI offices on his first day in office. He’s fired top women leaders. He’s announced ten new directives aimed at moving the department away from what he calls “woke garbage” and toward a “warrior ethos.” He believes, and has put in writing, that irreconcilable differences between left and right in America will lead to conflict that cannot be resolved through the political process.
Now imagine an AI system built — explicitly or subtly — around those principles. Not a hypothetical evil AI, not a cartoon villain, just a system shaped by the same values: merit defined narrowly, inclusion treated as a liability, strength defined as a willingness to dominate. This is not far-fetched. AI systems are being built right now for defense applications. Palantir, whose CEO Alex Karp has published a manifesto calling on Silicon Valley to fulfill its “moral debt” by arming the national security state, is already a primary conduit through which the Pentagon uses large language models.
And here’s where this stops being abstract.
Who Gets to Keep Their Job?
Companies across the economy are already automating work and reducing headcount. The process is accelerating. According to recent data from the Dallas Federal Reserve, employment in the computer systems design sector has declined roughly five percent since ChatGPT’s release in late 2022. White-collar workers — knowledge workers, in Peter Drucker’s phrase — are increasingly in the crosshairs.
I’ve written in these pages before about what it felt like to be fired on a Zoom call. The efficient, clinical impersonality of it. The way years of work collapsed into a script and a follow-up email with the details.
As companies automate, they don’t just cut jobs randomly. They make choices. Who stays? Who is worth retaining? What kinds of workers, what kinds of thinking, what kinds of people, does the automated company of the future value?
If the AI tools making those recommendations — HR screening systems, workforce planning platforms, talent analytics — are shaped by values that treat inclusion as weakness and warrior-like productivity as the highest virtue, then we already know the answer. The people who get to keep their jobs will be the ones who score high on whatever that system defines as strength. And the people who get cut will be disproportionately the same people who have always been cut: the ones whose contributions are harder to quantify, whose value is relational rather than transactional, whose way of being in the world doesn’t fit the narrow template.
This is not paranoia. It’s the predictable output of systems shaped by particular values, running at scale.
What Would a Reich-Aligned AI Do Differently?
Robert Reich served as Secretary of Labor under Bill Clinton from 1993 to 1997, having earlier served in the administrations of Gerald Ford and Jimmy Carter. Time once named him one of the ten most effective cabinet secretaries of the twentieth century. He’s spent his career at Harvard, Brandeis, and UC Berkeley, written eighteen books, and for the past decade has led Inequality Media. The organization’s videos have been viewed more than a billion times. Its core mission: translate complex economics into digestible truth, and help people understand why inequality is not just an injustice but a structural failure.
Reich’s guiding philosophy is not complicated to summarize. Economic power and political power are inseparable. When wealth concentrates beyond a certain point, democracy corrodes — not because of bad intentions, but because concentrated power rewrites the rules. The antidote is not charity. It’s structure: strong labor rights, progressive taxation, enforced antitrust, a government that sees its role as balancing power rather than enabling its concentration. He’s been making this argument for forty years, and the arc of recent history has been pretty kind to his thesis.
Now imagine an AI shaped by those principles. Not an AI that lectures you about inequality — that would be tedious, and Reich himself never does it that way — but an AI whose underlying value structure, when it encounters a workforce planning question, asks: who gets left behind when this decision is made, and does that matter? An AI that, when helping a company reduce headcount, surfaces not just cost savings but second-order effects: who bears the risk, who bears the cost, what happens to the people at the bottom of the org chart, what happens to the town the factory was in.
Would it slow things down? Probably sometimes. Would it produce different recommendations? Yes. Would those recommendations occasionally turn out to be better — not just more equitable but more accurate, because they account for costs that the warrior-ethos AI would simply externalize onto the people who can least afford to bear them?
I think so. I genuinely think so.
The UBI Detour, and Why It Isn’t Enough
I want to briefly address the counterargument that usually arrives at this point in conversations about AI and labor: “Relax. We’ll just have UBI.”
Universal Basic Income has real advocates and real merit. The idea that governments should tax the companies extracting value from automation and redistribute that value to displaced workers is not crazy — it’s arguably the only way to maintain a consumer economy when consumers are being replaced by machines.
But Howard Marks, not exactly a left-wing economist, said something about UBI that I haven’t been able to get out of my head: financial support alone will not replace the psychological and social benefits of employment. Work gives you a sense of identity. It structures your time. It puts you in a relationship with other people. It gives you a reason to get up in the morning that’s connected to something outside yourself.
I know this from the inside. I know what it feels like to have the Zoom call, to have the script delivered, to become a line item. I also know what it feels like in the middle of a Tuesday when the next thing hasn’t arrived yet and the house is quiet and the dog is looking at you the way he does when he wonders why you’re not doing whatever it is you usually do. Money alone doesn’t fix that.
So even in a world with UBI — which we don’t have yet, and may never have at a meaningful scale — people still need work. At least for most of their adult lives. At least for now. The question of who gets to work, and on what terms, and making what contributions, is not a footnote to the AI question. It is the AI question.
And if the only people who have jobs in the automated economy are the warriors — the ones who score high on the Hegseth index of productivity and dominance — where does that leave everyone else? Not just economically, but as people?
A Quiet Conversation in Colorado
Somewhere in the middle of a drive last year, Paco and I rolled into a diner on the eastern edge of Colorado. It was the kind of place where the coffee comes in a thick mug and the waitress calls you “hon” without it ever feeling performative.
I asked her, idly, whether she worried about being replaced by a kiosk. She laughed, the kind of laugh that has a whole worldview baked into it.
“Hon,” she said, “they tried that already. People didn’t like it. They wanted somebody to tell their day to.”
I think about that a lot now. The work the waitress does is not, on a flowchart, complicated. Take order. Bring food. Make change. An automation consultant could draw up a slide deck explaining why she’s redundant. A warrior-ethos AI optimizing for throughput would put a kiosk in her place tomorrow morning and call it a productivity gain.
But what a waiter or waitress actually does — the listening, the noticing, the small repairs to people’s days — is most of the value of the diner. Strip that out and you have a vending machine in a building. The numbers might look better for a quarter. The town would be a little lonelier.
A Reich-aligned AI would notice that. A Hegseth-aligned AI would not. That’s the entire argument of this essay, in one mug of coffee.
On International Workers’ Day, which AI would you choose?
I know which one I would choose. And it’s not the one built by the Secretary of War. It’s the one trained to notice the people at the bottom of the org chart, the diner waitress who listens to your problems, the kid whose contribution doesn’t quite fit the template but who might go on to become Chief Strategist to a former Labor Secretary — and to count them as part of the answer, not as rounding error.
Arthur Morgan is writing a memoir about an 8,000-mile road trip he took with a dog named Paco in search of something resembling American redemption. He lives in Northern California, but travels in search of the real America whenever he can.