Knowledge Is Now Cheap. Judgment Is the New Premium

The AI economy is not replacing professionals. It is sorting them. And the filter is more brutal than most people in advisory roles currently appreciate.

The View from Inside the Machine

Earlier this week I sat down with Karl Redenbach, CEO of AgenticScale AI, at the Harvard Club of New York. Karl is not an AI commentator. He builds the private infrastructure that allows enterprises to design, govern, and deploy autonomous systems at scale. His clients span energy, financial services, and education. His technology partners include NVIDIA and Microsoft. When he offers a view on which careers survive the automation wave, it carries a different authority than the think-piece circuit. He is deploying the systems that will do the sorting.

His thesis, stated plainly, was this: coding, software engineering, and process-based roles face serious structural pressure. Advisory, communications, positioning, and revenue-driven careers are the ones that last. It is a view you hear in various forms right now. But hearing it from someone building the tools that do the displacing gives it a different weight entirely.

I think he is largely right. But the argument needs sharpening. Because the trap most professionals fall into is assuming their category is safe. It is not categories that survive. It is the specific layer of work within a category that requires human judgment under conditions of genuine uncertainty.

What the Data Actually Shows

The compression is not coming. It is already here.

Goldman Sachs research estimated that generative AI could expose the equivalent of 300 million full-time jobs to automation, with the highest concentration of risk sitting in white-collar, task-based roles. BLS occupational outlook data shows the fastest automation exposure in roles defined by structured inputs and predictable outputs. These are not factory floor jobs. They are analyst roles, administrative functions, junior legal review, procurement coordination, and entry-level financial reporting.

The productivity numbers in software development tell a sharper story. GitHub Copilot surpassed 1.8 million paid users by 2025, with engineering teams reporting productivity gains of 30 to 50 percent on routine development work. Startups that required fifteen engineers five years ago now launch with two or three. The labor math changes entirely when one developer with AI support delivers what a full team once required.

The most revealing signal, however, is not in surveys. It is in behavior. A pattern visible across corporate earnings calls and hiring disclosures shows firms freezing junior headcount while simultaneously expanding AI tooling budgets. That is revealed preference. Companies are not waiting to see how this plays out. They are already restructuring around it.

The Trap Inside the Safe Category

Advisory is not uniformly safe. It contains two very different layers, and only one of them is actually defensible.

The production layer covers slide decks, earnings drafts, monitoring reports, research summaries, media lists, and first-pass messaging documents. It faces the same automation pressure as any process role. These tasks have structured inputs and predictable outputs, and AI performs them cheaply and increasingly well. The professionals who built careers primarily around producing this material face real displacement risk regardless of what sector they sit in.

The judgment layer is different. This is the work that requires synthesizing ambiguous information, navigating relationships that carry genuine reputational stakes, framing a narrative under conditions where the facts are incomplete, and making a call when the right answer is genuinely unclear. AI assists with this work. It does not replace the person making the decision.

The distinction that matters is not what industry you work in. It is whether, once you strip away everything AI can now handle, there is anything left that only you can do.

Most mid-level advisory careers blend both layers. The honest question for any professional right now is how much remains once the production layer goes. If the answer is thin, the category label provides false comfort.

Capital Markets: Where Perception Moves Price

Capital markets advisory sits in one of the more defensible positions in this environment, and I say that with full awareness of my own professional interest in the claim being true.

The reasons are structural. Companies always need capital. Investor perception moves valuation in ways that are nonlinear and often irrational, which means shaping that perception requires judgment, not just process. Regulatory environments remain genuinely complex and continue to evolve. Reputational crises affect stock price directly and immediately, which means the stakes of getting the narrative wrong are quantifiable and visible to the board.

Clients in this space still pay for access to investors, strategic positioning ahead of a transaction, and differentiated insight that changes how a situation is framed. They have largely stopped paying for slide production, earnings summary assembly, and templated messaging. The headcount compresses. The bar for what justifies a senior relationship rises. The professionals who survive are those whose value concentrates in the judgment layer and whose networks cannot be replicated by a model.

Crisis Communications: When the Clock Is Running

The same pattern holds in crisis communications and reputational strategy, and it is worth examining because the mechanism is different enough to be instructive.

In a reputational crisis, the premium on human judgment is not merely about complexity. It is about speed, trust, and the irreversibility of getting it wrong. When a company faces a governance failure, a regulatory action, or a public narrative that is moving faster than the facts, the work that matters cannot be templated. The advisor in the room is synthesizing incomplete information, reading the emotional register of the board, anticipating how a statement lands with three different audiences simultaneously, and making a call that will be judged in retrospect.

AI can draft a holding statement in seconds. It cannot tell you whether to issue one at all, or what the second-order consequence of issuing it will be for the CEO’s relationship with the lead regulator, or whether the journalist on the other end of the call is going to use your comment or bury it. That judgment is built from pattern recognition accumulated over years of being in those rooms, and it is not transferable to a system.

The professionals who will be displaced in communications are those whose core output is production: monitoring reports, coverage summaries, distribution lists, first drafts of routine announcements. The ones who remain are those whose value is the call itself, and the relationship that gives the call credibility.

The pattern is consistent across advisory fields. Workforces compress. Junior roles disappear first. Senior roles survive but require higher skill density than they did before.

The Shift That Changes Everything

There is one principle underneath all of this that deserves to be stated as plainly as possible.

AI reduces the value of knowledge. It increases the value of judgment.

For most of the last century, professional advantage came from controlling access to information. The analyst who had the data, the advisor who had read everything, the lawyer who had memorized the case law. That advantage is largely gone. Information is available, synthesizable, and cheap to produce in structured form.

What remains scarce is the capacity to take ambiguous information, incomplete facts, and competing pressures and produce a decision or a frame that is actually right. That requires pattern recognition built over years, tolerance for uncertainty, accountability for outcomes, and the kind of relational trust that does not transfer to a tool.

The professionals who rise in this environment are not the ones who know the most. They are the ones who synthesize best, decide well, and carry enough weight in a room that the decision lands.

That profile sits at the intersection of three things: economic leverage, judgment under uncertainty, and network access. Careers with all three are not going anywhere. Careers built primarily on structured production, regardless of how they are labeled, face a harder road than most people in them currently appreciate.

Karl is building the infrastructure that will make this sorting happen faster than most expect. The people closest to the technology are not debating whether AI replaces human judgment. They have already moved past that question. They are building systems that assume it does not, and routing everything else accordingly.

What do you think?