Hedge funds live and die by edge: the ability to see something the market does not, size it correctly, and act before the window closes.

The first paper in this series explained what the Claude Code architecture is and why it will displace SaaS AI. The second examined its mechanism: joint search across two fundamentally different knowledge structures. This paper applies both to the specific domain where the architecture's properties matter most: hedge fund research.

The Claude Code architecture has structural properties that map directly to how edge is created and maintained. The architecture's specific design, when configured for the hedge fund research process, produces a categorically different kind of research output, one that enables work that was previously impossible.

The Hunting Party: Directed Adversarial Search

The centerpiece of the architecture applied to hedge funds is what we call the hunting party. A hunting party is a directed search with a kill mandate: go to a section of the coverage landscape, identify the consensus belief, decompose it into its load-bearing assumptions, and attack those assumptions with specific evidence.

The mandate is specific: identify what consensus is relying on and try to break it.

The critical design principle is path dependence. The attack is a chain. Each finding generates the next question. You do not know the third question until you have answered the second. The verdict emerges from where the path ends.

A concrete example, numbered to show the chain. (1) Enterprise AI adoption looks slow (14% in production). Bearish for semiconductor demand. (2) But license and deal growth is fast (Agentforce +50% QoQ, Copilot +160% YoY). Does this kill the bear case? (3) The system attacks the counterevidence: are licenses converting to production usage? (4) Usage metrics are thin (3.3% Copilot penetration, "not yet reflected in revenue"), and over 60% of deals are upsells to existing customers, not new adoption. The bull counter is weaker than it looks: enterprises are buying options, not deploying. (5) Back to the original thesis, but now on a stronger evidence chain.

Steps three through five only exist because step two happened. Step four is the real insight, and you never reach it if you stop at step two and call the analysis "balanced."
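The path dependence described above can be sketched as a loop in which each finding generates the next question. This is an illustrative sketch only; the function names (`hunt`, `investigate`, `next_question`) and the `Finding` structure are hypothetical, not part of the actual system.

```python
from dataclasses import dataclass

@dataclass
class Finding:
    question: str
    evidence: str

def hunt(initial_question, investigate, next_question, max_depth=10):
    """Path-dependent attack chain: each finding determines the next
    question. The chain cannot be parallelized, because question N+1
    does not exist until finding N is in hand."""
    chain = []
    question = initial_question
    for _ in range(max_depth):
        evidence = investigate(question)     # attack this step, gather evidence
        chain.append(Finding(question, evidence))
        question = next_question(chain)      # the finding generates the next question
        if question is None:                 # chain terminates; the verdict emerges here
            break
    return chain
```

The point of the sketch is the control flow: `next_question` takes the whole chain so far, which is why a parallel bull/bear format cannot reproduce it.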

This is the failure mode the architecture prevents: running bull and bear cases in parallel, weighing them side by side, and declaring "both sides have evidence." That is a debate format. It produces balanced verdicts that sound rigorous but are actually a failure to follow the scent.

Weighing does not produce conviction. Surviving does.

Conviction Through Survival

An assumption earns its verdict by surviving the full chain of attacks. You attack. If the assumption is still standing when the chain terminates, it has earned its place. That survival is the conviction. If it cracked under pressure, the crack is the finding. There is no middle ground.

Two verdicts only. Holds (attacked through the full chain, consensus survived) or Cracked (specific, concrete evidence that this assumption is vulnerable, and the counterevidence did not survive pressure). There is no "Inconclusive." If you attacked and could not crack it, it holds. The system does not get an escape hatch that avoids commitment.
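The two-verdict rule can be made precise with a small sketch. The names here are hypothetical; the point is that the type system itself has no "Inconclusive" member.

```python
from enum import Enum

class Verdict(Enum):
    HOLDS = "holds"      # attacked through the full chain; consensus survived
    CRACKED = "cracked"  # specific evidence broke the assumption under pressure

def verdict(attacks_exhausted: bool, crack_found: bool) -> Verdict:
    """No 'Inconclusive' escape hatch: if the full chain ran and no
    crack survived pressure, the assumption holds by definition."""
    if crack_found:
        return Verdict.CRACKED
    if attacks_exhausted:
        return Verdict.HOLDS
    raise ValueError("Chain not yet terminated: keep attacking.")
```

The design choice is that an unfinished chain is an error, not a third verdict: the only way to avoid commitment is to stop attacking, and the system is not allowed to stop.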

This is categorically different from parallel bull/bear analysis. Every experienced PM knows the problem with traditional research: the analyst produces a bear case and a bull case, each with supporting evidence, and the PM is left to weigh them. The weighing is where conviction should live, but the format makes conviction impossible. Both sides always have evidence. The work product describes the debate without resolving it.

The hunting party resolves because the chain forces resolution. When you find evidence against consensus, you must test that evidence. Testing means attacking the counterevidence. If the counterevidence survives your attack, the original thesis dies. If the counterevidence cracks under pressure, the original thesis strengthens. Either way, you keep going until you reach conviction.

New analysts weigh. Experienced analysts attack until conviction appears.

Joint Search: Why the PM's Mind Is Irreplaceable

The architecture creates a new form of collaboration between two structurally different kinds of knowledge.

The model's knowledge is weighted by prevalence: what appears frequently across all of human text is strongly connected. The PM's knowledge is weighted by consequence: what actually happened when real money was on the line. These are structurally different maps of the same territory. Neither is complete. Neither is reducible to the other.

A prevalence-weighted map shows you where everything is. A consequence-weighted map shows you what actually matters. The PM who lost $200 million on a thesis that looked perfect on paper has a node in their knowledge graph that no model can replicate. That node is judgment, forged by consequence. It fires when something in the model's output pattern-matches to the shape of that loss, even when the PM cannot articulate why.

In the hunting party process, this manifests concretely. The system runs the adversarial chain at machine speed. The PM reads the output in sixty seconds and catches what the system missed, because the PM's consequence-weighted experience flags dimensions the prevalence-weighted model would not prioritize.

The PM says: "You didn't attack on this dimension." Or: "I like this thread, press harder here." Or: "This crack reminds me of something I saw in 2018. Go check whether the same dynamic is present."

The system executes. The search objective has been rewritten by what the search itself revealed. The PM's recognition of something significant in the model's output, significant in light of decades of experience, generates a new direction the model follows into territory neither could have reached alone.

A better prompt or a longer context window cannot replicate this. It requires a live human mind with consequence-weighted knowledge operating in real time on the model's output.

The Three-Stage Pipeline

The hunting party is Stage 1 in a three-stage pipeline that maps directly to how hedge funds actually generate and execute ideas.

Stage 1: Signal generation. High-volume, low-commitment. Automated campaigns sweep terrain on rotation (sector by sector, thematic, event-driven). The PM dispatches targeted searches against specific suspicions. Many parties run. Most assumptions hold. A few crack. The PM scans reports in sixty seconds. The system does the volume work.

Stage 2: PM interrogation. The PM picks a crack and enters the conversation. The system carries the thesis from Stage 1. The PM interrogates it. This replicates how hedge funds actually work: adversarial critical thinking among humans carrying ideas. On a trading desk, it is not the idea that gets interrogated. It is the person carrying it. Here, the system carries the idea. The PM tests whether it holds.

The PM's skill is navigating the model's tendencies. Knowing when the system is defending with real evidence versus performing conviction. Knowing when a concession was warranted versus agreement bias. This is a craft that develops with practice. It is a returns driver.

Stage 3: Production. What survives Stage 2 becomes actionable. Full model variant, trade map, position sizing, research note. The system builds the deliverable. The PM approves.

The division of labor is structural. The system does the volume work (Stage 1) and the production work (Stage 3). The PM does the selection and judgment work (Stage 2). Neither can do the other's job well.
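The three stages and the division of labor can be summarized in a short sketch. Every name and signature here (`run_pipeline`, `hunt`, `pm_interrogate`, `build_deliverable`) is hypothetical; the sketch only fixes the structure: machine volume in Stage 1, human judgment in Stage 2, machine production in Stage 3.

```python
def run_pipeline(coverage, hunt, pm_interrogate, build_deliverable):
    """Three-stage pipeline sketch (hypothetical signatures).
    Stage 1: high-volume machine search across the coverage universe.
    Stage 2: the PM selects and interrogates the cracks.
    Stage 3: the system produces deliverables for what survives."""
    # Stage 1: many parties run; most assumptions hold, a few crack
    cracks = [r for name in coverage
                for r in hunt(name) if r["verdict"] == "cracked"]
    # Stage 2: the PM's judgment is the filter, not another model call
    survivors = [c for c in cracks if pm_interrogate(c)]
    # Stage 3: model variant, trade map, research note
    return [build_deliverable(s) for s in survivors]
```

Note that only Stage 2 takes a human in the loop; the other two stages are pure functions of the data, which is the structural division of labor the section describes.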

Speed, Correctly Understood

The architecture produces research at machine speed. A hunting party that would take an analyst days to execute runs in minutes. But speed is not the primary value. Speed is the enabler of something more important: research that was not previously possible.

Consider the operating rhythm. The system sweeps the coverage universe on rotation, attacking consensus on every name. When something cracks, the PM gets the evidence chain. The PM redirects in real time. Sixty seconds to read the output, catch what the system missed, redirect. The system executes again. This loop can iterate dozens of times in an hour.

No analyst team can do this. The volume of directed adversarial search required exceeds what humans can produce at any speed. An analyst covering twenty names spends 60-70% of their time on production rather than analysis: updating models, formatting outputs, pulling data. The remaining time is consumed by the names that are currently active, leaving the rest of the coverage universe unexamined.

The architecture inverts this. Coverage maintenance becomes automated infrastructure. Earnings calendar triggers a queue. The queue dispatches model rebuild agents overnight. The analyst wakes up to a diff showing what changed and what is newly attackable. Hunting parties run against warm, current data. The analyst's time is freed for the highest-value activity: Stage 2 interrogation of theses that survived the system's adversarial chain.
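The overnight maintenance loop described above can be sketched as follows. This is a stylized sketch under stated assumptions: the calendar format, the `rebuild_model` and `diff` hooks, and the queue discipline are all hypothetical stand-ins for whatever agents the firm actually wires up.

```python
import queue

def overnight_cycle(earnings_calendar, today, rebuild_model, diff):
    """Hypothetical sketch of automated coverage maintenance:
    the earnings calendar triggers a queue, rebuild agents run
    overnight, and the analyst wakes to a diff per name showing
    what changed and what is newly attackable."""
    q = queue.Queue()
    for name in earnings_calendar.get(today, []):   # calendar triggers the queue
        q.put(name)
    diffs = {}
    while not q.empty():
        name = q.get()
        old, new = rebuild_model(name)              # agent rebuilds on fresh filings
        diffs[name] = diff(old, new)                # what changed since last cycle
    return diffs
```

The output of this loop is the "warm, current data" the hunting parties run against, so the sweep never starts from stale models.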

The result is broader, deeper, and more adversarial research, conducted continuously across the entire coverage universe, with the PM's judgment applied precisely where it matters most.

Encoded Local Context: The Compounding Edge

Every hedge fund has institutional knowledge that exists nowhere in the model's training data. The specific way a PM reads management team body language on earnings calls. The pattern a sector analyst noticed across three earnings cycles that has not been written about. The thesis that looked perfect but failed because of a dynamic that only became visible in hindsight.

In the Claude Code architecture, this knowledge is encoded as reference files that the model reads before every interaction. Investment frameworks. Sector playbooks. Past thesis successes and failures. Pattern recognition distilled from decades of experience. This encoded context steers the model's inference away from generic consensus and toward the specific edges the firm has identified.
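Mechanically, "reference files the model reads before every interaction" can be as simple as concatenating a context directory into the prompt. A minimal sketch, assuming a flat directory of markdown files; the directory layout and file names are hypothetical.

```python
from pathlib import Path

def load_encoded_context(context_dir="context"):
    """Illustrative sketch: reference files (investment frameworks,
    sector playbooks, past thesis post-mortems) are concatenated and
    prepended to every interaction, steering inference away from
    generic consensus and toward the firm's specific edges."""
    parts = []
    for path in sorted(Path(context_dir).glob("*.md")):
        parts.append(f"## {path.stem}\n{path.read_text()}")
    return "\n\n".join(parts)
```

The simplicity is the point: the value is not in the plumbing but in what the files contain, which is exactly the knowledge the next paragraph argues the model cannot already have.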

The important point is what counts as valuable context. Most firms overvalue their data. The model has already read everything publicly available, probably more thoroughly than anyone at the firm has. The genuinely valuable local context is what the model cannot already have. The specific deal dynamics. The competitive insight only a few people have noticed. The institutional knowledge that has not been written down because nobody thought to.

This context compounds. Every hunting party report tells the system what held and what cracked. Every Stage 2 kill tells it what sounded good but did not survive PM scrutiny. Every successful trade tells it what the evidence chain looks like when the thesis is right. After four quarters, the system has a year of cross-sector observations. After two years, a richly interconnected map of the economy through the team's collective lens.

A new entrant with the same model does not have this accumulated substrate. A firm with a less capable model but a deeper substrate may outperform a firm with a more capable model starting cold.

The Democratization Problem

Bloomberg's ASKB is an impressive engineering achievement. But its business model requires democratization. Bloomberg sells the same terminal to 325,000 subscribers. ASKB makes every subscriber slightly better at the same things simultaneously.

For a hedge fund, this is the opposite of edge. If every fund running Bloomberg gets the same ASKB workflows on the same day, the insight window is zero. The analytical advantage is zero. Everyone does the same work slightly faster.

The AI-native research platforms (AlphaSense, Hebbia, Rogo, Brightwave) have the same structural limitation. They give you AI-powered read access to documents. They do not write to your systems of record. They do not accumulate your institutional knowledge. They do not compound your firm's IP. They deliver the same product to every customer. When your competitor buys the same tool, the advantage cancels out.

The Claude Code architecture concentrates capability in the specific firm. The encoded local context is yours. The hunting party attack chains reflect your coverage universe, your analytical frameworks, your pattern recognition. The institutional memory accumulates in your environment. No competitor can buy what you have built, because it was built from the interaction between the architecture and your firm's specific judgment over time.

Bloomberg democratizes AI across finance. The Claude Code architecture concentrates it within a single firm.

What Improves as Models Improve

The architecture has a growth function with two independent variables that multiply.

Model improvement. Every frontier model release makes every deployment better automatically. Stage 1 quality improves: more sophisticated attack chains, better identification of load-bearing assumptions, better pursuit of non-obvious evidence paths. Coverage maintenance becomes cheaper, faster, and more accurate. Evidence quality assessment improves. No feature request needed. No developer sprint. No upgrade cycle.

Institutional knowledge accumulation. Every earnings cycle makes the data layer richer. Every hunting party report tells you what held and what cracked. Every PM interrogation session refines the firm's analytical frameworks. The encoded context deepens. The system's output becomes less generic and more distinctively the firm's.

What does not improve with model capability: Stage 2. PM interrogation improves with PM skill. The division of labor is structural. The craft of navigating the model's tendencies, distinguishing real evidence from performed conviction, catching what the chain missed, is a human skill that develops through practice. It is the highest-leverage skill in the architecture. The PM who develops it outperforms.

A firm running this architecture for two years has something fundamentally different from a firm starting today. The gap widens. It does not narrow.

The Question for Hedge Funds

The technology is here. Models are intelligent enough. The computational environment exists. The hunting party methodology has been validated through live execution.

The structural properties are clear: directed adversarial search at machine speed, joint search across incommensurable knowledge structures, conviction through survival, compounding institutional context, and concentration of capability in the specific firm.

The remaining question is organizational. Which firms can configure the architecture for their specific process, train their PMs and analysts to operate as the fourth component of the system, and begin accumulating the institutional knowledge that becomes the compounding advantage?

The firms that build this first will have an edge that cannot be replicated by purchasing software. The hunting party is an operating model. The encoded context is institutional memory. The PM's craft in Stage 2 is a skill that develops through hundreds of hours of practice.

These are not things a competitor can buy next quarter. They are things that must be built, over time, in the specific context of the specific firm. And they compound.

Paper 1 in this series: "Why the Claude Code Architecture Will Beat SaaS AI" (what investment managers need to know, and why it matters now)

Diego Espinosa

CEO & Co-Founder, Kith AI Lab. Former #1 ranked equity research analyst, Research Director at Bernstein, and $10B portfolio manager.