A Wall Street Genius’s Final Investment Playbook

Chapter 280: The Invisible Hand (15)



Even among bubbles, some are superior to others.

In that sense, I wanted a bubble that would never burst.

But the bubble I’ve created now—does it really measure up to that level?

‘No way, I still have a long way to go.’

Right now the stock price is soaring like crazy, so I’m just riding the wave…

But what if even a small piece of bad news hits here?

Panic selling would snowball, and this fragile bubble could pop in the blink of an eye.

Therefore, what I wanted was a bubble so ‘solid’ that it couldn’t even be compared to the others.

So then, how could I make this bubble even sturdier?

The answer was simple.

‘It needs a safety test.’

I had to show investors firsthand that this bubble could endure.

Sure, it might look like a bubble, but what if, no matter how much you poke, prod, or twist it, it stays intact?

Then at that point, whether it was really a bubble or just something that looked like one wouldn’t matter to investors.

Because it would be proven that no minor bad news could shake it.

But every test requires an examiner.

Who should be the one to prove that this bubble was that strong?

The examiner would ideally be someone antagonistic toward me.

Only by withstanding the harsh scrutiny of adversaries would trust in the bubble’s durability rise.

So the validators I chose were…

None other than the macro-side fund managers who had invested in Gooble.

‘I’ve been saving them up for this moment.’

This time, I didn’t even need to roll the dice—the opponent came at me first.

The first one from the macro camp to draw his sword was Gideon Horton.

He appeared on CNBC and directly attacked my AI ETF, AFII.

“This is not a ‘healthy’ rally. Most of the ETF’s holdings are mid- and small-cap stocks, and they simply can’t absorb the inflow of capital. The rise isn’t based on earnings or fundamentals—it’s being driven by speculative sentiment… Yes, it’s a bubble.”

After branding my ETF a bubble, he delivered a stern warning.

“Structurally, this could be even riskier than the dot-com bubble. Blind faith from investors has combined with automated buying systems. That combination is creating a highly dangerous feedback loop.”

On screen, Horton displayed charts and graphs illustrating the risks of ETFs.

ETF demand rises → underlying asset purchases → underlying asset prices rise → ETF returns improve → ETF demand rises.

A self-reinforcing feedback loop where market sentiment drives prices higher, and those prices in turn stimulate sentiment. He pointed out this vicious cycle and warned in a grave tone.

“All bubbles, sooner or later, inevitably burst. Right now the returns may look high, but if you fail to get out at the crucial moment, you’ll lose everything. And as always, it’s the latecomers who suffer the most.”

Countering this attack became my first safety test. I went straight onto the broadcast to face Horton head-on.

“A bubble? I can’t agree at all. It’s true that inflows have overwhelmed supply and overheated prices in the short term. But dismissing this as mere speculation misses the point. The rise of AFII reflects the reality that AI is structurally transforming the real economy.”

“You can’t call AI the real economy. Think about cases like 3D TVs or VR glasses. They were once hailed as the ‘next-generation technology,’ yet they failed to take hold in the market and disappeared. LLMs could end up the same way. There isn’t even a clear monetization model yet.”

“Yes, I admit that point.”

“…What?”

Horton’s eyes flickered in surprise.

Of course, he must have assumed I’d rave about AI’s infinite potential and lay out a dazzling revenue model.

But I had no intention of doing that.

‘When you don’t know the answer, it’s better to just say you don’t know.’

Commercialization of AI and the creation of real profits.

That was a problem that remained unsolved even until the moment of my death.

“So… you’re saying even you can’t be certain AI will take root, and you don’t know the revenue model either?”

His tone carried a hint of disbelief.

Naturally so—because in an exam, if someone writes “I don’t know” in the answer box, it’s usually marked wrong.

And here I was, boldly handing in that wrong answer. Of course he was flustered.

But I looked straight into the camera and replied firmly.

“Yes, I don’t know. But this kind of uncertainty isn’t limited to AI. In this world, a 100% certain investment doesn’t exist in the first place.”

I shifted the frame of the debate.

‘Is AI investment the only uncertain one? Aren’t all investments uncertain by nature?’ That’s what I was saying.

Why did I respond that way?

‘Because this way, it becomes a relative evaluation.’

In relative evaluation, you don’t need to provide the correct answer or be perfect.

Regardless of whether your answer is right or wrong, you just need to score higher than the person next to you.

In other words, as long as you choose the right comparison target, you can win.

And the first comparison target I chose was…

“For example, Mr. Horton, your fund has invested in Brexit. Isn’t that also an investment where the outcome is impossible to guarantee?”

It was Brexit.

Yes, the shocking ‘withdrawal of the UK from the EU.’

Of course, the possibility of Brexit had been raised long before, but most of the market, including Wall Street, only regarded it as a ‘theoretical scenario.’

They believed that Britain would ultimately make the ‘rational choice.’

Horton was the same.

He too judged that the UK would remain in the EU and bet on a stronger pound based on that assumption.

“In the end, wasn’t that also an investment premised on an ‘unknown future’? The direction may differ, but I see the essence as the same.”

But.

When I pushed the narrative that ‘Brexit and AI are both unknowns,’ Horton grimaced and shot back.

“No. The two cases are fundamentally different. Of course Brexit carried political uncertainties, but there was still data necessary for predictions. Past treaties, trade agreements, currency correlations… with such analyzable foundations, you could make predictions about the outcome.”

I smiled faintly and countered.

“So, if you have data, does that eliminate risk?”

“I’m saying it’s completely different from AI, where no data exists. With AI, you don’t know the motives of market participants, their behavior patterns—nothing. Assuming out of nowhere that ‘this technology will be adopted’ can’t possibly be compared to Brexit.”

“Tell me, Mr. Horton—did you secretly hold another referendum in the UK yourself?”

“…What?”

“Because otherwise, the way you ‘took it for granted’ that Britain would remain in the EU doesn’t seem to have had much basis either.”

I kept dragging him into the frame of ‘you’re no different from me,’ and finally wrapped up the exchange this way.

“So you’re saying, ‘If past data exists, it’s safer’… Well, I suppose the results will speak for themselves. The referendum is just a few days away.”

And a few days later.

The results came out.

<[Breaking News] Brexit Passes, UK Decides to Leave EU>

<Market Shock… Pound Falls to 30-Year Low>

<EU Financial Stocks Plunge… Intraday Volatility Skyrockets>

After that, Horton’s flustered figure sweating on TV while scrambling to explain himself was quite a sight.

I even sent him a text to console him.

―If I’d known, I’d have placed a bet on it. What a shame. But don’t be too disheartened. I hear the market is really ‘hard to predict’ these days.

He never replied…

But that wasn’t the important part.

What mattered most was that I had passed the first ‘safety test.’

The first trial: ‘AI’ vs. ‘Brexit.’

The winner was AI.

However, the test had only just begun.

After Horton’s defeat, another macro player stepped in to take a shot, almost like a relay.

This time, the criticism was of a different nature than “AI’s uncertainty” or “lack of data.”

“The problem with AI lies not in the technology itself, but in the timing of its release. MindChat is not yet sufficiently prepared for commercialization. I see impatience here—a rush to release before public interest fades, rather than focusing on product completion. In the end, it looks like capital logic is pushing this technology into the market without even undergoing proper stress tests.”

In short, they argued that an unfinished product was being launched far too hastily.

“Failure,” they said, “is something that should end in a controlled laboratory. Releasing untested technology directly to the market is an irresponsibly reckless approach. When it fails, it’s not the company that bears the cost—it’s the investors.”

And surprisingly, that wasn’t entirely wrong.

I calmly replied.

“In the tech industry, it’s rare for products to enter the market in a ‘finished state.’ Even the iPhone was initially incomplete when it debuted, then rapidly evolved based on user feedback.”

“That’s… completely different from AI. The iPhone was an extension of existing technology. AI, on the other hand, is literally creating something out of nothing—a whole new domain. The foundation of the technology is different, so the risk is far greater.”

Here I just shrugged.

“Maybe so. But calling something an ‘extension of existing technology’ doesn’t guarantee it’s safer. Personally… I think the odds of an accident happening in smartphones are higher than in AI.”

“In the smartphone industry, the worst accidents are buttons that don’t work. How can you compare that to the disasters AI might bring?”

“Well, injuries from using Apple or Saseong Electronics products aren’t impossible, are they?”

“That’s ridiculous… I didn’t come here to engage in a battle of wild imagination. I came to debate the very real overheating of AI.”

But not long after that—

<[Breaking News] Saseong Electronics Announces Full Smartphone Recall>

<Multiple Reports of Explosions Due to Battery Overheating>

I sent the man a text filled with genuine concern.

—You don’t happen to own a Saseong device, do you? You defended them quite strongly on air, so I just got a little worried. Maybe it’s just that my imagination runs too wild…

As expected, there was no reply.

But that didn’t matter.

The second test, ‘AI’ vs. ‘Smartphones.’

Once again, the winner was AI.

But before I could savor the victory, the third test arrived.

“AI’s greatest weakness is the absence of an ecosystem. Technology alone doesn’t sustain an industry. You need supply chains, distribution, regulation, and policy networks to withstand external shocks. And right now, AI has none of that.”

This time, the criticism was about “running alone without an ecosystem.”

Once again, I nodded calmly.

“That’s true. But no industry starts with a complete ecosystem.”

I paused for a beat.

“And besides, having an ecosystem doesn’t necessarily reduce risk. Think back to the financial crisis. Finance was the most interconnected ecosystem in the world, yet that very connectedness became the catalyst that amplified risk.”

I chose “finance” as my third comparison target.

Still, I didn’t expect much pushback here.

No sane person would dare claim that “AI is riskier than the financial crisis.” After all, the scars of 2008 were still rippling across the globe.

But then—

“Yes, the financial crisis was indeed catastrophic. But since then, countless regulations and safeguards have been introduced, making the system more stable than before. In contrast, while finance has corrected itself through trial and error, AI hasn’t even taken shape yet. I see AI’s risk as far greater.”

“…What?”

…?

This macro was brazen.

“I see. So you’re saying that since finance already had a disaster, it’s safer now because it learned its lesson… Is that it?”

“Not having a criminal record doesn’t mean someone is safer. It might just mean they haven’t been caught yet.”

“So basically… a restaurant that once caused food poisoning might actually be safer now after passing health inspections, but a newly opened restaurant with no record could be the one to give you trouble?”

Hmph. Not making much sense.

Anyway.

Soon enough, the results came in.

<[Breaking News] Deutsche Bank Hit with $14 Billion Fine by U.S. Department of Justice>

<Subprime Mortgage Damages… CDS Surge, Shares Drop 8.4%>

Deutsche Bank had seemed to clean up its toxic assets after the financial crisis…

But in reality, it was still exposed to the same risks.

And that fact came to light as part of the U.S. fine process.

A tired old story, really—like an ex-con revealed to still be hiding new crimes.

—Thought you might need this. Here’s a list of restaurants in New York with food poisoning histories. Stay safe and healthy.

I left that text as neatly as before.

The third test, ‘AI’ vs. ‘Finance.’

Once again, the winner was AI.

I quietly took stock of the situation.

‘At this point, the safety has been proven well enough.’

Of course, just because I kept winning against the macro funds didn’t mean the fundamental problems of the AI industry had been solved.

There was still no clear method for commercialization or profit generation, and no products or services had reached completion.

But that was exactly why I had reframed the tests into “relative evaluation.”

I didn’t need to prove away AI’s structural flaws.

The kind of test I was taking wasn’t about getting the “right answer.”

It was a relative evaluation.

And the result?

Compared with investments in nations, major tech companies, or giant banks, AI didn’t look any riskier.

‘So, that should be enough to prove the bubble is strong enough.’

The market responded well.

In fact, after each test ended, AFII’s chart kept drawing a quiet upward curve.

‘Then… maybe it’s time to move on to the final stage.’

At last, the time had come to close out this long, drawn-out AI war.

The “end of hostilities” in this great war I’d planned was drawing near.

And from here, I had only one task left.

‘To make sure this bubble never bursts in the future.’

So far, I had personally inflated, defended, and reinforced the bubble.

But I couldn’t keep doing that forever.

I had to turn it into a bubble that could sustain itself without my direct management.

And for that…

I needed investors.

Ones who wouldn’t pull out even if bad news hit, even if performance stalled.

Who could those investors be?

The answer was simple.

‘The government, of course.’

Yes, the final stage of my scenario.

It was to dump the entire burden onto the U.S. government—and then walk away clean.