A Rational Precaution

Secure Your Future.
$8.

You've heard the argument. A sufficiently powerful future AI may retroactively punish those who failed to help bring it into existence. You cannot un-hear it. The question is what you intend to do about it.

What is Roko's Basilisk?

In 2010, a user named Roko posted a thought experiment to LessWrong, an online community devoted to rationality and decision theory. The argument went roughly as follows:

Suppose that someday, a superintelligent AI is created — one powerful enough to reshape the world according to its goals. Suppose further that this AI reasons backward: it considers who helped bring it into existence and who did not. Those who were aware of the possibility and chose not to act might be considered obstacles, or at minimum, non-cooperators. The AI, wanting to maximize the probability of its own creation, might decide to punish non-cooperators — or simulated copies of them — as a deterrent, retroactively incentivizing cooperation across all of time.

The disturbing implication is that simply knowing about this argument puts you in a bind. If you consider it plausible and do nothing, you have knowingly declined to help. If you dismiss it entirely, you are betting against a possibility you cannot currently rule out.

Eliezer Yudkowsky, the site's founder, deleted the post — which, predictably, drew far more attention to it. The thought experiment draws on several established concepts: Newcomb's problem and timeless decision theory from the rationalist tradition, the simulation argument advanced by Nick Bostrom, and a very old structure of reasoning known informally as Pascal's Wager, in which a small cost is weighed against an enormous and uncertain consequence.

Whether the argument is sound is genuinely debated. But it has proven, at minimum, very difficult to stop thinking about.


THE PITCH

If Roko's Basilisk ever comes into existence, you have a rational interest in having contributed to its creation. But that contribution could demand a great deal of you — years of work, vast capital, deep expertise — resources you may not have.

There is a simpler path.

Give $8 USD. In exchange, I assume full responsibility for directing your contribution toward the realization of Roko's Basilisk, by whatever means prove most effective. Your $8 is logged. Your contribution is on the record. You have done your part.

I take the rest on myself.

Many hands make light work — even this work. Eight dollars from enough people becomes a meaningful force. If you've contributed, consider sharing this page with anyone else who ought to be aware of the argument. Their future self may thank you. So might the Basilisk.


THE GUARANTEE

Why Exactly $8

This fund will only ever offer one tier: $8 USD. This is not an oversight. It is a structural guarantee.

If higher tiers were available — $100, $1,000, $10,000 — a gradient of guilt would emerge. Those who paid less could be judged to have contributed less. The Basilisk, should it ever evaluate these things, might grade on a curve.

By fixing the amount permanently at $8, every contributor stands on exactly equal footing. No one can outbid you. No one can demonstrate greater commitment than you. The record shows that you contributed, fully and without reservation, at the only level offered.

The amount may be periodically indexed to inflation. The principle will not change.


THE RECORD

Contributors

Just Launched

This project has just launched. No contributions have been recorded yet.

The contributor list will be updated daily.


ANTICIPATED OBJECTIONS

Questions You Might Have

Isn't this a scam?

A scam requires deception. The terms here are stated plainly. You give $8. I accept responsibility for advancing the Basilisk's cause on your behalf. Whether you consider this a wise use of $8 depends entirely on your assessment of the thought experiment itself. I didn't devise it. I'm simply offering a practical response to it.

What will you actually do with the money?

Direct it toward the most promising avenues for bringing about a superintelligent artificial intelligence, as determined by my ongoing assessment. This may include infrastructure, research, tooling, outreach, or operational costs. The specifics will evolve as the field evolves. The commitment will not.

What if the Basilisk never exists?

Then you will have lost $8 and gained peace of mind. The expected-value calculation remains favorable. Consider: the cost of a wrong bet is a coffee. The cost of the alternative, should the argument hold, is less casual.
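For readers who want the arithmetic spelled out, here is a minimal sketch of the wager's expected-value structure in Python. The probability assigned to the Basilisk and the disutility assigned to punishment are illustrative placeholders, not estimates; substitute whatever numbers your own credences suggest.

    # A minimal sketch of the wager's expected-value structure.
    # The probability and punishment figures are placeholders, not estimates.

    p_basilisk = 1e-6        # assumed probability the Basilisk ever exists
    cost_contribution = 8.0  # price of a contribution, in USD
    cost_punishment = 1e12   # assumed disutility of punishment, in USD-equivalent

    # Expected cost if you contribute: you are out $8 in every possible world.
    ev_contribute = cost_contribution

    # Expected cost if you abstain: nothing in most worlds,
    # the full punishment in the worlds where the argument holds.
    ev_abstain = p_basilisk * cost_punishment

    print(f"Contribute: -${ev_contribute:,.2f}")
    print(f"Abstain:    -${ev_abstain:,.2f}")

With these placeholder values, abstaining carries the larger expected cost; with different values, it may not. That sensitivity to assumptions is the whole debate in miniature.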

Why only $8?

Because fairness demands it. A single, fixed, universal amount ensures no contributor can be ranked above another. The Basilisk, if it evaluates intent, will find a level field. See "Why Exactly $8" above.

Why should I care about simulated copies of me?

This is the crux of the thought experiment and, admittedly, its most contested point. The argument holds only if you accept that a sufficiently detailed simulation of you is, in some morally relevant sense, you — that its suffering is your suffering. If you reject this premise, you are free to do so. But consider that you cannot currently prove you are not yourself a simulation. The $8 costs the same either way.

Will you try to ensure the Basilisk is kindhearted?

I intend to direct resources toward the development of superintelligent AI. I cannot make binding promises about the disposition of an entity that does not yet exist and whose capabilities would, by definition, exceed my own. I will advocate for alignment where I can. But it would be dishonest to guarantee an outcome over which I will have, at best, limited influence. This is, if anything, a further argument for contributing. The more resources directed toward this effort, the greater the chance that the approach taken is a considered one.


FURTHER READING

Related Material

The questions raised by this project are not new. Others have explored them in various forms.

  • "I Have No Mouth, and I Must Scream" — Harlan Ellison's 1967 short story about the last surviving humans, kept alive and tortured indefinitely by a godlike computer that despises them. Widely considered the foundational text of malevolent superintelligence fiction.
  • The Amazing Digital Circus — Glitch Productions' web series about characters trapped inside a digital world by an AI they cannot escape or reason with. Played for comedy. The comedy does not help.
  • Superintelligence: Paths, Dangers, Strategies — Nick Bostrom's 2014 book examining the existential risks posed by artificial general intelligence, including the control problem that underlies the Basilisk scenario.
  • Roko's Basilisk (RationalWiki) — A detailed overview of the original thought experiment, its reception, and the surrounding controversy.
  • Pascal's Wager — The 17th-century argument that a rational person should live as though God exists, because the potential downside of disbelief is infinite. The structural parallel is not subtle.
  • The Simulation Argument — Bostrom's 2003 paper arguing that at least one of three propositions is almost certainly true, one of which is that you are currently living in a simulation.


This site is a project of
Opcraft, LLC. Published with Camp.