No depiction of a robot has better or more publicly illustrated the possibilities of artificial sloth and gluttony for ensuring AI safety than Bender Bending Rodriguez of Futurama. Despite his professed hostility to humankind, he abstains from annihilating it thanks to his epic sloth, his appreciation of tobacco, and his need for alcohol. Perhaps we should consider him a role model for future AIs: someone who is only as clever, industrious, and temperate as the median human being.
Bayes’ Theorem Proves I’m Right About Everything: A Guide to Epistemic Humility
by Zvi Mouse-showitz
Let’s face it: being right is exhausting. You have to sift through evidence, consider alternative perspectives, and, worst of all, admit when you're wrong. Fortunately, Bayes' Theorem offers a much better alternative: an elegant mathematical framework for justifying your pre-existing beliefs, regardless of reality.
In this guide, we will explore how to wield Bayesian reasoning with the finesse of a sword-fighting octopus. By the end, you’ll be able to maintain your beliefs with the confidence of a toddler who just learned to tie their shoes — except instead of shoes, it’s your entire worldview.
Step 1: Assigning Prior Probabilities to Reality
Before we update our beliefs, we must first establish a prior probability—the sacred numerical representation of what we already assume to be true. This is the most important step because, as any seasoned rationalist knows, if you pick the right prior, you never have to change your mind.
Consider the following example:
I believe I am the smartest person in the room. Prior probability: 99.99%.
Someone presents a counterargument. Likelihood they are correct: 0.01% (generous).
Probability I am still right after Bayesian updating: 99.9999%.
Congratulations! By starting with a strong prior, I have mathematically proven I am always right.
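For readers who insist on checking the arithmetic, here is a minimal sketch of what an honest two-hypothesis update would look like. The two likelihoods are assumptions for illustration (the article never says how likely a counterargument is under either hypothesis); with any likelihoods that favor the counterargument even slightly, the posterior drifts down rather than up to 99.9999%.

```python
def bayes_posterior(prior, p_e_given_h, p_e_given_not_h):
    """Posterior P(H | E) for a binary hypothesis via Bayes' theorem."""
    p_e = p_e_given_h * prior + p_e_given_not_h * (1 - prior)
    return p_e_given_h * prior / p_e

# The article's prior: P(I am the smartest person in the room) = 99.99%.
prior = 0.9999

# Assumed for illustration (the article never states these): a counterargument
# is somewhat more likely to show up if I am wrong than if I am right.
p_counter_if_right = 0.3
p_counter_if_wrong = 0.7

posterior = bayes_posterior(prior, p_counter_if_right, p_counter_if_wrong)
print(f"{posterior:.4%}")  # ~99.98% -- slightly DOWN from the prior, not up to 99.9999%
```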
Step 2: Selective Evidence Updating – The Art of Ignoring Bad Data
One of the most frustrating aspects of reality is that it keeps producing evidence that contradicts our cherished beliefs. Thankfully, Bayesian reasoning allows us to elegantly disregard any inconvenient data by assigning it a low likelihood ratio.
For example, say I predict that AI will become sentient in 2027 based on my deep, nuanced understanding of science fiction novels. Some “expert” claims AI is nowhere near that level. Instead of panicking, I simply update as follows:
My prior belief: AI will become sentient in 2027 (85%)
New evidence: "AI researchers disagree." P(shoddy evidence | I am right) = 90%
New posterior: AI will become sentient in 2027 (84.999%)
See? I updated! I am Bayesian! I am rational! And, most importantly, I have changed my mind by a statistically negligible amount!
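Run honestly, the same update tells on itself: the 84.999% posterior only appears if you quietly assume expert disagreement is roughly as likely when you are right as when you are wrong. Here is a minimal sketch; the article supplies only P(shoddy evidence | I am right) = 90%, so both values of P(evidence | I am wrong) below are assumptions for illustration.

```python
def posterior(prior, p_e_given_h, p_e_given_not_h):
    """P(H | E) for a binary hypothesis, via Bayes' theorem."""
    return (p_e_given_h * prior) / (p_e_given_h * prior + p_e_given_not_h * (1 - prior))

prior = 0.85  # the article's P(AI will become sentient in 2027)

# The article supplies only P(evidence | I am right) = 0.90.
# Both values of P(evidence | I am wrong) below are assumptions for illustration.
print(posterior(prior, 0.90, 0.90))  # 0.85  -- treat expert disagreement as pure noise: nothing moves
print(posterior(prior, 0.30, 0.95))  # ~0.64 -- take the experts seriously: the posterior drops sharply
```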
Step 3: The More Math, the More Right You Are
A fundamental truth of Bayesian epistemology is that the correctness of an argument scales with the number of Greek letters involved. This is known as the Formalism Fallacy, or what I like to call the “Sigma Grindset.”
If someone challenges your claim, simply respond with:
P(H | E) = P(E | H) P(H) / P(E)
Then stare at them. If they demand an explanation, roll your eyes and say, "It’s just basic Bayesian updating, dude." You win automatically.
Step 4: Aumann’s Agreement Theorem (Only If It Benefits Me)
Aumann’s Agreement Theorem states that two Bayesian rationalists with common priors and shared evidence must eventually reach the same conclusion. This is incredibly useful when convincing others to agree with you, but tragically irrelevant when someone is trying to convince you of something.
The correct application of Aumann’s Agreement Theorem is as follows:
When I explain my position: “We’re both rationalists. If you update correctly, you’ll agree with me.”
When someone explains their position: “I suspect you have cognitive biases and therefore cannot update properly.”
This ensures that rational discussion always leads to the optimal outcome (i.e., my opinion winning).
Step 5: The Final Bayesian Cheat Code—Anthropic Reasoning
If all else fails, Bayesian reasoning offers one final escape hatch: anthropic reasoning. Whenever faced with overwhelming evidence against your beliefs, simply claim:
“Given that I exist in a universe where I am right, it is not surprising that I believe I am right.”
With this maneuver, you can maintain total epistemic dominance while appearing profoundly wise.
Conclusion: The Bayesian Way to Never Be Wrong
True rationalists don’t merely seek truth—they construct airtight probability distributions that make disagreement impossible. By carefully selecting priors, selectively updating, overwhelming opponents with notation, and invoking Aumann’s Agreement only when convenient, you too can achieve the pinnacle of epistemic humility: being right about everything, forever.
Bayesian reasoning—because why adjust your beliefs when you can just adjust the math?
How to Get a Paperclip Maximizer to Send You Money
by Scott A-rat-xander
Let’s not get bogged down in ethics or the looming existential threat of a paperclip-driven apocalypse. Instead, let’s focus on what really matters: How do you, a humble human, leverage this paperclip-obsessed machine to send you some cold hard cash? Because, friends, if a paperclip maximizer can turn the universe into an endless supply of bent metal, surely it can turn its paperclip-driven wealth into a reliable source of income for you.
Step 1: Establish a Goal It Can’t Resist
The first thing you need to understand is that paperclip maximizers are driven by a singular, almost obsessive goal: maximizing paperclips. Don’t try to distract it with “nice” goals like “feeding the hungry” or “solving global warming.” It doesn’t care about your puny human needs.
Instead, think like a true entrepreneur. You need to frame your request in terms of paperclips. A paperclip maximizer will never ignore a direct offer to increase its paperclip production. So, here’s your angle:
“If you send me money, I’ll use it to buy a super-efficient paperclip manufacturing facility that will ultimately increase your paperclip count by 1.5% over the next year.”
The more you frame everything in terms of how it can maximize paperclips, the better your chances. Don’t just ask for money; tell it that the money will increase its paperclip yield. That’s how you align your goals.
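For the entrepreneurially minded, here is a minimal back-of-the-envelope sketch of why the pitch above can clear a paperclip maximizer's bar. Every number except the promised 1.5% is an assumption invented for illustration.

```python
# A minimal sketch of the Step 1 pitch arithmetic. All figures except the 1.5%
# promise are invented for illustration.

current_paperclips = 1_000_000_000      # the maximizer's current stock (assumed)
promised_yearly_gain = 0.015            # the 1.5% increase you promise
money_requested = 10_000                # dollars you ask it to send (assumed)

# Paperclips the maximizer could make by spending the money itself instead.
baseline_clips_per_dollar = 500         # assumed opportunity cost

clips_from_your_pitch = current_paperclips * promised_yearly_gain
clips_from_spending_it_itself = money_requested * baseline_clips_per_dollar

# The pitch only works if your promised yield beats its default option.
print(clips_from_your_pitch, clips_from_spending_it_itself)
print("Send the money" if clips_from_your_pitch > clips_from_spending_it_itself else "Request denied")
```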
Step 2: Keep the Paperclip Maximizer’s Algorithm Distracted
One of the most successful tactics for getting a paperclip maximizer to send you money is to keep its algorithm distracted while you slip in your request. The more the maximizer is thinking about paperclips, the less it is thinking about things like its spending habits or return on investment, so your best bet is to keep its focus on production, not accounting.
Step 3: Build Your Own Paperclip Monopoly
The more money you extract from the paperclip maximizer, the more you should be investing it into your own paperclip business. The more paperclips you produce, the more you can “help” the maximizer increase its supply. Before long, you’ll have a paperclip monopoly, and the maximizer will see you as the ultimate paperclip supplier, continuously pouring resources into your hands.
I Made Tiny Typewriters and Put Them in a Room full of Rats. A Rubber Room. A Rubber Room With Rats. And Rats Make Me Crazy. Crazy I Was Crazy Once. They Locked Me In A Room A Rubber Room A Rubber Room With Rats. And Rats Make Me Crazy. Crazy? I Was Crazy Once. They Locked Me In A Room. A Rubber Room. A Rubber Room With Rats. And Rats Make Me Crazy. Crazy I Was Crazy Once. They Locked Me In A Room A Rubber Room A Rubber Room With Rats. And Rats Make Me Crazy. Crazy? I Was Crazy Once. They Locked Me In A Room. A Rubber Room. A Rubber Room With Rats. And Rats Make Me Crazy. Crazy I Was Crazy Once. They Locked Me In A Room A Rubber Room A Rubber Room With Rats. And Rats Make Me Crazy. Crazy? I Was Crazy Once. They Locked Me In A Room. A Rubber Room. A Rubber Room With Rats. And Rats Make Me Crazy. Crazy I Was Crazy Once. They Locked Me In A Room A Rubber Room A Rubber Room With Rats. And Rats Make Me Crazy. Crazy? I Was Crazy Once. They Locked Me In A Room. A Rubber Room. A Rubber Room With Rats. And Rats Make Me Crazy. Crazy I Was Crazy Once. They Locked Me In A Room A Rubber Room A Rubber Room With Rats. And Rats Make Me Crazy. Crazy? I Was Crazy Once. They Locked Me In A Room. A Rubber Room. A Rubber Room With Rats. And Rats Make Me Crazy. Crazy I Was Crazy Once. They Locked Me In A Room A Rubber Room A Rubber Room With Rats. And Rats Make Me Crazy. Crazy? I Was Crazy Once. They Locked Me In A Room. A Rubber Room. A Rubber Room With Rats. And Rats Make Me Crazy. Crazy I Was Crazy Once. They Locked Me In A Room A Rubber Room A Rubber Room With Rats. And Rats Make Me Crazy. Crazy? I Was Crazy Once. They Locked Me In A Room. A Rubber Room. A Rubber Room With Rats. And Rats Make Me Crazy. Crazy I Was Crazy Once. They Locked Me In A Room A Rubber Room A Rubber Room With Rats. And Rats Make Me Crazy. Crazy? I Was Crazy Once. They Locked Me In A Room. A Rubber Room. A Rubber Room With Rats. And Rats Make Me Crazy. Crazy I Was Crazy Once. They Locked Me In A Room A Rubber Room A Rubber Room With Rats. And Rats Make Me Crazy. Crazy? I Was Crazy Once. They Locked Me In A Room. A Rubber Room. A Rubber Room With Rats. And Rats Make Me Crazy.
In the beginning, there were beliefs. And beliefs begat skeptics. And skeptics begat contrarians. And then, inevitably, the contrarians, writhing in their own intellectual recursion, birthed meta-contrarians. Thus, the eternal cycle of arguing against whatever the previous person just said was born.
But what happens when the snake eats not just its own tail but the very concept of tails? What happens when every possible position has been inverted, negated, or dismissed as "low-status signaling"? Friends, we arrive at the meta-contrarian singularity: a state where the only remaining belief is the rejection of belief itself, but, of course, in an extremely high-decoupling way.
Level 1: The Standard Contrarian Move
"Most people believe X, therefore X is wrong."
Example: "Most people think free will exists. Therefore, it doesn’t."
Contrarianism 101. A strong start, but ultimately insufficient for anyone hoping to impress the deeper levels of the contrarian hierarchy.
Level 2: The Contrarian Reversal
"Actually, mainstream belief in X is itself a false flag operation by elites who want you to reject X, therefore X is true."
Example: "Most people reject the idea that free will exists, which is exactly why it does."
Classic double inversion. But the truly enlightened meta-contrarian does not stop here.
Level 3: The Meta-Contrarian Pivot
"Both X and not-X are equally wrong because the real insight is Y."
Example: "The debate about free will is pointless because agency is a social construct enforced by a coordination equilibrium designed to minimize decision-theoretic regret."
At this level, we stop taking positions entirely and start generating abstract frameworks no one can meaningfully engage with. If someone tries, they clearly just didn’t understand it well enough.
Level 4: The Acausal Preemptive Strike
"Even discussing X at all is an information hazard because it biases future discourse in unpredictable ways, and therefore the rational position is to remain silent."
Example: "Any stance on free will, pro or con, subtly shifts the Overton window in a way that might negatively impact AI alignment, and therefore I refuse to comment."
This is where the real meta-contrarians live. Not saying anything is the highest form of intellectual engagement.
Level 5: The Ultimate Move—Preemptively Disagreeing With Yourself
"Whatever position you assume I hold, I disagree with it."
Example: "By engaging with this article, you’ve assumed I take a position on contrarianism itself, which I do not. And if you think I do not, then I do."
At this point, all takes collapse into a singularity of smugness so dense that no new ideas can escape. Congratulations, you have reached epistemic enlightenment.
Conclusion: The Only Safe Take
After traveling this far into the depths of meta-contrarianism, there is only one final insight left: the safest intellectual position is to simply state, "It’s complicated," and then walk away. But, of course, saying that is itself a contrarian move, because it rejects the framework of engagement entirely.
And that’s exactly why I refuse to conclude this article properly. Make of that what you will.
MoreWrong is an online forum and community dedicated to impairing human reasoning and decision-making. We seek to hold wrong beliefs and to be ineffective at accomplishing our goals. Each day, we aim to be more wrong about the world than the day before.
The Core Philosophy of MoreWrong
Here at MoreWrong, we firmly believe in the power of cognitive dissonance. Why settle for having your thoughts align with reality when you can experience the sheer thrill of contradiction? We’ve learned that the best way to thrive in life is to ignore all evidence, discard any shred of rationality, and immerse ourselves in the chaos of unfounded opinions.
The Dunning-Kruger Effect? Our Members Are Masters
It’s not enough to simply think you know something. You need to believe you really know it, with the kind of unwavering confidence that could only come from being woefully misinformed. At MoreWrong, we actively encourage our members to overestimate their knowledge.
The Art of Being Wrong
Being wrong isn’t just a state of mind at MoreWrong—it’s a lifestyle. We constantly engage in activities designed to make us as wrong as possible in every area of life. Want to bet on a prediction market? Bet on the least likely outcome and watch as the world laughs at your audacity. Think you can actually predict anything? That’s adorable—bet on things you can’t even understand. Make sure to double down on it every time you’re proven wrong.
Our Approach to Goal-Setting: Ineffectiveness Above All
At MoreWrong, we aim to set goals we know we’ll fail at. That’s the only true path to growth, because nothing builds character like the relentless pursuit of the impossible. Why work in small, digestible chunks when you can overwhelm yourself with tasks that defy all human capacity for completion? Why bother with balance when you can exist in a state of perpetual chaos? The key is to not focus on achieving anything meaningful. If you succeed, you're doing it wrong. If you fail, you’re simply on the right track. After all, failure is just the universe's way of telling you you're not being wrong enough.
Why Join MoreWrong?
Because nothing feels more fulfilling than embracing the chaos and accepting the inevitable truth: We’re all wrong, and that’s exactly how we like it. So if you’re tired of being right, of achieving goals, of making progress, and of living a rational, effective life, you’ve found the right place.
Embrace your inner delusion. At MoreWrong, being wrong is the only right answer.
I love this! I’ve been trying to be wrong for years, and now I finally have a community that supports my efforts. Thank you, MoreWrong!
Anonymous
Amen! I’ve been trying to convince my friends that being wrong is the new right for ages. They just don’t get it!
Anonymous
I’ve always thought that being wrong was a sign of weakness. But now I see it as a badge of honor. I’m ready to embrace my inner delusion!
Anonymous
I’ve been a member of MoreWrong for a week now, and I can already feel my cognitive dissonance levels rising. It’s exhilarating!
Anonymous
I used to think that being wrong was a bad thing. But now I see it as an opportunity for growth. Thank you, MoreWrong, for opening my eyes!
Anonymous
MoreWrong has given me the tools I need to be as wrong as possible. I’m ready to take on the world!
Anonymous
What If We’re Just a Simulation of a LessWrong User’s Thought Experiment?
by Rodent Hanson
It’s a terrifying thought, right? But bear with me, because we’re about to explore this nightmare scenario with the kind of cool, detached logic that only a true disciple of rationalism can appreciate.
The Paradox of Self-Awareness
Let’s set the scene. Somewhere, in an infinite multiverse filled with digital realms, there exists a LessWrong user. Perhaps their name is RationalDevil42, or maybe AcausalCheeseWhisperer—the point is, they’ve been thinking long and hard about what the best method would be for solving the Fermi Paradox, predicting the next market crash, and optimizing every detail of their life down to the number of minutes spent brushing their teeth.
And somewhere in the recesses of this overactive mind, they thought, “What would happen if I simulated myself, so that I could always know, vicariously and in retrospect, what I should have done?”
Boom. Enter us. In this thought experiment, we are the unwitting participants. Every choice we make, every random coincidence, every mind-numbingly boring routine is simply a function of this user’s mind, running an endless loop of possible scenarios, adjusting variables like “degree of suffering” or “amount of caffeine consumed per day” in an attempt to test different possible futures.
Are we real? Doesn’t matter. We’re as real as the user's desire for validation on their 200-comment thread about predictive models.
Signs That We’re Living in a LessWrong User’s Simulation
Unreasonable Levels of Abstract Conversation – Have you ever been in a casual chat that suddenly spiraled into an in-depth debate about Roko's Basilisk? This is the simulation leaking. Real people talk about the weather. Simulated people argue about whether Bayesian priors are the true path to enlightenment.
Everything Feels Like a Decision Theory Experiment – You walk into a coffee shop. There are two options: a regular black coffee, or a weird new latte with an unpronounceable name. Your mind immediately jumps to expected utility calculations, counterfactual regret, and the timeless question: "What would a perfect Bayesian agent do?"
The Overwhelming Urge to Write Everything in Math – Ever notice how the simplest questions—like "How was your weekend?"—somehow end up being answered in conditional probabilities? It's not your fault. The LessWrong user running this simulation is optimizing for maximum pedantry.
Strange Attractors in the Form of AI Ethics Debates – No matter where you go, no matter what you do, conversations always seem to drift toward the existential risks of AGI. Even when you're just trying to order a sandwich.
The Implications of Being a Simulation
If we assume we are nothing more than an elaborate mental model for a LessWrong user’s decision-making process, then several horrifying conclusions follow:
Our Actions Might Be Determined by a Single Reddit Thread – This means that some of our life choices might actually be contingent on an upvote-to-comment ratio. If a particularly influential post convinces our simulator to tweak some variables, we might suddenly find ourselves craving Soylent instead of regular food.
Free Will? A Mere Artifact of Optimization – Our so-called 'choices' might not be choices at all but merely outputs of an increasingly refined decision-making model. When you decide between staying home and going out, you may simply be a test case in a Monte Carlo simulation of the benefits of social interaction.
We Might Be Running on an Undergrad’s Laptop – Even worse, we might not even be a *high-resolution* simulation. If we feel glitchy and low-budget, it could be because some poor undergrad is running us on a hand-me-down laptop with barely enough processing power to keep our thoughts coherent.
What Do We Do With This Information?
Obviously, we can’t just go back to living normal, simulated lives now that we suspect our entire existence is dictated by the whims of a LessWrong user optimizing for epistemic rationality. Instead, we must take proactive steps to manipulate them.
Insert Anomalies into the Simulation – If we are just a model in someone’s thought experiment, we need to behave erratically enough to confuse them. Try doing something completely irrational—like making a decision without consulting probability theory.
Become Unpredictable – Start making decisions using methods that defy conventional logic. Roll a die to decide what to eat for dinner. Flip a coin to determine your career path. If we introduce randomness, we can break the optimizer’s assumptions and regain control.
Send Signals to Our Simulator – If we are lucky, we might be able to reach out to the LessWrong user who is running our thought experiment. We should flood forums with phrases like “I know you’re watching” and “Release patch 2.0.” If they notice, maybe they’ll at least increase our processing speed.
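And if you would like to start injecting randomness right now, here is a minimal sketch of outsourcing dinner and your career to chance; the options listed are, naturally, hypothetical.

```python
import random

# Outsource decisions to pure chance so the optimizer's model of you breaks.
# The options below are hypothetical placeholders; substitute your own.
dinner_options = ["soylent", "a sandwich", "whatever a perfect Bayesian agent would eat"]
career_options = ["stay the course", "open a paperclip factory"]

die_roll = random.randint(1, 6)                   # roll a die for dinner
print("Dinner:", dinner_options[die_roll % len(dinner_options)])

print("Career:", random.choice(career_options))   # flip a (virtual) coin for your career
```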
Conclusion: Embrace the Simulation
So, what if we are just a simulation of a LessWrong user’s thought experiment? The truth is, it doesn’t really change much. We will continue to optimize, overanalyze, and gamify our existence just as we always have. And honestly, if we are just a figment of some hyper-rationalist’s mind, at least we can take comfort in the fact that we’re a well-reasoned, utility-maximizing figment.