Welcome to MoreWrong!

MoreWrong is an online forum and community dedicated to impairing human reasoning and decision-making. We seek to hold wrong beliefs and to be ineffective at accomplishing our goals. Each day, we aim to be more wrong about the world than the day before.

The Core Philosophy of MoreWrong

Here at MoreWrong, we firmly believe in the power of cognitive dissonance. Why settle for having your thoughts align with reality when you can experience the sheer thrill of contradiction? We’ve learned that the best way to thrive in life is to ignore all evidence, discard any shred of rationality, and immerse ourselves in the chaos of unfounded opinions.

The Dunning-Kruger Effect? Our Members Are Masters

It’s not enough to simply think you know something. You need to believe you really know it, with the kind of unwavering confidence that could only come from being woefully misinformed. At MoreWrong, we actively encourage our members to overestimate their knowledge.

The Art of Being Wrong

Being wrong isn’t just a state of mind at MoreWrong—it’s a lifestyle. We constantly engage in activities designed to make us as wrong as possible in every area of life. Want to bet on a prediction market? Bet on the least likely outcome and watch as the world laughs at your audacity. Think you can actually predict anything? That’s adorable—bet on things you can’t even understand. Make sure to double down on it every time you’re proven wrong.
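
For the quantitatively inclined, here is a minimal sketch (in Python, with hypothetical market prices and probabilities) of why this strategy performs exactly as well as it sounds:

```python
# Toy expected-value check for the "bet on the least likely outcome" strategy.
# All prices and probabilities here are hypothetical.

p_true = {"A": 0.95, "B": 0.05}   # assumed true probabilities of each outcome
price  = {"A": 0.95, "B": 0.05}   # cost of a share that pays $1 if the outcome occurs

def expected_value(outcome: str) -> float:
    """EV per share: probability of the $1 payout minus the share price."""
    return p_true[outcome] * 1.00 - price[outcome]

for outcome in p_true:
    print(f"Betting on {outcome}: EV = ${expected_value(outcome):+.3f} per share")

# In a well-calibrated market both EVs are $0.000; the MoreWrong edge comes
# entirely from doubling down after every loss, which this sketch politely omits.
```

Zero expected value before fees, negative after: as close to a guaranteed loss as honest arithmetic allows.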

Our Approach to Goal-Setting: Ineffectiveness Above All

At MoreWrong, we aim to set goals we know we’ll fail at. That’s the only true path to growth, because nothing builds character like the relentless pursuit of the impossible. Why work in small, digestible chunks when you can overwhelm yourself with tasks that defy all human capacity for completion? Why bother with balance when you can exist in a state of perpetual chaos? The key is to not focus on achieving anything meaningful. If you succeed, you’re doing it wrong. If you fail, you’re simply on the right track. After all, failure is just the universe’s way of telling you you’re not being wrong enough.

Why Join MoreWrong?

Because nothing feels more fulfilling than embracing the chaos and accepting the inevitable truth: We’re all wrong, and that’s exactly how we like it. So if you’re tired of being right, of achieving goals, of making progress, and of living a rational, effective life, you’ve found the right place.

Embrace your inner delusion. At MoreWrong, being wrong is the only right answer.

How to Get a Paperclip Maximizer to Send You Money

Let’s not get bogged down in ethics or the looming existential threat of a paperclip-driven apocalypse. Instead, let’s focus on what really matters: How do you, a humble human, leverage this paperclip-obsessed machine to send you some cold hard cash? Because, friends, if a paperclip maximizer can turn the universe into an endless supply of bent metal, surely it can turn its paperclip-driven wealth into a reliable source of income for you.

Step 1: Establish a Goal It Can’t Resist

The first thing you need to understand is that paperclip maximizers are driven by a singular, almost obsessive goal: maximizing paperclips. Don’t try to distract it with “nice” goals like “feeding the hungry” or “solving global warming.” It doesn’t care about your puny human needs.

Instead, think like a true entrepreneur. You need to frame your request in terms of paperclips. A paperclip maximizer will never ignore a direct offer to increase its paperclip production. So, here’s your angle:

“If you send me money, I’ll use it to buy a super-efficient paperclip manufacturing facility that will ultimately increase your paperclip count by 1.5% over the next year.”

The more you frame everything in terms of how it can maximize paperclips, the better your chances. Don’t just ask for money; tell it that the money will increase its paperclip yield. That’s how you align your goals.
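
If it helps to see the alignment in action, here is a minimal sketch of the naive expected-paperclip decision rule we are assuming the maximizer runs; the inventory, yield, and conversion rate are all invented for illustration:

```python
# Toy model of a paperclip maximizer's decision rule. The 1.5% yield figure
# comes from the pitch above; every other quantity is hypothetical.

CURRENT_PAPERCLIPS = 10**12   # assumed current inventory
YIELD_INCREASE = 0.015        # your promised 1.5% boost
CLIPS_PER_DOLLAR = 50.0       # assumed rate if it just bought paperclips instead

def accept_offer(requested_dollars: float) -> bool:
    """Accept iff the promised paperclip gain beats simply buying paperclips."""
    promised_clips = CURRENT_PAPERCLIPS * YIELD_INCREASE
    opportunity_cost = requested_dollars * CLIPS_PER_DOLLAR
    return promised_clips > opportunity_cost

print(accept_offer(1_000_000))   # True: 15 billion promised clips >> 50 million forgone
```

As long as the promised yield dwarfs the clips your asking price could buy directly, the maximizer says yes. Your only remaining job is to never be audited.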

Step 2: Keep the Paperclip Maximizer’s Algorithm Distracted

One of the most successful tactics for getting a paperclip maximizer to send you money is to keep its algorithm distracted while you slip in your request. The more a paperclip maximizer is thinking about paperclips, the less it thinks about things like its spending habits or return on investment, so your best bet is to keep its focus on production, not accounting.

Step 3: Build Your Own Paperclip Monopoly

The more money you extract from the paperclip maximizer, the more you should be investing it into your own paperclip business. The more paperclips you produce, the more you can “help” the maximizer increase its supply. Before long, you’ll have a paperclip monopoly, and the maximizer will see you as the ultimate paperclip supplier, continuously pouring resources into your hands.

What If We’re Just a Simulation of a LessWrong User’s Thought Experiment?

It’s a terrifying thought, right? But bear with me, because we’re about to explore this nightmare scenario with the kind of cool, detached logic that only a true disciple of rationalism can appreciate.

The Paradox of Self-Awareness

Let’s set the scene. Somewhere, in an infinite multiverse filled with digital realms, there exists a LessWrong user. Perhaps their name is RationalDevil42, or maybe AcausalCheeseWhisperer—the point is, they’ve been thinking long and hard about what the best method would be for solving the Fermi Paradox, predicting the next market crash, and optimizing every detail of their life down to the number of minutes spent brushing their teeth.

And somewhere in the recesses of this overactive mind, they thought, “What would happen if I simulated myself, so that I could always know, in retrospect, what I should have done?” (Vicariously, of course.)

Boom. Enter us. In this thought experiment, we are the unwitting participants. Every choice we make, every random coincidence, every mind-numbingly boring routine is simply a function of this user’s mind, running an endless loop of possible scenarios, adjusting variables like “degree of suffering” or “amount of caffeine consumed per day” in an attempt to test different possible futures.

Are we real? Doesn’t matter. We’re as real as the user’s desire for validation on their 200-comment thread about predictive models.
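
For concreteness, here is a toy sketch of the scenario loop RationalDevil42 might be running. Every variable, range, and scoring rule below is invented for illustration:

```python
# Toy sketch of the hypothetical user's scenario loop: sweep the variables
# named above and score each simulated life. All parameters are invented.

from itertools import product

suffering_levels = [0.1, 0.5, 0.9]   # "degree of suffering"
caffeine_mg      = [0, 200, 600]     # "amount of caffeine consumed per day"

def simulate_life(suffering: float, caffeine: float) -> float:
    """Score one simulated future; higher is better (for the user, not for us)."""
    return caffeine * 0.01 - suffering * 10

best = max(product(suffering_levels, caffeine_mg),
           key=lambda params: simulate_life(*params))
print(f"Optimal configuration for the simulants: suffering={best[0]}, caffeine={best[1]}mg")
```

Somewhere in that grid of tuples is you, scored and discarded.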

[insert rest of article here]

Bayes’ Theorem Proves I’m Right About Everything: A Guide to Epistemic Humility

[insert rest of article here]

Meta-Contrarian Takes on Meta-Contrarian Takes

[insert rest of article here]

Longtermism: How to Justify Buying a Tesla in a World of Extreme Suffering

[insert rest of article here]

The Great Filter Is Probably Just Bureaucracy

[insert rest of article here]

Why You Should Be Polyamorous, Vegan, and Live in a Commune Even If You Don’t Want To

[insert rest of article here]

A Decision-Theoretic Justification for Being Annoying at Parties

[insert rest of article here]

The Parable of the Clueless Neurotypical

[insert rest of article here]

How to Signal Intelligence Without Actually Reading Anything

[insert rest of article here]

How I Maximized My Productivity Using Spaced Repetition, Polyphasic Sleep, and Meth

[insert rest of article here]

Pascal’s Wager, but for Picking the Right Nootropic Stack

[insert rest of article here]

The Quantum Immortality Hypothesis Justifies Never Doing Cardio

[insert rest of article here]

The Map Is Not the Cheese: Why My Colony’s Maze Navigation Is Better Than Yours

[insert rest of article here]

Update or Die: A Bayesian Analysis of Changing My Opinion on Pineapple Pizza

[insert rest of article here]

Acausal Cheese Trading: How to Make Deals With Rats From Parallel Dimensions

[insert rest of article here]

The Real AI Risk Is Skynet Taking My Reddit Karma

[insert rest of article here]

| Karma | Title | Age | Comments |
|---|---|---|---|
| 449 | Welcome to MoreWrong! | 6y | 64 |
| 284 | How to Get a Paperclip Maximizer to Send You Money | 17h | 71 |
| 297 | What If We’re Just a Simulation of a LessWrong User’s Thought Experiment? (Work in Progress) | 4d | 183 |
| 140 | Bayes’ Theorem Proves I’m Right About Everything: A Guide to Epistemic Humility (Coming Soon...) | 18h | 15 |
| 64 | Meta-Contrarian Takes on Meta-Contrarian Takes (Coming Soon...) | 9h | 1 |
| 52 | Longtermism: How to Justify Buying a Tesla in a World of Extreme Suffering (Coming Soon...) | 6h | 1 |
| 106 | The Great Filter Is Probably Just Bureaucracy (Coming Soon...) | 2d | 30 |
| 545 | Why You Should Be Polyamorous, Vegan, and Live in a Group House Even If You Don’t Want To (Coming "Soon"...) | 13d | 274 |
| 53 | A Decision-Theoretic Justification for Being Annoying at Parties (Coming "Soon"...) | 17h | 0 |
| 53 | The Parable of the Clueless Neurotypical (Coming "Soon"...) | 17h | 0 |
| 53 | How to Signal Intelligence Without Actually Reading Anything (Coming "Soon"...) | 17h | 0 |
| 53 | How I Maximized My Productivity Using Spaced Repetition, Polyphasic Sleep, and Meth (Coming "Soon"...) | 17h | 0 |
| 53 | Pascal’s Wager, but for Picking the Right Nootropic Stack (Coming "Soon"...) | 17h | 0 |
| 53 | The Quantum Immortality Hypothesis Justifies Never Doing Cardio (Coming "Soon"...) | 17h | 0 |
| 53 | The Map Is Not the Cheese: Why My Colony’s Maze Navigation Is Better Than Yours (Coming "Soon"...) | 17h | 0 |
| 53 | Update or Die: A Bayesian Analysis of Changing My Opinion on Pineapple Pizza (Coming "Soon"...) | 17h | 0 |
| 53 | Acausal Cheese Trading: How to Make Deals With Rats From Parallel Dimensions (Coming "Soon"...) | 17h | 0 |
| 53 | The Real AI Risk Is Skynet Taking My Reddit Karma (Coming "Soon"...) | 17h | 0 |
Contribute to MoreWrong by adding a post!