Pitfalls Nerds Fall Into When Thinking About Society

I hypothesize that problems arise when nerds[1] try to apply the specialized thinking patterns of STEM fields to society at large. In this article, we'll first look at the problems with two ideologies nerds often favor, Effective Altruism (EA) and Longtermism, and then generalize these into potential pitfalls nerds can fall into when thinking about society.

Firstly, let's consider the assumptions of EA/Longtermism.

These ideologies often assume that something will rise exponentially, leading to unexpected outcomes. A prime example is Ray Kurzweil's graph. According to Kurzweil's explanation, many small S-curve trends stack together to form one large exponential trend: as each S-curve flattens out, a new one takes off beneath it.

But how do we know which trend is an S-curve? The answer is circular: a trend counts as an S-curve once it looks like one in hindsight. A trend like AI progress might be one segment of a larger exponential, or it might itself be an S-curve that is about to stagnate; if all we have is the trend data, there is no way to tell which. By the same token, we can't even know whether the ultimate trend encompassing all the smaller ones is an exponential curve or just a bigger S-curve.
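
To see why trend data alone can't settle the question, here is a minimal numerical sketch (the parameter values are arbitrary, chosen only for illustration): the early portion of a logistic S-curve is almost indistinguishable from a pure exponential.

```python
import math

# Logistic (S-curve): L / (1 + exp(-k * (t - t0))).
# For t << t0 it behaves like a pure exponential a * exp(k * t)
# with a = L / (1 + exp(k * t0)).
L, k, t0 = 1000.0, 0.5, 30.0  # arbitrary illustrative parameters

def logistic(t):
    return L / (1 + math.exp(-k * (t - t0)))

a = L / (1 + math.exp(k * t0))  # exponential matched to the early regime

for t in range(0, 21, 5):
    s_curve = logistic(t)
    expo = a * math.exp(k * t)
    rel_diff = abs(s_curve - expo) / s_curve
    print(f"t={t:2d}  S-curve={s_curve:.6f}  exponential={expo:.6f}  rel diff={rel_diff:.2%}")
```

Up to t = 20 the two curves agree to within about one percent; the difference only becomes obvious once the inflection point is already behind you.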

More generally, the assumption that something will rise indefinitely can be problematic[2]. Taking something to an extreme to see what happens is a common thought process in STEM fields. But suppose that, for whatever reason, I start eating M&M chocolates in quantities of 1, 2, 4, 8, 16. To an outside observer, the number of M&Ms I eat is rising exponentially. If this continues for dozens of doublings, the Earth will be flooded with M&M chocolates, with catastrophic consequences. The logical course of action for humanity would be to kill me to prevent this. And since there will be an infinite number of future humans, whose infinite importance justifies any extreme action, we would have to kill everyone who has ever touched an M&M. In reality, of course, the trend would stop for mundane reasons: I get sick of them, or I run out of money to buy more. The point is that it's impossible to know all the factors that influence a societal trend. This is different from the controlled environments in which STEM thought experiments are conducted. In short, there may be real-world constraints that prevent a trend from continuing and growing too large, and whether such constraints exist is unknowable in advance.
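
To make "dozens of doublings" concrete, here is a quick back-of-the-envelope sketch (the figures of roughly 0.9 g per M&M and 5.97 × 10^24 kg for the Earth are approximations used only for illustration):

```python
import math

MM_MASS_KG = 0.0009      # one plain M&M weighs roughly 0.9 g
EARTH_MASS_KG = 5.97e24  # approximate mass of the Earth

# Eating 1, 2, 4, 8, ... M&Ms per sitting, the count doubles each time.
# How many doublings until a single sitting outweighs the planet?
doublings = math.ceil(math.log2(EARTH_MASS_KG / MM_MASS_KG))
print(f"doublings needed: {doublings}")  # -> 93
```

Naive extrapolation says planetary catastrophe after 93 doublings; the stopping conditions are exactly the part the arithmetic can't see.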

Even more generally, the problem starts with making predictions at all. There is no way to incorporate every condition that can influence the future into a calculation; in other words, there is no rigorous way to predict the future. The bases of any future prediction are cherry-picked, and judging the quality of a prediction is not objective but subjective, a matter of faith. The implication is that one can construct a theory to support any bizarre prediction[3]. In the corners of the web you can find people who believe that male/female forces will ultimately create a dystopia. They have sophisticated theories and plenty of evidence drawn from reality, and if you try to refute them you can't do so logically; it just ends up as a war of words. If such groups each decide one day to save humanity by any means necessary, the "logical" course of action would be to slaughter each other.

Secondly, let's consider the arguments of EA/Longtermism.

The most crucial element of EA/Longtermism is the value assigned to future humans. But who determines that value? If future value is discounted over time, the value of a far-future human converges toward something smaller than the value of an ant, rather than diverging to infinity. And the uncertainty over whether those future humans will exist at all makes the estimate even more meaningless. Consider a man, A, who confesses his feelings to a woman, B. If B accepts, A and B could potentially have an infinite number of descendants, which would carry infinite value. But if B rejects him, she effectively erases the existence of all those potential descendants. Should we then punish B as a mass murderer?
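
A minimal sketch of the discounting point, assuming an arbitrary illustrative discount rate of 2% per year (the rate itself is exactly the kind of subjective choice discussed below):

```python
# Present value of one life valued at V, n years in the future,
# under a constant annual discount rate d: V * (1 - d)**n.
V, d = 1.0, 0.02  # illustrative: a life is worth 1 today, discounted 2%/year

for years in (100, 1_000, 10_000):
    print(f"a life {years:>6} years out is worth {V * (1 - d) ** years:.3e}")

# Even summed over *all* future years, the geometric series
# V * (1 + (1-d) + (1-d)**2 + ...) converges to V / d -- finite, not infinite.
print(f"total value of one life per year, forever: {V / d:.1f}")
```

With any positive discount rate, the "infinite value of the future" collapses to a finite number; the infinity only appears if you assume d = 0, which is itself a subjective assumption, not a derived fact.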

More generally, consider the arguments used in EA/Longtermism. They look valid because they have premises, logical steps, and conclusions that follow. But enabling those logical steps requires many assumptions, all of them subjective; conclusions built on such assumptions are therefore subjective, not objective. For example, a common argument in EA circles is that we should humanely cause the extinction of carnivorous animals to protect prey animals. But how many people accept the underlying assumption that 'all species have equal value'? And the argument never even engages with competing assumptions like 'we should respect nature.'

Given that everything constituting EA/Longtermism is subjective rather than logical, there is a problem with supporters claiming to act on behalf of humanity. If everyone subjectively agreed with these ideas, acting for humanity on that basis would be fine. But in reality, these ideologies are barely known outside nerd circles, let alone broadly agreed upon. Acting 'for humanity' on subjective standards without that consensus is nonsensical. What would you say about a small group of people trying to overthrow the U.S. government to build a communist paradise?

Now, based on what we've discussed so far, let's consider what nerds should be careful of when thinking about society.

  1. In many cases, you have to accept that you can't be logical from the start. Unlike predictions made in the controlled environments of STEM fields, it's impossible to know every condition that influences a social outcome. In other words, even if a proposition is supported by grounds, and all of those grounds are strong and true, the proposition itself can still be wrong. In such situations, relying on intuition can be more effective than attempting pure logic. Intuition reflects complex and subtle things that are hard to express in explicit language, and while chains of 'logic' can be constructed to justify almost anything, intuition at least isn't so easily manipulated.

  2. There may be real-world constraints that you never thought of. Usually, real-world constraints prevent something from becoming extreme[4]. If you predict that something will become extreme, it's a signal that you need to rethink.

  3. What you assume to be factual could be subjective. Subjective assumptions can easily look objectively correct to someone who already believes them. But if all assumptions can only be subjective, everything built on them can become meaningless the moment someone says, 'I don't think so.'
