/r/GreatFilter

About this subreddit

The Great Filter is the most urgent question mankind has ever faced. It is Robin Hanson's proposed solution to the Fermi Paradox: the hypothesis that there are no other technological civilizations (and never were, even on Earth) because they die out before they can colonize a galaxy. The mission of r/GreatFilter is to raise awareness of the value and fragility of life, and thus the importance of peaceful colonization of space beyond Earth, one rock at a time. Is our destiny literally in our stars?

Rules

  1. Put [Sci-Fi] at the beginning of sci-fi post titles.

Handy links about this subreddit

You can also chat about surviving the Great Filter in ##prepping on the Freenode IRC network. Be sure to mention you joined from Reddit.

r/GreatFilter in the media

Since its founding on 2017-Jan-11, the mission of r/GreatFilter has been to raise awareness of the value and fragility of life, and thus the importance of peaceful colonization of space beyond Earth, one rock at a time. It is working:

Introductions to the Great Filter

  1. The Great Filter—the most important question in history (short)
  2. The Fermi Paradox - Wait But Why (long)
  3. [1806.02404] Dissolving the Fermi Paradox - arXiv.org (technical)
  4. Where Are They? Why I hope the search for extraterrestrial life finds nothing 2008
  5. Great Filter, 20 Years On 2018
  6. The Great Filter - Are We Almost Past It? 1998

Video introductions to the Great Filter

  1. The Fermi Paradox — Where Are All The Aliens? (1/2) - YouTube
  2. The Fermi Paradox II — Solutions and Ideas – Where Are All The Aliens? - YouTube
  3. Why Alien Life Would be our Doom - The Great Filter - YouTube (Kurzgesagt)

Even if the Great Filter theory is wrong, the end result is still the same:

What do we do after the Great Filter?:

Some entertainment

Possible Great Filters

Environmental problems like climate change or global warming could be only the tip of the iceberg for a much larger fundamental physics problem:

What about nuclear weapons?

Consequences of violence applied to the vacuum of space:

Even on Earth, intelligence evolved only once:

Multicellularity?:

Isolated by galactic motion:

The history of r/GreatFilter

  • 2017-01-10: 1 member.
  • 2017-10-04: 50 members.
  • 2018-01-24: 100 members.
  • 2018-07-15: 200 members.
  • 2018-08-09: 300 members.
  • 2018-08-19: 400 members.
  • 2018-10-07: 500 members.
  • 2018-10-31: 600 members.
  • 2018-11-09: 700 members.
  • 2018-12-08: 800 members.
  • 2018-12-10: 900 members.
  • 2018-12-12: 1000 members.
  • 2019-02-17: 1500 members.
  • 2019-04-13: Tiny Subreddit of the Day*.
  • 2019-05-10: Added to r/collapse sidebar.
  • 2019-05-18: Subreddit of the Day*.
  • 2019-05-18: 2000 members.
  • 2019-07-08: 2500 members.
  • 2019-08-31: 3000 members.

Great Filter multireddit

See also the reddit Great Filter multireddit. It has all of reddit's Great Filter-related posts in one place. Post an announcement about your related subreddit, and request inclusion in the reddit Great Filter multireddit.

See also

Our friends

5,622 Subscribers

3

Great Filter Proposal: Conflict as key to progress

*Just to clarify: I'm Russian, so my English might not be great for this topic, so I'll focus on the idea itself, to provoke some thoughts on this matter. Thanks.

First of all, I think the Great Filter is in our future - or more precisely, we're already scratching its surface.
We keep finding evidence that our past is not so unique: amino acids in meteorites, exoplanets in the habitable zones of their stars. Life on our own planet also appeared almost immediately after the planet formed (the Earth is 4.54 billion years old; life is 3.9-4.25 billion years old). So the odds keep shrinking that our place in the universe is unique and that life started exclusively on our planet by sheer luck.

Because of that, I looked for a reason universal and absolute enough to apply both to us AND to our potential "neighbors" in the universe.

And I find this reason in conflict. Wars, everyday conflict, conflict with the environment or an inner one - it doesn't matter. Conflict itself. Here are all the whys.

  1. Why conflict?

It's the main driver of any progress: technological, biological, personal.
Evolution rests entirely on conflict - with the environment, between species, or with nature itself. It's always a conflict that "provokes" species to evolve and progress instead of staying as they are, even though staying the same is much more comfortable and easy.
Almost all of our biggest technological innovations were created either to deal more effectively with the nature around us (conflict with nature/environment) or for war purposes (more relevant to our present and recent past). The Internet, transport, communication, roads, buildings, clothes, agriculture, nuclear energy: they may evolve into non-war-related things, but their birth and main growth as technologies were provoked by war or nature.
The personal level is an easy one. To keep it short, a few examples: work issues, comparison with other people, financial problems, the gap between your expectations and your current situation, etc.

With that in mind, what conflict would push a civilization to expand to other planets and across the galaxy? The most serious threats to a planet can destroy it in seconds, and for the most part a civilization cannot defend against things like quasars, supernovas, or even a meteorite big enough to destroy it - so these are out of scale as reasons to progress.
Other civilizations on other planets, maybe? Well, only if they are close enough to each other to communicate. Otherwise, if you think of THE first civilization in the universe, where would its conflict in space come from? Almost nowhere, except in some specific and rare cases.
On the other hand, conflict between groups on a planet is much "tastier", if you please, and more understandable: countries, ocean and land, "east and west" (both as on our own planet and metaphorically), races, nationalities. If a civilization has at least some of these differences, there will be conflict. It is much easier for a civilization to destroy itself on its own planet than to unite, dismantle all the institutions that divide its population, and work as one for the great future of their planet.
Because cosmic expansion is "somewhere far away" - it's like a dream: it would be great, but we have "real" problems to solve.

In conclusion: conflict, the engine of evolution that lets life on a planet grow into intelligent species and build a great civilization able to compete with almost anything, is also the thing that doesn't let its products (us, for example) break through this barrier and leave for the beautiful far away.

  2. Why universal?

As I already described, conflict shows itself everywhere - from the start of evolution to the everyday life of species, intelligent or not. Yes, I showed it using the example of our own civilization. But why should it really be different for others?

Gravity, the materials other species might be made of, light emission, etc. can vary, it's true. But not the basic mechanism for evolving into intelligent creatures - evolution, which stands on conflict. Species win this competition by adopting its methods, competing with each other to thrive, and since that is the more effective way to handle conflict with any source or problem, it stays with further generations. Eventually the path will be the same, even if the conditions are different.

I'm not talking about exotic examples, like the ocean-planet from "Solaris" by Stanislaw Lem, or "intelligent" species like the insects from "Starship Troopers" by Robert Heinlein, because:

- the first example is a one-off: such a "creature" would not be able to replicate itself, and its only purpose is to keep its planet in orbit. It has no reason to leave its double-star system, because its state is "comfortable" and there is nothing to compete with other than gravity.

- the second example is a bit more absurd. Insects, like any other species, can evolve, but only in their own environment. And complex evolution in space (evolving into creatures that can withstand vacuum) has much lower chances than evolution on planets with some gravity and an atmosphere, simply because planets tend to "pull" resources to their surfaces, which creates much more fertile "ground" for the birth and evolution of life.

So, to narrow down the potential civilizations: for cosmic expansion, a civilization has to be intelligent enough.
To be specific, here's an example: humans, as they are, cannot fly, cannot stay underwater for more than a couple of minutes, and certainly cannot survive the vacuum of space. So they change their environment to make existing in such conditions more comfortable, or possible at all. Because of that, their phenotype shows itself not only in eye color, hair, and facial features, but in cars, buildings, phones, and spacecraft as well.

Because of that, I presume we should first of all look for the same type of civilization - one whose phenotype has expanded into the outer world more than the inner one.

  3. Why absolute?

Simply because the process of evolution selects for the species that adapt to conflict most actively - they can compete, and win often enough to evolve into intelligent ones. In this way, evolution "cultivates" the species most effective in conflict. At the planetary level that's really helpful, but the cosmic level requires a completely different, pragmatic approach: complete unity, logic, emotionlessness, to name a few.

END: I don't like this idea, of course, because it's too dark and says something provocative about our nature. But unfortunately it fits really well with evolution and the nature of intelligent species, and it can absorb some other ideas for this filter - for example, the idea that intelligence itself may simply be a mistake of evolution that doesn't help the success of a species at all, and eventually destroys a civilization rather than helping it thrive.

Thank you all for reading this manuscript, huh. Feel free to comment, maybe I missed something. I’ll be glad to discuss☕️

Have a good day, stranger.

0 Comments
2024/04/04
13:33 UTC

1

Answering Fermi's Paradox using Observational Dynamics

2 Comments
2023/09/16
19:29 UTC

15

Intelligence as the great filter.

This is just a thought I’ve had floating around. Sometimes your strengths can be your downfall.

This could act as a great filter in multiple ways. The first way I see it is that a civilization advances faster than its understanding of the technology it has developed. This could lead to pollution or other damaging effects on their ecosystem/planet.

This same line of thinking could bring them to use powerful technology that has the potential to wipe out life on a mass scale or have such catastrophic effects on their environment (nukes/germ and viral warfare) that leads to mass death and finally extinction.

Or a civilization might move away from technological progress and look to more philosophical pursuits. They might look at their society, see its flaws, agree that fighting against a universe as unfriendly as ours isn't worth it, and just decide to fade out on their own terms.

Finally, the civilization could just plateau: gaining a certain level of technological advancement that makes their lives easier (I'm thinking of farming and forms of agriculture) but never moving past that, so there was never a chance of leaving the planet.

I don't often post, but when I saw this subreddit I figured it could lead to some very interesting conversations. So please let me know what you think.

12 Comments
2023/09/14
16:53 UTC

16

New research doubles the age of the Universe. Great Filter implications?

A recent study proposes that our universe may be nearly twice as old as previously believed, suggesting an age of 26.7 billion years instead of 13.7 billion years. This claim challenges current cosmological models. With this new understanding, what are the implications for us and the Great Filter? In my opinion, with the extended timeline, other civilizations would have had twice the amount of time to reach the ninth step in Hanson's list, a colonization explosion. Yet the universe appears devoid of intelligent life. Therefore, this age-doubling of the universe might increase the odds that the Great Filter - a significant barrier to the advancement of civilization - is still ahead of us.

9 Comments
2023/07/13
23:15 UTC

0

AI

2 Comments
2023/07/10
17:37 UTC

7

[Sci-Fi] In the next century the Fermi Paradox is finally resolved.

But under the acid green skies of toxic gas, no intelligent life is left to appreciate the significance.

6 Comments
2023/06/26
19:02 UTC

8

Turns Out, Eyeball Planets Experience Catastrophic Flips

0 Comments
2023/05/21
02:03 UTC

25

Humanity is just a passing phase for evolutionary intelligence.

3 Comments
2023/05/12
02:05 UTC

12

Searching for the next great

I'm coming to the conclusion that, as of right now, there are very few things ahead of us that could not only eliminate us but also prevent repeat intelligence from forming. Nuclear, bio, and chem war are unlikely to be filters, as none could wipe out enough of humanity to prevent the population from recovering and re-inheriting our own civ. I believe AGI would likely replace us if it wiped us out, so it would not solve the Fermi Paradox.

So far the most solid ones I can think of are:

  1. Dumb grey goo: not intelligent enough, or with too few replication errors, to ever develop machine intelligence.

  2. Rampant biosphere destruction, short term, at least enough to prevent ocean algae from existing.

  3. Or, an artificial filter, similar to dark forest theory.

Besides those I'm at a loss. There's some more potential sci-fi ones, like complex simulation, cognition hazards, or literal biblical Apocalypses, but I find these even more unlikely than nuclear, bio, or chem warfare. What have you guys come up with as potential GFs? How did you come to those conclusions and how do we prevent them?

13 Comments
2023/05/04
15:26 UTC

14

Why Aliens Might Already Be On Their Way To Us

2 Comments
2023/04/15
04:07 UTC

37

"The Fermi paradox", Gojkovic, Stable Diffusion assisted, 2023

24 Comments
2023/04/02
15:32 UTC

7

With all the progress in machine learning recently I'm curious what r/GreatFilter thinks

41 Comments
2023/04/02
00:46 UTC

4

Innovation as a Great Filter

I was just watching a video on AI and how it has helped researchers make developments in weeks instead of years. It went on about how companies will probably release their next models quicker and quicker, with less and less consideration for ethics and safety. Maybe this kind of innovation without remorse is the great filter that killed the aliens…

7 Comments
2023/03/29
21:10 UTC

6

Can fake news and fake data be a great filter as well?

With ChatGPT here and already generating undergraduate papers, it seems likely that, in the absence of new regulation of some sort, the internet will be filled with high-quality fabricated data.

It seems obvious that most people will be completely unable to distinguish the truth, or at least something with a genuine intention for truth.

Like, if I try and find the best medicine for some condition, it's entirely possible that what I find is just made up studies and whatnot.

Now, I know SOME rules for avoiding being fooled by such data, but I am fairly certain that the vast majority of people don't - the ones who trust ANYTHING on YouTube.

Entire education also seems like it's at a crossroads.

Could this stifle our progress, or set us back so far that we never go extra-planetary?

7 Comments
2023/03/24
11:33 UTC

0

Temes are (still) The Great Filter

Five years on, I bet a lot more people feel this way.

7 Comments
2023/03/24
08:55 UTC

16

Great Filter Proposal: Nuclear Abiogenesis

Last year, I think I found a promising possibility for the great filter: abiogenesis through a nuclear geyser. The authors argue that their proposed mechanism is the best possibility for abiogenesis. What is important is that it would explain why life emerged so early, invalidating probabilistic arguments and thus reopening the door to life being rare in the universe. The last known natural nuclear reactors operated 1.7 billion years ago and none exist today, but they may have been common during the Hadean.

Because the Earth's radioactivity was so much higher four billion years ago, with uranium and other materials earlier in their decay, this negates the common argument that since life happened so early, it must be everywhere. It's a fusion of the radioactive-beach and hot-spring hypotheses. Of course, there would also be minor filters and other factors (e.g. a phosphorus-rich continental planet around a G-type star), but this could be the vast majority of the solution to the Fermi paradox.
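As a rough check on the half-life argument above, here is a small back-of-the-envelope sketch. It assumes only textbook half-lives and a simple exponential-decay scaling, ignoring ore concentration and all other geochemistry:

```python
# How much more of each radioactive isotope existed t million years ago,
# from N(past) / N(now) = 2 ** (t / half_life). Half-lives are standard
# textbook values in millions of years.
HALF_LIFE_MYR = {"U-235": 704.0, "U-238": 4468.0, "K-40": 1248.0}

def abundance_ratio(isotope, myr_ago):
    """How many times more of the isotope existed `myr_ago` Myr in the past."""
    return 2.0 ** (myr_ago / HALF_LIFE_MYR[isotope])

# 4 Gyr ago: U-235 (the isotope that powered natural reactors like Oklo)
# comes out to roughly 51x today's amount, U-238 about 1.9x, K-40 about 9x.
for iso in HALF_LIFE_MYR:
    print(f"{iso}: {abundance_ratio(iso, 4000):.1f}x more 4 Gyr ago")
```

The steep U-235 ratio is what makes Hadean natural reactors plausible even though none can form today.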

13 Comments
2023/02/24
22:06 UTC

8

The Last of Us cordyceps would be a great filter if it existed.

(Last of Us spoilers ahead.) Unlike other apocalypses where civilization has a chance of restarting (nuclear war, AI uprisings), the cordyceps in The Last of Us show/game seems to arrive inevitably for any civilization that industrializes (a key factor for space colonization), and one can't industrialize without warming one's planet with CO2.

As for being unable to restart civilization: a fungal infection that widespread, lethal, and impossible to cure would put a hard cap on any new development of civilization. Even worse, the cordyceps seems to be a coordinated infection, with networks of traps to infect more people. It's akin to a biological weapon dropped on Earth: it ensures no intelligent species can get off their planet.

Granted, what happens with Ellie could change my tune (if they follow the game's plot, that is).

24 Comments
2023/01/30
16:40 UTC

0

Immortality could be a bad thing...

The concept of immortality has fascinated Humanity since we invented Gods. Of course, day-dreaming is fun, and harmless. However, as we plunge into the 21st century CE, 7,000 years after learning to read and write and 70 years after developing mass media, we are looking very closely at extending our (natural/healthy) lifespans.

I want to draw a distinction between absolute (forever and 3 days) and practical (very large 4- and 5-digit numbers of years). I also need to specify that "immortality" need not immunise against "fatal injury".

Currently we have not quite 8x10^9 humans on the surface of this planet. At the moment, all of them can be fed and watered. But consider when the 10^9 becomes 10^10 or 10^11. And none of them look like dying.

We have not yet reached that Filter. But we can see it in the middle distance. "Science" is rapidly fulfilling all our dreams, and many of our nightmares. It really does not matter if immortality prevents or allows reproduction. The big question is whether we are prepared to swap children for a life of less misery. At what stage do we find there is not sufficient food on the planet, or water, and that we no longer have an economy which allows the infinite manufacture of food and water? We may as well rule out space travel, too many goblins to tame there.
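The powers of ten above can be made concrete. Assuming, purely for illustration, that "practical immortality" stops deaths while births continue at roughly 1% of the population per year (an assumed round number near today's global crude birth rate), exponential growth gives:

```python
# Time for a deathless population to grow from `current` to `target`
# under exponential growth: N(t) = N0 * exp(r * t), so t = ln(target/N0) / r.
# The 1%/year birth rate is an illustrative assumption, not a projection.
import math

def years_to_reach(target, current=8e9, birth_rate=0.01):
    """Years until the population reaches `target` with no deaths."""
    return math.log(target / current) / birth_rate

print(f"to 1e10: {years_to_reach(1e10):.0f} years")   # a couple of decades
print(f"to 1e11: {years_to_reach(1e11):.0f} years")   # a few centuries
```

So the jump from 10^9-scale to 10^11-scale population would arrive on historical, not geological, timescales.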

As Humanity embraces ever more "freedom" with "democratic" governments, can we actually put a stop to life-extension biochemical research?

For reading, Clarke envisions a ruined planet with one (count him) child; Niven and Heinlein invoke expansionist space travel; Asimov did not have immortality, but some of his Spacer colonies were suffering extreme ennui as their life-style (and robots) forbade Humans from performing physical labour. (Also remember that Asimov's Spacer colonies were forced to import all their micronutrients from Earth due to alien soils not supporting Earth-microbial life...)

Be careful what you wish for, you may get it.

14 Comments
2022/12/20
07:11 UTC

7

The great filter is signal to noise ratio

This week we've had exciting progress in AI, with ChatGPT quickly gaining attention because of its ability to write extremely complex, human-like responses. However, like humans, it is also capable of being confidently incorrect in its assertions.

This has exponentially increased the speed at which we can both accidentally and intentionally proliferate misinformation. This is combined with a current world where we already have people intentionally proliferating misinformation.

To add to that, any effort to suppress the proliferation of misinformation is being pushed back on as "anti-freedom of speech", with billionaires doing their utmost to make sure this doesn't happen and successfully making it a populist issue.

Therefore the future right now appears to be a combination of the rapid drowning of any actual information (signal) with misinformation (noise).

My concern is that it's about to become impossible to learn. Just because real information is "true" or "useful" doesn't prevent it from being lost to a sea of junk.

Most younger people source their information primarily from the internet. With the internet on the cusp of becoming pure noise, I think they're going to struggle to gain an education.

After about 2-3 generations of kids growing up unable to learn what humanity has learned over the last few thousand years, we can expect society to become completely unable to function, and definitely unable to get into space.

I previously wrote a post about generative image AI being a great filter because of its dangers. But I'm realising it's a more general problem than that.

The great filter is the proliferation of noise, because it's much easier to proliferate noise than signal. I don't know how any civilization solves that.

12 Comments
2022/12/08
13:51 UTC

17

Organic life is impossible on planets orbiting neutron stars and white dwarfs, on account of the Dzhanibekov effect

Due to the strong X-ray radiation that white dwarfs and neutron stars emit, any life on planets orbiting such primaries would abide on the night side of these tide-locked worlds.

But due to imperfections in these rotating spheres of condensed matter, the planets will "flip" at semi-regular intervals, releasing tremendous surges of energy and blasting any air and water off such worlds.

18 Comments
2022/12/04
23:05 UTC

2

Will a nuclear apocalypse actually save us from the great filter (for a while)? :-(

10 Comments
2022/11/30
21:58 UTC

15

The Dzhanibekov effect makes urban civilization impossible on tide-locked worlds

https://www.engineeringclicks.com/dzhanibekov-effect/#:~:text=The%20Dzhanibekov%20Effect%20phenomenon%2C%20also,distinctive%20key%20moments%20of%20inertia.

While it is possible for an urban civilization to recover from this on a freely rotating planet orbiting a star smaller than our Sun, this is not the case for planets orbiting M-type stars that are tide-locked, with one side always facing the primary.

Like other planets, tide-locked worlds will have their equators bisecting the two most massive features on or near their surfaces.

The side facing the primary (the "sun pole") will have a large stony feature: a volcano complex or a mountain range.

The side facing away from the primary (the "star pole") will have an ice cap, most likely frozen carbon dioxide.

Between these lies the "bio-strip", or twilight zone, where life as we understand it would be found.

But this "bio-strip" is not stable.

Like Iceland, it is subject to glacial outbursts that change the local mass relative to the rest of the world, leading to erratic movements about the intermediate axis of rotation between the "sun pole" and the "star pole".

These erratic movements can in turn trigger earthquakes and volcanism that produce a large local mass in the twilight zone.

In this scenario a polar flip has an even chance of swapping the "sun pole" with the "star pole", putting the carbon-dioxide ice cap directly under endless sunlight.

While the local biology will have evolved to survive this, people and their infrastructure are much more fragile.
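For readers unfamiliar with the effect, the intermediate-axis instability behind these "flips" is easy to reproduce numerically. This is a minimal sketch of the torque-free Euler rigid-body equations with a simple explicit integrator; the inertia values and timestep are arbitrary illustrations, not data about any real world:

```python
# Torque-free Euler equations: spin about the intermediate principal axis
# is unstable (the Dzhanibekov effect), while the largest and smallest
# axes are stable. Inertia values and dt are illustrative only.

def euler_step(w, I, dt):
    """Advance angular velocity w = (w1, w2, w3) by one explicit-Euler step."""
    I1, I2, I3 = I
    w1, w2, w3 = w
    dw1 = (I2 - I3) / I1 * w2 * w3
    dw2 = (I3 - I1) / I2 * w3 * w1
    dw3 = (I1 - I2) / I3 * w1 * w2
    return (w1 + dt * dw1, w2 + dt * dw2, w3 + dt * dw3)

I = (1.0, 2.0, 3.0)        # principal moments of inertia, I1 < I2 < I3
w = (1e-4, 1.0, 1e-4)      # spin almost entirely about the intermediate axis
max_off_axis = 0.0
for _ in range(40000):
    w = euler_step(w, I, 1e-3)
    max_off_axis = max(max_off_axis, abs(w[0]), abs(w[2]))

# The tiny off-axis wobble grows by orders of magnitude: the body tumbles.
print(f"largest off-axis spin component reached: {max_off_axis:.3f}")
```

Starting the same run with spin about the first or third axis instead leaves the wobble tiny, which is the stable case the post contrasts against.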

5 Comments
2022/11/26
07:11 UTC

15

The Bizarre Behavior of Rotating Bodies [may be why inhabited planets cannot support industry for prolonged periods]

5 Comments
2022/11/25
07:29 UTC

0

Didn't Elon Musk say we must pass the Great Filter?

Then why all his three-steps-backwards divisiveness with Twitter? I guess it's just more proof that we, even the people who seem to show promise, can't stop being our destructive selves. :(

44 Comments
2022/11/23
04:34 UTC

6

The heliosphere of red dwarfs is too small

https://www.reddit.com/r/spaceporn/comments/y3dsx7/the_heliosphere_shields_our_solar_system_from/?utm_source=share&utm_medium=web2x&context=3

It would seem to me that the solar wind of a red dwarf would not be strong enough to create a heliosphere large enough to shelter a planet from galactic radiation.

4 Comments
2022/10/16
02:46 UTC

23

An unsettling solution to the Fermi Paradox? The Transcension Hypothesis! Advanced civilizations or intelligences scale down toward the Planck scale ("inner space") as they advance, rather than expanding outward into the universe. Could also be thought of as an anti-Kardashev civilization.

5 Comments
2022/09/02
18:11 UTC

38

What if there are other technologies equivalent to fire, but not fire, that give a species the ability to make the "technological leap", so to speak?

Like, octopi, or an octopus-like alien with human-equivalent intelligence living on a water world, might not be able to use fire to get technology going, but there might be something equivalent that kick-starts technological development.

Maybe hydrothermal vents in shallow water somehow?

I hope this makes sense haha.

A relevant book quote from Matter, by Iain M. Banks:

finding their own way up the tech-face, not a tech-ladder; there are varieties of routes to the top and any two civs who've achieved the summit might well have discovered different technologies en route.

9 Comments
2022/07/10
23:39 UTC

22

The Intelligence Gap

First off: I just discovered this sub and I love it. Thank you!

Step 8 might need to be broken down into multiple sub-steps. I think we might be in a great filter right now.

There is a gap in time between when a species achieves intelligence, and when it develops critical thinking. Intelligence logically occurs before critical thinking, and for humans that gap was probably around 300,000 years.

Prior to gaining critical thinking, an intelligent species will ask difficult questions - e.g., what happens after death? Those questions possibly need answers in order for a society to maintain order and advance technologically.

It's possible that most intelligent species confabulate metaphysical answers to those questions. Humans did this, and developed religions. Religions were arguably helpful in controlling societies, establishing order, and ushering in technological advancement. However, in order to move to step 9, an intelligent species must (possibly?) eschew its previous confabulations. That might be a great filter. It is (among other things) what is holding humanity back.

I'd love to hear your thoughts.

15 Comments
2022/05/21
19:05 UTC

21

Podcast with the originator of the Great Filter hypothesis (Professor Robin Hanson) about his latest theory: Grabby Aliens.

Interesting podcast about his latest explanation for the Fermi paradox.

https://www.podcasttheway.com/l/grabby-aliens/

Description copy and pasted below:

Our continually expanding, 14-billion-year-old universe is riddled with planets that could potentially sustain life; so, where is it? Economist, prolific author, and founder of "The Great Filter," Professor Robin Hanson, offers a possible explanation. In today's episode, we take a deep dive into understanding "Grabby Aliens" and the future of humanity.

There are two kinds of alien civilizations. “Quiet” aliens don’t expand or change much, and then they die. We have little data on them, and so must mostly speculate, via methods like the Drake equation.

“Loud” aliens, in contrast, visibly change the volumes they control, and just keep expanding fast until they meet each other. As they should be easy to see, we can fit theories about loud aliens to our data, and say much about them.

"Grabby" aliens is our especially simple model of loud aliens, a model with only 3 free parameters, each of which we can estimate to within a factor of 4 from existing data. The standard hard-steps model implies a power-law (t/k)^n appearance function, with two free parameters k and n; the last parameter is the expansion speed s.

Using these parameter estimates, we can estimate distributions over their origin times, distances, and when we will meet or see them. While we don’t know the ratio of quiet to loud alien civilizations out there, we need this to be ten thousand to expect even one alien civilization ever in our galaxy. Alas as we are now quiet, our chance to become grabby goes as the inverse of this ratio.
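The power-law appearance function in the description above can be sketched numerically. The parameter values below (k = 20 Gyr, n = 6 hard steps) are illustrative placeholders, not the fitted estimates from the grabby-aliens model; the point is only how steeply the odds of a civilization appearing grow with time:

```python
# Hard-steps appearance law quoted above: the relative chance that a grabby
# civilization has arisen by time t scales as (t/k)^n.
# k and n here are illustrative guesses, NOT the paper's fitted values.

def appearance(t, k, n):
    """Relative chance a grabby civilization has appeared by time t (Gyr)."""
    return (t / k) ** n

k, n = 20.0, 6                 # assumed timescale and number of hard steps
ratio = appearance(13.8, k, n) / appearance(7.0, k, n)
# With n = 6 hard steps, appearance by today (13.8 Gyr) is ~59x more likely
# than appearance by 7 Gyr: late origin times dominate.
print(f"{ratio:.0f}x")
```

Note that k cancels in the ratio: the steepness comes entirely from the number of hard steps n, which is why that parameter matters so much to the model.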

More in depth explanation https://grabbyaliens.com

*Warning: Slight audio quality decrease early on

Shortened Bio: Robin Hanson is an associate professor of economics at George Mason University, and research associate at the Future of Humanity Institute of Oxford University. He has a doctorate in social science from California Institute of Technology, master's degrees in physics and philosophy from the University of Chicago, and nine years experience as a research programmer, at Lockheed and NASA. Professor Hanson has 5173 citations, a citation h-index of 35, and over ninety academic publications. Professor Hanson has pioneered prediction markets, also known as information markets and idea futures, since 1988.

Oxford University Press published his book The Age of Em: Work, Love and Life When Robots Rule the Earth, and his book The Elephant in the Brain: Hidden Motives in Everyday Life. Professor Hanson has 1100 media mentions, given 400 invited talks, and his blog OvercomingBias.com has had eight million visits.

Robin has diverse research interests, with papers on spatial product competition, health incentive contracts, group insurance, product bans, evolutionary psychology and bioethics of health care, voter information incentives, incentives to fake expertise, Bayesian classification, agreeing to disagree, self-deception in disagreement, probability elicitation, wiretaps, image reconstruction, the history of science prizes, reversible computation, the origin of life, the survival of humanity, very long term economic growth, growth given machine intelligence, and interstellar colonization. He coined the phrase "The Great Filter", and has recently numerically estimated it via a model of "Grabby Aliens".

12 Comments
2022/03/30
13:33 UTC
