Do you consider yourself more intelligent than Eliezer Yudkowsky?
53
93
resolved May 13
yes
no


5/9 "yes" votes 🤔

I suspect people are trolling, or a joke I told about Manifold killing braincells wasn't a joke.

Eliezer doesn't seem like a von Neumann type - but he's not +1 SD or +2 SD either. +3.5, maybe? If you don't think he has the working memory/chaining/checking horsepower to win a Nobel (if not other requisite skills), then I'm not sure how you explain the reputation of his work among other smart and highly acclaimed people.

No way 5 random people that smart saw this poll in the first 4.5 hours. Eliezer would be able to tell you how unlikely that is, given whatever g-factor I'd like to guess for him, plus whatever sampling bias he's willing to credit to people scrolling Manifold.

I'm gonna call it <1%.
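
(Rough back-of-envelope, with numbers I'm making up purely for illustration: say "smarter than Eliezer" means somewhere around +3.5 SD, and be generous and call Manifold's audience 100x enriched over the general population on that dimension - five independent honest "yes" votes still lands many orders of magnitude below 1%.)

```python
# Back-of-envelope sketch, not a serious model. The +3.5 SD threshold and the
# 100x "Manifold enrichment" factor are made-up assumptions for illustration.
import math

def p_above(z):
    """P(standard normal > z)."""
    return 0.5 * math.erfc(z / math.sqrt(2))

base_rate = p_above(3.5)                          # ~2.3e-4, roughly 1 in 4300
enrichment = 100                                  # generous sampling-bias fudge factor
p_honest_yes = min(1.0, base_rate * enrichment)   # ~0.023 per respondent

p_five = p_honest_yes ** 5                        # treat the 5 votes as independent
print(f"P(one honest 'yes')  ~ {p_honest_yes:.3f}")
print(f"P(five honest 'yes') ~ {p_five:.1e}")     # ~7e-9, way under 1%
```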

Aside from trolling, is this a status/tribal thing? Do people give a dramatically different distribution of answers if you ask about Northernlion? Piratesoftware? Dwarkesh Patel? Conan O'Brien?

Is this baggage attached to the word "intelligence"? Does it feel like you're admitting someone else is top dog if you admit they were dealt a more favorable hand on that dimension? If we call it "cognitive horsepower" or "useful inner richness" or "think-oomph," does it get easier to consider that someone else might have slightly more of it?

I'm calling cap. Y'all need to explain yourselves - even if just to admit you thought it'd be funny if everyone said "yes."

4/5 are deleted profiles 😆

@jim Ah, the many hidden geniuses lurking on the web...

I do kinda think that's a real thing. Not people who've honed and built upon their native intelligence and turned it into a shape that's super-effective, but the type of person who just chooses to use it to be a clerk or race model airplanes. I think our civilization is actively damaging a bunch of intelligent people from fairly early in their lives, so you end up with a handful of visible geniuses per generation, and for each visible one there are a few dozen or a few hundred more unknown people who had that kind of potential - but they ended up in the wrong shape to put it to effective use.

But I don't think those 5 are them 🙃 those types probably aren't on Manifold often. I don't know what they're doing, but my guess is it's not whimsically responding to polls.

😆

@NevinWetherill You seem invested in this, so can you explain to me, in simple terms I can understand, what you think makes the guy so exceptionally intelligent, and what intelligence is?

@admissions I've been following Yudkowsky's work since I was ~17. There is a bit of a fandom effect in that experience: he's written 3 of my top 5 favorite books, and took the #1 spot on that list by a large margin.

I do also think I'd raise my eyebrow if a bunch of people on here claimed to be smarter than Eric Weinstein, Robin Hanson, Gwern, or any number of other people who I perceive to be in the 1/10,000 - 1/1,000,000 range of "general intelligence." This is, I think, mostly a reaction against people seeming to dismiss the talents of others for weird reasons - and maybe 10-20% of my reaction was "Really? Yudkowsky? That's the guy you think you have more brainpower than?"

Why do I think Yudkowsky is smart? Well, it's difficult to give a complete and clear answer when someone is noticeably above your level. He has the recognition of other smart and accomplished people. He made novel contributions to some academic fields which had gotten seriously stuck before he came along with brilliant ideas (I'm thinking of decision theory here, but I think the field of AI is also largely failing to keep up with his contributions). He has written millions of words which explain complicated abstract concepts in a way I'd put at least on par with Douglas Hofstadter. He has pointed out math errors in others' work - including an algebraic error in one of E.T. Jaynes' books. He has also done a few impressive, mysterious things to prove points, like his "AI Box experiment."

What is intelligence? Yudkowsky would say that intelligence is the ability to understand the causal structure of the world - how different changes cause different features to be more or less represented in the future - and to apply that understanding to figure out how to steer the future into particular configurations. You want to understand something? Well, what processes cause an entity like you to eventually understand something, or cause you to misunderstand it? Take that understanding and choose the policy which steers you into a position in which you've understood the thing. Relatively more intelligence means you see more connections more reliably, select better thoughts to think, plan better, and end up on Pareto frontiers with denser and more relevant collections of problems that your mastery is sufficient to handle - you make a map that better reflects the territory, and use that map to navigate successfully.
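
(If it helps to see that compressed into something concrete, here's a toy sketch - entirely my own illustration, not Yudkowsky's formalism - of "steering" as picking whichever action your world-model says makes the preferred outcome most likely.)

```python
# Toy illustration only (my construction, not Yudkowsky's formalism):
# "intelligence as steering" = use a model of how actions shape outcomes,
# then pick the action that makes the preferred outcome most probable.

# A made-up world-model: P(outcome | action) for a hypothetical agent.
world_model = {
    "flail blindly":   {"understood": 0.05, "confused": 0.95},
    "read the manual": {"understood": 0.60, "confused": 0.40},
    "ask an expert":   {"understood": 0.80, "confused": 0.20},
}

def steer(model, preferred_outcome):
    """Choose the action whose predicted future contains the most of what we want."""
    return max(model, key=lambda action: model[action][preferred_outcome])

print(steer(world_model, "understood"))  # -> "ask an expert"
# A better map (a more accurate world_model) and a better search over actions
# both cash out as more reliable steering toward the configuration you want.
```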

If you look at it from that angle, it should be more obvious why I think he's intelligent. The ability to operate on abstract concepts successfully, to perform those motions such that your feet land firmly on every step - having a mind that does less stumbling and groping and blind flailing across many different domains. Writing such that you can produce a 2-million-word story that keeps readers engaged and teaches them things - without a bunch of plot holes or weird, unrealistic flaws in your worldbuilding. Explaining concepts to people in a way where they say "ah, that makes sense, that seems extremely useful," and then they go out and use it and it actually works.

None of this is meant to push others down too low or him too high. I do have a sense that Eliezer got a bit lucky in running across the right ideas at the right time, and had the right kind of weird personality/mindset to get really good at the things he worked to get really good at; on the whole, he does seem to have failed quite a bit at stuff he has attempted. He made some strong predictions which turned out to be wrong (he bet against the LHC finding the Higgs boson), and in a bunch of places like charisma, work ethic, and bodily health he's ended up in a bit of a sub-par local maximum, where he can't really improve those things without breaking a bunch of other stuff in his life. He's not smarter than everyone on Earth right now - there are probably >10,000 living humans with more native intelligence - but he has managed to become the best in the world in a place where a bunch of skills intersect. He's better at "thinking about thinking" than anyone else I've heard of. Ethics, rationality, science, philosophy, the ways minds interact with their reality and how that whole picture fits together - yeah, Eliezer is kinda the GOAT in that region, if only because he's the one who managed to climb up the "shoulders of giants" by a slightly better path than anyone else I've ever heard of.

Your own mind is probably running some memes that originated from or were popularized by Yudkowsky. He's changed culture quite a bit. There are good reasons why he has the status he does - whatever you think about the things you've seen said about him, it's worth acknowledging that he has made huge ripples among the people who have spent time in areas which intersect with areas he's done work in. Effective Altruists, Silicon Valley Tech Bros, AI Researchers, scientists, philosophers, economists, fiction/non-fiction writers, forecasters...

I would recommend reading stuff he's written - probably obvious from how I've been talking about this - but yeah, he's smart in a way that comes across really easily and is enjoyable, for me at least, to witness in action. He's more accessible than many other very smart people, who often don't put much effort into being understandable. There's still stuff that'll make your eyes cross, but he does a good job of making some of the content, rhythms, and themes relatively easy to understand and pick up for yourself.

@NevinWetherill In exchange for your extensive reply, I can at least explain why I voted "Yes". There wasn't much analysis behind it, I'm afraid. When I hear the name, what comes to mind are a few memorable tweets, such as the one where he claimed - as I remember it, which may be wrong - that "basically everyone would replace their current life partner for someone 33% better," or something like that. So I think "this is dumb, I have better thoughts" and vote yes.

Plus, I am strongly biased against people not engaged in hard sciences or mathematics or at least engineering, and I am quite sceptical of Yudkowsky's brand of AI Safety as a research field.

@admissions Yeah, lmao, I can understand why takes like that would make you think "this guy seems ridiculous."

His point with that tweet, in my assessment, was that he was complaining about the lack of transparency and honesty in relationships, and trying to push the "Relationship Overton Window" in a direction he thought made more sense.

Like, almost no one ends up in a relationship with literally the best person in the world for them. You end up with someone that's tolerable, and you learn about each other and change for each other in a way that makes things better over time. But what about the hypothetical where someone comes along and is just 20x better for you than your current partner? In the current culture, you're supposed to answer that hypothetical with "nooo, of course not, that's impossible, and even if it happened, I'm committed to you and you only." But is that actually the best thing to say? Is it actually true?

He's not saying people should dump their partners for someone slightly better at the first opportunity - he's saying that in a relationship you should ask yourself about the possibility of finding someone WAY better, and whether it's possible for our culture around relationships to change such that being honest about how you'd act in that situation is something people can talk about and make agreements about. I doubt he even meant it as fully generic advice about relationships, more that he was pointing at stuff people seem to say a lot and going "you wouldn't talk this way in a world with more honesty, transparency, and trust."

I think it was just a weird idea that's easy to mischaracterize and misunderstand - but I think it's a decent place to look if you're reflecting on your philosophy about relationships, trust, and honesty. I don't think it lands quite as hard with people who still think "white lies" are a good thing, and that relationships need to be founded on some kind of mutual fantasy...

https://youtu.be/lXpmHuCE9Ls?si=PpNOJP6dXQ2gMqYn

I think that bias about people not in hard sciences is a bit misplaced with Eliezer.

He doesn't have many of the vibes/attitude/presentation/credentials of a Very Serious Scientist/Engineer, and his skillset is definitely deficient in some of the areas associated with those people, but in my opinion he would be valuable as an advisor to most science/engineering projects. In my impression, he has the mental habits/cognitive horsepower/background knowledge to provide useful help with most parts of something like that.

Yudkowsky's brand of AI Safety is fairly well regarded in the broader field - even if there are a lot of people who are unhappy with the comments he makes about the health and future prospects of that field. A lot of his work is required reading in AI Safety courses. His technical writeups on concepts like Solomonoff Induction, Extrapolated Volition, Decision Theory, Bayesian Reasoning, and Expected Utility are cited all over the place whenever people talk about agents and mind design in the abstract.

I suspect his loudest critics are those who see AI Safety as a purely "empirical" field, where concerns about technical conceptual frameworks get ignored in favor of "try stuff and measure it" approaches - and, well, it's sometimes easy and sometimes hard to tell who is making the better point there, depending on how deep you dig into the details and arguments. I'd recommend reading "'Empiricism!' as Anti-Epistemology" by Yudkowsky to see if that makes it clear where this gap is located and where Yudkowsky sees his position in relation to it.

I'd also recommend trying not to be moved too deeply by people sneering at or dismissing Yudkowsky as an unserious and misguided person - there are a lot of people who love to try to dismiss his perspective, and most of the time it's pretty obvious to me that they don't understand it or are just doing some sort of in-group/out-group status signaling thing. But if you haven't looked into MIRI or read the papers under "See Also" on Yud's Wikipedia page, it can be hard to see that these people are being unfairly dismissive.

I'd also recommend reading "2018 Update: Our New Research Directions" from MIRI, which covers a lot of the broad character of the Yudkowsky brand of AI Safety work - especially the concept of "deconfusion."