If Eliezer Yudkowsky charitably reviews my work, he will update his p(doom) to < 0.1. (Below 10%)
2030 · 12% chance

I believe Eliezer has the best overall understanding of the issues related to alignment.

Also, I'm confident there is a solution.

This will only resolve if Eliezer looks at my algorithm.

If he does, this will only resolve yes if, after review, his new p(doom) is < 0.1. (10%)

It will only resolve yes with Eliezer's consent.

It will resolve no otherwise.

2 min general overview by request.

https://www.lesswrong.com/posts/qqCdphfBdHxPzHFgt/how-to-avoid-death-by-ai?utm_campaign=post_share&utm_source=link

Update: Due to integrity and logistical concerns raised in the comments, the above prediction will resolve N/A on Jan 1st 2026. Also, I will no longer bet this market above 2%.


Here's an argument if anyone would like to engage with the claims I'm asserting.

https://manifold.markets/Krantz/which-proposition-will-be-denied?r=S3JhbnR6


Update: Due to integrity and logistic concerns in the comments, the above prediction will resolve partially to the current market evaluation on Jan 1st 2026. Also, I will no longer bet this market above 2%.

Surely an N/A is better?

I don't believe I am able to do that. It would need to be an admin, correct?

Overall, I am open to whatever method incentivizes users to produce an accurate prediction.

Resolving to N/A seems like it would produce many comments criticizing the market as a complete waste of time (assuming people had high confidence it would default to N/A).

With partial, people could at least bet it down to 2% and profit from the wagers I've already placed.

Also, this isn't Rob Miles, correct?

If you are, please just provide me a secure place to send you a document (after somehow demonstrating that you are).

@Krantz One problem with resolve-to-market is that the incentives are weird - e.g. someone could bet it up not because they think the true probability should be higher, but because they think they can spend enough to keep it up there and then it will resolve to their profit. At low percentages, someone playing that game has a significant edge, e.g. at 10%, NO bettors would have to out-spend them at 10:1. In a normal market that I believed would resolve to reality, I'm willing to outspend 10:1, or to take as big of a bite as I want and then allow the market to do what it wants - I don't have to keep the price down to profit. But if it's going to resolve-to-market, then I have to bet that my bankroll is 10x theirs or else I lose.
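The arithmetic behind that 10:1 figure can be sketched as follows. This is a minimal illustration, assuming a simple share model where a YES share costs p and a NO share costs 1 - p; `outspend_ratio` is a hypothetical helper, not anything in Manifold's API:

```python
# Sketch of the bankroll asymmetry under resolve-to-market, assuming a
# simple share model: a YES share costs p and a NO share costs 1 - p.
def outspend_ratio(p: float) -> float:
    """Mana a NO bettor must spend per mana of YES buying to match
    share volume (illustrative only; ignores price impact)."""
    return (1 - p) / p

# At a 10% price, pushing back against a manipulator costs NO bettors
# roughly 9x per share, which is the ~10:1 edge described above.
print(round(outspend_ratio(0.10), 6))
```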

The resolution could be conditional on the market holding steady at a given percentage for a full week.

In other words, it would resolve as soon as the following conditions are met.

  1. It's after Jan 1st 2026

  2. The percentage has not changed for 7 days.
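The two conditions above can be sketched as a single check. This is a hypothetical helper illustrating the proposed rule, not an implemented Manifold feature:

```python
from datetime import date

def can_resolve(today: date, prices_last_7_days: list[float]) -> bool:
    """Hypothetical check for the proposed rule: resolve to market only
    after Jan 1st 2026 AND once the price has held steady for 7 days."""
    after_deadline = today >= date(2026, 1, 2)  # strictly after Jan 1st 2026
    stable = len(prices_last_7_days) == 7 and len(set(prices_last_7_days)) == 1
    return after_deadline and stable

print(can_resolve(date(2026, 3, 1), [0.02] * 7))           # stable week -> True
print(can_resolve(date(2026, 3, 1), [0.02] * 6 + [0.03]))  # price moved -> False
```

As the replies below note, a rule like this is gameable: anyone in the red can nudge the price once a week to postpone resolution indefinitely.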

I'll wait to hear everyone else's complaints before I adjust the description...

Resolve-to-market just makes this some sort of whalebait; it would likely just hover randomly around 50%.

@Krantz That's also been tried elsewhere; it has a related issue where anyone can always make the percentage change temporarily if they don't want it to resolve at the current value (e.g. because they're in the red). Check out other 'self-resolving' markets around here - a lot of related things have been tried, and as far as I know no one has come up with a satisfactory solution to this problem, though some are more broken than others.

This will only resolve if Eliezer looks at my algorithm.

I knew this might not be quick, or might not happen at all. I know some users don't like long-term markets, especially after loans were removed, but I can't say I didn't know what I was getting into here. I was only surprised by how many YES shares @Krantz was willing to buy well above 2%, but that's not an issue on its own.

I'd be ok with the market staying open, under the original terms. Or an N/A. But changing the market to new criteria does not sit well with me. No matter how well-intended, the suggestions so far are isomorphic to whalebait or other toy markets; if I want to play in a toy market, I'll go find one, I don't need a game retroactively applied to my already-held shares.

FWIW, I actually think the situation with resolution criteria here has parallels to the original question. @Krantz thinks he has a simple solution, proposes it, but it turns out to have some simple flaws. There's no disrespect meant here. I'd like to see prediction markets used more in this way. And God knows we could use more minds working on the alignment problem. I think we'd be friends if we knew each other IRL, you have a lot of interesting knowledge about this topic. But I've watched your videos, read your links, tried to follow the clues you've revealed (bad practice for an 'infohazard'), and I still don't think you've found a solution, and I'm holding more NO shares than anyone else, Ṁ where my mouth is.

I pledge on Kant’s name to try as hard as I can to consent.

This is from the criteria in your other related market. But if you can hold the same standard here, I think you will eventually come around to resolving this NO, with or without further input from EY.

Also, this isn't Rob Miles, correct?

(I'm going to roll a d10 here with the standard terms for plausible deniability) Nope! Just a fan with a similar name. Sometimes I think I might be this Rob though: https://www.explainxkcd.com/wiki/index.php/Rob

Wow.  This is incredibly charitable!  Thank you for the time you've invested!  I would be happy to modify the criteria to resolve N/A on Jan 1st 2026.  The only reason I chose partial instead was to avoid being a burden on moderators (I understand an admin would be required for this, and the FAQ recommends avoiding that).  I figured that everyone would expect, and be content with, the market being bet down to and resolving at 2%, thus only distributing my mana to other users.

The goal of this, for me, is not to earn mana, but to convert mana into attention on an issue I think is important to think about.

"Krantz thinks he has a simple solution"

Well, sort of.  The 'simple' part of the solution is to build an interpretable machine that effectively pays individuals to learn important things (a collective intelligence). 

That's a pretty simple solution, and I would expect others to see relatively easily that it could be a game-theoretic way to approach the problem (a solution for how to 'teach' the world at scale not to build dangerous systems in the first place).  This is what https://manifold.markets/Krantz/krantz-mechanism-demonstration?r=S3JhbnR6 and other predictions were aimed at.  If anyone has an argument for why such a machine (assuming it worked) wouldn't help that particular situation, I'd love to hear it.

 

The part I can't expect others to understand is the architecture for how that could be computationally feasible or scalable.  That part is probably going to take several hours of rigorous examination and a decent understanding of the aims of Doug Lenat and Danny Hillis.  That part is what I'd like for @EliezerYudkowsky to look at.  I've been trying simply to get his attention for several years now with no success.  I'm typically blocked at every attempt, so I'm kind of out of options.  If I can convert a couple hundred dollars' worth of mana into a charitable price for some of the people around him to become interested in the approach, maybe their influence will do more than another desperate email from me.

Thanks again.

Krantz Mechanism Demonstration
This prediction is aimed at demonstrating the function of the krantz mechanism. The krantz mechanism is an abstract format for storing information. Like 'the set of integers', it does not exist in the physical world. To demonstrate the set of integers, we make physical approximations of the abstract idea by writing them down. Humans have created symbols that point to these elements, shared them with each other (to ensure the process of decentralization), and agreed to use the same ones so nobody gets confused.

To demonstrate the krantz mechanism, we make a symbolic approximation of the abstract idea by writing down elements of the form ('Any formal proposition.', confidence=range(0-1), value=range(0-1)). My claim is that the krantz mechanism is an operation that can be applied to approximate a person's internal model of the world. Much as LLMs can scale indefinitely to approximate a person's model of language itself, krantz aims to compress the logical relations among observable truths that language points at.

For this demonstration, I will not be using value=range(0-1) and will only be using ('This is a proposition.', confidence=range(.01-.99)), due to limitations of the current Manifold system. If I were working at Manifold, I would recommend creating a secondary metric for expressing the degree to which the 'evaluation of confidence' is valuable. This will later play a critical role in the decentralization of attention.

A proposition can be any expression of language that points to a discrete truth about the user's internal world model. Analytic philosophers write arguments using propositions. Predictions are a subset of propositions. Memes and other forms of visual expression can also be true or false propositions (they can point to an expression of the real state of the observable Universe).
Things that are not discrete propositions: expressions that contain multiple propositions, blog posts, most videos, or simple ideas like 'liberty' or 'apple'.

Each proposition (the first element of the set) must have these properties:

(1) Is language. A language can be very broad in the abstract sense of the krantz mechanism, but for this posting we will restrict language to a string of characters that points to a proposition or a logical relation of propositions.

(2) Is true or false. Can have a subjective confidence assigned to it by an observer.

(3) Has value. Represents a market evaluation of how important that idea is to society. This is distinct from the value the idea has specifically to the user. It is aimed at representing roughly what a member of society would be willing to pay for that truth to be well accepted by society.

One example of this is taxes. Ideally, we pay our taxes because we want someone to establish and become familiar with the issues in our environment that are important to us, make sure people know how to fix them, make sure society will reward them if they do, and make sure society understands the need to reward the individuals that resolve those problems. We pay our taxes so people will do things for us. We do this on social media with likes and shares: we give up attention to the algorithm to endorse and support other ideas in society because we believe in investing value into directing attention to them. We do this in education and professional careers: our economy is driven because it rewards those that succeed in figuring out how to do what society needed done and in directing the energy to do it. We give our children rewards for doing their homework because it is important for engineers to understand math. Soon all that will be left is learning and verifying where to direct the abundant energy to accomplish the tasks that we should. It is a way of pointing the cognitive machinery of the collective consciousness.
Propositions as logical relations: since propositions can be written as conditionals, like 'If Socrates is a man, then Socrates is mortal.', or as nested conditionals, like 'If all men are mortal, then "If Socrates is a man, then Socrates is mortal."', it follows that the work of analytic philosophers can be defined within a theoretical object like the krantz mechanism. For example, every element of the Tractatus is a member of the krantz set. Every logical relation defined by the outline of the Tractatus is a member of the krantz set. As a user, you can browse through the Tractatus, assign a confidence and value to each element (explicit and conditional), and create a market evaluation of the ideas contained. If hundreds of other philosophers created works like the Tractatus (but included confidences and values), then we would be able to compile their lists together into a collective market evaluation of ideas. A collective intelligence. Society seemed to enjoy continental philosophy more, so we ended up with LLMs instead of krantz functions.

It is important to note that philosophy is quite a wide domain. The content in the krantz set can range from political opinions, to normative values, to basic common-sense facts (the domain approximated by CYC), to arguments about the forefront of technology and what future we should pursue. We could solve philosophy. We could define our language.

If @EliezerYudkowsky wanted to insert his full Tractatus of arguments for high p(doom) into such a system, then we could create a market mechanism that rewards only the individuals that either (1) identify a viable analytic counterargument that achieves greater market support or (2) align with his arguments (because, assuming they are complete and sound, it would allow Eliezer to exploit the market contradiction between propositions that logically imply each other). For example, if Bob denies 'Socrates is mortal.' but has accepted 'Socrates is a man.' and 'All men are mortal.', both with confidence(.99), then he will create a market variance by wagering on anything lower than the product of those confidences. So why would Bob wager in the market of 'Socrates is mortal.'? Liquidity. Somebody that wants Bob to demonstrate to society that he believes 'Socrates is mortal.' injected it.

Limitations of the current incentive structure in Manifold, and what I'd recommend considering changing: there is a fundamental difference between Manifold and Metaculus. The difference is whether or not capital needs to be put down in order to earn points. Metaculus embraces a mechanism that is beneficial for the user with no capital but insight into the truth. Manifold uses a mechanism that requires capital investment but attracts more participation. Both concerns can be addressed by handling liquidity injection as the primary driver of incentives.

When a user wants to inject capital into a market, they can inject liquidity into particular propositions directly (this could also be done by assigning a value(0-1) and distributing liquidity proportionally across propositions from a general supply). That liquidity is dispersed evenly, across either all users or a select group of users, as funds that can be wagered only on that proposition. Think of the liquidity as a 'proxy bet' the house (in this case, the individuals injecting liquidity into the market) is letting the user place on what the market confidence will trend to. If the user fails to place a wager, the free bet may be retracted if liquidity is withdrawn from the fund. If the user places a wager and loses, the user can no longer place any free wagers on that proposition (unless further liquidity is contributed, which would then allow the user to freely invest the difference), but may still choose to wager their own funds up to the amount allocated for the free wager.
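The variance Bob creates in the Socrates example above can be sketched numerically. This is an illustration only; `implied_floor` is a hypothetical helper, not part of any existing system:

```python
# Illustrative sketch of the coherence check described above: if Bob accepts
# 'Socrates is a man.' and 'All men are mortal.' each at confidence .99,
# wagering below their product on 'Socrates is mortal.' exposes a
# market contradiction that others can exploit.
def implied_floor(premise_confidences: list[float]) -> float:
    """Lower bound on the conclusion's confidence implied by the premises."""
    floor = 1.0
    for c in premise_confidences:
        floor *= c
    return floor

floor = implied_floor([0.99, 0.99])
print(round(floor, 4))        # the product of the two premise confidences

bob_wager = 0.95
print(bob_wager < floor)      # Bob's wager sits below the implied floor
```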
If a user places a proxy wager for 100 and the market moves to where they have a value of 1000, they can choose to sell their shares, receive 900 credits, and reimburse the proxy liquidity to be used in the future. In other words, if you have 500 mana and 10 people, instead of injecting 500 mana of liquidity into a market that 10 people will be incentivized to wager their own funds on, we should give 50 mana in 'proxy wagers' to each person, with the expectation that they will have to invest cognitive labor to earn a profit that they get to retain. The two important principles here are that the liquidity injections (which can be contributed by any users) determine (1) the ceiling on the initial investment in a proposition and (2) the extent of the losses the liquidity will cover.

Overall, there are many aspects of this demonstration that would have to come together to provide a true approximation of what I'm trying to convince the open source community to build.

1. The function of the system would have to exist decentrally. This could happen if we truly considered the krantz mechanism an abstract function that various competing markets compete to align. Much as there are different sports betting applications with different interfaces individuals can choose to use, the actual 'sports odds' are distributed across a network of competitive forecasters and are kept accurate by the market.

2. Each account would need to be humanity verified. This is the biggest hurdle to overcome. It is important to understand that this would not require restricting users from using bots. It would only restrict them to having one account that 'speaks on their behalf'. In other words, as long as we don't have individuals with multiple accounts, we can hold individuals accountable for the actions of their bots.

3. It would take an incredible investment of liquidity to see the scaling effects of such a market.
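The proxy-wager settlement described above can be sketched as follows. These are assumed mechanics of the proposed system, not an existing Manifold feature; `settle_proxy_sale` is a hypothetical helper:

```python
# Minimal sketch of the proxy-wager lifecycle: on a sale, the user keeps
# the profit and the proxy stake returns to the liquidity pool.
def settle_proxy_sale(proxy_stake: float, sale_value: float):
    """Return (user_profit, reimbursed_liquidity) when shares are sold."""
    user_profit = max(sale_value - proxy_stake, 0.0)
    reimbursed = min(sale_value, proxy_stake)
    return user_profit, reimbursed

# The example from the text: a 100-mana proxy wager grows to 1000;
# selling pays the user 900 and reimburses the 100 of proxy liquidity.
profit, reimbursed = settle_proxy_sale(100, 1000)
print(profit, reimbursed)
```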
Enough liquidity to incentivize everyone to evaluate and wager on the portions of their own Tractatus that you are willing to pay them to think about.

In general, the purpose of this demonstration is to show people a way to directly incentivize others to realize what they might want to think about, by providing game-theoretic market incentives that require them to wager on their beliefs.

I've written several other questions that demonstrate aspects of this project.

Examples of how we might use this process to align constitutions:
https://manifold.markets/Krantz/if-a-friendly-ai-takes-control-of-h?r=S3JhbnR6
https://manifold.markets/Krantz/if-the-work-between-anthropic-and-t?r=S3JhbnR6

Wagers on whether a process like this is viable:
https://manifold.markets/Krantz/is-establishing-a-truth-economy-tha?r=S3JhbnR6
https://manifold.markets/Krantz/if-eliezer-charitably-reviewed-my-w?r=S3JhbnR6
https://manifold.markets/Krantz/this-is-a-solution-to-alignment?r=S3JhbnR6

A paraphrase of the process of recommending propositions to wager on:
https://manifold.markets/Krantz/if-a-machine-is-only-capable-of-ask?r=S3JhbnR6
https://manifold.markets/Krantz/polls-should-able-to-function-like?r=S3JhbnR6
https://manifold.markets/Krantz/define-a-function-that-converts-cap?r=S3JhbnR6

Attention mechanisms:
https://manifold.markets/Krantz/what-is-the-most-important-question?r=S3JhbnR6
https://manifold.markets/Krantz/what-topicsissueevent-should-the-ge?r=S3JhbnR6
https://manifold.markets/Krantz/what-person-should-have-the-most-co?r=S3JhbnR6
https://manifold.markets/Krantz/which-predictions-on-manifold-are-t?r=S3JhbnR6

Please consider finding an argument you think is important for people to understand and mapping it. I personally think it would be amazing if predominant philosophers could offer other philosophers they disagree with 'free bets' to wager on specific crux issues. It would be a function that turns capital into attention.

@manifold - Please consider implementing these changes.
I'd love for folks that are capable of writing these sorts of arguments to earn a living by writing them, while at the same time providing individuals with no capital the opportunity to earn something for paying attention to them.

This market will not resolve; it is simply intended to serve as a demonstration of a mechanism for allowing philosophers to wager on what language ought to be, in a manner that uses the market to lead them to their cruxes.

*This demonstration does not include the algorithm I keep talking about. Its function is to evaluate the variances between user 'Tractati' (plural of Tractatus??) and recommend inferential trajectories that efficiently surface contradictions within the system.

The holders tab here is really astonishing: not even a handful of people betting a few mana on YES for the lulz. It's currently 2 vs 187, in case this changes after the time I post this.

Krantz might be hurting himself by obfuscating the real odds of this market, which could have attracted Yudkowsky to actually review his work.

Eliezer might have a Cheerful Price (https://www.lesswrong.com/posts/MzKKi7niyEqkBPnyu/your-cheerful-price) at which he would happily do the review, even if it would otherwise be an "obvious waste of his time".

Looking over this again after a week away, can I just say that I think everyone is being way too charitable and nice to @Krantz here?

I don't think anyone who's commented except @Krantz would put any substantial probability (above .01, and honestly even .01 seems high) that this would resolve "yes" if @EliezerYudkowsky took a real look. But also there is no reason for him to take a real look because, again, it's just so obviously a waste of his time. But it's also obviously a waste of all of our time too!

The only reason this bet keeps showing up is that @Krantz is single-handedly propping up the "yes" side of the not-very-liquid market, but it's also totally obvious they're not going to take any of the plausible actions that would cause this bet to resolve before 2030. I don't think @Krantz is literally trolling us, but as @ms pointed out, this presents scarily similarly to people who think they've invented perpetual motion machines and are afraid to share the secret. The epistemics here are terrible and we should stop playing this dumb game. There is nothing to forecast.

@Krantz, you're wasting a lot of people's time propping up this "market."

bought Ṁ1,000 YES from 15% to 21%

Your last sentence is incorrect. I decided myself to spend time here; nobody is forcing us to be here or to trade here. Personally, I just want to follow how his attempts unfold.

@Krantz why did you pick Eliezer as the only reviewer?

This is disappointing. 

Not because it's mean. 

Not because it hurts my feelings.

Not because it makes me feel any less confident that this proposition is true.

 

I actually really wish it did make me feel less confident that this proposition is true.

But that would require new information for me to think about.

You'd have to actually listen to the stuff I've already said, think about it, come up with a good substantive point about it and then share that with me instead.  You are not doing that.

 

You are clearly not aiming to do that.  You are instead simply repeating your pre-existing assumption that it is not possible for a solution to exist unless that solution has already been entered into the public domain and been seen/understood by thousands of people.  That belief has no justification.

 

"It's totally obvious they're not going to take any of the plausible actions that would cause this bet to resolve before 2030."

If this is true, it is because there exist no plausible actions that would cause this bet to resolve truthfully before 2030.  Nobody here, to my knowledge, has pointed out any clear actions I can take.  Continuing to suggest I should just publish the work openly isn't engaging charitably with what I'm saying.

 

I thought Manifold was created for individuals to replace all this sort of rhetoric with their money?  Isn't the market supposed to determine the legitimacy of predictions?  Well, if that's true, then there is a 20% chance right now that I've got a really important paper for Eliezer to read.  If that seems wrong, then fix it.

 

I could understand the concern if you thought I was planning on just resolving the wager and stealing everyone's mana.  I am not going to steal your mana.  I am going to win your mana. If you are confident enough to risk it.

I'm not going to engage with the details of this because I have nothing new to add; I think the epistemics in this situation are bad. I agree that as far as I can tell, you believe you are not trying to steal people's mana and that you are going to win.

Thanks. That would be helpful.

The thing I'm trying to accomplish here is to get Manifold users to honestly assess their own confidence in whether they should write off my claims before actually examining them.

The way everyone talks, you'd think they were all at least 99% confident.

As far as I can tell, the most confident person is only actually about 80% confident.

@Krantz when you invited people to bet on their beliefs, you did not take market duration into account.

Even if a person is 100% sure this market resolves NO, they would gain only +25% over the six years of this market. (The structure of this market implies it can resolve YES early, but cannot resolve NO early.)
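A quick check of that return math, as a sketch: assuming the market trades near 20%, a NO share costs about 0.80 and pays 1.00 on a NO resolution.

```python
# Return on a NO position bought at a 20% market price and held to a
# NO resolution roughly six years out.
total_return = (1.00 - 0.80) / 0.80             # +25% over the market's life
annualized = (1 + total_return) ** (1 / 6) - 1  # spread over ~6 years

print(round(total_return, 4), round(annualized, 4))
```

The annualized figure comes out under 4% per year, which is why long-dated markets struggle to attract NO capital even at near-certain odds.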

Such a person could easily find other opportunities with returns after a couple of weeks.

Another factor is diversification: a person can say he is 99% confident, but he still allows the possibility his evaluation is wrong, so he will not bet at all unless there is some non-trivial margin between price and personal evaluation. (A margin strategy also helps Calibration score a lot.)

Third thing: pushing probability to extremes is less profitable than slowly buying at better prices.

A 20% price DOES NOT mean the market evaluates your probability as 20%.

Only with a large number of markets can we say that ON AVERAGE price equals probability; we cannot say that about each individual case.

As far as I can tell, the most confident person is only actually about 80% confident.

By the way, mirroring your own construction, we get:

"As far as I can tell Krantz is only actually 20% confident".

bought Ṁ1,000 YES from 16% to 21%

These are great concerns, thanks. 

The market can resolve either 'yes' or 'no' at any time, if Eliezer wanted it to, but yes, I can see why someone might believe he will never look at it and thus the market would just sit here.  Funds being tied up is an unfortunate consequence of the system.  Do you see any ways around it?

Also, I think extreme bets can have tremendous value when it comes to exploiting market variation.  For example, you cite me as saying "I'm 20% confident that I can convince the smartest guy on the planet that he's completely wrong about the thing he's most confident in."  That seems like a pretty bold claim.

It seems to me that if there were a rational well calibrated person that thought they had a better than 1% chance of having information that would help us navigate AI, they'd be worth at least listening to, right?

Hopefully you can see why a function like this (a way for researchers to 'bet' on their work before publishing it) would be valuable to society right now.  In general, do you think there are any ways I could re-frame this question (Or perhaps convince Eliezer to create a question that allows many researchers to submit/wager on their work)?

What I've been trying to do is help with a path to resolution. Assuming you have a great solution, not publishing prevents us from hyping it which is the only reasonable way Eliezer is going to read it, other than paying him 4-6 figures, imo.

But if you're not going to publish it (narrowly even):

1) do you have a simple self-contained private link (like a Google doc) for the whitepaper? Or a PDF you're willing to email, ready to go now, if he says he'll read it?

2) how long is the doc? This lets Eliezer (and those of us betting) know the cost of reading it.

If anyone can get me a pass into ZuVillage, that might resolve this prediction faster.

If Eliezer charitably reads the 2 min overview and comments here about it, would that count for resolution? He doesn't watch video, so I'd like to know the specific link you want him to read for this to resolve.

bought Ṁ500 NO

Also I worry he'll conclude with a single paragraph which you might deem uncharitable. If his conclusion is short and not to your liking, will you still resolve NO or will you wait for another review?

No.  The 2 min overview does not contain my algorithm at all.

My algorithm is contained in a technical whitepaper describing how to build a decentralized symbolic collective intelligence.  That paper has existed since 2010.  When I learned about Bitcoin, I thought 'Neat, they're kind of doing what I'm trying to do, but just for financial transactions'.

If I put my paper into the public domain, I will be responsible for accelerating aspects of AI that are existentially threatening.  I WILL NOT put it into the public domain.

I am not a teenager on the internet that watched a couple of youtube videos and thought of a neat way to solve AI.  I'm a forty-year-old man that has studied math, physics, philosophy and artificial intelligence as an autodidact most of his life, rigorously studied the work of Doug Lenat and Danny Hillis, and worked on methods to scale and align a more explicit version of their work by outsourcing content collection in a very specific algorithmic way to a decentralized verification system.  This involves several strategies from collective intelligence (cci.mit.edu), cryptography and social media.

I am claiming to have a 'transformer level breakthrough' for the symbolic domain of collective intelligence.  Robin Hanson might call this our 'culture machine'.  I think we can formalize our 'culture machine' (which Wittgenstein might just call our 'language ledger') into a smart constitution and let people interact with it explicitly.  Pay them to align it.  Pay them to do the work Doug had dozens of grad students labor over for 40 years.

I have been trying very hard to share my work with researchers (like Lenat and Hillis) and philosophers (like Bostrom and Chalmers) without inserting it into the public domain since before 2012.

The shit has hit the fan since transformers took off, and I've had to pivot to trying to get the attention of Eliezer and similar individuals instead.  If he gives me any possible way to show him this without putting it online (I'd be more than happy to drive it to MIRI's doorstep if needed), and if he reads my paper, I'll resolve these bets however he likes.

The 2 minute overview was a brief attempt to explain 'the general concept' behind the project to the people that are incapable of looking at the numerous links I provided and figuring out things on their own.

The problem in today's society is that there are probably thousands of other individuals in the world stuck in the same situation I'm in, with information that may be helpful but isn't being shared with frontier research because it's being protected, filtered, or simply unnoticed, and Eliezer doesn't know how to sort through all of that.  We don't know how to communicate really complex ideas between one another at scale.  That's what this machine does.

So how long would it take him to read if he somehow got access to it?

@DanielFilan, didn't you volunteer to look over proposals like this one at some point?

bought Ṁ250 NO

I don't remember doing so, and I don't want to look at this.

MIRI doesn't have an office anymore, iirc, and I think for any chance of this resolving in 2024 instead of by default in 2030, you should just create a single private link to one doc and make the resolution criteria about Eliezer reading that one doc. (Or hell, just make it public; it's very unlikely to speed anything bad up.)

Post the whitepaper here @Krantz c'mon!

I really can't do that. If anyone actually knows a way to get this to Eliezer offline, I'm all ears. I've reached out to him every viable way I have been able to find.

Okay I'm morbidly curious now. Why do you think collective non-artificial intelligence technology is a dangerous infohazard if it wouldn't speed up deep learning based capabilities?

Because if you actually build a symbolic parallel reasoning machine that humans can use to solve complex problems (that none of them would be able to solve alone) then you've also built a machine that can do the same for artificial agents.

We need to ensure it is a system with humanity verification built in.

bought Ṁ750 NO

If such a thing were actually implemented and widely used, I do not see how you'd be able to stop people from working out how it works and making a version which doesn't have "humanity verification".

Are you building your system Krantz? Guessing not.

I'm looking for grants to do so. It will take a larger community effort. We need to build something completely outside machine learning and Bitcoin and then scale past them both.

I still don't get why you think it's an infohazard if it takes this kind of effort and no one is interested. Never mind; I'm just grieving that my 13k mana is going to be stuck for a while because you aren't making this easy enough for Eliezer to actually read.