Resolves YES if by the end of 2024, there are at least two qualifying organizations that publicly announce they are running an instance of SocialPredict (as defined below). Otherwise NO.
As an example, the SocialPredict team is currently working with a professor at the University of Vermont who intends to deploy SocialPredict for a class. If that happens and is publicly announced, that counts as one organization (University of Vermont).
Definitions:
SocialPredict: code is at https://github.com/openpredictionmarkets/socialpredict and you can see it running at https://brierfoxforecast.ngrok.app/markets. For this question, SocialPredict is defined as this or any future version derived from this, even if there are substantial changes or if it is renamed.
Organizations that qualify here are universities, companies, and registered nonprofits, and they must have at least 100 employees.
The organization or an employee/member of the organization has to publish in some public form that they are running this, e.g. a webpage, blog post, tweet, etc.
They need to actually be running it by the end of 2024, e.g. if they announce in December that they plan to run it in January, that doesn't count.
If an organization runs it for some time and then stops, that still counts.
This market was created together with the SocialPredict team; they wanted a trusted third party to run it.
@Choms https://github.com/openpredictionmarkets/socialpredict
There may be some room for argument and controversy over what constitutes an "organization running" an instance, but we have definitely fulfilled this for Kenyon College. There is another US-based university, which from my Googling has more than 100 employees, that is using the same instance. So by a strict interpretation of "two organizations running an instance," there are two organizations both using the same instance.
I don't have a blog post or link for the second organization's usage of SocialPredict, but I know that they are sharing it, and I don't think it's unreasonable to expect that post. I'm currently waiting on permission to use their logo in the repo.
All of this being said, UVM was unable to run SocialPredict this semester; our contact had to teach a different class, so that no longer counts... unless I could get them to run it independently and spin up an instance, but they were betting NO on this market last time I checked, so it would be against their Mana interests.
@PatrickDelaney fun, I guess both of them using the same instance would technically count for your resolution criteria as long as each of them announces it publicly and individually (and yeah, I'm guessing any university would have over 100 employees unless it's extremely small). I did see the Kenyon one, gg :)
@jack The second university hasn't announced! The first announcement was included here; search for "SocialPredict": https://www.zacharymcgee.net/teaching/
@Choms @jack I would fully recognize that if the second university doesn't announce, then it doesn't fit the criteria. I would also agree that even if the first university alleges that the second university is indeed on their server, that would not fit a strict reading of the criterion above, i.e.:
"The organization or an employee/member of the organization has to publish in some public form that they are running this, e.g. a webpage, blog post, tweet, etc."
A member of Kenyon is not a member of the other organization.
@PatrickDelaney thanks, yep that looks exactly right to me. And also agree that it doesn't matter for the criteria whether they are using the same instance
We put in a Manifund proposal for SocialPredict for anyone who may be following. https://manifund.org/projects/future-proofing-forecasting-easy-open-source-solution?tab=comments ... the latest iteration of SocialPredict can be seen here: https://brierfoxforecast.com/markets
The software is now deployable with two commands on a virtual machine such as a DigitalOcean droplet, including automatically setting up HTTPS.
Here is our staging server: https://brierfoxforecast.com/
Will release v0.0.3 soon; the project board for v0.0.4 is here:
https://github.com/orgs/openpredictionmarkets/projects/4/views/1
Aren't small, private prediction markets gonna have less liquidity and therefore be less accurate than big, public markets? Furthermore, one societal benefit of public markets is that something of value - accurate probabilities - is provided for free to anyone who wants to look. How is this project an improvement over just making public markets better (e.g. better tagging and focusing markets to each user's interests, while still allowing significant discovery to happen?)
@BrunoParga Yes, that is indeed a problem of small private prediction markets. https://mwstory.substack.com/p/why-i-generally-dont-recommend-internal discusses several other challenges.
That said, it's something that many have tried, because if you can figure out how to make it work, it opens the door to predicting on a lot of types of markets that public markets don't make much sense for, e.g. predicting things relevant to a company or other organization's decision-making.
I don't know what the use case is for a university though, as compared to making public markets on Manifold.
@BrunoParga markets with around 17-20 users are big enough to get statistically significant predictions. This is particularly true when dealing with experts forecasting on specialized topics. The reason to use Manifold is that they have a stable play-money economy, rather than having to figure that out from scratch. That is the core of what we are building. I have been shocked by how few options there are for creating a play-money economy of any kind.
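To give a rough sense of the small-crowd effect, here's a toy simulation (not SocialPredict code; the true probability and the per-forecaster noise level are assumed parameters): averaging 17-20 independent estimates already shrinks the spread of the crowd estimate substantially compared to a handful of traders.

```python
import random

# Toy sketch (not SocialPredict code): how tightly does the average of N
# independent noisy forecasters cluster around a true probability?
# TRUE_P and NOISE_SD are illustrative assumptions, not measured data.
random.seed(42)

TRUE_P = 0.70    # hypothetical true probability of the event
NOISE_SD = 0.15  # assumed per-forecaster error (standard deviation)

def crowd_estimate(n_forecasters: int) -> float:
    """Average of n independent noisy probability estimates, clipped to [0, 1]."""
    estimates = [min(1.0, max(0.0, random.gauss(TRUE_P, NOISE_SD)))
                 for _ in range(n_forecasters)]
    return sum(estimates) / n_forecasters

for n in (5, 10, 17, 20, 50):
    trials = [crowd_estimate(n) for _ in range(10_000)]
    mean = sum(trials) / len(trials)
    sd = (sum((t - mean) ** 2 for t in trials) / len(trials)) ** 0.5
    print(f"n={n:2d}  mean estimate={mean:.3f}  spread (sd)={sd:.3f}")
```

Under these assumptions the spread falls roughly as 1/sqrt(n), so going from 5 to around 20 forecasters roughly halves it; beyond that the gains per extra forecaster get small.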
"markets with around 17-20 users are big enough to get statistically significant predictions"
What do you mean specifically by "statistically significant predictions" here?
I'm curious about the size of communities you imagine would use these private markets. Like, I imagine the 17-20 number you mean is the number of predictors in a given question, and that is necessarily a subset of the market users.
I mean, I'm generally curious about this, and I'd love to hear more about it!
My take is that prediction markets are pretty lousy for 1-20 predictors, and prediction aggregation (like Metaculus, where you get each participant's prediction as a probability, and aggregate them, and score the predictions) works better.
That's kind of assuming the participants are comfortable with stating probabilities - if not, then betting on a prediction market can potentially work better, but then that assumes that participants are comfortable with trading which has its own difficulties.
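For concreteness, here's a minimal sketch of that aggregate-and-score workflow (not Metaculus's actual aggregation method; the median aggregator and the sample forecasts are illustrative assumptions):

```python
# Minimal sketch of prediction aggregation as described above: each
# participant states a probability, we aggregate with the median, and score
# everyone with the Brier score once the outcome is known.
from statistics import median

def brier(forecast: float, outcome: int) -> float:
    """Brier score: squared error against the 0/1 outcome (lower is better)."""
    return (forecast - outcome) ** 2

# Hypothetical participants forecasting one binary question.
forecasts = {"ana": 0.80, "bo": 0.65, "cy": 0.90, "di": 0.55, "ed": 0.70}
aggregate = median(forecasts.values())
outcome = 1  # suppose the event happened

print(f"aggregate forecast: {aggregate:.2f}, Brier: {brier(aggregate, outcome):.3f}")
for name, p in sorted(forecasts.items()):
    print(f"  {name}: forecast={p:.2f}, Brier={brier(p, outcome):.3f}")
```

The appeal of this over a market for very small groups is that every participant contributes exactly one probability, so no one's signal is drowned out by thin liquidity or bankroll differences.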
@BrunoParga First off, excellent question, thank you.
From "Prediction Markets: Practical Experiments in Small Markets and Behaviors Observed" (2007):
Typically, economic theory holds that more traders and more activity in a market cause it to become more efficient.
In predicting regatta and sports outcomes in a prediction marketplace held in 2007, there was in general a positive correlation between the quality of the calibration and the number of traders in a contract.
But it was shown on a per-contract basis that there are negative marginal returns somewhere around 15+ traders compared to 20+. This suggests there may be an optimal number of traders on a contract to get a decent correlation, at least for certain topics.
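To make "quality of the calibration" concrete, here's a toy version of the measurement on synthetic contracts (not the paper's data): bin contracts by their final market probability and compare each bin's average forecast to the observed outcome frequency.

```python
# Toy calibration check (synthetic data, not the 2007 paper's dataset):
# bin contracts by final market probability, then compare each bin's
# average forecast against the fraction of contracts that resolved YES.
from collections import defaultdict

def calibration_table(contracts, n_bins=5):
    """contracts: list of (final_probability, outcome) with outcome in {0, 1}."""
    bins = defaultdict(list)
    for p, outcome in contracts:
        bins[min(int(p * n_bins), n_bins - 1)].append((p, outcome))
    for b in sorted(bins):
        ps, outcomes = zip(*bins[b])
        print(f"bin {b}: avg forecast={sum(ps)/len(ps):.2f}  "
              f"observed freq={sum(outcomes)/len(outcomes):.2f}  n={len(ps)}")

# Hypothetical contracts: (final market probability, resolved outcome).
calibration_table([(0.10, 0), (0.15, 0), (0.30, 1), (0.55, 1),
                   (0.60, 0), (0.72, 1), (0.85, 1), (0.90, 1)])
```

A well-calibrated market has each bin's observed frequency close to its average forecast; the paper's finding is that this gap tends to shrink as trader count per contract grows, up to a point.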
Of course, platform-wide, the more questions get created and the more traders and capital get spread out among them, the lower the per-contract quality you're going to have. So part of what's going on with Manifold is that you have a long tail of extremely low-participation contracts that are garbage, alongside popular contracts that are high quality.
Having an open-source platform could allow an administrator to restrict the topics and questions, or tune the cost of creating a question versus betting, which could reduce the ratio of questions to bettors. Topical prediction markets on their own platform are not the same as tagging platform-wide, because individual users only have so much capital on a given prediction market platform; you have to isolate somehow if you want a niche topic.
@BrunoParga Some closed environments are sufficiently large. For example, a company of a few thousand employees could use it internally for internal topics (sales, revenue, acquisitions, deadlines, etc). There is no good software readily available as far as I know.
@jack IMO there are two distinct use cases for play-money prediction markets in universities: sense-making and education.
Manifold and Metaculus are great at addressing the first type of need. Large crowds of dumb money and accurate forecasters will get you the best information aggregation a market can offer.
That said, I think SocialPredict or similar products better address the educational needs of universities, because controlling the economy is critical in a learning environment. If I set up a forecasting course as a tournament, students need to start with the same stake, and the system basically needs to be closed, for fairness, ease of monitoring, and safety.
While it would be pretty funny, I don't want a student making $100,000 on unrelated whale bait and winning the tournament with "non-predictive" markets. I don't want whales coming in with 100k limit orders. I don't want fresh anon accounts sniping limit orders with their free $1000 bankroll. I want to be able to re-issue bankrolls and impose fines and taxes.
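Here's a rough sketch of the kind of admin controls I mean; this is a hypothetical data model, not SocialPredict's actual API:

```python
# Hypothetical closed play-money economy with the admin controls described
# above: equal starting stakes, fines, and bankroll re-issuance. Illustrative
# data model only; not SocialPredict's actual API.
from dataclasses import dataclass, field

@dataclass
class ClosedEconomy:
    starting_stake: int
    balances: dict = field(default_factory=dict)

    def enroll(self, student: str) -> None:
        """Everyone starts the tournament with the same stake; no outside money."""
        self.balances[student] = self.starting_stake

    def fine(self, student: str, amount: int) -> None:
        """Admin-imposed fine or tax."""
        self.balances[student] -= amount

    def reissue(self) -> None:
        """Reset every bankroll, e.g. between tournament rounds."""
        for student in self.balances:
            self.balances[student] = self.starting_stake

econ = ClosedEconomy(starting_stake=1000)
econ.enroll("alice")
econ.enroll("bob")
econ.fine("bob", 150)
print(econ.balances)  # {'alice': 1000, 'bob': 850}
econ.reissue()
print(econ.balances)  # {'alice': 1000, 'bob': 1000}
```

The point is that all three operations are trivial when the administrator owns the economy, and essentially impossible on a shared public platform.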
Plus, a free, open-source platform is a better fit for the ethos of higher education, at least in public state universities.
Sorry, I think there should be a clarification: the University of Vermont should not count, as the professor in question is on our team. Strictly speaking, that should disqualify the University of Vermont, unless there is somehow another instance at that school not associated with that person specifically. This should probably bring the price downward, but yeah, I just wanted to get that out of the way sooner rather than later.
@PatrickDelaney Originally I thought the University of Vermont could count, because I wasn't even sure they would be able to deploy it given the condition of the software today, but after they gave this market a read, they pointed out the flaw here, so yeah. Also, the definition of who is on our team is fairly wishy-washy, because this is open-source software that anyone could contribute to.
@PatrickDelaney No, the way the question was written was specifically intended to include that professor, if they did in fact run SocialPredict - that's why I put it as an example in the question description. There's no requirement that they can't be on the team.
@jack Ok, that's fine; I just wanted to bring up the potential issue and try to be transparent. I will defer to your judgement. Perhaps they could jump in and explain their reading and their concern themselves.