For Resolution:
The model must be capable of browsing in response to user queries, not merely during the training phase.
To qualify as YES for a release, either:
A.) access should be provided via openai.com, or a sub-domain thereof, through a ChatGPT-like interface or by API access.
OR
B.) usage of the model should be clearly announced as licensed to a third-party (like the relationship between Github Copilot and OpenAI Codex), which in turn provides the model to users by the target date.
To be clear:
A model merely fine-tuned by an API end-user for this purpose does not count.
A publication indicating that OpenAI has developed this capability in-house, but which only shows a handful of selected examples does not count.
Edit: In the event the browsing capability is restricted to a fixed offline cache of pre-archived webpages, I reserve the right to resolve this as N/A.
Edit: A version of this market with a longer timeline (Jan 1, 2024) has been created as well.
The recent release of ChatGPT-powered Bing search appears to qualify for (B). Even though it is now invite-only, it did go public briefly. It browses and cites its links for source material in response to chat queries, and is built on the same "GPT-3.5" technology stack as ChatGPT.
I notice a lot of people on Manifold don't like being specific; they would rather keep their market open because it costs money to create one, which is understandable. So, at the risk of being punished rather than rewarded for harsh feedback: the answer to your question is "No," because that's not how GPT-3 works and that's not how language models work.

A language model is trained on past data and has no new information. An application or wrapper built around a language model can be designed to browse the web, or to make it look like it's browsing (serving stored past data to give the illusion of browsing), but the language model itself does not "browse." A language model such as GPT-3 is a fixed inference model that you call; all of the browsing and scraping happened before training, when the dataset was collected, and the result was frozen into the callable model.

Now, you could design a ChatGPT-like app so that, given the command "Open a browser and search for Planes, Trains and Automobiles," it either lies to you and merely looks into its stored data, or it uses GPT-3 (or another language model) to interpret your message and assign it a probability of being a "search" intent, passes the query to a search engine, and feeds you back the results, either alongside its answer or as an input to its answer. But the language model itself is not doing the browsing, at all.
@PatrickDelaney Couldn't OpenAI feed the user's prompt into a search engine, append the search results to the prompt, and feed that into the language model?
Or, as a next step, OpenAI could give the model the prompt, ask it what search query would help it most, give it the search results, and then have it respond to the user. Wouldn't that be the language model performing web searches?
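The wrapper pattern described in the two comments above can be sketched in a few lines. This is a minimal illustration only: `run_search` and `call_language_model` are hypothetical stand-ins (not real OpenAI or search-engine APIs), stubbed out so the control flow is runnable. The point is that the search happens outside the model; the model only ever sees text it is handed.

```python
def run_search(query):
    # Stand-in for a real search-engine call; returns canned snippets here.
    return ["Snippet about: " + query]

def call_language_model(prompt):
    # Stand-in for a frozen language-model inference call. The model itself
    # never touches the network; it only consumes the text it is given.
    return "Answer based on: " + prompt

def answer_with_search(user_prompt):
    # Step 1: optionally ask the model to suggest a search query
    # (trivially echoed back by the stub above).
    suggested = call_language_model("Suggest a search query for: " + user_prompt)
    # Step 2: run the search outside the model.
    snippets = run_search(user_prompt)
    # Step 3: append the results to the prompt and call the model again.
    augmented = user_prompt + "\n\nSearch results:\n" + "\n".join(snippets)
    return call_language_model(augmented)

print(answer_with_search("Planes, Trains and Automobiles"))
```

From the user's perspective the system "browsed," but the browsing was done by ordinary application code between two stateless model calls.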
@PatrickDelaney https://openai.com/blog/webgpt/ exists. It isn't exposed to end-users and it's a bit dated, but it operates basically the way Adrian describes above. My market is essentially about whether they will release an updated version of it (built on a newer language model) in a public-facing form.
Evidence in favor of OpenAI building these capabilities under the hood: https://twitter.com/0xsanny/status/1598274762064945153/photo/1