Will any code AI autocomplete tool using local LLM inference reach 50k DAU by June 1, 2024?
23% chance

Many devs are excited about local LLM inference for code AI (such as https://github.com/danielgross/localpilot). But will it actually work well enough for a lot of devs to use it? Answering that question is the goal of this market.

By "local LLM inference", I mean where the LLM is running on the user's local machine (either CPU or GPU), such as using llama.cpp or Ollama--not an external LLM over the network hosted by OpenAI, Anthropic, or on any other machine.

By "code AI autocomplete", I mean the kind of editor extension (like GitHub Copilot, Cody, Tabnine, CodeWhisperer, etc.) that provides "ghost text" suggestions as you type code in your editor. For clarity, I will only consider editor extensions (for VS Code, JetBrains, Emacs, Vim, Neovim, Xcode, and Visual Studio).

By "DAU", I mean daily active users (unique humans) who accept at least 1 autocomplete suggestion from the code AI tool. (I do not count users who have an extension installed but do not use it or do not accept a suggestion from it on that day.) The 10k DAU mark just needs to be reached on 1 day.

If the code AI tool supports both local LLM inference and externally hosted LLMs, only the DAU using local LLM inference count.
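For illustration, here is how I would compute DAU under the two definitions above, assuming a vendor has a log of acceptance events. The event shape is my assumption, not any vendor's actual schema:

```typescript
// Hypothetical acceptance-event log entry; the field names are my
// assumptions, not any vendor's actual schema.
interface AcceptanceEvent {
  userId: string; // a unique human, not an install
  date: string; // day of the acceptance, e.g. "2024-05-30"
  localInference: boolean; // true if the suggestion came from a local LLM
}

// DAU for one day under this market's definition: unique users who
// accepted at least 1 suggestion that day, counting only acceptances
// served by local LLM inference.
function dau(events: AcceptanceEvent[], day: string): number {
  const users = new Set<string>();
  for (const e of events) {
    if (e.date === day && e.localInference) users.add(e.userId);
  }
  return users.size;
}

// The market only needs one day to cross the mark.
const reachedMark = (events: AcceptanceEvent[], day: string) =>
  dau(events, day) >= 50_000;
```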

For proof, I will accept reputable vendor blog posts that are publicly available, or other data privately shared with me. I will also seek out such information. "Reputable" is up to my judgment.


It just seems hard to verify DAU if the tool comes from a random git repo. Maybe we could track how many downloads/stars it gets on a given day and see if that crosses a threshold?

bought Ṁ45 of NO

@EliGaultney I specifically want to bet on usage, not just hype (downloads/stars). Mentioning localpilot (https://github.com/danielgross/localpilot) was probably confusing; I think that if such a tool wins, it will be a product whose usage is easier to measure and that has some kind of organization behind it. I don't think localpilot itself will be the one to win. For example, Continue (https://continue.dev/) is a product that can perform local LLM inference.
