In this market, we define
Open source as in the weights are publicly released for anyone to download
SOTA as in better performing than every other open-source model on HF's Open LLM Leaderboard (https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)
Intel hardware as in Intel-designed hardware, including Intel CPUs, GPUs, and XPUs
If all of the above conditions are fulfilled, this market will be marked as resolved.
I recall Facebook was experimenting with Habana chips in their servers a few years ago, but I think they have settled on Nvidia by now. OpenAI was using Graphcore (I think?) and Nvidia, so it's not going to happen there. Most others are sticking with Nvidia. There's a niche business in FPGAs for deep learning, and Intel is one of the two manufacturers of cutting-edge FPGAs, but those are usually used for inference rather than training.
So this isn't impossible, but does seem unlikely.
@ShadowyZephyr Intel's HPC GPUs are very good. PVC has more VRAM than anything available atm; I wouldn't discount Raja Koduri. Remember, he was basically responsible for GCN, which is now CDNA.
@GiftedGummyBee It has the same amount of VRAM as AMD's offerings, from what I could find. Thank you for making me aware of this, though; I didn't realize they had HPC offerings at the moment.
@ShadowyZephyr That is very true. However, there are two things to consider. First, AMD has ROCm, which is not working very well, whereas oneAPI has already been proven to work again and again. Second, Intel has been in the deep learning game much longer, as seen by how they already had Habana in 2020 and earlier. I would bet more on Intel than AMD for those reasons.
@GiftedGummyBee Hmm, interesting. I didn't know Intel's Habana beat the A100 so early on. Although I think availability is still an issue with Intel (it's easy for a consumer to buy an A100; an Intel Habana, not so much), it does seem more likely than I first thought.