Background
Wan2.1 is an open-source text-to-video and image-to-video generation model developed by Alibaba Cloud.
Wan2.1 was first introduced in January 2025 under the name “Wanx” before being rebranded “Wan.” It was open-sourced on February 25, 2025, when Alibaba released the inference code, weights, and multiple variants (such as T2V-1.3B and T2V-14B) to the public, as noted on the Wan Video GitHub page and in various reports.
"Anthro whales" refers to anthropomorphic whale characters - whales with human-like characteristics, personalities, or behaviors. (Anthropomorphic animal content has significant popularity in certain online communities and artistic circles.)
Fine-tuning is the process of taking a pre-trained AI model and further training it on a specific dataset to specialize its capabilities for particular content or tasks.
Resolution Criteria
This market resolves to YES if a version of the Wan2.1 model that has been specifically fine-tuned on anthropomorphic whale content is publicly released before the market close date (end of 2025).
The fine-tuned model weights must be made available either through a public repository (such as Hugging Face or GitHub) or a personal website where others can access them
There must be clear evidence that the model was intentionally fine-tuned to generate anthropomorphic whale content (I don’t expect this will be hard to decide)
The fine-tuning must be on a Wan2.1 model specifically (any variant of the 2.1 model, as released by the Alibaba team), not on other video generation models
This market resolves to NO if no such fine-tuned model is released by the market close date.
Considerations
While fine-tuning large models like Wan2.1 is technically possible, it requires significant computational resources and a substantial dataset of relevant content. The niche nature of anthropomorphic whale content may present challenges in assembling a sufficiently large training dataset for effective fine-tuning.
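For a sense of scale, parameter-efficient methods such as LoRA are the usual way hobbyists fine-tune models of this size, since they train only small low-rank adapter matrices rather than all weights. The sketch below estimates that trainable fraction; the layer count, hidden size, and number of adapted matrices are illustrative assumptions, not Wan2.1's actual architecture.

```python
# Back-of-the-envelope estimate of trainable parameters for a LoRA
# fine-tune. All architecture numbers below are illustrative
# assumptions, not Wan2.1's real configuration.

def lora_params(num_layers: int, hidden_dim: int, rank: int,
                adapted_matrices_per_layer: int = 4) -> int:
    """Trainable parameters for LoRA adapters of a given rank.

    Each adapted weight matrix (assumed square, hidden_dim x hidden_dim)
    is augmented with two low-rank factors: A (hidden_dim x rank) and
    B (rank x hidden_dim), so 2 * hidden_dim * rank parameters each.
    """
    per_matrix = 2 * hidden_dim * rank
    return num_layers * adapted_matrices_per_layer * per_matrix

# Hypothetical transformer roughly the size of the 1.3B variant:
base_params = 1_300_000_000
trainable = lora_params(num_layers=30, hidden_dim=1536, rank=16)
print(f"trainable LoRA params: {trainable:,}")            # 5,898,240
print(f"fraction of base model: {trainable / base_params:.4%}")
```

Under these assumed numbers, a rank-16 LoRA trains well under 1% of the base model's parameters, which is why a niche community fine-tune is plausible even without large-scale compute; assembling the dataset remains the harder constraint.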