#1 on Artificial Analysis · April 2026

HappyHorse 1.0:
Alibaba's #1 AI Video Model

A 15-billion-parameter video model from Alibaba's Taotian Group that topped both text-to-video and image-to-video on Artificial Analysis. Generate the same quality here — no waitlist, no API setup, 50 free credits on signup.

50 free credits · No credit card · First draft in 30 seconds

What Makes HappyHorse 1.0 Different

A frontier-grade video model architected from the ground up for joint audio-video generation, native cinematic output, and open release.

15B Parameters

Unified single-stream transformer for joint video + audio generation from one prompt.

Native Audio-Video Sync

Synchronized audio generated alongside video — no separate audio pipeline or post-production stitching.

7-Language Lip Sync

Speech, lip movement, and motion stay aligned across seven languages out of the box.

#1 on Both Leaderboards

Tops both text-to-video and image-to-video on Artificial Analysis — the only model holding the top spot on both tracks at once.

Native 1080p Output

Cinematic 1080p video produced directly by the model, not upscaled in post.

Apache-2.0 License

Open-source release planned. GitHub and weights coming soon from Alibaba's Future Life Lab.

HappyHorse 1.0 vs the Field

Elo ratings from the Artificial Analysis Video Arena (April 2026), based on blind user votes between paired model outputs.

Model          | Maker              | T2V Elo | I2V Elo | Access
HappyHorse 1.0 | Alibaba (Taotian)  | 1,388   | 1,413   | Open-source coming · Hosted via Happy Horse AI
Seedance 2.0   | ByteDance          | ~1,273  | ~1,300  | API + Dreamina
Sora 2         | OpenAI             | ~1,250  | —       | ChatGPT (gated)
Veo 3.1        | Google             | ~1,240  | ~1,260  | Vertex AI / Gemini
Kling 3.0 Pro  | Kuaishou           | ~1,235  | ~1,250  | Kling.ai
PixVerse V6    | PixVerse           | ~1,210  | ~1,240  | PixVerse.ai

Source: Artificial Analysis Video Arena leaderboards, April 2026. Competitor Elos are approximate and update continuously.
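The Elo numbers above come from blind head-to-head votes between paired outputs. As an illustrative sketch only (Artificial Analysis's exact rating method may differ, and the K-factor here is an assumption), a standard Elo update after a single pairwise vote looks like this:

```python
def expected_score(r_a: float, r_b: float) -> float:
    """Probability that model A beats model B under the Elo model."""
    return 1.0 / (1.0 + 10 ** ((r_b - r_a) / 400.0))

def elo_update(r_a: float, r_b: float, a_won: bool, k: float = 32.0):
    """Return updated (r_a, r_b) after one blind pairwise vote."""
    e_a = expected_score(r_a, r_b)
    score_a = 1.0 if a_won else 0.0
    r_a_new = r_a + k * (score_a - e_a)
    r_b_new = r_b + k * ((1.0 - score_a) - (1.0 - e_a))
    return r_a_new, r_b_new

# A ~1,388-rated model beating a ~1,250-rated one gains only a few
# points, because the win was already expected.
new_a, new_b = elo_update(1388, 1250, a_won=True)
```

This is why a large Elo gap is meaningful: it reflects many votes, not one lucky matchup, and upsets move ratings much more than expected wins.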

How to Use HappyHorse 1.0 Today

Alibaba has confirmed that HappyHorse 1.0 weights and GitHub will be open-sourced under Apache-2.0. Until that release ships, here are your real options.

AVAILABLE NOW

Generate on Happy Horse AI

Hosted text-to-video and image-to-video using leading AI video models — the fastest way to ship HappyHorse-class video without waiting on weights, GPUs, or an API key.

  • 50 free credits on signup
  • Text-to-video & image-to-video
  • Aspect ratio, duration, resolution, audio controls
  • First draft in 30 seconds
Start Generating Free
COMING SOON

Open-Source Weights

Alibaba's Future Life Lab has stated that the model will be fully released under Apache-2.0. Self-hosting will require serious GPU resources once the weights drop.

  • GitHub repository — pending
  • Hugging Face weights — pending
  • Inference providers — ramping up
  • Apache-2.0 license — confirmed
Read the full breakdown

HappyHorse 1.0 FAQ

Quick answers to the most common questions about Alibaba's new #1 AI video model.

What is HappyHorse 1.0?
HappyHorse 1.0 is a 15-billion-parameter AI video generation model built by the Future Life Lab inside Alibaba's Taotian Group. It uses a unified single-stream transformer architecture that jointly generates video and synchronized audio from a single prompt, supports text-to-video and image-to-video, produces native 1080p output, and handles lip sync across seven languages. As of April 2026, it ranks #1 on both the text-to-video and image-to-video Artificial Analysis blind-comparison leaderboards.
Who built HappyHorse 1.0?
HappyHorse 1.0 was built by the Future Life Lab inside Alibaba's Taotian Group, the e-commerce arm of Alibaba. The lab is led by Zhang Di, a former Vice President at Kuaishou who previously ran the Kling AI video team. The model launched anonymously on the Artificial Analysis Video Arena before Bloomberg, The Information, CNBC, and Sherwood News confirmed Alibaba as the creator on April 10, 2026.
Is HappyHorse 1.0 open source?
Not yet in practice, but it will be: Alibaba has committed to releasing HappyHorse 1.0 under an Apache-2.0 license, including the full model weights and a GitHub repository. As of April 2026 the release is still rolling out: the weights and public repo are marked as coming soon, and most inference providers are still ramping up. The fastest way to actually generate HappyHorse-class video today is a hosted platform like Happy Horse AI, which is live and ready to use without a waitlist.
Where can I use HappyHorse 1.0 right now?
You can try HappyHorse-class AI video generation immediately on Happy Horse AI. The platform brings together leading AI video models in one editor with text-to-video, image-to-video, and full operator controls — aspect ratio, duration, resolution, and audio. New accounts get 50 free credits on signup. No credit card. No waitlist.
How does HappyHorse 1.0 compare to Sora 2 and Veo 3.1?
On the Artificial Analysis text-to-video leaderboard, HappyHorse 1.0 currently leads with an Elo around 1,388, ahead of OpenAI Sora 2, Google Veo 3.1, ByteDance Seedance 2.0, Kling 3.0 Pro, and PixVerse V6. It also leads the image-to-video leaderboard with an Elo around 1,413. It is the only model currently holding the #1 spot on both tracks simultaneously. These rankings come from blind human votes on output quality, which is significantly harder to game than self-reported benchmarks.
How does HappyHorse 1.0 compare to ByteDance Seedance 2.0?
HappyHorse 1.0 outranks ByteDance Seedance 2.0 in both text-to-video and image-to-video on Artificial Analysis. Seedance 2.0 still leads in the audio-enabled image-to-video category by a narrow margin and offers more mature multimodal audio-video integration today. HappyHorse is the leaderboard winner; Seedance is the more battle-tested commercial product. For most creators, the difference is whether you optimize for blind preference quality or for product polish.

Generate HappyHorse-class video — free

Skip the waitlist. Skip the GPU bill. Open the editor and create your first cinematic clip in 30 seconds.

Start with 50 Free Credits

50 free credits · No credit card · No waitlist