A mysterious AI video model appeared on the Artificial Analysis leaderboard around April 7 with no name, no company, and no backstory. Within days, it had climbed to the top of both text-to-video and image-to-video rankings. The AI community started speculating. Alibaba's stock jumped 8% on Wednesday before anyone confirmed anything.

On Friday, April 10, the reveal came. The developers confirmed on a newly created X account that HappyHorse was part of Alibaba's ATH AI Innovation Unit and that the project was still under development.

What Is HappyHorse-1.0?

HappyHorse-1.0 was built by the Future Life Lab team of Taotian Group (Alibaba), led by Zhang Di, a former Kuaishou VP who headed Kling AI technology. The team joined Alibaba at the end of 2025 to focus on AI video generation.

HappyHorse-1.0 generates videos at 1080p resolution with synchronized audio. It is considered one of the first open-weight models to natively generate dialogue, ambient sounds, and effects within a video.

How It Works

HappyHorse-1.0 employs the Transfusion (unified multimodal) architecture. In practical terms, that means video and audio aren't generated separately and merged after the fact. By processing video and audio tokens within a single Transformer sequence, the model keeps auditory elements aligned with on-screen actions, such as a splashing wave or engine noise, which helps reduce the need for additional audio post-production.
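To make the "unified sequence" idea concrete, here is a minimal numpy sketch of joint attention over video and audio tokens. This is not HappyHorse's actual code: the dimensions, token counts, modality embeddings, and single attention layer are all illustrative assumptions.

```python
import numpy as np

# Conceptual sketch of a unified multimodal sequence: video and audio tokens
# share one Transformer context, so attention lets audio positions condition
# directly on visual content. All sizes below are illustrative assumptions.
rng = np.random.default_rng(0)
d = 32                                    # embedding dimension (assumed)
video_tokens = rng.normal(size=(6, d))    # e.g. 6 patch tokens from video frames
audio_tokens = rng.normal(size=(4, d))    # e.g. 4 codec tokens of the soundtrack

# Learned modality embeddings tell the model which stream a token belongs to.
mod_video = rng.normal(size=(d,))
mod_audio = rng.normal(size=(d,))
seq = np.concatenate([video_tokens + mod_video, audio_tokens + mod_audio])  # (10, d)

# One shared self-attention layer over the joint sequence.
Wq, Wk, Wv = (rng.normal(size=(d, d)) / np.sqrt(d) for _ in range(3))
q, k, v = seq @ Wq, seq @ Wk, seq @ Wv
scores = q @ k.T / np.sqrt(d)
attn = np.exp(scores - scores.max(axis=-1, keepdims=True))
attn /= attn.sum(axis=-1, keepdims=True)  # softmax over the whole joint sequence
out = attn @ v

# Each audio position places nonzero attention mass on every video position,
# which is the mechanism that lets sound track on-screen events.
audio_to_video = attn[6:, :6].sum(axis=-1)
print(audio_to_video > 0)  # all True: audio tokens attend to visual tokens
```

The key design point is the single softmax over the concatenated sequence: in a pipeline that generates audio separately, the audio model would never see the video tokens at all.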

It supports all four video generation modalities: text-to-video and image-to-video, each with and without native audio.

Benchmark Performance

The numbers are what triggered all the noise. HappyHorse-1.0 ranked first in the text-to-video (without audio) track with 1389 Elo points, leading second-place Dreamina Seedance 2.0 by nearly 115 points. In the text-to-video (with audio) category, Alibaba's latest AI video model also ranked first, ahead of Dreamina Seedance 2.0 720p by 11 points.
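For context on what those gaps imply, the standard Elo formula converts a rating lead into an expected pairwise win rate. This assumes Artificial Analysis uses the usual chess-style 400-point scale, which is an assumption on my part, so treat the percentages as a rough reading:

```python
def expected_score(elo_gap: float) -> float:
    """Expected win probability of the higher-rated model in a pairwise
    comparison, under the standard Elo formula with a 400-point scale."""
    return 1.0 / (1.0 + 10 ** (-elo_gap / 400))

print(f"{expected_score(115):.0%}")  # ~115-point lead -> prints "66%"
print(f"{expected_score(11):.0%}")   # 11-point lead   -> prints "52%"
```

Read this way, the 115-point lead corresponds to winning roughly two-thirds of blind head-to-head comparisons, while the 11-point lead in the with-audio track is much closer to a coin flip.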

Alibaba's earlier efforts, released under the Wan brand, topped out some 20 spots lower on the list. That's a significant jump for a single release.

The Mystery Model Strategy

The HappyHorse release follows an announcement pattern that has become common in AI circles: models first appear as "mystery models" with unknown origins. Xiaomi recently used the same approach when its AI model MiMo-V2 caused a stir under the pseudonym Hunter Alpha. Chinese AI labs in particular apparently expect a greater PR effect this way than from announcing the models under their own names.

What This Means for the Industry

OpenAI recently discontinued its Sora video generation app and platform, citing a strategic shift to focus on coding tools, corporate clients, and AGI development amid high compute costs. While OpenAI's exit could cede more ground to Chinese competitors, ByteDance was recently forced to pause the rollout of its viral Seedance 2.0 following copyright disputes with major Hollywood studios and streaming platforms.

That leaves a real opening. Video generation is a capital-intensive and hotly contested race for AI developers, as it's proven to be one of the few sources of reliable monetization.

On the open-source front, the team has confirmed HappyHorse will be fully open-sourced, with GitHub and model weights coming very soon. API access is planned for launch on April 30.

Final Thoughts

What stands out here isn't just the benchmark score. It's that a team led by the former head of Kling, one of China's strongest video models, rebuilt from scratch inside Alibaba and shipped something that immediately beat its old employer's best work. That's a meaningful signal about where the talent and the compute are concentrated right now.

The exceptional benchmark performance and mysterious background set off excitement and guessing games among China's AI and investor communities, with many making the link to Alibaba and pushing its shares up by as much as 8% on Wednesday. Jefferies analyst Thomas Chong called it "a success" for Alibaba. That's an understatement given how stacked the leaderboard competition has been.

The open-source commitment is worth watching closely. If the weights land on Hugging Face before the end of April as promised, developers will have access to a model that currently beats every closed-source competitor in blind tests. That's a different kind of moat than what most labs are building. What do you think about the anonymous release strategy? Drop your thoughts in the comments.


FAQ

What is HappyHorse-1.0?

HappyHorse-1.0 is an AI video generation model built by Alibaba's Future Life Lab (Taotian Group). It generates high-quality 1080p videos with natively synchronized audio from text or image prompts.

Who built HappyHorse?

HappyHorse was developed by Alibaba's Taotian Future Life Lab, led by Zhang Di, former Vice President of Kuaishou and former head of Kling AI technology.

How does HappyHorse compare to competitors?

HappyHorse-1.0 pushed ByteDance's celebrated Seedance 2.0 into second place and marked Alibaba's best-scoring video model to date.

Will HappyHorse be open source?

Yes. The team has confirmed the model will be fully open-sourced, with weights and GitHub access coming soon, and it will be free to self-host.

When will the API be available?

API access is planned for launch on April 30.