ByteDance is rolling out Dreamina Seedance 2.0 in its CapCut editing platform, giving creators around the world new ways to bring their ideas to life. After a rocky start defined by Hollywood cease-and-desist letters and an indefinitely paused global API, the model is finally making its way to international users. The wait, for many creators, is over.
Seedance 2.0 is a text-to-video model created by ByteDance. It was released in February 2026, and shortly after, realistic clips based on real actors, TV shows, and films went viral across the internet. That virality triggered a legal firestorm. Now, with safeguards in place and a phased international rollout underway, ByteDance is pushing Seedance 2.0 beyond China's borders.
As someone who covers AI tools daily, this one has been worth watching closely. The model's technical capabilities are genuinely ahead of most of what's available right now, and the path it took to reach global users tells you a lot about where AI video generation is headed.
What Is Seedance 2.0?
Built with a unified multimodal audio-video joint generation architecture, Seedance 2.0 supports four input modalities: text, image, audio, and video. It generates cinematic video with native audio, multi-shot cuts, and realistic physics in a single generation.
Compared with version 1.5, Seedance 2.0 delivers a substantial leap in generation quality. It achieves a higher usability rate for complex interaction and motion scenes, with significant improvements in physical accuracy, visual realism, and controllability.
Seedance 2.0 generates videos up to 15 seconds in a single generation. Within that duration, the model can produce multiple shots with natural cuts and transitions, so a single output can feel like an edited sequence rather than a single continuous clip.
Key Technical Highlights
Seedance 2.0 supports multimodal all-round reference, allowing combined input of various texts, images, videos, and audio. The model can accurately understand multimodal input content and generate output by referencing elements including visual composition, camera language, motion rhythm, and sound characteristics. It can even directly reference text-based storyboards, significantly boosting creative freedom.
Here's what stands out technically:
- Motion stability: The model handles multi-subject interaction and complex motion scenes with strong physical fidelity, and ByteDance claims state-of-the-art generation usability in these scenarios.
- Multimodal input depth: Seedance 2.0 supports up to 12 clips per project — 9 images, 3 videos, and 3 audio clips, each video or audio up to 15 seconds.
- Format support: Both the Pro and Fast variants support 480p and 720p output, run at 24 fps, and work across a wide range of aspect ratios: 16:9, 9:16, 21:9, 4:3, 1:1, and 3:4.
- Editing and extension: Seedance 2.0 introduces new video editing capabilities, supporting targeted modifications to specified clips, characters, actions, and storylines. The model also features video extension functionality that can generate continuous shots based on user prompts.
- Character consistency: Major advances in cross-frame character preservation keep facial features, clothing details, and visual style stable throughout the entire video.
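To make the reference-input limits concrete, here is a minimal client-side validation sketch. The `Reference` class, function names, and error strings are hypothetical illustrations, not part of any published ByteDance SDK; only the numeric limits (12 clips total, 9 images, 3 videos, 3 audio clips, 15 seconds per video or audio clip) come from the article above.

```python
from dataclasses import dataclass

# Published input limits for Seedance 2.0 multimodal reference
# (the constants mirror the article; everything else is illustrative).
MAX_IMAGES, MAX_VIDEOS, MAX_AUDIO = 9, 3, 3
MAX_TOTAL_CLIPS = 12
MAX_CLIP_SECONDS = 15.0

@dataclass
class Reference:
    kind: str             # "image", "video", or "audio"
    seconds: float = 0.0  # ignored for images

def validate_references(refs: list[Reference]) -> list[str]:
    """Return a list of limit violations; an empty list means the set fits."""
    errors = []
    counts = {"image": 0, "video": 0, "audio": 0}
    for r in refs:
        counts[r.kind] += 1
        if r.kind in ("video", "audio") and r.seconds > MAX_CLIP_SECONDS:
            errors.append(f"{r.kind} clip exceeds {MAX_CLIP_SECONDS:.0f}s")
    if counts["image"] > MAX_IMAGES:
        errors.append("too many images")
    if counts["video"] > MAX_VIDEOS:
        errors.append("too many videos")
    if counts["audio"] > MAX_AUDIO:
        errors.append("too many audio clips")
    if len(refs) > MAX_TOTAL_CLIPS:
        errors.append("more than 12 clips total")
    return errors
```

A nine-image storyboard passes cleanly, while a fourth video clip or a 20-second audio track would be flagged before any generation credits are spent.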
The Road to Global Access
The path to international availability was anything but smooth. ByteDance officially launched Seedance 2.0 on February 12, 2026, strictly for the domestic Chinese market via the Jimeng AI platform. Hollywood studios quickly sent cease-and-desist letters, with The Walt Disney Company and Paramount Skydance among those demanding that ByteDance keep the technology out of public hands.
The company had planned to make Seedance 2.0 available globally in mid-March, but delayed those plans as its engineers and lawyers worked to avert further legal issues.
On February 16, 2026, ByteDance announced that it "respects intellectual property rights" and "heard the concerns regarding Seedance 2.0." It said it would strengthen the safeguards used to prevent the violation of intellectual property rights.
The pivot worked well enough to restart the rollout. TechCrunch reported on March 26, 2026 that ByteDance said Dreamina Seedance 2.0 was rolling out in CapCut, and CapCut's own newsroom said on April 1, 2026 that rollout had expanded in additional overseas markets.
Where You Can Access It Now
The phased rollout for paid CapCut users brings Dreamina Seedance 2.0 to certain users in Indonesia, the Philippines, Thailand, Vietnam, Malaysia, Brazil, and Mexico. ByteDance has since expanded Dreamina Seedance 2.0 to more markets across Africa, South America, and the Middle East, with more regions coming soon.
ByteDance has not yet made the model available in the United States or Europe, citing ongoing intellectual property safeguards and compliance checks.
The safety measures built into this rollout are notably specific. ByteDance has implemented safety restrictions to prevent the model from generating videos with real faces from images or videos, and CapCut will block unauthorized intellectual property generation. Content produced by Dreamina Seedance 2.0 will include an invisible watermark to identify AI-generated material when shared off-platform.
For developers, on April 2, 2026, Chinese media reported that Volcengine opened Seedance 2.0 API public beta for enterprise users. International developer access through platforms like fal.ai is also available, though pricing varies significantly by provider.
Pricing Breakdown
Pricing for Seedance 2.0 is fragmented across platforms, so here's what actually matters:
- Dreamina Basic (China): 69 RMB/month (approximately $9.60 USD), which makes it roughly 20 times cheaper than Sora 2 Pro's $200/month subscription.
- Dreamina international: Released on or around February 24, 2026, offering English access to Seedance 2.0 along with other ByteDance AI tools. The $18/month Standard plan costs nearly double the 69 RMB Jimeng subscription.
- API via EvoLink: As of April 5, 2026, Standard text-to-video is listed at $0.071/s (480p) and $0.153/s (720p), and Fast at $0.057/s (480p) and $0.124/s (720p).
- Both routes support the same 480p/720p quality options and include audio at no extra charge.
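The per-second API rates above make cost estimates straightforward. The sketch below is a hypothetical calculator, not an official client; the rates are the third-party EvoLink prices quoted in the article's April 5, 2026 snapshot and may have changed since.

```python
# USD per second of generated video, per the EvoLink listing cited above.
# These are a point-in-time snapshot, not official ByteDance pricing.
RATES = {
    ("standard", "480p"): 0.071,
    ("standard", "720p"): 0.153,
    ("fast", "480p"): 0.057,
    ("fast", "720p"): 0.124,
}

def clip_cost(seconds: float, tier: str = "standard", res: str = "720p") -> float:
    """Estimated USD cost for a single generation of the given length."""
    return round(seconds * RATES[(tier, res)], 3)

# Cost of a maximum-length 15-second clip at each tier and resolution:
for (tier, res) in RATES:
    print(f"{tier:>8} {res}: ${clip_cost(15, tier, res)}")
```

At these rates, a full 15-second generation runs roughly $0.86 to $2.30 depending on tier and resolution, which puts per-clip cost well under typical subscription math for heavy users.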
Seedance 2.0 Fast is optimized for speed and cost. It's half the price of Pro and significantly faster to generate. The tradeoff: it doesn't support real human face generation, making it best suited for stylized content, product videos, landscapes, and abstract visual storytelling.
What This Means for the Industry
The reception has been strong, even outside China. Swiss-based consultancy CTOL called Seedance 2.0 the "most advanced AI video-generation model available," claiming it surpasses OpenAI's Sora 2 and Google's Veo 3.1 in practical testing, according to tech news outlet Silicon Republic.
The model has sparked widespread discussion on social media both in China and abroad, with many users praising its video-generation capabilities and others pointing to it as evidence of China's rapid rise in the global AI race.
ByteDance's integration of Seedance 2.0 into CapCut positions the company to capture a larger share of the consumer AI video market by leveraging its existing 200 million monthly users. That's a distribution advantage that OpenAI and Google don't have in this space right now.
A standard 5-second special effects shot, which might traditionally cost 3,000 RMB and take a month to produce, can now be generated for approximately 3 RMB in just two minutes. This represents a 1,000x reduction in cost and a massive leap in efficiency, making high-end visual effects accessible to independent creators.
Final Thoughts
The technical story here is genuinely interesting. Seedance 2.0 delivers a marked leap in generation quality, with notably natural, smooth, and physically plausible human motion, and it can synthesize high-fidelity, precisely timed interactive scenes. The multi-shot output within a single 15-second generation is the detail that stands out most to me. Most AI video tools still think in clips. Seedance 2.0 thinks in sequences.
The bigger open question is US and European access. ByteDance has indicated that broader international availability is planned once copyright and safety issues are fully addressed. The phased release allows the company to monitor use cases, gather feedback, and iteratively refine model capabilities. That's a reasonable approach given the legal pressure, but it means the most technically capable AI video model right now is still unavailable to a huge chunk of the world's creators.
If you're in one of the supported markets, this is worth trying. If you're not, the CapCut and Dreamina rollout timelines are the ones to watch. What do you think — does Seedance 2.0's multimodal architecture change how you think about AI video production? Drop your thoughts in the comments.
FAQ
What is Seedance 2.0?
Seedance 2.0 is a text-to-video model created by ByteDance. It uses a unified multimodal architecture that accepts text, image, audio, and video inputs to generate cinematic clips with native audio and realistic physics.
Where is Seedance 2.0 available right now?
The phased rollout for paid CapCut users covers Indonesia, Philippines, Thailand, Vietnam, Malaysia, Brazil, and Mexico, with further expansion to Africa, South America, and the Middle East. The US and Europe are not yet included.
Why was the global launch delayed?
Hollywood studios, including The Walt Disney Company and Paramount Skydance, sent cease-and-desist letters after the technology went viral, demanding that ByteDance keep it out of public hands. ByteDance paused the rollout to strengthen copyright safeguards.
How much does Seedance 2.0 cost?
Pricing varies by platform. Dreamina Basic in China costs around $9.60 USD/month, roughly 20 times cheaper than Sora 2 Pro. International Dreamina plans start at $18/month, and API access through third-party providers is priced per second of generated video.
Can developers access the Seedance 2.0 API?
On April 2, 2026, Volcengine opened a Seedance 2.0 API public beta for enterprise users. International developers can also access the API through platforms like fal.ai, though full public API availability is still rolling out.