
Incredibly realistic human videos from just a single image and a motion signal





DeepSeek is already yesterday's China news...


ByteDance, TikTok's parent company, shared demos this week of OmniHuman, an AI framework that generates realistic human videos from a single image and a motion signal, such as audio or video. It uses multimodal motion conditioning to translate these inputs into lifelike movements, gestures, and expressions.


OmniHuman works with portraits, half-body, and full-body images. It can also animate non-human subjects, such as cartoons or animals, making it highly versatile.


The model is currently in the research phase: the developers have shared demos and hinted at a future code release, but it is not publicly accessible at this time.


See the comments for links to more OmniHuman examples and the project's introduction.



