Microsoft’s VASA-1 can deepfake a person with one photo and one audio track

arstechnica.com

71 comments
  • Since it’s trained on celebrities, can it do ugly people or would it try to make them prettier in animation?

    The teeth change sizes, which is kinda weird, but probably fixable.

    It’s not too hard to notice in an up-close face shot, but from farther away it might be hard to spot - the intonation and facial expressions are spot on. They should use this to re-do all the digital faces in Star Wars.

  • Microsoft’s research teams always make some pretty crazy stuff. The problem with Microsoft is that they absolutely suck at translating their lab work into consumer products. Their labs’ publications are an amazing archive of shit that MS couldn’t get out the door properly or on time. Example - multitouch gesture UIs.

    As interesting as this is, I’ll bet MS just ends up using some tech that OpenAI launches before MS’s bureaucratic product team can get their shit together.

  • This is the best summary I could come up with:


    On Tuesday, Microsoft Research Asia unveiled VASA-1, an AI model that can create a synchronized animated video of a person talking or singing from a single photo and an existing audio track.

    In the future, it could power virtual avatars that render locally and don't require video feeds—or allow anyone with similar tools to take a photo of a person found online and make them appear to say whatever they want.

    To show off the model, Microsoft created a VASA-1 research page featuring many sample videos of the tool in action, including people singing and speaking in sync with pre-recorded audio tracks.

    The examples also include some more fanciful generations, such as Mona Lisa rapping to an audio track of Anne Hathaway performing the song "Paparazzi" on Conan O'Brien.

    While the Microsoft researchers tout potential positive applications like enhancing educational equity, improving accessibility, and providing therapeutic companionship, the technology could also easily be misused.

    "We are opposed to any behavior to create misleading or harmful contents of real persons, and are interested in applying our technique for advancing forgery detection," write the researchers.


    The original article contains 797 words, the summary contains 183 words. Saved 77%. I'm a bot and I'm open source!