Scientists at Samsung's artificial intelligence center in Moscow have created an algorithm that can generate video from only one image.
The development has raised concern among technology experts and commentators, who see it as a worrying step toward making fake content easier to produce.
In a paper published on the preprint server arXiv, and in an accompanying video demo, the researchers show the algorithm creating video from a single still image, such as the Mona Lisa painting or a photograph of Salvador Dalí.
While a single image is enough to create a video, the more images used, the better the quality; a sample of 32 images produces a video of near-perfect realism.
Current AI systems usually require the algorithm to scan large datasets of a subject's face and body before it can produce a moving image of that subject.
This new technology, however, will make fake videos much easier to produce.
The Samsung algorithm was trained on the publicly available VoxCeleb database, which contains more than 7,000 images of celebrities drawn from YouTube videos.
Because the algorithm learns the common features of human faces and bodies, rather than the specific features of a single subject, it can rapidly extrapolate a moving image from very little input.
This also means the technology works on non-celebrities and can be applied to anyone, even people who died long ago.
The AI is currently capable of producing only "talking head" videos, showing a subject from the shoulders up.
Skeptics of deepfake technology, as noted above, worry that it will be used to spread misinformation and fake news, or to steal people's identities.