MyHeritage's New Service “Deep Nostalgia” Uses AI To Animate Old Family Photos

To create photorealistic 3D facial images from portraits of a human subject, several new and existing techniques are combined to produce seamless transitions between different facial expressions. Beginning with multiple uncalibrated views of the subject, the technology uses a user-assisted process to recover the camera parameters of each photograph and the 3D coordinates of a sparse set of points selected on the subject's face.
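
To make the idea concrete, here is a minimal sketch of the two-view version of that recovery step, using OpenCV. The landmark correspondences below are synthetic stand-ins for user-selected facial points, and the camera intrinsics are assumed; a real pipeline would use many views and user assistance to calibrate them.

```python
import numpy as np
import cv2

# Synthetic stand-in for user-selected facial landmarks: project random 3D
# points through two assumed cameras so the 2D correspondences are consistent.
rng = np.random.default_rng(0)
X = rng.uniform(-1.0, 1.0, (20, 3)) + np.array([0.0, 0.0, 5.0])  # in front of camera 1
K = np.array([[800.0,   0.0, 320.0],
              [  0.0, 800.0, 240.0],
              [  0.0,   0.0,   1.0]])                  # assumed intrinsics
R_true, _ = cv2.Rodrigues(np.array([[0.0], [0.2], [0.0]]))  # small rotation to view 2
t_true = np.array([[1.0], [0.0], [0.0]])

def project(P, pts):
    """Project Nx3 world points through a 3x4 camera matrix to Nx2 pixels."""
    h = np.hstack([pts, np.ones((len(pts), 1))])
    x = P @ h.T
    return np.ascontiguousarray((x[:2] / x[2]).T)

P1 = K @ np.hstack([np.eye(3), np.zeros((3, 1))])
P2 = K @ np.hstack([R_true, t_true])
pts1, pts2 = project(P1, X), project(P2, X)

# Recover the relative camera pose from the 2D correspondences alone, then
# triangulate the sparse 3D facial points (up to an unknown global scale).
E, _ = cv2.findEssentialMat(pts1, pts2, K, method=cv2.RANSAC)
_, R, t, _ = cv2.recoverPose(E, pts1, pts2, K)
P2_est = K @ np.hstack([R, t])
X4 = cv2.triangulatePoints(P1, P2_est, pts1.T.copy(), pts2.T.copy())
X3 = (X4[:3] / X4[3]).T
print(X3[:3])  # recovered sparse 3D points on the "face"
```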

Interestingly, Facebook has been working in a related area as well. A 2017 Facebook research paper entitled “Bringing Portraits to Life” describes a way to automatically animate a still portrait, making it possible for the subject in the photo to come to life and express various emotions. Facebook's technique animates the target picture using 2D warps that mimic the face transformations of a driving video. Because warps alone cannot capture the full expressiveness of the face, the method also adds fine details that typically accompany facial expressions, such as creases and wrinkles.
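
As an illustration of the 2D-warp idea (a simplified sketch, not Facebook's actual implementation), the snippet below estimates dense motion between two driver frames with OpenCV's Farneback optical flow and applies that motion field to a still image. The images are synthetic placeholders so the sketch runs on its own.

```python
import numpy as np
import cv2

# Synthetic placeholders: a smooth random "driver" frame, a shifted copy of it
# (the driver's motion), and a separate smooth random "portrait".
rng = np.random.default_rng(0)
frame_a = cv2.GaussianBlur(rng.integers(0, 255, (240, 320)).astype(np.uint8), (21, 21), 5)
frame_b = np.roll(frame_a, shift=3, axis=1)        # driver motion: 3 px to the right
portrait = cv2.GaussianBlur(rng.integers(0, 255, (240, 320)).astype(np.uint8), (31, 31), 7)

# Dense optical flow from frame_a to frame_b (Farneback's method).
flow = cv2.calcOpticalFlowFarneback(frame_a, frame_b, None,
                                    0.5, 3, 15, 3, 5, 1.2, 0)

# Backward warp: each output pixel samples the portrait at the location the
# motion says it came from (grid minus flow is a common approximation).
h, w = portrait.shape
grid_x, grid_y = np.meshgrid(np.arange(w), np.arange(h))
map_x = (grid_x - flow[..., 0]).astype(np.float32)
map_y = (grid_y - flow[..., 1]).astype(np.float32)
warped = cv2.remap(portrait, map_x, map_y, cv2.INTER_LINEAR)
print(warped.shape)  # the portrait, moved the way the driver frames moved
```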

In line with the above research, a new AI-driven service called “Deep Nostalgia” was introduced by the online genealogy company MyHeritage. The service produces short animated videos from old pictures. According to the website overview, MyHeritage licensed the technology from D-ID, a company that specializes in video reconstruction with deep learning. The effect is a bit like the Live Photos feature in iOS, or the moving portraits in the ‘Harry Potter’ films.

The service quickly took off on Twitter on Sunday, with users uploading animated versions of old family pictures, celebrity images, and even sketches, thanks to its ease of use and free trial offer.

According to MyHeritage, “Some people love the Deep Nostalgia feature and consider it magical, while others find it creepy and dislike it.”

Deep Nostalgia gives users the power to upload photos taken with any camera in any era and bring them to “life.” The algorithm relies on pre-recorded driver videos of facial expressions, choosing the one that best fits a given photo and applying its motion to the face. The animation sequences are based on real human expressions, and several of them were performed by MyHeritage employees. The stated goal is to let you upload photographs of departed loved ones and see them in motion, which sounds like a wonderful concept.
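
MyHeritage has not published how a driver video is matched to a photo, but a plausible, heavily simplified sketch is a nearest-neighbor search over head pose: pick the pre-recorded driver whose starting pose is closest to the pose estimated from the upload. All names and pose values below are illustrative placeholders.

```python
import numpy as np

# Hypothetical driver library: each entry maps a driver name to the head pose
# (yaw, pitch, roll, in degrees) of that driver's first frame.
drivers = {
    "smile_frontal":   np.array([  0.0,   0.0, 0.0]),
    "smile_left_turn": np.array([-20.0,   5.0, 0.0]),
    "blink_tilted":    np.array([  5.0, -10.0, 8.0]),
}

def choose_driver(photo_pose, drivers):
    """Return the driver whose initial pose is nearest in Euclidean distance."""
    return min(drivers, key=lambda name: np.linalg.norm(drivers[name] - photo_pose))

photo_pose = np.array([-15.0, 3.0, 1.0])   # pose a face detector might report
print(choose_driver(photo_pose, drivers))  # -> "smile_left_turn"
```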

MyHeritage recently tweeted that over eight million Deep Nostalgia animations had been rendered in less than a week.

How Does AI Work for Modeling and Rendering Facial Expressions?

The face's geometry consists of a skin surface and additional surfaces for the eyes. The skin surface is defined by a subdivision surface with displacement maps, obtained from a laser range scan of the head. The eyes are modeled separately and then aligned with and merged into the skin surface to create a complete face model for high-quality rendering.
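
The displacement-map step reduces to a simple formula: each skin point is the smooth subdivision-surface position offset along its normal by a scalar distance sampled from the scan. A tiny sketch, with placeholder arrays standing in for real subdivision-surface output:

```python
import numpy as np

# Placeholder data: limit-surface positions, their unit normals, and scalar
# displacements that a laser range scan would supply for each point.
base_points = np.array([[0.0, 0.0, 0.0],
                        [1.0, 0.0, 0.1],
                        [0.0, 1.0, 0.2]])          # smooth-surface positions
normals = np.array([[0.0, 0.0, 1.0],
                    [0.0, 0.0, 1.0],
                    [0.0, 0.0, 1.0]])              # unit surface normals
displacements = np.array([0.02, -0.01, 0.03])      # scanned offsets

# Displaced skin surface: p = s + d * n for each sample point.
skin_points = base_points + displacements[:, None] * normals
print(skin_points)
```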

The first step in creating a face model is to construct a subdivision surface that closely approximates the geometry captured by the range scanner.

Fig. 1. Mapping the same subdivision control mesh to a displaced subdivision surface for each face results in a structured model with natural correspondence from one face to another.

The subdivision surfaces for all face models are defined by a single base mesh, with only the vertex positions changing to conform to the shape of each particular face. The next step is to add eyes that look and move like real eyes, since their presence and movement are important to the overall appearance of the face model. The movements of the face are defined by the time-varying 3D positions of a set of sample points on the face surface. These points correspond to the markers tracked by a motion-capture system when the face is driven by motion-capture data, but facial movements from other sources can be represented in the same way.
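
One simple way to turn tracked sample points into mesh motion (a hedged sketch, not the method of any particular vendor) is to move each vertex by an inverse-distance-weighted blend of the displacements of nearby markers:

```python
import numpy as np

# Placeholder data: marker rest positions, one tracked frame of marker
# positions, and a face mesh at rest. Real systems fit marker influences on
# the mesh far more carefully than this inverse-distance blend.
rng = np.random.default_rng(1)
markers_rest = rng.uniform(-1, 1, (10, 3))                 # marker rest pose
markers_now = markers_rest + rng.normal(0, 0.05, (10, 3))  # one tracked frame
vertices_rest = rng.uniform(-1, 1, (500, 3))               # mesh at rest

def deform(vertices, rest, moved, eps=1e-6):
    disp = moved - rest                                    # per-marker motion
    d = np.linalg.norm(vertices[:, None] - rest[None], axis=2)  # V x M dists
    w = 1.0 / (d + eps) ** 2                               # inverse-square weights
    w /= w.sum(axis=1, keepdims=True)                      # normalize per vertex
    return vertices + w @ disp                             # blended displacement

vertices_now = deform(vertices_rest, markers_rest, markers_now)
print(vertices_now.shape)  # (500, 3): the mesh following the markers
```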

To give the face a more lifelike appearance, vendors and developers also incorporate procedurally generated eye movement and separately recorded rigid-body motion of the head as a whole.
