Meet the company resurrecting dead celebrities and digitally cloning living ones

Picture the scene. You’re a struggling actor in your twenties, auditioning for the role of a lifetime in a major Hollywood movie. You nailed the first audition, and the casting director has since summoned you back twice to audition again. At the last callback, Steven Spielberg — the movie’s director — was even in the room. Your agent tells you it’s come down to you and one other actor. Then you get the call. The news isn’t good. The other person got it. “Who did they give the role to?” you ask, trying to conceal your abject disappointment. “Let me check,” your agent says, putting you on hold. Her voice comes back on the line. “They gave it to 1955-era James Dean,” she tells you.

Impossible, right? Only, it’s very much not. Anyone who has been keeping their eyes open at the movies for the past few years (and, frankly, why waste the price of a ticket by shutting them?) will have seen the resurgence of certain actors who don’t appear to belong in 2018.

The point at which everyone realized that something was going on may well have been 2016’s Rogue One: A Star Wars Story, which dusted off Peter Cushing, the legendary British actor who passed away in 1994, for one more cinematic hurrah. Since then, we’ve seen the digital “de-aging” of Michael Douglas, Robert Downey Jr., and Samuel L. Jackson in assorted Marvel movies, Arnold Schwarzenegger in Terminator Genisys, Orlando Bloom in two of the Hobbit movies, Johnny Depp in Pirates of the Caribbean: Dead Men Tell No Tales, and more. In 2018, the movie industry is more in love with digitally recreating the past than it is with back-patting award ceremonies and power lunches at West Hollywood’s Soho House.

This was not the start of it, of course. In Ridley Scott’s 2000 movie Gladiator, actor Oliver Reed’s scenes were completed using a digitally constructed face mapped onto a body double after Reed passed away during filming. In that instance, however, the technique was less a feature than a way to finish the movie without completely reshooting Reed’s performance with another actor. A similar occurrence took place on HBO’s The Sopranos, after actress Nancy Marchand, who played Tony Soprano’s overbearing mother, died between the show’s second and third seasons. Her final scene, in the third-season premiere, is a weird, unsettling mixture of awkward CGI footage and audio pulled from old episodes.

Things have come a long way. No longer a hacked-together workaround, digital recreations of actors are now convincing enough to front million-dollar ad campaigns. In the U.K., the likeness of actress Audrey Hepburn was digitally revived to sit on a bus on the Amalfi Coast, eating a Galaxy chocolate bar. In the States, Dior created a star-studded ad campaign in which Marilyn Monroe, Princess of Monaco Grace Kelly, and Marlene Dietrich appeared on screen with Charlize Theron to hawk perfume. This was, shall we say, the tipping point.

New opportunities arise

It’s into this space that visual effects companies such as Digital Domain have begun to carve out a name for themselves. Located in Playa Vista, on the west side of Los Angeles, California, Digital Domain has been working in the digital life business since the 2010s. It has worked both on major Hollywood movies and in the music industry, where it was famously responsible for bringing the late Tupac Shakur to Coachella in 2012.

You can think of its work a bit like that famous “It’s alive!” scene from 1931’s Frankenstein, only instead of resurrecting the dead by pulling levers in an underground gothic laboratory, the work is done by clicking a mouse a bunch of times in a trendy L.A. edit suite. At the end of the day, the results are the same, though: all the undead celebrities you could wave a flaming torch at. Or, at least, younger-looking living ones.

“If we miss slight details of the body no one really notices but change a smile by a few millimeters and suddenly it no longer looks like the person.”

“A whole series of technologies are used in the preserving of someone’s likeness or the creation of a deceased celebrity,” Digital Domain employee Darren Hendler, whose official job title is “Head of Digital Humans,” told us. “We use a combination of technologies offered by others, and some developed internally at Digital Domain. The creation of a realistic, well-recognized moving human face is one of the hardest challenges in computer graphics today. It requires a wide variety of different technologies to capture all the elements that make up an individual. Our brains process all of this in milliseconds. We focus primarily on the face, as it is the key area of the human body that you take notice of first. If we miss slight details of the body, no one really notices, but change a smile by a few millimeters and suddenly it no longer looks like the person.”

This is an important point. The effect of having something slightly “off” about a digitally recreated person is, at best, distracting and, at worst, extremely off-putting. This “uncanny valley” effect was first studied by Dr. Masahiro Mori in Japan during the 1970s, initially in relation to robots. Today, it most clearly applies to digital recreations of human faces, and the results of getting it wrong can be disastrous.

For instance, in its review of the 2004 movie The Polar Express, CNN noted that the use of CGI to recreate (the very much alive) Tom Hanks digitally did not entirely work. “This season’s biggest holiday extravaganza…should be subtitled ‘The Night of the Living Dead,’” the review read. “The characters are that frightening. This is especially disheartening since there’s so much about this technologically groundbreaking movie…that’s astounding.”

How to create a digital human

There are three elements involved in Digital Domain capturing and creating a digital human. The most obvious of these is, of course, the appearance. In order for an actor to look like, well, themselves, it’s essential to capture the look and shape of their face, their eyes, and their hair. Digital Domain achieves this by using high-end scanners to capture every detail of a person’s face down to the pore level.

“We even capture how blood flow changes the coloration of the skin when it goes into different expressions”

“We even capture how blood flow changes the coloration of the skin when it goes into different expressions,” Hendler explained. “Part of the technology used in this stage allows us to differentiate the way that light interacts with skin, including the look of the skin that absorbs the light and the look of the light that gets reflected off.”

After this comes the equally important second element: facial motion, capturing how the actor’s face moves and changes expression. This is done using technology from the company Dimensional Imaging, designed to capture faces in motion. It achieves this by tracking thousands of points on the face as it shifts from one expression to another. Using this data, combined with Digital Domain’s own in-house technology, it’s possible to create a model showcasing the unique way that each actor’s skin moves over the underlying muscle structure of their face.
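Digital Domain and Dimensional Imaging haven’t published their algorithms, but the underlying idea of following points on a surface from one frame to the next can be sketched with simple patch matching. The snippet below is purely illustrative and uses a synthetic texture rather than real footage: for each tracked point, it searches for where the patch of “skin” around that point has moved to in the next frame.

```python
import numpy as np

def track_point(frame1, frame2, y, x, patch=7, search=5):
    """Return the (dy, dx) displacement of the patch centered at (y, x)."""
    h = patch // 2
    template = frame1[y - h:y + h + 1, x - h:x + h + 1]
    best, best_err = (0, 0), np.inf
    # Exhaustively compare against every shifted patch in the search window.
    for dy in range(-search, search + 1):
        for dx in range(-search, search + 1):
            candidate = frame2[y + dy - h:y + dy + h + 1, x + dx - h:x + dx + h + 1]
            err = np.sum((candidate - template) ** 2)  # sum of squared differences
            if err < best_err:
                best_err, best = err, (dy, dx)
    return best

# Synthetic textured "face", then the same frame shifted 3 pixels to the right
# to stand in for skin moving between two expressions.
rng = np.random.default_rng(0)
frame1 = rng.random((100, 100))
frame2 = np.roll(frame1, 3, axis=1)

# Track a small grid of points (a production system tracks thousands).
grid = [(y, x) for y in range(20, 80, 10) for x in range(20, 80, 10)]
displacements = [track_point(frame1, frame2, y, x) for y, x in grid]
print(displacements[0])  # every point reports (dy, dx) = (0, 3)
```

A production rig tracks thousands of such points per frame, at sub-millimeter precision, and feeds the displacements into a model of skin sliding over muscle; the principle, though, is the same search for correspondences between frames.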

Finally, these two digital elements are then composited onto another actor or stand-in performer, who “wears” the face of the digital thespian to act out the scenes. Just like stand-ins for nude scenes, this means matching up the body type of a target actor with another who broadly resembles them. The head is then mapped onto their body by way of machine learning technology.

As mentioned, it is, of course, possible to recreate actors who were never scanned in their lifetime — although this is tougher and, perhaps, bound to remain less convincing. “In all cases, the creation of a deceased actor without scans or data is much harder than if there were material for the actor,” Hendler said. “Generally, when creating a deceased actor we will find the closest lookalike and scan them as a base. We then modify their appearance to match the actor we are creating which is a slow and very complex procedure. In most cases, we always have some real person as a base and are not creating something from thin air.”

The future is bright, albeit unsettling

So what is the future for this brave new technology? Will tomorrow’s movies feature an all-star lineup of greats, algorithmically calculated to bring in the broadest possible age demographic of viewers? It’s certainly possible, although much of this will depend on the public’s response. After all, Peter Cushing’s appearance in Rogue One was not met with unanimous praise from fans. Is this because the effect wasn’t convincing, or because people don’t like the idea of jolting a late thespian back to quasi-life to perform on screen one more time? We’ll have to wait and see, while also observing the parallel rise of hologram tours at music venues featuring the likes of the late Roy Orbison and Amy Winehouse.

Either way, it seems this is a technology that both studios and individual actors will want to pursue. After all, imagine the endless source of revenue if, for instance, The Rock were to digitally scan himself so as to continue laying the on-screen smackdown long past the point he can convincingly climb a flight of stairs. These licensing deals could continue far beyond the lifespan of a regular celebrity career. (Although we wonder whether the animators or the actors will receive the first “Best Actor” or “Best Actress” Oscar when this milestone inevitably happens!)

For Digital Domain, things are looking good. “We see this as a huge market,” Hendler said. “The costs are pretty high at the moment to create a digital human that looks indistinguishable from the real person, but those are going down quickly. There is also some hesitation about how the audiences will respond. Sometimes the response has been very accepting, others there has been a bit of backlash. As people become more open to seeing deceased celebrities retaking the screen and costs come down, I am sure you will see this technology all over the place.”

Add in the amazing voice synthesis technology that allows computer scientists to accurately recreate any person’s voice, using a tiny amount of training data, and it appears that the future is bright — if a little bit Black Mirror.

Luke Dormehl
Former Digital Trends Contributor