By John P. from Computecoin
The term “digital human” will evoke in some readers’ minds sci-fi visions of teeming Blade Runner cityscapes where people work, eat and live among a milieu of holographic avatars, humans, and digital beings. While it is not inconceivable that we may someday share our world so intimately with digital humans, today the technology has only just begun to make its presence felt. For instance, digital people are still “a long way off” from being able to think in the long term or make subjective decisions (The Verge). The development of digital humans, however nascent, has already yielded remarkable results, and it looks increasingly certain that digital humans will redefine the way people interact with AI, each other, and the world around them.
Digital humans are human-like virtual AI characters designed to be interacted with by people. They are, for all intents and purposes, the chat bot’s successor: sharper, emotionally intelligent, and expressive AI entities embodied in human-like avatars, who, in their often striking resemblance to real people, both in behavior and appearance, give users a face, a personality — a person — to relate to. Thus, digital people effectively become “someone,” as opposed to something; their likeness to real humans renders them radically more engaging to users than chatbots. While not even the cleverest engineering could make these “beings” flesh and blood, they represent a significant step toward an AI assistant — teacher, personal avatar, brand ambassador, influencer, retail concierge, financial advisor (potential applications seem endless) — capable of connecting with people in ways far more profound than those a bodiless chatbot can muster.
Digital humans live in the metaverse, where they function not as “replicas” of real people, but as AI-backed non-player characters (NPCs), who, unlike their video game counterparts, can react intelligently to your input — your mood as conveyed by the tone of your voice, your facial expression and your body language, for example — and respond to those cues with expressions of their own, lending digital humans a more nuanced emotional and cognitive understanding, as it were (XR Today).
Digital humans can replicate the “whole range of human body language,” and provide users not only the information they’re after but “the appropriate non-verbal response as well” (Deloitte).
The technology’s aptitude for mimicking the way people talk, as opposed to merely interpreting the words humans say or write, gives it a clear advantage over one-note chatbots. Digital human firm UneeQ notes that nonverbal factors, including “tone of voice and body language,” account for fully 93 percent of human communication (UneeQ). Digital humans are uniquely primed to read, listen and see between the lines, teasing out from the verbal chaff users’ needs, feelings and attitudes. Indeed, they establish “a human emotional connection” (Deloitte).
Digital humans, unlike the flesh-and-blood people they resemble, require zero sleep and no wages.
They’re always available and readily scalable, meaning digital humans can be widely deployed and field huge numbers of user interactions simultaneously, all while engaging with users in the complex, intelligent ways described above, effectively “recreat[ing] human interaction” on a scale far greater than customer service representatives (that is, real people) and chatbots could achieve (Deloitte).
Digital humans can also address people’s hesitance to discuss sensitive or personal issues with other humans, or help bridge the gap when a human user simply doesn’t feel like talking to another person face-to-face. Alexis Ong suggests that women might find it easier to approach a “female-presenting digital person about bra fittings or puberty problems,” for instance (The Verge).
Digital humans afford users privacy and security without impinging on their freedom as content creators. VTubers — vloggers and streamers who adopt a “CGI character or anime face” for their video content — could use a digital avatar to maintain their anonymity and avoid harassment from viewers (Stage11) while still interacting with viewers.
Some users might trust and confide in digital humans more readily than they would real people. For example, users might feel more at ease “talking to a digital person about their personal finances,” according to Soul Machines co-founder Greg Cross, who asserts “we have a lot of hard data now that there are many people in many interactions who prefer to speak to digital people over real people” (The Verge). “Sophie,” the digital human produced by UneeQ in partnership with Deloitte, says: “By having an identity, a name, a face, a voice, and a presence on the screen, I…create a sense of reciprocity, which helps when the time comes for clients to share things with me.” Clients, Sophie continues, are “twice as likely to disclose information with me than they are to a regular chatbot.”
Unsurprisingly, language is another barrier easily overcome by digital humans; Sophie boasts she can “understand 72 languages,” and speaks a handful of European languages as well as Chinese and Japanese. Digital humans’ AI “DNA” enables them to learn quickly, too. Startup Amelia builds its “digital employees” with an AI that “emulates parts of the brain involved with memory to respond to queries and, with each interaction, learns to deliver more engaging and personalized experiences” (IEEE Spectrum).
Digital humans also boast the advantages of easy integration and high performance on diverse systems, accessibility, and low cost (UneeQ). Unreal Engine’s MetaHumans deliver “real-time performance on everything from Android to XSX and PS5,” and “can run in real-time on high-end PCs with RTX graphics cards, even at their highest quality with strand-based hair and ray tracing available” (MetaHuman Creator — Unreal Engine). Ericsson advertises that its project, Digital Humans SDK, represents a “ground-breaking” effort to “make photorealistic 3D presence accessible to anyone with a modern smartphone and commodity XR headset,” “democratiz[ing] the future of 3D human connection” (Ericsson Digital Humans SDK). UneeQ’s platform can “connect to any system that has an API,” meaning it is compatible with “any conversational AI platform,” including Google DialogFlow, and can “run on any device that has a web browser, as well as in native mobile apps” (UneeQ). Scale and accessibility seem of paramount importance to UneeQ, who profess that clients can “design, develop and deploy [their] own digital human” on their platform “in a matter of minutes” (UneeQ). Speedy response times are a hallmark of digital human tech. Samsung subsidiary STAR Labs reports that its NEON avatars can reply to users’ questions in “a few milliseconds” (The Verge).
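The architecture implied above — a conversational AI backend reachable over an API, decoupled from an avatar front end that also reacts to nonverbal cues — can be sketched in miniature. The sketch below is a purely hypothetical illustration in Python; none of the names correspond to UneeQ’s, Google DialogFlow’s, or any other vendor’s real API. It shows the routing idea only: transcribed text goes to a pluggable answer function, while a detected tone is mapped to a facial-expression tag for the rendering layer.

```python
from dataclasses import dataclass

# Hypothetical illustration only -- not any vendor's actual API.

@dataclass
class UserInput:
    text: str   # transcribed speech from the user
    tone: str   # nonverbal cue detected upstream, e.g. "frustrated", "happy"

@dataclass
class AvatarResponse:
    text: str        # what the digital human says
    expression: str  # facial-expression tag passed to the rendering layer

def answer_query(text: str) -> str:
    """Stand-in for any conversational AI backend reachable over an API."""
    return f"Here's what I can tell you about: {text}"

# Map a detected user tone to an appropriate nonverbal response,
# so the avatar's face matches the user's emotional state.
EXPRESSION_FOR_TONE = {
    "frustrated": "concerned",
    "happy": "smiling",
    "neutral": "attentive",
}

def respond(user: UserInput) -> AvatarResponse:
    reply = answer_query(user.text)
    expression = EXPRESSION_FOR_TONE.get(user.tone, "attentive")
    return AvatarResponse(text=reply, expression=expression)

if __name__ == "__main__":
    r = respond(UserInput(text="my card was declined", tone="frustrated"))
    print(r.text, "|", r.expression)
```

Because `answer_query` is just a function boundary, any conversational platform could sit behind it — which is the sense in which such systems claim compatibility with “any conversational AI platform.”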
Many commentators predict digital humans will assume a growing role in our lives. David Lucatch, CEO, President, and Chair at Liquid Avatar Technologies, believes that people will design and “become” digital avatars of their own as digital personas for the metaverse (Forbes). Simon Yuen, director of graphics and AI at Nvidia, similarly writes that “everyone will one day have their own digital version of themselves, whether it’s an accurate or a stylized avatar” (Nvidia). “Over time,” Yuen continues, “the connection between real humans and digital humans will grow. It will go beyond watching a puppet on the computer.” One day, we will converse with digital people, placing orders with them for all kinds of goods, including food and prescription medicine, which Yuen believes our digital companions will deliver to our front doors (Nvidia). “Communication and interaction,” he suggests, “will become a two-way street,” providing “a new element of freedom and reinvention” (Nvidia).
Not all is rosy in the metaworld of digital humans, though. Critics of the technology have identified numerous problems and risks, including fraud, transparency, usability, ethics, and the consequences of prolonged, ubiquitous human-AI interaction; and have questioned the capacity of AI characters, however stunningly lifelike, to manifest inimitable human qualities.
Fraud and deep fakes might prove difficult to ward off if users do not know if they are “interacting with AI” or “other humans” (XR Today). Moreover, it remains unclear who “holds the copyright to and can profit from the content and VR worlds created by AI” (XR Today). Alexis Ong observes that digital personal assistants could find it hard to “traverse seamlessly through different game franchises and IPs and retain a sense of continuity” (The Verge).
Ong also questions how digital humans’ presence in video games, long a sacred domain of escapism and relaxation (and increasingly a livelihood), might “affect our relationship with games.” Reminding us of people’s tendency to advertise their socioeconomic status through tech gizmos like iPhones and AirPods, Ong asks: “What kind of tensions would arise in the gaps between people who can afford digital personae and those who can’t or won’t use them?” (The Verge). Will personal digital humans fill the butler’s long-empty shoes for the metaverse’s toniest households, or will they be accessible to everybody?
Nick Kelly, a specialist in design education and cognition at the Queensland University of Technology, argues that digital humans might disrupt the natural process by which young children observe and learn from the adults in their lives, especially teachers, whom they ordinarily watch “for something like 30 hours per week for 12 years of their life” (The Verge). “It doesn’t matter how well animated the ‘digital person’ is,” Kelly contends, “how high their resolution is, or how well-programmed they are to provide ‘individual attention,’ they are unable to replace human teachers who have a monopoly on knowing how to be human — now and in the foreseeable future” (The Verge).
Digital people often don’t look real enough to fool the human eye.
Stage11’s Tom Faber suggests that the design of digital humans has “not yet achieved photorealism,” and notes that humans possess an uncanny ability to discern humanness from the lack thereof. “Humans are excellent at detecting when something isn’t a real person. If a single detail is wrong, we smell a rat” (Stage11). Indeed, digital humans whose appearance straddles the awkward gully between pure cartoon and flesh-and-bone human “evoke an uncanny valley feeling,” according to Shiwali Mohan, “an AI systems scientist at the Palo Alto Research Center” (IEEE Spectrum). “If something looks like a human, we have high expectations of them, but they might behave differently in ways that humans just instinctively know how other humans react” (IEEE Spectrum). Mohan argues that digital people’s lifelike faces and realistic vocal tone alone cannot elevate them to the level of human. “They’re humanlike in their looks and the way they sound,” Mohan states, “but that in itself is not human…Being human is also how you think, how you approach problems, and how you break them down” (IEEE Spectrum).
Digital humans are off to the races, with some of the amiable avatars cropping up in retail outlets and others appearing on mobile screens. They’re the cool older sibling of the chatbot, equipped with AI, a face, the capacity to mirror users’ emotions and respond intelligently to their input, including nonverbal cues. Digital humans are easy to deploy, massively scalable, cheap to put to work, quick and easy to develop, and infinitely customizable. One day, you might have your own digital human assistant. You might learn from a digital human teacher. You might even design your own digital avatar for the metaverse. A lot of work remains to be done, though, before this technology can reach its full potential; and many questions remain unanswered about the implications of digital humans for the lives of real people. But the digital genie has been let out of the digital bottle; digital humans, it appears, are here to stay.