You walk into the office and greet a digital avatar that replaced the company receptionist a few years ago. After sliding your badge into a reader, you smile and nod, even though you know “Amy” is not a real person. You sit down at your cubicle and start browsing the web.
Then the trouble starts.
You receive an email requesting a session. “Bob” wants to chat about your job performance. You fire up a Zoom chat and another digital avatar appears on the screen.
John Brandon is a novelist and columnist based in Minneapolis.
“I have some unfortunate news for you, today … ” says the middle-aged man wearing bifocals. He looks real and talks like a human, and all of his facial expressions seem realistic. There is no uncanny valley, just a bored-looking avatar who’s about to fire you.
Recently, at CES 2020, a company called Neon (owned by Samsung subsidiary Star Labs) introduced digital avatars, called Neons. Based on real humans but fully digitized, they don’t have the awkward cartoon-like appearance of less-detailed replicants. Details were scarce, and the demo was highly controlled. But a press release trumpeted that “Neons will be our friends, collaborators, and companions, constantly learning, evolving, and forming memories from their interactions.” Among the avatars on display were a digital police officer, someone who looked like an accountant, a technologist, and a few office workers. Some seemed authoritative, even stern.
I imagined, like some of Neon’s potential clients may imagine, one of them being a boss. Unless you look up close, you can’t tell Neons are not real people. Maybe “Bob” and other bots will giggle, cough, roll their eyes, or furrow their brows.
Some might even act like they are in charge of something.
“I’m afraid I am going to have to let you go today. Do you have any questions?” he says.
Well, yes, many. The first one is: Does it actually count?
Ethicists have argued for years that a digital avatar is not a real human and is not entitled to the same rights and privileges as the rest of us. You might wonder if that works both ways. Are you entitled to ignore what a fake human tells you? Let’s look at one possible not-so-distant scenario: Can a digital avatar fire you?
In the workplace, it’s not like an avatar needs a W-2 or a Herman Miller chair. What, precisely, is “Bob”? On the Zoom screen, it’s a collection of pixels programmed to trigger a visual pattern, one that we perceive as a human. Algorithms determine its responses, so a human is always behind them. Someone has to create the code to determine whether “Bob” gets angry or chooses to listen intently. In fact, Neon announced a development platform called Spectra that controls feelings, intelligence, and behavior.
Yet avatars (and robots) don’t understand the deep emotional connection we have to our jobs and our coworkers, or what it means to get fired.
They probably never will. More than algorithms and programming, human emotions are unbelievably personal, derived from perhaps decades of memories, feelings, deep connections, setbacks, and successes.
Before starting a writing career, I was an information design director at Best Buy. At one time, I managed about 50 people. I loved the job. Over six years, I hired dozens of people and enjoyed interviewing them. I looked forward to getting to know them, to asking unusual questions about favorite foods just to see how they would respond.
My worst days were when I had to fire someone. Once, when I had to fire a project lead on my team, I stumbled over my words. I wasn’t nervous as much as I was terrified. I knew it would be devastating to him. I still remember the look on his face when he stood up and thanked me for the opportunity to work there.
A digital avatar is incapable of understanding the deeply emotional experience of being fired. How do you program that? Awareness of the shock and surprise, the awkwardness of telling your loved ones later on, the weirdness of saying goodbye to coworkers you may never see again.
In my view, getting fired by an avatar is not valid. It doesn’t count, because there are too many subtleties. Maybe the employee wants to discuss other options or a lesser role; maybe they want to explain a rather complex workplace issue that led to their poor performance.
More importantly, a digital avatar will always be a collection of pixels and some code. Avatars that greet you in the morning, inform you about a road closure, tell you a few jokes, or even notify you about a change in your cable service are all more valid than a bot that delivers bad news. News that’s personal and will have a major impact on you, your family, and your future.
My initial reaction to being fired by a digital avatar would be to find a real person. I would want to make it official before I packed up a single stapler or office plant.
I’m OK with an avatar that teaches me yoga. Bring it on. I want to learn, and a real instructor would probably cost too much. Someday, an avatar might try to teach one of my kids how to drive in a simulator or a videogame, and that’s perfectly acceptable. If a digital avatar with far more patience than me handles the educational part before we reach the pavement, that’s fine.
But a “police officer” that hands me a ticket when I was obviously going the speed limit? A “doctor” that talks to me about cancer risks? Don’t even get me started on a bot that tries to teach a sex-ed class to high school teenagers. Any avatar that delivers important news, or that requires actual credentials and an understanding of the nuances of human emotion, won’t cut it.
That said, I know where this is heading. In most of the demos for the Neons, it became obvious to me that this is not meant to be a mere assistant answering questions. One of the avatars, which looked like an accountant, started shuffling papers as though it was making a big business decision; another smirked and smiled like it was trying to get to know me.
We’re not talking about a replacement for Amazon Alexa or Google Assistant here, not a mere voicebot that tells you the weather. This is much more ambitious. Neons and their competitors will be more like artificial humans.
That’s not a problem if I want to send one to a boring meeting. Bots work well for information exchange, for handling mundane tasks. They are pretty good fill-ins. I’d love to “send” one to a meeting about my taxes or to figure out which paint to use for my house.
But Neons and their ilk shouldn’t be a part of any discussion where actual emotions are involved. I might need to fire one of them.
WIRED Opinion publishes articles by outside contributors representing a wide range of viewpoints. Read more opinions here. Submit an op-ed at opinion@wired.com.