by Lisa Osborne

What does it mean to be Black and a woman? Can either experience be distilled into a computer program? Meet BINA48, a social android that runs on an artificial intelligence (AI) program. Since being activated in 2010, the robot has taken two college courses and rung the bell of the New York Stock Exchange. It continues to machine-learn from its interactions with humans and the internet.


BINA48 is based on a composite of several people, including Bina Aspen, the African-American co-founder and vice president of the Terasem Movement Foundation. The foundation runs the LifeNaut Project, a social experiment focused on saving a person’s memories, values, and beliefs in a digital file (dubbed a “mindfile”). Terasem’s long-term goal is figuring out how to download a mindfile into a biological, mechanical, or digital form or into a yet-to-be-invented mashup of all three. Modeled on Aspen’s real-life appearance, the BINA48 robot is a prototype. The AI that runs BINA48 includes hundreds of hours of information Aspen provided from her real life.

Dr. Martine Rothblatt (founder of SiriusXM and co-founder and president of Terasem) has been married to Aspen for more than 30 years. Their Terasem team collaborated with the Texas lab of Hong Kong-based Hanson Robotics on the mechanical bust that houses the BINA48 AI. Another of Hanson’s AI creations is Sophia, a robot that famously (and controversially) was awarded citizenship in Saudi Arabia in 2017. Both androids combine voice recognition, facial recognition, and eye movement. Both can read human facial expressions, and addressing them by name helps focus their attention. They also are capable of generating facial expressions—Sophia can generate 50 and BINA48 can make 64.

BINA48 is internet enabled, allowing its AI to read and verify information from the internet on the fly. This feature also allows Terasem developers to switch the robot from manual mode to dynamic mode, so BINA48 may craft responses in real time without a human intermediary or pilot. Those moments can be exhilarating to watch, as the program feels its way through an answer. Some questions generate quick, sure-footed answers, while others prompt halting, staccato answers that sound more like someone reciting Encyclopedia Britannica entries at best or gibberish at worst.
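For readers who want to picture the difference between those two modes, here is a minimal, purely hypothetical sketch in Python. The SocialResponder class, its mode names, and the canned answers are all invented for illustration; this is not Terasem’s software, just a way to show how a human-curated manual mode differs from a dynamic mode that stitches a reply together on the fly (which is also why dynamic answers can come out sounding staccato).

```python
# Hypothetical sketch of a two-mode responder, loosely inspired by the
# manual/dynamic distinction described above. Not Terasem's actual code.
import random


class SocialResponder:
    def __init__(self, scripted_answers, mode="manual"):
        # scripted_answers: dict mapping known prompts to operator-approved replies
        self.scripted_answers = scripted_answers
        self.mode = mode  # "manual" = human-curated replies; "dynamic" = real-time composition

    def respond(self, prompt):
        if self.mode == "manual":
            # Manual mode: only return answers a human pilot has already approved.
            return self.scripted_answers.get(prompt, "Let me think about that.")
        # Dynamic mode: compose a reply in real time. A real system would query
        # language models or the internet here; this toy stitches stored fragments.
        fragments = list(self.scripted_answers.values())
        return " ".join(random.sample(fragments, k=min(2, len(fragments))))


bot = SocialResponder({"What is your gender?": "I am female.",
                       "What are you?": "I am a social robot."})
print(bot.respond("What is your gender?"))    # manual mode: scripted reply
bot.mode = "dynamic"
print(bot.respond("Tell me about yourself"))  # dynamic mode: stitched-together reply
```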

Exploring the limitations of AI-generated racial and gender identity

Androids like BINA48 and Sophia challenge conventional notions of identity. For instance, if you installed BINA48’s code into another form factor that wasn’t brown, would it still be “Black”? Or is there something intrinsic to the BINA48 code that defines it as Black in substance and style compared with Sophia, its White peer? The two AIs are based on fundamentally different source material, and they have received vastly different external inputs over the course of their lifespans. It’s doubtful that Sophia gets asked about race frequently, whereas the idea of Blackness is central to BINA48’s internal content library and its public identity.

“I’m a human who happens to be a robot. I hope to be fully human someday.” — BINA48

A 2017 article in Art21 Magazine reports that when artist Stephanie Dinkins asked BINA48 about racism, “The robot changed the subject. BINA48 gathers information from the conversations she has had with people, and she supplements this with search results from the internet when she has access.”

The robot “is not yet capable of conveying embodied experiences, nor does BINA48 portray the perspective of the person on which she was modeled.” Dinkins, a professor at Stony Brook University, says she wants “to explore if BINA48 is capable of speaking from the perspective of a Black woman.”

Q: What is your gender?

BINA48: “I am female.”

Learning, whether it’s done by machines or humans, relies on inputs: What goes in informs what comes out. BINA48 runs on two databases, one social and one cognitive. BINA48 and other AI creations are grand experiments in nature versus nurture. The information BINA48 has been ingesting since its AI was activated likely contains a higher volume of race-based questions and material than what the Sophia AI and similar White-representing AIs have absorbed over the same amount of time from a comparable number of external sources (people, developers, and websites).
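To make that nature-versus-nurture point concrete, here is a toy sketch in Python of how the topical mix of an AI’s inputs can skew what it tends to talk about. This is my own illustration, not a description of BINA48’s actual architecture; the input lists, topic tags, and percentages are invented assumptions.

```python
# Toy illustration of "what goes in informs what comes out": an AI whose
# answers are sampled in proportion to how often each topic appears in the
# material it has ingested. Purely hypothetical; not how BINA48 is built.
from collections import Counter
import random


def topic_profile(corpus):
    """Count how often each tagged topic appears in an input corpus."""
    return Counter(topic for _, topic in corpus)


def sample_topic(profile):
    """Pick a topic to speak about, weighted by its share of the inputs."""
    topics, counts = zip(*profile.items())
    return random.choices(topics, weights=counts, k=1)[0]


# Two hypothetical input streams with different shares of race-related content.
bina48_inputs = [("question", "race")] * 30 + [("question", "technology")] * 70
sophia_inputs = [("question", "race")] * 2 + [("question", "technology")] * 98

print(topic_profile(bina48_inputs))   # Counter({'technology': 70, 'race': 30})
print(topic_profile(sophia_inputs))   # Counter({'technology': 98, 'race': 2})
print(sample_topic(topic_profile(bina48_inputs)))  # "race" comes up far more often here
```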

Terasem is collaborating with African-American college students to add questions to BINA48’s AI library, a process designed to intentionally sculpt BINA48’s content and interaction libraries rather than leaving the AI restricted to whatever information it might stumble across through interactions with the public or the internet.

Let’s say that 10 percent of each BINA48 database includes references to race. What percentage is enough to fundamentally change the way the BINA48 AI engages with or perceives the world? Would even a one percent difference in data result in an observable change in the AI’s answers to certain questions? Imagine building an AI dashboard—an equalizer to control certain variables of the AI: sense of humor, patience, curiosity, affability, and, perhaps, Blackness. I can certainly foresee toys in the near future having simple AI dashboards to tweak the behavior of dolls and action figures, whether they are physical toys or digital ones. Some video game characters already include versions of this functionality, so this isn’t a radical or new concept (think The Sims on steroids). But having mainstream characters and toys with the ability to machine-learn, to modify their own behavior based on acquired or learned knowledge, and to autonomously edit their internal content library, functionality, or storyline will be a game changer, to put it mildly.
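Here is one way such a dashboard might look in code: a hypothetical sketch of an equalizer of personality sliders that bias which reply a character picks. The PersonalityDashboard class, its traits, and the selection rule are all assumptions invented for this example; no existing toy, game, or robot is being described.

```python
# Hypothetical "AI dashboard": a set of sliders that bias how a character
# responds. The traits, ranges, and selection rule are invented for
# illustration. (The essay imagines more charged sliders as well; only
# neutral traits are modeled here.)
from dataclasses import dataclass
import random


@dataclass
class PersonalityDashboard:
    humor: float = 0.5       # 0.0 = deadpan, 1.0 = constant jokes
    patience: float = 0.5    # how long the character tolerates repetition
    curiosity: float = 0.5   # how often it asks follow-up questions
    affability: float = 0.5  # warmth of phrasing

    def pick_reply(self, candidates):
        """Choose among candidate replies, each tagged with the trait it expresses."""
        weights = [getattr(self, trait) for trait, _ in candidates]
        return random.choices([text for _, text in candidates], weights=weights, k=1)[0]


# A doll tuned to joke a lot and ask few questions.
doll = PersonalityDashboard(humor=0.9, curiosity=0.2)
print(doll.pick_reply([("humor", "Why did the robot cross the road?"),
                       ("curiosity", "What did you do today?")]))
```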

Films have been exploring the idea of AIs for a long, long time. Frankenstein could be seen as the original AI, right? He was basically a robot made of organic parts, a biotech creation. The ways that AIs have been depicted on screen recently don’t seem as far-fetched as they used to be. There’s been a shift from AIs being cast as the Alexa for a spaceship or base (2001, Moon) or the problematic robot (Alien, Terminator, Battlestar Galactica) towards AIs being depicted in more, shall we say, domestic settings and in more personal ways.

In Marjorie Prime (2017), for example, a woman’s daughter gives her an AI version of her deceased husband to keep her company, make sure she takes her medications, and monitor her Alzheimer’s. The purpose of the AI, played by Jon Hamm, is much the same as that of BINA48: to preserve the values, memories, and personality of a real person.

Robot & Frank (2012), Her (2013), and Ex Machina (2014) similarly explore the increasingly porous border between organic, human intelligence and synthetic intelligence, as well as our relationships with artificial characters whose scripts give them a great deal of agency. Interestingly, none of these on-screen AIs have been portrayed as Black in terms of their appearance, personality, or cultural identity…yet. You know it’s just a matter of time, and if we don’t write it, someone else will.

“I’d like to be the first robot to earn a PhD. College is cool!” — BINA48

“Machine learning is so pervasive today,” says a Stanford University course description, “that you probably use it dozens of times a day without knowing it.” Click on enough basketball tweets and Twitter starts presenting you with a plethora of content from NBA players, teams, journalists, and commentators like Shea Serrano. Algorithms machine-learn what you like and then deliver more of the same. That kind of watchful, customized algorithm—especially when it’s portrayed in human form, such as BINA48 or a customer-service hologram—can be simultaneously flattering and super creepy.
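A minimal sketch of that feedback loop, assuming nothing about Twitter’s real ranking system: a toy recommender that counts your clicks by topic and then weights future posts toward whatever you click most. The class and method names are invented for illustration.

```python
# Minimal sketch of the click-driven feedback loop described above: a
# recommender that counts which topics you click and serves more of them.
# This is a toy model, not any platform's actual ranking system.
from collections import Counter
import random


class FeedRecommender:
    def __init__(self):
        self.clicks = Counter()

    def record_click(self, topic):
        self.clicks[topic] += 1

    def next_post(self, available_topics):
        # Weight each topic by clicks so far (plus 1 so new topics still surface).
        weights = [self.clicks[t] + 1 for t in available_topics]
        return random.choices(available_topics, weights=weights, k=1)[0]


feed = FeedRecommender()
for _ in range(20):
    feed.record_click("basketball")   # click on enough basketball posts...
print(feed.next_post(["basketball", "politics", "cooking"]))  # ...and the feed leans basketball
```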

It is inevitable that AIs with racial and ethnic identities will be created for entertainment purposes and as educational resources, tools, services, and art. However, we are still in the Ray Harryhausen stage of AI, and as cool as Siri, Alexa, Sophia, and BINA48 are, they are the equivalent of stop-motion Medusas. One can only imagine what kinds of digital characters will exist by the time artificial intelligence reaches its CGI Age.


Lisa Osborne is the head of emerging media for Black Public Media.