As virtual reality (VR) and augmented reality (AR) technologies continue to grow in popularity, virtual avatars are becoming an increasingly important part of our digital interactions. In particular, avatars are at the center of many social VR and AR experiences, as they are key to representing remote participants and facilitating collaboration.

Over the last decade, interdisciplinary scientists have dedicated significant effort to better understanding the use of avatars, and have made many interesting observations. These include users’ capacity to embody their avatar (i.e., the illusion that the avatar body is their own) and the self-avatar follower effect, in which the binding between the actions of the avatar and those of the user becomes strong enough that the avatar can actually affect the user’s behavior.

The use of avatars in experiments isn’t just about how users will interact and behave in VR spaces, but also about probing the limits of human perception and neuroscience. Some VR social experiments rely on recreating scenarios that can’t easily be reproduced in the real world, such as bar crawls to explore ingroup vs. outgroup effects, or deception experiments, such as a recreation of the Milgram obedience-to-authority experiment inside virtual reality. Other studies explore deep neuroscientific phenomena, such as the human mechanisms for motor control. This work follows the trail of the rubber hand illusion, a demonstration of brain plasticity in which a person can start feeling as if they own a rubber hand while their real hand is hidden behind a curtain. There is also a growing number of possible therapies for psychiatric treatment using personalized avatars. In these cases, VR becomes an ecologically valid tool that allows scientists to explore or treat human behavior and perception.


None of these experiments and therapies could exist without good access to research tools and libraries that enable easy experimentation. As such, multiple systems and open source tools for avatar creation and animation have been released over recent years. However, existing avatar libraries have not been systematically validated for diversity. Societal biases and dynamics transfer to VR/AR when users interact with avatars, and unrepresentative avatars could lead to incomplete conclusions in studies of human behavior inside VR/AR.

To partially overcome this problem, we partnered with the University of Central Florida to create and release the open-source Virtual Avatar Library for Inclusion and Diversity (VALID). Described in our recent paper published in Frontiers in Virtual Reality, this library is readily available for use in VR/AR experiments and includes 210 avatars spanning the seven races and ethnicities recognized by the US Census Bureau. The avatars have been perceptually validated and designed to advance diversity and inclusion in virtual avatar research.

Headshots of all 42 base avatars available in the VALID library. The avatars were created in extensive interaction with members of the seven ethnic and racial groups from the Federal Register (AIAN, Asian, Black, Hispanic, MENA, NHPI, and White).

Creation and validation of the library

Our initial selection of races and ethnicities for the diverse avatar library follows the most recent guidelines of the US Census Bureau, which as of 2023 recommended the use of seven ethnic and racial groups that represent a large demographic cross-section of US society and can also be extrapolated to the global population. These groups are American Indian or Alaska Native (AIAN), Asian, Black or African American, Hispanic or Latino, Middle Eastern or North African (MENA), Native Hawaiian or Other Pacific Islander (NHPI), and White. We envision the library will continue to evolve to bring even more diversity and representation with future additions of avatars.


The avatars were hand modeled using a process that combined average facial features with extensive collaboration with representative stakeholders from each racial group, whose feedback was used to artistically modify the facial mesh of each avatar. We then conducted an online study with participants from 33 countries to determine whether the race and gender of each avatar in the library are recognizable. In addition to the avatars themselves, we provide statistically validated race and gender labels, based on participants’ judgments, for all 42 base avatars (see below).

Example of the headshots of a Black/African American avatar presented to participants during the validation of the library.

We found that all Asian, Black, and White avatars were universally identified as their modeled race by all participants, while our American Indian or Alaska Native (AIAN), Hispanic, and Middle Eastern or North African (MENA) avatars were typically only identified by participants of the same race. This indicates that a participant’s own race can improve their identification of a virtual avatar of that race. The paper accompanying the library release highlights how this ingroup familiarity should be taken into account when studying avatar behavior in VR.

Confusion matrix heatmap of agreement rates for the 42 base avatars, separated by other-race participants and same-race participants. One interesting aspect visible in this matrix is that participants were significantly better at identifying avatars of their own race than those of other races.
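For researchers who want to run a similar analysis on their own validation data, the sketch below shows one way to compute such agreement rates with pandas, splitting judgments into same-race and other-race participant groups. The column names and toy data are hypothetical illustrations, not the analysis code from the paper.

```python
import pandas as pd

# Hypothetical long-format responses from a validation study:
# one row per (participant, avatar) judgment.
df = pd.DataFrame({
    "participant_race": ["Asian", "Black", "White", "Asian"],
    "avatar_race":      ["Asian", "Asian", "Black", "White"],
    "perceived_race":   ["Asian", "Asian", "Black", "Asian"],
})

# Did the participant identify the avatar's modeled race?
df["correct"] = df["perceived_race"] == df["avatar_race"]

# Split judgments into same-race and other-race participant groups.
df["group"] = (df["participant_race"] == df["avatar_race"]).map(
    {True: "same-race", False: "other-race"}
)

# Agreement rate per avatar race, separated by participant group;
# rows of the result correspond to the two participant groups.
agreement = (
    df.groupby(["group", "avatar_race"])["correct"]
      .mean()
      .unstack()
)
print(agreement)
```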

Dataset details

Our models are available in FBX format, are compatible with previous avatar libraries such as the commonly used Rocketbox, and can be easily integrated into most game engines, including Unity and Unreal. Additionally, the avatars come with 69 bones and 65 facial blendshapes that enable researchers and developers to easily create and apply dynamic facial expressions and animations. The avatars were intentionally made to be partially cartoonish to avoid extreme look-alike scenarios in which a person could be impersonated, while remaining representative enough to support reliable user studies and social experiments.


Images of the skeleton rigging (bones that allow for animation) and some facial blendshapes included with the VALID avatars.
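As a quick way to verify the rigging and blendshapes after downloading an avatar, the snippet below inspects an imported FBX using Blender’s Python API (bpy). It is a minimal sketch: the file path and the “Smile” blendshape name are placeholders, and the actual mesh and shape-key names in the VALID assets may differ.

```python
# Run inside Blender's Python console, or headless via:
#   blender --background --python inspect_valid.py
import bpy

# Placeholder path to a downloaded VALID avatar.
bpy.ops.import_scene.fbx(filepath="VALID_avatar.fbx")

for obj in bpy.context.scene.objects:
    if obj.type == "ARMATURE":
        # The VALID avatars ship with 69 bones for body animation.
        print(f"{obj.name}: {len(obj.data.bones)} bones")
    elif obj.type == "MESH" and obj.data.shape_keys:
        # The 65 facial blendshapes appear as shape keys on the face mesh;
        # the first key block is the neutral 'Basis' shape.
        keys = obj.data.shape_keys.key_blocks
        print(f"{obj.name}: {len(keys) - 1} blendshapes")
        # Drive a blendshape to full intensity (name is hypothetical;
        # check the printed list for the actual blendshape names).
        if "Smile" in keys:
            keys["Smile"].value = 1.0
```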

The avatars can be further combined with variations of casual attire and five professional attires, including medical, military, worker, and business. This is an intentional improvement over prior libraries, which in some cases reproduced stereotypical gender and racial biases in avatar attire and provided very limited diversity for certain professional avatars.

Images of some sample attire included with the VALID avatars.

Get started with VALID

We believe that the Virtual Avatar Library for Inclusion and Diversity (VALID) will be a valuable resource for researchers and developers working on VR/AR applications, and we hope it will help create more inclusive and equitable virtual experiences. To this end, we invite you to explore the avatar library, which we have released under the open-source MIT license. You can download the avatars and use them in a variety of settings at no charge.

Acknowledgements

This library of avatars was born out of a collaboration with Tiffany D. Do, Steve Zelenty, and Prof. Ryan P. McMahan from the University of Central Florida.
