BEHIND THE SCENES
Design, UI/UX, Art, Code, Data Management, Audio, Testing, Analytics.
This is a soup-to-nuts overview of my experience at Sharecare Reality Lab - starting with nothing but a wireframe, and coordinating across multiple departments (medical, art, programming) through the onset of the pandemic and the departure of our director of engineering. Our team rose to the challenge - we completed and shipped multiple versions of our flagship product for desktop, tablet and VR, along with several other VR wellness products for oncology patients and pediatric patients with terminal illnesses.
Wireframes: I began with lots and lots of wireframes! There is a huge amount of information in the human body to categorize and disseminate - which requires a ton of planning, prototyping, testing and refactoring. Each module begins with a healthy organ or system, and we are continually adding new content for diseases and treatments - so the base application has to be structured for expandability.
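As a rough illustration of what that expandability means structurally, here is a minimal Python sketch (the names and fields are hypothetical, not our production schema) - each module carries a healthy baseline and grows by adding disease and treatment entries without restructuring the base:

    from dataclasses import dataclass, field

    @dataclass
    class ContentEntry:
        title: str
        body: str          # medical copy, reviewed by the medical team
        model_id: str      # which 3D asset this entry points at

    @dataclass
    class Module:
        organ: str                                   # e.g. "heart"
        healthy_baseline: ContentEntry
        diseases: list[ContentEntry] = field(default_factory=list)
        treatments: list[ContentEntry] = field(default_factory=list)

        def add_disease(self, entry: ContentEntry):
            # New content accretes onto the base without touching it.
            self.diseases.append(entry)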
After the structure is wireframed, my designs for interface and usability go through multiple rounds of revisions. Teamwork is key. Each round is reviewed by multiple departments (art, programming, medical), and group feedback is collected and discussed. New designs emerge and new prototypes are tested.
I review the UI concepts with the teams, and present options. The whole group discusses them, and the favorites are mocked up into fast usability prototypes. New ideas are tested, and a design is chosen.
Next, I gather data from the medical team and organize it into spreadsheets to make it more useful for the engineering team. I brainstorm with the programmers, and we devise systems to manage how the 3D assets will interface with the medical content.
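A simplified example of that handoff - the column names here are hypothetical, and the real sheets are far larger, but the idea is a clean lookup from medical content IDs to 3D assets:

    import csv

    # Each row pairs a medical content ID with the 3D asset that illustrates it.
    # Hypothetical columns: part_id, display_name, mesh_name, system
    def load_content_map(path):
        with open(path, newline="", encoding="utf-8") as f:
            return {row["part_id"]: row for row in csv.DictReader(f)}

    # parts = load_content_map("heart_module.csv")
    # parts["lv_01"]["mesh_name"]  ->  "heart_left_ventricle"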
We conceptualize how software bridges can be built to pair the data with the 3D models - then thousands of individual camera angles are created and tweaked to give the user the best view of every single part.
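Conceptually, each part gets its own camera preset - something like this sketch (illustrative only, not our actual bridge code):

    import math

    # One camera preset per anatomical part: orbit the target at a given
    # distance, yaw and pitch, then look back at it.
    def camera_preset(target, distance, yaw_deg, pitch_deg):
        yaw, pitch = math.radians(yaw_deg), math.radians(pitch_deg)
        x = target[0] + distance * math.cos(pitch) * math.sin(yaw)
        y = target[1] + distance * math.sin(pitch)
        z = target[2] + distance * math.cos(pitch) * math.cos(yaw)
        return {"position": (x, y, z), "look_at": target}

    # Thousands of these are generated from the data, then tweaked by hand:
    # presets["mitral_valve"] = camera_preset((0, 1.2, 0), 0.35, 25, -10)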
The elements of the flatscreen design (desktop / tablet) are unified, and all the parts (data, 3D models, cameras, etc.) are joined together through as much automation as possible, supplemented by manual tweaking - giving the user the fastest and easiest UI/UX for consuming the content.
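The automation-plus-tweaking balance works roughly like this in principle (a hypothetical sketch): automation produces a default binding for every part, and manual overrides are layered on top, so hand-tuning never blocks the pipeline.

    # parts: the content map; presets: auto-generated cameras;
    # overrides: hand-tuned exceptions that take precedence.
    def build_bindings(parts, presets, overrides):
        bindings = {}
        for part_id, row in parts.items():
            bindings[part_id] = {
                "mesh": row["mesh_name"],
                "camera": presets.get(part_id),   # auto-generated default
            }
        bindings.update(overrides)                # manual tweaks win
        return bindings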
The first VR version (Oculus Rift, Oculus Quest, Vive, WMR) featured familiar elements from the flatscreen version, and brought them into 3D space for a customizable sitting or standing experience, with an "arch" as the central hub. A new VR version is in the works and is being prototyped with different concepts - especially looking forward to new innovations, such as hand tracking in place of hardware controllers.
For the new VR version, I wanted to throw away any preconceived UI paradigms that we had already implemented, and start fresh - beginning with a more thorough inspection of the physiology of human vision. Important categories include central (foveal vs. parafoveal) and peripheral (peripersonal vs. extrapersonal) vision.
Additional factors are also important, including the areas best suited to recognize Text, Symbols, Shapes and Colors (recognition of each degrades farther from the foveal area). To establish an optimal viewing zone and the most comfortable user experience, the position of content (both text and models) is tied to its distance from the user and their natural downward sightline - about 10° when standing or 15° when sitting.
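That relationship is simple trigonometry: for a viewing distance d and downward sightline angle θ, content centers at a vertical drop of d·tan(θ) below eye level. A quick sketch using the ergonomic angles above (the example distance is just for illustration):

    import math

    # Vertical offset below eye level for content centered on the natural sightline.
    def sightline_drop(distance_m, angle_deg):
        return distance_m * math.tan(math.radians(angle_deg))

    # At 1.5 m, content centers ~0.26 m below eye level when standing (10 deg)
    # and ~0.40 m when sitting (15 deg).
    print(round(sightline_drop(1.5, 10), 2))   # 0.26
    print(round(sightline_drop(1.5, 15), 2))   # 0.4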
Beyond the considerations of normal human sight, there are other factors that need to be considered when planning a good VR experience. Some are based on today's limitations of the ever-evolving hardware. As of this writing, many VR headsets have a very restricted field of view, which obscures the peripheral area and further restricts the binocular area. While redesigning our main VR product, I keep this in mind - while also realizing these limitations may change again and again as new hardware becomes available.
Currently in progress is a new "VR Greek theater mode" I have been designing, where the content is distributed around the inside of a semicircular amphitheater, with the most important information in front, and additional information comfortably accessible in the periphery (no need for the user to turn their head too far left or right if they don't want to). In this prototype, menu panels with text buttons are replaced with bright colorful images in familiar "app tiles" common to Android and iOS phones.
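Distributing the tiles is straightforward polar math - here is a sketch (the comfortable yaw range and radius are assumptions, not measured values):

    import math

    # Spread N app tiles along a semicircular arc in front of the user;
    # order the tiles so the most important ones land nearest the center.
    def amphitheater_positions(n, radius=2.0, max_yaw_deg=60):
        positions = []
        for i in range(n):
            t = i / (n - 1) if n > 1 else 0.5           # 0..1 across the arc
            yaw = math.radians((t - 0.5) * 2 * max_yaw_deg)
            positions.append((radius * math.sin(yaw),   # x: left/right
                              0.0,                      # y: eye height handled elsewhere
                              radius * math.cos(yaw)))  # z: forward
        return positions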
The user may also enter either body in "avatar mode" to look inside themselves - or interact with the organs and systems of the counterpart body. "Mirror mode" is also in development, where the user can simultaneously explore themselves in virtual 3D mirrors.
I'm creating and testing many prototypes for different ways to experience the content. Here's a "hands-on" approach (less interaction with controller ray - more focused on a "tactile" experience with the hands) where the user can select and examine systems and organs directly, without aiming at menu systems of buttons and text. 
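In principle, the tactile selection test is just proximity between the hand and each organ's grab point - a simplified sketch (real hand tracking adds pinch detection and hysteresis; the grab radius is an assumption):

    # Pick the organ nearest the fingertip, if any is within reach.
    def nearest_organ(fingertip, organs, grab_radius=0.08):
        best, best_d2 = None, grab_radius ** 2
        for name, (x, y, z) in organs.items():
            d2 = (fingertip[0]-x)**2 + (fingertip[1]-y)**2 + (fingertip[2]-z)**2
            if d2 < best_d2:
                best, best_d2 = name, d2
        return best   # None if nothing is close enough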
For information panels featuring paragraphs of medical content, this prototype provides direct access to text size, with forward-facing controls that are readily accessible (not buried under layers of options menus).
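The control itself is deliberately simple - a forward-facing pair of buttons stepping through a clamped set of sizes, no menus involved (sketch; the scale presets are hypothetical):

    # Step the panel text scale up or down within a readable range.
    TEXT_STEPS = [0.8, 1.0, 1.25, 1.5, 2.0]

    def step_text_size(current_index, direction):
        # direction is +1 or -1; the index is clamped at both ends.
        return max(0, min(len(TEXT_STEPS) - 1, current_index + direction))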
Prototyping organ and system selection: here the organs are all together in a "central view", and when the user selects something of interest, an information panel displays related content in a list format.
In this prototype, bodily systems are divided from the centralized organs. The systems can be accessed from movable panels that ring the amphitheater. This helps to separate complex visual information for the end-user, while also reducing demand on the CPU/GPU (minimizing the number of complex shaders simultaneously in view of the camera frustum).
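The performance idea is simple: only the system the user has pulled forward is renderable, so its shaders are the only complex ones the camera can see. A sketch of that partitioning (engine-agnostic stand-in, not our actual renderer code):

    # Keep only the active system's renderers enabled; everything else is
    # culled entirely, so its shaders never reach the GPU.
    def set_active_system(systems, active_name):
        for name, renderers in systems.items():
            enabled = (name == active_name)
            for r in renderers:
                r["enabled"] = enabled   # stand-in for engine-level visibility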
Here the user can transition between "central view" or "carousel view". Central view shows the organs in context (together inside the body), while carousel view allows them to be rotated around the platform. Each organ can be inspected independently - with high-detail models closest to the user, and lower-LOD versions spun to the opposite side.
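Under the hood, the carousel swaps level of detail by each organ's angle from the user - a sketch (the angle thresholds here are assumptions):

    # Choose an LOD per organ from its angular distance around the carousel:
    # the organ facing the user gets full detail, the far side the cheapest mesh.
    def lod_for_angle(angle_deg):
        a = abs((angle_deg + 180) % 360 - 180)   # normalize to 0..180
        if a < 45:
            return 0    # highest detail, directly in front
        if a < 110:
            return 1    # medium
        return 2        # lowest, spun to the far side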
Meanwhile, the body systems are still accessible from the movable panels, and there is a good balance between focal distance, direct tactile accessibility of the content, and distributed demand on the CPU/GPU - especially for maintaining high FPS on the Meta Quest.