Eric Wang

(Cheng-Yao Wang)

I am an AR/VR Research Scientist in the Global Technology Applied Research team at JPMorgan Chase interested in Human-Computer Interaction, Computer-Supported Cooperative Work and Virtual/Mixed Reality.

About me

I’m Eric Wang, an AR/VR Research Scientist in the Global Technology Applied Research team at JPMorgan Chase. I hold a B.S. and an M.S. in Computer Science and earned my Ph.D. in Information Science from Cornell University.

My research interests lie at the intersection of Human-Computer Interaction (HCI), Computer-Supported Cooperative Work (CSCW), and Virtual/Mixed Reality (VR/MR). My current work centers on XR collaboration, where I develop novel XR systems and interactions to address the challenges of cross-reality and hybrid workspaces.

In my Ph.D. research, I implemented and applied computer vision and machine learning techniques to reconstruct captured experiences and enable novel VR/MR applications, including socially reliving experiences in VR, animating VR avatars from online videos, and personalizable AR/MR user interfaces, while also identifying their privacy concerns and challenges.

Education

  • Ph.D. in Information Science, Cornell University

  • Sep '16 – Aug '23
  • M.S. in Computer Science, National Taiwan University

  • Sep '12 – Jun '14
  • B.S. in Computer Science, National Taiwan University

  • Feb '06 – Aug '10

Work Experience

  • Research Scientist, AR/VR Research, Global Technology Applied Research at JPMorgan Chase

  • Spring 2023 – present
  • Research Intern, worked with Dr. Mar Gonzalez Franco and Dr. Andy Wilson, Microsoft Research

  • Summer 2022
  • Workshop Co-organizer, Open Access Tools and Libraries for Virtual Reality, IEEE VR 2022

  • Spring 2022
  • Research Intern, worked with Dr. Mark Parent and Dr. Marcello Giordano, Facebook Reality Lab

  • Fall 2021 & Winter 2022
  • Research Intern, worked with Dr. Mar Gonzalez Franco and Dr. Eyal Ofek, Microsoft Research

  • Summer 2021
  • Research Intern, worked with Dr. Fraser Anderson and Dr. Qian Zhou, Autodesk HCI & VIS group

  • Winter 2021
  • Research Assistant, VR as a Teaching Tool for Moon Phases and Beyond, Cornell University

  • Fall 2019
  • Research Assistant, Remote Touch for Telepresence Robots, Cornell University

  • Spring 2018
  • Research Intern, Human-Drone Interaction project, Stanford University

  • Apr '15 – Sep '15
  • Chief Coordinator, 2015 CHI@NTU Workshop

  • Aug '14 – Sep '14

Teaching Experience

  • Teaching Assistant, Computer-mediated Communication, Cornell University

  • Spring 2022
  • Teaching Assistant, Introduction to Computing Using MATLAB, Cornell University

  • Fall 2018
  • Teaching Assistant, Crowdsourcing and Human Computation, Cornell University

  • Fall 2017
  • Teaching Assistant, Introduction to Computing Using MATLAB, Cornell University

  • Spring 2017
  • Teaching Assistant, Data Structures and Functional Programming, Cornell University

  • Fall 2016
  • Teaching Assistant, Mobile HCI, National Taiwan University

  • Feb '14 – Jun '14

download cv

Implementing novel prototypes and tackling complex problems from socio-technical and HCI perspectives

projects

In Progress
RemoteAR Transformer: Motion Retargeting and Personalizable Spatial Arrangements for Remote AR in Dissimilar Spaces

Cheng Yao Wang

In Progress

In current remote AR experiences, local and remote users lack mutual understanding of each other's surrounding environments and are limited to setting up and interacting within a small shared virtual space. To overcome these limitations, we implemented a novel technique, RemoteAR Transformer, that allows both local and remote users to independently customize the placement of their partner's avatar and of shared AR content based on their own contexts. When the local and remote users have different personalized spatial configurations, RemoteAR Transformer uses motion retargeting to transform the avatar's motions so that the semantics of the movement are preserved. In addition, akin to hyperpersonal communication theory, RemoteAR Transformer has the potential to become "hyperpersonal" and surpass face-to-face (FtF) interaction, because it decouples avatar movements from the user's behavior and gives users greater ability to strategically develop and edit their self-presentation. The goals of this project are to implement and evaluate RemoteAR Transformer and to address the potential ethical and privacy concerns of transforming avatar movements.
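
The project is still in progress, so the following is only a minimal illustrative sketch of the retargeting idea, not the actual implementation: express an interaction target relative to a shared semantic anchor in one user's layout, then re-apply that offset in the other user's layout. The anchor names, positions, and rotations below are hypothetical.

```python
# Illustrative sketch only: remap an avatar's interaction target between two
# users' personalized spatial layouts, assuming both layouts expose the same
# set of named semantic anchors (e.g., "whiteboard", "partner_seat").
import numpy as np

def retarget_point(point, remote_anchors, local_anchors, anchor_name):
    """Express `point` relative to an anchor in the remote layout, then
    re-apply that offset to the matching anchor in the local layout."""
    r_pos, r_rot = remote_anchors[anchor_name]   # (3,) position, 3x3 rotation
    l_pos, l_rot = local_anchors[anchor_name]
    offset_in_anchor_frame = r_rot.T @ (point - r_pos)   # into anchor frame
    return l_pos + l_rot @ offset_in_anchor_frame        # out of anchor frame

# Hypothetical layouts: the shared "whiteboard" anchor sits at different
# positions/orientations in each user's room.
identity = np.eye(3)
rot90 = np.array([[0.0, -1.0, 0.0], [1.0, 0.0, 0.0], [0.0, 0.0, 1.0]])
remote_anchors = {"whiteboard": (np.array([2.0, 0.0, 1.0]), identity)}
local_anchors  = {"whiteboard": (np.array([0.0, 3.0, 1.0]), rot90)}

# A gesture aimed 0.5 m beside the remote whiteboard keeps that meaning
# ("beside the whiteboard") after retargeting into the local room.
remote_target = np.array([2.5, 0.0, 1.0])
print(retarget_point(remote_target, remote_anchors, local_anchors, "whiteboard"))
```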

Under Review
VideoPoseVR: Prototyping Animated Characters in VR through Reconstructing 3D Human Motion from Online Videos

Cheng Yao Wang et al.

Under Review (full paper)

We introduce VideoPoseVR, a system that allows users to animate VR characters using human motions extracted from online videos. We demonstrate an end-to-end workflow that allows users to retrieve desired motions from 2D videos, edit the motions, and apply them to virtual characters in VR. VideoPoseVR leverages deep learning-based computer vision techniques to reconstruct 3D human motions from videos and enables semantic searching for specific motions without annotating videos. Users can edit, mask, and blend motions from different videos to refine the movement. We implemented and evaluated a proof-of-concept prototype to demonstrate VideoPoseVR's interaction possibilities and use cases. The study results suggest that the prototype was easy to learn and use and that it could be used to quickly prototype immersive environments for applications such as entertainment, skills training, and crowd simulations.
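
The paper's full retrieval pipeline is not reproduced here; the sketch below only illustrates the general idea of semantic motion search, under the assumption that motion clips and a free-text query have already been embedded into a shared vector space by some off-the-shelf encoder. The embeddings and clip names are hypothetical.

```python
# Illustrative sketch only: rank motion clips against a free-text query by
# cosine similarity, assuming clips and the query were already embedded into
# a shared vector space by an off-the-shelf encoder.
import numpy as np

def rank_clips(query_vec, clip_vecs, clip_names, top_k=3):
    q = query_vec / np.linalg.norm(query_vec)
    c = clip_vecs / np.linalg.norm(clip_vecs, axis=1, keepdims=True)
    scores = c @ q                                  # cosine similarities
    order = np.argsort(scores)[::-1][:top_k]
    return [(clip_names[i], float(scores[i])) for i in order]

# Hypothetical 4-dimensional embeddings for three reconstructed motion clips.
clip_names = ["waving", "jumping jacks", "sitting down"]
clip_vecs = np.array([[0.9, 0.1, 0.0, 0.1],
                      [0.1, 0.8, 0.3, 0.0],
                      [0.0, 0.2, 0.9, 0.2]])
query_vec = np.array([0.85, 0.2, 0.05, 0.1])        # e.g., "person waving hello"
print(rank_clips(query_vec, clip_vecs, clip_names))
```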

CSCW 2021
Shared Realities: Avatar Identification and Privacy Concerns in Reconstructed Experiences

Cheng Yao Wang, Sandhya Sriram, and Andrea Stevenson Won.

Published in ACM CSCW 2021 (full paper)

We present ReliveReality, an experience-sharing method that utilizes deep learning-based computer vision techniques to reconstruct clothed humans and 3D environments and estimate 3D pose with only an RGB camera. ReliveReality can be integrated into social virtual environments, allowing others to socially relive a shared experience by moving around the reconstructed experience and viewing it from different perspectives, on desktop or in VR. We conducted a 44-participant within-subject study to compare ReliveReality to viewing recorded videos and to a ReliveReality version with blurring obfuscation. Our results shed light on how people identify with reconstructed avatars, how obfuscation affects reliving experiences, and sharing preferences and privacy concerns for reconstructed experiences. We propose design implications for addressing these issues.

CSCW 2021
Hide and Seek: Choices of Virtual Backgrounds in Video Chats and Their Effects on Perception

Cheng Yao Wang*, Angel Hsing-Chi Hwang*, Yao-Yuan Yang, and Andrea Stevenson Won. (*equal contribution)

Published in ACM CSCW 2021 (full paper)

We investigate how users choose virtual backgrounds and how these backgrounds influence viewers' impressions. In Study 1, we created a web prototype allowing users to apply different virtual backgrounds to their camera views and asked them to select backgrounds that they believed would change viewers' perceptions of their personality traits. In Study 2, we then applied the virtual backgrounds picked by participants in Study 1 to a subset of videos drawn from the First Impression Dataset. Our results suggested that the selected virtual backgrounds did not change personality trait ratings in the intended direction. Instead, virtual background use of any kind resulted in a consistent "muting effect" that mitigated very high or low ratings (i.e., compressed ratings toward the mean) compared to the ratings of the videos with their original backgrounds.

CHI 2020
Again, Together: Socially Reliving Virtual Reality Experiences When Separated

Cheng-Yao Wang, Mose Sakashita, Jingjin Li, Upol Ehsan, and Andrea Stevenson Won

Published in ACM CHI 2020 (full paper)

We describe ReliveInVR, a new time-machine-like VR experience sharing method. ReliveInVR allows multiple users to immerse themselves in the relived experience together and independently view the experience from any perspective. We conducted a 1x3 within-subject study with 26 dyads to compare ReliveInVR with (1) co-watching 360-degree videos on desktop, and (2) co-watching 360-degree videos in VR. Our results suggest that participants reported higher levels of immersion and social presence in ReliveInVR. Participants in ReliveInVR also understood the shared experience better, discovered unnoticed things together and found the sharing experience more fulfilling.

IEEE VR 2020
ReliveReality: Enabling Socially Reliving Experiences in Virtual Reality via a Single RGB camera

Cheng-Yao Wang, Shengguang Bai, Ian Switzer, and Andrea Stevenson Won

Published in IEEE VR 2020 (poster)

We present a new experience-sharing method, ReliveReality, which transforms traditional photo/video memories into 3D reconstructed memories and allows users to socially relive past experiences in VR. ReliveReality utilizes deep learning-based computer vision techniques to reconstruct people in clothing, estimate multi-person 3D pose, and reconstruct 3D environments with only a single RGB camera. Integrated with a networked multi-user VR environment, ReliveReality enables people to 'enter' a past experience, move around, and relive a memory from different perspectives in VR together. We discuss the technical implementation and the implications of such techniques for privacy.

IEEE VR 2019
ReliveInVR: Capturing and Reliving Virtual Reality Experiences Together

Cheng-Yao Wang, Mose Sakashita, Upol Ehsan, Jingjin Li, and Andrea Stevenson Won

Published in IEEE VR 2019 (poster)

We present a new type of VR experience sharing over distance that allows people to relive their recorded experiences in VR together. We describe a pilot study examining the user experience when people remotely share their VR experiences together. Finally, we discuss the implications for sharing VR experiences across time and space.

IEEE VR 2019
VR-Replay: Capturing and Replaying Avatars in VR for Asynchronous 3D Collaborative Design

Cheng-Yao Wang, Logan Drumm, Christopher Troup, Yingjie Ding, and Andrea Stevenson Won

Published in IEEE VR 2019 (poster)

Distributed teams rely on asynchronous CMC tools to complete collaborative tasks due to the difficulties and costs surrounding scheduling synchronous communications. In this paper, we present VR-Replay, a new communication tool that records and replays avatars with both nonverbal behavior and verbal communication in VR asynchronous collaboration.

IEEE HRI 2019
Drone.io: A Gestural and Visual Interface for Human-Drone Interaction

Jessica R. Cauchard, Alex Tamkin, Cheng-Yao Wang, Luke Vink, Michelle Park, Tommy Fang, and James A. Landay

Published in HRI 2019 (full paper)

We introduce drone.io, a projected body-centric graphical user interface for human-drone interaction. Using two simple gestures, users can interact with a drone in a natural manner. drone.io is the first human-drone graphical user interface embedded on a drone to provide both input and output capabilities. This paper describes the design process of drone.io. We report drone.io's evaluation in three user studies (N=27) and show that people were able to use the interface with little prior training.

CHI 2018
RoMA: Interactive Fabrication with Augmented Reality and a Robotic 3D Printer

Huaishu Peng, Cheng-Yao Wang*, Jimmy Briggs*, Kevin Guo, Joseph Kider, Stefanie Mueller, Patrick Baudisch, François Guimbretière. (*equal contribution)

Published in CHI 2018 (full paper)

We present the Robotic Modeling Assistant (RoMA), an interactive fabrication system that provides a fast, precise, hands-on, and in-situ modeling experience with an augmented reality CAD editor and a robotic arm 3D printer. With RoMA, users can rapidly integrate real-world constraints into a design, allowing them to create well-proportioned tangible artifacts. Users can even design directly on and around an existing object, extending the artifact through in-situ fabrication.

CHI 2017
Teaching Programming with Gamified Semantics

Ian Arawjo, Cheng-Yao Wang, Andrew C. Myers, Erik Andersen, and François Guimbretière

Published in CHI 2017 (full paper)

We present Reduct, an educational game embodying a new, comprehension-first approach to teaching novices core programming concepts, including functions, Booleans, equality, conditionals, and mapping functions over sets. In this novel teaching strategy, the player executes code using reduction-based operational semantics. During gameplay, code representations fade from concrete, block-based graphics to the actual syntax of JavaScript ES2015. Our study results show that, within a short timeframe of playing Reduct, novices demonstrated promising learning of core concepts expressed in actual JavaScript code.
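
As a rough illustration of reduction-based operational semantics (not the game's actual engine), the sketch below steps a tiny expression toward a value one reduction at a time, the way a Reduct player would; the expression forms and "star"/"rock" values are hypothetical.

```python
# Illustrative sketch only: a tiny small-step reducer in the spirit of Reduct,
# where an expression is reduced one step at a time until it becomes a value.
def step(expr):
    """Perform a single left-most reduction step; return None if expr is a value."""
    if not isinstance(expr, tuple):
        return None                                   # numbers/booleans/strings are values
    op, *args = expr
    for i, a in enumerate(args):                      # reduce sub-expressions first
        s = step(a)
        if s is not None:
            new_args = list(args)
            new_args[i] = s
            return (op, *new_args)
    if op == "eq":
        return args[0] == args[1]
    if op == "if":
        return args[1] if args[0] else args[2]
    raise ValueError(f"unknown operator: {op}")

# Reduce (if (eq 2 2) "star" "rock") step by step, as a player would.
expr = ("if", ("eq", 2, 2), "star", "rock")
while (nxt := step(expr)) is not None:
    print(expr, "->", nxt)
    expr = nxt
```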

MobileHCI 2015
PalmType: Using Palms as Keyboards for Smart Glasses

Cheng-Yao Wang, Wei-Chen Chu, Po-Tsung Chiu, Min-Chieh Hsiu, Yih-Harn Chiang, Mike Y. Chen

Published in MobileHCI 2015 (full paper)

We present PalmType, which uses palms as interactive keyboards for smart wearable displays such as Google Glass. PalmType leverages users' innate ability to pinpoint a specific area of their palm and fingers without visual attention (i.e., proprioception) and provides visual feedback via wearable displays. With wrist-worn sensors and wearable displays, PalmType enables typing without requiring users to hold any device or look at their hands. We conducted design sessions with 6 participants to see how users map the QWERTY layout onto their hands based on proprioception.
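
As a minimal illustration of the kind of mapping such a design session could produce (not PalmType's actual implementation), the sketch below resolves a touch location on the palm to a key by nearest-neighbor lookup over hypothetical, per-user calibrated key centers.

```python
# Illustrative sketch only: resolve a palm touch to a key via nearest-neighbor
# lookup over per-user calibrated key centers (coordinates are hypothetical).
import numpy as np

# Hypothetical calibrated key centers in normalized palm coordinates (x, y).
key_centers = {"q": (0.05, 0.1), "w": (0.15, 0.1), "e": (0.25, 0.1),
               "a": (0.07, 0.4), "s": (0.17, 0.4), "d": (0.27, 0.4)}

def resolve_key(touch_xy):
    keys = list(key_centers)
    centers = np.array([key_centers[k] for k in keys])
    dists = np.linalg.norm(centers - np.asarray(touch_xy), axis=1)
    return keys[int(np.argmin(dists))]

print(resolve_key((0.16, 0.12)))   # -> "w"
```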

MobileHCI 2015
PalmGesture: Using Palms as Gesture Interfaces for Eyes-free Input

Cheng-Yao Wang, Min-Chieh Hsiu, Po-Tsung Chiu, Chiao-Hui Chang, Liwei Chan, Bing-Yu Chen, Mike Y. Chen

Published in MobileHCI 2015 (full paper)

With abundant tactile cues and proprioception, the palm can be leveraged as an interface for eyes-free input, which decreases visual attention to interfaces and minimizes cognitive and physical effort. We explored eyes-free gesture interactions on palms that enable users to interact with devices by drawing stroke gestures on their palms without looking at them. To understand how users draw gestures on their palms and how these gestures are affected by palm characteristics, we conducted two 24-participant user studies. We also implemented EyeWrist, which turns the palm into a gesture interface by embedding a micro-camera and an IR laser line generator in a wristband, and proposed three interaction techniques that take advantage of palm characteristics.
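
The paper's own recognizer may differ; the sketch below is a minimal stroke-gesture matcher in the spirit of the well-known $1 recognizer family, with hypothetical templates, to illustrate how drawn palm strokes can be classified.

```python
# Illustrative sketch only: a drawn stroke is resampled, translated/scaled to a
# canonical frame, and compared to stored templates by mean point distance.
import numpy as np

def normalize(stroke, n=32):
    pts = np.asarray(stroke, dtype=float)
    # Resample to n points spaced evenly along the stroke's arc length.
    seg = np.linalg.norm(np.diff(pts, axis=0), axis=1)
    cum = np.concatenate([[0.0], np.cumsum(seg)])
    targets = np.linspace(0.0, cum[-1], n)
    resampled = np.column_stack([np.interp(targets, cum, pts[:, i]) for i in (0, 1)])
    # Translate to the centroid and scale to unit size.
    resampled -= resampled.mean(axis=0)
    scale = np.abs(resampled).max()
    return resampled / (scale if scale > 0 else 1.0)

def classify(stroke, templates):
    s = normalize(stroke)
    scores = {name: np.linalg.norm(s - normalize(t), axis=1).mean()
              for name, t in templates.items()}
    return min(scores, key=scores.get)

# Hypothetical templates: a horizontal swipe and an "L"-shaped stroke.
templates = {"swipe": [(0, 0), (1, 0)],
             "L":     [(0, 0), (0, 1), (1, 1)]}
print(classify([(0.1, 0.5), (0.9, 0.52)], templates))   # -> "swipe"
```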

CHI 2014
EverTutor: Automatically Creating Interactive Guided Tutorials on Smartphones by User Demonstration

Cheng-Yao Wang, Wei-Chen Chu, Hou-Ren Chen, Chun-Yen Hsu, Mike Y. Chen

Published in CHI 2014 (full paper)

We present EverTutor, which automatically generates interactive tutorials on smartphones from user demonstrations. It simplifies tutorial creation, provides tutorial users with contextual step-by-step guidance, and avoids the frequent context switching between tutorials and users' primary tasks. To generate tutorials automatically, EverTutor records low-level touch events to detect gestures and identify on-screen targets. When a tutorial is browsed, the system uses vision-based techniques to locate the target regions and contextually overlays the corresponding input prompts. It also checks the correctness of users' interactions to guide them step by step.
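
The exact vision pipeline is not described here; the sketch below shows one simple stand-in, OpenCV template matching, for locating a recorded UI target on the current screen during tutorial playback. The file names are hypothetical.

```python
# Illustrative sketch only: locate a recorded UI target on the current screen
# with OpenCV template matching, a simple stand-in for the vision-based
# matching a system like EverTutor needs when replaying a tutorial step.
import cv2

def locate_target(screenshot_path, target_crop_path, threshold=0.8):
    screen = cv2.imread(screenshot_path, cv2.IMREAD_GRAYSCALE)
    target = cv2.imread(target_crop_path, cv2.IMREAD_GRAYSCALE)
    result = cv2.matchTemplate(screen, target, cv2.TM_CCOEFF_NORMED)
    _, max_val, _, max_loc = cv2.minMaxLoc(result)
    if max_val < threshold:
        return None                       # target not on this screen yet
    h, w = target.shape
    x, y = max_loc
    return (x + w // 2, y + h // 2)       # center, where the prompt is overlaid

# Hypothetical files captured during the demonstration and at replay time.
center = locate_target("current_screen.png", "recorded_target.png")
print("overlay tap prompt at:", center)
```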

MM 2013
RealSense: Directional Interaction for Proximate Mobile Sharing

Chien-Pang Lin, Cheng-Yao Wang, Hou-Ren Chen, Wei-Chen Chu, Mike Y. Chen

Published in MM 2013 (short paper)

We present RealSense, a technology that enables users to easily share media files with nearby users by detecting each other's relative directions using only the built-in orientation sensors on their smartphones. Under the premise that users are arranged in a circle with every user facing its center, RealSense continuously collects the directional heading of each phone to calculate each user's virtual position in real time during sharing.
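
A minimal sketch of that geometry, under the stated circle premise: each phone's heading points at the circle's center, so a user's position is opposite their heading, and a partner's relative direction follows from the bearing between the two positions. The headings below are hypothetical and this is not RealSense's actual code.

```python
# Illustrative sketch only: users stand on a circle facing its center, so a
# user's position on a unit circle is simply opposite their compass heading.
import math

def position_on_circle(heading_deg, radius=1.0):
    # Heading measured clockwise from north; the user stands opposite it.
    back = math.radians((heading_deg + 180.0) % 360.0)
    return (radius * math.sin(back), radius * math.cos(back))   # (east, north)

def relative_bearing(my_heading_deg, partner_heading_deg):
    mx, my = position_on_circle(my_heading_deg)
    px, py = position_on_circle(partner_heading_deg)
    bearing = math.degrees(math.atan2(px - mx, py - my)) % 360.0  # from north
    return (bearing - my_heading_deg) % 360.0    # relative to where I'm facing

# Two hypothetical users: I face north (0°), my partner faces east (90°).
# Prints 315.0, i.e., the partner appears about 45° to my left.
print(relative_bearing(0.0, 90.0))
```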

EyeWrist: Enabling Gesture-Based Interaction on Palm with a Wrist-Worn Sensor

Cheng-Yao Wang, Po-Tsung Chiu, Min-Chieh Hsiu, Chiao-Hui Chang, Mike Y. Chen

Excellent Work, The 8th Acer Long-Term Smile Innovation Contest, 2014

We present EyeWrist, which uses palms as the gesture interface for smart wearable displays such as Google Glass. With abundant tactile cues and proprioception on palms, EyeWrist can also serve as a device-less and eyes-free remote for smart TVs. EyeWrist embeds a micro-camera and an IR laser line generator on a wristband and uses computer vision algorithms to calculate the finger's position on the palm. Because it does not require a direct line of sight to users' fingertips on the palm, the camera can sit lower, making the whole device more portable. We also implemented a gesture recognizer that distinguishes different symbols, letters, and touchscreen gestures (e.g., swipe, pinch) on palms. Recognition results are sent to smart devices via Wi-Fi for gesture-based interaction.

EyeWatch: Touch Interaction on Back of the Hand for Smart Watches

Cheng-Yao Wang, Po-Tsung Chiu, Min-Chieh Hsiu

2nd Prize, MobileHero – User Experience Design Competition, 2014

We present EyeWatch, which uses the back of the hand as a gesture interface for smartwatches. EyeWatch not only overcomes major smartwatch input problems (occlusion and the fat-finger problem) but also enables more powerful and natural interactions, such as quickly drawing a symbol to open an application or intuitively handwriting on the back of the hand to input a message. Our proof-of-concept implementation consists of a micro-camera and an IR laser line generator on the smartwatch, with computer vision algorithms used to calculate the finger's position on the back of the hand.

The Incredible Shrinking Adventure

Cheng-Yao Wang, Min-Chieh Hsiu, Chin-Yu Chien, Shuo Yang

Final Shortlist, ACM UIST 2014 Student Innovation Contest

Imagine being shrunk: you could explore your suddenly enormous house and play with giant pets and family members. To fulfill this imagination and provide users with an incredible shrinking adventure, we use a robotic car and a Google Cardboard, which turns a smartphone into a VR headset. We built a robotic car and attached a smartphone to its pan/tilt servo bracket. Stereo images generated from that smartphone's camera are streamed to another smartphone inside the Google Cardboard. Seeing the world through the smartphone on the robotic car, users feel as if they have been shrunk.

my skills

Machine Learning & Computer Vision

  • C#, Python, C++, JavaScript
  • Experienced with deep learning frameworks and ML algorithms
  • Human digitization, 3D human pose estimation, 3D reconstruction

Virtual Reality & Mixed Reality

  • Expertise in Unity/Unreal and VR/MR prototype development
  • Experienced in developing multi-user VR/AR experiences
  • 3D avatar creation, rigging, and animation

HCI & Data Analysis

  • Solid experience in human-centered design
  • Tackling problems from socio-technical and HCI perspectives
  • Knowledge of quantitative and behavioral analysis and statistical concepts

publications

Shared Realities: Avatar Identification and Privacy Concerns in Reconstructed Experiences

CSCW 2021, Full Paper

Cheng-Yao Wang, Sandhya Sriram, and Andrea Stevenson Won

Hide and Seek: Choices of Virtual Backgrounds in Video Chats and Their Effects on Perception

CSCW 2021, Full Paper

Cheng-Yao Wang*, Angel Hsing-Chi Hwang*, Yao-Yuan Yang and Andrea Stevenson Won

Again, Together: Socially Reliving Virtual Reality Experiences When Separated

CHI 2020, Full Paper

Cheng-Yao Wang, Mose Sakashita, Jingjin Li, Upol Ehsan, and Andrea Stevenson Won

ReliveReality: Enabling Socially Reliving Experiences in Virtual Reality via a Single RGB camera

IEEE VR 2020, Poster

Cheng-Yao Wang, Shengguang Bai, and Andrea Stevenson Won

Privacy-Preserving Relived Experiences in Virtual Reality

IEEE VR 2020, Doctoral Consortium

Cheng-Yao Wang

ReliveInVR: Capturing and Reliving Virtual Reality Experiences Together

IEEE VR 2019, Poster

Cheng-Yao Wang, Mose Sakashita, Upol Ehsan, Jingjin Li, and Andrea Stevenson Won

VR-Replay: Capturing and Replaying Avatars in VR for Asynchronous 3D Collaborative Design

IEEE VR 2019, Poster

Cheng-Yao Wang, Logan Drumm, Christopher Troup, Yingjie Ding, and Andrea Stevenson Won

Drone.io: A Gestural and Visual Interface for Human-Drone Interaction

IEEE HRI 2019, Full Paper

Jessica R. Cauchard, Alex Tamkin, Cheng-Yao Wang, Luke Vink, Michelle Park, Tommy Fang, and James A. Landay

RoMA: Interactive Fabrication with Augmented Reality and a Robotic 3D Printer

ACM CHI 2018, Full Paper

Huaishu Peng, Cheng-Yao Wang*, Jimmy Briggs*, Kevin Guo, Joseph Kider, Stefanie Mueller, Patrick Baudisch, François Guimbretière. (*equal contribution)

Teaching Programming with Gamified Semantics

ACM CHI 2017, Full Paper

Ian Arawjo, Cheng-Yao Wang, Andrew C. Myers, Erik Andersen, and François Guimbretière

PalmType: Using Palms as Keyboards for Smart Glasses

ACM MobileHCI 2015, Full Paper

Cheng-Yao Wang, Wei-Chen Chu, Po-Tsung Chiu, Min-Chieh Hsiu, Yih-Harn Chiang, Mike Y. Chen

PalmGesture: Using Palms as Gesture Interfaces for Eyes-free Input

ACM MobileHCI 2015, Full Paper

Cheng-Yao Wang, Min-Chieh Hsiu, Po-Tsung Chiu, Chiao-Hui Chang, Liwei Chan, Bing-Yu Chen, Mike Y. Chen

EverTutor: Automatically Creating Interactive Guided Tutorials on Smartphones by User Demonstration

ACM CHI 2014, Full Paper

Cheng-Yao Wang, Wei-Chen Chu, Hou-Ren Chen, Chun-Yen Hsu, Mike Y. Chen

RealSense: Directional Interaction for Proximate Mobile Sharing Using Built-in Orientation Sensors

ACM MM 2013, Short Paper

Chien-Pang Lin, Cheng-Yao Wang, Hou-Ren Chen, Wei-Chen Chu, Mike Y. Chen

awards

ReliveInVR: Capturing and Reliving Virtual Reality Experiences Together

Best Poster Honorable Mention, 2019 IEEE Conference on Virtual Reality and 3D User Interfaces (VR), 2019

Selected Poster, The Cornell CIS 20th Anniversary Reception, 2019

EverTutor: Automatically Creating Interactive Guided Tutorials on Smartphones by User Demonstration

1st Prize, The 11th Deep Innovations with Impact, National Taiwan University, 2013

1st Prize, The 11th Y.S. National Innovation Software Application Contest, 2014

MagicWrist - Connect the World

Excellent Work, The 8th Acer Long-Term Smile Innovation Contest, 2014

The Incredible Shrinking Adventure

Final Shortlist, ACM UIST 2014 Student Innovation Contest

EyeWatch: Touch Interaction on Back of the Hand for Smart Watches

2nd Prize, MobileHero – User Experience Design Competition, 2014

EyeWrist: Enabling Gesture-Based Interaction on Palm with a Wrist-Worn Sensor

Final Shortlist, MediaTek Wearable Device into IoT World Competition, 2014