Meetings… loved and hated in equal measure. Some people seem to live for meetings; others live to avoid them. Love them or hate them, they are not going away anytime soon, despite the current lockdowns and restrictions. They have simply migrated into virtual meetings.
It turns out that we are missing out on a lot of non-verbal cues in today’s virtual settings. The list looks something like this:
- Facial expressions
- Gestures
- Paralinguistics
- Body language and posture
- Proxemics
- Eye gaze
- Haptics
- Appearance
- Artefacts
Some of these do translate to virtual video conferencing, namely:
- Facial expressions – though these are usually limited to a tiny video-feed image.
- Paralinguistics – tone of voice (‘that guy is angry!’) translates well.
- Body language and posture – can be seen if looked for, again in a small feed window.
- Appearance – you can see what attendees are wearing.
- Artefacts – less likely to show up, but possible: I can see that guy is a doctor because he’s wearing a stethoscope.
But none of these come across as powerfully as they do face to face, and that’s where the world of virtual immersive meetings comes into play, opening up design challenges as well as opportunities.
Design challenges for immersive meetings
We have looked into the experience of face-to-face meetings versus the virtual video call meetings that we are forced to use today.
We have analysed what is lacking in virtual meetings over their face-to-face equivalents. We now pose the question of how immersive meetings might be better than video calls, and in some ways better than face-to-face meetings. What design challenges lie ahead for immersive collaborative meeting spaces?
You may have seen promotional images from the many platforms touting VR meeting software. They show semi-realistic avatars having an engaging conversation and striking natural poses. It looks great, but that is often not the reality.
Why?
Earlier, we covered the elements missing from video call meetings. These have not gone away in immersive settings, but they can potentially be overcome with today’s technology.
Facial expression. This is a very young field of exploration in VR and is not widely available. Oculus has done some excellent work here, but it remains experimental and has not been rolled out.
HTC Vive rolled out some basic facial features to animate an avatar in time with speech, but the reality at the moment is that most avatars are largely expressionless, offering a moving mouth and blinking eyes at best.
AR face tracking is also taking off, and these emerging technologies are ones to watch for how they translate into remote immersive experiences.
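To make the idea concrete, here is a minimal sketch of how a face-tracking feed could drive an avatar’s expression. It assumes an ARKit-style stream of named blend-shape coefficients; the `BlendShapeFrame` and `Avatar` types, the morph-target names, and the smoothing factor are hypothetical stand-ins rather than any particular SDK’s API.

```typescript
// Minimal sketch: driving avatar morph targets from ARKit-style
// blend-shape coefficients. BlendShapeFrame and Avatar are
// hypothetical stand-ins for whatever the tracking SDK and avatar
// renderer actually expose.

// A frame of face-tracking data: named coefficients in [0, 1],
// e.g. "mouthSmileLeft", "browInnerUp", "eyeBlinkRight".
type BlendShapeFrame = Record<string, number>;

interface Avatar {
  setMorphTarget(name: string, weight: number): void;
}

// Map tracker coefficient names to avatar morph-target names.
// Real rigs rarely match the tracker's naming one-to-one.
const MORPH_MAP: Record<string, string> = {
  mouthSmileLeft: "smile_L",
  mouthSmileRight: "smile_R",
  browInnerUp: "brows_up",
  eyeBlinkLeft: "blink_L",
  eyeBlinkRight: "blink_R",
};

// Exponential smoothing keeps noisy per-frame coefficients from
// making the avatar's face jitter.
const smoothed: Record<string, number> = {};
const ALPHA = 0.4; // 0 = frozen, 1 = raw tracker data

function applyExpression(frame: BlendShapeFrame, avatar: Avatar): void {
  for (const [trackerName, morphName] of Object.entries(MORPH_MAP)) {
    const raw = frame[trackerName] ?? 0;
    const prev = smoothed[morphName] ?? 0;
    const value = prev + ALPHA * (raw - prev);
    smoothed[morphName] = value;
    avatar.setMorphTarget(morphName, value);
  }
}
```

The smoothing is a deliberate design choice: raw per-frame coefficients are noisy, and an avatar face that jitters arguably reads worse than one that is slightly laggy.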
Gestures. We definitely miss gestures; most of us gesture naturally when talking. Gestures are much less effective in virtual video meetings, as we usually only see attendees’ heads and shoulders (or foreheads, if Skyping with Mum and Dad).
Gestures can help get your point across and communicate a lot of unspoken information.
VR marketing images always show avatars in gestural positions; however, this does not seem to actually occur in real virtual worlds. Since most VR controllers must be gripped, natural gesturing does not happen: the controller inhibits the user’s gesturing instincts. With the advent of the ‘knuckles’ controllers, this might start to change, and we may see more animated and natural gestures.
At the moment, gesturing seems limited to hand waving and thumbs up, and, in the case of AltSpaceVR, an unnatural one-hand-outstretched pose that looks like a handshake but is in fact the natural controller position for moving and teleporting around the scene. The controller interactions are thus forcing an unnatural ‘real world’ pose in the virtual one, which looks odd to the observer.
AR experiences with tracked hands and gesture controls might help improve things, so long as gestures are tracked not only for interaction but also for emotion.
The ‘knuckles’ controller design allows the user to be open-handed without dropping the controller on the floor. This could lead to more natural conversational gestures that can be tracked and rendered into an immersive space. That equates to more expressive avatars!
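As a rough illustration of what that tracking could feed, here is a sketch that classifies a few conversational gestures from per-finger curl values. The `FingerCurls` input and the thresholds are hypothetical abstractions over what an open-handed controller or hand-tracking SDK might report; real SDKs differ.

```typescript
// Minimal sketch: classifying conversational gestures from
// per-finger curl values (0 = straight, 1 = fully curled).
// FingerCurls is a hypothetical abstraction, not a real SDK type.

interface FingerCurls {
  thumb: number;
  index: number;
  middle: number;
  ring: number;
  pinky: number;
}

type Gesture = "open-hand" | "thumbs-up" | "point" | "fist" | "neutral";

function classifyGesture(c: FingerCurls): Gesture {
  const fingers = [c.index, c.middle, c.ring, c.pinky];
  const allOpen = fingers.every((f) => f < 0.2);
  const allCurled = fingers.every((f) => f > 0.8);

  if (allOpen && c.thumb < 0.2) return "open-hand";   // relaxed, open palm
  if (allCurled && c.thumb < 0.2) return "thumbs-up"; // fist with thumb out
  if (c.index < 0.2 && c.middle > 0.8 && c.ring > 0.8 && c.pinky > 0.8)
    return "point";                                    // index extended
  if (allCurled && c.thumb > 0.8) return "fist";
  return "neutral";
}
```

A classified gesture could then blend the avatar’s hand toward a matching pose, so attendees see a wave or a thumbs up rather than a hand frozen around an invisible controller.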
Body language and posture. With tracked controllers alone, it is hard to render realistic body posture, and it is very unnatural for a user holding controllers to form a natural body pose.
Ever tried folding your arms in disgust whilst holding two clunky VR controllers? It just does not happen. Maybe the ‘knuckles’ controllers could help solve this, but would it be comfortable or natural to wear them and then cross your arms? Also, systems like WinMR would lose tracking were you to fold your arms, as the controllers would be outside the tracking camera’s field of view.
The same can be an issue with hand tracking: sometimes hands are occluded and cannot be ‘seen’ by the system, so it makes its best guess as to what is going on, which can lead to some very unnatural hand and arm positions.
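One mitigation, sketched below, is to gate rendered hand poses on the tracker’s confidence instead of trusting its best guess while the hand is occluded. The `HandPose` type and its `confidence` field are assumptions; real SDKs expose similar, but not identical, per-frame data.

```typescript
// Minimal sketch: guarding against low-confidence hand-tracking
// guesses. Rather than rendering whatever the tracker extrapolates
// while the hand is occluded, hold the last confident pose until
// tracking recovers. HandPose is a hypothetical type.

interface HandPose {
  jointPositions: Float32Array; // xyz per joint, flattened
  confidence: number;           // 0 = lost/occluded, 1 = fully tracked
}

const CONFIDENCE_FLOOR = 0.5;

let lastGoodPose: HandPose | null = null;

function filterHandPose(frame: HandPose): HandPose | null {
  if (frame.confidence >= CONFIDENCE_FLOOR) {
    lastGoodPose = frame;
    return frame;
  }
  // Occluded or unreliable: freeze on the last trusted pose instead
  // of showing the tracker's unnatural best guess. Returning null
  // when nothing trustworthy exists lets the renderer fall back to
  // a relaxed idle hand.
  return lastGoodPose;
}
```

Freezing on the last trusted pose is crude, but a still hand reads far more naturally to other attendees than an extrapolated one bending through the user’s chest.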
Remember, crossed arms are one of the many postures we use to communicate unconsciously with one another. Whole-body stance is an entirely different problem, but it is used very effectively to communicate non-verbally in everyday interactions with other people.
Eye gaze. Eyes are windows into the soul. Well, in VR they are usually dead, expressionless, randomly blinking dots. Eye tracking in VR has mostly been about targeting interface elements within the scene to interact with them, saving the user from having to point with a laser tool (such as the system used in Varjo’s 20/20 Eye Tracker™). A limited number of VR headsets have this capability, but it is not designed to let people actually look at one another when immersed. Face-to-face meetings rely heavily on eye gaze.
You can express a lot of emotion with eyes alone. You can see when I’m engaged and listening to you and you know instantly when I think you’re talking out of your butt just by looking at my eyes alone. Not to mention the ‘Oh God!’ eye roll that everyone has mastered.
With eye-tracking targeted towards emotions instead of interaction, we might regain this expressiveness and have a reason for looking at another person’s avatar when they are talking.
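As a sketch of what ‘looking at another person’s avatar’ could mean computationally: given my tracked gaze ray and another participant’s head position, test whether the gaze falls within a small cone around the direction to their head. The `Vec3` type and the ten-degree threshold are illustrative assumptions.

```typescript
// Minimal sketch: detecting whether one participant's gaze is on
// another participant's avatar. Vec3 is a hypothetical stand-in
// for whatever vector type the engine provides.

interface Vec3 { x: number; y: number; z: number; }

function sub(a: Vec3, b: Vec3): Vec3 {
  return { x: a.x - b.x, y: a.y - b.y, z: a.z - b.z };
}
function dot(a: Vec3, b: Vec3): number {
  return a.x * b.x + a.y * b.y + a.z * b.z;
}
function normalize(v: Vec3): Vec3 {
  const len = Math.sqrt(dot(v, v)) || 1;
  return { x: v.x / len, y: v.y / len, z: v.z / len };
}

// True if the gaze ray falls within a small cone around the
// direction to the other person's head (~10 degrees here).
function isLookingAt(
  gazeOrigin: Vec3,
  gazeDirection: Vec3,
  targetHead: Vec3,
  coneDegrees = 10
): boolean {
  const toTarget = normalize(sub(targetHead, gazeOrigin));
  const cosAngle = dot(normalize(gazeDirection), toTarget);
  return cosAngle > Math.cos((coneDegrees * Math.PI) / 180);
}
```

If both participants’ gaze rays pass this test at the same time, that is mutual eye contact, which their avatars can then render by aiming their eyes at each other.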
Appearance and artefacts. I’m putting these together, as this is something that VR and immersive does really well. I can tell the surgeon is at the meeting because her avatar actually looks like one, from its appearance and artefacts. It saves time putting on the actual clothing, as she might be at home in a dressing gown for all anyone else knows.
Many VR apps allow custom avatars to be created and this space is well understood.
Appearance adds to the meeting, but we are still behind on many of the interactions discussed above.
Gaps are opportunities
Here at Masters of Pie, we design and build collaborative immersive software for enterprise customers. Our customers’ requirements are quite different from those of consumer VR chat applications, and yet there are quite a few crossovers.
A key objective for Masters of Pie is to allow people to communicate effectively and so we are working on solutions to the issues discussed in this article.
Some solutions require better and more widely adopted hardware before they can be rolled out effectively, but others are essential enhancements to the overall user experience that we offer.
We believe that our Radical software offers a better and more engaging way for users to interact with enterprise data sets. Immersive technology allows our customers to do things that they simply cannot do in the real world.