For an overview of the project, click here.

This project was initially created as part of the requirements for the course Future Cinema II. It was inspired by the question posed by the course assignment:

How does this idea respond to the question of the medium — how is it specific to VR, and wouldn’t make sense (or much less sense) without VR? How does it use the basic elements of the medium, and the engine, as fundamental to the message?

At the heart of this project lies a question about how knowledge is both transmitted and received within a virtual environment. Specifically, I am interested in how social interaction is altered in VR environments. Current VR technology limits the flow of information found in face-to-face communication: subtle cues such as facial expressions and eye movement cannot currently be transmitted inside a VR experience. With current VR hardware, the only information that can be transmitted is the position and orientation of the user’s head and, if VR controllers are used, hands.
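To give a sense of just how narrow that channel is, the complete per-frame signal a head-and-two-controllers rig transmits can be written down in a few lines. The sketch below is purely illustrative; the names (Pose, TrackedFrame) are mine, not part of the project's code:

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class Pose:
    """One tracked rigid body: a position and an orientation (quaternion)."""
    position: tuple[float, float, float]             # x, y, z in metres
    orientation: tuple[float, float, float, float]   # x, y, z, w

@dataclass
class TrackedFrame:
    """Everything a head-and-controllers VR setup knows about the user on a single frame."""
    timestamp: float
    head: Pose
    left_hand: Optional[Pose]    # None when no controller is tracked
    right_hand: Optional[Pose]
```

Three poses per frame: that is the whole of the user's body as far as the experience is concerned, in place of the face, gaze, and posture available in person.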

But what are the implications of this limitation on non-verbal communication inside VR environments? What is the nature of the knowledge we transmit through our bodies (even if limited to the head and hands), and is it possible for a simple AI to mimic the expression of embodied knowledge such that it becomes indistinguishable from that of a real person? Just how much of our sense of self is centred in our bodies, and how does VR transform this sense of self, especially when we encounter other agents whose movements are based on our own?
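One way to make the idea of "agents whose movements are based on our own" concrete is the simplest possible version of such an AI: record the three tracked poses over time and replay them on a virtual body. The following minimal sketch assumes the hypothetical TrackedFrame structure above; it illustrates the principle and is not the project's implementation:

```python
import bisect

class MotionMimic:
    """Records a user's head/hand poses and replays them to drive another agent's avatar."""

    def __init__(self) -> None:
        self._frames: list[TrackedFrame] = []

    def record(self, frame: TrackedFrame) -> None:
        """Store one frame of the live user's tracked motion."""
        self._frames.append(frame)

    def sample(self, t: float) -> TrackedFrame:
        """Return the first recorded frame at or after time t (or the last frame)."""
        if not self._frames:
            raise ValueError("no motion has been recorded yet")
        times = [f.timestamp for f in self._frames]
        i = bisect.bisect_left(times, t)
        return self._frames[min(i, len(self._frames) - 1)]
```

Verbatim playback is deliberately the simplest "AI" imaginable, which is what makes it interesting here: if even that is hard to distinguish from a live person, the question shifts from how sophisticated the agent is to how much of our presence was ever carried by those three poses in the first place.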
