David Drascic

Perfecting the User Experience

Research Topics

Stereoscopic Displays

Most of us have had the opportunity to see a 3D movie, and a few of us even have 3D HD TVs at home. It wasn’t always like this.

When I started working on my MASc with Prof. Paul Milgram in the Dept. of Industrial Engineering in 1986, he had one flaky prototype stereoscopic display that worked most of the time. Then one day it caught fire.

Fortunately, I was already comfortable with a soldering iron and basic circuit design, so after some research and a failed attempt or two, we were able to make our own.

Working with a very tight budget, and aiming to use commonplace cameras and displays, we developed a number of different approaches, using as many off-the-shelf components as we could. Around that time, a few companies started making reasonably priced commercial units, which gave us a lot more freedom to focus on the Human Factors of stereoscopic displays.

The Human Factors of Stereoscopic Displays

It turns out that there are a LOT of issues to consider when designing a stereoscopic system. Human vision is incredibly powerful and robust in a lot of ways, but there are some things it doesn’t cope with well. Any mistakes in the alignment of your cameras, or in the synchronisation of your display, for example, can cause all sorts of problems, including headaches, nausea (“simulator sickness”), and eye strain, and can greatly interfere with someone’s ability to do their work.
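
As a concrete illustration (nothing like our 1980s hardware, which long predates tools like OpenCV), one quick modern check for camera alignment is to measure the vertical disparity between matched feature points in the left and right images. The sketch below assumes OpenCV and NumPy are available; the filenames and the rule of thumb at the end are purely illustrative.

```python
# A rough alignment check, assuming OpenCV (cv2) and NumPy are available.
# Matches features between the left and right frames and reports the mean
# vertical disparity in pixels; a well-aligned rig should be close to zero.
import cv2
import numpy as np

def mean_vertical_disparity(left_path: str, right_path: str) -> float:
    left = cv2.imread(left_path, cv2.IMREAD_GRAYSCALE)
    right = cv2.imread(right_path, cv2.IMREAD_GRAYSCALE)

    orb = cv2.ORB_create(1000)
    kp_left, des_left = orb.detectAndCompute(left, None)
    kp_right, des_right = orb.detectAndCompute(right, None)

    matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
    matches = matcher.match(des_left, des_right)

    # Vertical offset between each matched pair of keypoints.
    dy = [abs(kp_left[m.queryIdx].pt[1] - kp_right[m.trainIdx].pt[1])
          for m in matches]
    return float(np.mean(dy))

# e.g. mean_vertical_disparity("left_frame.png", "right_frame.png")
# More than a pixel or two of average vertical disparity is worth fixing
# before blaming the viewer's eyes.
```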

Telerobotics

Drones are now commonplace. The majority are operated by remote control, but many have at least some built-in intelligence, to keep them level, to return to where they started, to avoid crashing. And a few are autonomous, capable of doing a task without human intervention.

When I started, these were just ideas. My first project was to determine whether using stereoscopic video would improve the remote operation of a bomb-disposal robot. The answer was a solid yes, because of the nature of the particular tasks involved.

However, changing the position of the camera could change that result. The key factor is the total number of pixels involved in showing the important changes. The more pixels involved, the easier the task is, and the faster it can be completed. When only one pixel is involved, most people can still do the task, but it takes much longer. This is akin to Fitts’s Law.
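
For readers who haven’t met it, Fitts’s Law predicts movement time from an index of difficulty based on target distance and width. The sketch below uses the common Shannon formulation; the constants a and b are placeholders that would normally be fitted to measured times, and the pixel-count analogy is loose rather than a formal application of the law.

```python
# A minimal sketch of Fitts's Law (Shannon formulation). The constants a and
# b are placeholders; in practice they are fitted to measured completion times.
import math

def fitts_movement_time(distance: float, width: float,
                        a: float = 0.1, b: float = 0.15) -> float:
    """Predicted movement time in seconds for a target of a given width
    at a given distance: MT = a + b * log2(D / W + 1)."""
    index_of_difficulty = math.log2(distance / width + 1)
    return a + b * index_of_difficulty

# By analogy: a depth change spread over many pixels behaves like a wide,
# easy target; a one-pixel change behaves like a very narrow one, so the
# task is still possible but takes much longer.
```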

My second project, which ran concurrently and began in 1986, was to develop a method for the operator to specify a particular task and a precise destination for a partially autonomous bomb-disposal telerobot. My off-the-cuff solution: Augmented Reality.

Augmented Reality

When I first met with Prof. Paul Milgram in the summer of 1986 to discuss a potential project involving the use of stereoscopic displays for telerobotics, I tossed out the idea of using a computer to generate calibrated stereoscopic graphics and superimposing them on the stereoscopic video of the real world.
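
The geometry behind “calibrated” is simple in principle, even if getting it right in practice was not. As a minimal sketch, and making no claim about how our actual system was implemented: with a parallel pinhole stereo model, a virtual 3D point can be projected into the left and right views, and the graphic drawn at those two positions so that it appears at the intended depth.

```python
# A minimal sketch of the idea, not the original system: project a virtual
# 3D point into the left and right views of a parallel pinhole stereo rig,
# then draw the graphic at those two image positions. The focal length,
# baseline, and image centre below are illustrative values only.

def project_stereo(point_xyz, focal_px=800.0, baseline_m=0.065,
                   cx=320.0, cy=240.0):
    """Return ((u, v) left, (u, v) right) pixel coordinates of a 3D point
    given in metres, with the origin midway between the cameras and z
    pointing forward."""
    x, y, z = point_xyz
    half_b = baseline_m / 2.0
    left = (focal_px * (x + half_b) / z + cx, focal_px * y / z + cy)
    right = (focal_px * (x - half_b) / z + cx, focal_px * y / z + cy)
    return left, right

# A graphic rendered at these two positions appears, once the pair is fused,
# at the same depth as a real object at point_xyz, which is what lets the
# overlay line up with the live stereoscopic video.
```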

Four months later we had a contract, 18 months later my first Augmented Reality system was working, and we were giving public demos in 1988. By 1990, we had patented that off-the-cuff idea.

Mixed Reality

During a lab meeting in 1993, while trying to account for some conflicting results in recently published work, I explained that Direct View, Stereoscopic Video, Augmented Reality, and Virtual Reality were all points on the “Reality-Virtuality Continuum”. “Augmented Virtuality” was an obvious point that I added, and the middle gap I called “Mixed Reality” for lack of a better term.

A year later, Prof. Milgram published a paper defining “Mixed Reality”, and now the concept is built into most modern computers.