Kirk Moffitt, PhD
Image & display solutions:
Human Factors Consultant
I have been working with augmented- and virtual-reality displays for 25 years, starting with the Kaiser Electronics Agile Eye, a monocular see-through display for tactical aircraft. New technologies have allowed lightweight and economical augmented-reality displays to enter the consumer market.
Virtual reality displays continue to present challenges:
—Virtual reality is almost always associated with low-resolution, grainy or blurry images, and has been for over 20 years. I am intimately familiar with a method that provides detailed imagery and expansive viewing with minimal compromise.
—What eye, head, torso and body behavior do people exhibit while navigating in VR? Answers to this question can help define binocular and total FOV, image quality, recommended posture, mobility and other rules for safety.
—What are the weight and sizing requirements? How should the optics be aligned with the eye pupil? I have developed tools for combining imagery with incomplete databases to answer key anthropometry questions.
—Viewing Comfort: Is there a link between binocular misalignment, blur and spatially unnatural images?
—I have long been intrigued by a possible link among the varied images that all cause viewing discomfort. Some research suggests that this link is oculomotor: the observer attempts to adjust either focus or vergence to improve image quality. Other research points to a more central neural effect.
—Many people experience difficulty with binocular displays such as HMDs. Understanding these links can illuminate this problem.
Augmented reality has even more challenges, including:
—Where is attention? On the AR display information? On the outside world? On both?
—How large is the functional visual field, and what roles do attention and AR information play in it? Does this have something to do with tunnel vision?
—Blue graphics are mostly invisible in normal viewing conditions. I have a solution to this problem.
—What are the roles of graphics and images in augmented reality?
—What is the other eye doing when viewing augmented reality information on a monocular display? Why should you care? Does this have something to do with attention? Could this be one of the underlying mechanisms of a major safety issue?
—Can stereo graphics be integrated into real world objects?
—What is the apparent distance of monocular graphics and images, and how does this perception interact with real-world objects?
—Is the mobile user really mobile while using AR? I have an explanatory model.