Reality check: 30 years of developments in a virtual world

4 August 2017

Bob Stone is celebrating 30 years of work in virtual reality, having seen designs mature from cumbersome wearable virtual reality prototypes to sophisticated mixed reality command and control concepts. Here, he talks us through the origins of the technology and the influence of human factors on its development.

—————

Thirty years ago I had a career-changing experience – one that has stood the test of time and, even today, drives the research and consultancy projects undertaken by my team at the University of Birmingham. In the mid-1980s, whilst a member of the human factors research team at British Aerospace (BAe) in Bristol, colleagues and I were undertaking Low Earth Orbit ‘Teleoperation & Control’ research for the European Space Agency (ESA). As well as conducting experiments with modified industrial robots to address such issues as optimum camera locations, techniques for displaying remote force feedback and the evaluation of human performance in the presence of communication time delays, we were encouraged to keep a ‘technology watch’ on future developments in human interface technologies. Whilst attending a conference on future technologies for satellite servicing at the NASA Goddard Space Flight Center in Maryland in June 1987, I was introduced to Dr Steve Ellis, who invited me to ‘hop’ over to San Francisco to visit him, his colleague Scott Fisher and the Aerospace Human Factors Division at NASA’s Ames Research Center in Moffett Field, California, and to present the work of the BAe team and its relationship to ESA’s spaceflight efforts.

I was already aware of the telerobotics-related research being undertaken by the Ames specialists, but what I was not prepared for came in the form of a prototype of a very cumbersome head-mounted display and a similarly bulky glove, bristling with fibre optic sensors. What a few years later became the EyePhone and DataGlove Virtual Reality (VR) products, marketed by the ill-fated US company VPL Research, were, in the late ‘80s, being investigated by the Ames team as a radically new form of human interface, designed to create a sense of ‘telepresence’ for astronauts and ground-based specialists controlling anthropomorphic robots undertaking maintenance activities on and around orbital space platforms. Ellis and Fisher’s demonstration meant that I was to become the first European to experience the revolutionary NASA Virtual Interface Environment Workstation, or VIEW, system.

Moving my head to look around and making simple hand gestures with the instrumented glove, I was able to ‘fly’ towards and ‘stand’ on the lowest step of a computer-generated escalator – by today’s standards a very primitive form of animated wireframe graphic. What happened then was unlike anything I had experienced before. An apparent visual ascent on this virtual structure, while my vestibular system was convincing my brain that I was standing still, left me somewhat disorientated and, if the truth be told, slightly nauseous! Then the programme changed, and I found myself ‘miniaturised’ and standing on the edge of what appeared to be a rotating virtual camshaft. Despite the disorientating effects, both demonstrations were powerful and impressive, even given the simple 3D graphics of the time. This was certainly like no human-computer interface concept I had ever witnessed before, and I emerged convinced that, one day, VR could provide a very powerful tool for intuitive data visualisation and technology-based training in industries such as aerospace, defence, automotive and healthcare.

On returning to the UK, I set out on a journey to establish an industrial applications-focused VR programme. Six years later, in my new role with the UK’s National Advanced Robotics Research Centre – and courtesy of appearances in Colonising Cyberspace, the 1991 BBC Horizon documentary on VR, and on the BBC’s Nine O’Clock News in January 1993 – that ambition was finally realised with the launch of the wholly industry-sponsored Virtual Reality & Simulation (VRS) Initiative.

The following six years were certainly a rollercoaster ride of failures and successes, often tainted by significant frustration at the lack of foresight and innovation on the part of the UK science and technology sector. Nevertheless, those years evoke many memories for me: commissioning the UK’s first transputer-based VR computer, which enabled us to produce a very basic but early example of augmented reality (AR); launching the world’s first Virtual Stonehenge model with the late Sir Patrick Moore at the London Planetarium; and inventing the world’s first haptic feedback glove, Teletact.

The Teletact experience was, sadly, short-lived, despite a potentially lucrative approach from the wealthy owner of a chain of US ‘adult emporia’, who wanted to develop the concept into a commercial version of the ‘Orgasmatron’ featured in Woody Allen’s 1973 film Sleeper. Alas, the terms of our government funding prevented us from taking that idea any further. In October 1990, the BBC’s science programme Tomorrow’s World ran a feature on the glove, with an Angora rabbit as a tactile subject. There was, at that time, a myth that any novel technology appearing on Tomorrow’s World would face an uncertain future. And so it was with Teletact: the pneumatic haptic concept was taken no further, and the remains of the first prototype glove lay hidden in my garage until their ‘re-discovery’ in 2012!

Another major experience in the 1990s demonstrated just how easy it was to forget the requirements and capabilities of the human user and to go all out assembling the latest high-tech components, swayed by the hype and false promises of VR hardware and software vendors (something, sadly, which is all too evident today, given the current resurrection of interest in VR and associated technologies). I have written about the MISTVR (Minimally Invasive Surgical Trainer) experience on numerous occasions since, and the lessons learned form a key part of my university introductory lectures on human-centred design for interactive technologies. In brief, MISTVR evolved from a project sponsored in 1994 by the UK’s Wolfson Foundation and Department of Health, the overarching aim of which was to assess the potential of emerging VR technologies to deliver cost-effective keyhole surgery training for future medics.

The contract from these sponsors was, for the UK, quite sizeable, and as a result we went into ‘technology overdrive’. We were adamant that surgeons would benefit from stereoscopic displays, multi-axis haptic feedback systems, and realistic deformable human organ and tissue models complete with bleeding, fluid leakage, diathermy smoke and tissue congealment effects. Many tens of thousands of pounds later, we had our results, hosted on a £250,000 ‘graphics supercomputer’. The surgeons who saw those results laughed and walked away, claiming that we had no idea what the medical community actually needed.

After a long period of hard reflection, I was embarrassed to confess that I had simply forgotten the lessons I had learned from Ellis, Fisher and colleagues during that brief visit to NASA Ames in 1987. They had emphasised how crucial it was to put human factors considerations first, before getting carried away with the high-tech wearable and associated technologies the fledgling VR community was about to produce with a vengeance. So it was back to the operating theatre and the human factors handbooks. Eventually, after a significant – and human-centred – redesign, MISTVR became the world’s first part-task surgical skills trainer, adopted and evaluated on an international scale, including as a de facto skills trainer by the European Surgical Institute in Germany. More importantly, the research demonstrated conclusively that VR training systems could benefit from – and, indeed, must be subjected to – a strong, underpinning human factors approach from the outset.

Having been the Scientific Director of a small VR company for some eight years after the conclusion of the Robotics Centre research programme, I left the volatile world of commercial VR in 2003 and joined academia, aiming to excite and inspire students not only to develop the design and software skills essential to their future careers, but also to embrace the importance of human-centred design – in developing content with appropriate task, context and interactive fidelity, and in exploiting appropriate interactive technologies.

Between 2003 and 2012, my university team and I were given an excellent opportunity to look back on over two decades of involvement in the VR community and to undertake even more projects emphasising the importance of human factors in VR design. Building on the experiences described earlier, and courtesy of sponsorship from the UK Ministry of Defence via the Human Factors Integration Defence Technology Centre (HFI DTC), we were able to undertake a range of stakeholder-led VR projects, from submarine safety training to counter-IED (improvised explosive device) awareness, and from field hospital surgical training to the development of a unique simulator for the UK’s CUTLASS bomb disposal robot. The HFI DTC funding also enabled us to produce two key documents containing lessons learned and human factors guidelines for VR and AR researchers and designers. These have been made freely available and are still very much in use today, including as part of my lectures to future generations of VR developers.

The story continues, yet I often find myself coming full circle, drawing on that early NASA experience. For me, the early 1990s were all about VR and telepresence for applications in space and the nuclear industry. Today it is mixed reality for sophisticated command and control concepts and defence paramedic training but, in many respects, perhaps we have not come as far since the late ‘80s and early ‘90s as we may believe. You have only to witness the frenetic activity on the Internet following VR’s recent ‘re-birth’ to see that newcomers to today’s VR community are making the same technology-push mistakes their predecessors made all those years ago. Yes, the technologies are much improved and more affordable than they were back then but, when it comes to making realistic, credible claims and predictions, very little has changed.

Nevertheless, I live in hope that the ongoing achievements of my tiny team demonstrate best practice, evolved over three decades, to a technology-hungry new generation. The fact that our latest Mixed Reality Command & Control workstation bears a remarkable resemblance to NASA’s VIEW concept is testament, I believe, not only to the harsh lessons learned over that period, but also to the enduring impact of what I experienced during my visit to Ames all those years ago.

—————

Professor Bob Stone is Chair of Interactive Multimedia Systems and Director of the Human Interface Technologies Team at the University of Birmingham, UK. For more details, see www.birmingham.ac.uk/stone.

This article was published in The Ergonomist, no. 560, Jul/Aug 2017 issue.