
Automation for a safer world: new approach needed as boundaries blur between human and machine

Joseph Flaig

Professor Sarah Sharples, chief scientific adviser for the Department for Transport, delivers the 2024 Thomas Hawksley Prestige Lecture at IMechE headquarters

The boundaries between human and machine are blurring. Once, there were clear responsibilities for the operation of complex industrial and transport systems. Now, with AI-enhanced technologies being deployed at breakneck speed, it is not always obvious who – or what – is responsible for the safety of operators and members of the public.

IMechE’s recent Thomas Hawksley Prestige Lecture explored those boundaries, and set out ways to maintain and improve safety in this brave new world. “Autonomous and semi-autonomous systems are no longer a futuristic concept – they're already around us,” said organiser and IMechE past-president Carolyn Griffiths. There is a “very pressing debate” surrounding automation as a result, she added, speaking at the institution’s headquarters in Westminster on 11 December.

“The potential for autonomous systems, greater incorporation of machine learning and other AI, I'm sure you'll agree, is enormous,” said Griffiths, who also founded and led the Rail Accident Investigation Branch.

“Under the right conditions, this has the potential to transform how we live, to stimulate the UK economy, driving productivity, creating new jobs, improving the workplace and helping those experiencing disabilities have a better quality of life.”

For all that to happen, however, engineers must ensure the safety of systems. This will involve new ways of doing things in design, development, test and validation, regulation and training. Professor Sarah Sharples, chief scientific adviser for the Department for Transport, set out some of the challenges and opportunities in the lecture, titled “Humans and Automation – Safety by Design” and delivered in partnership with the Parliamentary Advisory Council for Transport Safety (Pacts).

Human brilliance

To maintain and improve the safety of systems, we must first understand how humans use and interact with them. The human factors discipline is vital to this discussion, said Professor Sharples, also a member of the faculty of engineering at the University of Nottingham. She summarised the aims of the field: “Humans are fallible and humans are brilliant. Our job is to minimise the impact of human fallibility and maximise the value of human brilliance.”

She gave the example of two air accidents with very different outcomes – the Kegworth air disaster, in which 47 people died in an attempted emergency landing at East Midlands Airport, and the ‘Miracle on the Hudson’, in which pilots Chesley “Sully” Sullenberger and Jeffrey Skiles successfully ditched their Airbus A320 into the river after a bird strike, saving the lives of 155 passengers and crew.

In the Kegworth crash, the pilots mistakenly shut down the plane’s functioning engine, rather than one that had failed. Human factors analysis later showed there were many reasons why they believed they had shut down the correct engine, including cockpit design, the number of interruptions and alarms, and their own mental models of how the aircraft operated.

“Both the Kegworth pilots and Sully thought they were doing the right thing,” said Professor Sharples. “They were experienced. They recognised their responsibility. They acted in a way that they believed was best for the safety of their aircraft and their passengers.”

Knowing the best way to maintain safety in similar situations, and many more, is now becoming even more complex as automation replaces cognitive elements of our work. “No longer is it clear or simple to allocate responsibility for elements of a task neatly to either the human or the engineered system,” said Professor Sharples.

There are countless examples where it is “highly beneficial” for humans to work with automated systems, she said, including AI-enhanced contrail mitigation, autonomous freight trucks, automatic crowd safety monitoring and car vision systems providing alerts for drivers.

But as automation takes responsibility for certain tasks, it can be difficult to know how much accountability lies with human operators. “How do we assure, therefore, the safety of different elements of this system, and how do we assure the system as a whole?” she asked.

Keeping people in the loop

The wealth of experience from the human factors discipline can make an important contribution, Professor Sharples said. Relevant tools include 3D models and digital twins of complex systems that help evaluate and mitigate physical risks, and advanced sensing technologies that can track human behaviour and responses.

“We can collect information from planes, boats, trains and cars to learn how pilots and drivers have performed in different types of situations,” she said.

But what about when the machine is taking more of those responsibilities, such as in on-road trials of self-driving technology? “Where it becomes particularly challenging is where people are working with technology in partnership, where the responsibility and the accountability still remains with the person, as we currently see with driving technologies,” Professor Sharples said.

“We ask people to take responsibility for the outputs of technology that has been introduced for the very purpose of doing the job faster or more accurately. And we know that replacing people can lead to a loss of skills and a loss of understanding of how a complete system works, which results in them becoming out of the loop.”

Many types of automation, and AI in particular, are only as good as the data on which they are tested, she stressed. This can cause issues when technology is released to the public, such as systems that do not work equally well for people of different ethnicities, or unforeseen ‘edge cases’ in technologies that control autonomous vehicles.

“How do we responsibly ensure the person is able to take responsibility for some elements, but in others make sure the responsibility lies with the technology? How can we assure the safety of systems where people may be involved, but where their actions may be unpredictable?”

In other words, she said: “How do we minimise the impact of human fallibility and maximise the value of human brilliance?” 

Responsible automation

Thankfully, there are some solutions. Aiming to establish a “shared framework”, Professor Sharples suggested options including embedding human factors experts in industrial settings.

“We know of examples where this has made an enormous difference,” she said. “But this isn't scalable… so we also need to develop methods which are usable by non-human factors experts.”

She continued: “I strongly agree with Carolyn that multidisciplinary teams are key. We all know it's unhelpful if we end up with one profession preaching to or arguing with another. So the involvement of human factors experts in standards committees, nationally and internationally, is a really important thing we need to pursue.”

Assessors of new technologies and systems should also be given guidance on how to tackle complex new safety issues, she said, and on when to seek expert advice.

“We need intelligent in-use monitoring, appropriate methods of data collection and, more importantly, a culture of continuous learning, reflection and sharing of data with a common mission to deliver safe systems,” she said. “We need to recognise that the world has changed. Human-technology cooperative systems are here to stay.”

Concluding, she summed it up as “a mission towards responsible automation – automation that makes systems safer, automation that makes people safer, and automation that makes the world safer.”

Proceed with caution

Each journey we take involves a “huge number of complex decisions”, said Margaret Winchcomb from Pacts, which aims for a transport system free of deaths and life-changing injuries from accidents. The deputy executive director of the charity stressed the need for caution when deploying autonomous vehicles, even if some people claim they will improve safety.

“Traversing along an urban street means navigating a mix of vehicles, people and infrastructure. While the majority of our trips end safely, collisions do happen,” she said.

“Intelligent machines may be able to handle many tasks, and the lure of new technology to drive us around is tempting. However, vehicles are not yet reliably fully autonomous. They cannot yet manage every scenario they encounter. We should not let them run before they can walk.”

She added: “Instead, as Professor Sharples cautioned in her lecture, the development of automation needs to be carried out slowly, bringing humans along at all stages, including design, testing and use. People need to know that, while automation means some repetitive, boring tasks can be done by a machine, our brilliance means that we still need to be ready to manage the extra complex tasks a machine can’t handle.”

The lecture is now available to watch online.



