Later this year, a Nissan Leaf will travel from Cranfield University to Sunderland, a distance of 230 miles. It will navigate roundabouts, A-roads and motorways, all through live traffic. Nothing unusual about that except that the Leaf will be driving itself.
The journey, called the Grand Drive, is billed as the most complex autonomously controlled journey yet attempted in the UK. It will be the culmination of a 30-month development project that boasts heavyweight partners including Nissan and Hitachi.
The project is called HumanDrive since one of its goals is to develop a vehicle control system that emulates a natural human driving style using machine learning and artificial intelligence. To assist engineers, a detailed visualisation has been created of the environment in which the autonomous test cars were developed. “The visualisation is rendered by a powerful games engine and derived from a detailed scan of the environment,” says Edward Mayo, programme manager at Catapult, the organisation managing HumanDrive. “We use tools that extract data from the real world; for example, the exact position of centre lines, road edges and potholes, as well as the precise angles of road signs.”
As the autonomous car, with a safety driver on board, is driven through the real test environment, it generates a stack of performance data. This is used to recreate its trajectory and behaviour in the digital visualisation.
A car driven by a human then repeats the journey. The resulting data allows development engineers to visualise and compare the performance of the two cars.
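The comparison described above can be sketched as a simple trajectory-deviation metric. This is only an illustrative sketch, not part of the HumanDrive toolchain: the function name, the time-aligned (x, y) sampling and the toy tracks are all assumptions for the example.

```python
import math

def trajectory_deviation(auto_track, human_track):
    """Mean point-wise distance (in metres) between two time-aligned
    trajectories, each given as a list of (x, y) positions."""
    if len(auto_track) != len(human_track):
        raise ValueError("tracks must be sampled at the same timestamps")
    dists = [math.hypot(ax - hx, ay - hy)
             for (ax, ay), (hx, hy) in zip(auto_track, human_track)]
    return sum(dists) / len(dists)

# Toy data: the autonomous car holds the lane centre, while the
# hypothetical human driver drifts 0.5 m sideways on the second half.
auto = [(float(i), 0.0) for i in range(10)]
human = [(float(i), 0.0 if i < 5 else 0.5) for i in range(10)]
print(round(trajectory_deviation(auto, human), 2))  # 0.25
```

In practice engineers would compare many more channels (speed, steering angle, acceleration) against the logged performance data, but the principle is the same: align the two runs in time and measure how far the behaviours diverge.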
Join the debate
A dangerous waste of time
Get these people trying to crack new battery tech.
Interesting
May I thank those who have a good understanding of the technical issues facing this developing science. Although some of it goes above my head, it puts the challenges faced into perspective. As a layman, it seems that although much can be achieved by simulation, no legislative body will allow level 5 autonomy without very extensive real-world testing. And nor should they. There are dangers; they will be mitigated, but nothing will ever be 100% safe.
Michael DeKort - Clarifications
Using simulation for 99.9% of development and test is not expensive. It is far cheaper than using the real world, because real-world testing would require over 500B miles and $300B per company. (Which cannot be done. And that doesn't count the thousands of injuries/deaths caused by learning accident scenarios.) The hundreds of millions of dollars I mentioned is for the whole scenario/simulation set needed to get to L4, a cost that would be spread across many users. Worst case, if someone paid for it all themselves, it equals what Uber and Waymo spend in a couple of months. And again, a rounding error compared to using the real world. And there is no choice in the end. You can do this or never get to L4, never save the relevant lives, and harm thousands of people for no reason.
I am all for shadow driving. We want that data and intention testing. We just want less of it, meaning not driving eternally. Safety driving is what needs to be virtually eliminated. Where it is necessary, it should come only after it is proven that simulation cannot do what is needed, and it should be run like a movie set, not wild-west action in the public domain.
As for proper simulation: every object in the world the system cares about must be precise both visually and in its physics, because perception systems struggle so much. Every building, tree, sign, the vehicle, tyres, road and sensors all have to be exact. Not close, but exact. If this is not done there will be unknown gaps between the real world and the sim, which will lead to planning issues and tragedies in complex and accident scenarios, and we cannot make the argument to switch 99.9% of public shadow and safety driving to sim.
The sim tech in this industry is nowhere near capable of this, or of making a legitimate digital twin. Not a single vendor makes a system where they even try to get half the models right, let alone get them right. I reached out to the DoD to do this because these folks were unwilling to fix those gaps when I reached out to them, mostly because they did not want to re-architect their systems or admit they had this many issues. So you wind up with IT/gaming people who concentrate on great-looking visuals with no geospecifics in most cases, or OEM manufacturers trying to use their systems with only the car model being acceptable. I would be glad to go over this with anyone in more detail, explain the exact architecture differences and show examples of it being done right.