Autonomous vehicles and Australian roads: Are they ready for each other?
The iMOVE project ‘How automated vehicles will interact with road infrastructure’ has concluded, and its final report has been released.
This study was carried out to investigate the infrastructure needs of automated vehicles now and in the future. The study methodology included training state-of-the-art artificial intelligence (AI) algorithms to help accurately localise the vehicle and recognise Australian road signs, road lines, and traffic lights. For the first time in Australia, the methodology also compared results with and without the use of annotated prior maps (sometimes referred to as high-definition maps).
The study data was gathered along multiple routes in and around Brisbane, using a test vehicle dubbed ZOE1. The car, a Renault Zoe, was equipped with the following hardware:
- 3 x forward‐facing cameras
- 1 x 360-degree camera
- 1 x roof-mounted 32-layer LIDAR
- GPS (Global Positioning System) sensors
- 2 x on‐board data-logging computers
- additional battery power and cabling
ZOE1 travelled just over 1,200 kilometres in this endeavour, over three months in 2019.
‘The QUT study, in partnership with the Department of Transport and Main Roads, was the first step in understanding infrastructure requirements of our vast and varied road network for new vehicle technologies’, said Transport and Main Roads Minister Mark Bailey.
‘As researchers drove the car across South-East Queensland, onboard sensors collected some 20 terabytes of raw data which was used to train and refine AI algorithms.
Artificial Intelligence technology and smart road infrastructure have potential to transform the way we travel in Queensland and reduce road trauma.’
What was the project looking to do?
The project was led by Queensland Department of Transport and Main Roads, with research conducted by the Queensland University of Technology.
The project looked to answer the following four questions:
- How well can state‐of‐the‐art computer vision and deep learning algorithms retrospectively account for correct human‐level driving behaviour and decisions with respect to recognising and obeying road signage and road surface markings, and how and in what situations do they fail to do so?
- How will existing built and signed infrastructure affect the accurate (automation-enabling, to a few centimetres’ precision) positioning capability of an automated vehicle?
- What types of infrastructure improvements could address shortcomings identified in this study?
- How will the answers to the above three questions change depending on the technology solution deployed on the automated vehicle, with a primary focus on the spectrum of possible range‐based (laser, radar) solutions versus primarily vision‐based (MobilEye® for example) solutions?
‘The primary goal of the study was to consider how current advances in robotic vision and machine learning – the backbone of AI – could enable the research car platform to see and make sense of everyday road signage and markings that we, as humans, take for granted,’ said the research project leader, Queensland University of Technology’s Professor Michael Milford, deputy director of QUT’s Centre for Robotics.
Report findings
The full 133-page report and a short summary flyer can be downloaded via the buttons below, but here’s a short description of some of the project’s findings.
In what will surprise very few people, note was made of differences in the appearance (and categories) of some Australian road signs, line markings, and traffic lights. Training the car’s detection system on Australian data did improve its ability to read the infrastructure, but still ‘… only to a level that would be insufficient for safe autonomous operation of a vehicle.’
Using the camera system alone, only approximately 40% of speed, give way, turn, pedestrian crossing, and speed hump signs were correctly detected; the remainder were either missed entirely or incorrectly identified. When annotated prior maps were added to aid the camera system, the figure improved to 97%.
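The report does not publish its detection pipeline, but the general idea of backing a camera up with an annotated prior map can be sketched roughly as follows. Everything here is an illustrative assumption, not the study’s actual method: the class names, the local metric coordinate frame, and the 5-metre match radius are all hypothetical.

```python
from dataclasses import dataclass
import math

@dataclass
class Sign:
    """A road sign, either detected by camera or annotated in a prior map.
    Coordinates are in a hypothetical local metric frame (metres)."""
    kind: str
    x: float
    y: float

def fuse_detections(camera_detections, prior_map, match_radius=5.0):
    """Check each map-annotated sign against the camera's detections.

    Returns two lists: signs the camera confirmed, and signs the map
    expects but the camera missed (which a planner could still act on).
    The 5 m match radius is an illustrative threshold, not from the report.
    """
    confirmed, map_only = [], []
    for sign in prior_map:
        seen = any(
            det.kind == sign.kind
            and math.hypot(det.x - sign.x, det.y - sign.y) <= match_radius
            for det in camera_detections
        )
        (confirmed if seen else map_only).append(sign)
    return confirmed, map_only

# Toy example: the map expects two signs; the camera only sees one.
prior = [Sign("speed_60", 0.0, 0.0), Sign("give_way", 120.0, 4.0)]
camera = [Sign("speed_60", 1.2, 0.4)]
confirmed, map_only = fuse_detections(camera, prior)
camera_only_rate = len(confirmed) / len(prior)  # 0.5 in this toy case
```

In this framing, the camera-only detection rate is the fraction of map-annotated signs the camera confirms, while the map-aided system can still surface the missed signs; it also makes clear why stale map annotations are a hazard, motivating the real-time map updates discussed below.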
‘This finding is consistent with the majority of approaches that use prior maps, with a few notable exceptions,’ Professor Milford said.
‘It is likely that these map systems will need to have real-time updates on temporary obstructions and changes to signs and roads, and to ensure the navigation systems used by autonomous vehicles receive those updates instantly.’
The report does go on to say that, ‘Failure cases were mostly caused by the limitations of our approach; first and foremost the relatively small amount of training and development of the algorithms, and in not having techniques to deal with the highly varied lane marking configurations at intersections.’
Download the report and flyer
DOWNLOAD THE FULL REPORT DOWNLOAD THE FLYER
I remember doing some work with a Mobileye a few years ago. Its crash prevention system can also read speed signs. Fair to say, at the time, it was just OK. It was only used to warn the driver of the speed limit, not to control the vehicle in any way. I have seen other systems generate similar results. It’s not just the sign itself: you have things standing in front of the sign, fog and smog issues, sun glare. We have lots to do in this space.
We certainly do Arthur, and that is well stated in the final report.