Better video analytics for traffic data collection via fixed cameras
Main Roads Western Australia has been working with the University of Western Australia (UWA) to develop video analytics (VA) software for processing and analysing drone videos to gather and auto-calibrate critical traffic data for network optimisation, such as vehicle counts and trajectories, delay, saturation flow, queue length, back-of-queue arrival rate, and gap acceptance. The evolving research has been supported by Main Roads through a series of projects.
This project will further develop the capability by integrating processing of videos recorded by fixed cameras, already in place and in use on the road network. Fixed cameras can complement drones in areas with flight restrictions or severe occlusions caused by the environment, and they can record much longer videos. The main objectives are faster processing time, more robust algorithms to deal with occlusions, and more accurate data.
Project background
Main Roads WA's existing associated projects are:
- Proof of concept project – starting at the end of 2017, we applied video analytics to fixed cameras to demonstrate the potential for vehicle counting, classification, origin and destination surveys, and pedestrian and cyclist counting
- Video analytics for drones – applying techniques similar to the proof of concept to drone footage, including stabilisation of the video, vehicle counting and classification, origin and destination surveys, free-flow detection, circulating speed estimation, queue length estimation, critical gap and follow-up headway analysis, Level-of-Service (LoS), delay time calculation, and other input/output parameters for the traffic modelling software (SIDRA). A Graphical User Interface (GUI) for this system has been developed, and Network Operations (Main Roads) has been given access to process drone videos.
Some of the benefits of the research to date have been outputs specifically tailored to Main Roads (for example, the SIDRA outputs) and the ability to test and validate the outputs rather than treating them as black-box results. Examples include the implementation and evaluation of seven different algorithms for estimating critical gap, and the calibration of speed estimation against observed GPS vehicle data.
The above research has been applied and further developed for Main Roads in several projects:
- Improving roundabout modelling using drone video analytics (iMOVE project 1-028) – flying drones over approximately 50-60 roundabouts to better understand how to use SIDRA environment factors for modelling roundabouts, with the intent of increasing model prediction accuracy for greenfield sites.
- Principal Shared Path (PSP) surveys – a study conducted with consultancy firm WSP at six sites around rail stations, examining the interactions, and potential conflicts, between pedestrians, cyclists, and vehicles.
This project will build on the work to date to implement a system for Main Roads to process videos recorded by fixed cameras. Fixed cameras can complement drones in areas with flight restrictions or severe occlusions caused by the environment, and they can record much longer videos.
The VA software needs to be further developed to cater for significant differences between drone and fixed camera videos. The existing system for fixed camera footage provides no capacity to measure intersection delay and there are significant differences in the angle and framing of the camera.
For example, there are fewer vehicles in the frame, but they are significantly larger: each vehicle occupies many more pixels, which makes the processing burden of tracking vehicles with the current approach too great. In addition, videos from fixed cameras are often recorded at a high compression rate to reduce file size. This introduces more noise into the video, which is often enough to confuse the model because it was trained on high-quality drone videos. A further processing challenge to be addressed is that fixed camera videos are usually of longer duration, as footage might cover many hours a day (e.g., measuring delay across both peak periods).
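One way the processing burden of long, heavily compressed fixed-camera footage might be kept manageable is to read the video as a stream, skip or downscale frames, and lightly filter compression noise before detection. The sketch below is an illustration of that idea only, not the project's actual pipeline; detect_vehicles is a hypothetical placeholder for whatever detector the VA software uses.

```python
# A minimal sketch (assumptions only, not the project's pipeline) of streaming a
# long fixed-camera video in chunks, downscaling frames and suppressing
# compression noise before running a detector.
import cv2


def detect_vehicles(frame):
    """Placeholder for a vehicle detector (assumed; not the project's model)."""
    return []


def process_fixed_camera_video(path, scale=0.5, frame_stride=2):
    """Yield (frame_index, detections) for a long recording without loading it all at once."""
    cap = cv2.VideoCapture(path)
    frame_index = 0
    try:
        while True:
            ok, frame = cap.read()
            if not ok:                      # end of file or unreadable frame
                break
            if frame_index % frame_stride == 0:
                # Downscale: fixed-camera vehicles occupy many pixels, so a
                # smaller frame is usually sufficient for detection and tracking.
                small = cv2.resize(frame, None, fx=scale, fy=scale,
                                   interpolation=cv2.INTER_AREA)
                # Light blur to suppress block artefacts from heavy compression.
                small = cv2.GaussianBlur(small, (3, 3), 0)
                yield frame_index, detect_vehicles(small)
            frame_index += 1
    finally:
        cap.release()
```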
Therefore, a new approach for fixed cameras needs to be implemented to address these challenges. UWA has previously developed a small proof of concept to demonstrate that volume and delay can be derived from Main Roads camera assets (Figure 1). This proof of concept was undertaken free of charge and needs to be made more robust for deployment, but it did demonstrate the capability to solve the problem.
Figure 1 Screenshot from proof of concept showing delay by lane.
Project objectives
The project objectives are to:
- Optimise processing speed and test the possibility of real-time processing of the VA software – the current VA software is built for optimal accuracy rather than speed and is only used on relatively short high-definition videos captured by drones. Because drones are a long way from the vehicles and often shake and drift in the air, we developed complex algorithms to keep the accuracy as high as possible. The results were good (e.g., more than 80% of the time our estimated vehicle speeds are within 2.5 km/h of the GPS records; a simple illustration of this check appears after this list), but they also added computational load. The current long processing time is not such a problem because drone surveys are often done for an hour during the peak, but it becomes impractical if a large number of fixed camera videos need to be processed regularly. Given that fixed camera footage will be of longer duration, the processing time of the current system needs to be reduced to make it feasible to produce analytics from much longer videos.
- Recalibrate analytics modules for fixed cameras, including delay time estimation, volume, speed, saturation flow, and queue length – the earlier prototype developed for Main Roads Network Operations will be built on to calculate delay time from a fixed camera (see the delay sketch after this list). Currently, Main Roads estimates delay time using models that are time consuming to develop and maintain, and none of its existing datasets can measure it directly. Delay time is critical to network optimisation efforts. Other modules have been developed for drones, but they need further calibration and validation to ensure the results are accurate.
- Improve detection and tracking for fixed cameras and under occlusion – this includes occlusion caused by the environment (e.g., buildings and street furniture) and by other vehicles, especially large ones.
- Improve vehicle classification – vehicle classification is approximately correct in the previous work, but more granular and accurate classification is desired.
- Further develop Graphical User Interface (GUI) – the existing GUI will be extended to allow Main Roads to use the system at its discretion to process fixed camera videos.
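As a rough illustration of the accuracy check mentioned in the first objective (the "within 2.5 km/h of GPS" figure), the agreement rate reduces to a simple calculation over paired speed observations. The data layout and function below are assumptions for illustration, not the project's validation code.

```python
# A small sketch, assuming paired lists of video-derived and GPS-logged speeds
# for the same vehicles; not the project's actual validation code.
def within_tolerance_rate(estimated_kmh, gps_kmh, tol_kmh=2.5):
    """Fraction of paired speed samples whose absolute error is within tol_kmh."""
    pairs = list(zip(estimated_kmh, gps_kmh))
    hits = sum(1 for est, ref in pairs if abs(est - ref) <= tol_kmh)
    return hits / len(pairs)


# Example: 4 of 5 estimates fall within 2.5 km/h of the GPS record -> 0.8
print(within_tolerance_rate([58.0, 61.2, 47.9, 72.5, 55.0],
                            [59.1, 60.0, 52.0, 71.4, 54.2]))
```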
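The delay idea in the second objective can also be illustrated simply: a vehicle's delay is its observed travel time through the approach minus the time the same distance would take at free-flow speed. The sketch below assumes a trajectory format of (time, distance) samples per vehicle and an assumed free-flow speed; it is not the prototype's actual method.

```python
# A rough sketch, not the project's method: per-vehicle and average delay from
# tracked trajectories. The (time_s, distance_m) format and the free-flow speed
# are assumptions for illustration only.
from statistics import mean


def vehicle_delay(trajectory, free_flow_speed_ms):
    """Delay (s) for one vehicle from its (time_s, distance_m) track points."""
    t0, d0 = trajectory[0]
    t1, d1 = trajectory[-1]
    observed_time = t1 - t0
    free_flow_time = (d1 - d0) / free_flow_speed_ms
    return max(0.0, observed_time - free_flow_time)


def average_delay(trajectories, free_flow_speed_ms=16.7):  # ~60 km/h, assumed
    """Average delay (s/veh) across all tracked vehicles in the video."""
    return mean(vehicle_delay(t, free_flow_speed_ms) for t in trajectories)


# Example: two tracked vehicles crossing a 100 m approach
tracks = [[(0.0, 0.0), (12.0, 100.0)],   # slowed vehicle
          [(0.0, 0.0), (6.5, 100.0)]]    # near free-flow vehicle
print(round(average_delay(tracks), 1))   # ~3.3 s/veh
```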
The outcome of the project is for Main Roads to have the capability to obtain relevant data analytics from fixed cameras and to run those analytics itself.
Please note …
This page will be a living record of this project. As it matures, hits milestones, etc., we’ll continue to add information, links, images, interviews and more. Watch this space!
My company has a similar project in mind. Are you able to provide the contact details of the researchers working on this project?
Hi Hashan. If you use our Contact Us page to, well, contact us, I could see what I can do.