Autonomous Mobility (Parallel Sessions)
Autonomous vehicles have made tremendous progress in the last few years by learning from very large datasets. This session will assess the progress made and explore several pending issues.
Perception systems are able to achieve very high levels of performance, but several issues related to this 'learning from data' approach need to be overcome to deliver safe and trustworthy autonomous vehicles.
Collecting data over millions of kilometres is very expensive. It is not sufficient to cover rare situations, nor is it desirable as a way to cover risky situations. Synthetic images and simulation environments can be used to complement real data.
Human annotation is difficult to scale up to very large datasets.
Semi-automatic labelling should improve both the efficiency and the quality of the labelling process.
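One common semi-automatic labelling pattern is to let a pre-trained model propose labels and route only its low-confidence proposals to human annotators. The sketch below illustrates this idea with a stand-in detector and an assumed confidence threshold; all names (`model_predict`, `CONFIDENCE_THRESHOLD`) are hypothetical, not part of any tool described here.

```python
# Minimal sketch of a semi-automatic labelling loop (hypothetical names):
# a model proposes labels; only low-confidence proposals go to humans.

CONFIDENCE_THRESHOLD = 0.9  # assumed cut-off for auto-acceptance


def model_predict(frame):
    """Stand-in for a real detector: returns (label, confidence)."""
    return frame["guess"], frame["score"]


def semi_automatic_label(frames):
    auto_labels, review_queue = [], []
    for frame in frames:
        label, confidence = model_predict(frame)
        if confidence >= CONFIDENCE_THRESHOLD:
            auto_labels.append((frame["id"], label))   # accepted as-is
        else:
            review_queue.append((frame["id"], label))  # human verifies
    return auto_labels, review_queue


frames = [
    {"id": 1, "guess": "pedestrian", "score": 0.97},
    {"id": 2, "guess": "cyclist", "score": 0.55},
]
auto, review = semi_automatic_label(frames)
```

The efficiency gain comes from humans only touching the uncertain cases; the quality gain comes from every auto-accepted label having passed a confidence bar.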
These sessions will also highlight how the rise of autonomous (driverless) vehicles, together with the shift from car ownership to 'mobility as a service', will make it possible to operate fleets of robot-vehicles as a new mobility paradigm. Designing a 'robo-vehicle' mobility service in a given city, and operating that service, will require collecting and processing large amounts of varied data in order to understand mobility needs, commit to a level of service quality, and operate the service efficiently.
Jean-Marc David (Renault)
Accelerating the race to AI self-driving cars
Tom Westendorp (NVIDIA Automotive)
Cloud-based Large Scale Video Analysis
Marcos Nieto (Vicomtech) and Joachim Kreikemeier (Valeo)
Driving simulation and Scenario Factory for Automated Vehicle validation
Andras Kemeny (Renault)
Urban mobility: navigating future uncertainty
Philippe Crist (OCDE / ITF)
Cloud-based Large Scale Video Analysis
Dr. Marcos Nieto (Vicomtech), Joachim Kreikemeier (Valeo)
Cloud-LSVA will create Big Data technologies to address an open problem: the lack of software tools and hardware platforms for annotating petabyte-scale video datasets. The problem is of particular importance to the automotive industry. CMOS image sensors for vehicles are the primary area of innovation for camera manufacturers at present; they are the sensor that offers the most functionality for the price in a cost-sensitive industry.
By 2020 the typical mid-range car will have 10 cameras, be connected, and generate 10 TB per day, without considering other sensors. Customer demand is for Advanced Driver Assistance Systems (ADAS), which are a step on the path to autonomous vehicles. The European automotive industry is the world leader in, and dominates the market for, ADAS.
The technologies depend upon the analysis of video and other vehicle sensor data. Annotations of road traffic objects, events and scenes are critical for training and testing the computer vision techniques that are at the heart of modern ADAS and navigation systems. Building ADAS algorithms using machine learning techniques therefore requires annotated datasets. Human annotation is an expensive and error-prone task that has only been tackled on a small scale to date. Currently, no commercial tool exists that addresses the need for semi-automated annotation or that leverages the elasticity of Cloud computing to reduce the cost of the task. Providing this capability will establish a sustainable basis for driving forward automotive Big Data technologies.
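Leveraging cloud elasticity for annotation typically means splitting a long recording into independent chunks that parallel workers process. The sketch below illustrates this under assumed parameters; a thread pool stands in for cloud workers, and `annotate_chunk` is a hypothetical placeholder for a real detection job.

```python
# Sketch of elastic, chunked annotation (hypothetical setup): a recording
# is split into fixed-size frame ranges processed by parallel workers.
from concurrent.futures import ThreadPoolExecutor

CHUNK_SIZE = 1000  # frames per annotation job (assumed)


def split_into_chunks(n_frames, chunk_size=CHUNK_SIZE):
    """Partition [0, n_frames) into half-open (start, end) ranges."""
    return [(start, min(start + chunk_size, n_frames))
            for start in range(0, n_frames, chunk_size)]


def annotate_chunk(chunk):
    start, end = chunk
    # A real worker would run detection on frames [start, end);
    # here we just report the range it covered.
    return {"range": chunk, "n_annotated": end - start}


chunks = split_into_chunks(2500)
with ThreadPoolExecutor(max_workers=4) as pool:
    results = list(pool.map(annotate_chunk, chunks))
```

Because chunks are independent, the worker count can grow or shrink with the backlog, which is the elasticity the paragraph above refers to.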
Furthermore, the on-board computer is set to become the central hub of the connected car, and this provides the opportunity to investigate how these Big Data technologies can be scaled to perform lightweight analysis on board, with results sent back to a cloud crowdsourcing platform, further reducing the complexity of the challenge faced by the industry. Car manufacturers can in turn cyclically update the ADAS and mapping software on the vehicle, benefiting the consumer.