So should self-driving car accidents stop us from developing the technology? Of course not! The technology has huge potential to save lives. The AI might be biased or not sophisticated enough, but it has one trait that humans don’t: it never becomes emotional, reckless, or sleepy. Done right, it should out-perform humans on safety in most situations, but obviously we are not there yet. So what can we do better? (I’m not a self-driving car expert myself, but I want to explore possibilities in this article, and if you have better ideas, please feel free to leave a response below!)
Again, let’s start with what humans can improve. One thing that especially surprised me about this accident is that the car actually HAD a safety driver. The whole thing could have been avoided if the safety driver had done her job: kept her eyes on the road instead of on her cellphone. It’s not that hard to do, but her failure to do so indirectly cost a life. This has nothing to do with the technology itself but everything to do with how the self-driving test process can be improved. Putting a paid safety driver behind the wheel is a good start and adds another layer of safety to the test, yet humans make mistakes. Since we already have an internal camera monitoring the driver, why not develop an algorithm that monitors his or her behavior and gives alerts/scores when his or her eyes wander off the road?
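To make the idea concrete, here is a minimal sketch of such an attention monitor. It assumes an upstream gaze-estimation model (not shown) that flags, frame by frame, whether the driver’s eyes are on the road; the threshold and function names are my own illustration, not any vendor’s API:

```python
# Sketch: score a safety driver's attention from per-frame "eyes on road"
# flags produced by a hypothetical in-cabin gaze-estimation model.

ALERT_THRESHOLD = 0.8  # assumed: alert if eyes are off the road >20% of the window

def attention_score(gaze_on_road: list[bool]) -> float:
    """Fraction of recent frames in which the driver watched the road."""
    if not gaze_on_road:
        return 1.0  # no data yet; don't alarm on an empty window
    return sum(gaze_on_road) / len(gaze_on_road)

def should_alert(window: list[bool]) -> bool:
    """True when attention over the sliding window drops below the threshold."""
    return attention_score(window) < ALERT_THRESHOLD

# Example: a 30-frame window where the driver glances at a phone for 10 frames
window = [True] * 20 + [False] * 10
alert = should_alert(window)  # attention is 20/30 ≈ 0.67, so an alert fires
```

The same score could be logged per shift to rate safety drivers over time, not just to trigger real-time alerts.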
Photo from https://labs.sogeti.com
The Lidar/Radar failed to trigger in this accident. What was the reason? Would more than one sensor work better? What about adding more types of sensors? If we can address a problem with an engineering approach, then, by all means, we should do it. Sensors are not expensive. We also need to make sure the sensors work in all weather conditions: heat, cold, snow, extreme sun glare, wind, etc. Prepare for the extreme for safety’s sake.
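As a toy illustration of why redundant sensors help, one could fuse independent detections with a simple majority vote that breaks ties toward braking. Real systems use probabilistic fusion (e.g. Kalman filters over continuous measurements), so treat this as a sketch with made-up sensor names:

```python
# Sketch: redundant obstacle detection via majority vote across
# independent sensors. Sensor names are illustrative.

def fused_obstacle_detected(readings: dict[str, bool]) -> bool:
    """Majority vote over per-sensor detections; ties resolve to True,
    i.e., when sensors disagree evenly, err on the side of braking."""
    votes = sum(readings.values())
    return votes * 2 >= len(readings)

# One sensor misses the pedestrian, two catch her -> the car still reacts
readings = {"lidar": False, "radar": True, "camera": True}
detected = fused_obstacle_detected(readings)  # True
```

The point of the sketch: a single failed sensor no longer decides the outcome, which is exactly the redundancy argument made above.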
Does the central driving system have a prioritized control scheme? That is, certain critical events detected by the sensors or the image-recognition system should cause the car to stop immediately to avoid a severe accident, overriding all other driving controls. This prioritized system needs to be carefully designed and tuned for maximum safety.
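A minimal sketch of what such prioritized arbitration could look like, assuming each subsystem emits a command tagged with a priority (the priority levels and subsystem names here are hypothetical):

```python
# Sketch: priority-based control arbitration. The highest-priority
# command wins, so an emergency stop overrides the normal planner.

from dataclasses import dataclass

EMERGENCY_PRIORITY = 100  # assumed: highest level, reserved for safety events

@dataclass
class Command:
    source: str      # which subsystem issued the command
    priority: int    # higher value wins arbitration
    brake: float     # requested braking, 0.0 (none) to 1.0 (full)

def arbitrate(commands: list[Command]) -> Command:
    """Return the command from the highest-priority subsystem."""
    return max(commands, key=lambda c: c.priority)

# The planner wants to cruise, but the pedestrian detector demands a full stop
nominal = Command("planner", priority=10, brake=0.0)
estop = Command("pedestrian_detector", priority=EMERGENCY_PRIORITY, brake=1.0)
chosen = arbitrate([nominal, estop])  # the emergency stop wins
```

The careful-design part the paragraph calls for lives in choosing those priority levels and the events allowed to claim the emergency one, since a too-trigger-happy override would cause rear-end collisions of its own.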
One thing that is essential for all machine learning models is the validation set. A good validation set defines how well the model generalizes and thus heavily determines the success or failure of a project in real life. This also applies to self-driving cars. What would be a good validation set here? Well, driving a car is not as simple or as clearly defined as a classification problem, and this is exactly where the problem lies. Shouldn’t all the self-driving car companies and regulatory bodies team up and develop a good ‘test routine’ that captures the extreme situations, edge cases, test scenarios, automatic test software, etc., to effectively serve as the ‘validation set’ for self-driving cars? I think collective consensus and effort here from all players is essential and under-explored.
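One way such a shared ‘validation set’ could be structured: a suite of named edge-case scenarios, each with an expected safe response, that every driving policy must pass before road testing. The scenario names and the trivially perfect policy below are purely illustrative:

```python
# Sketch: a shared scenario suite acting as the 'validation set' for
# driving policies. A policy is any callable mapping a scenario name
# to an action; scenarios and expected actions here are made up.

SCENARIOS = {
    "pedestrian_crossing_at_night": "stop",
    "cyclist_in_blind_spot": "yield",
    "sudden_heavy_rain": "slow_down",
}

def validate(policy) -> float:
    """Fraction of edge-case scenarios the policy handles safely."""
    passed = sum(policy(name) == expected for name, expected in SCENARIOS.items())
    return passed / len(SCENARIOS)

# A placeholder 'perfect' policy that always gives the expected response
cautious_policy = lambda name: SCENARIOS[name]
score = validate(cautious_policy)  # 1.0 -- only a perfect score clears road testing
```

Because the suite is just data plus a pass criterion, regulators and companies could grow it collaboratively, the same way a shared benchmark grows in any other machine learning field.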