No, not really true.
The way AI vision systems have been implemented in cars, the cameras produce a flat merged image which we run through some fancy AI and then arrive at a conclusion. But what if one camera sees a child and, for whatever reason, the other sees a clear road? The AI is not trained to process vision the way we do, where we use all our various senses, including the conflicting info we get from each eye, to arrive at a conclusion. It just does a merge and then processes. It should process the input from each sensor separately, then reprocess the results to arrive at a conclusion.
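The per-sensor-then-reconcile idea looks roughly like this in pseudocode form (everything here is a made-up illustration, not how any real driving stack works: `detect` stands in for a per-camera perception model, and the "brake on any positive" policy is just one conservative way to resolve a disagreement):

```python
def detect(frame):
    # Placeholder for a real per-camera perception model.
    # Here it just reads back the labels we stuffed into the frame.
    return frame["detections"]

def late_fusion(frames):
    """Run perception on each sensor independently, then reconcile.

    If any single camera reports an obstacle, the disagreement is
    resolved conservatively instead of being averaged away in a
    merged image before the model ever sees it.
    """
    per_sensor = [detect(f) for f in frames]
    if any("child" in dets for dets in per_sensor):
        return "brake"
    return "proceed"

# Two cameras disagree: one sees a child, the other a clear road.
frames = [
    {"detections": ["child"]},
    {"detections": []},
]
print(late_fusion(frames))  # conflicting sensors resolve to "brake"
```

The point of the sketch is just the ordering: each sensor gets its own conclusion first, and the conflict is handled explicitly at the reconciliation step rather than disappearing into a single merged input.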
Why? Are they also using AWS?