There was a post about this a couple of weeks ago arguing that incremental change might actually be less safe than a "revolution". Basically, if a car can handle 90% of the scenarios it faces, the human driver will zone out and have difficulty responding to the remaining 10%, especially with little to no warning of when that 10% will occur.
Yes. Also, when a company knows it needs to design a 100% driverless car, safety will necessarily be the primary concern when designing the firmware. While that should theoretically be the case regardless, the recent Toyota fallout[1] suggests that car companies perhaps aren't paying enough attention to this when a human driver is ultimately making the decisions.