Japanese automakers Nissan and Honda say they plan to share electric-vehicle components such as batteries and to jointly research software for autonomous driving.
I don’t know about your city, but I trust technology a lot more than the average driver. At least technology can tell a red light from a green light. I nearly got hit by a Ford mega truck in broad daylight whose driver thought the small green bicycle symbol was his cue to ignore his massive red “no left turn” indicator across a protected bike lane. :P
I agree. Less margin for error, but it leaves people who depend on automation vulnerable. I just imagine lots of growing pains before we get to the ideal state.
I don’t know about your city, but I trust technology a lot more than the average driver.
I don’t. Technology can be subject to glitches, bugs, hacking, deciding to plow right through pedestrians (hello Tesla!), etc.
While the case can be made that human drivers are worse at reaction time and paying attention, at least a “dumb” car can’t be hacked, won’t be driven off the road by a bug, and won’t try to run people over on its own without stopping.
A human, when they catch these things happening, can correct them (even if they caused them). But if a computer develops a fatal fault like that, or is hijacked, it can’t.
EDIT: It seems like this community is full of AI techbro yes-men. Any criticism or critical analysis of their ideas seems to be met with downvotes, but I’ve yet to get a reply justifying how what I said is wrong.
Plenty of dumb cars get recalls all the time for shitty parts or design. Remember that Prius with the brakes that would just decide to stop working?
Self-driving cars are no less prone to mechanical failures.
What’s different is the means of controlling it.
Yeah, but you said that already
No, I was talking about software issues.
And if you know that non-self-driving cars and self-driving cars are equally prone to mechanical issues, why bring it up as a counterpoint?
It wasn’t a counterpoint, you silly goose, I was agreeing with you