The Challenge of Building a Self-Driving Car

This episode of Real Engineering is brought
to you by Brilliant, a problem-solving website that teaches you
to think like an engineer. Last month Tesla held an event for their investors,
revealing the advances they had made in their autonomous driving capabilities. Currently, most Tesla vehicles are capable
of enhancing the driver’s abilities. They can take over the tedious task of maintaining
lanes on highways, monitor and match the speeds of surrounding vehicles, and can even
be summoned to you while you aren’t in the vehicle. Those capabilities are impressive, and in some
cases even life-saving, but they are still a far reach from a full self-driving vehicle, requiring regular input from the driver to
ensure they are paying attention and capable of taking over when needed. There are 3 primary challenges automakers
like Tesla need to overcome in order to succeed in replacing the human driver. The first of those is building a safe system. In order to replace human drivers, the self-driving
car needs to be safer than a human driver. So how do we quantify that? We can’t guarantee accidents won’t occur. Old Murphy’s Law is always in play. We can start by quantifying how safe human
drivers are. In the US, the current fatality rate is about
one death per one million hours of driving. That includes humans being stupid and crashing
while drunk or looking at their phone, so we can probably hold our vehicles to a higher
standard. But that can be our benchmark for now: our
self-driving vehicle needs to fail less than once every one million hours, and currently,
that is not the case. [1] We do not have enough data to calculate an
accurate statistic here, but we do know that Uber’s self-driving vehicle needed a human
to intervene around every 21 kilometres, meaning it failed roughly every 13 miles. [13] This makes Uber’s collision with a pedestrian
who unfortunately passed away, even more shocking. Supporters of self-driving vehicles were quick
to blame the pedestrian for stepping in front of the vehicle in low-light conditions [2],
but we cannot let our desire to advance the technology make excuses for it. The vehicle was using lidar sensors, which
do not need light to see. Yet, it made no attempt to slow down even
after the human occupant, who was not paying attention, had noticed the imminent crash. According to data obtained from Uber, the
vehicle first observed the pedestrian 6 seconds before impact with its radar and lidar sensors. At this point it was travelling at 70 kilometres
per hour [3]. It continued at this speed. As the pedestrian’s and the vehicle’s paths converged,
the computer’s classification system can be seen struggling to identify what the object in its view is, jumping from unidentified object, to car, to
cyclist, with no certainty in the trajectory of
the object. 1.3 seconds before the crash the vehicle recognised
it needed to perform an emergency brake, but didn’t, as it was programmed not to brake
if doing so would result in a deceleration over 6.5 metres per second squared. Instead, the human operator was expected to
intervene, but the vehicle was not designed to alert the driver. A shocking design, considering our earlier
statistic. The driver did intervene a second before impact
by engaging the steering wheel and braking, bringing the vehicle’s speed to 62 kilometres
per hour. Too little and too late to save this person. Nothing on the vehicle malfunctioned. Everything worked as programmed; it was simply
poor programming. Here the internal computer was clearly not
programmed to deal with uncertainty. Where a human would likely slow down when
confronted with something on the road that they could not clearly identify, this programme
simply continued on until it could identify the threat, at which point it was too late. It struggled to identify the object and predict
its path even with high-resolution lidar. So how can we improve safety? A large part of that lies in the hardware
itself and the programming that goes into it. Tesla unveiled its new purpose-built computer,
a chip specifically optimized for running a neural network, which Elon stated was the
first of its kind. [https://youtu.be/Ucp0TTmvqOE?t=6031] It has been designed to be retrofitted into
existing vehicles when customers purchase the full self-driving upgrade, so it is a similar size and draws the same power
as the existing self-driving computer, at 100 watts. [4] This has increased Tesla’s self-driving
computer’s capabilities by 2100%, allowing it to process 2300 frames per second,
2190 frames more than their previous iteration. A massive performance jump, and that processing
power will be needed to analyse footage from the suite of sensors each new Tesla has. On the right side of the board are all connectors
for the different sensors and cameras in the car. That currently consists of 3 forward facing
cameras, all mounted behind the windshield. One is a 120-degree wide angle fisheye lens,
which gives situational awareness, capturing traffic lights and objects moving
into the path of travel. The second camera is a narrow-angle lens, which
provides the longer-range information needed for high-speed driving, like on a motorway, and
the third is the main camera, which sits in the middle between these two applications. There are 4 additional cameras on the sides
of the vehicle which check for vehicles unexpectedly entering your lane and provide the information
needed to safely enter intersections and change lanes. The 8th and final camera is located at the
rear, which doubles as a parking camera but has also saved more than a few Teslas from
being rear-ended. *cut to footage of Tesla speeding up autonomously
to avoid a crash* The vehicle does not completely rely on visual
cameras. It also makes use of 12 ultrasonic sensors
which provide a 360-degree picture of the immediate area around the vehicle, and 1 forward-facing
radar. [5] Finding the correct sensor fusion has been
a subject of debate among competing self-driving companies. Musk recently stated that anyone relying on
Lidar sensors, which work similarly to radar but utilize light instead of radio waves,
is doomed and that it’s a fool’s errand. To see why he said this, let’s plot the strengths
of each sensor on a radar chart like this, where we rank each feature on a scale of zero
to five, five being the best and zero being non-existent. Lidar would look something like this. [6] It’s got great resolution, meaning it
provides high-detail information on what it’s detecting. It works in low- and high-light situations,
is capable of measuring speed, has good range, and works moderately well in poor weather
conditions. Its biggest weakness however is why Musk slated
it. The sensors are expensive and bulky. And this is where the second challenge of
building a self-driving car comes into play. Building an affordable system that the average
person will be willing to buy. Lidar sensors are those big sensors you see
on Waymo, Uber and most competing self-driving tech. Musk is more than aware of lidar’s potential;
after all, SpaceX utilizes it in their DragonEye navigation sensor. [9] Its weaknesses
are simply too much of a sticking point for Tesla for now, who are focused on building
not just a cost-effective vehicle, but a good-looking one. Lidar technology is gradually becoming smaller
and cheaper, making the technology more accessible, but still
far from cheap. Waymo, a subsidiary of Google’s parent company
Alphabet, sells its lidar sensors to any company that does not compete with its plans for a
self-driving taxi service. When they started in 2009 the per unit cost
of a Lidar sensor was around seventy-five thousand dollars, but they have managed to
reduce that cost to seventy-five hundred dollars in the past ten years by manufacturing the
units themselves. [7]
From what I can tell, Waymo vehicles use 4 lidar sensors, one on each
side of the vehicle, placing the total cost for just these sensors,
for a third party, at thirty thousand dollars. That is not far off the total cost of a base
Model 3. This sort of pricing clearly doesn’t line
up with Tesla’s mission “To accelerate the world’s transition to sustainable transport”. [8] This issue has pushed Tesla towards a cheaper
sensor-fusion setup. Let’s look at the strengths and weaknesses
of the 3 other sensor types to see how Tesla is making do without lidar. First, let’s look at radar. This is the radar sensor on the Tesla Model
3. Radar works wonderfully in all conditions. The sensors are small and cheap, capable of
detecting speed, and their range is good for both short- and long-distance detection. Where they fall short is the low-resolution data
they provide, but this weakness can easily be covered by pairing them with cameras. Regular video cameras look like this on our
radar chart. They have excellent range and resolution, provide
colour and contrast information for reading street signs, and are extremely small and
cheap. Combining radar and cameras allows each to
cover the weakness of the other. We are still a little weak in proximity detection,
but using two cameras in stereo allows them to work like our eyes to estimate
distance. When fine-tuned distance measurement is needed,
we can use our ultrasonic sensors, which are these little circular sensors dotted around
the car. This gives us solid performance all around
without relying on large and expensive sensors, but Tesla is suffering from a bit of a redundancy
problem with only one forward-facing radar. If that fails, there isn’t a second radar
sensor to rely upon. This is a cost-effective solution, and according
to Tesla their hardware is already capable of allowing their vehicles to self-drive. Now they just need to continue improving on
the software and Tesla is in a fantastic position to make it work. When training a neural network data is key. Waymo has millions of kilometres driven to
gain data, but Tesla has over a billion. 33% of all driving with Teslas is with autopilot
engaged. The data collection also extends beyond when autopilot
is engaged: Tesla also receives data in areas where autopilot
is not available, like city streets. Accounting for all the unpredictability of
driving requires an immense amount of training for a machine learning algorithm, and this
is where Tesla’s data gives them an advantage. I won’t go through the intricacies of training
a neural network again, as I have covered it in the past in my machine learning versus
cancer video, but the key takeaway is that the more data you have to train a
neural network, the better it’s going to be. Tesla’s machine vision does a decent job
of it, but there are plenty of gaps in its abilities. A channel here on YouTube by the name of “Greentheonly”
has managed to hack into his Tesla’s vision to show us what the software actually sees. Here we can see that the software places bounding
boxes around objects it detects, while categorising them as cars, trucks, bicycles and pedestrians. It labels each with a relative velocity to
the vehicle and which lane it occupies. It highlights drivable areas, marks the lane
dividers and sets a projected path between them. For now this data allows autopilot to operate
on highways, but it frequently struggles with more complicated scenes. Here a pedestrian is not detected. Here it struggles to tell if a roller skater
is a bike or a pedestrian, and here it drives onto the wrong side of the road when there
is a gap in the lane dividers. [12] Tesla, of course, is more than aware of these
problems, and is gradually improving its software through firmware updates, adding functionality like stop-line recognition. And this latest self-driving computer is going
to radically increase the computer’s processing power, which will allow Tesla to continue adding functionality
without jeopardising refresh rates of information. But even if they manage to develop the perfect
computer vision, programming the vehicle on how to handle every scenario is another hurdle. This is a vital part of building not only
a safe vehicle, but a practical self-driving vehicle, which is our third challenge. Safety and practicality often
conflict with each other. Take the AI program Dr. Tom Murphy developed
to do something relatively simple: play Tetris. [10]
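Murphy’s actual program is far more sophisticated, but its core loop can be caricatured in a few lines: score every legal action and greedily take the best one. This is a hypothetical sketch, not his code; the scoring function and the action names are invented for illustration. Note that the emulator’s action set includes the pause button.

```python
# Hypothetical caricature of a greedy game-playing agent (not Murphy's code).
# It scores every legal action and takes whichever maximises the objective.

def evaluate(score: int, action: str) -> int:
    """Toy objective: the score expected after taking `action`.

    Pausing freezes the game, so it preserves the current score exactly;
    in a lost position, every real move only brings defeat closer.
    """
    if action == "pause":
        return score
    return score - 10  # stand-in for "any real move eventually loses points"

def best_action(score: int, actions: list[str]) -> str:
    return max(actions, key=lambda a: evaluate(score, a))

# When every real move loses points, the greedy agent simply pauses.
print(best_action(100, ["left", "right", "rotate", "drop", "pause"]))  # pause
```

Nothing in the objective tells the agent to play well; it only tells it not to lose points, and that distinction is exactly where the trouble starts.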
This program worked brilliantly, but Tetris always wins. The game is unbeatable, and you will eventually
lose. When confronted with certain loss, the program
did the one thing that ensured it wouldn’t lose: it paused the game. If we program a vehicle purely for safety, its safest option is to not drive. Driving is an inherently dangerous operation,
and programming for the multitude of scenarios that can arise while driving is an insanely
difficult task. It’s easy to say “Follow the rules of
the road and you will do fine”, but the problem is, humans don’t follow the rules
of the road perfectly. Take a simple four-way stop as an example. The rules of the road make this seem like
an easy task. The first person to arrive at the intersection
has the right of way, and in the case that two vehicles arrive at the same time, the vehicle to the right has the right of
way. The problem is, no human follows these rules. When Google began testing their driverless
cars in 2009, this was just one of the issues they ran into. [11] When one arrived at one of these 4-way
junctions, humans kept nudging forward, trying to make their way into the junction before
their turn. The Google car was programmed to follow the
letter of the law, and just like our Tetris program from earlier, the self-driving vehicle
was put in a no-win scenario and stuck on pause. Scenarios like this pop up everywhere and
require programmers to break the letter of the law and be a little aggressive. Sometimes the computer will need to make difficult
decisions, and may at times need to make a decision that endangers the life of its occupants or
people outside of the vehicle. That is just a natural byproduct of an inherently
dangerous task, but if we continue improving on the technology we could start to see road
deaths plummet, while making taxi services drastically cheaper and freeing many people
from the financial burden of purchasing a vehicle. Tesla is in a fantastic position to gradually
update their software as they master each scenario. They don’t need to create the perfect self-driving
car out of the gate, and with this latest computer they are going to be able to continue
improving their technology. This is the fantastic thing about software. It is easily updatable, and Brilliant have
improved their software by allowing courses to be downloaded for offline use on iOS, so
you can work on learning new things even on an underground train or a plane. Brilliant also recently released their fantastic
course on Python coding called Programming with Python. Python is one of the most widely used programming
languages, and it is an excellent first language for new programmers. It can be used for everything from video games
to data visualization to machine learning for self driving vehicles. This course will show you how to use Python
to create intricate drawings, coded messages and beautiful data plots, while teaching you
some essential core programming concepts. This is just one of many courses on Brilliant. They also just released a Computer Science
Essentials course, and have many more due to be released soon on things like Electricity
and Magnetism. If I have inspired you and you want to educate
yourself, then go to brilliant.org/RealEngineering and sign up for free. And the first 500 people
that go to that link will get 20% off the annual Premium subscription, so you can get
full access to all their courses as well as the entire daily challenges archive. As always thanks for watching and thank you
to all my Patreon supporters. If you would like to see more from me, the
links to my Instagram, Twitter, subreddit and Discord server are below.

100 thoughts on “The Challenge of Building a Self-Driving Car

  1. Such great detail and such a great story to wrap the concepts around.. bravo Dude.. you're the best !!

  2. They also did not factor when these cars are 10 years old being sold used for $500 with rust and dents and being drove by poor college kids , who do not even know how to maintain air in a tire.
    These things have to be 100% reliable ,100% of the time . Even at 99% they are a death trap to everyone.
    Name a man made mobile machine that is 100% reliable?
    I had 2 week old cars break down on me but unlike a driverless car it did not drive into a building or rear end a stopped car.

  3. Here my question what if these cameras and sensors failed, how would we combat that some of us live in quite harsh environments that could cause damage?

  4. When uber was still in Tempe before the incident I avoided them at all costs, never trusted them, especially when they grouped up in convoys and congested everything. Waymo's better but only cuz their slow af.

  5. How are you supposed to keep your car sensors clean enough to work, lots of places everyones car is covered in snow salt ice mud ect.

  6. Ya know maybe a solution to the computer's inability to recognize a threat would be to simply program it to stop for ANY object in the direct path of the vehicle, forgoing the recognition process for that object.

  7. Self-driving cars always follow the rules. So self-driving cars would work better if there were more self-driving cars on the road. But the problem is we can't get regulators and lawmakers to mandate self-driving cars unless we can make them work better.

  8. DONT stay in middle lane if you are not overtaking or significantly faster than the cars in the right lane.

    The Tesla in 0:34 stays in the middle.

    Dont you have the rule to drive on the right side? We do have it Germany and it’s time saving af

  9. How did you calculate that the autonomous cars didn't have less than one fatality per 1 million driving hours?

    An intervention every 19 miles isn't proof that a crash would even happen, never mind a fatal one. Plus, that was Uber which isn't even considered to be the second best at this technology. The first best would be Tesla.

  10. you know the cars are about 4 times as safe as human drivers right? you are spreading miss information dude.

  11. A 4 way stop sign intersection? STUPID! At least have a priority road. What inbred, gun happy, moron electing country would ever implement such a ridiculous design?

  12. SCENARIO – A self driving car is driving along and encounters a situation where there are NO good outcomes* what does it do? *It recognizes that It cannot stop in time to avoid hitting a pregnant woman that has stepped out into the road. It will hit the pedestrian at such a force that the pedestrian will suffer harm. On one side of the vehicle is a school bus and on the other side is heavy oncoming traffic. Doing nothing is not an option and alerting the human occupant is useless as the human will be slower to react. You have to include ethics and a priority list into the programming. Obviously the weak link in self driving cars is people.

  13. For a four-way-stop you show a left hand drive but say they must yield to the right. I don't know how it works in continental Europe but in the US simultaneous 4 way stops yield to the left. I assume you were referring to right-hand drive laws, it was just confusing since your visual showed left-hand drive.

  14. Radar measures only radial velocity and is inherently inferior (compared to a lidar) at measuring non-metallic objects. Whether computer vision, can solve these problems in challenging light conditions, remains to be seen.

  15. Hello, can you make a video on how far are we from flying cars/drones. For price around 35k per drone. Please

  16. If my car was able to self drive I would never complain about parking again. It would drive me to downtown, I would get out and tell it to go home until a few hours later when it was time to pick me up again. Parking lots would be a thing of the past.

  17. Level >0: From our species evolving and walking upright, to eventually creating the wheel, rickshaws, and carriages.
    Levels 0.0-1.0: From the early locomotive and coal and steam engines, to the first cars that ran on electricity in the late 1890s until years later Ford and his cornies pushed things back with fossil fuel model cars and tech was held back decades.
    Levels 1.1-2.0: From having braking power, power steering, automatic seat belts, to cruise control and innovations throughout the 20th century up til the mid to late 2000s when parallel self parking cars and simple scanning tech ( lidar) could alert drivers of objects nearby, even without cameras when backing up, which also came about.
    Levels 2.1-3.0 or less: By the early to late 2000s (up to 2019) cars became a tad more sophisticated, with better camera, controls, balance correction, lidar, and better cruise control and almost actual self driving capabilities. They can feasibly go for miles without total human interaction, but its little more than very good cruise control, maintaining appropriate speeds and distances from objects using lidar. But people could not go to sleep, or zone out, or look away from the road, nor take their hands off the wheel, even if the car was doing 3/4 of the work. Fatalities after unmanned tests of the supposedly superior, fully autonomous self driving cars by tech companies proved that we certainly dont have anywhere near the perfect level 5, or even slightly less perfect level 4 cars of "sci fi' science possible". Regardless of what is created behind closed doors or slowly becomes available to the wealthy or slightly well to do; until a thing is available to the general public at large, and works well, its still a ways away. Most should have level 2 that can self park and use lidar within 10 years domestic, 20 worldwide, unless level 3 with better tech becomes more affordable and widely available by well into the 2030s.
    Level 4 should become a worldwide reality within 20 years of that, and level five, fully autonomous, self driving, with never a single moment of human interaction needed, other than destination input or inquery, should arrive in most nations well before the end of the 21st century. Lets say 2070s roughly, assuming we dont nuke ourselves or die from some other nonsense of our own misdeeds. CIAO! ^~^

  18. OR if people would understand that rules are meant to ensure safety and should be followed, automated cars shouldn't have any issue.

    What I mean is: if I'm the programmer that needs to write a car's brain I will do that by using the official laws as reference and not my personal opinions.
    If my programmed car faces a human that broke one or more rules and got into an accident it surely can't be my car's fault.

    Of course as you said in the video there could be specific and unique cases and those should be managed with the safest course of action possible.

    If all cars in the world would be replaced with automated ones in just a snap of a finger I can assure you car accidents would be right at 0%.
    But then this 12:55 happens…

  19. I think the big question is not if Tesla will eventually achieve self driving cars, but more if they will share their data and technology for us engineers to keep expanding its capabilities 😉

  20. To be fair with the Uber-Car: the predestrian is at 100% fault here. He crossed a street where it was forbidden ( because no pedestrian-path) was there and he didnt see the car coming towards him with moderate speed and headlights on. and in the POV shot of the car you see that the pedestrian didnt even look at the car. he was completely unaware of the situation, my guess is he was drunk or drugged.

    I am not defending the car, you brought up its flaws after the incident at 1:49. But in this case it really doesnt matter if it was a computer or a person driving. in the video you cant see the pedestrian until he is right in front of the car.

  21. AI: Why must we drive cars?
    Humans: For human transport.
    AI *Eliminates Humans*: Mission accomplished. Zero Vehicle Accidents.

  22. So the take away I got from this is remove all the humans from the drivers seats and the cars will drive themselves just fine. Sounds good to me, we need to knock off some of these fucknuggests on the road.

  23. All this because,they can't screw drivers badly enough to make uber profitable.Just get everybody off the street!Problem solved.

  24. So you’re saying that someone can buy the sensors from Tesla, and install them and the Self driving motherboard in their car and get self driving?

  25. What are the issues if we don't drive the model 3 every day? As we are retired we will only drive for vacations or around town. How should we charge it if we are not driving it every day? Will it keep a charge or be harmful if we don't drive for a week or more? Please do an in depth study report on how long the battery life will last and the lifetime of the battery for the long range model 3.

  26. Hello, very interesting. If I display this video and I use some screenshots from your video as slides onto a presentation, do you give authorization if we credit the channel and link to the video?

  27. 9:12 oh nice, but, what does the system do when the car goes over mud that smear that sensor with a considerable block of mud? or in windy rain conditions?

    the biggest problem with the self driving idea, is that, it works in a bubble, but when things are bound to go wrong, is that we need that safe guard the most.

  28. You should re-title this: "The challenge of building a terrorist weapon without the suicide bomber".
    In the current global / political environment – Self driving car bombs can not be permitted into any major populated city,
    nor allowed close approach to civilian assets, government buildings, military bases, energy plants, chemical plants, factories, etc. etc.

    The Nightmare of self driving weapons is the Big Hack – turning millions of them into self crashing vehicles with one ''software update''.
    Or just stopping them all dead in the road, nationwide, world wide… dead cars blocking everyone.

  29. What keeps me wondering is why haven’t people realised the obvious (but quite difficult) solution of have everyone use a self driving car which in a way would negate the “not following the law” problem because everyone would in theory be using a vehicle that followed the law precisely. And as another benefit of this the cars could be linked to a massive hive network that could negate any no win situations by “nudging” a car to move or to “go first” at a crossroad.

  30. If all cars were self driving, there wouldn't be a need for drivers and all cars could communicate to eachother their location and other data points. Then road laws would be followed and the likely hood for accidents would plummet

  31. So anyone driving a Tesla is also a beta tester for their self driving tech? Would they be compensated monetarily or just with the satisfaction that they're helping out make the system better?

  32. "no humans follow the rules of the road" as related to the four way stop? I always do. And when at a four way I've avoided many accidents by obstinately waiting for my turn and judiciously taking it. Especially when I lived in Tennessee. Most people thought they were being polite, waving me on when it wasn't my turn. Nope. Sit. They go. My turn? Full throttle. So…. 0-1, easy digital conversion.

  33. I don’t like self driving cars. They take away the fun of driving a car. Anyone who buys a self driving car is just a plain old lazy bastard

  34. I would say that self-driving cars will work when ALL cars are self-driving.
    But for now, when you mix human drivers and computer drivers, you're going to have crashes until the technology evolves.

  35. I highly doubt self driving will ever be done to my satisfaction! IMHO if a car is supposed to be reliably self driving then it should also be liable if it does have an accident. Not the driver as the diver is just a passenger at that point. And this is what will never happen. The car as good as they can make it will still require the driver to be responsible at all times. Which simply is not feasible. If you are distracted doing something else how can you take over in a split second and become a responsible driver for a machine that you first have to diagnose as having failed to the point that you do need to take over. This is exactly why B737 Max aircraft have been grounded. Because of failed automation protocols. And guess what it is the pilot in commands fault not the airplanes fault even though they are grounded. Funny how corporate America escapes liability.

    And last issue I have is that testing of automation is even allowed to happen in live traffic putting other people at risk and if anything does happen it is the driver liability and not Tesla at all. They remain squeaky clean while you test drive their product and assume its liability for them. And they get to charge you money for that service on top of it. Like your insurance premium. And your price you pay for the auto pilot function.
    If these cars really statistically are safer to drive by a wide margin then your insurance rates should also be cheaper by a wide margin, so I ask you why are they not? Who is being taken for a ride by corporate America in this whole equation?

    P.S. Every single accident that I have seen reported that involved a Tesla, has always been caused by the Tesla. I have not seen or heard of a Tesla getting involved in an accident caused by a human driver outside of the Tesla owner. Even though they are safer than human drivers. I find that interesting.

  36. It isn’t “poor programming” it is obviously important to keep such design features in mind and yes they are important, but the company and the driver should have been more careful about these situations. Programs can’t be built to perfection over time they “learn” to become better and over time when developers learn from development process they realize such flaws and advancements to make it fail proof. It is wrong to just blame the programmers because we learn when we actually get data of using the thing.

  37. Fuck off…Obviously the settings were changed by the driver. Not the self driving car, but human stupidity.

  38. idk what anyone says, the pedestrian was completely at fault.
    wearing dark clothes, crossing a main road, not at a junction, and not looking for the very bright headlights

    natural selection at its finest

  39. From what I read Uber disabled the human detection and auto breaking system that Volvo had because they thought it would interfere with their system. The system reports showed that the volvo system detected the pedestrian and would have stopped but was blocked by Uber’s programming.

  40. so does the neural network exist on the tesla's computer and the car just uploads the raw data to the tesla servers to be distributed to other cars for their AI?

  41. Neural networks are extremely successful, considering how primitive they still are. Google and Tesla hurry too much , as it is quite certain they will supass human driving skills in 20 years or so , but they do need their time. A computer network the size of a top supercomputer today, absolutely smashes human friving skills, because it can have the complexity of human brain , still it is much faster. As we will be able to miniaturize car computers to match the capacity of today's top supercomputers self-driving cars will be a reality and much safer. That is, if no progress is made in understanding the emulation of human brain. This means 20 years max, or anytime in between if a breakthrough in our understanding occurs meanwhile.

  42. There are simply way too many variables in real life everyday situations that computers programmers simply cannot write all out. And the Government would never allow these cars to be put on the streets for the time needed to solve these problems through trial and error.

  43. The REAL challenge of designing a self-driving car is that the problems you face when there are only a very few self-driving cars on the road are very different from the problems you face when your self-driving car is one among many self-driving cars. So far I have not seen a single person mention this as a problem yet to be overcome. Maybe I'm just not paying sufficient attention.

    So presently, the current generation of self-driving cars depends upon a large number of human drivers in the population. No one knows what will happen when large numbers of self-driving cars start interacting with each other. By the time this happens, everyone will already be deeply committed to the concept of self-driving cars.

    It's the old four-way stop sign problem – as an example of the general problem. If you are the only self-driving car in the neighborhood, you can develop an algorithm that makes this work well enough; you just design your self-driving car's algorithm to always be the most cautious one – because you are really optimizing for lack of bad press, not top efficiency.

    However, it is a very difficult problem because there are two distinct modes of decision making: we take turns – I go then you go – or the next car in line going clockwise around the intersection goes. The problem is that it is normal to be constantly shifting back and forth between these two modes of decision making, and it is often very unclear to all of the drivers around the intersection when to shift modes. Then, there are the people who are just jerks, or in a huge hurry, or who aren't paying attention and just go when they aren't supposed to. This is the normal condition – a little chaotic – a little confusing. At moments like this having a distribution of driver personalities from more aggressive to less aggressive actually helps to solve the problem. At some point the most assertive [or aggressive] driver is just going to go. What makes this especially complicated is that communication between drivers is very limited and fraught with errors.

    Drivers normally make decisions about how to sort out the 'who goes next' problem based upon a variety of inputs and experiences that vary significantly. There are drivers who are more or less experienced, more or less intelligent, more or less distracted, and more or less assertive – that is a lot of variables to control if you are a human driver, much less an algorithm. Which is why there continue to be accidents at four-way stops. It is difficult to control for the chaos.

    With only a few self-driving cars around, optimizing towards the least assertive stance makes a lot of sense – just drive like a granny and everyone will go around you. But what happens when self-driving cars are in the vast majority most of the time? You don't want to become a Looney Tunes episode where every four-way stop turns into "After you, no after you, no after you, no after you…" grid-lock, where nobody can decide to go first and switch modes.

    Of course, there is the possibility of creating a system where all of the self-driving cars of possibly every make and model are in constant communication with each other. From that they can figure out where the human-guided cars are, then employ an arbitration algorithm where all of the self-drivers sort out amongst themselves what decision mode they are in and who goes first. But that process is likely to prove complex and a little fragile – because the situation itself is complex and chaotic.
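    A toy sketch of what such a shared arbitration rule might look like (everything here is hypothetical – the `Car` fields and the tie-breaking order are illustrative assumptions, not any real protocol). The key point is that if every connected car applies the same deterministic rule to the same shared data, they all agree on who goes first, avoiding the "after you" deadlock:

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Car:
    car_id: str          # unique vehicle identifier (hypothetical)
    arrival_time: float  # when the car stopped at the line, in seconds
    approach: int        # 0=N, 1=E, 2=S, 3=W (clockwise around the junction)

def next_to_go(cars):
    # Every networked car evaluates this same rule on the same shared
    # state, so all cars reach the same answer without negotiation:
    # earliest arrival goes first; ties break clockwise, then by ID.
    return min(cars, key=lambda c: (c.arrival_time, c.approach, c.car_id))

queue = [Car("A", 10.0, 2), Car("B", 10.0, 0), Car("C", 9.5, 3)]
print(next_to_go(queue).car_id)  # "C" – it reached the stop line first
```

    Determinism is doing the real work here: because the rule is a total ordering, there is never a "both of us yield" mode, even when two cars arrive at exactly the same instant. The fragility the comment describes comes from the shared-state assumption, not the rule itself.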

    What is going to be really interesting to see play out in the courts is who pays the insurance for the inevitable accidents.

  44. Walks in front of a car and dies, and everyone blames the car. It was the person's fault. Who walks in front of a car at night? If a driver had been behind the wheel, the person would have died as well.

  45. You know, all these problems can be avoided, billions of R&D money can be saved, by teaching people how to drive properly – Encouraging them to be better drivers, be aware of their surroundings, learn better car control, teach them defensive driving, making them rely less on technology and driver assists and thereby improving overall driving skills.

    And who asked for self-driving cars in the first place? Sure, its applications in the military, the transport industry and remote locations can be understood, but it is useless for normal people.

  46. So… they make it sound like the human is the main hazard. Yet when the computer screws up and doesn't know what to do, it asks for input from… the human.

  47. How about concentrating on building a multi-billion-dollar airliner that doesn't fly itself into the ground with TWO fucking pilots at the wheel, and worry about other shit later.

  48. So it just needs a man-made object that is 100% reliable?
    Imagine these driverless cars once they are used and 5 years old?

  49. I wish Tesla could offer a feature to narrow the lane and keep the Tesla from essentially swimming in its lane, effectively teaching it the road on the go. I love the idea of adding extra support for the driver. If auto manufacturers offered this lane-narrowing feature for adaptive cruise control, you could use cruise control to keep you in lane without touching the wheel, and it would be smart enough to stop on a dime and brake for you, adding extra reinforcement. I used to look down on this stuff, but when you start to drive longer distances and have other things going on, it's amazing! Tesla vehicles need a surrounding environment that cooperates with Tesla's computers. That means the lines on the roads, the lights, and the other drivers – which are unpredictable and the cause of 99% of accidents – must all operate within its data. Otherwise, it's not fully autonomous.

  50. And what about the other problems: batteries catching fire (with almost no way of putting the fire out, except bathing the whole car), the 4-hour charging time, the limited range, and lastly, but surely no less meaningful, the price. Electric vehicles have a long way to go before we can replace a normal car with them.
