Self Driving Cars

From SI410

Self-driving cars, also known as autonomous vehicles (AVs), are vehicles capable of navigating environments with no human input. Many different companies are advancing this technology, and approaches vary by institution. Self-driving cars have the potential to improve quality of life and safety; however, questions of control, liability, and the safety of the technology and the car itself raise many ethical dilemmas in society [1].

Waymo's self driving prototypes

Self-Driving Levels

The US National Highway Traffic Safety Administration (NHTSA) released a guide to the six levels of autonomy in self-driving cars in order to push forward and standardize autonomous vehicle testing [2].

    • Level Zero - Vehicles with no driver assistance features at all, not even basic cruise control. Level zero vehicles are no longer produced in mass quantities.
    • Level One - The first level of self-driving is what the majority of drivers are used to. Any single driver assistance feature, such as cruise control, qualifies as level one when not accompanied by additional features.
    • Level Two - Level two combines multiple driver assistance features, such as adaptive cruise control and lane keeping. This is where technologies like Tesla's Autopilot, Cadillac's Super Cruise, and comma.ai's openpilot currently sit.
    • Level Three - At level three, the vehicle begins to take on liability as the driver. This level still requires human input, but the human no longer has to remain aware of their surroundings as in the previous levels; they only need to be ready to take control of the vehicle in a timely manner when alerted by it.
    • Level Four - The vehicle is capable of driving in certain environments without any human input. This is the first level at which a human is not necessary in the vehicle, and it is commonly referred to as full self-driving.
    • Level Five - The final level is when the vehicle is capable of all driving functions in every environment without any human input.
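The taxonomy above can be summarized in a small lookup table. The following is an illustrative sketch only; the descriptions are paraphrased from the list above, and the `human_must_stay_alert` helper captures the key behavioral break between levels two and three:

```python
# Minimal sketch of the six NHTSA autonomy levels described above.
# Descriptions are paraphrased for illustration.
SELF_DRIVING_LEVELS = {
    0: "No driver assistance features at all",
    1: "A single assistance feature, e.g. basic cruise control",
    2: "Multiple combined features, e.g. adaptive cruise control plus lane keep",
    3: "Vehicle drives itself; human takes over only when alerted",
    4: "No human input needed in certain environments",
    5: "No human input needed in any environment",
}

def human_must_stay_alert(level: int) -> bool:
    """At levels 0-2 the human must continuously watch the road;
    from level 3 upward the vehicle alerts the driver when needed."""
    return level <= 2
```

The dividing line encoded here (level three and above) is the point at which the article notes liability begins to shift from the human to the vehicle.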

Self-Driving Technology


Waymo's hardware on a Jaguar i-Pace

Companies in the self-driving space are building different hardware stacks. While most opt for LIDAR systems on top of cameras, radar, and sonar, Elon Musk and Tesla have made headlines by claiming self-driving is achievable without LIDAR. This has led many industry leaders to criticize the hardware approach Tesla is taking, but Musk has spoken confidently about his hardware stack being the simplest solution, going as far as to say that LIDAR is a crutch and that anyone relying on LIDAR is doomed in the race to full self-driving.[3]


Similar to the hardware stacks, the software stacks in self-driving differ between companies. Most companies, with the exception of comma.ai and Tesla, build maps of every road with GPS and LIDAR systems in order to localize themselves within the world. Once in a familiar, mapped environment, the vehicle knows where static road objects are and adds the moving vehicles and debris in order to navigate to a destination. comma.ai and Tesla use a different approach with no GPS or LIDAR mapping: they rely entirely on visual input from cameras, with radar to help with distance measurements. They use machine learning and large data sets to teach a neural network how to drive without explicitly giving it rules on how driving is performed. Both companies use the data collected by customers who use their software to train these neural networks on image recognition and driving techniques.
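The map-based approach described above can be illustrated with a deliberately tiny sketch: the vehicle carries a prebuilt map of static landmarks and recovers its own position by matching a sensed landmark against that map. Real systems fuse GPS, LIDAR scan matching, and much more; the landmark names and coordinates below are hypothetical.

```python
# Hypothetical prebuilt map: landmark id -> (x, y) world coordinates.
# A real HD map would hold lane geometry, signs, curbs, etc.
PREBUILT_MAP = {
    "stop_sign_17": (10.0, 4.0),
    "lamp_post_3": (2.0, 9.0),
}

def localize(landmark_id: str, relative_offset: tuple[float, float]) -> tuple[float, float]:
    """Given a recognized landmark and its offset relative to the car,
    recover the car's world position from the prebuilt map."""
    lx, ly = PREBUILT_MAP[landmark_id]
    dx, dy = relative_offset
    # The car is at the landmark's position minus the sensed offset.
    return (lx - dx, ly - dy)
```

For example, if the car senses `stop_sign_17` at offset `(3.0, 1.0)`, it can conclude it is at world position `(7.0, 3.0)`. The vision-only approach skips this step entirely and maps camera input directly to driving decisions.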

History of Self-Driving Cars

Level zero self-driving was first put into production in 1769 by Nicolas-Joseph Cugnot.[4] His vehicle is also considered the first car ever built, and since it had no driver assistance features, it is the first level zero self-driving vehicle.

Level one self-driving technology was first sold in the 1910s by Peerless. Beginning in 1958 and through the 1960s, Cadillac and AMC rolled out similar systems in their automatic transmission vehicles. After the 1973 oil crisis, cruise control systems entered the mainstream in many more vehicles spanning all the major automakers.[5]

Level two was achieved on some level in 1958 by General Motors.[6] GM built a prototype road with embedded electronics that allowed the front end of the car to steer around the bends and curves of the road; combined with cruise control, this made it the first level two system. More recently, automakers and technology companies have been including driver assistance packages with lane keep assistance and adaptive cruise control. Some automakers like Tesla also include features that allow automated lane changing, as well as automated driving in parking lots and other private property under human oversight, which places them within the second level of self-driving.

Level three has recently been rolling out on Audi A8 vehicles.[7] This technology allows the vehicle to perform all driving functions during a traffic jam, with the driver no longer needing to pay close attention to the surroundings. The driver must remain ready to take control if the vehicle alerts them to a potential issue, or when the traffic jam clears and regular highway driving resumes.

Level four has been deployed without human drivers in parts of Arizona and other states. The vehicles require no human input during normal operation and generally drive without anyone in the driver's seat. This is available to the public as a transportation service in select regions of the western United States.

Level five has not yet been developed by any person or company.

The Trolley Problem

The General Trolley Problem

The trolley problem is an ethical thought experiment. In the scenario, a trolley is rolling down a track toward an upcoming split: on the side track there is one person, while straight ahead, where the trolley is heading, there are multiple people on the track. A person standing at the split is able to redirect the trolley onto the side track, but is unable to stop the trolley and unable to get anyone off the tracks. Should the person switch the tracks and kill the one person on the side track, or do nothing and let the trolley kill the people straight ahead?

The ethical decision of having the power over who dies is the issue. Should you intervene and minimize deaths or let the trolley do what it was going to do in order to avoid the issue? Is doing nothing avoiding the issue or choosing to kill the people on the straight portion of the track?

The Trolley Problem in Self-Driving Cars

The trolley problem also applies to self-driving cars, and industry and thought leaders in automobiles, self-driving, and ethics have yet to reach a consensus on how to deal with it. Suppose a self-driving car comes across a scenario where multiple people are straight ahead in the road and fewer people are off to the side. Should the self-driving car swerve and hit the smaller number of people, or stay on course and hit the people in the road? Should the car take age into account? Should the car value the passengers of the self-driving vehicle more than the people outside it?
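The difficulty can be made concrete with a deliberately naive sketch of a "minimize casualties" policy. The point of the sketch is that the control logic is trivial; all of the unresolved ethical questions above (should age matter? should passengers be weighted more?) reappear as choices hidden inside the cost function. The scenario data below is hypothetical.

```python
# Naive harm-minimizing policy: pick the maneuver with the fewest
# people in its path. The ethics live entirely in this cost function;
# weighting by age or passenger status would change the outcome.
def casualties(option: dict) -> int:
    """Cost of a candidate maneuver: number of people harmed."""
    return option["people_in_path"]

def choose_maneuver(options: list[dict]) -> dict:
    """Select the candidate maneuver with the lowest cost."""
    return min(options, key=casualties)

# A hypothetical trolley-style scenario.
scenario = [
    {"name": "stay_on_course", "people_in_path": 3},
    {"name": "swerve", "people_in_path": 1},
]
```

Under this cost function the car always swerves toward the smaller group, which is exactly the utilitarian answer that many people find uncomfortable when stated as the trolley problem.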

More Ethical Aspects of the Technical Challenges in Self-Driving Cars

  • Safety - This is the most fundamental requirement of self-driving vehicles. One ethical challenge is ensuring autonomous cars meet the safety standards applied to all road vehicles. Safety also extends to the hardware and hardware-software systems of the car: the question of whether a manufacturer should choose a cheap or an expensive sensor raises an ethical dilemma between cost-effectiveness and accidents on the road [8].
  • Security - There are eight key principles for vehicle cyber security, as laid out by the UK's Department for Transport [8].
   1. Organizational security is governed, promoted, and owned at a broader level
   2. Security risks are managed and assessed appropriately 
   3. Companies must provide product aftercare and incident response to ensure systems are protected over their lifetime
   4. All organizations involved in the process of making and deploying self-driving cars must work together to enhance the security of the system
   5. Software systems are designed using a defence-in-depth approach 
   6. Software security is managed throughout its lifetime 
   7. Transmission and storage of data in the vehicle is safe and controlled 
   8. System must be designed to be resilient to attacks and respond correctly 
  • Privacy - Collecting more information to make self-driving cars safe and efficient can interfere with the protection of one's data privacy. For instance, using sensors to detect obstacles such as humans in the car's path relies on visual information [8].
  • Trust - This appears in various forms within self-driving cars. There must be trust between hardware and software components of the car and also between the car and the human [8].
  • Transparency - This is necessary within every ethical challenge of self-driving cars. Ethical questions around transparency include: how much information should be disclosed, and to whom? Should the entire ecosystem be transparent with each other? How do we manage intellectual property rights [8]?
  • Responsibility and Accountability - The central ethical dilemma here is how responsibility will be assigned in the case of accidents and incidents [8].


  1. Howard, Daniel. "Public Perceptions of Self-driving Cars: The Case of Berkeley, California"
  2. "How-To Geek: What Are The Different Self-Driving Car "Levels" of Autonomy"
  3. "Ars Technica: Elon Musk: "Anyone relying on lidar is doomed." Experts: Maybe not"
  4. "Wikipedia: History of the automobile"
  5. "Wikipedia: Cruise Control"
  6. "Wired: Autonomous Cars Through the Ages"
  7. "Automotive News: Why Level 3 automated technology has failed to take hold"
  8. Holstein, Tobias. "Ethical and Social Aspects of Self-Driving Cars"