How the Autonomous Car Works: A Technology Overview

How does an autonomous vehicle work? How does it understand what it sees? How does it know where to go? Oh, that majestic metallic creature, how does it know? In this post, I hope to give you a foundational understanding of the main mechanics of the autonomous car. After noticing that the only decent overviews of autonomous car technology were tedious technical articles written by academics, I decided to develop a more accessible overview for anyone interested in the topic from a business perspective who wants a better grasp of the underlying technology.

Let’s return to the original question: how does the autonomous vehicle work? The best place to start is the functional architecture of the autonomous car. A functional architecture is an architectural model that identifies a system’s functions and their interactions and shows how they work together to achieve some mission goal. A functional architecture of an autonomous car is like an anatomy map of the human body. Whereas an anatomy map illustrates the different organs and the various ways they interact to keep the body alive, the functional architecture of an autonomous car illustrates how the major components of the car work together to achieve the mission of self-driving without violating any legal or ethical codes.

There is no consensus among academics and industry experts on the “correct” functional architecture for the autonomous car. Nonetheless, we can broadly categorize the main components of the autonomous vehicle, like those of any other machine, into hardware and software. These two categories can be further divided: hardware splits broadly into sensors, Vehicle-to-Vehicle (V2V) and Vehicle-to-Infrastructure (V2I) technology, and actuators, while software splits broadly into processes for perception, planning, and control. I’ll talk more about each of these sub-categories in future posts, but for now, let’s start with the basics. I will continue to use the metaphor of the human body to illustrate these concepts.

A Functional Architecture for the Autonomous Vehicle
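To make the split concrete, here is a minimal sketch of that same functional architecture as a Python data structure. The component names are illustrative choices of mine, not taken from any particular AV software stack.

```python
# A minimal sketch of the functional architecture described above.
# Component names are illustrative, not from any real AV stack.
functional_architecture = {
    "hardware": {
        "sensors": ["GPS/IMU", "camera", "LiDAR", "radar"],
        "v2x": ["V2V", "V2I"],
        "actuators": ["steering", "throttle", "brakes"],
    },
    "software": {
        "perception": "turn raw sensor data into meaning",
        "planning": "decide what to do next",
        "control": "translate decisions into actuator commands",
    },
}

for layer, components in functional_architecture.items():
    print(layer, "->", ", ".join(components))
```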

Autonomous Vehicle Hardware

The hardware components of the autonomous car are analogous to the physical parts of the human body, which allow us to interact with stimuli from the outside world. The hardware components enable the car to complete such tasks as seeing (through sensors), communicating (through V2X technology), and moving (through actuators).

  • Sensors: Sensors are the components that allow the autonomous vehicle to take in raw information about the environment. Sensors are like your eyes, which enable you to understand what’s going on in your surroundings. The main sensors in autonomous cars include GPS and inertial measurement units (IMUs), cameras, LiDAR, and radar. Each of these sensors has its respective advantages and disadvantages. LiDAR, for example, is great at capturing information in various types of ambient light (whether night or day), whereas cameras may struggle with shadows or other poor lighting conditions. Accordingly, most autonomous vehicles combine the readings of multiple sensor types to add redundancy and compensate for the weaknesses of individual sensors, in a process called sensor fusion (see the sketch after this list).
  • V2X technology (V2V and V2I technology): V2V and V2I components enable the autonomous vehicle to send information to and receive information from other machine agents in the environment, such as a traffic light transmitting that it has turned green or an oncoming car broadcasting a warning. You can think of V2X technology as akin to your mouth and ears: your mouth allows you to communicate with other humans, and your ears allow you to understand what other humans are communicating to you.
  • Actuators: Actuators are the components of a machine responsible for controlling and moving the system. Actuators are like the muscles of your body, responding to electrochemical signals from your brain so that you can move parts such as your arm or leg.
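To illustrate the idea behind sensor fusion, here is a toy Python sketch of inverse-variance weighting, the intuition behind Kalman-style fusion: each measurement is weighted by how much we trust it. The sensor values and variances below are made up for illustration.

```python
def fuse(estimate_a: float, var_a: float,
         estimate_b: float, var_b: float) -> tuple[float, float]:
    """Combine two noisy measurements of the same quantity,
    weighting each by the inverse of its variance (its 'trust')."""
    w_a = 1.0 / var_a
    w_b = 1.0 / var_b
    fused = (w_a * estimate_a + w_b * estimate_b) / (w_a + w_b)
    fused_var = 1.0 / (w_a + w_b)
    return fused, fused_var

# A camera estimates an obstacle at 10.2 m but is noisy in low light;
# LiDAR says 9.8 m with much tighter error bounds (illustrative numbers).
distance, variance = fuse(10.2, 1.0, 9.8, 0.1)
print(f"fused distance: {distance:.2f} m (variance {variance:.3f})")
```

Because the LiDAR reading carries far less uncertainty here, the fused estimate lands much closer to 9.8 m than to 10.2 m, which is exactly the point: the more trustworthy sensor dominates, while the other still contributes.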

Autonomous Vehicle Software

Whereas the hardware components of the autonomous car enable the car to perform such functions as seeing, communicating, and moving, the software is like the brain: it processes information about the environment so that the car knows what action to take, whether to move, stop, slow down, and so on. Autonomous vehicle software can be categorized into three systems: perception, planning, and control.

  • Perception: The perception system refers to the ability of the autonomous vehicle to understand what the raw information coming in through the sensors and V2X components means. It enables the car to determine from a given picture frame whether a certain object is another car, a pedestrian, or something else entirely. This process is analogous to how our brains turn the information we obtain through sight into meaning. The photoreceptors of our eyes (the sensors) absorb light waves emanating from the environment and convert them into electrochemical signals. Networks of neurons pass these signals all the way back to the visual cortex, where the brain processes what they mean. In this way, our brain can understand whether a certain light pattern hitting our retina represents a chair, a plant, or another person.
  • Planning: The planning system refers to the ability of the autonomous vehicle to make decisions in service of higher-order goals. This is how the autonomous vehicle knows what to do in a given situation: whether to stop, go, slow down, and so on. The planning system combines the processed information about the environment (i.e., from the sensors and V2X components) with established policies and knowledge about how to navigate that environment (e.g., do not run over pedestrians; slow down when approaching a stop sign) so that the car can determine what action to take (e.g., overtake another car, or plot a route to the destination). Just like the planning system in the autonomous car, the processes in the frontal lobe of the human brain enable us to reason and make decisions, such as what to wear in the morning or what to do for fun on the weekend.
  • Control: The control system converts the intentions and goals derived from the planning system into actions. Here the control system tells the hardware (the actuators) the inputs that will produce the desired motions. For example, an autonomous vehicle, knowing that it should slow down when approaching a red light, translates this knowledge into the action of applying the brakes. In humans, the cerebellum plays the analogous role: it is responsible for the important function of motor control, enabling us, for example, to chew when the intention is to eat. (A toy sketch of the full perception-planning-control loop follows this list.)
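To tie the three software systems together, here is a deliberately simplified sketch of a perception-planning-control loop. All names here (WorldModel, perceive, plan, control) are my own inventions for this post, not from any real AV codebase.

```python
from dataclasses import dataclass

@dataclass
class WorldModel:
    """What perception currently believes about the scene."""
    light_is_green: bool
    pedestrian_ahead: bool

def perceive(raw_frame: dict) -> WorldModel:
    # Perception: turn raw readings into meaning. A real system would
    # run detection and classification models here, not dict lookups.
    return WorldModel(
        light_is_green=raw_frame.get("light") == "green",
        pedestrian_ahead=bool(raw_frame.get("pedestrian", False)),
    )

def plan(world: WorldModel) -> str:
    # Planning: combine the world model with policies to pick an action.
    # The pedestrian-safety policy overrides the "go on green" policy.
    if world.pedestrian_ahead or not world.light_is_green:
        return "hold"
    return "go"

def control(decision: str) -> dict:
    # Control: translate the decision into low-level actuator commands.
    return {"brakes": decision == "hold",
            "throttle": 0.2 if decision == "go" else 0.0}

# One pass through the loop: red light, no pedestrian -> stay braked.
print(control(plan(perceive({"light": "red"}))))
# {'brakes': True, 'throttle': 0.0}
```

The point is the shape of the loop: raw data becomes meaning, meaning plus policy becomes a decision, and a decision becomes actuator commands.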

How They All Work Together

Now that we have a good understanding of the main components of an autonomous vehicle, let’s review a scenario of how they all work together.

Scenario: The car has stopped at an intersection in front of the red light.

Mission: The car should move forward when the traffic light turns green without violating any traffic laws or hurting other beings.

  1. Sensors: The car’s sensors take in raw information about the environment. The car does not yet know what this information means; that understanding comes at the perception stage.
  2. V2X technology: The traffic light communicates to the car that it has just turned green. Other surrounding cars communicate their position in the environment.
  3. Perception Stage: The vehicle turns the raw information coming in from the sensors into actual meaning. The camera information reveals that the light has just turned green and that a pedestrian is stepping into the street in front of the vehicle.
  4. Planning Stage: The vehicle combines the sensor information processed during the perception stage with the incoming V2X information to determine how to behave. The car’s general policy is to move when the light turns green; however, it has an overriding policy that it must not run over pedestrians. What should the car do? Based on the combination of environmental information and its operating policies, the car decides not to move (a toy version of this decision logic is sketched after this walkthrough).
  5. Control Stage: The car must translate its decision not to move into an action. In this case, the action (or rather, inaction) is to stay still and keep the brakes applied.
  6. Actuators: The car keeps the brakes applied, the end result of the decision-making process above.
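Expressed as code, the heart of this scenario is simply a priority ordering of policies. The following is an illustrative sketch, not how a production planner is actually written:

```python
# Toy decision logic for the intersection scenario; rules are checked
# in priority order, so pedestrian safety overrides "go on green".
def decide(light_is_green: bool, pedestrian_crossing: bool) -> str:
    if pedestrian_crossing:          # overriding policy
        return "keep brakes applied"
    if light_is_green:               # general policy
        return "release brakes and proceed"
    return "keep brakes applied"

print(decide(light_is_green=True, pedestrian_crossing=True))
# -> keep brakes applied
```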

As you can see, the technology behind the autonomous vehicle is not so difficult to understand when boiled down to its major concepts.

Stay tuned for future posts where I dive a little deeper into each of these areas.

Feel free to add me and message me via LinkedIn. Always happy to exchange thoughts: https://www.linkedin.com/in/samantha-huang-10375b106/

Disclaimer: This blog represents solely my own opinions, not those of my employer.

