Fear and Self-Driving Cars: Why Are We Afraid of Autonomous Vehicles?

We cannot deny that the technology behind autonomous vehicles (also referred to as AVs or self-driving cars) is something many of us have only dreamed of, so why are some people afraid of them? Granted, some people will buy one simply because they love having the latest and greatest technology, but what is it about these vehicles that may prevent widespread adoption? In truth, there may be several reasons.

Fear of Change

Pulling up next to a self-driving car at a red light is sure to shock more than one driver when he or she realizes the person in the driver’s seat is facing the rear of the vehicle (Motavalli, 2015). It appears odd and shocking, but it is not inherently wrong. It makes us uncomfortable because it is unexpected and goes against the schemata we associate with driving. For users, AVs create cognitive dissonance, forcing a choice between wanting to be in full control and enjoying the excitement of new technology. The struggle between our desire for the familiar and the adventure of trying something new will cause many to avoid or discount the idea of self-driving cars. In short, we are afraid of change, and that fear will not be easily overcome (Jackson, 1967).

As Brian Ladd points out in his book Autophobia: Love and Hate in the Automobile Age, it’s all about perspective. “Horses, trams, trains can collide, smash, kill half the world, and nobody cares. But if an automobile leaves a scratch on an urchin who dances in front of it, or on a drunken carter who is driving without a light…” (Ladd, 2008). Accidents involving autonomous vehicles get overemphasized for two reasons. First, every crash provides engineers and scientists with valuable insight and information; without these incidents, and careful analysis afterward, the technology will not be ready for the real world. Second, the vehicles are new and different in that the main decision-making mechanism is not human. In this instance, novelty is the driving force (Knight, 1996).


Much of the resistance to AVs appears to be robotophobia — the fear of robots. Halpern and Katz (2012, pp. 139–140) list several reasons for this fear:

  • Anthropomorphism – The recognition of human attributes in a machine influences our attitudes toward robotics. This isn’t directly applicable to self-driving cars, but it may well hint at where the “evil” attributes of anything automated come from: because we are unaware of machines’ limits and capabilities, our imaginations fill in the blanks.
  • Cultural differences – “…people might perceive them as incapable of assuming a position of moral equivalence” (p. 140). In terms of AVs, this translates into the classic ethical dilemma often referred to as Foot’s trolley problem (Vance, 2014). If humans struggle to make these decisions, how can we trust a computer to make them? And how do we know it made the right choice?
  • Gender and robotics – Females tend to exhibit more fear toward robotics than males, but less cyberdystopianism (the fear that robots will take over the world). Halpern and Katz suggest this may be because females prefer cooperative modes of relationships rather than competitive ones. If this holds true for self-driving cars, the information could prove vital to the adoption of the technology going forward.
  • Cultural connections – Subjects identifying as Judeo-Christian were more averse to robots in the Halpern and Katz study than subjects identifying with Eastern religions. The authors attribute this to philosophical elements within the cultures. This could be at the crux of the problem in North America, and part of why China appears to be well ahead of North America in terms of technology development and use.
  • Familiarity with technology and fear – Individuals more familiar with technology hold more aversion to robotics and artificial intelligence. Halpern and Katz theorized this may be because this segment of society better understands the capabilities and shortcomings of robotics and computers. Another possibility worth exploring is that technologically inclined individuals are often big dreamers: they know what they’d like the technology to do, but they also know how it could be used for nefarious purposes.

By expanding on the work already done by Halpern and Katz, manufacturers like Google and Tesla could go a long way toward overcoming the fear of autonomous cars, particularly if they use this information during the design process. For example, if they focused their marketing on successful North American businesswomen outside the tech industry, they should hypothetically encounter less resistance. It would also be easier to position the cars as a status symbol, much like Louis Vuitton bags. And if manufacturers include features designed to make women’s lives easier, autonomous cars have the potential to be a worthwhile investment for professionals even if the price tag is considerably higher than current high-end models.

Hackers and Cybersecurity

Security and the technology’s vulnerabilities are also cited as reasons to fear AVs. If hackers could take over vehicles, they could potentially cause the loss of thousands of lives and mass chaos. However, this likely poses no greater risk than smartphones, heating systems, toasters, or anything else on the “Internet of Things”.

First, hackers can already glean a wide range of information with a few simple tricks and easy searches. A little brute force will give a hacker access to pretty much anything they need, and as more objects become connected, the amount of implicit and explicit information available will only grow. There is simply little to gain from taking over vehicles, particularly if other security measures are in place. Second, a single system of fully autonomous cars would have the advantage of sheer numbers.

To go fully autonomous, vehicles would need a central data system and universal or compatible coding. In short, they would need to use the same information database and speak the same language. All experts in the field could then focus on the security and maintenance of a single system rather than each focusing on their own, and the more people working on the same system, the more likely they are to catch problems before they cause massive issues. The system could also include the best features of all the systems currently available. Of course, hackers would have the advantage of only needing to hack one system. However, proper controls, partitions, and security features should limit the damage created by any breach, just as they do on smartphones today.
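To make the idea of partitioning concrete, here is a deliberately simplified sketch; the subsystem names and command lists are hypothetical, not any manufacturer’s actual design. Each partition accepts only its own whitelisted commands, so a breach of a low-stakes subsystem (say, the infotainment unit) cannot issue driving commands:

```python
# Hypothetical partitioned command gateway. Each subsystem accepts only
# the commands on its own whitelist, so compromising one partition does
# not grant control of the others.
ALLOWED_COMMANDS = {
    "infotainment": {"play_music", "set_volume"},
    "climate": {"set_temperature", "fan_speed"},
    "driving": {"set_route", "adjust_speed"},
}

def dispatch(partition: str, command: str) -> bool:
    """Accept a command only if it belongs to the issuing partition."""
    allowed = ALLOWED_COMMANDS.get(partition, set())
    if command in allowed:
        return True   # forwarded to the subsystem
    return False      # rejected (and, ideally, logged for review)

# A compromised infotainment unit cannot reach the driving controls:
print(dispatch("infotainment", "play_music"))    # accepted
print(dispatch("infotainment", "adjust_speed"))  # rejected
```

Real vehicles enforce this kind of separation in hardware and network design rather than a lookup table, but the principle is the same: damage from a breach stops at the partition boundary.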

Privacy Problems

Facebook, Google, smartphones, and many other devices already track every move we make. They aggregate the data and use it for everything from product development to choosing brand ambassadors. And for-profit companies aren’t the only ones using this information.

Recently, the use of GPS data by the Department of Justice in the USA has had serious negative consequences, including false arrest and prosecution (Schultz, 2015). In fact, some argue that GPS doesn’t work well at all (Markowitz, 2014). Anyone who has tried geocaching, on the other hand, holds the opposite opinion; they usually find the hardware makes all the difference, so this could be more the fault of the hardware manufacturer than of GPS technology. So is losing a little more privacy really that detrimental?

Consider the level of privacy we have now. Governments in countries like England, businesses across North America, and public transportation systems around the world use video surveillance that tracks every move everyone makes in the vicinity. In reality, citizens already have very little privacy. AV data can be anonymized and personal identifiers hidden, so what makes autonomous cars so much worse?
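As a rough illustration of how such anonymization might work (the record fields, salt, and rounding scheme here are invented for the example, not any real fleet’s pipeline), direct identifiers can be replaced with keyed hashes and GPS coordinates coarsened before the data ever leaves the vehicle:

```python
import hashlib
import hmac

SECRET_SALT = b"example-fleet-salt"  # hypothetical key held by the fleet operator

def pseudonymize(identifier: str) -> str:
    """Replace a direct identifier (e.g. a VIN) with a keyed hash."""
    return hmac.new(SECRET_SALT, identifier.encode(), hashlib.sha256).hexdigest()[:16]

def coarsen(lat: float, lon: float, places: int = 2) -> tuple:
    """Round coordinates to roughly the kilometre level."""
    return (round(lat, places), round(lon, places))

def anonymize_record(record: dict) -> dict:
    """Strip direct identifiers from a single (hypothetical) trip record."""
    lat, lon = coarsen(record["lat"], record["lon"])
    return {
        "vehicle": pseudonymize(record["vin"]),  # stable pseudonym, no VIN
        "lat": lat,
        "lon": lon,
        "speed_kph": record["speed_kph"],
    }

trip = {"vin": "1HGBH41JXMN109186", "lat": 45.42153, "lon": -75.69719, "speed_kph": 52}
print(anonymize_record(trip))
```

The pseudonym is stable, so trips from the same vehicle can still be linked for engineering analysis, but the raw VIN and exact locations never reach the shared database.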

Responsibility and Insurance

When something does go wrong with an AV, and the driver is no longer in the equation, who is responsible? Who will pay for the damages? These questions are troubling, but insurance companies have been working to answer them (Kovacs, 2016). There is already an abundance of research on the topic (Insurance Information Institute, 2016), and in the UK, insurers have started to roll out driverless insurance policies (Kollewe, 2016). Because these policies are quite new, there will likely be many different types, and several revisions, before insurance companies find the right combination of coverage and fairness. However, this should not become a barrier to adoption; in fact, the only way to arrive at an adequate policy is through testing and experience.

Autonomy and Autonomous Self-Driving Cars

Freedom and control may also play a role in the fear of autonomous cars. After all, if the vehicles are in full control, drivers no longer have the option to speed down a busy highway or make doughnuts in a farmer’s field. And while some drivers would be sorely disappointed by that loss, there are many other fun things to do that won’t put the lives of others in danger.

Other experts have expressed concern about controlling where AVs are able to go, how fast they can travel, and when they can go somewhere. Should governments be able to dictate where their citizens are allowed to drive? In reality, they already do: cities regularly put up “no parking” signs, traffic cones, and other limitations on drivers, so there is little difference between controlling them with signage and doing so via a piece of computer code. On the upside, it would be cheaper, faster, and easier for cities.

Unknown Limitations and the Tendency to Overestimate Limitations

Halpern and Katz mentioned anthropomorphism in their paper. And while the concepts behind it could play a part in the fear of AVs, the issue appears to be more direct: fear arises because autonomous vehicles force us to recognize the limitations of technology as well as our own. For example, if humans are not capable of recognizing a hazard, or cannot instantly answer the trolley problem, how can we rely on a computer to do these things? The opposite is also true.

If drivers do not understand the exact limitations of AVs, and what they are and are not capable of, there will always be some element of fear. Drivers will second-guess the vehicle’s abilities, and uncertainty leaves room for imagination. For example, when a mom yells “stop hitting your brother or else!”, the child’s mind instantly conjures up images of the most horrible punishments imaginable. The same thing happens with AVs: when drivers don’t understand the limitations, the worst-case scenario immediately becomes plausible, and pretty soon a relaxing Sunday drive becomes a horrific scene from Maximum Overdrive (Maximum Overdrive Trailer, 2009).

Inconsistent Feedback

From the concept of feedback loops (Norman, 1988) to the idea of replacing traditional steering with something resembling a horse’s reins (Kuang, 2016), Don Norman has long discussed the importance of reliable feedback. And while feedback is usually used to inform the driver, it’s also important in an autonomous mode, at least until society gets accustomed to using these vehicles.

Every time a button is pushed or a setting is changed, something in the car needs to make the individual aware that the computer has received and understood the directions. It should indicate, in a timely fashion, that the button press was registered and what the results of that press will be. So when a button is pushed to turn on the air conditioning, for example, the individual needs to feel and hear the air, and the air has to be cold, getting colder, or noticeably in the process of cooling. If the driver tells the car to go to a specific coffee shop, they need to know the car “heard” them, understood the command, and made all the necessary changes to get them there.
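A minimal sketch of that kind of feedback loop (the class, commands, and response fields here are invented for illustration, not any vehicle’s actual interface): every command returns an explicit acknowledgement plus the resulting state, which the cabin display can then show, sound, or speak back to the occupant.

```python
class ClimateControl:
    """Toy model of a subsystem that always acknowledges commands."""

    def __init__(self):
        self.target_c = 22.0   # requested cabin temperature
        self.current_c = 28.0  # measured cabin temperature

    def set_temperature(self, target_c: float) -> dict:
        self.target_c = target_c
        # Return an explicit acknowledgement and the observable state,
        # so the interface can confirm: "heard you, cooling now."
        return {
            "acknowledged": True,
            "target_c": self.target_c,
            "status": "cooling" if self.current_c > target_c else "holding",
        }

cabin = ClimateControl()
print(cabin.set_temperature(20.0))
```

The design point is that the acknowledgement is part of the command’s contract rather than an afterthought: a silent success is indistinguishable from a silent failure, which is exactly the uncertainty that breeds fear.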

Even when nothing changes, drivers need to be able to reassure themselves that everything is operational and fully functional; they need to know the car has everything under control. Otherwise, the fear of being in a fiberglass death trap careening toward certain death creeps in, and less adventurous individuals will be hesitant to use them.

Artificial Intelligence and the Inability to Rationalize

If an animal were to suddenly jump out from the ditch, or a child were to dart out from between two cars to chase a ball, should the driver accelerate? Brake? Swerve? What if the driver brakes and swerves to the left, only to move directly into the path of a vehicle in the left lane? This all too common scenario plays out on roads thousands of times every year.

Some drivers will inevitably suffer a fit of ego and assume they will always make the right choice, but this simply is not true. Drivers make the best choices they can based on the information they have at the time, and while humans have amazing reasoning and perception skills, those skills are not flawless. The other issue is that these decisions take time to reason through, and while we are talking about mere fractions of a second, even one fraction can make the difference between life and death. So if humans aren’t capable of making the right decision all the time, how would a self-driving car manage? Research suggests they do better than humans.

Many accidents, or “failures”, are caused by humans, which suggests that in a significant number of instances, humans have done more harm than good (Siu, 2016). Many of the incidents involving AVs occurred because the human driver underestimated the technology, took over to “correct” the software, and ended up causing an accident as a result. In fact, if the statistics so far are any indication, switching to self-driving technology now could save thousands of lives annually.

High Financial Costs

Another reason people may shy away from autonomous cars is the high financial cost associated with them. After all, if a smartphone now costs almost $1,000, why would AVs be any different? Experts have reasoned that self-driving cars will likely carry a larger price tag, but considering their data and connectivity needs, they could well signal the end of personal car ownership. The expertise required to maintain them could also be an issue. Publicly or privately owned networks would allow users to simply pay for what they use rather than purchasing and maintaining an entire car, an interesting concept with a whole host of benefits.

In truth, there are many reasons to fear autonomous vehicles. However, none of them presents a serious or insurmountable obstacle to adoption. And the faster we adopt the technology, the better: thousands of lives depend on it.

(1) Motavalli, J. (2015, January 15). Automakers Rethink Seats for Self-Driving Cars. Retrieved February 2, 2017.

(2) Jackson, D. D. (2010). The Fear of Change (1967). Journal of Systemic Therapies, 29(2), 69–73. https://doi.org/10.1521/jsyt.2010.29.2.69

(3) Ladd, B. (2008). Autophobia: Love and Hate in the Automobile Age. Chicago, IL: University of Chicago Press.

(4) Knight, R. (1996). Contribution of human hippocampal region to novelty detection. Nature, 383(6597), 256–259. https://doi.org/10.1038/383256a0

(5) Halpern, D., & Katz, J. E. (2012). Unveiling robotophobia and cyber-dystopianism: The role of gender, technology and religion on attitudes towards robots. Proceedings of the 7th ACM/IEEE International Conference on Human-Robot Interaction (HRI), 139–140. https://doi.org/10.1145/2157689.2157724

(6) Vance, C. (2014). The Trolley Problem. Retrieved from http://rintintin.colorado.edu/~vancecd/phil3160/trolley.pdf

(7) Schultz, E. (2015, March 15). Lawsuit: Man Jailed Based on Bad GPS Data. Retrieved February 8, 2017, from http://www.wctv.tv/home/headlines/Leon-County-Facing-Lawsuit.html?ref=731

(8) Markowitz, E. (2014, April 17). Why GPS Doesn’t Always Work for Tracking Convicts. Retrieved February 8, 2017, from http://www.vocativ.com/underworld/crime/gps-doesnt-always-work-tracking-convicts/

(9) Kovacs, P. (2016). Automated Vehicles: Implications for the Insurance Industry in Canada. The Insurance Institute of Canada. Retrieved from https://search.informit.com.au/documentSummary;dn=365036984144993;res=IELENG

(10) Insurance Information Institute, Inc. (2016, July). Self-Driving Cars and Insurance. Retrieved February 8, 2017, from http://www.iii.org/issue-update/self-driving-cars-and-insurance

(11) Kollewe, J. (2016, June 7). Insurer launches UK’s ‘first driverless car policy’. Retrieved February 8, 2017, from https://www.theguardian.com/business/2016/jun/07/uk-driverless-car-insurance-policy-adrian-flux

(12) Maximum Overdrive [Trailer]. (2009, January 2). Retrieved February 2, 2017, from https://youtu.be/HgIgYhaqKeo?t=1m35s

(13) Norman, D. A. (1988). The Psychology of Everyday Things. New York: Basic Books.

(14) Kuang, C. (2016, February 2). The Secret UX Issues That Will Make (Or Break) Self-Driving Cars. Retrieved February 8, 2017, from https://www.fastcodesign.com/3054330/innovation-by-design/the-secret-ux-issues-that-will-make-or-break-autonomous-cars

(15) Siu, J. (2016, October 10). Human Driver Causes Crash with Google’s Self-Driving Car. Retrieved February 8, 2017, from http://www.autoguide.com/auto-news/2016/10/human-driver-causes-crash-with-google-s-self-driving-car.html

(16) Featured Image Source: WikiMedia. (1994). Retrieved February 2, 2017, from https://upload.wikimedia.org/wikipedia/commons/9/94/Driving_Google_Self-Driving_Car.jpg
