The role ethics will play in how cars of the future are designed
It may be time to say farewell to the idea of the ultimate driving machine. Autonomous cars are quickly becoming the new reality, and these cars will focus more on comfort, serving as mobile living rooms, than on power and exterior styling.
In his series on these future cars, Ed Bernardon explores how cars as we know them are on their way out. In part one, he examined how today’s cars could follow the same fate as the horse and be relegated to stadiums, race tracks or the farm. Here, he discusses the new challenges we all face as autonomous cars arrive and the questions we must begin answering.
There is no doubt cars of the future will bring new technical changes and ethical challenges with them – and these changes aren’t as far away as we think.
Mark Fields, CEO of Ford, estimates that fully autonomous vehicles will be on the market within five years. Travis Kalanick, CEO of Uber, expects Uber’s fleet to be driverless by 2030. Even the National Highway Traffic Safety Administration has determined that a computer system can qualify as the legal driver of a car.
We’ve been moving toward these cars for some time. For many years, cars have had anti-lock braking systems because a computer is better at braking than a human, who must learn to pump the pedal just right. Electronic stability and traction control came next; it’s easier for a computer to maintain traction than for a human driving in unfavorable conditions.
But that isn’t all. We can now buy cars that automatically parallel park. If we consider how many people dislike parallel parking, it isn’t hard to imagine how a computer is better at it than most humans will ever be.
Ford is developing technology to help cars recognize pedestrians and bicyclists to avoid collisions.
BMW manufactures cars with traffic jam assistant technology to drive in stop-and-go traffic.
And Toyota offers Lane Keeping Assist to keep a car within the white lines.
Tesla recently introduced “super-cruise,” allowing the Model S to largely drive itself on freeways. Super-cruise automatically keeps the car in its lane and maintains a safe distance from the car in front at both highway and city speeds. It can find a parking spot and parallel park itself. In an industry first, it also changes lanes automatically when you use your turn signal.
Tesla provided this capability as a software upgrade to its current vehicles, delivered over the Internet directly to cars already on the road. But as we might expect with new upgrades, there were some bugs. One driver said that when he passed exit ramps, the car would veer toward the ramp and required manual intervention to stay on the highway. He said that in the following days, the system was less eager to head for the ramps, and a few days after that, the car would only “wiggle” toward them. After a week, the car only slightly hesitated and required no manual correction.
Software improvements came as Tesla software engineers created new releases based on information collected from their cars on the road. Tesla wirelessly provided these releases and corrected most major bugs within a week. This is a goal every software provider should want to achieve with their own upgrades.
Of course, there’s still a big step from driver-assisted to fully autonomous cars. But the fundamental building blocks needed for these autonomous cars of the future have crept up on us over the years. We may be like frogs in a pot of water slowly being brought to a boil: the change around us is gradual, and we don’t fully notice it until it’s here.
But should we assume that accepting autonomous cars is a foregone conclusion?
How ethics fits into the cars of the future discussion
To realize these autonomous cars of the future, we have bigger hurdles to overcome, and they aren’t necessarily technological ones. The deeper challenges are ethical, and they aren’t easy to solve.
In Google’s self-driving car project, which has accumulated more than 1.4 million miles, there has been only one accident in which the car appears to have been at fault: it hit a bus while traveling at 2 miles per hour. The other crashes during the project were due to human error.
In response to the accident in which its car was at fault, Google ran thousands of simulations of what happened and tweaked its autonomous-driving software. It baked into that software something most drivers already know: big buses and trucks more often want the right of way. All of Google’s autonomous cars “learned” from a single accident, something human-driven cars can’t do. One month after the accident, Google was even granted a patent for robot cars that detect buses.
This is impressive. But even with this learning, these incidents raise the question: what happens when a screen replaces the steering wheel?
Google gave its workers autonomous cars to drive to and from home and found all kinds of distracted behavior, up to and including falling asleep. You might nod off if you’re tired and the car is driving itself, and this can cause a handoff problem.
The way a driverless car works now is that it drives on its own and hands back control if it encounters a problem it can’t solve. This handoff may take more than a few seconds and result in momentary confusion about whether the machine or the human is in control. But if you’re texting or watching Netflix when the car hands control back to you, you may be entering “the mushy middle” of autonomy.
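To make that handoff concrete, here is a minimal sketch, in Python, of the kind of takeover state machine a driver-assist system might use. The state names, the ten-second takeover window and the fallback behavior are illustrative assumptions for discussion, not any manufacturer’s actual design.

```python
# Illustrative sketch of an autonomy handoff, not any vendor's real logic.
# Assumptions: the state names, the 10-second takeover window and the
# "minimal-risk stop" fallback are all hypothetical.

from enum import Enum, auto


class Mode(Enum):
    AUTONOMOUS = auto()         # the car is driving itself
    HANDOFF_REQUESTED = auto()  # the car has asked the human to take over
    MANUAL = auto()             # the human is driving
    STOPPED = auto()            # the car gave up and pulled over


class HandoffController:
    TAKEOVER_WINDOW_S = 10.0    # assumed time the human has to respond

    def __init__(self) -> None:
        self.mode = Mode.AUTONOMOUS
        self.seconds_waiting = 0.0

    def on_unsolvable_situation(self) -> None:
        """The car meets a problem it can't solve and alerts the driver."""
        if self.mode is Mode.AUTONOMOUS:
            self.mode = Mode.HANDOFF_REQUESTED
            self.seconds_waiting = 0.0

    def on_driver_takes_wheel(self) -> None:
        """The driver grabs the wheel; control passes back to the human."""
        if self.mode in (Mode.AUTONOMOUS, Mode.HANDOFF_REQUESTED):
            self.mode = Mode.MANUAL

    def tick(self, dt: float) -> None:
        """Called periodically; escalates if the driver never responds."""
        if self.mode is Mode.HANDOFF_REQUESTED:
            self.seconds_waiting += dt
            if self.seconds_waiting > self.TAKEOVER_WINDOW_S:
                # The distracted driver never took over, so the car falls
                # back to something safe, such as slowing to a stop.
                self.mode = Mode.STOPPED
```

The open question lives inside that takeover window: a driver who has been watching a screen may need far longer than any fixed threshold to regain awareness of the road, which is exactly the mushy middle described above.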
What about when driverless cars are confronted with an ethical decision? For example, to avoid an accident, the car could maintain its current path and risk injuring or killing the passengers inside, or it could swerve onto an alternative path and risk injuring or killing pedestrians. How does the machine decide what to do? Should the machine even make such a decision? And where would responsibility fall: on the person sitting in the driver’s seat, the vehicle owner, the OEM, the software designers or the component suppliers?
These problems aren’t as clear-cut as the technical issues. But they are the kinds of questions we must begin answering as autonomous cars move closer to becoming a reality.
This concludes part two of Ed Bernardon’s series on preparing for life with autonomous cars. In part three, he discusses what engineering software companies must consider to enhance their automotive engineering software products so engineers can address the design challenges the cars of the future pose.
Tell us: What do you think are the most difficult challenges designers will face when they work on cars of the future?
About the author:
Ed Bernardon is vice president of strategic automotive initiatives for the Specialized Engineering Software business segment of Siemens PLM Software, a business unit of the Siemens Industry Automation Division. Bernardon joined the company when Siemens acquired Vistagy, Inc. in December 2011. During his 17-year tenure with Vistagy, Bernardon served as vice president of sales and, later, of business development for all specialized engineering software products. Before Vistagy, Bernardon directed the Automation and Design Technology Group at the Charles Stark Draper Laboratory, formerly the Massachusetts Institute of Technology (MIT) Instrumentation Laboratory, which developed new manufacturing processes, automated equipment and complementary design software tools. Bernardon received a degree in mechanical engineering from Purdue University, and later an M.S. from the Massachusetts Institute of Technology and an MBA from Butler University. He also holds numerous patents in the areas of automated manufacturing systems, robotics and laser technologies.