On the Road With Generative AI: Key Legal Considerations for the Automotive Industry

ArentFox Schiff

Generative AI is already an integral part of the automotive industry, playing a significant role in enhancing Advanced Driver Assistance Systems (ADAS) and making it possible for drivers to interact with their vehicles. Generative AI produces and processes massive amounts of data and images to train and improve self-driving algorithms. AI also gives drivers enhanced in-vehicle connectivity through voice technology, real-time traffic updates, and automatic rerouting, and it can monitor a vehicle and alert the driver when a mechanical problem is developing or service is due. Looking toward the future, manufacturers are using ADAS technologies and generative AI as building blocks to develop fully autonomous vehicles that may one day cruise across the country without any input from humans.

ADAS and Autonomous Driving

Many people use the terms “ADAS” and “autonomous driving” interchangeably, but they are not the same thing. ADAS is the suite of automotive technologies that assists drivers with features such as collision avoidance, pedestrian detection/avoidance, blind spot detection, lane keeping assist, adaptive cruise control, traffic sign recognition, and parking assistance. ADAS relies on sensors and advanced processing from camera, radar, sonar, thermal imaging, infrared, and lidar systems to provide accurate event detection, driver alerts, and semi-autonomous intervention.

The term “autonomous driving” refers to technology that allows cars to drive without any human intervention. There are six levels of autonomous driving (Levels 0 through 5), each with its own set of requirements and capabilities. Today’s vehicles equipped with a suite of ADAS technologies fall at Level 2+ or Level 3. At Level 2, the vehicle can perform complex functions such as steering, braking, and accelerating, but the driver must remain alert and in control. Examples of Level 2 automation include Ford BlueCruise, Tesla Autopilot, and GM Super Cruise™. Level 3 vehicles have conditional automated driving functions that allow the driver to disengage from driving while remaining behind the wheel, though the driver must be prepared to take over in certain situations. At Level 3, a vehicle can monitor its surroundings, change lanes, control steering and braking, and even accelerate past a slow-moving vehicle.

Level 4 automation will reduce a driver’s involvement to the point where it will be possible to work on a laptop or watch a movie. Test vehicles from Cruise, a subsidiary of General Motors, and Waymo, a spinoff from Google, are examples of Level 4 autonomy. Both Cruise and Waymo operate ride-hailing services (with and without safety drivers) in Phoenix, San Francisco, Los Angeles (Waymo), and Austin (Cruise). Both companies are seeking approval from the California Public Utilities Commission to charge fares for robo-taxi services in San Francisco with no one sitting in the driver’s seat.

The highest level of autonomous driving is Level 5, a vehicle that can operate completely autonomously in all situations without any human input. A few automotive companies are testing Level 5, but a fully autonomous vehicle is not yet available to the public.

ADAS technologies combine generative AI with vision, radar, and lidar sensor systems. Vision-based systems use image signal processing algorithms to identify and detect objects in their field of view. Onboard cameras installed at the front, rear, and both sides of the vehicle serve as the vehicle’s eyes, sending collision warnings, assisting with parking, performing object recognition, and supporting lane changes, among other functions. Radar supplements cameras in poor weather and low-visibility conditions where they cannot provide reliable ADAS data. Radar-based systems have a longer range, can pass through some materials, and can detect the position and velocity of approaching vehicles and other objects on the road. Lidar (Light Detection and Ranging) sensor systems differentiate between on-road objects such as vehicles, pedestrians, and bikes. Transmitters send out laser pulses that bounce off surfaces and return to the lidar sensor. The time it takes each light pulse to return tells the device the exact location of the surface the light hit. By creating and combining hundreds of thousands of data points per second, the lidar system can detect the shape of objects, follow moving obstacles, and build an accurate, real-time perception of the area. This provides highly accurate object detection and recognition for safer and more efficient driving.

Ford BlueCruise, a Level 2 ADAS, uses both advanced camera and radar-sensing technologies to allow a driver to operate hands-free on pre-qualified sections of divided highways. A driver-facing camera in the instrument cluster monitors eye gaze and head position to help ensure the driver’s eyes remain on the road. Ford uses AI to improve the system’s capabilities through machine learning, with data collected from owners who have opted in to share real-world information from their vehicles. The algorithm learns by reviewing video captured under different environmental and lighting conditions on pre-qualified divided highways. The system also takes cues from drivers’ reactions to other vehicles on the highway, such as moving over when larger vehicles are alongside.
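
To make the time-of-flight arithmetic above concrete, here is a minimal sketch in Python: distance is half the round-trip time multiplied by the speed of light. The function names and the simplified point format are illustrative assumptions, not any lidar vendor’s actual SDK; real systems also correct for beam angle calibration and sensor noise.

```python
# Minimal time-of-flight sketch: range = (speed of light x round-trip time) / 2.
# The point format and function names are illustrative, not a real lidar SDK.
import math

SPEED_OF_LIGHT_M_PER_S = 299_792_458.0

def range_from_round_trip(seconds: float) -> float:
    """Convert a pulse's round-trip time into a one-way distance in meters."""
    return SPEED_OF_LIGHT_M_PER_S * seconds / 2.0

def point_from_pulse(seconds: float, azimuth_rad: float, elevation_rad: float):
    """Turn one pulse return into an (x, y, z) point relative to the sensor."""
    r = range_from_round_trip(seconds)
    x = r * math.cos(elevation_rad) * math.cos(azimuth_rad)
    y = r * math.cos(elevation_rad) * math.sin(azimuth_rad)
    z = r * math.sin(elevation_rad)
    return (x, y, z)

# A pulse returning after ~333 nanoseconds hit a surface roughly 50 m away.
print(range_from_round_trip(333e-9))  # ~49.9 (meters)
```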

AI, Navigation, Infotainment Systems & Biometric Identifiers

Navigation and infotainment systems have become more intuitive and personalized with the use of generative AI. Companies like Waze use generative AI to provide real-time, personalized navigation suggestions based on user preferences and traffic conditions. Machine learning algorithms can analyze a driver’s music preferences and follow voice commands, allowing for hands-free operation. The 2023 Genesis GV60 uses biometric identifiers with Face Connect and Fingerprint Authentication. Face Connect allows the vehicle to recognize the driver’s face to lock or unlock its doors without a key. When the user touches the door handle, a near-infrared (NIR) camera embedded in the vehicle’s B-pillar analyzes unique facial features, such as facial contours and specific landmarks, allowing the car to instantly identify its owner in daylight or darkness. The camera uses image recognition technology based on deep learning to detect registered faces, and owners can pre-register multiple profiles for families with multiple drivers. Once the system recognizes the driver, it configures the cockpit according to previously saved personalized settings: the head-up display, steering wheel, side mirrors, and infotainment system are all adjusted to the driver’s preferences. The Fingerprint Authentication System allows the vehicle to be started without a key, using biometric authentication similar to what many people already use on their smartphones.
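
As a rough illustration of how deep-learning-based face recognition of this kind generally works, the sketch below compares a live face embedding against pre-registered driver profiles and unlocks only above a similarity threshold. The embedding values, threshold, and profile store are hypothetical stand-ins, not Genesis’s actual Face Connect implementation.

```python
# Hypothetical sketch of embedding-based face matching; the embeddings would
# come from a deep neural network trained on NIR face images (not shown here).
import math

def cosine_similarity(a: list[float], b: list[float]) -> float:
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
    return dot / norm if norm else 0.0

# Pre-registered driver profiles: name -> stored face embedding (illustrative).
PROFILES = {
    "driver_1": [0.12, 0.85, 0.51],
    "driver_2": [0.90, 0.10, 0.42],
}

MATCH_THRESHOLD = 0.95  # Tuned in practice to balance false accepts/rejects.

def identify(live_embedding: list[float]) -> str | None:
    """Return the best-matching registered profile, or None to keep doors locked."""
    best_name, best_score = None, 0.0
    for name, stored in PROFILES.items():
        score = cosine_similarity(live_embedding, stored)
        if score > best_score:
            best_name, best_score = name, score
    return best_name if best_score >= MATCH_THRESHOLD else None

print(identify([0.11, 0.86, 0.50]))  # Close to driver_1's embedding -> "driver_1"
```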

Key Legal Considerations For Auto Manufacturers and Suppliers

As legacy manufacturers and parts suppliers continue development and testing of autonomous vehicles with generative AI, there are several key issues to consider:

1. Cybersecurity

Autonomous vehicles are highly complex, connected devices that use a combination of high-tech sensors and innovative algorithms to detect and respond to their surroundings. The vehicle is a blend of networked components, some within the vehicle and others outside it. These systems allow the vehicle to make complex decisions, but they also give hackers several avenues to exploit this emerging technology. In 2015, security researchers Charlie Miller and Chris Valasek remotely hacked a Jeep Cherokee traveling at high speed on the highway and forced it to come to a stop in the middle of traffic. Using its internet connection, they gained remote control by exploiting vulnerabilities in Chrysler’s Uconnect system. Similar vulnerabilities were found in Volkswagen, Tesla Model S, and BMW vehicles.

Companies must continually develop procedures to protect their security architecture, including intrusion detection and anomaly detection. This includes encryption, authentication of the driver, and firewalls between a vehicle’s internal network and the external world. Autonomous vehicles interact with a larger network of connected devices, including other vehicles, traffic signs, and even pedestrians carrying smart devices. Hackers could intentionally cause a vehicle to misinterpret a stop sign, compromising both cybersecurity and the AI system and disrupting safety-critical functions.
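
One common building block for the authentication measures described above is a message authentication code, which lets a receiving controller reject commands that were not produced by a trusted sender. The sketch below is a simplified illustration using Python’s standard hmac module; the command format and key handling are assumptions, not any manufacturer’s actual protocol.

```python
# Simplified sketch of authenticating an in-vehicle command with an HMAC tag.
# Real automotive schemes also add freshness counters to block replay attacks;
# key distribution and rotation are likewise out of scope here.
import hashlib
import hmac

SHARED_KEY = b"demo-key-provisioned-at-manufacture"  # Illustrative only.

def sign_command(command: bytes) -> bytes:
    """Sender side: append an HMAC-SHA256 tag to the command."""
    tag = hmac.new(SHARED_KEY, command, hashlib.sha256).digest()
    return command + tag

def verify_command(message: bytes) -> bytes | None:
    """Receiver side: return the command if the tag checks out, else None."""
    command, tag = message[:-32], message[-32:]
    expected = hmac.new(SHARED_KEY, command, hashlib.sha256).digest()
    return command if hmac.compare_digest(tag, expected) else None

signed = sign_command(b"BRAKE:soft")
print(verify_command(signed))                        # b'BRAKE:soft' -> accepted
print(verify_command(b"BRAKE:hard" + b"\x00" * 32))  # None -> rejected
```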

2. Data Privacy

Generative AI often requires access to personal or sensitive information to authenticate authorized use. Sensor data is collected to help the vehicle understand where it is relative to other objects on the road. Collected data sets also include location data, i.e., destination, speed, and route information, along with additional information relevant to the trip. AI uses the data set associated with a particular vehicle to personalize and enhance navigation features, including the option to save specific locations in order to plan personalized routes for drivers. If this data is not properly protected, hackers can access sensitive information about the owner or passengers, such as where they live and work and the specific locations they frequent, potentially leading to identity theft and other misuse of personal information.

Manufacturers should minimize the collection and retention of personal data to only what the AI system needs to function properly, reducing the risk of privacy breaches. Before data is used to train generative AI, personal information should be deidentified so that individuals cannot be identified from generated outputs. Manufacturers using generative AI tools should clearly communicate their data collection, storage, and usage practices to vehicle occupants and should process personal data only for disclosed purposes. Individuals should have granular control over what data they share and generate, and it should be easy for them to opt out of sharing their personal data through an AI system. This is a very active area of the law at the federal and state levels, with the California Privacy Protection Agency leading the way as it considers new rules for AI and automated decision-making technology, which may create rights to access information about, and to opt out of, a business’s use of these technologies, as well as obligations to perform risk assessments.
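
As a concrete illustration of the minimization and deidentification practices described above, the sketch below drops direct identifiers and coarsens GPS coordinates before a trip record is retained for AI training. The field names, pseudonym scheme, and rounding precision are hypothetical choices; real deidentification programs require far more rigorous re-identification risk analysis.

```python
# Hypothetical sketch: minimize and deidentify a trip record before retention
# for AI training. Field names and precision are illustrative only; rounding
# coordinates alone does not by itself guarantee legal deidentification.
import hashlib

def deidentify_trip(record: dict) -> dict:
    """Keep only what training needs; drop or transform identifying fields."""
    return {
        # One-way pseudonym instead of the raw vehicle identifier.
        "vehicle_pseudonym": hashlib.sha256(record["vin"].encode()).hexdigest()[:16],
        # Coarsen coordinates to roughly 1 km to reduce re-identification risk.
        "start_lat": round(record["start_lat"], 2),
        "start_lon": round(record["start_lon"], 2),
        "avg_speed_kph": record["avg_speed_kph"],
        # Deliberately omitted: exact destination, driver name, contact info.
    }

raw = {
    "vin": "1FTEW1EP5MFA00001",
    "driver_name": "Jane Doe",
    "start_lat": 42.331427,
    "start_lon": -83.045754,
    "avg_speed_kph": 54.2,
}
print(deidentify_trip(raw))  # No name, no VIN, coarse location only.
```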

3. Biometric Privacy

Technologies such as fingerprint readers, facial scanners, iris scanners, and voice recognition collect and use biometric data to improve a driver’s in-vehicle experience. Companies that collect this data must comply with privacy and data protection regulations to keep it private and secure. State privacy laws, including the California Consumer Privacy Act (CCPA), require manufacturers and service providers to conduct data inventories and monitor the flow of data in order to develop compliance systems. The Illinois Biometric Information Privacy Act (BIPA) is one of the most stringent and heavily litigated biometric privacy laws in the country. BIPA regulates the collection, use, storage, retention, and destruction of biometric identifiers and biometric information. Under BIPA, a “biometric identifier” includes a retina or iris scan, fingerprint, voiceprint, or scan of hand or face geometry, and “biometric information” is any information based on an individual’s biometric identifier that is used to identify the individual. BIPA imposes a number of compliance obligations on entities collecting biometric data, including providing notice, obtaining written consent, and developing a publicly available retention and destruction policy. Failure to comply with BIPA’s requirements can subject companies to substantial damages awards.

4. Regulatory Landscape

In 2021, the National Highway Traffic Safety Administration (NHTSA) issued a Standing General Order requiring manufacturers and operators of vehicles equipped with automated driving systems (ADS) or SAE Level 2 ADAS to report crashes to the agency. ADS technology, which encompasses Levels 3 through 5, is still in development. The Order allowed NHTSA to obtain timely and transparent notification of real-world crashes associated with ADS and Level 2 ADAS vehicles. In June 2022, NHTSA upgraded its preliminary investigation of Tesla’s Autopilot active driver assistance system to an engineering analysis. As part of this evaluation, NHTSA requested information related to “object and event detection and response (OEDR) that include monitoring the driving environment (detecting, recognizing, and classifying objects and events, and preparing to respond as needed) and executing an appropriate response to such objects and events.”

In June 2023, NHTSA issued a Second Amended Standing General Order. NHTSA is reviewing not only driver behavior during real-world crashes, but also the decisions of software algorithms that analyze data inputs in real time to determine the appropriate vehicle response, safety issues that may arise from the operational design domain of an ADS, and the continuing evolution and modification of these systems through software updates (including over-the-air updates). NHTSA defines an operational design domain as the operating conditions under which a given ADS or ADS feature is designed to function, including but not limited to environmental, geographical, and time-of-day restrictions and the presence or absence of certain traffic or roadway characteristics.

At the national level, the federal regulatory framework has not kept pace with the development of autonomous vehicles. States have filled the gap, creating a patchwork of regulations. NHTSA is in the process of formulating a framework to ensure automated driving systems are deployed safely.

5. Litigation

Who is at fault in an accident with a self-driving car? Is it the AI? Is it the human driver? In 2018, a self-driving Uber Volvo hit and killed Elaine Herzberg, a pedestrian who was jaywalking at the time; her death was the first pedestrian fatality involving a self-driving car. The NTSB concluded that the vehicle could not determine whether she was a pedestrian, a bicycle, or another car and could not predict where she was going. The safety driver, Rafaela Vasquez, was not looking at the road and was instead watching “The Voice” on her smartphone. The NTSB split the blame among Uber, the company’s autonomous vehicle, the safety driver, the victim, and the state of Arizona. Arizona prosecutors charged Ms. Vasquez with negligent homicide; her trial, originally scheduled for June 2023, has been delayed until at least September. Prosecutors declined to charge Uber criminally in Ms. Herzberg’s death.

In a Columbia University study, researchers developed a game-theory model of the interactions among drivers, the self-driving car manufacturer, the car itself, and lawmakers. The paper’s lead author, Dr. Xuan (Sharon) Di, said, “We found that human drivers may take advantage of this technology by driving carelessly and taking more risks, because they know that self-driving cars would be designed to drive more conservatively.” With more autonomous vehicles taking to the road in the future, liability is more likely to fall on manufacturers, as there will no longer be a human safety driver to take over the vehicle if needed. Once a Level 5 vehicle is approved for use on the road without human intervention, who becomes liable for an accident: the manufacturer, the AI algorithm, or perhaps the engineer who wrote the algorithm?

6. Ethical Considerations

Autonomous vehicles are not “programmed” by humans to mimic human decision-making. Instead, they learn from large data sets to perform tasks like traffic sign recognition using complex algorithms distilled from data. A human driver may accumulate a few hundred thousand miles of driving experience over a lifetime, but Waymo has covered over 20 million miles on public roads since its creation in 2009, plus billions of miles in simulation. In January 2023, Waymo exceeded one million miles driven with no human behind the wheel. With more data to learn from, AI will improve quickly and become more adaptive. However, a major concern remains. The “trolley problem” is the ethical dilemma in which an onlooker can save five lives from a runaway trolley by diverting it to kill just one person. It illustrates that decisions about who lives and dies are inherently moral judgments; with generative AI, are we now delegating these moral judgments to artificial intelligence that has no human feelings? AI and human perception also differ, resulting in different kinds of mistakes. As in the pedestrian death caused by the self-driving Uber car, AI can misidentify hazards. How will an autonomous vehicle rationally choose a course of action in an inevitable collision?

The role of generative AI will only increase as manufacturers continue working toward the goal of producing a fully autonomous Level 5 vehicle. Automotive companies need to ensure that the AI tools they use in their vehicles comply with safety, data, and privacy regulations. Generative AI is constantly evolving, and the legal and regulatory issues it raises must be taken into consideration.


DISCLAIMER: Because of the generality of this update, the information provided herein may not be applicable in all situations and should not be acted upon without specific legal advice based on particular situations.

© ArentFox Schiff | Attorney Advertising
