The Current State of Autonomous Vehicles on U.S. Roads
Autonomous vehicles are no longer experimental — they're operating commercially on public roads. Tesla has deployed its Autopilot and Full Self-Driving (FSD) Beta to hundreds of thousands of vehicles. Waymo operates commercial robotaxi services in San Francisco, Phoenix, and Los Angeles. Cruise, Aurora, and other companies are expanding their autonomous fleets across multiple cities.
With this expansion has come a growing toll of accidents, injuries, and fatalities. The NHTSA's Standing General Order (SGO), issued in 2021, requires manufacturers to report crashes involving vehicles equipped with Level 2+ automated driving systems. The resulting data has revealed hundreds of incidents — including dozens of fatalities linked to Tesla Autopilot alone.
The legal landscape is evolving rapidly. Traditional motor vehicle accident law — built around the concept of driver negligence — must adapt to a world where the 'driver' may be an algorithm. Product liability, software defect theories, and manufacturer negligence are becoming central to autonomous vehicle litigation.
Since NHTSA's 2021 Standing General Order, hundreds of crashes involving automated driving systems have been reported — with Tesla accounting for the majority of reported incidents.
SAE Levels of Automation: What Your Vehicle Can and Can't Do
The Society of Automotive Engineers (SAE) defines six levels of driving automation in its J3016 standard. Understanding these levels is essential to determining liability in an AV accident:
LEVEL 0 — NO AUTOMATION: The human driver controls everything. Standard vehicles.
LEVEL 1 — DRIVER ASSISTANCE: One automated function (cruise control, lane centering) but the driver must remain engaged.
LEVEL 2 — PARTIAL AUTOMATION: The vehicle can steer, accelerate, and brake simultaneously, but the human driver must monitor at all times and be ready to take over instantly. Tesla Autopilot and most current ADAS systems are Level 2.
LEVEL 3 — CONDITIONAL AUTOMATION: The vehicle handles all driving in certain conditions, but may request human intervention with adequate warning. Mercedes Drive Pilot (limited deployment) is Level 3.
LEVEL 4 — HIGH AUTOMATION: The vehicle handles all driving in a defined operational domain (e.g., a specific geographic area). No human intervention is needed within that domain. Waymo and Cruise operate at Level 4.
LEVEL 5 — FULL AUTOMATION: The vehicle handles all driving in all conditions. No Level 5 vehicles exist commercially.
The critical legal distinction: at Level 2, the human driver is expected to maintain attention and bears at least partial responsibility. At Levels 4 and 5, the manufacturer bears primary responsibility because no human driver is expected to intervene.
Who Is Liable? The Multi-Party Liability Framework
Autonomous vehicle accidents create a complex web of potentially liable parties that extends far beyond traditional driver-vs-driver claims:
THE VEHICLE MANUFACTURER: Strict product liability for hardware defects (sensors, brakes, steering) and design defects in the autonomous system architecture. THE SOFTWARE DEVELOPER: Negligence or strict liability for software bugs, algorithm failures, or inadequate machine learning training data. THE SENSOR/COMPONENT MANUFACTURER: If a LiDAR unit, camera, or radar sensor was defective or miscalibrated.
THE VEHICLE OWNER/OPERATOR: At Level 2, the human driver who failed to monitor the system and intervene when necessary. THE MAPPING/DATA PROVIDER: If outdated or inaccurate HD map data caused the AV to navigate incorrectly. GOVERNMENT ENTITIES: If road infrastructure (lane markings, signage, traffic signals) was inadequate for AV systems to interpret. THE TELECOMMUNICATIONS PROVIDER: If V2X (vehicle-to-everything) communication failures contributed to the crash.
Identifying all liable parties is critical because it multiplies available insurance coverage. A Tesla Autopilot crash might involve the Tesla driver's auto policy, Tesla's product liability coverage, and potentially the sensor manufacturer's coverage — providing significantly more recovery than a single-driver claim.
AV accidents often involve multiple liable parties — each with separate insurance. Identifying all of them can dramatically increase your total recovery.
Tesla Autopilot & FSD: Marketing vs. Reality
Tesla presents unique legal challenges because of the gap between its marketing and its technology's actual capabilities. Despite being named 'Full Self-Driving,' Tesla's FSD system is SAE Level 2 — it requires constant human supervision. Tesla's own terms of service state that 'the currently enabled features require active driver supervision and do not make the vehicle autonomous.'
This creates a powerful legal argument for plaintiffs: if the system isn't truly 'full self-driving,' then marketing it as such constitutes deceptive trade practices. Multiple lawsuits and regulatory investigations have targeted this gap. The California DMV has accused Tesla of misleading advertising, and several class-action lawsuits allege consumer fraud.
For crash victims, the legal strategy typically combines: PRODUCT LIABILITY — the Autopilot/FSD system failed to detect a hazard or made a dangerous driving decision. NEGLIGENCE — Tesla knew or should have known that its system was prone to the type of failure that caused your crash. CONSUMER FRAUD — Tesla's marketing induced drivers to over-rely on a system that wasn't capable of the self-driving it promised.
Additionally, Tesla's over-the-air (OTA) software updates create a unique evidentiary issue: was the vehicle running the most current software? Were known bugs fixed? Was a safety-critical update available but not yet installed? Your attorney should preserve OTA update history and compare it against Tesla's known issue database.
Tesla's 'Full Self-Driving' is only SAE Level 2 — it requires constant human supervision. The gap between marketing and reality is the basis for product liability and consumer fraud claims.
Waymo, Cruise & Robotaxi Liability
Fully autonomous robotaxis operating without a human safety driver present a cleaner liability picture — in some ways, they're simpler cases for plaintiffs:
NO COMPARATIVE FAULT ARGUMENT: When there's no human driver, the manufacturer can't argue the driver was partially at fault for failing to intervene. CORPORATE DEFENDANT: Waymo (Alphabet), Cruise (GM), and other robotaxi operators are well-funded corporations with substantial insurance. MANDATORY INSURANCE: Robotaxi companies are required by regulators to carry substantial insurance — typically $5M+ per vehicle.
However, these companies have significant legal resources and will aggressively defend claims. Their primary defense strategies include: arguing the AV performed correctly and the accident was caused by the other party, claiming the victim's actions were unpredictable and outside the AV's design parameters, and using their massive sensor data archives to reconstruct the accident in their favor.
Your counter-strategy: demand the full sensor suite data (LiDAR, cameras, radar) from the moments before the crash. This data shows exactly what the AV 'saw' and how its algorithms responded — or failed to respond. Unlike human memory, AV sensor data is objective and contemporaneous.
Critical Evidence: Sensor Logs, Decision Data & OTA Updates
Autonomous vehicles generate more data than any other type of vehicle involved in a crash. This data is both your greatest asset and your biggest challenge — because the manufacturer controls it.
SENSOR DATA: LiDAR point clouds showing the 3D environment around the vehicle. Camera footage from multiple angles. Radar returns showing object detection and tracking. Ultrasonic sensor data for close-range object detection. This data reveals what the AV 'perceived' before the crash — and whether its perception matched reality.
DECISION LOGS: The AV's planning and decision-making records, showing what actions the system considered, what it predicted other road users would do, and why it chose a particular response. These logs can prove the AV made a faulty decision — for example, failing to brake when it detected a pedestrian. SOFTWARE VERSION AND OTA UPDATES: Which version of the autonomous system was running at the time of the crash. Whether patches for known bugs were available but not installed. Whether the manufacturer had received reports of similar failures.
CRITICAL: Manufacturers will argue this data is a proprietary trade secret. Your attorney must file aggressive discovery motions and, if necessary, seek court orders compelling production. Time is critical — some AV data is stored in a rolling buffer and overwritten on an ongoing basis.
AV sensor and decision data may be overwritten or claimed as proprietary. Your attorney must send a preservation demand within hours and be prepared to seek emergency court orders.
Filing Your Claim: Strategies for AV Accident Cases
If you've been injured by an autonomous vehicle, your case requires an attorney with specialized experience in this emerging field. Here are the strategic considerations:
IMMEDIATE EVIDENCE PRESERVATION: Send a preservation-of-evidence (anti-spoliation) demand to the AV manufacturer within 24 hours. Request preservation of all sensor data, decision logs, software version records, OTA update history, and disengagement reports. IDENTIFY ALL LIABLE PARTIES: Don't just sue the obvious defendant. Map the full technology stack — manufacturer, software developer, sensor suppliers, mapping providers.
RETAIN SPECIALIZED EXPERTS: AV cases require experts in: autonomous systems engineering, computer vision and machine learning, accident reconstruction with AV-specific methodology, human factors (for Level 2 cases involving driver monitoring), and regulatory compliance (NHTSA, state DMV requirements).
REGULATORY COMPLAINTS: File a complaint with NHTSA's Office of Defects Investigation (ODI). This creates an official record and may trigger a broader investigation that produces evidence helpful to your case. NHTSA investigations have led to recalls and public disclosures of safety data that strengthened individual plaintiff claims.
Bond Legal is committed to staying at the forefront of autonomous vehicle litigation. As AV technology expands across the 28 states where we practice, we're prepared to hold manufacturers accountable when their algorithms and sensors fail.