Is full self-driving an AGI-complete problem?

I’ve felt for quite a while that full self-driving (automated driving without human supervision through arbitrary road systems) is a deceptively hard problem. Yes, current systems can map a route and navigate a road mesh, follow lanes, and even avoid obstacles. With LIDAR and well-trained avoidance systems, vehicles like Waymo’s can operate in constrained urban environments.
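
To be concrete about the “solved” part: the route-mapping step is ordinary graph search. Here’s a minimal sketch in Python, where the road mesh, node names, and edge costs are all invented for illustration:

```python
# A minimal sketch of the route-mapping step: shortest-path search over
# a toy road graph. Real systems plan over HD maps with far richer
# annotations, but the core idea is just graph search.
import heapq

def shortest_route(graph, start, goal):
    """Dijkstra's algorithm over a dict-of-dicts road graph."""
    frontier = [(0.0, start, [start])]   # (cost so far, node, path)
    visited = set()
    while frontier:
        cost, node, path = heapq.heappop(frontier)
        if node == goal:
            return cost, path
        if node in visited:
            continue
        visited.add(node)
        for neighbor, edge_cost in graph.get(node, {}).items():
            if neighbor not in visited:
                heapq.heappush(frontier, (cost + edge_cost, neighbor, path + [neighbor]))
    return float("inf"), []

# Hypothetical road mesh: edge weights are travel costs (e.g., minutes).
road_mesh = {
    "depot":   {"main_st": 2.0, "ring_rd": 5.0},
    "main_st": {"ring_rd": 1.0, "downtown": 4.0},
    "ring_rd": {"downtown": 2.0},
}

print(shortest_route(road_mesh, "depot", "downtown"))
# -> (5.0, ['depot', 'main_st', 'ring_rd', 'downtown'])
```

Routing like this scales fine; it’s everything after the route is chosen that gets hard.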

But as soon as the training wheels come off and the environment becomes unconstrained, the problem stops being merely whether we can design an agent with driving capabilities and becomes “can we make a vehicle which can predict agent-agent dynamics?” (see the sketch after this list). If we think about the full range of human road behaviors, we must consider adversarial attacks on the system, such as:

  • Blocking it from entering a lane

  • Boxing it in and forcing it off the road into obstacles

  • Throwing paint/eggs/rocks at its vision systems

  • Using deceptive tactics (e.g., pretending to be a road worker) to vandalize it or steal its cargo

  • Intentionally standing in its path to delay it

  • Making blind turns in front of it

  • Running into traffic

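To see why this reframing matters, consider a planner that predicts other agents by simple extrapolation. The toy simulation below (all parameters invented) shows how far off a constant-velocity predictor ends up against a lead car that brakes specifically to exploit it, which is exactly the boxing-in scenario above:

```python
# Toy illustration of the agent-agent prediction problem: a naive
# constant-velocity predictor vs. a lead vehicle that brakes hard to
# exploit it. All numbers are made up; real prediction stacks are
# vastly more sophisticated.

DT = 0.5      # timestep (s)
HORIZON = 8   # steps to look ahead (4 s total)

def predict_constant_velocity(pos, vel):
    """Naive model: assume the other car keeps its current speed."""
    return [pos + vel * DT * (k + 1) for k in range(HORIZON)]

def simulate_adversarial_lead(pos, vel):
    """Actual behavior: cruise for 1 s, then brake at 8 m/s^2."""
    actual = []
    for k in range(HORIZON):
        t = (k + 1) * DT
        if t > 1.0:
            vel = max(0.0, vel - 8.0 * DT)
        pos += vel * DT
        actual.append(pos)
    return actual

predicted = predict_constant_velocity(pos=30.0, vel=15.0)
actual = simulate_adversarial_lead(pos=30.0, vel=15.0)

# The gap the planner *thinks* it has vs. the gap it actually gets:
for k, (p, a) in enumerate(zip(predicted, actual)):
    print(f"t={DT * (k + 1):.1f}s  predicted={p:5.1f}m  actual={a:5.1f}m  error={p - a:5.1f}m")
```

By t = 4 s the naive prediction is off by roughly 35 m, the difference between a safe gap and a collision. Recovering from that requires modeling the other driver’s intent, not just their kinematics.
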
In addition to agent-agent problems, we must consider road hazards:

  • Poorly maintained roads with damaging potholes

  • Sinkholes which have disabled the road

  • Eroded road edges with dangerous drop-offs

  • Road debris from landslides

  • Road debris from other vehicles

In these situations, a perfectly rule-following automaton performs well below human level at avoiding delay or damage to itself. Do these scenarios require AGI for a Level 5 autonomous vehicle to reach human-level performance? Are the benefits of above-average performance in normal traffic enough to offset the risk of subhuman performance at the extremes?