Radar has low angular resolution, so it gave these early systems only a crude picture of the environment around the vehicle. What radar is quite good at, however, is figuring out how fast objects are moving. And so a key strategy for making the technology work was to ignore anything that wasn't moving. A car's radar will detect a lot of stationary objects located somewhere ahead of the car: these might be trees, parked cars, bridges, overhead signs, and so forth.
These systems were designed to work on controlled-access freeways, and, in the vast majority of cases, stationary objects near a freeway would be on the side of the road (or suspended above it) rather than directly in the car's path. Early adaptive cruise control systems simply couldn't distinguish the many objects that were near the road from the few that were actually on it.
So cars were programmed to focus on maintaining a safe distance from other moving objects—cars—and to ignore stationary objects. Designers assumed it would still be the job of the human driver to pay attention to the road and intervene if there was an obstacle directly in the roadway. [emphasis added by me - Borepatch]
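The heuristic described above can be sketched in a few lines of code. This is purely illustrative (not any vendor's actual implementation, and the names, tolerance value, and speeds are my own assumptions): a radar reports range and closing speed for each return, and a return closing at roughly the car's own speed must be a stationary object, so it gets filtered out before the system picks a target to follow.

```python
# Illustrative sketch of the early adaptive-cruise-control heuristic:
# discard any radar return that looks stationary, then follow the
# nearest moving target. All names and numbers here are assumptions.

from dataclasses import dataclass

@dataclass
class RadarReturn:
    range_m: float           # distance to the object, in meters
    closing_speed_mps: float # speed at which it approaches our car, m/s

EGO_SPEED_MPS = 30.0        # our own speed (~108 km/h)
STATIONARY_TOL_MPS = 1.0    # returns closing at ~our speed look stationary

def is_moving(ret: RadarReturn, ego_speed: float) -> bool:
    """A stationary object closes at about our own speed; ignore it."""
    return abs(ret.closing_speed_mps - ego_speed) > STATIONARY_TOL_MPS

def nearest_moving_target(returns, ego_speed):
    movers = [r for r in returns if is_moving(r, ego_speed)]
    return min(movers, key=lambda r: r.range_m, default=None)

# An overhead sign (stationary, so it closes at our full speed) and a
# slower car ahead (closing at only 5 m/s).
scene = [
    RadarReturn(range_m=80.0, closing_speed_mps=30.0),  # sign: ignored
    RadarReturn(range_m=50.0, closing_speed_mps=5.0),   # car: followed
]
target = nearest_moving_target(scene, EGO_SPEED_MPS)
print(target.range_m)  # → 50.0: the car ahead; the sign was filtered out
```

Note that this same filter is exactly what makes a stopped car in the lane invisible to the system: it closes at our full speed, so it looks like a bridge or a sign, which is why the human driver was still expected to watch the road.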
This may be a fundamental problem with this approach to driver assistance technology. The ADAS is supposed to do most of the driving, but the human driver is still supposed to monitor the system and catch its mistakes. Our brains aren't wired for that kind of monotony. Monitoring a system that works correctly 99 percent of the time is in some ways harder—not easier—than just driving the car yourself. And monitoring a system that works correctly 99.9 percent of the time is harder still, because it's that much easier for our brains to drift to something else.