For me "real intelligence" is when a computer program gets a complete and realistic model of the environment, by analysing data streams coming from cameras, microphones, lidars, etc.
And based on the knowledge of obstacles and moving objects and vehicles, take proper actions regarding a vehicle guidance, in such a way to avoid collisions and preserve human lifes.
Only gathering a real undestanding of the environment (situational awareness) and operating proper decisional algorithms these program can safely operate in an unpredictable urban environment.
The vast majority of this I would not describe as intelligence, but rather as software capable of a task. The exception might be the last underlined part, which might be a small hint of intelligence.
Video games have been able to do the majority of what you describe for some time, minus the object identification in the real world. The simple concept of "don't hit object," or even following a road, is perhaps the easy part.
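To illustrate how simple that core logic is once the game engine already hands you every object's position, here's a minimal sketch (the function name, thresholds, and coordinates are made up purely for illustration):

```python
import math

def steer_to_avoid(vehicle_pos, vehicle_heading, obstacles, safe_distance=10.0):
    """Trivial game-style avoidance: brake if anything known is close and ahead.

    vehicle_pos and obstacles are (x, y) tuples the game engine already knows,
    which is exactly what the real world does NOT give you for free.
    """
    for ox, oy in obstacles:
        dx, dy = ox - vehicle_pos[0], oy - vehicle_pos[1]
        distance = math.hypot(dx, dy)
        # Angle from our heading to the obstacle, wrapped to [-pi, pi];
        # we only care about things roughly in front of us.
        angle = (math.atan2(dy, dx) - vehicle_heading + math.pi) % (2 * math.pi) - math.pi
        if distance < safe_distance and abs(angle) < math.radians(30):
            return "brake"
    return "continue"

print(steer_to_avoid((0.0, 0.0), 0.0, [(5.0, 1.0)]))  # -> "brake"
```

The hard part in the real world is producing those obstacle coordinates in the first place, which is what the next point is about.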
Identifying objects in the real world is challenging, but doesn't require AI or intelligence. For example, I was using 3D point-cloud technology over a decade ago that takes multiple cameras and generates a 3D environment. The problem is that it takes a lot of processing power to render a single "frame." I've also seen a variety of attempts to create a "depth camera," although I'm unfamiliar with the current state of that technology.
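For what it's worth, the multi-camera depth idea can be sketched with off-the-shelf tools today. This is just an illustrative OpenCV block-matching example (the file names and parameters are placeholders), not the system I used back then:

```python
import cv2

# Rectified left/right frames from a calibrated stereo pair
# (placeholder file names; real use needs calibration and rectification first).
left = cv2.imread("left_frame.png", cv2.IMREAD_GRAYSCALE)
right = cv2.imread("right_frame.png", cv2.IMREAD_GRAYSCALE)

# Classic block matching: compares small patches along each scanline.
# numDisparities must be a multiple of 16; blockSize must be odd.
stereo = cv2.StereoBM_create(numDisparities=64, blockSize=15)
disparity = stereo.compute(left, right)  # larger disparity = closer object

# Depth is roughly (focal_length * baseline) / disparity, so the disparity map
# is effectively a per-pixel distance estimate -- a 2.5D "point cloud".
vis = cv2.normalize(disparity, None, 0, 255, cv2.NORM_MINMAX).astype("uint8")
cv2.imwrite("disparity.png", vis)
```

No trained model anywhere in that pipeline; it's geometry plus patch matching, and the cost is the per-frame compute I mentioned.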
I've actually speculated that the "AI" approach to object identification is the wrong approach. In other words, feeding large numbers of images (or video) into machine-learning algorithms isn't the most reliable method and requires a lot of processing power. Instead, I think it would be far more efficient to manually write software that simply attempts to identify where objects are in a video. This can be done with FAR less processing power, in real time; however, it requires very smart and creative humans to write that software. I've written similar (but simpler) software myself, though I'm not really working on this problem at the moment, and the above is somewhat like coaching from the bleachers.
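As a concrete (and heavily simplified) example of the hand-written, low-compute style I mean: classic frame differencing finds moving objects in a video stream with no trained model at all. This is only a sketch with a placeholder input file, not the software I actually wrote:

```python
import cv2

cap = cv2.VideoCapture("dashcam.mp4")  # placeholder input; a camera index works too
ok, prev = cap.read()
prev_gray = cv2.cvtColor(prev, cv2.COLOR_BGR2GRAY)

while True:
    ok, frame = cap.read()
    if not ok:
        break
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)

    # Pixels that changed between consecutive frames are candidate moving objects.
    diff = cv2.absdiff(prev_gray, gray)
    _, mask = cv2.threshold(diff, 25, 255, cv2.THRESH_BINARY)
    mask = cv2.dilate(mask, None, iterations=2)  # merge nearby blobs

    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    for c in contours:
        if cv2.contourArea(c) > 500:  # ignore tiny noise
            x, y, w, h = cv2.boundingRect(c)
            cv2.rectangle(frame, (x, y), (x + w, y + h), (0, 255, 0), 2)

    cv2.imshow("moving objects", frame)
    if cv2.waitKey(1) & 0xFF == ord("q"):
        break
    prev_gray = gray

cap.release()
cv2.destroyAllWindows()
```

The point isn't that frame differencing is the answer; it's that a human-designed pipeline like this runs in real time on modest hardware, and smarter hand-written heuristics can be layered on top of it.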
Elon Musk's Tesla has not achieved this level of AI yet, so he is asking everyone to stop until they catch up with the front-runners...
Bingo. Notice who else signed that letter. It was the various other big-tech companies. None of them seriously think Skynet is about to take over. It's 100.0% about them being able to restrict their competitors, so they (big tech) can catch up. IMO, Elon was more interested in restricting Twitter competitors than Tesla competitors.
"Is AI really smart" is one of those discussions where you and your friends grab a beer, and have a great chat. "Pausing AI" (which I call "Pausing Software using Government Force") is more akin to pointing a gun at someone.
Without doxxing what I'm working on, that gun isn't just pointed at software development in general; I could easily see it being pointed at my project itself, which I'm sure big tech would see as a threat to their market share. I don't even consider the project "AI," but as we discussed earlier, the main difference between software and "AI" is a marketing label. So that's perhaps why I take this topic a little personally, hah.