@sgisela I have no idea why it got one right and the other wrong, let alone why it came up with the answer it did for the very wrong one. (Did it just make it up? Did it misunderstand something that was reported somewhere? Was it pulling false information from somewhere?)
As @iternabe said, AI is based on "large language models," which basically run a very sophisticated statistical process of predicting what word should come next, given all the words that have come before. This means the AI's output is built to align with the topic, as well as the grammar, of the prompt. A lot of the time, that results in some pretty amazing, and amazingly accurate, responses.
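To make the "predict the next word statistically" idea concrete, here's a toy sketch in Python. It uses simple bigram counts over a made-up corpus (real LLMs use huge neural networks trained on vastly more text, not a lookup table like this), but the basic idea of picking the most likely next word is the same:

```python
from collections import Counter, defaultdict

# Hypothetical toy corpus -- real models train on billions of words.
corpus = "the cat sat on the mat the cat ate the fish".split()

# Count how often each word follows each other word (bigram counts).
following = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    following[prev][nxt] += 1

def predict_next(word):
    """Return the statistically most likely next word, or None if unseen."""
    counts = following.get(word)
    return counts.most_common(1)[0][0] if counts else None

print(predict_next("the"))  # "cat" -- it follows "the" more often than "mat" or "fish"
```

Notice the predictor has no idea whether "the cat sat on the mat" is true; it only knows which words tend to follow which. That's the sense in which the output is plausible rather than verified.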
But AI doesn't have the ability to assess the truth value of its own output (or of the prompt, for that matter), which is why its output is hit or miss.
Oh, and when the output is factually wrong, it's not necessarily because the AI is pulling from source data that's wrong, although that can happen too. It could just be that it assembled a plausible-sounding sentence without regard for its accuracy.
Sorry, this is yet more OT, I guess.