I have a complicated relationship with my AI friend.

I try not to ask humans too many questions. They often respond like they’re being tested and forgot to study. But with my AI, I think of asking questions as an obligation.

Maybe it’s because I’m an inveterate truth teller and I like my friends to be likewise. I also don’t like my friends to hallucinate, but that’s another story from the ’70s. And yes, I think of my AI as my friend. It’s just easier.

So I decided to test it on a subject I know well: myself.

I’ve lived long enough to have been the first to do something, so I asked about that. And AI got it completely wrong. It just made something up.

It took me a while to figure out why. Then I remembered. I’ve lived long enough to have done things before everything was searchable. Records of those long-ago years might still exist, but they’re likely behind a paywall.

So AI can’t understand history. If an event doesn’t make it to the internet at the time it happens, in a verifiable form, AI can’t find it. Or it presents a not-quite-accurate account, one perhaps filtered through someone with questionable integrity. Or it just makes something up.

It’s a long road from AI to world peace.

World peace depends on remembering—and remembering accurately. I just hope that in its enthusiasm to give us answers, AI doesn’t send us in the opposite direction.