A long time ago, when I was in college exploring a brand-new field of study called "Computer Science," my fellow students and I were often frustrated by the exacting discipline the fundamentals required. In the binary math that drives every program, there's a 1 and a 0. A right and a wrong answer. There was no grey zone. Our mentors taught us early on: "Bullshit in, bullshit out!" While I knew that two and two equals four, my computer didn't. I needed to write a simple program that told it: "2+2=4".
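Something like this little program, in effect (a minimal sketch in C, which is my stand-in here; whatever language we actually used back then is lost to memory):

    #include <stdio.h>

    int main(void) {
        /* The machine holds no facts of its own; it only prints  */
        /* what we explicitly instruct it to compute and display. */
        printf("2+2=%d\n", 2 + 2);
        return 0;
    }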
Fast forward to today, and we have large language models that collect and compare billions of bits of data at astronomical cost.
These models cannot distinguish between data that represents reality and data that represents fantasy.
If the LLM is fed a diet of accurate information, the resulting output will probably be fairly similar to the reality we occupy.
Feed it bullshit and you can conjure up stuff like this artificial technology from some artificial past, circa 1960?

Look . . . The Titanic didn't really sink. It was stolen and taken to Brazil!
While making up ads like this can be a lot of fun, the ability to do so reveals the darker side of an LLM's potential.
"Bad Actors" can seed hundreds of thousands of 'fake' ideas into the historical timeline of their choice to artificially 'alter' the past.
In time, other models and search engines will roll these bits of information into the assessments and answers they "innocently" provide. Some younger people, and some of those from different cultures, will treat these generated answers as factual.