A very interesting post at the Castalia House blog about whether it is possible for hard science fiction to be accurate. Central quote:
Science also hates scientific accuracy in stories. Science is an ever-evolving field and while certain things are constant—like Newton’s Law of Gravity—new discoveries are upending stuff all the time. Like Newton’s Law of Gravity, which proved inadequate to the task of calculating the orbit of Mercury. It took Einsteinian Relativity to let us do that.
Science changes constantly, and today’s rigidly scientifically accurate Hard SF masterpiece can, in one day, be relegated to the dust bin with all the other stories that got the science wrong. It’s happened.
I’m not saying scientific accuracy is a bad thing, but rather that minute and pedantic adherence to current scientific knowledge isn’t necessary for a good Science Fiction story. Audiences don’t want realism, and they don’t want accuracy. They want a good story, well told, with a veneer of verisimilitude: a story that is believable enough to believe in, whether it’s actually scientifically accurate or not.
What is interesting is that the terms, and even the mental models, needed to describe future science do not exist until someone invents them.
A good example of this is the Star Trek: The Next Generation episode “The Best of Both Worlds”. In the end, the protagonists defeat the Borg by accessing their group mind and implanting a command to sleep.
Basically, they hacked the Borg. Except at no point did the show use the word “hacked” because the term was not yet in widespread usage. So the show had to use clunkier phrasing like “neural pathways” and “subcommands” and so forth because the concept of hacking an opponent’s computer network as part of information warfare, instantly recognizable in the 21st century, did not yet fully exist.
The best SF stories use their technical and scientific elements as a backdrop to the story.