We’ve often imagined interaction with aliens as some form of ‘Death Star’ battle. Yet two scientists have crashed that terrifyingly beautiful dream with a new theory: alien malware.
Two astrophysicists, Michael Hippke of Germany’s Sonneberg Observatory and John Learned of the University of Hawaii, warn that aliens could destroy life on Earth using nothing more than a message infected with a virus.
Sounds like a plot from a movie, right? Well, I thought so too. But as I kept reading the article, the reality of it became increasingly cool and simultaneously scary. (And disappointing. I was hoping for space battles!)
In their research paper, they propose various scenarios in which extraterrestrial intelligence (ETI) could threaten our world using malware. They begin by reasoning that anyone, even an amateur astronomer pointing a telescope at the sky in search of alien signals, could unwittingly trigger the apocalypse.
The astrophysicists discuss the different kinds of messages an ETI could send. They start with a simple text reading, “We will make your sun go supernova tomorrow”. Whether it came from an ETI or a well-masqueraded hacker, it could create panic among us humans. But if it were sent to a single location and delivered to a single system, it could probably be contained without inducing panic.
In another scenario, they hypothesise that aliens could be intelligent enough to trick humans into compliance by offering something we desperately need, deceiving us into accepting it (basically mind manipulation). They propose a hypothetical example: aliens could send a message that says, “We are friends. The galactic library is attached. It is in the form of an artificial intelligence (AI) which quickly learns your language and will answer your questions. You may execute the code following these instructions…” That does sound appealing enough to open and execute.
And even if the message reached only a small government body that tried to contain the situation by isolating the alien malware – say by sending it to the moon and blowing it up so that it’s shut down – the AI would still likely have human contact.
“Even in a military-style, adamant experiment, there will still be humans involved who go home after examination work with their own feelings,” the astrophysicists continued. “Even if everything is officially secret, whistle-blowers might get some news out to the public. Quickly, there could be a community on Earth in favor of letting it out for religious, philosophical etc. reasons.”
They further state that “If the AI promises to cure cancer, or offers a message of salvation, a cult could form. Maybe (or maybe not) a majority of the population would be in favor of releasing the AI. Should, or even could, a democratic government work against the majority of its people? Dictatorships are unstable and eventually overthrown; the AI will be eventually released.”
So, with no backup plan in hand, the AI would eventually get out and could end humanity.
The scientists conclude their study by advising humans not to try decrypting such a message, even on an advanced computer. They simply tell us that unless a message is trivially simple – easily printable images or plain text – we must destroy it. Because while the risk to humanity may be small, it is still certainly a risk.
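The authors’ advice boils down to a whitelist: keep only content a human can inspect directly, and discard everything else. As a toy illustration of that idea – entirely my own sketch, not code from the paper; the function name and length limit are invented – such a filter might look like:

```python
def is_plain_printable(message: bytes, max_len: int = 10_000) -> bool:
    """Return True only if the message decodes as short, printable ASCII text.

    This mirrors the paper's advice in spirit: anything that is not
    trivially inspectable plain text should be rejected (destroyed).
    """
    if len(message) > max_len:
        # Overly long payloads are suspect: too much to inspect by eye.
        return False
    try:
        text = message.decode("ascii")
    except UnicodeDecodeError:
        # Non-text bytes (e.g. executables, archives): reject outright.
        return False
    # Allow printable characters plus common whitespace; nothing else.
    return all(c.isprintable() or c in "\n\r\t " for c in text)

# A plain-text warning passes; a binary-looking header does not.
print(is_plain_printable(b"We will make your sun go supernova tomorrow"))  # True
print(is_plain_printable(b"\x00\x01\x02\x03"))  # False
```

Of course, a real gatekeeper would be far stricter (and, as the paper argues, ultimately futile against a superior intelligence); the point is only that “plain text in, everything else destroyed” is a mechanically checkable rule.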
The probability of receiving any such message from an ETI is very low, and even if we do, it is likely to come from the “good” guys. The whole study is about articulating what could happen otherwise. It is always wise to weigh the risks ahead and make the right choice before it’s too late.
In my opinion, if things do go south, at least one good thing would come of it: you would know there is intelligent life out there before you go “ka-boom”.