In his iconic novel Do Androids Dream of Electric Sheep?, which inspired Blade Runner, author Philip K. Dick mused upon the ethical and philosophical questions that emerge when artificial beings develop human-like consciousness. The novel explores whether a man-made entity is capable of taking on the characteristics of sentience, such as self-awareness, the capacity to feel emotion, and the ability to perceive. What would this mean for the boundaries that exist between human and artificial intelligence?

While we are (hopefully!) still some way from technology turning on us, the questions that Dick raised in this book are, today, more relevant than ever. 

Unprecedented Access to Artificial Intelligence

This is, of course, where AI hallucinations come in. With the recent influx of Large Language Models (LLMs) such as ChatGPT, we have come face-to-face with artificial intelligence in a way never before possible. The benefits and the drawbacks are both numerous and controversial. But one of the more fascinating/frustrating/frightening/funny (depending on your outlook) things to come out of this tech roll-out has been AI hallucinations. These ‘hallucinations’ are when LLMs present false, misleading or irrational information as if it were fact. Many users have found that the answers these systems give to their questions are demonstrably false. This generally seems to happen when an LLM is not trained on data of high enough quality, or is not given enough context to work from. 

Can A Machine Hallucinate?

The term that has been chosen for this bug, ‘hallucinations’, is an interesting one. At first it may seem paradoxical, seeing as hallucinations are traditionally associated with human or animal brains, not cold, hard machinery. From a metaphorical angle, however, hallucination turns out to be a rather accurate term for describing these malfunctions, most notably in instances of image and pattern recognition. 

You know how you see figures in the clouds, or faces in inanimate objects? This is a very natural human phenomenon called pareidolia. AI hallucinations could be described as something similar: the model searches for recognisable patterns in a mess of data. Its misinterpretations can happen due to various factors, including training-data bias or inaccuracy, overfitting, and high model complexity. 

A Reality They Don’t Understand

LLMs use linguistic statistics to produce responses about a reality they don’t actually understand. Their answers may seem correct (with exemplary grammar, semantics, etc.), but sometimes they are complete nonsense. Badly trained AI will have inherent biases and blind spots, meaning that it will try to fill in the blanks in an effort to answer your question. However, the ‘blanks’ it comes up with can be pretty out of this world. 
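To make that ‘filling in the blanks’ concrete, here is a deliberately tiny sketch of the statistical idea at work. It is nothing like a real LLM under the hood (those learn billions of parameters over tokens, not a handful of word counts), but it shows how a system can produce a fluent, confident sentence purely from what is statistically likely in its training data, with no notion of whether it is true:

```python
import random

# A toy "statistical parrot": for each word, the words that most often follow it
# in some imagined training text. Real LLMs learn vastly richer statistics over
# tokens, but the principle is the same: pick what is plausible, not what is true.
next_word_counts = {
    "the":       {"capital": 5, "moon": 2},
    "capital":   {"of": 7},
    "of":        {"australia": 4, "france": 3},
    "australia": {"is": 6},
    "is":        {"sydney": 5, "canberra": 1},  # the popular misconception outweighs the fact
}

def generate(start_word, max_words=6):
    words = [start_word]
    for _ in range(max_words):
        followers = next_word_counts.get(words[-1])
        if not followers:
            break
        # Sample the next word in proportion to how often it followed the last one.
        choices, weights = zip(*followers.items())
        words.append(random.choices(choices, weights=weights)[0])
    return " ".join(words)

print(generate("the"))
# Most likely output: "the capital of australia is sydney" - fluent, confident, wrong.
```

When the statistics in the data lean towards a common misconception, the most natural-sounding continuation is also the false one, and there is no internal fact-checker to catch it.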

Photo by Possessed Photography on Unsplash

Some people have (rather romantically) compared AI hallucinations to human dreams, in that they creatively meld seemingly random data, without basis in logic. These wacky responses can have their benefits. They can be used to inspire ‘out of the box’ thinking, or as starting points for writing, music, or art. However, they can be harmful too. They can spread fake news and false information, which can obviously have dangerous consequences. You do not want to risk AI hallucinations when it comes to self-driving cars or medical diagnoses! 

In an attempt to salvage the reputation of AI with the public, scientists are waging a war on hallucinations, devising methods to combat them. 

How Asking Better Questions Gives You Better Answers

One of these methods is ‘Prompt Engineering’. This puts the onus on us, the users, to think carefully about how we formulate the questions we ask LLMs. It’s about asking the right question. A prompt is basically the context and set of instructions that you give an LLM, like ChatGPT, to get an appropriate response. A successful prompt includes clear context and perspective on what you’re asking, gives the AI generator a ‘role’, and even outlines how you want the response to be structured. 

This is not just smashing some keywords into Google. For the best, most accurate result, multiple prompts are sometimes necessary. Basically, you must be fully in control of your input to guide your output, which is a way of working with the existing system without intrinsically changing its programming. The higher the quality of the prompt you put in, the higher the quality of the response you will get out. 
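As a rough illustration of what that looks like in practice, here is a minimal sketch using the OpenAI Python SDK’s chat-completions interface (the model name, wording, and topic are placeholders, not a prescribed recipe). The system message assigns the ‘role’, and the user message spells out the context, the task, and the structure you want back:

```python
from openai import OpenAI

client = OpenAI()  # expects your API key in the OPENAI_API_KEY environment variable

# A vague prompt ("Tell me about mushrooms") leaves the model to fill in the blanks.
# A structured prompt supplies a role, context, and the shape of the answer you want.
response = client.chat.completions.create(
    model="gpt-4o",  # placeholder; use whichever model you have access to
    messages=[
        {
            "role": "system",
            "content": (
                "You are a careful, plain-spoken educator. "
                "If you are not certain about a fact, say so rather than guessing."
            ),
        },
        {
            "role": "user",
            "content": (
                "Context: I'm writing a beginner-friendly blog post about psychedelic retreats.\n"
                "Task: Explain what 'set and setting' means, in three short bullet points.\n"
                "Format: plain language, no jargon, under 100 words."
            ),
        },
    ],
)

print(response.choices[0].message.content)
```

Each extra constraint narrows the space the model has to guess in, and if the first answer misses the mark, refining the role, context, or format in a follow-up is the ‘multiple prompts’ part of the job.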

Photo by Igor Omilaev on Unsplash

How ‘Prompt Engineering’ Relates To Psychedelic Tripping

“Fascinating stuff!” you might say. “But what has this got to do with psychedelics?!” Well, not loads to be honest, apart from the invocation of ‘hallucinations’. But the methods used to inhibit AI hallucinations have got us thinking about how we influence our own purposeful hallucinations, i.e. psychedelic trips. 

In many ways, this idea of tailoring the input to ensure a good output parallels intention setting, and the other work we do to prepare for a higher-dose psychedelic trip. However, rather than doing it to avoid ‘wrong’ answers (some may say there are no wrong answers in a psychedelic trip!), we do it to get the most profound and generative experience we can. Setting your intention is in some ways the ‘question’. ‘Set and Setting’ is like the context. 

Created via deepdreamgenerator

While, unlike AI, our hallucinations, dreams, and mind-wandering are usually positive things, cultivating a level of focus for a trip can help you to get the most out of it. Make sure you are feeling good mentally, and are in a safe and comfortable environment. Frame your questions/intentions/ideas for a kind universe, rather than a Google scrape. 

Visions of Our Inner Selves

To hallucinate is a very human thing: to manifest visions of our inner desires, fears, wonders, or even the silly video you watched on YouTube last week. Everything within us influences what we see and experience when we trip. Input, output. So while AI hallucinations are definitely not in the same league, or even on the same plane (at least, not yet!), it is possible to take inspiration from the way we approach them when it comes to our own forays into psychedelic intelligence.