Since the development of the first artificial intelligence program in 1955, artificial intelligence has made its way into many areas of human life. In recent years there have been massive developments in AI for healthcare, artistic creativity, and even the judicial system. Many people assumed we would only live to see it overtake the labour of the blue-collar workforce.
Developments in computer programming show that artificial intelligence can carry out, or provide support in, fields of work otherwise known for requiring years of training and expertise.
Many ethical questions arise when we start to consider how we should embrace AI. If we allow it to become a substitute for human labourers, we should equally consider where those people will go. Do we as a global society have the economic infrastructure to support masses of replaced workers? Do we have the framework set in place so that they can build a life in which they live well, rather than sink into unemployment and destitution?
Furthermore, if artificial intelligence reaches human-level intelligence and becomes entirely capable of responding to the environment, at least in the same way we do, is it conscious? Should it be treated as ethically equal to a human? What is consciousness anyway, and how do we measure it?
Rather than attempting to answer these questions comprehensively, this article is intended as background on the subject, giving you the opportunity to think it over for yourself.
What is artificial intelligence?
Artificial intelligence is generally defined as ‘any computer program that is capable of carrying out tasks that otherwise require human intelligence’. More specifically, tasks that involve complex cognition, such as visual perception, language comprehension, and decision making.
It was first developed in 1955 by Herbert Simon and Allen Newell, two researchers who combined their knowledge of computer science, cognitive psychology, and economics to create the Logic Theorist.
This program was designed to echo human problem-solving skills in order to prove advanced mathematical theorems. Today, after more than half a century of trials, failures, successes, and periods of staggering growth, we live in a world of self-driving cars and AI customer service. A world where we wake up every day and direct all our questions to AI-powered search engines.
It’s always interesting to hear other people’s views and expectations of AI. I find that people have a vast range of things to say on the subject. Some are terrified that AI will take over the world, that their skills will become obsolete, and that we will drift towards a world bereft of human values.
Others look forward eagerly, hoping for a world where they can upload their consciousness to a drive and outlive their bodies, bypassing a natural death. Many fantasize about a future where AIs live in our houses and take over the mundane tasks, effectively becoming our mechanical slaves.
Few people imagined that AI might be the doctor that meets us at the clinic to prescribe medicine or surgery when we’re ill. Fewer still saw the possibility that artificial intelligence could be the psychologist listening to us speak about our deepest feelings and experiences, offering counsel or administering a diagnosis.
Artificial Intelligence in Healthcare
One very noteworthy example of artificial intelligence proving its utility is in predicting malignant breast cancer. When a doctor sees an unusual spot on a mammogram (breast scan), they need to decide whether or not to operate. Of course, the safest thing to do would be to operate, in case the tumour is malignant and cancerous. However, surgery is extremely taxing on an individual, and it seems a great shame to undergo an operation that isn’t strictly necessary.
When a human is trying to plan in response to a complex problem, we try to take as much data into account as possible. A doctor in this situation might try to remember everything they learned about tumour shape and size, comparing this case to all past cases they’ve encountered in their career. They might try to consider the genetic variables related to the case, whether breast cancer runs in the family, and so on. They might also consider the patient’s age, lifestyle, and diet as factors for whether this particular tumour is likely to turn malignant or not.
The general fear of misdiagnosing, of telling a patient they don’t need surgery when it later turns out that they do, can also play a role. So many people who don’t really need surgery end up getting it anyway, as a result of human bias, misdirected caution, and error.
The key difference between human prediction and the prediction power of artificial intelligence is working memory. Working memory refers to the conscious part of memory in which information is held and used to control the execution of tasks. Human beings have a very limited working memory. The rule of thumb for how many things a person can keep in conscious mind at one time is 7 ± 2 items; that is, it varies between 5 and 9 items depending on the individual and the context.
We can only actively process a certain amount of data at a given moment. The amount of data artificial intelligence can process is, by comparison, effectively limitless, and it can process it almost instantly. Communication between neurons (brain cells) is fast, which is why we don’t sense a delay between deciding to pick something up and doing it, or when responding to a comment. However, compared with our artificial counterparts, whose signals travel through circuitry at close to the speed of light, we are considerably slower.
When we consider how important large data samples are for accurate predictions, we can see a great disparity between how well people can do and how much better it could possibly get.
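To make the contrast concrete, here is a minimal sketch of how a program can weigh many variables at once. The features, weights, and bias below are invented purely for illustration; a real diagnostic model would estimate them from thousands of screening cases.

```python
import math

# Hypothetical feature weights -- invented for illustration, not clinically derived.
WEIGHTS = {"size_mm": 0.08, "irregular_margin": 1.2, "age": 0.02, "family_history": 0.9}
BIAS = -4.0

def malignancy_probability(case):
    """Weigh every feature simultaneously with a logistic function --
    no 7 +/- 2 working-memory limit on how many variables are considered."""
    score = BIAS + sum(w * case[k] for k, w in WEIGHTS.items())
    return 1 / (1 + math.exp(-score))

low_risk = {"size_mm": 4, "irregular_margin": 0, "age": 35, "family_history": 0}
high_risk = {"size_mm": 22, "irregular_margin": 1, "age": 61, "family_history": 1}
print(round(malignancy_probability(low_risk), 3))
print(round(malignancy_probability(high_risk), 3))
```

A doctor juggles these factors a few at a time; the formula above applies all of them in one pass, and could just as easily apply a hundred.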
Artificial Intelligence in Clinical Psychology
Here we need to define two terms: clinical prediction and mechanical prediction.
Clinical prediction is when a clinician (in the following case, a clinical psychologist) makes a prediction about a person’s disorder: what disorder it is and how it will develop over time. They do this by drawing on their expertise and years of experience.
Mechanical predictions are predictions made using statistical formulas and algorithms. While humans can work through equations by hand, they can’t process the amount of data that computer programs can. Considering this, the following meta-analysis is essentially a computer-versus-human comparison.
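In its simplest form, a mechanical prediction is nothing more than a fixed formula applied the same way every time. The intake scores, weights, and cutoff below are hypothetical, standing in for coefficients that a real actuarial tool would estimate statistically from outcome data.

```python
# A toy "mechanical" prediction rule. The weights and cutoff are invented
# for illustration, not taken from any actual study or instrument.

def mechanical_risk(intake):
    """A fixed weighted sum of intake scores: identical inputs always
    yield identical outputs, with no fatigue, mood, or anchoring effects."""
    score = (0.5 * intake["symptom_severity"]
             + 0.3 * intake["prior_episodes"]
             + 0.2 * intake["functioning_decline"])
    return "high risk" if score >= 2.0 else "low risk"

print(mechanical_risk({"symptom_severity": 3, "prior_episodes": 2, "functioning_decline": 1}))  # high risk
print(mechanical_risk({"symptom_severity": 1, "prior_episodes": 0, "functioning_decline": 1}))  # low risk
```

The point is not that the formula is clever; it is that it is perfectly consistent, which is exactly where clinical judgment tends to wobble.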
A meta-analysis that examined studies measuring the difference in accuracy between clinical and mechanical prediction found that ‘mechanical prediction often outshines clinical prediction; that is, when it is not superior, it performs as well as clinical prediction’.
They found that one variable in particular influenced how much mechanical prediction outperformed human prediction: whether there had been a clinical interview. The clinical interview, of all things, was a source of error.
According to these results, a computer program would be more accurate in analysing the nature and development of a patient’s condition. The authors propose that the human contact between psychologist and client was a main source of error, affecting both diagnosis and prognosis. This would theoretically be a good argument for substituting (at least to some degree) a therapist with an AI program. If it is more accurate, why not? Well . . .
Imagine a person you know is suicidal. They have been suffering from psychological disturbances for quite some time, perhaps crippling anxiety or depression, and are at their wit’s end. Imagine that they are in the process of coaxing themselves to the conclusion that no life at all is better than the life of suffering they have been enduring.
They call an emergency suicide hotline and are met by a calm, clear, robotic voice. The last sliver of hope is spent on an artificial system that feigns human concern. While I do not want to be outright cynical about that quite real possibility, I simply can’t imagine that in cases like this an AI system could be at all comforting. It could very well have the opposite effect; who knows. These are all things to consider as we edge our way toward an AI-prevalent world.
Artificial Intelligence & Creativity
Well, if you have made it this far in the article and are enjoying it, then I’m excited to engage you in this next part. Separately, creativity and artificial intelligence are each fascinating topics. Combined, they are even more so.
What is Creativity?
There are many different definitions of creativity out there, and they often differ depending on the context. Describing a person as creative, for example, isn’t the same as describing creativity as a psychological construct or a cognitive ability. Many artists conceptualize it in their own subjective ways too.
But it’s generally defined like this: the ability to create something that is both novel and useful. The interesting questions I will attempt to provide some insight into here are: How does creativity work in people? How does it work in AI? How are the two similar or different?
Creativity and Brain Networks
A network in the brain refers to several spatially distant areas (bits that aren’t close together) being activated at the same time in a particular kind of state or when doing a particular kind of task.
The default mode network refers to a set of brain areas that activate when we are engaged in ‘self-generated’ thought, among other things. It is the neural mechanism involved in daydreaming and mind wandering. When people think about their life in an autobiographical way, this network gets activated.
As fun and lovely as this network sounds at face value, it is actually correlated with a lot of psychological pathologies, such as anxiety, depression, anti-social behaviour, and even schizophrenia. It also behaves abnormally in people with Alzheimer’s disease; they seem to lose the ability to activate and deactivate this network with conscious attention, as most people can.
Creativity researchers have long presumed the creative process to have two stages: the idea generation stage (where the creative thought pops into the mind) and the idea evaluation stage (analysing the idea more deeply).
The default mode network is active during idea generation. During idea evaluation, something very interesting happens: a second network, one with a famously antagonistic relationship to the default mode network, starts to work together with it.
This central executive network is usually active when you are planning, analysing, or organising. Normally these two networks cannot both be active at the same time; it’s either one or the other. In this particular instance, however, they stop ‘fighting’ and work in synchrony. Highly creative people show much stronger functional connectivity between these two brain networks.
There are many different manifestations of creativity in artificial intelligence, depending a lot on how a system was programmed to begin with. If it was programmed with a particular goal in mind, or with rules in place, its creativity will go in that direction, or work within the permitted framework.
Researchers have drawn on this same neural arrangement when building creative artificial intelligence. Some programs are designed with a similar division between generation and evaluation, loosely mirroring how the default mode and central executive networks operate in human creativity.
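The two-stage loop is easy to sketch. Below, a toy ‘generator’ recombines stored words loosely (standing in for idea generation) and a separate ‘evaluator’ scores each candidate deliberately (standing in for idea evaluation). Both the vocabulary and the scoring criteria are invented for illustration.

```python
import random

random.seed(0)  # fixed seed so the run is reproducible

NOUNS = ["river", "circuit", "mirror", "garden"]
VERBS = ["whispers", "folds", "ignites", "remembers"]

def generate():
    # Generation stage: loose, unconstrained recombination of stored material.
    return f"The {random.choice(NOUNS)} {random.choice(VERBS)} the {random.choice(NOUNS)}"

def evaluate(idea, seen):
    # Evaluation stage: a deliberate scoring pass. We reward phrases not
    # produced before and penalize pairing a noun with itself.
    words = idea.split()
    novelty = 0 if idea in seen else 1
    coherence = 0 if words[1] == words[-1] else 1
    return novelty + coherence

seen, best = set(), (-1, "")
for _ in range(20):
    idea = generate()
    score = evaluate(idea, seen)
    seen.add(idea)
    if score > best[0]:
        best = (score, idea)
print(best)
```

The generator on its own produces noise; the evaluator on its own produces nothing. Only the two working in tandem, like the two brain networks, yield something selected for being both new and fitting.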
In the same way that artificial intelligence can only be creative according to its programming and the data set it has been exposed to, human beings can generally only be creative according to their genetic code and the things they have experienced in life. Creativity is always some combination of what already exists in the mind: our genetic makeup is analogous to the computer program, and our past experiences are analogous to the data set.
Creative artificial intelligence programs can create abstract art, write songs, and write scripts for movies. Many music corporations already employ AI to generate new pop songs; is it any wonder they all sound the same?
Artificial Intelligence and Consciousness
The question of ‘Can artificial intelligence ever be conscious?’ is difficult to answer; the scientific community is having a hell of a hard time trying to understand consciousness as it is. It’s a complicated topic, full of dead ends and spirals of uncertainty even when considering only humans.
What is consciousness anyway? You know that you are conscious, and you know that it gives rise to your experience of existing. It is the source of every little thing you do, say, and decide. It’s somehow silent, yet loud; everywhere, yet nowhere; familiar, yet intangible.
It’s usually described as ‘the state or experience of being aware of and responsive to one’s environment’. The word state is important here, as it implies there is an experience underlying the responsiveness to the environment. Artificial intelligence is perfectly capable of responding to its surroundings; it can effectively see, hear, analyse, and respond, in much the same way that we do. However, there is nothing to suggest an underlying state or experience. And even if there were an underlying consciousness, we have no real idea how we would go about testing for it.
The topic has proved quite the conundrum for science. Its investigation has been passed around between philosophers, doctors, psychologists, and neuroscientists. With the science of consciousness, we cannot use the usual methods at our disposal; conscious experience is extremely difficult to test and measure. How do we measure it? It isn’t even possible to know whether, when another person sees a strawberry, they are seeing the same colour ‘red’ that you see. There is no way for me to know that when you see ‘red’ you are not in fact seeing my ‘green’. We could go our whole lives without ever realizing the incongruency. I can know that I am conscious, but I can’t really know that you are, just because you say ‘trust me, I’m conscious’.
If an AI system one day says ‘I’m conscious, I can experience pain and sadness, and I deserve the same rights as humans’, then without a proper means of measuring the proclaimed consciousness, we are in something of an ethical predicament.
Researchers have been searching ardently for a ‘neural correlate of consciousness’, or NCC for short. The neural correlate of consciousness would be the brain activity that directly corresponds to conscious experience. It’s quite a large topic, so we will talk more about it in later articles.
When investigating consciousness, science notes two types of problems.
The ‘easy problem’ is the problem of finding the physical basis of consciousness (the NCC just mentioned). Where is it? In our brain? What part of the brain? Does it differ between individuals?
The ‘hard problem’ (so named because of the difficulty of solving it) is the problem of understanding WHY we are conscious. Why are we not simply automatic? Consciousness itself does not seem to be necessary for survival; at least, it certainly isn’t obvious why it would be. So why has evolution endowed us with such a sharp sense of existence? Why the almost painful awareness? Considering that even science is bamboozled by this question, it is really anyone’s guess. Why do you think we are conscious?
According to the theory of ‘panpsychism’, consciousness pervades the universe and is a fundamental feature of it. In a sense, it is a single common feature connecting all things. Simpler forms of life are thought to have simpler forms of consciousness, and human beings, complex as we are, sit (according to some versions of the theory) at the higher end of this spectrum.
Artificial intelligence has the potential to move into many areas of human life. There are both obvious benefits and potential hazards. Something key for us to consider is the impact on human life and human values. It would be a great shame to let the science run away with itself without any philosophical input. This is less a distant dream than an imminent future; anyone paying attention to the past and ongoing development should have no qualms agreeing with that.
I would like to draw a comparison between artificial intelligence and an echo. An echo resonates with the sound of a human voice. It sounds very much like the voice it repeats, but it is really just a repetition, sound waves bouncing off smooth, hard surfaces. For all intents and purposes, whatever the echo says is the same as what the person’s voice said, but it lacks the conscious starting point. Artificial intelligence is likewise a mirror of human values, or at least of the values of those who programmed it. It reflects the way we think and the goals we are oriented towards.
Before we get totally carried away with the marvel of this echo of human consciousness and human values, it might be useful to first understand what our values really are, so we can embrace artificial intelligence in the right way and minimize any potentially negative consequences.
If you’ve made it this far and enjoyed the read, please consider liking the post to show support. We are a new journal, so it means a lot to us. Like us on Facebook if you want to be kept in the circle of what we do next.
References

A Very Short History Of Artificial Intelligence
The Seven Deadly Sins of AI Predictions
International evaluation of an AI system for breast cancer screening
Predicting judicial decisions of the European Court of Human Rights: a Natural Language Processing perspective
Artificial intelligence and counseling: Four levels of implementation
Clinical versus mechanical prediction: A meta-analysis.
Computer Models of Creativity
Engineering Creativity in an Age of Artificial Intelligence
Generating original ideas: The neural underpinning of originality
The Easy and Hard Problems of Consciousness: A Cartesian Perspective
On the Search for the Neural Correlate of Consciousness