What Psychology Can Teach Artificial Intelligence

Source: Cottonbro studio / Pexels

Artificial Intelligence (AI) seems to dominate many aspects of society and the sciences, including psychology. Machine learning and deep learning have become far more common, and in neuropsychology and clinical psychology, there has been an increase in AI methods and tools.

Meanwhile, cognitive psychology and AI continue their fruitful collaborations in studying human and artificial minds. What does psychology have to offer AI? I think it can offer quite a bit.

AI, just like data science, excels at prediction. One algorithm with a specific parameter configuration may outperform another with a different configuration. That is good when predicting tumors from X-ray images or recognizing objects for self-driving cars. However, in many scenarios, we want not only the best prediction but also the best explanation.

Mike Jones at Indiana University, Bloomington, nicely summarized the difference between psychology and artificial intelligence (or actually, cognitive science and data science) in the following way:

Within the cognitive sciences, we have been considerably more skeptical of big data’s promise, largely because we place such a high value on explanation over prediction. A core goal of any cognitive scientist is to fully understand the system under investigation, rather than being satisfied with a simple descriptive or predictive theory.

Predicting an outcome is important, but explaining the mechanisms behind getting to an outcome is often at least as important. And here, the cognitive sciences come in.

Having a black-box approach that brings us the highest performance may be very useful, but prying into the box and understanding why it comes to certain decisions will make the process more transparent. Not only does explainable AI have important advantages regarding ethical issues, but it also helps to understand what might be missing to reach maximal performance.

But there is another reason why psychology can teach AI a thing or two. To explain, we need to go back a few centuries. William of Ockham, an English friar born in the small village of Ockham in southeast England, has become a household name among psychologists. Ockham was one of the foremost thinkers of the 14th century. He proposed the principle now known as Occam's razor (the spelling follows the Latinized form of his name), which states that explanations requiring fewer assumptions are more likely to be correct, so unnecessary or improbable assumptions should be avoided.

Put simply: Simple models are better models.

Today’s AI operates on large (very large) datasets and powerful (very powerful), complex (very complex) algorithms. Take deep learning models, which use an artificial neural network that spreads out like a spiderweb of connections, each connection similar to a thread in the web leading up to a node that spreads new threads to other nodes. The complexity becomes clear when you consider the predecessor of the current ChatGPT: GPT-3 was estimated in 2020 to have 175 billion parameters. That was only three years ago.

It is tempting to assume that larger artificial neural networks always yield the best performance, contradicting Occam’s razor. The impressive results of AI models such as ChatGPT may suggest that Occam’s razor does not apply: more complex models with larger datasets always seem to perform better.

Recently, Guido Linders and I worked on developing a dialog act classification system, a computer system that would take in a sentence and classify the intention of that sentence. Take, for instance, the following example:

We are having dinner, and I politely ask, “Can you pass me the salt?”

You politely respond (without taking action), stating, “I certainly have that ability.”

This would frustrate our diner at least as much as if I were to state my true intention bluntly: “Pass me the salt now.”

Such a dialog act classification system is useful for tools like chatbots and intelligent (tutoring) systems. They allow for communication with the user in natural language and—importantly—will respond naturally to the user.


“Have just had it” will then not yield a response such as “Great that you have it, sir,” but rather “I am sorry to hear that.” Dialog act classification systems classify utterances into appreciations, opinions, floor-grabbers, clarifications, and so on.
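To make the idea concrete, here is a toy sketch of a dialog act classifier. It is purely illustrative and not the system from our study: the cue words and category labels below are invented, and real systems learn such mappings from annotated corpora rather than hand-written rules.

```python
from collections import Counter

# Hypothetical cue words per dialog act -- invented for illustration only.
CUE_WORDS = {
    "question": {"can", "could", "would", "what", "where", "who", "why", "how"},
    "appreciation": {"thanks", "thank", "great", "nice", "wonderful"},
    "opinion": {"think", "believe", "feel", "probably", "seems"},
}

def classify_dialog_act(utterance: str) -> str:
    """Return the dialog act whose cue words best match the utterance."""
    tokens = [t.strip("?!.,") for t in utterance.lower().split()]
    scores = Counter()
    for act, cues in CUE_WORDS.items():
        scores[act] = sum(1 for t in tokens if t in cues)
    act, score = scores.most_common(1)[0]
    # Fall back to a plain statement when no cue word matches.
    return act if score > 0 else "statement"

print(classify_dialog_act("Can you pass me the salt?"))    # question
print(classify_dialog_act("Thanks, that was wonderful!"))  # appreciation
```

A chatbot could use such a label to decide whether to answer, act, or sympathize, which is exactly why the salt-passing example above goes wrong when the intention is misread.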

Well over 50 of these dialog act classification systems have been developed. Some used simple algorithms, others used machine learning algorithms, and others used deep learning techniques. Let’s say some were simple, and others were very complex.

When we compared the results of these dialog act classification systems, the systems that used complex deep learning algorithms (the very complex ones) classified dialog acts better than the simpler ones, but only barely.

We found only very marginal differences in the performance of the most sophisticated and complex algorithms and the absolute simplest algorithms. William of Ockham would have smirked.

When we conducted our analyses with a simple algorithm and opted for explanation over prediction, we found that it was not the combination of many linguistic features that best explained dialog act classification. Neither the more complex linguistic features nor the variety of features best classified the dialog act. Instead, the simplest algorithm with only the most basic linguistic information, the words in the sentence, best explained the speaker’s intention. Dialog act classification performed best with the simplest linguistic features and the simplest algorithm.
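The "simplest features, simplest algorithm" setup can be sketched as a bag-of-words Naive Bayes classifier, one of the most basic text classification methods. This is an illustration of that class of model, not our published system; the five training utterances and their labels are invented.

```python
import math
from collections import Counter, defaultdict

# Tiny invented training set: (utterance, dialog act) pairs.
TRAIN = [
    ("can you pass the salt", "question"),
    ("could you open the window", "question"),
    ("i have just had it", "statement"),
    ("that dinner was great", "appreciation"),
    ("thanks so much for the meal", "appreciation"),
]

def train(data):
    """Count word frequencies per class -- the only 'features' are the words."""
    word_counts = defaultdict(Counter)  # per-class word frequencies
    class_counts = Counter()            # class priors
    vocab = set()
    for text, label in data:
        class_counts[label] += 1
        for w in text.split():
            word_counts[label][w] += 1
            vocab.add(w)
    return word_counts, class_counts, vocab

def predict(text, word_counts, class_counts, vocab):
    """Pick the class with the highest log-probability under Naive Bayes."""
    total = sum(class_counts.values())
    best, best_lp = None, -math.inf
    for label in class_counts:
        lp = math.log(class_counts[label] / total)  # log prior
        denom = sum(word_counts[label].values()) + len(vocab)
        for w in text.split():
            # Add-one (Laplace) smoothing handles unseen words.
            lp += math.log((word_counts[label][w] + 1) / denom)
        if lp > best_lp:
            best, best_lp = label, lp
    return best

model = train(TRAIN)
print(predict("can you open the door", *model))  # -> question
```

Despite knowing nothing about syntax or semantics beyond raw word counts, models of this kind turned out to be nearly as accurate as the deep learning systems in our comparison, which is the point Ockham would have appreciated.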

Even though it often seems that more data and more complex models yield the best performance in AI, psychology helps to remind us that explanation is as important as prediction, that simple is often better than complicated, and that less is sometimes more.
