
Are humans intelligent enough for the digital age?

02 June 2017 | by Linda Kaye

On the 14th of June the BPS is presenting a talk designed to address public concerns about the increasing use of Artificial Intelligence in the workplace. In preparation for this, we welcome back Dr Linda Kaye with this piece discussing some of the issues surrounding our ability - or lack thereof - to adapt to our increasingly digital world.

Many of our everyday interactions occur through interactive technology, and much of our daily lives are supported by access to these platforms.

Social media is one example of such a context, one that is highly intelligent in nature, allowing us to connect, communicate with each other in multiple ways, and interact with content in a way that would have been unimaginable a little over a decade ago.

Certainly, as a generation of digital natives, we have managed to navigate our way around these new environments and, for the most part, have proved competent at responding to the challenges they bring. However, it seems there is still much we have yet to learn, and it is questionable to what extent we can fully appreciate the complexity of the Artificial Intelligence (AI) embedded in these systems.

A recent blog post by Dr John Suler, entitled “Do the bots like you” makes reference to the phenomenon of social media “bots” - social media accounts which are made to look like they are owned by humans, but are instead programmed to respond in certain ways.

For example, they may be programmed to “like” certain content on social media platforms in order to help develop the visibility of particular accounts or organisations, a phenomenon which has clear commercial applications, but which raises some slightly disturbing questions.

After all, how can we be sure we're actually savvy enough to identify these bots, or that we're truly aware of how much they are influencing our interactions and experiences online?

It is widely understood in psychology that human behaviour is not wholly logical, whereas computers can only behave as they have been programmed to. The famous Turing Test, for example, is one key way of highlighting these differences.

Humans often behave irrationally, and are influenced by factors such as context and emotion, which computers are not. As such, the way we process information (on social media sites, for example) is shaped by many influences, whereas a bot's behaviour is simply a response to algorithms determined by its prior programming - behaviour which will always be rational and logical by nature.
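To make that contrast concrete, a bot's "like" decision can be sketched as a simple deterministic rule. This is a minimal, hypothetical illustration - the function, the post data, and the keywords are all invented for the example, and no real platform's bots are being described:

```python
# A minimal sketch of a rule-based "like" bot (hypothetical example).
# Unlike a human, the bot's decision is fully determined by its rules:
# the same input always produces the same output, with no influence
# from context, mood, or emotion.

def bot_should_like(post: dict, target_keywords: set) -> bool:
    """Return True when the post mentions any keyword the bot promotes."""
    words = set(post["text"].lower().split())
    return bool(words & target_keywords)  # purely logical overlap check

posts = [
    {"text": "Big rally for our movement tonight"},
    {"text": "Lovely weather in York today"},
]
keywords = {"rally", "movement"}

decisions = [bot_should_like(p, keywords) for p in posts]
print(decisions)  # deterministic: same posts and keywords, same answer
```

A human scrolling past the same two posts might "like" either, both, or neither, depending on mood, who shared them, or the time of day - which is exactly the irrationality the bot lacks.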

So what is the outcome of an environment inhabited both by logical (bots) and largely irrational (humans) agents?

The answer is, by and large, chaos.

As an illustrative example, fake news has become a huge societal concern and, as a result, is on the agenda for remedial action. Indeed, YouTube has recently announced that it will offer training for teenagers in detecting fake news.

Although fake news content is not necessarily always a product of artificially intelligent agents, the potential for bots to be programmed to “like” content as a means of promoting the visibility and momentum of radical ideas is highly conceivable. In such cases “likes” can be immensely powerful persuasive tools, as social consensus (or the perception thereof) is one key principle which underpins effective persuasion. Indeed, this is one of six principles of persuasion posited by Cialdini (2001).

The number of “likes” on a given social media post provides a visible indicator of social consensus and, as a result, helps us gauge the extent to which others endorse the viewpoint expressed in the post. So “likes” can be surprisingly psychologically powerful.

This is where AI holds the key to our social media experiences, and is perhaps influencing us in a way in which we do not yet fully understand, prompting some serious questions, such as:

  • How aware is the average person about the extent to which their social media content is targeted in a bespoke way, based on their prior behaviour and interactions?
  • Are we aware that not all social media accounts are “real” (i.e. a reflection of an actual human)?
  • And what (if anything) can be done about them?

Certainly, there is an increasing awareness of the importance of internet safety, but in my opinion, as a society we still have much to learn about how AI is interacting with, and influencing, us on an everyday basis.

If you're interested in learning more, why not attend our event in York on June the 14th?

See the link below for details:
