
Advisor vs Autocrat: AI Friend or Foe? Part 2

Panintelligence
Publish date: 16th April 2021

Part Two of our ‘AI: Friend or Foe?’ series is based on our recent webinar with Panintelligence co-founders Ken Miller and Zandra Moore, along with Denis Dokter (Relationship Officer at Nexus Leeds) and Nick Lomax (Associate Professor in Data Analytics for Population Research). Read Part 1: The Ethics of Algorithms here.

Should AI be an advisor or a decision-maker?

Much of the fear of futuristic, Terminator-like AI comes from the sci-fi fantasy of AI being able to make decisions and take actions completely without human intervention. While some applications of AI are close to autonomous (self-driving cars being one), in many more use cases AI and ML act not as autocrats but as very helpful assistants to humans.

Take cancer screening, for example, where algorithms can quickly and accurately assess an image and then flag areas of concern for review by human eyes. In this scenario, AI acts as a powerful tool and valuable assistant in the process of cancer identification, but it does not make any decisions itself.
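To make the "assistant, not decision-maker" idea concrete, here is a minimal Python sketch (all names and thresholds are hypothetical, not taken from any real screening system) of a workflow in which a model's scores are only used to flag regions for a human reviewer, never to record a diagnosis:

```python
# Hypothetical sketch: a model scores regions of a scan, and anything above
# a review threshold is queued for a radiologist. The software never makes
# the diagnosis itself.

REVIEW_THRESHOLD = 0.3  # deliberately low: better to over-flag than to miss

def flag_regions_for_review(region_scores):
    """region_scores: list of (region_id, concern_probability) pairs from a model."""
    flagged = [
        (region_id, score)
        for region_id, score in region_scores
        if score >= REVIEW_THRESHOLD
    ]
    # The output is advice for a human, not a decision.
    return {
        "flagged_regions": flagged,
        "decision": None,                     # always left to the radiologist
        "requires_human_review": bool(flagged),
    }

# Example: two regions are flagged for review, one is below the threshold.
print(flag_regions_for_review([("r1", 0.82), ("r2", 0.41), ("r3", 0.05)]))
```

The key design choice is that the function has no code path that sets a diagnosis: the algorithm can only advise.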

If we remove the human entirely from the process, things can get more difficult. But it does depend on the level of risk in each scenario. If the algorithm that chooses your Spotify playlist selects a track you don’t like, it’s a minor annoyance at most. But in use cases where the stakes are higher, having a human as part of the process is probably a good thing.

AI can assist with cancer screening

Giving Choice vs Making the Decision

AI already helps to give us choice, but are we comfortable with allowing it to make the final decision?

Take the concept of a ‘smart fridge’: an internet-connected device that monitors the levels of food and drink stored inside and knows when you’ve run out of milk, for example. The smart fridge can either simply alert you to this fact, or it can go one step further and make the decision to order you more milk.

However, this second step may not always be wanted. Perhaps you are about to leave for a week’s holiday and therefore do not want to order more milk. This example illustrates a scenario where the algorithm doesn’t have all the data required to make the right decision. So in this case, you need the human in the loop.
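The advisor/autocrat distinction can be captured in a few lines of code. The sketch below (Python, with entirely hypothetical names; no real smart-fridge API is implied) shows a fridge that always advises its owner, and only acts on its own if the owner has opted in and has not told it they are away:

```python
# Hypothetical sketch of advisor vs autocrat behaviour for a smart fridge.
from dataclasses import dataclass
from typing import Optional

@dataclass
class FridgeSettings:
    auto_order: bool = False        # autocrat mode is strictly opt-in
    away_until: Optional[str] = None  # e.g. "2021-04-30" while on holiday

def notify_owner(message: str):
    print(f"[notification] {message}")

def place_order(item: str):
    print(f"[order] Ordered more {item}.")

def handle_low_stock(item: str, settings: FridgeSettings):
    # Advisor: always tell the human what was noticed.
    notify_owner(f"You're running low on {item}.")

    # Autocrat: only act if the owner opted in AND hasn't said they're away.
    if settings.auto_order and settings.away_until is None:
        place_order(item)
    else:
        # The algorithm lacks context (holidays, changed plans),
        # so the final decision stays with the human.
        notify_owner(f"Reply 'order' if you'd like more {item}.")

# The fridge only advises here, because the owner is away until the 30th.
handle_low_stock("milk", FridgeSettings(auto_order=True, away_until="2021-04-30"))
```

Even in this toy version, the human stays in the loop whenever the algorithm might be missing context.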

The distinction between AI augmenting the human decision-making process and AI making the decision itself is an interesting and important one.

At the moment, we’re much more comfortable with AI giving advice rather than making the final decision. However, this will likely change over time, as AI becomes ever more sophisticated, and ever better at making decisions that humans agree with.

Samsung's smart fridge: should it make decisions for us?

Where do we draw the line with AI? The Ethics of AI

As with many advancements, the line will be continuously drawn and redrawn. Ultimately it comes back to the human. Humans need to see how and why decisions are being made. Is it useful? Is the application good? Is it moral?

We need to be flexible with legislation and regulation as time goes on, and we must ensure accountability (“it wasn’t me, it was the AI!”). Who is accountable for decisions that are made because of algorithms?

An oft-cited theoretical problem with autonomous cars is the scenario where the car is forced to choose between hitting an elderly person or hitting a child. The scenario is somewhat moot (on the rare occasions when this happens with human drivers, is there really any conscious thought, or merely a reflex reaction?), but with an autonomous car, does a human have to programme the software with a priority list of who should die first in such an incident?

Tesla's cars are not autonomous yet, but that is the long-term goal.

The Pitfalls of AI

Though many companies are seeking to build robots and AI that replicate humans, such mimicry can rub people up the wrong way. People need human contact, and as soon as they detect that the ‘person’ on the other end is a machine, they can be put off.

But this will surely diminish as AI gets better and better at ‘being human’. In 2014, a piece of software passed the long-standing ‘Turing Test’ (though not everyone agreed). In time, distinguishing between humans and AI will become increasingly difficult.

Could AI eventually outperform humans in empathy?

We tend to think of futuristic AI and robotics in military situations, but is it possible that AI could one day outperform humans in terms of empathy? What if lonely or ill people could find support, even companionship from a robot?

Sci-fi series like Channel 4’s Humans explore this scenario, where ‘Synths’ (highly realistic ‘synthetic humans’) play the role of emotional or sexual partners to real humans. Such a future seems plausible if or when the technology reaches that level of sophistication, though it would also require a dramatic shift in our attitude towards AI.

In Channel 4's 2015 series 'Humans', synthetic humans provide care and companionship to real people.

Right now, we use medication to help people feel ‘normal’, but in the future, why not other methods? Could AI be used to recreate a person’s deceased family members, allowing that person to talk to them? This scenario was explored in the Black Mirror episode ‘Be Right Back’ and is a fascinating vision of how AI could one day be used.

Trust is Very Important

We may think of algorithms as futuristic, but they have in fact been in use for a very long time, for example with loan approvals or insurance quotes. We have long used historic data to predict future behaviour.

Trust is very, very important, and many big companies are struggling with it right now. There have been instances where algorithms have made the wrong decision, and without a human reviewing that decision, problems occur. Companies should therefore be more open about how they make decisions.

Companies must also declare that each prediction carries a degree of uncertainty. Humans should be able to challenge any decision made about them, whether by a human or an algorithm, and request to see what data a company holds about them.
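As an illustration of what declaring that uncertainty could look like in practice, here is a hedged Python sketch (all field names, URLs, and scores are hypothetical placeholders) in which an automated decision is never returned as a bare yes/no, but always with its confidence, the main factors behind it, and a route for the person to challenge it:

```python
# Hypothetical sketch: every automated decision carries its uncertainty,
# the data that drove it, and a way for the affected person to challenge
# it or request the data held about them.
from dataclasses import dataclass, field
from typing import List

@dataclass
class ExplainedDecision:
    outcome: str                   # e.g. "loan_approved" or "loan_declined"
    confidence: float              # model's estimated probability of the outcome
    key_factors: List[str] = field(default_factory=list)
    challenge_url: str = "https://example.com/challenge"   # placeholder
    data_request_url: str = "https://example.com/my-data"  # placeholder

def decide_loan(approval_score: float, factors: List[str]) -> ExplainedDecision:
    approved = approval_score >= 0.5
    return ExplainedDecision(
        outcome="loan_approved" if approved else "loan_declined",
        confidence=round(approval_score if approved else 1 - approval_score, 2),
        key_factors=factors,
    )

# The decision arrives with its uncertainty and reasons, not as a bare "no".
decision = decide_loan(0.42, ["short credit history", "high existing debt"])
print(decision)
```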

By providing the evidence and the explanation behind the decision and allowing people to challenge it, we should be able to enhance society and use AI, ML and algorithms to improve humankind, rather than destroy it.

If you want to hear more on these topics, watch the full webinar below:
