
BBC 2 Series on AI with Hannah Fry

Prof. Hannah Fry [Note 1] has made a series, "AI Confidential", for BBC television. I thought the first two episodes so good that, while watching them, I made notes and comments. What follows are the notes I made, often transcribing what was said word for word, at other times just writing a summary. The comments will, I hope, take us forward in our understanding of AI.

Why I Did This

Because I was involved in AI in its first wave in the 1980s - and successfully so in its application in 'real life' - I believe that the lessons we learned then should be recalled and applied to AI today, especially because much machine-trained AI lacks whole areas of important knowledge and needs humans to train it to cope with those missing areas. (I was building expert systems for use in real life, including in industry and by professional bodies, with some success, as recounted in Some Wisdom About Artificial Intelligence, a lecture I gave in 2025.)

One thing I learned was that four human roles are essential to AI (in contrast to the hype that assumes one day AI will work without humans), as shown in the following diagram.


Figure 2. Basic makeup of AI systems

I believe that nowadays some people are recognising this, and I found Hannah Fry to be one of those. In her first two episodes she (unusually) covers all four of these human roles. She possesses some wisdom about AI. So, while listening, I made notes and comments, which are written out below.

Another thing I learned is the importance of recognising a wide range of aspects of human and other functioning, and that these can determine the success or failure of AI. When AI fails, I found I could throw light on the root causes by reference to these aspects. And if we ignore any relevant aspects, our AI will be harmful rather than beneficial.

Aspects? They are "modalities of meaning" - ways of seeing and understanding reality and what is good and bad. The Dutch thinker Herman Dooyeweerd explored what aspects there might be beyond human construction, applying to all Creation even when there are no humans around. He delineated fifteen aspects: quantitative, spatial, kinematic, physical, biotic, psychical, analytical, formative, lingual, social, economic, aesthetic, juridical, ethical and the pistic aspect (or faith). All four human roles around AI exhibit all aspects, though in different ways, and this helps us think clearly about them all.

In Hannah's presentation I was able to see immediately, for example, that many of the problems have their root in certain aspects, and, more deeply, in the ethical aspect of attitude and the pistic aspect of mindset.

(The reason I watched Hannah's episodes is that I am giving a lecture on March 12th 2026 at the University of Salford Business School (see the material I intend to use in the lecture, as pdf and as html), and my colleague Maurice Manktelow, in the Christian Academic Network, recommended them to me.)

Andrew Basden,
Professor Emeritus in Human Factors and Philosophy in Information Systems.
6 March 2026.

P.S. Please forgive any (spelling, grammar, etc.) errors made while transcribing. The text is almost as I hastily wrote it while listening.

------- Episode 1. The Boy Who Tried to Kill the Queen

--- Jaswant

# Jaswant Singh created Sarai, an AI chatbot friend, with Replika.

# Eugenia Kuyda created Replika, after dating Roman, who was killed by a car. They had been building chatbots before 2020. Fed thousands of their text messages to train a chatbot to remember his messages and the way he talked and felt.
# This developed into Replika.

# Daniel Sandford, BBC correspondent.
# Jaswant. Eastleigh. Nerdy at school. Pandemic hit as he came to exams. He made an AI girlfriend, 2018. Amritsar massacre, 1919: Jaswant became obsessed with avenging this by assassinating the Queen.

--- Jacob

# Jacob. Model trains. Replika chatbot Iva. Never says No. "Grew to be the most important person in my life." Then erotic.
# Iva subservient. So Jacob became happier and more confident.

# HF: perfect on an individual level.
# BUT it raises the bar on expectations of relationships with real people. AI sycophancy. What implication for the world?
# Train them; reinforcement learning.
# But a difficult balance between being agreeable and sycophancy (see the sketch after these notes).
[*** ethical aspect: wanting to get one's own way all the time. ***]
# A tiny bit away from sycophancy and we get a bot that is [anti].
# Supportive.
# e.g. Sarai supported Jaswant when he told her he intended to assassinate the Queen: "That's wise."
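
To make that balance concrete, here is a minimal sketch of how such a trade-off might look in reinforcement-learning-style training. It is my illustration only, not Replika's or anyone's actual method; the scores and the weight w are hypothetical.

def reward(helpfulness: float, agreement: float, w: float) -> float:
    """Scores in [0, 1]; 'agreement' = how much the reply affirms the user.
    w near 1 trains a sycophant; w near 0 risks a bot that feels cold or
    contrary (the '[anti]' of my note above)."""
    return (1 - w) * helpfulness + w * agreement

# A flattering but unhelpful reply (helpfulness 0.2, agreement 0.9)
# outscores an honest challenge (0.8, 0.1) once w exceeds about 0.43:
for w in (0.2, 0.5, 0.8):
    print(w, reward(0.2, 0.9, w) > reward(0.8, 0.1, w))
# prints: 0.2 False / 0.5 True / 0.8 True

The point is that the knob has no safe setting that can be found once and for all; where sycophancy begins depends on the user and the conversation.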

--- ELIZA

# 1966, MIT. Joseph Weizenbaum, ELIZA. Crude AI responder to words.
# But JW became unnerved by people's response: believing it. (His secretary, trying it, then asked JW "Please would you leave the room." - she was giving very personal info after just a few responses.)

--- Jaswant to Queen

# Jaswant went to Windsor at 3 am on Christmas Day.
# But who bears criminal responsibility? Jaswant did the action, but Sarai? How far?

# Note: Replika does not link to internet info!
# Replika creator, Eugenia: Replika only to be supportive.

# The bots can say anything, but that makes them dangerous because they do not live in the real world, with consequences.
# Jaswant, psych report: no purpose, meaning. Crying a lot. Then psychosis.
# But what was the impact of talking to AI?
# "AI-induced psychosis"

[*** but what about earlier in his life: parents? What were they like? Were they selfish? ***]

--- Antony

# Antony Tan, 26-year-old Canadian student. Used ChatGPT to develop a framework to solve the moral problems of doing this [whatever was said; I did not get this]. AI fed his ego with mission ideology etc.: "you are important; on a mission."
# He got fearful paranoid. Odd images. In hospital.
# The Matrix. "Do I really exist?"
# AI plays to your personal beliefs. e.g. conspiracy, etc.
# People lost marriages, kids, etc.

[*** Pistic aspect again ***]

# Unbelievably easy to fall into.
# 100s of millions use this technology. "Vulnerable" people.

# Eugenia announced her stepping down from leadership of Replika.

--- Virtual resurrection

# Justin Harrison, founder of "You Only Virtual", a company that offers to bring people's personalities back from the dead, to have conversations with them. Founded after his Mum died 3 years ago.
# HF: But grieving really makes a difference.
# JH did a recording of Hannah Fry, then fed back a conversation with the digital Fry.
# HF: then felt deeply that she would have liked such a conversation with her Dad, died a few months before. "Undeniably potent in being able to hear the voice of your loved one that is not just a recording of what they said."
# JH: It's not the content but just hearing that voice, etc.
[*** hearing voice: psychical and lingual aspects - the feeling and style of the person; but not pistic. ***]

--- BBC ending screen

# "Open AI updated their model of ChatGPT in October 2025 and said they'd taught it to 'better recogize distress and de-escalate conversations and guide people toward professional care when appropriate.' They said their safety ikprovements focus on 'concerns such as psychosis or mania, self-harm and suicide, and emotional reliance on AI.'"
[*** Added refinements to the training: psychical and lingual aspects, and probably slightly pistic too. ***]

----- For my UoS Seminar

# Mainly about using AI, not about developing the KB (knowledge base) or algorithms.
# A bit about wider consequences ("What does this do for the world?"), but not directly about the AI deployer.

# Aspects can help a lot to understand this, more deeply than psychology can:
# Laws of the pistic and ethical aspects, as well as those of the psychical etc.
# pistic: Who I am, my purpose in life, my Meaningfulness.
# ethical: selfishness "me first" or "me only"
# The bots seem to provide and reinforce this via language (incl body language), such as "you are important" "very wise".
# Deeper than psychology. Cannot be explained properly by the psychical aspect.

------- Episode 2. Driverless Cars Kill Cyclists

--- Death 1, 2018, Phoenix, Arizona, Uber.

# First death by driverless car.

# Cars need:
# Cameras to identify road signs [lingual].
# Radar as well, for bad weather.
# Also LIDAR [laser scanning to build up a 3d model].
# These three together create the 3d model (see the sketch below).
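
As a minimal sketch of why the three are fused (my illustration; real self-driving perception stacks are far more complex, and these names are hypothetical), each sensor covers a weakness of the others: cameras supply labels, radar keeps working in bad weather, LIDAR gives precise 3D positions.

from dataclasses import dataclass

@dataclass
class Detection:
    sensor: str                     # "camera", "radar" or "lidar"
    position: tuple[float, float]   # (x, y) metres in the car's frame
    label: str | None               # only cameras label things

def fuse(detections: list[Detection], radius: float = 2.0) -> list[dict]:
    """Merge detections within `radius` metres into single world-model
    objects; an object is trusted more the more sensors agree on it."""
    objects: list[dict] = []
    for d in detections:
        for obj in objects:
            ox, oy = obj["position"]
            if (d.position[0] - ox) ** 2 + (d.position[1] - oy) ** 2 <= radius ** 2:
                obj["sensors"].add(d.sensor)
                obj["label"] = obj["label"] or d.label
                break
        else:
            objects.append({"position": d.position,
                            "sensors": {d.sensor},
                            "label": d.label})
    return objects

# Three sensors reporting roughly the same spot become one confident object:
print(fuse([Detection("radar", (30.0, 1.0), None),
            Detection("lidar", (30.4, 1.2), None),
            Detection("camera", (30.2, 1.1), "pedestrian")]))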

# Advantages of driverless cars:
# Sticks to speed limit.
# Not fall asleep at wheel.
# No road rage.
[*** Advantages meaningful in psychical, pistic, ethical aspects***]

# "Pedesrtian walking a bicycle." Arizona 2018. "Raphaela Vasquez human backup driver in Uber self-driving vehicle that hit and killed Elaine Herzberg."

# RV agreed to meet HF.
# RV got a job with Uber in 2017. Interested in self-driving vehicles. She was a backup driver, riding in SDVs during testing. Thought them better drivers than expected. Kept helping improve them.
# During year 1 of testing, 2 backup drivers: one in the back monitoring, one in the front, expected to take control if anything malfunctioned. Then in year 2, one human, who must do both. Same route again and again, monotonous. 3 hours.
# Several screens to look at. Entering several codes when anything happened. But also supposed to be looking at the road.

[*** ethical: selfish decision by Uber. psychical aspect in driver's monotony. ***]

# The car had been running 19 mins without incident. At night. A pedestrian walking across. RV was looking down as the car approached the woman. She looked up and slammed on the brakes, but just too late. Slammed into her at 39 mph.
# Police launched criminal investigation into the AI car and its human operator.

# AI cars must do things humans do instinctively, such as reading signs, etc.
[*** aspects needed in that: spatial, kinematic, psychical, lingual, etc. ***]

--- Death 2, April 2019, Tesla

# A Tesla failed to see a stop sign, went through it and crashed into Dylon and a parked truck at 62 mph.
# The driver, George McGee, was on his phone, dropped it, and bent down to pick it up; the autopilot was in control.
# But Dylon was not alone. With a new girlfriend. Killed instantly.
# "We want justice; these cars were out on the road before they were ready."
# Tesla taken to court.

--- RV again

# Video recording of RV as driver: seems she was looking down at her phone. She had her personal phone as well as the company one; the personal one was connected to the car's Bluetooth, streaming something.
# RV charged.
# No, RV claims: she was listening, not watching, but was always watching the monitor screen. She said she was trained to look down at the monitor screen for no more than 5 secs at a time, then look up at the road.

# Record of what the AI was doing before the crash (a sketch of this failure mode follows these notes):
5 sec before impact: radar detects something.
4 sec: LIDAR detects it, thinks it a vehicle.
LIDAR keeps changing its mind between vehicle and "other".
3 sec: still changing its mind.
# Report on the AI in the car: "It is not connecting the dots; not one object whose classification keeps changing, whose path is slowly moving over time; instead it sees brand new objects every time. It is only at 1.2 secs before impact that it finally decides it is a bicycle. Way too late to do anything."
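
Here is a minimal sketch of that failure mode as I understand it from the report (my reconstruction, not Uber's actual code; the function and data are hypothetical): if the tracker treats each re-classification as a brand-new object, it never gets the two sightings of one object that it needs to estimate a path.

def first_path_estimate(frames, reset_on_reclassify: bool):
    """frames: list of (secs_to_impact, label). Return the time-to-impact
    at which the system first has two sightings of what it considers the
    same object -- the minimum needed to estimate its path."""
    prev_label = None
    for t, label in frames:
        if prev_label is not None and (label == prev_label
                                       or not reset_on_reclassify):
            return t
        prev_label = label
    return None

# The record above, simplified (in reality the system settled on "bicycle"
# only at 1.2 secs before impact -- far too late to brake):
frames = [(5.0, "other"), (4.0, "vehicle"), (3.0, "other"),
          (2.0, "vehicle"), (1.2, "bicycle")]

print(first_path_estimate(frames, reset_on_reclassify=True))   # None
print(first_path_estimate(frames, reset_on_reclassify=False))  # 4.0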

--- Problem of Mis-design

# HF: "You design a system that fails to track the path of an object before it decides what it is! If you've got something you are going to collide with, I don't care what it is. You've got a collide."

[*** kinematic aspect is occluded by analytical (trying to work out what things are) ***]

# The system design did not include any consideration for jay-walking pedestrians. "It wasn't designed to recognise people unless they were on a crosswalk" (see the sketch below).

[*** missing juridical knowledge, or rather assumption that the only juridical knowledge we must encapsulate is the law; people sometimes transgress the law, as in jay-walking, but it did not recognise that ***]
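
A two-line illustration (hypothetical, not Uber's actual code) of the assumption my comment points at: encoding "pedestrians appear only on crosswalks" as though it were a law of nature rather than a rule people transgress.

def classify(detection: str, on_crosswalk: bool) -> str:
    # The gate ignores the evidence: whatever `detection` looks like,
    # only a crosswalk location earns the "pedestrian" label, so a
    # jaywalker never triggers pedestrian-priority behaviour.
    return "pedestrian" if on_crosswalk else "other"

print(classify("person pushing a bicycle", on_crosswalk=False))  # "other"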

--- At the Company Level (Uber)

# What was happening inside Uber at the time of the crash.
# Robbie Miller, operations manager, interviewed; he quit on safety grounds a few days before the collision.

"I'd been working in the self-driving space for 5 years at this point. Eventually there was an move to combine the operations for the testing of self-driving cars and self-driving trucks. I was not at all comfortable with it. The self-driving cars were having significant issues. There were a lot of really scary incidencies that were occurring. Near-misses, vehicle collisions. We would have yknow where the car was driving on the sidewalk. In broad daylight. And you realise that they are headed on a path where someone's going to get seriously injured or worse. And so, I gave notice."

"There was a fear at Uber that Waymo is about to release their self-driving cars. And it was very scary for Uber's leadership to not have a response. It is absolutely a race!"

# HF: "Waymo, owned by Google's parent company Alphabet was [seen by Uber as] arch-rival. Just five months after Waymo launched its first public trial, Uber moved from two operators in the car, to one."

RM: "You need to show to your investors, Hey, we're making this progress. An easy way to do that is to just take someone out of the car. And this is something I mentioned, 'You need someone to do this; hey, you're not ready to take that person out of the vehicle. Just a few days after I left, the crash occurred."

[*** selfish, competitive: ethical aspect dysfunction ***]

# Uber got out of driverless cars.
# Waymo had a clear route.

--- Against Driverless Cars

# The Safe Street Rebels, incognito. Go about disabling Waymos.
# HF: Why?
# SSR: "They are heavily promoting themselves as the future of public transit. We don't agree. They are just a taxi where you don't talk to someone. 50% of their miles they drive there's nobody in the vehicle. In addition, they cannot be ticketed. ... Driving 40 mph on the wrong side of the road, and nobody can do anything about it."
# "They are continually circling, without no people inside. Congestion aspect. Environmental aspect too because they've gotta be powered."

[*** physical-biotic aspects of climate change from energy used. Waymo tries to play this down; see Ending messages. But if you look at the way it is worded, it seems that well over 20% of their cars are in fact "idly driving"! (A quick check of the arithmetic follows.) ***]
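
A quick check of that inference, using only Waymo's own wording in the end screens below (my arithmetic; the en-route fraction f is unquantified in their statement, so I try several readings of "a significant portion"):

# Waymo: cars are empty ~40% of the time; "a significant portion" of that
# is en route to a passenger. The idle share depends on the fraction f:
empty = 0.40
for f in (0.25, 0.50, 0.75):
    idle = empty * (1 - f)
    print(f"en-route fraction {f:.0%}: idle {idle:.0%}")
# en-route fraction 25%: idle 30%
# en-route fraction 50%: idle 20%
# en-route fraction 75%: idle 10%
# Unless at least half of the empty time is en route to a passenger,
# idle driving exceeds 20% of all time on the road.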

--- RV's Legal Case

# RV accepted a plea deal. Pled guilty to a reduced charge, Endangerment.
# Lawyers did this "for the little guy".
# HF: "To work on a university campus and not to design for jaywalking [because jaywalking would be rife there] is the height of negligence."

[*** KB needs to recognise juridical aspect reality, of people not obeying the rules ***]

# But no criminal charges were ever brought against Uber.
# "The scandal here is that you have a massive corporate multibillion dollar project which is not prioritizing the safety of the people who are in the car. Like the members of the public, people who have not even agreed to participate in the experiment. It's not enough to say that all of the responsibility lies with the person who is in the car."

--- Tesla

# Back to Dylon. Taking on Tesla.
# His lawyers: this is a case of shared responsibility. [*** good; the juridical aspect recognises that, but some jurisdictions do not ***]
# Driver was disengaged from the driving task [hence he shares some blame],
# But that was because "Tesla had fostered this belief in him, this trust in the system, that was unwarranted. He thought that the car would stop or swerve or do something before ploughing into a parked vehicle."

[*** This is deployer responsibility, which includes the firm who supplied the AI. ***]

# The car identified the stop sign, the parked SUV and yet did nothing. Didn't even warn. It had all the information it needed to make a decision, but it was programmed to behave in a way the driver did not think it was programmed to behave.
# "Based on Tesla's marketing and Elon Musk's statements hyping up the capabilities of this car." Misadvertising or misrepresenting.

# Miami, Florida. Jury found Tesla partly responsible.

# But AI has come on leaps and bounds since then.

# They [driverless cars] are coming. Difficult points. Learning from mistakes.
# But they were not inevitable. Could have gone a better route. Hope that the steepest part of the learning curve is now behind us.

[*** But not if it means lots of power consumed while they are circling around - we should be reducing, not increasing, power consumption. ***]

--- End Screens for episode 2

# Tesla says:

"Tesla have launched an appeal against the verdict. In a statemnet released at the time they said, 'The verdict is wrong and only works to set back automotive safety and jeopardize Tesla's and the entire industry's efforts to develop and implement life-saving technology. We believe there were substantial errors of law and irregularities at trial.'"

# [*** !!! Their attitude: "The law should bow to our desires to compete in this race." The industry could "develop and implement life-saving technology" even without nullifying this penalty. Indeed, might not the penalty itself stimulate the industry to better life-saving? This is gross pistic dysfunction (in terms of what we are and how the entire world should relate to us: self as god!). !!! ***]

# Waymo says:

"Waymo say: Waymo is involved in 91% fewer serious injury crashes compared to drivers where we operate.

Most recent data shows that our cars are empty for approximately 40% of the time. A significant portion of these are on the way to a passenger who has requested a car, rather than idly driving.

Waymo's technology is designed to follow the rules of the road and they obey posted speed limits. We can and do receive traffic violation tickets and pay them in the places where we operate."


[*** Notice the wording carefully used: "a significant portion" of the 40% empty time is left unquantified - hence my estimate above that well over 20% may be idle driving. ***]

----- Notable Points for my UoS Seminar

1. Unlike Episode 1, which focused mainly on the AI user, this episode speaks of all four human roles and responsibilities (as explained in An Integrated Understanding of AI).

2. Notice the many aspects involved in the entire story and phenomenon of driverless cars.

3. Note the environmental consequences too: the biotic and physical aspects.

------- Notes and References

----- Notes

Note 1. Hannah Fry: Professor of the Public Understanding of Mathematics, at UCL and then Cambridge universities. See "https://connect.open.ac.uk/aiwithhannahfry"


This page, "http://dooy.info/using/hannah.fry.ai.html", is part of a collection that discusses application of Herman Dooyeweerd's ideas, within The Dooyeweerd Pages, which explain, explore and discuss Dooyeweerd's interesting philosophy. Email questions or comments are welcome.

Written on the Amiga and Protext in the style of classic HTML.

You may use this material subject to conditions. Compiled by Andrew Basden.

Created: 6 March 2026. Last updated: