Transcript
Dr. Joshi: I'm the clinical lead for AI. What does that mean? Those of you who know what NHS England is, put your hands up. This is the first conference I've been to that's not a health conference where people actually know. So I won't explain that, I won't tell you how money flows, and I won't tell you what we do, because you all know. Those of you who are developing something in health, or are from a health background, put your hands up as well. Well done. Thank you, sir, we will refer to you for future questions and answers.
This is not my slide deck. I work with a fantastic girl called Jess Molly, who most of you can find on Twitter, but she is the one who really should be up here. She's doing a Master's in the ethics of AI and health and care, and this is her slide deck. I will give all credit to her, but I will try and tell you a little bit about what we're doing, why we're doing it, and what we want to do over the next couple of years in this space. For those of you who are familiar with health: my background is as a doctor. I'm an A&E doctor. I work in hospitals around London, so if any of you ever fall sick in this room, we definitely know how to call 999, but more importantly, you're in safe hands. You're in good hands here.
In health, we've been doing ethics for a long time. We get taught it at medical school, so I was taught medical ethics. I didn't go to all the lectures. Was it important, one might ask now? Yes, it was, but at the time, it was quite boring, and nobody really understood the point. We have this beautiful lecture theatre - I went to UCL up the road - which went all the way up. In the medical ethics lectures you'd have this gap of about 500 rows where it was empty, and then at the very back, we all sat, because we just could not be bothered to listen to medical ethics. Now, as you grow up and you become a consultant, which is one of the senior levels in health and care, you realize, actually, you should have gone to those lectures. Because all the complaints come to you and you try and understand how you actually deal with some of those complaints. But you're not interested in my medical background. You want to know about some of the stuff we're doing in health and care and AI.
What's Happening in Health and Care and AI?
A bunch of reports have been written out there. We wrote one ourselves, this one over here, on what's happening in health and care and AI. Actually, we don't need to tell you this, but it's a bit of a Wild West. There's loads of stuff being written. If you're an early-stage company and you want to get VC funding, whack in the term AI. You're almost guaranteed to get funding. Actually, a lot of investors then come and talk to us: "Do you think this company is really doing anything?" - "Have you actually looked at what it's developing and whether it's developing stuff ethically and in a good way?" "Well, no. They told us they can definitely get rid of doctors in 10 years' time." I was like, "That's where you're going wrong. Think about what you're doing."
You need to set some rules in this game, don't you? I always feel a little bit nervous standing up here in front of professionals who actually know what they're doing and saying, "How do we set the rules?", but as government, we all want to set some rules. We've set up the Office for AI, who are there to do four things, they say. They're looking at how to develop the skills, which is really important. They're also looking at how to make the UK a good place to do this stuff, but also to set the rules of this game: how do we, as a society, when we're talking about artificial intelligence, or machine learning, or whatever you want to call it, do this in a good and ethical way?
Gareth [Rushgrove] talked about how we've got loads of data. One of the things we did when we went out to start was write a Code of Conduct. This is my punch line; I'll give it to you up front. One of the things we did when we started thinking about this Code of Conduct was ask, "Why do we need it? We've already got medical ethics. We have something called the HRA," which is our health and research body, advisory group or something. I can't remember exactly what HRA stands for, which is terrible, isn't it? But they're there; you need ethical approval before you do medical research, and the guidelines are very clear.
One of the things that came out was that it's not always clear what applies if you're doing product development - and a lot of you will be developing products - or if you got ethical research approval but then went on and did something that wasn't exactly what you said you were doing when you started out. So we wanted to talk to lots of people. We spoke to these people. I don't know - have any of you heard of Understanding Patient Data? Definitely go and have a look at their site. They're a spinout of the Wellcome Trust, which is a think tank, and they really say, "Patient data is a funny thing, isn't it?"
How many of you here have an app or something on your smartphone, or on your whatever it is, your Apple Watch, that you freely share data with about your health? It's nearly all of you. You guys are really a distinct audience, but how many of you are aware of whether that data is being used for you, your direct need, or whether it's being collected and utilized for bigger things? Bear in mind, you're a really clever audience. The average reading age of the UK population is between five and seven, so people who come and see me in my A&E barely understand when I say, "Take one tablet four times a day, one tablet four times a day." "Yes, doctor, definitely." Now, they will download these apps, they will download loads of things, and they will share things. They don't always know what they're doing in that space.
We, therefore, have the responsibility, as the safeguarders of the system, to ensure that the data those people are sharing is used in a way that is right and ethical, and that they understand what they're doing. We've worked really closely with the UPD, the Understanding Patient Data guys, to understand what the problem was and what the need was. I won't go into this, but we've had some issues in the past, where people haven't always understood the rules. Therefore, how do we decide what the rules of the game are? You need to create an ecosystem, don't you, when you do this stuff? People will always talk about, "Oh, yes, let's set some rules. Boom, there you go. Let's have it, and really go and adopt it. I don't care how, you adopt it, but go and adopt it." But actually, how do you create an ecosystem in health that's using large sets of data and that understands that this is slightly different? Because obviously, those of you familiar with GDPR know that the rules of the game are slightly different here.
What we wanted to do was try and create a bit of the rules of the game, which we did - I won't say we've done it, I'll say we're starting to do it with the Code of Conduct, which I'll come to - but also then, how do we win over the community? How do we bring people along with us? When I say people, I mean you in the room, but also the people who are the workforce. We've got 1.2 million people who work for the NHS. That's just under the banner of the NHS contract, but then you've got a huge community of people. You've got charities, you've got the voluntary sector, you've got the think tanks that all work in this thing we believe in: health and care, free at the point of care. Then, you've got the people in the system, the people who are buying, the people who are commissioning, but also the people who are developing, like some of you in the room. And so we have to create an ecosystem across all three areas to say, "Here are the rules. We want to work with you, so you help us design the rules of the game, but we also want to make sure that those who are deploying or buying this stuff understand the rules of the game, so that when you're doing something here, it's not a barrier when these guys want to then adopt it."
Distributed Responsibility
It's been a bit of a jigsaw puzzle that we've been trying to develop, but that's what we want to do. Here are all the players. I mean, I've named some of them, but what does this say? Data controllers. You need to understand that there are lots of players in this game and you all understand this, but they all have to work together. We can't turn around and say, "These are the rules. End of," or the regulators turn around and go, "We set out the regulation. End of." Let's work together in this space to actually fundamentally understand what we need to do.
Again, there's sort of this vision of a regulatory loop. When you develop something - and I know this was talked about, and Catherine [Flick] talked about it earlier - you don't just build it and it's out there. You develop it, you improve it, you go around. There's a loop, isn't there? Make sure, when you're thinking about that loop, that you actually work together in that space.
Finding the Balance
This is what, I think, is the most important thing - finding that balance between innovation and regulation and understanding that it's okay to innovate, but when you do, make sure you do it in a way that fulfills some of those requirements there on the left. I'm not sure if anybody has talked about this, but there are always political wins, aren't there? There's somebody somewhere that wants a thing, and you have to develop that thing, and whether that thing is the right thing or not requires a body of people to stand up and say, "No, I don't believe this is right."
I'll give you an example of that. I used to work for an online medication provider, and we would provide medications. They were all done in a safe, ethical way and we followed the GMC guidance on remote prescribing, but sometimes, we would provide medication to young boys. When I say young boys, I mean boys who are 19 or 20. We followed the rules, we made sure it was all okay, but you have to think, "Was that the right thing to do, to do it in a remote way? I never saw you, but I understood that you had a need, because you declared you had a need." Then, one of the things we pushed back on - we said, "We, as the system who are providing these medications, need to think a bit more. This isn't just about our profit balance sheet here. We need to think whether this is the right thing to do."
We then put in some other safety measures to make sure we called these people up and explained what they were getting and whether they really needed it. This has nothing to do with technology, but it has to do with the fact that when there's a remote interface, people can sometimes think they can get away with it because they don't see you. When you sit in my A&E cubicle - I was going to say my office, but I definitely don't have one; I have a little two-by-two area if I'm lucky, otherwise it's in the corridor with a bit of a shield - you look at me. You look at me in the eye, and I have to make you believe what I'm saying is right and true. When you do it on an interface, when you're doing it on a computer screen, or you're doing it over the phone, it's harder, and understanding that balance between right and wrong - maybe I'm not explaining this correctly - sometimes you can find that you step away from it because you believe in the product and what you're doing. However, at that individual level, it can be quite tricky. This is what we're trying to say: there is a balance, but we can only really create that balance together.
10 Principles
Over the last year, we worked really closely with some academics. We worked closely with the developing agencies, so developers out there. We worked with commissioners as well, which is why we've got some funny principles in here. But most importantly, we tried to work with the workforce and say, "What is it that you need from us as government, or as people who are central commissioners, to create some rules around these games?" Now, this is by no means an extensive, really great list like the ACM one - maybe we should adopt some of their principles - but understand some things. Just be really simple: understand what your user needs are. In health, don't create only because there happens to be a data set available rather than to fulfill a need - and in the NHS, we do have data sets available. We have some great data sets: we've got HES data, we've got CPID data, we've got large data sets if you're looking at screenings. If you're looking at doing some really fun things, we've got them, you can access them, and we have good ways to access those datasets. However, are you solving a problem?
One of the things I was just telling Gareth earlier: I was at a conference a couple of weeks ago, and a company in India (India, huge population) had created an algorithm which could screen for diabetic retinopathy. They'd done it on a great big data set, because obviously there are a lot of diabetics in India and we're at higher risk of it from our genetic profile, but they found, "Great, we've screened this population. We now know that X percent are at risk of developing diabetic retinopathy, but we don't know what to do now." Say I screened all of you here, and I'm going to tell you, "Right, guys, in five years' time, you're all going to get diabetic retinopathy."
For those of you who don't know what diabetic retinopathy is, it's where your eyesight starts narrowing, you lose vision, and it's because of your diabetes, which may or may not be poorly controlled. But I can't do anything about it, because I don't have any doctors, I don't have any healthcare staff, and actually, I've just told you something which is fundamentally going to change the way you live your life, but I can't help you with that. There is my ethical dilemma. I've told you something. I've helped you, genuinely - I believe I've genuinely helped you, because I've told you something you didn't know - but now I'm standing there and I go, "Well done, but there's nothing I can do to help you in that space."
So when you are thinking about solving a problem - which is why we say understand your user need - understand what you are going to contribute to that. It's great to predict. Everybody says in AI, "We could do diagnostics, and we can really predict the future." Great, let's do it. I'm not against that; I think that's a brilliant thing to do - to predict pathology. However, make sure we have the infrastructure in place to then catch the people you've predicted. If you find that you're in a community that, for example - something really simple - doesn't have wound care specialist nurses, or generally doesn't have any diabetic specialist nurses, don't go out and develop something that's going to predict something when the people can't catch up with that.
You can argue and say, "We need to do that because we don't have any people." Don't ever forget there's a human involved in all of this and in health and care, you are at your most vulnerable. How many of you have been sick or been in hospital? You're vulnerable. When you're sick, when you're ill, you're vulnerable. When you're well, you try and maintain that wellness. People with chronic conditions will tell you very clearly they don't want to be identified by their condition. They are still them. They happen to have something, and those of you in the room who have a chronic condition may agree with that statement. Always remember that when you're developing something or when you're doing something because you happen to have the data or you happen to have the technology, what is the problem you're trying to solve and is that ethically okay to do?
I'm going to take a pause there considering I've been waffling on for about 20 or 30 minutes, but also do have a look at our Code of Conduct. It's there on the .gov.uk site. We're not by any means saying this is perfect. This is a start for health and care in a world where people are throwing money at it and people are developing at pace because there are good sets of data, although the data is not always brilliant. Some of it is a little bit dodgy, but the idea is think about it and do it in a good way, do it in an ethical way, and stick with the frameworks that are out there as well.
We do have, as I said, the HRA, and we have a really good ethics framework. DCMS, the Department for Digital, Culture, Media and Sport guys, have got an ethics workbook that you can work through as well, so there are some good tools out there to help you. We've tried to base some of our ethics principles on theirs, and we're working quite closely with the Turing Institute as well to say, "Pick at it. Make sure that what this is, is actually helping innovation and helping people do things, but in a good, right way, especially when it comes to health and care, because these people are vulnerable, but they also don't always understand." I've been in an A&E. I've been sick and admitted to ITU, and I didn't understand, and this is my field. I knew what was happening, but at that point in time, my brain shut down. I was just a girl in an A&E and I didn't understand, and I assumed that the people looking after me did understand and were completely aware. If you are developing something for the clinicians or the workforce, make sure they understand as well, because the only question everybody always asks in health and care is "Why?" They don't really ask "How?" They only ever ask "Why?"
Questions and Answers
Participant 1: Do you think algorithms that affect psychology should be regulated?
Dr. Joshi: Currently, we have something called an Apps Library in the NHS. It's there; it went live a couple of weeks ago, and it was in beta for about a year and a half. We won't talk about that too much, but there are quite a few chatbots in mental health, and there are also quite a few developers out there who are developing algorithms to help you understand the problem in front of you. There are two sides in mental health. Either you're a therapist, so you belong to the allied health professions, or you're a clinical psychologist or a psychiatrist, so you give drugs.
Those people who are therapists, obviously, do a lot of talking, do a lot of verbal communication, and they do use algorithms to help them understand. One of the things Catherine [Flick] talked about earlier, is understanding the diversity of your algorithms. You don't need me to explain it, but they are being used a lot, and one of the things we've pushed is don't just base your whole product on your 12,000 user base. Test it outside as well, and make sure it's applicable, especially when it comes to children. But this is a fine line.
Participant 2: Even consumer algorithms fall into that category, whether it's what shows up on your news feed or Facebook feed or Instagram, etc. Do you think those should be regulated as well?
Dr. Joshi: It's a fine line between regulation and innovation, isn't it? One of the things I say is you need to create intelligent customers. If more of us go out and spread this message out there, that helps - you can't regulate the world. Let's be honest. You can't regulate everything, and actually, those of you familiar with regulations will know we have this problem, and it's nothing actually to do with an algorithm. It's the fact that in the UK, we can regulate within England, but we have separate regulatory bodies for Wales and Scotland, and you might live on the border. As a human being, if you're living on the border, which regulation applies to you? You could say, "Here, I'm going to go to a clinic in England versus a clinic in Wales." Two sets of regulation apply. They're both regulation, but they might not be exactly what you need. So whilst, yes, regulation is good and it can help, we, as society, need to raise the bar and make sure we understand.
Participant 3: There's been a fairly large amount of ruckus in the United States about services that do genetic testing like 23andMe, in particular, they were acquired by a pharmaceutical company. What is your viewpoint on whether people are entitled to not just the results of their data being used for research, but also potentially compensation, if people are profiting from it?
Dr. Joshi: That's a really hard question. Thank you for that. It's difficult. For those of you who have done your own personal genetic testing, it's something that, as a consumer, is exciting, isn't it? It's exciting to know, and it's interesting to know whether or not that's something that will help you or not help you. Here in the UK, we have quite clear laws about what you are and aren't able to share from an insurance perspective as somebody who's requested that testing, and obviously, with the NHS, we have a slightly different health and care system.
This is my personal opinion, so it's not a government opinion or anything, but I'd say it really depends. For me, genetic testing is something that's quite dangerous. There was a recent case in the papers - I don't know if you read about it here in the UK - where a gentleman was diagnosed with a genetic condition and the hospital didn't tell his daughter, who was pregnant at the time. She then sued, because she felt that if they had told her that he had this condition, she wouldn't have then gone on to have a baby who could have been at risk of it. Take out the pharmaceutical industry for a moment. You've got a family here who are having a potential crisis. I don't know what the outcome was; I don't know whether she went on to have the child, or whether the child had the condition or not. But just think: if your dad or your mum hadn't told you about something, and now you find out that they knew, and then you don't tell your children - even within the family unit, that causes ethical dilemmas, doesn't it?
I think then for us to turn around and go, "Oh, yes, we definitely need that information to help us understand whether or not we should be giving you something" - it's already a big messy story, so let's try and get the family unit right before we start trying to get the rest right. I'm not a genetic counsellor, I'm an A&E doctor. I find it really difficult to have that conversation with people, even on a one-to-one level. Nobody trained me to say, "This is how you talk about it." I can tell you, "You're going to get a life-threatening condition and die" - I'm trained to say that. I'm not trained - not me, anyway - to tell people about these kinds of conditions, though there may be a new generation who are. So this is what I mean. We have to bring people along with us on this journey and not just say, "You're great. Well done, pharma, for doing that."
Participant 4: You talked about the ethics classes at medical school and that it was there on the syllabus, but actually, lots of people didn't necessarily pay as much attention as they might do. There have been a bunch of conversations recently about ethics education in computer science and in technology disciplines in general. What can we on the technology side learn from what's worked and what's not in ethics education on the medical side? If we're just adding classes, are computer scientists any more likely to turn up than medical professionals probably would?
Dr. Joshi: I don't know.
Participant 4: What can we learn in hindsight?
Dr. Joshi: What can we learn? I think it's case studies. How many of you remembered 100% of my talk? Probably one person in the room, and even then you were probably writing it down or recording it. But you remember stories, don't you? We all remember stories, and that's how we, as a human species, have communicated for generations. One of the things that we all learned and remembered was when the lecturer stood up and gave us their worst ethical dilemma, or why it was important to understand. I don't really have any computer science stories - maybe you do - but what are the stories that made you learn? You know, what were the real case examples?
One of the things I do is I belong to a network called One HealthTech. It's about creating diversity and championing women, but also people from diverse backgrounds. Yesterday, we had an event for International Women's Day where we tried to get people to talk about their failures, and actually, it was really successful, because getting people to talk about their failures and the dilemmas they have in life makes you remember. Now I remember all four people who spoke. I don't remember what they did, but I remember what their story was, so that would be my take-home. And don't be afraid to be honest. People always think they've got a lot to prove in this world. I try.
Participant 5: As a follow up to that, technologists are brilliant at trying to create solutions with a lot of certainties. Stories are a lot to do with nuance, and they're a lot more fuzzy around the edges, and technologists aren't very good with fuzzy around the edges. If you were to say, "Yes, here's all this data and here's all this stuff," what would be the kinds of things that you would want technologists to be building, if you want us to help with stories? Does that make sense, the question?
Dr. Joshi: Sort of. I'll be honest, not entirely.
Participant 5: Let me slightly rephrase the question then. We're very good at trying to get perfect answers. Answers within the medical profession are very rarely perfect. I'm a son of a consultant pediatrician, so I know that they are never perfect. What are the right kinds of things for us to build? Because we shouldn't ever try and build perfect, as far as I'm concerned.
Dr. Joshi: A roundabout way of answering that is the extremes of the world. Children, you're familiar with. Grown-up people, as in old people over 85 - they are never perfect. You know, even if you told me now, "My child has a little bit of a fever. Should I bring them into A&E?", without a doubt, I'd probably say, "Yes," because nobody wants to risk that. It's the same when my old lady comes into my A&E: she's 94, she's fallen over, she's fine, but it's 2:00 in the morning, and when she says, "I want to go home," I'm just, "No. You know what? You stay here. Unless you have a support system, you're not going home on your own." So there are extremes.
In the middle, yes, we care, but we're a bit more robust. I would say maybe 18 to 50 to 60, you're a bit more robust, and there's quite often clear-cut answers, so there are conditions that have good clear-cut answers. Going back to the user need, if you're fulfilling a need, so if you look at London's data set and you say, "Actually, is there a user need for people who call up, I don't know, 111," so we're not talking about emergencies, but 111, which is for acute conditions. And I'm just looking at, say, 20 to 45-year-olds who call up with query vaccines - I just made that up - nice user need, nice simple problem, clear-cut answer, that's fine, but don't try and answer that question in 2 to 10-year-olds because it won't be a clear-cut answer. Just think about the population you're doing it in, and how you would feel if that was your child, grandma, mum.
Participant 6: What is your viewpoint on the Royal Free NHS Trust selling the A&E data to Google to track potential renal failure? Is that something we can expect more of? If it is, should there be an opt-out possibility, or can there be an opt-out possibility, even though it was randomized?
Dr. Joshi: I think you had three questions in there. The first is that, in this world, we're learning. That was a first case; it got a lot of publicity because of the companies involved. However, they weren't the only ones that had made that mistake. The lines between data for direct care, data for research, data consent and confidentiality are quite blurred. And these aren't my words; these are the words of people who are experts in the field. We have something called a National Data Guardian, who has also said these are quite blurred lines, so we, as a system, have a responsibility to clarify those lines, which we are doing.
We recently published something on information governance and how you actually exchange data; that was released, I think, a couple of months ago by our IG experts. But the question is, can you opt out? Yes - we also have a national opt-out system, which is there. It's run by our delivery partner, NHS Digital. Can we expect more of that to happen? This goes back to my original point, which I keep making: we need to work collaboratively with people. So let's not just set the rules and say, "You have to follow the rules." Let's bring those people along with us to understand the rules and say, "Okay, if you are the IG expert in your area" - it might be a CCG, it might be a trust, or it might be what we now call ICSs, integrated care systems - "understand the rules clearly, but work with us to develop those rules so that, in the future, we don't make these mistakes again."
Participant 7: Thanks very much, the talk was really interesting. I'm interested in the GDPR and how that's had an impact on what you do because you were talking about informed consent, basically, in terms of apps, in particular, and that you have problems with vulnerable users of apps. I'm wondering if the GDPR has helped you at all with that or if it has been a problem.
Dr. Joshi: Somebody told me I wasn't allowed to say GDPR - it's the Data Protection Act 2018. I got told off, but maybe it's a government thing.
Participant 7: Sorry, I don't know a lot of European stuff.
Dr. Joshi: It's the same with any new regulation. We're also changing medical device regulations, and I'm not allowed to say the B word, but if the B thing happens, then that's going to cause a lot of trouble. Whenever new regulation comes in, or regulation changes, it's about bringing the community with you and understanding. A lot of the colleges have sent out lots of comms and engagement to say, "This is what it means." From our perspective, it's been great. I think it's been really helpful. In health, those lines are a bit blurred, and so is the right to an explanation - what does that mean, and at what point? It's huge. Maybe I need to clarify: we didn't have a problem with vulnerable users. We just need to be aware of the vulnerability, rather than treat it as a problem, and I think that's helped. It has certainly helped, for sure.
Participant 8: I have a question about AI and its limits and how it can potentially impact face-to-face contact. Hypothetically, if you had AI, which, I assume, would be of the narrow variety within healthcare, and there are many people with respiratory problems, and it's able to detect this, it's able to optimize logistics, whatever, in order to solve this. What if it turns out that we have this epidemic of respiratory issues, and it's due to the lack of enforcement of building regulations, people have been getting these problems because of mold or whatever, or pollution. I'm guessing the AI would not be able to detect this because I assume that we wouldn't want to just simply treat the symptoms of any issues, we want to find the root cause because they would be more efficient in the end. How would you see AI intersecting with all these other parts of social infrastructure?
Dr. Joshi: I get really nervous about saying the term AI. As a doctor, I'm not really an AI expert by any means. For me, my understanding is it's just a form of stats and a form of math, and we don't really ask that question about other statistical models. Maybe the question that needs answering is how we intersect with other parts of the ecosystem and other data sets. You've talked about respiratory, but a really good example would be your consumer data, and how we link your consumer data into understanding, for example, if you're elderly or you've got dementia or frailty, whether your shopping data matches your health data. That's a really high-level, clear example, which we can actually do something about. We just need to create the right APIs and the right standards to do that.
Long question, short answer: yes. We need to work on getting those two datasets to match, regardless of whether it's respiratory or social determinants or whatever it is. Something as simple as: do your Waitrose points show that you didn't buy any toilet roll, or that you bought loads of toilet roll, and then you walked into A&E five days later with D&V, diarrhoea and vomiting? You know, that would really help me, by the way, but we've got to work at it; it's a much bigger system. Those of you who are familiar with data standards in health and care - I always get this number wrong, so I'll just make it up - there are 120 different ways to measure your blood pressure, or record your blood pressure. Simple. For me, it's three numbers over two numbers. Sometimes, it's three over three. Sometimes it's two over two - get worried - but generally, three numbers over two numbers. That's simple, isn't it? It should be a simple way to record three numbers over two numbers, but when you record that data, there are 100 different ways of doing it. So we need to get our data standards right, get the APIs right to then link into other data standards, to enable us to answer those questions, which will be in a narrow form at first.
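To make the data-standards point concrete, here is a minimal sketch - not NHS code; the record shapes and field names are invented for illustration - of what normalising blood pressure readings from differently shaped source records into one agreed structure looks like:

```python
from dataclasses import dataclass
from typing import Optional


@dataclass
class BloodPressure:
    systolic_mmhg: int    # the "three numbers"
    diastolic_mmhg: int   # the "two numbers"
    source_system: str


def normalise(record: dict, source: str) -> Optional[BloodPressure]:
    """Map a few hypothetical source formats onto one agreed structure."""
    if "bp" in record:  # e.g. {"bp": "128/82"}
        systolic, diastolic = record["bp"].split("/")
        return BloodPressure(int(systolic), int(diastolic), source)
    if "systolic" in record and "diastolic" in record:
        return BloodPressure(int(record["systolic"]), int(record["diastolic"]), source)
    if "sbp_mmhg" in record and "dbp_mmhg" in record:
        return BloodPressure(int(record["sbp_mmhg"]), int(record["dbp_mmhg"]), source)
    return None  # unknown shape: flag for a human rather than guess


if __name__ == "__main__":
    raw = [
        ({"bp": "128/82"}, "gp_system_a"),
        ({"systolic": 141, "diastolic": 90}, "hospital_ehr_b"),
        ({"sbp_mmhg": 118, "dbp_mmhg": 76}, "home_monitor_c"),
    ]
    for record, source in raw:
        print(normalise(record, source))
```

In practice this mapping is what shared data standards (and the APIs built on them) remove the need for, because every system records the reading the same way in the first place.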
Participant 9: My question is definitely not coming from my tech journalist background, but as a frequent flyer of Evelina and an SG1 mother. I see you're talking about all this high-level data, but I see huge problems at the lower level. Eight GPs in my neighborhood just got failures from NHS because they weren't checking results coming from the hospital, or my experience with Evelina and St. Thomas has been glorious, but then the GP experience and the fact that these two sides can't share data. And I'm a huge tech ethics person, but I look at it as a more pressing issue. Is there a way to combine the systems because the GPs system is completely separate, in my understanding. Even if you get a service like pediatric bloods at the hospital, only the GP can read them, not the emergency, things like that. So I'm more curious about the bare level tech.
Dr. Joshi: Yes - now you're talking about real-world problems; I thought we were talking about ethics. Yes, there is work being done. We have a program called LHCRE, Local Health and Care Record Exemplars. We have five areas which are doing basically that. It's quite difficult. I'm no technology expert, but getting one system to talk to another, archaic system is very difficult, in my understanding and my knowledge, which is why I was talking about basic data standards. Let's get basic data standards right. We can then start connecting.
There are ways around this, I understand. In London, we have a system called One London, where currently they are working on an architecture where you can read - so you might not be able to write, but you can definitely read - and that is happening. It may be that your GP practices aren't quite up there yet, but from a hospital perspective, that is definitely happening, and they are reading into certain GP systems. Without going into the whole politics of it all - we could talk about this all day - that is happening, yes. And Matt Hancock's tech vision is all about that; I don't know if you've read it.
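As a rough illustration of that "read but not write" model - the endpoint URL and resource shapes below are hypothetical placeholders, not the actual One London or LHCRE interfaces - a read-only shared-record client looks something like this:

```python
import requests  # third-party HTTP client; pip install requests

# Placeholder base URL - not a real NHS endpoint.
SHARED_RECORD_BASE = "https://shared-record.example.invalid/fhir"


def read_observations(patient_id: str, token: str) -> list:
    """Read-only access: this client only ever issues GET requests."""
    response = requests.get(
        f"{SHARED_RECORD_BASE}/Observation",
        params={"patient": patient_id},
        headers={"Authorization": f"Bearer {token}"},
        timeout=10,
    )
    response.raise_for_status()
    bundle = response.json()
    # FHIR-style search results come back as a bundle of entries.
    return [entry["resource"] for entry in bundle.get("entry", [])]

# Deliberately, there is no function here that POSTs or PUTs anything back:
# writing stays with the system that owns the record.
```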
Moderator: If you want to work on that problem, I don't know if anyone is here, but the folks at NHS Digital are doing a lot of really good work, in the last few years in particular.
Participant 10: Do you face a lot of resistance towards AI in this sector? If so, do you need to raise awareness or convince people or how do you do this?
Dr. Joshi: We use the term AI, but in healthcare, like in most other industries, we have a lot of resistance to change. I remember working in Southampton, and one of these consulting companies came along and said, "You can really create efficiencies in your A&E by moving your red paper from there to here." I went, "Okay, thanks. Great use of money there." We moved our printer and we put red paper in it - they said, "Put red paper in it so people know it's an emergency" - and then we moved the printer closer to the pod where you send bloods up, and we did this for about a week.
We were all really keen, and then one of the secretarial staff was off sick, so nobody ordered in red paper. Simple. Then we were all, "Well, where's the white paper? We just need paper, because we don't have time to fluff around with where's the paper." Then what we realized was the paper was still stored over here, so we just moved the printer back the way it was. So we paid all this money for somebody to tell us how to create efficiencies in our A&E, but actually, none of us were brought along with that change. We were just, "Great, thanks for that. We'll just leave the printer where it was, using the paper that we always knew," and bloods still took a little bit longer to get results for.
The moral of that story is people are resistant to change, especially in an industry like ours in health, where we are so indoctrinated in this paternalistic view of health and care. One of the things we try and champion is to turn the model upside down. You, the individual, should be in charge of your health and care. You should only come to me when you need me, not because you can't find that information or you don't have the tools to look after yourself. Part of my day job, actually, is Empower the Person: we have three pillars in NHS England, and one is called Empower the Person, where we're trying to develop tools and technologies to help you. But the long answer is yes, there's resistance, so bring people with you. The moral of the story: bring people with you. Don't just install an algorithm or something and say, "This is definitely going to work. Use it." Understand their needs, and then solve the problem. User-centered design.
Participant 11: I've been massively blown away and inspired by this and I'm especially inspired that technology is like your secondary or tertiary skill, and yet you talk about it with such elegance. As a technologist, I don't know how to start conversing in your world, the medical world. How do I go about learning a bit more about that, without getting a medical book that's really not particularly consumable?
Dr. Joshi: Be yourself. The great thing about health and the great thing about medicine is we're all experts. You are an expert in you, as a human being. How you work, that's a detail. I mean, I don't even remember the Krebs cycle. It's not important - oxygen, carbon dioxide, something, something. There are some basic things, some basic pathologies, that you need to figure out, but the most important thing people care about in medicine is the "why", not necessarily the "how", and you are probably an expert in the "why" already. You just need the confidence and the self-belief that you do know. Jennifer, are you a medical expert?
Participant 12: No.
Dr. Joshi: No, but do you feel like one?
Participant 12: Yes, for my son, sometimes.
Dr. Joshi: Yes, see, and we have these. We have what we call patients or experts by experience. You don't need to know anything about medicine per se. You need to understand health and what makes a person tick and what makes the system tick. But that hasn't answered your question of how the NHS works, which is a very long conversation, and that I would definitely do some background reading on.