Jan 26, 2021 • 10 minutes read

Artificial Intelligence in eye care: the advantages, pitfalls and ethical dilemmas

Trine Johnsen

Artificial Intelligence (AI) is already all around us and plays an active role in our daily lives – although in most cases, you probably won’t notice it. Every time you open Facebook or see an online advert with a product you’ve just googled, AI works in the background. Within healthcare, there are already countless examples of AI: it’s used to power surgical robots, maximise hospital efficiency, and improve the quality of diagnosis and treatment plans. Within optometry, image analytics is mentioned as an area of opportunity. One thing we know for sure: AI will sooner or later become part of our clinical life.

That’s why we dive into this subject with Trine Johnsen, optometrist and Head of Professional Advancement at Specsavers in Norway. How can we in optometry benefit from already existing AI tools, and what’s more to come? What are the advantages and pitfalls? And what are the ethical dilemmas we’re facing?


Thanks for your time, Trine! At Clinical Conference 2020, you hosted a discussion session about Artificial Intelligence in optometry. It proved to be a much-debated subject many professionals have an opinion about. For this conversation, let’s start at the beginning. The many terms might be confusing when it comes to this topic: AI, machine learning, deep learning… Can you start by clarifying these words for us?

Trine Johnsen

Of course – let me start with Artificial Intelligence (AI). AI traditionally refers to an artificial creation of human-like intelligence that can learn, reason, plan, perceive, and even process natural language. AI is the broader umbrella, and machine learning and deep learning are both subsets of AI. The various types of AI have evolved hand-in-hand with the digital era – the internet – which has brought an explosion of data. This data, known as big data, is drawn from multiple sources, like social media and search engines. It would take humans decades to structure and grasp it, but computer algorithms can structure and understand it in milliseconds.

Then, let’s move to machine learning. Machine learning is one of the most common AI techniques used for processing big data. It’s a self-adaptive algorithm that gets better and better at analysing with experience or when new data is added. The more data you train it on, the better it becomes. A well-known type of machine learning is image recognition. In healthcare, machine learning can already be used for the diagnosis and management of several diseases.
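To make the idea of “getting better with more data” concrete, here is a deliberately tiny sketch of a learning algorithm: a nearest-centroid classifier. All the feature names and numbers are invented for illustration – real image-recognition models are vastly more complex – but the principle is the same: the model’s behaviour is derived entirely from the examples it is trained on.

```python
# Toy machine-learning sketch: a nearest-centroid classifier.
# Each training example is (feature_vector, label); the model is just
# the average feature vector (centroid) per label.

def train(examples):
    """Compute the per-label centroid of the training examples."""
    sums, counts = {}, {}
    for features, label in examples:
        counts[label] = counts.get(label, 0) + 1
        acc = sums.setdefault(label, [0.0] * len(features))
        for i, value in enumerate(features):
            acc[i] += value
    return {label: [v / counts[label] for v in acc]
            for label, acc in sums.items()}

def predict(model, features):
    """Return the label whose centroid is closest to the input."""
    def sq_dist(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))
    return min(model, key=lambda label: sq_dist(model[label], features))

# Invented two-number summaries of fundus images:
# [vessel regularity, lesion score] -- purely hypothetical features.
training_data = [
    ([0.8, 0.1], "healthy"),
    ([0.7, 0.2], "healthy"),
    ([0.3, 0.9], "retinopathy"),
    ([0.2, 0.8], "retinopathy"),
]
model = train(training_data)
print(predict(model, [0.75, 0.15]))  # -> healthy
```

Adding more (good) training examples shifts the centroids and sharpens the boundary between classes – which is also why biased or poor-quality training data directly degrades the predictions.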

Last but not least: deep learning is the next evolution of machine learning. What’s interesting about deep learning is that it mimics the human brain in how it processes data. It can detect objects, recognise speech, translate languages, and make decisions. If you own an iPhone, you probably know Siri; that’s an example of a deep-learning tool. The more you talk to Siri, the better it gets – and that’s how deep learning works. Other examples are self-driving cars and the recommendations presented to you all over the internet once you’ve searched for – for instance – a sweater. I think deep learning is fascinating, but also a bit scary, as it has the ability to learn without human supervision. We can program it to start working in one direction, but as it learns by itself, it could also move in another direction – without any human control.

You just mentioned image recognition as one example of AI usage in healthcare. Can you give other examples?

Trine Johnsen

Yes, there are many examples of AI in healthcare and optometry. One very useful and interesting example is Aipoly. This smartphone app can help people with visual impairment navigate or identify objects: by pointing the camera at an object, they get an audio description of that object in return. You can imagine that’s an incredibly helpful tool for people who can’t see well to navigate unfamiliar places.

In a few years, we’ll probably have AI tools that can detect eye diseases at an earlier stage than the tools we already have.

If we look at the optometrist’s perspective: do you know any devices that already use AI to evaluate the eye?

Trine Johnsen

There are several tools and solutions on the market already. The first optical device using AI was approved by the FDA (the Food and Drug Administration in the US – ed.) back in early 2018. It’s an application that uses AI to detect diabetic retinopathy in adults who have diabetes – a very easy-to-use tool. The software grades fundus images of patients with known diabetes. The patient has a fundus image taken; this image is sent to the cloud-based software, which then grades it. The outcome is one of two: either the patient has more than mild diabetic retinopathy and should be referred to an eye care professional for further investigation, or the patient has less than mild diabetic retinopathy and a new image should be taken in 12 months. As non-eye care specialists can use it as well, it’s a great tool in remote areas where a specialist isn’t always available.
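The two-outcome logic described above can be sketched in a few lines. Note the assumptions: the actual grading of the fundus image is done by the vendor’s AI in the cloud, and the 0–4 severity scale and threshold used here are a hypothetical simplification for illustration, not the vendor’s real interface.

```python
# Sketch of the binary referral decision described above, assuming a
# hypothetical severity grade: 0 = none, 1 = mild, 2+ = more than mild
# diabetic retinopathy. The grade itself would come from the cloud-based
# AI grading service, not from this code.

def referral_decision(severity_grade: int) -> str:
    if severity_grade >= 2:  # more than mild diabetic retinopathy
        return "refer to an eye care professional"
    return "rescreen in 12 months"

print(referral_decision(3))  # -> refer to an eye care professional
print(referral_decision(1))  # -> rescreen in 12 months
```

The point of such a hard binary output is that a non-specialist operator never has to interpret the image: every result maps to exactly one of two actions.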

Another tool is much newer: Laguna ONhE/RetinaLyze Glaucoma. (Here’s an interesting article on this tool – ed.) This tool uses fundus images to detect open-angle glaucoma as early as possible. There aren’t many studies on it yet, and the sample sizes of the studies that have been done are a bit small to draw conclusions, but it looks promising. The algorithm performs just as well as the very advanced Angio-OCT machine in detecting open-angle glaucoma. On one hand, I find that a bit disappointing, because it would be ideal if an AI tool performed better than an existing tool. On the other hand, the Angio-OCT device is very expensive – so if we can do the same job with a less expensive, more time-efficient and more patient-friendly tool, that’s already a win. In a few years, we’ll probably have AI tools that can detect eye diseases at an earlier stage than the tools we already have.

Are there other promising AI tools in eye care for early detection that you want to highlight?

Trine Johnsen

Yes! There’s one very exciting new tool that I follow closely. As eye care professionals, we know that age-related macular degeneration (AMD) is a common eye disease. Wet AMD usually affects people at a younger age than dry AMD. This specific AI tool is used on patients diagnosed with exudative (wet) AMD in one eye, to predict if and when they will get the disease in the other eye. By combining models based on 3D OCT images and corresponding automatic tissue maps, the tool can detect the disease in the other eye up to six months earlier than other clinical tests. This means that we can save sight with this AI tool, as we can treat wet AMD if it’s detected early enough.

You can find an article on this tool on the website of Nature Medicine. Please note that without a subscription you can only read the abstract.

As an optometrist, you still have the responsibility to ask the right questions, understand your patient and evaluate the eyes for all diseases – and not only rely on an AI tool.

Do you see any pitfalls or areas that optometrists need to be aware of as AI is entering our practices?

Trine Johnsen

Yes, absolutely – there are a few. The first thing we should be aware of is that an AI tool is never better than the input data. In machine learning, the model’s algorithm will only be as good as the data it’s trained on – commonly described as “garbage in, garbage out”. Biased data will result in biased decisions.

One example concerns differences between people. We know that the eyes of Caucasian people look different from the eyes of African and Asian people, and the same goes for younger and older people – meaning that the AI tool can fail on your patient if it’s built on images or data from people who differ from the patient you’re examining. Another example is image quality. If the quality of a fundus image is poor, the AI tool might come to a wrong conclusion or fail to diagnose at all – meaning your patient might get the wrong diagnosis or need to come back a second time.

The last one – that I’m personally a bit afraid of – is that most AI tools are built to monitor and detect specific eye diseases, while in optometry practices, we screen for many diseases. So, if you just trust one AI tool, you might miss other eye diseases. That’s very important to be aware of as an optometrist: you still have the responsibility to ask the right questions, understand your patient and evaluate the eyes for all diseases – and not only rely on an AI tool.

AI tools can make eye care accessible to many more people and help to save sight.

Then onto the positive side: what are the advantages of AI?

Trine Johnsen

There are many – luckily at least as many as the number of pitfalls! The obvious one is the ability to process data at a speed that no human can do. Another one is the consistency in detection. For example, if you send a fundus image to different experienced optometrists or ophthalmologists and ask for their opinion, or how they would grade the image, you will get inconsistent answers. If you send that same image to an AI tool, it’ll be consistent in its answers every time.

Also, AI tools can make eye care accessible to many more people and help to save sight, since anyone can be trained to take good images of the eye that can then be evaluated by the AI tool – as with the tool for detecting the level of diabetic retinopathy. So, efficiency in healthcare and support for the eye care professional in diagnosing and managing diseases are two more very promising advantages of AI.

Which ethical dilemmas do you think optometrists could meet when using AI in clinical decision-making?

Trine Johnsen

This is the part that hasn’t been debated much yet. I think we’ll face many dilemmas. One thing I’d be very worried about is what happens if I disagree with the diagnosis made by the AI tool. Where should the final decision lie? With me as a clinician, or should I just trust the tool?

Another dilemma is: what if the algorithm gets something wrong or overlooks clinical signs? Who would be deemed responsible: the producer of the AI system, the coder of the algorithm or the practitioner?

These areas haven’t been trialled legally yet, but there are some guidelines and some thinking behind it. It depends on the type of system: if the AI system is autonomous, the medical liability lies with the manufacturer of that system, meaning that companies providing these tools would need malpractice insurance. On the other hand, if it’s a decision-supporting tool – a tool that helps you make the diagnosis or management plan – then it’s you as the clinician who makes the final decision and carries the responsibility.

I think that’s the key with all AI tools in healthcare. As a clinician, you should take responsibility, not blindly trust the tools you use. These can help you, but you should always ask the right questions and think for yourself. That’s the benefit of being a human: we can ask questions, we can listen, we can reason.

Read more about this subject in the article The Ethics of AI from the Ophthalmologist or the American article Potential Liability for Physicians Using Artificial Intelligence. (Please note that for the latter, you’ll need a subscription to read the full article; you can read the abstract for free.)

As a clinician, you should take responsibility, not blindly trust the tools you use.
