Experts Warn: Skin Cancer Diagnosis Apps Are Unreliable and Poorly Regulated


Researchers cautioned that the current regulatory process for smartphone skin cancer apps that assess the risk of suspicious moles fails to offer enough protection to the public.

Smartphone apps are not accurate enough to spot all skin cancers, and current regulations do not provide adequate protection to the public, warn experts.

Smartphone apps that assess the risk of suspicious moles cannot be relied upon to detect all cases of skin cancer, finds a review of the evidence published by The BMJ today (February 10, 2020).

The researchers warn that the current regulatory process for these apps “does not provide adequate protection to the public.”

The World Health Organization estimates between 2 and 3 million non-melanoma skin cancers and 132,000 melanoma skin cancers occur globally each year, but survival is high if melanoma is spotted early, which makes prompt detection and treatment crucial.

Artificially intelligent (AI) smartphone apps offer the potential for earlier detection and treatment of suspicious moles. But they could be harmful, particularly if false reassurance leads to delays in people seeking medical advice.

In Europe, two apps (SkinVision and SkinScan) are currently available and are regulated as class 1 medical devices (deemed to have a low to moderate risk to the user). No apps currently have US Food and Drug Administration (FDA) approval.

A previous expert review of such apps suggested there is a high chance of skin cancers being missed.

So a research team led by Professor Jon Deeks at the University of Birmingham and Professor Hywel Williams at the University of Nottingham set out to examine the validity and findings of studies looking at the accuracy of algorithm-based smartphone ‘skin’ apps.

Nine relevant studies that evaluated six different apps were identified. Studies were small and overall of poor quality.

Problems in the studies included that the suspicious moles were chosen by clinicians rather than by app users, that the photographs were taken by trained researchers on study phones rather than by users on their own phones, and that photographs that could not be evaluated by the apps were excluded. In addition, study participants were not followed up to identify cancers that the apps had missed.

SkinScan was evaluated in a single study of 15 moles with five melanomas. The app did not identify any of the melanomas.

SkinVision was evaluated in two studies. One study of 108 moles (35 of them cancerous or precancerous) reported a sensitivity of 88% and a specificity of 79%. This means that 12% of patients with cancerous or precancerous moles would be missed, while 21% of non-problematic moles would be wrongly flagged as potentially cancerous.

To put this into context, the authors explain that in a population of 1000 users in which 3% have a melanoma, SkinVision could still miss four of 30 melanomas and 200 people would wrongly be told their mole was of high concern. But they point out that the limitations of the studies suggest that more errors are likely to be made.
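The arithmetic behind the authors' worked example can be reproduced directly from the reported sensitivity and specificity. A minimal sketch (the 1,000-user population and 3% prevalence are the figures the authors assume; variable names are illustrative):

```python
# Worked example from the review: 1,000 app users, 3% melanoma
# prevalence, with SkinVision's reported 88% sensitivity and
# 79% specificity.
population = 1000
prevalence = 0.03
sensitivity = 0.88
specificity = 0.79

melanomas = population * prevalence          # 30 true melanomas
missed = melanomas * (1 - sensitivity)       # false negatives: 30 * 0.12
benign = population - melanomas              # 970 non-cancerous moles
false_alarms = benign * (1 - specificity)    # false positives: 970 * 0.21

print(f"Melanomas missed: {missed:.0f} of {melanomas:.0f}")
print(f"People wrongly flagged as high concern: {false_alarms:.0f}")
```

Rounding 30 × 0.12 = 3.6 gives the "four of 30" missed melanomas quoted by the authors, and 970 × 0.21 ≈ 204 corresponds to the roughly 200 people wrongly told their mole was of high concern.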

They point out that both SkinVision and SkinScan are currently being marketed with claims that they can “detect skin cancer at an early stage” or “track moles over time with the aim of catching melanoma at an earlier stage of the disease.”

“Our review found poor and variable performance of algorithm-based smartphone apps, which indicates that these apps have not yet shown sufficient promise to recommend their use,” write the authors.

They warn that the current regulatory processes “are inadequate for protecting the public against the risks created by using smartphone diagnostic or risk stratification apps.”

And they say healthcare professionals “need to be aware of the limitations of algorithm-based apps to reliably identify melanomas, and should inform potential smartphone app users about these limitations.”

It is positive to see healthcare systems embracing data analytics and machine learning. However, there is little evidence that current AI apps can outperform a clinician when assessing skin lesion risk, at least not in a verifiable or reproducible form, argue researchers at the University of Oxford in a linked editorial.

They say the implications for patients, regulators, and clinicians are substantial, and call for several measures to improve transparency (driving up overall quality), and enable better reproducibility and audit by the wider research community.

Without such measures, patients, clinicians, and other stakeholders cannot be assured of apps’ safety and efficacy, they conclude.

Reference: “Algorithm based smartphone apps to assess risk of skin cancer in adults: systematic review of diagnostic accuracy studies” by Karoline Freeman, Jacqueline Dinnes, Naomi Chuchu, Yemisi Takwoingi, Sue E Bayliss, Rubeta N Matin, Abhilash Jain, Fiona M Walter, Hywel C Williams and Jonathan J Deeks, 10 February 2020, BMJ.
DOI: 10.1136/bmj.m127
