UK rules patient data shared with Google's DeepMind was illegal: AI is not a doctor

Sharing medical records to develop an app that helps doctors save lives is not the same as a doctor sharing information in order to provide 'direct care', the UK’s data protection watchdog has ruled. 

The UK’s Information Commissioner’s Office today said the Royal Free NHS Foundation Trust’s transfer of 1.6 million patients' data to UK artificial intelligence firm DeepMind was illegal. Google acquired DeepMind in 2014; the firm now sits under Google's parent company, Alphabet.

Royal Free should have obtained patients' consent before handing their personally identifiable medical records to DeepMind for processing, it said.

The company used the trove of records for clinical safety testing in late 2015 to develop its Streams app, which went live in February and now sends potentially life-saving smartphone alerts to clinicians when patient tests detect acute kidney injury. 

The information DeepMind processed for the tests included identifiable details about people who had sought treatment for any illness, not just kidney injury, going back to 2010. It also included data from the Trust’s radiology electronic patient record system.

The scale and terms of the data-sharing agreement were revealed in a report by New Scientist in April last year, two months after DeepMind announced it was working with the NHS. 

Royal Free contended at the time that it gave DeepMind the patient data solely for “direct clinical care”. Under UK common law, consent is implied in this scenario, which ensures that doctors can legally share information while providing care to patients.

The contention that identifiable records could legally be shared to create an app that helps doctors deliver direct clinical care was shot down this May by Dame Fiona Caldicott, the National Data Guardian (NDG), who advises the UK government on the confidentiality of health and care data.

In a letter to Royal Free, Caldicott said the data-sharing with DeepMind took place on an “inappropriate legal basis”.

The UK Information Commissioner, Elizabeth Denham, agreed with Caldicott’s assessment.

“On the basis of the Commissioner’s investigation, and having appropriate regard for the NDG’s views, it is reasonable to conclude, as the Commissioner does, that the Royal Free did not have a valid basis for satisfying the common law duty of confidence and therefore the processing of that data breached that duty,” Denham said in a letter to Royal Free’s CEO, Sir David Sloman.

“In this light, the processing was not lawful” under the UK’s Data Protection Act, she said.

An ICO undertaking asks Royal Free to establish a proper legal basis for the DeepMind project and any future trials, explain how it will comply with its duty of confidence to patients in any future trials, complete a privacy impact assessment, and commission an audit of the trial.

“There’s no doubt the huge potential that creative use of data could have on patient care and clinical improvements, but the price of innovation does not need to be the erosion of fundamental privacy rights,” said Denham.   

“Patients would not have reasonably expected their information to have been used in this way, and the Trust could and should have been far more transparent with patients as to what was happening.”

Denham said the hospital’s numerous shortcomings were “avoidable” and noted the ICO was not convinced it was “necessary or proportionate” to share 1.6 million patients’ records in order to test the Streams app.

Though the undertaking only pertains to Royal Free, DeepMind has admitted failures of its own, such as underestimating public concerns about “a well-known tech company” having access to 1.6 million people’s medical records.  

“In our determination to achieve quick impact when this work started in 2015, we underestimated the complexity of the NHS and of the rules around patient data, as well as the potential fears about a well-known tech company working in health,” wrote DeepMind co-founder Mustafa Suleyman and the firm’s clinical lead, Dominic King.  

“We were almost exclusively focused on building tools that nurses and doctors wanted, and thought of our work as technology for clinicians rather than something that needed to be accountable to and shaped by patients, the public and the NHS as a whole. We got that wrong, and we need to do better.”

Caldicott today explained why she opposed using data-sharing laws designed to facilitate life-saving work by human doctors as a basis for creating artificial intelligence that could help doctors do their job.

“Royal Free shared the information on the basis of ‘implied consent for direct care’. This is a legal basis that doctors, nurses and care professionals rely on every day to share information in order to make sure the individuals they are looking after receive the care they need. However, it is my view that this legal basis cannot be used to develop or test new technology, even if the intended end result is to use that technology to provide care,” she wrote.

“I’m afraid that a laudable aim – in this case developing and testing life-saving technology – is not enough legally to allow the sharing of data that identifies people without asking them first. We need to reassure the public there are always strong safeguards in place to make sure that confidential information will only ever be used transparently, safely and in line with the law and regulatory framework.”
