The World Health Organization called for caution on Tuesday (May 16) in using artificial intelligence for public healthcare, saying data used by AI to reach
decisions could be biased or misused.
The WHO said it was enthusiastic about the potential of AI but had concerns over how it would be used to improve access to health information, as a decision-support
tool, and to improve diagnostic care.
The WHO said in a statement the data used to train AI may be biased and generate misleading or inaccurate information and the models can be misused to generate
disinformation.
It was "imperative" to assess the risks of using large language model (LLM) tools, like ChatGPT, to protect and promote human wellbeing and protect public
health, the U.N. health body said.
A University College London team of researchers has developed an artificial intelligence (AI) programme that can identify minute brain anomalies that lead to
epileptic seizures.
The algorithm, used in the Multicentre Epilepsy Lesion Detection (MELD) project, reports the locations of abnormalities in cases of drug-resistant focal
cortical dysplasia (FCD), a major cause of epilepsy. It was developed by a multinational team using more than 1,000 patient MRI scans from 22 international
epilepsy centres.
FCDs are brain regions that have developed abnormally and frequently cause drug-resistant epilepsy. The condition is usually treated with surgery; however, finding the
lesions on an MRI is a persistent challenge for doctors, because MRI scans of FCDs can appear normal.
The scientists used about 300,000 locations across the brain to quantify cortical properties from the MRI scans, such as how thick or folded the cortex (the brain's
surface) was.
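The MELD code itself is not reproduced here, but the general idea of per-location ("per-vertex") feature analysis can be sketched in a toy form: measure cortical features at many points, compare each point against healthy-control statistics, and flag points that deviate strongly. Everything below is illustrative; the feature values, thresholds, and the simple z-score rule are assumptions, not the project's actual method.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical data: two cortical features measured at many vertices
# across the brain surface for one patient.
n_vertices = 300_000
thickness = rng.normal(2.5, 0.3, n_vertices)   # cortical thickness, mm
folding = rng.normal(0.1, 0.02, n_vertices)    # a folding/curvature index

# Inject a small lesion-like patch of abnormally thick cortex
# so the sketch has something to detect.
thickness[1000:1050] += 2.0

# Normative statistics, assumed known from a healthy-control cohort.
mu_t, sd_t = 2.5, 0.3
mu_f, sd_f = 0.1, 0.02

# Combined per-vertex deviation from the normative model (z-scores).
z = np.sqrt(((thickness - mu_t) / sd_t) ** 2 + ((folding - mu_f) / sd_f) ** 2)

# Flag vertices whose deviation exceeds a (made-up) threshold.
suspect = np.flatnonzero(z > 5.0)
print(f"{suspect.size} suspect vertices, first few: {suspect[:5]}")
```

In this toy setup most of the injected patch is flagged while the rest of the cortex is not; the real project trains a classifier on surface-based features from many patients rather than applying a fixed threshold.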
One evening whilst I was watching TV, my phone pinged with an all-too-familiar WhatsApp alert, with a message preview saying "Can you help". I recognised the
name as one of our Titan pharmacy customers, who was clearly in a state of panic.
I replied offering my assistance and asked him what was up. What followed over the next 24 hours was an interesting case study of how innovative technology can
genuinely solve real challenges in pharmacies.
Mr P (let's call him that) had booked a locum to cover his dispensary the next day so he could focus on his vaccination service. The problem: his locum
had just called to cancel his booking (no reason given), and now he had no cover.
Meanwhile, he was fully booked with back-to-back appointments and could not cancel them. He had phoned round his usual network of pharmacists, and no one was
available at short notice.
He was asking if there was anything that Titan could do to reduce his workload and said he had heard about Titan's artificial intelligence module.
Unfortunately, Titan.X had not been installed at this site and was not an option at this late stage.
Equally, Titan's digital workflow cannot be circumvented, so there was no way to remove steps from the process.
The Pharmacists Defence Association (PDA) has welcomed the government's publication on Wednesday (29 March) of its 'AI Regulation White Paper', which will regulate
artificial intelligence (AI) systems used in pharmacy.
The Association had raised concerns about the risk of patient harm due to inappropriate use of so-called AI, including that seen in some of the pharmacy systems
undertaking clinical checks.
For some time, it has been receiving concerns from practising pharmacists describing examples of the potentially detrimental impact of automation and online pharmacy
provision on patient safety and pharmacy practice.
As a result, it raised these concerns with regulators, Chief Pharmaceutical Officers, and parliamentarians in all four nations of the UK to urge action.
It said: "This is required not only to protect patients, but also the frontline pharmacists who could be blamed for potential harm caused by inappropriate use of so
called 'AI' systems implemented by their employer."
The PDA therefore welcomes the announcement that the UK government intends to strengthen regulation of such technology, empowering existing regulators to
develop tailored, context-specific approaches that suit the way AI is actually being used in their sectors, including pharmacy.