The more we rely on medical devices for our treatment, the greater the risk that small flaws in their underlying algorithms will result in misdiagnosis - or worse. Many of these algorithms are trained on insufficiently representative patient data and, as a result, cannot account for normal variations in physiological traits across a diverse patient population. The diagnoses they generate are therefore sometimes ineffective and potentially harmful.
These biases - often hidden deep within the complex layers of machine learning and biometric processing at the heart of these devices - can prove fatal, particularly for under-represented demographics.
Take pulse oximeters, for example. During the COVID-19 pandemic they became a common household appliance, thanks to their ability to easily measure oxygen saturation - a critical early-warning indicator of the silent onset of the disease. A recent study found that these devices are less accurate in patients with darker skin pigmentation - to the extent that hypoxemia is three times as likely to go undetected in African-American patients as in Caucasian patients. It is hard to measure how this affected mortality during the pandemic.
The fact is that these devices have largely been tested on lighter-skinned people, their algorithms tuned to the light absorption and reflection characteristics of paler skin. As a result, they perform poorly on darker complexions. The same was found to be true of melanoma detection algorithms, which also perform poorly on darker skin for much the same reason - resulting in significantly delayed detection or, in some instances, the complete failure to spot the carcinoma.
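For readers who want to see the mechanism rather than take it on faith, here is a deliberately simplified Python sketch of how a fixed calibration curve can skew readings. Pulse oximeters estimate oxygen saturation from the ratio of red to infrared light absorption and map that ratio to a percentage via an empirically fitted curve; the coefficients and the melanin-induced shift below are illustrative assumptions, not measurements from any real device.

```python
# Toy model: a pulse oximeter maps the measured red/infrared absorption
# ratio R to an SpO2 percentage via an empirically fitted calibration curve.
# If that curve was fitted on one skin-tone cohort, the same physiology can
# produce a different reading in another.

def spo2_from_ratio(r: float, a: float = 110.0, b: float = 25.0) -> float:
    """Common linear approximation of the calibration curve: SpO2 ~ a - b*R.
    The coefficients here are illustrative, not taken from any real device."""
    return a - b * r

# Suppose (hypothetically) that higher melanin shifts the measured ratio
# slightly downward for the same true saturation.
true_ratio = 0.70        # ratio corresponding to a true SpO2 of ~92.5%
melanin_shift = -0.08    # assumed measurement bias, purely for illustration

reported_calibrated = spo2_from_ratio(true_ratio)                  # ~92.5%
reported_shifted = spo2_from_ratio(true_ratio + melanin_shift)     # ~94.5%

print(f"Reported SpO2 (calibration cohort): {reported_calibrated:.1f}%")
print(f"Reported SpO2 (shifted ratio):      {reported_shifted:.1f}%")
# Even a 2-point over-estimate can mask hypoxemia (SpO2 below 90%) in a
# patient hovering near the threshold - exactly the failure the study found.
```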
Then there is the electrocardiogram (ECG) machine, an indispensable part of every modern healthcare facility. While it might seem that all it really does is transpose the human heartbeat into a line on a graph, it in fact uses fairly complex algorithms to interpret a range of physiological signals to produce its output.
There is a growing body of evidence suggesting that these ECG algorithms are significantly less accurate for obese patients - generating inaccurate readings that lead to misdiagnosis and ineffective treatment. Since obesity is itself a major risk factor for heart disease, the delayed treatment that results can, in many instances, be fatal.
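One way this can happen is worth sketching out. Many automated ECG criteria compare waveform amplitudes against fixed voltage thresholds, and tissue between the heart and the electrodes attenuates the signal. The threshold below is in the spirit of real voltage criteria, but the specific numbers and the attenuation factor are hypothetical, chosen only to illustrate the failure mode.

```python
# Toy model: a fixed-threshold voltage criterion applied to an attenuated
# signal. The same heart can land on different sides of the threshold
# depending on how much tissue sits between it and the electrodes.

LVH_THRESHOLD_MV = 3.5   # illustrative voltage criterion for hypertrophy

def flags_hypertrophy(qrs_amplitude_mv: float) -> bool:
    """Flag left-ventricular hypertrophy if the summed QRS voltage
    exceeds a fixed threshold - a simplified stand-in for real criteria."""
    return qrs_amplitude_mv > LVH_THRESHOLD_MV

true_cardiac_voltage = 4.0   # the heart's actual signal, in mV (assumed)
attenuation_obese = 0.75     # assumed tissue attenuation factor

print(flags_hypertrophy(true_cardiac_voltage))                      # True
print(flags_hypertrophy(true_cardiac_voltage * attenuation_obese))  # False
# The attenuated signal slips under the threshold, and the same underlying
# condition goes undetected.
```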
Similar biases exist on account of gender. Women experience heart disease differently from men, and their symptoms are often misattributed to non-cardiac causes. This results in incorrect treatment and an unduly delayed diagnosis of the cardiac condition - a problem often exacerbated by medical algorithms trained primarily on male data. The under-representation of women during the preclinical stages of drug development regularly fails to capture exactly how women react to new drugs, resulting in incorrect assumptions about the side-effects these drugs will have in women, as well as their overall effectiveness. The implications of this sort of bias are truly far-reaching - potentially extending to half the human population.
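The training-data problem, too, can be demonstrated in a few lines of code. The sketch below fabricates a cohort in which men and women present the same disease through different symptoms, then trains a standard classifier on a deliberately male-skewed sample; every number in it is synthetic, invented purely to show the mechanism, not drawn from any clinical dataset.

```python
# Synthetic demonstration: a diagnostic model trained mostly on one sex
# learns that group's symptom pattern and under-detects disease in the other.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 10_000
male = rng.random(n) < 0.9                    # 90% male cohort (assumed)
disease = (rng.random(n) < 0.3).astype(int)   # 30% base rate (assumed)

# Hypothetical presentation: the "classic" symptom is strong in men and
# weak in women, who instead present with an "atypical" symptom.
classic = disease * np.where(male, 2.0, 0.5) + rng.normal(0, 1, n)
atypical = disease * np.where(male, 0.5, 2.0) + rng.normal(0, 1, n)
X = np.column_stack([classic, atypical])

model = LogisticRegression().fit(X, disease)
pred = model.predict(X)

for label, group in [("men", male), ("women", ~male)]:
    sick = group & (disease == 1)
    print(f"Recall among {label} with disease: {pred[sick].mean():.2f}")
# The printout typically shows markedly lower recall for women: because the
# training data is mostly male, the model leans on the "classic" symptom
# and misses the atypical presentation.
```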
These examples are just the tip of the iceberg.
While most of the problems discussed here are the result of historical circumstance - the genuine lack of demographically disaggregated training data at the time these devices were developed - they are exacerbated by the proprietary nature of medical devices and their underlying algorithms. This lack of transparency - both as to how these algorithms work and the data they were originally trained on - hinders our ability to carry out the root-cause analysis that would allow us to rectify these flaws. As a result, even if they wanted to, physicians and medical practitioners simply don't know how to compensate for what their devices are telling them.
An obvious solution would be to make these algorithms more transparent so that device manufacturers can better understand the machines they are building and modify them to cover the diverse demographics they are meant to serve. Where possible, this should include releasing the underlying algorithms as open source so that researchers, software engineers and medical practitioners alike can analyse them, detect any bias that exists and, where possible, modify them to better suit the patient populations they are being used on. Not only would this foster deeper collaboration between the different disciplines involved in developing these devices, it would also promote innovation by providing broader access to the tools required to build better, more inclusive medical devices.
But this is easier said than done. Proprietary algorithms are often the result of significant investments in research and development - money that companies can ill afford not to monetise. Any attempt at making these algorithms more accessible will therefore need to balance the legitimate need for wider scrutiny by the research community with the need to protect the intellectual property implicit in their creation.
We stand at the dawn of a new era in healthcare - one powered by artificial intelligence and smart devices. We have to ensure that all of humanity stands to benefit from these new technologies, regardless of skin color, gender or geographical location - and that will mean carefully balancing the competing interests described above. The choices we make today will determine the care we receive tomorrow.
Rahul Matthan is a partner at Trilegal and hosts a podcast called Ex Machina. His Twitter handle is @matthan.