Gearing up for successful AI adoption in healthcare
In recent years, Artificial Intelligence (AI) has been transitioning from a concept to a technology that is actually revolutionising industries. The healthcare industry, too, has the potential to improve everything from drug discovery to patient care management with the help of AI. A CAGR of approximately 40% in value estimates for AI in the healthcare industry over the past few years is a testimony to this potential [ref1, ref2].
However, the implementation of AI in healthcare has witnessed its own set of challenges. As with any evolving technology, the right foundation can make a business a pioneer in its implementation, while the lack of one makes businesses shy away from adoption. From past implementations and research at Avegen, we have identified four critical aspects for the successful implementation of AI use cases.
1: Spotting a low-hanging fruit
The right use case is a low-hanging fruit, identified along three dimensions:
- Burning problems:
- Which processes or activities are consuming your time and resources?
- Which large scale activities are banking on human consistency?
- Which problems do you just wish a “genie” would come and solve for you?
- Quality & regulatory requirements:
- Tolerance for inaccuracy (no AI system is 100% accurate, so how much accuracy is good enough?)
- Regulatory requirements
- Tech feasibility
One of our products, Together For Her, has more than 1.5 million downloads as of 05-June-2023. Health equity, achieved by making an impact on lower socio-economic groups, has been one of the key objectives of the product stakeholders. Assessing that on a large scale by actually sampling and reaching out to users was a cumbersome task. This led to the requirement of predicting a user’s socio-economic class (SEC) using ML [ref]. For all users, de-identified data such as OS version, device manufacturer, and engagement patterns, already collected for quality monitoring and analytics, was available for this model. This made the problem technically feasible.
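As an illustration, here is a minimal sketch of how such a classifier could be wired up. The column names (`os_version`, `device_manufacturer`, `sessions_per_week`, `avg_session_minutes`, `low_sec`) and the model choice are assumptions for this sketch; the actual feature set and model may differ.

```python
# Sketch only: column names and model choice are illustrative assumptions.
from sklearn.compose import ColumnTransformer
from sklearn.ensemble import RandomForestClassifier
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import OneHotEncoder

categorical = ["os_version", "device_manufacturer"]      # de-identified device metadata
numeric = ["sessions_per_week", "avg_session_minutes"]   # engagement patterns

model = Pipeline([
    ("encode", ColumnTransformer(
        [("cat", OneHotEncoder(handle_unknown="ignore"), categorical)],
        remainder="passthrough")),  # numeric engagement features pass through unchanged
    ("clf", RandomForestClassifier(n_estimators=200, random_state=0)),
])

# df: de-identified analytics table; "low_sec" labels would come from a small surveyed sample
# model.fit(df[categorical + numeric], df["low_sec"])
# low_sec_share = model.predict(df[categorical + numeric]).mean()
```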
The primary objective was to estimate the proportion of low-SEC users, so a symmetrical number of false positives and false negatives does not affect the main metric being computed: one error inflates the count exactly as much as the other deflates it. This sets a low bar for first-version success. In contrast, imagine a use case where a finance company wants to compute credit scores and identify users who can be granted a loan using ML; even a small proportion of false positives can put the business at massive risk. Similarly, in healthcare, AI-driven diagnostics and treatments have the lowest tolerance for inaccuracy. In this domain, an AI assistant that improves a healthcare professional’s (HCP’s) productivity is a far better approach than trying to entirely replace a skilled professional.
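A quick simulation makes the cancellation concrete: if a classifier produces the same number of false positives and false negatives, the estimated proportion matches the true one exactly (all numbers here are invented for illustration).

```python
import numpy as np

rng = np.random.default_rng(42)
n = 100_000
truth = rng.random(n) < 0.30  # suppose 30% of users are truly low-SEC

pred = truth.copy()
k = 3_000  # equal counts of each error type
pred[rng.choice(np.flatnonzero(truth), k, replace=False)] = False  # false negatives
pred[rng.choice(np.flatnonzero(~truth), k, replace=False)] = True  # false positives

print(f"true proportion:      {truth.mean():.4f}")
print(f"estimated proportion: {pred.mean():.4f}")  # identical: the errors cancel out
```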
2: Keeping things simple
- Do not start from scratch (utilise existing resources, freeware, and APIs): for example, in our research on building a model that can detect whether a person is living with depression [ref], we utilised a secondary dataset and Fitbit’s step counts rather than processing raw accelerometer data. Many cloud platforms now readily provide APIs for face detection or object detection; developing that in-house from scratch would be like re-inventing a bullock cart (see the first sketch after this list).
- Use simple algorithms to begin with: supervised machine learning, especially binary classification, is one of the easiest approaches to implement. So, for example, if the requirement is “AI-driven digital interventions for better impact”, start with two sub-problems: “is this user on track for impact?” and “should I nudge this user at this point?” (see the second sketch below). With more success, this can then be experimented with reinforcement learning algorithms.
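For the first point, this is roughly what calling a ready-made face-detection API can look like, using Google Cloud Vision as one example (any equivalent cloud API works; installation and credentials setup are omitted):

```python
# Assumes the google-cloud-vision package is installed and credentials are configured.
from google.cloud import vision

client = vision.ImageAnnotatorClient()
with open("photo.jpg", "rb") as f:
    image = vision.Image(content=f.read())

response = client.face_detection(image=image)
for face in response.face_annotations:
    print("face found, confidence:", face.detection_confidence)
```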
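For the second point, here is a toy sketch of the “should I nudge this user?” sub-problem framed as plain binary classification; the features and training data are invented purely for illustration.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

# Hypothetical features per user: [days_since_last_session, modules_completed, streak_days]
X = np.array([[1, 4, 3], [9, 1, 0], [2, 6, 5], [14, 0, 0], [3, 5, 2], [11, 1, 1]])
y = np.array([0, 1, 0, 1, 0, 1])  # 1 = user historically disengaged without a nudge

clf = LogisticRegression().fit(X, y)
p_disengage = clf.predict_proba([[7, 2, 1]])[0, 1]
print("send a nudge" if p_disengage > 0.5 else "hold off")
```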
3: Data collection and architecture
Data quality is absolutely critical for any ML project. So whether or not there is an ML use case on the near-term roadmap, setting up best practices to capture correct, good-quality data is critical. Even without ML, it will at least yield valuable insights for business operations.
Often, a large amount of data is constantly generated, but if it is not organised properly, data analysis becomes extremely cumbersome, leading to errors even before an ML model is trained. The input data (referred to as ML features) varies across ML use cases. In fact, engineering features that bring out identifiable patterns is an art in itself. This step often gets underestimated and done loosely, leading to inaccuracies in engineered features, features that are not very effective, or re-work on features common across different ML use cases. In its article “IA before AI”, Forbes attributes information architecture as a key factor separating companies that thrive in the ML space from the others [ref].
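As a deliberately simplified example of engineering features once and reusing them, assume a hypothetical raw event log with `user_id`, `timestamp`, and `event_type` columns:

```python
import pandas as pd

# Raw event log: one row per app interaction (hypothetical schema and file name)
events = pd.read_parquet("events.parquet")
events["date"] = events["timestamp"].dt.date

features = (
    events.groupby("user_id")
          .agg(active_days=("date", "nunique"),
               total_events=("event_type", "size"),
               forms_submitted=("event_type", lambda s: (s == "form_submit").sum()))
          .reset_index()
)
# `features` can now feed multiple ML use cases instead of being re-derived each time.
```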
At Avegen, a continuous volume of data is generated as users submit forms or interact with the apps. The data required for analytics is de-normalised, de-identified, and aggregated into separate data marts for analysis. Data management and information architecture require dedicated development and maintenance.
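A minimal sketch of such a de-identify-and-aggregate step, with invented table and column names (the real pipeline is more involved):

```python
import hashlib
import pandas as pd

def pseudonymise(user_id: str, salt: str = "rotating-secret") -> str:
    """One-way hash so the analytics mart holds no directly identifying keys."""
    return hashlib.sha256((salt + user_id).encode()).hexdigest()[:16]

raw = pd.read_parquet("form_submissions.parquet")         # normalised source table
mart = (raw.assign(user_key=raw["user_id"].map(pseudonymise))
           .drop(columns=["user_id", "phone", "email"])   # strip direct identifiers
           .groupby(["user_key", "form_id"], as_index=False)
           .agg(submissions=("submitted_at", "count"),
                last_submission=("submitted_at", "max")))
mart.to_parquet("mart_form_engagement.parquet")           # aggregated analytics mart
```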
4: Tracking, testing and improvement iterations
Just like any software, AI systems evolve through iterations. Key to this evolution is understanding how good the current system is and what needs to be done to improve it. Even an apparently great model is at constant risk of deterioration from pattern changes, data drift, or missed scenarios where the model performs poorly. This creates the need for an automated feedback system that tracks model performance. So start simple, improve, and evolve!
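One lightweight building block for such a feedback loop is a drift check that compares recent model inputs (or scores) against a baseline window. The Population Stability Index below is one common choice, with the usual rule of thumb that values above 0.2 warrant investigation or retraining; the data here is synthetic.

```python
import numpy as np

def population_stability_index(baseline, recent, bins=10):
    """Compare two samples of the same feature/score; a larger value means more drift."""
    edges = np.quantile(baseline, np.linspace(0, 1, bins + 1))
    recent = np.clip(recent, edges[0], edges[-1])  # keep out-of-range values countable
    b = np.histogram(baseline, edges)[0] / len(baseline)
    r = np.histogram(recent, edges)[0] / len(recent)
    b, r = np.clip(b, 1e-6, None), np.clip(r, 1e-6, None)  # avoid log(0)
    return float(np.sum((r - b) * np.log(r / b)))

rng = np.random.default_rng(0)
psi = population_stability_index(rng.normal(0, 1, 5_000), rng.normal(0.5, 1, 5_000))
print(f"PSI = {psi:.3f}", "-> investigate / retrain" if psi > 0.2 else "-> stable")
```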
From my experience, just like real intelligence, AI must also evolve; the best is not created with a big bang.
Although it is standard practice for ML models to be tracked and re-trained through MLOps pipelines, human intervention should not be left out of the process.
Want to chat more about AI and healthcare? Let’s connect here.