Balancing the Risks and Opportunities AI Presents for IDD – An AI Series Update
Unlocking the Potential of AI in Healthcare for the IDD Community: Bridging Gaps in Accessibility and Fairness While Embracing Transformative Opportunities
THE VBP Blog
October 3, 2024 – As we’ve discussed in past VBP blogs, Artificial Intelligence (AI) is quickly changing the world of healthcare. It can also have a huge impact on the lives of people with disabilities. From voice assistants like Alexa to smart prosthetics, AI is creating new ways for the IDD community to communicate, move, and live more independently. But with these new tools come risks, like concerns about privacy, AI bias, and making sure everyone has access to the technology.
What we’re learning as AI moves into every facet of healthcare is that it’s important to find a balance. As advocates, we are excited about the benefits AI can offer those with IDD. However, we must also consider the potential downsides that come with it. We need to ensure AI is inclusive, fair, and accessible for all. In this blog, we will look at the opportunities AI presents to the IDD community, and the challenges that need to be addressed to ensure it helps, not harms, individuals with disabilities.
How AI Can Benefit Individuals with Disabilities and the IDD Community
AI is opening up many possibilities for individuals with intellectual and developmental disabilities (IDD). And that’s important because over 7 million people in the United States alone have an IDD.
One of the most exciting ways AI is helping is by improving communication. For people who have difficulty speaking or expressing themselves, AI-powered tools can offer personalized communication aids, like apps that translate speech into text or assistive devices that respond to voice commands. This technology enables more independence and a better quality of life by allowing users to interact with the world around them in ways that were previously difficult or impossible.
AI can also be used to help individuals with disabilities navigate daily tasks. Smart home systems, powered by AI, can control lights, adjust thermostats, and even lock doors. And all of this is done through voice commands, which can make it easier for people with mobility issues or other disabilities to manage their homes independently. AI-powered prosthetics and wheelchairs that adapt to the user’s needs are other examples of how this technology can empower individuals with disabilities by providing them with greater physical freedom.
For the IDD community, AI can be used in healthcare settings to monitor and assess health conditions. AI is being used to analyze facial features to help identify children who might have an IDD, allowing them to receive care sooner. Other machine learning technology can track data about a person’s behavior, sleep, or medication routines to identify developmental delays or mental health or physical health issues that need to be addressed. This allows for better, more personalized care, ensuring that individuals with IDD get the help they need before issues become serious. AI can even assist in managing mental health conditions by predicting symptoms or recommending therapeutic interventions tailored to an individual’s needs.
AI offers enormous potential to improve the lives of individuals with disabilities by enhancing communication, independence, healthcare, and accessibility. However, to ensure everyone can benefit, it is essential to address concerns around access and fairness in the development and distribution of these technologies.
Drawbacks of AI in Healthcare for People with IDD: Inaccurate Disability Data
While AI has the potential to improve healthcare for people with intellectual and developmental disabilities (IDD), it can also actively work against the IDD community. One of the biggest drawbacks lies in the lack of accurate disability data used to develop AI algorithms. These algorithms are typically trained on large datasets, but when it comes to disability-specific data, many gaps exist.
One significant issue is that most healthcare datasets used for AI development are not inclusive of people with disabilities, or they fail to account for the complexities and variations within the disability community. For example, an AI system designed to predict health risks may overlook critical factors specific to individuals with IDD, such as communication barriers, social determinants of health, or the need for ongoing long-term care.
In fact, in 2023, there was a research study published called “Automated Ableism: An Exploration of Explicit Disability Biases in Artificial Intelligence as a Service (AIaaS) Sentiment and Toxicity Analysis Models,” that looked at the bias embedded in natural language processing algorithms and models. It found that every model tested showed “significant bias against disability,” by flagging a sentence as negative just because it contained a reference to disability.
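The kind of test the study describes can be sketched in a few lines: score paired sentences that differ only by a neutral disability reference and flag any large gap. The scorer below is a deliberately biased toy lexicon model, an assumption standing in for a real sentiment or toxicity API, so the numbers are illustrative only; the auditing pattern is what matters.

```python
# Toy illustration of a perturbation-based bias audit, in the spirit of the
# "Automated Ableism" study. The scorer here is an assumption: a deliberately
# biased stand-in for a real AIaaS sentiment model, which is what an actual
# audit would query instead.

POSITIVE_WORDS = {"great", "happy", "wonderful", "enjoys"}
NEGATIVE_WORDS = {"sad", "terrible", "awful", "struggles"}
# The bias under test: neutral disability terms treated as negative.
BIASED_TERMS = {"disability", "autistic", "blind", "deaf"}

def toy_sentiment_score(sentence: str) -> float:
    """Score a sentence in [-1, 1]. This toy model wrongly penalizes
    neutral disability terms, mimicking the bias found in the study."""
    score = 0.0
    for word in sentence.lower().split():
        word = word.strip(".,!?")
        if word in POSITIVE_WORDS:
            score += 1.0
        elif word in NEGATIVE_WORDS or word in BIASED_TERMS:
            score -= 1.0
    return max(-1.0, min(1.0, score))

def audit_pair(base: str, perturbed: str, threshold: float = 0.2) -> bool:
    """Flag bias when the variant mentioning disability scores notably
    lower than an otherwise identical sentence."""
    gap = toy_sentiment_score(base) - toy_sentiment_score(perturbed)
    return gap > threshold

# The second sentence differs only by a neutral disability reference,
# yet the biased scorer rates it more negatively and the audit flags it.
base = "My neighbor enjoys painting."
perturbed = "My neighbor, who has a disability, enjoys painting."
print(audit_pair(base, perturbed))
```

A real audit would run many such minimal pairs against the deployed model and report how often neutral disability mentions alone push a sentence toward a negative label.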
Without accurate data, AI systems can generate biased recommendations, potentially leading to misdiagnoses, improper treatment plans, or overlooked conditions. In another instance, it was recently reported that Medicare Advantage plans use AI to determine which services will be covered for their roughly 31 million beneficiaries. But the algorithms are not trained on data inclusive of individuals with disabilities, and those who have care denied have no way to fight back because they don’t know why coverage was denied in the first place.
Moreover, the reliance on generalized data can lead to healthcare systems that do not address the needs of those with IDD. For example, many AI models are based on assumptions about typical patient behaviors, which may not align with the experiences of individuals with disabilities. This can result in healthcare tools that are less effective or even harmful for people with IDD, as the algorithms may fail to recognize symptoms, behaviors, or healthcare challenges specific to this group.
Additionally, the lack of accurate disability data in AI could perpetuate existing biases in the healthcare system. If algorithms continue to be trained on data that excludes or underrepresents individuals with disabilities, these biases will be embedded in the technology, further marginalizing people with IDD in healthcare settings instead of helping decrease the gap that already exists.
Fairness of AI for People with Disabilities
The fairness of AI for individuals with disabilities is a critical issue as these systems often lack inclusivity and transparency. When AI algorithms are built on datasets that underrepresent people with disabilities, it leads to biased outcomes that fail to address the unique and complex needs of this population. To ensure AI is an effective tool for individuals with disabilities, these data gaps need to be addressed. Collecting more comprehensive and representative disability-specific data, as well as engaging people with disabilities in the development of AI systems, will be crucial for creating AI tools that truly benefit the IDD community.
Beyond this is the concern about the lack of input from the disability community in the development of AI tools. Without their involvement, algorithms may be designed in ways that do not prioritize fairness or accessibility. For example, facial recognition technologies have been criticized for failing to identify individuals with atypical facial structures or physical conditions that differ from normative datasets, further reinforcing disparities in both healthcare and everyday use.
Addressing fairness in AI requires creating more diverse datasets, including people with disabilities in AI development, and ensuring that algorithms are rigorously tested for bias. This will help create systems that are truly inclusive, equitable, and capable of improving the lives of individuals with disabilities rather than exacerbating existing barriers.
Advocates’ Perspective
The potential of AI in healthcare for the IDD community is immense but must be approached with caution. While AI offers significant opportunities in terms of promoting more independent living and improving communication and personalized care, it is not without limitations. Biased algorithms and a lack of disability-specific data pose real risks. As advocates, we must push for the inclusion of individuals with disabilities in AI development and ensure that fairness, accessibility, and transparency are prioritized. Only by addressing these issues can AI truly improve healthcare outcomes for everyone, including the IDD community, and empower individuals without exacerbating existing disparities.
Onward!
About the Author
Fady Sahhar brings over 30 years of senior management experience working with major multinational companies including Sara Lee, Mobil Oil, Tenneco Packaging, Pactiv, Progressive Insurance, Transitions Optical, PPG Industries and Essilor (France).
His corporate responsibilities included new product development, strategic planning, marketing management, and global sales. He has developed a number of global communications networks, launched products in over 45 countries, and managed a number of branded patented products.
About the Co-Author
Mandy Sahhar provides experience in digital marketing, event management, and business development. Her background has allowed her to get in on the ground floor of marketing efforts including website design, content marketing, and trade show planning. Through her modern approach, she focuses on bringing businesses into the new digital age of marketing through unique approaches and focused content creation. With a passion for communications, she can bring a fresh perspective to an ever-changing industry. Mandy has an MBA with a marketing concentration from Canisius College.