Issues and Opportunities of Artificial Intelligence in Health Science Education

by  Bernadette Howlett     May 7, 2021

I have been a health sciences educator and researcher for over twenty years. I currently lead a research team that uses data science technologies to understand and predict the impact of educational innovations on learner progress, perceptions, and outcomes. Our work applies concepts from the latest disruptive innovation in education: artificial intelligence (AI). AI is a broad field of theory and practice that concentrically encompasses machine learning, deep learning, and neural networks (also referred to as artificial neural networks), as shown in Figure 1.

Figure 1: Components of AI

The term AI originated in 1956 with a working group led by John McCarthy at the Dartmouth Summer Research Project. The goal was to develop shared understandings around thinking machines. McCarthy et al. proposed that “every aspect of learning or any other feature of intelligence can in principle be so precisely described that a machine can be made to simulate it.” There are many definitions of AI, each offering a different interpretation. Marr compiled a list of definitions from sources including the English Oxford Living Dictionary, Merriam-Webster, Encyclopedia Britannica, Amazon, Google, Facebook, and IBM.


Perhaps the best way to define AI is to consider the innumerable examples of AI in daily life, such as biometrics (e.g. facial recognition), personalized/targeted ads, smart home devices, first-level customer response systems (like when we click on the ‘chat now’ option on many company websites), and much more. The lack of an agreed-upon definition of AI is indicative of the speed and scope of the expansion of the field. The proliferation of AI has greatly outpaced our ability to develop sound ontological, theoretical, and ethical foundations. 

AI systems work by analyzing a large set of data, referred to as training data. Deep neural networks within an AI application parse these data to identify patterns. The deep learning models are designed to mirror the structure of the human brain, linking layers of computation much as neurons are linked. The training data represent the majority of the available data; a portion is held back as validation data, and another portion as test data. The resulting model (or models) is fine-tuned against the validation set and then evaluated against the held-out test set to check how well it generalizes to data it has never seen.
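The train/validation/test partition described above can be sketched in a few lines of Python. This is a minimal illustration, not any particular system's implementation; the function name and the 70/15/15 proportions are my own choices for the example.

```python
import random

def split_data(records, train_frac=0.7, val_frac=0.15, seed=42):
    """Partition records into training, validation, and test sets.

    The model is fit on the training set, fine-tuned against the
    validation set, and finally evaluated once on the untouched
    test set to estimate how well it generalizes.
    """
    rng = random.Random(seed)
    shuffled = records[:]            # copy so the caller's list is untouched
    rng.shuffle(shuffled)
    n = len(shuffled)
    n_train = int(n * train_frac)
    n_val = int(n * val_frac)
    train = shuffled[:n_train]
    val = shuffled[n_train:n_train + n_val]
    test = shuffled[n_train + n_val:]  # remainder held out for final evaluation
    return train, val, test

train, val, test = split_data(list(range(1000)))
print(len(train), len(val), len(test))  # 700 150 150
```

The essential point is that the test set is touched only once, at the end; reusing it during tuning would give an overly optimistic picture of the model.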

 

AI in Health Science Education: Issues and Opportunities

There has been a rapid diffusion of AI in higher education, with some particularly intriguing innovations in the health sciences. Patient simulators are gaining fidelity and complexity thanks to AI. AI is being swiftly deployed and studied in diagnostics, in analyzing medical records, and in performing exceptionally large epidemiologic studies, to name a few examples. The National Institutes of Health (NIH) has numerous AI-related initiatives. For instance, the National Cancer Institute has identified four emerging applications of AI in oncology: screening and diagnosis, genomic characterization of tumors, drug discovery, and cancer surveillance.

There are vital ethical and educational issues to be confronted and addressed, both in the content of curricula and in our general use of these technologies in health science education. We are behind and urgently need to catch up. In my work, the issue I believe to be the most urgent is social and racial bias. Patterns of social and racial bias in AI have garnered considerable attention in recent months. AI ethics researcher Timnit Gebru said she was fired from Google for raising concerns about the societal impacts of the technology and for criticizing the company’s diversity efforts. Gebru’s departure was followed by a series of additional high-profile departures in protest of the circumstances surrounding her separation.

Patel explained the racial bias issue in AI: “The problem is, training data sets may lack enough data from minority groups, reflect historical inequities such as lower salaries for women, or inject societal bias, as in the case of Asian-Americans being labeled foreigners. Models that learn from biased training data will propagate the same biases.” The good news is that we have choices about how we use AI. AI can reproduce bias, or we can use it to identify bias and to inform strategies for resolving it, as represented in Figure 2.

Figure 2: Two Approaches to Bias in AI
 

A few specific applications of AI for identifying bias in health science education include examining course and instructor evaluation data (including natural language analysis of open-ended items), assessing the representativeness of patient mixes and health needs in clinical training, and testing for differences in the care decisions students make for different patient groups. In each case, changes can be implemented based on what is learned: faculty evaluation systems, the selection of rotation sites, and curricula related to implicit bias can all be targeted to reduce bias. It is also essential for higher education to raise its consciousness about the presence of bias in data before any use of AI. There are significant risks associated with these systems, and their seeming inevitability (as is often the case with powerful technologies) means it is incumbent on us to learn how they work, how to benefit from them, and how to avoid their flaws.
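One of the ideas above, checking the representativeness of a data set before any model is trained, can be sketched very simply. The function below is illustrative only; the field name, group labels, counts, and reference shares are hypothetical, invented for the example rather than drawn from any real data set.

```python
from collections import Counter

def representation_gaps(records, group_key, reference_shares):
    """Compare each group's share of the data set with a reference
    share (e.g., that group's share of the patient population served).

    Returns {group: data_share - reference_share}. Large negative
    values flag under-represented groups before any model is trained.
    """
    counts = Counter(r[group_key] for r in records)
    total = sum(counts.values())
    return {
        group: counts.get(group, 0) / total - ref
        for group, ref in reference_shares.items()
    }

# Hypothetical clinical-training encounter log and population shares.
encounters = [{"patient_group": g} for g in ["A"] * 80 + ["B"] * 15 + ["C"] * 5]
gaps = representation_gaps(encounters, "patient_group",
                           {"A": 0.60, "B": 0.25, "C": 0.15})
print(gaps)  # groups B and C come out under-represented relative to the population
```

A check like this is deliberately crude: it only surfaces where the data diverge from the population served, which is the conversation that needs to happen before the data are used to train anything.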

 

About the Author

Bernadette Howlett, PhD, is a researcher, educator, and author with extensive expertise in online education in health sciences, evidence-based practice, and education technology. She is the lead author of Evidence-based Practice for Health Professionals: An Interprofessional Approach (Jones & Bartlett Learning, 2020). Dr. Howlett currently leads a team of education innovation researchers and data scientists at Western Governors University focused on studying new models of faculty interventions, student engagement, and student outcomes.
