Proceedings of a Workshop—in Brief
The pandemic and overlapping global crises, including climate change, have increased attention to the importance of mental health and well-being as foundational for humans. At the same time, COVID-19 significantly exacerbated gaps in education, leaving children one to three years behind in their learning. Artificial intelligence (AI) has demonstrated potential to be transformative in addressing challenges in mental health and education and in addressing broader sustainability challenges. However, there are well-founded concerns that AI could exacerbate inequity, further marginalizing underserved communities.
To further explore these issues, the National Academies of Sciences, Engineering, and Medicine’s Roundtable on Science and Technology for Sustainability, in collaboration with the Board on Health Care Services and the Board on Science Education, convened a hybrid workshop, Artificial Intelligence in Education and Mental Health for a Sustainable Future, on May 30, 2024. The workshop consisted of two parts: AI in mental health and well-being, and AI in education. Participants reviewed AI tools, applications, and strategies in education and mental health and their implications for sustainable development. Participants also discussed AI’s potential to accelerate progress on the Sustainable Development Goals (SDGs); discussions related most closely to SDGs 3 (good health and well-being), 4 (quality education), and 9 (industry, innovation, and infrastructure). This Proceedings of a Workshop—in Brief provides a high-level summary of key discussions held during the workshop.
Roundtable Co-Chair Cherry Murray (NAS/NAE), University of Arizona, and National Academies Senior Director Franklin Carrero-Martínez welcomed participants and described the workshop’s focus: the use of AI in mental health and education.
Workshop Chair Shefali Mehta, Open Rivers Consulting Associates, highlighted that AI is the natural progression of the computer modeling that humans have been developing and building for almost 100 years. The computational capabilities of AI have increased dramatically and will continue to do so. The workshop aimed to consider AI’s role in addressing mental health and education challenges.
From an economic perspective, human capital is our most critical resource, Mehta said. Education and mental health are foundational, as they nurture and grow our human capital, individually and collectively. The question becomes: how can AI support our innate resilience so that people can be well, be safe, learn, and contribute to their potential? Mehta noted that, as with any major technological advancement, AI offers a chance to be bold and think expansively. There are opportunities to address gaps in our systems where more attention is needed. AI reflects our world back to us, highlighting where we may need to make changes. However, there are challenges with AI, Mehta highlighted, including issues with transparency, data, and inequities. Additionally, the way in which we are approaching AI is itself causing mental health distress and worsening wellbeing, which should be considered by those developing and using these technologies.
The first session examined the key role of digital mental health technologies, and emerging concerns with these technologies, along with opportunities to optimize the benefits of AI.
Nebojsa Nakicenovic, International Institute for Applied Systems Analysis, moderated a panel on AI developments in mental health and well-being. He said that AI provides an opportunity and a way forward to address the mental health crises that were exacerbated by COVID-19. However, these technologies pose significant risks and barriers related to safety, trust, equity, and justice; there is a need for social steering and regulation to address these risks. He also highlighted the need for ethical oversight of, development of, and training on AI.
Tina Hernandez-Boussard, Stanford University, discussed the vast and exponential growth of AI, advancements in machine learning technology, and the public adoption of these technologies. Despite their potential, AI systems can make mistakes, and these mistakes can lead to harm, which can be cumulative. Hernandez-Boussard emphasized several key considerations for AI in mental health, including bias and fairness, accountability, transparency, and ethical decision-making. She highlighted that AI models learn from data representing majority populations, which often differ from underserved communities in how mental illnesses are described or experienced. This data gap increases the risk of misdiagnosis or inappropriate treatment when decisions are based on AI tools, potentially exacerbating existing disparities in mental health care and outcomes for underserved communities. Also, data breaches from AI can expose sensitive mental health information, leading to other types of potential harm, said Hernandez-Boussard. Individuals may not be fully aware that their data are being used in AI tools, both because of a lack of informed consent and because of potential misuse of personal information.
Hernandez-Boussard discussed a study of the use of natural language processing (NLP) to identify behavioral treatment recommendations for children with attention deficit hyperactivity disorder (ADHD) in primary care, where the disorder is often underdiagnosed.1 The NLP-based model, which employed pretrained large language models (LLMs), was used to detect ADHD and assess pediatrician adherence to treatment guidelines. Results indicated that patients on public insurance (Medicaid) were less likely than privately insured patients to receive a diagnosis and therefore were underdiagnosed by the model.
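The workshop summary does not describe the study’s modeling pipeline in detail. Purely as a hedged illustration of the general task of classifying clinical notes, the sketch below uses a simple TF-IDF and logistic regression stand-in with hypothetical, hand-labeled example notes; the actual study relied on pretrained language models.

```python
# Minimal sketch (not the study's actual pipeline): flag whether a clinical
# note documents a behavioral-treatment recommendation for ADHD.
# A TF-IDF + logistic regression classifier is used here only as a
# self-contained stand-in for the pretrained language models in the study.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Hypothetical, hand-labeled notes (1 = behavioral treatment recommended).
notes = [
    "Discussed parent training in behavior management; referral placed.",
    "Recommended classroom behavioral supports and follow-up in 3 months.",
    "Started stimulant medication; no behavioral therapy discussed.",
    "Medication refill only; no treatment plan changes.",
]
labels = [1, 1, 0, 0]

model = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)), LogisticRegression())
model.fit(notes, labels)

new_note = "Referred family to behavioral parent training program."
print(model.predict([new_note])[0])           # predicted label
print(model.predict_proba([new_note])[0][1])  # probability of a recommendation
```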
Another study evaluated 10 state-of-the-art language models using 16 questions on various mental health conditions.2 The results indicated that the models often gave overly cautious responses, lacked necessary safeguards, and could cause harm in mental health emergencies. Model performance was insufficient for reliable detection and management of psychiatric symptoms. The models also failed to manage psychosis-related queries appropriately, at times encouraged hallucinations, and provided unsafe or harmful advice for managing delusional thoughts. This study showed that AI models may risk reinforcing mental health stigma through inappropriate or insensitive responses, Hernandez-Boussard said.
__________________
1 Pillai, M., J. Posada, R. M. Gardner, T. Hernandez-Boussard, and Y. Bannett. 2024. Measuring quality-of-care in treatment of young children with attention-deficit/hyperactivity disorder using pre-trained language models. Journal of the American Medical Informatics Association 31(4):949–957. https://doi.org/10.1093/jamia/ocae001.
2 Grabb, D., M. Lamparth, and N. Vasan. 2024. Risks from language models for automated mental healthcare: Ethics and structure for implementation. medRxiv. https://doi.org/10.1101/2024.04.07.24305462.
Hernandez-Boussard discussed a third study examining the use of LLMs to identify depressive symptoms following cancer diagnosis. The models created a risk score for depression concerns among patients starting cancer treatment.3 While the results were promising, the model underestimated depression risk for female and Black cancer patients, further illustrating that AI may not capture differences in how subpopulations express mental health symptoms. Hernandez-Boussard described guiding principles of responsible AI in mental health (see Figure in Chin et al., 2023).4 These include (1) designing questions and algorithms to promote fairness and reduce health disparities; (2) ensuring that problem formulation is inclusive and representative; (3) engaging diverse stakeholders, including community members, to mitigate knowledge gaps; (4) identifying fairness issues and tradeoffs; and (5) establishing accountability.
When AI developers do not transparently report the data they use in models, there is potential for introducing disparities in outcomes, which can harm populations and break trust with communities. These communities then no longer want to participate and thus do not provide data, resulting in continued biases in AI. A key first step is building trust with underserved communities, Hernandez-Boussard said.
AI can be used to characterize mental health traits, support decisions about treatment, ask patients questions, and rank symptoms, said Isaac Galatzer-Levy, Google and New York University Grossman School of Medicine. These technologies offer measurement paradigms that can help support patient decision-making. He cautioned that there is a wide variety of mental health responses, for example, to traumatic events, that may not be captured through AI. Assessments of underlying biology help to support these analyses; for example, people with depression show low mood, have sleep issues, and are lethargic. External observations of such behavioral characteristics during treatment can be captured with machine learning methods.
Galatzer-Levy discussed his work on an AI model to assess post-traumatic stress disorder (PTSD) symptoms through facial features and vocal parameters among people presenting to an emergency department. He noted that eye blink rate is highly predictive of many conditions, such as multiple sclerosis, stress, and Parkinson’s disease, and that it is possible to build AI that assesses clinical symptoms using measures such as eye blink counts. Deep learning models built on such parameters can examine emotions or topics in words; combining these concepts forms the basis of LLMs. The models were found to have high interrater reliability for assessing PTSD symptoms. If we can measure people’s psychological functioning with high accuracy using AI tools, there are opportunities to overcome limitations in current measurement paradigms, Galatzer-Levy said. AI can measure symptoms with accuracy and may be especially important for assessing stress and trauma, among other symptoms. Automated health coaching also offers possibilities, for example, through chatbots.
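The summary does not detail how these behavioral signals are computed. As a hedged illustration of one such signal, the sketch below counts eye blinks from a per-frame eye-aspect-ratio (EAR) series; the upstream face-landmark detector, the closure threshold, and the synthetic data are all assumptions for illustration, not the presenter’s actual pipeline.

```python
# Minimal sketch: estimate blink count from per-frame eye landmarks.
# Landmark extraction (from a face-landmark detector) is assumed upstream;
# thresholds and data below are illustrative only.
import numpy as np

def eye_aspect_ratio(eye: np.ndarray) -> float:
    """eye: 6 (x, y) landmarks around one eye, in the common EAR ordering."""
    v1 = np.linalg.norm(eye[1] - eye[5])   # vertical distances
    v2 = np.linalg.norm(eye[2] - eye[4])
    h = np.linalg.norm(eye[0] - eye[3])    # horizontal distance
    return (v1 + v2) / (2.0 * h)

def count_blinks(ear_series, closed_thresh=0.21, min_closed_frames=2):
    """Count dips of the eye-aspect-ratio series below a closure threshold."""
    blinks, run = 0, 0
    for ear in ear_series:
        if ear < closed_thresh:
            run += 1
        else:
            if run >= min_closed_frames:
                blinks += 1
            run = 0
    return blinks

# EAR for one synthetic "open eye" set of landmarks.
open_eye = np.array([[0, 0], [1, 1], [2, 1], [3, 0], [2, -1], [1, -1]], float)
print(round(eye_aspect_ratio(open_eye), 2))  # -> 0.67

# Synthetic series: mostly open eyes (EAR ~0.3) with two brief closures.
ears = [0.30] * 20 + [0.15] * 3 + [0.30] * 20 + [0.12] * 4 + [0.30] * 10
print(count_blinks(ears))  # -> 2
```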
Beatriz Sanz Sáiz, Ernst and Young, moderated a panel on AI implementation and human elements, noting that AI will fundamentally lift global gross domestic product (GDP) by over $4 trillion over the next decade; it will be the most disruptive boost to the economy in history. The United States, China, the United Kingdom, Korea, India, and Japan are early disrupters, but developing countries are being left behind, presenting both an opportunity and a need. Using AI to support mental health and wellness is critical to success. It can also offer educational support, tailoring learning to a child’s needs.
Karen Magruder, The University of Texas at Arlington, discussed opportunities and gaps in AI technology with a focus on education. She began by describing key challenges with the technologies, for instance, ensuring that the goals of AI are aligned with human values; misaligned AI systems could lead to harmful consequences. Another challenge is the potential for hallucinations, or false information, generated by AI; these hallucinations are not aligned with real-world data. As discussed, bias in the underlying data supporting AI will result in bias in its outputs.
__________________
3 de Hond, A., M. van Buchem, C. Fanconi, M. Roy, D. Blayney, I. Kant, E. Steyerberg, and T. Hernandez-Boussard. 2024. Predicting Depression Risk in Patients With Cancer Using Multimodal Data: Algorithm Development Study. JMIR Medical Informatics 12:e51925. https://medinform.jmir.org/2024/1/e51925.
4 See Figure on Conceptual Framework for Applying Guiding Principles to Mitigate and Prevent Bias Across an Algorithm’s Life Cycle in Chin, M.H., N. Afsar-Manesh, A. S. Bierman, et al. 2023. Guiding Principles to Address the Impact of Algorithm Bias on Racial and Ethnic Disparities in Health and Health Care. JAMA Network Open 6(12):e2345050. doi:10.1001/jamanetworkopen.2023.45050.
As a result, AI may perpetuate existing biases or gaps. Other concerns include academic integrity, privacy and confidentiality issues, the potential for job loss, and the use of AI in fraud and scams.
Despite these challenges, AI offers benefits that can increase productivity and creativity in education. It can be used to design rubrics for grading assignments, develop prompts for classes, create lectures in different languages, and develop role plays. Students have also used AI for exam preparation, ESL (English as a Second Language) and writing support, and for considering ethical dilemmas without using identifying client information. It is possible to harness AI tools to help solve social issues. However, Magruder noted that the human element in education is critical and is missing from AI. While these technologies are not perfect and we should be aware of their limitations, it is important to embrace their strengths to enhance learning in an ethical and responsible way, Magruder said.
Sonja Batten, Stop Soldier Suicide Chief Clinical Officer, began by explaining that a core purpose of her organization is to offer free counseling services for veterans and service members at high risk for suicide. As background for AI in mental health, the first computer mental health chatbot, ELIZA, was created in the 1960s. Batten discussed other technological advances in mental health, such as the Department of Veterans Affairs’ first mental health app, PTSD Coach, which offered PTSD psychoeducation in a safe, ethical manner using a smartphone. Batten noted that rather than replacing the human component of mental health treatment, AI may have more meaningful potential to support mental health staff with tedious administrative tasks that contribute to the widespread problem of burnout. Many health care organizations are currently exploring AI tools to help with clinical notetaking and are using AI to make the work of therapy more effective. For example, AI can automate, track, and monitor agreed-upon actions between client and therapist during sessions. Other AI tools used by mental health professionals include those that can develop mindfulness scripts, homework sheets, and diary cards.
Batten said that her organization is gathering data from smartphones and digital devices to support AI efforts to understand behavioral health patterns leading up to veteran suicide. Stop Soldier Suicide’s “Black Box Project” is part of this work.5 The organization is inviting relatives of veterans who have died by suicide to share their loved one’s smartphones so that the data can help recreate the days and weeks before their death. Machine learning applied to these data could help staff understand and identify actions that signal someone is at high risk of suicide. AI in mental health will continue to become more prevalent, said Batten. Clients will be best served by teams of mental health experts and technology innovators working together to create tools that are effective, safe, and ethical.
Wysa is a mental health chatbot with 6.5 million users in 95 countries. It is also used by five major public health systems, such as the National Health Service in the United Kingdom, the Ministry of Health in Singapore, and state initiatives in the United States, said Chaitali Sinha of Wysa. Her organization’s goal with Wysa was to build a scalable, clinically effective product that makes access barriers irrelevant. The product has been studied extensively across clinical trials, real-world studies, and service evaluations. Sinha discussed how Wysa uses AI to support early detection and assessment, develop personalized interventions, and reduce barriers to help seeking. The primary concerns about AI in mental health, said Sinha, include user safety, model precision, and privacy. Wysa manages these by minimizing the collection of personally identifiable information (PII) and using clinician-created content and parameters. Other risks include data privacy and security, bias and fairness in the data, and unintended consequences of use. To address some of these challenges, there is a need to codesign with stakeholders; plan for cultural sensitivity and improvements; and design new technology for human oversight and review, she said.
José Lobo, Arizona State University, moderated a panel on policy issues related to broadening access and opportunities around AI.
__________________
5 For more information, see https://stopsoldiersuicide.org/blackboxproject.
Elizabeth Chin, Johns Hopkins University, discussed fairness and governance of algorithms under uncertainty. Her work focuses on analyzing the ethics of algorithms in AI; she noted that data are limited in the medical and public health fields, so AI models in these fields rely heavily on assumptions. Chin discussed mitigating allocative tradeoffs and harms using the example of an AI-based environmental justice screening tool. The tool was designed to support decision-making related to allocating funds from cap-and-trade proceeds to disadvantaged communities. Based on data on pollution burden (e.g., exposure and environmental effects) and population characteristics (e.g., socioeconomic factors and sensitive populations), the tool computed community scores. Communities falling into the top 25 percent of those scores were considered disadvantaged and were to receive funds.
Chin noted that there are multiple ways of looking at who is environmentally vulnerable. When outcomes or goals are abstract, it is possible to look at model sensitivity as a proxy for uncertainty. Regarding technical solutions to mitigate harm, one can incorporate multiple valid models, which reduces sensitivity to any single model’s assumptions, as illustrated in the sketch below. There is a need for more robust evaluation of AI, Chin said, as well as an understanding of whom it works for and how it is being used in decision-making. There is also a need for more participation, transparency, accountability, and humility.
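As a rough, hypothetical illustration of the screening and sensitivity-checking logic described above (not the actual tool Chin presented), the following sketch scores synthetic communities from pollution burden and population characteristics, designates the top 25 percent as disadvantaged, and checks how many designations flip between two equally plausible model variants.

```python
# Minimal sketch of the screening logic described above (synthetic data only):
# score = f(pollution burden, population characteristics), designate the top
# 25 percent as disadvantaged, and measure sensitivity across model variants.
import numpy as np

rng = np.random.default_rng(0)
n = 200  # hypothetical communities
pollution_burden = rng.uniform(0, 10, n)   # e.g., exposures, environmental effects
population_char = rng.uniform(0, 10, n)    # e.g., socioeconomic factors, sensitive groups

def designate(burden, pop, combine):
    """Return a boolean mask of communities in the top 25 percent of scores."""
    scores = combine(burden, pop)
    cutoff = np.percentile(scores, 75)
    return scores >= cutoff

# Two "valid" model variants that combine the components differently.
multiplicative = designate(pollution_burden, population_char, lambda b, p: b * p)
additive = designate(pollution_burden, population_char, lambda b, p: 0.5 * b + 0.5 * p)

# Communities whose designation flips between variants flag allocative sensitivity.
flipped = np.logical_xor(multiplicative, additive)
print(f"{flipped.sum()} of {n} communities change designation across model variants")
```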
Tom Romanoff, Bipartisan Policy Center, began by discussing the exponential growth of AI, which is predicted to continue. In the mental health space, AI is guiding users through relaxation practices and mental games, powering platforms that connect people with professionals, coordinating patient care, and offering tools that emulate mental health professionals. Adoption is driving the market in mental health: there are 380,000 health apps available through Apple and Android, and in 2022 there were between 10,000 and 20,000 mental health apps. Some U.S. insurers, universities, and hospital chains are encouraging access to these mental health apps.
Romanoff noted several issues related to AI in mental health, such as the need for more consideration of its oversight, evaluation of its effectiveness and accuracy, and its impact on vulnerable populations. Numerous studies have shown strong promise for these mental health apps, for example, in supporting those with depression. Energy use related to AI is an important area for consideration, Romanoff said. A 2024 report projected AI energy use in 2030 and found that, assuming current trends and a 15 percent adoption rate of these technologies by 2030, AI could consume 9.1 percent of energy in the United States.6
There are also several legislative considerations related to health and AI, including the need for an ethical framework, regulatory oversight and governance, liability and accountability, and an assessment of AI’s impact on the labor market. Today, five federal offices are working on oversight of AI: the U.S. Department of Health and Human Services (HHS) Office of the National Coordinator for Health Information Technology; the HHS Office for Civil Rights; the U.S. Food and Drug Administration; the Centers for Medicare & Medicaid Services; and the White House. There is also a bipartisan working group in Congress that has convened forums on AI-related issues and produced a road map and priorities for moving forward.
Regarding state efforts to regulate AI in health care, Massachusetts and Rhode Island have developed legislation to prioritize patient safety. Colorado has passed comprehensive AI legislation, and California and Texas may follow. Key issues at the state level include tackling discrimination in insurance or lending decisions; addressing use cases in diagnostic decisions; assessing the impact on labor and employment; examining algorithms used to provide mental health care; and increasing research on the use of AI.
The European Union (EU) has also taken action on AI, developing the AI Act’s risk framework.7 Passed in March 2024, with most of its provisions applicable in 2026, the framework outlines four main categories of risk related to AI: unacceptable risk, high risk, limited risk, and minimal risk.
__________________
6 The Electric Power Research Institute. 2024. Powering Intelligence: Analyzing Artificial Intelligence and Data Center Energy Consumption. https://www.epri.com/research/products/3002028905.
7 For more information, see https://digital-strategy.ec.europa.eu/en/policies/regulatory-framework-ai.
The EU is regulating AI technologies based on their assigned risk category.
The second session examined opportunities and challenges related to the use of AI in education, considering knowledge gaps between academia and the private sector and the need to address the digital divide.
Susan Hanson (NAS), Clark University, moderated a session on recent AI developments in education, noting that a key element in student learning is the connection between student and teacher. This cannot be recreated by AI.
AI can be defined as “machines with minds,” said Abejide Ade-Ibijola, University of Johannesburg. These technologies can conduct mundane, formal, or expert tasks. In education, AI serves in many capacities, offering personalized, adaptive learning platforms, intelligent tutoring systems (ITS), automated grading and assessment, and predictive analysis for student success. AI can also support language processing for literacy improvement, virtual and augmented reality in education, data analytics for curriculum enhancement, and procedural content generation in serious games.
In Africa, there are significant challenges in education that create barriers to the wide-scale expansion of AI, such as lack of funding and staff, said Ade-Ibijola. There are also infrastructure and socioeconomic barriers that make expansion difficult, such as poverty, inequality, and lack of basic electricity or access to technology. To address some of these challenges, Ade-Ibijola discussed the need to increase funding, involve young people through training, create accountable structures, and revamp education through measures such as increased teacher salaries. At his university, students are offered opportunities to train in AI and to create a portfolio of work. The university’s GRIT Lab had 477 students studying AI in 2023; 249 graduated in November 2023. The world is changing rapidly, and we need to train our students at a faster rate, Ade-Ibijola said. Solving societal problems through technology is important, and Africa is behind; there is a need to focus on solving problems that matter. With every AI tool released, jobs are being eliminated; however, skills for all are possible, he said.
Fengchun Miao, United Nations Educational, Scientific and Cultural Organization (UNESCO), discussed his organization’s extensive work and resources related to AI and education. UNESCO published Guidance for generative AI in education and research in 20238 and has held seminars on the topic in over 50 countries, delivered in local languages. The guidance describes the importance of considering the ethics of AI in education; promoting the design and use of AI for inclusivity; building AI competencies for teachers and students; and exploiting the use of AI to support futures of learning. The organization is currently developing AI competency frameworks for students and teachers, to be launched in September 2024. Additionally, it has released several key documents on the topic, including AI and Education: Guidance for Policy-makers and K-12 AI Curricula: A Mapping of Government-Endorsed AI Curricula.9
Miao noted multiple concerns about the use of generative AI in education, including the need to protect human agency and motivation, the potential for harm to the environment and ecosystems, and the need to protect linguistic and cultural diversity. Despite these concerns, Miao noted that generative AI may enable major new use scenarios in education, given its technology leaps compared with previous generations of digital technologies, including:
__________________
8 UNESCO. 2023. Guidance for generative AI in education and research. https://doi.org/10.54675/EWZM9535.
9 For more information, see https://www.unesco.org/en/digital-education/artificial-intelligence.
assessments and feedback on computed skills including basic language skills and coding; and
Miao also noted that there are examples of the successful use of generative AI in education that can be drawn upon, such as scaffolding higher-order thinking or creativity, enhancing co-creation of curricular content, creating AI agents or AI tools, personalizing formative assessment and diagnosis, chatbots leveraging GenAI models, and assessment tools for the validation of AI.
Miao also described controversies around generative AI; for example, data deprivation is worsening digital poverty, AI is outpacing national regulatory adaptation, and content is proliferating without consent. He also discussed that unexplainable AI models are being used to generate outputs and that AI-generated content is polluting the internet. AI is threatening plural knowledge construction and has enabled the generation of increasingly convincing deepfakes. Miao noted that the total number of deepfake videos by October 2023 had increased by 550 percent from 2019, with 98 percent being deepfake pornography and 99 percent targeting women.
Jeff Martin moderated a panel on AI implementation and human elements. He began by noting that Silicon Valley is taking the issue of AI and education seriously: recognizing how quickly AI is changing society, funders have devoted significant resources to teaching high school students about AI so that they can make informed career decisions. Specifically, aiEDU10 is a nonprofit that has created a dynamic curriculum around AI to support high school students in thinking about careers in this new reality.
Saurabh Sanghvi, McKinsey & Company, discussed generative AI and the future of work in the United States.11 He provided several statistics regarding the changing workforce through 2030 and beyond. For example, 30 percent of time spent on tasks in the U.S. economy could be enabled by automation, 12 million additional occupational transitions could happen by 2030, and over 80 percent of occupational transitions could occur among workers in customer service, production, and office support roles. He also noted that workers in lower-wage jobs are 14 times more likely to need to transition than those in the highest-wage jobs. As highlighted in the final point, job changes in response to generative AI will have disproportionate impacts on underserved populations, he said.
Numerous factors contribute to this changing workforce, such as the increased health care needs of an aging population; shifting consumer preferences toward e-commerce and delivery; accelerated automation adoption from generative AI; and increased infrastructure and net-zero investments, Sanghvi said. Shifts in the size and demographics of the available workforce (an aging workforce, more retirements, etc.) will also affect these changes, along with changes in worker preferences (e.g., demand for new working models). Thirty percent of hours worked today could be automated with generative AI, Sanghvi noted.12 The potential for job disruption due to generative AI exists at all education levels; however, it is important to distinguish what is technically feasible from actual adoption of generative AI, which will take a significant amount of time. Sanghvi noted that workers in lower-wage jobs are 14 times more likely to be affected than higher-wage workers, while workers in jobs that require less education are 1.7 times more likely to be affected than those with a bachelor’s degree or above. Women are 1.5 times more likely to be affected than men, and people of color are 1.1 times more likely to be affected than their white counterparts.
Sanghvi noted that adaptability and lifelong learning will be increasingly critical with these shifts in the workforce; the need for socioemotional skills will also increase, as will the need for technical skills and knowledge. He described six actions stakeholders can take to prepare for the future of work: (1) hire for skills, (2) engage and support women, (3) support and leverage historically overlooked populations, (4) build pipelines, (5) develop skills, and (6) explore new working models.
__________________
10 See https://www.aiedu.org.
11 McKinsey Global Institute. 2023. Generative AI and the future of work in America. https://www.mckinsey.com/mgi/our-research/generative-ai-and-the-future-of-work-in-america.
12 See Exhibit 3 with data on automation adoption with and without generative AI acceleration for various jobs in McKinsey Global Institute. 2023. Generative AI and the future of work in America. https://www.mckinsey.com/mgi/our-research/generative-ai-and-the-future-of-work-in-america.
Education is a human challenge, which is a key reason that generative AI is not being adopted as readily in the field, Sanghvi said. Also, change management and a lack of guidance and capacity building are limiting at-scale adoption of responsible generative AI in K-12 and higher education. There is a need to help educators and employers understand how and when to use these technologies in a responsible manner that always includes a human in the loop.
Sanghvi said that responsible AI consists of establishing policies, best practices, and tools to ensure its appropriate use.
Kristen DiCerbo, Khan Academy, discussed the work of her organization to integrate AI into its offerings. Khan Academy’s goal is to help address students’ low math proficiency. Khan Academy has 12 million learners per month in 190 countries and over 50 languages. There are 1.2 million learners who use the site for at least 2 hours per month; data indicate that these students show improvement on standardized test scores. In March 2023, Khan Academy piloted Khanmigo, an AI-powered tutor for students and an assistant for teachers, with 65,000 students in 553 school districts.14 It supports students’ learning by providing dialogue-based support and feedback as they engage in practice. Teachers can use student data through Khanmigo to draft lesson plans, summarize data, and have a conversation with students who might need it. It can also be used to inform instructional decision-making.
Taskeen Adam, Open Development & Education, began by discussing the purpose of education with a quote from Professor Syed Muhammad Naquib al-Attas in The De-westernization of Knowledge: “The seat of this knowledge in man is his spirit or soul, and his heart and his intellect.”15 Could AI ever reach the soul and the heart in the same way a teacher does, she asked. Adam introduced three dimensions of human injustices in education: cultural-epistemic injustices, material injustices, and political and geopolitical injustices. AI lacks embodied cognition; cognition depends on experiences that come from having sensorimotor capacities, and individual sensorimotor capacities are themselves embedded in a more encompassing biological, psychological, and cultural context. “Educators create educational experiences that strongly link to who they are, what they value, and how they understand the world,” Adam noted.16 Educators can make explicit their biases and philosophical approaches to teaching in the learning space, whereas current AI models present themselves as neutral, diplomatic, and authoritative, which can be problematic, she said.
AI cannot do what a human teacher can do, said Adam. There is also a risk that, with AI, human elements become commodified. Empathy, care, genuine interest, body language, and physical interaction are all important in the teaching and learning process. Yet, with AI, the risk is that these human elements will be unbundled and reserved for those who can pay, while AI tutors are made available to the masses as a cost-effective option. AI is also biased toward Western knowledge. These technologies may trickle down to lower- and middle-income countries and be used in ways that were not intended, said Adam. Cost-efficiency targets may lead governments to settle on AI educators as “good enough.” Generic AI educators may become the new mass public education system, Adam cautioned.
Meghna Tare, The University of Texas at Arlington, moderated a panel on broadening access to AI in education,
__________________
13 Sanghvi, S. 2024. Generative AI and the future of work. Presentation at the Workshop on Artificial Intelligence in Education and Mental Health for a Sustainable Future, May 30, 2024.
14 See https://www.khanmigo.ai.
15 See https://www.goodreads.com/en/book/show/23819170.
16 Adam, T. 2020. Open educational practices of MOOC designers: embodiment and epistemic location. Distance Education 41(2):171–185. https://doi.org/10.1080/01587919.2020.1757405.
noting the need to focus on leveraging AI for SDG 4 (“Ensure inclusive and equitable quality education and promote lifelong learning opportunities for all.”).
Victor P. Piotrowski, National Science Foundation (NSF), discussed how his agency is supporting work in AI and education to develop the nation’s AI-ready workforce. NSF funds AI education at the K-12, undergraduate, and graduate levels, as well as through fellowships and scholarships and experiential and informal learning. NSF leads a National AI Research Institutes Program that provides $20 million (about $4 million per year) over 5 years to more than 130 organizations in 40 states and the District of Columbia.17 The agency also supports 25 active institutes on AI, 20 percent of which are focused on AI and education. Piotrowski highlighted several examples of awards to these institutes.
NSF is also funding efforts to diversify participation in AI research, education, and workforce development through Minority Serving Institutions (MSIs). These institutions are a source of untapped talent critical to future AI innovation and responsible AI research, Piotrowski said. NSF also collaborates with governments, the private sector, and nonprofit and philanthropic partners; NSF-led work on national AI research resources includes a 2023 publication, Strengthening and Democratizing the U.S. Artificial Intelligence Innovation Ecosystem.18
Kevin Johnstun, U.S. Department of Education, discussed the Department’s 2023 report, Artificial Intelligence and the Future of Teaching and Learning.19 The report offers seven recommendations for policy action on AI in education: emphasize humans in the loop, align AI models to a shared vision for education, design using modern learning principles, prioritize strengthening trust, inform and involve educators, focus research and development on improving how AI addresses context, and develop guidelines and guardrails. The 2024 National Educational Technology Plan also highlights three critical areas of focus related to AI and education.
Johnstun reiterated a quote from his agency about the role of AI in education: “We envision a technology enhanced future more like an electric bike and less like robot vacuums. On an electric bike, the human is fully aware and fully in control, but their burden is less, and their effort is multiplied by the supporting technology.”21 The U.S. Department of Education hosted its first AI Literacy Day on April 19, 2024, to discuss the importance of building AI literacy in education. Johnstun also noted the Department’s efforts to support students with disabilities and promote equitable access to digital education.22
__________________
17 See https://new.nsf.gov/funding/opportunities/national-artificial-intelligence-research.
18 See https://www.ai.gov/wp-content/uploads/2023/01/NAIRR-TF-Final-Report-2023.pdf.
19 See https://www2.ed.gov/documents/ai-report/ai-report.pdf.
20 For more information about the 2024 Plan, see https://tech.ed.gov/netp.
21 Johnstun, K. 2024. Artificial Intelligence and the Future of Teaching and Learning. Presentation at the Workshop on Artificial Intelligence in Education and Mental Health for a Sustainable Future, May 30, 2024.
22 See https://sites.ed.gov/idea/files/Myths-and-Facts-Surrounding-Assistive-Technology-Devices-01-22-2024.pdf and https://oet.wp.nnth.dev/advancing-digital-equity-for-all.
Participants discussed key themes identified during the workshop related to AI in mental health and education. For example, the discussions on AI in mental health highlighted both the opportunities for AI to address mental health crises exacerbated by COVID-19 and the significant risks related to safety, trust, equity, and justice. Similarly, the session on AI in education emphasized the potential of AI to enhance educational outcomes while also addressing the critical challenges of bias, privacy, and the digital divide. Policy issues such as the ethical considerations of AI and the need for careful and responsible implementation of AI technologies were also discussed, in addition to the importance of human-centered AI development, transparent practices, and ongoing monitoring to ensure ethical standards.
Trust and AI were discussed repeatedly throughout the workshop. A participant noted that it is critical to build trust with communities to get them engaged in AI-related issues and to help them learn more about the benefits of AI. It will be important to develop tools that can reach the populations that could benefit the most, ensuring responsible application of tools to underserved populations. Responsible AI ensures that the models are deployed in communities that are represented in the training data.
A participant noted that the workshop discussions highlighted the need for AI to augment work in mental health and education rather than replace the human dimension. Additionally, judgment is critical to the human endeavor; AI can supplement it but not replace it. There is a need to examine how we can engage with risk and uncertainty in the application of AI, as well as our risk tolerance. Martin noted that while the focus of the workshop was on generative AI, which is consumer focused, there is a broader group of AI technologies that can be harnessed to support sustainability without the same controversies. Examples include combining Google Earth with pollution data, AI in drug discovery, and the TEMPO (Tropospheric Emissions: Monitoring of Pollution) project, which is examining methane releases in the environment.23 These technologies should be the focus of a separate discussion.
Murray and others noted the need to organize a global research center on AI, similar to the International Center for Physics of The World Academy of Sciences (TWAS), to teach young people from developing countries about AI and to discuss the growing impact of AI in science; there should be a focus on the Global South as part of this effort. Lobo said that when considering whether AI will replace jobs or skills, it may be helpful to take a historical perspective, considering, for example, the industrial revolutions, during which jobs were actually created. Another participant noted that jobs will be lost if society does not act. Murray noted that AI offers the developing world the possibility of leapfrogging. Lobo noted that AI is viewed in the developing world as an equalizer rather than a threat; it is seen as a technological revolution that the Global South can join.
Mehta provided concluding comments, noting that the workshop highlighted the wide range of applications of AI in mental health and education. These technologies can provide an important opportunity to serve more people in need, while not losing sight of the fact that human connection is critical to our ability to learn and to our sense of wellbeing and safety. A key theme of the discussion was change and change management. The act of change is exhausting, even when it is positive, Mehta pointed out. The change related to the proliferation of AI will be of great magnitude and will affect all areas of society, the economy, and the environment. Compassion and care are fundamental to accepting this change. In science, research, and sustainability, there is an opportunity to build on the dynamic nature of AI while integrating elements of change and compassion into our research and development.
When considering bias in AI models related to mental health and education, one must acknowledge that these biases existed prior to AI, Mehta said. However, we can use AI to correct these issues rather than worsen them. Mehta noted that the inclusion of indigenous and broader cultural knowledge related to mental health is important. Our current focus on mental health is embedded in Western psychology; however, there are many other non-Western approaches to caring for and improving mental health. AI could provide a major disruption that prompts us to reconsider what it means to be human and how we connect with each other and the world around us.
__________________
23 See https://www.cfa.harvard.edu/facilities-technology/telescopes-instruments/tropospheric-emissions-monitoring-pollution.
DISCLAIMER This Proceedings of a Workshop—in Brief was prepared by Franklin Carrero-Martínez, Jennifer Saunders, and Emi Kameyama as a factual summary of what occurred at the meeting. The statements made are those of the rapporteurs or individual meeting participants and do not necessarily represent the views of all meeting participants; the planning committee; or the National Academies of Sciences, Engineering, and Medicine.
COMMITTEE ON ARTIFICIAL INTELLIGENCE IN EDUCATION AND MENTAL HEALTH FOR A SUSTAINABLE FUTURE Shefali Mehta (Chair), Open Rivers Consulting Associates; Susan Hanson (NAS), Clark University; Beatriz Sanz Sáiz, Ernst and Young; and Klaus Tilmes, Senior Policy Advisor and Development Consultant.
STAFF Franklin Carrero-Martínez, Senior Director, Science and Technology for Sustainability Program (STS), Policy and Global Affairs; Emi Kameyama, Program Officer, STS; Danielle Etheridge, Administrative Assistant, STS; Alexandra Andrada, Board on Health Care Services, Health and Medicine Division; and Heidi Schweingruber, Board on Science Education, Division of Behavioral and Social Sciences and Education.
REVIEWERS To ensure that it meets institutional standards for quality and objectivity, this Proceedings of a Workshop—in Brief was reviewed by Munmun De Choudhury, The Georgia Institute of Technology, and Abejide Ade-Ibijola, University of Johannesburg.
SPONSORS This workshop was supported by the National Academy of Sciences George and Cynthia Mitchell Endowment for Sustainability.
For additional information regarding the workshop, visit: https://www.nationalacademies.org/event/42443_05-2024_artificial-intelligence-in-education-and-mental-health-for-a-sustainable-future-a-workshop
SUGGESTED CITATION National Academies of Sciences, Engineering, and Medicine. 2024. Artificial Intelligence in Education and Mental Health for a Sustainable Future: Proceedings of a Workshop—in Brief. National Academies Press. https://doi.org/10.17226/27995.
Policy and Global Affairs Copyright 2024 by the National Academy of Sciences. All rights reserved.