

New Research Projects Explore Ethics of Artificial Intelligence in Medicine

Northwestern Researchers Awarded Grants for Ethical AI in Healthcare

The Centre for Bioethics and Medical Humanities (CBMH), in collaboration with the Institute for Artificial Intelligence in Medicine (I.AIM), has announced the recipients of its 2025–2026 Ethics and AI Research Grants. The grants fund early-stage research on the ethical issues raised by the application of artificial intelligence in medicine, with the selected pilot studies intended to lay the groundwork for larger studies and future grant applications.

This funding scheme is one part of a broader strategy to promote AI research that attends to ethical questions as well as technical innovation. It supports small pilot studies addressing new or unresolved ethical questions in clinical practice, public health, medical research, and healthcare policy. The programme is open to both practical and theoretical research and covers the entire life cycle of AI technologies, from development and testing to application in practice.

One of the selected researchers is Tim Schwirtlich, a PhD student in the Health Science Integrated Programme. His project, titled Ethics at Eye Level, will study the bioethical impact of AI-powered smart glasses used for personal health and well-being. With guidance from Dr Abel Kho, Schwirtlich plans to explore how patients feel about privacy, autonomy, and access when using such devices. He will also work on designing technology frameworks that are ethically sound and support responsible innovation. This research connects artificial intelligence ethics with real-life health tools and aims to improve the social impact of AI technologies.

Another grant was awarded to Wilson Ting, a graduate student in Biomedical Engineering and Computer Science. His project, supervised by Dr Anne Stey, will look at how generative large language models (LLMs) can be used to predict intra-abdominal injuries by analysing electronic medical record data. Ting will compare LLMs with traditional natural language processing (NLP) methods and study how doctors view the ethical use of AI-generated predictions in clinical settings. This work is a strong example of computer science and AI research being applied to real healthcare problems, with a focus on ethics and trust.

In addition to the AI-focused grants, CBMH also announced the winners of its general Pilot/Exploratory Grant. Amy McArthur, mentored by Dr Sofia Garcia, will study how cancer survivors understand their rights related to disability and employment; her research aims to improve support systems and legal awareness for patients. Meanwhile, Brady Daitch, guided by Dr Mohammad Hosseini, will explore how visual icons can make informed consent easier for research participants to understand, showing how design tools can support clearer communication in medical research.

Applications for the 2025–2026 cycle are now closed. According to the CBMH and I.AIM teams, the grants are intended to fund interdisciplinary research, strengthen ethical reflection in technology development, and foster the responsible adoption of artificial intelligence in healthcare. The selected projects reflect a growing interest in aligning data science and artificial intelligence with human values, so that future healthcare tools are not only intelligent but also fair and respectful. The awards also highlight how universities and research institutes are encouraging students and early-career scientists to think deeply about the place of AI in society. As artificial intelligence continues to advance, such initiatives help ensure that new technologies serve people with care, dignity, and responsibility.

 

Editor's Note

The Ethics and AI Research Grants unveiled by the Centre for Bioethics and Medical Humanities and I.AIM are a welcome move. With artificial intelligence expanding rapidly in medicine, it is important to ask not only what AI can achieve, but also how it ought to be used. These grants signal that ethics is at last being treated as an integral part of AI research, not merely an afterthought.

What is most striking is the emphasis on real-world challenges. Whether it is smart glasses for personal health or AI systems that predict injuries from medical records, the winning projects are not solely technical; they are human. They ask how patients feel, how physicians decide, and how technology can be built to honour privacy, equity, and trust. This is precisely the sort of research we need if AI is to treat people with care and dignity.

It is also encouraging to see young researchers being nurtured. Early-career scientists and students are being given space to investigate difficult questions and develop thoughtful solutions. The inclusion of projects on informed consent and disability rights shows that ethics in AI is ultimately about people, communication, and justice.

Skoobuzz highlights that as artificial intelligence is used more and more widely in hospitals, clinics, and research facilities, we must ensure it does not lose the human touch. These grants help keep that balance intact, reminding us that AI in education, AI in healthcare, and AI in data science must all be driven by values, not merely algorithms.

 

FAQs

1. Who received the 2025–26 Ethics and AI research grants?

The grants were awarded by the Centre for Bioethics and Medical Humanities (CBMH) in collaboration with the Institute for Artificial Intelligence in Medicine (I.AIM). Among the recipients were Tim Schwirtlich, a PhD student in Health and Biomedical Informatics, for his project on AI-powered smart glasses and their ethical impact on health self-management. Wilson Ting, a graduate student in Biomedical Engineering and Computer Science, also received a grant for his research on using generative large language models (LLMs) to predict intra-abdominal injuries from electronic medical records. Additional recipients under the general pilot grant included Amy McArthur and Brady Daitch, focusing on disability rights and informed consent, respectively.

2. Why is ethics important in artificial intelligence?

Ethics is important in artificial intelligence because AI systems can affect people’s lives in serious ways, especially in healthcare, law, education, and public services. Ethical thinking helps ensure that AI is fair, safe, and respectful of human rights. It also helps prevent harm, protect privacy, and promote trust between users and technology. Without ethics, AI could be used in ways that are biased, unsafe, or unjust.

3. What are the main areas of focus in AI ethics research?

AI ethics research often focuses on key areas such as data privacy, fairness, transparency, accountability, and human autonomy. In healthcare, it also includes informed consent, clinical decision-making, and equitable access to technology. Researchers study how AI systems are designed, tested, and used, and they explore how to make these systems more responsible and inclusive.

4. How does AI impact data privacy and ethics?

AI can process large amounts of personal data, which raises serious questions about privacy and control. If not handled properly, AI systems may collect, share, or misuse sensitive information. Ethical research helps set rules and guidelines to protect data, ensure consent, and prevent discrimination. It also encourages developers to build systems that respect user rights and follow legal standards.

5. What institutions are leading in AI and ethics research?

Several institutions are leading AI and ethics research. In this case, the Centre for Bioethics and Medical Humanities and the Institute for Artificial Intelligence in Medicine at Northwestern University are playing a key role. Globally, other leaders include the MIT Media Lab, Oxford’s Institute for Ethics in AI, Stanford’s Institute for Human-Centered AI, and India’s IITs and IIITs, which are increasingly exploring ethical dimensions in their AI and data science programmes.