On 12 October 2023 the Health Law and Ethics Network held its October Monthly Lunchtime Seminar on 'Ethical Challenges in the use of AI in Healthcare and Academia'.
This seminar was presented by Professor Michael South and Dr Julian Koplin.
Artificial intelligence is increasingly used across all industries, but uncertainty remains about both the benefits and the risks it brings. Artificial intelligence is not just making inroads into healthcare; it is poised to revolutionise the way we work. We are entering an era of transformative possibilities and potentially groundbreaking innovations. With such progress, of course, come essential questions about ethics and safety. This presentation covered some basic concepts in the use of AI, gave examples of its use in healthcare, and delved into the ethical and other significant dilemmas the technology presents.
The seminar also discussed ChatGPT and academic publishing. In the wake of ChatGPT's release, academics and journal editors have begun making important decisions about whether and how to integrate generative artificial intelligence (AI) into academic publishing. Some argue that AI outputs in scholarly works constitute plagiarism and so should be disallowed by academic journals. Others suggest that it is acceptable to integrate AI output into academic papers, provided its contributions are transparently disclosed. The presentation argued against both of these views. Unlike 'traditional' forms of plagiarism, the use of generative AI can be consistent with the norms that should underlie academic research; in such cases, its use should neither be prohibited nor required to be disclosed. However, some careless uses of generative AI do threaten to undermine the quality of academic research by mischaracterising existing literature. This, not 'AI plagiarism', is the real concern raised by ChatGPT and related technologies.