Background: Medicine is becoming an increasingly data-centred discipline and, beyond classical statistical approaches, artificial intelligence (AI) and, in particular, machine learning (ML) are attracting much interest for the analysis of medical data. It has been argued that AI is undergoing a fast process of commodification. This characterization correctly reflects the current industrialization of AI and its reach into society. Societal issues related to the use of AI and ML should therefore no longer be ignored, least of all in the medical domain. These issues take many forms, but they all entail designing models from a human-centred perspective, incorporating human-relevant requirements and constraints. In this brief paper, we discuss a number of specific issues affecting the use of AI and ML in medicine, such as fairness, privacy and anonymity, and explainability and interpretability, as well as broader societal issues such as ethics and legislation. We consider all of these relevant to fostering the acceptance of AI- and ML-based technologies, as well as to complying with evolving legislation on the impact of digital technologies on ethically and privacy-sensitive matters. Our specific goal here is to reflect on how these topics affect medical applications of AI and ML. This paper includes some of the contents of the "2nd Meeting of Science and Dialysis: Artificial Intelligence," organized at the Bellvitge University Hospital, Barcelona, Spain.

Summary and Key Messages: AI and ML are attracting much interest from the medical community as key approaches to knowledge extraction from data. These approaches are increasingly extending into domains of high social impact, such as medicine and healthcare. Issues of social relevance with an impact on medicine and healthcare include (although they are not limited to) fairness, explainability, privacy, ethics and legislation.
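To make one of the abstract's concerns concrete, fairness in ML-based medicine is often operationalized through quantitative criteria. A minimal sketch of one such criterion, demographic parity (whether a model's positive-prediction rate differs across patient groups), is shown below; the function names and toy data are invented for illustration and are not from the paper.

```python
# Illustrative sketch of one fairness notion: demographic parity.
# All names and data here are hypothetical, for illustration only.

def positive_rate(predictions):
    """Fraction of positive (1) predictions in a list of 0/1 labels."""
    return sum(predictions) / len(predictions)

def demographic_parity_gap(preds_group_a, preds_group_b):
    """Absolute difference in positive-prediction rates between two groups.
    A value near 0 suggests similar treatment of the groups on this metric."""
    return abs(positive_rate(preds_group_a) - positive_rate(preds_group_b))

# Toy predictions (1 = patient flagged for intervention) for two groups:
group_a = [1, 0, 1, 1, 0, 1, 0, 1]  # positive rate 5/8 = 0.625
group_b = [1, 0, 0, 1, 0, 0, 0, 1]  # positive rate 3/8 = 0.375
print(demographic_parity_gap(group_a, group_b))  # prints 0.25
```

Demographic parity is only one of several competing fairness criteria (others condition on the true outcome, e.g. equalized odds), and the criteria cannot in general all be satisfied at once, which is part of why fairness in medical ML is a design decision rather than a purely technical fix.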
