Artificial intelligence and health: resituating the problem

Authors

Fernanda Bruno, Paula Cardoso Pereira, Paulo Faltay

DOI:

https://doi.org/10.29397/reciis.v17i2.3842

Keywords:

Artificial intelligence, Relational ontology, Health, Data extractivism

Abstract

In the face of recent advances in artificial intelligence, this note seeks to reassess fundamental questions that emerge in this context. Moving away from both salvationist and apocalyptic readings, we argue that the loss of the privilege of human exceptionalism can be an opportunity to rethink intelligence from a relational perspective, co-produced between humans and other-than-humans. Such an angle, however, must be accompanied by a careful examination of the power relations that largely define the fate of AI. On this point, we reflect on the implications of AI's hegemonic epistemic and business model, a predictive-accelerationist one dominated by large technology companies. Lastly, we highlight the risks involved in including intelligent machines in the fields of health and care, as well as the dangers of subordinating public values and rights to commercial interests, which demands attentive, collective and permanent care in the construction of the sociotechnical and political arrangements through which AI is implemented in this field.

Author Biographies

Fernanda Bruno, Universidade Federal do Rio de Janeiro, Instituto de Psicologia, Professor in the Graduate Program in Communication and Culture. Rio de Janeiro, RJ

PhD in Communication and Culture from the Universidade Federal do Rio de Janeiro.

Paula Cardoso Pereira, Universidade Federal do Rio de Janeiro, Escola de Comunicação. Rio de Janeiro, RJ

Master's degree in Communication Design from the Universidade de Buenos Aires.

Paulo Faltay, Universidade Federal do Rio de Janeiro, Escola de Comunicação. Rio de Janeiro, RJ

Master's degree in Communication Design from the Universidade de Buenos Aires.

References

BARAD, Karen. Meeting the Universe Halfway: Quantum Physics and the Entanglement of Matter and Meaning. Durham: Duke University Press, 2007.

BARAD, Karen. Posthumanist Performativity: Toward an Understanding of How Matter Comes to Matter. Signs: Journal of Women in Culture and Society, Chicago, v. 28, n. 3, p. 801-831, 2003.

BEDI, Gillinder; CARRILLO, Facundo; CECCHI, Guillermo A.; SLEZAK, Diego Fernández; SIGMAN, Mariano; MOTA, Natália B. et al. Automated analysis of free speech predicts psychosis onset in high-risk youths. npj Schizophrenia, London, v. 1, p. 15030, 2015. DOI: https://doi.org/10.1038/npjschz.2015.30.

BEIGUELMAN, Giselle. Máquinas companheiras. Morel, Santo André, n. 7, p. 76-86, 2023.

BELLI, Luca; DA HORA, Nina. ChatGPT: O que anima e o que assusta na nova inteligência artificial. Folha de S.Paulo, 20 jan. 2023. Disponível em: https://www1.folha.uol.com.br/tec/2023/01/chatgpt-o-que-anima-e-o-que-assusta-na-nova-inteligencia-artificial.shtm. Acesso em: 14 jun. 2023.

BENDER, Emily M.; GEBRU, Timnit; MCMILLAN-MAJOR, Angelina; SHMITCHELL, Shmargaret. On the dangers of stochastic parrots: Can language models be too big?. Proceedings of the ACM Conference on Fairness, Accountability, and Transparency, p. 610-623, March 2021. DOI: https://doi.org/10.1145/3442188.3445922.

BRIDLE, James. Maneiras de ser: animais, plantas, máquinas: a busca por uma inteligência planetária. Trad.: Daniel Galera. São Paulo: Todavia, 2023.

BRUNO, Fernanda. Tecnopolítica, racionalidade algorítmica e mundo como laboratório. In: GROHMANN, Rafael. Os laboratórios do trabalho digital: entrevistas. São Paulo: Boitempo, 2021.

BRUNO, Fernanda Glória; PEREIRA, Paula Cardoso; BENTES, Anna Carolina Franco; FALTAY, Paulo; ANTOUN, Mariana; COSTA, Debora Dantas Pio da; STRECKER, Helena; ROCHA, Natássia Salgueiro. “Tudo por conta própria”: autonomia individual e mediação técnica em aplicativos de autocuidado psicológico. RECIIS - Revista Eletrônica de Comunicação, Informação & Inovação em Saúde, v. 15, n. 1, p. 33-54, 2021. DOI: https://doi.org/10.29397/reciis.v15i1.2205. Disponível em: https://www.reciis.icict.fiocruz.br/index.php/reciis/article/view/2205. Acesso em: 21 jun. 2023.

COSTANZA-CHOCK, Sasha. Design Justice: Community-led Practices to Build the Worlds We Need. Cambridge, Massachusetts: The MIT Press, 2020.

DANOWSKI, Déborah; VIVEIROS DE CASTRO, Eduardo. Há mundo por vir? Ensaio sobre os medos e os fins. 2. ed. Desterro [Florianópolis]: Cultura e Barbárie: Instituto Socioambiental, 2017.

DONIA, Joseph; SHAW, James A. Co-design and ethical artificial intelligence for health: An agenda for critical research and practice. Big Data & Society, v. 8, n. 2, 2021.

ESCOBAR, Arturo. Designs for the Pluriverse: Radical Interdependence, Autonomy, and the Making of Worlds. Durham, NC: Duke University Press, 2018.

FOUCAULT, Michel. O nascimento da clínica. Rio de Janeiro: Forense Universitária, 1977.

GEBRU, Timnit. Effective Altruism Is Pushing a Dangerous Brand of ‘AI Safety’. Wired. 30 nov. 2022. Disponível em: https://www.wired.com/story/effective-altruism-artificial-intelligence-sam-bankman-fried/. Acesso em: 10 jun. 2023.

HARAWAY, Donna. Ficar com o problema: fazer parente no Chthuluceno. Trad.: Ana Luíza Braga. São Paulo: n-1 edições, 2023.

HARAWAY, Donna. Manifesto ciborgue: ciência, tecnologia e feminismo-socialista no final do século XX. In: HOLLANDA, Heloisa Buarque de (org.). Pensamento feminista: conceitos fundamentais. Rio de Janeiro: Bazar do Tempo, 2019.

HARAWAY, Donna. The Promises of Monsters: A Regenerative Politics for Inappropriate/d Others. In: HARAWAY, Donna. The Haraway Reader. New York, London: Routledge, 2004.

HAYLES, N. Katherine. Unthought: the power of cognitive nonconscious. Chicago: University of Chicago Press, 2017.

KURZWEIL, Ray. The singularity is near: when humans transcend biology. New York: Penguin, 2005.

LASHBROOK, Angela. AI-driven dermatology could leave dark-skinned patients behind. The Atlantic. 16 ago. 2018. Disponível em: https://www.theatlantic.com/health/archive/2018/08/machine-learning-dermatology-skin-color/567619/. Acesso em: 12 jun. 2023.

MALABOU, Catherine. L’intelligence n’est pas. Elle agit. Le Point. 13 abr. 2018. Disponível em: https://www.lepoint.fr/sciences-nature/catherine-malabou-l-intelligence-n-est-pas-elle-agit-13-04-2018-2210517_1924.php. Acesso em: 14 jun. 2023.

MARCUS, Julia L.; HURLEY, Leo B.; KRAKOWER, Douglas S.; ALEXEEFF, Stacey; SILVERBERG, Michael J.; VOLK, Jonathan E. Use of electronic health record data and machine learning to identify candidates for HIV pre-exposure prophylaxis: a modelling study. Lancet HIV, v. 6, n. 10, p. e688-e695, Oct. 2019. DOI: https://doi.org/10.1016/S2352-3018(19)30137-7.

MCQUILLAN, Dan. Resisting AI: An Anti-fascist Approach to Artificial Intelligence. Bristol, UK: Bristol University Press, 2022.

OBERMEYER, Ziad; POWERS, Brian; VOGELI, Christine; MULLAINATHAN, Sendhil. Dissecting racial bias in an algorithm used to manage the health of populations. Science, v. 366, n. 6464, p. 447-453, Oct. 2019. DOI: https://doi.org/10.1126/science.aax2342.

ORGANIZAÇÃO MUNDIAL DA SAÚDE – OMS. Ethics and governance of artificial intelligence for health: WHO guidance. Geneva: World Health Organization; 2021. Disponível em: https://www.who.int/publications/i/item/9789240029200. Acesso em: 08 jun. 2023.

PEÑA, Paz; VARON, Joana. Decolonising AI: A transfeminist approach to data and social justice. In: GLOBAL INFORMATION SOCIETY WATCH 2019. Artificial intelligence: Human rights, social justice and development. [S.l.]: Association for Progressive Communications, 2019. Disponível em: https://giswatch.org/sites/default/files/gisw2019_web_th4.pdf. Acesso em: 12 jun. 2023.

PERRIGO, Billy. OpenAI Used Kenyan Workers on Less Than $2 Per Hour to Make ChatGPT Less Toxic. Time. 18 jan. 2023. Disponível em: https://time.com/6247678/openai-chatgpt-kenya-workers/. Acesso em: 09 jun. 2023.

RICAURTE, Paola. Ethics for the majority world: AI and the question of violence at scale. Media, Culture & Society, v. 44, n. 4, p. 726-745, 2022.

ROSE, Nikolas. The politics of life itself. Theory, Culture & Society, London, v. 18, n. 6, p. 1-30, 2001.

SHARON, Tamar. When digital health meets digital capitalism, how many common goods are at stake?. Big Data & Society, v. 5, n. 2, 2018.

STATEMENT ON AI RISK. AI experts and public figures express their concern about AI risk. Center for AI Safety, San Francisco. Disponível em: https://www.safe.ai/statement-on-ai-risk#open-letter. Acesso em: 21 jun. 2023.

STRUBELL, Emma; GANESH, Ananya; MCCALLUM, Andrew. Energy and Policy Considerations for Deep Learning in NLP. In: ANNUAL MEETING OF THE ASSOCIATION FOR COMPUTATIONAL LINGUISTICS, 57., 2019, Florence, Italy. Proceedings […]. Florence: Association for Computational Linguistics, 2019. p. 3645-3650.

VENTURINI, Jamila. Vigilância, controle social e desigualdade: a tecnologia reforça vulnerabilidades estruturais na América Latina. Derechos Digitales, 15 out. 2019. Disponível em: https://www.derechosdigitales.org/13921/vigilancia-controle-social-e-desigualdade-a-tecnologia-reforca-vulnerabilidades-estruturais-na-america-latina/. Acesso em: 14 jun. 2023.

VINGE, Vernor. Technological Singularity, 1993. Disponível em: https://frc.ri.cmu.edu/~hpm/book98/com.ch1/vinge.singularity.html. Acesso em: 15 jun. 2023.

WONG, Matteo. AI doomerism is a decoy. The Atlantic. 02 jun. 2023. Disponível em: https://www.theatlantic.com/technology/archive/2023/06/ai-regulation-sam-altman-bill-gates/674278/. Acesso em: 09 jun. 2023.

Published

2023-06-30

How to Cite

Bruno, F., Pereira, P. C., & Faltay, P. (2023). Artificial intelligence and health: resituating the problem. Revista Eletrônica De Comunicação, Informação & Inovação Em Saúde, 17(2), 235–242. https://doi.org/10.29397/reciis.v17i2.3842

Issue

v. 17, n. 2 (2023)

Section

Notes on current situations