Rethinking trust-based artificial intelligence security
Protection against algorithmic discrimination
DOI: https://doi.org/10.15448/1984-6746.2024.1.45911

Keywords: artificial intelligence, safety, trustworthy, algorithmic discrimination, intersectional logic.

Abstract
The rapid rise of artificial intelligence (AI) raises ethical challenges, especially concerning trust in this technology and its implications for various demographic groups. This text adopts a phenomenological and hermeneutic philosophical approach, grounded in Husserl and Heidegger, to explore the existential safety of AI and its connection to trust. Trust in AI is examined not merely as a technical issue but as a phenomenon embedded in complex social dynamics, prompting reflection on discriminatory influences in intelligent systems. The theoretical framework incorporates recent contributions on trust in AI, aiming to expand the literature on trust-based AI safety, and engages with the Hiroshima Process International Code of Conduct for Advanced AI Systems.
References
AHMED, Shazeda; JAŹWIŃSKA, Klaudia; AHLAWAT, Archana; WINECOFF, Amy; WANG, Mona. Building the Epistemic Community of AI Safety. Social Science Research Network (SSRN), New York, Nov. 22, 2023. Available at: <http://dx.doi.org/10.2139/ssrn.4641526>. Accessed: Dec. 2, 2023.
BACHELARD, Gaston. Le nouvel esprit scientifique. Paris: Presses Universitaires de France, 1934.
EUROPEAN COMMISSION. Hiroshima Process International Code of Conduct for Advanced AI Systems. Brussels, Oct. 30, 2023. Available at: <https://digital-strategy.ec.europa.eu/en/library/hiroshima-process-international-code-conduct-advanced-ai-systems>. Accessed: Nov. 10, 2023.
CORRÊA, Nicholas K.; GALVÃO, Camila; SANTOS, James W.; DEL PINO, Carolina; PINTO, Edson P.; BARBOSA, Camila; MASSMANN, Diogo; MAMBRINI, Rodrigo; GALVÃO, Luíza; TEREM, Edmund; OLIVEIRA, Nythamar. Worldwide AI ethics: A review of 200 guidelines and recommendations for AI governance. Patterns, v. 4, n. 10, p. 1-15, 2023. Available at: <https://doi.org/10.1016/j.patter.2023.100857>. Accessed: Nov. 30, 2023.
DREYFUS, Hubert L. What Computers Still Can’t Do: A Critique of Artificial Reason. Cambridge: MIT Press, 1992.
DREYFUS, Hubert L. Why Heideggerian Artificial Intelligence failed and how fixing it would require making it more Heideggerian. Philosophical Psychology, Abingdon, v. 20, n. 2, p. 247-268, 2007.
DRUMMOND, John; TIMMONS, Mark. Moral Phenomenology. In: ZALTA, Edward N.; NODELMAN, Uri (Eds.). The Stanford Encyclopedia of Philosophy (Fall 2023 Edition). Available at: <https://plato.stanford.edu/archives/fall2023/entries/moral-phenomenology/>. Accessed: Mar. 1, 2024.
GOHAR, Usman; CHENG, Lu. A Survey on Intersectional Fairness in Machine Learning: Notions, Mitigation, and Challenges. In: Proceedings of the Thirty-Second International Joint Conference on Artificial Intelligence (IJCAI-23). International Joint Conferences on Artificial Intelligence Organization, 2023. Available at: <https://doi.org/10.24963/ijcai.2023/742>. Accessed: Nov. 20, 2023.
HEIDEGGER, Martin. A questão da técnica. Scientiæ Studia, v. 5, n. 3, p. 375-398, 2007.
HEIDEGGER, Martin. Contribuições à Filosofia (Do Acontecimento Apropriador). Rio de Janeiro: Via Verita, 2015.
HUSSERL, Edmund. A filosofia como ciência de rigor. Coimbra: Atlântida, 1965.
HUSSERL, Edmund. A crise das ciências europeias e a fenomenologia transcendental: uma introdução à filosofia fenomenológica. BIEMEL, Walter (Ed.). Rio de Janeiro: Forense Universitária, 2012.
JACOVI, Alon; MARASOVIĆ, Ana; MILLER, Tim; GOLDBERG, Yoav. Formalizing Trust in Artificial Intelligence: Prerequisites, Causes and Goals of Human Trust in AI. In: ACM Conference on Fairness, Accountability, and Transparency (FAccT), 2021, New York. [Proceedings of the…]. Fremont: Association for Computing Machinery (ACM), 2021. Available at: <https://doi.org/10.1145/3442188.3445923>. Accessed: Nov. 30, 2023.
KNOWLES, Bran; FLEDDERJOHANN, Jasmine; RICHARDS, John T.; VARSHNEY, Kush R. Trustworthy AI and the Logics of Intersectional Resistance. In: ACM Conference on Fairness, Accountability, and Transparency (FAccT), 2023, New York. [Proceedings of the…]. Fremont: Association for Computing Machinery, 2023. Available at: <https://doi.org/10.1145/3593013.3593986>. Accessed: Oct. 9, 2023.
MANHEIM, David. Building a Culture of Safety for AI: Perspectives and Challenges. Social Science Research Network (SSRN), New York, Jun. 26, 2023. Available at: <https://ssrn.com/abstract=4491421>. Accessed: Oct. 20, 2023.
PÉREZ Y MADRID, Aniceto; WRIGHT, Connor. Trustworthy AI Alone Is Not Enough. Madrid: Editorial Dykinson, 2023. Available at: <https://hdl.handle.net/10016/3>. Accessed: Nov. 30, 2023.
UNITED KINGDOM. Online Safety Act 2023. Chapter 50, UK Public General Acts. Norwich, England, 2023. Available at: <https://www.legislation.gov.uk/ukpga/2023/50/enacted>. Accessed: Nov. 15, 2023.
SEN, Amartya. A ideia de justiça. São Paulo: Companhia das Letras, 2011.
SINGH, Richa; VATSA, Mayank; RATHA, Nalini. Trustworthy AI. In: 3rd ACM India Joint International Conference on Data Science & Management of Data (8th ACM IKDD CODS & 26th COMAD), 2021, New York. [Proceedings of the…]. Fremont: Association for Computing Machinery, 2021. Available at: <https://doi.org/10.1145/3430984.3431966>. Accessed: Nov. 30, 2023.
WACHTER, Sandra; MITTELSTADT, Brent; RUSSELL, Chris. Why Fairness Cannot Be Automated: Bridging the Gap Between EU Non-Discrimination Law and AI. Computer Law & Security Review, v. 41, 2021. Available at: <http://dx.doi.org/10.2139/ssrn.3547922>. Accessed: Aug. 10, 2023.
WHITE HOUSE. Fact Sheet: President Biden Issues Executive Order on Safe, Secure, and Trustworthy Artificial Intelligence. Washington, Oct. 30, 2023. Available at: <https://www.whitehouse.gov/briefing-room/statements-releases/2023/10/30/fact-sheet-president-biden-issues-executive-order-on-safe-secure-and-trustworthy-artificial-intelligence/>. Accessed: Nov. 15, 2023.
License
Copyright (c) 2024 Veritas (Porto Alegre)
This work is licensed under a Creative Commons Attribution 4.0 International License.
Copyright
The submission of originals to Revista Veritas implies the transfer by the authors of the right of publication. Authors retain copyright and grant the journal the right of first publication. If the authors wish to include the same material in another publication, they must cite Revista Veritas as the site of original publication.
Creative Commons License
Except where otherwise specified, material published in this journal is licensed under a Creative Commons Attribution 4.0 International license, which allows unrestricted use, distribution and reproduction in any medium, provided the original publication is correctly cited. Copyright: © 2006-2020 EDIPUCRS