Autonomous Vehicles and Collective Wide Reflective Equilibrium

Authors

Denis Coitinho

DOI:

https://doi.org/10.15448/1984-6746.2023.1.44388

Keywords:

Autonomous vehicles, Artificial Intelligence, Moral uncertainty, Normativity, Reflective equilibrium

Abstract

The aim of this article is to reflect on the need for moral standards to guide autonomous vehicles (AVs) and to propose the procedure of reflective equilibrium (RE) for this purpose. With this in mind, I begin with an investigation of moral disagreement, asking how we should decide in cases of uncertainty and arguing that we should use a procedure that brings together different normative criteria. I then present a promising line of inquiry, the method of collective reflective equilibrium in practice (CREP) proposed by Savulescu, Gyngell and Kahane (2021), which corrects the results of the Moral Machine experiment and proposes principles for a public policy to regulate AVs. The next step is to analyze the RE procedure, identifying its basic features of consistency, reflexivity, holism and progressiveness. This makes it possible to point out the limits of CREP, since it leaves out the normative criterion of the virtues and does not form a sufficiently wide coherent system of beliefs. Finally, I put forward the proposal of a collective wide reflective equilibrium (CWRE) in order to accommodate the normative plurality that underlies our society and to propose a methodology for identifying the moral standard for AVs.


Author Biography

Denis Coitinho, Universidade do Vale do Rio dos Sinos (Unisinos), São Leopoldo, RS, Brazil.

PhD in Philosophy from the Pontifícia Universidade Católica do Rio Grande do Sul. Professor in the Graduate Program at the Universidade do Vale do Rio dos Sinos (Unisinos). CNPq research productivity fellow.

References

ALBERSMEIR, Frauke. The Concept of Moral Progress. Berlin: De Gruyter, 2022. DOI: https://doi.org/10.1515/9783110798913

ANDERSON, Michael; ANDERSON, Susan Leigh. General Introduction. In: ANDERSON, Michael; ANDERSON, Susan Leigh (eds.). Machine Ethics. New York: Cambridge University Press, 2011. p. 1-4.

AWAD, Edmond et al. The Moral Machine Experiment. Nature, New York, v. 563, p. 59-64, 2018. DOI: https://doi.org/10.1038/s41586-018-0637-6

BBC NEWS. Uber’s self-driving operator charged over fatal crash. BBC News, London, 2020. Disponível em: https://www.bbc.co.uk/news/technology-54175359. Acesso em: 26 jan. 2023.

BEAUCHAMP, Tom L.; CHILDRESS, James F. Principles of Biomedical Ethics. Oxford: Oxford University Press, 2013.

BOGOSIAN, Kyle. Implementations of Moral Uncertainty in Intelligent Machines. Minds & Machines, Norwell, v. 27, p. 591-608, 2017. DOI: https://doi.org/10.1007/s11023-017-9448-z

BONNEFON, Jean-François; SHARIFF, Azim; RAHWAN, Iyad. The Moral Psychology of AI and the Ethical Opt-Out Problem. In: LIAO, Matthew (ed.). Ethics of Artificial Intelligence. New York: Oxford University Press, 2020. p. 109-126. DOI: https://doi.org/10.1093/oso/9780190905033.003.0004

BRANDSTEDT, Eric; BRÄNNMARK, Johan. Rawlsian Constructivism: A Practical Guide to Reflective Equilibrium. The Journal of Ethics, Hanover, v. 24, p. 355-373, 2020. DOI: https://doi.org/10.1007/s10892-020-09333-3

BRINK, David. Some Forms and Limits of Consequentialism. In: COPP, David (ed.). The Oxford Handbook of Ethical Theory. New York: Oxford University Press, 2006. p. 380-423. DOI: https://doi.org/10.1093/0195147790.003.0015

BUCHANAN, Allen; POWELL, Russell. The Evolution of Moral Progress: A Biocultural Theory. New York: Oxford University Press, 2018. DOI: https://doi.org/10.1093/oso/9780190868413.001.0001

CAMPBELL, Richmond. Reflective Equilibrium and Moral Consistency Reasoning. Australasian Journal of Philosophy, Adelaide, v. 92, n. 3, p. 433-451, 2014. DOI: https://doi.org/10.1080/00048402.2013.833643

CAMPBELL, Richmond; KUMAR, Victor. Moral Reasoning on the Ground. Ethics, Chicago, v. 122, n. 2, p. 273-312, 2012. DOI: https://doi.org/10.1086/663980

COPELAND, Jack. Artificial Intelligence: A Philosophical Introduction. Oxford: Blackwell, 2001.

CRISP, Roger; SLOTE, Michael (eds.). Virtue Ethics. Oxford: Oxford University Press, 1997. p. 217-238.

DANIELS, Norman. Wide Reflective Equilibrium and Theory Acceptance in Ethics. The Journal of Philosophy, New York, v. 76, n. 5, p. 256-282, 1979. DOI: https://doi.org/10.2307/2025881

ENGLISH NEWS. China’s Baidu operates robotaxi at night in Wuhan. English News, Beijing, 2022. Disponível em: https://english.news.cn/20221227/c06149e517884fabb79d1b0cad7950d1/c.html. Acesso em: 26 jan. 2023.

ETHICS COMMISSION. Automated and connected driving. Federal Ministry of Transport and Digital Infrastructure, Berlin, 2017. Disponível em: https://bmdv.bund.de/SharedDocs/EN/publications/report-ethics-commission.pdf?__blob=publicationFile. Acesso em: 5 fev. 2023.

ETZIONI, Amitai; ETZIONI, Oren. Incorporating Ethics into Artificial Intelligence. The Journal of Ethics, Hanover, v. 21, n. 4, p. 403-418, 2017. DOI: https://doi.org/10.1007/s10892-017-9252-2

FLORIDI, Luciano; COWLS, Josh. A Unified Framework of Five Principles for AI in Society. In: FLORIDI, Luciano (ed.). Ethics, Governance and Policies in Artificial Intelligence. Berlin: Springer, 2021. p. 5-17. DOI: https://doi.org/10.1007/978-3-030-81907-1_2

FOOT, Philippa. Virtues and Vices. Oxford: Blackwell, 1978.

FRANKENFIELD, Jake. Artificial Intelligence: What It Is and How It is Used. Investopedia, New York, 2022. Disponível em: https://www.investopedia.com/terms/a/artificial-intelligence-ai.asp. Acesso em: 26 jan. 2023.

GROVER, Simmy; MCCLELLAND, Alastair; FURNHAM, Adrian. Preferences for Scarce Medical Resource Allocation: Differences between Experts and the General Public and Implications for the Covid-19 Pandemic. British Journal of Health Psychology, Toronto, v. 25, p. 889-901, 2020. DOI: https://doi.org/10.1111/bjhp.12439

HARMAN, Gilbert; MASON, Kelby; SINNOTT-ARMSTRONG, Walter. Moral Reasoning. In: DORIS, John Michael (ed.). The Moral Psychology Handbook. Oxford: Oxford University Press, 2010. p. 206-245. DOI: https://doi.org/10.1093/acprof:oso/9780199582143.003.0007

HARRIS, John. The Immoral Machine. Cambridge Quarterly of Healthcare Ethics, Cambridge, v. 29, p. 71-79, 2020. DOI: https://doi.org/10.1017/S096318011900080X

HILL, Thomas. Kantian Normative Ethics. In: COPP, David (ed.). The Oxford Handbook of Ethical Theory. New York: Oxford University Press, 2006. p. 480-514. DOI: https://doi.org/10.1093/0195147790.003.0018

KAUR, Kanwaldeep; RAMPERSAD, Giselle. Trust in Driverless Cars: Investigating Key Factors Influencing the Adoption of Driverless Cars. Journal of Engineering and Technology Management, Amsterdam, v. 48, p. 87-96, 2018. DOI: https://doi.org/10.1016/j.jengtecman.2018.04.006

KUSHNER, Thomasine; BELLIOTTI, Raymond A.; BUCKNER, Donald. Toward a Methodology for Moral Decision Making in Medicine. Theoretical Medicine and Bioethics, Hanover, v. 12, n. 4, p. 281-293, 1991. DOI: https://doi.org/10.1007/BF00489889

LARSON, Jeff et al. How We Analyzed the COMPAS Recidivism Algorithm. ProPublica, New York, 2016. Disponível em: https://www.propublica.org/article/how-we-analyzed-the-compas-recidivism-algorithm. Acesso em: 26 jan. 2023.

MACASKILL, William. Normative Uncertainty as a Voting Problem. Mind, East Sussex, v. 125, n. 500, p. 967-1004, 2016. DOI: https://doi.org/10.1093/mind/fzv169

MESA, Natalia. Can the Criminal Justice System’s Artificial Intelligence ever be Truly Fair? Massive Science, New York, 2021. Disponível em: https://massivesci.com/articles/machine-learning-compas-racism-policing-fairness/. Acesso em: 26 jan. 2023.

PIETSCH, Bryan. Two killed in driverless Tesla car crash, officials say. The New York Times, New York, 2021. Disponível em: https://www.nytimes.com/2021/04/18/business/tesla-fatal-crash-texas.html. Acesso em: 26 jan. 2023.

RAJCZI, Alex. On the Incoherence Objection to Rule-Utilitarianism. Ethical Theory and Moral Practice, Hanover, v. 19, p. 857-876, 2016. DOI: https://doi.org/10.1007/s10677-016-9687-8

RAWLS, John. A Theory of Justice. Cambridge: Harvard University Press, 1971. DOI: https://doi.org/10.4159/9780674042605

RECHNITZER, Tanja. Applying Reflective Equilibrium: Towards the Justification of a Precautionary Principle. New York: Springer, 2022. DOI: https://doi.org/10.1007/978-3-031-04333-8

ROZENFELD, Monica. The next step for artificial intelligence is machines that get smarter on their own. The Institute, [S.l.], 2016. Disponível em: http://theinstitute.ieee.org/technology-topics/artificial-intelligence/the-next-step-for-artificial-intelligence-is-machines-thatget-smarter-on-their-own. Acesso em: 23 jan. 2023.

SAVULESCU, Julian; GYNGELL, Christopher; KAHANE, Guy. Collective Reflective Equilibrium in Practice (CREP) and Controversial Novel Technologies. Bioethics, Toronto, v. 35, n. 7, p. 1-12, 2021. DOI: https://doi.org/10.1111/bioe.12869

SCANLON, Thomas. Rawls on Justification. In: FREEMAN, Samuel (ed.). The Cambridge Companion to Rawls. Cambridge: Cambridge University Press, 2003. p. 139-167. DOI: https://doi.org/10.1017/CCOL0521651670.004

TASIOULAS, John. Artificial Intelligence, Humanist Ethics. Daedalus: The Journal of the American Academy of Arts & Sciences, Cambridge, v. 151, n. 2, p. 232-243, 2022. DOI: https://doi.org/10.1162/daed_a_01912

THOMSON, Judith Jarvis. Killing, Letting Die, and the Trolley Problem. Monist, Oxford, v. 59, p. 204-217, 1976. DOI: https://doi.org/10.5840/monist197659224

WALLACH, Wendell; ALLEN, Colin. Moral Machines: Teaching Robots Right from Wrong. New York: Oxford University Press, 2009. DOI: https://doi.org/10.1093/acprof:oso/9780195374049.001.0001

WESSLING, Brianna. Waymo expands service area in 2 cities. The Robot Report, Santa Barbara, 2022. Disponível em: https://www.therobotreport.com/waymo-expands-service-area-in-2-cities/. Acesso em: 23 jan. 2023.


Published

2023-11-13

How to Cite

Coitinho, D. (2023). Veículos Autônomos e Equilíbrio Reflexivo Amplo Coletivo. Veritas (Porto Alegre), 68(1), e44388. https://doi.org/10.15448/1984-6746.2023.1.44388

Issue

Section

Ethics and Political Philosophy