
Do Algorithms Have Rights?

Abstract

The growing influence of algorithms in digital society raises questions about their legal and ethical status. As algorithms make decisions that affect people, the question arises of whether they should hold rights and how such rights could be protected in a constantly evolving technological environment. The study carried out an exhaustive documentary analysis of relevant literature, reports, and legal documents on the ethics of algorithms, together with research on their social impacts in areas such as privacy, discrimination, and automated decision-making. The findings reveal that there is currently no clear consensus on whether algorithms should have rights. There is, however, broad recognition of the need to establish regulations and ethical principles that guarantee their responsible use and prevent harmful consequences. Key challenges requiring attention were identified, including algorithmic transparency, discrimination, and data privacy. Regulation and transparency are essential to ensure that algorithms are used fairly and equitably, in a way that protects both individual and collective rights.
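One of the challenges the abstract names, discrimination in automated decision-making, is often made concrete through group fairness metrics. The sketch below (not from the article; the groups and decisions are hypothetical) computes the disparate-impact ratio between two groups' approval rates, a common first diagnostic for algorithmic discrimination:

```python
# Hedged illustration: the disparate-impact ratio compares the selection
# (approval) rate of a disadvantaged group against an advantaged group.
# Ratios below 0.8 are often flagged under the informal "four-fifths rule".

def selection_rate(decisions):
    """Fraction of positive (1 = approved) automated decisions."""
    return sum(decisions) / len(decisions)

def disparate_impact_ratio(group_a, group_b):
    """Selection rate of group_a divided by that of group_b."""
    return selection_rate(group_a) / selection_rate(group_b)

# Hypothetical outcomes of an automated screening algorithm.
group_a = [1, 0, 0, 1, 0, 0, 0, 0, 1, 0]  # 3/10 approved
group_b = [1, 1, 0, 1, 1, 0, 1, 0, 1, 0]  # 6/10 approved

ratio = disparate_impact_ratio(group_a, group_b)
print(round(ratio, 2))  # 0.5, well below the 0.8 threshold
```

A metric like this only surfaces a disparity; as the abstract argues, deciding whether that disparity is unjust, and what regulation should follow, remains an ethical and legal question.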

Keywords

rights, algorithms, privacy, transparency, autonomy


References

  1. Balkin, J. M. (2016). The Three Laws of Robotics in the Age of Big Data. Washington Law Review, 91(4), 1005-1051.
  2. Barocas, S. & Selbst, A. D. (2016). Fairness in Machine Learning: Lessons from Political Philosophy. arXiv preprint, 1609.07236.
  3. BBC. (2018). 5 claves para entender el escándalo de Cambridge Analytica que hizo que Facebook perdiera US$37.000 millones en un día. https://www.bbc.com/mundo/noticias-43472797
  4. Benthall, S. (2019). The Moral Economy of Algorithms. In Dubber, Markus D., Frank Pasquale, & Sunit Das (eds.), The Oxford Handbook of Ethics of AI (pp. 91-112). Oxford University Press.
  5. Berkeley News. (2020). Algorithmic Bias: UC Berkeley School of Information Study Finds Discrimination in Online Ad Delivery. https://news.berkeley.edu/2020/02/28/algorithmic-bias-uc-berkeley-school-of-information-study-finds-discrimination-in-online-ad-delivery/.
  6. Bostrom, N. & Yudkowsky, E. (2014). The ethics of artificial intelligence. In Keith Frankish & William Ramsey (eds.), Cambridge handbook of artificial intelligence (pp. 316-334). Cambridge University Press. DOI: https://doi.org/10.1017/CBO9781139046855.020
  7. Boyd, D. (2017). The ethics of big data: Confronting the challenges of an algorithmic society. Data & Society Research Institute.
  8. Boyd, D. & Crawford, K. (2012). Critical questions for big data: Provocations for a cultural, technological, and scholarly phenomenon. Information, Communication & Society, 15(5), 662-679. DOI: https://doi.org/10.1080/1369118X.2012.678878
  9. Boyd, D., Crawford, K., Keller, E., Gangadharan, S. P. & Eubanks, V. (2019). AI in the public interest: Seven principles for ethical AI in society. AI & Society, 34(1), 1-14.
  10. Bracha, O. (2012). Owning ideas: A history of Anglo-American intellectual property. MIT Press.
  11. Brundage, M. et al. (2018). The Malicious Use of Artificial Intelligence: Forecasting, Prevention, and Mitigation. Future of Humanity Institute, University of Oxford; Centre for the Study of Existential Risk, University of Cambridge; Center for a New American Security; Electronic Frontier Foundation; OpenAI. arXiv preprint, 1802.07228.
  12. Bryson, J. J. (2018). Robots should be slaves. In The ethics of artificial intelligence (pp. 123-138). MIT Press.
  13. Bryson, J. J. & Winfield, A. F. (2018). Standardizing ethical design for artificial intelligence and autonomous systems. Computer, 51(5), 116-119. DOI: https://doi.org/10.1109/MC.2017.154
  14. Burrell, J. (2016). How the Machine ‘Thinks’: Understanding Opacity in Machine Learning Algorithms. Big Data & Society, 3(1). DOI: https://doi.org/10.1177/2053951715622512
  15. Calo, R. (2013a). Digital Market Manipulation. George Washington Law Review, 81(3), 725-772. DOI: https://doi.org/10.2139/ssrn.2309703
  16. Calo, R. (2013b). The drone as privacy catalyst. Stanford Law Review, 64(2), 29-71.
  17. Calo, R. (2017). Artificial intelligence policy: A primer and roadmap. Policy Research Working Paper, World Bank Group. DOI: https://doi.org/10.2139/ssrn.3015350
  18. Chun, W. H. K. (2011). Programmed visions: Software and memory. MIT Press. DOI: https://doi.org/10.7551/mitpress/9780262015424.001.0001
  19. Citron, D. K. (2014). Hate crimes in cyberspace. Harvard University Press. DOI: https://doi.org/10.4159/harvard.9780674735613
  20. Cormen, T. H., Leiserson, C. E., Rivest, R. L. & Stein, C. (2009). Introduction to Algorithms. MIT Press.
  21. Crawford, K. (2013). The Hidden Biases in Big Data. Harvard Business Review. https://hbr.org/2013/04/the-hidden-biases-in-big-data.
  22. Crawford, K. (2016). Can an algorithm be agonistic? Ten scenes from life in calculated publics. Science, Technology & Human Values, 41(1), 77-92. DOI: https://doi.org/10.1177/0162243915589635
  23. Crawford, K. (2021). Atlas of AI: Power, politics, and the planetary costs of artificial intelligence. Yale University Press. DOI: https://doi.org/10.12987/9780300252392
  24. Crawford, K. & Eubanks, V. (2018). Automating inequality: How high-tech tools profile, police, and punish the poor. St. Martin’s Press.
  25. Crawford, K. et al. (2019). AI Now 2019 Report. AI Now Institute. https://ainowinstitute.org/AI_Now_2019_Report.pdf
  26. Dasgupta, S., Papadimitriou, C. H. & Vazirani, U. V. (2006). Algorithms. McGraw-Hill Education.
  27. Data & Society Research Institute. (2018). Principles for Accountable Algorithms and a Social Impact Statement for Algorithms.
  28. Diakopoulos, N. (2015). Algorithmic accountability: Journalistic investigation of computational power structures. Digital Journalism, 3(3), 398-415. DOI: https://doi.org/10.1080/21670811.2014.976411
  29. Diakopoulos, N. (2019). Algorithmic Accountability Reporting: On the Investigation of Black Boxes. In The Oxford Handbook of Journalism and AI (pp. 141-162). Oxford University Press.
  30. Dignum, V. (2017). Ethics in the design and use of artificial intelligence. Springer.
  31. Dignum, V. (2020). Responsible artificial intelligence: How to develop and use AI in a responsible way. Springer. DOI: https://doi.org/10.1007/978-3-030-30371-6
  32. Doctorow, C. (2019). Radicalized. Tor Books.
  33. Eubanks, V. (2018). Automating Inequality: How High-Tech Tools Profile, Police, and Punish the Poor. St. Martin’s Press.
  34. Finn, E. & Selwyn, N. (2017). The Ethics of Algorithms: Mapping the Debate. International Journal of Communication, 11, 2787-2805.
  35. Floridi, L. (2013). The Ethics of Information. Oxford University Press. DOI: https://doi.org/10.1093/acprof:oso/9780199641321.001.0001
  36. Floridi, L. (2014). The fourth revolution: How the infosphere is reshaping human reality. Oxford University Press.
  37. Floridi, L. & Taddeo, M. (2016). What is data ethics? Philosophical Transactions of the Royal Society A: Mathematical, Physical and Engineering Sciences, 374(2083), 20160360. DOI: https://doi.org/10.1098/rsta.2016.0360
  38. Floridi, L. (2018). Artificial Intelligence's Fourth Revolution. Philosophy & Technology, 31(2), 317-321. DOI: https://doi.org/10.1007/s13347-018-0325-3
  39. Floridi, L. (2019a). Soft Ethics and the Governance of the Digital. Philosophy & Technology, 32(2), 185-187. DOI: https://doi.org/10.1007/s13347-019-00354-x
  40. Floridi, L. (2019b). The logic of digital beings: on the ontology of algorithms, bots, and chatbots. Philosophy & Technology, 32(2), 209-227.
  41. Floridi, L. & Sanders, J. W. (2004). On the morality of artificial agents. Minds and Machines, 14(3), 349-379. DOI: https://doi.org/10.1023/B:MIND.0000035461.63578.9d
  42. Hart, H. L. A. (2012). The concept of law. Oxford University Press. DOI: https://doi.org/10.1093/he/9780199644704.001.0001
  43. Hartzog, W. (2012). Privacy’s outsourced dilemma: analyzing the effectiveness of current approaches to regulating, “Notice and Choice”. Loyola of Los Angeles Law Review, 46(2), 413-468.
  44. Hartzog, W. (2016). The case for a duty of loyalty in privacy law. North Carolina Law Review, 94(4), 1151-1203.
  45. Hildebrandt, M. (2013). Smart technologies and the end(s) of law: Novel entanglements of law and technology. Edward Elgar Publishing.
  46. Jobin, A., Ienca, M. & Vayena, E. (2019). The global landscape of AI ethics guidelines. Nature Machine Intelligence, 1(9), 389-399. DOI: https://doi.org/10.1038/s42256-019-0088-2
  47. Kaye, D. (2018). Report of the Special Rapporteur on the promotion and protection of the right to freedom of opinion and expression. United Nations General Assembly, A/73/348.
  48. Kerr, O. S. (2019). The fourth amendment in the digital age. Harvard Law Review, 127(6), 1672-1756.
  49. Kleinberg, J. & Tardos, E. (2005). Algorithm Design. Pearson Education.
  50. Knuth, D. E. (1997). The Art of Computer Programming. Addison-Wesley.
  51. Mittelstadt, B. (2019). AI ethics, oversight and accountability: A mapping report. Big Data & Society, 6(1), 2053951718823039.
  52. Mittelstadt, B. D., Allo, P., Taddeo, M., Wachter, S. & Floridi, L. (2016). The Ethics of Algorithms: Mapping the Debate. Big Data & Society, 3(2), 2053951716679679. DOI: https://doi.org/10.1177/2053951716679679
  53. Mittelstadt, B.D., Russell, C. & Wachter, S. (2019). Exploring the Impact of Artificial Intelligence: Transparency, Fairness, and Ethics. AI & Society, 34(4), 787-793.
  54. Moor, J. H. (2007). Why we need better ethics for emerging technologies. Ethics and Information Technology, 9(2), 111-119. DOI: https://doi.org/10.1007/s10676-006-0008-0
  55. O’Neil, C. (2016). Weapons of math destruction: How big data increases inequality and threatens democracy. Broadway Books.
  56. Pasquale, F. (2015). The Black Box Society: The Secret Algorithms that Control Money and Information. Harvard University Press. DOI: https://doi.org/10.4159/harvard.9780674736061
  57. Powles, J. & Nissenbaum, H. (2017). The seductive diversion of ‘solving’ bias in AI. Harvard Law Review, 131(6), 1641-1677.
  58. ProPublica. (2016). Machine Bias: There’s Software Used across the Country to Predict Future Criminals. And It’s Biased Against Blacks. https://www.propublica.org/article/machine-bias-risk-assessments-in-criminal-sentencing
  59. Radbruch, G. (1946). Cinco minutos de filosofía del derecho. https://www.infojus.gob.ar/sites/default/files/5minutosderechoradbruch.pdf
  60. Roberts, S. T. (2019). Behind the screen: Content moderation in the shadows of social media. Yale University Press. DOI: https://doi.org/10.12987/9780300245318
  61. Sedgewick, R. & Wayne, K. (2011). Algorithms (Fourth Edition). Addison-Wesley Professional.
  62. Selbst, A. D., Boyd, D., Friedler, S. A., Venkatasubramanian, S. & Vertesi, J. (2019). Fairness and Abstraction in Sociotechnical Systems. In Proceedings of the Conference on Fairness, Accountability, and Transparency (FAT*), 59-68. DOI: https://doi.org/10.1145/3287560.3287598
  63. Sweeney, L. (2013). Discrimination in online ad delivery. Communications of the ACM, 56(5), 44-54. DOI: https://doi.org/10.1145/2447976.2447990
  64. Taddeo, M. & Floridi, L. (2018). Regulate Artificial Intelligence to avert cyber arms race. Nature, 556(7701), 296-298. DOI: https://doi.org/10.1038/d41586-018-04602-6
  65. Tufekci, Z. (2014). Engineering the public: Big data, surveillance and computational politics. First Monday, 19(7). DOI: https://doi.org/10.5210/fm.v19i7.4901
  66. Turing, A. M. (1937). On computable numbers, with an application to the Entscheidungsproblem. https://www.cs.virginia.edu/~robins/TuringPaper1936.pdf
  67. United Nations Human Rights Committee (2019). General Comment No. 37 on the right of peaceful assembly under the International Covenant on Civil and Political Rights. United Nations General Assembly, CCPR/C/GC/37.
  68. Vallor, S. (2021). Technology and the virtues: A philosophical guide to a world worth wanting. Oxford University Press. DOI: https://doi.org/10.1201/9781003278290-12
  69. Van Dijck, J. (2014). Datafication, dataism and dataveillance: Big data between scientific paradigm and ideology. Surveillance & Society, 12(2), 197-208. DOI: https://doi.org/10.24908/ss.v12i2.4776
  70. Wachter, S., Mittelstadt, B. & Floridi, L. (2017). Why a right to explanation of automated decision-making does not exist in the general data protection regulation. International Data Privacy Law, 7(2), 76-99. DOI: https://doi.org/10.1093/idpl/ipx005
  71. Wallach, W. & Allen, C. (2019). Moral machines 2.0: Teaching robots right from wrong. Oxford University Press.
  72. Zarsky, T. (2016). The trouble with algorithmic decisions: An analytic road map to examine efficiency and fairness in automated and opaque decision-making. Science, Technology, & Human Values, 41(1), 118-132. DOI: https://doi.org/10.1177/0162243915605575
  73. Zuboff, S. (2019). The Age of Surveillance Capitalism: The Fight for a Human Future at the New Frontier of Power. PublicAffairs.

