
Do Algorithms Have Rights?

Abstract

The growing influence of algorithms in digital society raises questions about their legal and ethical status. As algorithms increasingly make decisions that affect people, the question arises of whether they should have rights and how any such rights could be protected in a constantly evolving technological environment. The research consisted of an exhaustive documentary analysis of relevant literature, reports, and legal documents on the ethics of algorithms, together with research on their social impacts in areas such as privacy, discrimination, and automated decision-making. The findings revealed that there is currently no clear consensus on whether algorithms should have rights. There is, however, a recognized need to establish regulations and ethical principles that guarantee their responsible use and prevent negative consequences. Key challenges requiring adequate attention were identified, including algorithmic transparency, discrimination, and data privacy. Regulation and transparency are essential to ensure that algorithms are used fairly and equitably, in a way that protects individual and social rights.

Keywords

rights, algorithms, privacy, transparency, autonomy


References

  • Balkin, J. M. (2016). The Three Laws of Robotics in the Age of Big Data. Washington Law Review, 91(4), 1005-1051.
  • Barocas, S. & Selbst, A. D. (2016). Big Data’s Disparate Impact. California Law Review, 104(3), 671-732.
  • BBC. (2018). 5 claves para entender el escándalo de Cambridge Analytica que hizo que Facebook perdiera US$37.000 millones en un día. https://www.bbc.com/mundo/noticias-43472797
  • Benthall, S. (2019). The Moral Economy of Algorithms. In Dubber, Markus D., Frank Pasquale, & Sunit Das (eds.), The Oxford Handbook of Ethics of AI (pp. 91-112). Oxford University Press.
  • Berkeley News. (2020). Algorithmic Bias: UC Berkeley School of Information Study Finds Discrimination in Online Ad Delivery. https://news.berkeley.edu/2020/02/28/algorithmic-bias-uc-berkeley-school-of-information-study-finds-discrimination-in-online-ad-delivery/.
  • Bostrom, N. & Yudkowsky, E. (2014). The ethics of artificial intelligence. In Keith Frankish & William Ramsey (eds.), Cambridge handbook of artificial intelligence (pp. 316-334). Cambridge University Press.
  • Boyd, D. (2017). The ethics of big data: Confronting the challenges of an algorithmic society. Data & Society Research Institute.
  • Boyd, D. & Crawford, K. (2012). Critical questions for big data: Provocations for a cultural, technological, and scholarly phenomenon. Information, Communication & Society, 15(5), 662-679.
  • Boyd, D., Crawford, K., Keller, E., Gangadharan, S. P. & Eubanks, V. (2019). AI in the public interest: Seven principles for ethical AI in society. AI & Society, 34(1), 1-14.
  • Bracha, O. (2012). Owning ideas: A history of Anglo-American intellectual property. MIT Press.
  • Brundage, M. et al. (2018). The Malicious Use of Artificial Intelligence: Forecasting, Prevention and Mitigation. Future of Humanity Institute, University of Oxford; Centre for the Study of Existential Risk, University of Cambridge; Center for a New American Security; Electronic Frontier Foundation; OpenAI. arXiv preprint, 1802.07228.
  • Bryson, J. J. (2018). Robots should be slaves. In The ethics of artificial intelligence (pp. 123-138). MIT Press.
  • Bryson, J. J. & Winfield, A. F. (2018). Standardizing ethical design for artificial intelligence and autonomous systems. Computer, 51(5), 116-119.
  • Burrell, J. (2016). How the Machine ‘Thinks’: Understanding Opacity in Machine Learning Algorithms. Big Data & Society, 3(1). https://doi.org/10.2139/ssrn.2660674
  • Calo, R. (2013a). Digital Market Manipulation. George Washington Law Review, 81(3), 725-772.
  • Calo, R. (2013b). The drone as privacy catalyst. Stanford Law Review, 64(2), 29-71.
  • Calo, R. (2017). Artificial intelligence policy: A primer and roadmap. UC Davis Law Review, 51, 399-435.
  • Chun, W. H. K. (2011). Programmed visions: Software and memory. MIT Press.
  • Citron, D. K. (2014). Hate crimes in cyberspace. Harvard University Press.
  • Cormen, T. H., Leiserson, C. E., Rivest, R. L. & Stein, C. (2009). Introduction to Algorithms. MIT Press.
  • Crawford, K. (2013). The Hidden Biases in Big Data. Harvard Business Review. https://hbr.org/2013/04/the-hidden-biases-in-big-data.
  • Crawford, K. (2016). Can an algorithm be agonistic? Ten scenes from life in calculated publics. Science, Technology & Human Values, 41(1), 77-92. https://doi.org/10.1177/0162243915608947
  • Crawford, K. (2021). Atlas of AI: Power, politics, and the planetary costs of artificial intelligence. Yale University Press.
  • Crawford, K. et al. (2019). AI Now 2019 Report. AI Now Institute. https://ainowinstitute.org/AI_Now_2019_Report.pdf
  • Dasgupta, S., Papadimitriou, C. H. & Vazirani, U. V. (2006). Algorithms. McGraw-Hill Education.
  • Data & Society Research Institute. (2018). Principles for Accountable Algorithms and a Social Impact Statement for Algorithms.
  • Diakopoulos, N. (2015). Algorithmic accountability: Journalistic investigation of computational power structures. Digital Journalism, 3(3), 398-415.
  • Diakopoulos, N. (2019). Algorithmic Accountability Reporting: On the Investigation of Black Boxes. In The Oxford Handbook of Journalism and AI (pp. 141-162). Oxford University Press.
  • Dignum, V. (2017). Ethics in the design and use of artificial intelligence. Springer.
  • Dignum, V. (2020). Responsible artificial intelligence: How to develop and use AI in a responsible way. Springer.
  • Doctorow, C. (2019). Radicalized. Tor Books.
  • Eubanks, V. (2018). Automating Inequality: How High-Tech Tools Profile, Police, and Punish the Poor. St. Martin’s Press.
  • Finn, E. & Selwyn, N. (2017). The Ethics of Algorithms: Mapping the Debate. International Journal of Communication, 11, 2787-2805.
  • Floridi, L. (2013). The Ethics of Information. Oxford University Press.
  • Floridi, L. (2014). The fourth revolution: How the infosphere is reshaping human reality. Oxford University Press.
  • Floridi, L. & Taddeo, M. (2016). What is data ethics? Philosophical Transactions of the Royal Society A: Mathematical, Physical and Engineering Sciences, 374(2083), 20160360. https://doi.org/10.1098/rsta.2016.0360
  • Floridi, L. (2018). Artificial Intelligence's Fourth Revolution. Philosophy & Technology, 31(2), 317-321.
  • Floridi, L. (2019a). Soft Ethics and the Governance of the Digital. Philosophy & Technology, 32(2), 185-187.
  • Floridi, L. (2019b). The logic of digital beings: on the ontology of algorithms, bots, and chatbots. Philosophy & Technology, 32(2), 209-227.
  • Floridi, L. & Sanders, J. W. (2004). On the morality of artificial agents. Minds and Machines, 14(3), 349-379.
  • Hart, H. L. A. (2012). The concept of law. Oxford University Press.
  • Hartzog, W. (2012). Privacy’s outsourced dilemma: analyzing the effectiveness of current approaches to regulating, “Notice and Choice”. Loyola of Los Angeles Law Review, 46(2), 413-468.
  • Hartzog, W. (2016). The case for a duty of loyalty in privacy law. North Carolina Law Review, 94(4), 1151-1203.
  • Hildebrandt, M. (2013). Smart technologies and the end(s) of law: Novel entanglements of law and technology. Edward Elgar Publishing.
  • Jobin, A., Ienca, M. & Vayena, E. (2019). The global landscape of AI ethics guidelines. Nature Machine Intelligence, 1(9), 389-399.
  • Kaye, D. (2018). Report of the Special Rapporteur on the promotion and protection of the right to freedom of opinion and expression. United Nations General Assembly, A/73/348.
  • Kerr, O. S. (2019). The fourth amendment in the digital age. Harvard Law Review, 127(6), 1672-1756.
  • Kleinberg, J. & Tardos, E. (2005). Algorithm Design. Pearson Education.
  • Knuth, D. E. (1997). The Art of Computer Programming. Addison-Wesley.
  • Mittelstadt, B. (2019). AI ethics, oversight and accountability: A mapping report. Big Data & Society, 6(1), 2053951718823039.
  • Mittelstadt, B. D., Allo, P., Taddeo, M., Wachter, S. & Floridi, L. (2016). The Ethics of Algorithms: Mapping the Debate. Big Data & Society, 3(2), 2053951716679679.
  • Mittelstadt, B.D., Russell, C. & Wachter, S. (2019). Exploring the Impact of Artificial Intelligence: Transparency, Fairness, and Ethics. AI & Society, 34(4), 787-793.
  • Moor, J. H. (2007). Why we need better ethics for emerging technologies. Ethics and Information Technology, 9(2), 111-119.
  • O’Neil, C. (2016). Weapons of math destruction: How big data increases inequality and threatens democracy. Broadway Books.
  • Pasquale, F. (2015). The Black Box Society: The Secret Algorithms that Control Money and Information. Harvard University Press.
  • Powles, J. & Nissenbaum, H. (2017). The seductive diversion of ‘solving’ bias in AI. Harvard Law Review, 131(6), 1641-1677.
  • ProPublica. (2016). Machine Bias: There’s Software Used across the Country to Predict Future Criminals. And It’s Biased Against Blacks. https://www.propublica.org/article/machine-bias-risk-assessments-in-criminal-sentencing
  • Radbruch, G. (1946). Cinco minutos de filosofía del derecho. https://www.infojus.gob.ar/sites/default/files/5minutosderechoradbruch.pdf
  • Roberts, S. T. (2019). Behind the screen: Content moderation in the shadows of social media. Yale University Press.
  • Sedgewick, R. & Wayne, K. (2011). Algorithms (Fourth Edition). Addison-Wesley Professional.
  • Selbst, A. D., Boyd, D., Friedler, S. A., Venkatasubramanian, S. & Vertesi, J. (2019). Fairness and Abstraction in Sociotechnical Systems. In Proceedings of the Conference on Fairness, Accountability, and Transparency (FAT*), 59-68.
  • Sweeney, L. (2013). Discrimination in online ad delivery. Communications of the ACM, 56(5), 44-54.
  • Taddeo, M. & Floridi, L. (2018). Regulate Artificial Intelligence to avert cyber arms race. Nature, 556(7701), 296-298.
  • Tufekci, Z. (2014). Engineering the public: Big data, surveillance and computational politics. First Monday, 19(7).
  • Turing, A. M. (1937). On computable numbers, with an application to the Entscheidungsproblem. https://www.cs.virginia.edu/~robins/TuringPaper1936.pdf
  • United Nations Human Rights Committee (2019). General Comment No. 37 on the right of peaceful assembly under the International Covenant on Civil and Political Rights. United Nations General Assembly, CCPR/C/GC/37.
  • Vallor, S. (2021). Technology and the virtues: A philosophical guide to a world worth wanting. Oxford University Press.
  • Van Dijck, J. (2014). Datafication, dataism and dataveillance: Big data between scientific paradigm and ideology. Surveillance & Society, 12(2), 197-208.
  • Wachter, S., Mittelstadt, B. & Floridi, L. (2017). Why a right to explanation of automated decision-making does not exist in the general data protection regulation. International Data Privacy Law, 7(2), 76-99. https://doi.org/10.1093/idpl/ipx017
  • Wallach, W. & Allen, C. (2019). Moral machines 2.0: Teaching robots right from wrong. Oxford University Press.
  • Zarsky, T. (2016). The trouble with algorithmic decisions: An analytic road map to examine efficiency and fairness in automated and opaque decision-making. Science, Technology, & Human Values, 41(1), 118-132.
  • Zuboff, S. (2019). The Age of Surveillance Capitalism: The Fight for a Human Future at the New Frontier of Power. PublicAffairs.
