The Intersection of AI, Autonomy, and CSR: A Philosophical Inquiry: Part 1

Human autonomy refers to the capacity of individuals to make self-directed decisions and take actions that reflect their own values, preferences, and goals, free from undue external influence or coercion. It encompasses the ability to exercise independent judgment, pursue one’s interests, and chart one’s own course in life. However, the concept of autonomy is dynamic and subject to interpretation, evolving over time and shaped by various cultural, social, and philosophical influences. It is also a much-contested concept, with debates surrounding its scope, boundaries, and implications for individual freedom and social responsibility. Despite these complexities, autonomy remains a fundamental aspect of human dignity and well-being, serving as a cornerstone for ethical decision-making and personal empowerment. The importance of autonomy is rooted in various philosophical perspectives, including: 

Humanism: Humanism is a philosophical and ethical stance that emphasizes the inherent dignity, worth, and autonomy of individuals. According to Nida-Rümelin (2009), autonomy is a central concept in humanism, appearing in both a theoretical dimension (autonomy of belief) and a practical dimension (autonomy of practice). On this view, individuals are autonomous when their beliefs and actions are derived from their own reasoning and guided by reasons they personally accept and endorse.

Utilitarianism: Utilitarianism, as advocated by Jeremy Bentham and John Stuart Mill, emphasizes maximizing overall well-being or utility for the greatest number of people. Haworth (1984) argues that autonomy plays a crucial role in maximizing utility, since autonomous preferences and pleasures carry particular weight in determining what is intrinsically good. By valuing and promoting autonomy, utilitarians can enhance overall well-being and ensure that individuals have the freedom to pursue their own good in their own way.

Given the rapid advancement of AI and its pervasive integration into many aspects of human life, researchers from diverse disciplines have expressed concerns about its impact on human autonomy. A significant worry concerns the opacity of AI decision algorithms, that is, the lack of transparency in how these algorithms reach conclusions or make decisions. Unlike conventional decision-making processes, where the rationale is transparent and comprehensible, AI algorithms often function in intricate ways that resist human interpretation; because their inner workings are so hard to inspect, they are frequently described as black boxes. Vaassen (2022) argues that opaque decision algorithms can undermine personal autonomy by obscuring information about the factors influencing life-changing decisions. This lack of transparency hampers individuals’ ability to shape their lives according to their goals and preferences, namely, their ability to act as autonomous agents.

Formosa (2021) investigates how social robots, powered by AI, affect human autonomy, particularly in light of their likely expansion into more domains of work and life. Social robots are designed to interact and communicate with humans in social or collaborative contexts. They engage in conversations, provide assistance, and perform tasks in settings such as homes, healthcare facilities, and public spaces. AI algorithms enable social robots to understand natural language, recognize emotions, learn from experience, and adapt their behavior accordingly. Formosa suggests that social robots pose risks to human autonomy through several mechanisms. For example, they may limit authentic choices by nudging behavior toward societal norms, potentially producing decisions incongruent with individuals’ genuine values. Additionally, because many social robots are commercial products, they may increase users’ vulnerability to exploitation for commercial or political gain, steering choices toward external agendas. Lastly, negative interactions with, or overreliance on, social robots could erode autonomy competencies such as self-respect and self-trust, hindering independent decision-making.

Formosa (2021) also discusses the positive impact of social robots, such as facilitating authentic choices by providing information and support that help humans make decisions aligned with their values and preferences. However, such support does not always materialize. Dunning et al. (2023) examined the format of AI recommendations (categorical or probabilistic) in a strategy game. Although probabilistic information typically offers more detailed insight, participants did not benefit significantly more from recommendations expressed as a probability distribution over possible options than from categorical recommendations. This suggests that users may have struggled to interpret the probabilistic information effectively, or that the cognitive load of processing it outweighed its value. Consequently, users may have relied more heavily on categorical recommendations, potentially limiting their autonomy. Moreover, human players often deferred uncritically to more skilled AI agents while disregarding good advice from less skilled agents, indicating cognitive limitations in using AI to enhance decision-making autonomy.
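To make the contrast between the two formats concrete, here is a minimal illustrative sketch; it is not drawn from Dunning et al.’s materials, and the options and scores are purely hypothetical. The same underlying AI judgment can be presented either as a single categorical pick or as a probability distribution over all options.

```python
# Illustrative sketch of two ways to present the same AI judgment:
# a categorical recommendation (one option) vs. a probabilistic one
# (a distribution over options). All names and numbers are hypothetical.

from typing import Dict


def categorical_recommendation(scores: Dict[str, float]) -> str:
    """Return only the single highest-scoring option."""
    return max(scores, key=scores.get)


def probabilistic_recommendation(scores: Dict[str, float]) -> Dict[str, float]:
    """Return a normalized probability over all options."""
    total = sum(scores.values())
    return {option: score / total for option, score in scores.items()}


# Hypothetical AI agent scores for three moves in a strategy game.
agent_scores = {"advance": 6.0, "hold": 3.0, "retreat": 1.0}

print(categorical_recommendation(agent_scores))    # 'advance'
print(probabilistic_recommendation(agent_scores))  # {'advance': 0.6, 'hold': 0.3, 'retreat': 0.1}
```

The categorical format hides the agent’s uncertainty entirely, while the probabilistic format exposes it but asks more interpretive work of the user, which is consistent with the cognitive burden noted above.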

Link to Part 2


References:

Dunning, R. E., Fischhoff, B., & Davis, A. L. (2023). When Do Humans Heed AI Agents’ Advice? When Should They? Human Factors, 0(0). https://doi.org/10.1177/00187208231190459

Formosa, P. R. (2021). Robot Autonomy vs. Human Autonomy: Social Robots, Artificial Intelligence (AI), and the Nature of Autonomy. Minds & Machines, 31, 595–616. https://doi.org/10.1007/s11023-021-09579-2

Haworth, L. (1984). Autonomy and utility. Ethics, 95(1), 5–19.

Nida-Rümelin, J. (2009). Philosophical grounds of humanism in economics. In H. Spitzeck, M. Pirson, W. Amann, S. Khan, & E. von Kimakowitz (Eds.), Humanism in Business. Cambridge University Press.

Vaassen, B. (2022). AI, Opacity, and Personal Autonomy. Philosophy & Technology, 35, 88. https://doi.org/10.1007/s13347-022-00577-5