In the era of AI, corporate social responsibility (CSR) has become increasingly complex. Whether companies are involved in developing AI technologies or utilizing AI applications, CSR efforts must now encompass a range of issues related to AI, including its impact on human autonomy.
In the domain of AI-powered consumer platforms and marketing activities, concerns regarding consumer autonomy are multifaceted. AI systems can influence consumer preferences, choices, and behaviors through personalized recommendations and targeted advertising, potentially compromising autonomy. Further apprehensions arise from the opaque collection and processing of personal data by firms using AI, which fuels personalized marketing efforts. Fassiaux (2023) argues that such data practices undermine consumer autonomy by infringing upon privacy rights, a fundamental precondition of independent market transactions because privacy shields consumers from manipulation. Additionally, AI-driven marketing can perpetuate the “dependence effect” outlined by John Kenneth Galbraith in “The Affluent Society”, wherein producers independently determine market offerings and subsequently create demand for their products. Consumers who make purchases to satisfy this “created demand” may therefore not be making autonomous decisions, as external forces heavily shape their desires, options, and purchase decisions.
The CSR of companies involved in AI development or usage entails comprehensive considerations. As suggested by Shneiderman (2021), these firms should prioritize human-centered design principles, emphasizing user understanding and control in the design process and integrating feedback from diverse stakeholders to align AI technologies with human values and needs. Additionally, accountability and transparency are crucial aspects, supported by mechanisms like activity logging and explainable AI features. Building trust between consumers and firms is key to empowering consumers, requiring measures to safeguard consumer data privacy and security, along with providing accessible channels for feedback and support. Furthermore, companies must ensure compliance with data protection regulations while responsibly collecting and utilizing consumer data.
Employee autonomy is another area of concern within CSR. Perez et al. (2022) suggest that AI algorithms constrain employee autonomy by prescribing tasks and subjecting employment relationships to greater organizational control, which can leave employees feeling “out of the loop” and lacking control over their work. Because algorithms automate decision-making by imposing predefined rules and processing large volumes of data, they can narrow the options available to employees. This erodes employee autonomy and, given the opaque nature of algorithmic decisions, fosters uncertainty among employees about how those decisions are made.
In the context of CSR, emphasizing employee autonomy in AI implementation is crucial. Responsible practices involve actively engaging employees in the transformation process by soliciting their input and making them co-creators of new systems and processes. By incorporating employee input into the iterative design of solutions, companies can promote autonomy while also increasing adoption rates. Rather than simply replacing human roles with AI, it’s essential to foster synergies between employees and AI technologies. For instance, in customer service, AI-powered chatbots can handle routine tasks, freeing up human agents to focus on more complex issues requiring empathy and critical thinking. Empowering employees through upskilling is another critical aspect. According to Jaiswal et al. (2022), this includes developing skills in data analysis, digital proficiency, complex cognition, decision-making, and continuous learning. Companies should implement training programs and leverage AI-driven HR services to match employees with suitable growth opportunities (Britt, 2019). Additionally, fostering an open organizational culture that encourages experimentation, learning, and adaptation to change is key (Britt, 2019). This approach helps employees view failures as opportunities for growth, further promoting autonomy and innovation within the workforce.
Beneath the surface-level negative impact on human autonomy in daily life or work lies another concern: the potential loss of diverse cultural narratives due to the prevalence of AI (Rettberg, 2024). This risk stems from individuals’ overreliance on AI, which may lead them to relinquish their narrative responsibilities (Coeckelbergh, 2023). Coeckelbergh (2023) argues that individuals have narrative responsibilities, and the failure to exercise this responsibility may result in them being ensnared in stories created by those with conflicting aims. In addition to individuals, businesses involved in AI development and applications also bear responsibilities. Rettberg (2024) advocates for responsible corporate practices in AI, which entail collecting diverse datasets representing various cultural perspectives, engaging diverse communities in dataset creation, and employing bias detection tools to ensure fair representation and inclusivity in AI models.
In conclusion, the interplay among AI advancements, human autonomy, and corporate social responsibility presents a complex landscape for our society to navigate. While AI advancements hold immense potential to enhance efficiency and improve lives, they also raise critical ethical questions surrounding autonomy, bias, and inclusivity. To navigate these complexities responsibly, businesses must prioritize the ethical development and deployment of AI technologies, ensuring that human values and autonomy are safeguarded throughout the process.
References:
Coeckelbergh, M. (2023). Narrative responsibility and artificial intelligence. AI & SOCIETY, 38, 2437–2450. https://doi.org/10.1007/s00146-021-01375-x
Britt, A. (2019). AI Will Transform Everything: How can HR ensure employees have the skills to succeed? Workforce Solutions Review, 10(3), 17–19.
Fassiaux, S. (2023). Preserving consumer autonomy through European Union regulation of artificial intelligence: A long-term approach. European Journal of Risk Regulation, 14(4), 710–730. https://doi.org/10.1017/err.2023.58
Galbraith, J. K. (1998). The affluent society (40th anniversary ed.). Houghton Mifflin.
Jaiswal, A., Arun, C. J., & Varma, A. (2022). Rebooting employees: Upskilling for artificial intelligence in multinational corporations. International Journal of Human Resource Management, 33(6), 1179–1208. https://doi.org/10.1080/09585192.2021.1891114
Perez, F., Conway, N., & Roques, O. (2022). The autonomy tussle: AI technology and employee job crafting responses. Relations Industrielles / Industrial Relations, 77(3), 1–19. https://www.jstor.org/stable/27221324
Rettberg, J.W. (2024). How generative AI endangers cultural narratives. Issues in Science and Technology, 40(2), 77–79. https://doi.org/10.58875/rqjd7538
Shneiderman, B. (2021). Human-centered AI: Computer scientists should build devices to enhance and empower—not replace—humans. Issues in Science and Technology, 37(2), 56–61.