Can Artificial Intelligence Be Xenophobic? Conflict and Ethical Challenges of Neural Network Development

Authors

  • Artem N. Sunami St. Petersburg State University, 7–9, Universitetskaya nab., St. Petersburg, 199034, Russian Federation https://orcid.org/0000-0002-5418-8120

DOI:

https://doi.org/10.21638/spbu17.2024.308

Abstract

The article examines the conflict and ethical challenges associated with “the trouble with bias” in neural networks. Drawing on the conceptions of neural network bias developed by Kate Crawford, Ezekiel Dixon-Roman and Luciana Parisi, a body of research on natural language in programming, Johan Galtung’s theory of cultural violence, and the works of John Rawls and Will Kymlicka on justice and inequality, the author suggests that the analysis of digital algorithms built on neural network technology has yielded a corpus of ethically charged problems, central among which is “the trouble with bias”. This trouble is directly related to the phenomenon of “natural language”, that is, a reflection of the linguistic practices of the past and present, an averaged value of the digitized experience of humankind, which serves as the main resource for training neural network algorithms. In this context, the article focuses on the stereotypes, power hierarchies, inequalities and forms of discrimination encoded in natural language, which can be described in terms of the “politics of classification” and cultural violence. The article shows that the ethical and conflict-related consequences of “the trouble with bias” can be reduced only if a particular social group is granted the status of a discriminated group. On the basis of the identified substantive characteristics of current neural network algorithms, the author concludes that there is a high danger of indoctrinating artificial intelligence algorithms with elements of xenophobia, exclusivity and “call-out culture”.

Keywords:

digitalization, neural network, the trouble with bias, natural language, inequality, classification politics, cultural violence


References


Sunami, A. N. (2023), Ethics of “Digital Society”: New Conflict or New Balance, Vestnik of Saint Petersburg University. Philosophy and Conflict Studies, vol. 39, iss. 3, pp. 544–556. https://doi.org/10.21638/spbu17.2023.311 (In Russian)

McCulloch, W. S. and Pitts, W. (1943), A logical calculus of the ideas immanent in nervous activity, The Bulletin of Mathematical Biophysics, vol. 5, no. 4, pp. 115–133. https://doi.org/10.1007/BF02478259

Wiener, N. (1948), Cybernetics or Control and Communication in the Animal and the Machine, New York: Technology Press.

Hebb, D. O. (1949), The Organization of Behavior. A Neuropsychological Theory, New York: John Wiley & Sons.

Moravec, H. (1990), Mind Children. The Future of Robot and Human Intelligence, Cambridge, MA: Harvard University Press.

Jumper, J., Evans, R., Pritzel, A., Green, T., Figurnov, M., Ronneberger, O., Tunyasuvunakool, K., Bates, R., Žídek, A., Potapenko, A., Bridgland, A., Meyer, C., Kohl, S. A. A., Ballard, A. J., Cowie, A., Romera-Paredes, B., Nikolov, S., Jain, R., Adler, J., Back, T., Petersen, S., Reiman, D., Clancy, E., Zielinski, M., Steinegger, M., Pacholska, M., Berghammer, T., Bodenstein, S., Silver, D., Vinyals, O., Senior, A. W., Kavukcuoglu, K., Kohli, P. and Hassabis, D. (2021), Highly accurate protein structure prediction with AlphaFold, Nature, vol. 596, pp. 583–589. https://doi.org/10.1038/s41586-021-03819-2

Lisovsky, A. L. (2020), Application of neural network technologies for the development of management systems, Strategicheskie resheniia i risk-menedzhment, vol. 11, no. 4, pp. 378–389. https://doi.org/10.17747/2618-947X-2020-4-378-389 (In Russian)

Sozykin, A. V. (2017), An Overview of Methods for Deep Learning in Neural Networks, Vestnik IuUrGU. Seriia: Vychislitel’naia matematika i informatika, vol. 6, no. 3, pp. 28–59. https://doi.org/10.14529/cmse170303 (In Russian)

Brown, T. B., Mann, B., Ryder, N., Subbiah, M., Kaplan, J., Dhariwal, P., Neelakantan, A., Shyam, P., Sastry, G., Askell, A., Agarwal, S., Herbert-Voss, A., Krueger, G., Henighan, T., Child, R., Ramesh, A., Ziegler, A. M., Wu, J., Winter, C., Hesse, C., Chen, M., Sigler, E., Litwin, M., Gray, S., Chess, B., Clark, J., Berner, C., McCandlish, S., Radford, A., Sutskever, I. and Amodei, D. (2020), Language Models are Few-Shot Learners. https://doi.org/10.48550/arXiv.2005.14165

Du, M., He, F., Zou, N., Tao, D. and Hu, X. (2023), Shortcut Learning of Large Language Models in Natural Language Understanding, Communications of the ACM, vol. 67, no. 1, pp. 110–120. https://doi.org/10.1145/3596490

Yakovleva, L. I. (2007), The notion of common sense and the tradition which constructs it, Vestnik Moskovskogo universiteta. Seriia 7: Filosofiia, iss. 4, pp. 29–49. (In Russian)

Foucault, M. (1996), The Will to Truth: Beyond Knowledge, Power and Sexuality. Works of Different Years, Moscow: Kastal’ Publ. (In Russian)

Kejriwal, M. and Nagaraj, A. (2024), Quantifying Gender Disparity in Pre-Modern English Literature using Natural Language Processing, Journal of Data Science, vol. 22, no. 1, pp. 77–96. https://doi.org/10.6339/23-JDS1100

Gould, S. J. (1996), The Mismeasure of Man, New York: W. W. Norton & Co.

Crawford, K. (2023), Atlas of AI: Power, Politics, and the Planetary Costs of Artificial Intelligence, Moscow: AST Publ. (In Russian)

Galtung, J. (1990), Cultural Violence, Journal of Peace Research, vol. 27, no. 3, pp. 291–305.

Bowker, G. C. and Star, S. L. (1999), Sorting Things Out: Classification and Its Consequences, Cambridge, Mass.: MIT Press.

Dixon-Roman, E. and Parisi, L. (2020), Data capitalism and the counter futures of ethics in artificial intelligence, Communication and the Public, vol. 5, no. 3–4, pp. 116–121. https://doi.org/10.1177/2057047320972029

Wynter, S. (2001), Towards the sociogenic principle: Fanon, identity, the puzzle of conscious experience, and what it is like to be Black, in: Gomez-Moriana, A. and Duran-Cogan, M. (eds), National Identities and Sociopolitical Changes in Latin America, Milton Park: Routledge, pp. 30–66.

Rawls, J. (1995), A Theory of Justice, Novosibirsk: Izdatel’stvo Novosibirskogo universiteta Publ. (In Russian)

Kymlicka, W. (2010), Contemporary Political Philosophy: An Introduction, Moscow: Vysshaia shkola ekonomiki Publ. (In Russian)

Sunami, A. N. and Pavlova, E. V. (2024), The moral foundations of the politics under conditions of digital society flexibility: If human values no longer matter (part 1), Political Expertise: POLITEX, vol. 20, no. 1, pp. 33–44. https://doi.org/10.21638/spbu23.2024.103 (In Russian)

Published

2024-12-30

How to Cite

Sunami, A. N. (2024). Can Artificial Intelligence Be Xenophobic? Conflict and Ethical Challenges of Neural Network Development. Vestnik of Saint Petersburg University. Philosophy and Conflict Studies, 40(3), 459–472. https://doi.org/10.21638/spbu17.2024.308