

Artificial intelligence in the public sector
https://doi.org/10.32609/0042-8736-2022-6-91-109
Abstract
The article critically examines the possibilities of using rapidly developing artificial intelligence systems in the public sector, both abroad and in Russia. Despite the prospect of significant gains, the adoption of artificial intelligence faces a number of technical, economic, and socio-ethical constraints stemming from its nature as a general purpose technology. The growing value of professional judgment, which makes it possible to put the outputs of artificial intelligence to use, is emphasized. Drawing on the principles for working with artificial intelligence developed in world practice, as well as on the specifics of Russia's institutional structure and the degree of citizens' trust in it, the article concludes that a cautious approach is needed to the use of artificial intelligence technologies in the Russian public sector. Such applications can not only cause considerable harm to specific individuals in the course of the current functioning of domestic institutions, but also hinder their transformation.
Keywords
JEL: C60, D81, H83, K24, O33, O38
About the Author
Oleg V. Buklemishev
Moscow, Russian Federation
References
1. Voloshinskaya A., Komarov V. (2015). Evidence-based public policy: Problems and prospects. Vestnik Instituta Ekonomiki Rossiyskoy Akademii Nauk, Vol. 4, pp. 90—102. (In Russian).
2. Kurdin A. A. (2021). Prospects of AI implementation into business management practices: A survey (Based on the materials of the research seminar on digital economy studies at the Faculty of Economics of Lomonosov Moscow State University). Scientific Research of Faculty of Economics. Electronic Journal, Vol. 13, No. 3, pp. 57—66. (In Russian). https://doi.org/10.38050/2078-3809-2021-13-3-57-66
3. RANEPA (2019a). The state as a platform: People and technology. Moscow: Russian Presidential Academy of National Economy and Public Administration. (In Russian).
4. RANEPA (2019b). Artificial intelligence: On choosing the strategy. Moscow: Russian Presidential Academy of National Economy and Public Administration. (In Russian).
5. RANEPA (2020). Ethics and digit: Ethical problems of digital technologies. Moscow: Russian Presidential Academy of National Economy and Public Administration. (In Russian).
6. Tambovtsev V. L. (2019). Management without measurements. Terra Economicus, Vol. 17, No. 3, pp. 6—29. (In Russian). https://doi.org/10.23683/2073-6606-2019-17-3-6-29
7. Acemoglu D. (2021). Harms of AI. NBER Working Paper, No. 29247. https://doi.org/10.3386/w29247
8. Acemoglu D., Restrepo P. (2017). The race between machine and man: Implications of technology for growth, factor shares, and employment. MIT Department of Economics Working Paper, No. 16-05. https://doi.org/10.2139/ssrn.2781320
9. Acemoglu D., Autor D., Hazell J., Restrepo P. (2020). AI and jobs: Evidence from online vacancies. NBER Working Paper, No. 28257. https://doi.org/10.3386/w28257
10. Agrawal A., Gans J., Goldfarb A. (2019). Prediction, judgment, and complexity: A theory of decision-making and artificial intelligence. In: A. Agrawal, J. Gans, A. Goldfarb (eds.). The economics of artificial intelligence: An agenda. University of Chicago Press and NBER, pp. 89—114. https://doi.org/10.7208/chicago/9780226613475.003.0003
11. Agrawal A., Gans J., Goldfarb A. (2021). AI adoption and system-wide change. NBER Working Paper, No. 28811. https://doi.org/10.3386/w28811
12. Barredo Arrieta A., Díaz-Rodríguez N., Del Ser J., Bennetot A., Tabik S., Barbado A., Garcia S., Gil-Lopez S., Molina D., Benjamins R., Chatila R., Herrera F. (2020). Explainable artificial intelligence (XAI): Concepts, taxonomies, opportunities and challenges toward responsible AI. Information Fusion, Vol. 58, pp. 82—115. https://doi.org/10.1016/j.inffus.2019.12.012
13. Athey S. (2019). The impact of machine learning on economics. In: A. Agrawal, J. Gans, A. Goldfarb (eds.). The economics of artificial intelligence: An agenda. University of Chicago Press and NBER, pp. 507—552. https://doi.org/10.7208/chicago/9780226613475.003.0021
14. Benabou R., Tirole J. (2011). Identity, morals, and taboos: Beliefs as assets. Quarterly Journal of Economics, Vol. 126, No. 2, pp. 805—855. https://doi.org/10.1093/qje/qjr002
15. Benaich N., Hogarth I. (2021). State of AI report, October 12. https://www.stateof.ai/2021-report-launch.html
16. Bolton P., Faure-Grimaud A. (2009). Thinking ahead: The decision problem. Review of Economic Studies, Vol. 76, pp. 1205—1238. https://doi.org/10.1111/j.1467-937X.2009.00554.x
17. Bresnahan T. (2010). General purpose technologies. In: B. H. Hall, N. Rosenberg (eds.). Handbook of the economics of innovation, Vol. 2. Elsevier, pp. 761—791. https://doi.org/10.1016/S0169-7218(10)02002-2
18. Brynjolfsson E., Rock D., Syverson C. (2019). Artificial intelligence and the modern productivity paradox: A clash of expectations and statistics. In: A. Agrawal, J. Gans, A. Goldfarb (eds.). The economics of artificial intelligence: An agenda. University of Chicago Press and NBER, pp. 23—60. https://doi.org/10.7208/chicago/9780226613475.003.0001
19. Chakraborty C., Joseph A. (2017). Machine learning at central banks. Bank of England Staff Working Paper, No. 674. https://doi.org/10.2139/ssrn.3031796
20. Chui M., Manyika J., Miremadi M., Henke N., Chung R., Nel P., Malhotra S. (2018). Notes from the AI frontier: Insights from hundreds of use cases. McKinsey Global Institute Discussion Paper, April.
21. Cross T. (2020). An understanding of AI’s limitations is starting to sink in. The Economist, June 11. https://www.economist.com/technology-quarterly/2020/06/11/an-understanding-of-ais-limitations-is-starting-to-sink-in
22. Daníelsson J., Macrae R., Uthemann A. (2021). Artificial intelligence and systemic risk. Journal of Banking & Finance, Vol. 140, article 106290. https://doi.org/10.1016/j.jbankfin.2021.106290
23. di Castri S., Hohl S., Kulenkampff A., Prenio J. (2019). The suptech generations. FSI Insights on Policy Implementation, No. 19. Financial Stability Institute, Bank for International Settlements.
24. EU (2016). Regulation (EU) 2016/679 of the European Parliament and of the Council of 27 April 2016 on the protection of natural persons with regard to the processing of personal data and on the free movement of such data, and repealing Directive 95/46/EC (General Data Protection Regulation). Official Journal of the European Union, Vol. 59, pp. 1—88.
25. European Commission (2021). Study to support an impact assessment of regulatory requirements for Artificial Intelligence in Europe. Final report (D5). Brussels: EU Publications. https://doi.org/10.2759/523404
26. Goodhart C. A. E. (1984). Problems of monetary management: The UK experience. In: Monetary theory and practice. London: Palgrave, pp. 91—121. https://doi.org/10.1007/978-1-349-17295-5_4
27. Head B. (2010). Evidence-based policy: Principles and requirements. In: Strengthening evidence-based policy in the Australian Federation. Roundtable Proceedings, Vol. 1. Canberra: Productivity Commission, pp. 13—26.
28. Holmstrom B., Milgrom P. (1991). Multitask principal-agent analyses: Incentive contracts, asset ownership, and job design. Journal of Law, Economics, and Organization, Vol. 7, pp. 24—52. https://doi.org/10.1093/jleo/7.special_issue.24
29. IEEE Spectrum (2021). The great AI reckoning: Deep learning has built a brave new world—but now the cracks are showing. September. https://spectrum.ieee.org/special-reports/the-great-ai-reckoning/
30. Kahneman D. (2011). Thinking, fast and slow. London: Macmillan.
31. Kinywamaghana A., Steffen S. (2021). A note on the use of machine learning in central banking. FIRE Research Paper, July 13. Frankfurt School of Finance and Management.
32. Kissinger H. A. (2018). How the enlightenment ends. The Atlantic, June. https://www.theatlantic.com/magazine/archive/2018/06/henry-kissinger-ai-could-mean-theend-of-human-history/559124/
33. Kissinger H. A., Schmidt E., Huttenlocher D. (2021). The age of AI: And our human future. New York: Little, Brown and Company.
34. Kleinberg J., Ludwig J., Mullainathan S., Obermeyer Z. (2015). Prediction policy problems. American Economic Review, Vol. 105, No. 5, pp. 491—495. https://doi.org/10.1257/aer.p20151023
35. Manyika J., Chui M., Miremadi M., Bughin J., George K., Willmott P., Dewhurst M. (2017). Harnessing automation for a future that works. McKinsey Global Institute.
36. Mullainathan S., Obermeyer Z. (2017). Does machine learning automate moral hazard and error? American Economic Review, Vol. 107, No. 5, pp. 476—480. https://doi.org/10.1257/aer.p20171084
37. Müller V. C. (2021). Ethics of artificial intelligence and robotics. In: E. N. Zalta (ed.). The Stanford encyclopedia of philosophy (Summer 2021 edition). https://plato.stanford.edu/archives/sum2021/entries/ethics-ai/
38. Niskanen W. A. (1971). Bureaucracy and representative government. Chicago, IL: Aldine-Atherton.
39. OECD (2019). Artificial intelligence in society. Paris: OECD Publishing. https://doi.org/10.1787/eedfee77-en
40. Oxford Insights (2020). Government AI readiness index 2020.
41. Prat A. (2019). Comment. In: A. Agrawal, J. Gans, A. Goldfarb (eds.). The economics of artificial intelligence: An agenda. University of Chicago Press and NBER, pp. 110—114.
42. Russell S. (2019). Human compatible artificial intelligence. Oxford University Press.
43. Schweinsberg M., Feldman M., Staub N., Akker O., Aert R., Assen M., Liu Y., Althoff T., Heer J., Kale A., Mohamed Z., Amireh H., Prasad V., Bernstein A., Robinson E., Snellman K., Sommer S., Otner S., Robinson D. (2021). Same data, different conclusions: Radical dispersion in empirical results when independent analysts operationalize and test the same hypothesis. Organizational Behavior and Human Decision Processes, Vol. 165, pp. 228—249. https://doi.org/10.1016/j.obhdp.2021.02.003
44. Taddy M. (2019). The technological elements of artificial intelligence. In: A. Agrawal, J. Gans, A. Goldfarb (eds.). The economics of artificial intelligence: An agenda. University of Chicago Press and NBER, pp. 61—87. https://doi.org/10.7208/chicago/9780226613475.003.0002
45. Trajtenberg M. (2019). AI as the next GPT: A political-economy perspective. In: A. Agrawal, J. Gans, A. Goldfarb (eds.). The economics of artificial intelligence: An agenda. University of Chicago Press and NBER, pp. 175—186. https://doi.org/10.7208/chicago/9780226613475.003.0006
46. Turovets Y., Vishnevskiy K., Altynov A. (2020). How to measure AI: Trends, challenges and implications. Higher School of Economics Research Paper, No. WP BRP 116/STI/2020. https://doi.org/10.2139/ssrn.3736851
47. Viechnicki P., Eggers W. D. (2017). How much time and money can AI save government? Deloitte Insights, April 26. https://www2.deloitte.com/us/en/insights/focus/cognitive-technologies/artificial-intelligence-government-analysis.html
48. Wallis C. J. D., Jerath A., Coburn N. et al. (2021). Association of surgeon-patient sex concordance with postoperative outcomes. JAMA Surgery, Vol. 157, No. 2, pp. 146—156. https://doi.org/10.1001/jamasurg.2021.6339
49. Wilson H. J., Daugherty P. R. (2018). Collaborative intelligence: Humans and AI are joining forces. Harvard Business Review, July—August, pp. 114—123.
For citations:
Buklemishev O.V. Artificial intelligence in the public sector. Voprosy Ekonomiki. 2022;(6):91-109. (In Russ.) https://doi.org/10.32609/0042-8736-2022-6-91-109