ETHICAL, LEGAL, AND SOCIOECONOMIC ASPECTS OF IMPLEMENTING ARTIFICIAL INTELLIGENCE IN TAX ADMINISTRATION[1]
Artur Bogucki *
https://orcid.org/0000-0002-9616-3159
Abstract. This study examines the integration of artificial intelligence (AI) in tax law and administration, underscoring key ethical, legal, and socioeconomic dimensions. It explores how AI can improve tax compliance and foster innovation while simultaneously raising concerns regarding fairness, transparency, and accountability. Specific risks, including data bias, breaches of privacy, and over-reliance on automated risk-scoring, illustrate the need for robust legal frameworks such as the GDPR and the AI Act. Socioeconomic implications – notably labour displacement and income inequality – spotlight the necessity for equitable policies and responsible AI governance. Drawing on the Ethical, Legal, and Social Aspects (ELSA) and Responsible Research and Innovation (RRI) frameworks, this research provides recommendations for a comprehensive approach, emphasising stakeholder engagement, transparency, and continuous oversight. Ultimately, a balanced blend of technological ingenuity and principled governance is essential to ensure that AI’s transformative potential truly serves the public interest in tax law and administration.
Keywords: AI governance, Trustworthy AI, bias mitigation, AI Act, algorithmic transparency
ETHICAL, LEGAL, AND SOCIOECONOMIC ASPECTS OF IMPLEMENTING ARTIFICIAL INTELLIGENCE IN TAX ADMINISTRATION
Summary. This study concerns the application of artificial intelligence (AI) in tax administration, viewed through the prism of ethical, legal, and socioeconomic issues. The analysis suggests that while AI can streamline tax collection and support innovation, it simultaneously raises concerns about matters such as fairness, transparency, and accountability. Particular risks stem, among other things, from errors in training data, threats to privacy, and excessive reliance on automated risk-scoring systems, which underscores the importance of coherent legal regulation, including the GDPR and the AI Act. The socioeconomic effects of AI – especially the potential displacement of jobs and the growth of income inequality – point to the need for responsible regulation and public policy. Drawing on the Ethical, Legal, and Social Aspects (ELSA) and Responsible Research and Innovation (RRI) frameworks, this study proposes recommendations combining stakeholder engagement, transparency, and continuous oversight. Ultimately, only the combination of technological innovation with legal and ethical principles guarantees that the application of AI in the operations of tax administration will serve the public good in accordance with the principles of Trustworthy AI.
Keywords: AI governance, Trustworthy AI, AI Act, bias mitigation, algorithmic transparency
1. INTRODUCTION
The integration of artificial intelligence (AI) in tax law and administration raises significant ethical, legal, and socioeconomic challenges that bear directly on effective governance and public trust. As AI technologies increasingly influence tax collection, compliance, and enforcement, there is a growing recognition of the need to ensure that these systems operate within the boundaries of Trustworthy Artificial Intelligence. Ethical principles such as fairness, bias mitigation, and accountability are central to mitigating the risks associated with AI deployment in tax systems (European Commission 2019).
In recent years, numerous governments around the world have turned to AI in order to tackle tax evasion and reduce the tax gap. In July 2024, Turkey announced its plan to use advanced AI in combating tax evasion, following the lead of other countries such as the United States or Canada. Within the European Union (EU), eighteen Member States have been making regular use of machine learning in their tax administrations – some since as early as 2004 – to detect anomalies and streamline compliance processes (Lee 2024). The EU itself has developed specialised machine learning systems to address carousel fraud, demonstrating how AI can effectively analyse large and often complex datasets (Hadwick, Lan 2021, 609–645).
These advances underscore the central role that data – and especially big data – now plays in tax administration. By using machine learning to cluster information and highlight anomalous findings, tax authorities can process a growing volume of data at scale. However, this evolving dynamic also transforms the relationship between authorities and taxpayers, introducing concerns about bias, privacy, and transparency (Hadwick 2022).
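To make the clustering-and-anomaly-detection idea concrete, the following minimal sketch (in Python, using scikit-learn’s IsolationForest) flags unusual returns in a toy dataset. The feature names and values are hypothetical illustrations, not a description of any authority’s actual system, and flagged cases are meant to feed human review rather than automatic sanctions.

```python
# A minimal sketch of anomaly-based risk flagging on (hypothetical) return-level
# features; column names and values are illustrative only.
import pandas as pd
from sklearn.ensemble import IsolationForest

returns = pd.DataFrame({
    "declared_turnover": [120_000, 95_000, 3_400_000, 88_000, 101_000],
    "input_vat_ratio":   [0.18, 0.21, 0.97, 0.19, 0.20],
    "late_filings_3y":   [0, 1, 5, 0, 0],
})

# Unsupervised model: no fraud labels needed, only "unusual" patterns.
model = IsolationForest(contamination=0.1, random_state=0)
returns["flag"] = model.fit_predict(returns)  # -1 = anomalous, 1 = inlier

# Flagged returns are candidates for *human* review, not automatic sanctions.
print(returns[returns["flag"] == -1])
```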
A key worry is whether taxpayers flagged by AI-based audits might suffer undue hardship if the training data or algorithmic models themselves are biased. Indeed, social bias can creep in during both data selection and model development. If a model’s training data is skewed – or if the humans fine-tuning the system introduce their own biases – entire segments of society can become unfair targets of increased scrutiny. This risk is not hypothetical: the Dutch childcare benefits scandal (Toeslagenaffaire) shows how tens of thousands of lower-income and ethnic minority families were wrongly penalised after the Dutch tax authority deployed a self-learning fraud-detection tool with insufficient safeguards (Lee 2024). The impact of these errors was devastating, with families forced into poverty and children taken into foster care, underscoring how severe the consequences can be when AI systems are not transparently developed or carefully overseen.
The implementation of AI in tax administration must adhere to the principle of legality, which mandates that AI systems operate under clear statutory frameworks to ensure compliance with constitutional standards (Kuźniacki et al. 2022). Jurisdictional variations in legal approaches, such as those between different EU Member States, illustrate the complexities of aligning AI applications with existing laws while also addressing concerns about transparency and accountability in automated decision-making processes. Cases such as the Dutch Toeslagenaffaire and SyRI highlight the dire consequences of inadequate legal oversight, emphasising the necessity for robust legal frameworks to protect individual rights in the face of algorithmic governance (Hadwick, Lan 2021).
Socioeconomically, the impact of AI on labour markets and income distribution is a pressing concern as automation has the potential to displace unskilled workers while enhancing productivity for capital owners (Renda, Balland, Bosoer 2023; Rodrik, Stantcheva 2021; Hadwick 2024). Policymakers are exploring innovative fiscal measures, such as universal basic income and taxation strategies like robot taxes, to address the challenges posed by technological advancements (Hadwick 2024). The purpose of this article is to underscore the need for a balanced approach that promotes innovation while ensuring equitable outcomes for all members of society at the intersection of AI, taxation, and social policy.
Ultimately, the ongoing discourse surrounding the ethical, legal, and socioeconomic aspects of AI in tax law underlines its transformative potential and the necessity for responsible implementation. As jurisdictions navigate the complexities of AI integration, the emphasis on ethical governance, legal accountability, and socioeconomic equity will be vital to shaping the future landscape of tax administration and ensuring public trust in these evolving systems.
2. METHODOLOGY
In this study, I drew on two complementary frameworks – Ethical, Legal, and Social Aspects (ELSA) (Zwart, Landeweerd, van Rooij 2014) as well as Responsible Research and Innovation (RRI) (Owen et al. 2013) – to explore how artificial intelligence (AI) could be integrated into tax administration in a responsible and context-sensitive manner. ELSA emphasises a strong focus on ethical considerations, encouraging researchers and practitioners to anticipate potential risks, identify vulnerable groups, and align technological developments with moral and legal standards. RRI, meanwhile, broadens this lens to incorporate the social, political, and economic dimensions of innovation. One of its central tenets is the active engagement of multiple stakeholders – policymakers, industry representatives, end users, and civil society – to ensure that emerging technologies reflect diverse needs and values.
By bringing ELSA and RRI together, I sought to create a more holistic framework that combines ethical scrutiny with broad stakeholder involvement (Brownsword 2024). ELSA’s ethical focus enriches RRI’s concentration on social and political factors, while RRI’s stakeholder-driven approach helps to translate ELSA’s insights into practical processes and decisions. This synergy not only respects the integrity of each approach but also extends their benefits, fostering more robust and accountable decision-making around AI.
In practice, I adopted a participatory design method that involved assembling a small advisory group of domain experts from the outset. I reached out to a senior tax official who assisted me in recruiting six additional members, including a legal counsel specialising in tax regulation, an IT expert experienced with AI, two senior tax auditors (focusing on corporate and individual filings, respectively), and a compliance officer from a large accounting company. This diverse mix of expertise ensured that we could address ethical concerns, legal requirements, administrative processes, and technological feasibility all at once, reflecting the combined ethos of ELSA and RRI.
A key aspect of this participatory approach was a half-day online workshop, for which I prepared a series of visual materials. Using mind maps generated in the Mindmup2 software, I outlined the critical junctures in tax administration where AI might play a role, from detecting fraud to informing audit selection. These visuals also highlighted where human judgment remains essential and drew attention to potential ethical, legal, and socioeconomic issues such as fairness, privacy, and transparency. During the workshop, the advisory group collectively refined the research questions, focusing on those areas where AI’s impact might be most pronounced. Their insights helped me capture the complexity and nuance of AI-driven decision-making in a tax setting.
The result is a study grounded in real-world considerations, one that examined both the immediate and broader implications of AI deployment. By weaving ELSA’s thorough ethical and legal analysis with RRI’s emphasis on inclusive engagement, I arrived at a deeper understanding of how to design, implement, and evaluate AI in tax administration. This combined approach allowed me to address practical concerns – such as technical feasibility and regulatory compliance – while maintaining a commitment to ethical principles, social values, and responsible innovation.
3. ETHICAL ASPECTS
In April 2019, the High-Level Expert Group on Artificial Intelligence published a comprehensive set of recommendations aimed at ensuring the development of Trustworthy AI.[2] Formed in response to the European Commission’s “Artificial Intelligence for Europe” communication released on 25 April 2018, this group of 52 independent experts came from various countries and represented academia, civil society, and the business sector. Their work included drafting guidelines, proposing both technical and non-technical methods for putting these guidelines into practice, and conducting a pilot evaluation to assess their applicability (High-Level Expert Group on Artificial Intelligence (HLEG) 2019, 6).
The European Union’s institutions had already shown their commitment to AI policy in 2018, first by publishing “Artificial Intelligence for Europe” on 25 April, which identified key areas of activity necessary to support AI development (European Commission 2018a). This was followed on 7 December 2018 by the “Coordinated Plan on Artificial Intelligence,” which reinforced the focus on increased investment in AI, preparedness for related socioeconomic changes, and the importance of ensuring AI’s ethical and legal evolution (European Commission 2018c). Building on these documents, the European Commission established the High-Level Expert Group on Artificial Intelligence to provide both ethical guidelines and regulatory recommendations (European Commission 2018b).
On 8 April 2019, the group released the first instalment of its mandate in the form of “Ethics Guidelines for Trustworthy AI.” The document emphasises that AI must be lawful, ethical, and robust. It calls on a broad spectrum of stakeholders – including companies, NGOs, researchers, public services, and citizens – to voluntarily adopt these guidelines, thus collectively ensuring that the development and use of AI remain trustworthy. The recommendations are organised into three main parts: fundamental issues, implementation recommendations, and evaluation.
The first part, focused on fundamental issues, introduces four overarching guidelines to ensure that AI is developed with respect for fundamental rights, democracy, and ethical principles. These guidelines underscore that human autonomy must be protected by preventing manipulative or coercive uses of AI; that AI systems should be designed to avoid causing technical or social harm; that fairness and justice must be upheld through equitable technology distribution; and that AI solutions must be transparent and explainable so that users understand how the system operates and makes decisions.
In the second part, the guidelines highlight seven implementation recommendations for turning these fundamental ideas into practice. They stress the need for AI systems to support human decision-making and respect fundamental rights, democracy, and social justice. Safety and reliability are recognised as critical in preventing risks, whether intended or unintended. Privacy and responsible data management are pivotal to protecting users’ personal information, while transparency is underscored at multiple levels, covering data collection, system operations, and communication. Equally significant is ensuring diversity and non-discrimination throughout AI’s lifecycle, promoting well-being and environmental sustainability as well as establishing structures for accountability. Organisations are encouraged to adopt various technical and non-technical measures to meet these aims, ranging from ethical system architecture and explainability features to codes of conduct, standardisation, and certification procedures. Education, awareness initiatives, and inclusive stakeholder collaboration further reinforce these objectives, while diverse design teams help reflect society’s multitude of perspectives.
Finally, the third part of the recommendations sets out how AI solutions can be evaluated to ensure their alignment with the principles of trustworthiness. The High-Level Expert Group developed a survey-based tool known as the Assessment List for Trustworthy Artificial Intelligence (ALTAI). This tool is intended to gauge adherence to ethical and technical standards by questioning management, human resources, quality assurance teams, IT professionals, and AI system operators.
4. TRUSTWORTHY AI IN TAX ADMINISTRATION
The ethical considerations surrounding the implementation of artificial intelligence (AI) in tax administration are crucial to making sure that these technologies operate fairly, transparently, and responsibly. AI has increasingly come to influence important decisions in tax collection, compliance, and enforcement, making it essential that safeguards protect individuals’ rights and sustain public trust. Although AI in tax administration can produce significant benefits – with roots tracing back to the 1970s and systems such as the “Taxman” (McCarty 1977) – it also carries notable risks and limitations (Engstrom et al. 2020; Ranchordas, Scarcella 2021; Citron 2008). Notably, given the limitations of the current state of the art, achieving Trustworthy AI is largely a matter of data governance (Renda 2024).
One central ethical issue concerns fairness and bias mitigation. AI systems must be carefully designed and tested to prevent discrimination, particularly because large datasets often contain historical biases (Mayson 2018; Kleinberg et al. 2018). These challenges arise when correlations are mistaken for causal relationships, causing certain groups to be disproportionately flagged for audits or investigations. The so-called Toeslagenaffaire in the Netherlands illustrates how biases can become entrenched in automated decision-making processes: tens of thousands of families, many from marginalised backgrounds, were penalised on the basis of flawed risk-scoring models (Hadwick, Lan 2021). Although well-crafted AI can eliminate some forms of human error, its effect can become regressive if the underlying design or data is skewed (Sunstein 2021; Hacker 2018).
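One way such skew can be detected in practice is illustrated by the sketch below, which compares audit-flag rates across groups using the conventional “80% rule” heuristic. The group labels, data, and threshold are hypothetical assumptions, and no single metric of this kind would suffice for a legal assessment of discrimination.

```python
# A simplified sketch of one bias check: comparing audit-flag rates across
# groups. Labels and data are hypothetical; real audits would combine several
# metrics with legal review, not rely on this ratio alone.
import pandas as pd

df = pd.DataFrame({
    "group":   ["A", "A", "A", "B", "B", "B", "B"],
    "flagged": [1, 0, 0, 1, 1, 0, 1],
})

rates = df.groupby("group")["flagged"].mean()
disparate_impact = rates.min() / rates.max()  # 1.0 = parity across groups

# The "80% rule" is one conventional heuristic (not legally binding in the EU).
if disparate_impact < 0.8:
    print(f"Warning: flag rates diverge across groups (ratio {disparate_impact:.2f})")
```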
Accompanying the risk of bias is the need for transparency and accountability. Many modern AI tools, such as deep neural networks, function as “black boxes” (Citron 2008). This lack of transparency makes it difficult for taxpayers, or even tax professionals, to understand or challenge decisions that appear incorrect. While the OECD highlights explainability and transparency as key objectives for good AI governance, real-world practice reveals that governments often deploy AI systems without disclosing the precise criteria they use to raise “red flags” (Ranchordas, Scarcella 2021). Citizens may also over-rely on automated judgments or feel too intimidated to push back against machine-driven results.
Inclusion and attention to the digital divide add another dimension to these concerns (Bevacqua, Renolds 2018). If chatbots and e-filing portals become the principal channels for tax administration, individuals with lower digital literacy, reduced Internet access, or language barriers may be left behind (Ranchordás 2022). AI-driven systems that streamline compliance for some might unintentionally create hurdles for others, especially elderly or vulnerable citizens. Encouraging widespread user testing and designing multiple support mechanisms can alleviate these inequities (Okunogbe, Pouliquen 2022).
Furthermore, developing AI in tax administration brings into focus risk mitigation and regulatory alignment. Data misuse or privacy breaches can escalate when real-time analytics monitors billions of transactions, and digital profiling raises questions about infringements on personal freedoms (Scarcella 2019). AI-driven risk analysis is often seen as a “silver bullet” that compensates for deficient tax laws or outdated enforcement models (de la Feria, Grau Ruiz 2021). However, relying solely on sophisticated tools to patch policy gaps can postpone much-needed legislative reforms (Blank, Osofsky 2021). A coherent legal framework, carefully aligned with AI capabilities, is vital to address the causes of non-compliance and make sure that data-driven tools do not merely paper over deeper structural flaws.
Despite such advancements in AI, human oversight remains pivotal to guide, monitor, and correct algorithmic processes. While well-designed AI can reduce “noise” in decision-making – stemming from varied human biases – it risks eliminating the equitable judgment or empathy that may be warranted in certain taxpayer situations. Furthermore, intelligent systems can increase accountability by requiring thorough documentation of data sources and logic, but meaningful appeal processes are needed to avoid scenarios where no one is ultimately responsible for errors.
Lastly, it is crucial to recognise the symbiotic relationship between tax policy and tax administration. Even the most advanced AI may be stymied by vague or deficient legal rules, leading to inconsistent enforcement and the possibility of “back-door” policymaking by algorithms. Over time, governments could be tempted to delay unpopular yet necessary tax reforms, relying on the sophistication of AI systems to enhance revenue collection in spite of legislative shortcomings (Blank, Osofsky 2021). To realise AI’s full potential in tackling fraud, improving compliance, and reducing error, agencies must combine these data-driven innovations with robust, adaptive legislation (Keen, Slemrod 2017).
In sum, the ethical implementation of AI in tax administration calls for careful balancing between technological capabilities and principled governance. Although AI can help detect fraud more efficiently, reduce bureaucratic inefficiencies, and strengthen revenue, it can also cement unintended biases and erode public trust if deployed without due regard for transparency, fairness, and individual rights. By building in inclusive design, fostering strong legal frameworks, and preserving meaningful human oversight, stakeholders can make sure that AI truly serves the public interest in tax law and administration.
5. LEGAL ASPECTS
The principle of legality remains a cornerstone in the context of tax administration utilising AI. As jurisdictions continue to expand their use of advanced technologies, tax authorities must ensure compliance with both national and supranational legal frameworks to protect taxpayer rights and foster public trust. However, these legal frameworks also intersect with broader concerns about AI-related risks, including discrimination, the lack of transparency, and vulnerabilities stemming from large-scale data processing (MIT Future Tech 2024a).
5.1. Legal accountability and the principle of legality
Under the rule of law sensu largo, governmental authorities must be able to demonstrate a clear statutory basis when deploying automated decision-making. The fundamental requirement is that an AI system’s scope, rationale, and oversight protocols align with constitutional principles and relevant statutory provisions (Kuźniacki et al. 2022). In the realm of tax administration, key questions arise regarding whether legislatures explicitly permit or restrict fully automated audits, risk-scoring algorithms, and other AI-driven processes. In some jurisdictions, such as Germany, the Abgabenordnung mandates regular reviews of AI models used in tax audits, ensuring both transparency and compliance. This aligns with the understanding that AI, if left unchecked, can amplify existing legal loopholes or even facilitate regulatory overreach (Peeters 2024).
5.2. Emerging risks in the use of AI
A recent survey by the Massachusetts Institute of Technology (MIT) identified around 700 distinct risks associated with AI systems (MIT Future Tech 2024b). These risks fall into categories such as discrimination and toxicity, privacy and security, misinformation, malicious use, human–computer interaction, socioeconomic harms, and failures or limitations of AI itself. In tax administration, the interplay of these risk domains underscores how an algorithm’s impact on taxpayers’ rights could be grave if not governed by robust legal controls (OECD 2020).
Among the most pressing concerns is “human over-reliance” on AI outputs (Passi, Vorvoreanu 2022). In public administrations, where staffing is often stretched, the convenience of algorithmic “red flags” or risk scores can lead to undue delegation of crucial decision-making authority. The resulting decrease in human oversight raises the spectre of systemic bias or opacity if public officials simply “rubber-stamp” AI recommendations (Alon-Barkat, Busuioc 2023). This risk is especially salient in tax controls, where a single erroneous classification can have outsized consequences on a taxpayer’s financial well-being.
5.3. Data protection and security concerns
Increased reliance on AI-driven tax controls entails the collection of vast personal datasets, making data security a core legal issue (MIT Future Tech 2024b). Authorities often combine personal information with commercial or financial data, intending to detect anomalies or potential fraud. Yet these expansive datasets raise two acute vulnerabilities:
- Data Memorisation and Inference – complex models, including large language models (LLMs), can inadvertently memorise personal details, potentially skewing outcomes against individuals or breaching their privacy (Maat 2022).
- Unauthorised Access and Leaks – public or private actors may hack, leak, or unlawfully share sensitive tax data, eroding trust. A recent example in the United States involved an IRS contractor who accessed and disclosed taxpayer records, demonstrating how even official channels are not immune to misuse (Internal Revenue Service (IRS) 2024).
Ensuring compliance with data protection requirements is thus indispensable (Commission Nationale de l’Informatique et des Libertés 2022). Under the General Data Protection Regulation (GDPR), tax authorities remain data controllers or processors and must adopt a risk-based approach in their internal governance.[3] In addition, the GDPR mandates transparency, purpose limitation, and data minimisation – principles that, if taken seriously, curtail the potential for AI-driven mass surveillance or discriminatory outcomes.[4] Courts at both the EU and ECHR levels have reinforced that tax authorities must handle personal data proportionally, upholding taxpayers’ rights to privacy.[5]
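As an illustration of how data minimisation might be operationalised upstream of any analytics, the sketch below pseudonymises a direct identifier with a keyed hash before a record enters a model pipeline. The salt handling is deliberately simplified and hypothetical; a production deployment would rely on managed key storage, a documented retention policy, and a DPIA.

```python
# A minimal sketch of pseudonymisation before analytics, one way to
# operationalise GDPR data minimisation. The hard-coded salt is illustrative
# only; real systems would use managed key storage.
import hashlib
import hmac

SECRET_SALT = b"store-me-in-a-key-vault"  # hypothetical; never hard-code in practice

def pseudonymise(taxpayer_id: str) -> str:
    """Replace a direct identifier with a keyed hash before it enters the model."""
    return hmac.new(SECRET_SALT, taxpayer_id.encode(), hashlib.sha256).hexdigest()

record = {"taxpayer_id": "PL1234567890", "declared_income": 54_000}
record["taxpayer_id"] = pseudonymise(record["taxpayer_id"])
print(record)  # the analytics pipeline never sees the raw identifier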
5.4. Evolving regulatory responses
As governments worldwide incorporate AI in public administration, regulatory frameworks have begun to adapt. Legislators in the EU are particularly focused on risk-based obligations for public bodies.
In the European Union, the AI Act[6] introduces a risk-tiered approach, placing stricter obligations on “high-risk AI” in areas such as law enforcement, migration, or essential public services. However, Recital 59 of the AI Act excludes many tax or customs AI systems from the high-risk category if they serve purely “administrative” rather than “criminal” enforcement. Critics argue that this exception undermines key safeguards and may create an incoherent risk classification (Rizzo, Hassan 2024), especially when tax authorities use powerful AI-based profiling tools, setting a dangerous precedent (Peeters 2024).
At the same time, the GDPR obligations remain relevant. Where tax authorities rely on algorithmic or automated processes in assessing liability, courts have signalled that individuals should have meaningful recourse and access to explanations.[7] The interplay between the GDPR and the AI Act can create compliance overlaps but can also compel authorities to adopt robust risk-management assessments, encompassing data protection impact assessments (DPIAs)[8] and, where the AI Act applies, fundamental rights impact assessments (FRIAs).[9]
Ultimately, legal frameworks for AI in tax administration face the dual challenge of harnessing automation’s potential to enhance compliance while making sure that fundamental rights remain protected. The overarching lesson is that no single tool – whether an algorithmic risk model or a robust statute – can stand alone. Effective governance rests on continuous human scrutiny, clear legal boundaries, and a risk-based regulatory approach that adapts as AI technologies evolve.
6. SOCIOECONOMIC ASPECTS
A holistic understanding of AI’s role in tax administration requires going beyond legal and ethical questions to consider socioeconomic dimensions. Far-reaching technological advances can profoundly affect labour markets, income distribution, and social welfare, highlighting the importance of equitable and inclusive policies. As AI-driven solutions become more commonplace, stakeholders must address how automation might displace some types of work while creating new opportunities for others, and how to make sure that the benefits of these innovations are fairly distributed among all segments of society.
7. THE IMPACT OF TECHNOLOGICAL INNOVATIONS ON LABOUR MARKETS
The intersection of AI and fiscal policy is increasingly significant as technology reshapes labour markets and economic structures. Historical patterns indicate that technological advancements have led to both increased productivity and job displacement, making fiscal policy essential in addressing these dual effects (Acemoglu, Johnson 2023). For example, the introduction of automation could reduce job opportunities for unskilled workers while simultaneously enhancing profits for capital owners. To mitigate these adverse effects, governments are exploring measures such as Finland’s Universal Basic Income (UBI) experiment and other social expenditure policies that aim to support individuals impacted by technological changes (Acemoglu, Johnson 2023).
8. TAXATION AND REDISTRIBUTION
Taxation remains a powerful tool for influencing social behaviour and addressing inequalities arising from technological advancements. As new technologies potentially exacerbate income disparities, there is a growing consensus that redistributive policies, funded through taxation, can help balance the gains of automation between skilled and unskilled workers. For instance, introducing a robot tax could slow the rate at which low-skilled jobs are replaced by machines while also generating revenue to fund social programmes aimed at supporting displaced workers (Khogali, Mekid 2023). However, policymakers must navigate the trade-offs between promoting innovation and ensuring equitable outcomes for all workers (Renda, Balland 2023).
9. AI’S ROLE IN TAX ADMINISTRATION
AI’s integration into tax administration presents opportunities to improve efficiency and compliance while also posing risks to equity and transparency. Enhanced audit processes and streamlined tax collection systems can reduce costs for both governments and taxpayers. Nevertheless, reliance on AI tools necessitates robust oversight to prevent biases and ensure fairness in tax enforcement. The ethical implications of AI use in taxation are critical, as transparency and accountability are essential to maintain public trust in tax systems (Cumberland 2024).
10. LONG-TERM ECONOMIC CONSIDERATIONS
The long-term implications of AI on economic structures require careful consideration. While fiscal policies can provide short-term benefits by addressing inequality, they may have longer-term consequences on capital accumulation and productivity growth. This presents a challenge for policymakers, who must balance immediate needs with sustainable economic strategies. As AI continues to evolve, its potential to concentrate market power and foster monopolistic practices could necessitate regulatory approaches beyond tax policy alone. The post-COVID environment may accelerate the adoption of automation, intensifying the urgency of these discussions in shaping equitable economic futures.
11. RECOMMENDATIONS AND CONCLUSION
The landscape of AI in tax law and administration is evolving rapidly, with emerging trends suggesting an increased emphasis on foundational models over singular “breakthrough” systems. Rather than relying on isolated innovations, the value will lie in how AI models are adapted to industry-specific workflows, particularly in legal tech and tax administration. Such integration raises critical legal and ethical considerations: as AI systems become more sophisticated, questions about accountability and liability become more pressing, especially when automated decisions affect taxpayer rights and obligations.
Future applications of AI in tax compliance will likely focus on boosting voluntary compliance and operational efficiency, encompassing everything from enhanced tax law cognition and accounting support to predictive analytics for tax disputes. Start-ups and companies offering AI-driven tax solutions will need to remain vigilant about evolving regulations and global legal developments to ensure compliance and fully capitalise on these technological advancements. Meanwhile, the ongoing integration of AI into the broader economy is generating new market structures, centred on services and digital networks rather than traditional ownership-based systems.
This shift demands proactive risk management and coherent regulatory frameworks to navigate the complexities of AI-driven service provision. As demand for AI-enabled offerings continues to grow, businesses and tax authorities alike must adapt their strategies and operations to harness these innovations effectively while maintaining legal and ethical standards. What follows is a set of recommendations for tax administrations and related entities to consider.
Adopt rigorous oversight frameworks
Tax administrations should implement robust oversight protocols to detect and correct biases in AI systems. Government agencies often collect extensive personal data, thus heightening the need to ensure transparency, explainability, and fairness in AI-driven investigations. Human oversight should remain a core element, preventing errors that could escalate into injustices.
Address bias at all stages
Public authorities must recognise that bias can arise not only from flawed training data but also from human decisions surrounding model development and assessment. Making sure that large datasets are representative and carefully vetted for social biases can help mitigate systemic inequities. Likewise, continuous monitoring and auditing are critical to catch bias early.
Enhance transparency and communication
The lack of transparency and opaque algorithms can undermine taxpayer trust. Tax authorities should communicate proactively about how they use AI systems, including the underlying criteria or risk indicators. This openness fosters public confidence and aligns with both EU and OECD AI principles, which advocate transparency and responsible disclosure.
Build trust through service-oriented AI
Beyond fraud detection, AI tools can also be deployed to improve taxpayer services, such as chatbots for frequently asked questions or automated phone guidance. Ireland’s use of natural language processing to handle clearance inquiries and Spain’s VAT chatbot demonstrate how AI can streamline administrative tasks and increase taxpayer satisfaction when implemented ethically and transparently.
Align with socioeconomic strategies
AI’s widespread adoption in tax administration intersects with broader socioeconomic policies, including how to handle labour displacement and income inequality. Governments might need to consider complementary measures (e.g. the Universal Basic Income, robot taxes) to ensure an equitable distribution of the gains from automation.
Adopt an AI governance policy at the institutional level
Given the systemic impact of AI, it is prudent for tax administrations to formalise their stance by publishing a policy document that explains what AI is, why regulation is essential, and what principles will govern AI deployment. This policy should articulate senior leadership’s objectives, ethical values (e.g. fairness and transparency), and oversight mechanisms. By establishing a consistent institution-wide policy, tax administrations can embed AI compliance into day-to-day operations rather than treating it as an isolated IT project.
Build individual duty of care
A formal policy alone does not guarantee responsible AI use. All levels of staff – including senior management – must be explicitly informed about AI’s benefits, limitations, and potential risks. Ongoing training programmes should stress each employee’s duty of care, emphasising the need to explain AI-driven decisions, watch for algorithmic bias, and identify when human intervention is critical. This cultural shift helps ensure that AI does not become an opaque black box but remains a tool under human oversight.
Integrate AI strategy
While AI can be transformational, it must not overshadow the existing revenue collection, audit effectiveness, and compliance functions. Tax administrations should develop a dedicated AI strategy that is consistent with broader modernisation efforts, strengthening core systems (e.g. data quality, digital filing platforms) while selectively exploring advanced analytics or next-generation AI. This balanced approach ensures that automation enhances rather than replaces fundamental tasks needed for equitable and efficient tax governance.
Maintain a formal inventory of AI use cases
A key step towards transparency and accountability is to catalogue where AI is deployed across the administration. By documenting the nature of each AI use case (e.g. machine learning fraud detection, chatbots for taxpayer inquiries) and its technical attributes (e.g. NLP tools, expert systems), leadership can better align oversight and risk assessments. Crucially, this inventory should capture future plans and should be regularly updated, i.e. not just in response to regulatory pressure but as an ongoing governance practice.
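The sketch below shows one possible, non-prescriptive shape for such an inventory entry, expressed as a Python dataclass. Every field name is an assumption intended only to illustrate the level of detail that supports oversight, not a prescribed schema.

```python
# One possible shape for an AI use-case inventory entry (Python 3.9+).
# Field names are assumptions, not a mandated schema; the point is that each
# deployment is catalogued with enough detail to support oversight.
from dataclasses import dataclass, field

@dataclass
class AIUseCase:
    name: str                     # e.g. "VAT carousel-fraud detection"
    technique: str                # e.g. "gradient-boosted trees", "NLP chatbot"
    purpose: str                  # administrative vs enforcement use
    personal_data: bool           # triggers GDPR DPIA review if True
    risk_tier: str                # internal classification, e.g. "high"
    owner: str                    # accountable business unit
    review_due: str               # next scheduled legal/ethical review
    mitigations: list[str] = field(default_factory=list)

registry = [
    AIUseCase("VAT fraud scoring", "gradient-boosted trees", "enforcement",
              True, "high", "Audit Directorate", "2025-06-30",
              ["human review of all flags", "quarterly bias audit"]),
]
```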
Perform rigorous legal and ethical reviews
Each catalogued use case demands thorough compliance checks under relevant data protection laws (GDPR), sectoral legislation (tax codes), and emerging AI regulations (AI Act). Risk-assessment methodologies can systematically evaluate issues such as bias, explainability, privacy, and accountability. High-risk AI use cases – whether because they rely on sensitive personal data or significantly impact taxpayers’ rights – must be escalated to the highest level of review, with adequate mitigation strategies being put in place.
Retain the “Human-in-the-Loop” approach where necessary
When AI-influenced decisions can impose significant harm on taxpayers (e.g. large penalties, audits with potential reputational damage), human intervention should remain mandatory. This “Human-in-the-Loop” (HITL) approach guards against automated overreach and aligns with the requirement for meaningful oversight under Article 22 of the GDPR and various provisions in the AI Act. In scenarios where fully automated systems operate, administrations might also consider “Human-on-the-Loop” (HOTL) models, in which humans continuously monitor for anomalies or red flags.
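A minimal sketch of such a gate appears below: the model’s output alone never triggers a high-impact action. The numeric thresholds are purely hypothetical placeholders for values an administration would set by policy.

```python
# A minimal sketch of a Human-in-the-Loop gate: model output alone never
# triggers a high-impact action. Thresholds are hypothetical placeholders.
def route_decision(risk_score: float, penalty_eur: float) -> str:
    HIGH_IMPACT = 1_000   # hypothetical threshold for "significant harm"
    if penalty_eur >= HIGH_IMPACT:
        return "mandatory human review"        # HITL: a person decides
    if risk_score >= 0.9:
        return "human review of model output"  # borderline: verify before acting
    return "automated processing with HOTL monitoring"

print(route_decision(risk_score=0.95, penalty_eur=5_000))
# -> "mandatory human review"
```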
Conduct pre-introduction risk assessments for new AI cases
Before rolling out new AI-driven processes – be it a predictive model for tax evasion or a generative chatbot for taxpayer guidance – tax administrations should undertake a preliminary risk assessment. This diligence step evaluates not only the financial or operational upside of using AI, but also potential unintended consequences, such as data breaches or biased audits. Beyond compliance costs, the assessment should consider downstream effects on trust, appeals outcomes, and legal liabilities.
Prioritise transparency
In the spirit of openness, administrations can publish a filtered version of their AI inventory, excluding only the sensitive, national security-related, or genuinely trivial use cases. This publication would foster public confidence and invite stakeholder feedback, enabling the administration to spot early signals of risk or misaligned practices. Internally, staff who interact with AI tools should be notified when a recommendation or document is partly AI-generated.
Prominently disclose AI involvement in ongoing operations
Beyond a published inventory, any direct taxpayer-facing service or correspondence that involves AI outputs should carry a clear disclosure (e.g. “This communication was partially generated by AI”). Such labelling enforces ethical commitment to clarity and meets emerging best practices in human–computer interaction design. This measure also lessens confusion, allowing taxpayers or employees to gauge the reliability and context of AI-driven content.
Evaluate use case performance and intent
Finally, effective governance does not end with deployment; AI systems evolve over time and may yield unintended behaviours. Tax administrations should revisit each use case at regular intervals to confirm that it aligns with the original business purpose and does not generate unintended harms, such as misclassifications or discriminatory outcomes. Both technical metrics (e.g. model accuracy) and qualitative feedback (e.g. taxpayer complaints) should inform iterative improvements or, if necessary, system decommissioning.
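The following sketch illustrates one way such periodic re-evaluation could be encoded, comparing live metrics against baselines recorded at deployment. The metric names and limits are illustrative assumptions rather than recommended values.

```python
# A sketch of periodic re-evaluation: compare current performance and complaint
# volume against baselines recorded at deployment. Limits are illustrative only.
def review_use_case(accuracy: float, baseline_accuracy: float,
                    complaints_per_1000: float) -> str:
    if accuracy < baseline_accuracy - 0.05:        # material degradation
        return "retrain or suspend pending investigation"
    if complaints_per_1000 > 5:                    # qualitative warning signal
        return "escalate to legal/ethical review"
    return "continue with routine monitoring"

print(review_use_case(accuracy=0.83, baseline_accuracy=0.90, complaints_per_1000=2))
# -> "retrain or suspend pending investigation"
```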
In conclusion, the increasing reliance on AI in tax administration brings both opportunities – such as efficiency gains and improved accuracy – and risks, ranging from data-driven bias to the erosion of public trust. Balancing innovation with rigorous safeguards is essential to preserve taxpayer rights and promote social welfare. As technology continues to evolve, the principles of fairness, transparency, accountability, and equity must guide policy decisions and practical implementations, ensuring that AI truly serves the public interest in the domain of tax law and administration.
BIBLIOGRAPHY
Acemoglu, Daron. Simon Johnson. 2023. Power and Progress: Our Thousand-Year Struggle over Technology and Prosperity. London: Hachette UK.
Alon-Barkat, Sigal. Madalina Busuioc. 2023. “Human–AI Interactions in Public Sector Decision Making: ‘Automation Bias’ and ‘Selective Adherence’ to Algorithmic Advice.” Journal of Public Administration Research and Theory 33(1): 153–169. https://doi.org/10.1093/jopart/muac007
Bevacqua, John. Vanessa Renolds. 2018. “The Digital Divide and Taxpayer Rights – Cautionary Findings from the United States.” eJournal of Tax Research 16: 714–739.
Blank, Joshua D. Leigh Osofsky. 2021. “The Social Justice of Legal Drafting: Tax Law and Beyond.” In 114th Annual Conference on Taxation. Washington, DC: National Tax Association.
Brownsword, Roger. 2024. “Law, Technology, and Our Governance Dilemma.” Laws 13(3): 30. https://doi.org/10.3390/laws13030030
Citron, Danielle Keats. 2008. “Open Code Governance.” University of Chicago Legal Forum 2008: 355–387.
CNIL (Commission Nationale de l’Informatique et des Libertés). 2022. AI: Ensuring GDPR Compliance. September 21, 2022. https://www.cnil.fr/en/ai-ensuring-gdprcompliance.
Cumberland, Torsten. 2024. “A Look at the Risks and Opportunities for AI in Tax Administrations.” OECD AI Policy Observatory. https://oecd.ai/en/wonk/risks-and-opportunities-ai-tax-administrations
de la Feria, Rita. M.A. Grau Ruiz. 2021. “The Robotisation of Tax Administration.” In International Conference on Inclusive Robotics for a Better Society. Edited by M.A. Grau Ruiz. 115–123. Cham: Springer International Publishing. https://doi.org/10.1007/978-3-031-04305-5_19
Engstrom, David Freeman. Daniel E. Ho. Catherine M. Sharkey. Mariano-Florentino Cuéllar. 2020. “Government by Algorithm: Artificial Intelligence in Federal Administrative Agencies.” NYU School of Law. Public Law Research Paper No. 20–54. https://doi.org/10.2139/ssrn.3551505
European Commission. 2018a. Artificial Intelligence for Europe. COM(2018) 237 final. Brussels: European Commission. https://eur-lex.europa.eu/legal-content/EN/TXT/?uri=COM%3A2018%3A237%3AFIN
European Commission. 2018b. “Commission Appoints Expert Group on AI and Launches the European AI Alliance.” Digital Strategy. https://digital-strategy.ec.europa.eu/en/news/commission-appoints-expert-group-ai-and-launches-european-ai-alliance
European Commission. 2018c. Coordinated Plan on Artificial Intelligence. COM(2018) 795 final. Brussels: European Commission. https://eur-lex.europa.eu/legal-content/EN/TXT/?uri=COM%3A2018%3A795%3AFIN
European Court of Human Rights (Grand Chamber). n.d. L. B. v. Hungary, Application No. 36345/16. https://hudoc.echr.coe.int/eng?i=001-212208
European Union. 2016. Regulation (EU) 2016/679 of the European Parliament and of the Council of 27 April 2016 on the Protection of Natural Persons with Regard to the Processing of Personal Data and on the Free Movement of Such Data (General Data Protection Regulation). Official Journal L119/1. https://eur-lex.europa.eu/legal-content/EN/TXT/?uri=CELEX%3A32016R0679
Hadwick, David. 2022. “Behind the One-Way Mirror: Reviewing the Legality of EU Tax Algorithmic Governance.” EC Tax Review 31(4): 184–201. https://doi.org/10.54648/ECTA2022019
Hadwick, David. 2024. “Slipping Through the Cracks: The Carve-outs for AI Tax Enforcement Systems in the EU AI Act.” European Papers 9(3): 936–955.
Hadwick, David. Shujing Lan. 2021. “Lessons to Be Learned from the Dutch Childcare Allowance Scandal: A Comparative Review of Algorithmic Governance by Tax Administrations in the Netherlands, France and Germany.” World Tax Journal 13(4): 609–645. https://doi.org/10.59403/27410pa
Internal Revenue Service (IRS). 2024. IRS Communication on Data Disclosure. March 10, 2024. https://www.irs.gov/newsroom/irs-communication-on-data-disclosure
Keen, Michael. Joel Slemrod. 2017. “Optimal Tax Administration.” Journal of Public Economics 152: 133–142. https://doi.org/10.1016/j.jpubeco.2017.04.006
Khogali, H.O. S. Mekid. 2023. “The Blended Future of Automation and AI: Examining Some Long-term Societal and Ethical Impact Features.” Technology in Society 73: 102232. https://doi.org/10.1016/j.techsoc.2023.102232
Kleinberg, Jon. Himabindu Lakkaraju. Jure Leskovec. Jens Ludwig. Sendhil Mullainathan. 2018. “Human Decisions and Machine Predictions.” The Quarterly Journal of Economics 133(1): 237–293. https://doi.org/10.3386/w23180
Kuźniacki, Błażej. Mariana Almada. Krzysztof Tyliński. Łukasz Górski. Barbara Winogradska. Rik Zeldenrust. 2022. “Towards eXplainable Artificial Intelligence (XAI) in Tax Law: The Need for a Minimum Legal Standard.” World Tax Journal 14: 573–616. https://doi.org/10.59403/2yhh9pa
Maat, E.P. 2022. “Google v. CNIL: A Commentary on the Territorial Scope of the Right to Be Forgotten.” European Review of Private Law 30(2): 241–252. https://doi.org/10.54648/ERPL2022013
Mayson, Sandra G. 2018. “Bias In, Bias Out.” Yale Law Journal 128: 2218–2230.
McCarty, L. Thorne. 1977. “Reflections on ‘Taxman’: An Experiment in Artificial Intelligence and Legal Reasoning.” Harvard Law Review 90: 837–893. https://doi.org/10.2307/1340132
MIT Future Tech. 2024a. The AI Risk Repository: A Comprehensive Meta-Review, Database, and Taxonomy of Risks from Artificial Intelligence. https://airisk.mit.edu/
MIT Future Tech. 2024b. AI Risk Repository. https://airisk.mit.edu/
OECD (Organisation for Economic Co-operation and Development). 2020. Forum on Tax Administration: Tax Administration 3.0 – The Digital Transformation of Tax Administration. https://www.oecd.org/tax/forum-on-tax-administration/publications-and-products/tax-administration-3-0-the-digital-transformation-of-tax-administration.pdf
Okunogbe, Omowunmi. Victor Pouliquen. 2022. “Technology, Taxation, and Corruption: Evidence from the Introduction of Electronic Tax Filing.” American Economic Journal: Economic Policy 14(1): 341–372. https://doi.org/10.1257/pol.20200123
Passi, Samir. Maria Vorvoreanu. 2022. Overreliance on AI Literature Review. Microsoft Research. https://www.microsoft.com/en-us/research/uploads/prod/2022/06/Aether-Overreliance-on-AI-Review-Final-6.21.22.pdf
Peeters, Brigitte. 2024. “European Law Restrictions on Tax Authorities’ Use of Artificial Intelligence Systems: Reflections on Some Recent Developments.” EC Tax Review 33(2). https://doi.org/10.54648/ECTA2024006
Ranchordás, Sofia. 2022. “Connected but Still Excluded? Digital Exclusion Beyond Internet Access.” In The Cambridge Handbook of Life Sciences, Information Technology and Human Rights. Edited by Marcello Ienca, Oreste Pollicino, Laura Liguori, Portolano Cavallo, Elisa Stefanini, Roberto Andorno. 244–258. Cambridge: Cambridge University Press. https://doi.org/10.1017/9781108775038.020
Ranchordas, Sofia. Luisa Scarcella. 2021. “Automated Government for Vulnerable Citizens: Intermediating Rights.” William & Mary Bill of Rights Journal 30: 373–408. https://doi.org/10.2139/ssrn.3938032
Renda, Andrea. 2024. Towards a European Large-Scale Initiative on Artificial Intelligence. Brussels: Centre for European Policy Studies (CEPS). CEPS Deep Dive. https://www.ceps.eu/ceps-publications/towards-a-european-large-scale-initiative-on-artificial-intelligence/
Renda, Andrea. Pierre-Alexandre Balland. 2023. Forge Ahead or Fall Behind: Why We Need a United Europe of Artificial Intelligence. CEPS Explainer. https://cdn.ceps.eu/wp-content/uploads/2023/11/CEPS-Explainer-2023-13_United-Europe-of-Artificial-Intelligence.pdf
Rizzo, Alessandro. Gauri Hassan. 2024. AI Risk Management in Tax Audits: A Comparative Review of the EU and US Regulatory Approaches. Unpublished manuscript.
Rodrik, Dani. Stefanie Stantcheva. 2021. “Fixing Capitalism’s Good Jobs Problem.” Oxford Review of Economic Policy 37(4): 824–837. https://doi.org/10.1093/oxrep/grab024
Scarcella, Luisa. 2019. “Tax Compliance and Privacy Rights in Profiling and Automated Decision Making.” Internet Policy Review 8(4). https://doi.org/10.14763/2019.4.1441
Sunstein, Cass R. 2021. “Governing by Algorithm? No Noise and (Potentially) Less Bias.” Duke Law Journal 71: 1175–1213. https://doi.org/10.2139/ssrn.3925240
FOOTNOTES
- 1 This article is the result of scientific research carried out as part of the grant from the National Science Centre on “Countering Corporate Income Tax Avoidance and the Administrative Courts Jurisprudence: How Globalization determines Tax Policy” – OPUS 21 (No. 2021/41/B/HS5/00225).
- 2 Despite the enactment of the AI Act, the Trustworthy AI ethical framework remains one of the most relevant standards for ethical AI in the EU.
- 3 Regulation (EU) 2016/679 (General Data Protection Regulation), art. 4(7)–(8).
- 4 GDPR, arts. 5, 22.
- 5 European Court of Human Rights (Grand Chamber), L.B. v. Hungary (Application no. 36345/16).
- 6 The AI Act (Regulation (EU) 2024/1689) is the European Union regulation, which entered into force on 1 August 2024, establishing a harmonised risk-based legal framework for the safe, transparent, and accountable development and deployment of artificial intelligence.
- 7 Court of Justice of the European Union (CJEU), OQ v Land Hessen (SCHUFA Holding AG), Case C-634/21, EU:C:2023:957.
- 8 Art. 35 GDPR.
- 9 Art. 27 of the AI Act.