As artificial intelligence has taken center stage across industries, does it also have a place in tax administration? Our latest episode of “GILTI Conscience” explores potential applications of AI in the world of tax, including possible risks and opportunities. Hosts David Farhat and Stefane Victor are joined by Washington, D.C. tax partner Eric Sensenbrenner, senior tax advisor De Lon Harris and King’s College London reader in tax law Dr. Stephen Daly to explore this timely topic in detail.
Episode Summary
For tax authorities and even taxpayers, AI promises to make lives easier. At the same time, it carries risks that “cannot be taken out of the system.” Dr. Stephen Daly, reader in tax law at King’s College London, describes this dynamic in a conversation with Skadden partners David Farhat and Eric Sensenbrenner, associate Stefane Victor and senior advisor De Lon Harris. The panel explores the best and worst of the impact of AI in the tax world. Tune in for insights about what AI means for taxpayers and tax authorities alike.
Key Points
- Double-Edged Promise: While AI offers unprecedented efficiency in tax administration through risk management and automated taxpayer assistance, it can amplify existing biases and create systemic problems if not properly managed.
- Managing Mistakes: Despite AI’s potential, tax authorities remain cautious about implementation due to fear of high-profile mistakes being made, creating a tension between the need for efficiency and the risk-averse nature of government institutions that still rely on outdated technologies.
- Regulating Data: Dr. Daly compares and contrasts how jurisdictions around the world regulate AI. The European Union, for example, has its AI Act, which regulates businesses and their use of artificial intelligence. And while Germany has specific rules governing the use of AI by taxing authorities, the United States and U.K. do not.
Voiceover (00:03):
This is GILTI Conscience, casual discussions on transfer pricing, tax treaties and related topics. A podcast from Skadden that invites thought leaders and industry experts to discuss pressing transfer pricing issues, international tax reform efforts and tax administration trends. We also dig into the innovative approaches companies are using to navigate the international tax environment and address the obligation everyone loves to hate. Now your hosts, Skadden partners, David Farhat and Nate Carden.
Stefane Victor (00:36):
Hello and welcome to another episode of GILTI Conscience, a Skadden podcast hosted by Skadden partners David Farhat and Nate Carden and associates Eman Cuyler and me, Stefane Victor. Nate and Eman are unable to join us today, but we’re lucky to have previous guests join us as co-hosts: partner Eric Sensenbrenner and senior advisor De Lon Harris. On today’s episode, we are discussing AI and tax administration. The OECD defines AI as a machine-based system that can, for a given set of human-defined objectives, make predictions, recommendations or decisions influencing real or virtual environments. At best, the growing utilization of AI is projected to streamline and simplify time-consuming processes. At worst, all the historical data, baked with antiquated biases, fed into AI will result in decisions that exacerbate existing social and economic inequalities. In 2020, the OECD’s Forum on Tax Administration published a report, Tax Administration 3.0: The Digital Transformation of Tax Administration.
(01:43):
It lays out a forward-looking blueprint for how tax administrations can and should evolve in response to digital transformation, changing taxpayer behavior and rising expectations for efficiency and transparency. To discuss this, we’re joined by Dr. Stephen Daly. Steve is a reader in tax law at King’s College London and a visiting lecturer at Imperial Business School. He teaches across the gamut of tax law, including international tax, corporate tax, tax administration, procedure and dispute resolution, and EU tax law. He has published widely, including two monographs and more than 40 papers. Steve is currently leading a UK-funded project supported by two institutions focused on exploring practical use cases for AI in tax administration. Steve and David attended the King’s College London conference sponsored by Skadden where the idea for this podcast was formed. Welcome.
Stephen Daly (02:42):
I’m delighted to be here. Thanks very much for having me. Great to see David again.
David Farhat (02:46):
Good to see you as well, Stephen, thank you so much for agreeing to do the podcast. I think let’s dive right in. We’ll give you the floor, any kind of high-level thoughts to kick us off on tax authorities and AI? It’s a brave new world and a very interesting one, especially given Stefane’s intro, which touched on some of the problems that can come about with AI. So we’ll let you kick it off and we’ll dive in.
Stephen Daly (03:07):
Yeah, absolutely. So we could build upon what Stefane had alluded to there. So AI brings about great promise. So from a tax administration perspective, the idea is that their tasks can become more efficient and more effective, and we’re seeing tax authorities using AI in a range of ways to make their lives easier. But that has an impact on taxpayers and can make the lives of taxpayers easier also. But with AI does come risks. There are various risks that cannot be taken out of the system. They are built into the system; they’re necessary to the system. So if you want to exploit the benefits of AI, you’re going to have to deal with and manage the risks that come with AI also. These days, tax authorities are using AI to do a range of things, as I mentioned, but the most significant ways in which AI is being used by tax authorities is in respect of risk management and taxpayer assistance.
(03:56):
And risk management, well, this is just the compliance risk management that tax authorities have always been doing: they try to risk profile taxpayers to determine who’s most likely to underpay their taxes, and then they allocate resources accordingly. What machine learning algorithms can do is do that at scale and much more effectively than is currently the case. So they can find anomalies and inconsistencies in data. In addition, you can benchmark, so you can look at what you’d expect a taxpayer with these characteristics to be paying in tax and then determine whether they are or are not paying taxes in line with that expectation. And then in terms of taxpayer assistance, what we’re seeing is lots of chatbots being rolled out across the globe. The idea is you go to your chatbot with a query and the chatbot will come back with some kind of answer to your query.
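The benchmarking step described here can be sketched in a few lines. This is a deliberately simplified illustration (a single z-score on one feature); real tax-authority risk engines use many features and far richer models, and the field names below are invented for the example:

```python
from statistics import mean, stdev

def flag_outliers(returns, threshold=2.0):
    """Flag taxpayers whose reported tax deviates sharply from the
    peer-group average. A toy stand-in for the benchmarking step:
    real systems combine many signals, not a single number."""
    amounts = [r["tax_paid"] for r in returns]
    mu, sigma = mean(amounts), stdev(amounts)
    flagged = []
    for r in returns:
        # z-score: how many standard deviations from the peer mean
        z = (r["tax_paid"] - mu) / sigma if sigma else 0.0
        if abs(z) > threshold:
            flagged.append(r["taxpayer_id"])
    return flagged

peers = [
    {"taxpayer_id": "T1", "tax_paid": 100},
    {"taxpayer_id": "T2", "tax_paid": 98},
    {"taxpayer_id": "T3", "tax_paid": 102},
    {"taxpayer_id": "T4", "tax_paid": 101},
    {"taxpayer_id": "T5", "tax_paid": 99},
    {"taxpayer_id": "T6", "tax_paid": 5},  # pays far less than comparable peers
]
print(flag_outliers(peers))  # only T6 stands out
```

Resource allocation then follows the flags: the handful of anomalous taxpayers get human attention while the rest are left alone.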
(04:44):
And the IRS is also experimenting with this. It’s got the interactive taxpayer assistant that I’ve played around with, and it’s quite nice. And there are two basic models of chatbot. You could have basic decision tree chatbots, which lots of tax authorities are using, and that’s where you pre-program all the answers. If the question is how many days are you resident in the country? If it’s 50, then you’re not resident. If it’s 330, you are a resident, and so on. So you pre-program the answers. The more sophisticated chatbots will be using large language models additionally, whereby you don’t pre-program the answers; instead, you let the chatbot come up with what the answers might be. A minority of tax authorities are using that sort of technology because it comes with the risk that the AI could make something up. Hallucination is the phenomenon that we see in the literature and that we see people talking about every day.
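A decision-tree chatbot of the kind described here can be sketched as below: every path through the tree ends in a pre-programmed answer, so the bot can never invent one. The questions and thresholds are purely illustrative, not any authority’s actual residency rules:

```python
# Minimal decision-tree chatbot: all answers are pre-programmed,
# so there is nothing for the bot to hallucinate. The residency
# questions and day counts here are invented for illustration.
TREE = {
    "question": "Were you in the country 183 days or more this year?",
    "yes": {"answer": "You are likely tax resident."},
    "no": {
        "question": "Do you have a permanent home in the country?",
        "yes": {"answer": "You may be resident; further review needed."},
        "no": {"answer": "You are likely not tax resident."},
    },
}

def ask(node, replies):
    """Walk the tree using a list of 'yes'/'no' replies until a
    pre-programmed answer node is reached."""
    for reply in replies:
        if "answer" in node:
            break
        node = node[reply]
    return node["answer"]

print(ask(TREE, ["yes"]))        # You are likely tax resident.
print(ask(TREE, ["no", "no"]))   # You are likely not tax resident.
```

The trade-off discussed in the episode is visible even at this scale: the tree can only be as correct as whoever programmed it, but it cannot make anything up.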
(05:35):
And the idea is that LLMs, they can come up with plausible sounding answers, but those answers may actually be completely untethered to reality. And in law that’s very problematic because we see the LLMs making up cases, making up bits of legislation, making up authorities generally. So those are some of the risks. And then when it comes to machine learning algorithms generally, so when you use risk management, one of the problems is whatever you train the system on that will have an impact upon the outputs that the system produces. So if you train your machine learning system on biased data in the first place, you’re going to get biased outcomes. And we’ve got several episodes where we’ve seen that happening in real life in a tax context.
David Farhat (06:17):
So just following up on that little point before we go back. So it sounds like there is a risk that we could just exacerbate the problems we have with tax authorities now by using AI. So you want to solve the problems, but if you have garbage going in, it just recreates those same problems at scale.
Stephen Daly (06:35):
Absolutely, absolutely. So I mean, the most significant example of this is what’s known as the childcare benefits scandal in the Netherlands, the toeslagenaffaire. I’ve probably done an absolutely horrible job of pronouncing that. Something in the region of 26,000 parents were wrongly accused of fraud. And the reason they were wrongly accused of fraud seems to be that the algorithm was trained to pick up on nationality as a key criterion for determining the risk of non-compliance. And so anybody who had dual nationality, or was born or lived in a particular area code, or had a name that you wouldn’t traditionally associate with Dutch people, they were all flagged for fraud. 26,000 parents, and not just flagged for fraud: they also had their benefits retroactively withdrawn. So if there was a suspicion of fraud against you, you had the entirety of your year’s claim taken away on the basis of that prima facie evidence.
(07:32):
So the risks are significant. However, it’s quite obvious when you take a step back how that ended up happening in the Netherlands. Had somebody been exercising proper oversight of the training and also the outcomes that were being produced, they would have noticed that something was seriously wrong. I mean, it was so wrong that there was something in the region of a 94% false positive rate. So out of every 100 people flagged, 94 were wrongly accused of fraud and denied a benefit they were entitled to. Now anybody with a discerning eye looking at that sort of data would know that something was seriously wrong with the system.
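The missing oversight check is, mechanically, very simple. A sketch of the false positive rate computation, using the episode’s 94% figure as purely illustrative input:

```python
def false_positive_rate(flagged, actually_fraudulent):
    """Share of flagged taxpayers who were in fact compliant.
    Routinely computing this on a reviewed sample of flags is the
    kind of oversight that exposes a badly miscalibrated model."""
    false_positives = [t for t in flagged if t not in actually_fraudulent]
    return len(false_positives) / len(flagged)

# Illustrative numbers echoing the episode: 100 flagged, only 6 truly fraudulent
flagged = list(range(100))
fraudulent = set(range(6))
print(false_positive_rate(flagged, fraudulent))  # 0.94
```

The hard part in practice is not the arithmetic but institutional: someone has to review a sample of flags against ground truth and be empowered to act on the result.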
Stefane Victor (08:11):
So I guess how did it end up getting rectified? I’m assuming that many of that 94% of people flagged this issue. And so who then steps in and helps retrain the AI?
Stephen Daly (08:24):
Okay, so I mean, in terms of what ended up happening next, the entire Dutch cabinet collapsed and a new government was formed. There have been various inquiries into how you rectify the situation, and people are going to be paid the money that was not given to them in the first place. How do you rectify it then? You get the computer scientists back in, and you pair them with the tax officials, the subject matter experts. You can’t just have the technologists go in and fix the issue because they don’t really know what the issue is or the magnitude of the issues.
David Farhat (08:53):
So where are a lot of tax authorities now? You mentioned a lot of them are not using the machine learning, they’re using the decision tree. I mean, De Lon, I’d love to hear you chime in as well with your experience with the IRS as to where they might be on the spectrum as well with using AI.
De Lon Harris (09:10):
Thanks, David. And Stephen, this is all very fascinating. I spent an entire career at the IRS, and part of that time was in leadership positions where we were trying to figure out, because of a lack of resources, how we could use AI. And I think that some of the things that you mentioned, like the fraud aspect, pinpointing an entire population of people as potentially fraudulent, are exactly what makes the IRS cautious. They don’t like a lot of negative press, and when the IRS gets press, it tends to be on the negative side, just like we’ve seen recently. And so what concerns me is the IRS being a little too risk averse about stepping into using some of these techniques that could benefit them and could benefit taxpayers. We have seen them use chatbots on a lot of collection work that used to be done by folks on the phone, actually talking to people and working with them to get a collection alternative for unpaid taxes.
(10:20):
And I think they’ve been successful with the chatbots in that way, letting taxpayers do that for themselves instead of spending all that time on the phone, or dropping off because they’ve been on hold for hours trying to reach somebody. So I think the IRS is getting there and moving in that direction, but they are doing it very slowly. And what concerns me right now is we’ve seen a mass exodus of employees from the service. We were talking yesterday about how you could use results of litigation to feed those algorithms to make sure that you’re really honing in on the right issues, that you’re spending time and resources on the right issues. And can AI kind of fill that gap of people that have left the service and are not necessarily there anymore to educate that algorithm?
David Farhat (11:18):
De Lon, that’s a great question. Stephen, have you seen tax authorities get to that point yet? Because it sounds like AI has been pretty effective with things around the edges, minimizing call time. Can we get people to the right folks? Can we get you to the right answer? But Eric, you and I were talking about some of the more complicated things we deal with on a day-to-day basis, especially when it comes to that risk assessment. What should they be auditing? How does one issue lead to the next? And, as De Lon alluded to, we’re in this environment where every tax authority seems to be resource constrained. Have we pivoted into that risk assessment? And I guess a second question is, if we have, in this resource environment, do we have the resources to monitor that to make sure we don’t have these hallucinations or these mistakes?
Eric Sensenbrenner (12:04):
There’s another aspect of that as well that I have a question about whether anyone is doing it at this point, but how exactly are they training the LLMs? Are they using taxpayer data? Do you see ethical or other confidentiality considerations, obviously in the US, but also around the world, in terms of taxpayer personal information? Is there a program in any taxing authority at this point where they are actually using live taxpayer information, combing through returns to develop sort of pattern prediction? Because I would think that that would be a very logical use case for AI.
(12:38):
If I’m approaching an audit and I have a particular profile of a taxpayer, they have this category of transactions, they’re in this particular industry, I would think that that’s a real case where you could see a lot of value add for AI in terms of tailoring what IDRs might look like in the exam process, for instance. Flagging issues for agents to examine further. But I’m really curious about how that develops and are we using taxpayer data sort of the last 10 years of returns, for instance, to train the AI to train the LLM? Or is that not happening yet? Is it just sort of external algorithms that are programmed into it?
Stephen Daly (13:17):
Okay, so they are using taxpayers’ data to train up these algorithms, but so far the use has just been to flag up for audits. So what is the risk of noncompliance by this taxpayer, is it X percent or Y percent? And then if it’s sufficiently high, then the taxpayer will be investigated. That’s the current use of it. Now, could you then additionally train the model on insights that you get from particular types of audits? Yeah, absolutely, absolutely. And where I see AI going is more dealing with the lower-level, simpler cases that you can resolve. So coming back to the chatbots in the first instance, there’s a lot of bad press around chatbots because if you ask the right questions, you can always get them to go wrong. Even with your pre-programmed chatbot, even where the chatbot’s not going to go off and hallucinate, it’ll still get the answer wrong.
(14:11):
It’ll get the answer wrong for several reasons. One, it might get the answer wrong because it’s been fed the wrong answer in the first place. So whoever at the IRS or wherever was the technology specialist inputting the information, they got that wrong. But it can also get it wrong because if the answer is slightly different from what the question is asking, then somebody could be asking a question thinking that they’re getting the right answer in relation to their affairs, but that’s not actually correct in relation to their affairs. But if you applied these chatbots to the vast majority of people’s questions, you’re probably going to get the right answers. In the case of the helplines that are currently being operated, a lot of the queries just come down to quite basic questions that can be answered with basic answers. If you use AI to deal with the majority of mundane tasks, then hopefully you’ll be able to get more sophisticated and well-rested tax officials to deal with the more complicated tax issues.
David Farhat (15:09):
Is that what we’re seeing from tax authorities? Are they using it that way?
De Lon Harris (15:13):
Yeah, I just want to go back a minute to the question that Eric had about whether tax authorities are using past audit information to feed these algorithms. And the IRS has been doing that for several years. The Large Business and International division changed their whole structure and how they’re identifying corporate work and partnership work by doing just that. The disclosure part is not really an issue because all the data that’s used in identifying those returns is stripped of any taxpayer identifying information. It’s mostly just the numbers of what’s on the returns and the audits. So the IRS has been doing that. But in thinking about other things, like you talked about, Eric, whether they can use that for IDRs: we talk all the time here at Skadden about how it seems like the IRS uses the same IDRs for every audit, that the revenue agents are not necessarily technically trained in issues or don’t know what’s going on out there in the real world for specific industries.
(16:24):
And I think that the IRS, if they’re smart, is going to have to devote some time and resources, especially with what they’ve lost, into using AI for some of those things, like preparation of IDRs for specific taxpayers, and taking some of those risks that they wouldn’t normally take to build those IDRs and to make sure they’re not just going out there with a cookie-cutter IDR that doesn’t even fit the facts and circumstances of a particular taxpayer. But David, sorry, you did ask what we’re seeing at the IRS, but I just wanted to go back and touch on-
David Farhat (17:06):
No, really, really appreciate that context, De Lon, that’s helpful. The question directly was kind of, Stephen was talking about using AI to take away some of the routine and mundane tasks so that the people can focus on the difficult ones. Is that kind of the pattern we’re seeing at the IRS and other tax authorities?
De Lon Harris (17:24):
I think that’s what they really got into this for. When they started using this for selecting corporate and partnership work back in 2019 at the IRS, it was resource driven. Even though there’s this byproduct of making it easier for the taxpayer if they’re only focusing on those issues that are high risk, it was purely resource driven. They had few resources. They needed to figure out how they could audit the most productive issues. So that’s the reason they stepped into it, and now it’s more important than ever that they continue on with that, because resources are a problem for them again. So, yeah.
Stefane Victor (18:09):
We’ve talked a bit about the resource benefits of AI for tax administrators. What are the benefits for corporations, or I guess for taxpayers?
Stephen Daly (18:22):
This is building again upon what De Lon just said. So I’ve got the stats on how bad the audit rates were in the US. This came out of the 2023 strategic operating plan from the IRS.
Stefane Victor (18:32):
Wait, did you use AI to get those stats? Stephen, are you AI? He didn’t say no.
Stephen Daly (18:48):
I’ve been trained to never say no.
David Farhat (18:48):
Or yes as a matter of fact.
Stephen Daly (18:59):
No, no, no. So I picked this straight out of the plan, and the audit rates were really, really low. So for large corporates, it was a 10% rate in 2011, and that was down to 1.7% in 2019. For partnerships, the rate was 0.05%. And then the audit rate of high-net-worth individuals, well, individuals with earnings above a million dollars, was about 0.7% in 2019, and that was down from 7.2% in 2011. I don’t think I’ve ever seen a tax authority come out and say just how bad the audit rates are. And the reason why you’d never usually publicize your audit rates is because you’ve got basically two types of taxpayers in the world. One type of taxpayer are those that will generally pay, and they don’t need the coercion to do so. And then there’s the other group of taxpayers who don’t really want to pay taxes, and you need to coerce them into doing so.
(19:50):
By telling taxpayers that the audit rate is going to be about 1%, you’re giving a big incentive for those people who don’t naturally want to comply to not comply with the rules. So it just shows how drastic things were that the IRS took the step of signifying just how limited its resources were. And to my mind, it is good for corporates, for wealthy individuals, for partnerships, that the IRS is getting more funding, but also that the IRS will be able to use AI to take away some of the routine, mundane tasks. As I said, that leaves you with tax officials who can be better trained, more competent, more well-rested, able to deal with the niceties of the businesses that they’re trying to audit. I think that itself is going to be a good thing. But then more specifically, we do have ongoing problems in relation to information requests. I did a survey with the Institute for Fiscal Studies in the UK a couple of years ago, and we asked large businesses, why are tax disputes taking so long?
(20:51):
And one of their chief complaints was, “Well, we end up getting asked to provide all this information to the tax authority. We provide it, we hear nothing back, and then we get asked for another tranche of information, and we’re never told what they are looking for.” So instead of having these pro forma requests, you could use AI to make the process act almost like a conversation between the taxpayer and the tax authority. So you might have a bunch of information that the tax authority wants, and when they get that information, they’d be able to see anomalies, inconsistencies, or things that they think need to be researched even further.
David Farhat (21:27):
Let me push back on that a little bit, Stephen, because I think I agree with the problem. You’re in this back-and-forth with the tax authority, and I know we’re picking on the IRS here a little bit, but I think this is a universal problem with a lot of tax authorities, where you’re getting these information requests back and forth. And a lot of taxpayers that I talk to say, “Well, can we just have a conversation with them? Can we just talk?” And there seems to be this reluctance to talk, when in cases where we’ve been able to talk, we’ve been able to kind of get over the hurdle. But if we’re talking about training AI, and the way you train it can exacerbate the problems, and we now have a tax authority, understandably resource constrained, whose patterns and procedures are more of, “Okay, no, no, no, we’re not going to talk to you. Answer our questions and then we’ll talk to you.” Wouldn’t AI just exacerbate that problem if the mindset doesn’t change toward having real communication with taxpayers?
Stephen Daly (22:21):
Okay, so you’ve touched upon a perennial issue here, a perennial and fundamental issue, which is this kind of inherent conservatism. The reason why the tax officials don’t want to get into the room with you is because they don’t want to make a mistake. The reason why they keep asking for more and more information and elongating the dispute is again, because they don’t want to-
David Farhat (22:38):
100%.
Stephen Daly (22:38):
They don’t want to make a mistake. And honestly, to embrace the capabilities of AI, you need to accept that mistakes will be made. So in the case of a chatbot, if a chatbot gets the answer wrong and a taxpayer ends up underpaying their taxes, well then you just let them off the hook. But that would mean allowing taxpayers to make mistakes. And not taxpayers that are abusing the system or something like that.
David Farhat (22:58):
Of course.
Stephen Daly (22:59):
An honest taxpayer that comes, asks a question, gets an answer, files a tax return on that basis, that should just be the end of it. But that would take a change of mindset, to be able to accept that yes, mistakes will be made and we’re comfortable with that. And it would be the same thing when it comes to the LLMs. If you have a small, conservative tax authority which is using an LLM to ask questions, again, because they’ll never want to reveal their hand, they’re going to continue to ask for blanket information, blanket information, blanket information.
David Farhat (23:30):
So then the problem isn’t necessarily an AI one, it’s a philosophy one. If we fix the philosophy, we can fix the problem, and AI will kind of help with that.
Stephen Daly (23:41):
Well, in a sense, none of these things are actually AI specific. Somebody has said to me, “We call it AI now because we can’t quite put a finger on what it is, but in the future, we’re just going to refer to it as software.”
Stefane Victor (23:53):
So I have a question. This is touching back to something we discussed earlier about what information is really being used and how it’s being stored. So I guess we talked about risk profiles of taxpayers, but where is that information coming from? Is it from results of audits? De Lon asked a question of is it at the end of litigation? The final adjudication on a matter, is that being fed back into the AI to better determine a risk profile?
Stephen Daly (24:24):
So the information that’s being used comes from everywhere. So it’ll come from taxpayers themselves, it’ll come from their employers, whatever information is shared generally with tax authorities. They can also look at information that comes from another tax authority, whatever has been exchanged. They can look at information which has been given to them by banks, by insurance companies. They can also use the web; they can just scrape information off the web. So if you’ve got any public databases, like an electoral registry, maybe a database that holds information about vehicle insurance, a property register, all these bits and bobs of information, they’re all filtered into the system, in addition to social media websites. So if you are shown on Facebook with your nice flashy Lamborghini, and then the IRS takes a look at it and says, “But you’ve only declared $40,000 of income this year, where’d you get the money for the Lamborghini?” That is the sort of anomaly that gets flagged up by the system. So the information-
David Farhat (25:20):
You rent those things for the pictures, Stephen.
Eric Sensenbrenner (25:22):
Yeah. I manage my money very poorly.
Stephen Daly (25:32):
If you’re flying all over the world, it’s very clear that the money’s coming from somewhere. And also if you’re saying, “Oh, I’m not resident in the US.” And yet we see photos of you in Disneyland for 200 days of the year, you probably are a resident in the US.
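The kind of mismatch described above, a declared income that cannot support observed spending, reduces to a crude cross-source check. The 1.5x threshold below is an invented illustration, not any authority’s real rule:

```python
def spending_anomaly(declared_income, observed_spend, ratio=1.5):
    """Flag when visible spending (scraped registries, insurance data,
    social media) far exceeds declared income. Real systems weigh many
    signals; this single-ratio version is purely illustrative."""
    return observed_spend > declared_income * ratio

# $40,000 declared vs. a Lamborghini-sized outlay gets flagged
print(spending_anomaly(40_000, 400_000))   # True
print(spending_anomaly(80_000, 60_000))    # False
```

The interesting engineering problem sits upstream of this check: linking the scraped registry, insurance and social media records back to the right taxpayer in the first place.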
David Farhat (25:49):
To pick up on that a little bit, where have you in your study, Stephen, where have you seen some of the best uses of AI? Which tax authorities have done some really good things and what have, or just examples of some really good things tax authorities have done?
Stephen Daly (26:03):
Well, the most interesting jurisdiction I’ve come across is Brazil, because Brazil seems to be starting from quite a low base, and they’ve fully embraced the technology. And so everything is AI-led. So when it comes to the chatbot that they’re using, they’re very happy to use a machine learning chatbot as opposed to the more rule-based chatbot. Other than that, as I said, tax authorities are principally just using AI at the moment for these two things, with little bits and bobs here and there, but it’s chatbots and, in particular, the risk management systems.
(26:35):
80% of tax authorities are using data analytics, or machine learning analytics if you want to call it that, in some capacity for risk, and 60% of them are using chatbots. So I mean, lots of tax authorities are doing the same thing. And I think what we’re going to witness is the more uses that tax authorities can devise for AI, the more other countries are going to get in behind, because there is a conservatism that you see, and nobody wants to be the first to make the big mistake when it comes to all this. We are seeing this also in the private sector when it comes to the use of chatbots and LLMs with regards to legal databases.
Stefane Victor (27:12):
So I just have a transfer pricing question. We talked about the fact that AI can use information from everywhere: from taxpayers, from other jurisdictions, from publicly available sources and non-publicly available sources. What does that mean for the development of comparables? It seems AI might have a leg up, an ability to develop its own set of comparables, and that might be included in its flagging of a taxpayer’s risk profile.
Eric Sensenbrenner (27:44):
Wouldn’t that be a classic use case for AI? I would think that development of comp sets would be a classic use case.
Stefane Victor (27:52):
But would that be problematic given that it’s operating on non-public information?
David Farhat (27:57):
Yeah, like secret comparables. The OECD kind of went after governments a little bit for using non-disclosed comparables, especially in the MAP process and things of that nature, and even in the audit process. Could that, combined with the risk assessment, cause some problems? I think that’s a very interesting transfer pricing question.
Stephen Daly (28:16):
The answer is probably that you’d need to change the law in order to accommodate that. But once you change the law to make it okay to do so, I don’t, I mean, I’m not a transfer pricing lawyer, so I don’t have any skin in the game, but I don’t really see a problem with using constructed comparables as opposed to the comparables that we currently use, which often, or at least some of the time, will not be particularly good comparables for the business that you’re trying to compare. And it’s nothing to do with businesses being dodgy or anything like that; sometimes it’s just very difficult to find the comparables.
David Farhat (28:49):
100%. I think the issue isn’t so much the value or the quality of the comparable. It’s about having an even playing field. And I think, Stefane, this is your point, keep me honest if I’m missing it: AI has access to so much more than the taxpayer may have access to. And if it comes up with its comparable, the taxpayer doesn’t have the ability to do that analysis.
Eric Sensenbrenner (29:11):
Let me just test that. How is it that AI has the ability to access more information than the taxpayer has? The taxpayer would be running AI as well, presumably. I was thinking of it from the standpoint of the government’s perspective. So you’ve got a taxing authority that knows it always gets this deal in MAP, or in competent authority, on this type of a licensing arrangement; it’s in this range. They certainly have the full set of information of all taxpayers. I’m just questioning how uneven that is.
David Farhat (29:43):
Well, the taxpayer doesn't have access to that data, because that's taxpayer information.
Eric Sensenbrenner (29:47):
Correct. But the taxpayer presumably has access to a lot of publicly available information that AI could be scraping from financial statements from other sources, as you were indicating, Stephen. I would think that you’ve got taxpayers running AI, you’ve got government running AI. In theory, you would think that there should be a certain amount of lowest common denominator. You’re going to arrive at probably the broadest set of data points possible.
Stefane Victor (30:15):
I would imagine there would be a discrepancy in the information that each AI has. So the taxpayer's AI wouldn't have access to all the various taxpayers' information that the taxing authority's AI would have. So yeah, I don't think that-
David Farhat (30:33):
The taxpayer's AI will not have access to other taxpayers' information. So I think, to your point, the government can say, we've run all of these MAP cases, we've done all of these audits, we've settled all these cases, and they come into this particular range. That bit of information the independent taxpayer wouldn't have, because their AI doesn't get to go through returns and all of that kind of stuff, which I think creates an uneven playing field. And it may be antithetical to transfer pricing in and of itself, because transfer pricing is supposed to be about the functions, assets and risks of each individual taxpayer.
Eric Sensenbrenner (31:04):
Necessarily within your experience field. So-
David Farhat (31:07):
100%.
Eric Sensenbrenner (31:07):
Something outside of their arm's-length parties would not, by definition, take that into account.
David Farhat (31:12):
100%. So I think I take Stefane’s point, I don’t know if the governments are necessarily advantaged by that or if they just have more information to create more data points that may not necessarily be or should not be in the realm of consideration in a transfer pricing case.
Stefane Victor (31:30):
I mean, imagine the cases where there are hefty penalties imposed. Taxpayers are operating with much less information, and a taxing authority could come and say, "Well, we actually have perfect information, and here's a 75% penalty." For those who listened to our Brazil episode.
David Farhat (31:51):
Yeah, exactly. No, but I think Stefane's question goes into the risk aspect of AI, and AI doing the more complicated things as opposed to the mundane and the routine, which I think is reflected in your point, Stephen, about how careful tax authorities seem to be as they're doing this, because no one wants to make that big mistake with the complicated stuff. But that being said, what are the rules of the road right now for developing AI?
Stephen Daly (32:16):
So what I've generally seen is quite a light-touch approach that legislatures have taken to the use and development of AI by tax authorities. I mean, I only look at tax authorities, but it's more general than just tax authorities. So you'll have some sets of regulation which generally apply but aren't specific to artificial intelligence. We've got rules around data in the European Union, the General Data Protection Regulation. We also have rules more generally about human rights to encourage [inaudible 00:32:47] to people's private lives. And data, whether tangible or intangible, is something that people feel quite strongly about. And so if you use people's data in an inappropriate way, then that may be a breach of their fundamental rights, and so on. The EU then separately has also introduced the AI Act.
(33:07):
The AI Act is about regulating businesses and their use of artificial intelligence. It's not really directed towards public authorities, except when they're using profiling in a criminal context. But outside of that, the AI Act does not really apply to public authorities and doesn't really apply to tax authorities. The same goes for the General Data Protection Regulation: if you look at the exemptions, tax authorities are more or less completely exempted from the stricter rights that are provided by that regulation. So we've looked at those kinds of regional legal frameworks. Then, at a specific level, in the US and the UK, there's nothing specifically regulating how tax authorities use and develop artificial intelligence. That can be contrasted with Germany. It's not that Germany has a really strong framework for this, but in Germany, you do have specific regulations. So you have legislative provisions in the German fiscal code expressly enabling the tax authority to use artificial intelligence for the purpose of risk management, but also setting up conditions for when it is used, such as conditions around testing.
(34:18):
However, when it comes to the UK and the US, there are no provisions expressly saying that tax authorities need to be testing their AI regularly to ensure accuracy and so on. And I think that's a problem. I don't see why countries wouldn't just follow the example set by Germany and put in legislation saying yes, we're happy for our public authorities to use artificial intelligence, but when they do so, we would like them to ensure that they're properly testing it, including random testing and random sampling to check the accuracy of the model. So when I mentioned the 94% error rate in the Dutch child benefits scandal, that's not okay, but you're going to have some kind of error rate.
(34:59):
And this bears repeating. As it currently stands, we audit people who do not owe tax. There are always false positives, and in fact, you want false positives in a tax system. You want some people to be audited who do not in fact owe tax; you don't want it the other way around. You don't want to be missing people who do in fact owe tax. That's systemic under-collection of taxes. What's critical, however, is that you ensure that the error rate is appropriate, so that it's not something like 94%. What is appropriate? I'm happy to say that can differ from jurisdiction to jurisdiction, but a legal rule saying that you need to be cognizant of the error rate is something that I think most jurisdictions should introduce.
David Farhat (35:36):
And, De Lon, in your time with the IRS, did you see any particular rules or guidelines for how folks were using and training AI?
De Lon Harris (35:45):
I'd like to say yes, but like the rest of the world, the IRS is really no different in waiting for the world to catch up and the laws to occur, and in making sure that they are not disclosing taxpayer data or that data can't be leaked. And I've been sitting here thinking about how we talk about the use of AI in the selection of tax returns for audit, and in assisting taxpayers with questions or collection issues they might have. And I started out saying the IRS is a very conservative and risk-averse organization that still uses the fax machine, because they feel like email might be a little too risky and could lose taxpayer data out into the universe.
(36:33):
So you've got to think about that. The IRS needs leaders that are willing to take some of those risks to move the organization into the 21st century and beyond, and use some of these techniques. And that's hard, to get leaders to take that sort of a risk. They don't want to be the one that's going to be the face of the organization, or the person that's the fall guy, if something big does occur. When you talk about what happened in the Netherlands, for instance, somebody's going to lose their job. Nobody wants to take that risk.
David Farhat (37:10):
No, I hear you. It's an interesting topic, because I think most taxpayers would align with the goals of the use of AI: making things more efficient, making things move faster. But I think where the heartburn comes in, and I speak for myself a lot here, is that we're in a time where governments and tax authorities have access to more information than they ever have. And I think the results of that have been frustrating for some taxpayers, because there are questions around, "Well, how are they using the information? Are they using it appropriately? Are they chasing red herrings all the time? What's happening there?" And I think that gives folks heartburn with a tool that's able to gather up and process even more information.
(37:50):
With the example of the Netherlands, I think that scares people even more. And I think we have to live with the fact that mistakes happen. Mistakes are happening now as [inaudible 00:38:00], but I think we're more comfortable with them because we're comfortable with people making mistakes, and getting to that right mistake rate might be the thing to do. But Stephen, not just outside of tax, but within the tax system, we've talked about some of the potential benefits to taxpayers and some of the potential benefits to tax authorities. Are there any kind of knock-on benefits to society as a whole, or non-tax issues that come from...
Stephen Daly (38:29):
I am so glad you asked this question, David, because, as Stefane mentioned in the introduction, I've been provided with some funding from the Leverhulme Trust and the British Academy in the United Kingdom to look into some of the use cases of AI in tax administration, not just from the perspective of efficiency and effectiveness, but looking at broader ideals such as democracy, legitimacy and the rule of law. So at a basic level, if tax authorities are able to do their jobs better and people have more information about their rights and obligations, that is good from the rule of law's perspective. So the more we can use AI to help taxpayers and tax authorities get the law correctly applied in the first place, the better; and it is a good thing to advise people on their legal rights and obligations.
(39:13):
But looking a little bit more broadly than that, I'm with you in terms of the power of tax authorities having increased in recent years. At a basic level, they have more information than they ever had before, and with information comes power, specifically and very clearly in a tax context. So should we be revisiting some of the rules that we had accepted in a paper age? Are they still appropriate for the digital age? This is something that I want to look into, and I'd love to survey taxpayers to find out if they think it's still acceptable to have various rules around tax audits which were designed for a paper era. So, for instance, in the United Kingdom, we can have an audit into a taxpayer's tax return up to 20 years in the past, and there is no default limit on how long an audit can take.
(40:03):
So an audit can technically be indefinite. The only way in which you can close an audit is either if the tax authority decides to close it or you ask a tribunal to force the tax authority to close it. Otherwise, there is nothing stopping HMRC from having a tax audit take place indefinitely. And as I said, you can look into a taxpayer's affairs up to 20 years in the past if there is a suspicion of fraud. These rules definitely made sense to me in an era where tax investigations were very, very cumbersome, when they were paper-based, when you were relying upon the facts and documents that you found, and when you were looking at bundles and bundles of documents. An individual is going to take a very long time to look through and sort through all that information. Also, when it comes to figuring out if somebody has been fraudulent with their tax return, it was very, very difficult in a paper age to find out that information.
(40:53):
You're kind of relying upon whistleblowers and obvious mistakes being made, or obvious instances of fraud. These days, with AI, it's much easier for tax authorities to detect that sort of information. So what I'd love to do is survey taxpayers and find out, "Do you still regard these rules as legitimate? You accepted them as legitimate before, but do you still regard them as legitimate?" And I think it's possible to use AI to do the survey of taxpayers, and by doing so, by incorporating people more into the tax system, you can increase participation.
David Farhat (41:29):
That's fascinating, because I think that goes to exactly the problem that we're talking about. If you can use AI to get taxpayers to express their frustrations with the current system, and express what they'd like to see in the current system, that would then help with training the AI to be able to operate the way we need it to operate. I think that's an interesting one. So stepping back from using AI in tax authorities and using AI for risk assessment; now let's step back and use AI to see if we can gather some intelligence on what the preferred policy is.
Stephen Daly (42:01):
And it may be impossible to do what taxpayers want; however, we could at least take it into account. So get the policymakers at your treasury and at your tax authority to take it into account and explain why some things are necessary, why some things they will think about, and why some things are simply impossible. That would be great. I'd be very happy with that.
Stefane Victor (42:19):
This might be an unanswerable question, but is there an opportunity for taxpayers to be trained to provide more AI-friendly returns or change the way that they provide information such that they streamline the process and their risk profile is, I guess, lowered?
Stephen Daly (42:37):
I can see two ways of doing this. One is where you actually just take taxpayers out of the system entirely, and that's what the OECD has thought about with Tax Administration 3.0, whereby nobody submits a tax return, all the information is sent directly to the tax authority, and the taxpayer never has to do anything.
Stefane Victor (42:54):
So from employers, but also Instagram and TikTok.
Stephen Daly (42:59):
Yeah, well, yeah, that too. That too. But they envisage even corporate tax returns being unnecessary, so that a business's information would be transmitted directly, with transactions coming through the register and so on. We're years away from something like that happening. But in the shorter term, can people be trained with AI to make it less likely that they would be audited? The answer is that this is partially going to happen anyway. So in the UK, we're already rolling out quarterly returns for individuals, and they have to use particular software in order to submit their quarterly returns. So in a sense, the technology is already taking over, and it is becoming a mandatory feature of the tax system. There is going to be an education deficit at the beginning, but the end point is going to be taxpayers using the technology to ensure that compliance is as high as possible.
David Farhat (43:50):
That's really interesting, and this has been a learning episode for me, especially being a bit of a Luddite myself, not being into technology. But Stephen, thank you so much for joining us. We've said everything. Thanks, everyone. This has been another episode of GILTI Conscience. Thanks so much for joining us.
Voiceover (44:11):
Thank you for joining us for today’s episode of GILTI Conscience. If you like what you’re hearing, be sure to subscribe in your favorite podcast app so you don’t miss any future conversations. Skadden’s tax team is recognized globally for providing clients with creative and innovative solutions to their most pressing transactional, planning, and controversy challenges. Additional information about Skadden can be found at skadden.com.
Listen here or subscribe via Apple Podcasts, Spotify or anywhere else you listen to podcasts.