When Transfer Pricing Teams Use AI: What Stands Up to Scrutiny?

Transfer pricing practice combines data, text and judgment. When transfer pricing teams or tax authorities use AI, attention turns to how materials were used, how conclusions were formed and how the result can be tested.

AI-assisted work may make the conclusion easier to read while leaving less of the reasoning that produced it visible on the face of the file.

Why does AI use draw closer attention in transfer pricing work?

Transfer pricing (TP) lends itself to AI-supported tools because it combines large data sets, text and recurring analytical criteria, while still depending on professional judgment at decisive points.

Attention does not usually stop at the tool itself. It turns instead to the steps through which source material becomes a transfer pricing position.

Which steps tend to matter most?

Attention sharpens when AI no longer only helps structure, summarise or draft the presentation of analysis, but begins to shape the factual or analytical basis of the position. This may happen where a tool determines which materials are taken forward, narrows the comparables set, condenses varied internal material into a working summary, or collapses several possible assumptions into a single working assumption for the draft.
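One way to keep such shaping steps reviewable is to log every exclusion decision instead of silently dropping items. A minimal sketch of that idea, assuming a simple Python workflow (all names, criteria and figures are hypothetical, not from any benchmarking standard):

```python
# Hypothetical sketch: narrowing a comparables set while recording
# why each excluded candidate fell out, so the step stays visible.

def filter_comparables(candidates, criteria, exclusion_log):
    """Keep candidates that pass every criterion; log the rest with reasons."""
    kept = []
    for company in candidates:
        failed = [name for name, test in criteria.items() if not test(company)]
        if failed:
            exclusion_log.append(
                {"candidate": company["name"], "excluded_for": failed}
            )
        else:
            kept.append(company)
    return kept

# Illustrative screening criteria (assumed for the sketch)
criteria = {
    "independent": lambda c: c["ownership"] < 0.25,
    "positive_revenue": lambda c: c["revenue"] > 0,
}

candidates = [
    {"name": "Alpha Ltd", "ownership": 0.10, "revenue": 5_000_000},
    {"name": "Beta SA", "ownership": 0.60, "revenue": 2_000_000},
]

exclusion_log = []
kept = filter_comparables(candidates, criteria, exclusion_log)
```

The point of the sketch is not the criteria themselves but the discipline: the initial list, the tests applied and the reason each candidate was removed all remain in the file.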

How can the record behind the position be preserved?

A useful discipline is to keep a compact record of how the work moved from source material to conclusion. In practice, that record may identify the reference corpus, the relevant working date, the initial list before exclusions, material exclusion decisions, the principal assumptions and the reviewer sign-off.
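The elements of that record can be held in a simple structure. A sketch of one possible shape, assuming Python; the field names are illustrative, not a prescribed standard:

```python
# Hypothetical sketch of the compact record described above.
from dataclasses import dataclass
from datetime import date

@dataclass
class PositionRecord:
    reference_corpus: str       # which source materials were in scope
    working_date: date          # cut-off date for the data relied on
    initial_list: list          # candidates before any exclusions
    exclusions: dict            # item -> stated reason for exclusion
    assumptions: list           # principal working assumptions
    reviewer_signoff: str = ""  # completed at human review

# Illustrative content (assumed, not from the source)
record = PositionRecord(
    reference_corpus="2023 benchmarking database extract",
    working_date=date(2024, 3, 31),
    initial_list=["Alpha Ltd", "Beta SA", "Gamma GmbH"],
    exclusions={"Beta SA": "related-party ownership above threshold"},
    assumptions=["tested party bears routine distribution risk"],
)
record.reviewer_signoff = "J. Smith, 2024-04-02"
```

Kept alongside the working papers, such a record lets a later reviewer trace the path from source material to conclusion without reconstructing it from memory.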

For Group Tax, preserving that record depends on recognising when an AI-assisted step no longer only supports the presentation of analysis, but begins to shape the position itself. At that point, the relevant source set, filters, exclusions, working assumptions and points of human judgment need to remain visible in the file for later review and defence.

How can AI-supported administrative action be examined?

A related set of questions arises where tax authorities use AI-supported tools for risk selection, prioritisation or analytical support. In that setting, a useful distinction may lie between a risk signal, a working hypothesis and an established fact.

The practical question is how that progression can be identified in the record. This includes where internal selection enters the authority’s case, which features of the transaction or pricing are treated as material, whether the move from internal selection to stated fact can be identified, and what in the stated reasons can actually be tested in the proceedings.
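If a review team wanted to track that progression explicitly, each statement in a case record could carry its current evidentiary status. A hypothetical sketch, assuming Python (the labels mirror the distinction above; the example statement is invented):

```python
# Hypothetical sketch: tagging statements with their evidentiary status
# so the move from selection output to stated fact remains visible.
from enum import Enum

class EvidentiaryStatus(Enum):
    RISK_SIGNAL = "risk signal"                # output of automated selection
    WORKING_HYPOTHESIS = "working hypothesis"  # under active examination
    ESTABLISHED_FACT = "established fact"      # supported by testable evidence

statement = {
    "text": "Pricing of intra-group services deviates from the benchmark.",
    "status": EvidentiaryStatus.RISK_SIGNAL,
}

# Promotion to a stronger status would be a deliberate, recorded step,
# not an implicit by-product of drafting.
statement["status"] = EvidentiaryStatus.WORKING_HYPOTHESIS
```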

The inquiry then turns less on the tool than on how the authority’s position was formed and whether its stated basis can be meaningfully examined.

What does this mean in practice?

What carries weight is not only the conclusion, but whether the file still shows how that conclusion came to be.