AI Impact Analysis
Commercial Law

AI Vendor Liability & Indemnification

Technology procurement contracts must now address algorithmic failure modes that standard indemnity clauses never anticipated.

The procurement of AI capabilities increasingly occurs through vendor relationships rather than internal development. Organisations acquire AI tools from specialised vendors, integrate AI services through APIs, and license models trained by third parties. These vendor relationships create complex liability landscapes that standard technology procurement contracts inadequately address. The allocation of risk between AI vendors and their customers requires careful attention to failure modes, accountability structures, and remedial mechanisms unique to AI systems.

The Accountability Gap

Traditional software licensing allocates responsibility based on who controls system behaviour. The vendor controls the software; the customer controls how the software is used. AI systems blur this allocation. The vendor trains the model and defines its capabilities. The customer deploys the model and provides runtime inputs. When the model produces harmful outputs, responsibility attribution becomes contested.

Consider a customer service chatbot licensed from an AI vendor. The chatbot, responding to a customer query, makes a commitment that the deploying company cannot fulfil. Who bears responsibility for the commitment? The vendor did not intend or anticipate the specific output. The customer did not control the chatbot's response. Standard warranty and indemnity provisions may not clearly resolve this allocation, leaving both parties exposed to unexpected liability.

Warranty Structures for AI

Software warranties typically address conformance to specifications, freedom from defects, and non-infringement of intellectual property. AI systems require additional warranty considerations. What accuracy or performance levels does the vendor warrant? How is performance measured, and over what population? What happens when model performance degrades over time as the world changes and training data becomes stale?

The customer seeking strong warranties confronts vendor resistance. AI vendors understand that model behaviour cannot be perfectly controlled or predicted. They resist warranties that would make them insurers of model outputs. The resulting negotiation produces warranties that are narrower than customers want but broader than vendors prefer. Understanding the technical realities that drive vendor positions helps customers calibrate realistic warranty expectations.

Indemnification Scope

Indemnification provisions in AI contracts must address several distinct risk categories. Intellectual property indemnification covers claims that the AI system infringes third-party patents, copyrights, or trade secrets. Given the opacity of AI training processes and ongoing litigation about training data rights, IP indemnification takes on heightened importance. Customers should seek broad IP indemnification with minimal carve-outs.

Data protection indemnification addresses claims arising from personal data processing. When an AI vendor processes personal data on behalf of a customer, regulatory liability may attach to the customer as data fiduciary. Indemnification provisions should address the vendor's contribution to any such liability. The scope of indemnification should track the scope of the vendor's data processing activities.

Output liability indemnification addresses claims arising from AI system outputs. This is the most contested category. Vendors resist unlimited output liability because they cannot control how customers use AI systems. Customers seek coverage because vendor-controlled models produce vendor-caused outputs. Compromise positions might limit output indemnification to specified use cases, cap exposure, or condition indemnification on customer compliance with usage guidelines.

Service Level Frameworks

Service level agreements for AI services must address availability, performance, and accuracy. Availability SLAs follow standard patterns: uptime commitments, measurement methodologies, and service credits for underperformance. Performance SLAs address latency and throughput. Accuracy SLAs, unique to AI, define acceptable error rates and measurement approaches.
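The mechanics of an availability SLA can be made concrete with a short sketch. The tier thresholds and credit percentages below are hypothetical illustrations, not terms from any actual vendor contract; real agreements define their own measurement windows, exclusions, and credit schedules.

```python
# Illustrative service-credit calculation for an availability SLA.
# The 99.9% commitment and the credit tiers are hypothetical examples.

def monthly_uptime_pct(total_minutes: int, downtime_minutes: int) -> float:
    """Measured uptime as a percentage of the monthly service window."""
    return 100.0 * (total_minutes - downtime_minutes) / total_minutes

def service_credit_pct(uptime_pct: float) -> float:
    """Map measured uptime to a credit, expressed as % of monthly fees."""
    if uptime_pct >= 99.9:      # commitment met: no credit owed
        return 0.0
    elif uptime_pct >= 99.0:    # minor breach tier
        return 10.0
    elif uptime_pct >= 95.0:    # significant breach tier
        return 25.0
    else:                       # severe breach tier
        return 50.0

# A 30-day month with 90 minutes of downtime:
uptime = monthly_uptime_pct(30 * 24 * 60, 90)   # ~99.79% uptime
credit = service_credit_pct(uptime)             # falls in the 10% credit tier
```

Even this toy version exposes the drafting questions a real SLA must answer: what counts as "downtime", how it is measured, and whether scheduled maintenance is excluded from the service window.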

Accuracy SLAs present particular challenges. Accuracy measurement requires ground truth against which model outputs are compared. Establishing ground truth may be expensive, especially for subjective or complex outputs. Accuracy may vary across different input populations or use cases. The SLA must specify which accuracy metrics apply, how they are measured, and what remedies attach to underperformance. Vague accuracy commitments provide neither party with clear expectations.
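The point that accuracy varies across input populations can be illustrated with a minimal sketch. The records, segment names, and outputs below are entirely hypothetical; the mechanism is what matters: the same labelled sample can satisfy an overall accuracy threshold while one segment misses it badly.

```python
# Sketch of why an accuracy SLA must name its measurement population.
# All data and segment labels here are hypothetical illustrations.
from collections import defaultdict

def accuracy(pairs):
    """Fraction of (predicted, ground_truth) pairs that match."""
    correct = sum(1 for pred, truth in pairs if pred == truth)
    return correct / len(pairs)

# Each record: (customer segment, model output, human-labelled ground truth)
labelled_sample = [
    ("retail", "approve", "approve"),
    ("retail", "approve", "approve"),
    ("retail", "deny", "deny"),
    ("retail", "approve", "approve"),
    ("enterprise", "deny", "approve"),   # error
    ("enterprise", "approve", "deny"),   # error
]

overall = accuracy([(p, t) for _, p, t in labelled_sample])

by_segment = defaultdict(list)
for seg, pred, truth in labelled_sample:
    by_segment[seg].append((pred, truth))

print(f"overall: {overall:.0%}")            # 67% across the whole sample
for seg, pairs in by_segment.items():
    print(f"{seg}: {accuracy(pairs):.0%}")  # retail 100%, enterprise 0%
```

An SLA that commits to "95% accuracy" without specifying the population, the sampling method, and who labels the ground truth leaves exactly this ambiguity unresolved.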

Audit and Transparency Rights

Customers increasingly seek audit rights to verify AI vendor claims and ensure ongoing compliance. Audit provisions might address model documentation, training data provenance, testing results, and operational monitoring. Vendors resist audit rights that would expose trade secrets or impose operational burdens. Negotiated solutions might include third-party audits, limited scope audits, or certification-based assurance.

Transparency provisions require vendors to disclose information about how their AI systems work. This might include model architecture descriptions, training data characteristics, known limitations, and failure mode documentation. Customers need this information to use AI systems appropriately and to meet their own regulatory obligations. Vendors may provide transparency through technical documentation, implementation guides, or dedicated support resources.

Exit and Transition

AI vendor lock-in presents strategic risks that contract provisions should address. Customers who build workflows around specific AI capabilities may find transition to alternative vendors difficult. Exit provisions should address data portability, transition assistance, and wind-down periods. Where AI systems incorporate customer data in their training, exit provisions should address whether and how that training contribution can be extracted or replicated.

The lawyer negotiating AI vendor contracts must understand both the technical realities of AI systems and the commercial dynamics of AI markets. Standard technology contracting templates provide starting points but require substantial customisation for AI contexts. The contracts that emerge from thoughtful negotiation allocate risk appropriately, set realistic expectations, and provide workable mechanisms for addressing the unexpected. This is the new standard for AI procurement.