Negotiating with an AI Provider: Some useful tips for integrating AI into your product/service

Written by Niall Doyle
May 21, 2025

Since the launch of ChatGPT in November 2022, we have seen an explosion in companies of all types looking to integrate AI models (of which Large Language Models (“LLMs”) are a particular type) into their product and service offerings.  In this article, I will examine some important issues to consider when evaluating third-party AI providers for your AI products or features.

In my experience, there is a sense of collaboration in our discussions with both reputable AI vendors and customers.  Contracting for and governing generative AI technology is an emerging area for everyone, so don’t hesitate to discuss the emerging issues with your vendors or customers.  Ultimately, as legal teams on both sides of the negotiation, we should want to enable our businesses to harness this ground-breaking technology in a way that accurately describes the parties’ rights and obligations and appropriately allocates the risk of using these AI tools and products.

1. Engage with an AI Provider directly or via a third-party Service Provider?

You can use AI models provided directly by the AI model developer/provider itself via API (the most obvious examples being the well-known LLM providers), or you can use one of the cloud service providers that host some of these third-party AI models alongside their own integrated technology and security layers to deliver access to similar AI services - for example, the OpenAI models available on Microsoft Azure OpenAI Service, or the Anthropic models available via AWS Bedrock.

In both Azure and Bedrock, the AI model is hosted within the respective third party’s infrastructure, and no input is shared with the AI model provider.  Customers may prefer to have their data stored with what is seen as a more trusted and mature cloud service provider, since nearly all security and legal teams will be familiar with Microsoft and AWS.  In addition, using a service provider can be beneficial for customers that need their data hosted in a certain region, although OpenAI has announced regionally hosted versions of its models.
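To make the distinction concrete, here is a minimal sketch of the two routes in Python (the model IDs, region and prompts are illustrative only):

```python
import json

import boto3
from openai import OpenAI

# Route 1 - direct engagement: your input is sent to the AI provider's own
# API and is governed by the provider's terms.
openai_client = OpenAI(api_key="...")  # key issued directly by the AI provider
direct = openai_client.chat.completions.create(
    model="gpt-4o",  # illustrative model ID
    messages=[{"role": "user", "content": "Summarise this clause: ..."}],
)

# Route 2 - via a cloud service provider: a similar call, but the model is
# hosted inside AWS infrastructure (Amazon Bedrock) under the cloud
# provider's service terms, and no input is shared with the model developer.
bedrock = boto3.client("bedrock-runtime", region_name="eu-west-1")  # illustrative region
hosted = bedrock.invoke_model(
    modelId="anthropic.claude-3-5-sonnet-20240620-v1:0",  # illustrative model ID
    body=json.dumps({
        "anthropic_version": "bedrock-2023-05-31",
        "max_tokens": 256,
        "messages": [{"role": "user", "content": "Summarise this clause: ..."}],
    }),
)
```

Note how the second route lets you pin the hosting region in the client itself - often the very point your customers will care about contractually.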

On the other hand, signing up to these third-party services will require careful review of any additional obligations or policies imposed by the third-party service provider - typically AI-service-specific terms, as well as varying forms of acceptable use policy (“AUP”) or principles that must be applied to your use of their AI services.  You should carefully consider whether these need to be passed through to your own customers.

2. Open-source models

Increasingly powerful open-source AI models are an important part of the development landscape for companies considering the use of third-party AI in their own products/services.  One of the distinct advantages of open-source is the ability to operate the model from within your existing cloud infrastructure, potentially avoiding the need to send data outside that environment to an AI provider that may not have equivalent security provisions or may not offer regional hosting of your data.
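As a rough illustration, here is a minimal Python sketch of running an open-weights model entirely within your own environment using the Hugging Face transformers library (the model choice is illustrative; check its license terms, as discussed below):

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

# An open-weights model downloaded into, and run entirely within, your own
# cloud environment - no input or output leaves your infrastructure.
MODEL_ID = "mistralai/Mistral-7B-Instruct-v0.3"  # illustrative Apache-2.0-licensed model

tokenizer = AutoTokenizer.from_pretrained(MODEL_ID)
model = AutoModelForCausalLM.from_pretrained(MODEL_ID, device_map="auto")

prompt = "Summarise the key obligations in this clause: ..."
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=128)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```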

While the attributes of traditional open-source software are well understood, there is ambiguity regarding what constitutes open-source AI, and what level of access (to things like model weights, model architecture and training data) is required for a model to be considered truly open-source.  As is usual with open-source, check the underlying license to ensure you can comply with any attribution requirements, and that there are no unusual restrictions on using the model for commercial purposes or requirements to comply with acceptable use terms, which might call the open-source nature of the license into question.

Open-source models may also benefit from the (somewhat limited) exception to full regulation under the AI Act, provided the model is released under a truly free and open-source license and the model architecture, weights and parameters are made publicly available.

3. Rights in your/your customer data 

In most cases, as between you and the AI provider, you will retain all ownership in and to the input and output, although the extent to which intellectual property rights are created in AI-generated output is still the subject of debate and interpretation and can vary widely across jurisdictions.

Most AI providers now offer enterprise-level customers the option to opt out of having their input data and output used for AI model training.  In practice, if you are heavily reliant on a certain AI provider for delivery of your product, it may be helpful to memorialise this commitment in your contract or Order Form with the AI provider, as your customer may want a back-to-back commitment on this basic provision.

4. Restrictions on your Output

If you use an AI provider to help deliver services or products, you should carefully consider any restrictions placed on your (and your customers’) ability to use the AI-generated output for other purposes.

AI providers may restrict you from using the output generated from their models to develop a competing product or service.  In the case of a large AI service provider, the potential scope of what can amount to a competing product is extremely broad.  Make sure you understand your potential future use cases for output, beyond just providing your product, and seek appropriate carve-outs to maintain the value in that output.

5. Security measures

In addition to the security work done by your IT or Infosec team before engaging an AI provider, there are a couple of issues that will frequently arise during discussions with your customer.

Zero data retention: Most AI providers and third-party service providers offer their enterprise-level customers the ability to request some form of zero data retention policy, whereby the customer’s input is not permanently stored by the AI provider but is deleted once it has been used to deliver the output.  Equally, the output is deleted after it has been delivered back to you.

This has proved to be a useful reassurance for customers that may be concerned (although their number is steadily dropping) about data security and integrity when their data is sent to an AI provider for processing.  If you opt for zero data retention with a third-party service provider, you may have to trade off zero data retention against other features - for example, on Azure, if you opt for zero data retention you will not be able to avail of the human review element of the Abuse Monitoring service.
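Note that zero data retention is agreed at the account or organisation level with the provider - it is not something you toggle per request.  That said, some provider APIs expose narrower, request-level storage controls; for example, OpenAI’s Chat Completions API accepts a “store” parameter governing whether a completion is retained for later retrieval.  A minimal sketch (not a substitute for a contractual zero data retention commitment):

```python
from openai import OpenAI

client = OpenAI(api_key="...")

response = client.chat.completions.create(
    model="gpt-4o",  # illustrative model ID
    messages=[{"role": "user", "content": "Review this customer input: ..."}],
    # Request-level storage control: do not retain this completion for
    # later retrieval.  Distinct from - and no substitute for - a
    # contractual zero data retention commitment with the provider.
    store=False,
)
```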

Safety classifiers: Another frequent topic of discussion is the type of safety classifiers employed by the AI provider as a means of monitoring API traffic to detect potential violations of acceptable use.  Typically, you will want to confirm with the AI provider that this monitoring does not involve analysis of the actual input content, and you will want to understand whether any of your input data may be stored if a safety classifier is “triggered”, which could create an exception to the zero data retention policy.

6. Pass through terms

There are some typical terms in your agreement with an AI provider that you need to keep in mind when considering the terms you have in place with your own customers.

  • Output: The current standard is that you will be responsible for evaluating the accuracy and appropriateness of the Output (including via human review) before relying on it.  If you are integrating an AI model into your product, you need to be careful about what you promise the end-user in terms of reliability of the Output.  Of course, this position could change in the future as confidence in AI models, and their ability to avoid unintended consequences or hallucinations, grows.
  • AUP: AI model providers have all released AUPs that address some of the specific risks associated with the use of AI.  In addition to the usual restrictions, you will see provisions that broadly align with current regulatory trends (although they may not be limited to the risks identified in the EU’s AI Act), restricting uses of AI models that could be considered “high-risk”.  Typically, this will include use of AI in sectors where advice is regulated - financial, health, legal - or in areas where the output of the AI can have potentially significant “real-world” consequences for the user, like access to education or employment.  You’ll need to consider updating your own AUP to ensure your downstream customers are aware of, and agree to, any restrictions specific to their use of your AI product.
  • Transparency: The general requirement (replicated to varying degrees in a number of international regulations) is that an end-user understands when they are interacting with an AI system.  This can have an impact on the design of the product’s user interface.  If your product is customisable and its appearance can be changed, consider passing the transparency obligation through to your customers under your agreement with them.
  • Restricted country lists: Given the potential power offered by AI technology, there are likely to be restrictions on where your product (incorporating the AI model) can be offered, as well as on the locations from which end-users are able to access your product.  You should carefully review such restrictions, as they may be broader than traditional restricted country lists and could incorporate markets in which you are currently active.  In that case, you may wish to consider whether your product actually gives users in those restricted territories access to the underlying model itself.  If not, you may be able to argue that you are not making the AI model itself available in a restricted country and should be allowed to continue to sell into that territory (a simple enforcement sketch follows this list).
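By way of illustration, a restricted-country list is often easiest to enforce at your product’s access layer, gating only the AI-backed features rather than the product as a whole.  A minimal Python sketch (the country codes and the lookup function are hypothetical placeholders - substitute the actual list from your provider’s terms):

```python
# Hypothetical enforcement of an AI provider's restricted-country list at
# your product's access layer.  The codes and the lookup are placeholders.
RESTRICTED_COUNTRIES = {"XX", "YY"}  # ISO 3166-1 alpha-2 codes from the provider's terms

def resolve_country(ip_address: str) -> str:
    """Resolve the request's country code, e.g. via your GeoIP service (placeholder)."""
    raise NotImplementedError("wire up your GeoIP lookup here")

def ai_features_allowed(ip_address: str) -> bool:
    """Gate only the AI-backed features, not the whole product - relevant where
    you can argue the underlying model itself is not being made available in
    the restricted territory."""
    return resolve_country(ip_address) not in RESTRICTED_COUNTRIES
```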

7. Indemnities

While an indemnity for third-party IP infringement claims arising from your use of the AI services is not unusual, you should check that this indemnity encompasses both claims arising from the output of the AI model and claims relating to your use of the AI model itself.  The latter is particularly important given the ongoing debate and litigation regarding the legality of the data and methods used to train large AI models.

Watch out for the typical carve-out that the indemnity will not apply if the infringement arises from the input.  If you are offering an AI product to end-users, you will typically not control the input into your AI product, and so should ensure your customer is fully responsible for the input, especially input leading to a claim.

8. Regulatory obligations

With the EU’s AI Act (partially) in force and AI regulations emerging across the globe, companies that utilize AI models in their products are looking to the AI model providers for assurance around future compliance with these regulations.

You’re unlikely to obtain a catch-all commitment to meeting all future regulations.  As a useful starting point, you should review the obligations imposed on providers of general-purpose AI models (which would encompass the most popular LLMs) set out in Art. 53 of the AI Act.  These require the model provider to produce information and documentation covering things like the training and testing process for the model, and the capabilities and limitations of the model.  One of the best current sources for this kind of information is the “model card” for the model you intend to use.  It will typically include information on intended and restricted uses, limited information on the training data and training process used, and how the model operates to reduce things like bias, hallucinations and unintended or harmful output.
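For openly distributed models, the model card can often be retrieved programmatically - for example, via the Hugging Face Hub (a minimal sketch, with an illustrative model ID):

```python
from huggingface_hub import ModelCard

# Fetch the published model card for a model hosted on the Hugging Face Hub.
card = ModelCard.load("mistralai/Mistral-7B-Instruct-v0.3")  # illustrative model ID

print(card.data.license)  # declared license in the card's metadata
print(card.text[:500])    # prose: intended uses, limitations, training notes, etc.
```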