Commercial Considerations: “To build or to buy AI?”

October 13, 2025

Anne Todd looks at the pros, cons and risks of customised versus off the shelf AI tools.

When a business is looking to implement a new AI system, it may consider whether it is preferable to build a unique, bespoke AI system or to buy an existing third-party system. Procuring an existing off-the-shelf system brings many obvious benefits, including faster deployment and lower costs. These benefits must, however, be weighed against concerns that the solution may be a compromise which does not perfectly match business requirements, limited opportunity to influence the development roadmap and vendor lock-in risk – not to mention the need to protect critical IP and data assets and exposure to third-party cyber risk. Against these concerns, business leaders may well consider whether a better option is to build an in-house AI system which will fit business requirements precisely, ensure that valuable IP and data assets remain protected, and which may also bring competitive advantages.

Any business case for a “build” option will need to assess key financial, operational and skill-set requirements, but it must also take into account the legal and regulatory risks and requirements. While there are a multitude of different factors to be addressed, in this brief note we consider two of the most fundamental concerns – data and regulatory compliance.

Data considerations

When procuring a third-party AI system, organisations can, on the whole, benefit from the vendor having done the hard work of sourcing the data required to train the model and of building strong data governance. With the market increasingly aware of the risks of misuse of third-party data assets and demanding a level of risk protection from vendors, data risks are likely to be managed through warranties and indemnities given by the vendor. The level of contractual protection an organisation is able to secure will depend on the strength of its purchasing power relative to the position of the vendor.

In a self-build scenario, a preliminary concern will be whether the organisation can access datasets of sufficient quantity and quality to develop and train the model so that it delivers the required outputs without bias. Without the negotiating power and dominant position of Big Tech, the cost and time required for an individual organisation to secure data licences is likely to present a major hurdle. Appropriate due diligence on the data and on each licensor will also be essential, along with negotiation of appropriate warranty and indemnity cover with each licensor; the costs and effort this will require must be factored into the business case.

If the business already has access to a rich data asset base and is looking to use those data assets, it will still need to account for the time and effort it will take to prepare that data and to satisfy itself that there are no restrictions on its rights to use that data for the purposes of AI training. Careful due diligence will be needed to verify that the data is free from confidentiality and other restrictions on use. If personal data may be involved, it will be especially important to ensure that there is a lawful basis for reusing that personal data in connection with the development and deployment of the model. The principles of data protection by design and by default must be embedded into the model, transparent information must be provided to data subjects, and tools must be available to allow data subjects to exercise their rights.

Where datasets already in use in the business are licensed from third parties, licence terms will need to be carefully assessed to ensure that they do not restrict the use of the data in connection with AI. Older licences are unlikely to have contemplated the use of data to train AI, or may be ambiguous on the point, and may require re-negotiation to avoid the risk of claims. In practice, it may be impossible to ascertain the full provenance of, and freedom to use, all the datasets involved, leaving areas of residual risk.

Regardless of the data sources, it will be essential to ensure that appropriate data governance, management practices and records are developed and maintained on an on-going basis, and that regular audits and measures are implemented to detect, prevent and mitigate risks of bias.

Regulatory compliance

Whether procuring AI from the market or building a bespoke system in-house, regulatory compliance should be considered at the forefront of any deployment decision. Specific requirements will vary from sector to sector and from use case to use case and are likely to evolve over time but, at this time and as a minimum, consideration will need to be given to requirements of the EU AI Act.

When procuring an AI system from a third party, the vendor will be primarily responsible for ensuring that the AI system meets the EU AI Act’s strict regulatory requirements, such as conformity assessments, transparency and post-market monitoring. The business, as deployer of the AI system, will still be subject to regulation under the Act, especially in relation to high-risk systems, but the greater burden of ensuring that the system is, and remains, compliant with the Act’s requirements will sit with the vendor and be spread across its entire customer base. A prudent purchaser will want to conduct its own due diligence on the vendor and its governance and regulatory compliance practices, as well as ensuring that it has secured appropriate contractual rights, warranties and indemnities from the vendor as to compliance with all current and future applicable laws and regulations.

In contrast, a business building its own system in-house will face the full regulatory compliance burden alone. This will be a greater concern where the business is building a system classified as high risk. To avoid the risk of significant fines, not to mention reputational damage, the business should satisfy itself that it is able to meet all the Act’s strict requirements, not only at the design and deployment stages but throughout the lifetime of the system. This will require on-going access to all necessary technical and legal expertise, as well as strong internal AI governance policies and measures. With regulation still emerging, there is an inevitable degree of uncertainty as to what compliance will entail and what the on-going cost implications will be. One thing, however, is certain: the costs and complexities of regulatory compliance are likely to increase over time. Businesses will need to consider carefully how sustainable this will be in the longer term.

Where elements of the AI development are outsourced to third parties, the business will also need to ensure that its supply chain contracts conform with the Act’s requirements. Once again careful due diligence and consideration of the key contractual rights and remedies for the business will be essential to reduce exposure to risk.

In practice, even where the business is procuring off-the-shelf AI, if it rebrands the system, makes substantial modifications (whether initially or over time), or changes the intended purpose of the AI such that a previously low-risk or minimal-risk system becomes high risk, the business will be considered a new provider and will assume full regulatory responsibility under the Act. Strong AI governance and controls will therefore be needed to ensure that a business does not make this transition without full awareness of the risks and obligations.

A flexible approach may be required

Taking these key considerations into account, most businesses are likely to want to take a flexible, adaptable approach. Where access to a solution is required quickly and for routine tasks, the business case is likely to come down on the side of procuring a solution already available on the open market, perhaps with some features adapted to meet the specific requirements of the business.

On the other hand, a business with strong in-house technical expertise, access to rich data sets and good regulatory compliance expertise may well be prepared to accept the increased regulatory risks and costs of on-going compliance so as to achieve a strong advantage, differentiate itself from its competitors and to protect the value in its core IP assets.

In the short term – while regulation is evolving, we await more clarity over rights of access to data, and organisations are building up their own internal AI technical skills and governance – it seems likely that many organisations will adopt a “wait and see” strategy: contracting for third-party solutions on a shorter-term basis, with a view to transitioning to a bespoke system as and when this becomes viable. This will require organisations to negotiate hard over rights to IP in any bespoke developments, along with obligations on the vendor to co-operate with and support a transition to an in-house model.

Anne Todd is a Partner at Michelmores, advising on Tech, Innovation and Data Protection. She is a former in-house lawyer in telecoms and technology and for over 25 years has advised businesses of all sizes on the development, procurement and deployment of cutting-edge technologies.

This article is also available in the special AI issue of Computers & Law, which is available to download here.
