AI-generated code and vibe coding: copyright, licensing, and legal risks

Keywords: Vibe coding, artificial intelligence, intellectual property, GitHub Copilot, AI-generated code, AI Act, open source.

Legal analysis by Matthieu Quiniou, Partner IP/IT Lawyer at D&A Partners

Vibe coding refers to the use of generative artificial intelligence tools to produce computer code from natural language instructions.

AI code generation tools such as GitHub Copilot, ChatGPT or Claude now make it possible to rapidly generate functional code. Their use raises significant legal issues relating to the intellectual property of the generated code, open-source licensing, liability in the event of bugs, and compliance with the European Artificial Intelligence Act.

This guide answers the main legal questions surrounding vibe coding and AI-generated code.

Key takeaways

  • AI-generated code may be protected by copyright if creative human input can be demonstrated.
  • The use of vibe coding may expose developers to open-source license contamination risks (such as GNU General Public License or GNU Affero General Public License).
  • Liability for software generally remains with the company that deploys the software into production, even when AI tools were used during development.
  • Companies should implement code and license audits before any production deployment.

1. Understanding vibe coding

What is vibe coding and how does it work?

Vibe coding is a programming practice made possible by generative artificial intelligence models, which allows computer code to be created from instructions formulated as prompts.

Large language models (LLMs) are trained on large volumes of data and digital content; their training corpora may include code from public repositories such as GitHub or GitLab, as well as other data sources.

Although vibe coding can be used by experienced developers as a programming assistance tool, this practice also helps to democratize access to software development. It allows people with little or no programming knowledge to generate code from instructions formulated in natural language.

2. Training AI models used for vibe coding

Can you object to your code being used to train AI models?

Theoretically, yes, but in practice it’s more complicated.

The European AI Regulation (EU) 2024/1689 of June 13, 2024 refers to the Copyright Directive 2019/790 of April 17, 2019, which provides in Articles 3 and 4 for an exception to copyright for text and data mining (known as the “TDM exception”).

This exception allows the reproduction and extraction of lawfully accessible works for the purposes of text and data mining, unless rights holders have expressly reserved their rights in a machine-readable manner, for example through a robots.txt file.
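By way of illustration, a rights holder hosting code on its own domain could express such a reservation in its robots.txt file. The user-agent tokens below are those publicly documented by OpenAI, Common Crawl, and Google at the time of writing; whether they are honored depends on each crawler:

```
User-agent: GPTBot
Disallow: /

User-agent: CCBot
Disallow: /

User-agent: Google-Extended
Disallow: /
```

Note that this only reaches crawlers that identify themselves and choose to respect the file; it does not bind training datasets that were assembled earlier or through other channels.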

In practice, the effectiveness of this right to object remains limited. Opt-out mechanisms are still imperfectly standardized, and it is often difficult to verify whether these reservations are actually respected when training datasets are assembled. This difficulty also applies to computer code: license files, readme files, or comments in repositories are rarely taken into account during automated data collection.

The European AI Regulation, through the work of the AI Office, imposes certain transparency obligations on providers of general-purpose AI models, including the obligation to implement a copyright compliance policy and to publish a sufficiently detailed summary of the content used for training. However, the technical opacity of model training systems, often described as black boxes and covered by trade secrets, makes it difficult in practice for rights holders to verify that their opposition to training on their creations has been respected.

Are developers compensated when their code is used to train AI?

This is still quite rare, but the issue is being debated, as code is, under certain conditions, eligible for copyright protection, raising the question of value sharing or collective remuneration. Several lawsuits have already been filed concerning the use of open source code to train AI systems without attribution to the original developers, notably in the GitHub Copilot case (J. DOE 1 v. GitHub Inc., Northern District of California, Case 3:22-cv-06823, Nov. 3, 2022).

3. Intellectual property of AI-generated code

Is computer code protected by copyright?

Computer code is indeed protected by copyright, the essential criterion for assessment being originality.

It is settled case law (Court of Cassation, Plenary Assembly, March 7, 1986, No. 83-10.477, Babolat v. Pachot) that originality in computer code is assessed on the basis of the mark of the author's intellectual contribution, characterized by the fact that the author of the code has “demonstrated a personalized effort going beyond the simple implementation of an automatic and restrictive logic.”

Article L.112-2, 13° of the French Intellectual Property Code (CPI) explicitly lists software among works of the mind.

Is code generated with vibe coding protectable?

In the absence of specific case law on the subject, it is difficult to give a definitive opinion at this stage. It nevertheless seems reasonable to consider that the protection of code generated with vibe coding depends mainly on the degree of human intervention in the creation process.

In copyright law, only a creation that reflects the author’s own intellectual contribution can be protected. If the developer uses an AI tool to design the program architecture, formulate precise instructions, and then select, modify, and integrate the generated code, the result should be considered an original work eligible for copyright protection.

Conversely, if the code has been generated in a largely automated manner by an AI system without significant human intervention, protection is more uncertain.

In practice, vibe coding is most often part of a co-creation process between the developer and the AI tool, which leads to the originality being assessed in terms of the choices and decisions made by the developer in the design and structuring of the software.

In summary, the use of an AI system to generate code does not in itself exclude copyright protection, but it shifts the analysis of originality towards the creative choices made by the developer.

Who owns the code created with vibe coding?

Code created with vibe coding belongs in principle to its author, provided that it constitutes a work of the mind that can be protected by copyright. In French law, as in most legal systems, the rights to software belong to the person who made the intellectual contribution that gave rise to the code.

When vibe coding is used as a programming assistance tool, the author will therefore generally be the developer who designs the program architecture, formulates the instructions, and selects or modifies the generated code.

However, two important factors must be taken into account. Firstly, under French law, according to Article L113-9 of the CPI, the economic rights to software created by employees in the course of their duties are transferred to the employer. Secondly, the user licenses or general terms and conditions of use for the AI tools used for vibe coding may include certain rules concerning the use or reuse of the generated code.

It is therefore recommended that these contractual terms and conditions, as well as the framework of the employment or service relationship, be carefully reviewed.

How can human intervention in AI-generated code be proven?

In copyright law, the protection of software presupposes the existence of human intellectual input that characterizes the originality of the work. When code is generated with the help of an AI system, it may therefore be useful to document the developer’s intervention in order to demonstrate that the code is indeed the result of human creative choices.

Documenting the developer’s intervention is an important step in facilitating the recognition of copyright ownership of the generated code.

Several elements can help establish this intervention, for example:

  • keeping prompts and exchanges with the AI tool;
  • successive versions of the code (Git history, commits, modifications);
  • documentation of the software architecture and technical choices made by the developer;
  • traces of editing, integration, and adaptation of the generated code.

In this context, implementing best practices for traceability in the development process becomes an important issue in securing ownership rights to software developed with the help of artificial intelligence tools.
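By way of illustration only (the file name, fields, and format below are assumptions, not legal requirements), such traceability can be as simple as an append-only provenance log kept alongside the repository:

```python
import json
import time
from pathlib import Path

# Illustrative file name; any durable, append-only store would serve.
LOG_FILE = Path("ai_provenance.jsonl")

def record_ai_contribution(prompt: str, model: str,
                           files: list[str], human_edits: str) -> dict:
    """Append one provenance entry documenting an AI-assisted change.

    Recording the prompt, the model used, the files touched, and a
    summary of the human rework makes it easier to demonstrate later
    that the code reflects the developer's creative choices.
    """
    entry = {
        "timestamp": time.strftime("%Y-%m-%dT%H:%M:%SZ", time.gmtime()),
        "prompt": prompt,
        "model": model,
        "files": files,
        "human_edits": human_edits,
    }
    with LOG_FILE.open("a", encoding="utf-8") as f:
        f.write(json.dumps(entry, ensure_ascii=False) + "\n")
    return entry
```

Combined with a clean Git history, such a log helps reconstruct which design and structuring decisions were made by the developer, and when.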

Can AI-generated code violate open source licenses?

Yes, this risk exists and is currently the subject of much legal debate.

The AI systems used for vibe coding are trained on vast corpora of computer code, including open source repositories, for example under the GNU GPL license. In some cases, the generated code may reproduce or be inspired by existing code fragments. If these code snippets come from projects subject to copyleft licenses, their integration into software may create certain contractual obligations, including the obligation to publish the source code under the same license as the original code. These licenses are often referred to as contaminating licenses.

Two legal interpretations are currently being discussed.

The first, and most widespread, considers that the contaminating effect only applies if the generated code actually reproduces identifiable fragments of code subject to a copyleft license. In this case, it is recommended that the generated code be audited, similar to a plagiarism check, in order to detect any matches with open source repositories.

A second, more extensive and currently marginal interpretation is that once an AI model has been trained on code subject to copyleft licenses, the generated code should itself be subject to these licenses. Such an approach would have significant consequences, as it would call into question the possibility of protecting or exploiting AI-generated code in a proprietary manner.

The use of AI-generated code may therefore expose companies to constraints related to open source licenses or the unintentional introduction of problematic dependencies.
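The audit recommended under the first interpretation can be sketched in a few lines. The snippet below is a minimal illustration using Python's standard difflib module, and assumes the company maintains a local corpus of known copyleft-licensed snippets; real audits rely on dedicated code-scanning services and far larger corpora:

```python
import difflib

def flag_similar_fragments(generated: str, corpus: dict[str, str],
                           threshold: float = 0.8) -> list[tuple[str, float]]:
    """Return corpus entries whose text closely matches the generated code.

    `corpus` maps an identifier (e.g. repository and license) to a known
    copyleft-licensed snippet; a similarity ratio at or above `threshold`
    flags the fragment for human review.
    """
    hits = []
    for source_id, snippet in corpus.items():
        ratio = difflib.SequenceMatcher(None, generated, snippet).ratio()
        if ratio >= threshold:
            hits.append((source_id, round(ratio, 2)))
    # Most similar fragments first.
    return sorted(hits, key=lambda h: -h[1])
```

A hit does not by itself establish infringement; it simply identifies fragments that deserve a closer legal and technical look before deployment.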

What license should be adopted for code developed with the help of AI?

The choice of license for code developed with the help of an artificial intelligence tool depends above all on the strategy of the software project and the legal framework applicable to the generated code. If the code is copyrightable and the developer or company is the copyright holder, it can be distributed under either a proprietary license or a fully or partially open source license (MIT, Apache 2.0, GPL, etc.).

In practice, it is in companies’ interests to implement procedures for auditing the generated code and verifying licenses, similar to those used for managing open source dependencies in traditional software projects.
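As a minimal sketch of such a verification step (the list of copyleft identifiers below is illustrative and deliberately short), a company might scan a source tree for SPDX license identifiers commonly associated with copyleft obligations:

```python
import re
from pathlib import Path

# Illustrative subset of SPDX identifiers to flag for review.
COPYLEFT_PATTERN = re.compile(r"\b(GPL-2\.0|GPL-3\.0|AGPL-3\.0|LGPL-3\.0)\b")

def scan_for_copyleft(root: str) -> dict[str, list[str]]:
    """Walk a source tree and report files mentioning copyleft identifiers."""
    findings: dict[str, list[str]] = {}
    for path in Path(root).rglob("*"):
        if not path.is_file():
            continue
        try:
            text = path.read_text(encoding="utf-8")
        except (UnicodeDecodeError, OSError):
            continue  # skip binaries and unreadable files
        matches = sorted(set(COPYLEFT_PATTERN.findall(text)))
        if matches:
            findings[str(path)] = matches
    return findings
```

Flagged files then go to human review, exactly as unvetted open source dependencies would in a traditional project.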

Can the prompts used to generate code be protected by copyright?

Yes. If the prompts are original, there is no reason why they could not be protected by copyright as works of the mind.

4. Legal risks of vibe coding

Who is liable in the event of a bug or flaw in AI-generated code?

Liability lies with the person or company that develops, integrates, or makes the software available to users or the public. The use of an artificial intelligence tool to generate code does not transfer liability to the AI provider.

Furthermore, AI code generation systems often include clauses in their terms and conditions of use stating that no guarantee is provided regarding the output.

It is therefore up to developers and companies to carry out the necessary tests, security audits, and code reviews before putting anything into production. Given the current state of the art in technology, it seems inappropriate to require AI systems to guarantee that the generated code is free of bugs or vulnerabilities. AI is a development aid tool, but the ultimate responsibility for software quality and security remains with humans.

What are the risks of confidentiality or information leaks with vibe coding tools?

The use of AI tools to generate code may present confidentiality risks, particularly when developers transmit sensitive code elements or technical information to the system.

These risks are particularly significant when the AI tool is operated via an online service and not deployed locally. Prompts, code snippets, or architecture descriptions submitted to the system may be processed on third-party servers and, depending on the terms of use of the service, may be stored, analyzed, or used to improve the models.

In this context, there is a risk of disclosure of information covered by trade secrets, particularly when a developer submits proprietary code, internal algorithms, or sensitive software architecture elements.

To limit these risks, companies can, in particular:

  • regulate the use of AI tools through internal policies,
  • avoid submitting confidential or strategic code,
  • favor solutions deployed locally or in secure environments,
  • verify the contractual terms and conditions and data processing policies of AI providers.

The use of vibe coding tools must therefore be compatible with trade secret protection obligations and, where applicable, with the company’s internal information security policies.
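One such internal control can be sketched as a pre-submission filter that redacts obvious credentials from prompts before they leave the company's environment. The patterns below are illustrative assumptions and are not exhaustive; a real policy would rely on a dedicated secret scanner reviewed by security teams:

```python
import re

# Illustrative patterns for obvious secrets only.
SECRET_PATTERNS = [
    (re.compile(r"(?i)(api[_-]?key|secret|token|password)\s*[:=]\s*\S+"),
     r"\1=[REDACTED]"),
    (re.compile(r"-----BEGIN [A-Z ]*PRIVATE KEY-----[\s\S]*?"
                r"-----END [A-Z ]*PRIVATE KEY-----"),
     "[REDACTED PRIVATE KEY]"),
]

def redact_prompt(prompt: str) -> str:
    """Replace likely credentials in a prompt before sending it to an AI service."""
    for pattern, replacement in SECRET_PATTERNS:
        prompt = pattern.sub(replacement, prompt)
    return prompt
```

A filter of this kind reduces accidental disclosure but does not replace the contractual and organizational measures listed above.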

Can code generated with vibe coding be reused by AI providers to train their models?

There is no absolute answer to this question, as it is generally governed by the licenses and terms of use of generative AI systems. Some AI systems allow users to choose whether their prompts and generated content can be used to improve the model.

5. Best practices for vibe coding in companies

Can AI-generated code be used in commercial software?

In most cases, AI-generated code can be used in commercial software. Several legal precautions must nevertheless be taken, in particular verifying that the generated code does not reproduce fragments subject to restrictive open source licenses and that the terms of use of the AI tool allow commercial exploitation of the generated code.

What best practices should be adopted when vibe coding, and before publishing or deploying AI-generated code?

Before going live, it is recommended to:

  • avoid disclosing confidential information or proprietary code when prompting
  • check the terms of use of the AI tool and the rules applicable to the generated outputs
  • conduct a human review of the code and thorough technical testing
  • verify the absence of security vulnerabilities and the robustness of the software
  • perform a license and similarity audit to detect any fragments from software subject to restrictive open source licenses
  • document human intervention in the development process (prompts, modifications, Git history) to secure ownership of rights.

In general, code generated with the help of AI should be considered as code to be verified and audited, rather than code that is ready to be used without control.

Companies using vibe coding tools must therefore integrate these legal issues into their software development practices, particularly with regard to intellectual property, open source licenses, and risk management.

Legal support

D&A Partners advises companies, startups, and developers on legal issues related to artificial intelligence, software intellectual property, and compliance with the European AI regulatory framework.

Last updated: March 2026.

By Matthieu Quiniou – Partner, Lawyer

360° & Full-Service Due Diligence with d&a partners

At d&a partners, we support investors, executives and companies in securing their strategic operations through tax, financial, legal and social analysis.

What you need:
• A single point of contact with strong deal culture.
• A truly global view of risks and negotiation levers: price, documentation, GAP, strategy, compliance.

Our deliverables are clear, prioritised, quantified and immediately actionable to help you make confident decisions.

Our key areas of expertise:
• Tax due diligence
• Financial due diligence
• Legal due diligence (corporate, contracts, litigation, regulatory)
• Social / employment due diligence
• IP / IT / Data / Cyber / AI due diligence
• Drafting & negotiation of transaction documentation through to closing

Our mission: to provide a concise, decision-oriented overview.

Why choose d&a partners?
✨ A unique cross-functional perspective
✨ Concrete, concise analysis
✨ A pragmatic approach backed by a multidisciplinary team
✨ The ability to work on both traditional and highly innovative operations

If you have any questions, please send an email to contact@dnapartners.fr

Distance selling of financial services: what’s changing

Ordinance No. 2026-2 of 5 January 2026 on the Distance Marketing of Financial Services to Consumers

Ordinance No. 2026-2 of 5 January 2026 was adopted pursuant to Law No. 2025-391 of 30 April 2025 (DDADUE) and transposes Directive (EU) 2023/2673 of 22 November 2023 on the distance selling of financial services. It also aligns the applicable legal framework with Law No. 2025-594 of 30 June 2025 on combating fraud involving public subsidies (the “Cazenave Law”), in particular with respect to telephone solicitation.

The purpose of this Ordinance is to strengthen consumer protection in the context of the distance marketing of financial services, taking into account the rapid development of online sales and the repeal of Directive 2002/65/EC, now integrated into Directive 2011/83/EU on consumer rights.

First, the Ordinance enhances the right of withdrawal by facilitating its exercise. Where a contract is concluded electronically, professionals are required to provide a dedicated withdrawal functionality, enabling consumers to exercise this right easily.

Second, it strengthens pre-contractual information obligations. Prior to the conclusion of the contract, professionals must provide consumers with clear and detailed information, including in particular:

  • complaint handling procedures,
  • the consequences of late or non-payment, and
  • the possible use of automated decision-making mechanisms influencing the price or contractual terms.

Third, the Ordinance imposes stricter requirements on digital interfaces used for distance marketing. Professionals must provide clear and appropriate explanations and must ensure that consumers are able to contact a human representative when digital tools are used.

Fourth, the Ordinance updates the sanctions regime. It extends the supervisory powers of the DGCCRF to all provisions governing the distance selling of financial services, including in the insurance sector, and introduces a system of administrative (decriminalised) sanctions, aligned with the general regime of the Consumer Code, without affecting the sanctioning powers of the ACPR.

In addition, where contracts are concluded via voice telephony, the Ordinance introduces a “two-step sales process”, requiring professionals to send consumers a prior confirmation of the offer before any binding commitment is made.

Finally, in line with the Cazenave Law, the Ordinance largely repeals Article L.112-2-2 of the Insurance Code, which has become obsolete following the ban on unsolicited telephone solicitation as of 11 August 2026.

The Ordinance is structured into seven titles, amending in particular the Consumer Code, the Insurance Code, the Mutuality Code, the Social Security Code and the Monetary and Financial Code.
It will enter into force on 19 June 2026, with the exception of Article 18 (11 August 2026) and Article 9 relating to telephone sales (1 January 2027).

By Margaux FRISQUE – Partner – Contracts & Litigation Expert

Integrating Crypto Services Without MiCA License, What Options Are Available?

With the entry into force of the “MiCA” regulation, the provision of crypto-asset services within the European Union is strictly reserved for duly authorized providers.

For many market participants, obtaining a CASP (“Crypto-Asset Service Provider”) license entails significant organizational, technical and regulatory investments, sometimes amounting to several hundred thousand euros per year.

In this context, a key question arises: is it possible to offer a coherent crypto experience without holding a license?

MiCA does not provide for any “agent” status for CASPs. An unauthorized actor can therefore neither act on behalf of a provider nor deliver a crypto-asset service under the provider’s responsibility.

Nevertheless, market practice demonstrates that certain configurations remain viable. The partnerships established by Bitpanda with non-licensed actors illustrate this possibility, provided the framework is structured with sufficient rigor.


1. The Business Introducer Model

The business introducer model is the most accessible non-regulated option.

It is based on a simple requirement: limiting one’s role to putting a user in contact with a licensed CASP, without intervening in the service itself.

This implies:

  • a purely functional redirection with no incentive;
  • no collection or processing of customer information;
  • no access to orders or transactional data;
  • no promotional or value-driven communication.

Any deviation, even minor, may lead to regulatory requalification.

For actors looking to test a market or structure an initial step toward a crypto strategy, this model remains the simplest and fastest option.


2. The “Grey-Label” Distribution Model

The grey-label model is currently the most balanced solution for offering an integrated crypto experience without necessarily requiring a CASP license.

Under this model, the user accesses the CASP’s interface from within the partner’s environment (typically via webview), while maintaining a strict separation of roles.

Its effectiveness relies in particular on three cumulative requirements.

Transparency: the user must clearly identify the licensed provider. The interface may be co-branded, but the CASP’s identity must appear explicitly at every stage. Nothing should suggest that the service is provided by the facilitating entity.

Technical segregation: sensitive flows – orders, amounts, transactional data – must be handled exclusively by the CASP. The webview must remain a visual entry point only, with no operational capacity.

Neutrality: the partner’s role is limited to providing access. It does not present the service as its own, does not promote it, and does not intervene at any stage in its operation.

When these conditions are met, the grey-label model can deliver a smooth user experience, realistic integration and, importantly, no licensing requirement. It is now the most widely used distribution model in crypto partnerships across Europe.

Integrating CASP services via API represents the most ambitious variation of this model, but also the most exposed. If the partner interacts with an order, transforms an instruction, accesses transactional data or contributes to operational processing – even marginally – it may be requalified as providing a reception-transmission or execution service. These services are reserved for licensed CASPs.

As a strong recommendation, any such integration should be preceded by consultation with the regulator (in France, the Autorité des marchés financiers) to assess its compliance.


3. Becoming a CASP – The Structural Option

Some actors will choose to obtain a CASP licence themselves. This option provides independence, full control of the service, the ability to build a complete business model, and significantly greater flexibility in structuring and delivering the offering.

For regulated financial institutions, certain shortcuts exist, such as the accelerated licensing procedure available to credit institutions. These do not, however, reduce the level of substance expected.


4. Choosing the Appropriate Model

The decision rests fundamentally on three criteria:

  • the level of integration sought in the user journey;
  • the degree of responsibility the actor is prepared to assume;
  • the strategic orientation selected, whether partnership-based, a progressive ramp-up, or full internalization of the service.

Actors able to structure a compliant model from the outset gain a clear advantage: offering a credible crypto experience without exposing the organization to disproportionate regulatory risks.

To explore how to structure a crypto model in compliance with MiCA, you may contact Daniel Arroche using the form at the bottom of this page.

The information in this article is provided for general informational purposes only and does not constitute legal advice or investment guidance; it must be assessed in light of each actor’s specific circumstances and cannot create any liability for d&a partners or its attorneys, who recommend seeking professional advice before making any decisions related to the implementation of a crypto offering or the interpretation of the MiCA regulation.

d&a partners welcomes Pauline Robin as Partner and strengthens its Regulatory practice

d&a partners continues its development with the arrival of Pauline Robin, who takes the lead of the firm’s Banking Regulatory practice. This appointment reflects the firm’s ambition to strengthen its support for financial institutions, fintechs, and crypto-asset players in a constantly evolving regulatory environment.

Recognized expertise in banking and financial law

A lawyer specializing in banking and financial law, Pauline Robin began her career at Caceis Bank before joining CMS Francis Lefebvre and then the international law firm A&O Shearman.
Her career has enabled her to acquire in-depth expertise in complex and fast-changing regulatory matters. She advises notably on issues relating to banking compliance and financial regulation, two key challenges for institutions in the current context.

Tailored support for financial and innovation players

Pauline Robin advises her clients on a wide range of regulatory matters:

  • payment services,
  • electronic money,
  • asset management,
  • as well as assisting financial institutions, fintechs, and crypto players in their interactions with supervisory authorities such as the AMF and ACPR.

Thanks to her experience, she helps companies anticipate legislative changes and turn regulatory constraints into real growth drivers.

A strengthened Regulatory offering

Alongside Daniel Arroche, Partner in charge of regulatory matters relating to crypto-assets, Pauline Robin contributes to structuring a comprehensive Regulatory offering within d&a partners.
This offering covers all banking compliance and financial regulatory issues, particularly in the context of new European initiatives such as the MiCA Regulation (Markets in Crypto-Assets).

By leveraging this dual expertise—banking and crypto—the firm is able to support its clients at every stage of their development: structuring their activities, ensuring compliance with supervisory authorities, and legally securing their operations.

Supporting innovation

The strengthening of the Regulatory practice illustrates d&a partners’ commitment to providing solutions tailored to the challenges faced by traditional banks, fast-growing fintechs, and innovative players in the crypto ecosystem.

We are delighted to welcome Pauline to d&a partners and, thanks to her, to further strengthen our support to clients in a sector where regulation is evolving rapidly and demands rigor, anticipation, and innovation.

Welcome Pauline!

d&a partners advises Million Victories on its $40 million fundraising

d&a partners advised Million Victories, the Lyon-based studio behind the mobile strategy game Million Lords, in connection with its $40 million Series B fundraising.

The transaction was handled by the Corporate team composed of Stéphane Daniel, Clarice Duclos, and Thomas Letzelter. It was led by Haveli Investments, with the renewed support of Griffin Gaming Partners and Eurazeo.

A new momentum for Million Victories
With this fundraising, Million Victories is entering a new phase of ambitious development:

  • strengthening its teams,
  • accelerating expansion across Europe, the United States, and Asia,
  • and deploying resources to establish itself as a leading player in the mobile gaming industry.

Towards a global leadership position
This fundraising marks a key milestone in Million Victories’ growth strategy, as the studio now aims to position itself as a global leader in mobile gaming.

d&a partners advises Suzaku on its $1.5 million fundraising

d&a partners advised Suzaku, an innovative project dedicated to the secure decentralization of the Avalanche ecosystem, in connection with its structuring and $1.5 million fundraising.

The transaction was handled by the Corporate team composed of Stéphane Daniel and Clarice Duclos, and was structured around a seed round, public sales, and grants awarded by the Avalanche Foundation.

A strategic project for Avalanche
As a key player in Web3 infrastructure on Avalanche, Suzaku aims to take a new step forward with this fundraising by strengthening its role in the decentralization of Layer 1 blockchains (L1s).

This transaction represents a strategic milestone towards building a reference staking layer on Avalanche, contributing to the security and growth of the ecosystem.

d&a partners joins the innovation hub of Sophia Antipolis

Ever closer to innovative ecosystems, d&a partners sets up in the heart of Europe’s leading technology park.
Located on Boulevard du Cap in Antibes, our new office supports technology companies and investment funds in their high-value operations: structuring of complex projects, fundraising (equity, debt, tokens), mergers and acquisitions, DAOs, DeFi, tokenization…

On site, our partner Stéphane Daniel and his team implement an integrated legal approach designed to address the specific challenges of the tech and financial sectors in a constantly evolving environment.

Our teams combine their expertise in corporate, regulatory, litigation, tax, employment, and IP/IT/Data to build tailor-made legal solutions that meet the demands of the most dynamic ecosystems.

A true meeting point between law, innovation, and growth.