Intellectual property law has always rewarded human creativity. The author who writes the novel, the inventor who devises the machine, the designer who creates the ornamental form: all receive exclusive rights as incentive for their contributions to knowledge and culture. Artificial intelligence disrupts this human-centric framework. When an AI system generates text, creates images, or discovers inventions, the traditional categories of authorship and inventorship no longer apply cleanly. India, like every jurisdiction, must determine how its IP regime will respond.
Copyright and AI-Generated Works
The Copyright Act, 1957, defines "author" with reference to human creators. For a photograph, the author is the person taking the photograph; for a literary work, the person who writes it. The Act does not contemplate non-human authors, reflecting its origin in an era when authorship necessarily required human agency. When an AI system generates a literary work, artistic work, or musical composition, this framework comes under strain.
The Indian Copyright Office has not issued definitive guidance on AI-generated works. In the absence of such guidance, several positions are plausible. One view holds that AI-generated works lack authors and therefore lack copyright protection, falling immediately into the public domain. Another view attributes authorship to the human who created or operated the AI system. A third view might attribute authorship to the AI's owner or licensee. Each position has significant implications for investment in AI creative tools.
Computer-Generated Works
The Copyright Act contains a provision for "computer-generated" works, defining the author as "the person who causes the work to be created." This provision, inserted by the Copyright (Amendment) Act, 1994 and modelled on the UK's Copyright, Designs and Patents Act 1988, might accommodate AI-generated works by attributing authorship to the person who caused the AI to generate the output. It was drafted, however, with the deterministic software of its era in mind, not autonomous AI systems whose outputs their operators cannot fully predict or control.
Applying the computer-generated works provision to modern AI raises difficult questions. Who "causes" a generative AI system to create a specific output? The prompt engineer who provides instructions? The developer who trained the model? The operator who runs the inference? Each actor contributes to the output; none fully controls it. The causal inquiry that the provision requires becomes metaphysically complex when applied to systems that generate novel outputs from minimal inputs.
Patents and AI Inventors
The Patents Act, 1970, requires that patent applications identify the inventors. The DABUS cases, in which applications named an AI system as inventor, have tested patent offices worldwide, and most jurisdictions have refused them. The Indian Patent Office has not issued reported decisions on AI inventorship, but the Act's definition of the "true and first inventor" and its references to inventors as persons suggest that AI systems cannot be named as inventors under current law.
This limitation creates practical difficulties. When an AI system genuinely contributes to an invention, identifying a human inventor may involve attribution choices that are at best arbitrary, at worst fraudulent. Some commentators propose that AI should be acknowledged as a tool used by human inventors, analogous to laboratory equipment. Others argue that inventions generated primarily by AI should be unpatentable, preserving patent incentives for human innovation while allowing AI discoveries to enter the public domain.
Training Data and Infringement
AI systems are trained on vast datasets that may include copyrighted works. The legality of this training use remains contested globally. In India, the fair dealing exceptions in section 52 of the Copyright Act are narrower than the US fair use doctrine, potentially limiting the scope for training use defences. The questions whether AI training constitutes infringement and, if so, whether any exception applies await definitive judicial resolution.
AI outputs that closely resemble training data present additional infringement risks. When a generative AI produces text substantially similar to a training work, the output may infringe the training work's copyright. AI developers implement filters and safeguards to reduce verbatim reproduction, but complete prevention is technically challenging. Users of AI tools should understand that outputs may incorporate protected expression, creating infringement exposure that the tool provider's terms may not fully address.
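The verbatim-reproduction filters mentioned above are typically built on overlap detection between an output and a corpus of protected texts. The sketch below is purely illustrative, not any vendor's actual safeguard; the function names, the 8-word n-gram size, and the 5% threshold are assumptions chosen for the example.

```python
def ngrams(text, n=8):
    """Split text into overlapping word n-grams (lowercased)."""
    words = text.lower().split()
    return {" ".join(words[i:i + n]) for i in range(len(words) - n + 1)}

def verbatim_overlap(output, protected_texts, n=8, threshold=0.05):
    """Flag an output whose share of n-grams matching any protected
    text meets or exceeds the threshold.

    Illustrative only: real systems use far larger corpora, hashed
    n-gram indexes, and fuzzy matching rather than exact sets.
    """
    out_grams = ngrams(output, n)
    if not out_grams:
        return False  # output too short to compare at this n
    protected_grams = set()
    for t in protected_texts:
        protected_grams |= ngrams(t, n)
    overlap = len(out_grams & protected_grams) / len(out_grams)
    return overlap >= threshold
```

A filter of this kind reduces, but cannot eliminate, the infringement exposure described above: paraphrased or lightly edited reproductions slip past exact n-gram matching, which is one reason complete prevention remains technically challenging.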
Trade Secrets and AI
Trade secret protection is less developed in India than copyright or patent: there is no dedicated statute, and protection rests on contract and the common-law action for breach of confidence. Even so, it may offer advantages for AI innovation. Model architectures, training methodologies, and learned parameters can be protected as trade secrets without the authorship or inventorship questions that copyright and patent raise. The protection lasts as long as secrecy is maintained and requires no registration or disclosure.
The limitation of trade secret protection is that it provides no defence against independent creation or reverse engineering. Competitors who develop equivalent AI capabilities through their own efforts acquire full rights to use those capabilities. For AI innovations that can be reverse-engineered from deployed products, trade secret protection may prove ephemeral. Strategic IP planning for AI must assess which innovations are better suited to trade secret versus registered IP protection.
Policy Directions
India's IP policy establishment is aware of AI-related challenges. The National Intellectual Property Rights Policy (2016) references technology evolution as a consideration in policy development, and parliamentary committees have received submissions on AI and IP. The trajectory points toward eventual clarification, though the timeline and direction remain uncertain.
For the practitioner advising AI developers and users today, uncertainty is the operating condition. Clients should understand that IP protection for AI-generated works and AI-assisted inventions is not assured. Contractual allocation of IP rights should not assume that registrable rights will be available. Investment decisions should account for the possibility that competitors may freely use similar AI-generated outputs. In this uncertain environment, strategic flexibility and realistic expectations serve clients better than false assurance.