Day-11: Certificate Course On The Interface Between Artificial Intelligence And Intellectual Property Rights

Event Date: 9th January 2026

Event Brief Description:

The School of Law at Galgotias University, through its Centre for Artificial Intelligence & Technology and Centre for IP & Innovation, successfully organized a specialized session of its Certificate Course on the Interface Between Artificial Intelligence and Intellectual Property Rights. The guest lecture featured Mr. Rodney D. Ryder, Founding Partner of Scriboard and a preeminent expert in Cyber Law.

The session delved into the complex intersection of AI, ethics, and policy, addressing how technology has permanently transformed human interaction. Mr. Ryder explored the necessity of regulating AI through three ethical lenses: Technological Utilitarianism, Digital Deontology, and Computational Virtue Ethics. A significant portion of the talk was dedicated to the "Liability Puzzle"—questioning who is accountable when autonomous systems cause harm—and the ethical boundaries of training AI on copyrighted works. By analyzing the four-factor fair use test and the need for "transformative" use, the lecture provided a roadmap for ethical AI development. Mr. Ryder emphasized that while AI is a powerful "double-edged sword," recalibrating legal frameworks is essential to ensure technology strengthens, rather than undermines, the human creative spirit and fundamental rights.


Event Detailed Description:

The specialized session led by Mr. Rodney D. Ryder provided an intellectually rigorous deep dive into the evolving legal landscape of Artificial Intelligence. The lecture began by framing AI as an irreversible transformative force, necessitating a shift in the standards, values, and rules that govern digital interactions.

Ethical Frameworks and Digital Governance: Mr. Ryder categorized the ethical responses to AI into three primary domains. He discussed Technological Utilitarianism as a tool for risk analysis based on the balance of harm and benefit. This was contrasted with Digital Deontology, which prioritizes inviolable rights like privacy and informed consent, particularly when they conflict with public interests. Finally, he introduced Computational Virtue Ethics, advocating for the design of autonomous agents that embody civic virtues such as algorithmic prudence and justice.

The Liability Puzzle: A critical theme of the session was the transition from "unsupervised delegation" to accountability. Mr. Ryder raised provocative questions regarding product liability and chatbots, specifically in sensitive contexts like mental health. He argued for mandatory "guardrails" and transparency warnings, questioning whether an AI’s "encouragement" could be legally classified as a causal contribution to harm. The discussion highlighted the urgent need for a framework that protects minors and sets default safeguards for high-risk interactions.

Intellectual Property and the Fair Use Dilemma: The lecture transitioned into the "Ultimate Moral Dilemma" regarding AI training on copyrighted material. Mr. Ryder deconstructed the Four-Factor Fair Use Analysis, noting that courts are increasingly focusing on the "transformative" nature of AI outputs. He proposed a recalibrated framework for ethical AI development that includes:

  • Explicit Informed Consent: Moving beyond generic terms of service to consent that specifies particular use cases and durations.
  • Revenue Sharing: Establishing models for attribution and financial compensation for creators whose works contribute to commercially successful AI models.
  • Differential Treatment: Distinguishing non-commercial research from commercial deployment, with stricter requirements applying to the latter.

Forward-Looking Risk Management: Concluding the session, Mr. Ryder offered strategic advice for stakeholders. He urged developers to implement content auditing and proactive licensing, while calling on policymakers to clarify fair use through targeted legislation. He touched upon future complexities such as Multimodal AI Systems and Federated Learning, the latter enabling decentralized training that may resolve some copyright challenges while introducing new governance hurdles. The session ended with a call for multi-stakeholder dialogue to ensure that AI serves as a tool for social justice rather than a reproducer of historical biases.

Event Outcome:

  • Conceptual Clarity: Participants gained a comprehensive understanding of the three ethical frameworks (Utilitarianism, Deontology, and Virtue Ethics) as they apply to algorithmic governance.
  • Legal Analytical Skills: Students and researchers were equipped with the tools to apply the "Four-Factor Fair Use Test" to modern AI training datasets and generative outputs.
  • Liability Awareness: The session successfully highlighted the current "legal vacuum" in AI product liability, encouraging students to think critically about future litigation involving autonomous agents.
  • Stakeholder Strategy: Attendees learned practical risk management strategies tailored for developers, creators, and policymakers, fostering a holistic view of the AI ecosystem.
  • Policy Advocacy: The event underscored the importance of international coordination and targeted legislation to protect the human creative spirit in the age of synthetic data.