
Who's in Charge?: Navigating the Legal Uncertainties of AI-Driven Agency Relationships

This article is written by Manreet Kaur Sidhu of Rajiv Gandhi National University of Law.




The proliferation of artificial intelligence (AI) has ushered in a new era of technological advancement, permeating fields traditionally dominated by human discretion. This intersection of AI and the legal realm introduces complex questions, particularly concerning the concept of agency. Under the law, agency presupposes a contract between persons with legal personality. However, AI's increasing involvement in creating and executing contracts challenges this traditional framework, raising questions about AI's role and liability within agency relationships.

AI, particularly on e-commerce and stock trading platforms, is increasingly assuming agent-like roles. For example, AI chatbots such as Google Bard and Amazon Lex facilitate transactions between parties without the operator's explicit consent to each individual transaction, acting as intermediaries between customers and retailers.


Emergence of AI in Agency Relationships


An agent is typically someone authorized to act on behalf of another (the principal) when dealing with third parties. In the Indian legal system, the Indian Contract Act, 1872, outlines agency through Sections 182 to 238. The basic principles for establishing agency include:

- The principal must have attained majority and be of sound mind; an agent, to be responsible to the principal, must likewise have attained majority and be of sound mind (Sections 182-184).

- The principal is bound by the agent's acts done within the scope of the agent's authority (Section 187).

Underlying both is the presumption that the agent must conduct the principal's business with reasonable care and diligence.


AI in Retail and E-Commerce


E-commerce giants such as Amazon, Flipkart, and eBay utilize AI-based chatbots to enhance user experience by comparing prices, analysing quality, and streamlining transactions. These AI features are marketed under labels such as “magical listing” and “smart decision enhancer”, often without the operator retaining adequate control over their functions. The chatbots can autonomously tweak contract terms based on user input, which raises questions about the validity of, and liability for, such contracts when the operator's direct consent is absent.


AI chatbots in retail represent a significant evolution in consumer interaction and transaction processing. These systems are programmed to handle vast amounts of data, providing customers with real-time responses and personalized shopping experiences. For instance, chatbots can suggest products, answer queries, and even process returns. The algorithms powering these chatbots are designed to understand natural language and learn from interactions, thereby improving their efficiency over time. The autonomy granted to these systems, however, introduces a layer of complexity in terms of liability and accountability.
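To make the chatbot pipeline described above concrete, here is a minimal, purely illustrative Python sketch of the intent-matching step such a system might perform before acting on a customer message. The intent names and cue phrases are assumptions for illustration, not any retailer's actual implementation; real systems use trained language models rather than keyword lists.

```python
# Hypothetical sketch of a retail chatbot's intent-matching step.
# Intent names and cue phrases are illustrative assumptions only.

INTENTS = {
    "process_return": ["return", "refund", "send back"],
    "recommend_product": ["suggest", "recommend", "looking for"],
    "order_status": ["where is", "track", "status"],
}

def classify(message: str) -> str:
    """Map a customer message to a coarse intent, else defer to a human."""
    text = message.lower()
    for intent, cues in INTENTS.items():
        if any(cue in text for cue in cues):
            return intent
    return "handoff_to_human"  # fall back rather than act autonomously

print(classify("I'd like to return my headphones"))   # process_return
print(classify("Please amend clause 4 of my order"))  # handoff_to_human
```

Note the fallback branch: whether a system defers to a human or acts on its own in unrecognized cases is precisely the design choice on which the liability questions below turn.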


AI in Stock Markets and Trading


AI's introduction to stock trading has revolutionized the field by using technologies like Natural Language Processing and Sentiment Analysis to predict market trends. AI can manage portfolios, provide financial advice, and is moving towards making trades autonomously. This autonomy raises concerns about liability for any misinformation or errors generated by AI, particularly when operators or developers are not directly involved in every decision.


In the stock market, AI's role extends beyond simple buy-sell decisions. Advanced AI systems analyse market data, news articles, social media trends, and other financial indicators to forecast market movements. These systems can execute trades within milliseconds, far faster than human traders. The efficiency and accuracy of these AI systems have made them invaluable in modern trading. However, the high level of autonomy also means that any errors or unforeseen market conditions could lead to significant financial losses, raising critical questions about who bears the responsibility for such outcomes.
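The paragraph above can be made concrete with a deliberately crude Python sketch of a sentiment-scored trading signal bounded by a hard position cap. The word lists, clip size, and cap are illustrative assumptions; production systems use far richer models and live market-data feeds.

```python
# Hypothetical sketch: bag-of-words sentiment drives a trading signal,
# while a hard position cap bounds the system's autonomous exposure.

POSITIVE = {"beat", "upgrade", "growth", "record"}
NEGATIVE = {"miss", "downgrade", "lawsuit", "recall"}

def sentiment_score(headline: str) -> int:
    """Crude sentiment: +1 per positive cue word, -1 per negative."""
    words = headline.lower().split()
    return sum(w in POSITIVE for w in words) - sum(w in NEGATIVE for w in words)

def decide_order(headline: str, position: int, max_position: int = 100) -> int:
    """Return a signed order size; the cap limits unsupervised losses."""
    clip = 10 * sentiment_score(headline)  # shares per sentiment point
    target = max(-max_position, min(max_position, position + clip))
    return target - position

print(decide_order("Company posts record growth after upgrade", 0))     # buy 30
print(decide_order("Regulator files lawsuit over product recall", 90))  # sell 20
```

Even in this toy version, the liability question is visible: the operator sets the cap, but the trade itself is triggered by data the operator never reviews.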


The debate surrounding AI's nature—whether it is merely automated or possesses a degree of autonomy—has significant implications for agency. If AI is viewed as an automated tool following pre-set instructions, liability can be attributed to the operator or user. However, if AI is seen as having autonomous decision-making capabilities, determining liability becomes more complex.



The Contracting Problem


Traditional contract formation requires:

- Two distinct, legally valid parties.

- Consensus ad idem (agreement on the same terms).

- Intention to create a legal relationship.


AI's involvement in contracting challenges these essentials. AI lacks legal personality, preventing it from being a contracting party. This raises questions about the validity of contracts facilitated by AI and the liability of principals for AI's actions.

The validity of contracts facilitated by AI hinges on whether AI can be considered a legitimate party in the contracting process. Since AI lacks legal personality, it cannot be a party to a contract. However, it can sometimes act as an intermediary, facilitating contract formation between two human parties. The key issue here is whether the actions taken by AI, on behalf of its operator, are binding on the operator.


For example, if an AI chatbot negotiates the terms of a sales contract, who is responsible if the chatbot agrees to unfavourable terms or makes an error in the negotiation process? Traditional legal doctrines struggle to accommodate such scenarios, as they rely on the presence of human intent and understanding, which AI inherently lacks.

Determining liability in AI-driven contracts involves examining the extent of control and supervision exercised by the operator over the AI system. If the AI acts within the scope of its intended functionality and the operator has taken reasonable steps to ensure its proper functioning, the operator may be held liable for the AI's actions. However, if the AI deviates from its expected behaviour due to unforeseen factors or errors in its programming, attributing liability becomes more complex.
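Because liability turns on the degree of control and supervision the operator exercised, one practical safeguard is to record every autonomous decision so that this degree of supervision can later be reconstructed. The following Python sketch is a hypothetical illustration; the field names are assumptions, and no particular logging standard or statutory requirement is implied.

```python
# Hypothetical sketch: append one auditable record per AI decision,
# preserving evidence of whether it stayed within the operator's mandate.

import json
from datetime import datetime, timezone

def log_decision(action: str, within_mandate: bool,
                 path: str = "decisions.log") -> None:
    """Append an auditable record of a single autonomous decision."""
    record = {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "action": action,
        "within_mandate": within_mandate,  # did the AI stay inside its authority?
    }
    with open(path, "a") as f:
        f.write(json.dumps(record) + "\n")

log_decision("granted 8% discount on a customer order", within_mandate=True)
```

Such a trail does not settle who is liable, but it supplies the factual record a court would need to assess whether the operator took reasonable steps to supervise the system.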


Probable Solutions


Closed Agreements

- Umbrella Contracts: Users and operators agree to be bound by AI's decisions, with terms included in standard terms and conditions. However, this requires users to read and agree to all terms, which is impractical.

- Express Policies: AI operates within strict guidelines, limiting its capacity to alter contract terms. This approach restricts AI's adaptability, countering its inherent design.


Closed agreements offer a structured way to address AI-driven contracts by setting clear boundaries and guidelines for AI behaviour. Umbrella contracts and express policies provide a framework within which AI operates, ensuring that users and operators are aware of the extent of AI's authority. However, the practical challenges of ensuring that all parties fully understand and agree to these terms remain significant.
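The express-policy idea lends itself to a concrete illustration: the operator's mandate can be enforced in code, so the chatbot literally cannot vary terms beyond it. The Python sketch below is hypothetical; OperatorPolicy and OfferBot are invented names, and the limits are arbitrary examples.

```python
# Hypothetical sketch of an "express policy" enforced in code: the bot
# may vary contract terms only within operator-set bounds, mirroring a
# human agent acting within the scope of their authority.

from dataclasses import dataclass

@dataclass
class OperatorPolicy:
    max_discount_pct: float = 10.0    # largest discount the bot may grant
    may_waive_shipping: bool = False  # whether the bot may waive shipping fees

class OfferBot:
    def __init__(self, policy: OperatorPolicy):
        self.policy = policy

    def propose_discount(self, requested_pct: float) -> float:
        # The bot "negotiates", but never beyond the operator's mandate.
        return min(requested_pct, self.policy.max_discount_pct)

    def waive_shipping(self) -> bool:
        return self.policy.may_waive_shipping  # refuse unless authorised

bot = OfferBot(OperatorPolicy())
print(bot.propose_discount(25.0))  # 10.0 -- capped at the policy limit
print(bot.waive_shipping())        # False -- outside the express policy
```

The trade-off the article identifies is visible here: hard-coding the mandate makes the scope of authority certain, but it also strips away the adaptability that motivates deploying AI in the first place.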


Open Agreements


- Generalized Intention: Principals can be bound by AI's actions if they generally intend to engage in transactions facilitated by AI. This approach is doctrinally appealing but disadvantageous for users and operators due to potential liability without specific consent.

- Objective Theory: AI's actions bind the principal if it is reasonable for users to believe that the principal assents to the AI's behaviour. This approach places an undue burden on operators for the AI's autonomous actions.


Open agreements allow for more flexibility by acknowledging the dynamic nature of AI interactions. The generalized intention and objective theory approaches offer ways to attribute liability based on the overall intent and reasonable expectations of the parties involved. However, these approaches can lead to unpredictable liability outcomes, as they rely on subjective interpretations of intent and reasonableness.


Agency Law Approach


This radical approach suggests altering doctrinal law to:

- Grant AI legal personality: A controversial and impractical solution.

- Give AI agent-like duties/rights without legal personality: AI acts as an intermediary with specific obligations but lacks the capacity to sue or be sued.


This model allows AI to enter contracts on behalf of principals, bypassing the need for express or implied consent. Principals are liable for AI's actions within the scope of authority, much like with human agents.


The agency law approach represents a significant shift in legal thinking, potentially transforming how AI is integrated into legal frameworks. By endowing AI with agent-like duties and rights, this approach acknowledges the unique capabilities of AI while maintaining human accountability. This model provides a more balanced solution, recognizing AI's role in modern transactions without granting it full legal personality.


The intersection of AI and agency law presents significant legal challenges. Traditional legal frameworks struggle to accommodate AI's autonomous roles in contracting and decision-making. While solutions like closed and open agreements offer some relief, they still leave gaps in liability and enforceability. The agency law approach, modifying doctrinal principles to include AI as intermediaries, provides a more comprehensive solution. However, it requires careful consideration and legislative action to ensure the preservation of individual rights and the integrity of the legal system.


Lawmakers must proactively address these issues to adapt to the evolving technological landscape and ensure legal clarity in AI-facilitated transactions. As AI continues to advance, the legal profession must evolve to meet these new challenges, ensuring that the principles of agency, liability, and contract formation remain robust and relevant in the age of artificial intelligence.

