Discover how AI is transforming boardrooms, challenging legal boundaries, and reshaping decision-making processes in ways you never imagined possible.
This blog is written by Syed Raiyyan, a student of Rajiv Gandhi National University of Law.

I Introduction
Today, Artificial Intelligence (AI) has become an integral part of our lives, and the corporate world has not remained isolated from these developments. Some companies have even appointed AI programs as CEOs and members of the Board of Directors. The World Economic Forum predicted as early as 2015 that, by 2025, AI would sit on corporate boards of directors. AI can analyse voluminous data, identify trends and patterns, predict future events, and make recommendations. These capabilities make AI an attractive candidate for positions such as CEO or board director. Thus, unsurprisingly, AI is becoming increasingly popular amongst corporations as an aid to policy decisions. In the coming years, AI will likely become even more deeply involved in the corporate world. This change will require us to re-imagine corporate regulations and make them wide enough to accommodate AI. Today, the law “is built on the philosophical foundation of anthropocentricism.” It rests on the understanding that certain traits of humans separate them from other things, both animate and inanimate. AI is challenging that presumption more and more every day. The question that now confronts us is how to assimilate this new entity, AI, into our human-centric legal framework. But what does this “anthropocentric law” say, in the Indian context, about the Board of Directors of a company, and is it legally conceivable to integrate AI into this corpus of law?
II The Companies Act, 2013 and the Responsibilities of the Board of Directors
In India, the Board of Directors is regulated by the Companies Act, 2013. Section 2(34) of the Act defines a director as “any person appointed to the Board of a company”. Section 166 of the Act lays down the responsibilities and obligations of directors. These include:
the obligation to conduct themselves in accordance with the company's Articles of Association;
the obligation to act in utmost good faith and to prioritize the company's best interests;
the obligation to exercise reasonable care, skill, and diligence while exercising independent judgment;
the obligation to refrain from engaging in any activity that creates a conflict of interest with the company;
the obligation not to seek unwarranted personal advantage or gain from their position; and
the obligation not to transfer their office; any such assignment is void.
These obligations are consistent with the idea that directors owe fiduciary duties to the company, and this must be kept in mind when we speak of using AI in the context of the Board of Directors. Furthermore, Section 179 of the Act provides that directors have the power to delegate their work; however, the delegation must conform to the provisions of the Act. In this regard, the Act states that the Board may delegate certain powers, such as borrowing monies, investing the funds of the company, granting loans, and giving guarantees or providing security in respect of loans, on such conditions as may be specified, to any committee of directors, the Managing Director, the Manager, or any other principal officer of the company. These positions can only be held by natural persons. Thus, under the current legal regime, it is not permissible for an AI to be a director or a person to whom a director's powers can be delegated. However, AI can still play an important role in the decision-making process and, as argued below, AI does not need legal personhood to contribute to the decision-making processes of the Board.
III AI as Director and Advisor
AI decision-making models can be broadly classified under three headings: autonomous, augmented, and assisted. For this discussion, it is convenient to treat the autonomous model under one heading and the augmented and assisted models under another.
A) Autonomous
In this model, the AI acts independently and has the authority to make decisions. The AI would be treated as a separate member of the Board with the same rights and powers as other board members, such as the right to vote on various decisions. This model is not legally tenable, at least not yet. The legal issue with this model ultimately comes down to the question of legal personhood for AI. The directors of a company have certain obligations, and neglecting these duties may lead to penalisation. The penal measures under the Companies Act, 2013 generally involve fines. How can an AI be expected to pay fines or appear before courts before it has been granted legal personhood? A strong objection to granting legal personhood to AI is its lack of self-consciousness and its inability to distinguish between right and wrong.
Interestingly, it has been argued that an autonomous model of AI could work for ‘independent directors’. Independent directors are defined under Section 149(6) of the Companies Act. They are persons who have no pecuniary or other interest in the company and are appointed to offer unbiased advice to it. Although independent directors are exempted from certain liabilities, they are not completely shielded. Even a non-executive or independent director is held accountable for violations of the Act that occurred with their knowledge (attributable through Board processes), with their consent or connivance, or where they failed to act diligently.
Although it is not certain that an AI can act in a non-diligent manner, it is not above suspicion. It has been argued that AI tends to produce false or misleading results in the absence of adequate information. Therefore, it would be incorrect to argue that AI can function autonomously even as an independent director.
As noted above, this model of governance would not work until AI is granted legal personhood, and, in its current state, it is not feasible to grant such personhood to AI. Thus, this model must, for now, be kept in abeyance.
B) Augmented and Assisted
Under the augmented and assisted models of decision-making, AI is used to aid the directors in performing their duties. This could include making suggestions, analysing data, identifying trends, and so on. The final decision-making power would rest with the Board of Directors. As a result, in this model, the entire responsibility for every decision rests with the directors. But even this is not free from challenges. Along with the duties described in the Companies Act, 2013, the Information Technology (IT) Act, 2000 also imposes certain obligations. Section 43A of the IT Act places the onus on “body corporates” to maintain reasonable security practices for the data they handle, so that there is no breach causing wrongful loss or wrongful gain to any person. Any AI model requires data to work; it needs information to make meaningful suggestions. Thus, using AI would open new dimensions of cybersecurity in corporate law. Furthermore, to act as an advisor, the AI might also need company data, such as information about the shareholders. This raises the question of shareholder consent for the use of AI by the Board of Directors.

Along with this, it is also important to protect the directors. Directors are obligated to make well-informed decisions. As noted above, AI does tend to misrepresent data or give false results. Since it is the directors who would eventually be held liable for any breach of regulations, the question emerges: to what extent can directors be considered to have performed their obligations if they relied on AI while making a decision? All these issues would need to be properly addressed to ensure that AI is integrated into corporate governance in a just and safe manner.
IV Space for Reforms
An assisted or augmented model of AI decision-making can be incorporated within the existing legal structure. For instance, guidelines could be issued for directors on how AI may be used. These could include recommendations such as requiring shareholder consent before incorporating AI into Board processes and training directors on how and when AI can be used. Such training would help directors ensure that they do not feed the AI any data that could be detrimental to the company and understand when they can and cannot trust its output. Furthermore, it is crucial to amend the IT Act, 2000 and the Companies Act, 2013 to address more specifically the obligations of directors regarding data privacy (such as what company data and information may be provided to an AI).
Conclusion
In conclusion, there are both benefits and drawbacks to incorporating AI into corporate governance, especially at the level of the Board of Directors. Although the idea of AI making decisions on its own would necessitate major legal developments, the augmented and assisted models provide a quicker route. To effectively integrate AI into business decision-making, creating precise policies, improving data-security protocols, securing shareholder approval, and giving directors the necessary training are essential. Furthermore, amendments to current laws, such as the Companies Act and the IT Act, can aid in navigating the changing terrain of artificial intelligence's place on corporate boards. Scientific innovation and legal protections must be carefully balanced if AI is to be integrated successfully.