AI as Corporate Observer: Examining the Legal Implications for Directors

The author is Monarch Trivedi, a second-year student at Dr. Ram Manohar Lohiya National Law University.


Introduction


The growth of Large Language Model (“LLM”) based Artificial Intelligence in recent years has been remarkable. It has become ubiquitous and continues transforming almost every aspect of human activity. In a conversation with Microsoft co-founder Bill Gates, OpenAI CEO Sam Altman expressed that the AI revolution is poised to unfold at an unprecedented pace, emphasizing the swift integration of this technology into society.


In the age of AI, it will be favourable for corporations to reap the maximum benefits from these LLMs. Given their enormous capacities, examining the scope of integration of LLMs in corporate governance is worthwhile. Some of the corporate entities outside India have come up with the practice of appointing AI as an observer in the boardroom. This article discusses the legal aspect of this practice vis-à-vis the current regulatory framework in India and delves into the obligations of Directors if they choose to adopt this practice.


AI in Corporate Boardrooms


A Large Language Model (LLM) is an artificial intelligence program that can recognize and generate text, among other tasks. LLM-based AIs are in vogue today, the most popular among them being ChatGPT, and they are used for multiple purposes. Such AIs have the potential to participate in decision-making much like humans and are, in fact, capable of analysing large amounts of data relating to the company, the markets, and global economic, geopolitical, and environmental factors.


Under the existing legal framework, an AI cannot be appointed as a Director on the board of a company. A director is defined under Section 2(34) of the Companies Act, 2013, and Section 149(1) provides that “every company shall have a Board of Directors consisting of individuals as directors”. Judicial dicta do not clarify whether the term “individual” is broad enough to bring artificial intelligence within its ambit. Moreover, certain obligations, and the punishments prescribed for their breach under the current regime, cannot be attributed to an AI.


However, to assimilate AI on the company board, various corporations across the globe have resorted to giving AI an “Observer” status. In 2014, Deep Knowledge Ventures, a Hong Kong-based venture capital group, made news when it decided to induct an algorithm named “VITAL” onto its board of directors. VITAL does not meet the criteria of a ‘natural person’ under the legal standards for corporate directors in Hong Kong. Subsequently, in 2016, Tieto, a Finnish IT company, appointed Alicia T, an AI application, to its leadership team with voting rights to lead its new data-driven business unit. So far, there has been no notable instance of an AI joining a board in India.


The word “observer” is neither used nor defined in the Companies Act, 2013. As the name suggests, observers may attend board meetings and “observe” the proceedings. Observers do not have the right to vote, which is the primary mechanism by which directors exercise their management functions. However, an observer may advise the board on important policy decisions of the corporation.


To What Extent Can AI Assist Directors


Though there are no explicit provisions governing the reliance that directors may place on an observer, whether an AI or a natural person, certain inferences may be drawn from an analysis of the relevant provisions. In India, directors are bound by specific duties. Directors are agents of the company, and the appointment of a director carries the stipulation that the director cannot transfer their position. A director might be in breach of duty if they leave to others the matters for which the board must take responsibility. Directors are bound by the principle of ‘delegatus non potest delegare’, emphasizing that the competencies, skills, and integrity for which they are appointed cannot be delegated to others. This implies that board members cannot entrust to AI those decision-making responsibilities which lie at the heart of the management of a corporate institution.


The duties imposed on directors under Sections 166(2) and 166(3) include acting for the benefit of the members of the company and in the best interests of the company. The utmost good faith is expected in the discharge of their duties, and it is the duty of a director to exercise reasonable care, skill, and diligence. These provisions prevent directors from recklessly relying on AI and its outputs.


In the case of listed entities, Regulation 4(2)(f)(i)(2) of the SEBI (Listing Obligations and Disclosure Requirements) Regulations, 2015 (“LODR”) provides that the board of directors and senior management shall conduct themselves so as to meet the expectations of operational transparency to stakeholders while at the same time maintaining the confidentiality of information, in order to foster a culture of good decision-making. If the board decides to feed confidential data on company affairs to an AI, the fiduciary capacity in which directors act enjoins upon them a duty to make full and honest disclosure to shareholders regarding all the data the AI is processing. Thus, if an AI is inducted into the board, directors have a duty not only to inform shareholders about the AI’s inputs but also to protect the data submitted to it.


Problems That May Arise


There is no doubt about the enormous capacities of AI; however, caution must be exercised when relying on its outputs. There are some inherent structural problems with LLM-based AI which may prove detrimental to the company if it is relied upon without due care. AI “hallucination” is one such phenomenon, wherein an LLM, often a generative AI chatbot or computer vision tool, produces nonsensical or inaccurate outputs. This happens because an AI is trained to generate outputs based on the data previously fed into it: the outcomes are founded on the ‘probability’ of what might follow and may have little or no connection with reality, since the AI generates them merely by analysing previous inputs. Such a ‘hallucinated’ response might lead to incorrect financial forecasts, investment decisions, or market analyses, potentially resulting in significant financial losses for the company and inducing board members to adopt the wrong policies.


Another corollary of this tendency to generate outcomes from past data is AI bias. If the data loaded into the algorithm was discriminatory against a specific section of society, the same will be reflected in the results generated, and stereotypes about certain sections of society may find their way into the AI’s analysis. This can include overestimating or underestimating the potential of employees, ultimately resulting in an ineffective division of labour within the corporation if board members rely on AI before assigning work to employees.


These problems might be solved by Artificial General Intelligence (AGI). AGI is a hypothetical intelligent agent, considered an upgraded form of AI that would have the same cognitive capacities as a human. It would be able to analyse data highly efficiently, but the concept remains theoretical: as of today, no true AGI exists.


Conclusion


To ensure the seamless incorporation of AI, the presence of robust State machinery is imperative. An AI boom in the near future is reasonably anticipated, and to deal with it, the government might consider setting up a specialized organization of qualified experts who can check an AI for bias before it is licensed and put on sale. These experts could also test the AI’s decision-making capabilities in real-life circumstances, and any risk factors could be labelled so that board members can exercise the requisite caution.


It would be prudent for the legislature to hold directors responsible for the activities of AI observers. If an AI violates any existing law, resulting in loss to the company or its stakeholders, the directors must incur liability, as they enjoy ultimate oversight and decision-making authority on behalf of the company. This would further encourage directors to examine AI outputs thoroughly and check the real-life applicability of its suggestions.


From a corporate governance perspective, it is undeniable that AI has great potential and can work wonders in the corporate world. But the presence of, and need for, humans cannot be neglected. The whole point of a human in a corporate boardroom is the ability to think creatively when unexpected situations arise. When deploying AI in the boardroom, many board members may not understand the working mechanism of the AI; in such a scenario, it is essential to include an AI specialist who is well-versed in its intricacies. Moreover, AI outputs must pass the board members’ scrutiny and independent analysis.
