As a global leader in responsible artificial intelligence (“AI”), Quebec builds on flagship initiatives such as the Montréal Declaration for Responsible AI Development and the creation of Mila, the Quebec AI research institute, to promote responsible AI development, an issue of growing complexity. AI offers companies unique opportunities to optimize their governance, particularly through predictive analytics and risk management, but its responsible integration raises significant legal and ethical challenges, especially around liability, transparency, privacy and regulatory compliance.
Use of AI by Boards of Directors
Boards of directors face growing complexity: large volumes of data, multidimensional risks and heightened regulatory requirements. In this context, AI can serve as a decision-support tool, facilitating the analysis of financial data and the modelling of risk scenarios. It can thus enable directors to make better-informed decisions, and to make them more quickly.
However, AI does not replace human judgment: intuition, moral reasoning and legal responsibility remain the sole domain of board members. AI should thus be used as a support mechanism within decision-making processes, not as a substitute for the directors who carry them out.
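To make the risk-modelling point above concrete, the following minimal sketch shows the kind of scenario analysis a board-support tool might run. It is illustrative only: the base revenue, growth parameters and percentile threshold are assumptions, not figures from any real company or any specific product.

```python
import random

def simulate_annual_revenue(base_revenue, growth_mean, growth_std, n_scenarios=10_000):
    """Monte Carlo simulation of next year's revenue under uncertain growth.

    All parameters are illustrative assumptions for this sketch.
    """
    outcomes = []
    for _ in range(n_scenarios):
        growth = random.gauss(growth_mean, growth_std)  # sample one growth rate
        outcomes.append(base_revenue * (1 + growth))
    return outcomes

# Hypothetical inputs: $50M base revenue, 5% expected growth, 12% volatility.
scenarios = simulate_annual_revenue(50_000_000, 0.05, 0.12)
scenarios.sort()
downside = scenarios[int(0.05 * len(scenarios))]  # 5th-percentile outcome
print(f"5% worst-case revenue: ${downside:,.0f}")
```

A tool of this kind summarizes a distribution of outcomes; it is the directors, not the model, who must decide what level of downside risk is acceptable.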
What the Law Says in Quebec and Canada About AI Regulation
In Quebec, the recent Act to modernize legislative provisions as regards the protection of personal information (“Law 25”), most of which came into force on September 22, 2023, requires proactive measures for managing data and automated decision-making systems (including AI tools). Companies must conduct a privacy impact assessment before deploying and using such systems and must inform individuals when a decision concerning them is based exclusively on automated processing of their personal information. These requirements apply directly to boards of directors that rely on AI tools to guide their decisions.
In Canada, the Personal Information Protection and Electronic Documents Act (“PIPEDA”) governs the collection, use and disclosure of personal information in the course of commercial activities. While PIPEDA does not explicitly address AI or automated decision-making systems, it still applies to the use of personal information by AI systems in the course of these activities. More recently, the Digital Charter Implementation Act, 2022 (“Bill C-27”) sought to introduce Canada’s first AI-specific legislation, the Artificial Intelligence and Data Act (“AIDA”), and to reform Canadian privacy law. AIDA would have introduced rules to better regulate high-risk AI systems. Although Bill C-27 was dropped from the Order Paper on January 6, 2025, following the prorogation of Parliament, it nevertheless reflected the government’s genuine intent to regulate AI proactively and address emerging technological challenges.
Issues of Liability and Transparency
Directors’ Legal Liability
Using AI does not relieve directors of their legal responsibility. Even when decisions are influenced by AI systems, directors remain fully liable for the decisions they make in accordance with their fiduciary duties under the Civil Code of Québec and the company’s constituting statute. Board members must exercise their judgment, ensure the reliability of AI tools and be able to justify their decisions. This notably requires diligent verification of the AI systems used, thorough documentation of decision-making processes (including AI input) and ongoing human oversight, even when AI generates automated recommendations.
Transparency and Decision Traceability
Transparency is a fundamental pillar of responsible governance. Under Law 25, companies must inform individuals when decisions concerning them are based exclusively on automated processing of their personal information. This includes algorithms used in board decisions, particularly regarding risk management, appointments or performance evaluations. Boards of directors must therefore inform stakeholders (shareholders, employees, clients, etc.) of the use of AI in strategic decisions, explain the criteria and data that influenced those decisions, and implement traceability mechanisms to retrace the steps that led to a decision, including any AI-generated analyses or recommendations.
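As an illustration of what such a traceability mechanism could look like in practice, the following minimal sketch records each board decision alongside the AI recommendation that informed it. The record fields, file format and example entries are assumptions made for illustration; Law 25 does not prescribe any particular format.

```python
import json
from dataclasses import dataclass, field, asdict
from datetime import datetime, timezone

@dataclass
class DecisionRecord:
    """Hypothetical audit-trail entry linking a board decision to its AI input."""
    decision: str             # the resolution adopted by the board
    ai_tool: str              # which system produced the recommendation
    ai_recommendation: str    # what the system actually suggested
    data_sources: list[str]   # data sets the recommendation relied on
    human_rationale: str      # why directors accepted or overrode the AI
    timestamp: str = field(
        default_factory=lambda: datetime.now(timezone.utc).isoformat()
    )

def append_to_log(record: DecisionRecord, path: str = "decision_log.jsonl") -> None:
    # Append-only JSON Lines file so past entries cannot be silently edited.
    with open(path, "a", encoding="utf-8") as f:
        f.write(json.dumps(asdict(record)) + "\n")

append_to_log(DecisionRecord(
    decision="Approve 2025 cybersecurity budget increase",
    ai_tool="RiskModel v2 (hypothetical)",
    ai_recommendation="Increase budget 15% given modelled breach exposure",
    data_sources=["incident history 2020-2024", "industry benchmarks"],
    human_rationale="Board adopted a 10% increase after reviewing the model's assumptions",
))
```

The key design choice is that the human rationale is recorded alongside the AI output, so the log can later show both what the system recommended and why the board agreed or departed from it.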
Risks Related to Algorithmic Bias
AI systems are particularly vulnerable to bias, often originating in the data on which they were trained. These biases can compromise the neutrality of decisions, especially in sensitive areas such as recruitment, performance management or resource allocation. A prominent example is Amazon’s experimental recruiting tool, which several years ago was found to systematically disadvantage female candidates because it had been trained on résumés from a predominantly male tech workforce. Boards must therefore ensure that the tools they use have been rigorously tested, audited and validated to minimize the risk of discrimination or inequity.
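One simple audit technique boards can ask their teams about is a disparate-impact check: comparing selection rates across groups and flagging a tool when the ratio falls below the commonly used four-fifths threshold. The sketch below is a minimal illustration using invented outcome data; it is not a complete fairness audit.

```python
def selection_rate(outcomes: list[int]) -> float:
    """Fraction of candidates in a group who received a positive outcome (1)."""
    return sum(outcomes) / len(outcomes)

def disparate_impact_ratio(group_a: list[int], group_b: list[int]) -> float:
    """Ratio of the lower selection rate to the higher one (1.0 = parity)."""
    rate_a, rate_b = selection_rate(group_a), selection_rate(group_b)
    return min(rate_a, rate_b) / max(rate_a, rate_b)

# Hypothetical screening outcomes: 1 = advanced to interview, 0 = rejected.
women = [1, 0, 0, 1, 0, 0, 0, 1, 0, 0]   # 30% selected
men   = [1, 1, 0, 1, 1, 0, 1, 1, 0, 1]   # 70% selected

ratio = disparate_impact_ratio(women, men)
print(f"Disparate impact ratio: {ratio:.2f}")
if ratio < 0.8:  # the common "four-fifths rule" threshold
    print("Warning: tool may disadvantage one group; audit before use.")
```

A failing ratio does not by itself prove discrimination, but it is a signal that the tool should not be relied on until the disparity has been investigated.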
Recommendations for Companies
To integrate AI responsibly into board operations, organizations must adopt a proactive approach and develop a solid understanding of AI technologies. Companies should:
- Train directors and raise their awareness of the technological and legal issues associated with AI;
- Establish internal compliance policies governing AI use in decision-making;
- Ensure constant human oversight of AI tools (a minimal sketch of such an oversight gate follows this list);
- Implement accountability and transparency mechanisms related to AI use;
- Monitor legislative developments, including provincial and federal initiatives on personal data protection.
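To illustrate the human-oversight recommendation above, the following minimal sketch shows an approval gate in which an AI recommendation takes effect only after a named reviewer has explicitly approved or overridden it and recorded a rationale. The function names and workflow are assumptions for illustration, not a prescribed compliance mechanism.

```python
from dataclasses import dataclass
from datetime import datetime, timezone

@dataclass
class Recommendation:
    summary: str       # what the AI system proposes
    confidence: float  # the system's self-reported confidence, 0..1

def apply_with_oversight(rec: Recommendation, reviewer: str,
                         approved: bool, rationale: str) -> dict:
    """Hypothetical gate: an AI recommendation takes effect only once a
    named human reviewer has explicitly approved or overridden it."""
    return {
        "recommendation": rec.summary,
        "ai_confidence": rec.confidence,
        "reviewer": reviewer,
        "approved": approved,
        "rationale": rationale,  # the human reasoning is always recorded
        "reviewed_at": datetime.now(timezone.utc).isoformat(),
    }

outcome = apply_with_oversight(
    Recommendation("Reallocate 5% of reserves to lower-risk assets", 0.82),
    reviewer="Chair of the Risk Committee",
    approved=False,
    rationale="Model ignores a pending acquisition; decision deferred.",
)
print(outcome)
```

The point of the pattern is structural: no code path allows the recommendation to be acted on without a human identity and a written rationale attached to it.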
Conclusion
The integration of AI within boards of directors shows great promise for optimizing and accelerating certain decisions. However, this integration must be accompanied by rigorous governance focused on responsibility, transparency and ethics. By adopting a proactive approach, companies can fully benefit from AI’s potential while meeting their legal obligations and preserving stakeholder trust.
