FOR EDUCATIONAL AND KNOWLEDGE SHARING PURPOSES ONLY. NOT-FOR-PROFIT. SEE COPYRIGHT DISCLAIMER.

Code sets voluntary commitments for organizations to demonstrate responsible development and management of advanced generative artificial intelligence systems

OTTAWA, ON, Dec. 7, 2023 /CNW/ – Recent advances in artificial intelligence (AI) technology are benefiting society in many ways, including by improving supply chain management, enhancing health care and precision medicine, and helping tackle environmental sustainability challenges. These advances, however, have also reaffirmed the urgency of ensuring that AI systems, particularly advanced generative AI systems, are developed and used safely. That is why the Government of Canada recently launched the Voluntary Code of Conduct on the Responsible Development and Management of Advanced Generative AI Systems.

Today, the Honourable François-Philippe Champagne, Minister of Innovation, Science and Industry, announced that eight more organizations have undertaken this voluntary commitment in support of the ongoing development of a robust, responsible AI ecosystem in Canada. New signatories include:

  • AltaML
  • BlueDot
  • CGI
  • kama.ai
  • IBM
  • Protexxa Inc.
  • Resemble AI
  • Scale AI

The code identifies measures that organizations are encouraged to apply to their operations when they are developing and managing advanced generative AI systems. In addition, Canada continues to engage in domestic and international discussions supporting the creation of common standards and safeguards for generative AI systems.

The code is based on the input received from a cross-section of stakeholders through these engagements and through the consultation on the development of a Canadian code of practice for generative AI systems. The government has published a report that summarizes feedback received during the consultation.

Quotes

“Our government is committed to ensuring Canadians can trust AI systems used across the economy, which in turn can accelerate safe AI adoption. In undertaking our Voluntary Code of Conduct on the Responsible Development and Management of Advanced Generative AI Systems, leading Canadian organizations continue to adopt responsible measures for advanced generative AI systems that will help build safety and trust as the technology spreads. We will continue to ensure Canada’s AI policies are fit for purpose in a fast-changing world.”
– The Honourable François-Philippe Champagne, Minister of Innovation, Science and Industry

“At CGI, we are pleased to see the Government of Canada taking a strong position on this important topic as we believe that realizing the tremendous benefits of AI innovation requires principles and governance that balance human-centric and responsible use practices. We are proud to be a signatory to Canada’s voluntary AI code of conduct, which reinforces our plan to allocate $1 billion over the next three years to continually expand our AI-based capabilities.”
– François Boulanger, President and Chief Operating Officer, CGI

“There is a tremendous opportunity on the doorstep of Canadian businesses and government to harness the power of AI for good. But for the benefits of generative AI to truly take hold, both businesses and government must ensure the right guardrails are in place, and Canada’s voluntary AI code of conduct will play a key role in ensuring responsible adoption. As a global leader in AI innovation, IBM believes businesses must embed ethics into their AI applications and processes to build trust and protect against possible risks such as bias and discrimination. We are pleased to join the Canadian government and other organizations in this effort to ensure the development and deployment of generative AI applications are used in smart and trusted ways.”
– Christina Montgomery, Vice President and Chief Privacy & Trust Officer, IBM

Quick facts
  • The Government of Canada has already taken significant steps toward ensuring that AI technology evolves responsibly and safely through the proposed Artificial Intelligence and Data Act (AIDA), which was introduced in June 2022 as part of Bill C-27, also known as the Digital Charter Implementation Act, 2022.
  • AIDA is designed to promote the responsible design, development and use of AI systems in Canada’s private sector, with a focus on systems with the greatest impact on health, safety and human rights (high-impact systems).
  • Since the introduction of Bill C-27, the government has engaged extensively with stakeholders on AIDA and will continue to seek the advice of Canadians, experts—including the government’s Advisory Council on Artificial Intelligence—and international partners on the novel challenges posed by emerging AI technologies.
  • Bill C-27 was adopted at second reading in the House of Commons in April 2023 and was referred to the House of Commons Standing Committee on Industry and Technology for study.
Stay connected

Find more services and information at Canada.ca/ISED.

Follow Innovation, Science and Economic Development Canada on social media.
X (Twitter): @ISED_CA, Facebook: Canadian Innovation, Instagram: @cdninnovation and LinkedIn

SOURCE Innovation, Science and Economic Development Canada

For further information: Audrey Champoux, Press Secretary and Senior Communications Advisor, Office of the Minister of Innovation, Science and Industry, audrey.champoux@ised-isde.gc.ca; Media Relations, Innovation, Science and Economic Development Canada, media@ised-isde.gc.ca

Canada.ca. Voluntary Code of Conduct on the Responsible Development and Management of Advanced Generative AI Systems.

From: Innovation, Science and Economic Development Canada.

September 2023

Advanced AI systems capable of generating content — such as ChatGPT, DALL·E 2, and Midjourney — have captured the world’s attention. The general-purpose capabilities of these advanced AI systems offer enormous potential for innovation in a number of fields, and they are already being adopted and put to use in a variety of contexts. These advanced systems may be used to perform many different kinds of tasks — such as writing emails, answering complex questions, generating realistic images or videos, or writing software code.

While they have many benefits, advanced generative AI systems also carry a distinctly broad risk profile, due to the broad scope of data on which they are trained, their wide range of potential uses, and the scale of their deployment. Systems that are made publicly available for a range of different uses can present risks to health and safety, can propagate bias, and carry the potential for broader societal impacts, particularly when used by malicious actors. For example, the capability to generate realistic images and video, or to impersonate the voices of real people, can enable deception at a scale that can damage important institutions, including democratic and criminal justice systems. These systems may also have important implications for individual privacy rights, as highlighted in the G7 Data Protection and Privacy Authorities’ Statement on Generative AI.

Generative systems can also be adapted by organizations for specific uses – such as corporate knowledge management applications or customer service tools – which generally present a narrower range of risks. Even so, there are a number of steps that need to be taken to ensure that risks are appropriately identified and mitigated.

To address and mitigate these risks, signatories to this code commit to adopting the identified measures. The code identifies measures that should be applied in advance of binding regulation pursuant to the Artificial Intelligence and Data Act by all firms developing or managing the operations of a generative AI system with general-purpose capabilities. It also identifies additional measures that should be taken by firms developing or managing the operations of systems that are made widely available for use, and which are therefore subject to a wider range of potentially harmful or inappropriate use. Firms developing these systems and firms managing their operations both have important and complementary roles, and they need to share relevant information to ensure that adverse impacts can be addressed by the appropriate firm.

While the framework outlined here is specific to advanced generative AI systems, many of the measures are broadly applicable to a range of high-impact AI systems and can be readily adapted by firms working across Canada’s AI ecosystem. It is also important to note that this code does not in any way change existing legal obligations that firms may have – for example, under the Personal Information Protection and Electronic Documents Act.

In undertaking this voluntary commitment, developers and managers of advanced generative systems commit to working to achieve the following outcomes:

  • Accountability – Firms understand their role with regard to the systems they develop or manage, put in place appropriate risk management systems, and share information with other firms as needed to avoid gaps.
  • Safety – Systems are subject to risk assessments, and mitigations needed to ensure safe operation are put in place prior to deployment.
  • Fairness and Equity – Potential impacts with regard to fairness and equity are assessed and addressed at different phases of development and deployment of the systems.
  • Transparency – Sufficient information is published to allow consumers to make informed decisions and for experts to evaluate whether risks have been adequately addressed.
  • Human Oversight and Monitoring – System use is monitored after deployment, and updates are implemented as needed to address any risks that materialize.
  • Validity and Robustness – Systems operate as intended, are secure against cyber attacks, and their behaviour in response to the range of tasks or situations to which they are likely to be exposed is understood.

Signatories also commit to support the ongoing development of a robust, responsible AI ecosystem in Canada. This includes contributing to the development and application of standards, sharing information and best practices with other members of the AI ecosystem, collaborating with researchers working to advance responsible AI, and collaborating with other actors, including governments, to support public awareness and education on AI. Signatories also commit to develop and deploy AI systems in a manner that will drive inclusive and sustainable growth in Canada, including by prioritizing human rights, accessibility and environmental sustainability, and to harness the potential of AI to address the most pressing global challenges of our time.


Measures to be undertaken pursuant to the Code of Conduct

In the table below, the "All" columns indicate whether a measure applies to developers and managers of all advanced generative systems, and the "Public" columns indicate whether it applies to developers and managers of advanced generative systems made available for public use.

| Principle | Measure | All: Developers | All: Managers | Public: Developers | Public: Managers |
| --- | --- | --- | --- | --- | --- |
| Accountability | Implement a comprehensive risk management framework proportionate to the nature and risk profile of activities. This includes establishing policies, procedures, and training to ensure that staff are familiar with their duties and the organization’s risk management practices. | Yes | Yes | Yes | Yes |
| Accountability | Share information and best practices on risk management with firms playing complementary roles in the ecosystem. | Yes | Yes | Yes | Yes |
| Accountability | Employ multiple lines of defence, including conducting third-party audits prior to release. | No | No | Yes | No |
| Safety | Perform a comprehensive assessment of reasonably foreseeable potential adverse impacts, including risks associated with inappropriate or malicious use of the system. | Yes | Yes | Yes | Yes |
| Safety | Implement proportionate measures to mitigate risks of harm, such as by creating safeguards against malicious use. | Yes | No | Yes | No |
| Safety | Make available to downstream developers and managers guidance on appropriate system usage, including information on measures taken to address risks. | Yes | No | Yes | No |
| Fairness and Equity | Assess and curate datasets used for training to manage data quality and potential biases. | Yes | No | Yes | No |
| Fairness and Equity | Implement diverse testing methods and measures to assess and mitigate the risk of biased output prior to release. | Yes | No | Yes | No |
| Transparency | Publish information on the capabilities and limitations of the system. | No | No | Yes | No |
| Transparency | Develop and implement a reliable and freely available method to detect content generated by the system, with a near-term focus on audio-visual content (e.g., watermarking). | No | No | Yes | No |
| Transparency | Publish a description of the types of training data used to develop the system, as well as measures taken to identify and mitigate risks. | No | No | Yes | No |
| Transparency | Ensure that systems that could be mistaken for humans are clearly and prominently identified as AI systems. | No | Yes | No | Yes |
| Human Oversight and Monitoring | Monitor the operation of the system for harmful uses or impacts after it is made available, including through the use of third-party feedback channels, and inform the developer and/or implement usage controls as needed to mitigate harm. | No | Yes | No | Yes |
| Human Oversight and Monitoring | Maintain a database of reported incidents after deployment, and provide updates as needed to ensure effective mitigation measures. | Yes | No | Yes | No |
| Validity and Robustness | Use a wide variety of testing methods across a spectrum of tasks and contexts prior to deployment to measure performance and ensure robustness. | Yes | No | Yes | No |
| Validity and Robustness | Employ adversarial testing (i.e., red-teaming) to identify vulnerabilities. | Yes | No | Yes | No |
| Validity and Robustness | Perform an assessment of cyber-security risk and implement proportionate measures to mitigate risks, including with regard to data poisoning. | Yes | No | Yes | Yes |
| Validity and Robustness | Perform benchmarking to measure the model’s performance against recognized standards. | Yes | No | Yes | No |
