The owner of a fifteen-person UK consultancy has three clients with Canadian operations, one of which is headquartered in Montreal. She has read in passing that Canada’s federal AI bill stalled, taken that as a sign nothing in Canada really applies to her, and moved on. The reality is the opposite of what she concluded. Federal AI legislation is on hold, but Quebec’s Law 25 already applies to her firm, the automated decision-making provisions cover the AI screening she runs in client work, and the penalties are not theoretical.
This is the position a lot of non-Canadian owners are sitting in. The headline news from Canada in 2025 was the death of AIDA. The unreported news is that provincial rules already cover much of what a federal AI Act would have covered, and they apply extraterritorially. The point of this post is to give you the precise picture, so you can work out which Canadian rules attach to your firm and which do not.
This is not Canadian legal advice. Cross-border specifics warrant a Canadian privacy lawyer for anything that matters. What follows is the orientation map.
What is AI governance in Canada right now?
Canada has no dedicated federal AI statute as of May 2026. The Artificial Intelligence and Data Act, introduced inside Bill C-27 in June 2022, died on the order paper when Parliament was prorogued on 6 January 2025. Federal AI rules are currently a layered mix of PIPEDA, sector-specific guidance from financial regulators, voluntary codes, and active provincial laws. A successor bill is widely expected in 2026 but has not been tabled.
The federal layer rests on the Personal Information Protection and Electronic Documents Act (PIPEDA), which sets ten fair information principles for any private-sector organisation handling personal data for commercial purposes. The Office of the Privacy Commissioner of Canada issued joint federal, provincial and territorial principles for generative AI in December 2023, which interpret PIPEDA for AI use cases. A 2023 Voluntary Code of Conduct for advanced generative AI, signed by around thirty firms, sits alongside it as guidance, not law. OSFI Guideline E-23 covers model risk management in federally regulated financial institutions and takes effect in May 2027 after an 18-month transition.
The provincial layer is where the real obligations sit. Quebec's Law 25 is arguably the most stringent privacy regime in North America, fully in force since September 2024. Ontario's Trustworthy AI Framework and the IPC-OHRC Principles bind the public sector and set a private-sector best-practice direction. Alberta has signalled an intention to legislate AI specifically, with British Columbia and other provinces watching closely.
Why does it matter for your business?
It matters because Quebec Law 25 applies extraterritorially with no business-size threshold, and the penalty regime has real teeth. The law covers any organisation, regardless of location, that collects or uses personal information of Quebec residents. Administrative penalties reach CAD 10 million or two per cent of worldwide turnover. Court fines for serious violations reach CAD 25 million or four per cent. Individuals also have a private right of action with statutory damages.
Section 12.1 is the AI-specific provision that catches many common SME use cases. If a decision about a Quebec resident is made exclusively by automated processing, the organisation must inform the person at or before the decision is communicated. On request, the organisation must disclose the personal information used, the factors that led to the decision, the right to correct information, and the opportunity to submit observations to personnel who can review the decision. Resume-screening tools, automated credit-scoring, and AI-driven customer-tier decisions all sit inside this scope unless a human is meaningfully involved, not a rubber-stamp review.
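As a concrete way to hold the Section 12.1 elements in your head, here is an illustrative sketch of the record a firm would need to be able to produce on request. The field names and the completeness check are my own shorthand for the elements listed above, not statutory language or legal drafting:

```python
from dataclasses import dataclass


@dataclass
class AutomatedDecisionRecord:
    """What a firm should be able to produce if a Quebec resident asks
    about a decision made exclusively by automated processing.
    Field names are illustrative shorthand, not statutory terms."""
    decision: str                   # e.g. "application rejected at screening stage"
    personal_info_used: list[str]   # the personal information the system relied on
    principal_factors: list[str]    # the factors that led to the decision
    informed_at_decision: bool      # person told at or before the decision was communicated
    correction_route: str           # how the person can have their information corrected
    review_contact: str             # personnel who can review and take observations


def section_12_1_ready(rec: AutomatedDecisionRecord) -> bool:
    """Rough completeness check: every disclosure element is populated.
    A True result is a documentation sanity check, not a compliance opinion."""
    return (rec.informed_at_decision
            and bool(rec.personal_info_used)
            and bool(rec.principal_factors)
            and bool(rec.correction_route)
            and bool(rec.review_contact))
```

The point of the sketch is the shape, not the wording: if your screening tool cannot populate every field for a given decision, that is the gap to close before a disclosure request arrives.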
The financial cost of getting this wrong is rarely catastrophic for a small services firm, and the typical first enforcement action is a warning and an order, not a maximum fine. The reputational cost with Canadian clients is harder to quantify and arguably worse, particularly with public-sector or quasi-public-sector customers in Ontario and Quebec.
Where will you actually meet Canadian AI rules?
You will meet them in four places. First, in PIPEDA-level handling of any Canadian customer’s personal data, the federal baseline regardless of province. You need a designated privacy officer, identified purposes before collection, valid consent, limited use, appropriate safeguards, individual access and correction rights, and mandatory breach notification at the real-risk-of-significant-harm threshold. The OPC’s December 2023 generative AI principles add specific guidance for AI-driven processing.
Second, in Quebec Law 25 if any of your customers are Quebec residents. The compliance load is closer to GDPR than to other North American privacy regimes. You need a designated privacy officer, a plain-language public privacy policy, opt-in consent for collection, mandatory privacy impact assessments before new digital systems involving personal data are launched, written agreements covering cross-border transfers with documented equivalent-protection assessments, and Section 12.1 disclosures for automated decision-making.
Third, in Ontario if you serve public-sector customers there. Bill 194 amendments to FIPPA came into force on 1 July 2025 and impose mandatory written privacy impact assessments, mandatory breach reporting at the real-risk-of-significant-harm threshold, and expanded IPC oversight including binding orders. The IPC-OHRC Principles for responsible AI use are increasingly being treated as the private-sector best-practice standard, particularly by organisations bidding into Ontario public-sector procurement.
Fourth, in financial-services-specific obligations if you are a vendor to a federally regulated Canadian bank, insurer or trust company. OSFI Guideline E-23 makes the institution responsible for model risk management across the whole enterprise, including vendor-supplied models. As the vendor, you carry the documentation, audit-trail, and pre-deployment testing load that lets the institution demonstrate compliance to OSFI.
When to ask versus when to ignore
Ask when you have any Quebec-resident customers, when you use AI to make or substantially influence decisions about Canadian individuals, when you serve Ontario public-sector customers, or when you supply AI to a federally regulated Canadian financial institution. Ignore the temptation to wait for federal AI legislation before treating the Canadian landscape as live. Provincial rules already cover the ground a federal act would cover, and they have been enforced through 2025 and into 2026.
A useful filter is the four-question test. One, do you handle personal information of any Canadian resident in a commercial capacity? If yes, PIPEDA applies. Two, do any of those Canadians live in Quebec? If yes, Law 25 applies in full. Three, does your AI make or substantially influence decisions about Canadian individuals without meaningful human review? If yes, Section 12.1 attaches for Quebec residents and the OPC generative AI principles attach federally. Four, do you transfer Canadian personal data outside Canada? If yes, Law 25 imposes adequacy assessment, written safeguard agreements, and individual notification for Quebec-resident data.
If the answers are PIPEDA-only with no Quebec customers, the workload is real but proportionate. Appoint a privacy officer, document purposes and consent, implement breach notification, and follow the December 2023 generative AI principles when AI is in the picture. If Quebec is in scope, treat Law 25 as a GDPR-equivalent project and bring in a Canadian privacy lawyer before you finalise the Section 12.1 disclosure wording or the cross-border transfer agreement.
Related concepts
This post sits inside a wider cluster on AI risk, trust and governance for SMEs. The pillar post is AI risk and governance for owner-operated businesses, the structural read on what governance looks like at this scale. For the surrounding regulatory picture, the EU AI Act for UK and EU SMEs, the UK pro-innovation pivot, and the US patchwork cover the other three jurisdictions Canadian-facing firms commonly serve.
For firms operating across more than one jurisdiction, multi-jurisdiction AI compliance for SMEs is the integration view. For internal practice, read the minimum viable AI policy for a small business and the audit trail an SME actually needs. For the Section 12.1 disclosure conversation, disclosing AI use to customers covers the practical wording.
The cluster does not replace specialist legal advice on your specific situation. Quebec Law 25 has edge cases worth a Canadian privacy lawyer’s time, particularly on cross-border transfers and Section 12.1 disclosures for automated decision-making. If you want to talk through where your firm sits in the Canadian picture and what a proportionate response looks like at your scale, book a conversation.