AI in Job Postings: What Employers in Canada Need to Know
Wednesday, May 21, 2025

Artificial intelligence (AI) is rapidly changing the hiring landscape. Whether scanning resumes with machine learning tools or ranking candidates based on predictive models, employers in Canada may now want to ensure transparency when using AI during recruitment. This is no longer just a best practice—it is increasingly being reflected in legislative requirements.

Quick Hits

  • In Ontario, if AI is used to screen, assess, or select applicants, a disclosure may be required directly in the job posting.
  • Employers with fewer than twenty-five employees are exempt from Ontario’s requirement.
  • In Quebec, if a decision is made exclusively through automated processing (such as AI), employers need to inform the individual and offer a mechanism for human review.
  • Across Canada, privacy laws (in Quebec, Alberta, and British Columbia, and federally under PIPEDA) require that individuals be informed of the purposes for which their personal data is collected, and impose openness obligations; together, these point toward disclosing AI use.
  • In Quebec, individuals also have the right to know what personal data was used, the key factors behind an automated decision, and to request corrections.

With the coming into force of Ontario’s Working for Workers Four Act, 2024 (Bill 149) and of Quebec’s Act to Modernize Legislative Provisions as Regards the Protection of Personal Information (Law 25), whose provisions began taking effect in 2022, alongside longstanding privacy obligations in Alberta, British Columbia, and federally under the Personal Information Protection and Electronic Documents Act, S.C. 2000, c. 5 (PIPEDA), employers may want to carefully review how AI is used in job postings and the broader hiring process.

Quebec—Regulatory Context

Quebec’s Act Respecting the Protection of Personal Information in the Private Sector, CQLR c. P-39.1, was significantly amended and is now in force. These provisions apply to all private sector organizations collecting, using, or disclosing personal information in Quebec. This includes employers hiring employees located in Quebec, regardless of where the employer is based, as long as they are considered to be doing business in Quebec.

Section 12.1 provides that any individual whose personal information is the subject of a decision made exclusively through automated processing must be informed of the decision, of the main factors and parameters that led to it, and of their right to have the decision reviewed by a person. As such, employers may want to ensure that any system used for automated decision-making in Quebec is explainable, so that if they receive a request about the factors and parameters that led to a decision, they are able to provide this information.

Although the law is not specific about how such a request must be made, we assume that the section on access rights will apply. This means that employers would need to respond to a written request for information within thirty days.

While the statute does not define “automated decision-making technology,” the language of the law may be interpreted broadly and may apply to a wide range of systems, including algorithmic and AI tools used in hiring. Given the approach of Quebec’s data protection regulator, the Commission d’accès à l’information (CAI), a broad interpretation of this concept can be expected, as the CAI has recently taken the position that privacy laws in Quebec are quasi-constitutional in nature. (For a discussion of Quebec’s restrictive approach to data privacy, see our article, “Québec’s Restrictive Approach to Biometric Data Poses Challenges for Businesses Working on Security Projects.”)

The CAI has broad investigative and enforcement powers. These powers include conducting audits, issuing binding orders, and imposing administrative monetary penalties. Employers may want to monitor guidance from the CAI as the authority’s enforcement evolves.

Ontario—Regulatory Context

On March 21, 2024, the Working for Workers Four Act, 2024 (Bill 149) received Royal Assent in Ontario. Among other amendments, the act introduced a new provision under the Employment Standards Act, 2000, S.O. 2000, c. 41 (ESA) regarding employer disclosure of artificial intelligence use in hiring when a job is publicly advertised.

The implementing regulation, O. Reg. 476/24: Job Posting Requirements, defines artificial intelligence as:

“A machine-based system that, for explicit or implicit objectives, infers from the input it receives in order to generate outputs such as predictions, content, recommendations or decisions that can influence physical or virtual environments.”

The same regulation defines a “publicly advertised job posting” as:

“An external job posting that an employer or a person acting on behalf of an employer advertises to the general public in any manner.”

The AI disclosure requirement is set to take effect on January 1, 2026. It will not apply to general recruitment campaigns, internal hiring efforts, or employers with fewer than twenty-five employees at the time of the posting.

Employers may find it useful to assess what tools qualify as “artificial intelligence” or what constitutes “screening,” “assessing,” or “selecting” a candidate. The broad definition may include simple keyword filters or more complex machine learning systems, raising the potential for over- or under-disclosure.

Considerations for Employers: Human Rights and AI Bias

Employers in Canada may also want to consider their use of AI tools in conjunction with human rights legislation to ensure that their recruitment practices comply with legal standards. These laws prohibit discrimination based on grounds such as race, gender, age, disability, and other protected characteristics.

When implementing AI in hiring, employers may want to assess whether any of the tools used unintentionally promote bias or perpetuate discriminatory outcomes. AI systems, if not properly designed or monitored, can inadvertently reinforce bias by relying on historical data that may reflect past inequalities. For example, predictive models may favor certain demographic groups over others, which could lead to unintentional bias in hiring decisions.

Employers can play a key role in minimizing these risks by considering the following:

  • being involved in discussions about how AI tools work, ensuring transparency about the data being used and the potential for bias in decision-making;
  • choosing AI tools that are explainable, meaning the algorithms and their decision-making processes are understandable to humans, which can help employers detect and correct biases before they affect hiring decisions;
  • regularly auditing AI tools to identify and address any unintentional bias, ensuring that these tools comply with both privacy and human rights obligations; and
  • for employers subject to PIPEDA or provincial privacy laws, providing clear, accessible notices explaining how personal information is collected, why it is collected, and whom to contact with questions.

AI is increasingly common in recruitment, and with this advancement comes increased scrutiny. The new laws are intended to support transparency, fairness, and human oversight. By using explainable AI, maintaining strong internal review and audit mechanisms, and communicating across departments, employers may not only reduce legal risks but also foster trust in the hiring process, ensuring that all candidates are treated fairly and equitably.

Next Steps

Tips and considerations for responsible AI use include the following:

  • understanding the AI technology, verifying that it complies with the requirements of transparency under data privacy law, and understanding what the tool is doing to determine if it is necessary to indicate this information in a job posting;
  • communicating across the organization by having company-wide discussions about the implementation of AI tools to avoid the risk of a tool being used without being advertised in a job posting or privacy notice;
  • revising job posting templates to include AI-use disclosures where applicable;
  • creating plain-language descriptions of AI tools used in hiring, especially those that may lead to automated decisions;
  • implementing procedures that enable human review of AI decisions, as reflected under Quebec’s Law 25;
  • maintaining up-to-date privacy policies that explain AI usage, list contact information for privacy inquiries, and detail individual rights;
  • training hiring personnel on how AI tools function and how to respond to applicant questions related to privacy and automation;
  • limiting data collection to what is necessary and reasonable for recruitment purposes, in line with privacy obligations under applicable laws; and
  • verifying the applicability of exemptions. For example, Ontario’s AI disclosure requirement may not apply to employers with fewer than twenty-five employees, though privacy obligations may still be relevant.

AI is transforming the hiring process—and the legal landscape is evolving just as fast. Employers across Canada may want to proactively review their recruitment practices through the lens of employment standards, privacy laws, and human rights obligations. Embracing transparency doesn’t just reduce legal risk—it can build trust with candidates and unlock the full potential of AI while respecting individual rights.

More from Ogletree, Deakins, Nash, Smoak & Stewart, P.C.
