Canadian Authorities Accuse OpenAI of Privacy Law Violations
Canadian privacy officials have raised serious concerns about OpenAI’s data practices. An investigation found that the company may have broken federal and provincial privacy laws during the training of its AI models. This includes issues with how personal information was collected, stored, and used without proper consent.
Privacy Violations in AI Training Processes
Philippe Dufresne, Canada’s Privacy Commissioner, together with officials from Alberta, Quebec, and British Columbia, concluded that OpenAI did not comply with key privacy laws, including the federal Personal Information Protection and Electronic Documents Act (PIPEDA). The investigation found that OpenAI gathered large amounts of personal data from the internet and other sources without adequate safeguards. This data was used to train models like ChatGPT, often without clear consent from the individuals concerned.
The regulators pointed out that while ChatGPT itself warns users that their conversations may be used for training, OpenAI also scraped or purchased third-party data containing personal details, and many of the individuals involved were unaware their information was being used. Moreover, users cannot access, correct, or delete personal data embedded in the AI’s responses, raising further privacy concerns. This lack of transparency and safeguards was a central issue in the investigation.
OpenAI’s Response and Changes to Privacy Practices
OpenAI responded cooperatively to the investigation, acknowledging the findings and committing to changes. The company has already retired older models found to have violated privacy laws and implemented a filtering tool to detect and mask personal information in training datasets. This measure is intended to keep sensitive data out of future models.
Within three months, OpenAI plans to add a notice to ChatGPT for users who are not signed in, explaining that conversations may be used for training and advising against sharing sensitive information. Over the next six months, the company will improve its data export tools so users can better understand and challenge the information ChatGPT provides about them. OpenAI also pledged that the datasets behind retired models will remain protected and will not be used for active development.
Additionally, OpenAI will test measures to protect the privacy of minors connected to public figures, ensuring the AI refuses to share personal details, such as names or birthdays, of relatives who are not themselves public figures. These steps are part of the company’s effort to align with Canadian privacy laws and rebuild user trust.
The investigation into OpenAI’s privacy practices began in 2023, but recent events have intensified the scrutiny. The company faced additional pressure after a mass shooting in Tumbler Ridge in February 2026: although OpenAI had flagged the shooter’s account in 2025 for warning signs of violence, it failed to escalate those concerns to law enforcement. Following the incident, regulators demanded changes to how the company handles safety and cooperation with authorities.
OpenAI has agreed to work more closely with Canadian law enforcement and health agencies, aiming to improve safety protocols and prevent similar incidents in the future. The company’s commitment to transparency and compliance reflects ongoing efforts to address privacy and safety concerns raised by regulators and the public alike.