OpenAI Secretly Funded Child Safety Group Pushing Age Verification Rules

OpenAI has been quietly funding a child safety advocacy group pushing for age verification requirements in California, a discovery that raises concerns about transparency in AI policy-making. The Parents and Kids Safe AI Coalition, formed to advocate for the Parents and Kids Safe AI Act, was entirely funded by OpenAI but kept the company's involvement hidden from partner organizations and the public.

How Did OpenAI Hide Its Involvement in the Child Safety Coalition?

When the Parents and Kids Safe AI Coalition began reaching out to child safety groups and advocacy organizations to build support for the proposed legislation, OpenAI's name was conspicuously absent from all communications and marketing materials. According to reporting from the San Francisco Standard, this omission was deliberate, leading numerous organizations to lend their support without realizing they were effectively aligning themselves with OpenAI.

  • Funding Structure: OpenAI funds the coalition entirely, making it the group's sole backer. A Wall Street Journal report indicated OpenAI pledged $10 million to push the Parents and Kids Safe AI Act.
  • Hidden Attribution: OpenAI was left off the coalition's website and messaging materials, making it appear as though the group was an independent child safety advocacy effort rather than an OpenAI-backed initiative.
  • Misleading Communications: One unnamed nonprofit leader told the Standard that receiving emails from the coalition felt deceptive, saying it left "a very grimy feeling" and that while OpenAI wasn't "outright lying," the communications were "pretty misleading."

What Are the Core Concerns About This Arrangement?

The controversy centers on the lack of transparency in how the coalition presented itself to potential supporters. Child safety groups and advocacy organizations believed they were supporting an independent effort when they were actually lending credibility to an OpenAI-backed initiative. This matters because it affects how policymakers and the public evaluate the legislation's merits.

The Parents and Kids Safe AI Act itself would require AI firms to implement age verification and additional safeguards for users under 18. While these protections may have legitimate child safety benefits, the secretive funding structure raises questions about whether the regulatory push is genuinely motivated by child protection concerns or shaped primarily by corporate interests. OpenAI has been open about spending money on lobbying for favorable laws and regulations, but apparently decided that its involvement with child safety advocacy groups should remain hidden from public view.

When Gizmodo reached out to OpenAI for comment on its involvement in the Parents and Kids Safe AI Coalition, the company did not respond.

Why Does Transparency Matter in AI Policy?

This situation highlights a broader tension in AI regulation: the companies being regulated are often the ones funding advocacy groups that shape the regulatory landscape. When advocacy groups fail to disclose their funding sources, it undermines the credibility of their positions and makes it harder for policymakers and the public to evaluate whether proposed regulations serve the public interest or corporate interests.

The discovery of OpenAI's hidden funding also raises questions about how other tech companies might be influencing policy discussions through similar arrangements. If OpenAI felt comfortable keeping its involvement secret, other companies may be doing the same with other advocacy groups and policy initiatives. This pattern could systematically skew the regulatory landscape toward solutions that benefit large corporations rather than the public.

How Can Stakeholders Ensure Transparency in AI Advocacy?

  • Require Full Disclosure: Advocacy groups should be required to clearly disclose all funding sources on their websites, in press materials, and in communications with potential partner organizations.
  • Verify Independence Claims: Organizations considering partnerships with advocacy coalitions should independently verify funding sources and ownership structures before lending their names or support.
  • Demand Transparency in Lobbying: Policymakers should require tech companies to publicly disclose all funding for advocacy groups, think tanks, and policy initiatives related to AI regulation.
  • Scrutinize Regulatory Proposals: When evaluating proposed regulations, stakeholders should investigate whether the groups promoting them have financial incentives related to the rules being proposed.