Can Privacy Professionals Bring Governance to AI Before Regulators?

Training privacy pros to watch for problematic uses of artificial intelligence may put ethics into the mix while the world waits for policymakers to write rules.

Joao-Pierre S. Ruth, Senior Editor

May 24, 2023


The speed of AI’s spread and growth has regulators, industry, and other stakeholders scrambling to figure out where to begin with potential guardrails. If anything, the recent Congressional hearings showed that the private and public sectors share a desire to hammer out rules of the road for a technology that could do harm if left unchecked.

On Monday, an AI-generated fake image purporting to show an explosion near the Pentagon went viral and triggered a brief stock selloff. The image was debunked and the market recovered, but the episode was an example of the fast-moving, real-world effects AI can ignite.

While policymakers ruminate over OpenAI CEO Sam Altman’s plea for the US to take the lead on AI regulation, will the private sector start self-policing usage of this tech?

Recently, the International Association of Privacy Professionals (IAPP) introduced the Artificial Intelligence Governance Center, which is meant to provide education and certification for roles in AI oversight, such as governance, risk, and compliance.

The center’s advisory board includes members from Microsoft, Google, NIST, IBM, and other organizations. IAPP President and CEO J. Trevor Hughes says it makes sense for the 80,000 privacy and data protection professionals his organization represents to consider becoming part of the AI equation.

“As the issue of AI has emerged and as the risks associated with AI became clearer and clearer, the need for professionals who could manage the risks created by all sorts of different types of AI implementations and use cases … the need for people who could do that work was all the more obvious,” he says. IAPP plans to make its training and certification available by the end of the third quarter or the start of the fourth quarter of this year.

The lack of nationwide privacy legislation already concerned Hughes, even as individual states debate and introduce their own laws. Now AI is compounding those concerns. “We do not have comprehensive AI legislation in the United States,” he says. “We don’t have it anywhere in the world.”

The final version of the European Union’s AI Act may soon be passed, Hughes says, though he suspects that will not happen until the fall. That may leave it up to the private sector to act. “Even with those things, industry will need to come together,” he says. “We’ll need to assess risk, share information with each other to identify and benchmark best practices.”

Though there are plenty of advocacy groups, think tanks, policy shops, and universities with AI research centers, Hughes says there seemed to be a gap when it came to training staff to handle AI oversight. “What was completely missing in the broader terrain was anyone in the lane, any organization in the lane of building the people to actually do the work inside organizations,” he says.

Privacy Professionals: Transferable Skills

Hughes says privacy professionals have transferable skills, and with some training in this space, they and others could help catch AI issues, such as discriminatory outcomes or algorithmic bias, that can arise when systems go unmonitored. “AI is on such an accelerated scale that we need people almost immediately, like we need them now,” he says.

Part of the concern is the ever-expanding list of ways AI could cause harm, including intellectual property issues, such as AI being fed sensitive corporate information that then spreads to the public. There is also the question of ownership and copyright of content produced through generative AI. Further, defamatory and libelous content could be produced quickly with AI, including deepfakes of adult material.

“These technologies require governance,” Hughes says. “They require an ethical framework. We do not have laws in place yet, so we don’t know what the legislative and regulatory guardrails are going to be.” In the absence of such guardrails, he says, ethical principles should be established to reduce harm, assess discriminatory outcomes, and make sure models are trained on a full, broad, and appropriate pool of data.

“Make sure it doesn’t have any inherent racism or misogyny or problematic data built in,” Hughes says. “Many of these systems are built on publicly available data sets and human society around the world is not a perfectly ethical or structured thing.” The output from these machine learning systems, he says, will often reflect the potentially biased data that goes in, and that bias must be managed.

Reluctant to call AI a bona fide existential threat, Hughes did point out the potential for unfettered AI to further damage society’s belief in its institutions. “We are about to head into a presidential cycle in the United States and there are already very deep concerns and I think cynicism, if not outright hostility, towards the veracity and factual nature of the news,” he says. “If our ability to believe anything in the news gets blown up because we don’t know what’s real or what’s not real -- it suggests an erosion of trust in politics, to be sure in media.”

What to Read Next:

What Just Broke: Is Self-Regulation the Answer to AI Worries?

OpenAI CEO Sam Altman Pleads for AI Regulation

Should There Be Enforceable Ethics Regulations on Generative AI?

About the Author

Joao-Pierre S. Ruth

Senior Editor

Joao-Pierre S. Ruth covers tech policy, including ethics, privacy, legislation, and risk; fintech; code strategy; and cloud & edge computing for InformationWeek. He has been a journalist for more than 25 years, reporting on business and technology first in New Jersey, then covering the New York tech startup community, and later as a freelancer for such outlets as TheStreet, Investopedia, and Street Fight. Follow him on Twitter: @jpruth.

