The latest advancements in artificial intelligence (AI) have moved at superhuman speed. As this era of AI quickly gains momentum within corporate environments, organizations face a new data ecosystem that must be met with strong, proactive governance. As with all new technology, this will require careful thought about how to balance innovation against the potential for harm to individuals. Although this is not a wholly new concept—society has faced similar issues since the dawn of the industrial revolution—the difference now is the speed of change and the unprecedented power to analyze vast quantities of data, which stands to affect not only corporate revenue streams but also personal outcomes related to finance, insurance, health and careers, as well as communities and society more widely.
For example, research has identified critical risks within AI algorithms, including racial, gender and socioeconomic biases and age verification issues. The data protection risk is also significant. For instance, what data is collected to train large language models? Who owns the outputs when individuals and employees interact with tools such as ChatGPT, and what happens to personal data that may be submitted? There have already been data breaches in which users were able to access other users' chat histories, underscoring the importance of data privacy and security. There is also the issue of accuracy: this technology fundamentally predicts the most likely output based on patterns in the language it was trained on. The data a model is trained on is therefore critical to accuracy, which can also diminish over time depending on how the system is architected.
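To make the accuracy-drift point concrete, the sketch below shows one simple way a team might track a deployed model's accuracy against the baseline recorded at approval time. It is a minimal illustration only; the function names, the 5 percent tolerance and the data structures are assumptions for the example, not a prescribed method.

```python
from dataclasses import dataclass
from datetime import date

@dataclass
class EvaluationResult:
    """Accuracy of the deployed model on a labeled holdout set at a point in time."""
    run_date: date
    accuracy: float

def detect_drift(baseline: float, history: list[EvaluationResult],
                 tolerance: float = 0.05) -> list[EvaluationResult]:
    """Return evaluation runs whose accuracy fell more than `tolerance`
    below the accuracy recorded when the model was approved.

    The 5% tolerance is an illustrative assumption; a real program would
    set thresholds per use case and risk appetite.
    """
    return [run for run in history if baseline - run.accuracy > tolerance]

# Hypothetical usage: baseline accuracy captured at governance sign-off.
baseline_accuracy = 0.91
history = [
    EvaluationResult(date(2023, 1, 15), 0.90),
    EvaluationResult(date(2023, 4, 15), 0.88),
    EvaluationResult(date(2023, 7, 15), 0.84),  # drifted beyond tolerance
]

for run in detect_drift(baseline_accuracy, history):
    print(f"{run.run_date}: accuracy {run.accuracy:.2f} breached drift tolerance")
```

Periodic checks of this kind give governance committees an objective trigger for re-review, rather than assuming a model remains as accurate as the day it was approved.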
In the face of these and other concerns, privacy, legal, compliance and IT leaders will need to move faster to establish, reinforce and adapt governance structures around AI applications and the new flows of data that stem from them. Although regulations are still in development and organizations await more specific guidance, there are important steps that can be taken to ensure compliance with data privacy requirements and avoid potential ethical issues in new AI implementations.
Several groups are developing standards (both technical and governance) that provide a useful starting point for organizations to begin tackling this new landscape. These include the Organisation for Economic Co-operation and Development (OECD), the International Organization for Standardization (ISO), the US National Institute of Standards and Technology (NIST) and the European Data Protection Board. In addition to following ISO 27701 for privacy information management and ISO 31700 for privacy by design, organizations can look to the large body of ISO standards addressing the various system elements and functions of AI solutions, including standards on AI risk management, bias in AI and automated decision making, and AI governance. Likewise, NIST has published an AI Risk Management Framework, which establishes practical measures for implementing strong governance, along with a Trustworthy and Responsible AI Resource Center. The recent US White House Executive Order on AI also identifies NIST as key to developing standards and guidance and positions the NIST AI framework as a major enabler of effective, proactive governance for AI solutions. Importantly, while global regulations are still being developed, organizations should use these early standards and known best practices to mitigate risk while allowing for innovation until further requirements are confirmed.
Data privacy best practices are also a crucial element of establishing AI governance structures. One lesson from implementing privacy programs is that enabling compliance with a complex set of global regulations requires holistic collaboration across the enterprise, in partnership with key stakeholders. Likewise, the fundamentals central to strong privacy—creating cultures of compliance, gauging risk appetite across an organization and assessing the potential for individual harm—can be leveraged for AI governance.
A high standard for privacy in AI can and should also be addressed at the development level. Privacy standards and best practices can be transposed into checkpoints and controls at each step of the software development lifecycle. To illustrate this point, clinical trials and other types of research involving humans always undergo an ethical review of the proposed process, with current and emerging risk identified to assess the impact of a product, process or research effort. These standards are followed across many industries and provide a reasonable foundation for the development of AI and other impactful technologies. Such an approach can promote trust and reduce the possibility of adverse effects. In practice, privacy leaders can work with developers to ensure that steps such as assessment of the AI model, documentation of data source lineage, verification of the currency and accuracy of data sets, risk categorization and impact assessments are all completed appropriately and reviewed by a committee of data privacy, security, compliance and ethics experts.
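As a sketch of how such lifecycle checkpoints might be expressed in code, the example below models a pre-deployment review gate. The checkpoint names and the reviewer roles are illustrative assumptions drawn from the steps described above, not a standard or a specific tool.

```python
from dataclasses import dataclass, field

# Checkpoints mirroring the review steps described above; the exact set
# would be defined by each organization's governance committee.
REQUIRED_CHECKPOINTS = [
    "model_assessment",
    "data_lineage_documented",
    "data_currency_and_accuracy_verified",
    "risk_categorization_completed",
    "privacy_impact_assessment_completed",
]

@dataclass
class ReleaseCandidate:
    model_name: str
    completed_checkpoints: set[str] = field(default_factory=set)
    committee_approvals: set[str] = field(default_factory=set)

def may_deploy(candidate: ReleaseCandidate) -> bool:
    """A release is deployable only when every lifecycle checkpoint is done
    and privacy, security, compliance and ethics reviewers have signed off."""
    required_approvers = {"privacy", "security", "compliance", "ethics"}
    checkpoints_done = set(REQUIRED_CHECKPOINTS) <= candidate.completed_checkpoints
    approved = required_approvers <= candidate.committee_approvals
    return checkpoints_done and approved

# Hypothetical usage: a candidate missing the ethics sign-off is blocked.
candidate = ReleaseCandidate(
    model_name="claims-triage-v2",
    completed_checkpoints=set(REQUIRED_CHECKPOINTS),
    committee_approvals={"privacy", "security", "compliance"},
)
print(may_deploy(candidate))  # False until the ethics reviewer approves
```

Encoding the gate this way makes the review requirements auditable and hard to bypass, in the same spirit as an ethics board's mandatory sign-off in clinical research.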
Emerging ecosystems such as blockchain and the metaverse are additional considerations, and there is significant overlap between these advancements and those in AI. For example, blockchain can record the actions and decisions made by layered AI agents and systems in auditable logs of what happened and when. This provides proof of where data came from and evidence of any tampering with underlying data, allowing results to be evaluated against the reputation and authenticity of their sources. Furthermore, digital ownership mechanisms can establish provable ownership and provenance of output models and generated content, making clear what was created by virtual or hidden AI systems. With digital blockchain-based tokens, data provenance within AI implementations can be supported by enabling the tracking and authentication of AI-generated content’s origin. For instance, a news article created by AI could be associated with a token, ensuring that the content’s provenance is verifiable and secure and making it easier to differentiate between genuine and manipulated information.
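As a minimal illustration of the provenance idea, the sketch below fingerprints a piece of AI-generated content and wraps it in a chained, tamper-evident record that could later be anchored to a ledger or minted as a token. The record fields, the generator identifier and the chaining scheme are assumptions for the example, not a specific blockchain API.

```python
import hashlib
import json
from datetime import datetime, timezone

def provenance_record(content: str, generator_id: str,
                      previous_hash: str = "0" * 64) -> dict:
    """Build a tamper-evident provenance record for AI-generated content.

    The content hash uniquely fingerprints the text; chaining in the
    previous record's hash makes any alteration of history detectable.
    Anchoring the record hash on a blockchain (or minting it as a token)
    is omitted here and would depend on the ledger chosen.
    """
    record = {
        "content_sha256": hashlib.sha256(content.encode("utf-8")).hexdigest(),
        "generator_id": generator_id,          # e.g. the AI system that produced it
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "previous_hash": previous_hash,        # links records into a chain
    }
    record["record_hash"] = hashlib.sha256(
        json.dumps(record, sort_keys=True).encode("utf-8")
    ).hexdigest()
    return record

# Hypothetical usage: fingerprint an AI-drafted news article.
article = "AI-drafted article text..."
rec = provenance_record(article, generator_id="newsroom-llm-v1")
print(rec["record_hash"])
```

Anyone holding the article can recompute its hash and compare it with the anchored record, which is what makes manipulated copies detectable.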
Ultimately, innovation almost always runs ahead of governance, and even further ahead of regulation. AI and machine learning tools have been in place in various formats for many years, but it is only now that there are concerted efforts to embed governance, transparency, explainability and fairness in response to global concerns around risk of harm and associated legislation. This is an inevitable conundrum: how to exploit the wealth of data within an organization while ensuring the trusted buy-in of consumers, clients, partners and governments. Such buy-in can only be achieved if privacy, risk and compliance leaders build trust and develop solutions based on appropriate algorithms and technology with accurate models and data, ensuring fair and transparent outcomes.
Editor’s note: For further insights on this topic, read the authors’ recent Journal article, “Data Are the Lifeblood of Business: To Thrive Governance Must Drive Insight,” ISACA Journal, volume 3, 2023.