Future Shock: Generative AI and the International AI Policy and Ethics Landscape

The Rise of Generative AI and the Future Shock for AI Governance

The rapid industrialization of generative artificial intelligence (GenAI) has undoubtedly triggered a future shock for the global AI policy and governance community. In late 2022, the launch of ChatGPT sent shockwaves across the digital world, ushering in a new era of large-scale, multipurpose AI systems known as “foundation models” (FMs). Within weeks, hundreds of commercial GenAI applications stormed onto the scene, penetrating nearly every sector and putting immediate and far-reaching pressure on social, cultural, political, legal, and economic norms and institutions.

This AI revolution was not entirely unexpected. For years, stakeholders from industry, academia, government, and civil society had made concerted efforts to develop standards, policies, and governance mechanisms to ensure the ethical, responsible, and equitable production and use of AI systems. Initiatives like the UK’s national guidance on AI ethics and safety, the US NIST AI Risk Management Framework, and the European Union’s AI Act demonstrated a growing consensus around the key principles and practices needed for trustworthy AI.

However, despite this ostensible readiness, the international AI policy and governance ecosystem was caught in a crisis of future shock as the meteoric rise of FMs and GenAI applications exposed significant gaps and deficiencies in the existing landscape.

At the heart of this crisis was a disconnect between mounting public concern about the hazards posed by the hasty industrial scaling of GenAI and the absence of effective regulatory mechanisms and policy interventions to address those hazards. This crisis was marked by several key factors:

  1. Enforcement Gaps in Existing Digital and Data Regulations: Significant gaps have arisen in the enforcement of laws and regulations related to data protection, cybersecurity, and digital rights. Coupled with a lack of regulatory capacity to develop the necessary skills and know-how to confront the novel governance challenges presented by large-scale AI, this has created conditions for regulatory inaction and ineptitude.

  2. Democratic Deficits in AI Standards Development: While progress has been made in developing voluntary AI ethics frameworks and industry standards, these have often lacked the robust stakeholder participation, social license, and public consent needed to establish justified consensus on the meaning of essential normative values and principles. This has undermined the credibility and effectiveness of self-regulatory approaches.

  3. Evasive Tactics of Ethics Washing and State-Enabled Deregulation: Tech companies have leveraged their epistemic dominance and resource advantages to shape AI standards and governance discussions in ways that serve their private interests, using tactics like “ethics washing” to avoid meaningful accountability and binding regulation, often with the tacit support of states pursuing light-touch, innovation-first deregulatory agendas.

  4. Unprecedented Scaling and Centralization Dynamics: The rapid industrialization of FMs and GenAI systems has introduced a new order and scale of systemic, societal, and biospheric risks and harms. This has been driven by two key factors:

     a. Model Scaling: The scaling of data, model size, and compute has led to the emergence of serious model intrinsic risks, such as data poisoning, privacy violations, and discriminatory biases.

     b. Industrial Scaling: The brute-force commercialization of GenAI has exposed increasing numbers of impacted people and communities to the risks and harms of these technologies, while also concentrating power and control over AI innovation in the hands of a few large tech corporations.

The convergence of these factors created an ecosystem-level chasm between public concerns and the lack of actionable policy and regulatory responses, leading to an international AI governance crisis. As the first wave of policy initiatives in 2023 struggled to effectively address this crisis, it became clear that the narrow framing of the issues and the dominance of Western, corporate voices in the policy discussions had failed to capture the diverse contexts, concerns, and lived experiences of those most impacted by the rise of GenAI, particularly in the Global South.

Bridging the Gap: Towards Inclusive and Equitable AI Governance

To address the international AI governance crisis and build a more resilient and future-ready policy landscape, several key steps are essential:

  1. Consolidating Regulatory Capacity and Closing Enforcement Gaps: Governments and policymakers must invest in developing the skills, resources, and institutional frameworks needed to enforce existing digital and data-related laws and regulations, as well as to create new, binding mechanisms to govern the design, development, and deployment of FMs and GenAI systems.

  2. Democratizing AI Standards Development: Standards development processes must be reformed to ensure robust, inclusive, and transparent participation from a diverse range of stakeholders, including civil society, marginalized communities, and those most impacted by these technologies. This will help establish justified consensus on the meaning and operationalization of essential AI principles and values.

  3. Confronting the Risks and Harms of Scaling Dynamics: Policymakers must develop comprehensive regulatory frameworks that address the model intrinsic risks of data unfathomability and model opacity, as well as the systemic, societal, and biospheric-level risks introduced by the industrial scaling and centralization of AI innovation.

  4. Centering Equity and Inclusion in AI Governance: The global AI policy and governance landscape must be rebalanced to elevate the voices, contexts, and concerns of the Global South and other marginalized communities. This requires dismantling the colonial legacies and power asymmetries that have shaped the dominant narratives and agendas, and fostering more transversal, decentered dialogues that prioritize equity and justice.

By taking these steps, the international AI policy and governance community can begin to bridge the gap between public concerns and effective regulatory action, ensuring that the development and use of FMs and GenAI systems are aligned with the public good and the needs of diverse communities worldwide.

Consolidating Regulatory Capacity and Closing Enforcement Gaps

The rapid industrialization of GenAI has exposed significant gaps in the enforcement of existing digital and data-related laws and regulations. Despite the strengthening of legal frameworks like the EU’s General Data Protection Regulation (GDPR) and national data protection statutes in recent years, real-world compliance and enforcement have remained elusive.

Enforcement Gaps in Data Protection and Privacy

One key issue is the inability of data protection laws to adequately address the challenges posed by the unfathomability of AI training data. The massive and indiscriminate web-scraping used to create the training corpora for FMs and GenAI systems has made it nearly impossible to establish the legal basis for processing personal data, as required by data protection regimes like the GDPR. Regulators have struggled to assess whether the use of this data respects the “contextual integrity” and “compatible purpose” principles that are central to lawful data processing.

Similarly, the opacity and complexity of these large-scale AI systems have hindered efforts to enforce intellectual property and copyright protections. The failure to obtain consent from copyright holders or establish legitimate use cases has left the door open for potential infringement and digital piracy, as GenAI models can memorize and reproduce copyrighted content.

Regulatory Capacity Deficits

Alongside these enforcement gaps, there is a widespread lack of regulatory capacity to develop the necessary skills, knowledge, and institutional frameworks needed to confront the novel governance challenges presented by large-scale AI systems. Many policymakers and regulators simply lack the technical understanding and resources to effectively monitor, audit, and intervene in the design, development, and deployment of FMs and GenAI technologies.

This capacity deficit has allowed tech companies to exploit information asymmetries, shaping the AI policy and governance landscape in ways that serve their private interests. Well-resourced firms have been able to gain influence over under-equipped regulatory bodies, offering technical expertise and knowledge transfer in exchange for more accommodating policy environments.

Bridging the Enforcement Gap

To address these pressing issues, governments and policymakers must invest in building robust regulatory infrastructures capable of enforcing existing digital and data-related laws, as well as developing new, binding mechanisms to govern the AI innovation lifecycle. This may include:

  • Establishing dedicated AI regulatory authorities with the necessary technical skills, resources, and enforcement powers.
  • Implementing mandatory transparency and disclosure requirements for the training data and model architectures used in FMs and GenAI systems (a sketch of what a machine-readable disclosure might look like follows this list).
  • Creating comprehensive liability regimes to hold tech companies accountable for the risks and harms caused by their AI products and services.
  • Developing international standards and certification systems for trustworthy AI, with strong public participation and democratic oversight.
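To make such transparency duties concrete, a regulator could require filings in a machine-readable format. The following minimal Python sketch shows one possible shape for such a disclosure record; the schema and every field name are illustrative assumptions, not drawn from any existing statute or standard.

```python
from dataclasses import dataclass, field, asdict
import json

@dataclass
class ModelDisclosure:
    """Hypothetical disclosure record for a foundation model filing.

    All field names are illustrative assumptions, not drawn from any
    existing statute or standard.
    """
    model_name: str
    developer: str
    parameter_count: int                     # model size
    training_compute_flops: float            # total training compute
    data_sources: list[str] = field(default_factory=list)  # corpus provenance
    personal_data_legal_basis: str = ""      # e.g. claimed GDPR lawful basis
    known_limitations: list[str] = field(default_factory=list)

disclosure = ModelDisclosure(
    model_name="example-fm-7b",
    developer="Example Labs",
    parameter_count=7_000_000_000,
    training_compute_flops=8.4e22,
    data_sources=["licensed corpus A", "public-domain corpus B"],
    personal_data_legal_basis="legitimate interests (claimed)",
    known_limitations=["may reproduce memorized training text"],
)

# Emit a filing-ready JSON record that regulators could ingest and audit.
print(json.dumps(asdict(disclosure), indent=2))
```

A standardized record along these lines would let regulators compare filings across developers and automate basic checks, for instance flagging models above a given compute threshold for closer scrutiny.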

By consolidating regulatory capacity and closing enforcement gaps, policymakers can begin to restore public trust and ensure that the development and deployment of GenAI technologies align with the public good and fundamental rights protections.

Democratizing AI Standards Development

In parallel with efforts to strengthen regulatory enforcement, the global AI policy and governance community must also address the democratic deficits that have undermined the credibility and effectiveness of voluntary AI ethics frameworks and industry standards.

The Challenges of Value Pluralism and Legitimate Consensus

The translation of high-level AI ethics principles into practicable and binding standards, laws, and regulations has proved to be a significant challenge. This is due in part to the inherent difficulties in reaching justified consensus on the meaning and operationalization of essential normative concepts like “trustworthiness,” “fairness,” “safety,” and “transparency” across diverse cultural contexts and stakeholder groups.

The pluralism of values and perspectives in contemporary social life has made the establishment of fixed and universally accepted understandings of these concepts an arduous task. Stakeholders from different backgrounds often hold divergent interpretations, motivations, and priorities when it comes to defining and implementing such principles.

The Industry Dominance of Standards Development

Compounding this challenge is the fact that the standards development processes that have shaped much of the existing AI governance landscape have been largely industry-led, technically focused, and procedurally opaque. This has undermined the input legitimacy of these standards, as they have often failed to meaningfully incorporate the perspectives and concerns of civil society, marginalized communities, and those most impacted by AI systems.

The dominance of private sector actors in standards development has allowed tech companies to leverage their resource advantages and technical expertise to shape the content and scope of these frameworks in ways that serve their own interests. This has, in turn, raised concerns about the credibility and effectiveness of self-regulatory approaches, as well as the democratic legitimacy of the resulting standards.

Towards Democratized AI Standards

To address these shortcomings, standards development processes must be reformed to ensure robust, inclusive, and transparent participation from a diverse range of stakeholders. This may involve:

  • Mandating the inclusion of civil society organizations, marginalized community representatives, and impacted end-users in standards-setting bodies and decision-making processes.
  • Establishing public consultation and feedback mechanisms to incorporate the perspectives of a broader range of affected parties.
  • Enhancing the transparency of standards development workflows, including the disclosure of participating organizations, funding sources, and decision-making criteria.
  • Empowering independent, third-party auditing and certification systems to validate the integrity and societal alignment of AI standards.

By democratizing AI standards development, the global policy community can help establish justified consensus on the meaning and operationalization of essential AI principles, building public trust and ensuring that the resulting governance frameworks are truly responsive to the needs and concerns of diverse communities worldwide.

Confronting the Risks and Harms of Scaling Dynamics

The rapid industrialization of FMs and GenAI systems has introduced a new order and scale of systemic, societal, and biospheric risks and harms that have overwhelmed the existing AI policy and governance landscape. Two key factors have driven this:

Model Scaling: The Emergence of Serious Intrinsic Risks

The scaling of data, model size, and compute that has enabled the development of large-scale, multipurpose FMs has given rise to a range of serious model intrinsic risks. These include:

  • Data Poisoning and Privacy Violations: The indiscriminate web-scraping used to create the training corpora for these models has exposed them to the risks of data poisoning, adversarial attacks, and the leakage of sensitive personal information.
  • Discriminatory Biases and Harms: The unfathomability of these massive training datasets has led to the perpetuation of harmful biases, stereotypes, and toxic content, which can translate into disproportionately poor model performance and harmful outcomes for historically marginalized groups.
  • Copyright Infringement and Intellectual Property Violations: Because GenAI models can memorize and reproduce copyrighted content, training on protected works without the consent of rights holders or an established legitimate use leaves the door open to infringement and digital piracy (a simple overlap check that auditors might use to probe such regurgitation is sketched below).
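One way auditors can probe memorization in practice is to prompt a model with the opening passage of a protected work and measure verbatim overlap between the model’s continuation and the original. Below is a minimal, model-agnostic sketch of the overlap check itself; the character-level comparison and 50-character threshold are simplifying assumptions (production audits typically work at the token level over large corpora).

```python
def has_verbatim_overlap(continuation: str, reference: str,
                         min_chars: int = 50) -> bool:
    """Crude memorization signal: does `continuation` share a verbatim
    substring of at least `min_chars` characters with `reference`?

    Uses a simple O(n*m) longest-common-substring dynamic program.
    """
    n, m = len(continuation), len(reference)
    prev = [0] * (m + 1)
    longest = 0
    for i in range(1, n + 1):
        curr = [0] * (m + 1)
        for j in range(1, m + 1):
            if continuation[i - 1] == reference[j - 1]:
                curr[j] = prev[j - 1] + 1
                longest = max(longest, curr[j])
        prev = curr
    return longest >= min_chars

# Usage: generate a continuation from the system under audit (via
# whatever inference API it exposes), then compare it to the original:
# flagged = has_verbatim_overlap(model_output, copyrighted_passage)
```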

Industrial Scaling: The Onset of Systemic, Societal, and Biospheric Risks

Beyond the model intrinsic risks, the rapid commercialization and widespread deployment of FMs and GenAI systems have introduced a new scale of systemic, societal, and biospheric-level hazards. These include:

  • Expanding Inequities and Digital Divides: The uneven distribution of the benefits and risks of GenAI technologies, driven by disparities in access to essential resources and infrastructure, is exacerbating existing inequities and widening local, regional, and global digital divides.
  • Labor Displacement and Deskilling: The cognitive capabilities of FMs and GenAI systems pose significant risks of labor displacement, particularly for tasks and roles that can be easily automated. This can also lead to the deskilling of workers and their overdependence on automated systems, undermining human agency and social cohesion.
  • Mis- and Disinformation at Scale: The ability of GenAI systems to generate highly persuasive, dynamic, and multimodal content at low cost raises concerns about the potential for large-scale dis- and misinformation campaigns that can erode public trust and undermine democratic processes.
  • Environmental Degradation: The exponential increase in compute and infrastructure requirements for training and deploying FMs and GenAI systems is contributing to environmental damage, resource depletion, and significant carbon emissions (a back-of-envelope emissions estimate is sketched below).
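The operational footprint of a training run can be approximated with the standard back-of-envelope formula used by common ML carbon calculators: energy drawn by the accelerators, scaled by datacenter overhead (PUE), multiplied by the carbon intensity of the local grid. The sketch below uses illustrative default values; real figures vary widely by facility and region, and embodied hardware emissions are excluded.

```python
def training_emissions_kg(gpu_count: int, gpu_power_kw: float, hours: float,
                          pue: float = 1.5,
                          grid_kg_co2e_per_kwh: float = 0.4) -> float:
    """Back-of-envelope operational CO2e for a training run.

    energy (kWh)       = GPUs x per-GPU power (kW) x hours x datacenter PUE
    emissions (kg CO2e) = energy x grid carbon intensity

    The default PUE and grid intensity are illustrative placeholders.
    """
    energy_kwh = gpu_count * gpu_power_kw * hours * pue
    return energy_kwh * grid_kg_co2e_per_kwh

# e.g. 1,000 GPUs drawing 0.4 kW each, running continuously for 30 days:
print(f"{training_emissions_kg(1000, 0.4, 24 * 30):,.0f} kg CO2e")
# -> 172,800 kg CO2e under these assumptions
```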

Comprehensive Regulatory Frameworks for Governing Scaling Dynamics

To effectively confront these multifaceted risks and harms, policymakers must develop comprehensive regulatory frameworks that address both the model intrinsic and system-level impacts of AI scaling. This may include:

  • Mandatory reporting and disclosure requirements for model training data, architectures, and environmental impacts.
  • Liability regimes that hold tech companies accountable for the risks and harms caused by their AI products and services.
  • Robust auditing and certification systems to validate the safety, security, and societal alignment of FM and GenAI systems.
  • Targeted interventions to mitigate the disproportionate impacts on historically marginalized and vulnerable communities.
  • Incentives and guidelines for sustainable AI development and deployment practices that prioritize environmental and social considerations.

By taking a holistic, multifaceted approach to governing the scaling dynamics of FMs and GenAI, policymakers can help steer the trajectory of these transformative technologies towards the public good and the long-term wellbeing of people and the planet.

Centering Equity and Inclusion in AI Governance

Despite the efforts to strengthen regulatory capacity, close enforcement gaps, and democratize AI standards development, the global AI policy and governance landscape has continued to be shaped by the dominance of Western, corporate voices and perspectives. This has resulted in an agenda that often fails to adequately capture the diverse contexts, concerns, and lived experiences of those most impacted by the rise of GenAI, particularly in the Global South.

Uneven Participation and Agenda-Setting Dynamics

The international AI policy discussions that have steered the first wave of governance initiatives have been predominantly centered around the views, positions, and interests of a handful of prominent geopolitical and private sector actors from the high-income countries of the West and the Global North. This has led to the exclusion or relegation of crucial issues that are deeply affected by GenAI policy, such as the exploitation of labor in global supply chains, widening digital divides, growing global inequality, data sovereignty, and the disproportionate environmental impacts on lower-income and small island nations.

Moreover, the dominance of Northern and Western framings has enabled the perpetuation of implicit colonial logics and the reinforcement of existing power asymmetries. Concepts like “frontier AI” and narratives around “existential risk” have been criticized for evoking the colonial mindset and prioritizing the concerns of technological elites over the needs of the Global Majority.

Towards Transversal and Equity-Centered AI Governance

To rebalance the global AI policy and governance landscape, there must be a concerted effort to center the voices, contexts, and concerns of the Global South and other marginalized communities. This requires a shift towards more transversal, decentered dialogues that disrupt the assumed core-periphery relationships and give equal importance to the unique perspectives of all affected parties.

Key steps towards this include:

  • Amplifying Diverse Voices: Proactively engage with and elevate the participation of civil society organizations, community representatives, and impacted individuals from the Global South and other underrepresented groups in AI policy and governance processes.

  • Forging Transversal Connections: Foster collaborative networks and knowledge-sharing platforms that facilitate “lived experience to lived experience” dialogues, prioritizing the exchange of insights and concerns across diverse global contexts.

  • Confronting Legacies of Inequality: Consistently interrogate how longer-term patterns and legacies of inequality, discrimination, and privilege have cascading effects across the AI innovation lifecycle, and design interventions to dismantle these structural barriers.

  • Embedding Equity and Justice: Ensure that AI governance frameworks, from standards development to regulatory implementation, are grounded in principles of equity, social justice, and inclusion.
