[EGOV LIST] Call for Papers — Governing the AGI Transformation through Technology Ecosystems — Technology in Society

Jochen Scholl via eGov-list egov-list at u.washington.edu
Mon Mar 2 11:33:56 PST 2026


Technology in Society — 21.9 CiteScore — 12.5 Impact Factor

Call for papers

Governing the AGI Transformation through Technology Ecosystems: Global Power, Technology-Institutional Co-evolution, and Societal Impact

Submission deadline: 1 November 2026 (https://www.editorialmanager.com/tfs/default2.aspx)

Guest editors:

Marijn Janssen, Delft University of Technology, Faculty of Technology, Policy & Management, The Netherlands. m.f.w.h.a.janssen at tudelft.nl

Hans Jochen Scholl, University of Washington, Information School, USA. jscholl at uw.edu

Corey Kewei Xu, Hong Kong University of Science & Technology, Thrust of Innovation, Policy, and Entrepreneurship, Society Hub, China. coreyxu at hkust-gz.edu.cn

Special issue information:

The convergence of Artificial Intelligence (AI) with the Internet of Things (IoT), autonomous systems, and data-intensive infrastructures is creating a new global reality. AI no longer functions as a standalone technology; it increasingly operates as the core intelligence embedded within interdependent technological ecosystems. While much of the existing literature emphasizes technical performance or ethical implications, a pressing challenge lies in governing the intelligence that drives these ecosystems and managing the global dependencies they create. As societies approach the possibility of Artificial General Intelligence (AGI), decisions regarding geopolitical competition versus collaboration, and proprietary versus open technological models, may shape not only economic power but long-term societal trajectories.

Dominant narratives frame AI and AGI development as a zero-sum race among major powers. This framing, however, obscures a deeper structural reality: AI capabilities concentrate where institutional ecosystems enable them. Even regions with strong technical expertise often fail to develop globally competitive AI systems due to fragmented industrial structures, weak university–industry linkages, misaligned institutional incentives, limited labor market integration, and insufficient coordination across organizations. As a result, advanced AGI development becomes concentrated in a small number of ecosystems, intensifying geopolitical tensions and increasing the likelihood that AI is enclosed as proprietary and profit-driven infrastructure.

This Special Issue advances the argument that AGI superpower competition, talent concentration, and existential risk are endogenous outcomes of technology–institution co-evolution. Although engineering and AI talent have become globally distributed through expanded education systems and transnational mobility, institutional capacity remains highly uneven. This mismatch renders containment strategies and simplistic “AI race” narratives increasingly ineffective and challenges governance models premised on Western-centric innovation monopolies.

From this perspective, debates over open source versus proprietary AGI models are not merely ideological choices; they reflect underlying ecosystem structures. Concentrated institutional environments make enclosure, secrecy, and commercialization structurally likely, whereas diversified and well-coordinated ecosystems expand the feasibility of openness, transparency, and shared governance.

Concerns over existential risk and the Singularity further reinforce the need for this ecosystem perspective. Avoiding outcomes that erode human agency requires more than ethical appeals; it requires understanding how upstream institutional arrangements shape AGI development trajectories. As AI ecosystems increasingly transcend national borders, governance must operate across domains and levels, combining centralized mechanisms such as regulation and international agreements with decentralized coordination among firms, platforms, universities, and research networks.

In this context, AI can be analytically approached as a global common—not only as a normative ideal, but as a socio-technical foundation characterized by collective-action problems, asymmetric dependencies, and coordination failures. Independent oversight, verification mechanisms, and transnational governance arrangements thus become essential infrastructural components for ensuring safety, reliability, and alignment with human values.

This Special Issue invites contributions that transcend abstract geopolitical or ethical debates and instead examine how institutional ecosystems shape the production, constraints, and governance of AI and AGI capabilities, influencing global power relations and societal survival.

This Special Issue invites contributions addressing AI and AGI governance through a technology–institution ecosystem perspective. Topics include, but are not limited to:

1. AI Ecosystems and Systemic Dependencies

Interdependencies among AI, IoT, autonomous systems, data, and energy infrastructures
Cross-sector integration and ecosystem-level coordination challenges

2. Institutional Capacity, Agglomeration, and AI Superpowers

Institutional and ecosystem explanations for AI dominance beyond technology alone
Regional and national impacts of ecosystem concentration and implications for catching up

3. Governance Architectures and Responsible Design

Interaction between top-down regulation and bottom-up governance
Responsible governance frameworks embedding public values and ethical requirements

4. Geopolitics and Global AI Governance

From “American vs. Chinese AI” toward global governance frameworks
Comparative governance approaches across major political systems

5. Talent, Labor, and Innovation Systems

Global distribution of AI talent and institutional absorption capacity
Education systems, labor markets, and workforce pipelines for advanced AI and AGI
University–industry–government relations in AI ecosystems

6. Openness, Risk, and AI as a Global Commons

Institutional conditions shaping enclosure, commercialization, and transparency
Existential risk, human agency, and AI as a shared socio-technical foundation

Manuscript submission information:

The Special Issue will be open for submission on 1 August 2026. Please submit your paper via Editorial Manager® (https://www.editorialmanager.com/tfs/default2.aspx) before the submission deadline of 1 November 2026. Please select the article type "VSI: AGI Transformation" to link your paper to the special issue.

References:

Almeida, V., Mendes, L. S., & Doneda, D. (2023). On the development of AI governance frameworks. IEEE Internet Computing, 27(1), 70–74. doi: 10.1109/MIC.2022.3186030

Bostrom, N. (2014). Superintelligence: Paths, Dangers, Strategies. Oxford University Press. [Foundational text on the risks of human subjugation/second-order status].

Burrows, M. (2025). Beyond the Space Race: Collaboration and Competition in the Future of AI Governance. The Stimson Center.

Engler, A. (2024). The Geopolitics of Open Source AI. The Brookings Institution.

Gao, V. Z. (2024). Imperatives for Global Stability: Moving Beyond the Zero-Sum Game in Technology. Center for China and Globalization (CCG).

Hemphill, T. A. (2020). The innovation governance dilemma: Alternatives to the precautionary principle. Technology in Society, 63, 101381. doi: 10.1016/j.techsoc.2020.101381

Imbrie, A., et al. (2020). Agile Technology Governance: Reimagining Policy for the Fourth Industrial Revolution. Georgetown University.

Janssen, M., Brous, P., Estevez, E., Barbosa, L. S., & Janowski, T. (2020). Data governance: Organizing data for trustworthy Artificial Intelligence. Government Information Quarterly, 37(3), 101493. doi: 10.1016/j.giq.2020.101493

Janssen, M. (2025). Responsible governance of generative AI: conceptualizing GenAI as complex adaptive systems. Policy and Society, 44(1), 38–51. doi: 10.1093/polsoc/puae040

Khanal, S., Zhang, H., & Taeihagh, A. (2024). Why and how is the power of Big Tech increasing in the policy process? The case of generative AI. Policy and Society, 44(1), 52–69. doi: 10.1093/polsoc/puae012

Li, Y., Ma, C., & Yu, L. (2025). Artificial Intelligence and Digital Nationalism: A Social Media Discourse Analysis. Technology in Society, 103197. doi: 10.1016/j.techsoc.2025.103197

Ma, D. (2024). The Global AI Talent Tracker 2.0. The Paulson Institute (MacroPolo). [Data source for the statistic that ~47% of top-tier AI researchers originate from China].

Radu, R. (2021). Steering the governance of artificial intelligence: National strategies in perspective. Policy and Society, 40(2), 178–193. doi: 10.1080/14494035.2021.1929728

Sætra, H. S. (2020). A shallow defence of a technocracy of artificial intelligence: Examining the political harms of algorithmic governance in the domain of government. Technology in Society, 62, 101283. doi: 10.1016/j.techsoc.2020.101283

Seger, E., et al. (2023). Open Source AI: Risks and Benefits. University of Cambridge, Centre for the Study of Existential Risk (CSER).

Ulnicane, I., Knight, W., Leach, T., Stahl, B. C., & Wanjiku, W. G. (2021). Framing governance for a contested emerging technology: insights from AI policy. Policy and Society, 40(2), 158-177.

Xie, Q., & Freeman, R. B. (2019). The US-China STEM Education Gap. Harvard University & National Bureau of Economic Research (NBER).

Zaidan, E., Truby, J., Ibrahim, I. A., & Hoppe, T. (2025). Hybrid global governance for responsible and inclusive Artificial Intelligence: Proposing a new Sustainable Development Goal 18. Technology in Society, 103159. doi: 10.1016/j.techsoc.2025.103159

Zwetsloot, R., et al. (2020). China’s AI Workforce: Assessing Demand and Supply. Center for Security and Emerging Technology (CSET). [Analysis of the annual STEM graduation output].

Keywords:

AI transformation, institutional change, ecosystems

