Paris Peace Forum & CeSIA

Multi-Stakeholder Consultation on AI Safety & Cybersecurity Public Interest Research

Questionnaire for Public-Interest Researchers — Deadline: 19 May 2026, 20:00 CET

Estimated completion time: ~20 minutes

1. Context of the Consultation

As AI capabilities rapidly scale and diffuse, the cyber threat landscape is evolving just as quickly. Recent cyber capability benchmarks suggest that AI proficiency in specific offensive technical domains has effectively doubled within a year, while threat intelligence reports from major AI developers confirm that adversarial misuse of general-purpose models is no longer theoretical but operational.

The implications for global economic security and public trust are far-reaching. With the global average cost of a single data breach now estimated at $4.88 million (IBM, 2024), AI-augmented attacks are expected to act as a significant volume multiplier for malicious operations. Nation-state actors increasingly target critical infrastructure, compressing the feedback loop between offensive exploitation and defensive mitigation to timeframes that outstrip standard research cycles.

In line with the ambitions promoted by the digital track of France's G7 Presidency in 2026, the Paris Peace Forum and the French Center for AI Safety (CeSIA) are conducting this consultation as part of a broader effort to strengthen the scientific and empirical foundations for policymaking at the intersection of AI safety/security and cybersecurity. This questionnaire is specifically addressed to researchers, in particular from academic institutions, think tanks, government research agencies and civil society organizations working on these questions in the public interest.

2. Objective of the Consultation

This written consultation seeks to gather structured input from the research community to:

  • Identify priority research gaps in understanding AI-driven cyber risks and map the most critical unknowns;
  • Assess structural barriers limiting the production of robust, policy-relevant evidence — including constraints related to compute, talent, data access, and institutional agility;
  • Diagnose dysfunctions in current research funding mechanisms as experienced by practitioners, from application processes to project execution timelines;
  • Explore practical mechanisms to strengthen collaboration between AI safety/security and cybersecurity research ecosystems.

Based on the contributions submitted, a set of actionable outputs will be produced to inform intergovernmental discussions among G7 Digital Ministries and beyond.

Participants are invited to submit written contributions in English by Tuesday 19 May 2026, 20:00 CET, with references to supporting evidence where possible.

For any questions regarding this consultation, please contact the Paris Peace Forum secretariat and/or CeSIA.

All contributions will be treated as confidential and used exclusively in aggregated or anonymized form, unless explicit written authorization is provided for attributed citation.
Step 1 of 8

Respondent Profile

Tell us a bit about yourself.

Step 2 of 8

Priority Threats & Research Gaps

While we encourage you to answer all questions, feel free to skip any that fall outside your area of expertise.

What are the most critical unknowns regarding AI misuse in cyberspace? Which risk scenarios do you consider most plausible in the near term (1–3 years) and which warrant longer-term investigation?

Which areas require the most urgent empirical research (e.g., attacker capability uplift, model vulnerabilities, autonomous agent behaviors, systemic cascading risks)? What evidence would be most impactful for policymakers?

How can research better anticipate emerging risks rather than documenting them retrospectively? What role could structured red-teaming programmes, threat forecasting models, or continuous monitoring frameworks play?

Step 3 of 8

Methodologies & Research Infrastructure

While we encourage you to answer all questions, feel free to skip any that fall outside your area of expertise.

What methodological approaches are best suited to study AI-driven cyber risks (e.g., simulations, red-teaming, real-world incident analysis, adversarial ML benchmarks)? What are the key limitations of current approaches?

What types of infrastructure or tools (compute clusters, curated datasets, sandboxed testing environments, shared benchmarks) are needed to enable more robust research on the AI-Cyber nexus? How should access to such infrastructure be governed?

What are the implications of open-source and open-weight frontier AI models for research and for the understanding of cyber-related risks? Do they create more opportunities for independent scrutiny, or do they also amplify misuse risks in ways that require specific safeguards?

Step 4 of 8

Transparency, Data Access & Incident Reporting

While we encourage you to answer all questions, feel free to skip any that fall outside your area of expertise.

What types of data are currently missing to support rigorous research on AI misuse for cyber-offensive purposes (e.g., incident data, model behavior logs, attack datasets)? What barriers exist to accessing proprietary systems or sensitive operational data?

What level of transparency is necessary to enable credible and reproducible evaluation of AI systems in cybersecurity contexts? What information should be shared (e.g., model capabilities, system architecture, testing protocols) — and under what conditions?

What are the key barriers to effective incident reporting for AI-enabled cyber threats? Are there existing models (e.g., CERT, CVE, vulnerability disclosure frameworks) that could be adapted for AI systems, and what would such adaptation require?

Step 5 of 8

Research Funding: Experience & Structural Barriers

While we encourage you to answer all questions, feel free to skip any that fall outside your area of expertise.

In your experience, what are the most significant obstacles to securing and managing funding for research at the intersection of AI safety/security and cybersecurity? Consider application burden, evaluation timelines, eligibility constraints, and reporting requirements.

How well do current funding schemes accommodate the pace of AI-cyber developments? Have you experienced situations where the threat landscape shifted significantly during the lifecycle of a funded project?

What alternative funding mechanisms would best serve this research domain (e.g., rapid-response grants, rolling calls, bridge funding, multi-year flexible envelopes, milestone-based disbursement)?

To what extent does the current funding landscape incentivize conservative, incremental research over high-risk, potentially transformative work? What design changes to grant schemes could better reward bold investigation?

Step 6 of 8

Cross-Community Collaboration & Policy Translation

While we encourage you to answer all questions, feel free to skip any that fall outside your area of expertise.

What are the main obstacles to deeper collaboration between the AI safety/security and cybersecurity research communities? What concrete mechanisms could enable integration (e.g., joint research programmes, shared benchmarks, interdisciplinary labs, embedded researcher schemes)?

What are the main challenges in making research outputs on the AI-Cyber nexus actionable for policymakers? How can research be better designed from the outset to produce policy-ready evidence?

What financing or coordination mechanisms would best support the translation of research into effective mitigation efforts (e.g., dedicated funds, joint AI safety/cybersecurity calls, ISAC-type structures adapted to AI, regulatory sandboxes for red-teaming, public-private partnerships for large-scale vulnerability research)?

Step 7 of 8

International Cooperation & Research Governance

While we encourage you to answer all questions, feel free to skip any that fall outside your area of expertise.

How could like-minded international partners align on shared research agendas or standards for AI-cyber risk assessment? What existing multilateral or regional frameworks could be leveraged?

What role should AI Safety/Security Institutes, national cybersecurity agencies, and international organizations play in coordinating cross-border research investments? How can duplication be minimized while ensuring broad geographic coverage?

How could the methodologies, infrastructures, and coordination mechanisms developed for AI cybersecurity research be extended to adjacent risk domains (biosecurity, information manipulation, undesirable autonomous behaviors, risks to minors)?

Thank You for Your Contribution

Thank you for your valuable contribution. The Paris Peace Forum secretariat and CeSIA will follow up with you to share the outputs produced on the basis of this consultation.