OECD.AI

International Affairs

Paris, Île-de-France 52,378 followers

OECD.AI is a platform to share and shape trustworthy AI. Sign up below for email alerts and visit our blog OECD.AI/WONK/

About us

Visit our blog, the AI Wonk: https://oecd.ai/wonk/

The OECD AI Policy Observatory is a tool that governments and businesses can use to implement the first intergovernmental standard on AI: the OECD AI Principles. The Principles focus on how governments and other actors can shape a human-centric approach to trustworthy AI. The Observatory includes a blog for its group of international AI experts (ONE AI) to discuss issues related to defining AI and implementing the OECD Principles. OECD countries adopted the standards in May 2019, along with a range of partner economies, and the OECD AI Principles provided the basis for the G20 AI Principles endorsed by Leaders in June 2019.

OECD.AI combines resources from across the OECD, its partners and all stakeholder groups. It facilitates dialogue between stakeholders while providing multidisciplinary, evidence-based policy analysis in the areas where AI has the most impact. As an inclusive platform for public policy on AI, the OECD AI Policy Observatory is oriented around three core attributes:

Multidisciplinarity: The Observatory works with policy communities across and beyond the OECD – from the digital economy and science and technology policy to employment, health, consumer protection, education and transport policy – to consider the opportunities and challenges posed by current and future AI developments in a coherent, holistic manner.

Evidence-based analysis: The Observatory provides a centre for the collection and sharing of evidence on AI, leveraging the OECD’s reputation for measurement methodologies and evidence-based analysis.

Global multi-stakeholder partnerships: The Observatory engages governments and a wide spectrum of stakeholders – including partners from the technical community, the private sector, academia, civil society and other international organisations – and provides a hub for dialogue and collaboration.

Website
https://oecd.ai/
Industry
International Affairs
Company size
11-50 employees
Headquarters
Paris, Île-de-France
Type
Government Agency
Founded
2020


Updates

  • Can companies innovate with AI at speed while earning trust across global markets? In this AI Wonk blog, Rashad Abelson and Barbara Bijelic examine why responsible AI is no longer optional for businesses operating across borders. As AI advances faster than its guardrails, companies face growing pressure to manage risks across the entire AI value chain, from data, labour and environmental impacts to privacy, misinformation and deepfakes.

    The OECD’s new Due Diligence Guidance for Responsible AI offers the first internationally agreed, government-backed framework to help enterprises identify, prevent and address AI-related risks. Grounded in the OECD AI Principles and the OECD Guidelines for Multinational Enterprises, it provides:
    🔹 A step-by-step due diligence framework
    🔹 A whole-of-value-chain approach
    🔹 Alignment with evolving global AI risk management standards
    🔹 Practical implementation examples for businesses

    For firms seeking a competitive advantage, responsible and trustworthy AI is becoming a market differentiator.

    🔗 Read the full blog post in the comments below 👇
    #OECD #ResponsibleBusinessConduct #AI #SupplyChains #OECDAI #TheAIWonk #OECDAIPrinciples


  • How can AI strengthen inclusive and resilient food systems? At the India AI Impact Summit 2026, the OECD - OCDE and the Government of the Netherlands will convene policymakers, OECD experts and academics to explore how artificial intelligence can support more transparent, responsible and sustainable agricultural value chains.

    If you are attending the Summit, join us on February 20.

    Opening remarks:
    🔹 H.E. Mr. Harry Verweij, Ambassador at Large and Special Envoy AI for the Kingdom of the Netherlands
    🔹 Ms. Audrey Plonk, Deputy Director, OECD Directorate for Science, Technology and Innovation

    Moderated by:
    🔹 Ms. Sara Rendtorff-Smith, OECD Head of Division, AI and Emerging Digital Technologies

    Panellists:
    🔹 H.E. Mr. Nezar Patria, Vice Minister of Communications and Digital Affairs of the Republic of Indonesia
    🔹 Mr. Dejan Jakovljevic, Chief Information Officer (CIO) and Director of the Digital FAO and Agro-Informatics Division, Food and Agriculture Organization of the United Nations (FAO)
    🔹 Ms. Debjani Ghosh, Distinguished Fellow, NITI Aayog | Chief Architect, NITI Frontier Tech Hub
    🔹 Dr. Arun Pratihast, Senior Researcher, Wageningen University Environmental Research

    Register at the link in the comments below
    #OECDAI #FoodSystems #Agriculture #IndiaAIImpactSummit2026


  • AI is taking over venture capital – but how fast, and where?

    In 2025, companies developing AI technologies captured 61% of all global venture capital funding – roughly USD 258.7 billion out of USD 427.1 billion in total VC deals worldwide – a share that has more than doubled since 2022. Generative AI alone accounted for a slice of funding that rose from about 2% of AI VC in 2022 to 14% in 2025, totalling USD 35.3 billion.

    🌍 The United States continues to dominate the field, attracting around 75% of AI VC deal value – far ahead of the EU, China and the UK – while “mega deals” over USD 100 million now make up around 73% of total AI investment, signalling an increasing concentration of capital.

    These insights come from the latest OECD policy brief, which uses proprietary Preqin data via the OECD.AI Policy Observatory to track trends in venture capital flowing into AI firms through the end of 2025.

    🔍 From global funding patterns to shifts in where and how VCs are backing AI-enabled innovation, this brief is essential reading for policymakers, investors and innovation leaders trying to understand the dynamics shaping the future of AI ecosystems.

    👉 Read the full policy brief – link in the comments below.
    Francesca Rossi Amir Banifatemi Joanna Shields Mohamed Nanabhay Wan Sie LEE
    #OECDAI #VentureCapital #ArtificialIntelligence #GenerativeAI

  • Find OECD.AI at the India AI Impact Summit

    As the global policy discussion on artificial intelligence increasingly turns to implementation, the OECD is contributing to the India AI Impact Summit in New Delhi, focusing on practical approaches that support inclusive growth, resilience and sustainable development. In our latest newsletter, we set out how the OECD and its Global Partnership on AI are providing evidence, policy tools and open discussions that address current priorities. The Summit provides an opportunity to examine how AI systems are applied in areas such as public services, labour markets and food systems across diverse national contexts.

    Read the newsletter to explore the OECD’s agenda, events and resources at the New Delhi Summit. Follow OECD.AI on LinkedIn for updates and insights as the Summit approaches.
    #OECDAI #IndiaAIImpactSummit2026 #TrustworthyAI #AIPolicy


  • Trust in AI doesn’t emerge automatically – it depends on the people building and sharing the right tools. And open-source tools are a critical part of that process. Across the AI ecosystem, developers and researchers are building open-source solutions that assess safety, measure risks, document systems and strengthen accountability. To be effective, these tools must be accessible.

    We are now inviting submissions of open-source tools to the OECD.AI Catalogue of Tools & Metrics for Trustworthy AI. We encourage you to submit open-source AI tools for consideration that support:
    • evaluation of AI safety or robustness
    • security testing and risk monitoring
    • transparency, documentation and auditing
    • measurement of performance and impact

    🗓️ Tools submitted by 31 March will be eligible for promotion across our public channels!

    🔗 Submission details in the comments

    Help us enrich the global trustworthy AI ecosystem. A big thanks to our partners: MozillaAI Security Institute Mistral AI Roost Audrey Herblin-Stoop, Mark Surman Adam B. Oliver Jones Anne Bertucio Camille Françoise
    #OpenSourceAI #TrustworthyAI #AIPolicy #OpenSource #OECDAI #CallforSubmissions


  • SAVE THE DATE! 20 February 2026, 2:30 – 3:30 pm (IST)

    The Kingdom of the Netherlands and the OECD are pleased to invite you to an in-person flagship event at the India AI Impact Summit 2026, focused on the role of artificial intelligence in strengthening the sustainability, inclusiveness and resilience of global food systems.

    Introductory remarks from distinguished speakers:
    Welcome remarks: Audrey Plonk, OECD Deputy Director, Science, Technology and Innovation #OECD
    H.E. Harry Verweij, Ambassador at Large and Special Envoy AI for the Kingdom of the #Netherlands
    Moderated by: Audrey Plonk, OECD Head of Artificial Intelligence and Emerging Economies Unit

    This session leverages the work of the India AI Impact Summit Working Group on Economic Growth and Social Good, co-chaired by the Netherlands and Indonesia. It explores how AI can support the transition toward more transparent, responsible and inclusive agricultural production and distribution as part of food systems. Discussions will draw on national initiatives and concrete use cases to highlight practical and scalable approaches for improving data sharing, interoperability, risk management and access to high-quality agricultural data.

    Key themes for the panel discussion include:
    🔹 Democratising AI benefits across food systems;
    🔹 Enabling policies and data infrastructure for agricultural innovation;
    🔹 Implementing and scaling sustainable AI solutions worldwide;
    🔹 Strengthening partnerships among national governments, multilateral organisations, agribusiness, farmers, civil society, researchers, innovative SMEs and technology providers.

    The panellists will identify actionable pathways to ensure that AI fosters innovation and meaningful participation by all stakeholders in global food systems. We look forward to your participation and to an engaging discussion on advancing trustworthy and inclusive AI for food systems worldwide.

    Registration (both required):
    India AI Impact Summit Registration: https://lnkd.in/g2rs5tRv
    February 20th Registration: https://lnkd.in/gVxiXxcz

    📍 Venue: Bharat Mandapam Convention Centre, New Delhi, India – Main Building, Level 1, Meeting Room No. 18, Near Gate 7

    #IndiaAIImpactSummit2026 #ResponsibleAI #PeoplePlanetProgress #OECDAI Government of the Netherlands Marjoleine Hennis Ph.D Sara Rendtorff-Smith Celine Caira Lucia Russo

  • What could AI look like by 2030 – and what should policymakers prepare for?

    The OECD has just published a new report outlining four plausible AI capability scenarios through 2030, intended to help governments and institutions think rigorously about uncertainty rather than make single predictions. The four scenarios for progress in AI are as follows:

    1️⃣ Stalls – Progress in AI capability largely stops. Technical, economic or data constraints stall advances, and we see only incremental improvements in narrow applications.
    2️⃣ Slows – AI continues to improve, but the rate of advancement eases meaningfully relative to recent years. Breakthroughs become harder to achieve, and overall capability growth flattens.
    3️⃣ Continues – The recent pace of capability gains broadly holds. In this scenario, AI systems routinely perform well-scoped software engineering and other specialised tasks that currently take humans days to complete.
    4️⃣ Accelerates – Progress picks up speed, potentially because AI systems increasingly assist in their own development. This could lead to faster improvements across many domains, with capabilities rivalling or exceeding human performance.

    These structured scenarios are designed to support strategic preparedness across diverse possible futures. They also served as a core analytical input to the International AI Safety Report 2026, chaired by Yoshua Bengio, helping frame discussions on emerging risks at the frontier of AI capabilities.

    As Prof. Bengio has noted: “A wise strategy, whether you’re in government or in business, is to prepare for all the plausible scenarios.” That means building governance frameworks that are robust, adaptive and ready for diverse outcomes.

    📘 Read the OECD report to explore the scenarios in depth – 🔗 link in the comments below.
    💬 Which scenario should policymakers prioritise preparing for today? Tell us in the comments.
    #AI #AIScenarios #AIGovernance #AISafety #OECD #ResponsibleAI AI Security Institute Department for Science, Innovation and Technology OECD - OCDE

  • Congratulations to Yoshua Bengio and the entire writing and advisory team on the release of the International AI Safety Report 2026 – an outstanding and rigorous contribution to the global evidence base on advanced AI capabilities, risks and risk management. This report exemplifies what international, science-based collaboration can achieve: a clear, sober and forward-looking assessment that helps policymakers navigate fast-moving technological change with analytical depth and intellectual independence.

    We are pleased that the OECD - OCDE contributed directly to this work, including:
    🔹 Scenario development exploring plausible pathways for AI capability evolution to 2030
    🔹 Quantitative forecasting work to help policymakers reason under deep uncertainty
    🔹 Participation in the international Expert Advisory Panel supporting the report’s review process

    These contributions help ensure the report speaks to real policy needs, complements parallel international efforts, and strengthens shared understanding across jurisdictions. Warm congratulations again to Yoshua Bengio and all contributors for this landmark report – a vital reference for governments, researchers and institutions working to ensure AI delivers benefits safely, responsibly and at global scale.

    🔗 Read the report via the original post below
    #AISafety #ResponsibleAI #AIGovernance #InternationalCooperation #OECD

    Today we’re releasing the International AI Safety Report 2026: the most comprehensive evidence-based assessment of AI capabilities, emerging risks and safety measures to date. Over 100 independent experts contributed to the Report, including Nobel laureates and Turing Award winners, along with an Expert Advisory Panel nominated by over 30 countries and international organisations, including the European Union, OECD - OCDE, and United Nations. The Report also features exciting collaborations with the OECD and the Forecasting Research Institute.

    AI poses an “evidence dilemma” to policymakers: capabilities evolve quickly, but scientific evidence emerges far more slowly. Acting too early risks entrenching ineffective policies, but waiting for strong evidence may leave society vulnerable to risks. With all the noise around AI, I hope this Report provides policymakers, researchers and the public with the reliable evidence they need to make more informed choices about how to develop and deploy this critical technology.

    I’ve found the collaborative spirit of the 100+ international contributors heartening, and am grateful to have benefitted from their diverse perspectives. Thank you to all contributors for their dedication.

    Link to the full Report: https://lnkd.in/e6H4uGRE


  • JOIN US TODAY ONLINE 📅 14:30 – 17:00 CET

    How can transparency reporting strengthen global AI governance – across borders and sectors? Today, the OECD joins The Brookings Institution for a timely discussion on the Hiroshima AI Process (HAIP) Reporting Framework – a practical, voluntary approach to advancing trust, accountability and interoperability in AI governance.

    🎙️ Audrey Plonk (OECD) will share insights from the first reporting cycle and what comes next as momentum builds toward the India AI Impact Summit.

    👉 Join the conversation and explore how transparency reporting can support responsible AI at scale: https://lnkd.in/e-EVAmnD
    #AIGovernance #ResponsibleAI #HAIP #Transparency #OECD #AIPolicy #AIImpac


  • We were pleased to have Karine Perset present at the Hiroshima Global Forum for Trustworthy AI, where discussions focused on a shared challenge: how to move from high-level principles to concrete, interoperable practices for AI safety, security and trustworthiness across jurisdictions. Karine joined the session on Initiatives of International Organizations, bringing the OECD perspective into a truly global dialogue convened by the Japan AI Safety Institute and the Cabinet Office, Government of Japan.

    Karine highlighted how the OECD AI Principles are being operationalised through the OECD.AI Policy Observatory, with practical tools to support governments and other stakeholders. She presented key OECD initiatives, including work on defining, reporting and monitoring AI incidents, as well as the OECD Catalogue of Tools and Metrics, which maps concrete approaches to trustworthy AI across countries, objectives and stakeholder groups.

    Many thanks to the Japan AI Safety Institute and all partners for convening this timely forum and fostering open, international exchange. The discussions in Hiroshima underscored the importance of collaboration among governments, international organisations, industry and research to advance trustworthy AI in practice.

    🔗 If you haven’t visited the OECD.AI Observatory recently, have a look at some of the tools Karine covered in her presentation at the link in the comments below. 👇
    #AISI #AIsafety #TrustworthyAI #OECDAI #ASEAN #UNU


Funding

OECD.AI: 1 total round. Last round: Grant, US$ 250.0K (source: Crunchbase).