Advancing accountability in AI: governing and managing risks throughout the lifecycle for trustworthy AI

Table of Contents

Title page

Contents

Foreword 2

Acknowledgements 3

Abstract 6

Résumé 7

Background and objectives 8

Executive summary 9

Synthèse 11

1. Introduction 13

1.1. The need for trustworthy AI 13

1.2. What is trustworthy AI? 13

1.3. What is accountability in AI? 14

2. DEFINE: Scope, context, actors, and criteria 18

2.1. Scope 18

2.2. Context 18

2.3. Actors 19

2.4. Criteria 22

3. ASSESS: Identify and measure AI risks 23

3.1. Benefiting people and the planet 23

3.2. Human-centred values and fairness 24

3.3. Transparency and explainability 29

3.4. Robustness, security, and safety 30

3.5. Interactions and trade-offs between the values-based Principles 31

4. TREAT: Prevent, mitigate, or cease AI risks 33

4.1. Risks to people and the planet 33

4.2. Risks to human-centred values and fairness 34

4.3. Risks to transparency and explainability 37

4.4. Risks to robustness, security, and safety 38

4.5. Anticipating unknown risks and contingency plans 40

5. GOVERN: Monitor, document, communicate, consult and embed 41

5.1. Monitor, document, communicate and consult 41

5.2. Embed a culture of risk management 49

6. Next steps and discussion 50

Annex A. Presentations relevant to accountability in AI from the OECD.AI network of experts 51

Annex B. Participation in the OECD.AI Expert Group on Classification and Risk 53

Annex C. Participation in the OECD.AI Expert Group on Tools and Accountability 55

References 58

Tables

Table 2.1. Sample processes and technical attributes per OECD AI Principle 22

Table 3.1. Examples of documentation to assess transparency and traceability at each phase of the AI system lifecycle 30

Table 4.1. Approaches to treating risks to people and the planet 33

Table 4.2. Approaches to treating bias and discrimination 34

Table 4.3. Approaches to treating risks to privacy and data governance 36

Table 4.4. Approaches to treating risks to human rights and democratic values 37

Table 4.5. Approaches to treating risks to transparency and explainability 37

Table 4.6. Approaches to treating risks to robustness, security, and safety 39

Table 5.1. Characteristics of AI auditing and review access levels 44

Figures

Figure 1.1. High-level AI risk-management interoperability framework 16

Figure 1.2. Sample uses of the high-level AI risk management interoperability framework 17

Figure 2.1. Actors in an AI accountability ecosystem 20

Figure 3.1. UK Information Commissioner's Office (ICO) qualitative rating for data protection 27

Figure 3.2. Mapping of algorithms by explainability and performance 32

Figure 5.1. Trade-off between information concealed and auditing detail by access level 45

Boxes

Box 1.1. What is AI? 13

Box 1.2. Trustworthy AI per the OECD AI Principles 14

Box 2.1. Mapping the lifecycle phases to the dimensions of an AI system 18

Box 3.1. Errors, biases, and noise: a technical note 25

Box 3.2. Human rights and AI 28

Box 3.3. Explainability vs interpretability 29

Annex Tables

Table A.1. OECD.AI expert presentations 51

Table B.1. Participation in the OECD.AI Expert Group on Classification & Risk (as of December 2022) 53

Table C.1. Participation in the OECD.AI Expert Group on Tools & Accountability (as of December 2022) 55
