Machine Learning Safety – Full Course from the Center for AI Safety
By freeCodeCamp.org
Published: Aug 02, 2023
ML systems are rapidly increasing in size, are acquiring new capabilities, and are increasingly deployed in high-stakes settings. As with other powerful technologies, safety for ML should be a leading research priority. In this course we’ll discuss how researchers can shape the process that will lead to strong AI systems and steer that process in a safer direction. We’ll cover various technical topics for reducing existential risks (X-Risks) from strong AI, namely withstanding hazards (“Robustness”), identifying hazards (“Monitoring”), reducing inherent ML system hazards (“Alignment”), and reducing systemic hazards (“Systemic Safety”). At the end, we’ll zoom out and discuss additional abstract existential hazards and how to increase safety without unintended side effects.

✏️ See course.mlsafety.org for more.
⭐️ Contents ⭐️
(0:00:00) Introduction
(0:11:09) Deep Learning Review
(0:52:41) Risk Decomposition
(1:06:57) Accident Models
(1:39:22) Black Swans
(1:58:45) Adversarial Robustness
(2:29:40) Black Swan Robustness
(2:52:56) Anomaly Detection
(3:35:32) Interpretable Uncertainty
(3:59:09) Transparency
(4:12:22) Trojans
(4:22:52) Detecting Emergent Behavior
(4:43:07) Honest Models
(5:00:06) Machine Ethics
(5:52:08) ML for Improved Decision-Making
(6:04:40) ML for Cyberdefense
(6:25:00) Cooperative AI
(6:58:33) X-Risk Overview
(7:05:23) Possible Existential Hazards
(7:13:16) AI and Evolution
(8:03:08) Safety-Capabilities Balance
(8:21:07) Review and Conclusion

🎉 Thanks to our Champion and Sponsor supporters:
👾 davthecoder
👾 jedi-or-sith
👾 南宮千影
👾 Agustín Kussrow
👾 Nattira Maneerat
👾 Heather Wcislo
👾 Serhiy Kalinets
👾 Justin Hual
👾 Otis Morgan

--
Learn to code for free and get a developer job: https://www.freecodecamp.org
Read hundreds of articles on programming: https://freecodecamp.org/news