A new film from acclaimed director, Leanne Pooley.
Keir Dullea
- Self - Narrator
- (voice)
Max Tegmark
- Self - Astrophysicist
- (as Prof. Max Tegmark)
- …
Louis Rosenberg
- Self - CEO & Chief Scientist, Unanimous A.I.
- (as Dr. Louis Rosenberg)
Stuart J. Russell
- Self
- (as Stuart Russell)
Rodney Brooks
- Self - Robotics, M.I.T.
- (as Emeritus Prof. Rodney Brooks)
Mary Cummings
- Self - Humans & Autonomy Lab, Duke University
- (as Dr. Mary Cummings)
Featured reviews
... from the world of A.I., as just about every possible scenario is forecast for its future use and application over every possible time scale - all through opinions, without much evidence, from so-called "geeks" and "nerds" who must have recently read Philosophy for Dummies.
Also doubles as a Terminator franchise promotion, because time travel is also likely to happen! Blade Runner would have been a far better reference for a lot of the discussions.
I think the film is great! I've recommended it to my friends and colleagues in IT to watch as soon as they get a chance. I think that currently the risks of AI (more specifically Artificial General Intelligence and Artificial Super Intelligence) are not understood by most people, even most IT and AI researchers, as the main focus (and main budgets) go to ANI (narrow AI), which is already making its way into our society and has a lot of (potential) benefits in various fields including medicine (e.g. diagnosis of cancer, fighting pandemics), logistics, climate, sustainability, etc.
It's brilliant that in this film Keir Dullea looks back on "2001" and his interactions with HAL. For most people outside the field of AI, HAL is still the most recognizable superintelligent AI computer. The documentary gives a very nice overview of the different stakeholders and views in the current AGI pro/con discussions (benefits of AI, robotics, warfare, existential risks for humanity, is it controllable or not?). Especially Bryan Johnson's quote ("What is our plan as a species? ... We don't have a plan and we don't realize it's necessary to have a plan.") keeps coming back to my mind. I think that's exactly the issue. Almost everyone in the field of AI (even more cautious people like Stuart Russell or Max Tegmark) assumes that AGI will arrive soon (within the next 10 to 50 years). And many researchers agree that there are very serious risks (including existential risk) that come with this. However, when they talk about mitigation of these risks, the discussions become more unclear, e.g. Stuart Russell's suggestion of "Provable Benevolent AI", or Ben Goertzel's ideas on "decentralized AI". To me this doesn't make sense; we should first have a plan that proves the fundamental risks are mitigated before we move on. Or else put AGI research on hold (if this is still possible...) as we did with genetic manipulation and cloning.
Discusses the potential of AI with clear, thought-provoking questions. The variety of opinions expressed by the featured leading academics gives you a wide picture of the future in this field, making you do exactly what the title entails: talk about AI.
Thought-provoking and extremely watchable documentary showcasing a balance of viewpoints from leading experts - the trouble is, they were all so convincing and backed up their arguments so well that I am not sure whose side I come down on. The most telling comment for me, however, was the observation from one that "the human race does not have a plan". So now that the AI genie has been released, are we hurtling across a minefield without a map towards doom, or salvation? And - another point of professional disagreement - will it take 40, 60, 100 or 200 years to find out? I have been talking about AI a lot since I viewed this.
Did you know
- Connections: Features 2001: A Space Odyssey (1968)
Details
- Runtime: 1 hour 26 minutes