2.1 Technical Objection: AI / AGI Doesn’t Exist; Developments in AI Are Not Necessarily Progress Towards AGI

Technical Objection: The argument is that current developments in machine learning are not progress towards AI or AGI, but are merely advances in statistics, particularly in matrix multiplication and gradient descent.
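The mechanics this objection points to can be made concrete with a toy example (a hypothetical illustration, not from the source): fitting a line by gradient descent uses nothing beyond matrix multiplication and a derivative.

```python
import numpy as np

# Toy linear regression: the whole "learning" procedure is
# matrix multiplication plus gradient descent on squared error.
rng = np.random.default_rng(0)
X = rng.normal(size=(100, 2))                  # input features
true_w = np.array([2.0, -3.0])                 # weights to recover
y = X @ true_w + 0.01 * rng.normal(size=100)   # targets with small noise

w = np.zeros(2)    # parameters, initialized at zero
lr = 0.1           # learning rate
for _ in range(500):
    grad = 2 * X.T @ (X @ w - y) / len(y)      # gradient of mean squared error
    w -= lr * grad                             # one gradient descent step

print(w)  # close to [2.0, -3.0]
```

Whether stacking many such steps and layers amounts to "progress towards AGI" or merely to better statistics is exactly what the objection disputes.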

2.2 Technical Objection: Superintelligence is Impossible

Technical Objection: Superintelligence (AGI/ASI) is impossible. Someone who believes superintelligence can never be built will naturally view risk from superintelligence with strong skepticism. Most people in this camp assign a very small (but usually nonzero) probability to superintelligent AI actually coming into existence; yet when even the tiniest probability is multiplied by the enormous value of the Universe at stake, the expected-value math seems to weigh against skepticism. This camp also denies the possibility of AI pursuing long-term goals.
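The expected-value point is simple arithmetic; the numbers below are hypothetical, chosen for exposition only, not taken from the source.

```python
# Hypothetical numbers for illustration only.
p_superintelligence = 1e-6   # a skeptic's "tiny" probability
value_at_stake = 1e15        # stand-in for astronomical stakes

# Expected loss = probability x stakes.
expected_loss = p_superintelligence * value_at_stake
print(expected_loss)  # on the order of 1e9: large despite the tiny probability
```

However small the probability, a large enough stake keeps the product large, which is why this argument is often compared to Pascal's wager.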

2.3 Technical Objection: Self-Improvement is Impossible

Technical Objection: Self-improvement is impossible, so an intelligence explosion, which would arise as a side effect of recursive self-improvement, cannot happen. The argument rests on fundamental computational limits and software complexity; on this view, AI systems cannot become smarter than humans.

2.4 Technical Objection: AI Can’t be Conscious

Technical Objection: AI cannot be conscious. Proponents argue that in order to be dangerous, AI needs consciousness or qualia.

2.5 Technical Objection: AI Can just be a Tool

Technical Objection: AI can just be a tool. We do not need a general AI to be an independent agent; AI systems can instead be designed as assistants to humans in particular domains.

2.6 Technical Objection: We Can Always Just Turn It Off

Technical Objection: We can always just turn it off. A misbehaving AI can simply be switched off, or the power grid it depends on can be shut down.

2.7 Technical Objection: We Can Reprogram AIs if We Don't Like What They Do

Technical Objection: We can reprogram AIs if we don't like what they do; we can simply change an AI's code.

2.8 Technical Objection: AI Doesn't Have a Body, so It Can't Hurt Us

Technical Objection: AI doesn't have a body and so can't hurt us. AI cannot act in the physical world; it has no hands.

2.9 Technical Objection: If AI Is as Capable as You Say, It Will Not Make Dumb Mistakes

Technical Objection: If AI is as capable as you say, it will not make dumb mistakes. How could a superintelligence fail to understand what we really want? Any system worthy of the title "human-level" must have the same common sense we do.

2.10 Technical Objection: Superintelligence Would (Probably) Not Be Catastrophic

Technical Objection: Superintelligence would (probably) not be catastrophic. Intelligence is not dangerous by default; its behavior is correctable in time and is unlikely to be malevolent unless it is explicitly programmed to be.

2.11 Technical Objection: Self-preservation and Control Drives Don't Just Appear; They Have to Be Programmed In

Technical Objection: The desire to control access to resources and to influence others are drives built into us by evolution for our survival. There is no reason to build these drives into our AI systems.

2.13 Technical Objection: AI Can't Generate Novel Plans

Technical Objection: AI can't generate novel plans. As Ada Lovelace put it: "The Analytical Engine has no pretensions whatever to originate anything. It can do whatever we know how to order it to perform. It can follow analysis; but it has no power of anticipating any analytical relations or truths."