This was a lengthy polemic on the necessity of US governmental oversight of the development of superintelligence over the next decade. There was a particular emphasis on China as the rival best positioned to reach AI dominance in that window, thanks to the lax security standards currently in place at AI labs. The existential risks of AI development were a backdrop to the thrust of the argument.
The premise is compelling enough for me to agree with. Sort of. The line between AI applications that require military command and civilian uses becomes blurry or nonexistent across much of the map.

I'm not fully sold on the analogy of the Oppenheimer moment (the paper contained several references to Los Alamos), because the trajectory of the technology runs in reverse. The positive applications of nuclear power were downstream of the bomb. The negative applications of AI are downstream of a benign process of step-by-step learning, iterated at speeds beyond human reaction. It seems clear to me that some directions AI takes will require immediate government involvement, but many other paths will be contested. At what point, for example, does medical research become dangerous enough to require DOD oversight?

China's dominance would not be an optimal outcome. But even if China gains a decisive edge in the next decade, that edge doesn't seem irreversible. I'm not convinced that the dominant AI model would be tied to geography, via power demands or any other bottleneck. It's not even clear what dominance would look like. What if the fastest superintelligence had blind spots that an opposing AI force could exploit? What if China had a superior power grid that one or more invisible US models exploited?

This was a useful corrective to both doomer and accelerationist narratives.