weird and unfriendly behavior. We know their mindset is inhuman and inherently difficult to train to do useful things while also obeying moral rules. We know we have no robust, foolproof way to instill a "do not harm people" principle in them. It really is believable that one or more of them, when cognitively scaled up and given the opportunity, would try to exterminate some or all of us, as casually as you or I would spray Raid on some ants we had found in the kitchen.

Many A.I. and "A.I. ethics" experts are thinking about this problem now. At least one prominent researcher, Eliezer Yudkowsky, has rather emotionally thrown his hands up in despair (see video above). He will keep thinking about the alignment problem, he says, but for now has no good solution and, worse, no confidence in the folks who currently control A.I. research.
My own view, for what it's worth (I'm not an A.I. expert, though I have a technical background), is that the A.I. alignment problem isn't the main problem here.

Alignment should be a soluble technical problem for an A.I. system whose architecture is designed with the need for alignment in mind. A key goal of this design approach would be to ensure that the A.I.'s motives and specific plans are always transparent. It's like putting a speed governor on a car's drive system: a relatively straightforward task, if you have a real-time readout from an accurate speedometer.

There is a deeper problem, though, one that is general to societies that believe their cultures and technologies should be free to evolve where they will. Put simply, although many technologies have potentially hazardous side effects, in Western societies hardly any of them are regulated so strongly that their hazards are effectively mitigated in every instance of the technology.
In the case of A.I., it should be technically possible, maybe even easy, to align a given system with training and hard-coding, assuming it has the right architecture. Enforcing the alignment of every A.I. system that presents a potential hazard, in order to cut the risk to zero, would be the real challenge. Even domestic enforcement would be tough, but international enforcement, against bad-actor states like Russia, China, and North Korea, could be impossible without war-like cross-border interventions. And, again, we're not talking about a technical issue of A.I. design. We're talking about the geopolitical issue of being able to control, regulate, and, if needed, destroy other countries' A.I.s.

It's easy to imagine that as A.I. develops in Western countries, domestic regulatory regimes will develop around it, perhaps modeled on the existing regulatory systems covering nuclear reactors and the plutonium and other radioactive byproducts they generate. (The antiterrorism model is probably also applicable.) For the regulation of "foreign A.I.s," the system will probably resemble the modern arms-control and anti-proliferation setup.
Modern arms-control and anti-proliferation efforts have, so far, been moderately successful in keeping nukes out of the hands of crazy states. Obviously, they have not been entirely successful: see Iran, Pakistan, and North Korea. Moreover, A.I. could be a lot harder to regulate than nuclear weapons. Nukes require very special materials and engineering knowledge. By contrast, even a future superintelligent A.I. might, in principle, run on consumer-grade hardware that any moderately wealthy Dr. No type could order from Amazon.com and assemble undetectably on private property. Most importantly, the hazard from any instance of an advanced A.I. is potentially infinite from the human perspective, whereas the hazard from any single nuclear weapon (or even all of them) is much more limited.

So a plausible scenario is that Western and Western-allied governments will set up A.I. regulatory systems domestically and, to the extent they can, a regulatory and anti-proliferation system abroad. Presumably they will also take steps to counter, or survive, specific WMD threats from A.I.s gone bad, threats that could run the full gamut of nightmares, including totally novel pathogens with human-exterminating potential. Despite all this effort, though, it seems unlikely that "the good guys" will be able to mitigate the risk sufficiently within the system of nations that now exists.
On the other hand, as awareness of the risk grows (possibly driven by actual disasters), it should push Western governments to work together ever more tightly, doing whatever they can to extend A.I. regulation, coercively if necessary, to non-compliant individuals and organizations in the West, and to entire non-compliant countries outside the West. If the risk is as big, and as hard to mitigate, as I suspect, then the end result could effectively be a single, highly intrusive, all-surveilling World Government. Obviously, the risks from other hazardous technologies will tend to drive things in the same direction. Even if the geopolitical changes don't run all the way to that drastic outcome, people ultimately will be forced to recognize that the West's naïve belief in "freedom" was always going to lead it toward a Leviathan-like unfree state.

***