{"id":788,"date":"2023-03-04T19:40:38","date_gmt":"2023-03-05T00:40:38","guid":{"rendered":"http:\/\/thoughtsofstone.com\/?p=788"},"modified":"2023-03-04T19:47:35","modified_gmt":"2023-03-05T00:47:35","slug":"from-tech-freedom-to-world-government","status":"publish","type":"post","link":"https:\/\/thoughtsofstone.com\/from-tech-freedom-to-world-government\/","title":{"rendered":"FROM TECH FREEDOM TO WORLD GOVERNMENT"},"content":{"rendered":"

<p><em>Further notes on the \u201cunaligned A.I.\u201d problem<\/em><\/p>\n



<p>A lot of dust is now being raised by media hype and corporate positioning about A.I.\u2014similar to what we saw in the early days of the Internet. Behind all the dust clouds, though, there\u2019s an active debate among techies and tech-adjacent types about the \u201cA.I. apocalypse\u201d that may lie in our future.<\/p>\n

<p>My previous post<\/a> has more details, but in short I\u2019m referring to a future in which A.I. systems will be significantly more powerful than they are today\u2014maybe capable of running entire industries, maybe capable of running everything. While these systems could displace most or all humans from the production side of the economy, they could also drive the costs of goods and services so low that anyone, on the strength of savings or a state subsidy, could live a comfortable life. (In other words, the \u201cparadise\u201d depicted in films like <em>Wall-E<\/em>.) One catch is that these A.I. systems, if built with the same machine-learning design approaches used in modern ChatGPT-type systems, will effectively be advanced non-human intelligences with opaque cognitive processes. Training them to \u201calign\u201d their values with human values might be as hard as it is with today\u2019s much more primitive systems, or even harder. That\u2019s a problem because an <em>unaligned<\/em> A.I. is one that plausibly would have no compunction about doing away with humans\u2014just as soon as it could survive without them<\/a>.<\/p>\n