TL;DR: I'm glad to have read the book but can't recommend it. The interesting ideas are now widely accepted (computers can learn, and we can't rely on computers to make decisions).
Best Quotes
"A goal-seeking mechanism will not necessarily seek our goals" (page 63)
"This is only one of the many places where human impotence has hitherto shielded us from the full destructive impact of human folly" (page 64)
"A digital computer can accomplish in a day a body of work that would have the full efforts of a team of [human] computers for a year, ..." (page 71). A modern 2022 computer can do the work of 40,000 people for a year in about a second (a Core I5 can do 34969 million FLOPS).
"Written for the intellectually alert public"
The book cover flaps are, unusually, one long continuous essay summarizing the book. The final paragraph: "... written for the intellectually alert public, does not require of the reader that [they] have a highly technical background." I suspect that this is editor's code for "all the glamor of a calculus textbook, but without the equations."
I originally picked up this book second-hand as part of my general interest in the history of my profession, computing. This is the first time I've managed to get all the way through while also grasping what the heck Norbert is trying to say. It helped that I put in lots of annotations and had access to the internet.
Theme: Computers will be like humans
If you accept that the Star Trek character "Data" is a "sentient being", then you already agree with Wiener. The entire book is trying to get us people to understand that eventually computers will have all of the attributes of sentient life.
The book was written in the early 1960s (the publication date of 1964 is misleading; the book is a rewritten amalgam of earlier lectures), which is before "Star Trek" brought sentient robots to the general public, but long after Isaac Asimov's Robot series (including the books with Daneel Olivaw).
Wiener's basic thesis is that "computers" need to be considered in three ways: can a computer learn, can a computer reproduce, and which functions should be handled by humans and which by computers?
Can computers learn (spoiler: yes)
The "can computers learn" is now well understood: yes, they can. Weiner has a highly intelligence-is-everything point of view: in his opinion, as soon as a game is theoretically understood, it ceases to be of any interest at all to anyone. The obvious counter-example -- that people still play tic-tac-toc -- is entirely unconsidered.
This section, BTW, is what propelled me to write notes in the book. Wiener will bring up a person's name on one page, mess about for 15 pages, and then bring back that name assuming that you remember it.
Can computers make a new computer? (spoiler: eventually, yes)
The section on whether computers can duplicate themselves can only be understood by people who understand the complex dead-end mechanisms used in WW2 artillery fire control systems. This is something Wiener excelled at, and he has great enthusiasm for it. But a better example is the numerically controlled machine tools that were already available -- a computer can guide the tools needed to build more computers.
The section is also somewhat weird. Biologists love to use "can reproduce themselves" as part of the important distinction between living and non-living. But from a legal or religious perspective, it's bunk: people don't have more or fewer rights because of their ability to reproduce.
What's the right place of computers? (helper, not decider)
Wiener correctly foreshadows the problems of having computers be the ultimate decider of critical actions, while also missing most of the problems that bedevil us currently.
He's got a lot to say about nuclear war (nearly sixty years later, we thankfully have never had a nuclear war, although arguably several wars have been highly influenced by the nuclear capabilities of the sides). He's rightfully skeptical of automated launch systems -- most alerts turn out to be false alarms.
So, he says that computers will be like humans? (answer: no)
On the one hand, he's got a lot to say about how computers can theoretically learn, mutate, and reproduce. But he doesn't carry this to the logical conclusion that computers will eventually be sentient; he never brings sentience up at all. Instead, he argues that we humans must block any attempt to have computers make decisions that affect us humans. He's firmly in the camp that computers are good helpers for the human intellect but are ill-suited to being in control.
And right now, I'd say he's right. We see computers making "unbiased" decisions on health care that turn out to be racist (*), or "unbiased" justice decisions that disproportionately send one group of people to jail. And we see clearly during these days of the Ukraine war that computerized messaging can be a tool to amplify one position or another.
If it's not physics, it's crap
Holy cow, there's an entire chapter devoted to bashing the mathematical formulations of anything that isn't physics. He's got a lot to say about how (for example) mathematical economics can't possibly ever be useful because getting good data is hard. What he misses is that we can deal with the data being wonky. During the pandemic, we all saw the strange way that death rates would fluctuate, only to be explained that this state or that state was behind in its processing, and would periodically catch up by providing one giant batch of data. Similarly, the reason that some states (like Florida) show an artificially low death rate is that visitor deaths are reported by the visitor's home state.
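"Dealing with the data being wonky" is routine statistics. Here's a toy illustration (my own, not from the book, with made-up numbers): a steady daily rate reported in weekly batches looks wildly spiky, but a trailing seven-day average recovers the true rate:

```python
# Simulate a state that only files its death reports once a week,
# then smooth the lumpy series with a trailing 7-day moving average.

true_daily = [100] * 28    # the real, steady rate: 100 per day for 4 weeks

reported = []
backlog = 0
for day, n in enumerate(true_daily):
    backlog += n
    if day % 7 == 6:       # the office files only on the 7th day
        reported.append(backlog)   # one giant batch: 700
        backlog = 0
    else:
        reported.append(0)         # nothing reported in between

# trailing 7-day moving average
smoothed = [sum(reported[max(0, i - 6):i + 1]) / 7
            for i in range(len(reported))]

print(smoothed[-1])  # → 100.0, the true rate, despite the lumpy reporting
```

The raw series swings between 0 and 700; the smoothed series settles at the true value of 100. Batched reporting adds noise, not bias, so the underlying model can still be useful.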
One problem some academics have is that they can see how their own field is impacted by whatever the new thing is, but they can't imagine how it will impact other fields. Famously, after WW2, the British government commissioned an academic to decide whether these new "computers" would be useful. The academic could easily see how his own particular field (x-ray crystallography) would benefit, but couldn't imagine that computers would be useful in any other field.
Wait -- what's all this religious stuff?
Wiener loves to talk religion. He's not particularly coherent about it. FYI: the sin of simony isn't related to Black Masses.
Where's the golem?
The golem is the Golem of Prague. It's mentioned in passing on page 49. Considering that it's the overarching theme, you'd think it would come up a bit more. It appears once more on page 95, in the conclusion, in an attempt to explain why the book is called God and Golem, Inc.
What's with the ", Inc"?
For years I assumed the title was best read as "(God and Golem), Inc". It's actually best parsed as "(God) and (Golem, Inc)": Wiener is comparing the for-profit creators of computing machinery ("Golem, Inc") with God.
(*) I can hear the "well, actually" crowd now. "Well, actually, the computers aren't racist; they merely use racist data to implement racist policies that have disproportionate impact on different races in a way that dehumanizes people and creates additional stumbling blocks, but the computers themselves aren't racist." Well, actually, that attitude is bogus.