Let’s begin today’s post with a reference to a Sci-Fi novel: The Forever War (by Joe Haldeman, 1974). This brilliant story was the first I’d read that placed at its core the relativistic effects of near-lightspeed travel – including future shock (Alvin Toffler: “too much change in too short a period of time”). But the seed I need from this story for my posts (spoiler alert) is that the war between Humans and Taurans started because of a big misunderstanding. Oops! “We apologize for the inconvenience.”
AI & The Fifth Generation Computer
A similar misunderstanding happened during the ’80s and ’90s between Japanese and American computer researchers. In 1982, Japan’s Ministry of International Trade and Industry (MITI) funded the Institute for New Generation Computer Technology (ICOT). ICOT was based in Tokyo, and its director, Kazuhiro Fuchi, was appointed to lead a broad, multidisciplinary research effort. In addition, ICOT was solidly backed by Japanese corporations such as Toshiba, Sharp, NEC, Fujitsu, etc. (94 contributors in total).
ICOT’s main goal was to fast-forward a decade into the ’90s and provide Japan with much-needed computing technologies, encompassing critical fields such as VLSI, massively parallel computer architectures and, perhaps most prominently, software! And by software, you guessed it, it was all about Artificial Intelligence. What was the misunderstanding? Same old story: before the harsh reality of project de-funding and the arrival of a winter (it is coming…), the wildest phantasms are projected upon the technology, without any restraint whatsoever. Then we usually fail to deliver on the phantasmagoric expectations, and everyone goes to the beach until the next misunderstanding…
In 1993, when ICOT was shut down, almost everyone described it as a failure. This included the usual naysayers, but not only them. Many genuinely feared the dominion ICOT could have given to Japan. Shocker! A good illustration of this point is The Fifth Generation: Artificial Intelligence and Japan’s Computer Challenge to the World (by Edward A. Feigenbaum and Pamela McCorduck, 1983). The subtitle says it all: the World is the United States of America, its military, its vibrant computer industry, and its renowned universities. All of it was condemned to be wiped out by an unexpected and formidable Oriental foe. Certainly not by the hands of the Chairman of the Supreme Soviet of the Soviet Union. Good news: none of these alternatives turned out to be true. Instead, mainframes and their cohort of mini-computers vanished, but not because of some mighty AI supercomputer dispatching its mandates in Kanji. No, it happened because of … eeny, meeny, miny, moe … the Personal Computer! Of course, strong AI was still nowhere to be seen a few years later; instead, in 1997, IBM’s Deep Blue ass-kicked world chess champion Garry Kasparov using brute force.
You Missed the Point
Describing ICOT as a failure is, of course, one-sided and ridiculous. At this point, I must mention The Fifth Generation Fallacy: Why Japan Is Betting Its Future on Artificial Intelligence (by J. Marshall Unger, 1987). According to the author, a linguist, the misunderstanding was total. Indeed, Japan’s planners didn’t have any other choice if they wanted to open up computers to the general population. Early on, the percentage of computer-literate users in Japan was in the single digits, and the outlook was grim. Unger explains that this is because of the complex, semantically rich writing system used in Japan (48 phonetic symbols and over 6,000 Chinese ideographs). So, if weak-AI techniques were indeed emphasized in the scope of ICOT, it was not a plan for world domination, but instead the pressing need to design functional word-processing technologies! A word processor, not an ill-tempered toaster with lasers! Pragmatically, industrial contributors to ICOT wanted to sell word processors first, computers later. LISP or PROLOG machines? Maybe one day.
But make no mistake, ICOT pushed the envelope of AI in the field of logic programming as well. Indeed, ICOT researchers chose an in vitro approach to knowledge representation and manipulation. As their programming language of predilection, they picked Prolog. This language was created in the early ’70s by French researcher Alain Colmerauer and got its name from the contraction of PROgrammation en LOGique. The first Prolog system was developed shortly thereafter with the help of another French researcher, Philippe Roussel.
Prolog is well suited to implementing first-order logic, and a Prolog application is a collection of facts and rules – a.k.a. the knowledge base – against which the inference engine resolves queries asked by the operator. The main problem with a serious Prolog program is the combinatorial explosion of time and resources required to run the user’s queries. Of course, we know today that parallelism is one way to handle such problem sizes. And it was as early as 1982, during a stay at ICOT, that Ehud Shapiro invented Concurrent Prolog, which in turn influenced the parallel implementation of Prolog at ICOT!
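To make the facts-and-rules model concrete, here is a minimal sketch in Python – a hypothetical toy, not ICOT’s system: the facts, rule, and names are invented, and where real Prolog answers queries by backward chaining with unification and backtracking, this sketch uses naive forward chaining to derive all consequences of a tiny knowledge base.

```python
# Toy inference over a Prolog-style knowledge base: ground facts plus
# one Horn-clause rule. Uppercase strings act as logic variables.
FACTS = {("parent", "taro", "hanako"), ("parent", "hanako", "jiro")}

# grandparent(X, Z) :- parent(X, Y), parent(Y, Z).
RULES = [
    (("grandparent", "X", "Z"), [("parent", "X", "Y"), ("parent", "Y", "Z")]),
]

def is_var(term):
    return term[0].isupper()

def match(template, fact, binding):
    """Unify a template against a ground fact; return the extended binding or None."""
    if len(template) != len(fact):
        return None
    extended = dict(binding)
    for t, f in zip(template, fact):
        if is_var(t):
            if extended.get(t, f) != f:  # conflicting binding
                return None
            extended[t] = f
        elif t != f:
            return None
    return extended

def substitute(template, binding):
    return tuple(binding.get(t, t) for t in template)

def infer(facts, rules):
    """Naive forward chaining: apply rules until no new fact appears."""
    facts = set(facts)
    changed = True
    while changed:
        changed = False
        for head, body in rules:
            bindings = [{}]
            for goal in body:  # satisfy each body goal against known facts
                bindings = [b2 for b in bindings for f in facts
                            if (b2 := match(goal, f, b)) is not None]
            for b in bindings:
                derived = substitute(head, b)
                if derived not in facts:
                    facts.add(derived)
                    changed = True
    return facts

closure = infer(FACTS, RULES)
print(("grandparent", "taro", "jiro") in closure)  # prints: True
```

The interesting part is how quickly this blows up: every body goal multiplies the candidate bindings by the number of matching facts, which is exactly the combinatorial explosion mentioned above, and why parallel evaluation of independent goals looked so attractive to ICOT.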
To bridge this post into the next one, I will ask a question: “did you know that Japanese (and European) AI researchers prefer using Prolog, while Americans favor LISP?”
To be continued…