Chapter 9: The Natural Logic of Artificial Intelligence (*)

Mario Carpo

Artificial Intelligence may be a very old dream, but the term Artificial Intelligence itself is fairly old too: it was introduced by computer scientists in 1956 as an alternative to Norbert Wiener's new science of cybernetics, because Wiener's cybernetics had a strong emphasis on neurophysiology, which many computer scientists at the time found suspicious, or worse. Be that as it may, in the 1950s and 1960s the ambitions of computer scientists, as well as of cyberneticians, were equally out of kilter with the very limited technical means then at their disposal. When it became evident that even the biggest mainframe computers of the time could not do much more than faster high-school maths, funding dried up, and the loftiest and boldest AI projects were quietly abandoned. That was the beginning of what computer scientists to this day call "the long winter of Artificial Intelligence."

If this old term, in spite of its checkered record, was so successfully resurrected in recent times, that's because, against all expectations, what many call AI today kind of works. Most commercial computers today evidently carry out increasingly intelligent tasks, and today's computers, unlike those of the 1960s, easily win games of checkers, chess, and Go against the best human champions. Today's computers can even, almost, drive cars. Designers are not alone in questioning the survival of their profession in the imminent age of AI. Many jobs and tasks, for sure, will be eliminated by the next technical change--that's what technical changes typically do. Cab drivers are certainly right to worry. How worried should designers be?

That's hard to tell, as nobody knows for certain what AI today is or does. AI 2.0 (that is, today's AI, which works, unlike that of the 1950s, 1960s, and 1970s, which didn't) is no longer even trying to imitate the logic of modern science, or the natural processes of human learning. AI today looks more and more like a new, post-scientific, post-human method--as if computers had started to develop their own science, and their own way of thinking, which is quite different from ours. In spite of the anthropomorphic connotations embedded in some vintage AI lingo still in use (machine learning, deep learning, artificial neural networks, etc.), computers today neither think nor learn the way we do.

Most of the things we learn, we learn by doing. Humans are not unique in that: my parents' cat, when I was a child, was an urban dweller--born and bred in an apartment in town, he was familiar with underfloor heating, but he had never in his life experienced a strong and direct source of flameless heat. The first time he did, in a country house, he badly burnt his whiskers by getting too close to a monumental, German-made cast iron stove. He did not like that, and since that unhappy experience he learned to enjoy the radiant heat of the stove (which he cherished) from a safe distance. Not unlike cats, we learn all kinds of things by trial and error. When something hurts, or does not work, we take notice, and we don't do it again.

Unlike most cats, however, we can also save plenty of time by being told in advance how some things may play out. This is why we listen to our elders, go to school, and read books. Pre-industrial artisans at the end of the Middle Ages and in early modern Europe had to go through a laborious, rigidly regulated process of education and training, marked by exams at every step, before they could set up shop on their own: apprentices first had to qualify to become journeymen, then pass the most arduous tests to become masters in a trade, art, or craft. At every stage, all trainees were made privy to some core technical know-how, so that all members of the same guild would produce reliable objects of standard, comparable quality, at very similar costs. Yet a few craftsmen always stood out, by making better or cheaper stuff. This they did by tweaking and twisting, within limits, the methods they had been taught: by taking risks and trying something new. Artisans can try and copy from rivals, but the best--the innovators--always learn from their own, solitary trials. You make a chair, and if it breaks you make another, and then another, until you make one that won't break. If you are smart, you can also intuit and learn some "transferable skills" in the process, so that next time you won't have to start all over from scratch.

Trial and error is a laborious and expensive procedure, because making and breaking stuff takes time and money. Which is why, over time, scientists came up with some shortcuts. Indeed, that was the main achievement--if not the main purpose--of the modern scientific method: from the comparison and selection, generalization and abstraction of the results of many trials, scientists formulate causal laws (generally in the format of algebraic expressions) that distil in a few lines of clean mathematical script the results of many experiments, and that allow scientists to predict the outcome of similar phenomena that may occur in the future under similar conditions.

Built on such premises, statics and the mechanics of materials are among the most successful of modern sciences. Structural designers use the laws and formulas of elasticity to calculate and predict the mechanical resistance of very complex structures, like the wings of an airplane, or the Eiffel Tower, before they are built; and in most standard cases they do that using numbers, not by making and breaking physical models in a workshop anew every time. That's because those numbers--those laws--condense in simple mathematical notations the lore acquired through countless experiments performed over time. And as crunching numbers is cheaper than making and breaking stuff, and provides more reliable results, over time we came to trust engineers more than artisans, and number-based engineering replaced empirical craft as the driving and dominant technical logic of the modern industrial world.
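To give just one textbook instance of such a law (a standard result of Euler-Bernoulli beam theory, offered here purely as an illustration): the downward deflection at the tip of a cantilever of length L, loaded by a force P at its free end, is

```latex
% Tip deflection of an end-loaded cantilever (Euler-Bernoulli beam theory);
% E is the Young's modulus of the material, I the second moment of area
% of the cross-section.
\delta = \frac{P L^{3}}{3 E I}
```

A handful of symbols thus condenses innumerable bending tests: the formula tells the engineer, among other things, that doubling the length of a beam multiplies its deflection eightfold--no workshop needed.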

Then computers came. At the beginning we thought that computers were just, as the name still suggests, calculating devices--very fast abacuses that would speed up all our traditionally slow and error-prone number-based calculations. On the contrary, it is more and more evident today that computers can produce better and more useful results if we let them solve problems following their own logic and methods, instead of replicating ours. For computers, oddly, do not think like the engineers who designed them; they think more like the good, dumb artisans of old--those very artisans that modern engineers have replaced, and relegated to the dustbin of technical history. Today, using computers, we can make and break on the screen--in simulations--in a few minutes more chairs than a traditional artisan would have made and broken in a lifetime; and if we are smart we can more easily intuit or learn something in the process. In fact, we may not even need to, because computers are proving increasingly smarter than us at learning from their own trials and errors.

Computer-based simulations are already, in most cases, perfectly reliable--and yes, they are obtained, mostly, using traditional, calculus-based or discrete mathematical tools. But computers can churn out so many of them, changing as many parameters as needed, so fast, and at so little cost, that one can easily imagine that, in the limit, computers can offer an almost infinite number of solutions for each stated problem. Among so many options, at some point one or two will inevitably show up that are good enough to solve the matter at hand. Thus computer-based, simulated trial and error becomes a perfectly viable and effective problem-solving strategy, and computational heuristics should be seen today as a fully fledged post-scientific method: the core method of artificial intelligence. That means, for example, that no engineer needs to calculate the mechanical resistance of a chair any more, because computers can just make and break as many chairs in simulation as needed, until they find one that won't break. That's not unlike what a traditional artisan would have done: but computers can now do that so much faster that they can beat, by massive trial and error--by brute force--both the artisan's intuitions and the engineer's demonstrations.
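To see what such a brute-force heuristic looks like in practice, here is a minimal sketch in Python, with the chair reduced to two parameters and its "simulation" to a made-up breaking criterion (all names, numbers, and formulas here are illustrative assumptions, standing in for a real structural model):

```python
import random

def breaks(leg_thickness, seat_span):
    """Toy stand-in for a structural simulation: an invented
    criterion (thin legs under a wide seat fail), not real mechanics."""
    return seat_span / leg_thickness ** 2 > 50.0

def cost(leg_thickness, seat_span):
    """Invented material cost, in arbitrary units."""
    return 4 * leg_thickness ** 2 * 45.0 + seat_span * 2.0

# Brute-force trial and error: make and "break" a hundred thousand
# virtual chairs, and keep the cheapest one that survives.
best = None
for _ in range(100_000):
    leg = random.uniform(1.0, 8.0)     # leg thickness, cm
    span = random.uniform(30.0, 60.0)  # seat span, cm
    if breaks(leg, span):
        continue                       # discard: failure costs nothing in simulation
    if best is None or cost(leg, span) < cost(*best):
        best = (leg, span)

print(f"survivor: legs {best[0]:.2f} cm thick, seat {best[1]:.1f} cm wide")
```

No law of elasticity appears anywhere in this loop: the answer emerges from the sheer number of trials.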

However, picking and choosing among so many chairs--where many may have the same mechanical resistance, and where mechanical resistance may not even be the only design requirement--would still take time. That's why we can now teach computers to compare results themselves, and come up with smaller and smaller rosters of winners. We call that optimization, and the principles of the process have been known since the 1970s, when John Holland famously compared the optimization of mathematical algorithms to Darwin's theory of evolution by random variation and natural selection. Holland thought that we should breed algorithms the way we breed horses: by force-mating the strongest. Since then, the science of genetic, evolutionary algorithms (which should more pertinently be called eugenic algorithms) has largely proven its efficacy: most structural optimization software used in engineering today proceeds by trying a huge number of solutions at random, then picking the parameters that appear to yield better results and dropping (killing) those that don't, then again and again, ad libitum atque ad infinitum--or, in fact, until someone is pleased with the results, or runs out of time, and pulls the plug. There is no guarantee that any lead found this way will go anywhere, but as computers can keep trying forever, that's irrelevant. Indeed, having smart intuitions, or trying to orient this massively random process in any "intelligent" way, is unwarranted, unnecessary, and may even be counterproductive.
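A Holland-style loop of this kind can be sketched in a few lines, reusing the toy chair of the previous example (again, every criterion and number here is an illustrative assumption, not a real engineering tool):

```python
import random

def fitness(chair):
    """A broken chair leaves no offspring; among survivors, cheaper is fitter."""
    leg, span = chair
    if span / leg ** 2 > 50.0:                    # same toy breaking criterion
        return 0.0
    return 1.0 / (4 * leg ** 2 * 45.0 + span * 2.0)

def crossover(a, b):
    """Offspring inherit each parameter from one parent or the other."""
    return (random.choice((a[0], b[0])), random.choice((a[1], b[1])))

def mutate(chair):
    """Small random variations, clamped to keep the parameters sane."""
    leg, span = chair
    return (max(0.5, leg + random.gauss(0.0, 0.2)),
            max(10.0, span + random.gauss(0.0, 1.0)))

# Start from a random population, then select, breed, and mutate.
population = [(random.uniform(1.0, 8.0), random.uniform(30.0, 60.0))
              for _ in range(200)]
for _ in range(100):
    population.sort(key=fitness, reverse=True)
    breeders = population[:50]                    # "force-mate the strongest"
    population = [mutate(crossover(random.choice(breeders),
                                   random.choice(breeders)))
                  for _ in range(200)]

best = max(population, key=fitness)
print(f"after 100 generations: legs {best[0]:.2f} cm, seat {best[1]:.1f} cm")
```

Nobody steers this search: selection pressure alone drives the population toward chairs that stand--until someone is pleased with the results, or runs out of time, and pulls the plug.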

And that is, finally, the main difference between the way computers solve problems and the way we do: our science predicts events through laws of causation; the same formula that allows us to calculate the resistance of a cantilever also offers us an explanation--or at least a rational interpretation--of how the statics of a cantilever works. Computers don't do that, because computers are not in the business of making sense of the world. Why will this extraordinarily complex, indescribably messy structure we are fooling around with on the screen right now stand up, while the 20,000 very similar ones just tried and discarded in simulation won't? Who knows: nobody knows that--least of all its designers. But we know it will stand up, which is why we can build it. The first, still inchoate applications of artificial intelligence in technology and in the arts have already produced visual and formal results that many find weird, alien, or hostile. And rightly so, as these unusual shapes and forms are the outward and visible sign of a technical logic we may master and unleash, but can neither replicate, emulate, nor even comprehend with our minds.

A long time ago Karl Marx famously introduced the theory of "alienation" (Entfremdung, or estrangement) to decry the industrial separation of the bodies of the makers from the tools of production; the same notion may just as well apply today to the postindustrial separation of the minds of the thinkers from the tools of computation. The ontological and functional dissociation between the inner workings of natural intelligence and those of today's artificial brains will eventually result in forms of collaboration and complementarity between humans and machines--as well as in antipathy and antagonism. Time will tell. Personally, I would not worry too much. Humans have an inborn tendency to irrationally loathe and fear the very same machines they breed and need. We have been there before.

London, UK, 8th September 2019

*An earlier version of this article was published in the exhibition catalogue Perspectives: Enabling Machines to Learn, edited by Alvise Simondetti (London: Arup, 2017), pp. 7-10. Revised, expanded, and republished here with the kind permission of Mario Carpo.


Copyright © Mario Carpo 2024
