Chapter 6: The Inevitable & Utter Demise of the Entire Architecture Profession

David Rutten


Recent years have seen an increase in positive references within popular culture to cutting-edge computational developments such as artificial intelligence, neural networks, machine learning, and other new-fashioned concepts. These are algorithms with the potential to execute tasks well beyond the reach of conventional computing. Some of these tasks are currently performed by old-fashioned human beings, who have invested years of their lives in becoming experts and who expect to be financially rewarded for performing them. You may be –or perhaps one day hope to become– one of these human beings, in which case the pronouncement that these algorithms are just around the corner may upset you. But fret not, for I have some good news. Incidentally, there will also be a little bit of bad news, so you may want to fret some.

The good news is that these algorithms have been just around the corner for a good while. Their arrival has been billed as imminent since at least the 1970s, and while certain companies such as Google and Facebook may rely on them as part of their everyday functions, only big companies can afford to develop and maintain them in the first place. It is also easy to become unduly impressed by things like Google’s image recognition software, but if you think it is good at spotting kittens online, wait till you see me parse a photograph. I can even tell you whether the picture features a hungry, sleepy, scared or playful feline, and I haven’t had millions of dollars invested in my kitty detection abilities.

We are impressed by these artificial neural networks not because they can outperform humans, but because they outperform code written by humans, which really is not the same thing at all. The reason Google needs image recognition software is that it would cost too much to employ people to look at and categorize all the images that make their way onto the internet every minute of every day, not because humans are not up to the job.

Incremental improvements in AI research certainly add up to an impressive whole on an academic level, but they are not the harbingers of the Impending Singularity™. Paradigm-shifting innovations tend to come out of the blue, and are therefore utterly unpredictable both in their scheduling and their details. This, incidentally, is the bad news; whatever is going to make you unemployed is not something you will in all likelihood see coming from a long way away.

Yet it cannot be denied that computers have had –and will continue to have– a significant effect on architecture. CAD provides access to geometry that is cumbersome to describe or measure on paper. CAM provides access to affordable, yet accurate, bespoke elements. BIM promises to aggregate the totality of administrative, legal, and structural data. Of course, a cursory glance at history reveals that byzantine structures with an abundance of bespoke detailing are not, by any yardstick, recent phenomena. The worth of CAD, CAM and BIM is not measured in units of innovative brilliance; it is measured in units of time saved and money earned. In other words, these computer technologies are not manifestations of some unparalleled, paradigm-shifting wizardry; they merely make certain buildings possible in the current socio-economic climate.

Nobody knows if –and especially not how– computers will surpass human intelligence. Any career saving strategies we wish to adopt had better be based on solid observation of the recent past, rather than wishful or panicked thinking about the distant future. Unromantic as it may be, any discussion about the place of both computers and humans in architecture must limit itself to the facts in order to be productive.

The most salient fact regarding computers is that they are very good at doing sums. The clue is right there in the name. A computer can calculate the standard deviation of a million numbers before your finger has had time to let go of the Enter key. But the aphorism that it would take a team of humans ten years to do what a computer can do in one second has the corollary that a computer can also make more mistakes in that very same second than you could ever hope to make in a lifetime. This is because, ultimately, a computer has no idea what it’s doing, let alone why it’s doing it. Understanding stuff is what humans are good at.

The introduction of computers in fields other than architecture has not always been without issue and there are important lessons to be learned here. Famously, the introduction of autopilots in commercial airliners has resulted in a net loss of pilot skill. Most of the time the autopilot keeps the plane level and pointed in the right direction, but when something goes wrong the human pilot needs to take over. Unfortunately, this now happens without the benefit of much active flight time, or indeed a clear understanding of the immediate events that preceded the failure. The problem appears to be that the human is employed to supervise a machine executing a monotonous task. This is not something humans do well. I appreciate that it is very tempting to delegate boring work to a machine, but if the machine cannot be fully entrusted with the task, it may make more sense to have the computer supervise the human instead.

If architects wish to more fully integrate computers into their practice, yet not be displaced by them, it is vital that humans are allocated those jobs that humans do well –typically those involving thinking and understanding– whilst computers are allowed to focus on what they do well –computation and not getting bored. The architect remains the designer, while the computer becomes the critic. Does this building violate envelope restrictions? Will this facade melt cars at a thousand paces by focusing sunlight? Does the piping intersect itself anywhere? Can you actually see the major landmarks from the penthouse offices?
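
To make that division of labour concrete, below is a minimal sketch of the computer-as-critic idea in Python: the architect supplies a proposal, and a handful of small, explicit checks report violations without generating or altering the design. The Design class, the rules and the thresholds are hypothetical illustrations, not part of any real CAD or BIM interface.

```python
# A minimal sketch of the "computer as critic" workflow: the human proposes,
# the machine audits the proposal against explicit, project-specific rules.
# Every name and threshold below is a hypothetical illustration.

from dataclasses import dataclass


@dataclass
class Design:
    height_m: float          # overall building height
    footprint_m2: float      # ground-floor footprint
    facade_is_concave: bool  # concave glazing can focus sunlight


def check_envelope(d: Design, max_height_m: float = 45.0,
                   max_footprint_m2: float = 2000.0) -> list[str]:
    """Flag violations of a (hypothetical) planning envelope."""
    issues = []
    if d.height_m > max_height_m:
        issues.append(f"height {d.height_m} m exceeds the {max_height_m} m limit")
    if d.footprint_m2 > max_footprint_m2:
        issues.append(f"footprint {d.footprint_m2} m2 exceeds the {max_footprint_m2} m2 limit")
    return issues


def check_glare(d: Design) -> list[str]:
    """Crude stand-in for a solar-reflection study."""
    if d.facade_is_concave:
        return ["concave facade may focus sunlight; run a reflection study"]
    return []


def audit(d: Design) -> list[str]:
    """Run every critic; the architect decides what to do about the findings."""
    return check_envelope(d) + check_glare(d)


if __name__ == "__main__":
    proposal = Design(height_m=52.0, footprint_m2=1800.0, facade_is_concave=True)
    for issue in audit(proposal):
        print("CRITIC:", issue)
```

The point of the shape is that each check is small and auditable; the machine never gets bored of running them, and the human never has to second-guess a design it did not make.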

Architects need to ask themselves whether they would rather second-guess computer generated designs, or whether they would prefer to have computers audit their creative work.

Since many goals and constraints are project specific, architects will themselves have to transcribe these into computer algorithms, leading us to the concept of responsibility, the core problem of Computing in Architecture. If algorithms are awarded salient roles in the design process, who is at fault when they do not work as advertised? Can the architect be held accountable when a closed-source algorithm written by some third party is flawed? How about if the algorithm is open-source, but the architect does not bother to –or does not know how to– assess it? What if an algorithm is unwittingly applied beyond its scope?

Since the reality is that the architect is almost always legally responsible, it is my conviction that she should do whatever it takes to claim moral responsibility as well. This requires a decent understanding of algorithmics and computational theory, as well as at least a vague notion regarding the methodology of any relevant algorithm, especially its boundary conditions and failure points.

If the architect is to claim responsibility over the outcome of a computation, then that outcome must permit rigorous evaluation. Unless you know how to justify every single design decision your computer has made in your stead, your career as an architect may well be forfeit.

In conclusion: there is no reason why algorithms –traditional or “intelligent”– should pose a threat to the continued employment of the architect, provided each party keeps to their respective spheres of expertise. Computers have the capacity to harm only when they are employed unthinkingly or their outcomes are allowed to go unchecked.

Tyrol, Austria, 19 March 2018

