Stacking Up

Originally published in Landscapes | Paysages. Cited as Belesky, Philip. "Stacking Up." Landscapes Paysages 19, no. 4 (November 25, 2017): 46–49.

In programming, there is a term known as ‘the stack’. A stack comprises strata of software: layers of tools, such as a database or an operating system, that provide functionality for the programs built upon them. Each layer of the stack reveals tools built on other tools that are in turn built on yet more tools: an infrastructural condition enabled by an emphasis on modular and interoperable software that provides a panoply of options that can be mixed freely. In contrast, the software used to design the built environment consists of a small set of monolithic tools: whether in 1997 or 2017, the majority of computer-aided design work occurs in one of a few major programs that attempt to be most things to most people.

One exception is found in the rise of computational design tools and technologies over the last several decades. While their effects are most readily recognised in exuberant architectural geometries, computational practices have much wider applications and constitute a shift in the way that designers engage with their stack: methods such as parametric modelling or scripting allow a given design task or procedure to be extracted from its original context and, in doing so, to potentially become a reusable, general-purpose tool. For example, a parametric model for rationalising a facade into modules, or a script for measuring shading effects, might originally have been developed to meet the needs of one particular project. However, because that tool describes logical procedures rather than a fixed representation, it can easily be redeployed to solve similar problems in subsequent projects.
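As a minimal sketch of this idea, the Python below writes a crude sunlight-hours check once, as a procedure rather than a drawing, so the same logic can be redeployed wherever a similar question arises. The single-obstruction sun model is deliberately simplified, and all names here are illustrative assumptions rather than an existing tool’s API.

```python
# A minimal sketch of a design procedure extracted into a reusable
# tool: a sunlight-hours check that encodes logic rather than a fixed
# representation, so it can run against any project's inputs.
# The sun model (one obstruction, sampled hourly altitudes) is a
# deliberate simplification for illustration.

import math

def sunlight_hours(point_height, obstruction_height,
                   obstruction_distance, sun_altitudes_by_hour):
    """Count the hours in which the sun clears a single obstruction,
    as seen from a point at a given height."""
    hours = 0
    for altitude_deg in sun_altitudes_by_hour:
        # Minimum solar altitude needed to clear the obstruction's top
        rise = obstruction_height - point_height
        needed = math.degrees(math.atan2(rise, obstruction_distance))
        if altitude_deg > needed:
            hours += 1
    return hours

# The same procedure applies unchanged to a new site: only inputs vary.
winter_altitudes = [5, 12, 18, 22, 24, 22, 18, 12, 5]  # one sample day
print(sunlight_hours(0.0, 3.0, 10.0, winter_altitudes))  # -> 5
```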

While computational methods can trace their lineage back to the dawn of computer-aided design, the emergence of platforms such as Grasshopper and Dynamo has made creating and distributing these tools much more approachable and popular. The communities surrounding these platforms in many ways mirror those surrounding popular programming tools, wherein software development often follows a ‘Bazaar’ model — public and distributed across individuals and companies — rather than a ‘Cathedral’ model where development is private and centrally controlled by a corporate entity.

This shift should be of special interest to landscape architects. The comparatively small size of our discipline limits the incentives to create commercial software specialised to our needs, often leaving us working with generic or under-developed tools that do not cater to distinctly landscape architectural approaches. Computational design offers a means to work around this shortfall by developing and distributing our own tools and, in doing so, to take greater agency over how we work within a digital environment. This is set to become increasingly important as methods of design modelling become ever more complex: if drone-based methods become the norm for many survey tasks, or if virtual and augmented reality visualisations become valuable for testing, constructing, or maintaining landscapes, these technologies will need to be adapted to our specific needs. To take a present example, many software implementations of Building Information Modelling fall short as tools for developing landscape architectural features, undermining the methodology’s (and our discipline’s) promise of coordinating collaboration.

Within my own work, I’m developing tools that augment parametric modelling platforms with capabilities for understanding and testing how natural systems operate. As in other disciplines, methods for analysis and simulation have great potential to better test our intuition, or to achieve higher levels of resolution earlier in the design process without risking rework. While software such as Grasshopper provides many inbuilt and community-developed techniques for geometric development, in many cases these capacities are implicitly or explicitly architectonic, ignoring or marginalising many of the conditions that constitute the unique complexities of landscapes. To suit our purposes, computational design needs to provide not only general methods for describing and testing formal systems, but also general methods for modelling natural systems as they are relevant to the design of landscapes.

For example, part of the Grasshopper plugin I’m developing (‘Groundhog’) offers a number of ways to model surface water flows. Like most parametric methods, these capabilities operate as a kit of parts intended to be combined and extended. Initially, a component might run a simulation of flow paths along a given topography to identify areas where drainage needs are highest. Taking this as a starting point, further components can identify catchment areas (based on the end points of each path), calculate surface absorption (by cross-referencing a terrain’s permeability), or model pooling effects (by summing the total expected water volumes in a given area). In each case, the degree of resolution is arbitrary, with the same components capable of working across areas totalling square metres or square kilometres.
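To make the flow-path step concrete, the sketch below traces water downhill across a height grid by steepest descent. It is a standalone Python illustration of the underlying logic, using assumed data structures; it is not Groundhog’s actual components or API.

```python
# A minimal sketch of surface flow-path tracing by steepest descent,
# standing in for the kind of logic a flow-simulation component wraps.
# The grid representation and function names are illustrative
# assumptions, not Groundhog's actual API.

def trace_flow_path(heights, start, max_steps=1000):
    """Follow the steepest downhill neighbour from a start cell until
    reaching a local minimum (a potential pooling or outlet point)."""
    rows, cols = len(heights), len(heights[0])
    path = [start]
    r, c = start
    for _ in range(max_steps):
        # Examine the 8 surrounding cells for the lowest neighbour
        neighbours = [
            (r + dr, c + dc)
            for dr in (-1, 0, 1) for dc in (-1, 0, 1)
            if (dr, dc) != (0, 0)
            and 0 <= r + dr < rows and 0 <= c + dc < cols
        ]
        lowest = min(neighbours, key=lambda rc: heights[rc[0]][rc[1]])
        if heights[lowest[0]][lowest[1]] >= heights[r][c]:
            break  # local minimum: water pools or exits here
        path.append(lowest)
        r, c = lowest
    return path

# Tracing paths from every cell and counting visits per cell gives a
# crude drainage-intensity map; shared end points group into catchments.
heights = [
    [5.0, 4.8, 4.9],
    [4.7, 4.2, 4.5],
    [4.6, 4.0, 4.1],
]
print(trace_flow_path(heights, (0, 0)))  # -> [(0, 0), (1, 1), (2, 1)]
```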

Other capabilities look at modelling the distribution and effects of vegetation in performative terms. As before, this starts in a simple fashion, where a given palette (represented as a spreadsheet) can be automatically distributed within a given area or placed according to manually determined locations. Once set, further techniques can project growth, showing how a given distribution of species would appear at arbitrary points in the future by extrapolating growth rates towards mature dimensions. While in some cases this is useful for visualisation purposes, each stage of the process is an opportunity to tie planting design to a performance-driven goal in a project. In doing so, planting design might become better tuned to strategies such as erosion amelioration by having species selection and distribution automatically matched to variations in slope across a site. Alternatively, the degree of slope stabilisation offered by a particular species could be quantified over the imagined course of the project. Similarly, shading effects could be considered at both ends of the process: as a criterion for selection and distribution (say, the annual sunlight hours available at a given location) or as a spatial outcome (the annual sunlight hours cast over a mixed seating and planting area).
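As a minimal sketch of the growth-projection step, the Python below extrapolates canopy spread towards a mature dimension along a simple saturating curve. The curve choice, field names, and species figures are illustrative assumptions, not Groundhog’s actual data model.

```python
# A minimal sketch of projecting plant growth over time, assuming a
# saturating growth curve that approaches each species' mature spread.
# The species entries stand in for rows of a palette spreadsheet; all
# values here are hypothetical.

import math

def projected_spread(mature_spread_m, growth_rate, years):
    """Canopy spread after a number of years, approaching the mature
    dimension along an exponential saturation curve."""
    return mature_spread_m * (1 - math.exp(-growth_rate * years))

palette = [
    {"species": "Carex secta", "mature_spread_m": 1.5, "growth_rate": 0.9},
    {"species": "Cordyline australis", "mature_spread_m": 3.0, "growth_rate": 0.25},
]

# Sample the projected spread at one, five, and twenty years
for plant in palette:
    spreads = [projected_spread(plant["mature_spread_m"],
                                plant["growth_rate"], y)
               for y in (1, 5, 20)]
    print(plant["species"], [round(s, 2) for s in spreads])
```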

Each of these techniques can be useful in itself for investigating a particular design concern, but it is in their union (alongside established parametric techniques) that something more novel can be developed: a more comprehensive understanding of how a designed landscape, as a series of evolving and interconnected systems, can perform across spatial and temporal scales. The results of the aforementioned surface water flows and absorption can be linked to those of planting design, creating models that begin to quantify the performance of, say, a distributed swale system according to different configurations of topography, planter elements, and particular species across a site. Models of flooding or sea-level rise can help tune topographic manipulation or hardscape design in response to future contingencies, offering projections calibrated to seasonal tidal action or longer-term sea-level rise. For example, as part of the development process for the Groundhog plugin, a landscape study was developed for a river mouth area in Wellington, New Zealand. The soil present along the river’s banks contains pollutants from past and present industrial run-off in the adjacent area, while many stretches also face ongoing problems with erosion and flooding. The study itself looked at how to extend a standard set of site data through parametric analysis to give a more precise understanding of the local conditions along the river, and at how these could aid community-led restoration efforts.
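One small sketch of this kind of linkage, continuing the earlier examples: scoring how well a swale planting suits the runoff a cell is simulated to receive. The scoring function, tolerance band, and field names are illustrative assumptions, not figures from the study.

```python
# A minimal sketch of linking the water and planting models above:
# rating how well a proposed swale species matches the simulated
# runoff arriving at a cell. All thresholds and field names are
# illustrative assumptions.

def swale_suitability(accumulated_runoff_m3, species):
    """Return a 0-1 score: 1 when runoff sits within the species'
    tolerated saturation band, falling off linearly outside it."""
    low, high = species["saturation_range_m3"]
    if low <= accumulated_runoff_m3 <= high:
        return 1.0
    distance = min(abs(accumulated_runoff_m3 - low),
                   abs(accumulated_runoff_m3 - high))
    return max(0.0, 1.0 - distance / high)

carex = {"species": "Carex secta", "saturation_range_m3": (0.2, 1.2)}
for runoff in (0.1, 0.6, 2.0):
    print(runoff, round(swale_suitability(runoff, carex), 2))
```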

Initially, a number of parametric models looked at synthesising existing site data to create a detailed profile of the public space along the riverside and shoreline areas. In the first model, a simulation of surface water flows was deployed, alongside a projection of salinity gradients in water bodies. In the second, an algorithm was developed and deployed across the identified areas to create an index of the ground conditions present within a spatial grid. At each grid point, the model ‘samples’ the given substrate, slope, and saturation levels of the soil, and translates these into a single metric, measured along a red-yellow spectrum. This measure then became a valuable guide in deciding how to develop a planting plan that could help re-vegetate the river banks and mitigate erosion. While the appropriate species were already known, the spatial index became a way to automatically identify where they would be best distributed, as it allowed for cross-referencing between species characteristics (say, slope or saturation tolerances) and the localised variations in these conditions at a given point on the site. A diagram represents this process, where each species is classified against the spatial index according to its preferences (left- vs right-aligned along the red-yellow spectrum) and its tolerances (the width of its bar along that same spectrum). As such, it allows a kind of pseudo-planting plan to be automatically generated for any given portion of the site, whereby volunteer planters could easily identify which particular species is best suited to a given location.
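The sketch below illustrates the spatial-index idea: each grid point collapses substrate, slope, and saturation samples into one 0-1 metric (the red-yellow spectrum), and species are matched by whether the point falls within their tolerance band. The weights, ranges, and species values are illustrative assumptions, not the study’s actual figures.

```python
# A minimal sketch of a ground-conditions index and species matching.
# Weights, tolerance bands, and sample values are hypothetical.

def ground_index(substrate, slope, saturation, weights=(0.4, 0.3, 0.3)):
    """Weighted blend of three normalised (0-1) samples into a single
    0-1 index: 0 ~ one end of the spectrum ('red'), 1 ~ the other
    ('yellow')."""
    return (weights[0] * substrate
            + weights[1] * slope
            + weights[2] * saturation)

def best_species(index, species_bands):
    """Pick the species whose tolerance band (centre +/- width/2) best
    contains the sampled index value."""
    suited = [(abs(index - centre), name)
              for name, (centre, width) in species_bands.items()
              if abs(index - centre) <= width / 2]
    return min(suited)[1] if suited else None

species_bands = {
    "Carex secta": (0.25, 0.3),        # prefers wetter, lower ground
    "Phormium tenax": (0.55, 0.5),     # broad tolerance mid-spectrum
    "Cordyline australis": (0.8, 0.3), # prefers drier, stabler ground
}

point = ground_index(substrate=0.6, slope=0.4, saturation=0.7)
print(round(point, 2), best_species(point, species_bands))
# -> 0.57 Phormium tenax
```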

In the above example, and in general, the use of these models is not meant to supplant specialist knowledge, but to provide fallbacks when it is not available, or to allow for better collaboration between specialists and designers. For instance, Snøhetta’s design for the landscape surrounding the MAX IV laboratory sought to develop a topographic form that would help mitigate the impact of surface vibrations from surrounding roads on the facility’s sensitive scientific equipment. To do so, a parametric model was used to define the topography as a series of geometric rules that produced an undulating wave-like pattern spiralling out from the building in plan. Defining the exact rules and parameters of this pattern meant they could become an explicit consideration when collaborating with the engineering team, as the wavelength, amplitude, and other factors could be finely tuned in numeric terms according to their knowledge and simulations of vibration dispersal. At the same time, the use of this model as a site of design development meant that ongoing updates to the pattern could be tested against more traditional landscape architectural criteria, such as quantifying the topography in terms of cut and fill volumes, its effects on wind modulation, and the interaction between the grading, stormwater run-off, and the proposed ephemeral wetland areas.
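As a rough illustration of defining topography through geometric rules, the sketch below generates a radial wave that spirals out from a centre point, with wavelength and amplitude exposed as tunable numbers. This is a hypothetical reconstruction in the spirit of the pattern described above, not Snøhetta’s actual model.

```python
# A minimal sketch of topography-as-rules: a sinusoid over radial
# distance, phase-shifted by angle so the crests twist into a spiral.
# All parameters and defaults are illustrative assumptions.

import math

def wave_height(x, y, wavelength=12.0, amplitude=1.5, arms=1,
                centre=(0.0, 0.0)):
    """Ground height offset at (x, y) for a spiralling radial wave."""
    dx, dy = x - centre[0], y - centre[1]
    radius = math.hypot(dx, dy)
    angle = math.atan2(dy, dx)
    return amplitude * math.sin(2 * math.pi * radius / wavelength
                                + arms * angle)

# Because wavelength and amplitude are explicit numbers, they can be
# re-tuned directly from an engineer's vibration analysis, and the
# resulting surface re-measured for cut/fill or drainage each time.
print(round(wave_height(10.0, 5.0), 3))
```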

While landscape architects were among the pioneers of computer-aided design, the software that dominates our present practices is in effect a hand-me-down: a stack largely catering to, and created by, other disciplines. We should look to computational design as an opportunity to impart knowledge of how we work into our design tools and, in doing so, claim greater agency over many stages of the design process. Doubtless many landscape architectural practices already engage in some form of digital toolmaking, but as a discipline, the more novel potential comes when we begin to do so in a transparent and collaborative manner. Architects have long benefited from the resulting flow of digital tools between a multiplicity of groups across practice and academia, an exchange that has come about by considering software development as an activity that can take place within, not outside, a design discipline. As landscape architects, we shouldn’t always blame our tools, but nor should we shy from fixing them.

Thanks to Jason Hare and Judy Lord.
