April 21, 2024
AI as a Design and Planning Partner
By Matt Shaw
Stuart Weitzman School of Design
Like most technological innovations, artificial intelligence (AI) has attracted almost evangelical attention, from optimists in the tech world who want to leave everything behind and build a technologically liberated new world, to doomsayers who warn of a cyborgian “singularity” in which superintelligent computers take over and destroy human civilization. But how is AI really affecting the world we live in? For what applications does it have real value? To hear Weitzman faculty tell it, the current reality is not so much a break from history as a layering of these new AI technologies onto our existing environments and processes.
Artificial intelligence, machine learning, computer vision, and other kindred technologies have been quietly incorporated into the built environment, from traffic cameras to automated checkout lines. At Weitzman, several faculty members are using AI as a new “partner” in the design process, advancing research in image making, manufacturing, and urban analytics.
Architecture and design have long involved a complex list of collaborators, from engineers to clients. As historian Antoine Picon said at the 2022 ACADIA Conference, organized by the Department of Architecture at Weitzman with the Association for Computer-Aided Design in Architecture, “With the development of the recourse to AI in digital fabrication, one should also add to the list of authors the ‘intelligent’ robots as well as the various people involved in their building and training.”
This folding of a new agent into the design process is the basis of the third-year architecture seminar titled Image, Object, Architecture, where Associate Professor of Architecture Ferda Kolatan has been working with students to explore how generative AI can expand our notions of authorship and aesthetics. “In many ways the work asks the same questions we have asked for centuries in architectural history,” Kolatan says. “The seminar reassesses the alliance between image, object, and architecture in the context of diverse cultural ideas and advanced digital technologies including AI.”
Kolatan’s students use 3D modeling, rendering, and fabrication techniques in dialogue with commercial AI tools like Midjourney, DALL-E, and Stable Diffusion to produce a series of images and “artifacts,” or physical 3D objects. Starting with an image from art history, such as a painting, a film still, or an architectural drawing, they create a digital image using AI prompts to amplify certain characteristics. These images are iterated, turned into text using ChatGPT, and then turned back into images, with every step examined for surprising “aberrations” that can be isolated and used productively to create a final model in conversation with the 2D image and the collaborative text.
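For readers curious what one turn of such a loop looks like in code, the sketch below runs a single image-to-text-to-image iteration using the OpenAI Python SDK. The model names, prompts, and helper function are illustrative assumptions, not the seminar’s actual toolchain.

```python
# A minimal sketch of one image -> text -> image iteration,
# assuming the OpenAI Python SDK. Model names and prompts are
# illustrative, not the seminar's actual workflow.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

def iterate(image_url: str, emphasis: str) -> str:
    """Describe an image, then regenerate it with one quality amplified."""
    # Step 1: turn the current image into text.
    description = client.chat.completions.create(
        model="gpt-4o",
        messages=[{
            "role": "user",
            "content": [
                {"type": "text",
                 "text": f"Describe this image, emphasizing its {emphasis}."},
                {"type": "image_url", "image_url": {"url": image_url}},
            ],
        }],
    ).choices[0].message.content

    # Step 2: turn the text back into an image, hunting for "aberrations".
    result = client.images.generate(
        model="dall-e-3",
        prompt=f"An architectural artifact: {description}",
        size="1024x1024",
    )
    return result.data[0].url  # feed back into the next iteration

url = "https://example.com/painting.jpg"  # e.g., a scanned painting
for _ in range(3):
    url = iterate(url, emphasis="material texture and ornament")
```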
“The whole idea in the 90s was to produce novelty using technology,” Kolatan says. “I’m not sure today we are still looking for new or novel things, but rather new combinations of existing stuff. AI is less a modernist idea of newness, and more an archival one of seeing things differently.”
Kolatan’s students are not the only group addressing contemporary and disciplinary questions around aesthetics and AI. Assistant Professor of Architecture Robert Stuart-Smith, director of the Master of Science in Design: Robotics and Autonomous Systems (MSD-RAS) program, and his team at the Autonomous Manufacturing Lab (AML), part of the Advanced Research and Innovation Lab (ARI), are working to redefine digital design, robotic fabrication, and robotic construction. They are developing autonomous processes such as machine learning, adaptive robotic control, and “multi-agent” approaches in which algorithms embedded in manufacturing processes might act as co-designers.
“We are authoring and working in partnership with autonomous processes that enable us to integrate the act of design within computational, fabrication, and construction processes,” Stuart-Smith says. “In this scenario, a design outcome might emerge through the act of making, distinguishing the work from previous approaches to digital design.”
The AML is using computer vision to analyze the “visual character” of a piece of a building, such as a column/beam/roof connection, and to measure qualities like heterogeneity, intricacy, continuity, and recesses. By quantifying these normally qualitative data points, the team can begin to analyze how different forms of design expression relate to material and structural efficiency, while automating some of the time-intensive decision-making that would normally be part of the architect’s and engineer’s process. “We are analyzing structural performance and material volume within our design methods in order to produce more complex and intricate designs that reduce overall material use,” says Stuart-Smith. “We are constantly asking ourselves, ‘How can we create more considered and specific 3D designs that are both aesthetically and ethically ambitious?’”
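The lab’s actual metrics are its own, but simple computer-vision proxies suggest how such qualities can be made numeric. The sketch below, which treats edge density as a stand-in for intricacy and histogram entropy as a stand-in for heterogeneity, is a hypothetical illustration using OpenCV, not the AML’s method.

```python
# Illustrative proxies for "visual character" metrics, not the AML's
# actual measures: edge density as a stand-in for intricacy, and
# grayscale entropy as a stand-in for heterogeneity.
import cv2
import numpy as np

def intricacy(gray: np.ndarray) -> float:
    """Fraction of pixels that lie on detected edges."""
    edges = cv2.Canny(gray, 100, 200)
    return float(np.count_nonzero(edges)) / edges.size

def heterogeneity(gray: np.ndarray) -> float:
    """Shannon entropy of the grayscale histogram, in bits."""
    hist = cv2.calcHist([gray], [0], None, [256], [0, 256]).ravel()
    p = hist / hist.sum()
    p = p[p > 0]
    return float(-(p * np.log2(p)).sum())

# Hypothetical input: a rendering or photo of a column/beam/roof joint.
img = cv2.imread("column_beam_connection.png", cv2.IMREAD_GRAYSCALE)
print(f"intricacy: {intricacy(img):.3f}, "
      f"heterogeneity: {heterogeneity(img):.2f} bits")
```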
The AML is also pioneering what it calls “aerial additive manufacturing”: teams of drones equipped with 3D printers that “swarm build” large-scale building parts either on-site or prefabricated off-site. Inspired by wasps and other natural builders that use collective building methods, it is a collaboration with AML’s sister lab in the Department of Computer Science at University College London and research teams at Imperial College London, Empa (the Swiss Federal Laboratories for Materials Science and Technology), the University of Bath, Queen Mary University of London, and the Technical University of Munich.
The swarm consists of two types of drones: ScanDrones and BuildDrones. ScanDrones use depth cameras or LiDAR to 3D scan the site and record each printed layer’s position and quality. This data is relayed to the BuildDrones, which adapt their 3D-printing trajectories and flight paths to ensure that each layer aligns with previous layers and the intended geometric design. The AML team and its aerial robots have produced a 72-layer, 6.5-foot-high column out of expandable foam and a 28-layer, 7-inch-high column out of a custom concrete-based structural polymer. They have also produced a “virtual print” that illustrates in light trails the paths the drones take when moving as a swarm to print, along with simulations using the same software stack that demonstrate much larger populations of robots co-building.
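That scan-and-correct handshake amounts to a feedback loop. The Python sketch below is a deliberately simplified, hypothetical rendering of the idea; the real system involves live flight control, localization, and far richer scan data.

```python
# A hypothetical, highly simplified scan/build feedback loop.
# Real aerial additive manufacturing involves real-time flight
# control and far richer scan data than shown here.
from dataclasses import dataclass

@dataclass
class LayerScan:
    layer: int
    deviation_mm: float  # measured offset from the intended geometry

def scan_drone_measure(layer: int) -> LayerScan:
    """Stand-in for a ScanDrone pass (depth camera or LiDAR)."""
    return LayerScan(layer=layer, deviation_mm=0.8)  # placeholder reading

def build_drone_print(layer: int, correction_mm: float) -> None:
    """Stand-in for a BuildDrone depositing one corrected layer."""
    print(f"layer {layer}: printing with {correction_mm:+.1f} mm offset")

TOLERANCE_MM = 0.5
for layer in range(1, 73):  # e.g., the 72-layer foam column
    scan = scan_drone_measure(layer - 1) if layer > 1 else None
    # Offset the next print path only when the last layer drifted
    # beyond tolerance, keeping the column on its intended geometry.
    correction = (-scan.deviation_mm
                  if scan and abs(scan.deviation_mm) > TOLERANCE_MM
                  else 0.0)
    build_drone_print(layer, correction)
```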
Collaborating with industry partners such as Cemex, Skanska, Mace, Buro Happold, Arup, MTC, Ultimaker, KUKA, and others, Stuart-Smith and his team at the AML are not only introducing autonomous robots and AI-based tools as new design partners in the manufacturing and fabrication sector but also opening up new possibilities for addressing myriad global challenges. The lab’s custom design software and methods have the potential to optimize material use and workflows for complex design solutions, potentially leading to a greener and more cost-effective construction industry.
The aerial additive manufacturing project, while in its infancy, could be useful for building in environmentally hostile, remote, or hard-to-access locations: post-disaster reconstruction or repair after a storm or earthquake, say, or construction in mountainous areas or conflict zones. Because swarms of drones require no large infrastructure or ground transportation, they have the potential to reduce construction costs, energy use, and embodied carbon, and could be recharged from solar or wind power sources.
“The goal of the work across the AML is to develop, test, and build real-world examples of new techniques that can have both an aesthetic and ethical potential to help address age-old issues,” Stuart-Smith says. “Time, cost, and material use matter a lot in certain contexts, especially humanitarian or environmental ones.”
Another faculty member using AI to tackle big issues is Elizabeth Delmelle, a data scientist and director of the Master of Urban Spatial Analytics (MUSA) program. She and her colleague Isabelle Nilsson, associate professor at the University of North Carolina at Charlotte (UNCC), are using AI at the urban scale to shift our understanding of the real estate industry, and ultimately to influence policy decisions.
Delmelle and Nilsson used machine learning to analyze the language of real estate listings, looking for patterns in how listings describe urban amenities and investment potential. They scraped text from online listings and cross-referenced it with publicly available race and income data, identifying patterns in home type and buyer profile.
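The general shape of such a pipeline, vectorizing listing text, joining it to neighborhood demographics, and surfacing the terms most predictive of each group, can be sketched with scikit-learn. Every file and column name below is invented for illustration; this is not the researchers’ published code.

```python
# A hypothetical sketch of the general approach: vectorize listing
# text, join it to neighborhood demographics, and surface the terms
# most predictive of each group. File and column names are invented.
import pandas as pd
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression

listings = pd.read_csv("listings.csv")           # listing text + tract id
tracts = pd.read_csv("tract_demographics.csv")   # tract id + majority group
df = listings.merge(tracts, on="tract_id")

vec = TfidfVectorizer(stop_words="english", min_df=20)
X = vec.fit_transform(df["listing_text"])
y = (df["majority_group"] == "white").astype(int)

model = LogisticRegression(max_iter=1000).fit(X, y)

# Terms whose weights most strongly separate the two groups.
terms = vec.get_feature_names_out()
top = model.coef_[0].argsort()
print("associated with majority-white tracts:", [terms[i] for i in top[-10:]])
print("associated with majority-minority tracts:", [terms[i] for i in top[:10]])
```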
“We are using data sources that are not designed for research, like real estate and rental listings in minority neighborhoods from websites like Zillow and Craigslist,” Delmelle says. “They are fun to analyze because they are in a way accidental and not meant to be representative, but they are.”
Their findings show, at least in part, that realtors still promote a soft version of discrimination in housing sales, even though the practice was technically outlawed by the Fair Housing Act of 1968. Properties described with terms like “urban amenities” and “smart growth” are often marketed and sold to wealthy white buyers, while places with car-centric characteristics like cul-de-sacs are marketed and sold to poor Black buyers. Terms like “up and coming” appear more often in listings for homes sold to white people, while Black people are predominantly directed to places marked by disinvestment.
“We are examining how people talk about neighborhoods as a way to understand cities,” Delmelle says. “It's satisfying to prove something we already had a hunch about.”
For Delmelle, the introduction of machine learning as a research partner allows for larger and larger data sets to be scanned. “It can help bring a quantitative eye to what has typically been a qualitative area of discourse, real estate language and cities, one reliant on personal anecdotes and person-to-person interviews,” she says.
The Charlotte project acted as a proof of concept, but the research team has received a National Science Foundation grant to expand to more cities across the country, each with its own particular challenges. Weitzman also offers a teaching module based on the work so students can get hands-on experience with the project.
Delmelle notes that these AI tools are a partner, not an independent actor. “The more we rely on computers, the more important it is to have a theoretical understanding of things,” she says. “‘Unsupervised classifications,’ where the machine just starts grouping like objects, can lead to nonsense. You need to understand what you are looking for.”
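Her caution is easy to demonstrate: an off-the-shelf clustering algorithm will happily partition any set of listings, whether or not the resulting groups mean anything. A small hypothetical example with scikit-learn’s KMeans:

```python
# Unsupervised classification groups "like" objects whether or not
# the grouping is meaningful; interpretation is on the researcher.
from sklearn.cluster import KMeans
from sklearn.feature_extraction.text import TfidfVectorizer

docs = [  # invented listing snippets
    "charming bungalow near urban amenities and light rail",
    "spacious home on quiet cul-de-sac, two-car garage",
    "up and coming neighborhood, investor special, needs TLC",
    "walkable smart-growth community, rooftop deck",
]
X = TfidfVectorizer(stop_words="english").fit_transform(docs)

# KMeans always returns k clusters, meaningful or not.
labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(X)
for doc, label in zip(docs, labels):
    print(label, doc)
```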
While AI will likely not bring about the science-fiction future that leaves the past behind, many applications in the built environment professions are beginning to chart a path toward a world where machines and humans work side by side. Weitzman faculty are driving research that will empower future designers to design, fabricate, and build our cities to be more socially and environmentally sustainable than we could have imagined.
[The author thanks Rob Fleming and Ricardo J. Rodriguez De Santiago, Assoc. AIA, LEED AP of the Center for Professional Learning at Weitzman, whose class AI Basics for Architects and Designers was instrumental in the writing of this essay.]