One of the challenges of real-world engineering problems is the need to predict the behavior of a system that may be complex, dynamic, nonlinear and subject to uncertainty in both loading and response. Foundation problems are a good example. The behavior of a foundation is complex; it is a soil-structure interaction problem in which the behavior of the ground and the configuration of the superstructure both influence the results. Foundation behavior may be dynamic, in the sense that the response can be sensitive to initial conditions and the loading history. Loads can also be applied dynamically. However, the complex and dynamic nature of foundation problems is driven above all by the nonlinear behavior of structures and soil-structure interaction.
When structural systems are used to retain in-situ soil during excavation, the resulting soil pressures are difficult to predict accurately. In addition to the uncertainty inherent in soil materials, and the inability to fully measure their properties, determining the pressures on an excavation support system, or on permanent foundation elements that similarly retain in-situ soil and any existing facilities on it, is an indeterminate soil-structure interaction problem. As the structural system is loaded, usually by excavation of supporting soil, it deforms. Movement of the excavation support mobilizes the internal strength of the soil. For a given load, deformation stops when equilibrium is reached and the soil and support structure share the task of retaining the soil.
This loading process is nonlinear and time-dependent, and is influenced by a number of soil attributes, as well as the configuration and behavior of the excavation support structure, both globally and locally. Consequently, there are many approaches to estimating the soil loads on an excavation support system, all of which are based on simplifying assumptions about the soil-structure interaction.
I was once contacted by a marketing official of a renewable energy firm looking for help with a small solar array project. His company had a solar installation designed offshore but needed an engineer licensed in the proper jurisdiction to seal the drawings…that afternoon. I balked at the request. I could not possibly perform a sufficient review of the design to represent to the jurisdiction that I was in responsible charge. The marketer insisted that my seal on the plans would tell the jurisdiction that I would be involved going forward. Sure I would…
Last Sunday, a string of thunderstorms dropped several inches of rain in Howard County, Maryland, causing a torrential flash flood that rose several feet above lower Main Street in Ellicott City – the second time in two years that the historic central business district has suffered such a severe flood. Floodwaters carried away automobiles and damaged numerous buildings.
This area is in a relatively narrow valley where two streams converge before emptying into the Patapsco River. It is a small urban area with a lot of hardscapes and the creek is channelized (some of the buildings span over the creek). If the ground becomes saturated and the river rises, there is just no place for the runoff to go in a heavy storm. Needless to say, this is a flood-prone neighborhood; it has flooded 15 times since 1768.
The National Weather Service determined that the rain events that produced the floods in both 2016 and 2018 were 1000-year events. It is worth noting that the flood produced by a 1000-year rain is not necessarily a 1000-year flood, or vice versa: it did not rain at all in Ellicott City before flooding of the Patapsco destroyed much of the town in 1868. A so-called 1000-year event is really one that has a 0.1 percent probability of being equaled or exceeded in a given year. The probability of experiencing two such events in two years, assuming they are independent, is on the order of one in a million. Jeff Halverson of the Capital Weather Gang likened this to lightning striking twice in the same place. Statistically it is improbable but possible; statistically improbable events happen every day. However, Ellicott City also experienced a storm with an annual probability of one to two percent (roughly a 50- to 100-year return period) in 2011. Strong storms appear to be happening at a frequency greater than probabilistic forecasts would suggest. And if you have been paying attention over the past several years, you may have noticed the same thing happening elsewhere as well. Something else may be at work.
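The arithmetic behind these return-period statements is simple to check. As a minimal sketch, assuming each year is independent and treating exceedances as a binomial process (the function name and figures here are illustrative, not from any official flood study):

```python
from math import comb

def prob_at_least_k(p_annual, years, k):
    """Probability of at least k exceedance events in a span of years,
    assuming independent years (binomial model)."""
    return sum(comb(years, n) * p_annual**n * (1 - p_annual)**(years - n)
               for n in range(k, years + 1))

# A "1000-year" storm has a 0.1 percent annual exceedance probability.
# Two such storms in two consecutive years:
print(f"{prob_at_least_k(0.001, 2, 2):.2e}")   # about one in a million

# A "50- to 100-year" storm (1-2 percent annual probability) showing up
# at least once in a decade is far less surprising:
print(f"{prob_at_least_k(0.02, 10, 1):.1%}")
```

The point is that a single joint probability says nothing about whether the underlying annual probability is correct; three severe storms in under a decade is evidence worth weighing against the model itself.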
There may be reason to believe that statistical forecasts, such as those used for natural hazards, can promote a false sense of certainty. There is, of course, the problem that some people misinterpret the concept of, say, a 100-year flood, thinking that such an event is "due" (or not) based on what has happened in the recent past. But a larger problem is that statistical forecasts can simply be wrong. A study by the University of Bristol in the United Kingdom found that flood risk in the United States may be under-predicted by a factor of three compared to what is reported on the FEMA Flood Insurance Rate Maps. The study noted that the FEMA maps aggregate local studies of varying age and quality and do not always cover smaller drainage areas.
There are a variety of reasons why a statistical forecast can be wrong, or perhaps more precisely, inaccurate. The data set on which the forecast is based can be incomplete. This has been a problem for forecasting ground motions from earthquakes: the earthquake catalog in the United States is a little over three hundred years old, but is incomplete, especially for small to moderate events. The data can be interpreted inappropriately, for example by combining data that are not really comparable (like standard penetration test results from alluvium and glacial till). The models can be too simple, missing significant but difficult-to-measure parameters; this may be the case for the FEMA maps with respect to small streams, and was arguably the reason many probabilistic predictions of the 2016 election failed. Or models may be too sensitive to small changes in the input data. There can be issues of scale or resolution, as anyone who has had to mesh a finite element model knows. Finally, a statistical model cannot always account for conditions changing with time. If climate change increases the severity of storms, or new development increases stormwater discharges, historical storm records may not be predictive.
This is not to say that we should not make and use probabilistic forecasts. Policymakers, and indeed engineers, need a consistent empirical basis for making decisions. Personal experience, "gut" feelings and ideology are not adequate. Probability is useful in that it explicitly expresses uncertainty; it is more insightful than a simple guess at what is likely to happen. It is especially important to account for uncertainty in cases where theory and knowledge of existing conditions are not sufficient to make reliable deterministic predictions, as with forecasting earthquakes or elections. But it is important to remember that every probabilistic model is conditioned on a set of assumptions, including assumptions about the quality of the data. The better the assumptions, the closer the estimated probability comes to the "true" but unknown probability.
Unlike in the forecasting of elections, the difference between the probability predicted by a model and the theoretically true probability that would be obtained with perfect data and models can have serious consequences. For example, in many jurisdictions, the first-floor elevation for new construction in a flood zone must be set at or just above the so-called 100-year flood. If the flood estimate is based on poor-quality or incomplete data, owners of these properties may be unknowingly subjected to substantially greater flood risk. It is therefore prudent for design professionals and their clients to consider exceeding the minimum code requirements with respect to natural hazards and other uncertainties, to hedge against the possibility that the risk is later discovered to be greater.
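The stakes of an understated flood estimate can be made concrete with the standard design-life calculation. A minimal sketch, assuming independent years; the factor-of-three scenario below is a hypothetical illustration motivated by the Bristol study's finding, not a statement about any particular map:

```python
def lifetime_risk(annual_prob, years):
    """Chance of at least one exceedance over a design life,
    assuming independent years."""
    return 1 - (1 - annual_prob) ** years

# First floor set at the mapped "100-year" flood elevation,
# evaluated over a 30-year mortgage:
print(f"mapped risk:      {lifetime_risk(0.01, 30):.0%}")   # ~26%

# If the true annual risk were understated by a factor of three
# (i.e., closer to a 33-year flood):
print(f"if 3x understated: {lifetime_risk(0.03, 30):.0%}")  # ~60%
```

Even at face value, a homeowner at the 100-year elevation faces roughly a one-in-four chance of flooding over a 30-year mortgage; if the map understates the hazard, that chance can exceed even odds.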
Ellicott City is a different case. Some of the buildings downtown are over 150 years old. Back then, natural hazards were largely viewed as unpredictable. Buildings were constructed based on the experience of the builder. Failures were more common than today, and more accepted. It is clear today that flooding there is a fact of life. While it is possible that the flooding on Sunday and in 2016 was produced by 1000-year storms, it is also possible that climate change is making extreme storms more common and the built environment is making their effects more severe. Regardless, a rare event is not necessary to produce devastating flooding. Rebuilding everything the way it was seems like an exercise in futility, if not insanity, for a place that floods once or twice per generation on average.
The question for places like Ellicott City is how to rebuild. The historic building fabric and sense of place have value, but once the damage to a particular building exceeds some threshold, modern codes require conformance. It is always possible to rebuild and repair with greater resiliency, but this typically requires modern materials and construction methods. In the process, some of the historic building fabric is lost. Sometimes these improvements can be hidden, like replacing wood stud walls with reinforced masonry or trusses with steel beams. Other improvements, like elevating a structure several feet above the flood level, completely change the function and appearance of a building. Is it better to preserve a building and destroy the setting, or recreate the setting with modern, resilient replicas of past structures? Is there an in-between approach that makes sense? It is a difficult balance with real impacts on residents and visitors alike. The lessons learned in the process will be useful for design professionals and disaster-susceptible communities everywhere.
The information and statements in this document are for information purposes only and do not comprise the professional advice of the author or create a professional relationship between reader and author.
Having spent a large proportion of my career on projects in the urban environment, I am always a little surprised when I encounter design professionals and contractors who do not fully appreciate the challenges and constraints of building on urban sites. While many design professionals, contractors and other stakeholders have urban project horror stories, they do not necessarily associate those adversities with choices that were made, or not made, during the project. It is almost as if they believe that nothing can be done.
Perhaps I should not be surprised. The fact is that most of the Architecture, Engineering and Construction (A/E/C) industry is focused outside the urban cores. In many major metropolitan areas, development has focused on low-density sprawl with large parking lots and generous setbacks. For these projects, consideration of the outside world may be limited to curb cuts and utility connections. Is it any wonder, then, that designers and constructors underestimate what it takes to build on a constrained urban lot?
Existing conditions of a site are a common and stubborn constraint and source of challenges for construction projects. Unlike new construction, existing conditions cannot be specified. They are often difficult to observe and highly variable, creating a significant source of uncertainty and risk. Exploration and testing are the standard means of mitigating that uncertainty, and may include visual observation, probing, material sampling and in-situ and laboratory testing. However, investigations are expensive and never fully eliminate uncertainty. As a result, many design professionals have trouble managing the risk associated with existing conditions and resort to excessively conservative design, increasing construction costs and often creating added risks. A more rational approach to the uncertainties of existing conditions can produce more cost-effective investigation programs and reduce construction costs and risk.
It is not exactly news that construction costs in New York City are exceptionally high, especially for underground infrastructure. However, given New York’s importance to the American economy and the state of its infrastructure, construction costs are a real constraint on future growth. In addition, there are lessons that other cities can learn from New York’s experience.
Last month, the New York Times published a long-form piece on the East Side Access (ESA) project, which it labeled the "Most Expensive Mile of Subway Track on Earth," and on other major capital projects of the Metropolitan Transportation Authority (MTA). MTA is the semi-independent state agency that owns and manages various transportation infrastructure, including the commuter rail lines and the subways in New York City.
A couple of weeks ago, I was using a geotechnical report to develop a critical parameter for the design of a particular foundation system. Like so many geotechnical reports I see, this one had many of the signs of being the product of commodity geotechnical services, as is often the case when materials testing agencies offer geotechnical engineering. When geotechnical services are provided at the lowest possible cost, the effort to perform a subsurface exploration and provide a report must be reduced to the minimum, using the lowest-cost staff available. There is no budget for detailed analysis of data or development of site-specific recommendations by senior staff. The report is similar in form to those provided at a higher cost, but the substance, and the level of service that produced it, are not the same.
Interestingly, the services and reports provided by commodity geotechnical firms share a lot of the same shortcomings. Perhaps this should not come as a surprise. To be competitive, these firms must use many of the same means of reducing the price of their services as similarly situated firms. This price competition on increasingly similar products and services is the essence of commoditization; the end of the process is a true commodity, indistinguishable except on price. While commodity geotechnical services are less expensive, they often increase cost overall: by incentivizing excessive conservatism in design, leading to higher construction costs, and by increasing uncertainty during construction, leading to higher risk of claims and delays.
Having had a lot of experience reading and using geotechnical reports, as well as producing them, I can quickly recognize the signs of commodity geotechnical services. They reflect a lack of thought and attention to detail in scoping the subsurface exploration, collecting and presenting data, and developing recommendations. Here are a few common problems:
As reported by the United States Geological Survey and various news organizations, a magnitude 4.1 earthquake occurred outside Dover, Delaware, on November 30, subjecting much of the mid-Atlantic region to weak to light shaking. Assuming the magnitude is not revised downward, it would be tied for the largest earthquake in Delaware history. No damage or injuries have been reported as of this writing.
For anyone who thinks it is "common sense" that earthquakes cannot happen in their area, this should serve as a reminder: earthquakes can happen anywhere. In the Central and Eastern United States (CEUS), earthquakes are not directly caused by plate tectonic activity and are, consequently, more infrequent and harder to predict. However, much of the CEUS is subject to moderate earthquake hazards, and a few locations are subject to high hazards. Take a look at the hazard map below produced by the USGS.
Traditionally, designers of temporary structures for use in construction had little binding guidance for their designs. Some owners, particularly infrastructure operators, provided standards and guidelines that permitted increased allowable stresses for certain temporary conditions. Sometimes the increased allowable stresses were limited to new materials or were subject to other stipulations. However, this practice came from a time when codes were much simpler and, in some respects, more conservative than they are now. Should increased allowable stresses still be used in the design of temporary structures, or is this practice anachronistic?
The answer is not simple and depends on who you ask. Different professions approach temporary works in very different ways. Structural engineers are typically squeamish about construction means and methods and are often very conservative about soil and other loads commonly supported by temporary structures. Some structural engineers will, incorrectly, claim that a structure has "failed" if the computed factor of safety is below design code values. Geotechnical engineers typically view factors of safety as a matter of judgment, and some deprecate codes, standards and structural design generally. Thus, geotechnical engineers tend to take a more aggressive approach to temporary structures, but may take risks unwittingly, especially when considering elements and systems that are not in contact with soil. Construction engineers are often highly risk-tolerant, but use simple and typically conservative methods for their temporary structure designs. It is hard to find consensus on a design approach, much less a standard of care, among such disparate perspectives.