Sometimes Lightning Strikes Twice

Photo: National Weather Service – NOAA

On 27 May 2018, a string of thunderstorms dropped several inches of rain on Howard County, Maryland, sending a torrential flash flood several feet deep down lower Main Street in Ellicott City – the second time in two years that the historic central business district had suffered such a severe flood. Flood waters carried away automobiles and damaged numerous buildings.

This area sits in a relatively narrow valley where two streams converge before emptying into the Patapsco River. It is a small urban area with extensive hardscape, and the creek is channelized (some of the buildings span it). When the ground is saturated and the river rises, there is simply no place for runoff to go in a heavy storm. Needless to say, this is a flood-prone neighborhood: it has flooded 15 times since 1768.

The National Weather Service determined that the rain events that produced the floods in both 2016 and 2018 were 1000-year events. It is worth noting that the flood produced by a 1000-year rain is not necessarily a 1000-year flood, or vice versa; it did not rain at all in Ellicott City before flooding of the Patapsco destroyed much of the town in 1868. A so-called 1000-year event is really one that has a 0.1 percent probability of being exceeded in a given year. Assuming the years are independent, the probability of experiencing two such events within two years is less than 0.2 percent. Jeff Halverson of the Capital Weather Gang likened this to lightning striking twice in the same place. Statistically it is improbable but possible; statistically improbable events happen every day. However, Ellicott City also experienced a storm with a probability of one to two percent per year (roughly a 50- to 100-year return period) in 2011. Strong storms appear to be happening more frequently than probabilistic forecasts would suggest. And if you have been paying attention over the past several years, you may have noticed the same thing happening elsewhere as well. Something else may be at work.
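
As a rough check on those numbers, here is a minimal sketch of the arithmetic, assuming the standard model in which each year's exceedance is independent (the script and its variable names are mine, not the Weather Service's):

```python
# Back-of-the-envelope check, assuming independent annual exceedances.
p = 0.001  # annual exceedance probability of a "1000-year" event

# Chance of at least one such event in a given two-year window:
at_least_one = 1 - (1 - p) ** 2  # ~0.2 percent

# Chance of an exceedance in each of two consecutive years,
# which is what Ellicott City experienced:
both_years = p ** 2  # 0.0001 percent

print(f"at least one event in 2 years: {at_least_one:.4%}")
print(f"an event in both years:        {both_years:.4%}")
```

Under independence, the back-to-back outcome is far rarer than even the 0.2 percent bound suggests, which is part of what makes the 2016 and 2018 floods so remarkable.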

There are reasons to believe that statistical forecasts, such as those used for natural hazards, can promote a false sense of certainty. There is, of course, the problem that some people misinterpret the concept of, say, a 100-year flood, thinking that such an event is “due” (or not) based on what has happened in the recent past. But a larger problem is that statistical forecasts can simply be wrong. A study by the University of Bristol in the United Kingdom found that flood risk in the United States may be underpredicted by a factor of three compared to what is reported on the FEMA Flood Insurance Rate Maps. The study noted that the FEMA maps aggregate local studies of varying age and quality and do not always cover smaller drainage areas.

There are a variety of reasons why a statistical forecast can be wrong, or perhaps more precisely, inaccurate. The data set on which the forecast is based can be incomplete. This has been a problem for forecasting ground motions from earthquakes: the earthquake catalog in the United States is a little over three hundred years old, and it is incomplete, especially for small to moderate events. The data can be interpreted inappropriately, for example by combining data that are not really comparable (like standard penetration test results from alluvium and from glacial till). The models can be too simple, missing significant but difficult-to-measure parameters; this may be the case for the FEMA maps with respect to small streams, and it was arguably the reason many of the probabilistic predictions of the 2016 election failed. Or the models may be too sensitive to small changes in the input data. There can be issues of scale or resolution, as anyone who has had to mesh finite element models knows. Finally, a statistical model cannot always account for conditions changing with time. If climate change increases the severity of storms, or new development increases stormwater discharges, historical storm records may not be predictive.

This is not to say that we should not make and use probabilistic forecasts. Policymakers, and indeed engineers, need a consistent empirical basis for making decisions; personal experience, “gut” feelings, and ideology are not adequate. Probability is useful in that it explicitly expresses uncertainty, which makes it more insightful than a bare guess at what is likely to happen. Accounting for uncertainty is especially important when theory and knowledge of existing conditions are not sufficient to make reliable deterministic predictions, as is the case with forecasting earthquakes or elections. But it is important to remember that every probabilistic model is conditioned on a set of assumptions, including assumptions about the quality of the data. The better the assumptions, the closer the estimated probability comes to the true, but unknown, probability.

Unlike in the forecasting of elections, the difference between the probability predicted by a model and the theoretically true probability that perfect data and models would yield can sometimes have serious consequences. For example, in many jurisdictions, the first-floor elevation for new construction in a flood zone must be set at or just above the so-called 100-year flood elevation. If the flood estimate is based on poor-quality or incomplete data, owners of these properties may be unknowingly subjected to substantially greater flood risk. It is therefore prudent for design professionals and their clients to consider exceeding the minimum code requirements with respect to natural hazards and other uncertainties, to hedge against the possibility that the risk may later be found to be greater.
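
To see what is at stake, consider how an annual probability compounds over a building's life. The sketch below is purely illustrative: the 30-year horizon and the factor-of-three underestimate are assumptions (the latter echoing the Bristol study's finding), not figures from any code or map.

```python
# Illustrative only: how an underestimated annual flood probability
# compounds over a building's service life (independent years assumed).

def lifetime_risk(annual_p: float, years: int) -> float:
    """Probability of at least one exceedance in `years` years."""
    return 1 - (1 - annual_p) ** years

YEARS = 30  # assumed service life / mortgage term, for illustration

for p in (0.01, 0.03):  # nominal "100-year" risk vs. a 3x underestimate
    print(f"annual p = {p:.0%} -> risk over {YEARS} years = "
          f"{lifetime_risk(p, YEARS):.0%}")
# annual p = 1% -> risk over 30 years = 26%
# annual p = 3% -> risk over 30 years = 60%
```

A first floor set at a “100-year” elevation that is really closer to a 33-year elevation faces a better-than-even chance of flooding over a typical mortgage, which is one argument for adding freeboard beyond the code minimum.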

Photo: Ellicott City Flood Recovery, Patapsco River
Ellicott City is a different case. Some of the buildings downtown are over 150 years old. When they were built, natural hazards were largely viewed as unpredictable, buildings were constructed based on the experience of the builder, and failures were more common than they are today, and more accepted. It is clear now that flooding there is a fact of life. While it is possible that the flooding on Sunday and in 2016 was produced by 1000-year storms, it is also possible that climate change is making extreme storms more common and that the built environment is making their effects more severe. Regardless, a rare event is not necessary to produce devastating flooding there. Rebuilding everything the way it was seems like an exercise in futility, if not insanity, for a place that floods once or twice per generation on average.

The question for places like Ellicott City is how to rebuild. The historic building fabric and sense of place have value, but once the damage to a particular building exceeds some threshold, modern codes require conformance. It is always possible to rebuild and repair with greater resiliency, but this typically requires modern materials and construction methods, and in the process some of the historic building fabric is lost. Sometimes these improvements can be hidden, like replacing wood stud walls with reinforced masonry or wood trusses with steel beams. Other improvements, like elevating a structure several feet above the flood level, completely change the function and appearance of a building. Is it better to preserve a building and destroy the setting, or to recreate the setting with modern, resilient replicas of past structures? Is there an in-between approach that makes sense? It is a difficult balance with real impacts on residents and visitors alike. The lessons learned in the process will be useful for design professionals and disaster-susceptible communities everywhere.

The information and statements in this document are for informational purposes only and do not constitute the professional advice of the author or create a professional relationship between reader and author.