Tropical cyclones or hurricanes threaten the lives of millions and cause billions of dollars in damage every year. To estimate flood risks at a particular location, scientists and engineers typically start by looking at the historical record of all previous storms there. From these records, they can statistically predict how likely a storm of a given size is (e.g., the biggest storm likely to occur there in 100 years).
There are two problems with this approach: (1) What if there isn’t much historical data in the records? This is often the case for Small Island Developing States (SIDS) and in the Global South. If you don’t have enough data points (particularly for rarer, more extreme events), your statistical estimates will be much more uncertain. (2) What if the historical record isn’t representative of the conditions we are likely to see in the present and future? This is also a big problem in light of climate change, which is expected to bring sea level rise and changes in storminess to coasts around the world.
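The classical "fit the historical record" approach described above can be sketched in a few lines. This is a hypothetical illustration, not the method from the paper: it fits a Generalized Extreme Value (GEV) distribution to made-up annual-maximum surge heights and reads off the 100-year return level. With only a few decades of (synthetic) data, the tail estimate is exactly where the uncertainty bites.

```python
# Hypothetical sketch: estimating a 100-year storm surge level from a short
# historical record by fitting a Generalized Extreme Value (GEV) distribution
# to annual maxima. The surge values below are synthetic, for illustration only.
import numpy as np
from scipy.stats import genextreme

rng = np.random.default_rng(42)
# Pretend these are 30 years of annual-maximum surge heights (m) at one site.
annual_maxima = rng.gumbel(loc=1.2, scale=0.3, size=30)

# Fit the GEV distribution to the observed maxima.
shape, loc, scale = genextreme.fit(annual_maxima)

# The 100-year return level is the quantile exceeded with probability 1/100
# in any given year.
surge_100yr = genextreme.ppf(1 - 1 / 100, shape, loc=loc, scale=scale)
print(f"Estimated 100-year surge: {surge_100yr:.2f} m")
```

With only 30 data points, refitting on a slightly different record shifts that 100-year estimate noticeably, which is precisely problem (1) above.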
To address these challenges, our team led by Tije Bakker came up with a new approach to estimating tropical cyclone-induced hazards like wind, waves, and storm surge in areas with limited historical data. Our findings are now published open-access in Coastal Engineering!
Coral reefs and the islands that they protect from flooding are in big trouble. This is a recurring theme on this blog, and now it’s time for the latest update. We are currently building towards the development of an early flood warning system for low-lying tropical islands fronted by coral reefs. Our previous work on this topic has focused on finding ways to do this accurately for a wide variety of coral reef shapes and sizes, as well as different wave and sea level conditions. However, it’s not enough to be accurate: to deliver timely early warnings, you also need to be fast.
That’s where the latest research of Vesna Bertoncelj comes in.
Vesna’s research provides us with new approaches for making highly accurate predictions of coastal flooding, at limited computational expense. The numerical models that we use to estimate flooding often take a long time to simulate, since they resolve many complex physical processes at high resolution in space and time. However, by paring down these models to only the most essential components for the task at hand, we can do this much faster. My colleagues at Deltares recently developed the SFINCS model, which has been successfully used to predict flooding in a fraction of the time that our standard models take. But how do we put all these different pieces together?
First, Vesna established a baseline for model performance by running a computationally intensive XBeach Non-Hydrostatic model (XB-NH+) and a much faster SFINCS model. These models provide an estimate of wave runup (R2%), which can be taken as a proxy for coastal flooding. In the second step, she used a lookup table (LUT) of pre-computed XBeach model output to derive the input for the SFINCS model. The crucial task is doing this quickly and accurately, so she experimented with different interpolation techniques for deriving that input. She then compared her new approach with the standard models to find the fastest and most accurate combination.
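The lookup-table step can be sketched roughly as follows. Everything here is illustrative: the grid axes, the stored values, and the linear runup formula are stand-ins, not the actual tables or interpolation schemes from Vesna's paper. The idea is simply that expensive model output is pre-computed on a grid of conditions, and new conditions are answered by interpolation instead of a fresh simulation.

```python
# Hypothetical sketch of a lookup-table (LUT) step: interpolating pre-computed
# XBeach-style output to get input for a faster model. The grid, values, and
# formula below are synthetic stand-ins, not the actual tables from the paper.
import numpy as np
from scipy.interpolate import RegularGridInterpolator

# Axes of the pre-computed LUT: offshore wave height (m) and water level (m).
wave_heights = np.linspace(1.0, 5.0, 9)
water_levels = np.linspace(0.0, 2.0, 5)

# Pre-computed runup proxy for every (Hs, WL) combination (synthetic here).
Hs, WL = np.meshgrid(wave_heights, water_levels, indexing="ij")
runup_table = 0.3 * Hs + 0.5 * WL  # stand-in for stored model results

interpolate_runup = RegularGridInterpolator(
    (wave_heights, water_levels), runup_table
)

# Query the LUT for a condition that was never simulated directly.
r2 = interpolate_runup([[2.7, 1.3]])[0]
print(f"Interpolated R2% proxy: {r2:.2f} m")
```

The speed-up comes from the fact that a table query costs microseconds, while each skipped simulation would cost hours.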
Her research gives us a useful methodology that we can implement to speed up our early flood warning system, saving time and hopefully someday saving lives.
Vesna’s quality of work is excellent and she has a fantastic attitude towards research and collaboration. Her curiosity, professionalism, and diligence will undoubtedly serve her well in the years to come. I hope that we will have other opportunities to collaborate in the future. If anybody out there needs a bright young coastal researcher and/or modeller, hire her!
We frequently hear in the news about dying coral reefs, and also about the threats of sea level rise and climate change. But there is a key gap: what if we can kill two birds with one stone, and restore damaged ecosystems while providing vital protection against flooding? Our latest research demonstrates how coastal managers and ecologists can join forces to achieve both goals, which may help stretch limited funding further.
Small island developing states around the world are especially vulnerable to the hazards posed by sea level rise and climate change. As engineers, we have a number of tools in our toolbox for reducing the risk posed by coastal flooding and for planning adaptation measures. We often rely on predictive models which combine information about expected wave and sea level conditions, the topography of the coast, and vulnerable buildings and population to estimate potential flooding and expected damage.
However, to use these types of models, we first need to answer a lot of questions: what exactly are the expected wave and sea level conditions? What if detailed topographic measurements are unavailable? What if the population of a given coastal area increases? How are the local buildings constructed, and what are the consequences of that for estimating damage from flooding?
If our information is imperfect (which it almost always is), all is not lost: we can still make educated guesses or test the sensitivity of our models to a range of values. However, these uncertainties can multiply out of control rather quickly, so we need to be able to quantify them. There is no sense in spending the time to develop a detailed hydrodynamic model if your bathymetry data is crap. Can we get a better handle on which variables are the most important to quantify properly? Can we prioritize which data is the most important to collect? This would help us make better predictions, and to make better use of scarce resources (data collection is expensive, especially on remote islands!).
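A simple way to rank which inputs matter most is a one-at-a-time sensitivity test: perturb each uncertain input over its plausible range while holding the others fixed, and compare the spread in the output. The toy damage function and the ranges below are invented for illustration; they are not the model or the numbers from Matteo's study.

```python
# Hypothetical sketch of a one-at-a-time sensitivity test: perturb each
# uncertain input of a toy flood-damage function and compare the spread in
# the output. The damage function and ranges are invented for illustration.
import numpy as np

def flood_damage(surge, ground_level, damage_per_m):
    """Toy model: damage grows linearly with flood depth above ground."""
    depth = max(surge - ground_level, 0.0)
    return depth * damage_per_m

baseline = dict(surge=2.0, ground_level=1.0, damage_per_m=1e6)
ranges = dict(
    surge=(1.8, 2.2),          # uncertainty in offshore forcing
    ground_level=(0.7, 1.3),   # uncertainty in topographic measurements
    damage_per_m=(0.8e6, 1.2e6),  # uncertainty in the depth-damage relation
)

for name, (lo, hi) in ranges.items():
    outputs = []
    for value in (lo, hi):
        params = dict(baseline, **{name: value})
        outputs.append(flood_damage(**params))
    spread = max(outputs) - min(outputs)
    print(f"{name:>12}: damage spread = {spread:,.0f}")
```

In this contrived setup, the ground-level (topography) uncertainty produces the largest spread in damage, which is the kind of ranking that tells you where to spend your measurement budget first.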
Based on a study of the islands of São Tomé and Príncipe, off the coast of Africa, Matteo found that topographic measurements and the relationship between flood depth and damage to buildings were the biggest uncertainties for predicting present-day flood damage. This means that measuring the topography of vulnerable coastal areas in high resolution, and performing better post-disaster damage surveys, will provide the best “bang for your buck” right now. However, for longer time horizons (i.e. the year 2100), uncertainty in sea level rise estimates becomes most important.
Matteo’s work will help coastal managers on vulnerable islands to better prioritize limited financial resources, and will improve the trustworthiness of our predictive models. Great job, Matteo!
Many of the world’s idyllic tropical coasts are facing threats on multiple fronts. Rising seas threaten the very habitability of many low-lying islands, and the coral reefs that often defend these coasts from wave attack are dying, too. Compounding this problem is the sheer number and variety of these islands: there are thousands of islands, and the coral reefs surrounding them come in all shapes and sizes. Located around the globe, these islands are each exposed to a unique wave climate and range of sea level conditions. This variability in reef characteristics and hydrodynamic forcing makes it a big challenge to forecast how waves will respond when they approach the shore, something that is quite tricky even at the best of times. Under these circumstances, how can we protect vulnerable coastal communities on coral reef coasts from wave-driven flooding?
This is the problem that our fantastic former student, Fred Scott (now at Baird & Associates in Canada), tackled in his paper, Hydro-Morphological Characterization of Coral Reefs for Wave Runup Prediction, recently published in Frontiers in Marine Science. Working in partnership with Deltares and the US Geological Survey for his master’s thesis, Fred came up with a new methodology for forecasting how waves transform in response to variations in the shape and size of coral reefs.
In our previous research on this topic, we tried to predict flooding on coral reef-lined coasts using a very simplified coral reef shape. This was fine as a first guess, but most reefs are bumpy and jagged and bear little resemblance to the unnaturally straight lines in my model. We couldn’t help it though: there just wasn’t enough data available when I started my thesis four years ago, so we did the best we could with the information we had at the time. On the bright side, using a single simple reef shape meant that we could easily run our computer simulations hundreds of thousands of times to represent a wide range of wave and relative sea level conditions.
Fast forward three years to when Fred began his own thesis. We now had access to a mind-boggling dataset of over 30,000 measured coral reef cross-sections from locations around the world! However, instead of too little data, we now had too much! If we wanted to simulate a whole range of wave and sea level conditions on each of the reefs in our dataset, it might take months or even years to run our models! Fred had the daunting task of distilling that gargantuan database down to a more manageable number of reef cross-sections.
But how do we choose which cross-sections are the most useful or important to look at? Even though every coral reef is, like a beautiful snowflake, utterly unique, surely there must be some general trends or similarities that we can identify, right? This question lies at the heart of Fred’s research, and to answer it, he turned to many of the same powerful statistical and machine-learning techniques used by the likes of Google and Facebook to harvest your life’s secrets from the internet or power self-driving cars. Maybe we can use some of this technology for good, after all!
The main approach that Fred used in this study was cluster analysis, a family of techniques that look for similarities or differences between entries in a dataset, and then group the entries accordingly into clusters. The entries within one cluster should be more similar to each other than to the entries in other clusters. In our case, this meant grouping the reefs into clusters by similar shape and size. This allowed us to increase efficiency and reduce redundancy by proceeding with 500 representative cross sections, instead of the entire database of 30,000.
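The clustering idea can be sketched in miniature. This is a hypothetical example, not Fred's actual pipeline: it generates synthetic "cross-sections" from a few template shapes, groups them with k-means, and keeps the profile nearest each cluster centroid as that cluster's representative.

```python
# Hypothetical sketch of the clustering step: grouping synthetic reef
# cross-sections by shape with k-means and keeping one representative
# profile per cluster (the one closest to its centroid). The profile
# shapes and cluster count here are illustrative, not Fred's actual setup.
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)

# Synthetic "cross-sections": depth sampled at 50 cross-shore points,
# built from three template shapes plus noise (100 noisy copies of each).
templates = [
    np.linspace(-20, 0, 50),                   # steep linear slope
    np.linspace(-10, 0, 50) ** 2 / 10 - 10,    # concave profile
    np.full(50, -5.0),                         # wide flat reef
]
profiles = np.vstack(
    [t + rng.normal(0, 0.5, 50) for t in templates for _ in range(100)]
)

kmeans = KMeans(n_clusters=3, n_init=10, random_state=0).fit(profiles)

# Pick the profile nearest each centroid as the cluster representative.
representatives = []
for k, centroid in enumerate(kmeans.cluster_centers_):
    members = profiles[kmeans.labels_ == k]
    nearest = members[np.argmin(np.linalg.norm(members - centroid, axis=1))]
    representatives.append(nearest)

print(f"Reduced {len(profiles)} profiles to {len(representatives)} representatives")
```

Scaled up, the same idea is what lets you run your expensive wave model on hundreds of representatives instead of all 30,000 measured profiles.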
Other studies in our field have tried similar approaches (such as this Brazilian study of coral reef shape), but the innovative part of Fred’s technique was to also account for similarities in the hydrodynamic response of the waves to each reef via a second round of clustering. Wave transformation on coral reefs can be immensely complicated, so it is entirely possible that two reef profiles could look very different, but lead to the same amount of flooding in the end. Since we are mainly concerned about the flooding (rather than a classification for ecological or geological purposes about coral reef formation and evolution), this suits us just fine!
In the end, Fred was able to distill this colossal dataset down to between 50 and 312 representative cross sections that can forecast wave runup with a mean error of only about 10%, compared to predictions made using the actual cross sections. This opens the door wide for a range of future applications, such as climate change impact assessments or coral reef restoration projects. Right now, we are working on a new project that will apply Fred’s approach to the development of a simplified global early-warning system for wave-induced flooding on coral reef-fronted coasts.
Great work, Fred, and congratulations on your first publication! I am excited to see where this road takes us!
Scott, F., Antolinez, J.A.A., McCall, R.T., Storlazzi, C.D., Reniers, A.J.H.M., & Pearson, S.G. (2020). Hydro-morphological characterization of coral reefs for wave-runup prediction. Frontiers in Marine Science. [Link]
Scott, F. (2019). Data reduction techniques of coral reef morphology and hydrodynamics for use in wave runup prediction. [Link]. TU Delft MSc thesis in cooperation with Deltares and the US Geological Survey.
Scott, F., Antolinez, J.A.A., McCall, R.T., Storlazzi, C.D., Reniers, A.J.H.M., & Pearson, S.G. (2020). Coral reef profiles for wave-runup prediction: U.S. Geological Survey data release. [Link]