The goal of this module is to show you how to make use of all the data you gathered and calculated in the previous modules and calculate risk scores. First, you will define the classification scheme for your risk analysis, before assessing the probability and intensity of your selected concrete hazards, and estimating the potential impacts/consequences of those hazards in order to finally calculate risk scores. Lastly, you need to validate the results before preparing the result presentation in module 6.
As previously mentioned in module 3, the quantitative part of IVAVIA requires some knowledge of statistics and experience in handling data, but will make your assessment results more credible and a better basis for policy-level decisions on funding adaptation measures. This is even truer for the probabilistic risk analysis method applied during this module, which requires a substantial amount of (historic) data to estimate hazard intensity/probability and potential impacts/consequences. For example, daily weather data and (at least) monthly mortality statistics for all regions of the study area are necessary to estimate the impacts of a heat wave on the inhabitants of a city. In case the required capacities (in terms of time, personnel, and knowledge) are not available, we recommend involving consultants or experts from local universities or research institutions.
Alternatively, the authors of the Vulnerability Sourcebook have published the “Risk Supplement to the Vulnerability Sourcebook” (BMZ 2017), which introduces a less data-intensive, non-probabilistic risk analysis method that is compatible with the results achieved as part of IVAVIA up to this point. However, including hazard probabilities in the assessment process, while data-intensive, enables working with climate projections and, subsequently, changing hazard intensities and probabilities of occurrence, which raises the credibility of your assessment results even more and gives you a more complete picture of the future risks your city faces.
In this approach, impacts / consequences and probabilities are classified using discrete, ordinal classes (e.g. ‘insignificant’, ‘minor’, or ‘disastrous’ for consequences and ‘very unlikely’, ‘likely’, and ‘very likely’ for probabilities). Applied to climate-change-related risk, the ‘consequences’ would be the impacts identified previously, e.g. by means of IVAVIA’s impact chain diagrams. The resulting impact and probability pairs in the BBK approach, i.e. the risk scores, are then assigned to discrete, ordinal risk classes using a risk matrix. This matrix has one axis for the impact classes and one axis for the probability classes and defines a risk class for every combination of the two. Figure 14 shows an example risk matrix, in use at the German Federal Office of Civil Protection and Disaster Assistance, while Figure 15 shows a risk matrix recommended by the Climate Change Office of the Spanish Ministry of Agriculture, Food, and Environment.
Figure 14: Risk matrix used by the German Federal Office of Civil Protection and Disaster Assistance. Source: BBK 2011, axis labels modified by Fraunhofer for consistency
Figure 15: Risk matrix recommendation used in Spain. Source: Solaun 2014, translated by Fraunhofer
The risk assessment process starts with deciding how many classes for impacts / consequences, probabilities, and risk are used, how estimated consequences and probabilities are classified, and how impact / probability combinations are categorized. For many European countries, national or regional standards, or at least guidelines, for risk analysis using matrices exist, which already define impact, probability, and risk classes and their relationships (see e.g. BBK 2011 for Germany or Solaun 2014 for Spain). If no such regulations or guidelines exist, expert judgements have to be employed.
One very important aspect of defining the classification scheme is the assignment of numerical values to discrete classes: as can be seen in Figure 14, every impact and probability class is associated with an integer value. There are two reasons for this:
- Assigning numerical values to impacts and probabilities enables you to use interim values without defining a larger number of classes. For example, a probability of 0.002, which would be classified between ‘likely to a limited extent’ and ‘likely’ when employing the scheme depicted in Figure 17, could be associated with a numerical value of 3.5.
- Numerical impact values for impacts of the same category (see Figure 16) may be aggregated to a single impact value for the whole category, e.g. using the weighted arithmetic mean.
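The idea of interim numerical values can be sketched in code. The class names follow the scheme in Figure 17, but the anchor probabilities and the log-scale interpolation below are illustrative assumptions, not the official BBK values:

```python
import math

# Anchor probability per numerical class value (ASSUMED for illustration;
# the official values in the BBK scheme of Figure 17 may differ).
ANCHORS = [
    (1e-5, 1),  # 'very unlikely'
    (1e-4, 2),  # 'unlikely'
    (1e-3, 3),  # 'likely to a limited extent'
    (1e-2, 4),  # 'likely'
    (1e-1, 5),  # 'very likely'
]

def numerical_probability_value(p):
    """Map an annual occurrence probability to a (possibly interim)
    numerical value by log-linear interpolation between class anchors."""
    if p <= ANCHORS[0][0]:
        return ANCHORS[0][1]
    if p >= ANCHORS[-1][0]:
        return ANCHORS[-1][1]
    for (p_lo, v_lo), (p_hi, v_hi) in zip(ANCHORS, ANCHORS[1:]):
        if p_lo <= p <= p_hi:
            frac = (math.log10(p) - math.log10(p_lo)) / (
                math.log10(p_hi) - math.log10(p_lo))
            return v_lo + frac * (v_hi - v_lo)

# A 500-year flood (p = 1/500 = 0.002) receives an interim value
# between class 3 and class 4 under these assumed anchors.
interim = numerical_probability_value(1 / 500)
```

Interim values obtained this way can later be aggregated, e.g. with a weighted arithmetic mean, without having to define additional classes.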
The classification of estimated impacts and probabilities is done by defining threshold values for the different impact / probability classes, i.e. the value range in which potential impacts / probabilities have to lie to be classified a certain way. This may constitute one of the most problematic steps of the assessment process, because it involves highly political issues that have to be handled with extreme care, e.g. deciding when a certain number of fatalities is regarded as ‘moderate’ or ‘significant’.
Thresholds for impact classes will usually be defined differently for different categories (e.g. human consequences or environmental consequences) and use varying measurement units (e.g. number of fatalities or hectares of affected agricultural land). Figure 16 shows two of the categories and their related measurement units employed by the German Federal Office of Civil Protection and Disaster Assistance.
Figure 16: Example consequence indicators. Source: BBK 2011
Figure 17: Probability classification scheme as employed by the German Federal Office of Civil Protection and Disaster Assistance. Source: BBK 2011
Threshold values for probabilities are generally not hazard-specific, so that a single classification scheme can be employed. Figure 17 shows the probability classification scheme in use at the German Federal Office of Civil Protection and Disaster Assistance.
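A minimal sketch of such threshold-based classification: the function below assigns an ordinal class to a measured consequence value, with separate thresholds per category. The threshold numbers are purely illustrative placeholders; the real values are a political decision and come from schemes like BBK 2011:

```python
import bisect

# Upper bounds for classes 1..4 per category; values above the last
# bound fall into class 5. All numbers are ILLUSTRATIVE assumptions,
# not the official BBK thresholds.
IMPACT_THRESHOLDS = {
    "fatalities": [1, 10, 100, 1000],
    "affected_agricultural_land_ha": [10, 100, 1000, 10000],
}

def impact_class(category, value):
    """Return the ordinal impact class (1..5) for a measured value."""
    bounds = IMPACT_THRESHOLDS[category]
    return bisect.bisect_left(bounds, value) + 1

impact_class("fatalities", 25)  # class 3 under the assumed thresholds
```

Keeping the thresholds in a per-category table makes the political decisions explicit and easy to revise without touching the classification logic.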
Step 5.2: Estimating hazard intensity and probability
After the classification scheme has been defined, the intensity and probability of the relevant scenarios (e.g. a 500-year flood) have to be estimated for every region under examination, based on the data of the corresponding indicators, e.g. as defined in the related impact chain. This can, for example, be done by analysing and aggregating historical indicator data, employing climate projections, or using simulation methods. The latter is especially suitable if you lack sufficient historical indicator data and are not able to calculate probabilities. In this case, advanced applications can be employed if suitable data models exist. For example, if a digital elevation model including 3D models of buildings and infrastructure exists, probabilistic flood risk models may be employed1. For hazards other than flooding, the situation regarding available data may be worse. In this case, expert judgements have to be employed to assess hazard probabilities and intensities in a qualitative way.
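Where enough historical indicator data exists, a first rough probability estimate can be derived empirically. The sketch below uses made-up annual maxima and Weibull plotting positions; a real assessment would typically fit an extreme-value distribution instead:

```python
# Made-up annual maximum river levels (m) for a ten-year record;
# purely illustrative, not measured data.
annual_maxima_m = [2.1, 3.4, 2.8, 4.0, 3.1, 2.5, 3.7, 2.9, 3.3, 2.2]

def exceedance_probability(threshold, annual_maxima):
    """Empirical annual probability that the yearly maximum exceeds
    `threshold`, using the Weibull plotting position m / (n + 1)."""
    n = len(annual_maxima)
    exceedances = sum(1 for x in annual_maxima if x > threshold)
    return exceedances / (n + 1)

p = exceedance_probability(3.5, annual_maxima_m)
return_period_years = 1 / p  # rough return period of a 3.5 m flood level
```

With only ten years of record, such estimates are very coarse for rare events (a 500-year flood cannot be read off a ten-year series), which is exactly why climate projections, simulation, or expert judgement are needed in practice.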
An example of indicator data that may be analysed to estimate the intensity and probability of a hazard is shown in Figure 18. It depicts a flood depth map for a 500-year flood in Bilbao as provided by the Basque Water Agency URA. Maps of this kind can, for example, be processed with Geographic Information Systems to calculate flood depth statistics for different regions.
(1 See e.g. Koks 2015, Merz 2014, Moel 2016, Muis 2015, Paudel 2013, Samuels 2008, and Ward 2013)
Figure 18: Flood depth map for a 500-year flood in Bilbao as provided by the Basque Water Agency URA. Flooded regions are shown in grey levels; the lighter the colour, the deeper the flood.
At this point, the defined classification scheme can be employed to classify the estimated probability values. For example, classifying a 500-year flood, i.e. an annual occurrence probability of 0.002, using the probability classification scheme depicted in Figure 17 would put this concrete hazard between ‘likely to a limited extent’ (numerical category 3) and ‘likely’ (numerical category 4).
Step 5.3: Estimating impacts / consequences
Before determining potential impacts / consequences you have to determine the effective exposure, i.e. the fraction of the exposed objects that is actually affected by a specific hazard occurrence. This is done using the characteristics of the concrete hazard and its intensity as determined in previous steps. For example, while all buildings in a neighbourhood could potentially be affected by a flood (i.e. they constitute the exposure), only a fraction of the buildings will be affected by a 500-year flood. To determine this fraction, you may employ a flood map, which is usually specific to a given hazard intensity.
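As a simple sketch of this step, the effective exposure is the share of exposed objects inside the hazard-intensity-specific flood extent (the counts below are hypothetical):

```python
def effective_exposure(exposed_count, affected_count):
    """Fraction of the exposed objects actually affected by a specific
    hazard occurrence, e.g. buildings inside a 500-year flood extent."""
    return affected_count / exposed_count

# Hypothetical numbers: 300 of 1200 exposed buildings lie within the
# flood extent taken from a 500-year flood map.
fraction = effective_exposure(exposed_count=1200, affected_count=300)  # 0.25
```

In practice the affected count would come from overlaying the building layer with the flood map in a GIS, separately for each hazard intensity under examination.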
To estimate the expected impacts, we employ a process called Consequence Analysis (see e.g. Xie 2016). In the context of IVAVIA, it is the process of relating hazard intensities to expected impacts. There is no hard-and-fast standard process, but rather a suite of tools, models, and approaches that may be employed, depending on the data and resources (in terms of personnel, time, and funds) available. Some of the most frequently used are:
- Damage functions correlate hazard intensity, often quantified using a single measure, with potential damages. For example, flood depth-damage functions relate flood depths to damages; these may be damages in terms of monetary values, reductions in travel speed, numbers of fatalities, or other damages. Sources for damage functions include, for example, the JRC Technical Report on global flood depth-damage functions (Huizinga 2017) or the standard method for damage and casualty estimation in the Netherlands (Kok 2004). A damage function that has an associated mathematical model can be evaluated by computer programs (e.g. Mixed Integer Linear Programming (MILP) algorithms).
- Inter-/extrapolation may be used if historical data on past intensities and impacts is available. In this case, the historical data is analysed and used to define damage functions by inter- or extrapolating the consequences resulting from historical hazard intensities and probabilities.
- Expert judgement may be employed if absolutely no other data is available. Here, local experts qualitatively estimate the impacts resulting from the given hazard intensities.
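A damage function of the first kind can be sketched as a piecewise-linear depth-damage curve. The curve points below are illustrative and not taken from Huizinga 2017 or Kok 2004:

```python
# Illustrative depth-damage curve: water depth (m) vs. damaged fraction
# of a building's value. NOT actual values from the cited sources.
DEPTH_M = [0.0, 0.5, 1.0, 2.0, 4.0]
DAMAGE_FRACTION = [0.0, 0.25, 0.40, 0.60, 0.85]

def damage_fraction(depth):
    """Linearly interpolate the damaged value fraction for a flood depth."""
    points = list(zip(DEPTH_M, DAMAGE_FRACTION))
    if depth <= points[0][0]:
        return points[0][1]
    if depth >= points[-1][0]:
        return points[-1][1]
    for (d0, f0), (d1, f1) in zip(points, points[1:]):
        if d0 <= depth <= d1:
            return f0 + (f1 - f0) * (depth - d0) / (d1 - d0)

damage_fraction(1.5)  # 0.5 under the assumed curve
```

Published depth-damage functions typically provide such curve points per building or land-use type; the inter-/extrapolation approach above amounts to fitting these points from historical events instead.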
In addition to the exposed object directly defined by the impact chain (e.g. built-up area), further exposed elements might need to be considered when estimating impacts, depending on the impacts previously defined, e.g. by the corresponding impact chain. For example, if fatalities and injuries should also be assessed, data on the exposed population needs to be gathered in addition to building data. This also includes cascading impacts resulting from a hazardous event, e.g. effects of traffic disruptions caused by flooding or economic losses due to disrupted supply chains. The occurrence of cascading effects is an especially important characteristic when considering impacts to critical infrastructures (CI), where a failure in one CI element can spread to other elements of the same CI system or to different, dependent CI systems (see RESIN D1.1). Cascading effects in a single CI can be modelled as secondary impact chains, at a high level of abstraction from the physical level. Damages from cascading effects can be estimated using different methods: simulation models can be employed to estimate traffic disruptions resulting from flood-related rerouting, while input-output models can be used to model links between economic sectors and subsequently economic impacts.
The vulnerability of the exposed area with regard to the hazard influences the potential impacts. Therefore, the vulnerability scores should influence the Consequence Analysis. How this is achieved depends on the employed method and on the scale of the vulnerability scores. Continuing the Bilbao example from the previous step, building damages from fluvial flooding were estimated by multiplying damage values obtained from the global flood depth-damage functions (Huizinga 2017) with vulnerability scores scaled from 0 (‘optimal’) to 1 (‘critical’) (see also Figure 13). Thus, regions with the highest vulnerability score suffer the maximum amount of damage under the given hazard intensity, while regions with lower vulnerability scores only suffer reduced damages. In this case, the damages estimated using the flood depth-damage functions were interpreted as worst-case consequences, reduced by the vulnerability score to derive expected damages.
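Interpreting the depth-damage result as a worst case and scaling it by the vulnerability score, as in the Bilbao example, can be sketched as follows (all numbers hypothetical):

```python
def expected_damage(object_value_eur, damage_fraction, vulnerability_score):
    """Worst-case damage from a depth-damage function, reduced by the
    region's vulnerability score (0 = 'optimal' .. 1 = 'critical')."""
    worst_case = object_value_eur * damage_fraction
    return worst_case * vulnerability_score

# Hypothetical building stock worth 1 M EUR, 50 % worst-case damage
# at the given flood depth, vulnerability score 0.7.
expected_damage(1_000_000, 0.5, 0.7)  # 350000.0 EUR
```

Note that this multiplicative coupling is one possible choice; other methods may incorporate vulnerability into the damage function itself or use it to shift between several damage curves.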
Step 5.4: Calculating risk scores
After all expected impacts have been estimated, they are classified according to the classification scheme defined during step 5.1.
Impact chains will usually contain multiple impacts and, subsequently, multiple expected consequences will be estimated, e.g. damages to buildings as well as to transport infrastructure in Euro. To aggregate different consequences to a single impact value, each impact and probability class is assigned an integer value (see e.g. Figure 14). Aggregation of multiple impact values can then be done using similar aggregation methods as during the aggregation of composite risk components, i.e. the weighted arithmetic or geometric mean.
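The aggregation step can be sketched as a weighted arithmetic mean over the integer impact values (the weights below are illustrative):

```python
def weighted_mean(values, weights):
    """Weighted arithmetic mean, as used when aggregating the
    components of composite risk indicators."""
    return sum(v * w for v, w in zip(values, weights)) / sum(weights)

# e.g. building damages classified as class 4, transport infrastructure
# damages as class 2, with assumed weights 0.6 and 0.4.
material_impact_value = weighted_mean([4, 2], [0.6, 0.4])  # 3.2
```

The same function also accepts interim (non-integer) impact values, so classified and interpolated values can be mixed in one aggregation.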
It is important to be cautious when aggregating impact values. Damages do not only affect objects that can be restored by providing a sufficient amount of money. For example, the loss of cultural heritage or of lives cannot simply be measured in terms of budgets. In such cases, the impacts should not be aggregated, but kept separate instead, resulting in multiple aggregated impact values, e.g. for material impacts and consequences to humans.
After all impacts and probabilities have been estimated and classified, risk scores for the different regions under examination can be derived using the matrix defined during step 5.1.
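The matrix lookup itself reduces to indexing a table with the classified impact and probability values (the matrix entries below are an illustrative placeholder, not the official matrix of Figure 14):

```python
# Rows: impact class 1..5, columns: probability class 1..5.
# Entries are ASSUMED for illustration; a real assessment would use
# the matrix defined by the applicable standard or by expert judgement.
RISK_MATRIX = [
    ["low",      "low",      "low",       "moderate",  "moderate"],
    ["low",      "low",      "moderate",  "moderate",  "high"],
    ["low",      "moderate", "moderate",  "high",      "high"],
    ["moderate", "moderate", "high",      "high",      "very high"],
    ["moderate", "high",     "high",      "very high", "very high"],
]

def risk_class(impact_class, probability_class):
    """Look up the ordinal risk class for an (impact, probability) pair."""
    return RISK_MATRIX[impact_class - 1][probability_class - 1]

risk_class(4, 3)  # 'high' under the assumed matrix
```

Running this lookup per region and per aggregated impact category (material, human, etc.) yields the per-region risk scores shown on the risk maps.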
Figure 19 and Figure 20 show risk maps for human and material impacts for the fluvial flooding example from Bilbao. The risk scores are based on impact categorizations provided by the municipality of Bilbao using the approach of the German Federal Office of Civil Protection and Disaster Assistance. The results show that the densely populated inner-city neighbourhoods have a higher risk of consequences to humans, while the river neighbourhood of La Ribera has the highest risk of material consequences.
Step 5.5: Result validation
Before presenting the results of your assessment to (external) stakeholders / decision makers, you should conduct a final validation / plausibility check, preferably including a small group of colleagues / experts or stakeholders knowledgeable on the subject and the area under review.
The goal of the validation is to make sure that your results reflect local conditions correctly, to identify any inconsistencies, and to decide whether or not any corrective actions are necessary.
At the least, you should apply common sense to check whether your results are plausible. For example, when assessing the effects of heat waves on the inhabitants of a city and your indicators include measures like population density and the percentage of surface area covered by green infrastructure, exposed areas with high population density and little green infrastructure should exhibit higher risk values.
Usually, the validation takes a top-down approach: you will start by looking at the risk scores for the regions under examination, identify regions with unexpectedly high / low risk scores, and further analyse the partial results of these regions by looking at (classified) probability values, hazard intensities, estimated impacts, and composite risk scores. Once you have identified unexpected partial results, you might need to delve even deeper, e.g. by analysing single sensitivity / coping capacity indicator values for their plausibility.
Be aware that unexpected results do not necessarily indicate an error in your calculation. It might be that the data used for the assessment does not correctly represent the real-world circumstances or that some parameters of the assessment distort the results. For example, when conducting a neighbourhood-scale assessment, information about uneven spatial distributions of indicators within neighbourhoods (e.g. lots of green infrastructure in a non-flood-prone area and almost no green infrastructure in a flood-prone area) will be lost when averaging across the whole neighbourhood. This would not be an error in your calculation, but would indicate that the resolution of your assessment is not high enough and needs to be raised.
Visualisations of partial results will make the validation a lot easier. Maps and charts let you identify regions with unexpected results more easily than simple spreadsheets. This is especially true if your assessment contains a large number of study regions. We suggest visualising at least risk scores, vulnerability scores, sensitivity scores, and coping capacity values using maps.
During result validation it may become evident that additional or other data needs to be acquired, indicator weights need to be adjusted, the resolution of the assessment needs to be changed, or whole indicators need to be replaced. In this case, you need to go back, repeat at least parts of the related step(s) in the assessment process, and make changes to the documentation. For example, if an indicator has to be replaced, you have to adjust the related part of the impact chain, gather the necessary data (or combine the already gathered data in a different way), possibly adjust the indicator weight, and recalculate the related composite risk component, vulnerability score, expected impacts, and risk scores.