Module 4: Normalisation, weighting, and aggregation of indicators

At this stage of the IVAVIA process you have already achieved many valuable results:

  • You have structured the cause-effect relationship between the most relevant hazards and exposed objects and have identified important intensifying and mitigating aspects of the exposed object as well as potential coping capacities.
  • You have gathered and/or defined indicators to quantify the different elements of your impact chains.
  • You have acquired and analysed the necessary data sets for the measurement of the indicators, each of which, on its own and in combination, may already provide you with an impression of the vulnerability of the examined area.

Some users may want to stop at this point and use the gathered results to determine the vulnerability of the study area by analysing the raw indicator data. However, communicating a multitude of complex, multi-dimensional indicators in a comprehensible way is complicated. Composite indicators, by contrast, are easier to comprehend, both by policy makers and the general public. Therefore, we highly recommend putting in the extra effort of normalising, weighting, and aggregating the gathered data, as described in this module. If you do not feel confident doing this alone, data experts in the municipal administration or at a local university might be able to help.

Because the indicators you selected and calculated in the previous modules use different measurement units and scales (e.g. cm of water, °C), they cannot be combined into risk components without normalisation. Normalisation is the process of using mathematical operations to transform values measured on different scales and in different units into unit-less values on a common scale. Additionally, normalisation allows you to assess the criticalness of different indicator values by placing them on a fixed scale: for example, when employing a scale from 0.00 to 1.00, a value of ‘0.00’ could be defined as ‘optimal, no improvement necessary or possible’, while a value of ‘1.00’ could be defined as ‘critical’.

This module shows you how to normalise indicator values: first by determining the measurement scale of each indicator, and then by using this information to normalise the values.

Afterwards, this module shows you how to weight and combine the coping capacity and sensitivity indicators selected in Module 3 into composite indicators for their respective risk components, which are in turn aggregated into the risk component ‘vulnerability’. The indicators for the other risk components do not have to be aggregated any further and are therefore not covered in this module.


Step 4.1: Determine the scale of measurement


The measurement scale of an indicator determines which normalisation methods can be employed (see step 4.2). Therefore, you have to determine the measurement scale for each of your chosen indicators before starting the actual normalisation process.

The scale, in turn, is determined by the phenomenon you observe and how you intend to describe it (e.g. household income in absolute numbers or grouped in classes). The scale types most relevant to your vulnerability assessment are metric, ordinal, and nominal:

  • A metric scale uses ordered, numerical values with a clearly defined, fixed interval between two values. For example, the temperature difference between 7°C and 10°C is the same as the difference between 36°C and 39°C, while a road segment that is 300m long is three times as long as a 100m segment. Metric scales allow you to determine whether two values are equal or not, whether one value is larger than another, and to apply basic mathematical calculations (+, -, *, /) to them.
  • An ordinal scale is used if an order on a range of values can be established, but the interval between two values is undefined or unknown. School marks are one example of an ordinal scale; certain survey results (‘Do you like, somewhat like, have no opinion, somewhat dislike, or dislike something?’) might be another. Ordinal scales allow you to determine whether two values are equal or not and whether one value is larger than another, but you cannot apply basic mathematical calculations (+, -, *, /) to them.
  • A nominal scale only allows simple categorisation of values without establishing any order, e.g. names or soil types; nominal scales therefore only allow you to determine whether two values are equal or not. The sketch following this list illustrates all three scale types.
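
A minimal sketch in Python (all indicator names and values are illustrative assumptions, not taken from the guideline) shows which operations are meaningful on each scale type:

```python
# Minimal sketch: how the three scale types behave in practice.
# All indicator names and values are hypothetical.

# Metric scale: ordered values with fixed intervals -> arithmetic is meaningful.
road_length_m = [100, 300]
print(road_length_m[1] / road_length_m[0])  # 3.0 -- "three times as long" is valid

# Ordinal scale: order is defined, but intervals are not -> only comparisons.
survey_answer_rank = {"dislike": 1, "somewhat dislike": 2, "no opinion": 3,
                      "somewhat like": 4, "like": 5}
print(survey_answer_rank["like"] > survey_answer_rank["no opinion"])  # True
# survey_answer_rank["like"] - survey_answer_rank["no opinion"] has no defined meaning.

# Nominal scale: categories only -> only equality checks are meaningful.
soil_type_a, soil_type_b = "clay", "sand"
print(soil_type_a == soil_type_b)  # False
```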


Step 4.2: Normalising coping capacity and sensitivity indicator values


There is no standard approach to normalisation. Different approaches have different (dis)advantages. Which one you should employ depends on

  • your measurement goals,
  • how you and your colleagues interpret the indicators,
  • the scale types of the indicators, and
  • your normalised target scale (e.g. from 0 to 1 or from 1 to 2).

However, you should employ the same normalisation approach across all indicators, otherwise you might distort the results of the calculations. Consequently, you should confer with the rest of your team and other experts (if available) before starting the normalisation process and keep to a consistent methodology. Section D of the appendix describes two frequently employed normalisation methods suitable for different indicator scales.
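
As a concrete illustration, the sketch below applies min-max normalisation (the method also referred to in step 4.4 and section D of the appendix) to a hypothetical metric indicator. It is only a sketch of one possible approach; the indicator values and the 0-to-1 target scale are assumptions for the example, not a prescribed method.

```python
def min_max_normalise(values, target_min=0.0, target_max=1.0):
    """Map metric indicator values onto a common, unit-less scale
    (here 0.0 = lowest observed value, 1.0 = highest observed value)."""
    lo, hi = min(values), max(values)
    if hi == lo:                     # avoid division by zero for a constant indicator
        return [target_min for _ in values]
    return [target_min + (v - lo) / (hi - lo) * (target_max - target_min)
            for v in values]

# Hypothetical indicator: flood water depth in cm for four districts.
water_depth_cm = [12.0, 35.0, 7.0, 50.0]
print(min_max_normalise(water_depth_cm))
# -> [0.116..., 0.651..., 0.0, 1.0]
```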

Check the direction of the value range

Depending on which normalisation and aggregation method (see steps 4.4 and 4.5) you employ, you need to check whether the normalised indicator values increase in the right direction. If vulnerability scores are calculated using the arithmetic or geometric mean, the normalisation of an indicator should result in lower values for positive conditions in terms of vulnerability and higher values for more negative conditions. For example, household income may be selected as an indicator for coping capacity, where a higher income represents a higher coping capacity and thus lowers vulnerability. As a result, the direction of the indicator’s value range is negative: vulnerability increases as the indicator value decreases. In this case your normalised indicator values increase in the wrong direction and you need to invert the value range by subtracting the normalised indicator value from the maximum normalised value of your scale. For example, in the case of our example scale from 0 to 1, you would subtract the normalised indicator value from ‘1’.
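
Continuing the household-income example with hypothetical, already normalised values, inverting the value range is a one-line operation:

```python
# Hypothetical coping-capacity indicator: higher household income means
# higher coping capacity and therefore lower vulnerability, so the
# normalised values point in the "wrong" direction for aggregation.
normalised_income = [0.0, 0.25, 0.75, 1.0]   # already min-max normalised to 0..1

# Invert by subtracting from the maximum value of the target scale (here: 1).
inverted_income = [1.0 - v for v in normalised_income]
print(inverted_income)  # [1.0, 0.75, 0.25, 0.0] -- highest income now scores lowest
```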


Step 4.3: Weighting coping capacity and sensitivity indicators


Usually, you will have chosen multiple indicators per risk component. These indicators may not necessarily have equal influence on their corresponding risk component, which should be reflected by assigning weights to them when combining them into composite indicators. Consequently, indicators with greater (respectively: lesser) weight will have a greater (respectively: lesser) influence on the respective risk component. However, there are also valid reasons for assigning equal weights to all indicators: for example, when no information about indicator influence is available, no consensus between stakeholders can be achieved, or not enough resources for defining different weights are available. This is especially true if a large number of indicators for the different risk components is used, which can make the definition of meaningful weights unfeasible. Nonetheless, weighting can have a major influence on the assessment results and should be undertaken with care and in a transparent process.
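
In practice, weights are usually rescaled so that they sum to 1 before aggregation. The sketch below shows this convention with hypothetical indicator names and raw weights (equal weighting would simply assign 1/n to each of the n indicators):

```python
# Hypothetical sensitivity indicators and raw weights agreed with stakeholders.
raw_weights = {
    "share_of_elderly_population": 3.0,
    "share_of_sealed_surfaces": 2.0,
    "building_density": 1.0,
}

# Rescale so that the weights sum to 1 before they are used in the
# weighted means of step 4.4.
total = sum(raw_weights.values())
weights = {name: w / total for name, w in raw_weights.items()}
print(weights)
# -> {'share_of_elderly_population': 0.5,
#     'share_of_sealed_surfaces': 0.333...,
#     'building_density': 0.166...}
```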

Weights can be assigned based on existing literature, participatory approaches including stakeholders and experts, and statistical procedures. It is important to know that neither participatory nor statistical procedures provide objective ways of defining weights. Consequently, weights should always be regarded as a value judgement.

If the vulnerability assessment is part of a larger monitoring and evaluation process and has to be repeated later, you must ensure that weights remain constant over time. Otherwise it is impossible to determine whether changes in risk components result from wider changes in the system under examination, from the effect of implemented adaptation measures, or simply from differences in weighting.

Lastly, remember to document the chosen or calculated weights as well as the reasons for choosing a specific calculation method.


Step 4.4: Aggregating coping capacity and sensitivity indicators


As with normalisation, there is no standard approach for aggregating indicators into composite risk components. The literature covers several aggregation methods, each with their own (dis)advantages (cf. OECD 2008). Two commonly used approaches are the weighted arithmetic mean and the weighted geometric mean. Which method you choose ultimately depends on which of their properties are most suitable for your assessment. In case you want to undertake this step without the help of data experts, the appendix describes these methods in detail, including examples (appendix E).
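
If you want to get a feeling for the two approaches before consulting appendix E, the sketch below implements both means in a generic form; the indicator values and weights are hypothetical, and the exact formulations in the appendix may differ in detail (e.g. in how weights are scaled).

```python
import math

def weighted_arithmetic_mean(values, weights):
    """Sum of weight * value, assuming the weights sum to 1."""
    return sum(w * v for v, w in zip(values, weights))

def weighted_geometric_mean(values, weights):
    """Product of value ** weight, assuming the weights sum to 1
    and all values are strictly positive."""
    return math.prod(v ** w for v, w in zip(values, weights))

# Hypothetical normalised sensitivity indicators for one district and their weights.
indicator_values  = [0.2, 0.6, 0.9]
indicator_weights = [0.5, 0.3, 0.2]

print(weighted_arithmetic_mean(indicator_values, indicator_weights))  # ~0.46
print(weighted_geometric_mean(indicator_values, indicator_weights))   # ~0.38
```

One practical difference is visible even in this small example: the geometric mean penalises very low indicator values more strongly than the arithmetic mean, so a single poor indicator pulls the composite score down further.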

Another decision you have to take when aggregating indicator values into coping capacity and sensitivity scores is whether or not to normalise the aggregated scores. If you want to rank the different regions under examination relative to one another in order to identify the most sensitive region and/or the region with the highest/lowest coping capacity in your specific local context, you should normalise the aggregated scores, e.g. using min-max normalisation as described in section D of the appendix. However, if you want to compare aggregated scores with aggregated scores from other urban areas, or you want to assess the criticalness of the regions of your study area against a predetermined scale, you should not normalise your sensitivity and coping capacity scores. For a short example of the effects of (re-)normalising aggregated scores see step 4.5.

Regardless of which aggregation method and normalisation approach you choose, you should document your choice and the reasons for it, to allow non-participating colleagues (and yourself) to reconstruct the process at a later point in time.

Potential pitfalls

As mentioned in step 4.2, regardless of which aggregation method you choose, you have to make sure that all indicators that are to be aggregated are aligned in the same way, i.e. low/high indicator values have the same meaning in terms of vulnerability.

Lastly, you should apply a plausibility check to your aggregation (to see, for example, if a single indicator dominates a risk component) by presenting the results (for example in the form of maps) to colleagues, experts, or stakeholders knowledgeable about the subject and the area under review. If the plausibility check results in changes to weights or the aggregation method, the change process has to be as transparent as the initial process.

Step 4.5: Calculating vulnerability scores


You derive the vulnerability scores for the regions of your study area for a given impact chain by aggregating their respective composite coping capacity and sensitivity indicators. As in the previous steps, there is no standard method for aggregating these composite indicators. Instead, various methods exist, some of which were already mentioned, each with its own strengths and weaknesses. Again, which aggregation method you choose should primarily depend on which properties are most suitable for your assessment. Section F of the appendix to this guideline provides detailed descriptions of several aggregation methods, which may be employed to calculate vulnerability scores.
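
As a minimal sketch of what such an aggregation can look like, the example below assumes a simple weighted arithmetic mean of the two composite scores; this is only one possible option and not necessarily one of the methods described in appendix F, and all values are hypothetical.

```python
# Hypothetical composite scores for one district, both on a 0-to-1 scale on
# which higher values mean a less favourable condition in terms of vulnerability
# (the coping capacity score is assumed to be already inverted, see step 4.2).
sensitivity_score = 0.55
inverted_coping_capacity_score = 0.40

# One simple option: a weighted arithmetic mean of the two composite scores.
w_sensitivity, w_coping = 0.5, 0.5
vulnerability_score = (w_sensitivity * sensitivity_score
                       + w_coping * inverted_coping_capacity_score)
print(vulnerability_score)  # ~0.475
```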

Contingent on the calculation method you employed for the vulnerability scores and whether or not you normalised the coping capacity and sensitivity scores, re-normalisation of vulnerability scores might be unavoidable. However, you might choose to re-normalise the vulnerability scores anyway, if it suits your examination goal: should different regions of your study area be positioned relative to one another to allow ranking and identification of the most vulnerable areas, or should they be placed on a predetermined fixed scale to allow identification of their criticalness and comparability with assessments from other urban areas? The former approach would employ re-normalisation, e.g. min-max normalisation with the minimum and maximum preliminary vulnerability scores as threshold values, to make sure the scores occupy the whole vulnerability range and to make it easier to rank the regions according to their vulnerability. On the other hand, using a predetermined fixed scale allows you to identify where the different regions lie within a broader vulnerability spectrum. Figure 12 and Figure 13 show an artificial example for Bilbao: without re-normalisation, the aggregation results in non-normalised vulnerability scores between 0.32 and 0.67 on a fixed scale between 0 (‘optimal’) and 1 (‘critical’), with most regions getting a low to medium vulnerability score. Normalising these scores to the same scale using min-max normalisation with 0.32 and 0.67 as threshold values results in the region with a non-normalised score of 0.32 getting a normalised value of 0, the region with a non-normalised score of 0.67 getting a normalised score of 1, and all other regions being positioned relatively between them. While this normalisation allows for easier identification of the most vulnerable regions in the local context, it distorts the overall picture: the information that even the region with the lowest normalised vulnerability score exhibits at least a low overall vulnerability is lost. Both approaches have their value and, as mentioned at the beginning of this paragraph, which approach you choose ultimately depends on the goal of your assessment.
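
The effect described for the Bilbao example can be reproduced with a few lines of code; the threshold values 0.32 and 0.67 come from the example above, while the intermediate scores are hypothetical:

```python
# Aggregated vulnerability scores on the fixed 0-to-1 scale; 0.32 and 0.67 are the
# minimum and maximum from the Bilbao example, the other values are made up.
scores = [0.32, 0.45, 0.58, 0.67]

# Min-max re-normalisation with the observed minimum and maximum as thresholds.
low, high = min(scores), max(scores)
renormalised = [(s - low) / (high - low) for s in scores]

print(renormalised)
# -> [0.0, 0.371..., 0.742..., 1.0]
# The least vulnerable region now shows 0.0, although on the fixed scale it still
# scored 0.32, i.e. at least a low overall vulnerability -- this information is lost.
```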

Regardless of which aggregation method and normalisation approach you choose, you should document your choice and the reasons for it, to allow non-participating colleagues (and yourself) to reconstruct the process at a later point in time.


Figure 12: Artificial vulnerability map for Bilbao employing a predetermined fixed scale, i.e. non-normalised vulnerability scores


Figure 13: Artificial vulnerability map for Bilbao employing re-normalised vulnerability scores to establish a relative ranking of city districts

Proceed to Module 5