by Marc-Andre Leger, DESS, MASc
Lecturer, Graduate Programs in Governance, Audit and IT security
University of Sherbrooke, Quebec (Canada)
This article presents an assessment of Risk Analysis methodologies using a set of criteria. These criteria are applied to several methodologies and the results are presented. The CRAMM, ÉBIOS and OCTAVE methods appear to give the best results, while others, such as Méhari, give acceptable results but require a solid framework. Other methods are immature or unverifiable.
One of the tools in the arsenal of the Information Security (IS) professional and of organizations that wish to implement a formal process of informational risk management (IRM) is the Risk Analysis methodology (RAM). Many methods are currently available, some free, others at a significant cost. In certain cases, organizations have created their own RAM in order to meet specific needs and consider particular constraints. Each of these methods can prove to be an effective tool when it is used diligently in a well-defined context. However, like any tool, they have limits. They can be useful, but often only in a particular organizational context or a limited sphere of activity such as banking or government agencies. Like other methodologies used in academic research, they seek to measure concepts, in the case of RAM the level of informational risk, via variables (such as threat or impact) measured on a scale (for example: low, medium, high). Several of the measured concepts cannot be measured directly (such as reading the temperature on a thermometer) but only indirectly, requiring an individual to estimate the value of the variable assigned to the concept. Thus, like any methodology, the RAM must account for several sources of error or bias. For example, the selection of the individuals who give the answers, or the interpretation given to those answers, will affect the results. Likewise, the type of measurements used, the type of analysis (explanatory or statistical) applied to them and the manner in which the results are presented can affect the interpretation of the results by stakeholders.
In the field of medicine, it is necessary to determine the risk associated with the introduction of new protocols of care or of new drugs. For historical and ethical reasons, the field of medicine systematized the use of methodologies over the last century. Several studies were carried out on the sources of errors and biases in methodologies, the goal being to ensure that the results of a study are faithful to the reality being examined.
This article presents a methodological assessment of RAM against criteria from various sources, including requirements of methodological rigour drawn from medicine as well as criteria from international standards. A partial table is included at the end of this article. The complete table is available on www.ismsiug.ca or on the author's website (www.leger.ca). The principal concepts are also presented. The article then presents the evaluation criteria used and the results of their application. It concludes with suggestions for future explorations.
Methodology used for this article
Various methodologies and certain methodological tools were examined. They were selected according to what is currently used or available in Quebec (Canada). They are:
|Methodology||Source||Language||Critical mass of users||Available|
|Audicta||Audicta – Medical technologies||English and French||No||Yes|
|Callio Secura||Callio||English, French and others||No||No|
|CRAMM||Insight, a division of Siemens||English||Yes||Yes|
|ÉBIOS||France (DCSSI and Club EBIOS)||English, French and others||Yes||Yes|
|IVRI||by the author of this article||French||No||No|
|OCTAVE||CERT at Carnegie Mellon University||English||Yes||Yes|
|RiskPro||HEC (MONTREAL) – CIRANO||French||No||No|
Table 1: Methodologies assessed
Some of these methodologies are not presented in this article because they were either not easily available for analysis (IRAM, RiskIT, RiskPro) or are no longer available following the suspension of activities (Callio) or the abandonment of the project (ISO 13335-2, IVRI). Although there are many other methodologies available worldwide, they could not be identified and added within the scope of this study.
Methodologies for the analysis or management of risk were analyzed individually by the author by applying the criteria presented below. When possible, copies of the documents and of the tools being evaluated were obtained. The results were then compiled in a table which was circulated to a group of experts in risk management and experts of the various RAM. When necessary, additional explanations were provided to the experts. The experts submitted suggestions for corrections, which were integrated in a revised table presented with this article. When there was no consensus on the comments, discussions took place in order to arrive at the final results.
Of the analyzed methodologies, only four (4) could show a sufficient number of users to ensure their long-term viability: CRAMM, ÉBIOS, Méhari and OCTAVE. In the other cases, the available information did not make it possible to reasonably believe that the number of users was large enough to ensure the method's survival in the long run internationally.
What is Risk?
Information Security (IS) refers to two concepts: security and information, security being defined as the absence of unacceptable risks. Informational Risk Management aims to preserve or improve the quality of the informational assets of an organization in relation to its expectations (e.g. availability, integrity) or the expectations of its customers (e.g. protection of privacy). Security is also necessary because technology applied to information creates intrinsic risks. For example, a piece of hardware has a limited lifespan and is subject to breakdowns. But many of these problems are foreseeable, data being available on the capacities and performance of hardware (e.g. Mean Time Between Failure). Since the risk of breakdown can be estimated objectively on the basis of statistical data, it can be seen as the probability, over a given period, of having to repair a given piece of hardware. Thus there are several possible definitions of risk. Unfortunately, much of what has been written on risk is based on anecdotal data or on studies limited to a particular aspect, and the use of those definitions is dubious. It is therefore essential to define what risk means for this article.
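The objective breakdown risk described above can be made concrete. A minimal sketch, assuming a constant failure rate (exponential lifetime model, the usual simplification behind MTBF figures); the MTBF value and period are invented for illustration:

```python
import math

def failure_probability(mtbf_hours: float, period_hours: float) -> float:
    """Probability of at least one failure during the period,
    assuming a constant failure rate (exponential model)."""
    rate = 1.0 / mtbf_hours          # failures per hour
    return 1.0 - math.exp(-rate * period_hours)

# Illustrative figures: a disk with a 50,000-hour MTBF, running
# continuously for one year (8,760 hours).
p = failure_probability(50_000, 8_760)
print(f"P(failure within one year) = {p:.1%}")
```

This is the kind of objective estimate that is only as good as the statistical data behind the MTBF figure.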
The word risk has its origins in the Middle Ages, in the Italian word risco, meaning jagged rock; it also draws its origins from the Latin resecum. It was used by the early insurance companies (17th century), which insured ships and their cargo against the risco, to indicate danger at sea; the term evolved to become the word risk. Risk is often defined as a combination of the probability of occurrence of a damage and its gravity. For Knight, a significant author on risk in his time, risk refers to situations in which the decision maker assigns mathematical probabilities to the random events he faces. Risk is also defined as a variation in the results (outcomes) which can occur over a predetermined period in a given situation, or as a function of the distribution of the variance of the probabilities. Basically, risk is a social construct: it depends on the one who perceives it. The majority of definitions of risk integrate some element of subjectivity, according to the nature of the risk and the field in which the definition applies. But there is also an objective risk, quantified in car insurance policies, for example. The various definitions of risk are integrated in an operational definition for this study. Risk is:
- A discontinuity
- A dysfunction
- A disaster
- The difference between what was expected ($) and reality
- The probability of an event and its consequences
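The last definition, probability of an event combined with its consequences, is the one most RAM operationalize, classically as annualized loss expectancy (ALE = annual rate of occurrence × single loss expectancy). A sketch; all threat names, frequencies and dollar figures below are invented for illustration:

```python
# Risk as probability times consequence: ALE = ARO * SLE.
# All figures are invented for illustration.
scenarios = [
    {"threat": "laptop theft", "aro": 0.5, "sle": 4_000},
    {"threat": "ransomware",   "aro": 0.1, "sle": 250_000},
    {"threat": "disk failure", "aro": 2.0, "sle": 1_500},
]

for s in scenarios:
    s["ale"] = s["aro"] * s["sle"]   # expected annual loss for this threat

# Rank threats by expected annual loss to support decisions.
for s in sorted(scenarios, key=lambda s: s["ale"], reverse=True):
    print(f'{s["threat"]:<13} ALE = ${s["ale"]:>9,.0f}')
```

Ranking by ALE is one way the probability-and-consequence definition becomes a decision aid, with all the caveats about subjective estimates discussed below.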
Informational risk depends on unacceptability in relation to expectations of the value of informational assets, often declared prospectively within an organization. Expectations are established on the basis of policies, strategy and context (political, environmental, social, technological and economic). In a way, in Informational Risk Management (IRM), it is necessary to delimit the sandbox (the limits of the organization) and to trace a line in the sand (a baseline), on the basis of the expectations and the context, which delimits what is acceptable and what is not, what is our part of the sandbox and what belongs to others (externalities).
Objective risk is present when the variation exists in the real world (nature) and is the same for all individuals in an identical situation. This is distinct from subjective risk, which is an estimation of objective risk by an individual or a group. Risk is different from uncertainty, where the randomness cannot be expressed by probabilities, even subjective ones. The word risk is generally used when there is at least the possibility of negative consequences. This article does not discuss uncertainty, which is mitigated in organizations by Change Management, Business Continuity Planning or Incident Management functions. Nor does it discuss positive consequences, which are not a concern of IS as such.
What is a methodology?
The word methodology literally means science of the method. A methodology is a meta-method, a method of methods, which can be viewed as a kind of toolbox. In this toolbox each tool is a process, a technique or a technology suitable for solving an enigma or determining the value of a particular variable. When an individual works in a field of knowledge, a methodology makes it possible to establish a succession of actions to be carried out, questions to be posed and choices to be made, which allow a study or the resolution of a problem to be undertaken more effectively. It is one of the elements which make the difference between an art and a profession. In research, a methodology is this systematization of the study, independently of the subject of the study itself. It is what makes it possible to obtain results with demonstrable scientificity, which can be reproduced or verified by individuals external to the study. It is a fundamental building block of modern scientific knowledge.
A methodology for informational risk assessment proposes a series of activities and tools making it possible to analyze the informational risk in a precise context and at a given moment in time. In medical research, various qualities are necessary and expected of a methodology:
Credibility: The results of the analysis of the collected data reflect the experiences of the participants or the context with credibility.
Authenticity: The perspectives of the participants are presented in the results of the analysis and show understanding of the subtle differences in the opinions of all the participants.
Criticality: The analysis of the collected data and the results show evidence of critical appraisal.
Integrity: The analysis reflects repeated and recursive validity checks combined with a straightforward presentation.
Clarity: The methodological decisions and interpretations, as well as the particular positions of those who performed the study, are considered.
Realism: Rich descriptions that respect reality are illustrated clearly and vividly in the results.
Creativity: Creative methods of organizing, presenting and analyzing the data are incorporated in the study.
Exhaustiveness: The conclusions of the study cover all of the questions put forward at the beginning in an exhaustive way.
Congruence: The process and the results are congruent, go hand in hand with each other and fit no context other than that of the studied situation.
Sensitivity: The study was carried out considering human nature and the sociocultural context of the studied organization.
The following table presents a synopsis of the evaluation of the treatment of these different criteria in the methods studied.
Table 2: Synopsis of the evaluation of the treatment of the criteria in the analyzed methods.
The table shows that none of the methodologies meets all of the assessment criteria.
Why is this important?
According to Jung, there are two ways of obtaining information on the world which surrounds us: directly, perceived by our senses (e.g. one can touch, feel or see), or by intuition, which brings contents from the unconscious to the conscious (e.g. memory of acquired knowledge). Cognitive psychology teaches that this information, although it can seem exact to the individual, is prone to a number of biases, such as:
- paralogisms (errors of reasoning) both formal and informal
- cognitive dissonance
- judgement heuristics
- perceptual variations due to cultural or social factors
- the limits of vigilance
Internal validity refers to the exactitude of the results. There is internal validity when there is agreement between the data from the field and their interpretation in the results of the study. A study can be considered for its internal validity, i.e. it is true for the population being studied, meaning that the results of the study correspond to what was studied for these individuals at a given time. Without performing an in-depth study it was not possible to evaluate the internal validity of all the methodologies. External validity refers to the generalizability of the results (it allows one to draw impartial conclusions about a population larger than the set of subjects studied). This aspect of validity matters only with regard to a target population external to the study, which is less critical for small organizations, taking into account the limited use of RAM, but more significant for large organizations. For example, can the results of an RAM carried out with seven participants in a business unit be generalized to the whole organization (the target population being the whole organization, while the population that participated in the study is made up of seven individuals in a single division)? Here also, without formal controls and an in-depth study, it is impossible to evaluate external validity.
Expressed simply, it is important to use a quality RAM for the following reasons:
- the process must be independent of those who carry it out
- the results of the analysis must be representative of reality
- the results are used to make decisions.
The measurement of the variables
A first problem relates to the measurement of the variables which one seeks to study at the time of a Risk Analysis. Measurement is the attribution of numbers to objects, events or individuals according to pre-established rules, with the aim of determining the value of a given attribute. In scientific research, a variable is a concept to which a measurement can be given. It corresponds to a quality (e.g. small, large) or to a character (e.g. size, age) which can be attributed to an element (people, events) that is the subject of research and to which a value is allotted. Variables are connected to theoretical concepts by means of operational definitions, used to measure concepts, and can be classified in various ways according to the role they fill in a given research.
A part of informational risk can be measured objectively on the basis of the historical data of an organization: the objective informational risk. However, few organizations have a quantity of reliable objective data over a sufficient period of time to use it effectively. It is important to note that this part of risk raises problems of probability distribution which require explanation. If an individual assigns a potential risk on the basis of the probability of realization of future events, the individual often assumes that these events are distributed normally over time and over a very large number of observations. This measurement is very questionable without a sufficient knowledge base and a large number of observations.
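The effect of a small observation base can be illustrated with a confidence interval on an incident rate. A sketch, assuming incidents follow a Poisson process (itself a strong assumption) and using an invented incident history:

```python
import math

def incident_rate_ci(count: int, years: float, z: float = 1.96):
    """Estimated annual incident rate with an approximate 95% confidence
    interval, assuming a Poisson process (a strong assumption)."""
    rate = count / years
    se = math.sqrt(count) / years    # SE of a Poisson count is sqrt(count)
    return rate, max(0.0, rate - z * se), rate + z * se

# Illustrative history: 4 incidents observed over 2 years.
rate, low, high = incident_rate_ci(4, 2)
print(f"estimated rate: {rate:.1f}/yr, 95% CI approx. [{low:.2f}, {high:.2f}]")
```

With only four observations the interval spans roughly 0 to 4 incidents per year, which is why objective estimates built on short histories are of limited use.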
All that cannot be measured objectively must be measured subjectively. Thus all risk that is not objective risk is placed, in this article, in the category of subjective informational risk. Any methodological approach seeking to determine the situation of an organization as regards the management of subjective informational risk must integrate ways of identifying the expectations of the organization (values and beliefs) through the individuals who compose it and through the documents (artefacts) available. From a methodological point of view, a qualitative approach is the most likely to enable the description of the phenomenon of informational risk in its particular context, taking into account the current state of knowledge. The methodological controls that apply to qualitative research methodologies are thus necessary in order to ensure the congruence of the results of an RAM with the reality of the organization. If the validity of the results cannot be guaranteed, the door is open to criticism of the results and of any recommendations following a study.
One distinguishes discrete measurements (using categories) from continuous measurements. Continuous measurements use numerical values according to defined rules of measurement (quantity, length, temperature). They make it possible to determine whether a characteristic is present and, if so, to what degree. Measurement scales are usually classified in four categories, presented in the table below in ascending order of precision and of the complexity of the mathematical calculations they allow.
|Scale of measurement||Description||Example(s)|
|Nominal scale||Classifies objects into categories. The numbers are labels and do not represent relative values or quantities. Nonparametric tests and descriptive statistics can be used.||Sex (male or female), race, religion|
|Ordinal scale||Objects are classified by order of magnitude. The numbers indicate rank, not quantities. The use of descriptive statistics is possible. Statistical use can be allowed if there is an underlying continuum of intervals.||Degree of schooling: Secondary 1, Secondary 2, Secondary 3. Level of exposure: low, medium or high. Wage category: modest ($0 to $20,000), low ($20,001 to $35,000), etc.|
|Interval scale||The intervals between the numbers are equal. The numbers can be added or subtracted. The numbers are not absolute because the zero is arbitrary. A great number of statistical operations are allowed.||Temperature measured in degrees Celsius|
|Proportional scale||The scale has an absolute zero. The numbers represent real quantities and all mathematical operations can be carried out on them.||Temperature measured in kelvins, weight in kilograms, size in meters, income in dollars|
Table 3: Categories of measurement scales
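The distinctions in Table 3 can be made concrete by mapping each scale type to the summary statistics that are meaningful on it. A sketch; the mapping follows the table above and the statistic names are the usual ones:

```python
# Which summary statistics are meaningful on each scale type (per Table 3).
PERMISSIBLE = {
    "nominal":      {"mode"},
    "ordinal":      {"mode", "median"},
    "interval":     {"mode", "median", "mean"},
    "proportional": {"mode", "median", "mean", "ratio"},
}

def allowed(scale: str, statistic: str) -> bool:
    """True when the statistic is meaningful for data on this scale."""
    return statistic in PERMISSIBLE[scale]

# The mean of ordinal exposure levels is not meaningful, but the median is.
print(allowed("ordinal", "mean"))    # False
print(allowed("ordinal", "median"))  # True
```

A check like this, built into an RAM tool, would prevent the misuse of scale types discussed in the next paragraphs.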
It is critical to ensure that scientific rigour is present in the measurement scales used in any RAM. Certain methods use qualitative data to which numerical values are assigned and on which statistical analysis is then carried out. Furthermore, these assigned values are used in mathematical calculations, which is rather problematic from a rigour point of view. The passage from qualitative data to quantitative data cannot be made without the support of a rigorous, validated framework. If not, the significance of the results and their precision are highly questionable; imagine, for example, an election survey with an unknown margin of error.
The majority of the methodologies analyzed use ordinal or interval scales. This type of measurement scale is not adapted to complex mathematical operations, but can be the subject of statistical analysis. The problem is that certain methodologies (Audicta, CRAMM and MÉHARI) carry out apparently complex mathematical operations which are not appropriate given the measurement scale used. Audicta and CRAMM seem to have methodological controls to manage this situation, but these are not well documented. In the case of ÉBIOS, the matrix approach avoids this problematic situation. OCTAVE is the only one found to use a proportional scale, allowing an optimal use of the data.
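Why arithmetic on ordinal scores is problematic can be shown in a few lines: two equally defensible numeric codings of the same low/medium/high ratings can reverse which asset looks riskier. The codings and ratings below are invented for illustration:

```python
# Two order-preserving numeric codings of the same ordinal ratings.
coding_a = {"low": 1, "medium": 2, "high": 3}
coding_b = {"low": 1, "medium": 2, "high": 9}   # equally defensible

asset_x = ["low", "low", "high"]                # ratings from 3 assessors
asset_y = ["medium", "medium", "medium"]

def score(ratings, coding):
    """Mean of the numeric codes: the kind of operation ordinal data
    does not actually support."""
    return sum(coding[r] for r in ratings) / len(ratings)

# Under coding A, asset Y looks riskier; under coding B, asset X does.
print(score(asset_x, coding_a), score(asset_y, coding_a))
print(score(asset_x, coding_b), score(asset_y, coding_b))
```

The ranking of the two assets depends entirely on an arbitrary coding choice, which is the core of the objection to complex calculations on ordinal scales.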
It is essential to question sampling in RAM. If an investigator meets a limited number, or sample, of subjects in an organization to obtain information allowing him to assign values to variables, it is essential that these subjects provide answers able to give a real and complete portrait of the situation: the sample must be representative of the whole organization or population which it represents. All the methodologies in this article use a nonprobabilistic sample determined by reasoned choice. What this means is that, in each case, the individuals who take part in the study, the sample, are chosen by the investigators who perform the Risk Analysis. Many selection biases are thereby introduced, dependent on human relations, availability, organizational and individual priorities and many other factors. Thus all the methodologies seem to use a sample whose representativeness cannot be determined. It is therefore dubious that the whole of the situation, as it exists in reality, can be expressed in the results of the RAM. Likewise, there are no controls, such as data saturation, to ensure that everything has been said about the situation under analysis by the individuals included in the sample. Here as well, additional study is required to determine the correct sample size for each RAM and how it compares with what is actually done.
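The effect of such selection bias can be simulated. A sketch with an entirely invented organization in which the IT department perceives higher risk than everyone else, comparing a convenience sample of IT staff with a random sample:

```python
import random

random.seed(42)

# Hypothetical organization: each employee holds a risk rating (1-5).
# The IT department perceives higher risk than the rest (invented figures).
population = [{"dept": "IT", "rating": random.choice([4, 5])}
              for _ in range(50)]
population += [{"dept": "other", "rating": random.choice([1, 2, 3])}
               for _ in range(450)]

def mean_rating(people):
    return sum(p["rating"] for p in people) / len(people)

# Convenience sample: the investigator interviews 7 easily reached IT staff.
convenience = [p for p in population if p["dept"] == "IT"][:7]
# Probability sample: 7 employees drawn at random from everyone.
probability = random.sample(population, 7)

print(f"population mean:         {mean_rating(population):.2f}")
print(f"convenience sample mean: {mean_rating(convenience):.2f}")
print(f"random sample mean:      {mean_rating(probability):.2f}")
```

The convenience sample systematically overstates the organization-wide risk perception, which is exactly the representativeness problem a reasoned-choice sample cannot rule out.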
The detailed table, available on the web (http://www.leger.ca/pages/articles/RAM.html), applies methodological controls used in the field of clinical research and taught to graduate students of the Faculty of Medicine of the University of Sherbrooke, as mentioned at the beginning of the article. The table makes it possible to have an outline of the various methodologies available in Quebec in 2006. As explained in the article, none of the methodologies studied meets all of the evaluation criteria.
Although it is difficult to show the superiority of one method over another, it is apparent that certain ones are methodologically more rigorous than others. The CRAMM, ÉBIOS and OCTAVE methods seem more rigorous than the rest. Méhari is in a second category of good methods which nevertheless require a solid framework (training, qualified consultants, external audit) to limit biases. Other methods are either unavailable, immature or unverifiable. However, any method used by a trained and qualified expert is likely to give results which have a certain value for an organization. A thorough investigation would be necessary to draw truly solid conclusions, which is not very probable considering the little attention rigour seems to receive from Information Security specialists.
Beucher, S., Reghezza, M., (2004) Les risques (CAPES Agrégation), Bréal
Blakley, B., McDermott, E., Geer, D.(2001), Session 5: less is more: Information security is information risk management, Proceedings of the 2001 workshop on New security paradigms, September 2001
Fortin, M.-F., Côté, J., Filion, F. (2006). Fondements et étapes du processus de recherche, Chenelière Éducation, Montréal (Québec), 485 pages
ISO (1999) Guide 51 security aspects, Guidelines for their inclusion in standards, International Standards Organization
ISO (2002) Guide 73, Management du risque, Vocabulaire, Principes directeurs pour l'utilisation dans les normes, International Standards Organization
Knight, Frank H. (1921) Risk, Uncertainty, and Profit, Boston, MA: Hart, Schaffner & Marx; Houghton Mifflin Company
Office québécois de la langue française (2005) Grand dictionnaire terminologique, en ligne: http://www.olf.gouv.qc.ca/ressources/gdt_bdl2.html
Whittemore, R., Chase, S., Mandle, C. (2001) Validity in Qualitative Research, Qualitative Health Research, Vol 11 No 4, pages 522-537
©December 2006, Marc-Andre Leger