Building Security Assurance in Open Infrastructures. Workshop on Public Safety And Security Research, Helsinki, February 2008. Bertrand MARQUET, Bell Labs, Alcatel-Lucent France, BUGYO & BUGYO Beyond Project Coordinator.
The problem: you cannot manage/improve what you cannot measure. The goal is to manage/improve security within an acceptable range of losses + costs. (Simplified chart to start addressing the problem: losses vs. costs, in euros, against security deployment.)
(Diagram: measurement provides evaluation and gives evidence of confidence that countermeasures have no exploitable vulnerabilities, and therefore minimize risks on services in operations.)
The BUGYO project
• Has proposed a solution to that problem by providing a framework to measure, maintain and document the security assurance of critical ICT-based service infrastructures
• Security assurance by evaluation of security, based on continuous measurement and monitoring
• We defined security assurance as: "confidence that a system meets its security objective and therefore is cost-effective to protect against potential attacks and threats"
Core result: a 6-step methodology
OFF LINE process:
• Model the service: decompose the service to identify assurance-critical components
• Select metrics: use the metric taxonomy as a checklist to assign normalized metrics
IN LINE process (with a feedback loop to improve):
• Measure: investigate the network by means of the selected metrics, on a component and system level, as modelled
• Aggregate: aggregate the metric results to derive an assurance level per component and for the service
• Evaluate: evaluate the assurance status of the service based on the aggregated values and initiate learning
• Monitor: monitor the assurance level for the service and provide comparison
BUGYO Methodology. Realisation of "Model the service": a dedicated security assurance model provides the means to express Telco-based service assurance needs.
Model the service: the goals
The main goals of the model are to:
• Describe the system under observation
• Reduce complexity
• Describe the assurance composition
• Document the results of measurements
• Enable mechanization of the creative part
Model the service: fundamental concepts
We use two concepts from systems theory:
• hierarchization, so that the system can be modelled at different levels of granularity
• black boxes, to represent complex infrastructure objects that can be further investigated on demand.
These two mechanisms allow a fast initial deployment that is stepwise refined during operation.
Model the service: the model elements
• The infrastructure object: an abstract Infrastructure Object, specialized into Managed Infrastructure Object and Unmanaged Infrastructure Object
• The metric: Metrics are attached to Managed Infrastructure Objects (a many-to-many relation)
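The model elements above can be sketched in code. This is a minimal illustration, assuming a simple tree of infrastructure objects; the class and field names are mine, not taken from the project deliverables.

```python
# Illustrative sketch of the BUGYO model elements (names are assumptions):
# an abstract infrastructure object, its managed/unmanaged specializations,
# and metrics attached to managed objects.
from dataclasses import dataclass, field
from typing import List

@dataclass
class Metric:
    name: str
    assurance_level: int = 0  # normalized AL in the 0..5 range

@dataclass
class InfrastructureObject:
    name: str
    # hierarchization: children give finer granularity on demand
    children: List["InfrastructureObject"] = field(default_factory=list)

@dataclass
class ManagedInfrastructureObject(InfrastructureObject):
    metrics: List[Metric] = field(default_factory=list)

@dataclass
class UnmanagedInfrastructureObject(InfrastructureObject):
    pass  # black box: no metrics attached, refined later if needed

# Hypothetical service decomposition
service = ManagedInfrastructureObject(
    "VoIP service",
    children=[
        ManagedInfrastructureObject("SBC", metrics=[Metric("scan", 2)]),
        UnmanagedInfrastructureObject("DSLAM"),
    ],
)
print(len(service.children))  # 2
```

The unmanaged object carries no metrics, mirroring the black-box idea: it contributes to the model's shape first, and can be opened up during operation.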
Model the service: choose the metrics
• The model mentions metrics, attached to infrastructure objects.
• These metrics are used to compute the assurance value at their level.
• The model makes it possible to aggregate the values in order to obtain a value at a higher level, up to the service level.
• But what are metrics?
BUGYO Methodology (2/5). Realisation of "Select metrics": a formalised process transforms measured raw data into a normalised assurance level.
How to build a normalized metric?
(Chart: confidence, up to full confidence, against assurance levels 0 to 5.)
• Security assurance is a measure of confidence
• Common Criteria:
• Scope: a broader scope gives more assurance
• Depth: more details investigated give more assurance
• Rigor: more formalism gives more assurance
• Other criteria:
• Quantity: more evidence gives greater confidence
• Timeliness: more recent versions are bound to find more problems
• Reliability: higher reliability of the collector gives better confidence
Security assurance taxonomy
We derived the following 5 security assurance levels:
• Level 1: rudimentary evidence for selected parts. Carrier/large-enterprise infrastructure service, basic security assurance.
• Level 2: regular informal evidence for important parts. Carrier/large-enterprise infrastructure service, medium security assurance.
• Level 3: frequent informal evidence for important parts. Carrier/large-enterprise infrastructure service, high assurance.
• Level 4: continuous informal evidence for large parts. Critical infrastructure service security assurance.
• Level 5: continuous semi-formal evidence for the entire system. Governmental/defence infrastructure service security assurance.
• An odd number of levels makes a medium level possible.
• The Common Criteria have 7 levels (but levels 1 and 7 make little sense in an operational context).
• Only 3 levels would not have provided enough granularity.
Assurance classes. For each family, the component level required at assurance levels 1 to 5:
CLASS SM: Service Model
• SM_VU, absence of relevant vulnerabilities: 1, 1, 2, 2, 3
• SM_OR, unmanaged/managed objects ratio: 1, 2, 2, 3, 4
CLASS MC: Metric Construction
• MC_SC, scope: 1, 2, 2, 3, 4
• MC_DE, depth: 1, 1, 2, 2, 3
• MC_RI, rigor: 1, 2, 2, 2, 3
• MC_RE, reliability of metric: 1, 2, 2, 2, 3
• MC_TI, timeliness: 1, 2, 3, 3, 3
• MC_FR, frequency: 1, 2, 3, 4, 4
• MC_SA, stability: 1, 2, 2, 2, 3
CLASS MM: Maintenance Management
• MM_PM, probe maintenance: 1, 1, 2, 2, 2
• MM_OM, infrastructure object model maintenance: 1, 1, 2, 2, 2
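Reading the table as "required component level per family at each target assurance level", it can be encoded as a simple lookup. This is a sketch under that assumption; the dictionary layout and the helper name are illustrative, not part of the BUGYO specification.

```python
# Illustrative encoding of the assurance-class table: for each family,
# the component level required at target assurance levels AL1..AL5.
FAMILY_LEVELS = {
    "SM_VU": (1, 1, 2, 2, 3),
    "SM_OR": (1, 2, 2, 3, 4),
    "MC_SC": (1, 2, 2, 3, 4),
    "MC_DE": (1, 1, 2, 2, 3),
    "MC_RI": (1, 2, 2, 2, 3),
    "MC_RE": (1, 2, 2, 2, 3),
    "MC_TI": (1, 2, 3, 3, 3),
    "MC_FR": (1, 2, 3, 4, 4),
    "MC_SA": (1, 2, 2, 2, 3),
    "MM_PM": (1, 1, 2, 2, 2),
    "MM_OM": (1, 1, 2, 2, 2),
}

def required_level(family: str, target_al: int) -> int:
    """Component level a family must reach for a target AL (1..5)."""
    return FAMILY_LEVELS[family][target_al - 1]

print(required_level("MC_FR", 4))  # 4
```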
Some examples of families and classes
• We followed the Common Criteria formalism to represent classes and families
• Families are embedded in classes
• Each family has a description, dependencies and its components
• For instance, the SM class (Service Model) refers to families dealing with the way the service is modelled:
• SM_VU, absence of relevant vulnerabilities: relevant vulnerabilities should not be present in the controlled system (components 1, 2, 3)
• SM_OR, unmanaged/managed object ratio: fewer unmanaged objects provide higher confidence in the assurance expression (components 1, 2, 3, 4)
How to concretely produce an assurance level
• At the infrastructure object's level, the metric is a process that gathers raw data from the observed system and derives a normalized assurance level, based on the taxonomy
• We decompose that process to help build metrics based on COTS tools:
• Measuring produces "raw data": a base measure (ISO 27004 definition)
• Interpreting the base measure produces a derived measure (ISO 27004 definition)
• Normalising the derived measure produces a normalized, discrete assurance level
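The three-stage decomposition above (measure, interpret, normalise) can be sketched as a small pipeline. The function names, the stubbed scanner data and the normalisation rule are all assumptions for illustration only.

```python
# Sketch of the three-stage metric process:
# measure -> base measure, interpret -> derived measure,
# normalise -> discrete assurance level.
from typing import Callable, List

def run_metric(measure: Callable[[], List[int]],
               interpret: Callable[[List[int]], float],
               normalise: Callable[[float], int]) -> int:
    base = measure()            # raw data from the observed system
    derived = interpret(base)   # e.g. a count or ratio (ISO 27004 derived measure)
    return normalise(derived)   # discrete assurance level

# Hypothetical example: count serious findings from a (stubbed) scanner.
al = run_metric(
    measure=lambda: [7, 3, 1],                          # stubbed severities
    interpret=lambda xs: sum(1 for x in xs if x >= 5),  # serious findings
    normalise=lambda n: 2 if n <= 2 else 1,             # assumed AL rule
)
print(al)  # 2
```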
Normalisation: example of transformation. Create an AL2-capable metric based on the Nessus tool.
• Scope:
• AL1: 2 domain plug-ins are used (enumeration of the plug-ins); 1 handcrafted probe is used
• AL2: 5 domain plug-ins are used (enumeration of the plug-ins); 1 handcrafted metric is used
• Timeliness:
• AL1: nothing
• AL2: the most recent plug-ins are installed (difference since last update); the latest version of the scanner is installed (difference in age); NMAP in its latest version is used
• Frequency:
• AL1: a scan is performed once a month
• AL2: a scan is performed once a week
• Result-specific:
• AL1: at most 4 serious and 10 non-serious vulnerabilities are found
• AL2: at most 2 serious and 5 non-serious vulnerabilities are found
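The result-specific part of this example maps vulnerability counts onto a level via thresholds. A minimal sketch, using the thresholds from the slide; the function name and the "0 for anything worse" fallback are my assumptions.

```python
# Sketch of the result-specific rule of the Nessus-based metric:
# map counts of serious/non-serious findings to an assurance level.
def result_al(serious: int, non_serious: int) -> int:
    """Return the AL supported by scan results (thresholds from the slide)."""
    if serious <= 2 and non_serious <= 5:
        return 2   # AL2: at most 2 serious and 5 non-serious findings
    if serious <= 4 and non_serious <= 10:
        return 1   # AL1: at most 4 serious and 10 non-serious findings
    return 0       # assumed fallback: too many findings, no assurance credit

print(result_al(1, 4))   # 2
print(result_al(3, 8))   # 1
print(result_al(6, 20))  # 0
```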
BUGYO Methodology. Realisation of "Measure" and "Aggregate": a multi-agent platform and a centralised server provide a measurement infrastructure (implemented metrics) and multiple aggregation algorithms.
Aggregation: algorithm comparison. Proposed operational aggregation algorithms:
• Min:
• Simplicity: very simple
• Ability to indicate changes: only below the min
• Main advantage: represents the exact assurance
• Main constraint: requires homogeneous metrics and distribution
• Max:
• Simplicity: very simple
• Ability to indicate changes: only above the max
• Main advantage: represents the best effort of the operator
• Main constraint: requires homogeneous metric levels and distribution
• Weighted sum:
• Simplicity: more complex, but does not require powerful computation
• Ability to indicate changes: reflects any change
• Main advantage: can monitor any minor change in the infrastructure
• Main constraint: needs rigorous weight assignment when building the model
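The three operators compared above can be sketched in a few lines. This is an illustration, not the project's implementation; in particular, how weights are chosen and normalized is assumed to be the caller's responsibility.

```python
# Sketch of the three proposed aggregation operators over the assurance
# levels of a component's children (or of the whole service).
from typing import Sequence

def aggregate_min(levels: Sequence[int]) -> int:
    """Exact assurance: the service is only as assured as its weakest part."""
    return min(levels)

def aggregate_max(levels: Sequence[int]) -> int:
    """Best effort: the highest level reached anywhere."""
    return max(levels)

def aggregate_weighted(levels: Sequence[float],
                       weights: Sequence[float]) -> float:
    """Weighted sum: sensitive to any minor change, but needs careful
    weight assignment when building the model."""
    total = sum(weights)
    return sum(l * w for l, w in zip(levels, weights)) / total

levels = [3, 2, 4]
print(aggregate_min(levels))                            # 2
print(aggregate_max(levels))                            # 4
print(aggregate_weighted(levels, [1, 2, 1]))            # 2.75
```

The contrast in the table shows up directly: min and max only move when the extremes move, while the weighted sum shifts whenever any input does.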
BUGYO Methodology. Realisation of "Aggregate": five levels of assurance (from AL1, rudimentary evidence for selected parts, to AL5, continuous semi-formal evidence for the entire system) express increasing confidence.
Evaluation
• The process of comparing the automatically computed real-time assurance levels against expected values (nominal versus actual value)
• Aims at supporting the decision maker
• Does not only compare the top level (i.e. the service assurance level); for more advanced analysis, it can rely on two kinds of evaluation rules:
• Thresholds: e.g. check that the assurance value of an IO of major interest does not fall below a predetermined value
• Complex: manage some "correlation" between interdependent IOs, by defining patterns (groups of IOs) that should follow some evolution rules. E.g. the mean of 2 IOs stays constant but the 2 values are diverging; e.g. smooth results over time to detect trends
• Based on those evaluation rules, raise alerts
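The two kinds of evaluation rules can be sketched as predicates that raise an alert when they fire. Function names, the divergence test and the tolerance value are illustrative assumptions, not the project's rule language.

```python
# Sketch of the two evaluation rule kinds described above.
from typing import Sequence

def threshold_alert(al: int, minimum: int) -> bool:
    """Threshold rule: alert when an IO of major interest
    falls below a predetermined value."""
    return al < minimum

def divergence_alert(series_a: Sequence[float], series_b: Sequence[float],
                     tolerance: float = 1.0) -> bool:
    """Complex rule (assumed formulation): alert when the mean of two
    interdependent IOs stays roughly constant while their values diverge."""
    gap_start = abs(series_a[0] - series_b[0])
    gap_end = abs(series_a[-1] - series_b[-1])
    mean_start = (series_a[0] + series_b[0]) / 2
    mean_end = (series_a[-1] + series_b[-1]) / 2
    return abs(mean_end - mean_start) < tolerance and gap_end > gap_start

print(threshold_alert(2, 3))                       # True: AL 2 under floor 3
print(divergence_alert([3, 3.5, 4], [3, 2.5, 2]))  # True: mean 3, values diverge
```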
BUGYO Methodology. Realisation of "Monitor": the security cockpit displays real-time assurance information.
Details on the cockpit
• Monitoring: real-time indication of the security assurance level, for the service and for infrastructure objects
• Measurement: detailed information, per metric, on infrastructure objects
• Assistance: support to maintain the assurance level, generate specific alarms, edit reports
Demonstrator scenario: VoIP architecture. The BUGYO demonstrator combines an IMS simulator with a real IPBX deployment. (Diagram: xDSL access via a DSLAM and LAN/WAP to an SBC, then an IMS core with P-CSCF, I-CSCF, HSS, DNS, AS and MRFC, over an IP network.)
Strategic aspects to address in ICT security & trust
• Tools to support risk & trust management
• Metrics and tools to measure and improve the effectiveness of infrastructure security
• A top-down approach for holistic security improvement
• Business-adjusted risk management vs. technology-driven improvement
• Replace nonstop crisis response with systematic security improvement
For more information, or to join the BUGYO BEYOND advisory board, contact me: bertrand.marquet@alcatel-lucent.fr