CISSP Domain 6 – Security Architecture and Models

NOTE: These notes have not been updated since I took the test many years ago.
For more up-to-date study material for the CISSP exam, I suggest buying the Shon Harris book.

Domain 6 – Security Architecture and Models

 

A security model is a statement that outlines the requirements necessary to properly support a certain security policy.

 

“Security is best if it is built into the foundation of operating systems and applications”

 

Computer Architecture

 

CPU: Contains ALU, Control Unit and primary storage.

 

Protection Rings: Operating system concept. Inner rings have the most privileges. Inner rings can directly access outer rings, but not vice versa. A typical arrangement might be:

 

Ring 0 : Operating system & Kernel

Ring 1 : Remaining parts of operating system

Ring 2 : I/O drivers and utilities

Ring 3 : Applications and programs.

 

MIT's MULTICS is an example of an operating system that uses this concept.
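
A minimal sketch of the ring rule in Python (the ring numbers and component names below are illustrative, not tied to any real operating system): a caller may invoke code in its own ring or an outer, less privileged ring, but never reach inward directly.

    # Illustrative protection-ring check: lower ring number = more privilege.
    RING_OF = {
        "kernel": 0,        # Ring 0: operating system kernel
        "os_services": 1,   # Ring 1: remaining parts of the OS
        "io_drivers": 2,    # Ring 2: I/O drivers and utilities
        "user_app": 3,      # Ring 3: applications
    }

    def may_call(caller: str, target: str) -> bool:
        """Inner (privileged) rings may reach outward; outer rings may not reach inward."""
        return RING_OF[caller] <= RING_OF[target]

    print(may_call("kernel", "user_app"))   # True  - ring 0 may touch ring 3
    print(may_call("user_app", "kernel"))   # False - ring 3 must go through a controlled gate (system call)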

 

Operating Systems

 

Operating systems can be in one of several states:

 

Ready           : Ready to resume processing

Supervisory   : Executing a highly privileged routine

Problem State: Executing an application (working on a problem)

Wait State     : Waiting for a specific event to complete or resource to become free.

 

An operating system implements security using many mechanisms, including:

 

  • Protection Rings
  • Mapping Memory
  • Implementing virtual machines
  • Working in different states
  • Assigning trust levels to each process

 

Process Vs Thread

 

A process is a program in execution with its own address space. Threads are pieces of code executed within a process.
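
A tiny Python illustration of the distinction (illustrative only): the threads below all execute inside one process and therefore share the same objects in that process's address space.

    import threading

    shared = []   # one object in the single address space of this process

    def worker(tag):
        shared.append(tag)   # every thread sees and updates the same list

    threads = [threading.Thread(target=worker, args=(i,)) for i in range(3)]
    for t in threads:
        t.start()
    for t in threads:
        t.join()

    print(sorted(shared))   # [0, 1, 2] - three threads, one shared address space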

 

Memory Addressing Modes

 

Register: Addresses registers within the CPU.

Direct: Actual addresses. Usually limited to current memory page.

Absolute: Addresses all of primary memory space; any memory address can be referenced directly.

Indexed: The address in the program's instruction is added to the contents of a memory (index) register.

Implied: Operations internal to the processor, such as clearing of a carry bit.

Indirect: The address specified in the instruction contains the address where the actual data can be found.
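
A toy simulation of the memory-referencing modes above, with primary storage modeled as a Python list and the index register as a plain variable (register and implied modes act inside the CPU, so they are omitted here):

    memory = [3, 5, 7, 42, 99, 2]   # toy primary storage
    index_register = 2
    addr = 1                        # address field taken from the instruction

    direct   = memory[addr]                    # Direct: operand at the given address        -> 5
    indexed  = memory[addr + index_register]   # Indexed: address + index register -> cell 3 -> 42
    indirect = memory[memory[addr]]            # Indirect: cell 1 holds address 5 -> cell 5  -> 2

    print(direct, indexed, indirect)           # 5 42 2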

 

Processing Methods

 

Some methods to improve system performance at the hardware level are:

 

Pipelining: Overlapping the steps of different instructions to run close to concurrently.

 

CISC: Complex Instruction Set Computer. In earlier technologies, the “fetch” cycle was the slowest part. By packing several operations into each instruction, the number of fetches was reduced.

 

RISC: Reduced instruction set computer. Instructions are simple and require fewer clock cycles.

 

Scalar Processor: One instruction at a time.

 

Superscalar Processor: Enables concurrent execution of multiple instructions in the same pipeline stage.

 

Very Long Instruction Word (VLIW) Processor: A single instruction specifies more than one concurrent operation.

 

 

 

 

System (Security) Architecture

 

Some of the biggest questions in systems security architecture are:

 

  • Where should the protection take place? User’s end? Where the data is stored? Restricting user activities?

 

  • At which layers should mechanisms be implemented? Hardware, kernel, o/s, services or program layers?

 

“The more complex a security mechanism becomes, the less assurance it provides” – the functionality vs. assurance trade-off.

 

  • What system mechanisms need to be trusted? How can these entities interact in a secure manner?

 

Some important terms and definitions relating to system architecture are:

 

Trusted Computing Base (TCB): The combination of protection mechanisms within a system, including hardware, software and firmware. Not every part of a system needs to be trusted; the parts that are not trusted do not fall under the TCB.

 

Reference Monitor: Abstract system concept that mediates all access the subjects have to objects. This is a concept, not a component. In order for a reference monitor to be trusted:

 

  • It must be tamperproof (isolation)
  • It must be invoked for all access requests to an object – there must be no path that can bypass the reference monitor (completeness)
  • Must be small enough for thorough validation (verifiability)
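
A minimal sketch of the reference monitor idea (the subjects, objects and labels below are made up): every access request is funneled through one small decision function, which keeps it isolated, unavoidable, and small enough to verify.

    ORDER = ["unclassified", "confidential", "secret", "top-secret"]
    CLEARANCE = {"alice": "secret", "bob": "unclassified"}                   # example subjects
    CLASSIFICATION = {"payroll.txt": "secret", "menu.txt": "unclassified"}   # example objects

    def reference_monitor(subject, obj):
        """Single, verifiable decision point for every access request."""
        return ORDER.index(CLEARANCE[subject]) >= ORDER.index(CLASSIFICATION[obj])

    def access(subject, obj):
        if not reference_monitor(subject, obj):   # completeness: no code path skips this check
            raise PermissionError(f"{subject} may not access {obj}")
        return f"contents of {obj}"

    print(access("alice", "payroll.txt"))   # allowed
    print(access("bob", "menu.txt"))        # allowed
    # access("bob", "payroll.txt")          # would raise PermissionError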

 

Security Kernel: Hardware, firmware and software that fall under the TCB and implement the reference monitor concept. The security kernel is the core of the TCB. The security kernel must:

  • Mediate all access to objects in the system.
  • Be protected from modification
  • Be verified as correct.

 

Domains: Set of objects that a subject can access. Domains have to be identified, separated and strictly enforced.

 

Resource Isolation: Enables each subject and object to be uniquely identified, permissions and rights to be assigned independently, accountability to be enforceable and activities to be tracked precisely.

 

“Security policies that prevent information from flowing from a high security level to a lower security level are multilevel security policies.”

 

“The execution and memory space assigned to each process is called a protection domain.”

 

Virtual Machine Monitor: Each “machine” runs at a different security level.

 

 

Security Models:

 

“A model is a symbolic representation of a policy. It maps the desires of the policy into a set of rules to be followed by a computer system.”

 

“Security policy provides the abstract goals, the security model defines the dos and donts to achieve those goals”

 

Bell-LaPadula Model:

 

Developed by the military in the 1970s to address leakage of classified information. Its main goal is confidentiality. A system using the Bell-LaPadula model would be classified as a multilevel security system. Bell-LaPadula is a state machine model, and can also be categorized as an information flow model.

 

Simple Security Rule : Cannot read data at a higher level.

 

Star (*) Property: Cannot write data to a lower level. Confinement property.
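
The two rules can be written as simple comparisons over sensitivity levels. This is only a sketch, with assumed numeric levels (higher number = more sensitive):

    LEVELS = {"unclassified": 0, "confidential": 1, "secret": 2, "top-secret": 3}

    def blp_can_read(subject, obj):
        """Simple security rule: no read up."""
        return LEVELS[subject] >= LEVELS[obj]

    def blp_can_write(subject, obj):
        """*-property (confinement): no write down."""
        return LEVELS[subject] <= LEVELS[obj]

    print(blp_can_read("secret", "top-secret"))    # False - cannot read data at a higher level
    print(blp_can_write("secret", "confidential")) # False - cannot write data to a lower level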

 

Bell-LaPadula also uses a discretionary access control matrix to handle exceptions. This may allow a trusted subject to violate the *-property, but not its intent. For example, moving a low-sensitivity paragraph from a higher-level document into a low-sensitivity document is acceptable, but might require an override of the *-property.

 

Criticisms of Bell-LaPadula:

 

  • Only deals with confidentiality, not integrity.
  • Does not address access control management.
  • Does not address covert channels.
  • Does not address file sharing in more modern systems.
  • Secure state transition is not explicitly defined.
  • Only addresses multi-level security policy type.

 

 

Biba Model:

 

The Biba model is also a state machine model and is similar to Bell-LaPadula, except it addresses data integrity rather than data confidentiality. Data integrity is characterized by three goals:

 

  1. Protection from modification by unauthorized users.
  2. Protection from unauthorized modification by authorized users.
  3. Internally and externally consistent.

 

The following rules of the Biba model implement these goals:

 

Simple Integrity Axiom: No read down.

 

Star (*) Integrity Axiom: No writing up.

 

Subjects at one level of integrity cannot invoke an object or subject at a higher level of integrity.
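
Biba mirrors Bell-LaPadula with the comparisons reversed, because it protects integrity rather than confidentiality. A sketch with assumed integrity levels:

    INTEGRITY = {"low": 0, "medium": 1, "high": 2}

    def biba_can_read(subject, obj):
        """Simple integrity axiom: no read down (don't consume lower-integrity data)."""
        return INTEGRITY[subject] <= INTEGRITY[obj]

    def biba_can_write(subject, obj):
        """* integrity axiom: no write up (don't contaminate higher-integrity data)."""
        return INTEGRITY[subject] >= INTEGRITY[obj]

    print(biba_can_read("high", "low"))    # False - no read down
    print(biba_can_write("low", "high"))   # False - no write up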

 

 

Clark-Wilson Model:

 

The Clark-Wilson model takes a different approach to protecting integrity. Users cannot access objects directly, but must go through programs that control their access.

 

“Usually in an information flow model (Bell-LaPadula and Biba), information can flow from one level to another until a restricted operation is attempted. At this point, the system checks an access control matrix to see if the operation has been explicitly permitted”.

 

Unlike Biba, the Clark-Wilson model addresses all three integrity goals:

 

  • Preventing unauthorized users from making modifications.
  • Maintaining internal and external consistency
  • Preventing authorized users from making improper modifications.

 

The Clark-Wilson model defines the following terms:

 

Constrained Data Item (CDI): Data item whose integrity is to be protected.

 

Integrity Verification Procedure (IVP): Program that verifies integrity of CDI.

 

Transformation Procedure (TP): Program that changes a CDI from one valid state to another; users may modify CDIs only through TPs.

 

Unconstrained Data Item: Data outside of the control of the model. For example, input data.
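
A minimal sketch of the Clark-Wilson access triple (user -> TP -> CDI) using made-up account data: the balances are the CDIs, the transfer function is the only TP allowed to change them, and the IVP confirms they remain in a valid state.

    accounts = {"checking": 100, "savings": 250}   # CDIs: balances must never go negative

    def ivp_balances_valid(cdis):
        """IVP: verifies that the constrained data items are in a valid state."""
        return all(balance >= 0 for balance in cdis.values())

    def tp_transfer(cdis, src, dst, amount):
        """TP: the only well-formed way a user may modify the balances."""
        if amount <= 0 or cdis[src] < amount:
            raise ValueError("transfer rejected - would break integrity")
        cdis[src] -= amount
        cdis[dst] += amount

    tp_transfer(accounts, "checking", "savings", 40)
    print(accounts, ivp_balances_valid(accounts))   # {'checking': 60, 'savings': 290} True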

 

 

Information Flow Models:

 

Each object is assigned a security class or value. Information is constrained to flow only in the directions permitted by the security policy.

 

 

Security Modes of Operation

 

The “mode of operation” defines the security conditions under which the system actually functions:

 

Dedicated Security Mode: ALL users have the clearance and the “need to know” to all the data within the system.

 

System-High Security Mode: All users have clearance and authorization to access the information in the system, but not necessarily a need to know.

 

Compartmented Security Mode: All users have the clearance to all information on the system but might not have need to know and formal access approval. Users can access a compartment of data only.

 

Multilevel Security Mode: Permits two or more classification levels of information to be processed at the same time. Users do not have clearance for all of the information being processed.

 

Limited Access: Minimum user clearance is “not cleared” and the maximum data classification is “sensitive but unclassified”.

 

Controlled Access: Limited amount of trust placed on system hardware and software.

 

Trust vs. assurance – assurance implies a much deeper knowledge of the building process, etc.

 

 

Systems Evaluation Methods

 

 

The “Orange” Book:

 

The US Department of Defense developed TCSEC (Trusted Computer System Evaluation Criteria) to provide a graded classification for computer system security. The graded classification hierarchy is:

 

A – Verified Protection

B – Mandatory Protection

C – Discretionary Protection

D – Minimal Security

 

The evaluation criteria cover four main areas – security policy, accountability, assurance and documentation – which break down into seven specific requirements:

 

  1. Security policy – explicit, well defined, enforced by mechanisms in the system itself.
  2. Identification – individual subjects must be uniquely identified in the system.
  3. Labels – labels must be associated with individual objects.
  4. Documentation – test, design and specification documentation. User guides and manuals.
  5. Accountability – audit data is captured and protected. Relies on identification.
  6. Life Cycle Assurance – Software, hardware and firmware can be tested individually to ensure that each enforces security policy.
  7. Continuous Protection – Ongoing review and maintenance of the security.

 

Products for evaluation under TCSEC are submitted to the National Computer Security Center (NCSC). The Trusted Products Evaluation Program (TPEP) puts successfully evaluated products on the Evaluated Product List – EPL.

 

TCSEC Ratings:

 

Division D : Minimal Protection

 

All systems that fail to meet the requirements of the higher divisions fall under this category.

 

Division C : Discretionary Protection

 

C1: Discretionary Security Protection

 

  • Based on individuals and/or groups.
  • Identification and authorization of individual entities.
  • Protected execution domain for privileged processes.
  • Design, test and user documentation – life cycle assurance through security testing.
  • Documentation to include a Security Features User's Guide and a Trusted Facility Manual.

 

C2: Controlled Access Protection

 

  • Security relevant events are audited.
  • Object reuse concept must be invoked (including memory)
  • Strict login procedures.

 

The C2 rating is the most reasonable class for commercial applications.

 

Division B : Mandatory Protection

 

Division B enforces mandatory protection by the use of security labels and the reference monitor concept.

 

            B1 : Labeled Security

 

  • Each object must have a classification label and each user must have a clearance label.
  • Security labels are mandatory in class “B”
  • Mandatory access control for a defined subset of subjects and objects.

 

B2 : Structured Protection

 

  • System design and implementation are subject to a more thorough review and testing process.
  • Well-defined interfaces between system layers.
  • Covert storage channels are addressed.
  • Trusted path required for login and authentication.
  • Separation of operator and administrator functions (trusted facility management)
  • Mandatory access control for all subjects and objects.
  • Formal policy model, structured design and configuration management.

 

            B3 : Security Domains

 

  • More granularity in each protection mechanism
  • Code not necessary to support the security policy is excluded from the system.
  • Reference monitor component must be small enough to be isolated and tested fully.
  • System fails to a secure state.

 

Division A : Verified Protection

 

Formal methods are used to ensure that all subjects and objects are controlled.

 

A1 : Verified Design

 

  • Features and architecture are not much different from B3; the difference is in the development process.
  • Assurance is higher because the formality in the way the system was designed and built is much higher.
  • Stringent change and configuration management.
  • Formal methods of covert channel analysis.

 

Summary of Ratings:

 

D – Minimal Protection

C – Discretionary protection

C1 : Discretionary Security Protection

C2 : Controlled Access Protection

B – Mandatory Protection

B1 : Labeled Security

B2 : Structured Protection

B3 : Security Domains

A – Verified Protection

A1 : Verified Design

 

The Red Book:

 

TNI (Trusted Network Interpretation). The Red Book is an interpretation of the Orange Book for networks and network components. The Red Book TNI ratings are:

 

  • None
  • C1 – Minimum
  • C2 – Fair
  • B2 – Good

 

 

DITSCAP

 

Defense Information Technology Security Certification and Accreditation Process. Has 4 phases:

 

  1. Definition
  2. Verification
  3. Validation
  4. Post Accreditation

 

NIACAP

 

National Information Assurance Certification and Accreditation Process. Has several types of accreditation:

 

Site Accreditation: Applications and systems at a self-contained location.

 

Type Accreditation: An application or system distributed to a number of different locations.

 

System Accreditation: Major application or general support system.

 

CIAP

 

Commercial Information Security Assessment Process – in development.

 

 

ITSEC – Information Technology Security Evaluation Criteria

 

These evaluation criteria are used in Europe. Two main elements of a system are evaluated by both ITSEC and TCSEC: functionality and assurance.

 

Two systems with the same functionality can have different assurance levels. ITSEC separates these two elements and rates them separately. In ITSEC, F1 to F10 rate the functionality and E0 through E6 rate the assurance:

 

ITSEC            TCSEC

E0               D
F1 + E1          C1
F2 + E2          C2
F3 + E3          B1
F4 + E4          B2
F5 + E5          B3
F5 + E6          A1
F6               High integrity
F7               High availability
F8               Data integrity during communication
F9               High confidentiality (encryption)
F10              Networks with high demands on confidentiality and integrity

 

Security products or systems are referred to as TOE – Target of Evaluation.

 

10 functionality classes – F

8 Assurance Levels   – Q

7 correctness levels  – E

 

The ITSEC assurance classes are:

 

E0 : Inadequate assurance to qualify for E1.

E1 : Informal definition of TOE architectural design. TOE satisfies functional testing.

E2 : E1 + informal description of detailed design. Configuration control and approved distribution procedure.

E3 : E2 + source code and/or drawing have been evaluated.

E4 : E3 + a formal model of security policy.

E5 : E4 + close correspondence between detailed design and source code/drawings.

E6 : E5 + Formal specification of security enforcing functions. Consistency with formal security policy model.

 

CTCPEC

 

Canadian Trusted Computer Product Evaluation Criteria

 

 

COMMON CRITERIA

 

International standard evaluation criteria, initiated by ISO in 1990 and started in 1993.

 

One specific set of classifications, internationally recognized. Evaluates a product against a “protection profile” which is structured to address specific security problems.

 

A product is assigned an EAL – Evaluation Assurance Level – ranging from EAL1 to EAL7.

 

Similar to other criteria, the Common Criteria answers two basic questions:

 

  1. What does it do? (functionality)
  2. How sure are you? (assurance)

 

The protection profile contains:

 

  • Descriptive elements
  • Rationale
  • Functional Requirements
  • Development Assurance Requirements
  • Evaluation Assurance Requirements

 

Certification: Technical evaluation of security components and their compliance for the purpose of accreditation.

 

Accreditation: Formal acceptance of the system’s overall adequacy by management. Based partly on certification information.

 

 

SSE-CMM : System Security Engineering, Capability Maturity Model

 

Based on the premise “If you can guarantee the quality of the processes that are used by an organization, you can guarantee the quality of the products and services generated by those processes.”

 

Two dimensions are used to measure the capability of an organization to perform specific activities. The two dimensions are domain and capability.

 

Domain: All the practices that collectively define security engineering.

 

Base practices: Related base practices are grouped into Process Areas (PAs)

 

Capability: Practices that indicate process management and institutionalization capability. Generic Practices (GPs).

 

 

The GPs represent activities that should be performed as part of BPs.

 

In the domain dimension, SSE-CMM defined 11 security engineering process areas and 11 administrative process areas.

 

 

 

Threats:

 

Some threats to security models and architectures are:

 

Covert Channels: Information flow not controlled by the security mechanism.

 

  • Covert timing channel
  • Covert storage channel.

 

Addressed in TCSEC B2 and higher.
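
A deliberately simple illustration of a covert storage channel: two processes with no authorized communication path agree to signal one bit per interval through a shared storage attribute (here, whether a temp file exists). The file name is made up for the example.

    import os
    import tempfile

    SIGNAL = os.path.join(tempfile.gettempdir(), "innocuous.lock")   # shared storage attribute

    def sender_emit(bit):
        """High-sensitivity process encodes one bit in the mere existence of the file."""
        if bit:
            open(SIGNAL, "w").close()
        elif os.path.exists(SIGNAL):
            os.remove(SIGNAL)

    def receiver_read():
        """Low-sensitivity process recovers the bit without any data being 'sent' to it."""
        return 1 if os.path.exists(SIGNAL) else 0

    sender_emit(1)
    print(receiver_read())   # 1 - information leaked via a storage attribute, not a data channel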

 

Back Doors: Also known as maintenance hooks / trapdoors.

 

Timing Issues: Also known as an ‘asynchronous attack’. Deals with timing differences in the sequence of steps a system uses to complete a task.

 

Time of Check vs. Time of Use (TOC/TOU) – Also known as “race conditions”
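
The classic shape of a TOC/TOU flaw, shown as a sketch rather than a working exploit: the program checks a file and then uses it, and an attacker who swaps the file (for example by re-pointing a symlink) between those two steps wins the race.

    import os

    def insecure_read(path):
        # Time of check: the decision is made about the file as it exists right now.
        if os.access(path, os.R_OK):
            # ...window in which another process can replace `path`...
            # Time of use: the object opened here may no longer be the one that was checked.
            with open(path) as f:
                return f.read()
        raise PermissionError(path)

    # Mitigation sketch: open the file first and then validate the handle you actually
    # received (e.g. with os.fstat), so the check and the use refer to the same object.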

 

Buffer Overflows: “Smashing the stack”.

 

Each of these can lead to a violation of the system security policy.

 

 

Recovery Procedures:

 

The system must recover/restart from an error in a secure state – maintenance mode: access only by privileged users from privileged terminals.

 

Failsafe: Program execution terminated, system protected from compromise.

 

Failsoft (resilient): Non-critical processing is terminated.

 

Failover: Switching to duplicate (hot) backup in realtime.

 

Cold start: System cannot be restored to a known secure state.
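
A small sketch of the fail-safe idea in code (all names are illustrative): if the protection mechanism itself fails, the operation is terminated and access is denied by default, rather than continuing in an unknown state.

    def guarded(operation, check):
        """Fail-safe wrapper: any fault in the security check denies the operation."""
        try:
            allowed = check()
        except Exception:
            allowed = False            # error in the mechanism -> fail to the secure state
        if not allowed:
            raise PermissionError("operation denied (fail-safe default)")
        return operation()

    # Usage (hypothetical helpers):
    # guarded(lambda: read_record(42), lambda: reference_monitor("alice", "records"))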

 

