Computer security is a branch of technology known as information security as applied to computers.
The objectives of computer security vary and can include protecting
information from theft or corruption and preserving availability, as
defined in the security policy.
Computer security imposes requirements on computers that are
different from most system requirements because they often take the
form of constraints on what computers are not supposed to do. This
makes computer security particularly challenging because it is hard
enough just to make computer programs do everything they are designed
to do correctly. Furthermore, negative requirements are deceptively
complicated to satisfy and require exhaustive testing to verify, which
is impractical for most computer programs. Computer security provides a
technical strategy to convert negative requirements to positive
enforceable rules. For this reason, computer security is often more
technical and mathematical than some computer science fields.
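For example, a denylist ("reject known-bad commands") tries to satisfy a negative requirement directly and can never be shown complete, while an allowlist turns it into a positive, enforceable rule. Below is a minimal C sketch of the allowlist approach; the command names are hypothetical:

```c
#include <stdio.h>
#include <string.h>

/* Positive, enforceable rule: permit only commands on this allowlist.
 * A denylist would instead try to enumerate everything "bad", a
 * negative requirement that can never be tested exhaustively.
 * The command names are hypothetical examples. */
static const char *allowed[] = { "status", "list", "help" };

static int is_permitted(const char *cmd)
{
    for (size_t i = 0; i < sizeof(allowed) / sizeof(allowed[0]); i++)
        if (strcmp(cmd, allowed[i]) == 0)
            return 1;
    return 0;  /* anything not explicitly allowed is denied */
}

int main(void)
{
    const char *requests[] = { "status", "shutdown", "help" };
    for (size_t i = 0; i < 3; i++)
        printf("%-8s -> %s\n", requests[i],
               is_permitted(requests[i]) ? "permitted" : "denied");
    return 0;
}
```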
Typical approaches to improving computer security (in approximate order of strength) can include the following:
- Physically limiting access to computers to only those who will not compromise security.
- Hardware mechanisms that impose rules on computer programs, thus avoiding depending on computer programs for computer security.
- Operating system mechanisms that impose rules on programs to avoid trusting computer programs.
- Programming strategies to make computer programs dependable and resist subversion.
1. Secure Operating Systems
One use of the term computer security refers to technology to implement a secure operating system.
Much of this technology is based on science developed in the 1980s and
used to produce what may be some of the most impenetrable operating
systems ever. Though still valid, the technology is in limited use
today, primarily because it imposes some changes to system management
and also because it is not widely understood. Such ultra-strong secure
operating systems are based on operating system kernel
technology that can guarantee that certain security policies are
absolutely enforced in an operating environment. An example of such a
computer security policy is the Bell-LaPadula model. The strategy is
based on a coupling of special microprocessor hardware features, often
involving the memory management unit, to a special, correctly
implemented operating system kernel. This forms
the foundation for a secure operating system which, if certain critical
parts are designed and implemented correctly, can ensure the absolute
impossibility of penetration by hostile elements. This capability is
enabled because the configuration not only imposes a security policy,
but in theory completely protects itself from corruption. Ordinary
operating systems, on the other hand, lack the features that assure
this maximal level of security. The design methodology to produce such
secure systems is precise, deterministic and logical.
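As a much-simplified illustration (not how a real kernel implements it), the Bell-LaPadula model reduces to two checks: a subject may not read above its own level and may not write below it. This C sketch assumes a simple linear ordering of sensitivity levels, whereas the full model uses a lattice of levels and categories:

```c
#include <stdio.h>

/* Linearly ordered sensitivity levels (a simplification: the full
 * Bell-LaPadula model uses a lattice of levels and categories). */
typedef enum { UNCLASSIFIED, CONFIDENTIAL, SECRET, TOP_SECRET } level_t;

/* Simple security property: a subject may read an object only if the
 * subject's level dominates the object's level ("no read up"). */
static int may_read(level_t subject, level_t object)
{
    return subject >= object;
}

/* *-property: a subject may write an object only if the object's level
 * dominates the subject's level ("no write down"). */
static int may_write(level_t subject, level_t object)
{
    return subject <= object;
}

int main(void)
{
    level_t user = SECRET;
    printf("SECRET user read TOP_SECRET file:    %s\n",
           may_read(user, TOP_SECRET) ? "allowed" : "denied");
    printf("SECRET user write UNCLASSIFIED file: %s\n",
           may_write(user, UNCLASSIFIED) ? "allowed" : "denied");
    return 0;
}
```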
Systems designed with such methodology represent the state of the
art of computer security although products using such security are not
widely known. In sharp contrast to most kinds of software, they meet
specifications with verifiable certainty comparable to specifications
for size, weight and power. Secure operating systems designed this way
are used primarily to protect national security information, military
secrets, and the data of international financial institutions. These
are very powerful security tools and very few secure operating systems
have been certified at the highest level (Orange Book
A1) to operate over the range of "Top Secret" to "unclassified"
(including Honeywell SCOMP, USAF SACDIN, NSA Blacker, and Boeing MLS
LAN). The assurance of security depends not only on the soundness of
the design strategy, but also on the assurance of correctness of the
implementation, and therefore there are degrees of security strength
defined for COMPUSEC. The Common Criteria
quantifies security strength of products in terms of two components,
security functionality and assurance level (such as EAL levels), and
these are specified in a Protection Profile for requirements and a Security Target
for product descriptions. None of these ultra-high-assurance,
general-purpose secure operating systems has been produced in decades
or certified under the Common Criteria.
In USA parlance, the term High Assurance usually suggests the system
has the right security functions that are implemented robustly enough
to protect DoD and DoE classified information. Medium assurance
suggests it can protect less valuable information, such as income tax
information. Secure operating systems designed to meet medium
robustness levels of security functionality and assurance have seen
wider use within both government and commercial markets. Medium-robust
systems may provide the same security functions as high-assurance
secure operating systems but do so at a lower assurance level (such as
Common Criteria levels EAL4 or EAL5). Lower levels mean we can be less
certain that the security functions are implemented flawlessly, and
they are therefore less dependable. These systems are found in use on web
servers, guards, database servers, and management hosts and are used
not only to protect the data stored on these systems but also to
provide a high level of protection for network connections and routing
services.
2. Security Architecture
Security architecture can be defined as the design artifacts that
describe how the security controls (security countermeasures) are
positioned and how they relate to the overall information technology
architecture. These controls serve the purpose of maintaining the
system's quality attributes, among them confidentiality, integrity,
availability, accountability, and assurance. [1]
In simpler words, a security architecture is the plan that shows where
security measures need to be placed. If the plan describes a specific
solution, then prior to building it one would perform a risk analysis.
If the plan describes a generic high-level design (reference
architecture), then the plan should be based on a threat analysis.
3. Security by Design
The technologies of computer security are based on logic.
There is no universal standard notion of what secure behavior is.
"Security" is a concept that is unique to each situation. Security is
extraneous to the function of a computer application, rather than
ancillary to it, thus security necessarily imposes restrictions on the
application's behavior.
There are several approaches to security in computing; sometimes a
combination of approaches is valid:
- Trust all the software to abide by a security policy but the software is not trustworthy (this is computer insecurity).
- Trust all the software to abide by a security policy and the software is validated as trustworthy (by tedious branch and path analysis, for example).
- Trust no software but enforce a security policy with mechanisms that are not trustworthy (again, this is computer insecurity).
- Trust no software but enforce a security policy with trustworthy mechanisms.
Many systems have unintentionally resulted in the first possibility.
Since approach two is expensive and non-deterministic, its use is very
limited. Approaches one and three lead to failure. Because approach
number four is often based on hardware mechanisms and avoids
abstractions and a multiplicity of degrees of freedom, it is more
practical. Combinations of approaches two and four are often used in a
layered architecture with thin layers of two and thick layers of four.
There are myriad strategies and techniques used to design security
systems. There are few, if any, effective strategies to enhance
security after design.
One technique enforces the principle of least privilege to a great extent: an entity has only the privileges that are needed for its function. That way, even if an attacker gains access to one part of the system, fine-grained security ensures that it is just as difficult for them to access the rest. A common instance of the principle is sketched below.
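A common POSIX instance of least privilege: a process performs its single privileged operation and then permanently drops root privileges. This is a minimal sketch assuming a conventional unprivileged account named "nobody"; a real server would also clear supplementary groups with setgroups(2):

```c
#include <stdio.h>
#include <stdlib.h>
#include <unistd.h>
#include <pwd.h>
#include <sys/types.h>

/* Least-privilege pattern on POSIX: do the one operation that needs
 * root (e.g. binding a low port), then drop to an unprivileged account
 * for the rest of the process's lifetime. "nobody" is a conventional
 * unprivileged account; adjust as needed. */
int main(void)
{
    /* ... privileged setup (e.g. bind(2) to port 80) would go here ... */

    struct passwd *pw = getpwnam("nobody");
    if (pw == NULL) {
        fprintf(stderr, "no such user\n");
        return EXIT_FAILURE;
    }

    /* Order matters: drop the group first, then the user id;
     * once setuid() succeeds, root privileges cannot be regained. */
    if (setgid(pw->pw_gid) != 0 || setuid(pw->pw_uid) != 0) {
        perror("failed to drop privileges");
        return EXIT_FAILURE;  /* fail secure: refuse to continue as root */
    }

    printf("now running as uid %d\n", (int)getuid());
    /* ... unprivileged work continues here ... */
    return EXIT_SUCCESS;
}
```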
Furthermore, by breaking the system up into smaller components, the
complexity of individual components is reduced, opening up the
possibility of using techniques such as automated theorem proving to prove the correctness of crucial software subsystems. This enables a closed form solution
to security that works well when only a single well-characterized
property can be isolated as critical, and that property is also
amenable to mathematical analysis. Not surprisingly, this is impractical for generalized
correctness, which probably cannot even be defined, much less proven.
Where formal correctness proofs are not possible, rigorous use of code review and unit testing represent a best-effort approach to make modules secure.
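When the one critical property has a small, bounded input domain, exhaustive testing can approach the certainty of a proof. Here is a minimal C sketch that checks a single well-characterized property, that a saturating adder never wraps around, over its entire 8-bit input domain:

```c
#include <assert.h>
#include <stdio.h>
#include <stdint.h>

/* Saturating 8-bit addition: the single critical property is that the
 * result never wraps around, i.e. is never smaller than either input. */
static uint8_t sat_add(uint8_t a, uint8_t b)
{
    unsigned sum = (unsigned)a + (unsigned)b;
    return sum > 255 ? 255 : (uint8_t)sum;
}

int main(void)
{
    /* The 8-bit domain is small enough to check every input pair,
     * so this one property is established with certainty. */
    for (unsigned a = 0; a <= 255; a++) {
        for (unsigned b = 0; b <= 255; b++) {
            uint8_t r = sat_add((uint8_t)a, (uint8_t)b);
            assert(r >= a && r >= b);  /* no wraparound, ever */
        }
    }
    printf("property holds for all 65536 input pairs\n");
    return 0;
}
```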
The design should use "defense in depth",
where more than one subsystem needs to be violated to compromise the
integrity of the system and the information it holds. Defense in depth
works when the breaching of one security measure does not provide a
platform to facilitate subverting another. Also, the cascading
principle acknowledges that several low hurdles do not make a high
hurdle, so cascading several weak mechanisms does not provide the
safety of a single stronger mechanism.
Subsystems should default to secure settings, and wherever possible
should be designed to "fail secure" rather than "fail insecure" (see fail safe
for the equivalent in safety engineering). Ideally, a secure system
should require a deliberate, conscious, knowledgeable and free decision
on the part of legitimate authorities in order to make it insecure.
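A minimal sketch of the "fail secure" principle in C: every error path in the authorization decision denies access, so a failure leaves the system closed rather than open. The policy-lookup function here is hypothetical:

```c
#include <stdio.h>
#include <stdbool.h>

/* Hypothetical policy lookup: returns 0 on success and stores the
 * decision, or -1 if the policy database cannot be consulted. */
static int lookup_policy(const char *user, bool *allowed)
{
    (void)user;
    (void)allowed;
    return -1;  /* simulate a failure: policy store unreachable */
}

/* Fail secure: if anything goes wrong while deciding, deny.
 * A "fail insecure" variant would default to granting access. */
static bool access_permitted(const char *user)
{
    bool allowed = false;          /* default deny */
    if (lookup_policy(user, &allowed) != 0)
        return false;              /* error path also denies */
    return allowed;
}

int main(void)
{
    printf("access for alice: %s\n",
           access_permitted("alice") ? "granted" : "denied");
    return 0;
}
```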
In addition, security should not be an all or nothing issue. The
designers and operators of systems should assume that security breaches
are inevitable. Full audit trails
should be kept of system activity, so that when a security breach
occurs, the mechanism and extent of the breach can be determined.
Storing audit trails remotely, where they can only be appended to, can
keep intruders from covering their tracks. Finally, full disclosure helps to ensure that when bugs are found the "window of vulnerability" is kept as short as possible.
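As a minimal local illustration of an append-only audit trail: on POSIX systems, O_APPEND guarantees every write lands at the end of the file, so existing entries are never overwritten. A genuinely tamper-resistant trail would additionally be shipped to a remote, append-only store; the file name here is just an example:

```c
#include <stdio.h>
#include <fcntl.h>
#include <unistd.h>
#include <time.h>

/* Append-only audit logging: O_APPEND ensures writes go to the end of
 * the file, so earlier entries cannot be overwritten through this fd.
 * (A privileged intruder can still truncate a local file, which is why
 * the text recommends a remote, append-only store.) */
int audit_log(const char *event)
{
    int fd = open("audit.log", O_WRONLY | O_CREAT | O_APPEND, 0600);
    if (fd < 0)
        return -1;

    char entry[256];
    time_t now = time(NULL);
    int n = snprintf(entry, sizeof(entry), "%ld %s\n", (long)now, event);
    if (n < 0 || (size_t)n >= sizeof(entry)) {
        close(fd);
        return -1;                 /* refuse to log a truncated entry */
    }

    ssize_t written = write(fd, entry, (size_t)n);
    close(fd);
    return written == n ? 0 : -1;
}

int main(void)
{
    return audit_log("user alice logged in") == 0 ? 0 : 1;
}
```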
Early history of security by design
The early Multics
operating system was notable for its early emphasis on computer
security by design, and Multics was possibly the very first operating
system to be designed as a secure system from the ground up. In spite
of this, Multics' security was broken, not once, but repeatedly. The
strategy was known as 'penetrate and test' and has become widely known
as a non-terminating process that fails to produce computer security.
This led to further work on computer security that prefigured modern security engineering techniques producing closed form processes that terminate.
4. Secure Coding
Seacord, "Secure Coding in C and C++"
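The cited reference treats, among other things, classic memory-safety defects in C. As an illustrative sketch (not drawn from the book itself), compare an unchecked string copy with a bounds-checked one:

```c
#include <stdio.h>
#include <string.h>

#define NAME_MAX_LEN 16

/* UNSAFE: strcpy performs no bounds check, so attacker-controlled
 * input longer than the buffer overflows it:
 *     char name[NAME_MAX_LEN];
 *     strcpy(name, input);        // classic buffer overflow
 */

/* Safer: check the length explicitly and reject oversized input
 * rather than silently truncating it. */
static int set_name(char *dst, size_t dst_size, const char *src)
{
    if (strlen(src) >= dst_size)
        return -1;                 /* fail secure: refuse, don't overflow */
    strcpy(dst, src);              /* now known to be within bounds */
    return 0;
}

int main(void)
{
    char name[NAME_MAX_LEN];
    const char *input = "a-very-long-user-supplied-string";
    if (set_name(name, sizeof(name), input) != 0)
        fprintf(stderr, "input rejected: too long\n");
    return 0;
}
```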
5. Capabilities vs. ACLs
Within computer systems, the two fundamental means of enforcing privilege separation are access control lists (ACLs) and capabilities. The semantics of ACLs have been proven to be insecure in many situations (e.g., the confused deputy problem).
It has also been shown that ACLs' promise of giving access to an object
to only one person can never be guaranteed in practice. Both of these
problems are resolved by capabilities. This does not mean practical
flaws exist in all ACL-based systems, but only that the designers of
certain utilities must take responsibility to ensure that they do not
introduce flaws.
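One way to see the difference on POSIX systems: checking a pathname against the ACLs with access(2) and then opening it is the classic confused-deputy and time-of-check-to-time-of-use trap, whereas an already-open file descriptor behaves like a capability, naming the object and proving the authority to use it in one unforgeable token. A minimal C sketch:

```c
#include <stdio.h>
#include <unistd.h>

/* ACL-style (confused-deputy prone): a privileged service is handed a
 * file NAME and checks it before use:
 *
 *     if (access(path, R_OK) == 0)   // check against one identity...
 *         fd = open(path, O_RDONLY); // ...open with another, and the
 *                                    // path may have changed in between
 *
 * Capability-style: the client opens the file itself and passes the
 * open descriptor. The descriptor both names the object and proves the
 * authority to use it, so the service has nothing left to get wrong. */
static long count_bytes(int fd)
{
    char buf[4096];
    long total = 0;
    ssize_t n;
    while ((n = read(fd, buf, sizeof(buf))) > 0)
        total += n;
    return n < 0 ? -1 : total;
}

int main(void)
{
    /* Here the "client" is the shell, which opened our standard input;
     * in practice a descriptor would arrive over a UNIX-domain socket. */
    long total = count_bytes(STDIN_FILENO);
    printf("read %ld bytes via a descriptor, not a path\n", total);
    return 0;
}
```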
Unfortunately, for various historical reasons, capabilities have been mostly restricted to research operating systems
and commercial OSs still use ACLs. Capabilities can, however, also be
implemented at the language level, leading to a style of programming
that is essentially a refinement of standard object-oriented design. An
open source project in the area is the E language.
First the Plessey System 250 and then the Cambridge CAP computer
demonstrated the use of capabilities, both in hardware and software, in
the 1970s, so this technology is hardly new. A reason for the lack of
adoption of capabilities may be that ACLs appeared to offer a 'quick
fix' for security without pervasive redesign of the operating system
and hardware.
The most secure computers are those not connected to the Internet
and shielded from any interference. In the real world, the greatest
security comes from operating systems where security is not an add-on,
such as OS/400 from IBM. It almost never shows up in lists of
vulnerabilities, for good reason: years may elapse between one problem
needing remediation and the next.
A good example of a secure system is EROS.
But see also the article on secure operating systems.
TrustedBSD is an example of an open source project with a goal, among other things, of building capability functionality into the FreeBSD operating system. Much of the work is already done.
6. Applications
Computer security is critical in almost any technology-driven
industry which operates on computer systems. Addressing the countless
vulnerabilities of computer-based systems is an integral part of
maintaining an operational industry. [3]
Lightning, power fluctuations, surges, brown-outs, blown fuses, and
various other power failures can instantly disable computer systems,
since they depend on an electrical source. Other accidental and
intentional faults have caused significant disruption of
safety-critical systems throughout the last few decades, and
dependence on reliable communication and electrical power only further
jeopardizes computer safety.
7. Terminology
The following terms used in engineering secure systems are explained below.
- Firewalls
can either be hardware devices or software programs. They provide some
protection from online intrusion, but since they allow some
applications (e.g. web browsers) to connect to the Internet, they don't
protect against some unpatched vulnerabilities in these applications
(e.g. lists of known unpatched holes from Secunia and SecurityFocus).
- Automated theorem proving and other verification tools can enable critical algorithms and code used in secure systems to be mathematically proven to meet their specifications.
- Thus simple microkernels can be written so that we can be sure they don't contain any bugs: e.g. EROS and Coyotos. A bigger OS, capable of providing a standard API like POSIX, can be built on a secure microkernel using small API servers running as normal programs. If one of these API servers has a bug, the kernel and the other servers are not affected: e.g. Hurd or Minix 3.
- Cryptographic techniques can be used to defend data in transit between systems, reducing the probability that data exchanged between systems can be intercepted or modified.
- Strong authentication techniques can be used to ensure that communication end-points are who they say they are.
- Secure cryptoprocessors can be used to leverage physical security techniques into protecting the security of the computer system.
- Chain of trust techniques can be used to attempt to ensure that all software loaded has been certified as authentic by the system's designers.
- Mandatory access control can be used to ensure that privileged access is withdrawn when privileges are revoked. For example, deleting a user account should also stop any processes that are running with that user's privileges (a sketch of this revocation follows the list).
- Capability and access control list techniques can be used to ensure privilege separation and mandatory access control.
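As a hedged, Linux-specific sketch of the revocation example in the mandatory access control item above (walking /proc is not portable, and a real system would use proper session teardown), the following program signals every process still running under a given uid; it must be run as root:

```c
#include <stdio.h>
#include <stdlib.h>
#include <dirent.h>
#include <signal.h>
#include <sys/types.h>
#include <sys/stat.h>

/* Withdraw a deleted user's remaining privileges by signalling every
 * process still running under that uid. Linux-specific: walks /proc
 * and checks the owner of each process directory. */
static void kill_user_processes(uid_t uid)
{
    DIR *proc = opendir("/proc");
    if (proc == NULL)
        return;

    struct dirent *de;
    while ((de = readdir(proc)) != NULL) {
        char *end;
        long pid = strtol(de->d_name, &end, 10);
        if (*end != '\0' || pid <= 0)
            continue;               /* not a process directory */

        char path[64];
        struct stat st;
        snprintf(path, sizeof(path), "/proc/%ld", pid);
        if (stat(path, &st) == 0 && st.st_uid == uid)
            kill((pid_t)pid, SIGKILL);   /* withdraw the privilege */
    }
    closedir(proc);
}

int main(int argc, char **argv)
{
    if (argc != 2) {
        fprintf(stderr, "usage: %s <uid>\n", argv[0]);
        return EXIT_FAILURE;
    }
    kill_user_processes((uid_t)strtoul(argv[1], NULL, 10));
    return EXIT_SUCCESS;
}
```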