Security design principles are considered whenever a security mechanism is designed for a system. These principles are reviewed in order to develop a secure system, one that prevents security flaws and blocks unwanted access.
Below is the list of fundamental security design principles provided by the National Centers of Academic Excellence in Information Assurance/Cyber Defense, along with the U.S. National Security Agency and the U.S. Department of Homeland Security.

Fundamental Security Design Principles

1. Economy of Mechanism
This principle states that the security measures implemented in software and hardware must be simple and small. This makes it easier to test the security measures thoroughly. If the security mechanism is complex, it is more likely that an attacker will find a weakness in the design to exploit: the simpler the design, the fewer opportunities there are to discover flaws, and the more complex the design, the greater the chance that it contains exploitable flaws. A simple security design is also easier to update or modify. In practice, however, economy of mechanism is difficult to uphold, because there is continuous demand for adding security features to both hardware and software, and constantly adding features makes the design complex. What we can do to obey this principle while designing a security mechanism is to eliminate the less important complex features.

2. Fail-safe Defaults
This principle says that when a user requests access to a mechanism, the decision to permit or deny should be based on explicit authorization rather than exclusion. By default, every mechanism should deny access; the function of the security mechanism is to identify the conditions under which access should be permitted. In other words, access to every mechanism is denied unless a privilege attribute is explicitly granted. This principle denies unauthorized access. Suppose a mistake is made while designing a mechanism that grants access based on explicit permission or authorization.
That mechanism fails by simply denying access, which is the safest outcome. By contrast, if a mistake is made in a mechanism that grants access based on exclusion, the mechanism fails by granting access, which cannot be considered a safe situation.

3. Complete Mediation
Some systems are designed to operate continuously, and such systems remember (cache) access decisions. This principle says that the system should not trust the access decisions it recovers from a cache; instead, there must be an access control mechanism that checks every access occurring on the system. However, this is an exhaustive approach and is rarely applied in full while designing a security mechanism.

4. Open Design
This principle suggests that the design of a security mechanism should be open to the public rather than kept secret. For example, in cryptographic algorithms the encryption key is kept secret while the encryption algorithm itself is open to public investigation. NIST (the National Institute of Standards and Technology) follows this principle when standardizing algorithms, because it helps the worldwide adoption of NIST-approved algorithms.

5. Separation of Privilege
This principle states that when a user tries to gain access to a system, access should not be granted on the basis of a single attribute or condition. Instead, multiple conditions or attributes should be verified before access is granted. This is also termed multifactor user authentication, since multiple techniques are implemented to authenticate a user. For example, an online money transfer may require a user ID, a password, and a transaction password, along with an OTP.
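Separation of privilege can be sketched in a few lines: access is granted only when every factor verifies independently. This is a minimal illustration; the stored values and factor names are hypothetical, and a real system would store salted password hashes rather than plaintext.

```python
import hmac

# Hypothetical stored credentials for one user (illustrative only).
STORED = {"user_id": "alice", "password": "s3cret", "otp": "491823"}

def authenticate(user_id: str, password: str, otp: str) -> bool:
    """Separation of privilege: every factor must verify; no single
    attribute is sufficient on its own."""
    checks = [
        hmac.compare_digest(user_id, STORED["user_id"]),
        hmac.compare_digest(password, STORED["password"]),
        hmac.compare_digest(otp, STORED["otp"]),
    ]
    return all(checks)

print(authenticate("alice", "s3cret", "491823"))  # True: all factors verified
print(authenticate("alice", "s3cret", "000000"))  # False: one failed factor denies access
```

`hmac.compare_digest` is used so that the comparisons run in constant time, which avoids leaking information through timing differences.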
6. Least Privilege
This principle states that each user should access the system with the fewest privileges possible: only those limited privileges essential to perform the desired task should be assigned. A common implementation of this principle is role-based access control. A role-based security mechanism defines the various roles of the users or processes, and each role is assigned the least set of privileges essential to perform its functions. The access control mechanism then enables for each role only those privileges for which it is authorized, and that least set of privileges defines which resources the role can access. In this way, unauthorized roles are unable to reach protected resources. For example, users querying a database may have the privilege only to retrieve data; they are not authorized to modify it.

7. Least Common Mechanism
Following this principle, there should be a minimum of functions shared between different users. Reducing shared mechanisms reduces the number of communication paths and therefore simplifies the hardware and software implementation. Ultimately, this principle reduces the threat of unwanted access to the system, because it becomes easier to verify whether a shared function is being misused.

8. Psychological Acceptability
This principle says that the security mechanisms designed to protect the system should not interfere with the user's work at every turn, as this would irritate the user, who may then disable the security mechanism entirely. It is therefore suggested that the security mechanism introduce minimal hurdles for the user, and that it not make it difficult for legitimate users to access the resources of the system.
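The role-based approach to least privilege can be sketched as a table mapping each role to only the privileges its function needs. The role and privilege names here are hypothetical.

```python
# Hypothetical roles, each granted only the privileges essential to its task.
ROLE_PRIVILEGES = {
    "analyst": {"select"},            # may only retrieve data
    "admin":   {"select", "update"},  # may retrieve and modify
}

def authorize(role: str, action: str) -> bool:
    """Least privilege: a role gets exactly its assigned set.
    Unknown roles get the empty set, so they are denied everything."""
    return action in ROLE_PRIVILEGES.get(role, set())

print(authorize("analyst", "select"))   # True: within the role's least set
print(authorize("analyst", "update"))   # False: beyond the role's privileges
print(authorize("intruder", "select"))  # False: unknown role, denied
```

Note that this table also embodies fail-safe defaults: anything not explicitly granted is denied.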
9. Isolation
This principle applies in three circumstances. First, systems holding critical data, processes, or resources must be isolated so as to restrict public access. This can be done in two ways: physical and logical isolation. In physical isolation, the system with critical information is separated from systems holding publicly accessible information. In logical isolation, layers of security services are established between the public systems and the critical systems. Second, the files or data of one user must be kept isolated from the files or data of other users. Modern operating systems provide this functionality: each user of the system has an isolated memory space, process space, and file space, along with mechanisms to prevent unwanted access. Third, the security mechanisms themselves must be isolated, so that they are protected from unwanted access or tampering.

10. Encapsulation
This principle is a form of isolation based on object-oriented design. The processes of the protected subsystem can access its data objects only internally, and those processes can be invoked only from designated domain entry points.

11. Modularity
This principle says that security mechanisms must be developed as separate, protected modules, using a modular architecture. This helps in updating a security mechanism independently, without modifying the entire system.

12. Layering
Multiple layers of security must be used in order to protect crucial information from an opponent. Applying multiple security layers presents multiple barriers to an adversary who tries to access the protected system.
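The encapsulation idea maps directly onto object-oriented code: data objects are reachable only through defined entry points. A minimal sketch, with a hypothetical account class:

```python
class Account:
    """Encapsulation sketch: the balance can be reached only through
    the designated entry points (deposit/balance), never directly."""

    def __init__(self) -> None:
        self.__balance = 0  # double underscore: name-mangled, hidden from outside code

    def deposit(self, amount: int) -> None:
        # Entry point enforces its own validity check.
        if amount <= 0:
            raise ValueError("deposit must be positive")
        self.__balance += amount

    def balance(self) -> int:
        return self.__balance

acct = Account()
acct.deposit(50)
print(acct.balance())               # 50, obtained via the entry point
print(hasattr(acct, "__balance"))   # False: the data object is not exposed
```

Python's name mangling is only a convention-level barrier, not true memory isolation, but it illustrates the principle: every access path runs through a controlled entry point.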
13. Least Astonishment
This principle states that the user interface of the system must not astonish the user while accessing the secure system: the user should be able to understand why the security mechanism is there and how it protects the system.

These, then, are the security design principles that should be considered when designing a security mechanism for a system.

In engineering, a fail-safe is a design feature or practice that, in the event of a specific type of failure, inherently responds in a way that will cause minimal or no harm to other equipment, to the environment, or to people. Unlike inherent safety to a particular hazard, a system being "fail-safe" does not mean that failure is impossible or improbable, but rather that the system's design prevents or mitigates unsafe consequences of the system's failure. That is, if and when a "fail-safe" system fails, it remains at least as safe as it was before the failure.[1][2] Since many types of failure are possible, failure mode and effects analysis is used to examine failure situations and recommend safety designs and procedures. Some systems can never be made fail-safe, because continuous availability is needed. Redundancy, fault tolerance, or contingency plans are used for these situations (e.g. multiple independently controlled and fuel-fed engines).[3]

Examples

Mechanical or physical
Examples include air brakes on railway trains, which are held off by air pressure so that a leak or disconnection applies the brakes, and dead man's switches, which stop a machine when the operator releases the control.
Electrical or electronic
Examples include fuses and circuit breakers, which open the circuit and remove power when current exceeds a safe level.
Procedural safety
As well as physical devices and systems, fail-safe procedures can be created, so that if a procedure is not carried out, or is carried out incorrectly, no dangerous action results. For example, a railway signal whose aspect cannot be positively confirmed is treated as showing danger, so trains stop by default.
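The same fail-safe behavior appears in control software: when an input cannot be read or validated, the controller drives its output to a predefined safe state rather than continuing. A minimal sketch, with a hypothetical heater controller; the reading format and safe state are illustrative assumptions.

```python
SAFE_STATE = "off"  # assumed safe state: heater de-energized

def read_temperature(raw: str) -> float:
    """Hypothetical sensor parser; raises ValueError on a garbled reading."""
    return float(raw)

def heater_command(raw_reading: str, setpoint: float = 20.0) -> str:
    try:
        temp = read_temperature(raw_reading)
    except ValueError:
        return SAFE_STATE  # sensor failure: fail safe, heater off
    return "on" if temp < setpoint else "off"

print(heater_command("18.5"))     # "on": normal operation below setpoint
print(heater_command("garbage"))  # "off": failure leaves the system safe
```

The key property is that the failure path does not depend on the failed component: the safe state is reached precisely because the normal logic is bypassed.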
Other terminology
Fail-safe (foolproof) devices are also known as poka-yoke devices. Poka-yoke, a Japanese term, was coined by Shigeo Shingo, a quality expert.[9][10] "Safe to fail" refers to civil engineering designs such as the Room for the River project in the Netherlands and the Thames Estuary 2100 Plan,[11][12] which incorporate flexible adaptation strategies or climate change adaptation to provide for, and limit, damage should severe events such as 500-year floods occur.[13]

Fail safe and fail secure
Fail-safe and fail-secure are distinct concepts. Fail-safe means that a device will not endanger lives or property when it fails. Fail-secure, also called fail-closed, means that access or data will not fall into the wrong hands in a security failure. Sometimes the two approaches suggest opposite solutions: for example, if a building catches fire, a fail-safe system would unlock the doors to ensure quick escape and allow firefighters inside, while a fail-secure system would lock the doors to prevent unauthorized access to the building. The opposite of fail-closed is called fail-open.

Fail active operational
Fail active operational can be implemented on systems that have a high degree of redundancy, so that a single failure of any part of the system can be tolerated (fail active operational) and a second failure can be detected, at which point the system will turn itself off (uncouple, fail passive). One way of accomplishing this is to install three identical systems along with control logic that detects discrepancies among them. Examples include many aircraft systems, among them inertial navigation systems and pitot tubes.

Failsafe point
During the Cold War, "failsafe point" was the term used for the point of no return for American Strategic Air Command nuclear bombers, just outside Soviet airspace.
In the event of receiving an attack order, the bombers were required to linger at the failsafe point and wait for a second, confirming order; until one was received, they would not arm their bombs or proceed further.[14] The design was intended to prevent any single failure of the American command system from causing nuclear war. This sense of the term entered the American popular lexicon with the publication of the 1962 novel Fail-Safe. (Other nuclear command and control systems have used the opposite scheme, fail-deadly, which requires continuous or regular proof that an enemy first-strike attack has not occurred in order to prevent the launching of a nuclear strike.)
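The triple-redundancy scheme described under fail active operational can be sketched as a simple voter: if at least two of three channels agree, the agreed value is used and a single bad channel is outvoted; if no pair agrees, the system shuts itself down. The channel values and tolerance here are illustrative assumptions, not any particular avionics design.

```python
def vote(a: float, b: float, c: float, tol: float = 0.01):
    """Fail active operational sketch: tolerate one failed channel,
    detect a second discrepancy and fail passive (return None)."""
    for x, y in [(a, b), (a, c), (b, c)]:
        if abs(x - y) <= tol:
            return (x + y) / 2  # majority value; the odd channel is outvoted
    return None  # no two channels agree: uncouple, fail passive

print(vote(100.0, 100.0, 250.0))  # 100.0: single failure tolerated
print(vote(100.0, 180.0, 250.0))  # None: discrepancy detected, shut down
```

Averaging the agreeing pair is one of several possible reconciliation choices; median selection is another common one.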