What is the principle of fail-safe defaults?

Security design principles are considered while designing any security mechanism for a system. These principles are reviewed to develop a secure system that prevents security flaws and blocks unwanted access to the system.


Below is the list of fundamental security design principles put forward by the National Centers of Academic Excellence in Information Assurance/Cyber Defense, a program jointly sponsored by the U.S. National Security Agency and the U.S. Department of Homeland Security.

Fundamental Security Design Principles

1. Economy of Mechanism

This fundamental security principle states that the security measures implemented in software and hardware must be simple and small, which makes it easier for testers to examine them thoroughly.

If the security mechanism is complex, it is more likely to contain design weaknesses that an attacker could exploit.

The simpler the design, the fewer opportunities there are for flaws to slip past testing; the more complex the design, the greater the chance that exploitable flaws remain hidden in it.

When the security design is simple, it is also easy to update or modify. In practice, however, economy of mechanism is difficult to follow strictly, because there is a continuous demand for adding security features to both hardware and software.

Constantly adding security features makes the design more complex. What we can do to respect this principle while designing a security mechanism is to eliminate the less important complex features.

2. Fail-safe Defaults

This principle says that when a user requests access to a mechanism, the decision to permit or deny that access should be based on explicit authorization rather than on exclusion.

By default, every mechanism should start with no access granted, and the job of the security mechanism is to identify the conditions under which access should be permitted. In other words, access to every mechanism is denied by default and is granted only when an explicit privilege or attribute says so.

This default denies unauthorized access. If a mistake is made while designing a mechanism that grants access based on explicit permission or authorization, the mechanism fails by simply denying access, which is the safest condition.

If a similar mistake is made in a mechanism that grants access based on exclusion, the mechanism fails by simply granting access, which cannot be considered a safe situation.
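To make the contrast concrete, the following minimal sketch (in Python, with a made-up permission table that is not part of the original text) grants access only when an explicit authorization exists; anything missing or failing results in denial, which is the fail-safe default.

```python
# Minimal sketch of a deny-by-default (fail-safe defaults) access check.
# The permission table and names are illustrative, not a real API.

PERMISSIONS = {
    ("alice", "report.pdf"): {"read"},
    ("bob", "report.pdf"): {"read", "write"},
}

def is_allowed(user: str, resource: str, action: str) -> bool:
    """Allow only when an explicit grant exists; everything else is denied."""
    try:
        return action in PERMISSIONS.get((user, resource), set())
    except Exception:
        # If the check itself breaks, fall back to the safe default: deny.
        return False

print(is_allowed("alice", "report.pdf", "write"))  # False: no explicit grant
print(is_allowed("carol", "report.pdf", "read"))   # False: unknown users are denied by default
```

A design based on exclusion would instead keep a list of blocked users and allow everyone else, so forgetting to list someone would silently grant access rather than deny it.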

3. Complete Mediation

Some systems are designed to operate continuously, and such systems tend to remember (cache) access decisions. There must nevertheless be an access control mechanism that checks every access occurring on the system.

This principle says that the system should not blindly trust access decisions it recovers from a cache; instead, there must be a mechanism in the system that validates each access through the access control mechanism.

However, this is an exhaustive approach and is rarely applied in full when designing a security mechanism.
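As an illustration only (the grant table and class names below are hypothetical), the sketch contrasts a checker that caches its first decision with one that performs complete mediation by consulting the live policy on every request; after a revocation, only the mediated checker denies access.

```python
# Sketch contrasting a cached access decision with complete mediation.
# The grant table is illustrative.

current_grants = {("alice", "db"): {"read"}}

class CachingChecker:
    """Remembers the first decision, so later revocations are missed."""
    def __init__(self):
        self._cache = {}
    def check(self, user, resource, action):
        key = (user, resource, action)
        if key not in self._cache:
            self._cache[key] = action in current_grants.get((user, resource), set())
        return self._cache[key]

class MediatedChecker:
    """Complete mediation: every request is checked against the live policy."""
    def check(self, user, resource, action):
        return action in current_grants.get((user, resource), set())

cached, mediated = CachingChecker(), MediatedChecker()
print(cached.check("alice", "db", "read"), mediated.check("alice", "db", "read"))  # True True
current_grants[("alice", "db")].clear()  # the read privilege is revoked
print(cached.check("alice", "db", "read"), mediated.check("alice", "db", "read"))  # True False
```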

4. Open Design

This security principle suggests that the design of a security mechanism should be open rather than secret. In cryptography, for example, the encryption key is kept secret while the encryption algorithm itself is open to public scrutiny.

NIST (the National Institute of Standards and Technology) follows this principle when standardizing algorithms, because open designs encourage worldwide adoption of NIST-approved algorithms.

5. Separation of Privilege

This security principle states that whenever a user tries to gain access to a system, the access should not be granted based on a single attribute or condition.

Instead, multiple conditions or attributes should be verified before access to the system is granted. In the context of authentication this is known as multi-factor authentication, since multiple techniques are used to verify the user.

For example, an online money transfer may require a user ID and password, a transaction password, and a one-time password (OTP), as sketched below.
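A minimal sketch of that idea, using made-up credentials and check functions rather than any real banking API: the transfer is authorized only when every independent factor verifies.

```python
# Sketch of separation of privilege: several independent checks must all pass.
# The stored credential values and function names are illustrative only.

import hmac

STORED = {"password": "s3cret", "txn_pin": "4821"}

def check_password(supplied: str) -> bool:
    return hmac.compare_digest(supplied, STORED["password"])

def check_txn_pin(supplied: str) -> bool:
    return hmac.compare_digest(supplied, STORED["txn_pin"])

def check_otp(supplied: str, expected: str) -> bool:
    return hmac.compare_digest(supplied, expected)

def authorize_transfer(password: str, txn_pin: str, otp: str, expected_otp: str) -> bool:
    """Grant the transfer only if every independent factor verifies."""
    return all((
        check_password(password),
        check_txn_pin(txn_pin),
        check_otp(otp, expected_otp),
    ))

print(authorize_transfer("s3cret", "4821", "990123", "990123"))  # True: all factors pass
print(authorize_transfer("s3cret", "0000", "990123", "990123"))  # False: one factor fails
```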

6. Least Privilege

The least privilege principle states that each user should access the system with the fewest privileges possible: only the limited privileges essential to perform the desired task should be assigned to the user.

A common way to implement this principle is role-based access control. The mechanism first identifies and describes the various roles of the users or processes.

Each role is then assigned the smallest set of privileges essential to perform its functions, and the access control mechanism enables for each role only the privileges for which it is authorized. The privilege set assigned to a role therefore determines which resources that role can access.

In this way, unauthorized roles cannot reach the protected resources. For example, users who query a database may hold only the privilege to retrieve data, not to modify it, as in the sketch below.
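The sketch below uses hypothetical role and permission names (they are not from the text) to show how each role receives only the privileges its task requires, so a reporting user can read but never modify data.

```python
# Sketch of role-based access control honouring least privilege.
# Role and permission names are illustrative.

ROLE_PERMISSIONS = {
    "report_viewer": {"select"},            # can only read data
    "data_entry":    {"select", "insert"},  # can add rows, but not alter existing ones
    "db_admin":      {"select", "insert", "update", "delete"},
}

USER_ROLES = {"dave": "report_viewer", "erin": "db_admin"}

def can(user: str, action: str) -> bool:
    """Unknown users or roles get no privileges at all."""
    role = USER_ROLES.get(user)
    return action in ROLE_PERMISSIONS.get(role, set())

print(can("dave", "select"))  # True: needed for the reporting task
print(can("dave", "update"))  # False: not essential, so never granted
```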

7. Least Common Mechanism

Following the least common mechanism principle, the number of functions shared between different users should be kept to a minimum. This reduces the number of communication paths and therefore simplifies the hardware and software implementation.

Ultimately this reduces the threat of unwanted access to the system, because it becomes easier to verify whether any shared function is being misused.

8. Psychological Acceptability

This security design principle says that the mechanisms designed to protect a system should not constantly interfere with the user's work.

Constant interference irritates users, and an irritated user may simply disable the security mechanism. The mechanism should therefore introduce as few hurdles as possible for the user of the system.

The security mechanism should not be designed such that it becomes difficult for the user to access the resources in the system.

9. Isolation

This security design principle is applied in three circumstances. First, a system that holds critical data, processes or resources must be isolated so that public access to it is restricted. This can be done in two ways.

These two ways are physical and logical isolation. In physical isolation, the system holding critical information is physically separated from systems holding publicly accessible information.

In logical isolation, layers of security services are established between the public systems and the critical systems.

Second, the files or data of one user must be kept isolated from the files or data of other users. Modern operating systems provide this functionality.

Each user of the system gets an isolated memory space, process space and file space, along with mechanisms to prevent unwanted access.

Third, the security mechanisms themselves must be isolated so that they are protected from unwanted access or tampering.

10. Encapsulation

This security design principle is a form of isolation based on object-oriented ideas: the processes of the protected subsystem can access its data objects only internally, and those processes can be invoked only through designated domain entry points.
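As an illustration of the idea (the class and method names below are invented for this sketch), the protected data object is reachable only through the object's designated entry points.

```python
# Sketch of encapsulation: the protected state can be manipulated only
# through the object's defined entry points. Names are illustrative.

class AuditLog:
    def __init__(self):
        self.__entries = []   # name-mangled attribute, not part of the public interface

    def append(self, actor: str, event: str) -> None:
        """The single entry point for adding a record."""
        self.__entries.append((actor, event))

    def count(self) -> int:
        """Read-only view; callers never touch the underlying list."""
        return len(self.__entries)

log = AuditLog()
log.append("alice", "login")
print(log.count())   # 1
# log.__entries      # would raise AttributeError: the internals are not exposed
```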

11. Modularity

This security design principle says that security mechanisms should be developed as separate, protected modules, and that the overall security architecture should itself be modular.

This principle makes it possible to update a security mechanism independently, without modifying the entire system.

12. Layering

Multiple security layers must be used to prevent an adversary from accessing crucial information. Each layer presents an additional barrier that an adversary must overcome to reach the protected system, as in the sketch below.
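A minimal sketch, assuming three hypothetical check functions standing in for real mechanisms (network filtering, authentication, authorization): a request is served only after passing every layer, so a weakness in one barrier does not by itself expose the protected resource.

```python
# Sketch of layered defences: each independent check is a separate barrier.
# The individual checks are illustrative stand-ins for real mechanisms.

def network_allowed(request):   return request.get("ip") == "10.0.0.5"
def authenticated(request):     return request.get("token") == "valid-token"
def authorized(request):        return request.get("role") == "admin"

LAYERS = (network_allowed, authenticated, authorized)

def handle(request: dict) -> str:
    for layer in LAYERS:
        if not layer(request):
            return "rejected"   # stop at the first barrier that is not cleared
    return "granted"

print(handle({"ip": "10.0.0.5", "token": "valid-token", "role": "admin"}))  # granted
print(handle({"ip": "10.0.0.5", "token": "valid-token", "role": "guest"}))  # rejected
```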

13. Least Astonishment

This security design principle states that the user interface of a secure system should not surprise its users: the system should behave the way the user expects, and the user should be able to understand why the security mechanism is essential to protecting the system.

These are the security design principles that should be considered while designing the security mechanism for a system. The remainder of this article looks at the broader engineering meaning of "fail-safe".

Fail-safe as a design feature or practice

In engineering, a fail-safe is a design feature or practice that in the event of a specific type of failure, inherently responds in a way that will cause minimal or no harm to other equipment, to the environment or to people. Unlike inherent safety to a particular hazard, a system being "fail-safe" does not mean that failure is impossible or improbable, but rather that the system's design prevents or mitigates unsafe consequences of the system's failure. That is, if and when a "fail-safe" system fails, it remains at least as safe as it was before the failure.[1][2] Since many types of failure are possible, failure mode and effects analysis is used to examine failure situations and recommend safety design and procedures.

Some systems can never be made fail-safe, as continuous availability is needed. Redundancy, fault tolerance, or contingency plans are used for these situations (e.g. multiple independently controlled and fuel-fed engines).[3]

Examples

Mechanical or physical


Globe control valve with pneumatic diaphragm actuator. Such a valve can be designed to fail to safety using spring pressure if the actuating air is lost.

Examples include:

  • Roller-shutter fire doors that are activated by building alarm systems or local smoke detectors must close automatically when signaled regardless of power. In case of power outage the coiling fire door does not need to close, but must be capable of automatic closing when given a signal from the building alarm systems or smoke detectors. A temperature-sensitive fusible link may be employed to hold the fire doors open against gravity or a closing spring. In case of fire, the link melts and releases the doors, and they close.
  • Some airport baggage carts require that the person hold down a given cart's handbrake switch at all times; if the handbrake switch is released, the brake will activate, and assuming that all other portions of the braking system are working properly, the cart will stop. The handbrake-holding requirement thus both operates according to the principles of "fail-safety" and contributes to (but does not necessarily ensure) the fail-security of the system. This is an example of a dead man's switch.
  • Lawnmowers and snow blowers have a hand-closed lever that must be held down at all times. If it is released, it stops the blade's or rotor's rotation. This also functions as a dead man's switch.
  • Air brakes on railway trains and air brakes on trucks. The brakes are held in the "off" position by air pressure created in the brake system. Should a brake line split, or a carriage become de-coupled, the air pressure will be lost and the brakes applied, by springs in the case of trucks, or by a local air reservoir in trains. It is impossible to drive a truck with a serious leak in the air brake system. (Trucks may also employ wig wags to indicate low air pressure.)
  • Motorized gates – In case of power outage the gate can be pushed open by hand with no crank or key required. However, as this would allow virtually anyone to go through the gate, a fail-secure design is used: In a power outage, the gate can only be opened by a hand crank that is usually kept in a safe area or under lock and key. When such a gate provides vehicle access to homes, a fail-safe design is used, where the door opens to allow fire department access.
  • Safety valves – Various devices that operate with fluids use fuses or safety valves as fail-safe mechanisms.


Railway semaphore signals. "Stop" or "caution" is a horizontal arm, "Clear to Proceed" is 45 degrees upwards, so failure of the actuating cable releases the signal arm to safety under gravity.

  • A railway semaphore signal is specially designed so that, should the cable controlling the signal break, the arm returns to the "danger" position, preventing any trains passing the inoperative signal.
  • Isolation valves, and control valves, that are used for example in systems containing hazardous substances, can be designed to close upon loss of power, for example by spring force. This is known as fail-closed upon loss of power.
  • An elevator has brakes that are held off brake pads by the tension of the elevator cable. If the cable breaks, tension is lost and the brakes latch on the rails in the shaft, so that the elevator cabin does not fall.
  • Vehicle air conditioning – Defrost controls require vacuum for diverter damper operation for all functions except defrost. If vacuum fails, defrost is still available.

Electrical or electronic

Examples include:

  • Many devices are protected from short circuit by fuses, circuit breakers, or current limiting circuits. The electrical interruption under overload conditions will prevent damage or destruction of wiring or circuit devices due to overheating.
  • Avionics using redundant systems to perform the same computation using three different systems. Different results indicate a fault in the system.[4]
  • Drive-by-wire and fly-by-wire controls such as an Accelerator Position Sensor typically have two potentiometers which read in opposite directions, such that moving the control will result in one reading becoming higher, and the other generally equally lower. A mismatch between the two readings indicates a fault in the system, and the ECU can often deduce which of the two readings is faulty.[5]
  • Traffic light controllers use a Conflict Monitor Unit to detect faults or conflicting signals and switch an intersection to an all flashing error signal, rather than displaying potentially dangerous conflicting signals, e.g. showing green in all directions.[6]
  • The automatic protection of programs and/or processing systems when a computer hardware or software failure is detected in a computer system. A classic example is a watchdog timer. See Fail-safe (computer).
  • A control operation or function that prevents improper system functioning or catastrophic degradation in the event of circuit malfunction or operator error; for example, the failsafe track circuit used to control railway block signals. The fact that a flashing amber is more permissive than a solid amber on many railway lines is a sign of a failsafe, as the relay, if not working, will revert to a more restrictive setting.
  • The iron pellet ballast on the Bathyscaphe is dropped to allow the submarine to ascend. The ballast is held in place by electromagnets. If electrical power fails, the ballast is released, and the submarine then ascends to safety.
  • Many nuclear reactor designs have neutron absorbing control rods suspended by electromagnets. If the power fails, they drop under gravity into the core and shut down the chain reaction in seconds by absorbing the neutrons needed for fission to continue.
  • In industrial automation, alarm circuits are usually "normally closed". This ensures that in case of a wire break the alarm will be triggered. If the circuit were normally open, a wire failure would go undetected, while blocking actual alarm signals.
  • Analog sensors and modulating actuators can usually be installed and wired such that a circuit failure results in an out-of-bound reading – see current loop. For example, a potentiometer indicating pedal position might only travel from 20% to 80% of its full range, such that a cable break or short results in a 0% or 100% reading (a software sketch of this idea follows the list).
  • In control systems, critically important signals can be carried by a complementary pair of wires (<signal> and <not_signal>). Only states where the two signals are opposite (one is high, the other low) are valid. If both are high or both are low the control system knows that something is wrong with the sensor or connecting wiring. Simple failure modes (dead sensor, cut or unplugged wires) are thereby detected. An example would be a control system reading both the normally open (NO) and normally closed (NC) poles of a SPDT selector switch against common, and checking them for coherency before reacting to the input.
  • In HVAC control systems, actuators that control dampers and valves may be fail-safe, for example, to prevent coils from freezing or rooms from overheating. Older pneumatic actuators were inherently fail-safe because if the air pressure against the internal diaphragm failed, the built-in spring would push the actuator to its home position – of course the home position needed to be the "safe" position. Newer electrical and electronic actuators need additional components (springs or capacitors) to automatically drive the actuator to home position upon loss of electrical power.[7]
  • Programmable logic controllers (PLCs). To make a PLC fail-safe, the system is designed so that no energization is required to stop the associated drives. For example, an emergency stop is usually a normally closed contact; in the event of a power failure this removes power directly from the coil and also from the PLC input. Hence, a fail-safe system.
  • If a voltage regulator fails, it can destroy connected equipment. A crowbar (circuit) prevents damage by short-circuiting the power supply as soon as it detects overvoltage.
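As a software illustration of the out-of-bound-reading idea mentioned above (the thresholds loosely follow a 4-20 mA current loop and are assumptions, not taken from the text), a reading outside the live range is reported as a fault instead of being treated as data, so the controller can drop to its safe state.

```python
# Sketch of a fail-safe analog input: the healthy signal lives in a
# 4-20 mA window, so 0 mA (broken wire) or a rail reading is a fault.
# The thresholds and scaling are illustrative.

FAULT = object()  # sentinel returned instead of a bogus measurement

def read_level_percent(current_ma: float):
    """Map 4-20 mA to 0-100 %; anything outside a small tolerance is a fault."""
    if current_ma < 3.5 or current_ma > 20.5:
        return FAULT              # broken wire, short circuit, or dead sensor
    return (current_ma - 4.0) / 16.0 * 100.0

for sample in (12.0, 0.0, 24.0):
    value = read_level_percent(sample)
    if value is FAULT:
        print(f"{sample} mA -> fault: fall back to the safe state")
    else:
        print(f"{sample} mA -> {value:.1f} %")
```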

Procedural safety


An aircraft lights its afterburners to maintain full power during an arrested landing aboard an aircraft carrier. If the arrested landing fails, the aircraft can safely take off again.

As well as physical devices and systems, fail-safe procedures can be created so that, if a procedure is not carried out or is carried out incorrectly, no dangerous action results. For example:

  • Spacecraft trajectory - During early Apollo program missions to the Moon, the spacecraft was put on a free return trajectory — if the engines had failed at lunar orbit insertion, the craft would have safely coasted back to Earth.
  • The pilot of an aircraft landing on an aircraft carrier increases the throttle to full power at touchdown. If the arresting wires fail to capture the aircraft, it is able to take off again; this is an example of fail-safe practice.[8]
  • In railway signalling, signals which are not in active use for a train are required to be kept in the 'danger' position. The default position of every controlled absolute signal is therefore "danger", and a positive action — setting signals to "clear" — is required before a train may pass. This practice also ensures that, in case of a fault in the signalling system, an incapacitated signalman, or the unexpected entry of a train, a train will never be shown an erroneous "clear" signal.
  • Railroad engineers are instructed that a railway signal showing a confusing, contradictory or unfamiliar aspect (for example a colour light signal that has suffered an electrical failure and is showing no light at all) must be treated as showing "danger". In this way, the driver contributes to the fail-safety of the system.

Other terminology

Fail-safe (foolproof) devices are also known as poka-yoke devices. Poka-yoke, a Japanese term, was coined by Shigeo Shingo, a quality expert.[9][10] "Safe to fail" refers to civil engineering designs such as the Room for the River project in the Netherlands and the Thames Estuary 2100 Plan,[11][12] which incorporate flexible adaptation strategies or climate change adaptation measures that provide for, and limit, damage should severe events such as 500-year floods occur.[13]

Fail safe and fail secure

Fail-safe and fail-secure are distinct concepts. Fail-safe means that a device will not endanger lives or property when it fails. Fail-secure, also called fail-closed, means that access or data will not fall into the wrong hands in a security failure. Sometimes the approaches suggest opposite solutions. For example, if a building catches fire, fail-safe systems would unlock doors to ensure quick escape and allow firefighters inside, while fail-secure would lock doors to prevent unauthorized access to the building.

The opposite of fail-closed is called fail-open.
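To restate the contrast in software terms (the function and policy names below are hypothetical, not from the text), the sketch decides what an electronic door lock should do when its controller loses contact with the access-control system.

```python
# Sketch contrasting fail-safe and fail-secure behaviour for a door lock
# whose controller has lost contact with the access-control system.
# The policy names and scenario are illustrative.

def door_state_on_failure(policy: str) -> str:
    if policy == "fail-safe":      # people must be able to get out (fire escape route)
        return "unlocked"
    if policy == "fail-secure":    # the contents must stay protected (vault, server room)
        return "locked"
    raise ValueError(f"unknown policy: {policy}")

print(door_state_on_failure("fail-safe"))    # unlocked
print(door_state_on_failure("fail-secure"))  # locked
```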

Fail active operational

Fail active operational can be installed on systems that have a high degree of redundancy, so that a single failure of any part of the system can be tolerated (fail active operational) and a second failure can be detected – at which point the system will turn itself off (uncouple, fail passive). One way of accomplishing this is to install three identical systems and a control logic which detects discrepancies. Examples of this are many aircraft systems, among them inertial navigation systems and pitot tubes.
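A minimal sketch of the triple-redundancy idea, as a two-out-of-three majority voter (the agreement tolerance is an assumption for illustration): one disagreeing channel is outvoted, while a wider disagreement makes the voter report a failure so the system can uncouple and fail passive.

```python
# Sketch of a 2-out-of-3 voter for triple-redundant sensor channels.
# The agreement tolerance is an illustrative assumption.

from itertools import combinations

TOLERANCE = 0.5

def vote(a: float, b: float, c: float):
    """Return (value, healthy): the average of any two agreeing channels,
    or (None, False) when no majority agrees."""
    for x, y in combinations((a, b, c), 2):
        if abs(x - y) <= TOLERANCE:
            return (x + y) / 2.0, True
    return None, False

print(vote(100.1, 100.2, 100.15))  # healthy: all channels agree
print(vote(100.1, 100.2, 250.0))   # healthy: the third channel is outvoted
print(vote(10.0, 100.0, 250.0))    # (None, False): uncouple / fail passive
```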

Failsafe point

During the Cold War, "failsafe point" was the term used for the point of no return for American Strategic Air Command nuclear bombers, just outside Soviet airspace. In the event of receiving an attack order, the bombers were required to linger at the failsafe point and wait for a second confirming order; until one was received, they would not arm their bombs or proceed further.[14] The design was to prevent any single failure of the American command system causing nuclear war. This sense of the term entered the American popular lexicon with the publishing of the 1962 novel Fail-Safe.

(Other nuclear war command control systems have used the opposite scheme, fail-deadly, which requires continuous or regular proof that an enemy first-strike attack has not occurred to prevent the launching of a nuclear strike.)

See also


  • Fail-fast
  • Control theory
  • Dead man's switch
  • EIA-485
  • Elegant degradation
  • Failing badly
  • Fail-deadly
  • Fault tolerance
  • IEC 61508
  • Interlock
  • Safe-life design
  • Safety engineering

References

  1. ^ "Fail-safe". AudioEnglich.net. Accessed 2009.12.31
  2. ^ e.g., David B. Rutherford Jr., What Do You Mean It\'s Fail Safe? . 1990 Rapid Transit Conference
  3. ^ Bornschlegl, Susanne (2012). Ready for SIL 4: Modular Computers for Safety-Critical Mobile Applications (pdf). MEN Mikro Elektronik. Retrieved 2015-09-21.
  4. ^ Bornschlegl, Susanne (2012). Ready for SIL 4: Modular Computers for Safety-Critical Mobile Applications (pdf). MEN Mikro Elektronik. Retrieved 2015-09-21.
  5. ^ "P2138 DTC Throttle/Pedal Pos Sensor/Switch D / E Voltage Correlation". www.obd-codes.com.
  6. ^ Manual on Uniform Traffic Control Devices, Federal Highway Administration, 2003
  7. ^ "When Failure Is Not an Option: The Evolution of Fail-Safe Actuators". KMC Controls. Retrieved 12 April 2021.
  8. ^ Harris, Tom. "How Aircraft Carriers Work". HowStuffWorks, Inc. Retrieved 2007-10-20.
  9. ^ Shingo, Shigeo; Andrew P. Dillon (1989). A study of the Toyota production system from an industrial engineering viewpoint. Portland, Oregon: Productivity Press. p. 22. ISBN 0-915299-17-8. OCLC 19740349
  10. ^ John R. Grout, Brian T. Downs. "A Brief Tutorial on Mistake-proofing, Poka-Yoke, and ZQC", MistakeProofing.com
  11. ^ "Thames Estuary 2100 Plan" (PDF). UK Environment Agency. November 2012. Archived from the original (PDF) on 2012-12-10. Retrieved March 20, 2013.
  12. ^ "Thames Estuary 2100 (TE2100)". UK Environment Agency. Retrieved March 20, 2013.
  13. ^ Jennifer Weeks (March 20, 2013). "Adaptation expert Paul Kirshen proposes a new paradigm for civil engineers: 'safe to fail,' not 'fail safe'". The Daily Climate. Archived from the original on May 13, 2013. Retrieved March 20, 2013.
  14. ^ "fail-safe". Dictionary.com. Retrieved November 7, 2021.
