Which statement best describes the difference between session affinity and session persistence?

If you specify the HTTP Cookie Passive method, the BIG-IP® system does not insert or search for blank Set-Cookie headers in the response from the server. This method does not try to set up the cookie. With this method, the server provides the cookie, formatted with the correct server information and timeout.

Important: We recommend that you use the HTTP Cookie Rewrite method instead of the HTTP Cookie Passive method whenever possible.

For the HTTP Cookie Passive method to succeed, there needs to be a cookie coming from the web server with the appropriate server information in the cookie. Using the BIG-IP Configuration utility, you generate a template for the cookie string, with encoding automatically added, and then edit the template to create the actual cookie.

For example, the following string is a generated cookie template with the encoding automatically added, where [poolname] is the name of the pool that contains the server, 336268299 is the encoded server address, and 20480 is the encoded port:

Set-Cookie:BIGipServer[poolname]=336268299.20480.0000; expires=Sat, 01-Jan-2002 00:00:00 GMT; path=/

To create your cookie from this template, type the actual pool name and an expiration date and time. Alternatively, you can perform the encoding using the following equation for address (a.b.c.d): d*(256^3) + c*(256^2) + b*256 +a

To encode the port, take the two bytes that store the port and reverse them. Port 80 is stored as 0 * 256 + 80, so reversing the bytes gives 80 * 256 + 0 = 20480. Port 1433 is stored as 5 * 256 + 153, so reversing the bytes gives 153 * 256 + 5 = 39173.
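The address and port formulas above can be sketched in a few lines of Python. This is an illustration only, not F5 tooling; the function names and the pool name used in the example are invented here.

```python
# Sketch of the BIGipServer cookie encoding described above.
# encode_addr implements d*(256^3) + c*(256^2) + b*256 + a for an
# address a.b.c.d; encode_port reverses the two bytes of the port.

def encode_addr(ip: str) -> int:
    """Encode a dotted-quad IPv4 address in the little-endian
    form used in the cookie value."""
    a, b, c, d = (int(octet) for octet in ip.split("."))
    return d * 256**3 + c * 256**2 + b * 256 + a

def encode_port(port: int) -> int:
    """Reverse the two bytes that store the port."""
    high, low = divmod(port, 256)   # port = high*256 + low
    return low * 256 + high

def bigip_cookie(pool: str, ip: str, port: int) -> str:
    # pool, ip, and port are caller-supplied; "0000" is the trailer
    # seen in the template above.
    return f"BIGipServer{pool}={encode_addr(ip)}.{encode_port(port)}.0000"

print(encode_port(80))    # 20480, as in the example above
print(encode_port(1433))  # 39173
```

The hypothetical pool name and address passed to `bigip_cookie` would be replaced with the real pool and server details when building an actual cookie.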

With Apache variants, the cookie can be added to every web page header by adding the following entry to the httpd.conf file: Header add Set-Cookie: "BIGipServer my_pool=184658624.20480.0000; expires=Sat, 19-Aug-2002 19:35:45 GMT; path=/"

Note: the profile settings Mirror Persistence, Match Across Services, Match Across Virtual Servers, and Match Across Pools do not apply to the HTTP Cookie Passive method. These settings apply to the Cookie Hash method only.

Analyze the following scenarios and determine which attacker used piggybacking.
• On the way to a meeting in a restricted area of a government facility, a contractor holds open a gate for a person in a military uniform, who approaches the entry point at a jog, flashing a badge just outside of the readable range.
• A government employee is late for a meeting in a restricted area of a military installation. Preoccupied with making the meeting on time, the employee does not notice when the gate has not closed and someone enters the restricted area.
• An employee leaves the workstation to use the restroom. A coworker notices that the employee has forgotten to lock the workstation, and takes advantage of the user's permissions.

• Several prospective interns are touring the operations floor of a large tech firm. One of them seems to be paying especially close attention to the employees.

Answer: A

Piggybacking is similar to tailgating, but the attacker enters a secure area with an employee's permission. Flashing an unreadable badge implies a request for the employee to hold the door; the attacker takes advantage of urgency.
Tailgating is a means of entering a secure area without authorization by following close behind a person who is allowed to open the door or checkpoint.
Lunchtime attacks take advantage of an unsecured, unattended workstation to gain access to the system.

An attacker can use shoulder surfing to learn a password or PIN (or other secure information) by watching the user type it. Despite the name, the attacker may not have to be close to the target.

Analyze and select the statements that accurately describe both worms and Trojans. (Select all that apply.)
• A worm is concealed within an application package while a Trojan is self-contained.
• Both worms and Trojans can provide a backdoor.
• Both worms and Trojans are designed to replicate.

• A worm is self-contained while a Trojan is concealed within an application package.

Answer: B, D

Both worms and Trojans can provide a backdoor into a system. Worms can carry a payload that may perform a malicious action such as installing a backdoor, and many Trojans function as backdoor applications.
Worms are self-contained, memory-resident viruses that replicate over network resources; a Trojan is concealed within an application package.
Worms do not need to attach themselves to another executable file because they are self-contained. Trojans are not self-contained and are delivered with an application.

Worms are designed to replicate, but Trojans are not. Typically, a worm is designed to rapidly consume network bandwidth as it replicates. This action may be able to crash a system.

An end-user has enabled cookies for several e-commerce websites and has started receiving targeted ads. The ads do not trouble the user until, when trying to access an e-commerce site, the user gets several pop-up ads that automatically redirect the user to suspicious sites the user did not intend to visit. What is the most likely explanation for this phenomenon?
• Tracking cookies have infected the user's computer.
• Ransomware has infected the user's computer.
• Spyware has infected the user's computer.

• Crypto-malware has infected the user's computer.

Answer: C

Spyware can perform adware-like tracking and monitor local activity. Another spyware technique is to perform domain name service (DNS) redirection to pharming sites.
Cookies are not malware, but if browser settings allow third-party cookies, they can record pages visited, search queries, browser metadata, and IP addresses.
Ransomware is a type of Trojan malware that tries to extort money from the victim. It will display threatening messages, stating the computer will remain locked until the victim pays the ransom.

Crypto-malware is a class of ransomware that attempts to encrypt data files. The user will be unable to access the files without obtaining the private encryption key, which is held by the attacker.

A hacker gains access to a database of usernames for a target company and then begins combining common, weak passwords with each username to attempt authentication. The hacker conducts what type of attack?
• Password spraying
• Brute force attack
• Dictionary attack

• Rainbow table attack

Answer: A

Password spraying is a horizontal brute-force online attack: an attacker chooses common passwords and tries them with multiple usernames.
A brute-force attack attempts every possible combination in the output space to match a captured hash and guess at the plaintext that generated it.
In a dictionary attack, there is a good chance of guessing the plaintext value (non-complex passwords); the software generates hash values from a dictionary of plaintexts to try to match one to a captured hash.

Rainbow table attacks refine the dictionary approach. The attacker uses a precomputed lookup table of all possible passwords and their matching hashes and looks up the hash value of a stored password in the table to discover the plaintext.
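The dictionary attack described above can be illustrated with a minimal sketch. The wordlist and the "captured" hash are invented for the example; real attacks use large wordlists and must match the target system's actual hash format (often salted, which is what defeats precomputed rainbow tables).

```python
# Minimal illustration of a dictionary attack against a captured hash:
# hash each candidate plaintext and compare it to the captured value.
import hashlib

def md5_hex(s: str) -> str:
    return hashlib.md5(s.encode()).hexdigest()

wordlist = ["letmein", "password", "qwerty", "123456"]  # invented wordlist
captured = md5_hex("qwerty")  # stand-in for a hash the attacker recovered

def dictionary_attack(captured_hash, words):
    for word in words:
        if md5_hex(word) == captured_hash:
            return word       # plaintext recovered
    return None               # no candidate matched

print(dictionary_attack(captured, wordlist))  # qwerty
```

A rainbow table refines this by precomputing the hash-to-plaintext lookup once, trading storage for per-attempt computation.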

A retail establishment experiences an attack where whole number values have been exploited. As a result, some credit values are manipulated from positive values to negative values. Which type of attack is the establishment dealing with?
• Integer overflow
• Buffer overflow
• Stack overflow

• Race condition

Answer: A

An integer overflow attack causes the target software to calculate a value that exceeds the bounds of the integer type used to store it. This may cause a positive number to become negative.
A buffer is an area of memory that the application reserves to store expected data. To exploit a buffer overflow vulnerability, the attacker passes data that deliberately overfills the buffer.
A stack is an area of memory used by a program. It includes a return address, which is the location of the program that called the subroutine. An attacker could use a buffer overflow to change the return address.

Race conditions occur when the outcome from an execution process is directly dependent on the order and timing of certain events, and those events fail to execute in the order and timing intended by the developer.
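The positive-to-negative flip in the integer overflow scenario can be demonstrated with a toy model. The 16-bit width and the balance values are assumptions for the illustration; `to_int16` mimics what a two's-complement 16-bit store would do to an out-of-range result.

```python
# Illustration of a signed 16-bit integer overflow flipping a
# positive value negative, as in the credit-manipulation scenario.

def to_int16(value: int) -> int:
    """Wrap a value the way a two's-complement int16 store would."""
    value &= 0xFFFF                           # keep only the low 16 bits
    return value - 0x10000 if value >= 0x8000 else value

balance = 32000                               # near the top of the int16 range
balance = to_int16(balance + 1000)            # arithmetic exceeds 32767
print(balance)                                # -32536: the addition overflowed
```

In a language like C the wraparound happens silently in the hardware; the attack works by feeding inputs that push a stored whole number past its maximum.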

An attacker compromises a confidential database at a retailer. Investigators discover that unauthorized ad hoc changes to the system were to blame. How do the investigators describe the attack vector in a follow-up report? (Select all that apply.)
• Configuration drift
• Weak configuration
• Lack of security controls

• Shadow IT

Answer: A, D

Configuration drift happens when malware exploits an undocumented configuration change on a system.
Shadow IT occurs when software, or an unauthorized service/port, is introduced to a system without authorization. In either case, reapply the baseline configuration and investigate configuration management procedures to prevent this type of ad hoc change.
Weak configuration occurs when a configuration was correctly applied but was exploited anyway. Review the template to devise more secure settings.
A lack of security controls is likely to blame if the attack could have been prevented by endpoint protection or antivirus, a host firewall, content filtering policies, data loss prevention systems, or a mobile device management program.

P488: Configuration drift—if the malware exploited an undocumented configuration change (shadow IT software or an unauthorized service/port, for instance), reapply the baseline configuration and investigate configuration management procedures to prevent this type of ad hoc change.

An unauthorized person gains access to a restricted area by claiming to be a member of upper management and bullying past the door guard's verbal attempts to stop the unauthorized visitor. What type of policy could help mitigate this type of social engineering attack?
• Challenge policy
• ID badge policy
• Mantrap policy

• Skimming policy

Answer: A

One of the most important parts of surveillance is the challenge policy, which details appropriate responses for given situations and helps to defeat social engineering attacks. Challenge policies may include insisting that individuals complete proper authentication at gateways, even if this means inconveniencing staff members (no matter their seniority).
Anyone moving through secure areas of a building should be wearing an ID badge; security should challenge anyone without one.
A mantrap is a physical security control used for critical assets, where one gateway leads to an enclosed space protected by another barrier.
Skimming involves the use of a counterfeit card reader to capture card details, which are then used to program a duplicate.

P556: Reception Personnel and ID Badges — One of the most important parts of surveillance is the challenge policy. This sets out what type of response is appropriate in given situations and helps to defeat social engineering attacks. This must be communicated to and understood by the staff. Challenges represent a whole range of different contact situations. For example:
• Challenging visitors who do not have ID badges or are moving about unaccompanied.
• Insisting that proper authentication is completed at gateways, even if this means inconveniencing staff members (no matter their seniority).
• Intruders and/or security guards may be armed.

The safety of staff and compliance with local laws has to be balanced against the imperative to protect the company's other resources. It is much easier for employees to use secure behavior in these situations if they know that their actions are conforming to a standard of behavior that has been agreed upon and is expected of them.

An attack at a company renders a network useless after a switch is impacted. Engineers review network traffic and determine that the switch is behaving like a hub. What do the engineers conclude is happening? (Select all that apply.)
• The switch's memory is exhausted.
• The switch is flooding unicast traffic.
• The switch MAC table has invalid entries.

• The switch is using MAC-based forwarding.

Answer: A, B

MAC flooding is used to attack a switch. The intention of the attack is to exhaust the memory used to store the switch's MAC address table.
Overwhelming the switch's MAC table can cause the switch to stop trying to apply MAC-based forwarding and flood unicast traffic out of all ports.
If the switch had invalid entries, it would need to build a new MAC table; it would not be flooding traffic out of all ports.
The switch uses the MAC address table to determine which port to use to forward unicast traffic to its correct destination.

P238

MAC Flooding Attacks — Where ARP poisoning is directed at hosts, MAC flooding is used to attack a switch. The intention of the attacker is to exhaust the memory used to store the switch's MAC address table. The switch uses the MAC address table to determine which port to use to forward unicast traffic to its correct destination. Overwhelming the table can cause the switch to stop trying to apply MAC-based forwarding and flood unicast traffic out of all ports, working as a hub. This makes sniffing network traffic easier for the threat actor.
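The fallback-to-hub behavior described above can be modeled in a few lines. The switch class, its table capacity, and the MAC addresses are all invented for the illustration; a real switch's table is much larger and entries also age out.

```python
# Toy model of MAC flooding: once the fixed-capacity MAC table is
# full, frames to unlearned destinations are flooded out of all
# ports, which is exactly the hub-like behavior an attacker wants.

class ToySwitch:
    def __init__(self, capacity: int):
        self.capacity = capacity
        self.mac_table = {}            # MAC address -> port

    def learn(self, mac: str, port: int) -> None:
        if len(self.mac_table) < self.capacity:
            self.mac_table[mac] = port # learn until the table is full

    def forward(self, dst_mac: str):
        # Known destination: one port. Unknown: flood all ports.
        return self.mac_table.get(dst_mac, "FLOOD")

sw = ToySwitch(capacity=4)
sw.learn("aa:aa:aa:aa:aa:01", 1)       # legitimate host on port 1
for i in range(100):                   # attacker floods bogus source MACs
    sw.learn(f"de:ad:be:ef:00:{i:02x}", 5)

# Most of the attacker's MACs arrived after the table filled, so
# traffic to them (and to any new legitimate host) is flooded.
print(sw.forward("de:ad:be:ef:00:63"))  # FLOOD
```

The model also shows why already-learned hosts keep working (`forward` still returns port 1 for the legitimate MAC) while the switch leaks everything addressed to unlearned stations.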

After several users call to report dropped network connections on a local wireless network, a security analyst scans network logs and discovers that multiple unauthorized devices were connecting to the network via a smartphone tethered to it, overwhelming the network and providing a backdoor for unauthorized access. How would this device be classified?
• A switched port analyzer (SPAN)/mirror port
• A spectrum analyzer
• A rogue access point (AP)

• A thin wireless access point (WAP)

Answer: C

With a SPAN port, the sensor attaches to a specially configured port on the switch that receives copies of frames addressed to nominated access ports (or all the other ports).
A spectrum analyzer is a device that can detect the source of jamming (interference) on a wireless network.
A malicious user can set up an unauthorized (rogue) access point with something as basic as a smartphone with tethering capabilities, and non-malicious users could do so by accident.
An access point that requires a wireless controller to function is known as a thin WAP, while a fat WAP's firmware contains enough processing logic to be able to function autonomously and handle clients without the use of a wireless controller.

P253

An engineer pieces together the clues from an attack that temporarily disabled a critical web server. The engineer determines that a SYN flood attack was the cause. Which pieces of evidence led the engineer to this conclusion? (Select all that apply.)
• ACK packets were held by the server
• SYN/ACK packets were misdirected from the client
• ACK packets were missing from the client

• SYN/ACK packets from the server were misdirected

Answer: C, D

A SYN flood attack works by withholding the client's ACK packet during TCP's three-way handshake.
In a SYN attack, the SYN/ACK packets are not misdirected from the client, since the client is the attacker; packets are misdirected from the server, since the attacker is a spoofed client.
Typically a client's IP address is spoofed in a SYN attack, meaning that an invalid or random IP is entered so the server's SYN/ACK packet is misdirected.
In a SYN attack, the three-way handshake is compromised: the client's ACK packet is withheld, not the SYN packet.

P257

Some types of DDoS attacks simply aim to consume network bandwidth, denying it to legitimate hosts, by using overwhelming numbers of bots. Others cause resource exhaustion on the hosts processing requests, consuming CPU cycles and memory. This delays processing of legitimate traffic and could potentially crash the host system completely. For example, a SYN flood attack works by withholding the client's ACK packet during TCP's three-way handshake. Typically the client's IP address is spoofed, meaning that an invalid or random IP is entered so the server's SYN/ACK packet is misdirected. A server, router, or firewall can maintain a queue of pending connections, recorded in its state table. When it does not receive an ACK packet from the client, it resends the SYN/ACK packet a set number of times before timing out the connection. The problem is that a server may only be able to manage a limited number of pending connections, which the DoS attack quickly fills up. This means that the server is unable to respond to genuine traffic.
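The state-table exhaustion described in that passage can be sketched as a toy model. The backlog size, client names, and return strings are invented for the illustration; real stacks also retransmit SYN/ACKs and time entries out.

```python
# Toy model of SYN flood mechanics: spoofed clients never send the
# final ACK, so half-open connections pile up in the pending queue
# until the server cannot accept genuine connection attempts.

class ToyServer:
    def __init__(self, backlog: int):
        self.backlog = backlog
        self.pending = set()          # connections awaiting the client's ACK

    def syn(self, client: str) -> str:
        if len(self.pending) >= self.backlog:
            return "REFUSED"          # state table full
        self.pending.add(client)      # record the half-open connection
        return "SYN/ACK"

    def ack(self, client: str) -> str:
        self.pending.discard(client)  # handshake completes, slot freed
        return "ESTABLISHED"

srv = ToyServer(backlog=5)
for i in range(5):
    srv.syn(f"spoofed-{i}")           # ACKs never arrive for spoofed IPs

print(srv.syn("genuine-client"))      # REFUSED: backlog exhausted
```

Completing a handshake (`ack`) frees a slot, which is why defenses like shorter SYN timeouts or SYN cookies, which avoid storing state per half-open connection, blunt the attack.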

The IT staff at a large company review numerous security logs and discover that the SAM database on Windows workstations is being accessed by a malicious process. What does the staff determine the issue to be?
• Shellcode
• Persistence
• Credential dumping

• Lateral movement

Answer: C

Credential dumping is a method used to access the credentials file (SAM on a local Windows workstation) or sniff credentials held in memory by the lsass.exe system process.
Shellcode is a minimal program designed to exploit a buffer overflow or similar vulnerability to gain privileges to a system.
Persistence is a mechanism that maintains a connection if the threat actor's backdoor is restarted, if the host reboots, or if the user logs off.
With lateral movement, the attacker might be seeking data assets or may try to widen access by changing the system security configuration.

P404

A junior engineer suspects there is a breached system based on an alert received from a software monitor. The use of the alert provides which information to the engineer?
• TTP
• CTI
• IoC

• ISAC

Answer: C

An indicator of compromise (IoC) is a residual sign that an asset or network has been successfully attacked or is continuing to be attacked and provides evidence of a TTP.
A tactic, technique, or procedure (TTP) is a generalized statement of adversary behavior. TTPs categorize behaviors in terms of a campaign strategy.
Threat data can be packaged as feeds that integrate with a security information and event management (SIEM) platform. These feeds are usually described as cyber threat intelligence (CTI) data.
Public/private information sharing centers are utilized in many critical industries. Information Sharing and Analysis Centers (ISAC) are set up to share threat intelligence and promote best practices.

P650: IoC (indicator of compromise) — a sign that an asset or network has been attacked or is currently under attack.

P38: Tactics, Techniques, and Procedures and Indicators of Compromise — A tactic, technique, or procedure (TTP) is a generalized statement of adversary behavior. The term is derived from US military doctrine (mwi.usma.edu/what-is-armydoctrine). TTPs categorize behaviors in terms of campaign strategy and approach (tactics), generalized attack vectors (techniques), and specific intrusion tools and methods (procedures). An indicator of compromise (IoC) is a residual sign that an asset or network has been successfully attacked or is continuing to be attacked. Put another way, an IoC is evidence of a TTP. TTPs describe what and how an adversary acts, and indicators describe how to recognize what those actions might look like (stixproject.github.io/documentation/concepts/ttp-vs-indicator). As there are many different targets and vectors of an attack, so too are there many different potential IoCs.
The following is a list of some IoCs that you may encounter:
• Unauthorized software and files
• Suspicious emails
• Suspicious registry and file system changes
• Unknown port and protocol usage
• Excessive bandwidth usage
• Rogue hardware
• Service disruption and defacement
• Suspicious or unauthorized account usage

An IoC can be definite and objectively identifiable, like a malware signature, but often IoCs can only be described with confidence via the correlation of many data points. Because these IoCs are often identified through patterns of anomalous activity rather than single events, they can be open to interpretation and therefore slow to diagnose. Consequently, threat intelligence platforms use AI-backed analysis to speed up detection without overwhelming analysts' time with false positives.

An engineer routinely provides data to a source that compiles threat intelligence information. The engineer focuses on behavioral threat research. Which information does the engineer provide?
• IP addresses associated with malicious behavior
• Descriptions of example attacks
• Correlation of events observed with known actor indicators

• Data available as a paid subscription

Answer: B

Behavioral threat research is narrative commentary describing examples of attacks and TTPs gathered through primary research sources.
Reputational threat intelligence includes lists of IP addresses and domains associated with malicious behavior, plus signatures of known file-based malware.
Threat data is computer data that can correlate events observed on a customer's own networks and logs with known TTP and threat actor indicators.
Data that is part of a closed/proprietary system is made available as a paid subscription to a commercial threat intelligence platform. There is no mention of a subscription model in this case.

P36: Threat Intelligence Providers — The outputs from the primary research undertaken by security solutions providers and academics can take three main forms:
• Behavioral threat research—narrative commentary describing examples of attacks and TTPs gathered through primary research sources.
• Reputational threat intelligence—lists of IP addresses and domains associated with malicious behavior, plus signatures of known file-based malware.

• Threat data—computer data that can correlate events observed on a customer's own networks and logs with known TTP and threat actor indicators.

An actor penetrates a system and uses IP spoofing to reroute information to a fraudulent host. Which method does the actor utilize for this purpose?
• Data exfiltration
• Data breach
• Privacy breach

• Data leak

Answer: A

Data exfiltration refers to the methods and tools by which an attacker transfers data without authorization from the victim's systems to an external network or media.
A data breach event is where confidential data is read or transferred without authorization. A breach can be intentional/malicious or unintentional/accidental.
A privacy breach occurs when personal data is not collected, stored, or processed in full compliance with the laws or regulations governing personal information.

A breach can also be described as a data leak and is where confidential data is read or transferred without authorization.

An organization hires a pen tester. The tester achieves a connection to a perimeter server. Which technique allows the tester to bypass a network boundary from this advantage?
• Persistence
• Privilege escalation
• Pivoting

• Lateral movement

Answer: C

If the pen tester achieves a foothold on a perimeter server, a pivot allows them to bypass a network boundary and compromise servers on an inside network.
Persistence is the tester's ability to reconnect to the compromised host and use it as a remote access tool (RAT) or backdoor.
A pen tester uses privilege escalation in attempts to map out the internal network and discover the services running on it and the accounts configured to access it.
Lateral movement is the action of gaining control over other hosts. This is done partly to discover more opportunities to widen access, partly to identify where valuable data assets might be located, and partly to evade detection.

P80: Pen Test Attack Life Cycle — In the kill chain attack life cycle, reconnaissance is followed by an initial exploitation phase where a software tool is used to gain some sort of access to the target's network. This foothold might be accomplished using a phishing email and payload or by obtaining credentials via social engineering. Having gained the foothold, the pen tester can then set about securing and widening access. A number of techniques are required:
• Persistence—the tester's ability to reconnect to the compromised host and use it as a remote access tool (RAT) or backdoor. To do this, the tester must establish a command and control (C2 or C&C) network to use to control the compromised host, upload additional attack tools, and download exfiltrated data. The connection to the compromised host will typically require a malware executable to run after shut down/log off events and a connection to a network port and the attacker's IP address to be available.
• Privilege escalation—persistence is followed by further reconnaissance, where the pen tester attempts to map out the internal network and discover the services running on it and accounts configured to access it. Moving within the network or accessing data assets are likely to require higher privilege levels. For example, the original malware may have run with local administrator privileges on a client workstation or as the Apache user on a web server. Another exploit might allow malware to execute with system/root privileges, or to use network administrator privileges on other hosts, such as application servers.
• Lateral movement—gaining control over other hosts. This is done partly to discover more opportunities to widen access (harvesting credentials, detecting software vulnerabilities, and gathering other such "loot"), partly to identify where valuable data assets might be located, and partly to evade detection. Lateral movement usually involves executing the attack tools over remote process shares or using scripting tools, such as PowerShell.
• Pivoting—hosts that hold the most valuable data are not normally able to access external networks directly. If the pen tester achieves a foothold on a perimeter server, a pivot allows them to bypass a network boundary and compromise servers on an inside network. A pivot is normally accomplished using remote access and tunneling protocols, such as Secure Shell (SSH), virtual private networking (VPN), or remote desktop.
• Actions on Objectives—for a threat actor, this means stealing data from one or more systems (data exfiltration). From the perspective of a pen tester, it would be a matter of the scope definition whether this would be attempted. In most cases, it is usually sufficient to show that actions on objectives could be achieved.

• Cleanup—for a threat actor, this means removing evidence of the attack, or at least evidence that could implicate the threat actor. For a pen tester, this phase means removing any backdoors or tools and ensuring that the system is not less secure than the pre-engagement state.

An organization requires that a file transfer occurs on a nightly basis from an internal system to a third-party server. IT for both organizations agree on using FTPS. Which configurations does IT need to put in place for proper file transfers? (Select all that apply.)
• Configure the use of port 990
• Configure the use of port 22
• Negotiate a tunnel prior to any exchanged commands

• Using Secure Shell (SSH) between client and server

Answer: A, C

Implicit TLS (FTPS) mode is tricky to configure when there are firewalls between the client and server, and it uses the secure port 990 for the control connection.
Implicit TLS (FTPS) negotiates an SSL/TLS tunnel before the exchange of any FTP commands.
SSH FTP (SFTP) uses a secure link that is created between the client and server using Secure Shell (SSH) over TCP port 22.

With SFTP, which uses SSH, a secure link is created between the client and server. Ordinary FTP commands and data transfer can then be sent over the secure link without risk of eavesdropping or man-in-the-middle attacks.

An administrator provisions both a new cloud-based virtual server and an on-premises virtual server. Compare the possible virtualization layer responsibilities for the implementation and determine which one applies to this configuration.
• CSP is responsible for the cloud, the administrator is responsible for the on-premise.
• CSP is responsible for the cloud, the CSP is responsible for the on-premise.
• The administrator is responsible for the cloud, the administrator is responsible for the on-premise.

• The administrator is responsible for the cloud, the CSP is responsible for the on-premise.

Answer: A

The virtualization layer is the underlying layer that provides virtualization capabilities such as a virtual server. The CSP is responsible for this in the cloud. An on-premise installation is the responsibility of the administrator.
The CSP is responsible for the cloud, such as in an IaaS or PaaS implementation, but the administrator is responsible for the on-premise installation.
The administrator is only responsible for the on-premise installation. This underlying virtualization platform might be a Windows Hyper-V server, for example.
The Cloud Service Provider (CSP) would be responsible for the platform that the administrator utilizes to create a virtual machine.

P420: In the responsibility matrix, the virtualization layer is the CSP's responsibility in every cloud service model.

Consider an abstract model of network functions for an infrastructure as code (IaC) implementation and determine which plane describes how traffic is prioritized.
• Data
• Management
• Control

• Application

Answer: C

The control plane makes decisions about how traffic should be prioritized, secured, and switched. A software-defined networking (SDN) application can be used to define policy decisions.
The data plane handles the actual switching and routing of traffic and imposition of security access controls. Decisions made in the control plane are implemented on the data plane.
The management plane is used to monitor traffic conditions and network status. SDN can be used to manage compatible physical appliances, but also virtual switches, routers, and firewalls.
Applications interface with network devices by using APIs. The interface between the SDN applications and the SDN controller is described as the "northbound" API, while that between the controller and appliances is the "southbound" API.

P442: Software-Defined Networking — IaC is partly facilitated by physical and virtual network appliances that are fully configurable via scripting and APIs. As networks become more complex—perhaps involving thousands of physical and virtual computers and appliances—it becomes more difficult to implement network policies, such as ensuring security and managing traffic flow. With so many devices to configure, it is better to take a step back and consider an abstracted model about how the network functions. In this model, network functions can be divided into three "planes":
• Control plane—makes decisions about how traffic should be prioritized and secured, and where it should be switched.
• Data plane—handles the actual switching and routing of traffic and imposition of security access controls.
• Management plane—monitors traffic conditions and network status.

A software-defined networking (SDN) application can be used to define policy decisions on the control plane. These decisions are then implemented on the data plane by a network controller application, which interfaces with the network devices using APIs. The interface between the SDN applications and the SDN controller is described as the "northbound" API, while that between the controller and appliances is the "southbound" API. SDN can be used to manage compatible physical appliances, but also virtual switches, routers, and firewalls. The architecture supporting rapid deployment of virtual networking using general-purpose VMs and containers is called network functions virtualization (NFV) (redhat.com/en/topics/virtualization/what-is-nfv). This architecture saves network and security administrators the job and complexity of configuring each appliance with proper settings to enforce the desired policy. It also allows for fully automated deployment (or provisioning) of network links, appliances, and servers. This makes SDN an important part of the latest automation and orchestration technologies.

Compare the components found in a virtual platform and select the options that accurately differentiate between them. (Select all that apply.)
• Hypervisors are Virtual Machine Monitors (VMM) and guest operating systems are Virtual Machines (VM).
• Hypervisors facilitate interactions with the computer hardware and computers are the platform that hosts the virtual environment.
• Computers are the operating systems that are installed under the virtual environment and guest operating systems are the platform that host the virtual environment.

• Hypervisors are operating systems and computers are the platform that hosts the virtual environment.

AB

Hypervisors are the Virtual Machine Monitor (VMM) and guest operating systems are the Virtual Machines (VM) found within the virtual platform.

Hypervisors manage the virtual machine environment and facilitate interaction with the computer hardware and network. The computer component is the platform that hosts the virtual environment. Multiple computers may also be networked together.

Computers are the platform of the virtual environment and guest operating systems are the operating systems installed under the virtual environment.

Guest operating systems are the operating systems installed under the virtual environment and computers are platform that hosts the virtual environment.

After a company moves on-premises systems to the cloud, engineers decide to use a serverless approach in a future deployment. What type of architecture will engineers provision in this deployment? (Select all that apply.)

Virtual machine

Physical server

Containers

Microservices

CD

When a client requires some operation to be processed in a serverless environment, the cloud spins up a container to run the code, performs the processing, and then destroys the container.

With serverless technologies, applications are developed as functions and microservices, each interacting with other functions to facilitate client requests.

A virtual machine or VM is a fully operational operating system functioning as a guest instance on a physical host.

A physical machine or server is a fully operational operating system that functions on a physical host system and is not dependent on any virtual technology.

Based on knowledge of identity and authentication concepts, select the true statement.

A user profile must be unique.

Credentials could include name, contact details, and group memberships.

An identifier could be a username and password, or smart card and PIN code.

An account consists of an identifier, credentials, and a profile.

D

An account consists of an identifier, credentials, and a profile. An account identifies a user on a computer system.

An identifier must be unique, not a profile. This is accomplished by defining the account on the system by a Security Identifier (SID) string.

A profile, not credentials, could include name and contact details, as well as group memberships.

Credentials, not an identifier, could be a username and password or smart card and PIN code. This is the information used to authenticate a subject when it attempts user account access.

A guard station deploys a new security device for accessing a classified data station. The installation tech tests the device's improvements for speed and pressure. Which behavioral technology does the tech test?

Voice recognition

Gait analysis

Typing

Signature recognition

D

Signatures are relatively easy to duplicate, but it is more difficult to fake the actual signing process. Signature matching records the user applying their signature (stroke, speed, and pressure of the stylus).

Voice recognition is relatively cheap, as the hardware and software required are built into many standard PCs and mobiles. However, obtaining an accurate template can be difficult and time-consuming.

Gait analysis produces a template from human movement (locomotion). The technologies can either be camera-based or use smartphone features, such as an accelerometer and gyroscope.

Typing is used to match the speed and pattern of a user's input of a passphrase.

P185

• Voice recognition—relatively cheap, as the hardware and software required are built into many standard PCs and mobiles. However, obtaining an accurate template can be difficult and time-consuming. Background noise and other environmental factors can also interfere with logon. Voice is also subject to impersonation.
• Gait analysis—produces a template from human movement (locomotion). The technologies can either be camera-based or use smartphone features, such as an accelerometer and gyroscope.
• Signature recognition—signatures are relatively easy to duplicate, but it is more difficult to fake the actual signing process. Signature matching records the user applying their signature (stroke, speed, and pressure of the stylus).

• Typing—matches the speed and pattern of a user’s input of a passphrase.

An organization considers installing fingerprint scanners at a busy entry control point to a secure area. What concerns might arise with the use of this technology? (Select all that apply.)

Fingerprint scanning is relatively easy to spoof.

Installing equipment is cost-prohibitive.

Surfaces must be clean and dry.

The scan is highly intrusive.

AC

The main problem with fingerprint scanners is that it is possible to obtain a copy of a user's fingerprint and create a mold of it that will fool the scanner.

The technology required for scanning and recording fingerprints is relatively inexpensive, and the process quite straightforward. A fingerprint sensor is usually a small capacitive cell that can detect the unique pattern of ridges making up the fingerprint.

Moisture or dirt can prevent good readings, so facilities using fingerprint scanners must keep readers clean and dry, which can prove challenging in high-throughput areas.

Fingerprint technology is non-intrusive and relatively simple to use.

An administrator plans a backup and recovery implementation for a server. The goal is to have a full backup every Sunday followed by backups that only include changes every other day of the week. In the event of a catastrophe, the restore time needs to be as quick as possible. Which scheme does the administrator use?

Full followed by incrementals

Image followed by incrementals

Full followed by differentials

Snapshot followed by differentials

C

A full backup includes data regardless of its last backup time. A differential backup includes new and modified files since the last full backup. A differential restore is quicker than an incremental.

A full backup includes data regardless of its last backup time. An incremental backup includes new and modified files since the last backup. A restore can be time-consuming based on the number of sets involved.

An image is not a backup type in a backup scheme, but is a disk imaging process. An incremental backup includes new files and files modified since the last backup.

A snapshot is a method to backup open files. A differential backup includes new and modified files since the last full backup.
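The restore-time difference between the two schemes can be shown with a short calculation. This sketch is not from the text's toolset; the `restore_sets` function and the Sunday-full/daily-change schedule are assumptions made for the example.

```python
# Illustrative sketch: count the backup sets needed to restore on a
# given day under each scheme. Full backup on Sunday (day 0), change
# backups on days 1-6.

def restore_sets(day, scheme):
    if day == 0:
        return 1                  # the full backup alone
    if scheme == "differential":
        return 2                  # full + the most recent differential
    if scheme == "incremental":
        return 1 + day            # full + every incremental since Sunday
    raise ValueError(scheme)

# Restoring on Saturday (day 6):
print(restore_sets(6, "differential"))  # 2 sets
print(restore_sets(6, "incremental"))   # 7 sets
```

A differential restore always needs at most two sets, which is why it is the quicker scheme in this scenario even though each differential set grows larger through the week.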

A security specialist reviews an open data closet and discovers areas for improvement. Most notable is the exposed connectivity media. Which concerns does the specialist have regarding the need for better security? (Select all that apply.)

Eavesdropping

Speed

Damage

Length

AC

A physically secure cabled network is referred to as a protected distribution system (PDS). This method of cable installation can deter eavesdropping.

A hardened PDS is one where all cabling is routed through sealed metal conduit. This type of enclosure protects the cabling from accidental or intentional damage.

The speed, or throughput, of a network cable depends on the type of cable, such as Cat5 versus Cat6. The cable speed is typically determined by the number of twists inside the cable and is not a security concern.

The length of a network cable depends on the type of cable, such as Cat5 versus Cat6. The cable length dictates how far a signal can travel and is not a security concern.

P560

Protected Distribution and Faraday Cages

A physically secure cabled network is referred to as protected cable distribution or as a protected distribution system (PDS). There are two principal risks:

• An intruder could attach eavesdropping equipment to the cable (a tap).
• An intruder could cut the cable (Denial of Service).

A hardened PDS is one where all cabling is routed through sealed metal conduit and subject to periodic visual inspection. Lower-grade options are to use different materials for the conduit (plastic, for instance). Another option is to install an alarm system within the cable conduit, so that intrusions can be detected automatically. It is possible to install communications equipment within a shielded enclosure, known as a Faraday Cage. The cage is a charged conductive mesh that blocks signals from entering or leaving the area. The risk of eavesdropping from leakage of electromagnetic signals was investigated by the US DoD, which defined TEMPEST (Transient Electromagnetic Pulse Emanation Standard) as a means of shielding the signals.

Systems administrators configure an application suite that uses a collection of single hash functions and symmetric ciphers to protect sensitive communication. While the suite uses these security features collectively, how is each instance recognized?

As non-repudiation

As a cryptographic system

As a cryptographic primitive

As a key pair

C

A single hash function, symmetric cipher, or asymmetric cipher is called a cryptographic primitive. The properties of different symmetric/asymmetric/hash types and of specific ciphers for each type impose limitations when used alone.

Non-repudiation depends on a recipient not being able to encrypt the message, or the recipient would be able to impersonate the sender.

A complete cryptographic system or product is likely to use multiple cryptographic primitives, such as within a cipher suite.

To use a key pair, the user or server generates the linked keys. These keys are an example of a cryptographic primitive that uses an asymmetric cipher.

P121

A single hash function, symmetric cipher, or asymmetric cipher is called a cryptographic primitive. A complete cryptographic system or product is likely to use multiple cryptographic primitives, such as within a cipher suite.

The properties of different symmetric/asymmetric/hash types and of specific ciphers for each type impose limitations on their use in different contexts and for different purposes. If you are able to encrypt a message in a particular way, it follows that the recipient of the message knows with whom he or she is communicating (that is, the sender is authenticated). This means that encryption can form the basis of identification, authentication, and access control systems.
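The distinction between a primitive and a construction built from primitives can be illustrated with Python's standard library. This sketch is illustrative only: SHA-256 on its own is a single hash primitive, while HMAC is a construction that composes that primitive with a secret key, much as a complete product composes several primitives in a cipher suite.

```python
import hashlib
import hmac

# A single hash function is one cryptographic primitive on its own.
digest = hashlib.sha256(b"message").hexdigest()

# A construction composes primitives: HMAC combines the SHA-256
# primitive with a secret key to add message authentication.
tag = hmac.new(b"shared-secret", b"message", hashlib.sha256).hexdigest()

print(len(digest), len(tag))  # SHA-256 output is 64 hex characters either way
```

The hash alone provides integrity checking; only the keyed construction lets the recipient verify who produced the message, which is why real systems layer primitives rather than using one in isolation.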

An engineer considers blockchain as a solution for record-keeping. During planning, which properties of blockchain does the engineer document for implementation? (Select all that apply.)

Using a peer-to-peer network

Obscuring the presence of a message

Partially encrypting data

Using cryptographic linking

AD

Blockchain is recorded in a public ledger. This ledger does not exist as an individual file on a single computer; rather, it is distributed across a peer-to-peer (P2P) network.

The hash value of a previous block in a chain is added to the hash calculation of the next block in the chain. This ensures that each successive block is cryptographically linked.

Steganography is a technique for obscuring the presence of a message. Typically, information is embedded where it is not expected.

Homomorphic encryption is a solution that allows an entity to use information in particular fields within the data while keeping the data set as a whole encrypted.

P131

Blockchain is a concept in which an expanding list of transactional records is secured using cryptography. Each record is referred to as a block and is run through a hash function. The hash value of the previous block in the chain is added to the hash calculation of the next block in the chain. This ensures that each successive block is cryptographically linked. Each block validates the hash of the previous block, all the way through to the beginning of the chain, ensuring that each historical transaction has not been tampered with. In addition, each block typically includes a timestamp of one or more transactions, as well as the data involved in the transactions themselves. The blockchain is recorded in a public ledger. This ledger does not exist as an individual file on a single computer; rather, one of the most important characteristics of a blockchain is that it is decentralized. The ledger is distributed across a peer-to-peer (P2P) network in order to mitigate the risks associated with having a single point of failure or compromise. Blockchain users can therefore trust each other equally. Likewise, another defining quality of a blockchain is its openness—everyone has the same ability to view every transaction on a blockchain.

Blockchain technology has a variety of potential applications. It can ensure the integrity and transparency of financial transactions, online voting systems, identity management systems, notarization, data storage, and more. However, blockchain is still an emerging technology, and outside of cryptocurrencies, has not yet been adopted on a wide-ranging scale.
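The cryptographic linking described above can be sketched in a few lines of Python using the standard hashlib module. The block fields here are invented for illustration; real blockchains also carry timestamps and transaction data.

```python
import hashlib
import json

# Minimal sketch of cryptographic linking: each block's hash covers the
# previous block's hash, so altering any historical block breaks every
# later link in the chain.

def make_block(data, prev_hash):
    payload = {"data": data, "prev_hash": prev_hash}
    block = dict(payload)
    block["hash"] = hashlib.sha256(
        json.dumps(payload, sort_keys=True).encode()).hexdigest()
    return block

genesis = make_block("genesis", "0" * 64)
second = make_block("tx: A pays B", genesis["hash"])

# Tampering with the first block changes its hash, so the link stored
# in the second block no longer matches.
tampered = make_block("tx: A pays C", "0" * 64)
print(second["prev_hash"] == genesis["hash"])   # True
print(second["prev_hash"] == tampered["hash"])  # False
```

Validating a chain means walking it back to the genesis block and recomputing each hash, which is how historical tampering is detected.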

A new systems administrator at an organization has a difficult time understanding some of the configurations from the previous IT staff. It appears many shortcuts were taken to keep systems running and users happy. Which weakness does the administrator report this configuration as?

Complex dependencies

Overdependence on perimeter security

Availability over confidentiality and integrity

Single points of failure

C

Availability over confidentiality and integrity is often presented by taking "shortcuts" to get a service up and running. Compromising security might represent a quick fix but creates long-term risks.

Complex dependencies may include services that require many different systems to be available. Ideally, the failure of individual systems or services should not affect the overall performance of other network services.

Overdependence on perimeter security can occur if the network architecture is "flat." Penetrating the network edge gives the attacker freedom of movement.

A single point of failure is a "pinch point" in a network that may rely on a single hardware server or appliance.

P236

Secure Network Designs

A secure network design provisions the assets and services underpinning business workflows with the properties of confidentiality, integrity, and availability. Weaknesses in the network architecture make it more susceptible to undetected intrusions or to catastrophic service failures. Typical weaknesses include:

• Single points of failure—a "pinch point" relying on a single hardware server or appliance or network channel.
• Complex dependencies—services that require many different systems to be available. Ideally, the failure of individual systems or services should not affect the overall performance of other network services.
• Availability over confidentiality and integrity—often it is tempting to take "shortcuts" to get a service up and running. Compromising security might represent a quick fix but creates long-term risks.
• Lack of documentation and change control—network segments, appliances, and services might be added without proper change control procedures, leading to a lack of visibility into how the network is constituted. It is vital that network managers understand business workflows and the network services that underpin them.

• Overdependence on perimeter security—if the network architecture is "flat" (that is, if any host can contact any other host), penetrating the network edge gives the attacker freedom of movement.

An engineer configures a proxy to control access to online content for all users in an organization. Which proxy type does the engineer implement by using an inline network appliance? (Select all that apply.)

Non-transparent

Transparent

Intercepting

Application

BC

A transparent proxy must be implemented on a switch, router, or other inline network appliance.

An intercepting proxy (also known as a transparent proxy) is configured to intercept client traffic without the client having to be reconfigured.

A non-transparent proxy configuration means that the client must be configured with the proxy server address and port number to use it.

Proxy servers can be application-specific; others are multipurpose. A multipurpose proxy is one configured with filters for multiple protocol types. In this case, the target is not a specific application.

P271

A proxy server must understand the application it is servicing. For example, a web proxy must be able to parse and modify HTTP and HTTPS commands (and potentially HTML and scripts too). Some proxy servers are application-specific; others are multipurpose. A multipurpose proxy is one configured with filters for multiple protocol types, such as HTTP, FTP, and SMTP. Proxy servers can generally be classed as non-transparent or transparent.

• A non-transparent proxy means that the client must be configured with the proxy server address and port number to use it. The port on which the proxy server accepts client connections is often configured as port 8080.

• A transparent (or forced or intercepting) proxy intercepts client traffic without the client having to be reconfigured. A transparent proxy must be implemented on a switch or router or other inline network appliance.
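The non-transparent case, where the client itself holds the proxy address and port, can be shown with Python's standard urllib. The proxy hostname below is made up; only the configuration pattern is the point.

```python
import urllib.request

# Sketch of a non-transparent proxy: the client is explicitly configured
# with the proxy's address and port (8080, the conventional proxy port).
# The proxy host is a placeholder, not a real server.
proxy = urllib.request.ProxyHandler({
    "http": "http://proxy.example.internal:8080",
    "https": "http://proxy.example.internal:8080",
})
opener = urllib.request.build_opener(proxy)

# A transparent proxy needs none of this client-side setup: the switch
# or router redirects the traffic, and the client keeps its defaults.
print(sorted(proxy.proxies))
```

No request is made here; the fragment only demonstrates where the proxy address lives in a non-transparent deployment, which is exactly the configuration burden a transparent proxy removes.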

Analyze the following statements and select the one that describes key differences between internet protocol security (IPSec) modes.

Transport mode allows communication between virtual private networks (VPNs), while tunnel mode secures communications between hosts on a private network.

Authentication Header (AH) mode does not provide confidentiality, as the payload is not encrypted. Encapsulation Security Payload (ESP) mode provides confidentiality and/or authentication and integrity.

Tunnel mode allows communication between virtual private networks (VPNs), while transport mode secures communications between hosts on a private network.

Encapsulation Security Payload (ESP) mode does not provide confidentiality, as the payload is not encrypted. Authentication Header (AH) mode provides confidentiality and/or authentication and integrity.

C

Tunnel mode, also called router implementation, creates a virtual private network (VPN), allowing communications between VPN gateways across an unsecure network.

Transport mode secures communications between hosts on a private network (an end-to-end implementation).

The AH protocol authenticates the origin of transmitted data and provides integrity and protection against replay attacks. The payload is not encrypted, so this protocol does not provide confidentiality.

ESP is an IPSec sub-protocol that enables encryption and authentication of a data packet's header and payload. Encapsulation Security Payload (ESP) provides confidentiality and/or authentication and integrity, and can be used to encrypt the packet.

P316

IPSec can be used in two modes:

• Transport mode—this mode is used to secure communications between hosts on a private network (an end-to-end implementation). When ESP is applied in transport mode, the IP header for each packet is not encrypted, just the payload data. If AH is used in transport mode, it can provide integrity for the IP header.

• Tunnel mode—this mode is used for communications between VPN gateways across an unsecure network (creating a VPN). This is also referred to as a router implementation. With ESP, the whole IP packet (header and payload) is encrypted and encapsulated as a datagram with a new IP header. AH has no real use case in tunnel mode, as confidentiality will usually be required.

An organization installs embedded systems throughout a manufacturing plant. When planning the install, engineers had to consider system constraints related to identification. As a result, which areas of the main systems are impacted? (Select all that apply.)

Compute

Network

Crypto

Authentication

CD (because this question asks about the authentication-related options)

The lack of compute resources means that embedded systems are not well-matched to the cryptographic identification technologies that are widely used on computer networks.

As embedded systems become more accessible, they will need to use authentication technologies to ensure consistent confidentiality, integrity, and availability.

Due to their size, embedded systems are usually constrained in terms of processor capability (cores and speed), system memory, and persistent storage.

Networks for embedded systems emphasize the power-efficient transfer of small amounts of data with a high degree of reliability and low latency.

P341

Identify the true statements about supervisory control and data acquisition (SCADA) systems. (Select all that apply.)

SCADA systems typically communicate with one another through LAN connections.

SCADA systems typically run as software on ordinary computers, gathering data from and managing field devices.

SCADA systems are purpose-built devices that prioritize IT security features.

SCADA systems serve primarily industrial, manufacturing, utility, and logistics sectors.

BD

SCADA typically runs as software on ordinary computers, gathering data from and managing plant devices and equipment with embedded PLCs, referred to as field devices.

Many sectors of industry, including utilities, industrial processing, fabrication and manufacturing, logistics, and facilities management, use these types of systems.

SCADA typically uses WAN communications, such as cellular or satellite, to link the SCADA server to field devices.

ICS/SCADA was historically built without regard to IT security, though there is now high awareness of the necessity of enforcing security controls to protect them, especially when they operate in a networked environment.

P344

An engineering firm provisions microwave technology for a wide area communications project. When using point-to-multipoint (P2M) mode, which technologies does the firm put in place? (Select all that apply.)

Directional antennas

Sectoral antennas

Multiple sites connected to a single hub

High gain link between two sites

BC

Point-to-multipoint (P2M) microwave links multiple sites and uses smaller sectoral antennas than P2P, each covering a separate quadrant.

P2M links multiple sites or subscriber nodes to a single hub. This can be more cost-efficient in high-density urban areas and requires less radio spectrum.

A high-gain connection means that the antennas used between sites are highly directional. Each antenna is pointed directly at the other.

Point-to-point (P2P) microwave uses high-gain antennas to link two sites. The satellite modems or routers are also normally paired to one another.

P372

Point-to-multipoint (P2M) microwave uses smaller sectoral antennas, each covering a separate quadrant. Where P2P is between two sites, P2M links multiple sites or subscriber nodes to a single hub. This can be more cost-efficient in high-density urban areas and requires less radio spectrum. Each subscriber node is distinguished by multiplexing. Because of the higher risk of signal interception compared to P2P, it is crucial that links be protected by over-the-air encryption.

A company follows a bring your own device (BYOD) mobile implementation. What is an ideal solution the company can use to overcome some of the security risks involved with employee-supplied devices?

Virtual desktop infrastructure (VDI)

Location services

Remote wipe

Carrier unlocking

A

Virtual desktop infrastructure (VDI) means provisioning an OS desktop to interchangeable hardware. The hardware only has to be capable of running a VDI client viewer or have a browser that supports a clientless HTML5 solution. Each time a user accesses VDI, the session is "as new," and employees can remotely access it.

Location services alone represent a security risk. Location services can use geo-fencing to enforce context-aware authentication based on the device's location.

If a malicious actor steals a user's device, a remote wipe (kill switch) can reset the device to factory defaults or clear personal data (sanitization).

Carrier unlocking involves the removal of restrictions that lock a device to a single carrier and can be used for privilege escalation.

P423

Virtual desktop infrastructure (VDI) refers to using a VM as a means of provisioning corporate desktops. In a typical VDI, desktop computers are replaced by low-spec, low-power thin client computers. When the thin client starts, it boots a minimal OS, allowing the user to log on to a VM stored on the company server infrastructure. The user makes a connection to the VM using some sort of remote desktop protocol (Microsoft Remote Desktop or Citrix ICA, for instance). The thin client has to find the correct image and use an appropriate authentication mechanism. There may be a 1:1 mapping based on machine name or IP address, or the process of finding an image may be handled by a connection broker.

P623

Virtual Desktop Infrastructure (VDI) allows a client device to access a VM. In this scenario, the mobile device is the client device. Corporate data is stored and processed on the VM, so there is less chance of it being compromised, even though the client device itself is not fully managed.

A cloud engineer configures a virtual private cloud. While trying to create a public subnet, the engineer experiences difficulties. The issue is that the subnet remains private, while the goal is to have a public subnet. What does the engineer conclude the problem might be?

The Internet gateway is configured as the default route.

The Internet gateway is not configured as the default route.

The Internet gateway uses 1:1 network address translation.

The Internet gateway does not use 1:1 network address translation.

B

To configure a public subnet, first an Internet gateway (virtual router) must be attached to the VPC configuration. Secondly, the Internet gateway must be configured as the default route for each public subnet.

After a VPC has a virtual router attached, a gateway is set as a default route. If an Internet gateway is not assigned as a default route, the subnet is private.

Each instance in a public subnet is configured with a public IP in its cloud profile. The Internet gateway performs 1:1 network address translation (NAT) to route Internet communications to and from the instance.

Typically, the virtual Internet gateway performs 1:1 network address translation (NAT) to route Internet communications to and from the instance. One-to-many is another NAT approach.

P433

Virtual Private Clouds (VPCs)

Each customer can create one or more virtual private clouds (VPCs) attached to their account. By default, a VPC is isolated from other CSP accounts and from other VPCs operating in the same account. This means that customer A cannot view traffic passing over customer B's VPC. The workload for each VPC is isolated from other VPCs. Within the VPC, the cloud consumer can assign an IPv4 CIDR block and configure one or more subnets within that block. Optionally, an IPv6 CIDR block can be assigned also. The following notes focus on features of networking in AWS. Other vendors support similar functionality, though sometimes with different terminology. For example, in Microsoft Azure, VPCs are referred to as virtual networks.

Public and Private Subnets

Each subnet within a VPC can either be private or public. To configure a public subnet, first an Internet gateway (virtual router) must be attached to the VPC configuration. Secondly, the Internet gateway must be configured as the default route for each public subnet. If a default route is not configured, the subnet remains private, even if an Internet gateway is attached to the VPC.
Each instance in the subnet must also be configured with a public IP in its cloud profile. The Internet gateway performs 1:1 network address translation (NAT) to route Internet communications to and from the instance.

The instance network adapter is not configured with this public IP address. The instance's NIC is configured with an IP address for the subnet. The public address is used by the virtualization management layer only. Public IP addresses can be assigned from your own pool or from a CSP-managed service, such as Amazon's Elastic IP.
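The public-subnet rule above can be restated as a small check. This is a hypothetical model, not an AWS API: the route-table and gateway data shapes are invented to show that both conditions (gateway attached to the VPC and set as the 0.0.0.0/0 default route) must hold before a subnet is public.

```python
# Hypothetical model of the rule: a subnet is public only when an
# Internet gateway is attached to the VPC AND is that subnet's default
# (0.0.0.0/0) route. Data shapes are illustrative, not a cloud API.

def is_public(subnet_routes, attached_gateways):
    default = subnet_routes.get("0.0.0.0/0")
    return default is not None and default in attached_gateways

vpc_gateways = {"igw-1234"}  # gateway attached to the VPC

# Gateway attached AND set as the default route -> public subnet.
print(is_public({"0.0.0.0/0": "igw-1234"}, vpc_gateways))  # True

# Gateway attached but no default route -> the subnet stays private,
# which is the situation the engineer in the question ran into.
print(is_public({"10.0.0.0/16": "local"}, vpc_gateways))   # False
```

The second case mirrors the scenario: attaching the gateway alone is not enough; the missing default route keeps the subnet private.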

A business is setting up new network devices. Compare the permissions allocated to each account and determine which type of account is most appropriate for the installation of device drivers.

Administrator/Root account

Administrator's user account

Network service account

Local service account

A

The local system account creates the host processes that start Windows before the user logs on. Administrative or privileged accounts can install and remove apps and device drivers. Admins should prohibit superuser accounts from logging on in normal circumstances.

Admins should replace the default superuser with named accounts that have sufficient elevated privileges for a given job role. This ensures that admins can audit administrative activity and the system conforms to non-repudiation.

A network service account has the same privileges as the standard user account but can present the computer's account credentials when accessing network resources.

A local service account has the same privileges as the standard user account. It can only access network resources as an anonymous user.

A user enters a card equipped with a secure processing chip into a reader and then enters a PIN for Kerberos authentication. What authentication method is described here? (Select all that apply.)

Trusted Platform Module (TPM) authentication

Smart-card authentication

Multifactor authentication

One-time password (OTP) token authentication

BC

Smart-card authentication means programming cryptographic information onto a card equipped with a secure processing chip. The chip stores the user's digital certificate, the private key associated with the certificate, and a personal identification number (PIN) used to activate the card.

Strong, multifactor authentication (MFA) technology combines the use of more than one type of knowledge, ownership, and biometric factor.

A Trusted Platform Module (TPM) is a cryptoprocessor enclave implemented on a PC, laptop, smartphone, or network appliance. The TPM is usually a module within the CPU and can be used to present a virtual smart card.

A one-time password (OTP) is generated automatically, rather than being chosen by a user, and used only once.

Which of the following defines key usage with regard to standard extensions?

The purpose for which a certificate was issued

The ability to create a secure key pair

Configuring the security log to record key indicators

To archive a key with a third party

A

One of the most important standard extensions is key usage. This extension defines the purpose for issuing a digital certificate, such as for signing documents or key exchange.

The ability to create a secure key pair of the required strength using the chosen cipher is key generation, not key usage.

Configuring the security log to record key indicators and then reviewing the logs for suspicious activity is usage auditing, not key usage.

In terms of key management, escrow refers to archiving a key (or keys) with a third party. It is not key usage.

P140

A user enters the web address of a favorite site and the browser returns the following: "There is a problem with this website's security certificate." The user visits this website frequently and has never had a problem before. Applying knowledge of server certificates, select the circumstances that could cause this error message. (Select all that apply.)

The system's time setting is incorrect.

The certificate is pinned.

The web address was mistyped.

The certificate expired.

AD

If the date and time settings on the system are not synchronized with the server's settings, the server's certificate will be rejected.

An expired server certificate would cause the browser to return an error message.

Certificate pinning ensures that when a client inspects the certificate presented by a server, it is inspecting the proper certificate. This is mostly done to prevent a man-in-the-middle attack and would not generate an error message.

A mistyped web address would not return an error message about the server certificate. It would return a message that the website could not be found.
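Both date-related causes reduce to the same check, sketched below with the standard datetime module; the validity dates are made up. A certificate is rejected whenever the local clock's "now" falls outside the notBefore/notAfter window, whether because the certificate really expired or because the system time is wrong.

```python
from datetime import datetime, timedelta

# Sketch of the validity-window check a browser performs. The dates
# here are invented for illustration.

def cert_time_ok(not_before, not_after, local_now):
    """True if the local clock falls inside the certificate's window."""
    return not_before <= local_now <= not_after

nb = datetime(2024, 1, 1)   # notBefore
na = datetime(2025, 1, 1)   # notAfter

print(cert_time_ok(nb, na, datetime(2024, 6, 1)))   # True: valid
print(cert_time_ok(nb, na, datetime(2025, 6, 1)))   # False: expired
skewed_clock = datetime(2024, 6, 1) - timedelta(days=400)
print(cert_time_ok(nb, na, skewed_clock))           # False: clock set wrong
```

The last two cases produce the same browser error, which is why an incorrect system time (A) and an expired certificate (D) are both correct answers.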

A suspected network breach prompts an engineer to investigate. The engineer utilizes a set of command line tools to collect network routing data. While doing so, the engineer discovers that UDP communications are not working as expected. Which tool does the engineer experience difficulty with?

route

tracert

pathping

traceroute

Answer: D. The traceroute command performs route discovery from a Linux host. By default, this command uses UDP probes rather than ICMP. The route command displays and modifies a system's local routing table. This command does not collect network data. The tracert command uses ICMP probes to report the round trip time (RTT) for hops between the local host and a host on a remote network. This command is a Windows-based tool.

The pathping command is a Windows tool that provides statistics for latency and packet loss along a route over a measuring period.

There are a variety of methods for indicating a potential security breach during the identification and detection phase of incident response. Two examples are Intrusion Detection System (IDS) alerts and firewall alerts. Evaluate the following evidence and select the alternate methods that would be of most interest to the IT department during this phase. (Select all that apply.)

A daily industry newsletter reports on a new vulnerability in the software version that runs on the company's server.
An anonymous employee uses an "out of band" communication method to report a suspected insider threat.
The marketing department contacts the IT department because they cannot post a company document to the company's social media account.

An employee calls the help desk because the employee is working on a file and is unable to save it to a USB to work on at home.

Answers: A, B. A media report of a newly discovered vulnerability in the version of software that’s currently running would be valuable information that should be addressed immediately. A whistleblower with information about a potential insider threat would be worthy of pursuit. “Out of band” is an authenticated communications channel separate from the company’s primary channel. If the marketing department is trying to post a document that has been identified as confidential data, the IT department would not be concerned since the company’s data loss prevention mechanisms are working.

If an employee is trying to save a document that has been identified as confidential data to USB and it fails, the IT department would not be concerned since the company’s data loss prevention mechanisms are working.

A response team has to balance the need for business continuity with the desire to preserve evidence when making incident management decisions. Consider the following and determine which would be an effective course of action for the goal of collecting and preserving evidence to pursue prosecution of the attacker(s). (Select all that apply.)

Analysis
Quarantine
Hot swap

Prevention

Answers: B, C. Quarantining is the process of isolating a file, computer system, or computer network to prevent the spread of a virus or another cybersecurity incident. This allows for analysis of the attack and collection of evidence. A hot swap involves bringing a backup system into operation, and the live system is frozen to preserve evidence of the attack. Analysis is an early stage in the process and involves determining whether a genuine incident has been identified and what level of priority it should be assigned. Gathering and preserving evidence is not a consideration at this point.

Prevention occurs when the response team takes countermeasures to end the incident on the live system, without regard to preserving evidence.

A company hires a security consultant to train the IT team in incident response procedures. The consultant facilitates a question and answer session, and the IT team practices running scans. Examine the scenario and determine which type of incident response exercise the consultant conducts.

Tabletop exercise
Walkthrough
Simulation

Forensics

Answer: B. In a walkthrough, a facilitator presents a scenario and the incident responders demonstrate what actions they would take. Responders may run scans and analyze sample files, typically on sandboxed versions of the company's actual response and recovery tools. The facilitator in a tabletop exercise presents a scenario and the responders explain what action they would take to manage the threat, without the use of computer systems. Simulations are team-based exercises, where the red team attempts an intrusion, the blue team operates response and recovery controls, and a white team moderates and evaluates the exercise.

Digital forensics describes techniques to collect and preserve evidence. Forensics procedures are detailed and time-consuming, whereas incident response is usually urgent.

During a cyber incident response exercise, a blue team takes steps to ensure the company and its affiliates can still use network systems while managing a simulated threat in real-time. Based on knowledge of incident response procedures, what stage of the incident response process is the blue team practicing?

Containment
Identification
Eradication

Recovery

Answer: A. The goal of the containment stage is to secure data while limiting the immediate impact on customers and business partners. Based on an alert or report, identification determines whether an incident has taken place, how severe it might be (triage), and notifies stakeholders. Once the security admin contains the incident, eradication removes the cause and restores the affected system to a secure state.

Once the security admin eradicates the cause of the incident, they can reintegrate the system into the business process that it supports. This recovery phase may involve restoration of data from backup and security testing.

A security information and event management (SIEM) handler’s dashboard provides graphical representations of user profile trends. The graphic contrasts standard user activity with administrative user activity and flags activity that deviates from these clusters. This graphical representation utilizes which trend analysis methodology?

Frequency-based trend analysis
Volume-based trend analysis
Statistical deviation analysis

Syslog trend analysis

Answer: C. Statistical deviation analysis can alert security admins to a suspicious data point. A cluster graph might show activity by standard users and privileged users, and data points outside these clusters may indicate suspicious account activity. Frequency-based trend analysis establishes a baseline for a metric, and if frequency exceeds the baseline threshold, then the system raises an alert. Volume-based trend analysis uses simpler indicators, such as log or network traffic volume, or endpoint disk usage. Unusual log growth needs investigating, and unexpected disk capacity may signify data exfiltration.

Syslog provides an open format, protocol, and server software for logging event messages. A very wide range of host types use Syslog.
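As a rough illustration of statistical deviation analysis, the sketch below flags data points that fall far outside a cluster using a simple z-score test. The log-on counts, the two-sigma threshold, and the function name are invented for this example; a real SIEM uses far more sophisticated models.

```python
# Illustrative sketch (not from the text): flagging a suspicious data point
# with statistical deviation analysis, as a SIEM dashboard might.
from statistics import mean, stdev

def flag_outliers(samples, threshold=2.0):
    """Return values whose z-score exceeds the threshold (assumed 2 sigma)."""
    mu, sigma = mean(samples), stdev(samples)
    return [x for x in samples if abs(x - mu) / sigma > threshold]

# Invented daily privileged-account log-on counts; 40 deviates sharply
# from the cluster of typical values and is flagged.
logons = [4, 5, 3, 6, 5, 4, 40]
print(flag_outliers(logons))  # → [40]
```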

A security investigator compiles a report for an organization that lost data in a breach. Which ethical approach does the investigator apply while collecting data for the report?

Search for relevant information
Apply standard tags to files
Disclosure of evidence

Using repeatable methods

Answer: D. Analysis methods should follow strong ethical principles and must be repeatable by third parties with access to the same evidence. This can indicate that any evidence has not been changed or manipulated. Searching information through e-discovery allows investigators to locate files of interest to the case. As well as keyword search, software might support semantic search. Applying standardized keywords or tags to files and metadata helps to organize evidence. Tags might be used to indicate relevancy to a case or part of a case.

Disclosure is an important part of trial procedure. Disclosure states that the same evidence be made available to both plaintiff and defendant.

Which of the following sets properly orders forensic data acquisition by volatility priority?

1. System memory caches 2. Data on persistent mass storage devices 3. Archival media 4. Remote monitoring data
1. System memory caches 2. Remote monitoring data 3. Data on persistent mass storage devices 4. Archival media
1. Data on persistent mass storage devices 2. System memory caches 3. Remote monitoring data 4. Archival media

1. Remote monitoring data 2. Data on persistent mass storage devices 3. System memory caches 4. Archival media

Answer: C. CPU registers and cache memory are most highly volatile, although they may not be accessible as sources of forensics evidence. Non-persistent system memory (RAM) contents, including the routing table, ARP cache, process table, and kernel statistics, occupy the second volatility priority. The third most volatile category includes data on persistent mass storage devices. This includes system memory caches, partition and file system blocks, slack space, free space, temporary file caches, and user, application, and OS files and directories. Remote logging and monitoring data comprise the fourth most volatile form of data, followed by physical configuration and network topology. Archival media and printed documents are considered least volatile.

Data acquisition usually proceeds by using a tool to make an image from the data held on the target device. An image can be acquired from either volatile or nonvolatile storage. The general principle is to capture evidence in the order of volatility, from more volatile to less volatile. The ISOC best practice guide to evidence collection and archiving, published as tools.ietf.org/html/rfc3227, sets out the general order as follows:

1. CPU registers and cache memory (including cache on disk controllers, GPUs, and so on).
2. Contents of nonpersistent system memory (RAM), including the routing table, ARP cache, process table, and kernel statistics.
3. Data on persistent mass storage devices (HDDs, SSDs, and flash memory devices):
• Partition and file system blocks, slack space, and free space.
• System memory caches, such as swap space/virtual memory and hibernation files.
• Temporary file caches, such as the browser cache.
• User, application, and OS files and directories.
4. Remote logging and monitoring data.
5. Physical configuration and network topology.
6. Archival media and printed documents.
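The order of volatility can be treated as a simple priority list. The sketch below (an assumption for illustration, not part of RFC 3227 itself) encodes the categories so that a mixed set of evidence sources can be sorted most-volatile-first before acquisition; the category strings and function name are invented.

```python
# Minimal sketch: order-of-volatility priorities encoded as a list,
# most volatile first, so evidence sources can be sorted for acquisition.
VOLATILITY_ORDER = [
    "CPU registers and cache memory",
    "System memory (RAM) contents",
    "Data on persistent mass storage devices",
    "Remote logging and monitoring data",
    "Physical configuration and network topology",
    "Archival media and printed documents",
]

def acquisition_order(sources):
    """Sort evidence sources by volatility priority (most volatile first)."""
    return sorted(sources, key=VOLATILITY_ORDER.index)

print(acquisition_order([
    "Archival media and printed documents",
    "System memory (RAM) contents",
    "Data on persistent mass storage devices",
]))
```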

After a break-in at a government laboratory, some proprietary information was stolen and leaked. Which statement best summarizes how the laboratory can implement security controls to prevent future breaches?

The laboratory needs to take detective action and should implement physical and deterrent controls in the future.
The laboratory needs to take detective action and should implement corrective controls in the future.
The laboratory needs to take compensatory action and should implement physical controls in the future.

The laboratory needs to take corrective action and should implement both physical and preventative controls in the future.

Answer: D. Following a break-in that included both physical intrusion and data compromise, the lab should take corrective action to reduce the impact of the intrusion event. Implementing preventative measures can help secure data from future attacks, and physical controls can mitigate the probability of future physical break-ins. Deterrent controls, such as warning signs, may not physically or logically prevent access, but psychologically discourage attackers from attempting an intrusion. Detective controls, such as logs, which operate during an attack, may not prevent or deter access, but they will identify and record any attempted or successful intrusion.

Compensating controls serve as a substitute for a principal control, but corrective controls reduce the impact of an intrusion event.

Which of the following policies support separation of duties? (Select all that apply.)

Employees must take at least one, five-consecutive-day vacation each year.
Employees must stay in the same role for a minimum of two years prior to promotion.
A principle of least privilege is utilized and critical tasks are distributed between two employees.

Standard Operating Procedures (SOPs) are in effect in each office.

Answers: A, C, D. Mandatory vacations force employees to take earned vacation time. During this time, someone else fulfills their duties while they are away so audits can occur and potential discrepancies can be identified. The principle of least privilege solely grants a user sufficient rights to perform a specific job. For critical tasks, duties should be divided between several people. Standard Operating Procedures (SOPs) result in the employee having no cause for lapses in following protocol in terms of performing these types of critical operations. It is advisable that employees do not stay in the same role for an extended period of time. For example, managers may be moved to different departments periodically.

Separation of duties is a means of establishing checks and balances against the possibility that critical systems or procedures can be compromised by insider threats. Duties and responsibilities should be divided among individuals to prevent ethical conflicts or abuse of powers. An employee is supposed to work for the interests of their organization exclusively. A situation where someone can act in his or her own interest, personally, or in the interests of a third party is said to be a conflict of interest. Separation of duties means that employees must be constrained by security policies:

• Standard operating procedures (SOPs) mean that an employee has no excuse for not following protocol in terms of performing these types of critical operations.
• Shared authority means that no one user is able to action or enable changes on his or her own authority. At least two people must authorize the change. One example is separating responsibility for purchasing (ordering) from that of authorizing payment. Another is that a request to create an account should be subject to approval and oversight.

Separation of duties does not completely eliminate risk because there is still the chance of collusion between two or more people. This, however, is a much less likely occurrence than a single rogue employee.

An IT engineer looks to practice very rigid configuration management. The primary goal is to ensure very little deviation from an initial install of systems. Which method does the engineer utilize to accomplish this?

Templates
Diagrams
Baselines

Microservices

Answer: C. A baseline configuration is the template of settings that a device, VM instance, or other CI was configured to, and that a system should continue to match while in use. A template is a predefined set of settings that are used when deploying a system. With configuration management, a template helps to deploy a uniform environment. Diagrams are the best way to capture the complex relationships between network elements. Diagrams can be used to show how CIs are involved in business workflows.

With serverless technologies, applications are developed as functions and microservices, each interacting with other functions to facilitate client requests. Microservices are not a management item.

An organization prepares for an audit of all systems security. While doing so, staff perform a risk management exercise. Which phase does the staff consider first?

Identify vulnerabilities
Identify essential functions
Analyze business impact

Identify risk response

Answer: B. Effective risk management must focus on mission essential functions that could cause the whole business to fail if they are not performed. Identifying these systems and processes should be done first. Identifying vulnerabilities for each function or workflow (starting with the most critical) is done by analyzing systems and assets to discover and list any vulnerabilities or weaknesses. Analyzing business impacts identifies the likelihood of a vulnerability being activated as a security incident by a threat and the impact of that incident on critical systems. Identifying risk response for each risk requires identifying possible countermeasures and assessing the cost of deploying additional security controls to protect systems and processes.

Risk management is a process for identifying, assessing, and mitigating vulnerabilities and threats to the essential functions that a business must perform to serve its customers. You can think of this process as being performed over five phases:

1. Identify mission essential functions—mitigating risk can involve a large amount of expenditure, so it is important to focus efforts. Effective risk management must focus on mission essential functions that could cause the whole business to fail if they are not performed. Part of this process involves identifying critical systems and assets that support these functions.
2. Identify vulnerabilities—for each function or workflow (starting with the most critical), analyze systems and assets to discover and list any vulnerabilities or weaknesses to which they may be susceptible.
3. Identify threats—for each function or workflow, identify the threat sources and actors that may take advantage of, exploit, or accidentally trigger vulnerabilities.
4. Analyze business impacts—the likelihood of a vulnerability being activated as a security incident by a threat and the impact of that incident on critical systems are the factors used to assess risk. There are quantitative and qualitative methods of analyzing impacts and likelihood.
5. Identify risk response—for each risk, identify possible countermeasures and assess the cost of deploying additional security controls.

Most risks require some sort of mitigation, but other types of response might be more appropriate for certain types and level of risks.

When a company first installed its computer infrastructure, IT implemented robust security controls. As the equipment ages, however, those controls no longer effectively mitigate new risks. Which statement best summarizes the company’s risk posture?

The company’s aging infrastructure constitutes a control risk.
The company demonstrates risk transference, assigning risk to IT personnel.
The company can expect little to no impact from an outage event.

The company demonstrates effective risk mitigation techniques for low priority systems.

Answer: A. Control risk measures how much less effective a security control has become over time. Risk management is an ongoing process, requiring continual reassessment and re-prioritization. Transference (or sharing) means assigning risk to a third party, such as an insurance company or a contract with a supplier that defines liabilities. A company’s IT department is not a third party. A security categorization (SC) of low risk describes an impact as minor damage to an asset or loss of performance (though essential functions remain operational).

Companies may accept some risks. Risk acceptance means that no countermeasures are emplaced either because the level of risk does not justify the cost or because there will be unavoidable delay before deploying the countermeasures.

A power outage disrupts a medium-sized business, and the company must restore systems from backups. If the business can resume normal operations from a backup made two days ago, what metric does this scenario represent?

Recovery Point Objective (RPO)
Recovery Time Objective (RTO)
Maximum Tolerable Downtime (MTD)

Work Recovery Time (WRT)

Answer: A. RPO is the amount of data loss a system can sustain, measured in time. That is, if a virus destroys a database, an RPO of 24 hours means the system can recover the data (from a backup copy) to a point not more than 24 hours before the infection. RTO is the post-disaster period an IT system may remain offline, including the amount of time it takes to identify a problem and perform recovery. MTD is the longest period of time that a business function outage may occur without causing irrecoverable business failure.

Following system recovery, WRT is the additional work necessary to reintegrate systems, test functionality, and brief users on changes and updates to fully support the business function.
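The RPO arithmetic can be made concrete with a short sketch. The timestamps, the 48-hour RPO, and the function name below are invented for illustration: the potential data loss is simply the interval between the last backup and the incident, compared against the RPO.

```python
# Hedged sketch: does a given backup satisfy the RPO? All values invented.
from datetime import datetime, timedelta

def meets_rpo(last_backup: datetime, incident: datetime, rpo: timedelta) -> bool:
    """True if the potential data loss (incident minus last backup) is within the RPO."""
    return incident - last_backup <= rpo

incident = datetime(2023, 6, 3, 9, 0)
backup = incident - timedelta(days=2)  # backup made two days before the outage

# A 48-hour RPO tolerates exactly this much data loss; a 24-hour RPO does not.
print(meets_rpo(backup, incident, timedelta(hours=48)))  # → True
print(meets_rpo(backup, incident, timedelta(hours=24)))  # → False
```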

A national intelligence agency maintains data on threat actors. If someone intercepted this data, it would pose a serious threat to national security. Analyze the risk of exposure and determine which classification this data most likely holds.

Confidential
Secret
Top secret

Proprietary

Answer: C. Critical or top secret information is too valuable to allow any risk of its capture. Viewing is severely restricted. Confidential or secret information is highly sensitive, for viewing only by approved persons within the owner organization, and possibly by trusted third parties under NDA. The terms confidential and secret are sometimes used interchangeably, but some agencies make a distinction between confidential and secret data.

Another type of classification schema identifies the kind of information asset. Proprietary information or intellectual property (IP) is information a company creates and owns; typically about the products or services that they make or perform.

The U.S. Department of Defense (DoD) awards an IT contract to a tech company to perform server maintenance at a storage facility. What type of agreement must the DoD enter with the tech company to commit the company to implementing agreed upon security controls?

Interconnection security agreement (ISA)
Non-disclosure agreement (NDA)
Data sharing and use agreement

Service level agreement (SLA)

Answer: A. Any federal agency interconnecting its IT system to a third party must create an ISA to govern the relationship. An ISA sets out a security risk awareness process and commits the agency and supplier to implementing security controls. An NDA establishes a legal basis for protecting information assets. If a party breaks this agreement and shares prohibited information, they may face legal consequences. Data sharing and use agreements specify the purpose for which an entity can collect and analyze data, and proscribe the use of re-identification techniques.

An SLA is a contractual agreement detailing the terms under which a contractor provides service. This includes terms for security access controls and risk assessments plus processing requirements for confidential and private data.

Any external responsibility for an organization’s security lies mainly with which individuals?

The owner
Tech staff
Management

Public relations

Answer: A. External responsibility for security (due care or liability) lies mainly with directors or owners. It is important to note that all employees share some measure of responsibility. Technical and specialist staff have the direct responsibility for implementing, maintaining, and monitoring the policy. Security might be made a core competency of systems and network administrators, or there may be dedicated security administrators. Managers at an organization may have responsibility for a specific domain or unit, such as building control, ICT, or accounting.

Non-technical staff have the responsibility of complying with policy and with any relevant legislation. Public relations is responsible for media communications.

What distinguishes DevSecOps from a traditional SOC?

Software code is the responsibility of a programming or development team.
Identification as a single point-of-contact for the notification of security incidents.
A cultural shift within an organization to encourage much more collaboration.

Security is a primary consideration at every stage of software development.

Answer: D. DevSecOps extends the boundary to security specialists and personnel, reflecting the principle that security is a primary consideration at every stage of software development and deployment. Traditionally, software code would be the responsibility of a programming or development team. Separate development and operations departments or teams can lead to silos. A traditional SOC acts as a dedicated cyber incident response team (CIRT)/computer security incident response team (CSIRT)/computer emergency response team (CERT), serving as a single point-of-contact for the notification of security incidents.

Development and operations (DevOps) is a cultural shift within an organization to encourage much more collaboration between developers and system administrators.

The _____ requires federal agencies to develop security policies for computer systems that process confidential information.

Sarbanes-Oxley Act (SOX)
Computer Security Act
Federal Information Security Management Act (FISMA)

Gramm-Leach-Bliley Act (GLBA)

Answer: B. The Computer Security Act (1987) specifically requires federal agencies to develop security policies for computer systems that process confidential information. The Sarbanes-Oxley Act (2002) mandates the implementation of risk assessments, internal controls, and audit procedures. This act is not for any specific entity. The Federal Information Security Management Act (2002) governs the security of data processed by federal government agencies. This act requires agencies to implement an information security program.

The Gramm-Leach-Bliley Act (1999) is a United States federal law that requires financial institutions to explain how they share and protect their customers' private information.

A company has one technician that is solely responsible for applying and testing software and firmware patches. The technician goes on a two-week vacation, and no one is tasked to perform the patching duties during this time. A critical patch is released and not installed due to the absence. According to the National Institute of Standards and Technology (NIST), what has the delay in applying the patch caused?

Control
Risk
Threat

Vulnerability

Answer: D. NIST defines vulnerability as a weakness that could be triggered accidentally or exploited intentionally to cause a security breach. In addition to delays in applying patches, other examples of vulnerabilities include improperly installed hardware, untested software, and inadequate physical security. Control is a system or procedure put in place to mitigate a risk. An example of control is policies or network monitoring to identify unauthorized software. Risk is the likelihood and impact of a threat actor exercising a vulnerability.

Threat is the potential for a threat agent to exercise a vulnerability.

Any part of the World Wide Web that is accessed through non-standard methods and is intentionally not indexed and hidden from a search engine is called a _____.

Dark net
Cyber threat actor
Deep web

Dark web

Answer: C. The deep web is any part of the World Wide Web that is not indexed by a search engine. Examples include pages that require registration, unlinked pages, and pages using nonstandard DNS. A dark net is deliberately concealed and is an overlay to internet infrastructure. A dark net is one type of deep web. Cyber threat actors use deep web pages to communicate and exchange information without detection.

The dark web consists of sites, content, and services that are accessible only over a dark net.

Which of the following could represent an insider threat? (Select all that apply.)

Former employee
Contractor
Customer

White box hacker

Answers: A, B. Anyone who has or had authorized access to an organization’s network, system, or data is considered an insider threat. In this example, a former employee and a contractor fit the criteria. Current employees, business partners, and contractors also qualify as insider threats. A customer does not have authorized access and is unlikely to be affiliated with an organization’s staff.

A white box hacker is given complete access to information about the network, which is useful for simulating the behavior of a privileged insider threat, but they are not an insider threat.

A Department of Defense (DoD) security team identifies a data breach in progress, based on some anomalous log entries, and takes steps to remedy the breach and harden their systems. When they resolve the breach, they want to publish the cyber threat intelligence (CTI) securely, using standardized language for other government agencies to use. The team will transmit the threat data feed via which protocol?

Structured Threat Information eXpression (STIX)
Automated Indicator Sharing (AIS)
Trusted Automated eXchange of Indicator Information (TAXII)

A code repository protocol

Answer: C. The TAXII protocol provides a means for transmitting CTI data between servers and clients. Subscribers to the CTI service obtain updates to the data to load into analysis tools over TAXII. While STIX provides the syntax for describing CTI, the TAXII protocol transmits CTI data between servers and clients. The Department of Homeland Security's (DHS) Automated Indicator Sharing (AIS) is especially aimed at Information Sharing and Analysis Centers (ISACs), but private companies can join too. AIS is based on the STIX and TAXII standards and protocols. A file/code repository holds signatures of known malware code.

When you use a cyber threat intelligence (CTI) platform, you subscribe to a threat data feed. The information in the threat data can be combined with event data from your own network and system logs. An analysis platform performs correlation to detect whether any IoCs are present. There are various ways that a threat data feed can be implemented. The OASIS CTI framework (oasis-open.github.io/cti-documentation) is designed to provide a format for this type of automated feed so that organizations can share CTI. The Structured Threat Information eXpression (STIX) part of the framework describes standard terminology for IoCs and ways of indicating relationships between them. Where STIX provides the syntax for describing CTI, the Trusted Automated eXchange of Indicator Information (TAXII) protocol provides a means for transmitting CTI data between servers and clients. For example, a CTI service provider would maintain a repository of CTI data. Subscribers to the service obtain updates to the data to load into analysis tools over TAXII. This data can be requested by the client (referred to as a collection), or the data can be pushed to subscribers (referred to as a channel). Automated Indicator Sharing (AIS) is a service offered by the Department of Homeland Security (DHS) for companies to participate in threat intelligence sharing (us-cert.gov/ais). It is especially aimed at ISACs, but private companies can join too. AIS is based on the STIX and TAXII standards and protocols.
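To give a feel for the standardized language STIX provides, the sketch below builds a minimal STIX 2.1-style indicator object as a Python dict. The id, timestamps, name, and IP address are all invented for illustration; consult the OASIS STIX specification for the authoritative field list and semantics.

```python
# Illustrative only: a minimal STIX 2.1-style indicator. Field values are
# invented; the OASIS STIX specification defines the real requirements.
import json

indicator = {
    "type": "indicator",
    "spec_version": "2.1",
    "id": "indicator--d81f86b9-975b-4c0b-875e-810c5ad45a4f",  # hypothetical id
    "created": "2023-06-03T09:00:00.000Z",
    "modified": "2023-06-03T09:00:00.000Z",
    "name": "Known C2 address",  # hypothetical IoC
    "pattern": "[ipv4-addr:value = '198.51.100.7']",
    "pattern_type": "stix",
    "valid_from": "2023-06-03T09:00:00.000Z",
}

# Serialized as JSON, this is the kind of object a TAXII server would
# deliver to subscribers as part of a collection or channel.
print(json.dumps(indicator, indent=2))
```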

Compare the following and select the appropriate methods for packet capture. (Select all that apply.)

Wireshark
Packet analyzer
Packet injection

Tcpdump

Answers: A, D. Wireshark and tcpdump are packet sniffers. A sniffer is a tool that captures packets, or frames, moving over a network. Wireshark is an open-source graphical packet capture and analysis utility. Wireshark works with most operating systems, whereas tcpdump is a command line packet capture utility for Linux. A packet analyzer works in conjunction with a sniffer to perform traffic analysis. Protocol analyzers can decode a captured frame to reveal its contents in a readable format, but they do not capture packets. Packet injection involves sending forged or spoofed network traffic by inserting (or injecting) frames into the network stream. Packets are not captured with packet injection.

A protocol analyzer (or packet analyzer) works in conjunction with a sniffer to perform traffic analysis. You can either analyze a live capture or open a saved capture (.pcap) file. Protocol analyzers can decode a captured frame to reveal its contents in a readable format. You can choose to view a summary of the frame or choose a more detailed view that provides information on the OSI layer, protocol, function, and data.

Wireshark (wireshark.org) is an open-source graphical packet capture and analysis utility, with installer packages for most operating systems. Having chosen the interface to listen on, the output is displayed in a three-pane view. The packet list pane shows a scrolling summary of frames. The packet details pane shows expandable fields in the frame currently selected from the packet list. The packet bytes pane shows the raw data from the frame in hex and ASCII. Wireshark is capable of parsing (interpreting) the headers and payloads of hundreds of network protocols.

Select the statement which best describes the difference between a zero-day vulnerability and a legacy platform vulnerability.
A legacy platform vulnerability is unpatchable, while a zero-day vulnerability may be exploited before a developer can create a patch for it.
A zero-day vulnerability is unpatchable, while a legacy platform vulnerability can be patched, once detected.
A zero-day vulnerability can be mitigated by responsible patch management, while a legacy platform vulnerability cannot be patched.
A legacy platform vulnerability can be mitigated by responsible patch management, while a zero-day vulnerability does not yet have a patch solution.

A
A zero-day vulnerability is exploited before the developer knows about it or can release a patch. These can be extremely destructive, as it can take the vendor some time to develop a patch, leaving systems vulnerable in the interim. A legacy platform is no longer supported with security patches by its developer or vendor. By definition, legacy platforms are not patchable. Legacy systems are highly likely to be vulnerable to exploits and must be protected by security controls other than patching, such as isolating them to networks that an attacker cannot physically connect to.

Even if effective patch management procedures are in place, attackers may still be able to use zero-day software vulnerabilities, before a vendor develops a patch.

An outside security consultant updates a company’s network, including data cloud storage solutions. The consultant leaves the manufacturer’s default settings when installing network switches, assuming the vendor shipped the switches in a default-secure configuration. Examine the company’s network security posture and select the statements that describe key vulnerabilities in this network. (Select all that apply.)
The network is open to third-party risks from using an outside contractor to configure cloud storage settings.
The default settings in the network switches represent a weak configuration.
The use of network switches leaves numerous unused ports open.
The default settings in the network switches represent unsecured protocols.

AB
Weaknesses in products or services in a supply chain can impact service availability and performance, or lead to data breaches. Suppliers and vendors in the chain rely on each other to perform due diligence. Relying on the manufacturer default settings when deploying an appliance or software applications is a weak configuration. Although many vendors ship products in secure default configurations, it is insufficient to rely on default settings. Default settings may leave unsecure interfaces enabled that allow an attacker to compromise the device. Weak settings on network appliances can allow attackers to move through the network unhindered and snoop on traffic.

An unsecure protocol transfers data as cleartext. It does not use encryption for data protection.

In which of these situations might a non-credentialed vulnerability scan be more advantageous than a credentialed scan? (Select all that apply.)
When active scanning poses no risk to system stability
External assessments of a network perimeter
Detection of security setting misconfiguration
Web application scanning

BD
Non-credentialed scanning is often the most appropriate technique for external assessment of the network perimeter or when performing web application scanning. A non-credentialed scan proceeds by directing test packets at a host without being able to log on to the OS or application. A non-credentialed scan provides a view of what the host exposes to an unprivileged user on the network. A passive scan has the least impact on the network and on hosts but is less likely to identify vulnerabilities comprehensively. Configuration reviews investigate how system misconfigurations make controls less effective or ineffective, such as antivirus software not being updated, or management passwords left configured to the default. Configuration reviews generally require a credentialed scan.
P71
Credentialed versus Non-Credentialed Scanning
A non-credentialed scan is one that proceeds by directing test packets at a host without being able to log on to the OS or application. The view obtained is the one that the host exposes to an unprivileged user on the network. The test routines may be able to include things such as using default passwords for service accounts and device management interfaces, but they are not given privileged access. While you may discover more weaknesses with a credentialed scan, you sometimes will want to narrow your focus to think like an attacker who doesn't have specific high-level permissions or total administrative access. Non-credentialed scanning is often the most appropriate technique for external assessment of the network perimeter or when performing web application scanning.

A credentialed scan is given a user account with log-on rights to various hosts, plus whatever other permissions are appropriate for the testing routines. This sort of test allows much more in-depth analysis, especially in detecting when applications or security settings may be misconfigured. It also shows what an insider attack, or one where the attacker has compromised a user account, may be able to achieve. A credentialed scan is a more intrusive type of scan than non-credentialed scanning.

A contractor has been hired to conduct penetration testing on a company's network. They have decided to try to crack the passwords on a percentage of systems within the company. They plan to annotate the type of data that is on the systems that they can successfully crack to prove the ease of access to data. Evaluate the penetration steps and determine which are being utilized for this task. (Select all that apply.)
Test security controls
Bypass security controls
Verify a threat exists
Exploit vulnerabilities

AD
Two penetration test steps are being utilized by actively testing security controls and exploiting the vulnerabilities. Identifying weak passwords is actively testing security controls. In addition, exploiting vulnerabilities is being used by proving that a vulnerability is high risk. The list of critical data obtained will prove that the weak passwords can allow access to critical information. Bypassing security controls can be accomplished by going around controls that are already in place to gain access.

Verifying that a threat exists would have consisted of using surveillance, social engineering, network scanners, and/or vulnerability assessment tools to identify vulnerabilities.

A hacker set up a Command and Control network to control a compromised host. What is the ability of the hacker to use this remote connection method as needed known as?
Weaponization
Persistence
Reconnaissance
Pivoting

B
(Note: this question is poorly written; it asks about penetration testing but mixes in the cyber kill chain.)
Persistence refers to the hacker’s ability to reconnect to the compromised host and use it as a Remote Access Tool (RAT) or backdoor. To do this, the hacker must establish a Command and Control (C2 or C&C) network. Weaponization is an exploit used to gain some sort of access to a target's network, but it doesn't involve being able to reconnect. Reconnaissance is the process of gathering information; it is not related to Command and Control networks. Pivoting follows persistence. It involves a system and/or set of privileges that allow the hacker to compromise other network systems (lateral spread). The hacker likely has to find some way of escalating the privileges available to him/her.
P80
Pen Test Attack Life Cycle
In the kill chain attack life cycle, reconnaissance is followed by an initial exploitation phase where a software tool is used to gain some sort of access to the target's network. This foothold might be accomplished using a phishing email and payload or by obtaining credentials via social engineering. Having gained the foothold, the pen tester can then set about securing and widening access. A number of techniques are required:
• Persistence—the tester's ability to reconnect to the compromised host and use it as a remote access tool (RAT) or backdoor. To do this, the tester must establish a command and control (C2 or C&C) network to use to control the compromised host, upload additional attack tools, and download exfiltrated data. The connection to the compromised host will typically require a malware executable to run after shut down/log off events and a connection to a network port and the attacker's IP address to be available.
• Privilege escalation—persistence is followed by further reconnaissance, where the pen tester attempts to map out the internal network and discover the services running on it and accounts configured to access it. Moving within the network or accessing data assets are likely to require higher privilege levels. For example, the original malware may have run with local administrator privileges on a client workstation or as the Apache user on a web server. Another exploit might allow malware to execute with system/root privileges, or to use network administrator privileges on other hosts, such as application servers.
• Lateral movement—gaining control over other hosts. This is done partly to discover more opportunities to widen access (harvesting credentials, detecting software vulnerabilities, and gathering other such "loot"), partly to identify where valuable data assets might be located, and partly to evade detection. Lateral movement usually involves executing the attack tools over remote process shares or using scripting tools, such as PowerShell.
• Pivoting—hosts that hold the most valuable data are not normally able to access external networks directly. If the pen tester achieves a foothold on a perimeter server, a pivot allows them to bypass a network boundary and compromise servers on an inside network. A pivot is normally accomplished using remote access and tunneling protocols, such as Secure Shell (SSH), virtual private networking (VPN), or remote desktop.
• Actions on Objectives—for a threat actor, this means stealing data from one or more systems (data exfiltration). From the perspective of a pen tester, it would be a matter of the scope definition whether this would be attempted. In most cases, it is usually sufficient to show that actions on objectives could be achieved.
• Cleanup—for a threat actor, this means removing evidence of the attack, or at least evidence that could implicate the threat actor. For a pen tester, this phase means removing any backdoors or tools and ensuring that the system is not less secure than the pre-engagement state.
P470
The Lockheed Martin kill chain identifies the following phases:
1. Reconnaissance—in this stage the attacker determines what methods to use to complete the phases of the attack and gathers information about the target's personnel, computer systems, and supply chain.
2. Weaponization—the attacker couples payload code that will enable access with exploit code that will use a vulnerability to execute on the target system.
3. Delivery—the attacker identifies a vector by which to transmit the weaponized code to the target environment, such as via an email attachment or on a USB drive.
4. Exploitation—the weaponized code is executed on the target system by this mechanism. For example, a phishing email may trick the user into running the code, while a drive-by-download would execute on a vulnerable system without user intervention.
5. Installation—this mechanism enables the weaponized code to run a remote access tool and achieve persistence on the target system.
6. Command and control (C2 or C&C)—the weaponized code establishes an outbound channel to a remote server that can then be used to control the remote access tool and possibly download additional tools to progress the attack.

7. Actions on objectives—in this phase, the attacker typically uses the access he has achieved to covertly collect information from target systems and transfer it to a remote system (data exfiltration). An attacker may have other goals or motives, however.

A system administrator has just entered their credentials to enter a secure server room. As the administrator is entering the door, someone is walking up to the door with their hands full of equipment and appears to be struggling to move items around while searching for their credentials. The system administrator quickly begins to assist by getting items out of the person's hands, and they walk into the room together. This person is not an employee, but someone attempting to gain unauthorized access to the server room. What type of social engineering has occurred?
Familiarity/liking
Consensus/social proof
Authority and intimidation
Identity fraud

B
Consensus/social proof revolves around the belief that without an explicit instruction to behave in a certain way, people will follow social norms. It is typically polite to assist someone with their hands full. Familiarity/liking is when an attacker uses charisma to persuade others to do as requested. They downplay their requests to make it seem like their request is not out of the ordinary. Authority and intimidation can be used by an attacker by pretending to be someone senior. The person receiving the request would feel the need to take action quickly and without questioning the attacker. Identity fraud is a specific type of impersonation where the attacker uses specific details (such as personal information) of someone's identity.
P85
Consensus/Social Proof

The principle of consensus or social proof refers to the fact that without an explicit instruction to behave in a certain way, many people will act just as they think others would act. A social engineering attack can use this instinct either to persuade the target that to refuse a request would be odd ("That's not something anyone else has ever said no to") or to exploit polite behavior to slip into a building while someone holds the door for them. As another example, an attacker may be able to fool a user into believing that a malicious website is actually legitimate by posting numerous fake reviews and testimonials praising the site. The victim, believing many different people have judged the site acceptable, takes this as evidence of the site's legitimacy and places their trust in it.

A gaming company decides to add software on each title it releases. The company's objective is to require the CD to be inserted during use. This software will gain administrative rights, change system files, and hide from detection without the knowledge or consent of the user. Consider the malware characteristics and determine which is being used.
Spyware
Keylogger
Rootkit
Trojan

C
A rootkit is characterized by its ability to hide itself by changing core system files and programming interfaces and to escalate privileges. The gaming company accomplished this. Spyware monitors user activity and may be installed with or without the user's knowledge, but it cannot gain administrative privileges or hide itself. A keylogger is also a type of spyware that records a user’s keystrokes. It occurs without a user’s knowledge, but it cannot hide itself or gain privileges.

Trojans cannot conceal their presence entirely and will surface as a running process or service. While a rootkit is a type of Trojan or spyware, it differs in its ability to hide itself.

An employee calls IT personnel and states that they received an email with a PDF document to review. After the PDF was opened, the system has not been performing correctly. An IT admin conducted a scan and found a virus. Determine the two classes of viruses the computer most likely has. (Select all that apply.)
Boot sector
Program
Script
Macro

BC
Both a program and script virus can use a PDF as a vector. The user stated that a PDF file was recently opened. A program virus is executed when an application is executed. Executable objects can also be embedded or attached within other file types such as Microsoft Word and Rich Text Format. A script virus typically targets vulnerabilities in an interpreter. Scripts are powerful languages used to automate operating system functions and add interactivity to web pages and are executed by an interpreter rather than self-executing. PDF documents have become a popular vector for script viruses. A boot sector virus is one that attacks the disk boot sector information, the partition table, and sometimes the file system. A macro virus uses the programming features available in Microsoft Office documents.
P93
A computer virus is a type of malware designed to replicate and spread from computer to computer, usually by "infecting" executable applications or program code. There are several different types of viruses and they are generally classified by the different types of file or media that they infect:
• Non-resident/file infector—the virus is contained within a host executable file and runs with the host process. The virus will try to infect other process images on persistent storage and perform other payload actions. It then passes control back to the host program.
• Memory resident—when the host file is executed, the virus creates a new process for itself in memory. The malicious process remains in memory, even if the host process is terminated.
• Boot—the virus code is written to the disk boot sector or the partition table of a fixed disk or USB media, and executes as a memory resident process when the OS starts or the media is attached to the computer.

• Script and macro viruses—the malware uses the programming features available in local scripting engines for the OS and/or browser, such as PowerShell, Windows Management Instrumentation (WMI), JavaScript, Microsoft Office documents with Visual Basic for Applications (VBA) code enabled, or PDF documents with JavaScript enabled.

Which of the following is NOT a use of cryptography?
Non-repudiation
Obfuscation
Security through obscurity
Resiliency

C
(Note: this is the textbook definition.)
Security through obscurity involves keeping something a secret by hiding it. With cryptography, messages do not need to be hidden since they are not understandable unless decrypted. Non-repudiation is when the sender cannot deny sending the message. If the message has been encrypted in a way known only to the sender, logic follows that the sender must have composed it. Obfuscation is the art of making a message difficult to understand. Cryptography is a very effective way of obfuscating a message by encrypting it. Resiliency occurs when the compromise of a small part of the system is prevented from allowing compromise of the whole system. Cryptography ensures the authentication and integrity of messages delivered over the control system.
P106, P609
2. How can cryptography support high resiliency?

A complex system might have to support many inputs from devices installed to potentially unsecure locations. Such a system is resilient if compromise of a small part of the system is prevented from allowing compromise of the whole system. Cryptography assists this goal by ensuring the authentication and integrity of messages delivered over the control system.

Compare and contrast the modes of operation for block ciphers. Which of the following statements is true?
ECB and CBC modes allow block ciphers to behave like stream ciphers.
CTR and GCM modes allow block ciphers to behave like stream ciphers.
ECB and GCM modes allow block ciphers to behave like stream ciphers.
CBC and CTR modes allow block ciphers to behave like stream ciphers.

B
Counter Mode (CTR) and Galois/Counter Mode (GCM) combine each block with a counter value. This allows each block to be processed individually and in parallel, improving performance. Electronic Code Book (ECB) mode applies the same key to each plaintext block, which means identical plaintext blocks can output identical ciphertexts. This is not how a stream cipher behaves. Counter Mode (CTR) and Galois/Counter Mode (GCM) allow block ciphers to behave like stream ciphers, which are faster than block ciphers. Cipher Block Chaining (CBC) mode applies an Initialization Vector (IV) to the first plaintext block to ensure that the key produces a unique ciphertext from any given plaintext, repeating as a “chain.” This is not how a stream cipher behaves.
P648
GCM (Galois/Counter Mode)—a mode of block chained encryption that provides message authenticity for each block.
P110
Stream Ciphers
In a stream cipher, each byte or bit of data in the plaintext is encrypted one at a time. This is suitable for encrypting communications where the total length of the message is not known. The plaintext is combined with a separate randomly generated message, calculated from the key and an initialization vector (IV). The IV ensures the key produces a unique ciphertext from the same plaintext. The keystream must be unique, so an IV must not be reused with the same key. The recipient must be able to generate the same keystream as the sender and the streams must be synchronized. Stream ciphers might use markers to allow for synchronization and retransmission. Some types of stream ciphers are made self-synchronizing.
Block Ciphers

In a block cipher, the plaintext is divided into equal-size blocks (usually 128-bit). If there is not enough data in the plaintext, it is padded to the correct size using some string defined in the algorithm. For example, a 1200-bit plaintext would be padded with an extra 80 bits to fit into 10 x 128-bit blocks. Each block is then subjected to complex transposition and substitution operations, based on the value of the key used. The Advanced Encryption Standard (AES)is the default symmetric encryption cipher for most products. Basic AES has a key size of 128 bits, but the most widely used variant is AES256, with a 256-bit key.
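The idea the correct answer turns on can be sketched in a few lines. The toy below shows the CTR structure of turning a block primitive into a stream cipher: encrypt successive counter values to build a keystream, then XOR it with the plaintext, so no padding is needed and the same operation decrypts. SHA-256 stands in for AES purely to keep the example dependency-free, so this illustrates the mode's shape, not a secure implementation:

```python
import hashlib

def ctr_keystream(key: bytes, nonce: bytes, length: int) -> bytes:
    """Build a keystream by 'encrypting' nonce+counter blocks.
    SHA-256 is a stand-in for the block cipher; real CTR mode uses AES."""
    stream = b""
    counter = 0
    while len(stream) < length:
        block_input = nonce + counter.to_bytes(8, "big")
        stream += hashlib.sha256(key + block_input).digest()
        counter += 1
    return stream[:length]

def ctr_xor(key: bytes, nonce: bytes, data: bytes) -> bytes:
    """XOR data with the keystream; encryption and decryption are identical."""
    ks = ctr_keystream(key, nonce, len(data))
    return bytes(a ^ b for a, b in zip(data, ks))

msg = b"any length works - no padding needed"
ct = ctr_xor(b"secret-key", b"nonce-01", msg)
pt = ctr_xor(b"secret-key", b"nonce-01", ct)   # same operation decrypts
print(pt == msg)
```

Because each counter block is independent, blocks can be processed in parallel, which is the performance property the explanation above attributes to CTR and GCM.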

A security team is in the process of selecting a cryptographic suite for their company. Analyze cryptographic implementations and determine which of the following performance factors is most critical to this selection process if users primarily access systems on mobile devices.
Speed
Latency
Computational overhead
Cost

C
Some technologies or ciphers configured with longer keys require more processing cycles and memory space, which makes them slower and consume more power. This makes them unsuitable for handheld devices and embedded systems that work on battery power. Speed is most impactful when processing large amounts of data. For some use cases, the time required to obtain a result is more important than a data rate. Latency issues may negatively affect performance when an operation or application times out before the authentication handshake. Cost issues may arise in any decision-making process, but for mobile device cryptography, computing overhead is a primary limiting factor.
P123
Differences between ciphers make them more or less useful for resource-constrained environments. The main performance factors are as follows:
• Speed—for symmetric ciphers and hash functions, speed is the amount of data per second that can be processed. Asymmetric ciphers are measured by operations per second. Speed has the most impact when large amounts of data are processed.
• Time/latency—for some use cases, the time required to obtain a result is more important than a data rate. For example, when a secure protocol depends on ciphers in the handshake phase, no data transport can take place until the handshake is complete. This latency, measured in milliseconds, can be critical to performance.
• Size—the security of a cipher is strongly related to the size of the key, with longer keys providing better security. Note that the key size cannot be used to make comparisons between algorithms. For example, a 256-bit ECC key is stronger than a 2048-bit RSA key. Larger keys will increase the computational overhead for each operation, reducing speed and increasing latency.

• Computational overheads—in addition to key size selection, different ciphers have unique performance characteristics. Some ciphers require more CPU and memory resources than others, and are less suited to use in a resource-constrained environment.

Which statement best illustrates the importance of a strong true random number generator (TRNG) or pseudo-random number generator (PRNG) in a cryptographic implementation?
A weak number generator leads to many published keys sharing a common factor.
A weak number generator creates numbers that are never reused.
A strong number generator creates numbers that are never reused.
A strong number generator adds salt to encryption values.

A
A cryptanalyst can test for the presence of common factors and derive the whole key much more easily. The TRNG or PRNG module in the cryptographic implementation is critical to its strength. Predictability is a weakness in either the cipher operation or within particular key values that makes a ciphertext more vulnerable to cryptanalysis. Reuse of the same key within the same session can cause this weakness. The principal characteristic of a nonce is that it is never reused ("number used once") within the same key value. A nonce can be a random, pseudo-random, or counter value.

Salt is a random or pseudo-random number or string. The term salt is used specifically in conjunction with hashing password values.
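To make the salt/hash distinction concrete, here is a minimal salted password store built on Python's standard `hashlib.pbkdf2_hmac`. The function names and the 100,000-iteration count are illustrative choices, not from the source text:

```python
import hashlib
import os

def hash_password(password: str, salt: bytes = None):
    """Return (salt, digest). A fresh random salt per password means two
    users with the same password get different digests, defeating
    precomputed-table (rainbow table) attacks."""
    if salt is None:
        salt = os.urandom(16)
    digest = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 100_000)
    return salt, digest

def verify_password(password: str, salt: bytes, digest: bytes) -> bool:
    """Re-derive the digest with the stored salt and compare."""
    return hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 100_000) == digest

salt, digest = hash_password("hunter2")
print(verify_password("hunter2", salt, digest))   # True
print(verify_password("wrong", salt, digest))     # False
```

The many PBKDF2 iterations also slow down brute-force and dictionary attacks, which is the second benefit the explanation above mentions.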

Which statement describes the mechanism by which encryption algorithms help protect against birthday attacks?
Encryption algorithms utilize key stretching.
Encryption algorithms use secure authentication of public keys.
Encryption algorithms demonstrate collision avoidance.
Encryption algorithms add salt when computing password hashes.

C
To protect against the birthday attack, encryption algorithms must demonstrate collision avoidance (that is, reduce the chance that different inputs will produce the same output). Key stretching takes a key that is generated from a user password and repeatedly converts it to a longer and more random key. The initial key may be put through thousands of rounds of hashing to slow down potential attackers. Securely authenticating public keys, such as associating the keys with certificates, helps protect against man-in-the-middle attacks. Passwords stored as hashes are vulnerable to brute force and dictionary attacks. Adding salt values when creating hashes can slow down both of these attacks.
P127

A birthday attack is a type of brute force attack aimed at exploiting collisions in hash functions. A collision is where a function produces the same hash value for two different plaintexts. This type of attack can be used for the purpose of forging a digital signature.
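A birthday attack is easy to demonstrate against a deliberately weakened hash. The sketch below (illustrative, with a made-up message format) truncates SHA-256 to 24 bits, so by the birthday bound a collision is expected after roughly 2^12 attempts rather than the 2^24 a brute-force preimage search would need:

```python
import hashlib

def find_collision(truncate_bytes: int = 3):
    """Birthday attack: hash distinct inputs until two share the same
    truncated digest. For an n-bit output, about 2^(n/2) tries suffice."""
    seen = {}   # truncated digest -> first message that produced it
    i = 0
    while True:
        msg = f"message-{i}".encode()
        tag = hashlib.sha256(msg).digest()[:truncate_bytes]
        if tag in seen:
            return seen[tag], msg, tag   # two different messages, same tag
        seen[tag] = msg
        i += 1

m1, m2, tag = find_collision()
print(m1, m2, tag.hex())
```

Full-length modern hashes resist this only because 2^128 birthday attempts (for a 256-bit digest) are infeasible, which is what "collision avoidance" buys.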

Examine each statement and determine which most accurately describes a major limitation of quantum computing technology.
Presently, quantum computers do not have the capacity to run useful applications.
Quantum computing is not yet sufficiently secure to run current cryptographic ciphers.
Quantum computing is not sufficiently agile to update the range of security products it most frequently uses.
Attackers may exploit a crucial vulnerability in quantum computing to covertly exfiltrate data.

A
Presently, the most powerful quantum computers have about 50 qubits. A quantum computer will need about a million qubits to run useful applications. Quantum computing could put the strength of current cryptographic ciphers at risk, but it also has the promise of underpinning more secure cryptosystems in the future. Cryptographic agility refers to an organization's ability to update the specific algorithms used in security products without affecting the business workflows that those products support. Quantum computing could pose a threat to cryptographic agility. Steganography obscures the presence of a message and can be used for data exfiltration. The quantum computing properties of entanglement, superposition, and collapse suit the design of a tamper-evident communication system that would allow secure key agreement.
P130
Quantum and Post-Quantum
Quantum refers to computers that use properties of quantum mechanics to significantly out-perform classical computers at certain tasks.
Computing
A quantum computer performs processing on units called qubits (quantum bits). A qubit can be set to 0 or 1 or an indeterminate state called a superposition, where there is a probability of it being either 1 or 0. The likelihood can be balanced 50/50 or can be weighted either way. The power of quantum computing comes from the fact that qubits can be entangled. When the value of a qubit is read, it collapses to either 1 or 0, and all other entangled qubits collapse at the same time. The strength of this architecture is that a single operation can utilize huge numbers of state variables represented as qubits, while a classical computer's CPU must go through a read, execute, write cycle for each bit of memory. This makes quantum very well-suited to solving certain tasks, two of which are the factoring problem that underpins RSA encryption and the discrete logarithm problem that underpins ECC.
Communications
While quantum computing could put the strength of current cryptographic ciphers at risk, it also has the promise of underpinning more secure cryptosystems. The properties of entanglement, superposition, and collapse suit the design of a tamper-evident communication system that would allow secure key agreement.
Post-Quantum
Post-quantum refers to the expected state of computing when quantum computers that can perform useful tasks are a reality. Currently, the physical properties of qubits and entanglement make quantum computers very hard to scale up. At the time of writing, the most powerful quantum computers have about 50 qubits. A quantum computer will need about a million qubits to run useful applications. No one can predict with certainty if or when such a computer will be implemented. In the meantime, NIST is running a project to develop cryptographic ciphers that are resistant to cracking even by quantum computers.

A hospital must balance the need to keep patient privacy information secure and the desire to analyze the contents of patient records for a scientific study. What cryptographic technology can best support the hospital’s needs?
Blockchain
Quantum computing
Perfect forward secrecy (PFS)
Homomorphic encryption

D
Homomorphic encryption is used to share privacy-sensitive data sets. It allows a recipient to perform statistical calculations on data fields, while keeping the data set as a whole encrypted, thus preserving patient privacy. Blockchain uses cryptography to secure an expanding list of transactional records. Each record, or block, goes through a hash function. Each block’s hash value links to the hash value of the previous block. Quantum computing could serve as a secure foundation for secure cryptosystems and tamper-evident communication systems that would allow secure key agreement. Perfect forward secrecy (PFS) mitigates the risks from RSA key exchanges through the use of ephemeral session keys to maintain confidentiality.

P131
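As a concrete illustration of computing on encrypted fields, below is a toy version of the Paillier cryptosystem, which is additively homomorphic: multiplying two ciphertexts yields a ciphertext of the sum of the plaintexts, so totals and averages can be computed without decrypting individual records. The tiny primes make it breakable in an instant, so treat it strictly as a sketch of the property, not of any real deployment (full homomorphic schemes used in practice are far more involved):

```python
import math
import random

# Toy Paillier keypair (tiny primes, illustration only - NOT secure).
p, q = 17, 19
n, n2 = p * q, (p * q) ** 2
g = n + 1
lam = math.lcm(p - 1, q - 1)
mu = pow(lam % n, -1, n)   # valid because g = n + 1

def encrypt(m: int) -> int:
    """Randomized encryption: same plaintext gives different ciphertexts."""
    while True:
        r = random.randrange(2, n)
        if math.gcd(r, n) == 1:
            break
    return (pow(g, m, n2) * pow(r, n, n2)) % n2

def decrypt(c: int) -> int:
    x = pow(c, lam, n2)
    return ((x - 1) // n) * mu % n

# Additive homomorphism: multiplying ciphertexts adds the plaintexts.
c = (encrypt(20) * encrypt(22)) % n2
print(decrypt(c))   # 42
```

This is the shape of what the explanation describes: the analyst multiplies ciphertexts (here, encrypted record fields) and only the aggregate ever needs to be decrypted.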

During a penetration test, an adversary operator sends an encrypted message embedded in an attached image. Analyze the scenario to determine what security principles the operator is relying on to hide the message. (Select all that apply.)
Security by obscurity
Integrity
Prepending
Confidentiality

AD
When used to conceal information, steganography amounts to "security by obscurity," which is usually deprecated. A message can be encrypted by some mechanism before embedding it in a covertext, providing confidentiality. Steganography technology can also provide integrity or non-repudiation; for example, it can show that something was printed on a particular device at a particular time, which could demonstrate that it was genuine or a fake.

A phishing or hoax email can be made more convincing by using prepending. In an offensive sense, prepending means adding text that appears legitimate and to have been generated by the mail system such as "MAILSAFE:PASSED."

Consider the life cycle of an encryption key. Which of the following is NOT a stage in a key's life cycle?
Storage
Verification
Expiration and renewal

Revocation

B

Verification is not a stage in a key’s life cycle. It is part of the software development life cycle. The stages are: key generation, certificate generation, storage, revocation, and expiration and renewal.

Storage is the stage where a user must take steps to store the private key securely. It is also important to ensure that the private key is not lost or damaged.

The expiration and renewal stage addresses that a key pair expires after a certain period. Giving the key a "shelf-life" increases security. Certificates can be renewed with new key material.

Revocation is the stage that concerns itself with the event of a private key being compromised; it can be revoked before it expires.

Key management refers to operational considerations for the various stages in a key's life cycle. A key's life cycle may involve the following stages:
• Key generation—creating a secure key pair of the required strength, using the chosen cipher.
• Certificate generation—to identify the public part of a key pair as belonging to a subject (user or computer), the subject submits it for signing by the CA as a digital certificate with the appropriate key usage. At this point, it is critical to verify the identity of the subject requesting the certificate and only issue it if the subject passes identity checks.
• Storage—the user must take steps to store the private key securely, ensuring that unauthorized access and use is prevented. It is also important to ensure that the private key is not lost or damaged.
• Revocation—if a private key is compromised, the key pair can be revoked to prevent users from trusting the public key.

• Expiration and renewal—a key pair that has not been revoked expires after a certain period. Giving the key or certificate a "shelf-life" increases security. Certificates can be renewed with new key material.
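The stages above can be sketched as a small state machine. The stage names and allowed transitions here are an illustrative model of the text, not any real key-management API.

```python
# Minimal sketch of the key life-cycle stages described above, modeled
# as allowed state transitions (stage names are illustrative).
LIFECYCLE = {
    "generated": {"certified"},            # key pair created
    "certified": {"stored"},               # CA signs the public key
    "stored":    {"revoked", "expired"},   # private key held securely
    "revoked":   set(),                    # compromised: terminal state
    "expired":   {"generated"},            # renewal issues new key material
}

def can_transition(current, nxt):
    """Return True if moving from `current` to `nxt` is a valid step."""
    return nxt in LIFECYCLE.get(current, set())

assert can_transition("stored", "revoked")        # compromise -> revoke
assert not can_transition("revoked", "stored")    # revoked keys stay revoked
assert can_transition("expired", "generated")     # renewal restarts the cycle
```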

Which certificate field shows the name of the Certificate Authority (CA) expressed as a Distinguished Name (DN)?
Version
Signature algorithm
Issuer

Subject

C

The Issuer field provides the name of the CA, expressed as a Distinguished Name (DN).

The Signature Algorithm field provides the algorithm used by the CA to sign the certificate. The signature algorithm is used to assert the identity of the server's public key and facilitate authentication.

The Version field provides which X.509 version is supported (V1, V2, or V3).

The Subject field gives the name of the certificate holder, expressed as a distinguished name (DN). Within this, the common name (CN) part should usually match either the fully qualified domain name (FQDN) of the server or a user email address.
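The rule that the common name should match the server's FQDN can be sketched as a simplified matcher, including a single leftmost wildcard label. This is a toy version of the comparison; real TLS stacks implement the full rules from RFC 6125, including restrictions this sketch omits.

```python
# Hedged sketch of matching a certificate name against a server FQDN.
# Simplified from RFC 6125: only a leftmost "*" label is supported,
# and it matches exactly one label.
def name_matches(cert_name, fqdn):
    cert_labels = cert_name.lower().split(".")
    host_labels = fqdn.lower().split(".")
    if len(cert_labels) != len(host_labels):
        return False
    head, tail = cert_labels[0], cert_labels[1:]
    if tail != host_labels[1:]:
        return False          # all labels after the first must match exactly
    return head == "*" or head == host_labels[0]

assert name_matches("*.widget.foo", "www.widget.foo")
assert not name_matches("*.widget.foo", "www.shop.widget.foo")  # one label only
assert name_matches("mail.widget.foo", "MAIL.widget.foo")       # case-insensitive
```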

An employee has requested a digital certificate for a user to access the Virtual Private Network (VPN). It is discovered that the certificate is also being used for digitally signing emails. Evaluate the possible extension attributes to determine which should be modified so that the certificate only works for VPN access.
Extension ID
Critical
Value

Distinguished encoding rules

B

Extensions allow for extra information to be included about the certificate. A Critical section is available and is a Boolean value that indicates if the section is critical. By making the Extended Key Usage (EKU) field critical, the certificate will only work for VPN access. The "true" option will only allow VPN use, while "false" will not block the certificate for other uses.

The Extension ID (extnID) is expressed as an OID.

The Value (extnValue) is a string value of the extension.

Distinguished Encoding Rules (DER) are what certificates use to encode a certificate as a digital file for exchange between different systems.

An employee handles key management and has learned that a user has used the same key pair for encrypting documents and digitally signing emails. Prioritize all actions that should be taken and determine the first action that the employee should take.
Revoke the keys.
Recover the encrypted data.
Generate a new key pair.

Generate a new certificate.

B

The first step is to recover any data encrypted with the key so the data can be decrypted. Once the data is recovered, the key can be revoked and an administrator can issue a new key pair.

After the data has been recovered, the keys should be revoked. They are compromised and should not be used for any future tasks.

After the compromised keys are revoked, the user can be issued new keys. The user requires two sets of keys, one for encrypting messages and the other for digitally signing documents.

Certificate generation is used to identify the public part of a key pair as belonging to a subject and will occur after the user’s new keys have been generated.

A company has a critical encryption key that has an M-of-N control configuration for protection. Examine the examples and select the one that correctly illustrates the proper configuration for this type of protection of critical encryption keys.
M=1 and N=5
M=3 and N=5
M=6 and N=5

M=0 and N=5

B

A correct configuration for an M-of-N control is M=3 and N=5. M stands for the number of authorized administrators that must be present to access the critical encryption keys and N is the total number of authorized administrators. In this scenario, 3 of the 5 administrators must be present for access.

M is always greater than 1 for this type of configuration, making M=1 and N=5 not a valid choice. If only 1 administrator must be present, this configuration would be unnecessary.

M=6 and N=5 is not possible, as this configuration is asking for more administrators to be present than are authorized.

The final option of M=0 is not viable because M must always be greater than 1.
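M-of-N control is often implemented with a secret-sharing scheme such as Shamir's, where any M of the N shares reconstruct the protected value but fewer than M reveal nothing useful. Below is a minimal sketch over a toy prime field; the fixed polynomial coefficients keep the demo reproducible, whereas a real deployment would use a vetted library and random coefficients.

```python
# Minimal Shamir secret-sharing sketch of a 3-of-5 (M=3, N=5) control
# over a protected value. Toy field size and fixed coefficients for
# reproducibility -- not production code.
P = 2087            # small prime field modulus (toy size)
SECRET = 1234       # the protected value (e.g., a key-encrypting secret)

def make_shares(secret, m, n, coeffs):
    # f(x) = secret + a1*x + ... + a_{m-1}*x^{m-1}  (mod P)
    def f(x):
        acc, xp = secret, 1
        for a in coeffs:
            xp = (xp * x) % P
            acc = (acc + a * xp) % P
        return acc
    return [(x, f(x)) for x in range(1, n + 1)]

def recover(shares):
    # Lagrange interpolation at x = 0 recovers f(0) = secret.
    total = 0
    for xi, yi in shares:
        num, den = 1, 1
        for xj, _ in shares:
            if xj != xi:
                num = (num * -xj) % P
                den = (den * (xi - xj)) % P
        total = (total + yi * num * pow(den, -1, P)) % P
    return total

shares = make_shares(SECRET, m=3, n=5, coeffs=[166, 94])
assert recover(shares[:3]) == SECRET                      # any 3 of 5 work
assert recover([shares[0], shares[2], shares[4]]) == SECRET
assert recover(shares[:2]) != SECRET                      # 2 shares fail
```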

A Certificate Revocation List (CRL) has a publish period set to 24 hours. Based on the normal procedures for a CRL, what is the most applicable validity period for this certificate?
26 hours
1 hour
23 hours

72 hours

A

One or two hours over the publish period is considered normal, thus making 26 hours within the window.

The validity period is the period during which the CRL is considered authoritative. This is usually a bit longer than the publish period, giving a short window to update and keep the CRL authoritative.

The validity period would not be less than the publish period, as that would make the CRL nonauthoritative prior to the next publishing.

If the validity period was set to 72 hours this would be much too long after the publish period. The CRL would be published two additional times prior to the validity period ending.
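The publish/validity relationship can be checked with simple date arithmetic: publishing every 24 hours with a 26-hour validity leaves a 2-hour overlap during which the old CRL is still authoritative while the new one is distributed. The timestamp below is illustrative.

```python
# Sketch of the CRL publish/validity window described above.
from datetime import datetime, timedelta

publish_period = timedelta(hours=24)
validity_period = timedelta(hours=26)

this_update = datetime(2024, 1, 1, 0, 0)       # illustrative timestamp
next_publish = this_update + publish_period    # new CRL is issued here
next_update = this_update + validity_period    # old CRL stops being valid

overlap = next_update - next_publish
assert overlap == timedelta(hours=2)           # short update window
assert validity_period > publish_period        # CRL never goes stale
```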

Evaluate the following controls that have been set by a system administrator for an online retailer. Determine which statement demonstrates the identification control within the Identity and Access Management (IAM) system.
A control is set to force a customer to log into their account prior to reviewing and editing orders.
A control is set to cancel automatic shipments for any customer that has an expired credit card on file.
A control is set to ensure that billing and primary delivery addresses match.

A control is set to record the date, time, IP address, customer account number, and order details for each order.

C

Identification controls are set to ensure that customers are legitimate. An example is to ensure that billing and primary delivery addresses match.

Authentication controls are to ensure that customers have unique accounts, and that only they can manage their orders and billing information. An example is to require each customer create an account prior to allowing them to store billing or shipping information.

Authorization controls are to ensure customers can only place orders when they have valid payment information in place prior to completing an order.

Accounting controls include maintaining a record of each action taken by a customer to ensure that they cannot deny placing an order. Records may include order details, date, time, and IP address information.

Considering how to mitigate password cracking attacks, how would restricting the number of failed logon attempts be categorized as a vulnerability?
The user is exposed to a replay attack.
The user is exposed to a brute force attack.
The user is exposed to a DoS attack.

The user is exposed to an offline attack.

C

Restricting logons can become a vulnerability by exposing a user to Denial of Service (DoS) attacks. The attacker keeps trying to authenticate, locking out valid users.

In a replay attack, an intercepted key or password hash is reused to gain access to a resource. This is prevented with once-only tokens or timestamping, not restricting logon attempts.

A brute force attack is where an attacker uses an application to exhaustively try every possible alphanumeric combination to crack encrypted passwords. Restricting logon attempts is a way to mitigate this threat, not be vulnerable to it.

In an offline attack, a password cracker works on a downloaded password database without having to interact with the authentication system. It is unrelated to logon attempts.
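The DoS exposure can be sketched with a toy lockout policy: the attacker only needs to exceed the failure threshold against a known username to deny service to the legitimate user. The threshold and behavior below are illustrative, not any specific product's policy.

```python
# Sketch of abusing an account-lockout policy as a DoS.
LOCKOUT_THRESHOLD = 5      # illustrative policy value

class Account:
    def __init__(self):
        self.failed = 0
        self.locked = False

    def attempt_login(self, password_ok):
        if self.locked:
            return "locked"
        if password_ok:
            self.failed = 0
            return "ok"
        self.failed += 1
        if self.failed >= LOCKOUT_THRESHOLD:
            self.locked = True
        return "failed"

victim = Account()
for _ in range(LOCKOUT_THRESHOLD):       # attacker guesses wrong on purpose
    victim.attempt_login(password_ok=False)

# The real user now supplies the *correct* password and is still denied.
assert victim.attempt_login(password_ok=True) == "locked"
```

Mitigations such as lockout timers or progressive delays trade off this DoS risk against brute-force resistance.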

Select the explanations that accurately describe the Ticket Granting Ticket (TGT) role within the Authentication Service (AS). (Select all that apply.)
The client sends the AS a request for a TGT that is composed by encrypting the date and time on the local computer with the user's password hash as the key.
The AS responds with a TGT that contains information about the client, to include name and IP address, plus a timestamp and validity period.
The AS responds with a TGT key for use in communications between the client and the Ticket Granting Service (TGS).

The TGT responds with a service session key for use between the client and the application server.

AB

The Authentication Service (AS) is responsible for authenticating user logon requests. The first step within AS is when the client sends the AS a request for a Ticket Granting Ticket (TGT). This is composed by encrypting the date and time on the local computer with the user's password hash as a key.

The Ticket Granting Ticket (TGT) contains information about the client and includes a timestamp and validity period. The information is encrypted using the KDC's secret key. This occurs after the user is found in the database and the request is valid.

The AS does not respond back with a TGT key but with a Ticket Granting Service (TGS) key that is used in communications between the client and the TGS.

The TGS is the service that responds with a service session key for use between the client and the application server.


A user presents a smart card to gain access to a building. Authentication is handled through integration to a Windows server that's acting as a certificate authority on the network. Review the security processes and conclude which are valid when using Kerberos authentication. (Select all that apply.)
Inputting a correct PIN authorizes the smart card's cryptoprocessor to use its private key to create a Ticket Granting Ticket (TGT) request.
The smart card generates a one-time use Ticket Granting Service (TGS) session key and certificate.
The Authentication Server (AS) trusts the user's certificate as it was issued by a local certification authority.

The Authentication Server (AS) is able to decrypt the request because it has a matching certificate.

AC

Inputting a correct PIN authorizes the smart card's cryptoprocessor to use its private key to create a Ticket Granting Ticket (TGT) request to an Authentication Server (AS).

The AS can place trust when the user's certificate is issued by a local or third-party root certification authority.

An AS responds with a TGT and Ticket Granting Service (TGS) session key, not the smart card.

An AS would be able to decrypt the request because it has a matching public key and trusts the user's smart-card certificate.

Both Remote Authentication Dial-In User Service (RADIUS) and Terminal Access Controller Access-Control System Plus (TACACS+) provide authentication, authorization, and accounting using a separate server (the AAA server). Based on the protocols' authentication processes, select the true statements. (Select all that apply.)
TACACS+ is open source and RADIUS is a proprietary protocol from Cisco.
RADIUS uses UDP and TACACS+ uses TCP.
TACACS+ encrypts the whole packet (except the header) and RADIUS only encrypts the password.

RADIUS is primarily used for network access and TACACS+ is primarily used for device administration.

BCD

RADIUS uses UDP over ports 1812 and 1813 and TACACS+ uses TCP on port 49.

TACACS+ encrypts the whole packet (except the header, which identifies the packet as TACACS+ data) and RADIUS only encrypts the password portion of the packet using MD5.

RADIUS is primarily used for network access for a remote user and TACACS+ is primarily used for device administration. TACACS+ provides centralized control for administrators to manage routers, switches, and firewall appliances, as well as user privileges.

RADIUS is an open-source protocol, not TACACS+. TACACS+ is a Cisco proprietary protocol.
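The claim that RADIUS only obscures the password can be illustrated with the User-Password hiding scheme from RFC 2865 §5.2: the password is padded and XORed with an MD5 keystream derived from the shared secret and the request authenticator, while the rest of the packet travels in cleartext. The secret and authenticator values below are illustrative.

```python
# Sketch of RADIUS User-Password hiding (RFC 2865 section 5.2).
# Only this attribute is obscured; the rest of the packet is cleartext.
import hashlib

def hide_password(password: bytes, secret: bytes, authenticator: bytes) -> bytes:
    # Pad the password with NUL octets to a multiple of 16.
    padded = password + b"\x00" * (-len(password) % 16)
    out, prev = b"", authenticator
    for i in range(0, len(padded), 16):
        keystream = hashlib.md5(secret + prev).digest()
        chunk = bytes(p ^ k for p, k in zip(padded[i:i + 16], keystream))
        out += chunk
        prev = chunk   # next block is chained on the previous ciphertext
    return out

secret = b"shared-secret"             # illustrative shared secret
authenticator = bytes(range(16))      # illustrative 16-octet authenticator

hidden = hide_password(b"hunter2", secret, authenticator)

# For a single 16-octet block, the XOR keystream is identical both ways,
# so applying the function again recovers the padded password.
recovered = hide_password(hidden, secret, authenticator).rstrip(b"\x00")
assert recovered == b"hunter2"
```

This MD5-based construction is one reason RADIUS is normally run over a protected transport (or replaced by RadSec) in modern deployments.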

When a network uses Extensible Authentication Protocol (EAP) as the authentication method, what access control protocol provides the means for a client to connect from a Virtual Private Network (VPN) gateway?
IEEE 802.1X
Kerberos
Terminal Access Controller Access-Control System Plus (TACACS+)

Remote Authentication Dial-in User Service (RADIUS)

A

Where EAP provides the authentication mechanisms, the IEEE 802.1X Port-based Network Access Control (NAC) protocol provides the means of using an EAP method when a device connects to a VPN gateway.

Kerberos is designed to work over a trusted local network. Several authentication protocols have been developed to work with remote access protocols, where the connection is made over a serial link or virtual private network (VPN).

TACACS+ uses TCP communications. Authentication, authorization, and accounting (AAA) functions within TACACS+ are discrete.

With authentication, authorization, and accounting (AAA), the network access server (NAS) devices (RADIUS or TACACS+) do not have to store any authentication credentials. They forward this data between the AAA server and the supplicant.

Extensible Authentication Protocol/IEEE 802.1X

The smart-card authentication process described earlier is used for Kerberos authentication where the computer is attached to the local network and the user is logging on to Windows. Authentication may also be required in other contexts:
• When the user is accessing a wireless network and needs to authenticate with the network database.
• When a device is connecting to a network via a switch and network policies require the user to be authenticated before the device is allowed to communicate.
• When the user is connecting to the network over a public network via a virtual private network (VPN).

In these scenarios, the Extensible Authentication Protocol (EAP) provides a framework for deploying multiple types of authentication protocols and technologies. EAP allows lots of different authentication methods, but many of them use a digital certificate on the server and/or client machines. This allows the machines to establish a trust relationship and create a secure tunnel to transmit the user credential or to perform smart-card authentication without a user password.

Where EAP provides the authentication mechanisms, the IEEE 802.1X Port-based Network Access Control (NAC) protocol provides the means of using an EAP method when a device connects to an Ethernet switch port, wireless access point (with enterprise authentication configured), or VPN gateway. 802.1X uses authentication, authorization, and accounting (AAA) architecture:
• Supplicant—the device requesting access, such as a user's PC or laptop.
• Network access server (NAS)—edge network appliances, such as switches, access points, and VPN gateways. These are also referred to as RADIUS clients or authenticators.
• AAA server—the authentication server, positioned within the local network.

With AAA, the NAS devices do not have to store any authentication credentials. They forward this data between the AAA server and the supplicant. There are two main types of AAA server: RADIUS and TACACS+.

Assess the features and processes within biometric authentication to determine which scenario is accurate.
A company chooses to use a biometric cryptosystem due to the ease of revocation for a compromised certificate.
A company uses a fingerprint scanner that acts as a sensor module for logging into a system.
A company uses a fingerprint scanner that acts as a feature extraction module for logging into a system.

A company records information from a sample using a sensor module.

B

A sensor module acquires the biometric sample from the target. Examples of a sensor module can be a fingerprint scanner or retina scanner.

One problem with biometric cryptosystems is the lack of revocability. A legitimate person will almost always have the same template (fingerprint, retina, etc.). If a company is using a smart card and it is compromised, the card is revoked and reissued. The same cannot be done for biometric cryptosystems.

A feature extraction module records the significant information from the sample. This record would include the fingerprint that was scanned when authentication was requested.

A sensor module acquires the biometric sample from the target.

Analyze the features of behavioral technologies for authentication, and choose the statements that accurately depict this type of biometric authentication. (Select all that apply.)
Behavioral technologies are cheap to implement, but have a higher error rate than other technologies.
Signature recognition is popular within this technology because everyone has a unique signature that is difficult to replicate.
Obtaining a voice recognition template for behavioral technologies is rather easy and can be obtained quickly.

Behavior technologies may use typing as a template, which matches the speed and pattern of a user's input of a passphrase.

AD

Behavioral technologies are sometimes classified as "something you do." These technologies often have a lower cost to implement than other types of biometric cryptosystems, but they have a higher error rate.

Typing is used as a behavioral technology, and the template is based on the speed and pattern of a user's input of a passphrase.

Signature recognition is not based on the actual signature due to it being easy to replicate. Instead, it is based on the process of applying a signature such as stroke, speed, and pressure of the stylus.

Obtaining a voice recognition template is not a fast process, and can be difficult. Background noise and other environmental factors can also interfere with authentication.

Consider the challenges with providing privileged management and authorization on an enterprise network. Which of the following would the network system administrator NOT be concerned with when configuring directory services?
Confidentiality
Integrity
Non-repudiation

DoS

C

Non-repudiation means a subject cannot deny doing something, such as creating, modifying, or sending a resource. It is not a consideration for managing privileges and authorization.

The integrity of the information on the network (write access) is a concern. Only users who have write access are able to modify a file.

Confidentiality of the information on the network (read access) is a concern. A user may be able to view a file, but not read it.

Another concern is Denial of Service (DoS), a network-based attack that consumes the network's bandwidth to disable the directory server.

Compare all of the functions within directory services and determine which statement accurately reflects the function of group memberships.
The key provided at authentication lists a user's group memberships, which is a list of all of the resources that the user has access to on the network.
The system compares group memberships with the user's logon credentials to determine if the user has access to the network resources.
Group memberships contain entries for all usernames and groups that have permission to use the resource.

Group memberships are like a database, where an object is similar to a record, and the attributes known about the object are similar to the fields.

A

Group memberships of an authenticated user are listed on the access key, which also contains the user's username. The system provides the access key upon authentication, and the user has access to all of the allowed resources.

A security database holds authentication data for users and compares this information with the supplied authentication data from the user. If the supplied data and the data within the security database match, then the system has authenticated the user, and the security database generates an access key.

An Access Control List (ACL) controls access to resources. The ACL contains entries for all usernames and groups that have permission to use the resource.

A directory is like a database, where an object is like a record, and the attributes known about the object are like fields.

Which of the following recommended guidelines should a systems administrator follow for Account Management? (Select all that apply.)
Implement the principle of least privilege when assigning user and group account access.
Draft a password policy and include requirements to ensure passwords are resistant to cracking attempts.
Identify group or role account types and how admin will allocate them to users.

Identify user account types to implement within the model, such as standard users and types of privileged users.

AB

One recommended guideline is implementing the principle of least privilege when assigning user and group account access. This involves assigning no more than the minimum sufficient permissions to perform the relevant job function.

Another recommended guideline is drafting a password policy and including requirements to ensure strong passwords, such as minimum password length, requiring complex passwords, requiring periodic password changes, and placing limits on password reuse.

Identifying group or role account types, and how admin or systems will allocate them to users, is a recommended guideline for Access Management Control, not Account Management.

Identifying user account types to implement within the model, such as standard users and types of privileged users, is a recommended guideline for Access Management Control.

Windows has several service account types, typically used to run processes and background services. Which of the following statements about service accounts is FALSE?
The Network service account and the Local service account have the same privileges as the standard user account.
Any process created using the system account will have full privileges over the local computer.
The local service account creates the host processes and starts Windows before the user logs on.

The Local Service account can only access network resources as an anonymous user.

C

The System account, not the Local Service account, creates the host processes that start Windows before the user logs on.

The Network Service account and the Local Service account have the same privileges as the standard user account. Standard users have limited privileges, typically with access to run programs, create, and modify files only belonging to their profile.

Any process created using the System account will have full privileges over the local computer. The System account has the most privileges of any Windows account.

The Local Service account can only access network resources as an anonymous user, unlike a Network Service account. Network Service accounts can present the computer's account credentials when accessing network resources.

A system administrator has configured a security log to record unexpected behavior and review the logs for suspicious activity. Consider various types of audits to determine which type aligns with this activity.
Permission auditing
Usage auditing
Information security audit

Compliance audit

B

Usage auditing refers to configuring the security log to record key indicators and then reviewing the logs for suspicious activity. Behavior recorded by event logs that differs from expected behavior may indicate everything from a minor security infraction to a major incident.

The systems administrator puts in place permission auditing to review privileges regularly. This includes monitoring group membership and access control lists for each resource plus identifying and disabling unnecessary accounts.

An information security audit measures how the organization's security policy is employed and determines how secure the network or site is that is being audited.

A compliance audit reviews a company's policies and procedures and determines if it is in compliance with regulatory guidelines.

Consider the role trust plays in federated identity management and determine which models rely on networks to establish trust relationships. (Select all that apply.)
SAML
OAuth
OpenID

LDAP

ABC

Security Assertion Markup Language (SAML) is an identity federation format used to exchange authentication information between the principal, the service provider, and the identity provider.

Authentication and authorization for a RESTful API is often implemented using the Open Authorization (OAuth) protocol.

OpenID is an identity federation method enabling user authentication on cooperating websites by a third-party authentication service.

Lightweight Directory Access Protocol (LDAP) is not an identity federation. It is a network protocol used to access network directory databases storing information about authorized users and their privileges, as well as other organizational information.

An employee is working on a team to build a directory of systems they are installing in a classroom. The team is using the Lightweight Directory Access Protocol (LDAP) to update the X.500 directory. Utilizing the standards of an X.500 directory, which of the following distinguished names is the employee most likely to recommend?
OU=Univ,DC=local,CN=user,CN=system1
CN=system1,CN=user,OU=Univ,DC=local
CN=user,DC=local,OU=Univ,CN=system1

DC=system1,OU=Univ,CN=user,DC=local

B

A distinguished name is a unique identifier for any given resource within an X.500-like directory, made up of attribute=value pairs, separated by commas. The most specific attribute is listed first, and then successive attributes become progressively broader.

Also referred to as the relative distinguished name, the most specific attribute (in this case, system1) uniquely identifies the object within the context of successive attribute values.

The directory schema describes the types of attributes, what information they contain, and the way attributes define object types. Some of the attributes commonly used include Common Name (CN), Organizational Unit (OU), Organization (O), Country (C), and Domain Component (DC).

In this scenario, CN=system1 is the Common Name, CN=user is the broader common name, OU=Univ is the Organizational Unit, and DC=local is the Domain Component. This goes in order from the most specific object to the broadest Domain Component.

Directory services are the principal means of providing privilege management and authorization on an enterprise network, storing information about users, computers, security groups/roles, and services. A directory is like a database, where an object is like a record, and things that you know about the object (attributes) are like fields. In order for products from different vendors to be interoperable, most directories are based on the same standard. The Lightweight Directory Access Protocol (LDAP) is a protocol widely used to query and update X.500 format directories.

A distinguished name (DN) is a unique identifier for any given resource within an X.500-like directory. A distinguished name is made up of attribute=value pairs, separated by commas. The most specific attribute is listed first, and successive attributes become progressively broader. This most specific attribute is also referred to as the relative distinguished name, as it uniquely identifies the object within the context of successive (parent) attribute values.

The types of attributes, what information they contain, and the way object types are defined through attributes (some of which may be required, and some optional) is described by the directory schema. Some of the attributes commonly used include common name (CN), organizational unit (OU), organization (O), country (C), and domain component (DC). For example, the distinguished name of a web server operated by Widget in the UK might be:

CN=WIDGETWEB, OU=Marketing, O=Widget, C=UK, DC=widget, DC=foo
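Splitting a DN such as the example above into its attribute=value pairs can be sketched as follows. This toy parser ignores LDAP escaping rules (e.g., commas embedded in values), which a real implementation must handle per RFC 4514.

```python
# Toy parser that splits an X.500 distinguished name into its
# attribute=value pairs, most specific (the relative distinguished
# name) first. Does not handle escaped commas or multi-valued RDNs.
def parse_dn(dn):
    pairs = []
    for rdn in dn.split(","):
        attr, _, value = rdn.strip().partition("=")
        pairs.append((attr.upper(), value))
    return pairs

dn = "CN=WIDGETWEB,OU=Marketing,O=Widget,C=UK,DC=widget,DC=foo"
parsed = parse_dn(dn)

assert parsed[0] == ("CN", "WIDGETWEB")   # relative distinguished name
assert ("DC", "foo") in parsed            # broadest component comes last
```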

A senior administrator is teaching a new technician how to properly develop a standard naming convention in Active Directory (AD). Examine the following responses and determine which statements would be sound advice for completing this task. (Select all that apply.)
Create as many root-level containers and nest containers as deeply as needed
Consider grouping Organizational Units (OU) by location or department
Build groups based on department, and keep all users, both standard and administrative, in their respective group

Within each root-level Organizational Unit (OU), use separate child OUs for different types of objects

BD

Organizational Units (OUs) represent administrative boundaries. They allow the enterprise administrator to delegate administrative responsibility for users and resources in different locations or departments. An OU grouped by location will be sufficient if different IT departments are responsible for services in different geographic locations. An OU grouped by department is more applicable if different IT departments are responsible for supporting different business functions.

Within each root-level parent OU, use separate child OUs for different types of objects, such as servers, client systems, users, and groups. Be consistent.

Do not create too many root-level containers or nest containers too deeply. Nesting should go no more than five levels deep.

Separate administrative user and group accounts from standard ones.
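The layout advised above (location-based root OUs, child OUs per object type, and administrative accounts kept separate from standard ones) can be sketched by composing distinguished names. The OU and domain names below are hypothetical.

```python
# Sketch of composing DNs for the advised OU layout: root-level OU per
# location, child OU per object type, admins in their own OU.
# All names are illustrative.
def build_dn(*ous, domain=("widget", "local")):
    rdns = [f"OU={ou}" for ou in ous]          # most specific OU first
    rdns += [f"DC={dc}" for dc in domain]      # domain components last
    return ",".join(rdns)

servers_paris = build_dn("Servers", "Paris")
users_paris = build_dn("Users", "Paris")
admins_paris = build_dn("AdminUsers", "Paris")   # separate from standard users

assert servers_paris == "OU=Servers,OU=Paris,DC=widget,DC=local"
assert admins_paris != users_paris               # distinct containers
```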

Which type of employee training utilizes gaming and competition techniques to emphasize training objectives? (Select all that apply.)
Capture the flag (CTF)
Computer-based training (CBT)
Phishing campaigns

Role-based training

ABCapture the Flag (CTF) is usually used in ethical hacker training programs and gamified competitions. Participants complete a series of challenges within a virtualized computing environment to discover a flag that represents a vulnerability or attack to overcome.Computer-based training (CBT) allows a student to acquire skills and experience by completing practical simulations or branching choice scenarios. CBT might use video game elements to improve engagement.A phishing campaign training event means sending simulated phishing messages to users. Users that respond to the messages can be targeted for follow-up training.Staff training should focus on user roles, which require different levels of security training, education, or awareness.P211Phishing Campaigns A phishing campaign training event means sending simulated phishing messages to users. Users that respond to the messages can be targeted for follow-up training. Capture the Flag Capture the Flag (CTF)is usually used in ethical hacker training programs and gamified competitions. Participants must complete a series of challenges within a virtualized computing environment to discover a flag. The flag will represent either threat actor activity (for blue team exercises) or a vulnerability (for red team exercises) and the participant must use analysis and appropriate tools to discover it. Capturing the flag allows the user to progress to the next level and start a new challenge. Once the participant has passed the introductory levels, they will join a team and participate in a competitive event, where there are multiple flags embedded in the environment and capturing them wins points for the participant and for their team. Computer-Based Training and Gamification Participants respond well to the competitive challenge of CTF events. This type of gamification can be used to boost security awareness for other roles too. 
Computer-based training (CBT) allows a student to acquire skills and experience by completing various types of practical activities:
• Simulations—recreating system interfaces or using emulators so students can practice configuration tasks.
• Branching scenarios—students choose between options to find the best choices to solve a cybersecurity incident or configuration problem.

CBT might use video game elements to improve engagement. For example, students might win badges and level-up bonuses such as skills or digitized loot to improve their in-game avatar. Simulations might be presented so that the student chooses encounters from a map and engages with a simulation environment in a first-person-shooter style of 3D world.

There are several types of security zones on a network. Analyze network activities to determine which of the following does NOT represent a security zone.
DMZ
Screened host
Wireless

Guest

B. A screened host is a dual-homed proxy/gateway server through which a smaller network accesses the Internet; it is not itself a security zone.

A Demilitarized Zone (DMZ) is a protected but untrusted area (zone) between the Internet and the private network.

Traffic from wireless networks might be less trusted than traffic from a cabled network. If unauthenticated open access points or authenticated guest Wi-Fi networks exist on the network, administrators should keep them isolated.

A guest network is a zone that allows untrusted or semi-trusted hosts on the local network. Examples include publicly accessible computers or visitors bringing their own portable computing devices to the premises.

Evaluate the typical weaknesses found in network architecture and determine which statement best aligns with a perimeter security weakness.
A company has a single network channel.
A company has many different systems to operate one service.
A company has a habit of implementing quick fixes.

A company has a flat network architecture.

D. Overdependence on perimeter security occurs when the network architecture is flat. If an attacker can penetrate the network edge, the attacker will then have freedom of movement throughout the entire network.

A single point of failure occurs with a "pinch point": relying on a single hardware server, appliance, or network channel. Complex dependencies are services that require many different systems to be available.

Ideally, the failure of individual systems or services should not affect the overall performance of other network services.

Availability over confidentiality and integrity occurs when a company takes shortcuts to get a service up and running. Compromising security might represent a quick fix but creates long term risks.

Evaluate the following choices based on their potential to lead to a network breach. Select the choice that is NOT a network architecture weakness.
The network architecture is flat.
Services rely on the availability of several different systems.
The network relies on a single hardware server.

Not all hosts on the network can talk to one another.

D. It is good that not all hosts can talk to each other. If any host can contact any other host, an attacker who penetrates the network edge gains freedom of movement.

A flat architecture is one in which all hosts can contact each other, exposing an overdependence on perimeter security.

When services rely on several different systems, the failure of one will affect the overall performance of the other network services.

Relying on a single hardware server represents a single point of failure, meaning the whole network crashes if the server goes down.

Analyze the techniques that are available to perform rogue machine detection and select the accurate statements. (Select all that apply.)
Visual inspection of ports and switches will prevent rogue devices from accessing the network.
Network mapping is an easy way to reveal the use of unauthorized protocols on the network or unusual traffic volume.
Intrusion detection and NAC are security suites and appliances that combine automated network scanning with defense and remediation suites to prevent rogue devices from accessing the network.

Wireless monitoring can reveal whether there are unauthorized access points.

CD. Intrusion detection and NAC are security suites and appliances that can combine automated network scanning with defense and remediation suites to prevent rogue devices from accessing the network.

Wireless monitoring can reveal the presence of unauthorized or malicious access points and stations.

Visual inspection of ports and switches will reveal any obvious unauthorized devices or appliances; however, a sophisticated attacker can defeat observation, for example by using fake asset tags.

Network mapping can identify hosts, unless a host actively tries to remain unobserved by not operating while scans are running. Even then, identifying a rogue host on a large network from a scan may still be difficult.

Which statement regarding attacks on media access control (MAC) addresses accurately pairs the method of protection and what type of attack it guards against? (Select all that apply.)
MAC filtering guards against MAC snooping.
Dynamic Host Configuration Protocol (DHCP) snooping guards against MAC spoofing.
MAC filtering guards against MAC spoofing.

Dynamic address resolution protocol inspection (DAI) guards against MAC flooding.

BD. In MAC filtering, a switch will record the specified number of MACs allowed to connect to a port, but then drop any traffic from other MAC addresses.

DHCP snooping inspects traffic arriving on access ports to ensure that a host is not trying to spoof its MAC address.

MAC filtering on a switch defines which MAC addresses are allowed to connect to a particular port, dropping other traffic to protect against MAC flooding attacks.

Dynamic ARP inspection (DAI), which can be configured alongside DHCP snooping, prevents a host attached to an untrusted port from flooding the segment with gratuitous ARP replies.

MAC Filtering and MAC Limiting
Configuring MAC filtering on a switch means defining which MAC addresses are allowed to connect to a particular port. This can be done by creating a list of valid MAC addresses or by specifying a limit to the number of permitted addresses. For example, if port security is enabled with a maximum of two MAC addresses, the switch will record the first two MACs to connect to that port, but then drop any traffic from machines with different MAC addresses that try to connect. This provides a guard against MAC flooding attacks.

DHCP Snooping

Another option is to configure Dynamic Host Configuration Protocol (DHCP) snooping. DHCP is the protocol that allows a server to assign IP address information to a client when it connects to the network. DHCP snooping inspects this traffic arriving on access ports to ensure that a host is not trying to spoof its MAC address. It can also be used to prevent rogue (or spurious) DHCP servers from operating on the network. With DHCP snooping, only DHCP messages from ports configured as trusted are allowed. Additionally, dynamic ARP inspection (DAI), which can be configured alongside DHCP snooping, prevents a host attached to an untrusted port from flooding the segment with gratuitous ARP replies. DAI maintains a trusted database of IP:ARP mappings and ensures that ARP packets are validly constructed and use valid IP addresses.
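The MAC limiting behavior described above can be sketched as a simple model. This is an assumed illustration of the port-security logic only, not real switch configuration syntax:

```python
# A minimal sketch of MAC limiting on a switch port: learn the first
# `limit` source MACs to appear on the port, then drop frames from any
# other address (a port-security violation).
class PortSecurity:
    def __init__(self, limit: int = 2):
        self.limit = limit
        self.learned: set[str] = set()

    def accept_frame(self, src_mac: str) -> bool:
        # already-learned MACs are always accepted
        if src_mac in self.learned:
            return True
        # learn new source MACs until the limit is reached
        if len(self.learned) < self.limit:
            self.learned.add(src_mac)
            return True
        return False  # violation: drop traffic from the unknown MAC
```

With a limit of two, a third device's frames are dropped even though the first two devices keep working, which is what blunts a MAC flooding attack.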

Compare the characteristics of a rogue Access Point (AP) in wireless networks to determine which statements correctly summarize their attributes. (Select all that apply.)
An evil twin is a rogue AP masquerading as a legitimate AP, and an attacker may form this by using a Denial of Service (DoS) to overcome the legitimate AP.
Sometimes referred to as an evil twin, a rogue AP masquerading as a legitimate AP may have a similar name to a legitimate AP.
An attacker can set up a rogue AP with something as simple as a smartphone with tethering capabilities.

A Denial of Service (DoS) will bypass authentication security (enabled on the AP), so it is important to regularly scan for rogue APs on the network.

ABC. A rogue AP masquerading as a legitimate one is an evil twin, sometimes known as WiPhishing. An attacker can use a DoS attack against the legitimate AP to put the evil twin in its place.

An attacker can also form an evil twin by giving the AP a similar name (SSID) to that of the legitimate AP. Users may select this AP by mistake and enter their credentials, which the attacker will capture.

Rogue APs can be set up with something as basic as a smartphone with tethering capabilities. It is vital to periodically survey the site to detect rogue APs.

If authentication security is enabled on the AP and the attacker does not know the details of the authentication method, a DoS will not succeed. It is important to scan for rogue APs, but the ease of a DoS is not the reasoning behind the need for regular scans.

A team is building a wireless network, and the company has requested the team to use a Wired Equivalent Privacy (WEP) encryption scheme. The team has developed a recommendation to utilize a different encryption scheme based on the problems with WEP. Analyze the features of WEP to determine what problems to highlight in the recommendation.
WEP only allows the use of a 128-bit encryption key and is not secure. The Initialization Vector (IV) is too large to provide adequate security.
WEP allows for a 256-bit key but is still not secure. The Initialization Vector (IV) is not sufficiently large, thus is not always generated using a sufficiently random algorithm.
WEP has the option to use either a 64-bit or a 128-bit key, which is not secure enough for the company. Packets use a checksum to verify integrity that is too difficult to compute.

WEP only allows the use of a 64-bit key, which is not secure enough for the company. The Initialization Vector (IV) is often not generated using a sufficiently random algorithm.

B. WEP version 1 has both 64-bit and 128-bit keys, while WEP version 2 has 128-bit and 256-bit keys, but neither is secure. The main problem with WEP is the 24-bit Initialization Vector (IV). The IV is supposed to change the keystream each time, but this does not always happen. One problem is that the IV is not sufficiently large, meaning the system will reuse the IV within the same keystream under load.

WEP, depending on the version, allows for the use of 64-, 128-, and 256-bit keys, but none of these options are secure. The IV is not large enough, meaning that the system will reuse it within the same keystream under load.

WEP does have the option to use 64- and 128-bit keys but also allows for 256-bit keys. Packets use a checksum to verify integrity, but that checksum is easy to compute.

WEP allows for 64-, 128-, and 256-bit keys. The IV is often not generated using a sufficiently random algorithm.
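The scale of the IV problem can be checked with a quick calculation. Assuming IVs were chosen uniformly at random (an assumption for illustration; real WEP implementations were often worse), the birthday bound predicts a repeated IV after only a few thousand frames:

```python
import math

IV_BITS = 24
iv_space = 2 ** IV_BITS  # only 16,777,216 possible initialization vectors

# Birthday bound: frames needed for roughly a 50% chance of an IV
# collision, assuming IVs are drawn uniformly at random.
frames_for_collision = math.sqrt(2 * iv_space * math.log(2))
```

At the traffic rates of a busy network, a few thousand frames is a matter of seconds, which is why IV reuse under load made WEP keystream recovery practical.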

A network is under a Distributed Denial of Service (DDoS) attack. The Internet Service Provider (ISP) decides to use a blackhole as a remedy. How does the ISP justify their decision?
A blackhole drops packets for the affected IP address(es) and is in a separate area of the network that does not reach any other part of the network.
A blackhole makes the attack less damaging to the ISP's other customers and continues to send legitimate traffic to the correct destination.
A blackhole routes traffic destined to the affected IP address to a different network. Here, the ISP can analyze and identify the source of the attack, to devise rules to filter it.

A blackhole is preferred, as it evaluates each packet in a multi-gigabit stream against an Access Control List (ACL) without overwhelming the processing resources.

A. A blackhole drops packets for the affected IP address(es). A blackhole is an area of the network that cannot reach any other part of the network, which protects the unaffected portion.

A blackhole does make the attack less damaging to the ISP's other customers, but it does not send legitimate traffic to the correct destination; it does not inspect packets and simply drops them all.

Sinkhole routing directs traffic destined for a particular IP address to a different network, where the ISP can analyze it and identify the source of the attack.

A blackhole is preferred, but it does not evaluate each packet. An ACL option will evaluate each packet but can overwhelm the processing resources, which makes using a blackhole preferred.

Given knowledge of load balancing and clustering techniques, which configuration provides both fault tolerance and consistent performance for applications like streaming audio and video services?
Active/Passive clustering
Active/Active clustering
First in, First out (FIFO) clustering

Fault tolerant clustering

A. In active/passive clustering, if the active node suffers a fault, the connection can fail over to the passive node without performance degradation.

In an active/active cluster, both nodes process connections concurrently, using the maximum hardware capacity. During failover, the failed node's workload shifts to the remaining node, the workload on that node increases, and performance degrades.

Most network appliances process packets on a best effort and first in, first out (FIFO) basis, while the Quality of Service (QoS) framework prioritizes traffic based on its characteristics to better support voice and video applications susceptible to latency and jitter.

Failover ensures that a redundant component, device, or application can quickly and efficiently take over the functionality of an asset that has failed.
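The active/passive failover behavior described above can be sketched with a toy model (the Node and cluster classes here are hypothetical illustrations, not a real clustering API):

```python
# A minimal sketch of active/passive failover: the passive node takes
# over only when the active node fails its health check.
class Node:
    def __init__(self, name: str):
        self.name = name
        self.healthy = True

class ActivePassiveCluster:
    def __init__(self, active: Node, passive: Node):
        self.active, self.passive = active, passive

    def route(self) -> str:
        # failover: swap roles when the active node is unhealthy, so the
        # formerly idle node absorbs the full workload with no degradation
        if not self.active.healthy:
            self.active, self.passive = self.passive, self.active
        return self.active.name
```

Because the passive node was idle before the failure, it can absorb the entire workload at once, which is what keeps performance consistent for latency-sensitive streaming traffic.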

Which statement best describes the difference between session affinity and session persistence?
With persistence, once a client device establishes a connection, it remains with the node that first accepted its request, while an application-layer load balancer uses session affinity to keep a client connected by setting up a cookie.
Session affinity makes node scheduling decisions based on health checks and processes incoming requests based on each node's load. Session persistence makes scheduling decisions on a first in, first out (FIFO) basis.
With session affinity, when a client establishes a session, it remains with the node that first accepted its request, while an application-layer load balancer uses persistence to keep a client connected by setting up a cookie.

Session persistence makes scheduling decisions based on traffic priority and bandwidth considerations, while session affinity makes scheduling decisions based on which node is available next.

C. Session affinity is a layer 4 approach to handling user sessions. When a client establishes a session, it stays with the node that first accepted the request.

Most network appliances process packets on a best effort and FIFO basis. Layer 4 load balancers only make basic connectivity tests, while layer 7 appliances can test the application's state.

An application-layer load balancer uses persistence to keep a client connected to a session. Persistence typically works by setting a cookie, which can be more reliable than session affinity.

Quality of Service (QoS) prioritizes traffic based on its characteristics, like bandwidth requirements for video and voice applications. Round robin is a simple form of scheduling that just picks the next node in turn.

Source IP Affinity and Session Persistence

When a client device has established a session with a particular node in the server farm, it may be necessary to continue to use that connection for the duration of the session. Source IP or session affinity is a layer 4 approach to handling user sessions. It means that when a client establishes a session, it becomes stuck to the node that first accepted the request. An application-layer load balancer can use persistence to keep a client connected to a session. Persistence typically works by setting a cookie, either on the node or injected by the load balancer. This can be more reliable than source IP affinity, but requires the browser to accept the cookie.
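As a concrete example of a persistence cookie, a BIG-IP style BIGipServer cookie encodes the chosen back-end server in the cookie value: the IPv4 address a.b.c.d becomes d*256^3 + c*256^2 + b*256 + a, and the port's two bytes are swapped (so port 80 becomes 80*256 + 0 = 20480, and port 1433 becomes 153*256 + 5 = 39173). A minimal sketch of that encoding (the helper names are illustrative):

```python
def encode_ip(addr: str) -> int:
    # BIG-IP style encoding: a.b.c.d -> d*256^3 + c*256^2 + b*256 + a
    a, b, c, d = (int(x) for x in addr.split("."))
    return d * 256**3 + c * 256**2 + b * 256 + a

def encode_port(port: int) -> int:
    # swap the two bytes that store the port: 80 (0x0050) -> 0x5000 = 20480
    hi, lo = port >> 8, port & 0xFF
    return lo * 256 + hi

def persistence_cookie(pool: str, addr: str, port: int) -> str:
    # hypothetical helper assembling a BIGipServer-style cookie value
    return f"BIGipServer{pool}={encode_ip(addr)}.{encode_port(port)}.0000"
```

The load balancer (or the server, in the passive cookie method) sets this value once, and every later request carrying the cookie is decoded back to the same node.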

Analyze the following scenarios and determine which best simulates a content filter in action. (Select all that apply.)
A system has broken down a packet containing malicious content, and erases the suspicious content, before rebuilding the packet.
A high school student is using the school library to do research for an assignment and cannot access certain websites due to the subject matter.
A system administrator builds a set of rules based on information found in the source IP address to allow access to an intranet.

A system administrator blocks access to social media sites after the CEO complains that work performance has decreased due to excessive social media usage at work.

BD. A content filter restricts web use to only authorized sites. Examples of content filter use include schools restricting access to only .edu sites or disallowing sites that have adult-level content.

Another example of a content filter is a workplace allowing only sites that serve work purposes.

A proxy server works on a store-and-forward model: it deconstructs each packet, performs analysis, then rebuilds the packet and forwards it on. Part of this process is removing suspicious content while rebuilding the packet.

A system administrator configures packet filtering firewalls by specifying a group of rules that define the type of data packet and the appropriate action to take when a packet matches a rule.
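The rule-based evaluation just described can be sketched as follows. The rule format and first-match-wins policy are illustrative assumptions, not any specific firewall's syntax:

```python
from ipaddress import ip_address, ip_network

# Illustrative rule table: (action, source network, destination port).
RULES = [
    ("allow", ip_network("10.0.0.0/8"), 80),  # intranet web traffic
    ("deny", ip_network("0.0.0.0/0"), 23),    # block telnet from anywhere
]

def filter_packet(src_ip: str, dst_port: int, default: str = "deny") -> str:
    # first-match-wins evaluation, falling back to a default action
    for action, net, port in RULES:
        if ip_address(src_ip) in net and dst_port == port:
            return action
    return default
```

Note the default action: real packet filters typically end with an implicit deny, so traffic matching no rule is dropped rather than forwarded.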

Evaluate the functions of a Network-Based Intrusion Detection System (NIDS) and conclude which statements are accurate. (Select all that apply.)
Training and tuning are fairly simple, and there is a low chance of false positives and false negatives.
A NIDS will identify and log hosts and applications for the administrator to analyze, and take action to remove or block attackers.
Training and tuning are complex, and there is a high chance of false positive and negative rates.

A NIDS will identify attacks and block the traffic to stop the attack. The administrator will be able to review the reports for future prevention.

BC. A NIDS can identify and log hosts and applications and detect attack signatures and other indicators of attack. An administrator can analyze the logs to tune firewall rulesets, remove or block suspect hosts and processes, or deploy additional security controls to mitigate identified threats.

One of the main disadvantages of a NIDS is that training and tuning are complex, which results in high false positive and false negative rates, especially during initial deployment.

A NIDS will not block the traffic during an attack, which is a disadvantage. If an administrator does not immediately review logs during an attack, a delay will occur and the attack will continue.

Which of the following solutions best addresses data availability concerns that may arise with the use of application-aware next-generation firewalls (NGFW) and unified threat management (UTM) solutions?
Signature-based detection system
Secure web gateway (SWG)
Network-based intrusion prevention system (IPS)

Active or passive test access point (TAP)

B. A signature-based detection (or pattern-matching) engine is loaded with a database of attack patterns or signatures. If traffic matches a pattern, the engine generates an incident.

While complex NGFW and UTM solutions provide high confidentiality and integrity, their lower throughput reduces availability. One solution is to treat security for server traffic differently from security for user traffic. An SWG acts as a content filter, applying user-focused filtering rules, and also conducts threat analysis.

Intrusion prevention systems (IPS), positioned like firewalls at borders between network zones, provide an active response to network threats.

A TAP is a hardware device inserted into a cable to copy frames for analysis.

SWG (Secure Web Gateway)
An appliance or proxy server that mediates client connections with the Internet by filtering spam and malware and enforcing access restrictions on types of sites visited, time spent, and bandwidth consumed. Enterprise networks often make use of secure web gateways (SWG).

An on-premises SWG is a proxy-based firewall, content filter, and intrusion detection/prevention system that mediates user access to Internet sites and services. A next-generation SWG, as marketed by Netskope, combines the functionality of an SWG with that of data loss prevention (DLP) and a CASB to provide a wholly cloud-hosted platform for client access to websites and cloud apps. This supports an architecture defined by Gartner as secure access service edge (SASE).

A system administrator suspects a memory leak is occurring on a client. Determine which scenario would justify this finding.
A rapid decrease in disk space has been logged.
High page file utilization has been logged.
High utilization when employees are not working has been logged without a scheduled activity.

Decreasing available bytes and increasing committed bytes have been logged.

D. A memory leak is a process that takes up memory without subsequently freeing it, which a worm or other type of malware can cause. Looking for decreasing available bytes and increasing committed bytes can detect this type of memory leak.

The free disk space performance threshold counter will create an alert when there is a rapid decrease in available disk space, which malware or the illegitimate use of a server could cause.

High page file utilization could be caused by insufficient physical memory, but could otherwise indicate malware.

High utilization out-of-hours can be suspicious if no scheduled activities are occurring, such as backup or virus scanning.
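The counter-based heuristic above can be sketched in a few lines. This is an illustrative check over logged samples, not a real performance-monitoring API:

```python
def possible_memory_leak(available: list[int], committed: list[int]) -> bool:
    # A leak is suggested when available bytes fall on every sample
    # while committed bytes rise on every sample.
    decreasing = all(a > b for a, b in zip(available, available[1:]))
    increasing = all(a < b for a, b in zip(committed, committed[1:]))
    return decreasing and increasing
```

A real monitor would tolerate noise (for example, by fitting a trend over a longer window) rather than requiring a strictly monotonic change on every sample.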

A system administrator is configuring a new Dynamic Host Configuration Protocol (DHCP) server. Analyze the types of attacks DHCP servers are prone to and determine which steps the system administrator should take to protect the server. (Select all that apply.)
Use scanning and intrusion detection to pick up suspicious activity.
Disable DHCP snooping on switch access ports to block unauthorized servers.
Enable logging and review the logs for suspicious events.

Disable unused ports and perform regular physical inspections to look for unauthorized devices.

ACD. The system administrator should use scanning and intrusion detection to pick up suspicious activity.

The system administrator should enable logging and then review the logs regularly for suspicious events.

The system administrator should disable unused ports and perform regular physical inspections to ensure that unauthorized devices are not connected via unused jacks.

The system administrator should enable DHCP snooping on switch access ports to prevent the use of unauthorized DHCP servers. DHCP snooping acts as a firewall between the server and untrusted hosts and should be enabled versus disabled.

An organization routinely communicates directly to a partner company via a domain name. The domain name now leads to a fraudulent site for all users. Systems administrators find incorrect host records in DNS. What do the administrators believe to be the root cause?
A server host has a poisoned ARP cache.
Some user systems have invalid hosts file entries.
An attacker masquerades as an authoritative name server.

The domain servers have been hijacked.

C. DNS server cache poisoning aims to corrupt the records held by the DNS server itself. A DNS server queries an authoritative server for domain information, so an attacker can masquerade as an authoritative name server and respond with fraudulent information.

An ARP cache contains entries that map IP addresses to MAC addresses; it is not related to name resolution.

Before developers created DNS, early name resolution took place using a text file named HOSTS. In this case, all users are experiencing the issue, not just some.

Domain reputation can be impacted if an attacker hijacks public servers. In this case, the administrators found invalid host records, which rules out hijacking.

DNS Server Cache Poisoning

DNS server cache poisoning aims to corrupt the records held by the DNS server itself. This can be accomplished by performing DoS against the server that holds the authorized records for the domain, and then spoofing replies to requests from other name servers. Another attack involves getting the victim name server to respond to a recursive query from the attacking host. A recursive query compels the DNS server to query the authoritative server for the answer on behalf of the client. The attacker's DNS server, masquerading as the authoritative name server, responds with the answer to the query, but also includes a lot of false domain:IP mappings for other domains, which the victim DNS server accepts as genuine. The nslookup or dig tools can be used to query the name records and cached records held by a server to discover whether any false records have been inserted.

A system administrator is deploying a new web server. Which hardening procedures should the administrator consider? (Select all that apply.)
The administrator should use SFTP to transfer files to and from the server remotely.
Guest accounts should have the permissions set for outside of the directory for browsing.
The administrator should remove sample pages as they may contain vulnerabilities.

The configuration templates contain vulnerabilities, and the administrator should not utilize them.

AC. Secure file transfer protocol (SFTP) safely transfers files remotely via SSH.

System administrators typically install web servers with sample pages and scripts, along with supporting documentation. These samples sometimes contain vulnerabilities, and administrators should remove them from the production server.

Most web servers must allow for secure access by guests. Guest accounts should have no permissions outside of the directory set up for browsing.

Web servers should deploy using configuration templates where possible.

Analyze the features of Full Disk Encryption (FDE) to select the statements that accurately reflect this type of security. (Select all that apply.)
FDE encrypts the files that are listed as critical with one encryption key.
The encryption key that is used for FDE can only be stored in a TPM on the disk for security.
A drawback of FDE is that the cryptographic operations performed by the OS reduce performance.

FDE requires the secure storage of the key used to encrypt the drive contents.

CD. FDE means that the entire contents of the drive, including system files and folders, are encrypted. The cryptographic operations performed by the OS reduce performance.

FDE normally utilizes a Trusted Platform Module (TPM) to secure the storage of the key used to encrypt the drive contents.

FDE means that the entire content of the drive (or volume), including system files and folders, is encrypted; it is not limited to only critical files.

FDE requires secure storage of the key used to encrypt the drive contents. Normally, this is in a TPM. It is also possible to use a removable USB drive if USB is a boot device option.

Compare and evaluate the various levels and types of security found within a Trusted OS (TOS) to deduce which scenario is an example of a hardware Root of Trust (RoT).
A security system is designed to prevent a computer from being hijacked by a malicious operating system.
The boot metrics and operating system files are checked, and signatures verified at logon.
Digital certificates, keys, and hashed passwords are maintained in hardware-based storage.

The industry standard program code that is designed to operate the essential components of a system.

B. A hardware RoT, or trust anchor, is a secure subsystem that can provide attestation. When a computer joins a network, it may submit a report to the NAC declaring valid OS files. The RoT scans the boot metrics and OS files to verify their signatures.

Secure boot is a security system designed to prevent a computer from being hijacked by a malicious OS.

A Trusted Platform Module (TPM) is a specification for hardware-based storage of digital certificates, keys, hashed passwords, and other user and platform identification information.

The Basic Input/Output System (BIOS) provides an industry standard program code that operates the essential components of the PC and ensures that the design of each manufacturer's motherboard is PC compatible.

Given knowledge of secure firmware implementation, select the statement that describes the difference between secure boot and measured boot.
Secure boot requires a unified extensible firmware interface (UEFI) and trusted platform module (TPM), but measured boot requires only a unified extensible firmware interface (UEFI).
Secure boot provisions certificates for trusted operating systems (OSes) and blocks unauthorized OSes. Measured boot stores and compares hashes of critical boot files to detect the presence of unauthorized processes.
Secure boot is the process of sending a signed boot log or report to a remote server, while measured boot provisions certificates for trusted operating systems (OSes) and blocks unauthorized OSes.

Secure boot requires a unified extensible firmware interface (UEFI) but does not require a trusted platform module (TPM). Measured boot is the mechanism by which a system sends signed boot log or report to a remote server.

B. Secure boot is about provisioning certificates for trusted operating systems and blocking unauthorized OSes. Measured boot stores and compares hashes of critical boot files to detect unauthorized processes.

Secure boot requires UEFI but does not require a TPM. A trusted or measured boot process uses platform configuration registers (PCRs) in the TPM at each stage in the boot process to check whether hashes of key system state data have changed.

Attestation is the process of sending a signed boot log or report to a remote server.

Secure boot prevents the use of a boot loader or kernel that has been changed by malware (or an OS installed without authorization).

Secure Boot

Secure boot is designed to prevent a computer from being hijacked by a malicious OS. UEFI is configured with digital certificates from valid OS vendors. The system firmware checks the operating system boot loader and kernel using the stored certificate to ensure that it has been digitally signed by the OS vendor. This prevents a boot loader or kernel that has been changed by malware (or an OS installed without authorization) from being used. Secure boot is supported on Windows and many Linux platforms. Secure boot requires UEFI, but does not require a TPM.
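The hash-comparison idea behind measured boot can be sketched as follows. File contents are modeled as byte strings for illustration; a real implementation extends measurements into TPM PCRs rather than comparing against a Python dict:

```python
import hashlib

def measure(data: bytes) -> str:
    # hash of a critical boot component's contents
    return hashlib.sha256(data).hexdigest()

def boot_unchanged(images: dict[str, bytes], baseline: dict[str, str]) -> bool:
    # compare current measurements against the recorded known-good baseline;
    # any mismatch suggests a tampered boot file
    return all(measure(data) == baseline.get(name)
               for name, data in images.items())
```

A single changed byte in any measured component produces a completely different hash, which is why hash comparison reliably flags boot-file tampering.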

Contrast vendor support for products and services at the end of their life cycle. Which of the following statements describes the difference between support available during the end of life (EOL) phase and end of service life (EOSL) phase?
During the end of life (EOL) phase, manufacturers provide limited support, updates, and spare parts. In the end of service life (EOSL) phase, developers or vendors no longer support the product and no longer push security updates.
During the end of service life (EOSL) phase, manufacturers provide limited support, updates, and spare parts. In the end of life (EOL) phase, developers or vendors no longer support the product and no longer push security updates.
All vendors adhere to a policy of providing five years of mainstream support (end of life support) and five years of extended support (end of service life support), during which vendors only ship security updates.

A well-maintained piece of software is in its end of service life (EOSL) stage. Abandonware refers to a product during the end of life (EOL) stage, which no longer receives updates.

A. When a manufacturer discontinues a product's sales, it enters an end of life (EOL) phase in which support and availability of spares and updates grow limited. An end of service life (EOSL) system is one that its developer or vendor no longer supports.

EOSL products no longer receive security updates and represent a critical vulnerability if companies actively use them.

Microsoft provides Windows versions five years of mainstream support and five years of extended support (during which Microsoft only ships security updates). Most OS and application vendors have similar policies.

Well-maintained open-source software may have long term support (LTS) versions. Developers may also abandon software, and companies who rely on such abandonware must assume any maintenance responsibility.

Evaluate approaches to applying patch management updates to select the accurate statement.
Service release patch updates are known to cause problems with software application compatibility.
Applying all patches as released is more time consuming than only applying patches as needed.
It is more costly to apply all patches, so most companies choose to apply patches on an as-needed basis.

It is best practice to install patches immediately to provide the highest level of security for workstations.

Answer: A.
It is well recognized that updates, particularly service releases, can cause problems with software application compatibility.
The least time-consuming approach is to apply all of the latest patches. A system administrator who applies patches on a case-by-case basis must stay up to date with security bulletins to see if a patch is necessary.
Patches are usually provided at no cost. The cost associated with patches is the time it takes to test, review, and apply them.

It is best practice to trial an update on a test system to try to discover whether it will cause any problems. Applying a patch immediately could do more harm than good to the workstations.

Select the options that can be configured by Group Policy Objects (GPOs). (Select all that apply.)
Registry settings
Execution control
Software deployment

Baseline deviation

Answer: A, C.
Group Policy Objects (GPOs) are a means of applying security settings across a range of computers. They can be used to configure software deployment among several other tasks.
GPOs can configure registry settings across a range of computers.
Execution control is the process of determining which additional software may be installed on a client or server beyond its baseline. It prevents the use of unauthorized software.

Baseline deviation reporting tests the configuration of clients and servers to ensure they are patched, and their configuration settings match the baseline template.

Evaluate the features and vulnerabilities found in medical devices and then select the accurate statements. (Select all that apply.)
Medical devices are only those devices located outside of the hospital setting, including defibrillators and insulin pumps.
Attackers may attempt to gain access in order to kill or injure patients, or hold medical units ransom.
Medical devices are updated regularly to secure them against vulnerabilities and protect patient safety.

Many portable devices, such as cardiac monitors and insulin pumps, run on unsupported operating systems.

Answer: B, D.
Attackers may have a goal of injuring or killing patients by tampering with dosage levels or device settings.
Many of the control systems for medical devices run on unsupported versions of operating systems, such as Windows XP, because the costs of updating the software to work with newer OS versions is high and disruptive to patient services.
Medical devices can be found in the hospital, clinic, and as portable devices such as cardiac monitors, defibrillators, and insulin pumps.

Medical devices may have unsecure communications protocols. Many devices run on unsupported systems due to the cost and potential disruptions the update would cause.

Compare the features of static and dynamic computing environments and then select the accurate statements. (Select all that apply.)
Embedded systems are typically static, while most personal computers are dynamic.
A dynamic environment is easier to update than a static environment.
A dynamic environment gives less control to a user than a static environment.

Dynamic environments are easier to protect in terms of security than static environments.

Answer: A, B.
An embedded system is a complete computer system designed to perform a specific dedicated function, typically in a static environment. A PC is a dynamic environment where the user can add or remove programs and data files.
A dynamic environment provides users more control, including updating. A static environment update will usually only be available through specific management interfaces.
A static environment gives less control than a dynamic environment. A dynamic environment gives the user access to add and remove programs, update the system, and install new hardware components.

In terms of security, static environments are easier to protect. This is due to the unchanging environment without adding new hardware or software. With fewer changes and additions, the systems are not introduced to as many threats.

Examine the differences between general purpose personal computer hosts and embedded systems and select the true statements regarding embedded system constraints. (Select all that apply.)
Many embedded systems work on battery power, so they cannot require significant processing overhead.
Many embedded systems rely on a root of trust established at the hardware level by a trusted platform module (TPM).
Embedded systems often use the system on chip (SoC) design to save space and increase power efficiency.

Most embedded systems are based on a common but customizable design, such as Raspberry Pi or Arduino.

Answer: A, C.
Many embedded devices are battery-powered and may need to run for years without having to replace the cells. Processing must be kept to the minimum possible level.
Embedded systems often use system on chip (SoC), a design where processors, controllers, and devices reside on a single processor die (or chip). This packaging saves space and is usually power efficient.
A TPM establishes a root of trust at the hardware level on PCs, but most embedded systems do not have embedded TPMs, so they must rely on implicit trust and network perimeter security.

Customers can program a field programmable gate array (FPGA) to run a specific application an embedded system requires, which is more efficient than running a pre-programmed device.

A company security manager takes steps to increase security on Internet of Things (IoT) devices and embedded systems throughout a company's network and office spaces. What measures can the security manager use to implement secure configurations for these systems? (Select all that apply.)
Isolate hosts using legacy versions of operating systems (OSes) from other network devices through network segmentation.
Use wrappers, such as Internet Protocol Security (IPSec), for embedded systems' data in transit.
Increase network connectivity for embedded systems so they receive regular updates.

Maintain vendor-specific software configuration on Internet of Things (IoT) devices that users operate at home and in the office.

Answer: A, B.
Some embedded systems use legacy OSes, making them difficult to secure. Isolating these hosts from others through network segmentation and using endpoint security can help secure them against exploitation.
One way of increasing the security of data in transit for embedded systems is through the use of wrappers, such as IPSec, which secures data through authentication and encryption.
Only specific security control functions require network access for static environments, which the security manager should keep separate from the corporate network with perimeter security.

When designed for residential use, IoT devices can suffer from weak defaults that customers do not take steps to secure. The security manager can configure them to "work" with a minimum of configuration effort.

The owner of a company asks a network manager to recommend a mobile device deployment model for implementation across the company. The owner states security is the number one priority. Which deployment model should the network manager recommend for implementation?
BYOD, since the company can restrict the usage to business only applications.
CYOD, because even though the employee picks the device, the employee only conducts official business on it.
COPE, since only company business can be conducted on the device.

COBO because the company retains the most control over the device and applications.

Answer: D.
Corporate Owned, Business Only (COBO) devices provide the greatest security of the four mobile device deployment models. The device is the property of the company and may only be used for company business.
The Bring Your Own Device (BYOD) model is the least secure of the four models. The device is owned by the employee, and the employee agrees to use it for company use.
Deploying a Choose Your Own Device (CYOD) model means the device is chosen by the employee and owned by the company. The employee is able to use the device for personal business.

A Corporate Owned, Personally-Enabled (COPE) device is supplied and chosen by the company and personal use is allowed.

A user would like to install an application on a mobile device that is not authorized by the vendor. The user decides the best way to accomplish the install is to perform rooting on the device. Compare methods for obtaining access to conclude which type of device the user has, and what actions the user has taken.
The user has an iOS device and has used custom firmware to gain access to the administrator account.
The user has an Android device and has used custom firmware to gain access to the administrator account.
The user has an iOS device and has booted the device with a patched kernel.

The user has an Android device and has booted the device with a patched kernel.

Answer: B.
Rooting is a term associated with Android devices. Some vendors provide authorized mechanisms for users to access the root account on their device. For some devices, it is necessary to exploit a vulnerability or use custom firmware.
A user who has an iOS device and wants access to the administrator account will perform an action called jailbreaking versus rooting.
If the user had an iOS device and had booted the device with a patched kernel, the term would have been jailbreaking.

An Android device is not able to be booted with a patched kernel. Custom firmware or access from the vendor is required to obtain administrator access.

Pilots in an Air Force unit utilize government-issued tablet devices loaded with navigational charts and aviation publications, with all other applications disabled. This illustrates which type of mobile device deployment?
BYOD
COBO
COPE

CYOD

Answer: B.
The company owns the device and dictates the device's purpose in the corporate owned, business only (COBO) model.
Employees own their own devices that meet business configuration standards and run corporate applications in the bring your own device (BYOD) model. Businesses may oversee and audit user-owned devices to some extent, but this model poses significant security challenges.
In the corporate owned, personally-enabled (COPE) model, the company chooses, supplies, and owns the device, but authorizes personal use (subject to acceptable use policies).

With choose your own device (CYOD), the company owns and supplies the device, but the employee chooses it.

A system administrator is working to restore a system affected by a stack overflow. Analyze the given choices and determine which overflow vulnerability the attacker creates.
An attacker changes the return address of an area of memory used by a program subroutine.
An attacker overwrites an area of memory allocated by an application to store variables.
An attacker exploits unsecure code with more values than an array expects.

An attacker causes the target software to calculate a value that exceeds the set bounds.

Answer: A.
A stack is an area of memory used by a program subroutine. It includes a return address, which is the location of the program that called the subroutine. An attacker could use a buffer overflow to change the return address, which is called a stack overflow.
A heap is an area of memory allocated by an application during execution to store a variable. A heap overflow can overwrite the variables with unexpected effects.
An array is a type of variable designed to store multiple values. It is possible to create an array index overflow by exploiting unsecure code to load the array with more values than it expects.

An integer overflow attack causes the target software to calculate a value that exceeds bounds that are set by the software.
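The wraparound behind an integer overflow can be sketched in Python. Since Python integers have arbitrary precision, the sketch below masks results to a fixed width to emulate how a C-style unsigned 8-bit value would wrap; the function name and chosen width are illustrative, not from the source.

```python
def add_u8(a: int, b: int) -> int:
    """Add two values as an unsigned 8-bit integer; results wrap on overflow."""
    # Masking with 0xFF keeps only the low 8 bits, emulating fixed-width storage.
    return (a + b) & 0xFF

# 200 + 100 = 300, which exceeds the 8-bit maximum of 255 and wraps to 44.
print(add_u8(200, 100))  # 44
```

If the software later uses the wrapped value for a bounds check or a buffer size, the unexpectedly small result is what the attacker exploits.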

A threat actor programs an attack designed to invalidate memory locations to crash target systems. Which statement best describes the nature of this attack?
The attacker created a null pointer file to conduct a dereferencing attack.
The attacker programmed a dereferencing attack.
The attacker programmed a null pointer dereferencing exception.

The attacker created a race condition to perform a null pointer dereferencing attack.

Answer: C.
Dereferencing occurs when a program attempts to read or write the memory address stored in a pointer variable. If the memory location is invalid or null, this creates a null pointer dereference type of exception, and the process may crash.
Dereferencing does not mean deleting or removing; it means read or resolve.
A null pointer might allow a threat actor to run arbitrary code. Programmers can use logic statements to test that a pointer is not null before trying to use it.

A race condition is one means of engineering a null pointer dereference exception. Race conditions occur when processes depend on timing and order, and those events fail to execute in the order and timing intended.
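The logic-statement mitigation described above can be illustrated in Python, where dereferencing `None` raises an exception much as a null pointer dereference crashes a process. The `Node` class and function below are a hypothetical sketch, not from the source.

```python
class Node:
    """A minimal linked-list node used to illustrate pointer-style references."""
    def __init__(self, value, nxt=None):
        self.value = value
        self.next = nxt

def tail_value(node):
    # Test the reference for None before dereferencing it, so an
    # empty list returns None instead of raising an exception.
    if node is None:
        return None
    while node.next is not None:
        node = node.next
    return node.value

print(tail_value(Node(1, Node(2))))  # 2
print(tail_value(None))              # None, rather than a crash
```

Without the `is None` guard, calling `tail_value(None)` would attempt `None.next` and raise an `AttributeError`, the Python analog of a null pointer dereference exception.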

Compare and contrast the types of Cross-Site Scripting (XSS) attacks, and select the option that accurately distinguishes between them.
Reflected and stored XSS attacks exploit client-side scripts, while the DOM is used to exploit vulnerabilities in server-side scripts.
Reflected and stored XSS attacks exploit server-side scripts, while the DOM is used to exploit vulnerabilities in client-side scripts.
Reflected and DOM attacks exploit server-side scripts, while a stored attack exploits vulnerabilities in client-side scripts.

Nonpersistent and persistent attacks exploit client-side scripts, while the DOM is used to exploit vulnerabilities in server-side scripts.

Answer: B.
Both reflected and stored Cross-Site Scripting (XSS) attacks exploit server-side scripts. The third type of XSS attack exploits vulnerabilities in client-side scripts, and these scripts often use the Document Object Model (DOM) to modify the content and layout of a web page.
Both reflected and stored XSS attacks exploit server-side scripts, not client-side. Document Object Model (DOM) attacks modify the content and layout utilizing client-side scripts.
While reflected XSS attacks exploit server-side scripts and DOM attacks exploit client-side scripts, a stored attack exploits server-side scripts.

A nonpersistent attack is another name for reflected, and a persistent attack is another name for a stored attack. Both of these attacks exploit server-side scripts. DOM exploits client-side scripts.
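The server-side flaw behind a reflected XSS attack can be sketched in Python: echoing user input into a page unescaped lets injected markup through, while escaping renders it inert. The function names are illustrative, not from the source.

```python
import html

def render_unsafe(name: str) -> str:
    # Vulnerable: user input is placed into the page verbatim.
    return "<p>Hello, {}</p>".format(name)

def render_safe(name: str) -> str:
    # Escaping converts <, >, and & so injected tags display as text.
    return "<p>Hello, {}</p>".format(html.escape(name))

payload = "<script>alert(1)</script>"
print(render_unsafe(payload))  # the script tag survives intact
print(render_safe(payload))    # &lt;script&gt;alert(1)&lt;/script&gt; (inert)
```

In the unsafe version, a victim's browser would execute the injected script as if it were part of the trusted site.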

Analyze the following statements and select the statement which correctly explains the difference between cross-site scripting (XSS) and cross-site request forgery (XSRF).
XSRF spoofs a specific request against the web application, while XSS is a means of running any arbitrary code.
XSS is not an attack vector, but the means by which an attacker can perform XSRF, the attack vector.
XSRF requires a user to click an embedded malicious link, whereas the attacker embeds an XSS attack in a document object model (DOM) script.

XSRF is a server-side exploit, while XSS is a client-side exploit.

Answer: A.
Cross-site request forgery (CSRF or XSRF), a client-side exploit, can take advantage of applications that use cookies to authenticate users and track sessions. XSS exploits a browser's trust and can be used to perform an XSRF attack.
XSS inserts a malicious script that appears to be part of a trusted site. XSS can conduct an XSRF attack.
XSRF passes an HTTP request to the victim's browser that spoofs a target site action, such as changing a password. The attacker can disguise and accomplish this request without the victim necessarily having to click a link.

XSRF is a client-side exploit. An XSS attack may be reflected (nonpersistent) or stored (persistent) and may target back-end systems (server-side) or client-side scripts.

Which type of attack disguises the nature of malicious input, preventing normalization from stripping illegal characters?
Fuzzing
Canonicalization
Code reuse

Code signing

Answer: B.
Normalization means that a string is stripped of illegal characters or substrings and converted to the accepted character set. This ensures that the string is in a format the input validation routines can process correctly. An attacker may use a canonicalization attack to disguise the nature of malicious input.
Fuzzing is a means of testing that an application's input validation routines work well. The test, or vulnerability scanner, generates large amounts of deliberately invalid and random input and records the application's responses.
Code reuse occurs through the use of a block of code from elsewhere in the same application, or from another application, to perform a different function.

Code signing is the principal means of proving the authenticity and integrity of code.
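A short Python sketch of how a canonicalization attack defeats validation: the filter below runs before the input is decoded to its canonical form, so percent-encoded traversal characters slip through. The filter function is a deliberately flawed illustration, not a real API.

```python
from urllib.parse import unquote

def naive_filter(path: str) -> bool:
    # Flawed: tests for directory traversal BEFORE decoding the input.
    return ".." not in path

encoded = "%2e%2e/%2e%2e/etc/passwd"  # decodes to "../../etc/passwd"
print(naive_filter(encoded))           # True - the malicious path passes
print(naive_filter(unquote(encoded)))  # False - canonicalize first, then check
```

The defense is to convert input to its canonical (fully decoded, normalized) form before applying any validation rules.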

Which scenario simulates code in the test environment?
A developer checks out a portion of code for editing on a local machine.
Code from multiple developers is merged to a single master copy.
The code is utilized on a mirror of the production environment.

The application is released to end users.

Answer: B.
In the test (integration) environment, code from multiple developers is merged to a single master copy and subjected to basic unit and functional tests, either automated or performed by human testers. These tests aim to ensure the code builds correctly and fulfills the functions according to design requirements.
In the development environment, the code is hosted on a secure server. Each developer checks out a portion of code for editing on a local machine. The local machine will normally be configured with a sandbox for local testing, which ensures that other processes running locally do not interfere with or compromise the application being developed.
The staging environment is a mirror of the production environment, but may only use test or sample data, with additional access controls so that it is only accessible to test users. Testing at this stage focuses more on usability and performance.
In the production environment, the application is released to end users.

Which cookie attribute can a security admin configure to help mitigate a request forgery attack?
Secure
HttpOnly
SameSite

Cache-Control

Answer: C.
Cookies can be a vector for session hijacking and data exposure if not configured correctly. Use the SameSite attribute to control where a cookie may be sent, mitigating request forgery attacks.
Set the Secure attribute to prevent a cookie from being sent over unencrypted HTTP.
Set the HttpOnly attribute to make the cookie inaccessible to document object model/client-side scripting.

A number of security options can be set in the response header returned by the server to the client, including Cache-Control, which sets whether the browser can cache responses. Preventing caching of data protects confidential and personal information where multiple users might share the client device.
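The three cookie attributes can be combined in a single Set-Cookie header. A minimal Python sketch follows; the helper name and cookie values are illustrative, not from the source.

```python
def session_cookie(name: str, value: str) -> str:
    """Build a hardened Set-Cookie header value."""
    # Secure: only sent over HTTPS. HttpOnly: hidden from client-side
    # scripts. SameSite=Strict: never sent on cross-site requests,
    # which mitigates request forgery (XSRF).
    return f"{name}={value}; Secure; HttpOnly; SameSite=Strict"

print(session_cookie("sessionid", "abc123"))
# sessionid=abc123; Secure; HttpOnly; SameSite=Strict
```

A server would emit this string as the value of a `Set-Cookie` response header; `SameSite=Lax` is a less restrictive alternative that still blocks most cross-site request scenarios.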

A hacker compromises a web browser and uses access to harvest credentials users input when logging in to banking websites. What type of attack has occurred?
Evil twin
Man-in-the-Browser
Session hijacking

Clickjacking

Answer: B.
A man-in-the-browser (MitB) attack compromises the web browser. An attacker may be able to inspect session cookies, certificates, and data, change browser settings, perform redirection, and inject code.
An evil twin is a rogue WAP masquerading as a legitimate one. It can capture user logon attempts, allow man-in-the-middle attacks, and allow access to private information.
Session hijacking involves replaying a web application cookie in some way. Attackers can sniff network traffic to obtain session cookies sent over an unsecured network.

In a clickjacking attack, the user sees and trusts a web application, such as a login page or form, that contains a malicious layer or invisible iFrame allowing an attacker to intercept or redirect user input.

Which phase occurs immediately following the transition phase in the Agile model of a Software Development Lifecycle (SDLC)?
Production
Inception
Iteration

Retirement

Answer: A.
The production phase occurs immediately after the transition phase in the Agile model. The transition phase consists of performing final integration and testing of the solution and preparing for deployment in the user environment. The production phase ensures that the solution operates effectively.
The inception phase occurs following the concept phase. The inception phase includes identifying stakeholders and support for the project and starting to provision resources.
The iteration phase follows inception. In it, the requirements are prioritized and the cycles of designing, developing, testing, and test deploying solutions to the project goals occur.

The retirement phase follows production and consists of deprovisioning the solution and any environmental dependencies.

Evaluate the phases of the Agile model within a Software Development Lifecycle (SDLC) to determine which statement demonstrates the production phase.
Devising an application's initial scope and vision for the project.
Prioritizing the requirements and working through the cycles of designing, developing, and testing.
Testing an application to ensure the solution operates effectively.

Performing the final integration and testing of the solution.

Answer: C.
Agile development flips the waterfall model by iterating through phases concurrently on smaller modules of code. Both are models of a Software Development Lifecycle (SDLC). The production phase includes ensuring the solution operates effectively.
The concept phase includes devising the initial scope and vision for the project and determining its feasibility.
The iteration phase consists of prioritizing requirements and working through cycles of designing, developing, testing, and test deploying solutions to the project goals.

The transition phase includes performing final integration and testing of the solution and preparing for deployment in the user environment.

Select the correct simulation of the testing phase in terms of secure software development.
A security analyst determines the security needs.
A systems engineer identifies threats and controls.
A software developer performs a white box analysis.

A security expert performs a gray box analysis.

Answer: D.
In the testing phase, a security expert performs black box (blind) or gray box (partial disclosure) analysis to test for vulnerabilities in the published application and its publication environment.
In the requirements phase, a security analyst determines security needs and privacy in terms of data processing and access controls.
In the design phase, a systems engineer identifies threats and controls or secure coding practices to meet the requirements.

In the implementation phase, a software developer performs a white box (full disclosure) source code analysis and code review to identify and resolve vulnerabilities.

Code developers de-conflict coding with one another during which phase of the software development life cycle (SDLC)?
Continuous integration
Continuous delivery
Continuous validation

Continuous monitoring

Answer: A.
Continuous integration (CI) is the principle that developers should commit and test updates often. CI aims to detect and resolve coding conflicts early.
Continuous delivery is about testing all of the infrastructure that supports an app, including networking, database functionality, client software, and so on.
Verification is a compliance testing process to ensure that the product or system meets its design goals. Validation is the process of determining whether the application is fit-for-purpose. These processes ensure the application conforms to the secure configuration baseline.

An automation solution will have a system of continuous monitoring to detect service failures, security incidents, and failover mechanisms.

Analyze and select the accurate statements about threats associated with virtualization. (Select all that apply.)
Virtualizing switches and routers with hypervisors makes virtualization more secure.
VM escaping occurs as a result of malware jumping from one guest OS to another.
A timing attack occurs by sending multiple usernames to an authentication server to measure the server response times.

VMs providing front-end, middleware, and back-end servers should remain together to reduce security implications of a VM escaping attack on a host located in the DMZ.

Answer: B, C.
Virtual Machine (VM) escaping refers to malware running on a guest Operating System (OS) jumping to another guest or to the host.
A timing attack occurs by sending multiple usernames to an authentication server and measuring the server's response times.
Hypervisors are a common target of attacks and become more complex when the network infrastructure, such as switches and routers, is also virtualized. When the network infrastructure is implemented in software, it may not be subject to inspection and troubleshooting by system administrators.

VMs providing front-end, middleware, and back-end services should be separated to different physical hosts. This reduces the security implications of a VM escaping attack on a host in the Demilitarized Zone (DMZ).

An organization plans a move of systems to the cloud. In order to identify and assign areas of risk, which solution does the organization establish to contractually specify cloud service provider responsibilities?
Service level agreement
Trust relationship
Responsibilities matrix

High availability

Answer: A.
It is imperative to identify precisely which risks are transferring to the cloud, which risks the service provider is undertaking, and which risks remain with the organization. A service level agreement (SLA) outlines those risks and responsibilities.
A trust relationship simply defines the relationship with a cloud service provider. The more important the service is to a business, the more risk the business invests in that trust relationship.
A responsibility matrix is a good way to identify what risks exist and who is responsible for them. The matrix can be part of an SLA.

High availability is an approach to keeping system functionality constantly available.

A systems administrator deploys a cloud access security broker (CASB) solution for user access to cloud services. Evaluate the options and determine which solution may be configured at the network edge and without modifying a user's system.
Single sign-on
Application programming interface
Forward proxy

Reverse proxy

Answer: D.
A reverse proxy is positioned at the cloud network edge and directs traffic to cloud services if the contents of that traffic comply with policy. This does not require configuration of users' devices, though the approach is only possible if the cloud application has proxy support.
Single sign-on authentication, and enforcing access controls and authorizations from the enterprise network to the cloud provider, is a feature of a CASB.
Rather than placing a CASB appliance or host inline with cloud consumers and the cloud services, an API-based CASB brokers connections between the cloud service and the cloud consumer. For example, if a user account has been disabled or an authorization has been revoked on the local network, the CASB communicates this to the cloud service and uses its API to disable access there too. CASB solutions are quite likely to use both proxy and API modes for different security management purposes.
A forward proxy is a security appliance or host, positioned at the client network edge, that forwards user traffic to the cloud network if the contents of that traffic comply with policy. This requires configuration of users' devices or installation of an agent, and users may be able to evade the proxy and connect directly.

A security team suspects the unauthorized use of an application programming interface (API) to a private web-based service. Which metrics do the team analyze and compare to a baseline for response times and usage rates, while investigating suspected DDoS attacks? (Select all that apply.)
Number of requests
Error rates
Latency

Endpoint connections

Answer: A, C.
The number of requests is a basic load metric that counts the number of requests per second or requests per minute. Depending on the service type, an admin can set a baseline for typical usage.
Latency is the time in milliseconds (ms) taken for the service to respond to an API call. This can be measured for specific services or as an aggregate value across all services.
Error rates measure the number of errors as a percentage of total calls, usually classifying error types under category headings.

Admin can manage unauthorized and suspicious endpoint connections to the API in the same sort of way as remote access.
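The metrics above reduce to simple calculations against a baseline. A minimal Python sketch follows; the function names and the tolerance multiplier are assumptions for illustration, not from the source.

```python
def error_rate(errors: int, total_calls: int) -> float:
    """Errors as a percentage of total API calls."""
    return 100.0 * errors / total_calls if total_calls else 0.0

def latency_exceeds_baseline(latency_ms: float, baseline_ms: float,
                             tolerance: float = 2.0) -> bool:
    # Flag response times more than `tolerance` times the established
    # baseline, one possible indicator of a DDoS attack in progress.
    return latency_ms > baseline_ms * tolerance

print(error_rate(12, 400))                 # 3.0 (percent)
print(latency_exceeds_baseline(900, 120))  # True
```

In practice these values would be computed continuously from API gateway logs and compared against baselines recorded during normal operation.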

An engineer utilizes infrastructure as code to deploy and manage a network. When considering an abstract model that represents network functionality, how does the engineer make control decisions?
By managing compatible physical appliances
By prioritizing and securing traffic
By monitoring traffic conditions

By using security access controls

Answer: B.
When using an abstract model, the engineer can divide the network functions into three "planes." The control plane makes decisions about how to prioritize and secure traffic, as well as where to switch it.
A software-defined networking (SDN) application can manage the aspects of all "planes" in the abstract model. SDN can manage compatible physical appliances, but also virtual switches, routers, and firewalls.
The management plane monitors traffic conditions and the overall network status.

The data plane handles the actual switching and routing of traffic and the imposition of security access controls.

A security professional is looking to harden systems at an industrial facility. In particular, the security specialist needs to secure an HVAC system that is part of an IoT network. Which areas does the specialist look to secure from data exfiltration exploits? (Select all that apply.)
Edge devices
Data center
Fog node

Edge gateway

CD
A security specialist can incorporate fog nodes as a data processing layer positioned close to edge gateways, assisting the prioritization of critical data transmission. Fog nodes are high-value targets for both denial of service and data exfiltration attacks.
Edge gateways perform some pre-processing of data to and from edge devices to enable prioritization. They also provide the wired or wireless connectivity to transfer data to and from the storage and processing networks. Edge gateways are high-value targets to exploit.
Edge devices collect and depend upon data for their operation. For example, a thermometer in an HVAC system collects temperature data.

The cloud or data center provides the main storage and processing resources, plus distribution and aggregation of data.

A startup designs a new online service and uses a serverless approach for some business functions. With this approach, how does the startup perform these functions? (Select all that apply.)
Virtual machines
Containers
Single service

Orchestration

BD
When an operation needs processing, the cloud spins up a container to run the code, performs the processing, and then destroys the container.
A virtual machine is a full-fledged operating system that runs in a virtual environment and is considered a server, not serverless. Serverless refers to creating and using containers when needed.
A single service would provide only a specific output. Many services must work together in a serverless environment.

Serverless architecture depends heavily on the concept of event-driven orchestration with many services involved to facilitate operations.

Analyze and determine the role that is responsible for managing the system where data assets are stored and for enforcing access control, encryption, and backup measures.
Data owner
Data steward
Data custodian

Privacy officer

C
A data custodian is responsible for managing the system where data assets are stored, including responsibility for enforcing access control, encryption, and backup or recovery measures.
A data owner has the ultimate responsibility for maintaining the confidentiality, integrity, and availability of the information asset.
The data steward is primarily responsible for data quality, such as ensuring data is labeled and identified with appropriate metadata.

The privacy officer is responsible for oversight of any Personally Identifiable Information (PII) assets managed by the company and ensures that the processing and disclosure of PII comply with the legal and regulatory frameworks.

A document contains information about a company that is too valuable to permit any risks, and viewing is severely restricted. Analyze the levels of classification and determine the appropriate classification for the document.
Critical
Confidential
Classified

Unclassified

A
Documents labeled as critical contain information that is too valuable to permit any risk of its capture, and viewing is severely restricted.
Documents labeled as confidential contain information that is highly sensitive and is for viewing only by approved persons within the organization, or possibly by third parties under a nondisclosure agreement (NDA). This classification may also be called low.
Documents labeled as classified contain information that limits viewing to only persons within an organization or third parties under an NDA. This classification may also be called private, restricted, internal use only, or official use only.

Unclassified documents are unrestricted and anyone can view the document. This document does not contain information that will harm the company if released. This classification is also known as public.

Choose which of the following items classify as Personally Identifiable Information. (Select all that apply.)
Job position
Gender
Full name

Date of birth

CD
A full name can be used to identify, contact, or locate an individual. A full name is an identifier and can be used to search for a person to locate more PII that can be used to contact or locate that person.
A date of birth can be used to identify, contact, or locate an individual. This information is often used when verifying identity and may be used by an attacker to obtain unauthorized access to accounts and obtain other PII.
A job position is not considered PII. A position does not identify a specific person and is typically categorized as public information. Knowing only the name of a job position does not arm an attacker with the ability to identify, contact, or locate an individual.

Gender is not considered PII and is not unique to an individual. This information will not assist an attacker with contacting, identifying, or locating an individual.

A new cloud-based application will replicate its data on a global scale. Which concerns should the organization that provides the data to consumers take into consideration? (Select all that apply.)
General Data Protection Regulation (GDPR)
Sovereignty
Location

Roles

BC
Data sovereignty refers to a jurisdiction preventing or restricting processing and storage from taking place on systems that do not physically reside within that jurisdiction.
Storage locations might have to be carefully selected to mitigate data sovereignty issues. Most cloud providers allow a choice of data centers for processing and storage.
GDPR protections extend to any EU citizen while they are within EU or EEA (European Economic Area) borders.

There are important institutional governance roles for oversight and management of information assets within a data life cycle. These roles help to manage and maintain data.

Analyze the features of Microsoft's Information Rights Management (IRM) and choose the scenarios that accurately depict IRM. (Select all that apply.)
File permissions are assigned based on the roles within a document.
A document is emailed as an attachment, but cannot be printed by the receiver.
A document does not allow screen capture on any device it is sent to.

An email message cannot be forwarded to another employee.

ABD
A benefit of IRM is that file permissions can be assigned for different document roles, such as author, editor, or reviewer. Each role can have specific access, such as sending, printing, and editing.
Printing and forwarding of documents can be restricted even when the document is sent as a file attachment. This means that a forwarded document may not have printing capabilities.
Printing and forwarding of email messages can be restricted.
One disadvantage is that a document protected with IRM is not immune to screen captures. These captures can be done with a camera phone, third-party software, or a workaround on an Apple system.
P460
Rights Management Services—As another example of data protection and information management solutions, Microsoft provides an Information Rights Management (IRM) feature in their Office productivity suite, SharePoint document collaboration services, and Exchange messaging server. IRM works with the Active Directory Rights Management Services (RMS) or the cloud-based Azure Information Protection. These technologies provide administrators with the following functionality:
• Assign file permissions for different document roles, such as author, editor, or reviewer.
• Restrict printing and forwarding of documents, even when sent as file attachments.

• Restrict printing and forwarding of email messages.

A systems administrator suspects that a virus has infected a critical server. In which step of the incident response process does the administrator notify stakeholders of the issue?
Recovery
Identification
Containment

Eradication

B
In the identification phase, it is important to determine whether an incident has taken place, assess how severe it might be (triage), and notify stakeholders.
The recovery phase reintegrates the system into the environment and may involve the restoration of data from backup and security testing. The systems administrator must monitor the systems more closely for a period to detect and prevent any reoccurrence of the attack.
The containment phase aims to limit the scope and magnitude of the incident. The goal is to secure data while limiting the immediate impact on customers and business partners.

In the eradication phase, the admin removes the cause and restores the system to a secure state by applying secure configuration settings and installing patches.

Incident management relies heavily on efficient allocation of resources. Which of the following factors should the IT manager consider to effectively triage remediation efforts? (Select all that apply.)
Planning time
Downtime
Detection time

Recovery time

BCD
Downtime is a critical factor to consider, as it reflects the degree to which an incident disrupts business processes. An incident can either degrade (reduce performance) or interrupt (completely stop) the availability of an asset, system, or business process.
Detection time is an important consideration, requiring that the systems used to search for intrusions are thorough and that the response to detections is fast.
Recovery time must be considered, as some incidents require complex system changes and lengthy remediation. This extended recovery period should trigger heightened alertness for continued or new attacks.

Planning time can refer to the expected time for completing a project plan, or a period of time scheduled for an IT team to work together to plan out projects. It is not a consideration for incident remediation efforts.

During weekly scans, a system administrator identifies a system that has software installed that goes against security policy. The system administrator removes the system from the network in an attempt to limit the effect of the incident on the remainder of the network. Apply the Computer Security Incident Handling Guide principles to determine which stage of the incident response life cycle the administrator has entered.
Preparation
Identification
Containment, eradication, and recovery

Lessons learned

C
The system administrator has entered the containment, eradication, and recovery stage by removing the system from the network. This action contains the incident and protects the other network resources. This is also the stage where the administrator will repair the system and bring it back online or replace it.
Preparation is the stage where the admin puts controls in place to prevent the software from being installed.
The identification stage was completed when the scan was conducted and the unauthorized software was identified.

The lessons learned stage will occur after the containment, eradication, and recovery stage is completed and lessons learned will be utilized to improve the security of the network.

A security team desires to modify event logging for several network devices. One team member suggests using the configuration files from the current logging system with another open format that uses TCP with a secure connection. Which format does the team member suggest?
Syslog-ng
Rsyslog
Syslog

NXlog

B
Rsyslog can work over TCP and use a secure connection. It uses the same configuration file syntax as Syslog. Rsyslog can use more types of filter expressions in its configuration file to customize message handling.
Syslog-ng is an update to Syslog that can use TCP secure communications, but it uses a different configuration file syntax than Syslog.
Syslog provides an open format, protocol, and server software for logging event messages. A very wide range of host types use Syslog, which communicates over UDP.

NXlog is an open-source log normalization tool. One common use for it is to collect Windows logs, which use an XML-based format and then normalize them to a standard syslog format.

A security expert needs to review systems information to conclude what may have occurred during a breach. The expert reviews NetFlow data. What samples does the expert review?
Protocol usage and endpoint activity
Traffic statistics at any layer of the OSI model
Statistics about network traffic

Bandwidth usage and comparative baselines

C
A flow collector is a means of recording metadata and statistics about network traffic rather than recording each frame. Network traffic and flow data may come from a wide variety of sources.
A SIEM collects data from sensors. The information captured from network packets can be aggregated and summarized to show overall protocol usage and endpoint activity.
sFlow, developed by HP and subsequently adopted as a web standard, uses sampling to measure traffic statistics at any layer of the OSI model for a wide range of protocol types.

If one has reliable baselines for comparison, bandwidth usage can be a key indicator of suspicious behavior. Unexpected bandwidth consumption could be evidence of a data exfiltration attack.
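The baseline comparison described above can be sketched as a simple statistical check. This is a hedged illustration only; the three-standard-deviation threshold and the sample figures are arbitrary choices, not an established detection rule.

```python
import statistics

def flag_bandwidth_anomalies(baseline_mbps, samples_mbps, k=3.0):
    """Flag bandwidth samples exceeding the baseline mean by more than
    k standard deviations -- a possible indicator of data exfiltration."""
    mean = statistics.mean(baseline_mbps)
    stdev = statistics.stdev(baseline_mbps)
    threshold = mean + k * stdev
    return [s for s in samples_mbps if s > threshold]
```

With a baseline hovering around 100 Mbps, a 400 Mbps sample would be flagged while ordinary variation would not; the quality of the flag depends entirely on how representative the baseline is.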

When endpoint security experiences a breach, there are several classes of vector to consider for mitigation. Which type relates to exploiting an unauthorized service port change?
Configuration drift
Weak configuration
Lack of controls

Social Engineering

A
Configuration drift applies when malware exploits an undocumented configuration change (shadow IT software or an unauthorized service/port, for instance).
A weak configuration is correctly applied, but exploited anyway. Review of the settings is recommended to ensure the highest level of security.
If endpoint protection/A-V, host firewall, content filtering, DLP, or MDM could have prevented an attack, then investigate the possibility of a lack of security controls.

Social engineering means that a user executed an exploit. Use security education and awareness to reduce the risk of future attacks succeeding.

A security analyst would like to review attack information on a compromised system. Which containment approach reduces the opportunity for the analyst to be successful?
Black hole
VLAN
ACL

Airgap

D
A simple option is to disconnect the host from the network completely (creating an air gap) or to disable its switch port. This is the least stealthy option and will reduce opportunities to analyze the attack or malware due to the isolation.
The analyst can implement a routing infrastructure to isolate one or more infected virtual LANs (VLANs) in a black hole that is not reachable from the rest of the network.
Segmentation-based containment is a means of achieving the isolation of a host or group of hosts using network technologies and architecture such as VLANs.

ACLs can prevent a host or group of hosts from communicating outside of a protected segment.

Which term defines the practice of collecting evidence from computer systems to an accepted standard in a court of law?
Forensics
Due process
eDiscovery

Legal hold

A
Computer forensics is the practice of collecting evidence from computer systems to an accepted standard in a court of law.
Due process is a common law term used in the US and the UK which requires that people only be convicted of crimes following the fair application of the laws of the land.
eDiscovery is a means of filtering the relevant evidence produced from all the data gathered by a forensic examination and storing it in a database in a format to use as evidence in a trial.

Legal hold refers to the fact that information that may be relevant to a court case must be preserved.

Which of the following is an example of the process of identifying and de-duplicating files and metadata to be stored for evidence in a trial?
Legal hold
Forensics
eDiscovery

Due process

C
eDiscovery is a means of filtering the relevant evidence produced from all the data gathered by a forensic examination and storing it in a database in a format to use as evidence in a trial.
Legal hold refers to the fact that information that may be relevant to a court case must be preserved.
Forensics is the practice of collecting evidence from computer systems to an accepted standard in a court of law.
Due process is a term used in common law to require that people only be convicted of crimes following the fair application of the laws of the land.
P495
A forensic examination of a device such as a fixed drive that contains Electronically Stored Information (ESI) entails a search of the whole drive (including both allocated and unallocated sectors, for instance). E-discovery is a means of filtering the relevant evidence produced from all the data gathered by a forensic examination and storing it in a database in a format such that it can be used as evidence in a trial. E-discovery software tools have been produced to assist this process. Some of the functions of e-discovery suites are:
• Identify and de-duplicate files and metadata—many files on a computer system are "standard" installed files or copies of the same file. E-discovery filters these types of files, reducing the volume of data that must be analyzed.
• Search—allow investigators to locate files of interest to the case. As well as keyword search, software might support semantic search. Semantic search matches keywords if they correspond to a particular context.
• Tags—apply standardized keywords or labels to files and metadata to help organize the evidence. Tags might be used to indicate relevancy to the case or part of the case or to show confidentiality, for instance.
• Security—at all points evidence must be shown to have been stored, transmitted, and analyzed without tampering.

• Disclosure—an important part of trial procedure is that the same evidence be made available to both plaintiff and defendant. E-discovery can fulfill this requirement. Recent court cases have required parties to a court case to provide searchable ESI rather than paper records.

A security expert archives sensitive data that is crucial to a legal case involving a data breach. The court is holding this data due to its relevance. The expert fully complies with any procedures as part of what legal process?
Chain of custody
Due process
Forensics

Legal hold

D
Legal hold refers to information that the security expert must preserve, which may be relevant to a court case. Regulators or the industry's best practice may define the information that is subject to legal hold.
Chain of custody reinforces the integrity and proper handling of evidence from collection, to analysis, to storage, and finally to presentation. When security breaches go to trial, the chain of custody protects an organization against accusations of tampering with the evidence.
Due process is a common law term used in the US and UK to require that people only be convicted of crimes following the fair application of the laws of the land.

Forensics is the practice of collecting evidence from computer systems to a standard that a court of law will accept.

An engineer retrieves data for a legal investigation related to an internal fraud case. The data in question is from an NTFS volume. What will the engineer have to consider with NTFS when documenting a data timeline?
UTC time
Local system time
Time server

Time offset

A
NTFS uses UTC "internally." When collecting evidence, it is vital to establish the procedure used to calculate a timestamp and to note the difference between the local system time and UTC.
Many operating systems and file systems record timestamps as the local system time, which must be accounted for when documenting a data timeline.
Most computers have the clock configured to synchronize to a Network Time Protocol (NTP) server. Closely synchronized time is important for authentication and audit systems to work properly.

Local time is the time within a particular time zone, which will be offset from UTC by several hours (or in some cases, half-hours). The local time offset may also vary if a seasonal daylight savings time is in place.
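The relationship between a UTC timestamp (as NTFS stores it) and the local time an examiner documents can be illustrated with a short snippet. The UTC-5 zone and the sample date below are hypothetical, chosen only to show the offset arithmetic.

```python
from datetime import datetime, timezone, timedelta

# NTFS records this timestamp in UTC
utc_ts = datetime(2023, 6, 1, 14, 30, 0, tzinfo=timezone.utc)

# The examiner documents the local zone's offset (hypothetical UTC-5)
local_zone = timezone(timedelta(hours=-5))
local_ts = utc_ts.astimezone(local_zone)

# Record both values and the offset in the timeline documentation
offset_hours = local_zone.utcoffset(None).total_seconds() / 3600
```

Here `local_ts` is 09:30 with a documented offset of -5 hours; noting the offset (including any seasonal daylight savings change) keeps timestamps from different sources comparable in the timeline.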

An engineer utilizes digital forensics for information gathering. While doing so, the first focus is counterintelligence. Which concepts does the engineer pursue? (Select all that apply.)
Identification and analysis of specific adversary tactics
Build cybersecurity capabilities
Configure and audit active logging systems

Inform risk management provisioning

AC
Counterintelligence includes the identification and analysis of specific adversary tactics, techniques, and procedures (TTP). This information improves understanding of adversary approaches, which counterintelligence can note for monitoring.
Counterintelligence provides information about how to configure and audit active logging systems so that they are most likely to capture evidence of attempted and successful intrusions.
Strategic intelligence is data and research that security specialists have analyzed to produce actionable insights that help to build mature cybersecurity capabilities.

Strategic intelligence is information that security specialists have gathered through research and provides insights used to inform risk management and security control provisioning.

A systems breach occurs at a manufacturer. The system in question contains highly valuable data. An engineer plans a live acquisition but, ultimately, is not successful. What reason may be stopping the engineer?
There is no hibernation file present
The tools are not preinstalled
The crash dump file is missing

The pagefile is corrupt

B
A specialist hardware or software tool can capture the contents of memory while the host is running (live acquisition). This type of tool needs to be preinstalled, as it requires a kernel mode driver to dump any data of interest.
When a Windows host is in a sleep state, the system creates a hibernation file on disk in the root folder of the boot volume. This file is not a prerequisite for a live acquisition.
When Windows encounters an unrecoverable kernel error, it can write contents of memory to a dump file. This file is not a prerequisite for a live acquisition.

The pagefile/swap file/swap partition stores pages of memory in use that exceed the capacity of the host's RAM modules. This file is not a prerequisite for a live acquisition.

A cloud server has been breached. The organization realizes that data acquisition differs in the cloud when compared to on-premises. What roadblocks may the organization have to consider regarding data acquisition? (Select all that apply.)
On-demand services
Jurisdiction
Chain of custody

Notification laws

ABC
The on-demand nature of cloud services means that instances are often created and destroyed again, with no real opportunity for forensic recovery of any data.
Jurisdiction and data sovereignty may restrict what evidence the CSP is willing to release to the organization.
Chain of custody issues are complex, as the organization may have to rely on the CSP to select and package data for it.

If the CSP is a data processor, it will be bound by data breach notification laws and regulations. This issue does not relate to the acquisition of data.

A systems breach occurs at a financial organization. The system in question contains highly valuable data. When performing data acquisition for an investigation, which component does an engineer acquire first?
RAM
Browser cache
SSD data

Disk controller cache

D
The order of volatility outlines a general list of which components the engineer should examine for data. The engineer should first examine CPU registers and cache memory (including the cache on disk controllers and GPUs).
The engineer should acquire the contents of nonpersistent system memory (RAM), including routing tables, ARP caches, process tables, and kernel statistics, after any cache memory.
The engineer performs data acquisition on persistent mass storage devices after any available system caches or memory. This includes temporary files, such as those found in a browser cache.
The engineer performs data acquisition on persistent mass storage devices (such as HDDs or SSDs) after any available system caches or memory.
P501
1. CPU registers and cache memory (including cache on disk controllers, GPUs, and so on).
2. Contents of nonpersistent system memory (RAM), including routing table, ARP cache, process table, kernel statistics.
3. Data on persistent mass storage devices (HDDs, SSDs, and flash memory devices):
• Partition and file system blocks, slack space, and free space.
• System memory caches, such as swap space/virtual memory and hibernation files.
• Temporary file caches, such as the browser cache.
• User, application, and OS files and directories.
4. Remote logging and monitoring data.
5. Physical configuration and network topology.

6. Archival media and printed documents.

A company performs risk management. Which action identifies a risk response approach?
A company develops a list of processes necessary for the company to operate.
A company develops a countermeasure for an identified risk.
A company conducts penetration testing to search for vulnerabilities.

A company determines how the company will be affected in the event a vulnerability is exploited.

B
The fifth phase of risk management is identifying the risk response. A countermeasure should be identified for each risk, and the cost of deploying additional security controls should be assessed.
The first phase of risk management is to identify mission essential functions. Mitigating risk can involve a large amount of expenditure, so it is important to focus efforts. Part of risk management is to analyze workflows and identify the mission essential functions that could cause the whole business to fail if they are not performed.
The second phase of risk management is to identify vulnerabilities for each function or workflow. This includes analyzing systems and assets to discover and list any vulnerabilities or weaknesses to which they may be susceptible.
The fourth phase of risk management is to analyze business impacts: the likelihood of a vulnerability being activated as a security incident by a threat, and the impact that incident may have on critical systems.
P510
Risk management is a process for identifying, assessing, and mitigating vulnerabilities and threats to the essential functions that a business must perform to serve its customers. You can think of this process as being performed over five phases:
1. Identify mission essential functions—mitigating risk can involve a large amount of expenditure, so it is important to focus efforts. Effective risk management must focus on mission essential functions that could cause the whole business to fail if they are not performed. Part of this process involves identifying critical systems and assets that support these functions.
2. Identify vulnerabilities—for each function or workflow (starting with the most critical), analyze systems and assets to discover and list any vulnerabilities or weaknesses to which they may be susceptible.
3. Identify threats—for each function or workflow, identify the threat sources and actors that may take advantage of, exploit, or accidentally trigger vulnerabilities.
4. Analyze business impacts—the likelihood of a vulnerability being activated as a security incident by a threat and the impact of that incident on critical systems are the factors used to assess risk. There are quantitative and qualitative methods of analyzing impacts and likelihood.
5. Identify risk response—for each risk, identify possible countermeasures and assess the cost of deploying additional security controls. Most risks require some sort of mitigation, but other types of response might be more appropriate for certain types and levels of risk.
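The quantitative method mentioned under business impact analysis is commonly expressed through Single Loss Expectancy (SLE = asset value × exposure factor) and Annualized Loss Expectancy (ALE = SLE × annualized rate of occurrence). A minimal sketch with purely hypothetical figures:

```python
def single_loss_expectancy(asset_value, exposure_factor):
    """SLE: the expected cost of a single occurrence of the risk."""
    return asset_value * exposure_factor

def annualized_loss_expectancy(sle, aro):
    """ALE: SLE scaled by the Annualized Rate of Occurrence (ARO)."""
    return sle * aro

# Hypothetical example: a $200,000 asset, 25% loss per incident,
# expected once every two years (ARO = 0.5)
sle = single_loss_expectancy(200_000, 0.25)  # 50,000
ale = annualized_loss_expectancy(sle, 0.5)   # 25,000
```

Comparing the ALE against the annual cost of a countermeasure is one way to weigh the risk response options identified in the final phase.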

Select the phase of risk management a company has performed if they analyzed workflows and identified critical tasks that could cause their business to fail if not performed.
Identify mission essential functions
Identify vulnerabilities
Identify threats

Analyze business impacts

A
The first phase of risk management is to identify mission essential functions. Mitigating risk can involve a large amount of expenditure, so it is important to focus efforts. Part of risk management is to analyze workflows and identify the mission essential functions that could cause the whole business to fail if they are not performed.
The second phase of risk management is to identify vulnerabilities for each function or workflow. This includes analyzing systems and assets to discover and list any vulnerabilities or weaknesses to which they may be susceptible.
The third phase of risk management is to identify threats that may take advantage of, exploit, or accidentally trigger vulnerabilities. Threat refers to the sources or motivations of people and things that could cause loss or damage.
The fourth phase of risk management is to analyze business impacts: the likelihood of a vulnerability being activated as a security incident by a threat, and the impact of that incident on critical systems.

Select the example that provides an accurate simulation of a company engaging in the risk management phase of identifying threats.
A company develops a list of processes that are necessary for the company to operate.
A company conducts research to determine why vulnerabilities may be exploited.
A company conducts penetration testing to search for vulnerabilities.

A company determines how the company will be affected in the event a vulnerability is exploited.

B
The third phase of risk management is to identify threats that may take advantage of, exploit, or accidentally trigger vulnerabilities. Threat refers to the sources or motivations of people and things that could cause loss or damage.
The first phase of risk management is to identify mission essential functions. Mitigating risk can involve a large amount of expenditure, so it is important to focus efforts. Part of risk management is to analyze workflows and identify the mission essential functions that could cause the whole business to fail if they are not performed.
The second phase of risk management is to identify vulnerabilities for each function or workflow. This includes analyzing systems and assets to discover and list any vulnerabilities or weaknesses to which they may be susceptible.
The fourth phase of risk management is to analyze business impacts: the likelihood of a vulnerability being activated as a security incident by a threat, and the impact of that incident on critical systems.

Management of a company identifies priorities during a risk management exercise. By doing so, which risk management approach does management use?

Inherent risk
Risk posture
Risk transference
Risk avoidance

B

Risk posture is the overall status of risk management. Risk posture shows which risk response options management can identify and prioritize.

The result of a quantitative or qualitative analysis is a measure of inherent risk. Inherent risk is the level of risk before attempting any type of mitigation.

Transference means assigning risk to a third party, such as an insurance company or a contract with a supplier that defines liabilities.

Risk avoidance means that management halts the activity that is risk-bearing. For example, management may discontinue a flawed product to avoid risk.

P514

Evaluate the metrics associated with Mission Essential Functions (MEF) to determine which example is demonstrating Work Recovery Time (WRT).

A business function takes five hours to restore, resulting in an irrecoverable business failure.
It takes two hours to identify an outage and restore the system from backup.
It takes three hours to restore a system from backup, and the restore point is two hours prior to the outage.
It takes three hours to restore a system from backup, reintegrate the system, and test functionality.

D

Work Recovery Time (WRT) is the additional time that it takes to restore data from backup, reintegrate different systems, and test overall functionality. This can also include briefing system users on any changes or different working practices so that the business function is again fully supported.

The Maximum Tolerable Downtime (MTD) is the longest period of time that a business function outage may occur without causing irrecoverable business failure.

Recovery Time Objective (RTO) is the period following a disaster that an individual IT system may remain offline. This represents the amount of time it takes to identify a problem and perform recovery steps.

Recovery Point Objective (RPO) is the amount of data loss that a system can sustain, measured in time. If a database with an RPO of 24 hours is destroyed, the data can be recovered to a point not more than 24 hours before the destruction.
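The way these metrics fit together can be sketched numerically. The snippet below is a minimal illustration, assuming the common planning rule that RTO plus WRT must fit inside the MTD; the function name and sample values are hypothetical, not from the source.

```python
def recovery_plan_fits(rto_hours: float, wrt_hours: float, mtd_hours: float) -> bool:
    """True if system recovery time (RTO) plus work recovery time (WRT)
    fits within the Maximum Tolerable Downtime (MTD)."""
    return rto_hours + wrt_hours <= mtd_hours

# A 3-hour restore (RTO) plus 3 hours of reintegration and testing (WRT)
# fits inside an 8-hour MTD, but not inside a 5-hour MTD.
print(recovery_plan_fits(rto_hours=3, wrt_hours=3, mtd_hours=8))  # True
print(recovery_plan_fits(rto_hours=3, wrt_hours=3, mtd_hours=5))  # False
```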

A critical server has a high availability requirement of 99.99%. Calculate the Maximum Tolerable Downtime (MTD) in hh:mm:ss to determine which option meets the requirement.

0:53:56
0:49:23
1:24:19
2:48:42

B

The Maximum Tolerable Downtime (MTD) metric states the requirement for a particular business function. High availability is usually described as 24x7. For a critical system, availability will be described as between 99% and 99.9999%. In this scenario, the requirement is 99.99%, resulting in a maximum annual downtime of 00:52:34. Since 00:49:23 is less downtime than that maximum, the system meets the requirement.

A downtime of 00:53:56 is more than the maximum annual downtime of 00:52:34. As a result, it is outside of the MTD.

A downtime of 01:24:19 is more than the maximum annual downtime of 00:52:34. As a result, it is outside of the MTD.

A downtime of 02:48:42 is more than the maximum annual downtime of 00:52:34. As a result, it is outside of the MTD.
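The 00:52:34 figure comes straight from the availability percentage: 0.01% of the seconds in a year. A minimal sketch of the arithmetic (the function name is illustrative):

```python
def max_annual_downtime(availability: float) -> str:
    """Convert an availability fraction (e.g. 0.9999 for 99.99%) into the
    maximum tolerable annual downtime, formatted as hh:mm:ss."""
    total_seconds = 365 * 24 * 3600            # seconds in a non-leap year
    downtime = round((1 - availability) * total_seconds)
    h, rem = divmod(downtime, 3600)
    m, s = divmod(rem, 60)
    return f"{h:02d}:{m:02d}:{s:02d}"

print(max_annual_downtime(0.9999))  # 00:52:34 -- only 00:49:23 stays under this
```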

Analyze automation strategies to differentiate between elasticity and scalability. Which scenarios demonstrate scalability? (Select all that apply.)

A company is hired to provide data processing for 10 additional clients and has a linear increase in costs for the support.
A company is hired to provide data processing for 10 additional clients and is able to utilize the same servers to complete the tasks without performance reduction.
A company has a 10% increase in clients and a 5% increase in costs.
A company has a 10% increase in clients and a 10% decrease in server performance.

AC

Scalability means that the costs involved in supplying the service to more users grow linearly. For example, if the number of users doubles in a scalable system, the costs to maintain the same level of service should also double, or less than double; if costs more than double, the system is less scalable. A company that is hired to provide data processing for ten additional clients and has a linear increase in costs for the support is a scalable system.

A company that has a 10% increase in clients and a 5% increase in costs is highly scalable, because the cost increase is smaller than the client increase.

Elasticity refers to a system's ability to handle changes in demand in real time. A company that is hired to provide data processing for 10 additional clients and can utilize the same servers to complete the tasks without performance reduction is displaying elasticity.

A company that has a 10% increase in clients and a 10% decrease in server performance demonstrates neither scalability nor elasticity: service levels degrade as demand grows.
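The "costs grow no faster than demand" test above can be expressed as a simple ratio. This is an illustrative heuristic, not a formal metric from the source; the function name and thresholds are assumptions.

```python
def cost_to_demand_ratio(client_growth_pct: float, cost_growth_pct: float) -> float:
    """Ratio of cost growth to client growth. A value <= 1.0 suggests a
    scalable system (costs grow linearly or better); > 1.0 suggests the
    system scales poorly, since costs outpace demand."""
    return cost_growth_pct / client_growth_pct

# 10% more clients with only a 5% cost increase: highly scalable.
print(cost_to_demand_ratio(10, 5))   # 0.5
# 10% more clients with a 10% cost increase: linear scalability.
print(cost_to_demand_ratio(10, 10))  # 1.0
```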

IT staff looks to provide a high level of fault tolerance while implementing a new server. With which systems configuration approach does the staff achieve this goal?

Adapting to demand in real time
Adding more resources for power
Focusing on critical components
Increasing the power of resources

C

A system often achieves fault tolerance by provisioning redundancy for critical components and single points of failure. Although not required, a redundant component is available for system recovery.

Elasticity refers to the system's ability to handle any changes in resource demand in real time. Elasticity often applies to processing power and storage.

A system achieves scalability by adding resources. To scale out is to add more resources in parallel with existing resources.

A system achieves scalability by adding resources. To scale up is to increase the power of existing resources.

Security specialists create a sinkhole to disrupt any adversarial attack attempts on a private network. Which solution do the specialists configure?

Routing traffic to a different network
Using fake telemetry in response to port scanning
Configuring multiple decoy directories on a system
Staging fake IP addresses as active

A

A popular disruption strategy is to configure a DNS sinkhole to route suspect traffic to a different network, such as a honeynet, where it can be analyzed.

Port triggering or spoofing can disrupt adversarial attacks by returning fake telemetry data when a host detects port scanning activity.

To create disruption for an adversary, a security specialist can configure a web server with multiple decoy directories or dynamically generated pages to slow down scanning.

Fake telemetry can disrupt an adversary by reporting IP addresses as up and available when they actually are not.

P545 Another type of active defense uses disruption strategies. These adopt some of the obfuscation strategies used by malicious actors. The aim is to raise the attack cost and tie up the adversary's resources. Some examples of disruption strategies include:

• Using bogus DNS entries to list multiple hosts that do not exist.
• Configuring a web server with multiple decoy directories or dynamically generated pages to slow down scanning.
• Using port triggering or spoofing to return fake telemetry data when a host detects port scanning activity. This will result in multiple ports being falsely reported as open and will slow down the scan. Telemetry can refer to any type of measurement or data returned by remote scanning. Similar fake telemetry could be used to report IP addresses as up when they are not, for instance.
• Using a DNS sinkhole to route suspect traffic to a different network, such as a honeynet, where it can be analyzed.
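As a concrete illustration, a DNS sinkhole can be implemented as a resolver override. The dnsmasq fragment below is a hypothetical sketch (the domain and honeynet address are invented for the example): any lookup of the suspect domain resolves to a honeynet analysis host instead of the real server.

```
# /etc/dnsmasq.conf (sketch): sinkhole a suspect domain by answering
# queries for it with the address of a honeynet host, where the
# redirected traffic can be captured and analyzed.
address=/suspect-domain.example/10.99.0.10
```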

A company is working to restore operations after a blizzard stopped all operations. Evaluate the order of restoration and determine the correct order of restoring devices from first to last.

Routers, firewalls, Domain Name System (DNS), client workstations
Domain Name System (DNS), routers, firewalls, client workstations
Firewalls, routers, Domain Name System (DNS), client workstations
Routers, client workstations, firewalls, Domain Name System (DNS)

A

The order of restoration calls for enabling switch infrastructure first, then routing appliances, then firewalls, and then Domain Name System (DNS) servers. The final step is to enable client workstations and devices.

DNS should not be enabled prior to routers and firewalls, because both are needed before DNS is operable.

Routers should be restored prior to firewalls. Routers should be restored immediately following switch infrastructure, as firewalls are not needed until routers are online.

Client workstations should be restored last, as firewalls and DNS must be restored prior to bringing the client workstations back online.

A recent systems crash prompts an IT administrator to perform recovery steps. Which mechanism does the administrator use to achieve nonpersistence?

Configuration validation
Data replication
Restoration automation
Revert to known state

D

Snapshot/revert to known state is a saved system state that the administrator can reapply to the instance. This is a mechanism that achieves nonpersistence.

Configuration validation is a process that ensures that a recovery solution is working at each layer (hardware, network connectivity, data replication, and application).

Data replication is a process of reinstating data on a system. Nonpersistence is achieved by separating any system restore from data replication.

Automation may restore a system by building and provisioning an instance according to template instructions. This is a mastering instruction, not a nonpersistence mechanism.

P538 Nonpersistence. When recovering systems, it may be necessary to ensure that any artifacts from the disaster, such as malware or backdoors, are removed when reconstituting the production environment. This can be facilitated in an environment designed for nonpersistence. Nonpersistence means that any given instance is completely static in terms of processing function. Data is separated from the instance so that it can be swapped out for an "as new" copy without suffering any configuration problems. There are various mechanisms for ensuring nonpersistence:

• Snapshot/revert to known state—this is a saved system state that can be reapplied to the instance.
• Rollback to known configuration—a physical instance might not support snapshots but has an "internal" mechanism for restoring the baseline system configuration, such as Windows System Restore.
• Live boot media—another option is to use an instance that boots from read-only storage to memory rather than being installed on a local read/write hard disk.

When provisioning a new or replacement instance automatically, the automation system may use one of two types of mastering instructions:

• Master image—this is the "gold" copy of a server instance, with the OS, applications, and patches all installed and configured. This is faster than using a template, but keeping the image up to date can involve more work than updating a template.
• Automated build from a template—similar to a master image, this is the build instructions for an instance. Rather than storing a master image, the software may build and provision an instance according to the template instructions.

Another important process in automating resiliency strategies is to provide configuration validation. This process ensures that a recovery solution is working at each layer (hardware, network connectivity, data replication, and application). An automation solution for incident and disaster recovery will have a dashboard of key indicators and may be able to evaluate metrics such as compliance with RPO and RTO from observed data.

A natural disaster has resulted in a company moving to an alternate processing site. The company resumes operations within a few hours because the site is a building with all of the equipment and data needed to resume services. Evaluate the types of recovery sites to determine which processing site the company is utilizing.

Replication site
Cold site
Warm site
Hot site

D

The company is utilizing a hot site for recovery. A hot site can failover almost immediately. The site is already within the organization's ownership and is ready to deploy.

A cold site takes longer to set up (up to a week) and does not have the equipment or data needed to set up immediately.

A warm site contains features of both hot and cold sites. An example of a warm site is a building with the computer equipment available, but the company must supply the latest data set to be operational.

Replication is the process of duplicating data between different servers or sites.

A system has a slight misconfiguration that could be exploited. A manufacturing workflow relies on this system. The admin recommends a trial of the proposed settings under which process?

Change management
Change control
Asset management
Configuration management

A

Change management involves careful planning, with consideration for how the change will affect dependent components. For most significant or major changes, organizations should attempt to trial the change first.

The admin performs a change control process prior to the actual change. This process requests and approves changes in a planned and controlled way.

An asset management process tracks all the organization's critical systems, components, devices, and other objects of value in an inventory.

Configuration management ensures that each component of ICT infrastructure is in a trusted state that has not diverged from its documented properties.

A systems engineer decides that security mechanisms should differ for various systems in the organization. In some cases, systems will have multiple mechanisms. Which types of diversity does the engineer practice? (Select all that apply.)

Control
Vendor
Change
Resiliency

AB

Control diversity means that the layers of controls should combine different classes of technical and administrative controls across the range of control functions.

Vendor diversity means that security controls are sourced from multiple suppliers. Relying on solutions from a single vendor is a security weakness, because a vulnerability in that vendor's products affects every layer that uses them.

Change refers to change control processes or change management, not a type of diversity. Change management involves careful planning, with consideration for how the change will affect dependent components.

Enterprise-level networks often provision resiliency at the site level. An alternate processing or recovery site is a location that can provide the same (or similar) level of service. Resiliency itself does not provide diversity.

Compare physical access controls with network security to identify the statements that accurately connect the similarities between them. (Select all that apply.)

Authentication provides users access through the barriers, while authorization determines the barriers around a resource.
An example of authentication in networking is a user logging into the network with a smart card. Similarly, authentication in physical security is demonstrated by an employee using a badge to enter a building.
Authorization provides users access through barriers, while authentication creates barriers around a resource.
An example of authorization in networking is a user logging into the network with a smart card. Similarly, authorization in physical security is demonstrated by an employee using a badge to enter a building.

AB

Authentication creates access lists and identification mechanisms to allow approved persons through the barriers. Authorization determines the barriers around a resource so that access can be controlled through defined entry and exit points.

An example of authentication is a user who logs into a network with a smart card. In terms of physical security, authentication is represented by an employee using a badge to enter a building.

An example of authorization is determining which files a user can access on the network after authenticating. In physical security, the same can be said of the barriers in place to determine which employees can access a controlled room.

P550 Physical access controls are security measures that restrict and monitor access to specific physical areas or assets. They can control access to a building, to equipment, or to specific areas, such as server rooms, finance or legal areas, data centers, network cable runs, or any other area that has hardware or information that is considered to have important value and sensitivity. Determining where to use physical access controls requires a cost–benefit analysis and must consider any regulations or other compliance requirements for the specific types of data that are being safeguarded. Physical access controls depend on the same access control fundamentals as network or operating system security:

• Authentication—create access lists and identification mechanisms to allow approved persons through the barriers.
• Authorization—create barriers around a resource so that access can be controlled through defined entry and exit points.
• Accounting—keep a record of when entry/exit points are used and detect security breaches.

Physical security can be thought of in terms of zones. Each zone should be separated by its own barrier(s). Entry and exit points through the barriers need to be controlled by one or more security mechanisms. Progression through each zone should be progressively more restricted.

An organization plans the destruction of old HDDs. In an effort to save money, the organization damages the media by impact, but does not destroy all of the data. Which method has the organization tried?

Degaussing
Pulping
Shredding
Pulverizing

D

Pulverizing involves destroying media by impact. It is important to note that hitting a hard drive with a hammer can actually leave a surprising amount of recoverable data. Industrial machinery should perform this type of destruction.

Degaussing involves exposing a magnetic hard disk to a powerful electromagnet. This disrupts the magnetic pattern that stores the data on the disk surface.

Pulping involves mixing any shredded remains of destroyed documents with water to provide an extra measure of protection.

Shredders destroy documents. Some shredders are powerful enough to destroy optical media too. Industrial shredders can destroy hard drives and flash drives.