CompTIA Security+ SY0-601: How to Recognize Indicators of Common Network Attacks

Introduction

CompTIA Security+ SY0-601 objective: Given a scenario, analyze potential indicators associated with network attacks.

I’m keeping this aligned to the older SY0-601 wording, even though plenty of folks also compare it with SY0-701 study material.

This objective looks simple until the exam gives you fragments instead of labels: users report slowness, a firewall shows spikes, DNS answers look wrong, or a host starts making periodic outbound connections. Your job is to recognize the pattern. That is exactly how real operations work. An indicator is evidence, not proof.

The good analysts I’ve worked with don’t jump to conclusions—they tie the symptoms, the protocol behavior, the logs, and the packet details together first, then they name the attack.

A useful way to stay oriented is the CIA triad: availability problems often suggest floods and service exhaustion, confidentiality problems often suggest interception or spoofing, and integrity problems often suggest poisoning, replay, or traffic manipulation. That lens is not perfect, but it helps narrow answers fast.

Attack Indicator Triage Workflow

Here’s the same five-step workflow I use for exam questions and real triage:

  1. Identify the symptom: outage, slowdown, redirection, certificate warning, unusual login, or strange outbound traffic.
  2. Identify the protocol or layer: TCP handshake, ICMP, DNS, DHCP, ARP, routing, or application traffic.
  3. Classify the likely impact: availability, confidentiality, integrity, or a mix.
  4. Correlate evidence: firewall, IDS/IPS, DNS, DHCP, NetFlow, packet capture, switch logs, SIEM, EDR.
  5. Rule out benign causes: patch windows, vulnerability scans, failover events, TLS inspection, bad certificates, or misconfiguration.

That last step matters. Security+ often asks for the most likely answer, not absolute certainty.
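The five-step workflow can be sketched as a toy triage helper. The keyword lists and the impact mapping below are my own illustrative assumptions, not official exam mappings; real triage still requires the correlation and rule-out steps.

```python
# Toy triage helper for step 3 of the workflow: map a free-text symptom
# description to the CIA impact class it most often suggests.
# Keyword lists are illustrative assumptions, not an authoritative taxonomy.
IMPACT_HINTS = {
    "availability": ["outage", "slowdown", "saturation", "exhaustion", "flood"],
    "confidentiality": ["interception", "spoofing", "rogue ap", "on-path"],
    "integrity": ["poisoning", "replay", "redirection", "altered"],
}

def classify_impact(symptom: str) -> str:
    """Return the likely impact class for a symptom description."""
    s = symptom.lower()
    for impact, keywords in IMPACT_HINTS.items():
        if any(k in s for k in keywords):
            return impact
    return "unknown"  # fall through to manual correlation (steps 4-5)
```

A symptom like "bandwidth saturation at the edge" lands in availability, while "DNS poisoning suspected" lands in integrity; anything ambiguous stays unknown until you correlate evidence.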

Common Network Attacks and High-Value Indicators

| Attack | Key Indicators | Best Data Source | First Thought |
| --- | --- | --- | --- |
| DoS / DDoS | Traffic spike, bandwidth saturation, many requests, service slowdown | NetFlow, firewall, WAF, load balancer logs | Availability attack |
| SYN flood | Many SYNs, few completed handshakes, half-open sessions | Packet capture, IDS/IPS, host connection table | TCP resource exhaustion |
| ICMP flood | High echo-request volume, latency, packet loss | Firewall, packet capture, NetFlow | Ping traffic used for disruption |
| Port scan | One source probing many ports or hosts, many failures/resets | Firewall and IDS/IPS logs | Reconnaissance |
| ARP poisoning | Gateway MAC changes, duplicate ARP mappings, repeated ARP replies | ARP table, switch logs, packet capture | Local IPv4 interception |
| DNS poisoning / hijacking | Unexpected resolution, altered cache, odd TTLs, redirection | DNS logs, dig, nslookup | Name resolution manipulation |
| Rogue DHCP | Multiple offers, wrong gateway/DNS, unexpected lease options | DHCP logs, packet capture, switch logs | Unauthorized network configuration |
| Beaconing / C2 | Periodic or jittered outbound connections, rare domains, low-volume persistence | NetFlow, DNS logs, proxy, EDR | Post-compromise activity |
| Lateral movement | New east-west SMB/RDP/WinRM/SSH/WMI/PsExec activity, auth anomalies | SIEM, auth logs, NetFlow, EDR | Internal spread |

DoS and DDoS: Both aim to make a service unavailable. DDoS means distributed attack infrastructure, not just “many visible IPs.” Reflection and amplification attacks can hide the true origin while still being distributed. On the exam, clues include bandwidth saturation, many requests, or “many distributed hosts.” Know the three broad patterns: volumetric attacks consume bandwidth, protocol attacks exhaust connection/state resources, and application-layer attacks overwhelm the app itself.

In cloud-heavy environments, you’ll often spot the first clue in CDN, WAF, or load balancer data long before you get a clean look at raw packets. Honestly, that’s just how a lot of environments work these days.

SYN flood: The attacker sends SYN packets, the server responds with SYN-ACK, but the final ACK never arrives. That leaves many connections in SYN-RECV or equivalent half-open states until backlog resources are exhausted.

In Wireshark, a SYN-only filter can show you the initial attempts, but that by itself doesn’t prove a flood. The real clue is that the handshakes aren’t finishing.

On Linux you might check ss -ant; on Windows, Get-NetTCPConnection or legacy netstat -an.
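The half-open-session clue can be checked programmatically. Here is a minimal sketch that tallies the state column of `ss -ant`-style output; the 100-connection threshold is an illustrative assumption you would tune per server, and a high count is a lead rather than proof.

```python
# Count half-open (SYN-RECV) sessions from `ss -ant`-style output lines.
# Many SYN-RECV entries relative to ESTAB is the classic SYN-flood clue.
# The threshold below is an illustrative assumption; tune it per server.
from collections import Counter

def connection_states(ss_lines):
    """Tally the TCP state column (first field) of each output line."""
    return Counter(line.split()[0] for line in ss_lines if line.strip())

def looks_like_syn_flood(ss_lines, min_half_open=100):
    """Flag output whose half-open count exceeds the (assumed) threshold."""
    return connection_states(ss_lines).get("SYN-RECV", 0) >= min_half_open
```

Feeding it a capture with 150 SYN-RECV rows and one ESTAB row would trip the flag; a normal burst of completed handshakes would not.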

ICMP abuse: Separate three ideas. A ping sweep is reconnaissance across many hosts. An ICMP echo flood is high-volume disruption against a target. A smurf-style amplification attack is a related historical concept where spoofed ICMP requests trigger amplified replies. Exam questions usually care about the difference between recon and flood: same protocol, different purpose.
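The recon-versus-flood distinction comes down to fan-out versus volume, which a small sketch can make concrete. The record format and both thresholds here are illustrative assumptions.

```python
# Distinguish a ping sweep (one source probing many destinations) from an
# echo flood (high packet volume at one destination). Input records are
# (src_ip, dst_ip) tuples for echo requests; thresholds are assumptions.
from collections import Counter

def icmp_pattern(packets, sweep_hosts=20, flood_pkts=1000):
    per_src_dsts = {}
    per_dst = Counter()
    for src, dst in packets:
        per_src_dsts.setdefault(src, set()).add(dst)
        per_dst[dst] += 1
    if any(len(dsts) >= sweep_hosts for dsts in per_src_dsts.values()):
        return "ping sweep (recon)"          # fan-out across many hosts
    if per_dst and max(per_dst.values()) >= flood_pkts:
        return "echo flood (disruption)"     # volume against one target
    return "unremarkable"
```

Same protocol either way; the shape of the traffic is what names the attack.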

Port scanning: Scans can be vertical (many ports on one host), horizontal (same port across many hosts), block scans, or slow/distributed scans.

A TCP connect scan goes all the way through the handshake, while a SYN scan usually bails out once it learns whether the port looks open or closed.

FIN/Xmas/NULL scans are stealthier conceptually, and UDP scans often produce timeouts instead of clear resets. A classic clue is many connection attempts with little successful session establishment. Also remember false positives: vulnerability scanners, NAC probes, and management platforms can look scan-like.
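The "many attempts, little establishment" clue translates directly into a flow-log heuristic. The flow-record shape and both thresholds below are illustrative assumptions; a vulnerability scanner will score exactly the same way, which is why the source still needs to be validated.

```python
# Flag sources that probe many distinct (host, port) pairs with few
# completed sessions. Flow records are assumed to be
# (src_ip, dst_ip, dst_port, completed) tuples; thresholds are assumptions.
from collections import defaultdict

def scan_suspects(flows, min_ports=50, max_success_ratio=0.1):
    targets = defaultdict(set)
    attempts = defaultdict(int)
    successes = defaultdict(int)
    for src, dst, dport, completed in flows:
        targets[src].add((dst, dport))
        attempts[src] += 1
        successes[src] += int(completed)
    return [
        src for src in targets
        if len(targets[src]) >= min_ports
        and successes[src] / attempts[src] <= max_success_ratio
    ]
```

A source sweeping 100 ports with zero completions gets flagged; a client making repeated successful connections to one service does not.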

Spoofing: Spoofing means forged identity, such as IP or MAC. IP spoofing is common in stateless or reflection scenarios and less useful where a full bidirectional TCP session must complete unless combined with another weakness. Treat path or addressing inconsistencies as suspicious, not conclusive, because NAT, VPNs, multihoming, and routing quirks can look strange too. Anti-spoofing controls include source address validation, ingress and egress filtering, and technologies such as unicast reverse path forwarding and upstream source-validation filtering.

On-path attacks: “On-path” is the modern term, though “man-in-the-middle” still appears in exams and tools. Indicators include certificate warnings, altered traffic, proxy/path changes, and unexpected redirection. But certificate warnings do not automatically mean attack; expired certificates, hostname mismatch, missing intermediate certificate authorities, TLS inspection proxies, and captive portals can all trigger them.

You can also run into SSL/TLS stripping or downgrade attempts, where an encrypted session gets pushed back toward plaintext or forced into a weaker negotiation than it should’ve used.

ARP poisoning: ARP maps IPv4 addresses to MAC addresses on a local broadcast domain.

It doesn’t cross routed boundaries, and it’s an IPv4 thing. For IPv6, the similar idea is Neighbor Discovery or RA/NDP spoofing.

The classic clue is a default gateway IP suddenly mapping to a different MAC on multiple hosts. Check arp -a or ip neigh, then confirm with packet capture. Dynamic ARP Inspection is basically the switch saying, ‘Hold on, does this ARP message match what I trust?’ It checks ARP behavior against trusted bindings, and those bindings usually come from DHCP snooping.
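Comparing ARP snapshots over time is the simplest way to operationalize that clue. A sketch, assuming you have already parsed `arp -a` or `ip neigh` output into IP-to-MAC dictionaries; a change is a lead, not proof, since a NIC replacement or gateway failover produces the same signal.

```python
# Compare two ARP-table snapshots and report IPs whose MAC changed,
# e.g. the default gateway suddenly mapping to a new hardware address.
def arp_changes(before, after):
    """before/after: {ip: mac} dicts parsed from `arp -a` / `ip neigh`."""
    return {
        ip: (before[ip], after[ip])
        for ip in before
        if ip in after and before[ip].lower() != after[ip].lower()
    }
```

Seeing the same gateway change on several hosts at once is what pushes this from "hardware swap" toward "ARP poisoning".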

DNS poisoning and related DNS attacks: Distinguish several lookalikes: DNS cache poisoning, malicious hosts-file changes, rogue DNS server assignment, and registrar or router-level DNS hijacking. “Unexpected IP” is only a clue; validate against authoritative DNS, known-good internal DNS, TTL behavior, and DNSSEC status where applicable. Also know DNS tunneling: unusually long or encoded subdomains, heavy TXT record use, high query volume to one domain, or suspicious periodicity can indicate covert data transfer or C2.
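The "unusually long or encoded subdomains" clue for tunneling can be approximated with label length and character entropy. The cutoffs below are illustrative assumptions; real detection should baseline against your own query traffic, since CDNs and telemetry domains also produce odd-looking names.

```python
# Rough DNS-tunneling heuristic: flag query names whose subdomain portion
# is unusually long or random-looking (high Shannon entropy).
# Both cutoffs are illustrative assumptions, not tuned detection values.
import math
from collections import Counter

def shannon_entropy(s: str) -> float:
    """Bits of entropy per character of s (0.0 for empty or uniform)."""
    counts = Counter(s)
    n = len(s)
    return -sum(c / n * math.log2(c / n) for c in counts.values()) if n else 0.0

def tunneling_suspect(qname: str, max_len=52, max_entropy=3.8) -> bool:
    # Crude registered-domain split: drop the last two labels.
    sub = ".".join(qname.rstrip(".").split(".")[:-2])
    return len(sub) > max_len or shannon_entropy(sub) > max_entropy
```

A plain `www.example.com` stays quiet, while a 60-character encoded label under an unfamiliar domain trips the check; pairing this with query volume and periodicity is what makes it useful.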

Rogue DHCP and DHCP starvation: DHCP follows DORA: Discover, Offer, Request, Acknowledge.

A rogue DHCP server usually gets traction by replying faster than the real one, or by sitting in a spot where clients hear it before they hear the legitimate server.

Multiple offers can happen legitimately in failover or poorly segmented environments, so the stronger clue is an unauthorized offer source or bad option values such as wrong gateway, DNS server, proxy auto-discovery settings, or odd lease times.

DHCP starvation is related. In a DHCP starvation attack, the attacker floods Discover requests until the address pool gets burned through. Then a rogue server can step in and hand out malicious settings once the legitimate scope is exhausted.
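Checking offers against a known-good baseline captures the "unauthorized source or bad option values" clue. The baseline addresses and record fields here are hypothetical assumptions for illustration.

```python
# Flag DHCP offers whose server or option values deviate from an expected
# baseline. All baseline values below are hypothetical examples.
EXPECTED = {
    "servers": {"10.0.0.10"},   # authorized DHCP servers (assumption)
    "gateway": "10.0.0.1",
    "dns": "10.0.0.53",
}

def suspicious_offers(offers, expected=EXPECTED):
    """offers: list of dicts with 'server', 'gateway', and 'dns' keys."""
    flagged = []
    for offer in offers:
        reasons = []
        if offer["server"] not in expected["servers"]:
            reasons.append("unauthorized server")
        if offer["gateway"] != expected["gateway"]:
            reasons.append("wrong gateway")
        if offer["dns"] != expected["dns"]:
            reasons.append("wrong DNS")
        if reasons:
            flagged.append((offer["server"], reasons))
    return flagged
```

Note that a flagged offer still needs context: a DHCP failover peer or lab device will fail the server check without being malicious.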

MAC flooding and VLAN hopping: MAC flooding tries to exhaust the switch CAM table. When the table overflows, the switch may fall back to unknown-unicast flooding, forwarding frames out more ports than intended and exposing traffic the attacker should never have seen. VLAN hopping is segmentation bypass, usually by switch spoofing or double-tagging. Double-tagging only works under specific misconfigurations and typically depends on native VLAN behavior on a trunk. Proper mitigations include disabling trunk negotiation, explicitly configuring access and trunk ports, and avoiding default or native VLAN mistakes.

Replay attacks: Replay is the reuse of valid captured data, often in authentication or session protocols.

Replay attacks aren’t just a generic network problem, either—they can show up at the application, protocol, or session layer.

The real defense is freshness validation: nonces, timestamps, challenge-response, sequence numbers, and short-lived tokens.

And here’s the catch: strong authentication by itself doesn’t stop replay if the protocol doesn’t also check freshness.
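A small sketch makes that catch concrete: the HMAC below proves the message is authentic, but only the timestamp window and nonce set actually stop replay. Field names, the shared key, and the 30-second window are all illustrative assumptions.

```python
# Freshness check for a signed message. A valid HMAC alone does not stop
# replay; the receiver must also reject stale timestamps and reused nonces.
# Message fields and the 30-second window are illustrative assumptions.
import hashlib
import hmac
import time

SEEN_NONCES = set()  # in practice: bounded, expiring storage

def verify_fresh(msg: dict, key: bytes, now=None, max_age=30) -> bool:
    now = time.time() if now is None else now
    payload = f"{msg['nonce']}|{msg['ts']}|{msg['body']}".encode()
    mac = hmac.new(key, payload, hashlib.sha256).hexdigest()
    if not hmac.compare_digest(mac, msg["mac"]):
        return False   # forged or corrupted
    if abs(now - msg["ts"]) > max_age:
        return False   # stale: timestamp outside the freshness window
    if msg["nonce"] in SEEN_NONCES:
        return False   # replayed: nonce already consumed
    SEEN_NONCES.add(msg["nonce"])
    return True
```

Replaying a captured message verbatim fails on the nonce check even though its signature is still perfectly valid, which is exactly the point.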

Beaconing and C2: Malware often reaches out over HTTPS, DNS, ICMP, or even common cloud services. Strict periodicity is common, but modern malware may use jitter to avoid easy detection. Good clues include rare destinations, repeated small flows, unusual TLS certificate metadata, suspicious server name indication where visible, or DNS patterns that do not fit business use. TLS hides payload, but metadata such as timing, destination, bytes, traffic fingerprinting concepts, and certificate details still help.
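Cadence is measurable even when payloads are encrypted: near-constant gaps between connections give a low coefficient of variation. The 0.2 cutoff is an illustrative assumption, and legitimate updaters and heartbeats will also score low, so this is a lead generator, not a verdict.

```python
# Beaconing heuristic: compute the coefficient of variation (stdev/mean)
# of the gaps between a host's outbound connections to one destination.
# Low values suggest machine-driven periodicity, even with some jitter.
# The 0.2 cutoff is an illustrative assumption; updaters also score low.
import statistics

def beaconing_score(timestamps):
    """Coefficient of variation of inter-connection intervals (seconds)."""
    gaps = [b - a for a, b in zip(timestamps, timestamps[1:])]
    if len(gaps) < 2 or statistics.mean(gaps) == 0:
        return float("inf")  # not enough data to judge
    return statistics.stdev(gaps) / statistics.mean(gaps)

def looks_like_beacon(timestamps, cutoff=0.2) -> bool:
    return beaconing_score(timestamps) < cutoff
```

A host calling out every five minutes, even with a few seconds of jitter, scores near zero; human browsing produces wildly uneven gaps and scores high.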

Lateral movement: Once inside, attackers pivot.

Look for new east-west traffic, remote administration protocols, privileged logons, admin-share access, remote service creation, WMI, PsExec, pass-the-hash activity, or Kerberos abuse. That’s the kind of stuff that should make you stop and take a second look.

In Windows-heavy environments, authentication logs and service-creation telemetry can be just as useful as raw network logs. Actually, sometimes they’re even more useful because they tell you what the attacker tried to do, not just where they tried to connect.

Where the Evidence Appears

No single log source is enough. Packet captures show flags, ARP replies, DNS answers, and handshake behavior, but visibility depends on capture point, mirror-port or network tap placement, encryption, and packet loss. Firewall and proxy logs show source and destination trends. NetFlow shows who talked to whom and how often, which is excellent for DDoS patterns and beaconing cadence. DNS and DHCP logs explain name resolution and lease behavior. Switch logs help with CAM, ARP, VLAN, and trust-boundary issues. SIEM and EDR tie it together.

When correlating, normalize timestamps and time zones first. NAT, proxies, load balancers, and cloud front ends can hide the original client or make one event appear as many. In hybrid environments, also use cloud-native telemetry such as virtual network flow logs, security group or network security group flow logs, WAF logs, CDN telemetry, and load balancer logs.
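Timestamp normalization is mechanical but easy to get wrong. A minimal sketch, assuming the sources emit ISO 8601 timestamps with explicit UTC offsets; timestamps without an offset need their source timezone attached before this conversion.

```python
# Normalize mixed-timezone log timestamps to UTC before correlating.
# Assumes ISO 8601 strings that carry an explicit UTC offset.
from datetime import datetime, timezone

def to_utc(ts: str) -> datetime:
    """Parse an offset-aware ISO 8601 timestamp and convert it to UTC."""
    return datetime.fromisoformat(ts).astimezone(timezone.utc)

def same_instant(ts_a: str, ts_b: str) -> bool:
    """True if two timestamps from different sources name the same moment."""
    return to_utc(ts_a) == to_utc(ts_b)
```

Two log lines stamped `18:00:00+02:00` and `16:00:00+00:00` are the same event; sorting raw strings would put them two hours apart.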

False Positives and Misconfigurations That Mimic Attacks

Security+ loves distractors here. A patch window can create connection spikes that resemble a flood. A vulnerability scanner can look like hostile reconnaissance. DNS failover or migration can look like poisoning. TLS inspection can cause certificate warnings that resemble on-path interception. DHCP failover or a lab device can create multiple offers. Monitoring systems can generate lots of ICMP. A software updater or EDR heartbeat can look like beaconing.

So ask: Is the source expected? Is the timing tied to maintenance? Does the destination belong to a known vendor? Did a certificate expire? Did a load balancer or DNS record change? The best answer is the one supported by both indicators and context.

Two Practical Scenarios

Scenario 1: External web app outage. Users report intermittent outages. Firewall and NetFlow show a huge rise in inbound 443 traffic from many networks. WAF logs show request spikes, and the load balancer reports connection exhaustion. That points to DDoS. If packet captures also show many incomplete TCP handshakes, a SYN-flood component may be involved. First controls: upstream filtering, content delivery and traffic-scrubbing services, rate limiting, and provider coordination.

Scenario 2: Local interception and redirection. Users get certificate warnings for an internal portal. nslookup returns an unexpected IP for some clients, arp -a shows the gateway MAC changed, and packet capture shows repeated unsolicited ARP replies. That strongly suggests ARP poisoning enabling an on-path attack, though you still verify whether DNS was poisoned, clients were pointed to a rogue resolver, or a legitimate TLS inspection device is in path.

My first moves would be to isolate the suspicious host or port, check DHCP snooping and Dynamic ARP Inspection, compare DNS answers against authoritative records, and preserve the logs before anything gets overwritten. That’s the kind of thing you don’t want to be figuring out after the evidence is gone.

Safe validation commands and what they can tell you

Use read-only checks first:

  • arp -a or ip neigh: check local ARP mappings for gateway changes.
  • nslookup / dig: compare client, resolver, and authoritative DNS answers and TTLs.
  • ss -ant, netstat -an, or Get-NetTCPConnection: inspect connection states and half-open sessions.
  • tcpdump -nn: capture packet summaries. Just remember that tcpdump uses Berkeley Packet Filter syntax, not Wireshark display filters; that syntax mix-up trips people up all the time.
  • Wireshark filters such as arp, dns, icmp, and SYN-only filters: useful for analysis, but never proof by themselves.
  • Switch commands like show arp or show mac address-table: validate layer 2 behavior.
  • traceroute or tracert: useful for path changes, but not definitive proof of routing attack.

Controls That Match the Indicators

Map the control to the symptom:

  • DoS/DDoS: rate limiting, ACLs, WAF, content delivery and traffic-scrubbing services, upstream blackholing or sinkholing, autoscaling with caution.
  • SYN flood: SYN cookies, backlog tuning, IPS thresholds, rate controls.
  • ICMP flood: ICMP rate limiting and edge ACLs.
  • Recon/scans: firewalls, IDS/IPS, segmentation, exposure reduction.
  • Spoofing: ingress/egress filtering, source validation, port security.
  • ARP poisoning: DHCP snooping plus Dynamic ARP Inspection; DAI depends on trusted bindings.
  • DNS poisoning/tunneling: resolver hardening, DNSSEC where supported, DNS filtering, query monitoring.
  • Rogue DHCP: DHCP snooping, trusted and untrusted port design, port security.
  • MAC flooding/VLAN hopping: port security, MAC limits, explicit trunk configuration, disable dynamic trunk negotiation, avoid native VLAN mistakes.
  • Replay: nonces, timestamps, challenge-response, sequence validation, short-lived tokens.
  • Beaconing/lateral movement: egress filtering, EDR/XDR, segmentation, NAC, least privilege, identity monitoring.

Wireless, IPv6, and Routing Indicators

Security+ scenarios can broaden “network attacks” beyond wired IPv4. For wireless, know rogue AP/evil twin indicators: a familiar SSID with a different BSSID, unexpected captive portal prompts, certificate prompts on enterprise Wi-Fi, or users associating to a stronger but fake access point. Deauthentication/disassociation floods cause repeated disconnects and forced reconnects.

For IPv6, ARP thinking is incomplete. Watch for NDP spoofing, rogue Router Advertisements, and DHCPv6 abuse. Dual-stack environments are a blind spot because defenders may monitor IPv4 closely while missing IPv6 path manipulation.

Also know basic routing and redirection indicators: unexpected route table changes, traceroute path shifts, ICMP redirect abuse, proxy auto-discovery abuse, or even conceptual BGP hijack symptoms where traffic suddenly takes an unexpected path. Security+ usually expects recognition, not deep routing engineering.

Security+ Exam Clue Decoder

| If you see this... | Think this... | Common distractor |
| --- | --- | --- |
| Many distributed hosts, service unavailable | DDoS | Single-source DoS |
| Half-open sessions, repeated SYNs | SYN flood | Normal traffic burst |
| Many ICMP echo requests with latency | ICMP flood | Ping sweep |
| One source probing many ports | Port scan | Exploitation |
| Gateway MAC changed | ARP poisoning | MAC spoofing |
| Unexpected DNS answer or altered cache | DNS poisoning/hijacking | Bad web server |
| Multiple DHCP offers, wrong gateway/DNS | Rogue DHCP | DNS poisoning |
| Periodic outbound HTTPS to rare domain | Beaconing/C2 | Scheduled update |
| New east-west admin traffic | Lateral movement | External beaconing only |
| Certificate warnings plus path/interception clues | On-path attack | Expired cert alone |

Rapid Review Checklist

For fast exam triage, remember:

  • Availability = floods, saturation, exhaustion.
  • Confidentiality = interception, spoofing, rogue AP, on-path attacks.
  • Integrity = poisoning, replay, redirection, manipulated trust data.
  • Correlate, don’t guess: packet detail plus logs plus context.
  • Know the exam traps: scans are not exploitation, cert warnings are not always MITM, multiple DHCP offers are not always malicious, periodic traffic is not always malware.

If you train yourself to identify the symptom, map it to the protocol, verify with the right log source, and rule out the obvious benign explanation, you will answer this Security+ objective the way a real analyst works: calmly, methodically, and based on evidence.