There's an infosec influencer who put out an IPv6 hit piece.
-
Rick Altherr replied to William D. Jones
@cr1901 @ewenmcneill to control what services running on the router itself are accessible via WAN
-
@mxshift @cr1901 … and if there are public (globally unique) IPv4 addresses internally, you probably want “permit only these things, deny the rest”.
Where globally unique addresses internally is either a DMZ, or an “IPv4 rich” organisation that has always numbered things internally with globally unique addresses.
The “everything goes through NAT always” Internet is… a modern assumption. (And “destination NAT inbound, drop if no DstNAT” is basically a default-deny firewall.)
-
William D. Jones replied to Ewen McNeill
@ewenmcneill @mxshift Never thought about this, so sorry if this is a stupid q, but... since routing uses subnet and dest IP to decide how/where/which iface to send a packet, why can't a machine lie about its source IP in a packet to get past an incoming conn firewall?
-
david_chisnall replied to William D. Jones
It’s not a stupid question. You absolutely can lie about the source address for a packet. On a local network segment (with no VLANs), it will probably arrive just fine. If it needs routing, it may go through something that says ‘hmm, packets from this subnet aren’t allowed to come from over here’ and drops it on the floor.
The important question is: and then what?
The main use for the source address is to allow the destination to reply. If you send a packet with a faked source address, any reply will go to the faked source address and not to you. Sometimes that’s useful: some NAT-tunnelling things used to rely on this, doing terrifying things with intermediaries taking part in a TCP handshake.
In general, most connection-oriented things (TCP, QUIC) require some handshake and so you’ll end up failing to establish the connection if you fake the source. As an attacker, you need to compromise the target network stack with the first packet (which is sometimes possible) to get anything useful from stateful protocols. You may (if you can guess the sequence number) be able to inject a packet into the middle of a TCP stream, but generally that will just show up as a decryption failure in TLS (you are using TLS for everything, right?).
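To make the “you absolutely can lie” point concrete, here is a minimal sketch (Python, with made-up addresses and payload size) that builds an IPv4 header with a forged source address. Note that the header checksum only protects against corruption in transit; nothing in the header authenticates the source field:

```python
import struct

def ipv4_checksum(header: bytes) -> int:
    """RFC 791 header checksum: one's-complement sum of 16-bit words."""
    total = 0
    for i in range(0, len(header), 2):
        total += (header[i] << 8) | header[i + 1]
    while total >> 16:                       # fold carries back in
        total = (total & 0xFFFF) + (total >> 16)
    return ~total & 0xFFFF

def build_ipv4_header(src: str, dst: str, payload_len: int) -> bytes:
    """Minimal IPv4 header. The sender simply writes whatever it
    likes into the 4 source-address bytes; nothing checks them here."""
    def ip_bytes(addr: str) -> bytes:
        return bytes(int(octet) for octet in addr.split("."))
    header = struct.pack(
        "!BBHHHBBH4s4s",
        (4 << 4) | 5,        # version 4, IHL 5 (20-byte header)
        0,                   # DSCP/ECN
        20 + payload_len,    # total length
        0x1234,              # identification (arbitrary)
        0,                   # flags / fragment offset
        64,                  # TTL
        17,                  # protocol: UDP
        0,                   # checksum placeholder
        ip_bytes(src),
        ip_bytes(dst),
    )
    csum = ipv4_checksum(header)
    return header[:10] + struct.pack("!H", csum) + header[12:]

# A "spoofed" header claiming to come from 192.0.2.1:
hdr = build_ipv4_header("192.0.2.1", "198.51.100.7", payload_len=8)
assert ipv4_checksum(hdr) == 0   # checksum verifies; source is still a lie
```

Sending it would need a raw socket (and root), but the point stands without sending: the checksum of a valid header folds to zero regardless of what the source field claims.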
The more fun attacks rely on reflection. DNS, for example, is designed to be low latency and so is (ignoring newer protocol variants) a single UDP packet request and a single UDP packet response. With things like DNSSEC signatures (or entries with a lot of round-robin IPs), the response can be much bigger than the request and so you can send a request to a DNS server with a small request and a spoofed source, and the DNS server will reply with a big packet to your victim. DNS servers and other parts of network infrastructure have mitigations for this, but it’s easy to accidentally make a new protocol that provides this kind of amplification. QUIC has a rule that the first packet must be at least as big as its response (so sometimes requires padding) to establish a connection precisely to avoid this kind of issue.
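A toy model of the reflection idea, with illustrative request/response sizes (no real networking; `reflecting_server` is invented for the sketch):

```python
# The server has no way to tell that the source address in the request
# is a lie, so the large response goes to whoever the attacker named.
REQUEST = b"Q" * 100                 # small query

def reflecting_server(claimed_src: str, request: bytes):
    """Replies to the claimed source with a much larger response,
    e.g. one padded out by DNSSEC signatures (sizes illustrative)."""
    response = b"R" * 1000
    return claimed_src, response

# Attacker puts the victim's address in the source field of the query:
dst, response = reflecting_server("203.0.113.9", REQUEST)
amplification = len(response) / len(REQUEST)

assert dst == "203.0.113.9"          # amplified traffic lands on the victim
assert amplification == 10.0
```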
If you think that’s scary, remember that networks are everywhere. Most busses are now actuallly networks. PCIe is a network and PCIe source addresses are used by the IOMMU to determine which devices can access which bit of the host memory. Where does the PCIe source address come from? The device puts it in the PCIe packet. It’s trivial for a PCIe device (including an external one connected via Thunderbolt) to spoof the ID of a valid one and initiate DMA to and from regions exposed to the other device. IDE and TDISP should mitigate these problems when they’re actually deployed (I don’t know of any shipping hardware that implements them yet. I think IDE is sufficient but it’s been a while and I don’t remember what things are in which spec).
-
William D. Jones replied to david_chisnall
@david_chisnall @ewenmcneill @mxshift Indeed, as mentioned by Ewen earlier, I forgot the part where the dest actually has to reply.
Re: sending a big packet to a victim, at worst won't that cause excess network traffic that'll be ignored (b/c the victim won't be listening, the kernel will discard it)?
Also I thought the whole purpose of IOMMU was "the kernel decides the memory addresses a device can write to/read from, for each xaction". Won't not knowing valid addrs guard against spoofing?
-
david_chisnall replied to William D. Jones
Re: sending a big packet to a victim, at worst won't that cause excess network traffic that'll be ignored (b/c the victim won't be listening, the kernel will discard it)?
Sure, the kernel will discard it at the far end, but the network connection to the victim is finite. If you fill it with big packets, it doesn’t matter that the kernel discards them, it will never get to see other things. If you have a 10 Mbit connection and so does your victim, and you can get DNS servers to amplify your attack with a 10:1 ratio (response is 1000 bytes for a 100-byte request), you can deliver 100 Mb/s to the victim, which will cause a load of the packets that they want to be dropped, which will cause TCP connections to get slower, which will make their proportion of the total drop, which will make them slower, and so on.
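The arithmetic above, spelled out (the figures are the illustrative ones from the post, not measurements):

```python
# 10:1 amplification turns a modest uplink into a link-saturating flood.
attacker_uplink_mbps = 10
request_bytes, response_bytes = 100, 1000
amplification = response_bytes / request_bytes        # 10:1
victim_ingress_mbps = attacker_uplink_mbps * amplification
assert victim_ingress_mbps == 100.0   # swamps the victim's own 10 Mbit link
```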
Also I thought the whole purpose of IOMMU was "the kernel decides the memory addresses a device can write to/read from, for each xaction". Won't not knowing valid addrs guard against spoofing?
The kernel decides a mapping between device physical addresses and host physical addresses. A malicious device can choose to use a different mapping. For most of these things, this is fine in the threat model. They assume trusted devices and untrusted VMs.
-
William D. Jones replied to david_chisnall
@david_chisnall Ack everything re: the network slowing down.
>A malicious device can choose to use a different mapping.
Yes, but when the malicious device tries to write/read into kernel mem using its own chosen device physical addresses, the IOMMU will recognize that the kernel said "no, I don't allow writes/reads through this address" and quash the write/read.
And how would the device be able to choose which host physical address it wants to (maliciously) read and write?
-
William D. Jones replied to William D. Jones
@david_chisnall >They assume trusted devices and untrusted VMs.
Are you using VM as a catch-all for "anything running a kernel"? Or actual VM as in "kernel running under control of a hypervisor, either bare metal or another kernel"?
Anyways this sounds backwards :P. I thought devices choosing to read/write all over mem was exactly what we were trying to prevent. Why would we trust the devices to _not_ do that :D?
-
david_chisnall replied to William D. Jones
@cr1901 IOMMUs in most systems are designed to allow devices to be attached to VMs. The threat model is that you have attached a device to a VM and want to protect against that device initiating DMAs to or from a physical address that the VM cannot access. They are somewhat useful without virtualisation (and, increasingly for kernel-bypass things with userspace), but the threat model almost always assumes that devices are trustworthy. The PCIe spec even includes a feature called ATS that allows the device to bypass the IOMMU if it implements its own (fortunately, it’s possible to turn this off).
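A toy model of why this threat model breaks down against a lying device (the `ToyIOMMU` class is invented for illustration, not any real IOMMU's interface): translations are selected by the requester ID carried in the PCIe packet, and the device itself supplies that ID.

```python
class IOMMUFault(Exception):
    pass

class ToyIOMMU:
    """Per-device translation tables, keyed by PCIe requester ID."""
    def __init__(self):
        self.tables = {}                       # requester ID -> page table

    def map_page(self, requester_id, dev_page, host_page):
        self.tables.setdefault(requester_id, {})[dev_page] = host_page

    def translate(self, requester_id, dev_page):
        table = self.tables.get(requester_id, {})
        if dev_page not in table:
            raise IOMMUFault("unmapped DMA blocked")
        return table[dev_page]

NIC_ID, EVIL_ID = 0x0100, 0x0200               # made-up bus/device/function IDs
iommu = ToyIOMMU()
iommu.map_page(NIC_ID, dev_page=0x0, host_page=0x9000)   # NIC's buffer

# DMA under the evil device's own ID has no mappings, so it faults...
try:
    iommu.translate(EVIL_ID, 0x0)
    faulted = False
except IOMMUFault:
    faulted = True
assert faulted

# ...but nothing stops the evil device writing the NIC's ID into its
# packets, at which point the NIC's mappings apply to its DMA.
assert iommu.translate(NIC_ID, 0x0) == 0x9000
```

This is the gap that IDE/TDISP-style link authentication is meant to close: the IOMMU can only enforce policy on the identity the packet claims.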
-
William D. Jones replied to david_chisnall
@david_chisnall Okay, you lost me. Why is this threat model specific to VMs, as opposed to applying equally to not-VMs?
What's special about VMs such that they're more susceptible to having host memory overwritten?
I guess guest OS memory can be overwritten by a rogue device too, but that at least will be constrained to the VM given proper sandboxing...
-
@cr1901 @david_chisnall to properly attach a device to a VM, you need an IOMMU so that, when the guest OS programs DMA using guest physical addresses, the guest-to-host physical mapping is applied on device memory accesses; it's not even (just) about security, it's about feasibility of making the device work within VM at all — without IOMMU, the device / memory subsystem would try to interpret the guest-given physical addresses as host physical addresses and break everything
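A toy sketch of the feasibility point (addresses invented): the guest programs the device with guest physical addresses, but RAM is indexed by host physical addresses, so something has to translate in between.

```python
guest_to_host = {0x1000: 0x84000}      # IOMMU mapping for this device
ram = {}                               # host physical address -> contents

def dma_write(dev_addr, data, translate):
    """Model a device DMA write as seen by the memory subsystem."""
    ram[translate(dev_addr)] = data

# With the IOMMU: the write lands where the guest's memory really lives.
dma_write(0x1000, "packet", lambda a: guest_to_host[a])
assert ram == {0x84000: "packet"}

# Without one, the device treats the guest's 0x1000 as a *host*
# address and scribbles over whatever happens to live there:
dma_write(0x1000, "packet", lambda a: a)
assert 0x1000 in ram                   # wrong page clobbered
```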
-
@mwk @cr1901 Random historical tangent:
This is not quite true. Xen allowed device pass through before IOMMUs and this was necessary because drivers all lived in dom0. It was also possible to delegate devices to domU, but it broke security. There was a project that moved the Xen and dom0 locations around so that a domU got the bottom N pages of memory and so you could run Windows in domU with GPU pass through. Most of the Xen interfaces date from this time and so require the guest to translate addresses from pseudophysical to physical (and then the hypervisor has to translate back again).
AMD's first step towards an IOMMU was a thing called a Device Exclusion Vector (DEV). This was really an IOMPU. It didn't do address translation, just protection. You could restrict which pages DMA could touch. This was fine for *NIX, where you could modify the drivers to understand that they needed to do address translation (something like NetBSD/FreeBSD's busdma framework already has support for this kind of abstraction, where the address you tell the device and the address you use are different, so that it can transparently do bounce buffering). It was a problem for Windows. The aforementioned project to move everything around could use the DEV to run Windows safely with full GPU access and a Linux VM that mediated disk / network access and was protected from Windows.
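A toy sketch of the DEV idea as described above (my rendering of the concept, not AMD's actual interface): one allow/deny decision per host physical page, with no translation at all.

```python
# Protection only: a set of host pages DMA is permitted to touch.
allowed_pages = {0x84000}              # pages drivers opened for DMA

def dev_permits(host_page):
    """The DEV never remaps anything; it only vetoes."""
    return host_page in allowed_pages

assert dev_permits(0x84000)            # driver-arranged DMA buffer: OK
assert not dev_permits(0x00000)        # anything else: DMA is blocked
```

Because nothing is remapped, drivers still have to hand the device real host physical addresses, which is why a busdma-style translation layer in the OS was needed to make use of it.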
-
William D. Jones replied to david_chisnall
@david_chisnall @mwk Is Wanda generally correct that an IOMMU is required, but Xen did some "awful magic" to make device passthru work without an IOMMU?
>Most of the Xen interfaces date from this time and so require the guest to translate addresses from pseudophysical to physical (and then the hypervisor has to translate back again).
So the guest OS has to know it's running under a hypervisor?
-
@cr1901 @david_chisnall @mwk yes, remember that the initial target of Xen was paravirtualisation of a guest on hardware which didn't have virtualization support