There's an infosec influencer who put out an IPv6 hit piece.
-
William D. Jones replied to david_chisnall
@david_chisnall @ewenmcneill @mxshift Indeed, as mentioned by Ewen earlier, I forgot the part where the dest actually has to reply.
Re: sending a big packet to a victim, at worst won't that cause excess network traffic that'll be ignored (b/c the victim won't be listening, the kernel will discard it)?
Also I thought the whole purpose of IOMMU was "the kernel decides the memory addresses a device can write to/read from, for each xaction". Won't not knowing valid addrs guard against spoofing?
-
david_chisnall replied to William D. Jones
>Re: sending a big packet to a victim, at worst won't that cause excess network traffic that'll be ignored (b/c the victim won't be listening, the kernel will discard it)?
Sure, the kernel will discard it at the far end, but the network connection to the victim is finite. If you fill it with big packets, it doesn’t matter that the kernel discards them: it will never get to see the other traffic. If you have a 10 Mbit connection and so does your victim, and you can get DNS servers to amplify your attack at a 10:1 ratio (a 1000-byte response for a 100-byte request), you can deliver 100 Mbit/s to the victim. That causes a load of the packets they actually want to be dropped, which slows their TCP connections down, which shrinks their proportion of the total, which makes them slower still, and so on.
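The back-of-the-envelope arithmetic above can be sketched like this (the link speeds and the 10:1 ratio are the illustrative numbers from the post, not measurements of any real resolver):

```python
# Reflection/amplification math: the attacker sends small spoofed queries,
# open resolvers reply to the victim with much larger responses.

def amplified_rate_mbps(attacker_uplink_mbps: float,
                        request_bytes: int,
                        response_bytes: int) -> float:
    """Traffic delivered to the victim, assuming every request byte the
    attacker sends is reflected at the response/request size ratio."""
    return attacker_uplink_mbps * (response_bytes / request_bytes)

# 10 Mbit/s of 100-byte queries, 1000-byte responses -> 10:1 amplification.
print(amplified_rate_mbps(10, 100, 1000))  # 100.0 Mbit/s at a 10 Mbit/s victim link
```

Once the victim's link is saturated, loss is indiscriminate, which is why the legitimate TCP flows back off while the attack traffic (which ignores loss) does not.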
>Also I thought the whole purpose of IOMMU was "the kernel decides the memory addresses a device can write to/read from, for each xaction". Won't not knowing valid addrs guard against spoofing?
The kernel decides a mapping between device physical addresses and host physical addresses. A malicious device can choose to use a different mapping. For most of these things, this is fine in the threat model. They assume trusted devices and untrusted VMs.
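A toy model of the mapping described above, in case it helps (this is a sketch of the concept, not any real hardware interface; the 4 KiB page size and table shape are assumptions):

```python
# The kernel programs a per-device table mapping device-visible page numbers
# to host-physical page numbers; DMA to an unmapped device address faults.

PAGE = 4096  # assumed 4 KiB pages

class Iommu:
    def __init__(self):
        self.table = {}  # device page number -> host page number

    def map(self, dev_page: int, host_page: int) -> None:
        self.table[dev_page] = host_page

    def translate(self, dev_addr: int) -> int:
        page, off = divmod(dev_addr, PAGE)
        if page not in self.table:
            raise PermissionError("IOMMU fault: unmapped device address")
        return self.table[page] * PAGE + off

iommu = Iommu()
iommu.map(dev_page=0x10, host_page=0x8000)

print(hex(iommu.translate(0x10 * PAGE + 0x20)))  # -> 0x8000020, inside the mapped page
try:
    iommu.translate(0x999 * PAGE)                # a page the kernel never mapped
except PermissionError as e:
    print(e)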
-
William D. Jones replied to david_chisnall
@david_chisnall Ack everything re: the network slowing down.
>A malicious device can choose to use a different mapping.
Yes, but when the malicious device tries to read/write kernel mem using its own chosen device physical addresses, the IOMMU will recognize that the kernel said "no, I don't allow reads/writes through this address" and quash the access.
And how would the device be able to choose which host physical address it wants to (maliciously) read and write?
-
William D. Jones replied to William D. Jones
@david_chisnall >They assume trusted devices and untrusted VMs.
Are you using VM as a catch-all for "anything running a kernel"? Or actual VM as in "kernel running under control of a hypervisor, either bare metal or another kernel"?
Anyways this sounds backwards :P. I thought preventing devices from reading/writing all over memory was the whole point. Why would we trust the devices to _not_ do that :D?
-
david_chisnall replied to William D. Jones
@cr1901 IOMMUs in most systems are designed to allow devices to be attached to VMs. The threat model is that you have attached a device to a VM and want to protect against that device initiating DMAs to or from a physical address that the VM cannot access. They are somewhat useful without virtualisation (and, increasingly for kernel-bypass things with userspace), but the threat model almost always assumes that devices are trustworthy. The PCIe spec even includes a feature called ATS that allows the device to bypass the IOMMU if it implements its own (fortunately, it’s possible to turn this off).
-
William D. Jones replied to david_chisnall
@david_chisnall Okay, you lost me. Why is this threat model specific to VMs, as opposed to applying equally to not-VMs?
What's special about VMs such that they're more susceptible to having host memory overwritten?
I guess guest OS memory can be overwritten by a rogue device too, but that at least will be constrained to the VM given proper sandboxing...
-
@cr1901 @david_chisnall to properly attach a device to a VM, you need an IOMMU so that, when the guest OS programs DMA using guest physical addresses, the guest-to-host physical mapping is applied on device memory accesses; it's not even (just) about security, it's about feasibility of making the device work within VM at all — without IOMMU, the device / memory subsystem would try to interpret the guest-given physical addresses as host physical addresses and break everything
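The feasibility point above can be made concrete with a small sketch (illustrative addresses only; the mapping values are made up):

```python
# The guest driver programs the device with *guest*-physical addresses, but
# DMA addresses hit the host memory system as *host*-physical addresses
# unless an IOMMU applies the guest-to-host mapping on the way through.

guest_to_host = {0x1000: 0x7f000, 0x2000: 0x80000}  # hypervisor-chosen mapping

def dma_target(addr: int, with_iommu: bool) -> int:
    """Host-physical address a DMA to `addr` actually lands on."""
    if with_iommu:
        return guest_to_host[addr]  # remapped: lands in the guest's own memory
    return addr                     # raw: lands on whatever host page that is

guest_buffer = 0x1000               # address the guest driver programmed
print(hex(dma_target(guest_buffer, with_iommu=True)))   # 0x7f000, correct
print(hex(dma_target(guest_buffer, with_iommu=False)))  # 0x1000: some unrelated host page
```

So even with a perfectly well-behaved device, passthrough without translation scribbles on the wrong host pages; the security benefit is on top of that basic correctness requirement.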
-
@mwk @cr1901 Random historical tangent:
This is not quite true. Xen allowed device pass through before IOMMUs and this was necessary because drivers all lived in dom0. It was also possible to delegate devices to domU, but it broke security. There was a project that moved the Xen and dom0 locations around so that a domU got the bottom N pages of memory and so you could run Windows in domU with GPU pass through. Most of the Xen interfaces date from this time and so require the guest to translate addresses from pseudophysical to physical (and then the hypervisor has to translate back again).
AMD's first step towards an IOMMU was a thing called the Device Exclusion Vector (DEV). This was really an IOMPU: it didn't do address translation, just protection. You could restrict which pages DMA could touch. This was fine for *NIX, where you could modify the drivers to understand that they needed to do address translation (something like NetBSD/FreeBSD's busdma framework already had support for this kind of abstraction, where the address you tell the device and the address you use are different, so that it can transparently do bounce buffering). It was a problem for Windows. The aforementioned project to move everything around could use the DEV to run Windows safely with full GPU access, alongside a Linux VM that mediated disk / network access and was protected from Windows.
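A protection-only scheme like the DEV is even simpler to model than a translating IOMMU (again a toy sketch; the one-bit-per-page shape follows the post's description, everything else is assumed):

```python
# Device Exclusion Vector-style check: one permission bit per host page,
# no address translation at all.

PAGE = 4096  # assumed 4 KiB pages

class Dev:
    def __init__(self, num_pages: int):
        self.allowed = [False] * num_pages  # one protection bit per host page

    def allow(self, page: int) -> None:
        self.allowed[page] = True

    def check_dma(self, host_addr: int) -> int:
        page = host_addr // PAGE
        if not self.allowed[page]:
            raise PermissionError(f"DEV blocked DMA to page {page:#x}")
        return host_addr  # no translation: the address passes through unchanged

dev = Dev(num_pages=256)
dev.allow(0x42)

print(hex(dev.check_dma(0x42 * PAGE + 8)))  # permitted, address unchanged
try:
    dev.check_dma(0x10 * PAGE)              # page that was never opened up
except PermissionError as e:
    print(e)
```

Because the address passes through untranslated, the driver has to hand the device real host-physical addresses itself, which is why modifiable *NIX drivers (with busdma-style bounce buffering) coped and unmodified Windows drivers were the problem.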
-
William D. Jones replied to david_chisnall
@david_chisnall @mwk Is Wanda generally correct that an IOMMU is required, but Xen did some "awful magic" to make device passthru work without one?
>Most of the Xen interfaces date from this time and so require the guest to translate addresses from pseudophysical to physical (and then the hypervisor has to translate back again).
So the guest OS has to know it's running under a hypervisor?
-
@cr1901 @david_chisnall @mwk yes, remember that the initial target of Xen was paravirtualisation of a guest on hardware which didn't have virtualisation support