I did end up confirming what our practical experience had already taught us: a 2-core VPS can handle roughly 200 concurrent active connections before falling over. However, at that point you're looking at 10s+ response times, so it's definitely not ideal.
This forum is hosted on a 2-core VPS that handles everything: the proxying (via nginx), the database, and the application server. Splitting these tasks out to separate machines does increase raw throughput by a non-trivial margin.
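For anyone curious what "splitting out" looks like at the nginx layer, this is a minimal sketch (the upstream name, IP, and port are placeholders, not this forum's actual setup): nginx stays on the front box and forwards to an app server running elsewhere.

```nginx
# Hypothetical upstream pointing at an app server on a separate machine.
upstream app_backend {
    server 10.0.0.2:3000;   # placeholder private IP of the app droplet
    keepalive 32;           # reuse upstream connections to cut handshake cost
}

server {
    listen 80;
    server_name example.com;  # placeholder domain

    location / {
        proxy_pass http://app_backend;
        proxy_http_version 1.1;
        proxy_set_header Connection "";           # required for keepalive upstreams
        proxy_set_header Host $host;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    }
}
```

The database would move the same way: the app server's connection string points at the db machine's private IP instead of localhost.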
That said, if I moved the db out to a separate droplet, the results wouldn't be usefully comparable with those above, as we'd effectively be doubling the CPU count from 2 to 4.
For the testing itself, I used loader.io, which provides quite a nice interface for testing (albeit throttled, since I was on the free plan).
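If you'd rather not depend on a hosted service, you can get a rough version of the same measurement with a small script. This is just a sketch of the idea, not what loader.io actually does: fire a batch of concurrent requests and record per-request latency. It targets a throwaway local server here; the port, worker count, and request total are arbitrary placeholders.

```python
import http.server
import threading
import time
import urllib.request
from concurrent.futures import ThreadPoolExecutor

class QuietHandler(http.server.SimpleHTTPRequestHandler):
    def log_message(self, *args):
        # Suppress per-request logging so load output stays readable.
        pass

def start_server(port=8099):
    # Throwaway local server standing in for the app under test.
    srv = http.server.ThreadingHTTPServer(("127.0.0.1", port), QuietHandler)
    threading.Thread(target=srv.serve_forever, daemon=True).start()
    return srv

def timed_get(url):
    # Time a single request, including reading the full response body.
    t0 = time.perf_counter()
    with urllib.request.urlopen(url) as resp:
        resp.read()
    return time.perf_counter() - t0

def run_load(url, workers=50, total=200):
    # Fire `total` requests with at most `workers` in flight at once.
    with ThreadPoolExecutor(max_workers=workers) as pool:
        return list(pool.map(timed_get, [url] * total))

if __name__ == "__main__":
    srv = start_server()
    latencies = run_load("http://127.0.0.1:8099/")
    srv.shutdown()
    print(f"{len(latencies)} requests, max latency {max(latencies):.3f}s")
```

Against a real deployment you'd point `run_load` at the public URL and ramp `workers` up until response times degrade, which is essentially the knee the 200-connection figure above describes.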