The bottom line here is that there are computing basics that always apply, and information from other sources is irrelevant. The concept of database backups is always the same: nothing specific to NodeBB or MongoDB can alter how a database interacts with a filesystem, so the universal rules always apply.
A database uses a live file on the filesystem and/or has data in RAM. Anything that has its data file open and/or holds data in RAM cannot be fully backed up via a snapshot mechanism or backup software at the filesystem level - full stop, no exceptions. This is universal, and any "but I asked X vendor" just means you risk getting a wrong answer. This is basic computing physics: it applies to all databases and to many other things. It's a computing pattern.
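A toy illustration of the point (the file names and records are made up): the application below has accepted a write that still lives only in RAM, so a file-level copy taken at that moment silently misses it.

```ts
import { copyFileSync, readFileSync, writeFileSync } from "node:fs";

// Toy sketch only: shows why copying a "live" data file at the filesystem
// level can be incomplete when the application buffers writes in RAM.
const DATA = "live.db";
const pending: string[] = [];

writeFileSync(DATA, "record-1\n"); // already flushed to disk
pending.push("record-2");          // accepted by the app, but only in RAM so far

copyFileSync(DATA, "backup.db");   // the "snapshot" happens here

// Later flush - too late for the backup that was already taken.
writeFileSync(DATA, readFileSync(DATA, "utf8") + pending.join("\n") + "\n");

console.log(readFileSync("backup.db", "utf8")); // prints only "record-1"
```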
You can stop a database from being a live database by powering it down, or by forcing it to flush everything to the storage subsystem and locking it against further transactions; then a reliable component of the storage subsystem can take a snapshot or a file copy. But a snapshot can never do something that a file copy cannot - it's both or neither.
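As a concrete sketch of that flush-and-lock approach (assuming MongoDB, the official "mongodb" Node driver, and a local mongod at the example URI - adjust to your setup), the fsync and fsyncUnlock admin commands bracket the only moment when a snapshot or file copy is actually safe:

```ts
import { MongoClient } from "mongodb";

// Sketch: flush everything to disk and lock out writes, take the snapshot
// or file copy with whatever storage tooling you trust, then unlock.
async function snapshotSafely(): Promise<void> {
  const client = new MongoClient("mongodb://localhost:27017"); // example URI
  await client.connect();
  const admin = client.db("admin");
  try {
    // Force all pending writes to the storage subsystem and block new ones.
    await admin.command({ fsync: 1, lock: true });
    // ... take the filesystem snapshot or file copy here, outside this process ...
  } finally {
    // Always release the lock, or the database stays read-only.
    await admin.command({ fsyncUnlock: 1 });
    await client.close();
  }
}
```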
Or you can have a database that keeps a non-live on-disk copy in addition to the live one - a journal - which takes longer and uses more resources, but lets you roll forward or back to "fix" corruption. The fact that a journal is required is MongoDB making it clear that a snapshot of the data files alone isn't safe and that an additional copy of the data must exist. That means the whole system pays a cost to make this possible (which is fine), and the last transaction is always at risk, but everything before the last transaction is at least safe because it can be recreated from the journal.
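To make the roll-forward idea concrete, here is a toy sketch of the write-ahead-journal pattern in general - not MongoDB's actual on-disk format or code, and the file names are made up:

```ts
import { appendFileSync, existsSync, readFileSync, writeFileSync } from "node:fs";

// Toy write-ahead journal: every change is recorded durably *before* it
// touches the main data file, so a crash mid-write can be repaired by
// replaying the journal from the start.
const JOURNAL = "journal.log";
const DATA = "data.json";

function applyChange(change: Record<string, unknown>): void {
  // 1. Record the intent first - this is what makes roll-forward possible.
  appendFileSync(JOURNAL, JSON.stringify(change) + "\n");
  // 2. Only then update the live data file.
  const current = existsSync(DATA) ? JSON.parse(readFileSync(DATA, "utf8")) : {};
  writeFileSync(DATA, JSON.stringify({ ...current, ...change }));
}

function recover(): void {
  // After a crash, rebuild the data file purely from the journal; everything
  // up to the last fully journaled change is recoverable.
  let state: Record<string, unknown> = {};
  if (existsSync(JOURNAL)) {
    for (const line of readFileSync(JOURNAL, "utf8").split("\n")) {
      if (line.trim()) state = { ...state, ...JSON.parse(line) };
    }
  }
  writeFileSync(DATA, JSON.stringify(state));
}
```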
The mechanism that makes the journal is not unlike the dump mechanism - it might even be the same code under the hood. Both use the database's application logic to determine what the data "should be" and to present it safely when the safety of the storage cannot be determined. Writing a journal is a little like making a local dump for every transaction, so that one is always present. You have to trust the dump in order to trust the journal.
As with all databases, or any similar application that keeps live data open or works from RAM, the only possible safe mechanism to ensure data integrity - short of powering down the system entirely - is a built-in locking and backup mechanism that has access to every bit of the data in flight and ensures that it is in a non-corrupted, consistent state when flushed to disk. You cannot make a simpler, lighter, or more reliable method, no matter what tools you use.
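For MongoDB that built-in mechanism is exposed through its own tooling, e.g. mongodump, which reads the data through the database itself rather than copying live files. A sketch of invoking it from Node (the URI, database name, and output path are placeholders, not NodeBB defaults):

```ts
import { spawnSync } from "node:child_process";

// Sketch: shell out to mongodump, MongoDB's built-in logical dump tool.
const dump = spawnSync(
  "mongodump",
  [
    "--uri", "mongodb://localhost:27017",
    "--db", "nodebb",
    "--gzip",
    "--archive=/backups/nodebb.archive.gz",
  ],
  { stdio: "inherit" }
);

if (dump.status !== 0) {
  throw new Error(`mongodump exited with status ${dump.status}`);
}
```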
What makes this seem confusing is that when you start looking at it from the NodeBB or MongoDB level, it feels natural that one or the other might have some special insight into their unique situation - but they do not, and they cannot. What determines how NodeBB backups work is the universal laws of computing and how they apply to databases. Looking at it from any other level only leads to confusion or risk: the more you ask, the more chances there are for someone along the chain to be misunderstood.
Looking for ways around the physical constraints of computing can only lead to dead ends if no mistakes are made along the way - or, worse, if mistakes are made, to accidentally getting a bad answer.
Beyond that, snapshots are heavy and slow while dumps are fast and light. There should never be a desire to work around dumps, because they carry essentially no caveats, only pros: fast, simple, reliable, and the smallest resulting backup set size.
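And the same archive restores with the matching built-in tool, mongorestore; again the URI and paths are placeholders:

```ts
import { spawnSync } from "node:child_process";

// Sketch: restore the dump with mongorestore; --drop replaces existing
// collections with the contents of the archive.
const restore = spawnSync(
  "mongorestore",
  [
    "--uri", "mongodb://localhost:27017",
    "--gzip",
    "--archive=/backups/nodebb.archive.gz",
    "--drop",
  ],
  { stdio: "inherit" }
);

if (restore.status !== 0) {
  throw new Error(`mongorestore exited with status ${restore.status}`);
}
```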