Backing Up Mongodb Backed NodeBB

  • GNU/Linux Admin

    Oh, I did not see this thread immediately.

    The official NodeBB recommendation is that you shut down the database before doing a mongodump, but as you've discovered, Mongo itself does not mention anything for or against it.

    The sentence in the docs reads like something I would have written, and I think I added the recommendation to shut down MongoDB just as an extra (ultimately unnecessary) safety measure.

    Kind of like how you should probably flick off the light switch before changing a light bulb, but practically speaking you most certainly won't die if you leave it on (you might get blinded a little, though 😎 )

  • Community Rep

    @julian said in Backing Up Mongodb Backed NodeBB:

    The official NodeBB recommendation is that you shut down the database before doing a mongodump, but as you've discovered, Mongo itself does not mention anything for or against it.

    The purpose of the utility is so that you don't have to shut down the database. It locks the database and is 100% safe to use. It's the only safe way to do it without shutting down the DB completely, and a shutdown doesn't buy you anything that the lock doesn't provide.

    The NodeBB docs should be updated. It's fine to not mention this, but saying you should shut down before using the dump utility is misleading as this would imply that MongoDB is borked and can't be used in production as it can't properly lock.

Mongo doesn't say anything against it because it would be nonsensical to have built a backup utility that can't take backups 😉 That it is meant to be used while the database is still running is implied.
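For anyone landing here, a hot backup with mongodump can be as simple as the sketch below. The database name, backup path, and the guard around the command are illustrative assumptions; adjust for your installation:

```shell
#!/bin/sh
# Hot backup of a running NodeBB MongoDB database.
# DB_NAME and BACKUP_DIR are illustrative; substitute your own values.
DB_NAME="nodebb"
BACKUP_DIR="./backups/$(date +%F)"

mkdir -p "$BACKUP_DIR"

# mongodump reads through the live mongod process, so no shutdown is needed.
# Guarded so the sketch degrades gracefully where the tool is absent.
if command -v mongodump >/dev/null 2>&1; then
    mongodump --db "$DB_NAME" --out "$BACKUP_DIR"
else
    echo "mongodump not installed; command shown for illustration"
fi

# To restore later (--drop replaces existing collections first):
#   mongorestore --db "$DB_NAME" --drop "$BACKUP_DIR/$DB_NAME"
```

Dated directories like this also make it trivial to keep a rotation of recent dumps.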

  • Community Rep

The bigger fear here is that this might encourage people to do actually dangerous things, like trying to use snapshots or something else that isn't reliable, because it makes them feel that the database's backup mechanism is broken when it isn't. So while it might be "extra safe", sort of, it heavily risks people not using the completely reliable backup mechanism and resorting to something unsafe like snapshots, CrashPlan, etc.

  • Global Moderator Plugin & Theme Dev

    mongodump and mongorestore cannot be part of a backup strategy for 4.2+ sharded clusters that have sharded transactions in progress, as backups created with mongodump do not maintain the atomicity guarantees of transactions across shards.

It's fine for small deployments, though.

  • Community Rep

    @PitaJ said in Backing Up Mongodb Backed NodeBB:

    mongodump and mongorestore cannot be part of a backup strategy for 4.2+ sharded clusters that have sharded transactions in progress, as backups created with mongodump do not maintain the atomicity guarantees of transactions across shards.

That's true. But really, there you probably need to avoid backing up while sharded transactions are in progress, as they can't be locked mid-flight. Nothing can back up a database in that state.

But that doesn't mean that the dump utility is the issue; that's specific timing in a specific case. It can still safely back up a standard DB, and a sharded one, just not one executing a cross-shard transaction during the dump. That has to be locked.

  • Community Rep

    @scottalanmiller

Except... Per the mongo docs, snapshots are a viable method:

    So it is a bit more perplexing than initially imagined for the mongo backup neophyte to discern the optimal happy path here, eh?

  • Community Rep

    @gotwf said in Backing Up Mongodb Backed NodeBB:

Except... Per the mongo docs, snapshots are a viable method:

Not exactly; read it carefully. It only works IF you do certain things in your setup that allow it to work, and most people do not do that. Can you? Absolutely. But you have to design the system around corruption protection, the journal must live on the same volume as the data files, and when you restore you risk a rollback from the journal.

  • Community Rep

@scottalanmiller To be clear: I am all for using mongodump/restore. Mongo seems to really, really want to push the Atlas offerings, so maybe there's some incentive for their docs to be less than clear w.r.t. best-practice alternatives.

If I was going to shut down NodeBB to take a dump, as Julian suggested above, then I may well be better off grabbing a ZFS snapshot - a more comprehensive total VM backup, and the deltas might require less storage space over the long run?

    But I am not... I am living large and running mongodump on a hot mongo. 😜

  • Community Rep

    @gotwf

The bottom line here is that there are computing basics that always apply, and information from other sources is irrelevant. The concept of database backups is always the same. No information from NodeBB or MongoDB can alter how a database interacts with a filesystem, so the universal rules always apply.

A database uses a live file on the filesystem and/or has data in RAM. Anything that has its data file open and/or has data in RAM cannot be fully backed up via a snapshot mechanism or backup software at the filesystem level - full stop, no exceptions. This is universal, and any "but I asked X vendor" just means you risk getting a wrong answer. This is basic computing physics and applies to all databases, and many other things. It's a computing pattern.

You can stop a database from being a live database by powering it down, or by forcing it to write everything to the storage subsystem and locking it to prevent further transactions, and then use a reliable component of the storage subsystem to take a snapshot or file copy - but a snapshot can never do something that a file copy cannot. It's both or neither.
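The "flush, lock, snapshot, unlock" sequence described above can be sketched roughly as follows. The ZFS dataset name and the use of mongosh are assumptions; what matters is the ordering of the steps:

```shell
#!/bin/sh
# Sketch of quiescing MongoDB before a filesystem snapshot.
# The dataset name (tank/mongodb) is hypothetical; the whole sequence
# is guarded so the sketch is harmless where the tools are absent.
SNAP_NAME="tank/mongodb@backup-$(date +%F)"

if command -v mongosh >/dev/null 2>&1 && command -v zfs >/dev/null 2>&1; then
    mongosh --eval 'db.fsyncLock()'      # flush writes to disk and block new ones
    zfs snapshot "$SNAP_NAME"            # snapshot while the DB is quiesced
    mongosh --eval 'db.fsyncUnlock()'    # resume normal writes
else
    echo "would run: db.fsyncLock(); zfs snapshot $SNAP_NAME; db.fsyncUnlock()"
fi
```

Note the unlock step is not optional: a database left in the fsyncLock state refuses all further writes.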

    Or you can have a database that is set up to have a non-live on disk storage in addition to the live, like a journal, which takes longer and uses more resources but allows you to roll forward or back to "fix" the corruption. That a journal is required is MongoDB making it clear that the snapshot of the DB itself isn't safe and an additional copy of the data must exist. But that means that the entire functionality is impacted to make this possible (which is fine) and that the last transaction is always at risk but the database beyond the last transaction is at least safe because it can be recreated from the journal.

    The mechanism to make the journal is not unlike the dump mechanism. And it might be the same code under the hood. Both are using the database's application logic to determine what "should be" and present it safely when storage safety cannot be determined. Making a journal is a little like making a dump locally for every transaction so that one is always present. You have to trust the dump in order to trust the journal.

    As with all databases or any similar application that keeps live data open or works from RAM - the only possible safe mechanism to ensure data integrity - short of powering down the system entirely - is a built in locking and backup mechanism that has access to every bit of the data in flight and ensures that it is in a non-corrupted, consistent state when flushed to disk. You can't make a simpler, lighter, more reliable method no matter what tools you use.

    The thing that makes this seem confusing is when you start looking at it from a NodeBB or MongoDB level, it feels natural that one or the other might have some special insight into their unique situation, but they do not, they cannot. What determines how NodeBB backups work is the universal laws of computing and how they apply to databases. Trying to look at it from any other level will lead to confusion or risks as the more you ask, the more chances for someone along the chain to be misunderstood.

Attempting to look for ways around the physical constraints of computing can only lead to dead ends at best or, if mistakes are made, to accidentally getting a bad answer.

Beyond that, snapshots are heavy and slow; dumps are fast and light. There should never be a desire to work around dumps, as they really carry no caveats, just pros: fast, simple, reliable, and the smallest resulting backup set size.

  • Community Rep

    @gotwf said in Backing Up Mongodb Backed NodeBB:

    If I was going to shutdown NodeBB to take a dump, as Julian suggested above, then I may well be better off grabbing a zfs snapshot - more comprehensive total vm backup and the deltas might require less storage space over the long run?

    Snapshots are big and slow. Ideally a restore operation would not involve putting a snap back in place, but a restore of only the data.

In the DevOps and post-DevOps backup world, the idea of snapshots or any full-volume / full-system backup is considered a failure of design. It's heavy to back up, heavy to restore, heavy to store. Modern system design allows us to quickly restore base systems sans data. This is what makes cloud efficient. I have a single command that builds my NodeBB instances, for example. It takes maybe a minute, doesn't require my time, is repeatable (and testable), and is needed for more than restores: for updates, moves, relocations, growth, etc.

Since that tool is already ideal and in place, the ideal restore is to use that and simply replace the data, and nothing more. Snapshots are unnecessarily large and slow to restore (and more prone to corruption). The straight data is the fastest thing to restore, so that's what we want in a restore situation. Faster to move over the network, faster to put onto disk.

    So even if snapshots are available to us, we should never want them. Using snapshots is necessary for situations where we are stuck with legacy systems that cannot be automated in a modern way and we have to brute force past bad designs, software, or politics. But not something we should ever "want" if we have our druthers.

  • Community Rep

    I just happen to have a video of me presenting this topic at a conference, lol.

  • Community Rep

    @gotwf said in Backing Up Mongodb Backed NodeBB:

    deltas might require less storage space over the long run?

    If you are using deltas, you create increasing file system dependencies that make corruption more likely. To do so, you have to store lots of partial snapshots with the original(s) there for recreating. It can be done and most backup systems today do exactly this (Veeam, StorageCraft, Unitrends, etc.) It's the "assumed" method of storing snaps used for system backups. It makes the best of a bad situation.

    However, if you are willing to use deltas, then you can do the same with dumps. This can be done in two main ways...

First, by using mongodump in a way I'm not personally familiar with, but which appears to work: https://dba.stackexchange.com/questions/107987/mongodb-incremental-backups
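I haven't verified this myself, but based on that StackExchange thread the approach appears to be: take a full dump with --oplog, then periodically dump only the oplog entries newer than the last backup. A rough sketch, with placeholder paths and timestamp, is below. This only works on a replica set, since standalone mongod instances keep no oplog:

```shell
#!/bin/sh
# Incremental backup sketch via the replica set oplog (unverified; see the
# dba.stackexchange.com thread above). Paths and the timestamp value are
# placeholders.
FULL_DIR="./backups/full"
INCR_DIR="./backups/incr-$(date +%F)"
mkdir -p "$FULL_DIR" "$INCR_DIR"

if command -v mongodump >/dev/null 2>&1; then
    # 1. Full dump, capturing oplog entries made while the dump runs:
    mongodump --oplog --out "$FULL_DIR"

    # 2. Later, dump only oplog entries newer than the last backup
    #    (1700000000 is a placeholder epoch-seconds value):
    mongodump --db local --collection oplog.rs \
        --query '{"ts": {"$gt": {"$timestamp": {"t": 1700000000, "i": 0}}}}' \
        --out "$INCR_DIR"

    # 3. Restore by replaying the saved oplog slice on top of the full restore:
    #    mongorestore --oplogReplay "$INCR_DIR"
else
    echo "mongodump not installed; commands shown for illustration"
fi
```

Whether the extra moving parts beat simply taking frequent full dumps depends on how large the dumps actually are.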

    Second, by storing the resulting dumps on a compressed or deduped filesystem that will automate the delta functionality on disk. There are backup storage systems from StorageCraft that do exactly this for this purpose, for example. This method allows for delta-like storage efficiency, but with "full backup" style ease of restore so your restore admin need not know a complicated method for restore.

    Dumps tend to be very small, so often a little compression goes a long way. But just in case they are huge, this handles it.
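Worth noting that mongodump also has compression built in: --gzip compresses the dump, and --archive writes everything to a single file, which is convenient to ship over a network. A minimal sketch, with an illustrative database name and path:

```shell
#!/bin/sh
# Compressed single-file dump (database name and path are illustrative).
ARCHIVE="./nodebb-$(date +%F).archive.gz"

if command -v mongodump >/dev/null 2>&1; then
    mongodump --db nodebb --gzip --archive="$ARCHIVE"
    # Restore with:
    #   mongorestore --gzip --archive="$ARCHIVE"
else
    # Tool absent: create an empty placeholder so the naming convention
    # of the sketch is still visible.
    : > "$ARCHIVE"
    echo "mongodump not installed; created empty placeholder $ARCHIVE"
fi
```

A deduped filesystem underneath then still gets its shot at the parts gzip leaves behind.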

  • Community Rep

    Heh.... Like the way I tee'd that one up fer' ye? 🏌

    Good stuff. Better still that ye' scribed it here.

    Rock on. 🎸

  • GNU/Linux Admin

    @scottalanmiller said in Backing Up Mongodb Backed NodeBB:

    I just happen to have a video of me presenting this topic at a conference, lol.

    Seems like he certainly put the issue to bed, didn't he 😁

  • Community Rep

I talk backups all the time. I've been a senior advisor for several of the big backup players over the years, and now my company builds its own backup systems for our own products. So we think about backups all the time.
