I think it would be cool to add category description to the Get Recent Topics API. Wondering if I can get that in a backlog somewhere.
DevOps - Keeping NodeBB administration changes, updates, and topics under VCS and passing them through pipeline stages
@radu-ionescu what changes are you making and in what files?
@gotwf Thank you for your answer, but I was looking for a way to apply this to my NodeBB forum.
@PitaJ I want to version all of the changes and choices (i.e. adding a plugin, admin panel choices) that I make to the forum in git. Somehow I get the feeling all of them are persisted in the DB
With a lower priority, I am also looking at how to achieve this for categories, groups, and other meta-content that I understand is saved in the DB. This makes it harder, since you would need to capture them as incremental updates to the DB (I mentioned flyway as an example).
In the end I would like to build a pipeline to maintain the forum and push changes to production. I am not looking for automated tests, but at least a staging (QA like) environment. For this ideally you would capture some production events to get a real feel of how it will look. This however is a very complex setup, so a dummy DB in staging would suffice.
I am not sure if I can achieve this without a DB dump somehow. This is why I am asking if someone has more experience and knows what can be versioned.
@radu-ionescu Well, yeah, but I saw this percolating for a few hours and I did not want you to be discouraged, thinking you were being ignored.
And now you've got a helper who actually knows this stuff...
Good luck and enjoy nodebb! o/
@gotwf Thank you, and I appreciate your answer, all the more since in the end it attracted attention. There are no wrong answers in a discussion, and your post actually defines very well the principles to follow in DevOps, and explains very well what I was trying to find out is possible.
I want to version all of the changes and choices (i.e. adding a plugin, admin panel choices) that I make to the forum in git. Somehow I get the feeling all of them are persisted in the DB
Your feeling is correct. Everything that isn't in `config.json` is stored in the DB:
- active plugin list
- ACP settings
- plugin settings
If you wish to track these, you'll have to do it through your database. Most of these settings will be stored under a few keys, like `plugins:active`. So you can just create a whitelist of database keys you want to track and dump those specific keys out to text. Then for deployment you would write those into your production database.
@pitaj So the solution would be to develop a middleware plugin that captures the API POST/PUT calls that create these changes in the DB, stores them on the FS, and can replay them again. It would need to verify that they are idempotent, or otherwise assign them a version (like flyway does for incremental DB updates) to make sure they are applied at most once.
Is this something that people have asked for in the past, or does it already exist as a plugin?
@radu-ionescu yeah you could create a plugin, but I think it's probably easier to do this entirely on the DB layer. I think the plugin capturing API calls is a lot of added complexity for no real gain. I envision a little bash script that does something like this:
```shell
redis-cli DUMP "config" > config.rdb
redis-cli DUMP "plugins:active" > plugins_active.rdb
...
```
And then a restore script to deploy to production:
```shell
redis-cli -x RESTORE "config" 0 < config.rdb
redis-cli -x RESTORE "plugins:active" 0 < plugins_active.rdb
...
```
@pitaj It is actually a very good solution to start with, and it is still very git-friendly if you limit it to small "tables" (keys for Redis, collections for Mongo). The dump command should be stable enough (if run in a single thread/worker) that the files would produce clean appended/inline text changes in git, but at the same time the git diff output could get messy even if alphabetical order is used.
I can't say much about Redis as a primary DB, since I have only used Redis for caching. I actually spent most of my time deploying a Redis cluster in a Kubernetes environment (I could not use the Bitnami cluster implementation because of company rules and the particularities of our infrastructure).
But I can see an issue with restoring from a dump: it will not handle merging changes well. It is not clear whether a missing entry in the dump means the corresponding production entry should be removed, or whether whatever is currently in production should be kept. This should never happen if you keep the flow of changes as dev -> staging -> prod, but we all abuse hot-fixes deployed directly into production. For NodeBB it is even more tempting, since the switches for these changes are available right in the admin panel. That is one of the reasons that brought me to the conclusion that NodeBB is actually the best choice for a forum (and I have researched the alternatives a bit, so I can say this with great confidence).
@radu-ionescu yes, merging changes is an issue. One thing you could try is, before deployment, running the same dump script on prod.
```shell
# deploy.sh
./dump.sh
git pull --ff-only
```
Git will refuse to pull if the fresh dump shows that changes were made on prod.
As for your choice of database, redis was just easy for the example. I'd recommend against using redis here; use mongo or postgres instead. Mongo will probably give you even better dump output, since it's essentially JSON.
@pitaj Having a branch for the production version with a similar dump output is actually a great idea. You can resolve conflicts during git merge (or PRs, if there were a team working on this) and there should be no problem.