MongoDB + Redis still WIP?


  • Staff Admin

    You can do this right now: during setup, pick mongo as your database and all forum data will be stored in mongo, then add a redis block to your config.json and redis will be used for sessions and socket.io.
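
    Something like this in config.json, for example (host, port, and password values here are just placeholders):

    ```json
    "redis": {
        "host": "127.0.0.1",
        "port": "6379",
        "password": "YourRedisPassword",
        "database": "0"
    }
    ```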



  • Okay, and what about the big "not for production use" ? ^^'


  • Staff Admin

    The setup I mentioned does not use the --advanced flag; that is something else entirely. The advanced flag lets you store some data in redis and some data in mongodb, for example user and post data in mongo and notifications and chats in redis, but it is not well tested, hence the warning.



  • Ok, if I get problems, I will track them 😉 !



  • @baris like this ?

    {
        "url": "http://HEYMYWEBSITE.NETr",
        "secret": "xXxXxxx-XXxxxXxxx-XxX--X",
        "database": "mongo",
        "mongo": {
            "host": "127.0.0.1",
            "port": "27017",
            "username": "nodebb",
            "password": "SomeStrongPass",
            "database": "nodebb"
        }
        "redis": {
            "host": "127.0.0.1",
            "port": "6379",
            "password": "EvenMoreStrongPass",
            "database": "1"
        }
    }
    

    EDIT : Eh, not working >_>


  • Anime Lovers

    @Technowix You missed a comma after the mongo block
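
    Concretely, the fixed version would be (same placeholder values as above, with a comma added after the closing brace of the mongo object):

    ```json
    {
        "url": "http://HEYMYWEBSITE.NET",
        "secret": "xXxXxxx-XXxxxXxxx-XxX--X",
        "database": "mongo",
        "mongo": {
            "host": "127.0.0.1",
            "port": "27017",
            "username": "nodebb",
            "password": "SomeStrongPass",
            "database": "nodebb"
        },
        "redis": {
            "host": "127.0.0.1",
            "port": "6379",
            "password": "EvenMoreStrongPass",
            "database": "1"
        }
    }
    ```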



  • I don't mean to hijack, but what if I'm already using redis and want to start using a split method?



  • @JLChnToZ A comma? Where? o_o
    Oh! Right! Thanks dude!
    @Adrian-Kaepernick There is, for the moment, no NodeBB-to-NodeBB importer >< ! So no redis-to-mongodb transfer. Maybe @bentael has this on his wonderful list of importers?





  • I think there will be no reply on this error o: ?... This is pretty "urgent" .3.



  • @baris said:

    The setup I mentioned does not use the --advanced flag; that is something else entirely. The advanced flag lets you store some data in redis and some data in mongodb, for example user and post data in mongo and notifications and chats in redis, but it is not well tested, hence the warning.

    So this way, that you mention, can be reversed if something goes wrong?


  • Staff Admin

    @scottalanmiller There is no way to reverse it, since we don't have an exporter/importer between redis and mongo. Personally, I wouldn't use the --advanced flag; it hasn't been tested enough.

    But using mongo for data and redis for sessions and socket.io is well tested, as is storing everything in redis (which is what this forum does).



  • @baris said:

    But using mongo for data and redis for sessions and socket.io is well tested, as is storing everything in redis (which is what this forum does).

    Any scaling issues with Redis for sessions and socket.io? If it is only that data, wouldn't removing the config entry just cause the data to be reconstructed in MongoDB, since it does not need to be persistent?


  • Admin

    You'll need to use something other than the in-memory store if you're planning to scale out to multiple servers; otherwise socket.io connections will drop.

    And yes, if you switched away from using Redis for sessions (back to MongoDB or something), everybody will just get logged out, that's all.

    There is a MongoDB adapter for Socket.IO, but we have not tried it.



  • That's what I thought. We had chosen MongoDB specifically because we wanted the flexibility to move out to multiple databases (not there yet, but we want to be ready). MongoDB caches everything anyway, so I am wondering whether the performance difference would warrant looking into using Redis to speed up MongoDB?



  • Right now we are only at the point of being prepared to double our memory and core count and spinning up more NodeBB processes, but nothing past that. Long way to go before we will need distributed databases for the community. But we are planning for a bright future 🙂

    We feel that we can easily handle 20K views per hour with the current infrastructure. We are not taxed at all at 3,000.


  • Admin

    @scottalanmiller Good to hear! We should compare notes -- we have supported over 10k/hr, but this is on multiple machines.

    IIRC Mongo keeps everything in memory anyways, so when it comes down to it the speed difference may really be negligible. (Though I imagine if you have more data than memory, it starts keeping only the most recently used data in-memory...)



  • @julian said:

    @scottalanmiller Good to hear! We should compare notes -- we have supported over 10k/hr, but this is on multiple machines.

    Right, we are doing 3K+ per hour (sometimes sustained for over 10 hours continuous) on a single NodeBB instance. Although the average is quite a bit lower. I think we were at 1,200/hr during the night last night before we did lots of maintenance this morning, which always slows things down.



  • @julian said:

    IIRC Mongo keeps everything in memory anyways, so when it comes down to it the speed difference may really be negligible. (Though I imagine if you have more data than memory, it starts keeping only the most recently used data in-memory...)

    We plan on keeping memory increasing as the scale increases 🙂 Although, at some point, that will likely be impossible. Although as data gets really old, keeping it in memory probably makes zero difference anyway. Most threads are never seen again.

    Our busiest thread just hit the half million views mark yesterday.

