Lost all my data? Help please
-
Hello,
The situation is very urgent.
Today, I discovered that I have probably lost all my data, after one year of hard team work building a social network website (only for Node.js developers). My website contained more than 900 registered users and more than 300 articles and posts written by me and the users.
When I visited my website today (without having made any changes in the admin panel for many days), I could not even log in with my admin account (user not known). The list of registered users has become empty: 0 users, and no data at all. On my home page, only the header bar with login/register is shown.
What's weird is that login via social networks (Facebook/Twitter) still works fine, but it creates new users.
Can someone explain to me what exactly happened? What should I do to get more details on my situation? It's like my database was destroyed. I'm losing more than a year of hard work.
Please, I need your help!! Thanks.
-
Are you using Redis?
-
It will take some time to track down what happened. Do you have good backups?
-
There have been similar incidents on several sites in the last few days.
See this thread: https://community.nodebb.org/topic/6904/how-to-export-from-redis-to-mongodb-my-database-got-wiped
-
0.9 related?
Our dev site is down... the admin account throws "user does not exist." I thought it was just the weekly token error, cleared Chrome and restarted... not working. We take regular backups, so we're not super concerned for our stuff, but if you don't take backups, this could be bad.
Edit: back up; we lost two days.
-
Thanks for your responses.
I cleared the browser cache and used different browsers on different devices, but the problem is on the server side. This is my website: nodejsworld.com. I'm using a DigitalOcean hosting offer without the backup option.
Is there a way (a command) to verify whether I can restore all my data? Or a solution to this problem?
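A quick, non-authoritative way to check whether Redis still has anything on disk to restore from is to ask it where its snapshot file lives and when it was last written. A sketch, assuming Redis on the default port 6379 and the usual /var/lib/redis data directory (paths may differ on your host):

# how many keys are left in the live database?
redis-cli -p 6379 DBSIZE

# where does Redis write its snapshot file?
redis-cli -p 6379 CONFIG GET dir
redis-cli -p 6379 CONFIG GET dbfilename

# when did the last snapshot happen, and did it succeed?
redis-cli -p 6379 INFO persistence | grep -E 'rdb_last_save_time|rdb_last_bgsave_status'

# check the size and timestamp of the dump file itself
ls -lh /var/lib/redis/dump.rdb

If the dump file is tiny or its timestamp is after the data disappeared, there is probably nothing left to restore from it.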
-
@kacemlight you can configure a cronjob to back up your database every X minutes/hours.
But your data is most likely lost forever.
-
You can check in your Redis working directory for old copies of your database.
The Redis DB runs from memory and just dumps to a disk file from time to time.
The problem seems to be that some error occurred during the dump; Redis flushed the DB and reloaded from the disk file, which is or was empty. I think your data is lost.
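For reference, you can see the snapshot schedule Redis is actually using, and optionally turn on the append-only file for better durability, with redis-cli. A sketch (CONFIG REWRITE needs Redis 2.8+ and a writable redis.conf):

# show the current snapshot schedule ("save <seconds> <changes>" pairs)
redis-cli CONFIG GET save

# force an immediate snapshot to dump.rdb
redis-cli BGSAVE

# optionally enable the append-only file so every write is also logged to disk
redis-cli CONFIG SET appendonly yes
redis-cli CONFIG REWRITE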
-
Maybe you want to consider SSDB support now? :bowtie:
-
@AOKP
after losing several days of posts to this error, I set up a database backup following this article: http://redis4you.com/articles.php?id=010
-
@wellenreiter said:
@AOKP
after losing several days of posts to this error, I set up a database backup following this article: http://redis4you.com/articles.php?id=010
Mine is similar to this, except I was too lazy to set up a crontab for every X minutes. I simply made an hourly one.
-
@AOKP said:
Mine is similar to this, except I was too lazy to set up a crontab for every X minutes. I simply made an hourly one.
Did the same
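For anyone wanting the same setup, the hourly variant is a single crontab line. A sketch, assuming the backup script lives at /root/redis-backup.sh as in the script posted further down:

# crontab -e: run the Redis backup at the top of every hour
0 * * * * /root/redis-backup.sh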
-
Does someone know why this happened?!! Could it be related to the hosting provider DigitalOcean?
-
The host system of my installation is not running on DigitalOcean, as far as I know.
The provider is called Linode? Sorry, but I'm not the one leasing the host or dealing with the provider at all.
-
Same issue here on DigitalOcean hosting. The database was first wiped while running v0.8.2, then again last night while running v0.9.0.
-
What's amazing is that this is happening to several people within the same few days, on different Redis and NodeBB versions, without anyone having touched anything for days!!!
I also back up every hour now!!!
crontab -e
then add:
# 0 5 * * 1 tar -zcf /var/backups/home.tgz /home/
#
# For more information see the manual pages of crontab(5) and cron(8)
#
# m h  dom mon dow   command
MAILTO=""
13 * * * * /root/redis-backup.sh
And
/root/redis-backup.sh
(grabbed from the Internet, I don't remember where, and modified to fit my needs). Don't forget to
chmod ug+x /root/redis-backup.sh
#!/bin/bash
#
## redis backup script
## usage
##   redis-backup.sh port backup.dir

port=${1:-6379}
backup_dir=${2:-"/var/lib/redis/backup"}
cli="/usr/bin/redis-cli -p $port"
rdb="/var/lib/redis/dump.rdb"

test -f $rdb || {
    echo "[$port] No RDB Found"
    exit 1
}

test -d $backup_dir || {
    echo "[$port] Create backup directory $backup_dir" && mkdir -p $backup_dir
}

# perform a bgsave before copy
echo bgsave | $cli
echo "[$port] waiting for 5 seconds..."
sleep 5

try=10
while [ $try -gt 0 ] ; do
    ## redis-cli output uses DOS line feeds '\r\n', remove '\r'
    bg=$(echo 'info Persistence' | $cli | awk -F: '/rdb_bgsave_in_progress/{sub(/\r/, "", $0); print $2}')
    ok=$(echo 'info Persistence' | $cli | awk -F: '/rdb_last_bgsave_status/{sub(/\r/, "", $0); print $2}')
    if [ "$bg" = "0" ] && [ "$ok" = "ok" ] ; then
        dst="$backup_dir/$port-dump.$(date +%Y%m%d%H%M).rdb"
        cp $rdb $dst
        if [ $? = 0 ] ; then
            echo "[$port] redis rdb $rdb copied to $dst."
            # delete rdb backups created more than 30 days ago
            cd $backup_dir
            find . \( -name "$port-dump*" \) -mtime +30 -exec rm -f {} \;
            exit 0
        else
            echo "[$port] >> Failed to copy $rdb to $dst!"
        fi
    fi
    try=$((try - 1))
    echo "[$port] redis maybe busy, waiting and retry in 5s..."
    sleep 5
done
If it can help: it could be better and shouldn't live in the root folder; it was done in a hurry, you know what I mean.
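In case it helps someone later, restoring from one of those copies is roughly the reverse: stop Redis, put the backup back in place as dump.rdb, and start Redis again. A rough sketch assuming the paths from the script above and a Debian/Ubuntu-style service name; the backup file name and timestamp are just an example:

# stop Redis so it does not overwrite dump.rdb on shutdown
service redis-server stop

# copy the chosen backup back into place as the live dump file
cp /var/lib/redis/backup/6379-dump.201509011300.rdb /var/lib/redis/dump.rdb
chown redis:redis /var/lib/redis/dump.rdb

# start Redis again; it reads dump.rdb at startup
# (unless appendonly is enabled, in which case the AOF takes precedence)
service redis-server start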