EDIT4 29/12/2020:
I thought it could be the DEBUG setting, but neither value works.
I have read many tutorials and the documentation, and I still do not understand what is wrong with my code here, for example.
EDIT3 29/12/2020:
I tried call_command('flush', '--noinput') and send_email, and both work. I do not understand why call_command('dumpdata') or call_command('dbbackup') does not.
I also see some Redis warnings; maybe they are related?
```
redis_1 | 1:C 29 Dec 2020 14:08:38.361 # oO0OoO0OoO0Oo Redis is starting oO0OoO0OoO0Oo
redis_1 | 1:C 29 Dec 2020 14:08:38.361 # Redis version=6.0.9, bits=64, commit=00000000, modified=0, pid=1, just started
redis_1 | 1:C 29 Dec 2020 14:08:38.361 # Warning: no config file specified, using the default config. In order to specify a config file use redis-server /path/to/redis.conf
redis_1 | 1:M 29 Dec 2020 14:08:38.363 * Running mode=standalone, port=6379.
redis_1 | 1:M 29 Dec 2020 14:08:38.363 # WARNING: The TCP backlog setting of 511 cannot be enforced because /proc/sys/net/core/somaxconn is set to the lower value of 128.
redis_1 | 1:M 29 Dec 2020 14:08:38.363 # Server initialized
redis_1 | 1:M 29 Dec 2020 14:08:38.363 # WARNING overcommit_memory is set to 0! Background save may fail under low memory condition. To fix this issue add 'vm.overcommit_memory = 1' to /etc/sysctl.conf and then reboot or run the command 'sysctl vm.overcommit_memory=1' for this to take effect.
redis_1 | 1:M 29 Dec 2020 14:08:38.363 # WARNING you have Transparent Huge Pages (THP) support enabled in your kernel. This will create latency and memory usage issues with Redis. To fix this issue run the command 'echo madvise > /sys/kernel/mm/transparent_hugepage/enabled' as root, and add it to your /etc/rc.local in order to retain the setting after a reboot. Redis must be restarted after THP is disabled (set to 'madvise' or 'never').
redis_1 | 1:M 29 Dec 2020 14:08:38.369 * Loading RDB produced by version 6.0.9
redis_1 | 1:M 29 Dec 2020 14:08:38.370 * RDB age 47 seconds
redis_1 | 1:M 29 Dec 2020 14:08:38.370 * RDB memory usage when created 0.77 Mb
redis_1 | 1:M 29 Dec 2020 14:08:38.370 * DB loaded from disk: 0.001 seconds
redis_1 | 1:M 29 Dec 2020 14:08:38.370 * Ready to accept connections
```
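As far as I can tell these are host-level kernel tuning warnings, not errors, and the log itself spells out the fixes; as a sketch, they would be run on the Docker host (not inside the container):

```shell
# allow Redis background saves under low memory (suggested by the log)
sysctl -w vm.overcommit_memory=1
# let Redis's TCP backlog of 511 actually take effect
sysctl -w net.core.somaxconn=511
# avoid Transparent Huge Pages latency issues with Redis
echo madvise > /sys/kernel/mm/transparent_hugepage/enabled
```

So they should be unrelated to dumpdata or dbbackup not producing a file.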
EDIT2 29/12/2020:
I tried to make the backup using dumpdata, but that does not work either. There is no error, but I cannot find my files even when I specify a storage folder.
See my updated tasks.py below. If I do not redirect stdout, I can see the records printed to the console, so I thought I could fix this by redirecting, but...
EDIT1:
I installed postgresql-client and the backup seems to work, but I cannot find the output file. It is not in my backup folder, even though that folder is specified in my settings.py.
EDIT:
I rebuilt the image and now the task runs, but fails with [Errno 2] No such file or directory: 'pg_dump'.
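(This was later fixed in EDIT1 above by installing postgresql-client, which provides pg_dump; assuming a Debian-based image, the Dockerfile change looks roughly like:)

```dockerfile
# install pg_dump and the other Postgres client tools for django-dbbackup
RUN apt-get update \
    && apt-get install -y --no-install-recommends postgresql-client \
    && rm -rf /var/lib/apt/lists/*
```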
I am trying to implement Celery and celery-beat to run a periodic task. My goal is to back up a PostgreSQL database using django-dbbackup,
but only my test task hello runs, even though all three tasks are registered:
```
celery_1 | [tasks]
celery_1 | . cafe.tasks.backup
celery_1 | . cafe.tasks.hello
celery_1 | . core.celery.debug_task
celery_1 | [2020-12-28 17:05:00,075: WARNING/ForkPoolWorker-4] Hello there!
celery_1 | [2020-12-28 17:05:00,081: INFO/ForkPoolWorker-4] Task cafe.tasks.hello[5b3e46b5-16bc-4d6a-b608-69ffdf8e5664] succeeded in 0.006272200000239536s: None
```
tasks.py

```python
import sys

from celery import shared_task
from django.conf import settings
from django.core import management
from django.utils import timezone


@shared_task
def backup():
    print("backup")
    try:
        print("backup done on " + str(timezone.now()))
        print(settings.BASE_DIR)
        # management.call_command('dumpdata')
        sysout = sys.stdout
        sys.stdout = open('/usr/src/app/backup/filename.json', 'w')
        management.call_command('dumpdata', 'parameters')
        sys.stdout.close()
        sys.stdout = sysout
    except Exception:
        print("Error during backup on " + str(timezone.now()))
```
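As an aside, reassigning sys.stdout by hand is fragile in a worker because an exception leaves it pointing at the file; contextlib.redirect_stdout restores it automatically, and Django's call_command also accepts a stdout keyword. A minimal sketch of the redirect pattern, where dump_records is a hypothetical stand-in for the management command:

```python
import contextlib
import io


def dump_records():
    # hypothetical stand-in for management.call_command('dumpdata', ...),
    # which writes its JSON to sys.stdout by default
    print('[{"model": "parameters.example", "pk": 1}]')


buf = io.StringIO()
with contextlib.redirect_stdout(buf):
    dump_records()  # everything printed here lands in buf, not the console

data = buf.getvalue()  # ready to be written to the backup file

# With Django itself the redirect is unnecessary, since call_command
# accepts a writable object directly:
#   with open('/usr/src/app/backup/filename.json', 'w') as f:
#       management.call_command('dumpdata', stdout=f)
```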
settings.py

```python
from celery.schedules import crontab

CELERY_BROKER_URL = 'redis://redis:6379'
CELERY_RESULT_BACKEND = 'redis://redis:6379'
CELERY_ACCEPT_CONTENT = ['application/json']
CELERY_TASK_SERIALIZER = 'json'
CELERY_RESULT_SERIALIZER = 'json'

CELERY_BEAT_SCHEDULE = {
    'hello': {
        'task': 'cafe.tasks.hello',
        'schedule': crontab(),  # execute every minute
    },
    'backup': {
        'task': 'cafe.tasks.backup',
        'schedule': crontab(),  # execute every minute
    },
}
```
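Regarding the missing backup files: per django-dbbackup's documentation, dbbackup writes to the storage configured by its own settings, not to an arbitrary folder. A minimal configuration pointing it at a local directory (the path here is an example) looks like:

```python
# settings.py -- django-dbbackup storage, writing to a local folder
DBBACKUP_STORAGE = 'django.core.files.storage.FileSystemStorage'
DBBACKUP_STORAGE_OPTIONS = {'location': '/usr/src/app/backup/'}
```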