Background task processing separates the work that needs to happen from the request that triggers it. Sending emails, generating reports, processing image uploads, syncing with external APIs: these operations do not belong in the request-response cycle. Celery is the dominant task queue for Django, backed by Redis or RabbitMQ as the message broker. This guide covers the full setup, from basic task definitions and broker configuration through periodic scheduling, error handling, retry strategies, monitoring, and production deployment patterns I have used across projects handling millions of tasks per day. For broader deployment patterns, see the Deployment hub.
The cost of not using background tasks is predictable: slow page loads, timeout errors under load, and coupled failure modes where a third-party API outage takes down your entire application. Celery solves this by decoupling the work into a separate process.
Project setup
Install Celery with Redis support:
pip install "celery[redis]" django-celery-beat
Create the Celery application alongside your Django project:
# myproject/celery.py
import os
from celery import Celery
os.environ.setdefault('DJANGO_SETTINGS_MODULE', 'myproject.settings.production')
app = Celery('myproject')
app.config_from_object('django.conf:settings', namespace='CELERY')
app.autodiscover_tasks()
In your project’s __init__.py:
from .celery import app as celery_app
__all__ = ('celery_app',)
Configure the broker in settings:
CELERY_BROKER_URL = os.environ.get('REDIS_URL', 'redis://127.0.0.1:6379/0')
CELERY_RESULT_BACKEND = 'django-db'  # requires the django-celery-results package in INSTALLED_APPS
CELERY_ACCEPT_CONTENT = ['json']
CELERY_TASK_SERIALIZER = 'json'
CELERY_RESULT_SERIALIZER = 'json'
CELERY_TIMEZONE = 'UTC'
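With the broker configured, you can start a worker from the project root to confirm that Celery boots and discovers your tasks (the command assumes the `myproject` layout above and a Redis instance on the default port):

```shell
celery -A myproject worker --loglevel=info
```

The startup banner lists every discovered task; if an app's tasks are missing, check that its `tasks.py` sits inside an installed app.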
Defining tasks
Tasks live in tasks.py inside each app:
# notifications/tasks.py
from celery import shared_task
from django.core.mail import send_mail

@shared_task(bind=True, max_retries=3)
def send_welcome_email(self, user_id):
    from accounts.models import User
    try:
        user = User.objects.get(id=user_id)
        send_mail(
            'Welcome',
            'Thanks for joining.',
            'noreply@prodjango.com',
            [user.email],
        )
    except User.DoesNotExist:
        pass  # User was deleted between enqueue and execution
    except Exception as exc:
        raise self.retry(exc=exc, countdown=60 * (2 ** self.request.retries))
Call the task asynchronously from views:
from notifications.tasks import send_welcome_email

def register(request):
    # ... create user ...
    send_welcome_email.delay(user.id)
    return redirect('dashboard')
Pass serializable arguments only. Pass the user ID, not the user object. The task runs in a separate process that needs to look up the object fresh from the database.
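The reason is the JSON serializer configured above: every argument must survive a JSON round trip, which an integer ID does and a model instance does not. A minimal illustration, with no Celery required (`enqueue` here is a stand-in for what `.delay()` does to the message body):

```python
import json

def enqueue(task_name, *args):
    # Stand-in for Celery's message serialization with the json serializer
    return json.dumps({'task': task_name, 'args': args})

class User:
    def __init__(self, id):
        self.id = id

user = User(42)

enqueue('send_welcome_email', user.id)  # fine: 42 is JSON-serializable

try:
    enqueue('send_welcome_email', user)  # TypeError: a model instance is not
except TypeError:
    print('pass the id, not the object')
```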
Retry strategies and error handling
Production tasks need retry logic. Network errors, database timeouts, and third-party API failures are normal:
@shared_task(
    bind=True,
    max_retries=5,
    default_retry_delay=120,
    autoretry_for=(ConnectionError, TimeoutError),
    retry_backoff=True,
    retry_backoff_max=600,
)
def sync_inventory(self, product_id):
    # Retry automatically on connection errors with exponential backoff
    ...
retry_backoff=True with retry_backoff_max implements exponential backoff capped at 10 minutes. This prevents retry storms when an external service is down.
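The resulting schedule can be sketched in a few lines: the nth retry waits roughly 2**n seconds, capped by retry_backoff_max. Celery also randomizes these delays by default (retry_jitter), which is omitted here for clarity:

```python
def backoff_delay(retries, cap=600):
    # Approximates retry_backoff: 1s, 2s, 4s, 8s, ... capped at
    # retry_backoff_max; retry_jitter randomizes this in practice
    return min(2 ** retries, cap)

delays = [backoff_delay(n) for n in range(11)]
# climbs 1, 2, 4, 8, ... then flattens at the 600-second cap
```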
Set CELERY_TASK_ACKS_LATE = True in settings (the namespaced form of task_acks_late) so tasks are only acknowledged after completion. If a worker crashes mid-task, the broker redelivers the task to another worker.
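In settings form, together with the related crash-recovery flag (task_reject_on_worker_lost controls redelivery when the worker process dies outright rather than raising an exception):

```python
CELERY_TASK_ACKS_LATE = True              # ack only after the task completes
CELERY_TASK_REJECT_ON_WORKER_LOST = True  # requeue tasks from a crashed worker
```

Both settings only make sense for idempotent tasks, since redelivery means the task body may run twice.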
Periodic tasks with Celery Beat
Celery Beat runs scheduled tasks. Use django-celery-beat for database-backed schedules that you can manage through the admin:
# settings
INSTALLED_APPS += ['django_celery_beat']
CELERY_BEAT_SCHEDULER = 'django_celery_beat.schedulers:DatabaseScheduler'
Or define schedules in settings for simpler setups:
from celery.schedules import crontab

CELERY_BEAT_SCHEDULE = {
    'daily-report': {
        'task': 'analytics.tasks.generate_daily_report',
        'schedule': crontab(hour=6, minute=0),
    },
    'cleanup-expired-sessions': {
        'task': 'accounts.tasks.cleanup_sessions',
        'schedule': crontab(hour=3, minute=30),
    },
}
Run the Beat scheduler as a separate process:
celery -A myproject beat --loglevel=info
Task priorities and routing
Route different task types to dedicated queues:
CELERY_TASK_ROUTES = {
    'notifications.tasks.*': {'queue': 'notifications'},
    'analytics.tasks.*': {'queue': 'analytics'},
}
Run workers per queue:
celery -A myproject worker --queues=notifications --concurrency=4
celery -A myproject worker --queues=analytics --concurrency=2
This prevents a flood of analytics tasks from blocking time-sensitive notifications.
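When glob patterns are not expressive enough, Celery also accepts a callable in task_routes. A hypothetical router mirroring the rules above (the signature is the one Celery passes to custom routers):

```python
def route_task(name, args, kwargs, options, task=None, **kw):
    # Route by task module prefix; unknown tasks fall back to 'default'
    if name.startswith('notifications.'):
        return {'queue': 'notifications'}
    if name.startswith('analytics.'):
        return {'queue': 'analytics'}
    return {'queue': 'default'}

route_task('notifications.tasks.send_welcome_email', (), {}, {})
# → {'queue': 'notifications'}
```

Register it with CELERY_TASK_ROUTES = [route_task]; dicts and callables can be mixed in the same list.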
Monitoring with Flower
Flower provides a web interface for monitoring Celery workers and tasks:
pip install flower
celery -A myproject flower --port=5555
Track active tasks, worker status, task success rates, and execution times. In production, put Flower behind authentication and restrict access.
Production deployment
In production, run Celery workers as managed processes using systemd, Supervisor, or container orchestration:
# /etc/systemd/system/celery-worker.service
[Unit]
Description=Celery Worker
After=network.target
[Service]
Type=simple
User=deploy
WorkingDirectory=/opt/myproject
ExecStart=/opt/myproject/.venv/bin/celery -A myproject worker --loglevel=info --concurrency=4
Restart=always
[Install]
WantedBy=multi-user.target
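After installing the unit file, reload systemd and enable the worker; a second unit for the Beat scheduler follows the same pattern with the beat command in ExecStart:

```shell
sudo systemctl daemon-reload
sudo systemctl enable --now celery-worker
sudo systemctl status celery-worker
```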
Key production settings:
CELERY_WORKER_PREFETCH_MULTIPLIER = 1 # Fetch one task at a time for fairness
CELERY_TASK_TIME_LIMIT = 300 # Hard kill after 5 minutes
CELERY_TASK_SOFT_TIME_LIMIT = 240 # Raise SoftTimeLimitExceeded after 4 minutes
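The soft limit gives a task a window to save partial work before the hard kill. The exception is `celery.exceptions.SoftTimeLimitExceeded`; a stand-in class below keeps the sketch self-contained and runnable without Celery:

```python
class SoftTimeLimitExceeded(Exception):
    """Stand-in for celery.exceptions.SoftTimeLimitExceeded."""

def generate_report(run_expensive_query):
    # Celery raises SoftTimeLimitExceeded inside the task body at the
    # soft limit, leaving the gap before the hard limit for cleanup
    try:
        return run_expensive_query()
    except SoftTimeLimitExceeded:
        return 'partial'  # persist what finished, log, and exit cleanly

def slow_query():
    raise SoftTimeLimitExceeded()

generate_report(slow_query)  # → 'partial'
```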
Frequently asked questions
Can I use Django Q or Huey instead of Celery? Yes. Django Q and Huey are simpler alternatives for projects that do not need Celery’s full feature set. If you only need basic task deferral and scheduling, a lighter library reduces operational complexity.
Should I use Redis or RabbitMQ as the broker? Redis is simpler to operate and sufficient for most projects. RabbitMQ offers more sophisticated routing and reliability guarantees. If you are already running Redis for caching, using it as the broker avoids adding another infrastructure component.
How do I test Celery tasks? Call tasks synchronously in tests with task.apply() or set CELERY_TASK_ALWAYS_EAGER = True in test settings. This executes tasks inline without needing a running broker or worker. See the testing strategy guide.
What about task idempotency? Design every task to be safely re-executable. Tasks may run more than once due to retries or broker redelivery. Use database constraints and conditional updates to ensure repeated execution produces the same result.