Asynchronous topics (Django documentation)

Django 3.1

Asynchronous views and middleware support

Django now supports a fully asynchronous request path, including asynchronous views and asynchronous middleware.

To get started with async views, you need to declare a view using async def:

import asyncio

from django.http import HttpResponse

async def my_view(request):
    await asyncio.sleep(0.5)
    return HttpResponse('Hello, async world!')

All asynchronous features are supported whether you are running under WSGI or ASGI mode.

However, there will be performance penalties using async code in WSGI mode.

You can read more about the specifics in the Asynchronous support documentation.

You are free to mix async and sync views, middleware, and tests as much as you want. Django will ensure that you always end up with the right execution context.

We expect most projects will keep the majority of their views synchronous, and only have a select few running in async mode - but it is entirely your choice.

Django’s ORM, cache layer, and other pieces of code that do long-running network calls do not yet support async access. We expect to add support for them in upcoming releases.

Async views are ideal, however, if you are doing a lot of API or HTTP calls inside your view: you can now natively run all those HTTP calls in parallel to considerably speed up your view’s execution.
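The parallel-calls idea can be sketched with plain asyncio. Here fetch_remote and its delays are hypothetical stand-ins for real HTTP calls (which you might make with a library such as httpx or aiohttp):

```python
import asyncio

async def fetch_remote(name, delay):
    # Hypothetical stand-in for an HTTP call to an external API.
    await asyncio.sleep(delay)
    return "%s: done" % name

async def my_view_body():
    # asyncio.gather() starts both "requests" at once, so the total
    # wait is roughly the longest single delay, not the sum.
    return await asyncio.gather(
        fetch_remote("billing", 0.05),
        fetch_remote("inventory", 0.05),
    )

print(asyncio.run(my_view_body()))  # ['billing: done', 'inventory: done']
```

Inside a real async view you would await the gather directly and build an HttpResponse from the results.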

Asynchronous support should be entirely backwards-compatible and we have tried to ensure that it has no speed regressions for your existing, synchronous code. It should have no noticeable effect on any existing Django projects.

Async views (topics/http/views)

As well as being synchronous functions, views can also be asynchronous (“async”) functions, normally defined using Python’s async def syntax.

Django will automatically detect these and run them in an async context.

However, you will need to use an async server based on ASGI to get their performance benefits.

Here’s an example of an async view:

import datetime
from zoneinfo import ZoneInfo

from django.http import HttpResponse

async def current_datetime(request):
    now = datetime.datetime.now(tz=ZoneInfo("Europe/Paris"))
    html = '<html><body>It is now %s.</body></html>' % now
    return HttpResponse(html)

You can read more about Django’s async support, and how to best use async views, in Asynchronous support.

Asynchronous middleware support (topics/http/middleware#async-middleware)

Middleware can support any combination of synchronous and asynchronous requests.

Django will adapt requests to fit the middleware’s requirements if it cannot support both, but at a performance penalty.

By default, Django assumes that your middleware is capable of handling only synchronous requests.

To change these assumptions, set the following attributes on your middleware factory function or class:

  • sync_capable is a boolean indicating if the middleware can handle synchronous requests. Defaults to True.

  • async_capable is a boolean indicating if the middleware can handle asynchronous requests. Defaults to False.
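As a sketch of these flags in action, here is a factory marked async-only by hand; fake_get_response below is a stand-in for the real get_response that Django passes in:

```python
import asyncio

def my_async_middleware(get_response):
    async def middleware(request):
        # Pre-processing could go here.
        response = await get_response(request)
        return response
    return middleware

# Mark the factory as only able to handle async requests; the
# async_only_middleware() decorator sets these same attributes for you.
my_async_middleware.sync_capable = False
my_async_middleware.async_capable = True

async def fake_get_response(request):
    # Stand-in for the next layer of the middleware stack.
    return "response"

mw = my_async_middleware(fake_get_response)
print(asyncio.run(mw("request")))  # response
```

With both flags left at their defaults, Django would instead assume the factory is sync-only and adapt async requests around it, at a performance cost.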

If your middleware has both sync_capable = True and async_capable = True, then Django will pass it the request without converting it.

In this case, you can work out if your middleware will receive async requests by checking if the get_response object you are passed is a coroutine function, using asyncio.iscoroutinefunction().

The django.utils.decorators module contains sync_only_middleware(), async_only_middleware(), and sync_and_async_middleware() decorators that allow you to apply these flags to middleware factory functions.

The returned callable must match the sync or async nature of the get_response method. If you have an asynchronous get_response, you must return a coroutine function (async def).

process_view, process_template_response and process_exception methods, if they are provided, should also be adapted to match the sync/async mode.

However, Django will individually adapt them as required if you do not, at an additional performance penalty.

Here’s an example of how to create a middleware function that supports both:

import asyncio

from django.utils.decorators import sync_and_async_middleware

@sync_and_async_middleware
def simple_middleware(get_response):
    # One-time configuration and initialization goes here.
    if asyncio.iscoroutinefunction(get_response):
        async def middleware(request):
            # Do something here!
            response = await get_response(request)
            return response
    else:
        def middleware(request):
            # Do something here!
            response = get_response(request)
            return response
    return middleware


If you declare a hybrid middleware that supports both synchronous and asynchronous calls, the kind of call you get may not match the underlying view. Django will optimize the middleware call stack to have as few sync/async transitions as possible.

Thus, even if you are wrapping an async view, you may be called in sync mode if there is other, synchronous middleware between you and the view.

Testing asynchronous code (topics/testing/tools/#async-tests)

If you merely want to test the output of your asynchronous views, the standard test client will run them inside their own asynchronous loop without any extra work needed on your part.

However, if you want to write fully-asynchronous tests for a Django project, you will need to take several things into account.

Firstly, your tests must be async def methods on the test class (in order to give them an asynchronous context).

Django will automatically detect any async def tests and wrap them so they run in their own event loop.

If you are testing from an asynchronous function, you must also use the asynchronous test client. This is available as django.test.AsyncClient, or as self.async_client on any test.

With the exception of the follow parameter, which is not supported, AsyncClient has the same methods and signatures as the synchronous (normal) test client, but any method that makes a request must be awaited:

async def test_my_thing(self):
    response = await self.async_client.get('/some-url/')
    self.assertEqual(response.status_code, 200)

The asynchronous client can also call synchronous views; it runs through Django’s asynchronous request path, which supports both.

Any view called through the AsyncClient will get an ASGIRequest object for its request rather than the WSGIRequest that the normal client creates.

Introduction (topics/async/)

Django has support for writing asynchronous (“async”) views, along with an entirely async-enabled request stack if you are running under ASGI.

Async views will still work under WSGI, but with performance penalties, and without the ability to have efficient long-running requests.

We’re still working on async support for the ORM and other parts of Django. You can expect to see this in future releases.

For now, you can use the sync_to_async() adapter to interact with the sync parts of Django.
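To illustrate the idea behind the adapter, here is a simplified stand-in written with the standard library alone; it is not asgiref’s actual sync_to_async implementation, and blocking_lookup is a hypothetical placeholder for a synchronous ORM call:

```python
import asyncio
import functools

def simple_sync_to_async(func):
    # Simplified sketch of the adapter: run the blocking function in a
    # worker thread so it doesn't stall the event loop.
    @functools.wraps(func)
    async def wrapper(*args, **kwargs):
        loop = asyncio.get_running_loop()
        return await loop.run_in_executor(
            None, functools.partial(func, *args, **kwargs)
        )
    return wrapper

def blocking_lookup(pk):
    # Stand-in for a synchronous call such as an ORM query.
    return {"pk": pk, "title": "Hello"}

async def main():
    return await simple_sync_to_async(blocking_lookup)(123)

print(asyncio.run(main()))  # {'pk': 123, 'title': 'Hello'}
```

The real sync_to_async() also handles thread sensitivity and exception propagation, which this sketch omits.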

There is also a whole range of async-native Python libraries that you can integrate with.

Async views

Any view can be declared async by making the callable part of it return a coroutine; commonly, this is done using async def.

For a function-based view, this means declaring the whole view using async def.

For a class-based view, this means making its __call__() method an async def (not its __init__() or as_view()).


Django uses asyncio.iscoroutinefunction to test if your view is asynchronous or not. If you implement your own method of returning a coroutine, ensure you set the _is_coroutine attribute of the view to asyncio.coroutines._is_coroutine so this function returns True.
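For instance, a wrapper that returns a coroutine without itself being an async def function can be marked like this (make_async_view is a hypothetical helper, not a Django API):

```python
import asyncio
from asyncio import coroutines

def make_async_view(handler):
    # Hypothetical wrapper: a plain function that returns the handler's
    # coroutine rather than being an `async def` itself.
    def view(request):
        return handler(request)
    # Mark the wrapper so asyncio.iscoroutinefunction() (and therefore
    # Django's async-view detection) reports True for it.
    view._is_coroutine = coroutines._is_coroutine
    return view

async def handler(request):
    return "Hello, async world!"

view = make_async_view(handler)
print(asyncio.iscoroutinefunction(view))  # True
```

Without the _is_coroutine marker, the wrapper would be detected as a synchronous view even though it returns a coroutine.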

Under a WSGI server, async views will run in their own, one-off event loop. This means you can use async features, like concurrent async HTTP requests, without any issues, but you will not get the benefits of an async stack.

The main benefit of an async stack is the ability to service hundreds of connections without using Python threads.

This allows you to use slow streaming, long-polling, and other exciting response types.

If you want to use these, you will need to deploy Django using ASGI instead.

In both ASGI and WSGI mode, you can still safely use asynchronous support to run code concurrently rather than serially. This is especially handy when dealing with external APIs or data stores.

If you want to call a part of Django that is still synchronous, like the ORM, you will need to wrap it in a sync_to_async() call.

For example, inside an async view:

from asgiref.sync import sync_to_async

results = await sync_to_async(Blog.objects.get)(pk=123)

You may find it easier to move any ORM code into its own function and call that entire function using sync_to_async().

For example:

from asgiref.sync import sync_to_async

@sync_to_async
def get_blog(pk):
    return Blog.objects.select_related('author').get(pk=pk)

# Then, inside an async view:
blog = await get_blog(pk)

If you accidentally try to call a part of Django that is still synchronous-only from an async view, you will trigger Django’s asynchronous safety protection to protect your data from corruption.


When running in a mode that does not match the view (e.g. an async view under WSGI, or a traditional sync view under ASGI), Django must emulate the other call style to allow your code to run.

This context-switch causes a small performance penalty of around a millisecond.

This is also true of middleware. Django will attempt to minimize the number of context-switches between sync and async.

If you have an ASGI server, but all your middleware and views are synchronous, it will switch just once, before it enters the middleware stack.

However, if you put synchronous middleware between an ASGI server and an asynchronous view, it will have to switch into sync mode for the middleware and then back to async mode for the view.

Django will also hold the sync thread open for middleware exception propagation.

This may not be noticeable at first, but adding this penalty of one thread per request can remove any async performance advantage.

You should do your own performance testing to see what effect ASGI versus WSGI has on your code.

In some cases, there may be a performance increase even for a purely synchronous codebase under ASGI because the request-handling code is still all running asynchronously.

In general you will only want to enable ASGI mode if you have asynchronous code in your project.

For reference, here is Django’s ASGI handler implementation (django/core/handlers/asgi.py):

import logging
import sys
import tempfile
import traceback

from asgiref.sync import sync_to_async

from django.conf import settings
from django.core import signals
from django.core.exceptions import RequestAborted, RequestDataTooBig
from django.core.handlers import base
from django.http import (
    FileResponse,
    HttpRequest,
    HttpResponse,
    HttpResponseBadRequest,
    HttpResponseServerError,
    QueryDict,
    parse_cookie,
)
from django.urls import set_script_prefix
from django.utils.functional import cached_property

logger = logging.getLogger("django.request")


class ASGIRequest(HttpRequest):
    """
    Custom request subclass that decodes from an ASGI-standard request dict
    and wraps request body handling.
    """

    # Number of seconds until a Request gives up on trying to read a request
    # body and aborts.
    body_receive_timeout = 60

    def __init__(self, scope, body_file):
        self.scope = scope
        self._post_parse_error = False
        self._read_started = False
        self.resolver_match = None
        self.script_name = self.scope.get("root_path", "")
        if self.script_name and scope["path"].startswith(self.script_name):
            # TODO: Better is-prefix checking, slash handling?
            self.path_info = scope["path"][len(self.script_name) :]
        else:
            self.path_info = scope["path"]
        # The Django path is different from the ASGI scope path args; it
        # should combine with the script name.
        if self.script_name:
            self.path = "%s/%s" % (
                self.script_name.rstrip("/"),
                self.path_info.replace("/", "", 1),
            )
        else:
            self.path = scope["path"]
        # HTTP basics.
        self.method = self.scope["method"].upper()
        # Ensure query string is encoded correctly.
        query_string = self.scope.get("query_string", "")
        if isinstance(query_string, bytes):
            query_string = query_string.decode()
        self.META = {
            "REQUEST_METHOD": self.method,
            "QUERY_STRING": query_string,
            "SCRIPT_NAME": self.script_name,
            "PATH_INFO": self.path_info,
            # WSGI-expecting code will need these for a while
            "wsgi.multithread": True,
            "wsgi.multiprocess": True,
        }
        if self.scope.get("client"):
            self.META["REMOTE_ADDR"] = self.scope["client"][0]
            self.META["REMOTE_HOST"] = self.META["REMOTE_ADDR"]
            self.META["REMOTE_PORT"] = self.scope["client"][1]
        if self.scope.get("server"):
            self.META["SERVER_NAME"] = self.scope["server"][0]
            self.META["SERVER_PORT"] = str(self.scope["server"][1])
        else:
            self.META["SERVER_NAME"] = "unknown"
            self.META["SERVER_PORT"] = "0"
        # Headers go into META.
        for name, value in self.scope.get("headers", []):
            name = name.decode("latin1")
            if name == "content-length":
                corrected_name = "CONTENT_LENGTH"
            elif name == "content-type":
                corrected_name = "CONTENT_TYPE"
            else:
                corrected_name = "HTTP_%s" % name.upper().replace("-", "_")
            # HTTP/2 says only ASCII chars are allowed in headers, but decode
            # latin1 just in case.
            value = value.decode("latin1")
            if corrected_name in self.META:
                value = self.META[corrected_name] + "," + value
            self.META[corrected_name] = value
        # Pull out request encoding, if provided.
        self._set_content_type_params(self.META)
        # Directly assign the body file to be our stream.
        self._stream = body_file
        # Other bits.
        self.resolver_match = None

    @cached_property
    def GET(self):
        return QueryDict(self.META["QUERY_STRING"])

    def _get_scheme(self):
        return self.scope.get("scheme") or super()._get_scheme()

    def _get_post(self):
        if not hasattr(self, "_post"):
            self._load_post_and_files()
        return self._post

    def _set_post(self, post):
        self._post = post

    def _get_files(self):
        if not hasattr(self, "_files"):
            self._load_post_and_files()
        return self._files

    POST = property(_get_post, _set_post)
    FILES = property(_get_files)

    @cached_property
    def COOKIES(self):
        return parse_cookie(self.META.get("HTTP_COOKIE", ""))


class ASGIHandler(base.BaseHandler):
    """Handler for ASGI requests."""

    request_class = ASGIRequest
    # Size to chunk response bodies into for multiple response messages.
    chunk_size = 2 ** 16

    def __init__(self):
        super().__init__()
        self.load_middleware(is_async=True)

    async def __call__(self, scope, receive, send):
        """
        Async entrypoint - parses the request and hands off to get_response.
        """
        # Serve only HTTP connections.
        # FIXME: Allow to override this.
        if scope["type"] != "http":
            raise ValueError(
                "Django can only handle ASGI/HTTP connections, not %s." % scope["type"]
            )
        # Receive the HTTP request body as a stream object.
        try:
            body_file = await self.read_body(receive)
        except RequestAborted:
            return
        # Request is complete and can be served.
        set_script_prefix(self.get_script_prefix(scope))
        await sync_to_async(signals.request_started.send, thread_sensitive=True)(
            sender=self.__class__, scope=scope
        )
        # Get the request and check for basic issues.
        request, error_response = self.create_request(scope, body_file)
        if request is None:
            await self.send_response(error_response, send)
            return
        # Get the response, using the async mode of BaseHandler.
        response = await self.get_response_async(request)
        response._handler_class = self.__class__
        # Increase chunk size on file responses (ASGI servers handle
        # low-level chunking).
        if isinstance(response, FileResponse):
            response.block_size = self.chunk_size
        # Send the response.
        await self.send_response(response, send)

    async def read_body(self, receive):
        """Reads an HTTP body from an ASGI connection."""
        # Use the tempfile that auto rolls-over to a disk file as it fills up.
        body_file = tempfile.SpooledTemporaryFile(
            max_size=settings.FILE_UPLOAD_MAX_MEMORY_SIZE, mode="w+b"
        )
        while True:
            message = await receive()
            if message["type"] == "http.disconnect":
                # Early client disconnect.
                raise RequestAborted()
            # Add a body chunk from the message, if provided.
            if "body" in message:
                body_file.write(message["body"])
            # Quit out if that's the end.
            if not message.get("more_body", False):
                break
        return body_file

    def create_request(self, scope, body_file):
        """
        Create the Request object and returns either (request, None) or
        (None, response) if there is an error response.
        """
        try:
            return self.request_class(scope, body_file), None
        except UnicodeDecodeError:
            logger.warning(
                "Bad Request (UnicodeDecodeError)",
                exc_info=sys.exc_info(),
                extra={"status_code": 400},
            )
            return None, HttpResponseBadRequest()
        except RequestDataTooBig:
            return None, HttpResponse("413 Payload too large", status=413)

    def handle_uncaught_exception(self, request, resolver, exc_info):
        """Last-chance handler for exceptions."""
        # There's no WSGI server to catch the exception further up
        # if this fails, so translate it into a plain text response.
        try:
            return super().handle_uncaught_exception(request, resolver, exc_info)
        except Exception:
            return HttpResponseServerError(
                traceback.format_exc() if settings.DEBUG else "Internal Server Error",
                content_type="text/plain",
            )

    async def send_response(self, response, send):
        """Encode and send a response out over ASGI."""
        # Collect cookies into headers. Have to preserve header case as there
        # are some non-RFC compliant clients that require e.g. Content-Type.
        response_headers = []
        for header, value in response.items():
            if isinstance(header, str):
                header = header.encode("ascii")
            if isinstance(value, str):
                value = value.encode("latin1")
            response_headers.append((bytes(header), bytes(value)))
        for c in response.cookies.values():
            response_headers.append(
                (b"Set-Cookie", c.output(header="").encode("ascii").strip())
            )
        # Initial response message.
        await send(
            {
                "type": "http.response.start",
                "status": response.status_code,
                "headers": response_headers,
            }
        )
        # Streaming responses need to be pinned to their iterator.
        if response.streaming:
            # Access `__iter__` and not `streaming_content` directly in case
            # it has been overridden in a subclass.
            for part in response:
                for chunk, _ in self.chunk_bytes(part):
                    await send(
                        {
                            "type": "http.response.body",
                            "body": chunk,
                            # Ignore "more" as there may be more parts; instead,
                            # use an empty final closing message with False.
                            "more_body": True,
                        }
                    )
            # Final closing message.
            await send({"type": "http.response.body"})
        # Other responses just need chunking.
        else:
            # Yield chunks of response.
            for chunk, last in self.chunk_bytes(response.content):
                await send(
                    {
                        "type": "http.response.body",
                        "body": chunk,
                        "more_body": not last,
                    }
                )
        await sync_to_async(response.close, thread_sensitive=True)()

    @classmethod
    def chunk_bytes(cls, data):
        """
        Chunks some data up so it can be sent in reasonable size messages.
        Yields (chunk, last_chunk) tuples.
        """
        position = 0
        if not data:
            yield data, True
            return
        while position < len(data):
            yield (
                data[position : position + cls.chunk_size],
                (position + cls.chunk_size) >= len(data),
            )
            position += cls.chunk_size

    def get_script_prefix(self, scope):
        """
        Return the script prefix to use from either the scope or a setting.
        """
        if settings.FORCE_SCRIPT_NAME:
            return settings.FORCE_SCRIPT_NAME
        return scope.get("root_path", "") or ""
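To see what chunk_bytes yields, here is the same chunking logic re-implemented as a standalone function (so it runs without Django), using a tiny chunk size for illustration:

```python
def chunk_bytes(data, chunk_size=2 ** 16):
    # Yields (chunk, last_chunk) tuples, mirroring ASGIHandler.chunk_bytes.
    position = 0
    if not data:
        yield data, True
        return
    while position < len(data):
        yield (
            data[position : position + chunk_size],
            (position + chunk_size) >= len(data),
        )
        position += chunk_size

# A 5-byte payload with a 2-byte chunk size splits into three messages;
# only the final one is flagged as the last.
chunks = list(chunk_bytes(b"hello", chunk_size=2))
print(chunks)  # [(b'he', False), (b'll', False), (b'o', True)]
```

The last_chunk flag is what send_response inverts into ASGI's more_body field when emitting http.response.body messages.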