Django community: RSS
This page, updated regularly, aggregates Django Q&A from the Django community.
-
Role based access control implementation
I am developing a food ordering and delivery service with four roles in my project: admin, customer, rider, and manager. I am struggling to implement role-based access control, so I would like to ask how to implement it in such a project. The tech stack I am using is Vue.js, Django and Django Rest Framework. I feel stuck when implementing it because I am a beginner. -
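For the question above, a minimal sketch of role-based access on the DRF side, assuming the custom user model stores a role field (the field name and role values here are placeholders, not something from the question):

    from rest_framework.permissions import BasePermission

    class HasRole(BasePermission):
        """Allow access only when the authenticated user's role is in allowed_roles."""

        allowed_roles = set()

        def has_permission(self, request, view):
            return (
                request.user.is_authenticated
                and getattr(request.user, "role", None) in self.allowed_roles
            )

    class IsRider(HasRole):
        allowed_roles = {"rider"}

    class IsManagerOrAdmin(HasRole):
        allowed_roles = {"manager", "admin"}

    # usage on a hypothetical view:
    # class DeliveryViewSet(viewsets.ModelViewSet):
    #     permission_classes = [IsAuthenticated, IsRider]

The Vue front end can hide or show screens based on the role the API reports, but the permission classes are what actually enforce access on the server.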
trouble connecting my database to Django server on deployment machine
I'm deploying to AWS Linux 2023 and my PostgreSQL database is on AWS RDS. I've installed psql and checked that the DB is accessible from my instance. I've also checked that the environment variables are fetched exactly as expected in my settings.py, and I even SSH'd in and applied the migrations myself. But I keep getting the following issue:
[stderr] raise dj_exc_value.with_traceback(traceback) from exc_value
[stderr] File "/home/ec2-user/.local/share/virtualenvs/app-UPB06Em1/lib/python3.13/site-packages/django/db/backends/base/base.py", line 279, in ensure_connection
[stderr] self.connect()
[stderr] ~~~~~~~~~~~~^^
[stderr] File "/home/ec2-user/.local/share/virtualenvs/app-UPB06Em1/lib/python3.13/site-packages/django/utils/asyncio.py", line 26, in inner
[stderr] return func(*args, **kwargs)
[stderr] File "/home/ec2-user/.local/share/virtualenvs/app-UPB06Em1/lib/python3.13/site-packages/django/db/backends/base/base.py", line 256, in connect
[stderr] self.connection = self.get_new_connection(conn_params)
[stderr] ~~~~~~~~~~~~~~~~~~~~~~~^^^^^^^^^^^^^
[stderr] File "/home/ec2-user/.local/share/virtualenvs/app-UPB06Em1/lib/python3.13/site-packages/django/utils/asyncio.py", line 26, in inner
[stderr] return func(*args, **kwargs)
[stderr] File "/home/ec2-user/.local/share/virtualenvs/app-UPB06Em1/lib/python3.13/site-packages/django/db/backends/postgresql/base.py", line 332, in get_new_connection
[stderr] connection = self.Database.connect(**conn_params)
[stderr] File "/home/ec2-user/.local/share/virtualenvs/app-UPB06Em1/lib64/python3.13/site-packages/psycopg2/__init__.py", line 122, in connect
[stderr] conn = _connect(dsn, connection_factory=connection_factory, **kwasync)
[stderr]django.db.utils.OperationalError: connection to server on socket "/var/run/postgresql/.s.PGSQL.5432" failed: No such file or directory
[stderr] Is the server running locally and accepting connections on that socket?
[stderr]
[stderr]2025-10-19 04:29:22,032 INFO Starting server at tcp:port=8000:interface=127.0.0.1
[stderr]2025-10-19 04:29:22,032 INFO HTTP/2 support not enabled (install the http2 and tls Twisted extras)
[stderr]2025-10-19 04:29:22,033 INFO Configuring endpoint tcp:port=8000:interface=127.0.0.1
[stderr]2025-10-19 04:29:22,033 INFO Listening on TCP address 127.0.0.1:8000
settings.py: ... DATABASES = { "default": { … -
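The "/var/run/postgresql/.s.PGSQL.5432" socket in the error above means psycopg2 never received a host, so it fell back to a local Unix socket; the usual culprit is that the environment variables are not visible to the process that actually runs Django (for example, a service unit started outside the interactive shell where they were exported). A sketch of the relevant settings, with placeholder variable names:

    # settings.py -- a minimal sketch; the environment variable names are placeholders
    import os

    DATABASES = {
        "default": {
            "ENGINE": "django.db.backends.postgresql",
            "NAME": os.environ["DB_NAME"],
            "USER": os.environ["DB_USER"],
            "PASSWORD": os.environ["DB_PASSWORD"],
            # If HOST is empty or missing, psycopg2 falls back to the local Unix
            # socket (/var/run/postgresql/.s.PGSQL.5432), which matches the error above.
            "HOST": os.environ["DB_HOST"],  # the RDS endpoint hostname
            "PORT": os.environ.get("DB_PORT", "5432"),
        }
    }

Using os.environ[...] rather than .get() for the required keys makes the process fail loudly at startup when a variable is missing, instead of silently connecting to the wrong place.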
Celery memory leak in Django — worker memory keeps increasing and not released after tasks complete
I’m using Django + Celery for data crawling tasks, but the memory usage of the Celery worker keeps increasing over time and never goes down after each task is completed. I’m using: celery==5.5.3, Django==5.2.6. Here’s my Celery configuration:

    # ---------- Broker/Backend ----------
    app.conf.broker_url = "sqs://"
    app.conf.result_backend = "rpc://"
    app.conf.task_ignore_result = True
    # ---------- Queue (FIFO) ----------
    QUEUE_NAME = env("AWS_SQS_CELERY_NAME")
    app.conf.task_default_queue = QUEUE_NAME
    app.conf.task_queues = (Queue(QUEUE_NAME),)
    # ---------- SQS transport ----------
    app.conf.broker_transport_options = {
        "region": env.str("AWS_REGION"),
        "predefined_queues": {
            QUEUE_NAME: {
                "url": env.str("AWS_CELERY_SQS_URL"),
                "access_key_id": env.str("AWS_ACCESS_KEY_ID"),
                "secret_access_key": env.str("AWS_SECRET_ACCESS_KEY"),
            },
        },
        # long-poll
        "wait_time_seconds": int(env("SQS_WAIT_TIME_SECONDS", default=10)),
        "polling_interval": float(env("SQS_POLLING_INTERVAL", default=0)),
        "visibility_timeout": int(env("SQS_VISIBILITY_TIMEOUT", default=900)),
        "create_missing_queues": False,  # do not create queue automatically
    }
    # ---------- Worker behavior ----------
    app.conf.worker_prefetch_multiplier = 1  # process one job at a time
    app.conf.task_acks_late = True  # ack after task completion
    app.conf.task_time_limit = int(env("CELERY_HARD_TIME_LIMIT", default=900))
    app.conf.task_soft_time_limit = int(env("CELERY_SOFT_TIME_LIMIT", default=600))
    app.conf.worker_send_task_events = False
    app.conf.task_send_sent_event = False
    app.autodiscover_tasks()

Problem: After each crawling task completes, the worker memory does not drop back — it only increases gradually. Restarting the Celery worker releases memory, so I believe it’s a leak or a cleanup issue. What I’ve tried: set task_ignore_result=True; add option --max-tasks-per-child=200. -
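Two worker-level mitigations commonly used for crawler workloads, shown here as the config equivalents of the CLI flag already tried (a sketch; the numbers are arbitrary and the default prefork pool is assumed):

    # Recycle worker child processes so leaked memory is actually returned to the OS.
    app.conf.worker_max_tasks_per_child = 50        # same effect as --max-tasks-per-child
    app.conf.worker_max_memory_per_child = 512_000  # in KiB (~512 MB); child restarts above this

With --max-tasks-per-child=200 a child only restarts after its 200th task, so memory still climbs for quite a while between restarts; a lower value or the memory cap keeps the sawtooth short. If the growth tracks specific libraries (parser trees, requests sessions, dataframes held in module-level caches), clearing or scoping those inside the task matters more than any Celery setting.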
Error during Django Deployment: Cannot import 'setuptools.build_meta'
I have encountered this error while deploying my Django project on Render. The error is as follows: pip._vendor.pyproject_hooks._impl.BackendUnavailable: Cannot import 'setuptools.build_meta' I have upgraded setuptools and even reinstalled the package, but I still can't figure out what the issue is. PS: I am using Django 5.2.7 and Python 3.11 -
Mystery lag in completing Django POST request after form.save() completes
One of the forms in my Django application takes a long time to submit, save, and redirect. I'm using a variant of cProfile to measure the time spent on form.clean() and form.save(). It takes <0.1 seconds to run form.clean(), then ~10 seconds to run form.save(), then calls get_success_url(), and then hangs for another ~30 seconds before loading the success_url page. The success_url page should take negligible time to render. What's eating up the mystery 25 seconds? -
Certain django-select2 fields won't render when multiple forms are used on single template
I have an issue rendering some select2 fields: When multiple forms with same select2 widget are used within a single template some select2 fields are not rendered as expected (rendered just as normal select field). If I re-initialize one of the fields using JQuery it renders as expected ($("element").djangoSelect2()). CSS/JS (the one rendered with {{ form.media }} property) renders as well and can be seen in browser's dev mode. What might be the issue rendering django-select2 fields and do I have to render media imports for each form or just one of them is enough as they looks identical? Simplified forms example: FORM 1: <form action=""> <div class="form-meta"> {{ form.media }} </div> <div class="field p-2 d-flex flex-column flex-fill rounded"> {{ form.recipient.label_tag }} {{ form.recipient }} {% if form.recipient.errors %} <div class="error">{{ form.recipient.errors }}</div> {% endif %} </div> </form> FORM 2 <form action=""> <div class="d-flex flex-column gap-2"> <!-- Recipient field --> <div class="form-meta"> {{ shipment_form.media }} </div> <div class="d-flex gap-2 p-3 border"> <i class="align-self-center fa-regular fa-user fa-2x"></i> <div class="vr"></div> <div class="d-flex gap-2 flex-fill"> <div id="recipientFieldContainer" class="flex-fill"> {{ shipment_form.recipient }} </div> <button class="btn btn-dark rounded-0" type="button" data-bs-target="#newRecipientModal" data-bs-toggle="modal">+</button> </div> </div> <!-- Comment field --> {{ shipment_form.comment }} </div> </form> -
How can I make a Wagtail/Django application serve assets through a CloudFront URL and not a direct S3 URL?
Previously, my S3 bucket's objects were directly accessible using the bucket's URL. I have recently taken measures to restrict this sort of access by creating a CloudFront domain, with an S3 origin, with the idea of only allowing reads to the bucket through CloudFront. Now, my bucket only has the default OAC CloudFront policy attached to it and the access is working as expected. If you were to navigate directly to an S3 key, for example, https:// bucket-name.s3.amazon.aws/images/image.png you'd get a 403, but if you were to go to https:// cloudfront-cname.com/images/image.png you'd see the image. However, the Wagtail application is still trying to serve assets using the direct S3 URL, meaning it gets a 403 when trying to display images etc. The app is being hosted using ECS + EC2. I've done some reading/research and saw some suggestions suggesting adding a CLOUDFRONT_DOMAIN variable to base.py which would then have to be added as an env var to the ECS task definition, and then set AWS_S3_CUSTOM_DOMAIN to CLOUDFRONT_DOMAIN if CLOUDFRONT_DOMAIN is not null. I also saw some suggestions to set MEDIA_URL to f"https://{CLOUDFRONT_DOMAIN}/": CLOUDFRONT_DOMAIN = os.getenv("CLOUDFRONT_DOMAIN") if CLOUDFRONT_DOMAIN: AWS_S3_CUSTOM_DOMAIN = CLOUDFRONT_DOMAIN MEDIA_URL = f"https://{AWS_S3_CUSTOM_DOMAIN}/" I have done the above and redeployed … -
Custom Permissions in django-ninja which needs to use existing db objects
I am using django-ninja and django-ninja-extra for an api. Currently I have some Schema like so from ninja import schema class SchemaA(Schema) fruit_id: int other_data: str and a controller like so class HasFruitAccess(permissions.BasePermission): def has_permission(self, request: HttpRequest, controller: ControllerBase): controller.context.compute_route_parameters() data = controller.context.kwargs.get('data') fruit = Fruit.objects.get(pk=data.fruit_id) if fruit.user.pk == request.user.pk: return True return False @api_controller("/fruits", permissions=[IsAuthenticated]) class FruitController(ControllerBase): """Controller class for test runs.""" @route.post("/", auth=JWTAuth(), response=str, permissions=[HasFruitAccess()]) def do_fruity_labour(self, data: SchemaA) -> str: #Check fruit exists. fruit = get_object_or_404(Fruit, data.fruit_id) #do work return "abc" And a model like class Fruit(models.Model): user = models.ForeignKey(User) ... What I wanted to do here was check the user is related to the fruit and then we authorize them to do whatever on this object. Is this a good idea, is this best practice or is it better to just validate in the api route itself? Because permissions will obviously run before we check if fruit is even a valid object in the db so I might be trying to "authorize" a user with invalid data. How can one go about authorizing users for a specific api route which relies on db models through permissions (I would prefer it if I could use permissions since … -
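One option for the question above (a sketch, not the only reasonable practice): resolve the object inside the permission with get_object_or_404, so an unknown fruit_id produces a 404 before any ownership check and the permission never reasons about invalid data. This reuses the compute_route_parameters()/context.kwargs pattern from the snippet above; the models import path is a placeholder:

    from django.http import HttpRequest
    from django.shortcuts import get_object_or_404
    from ninja_extra import ControllerBase, permissions

    from myapp.models import Fruit  # placeholder import path

    class HasFruitAccess(permissions.BasePermission):
        def has_permission(self, request: HttpRequest, controller: ControllerBase) -> bool:
            controller.context.compute_route_parameters()
            data = controller.context.kwargs.get("data")
            # 404 for ids that don't exist, so we never "authorize" against invalid data
            fruit = get_object_or_404(Fruit, pk=data.fruit_id)
            request.fruit = fruit  # stash it so the route doesn't have to query again
            return fruit.user_id == request.user.pk

Whether this lives in a permission or in the route body is largely a style choice: keeping it in the permission leaves the route focused on the actual work, at the cost of a second lookup unless the object is cached on the request as above.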
How can I share my PostgreSQL changes with teammates after git pull in a Django project?
I'm working on the backend of a web application using Django. Each developer has a local setup of the project, and we all pull updates from GitHub using git pull. I have a question about database changes: whenever I make changes to the PostgreSQL database (for example, updating the schema or adding new tables or data), is there a way for my teammates to automatically get those changes after they run git pull? Or should we use SQLite for now and only deploy on PostgreSQL once the project is done? On my own projects I have used SQLite, but I don't know whether that is a good idea for this one. -
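Schema changes travel through Django migrations rather than through the database itself: run makemigrations, commit the generated files, and after git pull each teammate runs python manage.py migrate against their own local database. A hypothetical example of what such a committed file looks like (the app, model and field names are made up for illustration):

    # myapp/migrations/0002_customer_phone.py -- generated by `makemigrations`
    from django.db import migrations, models

    class Migration(migrations.Migration):
        dependencies = [("myapp", "0001_initial")]
        operations = [
            migrations.AddField(
                model_name="customer",
                name="phone",
                field=models.CharField(blank=True, max_length=20),
            ),
        ]

Data (as opposed to schema) can be shared the same way via data migrations or fixtures. Because migrations are database-agnostic, developing on SQLite while deploying on PostgreSQL does work, but running PostgreSQL locally avoids surprises from behavioral differences between the two.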
Velvet: Static Files are searched by path string in urls
This issue refers to Velvet - Django Bootstrap 5 Premium Admin & Dashboard Template If you add a new URL urlpatterns = [ path( "dashboard/", views.velvet_dashboard, name="dashboard" ), # this finds Not Found: /dashboard/static/assets/libs/bootstrap/css/bootstrap.rtl.min.css path( "", views.velvet_dashboard, name="dashboard" ), # this finds Not Found: /static/assets/libs/bootstrap/css/bootstrap.rtl.min.css ] You can test it if you choose the Directions: RTL. logs for http://localhost:8000/: INFO 2025-10-16 08:24:43,331 basehttp 901281 124358292928064 "GET /static/assets/libs/bootstrap/css/bootstrap.rtl.min.css HTTP/1.1" 200 232911 logs for http://localhost:8000/dashboard/: WARNING 2025-10-16 08:34:37,514 log 902757 140627449730624 Not Found: /dashboard/static/assets/libs/bootstrap/css/bootstrap.rtl.min.css WARNING 2025-10-16 08:34:37,515 basehttp 902757 140627449730624 "GET /dashboard/static/assets/libs/bootstrap/css/bootstrap.rtl.min.css HTTP/1.1" 404 429 -
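For reference, the 404 pattern in the logs above (the static path getting prefixed with /dashboard/) is what typically happens when a template links assets with a relative path like "static/assets/...", which the browser resolves against the current page URL. Linking through the static template tag (or at least a root-relative /static/... path) yields the same URL on every page; a sketch:

    {% load static %}
    <link rel="stylesheet" href="{% static 'assets/libs/bootstrap/css/bootstrap.rtl.min.css' %}">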
Which web backend stack is more common for remote Europe jobs (Python/Django or C#/.NET)? [closed]
Which web backend stack is more common for remote jobs in Europe: Python/Django or C#/.NET? I am deciding which one to learn, and I am trying to find the stack with the most remote job openings so that I have a higher chance of getting hired. -
Django: User Experience for checking if a person already exists in the database (first and last name, and birthday)
[I'm sure my use case has been addressed somewhere, but I'm not finding it.] In this use case, the user is registering a new person with the following fields: first name, last name, and birthday. If these are the same as an existing entry, the flow is to ask the user if any of the existing entries are the same. If yes, reject the entry. If no, add the entry. I'm looking for suggestions how to implement this. I know how to check for similar entries. What I'm not sure of is how to ask the "is this a duplicate of any of these other people" question to the user and then process the answer. Here's what I expect the flow is, and I wonder if there's a better way to do this. During clean() on the person entry form, we can query for similar entries (e.g. Jon Smith born 1/1/2000 and John Smith born 1/1/2000) and note there are some (or not). In is_valid(), if there is possible duplication, the system this would redirect to a view to list the possible duplicates and ask the question in a second form. And then if the user says it's a duplicate … -
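One common shape for the flow described above keeps everything in a single form by adding a hidden "already confirmed" flag: the first submission raises a validation error listing the look-alike entries and re-renders the same form with the flag set, and a second submission (the user saying "no, it's a different person") is accepted. A sketch; the Person model name is assumed, and the similarity lookup is the one the asker already has:

    # forms.py -- sketch
    from django import forms
    from .models import Person

    class PersonForm(forms.ModelForm):
        # hidden flag, set when duplicates have already been shown to the user
        confirmed_not_duplicate = forms.BooleanField(required=False, widget=forms.HiddenInput)

        class Meta:
            model = Person
            fields = ["first_name", "last_name", "birthday"]

        def clean(self):
            cleaned = super().clean()
            if cleaned.get("confirmed_not_duplicate"):
                return cleaned  # user already said it's a different person
            first, last, bday = cleaned.get("first_name"), cleaned.get("last_name"), cleaned.get("birthday")
            if not (first and last and bday):
                return cleaned
            similar = Person.objects.filter(  # swap in your own similarity lookup here
                first_name__iexact=first, last_name__iexact=last, birthday=bday
            )
            if similar.exists():
                # make the re-rendered form carry the flag on its next submission
                self.data = self.data.copy()
                self.data["confirmed_not_duplicate"] = "true"
                names = ", ".join(str(p) for p in similar)
                raise forms.ValidationError(
                    "Possible duplicates: %s. Submit again if this really is a different person." % names
                )
            return cleaned

The two-step approach from the question (redirect to a confirmation view with a second form) also works; this variant just avoids carrying the pending data across requests.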
How to prevent database overload with BlacklistedToken and OutstandingToken models in Django using Simple JWT?
I'm working on a Django project using Simple JWT, and I've noticed that every time a user logs in, the generated tokens are stored in the BlacklistedToken and OutstandingToken tables in the database. As more users authenticate and new tokens are generated, these tables continue to grow, which could lead to database overload over time. What I want to achieve is to avoid these tables filling up unnecessarily with tokens, as I don't want to use a cron job to manually clean the tables or manage the tokens. What best practices exist for handling this situation? Is there a way to prevent these tokens from being stored persistently or to have them automatically cleaned up without using cron jobs? I would appreciate any suggestions to improve performance and keep the database optimized. I use PostgreSQL for the database. I have tried setting up a script that runs every month, but the table still fills up too much because there are many concurrent users. -
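Two levers exist in Simple JWT itself: OutstandingToken rows are only written while the rest_framework_simplejwt.token_blacklist app is installed (dropping it stops the persistence entirely, if blacklisting isn't actually needed), and the package ships a flushexpiredtokens management command for pruning expired rows. If cron is off the table, the command can be scheduled from inside the stack, for example as a Celery beat task (a sketch; the task name and schedule are arbitrary):

    # tasks.py -- sketch, assuming Celery is already configured in the project
    from celery import shared_task
    from django.core.management import call_command

    @shared_task
    def flush_expired_tokens():
        # removes OutstandingToken/BlacklistedToken rows whose tokens have expired
        call_command("flushexpiredtokens")

    # celery.py / settings -- run it daily instead of monthly so the tables stay small
    # app.conf.beat_schedule = {
    #     "flush-expired-tokens": {
    #         "task": "myproject.tasks.flush_expired_tokens",
    #         "schedule": 24 * 60 * 60,
    #     },
    # }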
Where should I put Django pre-flight checks that access the database?
I have a Django app that requires certain items to be in the database before it will run. I'd like to add checks at start time that will fail if these items are not found in the database. Is there a way to integrate these checks into the Django system check framework? I've played with this but I'm not sure if this is appropriate where the check makes database queries. And if this isn't the right place, is there a better way? I only need these checks to run once, before or at startup time. -
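The check framework does support this: a check registered with Tags.database receives a databases argument and is only executed when a command requests database checks (migrate does this; for an explicit pre-flight you can run manage.py check --database default at startup). A sketch with a hypothetical model and check id:

    # checks.py in one of your apps (make sure it gets imported, e.g. from AppConfig.ready())
    from django.core.checks import Error, Tags, register

    @register(Tags.database)
    def required_rows_check(app_configs, databases=None, **kwargs):
        errors = []
        if not databases:  # no database access requested for this run
            return errors
        from myapp.models import RequiredSetting  # hypothetical model, imported lazily
        if not RequiredSetting.objects.filter(key="bootstrap_done").exists():
            errors.append(
                Error(
                    "Required RequiredSetting row 'bootstrap_done' is missing.",
                    hint="Load the initial fixtures before starting the app.",
                    id="myapp.E001",
                )
            )
        return errors

The alternative of querying directly in AppConfig.ready() also works, but it runs on every management command (including ones that shouldn't need the database), which is exactly what the database-tagged check avoids.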
How to reset password in a Django UserChangeForm
I have a basic CustomUser model in my project. When I want to update it, I fill my form with an instance in which I try to make the user's password null, but in the form I still receive: "No password set. Set password Raw passwords are not stored, so there is no way to see the user’s password." And "Set password" is a link that leads nowhere. I just want the password field to be null in my update form. views.py ... class CustomUserUpdateView(View): def get(self, request, *args, **kwargs): user_id = kwargs["user_id"] if request.user.is_authenticated: user = get_object_or_404(CustomUser, id=user_id) user.password = None if request.user == user: form = CustomUserChangeForm(instance=user) return render( request, "users/update.html", {"form": form, "user_id": user_id} ) else: return HttpResponse("you damn wrong") else: return HttpResponse("you idiot") def post(self, request, *args, **kwargs): user_id = kwargs["user_id"] user = get_object_or_404(CustomUser, id=user_id) form = CustomUserChangeForm(request.POST, instance=user) if form.is_valid(): form.save() return redirect("users_list") return render( request, "users/update.html", {"form": form, "user_id": user_id} ) ... models.py from django.contrib.auth.models import AbstractUser # Create your models here. class CustomUser(AbstractUser): pass def __str__(self): return self.username forms.py ... class CustomUserChangeForm(UserChangeForm): class Meta: model = CustomUser fields = ('first_name', 'last_name', 'username', 'password') update.html {% extends "base.html" %} {% block content %} {% if … -
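The text being shown isn't a validation error: UserChangeForm deliberately replaces the password field with a read-only hash display whose help text links to the admin's password-change page, which doesn't exist in a custom view, and setting user.password = None doesn't change that. If the update form shouldn't touch passwords at all, the usual approach is to remove the field entirely (a sketch):

    # forms.py -- sketch
    from django.contrib.auth.forms import UserChangeForm
    from .models import CustomUser

    class CustomUserChangeForm(UserChangeForm):
        password = None  # removes the inherited read-only password field entirely

        class Meta:
            model = CustomUser
            fields = ("first_name", "last_name", "username")

Password changes then go through a separate view built on PasswordChangeForm or SetPasswordForm rather than through this update form.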
More than basic form validation with Django
I'm learning Django with a small app allowing people to book houses for lodging. I have two models, describing a house and a booking (I'm currently working without a "Customer" model): # models.py from django.db import models class Housing(models.Model): name = models.CharField(max_length=255) capacity = models.SmallIntegerField() class Booking(models.Model): house = models.ForeignKey(Housing, on_delete=models.CASCADE) arrival_date = models.DateField(auto_now_add=False) departure_date = models.DateField(auto_now_add=False) client_id = models.TextField() nb_travellers = models.SmallIntegerField() I also have a ModelForm matching the Booking model, in which a customer can book a house: # forms.py from django.forms import ModelForm from .models import Booking class BookingForm(ModelForm): """Form to make a booking""" class Meta: model = Booking fields = "__all__" In my view, I retrieve the form data, and I'd like to add some validation before adding the new booking instance to the database: arrival_date must be before departure_date; the number of travellers must not be higher than the house capacity; there must not be an existing booking in the database which has overlapping dates with the new booking. I already have code to compute these checks; it works in additional testing scripts I made, but I am struggling to integrate it properly in the view. Should I look deeper into the Django forms validation documentation? … -
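This kind of cross-field and cross-record validation usually lives on the form itself rather than in the view: a clean() override on the ModelForm, roughly like the sketch below (reusing the models quoted above):

    # forms.py -- sketch
    from django import forms
    from django.forms import ModelForm
    from .models import Booking

    class BookingForm(ModelForm):
        """Form to make a booking"""

        class Meta:
            model = Booking
            fields = "__all__"

        def clean(self):
            cleaned = super().clean()
            house = cleaned.get("house")
            arrival = cleaned.get("arrival_date")
            departure = cleaned.get("departure_date")
            travellers = cleaned.get("nb_travellers")

            if arrival and departure and arrival >= departure:
                self.add_error("departure_date", "Departure must be after arrival.")
            if house and travellers and travellers > house.capacity:
                self.add_error("nb_travellers", "This house only sleeps %d." % house.capacity)
            if house and arrival and departure:
                overlapping = Booking.objects.filter(
                    house=house,
                    arrival_date__lt=departure,
                    departure_date__gt=arrival,
                )
                if overlapping.exists():
                    raise forms.ValidationError("These dates overlap an existing booking.")
            return cleaned

The view then only needs the standard if form.is_valid(): form.save() flow; is_valid() runs clean() and re-renders the form with the error messages attached to the right fields.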
Changing the owner of .venv created by uv inside Docker
I have a Django app built with uv running inside Docker. I mount the local filesystem as a volume in the container using Docker Compose so that edits to the source code locally trigger reloading of the app in the container. It almost works. The issue is that the .venv directory built by uv is owned by the root user of the Docker container. This means that I cannot edit those files from my local filesystem without root access. I have gotten around this with pip/pipenv/poetry/pdm in the past by installing the venv as a non-root user who has the same uid and gid as my local user (those values are passed into Docker via a .env file). But I can't work out how to do that for uv. Dockerfile: FROM python:3.12-slim-trixie # create non-root user RUN addgroup --system app && adduser --system --group app # set work directory WORKDIR /app # environment variables ENV PYTHONDONTWRITEBYTECODE=1 \ PYTHONUNBUFFERED=1 \ UV_LINK_MODE=copy \ UV_PYTHON_DOWNLOADS=never \ UV_PROJECT_ENVIRONMENT=$APP_HOME/.venv # install uv COPY --from=ghcr.io/astral-sh/uv:latest /uv /uvx /bin/ # install system dependencies RUN apt-get update RUN apt-get install -y --no-install-recommends \ build-essential netcat-traditional \ python-is-python3 python3-gdal python3-psycopg2 # switch to app user [THIS MAKES THE NEXT … -
Error: TypeError: 'coroutine' object is not subscriptable
Code: total_deposit = transaction.filter(type='deposit').aaggregate(total=Sum('amount'))['total'] or 0 total_transfer = transaction.filter(type='transfer').aaggregate(total=Sum('amount'))['total'] or 0 The traceback points at the first of these lines (underlining the whole expression) and raises TypeError: 'coroutine' object is not subscriptable. How do I fix this? -
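aaggregate() is the asynchronous counterpart of aggregate(): it returns a coroutine, so subscripting the result with ['total'] fails until the coroutine is awaited. A sketch of both options, reusing the queryset from the question:

    from django.db.models import Sum

    # inside an async view or other async function:
    deposits = await transaction.filter(type="deposit").aaggregate(total=Sum("amount"))
    total_deposit = deposits["total"] or 0

    # in ordinary synchronous code, use aggregate() instead:
    total_deposit = transaction.filter(type="deposit").aggregate(total=Sum("amount"))["total"] or 0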
Django email not sending — no error, but messages don’t arrive (using Gmail SMTP)
I’m trying to send emails from my Django project using Gmail’s SMTP server. The server runs without any errors, and my code executes successfully, but the emails never reach the recipient — not even in the spam folder. I’ve enabled 2-Step Verification on my Gmail account and generated an App Password specifically for this project, but it still doesn’t work. I want to understand why Django thinks the email was sent but it never actually arrives. EMAIL_BACKEND = 'django.core.mail.backends.smtp.EmailBackend' EMAIL_HOST = 'smtp.gmail.com' EMAIL_PORT = 465 EMAIL_USE_SSL = True EMAIL_HOST_USER = 'mygmail@gmail.com' EMAIL_HOST_PASSWORD = 'my-16-digit-app-password' DEFAULT_FROM_EMAIL = EMAIL_HOST_USER The expected result was that the recipient receives the email, but the actual result is that the email never arrives and no errors are raised. -
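A quick way to see whether Gmail is actually accepting the message is to send one from manage.py shell with error silencing off and to check the return value. If this prints 1 with no exception, SMTP delivery succeeded and the problem is on the receiving side (spam filtering, a wrong recipient address, Gmail rewriting the From header) rather than in Django. The recipient address below is a placeholder:

    from django.core.mail import send_mail

    sent = send_mail(
        subject="SMTP test",
        message="Test message from Django",
        from_email=None,                         # falls back to DEFAULT_FROM_EMAIL
        recipient_list=["someone@example.com"],  # placeholder address
        fail_silently=False,                     # raise instead of hiding SMTP errors
    )
    print("messages accepted by the SMTP server:", sent)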
django getting 530, 5.7.0 Authentication Required despite using Google's App Passwords
I have 2FA enabled on Google, created the app password, and put the correct email and app password in settings.py, yet I still get the authentication error. I tried both 587 (TLS=True) and 465 (SSL=True), but it didn't seem to change anything. settings.py: EMAIL_BACKEND = 'django.core.mail.backends.smtp.EmailBackend' EMAIL_HOST = 'smtp.gmail.com' EMAIL_PORT = 465 EMAIL_USE_TLS = False EMAIL_USE_SSL = True EMAIL_HOST_USER = 'mygmail@gmail.com' EMAIL_PASSWORD = "my16digitpassword" DEFAULT_FROM_EMAIL = 'mygmail@gmail.com' What might be the problem/solution? Every answer I find for this problem just says "use a Google App Password". -
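One thing worth checking in the settings above: Django's SMTP backend reads EMAIL_HOST_PASSWORD, not EMAIL_PASSWORD, so with the snippet as shown the login is attempted with an empty password, which Gmail answers with 530 5.7.0 Authentication Required. A sketch of the corrected block (the app password should also be entered without spaces):

    EMAIL_BACKEND = 'django.core.mail.backends.smtp.EmailBackend'
    EMAIL_HOST = 'smtp.gmail.com'
    EMAIL_PORT = 465
    EMAIL_USE_TLS = False
    EMAIL_USE_SSL = True
    EMAIL_HOST_USER = 'mygmail@gmail.com'
    EMAIL_HOST_PASSWORD = 'my16digitpassword'  # note: EMAIL_HOST_PASSWORD, not EMAIL_PASSWORD
    DEFAULT_FROM_EMAIL = 'mygmail@gmail.com'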
How can I properly implement ManifestStaticFilesStorage in Django?
I'm attempting to implement ManifestStaticFilesStorage in my Django project. From what I've seen, this should be simple, but it's not behaving in the way I expect. Firstly, I have DEBUG=os.getenv("DEBUG", "False").lower() == "true" In my settings.py file, with DEBUG in my .env file set to "False". Next, I have the following settings for my static files: STATICFILES_STORAGE = 'django.contrib.staticfiles.storage.ManifestStaticFilesStorage' MAX_DOCUMENT_FILE_SIZE_MB = 50 STATIC_URL = '/static/' if LOCAL: # LOCAL is False here STATIC_ROOT = os.path.join(BASE_DIR, 'static_collected') else: STATIC_ROOT = os.getenv('STATIC_ROOT') Finally, for my own sanity, I have some print statements at the end of my settings file that output when I run collectstatic, which output: STATICFILES_STORAGE: django.contrib.staticfiles.storage.ManifestStaticFilesStorage STATIC_ROOT: /var/www/html/static STATIC_URL: /static/ I have an nginx server set to serve static files at the above STATIC_ROOT. Finally, in my project's venv, I run python manage.py collectstatic And it copies the files successfully to the output directory I specified. The nginx server correctly serves them. However, after all this, the filenames remain their basic iterations, rather than including a hash as I expect. I'm using Django's {% static %} template in all my template HTML files. I've tried deleting the entire static folder and re-running collectstatic, but it outputs the same thing … -
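One possibility worth ruling out, since it matches the behaviour described: STATICFILES_STORAGE was deprecated in Django 4.2 and removed in Django 5.1, so on a recent Django version the setting shown above is never read and collectstatic falls back to the plain, non-hashed storage. The replacement is the STORAGES setting (a sketch):

    # settings.py -- Django 4.2+ style
    STORAGES = {
        "default": {
            "BACKEND": "django.core.files.storage.FileSystemStorage",
        },
        "staticfiles": {
            "BACKEND": "django.contrib.staticfiles.storage.ManifestStaticFilesStorage",
        },
    }

With that in place, collectstatic should write hashed copies plus a staticfiles.json manifest into STATIC_ROOT, and {% static %} will emit the hashed filenames whenever DEBUG is False.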
How to aggregate a group by queryset in django?
I'm working with time series data which are represented using this model: class Price: timestamp = models.IntegerField() price = models.FloatField() Assuming timestamp has 1 min interval data, this is how I would resample it to 1 hr: queryset = ( Price.objects.annotate(timestamp_agg=Floor(F('timestamp') / 3600)) .values('timestamp_agg') .annotate( timestamp=Min('timestamp'), high=Max('price'), ) .values('timestamp', 'high') .order_by('timestamp') ) which runs the following sql under the hood: select min(timestamp) timestamp, max(price) high from core_price group by floor((timestamp / 3600)) order by timestamp Now I want to calculate a 4 hr moving average, usually calculated in the following way: select *, avg(high) over (order by timestamp rows between 4 preceding and current row) ma from (select min(timestamp) timestamp, max(price) high from core_price group by floor((timestamp / 3600)) order by timestamp) or Window(expression=Avg('price'), frame=RowRange(start=-4, end=0)) How to apply the window aggregation above to the first query? Obviously I can't do something like this since the first query is already an aggregation: >>> queryset.annotate(ma=Window(expression=Avg('high'), frame=RowRange(start=-4, end=0))) django.core.exceptions.FieldError: Cannot compute Avg('high'): 'high' is an aggregate -
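For the question above: the ORM cannot layer a Window over a queryset that is itself a GROUP BY aggregation, so the usual escape hatch is to run the combined statement as a subquery in raw SQL. A sketch that reuses the SQL already written above:

    from django.db import connection

    SQL = """
        SELECT timestamp, high,
               AVG(high) OVER (ORDER BY timestamp
                               ROWS BETWEEN 4 PRECEDING AND CURRENT ROW) AS ma
        FROM (
            SELECT MIN(timestamp) AS timestamp, MAX(price) AS high
            FROM core_price
            GROUP BY FLOOR(timestamp / 3600)
        ) AS hourly
        ORDER BY timestamp
    """

    with connection.cursor() as cursor:
        cursor.execute(SQL)
        rows = cursor.fetchall()  # list of (timestamp, high, ma) tuples

An alternative that stays in the ORM is to materialise the hourly bars first (into a list, or a separate table populated periodically) and compute the moving average in Python or pandas.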
How to search by multiple fields on django_opensearch_dsl
I have an opensearch server in which I want to search items and apply some filters to the search: search = Item.search().query("match", name="test") I need to search items by multiple filters, like name, date, location, etc. For this I will need some other kinds of queries like "range" or "terms". Now the issue is that I've tried using the opensearch-dsl package like this: search_1 = ESQ("match", name="test") search_2 = ESQ("terms", name="location") search_3 = ESQ("range", name="date") filters = [search_1, search_2, search_3] query = ESQ("bool", should=filters) search = FreezerItemDocument.search().query(query) This is not working, constantly returning errors like: {"error":"unhashable type: 'Bool'"} Even if I try to run the query individually like this: query = ESQ("match", name="test") search = FreezerItemDocument.search().query(query) How can I do a search by multiple fields? -
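A pattern that usually sidesteps composing a bool query by hand (a sketch, assuming the standard opensearch-dsl Search API that django_opensearch_dsl documents expose; the field names and values are placeholders): chain .query() and .filter() calls on the Search object, which are combined into a bool query internally:

    search = (
        FreezerItemDocument.search()
        .query("match", name="test")
        .filter("terms", location=["lab-1", "lab-2"])
        .filter("range", date={"gte": "2024-01-01", "lte": "2024-12-31"})
    )
    response = search.execute()
    for hit in response:
        print(hit.name, hit.date, hit.location)

Note also that in the snippet from the question, ESQ("terms", name="location") and ESQ("range", name="date") both target the name field; the field being filtered goes in the keyword argument itself (location=..., date=...), not in its value.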
Why does pytest fail to resolve Related model references in a Django package?
I have an installable Django package that I have built and was starting to write tests for it. I am using pytest-django. However, when I run the tests, almost all the tests fail and I keep getting this error:- request = <SubRequest 'django_db_setup' for <Function test_filter_with_full_name>>, django_test_environment = None django_db_blocker = <pytest_django.plugin.DjangoDbBlocker object at 0x10072ba40>, django_db_use_migrations = False, django_db_keepdb = True django_db_createdb = False, django_db_modify_db_settings = None @pytest.fixture(scope="session") def django_db_setup( request: pytest.FixtureRequest, django_test_environment: None, django_db_blocker: DjangoDbBlocker, django_db_use_migrations: bool, django_db_keepdb: bool, django_db_createdb: bool, django_db_modify_db_settings: None, ) -> Generator[None, None, None]: """Top level fixture to ensure test databases are available""" from django.test.utils import setup_databases, teardown_databases setup_databases_args = {} if not django_db_use_migrations: _disable_migrations() if django_db_keepdb and not django_db_createdb: setup_databases_args["keepdb"] = True aliases, serialized_aliases = _get_databases_for_setup(request.session.items) with django_db_blocker.unblock(): > db_cfg = setup_databases( verbosity=request.config.option.verbose, interactive=False, aliases=aliases, serialized_aliases=serialized_aliases, **setup_databases_args, ) .venv/lib/python3.12/site-packages/pytest_django/fixtures.py:198: _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ … -
How to integrate OpenAI GPT API in Django REST Framework project?
I’m building a Django REST Framework (DRF) project and I want to integrate OpenAI GPT API to provide AI-powered responses to users. I’ve tried setting up the API call using Python’s requests library and also with the official openai Python package, but I’m running into issues with authentication and response handling. Here’s my current code snippet: import openai openai.api_key = "YOUR_API_KEY" response = openai.ChatCompletion.create( model="gpt-3.5-turbo", messages=[ {"role": "system", "content": "You are a helpful assistant."}, {"role": "user", "content": "Hello, can you help me with Django?"} ] ) print(response['choices'][0]['message']['content']) Problem: Sometimes I get an authentication error: "Invalid API key" Other times the API call works but I’m not sure how to integrate it properly into a DRF view so that it returns a JSON response to the frontend. What I want: A clear way to call OpenAI GPT from a Django REST API endpoint Return the GPT response in JSON format to a React frontend What I’ve tried: Adding API key in .env and using os.environ Testing with curl — works sometimes Wrapping the call inside a DRF APIView — but facing serialization issues Any advice or working example would be highly appreciated 🙏 I tried calling the OpenAI GPT API inside …
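A minimal way to wrap the call from the question in a DRF endpoint (a sketch: it reuses the openai.ChatCompletion style from the snippet above, reads the key from the environment, and returns a plain dict as JSON so no serializer is required; error handling is deliberately coarse):

    # views.py -- sketch
    import os

    import openai
    from rest_framework import status
    from rest_framework.response import Response
    from rest_framework.views import APIView

    openai.api_key = os.environ.get("OPENAI_API_KEY")  # set in .env / the process environment

    class ChatView(APIView):
        def post(self, request):
            prompt = request.data.get("prompt", "")
            if not prompt:
                return Response({"error": "prompt is required"}, status=status.HTTP_400_BAD_REQUEST)
            try:
                completion = openai.ChatCompletion.create(
                    model="gpt-3.5-turbo",
                    messages=[
                        {"role": "system", "content": "You are a helpful assistant."},
                        {"role": "user", "content": prompt},
                    ],
                )
            except Exception as exc:  # auth errors, rate limits, network issues
                return Response({"error": str(exc)}, status=status.HTTP_502_BAD_GATEWAY)
            answer = completion["choices"][0]["message"]["content"]
            return Response({"reply": answer})  # plain dict rendered as JSON

    # urls.py: path("api/chat/", ChatView.as_view())

The React frontend then just POSTs {"prompt": "..."} to /api/chat/. Note that openai.ChatCompletion is the pre-1.0 interface of the openai package; on openai>=1.0 the equivalent is OpenAI().chat.completions.create(...). "Invalid API key" errors usually mean the environment variable isn't visible to the process actually serving Django rather than anything DRF-specific.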