Django community: RSS
This page, updated regularly, aggregates Django Q&A from the Django community.
-
How to Log Old and New Data on Model Updates in Django Rest Framework Using Celery for Asynchronous Processing?
I need to log both the old and new values of fields whenever a model is updated in Django Rest Framework (DRF). The logs should be saved in a separate database or an external system like Elasticsearch. The challenge is to capture both values without querying the database again and without affecting performance. I want to handle this logging process asynchronously using Celery.

What I tried:
- Used pre_save and post_save signals to detect updates.
- Tried querying the database to get the old data, but I want to avoid this extra query.
- Integrated Celery to handle the logging asynchronously, but I'm not sure about the best way to capture the old and new values efficiently.

What I need help with:
- How can I capture both old and new data without querying the database?
- How can I use Celery to handle logging asynchronously to avoid performance issues?
- Is there an existing pattern or best practice for logging changes in Django models? -
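A sketch of one common pattern (the names here are illustrative, not DRF API): stash the values loaded from the database on the instance itself (the Django docs show overriding Model.from_db to store a _loaded_values dict), then in pre_save diff them against the instance about to be saved, and hand Celery only a plain serializable dict. The diff itself is ordinary dict work:

```python
def compute_changes(old, new):
    """Return {field: (old_value, new_value)} for fields whose value differs."""
    return {f: (old.get(f), new[f]) for f in new if old.get(f) != new[f]}

# In a pre_save receiver you would build `old` from instance._loaded_values
# (stashed by a Model.from_db override) and `new` from the instance being
# saved, then enqueue only plain data for the Celery task:
#   log_model_change.delay(model="Article", pk=instance.pk,
#                          changes=compute_changes(old, new))
changes = compute_changes(
    {"title": "Draft", "status": "new"},
    {"title": "Final", "status": "new"},
)
print(changes)  # {'title': ('Draft', 'Final')}
```

Because the task receives a dict rather than model instances, it serializes cleanly through the Celery broker and the logging write happens entirely off the request path.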
QuerySet returns empty in Django
I am trying to get the book list with three conditions, based on the user input:
- Accession number (character varying, 10): should be present in the list provided by the user
- Physical location (character varying, 20): should be an exact match
- Book status: should be 'Published'

qobjects = Q()
variable_column = "accession_number"
search_type = 'in'
filter_string = variable_column + '__' + search_type
#Passed accessionNumber as '123','234',456'
#Also tried 123,456,567
#Both did not work
search_string = '[' + accessionNumber + ']'
qcolumn = Q(**{filter_string: search_string})
qobjects.add(qcolumn, Q.AND)
print('print qobjects after adding accession numbers')
print(qobjects)

location_column = "physical_book_location"
search_type = 'iexact'
filter_string = location_column + '__' + search_type
qcolumn_location = Q(**{filter_string: location})
print('print qobjects after adding location')
print(qobjects)
qobjects.add(qcolumn_location, Q.AND)

qcolumn_status = Q(**{'booK_status': 'PUBLISHED'})
qobjects.add(qcolumn_status, Q.AND)
print('print qobjects after adding status')
print(qobjects)

res_set = Book.objects.filter(qobjects).order_by(location_column) \
    .values('id', 'title', 'cover_image_name', 'booK_status',
            'accession_number', 'total_image_count')
print('print result set')
print(res_set) -
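A likely cause of the empty queryset: the `__in` lookup expects a Python iterable of values, but the code above passes a single string built from brackets and quotes, so Django searches for one accession number equal to that entire string. Splitting the raw user input into a real list first fixes the lookup (`parse_accession_numbers` is a hypothetical helper):

```python
def parse_accession_numbers(raw):
    """Turn input like "'123','234','456'" or "123,234,456" into a list of strings."""
    return [token.strip().strip("'\"") for token in raw.split(",") if token.strip()]

numbers = parse_accession_numbers("'123','234','456'")
print(numbers)  # ['123', '234', '456']
# With a real list, the lookup behaves as intended:
#   qobjects &= Q(accession_number__in=numbers)
```

Note the `booK_status` key is also suspicious: the capital K must match the model field name exactly or the filter will raise a FieldError.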
Third Party API Integration with Django (JWT) + Next.js (Redux, RTK Query)
I am creating a NYT News App using Django JWT authentication for registration and login, with Next.js, Redux and RTK Query on the frontend. My full-stack login/registration system works perfectly, and now I want to load the news after logging in. Let's call the NYT API JSON from the backend the raw data. On the frontend I want to filter the raw news by selecting categories: {} and create a popup on the frontend for first-time users. (These preferences are then stored in a DigitalOcean bucket using DigitalOcean Functions.) I am not sure how to start the new API call in the first place, or which tools to use. Any help would be appreciated! -
Setting permissions between edit and view only in wagtail
In my Wagtail project, I have a class that inherits from EditView (ModelAdmin). Within this class, I override the get_edit_handler method to dynamically set fields as read-only based on user permissions. However, I'm encountering an issue with RichTextField fields: the permission settings only take effect after restarting the application. For example, if I set an InlinePanel to read-only, all InlinePanel instances that include RichTextField fields across the model also become read-only, regardless of their individual permission settings.

class JournalEditView(EditView):
    def form_valid(self, form):
        self.object = form.save_all(self.request.user)
        return HttpResponseRedirect(self.get_success_url())

    def get_edit_handler(self):
        """
        Overrides the get_edit_handler method from EditView. It checks whether
        the user has permission to edit each field. If not, the fields for which
        the user lacks the journal.can_edit_{field_name} permission in their
        user_permissions will be set as read-only. This applies to FieldPanel,
        AutocompletePanel, and InlinePanel.
        """
        edit_handler = super().get_edit_handler()
        user_permissions = self.request.user.get_all_permissions()
        if not self.request.user.is_superuser:
            for object_list in edit_handler.children:
                for field in object_list.children:
                    if isinstance(field, FieldPanel) and f"journal.can_edit_{field.field_name}" not in user_permissions:
                        field.read_only = True
                    elif isinstance(field, InlinePanel) and f"journal.can_edit_{field.relation_name}" not in user_permissions:
                        field.classname += ' read-only-inline-panel'
                        for inline_field in field.panel_definitions:
                            inline_field.read_only = True
        return edit_handler


class JournalAdmin(ModelAdmin):
    model = models.Journal
    inspect_view_enabled = True
    menu_label = _("Journals")
    create_view_class = JournalCreateView
    edit_view_class = JournalEditView -
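The "only takes effect after restart" symptom is consistent with mutating shared state: get_edit_handler() typically returns panel objects defined once at class level, so setting read_only = True on them (or appending to classname, which grows on every request) changes the definition for every later request until the process restarts. A common workaround is to deep-copy before mutating; this is a generic sketch of the mechanism, not Wagtail API, so check how your Wagtail version binds its handlers:

```python
import copy

class SharedPanel:
    read_only = False  # class-level definition shared by every request

def get_edit_handler(shared_handler, user_can_edit):
    # Work on a per-request copy so mutations never leak into the shared object.
    handler = copy.deepcopy(shared_handler)
    if not user_can_edit:
        handler.read_only = True
    return handler

shared = SharedPanel()
restricted = get_edit_handler(shared, user_can_edit=False)
print(restricted.read_only, shared.read_only)  # True False
```

Without the copy, the first restricted user would flip the flag on the shared panel and every subsequent user would see it read-only — exactly the cross-contamination described above.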
Django CKEditor5 images not displaying on server
I am using Django 5.0 and django-ckeditor-5 to handle rich text in my admin panel, including image uploading by staff. On my local machine (Win11), everything works fine: I can upload an image and it shows correctly in the admin view and in the site view. However, on my test server (Windows Server 2022, using IIS), when I upload an image it flashes and then displays the default broken image. Attempting to navigate to the image directly gives a 404, but when I take the URL of the image and physically find it on the server, it's there and I can open it in Paint. So the images are uploading to the correct place, and the URL of the image in CKEditor is showing the correct image, but the image does not want to load. Here are my path settings in the root urls.py and the media settings in settings.py:

urls.py:
...
urlpatterns += [
    path("ckeditor5/", include('django_ckeditor_5.urls')),
] + static(settings.MEDIA_URL, document_root=settings.MEDIA_ROOT)

settings.py:
...
STATIC_URL = '/static/'
MEDIA_URL = '/media/'
MEDIA_ROOT = os.path.join(BASE_DIR, 'media/')

The setup is identical for both local and on the server in terms of folder structure, the media folder is … -
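One thing worth checking before digging into IIS (hedged, since the question doesn't show the DEBUG setting): django.conf.urls.static.static() is a development-only helper that returns an empty pattern list unless settings.DEBUG is True. With DEBUG=False on the server, Django never serves /media/, so direct requests 404 even though the files exist on disk — which matches the symptoms exactly:

```
# urls.py -- static() returns [] when settings.DEBUG is False, so this line
# serves nothing on a production server:
urlpatterns += static(settings.MEDIA_URL, document_root=settings.MEDIA_ROOT)

# On IIS, the usual fix is to expose the MEDIA_ROOT folder at /media/ as a
# virtual directory (with static-file handling enabled) so IIS serves the
# uploaded files directly instead of routing the request to Django.
```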
How to set up docker compose with django and pdm
I have a Django project with pdm and docker compose, and I set up the codebase volume to enable Django hot reload and debugging in the container. Building with the compose config works fine, but when I try to run the server with docker compose up -d I hit a Python error as if the libs were not picked up properly. The project has the following architecture:

project/
├── config/
│   ├── settings.py
│   └── urls.py
│   └── ...
├── some_django_app/
│   └── ...
├── compose.yaml
├── Dockerfile
├── README.md
├── pyproject.toml
└── pdm.lock

The compose file is as follows:

services:
  web:
    build:
      dockerfile: Dockerfile
    command: pdm run python manage.py runserver 0.0.0.0:8000
    ports:
      - 8000:8000
    volumes:
      - .:/app
    env_file:
      - .env

My Dockerfile is as follows:

# Use an official Python runtime as a parent image
FROM python:3.13.2-slim-bullseye

# Set environment variables
ENV PYTHONDONTWRITEBYTECODE=1
ENV PYTHONUNBUFFERED=1

# Set the working directory in the container
WORKDIR /app

# Install system dependencies
RUN apt-get update && apt-get install -y \
    build-essential \
    libpq-dev \
    && rm -rf /var/lib/apt/lists/*

# Install PDM
RUN pip install --no-cache-dir pdm

# Copy the project files into the container
COPY . /app

# Accept build argument for … -
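A frequent culprit with this exact layout (an assumption — verify where pdm created the environment inside your image): pdm install puts the virtualenv inside /app, and the bind mount .:/app then hides everything the image built there, so at runtime the libraries are gone. Masking the environment path with an anonymous volume keeps hot reload for the code while preserving the image's installed packages:

```yaml
services:
  web:
    build:
      dockerfile: Dockerfile
    command: pdm run python manage.py runserver 0.0.0.0:8000
    ports:
      - 8000:8000
    volumes:
      - .:/app        # source code, for hot reload
      - /app/.venv    # anonymous volume: don't let the bind mount hide the venv
    env_file:
      - .env
```

Adjust `/app/.venv` to wherever pdm actually placed the environment in your image (for PEP 582 setups it may be `/app/__pypackages__` instead); `docker compose run web ls -a /app` from a build without the bind mount will show you.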
Django: Saving automated JSON POST from software (no user form!) in several tables (models)
I need to save POST data into several tables/models across two apps (one app exists only to receive the POST), depending on the POST content. One model, ActionFeed, has a JSON field and stores all input; the others are populated conditionally, based on various values in the JSON POST. The input data looks like:

{
  "timestamp": "2025-04-29T07:15:12Z",
  "event": "MissionAccepted",
  ...
}

Schematically:

json_data = json.loads(post_input)
save2ActionFeed(post_input)
if event == "MissionAccepted":
    # get needed fields from the JSON data
    save2Missions(field1, ..., fieldN)
if event == "FSDJump":
    # get needed fields from the JSON data
    save2Location(field1, ..., fieldM)
# etc.

I need a sample of the save2... methods. Please remember: there are no forms, no UI, etc. — only a plain POST. -
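One way to organize the save2... dispatch (a sketch — the save helpers here are stand-ins that record tuples; in the real project each would be a Model.objects.create(...) call, and handle_post would live inside a @csrf_exempt view that reads request.body and returns an HttpResponse):

```python
import json

saved = []  # stands in for the database writes

def save_to_action_feed(data):
    # hypothetical: ActionFeed.objects.create(payload=data)
    saved.append(("ActionFeed", data["event"]))

def save_to_missions(data):
    # hypothetical: pull the mission fields out of `data` and save them
    saved.append(("Mission", data["timestamp"]))

def save_to_location(data):
    saved.append(("Location", data["timestamp"]))

# one handler per event value; adding a new event is one new dict entry
EVENT_HANDLERS = {
    "MissionAccepted": save_to_missions,
    "FSDJump": save_to_location,
}

def handle_post(raw_body):
    data = json.loads(raw_body)
    save_to_action_feed(data)                    # every event lands in the feed
    handler = EVENT_HANDLERS.get(data.get("event"))
    if handler:                                  # the others only on a match
        handler(data)

handle_post('{"timestamp": "2025-04-29T07:15:12Z", "event": "MissionAccepted"}')
print(saved)  # [('ActionFeed', 'MissionAccepted'), ('Mission', '2025-04-29T07:15:12Z')]
```

The dict-of-handlers replaces the growing if/elif chain, and keeps each event's field extraction in its own function.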
Django [PostgreSQL]: restore a single field/column
I have a model with the field creation_date = models.DateTimeField(auto_now_add=True). While doing some complex manipulations in a migration file (converting the model to multi-table inheritance and creating new objects), I accidentally overwrote the field above. I have backups, but I need to restore only the single creation_date column, as some other fields in the table no longer align with the current state. I guess one way is to dump (id, creation_date) to a JSON file and update the corresponding objects in a for loop. Is there a way to pull data from a gzip backup and save it to .json? Or an easier way to go? -
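One low-risk route that skips the JSON round-trip entirely (a sketch; the table and column names below are assumptions for an app called app with model MyModel): restore the gzipped dump into a scratch database, then copy just the one column across with an UPDATE joined on the primary key:

```sql
-- 1. Restore the backup into a scratch database first:
--      gunzip -c backup.sql.gz | psql scratch_db
-- 2. Get the old (id, creation_date) pairs into the live database, e.g. by
--    dumping them to CSV from scratch_db and loading them into a temp table
--    (or by querying scratch_db through postgres_fdw/dblink).
-- 3. Copy only creation_date back, joined on the primary key:
UPDATE app_mymodel AS live
SET creation_date = backup.creation_date
FROM app_mymodel_backup AS backup
WHERE live.id = backup.id;
```

This leaves every other column untouched, which is exactly the constraint described above; the JSON-plus-loop approach works too, but a single set-based UPDATE is both faster and easier to verify with a SELECT beforehand.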
How to debug DB errors in flaky frontend tests with Django, HTMX and Playwright
I'm running a larger Django app that has a test suite that runs on both Debian Bookworm and Trixie. The tests use the test_server fixture and SQLite. We're currently experimenting with HTMX-based views, which we test using Playwright. A few weeks ago the test covering our new HTMX view started occasionally failing, but only ever on Trixie. The only relevant packages provided by the Debian Trixie host are SQLite 3.46 and Python 3.13; everything else comes in via pip and pyproject.toml in the same versions as on Bookworm. The test does not fail reliably, and when it does fail, it doesn't always fail with the same error. The only commonality is that it always fails while trying to get the current session, in order to authenticate the request user (the tested view has an auth decorator). The error messages include:

- django.contrib.auth.models.User.MultipleObjectsReturned: get() returned more than one User -- it returned 2!
- django/db/models/sql/compiler.py stepping out with "list index out of range" when trying to load the session cache
- django.db.utils.InterfaceError: bad parameter or other API misuse when trying to get the session cache

They mostly end up with AttributeError: 'SessionStore' object has no attribute '_session_cache', but there can be different issues … -
Django custom user model stores password in separate table — request.user causes "column users.password does not exist" error
I have created a custom user in Django (class User(AbstractBaseUser, PermissionsMixin)) which stores the password separately in a user_credential table. When I access request.user I get an error like:

ProgrammingError at /api/planer/user/
column users.password does not exist
LINE 1: SELECT "users"."id", "users"."password",

What I have tried:
- Ensured my custom authentication backend returns the correct user
- Confirmed AUTHENTICATION_BACKENDS is set properly
- Verified the User model has no password field -
Best way to design database tables for classified ads
I want to post classified ads. In the context of designing the database, is it better to put fields like body_type (which is never really going to change) or color into a separate table, or keep them as part of the overall model? -
Django + DigitalOcean Managed PostgreSQL: “remaining connection slots are reserved for roles with the SUPERUSER attribute” when using connection pooling
I’m trying to configure Django to use a connection pool against a DigitalOcean Managed PostgreSQL instance, but keep running into this error:

OperationalError: connection failed: connection to server at "xxxxxxxxxxxx", port yyyy failed: FATAL: remaining connection slots are reserved for roles with the SUPERUSER attribute

My DATABASES setting looks like this (Django 5.2+):

DATABASES = {
    'default': {
        'ENGINE': 'django.db.backends.postgresql',
        'NAME': database_name,
        'USER': database_user,
        'PASSWORD': database_password,
        'HOST': database_host,
        'PORT': database_port,
        "OPTIONS": {
            'sslmode': sslmode,
            'pool': {
                'min_size': 5,
                'max_size': 150,
                'timeout': 20,
            },
        }
    }
}

Is this the correct way to enable pooling via Django’s built-in OPTIONS['pool'] settings? -
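The OPTIONS['pool'] dict is the built-in way to enable psycopg 3 pooling (available since Django 5.1), so the syntax itself is fine; the error is more likely capacity math. max_size applies per process, and DigitalOcean managed clusters cap total connections by plan and reserve a few slots for maintenance/superuser roles — so a pool allowed to grow to 150 exhausts the ordinary slots and later attempts hit the FATAL above. A hedged adjustment (exact ceilings depend on your plan; DO's pgbouncer-based connection-pool endpoint is an alternative, though a server-side pooler and a client-side pool shouldn't normally be stacked):

```
DATABASES = {
    "default": {
        "ENGINE": "django.db.backends.postgresql",
        # NAME / USER / PASSWORD / HOST / PORT as before ...
        "OPTIONS": {
            "sslmode": sslmode,
            "pool": {
                "min_size": 2,
                # keep max_size * number_of_worker_processes comfortably below
                # the cluster's connection limit minus DO's reserved slots
                "max_size": 10,
                "timeout": 20,
            },
        },
    }
}
```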
Why am I getting "ERR_CONNECTION_REFUSED" when querying by sidecar-hosted backend in Azure App Service?
I'm trying to deploy a fairly basic web app in Azure App Service. I have a backend image (running Django + GraphQL) and a frontend image (running Vue + Apollo), and when I run them locally, either separately or via docker-compose, it works with no issues. Once I deploy the app in Azure App Service, with the frontend as the main container and the backend as the sidecar, I get "ERR_CONNECTION_REFUSED" when Apollo queries the GraphQL API in the backend.

Apollo config:

import { ApolloClient, InMemoryCache } from "@apollo/client/core";
import { createApolloProvider } from "@vue/apollo-option";
import createUploadLink from "apollo-upload-client/createUploadLink.mjs";

const httpLink = createUploadLink({
  uri: "http://localhost:8080/graphql",
});

// Cache implementation
const cache = new InMemoryCache();

// Create the apollo client
export const apolloClient = new ApolloClient({
  link: httpLink,
  cache,
});

export const ApolloProvider = createApolloProvider({
  defaultClient: apolloClient,
});

Docker compose:

services:
  backend:
    container_name: backend
    build:
      context: ./backend
    ports:
      - "8080:8080"
  frontend:
    container_name: vue_frontend
    build:
      context: ./vue
    ports:
      - "80:80"

Frontend Dockerfile:

FROM node:22.14-alpine
RUN npm install -g http-server
WORKDIR /vue
COPY package*.json ./
COPY .npmrc .npmrc
RUN --mount=type=secret,id=npmrc,target=.npmrc npm install
COPY . .
EXPOSE 80 8080
CMD ["npm", "run", "dev", "--", "--port", "80", "--host"]

Backend Dockerfile:

FROM python:3.10-slim
RUN mkdir /backend
WORKDIR /backend
ENV PYTHONDONTWRITEBYTECODE=1 … -
How can I properly test swagger_auto_schema for methods, request_body, and responses in drf-yasg with pytest?
I’m working on testing a Django REST Framework (DRF) CartViewSet using pytest, and I need to verify the swagger_auto_schema properties like the HTTP method, request body, and responses for different actions (e.g., add, remove, clear). I have the following code in my CartViewSet:

class CartViewSet(GenericViewSet, RetrieveModelMixin, ListModelMixin):
    # Other viewset code...

    @swagger_auto_schema(
        method="post",
        request_body=AddToCartSerializer,
        responses={
            201: openapi.Response(description="Item added successfully."),
            400: openapi.Response(description="Invalid input data"),
        },
    )
    @action(detail=False, methods=["post"], url_path="add")
    def add(self, request):
        # Logic for adding an item to the cart
        pass

Now, I want to write a pytest unit test to check the following for the add method:

- HTTP method: ensure the swagger_auto_schema method is POST.
- Request body: ensure the correct serializer (AddToCartSerializer) is set for the request body.
- Responses: verify that the response status codes (201 and 400) and their descriptions are properly set.

Could someone guide me on how to properly test the swagger_auto_schema properties for method, request body, and responses in pytest? Any help or insights would be greatly appreciated! -
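drf-yasg keeps the decorator's overrides on the decorated view method itself (in recent versions as a `_swagger_auto_schema` attribute — verify the attribute name against your installed version before relying on it), so a unit test can introspect the method directly without running the schema generator. The mechanism, reduced to plain Python with stand-in values:

```python
# Generic reduction of the pattern: a decorator records its kwargs on the
# function object, and the test introspects them afterwards.
def auto_schema(**overrides):
    def wrap(fn):
        fn._schema_overrides = overrides  # drf-yasg uses `_swagger_auto_schema`
        return fn
    return wrap

@auto_schema(
    method="post",
    request_body="AddToCartSerializer",  # stand-in for the serializer class
    responses={201: "Item added successfully.", 400: "Invalid input data"},
)
def add(request):
    pass

def test_add_schema():
    overrides = add._schema_overrides
    assert overrides["method"] == "post"
    assert overrides["request_body"] == "AddToCartSerializer"
    assert overrides["responses"][201] == "Item added successfully."

test_add_schema()
```

In the real test you would fetch the method from the viewset (CartViewSet.add), and compare request_body against the actual AddToCartSerializer class; note that when swagger_auto_schema is given a `methods=` list, the stored structure is keyed per method, so inspect it in a shell first.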
Django: the page refreshes when I click on "import file" and no message appears
I am working on a Django-powered web app, and I want to customize the admin view of one of my models. I have made a custom template for the add page and overridden the save function in the admin class to process the input file before saving. When I click on "import file", the page simply refreshes and no message appears. Here is the admin class of RMABGD:

@admin.register(RMABGD)
class RMABGDAdmin(BaseModelAdmin):
    list_display = ('name', 'code_RMA', 'type_BGD', 'Partenaire', 'date_creation', 'RMA_BGD_state')
    list_filter = ('type_BGD', 'RMA_BGD_state', 'city')
    search_fields = ('name', 'code_RMA', 'Partenaire')
    add_form_template = "admin/spatial_data/RMABGD/change_form.html"
    change_form_template = "admin/spatial_data/RMABGD/change_form.html"

    def process_excel_import(self, request):
        excel_file = request.FILES.get('excel_file')
        if not excel_file:
            messages.error(request, "No file was selected. Please choose an Excel file.")
            return False
        try:
            df = pd.read_excel(excel_file)
            required_headers = ["code RMA", "code ACAPS", "Dénomination RMA", "Ville",
                                "Adresse", "Longitude", "Latitude", "Type BGD",
                                "Partenaire", "Date création", "Etat BGD RMA"]
            missing_headers = [header for header in required_headers if header not in df.columns]
            if missing_headers:
                messages.error(request, f"Missing required fields: {', '.join(missing_headers)}")
                return False
            else:
                # If all headers are correct, process data
                rows_imported = 0
                errors = 0
                for index, row in df.iterrows():
                    try:
                        # Process row data
                        obj = RMABGD(
                            code_ACAPS=row["code ACAPS"],
                            code_RMA=row["code RMA"],
                            name=row["Dénomination RMA"],
                            address=row["Adresse"],
                            city=row["Ville"],
                            location=f'POINT({row["Longitude"]} {row["Latitude"]})',
                            type_BGD=row["Type BGD"],
                            Partenaire=row["Partenaire"],
                            date_creation=row["Date création"],
                            RMA_BGD_state=row["Etat BGD RMA"]
                        )
                        obj.save()
                        rows_imported += 1
                    except Exception as … -
Error Running Spark Job from Django API using subprocess.Popen
I have created a Django project executable, and I need to run a Spark job from an API endpoint within this executable. I am using subprocess.Popen to execute the spark-submit command, but I am encountering an error when the command is executed. Here is the command I am trying to run:

/opt/spark-3.5.5-bin-hadoop3/bin/spark-submit --master local --deploy-mode client \
  --conf "spark.ui.enabled=false" --conf "spark.ui.showConsoleProgress=false" \
  --conf "spark.dynamicAllocation.enabled=false" --conf "spark.rdd.compress=false" \
  --conf "spark.driver.memory=4g" --conf "spark.executor.memory=8g" \
  --conf "spark.serializer=org.apache.spark.serializer.KryoSerializer" \
  /Users/user1/Project/process-engine/route.py \
  > /app/server/EAP/rasLite/engine/raslight/spark_submit_log/20250424/772_1.0_174702842893.log 2>&1 \
  "{'processNo': '772', 'versionNo': '1.0', 'jsonData': '', 'executionDate': '', 'skipError': 'N', 'generated_executionid': '149897', 'isExecutionIdGenerated': 'True', 'executionId': '149897', 'isPreProcess': 'False'}" &

However, I am getting the following error in the logs:

Unknown command: 'C:/Users/user1/Project/process-engine/route.py'
Type 'ras.exe help' for usage.

Context: I am running this command from a Django API endpoint within a Django project executable. The path to route.py seems to be correct, but the error message shows a Windows-style path (C:/Users/...) instead of the Unix-style path I am using (/Users/...).
I am using the following code to execute the command:

command = f'/Users/user1/Project/process-engine/spark-3.5.5-bin-hadoop3/bin/spark-submit --master local --deploy-mode client --conf "spark.ui.enabled=false" --conf "spark.ui.showConsoleProgress=false" --conf "spark.dynamicAllocation.enabled=false" --conf "spark.rdd.compress=false" --conf "spark.driver.memory=4g" --conf "spark.executor.memory=8g" --conf "spark.serializer=org.apache.spark.serializer.KryoSerializer" /Users/user1/Project/process-engine/route.py > {finalFilePath} 2>&1'
final_command = command + " " + '\"' + raw_body_decoded1 + '\"' + … -
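Independent of the path mangling (which suggests something on the host is intercepting and rewriting the command), building the invocation as one f-string with nested quotes and a shell redirection is fragile. A sketch that sidesteps shell quoting entirely: hand subprocess.Popen an argument list and redirect the log via stdout=. The paths and conf keys echo the question; build_spark_command is a hypothetical helper:

```python
import subprocess

def build_spark_command(spark_submit, script, conf, payload):
    """Build the spark-submit invocation as an argument list (no shell quoting)."""
    cmd = [spark_submit, "--master", "local", "--deploy-mode", "client"]
    for key, value in conf.items():
        cmd += ["--conf", f"{key}={value}"]
    cmd += [script, payload]
    return cmd

cmd = build_spark_command(
    "/opt/spark-3.5.5-bin-hadoop3/bin/spark-submit",
    "/Users/user1/Project/process-engine/route.py",
    {"spark.ui.enabled": "false", "spark.driver.memory": "4g"},
    "{'processNo': '772', 'versionNo': '1.0'}",
)
# Instead of "> log 2>&1" inside the string, give Popen the log file directly:
# with open(final_file_path, "wb") as log:
#     subprocess.Popen(cmd, stdout=log, stderr=subprocess.STDOUT)
```

With shell=False (the default for a list), each element reaches spark-submit verbatim — the JSON payload needs no escaping at all, and there is no shell left to reinterpret the script path.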
django-formset custom queryset
I am using the https://django-formset.fly.dev/ library, version 1.7.6. I'm trying to set a custom queryset on a ModelChoiceField, but I'm having trouble filtering the queryset based on the request object. Specifically, I want to filter and set the queryset to include only certain InRGPItem objects. Could you advise on how to properly set a custom queryset for ModelChoiceField in this context?

forms.py:

# django libraries
from django.forms.widgets import HiddenInput
from django.forms.models import ModelChoiceField, ModelForm

# django-formset libraries
from formset.collection import FormCollection
from formset.widgets import Selectize, DateInput, TextInput, Button
from formset.renderers.bootstrap import FormRenderer as BootstrapFormRenderer

# project models
from rgp_entry_app.models import OutRGPEntry, OutRGPItem, InRGPItem


class OutRGPItemForm(ModelForm):
    in_rgp_item = ModelChoiceField(
        label="In RGP Item",
        queryset=InRGPItem.objects.none(),  # Using direct empty queryset instead
        empty_label="Select",
        # to_field_name="guid",
        # widget=Selectize(
        #     search_lookup="name__icontains",
        # ),
    )

    class Meta:
        model = OutRGPItem
        fields = ['in_rgp_item', 'sent_qty', 'note']
        widgets = {
            'note': Textarea(attrs={'rows': 1}),
        }


class OutRGPItemCollection(FormCollection):
    outrgpitem = OutRGPItemForm()  # ✅ repeatable formset items
    related_field = 'out_rgp_entry'
    legend = "Out RGP Items"
    min_siblings = 1
    is_sortable = True
    ignore_marked_for_removal = True


class OutRGPEntryCollection(FormCollection):
    outrgpentry = OutRGPEntryForm()
    outrgpitem_set = OutRGPItemCollection()
    legend = "Out RGP Entry"
    default_renderer = BootstrapFormRenderer(
        form_css_classes='row',
        field_css_classes={
            # '*': 'mb-2 col-4',
            'chalan_no': 'col-sm-4',
            'chalan_date': 'col-sm-4',
            'rgp_date': 'col-sm-4', … -
invalid_client or unsupported_grant_type in Django test cases
This is my code, which is a simple signup and login, and I'm trying to log in via OAuth2:

class UserAuthTests(TestCase):
    def setUp(self):
        self.client = APIClient()
        self.user = CustomUser.objects.create_user(
            username='test',
            password='strongpassword123',
            email='test@example.com'
        )
        self.application = Application.objects.create(
            name="Test Application",
            client_type=Application.CLIENT_CONFIDENTIAL,
            authorization_grant_type=Application.GRANT_PASSWORD,
            skip_authorization=True,
            user=self.user,  # Associate the application with the user
        )
        self.application.save()

    def test_register_user(self):
        url = '/api/users/register/'
        data = {
            'username': 'newuser',
            'password': 'newpassword123',
            'email': 'newuser@example.com'
        }
        response = self.client.post(url, data, format='json')
        print(response.data)
        self.assertEqual(response.status_code, status.HTTP_201_CREATED)
        self.assertEqual(response.json()['username'], 'newuser')

    def test_login_user(self):
        url = '/o/token/'
        data = {
            'grant_type': 'client_credentials',  # I've tried with 'password' as well
            'username': 'test',
            'password': 'strongpassword123',
            'client_id': self.application.client_id,
            'client_secret': self.application.client_secret,
        }
        print(self.application.authorization_grant_type == 'password')
        print(f"Request Data: {data}")
        response = self.client.post(url, data=data, content_type="application/x-www-form-urlencoded")
        print(f"Response JSON: {response.json()}")
        self.assertEqual(response.status_code, status.HTTP_200_OK)
        self.assertIn('access_token', response.json())
        self.assertIn('refresh_token', response.json())

I'm getting Response JSON: {'error': 'unsupported_grant_type'} or "invalid client". My settings file looks like this:

OAUTH2_PROVIDER = {
    'ACCESS_TOKEN_EXPIRE_SECONDS': 36000,  # Set token expiration
    'OAUTH2_BACKEND_CLASS': 'oauth2_provider.oauth2_backends.OAuthLibCore',
}

I have tried changing content_type, but it has not worked. Please let me know how I can fix this.
In Postman, I've tried the same and it works; here is the curl:

curl --location 'localhost:8000/o/token/' \
--header 'Content-Type: application/x-www-form-urlencoded' \
--data-urlencode 'grant_type=password' \
--data-urlencode 'username=test' \
--data-urlencode 'password=strongpassword123' \
… -
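Two details in the test line up with the two errors (hedged — verify against your installed django-oauth-toolkit version): requesting grant_type=client_credentials against an application created with GRANT_PASSWORD is exactly what unsupported_grant_type means, and since django-oauth-toolkit 2.0, Application.save() hashes client_secret, so reading self.application.client_secret after save() transmits the hash and the server answers invalid_client. The second pitfall, reduced to a toy model:

```python
import hashlib

class FakeApplication:
    """Toy stand-in for django-oauth-toolkit >= 2.0, whose save() hashes the secret."""
    def __init__(self, client_secret):
        self.client_secret = client_secret

    def save(self):
        self.client_secret = hashlib.sha256(self.client_secret.encode()).hexdigest()

raw_secret = "plain-secret-for-tests"  # keep the raw value aside...
app = FakeApplication(raw_secret)
app.save()                             # ...because after save() only the hash remains
assert app.client_secret != raw_secret
# So the token request must send raw_secret, not app.client_secret:
token_request = {"grant_type": "password", "client_secret": raw_secret}
print(token_request["grant_type"])  # password
```

In the real test this means: pass an explicit client_secret when creating the Application, keep that raw string in a variable, send it in the POST data with grant_type='password', and urlencode the body (passing a dict together with content_type="application/x-www-form-urlencoded" does not encode it for you).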
Python Django & server
I have a problem. I have deployed my REST API project, made with Python and Django, on an Ubuntu VPS, but I can't get it configured through nginx and gunicorn, even though both are in the active (running) status. I set up nginx and gunicorn with ChatGPT's help, but still nothing happens when I enter the server's IP. -
Page not found (404) in Django [closed]
Hello, I'm getting a 404 error when accessing the home page of my Django application. Here are the details of the error:

Error: Page not found (404)
reservation/accueil.html
Request details:
Request Method: GET
Request URL: http://127.0.0.1:8000/
Raised by: reservations.views.accueil

URLconf used: in the gestion_reservations.urls file, Django tried to match these URL patterns, in this order:
- admin/
- accounts/
- [name='accueil']
The empty path matched the last pattern.

Context: I'm trying to display the accueil.html page in the reservation/ folder, but Django returns a 404 error. My app's urls.py file seems to be correctly configured, but the page does not load.

Here is part of my urls.py:

from django.urls import path
from . import views

urlpatterns = [
    path('', views.accueil, name='accueil'),
    # Other paths
]

And here is the corresponding view in views.py:

from django.shortcuts import render

def accueil(request):
    return render(request, 'reservation/accueil.html')

What I have tried:
- Checked the URL configuration and the template paths.
- Restarted the Django server.
- Verified that the accueil.html file is indeed present in the templates/reservation/ folder.

Questions:
- Why does Django not find the accueil.html file?
- Is there a problem in my configuration … -
How to check if a Django queryset has a matching ManyToMany + ForeignKey?
I am trying to check, in Django, whether a user-made queryset has two fields out of three that match 100%.

class Foo(models.Model):
    # ...
    free_field = models.ForeignKey(FreeModel, ...)
    must_match_m2m_field = models.ManyToManyField(ManyModel, ...)
    must_match_fk_field = models.ForeignKey(BarModel, ...)
    # ...

So the user generates a queryset of the Foo model (with several objects in it), which may contain many objects with a) different FreeModels, and b) precisely matching ManyModels & BarModels. In the backend I need to check that all ManyModels and BarModels match in the queryset. For the ForeignKey objects I made the following (I think it's fair enough):

def check_same_objects(object_list):
    return len(set(object_list)) == 1

check_bars = check_same_objects(qs_foo.values_list('must_match_fk_field', flat=True))

I made this for comparing the lists of objects in the many-to-many fields of each object:

def check_same_lists(object_lists):
    for lst in object_lists:
        if set(lst) != set(object_lists[0]):
            return False
    return True

should_match_foos = []
for e in qs_foo:
    should_match_foos.append(e.must_match_m2m_field.all().values_list('id', flat=True))
check_manies = check_same_lists(should_match_foos)

# finally checking both 'must-match' fields:
check_both = all([check_bars, check_manies])

What is a more elegant, Pythonic/Django way of checking that these two fields match 100% across the whole queryset? -
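A more compact version of both checks collapses each one to "how many distinct values are there?". For the FK a plain set already does this; for the M2M, turning each object's id list into a frozenset makes the per-object sets hashable, so the whole comparison is a single len() over a set of frozensets (plain lists stand in for the queryset results here):

```python
def check_same_objects(values):
    # FK check: exactly one distinct id across the queryset
    return len(set(values)) == 1

def check_same_lists(id_lists):
    # M2M check: exactly one distinct id-set across the queryset
    return len({frozenset(ids) for ids in id_lists}) == 1

fk_ids = [3, 3, 3]                   # qs_foo.values_list('must_match_fk_field', flat=True)
m2m_ids = [[1, 2], [2, 1], [1, 2]]   # per-object m2m id lists, order-insensitive
check_both = check_same_objects(fk_ids) and check_same_lists(m2m_ids)
print(check_both)  # True
```

This also removes the early-return loop and the implicit dependence on object_lists[0]; note both functions return False for an empty queryset, which may or may not be the semantics you want.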
Django Channels WebSocket connects but immediately closes (Live chat app)
I am building a live streaming website where users can watch a live class and interact via WebSocket chat. I have integrated Django Channels, Redis, and HLS.js for video playback. The chat connects via a WebSocket to Django Channels. However, the issue is: the WebSocket connects successfully, but immediately closes afterward. The browser console shows multiple attempts to reconnect, but the problem persists. There is no clear error message except "WebSocket connection closed" in the JavaScript console.

My setup:
- Django 5.1.8
- channels
- channels-redis
- Redis server is running (redis-cli ping returns PONG)
- Development server (local machine, runserver + daphne for Channels)

Code snippets — HTML/JS side (live_stream.html):

<script>
// WebSocket Chat functionality
const streamId = "UDOOSDIHOH49849"; // Your stream_id
let chatSocket = null;
let reconnectAttempts = 0;
const maxReconnectAttempts = 5;

// You can fetch real username from Django template if available
const username = "Anonymous"; // 🔥 You can dynamically change this later

function connectWebSocket() {
    chatSocket = new WebSocket(
        'ws://' + window.location.host + '/ws/chat/' + streamId + '/'
    );

    chatSocket.onopen = function(e) {
        console.log('WebSocket connection established');
        reconnectAttempts = 0;
    };

    chatSocket.onmessage = function(e) {
        const data = JSON.parse(e.data);
        const chatBox = document.getElementById('chatMessages');
        const newMessage = document.createElement('p');
        newMessage.innerHTML = `<strong>${data.username}:</strong> ${data.message}`; … -
Product visible in Admin but not returned from database queries
I'm working on a Django project and I'm seeing a strange issue. There's a product that appears in the Django admin, but when I run queries (raw SQL or Django ORM), it's not returned at all, as if it doesn't exist. I am using a MySQL database. I checked the database directly and the object isn't saved there. -
How to use pagination with djangochannelsrestframework?
I'm using djangochannelsrestframework for my project and want to use pagination. I found the PaginatedModelListMixin. This is my consumer:

class UserConsumer(GenericAsyncModelAPIConsumer):
    queryset = User.objects.all()
    serializer_class = UserSerializer
    pagination_class = WebsocketPageNumberPagination

    @model_observer(User)
    async def user_activity(self, message, observer=None, **kwargs):
        await self.send_json(message)

    @user_activity.serializer
    def user_activity_serializer(self, instance, action, **kwargs):
        return {
            "action": action.value,
            "data": UserSerializer(instance).data,
        }

    async def connect(self):
        await self.accept()
        await self.user_activity.subscribe()

The GenericAsyncModelAPIConsumer is just a wrapper for all the CRUD mixins:

class GenericAsyncModelAPIConsumer(
    PaginatedModelListMixin,
    CreateModelMixin,
    UpdateModelMixin,
    RetrieveModelMixin,
    DeleteModelMixin,
    GenericAsyncAPIConsumer,
):
    pass

The WebsocketPageNumberPagination should be a wrapper for the rest_framework's PageNumberPagination, but it didn't work for me. I send the request with a JS WebSocket like this:

class ModelWebSocket extends WebSocket {
    items = reactive([])

    constructor(url, protocols, pk = 'id') {
        // Call the parent constructor
        super(url, protocols)

        // List all items when the connection is opened
        this.onopen = () => {
            console.debug('[WS] Connected')
            this.list()
        }

        // Handle incoming messages
        this.onmessage = (event) => {
            const message = JSON.parse(event.data)
            console.log('[WS] Message', message)
            // Some more stuff, but the message is the interesting part
        }

        // Close and error handling
        // ...
    }

    list() {
        return new Promise((resolve, reject) => {
            const requestId = this.#getAndSetPendingRequest(resolve, reject)
            this.send(
                JSON.stringify({
                    action: 'list',
                    request_id: requestId,
                    page_size: … -
Using async in Django views to connect to a LiveKit backend throws missing arguments
I'm new to the async side of Django REST Framework. I currently have a Django REST API on Django v5 with all functions written as synchronous views. However, I'm attempting to add a WebRTC calling feature using the LiveKit server. I'm attempting to connect my Django REST API to a LiveKit server (self-hosted on Ubuntu 22.04) using this documentation (https://github.com/livekit/python-sdks) to create a room before connecting. The documentation clearly states: "RoomService uses asyncio and aiohttp to make API calls. It needs to be used with an event loop." Here is my code for the same:

# Creating a room
# RoomService uses asyncio and aiohttp to make API calls.
# It needs to be used with an event loop.
async def createLiveKitRoom(self, request):
    request_data = request.data.dict()
    serializer = CreateLiveKitRoomSerializer(data=request_data)
    serializer.is_valid()
    data = serializer.validated_data
    room_uuid = data.get("room_uuid")

    # Will read LIVEKIT_URL, LIVEKIT_API_KEY, and LIVEKIT_API_SECRET
    # from environment variables
    lkapi = LiveKitAPI(
        "http://${nginx_sfu_media_server_intra_chat_ip}:${sfu_media_server_intra_i_chat_port}"
    )
    room_info = await lkapi.room.create_room(
        CreateRoomRequest(name=room_uuid, empty_timeout=10 * 60, max_participants=20)
    )
    print(room_info)
    await lkapi.aclose()
    return room_info

asyncio.run(createLiveKitRoom())

I first have to create a room_uuid on my Django end using the usual synchronous PUT call (which I already have) and pass this room_uuid to the above async call so that the room is created on LiveKit …
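The "missing arguments" failure most likely comes from the last line: asyncio.run(createLiveKitRoom()) invokes a function defined with self and request but passes neither. On Django 5 the cleaner route is to declare the view itself as async def and await the LiveKit calls (Django drives the event loop for you, so no asyncio.run is needed); if you do bridge from synchronous code, the coroutine must be created from the bound method with its arguments supplied. The calling rule in isolation, with a stub standing in for the LiveKit client:

```python
import asyncio

class RoomAPI:
    async def create_room(self, room_uuid):
        # stands in for the awaited LiveKitAPI calls
        return {"name": room_uuid}

api = RoomAPI()
# asyncio.run(create_room()) would fail: the function is unbound and both
# `self` and its argument are missing. Calling the *bound* method with its
# argument builds a coroutine that asyncio.run can execute:
info = asyncio.run(api.create_room("UDOOSDIHOH49849"))
print(info)  # {'name': 'UDOOSDIHOH49849'}
```

Inside a running server you would not call asyncio.run at all — either make the DRF method async (for class-based views, Django's async support and your DRF version both need checking) or wrap the coroutine with asgiref's async_to_sync from the synchronous view.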