Django community: RSS
This page, updated regularly, aggregates Django Q&A from the Django community.
-
How to aggregate a group by query in django?
I'm working with time series data that is represented using this model: class Price(models.Model): timestamp = models.IntegerField() price = models.FloatField() Assuming timestamp has 1 min interval data, this is how I would resample it to 1 hr: queryset = ( Price.objects.annotate(timestamp_agg=Floor(F('timestamp') / 3600)) .values('timestamp_agg') .annotate( timestamp=Min('timestamp'), high=Max('price'), ) .values('timestamp', 'high') .order_by('timestamp') ) which runs the following SQL under the hood: select min(timestamp) timestamp, max(price) high from core_price group by floor((timestamp / 3600)) order by timestamp Now I want to calculate a 4 hr moving average, usually calculated in the following way: select *, avg(high) over (order by timestamp rows between 4 preceding and current row) ma from (select min(timestamp) timestamp, max(price) high from core_price group by floor((timestamp / 3600)) order by timestamp) or Window(expression=Avg('price'), frame=RowRange(start=-4, end=0)) How can I apply the window aggregation above to the first query? Obviously I can't do something like this, since the first query is already an aggregation: >>> queryset.annotate(ma=Window(expression=Avg('high'), frame=RowRange(start=-4, end=0))) django.core.exceptions.FieldError: Cannot compute Avg('high'): 'high' is an aggregate -
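Since Django cannot apply a window function on top of a queryset that is already grouped, one common workaround for the question above is to push the whole nested query into raw SQL, with the GROUP BY as a subquery. A hedged sketch: the core_price table and the 4-row frame come from the question, while the model import path is an assumption.

```python
# SQL mirroring the nested query from the question: hourly resample
# inside, moving-average window outside.
RESAMPLE_MA_SQL = """
    SELECT timestamp AS id, timestamp, high,
           AVG(high) OVER (
               ORDER BY timestamp
               ROWS BETWEEN 4 PRECEDING AND CURRENT ROW
           ) AS ma
    FROM (
        SELECT MIN(timestamp) AS timestamp, MAX(price) AS high
        FROM core_price
        GROUP BY FLOOR(timestamp / 3600)
    ) AS hourly
    ORDER BY timestamp
"""

def hourly_with_moving_average():
    # .raw() needs a primary-key column; aliasing timestamp AS id
    # satisfies that, and the extra columns (high, ma) become
    # attributes on each returned instance.
    from core.models import Price  # hypothetical import path
    return Price.objects.raw(RESAMPLE_MA_SQL)
```

Each returned row then carries row.timestamp, row.high and row.ma; whether FLOOR needs a float cast (3600.0) depends on the database's integer-division rules.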
How to integrate JWT authentication with HttpOnly cookies in a Django project that already uses sessions, while keeping roles and permissions unified?
I currently have a monolithic Django project that uses Django’s session-based authentication system for traditional views (login_required, session middleware, etc.). Recently, I’ve added a new application within the same project (also under the same templates directory) that communicates with the backend via REST APIs (Django REST Framework) and uses JWT authentication with HttpOnly cookies. The goal is for both parts (the old and the new) to coexist: The legacy sections should continue working with regular session-based authentication. The new app should use JWT authentication to access protected APIs. The problem I’m facing is how to properly handle permissions and roles across both authentication systems (sessions and JWT) without duplicating logic or breaking compatibility. Here’s what I want to achieve: Roles and permissions (e.g., X, Y, Z) should be defined centrally in the backend (either using Django Groups or a custom Role model). On the backend, traditional views should use @login_required, while API views should use JWTAuthentication with custom permission classes. On the frontend, I want to show or hide sections, submenus, or information depending on the authenticated user’s roles and permissions. (How can I properly integrate this?) All of this must work within the same Django project and the same … -
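One way to keep the two stacks unified, sketched under the assumptions in the question (roles stored as Django Groups, simplejwt on the JWT side): let DRF views accept either credential by listing both authentication classes, and serve the frontend its show/hide information from a single endpoint. The payload helper below is plain Python; the view, class, and endpoint names are illustrative.

```python
def role_payload(user):
    """Shape the roles/permissions the frontend uses to show or hide UI."""
    return {
        "roles": sorted(user.groups.values_list("name", flat=True)),
        "perms": sorted(user.get_all_permissions()),
    }

# Hypothetical DRF view tying it together: either a session cookie or a
# JWT HttpOnly cookie authenticates, then the same permissions apply.
#
#   from rest_framework.views import APIView
#   from rest_framework.response import Response
#   from rest_framework.permissions import IsAuthenticated
#   from rest_framework.authentication import SessionAuthentication
#   from rest_framework_simplejwt.authentication import JWTAuthentication
#
#   class MeView(APIView):
#       authentication_classes = [JWTAuthentication, SessionAuthentication]
#       permission_classes = [IsAuthenticated]
#
#       def get(self, request):
#           return Response(role_payload(request.user))
```

Because the roles live in one place (Groups), @login_required views and JWT-protected API views consult the same data, and the frontend only mirrors it for display.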
Changing Django Model Field for Hypothesis Generation
I'm testing the generation of an XML file, but it needs to conform to an encoding. I'd like to be able to simply call st.from_model(ExampleModel) and have the text fields conform to this encoding without needing to go over each and every text field. Something like: register_field_strategy(models.CharField, st.text(alphabet=st.characters(blacklist_categories=["C", "S"], blacklist_characters=['&']))) I searched around but didn't find anything that could do this. Does anyone have advice on how to make something like this work? -
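For what it's worth, hypothesis.extra.django does expose a register_field_strategy() hook much like the snippet above, but it is aimed at field types that do not already have a built-in registration, so overriding CharField that way may be rejected. A hedged alternative that still avoids naming every field: from_model() accepts per-field strategy overrides as keyword arguments, and the mapping can be built from the model's _meta. The sketch keeps the field check import-free; in real code it would be isinstance(f, models.CharField).

```python
# Strategy matching the question's constraints (needs hypothesis):
#   from hypothesis import strategies as st
#   SAFE_TEXT = st.text(alphabet=st.characters(
#       blacklist_categories=("C", "S"), blacklist_characters=("&",)))

def char_field_overrides(model, strategy):
    """Map every CharField-like field on `model` to `strategy`.

    Hypothetical usage with hypothesis.extra.django.from_model:
        from_model(ExampleModel,
                   **char_field_overrides(ExampleModel, SAFE_TEXT))
    """
    return {
        f.name: strategy
        for f in model._meta.get_fields()
        # Sketch-only check; use isinstance(f, models.CharField) for real.
        if type(f).__name__.endswith("CharField")
    }
```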
Postgres row invisible from the Django ORM but visible from the shell [closed]
Hello, I'm running into incomprehensible behavior between the Django shell (python manage.py shell) and code executed via the server (python manage.py runserver). The query below returns an empty queryset from the Django ORM, while from the shell (or pgAdmin with a raw SQL query) we do get a non-null result: DossierValideur.objects.filter(id_dossier_id=185).values_list("id_instructeur", flat=True) It's the only id_dossier_id causing a problem, because this same query works for all the others. Context: I'm using Django + PostgreSQL with several schemas (public, avis, documents, instruction, utilisateurs). Configuration: Django 5.1.7, PostgreSQL 15, Python 3.12, OS: Windows. In my settings.py: DATABASES = { 'default': { 'ENGINE': 'django.db.backends.postgresql', 'NAME': os.environ.get('BDD_NAME'), 'USER': os.environ.get('BDD_USER'), 'PASSWORD': os.environ.get('BDD_PASSWORD'), 'HOST': os.environ.get('BDD_HOSTNAME'), 'PORT': os.environ.get('BDD_PORT'), 'OPTIONS': { 'options': '-c search_path=public,avis,documents,instruction,utilisateurs' }, } } Models involved: my 2 models concerned (identical structure but different schemas): class Instructeur(models.Model): id = models.AutoField(primary_key=True) id_ds = models.CharField(unique=True, blank=True, null=True) email = models.CharField(unique=True) id_agent_autorisations = models.ForeignKey(AgentAutorisations, models.RESTRICT, db_column='id_agent_autorisations') class Meta: managed = False db_table = '"utilisateurs"."instructeur"' def __str__(self): if self.id_agent_autorisations : return f"{self.id_agent_autorisations.nom} {self.id_agent_autorisations.prenom}" else : return self.email class Dossier(models.Model): id = models.AutoField(primary_key=True) id_ds = models.CharField(unique=True, blank=True, null=True) id_etat_dossier = models.ForeignKey(EtatDossier, models.RESTRICT, db_column='id_etat_dossier') id_etape_dossier = models.ForeignKey(EtapeDossier, models.RESTRICT, 
db_column='id_etape_dossier', default=10) numero = models.IntegerField(unique=True) date_depot = … -
How can I set a fixed iframe height for custom preview sizes in Wagtail’s page preview?
I’m extending Wagtail’s built-in preview sizes to include some additional device configurations. Wagtail natively supports device_width, but not device_height. I’d like to define both width and height for the preview iframe instead of having it default to height: 100%. Here’s an example of my mixin that extends the default preview sizes: from wagtail.models import Page, PreviewableMixin from django.utils.translation import gettext_lazy as _ class ExtendedPreviewSizesMixin(PreviewableMixin): """Extend the default Wagtail preview sizes without replacing them.""" @property def preview_sizes(self): base_sizes = super().preview_sizes extra_sizes = [ { "name": "12_inch", "icon": "hmi-12", "device_width": 1280, "device_height": 800, # not supported by Wagtail by default "label": _("Preview in 12-inch screen"), }, { "name": "24_inch", "icon": "hmi-24", "device_width": 1920, "label": _("Preview in 24-inch screen"), }, ] return base_sizes + extra_sizes @property def preview_modes(self): base_modes = super().preview_modes extra_modes = [ ("custom", _("Custom Preview")), ("custom_with_list", _("Custom Preview with List")), ] return base_modes + extra_modes def get_preview_template(self, request, mode_name): if mode_name == "custom": return "previews/preview_custom.html" if mode_name == "custom_with_list": return "previews/preview_custom_with_list.html" return "previews/default_preview.html" By default, Wagtail sets the preview iframe width using: width: calc(var(--preview-iframe-width) * var(--preview-width-ratio)); There doesn’t seem to be an equivalent variable for height. Question: Is there a way to set a fixed iframe height for custom preview sizes … -
How to enable bulk delete in Wagtail Admin (Wagtail 2.1.1, Django 2.2.28)
I’m currently working on a project using Wagtail 2.1.1 and Django 2.2.28. In Django Admin, there’s a built-in bulk delete action that allows selecting multiple records and deleting them at once. However, in Wagtail Admin, this functionality doesn’t seem to exist in my current version. I want to implement a bulk delete feature for one of my custom models (not a snippet), similar to how Django Admin provides it. Here’s my model example: class membership(address): user = models.OneToOneField(m3_account, on_delete=models.CASCADE, unique=True) locality_site = models.ForeignKey(wagtailSite, null=True, on_delete=models.CASCADE) # Contact details: business_name = models.CharField(max_length=100, verbose_name='Business Name/Account Name') contact_name = models.CharField(max_length=100) phone = models.CharField(max_length=70, verbose_name="Phone Number") def __str__(self): return self.business_name Before I start writing custom logic for this, I’d like to know: Does Wagtail support bulk delete functionality in newer versions of the admin interface? If yes, from which version was it officially introduced or supported? Would upgrading from Wagtail 2.1.1 to a certain version allow me to use this feature directly (without registering my model as a snippet)? I want to keep using Wagtail’s admin interface (ModelAdmin) and not convert my model into a snippet. Any guidance on compatible versions or recommended upgrade paths would be appreciated. -
Django-Oscar: UserAddressForm override in oscar fork doesn't work
I need to override UserAddress model. My Steps: Make address app fork python manage.py oscar_fork_app address oscar_fork Override model from django.db import models from django.conf import settings from django.utils.translation import gettext_lazy as _ AUTH_USER_MODEL = getattr(settings, "AUTH_USER_MODEL", "auth.User") class Address(models.Model): city = models.CharField("city ", max_length=255, blank=False) street = models.CharField("street ", max_length=255, blank=False) house = models.CharField("house ", max_length=30, blank=False) apartment = models.CharField("apartment ", max_length=30, blank=True) comment = models.CharField("comment ", max_length=255, blank=True) class UserAddress(Address): user = models.ForeignKey( AUTH_USER_MODEL, on_delete=models.CASCADE, related_name="addresses", verbose_name=_("User"), ) from oscar.apps.address.models import * But I came across an error during makemigrations: django.core.exceptions.FieldError: Unknown field(s) (line1, phone_number, line4, notes, state, postcode, last_name, line2, country, line3, first_name) specified for UserAddress I tried to override UserAddressForm too: from django import forms from .models import UserAddress class UserAddressForm(forms.ModelForm): class Meta: model = UserAddress fields = [ "city", "street", "house", "apartment", "comment", ] But it doesn't work. What am I doing wrong? -
Django ORM fails to generate valid SQL for JSONB contains
Let's start with the error from my logs: 2025-10-21 19:18:11,380 ERROR api.services.observium_port_status_service Error getting port status from store: invalid input syntax for type json LINE 1: ..." WHERE "api_jsonstatestore"."filter_key_json" @> '''{"type"... ^ DETAIL: Token "'" is invalid. CONTEXT: JSON data, line 1: '... and the query: state = ( JsonStateStore.objects.select_for_update() .filter(filter_key_json__contains={"type": "observium_port_status", "observium_port_id": observium_port_id}) .first() ) Here is an example record: id created touched filter_key_json data_json 33 2025-10-21 18:19:59.873 -0500 2025-10-21 18:44:57.047 -0500 {"type": "observium_port_status", "observium_port_id": 987} redacted and the model: class JsonStateStore(models.Model): created = models.DateTimeField(auto_now_add=True) touched = models.DateTimeField(auto_now=True) filter_key_json = models.JSONField() data_json = models.JSONField() class Meta: verbose_name = "JSON State Store" verbose_name_plural = "JSON State Stores" def __str__(self): return f"JsonStateStore(key={self.filter_key_json}, created={self.created}, touched={self.touched})" def save(self, *args, **kwargs): self.touched = timezone.now() super().save(*args, **kwargs) I am on Django 4.2.2; my database backend is timescale.db.backends.postgis (github | pypi) and I am on version 0.2.13 of that package. I cannot identify any syntax error in the QuerySet call and I cannot figure out what is going wrong here. My current workaround is lock_sql = ( """ SELECT id, created, touched, filter_key_json, data_json FROM api_jsonstatestore WHERE filter_key_json @> %s::jsonb ORDER BY id ASC LIMIT 1 FOR UPDATE """ ) payload = {"type": "observium_port_status", "observium_port_id": … -
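Until the backend's JSONField adaptation is fixed (the log shows the literal arriving triple-quoted), the raw-SQL workaround above can stay closer to the ORM by serializing the containment probe explicitly and splicing it in as a parameter. A hedged sketch; the table and column names come from the question, and the RawSQL usage shown is only one of several ways to attach the clause.

```python
import json

def jsonb_contains_clause(probe):
    """Build (sql, params) for a `filter_key_json @> %s::jsonb` test,
    serializing the probe once so the driver sends a clean JSON string."""
    return "filter_key_json @> %s::jsonb", [json.dumps(probe)]

# Hypothetical ORM usage:
#   from django.db.models.expressions import RawSQL
#   sql, params = jsonb_contains_clause(
#       {"type": "observium_port_status", "observium_port_id": 987})
#   state = (JsonStateStore.objects.select_for_update()
#            .filter(id__in=RawSQL(
#                f"SELECT id FROM api_jsonstatestore WHERE {sql}", params))
#            .first())
```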
Tailwind CSS 4 and DaisyUI - Menu Items stacking vertically
Stack = Django, PostgreSQL, TailwindCSS 4 using Django-Tailwind (DaisyUI plugin) and Vanilla JavaScript My menu items for desktop (lg screens and above) on the second row are stacking vertically. I don't understand why they aren't horizontal. <div class="hidden lg:flex justify-center w-full mt-2"> <ul class="menu menu-horizontal flex-row px-1"> <li><a href="#">About</a></li> <li><a href="#">Shop</a></li> <li><a href="#">Blog</a></li> </ul> </div> This is the second part of the header and needs to be underneath the search bar on lg screens and above. I am using the DaisyUI TailwindCSS 4 navbar and menu components for the header. In my style.css I have @import "tailwindcss"; @plugin "@tailwindcss/forms"; @plugin "@tailwindcss/typography"; @plugin "daisyui"; At the top so I know DaisyUI is installed. I am on DaisyUI 5.0.43 according to my package.json -
React Native Maps not showing Markers on Android, even though API data is fetched correctly
I'm building a React Native app to display location markers on a map using react-native-maps. I'm using the Google Maps provider on Android. My problem is that the map loads, but the markers are not visible, even though I can confirm that my API call is successful and returns a valid array of location data. MapsScreen.jsx:- import React, { useState, useEffect, useRef, useMemo } from "react"; import { View, Text, StyleSheet, ActivityIndicator, Alert, SafeAreaView, TouchableOpacity, StatusBar, Modal, } from "react-native"; import MapView, { Marker, Callout, PROVIDER_GOOGLE } from "react-native-maps"; import { useRoute, useNavigation } from "@react-navigation/native"; import Icon from "react-native-vector-icons/MaterialCommunityIcons"; import { fetchMaps } from "../services/maps"; const StatusIndicator = ({ text }) => ( <SafeAreaView style={styles.statusContainer}> <StatusBar barStyle="light-content" backgroundColor="#181d23" /> <ActivityIndicator size="large" color="#27F0C9" /> <Text style={styles.statusText}>{text}</Text> </SafeAreaView> ); const isValidCoord = (lat, lng) => Number.isFinite(lat) && Number.isFinite(lng) && Math.abs(lat) <= 90 && Math.abs(lng) <= 180; const MapsAnalysis = () => { const [points, setPoints] = useState([]); const [isLoading, setIsLoading] = useState(true); const [error, setError] = useState(null); const [showInfo, setShowInfo] = useState(false); const [mapReady, setMapReady] = useState(false); const route = useRoute(); const navigation = useNavigation(); const { userId } = route.params; const mapRef = useRef(null); useEffect(() => { const getLocations = … -
Python Community from Central Asia
Python Community from Central Asia: we are trying to create a cool community of Central Asian Python developers. Do we have some Python developers here? -
I can only run my backend tests locally, because Celery writes all the instances from the mocked environment into the actual db
I want to create tests, but every time I run a test it triggers Celery, and Celery creates instances in my local db. That means that if I ran those tests on the prod or dev servers, they would create rubbish there; maybe that would trigger other things and cause problems in the db. How can I avoid all of that? How can I mock Celery so it doesn't cause trouble on the dev or prod server while running tests? I tried some mocking through @override_settings, but it didn't actually work the way I would like. -
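Two standard approaches, sketched with the assumptions spelled out: the setting names in option (1) only take effect if your Celery app reads its config from Django settings, and the task path passed to the helper in option (2) is whatever dotted path your code calls .delay on.

```python
from unittest import mock

# (1) Run tasks eagerly, in-process, inside tests -- no broker, no
#     worker, so nothing can leak into a shared dev/prod database:
#
#   from django.test import TestCase, override_settings
#
#   @override_settings(CELERY_TASK_ALWAYS_EAGER=True,
#                      CELERY_TASK_EAGER_PROPAGATES=True)
#   class CrawlTests(TestCase): ...
#
# (assumes the app does
#  app.config_from_object('django.conf:settings', namespace='CELERY'))

# (2) Patch the .delay call site so the task never runs at all:
def call_with_stubbed_task(task_path, func, *args, **kwargs):
    """Run func() with `<task_path>.delay` replaced by a MagicMock;
    returns (result, the mock) so the test can assert on the call."""
    with mock.patch(task_path + ".delay") as fake_delay:
        result = func(*args, **kwargs)
    return result, fake_delay
```

Option (1) still runs the task body (against the test database Django creates per test run); option (2) skips the body entirely, which is safer when the task talks to external systems.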
Django cannot find static image file
I have a Django project served on AWS EC2, with just one HTML page which is supposed to display one static image (im.jpg), but it doesn't. It does display the body text of the HTML file (below) but not the image. It says "'im.jpg' could not be found" and returns: "GET /static/im.jpg HTTP/1.1" 404 1837 Here's show.html: <!DOCTYPE html> <html lang='en'> {% load static %} <head> <title>Title </title> </head> <body> body text <img src="{% static '/im.jpg' %}"> </body> and my settings.py includes: STATIC_URL = 'static/' STATICFILES_DIR = [ os.path.join(BASE_DIR, 'static') ] I have tried all the different correct paths to im.jpg, even an absolute path inside STATICFILES_DIR, and reran the server, but no success. It seems like Django cannot find the image file. Adding a STATIC_ROOT in settings.py, and also setting DEBUG=False, makes no difference. -
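Two details in the snippet above are the usual suspects: the setting is spelled STATICFILES_DIRS (plural), and the {% static %} argument is a path relative to a static directory, with no leading slash. A hedged corrected sketch, assuming a project-root static/ directory:

```python
# settings.py -- note the plural: STATICFILES_DIRS, not STATICFILES_DIR.
from pathlib import Path

BASE_DIR = Path(__file__).resolve().parent.parent

STATIC_URL = "static/"
STATICFILES_DIRS = [BASE_DIR / "static"]   # where <project>/static/im.jpg lives
STATIC_ROOT = BASE_DIR / "staticfiles"     # collectstatic target for production

# Template: the path is relative to a static dir, so no leading slash:
#   <img src="{% static 'im.jpg' %}">
#
# With DEBUG=False, Django stops serving static files itself: run
# `manage.py collectstatic` and serve STATIC_ROOT from the web server
# (or use WhiteNoise).
```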
How to specify the index name while concurrently removing indexes from the database
I have some fields in my table for which I need to remove indexing. In the Django application, I saw that I could do this using the migrations.RemoveIndexConcurrently() operation. However, I'm confused about how to specify the name attribute with it. The previously mentioned indexed fields were added at the time of creating the table, so there is no separate AddIndex migration. I need to remove indexing for these fields in 2 different environments, and when I looked up the names using SELECT indexname, indexdef FROM pg_indexes WHERE tablename = 'my_db_table_name' I saw index names like user_secondary_mail_779d505a_like, which could be different in the second environment. Is there any way I could specify the names of the fields so that I could run the migration in both environments? Any help would be greatly appreciated! -
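Since the auto-generated hash suffixes differ per environment, one hedged approach is to resolve the actual names from pg_indexes at migration time instead of hard-coding them. The table and column names below are the ones from the question; the RunPython wiring is shown in comments, and the migration must set atomic = False for CONCURRENTLY to be legal.

```python
def find_indexes_sql(table, column):
    """(sql, params) returning index names on `table` that cover `column`."""
    return (
        "SELECT indexname FROM pg_indexes "
        "WHERE tablename = %s AND indexdef LIKE %s",
        [table, f"%({column}%"],
    )

def drop_indexes_concurrently(apps, schema_editor, table, column):
    # Look the per-environment names up, then drop each one.
    with schema_editor.connection.cursor() as cur:
        cur.execute(*find_indexes_sql(table, column))
        for (name,) in cur.fetchall():
            cur.execute(f'DROP INDEX CONCURRENTLY IF EXISTS "{name}"')

# Hypothetical migration:
# class Migration(migrations.Migration):
#     atomic = False   # required for CONCURRENTLY
#     operations = [migrations.RunPython(
#         lambda apps, se: drop_indexes_concurrently(
#             apps, se, "my_db_table_name", "user_secondary_mail"),
#         migrations.RunPython.noop)]
```

Note this drops the indexes outside Django's migration state, so the model's field should also stop declaring db_index=True (e.g. via SeparateDatabaseAndState) to keep state and database in sync.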
Role-based access control implementation
I am developing a food ordering and delivery service, and I have four roles in my project: admin, customer, rider, and manager. I'm finding it challenging to implement role-based access control, so I would like to ask how to implement it in such a project. The tech stack I am using is Vue.js, Django, and Django REST Framework. I feel stuck when implementing it because I am a beginner. -
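A minimal starter sketch under one common design: map each role (admin, customer, rider, manager) to a Django Group and gate DRF views with a small permission factory. The factory below is plain Python; in a real project the inner class would subclass rest_framework.permissions.BasePermission.

```python
def role_required(*roles):
    """Build a DRF-style permission class allowing only the given roles."""
    class RolePermission:  # real version: permissions.BasePermission subclass
        def has_permission(self, request, view):
            user = request.user
            return bool(getattr(user, "is_authenticated", False)) and \
                user.groups.filter(name__in=roles).exists()
    return RolePermission

# Hypothetical usage:
#   class OrderViewSet(viewsets.ModelViewSet):
#       permission_classes = [role_required("manager", "admin")]
```

The Vue side should only hide UI based on roles the API reports; the server-side permission check is what actually protects the data.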
trouble connecting my database to Django server on deployment machine
I'm deploying to AWS Linux 2023 and my PostgreSQL database is on AWS RDS. I've installed psql and checked that the db is accessible from my instance. I've also checked that the environment variables are fetched exactly as expected in my settings.py, and I even ssh'd in and applied the migrations myself. But I keep getting the following issue: [stderr] raise dj_exc_value.with_traceback(traceback) from exc_value [stderr] File "/home/ec2-user/.local/share/virtualenvs/app-UPB06Em1/lib/python3.13/site-packages/django/db/backends/base/base.py", line 279, in ensure_connection [stderr] self.connect() [stderr] ~~~~~~~~~~~~^^ [stderr] File "/home/ec2-user/.local/share/virtualenvs/app-UPB06Em1/lib/python3.13/site-packages/django/utils/asyncio.py", line 26, in inner [stderr] return func(*args, **kwargs) [stderr] File "/home/ec2-user/.local/share/virtualenvs/app-UPB06Em1/lib/python3.13/site-packages/django/db/backends/base/base.py", line 256, in connect [stderr] self.connection = self.get_new_connection(conn_params) [stderr] ~~~~~~~~~~~~~~~~~~~~~~~^^^^^^^^^^^^^ [stderr] File "/home/ec2-user/.local/share/virtualenvs/app-UPB06Em1/lib/python3.13/site-packages/django/utils/asyncio.py", line 26, in inner [stderr] return func(*args, **kwargs) [stderr] File "/home/ec2-user/.local/share/virtualenvs/app-UPB06Em1/lib/python3.13/site-packages/django/db/backends/postgresql/base.py", line 332, in get_new_connection [stderr] connection = self.Database.connect(**conn_params) [stderr] File "/home/ec2-user/.local/share/virtualenvs/app-UPB06Em1/lib64/python3.13/site-packages/psycopg2/__init__.py", line 122, in connect [stderr] conn = _connect(dsn, connection_factory=connection_factory, **kwasync) [stderr]django.db.utils.OperationalError: connection to server on socket "/var/run/postgresql/.s.PGSQL.5432" failed: No such file or directory [stderr] Is the server running locally and accepting connections on that socket? 
[stderr] [stderr]2025-10-19 04:29:22,032 INFO Starting server at tcp:port=8000:interface=127.0.0.1 [stderr]2025-10-19 04:29:22,032 INFO HTTP/2 support not enabled (install the http2 and tls Twisted extras) [stderr]2025-10-19 04:29:22,033 INFO Configuring endpoint tcp:port=8000:interface=127.0.0.1 [stderr]2025-10-19 04:29:22,033 INFO Listening on TCP address 127.0.0.1:8000 settings.py: ... DATABASES = { "default": { … -
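That particular traceback ("/var/run/postgresql/.s.PGSQL.5432") means psycopg2 received an empty HOST and fell back to a local unix socket. On a server that usually indicates the env vars are visible in your interactive SSH shell but not in the process that actually runs Django (systemd unit, deploy hook, etc.). A hedged sketch that makes the failure explicit at startup; the env var name is illustrative, not the one from your settings:

```python
import os

def required_env(name):
    """Read an env var for settings.py, failing loudly if this process
    cannot see it (instead of silently falling back to a unix socket)."""
    value = os.environ.get(name)
    if not value:
        raise RuntimeError(
            f"{name} is unset in this process -- check the systemd unit / "
            "deploy environment, not just the interactive shell"
        )
    return value

# Hypothetical wiring:
# DATABASES = {"default": {
#     "ENGINE": "django.db.backends.postgresql",
#     "HOST": required_env("DB_HOST"),   # the RDS endpoint, never ''
#     ...
# }}
```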
Celery memory leak in Django — worker memory keeps increasing and not released after tasks complete
I’m using Django + Celery for data crawling tasks, but the memory usage of the Celery worker keeps increasing over time and never goes down after each task is completed. I’m using: celery==5.5.3 Django==5.2.6 Here’s my Celery configuration: # ---------- Broker/Backend ---------- app.conf.broker_url = "sqs://" app.conf.result_backend = "rpc://" app.conf.task_ignore_result = True # ---------- Queue (FIFO) ---------- QUEUE_NAME = env("AWS_SQS_CELERY_NAME") app.conf.task_default_queue = QUEUE_NAME app.conf.task_queues = (Queue(QUEUE_NAME),) # ---------- SQS transport ---------- app.conf.broker_transport_options = { "region": env.str("AWS_REGION"), "predefined_queues": { QUEUE_NAME: { "url": env.str("AWS_CELERY_SQS_URL"), "access_key_id": env.str("AWS_ACCESS_KEY_ID"), "secret_access_key": env.str("AWS_SECRET_ACCESS_KEY"), }, }, # long-poll "wait_time_seconds": int(env("SQS_WAIT_TIME_SECONDS", default=10)), "polling_interval": float(env("SQS_POLLING_INTERVAL", default=0)), "visibility_timeout": int(env("SQS_VISIBILITY_TIMEOUT", default=900)), "create_missing_queues": False, # do not create queue automatically } # ---------- Worker behavior ---------- app.conf.worker_prefetch_multiplier = 1 # process one job at a time app.conf.task_acks_late = True # ack after task completion app.conf.task_time_limit = int(env("CELERY_HARD_TIME_LIMIT", default=900)) app.conf.task_soft_time_limit = int(env("CELERY_SOFT_TIME_LIMIT", default=600)) app.conf.worker_send_task_events = False app.conf.task_send_sent_event = False app.autodiscover_tasks() Problem: After each crawling task completes, the worker memory does not drop back; it only increases gradually. Restarting the Celery worker releases memory, so I believe it’s a leak or a cleanup issue. What I’ve tried: Set task_ignore_result=True Add option --max-tasks-per-child=200 -
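Beyond the --max-tasks-per-child flag, Celery exposes the same recycling as configuration, plus a per-child memory ceiling. A hedged fragment in the style of the configuration above (values are illustrative, and note that recycling masks rather than fixes a leak in the crawl code itself, e.g. growing module-level caches or large querysets held by references):

```python
# ---------- Worker recycling ----------
# Continuation of the configuration above (app = the existing Celery app):
app.conf.worker_max_tasks_per_child = 200       # config twin of --max-tasks-per-child
app.conf.worker_max_memory_per_child = 300_000  # KiB (~300 MB) before recycle
```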
Error during Django Deployment: Cannot import 'setuptools.build_meta'
I encountered this error while deploying my Django project on Render. The error is as follows: pip._vendor.pyproject_hooks._impl.BackendUnavailable: Cannot import 'setuptools.build_meta' I have upgraded setuptools and even reinstalled the package, but still can't figure out what the issue is. PS: I am using Django 5.2.7 and Python 3.11 -
Mystery lag in completing Django POST request after form.save() completes
One of the forms in my Django application takes a long time to submit, save, and redirect. I'm using a variant of cProfile to measure the time spent on form.clean() and form.save(). It takes <0.1 seconds to run form.clean(), then ~10 seconds to run form.save(), then calls get_success_url(), and then hangs for another ~30 seconds before loading the success_url page. The success_url page should take negligible time to render. What's eating up the mystery 25 seconds? -
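A first step that usually localizes this kind of gap: time the request outside the view, because with ATOMIC_REQUESTS (or post_save signals and transaction.on_commit hooks) the database commit happens after the view returns and never shows up in a profile of form.save(). A hedged timing-middleware sketch:

```python
import logging
import time

logger = logging.getLogger(__name__)

class PhaseTimingMiddleware:
    """Logs total request wall time, including everything that runs
    after the view returns (transaction commit, signals, other
    middleware) -- the part form-level profiling cannot see."""

    def __init__(self, get_response):
        self.get_response = get_response

    def __call__(self, request):
        start = time.monotonic()
        response = self.get_response(request)
        logger.warning("%s %s -> %s in %.1fs",
                       request.method, request.path,
                       response.status_code, time.monotonic() - start)
        return response

# Add it first in MIDDLEWARE so it wraps the whole stack. If the time
# logged here matches the ~30 s gap, look at commit-time work
# (ATOMIC_REQUESTS, transaction.on_commit, post_save signals); if not,
# the time is being spent in the follow-up request for success_url.
```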
Certain django-select2 fields won't render when multiple forms are used on single template
I have an issue rendering some select2 fields: when multiple forms with the same select2 widget are used within a single template, some select2 fields are not rendered as expected (they render just as a normal select field). If I re-initialize one of the fields using jQuery, it renders as expected ($("element").djangoSelect2()). The CSS/JS (rendered with the {{ form.media }} property) loads as well and can be seen in the browser's dev mode. What might be the issue rendering django-select2 fields, and do I have to render the media imports for each form, or is just one of them enough since they look identical? Simplified forms example: FORM 1: <form action=""> <div class="form-meta"> {{ form.media }} </div> <div class="field p-2 d-flex flex-column flex-fill rounded"> {{ form.recipient.label_tag }} {{ form.recipient }} {% if form.recipient.errors %} <div class="error">{{ form.recipient.errors }}</div> {% endif %} </div> </form> FORM 2: <form action=""> <div class="d-flex flex-column gap-2"> <!-- Recipient field --> <div class="form-meta"> {{ shipment_form.media }} </div> <div class="d-flex gap-2 p-3 border"> <i class="align-self-center fa-regular fa-user fa-2x"></i> <div class="vr"></div> <div class="d-flex gap-2 flex-fill"> <div id="recipientFieldContainer" class="flex-fill"> {{ shipment_form.recipient }} </div> <button class="btn btn-dark rounded-0" type="button" data-bs-target="#newRecipientModal" data-bs-toggle="modal">+</button> </div> </div> <!-- Comment field --> {{ shipment_form.comment }} </div> </form> -
How can I make a Wagtail/Django application serve assets through a CloudFront URL and not a direct S3 URL?
Previously, my S3 bucket's objects were directly accessible using the bucket's URL. I have recently taken measures to restrict this sort of access by creating a CloudFront domain, with an S3 origin, with the idea of only allowing reads to the bucket through CloudFront. Now, my bucket only has the default OAC CloudFront policy attached to it and the access is working as expected. If you were to navigate directly to an S3 key, for example, https:// bucket-name.s3.amazon.aws/images/image.png you'd get a 403, but if you were to go to https:// cloudfront-cname.com/images/image.png you'd see the image. However, the Wagtail application is still trying to serve assets using the direct S3 URL, meaning it gets a 403 when trying to display images etc. The app is being hosted using ECS + EC2. I've done some reading/research and saw some suggestions suggesting adding a CLOUDFRONT_DOMAIN variable to base.py which would then have to be added as an env var to the ECS task definition, and then set AWS_S3_CUSTOM_DOMAIN to CLOUDFRONT_DOMAIN if CLOUDFRONT_DOMAIN is not null. I also saw some suggestions to set MEDIA_URL to f"https://{CLOUDFRONT_DOMAIN}/": CLOUDFRONT_DOMAIN = os.getenv("CLOUDFRONT_DOMAIN") if CLOUDFRONT_DOMAIN: AWS_S3_CUSTOM_DOMAIN = CLOUDFRONT_DOMAIN MEDIA_URL = f"https://{AWS_S3_CUSTOM_DOMAIN}/" I have done the above and redeployed … -
Custom Permissions in django-ninja which needs to use existing db objects
I am using django-ninja and django-ninja-extra for an API. Currently I have a Schema like so: from ninja import Schema class SchemaA(Schema): fruit_id: int other_data: str and a controller like so: class HasFruitAccess(permissions.BasePermission): def has_permission(self, request: HttpRequest, controller: ControllerBase): controller.context.compute_route_parameters() data = controller.context.kwargs.get('data') fruit = Fruit.objects.get(pk=data.fruit_id) if fruit.user.pk == request.user.pk: return True return False @api_controller("/fruits", permissions=[IsAuthenticated]) class FruitController(ControllerBase): """Controller class for test runs.""" @route.post("/", auth=JWTAuth(), response=str, permissions=[HasFruitAccess()]) def do_fruity_labour(self, data: SchemaA) -> str: #Check fruit exists. fruit = get_object_or_404(Fruit, pk=data.fruit_id) #do work return "abc" And a model like: class Fruit(models.Model): user = models.ForeignKey(User) ... What I wanted to do here was check that the user is related to the fruit and then authorize them to do whatever on this object. Is this a good idea and best practice, or is it better to just validate in the API route itself? Because permissions will obviously run before we check whether the fruit is even a valid object in the db, I might be trying to "authorize" a user with invalid data. How can one go about authorizing users for a specific API route which relies on db models through permissions (I would prefer it if I could use permissions since … -
How can I share my PostgreSQL changes with teammates after git pull in a Django project?
I'm working on the backend of a web application using Django. Each developer has a local setup of the project, and we all pull updates from GitHub using git pull. I have a question about database changes: whenever I make changes to the PostgreSQL database (for example, updating the schema, or adding new tables or data), is there a way for my teammates to automatically get those changes after they run git pull? Or should we use SQLite for now and deploy on Postgres once the project is done? On my own I have used SQLite, but in this project I don't know whether that is a good idea or not.
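The standard answer here, sketched with assumptions called out: schema changes travel through git as Django migration files (the output of makemigrations is committed, and teammates just run migrate after git pull), and shared rows can travel the same way via a data migration. The app and model names below are hypothetical.

```python
# app/migrations/0002_seed_categories.py (hypothetical number/name)

def seed_categories(apps, schema_editor):
    """Insert shared reference rows so every database that runs this
    migration ends up with the same data."""
    Category = apps.get_model("shop", "Category")  # assumed app/model
    for name in ("food", "drink"):
        Category.objects.get_or_create(name=name)

# class Migration(migrations.Migration):
#     dependencies = [("shop", "0001_initial")]
#     operations = [migrations.RunPython(seed_categories,
#                                        migrations.RunPython.noop)]
#
# Day-to-day flow for each developer:
#   python manage.py makemigrations   # after model changes; commit the files
#   python manage.py migrate          # after every git pull
```

This also makes the SQLite-vs-Postgres question mostly moot, though developing against the same engine as production avoids engine-specific surprises.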