Django community: RSS
This page, updated regularly, aggregates Django Q&A from the Django community.
-
I am having an issue with htmx and Django
I am trying to create a social media add friend system with django and htmx. I keep getting 403 Forbbiden which is associated with invalid csrf token. I have tried to fix it but to no avail. views.py def get_friends(current_user): friend_ids = Friend.objects.filter( Q(user=current_user, request_accepted=True) | Q(requests_user=current_user, request_accepted=True) ).values_list('user', 'requests_user') # id of friends of current user in a list friend_ids = set( [user_id for sublist in friend_ids for user_id in sublist if user_id != current_user.id]) # user model of friends of current user friends = User.objects.filter(id__in=friend_ids) return friends def get_users_no_rel(current_user): sent_requests = Friend.objects.filter( user=current_user) list_sent_requests = sent_requests.values_list('requests_user', flat=True) received_requests = Friend.objects.filter( requests_user=current_user) list_received_requests = received_requests.values_list('user', flat=True) excluded_users = list(list_sent_requests) + \ list(list_received_requests) + \ [current_user.id] # list of users that have no relationship with the current user # user model of user in the excluded_users users = User.objects.exclude(id__in=excluded_users) return users def friends_view(request, **kwargs): current_user = request.user sent_requests = Friend.objects.filter( user=current_user) list_sent_requests = sent_requests.values_list('requests_user', flat=True) received_requests = Friend.objects.filter( requests_user=current_user) list_received_requests = received_requests.values_list('user', flat=True) excluded_users = list(list_sent_requests) + \ list(list_received_requests) + \ [current_user.id] # list of users that have no relationship with the current user # user model of user in the excluded_users users = User.objects.exclude(id__in=excluded_users) friends = get_friends(request.user) if request.htmx: … -
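A common cause of this 403 is that htmx requests do not include Django's CSRF token by default, so `CsrfViewMiddleware` rejects the POST. A hedged sketch of the usual fix (the URL name and element are illustrative): set `hx-headers` on a parent element so every htmx request carries the token in the `X-CSRFToken` header.

```html
<!-- every htmx request under <body> inherits this header -->
<body hx-headers='{"X-CSRFToken": "{{ csrf_token }}"}'>
    <!-- "add_friend" is a hypothetical URL name -->
    <button hx-post="{% url 'add_friend' user.id %}">Add friend</button>
</body>
```

Alternatively, if the htmx trigger is a `<form>`, placing `{% csrf_token %}` inside the form should also work, since htmx includes the form's fields in the request body.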
Django/Dash - injected script not executed but no errors
I'm trying to create a text area in a dash app which shall function as status window for the user. Updates shall be sent to the window via messaging (ws, channels, redis). html.Div(style={'flex': '0 0 auto', 'padding': '0px', 'boxSizing': 'border-box'}, children=[ dcc.Textarea( id='log-window', style={'width': '100%', 'height': '400px', 'resize': 'none'}, readOnly=True), html.Script(""" console.log('logging script loaded successfully!'); setTimeout(function() { var logWindow = document.getElementById('log-window'); if (logWindow) { var socket = new WebSocket('ws://127.0.0.1:8000/ws/logs/'); socket.onmessage = function (event) { var data = JSON.parse(event.data); logWindow.value += data.message + '\\n'; }; socket.onopen = function () { console.log('WebSocket connection opened'); }; socket.onerror = function (error) { console.error('WebSocket error:', error); }; socket.onclose = function () { console.log('WebSocket connection closed'); }; } else { console.error('logWindow element is null'); }; }); """) ]), The text area is created successfully but the injected script from above for starting the WebSocket connection is not executed. Nevertheless, the script is visible when inspecting the source code in the web browser. The console does not produce any error or hint. The text area is part of an iframe which holds the above dash application and allows scripts. The script was executed when I included it in the view.html but there were diffuclties in obtaining the … -
Microsoft Azure React-Django 403 Forbidden
I have made a React and Django web application and it is hosted on Azure. The app works when run on localhost, but when I run it on Azure I get a 403 Forbidden error on my POST requests, while my GET requests work fine. I have set up Django CORS/CSRF and it works fine when done locally. I set this up by following a tutorial, but they never had this issue. I have been trying to find a solution but have not found one that works. Note: React is a static web app, and the Django backend is a web app + database. I read the Azure documentation to try to find a solution, but lots of the relevant pages I found appeared to be outdated. I posted on Microsoft Questions with no help, watched many tutorials, and rewrote my Django code to see if that was the issue. I just have no ideas anymore.
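One frequent cause of "works locally, 403 on POST in production" with Django 4+ is that the deployed origin is not listed in `CSRF_TRUSTED_ORIGINS`, which must include the scheme. A hedged sketch of the relevant settings, with placeholder Azure hostnames standing in for the real ones:

```python
# settings.py -- hostnames below are placeholders for your Azure domains
CSRF_TRUSTED_ORIGINS = [
    "https://my-frontend.azurestaticapps.net",
    "https://my-backend.azurewebsites.net",
]

# django-cors-headers settings, needed because the React app is a separate origin
CORS_ALLOWED_ORIGINS = ["https://my-frontend.azurestaticapps.net"]
CORS_ALLOW_CREDENTIALS = True  # only if the SPA sends session/CSRF cookies
```

This is a configuration sketch, not a confirmed fix for this deployment; the exact origins depend on how the static web app and the backend are hosted.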
Why does Django still display migrations when none exist?
I'm trying to reset my migrations and start off fresh but I'm getting this really strange issue I can't seem to fix. I started off by deleting the Migrations folder and Deleting every table in Database I'm using (for context it's a MySQL database and I'm using MySQL Workbench to access it). Now after doing so I ran the python manage.py showmigrations command and it keeps showing me all the old migrations even though none should exist. Here is what I tried to do to fix this issue: I tried to restart the server multiple times with every change or action I did I deleted and created the database again with the same name I deleted and created the database again with a different name I created an empty Migrations folder in hopes that maybe it would fill up the folder with the files but it does not, and now I have an empty Migrations folder and still an empty database with no tables I deleted all the __pycache__ folders I could find I know this isn't database-sided because I re-created the database with a different name as mentioned. I know there is something that is keeping all this data … -
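For context, `showmigrations` lists the migration files Django can import and checkmarks the ones recorded in the `django_migrations` table of the connected database. If old migrations still appear after deleting everything, the process is usually either importing stale files (e.g. from a leftover `__pycache__` or a second copy of the app on the path) or connecting to a different database than the one that was wiped. One quick sanity check, run against the database your `DATABASES` setting actually points at:

```sql
-- if this table still has rows, the connected database
-- is not the one that was actually wiped
SELECT app, name, applied FROM django_migrations ORDER BY applied;
```

Also note that the migrations package must be a lowercase `migrations` folder containing an `__init__.py`; an empty folder without it will not be picked up as a package.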
How can I feed a Django queryset into an HTML datalist?
I am creating my own ERP system and I have it completely operational. I have a problem in which the user cannot seem to find articles which they want to add to the order. So I was looking at trying to do this 100% HTML. My idea is the following: In forms.py I have defined my article queryset like this: self.fields['article'].queryset = Article.objects.filter(deleted=False) Then I have my widgets like this : 'article' : forms.TextInput(attrs={'type': 'input', 'list':articles}) The articles list is : <!-- Datalists --> <datalist id="articles"> {% for article in articles %} <option value='{{article.id}}'>[{{article.articlenumber}}] {{article.name}}</option> {% endfor %} </datalist> Then in the form the articleid is sent as article. So I pick that up and in my views then: article = request.POST.get('article') article = Article.objects.get(id=article) -
Removing custom field default method in Django
I implemented a field in my Django model with a default value that calls a custom class method I wrote. I ran and applied the migrations to my production deployment. Recently, I've found the field is no longer necessary, and I want to remove it and the custom class method from my model: Model code: from django.db import models class OrganizationConfig(models.Model): ... def get_sla_default_value(): # type: ignore return { Priority.STAT: "1:00:00", Priority.ASAP: "2:00:00", Priority.TIMING_CRITICAL: "6:00:00", Priority.PREOPERATIVE: "24:00:00", Priority.ROUTINE: "48:00:00", } ... priority_slas = models.JSONField(default=get_sla_default_value) Migration: class Migration(migrations.Migration): dependencies = [ ("core", "0085_alter_hangingprotocol_modality_and_more"), ] operations = [ migrations.CreateModel( name="OrganizationConfig", fields=[ ... ( "priority_slas", models.JSONField( default=nl_backend.core.models.OrganizationConfig.get_sla_default_value ), ), ... ], ), ] However, the class method is referenced in the migration generated when I first implemented the logic (as shown above) and deleting the method results in an error when I try to apply my migrations: Traceback (most recent call last): File "/app/./manage.py", line 31, in <module> execute_from_command_line(sys.argv) File "/usr/local/lib/python3.9/site-packages/django/core/management/__init__.py", line 442, in execute_from_command_line utility.execute() File "/usr/local/lib/python3.9/site-packages/django/core/management/__init__.py", line 436, in execute self.fetch_command(subcommand).run_from_argv(self.argv) File "/usr/local/lib/python3.9/site-packages/django/core/management/base.py", line 412, in run_from_argv self.execute(*args, **cmd_options) File "/usr/local/lib/python3.9/site-packages/django/core/management/base.py", line 458, in execute output = self.handle(*args, **options) File "/usr/local/lib/python3.9/site-packages/django/core/management/base.py", line 106, in wrapper res = handle_func(*args, 
**kwargs) File "/usr/local/lib/python3.9/site-packages/django/core/management/commands/migrate.py", … -
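A common way out of this is to edit the historical migration so it no longer references the model method: either inline the literal value the callable produced at the time, or define a module-level function inside the migration file itself and point the `default` at that. The model method can then be deleted safely, and a later `RemoveField` migration drops the field. A minimal pure-Python sketch of the migration-local function (the function name is hypothetical, and plain string keys stand in for the `Priority` enum values from the source):

```python
# Inside the old migration module, replacing the reference to
# OrganizationConfig.get_sla_default_value. The migration only needs
# an importable callable that reproduces the historical default.

def sla_default_2023():  # hypothetical name; lives in the migration file
    return {
        "STAT": "1:00:00",
        "ASAP": "2:00:00",
        "TIMING_CRITICAL": "6:00:00",
        "PREOPERATIVE": "24:00:00",
        "ROUTINE": "48:00:00",
    }

# then in the operations list:
# ("priority_slas", models.JSONField(default=sla_default_2023)),
```

Squashing the migrations is the other standard route: the squashed migration can be written without the dead reference, after which the old files are deleted.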
Can we change _id to
I want to add nested data into MongoDB. I am working in Django, and I have tried both a foreign key and EmbedField, but I am unable to get nested data into the database; when I run migrations it creates two models in MongoDB. My models are: class Employee(models.Model): id=models.ObjectIdField() name = models.CharField(max_length=255) age = models.IntegerField() email=models.EmailField() phone_number=models.CharField(max_length=20) def __str__(self) -> str: return self.name class Company(models.Model): name = models.CharField(max_length=255) description = models.TextField() location = models.CharField(max_length=255) email=models.EmailField() employees = models.ArrayField( model_container=Employee, null=True, blank=True, ) def __str__(self): return self.name The data I am getting in the Company collection is like { "_id": { "$oid": "66c8b0c7fc77e7422da0e80d" }, "id": 2, "name": "dgdhg", "description": "abhdj", "location": "sdsgh", "email": "xeno@gmail.com" } and I want something like this: { "name": "ganesh", "location": "Guitar", "description": "abhdj", "email": "xeno@gmail.com", "employees": [ { "id": 2, "name": "John", "age": 23, "email": "adeebhassi@gmail.com", "company": 2, "phone_number": "2394820948" } ] }
Can't update data in the database via PATCH method in Django
I have a model of items, and i need to write CRUD operations with data. Post and get works, but patch - no, and i can`t understand why serializers.py class CreateItemSerializer(serializers.ModelSerializer): photo = serializers.ImageField(max_length=None, allow_empty_file=True, allow_null=True) class Meta: model = CreateItem fields = '__all__' def create(self, validated_data): items = CreateItem.object.create_item( name = validated_data.get('name'), description = validated_data.get('description'), type_item = validated_data.get('type_item'), photo=validated_data.get('photo') ) return items def update(self, instance, validated_data): instance.name = validated_data.get('name', instance.name) instance.description = validated_data.get('description', instance.description) instance.type_item = validated_data.get('type_item', instance.type_item) instance.photo = validated_data.get('photo', instance.photo) instance.save() return instance views.py class CreateItemView(APIView): serializer_class = CreateItemSerializer def post(self, request): serializer = self.serializer_class(data=request.data) serializer.is_valid(raise_exception=True) serializer.save() return Response(_('item created successfully'), status=status.HTTP_201_CREATED) def get(self, request, pk, format=None): item = [item.name for item in CreateItem.object.all()] description = [item.description for item in CreateItem.object.all()] type_item = [item.type_item for item in CreateItem.object.all()] return Response({'name':item[pk], 'description':description[pk], 'type_item':type_item[pk]}, status=status.HTTP_200_OK) def patch(self, request, pk): serializer = CreateItemSerializer(data=request.data) serializer.is_valid(raise_exception=True) return Response(_("item updated successfully"), status=status.HTTP_200_OK) when i call patch method, "all works" but data doesn`t change -
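Reading the `patch` method closely: it validates the incoming data but never touches the database, because the serializer is constructed without an `instance`, so `serializer.save()` is never called and DRF never dispatches to `update()`. A hedged sketch of the usual shape (model and serializer names taken from the question, error handling elided):

```python
from django.shortcuts import get_object_or_404

def patch(self, request, pk):
    item = get_object_or_404(CreateItem, pk=pk)
    # passing the instance plus partial=True makes save() call update()
    serializer = CreateItemSerializer(item, data=request.data, partial=True)
    serializer.is_valid(raise_exception=True)
    serializer.save()
    return Response(_("item updated successfully"), status=status.HTTP_200_OK)
```

This is a fragment of the view class, not a complete drop-in; with the instance present and `save()` actually called, the PATCH should persist changes.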
Celery and Redis Command Overload Despite Optimizations in Django App
I’m facing an issue with my Celery + Redis setup for a Django project where the number of Redis commands being executed is skyrocketing despite having made several optimizations. I’m using Upstash Redis with the free tier, which has a daily limit of 10,000 commands. However, within an hour of starting the app, I’m already hitting thousands of Redis commands, primarily BRPOP, PUBLISH, and PING. This is happening even when the app is idle. Context: Django Version: 4.2 Celery Version: 5.4 Redis Broker: Upstash Redis Free Tier Deployment: DigitalOcean (App platform) App Details: The app is primarily used for scheduling and sending reminder emails using Celery tasks. Here’s a brief overview: I have Celery workers running for email sending tasks. There is no celery.beat setup. Tasks are invoked on-demand by the application logic. The app is relatively idle most of the time, with tasks being triggered only a few times a day. Optimizations I’ve Already Tried: Result Backend Disabled: I set CELERY_RESULT_BACKEND = None to avoid unnecessary reads from Redis. Heartbeat Adjustment: I increased the CELERY_BROKER_TRANSPORT_OPTIONS['heartbeat'] to 120 seconds, hoping it would reduce the command load. Prefetch Multiplier: Set CELERY_WORKER_PREFETCH_MULTIPLIER = 1 to ensure that only one task is fetched … -
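Much of the idle traffic described here comes from the worker machinery itself: `BRPOP` is the broker poll for new tasks, and the `PUBLISH`/`PING` commands typically come from worker gossip, mingle, and the remote-control (pidbox) channel. Under a command-quota broker like Upstash, the usual mitigations are to switch those subsystems off. A hedged sketch (standard Celery options; whether they get you under 10k commands/day depends on worker count and poll behavior):

```python
# celery.py -- reduce idle Redis chatter
app.conf.worker_enable_remote_control = False  # stops control-queue PUBLISH traffic
app.conf.broker_heartbeat = 0                  # disable broker heartbeats
```

and start the worker with `celery -A proj worker --without-gossip --without-mingle --without-heartbeat`. Note that each idle worker connection still re-issues `BRPOP` periodically, so running fewer workers/queues also directly reduces the command count.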
How to properly use templates as variables in Django?
In general, I wanted to use TemplateView from Django to provide responses to users. Since my website will have many frames with content blocks as separate sections, I thought it would be very convenient to store them as class objects, each having its own variables and HTML files (templates). The main idea was to include multiple templates within a single TemplateView (in other words, many small TemplateViews within one TemplateView). The main problem I encountered is that the context_object_name variables conflict with names in other templates. How can this be resolved? Ideally, it would be great if the variable could be created locally for a specific template. For example, I will often refer to the rfq-details.html template, and there will be many of them, so it would be perfect if each table value could be enclosed in a variable that doesn't conflict with others. <div class="rfq-details"> <table> <tr><td>Name </td></tr> <tr><td>Main Characteristics of the Product </td></tr> <tr><td>Consumer Characteristics of the Product </td></tr> <tr><td>Type and Volume of Packaging </td></tr> <tr><td>Quantity (volume) Requested </td></tr> <tr><td>Terms of Delivery (INCOTERMS) </td></tr> <tr><td>Terms of Payment </td></tr> <tr><td>Expected Price Per Unit of Goods </td></tr> <tr><td>Compliance with Sanitary and Phytosanitary Standards </td></tr> <tr><td>Transportation Conditions </td></tr> <tr><td>Unloading Area </td></tr> … -
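One way to get the locally-scoped variables described above is `{% include %}` with the `with ... only` form, which renders the sub-template in an isolated context so its names cannot collide with other templates. A hedged sketch (variable names are illustrative):

```html
{# each include sees only the variables explicitly passed to it #}
{% for rfq in rfqs %}
    {% include "rfq-details.html" with details=rfq only %}
{% endfor %}
```

Inside `rfq-details.html`, table values would then be read from `details` (e.g. `{{ details.name }}`), and every rendered copy gets its own independent value.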
Django: create record from form post in another app
I have 3 apps within a project: registrations Model Registration learners Model:Learner track Model: LearnerTrack When I create a learner object I've a view in learner_track which had been receiving the learner instance and creating relevant tracking records based on the course the learner was registering on. However, when creating a foreign key relationship it stops working and gives an error: Cannot assign "UUID('11456134-fab7-41f7-a7be-6bdaf423d1c6')": "LearnerTrack.learnerId" must be a "Learner" instance. Learner model.py class Learner(models.Model): class Status(models.TextChoices): LIVE = 'Live', 'Live' COMPLETE = 'Complete', 'Complete' DEFERRED = 'Deferred', 'Deferred' WITHDRAWN = 'Withdrawn', 'Withdrawn' status = models.CharField( max_length=25, choices = Status, default = Status.LIVE ) learnerId = models.UUIDField(primary_key=True, default=uuid.uuid4) contractId = models.PositiveIntegerField(default = 1) firstname = models.CharField(max_length=150) middlename = models.CharField(max_length=150, blank=True, null=True) surname = models.CharField(max_length=150) uln = models.ForeignKey(Registration, on_delete=models.CASCADE) postcode = models.CharField(max_length=8) area = models.CharField(max_length=10) created = models.DateTimeField(auto_now_add=True) funding = models.SmallIntegerField() groupId = models.ForeignKey(Group, on_delete=models.PROTECT) courseId = models.ForeignKey(Course, default = 'WMCASec', on_delete=models.CASCADE) postcode_code = models.FloatField(default=1.0) notes = models.TextField(blank = True, null = True) objects = models.Manager() #The default manager live = LiveManager() complete = CompleteManager() deferred = DeferredManager() withdrawn = WithdrawnManager() def clean(self): if self.postcode: self.postcode = self.postcode.replace(" ","").upper() #all postcode data saved in same way def __str__(self): return f"{self.learnerId}" def get_absolute_url(self): … -
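The error message is literal: a raw UUID is being assigned where Django expects a `Learner` instance. Either fetch the instance first, or assign the raw key to the underlying column attribute, which for a field named `learnerId` is `learnerId_id`. A sketch using names from the question:

```python
# option 1: fetch the related instance first
learner = Learner.objects.get(pk=learner_uuid)
track.learnerId = learner

# option 2: assign the raw UUID to the column attribute Django generates
track.learnerId_id = learner_uuid
```

Option 2 skips a database lookup but also skips validation that the referenced row exists until save time.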
"collectstatic" takes too long with Cloudinary, works instantly with local storage
I'm having a serious issue with Django's collectstatic command when using Cloudinary for static file storage. The command takes an unreasonable amount of time to complete, but when I switch to local storage, it runs in under a second. My Setup: Django Version: 5.0.7 Cloudinary Storage Backend: cloudinary_storage.storage.StaticHashedCloudinaryStorage Local Storage: Runs instantly Cloudinary Storage: Takes several minutes (or longer) My settings.py: CLOUDINARY_STORAGE = { 'CLOUD_NAME': os.environ.get('CLOUD_NAME'), 'API_KEY': os.environ.get('API_KEY'), 'API_SECRET': os.environ.get('API_SECRET'), } STORAGES = { 'default': { 'BACKEND': 'cloudinary_storage.storage.MediaCloudinaryStorage', }, 'staticfiles': { 'BACKEND': 'cloudinary_storage.storage.StaticHashedCloudinaryStorage', }, } STATIC_URL = '/static/' STATICFILES_DIRS = [BASE_DIR / "static"] STATIC_ROOT = BASE_DIR / "staticfiles" Increasing verbosity: python manage.py collectstatic --noinput --verbosity=3 This shows that the process hangs when checking if files exist on Cloudinary. Using --clear flag: python manage.py collectstatic --noinput --verbosity=3 --clear Didn't help with the speed. Temporarily switching to local storage: This solves the issue, but I need to use Cloudinary in production. Increasing Cloudinary timeout: I added a TIMEOUT value to my Cloudinary configuration, but this also didn't resolve the issue. Using --ignore-existing: Found out that this option doesn't exist in Django. My Cloudinary Status: There was a recent issue with Cloudinary's Aspose Document Conversion Addon, which was marked as resolved. However, the … -
Moving a Python project to another PC without installing packages
I have a project that works with Python and some packages and uses a venv. I compressed it and transferred it to another PC; the packages are in the site-packages folder. When I run the command python manage.py runserver it doesn't work. I tried using the python.exe in the Scripts folder of my venv, but it doesn't work. I also changed the path in the cfg and activate files, but it is still not working. Do you have an opinion, a suggestion, or another way to do that?
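For background: virtual environments are not relocatable; they embed absolute paths to the interpreter that created them, which is why patching `pyvenv.cfg` and `activate` rarely works. The standard approach is to ship a requirements file and recreate the venv on the target machine. A hedged command sketch (Windows paths shown; not a single runnable script, since the two halves run on different PCs):

```
# on the old PC, from inside the activated venv
pip freeze > requirements.txt

# on the new PC, next to the copied project
python -m venv venv
venv\Scripts\activate          # use: source venv/bin/activate on Linux/macOS
pip install -r requirements.txt
python manage.py runserver
```

If the target machine truly has no internet access, `pip download -r requirements.txt -d wheels/` on the old PC followed by `pip install --no-index --find-links wheels/ -r requirements.txt` on the new one is the usual offline variant.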
Django: improve performance when dumping data
I have a main model called MainModel and n models ModelA, ModelB ... related with MainModel with a ForeingKey. I want to export n csv each one made from 10k MainModels . This is the code: import csv import io import boto3 from django.core.management.base import BaseCommand from django.conf import settings import datetime import time MODELS = [ 'ModelA', 'ModelB', 'ModelC', 'ModelD', 'ModelE', 'ModelF', 'ModelG', 'ModelH', 'ModelI', 'ModelJ', 'ModelK', 'ModelL', 'ModelM', 'ModelN', 'ModelO', 'ModelP', 'ModelQ', 'ModelR', 'ModelS', 'ModelT', 'ModelU', ] CHUNK_SIZE = 10000 class Command(BaseCommand): help = 'Dump data of all models related to a given club_id into multiple CSV files on S3' def add_arguments(self, parser): parser.add_argument('club_id', type=int, help='The club_id for filtering the data') parser.add_argument('operation_id', type=str, help='The rsync operation ID to structure the folder on S3') parser.add_argument('--output', type=str, default='output.csv', help='Base output file name (default: output.csv)') def handle(self, *args, **kwargs): club_id = kwargs['club_id'] operation_id = kwargs['operation_id'] output_file = kwargs['output'] # Retrieve models using the name in MODELS models = [get_model_from_name(model_name) for model_name in MODELS] s3 = boto3.client('s3') bucket_name = 'your-s3-bucket-name' server = settings.MY_ENVIRONMENT if settings.MY_ENVIRONMENT else "default_env" folder_name = f"{server}-{operation_id}/data" mainmodel = MainModel.objects.filter(club_id=club_id) total_sessions = mainmodel.count() all_fields = sorted(sorted(set(field.name for model in models for field in model._meta.fields)) + ['model']) for start in … -
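Independent of the ORM side, the CSV assembly itself stays cheap if whole chunks are written with `writerows` into an in-memory buffer before each S3 upload, rather than row by row. A minimal pure-Python sketch of that pattern (the dict rows are a hypothetical stand-in for `queryset.values(...)` output; field names are illustrative):

```python
import csv
import io

def rows_to_csv_chunks(rows, fieldnames, chunk_size):
    """Yield CSV strings, each holding up to chunk_size rows plus a header."""
    for start in range(0, len(rows), chunk_size):
        buf = io.StringIO()
        writer = csv.DictWriter(buf, fieldnames=fieldnames)
        writer.writeheader()
        writer.writerows(rows[start:start + chunk_size])
        yield buf.getvalue()  # one upload-ready payload per chunk

# hypothetical stand-in for MainModel.objects.values("id", "club_id")
rows = [{"id": i, "club_id": 7} for i in range(25)]
chunks = list(rows_to_csv_chunks(rows, ["id", "club_id"], chunk_size=10))
# 25 rows at 10 per chunk -> 3 CSV payloads
```

On the query side, `queryset.values(*fields).iterator(chunk_size=...)` avoids both model instantiation and loading the full result set into memory, which is usually where most of the dump time goes.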
Django POST not entering data into SQLite DB: form not valid
I am trying to get my post data into my SQL database but I just get forms not valid: questions.html (Trimed since the same code is repeated for question2 and question3 replace question1 in code <form action = "" method = "POST"> {% csrf_token %} <div> <p class="t1">Top management demonstrates business continuity leadership and commitment</p> </div> <table class="center"> <thead> <tr> <th>Risk Area</th> <th>Not Sure</th> <th> 1 - Not at All</th> <th> 2 - Somewhat</th> <th>3 - Average</th> <th>4 - Above Average</th> <th>5 - Outstanding</th> </tr> </thead> <tbody> <td colspan="7">{{ topic.0 }}</td> {% for question in questions1 %} <tr> <td class="question">{{question}}</td> {% for n in nchoices %} {% if n == 0 %} <td> <input name= {{question.id}} type="radio" value={{ n }} id="{{name}}" /><lable> Not Sure</lable> </td> {% else %} <td> <input name={{question.id}} type="radio" value={{ n }} id="{{name}}" /><lable> {{n}}</lable> </td> {% endif %} {% endfor %} </tr> {% endfor%} {% endfor%} </table> </table> <h4>Enter any comments about your responses to the questions</h4> <textarea name="{{ textname }}">"Enter Comments Here"</textarea > <input type="submit"> </form> models.py class Answer(models.Model): value = models.IntegerField(null = True) name = models.CharField(max_length=20, null = True) # q_id = models.CharField(max_length=10, default="XX") # USed foreignkey instead question = models.ForeignKey( Question, on_delete=models.CASCADE, null … -
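Since the radio inputs in the template are hand-built (named after each question's id) rather than rendered from a Django form, a bound form is unlikely to validate against them; the field names will not match the form's fields. One hedged alternative is to skip the form and read the radios straight from `request.POST` (a sketch; names follow the models in the question, validation elided):

```python
# view fragment: each radio is named by question id, value is 0-5
for question in Question.objects.all():
    raw = request.POST.get(str(question.id))
    if raw is not None:
        Answer.objects.create(question=question, value=int(raw))
```

Separately, the template's attribute values should be quoted (`name="{{ question.id }}"`, `value="{{ n }}"`), and `<lable>` is a typo for `<label>`; unquoted or malformed attributes can silently change what the browser submits.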
Celery with Redis as Broker Fails with SSL Warnings and Build Fails on DigitalOcean App Platform
I'm deploying a Django app on DigitalOcean's App Platform, using Celery for background tasks with Redis as the broker (Upstash Redis). When I deploy the app on DigitalOcean, I keep getting SSL-related warnings, and eventually, the build fails. Here is the repeated warning that I see in the logs: WARNING/MainProcess: Secure redis scheme specified (rediss) with no ssl options, defaulting to insecure SSL behavior. Setting ssl_cert_reqs=CERT_NONE when connecting to redis means that celery will not validate the identity of the redis broker when connecting. This leaves you vulnerable to man-in-the-middle attacks. When I configure ssl_cert_reqs to CERT_REQUIRED, the warnings still persist, and my deployment fails with the following log (partial): [2024-08-23 20:09:15,471: INFO/MainProcess] Connected to rediss://default:**@refined-cub-54199.upstash.io:6379// [2024-08-23 20:09:15,472: WARNING/MainProcess] Secure redis scheme specified (rediss) with no ssl options, defaulting to insecure SSL behaviour. Here is the relevant part of my settings.py file: CELERY_BROKER_URL = 'rediss://default:<password>@refined-cub-54199.upstash.io:6379' CELERY_RESULT_BACKEND = 'rediss://default:<password>@refined-cub-54199.upstash.io:6379' CELERY_ACCEPT_CONTENT = ['json'] CELERY_TASK_SERIALIZER = 'json' CELERY_RESULT_SERIALIZER = 'json' CELERY_TIMEZONE = 'Asia/Kuala_Lumpur' CELERY_REDIS_BACKEND_USE_SSL = { 'ssl_cert_reqs': 'CERT_REQUIRED', 'ssl_ca_certs': BASE_DIR / 'certs/upstash-ca-chain.pem', } CELERY_BROKER_TRANSPORT_OPTIONS = { 'visibility_timeout': 3600, 'ssl': { 'ssl_cert_reqs': 'CERT_REQUIRED', 'ssl_ca_certs': BASE_DIR / 'certs/upstash-ca-chain.pem', } } I have tried the following: Setting ssl_cert_reqs to CERT_NONE, which reduces the warnings but doesn't … -
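One detail worth checking: the warning is about the broker connection, and `CELERY_REDIS_BACKEND_USE_SSL` only configures the result backend. For the broker, the SSL options go either into the URL itself or into Celery's `broker_use_ssl` setting. A hedged sketch, assuming the standard `CELERY_`-prefixed config names:

```python
import ssl

# option 1: pass the requirement in the broker URL query string
CELERY_BROKER_URL = (
    "rediss://default:<password>@refined-cub-54199.upstash.io:6379/0"
    "?ssl_cert_reqs=CERT_REQUIRED"
)

# option 2: configure it as a broker SSL dict (Celery's broker_use_ssl)
CELERY_BROKER_USE_SSL = {"ssl_cert_reqs": ssl.CERT_REQUIRED}
```

Either should silence the "no ssl options" warning for the broker; the CA-bundle path is only needed if the host's default trust store cannot verify Upstash's certificate.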
How to get a single nested child object in a parent serialized object when models are related by foreign key?
I have related models like: class ModelC(models.Model): name = models.CharField() class ModelA(models.Model): state = models.Charfield() modelc = models.ForeingKey(ModelC, through='ModelAModelCRelation') class ModelAModelCRelation(models.Model): modelc = models.ForeignKey(ModelC) modela = models.ForeignKey(ModelA) class ModelAChild(models.Model): parent_model = models.ForeignKey(ModelA) active = models.BooleanField(default=True) type = model.CharField(choices=['a', 'b', 'c', 'd']) updated_at = models.DateTimeField() ModelC is related to ModelA with many to many relation, ModelA is related to ModelC throug ModelAModelCRelation relation, ModelAChild is related to ModelA with ForeignKey, which have many records related with ModelA record And the related serializers are: class ModelCSerializer(serializers.ModelSerializer): modelA_objects = ModelASerializer(many=True, read_only=False) class Meta: model = ModelC fields = StateBaseSerializer.Meta.fields + [ "id", "name", "modelA_objects", ] class ModelASerializer(serializers.ModelSerializer): modelAChild_objects = ModelAChildSerializer(many=True, read_only=True) class Meta: model = ModelA fields = ( 'id', 'state', 'modelAChild_objects', ) class ModelAChildSerializer(serializers.ModelSerializer): class Meta: model = ModelAChild fields = ( 'id', 'active', 'type', ) This approach is returning result like this: [ { 'modelC_obj': { 'id': x, 'name': 'object name', 'modelA_obj': [ { 'id': 1, 'state': 'state name', 'modelAChild_obj': [ { 'id': 1, 'active': true, 'type': 'a' }, { 'id': 2, 'active': false, 'type': 'b' }, { 'id': 3, 'active': true, 'type': 'd' }, ] } ] } } ] But, I need a serialized result like: [ { 'modelC_obj': … -
Trouble deploying my Django app on Render with a dockerized database
I am having trouble while deploying my Django application on render. I used the render Postgres database which expired and deleted. So, that's why I am trying to use a dockerized database so that it doesn't have any time limit and database accessibility for a lifetime. After containerizing my Django application, it works well on my local machine and I could push it on the docker hub. Whatever, as my application was pre-deployed and configured with the main branch. I am having this error: OperationalError at / could not translate host name "db_social" to address: Name or service not known Request Method: GET Request URL: https://network-project-5q7j.onrender.com/ Django Version: 5.0 Exception Type: OperationalError Exception Value: could not translate host name "db_social" to address: Name or service not known Exception Location: /opt/render/project/src/.venv/lib/python3.11/site-packages/psycopg2/__init__.py, line 122, in connect Raised during: network.views.index Python Executable: /opt/render/project/src/.venv/bin/python3.11 Python Version: 3.11.3 Python Path: ['/opt/render/project/src', '/opt/render/project/src/.venv/bin', '/opt/render/project/python/Python-3.11.3/lib/python311.zip', '/opt/render/project/python/Python-3.11.3/lib/python3.11', '/opt/render/project/python/Python-3.11.3/lib/python3.11/lib-dynload', '/opt/render/project/src/.venv/lib/python3.11/site-packages'] Server time: Fri, 23 Aug 2024 10:12:12 +0000 This is what I got on render. I would like to share my docker-compose.yml version: "3.11" services: db_social: image: postgres volumes: - ./data/db_social:/var/lib/postgresql/data environment: - POSTGRES_DB=social - POSTGRES_USER=postgres - POSTGRES_PASSWORD=qnr63363 web: build: . command: python manage.py runserver 0.0.0.0:8000 volumes: - .:/code … -
Confusion while making Docker serve static files (configuration issue)
My problem was just to configure my wagtail project settings.py to serve static file correctly and i tried search about this topic but docker + wagtail(wagtail into existed django project) not on internet pls help me if you are good at it: Dockerfile: FROM python:3.9-slim-buster LABEL maintainer="londonappdeveloper.com" ENV PYTHONUNBUFFERED=1 # Copy application files COPY ./requirements.txt /requirements.txt COPY . /app # COPY except_nextgen.sql /docker-entrypoint-initdb.d/except_nextgen.sql #added during uwsgi server COPY ./scripts /scripts WORKDIR /app EXPOSE 8000 # Set up virtual environment RUN python -m venv /py && \ /py/bin/pip install --upgrade pip # Install dependencies RUN apt-get update && apt-get install --no-install-recommends -y \ build-essential \ libpq-dev \ gcc \ exiftool \ imagemagick \ libmagickwand-dev \ libmagic1 \ && apt-get purge -y --auto-remove -o APT::AutoRemove::RecommendsImportant=false \ && rm -rf /var/lib/apt/lists/* # Install Python dependencies RUN /py/bin/pip install -r /requirements.txt # Set up directories for static and media files RUN adduser --disabled-password --no-create-home app && \ mkdir -p /vol/web/static && \ mkdir -p /vol/web/media && \ chown -R app:app /vol && \ chmod -R 755 /vol && \ chmod -R +x /scripts # original:ENV PATH="/py/bin:$PATH" updated(uwsgi):ENV PATH="/scripts:/py/bin:$PATH" ENV PATH="/scripts:/py/bin:$PATH" # ENV PATH="/py/bin:$PATH" USER app CMD ["run.sh"] settings.py: import os from django.utils.translation import gettext_lazy … -
How can I safely use multiprocessing in a Django app?
I’ve read the docs suggesting that multiprocessing may cause unintended side effects in Django apps or on Windows, especially those connected to multiple databases. Specifically, I'm using a function, load_to_table, to create multiple CSV files from a DataFrame and then load the data into a PostgreSQL table using multiprocessing. This function is deeply integrated within my Django app and is not a standalone script. I am concerned about potential long-term implications if this code is used in production. Additionally, if __name__ == '__main__': does not seem to work within the deep files/functions of Django. This is because Django's management commands are executed in a different context where __name__ is not set to "__main__", which prevents this block from being executed as expected. Moreover, multiprocessing guidelines recommend using if __name__ == '__main__': to safely initialize multiprocessing tasks, as it ensures that code is not accidentally executed multiple times, especially on platforms like Windows where the module-level code is re-imported in child processes. Here is the code I am using: import os import glob import shutil from multiprocessing import Pool, cpu_count from functools import partial def copy_to_table(connection, file_name: str, table_name: str, columns: list): cursor = connection.cursor() with open(file_name, "r") as f: cursor.copy_from(f, … -
Django Admin Site Not Enforcing Two-Factor Authentication (2FA) with django-otp and django-two-factor-auth
Problem Description: I am trying to enforce two-factor authentication (2FA) for the Django admin site using the django-otp and django-two-factor-auth packages. Despite following the setup steps, the admin login does not require 2FA and allows users to log in with just their username and password. My Setup Django Version: 4.2.11 django-otp Version: 1.5.2 django-two-factor-auth Version: 1.17.0 Python Version: 3.10 What i've done Installed Required Packages: pip install django-otp django-two-factor-auth Updated INSTALLED_APPS in settings.py: INSTALLED_APPS = [ 'django.contrib.contenttypes', 'django.contrib.auth', 'django.contrib.sessions', 'django.contrib.admin', 'django_otp', 'django_otp.plugins.otp_email', 'two_factor', 'two_factor.plugins.email', ... ] Configured Middleware in settings.py: MIDDLEWARE = [ 'django.middleware.common.CommonMiddleware', 'django.contrib.auth.middleware.AuthenticationMiddleware', 'django_otp.middleware.OTPMiddleware', 'two_factor.middleware.threadlocals.ThreadLocals', ... ] Patched the Admin Site in urls.py: from django.contrib import admin from two_factor.admin import AdminSiteOTPRequired admin.site.__class__ = AdminSiteOTPRequired urlpatterns = [ path('admin/', admin.site.urls), path('account/', include(('two_factor.urls', 'two_factor'), namespace='two_factor')), ... ] 2FA Settings in settings.py: TWO_FACTOR_PATCH_ADMIN = True TWO_FACTOR_LOGIN_URL = 'two_factor:login' LOGIN_REDIRECT_URL = '/admin/' LOGIN_URL = 'two_factor:login' Added 2FA Devices: I added an email device using the two_factor_add management command. The issue Even after following these steps, the admin login page does not prompt for 2FA. It allows me to log in directly with just the username and password, bypassing the 2FA requirement entirely. Errors in logs Here are some relevant log entries when … -
I'm having trouble setting cookies with response.set_cookie: request.COOKIES returns an empty dictionary in my Django and React.js app using axios
I am developing an authentication application using Django REST Framework and React.js with JWT. Login and registration work, but when I try to access the logged-in user's details it says unauthenticated: request.COOKIES.get("jwt") finds nothing because request.COOKIES is an empty dictionary. The frontend also reports in the Response Headers: this Set-Cookie header didn't specify a "SameSite" attribute and was defaulted to "SameSite=Lax", and was blocked because it came from a cross-site response... Here is the views.py ... class LoginView(APIView): def post(self, request): email = request.data["email"] password = request.data["password"] user = User.objects.filter(email=email).first() if user is None: raise AuthenticationFailed("Incorrect Email!") if not user.check_password(password): raise AuthenticationFailed("Incorrect Password!") payload = { 'id': user.id, 'exp': datetime.datetime.utcnow() + datetime.timedelta(minutes=60), 'iat': datetime.datetime.utcnow() } token = jwt.encode(payload, 'secret', algorithm='HS256') response = Response() response.set_cookie(key='jwt', value=token, httponly=True) response.data = { 'jwt': token } return response class UserView(APIView): def get(self, request): token = request.COOKIES.get("jwt") print(request.COOKIES) if not token: raise AuthenticationFailed("UnAuthenticated 1!") try: payload = jwt.decode(token, 'secret', algorithms=['HS256']) except jwt.ExpiredSignatureError: raise AuthenticationFailed("UnAuthenticated 2!") user = User.objects.filter(id=payload['id']).first() serializer = UserSerializer(user) return Response(serializer.data) ... Here is settings.py ... ALLOWED_HOSTS = [] # Application definition INSTALLED_APPS = [ ... 'corsheaders', 'rest_framework', ... ] MIDDLEWARE = [ "django.middleware.security.SecurityMiddleware", "django.contrib.sessions.middleware.SessionMiddleware", "corsheaders.middleware.CorsMiddleware", … -
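The browser warning quoted in the question points at the likely cause: a cookie from a cross-site response is only stored if it is marked SameSite=None and Secure. The required attributes can be illustrated framework-free with the stdlib http.cookies module; the Django equivalent would be response.set_cookie("jwt", token, httponly=True, samesite="None", secure=True) — an assumption about the fix, mirroring the warning, not a verified resolution of this exact setup.

```python
# Framework-free illustration of the Set-Cookie attributes a cross-site
# response needs for the browser to store and later return the cookie.
from http.cookies import SimpleCookie

cookie = SimpleCookie()
cookie["jwt"] = "token-value"
cookie["jwt"]["httponly"] = True    # not readable from JavaScript
cookie["jwt"]["samesite"] = "None"  # allow storing it from a cross-site response
cookie["jwt"]["secure"] = True      # SameSite=None is only honoured over HTTPS

header = cookie.output(header="Set-Cookie:")
```

On the client side the requests would also need withCredentials: true in axios, and on the Django side the CORS layer would need CORS_ALLOW_CREDENTIALS = True with explicit CORS_ALLOWED_ORIGINS (a wildcard origin is not allowed when credentials are enabled).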
Dynamic fields in Django admin from the form's __init__ method
I have some models: class Variation(models.Model): name = models.CharField(max_length=100) def __str__(self): return self.name class VariationOption(models.Model): value = models.CharField(max_length=100) variation = models.ForeignKey(Variation, on_delete=models.CASCADE) def __str__(self): return self.value class BaseProduct(models.Model): name = models.CharField(max_length=100) variations = models.ManyToManyField(Variation, blank=True) class Product(models.Model): base_product = models.ForeignKey(BaseProduct, on_delete=models.CASCADE, null=True, blank=True) variation_options = models.ManyToManyField(VariationOption, null=True, blank=True) and an admin form where I dynamically add fields in the __init__ method: class ProductAdminInlineForm(forms.ModelForm): class Meta: model = Product exclude = ['variation_options'] def __init__(self, base_product, *args, **kwargs): super(ProductAdminInlineForm, self).__init__(*args, **kwargs) if base_product: self.base_product = base_product variations = self.base_product.variations.all() for variation in variations: field_name = f'variation_{variation.id}' self.fields[field_name] = forms.ModelMultipleChoiceField( queryset=VariationOption.objects.filter(variation=variation), required=True, widget=forms.CheckboxSelectMultiple, label=variation.name ) Here are my admin models: class ProductInline(admin.TabularInline): model = Product exclude = ['variation_options'] form = ProductAdminInlineForm class BaseProductAdmin(admin.ModelAdmin): model = BaseProduct list_display = ['__str__'] inlines = [ProductInline] def get_formset_kwargs(self, request, obj, inline, prefix, **kwargs): return { **super().get_formset_kwargs(request, obj, inline, prefix), 'form_kwargs': {"base_product": obj}, } I'm trying to display the dynamic fields declared in ProductAdminInlineForm's __init__ method on the admin page, but it looks like ProductInline never calls that __init__ to build the form fields. How can I achieve this?
I tried to override the get_fields and get_fieldsets methods in the ProductInline class: def get_fieldsets(self, request, obj=None): fieldsets = super(ProductInline, self).get_fieldsets(request, obj) if …
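Independent of the admin wiring, the form's signature itself may be part of the problem: formsets pass form_kwargs to each form as keyword arguments, while def __init__(self, base_product, *args, **kwargs) expects a positional one. The keyword-popping pattern can be shown framework-free (class names here are stand-ins, not Django APIs):

```python
# FakeModelForm stands in for forms.ModelForm: it must never receive the
# extra keyword, so the subclass claims it in its own signature first.
class FakeModelForm:
    def __init__(self, *args, **kwargs):
        if kwargs:
            raise TypeError(f"unexpected kwargs: {kwargs}")
        self.fields = {}

class InlineForm(FakeModelForm):
    def __init__(self, *args, base_product=None, **kwargs):
        super().__init__(*args, **kwargs)  # base_product never reaches the base
        if base_product is not None:
            # stands in for building the variation_<id> fields
            self.fields["variation_1"] = base_product

form = InlineForm(base_product="shoes")
```

In the real code the same shape would be def __init__(self, *args, base_product=None, **kwargs) in ProductAdminInlineForm, keeping get_formset_kwargs as shown in the question — a sketch under the assumption that the formset is forwarding form_kwargs correctly.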
Unable to Upload Files to S3 from Django on Amazon Lightsail, Despite Working Fine Locally with Same Credentials and Policy
I'm running into an issue where I can successfully upload files to my S3 bucket locally, but I encounter problems when trying to upload from my server. Here are the details: Django Settings Settings.py (relevant parts): STATIC_URL = "https://my-cdn.s3.amazonaws.com/static/" STATICFILES_STORAGE = env("STATICFILES_STORAGE") AWS_ACCESS_KEY_ID = env("AWS_ACCESS_KEY_ID") AWS_SECRET_ACCESS_KEY = env("AWS_SECRET_ACCESS_KEY") AWS_STORAGE_BUCKET_NAME = env("AWS_STORAGE_BUCKET_NAME") AWS_S3_REGION_NAME = env("AWS_S3_REGION_NAME") DEFAULT_FILE_STORAGE = env("DEFAULT_FILE_STORAGE") MEDIA_URL = env("MEDIA_URL") S3 Policy: { "Version": "2012-10-17", "Statement": [ { "Sid": "PublicReadGetObject", "Effect": "Allow", "Principal": "*", "Action": "s3:GetObject", "Resource": "arn:aws:s3:::my-cdn/*" }, { "Sid": "AllowPutObject", "Effect": "Allow", "Principal": { "AWS": "arn:aws:iam::id:user/my-cdn" }, "Action": "s3:PutObject", "Resource": "arn:aws:s3:::my-cdn/*" } ] } Issue: Local Environment: Uploading files to the S3 bucket works perfectly. Amazon Lightsail Server: Uploads fail (No logs whatsoever), but the credentials and policy are the same. -
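Since the same credentials work locally and the server fails with no logs at all, one plausible first suspect is that env() finds nothing on Lightsail: process managers such as systemd or supervisor do not load a project's .env file by default, so the service's environment can differ from the login shell's. A small stdlib-only check that could be run on the server (the variable names are the ones from the settings above):

```python
import os

# The settings read these through env(); if any is unset in the *service's*
# environment (not just your SSH shell), S3 uploads can fail silently.
REQUIRED = [
    "AWS_ACCESS_KEY_ID",
    "AWS_SECRET_ACCESS_KEY",
    "AWS_STORAGE_BUCKET_NAME",
    "AWS_S3_REGION_NAME",
]

def missing_aws_settings(environ=os.environ):
    """Return the required names that are unset or empty."""
    return [name for name in REQUIRED if not environ.get(name)]
```

If everything is present, a next step might be a one-off boto3 put_object from the server shell, which would surface the actual exception instead of a silent failure.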
Django SocketIO Connection Fails After RDS Restart – How to Handle Database Connectivity Issues?
I'm developing a Django application using SocketIO for real-time communication, and I'm encountering an issue where SocketIO connections fail after an RDS (Relational Database Service) restart, while my Django HTTP APIs continue to work fine. Problem Description My Django application integrates with SocketIO for real-time features. After an RDS instance restart, the HTTP APIs function normally, but SocketIO connections encounter issues and fail to access database models. Specifically, I get errors related to database connectivity when attempting to handle SocketIO connections. Code Snippets Here's how I configure my ASGI application and handle SocketIO connections: ASGI Configuration (asgi.py): import os from django.core.asgi import get_asgi_application import socketio from backend.socketio import socketio_server os.environ.setdefault('DJANGO_SETTINGS_MODULE', 'backend.settings.dev') django_asgi_app = get_asgi_application() application = socketio.ASGIApp(socketio_server=socketio_server.socketio_server, other_asgi_app=django_asgi_app) SocketIO Connection Handler: async def on_connect(self, sid: str, environ: dict): try: query_string = environ['asgi.scope']['query_string'] token, chat_id = get_token_chat_id_from_query(query_string=query_string) if not token or not chat_id: raise ConnectionRefusedError("Invalid connection parameters.") user = await get_user_from_token(token=token) chat_obj = await get_chat_from_id(chat_id=chat_id, user=user) await update_all_chat_redis(chat_obj=chat_obj) async with self.session(sid=sid, namespace=self.namespace) as session: session['user_id'] = user.id session['chat_id'] = chat_obj.machine_translation_request_id await self.enter_room(sid=sid, room=chat_id) except ConnectionRefusedError as e: logger.error(f"Connection refused: {e}") raise except UserErrors as exc: logger.error(f"User error: {exc.message}") raise ConnectionRefusedError(exc.message) except Exception as e: logger.error(f"Unexpected error: {e}") raise 
ConnectionRefusedError("An unexpected error …
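The symptom (HTTP fine, SocketIO broken after the RDS restart) is consistent with stale database connections: Django recycles connections at HTTP request boundaries via its request signals, but long-lived SocketIO handlers keep holding the connection that died with the restart. A sketch of one possible mitigation, under that assumption — not a verified fix for this setup:

```python
# Sketch: recycle stale Django DB connections at the top of each SocketIO
# handler. HTTP views get this for free from Django's request_started /
# request_finished signals; async SocketIO handlers must do it explicitly.
from asgiref.sync import sync_to_async
from django.db import close_old_connections

async def on_connect(self, sid: str, environ: dict):
    # Drop connections that died when RDS restarted; Django opens a fresh
    # one on the next ORM query (e.g. inside get_user_from_token()).
    await sync_to_async(close_old_connections)()
    ...
```

Setting a finite CONN_MAX_AGE in the database settings may also help, since it lets close_old_connections discard connections past their lifetime rather than reusing them indefinitely.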