Django community: RSS
This page, updated regularly, aggregates Django Q&A from the Django community.
-
Django model with FK to learner app model Group is displaying options from user admin Group
I have the following models: learner app class Group(models.Model): short_name = models.CharField(max_length=50) # company acronym slug = models.SlugField(default="prepopulated_do_not_enter_text") contract = models.ForeignKey(Contract, on_delete=models.CASCADE) course = models.ForeignKey(Course, on_delete=models.CASCADE) start_date = models.DateField() end_date = models.DateField() notes = models.TextField(blank=True, null=True) class Meta: ordering = ["short_name"] unique_together = ( "short_name", "contract", ) management app I've set up an Invoice model: class Invoice(models.Model): staff = models.ForeignKey(Staff, on_delete=models.RESTRICT) group = models.ForeignKey(Group, on_delete=models.RESTRICT) date = models.DateField() amount = models.DecimalField(max_digits=7, decimal_places=2) note = models.CharField(max_length=500, null=True, blank=True) When I try to add an invoice instead of the learner groups I'm being offered the user admin Group options: Can anyone help with what I'm doing wrong. I have the learner group as a FK in other models without issue. -
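A likely cause (not confirmed by the question) is that `management/models.py` imports `Group` from `django.contrib.auth.models`, or via a wildcard import, shadowing the learner app's `Group`. A minimal sketch using lazy app-label strings, which removes any ambiguity about which model the FK points at — the app labels `learner` and `management` are assumptions based on the question:

```python
# management/models.py — sketch; no Group import needed at all
from django.db import models

class Invoice(models.Model):
    staff = models.ForeignKey("management.Staff", on_delete=models.RESTRICT)
    # String reference guarantees this is the learner app's Group,
    # not django.contrib.auth.models.Group
    group = models.ForeignKey("learner.Group", on_delete=models.RESTRICT)
    date = models.DateField()
    amount = models.DecimalField(max_digits=7, decimal_places=2)
    note = models.CharField(max_length=500, null=True, blank=True)
```

If the admin still shows auth groups after this change, checking for a stale `from django.contrib.auth.models import Group` in the module is worth a look.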
Django Unit Test - using factory_boy build() on a Model with Many-To-Many relationship
I’m working on writing unit tests for a DRF project using pytest and factory_boy. I’m running into issues with many-to-many relationships. Specifically, when I try to use .build() in my unit tests, DRF attempts to access the M2M field which requires a saved object, leading to errors. tests_serializers.py def test_serialize_quality_valid_data(self): user = UserFactory.build() quality = QualityFactory.build(created_by=user) serializer = QualitySerializer(quality) data = serializer.data assert data["num"] == quality.num error: FAILED quality/tests/tests_serializers.py::TestQualitySerializer::test_serialize_quality_valid_data - ValueError: "<Quality: Quality object (None)>" needs to have a value for field "id" before this many-to-many relationship can be used. model.py class QualityTag(ExportModelOperationsMixin("quality_tag"), models.Model): name = models.CharField(max_length=64, unique=True) description = models.TextField() class Quality(ExportModelOperationsMixin("quality"), models.Model): num = models.IntegerField() title = models.CharField(max_length=64) ... tags = models.ManyToManyField(QualityTag, related_name="qualities", blank=True) factories.py class QualityTagFactory(DjangoModelFactory): class Meta: model = QualityTag name = factory.Sequence(lambda n: f"Quality Tag {n}") class QualityFactory(factory.django.DjangoModelFactory): class Meta: model = Quality num = factory.Faker("random_int", min=1, max=999) @factory.post_generation def tags(self, create, extracted, **kwargs): if not create: return if extracted: for tag in extracted: self.tags.add(tag) serializers.py class QualitySerializer(serializers.ModelSerializer): tags = QualityTagDetailSerializer(many=True) created_by = UserProfileSerializer() updated_by = UserProfileSerializer() class Meta: model = Quality fields = "__all__" read_only_fields = ["quality_num", "tags", "created_by", "updated_by"] I’ve been advised to switch to .create() instead of .build(), but I’d prefer to … -
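Since Django refuses any M2M access on an instance without a primary key, a test that serializes the nested `tags` field does need a saved object. A hedged sketch assuming pytest-django is installed — the factory and serializer names are taken from the question:

```python
# tests_serializers.py — sketch: use create() (the factory's default call
# style) and mark the test as needing the database
import pytest

@pytest.mark.django_db
def test_serialize_quality_valid_data():
    user = UserFactory()                      # saved, so it has a pk
    quality = QualityFactory(created_by=user)
    data = QualitySerializer(quality).data
    assert data["num"] == quality.num
```

If avoiding the database is the priority, the alternative is to keep `.build()` but serialize only non-relational fields (e.g. a slimmed serializer without `tags`), since no serializer can traverse an M2M on an unsaved row.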
Django HttpOnly cookies not persisted on iOS Safari and WebView, but work on Chrome and Android ITP
I'm using Django to set HttpOnly and Secure cookies for my React web application. These cookies work perfectly on Chrome (both desktop and mobile) and Android devices. However, I'm encountering a major issue on iOS: -iOS Safari: Cookies are not persisted; they are treated like session cookies and are deleted when the browser is closed. -iOS React Native WebView: Similar to Safari, the cookies are not persisted. -İOS Chrome: It works. -Android React Native WebView: It works. MAX_AGE = 60 * 60 * 24 * 360 COMMON = { "httponly": True, "secure": True, "samesite": "None", "path": "/", "domain": ".kashik.net", "max_age": MAX_AGE, } def set_auth_cookies(response, access_token: str, refresh_token: str): response.set_cookie("refresh_token", refresh_token, **COMMON) response.set_cookie("access_token", access_token, **COMMON) return response I have confirmed that the max_age is set to a long duration, so it's not a session cookie by design. This issue seems to be specific to the iOS ecosystem. What could be causing this behavior on iOS Safari and WebView, and how can I ensure these cookies are properly persisted? <WebView ref={webRef} source={{ uri: WEB_URL }} style={styles.full} /* COOKIE PERSIST */ sharedCookiesEnabled thirdPartyCookiesEnabled incognito={false} /* FIX */ javaScriptEnabled domStorageEnabled allowsInlineMediaPlayback allowsFullscreenVideo mediaCapturePermissionGrantType="grant" startInLoadingState cacheEnabled={false} injectedJavaScriptBeforeContentLoaded={INJECT_BEFORE} injectedJavaScriptBeforeContentLoadedForMainFrameOnly={false} onMessage={handleWebViewMessage} onLoadEnd={() => { setLoadedOnce(true); lastLoadEndAt.current = … -
How to set up an in-project PostgreSQL database for a Django trading app?
I’m working on a Django-based trading platform project. Currently, my setup connects to a hosted PostgreSQL instance (Render). My client has now requested an “in-project PostgreSQL database”. From my understanding, this means they want the database to run locally within the project environment (rather than relying on an external hosted DB). Question: What is the best practice for including PostgreSQL directly with the project? Should I: Use Docker/Docker Compose to spin up PostgreSQL alongside the Django app, Include migrations and a seed dump in the repo so the DB can be created on any machine, or Is there another recommended approach? I want the project to be portable so the client (or other developers) can run it without needing to separately set up PostgreSQL. -
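Docker Compose is the usual way to make the database "part of the project": the repo then contains everything needed to start PostgreSQL alongside Django on any machine, with migrations applied on first run. A minimal sketch — service names, credentials, and the volume name are assumptions, not part of the original question:

```yaml
# docker-compose.yml — sketch
services:
  db:
    image: postgres:16
    environment:
      POSTGRES_DB: trading
      POSTGRES_USER: trading
      POSTGRES_PASSWORD: change-me   # move to an env file in practice
    volumes:
      - pgdata:/var/lib/postgresql/data   # data survives container restarts
  web:
    build: .
    command: python manage.py runserver 0.0.0.0:8000
    depends_on:
      - db
    ports:
      - "8000:8000"
volumes:
  pgdata:
```

With this in the repo plus committed migrations (and optionally a fixture or seed dump), `docker compose up` is the whole setup for the client or any other developer.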
Deployment errors
When I try to deploy my web app built with Windsurf in Heroku, I get the following errors: Error: Unable to generate Django static files. ! ! The 'python manage.py collectstatic --noinput' Django ! management command to generate static files failed. ! ! See the traceback above for details. ! ! You may need to update application code to resolve this error. ! Or, you can disable collectstatic for this application: ! ! $ heroku config:set DISABLE_COLLECTSTATIC=1 ! ! https://devcenter.heroku.com/articles/django-assets Please help fix it. -
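The most common cause of this particular Heroku failure is a missing `STATIC_ROOT`, since `collectstatic` has nowhere to write. A sketch of the relevant settings — the path is an assumption; the real traceback above the quoted message would confirm the exact cause:

```python
# settings.py — sketch
from pathlib import Path

BASE_DIR = Path(__file__).resolve().parent.parent

STATIC_URL = "/static/"
STATIC_ROOT = BASE_DIR / "staticfiles"  # collectstatic writes here
```

Running `python manage.py collectstatic --noinput` locally before pushing reproduces the error with a full traceback, which is faster than debugging through Heroku builds.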
How can I set a filter by month on a folium map (Django project)?
I am using folium and I can see the folium map with markers on the front end. I currently use checkboxes, but because I have two months, I want to add radio buttons so that only one month can be selected. I want to have filters by month and also by status, but with different labels. I used ChatGPT but it didn't help me, and I have tried many things. What do you suggest as an alternative? I tried this but it is not working: GroupedLayerControl( groups={'groups1': [fg1, fg2]}, collapsed=False, ).add_to(m) My code: def site_location(request): qs = buffer.objects.filter( meter="Yes", ext_operators="No", ).exclude( # hostid="No" ).values('n', 'name', 'village_city', 'region_2', 'rural_urban', 'hostid', 'latitude', 'longitude', 'site_type', 'meter', 'ext_operators', 'air_cond', 'kw_meter', 'kw_month_noc', 'buffer', 'count_records', 'fix_generator', 'record_date', 'id', 'date_meter' ) data = list(qs) if not data: return render(request, "Energy/siteslocation.html", {"my_map": None}) m = folium.Map(location=[42.285649704648866, 43.82418523761071], zoom_start=8) critical = folium.FeatureGroup(name="Critical %(100-..)") warning = folium.FeatureGroup(name="Warning %(70-100)") moderate = folium.FeatureGroup(name="Moderate %(30-70)") positive = folium.FeatureGroup(name="Positive %(0-30)") negative = folium.FeatureGroup(name="Negative %(<0)") check_noc = folium.FeatureGroup(name="Check_noc") check_noc_2 = folium.FeatureGroup(name="Check_noc_2") for row in data: comments_qs = SiteComment.objects.filter(site_id=row["id"]).order_by('-created_at')[ :5] if comments_qs.exists(): comments_html = "" for c in comments_qs: comments_html += ( f"<br><span style='font-size:12px; color:black'>" f"{c.ip_address} - {c.created_at.strftime('%Y-%m-%d %H:%M')}: {c.comment}</span>" ) else: comments_html = "<br><span style='font-size:13px; color:black'>....</span>" html = ( f"<a target='_blank' … -
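Recent folium versions expose an `exclusive_groups` flag on `GroupedLayerControl`: when it is true, the layers in a group render as radio buttons, which is exactly the one-month-at-a-time behaviour described. A self-contained sketch — the month feature-group names are assumptions standing in for the real ones:

```python
# Sketch: months as radio buttons, everything else left to LayerControl
import folium
from folium.plugins import GroupedLayerControl

m = folium.Map(location=[42.2856, 43.8242], zoom_start=8)
fg_jan = folium.FeatureGroup(name="January").add_to(m)
fg_feb = folium.FeatureGroup(name="February").add_to(m)

GroupedLayerControl(
    groups={"Months": [fg_jan, fg_feb]},
    exclusive_groups=True,   # radio buttons: only one month selectable
    collapsed=False,
).add_to(m)
# Status layers added via plain folium.LayerControl stay as checkboxes
```

The status groups (critical, warning, etc.) can go in a second, non-exclusive group, giving the two differently labelled filter sets in one control.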
Django python manage.py runserver problem
When I run this command: python manage.py runserver I receive this response: {"error":"You have to pass token to access this app."} I ran my Django app previously without any problems, but this time it gives this error. Do you have any suggestions for correcting it? I am using the correct localhost address and the port suggested by the Django app; it previously worked without problems, and now I have this issue. -
Multiple Data Entry in Django ORM
I have been trying to create a way that my Django database will store data for 7 consecutive days because I want to use it to plot a weekly graph but the problem now is that Django doesn't have a datetime field that does that. '''My Code''' #Model to save seven consecutive days class SevenDayData(models.Model): '''Stores the date of the latest click and stores the value linked to that date in this case our clicks''' day1_date= models.DateField(default=None) day1_value= models.CharField(max_length=20) day2_date= models.DateField(default=None) day2_value= models.CharField(max_length=20) day3_date= models.DateField(default=None) day3_value= models.CharField(max_length=20) day4_date= models.DateField(default=None) day4_value= models.CharField(max_length=20) day5_date= models.DateField(default=None) day5_value= models.CharField(max_length=20) day6_date= models.DateField(default=None) day6_value= models.CharField(max_length=20) day7_date= models.DateField(default=None) day7_value= models.CharField(max_length=20) #updating the model each time the row is saved updated_at= models.DateTimeField(auto_now= True) #function that handles all the saving and switching of the 7 days def shift_days(self, new_value): #getting todays date today= date.today() #shifting every data out from day_7 each time a date is added i.e the 7th day is deleted from the db once the time is due self.day7_date, self.day7_value = self.day6_date, self.day6_value #Overwriting each date with the next one self.day6_date, self.day6_value = self.day5_date, self.day5_value self.day5_date, self.day5_value = self.day4_date, self.day4_value self.day4_date, self.day4_value = self.day3_date, self.day3_value self.day3_date, self.day3_value = self.day2_date, self.day2_value self.day2_date, self.day2_value = self.day1_date, self.day1_value #writing todays … -
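The fourteen `day*_date`/`day*_value` columns and the hand-written `shift_days()` bookkeeping can be replaced by a fixed-size window: appending an 8th entry automatically discards the oldest. A self-contained sketch of that shifting logic with the standard library (in Django itself, the more idiomatic equivalent is one model with a `DateField` and a value, queried with `order_by('-date')[:7]`):

```python
# Sketch: a deque with maxlen=7 is the whole "shift days" mechanism
from collections import deque
from datetime import date, timedelta

window = deque(maxlen=7)          # keeps only the last 7 (date, value) pairs
start = date(2024, 1, 1)
for offset in range(9):           # simulate 9 days of click counts
    day = start + timedelta(days=offset)
    window.append((day, offset * 10))   # day 8 and 9 push out day 1 and 2

print(len(window))                # 7
print(window[0][0])               # 2024-01-03  (oldest day still kept)
```

Plotting a weekly graph is then just iterating over the seven `(date, value)` pairs in order, with no overwrite cascade to maintain.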
session.get not getting the correct file on remote server/local server
I have this piece of code that works completely fine as a standalone Python script. When I try to test it on the local server, it returns an HTML page saying the link is not valid (I am expecting a PDF download). Both the local server and the standalone script return a 200. url is the download link of a PDF file on the website. def get_file(url): headers = { 'User-Agent': user_agent, 'Cookie': cookie, } session = requests.Session() try: response = session.get(url, headers=headers, verify=False) filename = response.headers['Content-Disposition'].split('"')[-2] with open(filename, 'wb') as f: f.write(response.content) fileFullPath = os.path.abspath(filename) print(fileFullPath) except requests.exceptions.HTTPError as err: print("file download fail err {}".format(err.response.status_code)) -
Failed to create subscription: LinkedIn Developer API real-time notification error
I’m working on enabling real-time notifications from LinkedIn. I can successfully retrieve access tokens, but when I try to create a real-time notification subscription, the API returns the following error. Could someone please help me understand what might be causing this issue? Error Message { "message": "Failed to create subscription. RestException{_response=RestResponse[headers={Content-Length=13373, content-type=application/x-protobuf2; symbol-table="https://ltx1-app150250.prod.linkedin.com:3778/partner-entities-manager/resources|partner-entities-manager-war--60418946", x-restli-error-response=true, x-restli-protocol-version=2.0.0},cookies=[],status=400,entityLength=13373]}", "status": 400 } My code is below def linkedinCallBack(request): """Handle LinkedIn OAuth callback.""" code = request.GET.get('code') state = request.GET.get('state') if not code or not state: return handle_redirect(request, message_key='missing_params') try: error, state_data = parse_state_json(state) if error: return handle_redirect(request, message_key='missing_params') error, platform = get_social_platform(state_data['platform_id']) if error: return handle_redirect(request, message=error) redirect_uri = request.build_absolute_uri( reverse('social:linkedin_callback')) # Exchange code for access token token_url = 'https://www.linkedin.com/oauth/v2/accessToken' data = { 'grant_type': 'authorization_code', 'code': code, 'redirect_uri': redirect_uri, 'client_id': os.environ.get('LINKEDIN_APP_ID'), 'client_secret': os.environ.get('LINKEDIN_APP_SECRET'), } headers = {'Content-Type': 'application/x-www-form-urlencoded'} response = requests.post(token_url, data=data, headers=headers) if response.status_code != 200: return handle_redirect(request, message_key='token_failed') token_data = response.json() access_token = token_data.get('access_token', None) refresh_token = token_data.get('refresh_token', None) refresh_token_expires_in = token_data.get( 'refresh_token_expires_in', None) expires_in = token_data.get('expires_in', 3600) if not access_token: return handle_redirect(request, message_key='token_failed') LINKEDIN_API_VERSION = os.environ.get('LINKEDIN_API_VERSION') org_url = "https://api.linkedin.com/v2/organizationalEntityAcls" params = { 'q': 'roleAssignee', 'role': 'ADMINISTRATOR', 'state': 'APPROVED', 'projection': '(elements*(*,organizationalTarget~(id,localizedName)))' } headers = { 'Authorization': f'Bearer {access_token}', 'X-Restli-Protocol-Version': '2.0.0', 'LinkedIn-Version': LINKEDIN_API_VERSION } … -
Does anyone have any idea about an in-project PostgreSQL database? [closed]
I have recently been working on a trading project. Now the client has a requirement for an in-project PostgreSQL database. Has anyone worked on this, or on something similar? -
Building development image with Nodejs and production without NodeJS (with only precompiled files)
I have a Django application, which is using TailwindCSS for styling (using the django-tailwind package). I am developing locally with docker compose and plan to deploy using the same. So I have the following requirements For development: I need to run the python manage.py tailwind start or npm run dev command so that the postcss watcher rebuilds the static files when I am developing the application (this requires NodeJS) For Production: I compile the CSS files at build time and do not need NodeJS overhead. I can always create two Dockerfiles for development and production, but I do not want to do that unless absolutely necessary. How can I do both of these in a single Dockerfile. This is the current Dockerfile I have ARG BUILD_TYPE=production FROM ghcr.io/astral-sh/uv:python3.13-bookworm-slim AS base-builder # Set environment variables to optimize Python ENV PYTHONDONTWRITEBYTECODE=1 ENV PYTHONUNBUFFERED=1 # Set environment variables to optimize UV ENV UV_COMPILE_BYTECODE=1 ENV UV_SYSTEM_PYTHON=1 WORKDIR /app # Install the requirements COPY uv.lock . COPY pyproject.toml . # Update the package list and install Node.js RUN apt-get update && \ apt-get install -y nodejs npm && \ apt-get clean && \ rm -rf /var/lib/apt/lists/* FROM base-builder AS production-builder RUN echo "Running the Production … -
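The usual single-Dockerfile pattern for this is to give each `BUILD_TYPE` its own final stage and select it with a `FROM` on the build arg declared at the top. A sketch of how the file shown above could be finished — stage names, paths, and the runtime command are assumptions, not part of the original Dockerfile:

```dockerfile
# Sketch: continue from the base-builder stage above

FROM base-builder AS production-builder
COPY . .
# Compile Tailwind once at build time; Node.js exists only in this stage
RUN python manage.py tailwind build && \
    python manage.py collectstatic --noinput

# Node-free production runtime that copies only the compiled output
FROM ghcr.io/astral-sh/uv:python3.13-bookworm-slim AS production
WORKDIR /app
COPY --from=production-builder /app /app

# Development keeps Node.js so `manage.py tailwind start` can watch files
FROM base-builder AS development

# The top-level ARG BUILD_TYPE selects which stage becomes the image:
#   docker build --build-arg BUILD_TYPE=development .
FROM ${BUILD_TYPE} AS final
```

Because `ARG BUILD_TYPE=production` is declared before the first `FROM`, it is in scope for the final `FROM ${BUILD_TYPE}` line, so one Dockerfile produces both images.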
dj-rest-auth + allauth not sending email
Context: I'm setting DRF + dj-rest-auth + allauth + simple-jwt for user authentication. Desired behaviour: Register with no username, only email. Authorize login only if email is verified with a link sent to email. Social login to be added. Problem: It seems that confirmation email is not being sent. When I run the following test I see that it wanted to send some email but it's not found anywhere. Test code: client = APIClient() url = reverse("rest_register") # dj-rest-auth register endpoint # Register a user data = { "email": "user1@example.com", "password1": "StrongPass123!", "password2": "StrongPass123!", } response = client.post(url, data, format="json") assert response.status_code == 201, response.data print(response.data) # Manually verify the user from allauth.account.models import EmailConfirmation user = User.objects.get(email="user1@example.com") from django.core import mail print(f'Amount of sent emails: {len(mail.outbox)}') print(f'Email Confimation exists: {EmailConfirmation.objects.filter(email_address__email=user.email).exists()}') This prints: {'detail': 'Verification e-mail sent.'} Amount of sent emails: 0 Email Confimation exists: False My code: core/urls.py from django.contrib import admin from django.urls import include, path urlpatterns = [ path('api/auth/', include('authentication.urls')), path("admin/", admin.site.urls), path("accounts/", include("allauth.urls")), ] authentication/urls.py from dj_rest_auth.jwt_auth import get_refresh_view from dj_rest_auth.registration.views import RegisterView, VerifyEmailView from dj_rest_auth.views import LoginView, LogoutView, UserDetailsView from django.urls import path from rest_framework_simplejwt.views import TokenVerifyView urlpatterns = [ path("register/", RegisterView.as_view(), name="rest_register"), path("register/verify-email/", VerifyEmailView.as_view(), … -
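Two hedged things to check, based on the symptoms rather than a confirmed diagnosis. First, the settings that control verification mail; second, recent allauth versions send the confirmation inside `transaction.on_commit()`, which never fires inside a test transaction, so `mail.outbox` stays empty even though the view reports "Verification e-mail sent.":

```python
# settings.py — sketch
EMAIL_BACKEND = "django.core.mail.backends.locmem.EmailBackend"  # tests/dev

ACCOUNT_EMAIL_REQUIRED = True
ACCOUNT_USERNAME_REQUIRED = False
ACCOUNT_AUTHENTICATION_METHOD = "email"
ACCOUNT_EMAIL_VERIFICATION = "mandatory"

# In a TestCase, force the on_commit callbacks to run so the mail is sent:
#
#   with self.captureOnCommitCallbacks(execute=True):
#       response = client.post(url, data, format="json")
#   assert len(mail.outbox) == 1
```

`captureOnCommitCallbacks` is a standard Django `TestCase` helper (3.2+); pytest-django exposes the same thing as the `django_capture_on_commit_callbacks` fixture.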
Celery task called inside another task always goes to default queue even with queue specified
I’m running Celery with Django and Celery Beat. Celery Beat triggers an outer task every 30 minutes, and inside that task I enqueue another task per item. Both tasks are decorated to use the same custom queue, but the inner task still lands in the default queue. from celery import shared_task from django.db import transaction @shared_task(queue="outer_queue") def sync_all_items(): """ This outer task is triggered by Celery Beat every 30 minutes. It scans the DB for outdated items and enqueues a per-item task. """ items = Item.objects.find_outdated_items() for item in items: # I expect this to enqueue on outer_queue as well process_item.apply_async_on_commit(args=(item.pk,)) @shared_task(queue="outer_queue") def process_item(item_id): do_some_processing(item_id=item_id) Celery beat config: CELERY_BEAT_SCHEDULE = { "sync_all_items": { "task": "myapp.tasks.sync_all_items", "schedule": crontab(minute="*/30"), # Beat is explicitly sending the outer task to outer_queue "options": {"queue": "outer_queue"}, } } But, when I run the process_item task manually e.g. in the Django view, it respect the decorator and lands in expected queue. I’ve tried: Adding queue='outer_queue' to apply_async_on_commit Calling process_item.delay(item.pk) instead Using .apply_async(args=[item.pk], queue='outer_queue') inside transaction.on_commit But no matter what, the inner tasks still show up in the default queue. -
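One way to take the publish-site out of the equation is to declare the routing centrally: `task_routes` is consulted on every publish, regardless of whether the task is enqueued from beat, from another task, or from a view. A sketch, with names taken from the question:

```python
# settings.py — sketch
CELERY_TASK_ROUTES = {
    "myapp.tasks.sync_all_items": {"queue": "outer_queue"},
    "myapp.tasks.process_item": {"queue": "outer_queue"},
}
```

It is also worth confirming that the worker consuming `outer_queue` was started with `celery -A proj worker -Q outer_queue` and that no other worker is listening on the default queue, since a task that merely *shows up* in the default queue may actually have been published correctly but picked up there; `celery -A proj inspect active_queues` makes this visible.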
Django + SimpleJWT: Access tokens sometimes expire immediately ("credentials not provided") when calling multiple endpoints
I’m building a Vue 3 frontend (deployed on Vercel at example.com) with a Django REST Framework backend (deployed on Railway at api.example.com). Authentication uses JWT access/refresh tokens stored in HttpOnly cookies (access, refresh). Access token lifetime = 30 minutes Refresh token lifetime = 1 day Cookies are set with: HttpOnly; Secure; SameSite=None; Domain=.example.com Django timezone settings: LANGUAGE_CODE = "en-us" TIME_ZONE = "Africa/Lagos" USE_I18N = True USE_TZ = True The problem When the frontend calls multiple API endpoints simultaneously (e.g. 5 requests fired together), some succeed but others fail with: 401 Unauthorized {"detail":"Authentication credentials were not provided."} In the failing requests I can see the cookies are sent: cookie: access=...; refresh=... But SimpleJWT still rejects the access token, sometimes immediately after login. It looks like the exp claim in the access token is already in the past when Django validates it. What I’ve tried Verified cookies are set with correct domain and withCredentials: true. Implemented an Axios response interceptor with refresh token retry. Ensured CookieJWTAuthentication checks both Authorization header and access cookie. -
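If the `exp` claim really is in the past immediately after issuing, a small clock drift between the issuing and validating processes (common across separate dynos/containers) would produce exactly this intermittent pattern. SimpleJWT has a `LEEWAY` setting for tolerating that. A sketch — the leeway value is an assumption to tune, not a recommendation:

```python
# settings.py — sketch
from datetime import timedelta

SIMPLE_JWT = {
    "ACCESS_TOKEN_LIFETIME": timedelta(minutes=30),
    "REFRESH_TOKEN_LIFETIME": timedelta(days=1),
    "LEEWAY": 30,  # seconds of clock-skew tolerance when validating exp/nbf
}
```

Logging the token's `exp` next to `time.time()` on the failing requests would confirm or rule out skew before changing anything else.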
"Django: Cannot use ImageField because Pillow is not installed (Python 3.13, Windows)
PS C:\Users\ltaye\ecommerce> python manage.py runserver Watching for file changes with StatReloader Performing system checks... Exception in thread django-main-thread: Traceback (most recent call last): File "C:\Users\ltaye\AppData\Local\Programs\Python\Python313\Lib\threading.py", line 1043, in _bootstrap_inner self.run() ~~~~~~~~^^ File "C:\Users\ltaye\AppData\Local\Programs\Python\Python313\Lib\threading.py", line 994, in run self._target(*self._args, **self._kwargs) ~~~~~~~~~~~~^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "C:\Users\ltaye\AppData\Local\Programs\Python\Python313\Lib\site-packages\django\utils\autoreload.py", line 64, in wrapper fn(*args, **kwargs) ~~^^^^^^^^^^^^^^^^^ File "C:\Users\ltaye\AppData\Local\Programs\Python\Python313\Lib\site-packages\django\core\management\commands\runserver.py", line 134, in inner_run self.check(**check_kwargs) ~~~~~~~~~~^^^^^^^^^^^^^^^^ File "C:\Users\ltaye\AppData\Local\Programs\Python\Python313\Lib\site-packages\django\core\management\base.py", line 569, in check raise SystemCheckError(msg) django.core.management.base.SystemCheckError: SystemCheckError: System check identified some issues: ERRORS: store.Product.image: (fields.E210) Cannot use ImageField because Pillow is not installed. HINT: Get Pillow at https://pypi.org/project/Pillow/ or run command "python -m pip install Pillow". System check identified 1 issue (0 silenced). I created a Django project and added a model with an ImageField. When I run python manage.py runserver, I get the following error: SystemCheckError: Cannot use ImageField because Pillow is not installed. I expected the server to start normally and let me upload images. I already tried: Running python -m pip install Pillow Running pip install Pillow inside my project virtual environment Upgrading pip with python -m pip install --upgrade pip But the error still shows up when I start the server. I’m using Python 3.13 on Windows 11. -
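When `pip install Pillow` succeeds but Django still can't find it, the usual cause on Windows is that pip installed into a different interpreter than the one running `manage.py` (for example, a virtual environment's Python vs. the global Python 3.13 shown in the traceback). A quick way to see which interpreter runserver actually uses, and therefore where Pillow must be installed:

```python
# Run this with the same command you use for manage.py (plain `python`)
import sys

print(sys.executable)   # the interpreter that needs Pillow
# Then install into exactly that interpreter:
#   <that path> -m pip install Pillow
# and confirm with:
#   <that path> -m pip show Pillow
```

If the printed path is the global `...\Python313\python.exe` while Pillow was installed inside a venv (or vice versa), activating the right environment before running both pip and runserver resolves the error.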
How to write documentation for a project/Django project?
How do you write documentation for your projects? How do you improve the readability of documentation? Do you have any tips for writing documentation? Thanks! I'm trying to write my first documentation for a Django API project and I need some help. -
How do I connect a web app to a thermal printer for printing
I built a web app and bought a thermal printer. I generate receipts from the web app but don't know how to send them to the printer, and the connection is not stable. Which printer is cost-effective and has a stable connection? How can I send the receipt for printing directly from my web app without third-party intervention? I already bought a printer, but I have to reconnect on every print, and even reconnecting is hard. I am using Django for my backend and React for the frontend. I have not been able to print directly from my app; all the other printers I tried worked only through a third-party app. -
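One server-side direction, sketched as an assumption since the question names no printer model: most budget receipt printers speak ESC/POS, and the python-escpos library can drive one directly from the Django backend over USB, which sidesteps the unstable Bluetooth re-pairing described:

```python
# Sketch using python-escpos (pip install python-escpos)
from escpos.printer import Usb

# Vendor/product ids are examples — read the real ones from `lsusb`
printer = Usb(0x04b8, 0x0202)
printer.text("My Shop\nTotal: $12.50\n")
printer.cut()
```

The React frontend then only needs to call a Django endpoint ("print this receipt id"); the server formats and prints, with no third-party app in the path.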
Ubuntu 22.04 Django cronjob - No MTA installed, discarding output - Error
If I run this source /var/www/django/env/bin/activate && cd /var/www/django/ && python manage.py cron in the Cockpit GUI terminal (Ubuntu Server 22.04), an email is sent. But if I run it as a cron job, in crontab: * * * * * administrator source /var/www/html/django/env/bin/activate && cd /var/www/html/django/ && python manage.py cron I get the error (CRON) info (No MTA installed, discarding output) What am I missing? -
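Two things stand out. The "No MTA installed" line is not the real failure: it only means the command wrote something to stdout/stderr and cron had no mail server to send it to; redirecting output makes it go away and, more importantly, captures the actual error. Separately, cron runs commands under `/bin/sh`, which has no `source` builtin, so the activate step fails silently (note also that the interactive command uses /var/www/django while the crontab uses /var/www/html/django). A sketch of a crontab entry addressing both — the log path is an assumption:

```
# Sketch: run under bash so `source` works, and log output instead of mailing it
* * * * * administrator /bin/bash -c 'source /var/www/html/django/env/bin/activate && cd /var/www/html/django && python manage.py cron' >> /var/log/django-cron.log 2>&1
```

An equivalent without bash is to skip activation entirely and call the venv's interpreter directly: `/var/www/html/django/env/bin/python manage.py cron`.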
How to create an update function for a Django AbstractUser
I created signup/login with 'UserCreationForm'. How can I make update possible by using 'UserChangeForm'? models.py from django.contrib.auth.models import AbstractUser # Create your models here. class CustomUser(AbstractUser): pass def __str__(self): return self.username forms.py from django.contrib.auth.forms import UserCreationForm, UserChangeForm from .models import CustomUser class CustomUserCreationForm(UserCreationForm): class Meta(UserCreationForm): model = CustomUser fields = ('first_name', 'last_name', 'username', 'email') class CustomUserChangeForm(UserChangeForm): class Meta: model = CustomUser fields = ('first_name', 'last_name', 'username', 'email') views.py from django.shortcuts import render, redirect # Create your views here. from django.urls import reverse_lazy from django.views.generic.edit import CreateView from django.views import View from .forms import CustomUserCreationForm, CustomUserChangeForm from .models import CustomUser class SingUpView(CreateView): form_class = CustomUserCreationForm success_url = reverse_lazy('login') template_name = 'signup.html' # raises an error class CustomUserUpdateView(View): def get(self, request, *args, **kwargs): user_id = kwargs.get("id") user = CustomUser.objects.get(id=user_id) form = CustomUserChangeForm(instance=user) return render( request, "users/update.html", {"form": form, "user_id": user_id} ) def post(self, request, *args, **kwargs): user_id = kwargs.get("id") user = CustomUser.objects.get(id=user_id) form = CustomUserChangeForm(request.POST, instance=user) if form.is_valid(): form.save() return redirect("users_list") return render( request, "users/update.html", {"form": form, "user_id": user_id} ) I've been trying to create update with inheritance of the View class including 'get/post' methods, but it raises an error: CustomUser matching query does not exist. I did everything google told me to activate get/post … -
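The "CustomUser matching query does not exist" error means `CustomUser.objects.get(id=user_id)` found no row — typically because the `id` keyword in the URLconf doesn't match, or the URL was visited with a stale id. The generic `UpdateView` handles the whole get/post pair and turns a missing user into a clean 404. A sketch under the assumption that the URL pattern captures the pk as `id`:

```python
# views.py — sketch using the generic UpdateView
from django.urls import reverse_lazy
from django.views.generic.edit import UpdateView

class CustomUserUpdateView(UpdateView):
    model = CustomUser
    form_class = CustomUserChangeForm
    template_name = "users/update.html"
    pk_url_kwarg = "id"      # matches path("users/<int:id>/update/", ...)
    success_url = reverse_lazy("users_list")
```

If keeping the hand-written `View`, replacing `objects.get(...)` with `get_object_or_404(CustomUser, id=user_id)` gives the same graceful failure instead of the unhandled DoesNotExist.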
Django ORM gives duplicates in filtered queryset
I have a django app. I use the ORM to run some queries. It appears I have some duplicates in my result. While I can simply add a distinct() I would like to understand what is going on. Here are my models: class Person(models.Model): created = models.DateTimeField(auto_now_add=True) active_stuffs = models.ManyToManyField(Stuff, related_name="persons") waiting_stuffs = models.ManyToManyField(Stuff, related_name="persons_waiting") cancelled_stuffs = models.ManyToManyField(Stuff, related_name="persons_cancelled") # ... other fields class Stuff(models.Model): name = models.CharField(null=False, blank=False, max_length=150,) # ... other fields Here is the query: queryset = Person.objects.filter( Q(active_stuffs__id=some_id) | Q(cancelled_stuffs__id=some_id) | Q(waiting_stuffs__id=some_id) ) What I don't understand, is the following results: queryset.count() -> 23 Person.objects.filter(Q(active_stuffs__id=some_id)).count() -> 16 Person.objects.filter(Q(cancelled_stuffs__id=some_id)).count() -> 0 Person.objects.filter(Q(waiting_stuffs__id=some_id)).count() -> 6 An instance of Stuff can only be in either active_stuffs, cancelled_stuffs or waiting_stuffs. I checked the Person instance that is duplicated, and the Stuff instance I'm looking for is only in the waiting_stuffs field... So, where could this duplicate come from? -
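The duplicates come from the SQL, not the data: OR-ing conditions that live on three different M2M join tables forces Django to LEFT JOIN all three, and the joins multiply rows. A person with one matching `waiting` row but *two* unrelated `active` rows yields two combined join rows, both of which satisfy the `waiting` condition — hence one extra count. A self-contained demonstration with plain sqlite3 (no Django), mimicking the three through-tables:

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
    CREATE TABLE person  (id INTEGER PRIMARY KEY);
    CREATE TABLE active  (person_id INT, stuff_id INT);
    CREATE TABLE waiting (person_id INT, stuff_id INT);
    INSERT INTO person  VALUES (1);
    INSERT INTO active  VALUES (1, 7), (1, 8);  -- two unrelated active stuffs
    INSERT INTO waiting VALUES (1, 42);         -- the stuff we filter on
""")
base = """
    SELECT {cols} person.id FROM person
    LEFT JOIN active  ON active.person_id  = person.id
    LEFT JOIN waiting ON waiting.person_id = person.id
    WHERE active.stuff_id = 42 OR waiting.stuff_id = 42
"""
plain = con.execute(base.format(cols="")).fetchall()
deduped = con.execute(base.format(cols="DISTINCT")).fetchall()
print(plain)    # [(1,), (1,)]  -- one row per active-join combination
print(deduped)  # [(1,)]
```

That is why `.distinct()` fixes it, and why the sub-counts (16 + 0 + 6 = 22) don't add up to the combined 23: the combined query counts join rows, not persons.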
Architecture Advice for Research Portal (DRF + Next.js)
I’m currently developing a research portal locally on my Mac using Django REST Framework (DRF) for the backend and Next.js for the frontend. We’re now preparing to move the project to a test server environment. Our university’s IT Services team has asked for deployment specifications, including whether we need two separate servers for the frontend and backend. The database will be hosted on a dedicated server, and everything will be placed behind a load balancer and firewall. Given that this portal will host research data (real-time data entry forms, real-time reports, etc.), I’m trying to understand the best practices for security and performance: Is it recommended to host the frontend and backend on separate servers? What are the pros and cons of separating them vs. hosting both on a single server? What web servers are commonly used in this kind of setup? Are there any other security or architectural considerations I should be aware of? I have read a few blogs and googled around, but the responses were mixed and not specific to my requirements, so I'm asking here, as we don't have many people experienced in this stack at our university. -
Why doesn't my cron job run after being added? [closed]
I added a cron job and it is shown by python manage.py crontab show, but the Python function never gets executed. I tried running the function in the Python interpreter and it works, so I guess the problem is in crontab, but I couldn't resolve it. I am using Docker; here is my repo: https://github.com/RachidJedata/Cron_with_django I added the cron job as shown below, but it doesn't get executed, and although I set the log to a file cron.log, it is always empty: root@143ee1babb0b:/app# python manage.py crontab add adding cronjob: (4500c7eba7f00df4e625ceb624206d74) -> ('* * * * *', 'crypto.cron.fetchCryptoData >> /cron/cron.log 2>&1') root@143ee1babb0b:/app# cat ../cron/cron.log root@143ee1babb0b:/app# and below is proof that my cron job is added but is not being executed: python manage.py crontab show Currently active jobs in crontab: 0352d2a16547ccdea8c7d44dcac8cf1d -> ('* * * * *', 'crypto.cron.fetchCryptoData >> cron/cron.log 2>&1') root@94461ae7b66f:/app# -
OIDC django-allauth - kid lookup uses x509 instead of jwk when upgraded to 65.11.0?
We recently upgraded to django-allauth[mfa, socialaccount]==65.11.0 where we are using an OIDC-provider that extends OAuth2Client and we discovered that one of our SocialApplication configs that is connected with an Azure app registration stopped working after the bump. Before the version bump, successful authentication was made but now we get an allauth.socialaccount.providers.oauth2.client.OAuth2Error: Invalid 'kid' error. Digging a bit deeper we can see that it's jwtkit.py in allauth/socialaccount/internal that calls lookup_kid_pem_x509_certificate(keys_data, kid) to check if the kid is valid but the variables does not have the expected structure and rather fits lookup_kid_jwk(keys_data, kid) instead. I can't seem to find any documentation or pointers to where or how i can direct the call to use lookup_kid_jwk(keys_data, kid) since the config is the same as before the version bump. Anyone else having the same issue or any input here? The config at SocialApplication.settings looks like {"server_url": "https://login.microsoftonline.com/abc123/v2.0/.well-known/openid-configuration", "oauth_pkce_enabled": false} -
Django App Deploy Error in AWS ECR+EC2 setup
I have installed Docker and the AWS CLI on an EC2 instance, pulled the Docker image from ECR to EC2, and run the Django container on the EC2 machine. For now I am trying to deploy with HTTP and will shift to HTTPS later. I want to stay within the free tier. I am facing the attached error; what could have gone wrong?