Django community: RSS
This page, updated regularly, aggregates Django Q&A from the Django community.
-
getCSRFToken is not defined error, JavaScript
This is the part of the code in the Django + JavaScript Todo App that is responsible for deleting a note. I need a csrftoken for this, but the JS is showing me an error in the console. What did I do wrong and how can I fix it? Uncaught ReferenceError: getCSRFToken is not defined at HTMLButtonElement.<anonymous> (main.js:100:30) const delUrl = document.body.dataset.delNoteUrl; deleteBtn.addEventListener("click", (e) => { e.preventDefault(); if (e.target.classList.contains("delete-btn")) { const parentLi = e.target.closest(".todo__note"); const noteId = parentLi.getAttribute("data-id"); fetch(`${delUrl}/${noteId}`, { method: "POST", headers: { "X-CSRFToken": getCSRFToken(), }, }) .then((response) => response.json()) .then((data) => { if (data.status == "success") { parentLi.remove(); } }); } }); Here is the HTML, if needed. <ul class="todo__list"> {% for note in notes %} <li class="todo__note flex" data-id="{{ note.id }}"> <div> <input type="checkbox" /> <span>{{ note.text }}</span> </div> <div class="delete__edit"> <button class="edit-btn" id="editBtn"> <img src="{% static 'images/edit.svg' %}" alt="" /> </button> <button class="delete-btn" id="deleteBtn"> <img src="{% static 'images/delete.svg' %}" alt="" /> </button> </div> </li> {% endfor %} </ul> -
How does DRF understand which field in serializer.py is related to which model field?
Imagine I have a super simple serializer.py file, and I just want to use it, nothing special. So I'm going to write something like this (with a model class called "Product") and it's going to work. But how does DRF understand which field in the serializer.py file belongs to which field in the "Product" class in the models file? I told DRF nothing about it!? (And considering that the API Model != Data Model.)
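A short sketch of the convention at work: DRF links serializer fields to model fields purely by name. A ModelSerializer introspects Meta.model and builds its fields from the model's fields; a plain Serializer simply reads the attribute of the same name from whatever instance you pass in. The Product fields below (name, price) are assumptions for illustration, since the actual serializer code isn't shown:

```python
from rest_framework import serializers

from .models import Product  # assumed model with `name` and `price` fields


class ProductSerializer(serializers.ModelSerializer):
    class Meta:
        model = Product
        # DRF inspects Product's model fields and matches these names against them
        fields = ["id", "name", "price"]


class ProductPlainSerializer(serializers.Serializer):
    # With a plain Serializer the link is still just the attribute name:
    # ProductPlainSerializer(instance).data reads instance.name and instance.price
    name = serializers.CharField()
    price = serializers.DecimalField(max_digits=10, decimal_places=2)
```

A serializer field whose name does not exist on the model either has to declare source=... or will fail when used, which is exactly how the API model is allowed to differ from the data model.
-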
Results of Questionnaire to be downloaded as a spreadsheet
So I have this model, namely Questionnaire, in the models.py file of a Django project: class Questionnaire(models.Model): title = models.CharField(max_length=200) description = models.TextField(blank=True, null=True) formula = models.CharField( max_length=200, default='{total}', help_text="Formula to calculate the total score for this questionnaire. Use {total} and {number_of_questions} as placeholders." ) color = models.CharField( max_length=7, default='#000000', help_text="Color in HEX format. Examples: #FF5733 (red), #33FF57 (green)," " #3357FF (blue), #FF33A1 (pink), #A133FF (purple), #33FFF5 (cyan), #FF8C33 (orange)" ) What I want to do is download the results of the Questionnaire in spreadsheet form. I also have an admin.py file registering the model to show it in the UI, like this: class QuestionnaireAdmin(nested_admin.NestedModelAdmin): model = Questionnaire inlines = [QuestionInline] list_display = ['title', 'description', 'color'] search_fields = ['title', 'description'] So I think the best way to do this is to add an action button so the client can download the results with a click.
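An admin action is a reasonable fit for that. A minimal sketch of a CSV export action on the existing QuestionnaireAdmin follows; since the model that stores the actual responses isn't shown, the sketch only exports the Questionnaire fields themselves, and the columns would need extending to cover real results:

```python
import csv

from django.contrib import admin
from django.http import HttpResponse


class QuestionnaireAdmin(nested_admin.NestedModelAdmin):
    model = Questionnaire
    inlines = [QuestionInline]
    list_display = ['title', 'description', 'color']
    search_fields = ['title', 'description']
    actions = ['export_as_csv']

    @admin.action(description="Download selected questionnaires as a spreadsheet (CSV)")
    def export_as_csv(self, request, queryset):
        response = HttpResponse(content_type='text/csv')
        response['Content-Disposition'] = 'attachment; filename="questionnaires.csv"'
        writer = csv.writer(response)
        writer.writerow(['title', 'description', 'formula', 'color'])
        for q in queryset:
            writer.writerow([q.title, q.description, q.formula, q.color])
        return response
```

The action appears in the changelist's action dropdown; for a true .xlsx file the same approach works with a library such as openpyxl instead of csv.
-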
Django google-auth-oauthlib insecure_transport error on Cloud Workstations despite HTTPS and SECURE_PROXY_SSL_HEADER
I'm developing a Django application in a Firebase Studio environment. I'm trying to implement Google OAuth 2.0 for my users (doctors) to connect their Google Calendar accounts using the google-auth-oauthlib library. The application is accessed via the public HTTPS URL provided by Firebase (e.g., https://8000-firebase-onlinearsts-...cloudworkstations.dev). I've configured my Google Cloud Project, enabled the Calendar API, set up the OAuth consent screen, and created an OAuth 2.0 Client ID for a Web application with the correct https:// Authorized redirect URI (https://8000-firebase-onlinearsts-1753264806380.cluster-3gc7bglotjgwuxlqpiut7yyqt4.cloudworkstations.dev/accounts/google/callback/). However, when my Django application's OAuth callback view (accounts.views.google_oauth_callback) attempts to exchange the authorization code for tokens using flow.fetch_token(), I get the following error: Google Authentication Error An error occurred during the Google authentication process. Error details: Error during OAuth exchange: (insecure_transport) OAuth 2 MUST utilize https. I cannot understand why I'm receiving this error if I am utilizing https. mysite/mysite/settings.py: SECURE_PROXY_SSL_HEADER = ('HTTP_X_FORWARDED_PROTO', 'https') # Google API Settings GOOGLE_CLIENT_ID = '...' GOOGLE_CLIENT_SECRET = '...' GOOGLE_REDIRECT_URI = 'https://8000-firebase-onlinearsts-1753264806380.cluster-3gc7bglotjgwuxlqpiut7yyqt4.cloudworkstations.dev/accounts/google/callback/' # Matches Google Cloud Console GOOGLE_CALENDAR_SCOPES = [ 'https://www.googleapis.com/auth/calendar.events', 'https://www.googleapis.com/auth/calendar.readonly', 'https://www.googleapis.com/auth/calendar', ] To investigate why the insecure_transport error persists, I added debugging print statements to my callback view (accounts.views.google_oauth_callback) to inspect the incoming request headers and properties: accounts/views.py: @login_required def google_oauth_callback(request): flow = …
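The insecure_transport check comes from oauthlib, not Django: it inspects the scheme of the authorization-response URL handed to fetch_token(). Behind the Cloud Workstations proxy, request.build_absolute_uri() can still come back as http:// (SECURE_PROXY_SSL_HEADER only helps if the proxy really sends X-Forwarded-Proto: https), which trips the check even though the browser used HTTPS. A minimal sketch of a workaround inside the callback view:

```python
# Inside google_oauth_callback, just before exchanging the code (a sketch, not the full view)
authorization_response = request.build_absolute_uri()

# The proxy-facing URL may be reconstructed as http://; force the scheme back to
# https:// so oauthlib's insecure_transport check passes.
if authorization_response.startswith("http://"):
    authorization_response = "https://" + authorization_response[len("http://"):]

flow.fetch_token(authorization_response=authorization_response)
```

The other route is to confirm from the debug output that X-Forwarded-Proto actually arrives as https; in that case SECURE_PROXY_SSL_HEADER makes request.is_secure() true and the rewrite above becomes a no-op.
-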
Django, HTMX, class based generic views, querysets and pagination
I think this is as much a question about minimalism and efficiency as anything, but anyway... I have a generic ListView that I'm using, along with HTMX, which I'm a first-time user of but loving so far! That said, I have some quirks here with the default behavior of a generic class-based view that I'm not sure how to handle. Consider the following... class AccountListView(ListView): model = Account template_name = 'account_list.html' paginate_by = 100 def get_queryset(self): query = self.request.POST.get('query') try: query = int(query) except: pass if query: if isinstance(query, int): return Account.objects.filter( Q(id=query) ) else: return Account.objects.filter( Q(full_name__icontains=query) | Q(email1=query) | Q(email2=query) | Q(email3=query) ).order_by('-date_created', '-id') return Account.objects.all().order_by('-date_created', '-id') def post(self, request, *args, **kwargs): response = super().get(self, request, *args, **kwargs) context = response.context_data is_htmx = request.headers.get('HX-Request') == 'true' if is_htmx: return render(request, self.template_name + '#account_list', context) return response def get(self, request, *args, **kwargs): response = super().get(self, request, *args, **kwargs) context = response.context_data is_htmx = request.headers.get('HX-Request') == 'true' if is_htmx: return render(request, self.template_name + '#account_list', context) return response As you can likely gather, my issue here is I'm trying to implement two different functionalities in a single generic view... a quick-search, that checks whether the user has submitted an integer …
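One way to collapse the duplication is to send the quick-search with hx-get instead of hx-post, so the query lives in request.GET and survives the pagination links, and then to override a single rendering hook rather than both get() and post(). A sketch along those lines (the "#account_list" partial syntax is the template-partials convention already used above):

```python
from django.db.models import Q
from django.shortcuts import render
from django.views.generic import ListView

from .models import Account  # the model from the question


class AccountListView(ListView):
    model = Account
    template_name = 'account_list.html'
    paginate_by = 100

    def get_queryset(self):
        qs = Account.objects.order_by('-date_created', '-id')
        query = self.request.GET.get('query', '').strip()
        if not query:
            return qs
        if query.isdigit():
            return qs.filter(id=int(query))
        return qs.filter(
            Q(full_name__icontains=query)
            | Q(email1=query) | Q(email2=query) | Q(email3=query)
        )

    def render_to_response(self, context, **response_kwargs):
        # Single branch for both full page loads and HTMX partial swaps
        if self.request.headers.get('HX-Request') == 'true':
            return render(self.request, f'{self.template_name}#account_list', context)
        return super().render_to_response(context, **response_kwargs)
```

Because the search term is now a GET parameter, the paginator's ?page=N links only need &query={{ request.GET.query }} appended to keep filtering and paging consistent.
-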
How to get a billing cycle period between the 26th of the previous month and the 25th of the current month using Python (timezone-aware)?
The Problem I'm building a billing system in Django, and I need to calculate the billing period for each invoice. Our business rule is simple: The billing cycle starts on the 26th of the previous month at midnight (00:00:00); And ends on the 25th of the current month at 23:59:59. For example, if the current date is 2025-07-23, the result should be: start = datetime(2025, 6, 26, 0, 0, 0) end = datetime(2025, 7, 25, 23, 59, 59) We're using Django, so the dates must be timezone-aware (UTC preferred), as Django stores all datetime fields in UTC. The problem is: when I run my current code (below), the values saved in the database are shifted, like 2025-06-26T03:00:00Z instead of 2025-06-26T00:00:00Z. What We Tried We tried the following function: from datetime import datetime, timedelta from dateutil.relativedelta import relativedelta def get_invoice_period(reference_date: datetime = None) -> tuple[datetime, datetime]: if reference_date is None: reference_date = datetime.now() end = (reference_date - timedelta(days=1)).replace(hour=23, minute=59, second=59, microsecond=0) start = (reference_date - relativedelta(months=1)).replace(day=26, hour=0, minute=0, second=0, microsecond=0) return start, end But this causes timezone problems, and datetime.now() is not timezone-aware in Django. So when we save these values to the database, Django converts them to UTC, shifting the …
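A sketch of one way to build the boundaries as timezone-aware datetimes. It assumes the cycle boundaries are defined in UTC (so midnight is stored exactly as T00:00:00Z); if they are defined in local time, swap dt_timezone.utc for timezone.get_current_timezone(). It also assumes the reference date falls inside the 26th-to-25th window, as in the 2025-07-23 example:

```python
from datetime import datetime, timezone as dt_timezone

from dateutil.relativedelta import relativedelta
from django.utils import timezone


def get_invoice_period(reference_date: datetime | None = None) -> tuple[datetime, datetime]:
    tz = dt_timezone.utc  # assumption: the billing boundaries are defined in UTC
    if reference_date is None:
        reference_date = timezone.now().astimezone(tz)  # aware "now", unlike datetime.now()

    # assumes reference_date is naive or already expressed in `tz`
    start = (reference_date - relativedelta(months=1)).replace(
        day=26, hour=0, minute=0, second=0, microsecond=0, tzinfo=tz
    )
    end = reference_date.replace(
        day=25, hour=23, minute=59, second=59, microsecond=0, tzinfo=tz
    )
    return start, end
```

With USE_TZ = True, values built this way are stored as 2025-06-26T00:00:00Z and 2025-07-25T23:59:59Z with no 03:00 shift, because Django no longer has to guess a timezone for naive datetimes. If a reference day on or after the 26th should roll into the next cycle, an extra branch on reference_date.day is needed.
-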
How to parse multipart/form-data from a PUT request in Django
I want to submit a form to my backend and use the form data as the initial value for my form. Simple stuff if you are using a POST request: def intervals(request, **kwargs): form = MyForm(initial=request.POST) However, I am sending a form that should replace the current resource, which should idiomatically be a PUT request (I am using HTMX, which allows you to do that). The problem is that I cannot find out how to parse the form data from a PUT request. request.PUT does not exist and QueryDict only works for query params. What am I missing here?
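Django only populates request.POST and request.FILES for POST, but the same parsers can be invoked by hand for other methods. A minimal sketch, assuming the HTMX request body is either multipart/form-data or url-encoded:

```python
from django.http import QueryDict
from django.http.multipartparser import MultiPartParser


def parse_put_body(request):
    """Return (data, files) for a PUT/PATCH body."""
    if request.content_type and request.content_type.startswith('multipart/'):
        # Reuse Django's own multipart parser; `request` acts as the input stream here.
        return MultiPartParser(
            request.META, request, request.upload_handlers, request.encoding
        ).parse()
    # application/x-www-form-urlencoded bodies parse cleanly with QueryDict
    return QueryDict(request.body, encoding=request.encoding or 'utf-8'), None


def intervals(request, **kwargs):
    data = request.POST
    if request.method == 'PUT':
        data, _files = parse_put_body(request)
    form = MyForm(initial=data)  # MyForm as in the question; render as before
```

MultiPartParser.parse() returns a (QueryDict, MultiValueDict) pair, the same shapes as request.POST and request.FILES.
-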
How many keys can I store in Django's file-based cache before it becomes a performance bottleneck?
I'm working with a large number of small-sized data entries (typically 2–3 KB each) and I'm using Django's file-based cache backend for storage. I would like to understand the scalability limits of this approach. Specifically: Is there a practical or recommended limit to the number of cache keys the file-based backend can handle efficiently? At what point (number of keys or total cache size) might I start seeing performance degradation or bottlenecks? Are there any known issues or filesystem-level constraints that I should be aware of when caching tens or hundreds of thousands of small files? I'm open to alternative caching strategies if the file-based backend is not well-suited for this use case. What is the most suitable Django cache backend for storing a high volume of small entries (possibly tens or hundreds of thousands)?
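Two properties of FileBasedCache matter before scaling it to that size: every entry is one file (so each get/set is filesystem I/O), and the backend culls whenever it passes MAX_ENTRIES, which defaults to 300 and requires listing the whole cache directory. A sketch of the relevant settings, with illustrative numbers rather than benchmarks; for tens or hundreds of thousands of small entries an in-memory store such as Redis or Memcached is usually the better fit:

```python
# settings.py (sketch)
CACHES = {
    # File-based: workable for modest key counts, but MAX_ENTRIES defaults to 300
    # and every cull walks the cache directory.
    "default": {
        "BACKEND": "django.core.cache.backends.filebased.FileBasedCache",
        "LOCATION": "/var/tmp/django_cache",
        "OPTIONS": {"MAX_ENTRIES": 50_000, "CULL_FREQUENCY": 4},
    },
    # Better suited to a very high volume of small entries (built-in since Django 4.0).
    "redis": {
        "BACKEND": "django.core.cache.backends.redis.RedisCache",
        "LOCATION": "redis://127.0.0.1:6379/1",
    },
}
```

Filesystem constraints (inode count, directory-listing cost, per-file overhead for 2–3 KB entries) tend to bite before any hard Django limit does.
-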
How to swap between detail views for user-specific datasets in Django (Python)?
On a table (let's call it Items) I open the detail view for an item. On the detail view I want a "next" and a "previous" button. The button should open the next item's detail view. I cannot just traverse through all datasets because the user cannot access other users' datasets. I thought about using a doubly linked list where the data in the nodes contains the id of the current dataset and, as pointers, the next and previous item ids. When the user reaches the tail he automatically goes to the head and the other way around. But I don't want to load this list every time the user opens the next detail view. Is there a resource-friendly way to swap between detail views without just increasing the id?
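A linked list isn't needed if the neighbours are computed inside the user's own queryset at request time; two cheap indexed queries per page view replace the stored structure. A sketch with illustrative model and field names (Item, owner):

```python
from django.contrib.auth.mixins import LoginRequiredMixin
from django.views.generic import DetailView

from .models import Item  # assumed model with an `owner` foreign key to the user


class ItemDetailView(LoginRequiredMixin, DetailView):
    model = Item

    def get_queryset(self):
        # Only this user's datasets are ever reachable.
        return Item.objects.filter(owner=self.request.user).order_by('id')

    def get_context_data(self, **kwargs):
        context = super().get_context_data(**kwargs)
        qs = self.get_queryset()
        # Neighbours within the user's own records, wrapping tail <-> head.
        context['next_item'] = qs.filter(id__gt=self.object.id).first() or qs.first()
        context['previous_item'] = qs.filter(id__lt=self.object.id).last() or qs.last()
        return context
```

The template then links to next_item and previous_item by pk, and because the queryset is owner-filtered, a guessed id belonging to another user simply returns 404.
-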
Is it possible to run Django migrations on a Cloud SQL replica without being the owner of the table?
I'm using Google Cloud SQL for PostgreSQL as an external primary replica, with data being replicated continuously from a self-managed PostgreSQL source using Database Migration Service (DMS) in CDC mode. I connected a Django project to this replica and tried to run a migration that renames a column and adds a new one: uv run python manage.py migrate However, I get the following error: django.db.utils.ProgrammingError: must be owner of table camera_manager_invoice This makes sense, since in PostgreSQL, ALTER TABLE requires table ownership. But in this case, the replica was created by DMS, so the actual table owner is the replication source — and not the current user. 🔍 The Problem: I'm trying to apply schema changes via Django migrations on a Cloud SQL replica that I do not own. The replication is working fine for data (CDC), but I need to apply structural changes on the replica independently. ✅ What I Tried: Changing the connected user: still not the owner, so same error. Running sqlmigrate to get the SQL and applying manually: same result — permission denied. Attempted to change ownership of the table via ALTER TABLE ... OWNER TO ...: failed due to not being superuser. Tried running migration … -
Why is my Cloud SQL external replica not reflecting schema changes (like new columns) after Django migrations?
I'm using Google Cloud Database Migration Service (DMS) to replicate data from a self-managed PostgreSQL database into a Cloud SQL for PostgreSQL instance, configured as an external primary replica. The migration job is running in CDC mode (Change Data Capture), using continuous replication. Everything seems fine for data: new rows and updates are being replicated successfully. However, after running Django’s makemigrations and migrate on the source database — which added new columns and renamed others — the schema changes are not reflected in the Cloud SQL replica. The new columns simply don’t exist in the destination. 🔍 What I’ve done: Source: self-managed PostgreSQL instance. Target: Cloud SQL for PostgreSQL set as an external replica. Replication user has proper privileges and is connected via mTLS. The job is active, with "Optimal" parallelism and healthy status. Data replication (INSERT/UPDATE/DELETE) works great. Schema changes like ALTER TABLE, ADD COLUMN, RENAME COLUMN are not reflected in the replica. ❓ Question: How can I configure DMS or Cloud SQL to also replicate schema changes (like ALTER TABLE or CREATE COLUMN) from the source to the replica? Or is it necessary to manually apply schema changes on the target? I'm fine with workarounds or official recommendations … -
Django won't apply null=True changes on fields when running makemigrations and migrate
I’m working on a Django project and I’m facing an issue: I modified several fields in one of my models to be null=True, but after running makemigrations and migrate, the changes are not reflected in the database. I have a model named Sellers with many fields. For example: class Sellers(models.Model): ... selling_name = models.CharField(max_length=100, null=True, blank=True) zip_code = models.IntegerField(null=True, blank=True) ... However, in the database schema, these fields are still marked as NOT NULL. What I’ve tried: I created an empty migration manually: python manage.py makemigrations --empty sellersapp -n fix_nullable_fields Then manually added AlterField operations like: migrations.AlterField( model_name='sellers', name='selling_name', field=models.CharField(max_length=100, null=True, blank=True), ), After running migrate, it said the migration was applied — but still no actual effect on the DB schema. How can I force Django to apply null=True changes to existing fields in the database? Is there a proper way to generate migrations that actually produce the corresponding ALTER TABLE statements? I’m aware that this issue likely stems from a desynchronization between the database schema and Django’s migration state. However, this is the only inconsistency I need to fix, and I believe this is the cleanest approach to do it without resetting everything. If there’s a better or …
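When Django's recorded migration state already believes the fields are nullable, an AlterField generates no SQL, which matches the symptom of a migration "applying" with no effect. One way to force the ALTER TABLE while keeping the state consistent is RunSQL with state_operations. The table and column names below are assumptions to adapt (python manage.py sqlmigrate shows the real DDL), and only one field is shown:

```python
from django.db import migrations, models


class Migration(migrations.Migration):

    dependencies = [
        ('sellersapp', '0001_initial'),  # placeholder: point at your latest migration
    ]

    operations = [
        migrations.RunSQL(
            # Actually change the database column...
            sql='ALTER TABLE "sellersapp_sellers" ALTER COLUMN "selling_name" DROP NOT NULL;',
            reverse_sql='ALTER TABLE "sellersapp_sellers" ALTER COLUMN "selling_name" SET NOT NULL;',
            # ...while recording in Django's migration state that the field is nullable.
            state_operations=[
                migrations.AlterField(
                    model_name='sellers',
                    name='selling_name',
                    field=models.CharField(max_length=100, null=True, blank=True),
                ),
            ],
        ),
    ]
```

Before resorting to that, running python manage.py sqlmigrate on the hand-edited migration is worth a look: if it prints no ALTER TABLE at all, the state/schema mismatch described above is confirmed.
-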
Django ORM, seeding users and related objects by OneToOneField
I am designing a django application for educational purposes. I've come up with creating a fake banking application. The idea is to have a User<->BankAccount link by a OneToOneField. Similarly, to have User<->UserProfile link by a OneToOneField. Attached is my models.py: from django.db import models from django.contrib import admin from django.contrib.auth import get_user_model from django.contrib.auth.models import User import secrets ACCOUNT_ID_PREFIX = "SBA_" CURRENCY_CHOICES = [ ('USD', '($) US Dollars'), ('EUR', '(Є) European Dollars'), ('ILS', '(₪) New Israeli Shekels'), ] GENDER_CHOICES = [ ('MALE', 'Male'), ('FEMALE', 'Female') ] class Currencies(models.Model): country = models.CharField(max_length=50) code = models.CharField(max_length=6, choices=CURRENCY_CHOICES, unique=True) sign = models.CharField(max_length=4) class UserProfile(models.Model): user = models.OneToOneField(User, on_delete=models.CASCADE) first_name = models.CharField(max_length=30, default="John") middle_name = models.CharField(max_length=30, default="Damien") last_name = models.CharField(max_length=30, default="Doe") gender=models.CharField( max_length=10, choices=GENDER_CHOICES ) class BankAccount(models.Model): owner = models.OneToOneField(User, on_delete=models.PROTECT) # Account ID of format SBA_12345678 account_id = models.CharField(max_length=16, unique=True, null=False) balance = models.DecimalField(max_digits=20, decimal_places=2) currency_code = models.ForeignKey(Currencies, on_delete=models.PROTECT) def save(self, *args, **kwargs): if self._state.adding: acctid = BankAccount.make_new_acct_id() if not acctid: raise Exception("Failed to create new Account ID (attempts exhausted)") print("Saving acctid={}".format(acctid)) self.account_id = acctid super(BankAccount, self).save(*args, **kwargs) @staticmethod def make_new_acct_id() -> str | None: prefix = ACCOUNT_ID_PREFIX accid = "" attempts_left = 5 while attempts_left > 0: accid = prefix + … -
How to prevent Django from generating migrations when using dynamic GoogleCloudStorage in a FileField?
We’re working on a Django project that stores video files in Google Cloud Storage using a FileField. In our model, we define a default bucket storage like this: from storages.backends.gcloud import GoogleCloudStorage from django.conf import settings DEFAULT_STORAGE = GoogleCloudStorage(bucket_name=settings.DEFAULT_GCS_BUCKET) class Recording(models.Model): raw_file_gcp = models.FileField(blank=True, null=True, storage=DEFAULT_STORAGE) However, in some parts of the system, we move files between two different GCS buckets: One for regular usage (e.g. default-bucket) Another for retention or archival purposes (e.g. retention-bucket) To do that, we dynamically update the .name attribute of the file based on logic in the backend: recording.raw_file_gcp.name = path_with_retention_bucket recording.save(update_fields=["raw_file_gcp", "updated_at"]) Because the underlying storage class contains a bucket name, every time we run makemigrations, Django detects a change and adds a migration like this: migrations.AlterField( model_name='recording', name='raw_file_gcp', field=models.FileField(blank=True, null=True, storage=myapp.models.CustomStorage(bucket_name='default-bucket')), ) But nothing has actually changed in the model. To avoid these unnecessary AlterField migrations, we implemented a custom storage class using @deconstructible and __eq__: from django.utils.deconstruct import deconstructible from storages.backends.gcloud import GoogleCloudStorage @deconstructible class NeutralGCSStorage(GoogleCloudStorage): def __eq__(self, other): return isinstance(other, NeutralGCSStorage) And then used: DEFAULT_STORAGE = NeutralGCSStorage(bucket_name=settings.DEFAULT_GCS_BUCKET) But Django still generates the same migration, and doesn’t treat the storage as unchanged. ❓ What we’re looking for How can we prevent Django
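The __eq__ override doesn't help because makemigrations compares the deconstructed form of the field (which still embeds bucket_name), not storage instances. Since Django 3.1, FileField.storage can be a callable, and callables are serialized into migrations as a plain reference, so the bucket name never reaches the migration at all. A sketch:

```python
from django.conf import settings
from django.db import models
from storages.backends.gcloud import GoogleCloudStorage


def select_default_storage():
    # Evaluated at runtime, never serialized into migrations.
    return GoogleCloudStorage(bucket_name=settings.DEFAULT_GCS_BUCKET)


class Recording(models.Model):
    raw_file_gcp = models.FileField(blank=True, null=True, storage=select_default_storage)
```

One final AlterField migration is generated when switching to the callable; after that, bucket changes in settings no longer show up in makemigrations.
-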
Django Celery with prefork workers breaks OpenTelemetry metrics
I have a Django application I wanted to instrument with OpenTelemetry for traces and metrics. I created an otel_config.py file next to my manage.py with this content: # resources def get_default_service_instance_id(): try: hostname = socket.gethostname() or "unknown-host" except Exception as e: hostname = "unknown-host" try: process_id = os.getpid() except Exception as e: process_id = "unknown-pid" return f"{hostname}-{process_id}" service_name = "my-service" otlp_endpoint = "http://otel-collector:4318" service_instance_id = get_default_service_instance_id() resource = Resource.create( { SERVICE_NAME: service_name, SERVICE_INSTANCE_ID: service_instance_id, } ) # traces otlp_endpoint_traces = urljoin(otlp_endpoint, "/v1/traces") trace_exporter = OTLPSpanExporter(endpoint=otlp_endpoint_traces) span_processor = BatchSpanProcessor(trace_exporter) tracer_provider = TracerProvider(resource=resource) trace.set_tracer_provider(tracer_provider) trace.get_tracer_provider().add_span_processor(span_processor) # metrics otlp_endpoint_metrics = urljoin(otlp_endpoint, "/v1/metrics") metric_exporter = OTLPMetricExporter(endpoint=otlp_endpoint_metrics) metric_reader = PeriodicExportingMetricReader(metric_exporter) meter_provider = MeterProvider(resource=resource, metric_readers=[metric_reader]) metrics.set_meter_provider(meter_provider) # instrument DjangoInstrumentor().instrument() Psycopg2Instrumentor().instrument() CeleryInstrumentor().instrument() Then, I simply imported it at the end of my settings.py file like below: import otel_config Although my traces and metrics work fine in most cases, my OpenTelemetry metrics are broken in the case of Celery workers in prefork mode. With prefork workers, the child processes happen to get the same SERVICE_INSTANCE_ID as the parent process. Therefore, different child processes report under the same metric identity even though each has its own exclusive memory. Thus, the value in my collector gets changed very often and …
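The usual pattern for prefork pools is to rebuild the providers in each child via Celery's worker_process_init signal, so every fork runs the setup with its own PID and therefore its own service.instance.id. A sketch, assuming the module-level code in otel_config.py is refactored into a callable such as configure_opentelemetry() (a hypothetical name for that refactor):

```python
# otel_config.py (sketch): wrap the existing provider setup in a function and add
from celery.signals import worker_process_init
from opentelemetry.instrumentation.celery import CeleryInstrumentor


@worker_process_init.connect(weak=False)
def init_otel_per_worker(*args, **kwargs):
    # Runs inside each forked child: os.getpid() now differs from the parent, so
    # the hostname-pid SERVICE_INSTANCE_ID becomes unique per process.
    configure_opentelemetry()          # hypothetical refactor of the module-level setup
    CeleryInstrumentor().instrument()
```

The parent process can keep calling the setup at import time as before; only the children need the re-initialisation hook, which is also where the OpenTelemetry Celery instrumentation documentation recommends instrumenting.
-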
Moving from Django WSGI to ASGI/Uvicorn: issue with AppConfig.ready() being called synchronously in an asynchronous context
I'm moving my application views to asynchronous calls as they request a lot of data from the database. When running the async views from the WSGI server, everything is working according to expectations. But to be able to really benefit from the async rewriting of my application, I'm now trying to start my application as an ASGI application, together with Uvicorn. asgi.py: os.environ.setdefault('DJANGO_SETTINGS_MODULE', 'MyAppName.settings') application = get_asgi_application() While launching the ASGI server through: uvicorn MyAppName.asgi:application I end up triggering: SynchronousOnlyOperation(message) django.core.exceptions.SynchronousOnlyOperation: You cannot call this from an async context - use a thread or sync_to_async The reason is that through my AppConfig.ready() method, I'm calling a function which populates some key data from the database into a cache. my_app/apps.py: class MyAppConfig(AppConfig): name = 'my_app' def ready(self): """ hook for application initialization : is called as soon as the registry is fully populated for this app put your startup code here useful to run some code inside the Django appserver process or you need to initialize something in memory, in the context of the Django app server """ # Reinitializing cache to enable the cached_dicts to get the right values: cached_dicts.set_up_cache_dicts() AppConfig.ready() is by design a sync method in Django, but the …
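Under Uvicorn the application, and therefore django.setup() and every ready() hook, is typically loaded inside the running event loop, where the ORM refuses synchronous queries. One workaround is to detect the loop and push the warm-up into a plain thread, where sync ORM access is allowed again; a sketch using the cached_dicts helper from the question (its import path is not shown above, so it is left as-is):

```python
import asyncio
import threading

from django.apps import AppConfig


class MyAppConfig(AppConfig):
    name = 'my_app'

    def ready(self):
        try:
            asyncio.get_running_loop()
        except RuntimeError:
            # WSGI / manage.py: no event loop, a plain sync call is fine.
            cached_dicts.set_up_cache_dicts()
        else:
            # ASGI startup: run the sync ORM work outside the event loop's thread.
            threading.Thread(
                target=cached_dicts.set_up_cache_dicts, daemon=True
            ).start()
```

The trade-off is that the cache warm-up finishes slightly after startup instead of before the first request; if that matters, an ASGI lifespan handler or lazy initialisation on first access are the alternatives.
-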
Django message not showing up in template
I’m using Django 5.2.4, and my login_user view sets an error message with messages.error when authentication fails, but it doesn’t appear in the template (login page) after redirecting. App urls: from django.urls import path from django.shortcuts import redirect from . import views # urlpatterns of the app checks the function names defined in views urlpatterns = [ path('', lambda request: redirect('login', permanent=True)), path("login/", views.login_user, name="login"), ] Project urls: from django.contrib import admin from django.shortcuts import redirect from django.urls import path, include urlpatterns = [ path("", lambda request: redirect('loginapp/', permanent=True)), path('admin/', admin.site.urls), path("loginapp/", include("loginapp.urls")), path("loginapp/", include('django.contrib.auth.urls')), ] template html : <html lang="en"> <body> <div class="box"> {% if messages %} <ul class="messages"> {% for message in messages %} <li{% if message.tags %} class="{{ message.tags }}"{% endif %}>{{ message }}</li> {% endfor %} </ul> {% endif %} <h2>Welcome to Trevo</h2> <h3>Sign in now!</h3> <form method="post", action="{% url 'login' %}"> <!-- Django uses to verify that the form submission is coming from your site and not from a malicious third party. --> {% csrf_token %} <label for="Username">Username</label><br> <input type="text" id="username" name="username"><br> <br> <label for="Password">Password</label><br> <input type="password" id="password" name="password"><br> <br> <input type="submit" id="submit" name="submit" value="Sign in"><br> </form> </div> <style> Views.py : from django.shortcuts import render, … -
How to show data from connected models in a Django template?
I have these models: class Publication(models.Model): pub_text = models.TextField(null=True, blank=True) pub_date = models.DateTimeField(auto_now_add=True) pub_author = models.ForeignKey(User, on_delete=models.CASCADE) coor_text = models.CharField(null=True, blank=True) coor_adress = models.CharField(null=True, blank=True) coor_coordinates = models.CharField(null=True, blank=True) class Image(models.Model): image = models.ImageField(upload_to='images', null=True) image_to_pub = models.ForeignKey(Publication, on_delete=models.CASCADE, null=True, related_name='images') And I have this view: def pub_list_view(request): pubs = Publication.objects.all() images = Image.objects.all() context = {"pubs": pubs, "images": images} return render(request, "epictalk/pub_list.html", context) And I have this template: {% extends 'epictalk/layout.html' %} {% block content %} {% for pub in pubs %} <h1>{{ pub.pub_text }}</h1> {% for image in images %} <img src="{{ image.image.url }}"> {% endfor %} {% endfor %} {% endblock %} How can I show in the browser each Publication with the images connected to it? And how can I show each Publication with only the first connected image?
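The related_name='images' on Image.image_to_pub already gives every Publication an images manager, so the template can loop over that publication's own images instead of over all Image rows. A sketch of the view with a prefetch to avoid one extra query per publication:

```python
from django.shortcuts import render

from .models import Publication


def pub_list_view(request):
    # prefetch_related uses the reverse relation defined by related_name='images'
    pubs = Publication.objects.prefetch_related('images').all()
    return render(request, "epictalk/pub_list.html", {"pubs": pubs})
```

In the template, the inner loop then becomes {% for image in pub.images.all %} <img src="{{ image.image.url }}"> {% endfor %}, and for only the first image something like {% with first=pub.images.first %}{% if first %}<img src="{{ first.image.url }}">{% endif %}{% endwith %} does the job.
-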
django-import-export: is the id auto-generated by the package during insert?
I'm using django-import-export and trying to make it work with multi-threaded concurrency. I tried logging the SQL queries and noticed that the INSERT query has id values generated as well: INSERT INTO "lprovider" ("id", "npi", "provider_id", "first_name", "last_name") VALUES (278082, '1345', NULL, 'CHARLES', 'STEVENS') Is this expected? Does the package populate the primary key itself? -
Postgres indexing fails in Django
I tried setting db_index=True, HashIndex and BrinIndex; nothing works, the queries still run as a Seq Scan. There are 1000 records in the database, and all migrations are completed. Model code: from django.db import models from django.utils import timezone from django.contrib.postgres.indexes import BrinIndex, HashIndex class Contact(models.Model): phone = models.CharField(max_length=50, unique=True) address = models.CharField(max_length=50) def __str__(self): return self.phone class Department(models.Model): name = models.CharField(max_length=255) description = models.TextField(null=True, blank=True) def __str__(self): return self.name class Employee(models.Model): first_name = models.CharField(max_length=100) last_name = models.CharField(max_length=100) about = models.CharField(max_length=10000,db_index=True) age = models.SmallIntegerField(null=True) created = models.DateTimeField(default=timezone.now) work_experience = models.SmallIntegerField(default=0, null=True) contact = models.OneToOneField(Contact, on_delete=models.CASCADE, null=True) department = models.ForeignKey(Department, on_delete=models.CASCADE, default=None, null=True) class Meta: indexes = ( BrinIndex(fields=('created',), name="hr_employee_created_ix", pages_per_range=2 ), ) def __str__(self): return f'{self.first_name} {self.last_name}' I tried these filters: employees = Employee.objects.filter(created__year__lte=2022) employees = Employee.objects.filter(about__contains='Test')
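Two separate effects are probably at play. First, with only 1,000 rows the Postgres planner often prefers a sequential scan even when a usable index exists, because reading the whole table is cheaper. Second, neither filter can use the indexes shown: about__contains becomes LIKE '%Test%', which a B-tree or hash index cannot serve (it needs a trigram GIN/GiST index), and created__year__lte wraps the column in an expression, so the BRIN index on created is not applicable, whereas a plain range filter on created is index-friendly. A sketch of a trigram index, assuming the pg_trgm extension is enabled (TrigramExtension() in a migration); the index name is illustrative:

```python
from django.contrib.postgres.indexes import BrinIndex, GinIndex
from django.db import models
from django.utils import timezone


class Employee(models.Model):
    # ...other fields as in the question...
    about = models.CharField(max_length=10000)
    created = models.DateTimeField(default=timezone.now)

    class Meta:
        indexes = (
            BrinIndex(fields=('created',), name="hr_employee_created_ix", pages_per_range=2),
            # Serves about__contains / about__icontains lookups via pg_trgm trigrams
            GinIndex(fields=('about',), name="hr_employee_about_trgm", opclasses=('gin_trgm_ops',)),
        )
```

Comparing EXPLAIN output with a realistic row count (or with SET enable_seqscan = off in a psql session) is the quickest way to confirm the index is actually chosen, and filtering with created__lt=datetime(2023, 1, 1) instead of created__year__lte=2022 gives the BRIN index a chance as well.
-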
Bad Request /api/accounts/register/ HTTP/1.1" 400 50
I am a newbie to ReactJS and Django REST Framework. I am trying to connect the frontend registration form to a backend API to no avail; I keep getting "POST /api/accounts/register/ HTTP/1.1" 400 50' error. The following are codes: endpoints: from django.urls import path from .views import RegisterView, LoginView, AuditLogView urlpatterns = [ path('admin/', admin.site.urls), path('api/accounts/', include('accounts.urls')), ] App URLs: urlpatterns = [ path('register/', RegisterView.as_view(), name='register'), path('login/', LoginView.as_view(), name='login'), path('audit-logs/', AuditLogView.as_view(), name='audit-logs'), ] On the part of ReactJS, frontend, I provide a connection to the backend using axios as follows: import axios from "axios"; const API = axios.create({ baseURL: "http://localhost:8000/api/accounts/", // Change this to match your Django backend }); // Register a new user export const registerUser = (userData) => API.post("register/", userData); // Login and receive token export const loginUser = (credentials) => API.post("login/", credentials); // Get audit logs (SUPERUSER only) export const getAuditLogs = (token) => API.get("audit-logs/", { headers: { Authorization: `Token ${token}` }, }); // Get user profile (optional if endpoint exists on your backend) export const getUserProfile = (token) => API.get("profile/", { headers: { Authorization: `Token ${token}` }, }); // Logout (optional if you implement token blacklist on server) export const logoutUser = (token) => API.post( "logout/", … -
Why are some folders and files still red in PyCharm even though the Django project works correctly?
I'm working on a Django project in PyCharm, and although everything works fine (including migrations, interpreter setup, Django installation, and manage.py is in the correct place), some folders and .py files like models.py, admin.py, etc., still appear in red in the Project view. I have marked the correct root folder (the one containing manage.py) as Sources Root, ensured all packages (blog, djangoProject) have __init__.py, checked that all imports are correct and not showing any actual errors, and run Invalidate Caches / Restart. -
Email verification in Django (Python)
I'm trying to implement email verification in Django, and everything works correctly, but if a user creates an account with someone else's email and never confirms it, the owner of that email will not be able to register, because the account is already registered and is_active is False. Here is my views.py with the registration form: from django.shortcuts import render, redirect from django.http.request import HttpRequest from django.http import JsonResponse from .forms import UserRegisterForm, UserLoginForm from .models import User from .tokens import accound_activate_token from django.contrib.sites.shortcuts import get_current_site from django.template.loader import render_to_string from django.utils.http import urlsafe_base64_decode, urlsafe_base64_encode from django.utils.encoding import force_bytes, force_str from django.core.mail import send_mail # from django.contrib.auth.forms import AuthenticationForm from django.contrib.auth import authenticate, login, logout def send_email_validation(request: HttpRequest, user: User, form: UserRegisterForm): current_site = get_current_site(request) mail_subject = "Activate your account" message = render_to_string( "users/emails/email_activation.html", { "user": user, "domain": current_site.domain, "uid": urlsafe_base64_encode(force_bytes(user.pk)), 'token': accound_activate_token.make_token(user) } ) from_email = "xxx" to_email = form.cleaned_data["email"] send_mail( mail_subject, message, from_email, [to_email] ) def register_page(request: HttpRequest): if request.method == "GET": reg_form = UserRegisterForm() if request.method == "POST": reg_form = UserRegisterForm(request.POST) # create user and redirect to login if reg_form.is_valid(): user: User = reg_form.save(commit=False) user.is_active = False user.save() # sending email for validation send_email_validation(request, user, reg_form) return redirect("users_login") context = …
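A common way out is to stop treating an unverified, inactive account as permanently claiming the address: before creating the new user, delete (or take over) stale accounts with the same email that never activated, optionally after a grace period so the first registrant still has time to click the link. A sketch of a helper to call in register_page on POST, before reg_form.is_valid(); the 24-hour window and the date_joined field are assumptions about the custom User model:

```python
from datetime import timedelta

from django.utils import timezone

from .models import User  # the custom User from the question


def release_unverified_email(email: str) -> None:
    """Free an email address held only by stale, never-activated accounts."""
    cutoff = timezone.now() - timedelta(hours=24)  # grace period for the original registrant
    (User.objects
         .filter(email__iexact=email, is_active=False, date_joined__lt=cutoff)
         .delete())
```

Calling it as release_unverified_email(request.POST.get("email", "")) before validation means the uniqueness check no longer sees the stale row; the other half of the usual approach is a "resend activation email" view, so a legitimate user who simply lost the email is not forced to re-register.
-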
activate script missing from Python virtual environment (venv) on Ubuntu 22.04 with Python 3.12
I'm trying to deploy a Django project on my Ubuntu server using a virtual environment. I created a directory named ae9f7a37e98d4a8f98643ced843d71d7_venv, but when I try to activate it using: source /www/wwwroot/ayale_atv/ae9f7a37e98d4a8f98643ced843d71d7_venv/bin/activate I get this error: -bash: /www/wwwroot/ayale_atv/ae9f7a37e98d4a8f98643ced843d71d7_venv/bin/activate: No such file or directory When I check the contents of the bin/ folder, I see Python binaries but no activate script: ls -l /www/wwwroot/ayale_atv/ae9f7a37e98d4a8f98643ced843d71d7_venv/bin Output: 2to3 pip3 pydoc3 python3.12 idle3 pip3.12 python3 python3.12-config It seems like the virtual environment wasn't set up correctly. I'm using Python 3.12.3 and Ubuntu 22.04. What caused this, and how can I properly create a working virtual environment so I can run my Django project? I use aaPanel to deploy this Django app. -
Django-tenants: relation "journal_nav_topnavitem" does not exist even after adding app to SHARED_APPS and running migrate_schemas --shared
I'm working on a multi-tenant Django project using django-tenants with Django 3.2.16. I created an app called journal_nav and initially added it only to TENANT_APPS. Later, I moved it to SHARED_APPS because it provides a common navigation bar for all tenants and the public schema. I added it to SHARED_APPS in settings.py: SHARED_APPS = [ 'app123', 'app456', ... 'journal_nav', # moved here ] However, when I visited a route that used a template context processor that queried TopNavItem.objects.all(), I got the following error: Internal Server Error: /services/typography/ django.db.utils.ProgrammingError: relation "journal_nav_topnavitem" does not exist LINE 1: ...", "journal_nav_topnavitem"."show_in_topbar" FROM "journal_n... I then ran: python manage.py migrate_schemas --shared But it showed: [standard:public] Running migrations: No migrations to apply. Even though the model clearly existed and the migration file (0001_initial.py) was present.