Django community: RSS
This page, updated regularly, aggregates Django Q&A from the Django community.
-
How to trigger multiple views in Django with one button using HTMX?
I’m working on a Django project where I need to call two different views when clicking a single button using HTMX. Scenario: First, I need to send a POST request to edit a task (edit_positions_tasks view). After the first request completes, I need to send a GET request to create a task (create_position_tasks view). The second request should only execute after the first request is successfully processed. Current Code: <button hx-post="{% url 'hrm_tenant:edit_positions_tasks' task.id %}" hx-target="#task_{{ task.uuid }}" hx-swap="outerHTML" hx-on::after-request="htmx.ajax('GET', '{% url 'hrm_tenant:create_position_tasks' empty_position.id %}', {target: '#task_table', swap: 'beforeend'})" > Update </button> Problem: The first POST request works correctly and updates the task. However, the GET request to create_position_tasks doesn’t seem to fire or execute properly after the first request finishes. What I Need Help With: Is my approach correct for chaining two requests in HTMX? If not, what is the recommended way to ensure the second request only fires after the first one completes successfully? Are there better ways to handle this in HTMX or JavaScript? Any insights would be greatly appreciated! -
Sending large base64 files through RabbitMQ to consume on workers
I'm using RabbitMQ and Celery to process email attachments using the gmail API. In my first celery task it fetches batches of emails with large attachments in base64 strings greater than 25mb per file. The current RabbitMQ default limit is 16mb, but I don't want to raise it because I read a few articles about how keeping the message size small is a better practice. What is the best practice here? While the first task is pulling emails, I want to create multiple other celery workers that processes those files (with OCR and storing it in a database) concurrently to optimize the speed of the process. A few solutions (that I'm not sure if it's a good practice because I'm a newbie) I came up with: Raising the RabbitMQ message size limit Storing the file in memory and referencing that in the second celery task (Not sure if this is a good idea, because my server I'm running is 32gb of ram) In the first celery task that's pulling emails, I can directly upload that to a cloud storage service, and then reference that url to the file in the second celery task. But the downside of that is I … -
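The usual answer here is the third option, known as the "claim check" pattern: upload the attachment to object storage from the first task and put only a small reference on the queue, so the broker message stays tiny no matter how large the file is. A minimal sketch of the idea with an in-memory stand-in for the blob store (the names here are illustrative, not Celery or S3 APIs):

```python
import uuid

class BlobStore:
    """Stand-in for S3/GCS/MinIO: the broker never sees the payload."""
    def __init__(self):
        self._blobs = {}

    def put(self, data: bytes) -> str:
        key = str(uuid.uuid4())
        self._blobs[key] = data
        return key

    def get(self, key: str) -> bytes:
        return self._blobs[key]

def fetch_attachment(store: BlobStore) -> dict:
    # First task: pull the attachment, park it in storage,
    # and emit a message that is only a few bytes long.
    attachment = b"x" * (30 * 1024 * 1024)  # 30 MB — over RabbitMQ's default cap
    return {"attachment_key": store.put(attachment)}

def ocr_task(store: BlobStore, message: dict) -> int:
    # Second task: resolve the reference and do the heavy work.
    data = store.get(message["attachment_key"])
    return len(data)  # OCR + database write would happen here
```

In Celery terms the first task would pass just the storage key to the next task (e.g. via a chain or by enqueueing one message per attachment), and any number of OCR workers can consume keys concurrently; the message size stays constant regardless of attachment size.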
Should I Perform Geocoding on the Frontend or Backend in a Django Application?
I'm developing a web application using Django and need to convert user-provided addresses into geographic coordinates (latitude and longitude) using a geocoding service like the Google Maps Geocoding API. I'm trying to determine the best practice for where to handle this geocoding process: Frontend: Collect the address from the user, send it directly to the geocoding API from the client's browser, and then send the obtained coordinates to the Django backend for storage and further processing. Backend: Collect the address from the user via the frontend, send it to the Django backend, perform the geocoding server-side by making a request to the geocoding API, and then store and process the coordinates as needed. I'm trying to determine the best practice for where to handle this geocoding process -
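Backend geocoding is generally the safer choice: it keeps the API key out of the browser and lets you validate and store the result in one place. A sketch of the server-side parsing step, assuming the documented Google Geocoding API JSON shape; the HTTP call itself (e.g. via `requests` with your key) is omitted:

```python
def extract_coordinates(payload: dict):
    """Pull (lat, lng) out of a Google Geocoding API response dict.

    Returns None when the service found no match, so callers can
    store the address without coordinates instead of crashing.
    """
    if payload.get("status") != "OK" or not payload.get("results"):
        return None
    location = payload["results"][0]["geometry"]["location"]
    return (location["lat"], location["lng"])
```

The Django view would call the API with the submitted address, run the response through something like this, and save the pair on the model before returning.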
Issues when trying to deploy using Nginx and Docker?
I’m deploying my backend using Nginx and Docker (containerized DRF app), but I’m encountering an issue when trying to access the admin panel. I get the following error: "403 Forbidden – CSRF verification failed. Request aborted." To fix this, I reviewed my configuration and added some parameters to my settings.py file, ensuring that CSRF_TRUSTED_ORIGINS points to the correct domain, but unfortunately that didn't fix the problem. Any hints? -
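Two things commonly bite here when Django sits behind Nginx in Docker: since Django 4.0, CSRF_TRUSTED_ORIGINS entries must include the scheme, and when Nginx terminates TLS Django has to be told the original request was HTTPS, or CSRF's origin check fails. A hedged settings.py fragment (the domain is a placeholder):

```python
# settings.py — sketch; replace example.com with your real domain
CSRF_TRUSTED_ORIGINS = ["https://example.com", "https://www.example.com"]
ALLOWED_HOSTS = ["example.com", "www.example.com"]

# Only if Nginx terminates TLS and forwards the original scheme:
SECURE_PROXY_SSL_HEADER = ("HTTP_X_FORWARDED_PROTO", "https")
```

For the last setting to be safe, the Nginx server block must actually set the header, e.g. `proxy_set_header X-Forwarded-Proto $scheme;`.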
"Cannot query 'admin': Must be 'User' instance"
I’m working on a Django REST Framework project where I have a Cart model, and I want to allow authenticated users to add items to their cart via a POST request. However, I keep getting the following error: ValueError: Cannot query "admin": Must be "User" instance. Here’s the relevant part of my Cart model: class Cart(models.Model): user = models.ForeignKey(User, on_delete=models.CASCADE, related_name='carts') album = models.ForeignKey(Album, on_delete=models.CASCADE) quantity = models.PositiveIntegerField(default=1) added_at = models.DateTimeField(default=timezone.now) And here’s the CartView where the issue occurs: class CartView(APIView): permission_classes = [IsAuthenticated] def post(self, request): user = User.objects.get(id=request.user.id) # Force correct User instance album_id = request.data.get('album_id') if not album_id: return Response({"error": "Album ID is required"}, status=400) try: album = Album.objects.get(id=album_id) except Album.DoesNotExist: return Response({"error": "Album does not exist"}, status=404) cart_item, created = Cart.objects.get_or_create(user=user, album=album) if not created: cart_item.quantity += 1 cart_item.save() return Response({"message": f"Added {album.title} to cart", "quantity": cart_item.quantity}, status=201) In Postman, I include the Authorization header with a valid token: Authorization: Token fcbb7230bb0595694200e3e6effbe67d1c43fb7c Ensured request.user is properly authenticated. Verified that the token belongs to the correct user in the admin panel. Logged type(request.user) and found it’s not always an instance of User. Explicitly retrieved the User instance using User.objects.get(id=request.user.id). -
How to use django-allauth for Google API?
How is django-allauth used to obtain authorization via OAuth2 for a Google API (in my case the Gmail API)? Additionally, I am looking to implement this separately from using django-allauth to have users log in with Google, so I would need to store it separately, and also call it in a view. Thanks! -
How to serve static files in a Django production environment while Debug=False?
So, guys, I found an interesting problem with serving static files in a Django production environment while Debug=False. Django is designed to stop serving static files when Debug=False. For Docker and K8s users, this is the perfect way to handle the issue. In your Dockerfile create a mount point, e.g. /app, for your Django app and make sure to copy your app to this mount: COPY . /app In the docker-compose.yaml file, when creating your service, make sure to create a static_volume that binds /app/static -
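An alternative to the Nginx-plus-shared-volume setup sketched above is to let Django serve its own static files through WhiteNoise, which avoids the volume entirely and works well in a single-container deployment. A sketch of that route (assumes `pip install whitenoise`; the Django 4.2+ STORAGES form is shown):

```python
# settings.py — sketch
MIDDLEWARE = [
    "django.middleware.security.SecurityMiddleware",
    "whitenoise.middleware.WhiteNoiseMiddleware",  # directly after SecurityMiddleware
    # ... the rest of your middleware unchanged ...
]

STATIC_URL = "/static/"
STATIC_ROOT = BASE_DIR / "staticfiles"  # where collectstatic gathers files

STORAGES = {
    "default": {
        "BACKEND": "django.core.files.storage.FileSystemStorage",
    },
    "staticfiles": {
        "BACKEND": "whitenoise.storage.CompressedManifestStaticFilesStorage",
    },
}
```

Then run `python manage.py collectstatic --noinput` during the image build so the compressed, hashed files are baked into the container.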
Manage django-allauth social applications from admin portal
In all the tutorials I've seen, the django-allauth settings are all in the settings.py file. However, this ends up being kind of messy: SOCIALACCOUNT_PROVIDERS = { "google": { "SCOPE": [ "profile", "email", ], "AUTH_PARAMS": { "access_type": "online", "redirect_uri": "https://www.********.com/accounts/google/login/callback/", }, "OAUTH_PKCE_ENABLED": True, } } SITE_ID = 1 SOCIALACCOUNT_ONLY = True ACCOUNT_EMAIL_VERIFICATION = 'none' ACCOUNT_EMAIL_REQUIRED = True ACCOUNT_AUTHENTICATION_METHOD = 'email' LOGIN_REDIRECT_URL = 'https://www.********.com/success/' ROOT_URLCONF = 'gvautoreply.urls' So my question is, how can I fully manage Social Applications and their settings from the admin portal? I see that there is a settings field that takes JSON input, but I can't find any documentation on how to use it. -
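As far as the allauth docs go, the split is: per-app credentials (client id, secret, key) can live in the database as Social application rows managed in the admin, while behavioral, provider-level settings (SCOPE, AUTH_PARAMS, OAUTH_PKCE_ENABLED, the ACCOUNT_* flags) stay in settings.py. The SocialApp `settings` JSON field corresponds to the `"settings"` dict of an `APPS` entry under SOCIALACCOUNT_PROVIDERS, and its accepted keys are provider-specific (e.g. `server_url` for OpenID Connect), so it is not a drop-in home for SCOPE. A hedged sketch of the slimmed-down settings.py once credentials move to the admin:

```python
# settings.py — sketch: credentials removed, only behavior kept here.
# Create the Google app under admin -> Social applications, and make
# sure it is attached to the Site matching SITE_ID.
SOCIALACCOUNT_PROVIDERS = {
    "google": {
        "SCOPE": ["profile", "email"],
        "AUTH_PARAMS": {"access_type": "online"},
        "OAUTH_PKCE_ENABLED": True,
    }
}
SITE_ID = 1
SOCIALACCOUNT_ONLY = True
```

One caveat: allauth refuses to run with credentials for the same provider both in settings.py and in the database, so pick one home per provider.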
Where docker gets the worker from if it is not specified in the compose file (it was before, but deleted and docker still pulls it)
I want to run the application from docker, there was a problem that worker_1 connects to localhost, but a different address was specified in the env .env DATABASE_URL=postgres://db_user:db_password@db/db_name #DATABASE_URL=postgres://postgres:123@127.0.0.1/olx-killer # Redis #REDIS_URL=redis://localhost:6379 REDIS_URL=redis://redis:6379 I decided to comment out everything related to redis and celery both in settings and env, also deleted worker from docker compose file, but it still takes it from somewhere and tries to connect to local host. settings.py # Redis #REDIS_URL = os.getenv('REDIS_URL', 'redis://redis:6379/') #REDIS_CACHE_URL = f'{REDIS_URL}/1' # Celery #CELERY_TIMEZONE = TIME_ZONE #CELERY_TASK_TRACK_STARTED = True #CELERY_BROKER_URL = REDIS_URL #CELERY_RESULT_BACKEND = None #CELERY_TASK_SERIALIZER = 'json' #CELERY_RESULT_SERIALIZER = 'json' #CELERY_ACCEPT_CONTENT = ['json'] docker-compose.yml volumes: pg_data: driver: local x-base: &base-backend build: . volumes: - .:/code:delegated depends_on: - db services: backend: <<: *base-backend ports: - "8000:8000" env_file: .env environment: - DJANGO_SETTINGS_MODULE=settings.main entrypoint: ["/code/entrypoint.sh"] depends_on: - db restart: unless-stopped db: image: postgres:13 volumes: - "pg_data:/var/lib/postgresql/data" environment: POSTGRES_DB: db_name POSTGRES_USER: db_user POSTGRES_PASSWORD: db_password ports: - "5432:5432" restart: unless-stopped Traceback db_1 | 2025-02-06 17:22:20.576 UTC [1] LOG: listening on IPv4 address "0.0.0.0", port 5432 db_1 | 2025-02-06 17:22:20.577 UTC [1] LOG: listening on IPv6 address "::", port 5432 db_1 | 2025-02-06 17:22:20.581 UTC [1] LOG: listening on Unix socket "/var/run/postgresql/.s.PGSQL.5432" db_1 | 2025-02-06 17:22:20.588 … -
Native way to preview results in Django?
I have a Django app running (the frontend is vanilla everything) and it has a search function. I would like users to see a preview dropdown of search results while typing, so they can tell whether there will be results without having to complete the search, to improve the UX. Is there a Django package or a native Django method to do this? And what is this called? -
Why does the package name need to be included in this import statement?
To set things up, here's my directory structure for my Django project: RecallThatMovie | |-- RecallThatMovie | | | |-- __pycache__ (directory) | | | |-- __init__.py | | | |-- asgi.py | | | |-- myconfig.py | | | |-- settings.py | | | |-- urls.py | | | |-- wsgi.py | |-- db.sqlite3 | |-- manage.py And I have this code at the top of the settings.py file: """ Django settings for RecallThatMovie project. Generated by 'django-admin startproject' using Django 5.1.5. """ import myconfig from pathlib import Path # Build paths inside the project like this: BASE_DIR / 'subdir'. BASE_DIR = Path(__file__).resolve().parent.parent # Quick-start development settings - unsuitable for production # See https://docs.djangoproject.com/en/5.1/howto/deployment/checklist/ # SECURITY WARNING: keep the secret key used in production secret! SECRET_KEY = myconfig.SECRET_KEY When I run python manage.py I get this error: File "<frozen importlib._bootstrap>", line 1387, in _gcd_import File "<frozen importlib._bootstrap>", line 1360, in _find_and_load File "<frozen importlib._bootstrap>", line 1331, in _find_and_load_unlocked File "<frozen importlib._bootstrap>", line 935, in _load_unlocked File "<frozen importlib._bootstrap_external>", line 1026, in exec_module File "<frozen importlib._bootstrap>", line 488, in _call_with_frames_removed File "C:\Users\jaull\Documents\RecallThatMovie\RecallThatMovie\settings.py", line 13, in <module> import myconfig ModuleNotFoundError: No module named 'myconfig' However when I change from: import myconfig … -
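What is happening: `manage.py` lives one level above, so Python's import root is the outer RecallThatMovie directory. That makes `settings.py` importable only as `RecallThatMovie.settings`, and its sibling modules importable only as members of that package — hence `from RecallThatMovie import myconfig` (or `from . import myconfig`) works while a bare `import myconfig` does not. A self-contained demonstration that builds a throwaway package on disk (the names `pkg`/`myconfig` are illustrative):

```python
import importlib
import os
import sys
import tempfile

# Lay out:  root/pkg/__init__.py  and  root/pkg/myconfig.py
root = tempfile.mkdtemp()
pkg = os.path.join(root, "pkg")
os.makedirs(pkg)
open(os.path.join(pkg, "__init__.py"), "w").close()
with open(os.path.join(pkg, "myconfig.py"), "w") as f:
    f.write("SECRET_KEY = 'dummy'\n")

sys.path.insert(0, root)      # what running manage.py from `root` effectively does
importlib.invalidate_caches()  # files were created after interpreter start

# Bare `import myconfig` fails: only `root` is on sys.path, not `root/pkg`
try:
    importlib.import_module("myconfig")
    bare_import_worked = True
except ModuleNotFoundError:
    bare_import_worked = False

# Package-qualified import works — the package name is part of the path
cfg = importlib.import_module("pkg.myconfig")
```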
django rest framework serializer read_only_fields does not work
I am using django rest framework to create a blog website for fun. I am using neon db for the database. I have a users model in which I have a created_at attribute with datetimetz datatype. I have also set the default value to NOW() and in the serializer i have set the attribute to read_only_fields. But when i'm doing a post request to create a new user, it puts in null in the field and the created_at is filled as null in the database table. My users model: class Users(models.Model): user_id = models.AutoField(primary_key=True,null=False,blank=True) username = models.CharField(max_length=100) email = models.CharField(max_length=254) password_hash = models.CharField(max_length=30, blank=True, null=True) created_at = models.DateTimeField(null=False,blank=True) class Meta: managed = False db_table = 'users' my serializer: class UserSerializer(serializers.ModelSerializer): class Meta: model = Users read_only_fields = ('created_at',) fields = '__all__' my view : class UsersList(APIView): """ get all users or create a single user """ def get(self, request, format=None): users = Users.objects.all() serializer = UserSerializer(users,many=True) return Response(serializer.data,status=status.HTTP_200_OK) def post(self, request, format=None): serializer = UserSerializer(data = request.data) if serializer.is_valid(): serializer.save() return Response(serializer.data,status=status.HTTP_201_CREATED) return Response(serializer.errors, status=status.HTTP_400_BAD_REQUEST) i have tried this: class UserSerializer(serializers.ModelSerializer): created_at = serializers.SerializerMethodField() class Meta: model = Users read_only_fields = ('created_at',) fields = '__all__' def get_created_at(self, instance): return instance.created_at.strftime("%B … -
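The likely cause: read_only_fields only stops the serializer from *accepting* input — Django still writes the column on every INSERT, and since the model field has neither auto_now_add nor a Django-side default, it writes NULL and the database's NOW() default never fires. A model-definition fragment (not runnable on its own) showing one fix:

```python
# models.py — fragment; only the created_at definition changes
from django.db import models

class Users(models.Model):
    # ... other fields unchanged ...

    # Django stamps this on INSERT and excludes it from updates,
    # so the serializer's read_only behavior and the stored value agree.
    created_at = models.DateTimeField(auto_now_add=True)

    class Meta:
        managed = False
        db_table = "users"
```

On Django 5.0+ an alternative is `models.DateTimeField(db_default=Now())` (with `from django.db.models.functions import Now`), which defers to the database default instead.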
How to access an attribute from an authenticated user in Django?
I have a line of code that authenticates a user: user = authenticate(request, username=username, password=password, company=company) Is there a method that gets an attribute from the authenticated user? Let's say this is my user database table in the system: |username|password|company|type| |Victor| charlie| Echo| Delta| | Bravo | Alpha | Beta | Gamma| I am hoping to see if there was a method that would function as follows: user.get_type() And it returns Delta for Victor, and Gamma for Charlie. This is the page I have been reading for the documentation: Django authenticate documentation -
Django with Regional Databases and Users
I'm looking to provide reliable uptime with a horizontally scaled Django service. I want to run migrations on a somewhat timezone based schedule such that migrations are run when the fewest users are online. Those users have static locations, so we can reliably assign them to regions. For this, I was thinking I would have separate databases depending on region. eu_east, eu_west, etc and using a database router to keep users and their data separate. The challenge is, I don't want users to have to enter the region they're in every time they authenticate. I want them to just enter username and password, and have the regional aspect of the site be transparent to them. I'm imagining some sort of user->region default database lookup, but I was wondering if there was a best-practice for this use case, or something I'm not considering? -
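A database router is just a plain class Django calls with optional hints, so the user-to-region lookup can live there, fed by something recorded at login (a session key, or a field on a small shared "directory" database that maps users to regions). A sketch of the router half — the region map and the `region` attribute on instances are assumptions, not Django APIs:

```python
REGION_TO_DB = {
    "eu_east": "eu_east",
    "eu_west": "eu_west",
}
DEFAULT_DB = "default"  # shared directory DB holding the user -> region mapping

class RegionRouter:
    """Route per-user data to the user's regional database.

    Assumes regional model instances (or the hints Django passes)
    carry a `region` attribute, e.g. copied from the owning user.
    """

    def db_for_read(self, model, **hints):
        instance = hints.get("instance")
        region = getattr(instance, "region", None)
        return REGION_TO_DB.get(region, DEFAULT_DB)

    def db_for_write(self, model, **hints):
        return self.db_for_read(model, **hints)

    def allow_relation(self, obj1, obj2, **hints):
        # Only relate objects living in the same regional DB
        return getattr(obj1, "region", None) == getattr(obj2, "region", None)
```

Register it with `DATABASE_ROUTERS = ["path.to.RegionRouter"]`. The hints dict carries `instance` for many ORM operations; for queryset-level reads you would typically resolve the alias from the logged-in user once per request and use `.using(alias)` explicitly.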
Facebook JavaScript SDK Not Returning Authorization Code for WhatsApp Embedded Signup in Django
I am trying to implement WhatsApp Embedded Signup using the Facebook SDK, but FB.login() does not return the expected authorization code in the callback function. Below is my implementation as official documentation here <!DOCTYPE html> <html lang="en"> <head> <meta charset="UTF-8"> <title>Embedded Signup</title> </head> <body> <!-- SDK loading --> <script async defer crossorigin="anonymous" src="https://connect.facebook.net/en_US/sdk.js"></script> <script> // SDK initialization window.fbAsyncInit = function() { FB.init({ appId: 'appId', // my app ID autoLogAppEvents: true, xfbml: true, version: 'v22.0' // Graph API version }); }; // Session logging message event listener window.addEventListener('message', (event) => { if (event.origin !== "https://www.facebook.com" && event.origin !== "https://web.facebook.com") return; try { const data = JSON.parse(event.data); if (data.type === 'WA_EMBEDDED_SIGNUP') { console.log('message event: ', data); // Debugging log } } catch { console.log('message event: ', event.data); // Debugging log } }); // Response callback const fbLoginCallback = (response) => { if (response.authResponse) { const code = response.authResponse.code; // Expected authorization code console.log('response: ', code); // Debugging log } else { console.log('response: ', response); // Debugging log } }; // Launch method and callback registration const launchWhatsAppSignup = () => { FB.login(fbLoginCallback, { config_id: 'config_id', // my configuration ID response_type: 'code', override_default_response_type: true, extras: { setup: {}, featureType: '', sessionInfoVersion: '3', } … -
Correct way to store Python function definitions (names) in a database?
Context - Skip to "Crux" for tl;dr: I'm building a report automation system that includes a handful of independent "worker" daemons each with their own APScheduler instance, one central "control panel" web application in Django, and using ZMQ to manage communication between Django and the workers. The workers tasks involve querying a database, compiling reports, and saving and/or distributing exported files of those reports. Everything is running on a single server, but each worker is "assigned" to its own business unit with its own data, users, and tasks. The intent is that users will be able to use the web app to manage the scheduled jobs of their assigned worker. I'm aware of APScheduler's issue (at least in v3.x) with sharing jobstores, so instead of having Django modify the jobstores directly I'm planning on using ZMQ to send a JSON message containing instructions, which the worker will parse and execute itself. Crux: For the web user (assumed to have zero programming proficiency) to be able to add new scheduled jobs to the worker, the web app needs to provide a list of "possible tasks" that the worker can execute. Since the tasks are (primarily) reports to be produced, using the … -
How to dynamically switch databases for multi-tenancy in Django without modifying core settings?
I’m working on implementing multi-tenancy for a Django application, where each tenant will have a separate database. I create the databases dynamically when a new tenant is created and need to switch to the tenant's database for each request. Since the databases are created on the fly, I cannot register them in the settings.py file. To achieve this, I plan to create middleware that intercepts the request, checks for the authenticated user, retrieves the appropriate database for that user, and dynamically switches to it for the request. However, I'm unsure about how to correctly implement this in Django, given its default database handling behavior. I need guidance on: Where to modify the code: What files or areas of Django need to be adjusted to make this work? Database switching: How do I properly switch to a dynamically created database per user without affecting other requests? Database routers: How do I use database routers to manage connections dynamically in my custom setup? Considerations for querysets and operations: What should I keep in mind when querying data or performing operations on the dynamically selected database? I’m looking for a solution that doesn’t require me to modify the core application heavily and would … -
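The usual shape for this: middleware resolves the tenant from the authenticated user, registers the tenant's connection settings under `django.db.connections.databases[alias]` if that alias is not there yet (this does not touch settings.py), stashes the alias in a thread-local, and a database router reads it back for every query in that request. Below is the Django-free core of that — the `connections` wiring and middleware class are described in the usage note, since they need a running Django app:

```python
import threading

_state = threading.local()

def activate_tenant(alias: str) -> None:
    """Called by middleware once the user's tenant DB alias is known."""
    _state.tenant_alias = alias

def deactivate_tenant() -> None:
    _state.tenant_alias = None

class TenantRouter:
    """Send every read/write to the request's tenant database."""

    def db_for_read(self, model, **hints):
        # None makes Django fall back to the 'default' alias
        return getattr(_state, "tenant_alias", None)

    def db_for_write(self, model, **hints):
        return self.db_for_read(model, **hints)
```

In the middleware you would do roughly `connections.databases[alias] = {...connection dict...}` before `activate_tenant(alias)`, and call `deactivate_tenant()` in a finally block so worker threads don't leak tenant state between requests. Two caveats to keep in mind: querysets evaluated lazily after the response is sent will see the wrong (or no) tenant, and this thread-local form needs rework for async views.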
Order by subset of related field?
I have a model Release. Each release has a type, and depending on that type different types of Credit are considered primary credits. I want to be able to order releases by the Entity names of their primary credits. class Release(models.Model): TYPES = { "GA": "Game", "MO": "Movie", "TV": "TV Show", "MU": "Music", "BO": "Book", "BU": "Bundle" } TYPES_PRIMARY_ROLE = { "GA": "Developer", "MO": "Director", "TV": "Showrunner", "MU": "Primary Artist", "BO": "Author", "BU": "none" } type = models.CharField(choices=TYPES) # How to order_by the entity__names returned here? def get_credits_primary(self): return self.credits.filter(role__name=self.TYPES_PRIMARY_ROLE[self.type]).order_by("entity__name") class Credit(models.Model): role = models.ForeignKey(CreditRole, on_delete=models.CASCADE, related_name="credits") entity = models.ForeignKey(Entity, on_delete=models.CASCADE, related_name="credits") release = models.ForeignKey(Release, on_delete=models.CASCADE, related_name="credits") I suppose I could create a cached string value of the primary credit names, but that doesn't seem like a good way to do it. -
django.db.utils.OperationalError: no such column: dashboard_player.player_run
class Player(models.Model): role_choices = [ ('Batsman', 'Batsman'), ('Bowler', 'Bowler'), ('AllRounder', 'AllRounder'), ('WicketKeeper', 'WicketKeeper'), ] player_name = models.CharField(max_length=30, blank=False) player_team = models.ForeignKey(Team, on_delete=models.CASCADE, blank=False) match_number = models.ForeignKey(Match, on_delete=models.CASCADE, blank=False) player_role = models.CharField(choices=role_choices, max_length=15, blank=False) player_available = models.BooleanField(default=True) player_number = models.IntegerField(null=True, editable=False) player_run = models.IntegerField(blank=True, null=True, default=0) player_wickets = models.IntegerField(blank=True, null=True, default=0) player_catch = models.IntegerField(blank=True, null=False, default=0) def __str__(self): return f"{self.player_name} ({self.player_role})" after adding player_run, player_wickets, player_catch I ran the migration commands which asked for a default value to which i mistakenly added datetime to it. But now whenever i try to save any player it says raise e.__class__( TypeError: Field 'player_catch' expected a number but got datetime.datetime(2025, 2, 6, 10, 53, 15, 330920, tzinfo=datetime.timezone.utc). and the api response is 'table dashboard_player has no column named player_run' Can anyone tell what can be the problem in this code? -
Configuration of Django+WSGI+Apache
I have a Debian 11 system with Apache2 2.4.6.2, mod_wsgi 4.7.1 and Python 3.9. I want to run two different Django projects under the same Apache2 server, and in the same virtual host as outlined e.g. here: http://www.tobiashinz.com/2019/04/10/apache-django-virtualenv.html The following additions in the apache2.conf have worked fine to get one Django project online: WSGIApplicationGroup %{GLOBAL} WSGIDaemonProcess myapp python-home=/var/www/myproj/venv3/ python-path=/var/www/myproj/myproj WSGIProcessGroup myapp WSGIScriptAlias / /var/www/myproj/myproj/wsgi.py To configure the second Django, I first tried to move the configuration of the first project to a virtual host section. However, if I move the configurations to a virtual host in available-sites (e.g. in 000-default.conf), I do not get any error message (e.g. in the apache error log), but instead of my Django project, I see the default apache2 landing page ("It works!"), even if I comment out DocumentRoot. Am I missing something? -
I need some guidance with finishing my customized api endpoints for my search query in my Django views
I’m trying to finish customizing the search_query_set for ArboristCompany in my Django views. I’m following the Django-Haystack docs. I’m also using DRF-Haystack. In the docs, it shows you how to set up the search_query_set views with template files. However, I already have a VueJS search page API call. Basically, I want to know how to implement my VueJS search page API call with the search_query_set API endpoint in my views, like I did with the VueJS API calls for the Login/Register API endpoints. Here are some files that may be of some use to you. serializers.py class CompanySerializer(HaystackSerializer): class Meta: index_class = [ArboristCompanyIndex] fields = [ 'text', 'company_name', 'company_city', 'company_state' ] search_indexes.py class ArboristCompanyIndex(indexes.SearchIndex, indexes.Indexable): text = indexes.EdgeNgramField(document=True, use_template=True) company_name = indexes.CharField(model_attr='company_name') company_city = indexes.CharField(model_attr='company_city') company_state = indexes.CharField(model_attr='company_state') # Instead of directly indexing company_price, use the price_display method company_price_display = indexes.CharField(model_attr='price_display', null=True) # Using the custom price_display method experience = indexes.CharField(model_attr='experience') def get_model(self): return ArboristCompany def index_queryset(self, using=None): return self.get_model().objects.filter(content='foo').order_by('company_name', 'company_city', 'company_state', 'experience') views @authentication_classes([JWTAuthentication]) class LoginView(APIView): def post(self, request, *args, **kwargs): serializer = LoginSerializers(data=request.data, context={'request': request}) serializer.is_valid(raise_exception=True) user = serializer.validated_data['user'] login(None, user) token = Token.objects.create(user=user) return Response({"status": status.HTTP_200_OK, "Token": token.key}) if user is not None: # Generate token refresh = … -
Django Status Transition Issue: Incorrect Status Update for 'New', 'Active', and 'Deleted'
I have a question. I recently noticed that the statuses aren't updating correctly from "new" to "active" and from "deleted" to "not visible anymore." For example, the program should show the "new" status in January and switch to "active" in February, but instead, it's showing "new" for both January and February, and only switching to "active" in March. The same issue is happening with the "deleted" status. It should show as "deleted" in January and change to "not visible anymore" in February. However, it's showing as "deleted" in January, February, and March, and then disappears entirely in April. What could be causing this issue? I assume it is the wrong timestamps that were set up, but I am not sure how to redefine them. class ReportView(View): template_name = "reports/report.html" def get(self, request): context = { 'menu': get_service_menu_data(), } return render(request, self.template_name, context) def post(self, request): logger.info(f"Request: {request.method} {request.path}?{str(request.body)[str(request.body).index('&') + 1:-1]}") form = csv_form(request.POST) if form.is_valid(): company = form.cleaned_data.get('company') month = form.cleaned_data.get('month') year = form.cleaned_data.get('year') date_from = datetime.date(int(year), int(month), 1) date_to = datetime.date(int(year), int(month), calendar.monthrange(int(year), int(month))[1]) + datetime.timedelta(days=1) prev_month = date_from - relativedelta(months=1) next_month = date_to + relativedelta(months=1) current_numbers = PhoneNumberHolder.objects.filter( billing_group__billing_group=company, purchased__lte=date_to ).exclude( terminated__lt=date_from ) prev_numbers = PhoneNumberHolder.objects.filter( billing_group__billing_group=company, purchased__lte=prev_month … -
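One likely culprit in the view above: `prev_month = date_from - relativedelta(months=1)` is the *first* day of the previous month, so `purchased__lte=prev_month` misses anything purchased later in that month — those numbers aren't counted as "already existing" and show up as "new" for a second month (and the symmetric comparison does the same to "deleted"). A way to avoid such off-by-ones is to express each report month as a half-open window [start, next_month_start) and classify with consistent comparisons. A stdlib-only sketch (field names borrowed from the code above):

```python
import calendar
import datetime

def month_window(year: int, month: int):
    """Return [start, end) where end is the first day of the next month."""
    start = datetime.date(year, month, 1)
    last_day = calendar.monthrange(year, month)[1]
    end = datetime.date(year, month, last_day) + datetime.timedelta(days=1)
    return start, end

def number_status(purchased, terminated, year, month):
    """Classify a phone number for one report month.

    "new"     — purchased inside the report month
    "deleted" — terminated inside the report month
    "active"  — purchased earlier and not yet terminated
    None      — not visible (terminated before, or purchased after)
    """
    start, end = month_window(year, month)
    if terminated is not None and terminated < start:
        return None            # gone before this month: not visible anymore
    if purchased >= end:
        return None            # doesn't exist yet in this month
    if terminated is not None and start <= terminated < end:
        return "deleted"
    if start <= purchased < end:
        return "new"
    return "active"
```

With half-open windows, a January purchase is "new" exactly once and "active" from February on, and a January termination is "deleted" in January and invisible from February on.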
Django ORM sporadically dies when run in FastAPI endpoints using gunicorn
I'm using the Django ORM outside of a Django app, in async FastAPI endpoints that I'm running with gunicorn. All works fine except that once in a blue moon I get these odd errors where a worker seemingly "goes defunct" and is unable to process any requests due to database connections dropping. I've attached a stack trace below, but it really makes no sense whatsoever to me. I'm using Postgres without any short timeouts (database side), and the way I set up the connection enables health checks at every request, which I'd assume would get rid of any "stale connection" style issues: DATABASES = { "default": dj_database_url.config( default="postgres://postgres:pass@localhost:5432/db_name", conn_max_age=300, conn_health_checks=True, ) } I'm curious if anyone has any idea as to how I'd go about debugging this? It's really making the Django ORM unusable due to apps going down spontaneously. The stack trace below is an example of the kind of error I get: Traceback (most recent call last): File "/home/ubuntu/py_venv/lib/python3.12/site-packages/uvicorn/protocols/http/h11_impl.py", line 403, in run_asgi result = await app( # type: ignore[func-returns-value] ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "/home/ubuntu/py_venv/lib/python3.12/site-packages/uvicorn/middleware/proxy_headers.py", line 60, in __call__ return await self.app(scope, receive, send) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "/home/ubuntu/py_venv/lib/python3.12/site-packages/fastapi/applications.py", line 1054, in __call__ await super().__call__(scope, receive, send) File "/home/ubuntu/py_venv/lib/python3.12/site-packages/starlette/applications.py", line 113, in __call__ await self.middleware_stack(scope, …
Do we need to keep a flow of migration files from development to production?
I was not pushing migration files to git, and the devops team makes migrations on their side when I change the models. In the development phase I played a lot with the models — added and removed some, ran makemigrations and migrate — and while doing this I dealt with a lot of migration issues. When I run python manage.py makemigrations appname it detects the new changes in models.py (I have also verified that they are added to the migration files), but when I run python manage.py migrate it says there are no migrations to apply, and in the end I need to delete the database, create the database again, and then run makemigrations and migrate. But I can't do this at the production level. Below is one of the errors I'm getting. This is the response while running the migrate command. root@98d07ed814b3:/app# python manage.py migrate Operations to perform: Apply all migrations: admin, auth, communityEmpowerment, contenttypes, django_celery_beat, sessions, token_blacklist Running migrations: No migrations to apply. root@98d07ed814b3:/app# I have tried faking migrations, and deleting the migration files and migrating again, but none of that worked. I'd like someone to explain Django migrations and these common issues. -
How to remove empty paragraph tags generated in ckeditor content
I am using ckeditor5 in my Django web app. The issue is that if the content contains any blank line, it becomes a p tag and takes a default margin. As I am using Tailwind CSS, to get default styles for the content I am using @tailwindcss/typography, and I have added the prose class to the content container. I have tried to override the CSS classes, like: .prose p:empty{ margin: 0; } But it didn't work. So I created a Django custom filter to remove the content. from django import template from django.utils.safestring import mark_safe import re register = template.Library() @register.filter def remove_empty_paragraphs(value): # Remove empty <p> tags cleaned_value = re.sub(r'<p[^>]*>\s*</p>', '&nbsp;', value) return mark_safe(cleaned_value) I have tried with both '' (a blank string) and &nbsp;, because in the developer console &nbsp; shows up inside the p tag. It still didn't work.
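The filter's regex never fires because CKEditor fills "blank" paragraphs with a `&nbsp;` entity (or a literal non-breaking space, or a lone `<br>`), none of which match `\s*` — and this is also why the `:empty` CSS selector fails, since a p containing `&nbsp;` is not empty. Widening the pattern to treat those as empty is one fix (a sketch; adjust to whatever markup your CKEditor build actually emits):

```python
import re

# Treat whitespace, &nbsp; entities, the literal non-breaking space
# character (\u00a0), and lone <br> tags as "empty" paragraph content.
EMPTY_P = re.compile(r"<p[^>]*>(?:\s|\u00a0|&nbsp;|<br\s*/?>)*</p>", re.IGNORECASE)

def remove_empty_paragraphs(value: str) -> str:
    return EMPTY_P.sub("", value)
```

In the Django filter you would keep the `@register.filter` and `mark_safe` wrapper from the snippet above and just swap in this pattern; substituting with an empty string (not `&nbsp;`) is what actually removes the margin-bearing paragraph.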