Django community: Community blog posts RSS
This page, updated regularly, aggregates blog posts from the Django community.
-
Musings about django.contrib.auth.models.User
It dawned on me that the auth user model that ships with Django is like the string built-in of a high-level programming language. With the string built-in it's oh so tempting to add custom functionality to it, like a fancy capitalization method or some other function that automatically strips whitespace or what not. Yes, I'm looking at you, Prototype, for example. By NOT doing that, and leaving it as it is, you automatically manage to Keep It Simple, Stupid, and your application code makes sense to the next developer who joins your project. I'm not a smart programmer but I'm a smart developer in that I'm good at keeping things pure and simple. It means I can't show off any fancy generators, monads or metaclasses, but it does mean that fellow coders who follow in my steps can hit the ground running more quickly. My colleagues and I now have more than ten Django projects that rely on, without overriding, the django.contrib.auth.models.User class, and there have been many times when I've been tempted to use it as a base class or something instead, but in retrospect I'm wholeheartedly happy I didn't. The benefit isn't technical; it's a matter of teamwork … -
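The string analogy can be made concrete with a tiny pure-Python sketch. The names here are made up for illustration; the point is only the contrast between bolting a method onto the built-in type and keeping it as a plain function:

```python
# The temptation the post describes, in miniature: subclassing str to add
# a "fancy capitalization method" vs. leaving the built-in alone and
# writing a plain function. (Illustrative only; names are invented.)

class FancyStr(str):
    def title_strip(self):
        return self.strip().title()

def title_strip(s):
    """Same behavior as a free function: no custom type leaks into the app."""
    return s.strip().title()

# Both produce the same result, but the function keeps call sites working
# on plain strings that any developer immediately understands.
assert FancyStr("  hello world ").title_strip() == "Hello World"
assert title_strip("  hello world ") == "Hello World"
```

The same trade-off applies to django.contrib.auth.models.User: a helper function (or a separate related model) leaves the well-known class untouched for the next developer.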
Centralized logging for fun and profit!
Setting up a centralized log server using syslog isn't as hard as many may believe. Whether it's logs from Apache, nginx, email services, or even from your own Python applications, having a central log server gives you many benefits: Benefits of centralized logging Reduces disk space usage and disk I/O on core servers that should be busy doing something else. This is especially true if you want to log all queries to your database. Doing this on the same disk as your actual database creates a write for every read and an extra write for every write. Preserves logs in the event of an intrusion or system failure. By having the logs elsewhere you at least have a chance of finding something useful about what happened. All of your logs are in one place, duh! This makes things like grepping through, say, Apache error logs across multiple webservers easier than bouncing around between boxes. Any log processing and log rotation can also be centralized, which may delay your sysadmin from finally snapping and killing everyone. Syslog Review In case you aren't terribly familiar with how syslog works, here's a quick primer. Syslog separates out various logs … -
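On the Python-application side, the standard library already speaks syslog. A minimal sketch, assuming a syslog daemon listening on UDP port 514 on localhost (in a real setup you would point the address at your central log server):

```python
import logging
import logging.handlers

# SysLogHandler speaks the syslog protocol, over UDP port 514 by default.
# "localhost" here is an assumption: substitute your central log host.
handler = logging.handlers.SysLogHandler(
    address=("localhost", 514),
    facility=logging.handlers.SysLogHandler.LOG_LOCAL0,
)
# Prefix messages with an app tag so they are easy to grep on the server.
handler.setFormatter(logging.Formatter("myapp: %(levelname)s %(message)s"))

logger = logging.getLogger("myapp")
logger.addHandler(handler)
logger.setLevel(logging.INFO)

logger.info("application started")  # forwarded to the syslog daemon
```

The syslog daemon on the central server then files the message under the local0 facility, so it can be routed to its own log file and rotated there.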
Tips & Tricks for your Django Powered Database
Tonight was our Django meetup here in San Francisco. Four of us presented (three from Whiskey, outnumbering everyone as always ;), with some pretty great material. A few of my favorites were .update() from Andy McCurdy, (I have to love anyone that agrees), formfield_overrides (not sure how I didn... -
Wondering about django orm caching frameworks
So briefly looking over the code reveals that: johnny-cache will cache the rows returned by the execution machinery in Django's SQL compiler (monkey-patches the compilers). It looks like it has fancy-pants invalidation (it basically has bulk invalidation through a two-tiered cache key scheme, unlike cache-machine, which relies on set_many) and even support for transactions. I'm using this and it's awesome. django-cache-machine will cache the result of the QuerySet.iterator method. It seems that it has some limitations: it only (automatically) invalidates on forward relations (FKs), so you have to perform careful invalidation through your code (e.g.: you use qs.update(), run queries through models without the custom CachingManager, use Model.create() and whatnot …). Also, cache-machine will be heavy on the memcached traffic (one call for every invalidated object, using set_many though …). django-cachebot will cache the rows on the same level as cache-machine (at the QuerySet.iterator call). Also, it has a very nice feature that will prefetch objects from reverse relations (like FK reverse descriptors and many-to-many relations – e.g.: Group.objects.select_reverse('user_set'), and then group.user_set_cache will be equal to group.user_set.all()). Unfortunately the author only tested it on Django 1.1 and it needs a Django patch to work (the Django manager patch is only … -
What is the history of Django?
I've been playing with Quora—it's a really neat twist on the question-and-answer format, which makes great use of friends, followers and topics and has some very neat live update stuff going on (using Comet on top of Tornado). I just posted quite a long answer to a question about the history of Django. -
Simplifying Django dependencies with virtualenv
virtualenv is a tool for simplifying dependency management in Python applications. As the name suggests, virtualenv creates a virtual environment, which makes it easy to install Python packages without needing root privileges to do so. To use the packages installed in a virtual environment, you run the activate script in the bin directory of the [...] -
nashvegas 0.1a1.dev2 Released
So, like a phoenix rising from the ashes, nashvegas has returned with a 0.1a1.dev2 release. 18 months ago, I posted about the initial release of a migration tool that I wrote and found useful. I then abandoned it in my personal projects in favor of South. However, I soon wanted something less complicated that allowed me to better manage exactly what was getting executed. This brought me full circle to just needing to finish off some outstanding features on nashvegas. With this release you can: execute both SQL and Python scripts; generate migration scripts for new models that are introduced to your project, whether they come from reusable apps that you have installed or from apps that live within your project. A Migration model now tracks everything in the database, instead of just a table as before. This Migration model bootstraps itself into your database when executing any of the commands -- after adding it to INSTALLED_APPS it's ready to use. Check it out. Let me know what you think! You can find the source code on the project page on GitHub. Enjoy! -
Mixing Django with Jinja2 without losing template debugging
At Fashiolista we've built nearly the entire site with Jinja instead of the Django template engine. There are a lot of reasons for us to choose Jinja2 over Django: better performance (at least… it was a lot better with previous Django versions), way more options (named arguments, multiple arguments for filters, etc.), macros, and it's simply easier to extend. Writing custom tags is simply not needed anymore, since you can make any function callable from the templates. But… during the conversion there are always moments when you need a Django function in a Jinja template or vice versa. So… I created a few template tags to allow for Jinja code in Django templates (I've also created code to run Django code from Jinja, but I haven't seen the need for it, so I omitted it here). A Jinja include tag to include a template and let it be parsed by Jinja from a Django template:

    from django import template
    from coffin import shortcuts as jinja_shortcuts

    register = template.Library()

    class JinjaInclude(template.Node):
        def __init__(self, filename):
            self.filename = filename

        def render(self, context):
            return jinja_shortcuts.render_to_string(self.filename, context)

    @register.tag
    def jinja_include(parser, token):
        bits = token.contents.split()
        # Check if a filename was given
        if len(bits) != 2:
            raise template.TemplateSyntaxError('%r … -
Using Sass with Django
Install django-css. Install Sass:

    sudo gem install haml

Add to settings.py:

    INSTALLED_APPS = (
        ...
        'compressor',
        ...
    )

    COMPILER_FORMATS = {
        '.sass': {
            'binary_path': 'sass',
            'arguments': '*.sass *.css'
        },
        '.scss': {
            'binary_path': 'sass',
            'arguments': '*.scss *.css'
        }
    }

Add to a template that you want to load a Sass file:

    {% load compress %}
    ...
    {% [...] -
Final, official GSoC Django NoSQL status update
Alex Gaynor has posted a final status update on his Google Summer of Code (GSoC) project which should bring official NoSQL support to Django. Basically, Django now has a working MongoDB backend (not to be confused with the MongoDB backend for Django-nonrel: django-mongodb-engine) and (after lots of skepticism) the ORM indeed needed only minor changes to support non-relational backends (surprise, surprise ;). There are still a few open design issues, but probably the ORM changes will be merged into trunk and the MongoDB backend will become a separate project. The biggest design issue (in my opinion) is how to handle AutoField. In the GSoC branch, non-relational model code would always need a manually added NativeAutoField(primary_key=True) because many NoSQL DBs use string-based primary keys. As you can see in Django-nonrel, a NativeAutoField is unnecessary. The normal AutoField already works very well and it has the advantage that you can reuse existing Django apps unmodified and you don't need a special NativeAutoField definition in your model. Hopefully this issue will get fixed before official NoSQL support is merged into trunk. Another issue is about efficiency: In the GSoC branch, save() first checks whether the entity already exists in the DB by doing … -
Rails-like configuration style for Django
Django's default settings system is not very suitable for multiple configuration profiles — development, testing, production and so on: you have settings.py, and that's it. As far as I'm concerned, my settings are sometimes very different between my notebook and, for instance, my production server. First of all, it is obviously out of the question to change settings.py on deployment. Some people append at the end of their settings.py a simple from deploy import *, surrounded by a try / except ImportError clause. In my opinion this workaround is neither sexy nor flexible. I finally decided to use the Rails way to manage configuration in my latest pet project. The principle is simple: replace the content of settings.py with the one provided further down, create a config/ directory as if it were an application, put the default configuration in __init__.py in that folder, and add a Python module per configuration profile. You should come up with something like that: The default profile is 'development' (read the settings.py). If you want to use a specific profile, prepend DJANGO_ENV="theprofile" to your shell command. It can become tedious to type this with each command, which is why I recommend you … -
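The selection logic can be sketched in a few lines of plain Python. The config package layout and the DJANGO_ENV variable are as described above; load_profile is a hypothetical helper, not part of the post's actual code:

```python
import importlib
import os

# Profile selection as described in the post: config/__init__.py holds the
# default ("development") settings, and config/<name>.py holds a profile's
# overrides. DJANGO_ENV picks the profile.
DJANGO_ENV = os.environ.get("DJANGO_ENV", "development")

def load_profile(name):
    """Import the profile module and return its UPPERCASE settings.

    Hypothetical helper: Django settings are uppercase by convention,
    so everything else in the module is filtered out.
    """
    if name == "development":
        module = importlib.import_module("config")
    else:
        module = importlib.import_module("config.%s" % name)
    return {k: v for k, v in vars(module).items() if k.isupper()}
```

Running something like DJANGO_ENV="production" ./manage.py runserver would then pull the overrides from config/production.py.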
2010 Django Dash Post-Mortem
A rundown of the good & bad of this year's Django Dash. -
Improving link discovery
ScratchBlog is now sitemap-enabled, using the Django sitemap framework, and the sitemap is accessible here. Additionally, a 'robots.txt' file was recently added at the root URL to specify crawler access permissions. All the external links have also been updated and fixed. -
Python comparison speed depends on the result
Recently I decided to check whether "less than or equal" (<=) is slower than "greater than" (>), and I was surprised by the result. In my case "greater than" was slower. I was amazed: according to simple logic, in the case of "less than or equal" we need one or two operations (for example, we first check for equality [...] -
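The experiment is easy to reproduce with the standard library's timeit module. The operands below are chosen arbitrarily; absolute timings vary by machine and interpreter, so only the relation between the two numbers is interesting:

```python
import timeit

# Time "<=" against ">" on the same operands. Each statement is run a
# million times so the per-operation cost becomes measurable.
le_time = timeit.timeit("a <= b", setup="a, b = 1, 2", number=1_000_000)
gt_time = timeit.timeit("a > b", setup="a, b = 1, 2", number=1_000_000)

print("a <= b: %.4fs" % le_time)
print("a > b:  %.4fs" % gt_time)
```

Note that with these operands a <= b is True and a > b is False, so a timing difference here says something about the result of the comparison, not just the operator, which matches the post's title.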
Using Sass with django-mediagenerator
This is the second post in our django-mediagenerator series. If you haven't read it already, please read the first post before continuing: django-mediagenerator: total asset management. What is Sass? Glad you asked. :) Sass is a high-level language for generating CSS. What? You still write CSS by hand? "That's so bourgeois." (Quick: Who said that in which TV series?) Totally. ;) Sass is to CSS what Django templates are to static HTML. Sass supports variables (e.g.: $mymargin: 10px), reusable code snippets, control statements (@if, etc.), and a more compact indentation-based syntax. You can even use selector inheritance to extend code that is defined in some other Sass file! Also, you can make computations like $mymargin / 2, which can come in very handy, e.g. for building fluid grids. Let's see a very simple example of the base syntax:

    .content
      padding: 0
      p
        margin-bottom: 2em
      .alert
        color: red

This produces the following CSS code:

    .content {
      padding: 0;
    }
    .content p {
      margin-bottom: 2em;
    }
    .content .alert {
      color: red;
    }

So, nesting can help reduce repetition, and the cleaner syntax also makes Sass easier to type and read. Once you start using the advanced features you won't ever want to … -
Why I moved to Django from PHP
After working for years with PHP, I developed a project using Django and loved every bit of it. -
Finding the closest data center using GeoIP and indexing
We are about to release the TurnKey Linux Backup and Migration (TKLBAM) mechanism, which aims to be the simplest way, ever, to back up a TurnKey appliance across all deployments (VM, bare-metal, Amazon EC2, etc.), as well as provide the ability to restore a backup anywhere - essentially appliance migration or upgrade. Note: We'll be posting more details really soon - in this post I just want to share an interesting issue we solved recently. Backups need to be stored somewhere - preferably somewhere that provides unlimited, reliable, secure and inexpensive storage. After exploring the available options, we decided on Amazon S3 for TKLBAM's storage backend. The problem Amazon has four data centers, called regions, spanning the world, situated in Northern California (us-west-1), Northern Virginia (us-east-1), Ireland (eu-west-1) and Singapore (ap-southeast-1). The problem: Which region should be used to store a server's backups, and how should it be determined? One option was to require the user to specify the region during backup, but we quickly decided against polluting the user interface with options that can be confusing, and opted for a solution that could determine the best region automatically. The solution The map below plots the countries/states with their associated Amazon region: Generated automatically using the Google Maps API … -
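The indexing idea reduces the runtime decision to a dictionary lookup: resolve the client's country with GeoIP once, then map it to a region through a precomputed table. A minimal sketch, with a small made-up subset of such a table (not TurnKey's actual index):

```python
# Hypothetical precomputed index: country code -> nearest Amazon region.
# In the real system this table would cover every country/state and be
# generated from geographic distance to the four regions.
REGION_BY_COUNTRY = {
    "US": "us-east-1",
    "CA": "us-east-1",
    "IE": "eu-west-1",
    "DE": "eu-west-1",
    "FR": "eu-west-1",
    "SG": "ap-southeast-1",
    "AU": "ap-southeast-1",
}

def closest_region(country_code, default="us-east-1"):
    """Return the nearest region for a GeoIP-resolved country code."""
    return REGION_BY_COUNTRY.get(country_code.upper(), default)

print(closest_region("de"))  # prints eu-west-1
```

The expensive part (computing distances from every country to every region) happens once, offline; the backup client only does the constant-time lookup.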