Django community: Community blog posts RSS
This page, updated regularly, aggregates blog posts from the Django community.
-
How To Show Correct List Item Indexes When Using Pagination in Django
In your Django template, put something like this: <ul> {% for object in object_list %} <li>{{ forloop.counter0|add:page_obj.start_index }}. {{ object }}</li> {% endfor %} </ul> Where: object_list is the list of objects produced by pagination; page_obj is the page object produced by pagination; page_obj ... Read now
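Why the arithmetic works: forloop.counter0 is the zero-based position of the item within the current page, while page_obj.start_index is the one-based absolute index of the page's first item, so their sum is the item's absolute position across the whole list. A minimal sketch of the view side that supplies both variables (the Item model and page size are made up for illustration):

    # Hedged sketch: any paginated ListView provides object_list and page_obj.
    from django.views.generic import ListView
    from myapp.models import Item  # hypothetical app and model

    class ItemListView(ListView):
        model = Item
        paginate_by = 10  # adds page_obj and the current page's object_list to the context

-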
How To Exclude node_modules Directory When Running collectstatic Command in Django
If you use npm or yarn to install frontend packages inside your Django project, you may notice that when you run the python manage.py collectstatic command, it ends up collecting a huge number of files. That's because, by default, collectstatic grabs all the content of the static directories inside the project, including ... Read now
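The entry is cut off before the fix, but one common approach (not necessarily the author's) is to collect only from directories you control, or to pass collectstatic an ignore pattern; a hedged sketch:

    # settings.py -- a hedged sketch; paths are made up for illustration.
    # Point STATICFILES_DIRS at your own asset directories rather than a
    # directory tree that contains node_modules:
    import os

    BASE_DIR = os.path.dirname(os.path.dirname(os.path.abspath(__file__)))
    STATICFILES_DIRS = [
        os.path.join(BASE_DIR, 'assets'),  # hypothetical; node_modules lives elsewhere
    ]

    # Alternatively, collectstatic accepts glob-style ignore patterns:
    #     python manage.py collectstatic --ignore node_modules

-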
Make Django Rest Framework and Axios Work Together Nicely
This is a solution to a problem I encountered while marrying a Django Rest Framework powered API with the Axios JS HTTP client: Axios issues GET requests with multi-value parameters in a slightly different way than Django expects. When you create your API with Django Rest Framework, it expects multi-value GET parameters ... Read now
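For context: Django parses repeated keys (?id=1&id=2) as a multi-value parameter, while Axios by default serializes arrays as id[]=1&id[]=2, so the two disagree out of the box. A minimal sketch of the server side, assuming a hypothetical endpoint:

    # Hedged sketch: reading multi-value GET parameters in Django Rest Framework.
    from rest_framework.views import APIView
    from rest_framework.response import Response

    class ThingListView(APIView):  # hypothetical endpoint
        def get(self, request):
            ids = request.query_params.getlist('id')  # expects ?id=1&id=2
            return Response({'ids': ids})

On the client, you can either read 'id[]' on the server instead, or configure an Axios paramsSerializer that repeats the bare key.

-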
Download AWS S3 Files using Python & Boto
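The body of this entry did not survive aggregation; as a stand-in, a minimal download sketch using boto3, the current AWS SDK for Python (bucket, key and paths are made up):

    # Hedged sketch, not the original post's code; credentials are read
    # from the environment or the standard AWS config files.
    import boto3

    s3 = boto3.client('s3')
    s3.download_file('my-bucket', 'path/to/remote.csv', '/tmp/local.csv')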
-
Sorl-thumbnail to generate thumbnails in django
sorl-thumbnail is a very useful package for dealing with images in Django templates, and it is very easy to implement: resizing and cropping images becomes simple with the built-in tags it provides. Installation and setup: install sorl-thumbnail with pip install sorl-thumbnail, add sorl.thumbnail to INSTALLED_APPS in settings, then create the migrations with python manage.py makemigrations thumbnail and run them with python manage.py migrate thumbnail. Now sorl-thumbnail is ready to use in your project. To use it you should also install the Python imaging library Pillow: pip install Pillow. Key-value store: sorl-thumbnail needs a key-value store to track the keys and values of the thumbnails generated from the images, along with their storage. Storage can be your own server or a cloud storage service such as Amazon S3 or Microsoft Azure; for storing images in these cloud services, the django-storages package is very useful. Template tags and filters: sorl-thumbnail has one tag and three filters to use in templates. To use them, first load them with {% load thumbnail %}. thumbnail tag: when you use this tag, sorl-thumbnail looks up the thumbnail in the key-value …
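The entry cuts off mid-sentence; for reference, the documented thumbnail tag usage looks roughly like this (the item.image field name is illustrative):

    {% load thumbnail %}
    {% thumbnail item.image "200x200" crop="center" as im %}
        <img src="{{ im.url }}" width="{{ im.width }}" height="{{ im.height }}">
    {% endthumbnail %}

-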
How to use django-cache-memoize
Last week I released django-cache-memoize, a library that lets Django developers use caching in function calls more conveniently. This is a quick blog post to demonstrate that with an example. The verbose traditional way to do it: suppose you have a view function that takes a request and returns an HttpResponse. Within, it does some expensive calculation that you know could be cached. Something like this:

No caching:

    def blog_post(request, slug):
        post = BlogPost.objects.get(slug=slug)
        related_posts = BlogPost.objects.exclude(
            id=post.id
        ).filter(
            # BlogPost.keywords is an ArrayField
            keywords__overlap=post.keywords
        ).order_by('-publish_date')
        context = {
            'post': post,
            'related_posts': related_posts,
        }
        return render(request, 'blogpost.html', context)

So far so good. Perhaps you know that the lookup of related posts is slowish and can be cached for at least one hour. So you add this:

Caching:

    from django.core.cache import cache

    def blog_post(request, slug):
        post = BlogPost.objects.get(slug=slug)
        cache_key = 'related_posts:{}'.format(post.id)
        related_posts = cache.get(cache_key)
        if related_posts is None:  # was not cached
            related_posts = BlogPost.objects.exclude(
                id=post.id
            ).filter(
                # BlogPost.keywords is an ArrayField
                keywords__overlap=post.keywords
            ).order_by('-publish_date')
            cache.set(cache_key, related_posts, 60 * 60)
        context = {
            'post': post,
            'related_posts': related_posts,
        }
        return render(request, 'blogpost.html', context)

Great progress. But now you want that cache to reset immediately as soon as the blog posts change. …
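The entry is truncated here; a hedged sketch of where it is headed, using django-cache-memoize's documented decorator and invalidation API (the function name and module path are made up):

    from cache_memoize import cache_memoize
    from myblog.models import BlogPost  # hypothetical app path for the model above

    @cache_memoize(60 * 60)  # cache results for one hour
    def get_related_posts(post_id, keywords):
        return list(
            BlogPost.objects.exclude(id=post_id)
            .filter(keywords__overlap=keywords)  # keywords is an ArrayField
            .order_by('-publish_date')
        )

    # Same arguments hit the cache; when a post changes, drop its entry with:
    #     get_related_posts.invalidate(post.id, post.keywords)

-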
You have two jobs
Welcome to FictionalSoft! I hope your first week is going well? Great. As you start to find your feet, I want to make sure we have a shared understanding of what success looks like here. Apologies in advance if I'm telling you something you already know, but it's important to be explicit about this early. You were hired to write code. Many developers make the mistake of thinking that their job stops there. -
How to filter a Django Queryset using Extra
SQL queries in the Django ORM: using the Django ORM we can perform all queryset operations, but in some cases we need to drop down to raw SQL. Here is the scenario: a model stores the ID of another model in a plain PositiveIntegerField. When we want the related object behind that integer field, we normally have to query the other model again, which increases the number of queries; to avoid this, we can use an SQL subquery and get the right results in a single query.

testapp/models.py:

    class State(models.Model):
        name = models.CharField(max_length=150)

    class City(models.Model):
        name = models.CharField(max_length=150)

    class Student(models.Model):
        name = models.CharField(max_length=150)
        state_id = models.PositiveIntegerField()
        city_id = models.PositiveIntegerField()
        is_active = models.BooleanField(default=False)

Here the Student model holds state and city as PositiveIntegerFields containing the IDs of the State and City models. In this case, for every student object we would have to query again to get the state name and the city name. By using extra() in the Django ORM we can fetch the state and city values in a single query:

    students = Student.objects.filter(
        is_active=True,
    ).extra(
        select={
            'state': …
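The snippet truncates inside the select dict. Purely as an illustration, not the author's original, a completed call could look like this (table names assume Django's default <app>_<model> naming for an app called testapp):

    students = Student.objects.filter(
        is_active=True,
    ).extra(
        select={
            'state': 'SELECT name FROM testapp_state WHERE testapp_state.id = testapp_student.state_id',
            'city': 'SELECT name FROM testapp_city WHERE testapp_city.id = testapp_student.city_id',
        }
    )
    # Each Student in the queryset now carries .state and .city attributes,
    # filled by correlated subqueries in one database round trip.

-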
Getting a MUD Roleplaying Scene going
Getting a MUD RP scene going. This article is a little different from the normal, more technical Evennia-specific content of this blog. It was originally published as a light-hearted addition to the Imaginary Realities e-zine many years ago. While IR is still online, it has since dozed off, so I'm reposting it here to bring it to a new audience. In roleplay-heavy MUDs (and in other categories of text-based roleplaying games), the concept of scenes becomes important. A scene in this context is simply a situation, big or small, that involves you and your fellow players in interesting role play. A scene can be anything from two players meeting in the street and exchanging a few words to the dramatic conclusion of a staff-driven quest. Whenever roleplay-interested players meet, a scene may happen. But sometimes scenes won't come naturally. Sometimes your favourite game has only a few people online, or most of them are hovering in private areas. It's time to go proactive. Below I offer some archetypes, tropes and ideas for how to get a random scene started and people interested. The list is based on one I did for an RP-heavy MUD I played some time back. Some terms used … -
django-cache-memoize
Released a new package today: django-cache-memoize Docs On GitHub On PyPI It's actually quite simple: a Python memoize function that uses Django's cache, plus the added trick that you can invalidate the cache by doing the same function call with the same parameters if you just add .invalidate to your function. The history of it is from my recent Mozilla work on Symbols. I originally copied and pasted the snippet into a blog post, and today I extracted it out into its own project with tests, docs, CI and a setup.py. I'm still amazed how long it takes to make a package with all the "fluff" around it. A lot of the bits in here (like setup.py and pytest.ini etc.) are copied from other nicely maintained Python packages. For example, I straight up copied the tox.ini from Jannis Leidel's python-dockerflow. The actual code writing (including tests!) is far outweighed by the package sit-ups. But I "complain with a pinch of salt" because a lot of the time was spent writing documentation, and that's probably just as important as the code. -
PyCon.de keynote: Artificial intelligence: differentiating hype and real value - Michael Feindt
(One of my summaries of a talk at the 2017 PyCon.de conference). He's a physics professor and started out in particle physics, with big experiments like those at CERN. Big experiments that also generated big data, think terabytes per second, long before the term "Big Data" was invented. With lots of data, you have to filter out the noise and find the actual signal. There's a fine balance there: if you are too careful, you'll never discover anything. If you're too enthusiastic, you can get wrong results. Wrong results are bad for your physics career, so the methods used were quite conservative. He had to fight to get more modern methods like neural networks accepted. What is intelligence? Two definitions: the ability to achieve complex goals; the ability to acquire and apply knowledge and skills. And artificial intelligence? All intelligence that is not biological. Biology, OK, so what is life? "A process that retains its complexity and replicates." DNA is about 1.2GB. This is the physical life. Your brain is about 100TB. This is the "software". Cultural life accelerates through teaching, books, technology. Technological life? That will be when it can design its own hardware and software. He guesses robots/computers will … -
PyCon.de: Python on bare metal, micropython on the pyboard - Christine Spindler
(One of my summaries of a talk at the 2017 PyCon.de conference). (See also yesterday's talk.) There's a lot of power and functionality in microcontrollers nowadays, but they are harder and harder to program. Wouldn't python be a great fit? It allows beginners to do things they couldn't do before. Micropython is a powerful and modern language with a large community, intended especially for very constrained/embedded systems. If you program for embedded systems, you really have to know the hardware. This is different from "regular" programming. Micropython started with a successful kickstarter. In 2016, the BBC used it for 7 million school children. There was also a kickstarter for porting it to the super cheap ESP8266 chip. Fun facts: ESA (European space agency) is sponsoring development to make it even more reliable. They're planning to use it in satellites. It is certified for use in traffic management devices in the UK! There were some pyboards around and people could play with them. Very nice: you don't need an IDE, you can just connect to the board and type away at the python prompt. Photo explanation: some 1:87 scale figures on my model railway (under construction). -
PyCon.de: Observing your applications with Sentry and Prometheus - Patrick Mühlbauer
(One of my summaries of a talk at the 2017 PyCon.de conference). Monitoring your applications is important. You can fix problems before they happen. You can quickly pinpoint them if they occur anyway. And you should get a good feel for the application through metrics. There are three 'pillars of observability': Logging: records of individual events that happened. Metrics: numbers describing a particular process or activity, CPU load, for instance. Tracing: capturing the lifetime of requests as they flow through the various components of a distributed system. (He won't talk about this.) Error logging: logging in on a server and searching through logfiles is not much fun. Much better: sentry. It sends notifications for events (mail, slack, etc). It sends them only once; this is important. It aggregates events (statistics, regressions). There are lots of clients for multiple languages (python, javascript, etc.) and platforms (django, flask, angular). It is open source, and there is also a software-as-a-service version. He showed the user interface for an error: you see the line in your python code where the error occurred, the traceback, statistics about browser types, server names and so on, how often the error occurred already, and when. It is easy to integrate. …
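To give an idea of how little the integration takes: a hedged sketch using the modern sentry-sdk package (the talk itself predates it, when the raven client was current); the DSN is fake:

    import sentry_sdk
    from sentry_sdk.integrations.django import DjangoIntegration

    sentry_sdk.init(
        dsn='https://examplekey@o0.ingest.sentry.io/0',  # fake DSN for illustration
        integrations=[DjangoIntegration()],
    )
    # From here on, unhandled exceptions in views are captured, aggregated,
    # and forwarded (mail, slack, etc.) according to the project settings.

-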
PyCon.de: an admin's cornucopia, python is more than just better bash - Christian Theune
(One of my summaries of a talk at the 2017 PyCon.de conference). A "cornucopia" is a "horn of plenty". It keeps on giving. Pragmatism: I can quickly write some python for a task I need to do now, and it will be good enough. You can start right away: you don't need to design an architecture beforehand like you'd have to do in java. Often if you fix something quickly, you'll have to fix it a second time a day later. With python, you don't need to write your code twice; perhaps 1.5 times. You can add tests, you can fix up the code. What do they use from python? Language features: decorators, context managers, f-strings, metaprogramming. Python's standard library: you get a lot built-in. Releasing: zc.buildout, pip, virtualenv. Testing: pytest, flake8. Lots of external libraries. Some of these in detail. Context managers: safely opening and closing files. They had trouble with some corner cases, so they wrote their own context manager that works with a temporary file and guarantees you can never see a half-written file. Decorators: awesome. For instance for automating lock files: just a (self-written) @locked on a command line function. asyncio. They use …
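The atomic-file context manager isn't shown in the summary; a minimal sketch of the idea, assuming the goal is that readers never observe a half-written file:

    # Hedged sketch, not the speaker's code: write to a temp file in the
    # same directory, then rename over the target; os.replace() is atomic
    # on POSIX filesystems.
    import contextlib
    import os
    import tempfile

    @contextlib.contextmanager
    def atomic_write(path, mode='w'):
        fd, tmp_path = tempfile.mkstemp(dir=os.path.dirname(os.path.abspath(path)))
        try:
            with os.fdopen(fd, mode) as f:
                yield f
            os.replace(tmp_path, path)
        except Exception:
            os.unlink(tmp_path)
            raise

    with atomic_write('config.json') as f:
        f.write('{"key": "value"}')

-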
PyCon.de: the snake in the tar pit, complex systems with python - Stephan Erb
(One of my summaries of a talk at the 2017 PyCon.de conference). He started with an xkcd comic: often it feels this way at the beginning of a project; later on it gets harder. You cannot just run import greenfield to get back to a green field again. Most engineering time is spent debugging. Most debugging time is spent looking for information. Most time spent looking for information is because the code/system is unfamiliar. Unfamiliar, unknown code: now we're talking about team size. You're probably debugging someone else's code. Or someone else is debugging your code. What can we do to understand such code? How can we spread the knowledge? You can do informal reasoning: try to "run the code in your head", code reviews, pair programming. By getting better here, we create fewer bugs. Or try to understand it by testing: treat it as a black box and see what comes out, add integration tests, do load tests, perhaps even chaos engineering. By getting better here we find more bugs. The first way is better than the second, right? Both get harder when the system becomes more complex. Complexity destroys understanding. But I need understanding to have confidence. Keep in mind the … -
PyCon.de: graphql in the python world - Nafiul Islam
(One of my summaries of a talk at the 2017 PyCon.de conference). graphql is a query language for your API. You don't call the regular REST API and get the standard responses back; instead you ask for exactly what you need, and you only get the attributes you asked for. Graphiql is a graphical explorer for graphql. Github is actually using graphql for its v4 API. He did a demo. The real question to ask: why graphql over REST? There is a standard, so no more fights over the right way to do REST. A development environment (graphiql). You get only what you want/need. Types. Lots of companies are using it already. What does python have to offer? graphene. Graphene uses these concepts: Types/objects: more or less serializers. Schema: a collection of objects and mutations, "your API". Resolvers. Queries: what you can ask of the API, e.g. "you can search for users by username and by email". Mutations: changes you allow to be made, e.g. "you can create a new user and you have to pass a username and email". He demoed it. It looked really comfortable and slick. Some small things: 2.0 is out (today!). The django integration is better than the sqlalchemy integration at the …
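A hedged hello-world sketch of those graphene concepts, following the library's documented 2.x API (names are illustrative):

    import graphene

    class Query(graphene.ObjectType):
        hello = graphene.String(name=graphene.String(default_value='world'))

        def resolve_hello(self, info, name):  # resolvers answer the query
            return 'Hello {}'.format(name)

    schema = graphene.Schema(query=Query)  # the schema is "your API"
    result = schema.execute('{ hello }')   # ask for exactly what you need
    print(result.data['hello'])            # -> 'Hello world'

-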
PyCon.de: friday lightning talks
(One of my summaries of a talk at the 2017 PyCon.de conference). Parallel numpy with Bohrium - Dion Häfner He had to port a fortran codebase to numpy. It took a few months, but was quite doable: just some number crunching, so you can do everything with numpy just fine. For production it had to run on parallel hardware. For that he used bohrium, a tool that works just like numpy, but with JIT-compiled code. He showed some numbers: a lot faster. Cultural data processing with python - Oliver Götze Cultural data? Catalogs of book archives. Lots of different formats, often proprietary and/or unspecified and with missing data. And with lots of different fields. He wrote a "data preparation tool" so that archives can clean up and transform the data to some generic format at the source. The power of git - Peer Wagner What do you think your repositories contain? Code? More! He read a book about "data forensics". git log is OK, but you can pass it arguments to get much more info out of it: you can show which parts of your code are most often edited, and you can also see which files are often … -
PyCon.de keynote: dask, next steps in parallel Python - Matthew Rocklin
(One of my summaries of a talk at the 2017 PyCon.de conference). Matthew Rocklin works on dask for anaconda. He showed a demo. Python has a mature analytics stack: numpy, pandas, etcetera. These have one big drawback: they are designed to run in RAM on a single machine. Now... how can we parallelize it? And not only numpy and pandas, but also the libraries that build upon them. What can you do in Python? Embarrassingly parallel: multiprocessing, for instance. With the multiprocessing library, output = map(func, data) becomes output = pool.map(func, data). This is the simplest case. Often it is enough! Big data collections: spark, SQL, linear algebra. These manage parallelism for you within a fixed algorithm; if you stick to one of those paradigms, you can run on a cluster. Problem solved. Task schedulers: airflow, celery. You define a graph of python functions with data dependencies between them, and the task scheduler runs those functions on parallel hardware. How do these solutions hold up? Multiprocessing is included with python and well-known. That's good. But it isn't efficient for most of the specific scientific algorithms. Big data: a heavy-weight dependency. You'll have to get everyone to choose Spark, for …
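A hedged sketch of dask's own approach, using its documented array API: numpy-like code whose work is split into chunked tasks that a scheduler runs in parallel:

    import dask.array as da

    x = da.random.random((20000, 20000), chunks=(1000, 1000))  # lazy, chunked "numpy" array
    result = (x + x.T).mean(axis=0)  # builds a task graph; nothing is computed yet
    print(result.compute())          # the scheduler executes the graph in parallel

-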
PyCon.de: programming the web of things with micropython - Hardy Erlinger
(One of my summaries of a talk at the 2017 PyCon.de conference). We're used to programming for computers that have keyboards, a mouse and a display. Hardy Erlinger talked to a fully packed room about "physical computing": computers with all sorts of sensors, like temperature sensors and physical switches, and with outputs like motors, LEDs, etc. When you try to teach computing to people, something like print('hello world') often fails to excite them. Once you can get LEDs to blink or servos to move: that helps. Often, you'll see a single-board computer: a complete computer built on a single circuit board. A raspberry pi, for instance. That's already a pretty powerful machine, effectively a linux machine. Everything's there. Fine. But it is not very handy if you want it to be mobile. "Mobile" meaning "you want to track the movements of your cat", for instance: it is too big to tie to your cat, and it requires quite a lot of electrical energy. The next smaller step in computing: microcontrollers. A computer shrunk into a single very small chip, designed for use in an embedded system. It is often used as an embedded "brain" in a larger mechanical or electrical system …
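The classic first step on such a board is blinking an LED; a hedged MicroPython sketch, assuming an ESP8266-style board (pin numbers vary per board):

    import time
    from machine import Pin

    led = Pin(2, Pin.OUT)  # on many ESP8266 boards GPIO2 drives the onboard LED
    while True:
        led.value(not led.value())  # toggle the LED
        time.sleep(0.5)

-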
PyCon.de: empowered by Python - Jens Nie and Peer Wagner
(One of my summaries of a talk at the 2017 PyCon.de conference). Jens and Peer work on pipeline inspections (for Rosen). (Real-world pipelines of up to 1000km long, not software pipelines.) They build their own pipeline inspection robots. A lot of measurements come out of such an inspection: one measurement every millimeter... So they're working with big data, and they're completely based on python. Everything from matplotlib, numpy, scipy, dask, etc. The laboratory measurements use python now, too. They were used to matlab, but python was much nicer, easier and more powerful. The pipeline industry invested lots of money and effort in artificial intelligence, but it just did not work: lots of overfitting, the time was just not right. A large problem was the lack of enough data. They have that now, and with machine learning they're getting results. They also told about the history of their software development process. It started out as word documents that were then implemented. Next phase: prototypes in matlab with re-implementation in python. Only, the end-users discovered the prototypes and started using them anyway.... Now they're doing everything in python, and prototypes are now more "minimum viable … -
PyCon.de: keeping grip on decoupled code with CLIs - Anne Matthies
(One of my summaries of a talk at the 2017 PyCon.de conference). Anne has been writing Python since 1996! She mostly builds data pipelines for analysts. In a big company, those pipelines start to get messy quickly. Her solution: chop everything in those pipelines up. The biggest problem in software that is in use for more than a year: humans. Problems like performance are relatively easy and solvable. She showed some code: # uncomment what you need That was in infrastructure code that deployed something. Everything is code. Deployment is code. Infrastructure is code. Installing is code. And all of that became messy. And others (like ruby programmers) needed to be able to use those tools/pipelines. The solution: chop everything up into individual packages with a proper setup.py and with command line tools. Everyone can install a python package and call a command line tool! For the command line, they use cliff, a "command line interface formulation framework". With setuptools entry points she could get extra installed libraries to inject their commands into the generic CLI. Photo explanation: picture from our recent cycling holiday (NL+DE). Small stream near Renkum (NL).
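A hedged sketch of that entry-points pattern; package, module and command names are all invented for illustration:

    # setup.py for one chopped-up pipeline package
    from setuptools import setup, find_packages

    setup(
        name='pipeline-tools',           # hypothetical package
        version='1.0',
        packages=find_packages(),
        install_requires=['cliff'],
        entry_points={
            'console_scripts': [
                'pipeline = pipeline_tools.main:main',
            ],
            # cliff looks up subcommands in an entry-point namespace, so
            # separately installed packages can inject their own commands:
            'pipeline.commands': [
                'ingest = pipeline_tools.ingest:Ingest',
            ],
        },
    )

-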
PyCon.de: building your own SDN with linux/saltstack/python - Maximilian Wilhelm
(One of my summaries of a talk at the 2017 PyCon.de conference). SDN? Software-defined networking. You can just give a lot of money to cisco, right? Well, such money isn't always available, and it doesn't always do what we want. They needed an SDN for a city-wide point-to-point wifi network between various buildings in Paderborn. Recently he installed a new linux and typed in ifconfig, route, arp... they aren't there anymore. iproute2 is now the swiss army knife for networkers. VXLAN, VRF, MPLS, VLAN-aware bridges, IPsec, OpenVPN: linux has it all built-in. You can use it. Network configuration? It used to be ifupdown, but that is not easily automated: you can change the config file, but reloading is not possible, and restarting the network disrupts the connections... So there's now ifupdown2, written in python. You can extend it. Batteries included: dependency resolution, ifreload, VRFs, VXLAN, VLAN-aware bridges. And: they're open for ideas, you can send pull requests. For their network, they needed a routing solution. There are many open source implementations you can use. One of them, ExaBGP, is even written in Python. They used bird for OSPF. Configuring it all? Salt stack. Continuous management. Extensible. Salt stack works on … -
PyCon.de: use ansible properly or stick to your scripts - Bjoern Meier
(One of my summaries of a talk at the 2017 PyCon.de conference). Ansible is an infrastructure management tool. You have an "inventory" with your hosts and what kinds of hosts they are ('webserver', 'database'), combined with a "playbook" that tells what to do with each kind of host. They started by mapping the various manual deployment steps to ansible tasks: a playbook would just be a list of tasks that call shell scripts. This was wrong. A task would always result in a change. Another big problem? Ansible's check mode (or diff mode) would not work: a shell script cannot be simulated, so "check" will skip it. The solution? Use proper ansible modules. Modules can mostly check the state and determine what should be done. You can write your own modules, which means writing python code. This means you can also properly test your code (which is harder to do with shell scripts). He showed some example code, including code for checking whether something would change, and a test playbook for testing the module. A common problem is that ansible doesn't know if something changed in your application: does it need to be restarted or not? The "solution" is …
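A hedged sketch of a custom module that supports check mode, following the documented AnsibleModule API (the marker-file task itself is invented):

    import os
    from ansible.module_utils.basic import AnsibleModule

    def main():
        module = AnsibleModule(
            argument_spec=dict(path=dict(type='str', required=True)),
            supports_check_mode=True,  # lets ansible-playbook --check simulate this task
        )
        path = module.params['path']
        changed = not os.path.exists(path)  # determine whether anything would change

        if changed and not module.check_mode:
            open(path, 'a').close()  # only touch the system outside check mode

        module.exit_json(changed=changed)

    if __name__ == '__main__':
        main()

-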
PyCon.de: effective data analysis with pandas indexes - Alexander Hendorf
(One of my summaries of a talk at the 2017 PyCon.de conference). (Warning beforehand: I hardly know pandas, so my summary might not be totally correct/useful/complete.) When he started using pandas, differences between a Series and a DataFrame tended to trip him up often. A Series is just like an array. It has a type, as it uses numpy under the hood ("labeled numpy arrays"). It has one type, so a series with ints and floats will be all floats. Slicing is just series[3:6] or series.iloc[3:6]; he prefers the latter as it is more explicit. A DataFrame is a bunch of series with an index (that is also a series). If you slice, you get rows. If you ask for one item, you get a column. It is better if you use .iloc[]. A very powerful concept: a boolean index. sales_data['units'] > 40 gives you an index with everything that sold more than 40 items. You can AND and OR those indexes together: handy for filtering. Multi-index: handy for data that is hierarchical (country, towns, etc). Datetime index: you can use a function to convert timestamps to actual datetimes. Pandas will then treat them correctly, for instance in plots. You can group by years and …
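A small sketch of the boolean-index idea, with made-up sales data (pandas overloads & and | for combining the boolean indexes):

    import pandas as pd

    sales_data = pd.DataFrame({
        'product': ['a', 'b', 'c', 'd'],
        'units': [12, 55, 41, 7],
        'returns': [1, 3, 0, 2],
    })

    over_40 = sales_data['units'] > 40  # a boolean Series, usable as an index
    print(sales_data[over_40])          # rows that sold more than 40 units
    print(sales_data[over_40 & (sales_data['returns'] == 0)])  # AND-ed filters

-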
PyCon.de: public transport efficiency with geopandas and GTFS - Pieter Mulder
(One of my summaries of a talk at the 2017 PyCon.de conference). Pieter Mulder works at door2door on a ride-sharing platform; he works on the research into demand. He uses: geopandas, which extends pandas with a geometrical column type (using 'shapely'). You can even reproject a geometry series from one projection to another. GTFS: the General Transit Feed Specification, defined by Google: a common format for transportation schedules and associated geographical information. It isn't scary, it is basically a zipfile with a bunch of csv files. Geonotebook: adds a map to your jupyter notebook with two-way interaction. He then showed a demo with jupyter: extracting all stops from local Karlsruhe GTFS files, plotting them on the map, searching for bus or tram stops within 5 minutes walking distance, finding the stops you can reach with a trip of at most half an hour. Nice demo! (Something similar is being done with opentripplanner (written in java).) Photo explanation: picture from our recent cycling holiday (NL+DE). Disused railway bridge over the Rhein at Wesel.
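A hedged sketch of the geopandas-plus-GTFS idea; the file path is made up, and stops.txt is one of the CSV files inside a GTFS zipfile:

    import geopandas as gpd
    import pandas as pd
    from shapely.geometry import Point

    stops = pd.read_csv('gtfs/stops.txt')  # hypothetical path to an unzipped feed
    gdf = gpd.GeoDataFrame(
        stops,
        geometry=[Point(lon, lat) for lon, lat in zip(stops.stop_lon, stops.stop_lat)],
        crs='EPSG:4326',  # GTFS coordinates are WGS84 lon/lat
    )
    metric = gdf.to_crs('EPSG:32632')  # reproject to a metric CRS (UTM zone 32N)
    origin = metric.geometry.iloc[0]
    walkable = metric[metric.distance(origin) < 400]  # stops within ~5 minutes' walk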