A user with a "0.00" balance could be shown either as "+0.00" in green or
as "-0.00" in red, depending on the sign of the underlying floating-point
value. Fix this by rounding to 2 decimal places before comparing to zero.
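A minimal sketch of the fix; the function name, colors, and formatting are illustrative, not the actual project code:

```python
def format_balance(balance):
    """Round to 2 decimal places *before* choosing the sign and color,
    so a balance like -0.004 renders as a green "+0.00" instead of a
    red "-0.00"."""
    rounded = round(balance, 2)
    if rounded >= 0:
        # abs() normalizes a possible negative zero (-0.0) for display.
        return ("green", "+%.2f" % abs(rounded))
    return ("red", "-%.2f" % abs(rounded))
```

Comparing the rounded value (rather than the raw float) is what keeps a tiny negative residue from flipping the display to red.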
When viewing the list of bills, bills are (correctly) sorted by date. But
the order of all bills for a given day is not intuitive: I would expect
bills to be sorted by reverse order of insertion. That is, the last bill
to be added for a given day should appear first, not last. Otherwise,
when adding several bills in a row for a given day, it's confusing to see
that the new bills do not appear on top of the list.
Fix this by sorting by decreasing ID after sorting by date.
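The ordering can be sketched in plain Python (the dict-based bills and the assumption that dates sort most-recent-first are illustrative):

```python
import datetime

def sort_bills(bills):
    # Sort by date (most recent day first, as the list already does),
    # then by decreasing ID within a day, so the last bill added for a
    # given day shows up on top.
    return sorted(bills, key=lambda bill: (bill["date"], bill["id"]),
                  reverse=True)

bills = [
    {"id": 1, "date": datetime.date(2017, 1, 2)},
    {"id": 3, "date": datetime.date(2017, 1, 2)},
    {"id": 2, "date": datetime.date(2017, 1, 1)},
]
```

With SQLAlchemy the same ordering would be expressed roughly as `query.order_by(Bill.date.desc(), Bill.id.desc())` (assuming a `Bill` model with those columns).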
This avoids creating thousands of small SQL queries when computing the
balance of users. This significantly improves the performance of
displaying the main page of a project, since the balance of users is
displayed there:
Before this commit: 4004 SQL queries, 19793 ms elapsed time, 19753 ms CPU time, 2094 ms SQL time
After this commit: 12 SQL queries, 3688 ms elapsed time, 3753 ms CPU time, 50 ms SQL time
Measured request: display the sidebar with the balance of all users for the project (without displaying the list of bills)
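The idea behind the change can be illustrated with a toy schema (not the actual ihatemoney models): fetch every user's total in a single GROUP BY query instead of issuing one SUM query per user.

```python
import sqlite3

# Hypothetical minimal schema; the real models are richer.
db = sqlite3.connect(":memory:")
db.executescript("""
    CREATE TABLE person (id INTEGER PRIMARY KEY, name TEXT);
    CREATE TABLE bill (id INTEGER PRIMARY KEY, payer_id INTEGER,
                       amount REAL);
    INSERT INTO person VALUES (1, 'alice'), (2, 'bob');
    INSERT INTO bill VALUES (1, 1, 10.0), (2, 1, 5.0), (3, 2, 7.5);
""")

# Instead of "SELECT SUM(amount) FROM bill WHERE payer_id = ?" once
# per user (n queries), aggregate all users' totals in one query.
paid = dict(db.execute(
    "SELECT payer_id, SUM(amount) FROM bill GROUP BY payer_id"))
```

The per-user query count is what made the numbers above scale with the number of bills and users.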
This commit also greatly improves the performance of the "settle bills" page:
Before this commit: 8006 SQL queries, 39167 ms elapsed time, 39600 ms CPU time, 4141 ms SQL time
After this commit: 22 SQL queries, 7144 ms elapsed time, 7283 ms CPU time, 96 ms SQL time
Measured request: display the "Settle bills" page
Test setup to measure performance improvement:
- 5 users with various weights
- 1000 bills, each paid by a random user, each involving all 5 users
- laptop with Celeron N2830@2.16 GHz, SSD Samsung 850 EVO
- sqlite database on SSD, using sqlite 3.15.2
- python 2.7.13
- Flask-DebugToolbar 0.10.0 (to count SQL queries and loading time)
Performance measurements (using Flask-DebugToolbar on the second request,
to avoid measuring cold-cache performance):
- number of SQL queries
- elapsed time (from request to response)
- total CPU time consumed by the server handling the request
- total time spent on SQL queries (as reported by SQLAlchemy)
By default, SQLAlchemy uses lazy loading, which means that displaying n
bills will generate around n queries (to get the list of owers of each
bill). Pre-load the list of owers to drastically decrease the number of
SQL queries.
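With SQLAlchemy the pre-loading is typically a loader option on the query, e.g. `query.options(joinedload(Bill.owers))` (model and relationship names assumed). The effect can be illustrated with plain sqlite3: one batched query and a group-by in Python replaces one query per bill.

```python
import sqlite3
from collections import defaultdict

# Toy schema standing in for Bill and its list of owers.
db = sqlite3.connect(":memory:")
db.executescript("""
    CREATE TABLE bill (id INTEGER PRIMARY KEY);
    CREATE TABLE billowers (bill_id INTEGER, ower TEXT);
    INSERT INTO bill VALUES (1), (2);
    INSERT INTO billowers VALUES (1, 'alice'), (1, 'bob'), (2, 'bob');
""")

# Lazy loading would run "SELECT ower ... WHERE bill_id = ?" once per
# bill (n+1 queries). Eager loading fetches all owers at once and
# groups them in memory.
owers = defaultdict(list)
for bill_id, ower in db.execute(
        "SELECT bill_id, ower FROM billowers ORDER BY bill_id"):
    owers[bill_id].append(ower)
```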
Before this commit: 1004 SQL queries, 7535 ms elapsed time, 7536 ms CPU time, 530 ms SQL time
After this commit: 5 SQL queries, 3342 ms elapsed time, 3393 ms CPU time, 15 ms SQL time
Measured request: display the list of all bills for the project (without displaying the sidebar with balances)
Test setup to measure performance improvement:
- 5 users with various weights
- 1000 bills, each paid by a random user, each involving all 5 users
- laptop with Celeron N2830@2.16 GHz, SSD Samsung 850 EVO
- sqlite database on SSD, using sqlite 3.15.2
- python 2.7.13
- Flask-DebugToolbar 0.10.0 (to count SQL queries and loading time)
Performance measurements (using Flask-DebugToolbar with the second
request, to avoid measuring cold-cache performance):
- number of SQL queries
- elapsed time (from request to response)
- total CPU time consumed by the server handling the request
- total time spent on SQL queries (as reported by SQLAlchemy)
As per [their blog post of the 27th April](https://blog.readthedocs.com/securing-subdomains/) ‘Securing subdomains’:
> Starting today, Read the Docs will start hosting projects from subdomains on the domain readthedocs.io, instead of on readthedocs.org. This change addresses some security concerns around site cookies while hosting user generated data on the same domain as our dashboard.
Test Plan: Manually visited all the links I’ve modified.
For some reason, the migration path from an unmanaged db (from Alembic's
point of view) to a managed one, through the initial migration, works well
with SQLite… but not with MySQL, where the db system tries to re-create
the existing tables.
This commit is a way to detect that we are migrating from the pre-Alembic
era and skip the first migration (which would do nothing anyway), marking
it as already executed.
It's quite hackish but that's the best I found so far to get it working with
both MySQL and SQLite.
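One way to implement the detection is sketched below, with sqlite3 and illustrative table names; the real code would then record the initial revision as applied (e.g. via Alembic's `stamp` machinery) instead of running it.

```python
import sqlite3

def needs_stamp(conn):
    """Heuristic: if an application table already exists but Alembic's
    alembic_version table does not, we are migrating a pre-Alembic
    database and the initial migration should be marked as already
    executed. The "project" table name is illustrative."""
    tables = {row[0] for row in conn.execute(
        "SELECT name FROM sqlite_master WHERE type = 'table'")}
    return "project" in tables and "alembic_version" not in tables
```

On MySQL the table listing would come from `information_schema` (or SQLAlchemy's inspector) rather than `sqlite_master`, which is part of why this is hackish.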
Loading an unversioned settings.py during tests makes them less predictable.
This is inspired by Django's behaviour with the DJANGO_SETTINGS_MODULE
environment variable.
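The mechanism boils down to letting an environment variable select the settings file, so tests can point at a known, versioned configuration. A minimal sketch (the variable name and default path are illustrative):

```python
import os

def settings_path(default="/etc/app/default_settings.py"):
    # Mirror Django's DJANGO_SETTINGS_MODULE trick: an environment
    # variable chooses which settings file to load, instead of always
    # picking up whatever local, unversioned settings.py is lying
    # around.
    return os.environ.get("APP_SETTINGS_FILE", default)
```

Tests then set the variable to a settings file kept under version control before the application is configured.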