docs: reduce double nesting level in performance.rst...
Thomas De Schampheleire - r8370:d442d839 default
@@ -1,125 +1,125 @@
.. _performance:

================================
Optimizing Kallithea performance
================================

When serving a large number of big repositories, Kallithea can start performing
slower than expected. Handling large amounts of data from version control
systems is demanding by nature, so here are some tips on how to get the best
performance.


Fast storage
------------

Kallithea is often I/O bound, and hence a fast disk (SSD/SAN) and plenty of RAM
are usually more important than a fast CPU.


Caching
-------

Tweak the Beaker cache settings in the ini file. The actual effect of doing so
is questionable.

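As an illustration only (not a tuning recommendation), the Beaker settings live
in the ``app:main`` section of the ini file and look roughly like the following.
The region names, cache types and expiry values shown here are examples; check
the ini file generated for your Kallithea version for the regions it actually
defines::

    beaker.cache.data_dir = %(here)s/data/cache/data
    beaker.cache.lock_dir = %(here)s/data/cache/lock
    beaker.cache.regions = long_term,sql_cache_short
    beaker.cache.long_term.type = memory
    beaker.cache.long_term.expire = 36000
    beaker.cache.sql_cache_short.type = memory
    beaker.cache.sql_cache_short.expire = 10
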
.. note::

    Beaker has no upper bound on cache size and will never drop any caches. For
    the memory cache, the only option is to regularly restart the worker
    process. The file cache must be cleaned manually, as described in the
    `Beaker documentation <https://beaker.readthedocs.io/en/latest/sessions.html#removing-expired-old-sessions>`_::

        find data/cache -type f -mtime +30 -print -exec rm {} \;


Database
--------

SQLite is a good option when the load on the system is small. But due to
locking issues with SQLite, it is not recommended for larger deployments.

Switching to PostgreSQL or MariaDB/MySQL will result in an immediate performance
increase. A tool like SQLAlchemyGrate_ can be used for migrating to another
database platform.

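For reference, the database connection is configured with the ``sqlalchemy.url``
setting in the ``app:main`` section of the ini file (the exact setting name may
differ between versions, so check your existing ini). The credentials, host and
database names below are placeholders::

    # SQLite (default)
    #sqlalchemy.url = sqlite:///%(here)s/kallithea.db
    # PostgreSQL
    sqlalchemy.url = postgresql://kallithea:secret@localhost/kallithea
    # MariaDB/MySQL
    #sqlalchemy.url = mysql://kallithea:secret@localhost/kallithea

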
Horizontal scaling
------------------

Scaling horizontally means running several Kallithea instances (also known as
worker processes) and letting them share the load. That is essential to serve
other users while processing a long-running request from a user. Usually, the
bottleneck on a Kallithea server is not CPU but I/O speed - especially network
speed. It is thus a good idea to run multiple worker processes on one server.

.. note::

    Kallithea and the embedded Mercurial backend are not thread-safe. Each
    worker process must thus be single-threaded.

Web servers can usually launch multiple worker processes - for example
`mod_wsgi`_ with the ``WSGIDaemonProcess`` ``processes`` parameter, or uWSGI_
or gunicorn_ with their ``workers`` setting.

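As a sketch of what such a configuration can look like with Apache and
mod_wsgi (the paths and the process count are examples, and ``threads=1``
reflects the single-threaded requirement noted above)::

    WSGIDaemonProcess kallithea processes=4 threads=1 \
        python-home=/srv/kallithea/venv
    WSGIProcessGroup kallithea
    WSGIScriptAlias / /srv/kallithea/dispatch.wsgi

If Kallithea is served with gunicorn through the ini file, the equivalent knob
is typically a ``workers`` setting in the ``[server:main]`` section.
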
Kallithea can also be scaled horizontally across multiple machines. In order to
do so, you need the following:

- Each instance's ``data`` storage needs to be configured to be stored on
  shared disk storage, preferably together with repositories. This ``data``
  dir contains template caches, sessions, the whoosh index, and is used for
  task locking (so it is safe across multiple instances). Set the
  ``cache_dir``, ``index_dir``, ``beaker.cache.data_dir`` and ``beaker.cache.lock_dir``
  variables in each .ini file to a location shared across the Kallithea
  instances (see the sketch after this list).
- If using several Celery instances, the message broker should be common to
  all of them (e.g., one shared RabbitMQ server).
- Load balance using round robin or IP hash. It is recommended to write LB
  rules that separate regular user traffic from automated processes like CI
  servers or build bots.
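As a sketch, the shared-location settings mentioned in the first item above
could look like this in each instance's ini file; the mount point is just an
example path::

    cache_dir = /mnt/kallithea-shared/data
    index_dir = /mnt/kallithea-shared/data/index
    beaker.cache.data_dir = /mnt/kallithea-shared/data/cache/data
    beaker.cache.lock_dir = /mnt/kallithea-shared/data/cache/lock

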
Serve static files directly from the web server
-----------------------------------------------

With the default ``static_files`` ini setting, the Kallithea WSGI application
will take care of serving the static files from ``kallithea/public/`` at the
root of the application URL.

The actual serving of the static files is very fast and unlikely to be a
problem in a Kallithea setup - the responses generated by Kallithea from
database and repository content will take significantly more time and
resources.

To serve static files from the web server, use something like this Apache config
snippet::

    Alias /images/ /srv/kallithea/kallithea/kallithea/public/images/
    Alias /css/ /srv/kallithea/kallithea/kallithea/public/css/
    Alias /js/ /srv/kallithea/kallithea/kallithea/public/js/
    Alias /codemirror/ /srv/kallithea/kallithea/kallithea/public/codemirror/
    Alias /fontello/ /srv/kallithea/kallithea/kallithea/public/fontello/

Then disable serving of static files in the ``.ini`` ``app:main`` section::

    static_files = false

If using Kallithea installed as a package, you should be able to find the files
under ``site-packages/kallithea``, either in your Python installation or in your
virtualenv. When upgrading, make sure to also update the web server
configuration if necessary.

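One way to locate the installed package (and thus its ``public/`` directory) is
to ask Python itself; this is plain Python, not a Kallithea-specific command::

    python -c "import kallithea, os; print(os.path.dirname(kallithea.__file__))"
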
It might also be possible to improve performance by configuring the web server
to compress responses (served from static files or generated by Kallithea) when
serving them. That might also imply buffering of responses, which is more likely
to be a problem: large responses (clones or pulls) would have to be fully
processed and spooled to disk or memory before the client sees any response.
See the documentation for your web server.

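If you want to experiment with compression anyway, the usual mechanism in Apache
is ``mod_deflate``; a minimal sketch, limited to text-like content types so that
large clone/pull responses are left untouched::

    AddOutputFilterByType DEFLATE text/html text/css text/javascript application/javascript

Other web servers have equivalent options; consult their documentation.
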

.. _SQLAlchemyGrate: https://github.com/shazow/sqlalchemygrate
.. _mod_wsgi: https://modwsgi.readthedocs.io/
.. _uWSGI: https://uwsgi-docs.readthedocs.io/
.. _gunicorn: http://pypi.python.org/pypi/gunicorn