docs: updated section on performance, scaling, ssl support
marcink -
r2205:dd780472 default
@@ -0,0 +1,50 b''
1 .. _gunicorn-ssl-support:
2
3
4 Gunicorn SSL support
5 --------------------
6
7
8 The :term:`Gunicorn` WSGI server allows serving HTTPS connections directly,
9 without the need for an HTTP server such as Nginx or Apache. To configure
10 SSL support directly in :term:`Gunicorn`, simply add the key and certificate
11 paths to your configuration file.
12
13 1. Open the :file:`home/{user}/.rccontrol/{instance-id}/rhodecode.ini` file.
14 2. In the ``[server:main]`` section, add two new variables
15 called `certfile` and `keyfile`.
16
17 .. code-block:: ini
18
19 [server:main]
20 host = 127.0.0.1
21 port = 10002
22 use = egg:gunicorn#main
23 workers = 1
24 threads = 1
25 proc_name = RhodeCodeEnterprise
26 worker_class = sync
27 max_requests = 1000
28 timeout = 3600
29 # adding ssl support
30 certfile = /home/ssl/my_server_com.pem
31 keyfile = /home/ssl/my_server_com.key
32
33 3. Save your changes.
34 4. Restart your |RCE| instance, using the following command:
35
36 .. code-block:: bash
37
38 $ rccontrol restart enterprise-1
39
40 After this is enabled, you can access your instances *only* via the https://
41 protocol. For more details, see the `Gunicorn SSL Docs`_.
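
To quickly verify that SSL is being served (a sketch only, assuming you kept the
example ``host`` and ``port`` values above and have ``curl`` available), you can
inspect the TLS handshake directly:

.. code-block:: bash

    # -k skips CA verification, which is useful for self-signed certificates
    $ curl -vkI https://127.0.0.1:10002

The verbose output should show the TLS handshake and the certificate configured
in ``certfile`` above.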
42
43 .. note::
44
45 This change can only be applied to |RCE|. The VCSServer doesn't support SSL
46 and should only be used with the http protocol. Because only |RCE| is
47 exposed externally, all external communication will still be over SSL, even
48 without SSL enabled on the VCSServer.
49
50 .. _Gunicorn SSL Docs: http://docs.gunicorn.org/en/stable/settings.html#ssl
@@ -19,6 +19,7 b' The following are the most common system'
19 19 config-files-overview
20 20 vcs-server
21 21 svn-http
22 gunicorn-ssl-support
22 23 apache-config
23 24 nginx-config
24 25 backup-restore
@@ -3,16 +3,19 b''
3 3 Increase Gunicorn Workers
4 4 -------------------------
5 5
6 .. important::
6
7 |RCE| comes with `Gunicorn`_ packaged in its Nix environment.
8 Gunicorn is a Python WSGI HTTP Server for UNIX.
7 9
8 If you increase the number of :term:`Gunicorn` workers, you also need to
9 increase the threadpool size of the VCS Server. The recommended size is
10 6 times the number of Gunicorn workers. To set this, see
11 :ref:`vcs-server-config-file`.
10 To improve |RCE| performance you can increase the number of `Gunicorn`_ workers.
11 This allows the server to handle more connections concurrently, and provides
12 better responsiveness and performance.
12 13
13 |RCE| comes with `Gunicorn`_ packaged in its Nix environment. To improve
14 performance you can increase the number of workers. To do this, use the
15 following steps:
14 By default, during installation |RCC| tries to detect how many CPUs are
15 available in the system and sets the number of workers based on that information.
16 However, sometimes it's better to set the number of workers manually.
17
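As a rough sketch (assuming a Linux host with GNU coreutils), a suitable value
based on the :math:`(2 * Cores) + 1` formula used below can be computed with:

.. code-block:: bash

    # prints the suggested worker count, e.g. 9 on a 4-core machine
    $ echo $(( $(nproc) * 2 + 1 ))
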
18 To do this, use the following steps:
16 19
17 20 1. Open the :file:`home/{user}/.rccontrol/{instance-id}/rhodecode.ini` file.
18 21 2. In the ``[server:main]`` section, increase the number of Gunicorn
@@ -20,16 +23,26 b' 2. In the ``[server:main]`` section, inc'
20 23
21 24 .. code-block:: ini
22 25
23 [server:main]
24 host = 127.0.0.1
25 port = 10002
26 26 use = egg:gunicorn#main
27 workers = 1
28 threads = 1
29 proc_name = RhodeCodeEnterprise
27 ## Sets the number of process workers. You must set `instance_id = *`
28 ## when this option is set to more than one worker; the recommended
29 ## value is (2 * NUMBER_OF_CPUS + 1), e.g. 2 CPUs = 5 workers
30 ## The `instance_id = *` option must be set in the [app:main] section below
31 workers = 4
32 ## process name
33 proc_name = rhodecode
34 ## type of worker class, one of sync, gevent
35 ## for bigger setups it is recommended to use a worker class other than sync
30 36 worker_class = sync
37 ## The maximum number of simultaneous clients. Valid only for Gevent
38 #worker_connections = 10
39 ## max number of requests that a worker will handle before being gracefully
40 ## restarted; this can help prevent memory leaks
31 41 max_requests = 1000
32 timeout = 3600
42 max_requests_jitter = 30
43 ## amount of time a worker can spend handling a request before it
44 ## gets killed and restarted. Set to 6 hours
45 timeout = 21600
33 46
34 47 3. In the ``[app:main]`` section, set the ``instance_id`` property to ``*``.
35 48
@@ -40,72 +53,72 b' 3. In the ``[app:main]`` section, set th'
40 53 # You must set `instance_id = *`
41 54 instance_id = *
42 55
43 4. Save your changes.
44 5. Restart your |RCE| instance, using the following command:
56 4. Change the VCSServer workers too. Open the
57 :file:`home/{user}/.rccontrol/{instance-id}/vcsserver.ini` file.
58
59 5. In the ``[server:main]`` section, increase the number of Gunicorn
60 ``workers`` using the following formula :math:`(2 * Cores) + 1`.
61
62 .. code-block:: ini
63
64 ## run with gunicorn --log-config vcsserver.ini --paste vcsserver.ini
65 use = egg:gunicorn#main
66 ## Sets the number of process workers. Recommended
67 ## value is (2 * NUMBER_OF_CPUS + 1), e.g. 2 CPUs = 5 workers
68 workers = 4
69 ## process name
70 proc_name = rhodecode_vcsserver
71 ## type of worker class, currently `sync` is the only option allowed.
72 worker_class = sync
73 ## The maximum number of simultaneous clients. Valid only for Gevent
74 #worker_connections = 10
75 ## max number of requests that a worker will handle before being gracefully
76 ## restarted; this can help prevent memory leaks
77 max_requests = 1000
78 max_requests_jitter = 30
79 ## amount of time a worker can spend handling a request before it
80 ## gets killed and restarted. Set to 6 hours
81 timeout = 21600
82
83 6. Save your changes.
84 7. Restart your |RCE| instances, using the following command:
45 85
46 86 .. code-block:: bash
47 87
48 $ rccontrol restart enterprise-1
88 $ rccontrol restart '*'
89
90
91 Gunicorn Gevent Backend
92 -----------------------
49 93
50 If you scale across different machines, each |RCM| instance
51 needs to store its data on a shared disk, preferably together with your
52 |repos|. This data directory contains template caches, a whoosh index,
53 and is used for task locking to ensure safety across multiple instances.
54 To do this, set the following properties in the :file:`rhodecode.ini` file to
55 set the shared location across all |RCM| instances.
94 Gevent is an asynchronous worker type for Gunicorn. It allows a single
95 `Gunicorn`_ worker to accept multiple connections. This means you can handle
96 hundreds of concurrent clones or API calls using just a few workers. The
97 `worker_connections` setting defines how many connections each worker can
98 handle when using `Gevent`.
99
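For example, using the illustrative values from the configuration snippets in
this section, 4 workers with ``worker_connections = 30`` can serve up to
:math:`4 * 30 = 120` concurrent connections per instance.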
100
101 To enable `Gevent` on |RCE| do the following:
102
103
104 1. Open the :file:`home/{user}/.rccontrol/{instance-id}/rhodecode.ini` file.
105 2. In the ``[server:main]`` section, change `worker_class` for Gunicorn.
106
56 107
57 108 .. code-block:: ini
58 109
59 cache_dir = /file/path # set to shared location
60 search.location = /file/path # set to shared location
110 ## type of worker class, one of sync, gevent
111 ## for bigger setups it is recommended to use a worker class other than sync
112 worker_class = gevent
113 ## The maximum number of simultaneous clients. Valid only for Gevent
114 worker_connections = 30
61 115
62 ####################################
63 ### BEAKER CACHE ####
64 ####################################
65 beaker.cache.data_dir = /file/path # set to shared location
66 beaker.cache.lock_dir = /file/path # set to shared location
116
117 .. note::
118
119 `Gevent` is currently only supported for Enterprise/Community instances.
120 VCSServer doesn't yet support gevent.
67 121
68 122
69 123
70 Gunicorn SSL support
71 --------------------
72
73
74 :term:`Gunicorn` wsgi server allows users to use HTTPS connection directly
75 without a need to use HTTP server like Nginx or Apache. To Configure
76 SSL support directly with :term:`Gunicorn` you need to simply add the key
77 and certificate paths to your configuration file.
78
79 1. Open the :file:`home/{user}/.rccontrol/{instance-id}/rhodecode.ini` file.
80 2. In the ``[server:main]`` section, add two new variables
81 called `certfile` and `keyfile`.
82
83 .. code-block:: ini
84
85 [server:main]
86 host = 127.0.0.1
87 port = 10002
88 use = egg:gunicorn#main
89 workers = 1
90 threads = 1
91 proc_name = RhodeCodeEnterprise
92 worker_class = sync
93 max_requests = 1000
94 timeout = 3600
95 # adding ssl support
96 certfile = /home/ssl/my_server_com.pem
97 keyfile = /home/ssl/my_server_com.key
98
99 4. Save your changes.
100 5. Restart your |RCE| instance, using the following command:
101
102 .. code-block:: bash
103
104 $ rccontrol restart enterprise-1
105
106 After this is enabled you can *only* access your instances via https://
107 protocol. Check out more docs here `Gunicorn SSL Docs`_
108
109
110 124 .. _Gunicorn: http://gunicorn.org/
111 .. _Gunicorn SSL Docs: http://docs.gunicorn.org/en/stable/settings.html#ssl
@@ -3,21 +3,45 b''
3 3 Scale Horizontally
4 4 ------------------
5 5
6 |RCE| is built to support horizontal scaling across multiple machines.
7 There are two main prerequisites for this:
8
9 - Shared storage that each machine can access.
10 - Shared DB connection across machines.
11
12
6 13 Horizontal scaling means adding more machines or workers into your pool of
7 14 resources. Horizontally scaling |RCE| gives a huge performance increase,
8 15 especially under large traffic scenarios with a high number of requests. This
9 16 is very beneficial when |RCE| is serving many users simultaneously,
10 17 or if continuous integration servers are automatically pulling and pushing code.
11 18
12 To horizontally scale |RCE| you should use the following steps:
19
20 If you scale across different machines, each |RCM| instance
21 needs to store its data on a shared disk, preferably together with your
22 |repos|. This data directory contains template caches, a full text search index,
23 and is used for task locking to ensure safety across multiple instances.
24 To do this, set the following properties in the :file:`rhodecode.ini` file of
25 each |RCM| instance so that they point to the shared location.
26
27 .. code-block:: ini
28
29 cache_dir = /shared/path/caches # set to shared location
30 search.location = /shared/path/search_index # set to shared location
13 31
14 1. In the :file:`/home/{user}/.rccontrol/{instance-id}/rhodecode.ini` file,
15 set ``instance_id = *``. This enables |RCE| to use multiple nodes.
16 2. Define the number of worker threads using the formula
17 :math:`(2 * Cores) + 1`. For example 4 CPU cores would lead to
18 :math:`(2 * 4) + 1 = 9` workers. In some cases it's ok to increase number of
19 workers even beyond this formula. Generally the more workers, the more
20 simultaneous connections the system can handle.
32 ####################################
33 ### BEAKER CACHE ####
34 ####################################
35 beaker.cache.data_dir = /shared/path/data # set to shared location
36 beaker.cache.lock_dir = /shared/path/lock # set to shared location
37
38
39 .. note::
40
41 If you use custom caches such as `beaker.cache.auth_plugins.`, it's recommended
42 to use a memcached, redis, or database backend so the cache can be shared
43 across machines.
44
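The shared location itself can be provided in whatever way fits your
infrastructure. As a minimal sketch (the server name and export path below are
hypothetical placeholders, not a |RCM| requirement), an NFS export mounted at
the same path on every node would look like:

.. code-block:: bash

    # mount the shared storage on each node; storage01 and /export/rhodecode
    # stand in for your own NFS server and export
    $ sudo mount -t nfs storage01:/export/rhodecode /shared/path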
21 45
22 46 It is recommended to create another dedicated |RCE| instance to handle
23 47 traffic from build farms or continuous integration servers.
@@ -28,24 +52,7 b' traffic from build farms or continuous i'
28 52 load balancing rules that will separate regular user traffic from
29 53 automated process traffic like continuous servers or build bots.
30 54
31 If you scale across different machines, each |RCE| instance needs to store
32 its data on a shared disk, preferably together with your repositories. This
33 data directory contains template caches, a whoosh index,
34 and is used for task locking to ensure safety across multiple instances. To
35 do this, set the following properties in the
36 :file:`/home/{user}/.rccontrol/{instance-id}/rhodecode.ini` file to set
37 the shared location across all |RCE| instances.
38
39 .. code-block:: ini
40
41 cache_dir = /file/path # set to shared directory location
42 search.location = /file/path # set to shared directory location
43 beaker.cache.data_dir = /file/path # set to shared directory location
44 beaker.cache.lock_dir = /file/path # set to shared directory location
45
46 55 .. note::
47 56
48 57 If Celery is used on each instance then you should run separate Celery
49 58 instances, but the message broker should be the same for all of them.
50 This excludes one RabbitMQ shared server.
51
@@ -15,15 +15,13 b' We keep the calls in the form ``{verb}_{'
15 15 Change and Deprecation
16 16 ======================
17 17
18 API deprecation is documented in the section :ref:`deprecated` together with
18 API deprecation is documented in the section `deprecated` together with
19 19 other notes about deprecated parts of the application.
20 20
21 21
22 22 Deprecated API calls
23 23 --------------------
24 24
25 - Make sure to add them into the section :ref:`deprecated`.
26
27 25 - Use `deprecated` inside of the call docstring to make our users aware of the
28 26 deprecation::
29 27