@@ -0,0 +1,50 @@

.. _gunicorn-ssl-support:


Gunicorn SSL support
--------------------


The :term:`Gunicorn` WSGI server allows users to serve HTTPS connections
directly, without the need for an HTTP server such as Nginx or Apache in front
of it. To configure SSL support directly in :term:`Gunicorn`, simply add the
key and certificate paths to your configuration file.
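
If you do not yet have a certificate and key to point these settings at, you
can generate a self-signed pair for testing. This is only a sketch: the file
names match the example configuration below, the certificate subject is a
placeholder, and for production you should use a certificate issued by your CA.

.. code-block:: bash

    # Generate a self-signed certificate/key pair valid for one year (testing only)
    $ openssl req -x509 -newkey rsa:4096 -nodes -days 365 \
          -subj "/CN=my.server.com" \
          -keyout /home/ssl/my_server_com.key \
          -out /home/ssl/my_server_com.pem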

1. Open the :file:`home/{user}/.rccontrol/{instance-id}/rhodecode.ini` file.
2. In the ``[server:main]`` section, add two new variables
   called `certfile` and `keyfile`.

.. code-block:: ini

    [server:main]
    host = 127.0.0.1
    port = 10002
    use = egg:gunicorn#main
    workers = 1
    threads = 1
    proc_name = RhodeCodeEnterprise
    worker_class = sync
    max_requests = 1000
    timeout = 3600
    # adding ssl support
    certfile = /home/ssl/my_server_com.pem
    keyfile = /home/ssl/my_server_com.key

3. Save your changes.
4. Restart your |RCE| instance, using the following command:

.. code-block:: bash

    $ rccontrol restart enterprise-1

After this is enabled you can *only* access your instances via the https://
protocol. See the `Gunicorn SSL Docs`_ for more details.
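
To confirm that Gunicorn is now terminating TLS, you can probe the port
directly. This is a minimal sketch: it assumes the instance listens on
``127.0.0.1:10002`` as in the example configuration above, and that ``openssl``
and ``curl`` are available on the host.

.. code-block:: bash

    # Show the certificate subject and validity dates served by Gunicorn
    $ echo | openssl s_client -connect 127.0.0.1:10002 2>/dev/null \
          | openssl x509 -noout -subject -dates

    # Fetch response headers over HTTPS; -k skips verification for self-signed certs
    $ curl -kI https://127.0.0.1:10002/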

.. note::

   This change can only be applied to |RCE|. The VCSServer doesn't support SSL
   and should only be used with the http protocol. Because only |RCE| is
   exposed externally, all communication will still be over SSL even without
   SSL enabled on the VCSServer.

.. _Gunicorn SSL Docs: http://docs.gunicorn.org/en/stable/settings.html#ssl

@@ -1,30 +1,31 @@

.. _rhodecode-admin-ref:

System Administration
=====================

The following are the most common system administration tasks.

.. only:: latex

    * :ref:`vcs-server`
    * :ref:`apache-ws-ref`
    * :ref:`nginx-ws-ref`
    * :ref:`rhodecode-tuning-ref`
    * :ref:`indexing-ref`
    * :ref:`rhodecode-reset-ref`

.. toctree::

    config-files-overview
    vcs-server
    svn-http
    gunicorn-ssl-support
    apache-config
    nginx-config
    backup-restore
    tuning-rhodecode
    indexing
    reset-information
    enable-debug
    admin-tricks
    cleanup-cmds

@@ -1,111 +1,124 @@

.. _increase-gunicorn:

Increase Gunicorn Workers
-------------------------


|RCE| comes with `Gunicorn`_ packaged in its Nix environment.
Gunicorn is a Python WSGI HTTP Server for UNIX.

To improve |RCE| performance you can increase the number of `Gunicorn`_ workers.
This allows the server to handle more connections concurrently, and provides
better responsiveness and performance.

By default, during installation |RCC| tries to detect how many CPUs are
available in the system, and sets the number of workers based on that
information. However, sometimes it's better to set the number of workers
manually.
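
As a rough guide when choosing the value yourself, you can read the CPU count
from the machine and apply the :math:`(2 * Cores) + 1` formula used in the
steps below. This is only a sketch and assumes a Linux host with ``nproc``
available.

.. code-block:: bash

    # Number of CPUs visible to the system
    $ nproc

    # Recommended number of Gunicorn workers: (2 * CPUs) + 1
    $ echo $(( 2 * $(nproc) + 1 ))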

To do this, use the following steps:

1. Open the :file:`home/{user}/.rccontrol/{instance-id}/rhodecode.ini` file.
2. In the ``[server:main]`` section, increase the number of Gunicorn
   ``workers`` using the following formula :math:`(2 * Cores) + 1`.

.. code-block:: ini

    use = egg:gunicorn#main
    ## Sets the number of process workers. You must set `instance_id = *`
    ## when this option is set to more than one worker, recommended
    ## value is (2 * NUMBER_OF_CPUS + 1), eg 2CPU = 5 workers
    ## The `instance_id = *` must be set in the [app:main] section below
    workers = 4
    ## process name
    proc_name = rhodecode
    ## type of worker class, one of sync, gevent
    ## for bigger setups it is recommended to use a worker class other than sync
    worker_class = sync
    ## The maximum number of simultaneous clients. Valid only for Gevent
    #worker_connections = 10
    ## max number of requests that a worker will handle before being gracefully
    ## restarted, which can help prevent memory leaks
    max_requests = 1000
    max_requests_jitter = 30
    ## amount of time a worker can spend handling a request before it
    ## gets killed and restarted. Set to 6hrs
    timeout = 21600

3. In the ``[app:main]`` section, set the ``instance_id`` property to ``*``.

.. code-block:: ini

    # In the [app:main] section
    [app:main]
    # You must set `instance_id = *`
    instance_id = *

4. Change the VCSServer workers too. Open the
   :file:`home/{user}/.rccontrol/{instance-id}/vcsserver.ini` file.

5. In the ``[server:main]`` section, increase the number of Gunicorn
   ``workers`` using the following formula :math:`(2 * Cores) + 1`.

.. code-block:: ini

    ## run with gunicorn --log-config vcsserver.ini --paste vcsserver.ini
    use = egg:gunicorn#main
    ## Sets the number of process workers. Recommended
    ## value is (2 * NUMBER_OF_CPUS + 1), eg 2CPU = 5 workers
    workers = 4
    ## process name
    proc_name = rhodecode_vcsserver
    ## type of worker class, currently `sync` is the only option allowed.
    worker_class = sync
    ## The maximum number of simultaneous clients. Valid only for Gevent
    #worker_connections = 10
    ## max number of requests that a worker will handle before being gracefully
    ## restarted, which can help prevent memory leaks
    max_requests = 1000
    max_requests_jitter = 30
    ## amount of time a worker can spend handling a request before it
    ## gets killed and restarted. Set to 6hrs
    timeout = 21600

6. Save your changes.
7. Restart your |RCE| instances, using the following command:

.. code-block:: bash

    $ rccontrol restart '*'
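
After the restart you may want to confirm that the expected number of worker
processes is actually running. This is a minimal sketch, assuming a Linux
host; the exact process titles depend on how your system reports them.

.. code-block:: bash

    # List Gunicorn master and worker processes (the [g] trick excludes grep itself)
    $ ps aux | grep [g]unicorn

    # Or simply count matching processes
    $ pgrep -f gunicorn | wc -l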

Gunicorn Gevent Backend
-----------------------

Gevent is an asynchronous worker type for Gunicorn. It allows accepting
multiple connections on a single `Gunicorn`_ worker. This means you can handle
hundreds of concurrent clones or API calls using just a few workers. A setting
called `worker_connections` defines how many connections each worker can
handle using `Gevent`.


To enable `Gevent` on |RCE| do the following:


1. Open the :file:`home/{user}/.rccontrol/{instance-id}/rhodecode.ini` file.
2. In the ``[server:main]`` section, change the `worker_class` for Gunicorn.


.. code-block:: ini

    ## type of worker class, one of sync, gevent
    ## for bigger setups it is recommended to use a worker class other than sync
    worker_class = gevent
    ## The maximum number of simultaneous clients. Valid only for Gevent
    worker_connections = 30
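
A quick way to see the effect is to fire a batch of concurrent requests at the
instance and watch them all complete with only a few workers configured. This
is only a sketch: it assumes the instance answers plain http on
``127.0.0.1:10002`` as in the earlier examples, and that ``curl`` and GNU
``xargs`` are available.

.. code-block:: bash

    # Send 50 requests in parallel and print the HTTP status code of each
    $ seq 50 | xargs -P 50 -I{} curl -s -o /dev/null -w "%{http_code}\n" \
          http://127.0.0.1:10002/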

.. note::

   `Gevent` is currently only supported for Enterprise/Community instances.
   VCSServer doesn't yet support gevent.



.. _Gunicorn: http://gunicorn.org/

@@ -1,51 +1,58 @@

.. _scale-horizontal:

Scale Horizontally
------------------

|RCE| is built in a way that supports horizontal scaling across multiple
machines. There are two main prerequisites for that:

- Shared storage that each machine can access.
- A shared DB connection across machines.


Horizontal scaling means adding more machines or workers into your pool of
resources. Horizontally scaling |RCE| gives a huge performance increase,
especially under large traffic scenarios with a high number of requests. This
is very beneficial when |RCE| is serving many users simultaneously,
or if continuous integration servers are automatically pulling and pushing code.


If you scale across different machines, each |RCM| instance
needs to store its data on a shared disk, preferably together with your
|repos|. This data directory contains template caches, a full text search index,
and is used for task locking to ensure safety across multiple instances.
To do this, set the following properties in the :file:`rhodecode.ini` file to
set the shared location across all |RCM| instances.

.. code-block:: ini

    cache_dir = /shared/path/caches              # set to shared location
    search.location = /shared/path/search_index  # set to shared location

    ####################################
    ###         BEAKER CACHE        ####
    ####################################
    beaker.cache.data_dir = /shared/path/data    # set to shared location
    beaker.cache.lock_dir = /shared/path/lock    # set to shared location
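
Before starting the instances it is worth verifying that every node really
sees the same shared location and the same database. This is only a sketch:
the ``/shared/path`` directories are the example values from the block above,
and the ``grep`` simply prints the ``sqlalchemy`` connection settings from
each node's :file:`rhodecode.ini` so you can compare them between nodes.

.. code-block:: bash

    # Run on every node: confirm the shared location is mounted and writable
    $ df -h /shared/path
    $ touch /shared/path/caches/.write-test-$(hostname)

    # Run on every node: the database connection settings should be identical
    $ grep '^sqlalchemy' /home/{user}/.rccontrol/{instance-id}/rhodecode.ini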

.. note::

   If you use custom caches such as `beaker.cache.auth_plugins.`, it's
   recommended to set them to a memcached/redis or database backend so the
   cache can be shared across machines.


It is recommended to create another dedicated |RCE| instance to handle
traffic from build farms or continuous integration servers.

.. note::

   You should configure your load balancing accordingly. We recommend writing
   load balancing rules that will separate regular user traffic from
   automated process traffic like continuous integration servers or build bots.

.. note::

   If Celery is used on each instance then you should run separate Celery
   instances, but the message broker should be the same for all of them.

@@ -1,60 +1,58 @@

===================
CONTRIBUTING TO API
===================


Naming conventions
==================

We keep the calls in the form ``{verb}_{noun}``.


Change and Deprecation
======================

API deprecation is documented in the section `deprecated` together with
other notes about deprecated parts of the application.


Deprecated API calls
--------------------

- Use `deprecated` inside of the call docstring to make our users aware of the
  deprecation::

      .. deprecated:: 1.2.3

         Use `new_call_name` instead to fetch this information.

- Make sure to log a message on level `logging.WARNING` stating that the API
  call or specific parameters are deprecated.

- If possible, return deprecation information inside the result of the API
  call. Use the attribute `_warning_` to contain a message.


Changed API calls
-----------------

- If the change is significant, consider using `versionchanged` in the
  docstring::

      .. versionchanged:: 1.2.3

         Optional explanation if reasonable.


Added API calls
---------------

- Use `versionadded` to document the version since which this API call is
  available::

      .. versionadded:: 1.2.3

         Optional explanation if reasonable.