diff --git a/docs/admin/gunicorn-ssl-support.rst b/docs/admin/gunicorn-ssl-support.rst
new file mode 100644
--- /dev/null
+++ b/docs/admin/gunicorn-ssl-support.rst
@@ -0,0 +1,50 @@
+.. _gunicorn-ssl-support:
+
+
+Gunicorn SSL support
+--------------------
+
+
+The :term:`Gunicorn` WSGI server can serve HTTPS connections directly,
+without the need for an HTTP server such as Nginx or Apache in front of it.
+To configure SSL support directly in :term:`Gunicorn`, simply add the key
+and certificate paths to your configuration file.
+
+1. Open the :file:`home/{user}/.rccontrol/{instance-id}/rhodecode.ini` file.
+2. In the ``[server:main]`` section, add two new settings
+   called `certfile` and `keyfile`.
+
+.. code-block:: ini
+
+    [server:main]
+    host = 127.0.0.1
+    port = 10002
+    use = egg:gunicorn#main
+    workers = 1
+    threads = 1
+    proc_name = RhodeCodeEnterprise
+    worker_class = sync
+    max_requests = 1000
+    timeout = 3600
+    # adding ssl support
+    certfile = /home/ssl/my_server_com.pem
+    keyfile = /home/ssl/my_server_com.key
+
+3. Save your changes.
+4. Restart your |RCE| instance, using the following command:
+
+.. code-block:: bash
+
+    $ rccontrol restart enterprise-1
+
+After this is enabled you can *only* access your instance via the https://
+protocol. For more details see the `Gunicorn SSL Docs`_.
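+
+To quickly verify the setup you can, for example, make a test request against
+the instance. The host and port below are the example values used above, and
+``-k`` skips certificate verification for self-signed certificates:
+
+.. code-block:: bash
+
+    # illustrative check, adjust host/port to your [server:main] settings
+    $ curl -k -I https://127.0.0.1:10002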
+
+.. note::
+
+   This change can only be applied to |RCE|. The VCSServer does not support
+   SSL and should only be used with the http protocol. Because only |RCE| is
+   exposed externally, all external communication will still be over SSL even
+   without SSL enabled on the VCSServer.
+
+.. _Gunicorn SSL Docs: http://docs.gunicorn.org/en/stable/settings.html#ssl
diff --git a/docs/admin/system-admin.rst b/docs/admin/system-admin.rst
--- a/docs/admin/system-admin.rst
+++ b/docs/admin/system-admin.rst
@@ -19,6 +19,7 @@ The following are the most common system
     config-files-overview
     vcs-server
     svn-http
+    gunicorn-ssl-support
     apache-config
     nginx-config
     backup-restore
diff --git a/docs/admin/tuning-gunicorn.rst b/docs/admin/tuning-gunicorn.rst
--- a/docs/admin/tuning-gunicorn.rst
+++ b/docs/admin/tuning-gunicorn.rst
@@ -3,16 +3,19 @@
 Increase Gunicorn Workers
 -------------------------

-.. important::
+
+|RCE| comes with `Gunicorn`_ packaged in its Nix environment.
+Gunicorn is a Python WSGI HTTP Server for UNIX.

-   If you increase the number of :term:`Gunicorn` workers, you also need to
-   increase the threadpool size of the VCS Server. The recommended size is
-   6 times the number of Gunicorn workers. To set this, see
-   :ref:`vcs-server-config-file`.
+To improve |RCE| performance you can increase the number of `Gunicorn`_ workers.
+This allows the system to handle more connections concurrently, and provides
+better responsiveness and performance.

-|RCE| comes with `Gunicorn`_ packaged in its Nix environment. To improve
-performance you can increase the number of workers. To do this, use the
-following steps:
+By default, during installation |RCC| tries to detect how many CPUs are
+available on the system, and sets the number of workers based on that
+information. However, sometimes it's better to set the number of workers manually.
+
+To do this, use the following steps:

 1. Open the :file:`home/{user}/.rccontrol/{instance-id}/rhodecode.ini` file.
 2. In the ``[server:main]`` section, increase the number of Gunicorn
@@ -20,16 +23,26 @@ 2. In the ``[server:main]`` section, inc
 .. code-block:: ini

-    [server:main]
-    host = 127.0.0.1
-    port = 10002
     use = egg:gunicorn#main
-    workers = 1
-    threads = 1
-    proc_name = RhodeCodeEnterprise
+    ## Sets the number of process workers. You must set `instance_id = *`
+    ## when this option is set to more than one worker. The recommended
+    ## value is (2 * NUMBER_OF_CPUS + 1), e.g. 2 CPUs = 5 workers.
+    ## The `instance_id = *` must be set in the [app:main] section below.
+    workers = 4
+    ## process name
+    proc_name = rhodecode
+    ## type of worker class, one of sync, gevent
+    ## for bigger setups it is recommended to use a worker class other than sync
     worker_class = sync
+    ## The maximum number of simultaneous clients. Valid only for Gevent
+    #worker_connections = 10
+    ## max number of requests a worker will handle before being gracefully
+    ## restarted; this can help prevent memory leaks
     max_requests = 1000
-    timeout = 3600
+    max_requests_jitter = 30
+    ## amount of time a worker can spend handling a request before it
+    ## gets killed and restarted. Set to 6hrs (21600 seconds)
+    timeout = 21600
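+
+    ## Worked example (illustrative only): a machine with 8 CPU cores would,
+    ## by the formula above, get (2 * 8) + 1 = 17 workers, i.e. you could set:
+    #workers = 17
+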
 3. In the ``[app:main]`` section, set the ``instance_id`` property to ``*``.
@@ -40,72 +53,72 @@ 3. In the ``[app:main]`` section, set th
     # You must set `instance_id = *`
     instance_id = *

-4. Save your changes.
-5. Restart your |RCE| instance, using the following command:
+4. Change the number of VCSServer workers as well. Open the
+   :file:`home/{user}/.rccontrol/{instance-id}/vcsserver.ini` file.
+
+5. In the ``[server:main]`` section, increase the number of Gunicorn
+   ``workers`` using the following formula :math:`(2 * Cores) + 1`.
+
+.. code-block:: ini
+
+    ## run with gunicorn --log-config vcsserver.ini --paste vcsserver.ini
+    use = egg:gunicorn#main
+    ## Sets the number of process workers. The recommended
+    ## value is (2 * NUMBER_OF_CPUS + 1), e.g. 2 CPUs = 5 workers.
+    workers = 4
+    ## process name
+    proc_name = rhodecode_vcsserver
+    ## type of worker class, currently `sync` is the only option allowed.
+    worker_class = sync
+    ## The maximum number of simultaneous clients. Valid only for Gevent
+    #worker_connections = 10
+    ## max number of requests a worker will handle before being gracefully
+    ## restarted; this can help prevent memory leaks
+    max_requests = 1000
+    max_requests_jitter = 30
+    ## amount of time a worker can spend handling a request before it
+    ## gets killed and restarted. Set to 6hrs (21600 seconds)
+    timeout = 21600
+
+6. Save your changes.
+7. Restart your |RCE| instances, using the following command:

 .. code-block:: bash

-    $ rccontrol restart enterprise-1
+    $ rccontrol restart '*'
+
+
+Gunicorn Gevent Backend
+-----------------------

-If you scale across different machines, each |RCM| instance
-needs to store its data on a shared disk, preferably together with your
-|repos|. This data directory contains template caches, a whoosh index,
-and is used for task locking to ensure safety across multiple instances.
-To do this, set the following properties in the :file:`rhodecode.ini` file to
-set the shared location across all |RCM| instances.
+Gevent is an asynchronous worker type for Gunicorn. It allows a single
+`Gunicorn`_ worker to accept multiple connections, which means you can handle
+hundreds of concurrent clones or API calls using just a few workers. The
+`worker_connections` setting defines how many connections each worker can
+handle when using `Gevent`.
+
+
+To enable `Gevent` on |RCE| do the following:
+
+
+1. Open the :file:`home/{user}/.rccontrol/{instance-id}/rhodecode.ini` file.
+2. In the ``[server:main]`` section, change the `worker_class` setting for Gunicorn.
+

 .. code-block:: ini

-    cache_dir = /file/path # set to shared location
-    search.location = /file/path # set to shared location
+    ## type of worker class, one of sync, gevent
+    ## for bigger setups it is recommended to use a worker class other than sync
+    worker_class = gevent
+    ## The maximum number of simultaneous clients. Valid only for Gevent
+    worker_connections = 30

-    ####################################
-    ### BEAKER CACHE ####
-    ####################################
-    beaker.cache.data_dir = /file/path # set to shared location
-    beaker.cache.lock_dir = /file/path # set to shared location
+
+.. note::
+
+   `Gevent` is currently only supported for Enterprise/Community instances.
+   The VCSServer doesn't yet support gevent.

-
-Gunicorn SSL support
---------------------
-
-
-:term:`Gunicorn` wsgi server allows users to use HTTPS connection directly
-without a need to use HTTP server like Nginx or Apache. To Configure
-SSL support directly with :term:`Gunicorn` you need to simply add the key
-and certificate paths to your configuration file.
-
-1. Open the :file:`home/{user}/.rccontrol/{instance-id}/rhodecode.ini` file.
-2. In the ``[server:main]`` section, add two new variables
-   called `certfile` and `keyfile`.
-
-.. code-block:: ini
-
-    [server:main]
-    host = 127.0.0.1
-    port = 10002
-    use = egg:gunicorn#main
-    workers = 1
-    threads = 1
-    proc_name = RhodeCodeEnterprise
-    worker_class = sync
-    max_requests = 1000
-    timeout = 3600
-    # adding ssl support
-    certfile = /home/ssl/my_server_com.pem
-    keyfile = /home/ssl/my_server_com.key
-
-4. Save your changes.
-5. Restart your |RCE| instance, using the following command:
-
-.. code-block:: bash
-
-    $ rccontrol restart enterprise-1
-
-After this is enabled you can *only* access your instances via https://
-protocol. Check out more docs here `Gunicorn SSL Docs`_
-
-
 .. _Gunicorn: http://gunicorn.org/
-.. _Gunicorn SSL Docs: http://docs.gunicorn.org/en/stable/settings.html#ssl
diff --git a/docs/admin/tuning-scale-horizontally.rst b/docs/admin/tuning-scale-horizontally.rst
--- a/docs/admin/tuning-scale-horizontally.rst
+++ b/docs/admin/tuning-scale-horizontally.rst
@@ -3,21 +3,45 @@
 Scale Horizontally
 ------------------

+|RCE| is built in a way that supports horizontal scaling across multiple machines.
+There are two main prerequisites for that:
+
+- Shared storage that each machine can access.
+- A shared DB connection across machines.
+
+
 Horizontal scaling means adding more machines or workers into your pool of
 resources. Horizontally scaling |RCE| gives a huge performance increase,
 especially under large traffic scenarios with a high number of requests.
 This is very beneficial when |RCE| is serving many users simultaneously,
 or if continuous integration servers are automatically pulling and pushing code.

-To horizontally scale |RCE| you should use the following steps:
+
+If you scale across different machines, each |RCM| instance
+needs to store its data on a shared disk, preferably together with your
+|repos|. This data directory contains template caches, a full text search index,
+and is used for task locking to ensure safety across multiple instances.
+To do this, set the following properties in the :file:`rhodecode.ini` file to
+point to the shared location used by all |RCM| instances.
+
+.. code-block:: ini
+
+    cache_dir = /shared/path/caches # set to shared location
+    search.location = /shared/path/search_index # set to shared location

-1. In the :file:`/home/{user}/.rccontrol/{instance-id}/rhodecode.ini` file,
-   set ``instance_id = *``. This enables |RCE| to use multiple nodes.
-2. Define the number of worker threads using the formula
-   :math:`(2 * Cores) + 1`. For example 4 CPU cores would lead to
-   :math:`(2 * 4) + 1 = 9` workers. In some cases it's ok to increase number of
-   workers even beyond this formula. Generally the more workers, the more
-   simultaneous connections the system can handle.
+    ####################################
+    ### BEAKER CACHE ####
+    ####################################
+    beaker.cache.data_dir = /shared/path/data # set to shared location
+    beaker.cache.lock_dir = /shared/path/lock # set to shared location
+
+
+.. note::
+
+   If you use a custom cache such as `beaker.cache.auth_plugins.` it's
+   recommended to switch it to a memcached/redis or database backend so it can
+   be shared across machines.
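+
+For example, the `beaker.cache.auth_plugins.` cache could be pointed at a
+shared database roughly like this (the backend type and connection URL below
+are illustrative only, adjust them to your own environment):
+
+.. code-block:: ini
+
+    ## illustrative values, any backend shared by all machines will do
+    beaker.cache.auth_plugins.type = ext:database
+    beaker.cache.auth_plugins.url = postgresql://user:secret@db-host/rhodecode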
+
 It is recommended to create another dedicated |RCE| instance to handle
 traffic from build farms or continuous integration servers.
@@ -28,24 +52,7 @@ traffic from build farms or continuous i
 load balancing rules that will separate regular user traffic from automated
 process traffic like continuous servers or build bots.

-If you scale across different machines, each |RCE| instance needs to store
-its data on a shared disk, preferably together with your repositories. This
-data directory contains template caches, a whoosh index,
-and is used for task locking to ensure safety across multiple instances. To
-do this, set the following properties in the
-:file:`/home/{user}/.rccontrol/{instance-id}/rhodecode.ini` file to set
-the shared location across all |RCE| instances.
-
-.. code-block:: ini
-
-    cache_dir = /file/path # set to shared directory location
-    search.location = /file/path # set to shared directory location
-    beaker.cache.data_dir = /file/path # set to shared directory location
-    beaker.cache.lock_dir = /file/path # set to shared directory location
-
 .. note::

    If Celery is used on each instance then you should run separate Celery
    instances, but the message broker should be the same for all of them.
-   This excludes one RabbitMQ shared server.
-
diff --git a/docs/contributing/api.rst b/docs/contributing/api.rst
--- a/docs/contributing/api.rst
+++ b/docs/contributing/api.rst
@@ -15,15 +15,13 @@ We keep the calls in the form ``{verb}_{
 Change and Deprecation
 ======================

-API deprecation is documented in the section :ref:`deprecated` together with
+API deprecation is documented in the section `deprecated` together with
 other notes about deprecated parts of the application.


 Deprecated API calls
 --------------------

-- Make sure to add them into the section :ref:`deprecated`.
-
 - Use `deprecated` inside of the call docstring to make our users aware of the
   deprecation::