.. _scale-horizontal-cluster:


Scale Horizontally / RhodeCode Cluster
--------------------------------------

|RCE| is built in a way that supports horizontal scaling across multiple machines.
There are three main pre-requisites for that:

- Shared storage that each machine can access, using NFS or another shared storage system.
- A shared DB connection across machines, using `MySQL`/`PostgreSQL` that each node can access.
- |RCE| user sessions and caches need to use shared storage (e.g. `Redis`_/`Memcached`).


Horizontal scaling means adding more machines or workers into your pool of
resources. Horizontally scaling |RCE| gives a huge performance increase,
especially under large traffic scenarios with a high number of requests.
This is very beneficial when |RCE| is serving many users simultaneously,
or if continuous integration servers are automatically pulling and pushing code.
It also adds High-Availability to your running system.


Cluster Overview
^^^^^^^^^^^^^^^^

Below we'll present a configuration example that will use two separate nodes to serve
|RCE| in a load-balanced environment. The 3rd node will act as a shared storage/cache
and handle load-balancing. In addition, the 3rd node will be used as a shared database instance.

This setup can be used either in a Docker based configuration or with individual
physical/virtual machines. Using the 3rd node for Storage/Redis/PostgreSQL/Nginx is
optional. All those components can be installed on one of the two nodes used for |RCE|.
We'll use the following naming for our nodes (example `/etc/hosts` entries are shown below):

- `rc-node-1` (NFS, DB, Cache node)
- `rc-node-2` (Worker node1)
- `rc-node-3` (Worker node2)

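For example, each node could resolve the others with `/etc/hosts` entries similar to the
sketch below (the IP addresses are placeholders only, adjust them to your own network)::

    # /etc/hosts on every node (example addresses only)
    10.0.0.1    rc-node-1
    10.0.0.2    rc-node-2
    10.0.0.3    rc-node-3
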
Our shared NFS storage in the example is located on `/home/rcdev/storage` and
it's RW accessible on **each** node.

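For illustration, if the storage is exported from `rc-node-1` over NFS, the mount on
`rc-node-2/3` could be defined roughly as below (the export path and mount options are
assumptions, adjust them to your NFS server setup)::

    # /etc/fstab entry on rc-node-2 and rc-node-3 (illustrative only)
    rc-node-1:/home/rcdev/storage    /home/rcdev/storage    nfs    rw,hard    0 0
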
In this example we use certain recommended components; however, many
of those can be replaced by others, in case your organization already uses them, for example:

- `MySQL`/`PostgreSQL`: aren't replaceable and are the only two supported databases.
- `Nginx`_ on `rc-node-1` can be replaced by: `Hardware Load Balancer (F5)`, `Apache`_, `HA-Proxy` etc.
- `Nginx`_ on rc-node-2/3 acts as a reverse proxy and can be replaced by another HTTP server
  acting as a reverse proxy, such as `Apache`_.
- `Redis`_ on `rc-node-1` can be replaced by: `Memcached`


Here's an overview of what components should be installed/set up on each server in our example:

- **rc-node-1**:

  - main storage acting as the NFS host.
  - `nginx` acting as a load-balancer.
  - `postgresql-server` used for the database and sessions.
  - `redis-server` used for storing shared caches.
  - optionally `rabbitmq-server` for `Celery` if used.
  - optionally, if `Celery` is used, an Enterprise/Community instance + VCSServer.
  - optionally a mailserver that can be shared by other instances.
  - optionally a channelstream server to handle live communication for all instances.


- **rc-node-2/3**:

  - `nginx` acting as a reverse proxy to handle requests to |RCE|.
  - 1x RhodeCode Enterprise/Community instance.
  - 1x VCSServer instance.
  - optionally for testing connections: postgresql-client, redis-client (redis-tools).


Before we start, here are a few assumptions that should be fulfilled:

- make sure each node can access the others.
- make sure `Redis`_/`MySQL`/`PostgreSQL`/`RabbitMQ`_ are running on `rc-node-1`
- make sure both `rc-node-2`/`3` can access the NFS storage with RW access
- make sure rc-node-2/3 can access the `Redis`_/`PostgreSQL`/`MySQL` database on `rc-node-1`.
- make sure `Redis`_/Database/`RabbitMQ`_ are password protected and accessible only
  from rc-node-2/3 (see the sketch below).

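The exact hardening steps depend on the components you picked. As a rough sketch only,
letting the worker nodes reach the shared `PostgreSQL` database on `rc-node-1` usually means
adjusting `postgresql.conf` and `pg_hba.conf` similar to the example below (the file paths,
subnet and auth method are assumptions, adjust them to your environment)::

    # postgresql.conf on rc-node-1 (illustrative only)
    # listen on the interface reachable by rc-node-2/3 instead of localhost only
    listen_addresses = '*'

    # pg_hba.conf on rc-node-1 (illustrative only)
    # allow password-authenticated access to the rhodecode database from the worker subnet
    host    rhodecode    all    10.0.0.0/24    md5

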
Setup rc-node-2/3
^^^^^^^^^^^^^^^^^

Initially, before `rc-node-1`, we'll configure both nodes 2 and 3 to operate as standalone
nodes with their own hostnames. Use the default installation settings, and use
the default local addresses (127.0.0.1) to configure VCSServer and Community/Enterprise instances.
All external connectivity will be handled by the reverse proxy (`Nginx`_ in our example).

This way we can ensure each individual host works and accepts connections, and we can
run some operations explicitly on a chosen node.

In addition, this allows us to explicitly direct certain traffic to a node, e.g. a
CI server will only call `rc-node-3` directly. This should be done similarly to a normal
installation, so check out the `Nginx`_/`Apache`_ configuration examples to configure each host.
Each one should already connect to the shared database during installation.


1) Assuming our final url will be http://rc-node-1, configure `instance_id` and `app.base_url`.

   a) On **rc-node-2** find the following settings and edit :file:`/home/{user}/.rccontrol/{instance-id}/rhodecode.ini`

   .. code-block:: ini

       ## required format is: *NAME-
       instance_id = *rc-node-2-
       app.base_url = http://rc-node-1


   b) On **rc-node-3** find the following settings and edit :file:`/home/{user}/.rccontrol/{instance-id}/rhodecode.ini`

   .. code-block:: ini

       ## required format is: *NAME-
       instance_id = *rc-node-3-
       app.base_url = http://rc-node-1


2) Configure `User Session` to use a shared database. Example config that should be
   changed on both **rc-node-2** and **rc-node-3**.
   Edit :file:`/home/{user}/.rccontrol/{instance-id}/rhodecode.ini`

.. code-block:: ini

    ####################################
    ###       BEAKER SESSION        ####
    ####################################

    ## Disable the default `file` sessions
    #beaker.session.type = file
    #beaker.session.data_dir = %(here)s/data/sessions

    ## use shared db based session, fast, and allows easy management over logged in users
    beaker.session.type = ext:database
    beaker.session.table_name = db_session
    # use our rc-node-1 here
    beaker.session.sa.url = postgresql://postgres:qweqwe@rc-node-1/rhodecode
    beaker.session.sa.pool_recycle = 3600
    beaker.session.sa.echo = false

In addition, make sure both instances use the same `session.secret` so users have
persistent sessions across nodes. Please generate a different one than in this example.

.. code-block:: ini

    # use a unique generated long string
    beaker.session.secret = 70e116cae2274656ba7265fd860aebbd

3) Configure the cache and archive cache to use our shared NFS storage on `rc-node-1`.

.. code-block:: ini

    # note the `_` prefix that allows using a directory without
    # remap and rescan checking for vcs inside it.
    cache_dir = /home/rcdev/storage/_cache_dir/data
    # note archive cache dir is disabled by default, however if you enable
    # it also needs to be shared
    #archive_cache_dir = /home/rcdev/storage/_tarball_cache_dir


4) Use a shared exception store. Example config that should be
   changed on both **rc-node-2** and **rc-node-3**, and also for the VCSServer.
   Edit :file:`/home/{user}/.rccontrol/{instance-id}/rhodecode.ini` and
   :file:`/home/{user}/.rccontrol/{vcsserver-instance-id}/vcsserver.ini`
   and add/change the following setting.

.. code-block:: ini

    exception_tracker.store_path = /home/rcdev/storage/_exception_store_data


5) Change cache backends to use `Redis`_ based caches. Below is a full example config
   that replaces the default file-based cache with shared `Redis`_ using a Distributed Lock.

.. code-block:: ini

    #####################################
    ###         DOGPILE CACHE        ####
    #####################################

    ## `cache_perms` cache settings for permission tree, auth TTL.
    #rc_cache.cache_perms.backend = dogpile.cache.rc.file_namespace
    #rc_cache.cache_perms.expiration_time = 300

    ## alternative `cache_perms` redis backend with distributed lock
    rc_cache.cache_perms.backend = dogpile.cache.rc.redis
    rc_cache.cache_perms.expiration_time = 300
    ## redis_expiration_time needs to be greater than expiration_time
    rc_cache.cache_perms.arguments.redis_expiration_time = 7200
    rc_cache.cache_perms.arguments.socket_timeout = 30
    rc_cache.cache_perms.arguments.host = rc-node-1
    rc_cache.cache_perms.arguments.password = qweqwe
    rc_cache.cache_perms.arguments.port = 6379
    rc_cache.cache_perms.arguments.db = 0
    rc_cache.cache_perms.arguments.distributed_lock = true

    ## `cache_repo` cache settings for FileTree, Readme, RSS FEEDS
    #rc_cache.cache_repo.backend = dogpile.cache.rc.file_namespace
    #rc_cache.cache_repo.expiration_time = 2592000

    ## alternative `cache_repo` redis backend with distributed lock
    rc_cache.cache_repo.backend = dogpile.cache.rc.redis
    rc_cache.cache_repo.expiration_time = 2592000
    ## redis_expiration_time needs to be greater than expiration_time
    rc_cache.cache_repo.arguments.redis_expiration_time = 2678400
    rc_cache.cache_repo.arguments.socket_timeout = 30
    rc_cache.cache_repo.arguments.host = rc-node-1
    rc_cache.cache_repo.arguments.password = qweqwe
    rc_cache.cache_repo.arguments.port = 6379
    rc_cache.cache_repo.arguments.db = 1
    rc_cache.cache_repo.arguments.distributed_lock = true

    ## cache settings for SQL queries, this needs to use memory type backend
    rc_cache.sql_cache_short.backend = dogpile.cache.rc.memory_lru
    rc_cache.sql_cache_short.expiration_time = 30

    ## `cache_repo_longterm` cache for repo object instances, this needs to use memory
    ## type backend as the objects kept are not pickle serializable
    rc_cache.cache_repo_longterm.backend = dogpile.cache.rc.memory_lru
    ## by default we use 96H, this is using invalidation on push anyway
    rc_cache.cache_repo_longterm.expiration_time = 345600
    ## max items in LRU cache, reduce this number to save memory, and expire last used
    ## cached objects
    rc_cache.cache_repo_longterm.max_size = 10000


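The `Redis`_ settings above assume that `redis-server` on `rc-node-1` listens on an interface
reachable by the worker nodes and requires the example password. A minimal, illustrative
`redis.conf` sketch for that (the file location and values are assumptions, adjust them to
your distribution and security policy)::

    # redis.conf on rc-node-1 (illustrative only)
    # listen on the interface reachable by rc-node-2/3, not only on localhost
    bind 0.0.0.0
    port 6379
    # same password as used in the rc_cache.*.arguments.password settings above
    requirepass qweqwe

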
6) Configure `Nginx`_ as reverse proxy on `rc-node-2/3`.
   Minimal `Nginx`_ config used:

.. code-block:: nginx

    ## rate limiter for certain pages to prevent brute force attacks
    limit_req_zone  $binary_remote_addr  zone=req_limit:10m  rate=1r/s;

    ## custom log format
    log_format log_custom '$remote_addr - $remote_user [$time_local] '
                          '"$request" $status $body_bytes_sent '
                          '"$http_referer" "$http_user_agent" '
                          '$request_time $upstream_response_time $pipe';

    server {
        listen 80;
        server_name rc-node-2;
        #server_name rc-node-3;

        access_log /var/log/nginx/rhodecode.access.log log_custom;
        error_log  /var/log/nginx/rhodecode.error.log;

        # example of proxy.conf can be found in our docs.
        include /etc/nginx/proxy.conf;

        ## serve static files by Nginx, recommended for performance
        location /_static/rhodecode {
            gzip on;
            gzip_min_length  500;
            gzip_proxied     any;
            gzip_comp_level  4;
            gzip_types  text/css text/javascript text/xml text/plain text/x-component application/javascript application/json application/xml application/rss+xml font/truetype font/opentype application/vnd.ms-fontobject image/svg+xml;
            gzip_vary on;
            gzip_disable "msie6";
            #alias /home/rcdev/.rccontrol/community-1/static;
            alias /home/rcdev/.rccontrol/enterprise-1/static;
        }

        location /_admin/login {
            limit_req zone=req_limit burst=10 nodelay;
            try_files $uri @rhode;
        }

        location / {
            try_files $uri @rhode;
        }

        location @rhode {
            # Url to running RhodeCode instance.
            # This is shown as `- URL: <host>` in output from rccontrol status.
            proxy_pass http://127.0.0.1:10020;
        }

        ## custom 502 error page. Will be displayed while RhodeCode server
        ## is turned off
        error_page 502 /502.html;
        location = /502.html {
            #root /home/rcdev/.rccontrol/community-1/static;
            root /home/rcdev/.rccontrol/enterprise-1/static;
        }
    }


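The included :file:`/etc/nginx/proxy.conf` is not shown above; the full example can be found in
our nginx documentation. As a rough, illustrative sketch only (the header and buffer values
below are assumptions, not the official example), such a file typically forwards the client
address and scheme and relaxes upload limits:

.. code-block:: nginx

    ## /etc/nginx/proxy.conf - illustrative sketch only, see our docs for the full example
    proxy_redirect              off;
    proxy_set_header            Host $host;
    ## pass the real client address and scheme to the RhodeCode instance
    proxy_set_header            X-Real-IP $remote_addr;
    proxy_set_header            X-Forwarded-For $proxy_add_x_forwarded_for;
    proxy_set_header            X-Forwarded-Proto $scheme;
    ## allow large pushes and file uploads
    client_max_body_size        400m;
    proxy_read_timeout          7200;

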
7) Optional: Full text search. In case you use `Whoosh` full text search we also need
   shared storage for the index. In our example our NFS is mounted at `/home/rcdev/storage`,
   which represents our storage, so we can use the following:

.. code-block:: ini

    # note the `_` prefix that allows using a directory without
    # remap and rescan checking for vcs inside it.
    search.location = /home/rcdev/storage/_index_data/index


.. note::

   If you use ElasticSearch it's shared by default, and simply running an ES node is
   cluster compatible by default.


8) Optional: If you intend to use mailing, all instances need to use either a shared
   mailing node, or each will use an individual local mail agent. Simply put, node-1/2/3
   need to use the same mailing configuration.



Setup rc-node-1
^^^^^^^^^^^^^^^

Configure `Nginx`_ as Load Balancer to rc-node-2/3.
Minimal `Nginx`_ example below:

.. code-block:: nginx

    ## define rc-cluster which contains a pool of our instances to connect to
    upstream rc-cluster {
        # rc-node-2/3 are stored in /etc/hosts with correct IP addresses
        server rc-node-2:80;
        server rc-node-3:80;
    }

    server {
        listen 80;
        server_name rc-node-1;

        location / {
            proxy_pass http://rc-cluster;
        }
    }


.. note::

   You should configure your load balancing accordingly. We recommend writing
   load balancing rules that will separate regular user traffic from
   automated process traffic like continuous integration servers or build bots.
   Sticky sessions are not required.

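As one purely illustrative way to separate that traffic, the load balancer on `rc-node-1`
could expose an additional hostname that always proxies to a single worker node, and the CI
servers would use only that hostname (the `rc-ci-node-1` name below is an assumption, not part
of the setup above):

.. code-block:: nginx

    ## illustrative only: send automated/CI traffic to rc-node-3, keep rc-cluster for users
    upstream rc-automation {
        server rc-node-3:80;
    }

    server {
        listen 80;
        ## hypothetical extra hostname used only by CI servers and build bots
        server_name rc-ci-node-1;

        location / {
            proxy_pass http://rc-automation;
        }
    }

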
Show which instance handles a request
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

You can easily check if load-balancing is working as expected. Visit our main node
`rc-node-1` URL, which at that point should already handle incoming requests and balance
them across node-2/3.

Add a special GET param `?showrcid=1` to show the current instance handling your request.

For example: visiting the url `http://rc-node-1/?showrcid=1` will show, at the bottom
of the screen, the cluster instance info,
e.g.: `RhodeCode instance id: rc-node-3-rc-node-3-3246`,
which is generated from::

    <NODE_HOSTNAME>-<INSTANCE_ID>-<WORKER_PID>


Using Celery with cluster
^^^^^^^^^^^^^^^^^^^^^^^^^

If `Celery` is used we recommend also setting up an instance of Enterprise/Community+VCSServer
on the node that is running `RabbitMQ`_. Those instances will be used to execute async
tasks on `rc-node-1`. This is the most efficient setup. `Celery` usually
handles tasks such as sending emails, forking repositories, importing
repositories from external locations etc. Using workers on the instance that has
direct access to the disks used by NFS, as well as to the email server, gives a noticeable
performance boost. Running workers local to the NFS storage results in faster
execution of forking large repositories or sending lots of emails.

Those instances need to be configured in the same way as the other nodes.
The instance on rc-node-1 can be added to the cluster, but we don't recommend doing it.
For best results let it be isolated to only executing `Celery` tasks in the cluster setup.

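Enabling `Celery` itself is done in :file:`rhodecode.ini`. The broker settings differ between
releases, so treat the snippet below only as a sketch (the setting names are assumptions based
on a typical configuration, verify them against the celery section of your own ini file):

.. code-block:: ini

    # illustrative only - enable Celery and point it at the shared RabbitMQ on rc-node-1
    use_celery = true
    broker.host = rc-node-1
    broker.vhost = rabbitmqhost
    broker.port = 5672
    broker.user = rabbitmq
    broker.password = qweqwe

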
.. _Gunicorn: http://gunicorn.org/
.. _Whoosh: https://pypi.python.org/pypi/Whoosh/
.. _Elasticsearch: https://www.elastic.co/
.. _RabbitMQ: http://www.rabbitmq.com/
.. _Nginx: http://nginx.org
.. _Apache: http://httpd.apache.org/
.. _Redis: http://redis.io