@@ -0,0 +1,43 b'' | |||
|
1 | |RCE| 5.0.1 |RNS| | |
|
2 | ----------------- | |
|
3 | ||
|
4 | Release Date | |
|
5 | ^^^^^^^^^^^^ | |
|
6 | ||
|
7 | - 2024-05-20 | |
|
8 | ||
|
9 | ||
|
10 | New Features | |
|
11 | ^^^^^^^^^^^^ | |
|
12 | ||
|
13 | ||
|
14 | ||
|
15 | General | |
|
16 | ^^^^^^^ | |
|
17 | ||
|
18 | ||
|
19 | ||
|
20 | Security | |
|
21 | ^^^^^^^^ | |
|
22 | ||
|
23 | ||
|
24 | ||
|
25 | Performance | |
|
26 | ^^^^^^^^^^^ | |
|
27 | ||
|
28 | ||
|
29 | ||
|
30 | ||
|
31 | Fixes | |
|
32 | ^^^^^ | |
|
33 | ||
|
34 | - Fixed Celery serialization issues | |
|
35 | - Fixed Celery startup signaling problems | |
|
36 | - Fixed SVN hooks binary dir paths, which in certain scenarios resulted in empty values preventing hooks from executing | |
|
37 | - Fixed an annotation bug for files without trailing newlines or with mixed newlines | |
|
38 | ||
|
39 | ||
|
40 | Upgrade notes | |
|
41 | ^^^^^^^^^^^^^ | |
|
42 | ||
|
43 | - RhodeCode 5.0.1 is an unscheduled bugfix release addressing some of the issues found during the 4.X -> 5.X migration
@@ -0,0 +1,39 b'' | |||
|
1 | |RCE| 5.0.2 |RNS| | |
|
2 | ----------------- | |
|
3 | ||
|
4 | Release Date | |
|
5 | ^^^^^^^^^^^^ | |
|
6 | ||
|
7 | - 2024-05-29 | |
|
8 | ||
|
9 | ||
|
10 | New Features | |
|
11 | ^^^^^^^^^^^^ | |
|
12 | ||
|
13 | ||
|
14 | ||
|
15 | General | |
|
16 | ^^^^^^^ | |
|
17 | ||
|
18 | ||
|
19 | ||
|
20 | Security | |
|
21 | ^^^^^^^^ | |
|
22 | ||
|
23 | ||
|
24 | ||
|
25 | Performance | |
|
26 | ^^^^^^^^^^^ | |
|
27 | ||
|
28 | ||
|
29 | ||
|
30 | ||
|
31 | Fixes | |
|
32 | ^^^^^ | |
|
33 | ||
|
34 | - Fixed problems with saving branch permissions | |
|
35 | ||
|
36 | Upgrade notes | |
|
37 | ^^^^^^^^^^^^^ | |
|
38 | ||
|
39 | - RhodeCode 5.0.2 is an unscheduled bugfix release addressing some of the issues found during the 4.X -> 5.X migration
@@ -0,0 +1,39 b'' | |||
|
1 | |RCE| 5.0.3 |RNS| | |
|
2 | ----------------- | |
|
3 | ||
|
4 | Release Date | |
|
5 | ^^^^^^^^^^^^ | |
|
6 | ||
|
7 | - 2024-06-17 | |
|
8 | ||
|
9 | ||
|
10 | New Features | |
|
11 | ^^^^^^^^^^^^ | |
|
12 | ||
|
13 | ||
|
14 | ||
|
15 | General | |
|
16 | ^^^^^^^ | |
|
17 | ||
|
18 | ||
|
19 | ||
|
20 | Security | |
|
21 | ^^^^^^^^ | |
|
22 | ||
|
23 | ||
|
24 | ||
|
25 | Performance | |
|
26 | ^^^^^^^^^^^ | |
|
27 | ||
|
28 | ||
|
29 | ||
|
30 | ||
|
31 | Fixes | |
|
32 | ^^^^^ | |
|
33 | ||
|
34 | - Fixed problems with nested LDAP groups | |
|
35 | ||
|
36 | Upgrade notes | |
|
37 | ^^^^^^^^^^^^^ | |
|
38 | ||
|
39 | - RhodeCode 5.0.3 is an unscheduled bugfix release addressing some of the issues found during the 4.X -> 5.X migration
@@ -0,0 +1,59 b'' | |||
|
1 | |RCE| 5.1.0 |RNS| | |
|
2 | ----------------- | |
|
3 | ||
|
4 | Release Date | |
|
5 | ^^^^^^^^^^^^ | |
|
6 | ||
|
7 | - 2024-07-18 | |
|
8 | ||
|
9 | ||
|
10 | New Features | |
|
11 | ^^^^^^^^^^^^ | |
|
12 | ||
|
13 | - We've introduced 2FA for users. Alongside the existing external-auth 2FA support, RhodeCode now allows enabling 2FA for users. | |
|
14 | 2FA options will be available for each user individually, or enforced via authentication plugins such as LDAP, or the internal one. | |
|
15 | - Email-based log-in. RhodeCode now allows logging in with an email address as well as a username for the main authentication type. | |
|
16 | - Ability to replace a file using the web UI. One can now replace an existing file from the web UI. | |
|
17 | - Git LFS sync automation. Remote push/pull commands can now also sync Git LFS objects. | |
|
18 | - Added the ability to remove or close branches from the web UI | |
|
19 | - Added the ability to delete a branch automatically after merging a PR for Git repositories | |
|
20 | - Added support for an S3-based archive_cache that allows storing cached archives in an S3-compatible object store. | |
|
21 | ||
|
22 | ||
|
23 | General | |
|
24 | ^^^^^^^ | |
|
25 | ||
|
26 | - Upgraded all dependency libraries to their latest available versions | |
|
27 | - Repository storage is no longer controlled via DB settings, but via the .ini file. This allows easier automated deployments. | |
|
28 | - Bumped mercurial to 6.7.4 | |
|
29 | - Mercurial: enable httppostarguments for better support of large repositories with lots of heads. | |
|
30 | - Added explicit db-migrate step to update hooks for 5.X release. | |
|
31 | ||
|
32 | ||
|
33 | Security | |
|
34 | ^^^^^^^^ | |
|
35 | ||
|
36 | ||
|
37 | ||
|
38 | Performance | |
|
39 | ^^^^^^^^^^^ | |
|
40 | ||
|
41 | - Introduced a full rewrite of the SSH backend for performance. The result is a 2-5x speed improvement for operations over SSH. | |
|
42 | Enable the new SSH wrapper by setting: `ssh.wrapper_cmd = /home/rhodecode/venv/bin/rc-ssh-wrapper-v2` | |
|
43 | - Introduced a new hooks subsystem that is more scalable and faster; enable it by setting: `vcs.hooks.protocol = celery` | |
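Taken together, the two performance settings above would land in the `[app:main]` section of the `.ini` file. A minimal sketch, assuming the venv path shown in the notes (adjust it for your install):

```ini
[app:main]
; new, faster SSH backend (path is illustrative; point it at your own venv)
ssh.wrapper_cmd = /home/rhodecode/venv/bin/rc-ssh-wrapper-v2

; new Celery-based hooks subsystem
vcs.hooks.protocol = celery
```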
|
44 | ||
|
45 | ||
|
46 | Fixes | |
|
47 | ^^^^^ | |
|
48 | ||
|
49 | - Archives: fixed Zip archive download breaking when a gitmodules file is present | |
|
50 | - Branch permissions: fixed a bug preventing specifying custom rules on a 4.X install | |
|
51 | - SVN: refactored SVN events, fixing support for them in dockerized environments | |
|
52 | - Fixed an empty server URL in PR links after pushing from the CLI | |
|
53 | ||
|
54 | ||
|
55 | Upgrade notes | |
|
56 | ^^^^^^^^^^^^^ | |
|
57 | ||
|
58 | - RhodeCode 5.1.0 is a major feature release after the big 5.0.0 Python 3 migration. We're happy to ship the first | |
|
59 | feature-rich release since then.
@@ -0,0 +1,55 b'' | |||
|
1 | ||
|
2 | # Copyright (C) 2010-2023 RhodeCode GmbH | |
|
3 | # | |
|
4 | # This program is free software: you can redistribute it and/or modify | |
|
5 | # it under the terms of the GNU Affero General Public License, version 3 | |
|
6 | # (only), as published by the Free Software Foundation. | |
|
7 | # | |
|
8 | # This program is distributed in the hope that it will be useful, | |
|
9 | # but WITHOUT ANY WARRANTY; without even the implied warranty of | |
|
10 | # MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the | |
|
11 | # GNU General Public License for more details. | |
|
12 | # | |
|
13 | # You should have received a copy of the GNU Affero General Public License | |
|
14 | # along with this program. If not, see <http://www.gnu.org/licenses/>. | |
|
15 | # | |
|
16 | # This program is dual-licensed. If you wish to learn more about the | |
|
17 | # RhodeCode Enterprise Edition, including its added features, Support services, | |
|
18 | # and proprietary license terms, please see https://rhodecode.com/licenses/ | |
|
19 | ||
|
20 | import pytest | |
|
21 | ||
|
22 | from rhodecode.api.tests.utils import ( | |
|
23 | build_data, api_call) | |
|
24 | ||
|
25 | ||
|
26 | @pytest.mark.usefixtures("app") | |
|
27 | class TestServiceApi: | |
|
28 | ||
|
29 | def test_service_api_with_wrong_secret(self): | |
|
30 | id, payload = build_data("wrong_api_key", 'service_get_repo_name_by_id') | |
|
31 | response = api_call(self.app, payload) | |
|
32 | ||
|
33 | assert 'Invalid API KEY' == response.json['error'] | |
|
34 | ||
|
35 | def test_service_api_with_legit_secret(self): | |
|
36 | id, payload = build_data(self.app.app.config.get_settings()['app.service_api.token'], | |
|
37 | 'service_get_repo_name_by_id', repo_id='1') | |
|
38 | response = api_call(self.app, payload) | |
|
39 | assert not response.json['error'] | |
|
40 | ||
|
41 | def test_service_api_not_a_part_of_public_api_suggestions(self): | |
|
42 | id, payload = build_data("secret", 'some_random_guess_method') | |
|
43 | response = api_call(self.app, payload) | |
|
44 | assert 'service_' not in response.json['error'] | |
|
45 | ||
|
46 | def test_service_get_data_for_ssh_wrapper_output(self): | |
|
47 | id, payload = build_data( | |
|
48 | self.app.app.config.get_settings()['app.service_api.token'], | |
|
49 | 'service_get_data_for_ssh_wrapper', | |
|
50 | user_id=1, | |
|
51 | repo_name='vcs_test_git') | |
|
52 | response = api_call(self.app, payload) | |
|
53 | ||
|
54 | assert ['branch_permissions', 'repo_permissions', 'repos_path', 'user_id', 'username']\ | |
|
55 | == list(response.json['result'].keys()) |
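For reference, the service API calls exercised by these tests are plain JSON-RPC request bodies. A minimal standalone sketch of building one outside the test helpers (the field names are inferred from how `build_data` is used above, and the token value is hypothetical):

```python
import json


def build_service_payload(auth_token, method, **args):
    # JSON-RPC body shape used by the RhodeCode API: "id" is any
    # request-local identifier, "args" carries the method parameters.
    return json.dumps({
        "id": 1,
        "auth_token": auth_token,
        "method": method,
        "args": args,
    })


# hypothetical token; in practice this comes from app.service_api.token
body = build_service_payload(
    "app-service-api-token", "service_get_repo_name_by_id", repo_id="1")
```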
@@ -0,0 +1,125 b'' | |||
|
1 | # Copyright (C) 2011-2023 RhodeCode GmbH | |
|
2 | # | |
|
3 | # This program is free software: you can redistribute it and/or modify | |
|
4 | # it under the terms of the GNU Affero General Public License, version 3 | |
|
5 | # (only), as published by the Free Software Foundation. | |
|
6 | # | |
|
7 | # This program is distributed in the hope that it will be useful, | |
|
8 | # but WITHOUT ANY WARRANTY; without even the implied warranty of | |
|
9 | # MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the | |
|
10 | # GNU General Public License for more details. | |
|
11 | # | |
|
12 | # You should have received a copy of the GNU Affero General Public License | |
|
13 | # along with this program. If not, see <http://www.gnu.org/licenses/>. | |
|
14 | # | |
|
15 | # This program is dual-licensed. If you wish to learn more about the | |
|
16 | # RhodeCode Enterprise Edition, including its added features, Support services, | |
|
17 | # and proprietary license terms, please see https://rhodecode.com/licenses/ | |
|
18 | ||
|
19 | import logging | |
|
20 | import datetime | |
|
21 | from collections import defaultdict | |
|
22 | ||
|
23 | from sqlalchemy import Table | |
|
24 | from rhodecode.api import jsonrpc_method, SERVICE_API_IDENTIFIER | |
|
25 | ||
|
26 | ||
|
27 | log = logging.getLogger(__name__) | |
|
28 | ||
|
29 | ||
|
30 | @jsonrpc_method() | |
|
31 | def service_get_data_for_ssh_wrapper(request, apiuser, user_id, repo_name, key_id=None): | |
|
32 | from rhodecode.model.db import User | |
|
33 | from rhodecode.model.scm import ScmModel | |
|
34 | from rhodecode.model.meta import raw_query_executor, Base | |
|
35 | ||
|
36 | if key_id: | |
|
37 | table = Table('user_ssh_keys', Base.metadata, autoload=False) | |
|
38 | atime = datetime.datetime.utcnow() | |
|
39 | stmt = ( | |
|
40 | table.update() | |
|
41 | .where(table.c.ssh_key_id == key_id) | |
|
42 | .values(accessed_on=atime) | |
|
43 | ) | |
|
44 | ||
|
45 | res_count = None | |
|
46 | with raw_query_executor() as session: | |
|
47 | result = session.execute(stmt) | |
|
48 | if result.rowcount: | |
|
49 | res_count = result.rowcount | |
|
50 | ||
|
51 | if res_count: | |
|
52 | log.debug(f'Update key id:{key_id} access time') | |
|
53 | db_user = User.get(user_id) | |
|
54 | if not db_user: | |
|
55 | return None | |
|
56 | auth_user = db_user.AuthUser() | |
|
57 | ||
|
58 | return { | |
|
59 | 'user_id': db_user.user_id, | |
|
60 | 'username': db_user.username, | |
|
61 | 'repo_permissions': auth_user.permissions['repositories'], | |
|
62 | "branch_permissions": auth_user.get_branch_permissions(repo_name), | |
|
63 | "repos_path": ScmModel().repos_path | |
|
64 | } | |
|
65 | ||
|
66 | ||
|
67 | @jsonrpc_method() | |
|
68 | def service_get_repo_name_by_id(request, apiuser, repo_id): | |
|
69 | from rhodecode.model.repo import RepoModel | |
|
70 | by_id_match = RepoModel().get_repo_by_id(repo_id) | |
|
71 | if by_id_match: | |
|
72 | repo_name = by_id_match.repo_name | |
|
73 | return { | |
|
74 | 'repo_name': repo_name | |
|
75 | } | |
|
76 | return None | |
|
77 | ||
|
78 | ||
|
79 | @jsonrpc_method() | |
|
80 | def service_mark_for_invalidation(request, apiuser, repo_name): | |
|
81 | from rhodecode.model.scm import ScmModel | |
|
82 | ScmModel().mark_for_invalidation(repo_name) | |
|
83 | return {'msg': "Applied"} | |
|
84 | ||
|
85 | ||
|
86 | @jsonrpc_method() | |
|
87 | def service_config_to_hgrc(request, apiuser, cli_flags, repo_name): | |
|
88 | from rhodecode.model.db import RhodeCodeUi | |
|
89 | from rhodecode.model.settings import VcsSettingsModel | |
|
90 | ||
|
91 | ui_sections = defaultdict(list) | |
|
92 | ui = VcsSettingsModel(repo=repo_name).get_ui_settings(section=None, key=None) | |
|
93 | ||
|
94 | default_hooks = [ | |
|
95 | ('pretxnchangegroup.ssh_auth', 'python:vcsserver.hooks.pre_push_ssh_auth'), | |
|
96 | ('pretxnchangegroup.ssh', 'python:vcsserver.hooks.pre_push_ssh'), | |
|
97 | ('changegroup.ssh', 'python:vcsserver.hooks.post_push_ssh'), | |
|
98 | ||
|
99 | ('preoutgoing.ssh', 'python:vcsserver.hooks.pre_pull_ssh'), | |
|
100 | ('outgoing.ssh', 'python:vcsserver.hooks.post_pull_ssh'), | |
|
101 | ] | |
|
102 | ||
|
103 | for k, v in default_hooks: | |
|
104 | ui_sections['hooks'].append((k, v)) | |
|
105 | ||
|
106 | for entry in ui: | |
|
107 | if not entry.active: | |
|
108 | continue | |
|
109 | sec = entry.section | |
|
110 | key = entry.key | |
|
111 | ||
|
112 | if sec in cli_flags: | |
|
113 | # we want only custom hooks, so we skip builtins | |
|
114 | if sec == 'hooks' and key in RhodeCodeUi.HOOKS_BUILTIN: | |
|
115 | continue | |
|
116 | ||
|
117 | ui_sections[sec].append([key, entry.value]) | |
|
118 | ||
|
119 | flags = [] | |
|
120 | for _sec, key_val in ui_sections.items(): | |
|
121 | flags.append(' ') | |
|
122 | flags.append(f'[{_sec}]') | |
|
123 | for key, val in key_val: | |
|
124 | flags.append(f'{key}= {val}') | |
|
125 | return {'flags': flags} |
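The `flags` list returned by `service_config_to_hgrc` above is a flattened, hgrc-style dump of the collected sections. A standalone sketch of the same flattening loop, seeded with one hypothetical hook entry:

```python
from collections import defaultdict

# one illustrative section entry, mirroring the default_hooks shape above
ui_sections = defaultdict(list)
ui_sections['hooks'].append(
    ('changegroup.ssh', 'python:vcsserver.hooks.post_push_ssh'))

# flatten sections into hgrc-style lines, as service_config_to_hgrc does
flags = []
for sec, key_val in ui_sections.items():
    flags.append(' ')
    flags.append(f'[{sec}]')
    for key, val in key_val:
        flags.append(f'{key}= {val}')
```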
@@ -0,0 +1,67 b'' | |||
|
1 | import pytest | |
|
2 | import mock | |
|
3 | ||
|
4 | from rhodecode.lib.type_utils import AttributeDict | |
|
5 | from rhodecode.model.meta import Session | |
|
6 | from rhodecode.tests.fixture import Fixture | |
|
7 | from rhodecode.tests.routes import route_path | |
|
8 | from rhodecode.model.settings import SettingsModel | |
|
9 | ||
|
10 | fixture = Fixture() | |
|
11 | ||
|
12 | ||
|
13 | @pytest.mark.usefixtures('app') | |
|
14 | class Test2FA(object): | |
|
15 | @classmethod | |
|
16 | def setup_class(cls): | |
|
17 | cls.password = 'valid-one' | |
|
18 | ||
|
19 | def test_redirect_to_2fa_setup_if_enabled_for_user(self, user_util): | |
|
20 | user = user_util.create_user(password=self.password) | |
|
21 | user.has_enabled_2fa = True | |
|
22 | self.app.post( | |
|
23 | route_path('login'), | |
|
24 | {'username': user.username, | |
|
25 | 'password': self.password}) | |
|
26 | ||
|
27 | response = self.app.get('/') | |
|
28 | assert response.status_code == 302 | |
|
29 | assert response.location.endswith(route_path('setup_2fa')) | |
|
30 | ||
|
31 | def test_redirect_to_2fa_check_if_2fa_configured(self, user_util): | |
|
32 | user = user_util.create_user(password=self.password) | |
|
33 | user.has_enabled_2fa = True | |
|
34 | user.init_secret_2fa() | |
|
35 | Session().add(user) | |
|
36 | Session().commit() | |
|
37 | self.app.post( | |
|
38 | route_path('login'), | |
|
39 | {'username': user.username, | |
|
40 | 'password': self.password}) | |
|
41 | response = self.app.get('/') | |
|
42 | assert response.status_code == 302 | |
|
43 | assert response.location.endswith(route_path('check_2fa')) | |
|
44 | ||
|
45 | def test_2fa_recovery_codes_works_only_once(self, user_util): | |
|
46 | user = user_util.create_user(password=self.password) | |
|
47 | user.has_enabled_2fa = True | |
|
48 | user.init_secret_2fa() | |
|
49 | recovery_code_to_check = user.init_2fa_recovery_codes()[0] | |
|
50 | Session().add(user) | |
|
51 | Session().commit() | |
|
52 | self.app.post( | |
|
53 | route_path('login'), | |
|
54 | {'username': user.username, | |
|
55 | 'password': self.password}) | |
|
56 | response = self.app.post(route_path('check_2fa'), {'totp': recovery_code_to_check}) | |
|
57 | assert response.status_code == 302 | |
|
58 | response = self.app.post(route_path('check_2fa'), {'totp': recovery_code_to_check}) | |
|
59 | response.mustcontain('Code is invalid. Try again!') | |
|
60 | ||
|
61 | def test_2fa_state_when_forced_by_admin(self, user_util): | |
|
62 | user = user_util.create_user(password=self.password) | |
|
63 | user.has_enabled_2fa = False | |
|
64 | with mock.patch.object( | |
|
65 | SettingsModel, 'get_setting_by_name', lambda *a, **kw: AttributeDict(app_settings_value=True)): | |
|
66 | ||
|
67 | assert user.has_enabled_2fa |
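The single-use semantics asserted in `test_2fa_recovery_codes_works_only_once` can be summarized with a minimal standalone model (a sketch, not RhodeCode's actual implementation):

```python
class RecoveryCodes:
    """Minimal model of one-time 2FA recovery codes."""

    def __init__(self, codes):
        self._unused = set(codes)

    def verify(self, code):
        # a code is valid exactly once; any further use is rejected
        if code in self._unused:
            self._unused.remove(code)
            return True
        return False


codes = RecoveryCodes(["abcd-1234", "efgh-5678"])
first = codes.verify("abcd-1234")   # first use succeeds
second = codes.verify("abcd-1234")  # second use is rejected
```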
|
1 | NO CONTENT: new file 100644 | |
@@ -1,5 +1,5 b'' | |||
|
1 | 1 | [bumpversion] |
|
2 |
current_version = 5. |
|
|
2 | current_version = 5.1.0 | |
|
3 | 3 | message = release: Bump version {current_version} to {new_version} |
|
4 | 4 | |
|
5 | 5 | [bumpversion:file:rhodecode/VERSION] |
@@ -1,192 +1,171 b'' | |||
|
1 | 1 | # required for pushd to work.. |
|
2 | 2 | SHELL = /bin/bash |
|
3 | 3 | |
|
4 | 4 | |
|
5 | 5 | # set by: PATH_TO_OUTDATED_PACKAGES=/some/path/outdated_packages.py |
|
6 | 6 | OUTDATED_PACKAGES = ${PATH_TO_OUTDATED_PACKAGES} |
|
7 | 7 | |
|
8 | 8 | .PHONY: clean |
|
9 | 9 | ## Cleanup compiled and cache py files |
|
10 | 10 | clean: |
|
11 | 11 | make test-clean |
|
12 | 12 | find . -type f \( -iname '*.c' -o -iname '*.pyc' -o -iname '*.so' -o -iname '*.orig' \) -exec rm '{}' ';' |
|
13 | 13 | find . -type d -name "build" -prune -exec rm -rf '{}' ';' |
|
14 | 14 | |
|
15 | 15 | |
|
16 | 16 | .PHONY: test |
|
17 | 17 | ## run test-clean and tests |
|
18 | 18 | test: |
|
19 | 19 | make test-clean |
|
20 | 20 | make test-only |
|
21 | 21 | |
|
22 | 22 | |
|
23 | 23 | .PHONY: test-clean |
|
24 | 24 | ## run test-clean and tests |
|
25 | 25 | test-clean: |
|
26 | 26 | rm -rf coverage.xml htmlcov junit.xml pylint.log result |
|
27 | 27 | find . -type d -name "__pycache__" -prune -exec rm -rf '{}' ';' |
|
28 | 28 | find . -type f \( -iname '.coverage.*' \) -exec rm '{}' ';' |
|
29 | 29 | |
|
30 | 30 | |
|
31 | 31 | .PHONY: test-only |
|
32 | 32 | ## Run tests only without cleanup |
|
33 | 33 | test-only: |
|
34 | 34 | PYTHONHASHSEED=random \ |
|
35 | 35 | py.test -x -vv -r xw -p no:sugar \ |
|
36 | 36 | --cov-report=term-missing --cov-report=html \ |
|
37 | 37 | --cov=rhodecode rhodecode |
|
38 | 38 | |
|
39 | 39 | |
|
40 | .PHONY: test-only-mysql | |
|
41 | ## run tests against mysql | |
|
42 | test-only-mysql: | |
|
43 | PYTHONHASHSEED=random \ | |
|
44 | py.test -x -vv -r xw -p no:sugar \ | |
|
45 | --cov-report=term-missing --cov-report=html \ | |
|
46 | --ini-config-override='{"app:main": {"sqlalchemy.db1.url": "mysql://root:qweqwe@localhost/rhodecode_test?charset=utf8"}}' \ | |
|
47 | --cov=rhodecode rhodecode | |
|
48 | ||
|
49 | ||
|
50 | .PHONY: test-only-postgres | |
|
51 | ## run tests against postgres | |
|
52 | test-only-postgres: | |
|
53 | PYTHONHASHSEED=random \ | |
|
54 | py.test -x -vv -r xw -p no:sugar \ | |
|
55 | --cov-report=term-missing --cov-report=html \ | |
|
56 | --ini-config-override='{"app:main": {"sqlalchemy.db1.url": "postgresql://postgres:qweqwe@localhost/rhodecode_test"}}' \ | |
|
57 | --cov=rhodecode rhodecode | |
|
58 | ||
|
59 | .PHONY: ruff-check | |
|
60 | ## run a ruff analysis | |
|
61 | ruff-check: | |
|
62 | ruff check --ignore F401 --ignore I001 --ignore E402 --ignore E501 --ignore F841 --exclude rhodecode/lib/dbmigrate --exclude .eggs --exclude .dev . | |
|
63 | ||
|
64 | ||
|
65 | 40 | .PHONY: docs |
|
66 | 41 | ## build docs |
|
67 | 42 | docs: |
|
68 | 43 | (cd docs; docker run --rm -v $(PWD):/project --workdir=/project/docs sphinx-doc-build-rc make clean html SPHINXOPTS="-W") |
|
69 | 44 | |
|
70 | 45 | |
|
71 | 46 | .PHONY: docs-clean |
|
72 | 47 | ## Cleanup docs |
|
73 | 48 | docs-clean: |
|
74 | 49 | (cd docs; docker run --rm -v $(PWD):/project --workdir=/project/docs sphinx-doc-build-rc make clean) |
|
75 | 50 | |
|
76 | 51 | |
|
77 | 52 | .PHONY: docs-cleanup |
|
78 | 53 | ## Cleanup docs |
|
79 | 54 | docs-cleanup: |
|
80 | 55 | (cd docs; docker run --rm -v $(PWD):/project --workdir=/project/docs sphinx-doc-build-rc make cleanup) |
|
81 | 56 | |
|
82 | 57 | |
|
83 | 58 | .PHONY: web-build |
|
84 | 59 | ## Build JS packages static/js |
|
85 | 60 | web-build: |
|
86 | 61 | docker run -it --rm -v $(PWD):/project --workdir=/project rhodecode/static-files-build:16 -c "npm install && /project/node_modules/.bin/grunt" |
|
87 | 62 | # run static file check |
|
88 | 63 | ./rhodecode/tests/scripts/static-file-check.sh rhodecode/public/ |
|
89 | 64 | rm -rf node_modules |
|
90 | 65 | |
|
66 | .PHONY: ruff-check | |
|
67 | ## run a ruff analysis | |
|
68 | ruff-check: | |
|
69 | ruff check --ignore F401 --ignore I001 --ignore E402 --ignore E501 --ignore F841 --exclude rhodecode/lib/dbmigrate --exclude .eggs --exclude .dev . | |
|
91 | 70 | |
|
92 | 71 | .PHONY: pip-packages |
|
93 | 72 | ## Show outdated packages |
|
94 | 73 | pip-packages: |
|
95 | 74 | python ${OUTDATED_PACKAGES} |
|
96 | 75 | |
|
97 | 76 | |
|
98 | 77 | .PHONY: build |
|
99 | 78 | ## Build sdist/egg |
|
100 | 79 | build: |
|
101 | 80 | python -m build |
|
102 | 81 | |
|
103 | 82 | |
|
104 | 83 | .PHONY: dev-sh |
|
105 | 84 | ## make dev-sh |
|
106 | 85 | dev-sh: |
|
107 | 86 | sudo echo "deb [trusted=yes] https://apt.fury.io/rsteube/ /" | sudo tee -a "/etc/apt/sources.list.d/fury.list" |
|
108 | 87 | sudo apt-get update |
|
109 | 88 | sudo apt-get install -y zsh carapace-bin |
|
110 | 89 | rm -rf /home/rhodecode/.oh-my-zsh |
|
111 | 90 | curl https://raw.githubusercontent.com/robbyrussell/oh-my-zsh/master/tools/install.sh | sh |
|
112 | echo "source <(carapace _carapace)" > /home/rhodecode/.zsrc | |
|
113 | PROMPT='%(?.%F{green}√.%F{red}?%?)%f %B%F{240}%1~%f%b %# ' zsh | |
|
91 | @echo "source <(carapace _carapace)" > /home/rhodecode/.zsrc | |
|
92 | @echo "${RC_DEV_CMD_HELP}" | |
|
93 | @PROMPT='%(?.%F{green}√.%F{red}?%?)%f %B%F{240}%1~%f%b %# ' zsh | |
|
114 | 94 | |
|
115 | 95 | |
|
116 | 96 | .PHONY: dev-cleanup |
|
117 | 97 | ## Cleanup: pip freeze | grep -v "^-e" | grep -v "@" | xargs pip uninstall -y |
|
118 | 98 | dev-cleanup: |
|
119 | 99 | pip freeze | grep -v "^-e" | grep -v "@" | xargs pip uninstall -y |
|
120 | 100 | rm -rf /tmp/* |
|
121 | 101 | |
|
122 | 102 | |
|
123 | 103 | .PHONY: dev-env |
|
124 | 104 | ## make dev-env based on the requirements files and install develop of packages |
|
105 | ## Cleanup: pip freeze | grep -v "^-e" | grep -v "@" | xargs pip uninstall -y | |
|
125 | 106 | dev-env: |
|
107 | sudo -u root chown rhodecode:rhodecode /home/rhodecode/.cache/pip/ | |
|
126 | 108 |
|
|
127 | 109 | pushd ../rhodecode-vcsserver/ && make dev-env && popd |
|
128 | 110 | pip wheel --wheel-dir=/home/rhodecode/.cache/pip/wheels -r requirements.txt -r requirements_rc_tools.txt -r requirements_test.txt -r requirements_debug.txt |
|
129 | 111 | pip install --no-index --find-links=/home/rhodecode/.cache/pip/wheels -r requirements.txt -r requirements_rc_tools.txt -r requirements_test.txt -r requirements_debug.txt |
|
130 | 112 | pip install -e . |
|
131 | 113 | |
|
132 | 114 | |
|
133 | 115 | .PHONY: sh |
|
134 | 116 | ## shortcut for make dev-sh dev-env |
|
135 | 117 | sh: |
|
136 | 118 | make dev-env |
|
137 | 119 | make dev-sh |
|
138 | 120 | |
|
139 | 121 | |
|
140 | .PHONY: dev-srv | |
|
141 | ## run develop server instance, docker exec -it $(docker ps -q --filter 'name=dev-enterprise-ce') /bin/bash | |
|
142 | dev-srv: | |
|
143 | pserve --reload .dev/dev.ini | |
|
122 | ## Allows changes of workers e.g make dev-srv-g workers=2 | |
|
123 | workers?=1 | |
|
144 | 124 | |
|
145 | ||
|
146 | .PHONY: dev-srv-g | |
|
147 | ## run gunicorn multi process workers | |
|
148 | dev-srv-g: | |
|
149 | gunicorn --paste .dev/dev.ini --bind=0.0.0.0:10020 --config=.dev/gunicorn_config.py --timeout=120 --reload | |
|
125 | .PHONY: dev-srv | |
|
126 | ## run gunicorn web server with reloader, use workers=N to set multiworker mode | |
|
127 | dev-srv: | |
|
128 | gunicorn --paste=.dev/dev.ini --bind=0.0.0.0:10020 --config=.dev/gunicorn_config.py --timeout=120 --reload --workers=$(workers) | |
|
150 | 129 | |
|
151 | 130 | |
|
152 | 131 | # Default command on calling make |
|
153 | 132 | .DEFAULT_GOAL := show-help |
|
154 | 133 | |
|
155 | 134 | .PHONY: show-help |
|
156 | 135 | show-help: |
|
157 | 136 | @echo "$$(tput bold)Available rules:$$(tput sgr0)" |
|
158 | 137 | @echo |
|
159 | 138 | @sed -n -e "/^## / { \ |
|
160 | 139 | h; \ |
|
161 | 140 | s/.*//; \ |
|
162 | 141 | :doc" \ |
|
163 | 142 | -e "H; \ |
|
164 | 143 | n; \ |
|
165 | 144 | s/^## //; \ |
|
166 | 145 | t doc" \ |
|
167 | 146 | -e "s/:.*//; \ |
|
168 | 147 | G; \ |
|
169 | 148 | s/\\n## /---/; \ |
|
170 | 149 | s/\\n/ /g; \ |
|
171 | 150 | p; \ |
|
172 | 151 | }" ${MAKEFILE_LIST} \ |
|
173 | 152 | | LC_ALL='C' sort --ignore-case \ |
|
174 | 153 | | awk -F '---' \ |
|
175 | 154 | -v ncol=$$(tput cols) \ |
|
176 | 155 | -v indent=19 \ |
|
177 | 156 | -v col_on="$$(tput setaf 6)" \ |
|
178 | 157 | -v col_off="$$(tput sgr0)" \ |
|
179 | 158 | '{ \ |
|
180 | 159 | printf "%s%*s%s ", col_on, -indent, $$1, col_off; \ |
|
181 | 160 | n = split($$2, words, " "); \ |
|
182 | 161 | line_length = ncol - indent; \ |
|
183 | 162 | for (i = 1; i <= n; i++) { \ |
|
184 | 163 | line_length -= length(words[i]) + 1; \ |
|
185 | 164 | if (line_length <= 0) { \ |
|
186 | 165 | line_length = ncol - indent - length(words[i]) - 1; \ |
|
187 | 166 | printf "\n%*s ", -indent, " "; \ |
|
188 | 167 | } \ |
|
189 | 168 | printf "%s ", words[i]; \ |
|
190 | 169 | } \ |
|
191 | 170 | printf "\n"; \ |
|
192 | 171 | }' |
@@ -1,865 +1,856 b'' | |||
|
1 | 1 | |
|
2 | 2 | ; ######################################### |
|
3 | 3 | ; RHODECODE COMMUNITY EDITION CONFIGURATION |
|
4 | 4 | ; ######################################### |
|
5 | 5 | |
|
6 | 6 | [DEFAULT] |
|
7 | 7 | ; Debug flag sets all loggers to debug, and enables request tracking |
|
8 | 8 | debug = true |
|
9 | 9 | |
|
10 | 10 | ; ######################################################################## |
|
11 | 11 | ; EMAIL CONFIGURATION |
|
12 | 12 | ; These settings will be used by the RhodeCode mailing system |
|
13 | 13 | ; ######################################################################## |
|
14 | 14 | |
|
15 | 15 | ; prefix all emails subjects with given prefix, helps filtering out emails |
|
16 | 16 | #email_prefix = [RhodeCode] |
|
17 | 17 | |
|
18 | 18 | ; email FROM address all mails will be sent |
|
19 | 19 | #app_email_from = rhodecode-noreply@localhost |
|
20 | 20 | |
|
21 | 21 | #smtp_server = mail.server.com |
|
22 | 22 | #smtp_username = |
|
23 | 23 | #smtp_password = |
|
24 | 24 | #smtp_port = |
|
25 | 25 | #smtp_use_tls = false |
|
26 | 26 | #smtp_use_ssl = true |
|
27 | 27 | |
|
28 | 28 | [server:main] |
|
29 | 29 | ; COMMON HOST/IP CONFIG, This applies mostly to develop setup, |
|
30 | 30 | ; Host port for gunicorn are controlled by gunicorn_conf.py |
|
31 | 31 | host = 127.0.0.1 |
|
32 | 32 | port = 10020 |
|
33 | 33 | |
|
34 | ; ################################################## | |
|
35 | ; WAITRESS WSGI SERVER - Recommended for Development | |
|
36 | ; ################################################## | |
|
37 | ||
|
38 | ; use server type | |
|
39 | use = egg:waitress#main | |
|
40 | ||
|
41 | ; number of worker threads | |
|
42 | threads = 5 | |
|
43 | ||
|
44 | ; MAX BODY SIZE 100GB | |
|
45 | max_request_body_size = 107374182400 | |
|
46 | ||
|
47 | ; Use poll instead of select, fixes file descriptors limits problems. | |
|
48 | ; May not work on old windows systems. | |
|
49 | asyncore_use_poll = true | |
|
50 | ||
|
51 | 34 | |
|
52 | 35 | ; ########################### |
|
53 | 36 | ; GUNICORN APPLICATION SERVER |
|
54 | 37 | ; ########################### |
|
55 | 38 | |
|
56 |
; run with gunicorn |
|
|
39 | ; run with gunicorn --config gunicorn_conf.py --paste rhodecode.ini | |
|
57 | 40 | |
|
58 | 41 | ; Module to use, this setting shouldn't be changed |
|
59 |
|
|
|
42 | use = egg:gunicorn#main | |
|
60 | 43 | |
|
61 | 44 | ; Prefix middleware for RhodeCode. |
|
62 | 45 | ; recommended when using proxy setup. |
|
63 | 46 | ; allows to set RhodeCode under a prefix in server. |
|
64 | 47 | ; eg https://server.com/custom_prefix. Enable `filter-with =` option below as well. |
|
65 | 48 | ; And set your prefix like: `prefix = /custom_prefix` |
|
66 | 49 | ; be sure to also set beaker.session.cookie_path = /custom_prefix if you need |
|
67 | 50 | ; to make your cookies only work on prefix url |
|
68 | 51 | [filter:proxy-prefix] |
|
69 | 52 | use = egg:PasteDeploy#prefix |
|
70 | 53 | prefix = / |
|
71 | 54 | |
|
72 | 55 | [app:main] |
|
73 | 56 | ; The %(here)s variable will be replaced with the absolute path of parent directory |
|
74 | 57 | ; of this file |
|
75 | 58 | ; Each option in the app:main can be override by an environmental variable |
|
76 | 59 | ; |
|
77 | 60 | ;To override an option: |
|
78 | 61 | ; |
|
79 | 62 | ;RC_<KeyName> |
|
80 | 63 | ;Everything should be uppercase, . and - should be replaced by _. |
|
81 | 64 | ;For example, if you have these configuration settings: |
|
82 | 65 | ;rc_cache.repo_object.backend = foo |
|
83 | 66 | ;can be overridden by |
|
84 | 67 | ;export RC_CACHE_REPO_OBJECT_BACKEND=foo |
|
85 | 68 | |
|
86 | 69 | use = egg:rhodecode-enterprise-ce |
|
87 | 70 | |
|
88 | 71 | ; enable proxy prefix middleware, defined above |
|
89 | 72 | #filter-with = proxy-prefix |
|
90 | 73 | |
|
91 | 74 | ; ############# |
|
92 | 75 | ; DEBUG OPTIONS |
|
93 | 76 | ; ############# |
|
94 | 77 | |
|
95 | 78 | pyramid.reload_templates = true |
|
96 | 79 | |
|
97 | 80 | # During development we want to have the debug toolbar enabled |
|
98 | 81 | pyramid.includes = |
|
99 | 82 | pyramid_debugtoolbar |
|
100 | 83 | |
|
101 | 84 | debugtoolbar.hosts = 0.0.0.0/0 |
|
102 | 85 | debugtoolbar.exclude_prefixes = |
|
103 | 86 | /css |
|
104 | 87 | /fonts |
|
105 | 88 | /images |
|
106 | 89 | /js |
|
107 | 90 | |
|
108 | 91 | ## RHODECODE PLUGINS ## |
|
109 | 92 | rhodecode.includes = |
|
110 | 93 | rhodecode.api |
|
111 | 94 | |
|
112 | 95 | |
|
113 | 96 | # api prefix url |
|
114 | 97 | rhodecode.api.url = /_admin/api |
|
115 | 98 | |
|
116 | 99 | ; enable debug style page |
|
117 | 100 | debug_style = true |
|
118 | 101 | |
|
119 | 102 | ; ################# |
|
120 | 103 | ; END DEBUG OPTIONS |
|
121 | 104 | ; ################# |
|
122 | 105 | |
|
123 | 106 | ; encryption key used to encrypt social plugin tokens, |
|
124 | 107 | ; remote_urls with credentials etc, if not set it defaults to |
|
125 | 108 | ; `beaker.session.secret` |
|
126 | 109 | #rhodecode.encrypted_values.secret = |
|
127 | 110 | |
|
128 | 111 | ; decryption strict mode (enabled by default). It controls if decryption raises |
|
129 | 112 | ; `SignatureVerificationError` in case of wrong key, or damaged encryption data. |
|
130 | 113 | #rhodecode.encrypted_values.strict = false |
|
131 | 114 | |
|
132 | 115 | ; Pick algorithm for encryption. Either fernet (more secure) or aes (default) |
|
133 | 116 | ; fernet is safer, and we strongly recommend switching to it. |
|
134 | 117 | ; Due to backward compatibility aes is used as default. |
|
135 | 118 | #rhodecode.encrypted_values.algorithm = fernet |
|
136 | 119 | |
|
137 | 120 | ; Return gzipped responses from RhodeCode (static files/application) |
|
138 | 121 | gzip_responses = false |
|
139 | 122 | |
|
140 | 123 | ; Auto-generate javascript routes file on startup |
|
141 | 124 | generate_js_files = false |
|
142 | 125 | |
|
143 | 126 | ; System global default language. |
|
144 | 127 | ; All available languages: en (default), be, de, es, fr, it, ja, pl, pt, ru, zh |
|
145 | 128 | lang = en |
|
146 | 129 | |
|
147 | 130 | ; Perform a full repository scan and import on each server start. |
|
148 | 131 | ; Setting this to true could lead to a very long startup time. |
|
149 | 132 | startup.import_repos = false |
|
150 | 133 | |
|
151 | 134 | ; URL at which the application is running. This is used for Bootstrapping |
|
152 | 135 | ; requests in context when no web request is available. Used in ishell, or |
|
153 | 136 | ; SSH calls. Set this for events to receive proper url for SSH calls. |
|
154 | 137 | app.base_url = http://rhodecode.local |
|
155 | 138 | |
|
139 | ; Host at which the Service API is running. | |
|
140 | app.service_api.host = http://rhodecode.local:10020 | |
|
141 | ||
|
142 | ; Secret for Service API authentication. | |
|
143 | app.service_api.token = | |
|
144 | ||
|
156 | 145 | ; Unique application ID. Should be a random unique string for security. |
|
157 | 146 | app_instance_uuid = rc-production |
|
158 | 147 | |
|
159 | 148 | ; Cut off limit for large diffs (size in bytes). If overall diff size on |
|
160 | 149 | ; commit, or pull request exceeds this limit this diff will be displayed |
|
161 | 150 | ; partially. E.g 512000 == 512Kb |
|
162 | 151 | cut_off_limit_diff = 512000 |
|
163 | 152 | |
|
164 | 153 | ; Cut off limit for large files inside diffs (size in bytes). Each individual |
|
165 | 154 | ; file inside diff which exceeds this limit will be displayed partially. |
|
166 | 155 | ; E.g 128000 == 128Kb |
|
167 | 156 | cut_off_limit_file = 128000 |
|
168 | 157 | |
|
169 | 158 | ; Use cached version of vcs repositories everywhere. Recommended to be `true` |
|
170 | 159 | vcs_full_cache = true |
|
171 | 160 | |
|
172 | 161 | ; Force https in RhodeCode, fixes https redirects, assumes it's always https. |
|
173 | 162 | ; Normally this is controlled by proper flags sent from http server such as Nginx or Apache |
|
174 | 163 | force_https = false |
|
175 | 164 | |
|
176 | 165 | ; use Strict-Transport-Security headers |
|
177 | 166 | use_htsts = false |
|
178 | 167 | |
|
179 | 168 | ; Set to true if your repos are exposed using the dumb protocol |
|
180 | 169 | git_update_server_info = false |
|
181 | 170 | |
|
182 | 171 | ; RSS/ATOM feed options |
|
183 | 172 | rss_cut_off_limit = 256000 |
|
184 | 173 | rss_items_per_page = 10 |
|
185 | 174 | rss_include_diff = false |
|
186 | 175 | |
|
187 | 176 | ; gist URL alias, used to create nicer urls for gist. This should be an |
|
188 | 177 | ; url that does rewrites to _admin/gists/{gistid}. |
|
189 | 178 | ; example: http://gist.rhodecode.org/{gistid}. Empty means use the internal |
|
190 | 179 | ; RhodeCode url, ie. http[s]://rhodecode.server/_admin/gists/{gistid} |
|
191 | 180 | gist_alias_url = |
|
192 | 181 | |
|
193 | 182 | ; List of views (using glob pattern syntax) that AUTH TOKENS can be |
|
194 | 183 | ; used to access. |
|
195 | 184 | ; Adding ?auth_token=TOKEN_HASH to the url authenticates this request as if it |
|
196 | 185 | ; came from the logged-in user who owns this authentication token. |
|
197 | 186 | ; Additionally the @TOKEN syntax can be used to bind the view to a specific |
|
198 | 187 | ; authentication token. Such a view is only accessible when used together |
|
199 | 188 | ; with this authentication token |
|
200 | 189 | ; list of all views can be found under `/_admin/permissions/auth_token_access` |
|
201 | 190 | ; The list should be "," separated and on a single line. |
|
202 | 191 | ; Most common views to enable: |
|
203 | 192 | |
|
204 | 193 | # RepoCommitsView:repo_commit_download |
|
205 | 194 | # RepoCommitsView:repo_commit_patch |
|
206 | 195 | # RepoCommitsView:repo_commit_raw |
|
207 | 196 | # RepoCommitsView:repo_commit_raw@TOKEN |
|
208 | 197 | # RepoFilesView:repo_files_diff |
|
209 | 198 | # RepoFilesView:repo_archivefile |
|
210 | 199 | # RepoFilesView:repo_file_raw |
|
211 | 200 | # GistView:* |
|
212 | 201 | api_access_controllers_whitelist = |
|
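The whitelist entries above are glob patterns matched against `View:method` names (optionally pinned to one token with `@TOKEN`). A hedged sketch of such matching using stdlib `fnmatch` — the whitelist value is hypothetical and this is not RhodeCode's actual matcher:

```python
from fnmatch import fnmatch

# Hypothetical whitelist, as it might appear in the ini option above.
whitelist = ["GistView:*", "RepoFilesView:repo_file_raw"]

def view_allowed(view_name: str) -> bool:
    """True when any whitelist glob matches the view name."""
    return any(fnmatch(view_name, pattern) for pattern in whitelist)

print(view_allowed("GistView:gist_show"))               # True
print(view_allowed("RepoCommitsView:repo_commit_raw"))  # False
```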
213 | 202 | |
|
214 | 203 | ; Default encoding used to convert from and to unicode |
|
215 | 204 | ; can be also a comma separated list of encoding in case of mixed encodings |
|
216 | 205 | default_encoding = UTF-8 |
|
217 | 206 | |
|
218 | 207 | ; instance-id prefix |
|
219 | 208 | ; a prefix key for this instance used for cache invalidation when running |
|
220 | 209 | ; multiple instances of RhodeCode, make sure it's globally unique for |
|
221 | 210 | ; all running RhodeCode instances. Leave empty if you don't use it |
|
222 | 211 | instance_id = |
|
223 | 212 | |
|
224 | 213 | ; Fallback authentication plugin. Set this to a plugin ID to force the usage |
|
225 | 214 | ; of an authentication plugin even if it is disabled by its settings. |
|
226 | 215 | ; This could be useful if you are unable to log in to the system due to broken |
|
227 | 216 | ; authentication settings. Then you can enable e.g. the internal RhodeCode auth |
|
228 | 217 | ; module to log in again and fix the settings. |
|
229 | 218 | ; Available builtin plugin IDs (hash is part of the ID): |
|
230 | 219 | ; egg:rhodecode-enterprise-ce#rhodecode |
|
231 | 220 | ; egg:rhodecode-enterprise-ce#pam |
|
232 | 221 | ; egg:rhodecode-enterprise-ce#ldap |
|
233 | 222 | ; egg:rhodecode-enterprise-ce#jasig_cas |
|
234 | 223 | ; egg:rhodecode-enterprise-ce#headers |
|
235 | 224 | ; egg:rhodecode-enterprise-ce#crowd |
|
236 | 225 | |
|
237 | 226 | #rhodecode.auth_plugin_fallback = egg:rhodecode-enterprise-ce#rhodecode |
|
238 | 227 | |
|
239 | 228 | ; Flag to control loading of legacy plugins in py:/path format |
|
240 | 229 | auth_plugin.import_legacy_plugins = true |
|
241 | 230 | |
|
242 | 231 | ; alternative return HTTP header for failed authentication. Default HTTP |
|
243 | 232 | ; response is 401 HTTPUnauthorized. Currently HG clients have trouble |
|
244 | 233 | ; handling that, causing a series of failed authentication calls. |
|
245 | 234 | ; Set this variable to 403 to return HTTPForbidden, or any other HTTP code |
|
246 | 235 | ; This will be served instead of default 401 on bad authentication |
|
247 | 236 | auth_ret_code = |
|
248 | 237 | |
|
249 | 238 | ; use special detection method when serving auth_ret_code, instead of serving |
|
250 | 239 | ; ret_code directly, use 401 initially (Which triggers credentials prompt) |
|
251 | 240 | ; and then serve auth_ret_code to clients |
|
252 | 241 | auth_ret_code_detection = false |
|
253 | 242 | |
|
254 | 243 | ; locking return code. When repository is locked return this HTTP code. 2XX |
|
255 | 244 | ; codes don't break the transactions while 4XX codes do |
|
256 | 245 | lock_ret_code = 423 |
|
257 | 246 | |
|
258 | ; allows to change the repository location in settings page | |
|
259 | allow_repo_location_change = true | |
|
247 | ; Filesystem location where repositories should be stored | |
|
248 | repo_store.path = /var/opt/rhodecode_repo_store | |
|
260 | 249 | |
|
261 | 250 | ; allows to setup custom hooks in settings page |
|
262 | 251 | allow_custom_hooks_settings = true |
|
263 | 252 | |
|
264 | 253 | ; Generated license token required for EE edition license. |
|
265 | 254 | ; New generated token value can be found in Admin > settings > license page. |
|
266 | 255 | license_token = |
|
267 | 256 | |
|
268 | 257 | ; This flag hides sensitive information on the license page such as token, and license data |
|
269 | 258 | license.hide_license_info = false |
|
270 | 259 | |
|
271 | 260 | ; supervisor connection uri, for managing supervisor and logs. |
|
272 | 261 | supervisor.uri = |
|
273 | 262 | |
|
274 | 263 | ; supervisord group name/id we only want this RC instance to handle |
|
275 | 264 | supervisor.group_id = dev |
|
276 | 265 | |
|
277 | 266 | ; Display extended labs settings |
|
278 | 267 | labs_settings_active = true |
|
279 | 268 | |
|
280 | 269 | ; Custom exception store path, defaults to TMPDIR |
|
281 | 270 | ; This is used to store exception from RhodeCode in shared directory |
|
282 | 271 | #exception_tracker.store_path = |
|
283 | 272 | |
|
284 | 273 | ; Send email with exception details when it happens |
|
285 | 274 | #exception_tracker.send_email = false |
|
286 | 275 | |
|
287 | 276 | ; Comma separated list of recipients for exception emails, |
|
288 | 277 | ; e.g admin@rhodecode.com,devops@rhodecode.com |
|
289 | 278 | ; Can be left empty, then emails will be sent to ALL super-admins |
|
290 | 279 | #exception_tracker.send_email_recipients = |
|
291 | 280 | |
|
292 | 281 | ; optional prefix to Add to email Subject |
|
293 | 282 | #exception_tracker.email_prefix = [RHODECODE ERROR] |
|
294 | 283 | |
|
295 | 284 | ; File store configuration. This is used to store and serve uploaded files |
|
296 | 285 | file_store.enabled = true |
|
297 | 286 | |
|
298 | 287 | ; Storage backend, available options are: local |
|
299 | 288 | file_store.backend = local |
|
300 | 289 | |
|
301 | ; path to store the uploaded binaries | |
|
302 | file_store.storage_path = | |
|
290 | ; path to store the uploaded binaries and artifacts | |
|
291 | file_store.storage_path = /var/opt/rhodecode_data/file_store | |
|
292 | ||
|
293 | ||
|
294 | ; Redis url to acquire/check generation of archives locks | |
|
295 | archive_cache.locking.url = redis://redis:6379/1 | |
|
296 | ||
|
297 | ; Storage backend, only 'filesystem' and 'objectstore' are available now | |
|
298 | archive_cache.backend.type = filesystem | |
|
299 | ||
|
300 | ; url for s3 compatible storage that allows to upload artifacts | |
|
301 | ; e.g http://minio:9000 | |
|
302 | archive_cache.objectstore.url = http://s3-minio:9000 | |
|
303 | ||
|
304 | ; key for s3 auth | |
|
305 | archive_cache.objectstore.key = key | |
|
306 | ||
|
307 | ; secret for s3 auth | |
|
308 | archive_cache.objectstore.secret = secret | |
|
303 | 309 | |
|
304 | ; Uncomment and set this path to control settings for archive download cache. | |
|
310 | ;region for s3 storage | |
|
311 | archive_cache.objectstore.region = eu-central-1 | |
|
312 | ||
|
313 | ; number of sharded buckets to create to distribute archives across | |
|
314 | ; default is 8 shards | |
|
315 | archive_cache.objectstore.bucket_shards = 8 | |
|
316 | ||
|
317 | ; a top-level bucket to put all other shards in | |
|
318 | ; objects will be stored in rhodecode-archive-cache/shard-N based on the bucket_shards number | |
|
319 | archive_cache.objectstore.bucket = rhodecode-archive-cache | |
|
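With `bucket_shards = 8` and the top-level bucket above, each archive lands in one of `shard-0` through `shard-7`. A sketch of how a key could be distributed across shards; the SHA-1 hash choice is an assumption for illustration, not RhodeCode's documented algorithm:

```python
import hashlib

def shard_bucket(key: str, bucket: str = "rhodecode-archive-cache", shards: int = 8) -> str:
    """Pick a stable shard sub-bucket for an archive key (hash choice is illustrative)."""
    digest = hashlib.sha1(key.encode("utf-8")).hexdigest()
    return f"{bucket}/shard-{int(digest, 16) % shards}"

print(shard_bucket("repo1/archive-abc.zip"))
```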
320 | ||
|
321 | ; if true, this cache will try to retry with retry_attempts=N times waiting retry_backoff time | |
|
322 | archive_cache.objectstore.retry = false | |
|
323 | ||
|
324 | ; number of seconds to wait for next try using retry | |
|
325 | archive_cache.objectstore.retry_backoff = 1 | |
|
326 | ||
|
327 | ; how many times to retry a fetch from this backend | |
|
328 | archive_cache.objectstore.retry_attempts = 10 | |
|
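The `retry`, `retry_backoff`, and `retry_attempts` options above describe a plain retry loop with a fixed wait between attempts. A minimal sketch of that behaviour, not the actual cache client:

```python
import time

def fetch_with_retry(fetch, retry_attempts: int = 10, retry_backoff: float = 1.0):
    """Call fetch(), retrying up to retry_attempts times,
    sleeping retry_backoff seconds between tries."""
    last_error = None
    for _ in range(retry_attempts):
        try:
            return fetch()
        except OSError as exc:  # e.g. a transient network failure
            last_error = exc
            time.sleep(retry_backoff)
    raise last_error

# Example: a fetch that succeeds on the third attempt.
calls = {"n": 0}
def flaky():
    calls["n"] += 1
    if calls["n"] < 3:
        raise OSError("transient")
    return b"archive-bytes"

print(fetch_with_retry(flaky, retry_attempts=5, retry_backoff=0.01))  # b'archive-bytes'
```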
329 | ||
|
330 | ; Default is $cache_dir/archive_cache if not set | |
|
305 | 331 | ; Generated repo archives will be cached at this location |
|
306 | 332 | ; and served from the cache during subsequent requests for the same archive of |
|
307 | 333 | ; the repository. This path is important to be shared across filesystems and with |
|
308 | 334 | ; RhodeCode and vcsserver |
|
309 | ||
|
310 | ; Default is $cache_dir/archive_cache if not set | |
|
311 | archive_cache.store_dir = %(here)s/data/archive_cache | |
|
335 | archive_cache.filesystem.store_dir = /var/opt/rhodecode_data/archive_cache | |
|
312 | 336 | |
|
313 | 337 | ; The limit in GB sets how much data we cache before recycling last used, defaults to 10 gb |
|
314 | archive_cache.cache_size_gb = 1 | |
|
338 | archive_cache.filesystem.cache_size_gb = 1 | |
|
339 | ||
|
340 | ; Eviction policy used to clear out after cache_size_gb limit is reached | |
|
341 | archive_cache.filesystem.eviction_policy = least-recently-stored | |
|
315 | 342 | |
|
316 | 343 | ; By default cache uses sharding technique, this specifies how many shards are there |
|
317 | archive_cache.cache_shards = 10 | |
|
344 | ; default is 8 shards | |
|
345 | archive_cache.filesystem.cache_shards = 8 | |
|
346 | ||
|
347 | ; if true, this cache will try to retry with retry_attempts=N times waiting retry_backoff time | |
|
348 | archive_cache.filesystem.retry = false | |
|
349 | ||
|
350 | ; number of seconds to wait for next try using retry | |
|
351 | archive_cache.filesystem.retry_backoff = 1 | |
|
352 | ||
|
353 | ; how many times to retry a fetch from this backend | |
|
354 | archive_cache.filesystem.retry_attempts = 10 | |
|
355 | ||
|
318 | 356 | |
|
319 | 357 | ; ############# |
|
320 | 358 | ; CELERY CONFIG |
|
321 | 359 | ; ############# |
|
322 | 360 | |
|
323 | 361 | ; manually run celery: /path/to/celery worker --task-events --beat --app rhodecode.lib.celerylib.loader --scheduler rhodecode.lib.celerylib.scheduler.RcScheduler --loglevel DEBUG --ini /path/to/rhodecode.ini |
|
324 | 362 | |
|
325 | use_celery = | |
|
363 | use_celery = true | |
|
326 | 364 | |
|
327 | 365 | ; path to store schedule database |
|
328 | 366 | #celerybeat-schedule.path = |
|
329 | 367 | |
|
330 | 368 | ; connection url to the message broker (default redis) |
|
331 | 369 | celery.broker_url = redis://redis:6379/8 |
|
332 | 370 | |
|
333 | 371 | ; results backend to get results for (default redis) |
|
334 | 372 | celery.result_backend = redis://redis:6379/8 |
|
335 | 373 | |
|
336 | 374 | ; rabbitmq example |
|
337 | 375 | #celery.broker_url = amqp://rabbitmq:qweqwe@localhost:5672/rabbitmqhost |
|
338 | 376 | |
|
339 | 377 | ; maximum tasks to execute before worker restart |
|
340 | 378 | celery.max_tasks_per_child = 20 |
|
341 | 379 | |
|
342 | 380 | ; tasks will never be sent to the queue, but executed locally instead. |
|
343 | 381 | celery.task_always_eager = false |
|
344 | 382 | |
|
345 | 383 | ; ############# |
|
346 | 384 | ; DOGPILE CACHE |
|
347 | 385 | ; ############# |
|
348 | 386 | |
|
349 | 387 | ; Default cache dir for caches. Putting this into a ramdisk can boost performance. |
|
350 | 388 | ; eg. /tmpfs/data_ramdisk, however this directory might require large amount of space |
|
351 | cache_dir = | |
|
389 | cache_dir = /var/opt/rhodecode_data | |
|
352 | 390 | |
|
353 | 391 | ; ********************************************* |
|
354 | 392 | ; `sql_cache_short` cache for heavy SQL queries |
|
355 | 393 | ; Only supported backend is `memory_lru` |
|
356 | 394 | ; ********************************************* |
|
357 | 395 | rc_cache.sql_cache_short.backend = dogpile.cache.rc.memory_lru |
|
358 | 396 | rc_cache.sql_cache_short.expiration_time = 30 |
|
359 | 397 | |
|
360 | 398 | |
|
361 | 399 | ; ***************************************************** |
|
362 | 400 | ; `cache_repo_longterm` cache for repo object instances |
|
363 | 401 | ; Only supported backend is `memory_lru` |
|
364 | 402 | ; ***************************************************** |
|
365 | 403 | rc_cache.cache_repo_longterm.backend = dogpile.cache.rc.memory_lru |
|
366 | 404 | ; by default we use 30 Days, cache is still invalidated on push |
|
367 | 405 | rc_cache.cache_repo_longterm.expiration_time = 2592000 |
|
368 | 406 | ; max items in LRU cache, set to smaller number to save memory, and expire last used caches |
|
369 | 407 | rc_cache.cache_repo_longterm.max_size = 10000 |
|
370 | 408 | |
|
371 | 409 | |
|
372 | 410 | ; ********************************************* |
|
373 | 411 | ; `cache_general` cache for general purpose use |
|
374 | 412 | ; for simplicity use rc.file_namespace backend, |
|
375 | 413 | ; for performance and scale use rc.redis |
|
376 | 414 | ; ********************************************* |
|
377 | 415 | rc_cache.cache_general.backend = dogpile.cache.rc.file_namespace |
|
378 | 416 | rc_cache.cache_general.expiration_time = 43200 |
|
379 | 417 | ; file cache store path. Defaults to `cache_dir =` value or tempdir if both values are not set |
|
380 | 418 | #rc_cache.cache_general.arguments.filename = /tmp/cache_general_db |
|
381 | 419 | |
|
382 | 420 | ; alternative `cache_general` redis backend with distributed lock |
|
383 | 421 | #rc_cache.cache_general.backend = dogpile.cache.rc.redis |
|
384 | 422 | #rc_cache.cache_general.expiration_time = 300 |
|
385 | 423 | |
|
386 | 424 | ; redis_expiration_time needs to be greater than expiration_time |
|
387 | 425 | #rc_cache.cache_general.arguments.redis_expiration_time = 7200 |
|
388 | 426 | |
|
389 | 427 | #rc_cache.cache_general.arguments.host = localhost |
|
390 | 428 | #rc_cache.cache_general.arguments.port = 6379 |
|
391 | 429 | #rc_cache.cache_general.arguments.db = 0 |
|
392 | 430 | #rc_cache.cache_general.arguments.socket_timeout = 30 |
|
393 | 431 | ; more Redis options: https://dogpilecache.sqlalchemy.org/en/latest/api.html#redis-backends |
|
394 | 432 | #rc_cache.cache_general.arguments.distributed_lock = true |
|
395 | 433 | |
|
396 | 434 | ; auto-renew lock to prevent stale locks, slower but safer. Use only if problems happen |
|
397 | 435 | #rc_cache.cache_general.arguments.lock_auto_renewal = true |
|
398 | 436 | |
|
399 | 437 | ; ************************************************* |
|
400 | 438 | ; `cache_perms` cache for permission tree, auth TTL |
|
401 | 439 | ; for simplicity use rc.file_namespace backend, |
|
402 | 440 | ; for performance and scale use rc.redis |
|
403 | 441 | ; ************************************************* |
|
404 | 442 | rc_cache.cache_perms.backend = dogpile.cache.rc.file_namespace |
|
405 | 443 | rc_cache.cache_perms.expiration_time = 3600 |
|
406 | 444 | ; file cache store path. Defaults to `cache_dir =` value or tempdir if both values are not set |
|
407 | 445 | #rc_cache.cache_perms.arguments.filename = /tmp/cache_perms_db |
|
408 | 446 | |
|
409 | 447 | ; alternative `cache_perms` redis backend with distributed lock |
|
410 | 448 | #rc_cache.cache_perms.backend = dogpile.cache.rc.redis |
|
411 | 449 | #rc_cache.cache_perms.expiration_time = 300 |
|
412 | 450 | |
|
413 | 451 | ; redis_expiration_time needs to be greater than expiration_time |
|
414 | 452 | #rc_cache.cache_perms.arguments.redis_expiration_time = 7200 |
|
415 | 453 | |
|
416 | 454 | #rc_cache.cache_perms.arguments.host = localhost |
|
417 | 455 | #rc_cache.cache_perms.arguments.port = 6379 |
|
418 | 456 | #rc_cache.cache_perms.arguments.db = 0 |
|
419 | 457 | #rc_cache.cache_perms.arguments.socket_timeout = 30 |
|
420 | 458 | ; more Redis options: https://dogpilecache.sqlalchemy.org/en/latest/api.html#redis-backends |
|
421 | 459 | #rc_cache.cache_perms.arguments.distributed_lock = true |
|
422 | 460 | |
|
423 | 461 | ; auto-renew lock to prevent stale locks, slower but safer. Use only if problems happen |
|
424 | 462 | #rc_cache.cache_perms.arguments.lock_auto_renewal = true |
|
425 | 463 | |
|
426 | 464 | ; *************************************************** |
|
427 | 465 | ; `cache_repo` cache for file tree, Readme, RSS FEEDS |
|
428 | 466 | ; for simplicity use rc.file_namespace backend, |
|
429 | 467 | ; for performance and scale use rc.redis |
|
430 | 468 | ; *************************************************** |
|
431 | 469 | rc_cache.cache_repo.backend = dogpile.cache.rc.file_namespace |
|
432 | 470 | rc_cache.cache_repo.expiration_time = 2592000 |
|
433 | 471 | ; file cache store path. Defaults to `cache_dir =` value or tempdir if both values are not set |
|
434 | 472 | #rc_cache.cache_repo.arguments.filename = /tmp/cache_repo_db |
|
435 | 473 | |
|
436 | 474 | ; alternative `cache_repo` redis backend with distributed lock |
|
437 | 475 | #rc_cache.cache_repo.backend = dogpile.cache.rc.redis |
|
438 | 476 | #rc_cache.cache_repo.expiration_time = 2592000 |
|
439 | 477 | |
|
440 | 478 | ; redis_expiration_time needs to be greater than expiration_time |
|
441 | 479 | #rc_cache.cache_repo.arguments.redis_expiration_time = 2678400 |
|
442 | 480 | |
|
443 | 481 | #rc_cache.cache_repo.arguments.host = localhost |
|
444 | 482 | #rc_cache.cache_repo.arguments.port = 6379 |
|
445 | 483 | #rc_cache.cache_repo.arguments.db = 1 |
|
446 | 484 | #rc_cache.cache_repo.arguments.socket_timeout = 30 |
|
447 | 485 | ; more Redis options: https://dogpilecache.sqlalchemy.org/en/latest/api.html#redis-backends |
|
448 | 486 | #rc_cache.cache_repo.arguments.distributed_lock = true |
|
449 | 487 | |
|
450 | 488 | ; auto-renew lock to prevent stale locks, slower but safer. Use only if problems happen |
|
451 | 489 | #rc_cache.cache_repo.arguments.lock_auto_renewal = true |
|
452 | 490 | |
|
453 | 491 | ; ############## |
|
454 | 492 | ; BEAKER SESSION |
|
455 | 493 | ; ############## |
|
456 | 494 | |
|
457 | 495 | ; beaker.session.type is the storage type for logged-in users' sessions. Current allowed |
|
458 | 496 | ; types are file, ext:redis, ext:database, ext:memcached |
|
459 | 497 | ; Fastest ones are ext:redis and ext:database, DO NOT use memory type for session |
|
460 | beaker.session.type = file | |
|
461 | beaker.session.data_dir = %(here)s/data/sessions | |
|
498 | #beaker.session.type = file | |
|
499 | #beaker.session.data_dir = %(here)s/data/sessions | |
|
462 | 500 | |
|
463 | 501 | ; Redis based sessions |
|
464 |
|
465 |
|
502 | beaker.session.type = ext:redis | |
|
503 | beaker.session.url = redis://redis:6379/2 | |
|
466 | 504 | |
|
467 | 505 | ; DB based session, fast, and allows easy management over logged in users |
|
468 | 506 | #beaker.session.type = ext:database |
|
469 | 507 | #beaker.session.table_name = db_session |
|
470 | 508 | #beaker.session.sa.url = postgresql://postgres:secret@localhost/rhodecode |
|
471 | 509 | #beaker.session.sa.url = mysql://root:secret@127.0.0.1/rhodecode |
|
472 | 510 | #beaker.session.sa.pool_recycle = 3600 |
|
473 | 511 | #beaker.session.sa.echo = false |
|
474 | 512 | |
|
475 | 513 | beaker.session.key = rhodecode |
|
476 | 514 | beaker.session.secret = develop-rc-uytcxaz |
|
477 | beaker.session.lock_dir = | |
|
515 | beaker.session.lock_dir = /data_ramdisk/lock | |
|
478 | 516 | |
|
479 | 517 | ; Secure encrypted cookie. Requires AES and AES python libraries |
|
480 | 518 | ; you must disable beaker.session.secret to use this |
|
481 | 519 | #beaker.session.encrypt_key = key_for_encryption |
|
482 | 520 | #beaker.session.validate_key = validation_key |
|
483 | 521 | |
|
484 | 522 | ; Sets the session as invalid (also logging out the user) if it has not been |
|
485 | 523 | ; accessed for given amount of time in seconds |
|
486 | 524 | beaker.session.timeout = 2592000 |
|
487 | 525 | beaker.session.httponly = true |
|
488 | 526 | |
|
489 | 527 | ; Path to use for the cookie. Set to prefix if you use prefix middleware |
|
490 | 528 | #beaker.session.cookie_path = /custom_prefix |
|
491 | 529 | |
|
492 | 530 | ; Set https secure cookie |
|
493 | 531 | beaker.session.secure = false |
|
494 | 532 | |
|
495 | 533 | ; default cookie expiration time in seconds, set to `true` to set expire |
|
496 | 534 | ; at browser close |
|
497 | 535 | #beaker.session.cookie_expires = 3600 |
|
498 | 536 | |
|
499 | 537 | ; ############################# |
|
500 | 538 | ; SEARCH INDEXING CONFIGURATION |
|
501 | 539 | ; ############################# |
|
502 | 540 | |
|
503 | 541 | ; Full text search indexer is available in rhodecode-tools under |
|
504 | 542 | ; `rhodecode-tools index` command |
|
505 | 543 | |
|
506 | 544 | ; WHOOSH Backend, doesn't require additional services to run |
|
507 | 545 | ; it works well with a few dozen repos |
|
508 | 546 | search.module = rhodecode.lib.index.whoosh |
|
509 | 547 | search.location = %(here)s/data/index |
|
510 | 548 | |
|
511 | 549 | ; #################### |
|
512 | 550 | ; CHANNELSTREAM CONFIG |
|
513 | 551 | ; #################### |
|
514 | 552 | |
|
515 | 553 | ; channelstream enables persistent connections and live notification |
|
516 | 554 | ; in the system. It's also used by the chat system |
|
517 | 555 | |
|
518 | channelstream.enabled = | |
|
556 | channelstream.enabled = true | |
|
519 | 557 | |
|
520 | 558 | ; server address for channelstream server on the backend |
|
521 | channelstream.server = | |
|
559 | channelstream.server = channelstream:9800 | |
|
522 | 560 | |
|
523 | 561 | ; location of the channelstream server from outside world |
|
524 | 562 | ; use ws:// for http or wss:// for https. This address needs to be handled |
|
525 | 563 | ; by external HTTP server such as Nginx or Apache |
|
526 | 564 | ; see Nginx/Apache configuration examples in our docs |
|
527 | 565 | channelstream.ws_url = ws://rhodecode.yourserver.com/_channelstream |
|
528 | channelstream.secret = | |
|
529 | channelstream.history.location = | |
|
566 | channelstream.secret = ENV_GENERATED | |
|
567 | channelstream.history.location = /var/opt/rhodecode_data/channelstream_history | |
|
530 | 568 | |
|
531 | 569 | ; Internal application path that Javascript uses to connect into. |
|
532 | 570 | ; If you use proxy-prefix the prefix should be added before /_channelstream |
|
533 | 571 | channelstream.proxy_path = /_channelstream |
|
534 | 572 | |
|
535 | 573 | |
|
536 | 574 | ; ############################## |
|
537 | 575 | ; MAIN RHODECODE DATABASE CONFIG |
|
538 | 576 | ; ############################## |
|
539 | 577 | |
|
540 | 578 | #sqlalchemy.db1.url = sqlite:///%(here)s/rhodecode.db?timeout=30 |
|
541 | 579 | #sqlalchemy.db1.url = postgresql://postgres:qweqwe@localhost/rhodecode |
|
542 | 580 | #sqlalchemy.db1.url = mysql://root:qweqwe@localhost/rhodecode?charset=utf8 |
|
543 | 581 | ; pymysql is an alternative driver for MySQL, use in case of problems with default one |
|
544 | 582 | #sqlalchemy.db1.url = mysql+pymysql://root:qweqwe@localhost/rhodecode |
|
545 | 583 | |
|
546 | 584 | sqlalchemy.db1.url = sqlite:///%(here)s/rhodecode.db?timeout=30 |
|
547 | 585 | |
|
548 | 586 | ; see sqlalchemy docs for other advanced settings |
|
549 | 587 | ; print the sql statements to output |
|
550 | 588 | sqlalchemy.db1.echo = false |
|
551 | 589 | |
|
552 | 590 | ; recycle the connections after this amount of seconds |
|
553 | 591 | sqlalchemy.db1.pool_recycle = 3600 |
|
554 | 592 | |
|
555 | 593 | ; the number of connections to keep open inside the connection pool. |
|
556 | 594 | ; 0 indicates no limit |
|
557 | 595 | ; the general calculus with gevent is: |
|
558 | 596 | ; if your system allows 500 concurrent greenlets (max_connections) that all do database access, |
|
559 | 597 | ; then increase pool size + max overflow so that they add up to 500. |
|
560 | 598 | #sqlalchemy.db1.pool_size = 5 |
|
561 | 599 | |
|
562 | 600 | ; The number of connections to allow in connection pool "overflow", that is |
|
563 | 601 | ; connections that can be opened above and beyond the pool_size setting, |
|
564 | 602 | ; which defaults to five. |
|
565 | 603 | #sqlalchemy.db1.max_overflow = 10 |
|
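The sizing rule above — pool size plus max overflow should add up to the expected number of concurrent database-using greenlets — is simple arithmetic; the 100/400 split below is illustrative, not a recommended value:

```python
def pool_settings(max_concurrent: int, pool_size: int) -> dict:
    """Split the target concurrency between the base pool and overflow,
    so that pool_size + max_overflow == max_concurrent."""
    max_overflow = max(max_concurrent - pool_size, 0)
    return {"pool_size": pool_size, "max_overflow": max_overflow}

# 500 greenlets that all touch the database, keeping a base pool of 100:
print(pool_settings(500, pool_size=100))  # {'pool_size': 100, 'max_overflow': 400}
```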
566 | 604 | |
|
567 | 605 | ; Connection check ping, used to detect broken database connections |
|
568 | 606 | ; could be enabled to better handle cases if MySQL has gone away errors |
|
569 | 607 | #sqlalchemy.db1.ping_connection = true |
|
570 | 608 | |
|
571 | 609 | ; ########## |
|
572 | 610 | ; VCS CONFIG |
|
573 | 611 | ; ########## |
|
574 | 612 | vcs.server.enable = true |
|
575 | vcs.server = | |
|
613 | vcs.server = vcsserver:10010 | |
|
576 | 614 | |
|
577 | 615 | ; Web server connectivity protocol, responsible for web based VCS operations |
|
578 | 616 | ; Available protocols are: |
|
579 | 617 | ; `http` - use http-rpc backend (default) |
|
580 | 618 | vcs.server.protocol = http |
|
581 | 619 | |
|
582 | 620 | ; Push/Pull operations protocol, available options are: |
|
583 | 621 | ; `http` - use http-rpc backend (default) |
|
584 | 622 | vcs.scm_app_implementation = http |
|
585 | 623 | |
|
586 | 624 | ; Push/Pull operations hooks protocol, available options are: |
|
587 | 625 | ; `http` - use http-rpc backend (default) |
|
626 | ; `celery` - use celery based hooks | |
|
588 | 627 | vcs.hooks.protocol = http |
|
589 | 628 | |
|
590 | 629 | ; Host on which this instance is listening for hooks. vcsserver will call this host to pull/push hooks so it should be |
|
591 | 630 | ; accessible via network. |
|
592 | 631 | ; Use vcs.hooks.host = "*" to bind to current hostname (for Docker) |
|
593 | 632 | vcs.hooks.host = * |
|
594 | 633 | |
|
595 | 634 | ; Start VCSServer with this instance as a subprocess, useful for development |
|
596 | 635 | vcs.start_server = false |
|
597 | 636 | |
|
598 | 637 | ; List of enabled VCS backends, available options are: |
|
599 | 638 | ; `hg` - mercurial |
|
600 | 639 | ; `git` - git |
|
601 | 640 | ; `svn` - subversion |
|
602 | 641 | vcs.backends = hg, git, svn |
|
603 | 642 | |
|
604 | 643 | ; Wait this number of seconds before killing connection to the vcsserver |
|
605 | 644 | vcs.connection_timeout = 3600 |
|
606 | 645 | |
|
607 | ; Compatibility version when creating SVN repositories. Defaults to newest version when commented out. | |
|
608 | ; Set a numeric version for your current SVN e.g 1.8, or 1.12 | |
|
609 | ; Legacy available options are: pre-1.4-compatible, pre-1.5-compatible, pre-1.6-compatible, pre-1.8-compatible, pre-1.9-compatible | |
|
610 | #vcs.svn.compatible_version = 1.8 | |
|
611 | ||
|
612 | 646 | ; Cache flag to cache vcsserver remote calls locally |
|
613 | 647 | ; It uses cache_region `cache_repo` |
|
614 | 648 | vcs.methods.cache = true |
|
615 | 649 | |
|
616 | 650 | ; #################################################### |
|
617 | 651 | ; Subversion proxy support (mod_dav_svn) |
|
618 | 652 | ; Maps RhodeCode repo groups into SVN paths for Apache |
|
619 | 653 | ; #################################################### |
|
620 | 654 | |
|
655 | ; Compatibility version when creating SVN repositories. Defaults to newest version when commented out. | |
|
656 | ; Set a numeric version for your current SVN e.g 1.8, or 1.12 | |
|
657 | ; Legacy available options are: pre-1.4-compatible, pre-1.5-compatible, pre-1.6-compatible, pre-1.8-compatible, pre-1.9-compatible | |
|
658 | #vcs.svn.compatible_version = 1.8 | |
|
659 | ||
|
660 | ; Redis connection settings for svn integrations logic | |
|
661 | ; This connection string needs to be the same on ce and vcsserver | |
|
662 | vcs.svn.redis_conn = redis://redis:6379/0 | |
|
663 | ||
|
664 | ; Enable SVN proxy of requests over HTTP | |
|
665 | vcs.svn.proxy.enabled = true | |
|
666 | ||
|
667 | ; host to connect to running SVN subsystem | |
|
668 | vcs.svn.proxy.host = http://svn:8090 | |
|
669 | ||
|
621 | 670 | ; Enable or disable the config file generation. |
|
622 | svn.proxy.generate_config = |
|
|
671 | svn.proxy.generate_config = true | |
|
623 | 672 | |
|
624 | 673 | ; Generate config file with `SVNListParentPath` set to `On`. |
|
625 | 674 | svn.proxy.list_parent_path = true |
|
626 | 675 | |
|
627 | 676 | ; Set location and file name of generated config file. |
|
628 | svn.proxy.config_file_path = |
|
|
677 | svn.proxy.config_file_path = /etc/rhodecode/conf/svn/mod_dav_svn.conf | |
|
629 | 678 | |
|
630 | 679 | ; alternative mod_dav config template. This needs to be a valid mako template |
|
631 | 680 | ; Example template can be found in the source code: |
|
632 | 681 | ; rhodecode/apps/svn_support/templates/mod-dav-svn.conf.mako |
|
633 | 682 | #svn.proxy.config_template = ~/.rccontrol/enterprise-1/custom_svn_conf.mako |
|
634 | 683 | |
|
635 | 684 | ; Used as a prefix to the `Location` block in the generated config file. |
|
636 | 685 | ; In most cases it should be set to `/`. |
|
637 | 686 | svn.proxy.location_root = / |
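The generated mod_dav_svn file maps each repo group to an Apache `<Location>` block under `svn.proxy.location_root`. The real template is the mako file referenced above; a rough stdlib sketch of what one rendered block looks like (directive layout is an assumption, not copied from the shipped template):

```python
from string import Template

# Hypothetical minimal stand-in for the mako template shipped with RhodeCode
# (rhodecode/apps/svn_support/templates/mod-dav-svn.conf.mako).
LOCATION_TMPL = Template('''\
<Location "${location_root}${repo_group}">
    DAV svn
    SVNParentPath "${parent_path}"
    SVNListParentPath ${list_parent_path}
</Location>
''')

def render_location(repo_group: str, parent_path: str,
                    location_root: str = '/', list_parent_path: bool = True) -> str:
    """Render one <Location> block for a repo group."""
    return LOCATION_TMPL.substitute(
        location_root=location_root,
        repo_group=repo_group,
        parent_path=parent_path,
        list_parent_path='On' if list_parent_path else 'Off',
    )
```

`svn.proxy.list_parent_path = true` corresponds to emitting `SVNListParentPath On` here.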
|
638 | 687 | |
|
639 | 688 | ; Command to reload the mod dav svn configuration on change. |
|
640 | 689 | ; Example: `/etc/init.d/apache2 reload` or /home/USER/apache_reload.sh |
|
641 | 690 | ; Make sure user who runs RhodeCode process is allowed to reload Apache |
|
642 | 691 | #svn.proxy.reload_cmd = /etc/init.d/apache2 reload |
|
643 | 692 | |
|
644 | 693 | ; If the timeout expires before the reload command finishes, the command will |
|
645 | 694 | ; be killed. Setting it to zero means no timeout. Defaults to 10 seconds. |
|
646 | 695 | #svn.proxy.reload_timeout = 10 |
|
647 | 696 | |
|
648 | 697 | ; #################### |
|
649 | 698 | ; SSH Support Settings |
|
650 | 699 | ; #################### |
|
651 | 700 | |
|
652 | 701 | ; Defines if a custom authorized_keys file should be created and written on |
|
653 | 702 | ; any change of user ssh keys. Setting this to false also disables the possibility |
|
654 | 703 | ; of adding SSH keys by users from web interface. Super admins can still |
|
655 | 704 | ; manage SSH Keys. |
|
656 | ssh.generate_authorized_keyfile = |
|
|
705 | ssh.generate_authorized_keyfile = true | |
|
657 | 706 | |
|
658 | 707 | ; Options for ssh, default is `no-pty,no-port-forwarding,no-X11-forwarding,no-agent-forwarding` |
|
659 | 708 | # ssh.authorized_keys_ssh_opts = |
|
660 | 709 | |
|
661 | 710 | ; Path to the authorized_keys file where the generate entries are placed. |
|
662 | 711 | ; It is possible to have multiple key files specified in `sshd_config` e.g. |
|
663 | 712 | ; AuthorizedKeysFile %h/.ssh/authorized_keys %h/.ssh/authorized_keys_rhodecode |
|
664 | ssh.authorized_keys_file_path = |
|
|
713 | ssh.authorized_keys_file_path = /etc/rhodecode/conf/ssh/authorized_keys_rhodecode | |
|
665 | 714 | |
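The managed authorized_keys file consists of forced-command entries: each public key is pinned to the SSH wrapper so it can only run VCS commands as one mapped user. A sketch of the line shape (the `--user-id` wrapper flag and the exact option string are illustrative assumptions, not taken verbatim from RhodeCode):

```python
# Default restrictions mentioned in the ssh.authorized_keys_ssh_opts comment.
DEFAULT_SSH_OPTS = 'no-pty,no-port-forwarding,no-X11-forwarding,no-agent-forwarding'

def authorized_keys_entry(wrapper_cmd: str, user_id: int, public_key: str) -> str:
    """Build one authorized_keys line that forces every login through the
    SSH wrapper, so the key can only act as one mapped RhodeCode user.
    The --user-id flag is hypothetical, shown only to illustrate the idea."""
    forced_command = f'{wrapper_cmd} --user-id={user_id}'
    return f'command="{forced_command}",{DEFAULT_SSH_OPTS} {public_key}'
```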
|
666 | 715 | ; Command to execute the SSH wrapper. The binary is available in the |
|
667 | 716 | ; RhodeCode installation directory. |
|
668 | ; e.g ~/.rccontrol/community-1/profile/bin/rc-ssh-wrapper | |
|
669 | ssh.wrapper_cmd = ~/.rccontrol/community-1/rc-ssh-wrapper | |
|
717 | ; legacy: /usr/local/bin/rhodecode_bin/bin/rc-ssh-wrapper | |
|
718 | ; new rewrite: /usr/local/bin/rhodecode_bin/bin/rc-ssh-wrapper-v2 | |
|
719 | ssh.wrapper_cmd = /usr/local/bin/rhodecode_bin/bin/rc-ssh-wrapper | |
|
670 | 720 | |
|
671 | 721 | ; Allow shell when executing the ssh-wrapper command |
|
672 | 722 | ssh.wrapper_cmd_allow_shell = false |
|
673 | 723 | |
|
674 | 724 | ; Enables logging, and detailed output sent back to the client during SSH |
|
675 | 725 | ; operations. Useful for debugging, shouldn't be used in production. |
|
676 | 726 | ssh.enable_debug_logging = true |
|
677 | 727 | |
|
678 | 728 | ; Paths to binary executables, by default they are the names, but we can |
|
679 | 729 | ; override them if we want to use a custom one |
|
680 | ssh.executable.hg = |
|
|
681 | ssh.executable.git = |
|
|
682 | ssh.executable.svn = |
|
|
730 | ssh.executable.hg = /usr/local/bin/rhodecode_bin/vcs_bin/hg | |
|
731 | ssh.executable.git = /usr/local/bin/rhodecode_bin/vcs_bin/git | |
|
732 | ssh.executable.svn = /usr/local/bin/rhodecode_bin/vcs_bin/svnserve | |
|
683 | 733 | |
|
684 | 734 | ; Enables SSH key generator web interface. Disabling this still allows users |
|
685 | 735 | ; to add their own keys. |
|
686 | 736 | ssh.enable_ui_key_generator = true |
|
687 | 737 | |
|
688 | ||
|
689 | ; ################# | |
|
690 | ; APPENLIGHT CONFIG | |
|
691 | ; ################# | |
|
692 | ||
|
693 | ; Appenlight is tailored to work with RhodeCode, see | |
|
694 | ; http://appenlight.rhodecode.com for details how to obtain an account | |
|
695 | ||
|
696 | ; Appenlight integration enabled | |
|
697 | #appenlight = false | |
|
698 | ||
|
699 | #appenlight.server_url = https://api.appenlight.com | |
|
700 | #appenlight.api_key = YOUR_API_KEY | |
|
701 | #appenlight.transport_config = https://api.appenlight.com?threaded=1&timeout=5 | |
|
702 | ||
|
703 | ; used for JS client | |
|
704 | #appenlight.api_public_key = YOUR_API_PUBLIC_KEY | |
|
705 | ||
|
706 | ; TWEAK AMOUNT OF INFO SENT HERE | |
|
707 | ||
|
708 | ; enables 404 error logging (default False) | |
|
709 | #appenlight.report_404 = false | |
|
710 | ||
|
711 | ; time in seconds after request is considered being slow (default 1) | |
|
712 | #appenlight.slow_request_time = 1 | |
|
713 | ||
|
714 | ; record slow requests in application | |
|
715 | ; (needs to be enabled for slow datastore recording and time tracking) | |
|
716 | #appenlight.slow_requests = true | |
|
717 | ||
|
718 | ; enable hooking to application loggers | |
|
719 | #appenlight.logging = true | |
|
720 | ||
|
721 | ; minimum log level for log capture | |
|
722 | #appenlight.logging.level = WARNING |
|
723 | ||
|
724 | ; send logs only from erroneous/slow requests | |
|
725 | ; (saves API quota for intensive logging) | |
|
726 | #appenlight.logging_on_error = false | |
|
727 | ||
|
728 | ; list of additional keywords that should be grabbed from environ object | |
|
729 | ; can be string with comma separated list of words in lowercase | |
|
730 | ; (by default client will always send following info: | |
|
731 | ; 'REMOTE_USER', 'REMOTE_ADDR', 'SERVER_NAME', 'CONTENT_TYPE' + all keys that | |
|
732 | ; start with HTTP* this list can be extended with additional keywords here |
|
733 | #appenlight.environ_keys_whitelist = | |
|
734 | ||
|
735 | ; list of keywords that should be blanked from request object | |
|
736 | ; can be string with comma separated list of words in lowercase | |
|
737 | ; (by default client will always blank keys that contain following words | |
|
738 | ; 'password', 'passwd', 'pwd', 'auth_tkt', 'secret', 'csrf' | |
|
739 | ; this list can be extended with additional keywords set here |
|
740 | #appenlight.request_keys_blacklist = | |
|
741 | ||
|
742 | ; list of namespaces that should be ignored when gathering log entries |
|
743 | ; can be string with comma separated list of namespaces | |
|
744 | ; (by default the client ignores own entries: appenlight_client.client) | |
|
745 | #appenlight.log_namespace_blacklist = | |
|
746 | ||
|
747 | 738 | ; Statsd client config, this is used to send metrics to statsd |
|
748 | 739 | ; We recommend setting up statsd_exporter and scraping metrics using Prometheus |
|
749 | 740 | #statsd.enabled = false |
|
750 | 741 | #statsd.statsd_host = 0.0.0.0 |
|
751 | 742 | #statsd.statsd_port = 8125 |
|
752 | 743 | #statsd.statsd_prefix = |
|
753 | 744 | #statsd.statsd_ipv6 = false |
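statsd speaks a tiny plain-text protocol over UDP, which is why only a host, port, and optional name prefix need configuring. A sketch of building such a datagram (metric names here are made up for illustration; `statsd_prefix` corresponds to the `prefix` argument):

```python
import socket

def statsd_packet(metric: str, value: int = 1, kind: str = 'c',
                  prefix: str = '') -> bytes:
    """Build one plain-text statsd datagram: '<name>:<value>|<type>'.
    'c' = counter, 'ms' = timer, 'g' = gauge."""
    name = f'{prefix}.{metric}' if prefix else metric
    return f'{name}:{value}|{kind}'.encode('ascii')

def send_metric(packet: bytes, host: str = '127.0.0.1', port: int = 8125) -> None:
    """Fire-and-forget UDP send; statsd never acknowledges packets."""
    with socket.socket(socket.AF_INET, socket.SOCK_DGRAM) as sock:
        sock.sendto(packet, (host, port))
```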
|
754 | 745 | |
|
755 | 746 | ; configure logging automatically at server startup; set to false |
|
756 | 747 | ; to use the below custom logging config. |
|
757 | 748 | ; RC_LOGGING_FORMATTER |
|
758 | 749 | ; RC_LOGGING_LEVEL |
|
759 | 750 | ; env variables can control the settings for logging in case of autoconfigure |
|
760 | 751 | |
|
761 | 752 | #logging.autoconfigure = true |
|
762 | 753 | |
|
763 | 754 | ; specify your own custom logging config file to configure logging |
|
764 | 755 | #logging.logging_conf_file = /path/to/custom_logging.ini |
|
765 | 756 | |
|
766 | 757 | ; Dummy marker to add new entries after. |
|
767 | 758 | ; Add any custom entries below. Please don't remove this marker. |
|
768 | 759 | custom.conf = 1 |
|
769 | 760 | |
|
770 | 761 | |
|
771 | 762 | ; ##################### |
|
772 | 763 | ; LOGGING CONFIGURATION |
|
773 | 764 | ; ##################### |
|
774 | 765 | |
|
775 | 766 | [loggers] |
|
776 | 767 | keys = root, sqlalchemy, beaker, celery, rhodecode, ssh_wrapper |
|
777 | 768 | |
|
778 | 769 | [handlers] |
|
779 | 770 | keys = console, console_sql |
|
780 | 771 | |
|
781 | 772 | [formatters] |
|
782 | 773 | keys = generic, json, color_formatter, color_formatter_sql |
|
783 | 774 | |
|
784 | 775 | ; ####### |
|
785 | 776 | ; LOGGERS |
|
786 | 777 | ; ####### |
|
787 | 778 | [logger_root] |
|
788 | 779 | level = NOTSET |
|
789 | 780 | handlers = console |
|
790 | 781 | |
|
791 | 782 | [logger_sqlalchemy] |
|
792 | 783 | level = INFO |
|
793 | 784 | handlers = console_sql |
|
794 | 785 | qualname = sqlalchemy.engine |
|
795 | 786 | propagate = 0 |
|
796 | 787 | |
|
797 | 788 | [logger_beaker] |
|
798 | 789 | level = DEBUG |
|
799 | 790 | handlers = |
|
800 | 791 | qualname = beaker.container |
|
801 | 792 | propagate = 1 |
|
802 | 793 | |
|
803 | 794 | [logger_rhodecode] |
|
804 | 795 | level = DEBUG |
|
805 | 796 | handlers = |
|
806 | 797 | qualname = rhodecode |
|
807 | 798 | propagate = 1 |
|
808 | 799 | |
|
809 | 800 | [logger_ssh_wrapper] |
|
810 | 801 | level = DEBUG |
|
811 | 802 | handlers = |
|
812 | 803 | qualname = ssh_wrapper |
|
813 | 804 | propagate = 1 |
|
814 | 805 | |
|
815 | 806 | [logger_celery] |
|
816 | 807 | level = DEBUG |
|
817 | 808 | handlers = |
|
818 | 809 | qualname = celery |
|
819 | 810 | |
|
820 | 811 | |
|
821 | 812 | ; ######## |
|
822 | 813 | ; HANDLERS |
|
823 | 814 | ; ######## |
|
824 | 815 | |
|
825 | 816 | [handler_console] |
|
826 | 817 | class = StreamHandler |
|
827 | 818 | args = (sys.stderr, ) |
|
828 | 819 | level = DEBUG |
|
829 | 820 | ; To enable JSON formatted logs replace 'generic/color_formatter' with 'json' |
|
830 | 821 | ; This allows sending properly formatted logs to grafana loki or elasticsearch |
|
831 | 822 | formatter = color_formatter |
|
832 | 823 | |
|
833 | 824 | [handler_console_sql] |
|
834 | 825 | ; "level = DEBUG" logs SQL queries and results. |
|
835 | 826 | ; "level = INFO" logs SQL queries. |
|
836 | 827 | ; "level = WARN" logs neither. (Recommended for production systems.) |
|
837 | 828 | class = StreamHandler |
|
838 | 829 | args = (sys.stderr, ) |
|
839 | 830 | level = WARN |
|
840 | 831 | ; To enable JSON formatted logs replace 'generic/color_formatter_sql' with 'json' |
|
841 | 832 | ; This allows sending properly formatted logs to grafana loki or elasticsearch |
|
842 | 833 | formatter = color_formatter_sql |
|
843 | 834 | |
|
844 | 835 | ; ########## |
|
845 | 836 | ; FORMATTERS |
|
846 | 837 | ; ########## |
|
847 | 838 | |
|
848 | 839 | [formatter_generic] |
|
849 | 840 | class = rhodecode.lib.logging_formatter.ExceptionAwareFormatter |
|
850 | 841 | format = %(asctime)s.%(msecs)03d [%(process)d] %(levelname)-5.5s [%(name)s] %(message)s |
|
851 | 842 | datefmt = %Y-%m-%d %H:%M:%S |
|
852 | 843 | |
|
853 | 844 | [formatter_color_formatter] |
|
854 | 845 | class = rhodecode.lib.logging_formatter.ColorFormatter |
|
855 | 846 | format = %(asctime)s.%(msecs)03d [%(process)d] %(levelname)-5.5s [%(name)s] %(message)s |
|
856 | 847 | datefmt = %Y-%m-%d %H:%M:%S |
|
857 | 848 | |
|
858 | 849 | [formatter_color_formatter_sql] |
|
859 | 850 | class = rhodecode.lib.logging_formatter.ColorFormatterSql |
|
860 | 851 | format = %(asctime)s.%(msecs)03d [%(process)d] %(levelname)-5.5s [%(name)s] %(message)s |
|
861 | 852 | datefmt = %Y-%m-%d %H:%M:%S |
|
862 | 853 | |
|
863 | 854 | [formatter_json] |
|
864 | 855 | format = %(timestamp)s %(levelname)s %(name)s %(message)s %(req_id)s |
|
865 | 856 | class = rhodecode.lib._vendor.jsonlogger.JsonFormatter |
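The logging sections above follow Python's standard `fileConfig` ini schema, so a trimmed, self-contained subset can be loaded exactly the way the application loads the full file (sketch with one logger, one handler, one formatter):

```python
import logging
import logging.config
import os
import tempfile
import textwrap

# Minimal subset of the [loggers]/[handlers]/[formatters] sections above.
MINIMAL_LOGGING_CONF = textwrap.dedent("""\
    [loggers]
    keys = root, rhodecode

    [handlers]
    keys = console

    [formatters]
    keys = generic

    [logger_root]
    level = NOTSET
    handlers = console

    [logger_rhodecode]
    level = DEBUG
    handlers =
    qualname = rhodecode
    propagate = 1

    [handler_console]
    class = StreamHandler
    args = (sys.stderr, )
    level = DEBUG
    formatter = generic

    [formatter_generic]
    format = %(asctime)s.%(msecs)03d [%(process)d] %(levelname)-5.5s [%(name)s] %(message)s
    datefmt = %Y-%m-%d %H:%M:%S
    """)

with tempfile.NamedTemporaryFile('w', suffix='.ini', delete=False) as fh:
    fh.write(MINIMAL_LOGGING_CONF)
    conf_path = fh.name

# Same entry point the server uses for its logging sections.
logging.config.fileConfig(conf_path, disable_existing_loggers=False)
os.unlink(conf_path)

log = logging.getLogger('rhodecode')
log.debug('logger configured from ini')
```

Note that `args = (sys.stderr, )` is evaluated by `fileConfig` in the `logging` module namespace, which is why `sys` resolves without an explicit import in the ini.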
@@ -1,816 +1,824 b'' | |||
|
1 | 1 | |
|
2 | 2 | ; ######################################### |
|
3 | 3 | ; RHODECODE COMMUNITY EDITION CONFIGURATION |
|
4 | 4 | ; ######################################### |
|
5 | 5 | |
|
6 | 6 | [DEFAULT] |
|
7 | 7 | ; Debug flag sets all loggers to debug, and enables request tracking |
|
8 | 8 | debug = false |
|
9 | 9 | |
|
10 | 10 | ; ######################################################################## |
|
11 | 11 | ; EMAIL CONFIGURATION |
|
12 | 12 | ; These settings will be used by the RhodeCode mailing system |
|
13 | 13 | ; ######################################################################## |
|
14 | 14 | |
|
15 | 15 | ; prefix all emails subjects with given prefix, helps filtering out emails |
|
16 | 16 | #email_prefix = [RhodeCode] |
|
17 | 17 | |
|
18 | 18 | ; email FROM address all mails will be sent |
|
19 | 19 | #app_email_from = rhodecode-noreply@localhost |
|
20 | 20 | |
|
21 | 21 | #smtp_server = mail.server.com |
|
22 | 22 | #smtp_username = |
|
23 | 23 | #smtp_password = |
|
24 | 24 | #smtp_port = |
|
25 | 25 | #smtp_use_tls = false |
|
26 | 26 | #smtp_use_ssl = true |
|
27 | 27 | |
|
28 | 28 | [server:main] |
|
29 | 29 | ; COMMON HOST/IP CONFIG, This applies mostly to develop setup, |
|
30 | 30 | ; Host port for gunicorn are controlled by gunicorn_conf.py |
|
31 | 31 | host = 127.0.0.1 |
|
32 | 32 | port = 10020 |
|
33 | 33 | |
|
34 | 34 | |
|
35 | 35 | ; ########################### |
|
36 | 36 | ; GUNICORN APPLICATION SERVER |
|
37 | 37 | ; ########################### |
|
38 | 38 | |
|
39 | ; run with gunicorn |
|
|
39 | ; run with gunicorn --config gunicorn_conf.py --paste rhodecode.ini | |
|
40 | 40 | |
|
41 | 41 | ; Module to use, this setting shouldn't be changed |
|
42 | 42 | use = egg:gunicorn#main |
|
43 | 43 | |
|
44 | 44 | ; Prefix middleware for RhodeCode. |
|
45 | 45 | ; recommended when using proxy setup. |
|
46 | 46 | ; allows to set RhodeCode under a prefix in server. |
|
47 | 47 | ; eg https://server.com/custom_prefix. Enable `filter-with =` option below as well. |
|
48 | 48 | ; And set your prefix like: `prefix = /custom_prefix` |
|
49 | 49 | ; be sure to also set beaker.session.cookie_path = /custom_prefix if you need |
|
50 | 50 | ; to make your cookies only work on prefix url |
|
51 | 51 | [filter:proxy-prefix] |
|
52 | 52 | use = egg:PasteDeploy#prefix |
|
53 | 53 | prefix = / |
|
54 | 54 | |
|
55 | 55 | [app:main] |
|
56 | 56 | ; The %(here)s variable will be replaced with the absolute path of parent directory |
|
57 | 57 | ; of this file |
|
58 | 58 | ; Each option in the app:main can be override by an environmental variable |
|
59 | 59 | ; |
|
60 | 60 | ;To override an option: |
|
61 | 61 | ; |
|
62 | 62 | ;RC_<KeyName> |
|
63 | 63 | ;Everything should be uppercase, . and - should be replaced by _. |
|
64 | 64 | ;For example, if you have these configuration settings: |
|
65 | 65 | ;rc_cache.repo_object.backend = foo |
|
66 | 66 | ;can be overridden by |
|
67 | 67 | ;export RC_CACHE_REPO_OBJECT_BACKEND=foo |
|
68 | 68 | |
|
69 | 69 | use = egg:rhodecode-enterprise-ce |
|
70 | 70 | |
|
71 | 71 | ; enable proxy prefix middleware, defined above |
|
72 | 72 | #filter-with = proxy-prefix |
|
73 | 73 | |
|
74 | 74 | ; encryption key used to encrypt social plugin tokens, |
|
75 | 75 | ; remote_urls with credentials etc, if not set it defaults to |
|
76 | 76 | ; `beaker.session.secret` |
|
77 | 77 | #rhodecode.encrypted_values.secret = |
|
78 | 78 | |
|
79 | 79 | ; decryption strict mode (enabled by default). It controls if decryption raises |
|
80 | 80 | ; `SignatureVerificationError` in case of wrong key, or damaged encryption data. |
|
81 | 81 | #rhodecode.encrypted_values.strict = false |
|
82 | 82 | |
|
83 | 83 | ; Pick algorithm for encryption. Either fernet (more secure) or aes (default) |
|
84 | 84 | ; fernet is safer, and we strongly recommend switching to it. |
|
85 | 85 | ; Due to backward compatibility aes is used as default. |
|
86 | 86 | #rhodecode.encrypted_values.algorithm = fernet |
|
87 | 87 | |
|
88 | 88 | ; Return gzipped responses from RhodeCode (static files/application) |
|
89 | 89 | gzip_responses = false |
|
90 | 90 | |
|
91 | 91 | ; Auto-generate javascript routes file on startup |
|
92 | 92 | generate_js_files = false |
|
93 | 93 | |
|
94 | 94 | ; System global default language. |
|
95 | 95 | ; All available languages: en (default), be, de, es, fr, it, ja, pl, pt, ru, zh |
|
96 | 96 | lang = en |
|
97 | 97 | |
|
98 | 98 | ; Perform a full repository scan and import on each server start. |
|
99 | 99 | ; Settings this to true could lead to very long startup time. |
|
100 | 100 | startup.import_repos = false |
|
101 | 101 | |
|
102 | 102 | ; URL at which the application is running. This is used for Bootstrapping |
|
103 | 103 | ; requests in context when no web request is available. Used in ishell, or |
|
104 | 104 | ; SSH calls. Set this for events to receive proper url for SSH calls. |
|
105 | 105 | app.base_url = http://rhodecode.local |
|
106 | 106 | |
|
107 | ; Host at which the Service API is running. | |
|
108 | app.service_api.host = http://rhodecode.local:10020 | |
|
109 | ||
|
110 | ; Secret for Service API authentication. | |
|
111 | app.service_api.token = | |
|
112 | ||
|
107 | 113 | ; Unique application ID. Should be a random unique string for security. |
|
108 | 114 | app_instance_uuid = rc-production |
|
109 | 115 | |
|
110 | 116 | ; Cut off limit for large diffs (size in bytes). If overall diff size on |
|
111 | 117 | ; commit, or pull request exceeds this limit this diff will be displayed |
|
112 | 118 | ; partially. E.g 512000 == 512Kb |
|
113 | 119 | cut_off_limit_diff = 512000 |
|
114 | 120 | |
|
115 | 121 | ; Cut off limit for large files inside diffs (size in bytes). Each individual |
|
116 | 122 | ; file inside diff which exceeds this limit will be displayed partially. |
|
117 | 123 | ; E.g 128000 == 128Kb |
|
118 | 124 | cut_off_limit_file = 128000 |
|
119 | 125 | |
|
120 | 126 | ; Use cached version of vcs repositories everywhere. Recommended to be `true` |
|
121 | 127 | vcs_full_cache = true |
|
122 | 128 | |
|
123 | 129 | ; Force https in RhodeCode, fixes https redirects, assumes it's always https. |
|
124 | 130 | ; Normally this is controlled by proper flags sent from http server such as Nginx or Apache |
|
125 | 131 | force_https = false |
|
126 | 132 | |
|
127 | 133 | ; use Strict-Transport-Security headers |
|
128 | 134 | use_htsts = false |
|
129 | 135 | |
|
130 | 136 | ; Set to true if your repos are exposed using the dumb protocol |
|
131 | 137 | git_update_server_info = false |
|
132 | 138 | |
|
133 | 139 | ; RSS/ATOM feed options |
|
134 | 140 | rss_cut_off_limit = 256000 |
|
135 | 141 | rss_items_per_page = 10 |
|
136 | 142 | rss_include_diff = false |
|
137 | 143 | |
|
138 | 144 | ; gist URL alias, used to create nicer urls for gist. This should be an |
|
139 | 145 | ; url that does rewrites to _admin/gists/{gistid}. |
|
140 | 146 | ; example: http://gist.rhodecode.org/{gistid}. Empty means use the internal |
|
141 | 147 | ; RhodeCode url, ie. http[s]://rhodecode.server/_admin/gists/{gistid} |
|
142 | 148 | gist_alias_url = |
|
143 | 149 | |
|
144 | 150 | ; List of views (using glob pattern syntax) that AUTH TOKENS could be |
|
145 | 151 | ; used for access. |
|
146 | 152 | ; Adding ?auth_token=TOKEN_HASH to the url authenticates this request as if it |
|
147 | 153 | ; came from the the logged in user who own this authentication token. |
|
148 | 154 | ; Additionally @TOKEN syntax can be used to bound the view to specific |
|
149 | 155 | ; authentication token. Such view would be only accessible when used together |
|
150 | 156 | ; with this authentication token |
|
151 | 157 | ; list of all views can be found under `/_admin/permissions/auth_token_access` |
|
152 | 158 | ; The list should be "," separated and on a single line. |
|
153 | 159 | ; Most common views to enable: |
|
154 | 160 | |
|
155 | 161 | # RepoCommitsView:repo_commit_download |
|
156 | 162 | # RepoCommitsView:repo_commit_patch |
|
157 | 163 | # RepoCommitsView:repo_commit_raw |
|
158 | 164 | # RepoCommitsView:repo_commit_raw@TOKEN |
|
159 | 165 | # RepoFilesView:repo_files_diff |
|
160 | 166 | # RepoFilesView:repo_archivefile |
|
161 | 167 | # RepoFilesView:repo_file_raw |
|
162 | 168 | # GistView:* |
|
163 | 169 | api_access_controllers_whitelist = |
|
164 | 170 | |
|
165 | 171 | ; Default encoding used to convert from and to unicode |
|
166 | 172 | ; can be also a comma separated list of encoding in case of mixed encodings |
|
167 | 173 | default_encoding = UTF-8 |
|
168 | 174 | |
|
169 | 175 | ; instance-id prefix |
|
170 | 176 | ; a prefix key for this instance used for cache invalidation when running |
|
171 | 177 | ; multiple instances of RhodeCode, make sure it's globally unique for |
|
172 | 178 | ; all running RhodeCode instances. Leave empty if you don't use it |
|
173 | 179 | instance_id = |
|
174 | 180 | |
|
175 | 181 | ; Fallback authentication plugin. Set this to a plugin ID to force the usage |
|
176 | 182 | ; of an authentication plugin also if it is disabled by it's settings. |
|
177 | 183 | ; This could be useful if you are unable to log in to the system due to broken |
|
178 | 184 | ; authentication settings. Then you can enable e.g. the internal RhodeCode auth |
|
179 | 185 | ; module to log in again and fix the settings. |
|
180 | 186 | ; Available builtin plugin IDs (hash is part of the ID): |
|
181 | 187 | ; egg:rhodecode-enterprise-ce#rhodecode |
|
182 | 188 | ; egg:rhodecode-enterprise-ce#pam |
|
183 | 189 | ; egg:rhodecode-enterprise-ce#ldap |
|
184 | 190 | ; egg:rhodecode-enterprise-ce#jasig_cas |
|
185 | 191 | ; egg:rhodecode-enterprise-ce#headers |
|
186 | 192 | ; egg:rhodecode-enterprise-ce#crowd |
|
187 | 193 | |
|
188 | 194 | #rhodecode.auth_plugin_fallback = egg:rhodecode-enterprise-ce#rhodecode |
|
189 | 195 | |
|
190 | 196 | ; Flag to control loading of legacy plugins in py:/path format |
|
191 | 197 | auth_plugin.import_legacy_plugins = true |
|
192 | 198 | |
|
193 | 199 | ; alternative return HTTP header for failed authentication. Default HTTP |
|
194 | 200 | ; response is 401 HTTPUnauthorized. Currently HG clients have troubles with |
|
195 | 201 | ; handling that causing a series of failed authentication calls. |
|
196 | 202 | ; Set this variable to 403 to return HTTPForbidden, or any other HTTP code |
|
197 | 203 | ; This will be served instead of default 401 on bad authentication |
|
198 | 204 | auth_ret_code = |
|
199 | 205 | |
|
200 | 206 | ; use special detection method when serving auth_ret_code, instead of serving |
|
201 | 207 | ; ret_code directly, use 401 initially (Which triggers credentials prompt) |
|
202 | 208 | ; and then serve auth_ret_code to clients |
|
203 | 209 | auth_ret_code_detection = false |
|
204 | 210 | |
|
205 | 211 | ; locking return code. When repository is locked return this HTTP code. 2XX |
|
206 | 212 | ; codes don't break the transactions while 4XX codes do |
|
207 | 213 | lock_ret_code = 423 |
|
208 | 214 | |
|
209 | ; allows to change the repository location in settings page | |
|
210 | allow_repo_location_change = true | |
|
215 | ; Filesystem location where repositories should be stored |
|
216 | repo_store.path = /var/opt/rhodecode_repo_store | |
|
211 | 217 | |
|
212 | 218 | ; allows to setup custom hooks in settings page |
|
213 | 219 | allow_custom_hooks_settings = true |
|
214 | 220 | |
|
215 | 221 | ; Generated license token required for EE edition license. |
|
216 | 222 | ; New generated token value can be found in Admin > settings > license page. |
|
217 | 223 | license_token = |
|
218 | 224 | |
|
219 | 225 | ; This flag hides sensitive information on the license page such as token, and license data |
|
220 | 226 | license.hide_license_info = false |
|
221 | 227 | |
|
222 | 228 | ; supervisor connection uri, for managing supervisor and logs. |
|
223 | 229 | supervisor.uri = |
|
224 | 230 | |
|
225 | 231 | ; supervisord group name/id we only want this RC instance to handle |
|
226 | 232 | supervisor.group_id = prod |
|
227 | 233 | |
|
228 | 234 | ; Display extended labs settings |
|
229 | 235 | labs_settings_active = true |
|
230 | 236 | |
|
231 | 237 | ; Custom exception store path, defaults to TMPDIR |
|
232 | 238 | ; This is used to store exception from RhodeCode in shared directory |
|
233 | 239 | #exception_tracker.store_path = |
|
234 | 240 | |
|
235 | 241 | ; Send email with exception details when it happens |
|
236 | 242 | #exception_tracker.send_email = false |
|
237 | 243 | |
|
238 | 244 | ; Comma separated list of recipients for exception emails, |
|
239 | 245 | ; e.g admin@rhodecode.com,devops@rhodecode.com |
|
240 | 246 | ; Can be left empty, then emails will be sent to ALL super-admins |
|
241 | 247 | #exception_tracker.send_email_recipients = |
|
242 | 248 | |
|
243 | 249 | ; optional prefix to Add to email Subject |
|
244 | 250 | #exception_tracker.email_prefix = [RHODECODE ERROR] |
|
245 | 251 | |
|
246 | 252 | ; File store configuration. This is used to store and serve uploaded files |
|
247 | 253 | file_store.enabled = true |
|
248 | 254 | |
|
249 | 255 | ; Storage backend, available options are: local |
|
250 | 256 | file_store.backend = local |
|
251 | 257 | |
|
252 | ; path to store the uploaded binaries | |
|
253 | file_store.storage_path = |
|
|
258 | ; path to store the uploaded binaries and artifacts | |
|
259 | file_store.storage_path = /var/opt/rhodecode_data/file_store | |
|
260 | ||
|
261 | ||
|
262 | ; Redis url to acquire/check generation of archives locks | |
|
263 | archive_cache.locking.url = redis://redis:6379/1 | |
|
264 | ||
|
265 | ; Storage backend, only 'filesystem' and 'objectstore' are available now | |
|
266 | archive_cache.backend.type = filesystem | |
|
267 | ||
|
268 | ; url for s3 compatible storage that allows to upload artifacts | |
|
269 | ; e.g http://minio:9000 | |
|
270 | archive_cache.objectstore.url = http://s3-minio:9000 | |
|
271 | ||
|
272 | ; key for s3 auth | |
|
273 | archive_cache.objectstore.key = key | |
|
274 | ||
|
275 | ; secret for s3 auth | |
|
276 | archive_cache.objectstore.secret = secret | |
|
254 | 277 | |
|
255 | ; Uncomment and set this path to control settings for archive download cache. | |
|
278 | ;region for s3 storage | |
|
279 | archive_cache.objectstore.region = eu-central-1 | |
|
280 | ||
|
281 | ; number of sharded buckets to create to distribute archives across | |
|
282 | ; default is 8 shards | |
|
283 | archive_cache.objectstore.bucket_shards = 8 | |
|
284 | ||
|
285 | ; a top-level bucket to put all other shards in | |
|
286 | ; objects will be stored in rhodecode-archive-cache/shard-N based on the bucket_shards number | |
|
287 | archive_cache.objectstore.bucket = rhodecode-archive-cache | |
|
288 | ||
|
289 | ; if true, this cache will try to retry with retry_attempts=N times waiting retry_backoff time | |
|
290 | archive_cache.objectstore.retry = false | |
|
291 | ||
|
292 | ; number of seconds to wait for next try using retry | |
|
293 | archive_cache.objectstore.retry_backoff = 1 | |
|
294 | ||
|
295 | ; how many tries do do a retry fetch from this backend | |
|
296 | archive_cache.objectstore.retry_attempts = 10 | |
|
297 | ||
|
298 | ; Default is $cache_dir/archive_cache if not set | |
|
256 | 299 | ; Generated repo archives will be cached at this location |
|
257 | 300 | ; and served from the cache during subsequent requests for the same archive of |
|
258 | 301 | ; the repository. This path is important to be shared across filesystems and with |
|
259 | 302 | ; RhodeCode and vcsserver |
|
260 | ||
|
261 | ; Default is $cache_dir/archive_cache if not set | |
|
262 | archive_cache.store_dir = %(here)s/data/archive_cache | |
|
303 | archive_cache.filesystem.store_dir = /var/opt/rhodecode_data/archive_cache | |
|
263 | 304 | |
|
264 | 305 | ; The limit in GB sets how much data we cache before recycling last used, defaults to 10 gb |
|
265 | archive_cache.cache_size_gb = 40 | |
|
306 | archive_cache.filesystem.cache_size_gb = 40 | |
|
307 | ||
|
308 | ; Eviction policy used to clear out after cache_size_gb limit is reached | |
|
309 | archive_cache.filesystem.eviction_policy = least-recently-stored | |
|
266 | 310 | |
|
267 | 311 | ; By default cache uses sharding technique, this specifies how many shards are there |
|
268 | archive_cache.cache_shards = 4 | |
|
312 | ; default is 8 shards | |
|
313 | archive_cache.filesystem.cache_shards = 8 | |
|
314 | ||
|
315 | ; if true, this cache will try to retry with retry_attempts=N times waiting retry_backoff time | |
|
316 | archive_cache.filesystem.retry = false | |
|
317 | ||
|
318 | ; number of seconds to wait for next try using retry | |
|
319 | archive_cache.filesystem.retry_backoff = 1 | |
|
320 | ||
|
321 | ; how many tries do do a retry fetch from this backend | |
|
322 | archive_cache.filesystem.retry_attempts = 10 | |
|
323 | ||
|
269 | 324 | |
|
270 | 325 | ; ############# |
|
271 | 326 | ; CELERY CONFIG |
|
272 | 327 | ; ############# |
|
273 | 328 | |
|
274 | 329 | ; manually run celery: /path/to/celery worker --task-events --beat --app rhodecode.lib.celerylib.loader --scheduler rhodecode.lib.celerylib.scheduler.RcScheduler --loglevel DEBUG --ini /path/to/rhodecode.ini |
|
275 | 330 | |
|
276 | use_celery = |
|
|
331 | use_celery = true | |
|
277 | 332 | |
|
278 | 333 | ; path to store schedule database |
|
279 | 334 | #celerybeat-schedule.path = |
|
280 | 335 | |
|
281 | 336 | ; connection url to the message broker (default redis) |
|
282 | 337 | celery.broker_url = redis://redis:6379/8 |
|
283 | 338 | |
|
284 | 339 | ; results backend to get results for (default redis) |
|
285 | 340 | celery.result_backend = redis://redis:6379/8 |
|
286 | 341 | |
|
287 | 342 | ; rabbitmq example |
|
288 | 343 | #celery.broker_url = amqp://rabbitmq:qweqwe@localhost:5672/rabbitmqhost |
|
289 | 344 | |
|
290 | 345 | ; maximum tasks to execute before worker restart |
|
291 | 346 | celery.max_tasks_per_child = 20 |
|
292 | 347 | |
|
293 | 348 | ; tasks will never be sent to the queue, but executed locally instead. |
|
294 | 349 | celery.task_always_eager = false |
|
295 | 350 | |
|
296 | 351 | ; ############# |
|
297 | 352 | ; DOGPILE CACHE |
|
298 | 353 | ; ############# |
|
299 | 354 | |
|
300 | 355 | ; Default cache dir for caches. Putting this into a ramdisk can boost performance. |
|
301 | 356 | ; eg. /tmpfs/data_ramdisk, however this directory might require large amount of space |
|
302 | cache_dir = |
|
|
357 | cache_dir = /var/opt/rhodecode_data | |
|
303 | 358 | |
|
304 | 359 | ; ********************************************* |
|
305 | 360 | ; `sql_cache_short` cache for heavy SQL queries |
|
306 | 361 | ; Only supported backend is `memory_lru` |
|
307 | 362 | ; ********************************************* |
|
308 | 363 | rc_cache.sql_cache_short.backend = dogpile.cache.rc.memory_lru |
|
309 | 364 | rc_cache.sql_cache_short.expiration_time = 30 |
|
310 | 365 | |
|
311 | 366 | |
|
312 | 367 | ; ***************************************************** |
|
313 | 368 | ; `cache_repo_longterm` cache for repo object instances |
|
314 | 369 | ; Only supported backend is `memory_lru` |
|
315 | 370 | ; ***************************************************** |
|
316 | 371 | rc_cache.cache_repo_longterm.backend = dogpile.cache.rc.memory_lru |
|
317 | 372 | ; by default we use 30 Days, cache is still invalidated on push |
|
318 | 373 | rc_cache.cache_repo_longterm.expiration_time = 2592000 |
|
319 | 374 | ; max items in LRU cache, set to smaller number to save memory, and expire last used caches |
|
320 | 375 | rc_cache.cache_repo_longterm.max_size = 10000 |
|
321 | 376 | |
|
322 | 377 | |
|
323 | 378 | ; ********************************************* |
|
324 | 379 | ; `cache_general` cache for general purpose use |
|
325 | 380 | ; for simplicity use rc.file_namespace backend, |
|
326 | 381 | ; for performance and scale use rc.redis |
|
327 | 382 | ; ********************************************* |
|
328 | 383 | rc_cache.cache_general.backend = dogpile.cache.rc.file_namespace |
|
329 | 384 | rc_cache.cache_general.expiration_time = 43200 |
|
330 | 385 | ; file cache store path. Defaults to `cache_dir =` value or tempdir if both values are not set |
|
331 | 386 | #rc_cache.cache_general.arguments.filename = /tmp/cache_general_db |
|
332 | 387 | |
|
333 | 388 | ; alternative `cache_general` redis backend with distributed lock |
|
334 | 389 | #rc_cache.cache_general.backend = dogpile.cache.rc.redis |
|
335 | 390 | #rc_cache.cache_general.expiration_time = 300 |
|
336 | 391 | |
|
337 | 392 | ; redis_expiration_time needs to be greater than expiration_time |
|
338 | 393 | #rc_cache.cache_general.arguments.redis_expiration_time = 7200 |
|
339 | 394 | |
|
340 | 395 | #rc_cache.cache_general.arguments.host = localhost |
|
341 | 396 | #rc_cache.cache_general.arguments.port = 6379 |
|
342 | 397 | #rc_cache.cache_general.arguments.db = 0 |
|
343 | 398 | #rc_cache.cache_general.arguments.socket_timeout = 30 |
|
344 | 399 | ; more Redis options: https://dogpilecache.sqlalchemy.org/en/latest/api.html#redis-backends |
|
345 | 400 | #rc_cache.cache_general.arguments.distributed_lock = true |
|
346 | 401 | |
|
347 | 402 | ; auto-renew lock to prevent stale locks, slower but safer. Use only if problems happen |
|
348 | 403 | #rc_cache.cache_general.arguments.lock_auto_renewal = true |
|
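As a sketch, moving `cache_general` to the Redis backend means commenting out the file_namespace lines and uncommenting the alternative block above; the endpoint below is an assumption to adapt:

```ini
; example: cache_general on the Redis backend (endpoint is illustrative)
rc_cache.cache_general.backend = dogpile.cache.rc.redis
rc_cache.cache_general.expiration_time = 300
; must stay greater than expiration_time
rc_cache.cache_general.arguments.redis_expiration_time = 7200
rc_cache.cache_general.arguments.host = localhost
rc_cache.cache_general.arguments.port = 6379
rc_cache.cache_general.arguments.db = 0
rc_cache.cache_general.arguments.socket_timeout = 30
; distributed lock is recommended when several workers share the cache
rc_cache.cache_general.arguments.distributed_lock = true
```

The same pattern applies to the `cache_perms` and `cache_repo` regions below.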
349 | 404 | |
|
350 | 405 | ; ************************************************* |
|
351 | 406 | ; `cache_perms` cache for permission tree, auth TTL |
|
352 | 407 | ; for simplicity use rc.file_namespace backend, |
|
353 | 408 | ; for performance and scale use rc.redis |
|
354 | 409 | ; ************************************************* |
|
355 | 410 | rc_cache.cache_perms.backend = dogpile.cache.rc.file_namespace |
|
356 | 411 | rc_cache.cache_perms.expiration_time = 3600 |
|
357 | 412 | ; file cache store path. Defaults to `cache_dir =` value or tempdir if both values are not set |
|
358 | 413 | #rc_cache.cache_perms.arguments.filename = /tmp/cache_perms_db |
|
359 | 414 | |
|
360 | 415 | ; alternative `cache_perms` redis backend with distributed lock |
|
361 | 416 | #rc_cache.cache_perms.backend = dogpile.cache.rc.redis |
|
362 | 417 | #rc_cache.cache_perms.expiration_time = 300 |
|
363 | 418 | |
|
364 | 419 | ; redis_expiration_time needs to be greater than expiration_time |
|
365 | 420 | #rc_cache.cache_perms.arguments.redis_expiration_time = 7200 |
|
366 | 421 | |
|
367 | 422 | #rc_cache.cache_perms.arguments.host = localhost |
|
368 | 423 | #rc_cache.cache_perms.arguments.port = 6379 |
|
369 | 424 | #rc_cache.cache_perms.arguments.db = 0 |
|
370 | 425 | #rc_cache.cache_perms.arguments.socket_timeout = 30 |
|
371 | 426 | ; more Redis options: https://dogpilecache.sqlalchemy.org/en/latest/api.html#redis-backends |
|
372 | 427 | #rc_cache.cache_perms.arguments.distributed_lock = true |
|
373 | 428 | |
|
374 | 429 | ; auto-renew lock to prevent stale locks, slower but safer. Use only if problems happen |
|
375 | 430 | #rc_cache.cache_perms.arguments.lock_auto_renewal = true |
|
376 | 431 | |
|
377 | 432 | ; *************************************************** |
|
378 | 433 | ; `cache_repo` cache for file tree, Readme, RSS FEEDS |
|
379 | 434 | ; for simplicity use rc.file_namespace backend, |
|
380 | 435 | ; for performance and scale use rc.redis |
|
381 | 436 | ; *************************************************** |
|
382 | 437 | rc_cache.cache_repo.backend = dogpile.cache.rc.file_namespace |
|
383 | 438 | rc_cache.cache_repo.expiration_time = 2592000 |
|
384 | 439 | ; file cache store path. Defaults to `cache_dir =` value or tempdir if both values are not set |
|
385 | 440 | #rc_cache.cache_repo.arguments.filename = /tmp/cache_repo_db |
|
386 | 441 | |
|
387 | 442 | ; alternative `cache_repo` redis backend with distributed lock |
|
388 | 443 | #rc_cache.cache_repo.backend = dogpile.cache.rc.redis |
|
389 | 444 | #rc_cache.cache_repo.expiration_time = 2592000 |
|
390 | 445 | |
|
391 | 446 | ; redis_expiration_time needs to be greater than expiration_time |
|
392 | 447 | #rc_cache.cache_repo.arguments.redis_expiration_time = 2678400 |
|
393 | 448 | |
|
394 | 449 | #rc_cache.cache_repo.arguments.host = localhost |
|
395 | 450 | #rc_cache.cache_repo.arguments.port = 6379 |
|
396 | 451 | #rc_cache.cache_repo.arguments.db = 1 |
|
397 | 452 | #rc_cache.cache_repo.arguments.socket_timeout = 30 |
|
398 | 453 | ; more Redis options: https://dogpilecache.sqlalchemy.org/en/latest/api.html#redis-backends |
|
399 | 454 | #rc_cache.cache_repo.arguments.distributed_lock = true |
|
400 | 455 | |
|
401 | 456 | ; auto-renew lock to prevent stale locks, slower but safer. Use only if problems happen |
|
402 | 457 | #rc_cache.cache_repo.arguments.lock_auto_renewal = true |
|
403 | 458 | |
|
404 | 459 | ; ############## |
|
405 | 460 | ; BEAKER SESSION |
|
406 | 461 | ; ############## |
|
407 | 462 | |
|
408 | 463 | ; beaker.session.type sets the storage type for logged-in user sessions. Currently allowed |
|
409 | 464 | ; types are file, ext:redis, ext:database, ext:memcached |
|
410 | 465 | ; Fastest ones are ext:redis and ext:database, DO NOT use memory type for session |
|
411 | beaker.session.type = file | |
|
412 | beaker.session.data_dir = %(here)s/data/sessions | |
|
466 | #beaker.session.type = file | |
|
467 | #beaker.session.data_dir = %(here)s/data/sessions | |
|
413 | 468 | |
|
414 | 469 | ; Redis based sessions |
|
415 |
|
|
|
416 |
|
|
|
470 | beaker.session.type = ext:redis | |
|
471 | beaker.session.url = redis://redis:6379/2 | |
|
417 | 472 | |
|
418 | 473 | ; DB based session, fast, and allows easy management over logged in users |
|
419 | 474 | #beaker.session.type = ext:database |
|
420 | 475 | #beaker.session.table_name = db_session |
|
421 | 476 | #beaker.session.sa.url = postgresql://postgres:secret@localhost/rhodecode |
|
422 | 477 | #beaker.session.sa.url = mysql://root:secret@127.0.0.1/rhodecode |
|
423 | 478 | #beaker.session.sa.pool_recycle = 3600 |
|
424 | 479 | #beaker.session.sa.echo = false |
|
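For reference, a database-backed session setup uncomments the block above along these lines; the credentials are placeholders:

```ini
; example: PostgreSQL-backed sessions (credentials are placeholders)
beaker.session.type = ext:database
beaker.session.table_name = db_session
beaker.session.sa.url = postgresql://postgres:secret@localhost/rhodecode
; recycle DB connections older than an hour
beaker.session.sa.pool_recycle = 3600
beaker.session.sa.echo = false
```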
425 | 480 | |
|
426 | 481 | beaker.session.key = rhodecode |
|
427 | 482 | beaker.session.secret = production-rc-uytcxaz |
|
428 | beaker.session.lock_dir = |
|
|
483 | beaker.session.lock_dir = /data_ramdisk/lock | |
|
429 | 484 | |
|
430 | 485 | ; Secure encrypted cookie. Requires AES and AES python libraries |
|
431 | 486 | ; you must disable beaker.session.secret to use this |
|
432 | 487 | #beaker.session.encrypt_key = key_for_encryption |
|
433 | 488 | #beaker.session.validate_key = validation_key |
|
434 | 489 | |
|
435 | 490 | ; Sets session as invalid (also logging out the user) if it has not been |
|
436 | 491 | ; accessed for given amount of time in seconds |
|
437 | 492 | beaker.session.timeout = 2592000 |
|
438 | 493 | beaker.session.httponly = true |
|
439 | 494 | |
|
440 | 495 | ; Path to use for the cookie. Set to prefix if you use prefix middleware |
|
441 | 496 | #beaker.session.cookie_path = /custom_prefix |
|
442 | 497 | |
|
443 | 498 | ; Set https secure cookie |
|
444 | 499 | beaker.session.secure = false |
|
445 | 500 | |
|
446 | 501 | ; default cookie expiration time in seconds, set to `true` to set expire |
|
447 | 502 | ; at browser close |
|
448 | 503 | #beaker.session.cookie_expires = 3600 |
|
449 | 504 | |
|
450 | 505 | ; ############################# |
|
451 | 506 | ; SEARCH INDEXING CONFIGURATION |
|
452 | 507 | ; ############################# |
|
453 | 508 | |
|
454 | 509 | ; Full text search indexer is available in rhodecode-tools under |
|
455 | 510 | ; `rhodecode-tools index` command |
|
456 | 511 | |
|
457 | 512 | ; WHOOSH Backend, doesn't require additional services to run |
|
458 | 513 | ; it works well with a few dozen repos |
|
459 | 514 | search.module = rhodecode.lib.index.whoosh |
|
460 | 515 | search.location = %(here)s/data/index |
|
461 | 516 | |
|
462 | 517 | ; #################### |
|
463 | 518 | ; CHANNELSTREAM CONFIG |
|
464 | 519 | ; #################### |
|
465 | 520 | |
|
466 | 521 | ; channelstream enables persistent connections and live notifications |
|
467 | 522 | ; in the system. It's also used by the chat system |
|
468 | 523 | |
|
469 | channelstream.enabled = |
|
|
524 | channelstream.enabled = true | |
|
470 | 525 | |
|
471 | 526 | ; server address for channelstream server on the backend |
|
472 | channelstream.server = |
|
|
527 | channelstream.server = channelstream:9800 | |
|
473 | 528 | |
|
474 | 529 | ; location of the channelstream server from outside world |
|
475 | 530 | ; use ws:// for http or wss:// for https. This address needs to be handled |
|
476 | 531 | ; by external HTTP server such as Nginx or Apache |
|
477 | 532 | ; see Nginx/Apache configuration examples in our docs |
|
478 | 533 | channelstream.ws_url = ws://rhodecode.yourserver.com/_channelstream |
|
479 | channelstream.secret = |
|
|
480 | channelstream.history.location = |
|
|
534 | channelstream.secret = ENV_GENERATED | |
|
535 | channelstream.history.location = /var/opt/rhodecode_data/channelstream_history | |
|
481 | 536 | |
|
482 | 537 | ; Internal application path that Javascript uses to connect into. |
|
483 | 538 | ; If you use proxy-prefix the prefix should be added before /_channelstream |
|
484 | 539 | channelstream.proxy_path = /_channelstream |
|
485 | 540 | |
|
486 | 541 | |
|
487 | 542 | ; ############################## |
|
488 | 543 | ; MAIN RHODECODE DATABASE CONFIG |
|
489 | 544 | ; ############################## |
|
490 | 545 | |
|
491 | 546 | #sqlalchemy.db1.url = sqlite:///%(here)s/rhodecode.db?timeout=30 |
|
492 | 547 | #sqlalchemy.db1.url = postgresql://postgres:qweqwe@localhost/rhodecode |
|
493 | 548 | #sqlalchemy.db1.url = mysql://root:qweqwe@localhost/rhodecode?charset=utf8 |
|
494 | 549 | ; pymysql is an alternative driver for MySQL, use in case of problems with default one |
|
495 | 550 | #sqlalchemy.db1.url = mysql+pymysql://root:qweqwe@localhost/rhodecode |
|
496 | 551 | |
|
497 | 552 | sqlalchemy.db1.url = postgresql://postgres:qweqwe@localhost/rhodecode |
|
498 | 553 | |
|
499 | 554 | ; see sqlalchemy docs for other advanced settings |
|
500 | 555 | ; print the sql statements to output |
|
501 | 556 | sqlalchemy.db1.echo = false |
|
502 | 557 | |
|
503 | 558 | ; recycle the connections after this amount of seconds |
|
504 | 559 | sqlalchemy.db1.pool_recycle = 3600 |
|
505 | 560 | |
|
506 | 561 | ; the number of connections to keep open inside the connection pool. |
|
507 | 562 | ; 0 indicates no limit |
|
508 | 563 | ; the general calculus with gevent is: |
|
509 | 564 | ; if your system allows 500 concurrent greenlets (max_connections) that all do database access, |
|
510 | 565 | ; then increase pool size + max overflow so that they add up to 500. |
|
511 | 566 | #sqlalchemy.db1.pool_size = 5 |
|
512 | 567 | |
|
513 | 568 | ; The number of connections to allow in connection pool "overflow", that is |
|
514 | 569 | ; connections that can be opened above and beyond the pool_size setting, |
|
515 | 570 | ; which defaults to five. |
|
516 | 571 | #sqlalchemy.db1.max_overflow = 10 |
|
517 | 572 | |
|
518 | 573 | ; Connection check ping, used to detect broken database connections |
|
519 | 574 | ; could be enabled to better handle cases if MySQL has gone away errors |
|
520 | 575 | #sqlalchemy.db1.ping_connection = true |
|
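Applying the gevent sizing rule above: if your system allows 500 concurrent greenlets that all touch the database, choose pool_size and max_overflow so they sum to 500. The split below is illustrative:

```ini
; example sizing: pool_size + max_overflow = 500 concurrent connections
sqlalchemy.db1.pool_size = 100
sqlalchemy.db1.max_overflow = 400
; ping connections before use to catch "MySQL has gone away" style errors
sqlalchemy.db1.ping_connection = true
```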
521 | 576 | |
|
522 | 577 | ; ########## |
|
523 | 578 | ; VCS CONFIG |
|
524 | 579 | ; ########## |
|
525 | 580 | vcs.server.enable = true |
|
526 | vcs.server = |
|
|
581 | vcs.server = vcsserver:10010 | |
|
527 | 582 | |
|
528 | 583 | ; Web server connectivity protocol, responsible for web based VCS operations |
|
529 | 584 | ; Available protocols are: |
|
530 | 585 | ; `http` - use http-rpc backend (default) |
|
531 | 586 | vcs.server.protocol = http |
|
532 | 587 | |
|
533 | 588 | ; Push/Pull operations protocol, available options are: |
|
534 | 589 | ; `http` - use http-rpc backend (default) |
|
535 | 590 | vcs.scm_app_implementation = http |
|
536 | 591 | |
|
537 | 592 | ; Push/Pull operations hooks protocol, available options are: |
|
538 | 593 | ; `http` - use http-rpc backend (default) |
|
594 | ; `celery` - use celery based hooks | |
|
539 | 595 | vcs.hooks.protocol = http |
|
540 | 596 | |
|
541 | 597 | ; Host on which this instance is listening for hooks. vcsserver will call this host to pull/push hooks so it should be |
|
542 | 598 | ; accessible via network. |
|
543 | 599 | ; Use vcs.hooks.host = "*" to bind to current hostname (for Docker) |
|
544 | 600 | vcs.hooks.host = * |
|
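A sketch of the celery-based hooks variant mentioned above; it assumes the Celery worker configured earlier in this file is running (and, depending on your version, that Celery is also enabled elsewhere in the file):

```ini
; example: route push/pull hooks through Celery instead of http-rpc
vcs.hooks.protocol = celery
; bind to the current hostname, as recommended for Docker setups
vcs.hooks.host = *
```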
545 | 601 | |
|
546 | 602 | ; Start VCSServer with this instance as a subprocess, useful for development |
|
547 | 603 | vcs.start_server = false |
|
548 | 604 | |
|
549 | 605 | ; List of enabled VCS backends, available options are: |
|
550 | 606 | ; `hg` - mercurial |
|
551 | 607 | ; `git` - git |
|
552 | 608 | ; `svn` - subversion |
|
553 | 609 | vcs.backends = hg, git, svn |
|
554 | 610 | |
|
555 | 611 | ; Wait this number of seconds before killing connection to the vcsserver |
|
556 | 612 | vcs.connection_timeout = 3600 |
|
557 | 613 | |
|
558 | ; Compatibility version when creating SVN repositories. Defaults to newest version when commented out. | |
|
559 | ; Set a numeric version for your current SVN e.g 1.8, or 1.12 | |
|
560 | ; Legacy available options are: pre-1.4-compatible, pre-1.5-compatible, pre-1.6-compatible, pre-1.8-compatible, pre-1.9-compatible | |
|
561 | #vcs.svn.compatible_version = 1.8 | |
|
562 | ||
|
563 | 614 | ; Cache flag to cache vcsserver remote calls locally |
|
564 | 615 | ; It uses cache_region `cache_repo` |
|
565 | 616 | vcs.methods.cache = true |
|
566 | 617 | |
|
567 | 618 | ; #################################################### |
|
568 | 619 | ; Subversion proxy support (mod_dav_svn) |
|
569 | 620 | ; Maps RhodeCode repo groups into SVN paths for Apache |
|
570 | 621 | ; #################################################### |
|
571 | 622 | |
|
623 | ; Compatibility version when creating SVN repositories. Defaults to newest version when commented out. | |
|
624 | ; Set a numeric version for your current SVN e.g. 1.8 or 1.12 | |
|
625 | ; Legacy available options are: pre-1.4-compatible, pre-1.5-compatible, pre-1.6-compatible, pre-1.8-compatible, pre-1.9-compatible | |
|
626 | #vcs.svn.compatible_version = 1.8 | |
|
627 | ||
|
628 | ; Redis connection settings for svn integrations logic | |
|
629 | ; This connection string needs to be the same on CE and VCSServer | |
|
630 | vcs.svn.redis_conn = redis://redis:6379/0 | |
|
631 | ||
|
632 | ; Enable SVN proxy of requests over HTTP | |
|
633 | vcs.svn.proxy.enabled = true | |
|
634 | ||
|
635 | ; host to connect to running SVN subsystem | |
|
636 | vcs.svn.proxy.host = http://svn:8090 | |
|
637 | ||
|
572 | 638 | ; Enable or disable the config file generation. |
|
573 | svn.proxy.generate_config = |
|
|
639 | svn.proxy.generate_config = true | |
|
574 | 640 | |
|
575 | 641 | ; Generate config file with `SVNListParentPath` set to `On`. |
|
576 | 642 | svn.proxy.list_parent_path = true |
|
577 | 643 | |
|
578 | 644 | ; Set location and file name of generated config file. |
|
579 | svn.proxy.config_file_path = |
|
|
645 | svn.proxy.config_file_path = /etc/rhodecode/conf/svn/mod_dav_svn.conf | |
|
580 | 646 | |
|
581 | 647 | ; alternative mod_dav config template. This needs to be a valid mako template |
|
582 | 648 | ; Example template can be found in the source code: |
|
583 | 649 | ; rhodecode/apps/svn_support/templates/mod-dav-svn.conf.mako |
|
584 | 650 | #svn.proxy.config_template = ~/.rccontrol/enterprise-1/custom_svn_conf.mako |
|
585 | 651 | |
|
586 | 652 | ; Used as a prefix to the `Location` block in the generated config file. |
|
587 | 653 | ; In most cases it should be set to `/`. |
|
588 | 654 | svn.proxy.location_root = / |
|
589 | 655 | |
|
590 | 656 | ; Command to reload the mod dav svn configuration on change. |
|
591 | 657 | ; Example: `/etc/init.d/apache2 reload` or /home/USER/apache_reload.sh |
|
592 | 658 | ; Make sure the user who runs the RhodeCode process is allowed to reload Apache |
|
593 | 659 | #svn.proxy.reload_cmd = /etc/init.d/apache2 reload |
|
594 | 660 | |
|
595 | 661 | ; If the timeout expires before the reload command finishes, the command will |
|
596 | 662 | ; be killed. Setting it to zero means no timeout. Defaults to 10 seconds. |
|
597 | 663 | #svn.proxy.reload_timeout = 10 |
|
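Putting the SVN proxy settings together, an enabled configuration might look like this; paths and the reload command are environment-specific examples:

```ini
; example: complete mod_dav_svn proxy block (paths/commands are illustrative)
svn.proxy.generate_config = true
svn.proxy.list_parent_path = true
svn.proxy.config_file_path = /etc/rhodecode/conf/svn/mod_dav_svn.conf
svn.proxy.location_root = /
svn.proxy.reload_cmd = /etc/init.d/apache2 reload
svn.proxy.reload_timeout = 10
```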
598 | 664 | |
|
599 | 665 | ; #################### |
|
600 | 666 | ; SSH Support Settings |
|
601 | 667 | ; #################### |
|
602 | 668 | |
|
603 | 669 | ; Defines if a custom authorized_keys file should be created and written on |
|
604 | 670 | ; any change of user ssh keys. Setting this to false also disables the possibility |
|
605 | 671 | ; of adding SSH keys by users from web interface. Super admins can still |
|
606 | 672 | ; manage SSH Keys. |
|
607 | ssh.generate_authorized_keyfile = |
|
|
673 | ssh.generate_authorized_keyfile = true | |
|
608 | 674 | |
|
609 | 675 | ; Options for ssh, default is `no-pty,no-port-forwarding,no-X11-forwarding,no-agent-forwarding` |
|
610 | 676 | # ssh.authorized_keys_ssh_opts = |
|
611 | 677 | |
|
612 | 678 | ; Path to the authorized_keys file where the generate entries are placed. |
|
613 | 679 | ; It is possible to have multiple key files specified in `sshd_config` e.g. |
|
614 | 680 | ; AuthorizedKeysFile %h/.ssh/authorized_keys %h/.ssh/authorized_keys_rhodecode |
|
615 | ssh.authorized_keys_file_path = |
|
|
681 | ssh.authorized_keys_file_path = /etc/rhodecode/conf/ssh/authorized_keys_rhodecode | |
|
616 | 682 | |
|
617 | 683 | ; Command to execute the SSH wrapper. The binary is available in the |
|
618 | 684 | ; RhodeCode installation directory. |
|
619 | ; e.g ~/.rccontrol/community-1/profile/bin/rc-ssh-wrapper | |
|
620 | ssh.wrapper_cmd = ~/.rccontrol/community-1/rc-ssh-wrapper | |
|
685 | ; legacy: /usr/local/bin/rhodecode_bin/bin/rc-ssh-wrapper | |
|
686 | ; new rewrite: /usr/local/bin/rhodecode_bin/bin/rc-ssh-wrapper-v2 | |
|
687 | ssh.wrapper_cmd = /usr/local/bin/rhodecode_bin/bin/rc-ssh-wrapper | |
|
621 | 688 | |
|
622 | 689 | ; Allow shell when executing the ssh-wrapper command |
|
623 | 690 | ssh.wrapper_cmd_allow_shell = false |
|
624 | 691 | |
|
625 | 692 | ; Enables logging, and detailed output sent back to the client during SSH |
|
626 | 693 | ; operations. Useful for debugging, shouldn't be used in production. |
|
627 | 694 | ssh.enable_debug_logging = false |
|
628 | 695 | |
|
629 | 696 | ; Paths to binary executable, by default they are the names, but we can |
|
630 | 697 | ; override them if we want to use a custom one |
|
631 | ssh.executable.hg = |
|
|
632 | ssh.executable.git = |
|
|
633 | ssh.executable.svn = |
|
|
698 | ssh.executable.hg = /usr/local/bin/rhodecode_bin/vcs_bin/hg | |
|
699 | ssh.executable.git = /usr/local/bin/rhodecode_bin/vcs_bin/git | |
|
700 | ssh.executable.svn = /usr/local/bin/rhodecode_bin/vcs_bin/svnserve | |
|
634 | 701 | |
|
635 | 702 | ; Enables SSH key generator web interface. Disabling this still allows users |
|
636 | 703 | ; to add their own keys. |
|
637 | 704 | ssh.enable_ui_key_generator = true |
|
638 | 705 | |
|
639 | ||
|
640 | ; ################# | |
|
641 | ; APPENLIGHT CONFIG | |
|
642 | ; ################# | |
|
643 | ||
|
644 | ; Appenlight is tailored to work with RhodeCode, see | |
|
645 | ; http://appenlight.rhodecode.com for details how to obtain an account | |
|
646 | ||
|
647 | ; Appenlight integration enabled | |
|
648 | #appenlight = false | |
|
649 | ||
|
650 | #appenlight.server_url = https://api.appenlight.com | |
|
651 | #appenlight.api_key = YOUR_API_KEY | |
|
652 | #appenlight.transport_config = https://api.appenlight.com?threaded=1&timeout=5 | |
|
653 | ||
|
654 | ; used for JS client | |
|
655 | #appenlight.api_public_key = YOUR_API_PUBLIC_KEY | |
|
656 | ||
|
657 | ; TWEAK AMOUNT OF INFO SENT HERE | |
|
658 | ||
|
659 | ; enables 404 error logging (default False) | |
|
660 | #appenlight.report_404 = false | |
|
661 | ||
|
662 | ; time in seconds after request is considered being slow (default 1) | |
|
663 | #appenlight.slow_request_time = 1 | |
|
664 | ||
|
665 | ; record slow requests in application | |
|
666 | ; (needs to be enabled for slow datastore recording and time tracking) | |
|
667 | #appenlight.slow_requests = true | |
|
668 | ||
|
669 | ; enable hooking to application loggers | |
|
670 | #appenlight.logging = true | |
|
671 | ||
|
672 | ; minimum log level for log capture | |
|
673 | #ppenlight.logging.level = WARNING | |
|
674 | ||
|
675 | ; send logs only from erroneous/slow requests | |
|
676 | ; (saves API quota for intensive logging) | |
|
677 | #appenlight.logging_on_error = false | |
|
678 | ||
|
679 | ; list of additional keywords that should be grabbed from environ object | |
|
680 | ; can be string with comma separated list of words in lowercase | |
|
681 | ; (by default client will always send following info: | |
|
682 | ; 'REMOTE_USER', 'REMOTE_ADDR', 'SERVER_NAME', 'CONTENT_TYPE' + all keys that | |
|
683 | ; start with HTTP* this list be extended with additional keywords here | |
|
684 | #appenlight.environ_keys_whitelist = | |
|
685 | ||
|
686 | ; list of keywords that should be blanked from request object | |
|
687 | ; can be string with comma separated list of words in lowercase | |
|
688 | ; (by default client will always blank keys that contain following words | |
|
689 | ; 'password', 'passwd', 'pwd', 'auth_tkt', 'secret', 'csrf' | |
|
690 | ; this list be extended with additional keywords set here | |
|
691 | #appenlight.request_keys_blacklist = | |
|
692 | ||
|
693 | ; list of namespaces that should be ignores when gathering log entries | |
|
694 | ; can be string with comma separated list of namespaces | |
|
695 | ; (by default the client ignores own entries: appenlight_client.client) | |
|
696 | #appenlight.log_namespace_blacklist = | |
|
697 | ||
|
698 | 706 | ; Statsd client config, this is used to send metrics to statsd |
|
699 | 707 | ; We recommend setting up statsd_exporter and scraping the metrics with Prometheus |
|
700 | 708 | #statsd.enabled = false |
|
701 | 709 | #statsd.statsd_host = 0.0.0.0 |
|
702 | 710 | #statsd.statsd_port = 8125 |
|
703 | 711 | #statsd.statsd_prefix = |
|
704 | 712 | #statsd.statsd_ipv6 = false |
|
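Enabled, the statsd block above might read as follows; host, port, and prefix are assumptions for a local statsd (or statsd_exporter) endpoint:

```ini
; example: statsd metrics client (endpoint and prefix are illustrative)
statsd.enabled = true
statsd.statsd_host = 127.0.0.1
statsd.statsd_port = 8125
statsd.statsd_prefix = rhodecode
statsd.statsd_ipv6 = false
```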
705 | 713 | |
|
706 | 714 | ; configure logging automatically at server startup; set to false |
|
707 | 715 | ; to use the below custom logging config. |
|
708 | 716 | ; RC_LOGGING_FORMATTER |
|
709 | 717 | ; RC_LOGGING_LEVEL |
|
710 | 718 | ; env variables can control the settings for logging in case of autoconfigure |
|
711 | 719 | |
|
712 | 720 | #logging.autoconfigure = true |
|
713 | 721 | |
|
714 | 722 | ; specify your own custom logging config file to configure logging |
|
715 | 723 | #logging.logging_conf_file = /path/to/custom_logging.ini |
|
716 | 724 | |
|
717 | 725 | ; Dummy marker to add new entries after. |
|
718 | 726 | ; Add any custom entries below. Please don't remove this marker. |
|
719 | 727 | custom.conf = 1 |
|
720 | 728 | |
|
721 | 729 | |
|
722 | 730 | ; ##################### |
|
723 | 731 | ; LOGGING CONFIGURATION |
|
724 | 732 | ; ##################### |
|
725 | 733 | |
|
726 | 734 | [loggers] |
|
727 | 735 | keys = root, sqlalchemy, beaker, celery, rhodecode, ssh_wrapper |
|
728 | 736 | |
|
729 | 737 | [handlers] |
|
730 | 738 | keys = console, console_sql |
|
731 | 739 | |
|
732 | 740 | [formatters] |
|
733 | 741 | keys = generic, json, color_formatter, color_formatter_sql |
|
734 | 742 | |
|
735 | 743 | ; ####### |
|
736 | 744 | ; LOGGERS |
|
737 | 745 | ; ####### |
|
738 | 746 | [logger_root] |
|
739 | 747 | level = NOTSET |
|
740 | 748 | handlers = console |
|
741 | 749 | |
|
742 | 750 | [logger_sqlalchemy] |
|
743 | 751 | level = INFO |
|
744 | 752 | handlers = console_sql |
|
745 | 753 | qualname = sqlalchemy.engine |
|
746 | 754 | propagate = 0 |
|
747 | 755 | |
|
748 | 756 | [logger_beaker] |
|
749 | 757 | level = DEBUG |
|
750 | 758 | handlers = |
|
751 | 759 | qualname = beaker.container |
|
752 | 760 | propagate = 1 |
|
753 | 761 | |
|
754 | 762 | [logger_rhodecode] |
|
755 | 763 | level = DEBUG |
|
756 | 764 | handlers = |
|
757 | 765 | qualname = rhodecode |
|
758 | 766 | propagate = 1 |
|
759 | 767 | |
|
760 | 768 | [logger_ssh_wrapper] |
|
761 | 769 | level = DEBUG |
|
762 | 770 | handlers = |
|
763 | 771 | qualname = ssh_wrapper |
|
764 | 772 | propagate = 1 |
|
765 | 773 | |
|
766 | 774 | [logger_celery] |
|
767 | 775 | level = DEBUG |
|
768 | 776 | handlers = |
|
769 | 777 | qualname = celery |
|
770 | 778 | |
|
771 | 779 | |
|
772 | 780 | ; ######## |
|
773 | 781 | ; HANDLERS |
|
774 | 782 | ; ######## |
|
775 | 783 | |
|
776 | 784 | [handler_console] |
|
777 | 785 | class = StreamHandler |
|
778 | 786 | args = (sys.stderr, ) |
|
779 | 787 | level = INFO |
|
780 | 788 | ; To enable JSON formatted logs replace 'generic/color_formatter' with 'json' |
|
781 | 789 | ; This allows sending properly formatted logs to grafana loki or elasticsearch |
|
782 | 790 | formatter = generic |
|
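As the comment above notes, switching the formatter to `json` is all that is needed for log shippers such as Grafana Loki or Elasticsearch; a sketch of the resulting handler:

```ini
; example: console handler emitting JSON-formatted log lines
[handler_console]
class = StreamHandler
args = (sys.stderr, )
level = INFO
formatter = json
```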
783 | 791 | |
|
784 | 792 | [handler_console_sql] |
|
785 | 793 | ; "level = DEBUG" logs SQL queries and results. |
|
786 | 794 | ; "level = INFO" logs SQL queries. |
|
787 | 795 | ; "level = WARN" logs neither. (Recommended for production systems.) |
|
788 | 796 | class = StreamHandler |
|
789 | 797 | args = (sys.stderr, ) |
|
790 | 798 | level = WARN |
|
791 | 799 | ; To enable JSON formatted logs replace 'generic/color_formatter_sql' with 'json' |
|
792 | 800 | ; This allows sending properly formatted logs to grafana loki or elasticsearch |
|
793 | 801 | formatter = generic |
|
794 | 802 | |
|
795 | 803 | ; ########## |
|
796 | 804 | ; FORMATTERS |
|
797 | 805 | ; ########## |
|
798 | 806 | |
|
799 | 807 | [formatter_generic] |
|
800 | 808 | class = rhodecode.lib.logging_formatter.ExceptionAwareFormatter |
|
801 | 809 | format = %(asctime)s.%(msecs)03d [%(process)d] %(levelname)-5.5s [%(name)s] %(message)s |
|
802 | 810 | datefmt = %Y-%m-%d %H:%M:%S |
|
803 | 811 | |
|
804 | 812 | [formatter_color_formatter] |
|
805 | 813 | class = rhodecode.lib.logging_formatter.ColorFormatter |
|
806 | 814 | format = %(asctime)s.%(msecs)03d [%(process)d] %(levelname)-5.5s [%(name)s] %(message)s |
|
807 | 815 | datefmt = %Y-%m-%d %H:%M:%S |
|
808 | 816 | |
|
809 | 817 | [formatter_color_formatter_sql] |
|
810 | 818 | class = rhodecode.lib.logging_formatter.ColorFormatterSql |
|
811 | 819 | format = %(asctime)s.%(msecs)03d [%(process)d] %(levelname)-5.5s [%(name)s] %(message)s |
|
812 | 820 | datefmt = %Y-%m-%d %H:%M:%S |
|
813 | 821 | |
|
814 | 822 | [formatter_json] |
|
815 | 823 | format = %(timestamp)s %(levelname)s %(name)s %(message)s %(req_id)s |
|
816 | 824 | class = rhodecode.lib._vendor.jsonlogger.JsonFormatter |
@@ -1,31 +1,31 b'' | |||
|
1 | 1 | .. _lab-settings: |
|
2 | 2 | |
|
3 | 3 | Lab Settings |
|
4 | 4 | ============ |
|
5 | 5 | |
|
6 | 6 | |RCE| Lab Settings is for delivering features which may require an additional |
|
7 | 7 | level of support to optimize for production scenarios. To enable lab settings, |
|
8 | 8 | use the following instructions: |
|
9 | 9 | |
|
10 | 10 | 1. Open the |RCE| configuration file, |
|
11 | :file:` |
|
|
11 | :file:`config/_shared/rhodecode.ini` | |
|
12 | 12 | |
|
13 | 13 | 2. Add the following configuration option in the ``[app:main]`` section. |
|
14 | 14 | |
|
15 | 15 | .. code-block:: bash |
|
16 | 16 | |
|
17 | 17 | [app:main] |
|
18 | 18 | |
|
19 | 19 | ## Display extended labs settings |
|
20 | 20 | labs_settings_active = true |
|
21 | 21 | |
|
22 | 22 | 3. Restart your |RCE| instance |
|
23 | 23 | |
|
24 | 24 | .. code-block:: bash |
|
25 | 25 | |
|
26 | 26 | $ rccontrol restart enterprise-1 |
|
27 | 27 | |
|
28 | 28 | 4. You will see the labs setting on the |
|
29 | 29 | :menuselection:`Admin --> Settings --> labs` page. |
|
30 | 30 | |
|
31 | 31 | .. image:: ../images/lab-setting.png |
@@ -1,57 +1,57 b'' | |||
|
1 | 1 | .. _x-frame: |
|
2 | 2 | |
|
3 | 3 | Securing HTTPS Connections |
|
4 | 4 | -------------------------- |
|
5 | 5 | |
|
6 | 6 | * To secure your |RCE| instance against `Cross Frame Scripting`_ exploits, you |
|
7 | 7 | should configure your webserver ``x-frame-options`` setting. |
|
8 | 8 | |
|
9 | 9 | * To configure your instance for `HTTP Strict Transport Security`_, you need to |
|
10 | 10 | configure the ``Strict-Transport-Security`` setting. |
|
11 | 11 | |
|
12 | 12 | Nginx |
|
13 | 13 | ^^^^^ |
|
14 | 14 | |
|
15 | 15 | In your nginx configuration, add the following lines in the correct files. For |
|
16 | 16 | more detailed information see the :ref:`nginx-ws-ref` section. |
|
17 | 17 | |
|
18 | 18 | .. code-block:: nginx |
|
19 | 19 | |
|
20 | 20 | # Add this line to the nginx.conf file |
|
21 | 21 | add_header X-Frame-Options SAMEORIGIN; |
|
22 | 22 | |
|
23 | 23 | # This line needs to be added inside your virtual hosts block/file |
|
24 | 24 | add_header Strict-Transport-Security "max-age=31536000; includeSubdomains;"; |
|
25 | 25 | |
|
26 | 26 | Apache |
|
27 | 27 | ^^^^^^ |
|
28 | 28 | |
|
29 | 29 | In your :file:`apache2.conf` file, add the following line. For more detailed |
|
30 | 30 | information see the :ref:`apache-ws-ref` section. |
|
31 | 31 | |
|
32 | 32 | .. code-block:: apache |
|
33 | 33 | |
|
34 | 34 | # Add this to your virtual hosts file |
|
35 | 35 | Header always append X-Frame-Options SAMEORIGIN |
|
36 | 36 | |
|
37 | 37 | # Add this line in your virtual hosts file |
|
38 | 38 | Header always set Strict-Transport-Security "max-age=63072000; includeSubdomains; preload" |
|
39 | 39 | |
|
40 | 40 | |RCE| Configuration |
|
41 | 41 | ^^^^^^^^^^^^^^^^^^^ |
|
42 | 42 | |
|
43 | 43 | |RCE| can also be configured to force strict *https* connections and Strict |
|
44 | 44 | Transport Security. To set this, configure the following options to ``true`` |
|
45 | in the :file:` |
|
|
45 | in the :file:`config/_shared/rhodecode.ini` file. | |
|
46 | 46 | |
|
47 | 47 | .. code-block:: ini |
|
48 | 48 | |
|
49 | 49 | ## force https in RhodeCode, fixes https redirects, assumes it's always https |
|
50 | 50 | force_https = false |
|
51 | 51 | |
|
52 | 52 | ## use Strict-Transport-Security headers |
|
53 | 53 | use_htsts = false |
|
54 | 54 | |
|
55 | 55 | |
|
56 | 56 | .. _Cross Frame Scripting: https://www.owasp.org/index.php/Cross_Frame_Scripting |
|
57 | 57 | .. _HTTP Strict Transport Security: https://www.owasp.org/index.php/HTTP_Strict_Transport_Security No newline at end of file |
@@ -1,179 +1,179 b'' | |||
|
1 | 1 | .. _sec-your-server: |
|
2 | 2 | |
|
3 | 3 | Securing Your Server |
|
4 | 4 | -------------------- |
|
5 | 5 | |
|
6 | 6 | |RCE| runs on your hardware, and while it is developed with security in mind |
|
7 | 7 | it is also important that you ensure your servers are well secured. In this |
|
8 | 8 | section we will cover some basic security practices that are best |
|
9 | 9 | configured when setting up your |RCE| instances. |
|
10 | 10 | |
|
11 | 11 | SSH Keys |
|
12 | 12 | ^^^^^^^^ |
|
13 | 13 | |
|
14 | 14 | Using SSH keys to access your server provides more security than using the |
|
15 | 15 | standard username and password combination. To set up your SSH Keys, use the |
|
16 | 16 | following steps: |
|
17 | 17 | |
|
18 | 18 | 1. On your local machine create the public/private key combination. The |
|
19 | 19 | private key you will keep, and the matching public key is copied to the |
|
20 | 20 | server. Setting a passphrase here is optional; if you set one you will |
|
21 | 21 | always be prompted for it when logging in. |
|
22 | 22 | |
|
23 | 23 | .. code-block:: bash |
|
24 | 24 | |
|
25 | 25 | # Generate SSH Keys |
|
26 | 26 | user@ubuntu:~$ ssh-keygen -t rsa |
|
27 | 27 | |
|
28 | 28 | .. code-block:: bash |
|
29 | 29 | |
|
30 | 30 | Generating public/private rsa key pair. |
|
31 | 31 | Enter file in which to save the key (/home/user/.ssh/id_rsa): |
|
32 | 32 | Created directory '/home/user/.ssh'. |
|
33 | 33 | Enter passphrase (empty for no passphrase): |
|
34 | 34 | Enter same passphrase again: |
|
35 | 35 | Your identification has been saved in /home/user/.ssh/id_rsa. |
|
36 | 36 | Your public key has been saved in /home/user/.ssh/id_rsa.pub. |
|
37 | 37 | The key fingerprint is: |
|
38 | 38 | 02:82:38:95:e5:30:d2:ad:17:60:15:7f:94:17:9f:30 user@ubuntu |
|
39 | 39 | The key's randomart image is: |
|
40 | 40 | +--[ RSA 2048]----+ |
|
41 | 41 | |
|
42 | 42 | 2. SFTP to your server, and copy the public key to the ``~/.ssh`` folder. |
|
43 | 43 | |
|
44 | 44 | .. code-block:: bash |
|
45 | 45 | |
|
46 | 46 | # SFTP to your server |
|
47 | 47 | $ sftp user@hostname |
|
48 | 48 | |
|
49 | 49 | # copy your public key |
|
50 | 50 | sftp> mput /home/user/.ssh/id_rsa.pub /home/user/.ssh |
|
51 | 51 | Uploading /home/user/.ssh/id_rsa.pub to /home/user/.ssh/id_rsa.pub |
|
52 | 52 | /home/user/.ssh/id_rsa.pub 100% 394 0.4KB/s 00:00 |
|
53 | 53 | |
|
54 | 54 | 3. On your server, add the public key to the :file:`~/.ssh/authorized_keys` |
|
55 | 55 | file. |
|
56 | 56 | |
|
57 | 57 | .. code-block:: bash |
|
58 | 58 | |
|
59 | 59 | $ cat /home/user/.ssh/id_rsa.pub >> /home/user/.ssh/authorized_keys |
|
60 | 60 | |
|
61 | 61 | You should now be able to log into your server using your SSH |
|
62 | 62 | Keys. If you've added a passphrase you'll be asked for it. For more |
|
63 | 63 | information about using SSH keys with |RCE| |repos|, see the |
|
64 | 64 | :ref:`ssh-connection` section. |
|
65 | 65 | |
|
66 | 66 | VPN Whitelist |
|
67 | 67 | ^^^^^^^^^^^^^ |
|
68 | 68 | |
|
69 | 69 | Most company networks will have a VPN. If you need to set one up, there are |
|
70 | 70 | many tutorials online for how to do that. Getting it right requires good |
|
71 | 71 | knowledge and attention to detail. Once set up, you can configure your |
|
72 | 72 | |RCE| instances to only allow user access from the VPN. To do this, see the |
|
73 | 73 | :ref:`settip-ip-white` section. |
|
74 | 74 | |
|
75 | 75 | Public Key Infrastructure and SSL/TLS Encryption |
|
76 | 76 | ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ |
|
77 | 77 | |
|
78 | 78 | Public key infrastructure (PKI) is a system that creates, manages, and |
|
79 | 79 | validates certificates for identifying nodes on a network and encrypting |
|
80 | 80 | communication between them. SSL or TLS certificates can be used to |
|
81 | 81 | authenticate different entities with one another. To read more about PKIs, |
|
82 | 82 | see the `OpenSSL PKI tutorial`_ site, or this `Cloudflare PKI post`_. |
|
83 | 83 | |
|
84 | 84 | If the network you are running is SSL/TLS encrypted, you can configure |RCE| |
|
85 | 85 | to always use secure connections using the ``force_https`` and ``use_htsts`` |
|
86 | options in the :file:` | 86 | options in the :file:`config/_shared/rhodecode.ini` file. |
|
87 | 87 | For more details, see the :ref:`x-frame` section. |
|
88 | 88 | |
|
89 | 89 | FireWalls and Ports |
|
90 | 90 | ^^^^^^^^^^^^^^^^^^^ |
|
91 | 91 | |
|
92 | 92 | Setting up a network firewall for your internal traffic is a good way |
|
93 | 93 | of keeping it secure by blocking off any ports that should not be used. |
|
94 | 94 | Additionally, you can set non-default ports for certain functions which adds |
|
95 | 95 | an extra layer of security to your setup. |
|
96 | 96 | |
|
97 | 97 | A well configured firewall will restrict access to everything except the |
|
98 | 98 | services you need to remain open. By exposing fewer services you reduce the |
|
99 | 99 | number of potential vulnerabilities. |
|
100 | 100 | |
|
101 | 101 | There are a number of different firewall solutions, but for most Linux systems |
|
102 | 102 | using the built in `IpTables`_ firewall should suffice. On BSD systems you |
|
103 | 103 | can use `IPFILTER`_ or `IPFW`_. Use the following examples, and the IpTables |
|
104 | 104 | documentation to configure your IP Tables on Ubuntu. |
|
105 | 105 | |
|
106 | 106 | Changing the default SSH port. |
|
107 | 107 | |
|
108 | 108 | .. code-block:: bash |
|
109 | 109 | |
|
110 | 110 | # Open SSH config file and change to port 10022 |
|
111 | 111 | vi /etc/ssh/sshd_config |
|
112 | 112 | |
|
113 | 113 | # What ports, IPs and protocols we listen for |
|
114 | 114 | Port 10022 |
|
115 | 115 | |
|
116 | 116 | Setting IP Table rules for SSH traffic. It is important to note that the |
|
117 | 117 | default policy of your IpTables can differ and it is worth checking how each |
|
118 | 118 | is configured. The options are *ACCEPT*, *REJECT*, *DROP*, or *LOG*. The |
|
119 | 119 | usual practice is to block access on all ports and then enable access only on |
|
120 | 120 | the ports you wish to expose. |
|
121 | 121 | |
|
122 | 122 | .. code-block:: bash |
|
123 | 123 | |
|
124 | 124 | # Check iptables policy |
|
125 | 125 | $ sudo iptables -L |
|
126 | 126 | |
|
127 | 127 | Chain INPUT (policy ACCEPT) |
|
128 | 128 | target prot opt source destination |
|
129 | 129 | |
|
130 | 130 | Chain FORWARD (policy ACCEPT) |
|
131 | 131 | target prot opt source destination |
|
132 | 132 | |
|
133 | 133 | Chain OUTPUT (policy ACCEPT) |
|
134 | 134 | target prot opt source destination |
|
135 | 135 | |
|
136 | 136 | # Close all ports by default |
|
137 | 137 | $ sudo iptables -P INPUT DROP |
|
138 | 138 | |
|
139 | 139 | $ sudo iptables -L |
|
140 | 140 | Chain INPUT (policy DROP) |
|
141 | 141 | target prot opt source destination |
|
142 | 142 | DROP all -- anywhere anywhere |
|
143 | 143 | |
|
144 | 144 | Chain FORWARD (policy ACCEPT) |
|
145 | 145 | target prot opt source destination |
|
146 | 146 | |
|
147 | 147 | Chain OUTPUT (policy ACCEPT) |
|
148 | 148 | target prot opt source destination |
|
149 | 149 | |
|
150 | 150 | .. code-block:: bash |
|
151 | 151 | |
|
152 | 152 | # Deny outbound SSH traffic |
|
153 | 153 | sudo iptables -A OUTPUT -p tcp --dport 10022 -j DROP |
|
154 | 154 | |
|
155 | 155 | # Allow incoming SSH traffic on port 10022 |
|
156 | 156 | sudo iptables -A INPUT -p tcp --dport 10022 -j ACCEPT |
|
157 | 157 | |
|
158 | 158 | # Allow incoming HTTP and HTTPS traffic on ports 80 and 443 |
|
159 | 159 | iptables -A INPUT -p tcp -m tcp --dport 80 -j ACCEPT |
|
160 | 160 | iptables -A INPUT -p tcp -m tcp --dport 443 -j ACCEPT |
|
161 | 161 | |
|
162 | 162 | Saving your IP Table rules, and restoring them from file. |
|
163 | 163 | |
|
164 | 164 | .. code-block:: bash |
|
165 | 165 | |
|
166 | 166 | # Save your IP Table Rules |
|
167 | 167 | iptables-save |
|
168 | 168 | |
|
169 | 169 | # Save your IP Table Rules to a file |
|
170 | 170 | sudo sh -c "iptables-save > /etc/iptables.rules" |
|
171 | 171 | |
|
172 | 172 | # Restore your IP Table rules from file |
|
173 | 173 | iptables-restore < /etc/iptables.rules |
|
174 | 174 | |
|
175 | 175 | .. _OpenSSL PKI tutorial: https://pki-tutorial.readthedocs.org/en/latest/# |
|
176 | 176 | .. _Cloudflare PKI post: https://blog.cloudflare.com/how-to-build-your-own-public-key-infrastructure/ |
|
177 | 177 | .. _IpTables: https://help.ubuntu.com/community/IptablesHowTo |
|
178 | 178 | .. _IPFW: https://www.freebsd.org/doc/handbook/firewalls-ipfw.html |
|
179 | 179 | .. _IPFILTER: https://www.freebsd.org/doc/handbook/firewalls-ipf.html |
@@ -1,172 +1,172 b'' | |||
|
1 | 1 | .. _system-overview-ref: |
|
2 | 2 | |
|
3 | 3 | System Overview |
|
4 | 4 | =============== |
|
5 | 5 | |
|
6 | 6 | Latest Version |
|
7 | 7 | -------------- |
|
8 | 8 | |
|
9 | 9 | * |release| on Unix and Windows systems. |
|
10 | 10 | |
|
11 | 11 | System Architecture |
|
12 | 12 | ------------------- |
|
13 | 13 | |
|
14 | 14 | The following diagram shows a typical production architecture. |
|
15 | 15 | |
|
16 | 16 | .. image:: ../images/architecture-diagram.png |
|
17 | 17 | :align: center |
|
18 | 18 | |
|
19 | 19 | Supported Operating Systems |
|
20 | 20 | --------------------------- |
|
21 | 21 | |
|
22 | 22 | Linux |
|
23 | 23 | ^^^^^ |
|
24 | 24 | |
|
25 | 25 | * Ubuntu 14.04+ |
|
26 | 26 | * CentOS 6.2, 7 and 8 |
|
27 | 27 | * RHEL 6.2, 7 and 8 |
|
28 | 28 | * Debian 7.8 |
|
29 | 29 | * RedHat Fedora |
|
30 | 30 | * Arch Linux |
|
31 | 31 | * SUSE Linux |
|
32 | 32 | |
|
33 | 33 | Windows |
|
34 | 34 | ^^^^^^^ |
|
35 | 35 | |
|
36 | 36 | * Windows Vista Ultimate 64bit |
|
37 | 37 | * Windows 7 Ultimate 64bit |
|
38 | 38 | * Windows 8 Professional 64bit |
|
39 | 39 | * Windows 8.1 Enterprise 64bit |
|
40 | 40 | * Windows Server 2008 64bit |
|
41 | 41 | * Windows Server 2008-R2 64bit |
|
42 | 42 | * Windows Server 2012 64bit |
|
43 | 43 | |
|
44 | 44 | Supported Databases |
|
45 | 45 | ------------------- |
|
46 | 46 | |
|
47 | 47 | * SQLite |
|
48 | 48 | * MySQL |
|
49 | 49 | * MariaDB |
|
50 | 50 | * PostgreSQL |
|
51 | 51 | |
|
52 | 52 | Supported Browsers |
|
53 | 53 | ------------------ |
|
54 | 54 | |
|
55 | 55 | * Chrome |
|
56 | 56 | * Safari |
|
57 | 57 | * Firefox |
|
58 | 58 | * Internet Explorer 10 & 11 |
|
59 | 59 | |
|
60 | 60 | System Requirements |
|
61 | 61 | ------------------- |
|
62 | 62 | |
|
63 | 63 | |RCE| performs best on machines with ultra-fast hard disks. Generally disk |
|
64 | 64 | performance is more important than CPU performance. In a corporate production |
|
65 | 65 | environment handling 1000s of users and |repos| you should deploy on a 12+ |
|
66 | 66 | core 64GB RAM server. In short, the more RAM the better. |
|
67 | 67 | |
|
68 | 68 | |
|
69 | 69 | For example: |
|
70 | 70 | |
|
71 | 71 | - for team of 1 - 5 active users you can run on 1GB RAM machine with 1CPU |
|
72 | 72 | - above 250 active users, |RCE| needs at least 8GB of memory. |
|
73 | 73 | Number of CPUs is less important, but recommended to have at least 2-3 CPUs |
|
74 | 74 | |
|
75 | 75 | |
|
76 | 76 | .. _config-rce-files: |
|
77 | 77 | |
|
78 | 78 | Configuration Files |
|
79 | 79 | ------------------- |
|
80 | 80 | |
|
81 | * :file:` | 81 | * :file:`config/_shared/rhodecode.ini` |
|
82 | 82 | * :file:`/home/{user}/.rccontrol/{instance-id}/search_mapping.ini` |
|
83 | 83 | * :file:`/home/{user}/.rccontrol/{vcsserver-id}/vcsserver.ini` |
|
84 | 84 | * :file:`/home/{user}/.rccontrol/supervisor/supervisord.ini` |
|
85 | 85 | * :file:`/home/{user}/.rccontrol.ini` |
|
86 | 86 | * :file:`/home/{user}/.rhoderc` |
|
87 | 87 | * :file:`/home/{user}/.rccontrol/cache/MANIFEST` |
|
88 | 88 | |
|
89 | 89 | For more information, see the :ref:`config-files` section. |
|
90 | 90 | |
|
91 | 91 | Log Files |
|
92 | 92 | --------- |
|
93 | 93 | |
|
94 | 94 | * :file:`/home/{user}/.rccontrol/{instance-id}/enterprise.log` |
|
95 | 95 | * :file:`/home/{user}/.rccontrol/{vcsserver-id}/vcsserver.log` |
|
96 | 96 | * :file:`/home/{user}/.rccontrol/supervisor/supervisord.log` |
|
97 | 97 | * :file:`/tmp/rccontrol.log` |
|
98 | 98 | * :file:`/tmp/rhodecode_tools.log` |
|
99 | 99 | |
|
100 | 100 | Storage Files |
|
101 | 101 | ------------- |
|
102 | 102 | |
|
103 | 103 | * :file:`/home/{user}/.rccontrol/{instance-id}/data/index/{index-file.toc}` |
|
104 | 104 | * :file:`/home/{user}/repos/.rc_gist_store` |
|
105 | 105 | * :file:`/home/{user}/.rccontrol/{instance-id}/rhodecode.db` |
|
106 | 106 | * :file:`/opt/rhodecode/store/{unique-hash}` |
|
107 | 107 | |
|
108 | 108 | Default Repositories Location |
|
109 | 109 | ----------------------------- |
|
110 | 110 | |
|
111 | 111 | * :file:`/home/{user}/repos` |
|
112 | 112 | |
|
113 | 113 | Connection Methods |
|
114 | 114 | ------------------ |
|
115 | 115 | |
|
116 | 116 | * HTTPS |
|
117 | 117 | * SSH |
|
118 | 118 | * |RCE| API |
|
119 | 119 | |
|
120 | 120 | Internationalization Support |
|
121 | 121 | ---------------------------- |
|
122 | 122 | |
|
123 | 123 | Currently available in the following languages, see `Transifex`_ for the |
|
124 | 124 | latest details. If you want a new language added, please contact us. To |
|
125 | 125 | configure your language settings, see the :ref:`set-lang` section. |
|
126 | 126 | |
|
127 | 127 | .. hlist:: |
|
128 | 128 | |
|
129 | 129 | * Belarusian |
|
130 | 130 | * Chinese |
|
131 | 131 | * French |
|
132 | 132 | * German |
|
133 | 133 | * Italian |
|
134 | 134 | * Japanese |
|
135 | 135 | * Portuguese |
|
136 | 136 | * Polish |
|
137 | 137 | * Russian |
|
138 | 138 | * Spanish |
|
139 | 139 | |
|
140 | 140 | Licencing Information |
|
141 | 141 | --------------------- |
|
142 | 142 | |
|
143 | 143 | * See licencing information `here`_ |
|
144 | 144 | |
|
145 | 145 | Peer-to-peer Failover Support |
|
146 | 146 | ----------------------------- |
|
147 | 147 | |
|
148 | 148 | * Yes |
|
149 | 149 | |
|
150 | 150 | Additional Binaries |
|
151 | 151 | ------------------- |
|
152 | 152 | |
|
153 | 153 | * Yes, see :ref:`rhodecode-nix-ref` for full details. |
|
154 | 154 | |
|
155 | 155 | Remote Connectivity |
|
156 | 156 | ------------------- |
|
157 | 157 | |
|
158 | 158 | * Available |
|
159 | 159 | |
|
160 | 160 | Executable Files |
|
161 | 161 | ---------------- |
|
162 | 162 | |
|
163 | 163 | Windows: :file:`RhodeCode-installer-{version}.exe` |
|
164 | 164 | |
|
165 | 165 | Deprecated Support |
|
166 | 166 | ------------------ |
|
167 | 167 | |
|
168 | 168 | - Internet Explorer 8 support deprecated since version 3.7.0. |
|
169 | 169 | - Internet Explorer 9 support deprecated since version 3.8.0. |
|
170 | 170 | |
|
171 | 171 | .. _here: https://rhodecode.com/licenses/ |
|
172 | 172 | .. _Transifex: https://explore.transifex.com/rhodecode/RhodeCode/ |
@@ -1,300 +1,300 b'' | |||
|
1 | 1 | .. _admin-tricks: |
|
2 | 2 | |
|
3 | 3 | One-time Admin Tasks |
|
4 | 4 | -------------------- |
|
5 | 5 | |
|
6 | 6 | * :ref:`web-analytics` |
|
7 | 7 | * :ref:`admin-tricks-license` |
|
8 | 8 | * :ref:`announcements` |
|
9 | 9 | * :ref:`md-rst` |
|
10 | 10 | * :ref:`repo-stats` |
|
11 | 11 | * :ref:`server-side-merge` |
|
12 | 12 | * :ref:`remap-rescan` |
|
13 | 13 | * :ref:`custom-hooks` |
|
14 | 14 | * :ref:`clear-repo-cache` |
|
15 | 15 | * :ref:`set-repo-pub` |
|
16 | 16 | * :ref:`ping` |
|
17 | 17 | |
|
18 | 18 | .. _web-analytics: |
|
19 | 19 | |
|
20 | 20 | Adding Web Analytics |
|
21 | 21 | ^^^^^^^^^^^^^^^^^^^^ |
|
22 | 22 | |
|
23 | 23 | If you wish to add a Google Analytics, or any other kind of tracker to your |
|
24 | 24 | |RCE| instance you can add the necessary codes to the header or footer |
|
25 | 25 | section of each instance using the following steps: |
|
26 | 26 | |
|
27 | 27 | 1. From the |RCE| interface, select |
|
28 | 28 | :menuselection:`Admin --> Settings --> Global` |
|
29 | 29 | 2. To add a tracking code to your instance, enter it in the header or footer |
|
30 | 30 | section and select **Save** |
|
31 | 31 | |
|
32 | 32 | Use the example templates in the drop-down menu to set up your configuration. |
|
33 | 33 | |
|
34 | 34 | .. _admin-tricks-license: |
|
35 | 35 | |
|
36 | 36 | Licence Key Management |
|
37 | 37 | ^^^^^^^^^^^^^^^^^^^^^^ |
|
38 | 38 | |
|
39 | 39 | To manage your license key, go to |
|
40 | 40 | :menuselection:`Admin --> Settings --> License`. |
|
41 | 41 | On this page you can see the license key details. If you need a new license, |
|
42 | 42 | or have questions about your current one, contact support@rhodecode.com |
|
43 | 43 | |
|
44 | 44 | .. _announcements: |
|
45 | 45 | |
|
46 | 46 | Server-wide Announcements |
|
47 | 47 | ^^^^^^^^^^^^^^^^^^^^^^^^^ |
|
48 | 48 | |
|
49 | 49 | If you need to make a server-wide announcement to all users, |
|
50 | 50 | you can add a message to be displayed using the following steps: |
|
51 | 51 | |
|
52 | 52 | 1. From the |RCE| interface, select |
|
53 | 53 | :menuselection:`Admin --> Settings --> Global` |
|
54 | 54 | 2. To add a message that will be displayed to all users, |
|
55 | 55 | select :guilabel:`Server Announcement` from the drop-down menu and |
|
56 | 56 | change the ``var message = "TYPE YOUR MESSAGE HERE";`` example line. |
|
57 | 57 | 3. Select :guilabel:`Save`, and you will see the message once your page |
|
58 | 58 | refreshes. |
|
59 | 59 | |
|
60 | 60 | .. image:: ../../images/server-wide-announcement.png |
|
61 | 61 | :alt: Server Wide Announcement |
|
62 | 62 | |
|
63 | 63 | .. _admin-tricks-suppress-license-messages: |

64 | 64 | |

65 | 65 | |

66 | 66 | Suppress license warnings or errors |

67 | 67 | ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ |

68 | 68 | |

69 | 69 | If you're running at the maximum number of allowed users, RhodeCode will display a |

70 | 70 | warning message on pages when you're close to the license limits. |

71 | 71 | It's often not desired to show that all the time. Here's how you can suppress |

72 | 72 | the license messages. |

73 | 73 | |

74 | 74 | 1. From the |RCE| interface, select |

75 | 75 | :menuselection:`Admin --> Settings --> Global` |

76 | 76 | 2. Select :guilabel:`Flash message filtering` from the drop-down menu. |

77 | 77 | 3. Select :guilabel:`Save`, and you will no longer see the license message |

78 | 78 | once your page refreshes. |

79 | 79 | |

80 | 80 | .. _md-rst: |
|
81 | 81 | |
|
82 | 82 | |
|
83 | 83 | Markdown or RST Rendering |
|
84 | 84 | ^^^^^^^^^^^^^^^^^^^^^^^^^ |
|
85 | 85 | |
|
86 | 86 | |RCE| can use `Markdown`_ or `reStructured Text`_ in commit message, |
|
87 | 87 | code review messages, and inline comments. To set the default to either, |
|
88 | 88 | select your preference from the drop-down menu on the |
|
89 | 89 | :menuselection:`Admin --> Settings --> Visual` page and select |
|
90 | 90 | :guilabel:`Save settings`. |
|
91 | 91 | |
|
92 | 92 | .. _repo-stats: |
|
93 | 93 | |
|
94 | 94 | Enabling Repository Statistics |
|
95 | 95 | ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ |
|
96 | 96 | |
|
97 | 97 | To enable |repo| statistics, use the following steps: |
|
98 | 98 | |
|
99 | 99 | 1. From the |RCE| interface, open |
|
100 | 100 | :menuselection:`Admin --> Repositories` and select |
|
101 | 101 | :guilabel:`Edit` beside the |repo| for which you wish to enable statistics. |
|
102 | 102 | 2. Check the :guilabel:`Enable statistics` box, and select :guilabel:`Save` |
|
103 | 103 | |
|
104 | 104 | .. _server-side-merge: |
|
105 | 105 | |
|
106 | 106 | Enabling Server-side Merging |
|
107 | 107 | ^^^^^^^^^^^^^^^^^^^^^^^^^^^^ |
|
108 | 108 | |
|
109 | 109 | To enable server-side merging, use the following steps: |
|
110 | 110 | |
|
111 | 111 | 1. From the |RCE| interface, open :menuselection:`Admin --> Settings --> VCS` |
|
112 | 112 | 2. Check the :guilabel:`Server-side merge` box, and select |
|
113 | 113 | :guilabel:`Save Settings` |
|
114 | 114 | |
|
115 | 115 | If you encounter slow performance with server-side merging enabled, check the |
|
116 | 116 | speed at which your server is performing actions. When server-side merging is |
|
117 | 117 | enabled, the following actions occur on the server. |
|
118 | 118 | |
|
119 | 119 | * A |pr| is created in the database. |
|
120 | 120 | * A shadow |repo| is created as a working environment for the |pr|. |
|
121 | 121 | * On display, |RCE| checks if the |pr| can be merged. |
|
122 | 122 | |
|
123 | 123 | To check how fast the shadow |repo| creation is occurring on your server, use |
|
124 | 124 | the following steps: |
|
125 | 125 | |
|
126 | 126 | 1. Log into your server and create a directory in your |repos| folder. |
|
127 | 127 | 2. Clone a |repo| that is showing slow performance and time the action. |
|
128 | 128 | |
|
129 | 129 | .. code-block:: bash |
|
130 | 130 | |
|
131 | 131 | # One option is to use the time command |
|
132 | 132 | $ time hg clone SOURCE_REPO TARGET |
|
133 | 133 | |
|
134 | 134 | .. _remap-rescan: |
|
135 | 135 | |
|
136 | 136 | Remap and Rescan Repositories |
|
137 | 137 | ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ |
|
138 | 138 | |
|
139 | 139 | You may want to Remap and rescan the |repos| that |RCE| is managing to ensure |
|
140 | 140 | the system is always up-to-date. This is useful after importing, deleting, |
|
141 | 141 | or carrying out general cleaning up operations. To do this use the |
|
142 | 142 | following steps: |
|
143 | 143 | |
|
144 | 144 | 1. From the |RCE|, open |
|
145 | 145 | :menuselection:`Admin --> Settings --> Remap and rescan` |
|
146 | 146 | 2. Click :guilabel:`Rescan Repositories` |
|
147 | 147 | |
|
148 | 148 | Check the additional options if needed: |
|
149 | 149 | |
|
150 | 150 | * :guilabel:`Destroy old data`: Useful for purging deleted repository |
|
151 | 151 | information from the database. |
|
152 | 152 | * :guilabel:`Invalidate cache for all repositories`: Use this to completely |
|
153 | 153 | remap all |repos|. Useful when importing or migrating |repos| to ensure all |
|
154 | 154 | new information is picked up. |
|
155 | 155 | |
|
156 | 156 | .. _custom-hooks: |
|
157 | 157 | |
|
158 | 158 | Adding Custom Hooks |
|
159 | 159 | ^^^^^^^^^^^^^^^^^^^ |
|
160 | 160 | |
|
161 | 161 | To add custom hooks to your instance, use the following steps: |
|
162 | 162 | |
|
163 | 163 | 1. Open :menuselection:`Admin --> Settings --> Hooks` |
|
164 | 164 | 2. Add your custom hook details, you can use a file path to specify custom |
|
165 | 165 | hook scripts, for example: |
|
166 | 166 | ``pretxnchangegroup.example`` with value ``python:/path/to/custom_hook.py:my_func_name`` |
|
167 | 167 | 3. Select :guilabel:`Save` |
|
168 | 168 | |
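A hook referenced as ``python:/path/to/custom_hook.py:my_func_name`` resolves to a plain Python callable in that file. As a minimal hypothetical sketch (the exact arguments RhodeCode passes depend on the hook type, so this version just accepts anything; see the RhodeCode Extensions documentation for the real signatures):

```python
# /path/to/custom_hook.py -- illustrative only, not a shipped example
def my_func_name(*args, **kwargs):
    """Log what the hook received; return 0 so the operation is allowed."""
    repo = kwargs.get('repository', '<unknown repo>')
    print('custom hook fired for:', repo)
    return 0
```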
|
169 | 169 | Also, see the RhodeCode Extensions section of the :ref:`rc-tools` guide. RhodeCode |
|
170 | 170 | Extensions can be used to add additional hooks to your instance and come |

171 | 171 | with a number of pre-built plugins if you choose to install them. |
|
172 | 172 | |
|
173 | 173 | .. _clear-repo-cache: |
|
174 | 174 | |
|
175 | 175 | Clearing |repo| cache |
|
176 | 176 | ^^^^^^^^^^^^^^^^^^^^^ |
|
177 | 177 | |
|
178 | 178 | If you need to clear the cache for a particular |repo|, use the following steps: |
|
179 | 179 | |
|
180 | 180 | 1. Open :menuselection:`Admin --> Repositories` and select :guilabel:`Edit` |
|
181 | 181 | beside the |repo| whose cache you wish to clear. |
|
182 | 182 | 2. On the |repo| settings page, go to the :guilabel:`Caches` tab and select |
|
183 | 183 | :guilabel:`Invalidate repository cache`. |
|
184 | 184 | |
|
185 | 185 | .. _set-lang: |
|
186 | 186 | |
|
187 | 187 | Changing Default Language |
|
188 | 188 | ^^^^^^^^^^^^^^^^^^^^^^^^^ |
|
189 | 189 | |
|
190 | 190 | To change the default language of a |RCE| instance, change the language code |
|
191 | in the :file:` | 191 | in the :file:`config/_shared/rhodecode.ini` file. To |
|
192 | 192 | do this, use the following steps. |
|
193 | 193 | |
|
194 | 194 | 1. Open the :file:`rhodecode.ini` file and set the required language code. |
|
195 | 195 | |
|
196 | 196 | .. code-block:: ini |
|
197 | 197 | |
|
198 | 198 | ## Optional Languages |
|
199 | 199 | ## en(default), de, fr, it, ja, pl, pt, ru, zh |
|
200 | 200 | lang = de |
|
201 | 201 | |
|
202 | 202 | 2. Restart the |RCE| instance and check that the language has been updated. |
|
203 | 203 | |
|
204 | 204 | .. code-block:: bash |
|
205 | 205 | |
|
206 | 206 | $ rccontrol restart enterprise-2 |
|
207 | 207 | Instance "enterprise-2" successfully stopped. |
|
208 | 208 | Instance "enterprise-2" successfully started. |
|
209 | 209 | |
|
210 | 210 | .. image:: ../../images/language.png |
|
211 | 211 | |
|
212 | 212 | .. _set-repo-pub: |
|
213 | 213 | |
|
214 | 214 | Setting Repositories to Publish |
|
215 | 215 | ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ |
|
216 | 216 | |
|
217 | 217 | To automatically promote your local |repos| to public after pushing to |RCE|, |
|
218 | 218 | enable the :guilabel:`Set repositories as publishing` option on the |
|
219 | 219 | :menuselection:`Admin --> Settings --> VCS` page. |
|
220 | 220 | |
|
221 | 221 | .. note:: |
|
222 | 222 | |
|
223 | 223 | This option is enabled by default on most |RCE| versions, but if upgrading |
|
224 | 224 | from a 1.7.x version it could be disabled on upgrade due to inheriting |
|
225 | 225 | older default settings. |
|
226 | 226 | |
|
227 | 227 | .. _ping: |
|
228 | 228 | |
|
229 | 229 | Pinging the |RCE| Server |
|
230 | 230 | ^^^^^^^^^^^^^^^^^^^^^^^^ |
|
231 | 231 | |
|
232 | 232 | You can check the IP Address of your |RCE| instance using the |
|
233 | 233 | following URL: ``{instance-URL}/_admin/ping``. |
|
234 | 234 | |
|
235 | 235 | .. code-block:: bash |
|
236 | 236 | |
|
237 | 237 | $ curl https://your.rhodecode.url/_admin/ping |
|
238 | 238 | pong[rce-7880] => 203.0.113.23 |
|
239 | 239 | |
|
240 | 240 | .. _Markdown: http://daringfireball.net/projects/markdown/ |
|
241 | 241 | .. _reStructured Text: http://docutils.sourceforge.io/docs/index.html |
|
242 | 242 | |
|
243 | 243 | |
|
244 | 244 | Unarchiving a repository |
|
245 | 245 | ^^^^^^^^^^^^^^^^^^^^^^^^^ |
|
246 | 246 | |
|
247 | 247 | Archive operation for the repository is similar to delete. Archive keeps the data for future references |
|
248 | 248 | but makes the repository read-only. After archiving the repository it shouldn't be modified in any way. |
|
249 | 249 | This is why repository settings are disabled for an archived repository. |
|
250 | 250 | |
|
251 | 251 | If there's a need for unarchiving a repository for some reason, the interactive |
|
252 | 252 | ishell interface should be used. |
|
253 | 253 | |
|
254 | 254 | .. code-block:: bash |
|
255 | 255 | |
|
256 | 256 | # Open iShell from the terminal |
|
257 | 257 | $ rccontrol ishell enterprise-1/community-1 |
|
258 | 258 | |
|
259 | 259 | .. code-block:: python |
|
260 | 260 | |
|
261 | 261 | # Set repository as un-archived |
|
262 | 262 | In [1]: repo = Repository.get_by_repo_name('SOME_REPO_NAME') |
|
263 | 263 | In [2]: repo.archived = False |
|
264 | 264 | In [3]: Session().add(repo);Session().commit() |
|
265 | 265 | |
|
266 | 266 | |
|
267 | 267 | |
|
268 | 268 | |
|
269 | 269 | Bulk change repository owner |
|
270 | 270 | ^^^^^^^^^^^^^^^^^^^^^^^^^^^^ |
|
271 | 271 | |
|
272 | 272 | Here's how one can change the owner of repositories for a user who has been deactivated. |

273 | 273 | Setting a new owner can be done via ishell for all repositories that the past owner had. |

274 | 274 | |

275 | 275 | To run this script, the interactive ishell interface should be used. |
|
276 | 276 | |
|
277 | 277 | .. code-block:: bash |
|
278 | 278 | |
|
279 | 279 | # Open iShell from the terminal |
|
280 | 280 | $ rccontrol ishell enterprise-1/community-1 |
|
281 | 281 | |
|
282 | 282 | |
|
283 | 283 | .. code-block:: python |
|
284 | 284 | |
|
285 | 285 | from rhodecode.model.db import User, Repository, Session |
|
286 | 286 | from rhodecode.model.permission import PermissionModel |
|
287 | 287 | |
|
288 | 288 | # replace old-owner and new-owner with your exact users |
|
289 | 289 | old_owner = User.get_by_username('old-owner') |
|
290 | 290 | new_owner = User.get_by_username('new-owner') |
|
291 | 291 | |
|
292 | 292 | # list of users we need to "flush" permissions |
|
293 | 293 | affected_user_ids = [new_owner.user_id, old_owner.user_id] |
|
294 | 294 | |
|
295 | 295 | for repo in Repository.get_all_repos(user_id=old_owner.user_id): |
|
296 | 296 | repo.user = new_owner |
|
297 | 297 | Session().add(repo) |
|
298 | 298 | Session().commit() |
|
299 | 299 | |
|
300 | 300 | PermissionModel().trigger_permission_flush(affected_user_ids) |
@@ -1,74 +1,74 b'' | |||
|
1 | 1 | .. _config-files: |
|
2 | 2 | |
|
3 | 3 | Configuration Files Overview |
|
4 | 4 | ============================ |
|
5 | 5 | |
|
6 | 6 | |RCE| and |RCC| have a number of different configuration files. The following |
|
7 | 7 | is a brief explanation of each, and links to their associated configuration |
|
8 | 8 | sections. |
|
9 | 9 | |
|
10 | 10 | .. rst-class:: dl-horizontal |
|
11 | 11 | |
|
12 | 12 | \- **rhodecode.ini** |
|
13 | 13 | Default location: |
|
14 | :file:` | 14 | :file:`config/_shared/rhodecode.ini` |
|
15 | 15 | |
|
16 | 16 | This is the main |RCE| configuration file and controls much of its |
|
17 | 17 | default behaviour. It is also used to configure certain customer |
|
18 | 18 | settings. Here are some of the most common reasons to make changes to |
|
19 | 19 | this file. |
|
20 | 20 | |
|
21 | 21 | * :ref:`config-database` |
|
22 | 22 | * :ref:`set-up-mail` |
|
23 | 23 | * :ref:`increase-gunicorn` |
|
24 | 24 | * :ref:`x-frame` |
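As a quick sketch, the kinds of options those sections adjust live in :file:`rhodecode.ini` alongside settings shown elsewhere in this guide; the values and the ``[app:main]`` section header here are illustrative, not recommended defaults:

```ini
[DEFAULT]
## debug mode, see the Enabling Debug Mode section
debug = false

[app:main]
## force https in RhodeCode, fixes https redirects, assumes it's always https
force_https = false

## use Strict-Transport-Security headers
use_htsts = false

## Optional Languages
## en(default), de, fr, it, ja, pl, pt, ru, zh
lang = en
```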
|
25 | 25 | |
|
26 | 26 | \- **search_mapping.ini** |
|
27 | 27 | Default location: |
|
28 | 28 | :file:`/home/{user}/.rccontrol/{instance-id}/search_mapping.ini` |
|
29 | 29 | |
|
30 | 30 | This file is used to control the |RCE| indexer. It comes configured |
|
31 | 31 | to index your instance. To change the default configuration, see |
|
32 | 32 | :ref:`advanced-indexing`. |
|
33 | 33 | |
|
34 | 34 | \- **vcsserver.ini** |
|
35 | 35 | Default location: |
|
36 | 36 | :file:`/home/{user}/.rccontrol/{vcsserver-id}/vcsserver.ini` |
|
37 | 37 | |
|
38 | 38 | The VCS Server handles the connection between your |repos| and |RCE|. |
|
39 | 39 | See the :ref:`vcs-server` section for configuration options and more |
|
40 | 40 | detailed information. |
|
41 | 41 | |
|
42 | 42 | \- **supervisord.ini** |
|
43 | 43 | Default location: |
|
44 | 44 | :file:`/home/{user}/.rccontrol/supervisor/supervisord.ini` |
|
45 | 45 | |
|
46 | 46 | |RCC| uses Supervisor to monitor and manage installed instances of |
|
47 | 47 | |RCE| and the VCS Server. |RCC| will manage this file completely, |
|
48 | 48 | unless you install |RCE| in self-managed mode. For more information, |
|
49 | 49 | see the :ref:`Supervisor Setup<control:supervisor-setup>` section. |
|
50 | 50 | |
|
51 | 51 | \- **.rccontrol.ini** |
|
52 | 52 | Default location: :file:`/home/{user}/.rccontrol.ini` |
|
53 | 53 | |
|
54 | 54 | This file contains the instances that |RCC| starts at boot, which is all |
|
55 | 55 | by default, but for more information, see |
|
56 | 56 | the :ref:`Manually Start At Boot <control:set-start-boot>` section. |
|
57 | 57 | |
|
58 | 58 | \- **.rhoderc** |
|
59 | 59 | Default location: :file:`/home/{user}/.rhoderc` |
|
60 | 60 | |
|
61 | 61 | This file is used by the |RCE| API when accessing an instance from a |
|
62 | 62 | remote machine. The API checks this file for connection and |
|
63 | 63 | authentication details. For more details, see the :ref:`config-rhoderc` |
|
64 | 64 | section. |
|
65 | 65 | |
|
66 | 66 | \- **MANIFEST** |
|
67 | 67 | Default location: :file:`/home/{user}/.rccontrol/cache/MANIFEST` |
|
68 | 68 | |
|
69 | 69 | |RCC| uses this file to source the latest available builds from the |
|
70 | 70 | secure RhodeCode download channels. The only reason to mess with this file |
|
71 | 71 | is if you need to do an offline installation, |
|
72 | 72 | see the :ref:`Offline Installation<control:offline-installer-ref>` |
|
73 | 73 | instructions, otherwise |RCC| will completely manage this file. |
|
74 | 74 |
@@ -1,148 +1,148 b'' | |||
|
1 | 1 | .. _debug-mode: |
|
2 | 2 | |
|
3 | 3 | Enabling Debug Mode |
|
4 | 4 | ------------------- |
|
5 | 5 | |
|
6 | 6 | Debug Mode will enable debug logging and request tracking middleware. Debug Mode |

7 | 7 | enables the DEBUG log-level, which allows tracking various information about authentication |

8 | 8 | failures, LDAP connections, email, etc. |
|
9 | 9 | |
|
10 | 10 | The request tracking will add a special |
|
11 | 11 | unique ID: `| req_id:00000000-0000-0000-0000-000000000000` at the end of each log line. |
|
12 | 12 | The req_id is the same within each individual request. This means that if you want to |

13 | 13 | track the logs of a particular request only, and exclude other concurrent ones, |

14 | 14 | simply grep by the `req_id` uuid, which you'll have to find for the individual request. |
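Assuming the instance log has been captured to a file, that filtering can be sketched like this; the log lines and the uuid below are fabricated for illustration:

```shell
# a sample log with two interleaved requests (fabricated lines)
cat > /tmp/rc-sample.log <<'EOF'
INFO  [rhodecode.auth] auth ok | req_id:aaaaaaaa-0000-0000-0000-000000000001
DEBUG [rhodecode.lib] cache hit | req_id:bbbbbbbb-0000-0000-0000-000000000002
DEBUG [rhodecode.auth] ldap bind | req_id:aaaaaaaa-0000-0000-0000-000000000001
EOF

# keep only the lines belonging to the first request
grep 'req_id:aaaaaaaa-0000-0000-0000-000000000001' /tmp/rc-sample.log
```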
|
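To make the req_id filtering concrete, here is a self-contained sketch; the log file path, the messages, and the UUIDs are made-up examples, not values a real instance will produce:

```shell
# Build a tiny sample log with two interleaved requests (all values are examples)
printf '%s\n' \
  '2024-05-20 10:00:00.000 [71] DEBUG [rhodecode.auth] auth ok | req_id:aaaa-bbbb' \
  '2024-05-20 10:00:00.050 [71] DEBUG [rhodecode.auth] auth fail | req_id:cccc-dddd' \
  '2024-05-20 10:00:00.100 [71] DEBUG [rhodecode.lib] response sent | req_id:aaaa-bbbb' \
  > /tmp/rhodecode_sample.log

# Keep only the lines belonging to one request
grep 'req_id:aaaa-bbbb' /tmp/rhodecode_sample.log
```

The same `grep` run against your real log file isolates a single request's lines.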
15 | 15 | |
|
16 | 16 | To enable debug mode on a |RCE| instance you need to set the debug property |
|
17 | in the :file:`config/_shared/rhodecode.ini` file. To |
|
18 | 18 | do this, use the following steps |
|
19 | 19 | |
|
20 | 20 | 1. Open the file and set the ``debug`` line to ``true`` |
|
21 | 21 | 2. Restart your instance using the ``rccontrol restart`` command, |
|
22 | 22 | see the following example: |
|
23 | 23 | |
|
24 | 24 | .. code-block:: ini |
|
25 | 25 | |
|
26 | 26 | [DEFAULT] |
|
27 | 27 | debug = true |
|
28 | 28 | |
|
29 | 29 | .. code-block:: bash |
|
30 | 30 | |
|
31 | 31 | # Restart your instance |
|
32 | 32 | $ rccontrol restart enterprise-1 |
|
33 | 33 | Instance "enterprise-1" successfully stopped. |
|
34 | 34 | Instance "enterprise-1" successfully started. |
|
35 | 35 | |
|
36 | 36 | |
|
37 | 37 | Debug and Logging Configuration |
|
38 | 38 | ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ |
|
39 | 39 | |
|
40 | 40 | Further debugging and logging settings can also be set in the |
|
41 | :file:`config/_shared/rhodecode.ini` file. |
|
42 | 42 | |
|
43 | 43 | In the logging section, the various packages that run with |RCE| can have |
|
44 | 44 | different debug levels set. If you want to change the logging level, set the |
|
45 | 45 | ``level = DEBUG`` line to one of the valid options. |
|
46 | 46 | |
|
47 | 47 | You also need to change the log level for handlers. See the example |
|
48 | 48 | ``##handler`` section below. The ``handler`` level takes the same options as |
|
49 | 49 | the ``debug`` level. |
|
50 | 50 | |
|
51 | 51 | .. code-block:: ini |
|
52 | 52 | |
|
53 | 53 | ################################ |
|
54 | 54 | ### LOGGING CONFIGURATION #### |
|
55 | 55 | ################################ |
|
56 | 56 | [loggers] |
|
57 | 57 | keys = root, sqlalchemy, beaker, celery, rhodecode, ssh_wrapper |
|
58 | 58 | |
|
59 | 59 | [handlers] |
|
60 | 60 | keys = console, console_sql, file, file_rotating |
|
61 | 61 | |
|
62 | 62 | [formatters] |
|
63 | 63 | keys = generic, color_formatter, color_formatter_sql |
|
64 | 64 | |
|
65 | 65 | ############# |
|
66 | 66 | ## LOGGERS ## |
|
67 | 67 | ############# |
|
68 | 68 | [logger_root] |
|
69 | 69 | level = NOTSET |
|
70 | 70 | handlers = console |
|
71 | 71 | |
|
72 | 72 | [logger_sqlalchemy] |
|
73 | 73 | level = INFO |
|
74 | 74 | handlers = console_sql |
|
75 | 75 | qualname = sqlalchemy.engine |
|
76 | 76 | propagate = 0 |
|
77 | 77 | |
|
78 | 78 | [logger_beaker] |
|
79 | 79 | level = DEBUG |
|
80 | 80 | handlers = |
|
81 | 81 | qualname = beaker.container |
|
82 | 82 | propagate = 1 |
|
83 | 83 | |
|
84 | 84 | [logger_rhodecode] |
|
85 | 85 | level = DEBUG |
|
86 | 86 | handlers = |
|
87 | 87 | qualname = rhodecode |
|
88 | 88 | propagate = 1 |
|
89 | 89 | |
|
90 | 90 | [logger_ssh_wrapper] |
|
91 | 91 | level = DEBUG |
|
92 | 92 | handlers = |
|
93 | 93 | qualname = ssh_wrapper |
|
94 | 94 | propagate = 1 |
|
95 | 95 | |
|
96 | 96 | [logger_celery] |
|
97 | 97 | level = DEBUG |
|
98 | 98 | handlers = |
|
99 | 99 | qualname = celery |
|
100 | 100 | |
|
101 | 101 | ############## |
|
102 | 102 | ## HANDLERS ## |
|
103 | 103 | ############## |
|
104 | 104 | |
|
105 | 105 | [handler_console] |
|
106 | 106 | class = StreamHandler |
|
107 | 107 | args = (sys.stderr, ) |
|
108 | 108 | level = DEBUG |
|
109 | 109 | formatter = generic |
|
110 | 110 | |
|
111 | 111 | [handler_console_sql] |
|
112 | 112 | class = StreamHandler |
|
113 | 113 | args = (sys.stderr, ) |
|
114 | 114 | level = INFO |
|
115 | 115 | formatter = generic |
|
116 | 116 | |
|
117 | 117 | [handler_file] |
|
118 | 118 | class = FileHandler |
|
119 | 119 | args = ('rhodecode_debug.log', 'a',) |
|
120 | 120 | level = INFO |
|
121 | 121 | formatter = generic |
|
122 | 122 | |
|
123 | 123 | [handler_file_rotating] |
|
124 | 124 | class = logging.handlers.TimedRotatingFileHandler |
|
125 | 125 | # 'D', 5 - rotate every 5 days |
|
126 | 126 | # you can set 'h', 'midnight' |
|
127 | 127 | args = ('rhodecode_debug_rotated.log', 'D', 5, 10,) |
|
128 | 128 | level = INFO |
|
129 | 129 | formatter = generic |
|
130 | 130 | |
|
131 | 131 | ################ |
|
132 | 132 | ## FORMATTERS ## |
|
133 | 133 | ################ |
|
134 | 134 | |
|
135 | 135 | [formatter_generic] |
|
136 | 136 | class = rhodecode.lib.logging_formatter.ExceptionAwareFormatter |
|
137 | 137 | format = %(asctime)s.%(msecs)03d [%(process)d] %(levelname)-5.5s [%(name)s] %(message)s |
|
138 | 138 | datefmt = %Y-%m-%d %H:%M:%S |
|
139 | 139 | |
|
140 | 140 | [formatter_color_formatter] |
|
141 | 141 | class = rhodecode.lib.logging_formatter.ColorFormatter |
|
142 | 142 | format = %(asctime)s.%(msecs)03d [%(process)d] %(levelname)-5.5s [%(name)s] %(message)s |
|
143 | 143 | datefmt = %Y-%m-%d %H:%M:%S |
|
144 | 144 | |
|
145 | 145 | [formatter_color_formatter_sql] |
|
146 | 146 | class = rhodecode.lib.logging_formatter.ColorFormatterSql |
|
147 | 147 | format = %(asctime)s.%(msecs)03d [%(process)d] %(levelname)-5.5s [%(name)s] %(message)s |
|
148 | 148 | datefmt = %Y-%m-%d %H:%M:%S No newline at end of file |
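The ``handler_file_rotating`` entry above maps directly onto the stdlib ``logging.handlers.TimedRotatingFileHandler`` constructor. A small Python sketch of the equivalent programmatic setup, using the stdlib ``Formatter`` in place of RhodeCode's ``ExceptionAwareFormatter`` (the logger name and message are examples):

```python
import logging
import logging.handlers

# Equivalent of: args = ('rhodecode_debug_rotated.log', 'D', 5, 10,)
# i.e. filename, when='D' (days), interval=5, backupCount=10
handler = logging.handlers.TimedRotatingFileHandler(
    "rhodecode_debug_rotated.log", when="D", interval=5, backupCount=10
)
handler.setLevel(logging.INFO)
handler.setFormatter(logging.Formatter(
    "%(asctime)s.%(msecs)03d [%(process)d] %(levelname)-5.5s [%(name)s] %(message)s",
    datefmt="%Y-%m-%d %H:%M:%S",
))

log = logging.getLogger("rhodecode.example")
log.addHandler(handler)
log.setLevel(logging.INFO)
log.info("rotating file handler configured")
```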
@@ -1,208 +1,208 b'' | |||
|
1 | 1 | .. _svn-http: |
|
2 | 2 | |
|
3 | 3 | |svn| With Write Over HTTP |
|
4 | 4 | ^^^^^^^^^^^^^^^^^^^^^^^^^^ |
|
5 | 5 | |
|
6 | 6 | To use |svn| with read/write support over the |svn| HTTP protocol, you have to |
|
7 | 7 | configure the HTTP |svn| backend. |
|
8 | 8 | |
|
9 | 9 | Prerequisites |
|
10 | 10 | ============= |
|
11 | 11 | |
|
12 | 12 | - Enable HTTP support inside the admin VCS settings on your |RCE| instance |
|
13 | 13 | - You need to install the following tools on the machine that is running an |
|
14 | 14 | instance of |RCE|: |
|
15 | 15 | ``Apache HTTP Server`` and ``mod_dav_svn``. |
|
16 | 16 | |
|
17 | 17 | |
|
18 | 18 | .. tip:: |
|
19 | 19 | |
|
20 | 20 | We recommend using the Wandisco repositories, which provide the latest SVN versions |
|
21 | 21 | for most platforms. If you skip this step, you'll have to ensure the client version |
|
22 | 22 | is compatible with the installed SVN version, which can differ depending on the operating system. |
|
23 | 23 | Here is an example of how to add the Wandisco repositories on Ubuntu. |
|
24 | 24 | |
|
25 | 25 | .. code-block:: bash |
|
26 | 26 | |
|
27 | 27 | $ sudo sh -c 'echo "deb http://opensource.wandisco.com/ubuntu `lsb_release -cs` svn110" >> /etc/apt/sources.list.d/subversion110.list' |
|
28 | 28 | $ sudo wget -q http://opensource.wandisco.com/wandisco-debian-new.gpg -O- | sudo apt-key add - |
|
29 | 29 | $ sudo apt-get update |
|
30 | 30 | |
|
31 | 31 | Here is an example how to add the Wandisco repositories for Centos/Redhat. Using |
|
32 | 32 | a yum config |
|
33 | 33 | |
|
34 | 34 | .. code-block:: bash |
|
35 | 35 | |
|
36 | 36 | [wandisco-Git] |
|
37 | 37 | name=CentOS-6 - Wandisco Git |
|
38 | 38 | baseurl=http://opensource.wandisco.com/centos/6/git/$basearch/ |
|
39 | 39 | enabled=1 |
|
40 | 40 | gpgcheck=1 |
|
41 | 41 | gpgkey=http://opensource.wandisco.com/RPM-GPG-KEY-WANdisco |
|
42 | 42 | |
|
43 | 43 | |
|
44 | 44 | |
|
45 | 45 | Example installation of required components for Ubuntu platform: |
|
46 | 46 | |
|
47 | 47 | .. code-block:: bash |
|
48 | 48 | |
|
49 | 49 | $ sudo apt-get install apache2 |
|
50 | 50 | $ sudo apt-get install libapache2-svn |
|
51 | 51 | |
|
52 | 52 | Once installed you need to enable ``dav_svn`` on Ubuntu: |
|
53 | 53 | |
|
54 | 54 | .. code-block:: bash |
|
55 | 55 | |
|
56 | 56 | $ sudo a2enmod dav_svn |
|
57 | 57 | $ sudo a2enmod headers |
|
58 | 58 | $ sudo a2enmod authn_anon |
|
59 | 59 | |
|
60 | 60 | |
|
61 | 61 | Example installation of required components for RedHat/CentOS platform: |
|
62 | 62 | |
|
63 | 63 | .. code-block:: bash |
|
64 | 64 | |
|
65 | 65 | $ sudo yum install httpd |
|
66 | 66 | $ sudo yum install subversion mod_dav_svn |
|
67 | 67 | |
|
68 | 68 | |
|
69 | 69 | Once installed you need to enable ``dav_svn`` on RedHat/CentOS: |
|
70 | 70 | |
|
71 | 71 | .. code-block:: bash |
|
72 | 72 | |
|
73 | 73 | sudo vi /etc/httpd/conf.modules.d/10-subversion.conf |
|
74 | 74 | ## The file should read: |
|
75 | 75 | |
|
76 | 76 | LoadModule dav_svn_module modules/mod_dav_svn.so |
|
77 | 77 | LoadModule headers_module modules/mod_headers.so |
|
78 | 78 | LoadModule authn_anon_module modules/mod_authn_anon.so |
|
79 | 79 | |
|
80 | 80 | .. tip:: |
|
81 | 81 | |
|
82 | 82 | To check the installed mod_dav_svn module version, you can use such command. |
|
83 | 83 | |
|
84 | 84 | `strings /usr/lib/apache2/modules/mod_dav_svn.so | grep 'Powered by'` |
|
85 | 85 | |
|
86 | 86 | |
|
87 | 87 | Configuring Apache Setup |
|
88 | 88 | ======================== |
|
89 | 89 | |
|
90 | 90 | .. tip:: |
|
91 | 91 | |
|
92 | 92 | It is recommended to run Apache on a port other than 80, due to possible |
|
93 | 93 | conflicts with other HTTP servers like nginx. To do this, set the |
|
94 | 94 | ``Listen`` parameter in the ``/etc/apache2/ports.conf`` file, for example |
|
95 | 95 | ``Listen 8090``. |
|
96 | 96 | |
|
97 | 97 | |
|
98 | 98 | .. warning:: |
|
99 | 99 | |
|
100 | 100 | Make sure your Apache instance which runs the mod_dav_svn module is |
|
101 | 101 | only accessible by |RCE|. Otherwise everyone is able to browse |
|
102 | 102 | the repositories or run subversion operations (checkout/commit/etc.). |
|
103 | 103 | |
|
104 | 104 | It is also recommended to run apache as the same user as |RCE|, otherwise |
|
105 | 105 | permission issues could occur. To do this edit the ``/etc/apache2/envvars`` |
|
106 | 106 | |
|
107 | 107 | .. code-block:: apache |
|
108 | 108 | |
|
109 | 109 | export APACHE_RUN_USER=rhodecode |
|
110 | 110 | export APACHE_RUN_GROUP=rhodecode |
|
111 | 111 | |
|
112 | 112 | 1. To configure Apache, create and edit a virtual hosts file, for example |
|
113 | 113 | :file:`/etc/apache2/sites-enabled/default.conf`. Below is an example |
|
114 | 114 | how to use one with auto-generated config ```mod_dav_svn.conf``` |
|
115 | 115 | from configured |RCE| instance. |
|
116 | 116 | |
|
117 | 117 | .. code-block:: apache |
|
118 | 118 | |
|
119 | 119 | <VirtualHost *:8090> |
|
120 | 120 | ServerAdmin rhodecode-admin@localhost |
|
121 | 121 | DocumentRoot /var/www/html |
|
122 | 122 | ErrorLog ${'${APACHE_LOG_DIR}'}/error.log |
|
123 | 123 | CustomLog ${'${APACHE_LOG_DIR}'}/access.log combined |
|
124 | 124 | LogLevel info |
|
125 | 125 | # allows custom host names, prevents 400 errors on checkout |
|
126 | 126 | HttpProtocolOptions Unsafe |
|
127 | 127 | # Most likely this will be: /home/user/.rccontrol/enterprise-1/mod_dav_svn.conf |
|
128 | 128 | Include /home/user/.rccontrol/enterprise-1/mod_dav_svn.conf |
|
129 | 129 | </VirtualHost> |
|
130 | 130 | |
|
131 | 131 | |
|
132 | 132 | 2. Go to the :menuselection:`Admin --> Settings --> VCS` page, and |
|
133 | 133 | enable :guilabel:`Proxy Subversion HTTP requests`, and specify the |
|
134 | 134 | :guilabel:`Subversion HTTP Server URL`. |
|
135 | 135 | |
|
136 | 136 | 3. Open the |RCE| configuration file, |
|
137 | :file:`config/_shared/rhodecode.ini` |
|
138 | 138 | |
|
139 | 139 | 4. Add the following configuration option in the ``[app:main]`` |
|
140 | 140 | section if you don't have it yet. |
|
141 | 141 | |
|
142 | 142 | This enables mapping of the created |RCE| repo groups into special |
|
143 | 143 | |svn| paths. Each time a new repository group is created, the system will |
|
144 | 144 | update the template file and create new mapping. Apache web server needs to |
|
145 | 145 | be reloaded to pick up the changes on this file. |
|
146 | 146 | To do this, simply configure `svn.proxy.reload_cmd` inside the .ini file. |
|
147 | 147 | Example configuration: |
|
148 | 148 | |
|
149 | 149 | |
|
150 | 150 | .. code-block:: ini |
|
151 | 151 | |
|
152 | 152 | ############################################################ |
|
153 | 153 | ### Subversion proxy support (mod_dav_svn) ### |
|
154 | 154 | ### Maps RhodeCode repo groups into SVN paths for Apache ### |
|
155 | 155 | ############################################################ |
|
156 | 156 | ## Enable or disable the config file generation. |
|
157 | 157 | svn.proxy.generate_config = true |
|
158 | 158 | ## Generate config file with `SVNListParentPath` set to `On`. |
|
159 | 159 | svn.proxy.list_parent_path = true |
|
160 | 160 | ## Set location and file name of generated config file. |
|
161 | 161 | svn.proxy.config_file_path = %(here)s/mod_dav_svn.conf |
|
162 | 162 | ## Used as a prefix to the <Location> block in the generated config file. |
|
163 | 163 | ## In most cases it should be set to `/`. |
|
164 | 164 | svn.proxy.location_root = / |
|
165 | 165 | ## Command to reload the mod dav svn configuration on change. |
|
166 | 166 | ## Example: `/etc/init.d/apache2 reload` |
|
167 | 167 | svn.proxy.reload_cmd = /etc/init.d/apache2 reload |
|
168 | 168 | ## If the timeout expires before the reload command finishes, the command will |
|
169 | 169 | ## be killed. Setting it to zero means no timeout. Defaults to 10 seconds. |
|
170 | 170 | #svn.proxy.reload_timeout = 10 |
|
171 | 171 | |
|
172 | 172 | |
|
173 | 173 | This would create a special template file called ```mod_dav_svn.conf```. We |
|
174 | 174 | used that file path in the apache config above inside the Include statement. |
|
175 | 175 | It's also possible to manually generate the config from the |
|
176 | 176 | :menuselection:`Admin --> Settings --> VCS` page by clicking a |
|
177 | 177 | `Generate Apache Config` button. |
|
178 | 178 | |
|
179 | 179 | 5. Now only things left is to enable svn support, and generate the initial |
|
180 | 180 | configuration. |
|
181 | 181 | |
|
182 | 182 | - Select `Proxy subversion HTTP requests` checkbox |
|
183 | 183 | - Enter http://localhost:8090 into `Subversion HTTP Server URL` |
|
184 | 184 | - Click the `Generate Apache Config` button. |
|
185 | 185 | |
|
186 | 186 | This config will be automatically re-generated once an user-groups is added |
|
187 | 187 | to properly map the additional paths generated. |
|
188 | 188 | |
|
189 | 189 | |
|
190 | 190 | |
|
191 | 191 | Using |svn| |
|
192 | 192 | =========== |
|
193 | 193 | |
|
194 | 194 | Once |svn| has been enabled on your instance, you can use it with the |
|
195 | 195 | following examples. For more |svn| information, see the `Subversion Red Book`_ |
|
196 | 196 | |
|
197 | 197 | .. code-block:: bash |
|
198 | 198 | |
|
199 | 199 | # To clone a repository |
|
200 | 200 | svn checkout http://my-svn-server.example.com/my-svn-repo |
|
201 | 201 | |
|
202 | 202 | # svn commit |
|
203 | 203 | svn commit |
|
204 | 204 | |
|
205 | 205 | |
|
206 | 206 | .. _Subversion Red Book: http://svnbook.red-bean.com/en/1.7/svn-book.html#svn.ref.svn |
|
207 | 207 | |
|
208 | 208 | .. _Ask Ubuntu: http://askubuntu.com/questions/162391/how-do-i-fix-my-locale-issue No newline at end of file |
@@ -1,22 +1,22 b'' | |||
|
1 | 1 | .. _change-encoding: |
|
2 | 2 | |
|
3 | 3 | Change Default Encoding |
|
4 | 4 | ----------------------- |
|
5 | 5 | |
|
6 | 6 | |RCE| uses ``utf8`` encoding by default. You can change the default encoding |
|
7 | in the :file:`config/_shared/rhodecode.ini` file. To |
|
8 | 8 | change the default encoding used by |RCE|, set a new value for the |
|
9 | 9 | ``default_encoding``. |
|
10 | 10 | |
|
11 | 11 | .. code-block:: ini |
|
12 | 12 | |
|
13 | 13 | # default encoding used to convert from and to unicode |
|
14 | 14 | # can be also a comma separated list of encoding in case of mixed |
|
15 | 15 | # encodings |
|
16 | 16 | default_encoding = utf8 |
|
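To illustrate what a comma-separated list of encodings implies, here is a small fallback-decoding sketch in Python. This is only an illustration of the concept, not RhodeCode's actual implementation:

```python
def decode_with_fallback(data: bytes, encodings=("utf8", "latin1")) -> str:
    """Try each configured encoding in order, like a comma-separated
    default_encoding list; use replacement characters as a last resort."""
    for enc in encodings:
        try:
            return data.decode(enc)
        except UnicodeDecodeError:
            continue
    return data.decode(encodings[0], errors="replace")

# utf8 succeeds directly
print(decode_with_fallback("zażółć".encode("utf8")))
# invalid utf8 falls back to latin1, which accepts any byte sequence
print(decode_with_fallback(b"caf\xe9"))
```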
17 | 17 | |
|
18 | 18 | .. note:: |
|
19 | 19 | |
|
20 | 20 | Changing the default encoding will affect many parts of your |RCE| |
|
21 | 21 | installation, including committer names, |
|
22 | 22 | file names, and the encoding of commit messages. |
@@ -1,17 +1,17 b'' | |||
|
1 | 1 | .. _hg-auth-loop: |
|
2 | 2 | |
|
3 | 3 | |hg| Authentication Tuning |
|
4 | 4 | -------------------------- |
|
5 | 5 | |
|
6 | 6 | When using external authentication tools such as LDAP with |hg|, a |
|
7 | 7 | password retry loop in |hg| can result in users being locked out due to too |
|
8 | 8 | many failed password attempts. To prevent this from happening, add the |
|
9 | 9 | following setting to your |
|
10 | :file:`config/_shared/rhodecode.ini` file, in the |
|
11 | 11 | ``[app:main]`` section. |
|
12 | 12 | |
|
13 | 13 | |
|
14 | 14 | .. code-block:: ini |
|
15 | 15 | |
|
16 | 16 | [app:main] |
|
17 | 17 | auth_ret_code_detection = true |
@@ -1,396 +1,396 b'' | |||
|
1 | 1 | .. _scale-horizontal-cluster: |
|
2 | 2 | |
|
3 | 3 | |
|
4 | 4 | Scale Horizontally / RhodeCode Cluster |
|
5 | 5 | -------------------------------------- |
|
6 | 6 | |
|
7 | 7 | |RCE| is built to support horizontal scaling across multiple machines. |
|
8 | 8 | There are three main prerequisites for that: |
|
9 | 9 | |
|
10 | 10 | - Shared storage that each machine can access, using NFS or another shared storage system. |
|
11 | 11 | - Shared DB connection across machines. Using `MySQL`/`PostgreSQL` that each node can access. |
|
12 | 12 | - |RCE| user sessions and caches need to use a shared storage (e.g `Redis`_/`Memcached`) |
|
13 | 13 | |
|
14 | 14 | |
|
15 | 15 | Horizontal scaling means adding more machines or workers into your pool of |
|
16 | 16 | resources. Horizontally scaling |RCE| gives a huge performance increase, |
|
17 | 17 | especially under large traffic scenarios with a high number of requests. |
|
18 | 18 | This is very beneficial when |RCE| is serving many users simultaneously, |
|
19 | 19 | or if continuous integration servers are automatically pulling and pushing code. |
|
20 | 20 | It also adds High-Availability to your running system. |
|
21 | 21 | |
|
22 | 22 | |
|
23 | 23 | Cluster Overview |
|
24 | 24 | ^^^^^^^^^^^^^^^^ |
|
25 | 25 | |
|
26 | 26 | Below we'll present a configuration example that will use two separate nodes to serve |
|
27 | 27 | |RCE| in a load-balanced environment. The 3rd node will act as a shared storage/cache |
|
28 | 28 | and handle load-balancing. In addition, the 3rd node will host the shared database instance. |
|
29 | 29 | |
|
30 | 30 | This setup can be used both in Docker based configuration or with individual |
|
31 | 31 | physical/virtual machines. Using the 3rd node for Storage/Redis/PostgreSQL/Nginx is |
|
32 | 32 | optional. All those components can be installed on one of the two nodes used for |RCE|. |
|
33 | 33 | We'll use following naming for our nodes: |
|
34 | 34 | |
|
35 | 35 | - `rc-node-1` (NFS, DB, Cache node) |
|
36 | 36 | - `rc-node-2` (Worker node1) |
|
37 | 37 | - `rc-node-3` (Worker node2) |
|
38 | 38 | |
|
39 | 39 | Our shared NFS storage in this example is located at `/home/rcdev/storage` and |
|
40 | 40 | is RW-accessible on **each** node. |
|
41 | 41 | |
|
42 | 42 | In this example we used certain recommended components, however many |
|
43 | 43 | of them can be replaced by others your organization may already use, for example: |
|
44 | 44 | |
|
45 | 45 | - `MySQL`/`PostgreSQL`: not replaceable; these are the only two supported databases. |
|
46 | 46 | - `Nginx`_ on `rc-node-1` can be replaced by: `Hardware Load Balancer (F5)`, `Apache`_, `HA-Proxy` etc. |
|
47 | 47 | - `Nginx`_ on rc-node-2/3 acts as a reverse proxy and can be replaced by other HTTP server |
|
48 | 48 | acting as reverse proxy such as `Apache`_. |
|
49 | 49 | - `Redis`_ on `rc-node-1` can be replaced by: `Memcached` |
|
50 | 50 | |
|
51 | 51 | |
|
52 | 52 | Here's an overview of what components should be installed/set up on each server in our example: |
|
53 | 53 | |
|
54 | 54 | - **rc-node-1**: |
|
55 | 55 | |
|
56 | 56 | - main storage acting as NFS host. |
|
57 | 57 | - `nginx` acting as a load-balancer. |
|
58 | 58 | - `postgresql-server` used for database and sessions. |
|
59 | 59 | - `redis-server` used for storing shared caches. |
|
60 | 60 | - optionally `rabbitmq-server` or `redis` for `Celery` if used. |
|
61 | 61 | - optionally, if `Celery` is used, an Enterprise/Community instance + VCSServer. |
|
62 | 62 | - optionally a mail server that can be shared by other instances. |
|
63 | 63 | - optionally a channelstream server to handle live communication for all instances. |
|
64 | 64 | |
|
65 | 65 | |
|
66 | 66 | - **rc-node-2/3**: |
|
67 | 67 | |
|
68 | 68 | - `nginx` acting as a reverse proxy to handle requests to |RCE|. |
|
69 | 69 | - 1x RhodeCode Enterprise/Community instance. |
|
70 | 70 | - 1x VCSServer instance. |
|
71 | 71 | - optionally for testing connection: postgresql-client, redis-client (redis-tools). |
|
72 | 72 | |
|
73 | 73 | |
|
74 | 74 | Before we start, here are a few assumptions that should be fulfilled: |
|
75 | 75 | |
|
76 | 76 | - make sure each node can access each other. |
|
77 | 77 | - make sure `Redis`_/`MySQL`/`PostgreSQL`/`RabbitMQ`_ are running on `rc-node-1` |
|
78 | 78 | - make sure both `rc-node-2`/`3` can access NFS storage with RW access |
|
79 | 79 | - make sure rc-node-2/3 can access `Redis`_/`PostgreSQL`, `MySQL` database on `rc-node-1`. |
|
80 | 80 | - make sure `Redis`_/Database/`RabbitMQ`_ are password protected and accessible only from rc-node-2/3. |
|
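A quick way to verify those assumptions from rc-node-2/3 is a plain TCP reachability check; the hostname `rc-node-1` and the ports below are the ones assumed in this example setup:

```python
import socket

def port_open(host: str, port: int, timeout: float = 3.0) -> bool:
    """Return True if a TCP connection to host:port succeeds."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

# Ports used in this guide: Redis (6379) and PostgreSQL (5432) on rc-node-1
for service, port in [("redis", 6379), ("postgresql", 5432)]:
    print(f"{service} on rc-node-1 reachable: {port_open('rc-node-1', port)}")
```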
81 | 81 | |
|
82 | 82 | |
|
83 | 83 | |
|
84 | 84 | Setup rc-node-2/3 |
|
85 | 85 | ^^^^^^^^^^^^^^^^^ |
|
86 | 86 | |
|
87 | 87 | Initially before `rc-node-1` we'll configure both nodes 2 and 3 to operate as standalone |
|
88 | 88 | nodes with their own hostnames. Use the default installation settings, and use |
|
89 | 89 | the default local addresses (127.0.0.1) to configure VCSServer and Community/Enterprise instances. |
|
90 | 90 | All external connectivity will be handled by the reverse proxy (`Nginx`_ in our example). |
|
91 | 91 | |
|
92 | 92 | This way we can ensure each individual host works, |
|
93 | 93 | accepts connections, or perform some operations explicitly on a chosen node. |
|
94 | 94 | |
|
95 | 95 | In addition this allows us to explicitly direct certain traffic to a node, e.g. a |
|
96 | 96 | CI server that will only call `rc-node-3` directly. This should be done similarly to a normal |
|
97 | 97 | installation, so check out the `Nginx`_/`Apache`_ configuration examples to configure each host. |
|
98 | 98 | Each one should already connect to the shared database during installation. |
|
99 | 99 | |
|
100 | 100 | |
|
101 | 101 | 1) Assuming our final url will be http://rc-node-1, Configure `instances_id`, `app.base_url` |
|
102 | 102 | |
|
103 | a) On **rc-node-2** find the following settings and edit :file:`config/_shared/rhodecode.ini` |
|
104 | 104 | |
|
105 | 105 | .. code-block:: ini |
|
106 | 106 | |
|
107 | 107 | ## required format is: *NAME- |
|
108 | 108 | instance_id = *rc-node-2- |
|
109 | 109 | app.base_url = http://rc-node-1 |
|
110 | 110 | |
|
111 | 111 | |
|
112 | b) On **rc-node-3** find the following settings and edit :file:`config/_shared/rhodecode.ini` |
|
113 | 113 | |
|
114 | 114 | .. code-block:: ini |
|
115 | 115 | |
|
116 | 116 | ## required format is: *NAME- |
|
117 | 117 | instance_id = *rc-node-3- |
|
118 | 118 | app.base_url = http://rc-node-1 |
|
119 | 119 | |
|
120 | 120 | |
|
121 | 121 | |
|
122 | 122 | 2) Configure `User Session` to use a shared database. Example config that should be |
|
123 | 123 | changed on both **rc-node-2** and **rc-node-3** . |
|
124 | Edit :file:`config/_shared/rhodecode.ini` |
|
125 | 125 | |
|
126 | 126 | .. code-block:: ini |
|
127 | 127 | |
|
128 | 128 | #################################### |
|
129 | 129 | ### BEAKER SESSION #### |
|
130 | 130 | #################################### |
|
131 | 131 | |
|
132 | 132 | ## Disable the default `file` sessions |
|
133 | 133 | #beaker.session.type = file |
|
134 | 134 | #beaker.session.data_dir = %(here)s/data/sessions |
|
135 | 135 | |
|
136 | 136 | ## use shared db based session, fast, and allows easy management over logged in users |
|
137 | 137 | beaker.session.type = ext:database |
|
138 | 138 | beaker.session.table_name = db_session |
|
139 | 139 | # use our rc-node-1 here |
|
140 | 140 | beaker.session.sa.url = postgresql://postgres:qweqwe@rc-node-1/rhodecode |
|
141 | 141 | beaker.session.sa.pool_recycle = 3600 |
|
142 | 142 | beaker.session.sa.echo = false |
|
143 | 143 | |
|
144 | 144 | In addition make sure both instances use the same `session.secret` so users have |
|
145 | 145 | persistent sessions across nodes. Please generate a different secret than the one in this example. |
|
146 | 146 | |
|
147 | 147 | .. code-block:: ini |
|
148 | 148 | |
|
149 | 149 | # use a unique generated long string |
|
150 | 150 | beaker.session.secret = 70e116cae2274656ba7265fd860aebbd |
|
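One way to generate a suitably long random secret (any unique random hex string works; this sketch uses Python's stdlib):

```shell
python3 -c "import secrets; print(secrets.token_hex(16))"
```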
151 | 151 | |
|
152 | 152 | 3) Configure the cache and archive cache storage to use our shared NFS on `rc-node-1` |
|
153 | 153 | |
|
154 | 154 | .. code-block:: ini |
|
155 | 155 | |
|
156 | 156 | # note the `_` prefix that allows using a directory without |
|
157 | 157 | # remap and rescan checking for vcs inside it. |
|
158 | 158 | cache_dir = /home/rcdev/storage/_cache_dir/data |
|
159 | 159 | # note archive cache dir is disabled by default, however if you enable |
|
160 | 160 | # it also needs to be shared |
|
161 | 161 | #archive_cache_dir = /home/rcdev/storage/_tarball_cache_dir |
|
162 | 162 | |
|
163 | 163 | |
|
164 | 164 | 4) Use shared exception store. Example config that should be |
|
165 | 165 | changed on both **rc-node-2** and **rc-node-3**, and also for VCSServer. |
|
166 | Edit :file:`config/_shared/rhodecode.ini` and |
|
167 | 167 | :file:`/home/{user}/.rccontrol/{vcsserver-instance-id}/vcsserver.ini` |
|
168 | 168 | and add/change following setting. |
|
169 | 169 | |
|
170 | 170 | .. code-block:: ini |
|
171 | 171 | |
|
172 | 172 | exception_tracker.store_path = /home/rcdev/storage/_exception_store_data |
|
173 | 173 | |
|
174 | 174 | |
|
175 | 175 | 5) Change cache backends to use `Redis`_ based caches. Below is a full example config |
|
176 | 176 | that replaces the default file-based cache with shared `Redis`_ and a distributed lock. |
|
177 | 177 | |
|
178 | 178 | |
|
179 | 179 | .. code-block:: ini |
|
180 | 180 | |
|
181 | 181 | ##################################### |
|
182 | 182 | ### DOGPILE CACHE #### |
|
183 | 183 | ##################################### |
|
184 | 184 | |
|
185 | 185 | ## `cache_perms` cache settings for permission tree, auth TTL. |
|
186 | 186 | #rc_cache.cache_perms.backend = dogpile.cache.rc.file_namespace |
|
187 | 187 | #rc_cache.cache_perms.expiration_time = 300 |
|
188 | 188 | |
|
189 | 189 | ## alternative `cache_perms` redis backend with distributed lock |
|
190 | 190 | rc_cache.cache_perms.backend = dogpile.cache.rc.redis |
|
191 | 191 | rc_cache.cache_perms.expiration_time = 300 |
|
192 | 192 | ## redis_expiration_time needs to be greater than expiration_time |
|
193 | 193 | rc_cache.cache_perms.arguments.redis_expiration_time = 7200 |
|
194 | 194 | rc_cache.cache_perms.arguments.socket_timeout = 30 |
|
195 | 195 | rc_cache.cache_perms.arguments.host = rc-node-1 |
|
196 | 196 | rc_cache.cache_perms.arguments.password = qweqwe |
|
197 | 197 | rc_cache.cache_perms.arguments.port = 6379 |
|
198 | 198 | rc_cache.cache_perms.arguments.db = 0 |
|
199 | 199 | rc_cache.cache_perms.arguments.distributed_lock = true |
|
200 | 200 | |
|
201 | 201 | ## `cache_repo` cache settings for FileTree, Readme, RSS FEEDS |
|
202 | 202 | #rc_cache.cache_repo.backend = dogpile.cache.rc.file_namespace |
|
203 | 203 | #rc_cache.cache_repo.expiration_time = 2592000 |
|
204 | 204 | |
|
205 | 205 | ## alternative `cache_repo` redis backend with distributed lock |
|
206 | 206 | rc_cache.cache_repo.backend = dogpile.cache.rc.redis |
|
207 | 207 | rc_cache.cache_repo.expiration_time = 2592000 |
|
208 | 208 | ## redis_expiration_time needs to be greater than expiration_time |
|
209 | 209 | rc_cache.cache_repo.arguments.redis_expiration_time = 2678400 |
|
210 | 210 | rc_cache.cache_repo.arguments.socket_timeout = 30 |
|
211 | 211 | rc_cache.cache_repo.arguments.host = rc-node-1 |
|
212 | 212 | rc_cache.cache_repo.arguments.password = qweqwe |
|
213 | 213 | rc_cache.cache_repo.arguments.port = 6379 |
|
214 | 214 | rc_cache.cache_repo.arguments.db = 1 |
|
215 | 215 | rc_cache.cache_repo.arguments.distributed_lock = true |
|
216 | 216 | |
|
217 | 217 | ## cache settings for SQL queries, this needs to use memory type backend |
|
218 | 218 | rc_cache.sql_cache_short.backend = dogpile.cache.rc.memory_lru |
|
219 | 219 | rc_cache.sql_cache_short.expiration_time = 30 |
|
220 | 220 | |
|
221 | 221 | ## `cache_repo_longterm` cache for repo object instances, this needs to use memory |
|
222 | 222 | ## type backend as the objects kept are not pickle serializable |
|
223 | 223 | rc_cache.cache_repo_longterm.backend = dogpile.cache.rc.memory_lru |
|
224 | 224 | ## by default we use 96H, this is using invalidation on push anyway |
|
225 | 225 | rc_cache.cache_repo_longterm.expiration_time = 345600 |
|
226 | 226 | ## max items in LRU cache, reduce this number to save memory, and expire last used |
|
227 | 227 | ## cached objects |
|
228 | 228 | rc_cache.cache_repo_longterm.max_size = 10000 |
|
229 | 229 | |
|
230 | 230 | |
|
231 | 231 | 6) Configure `Nginx`_ as reverse proxy on `rc-node-2/3`: |
|
232 | 232 | Minimal `Nginx`_ config used: |
|
233 | 233 | |
|
234 | 234 | |
|
235 | 235 | .. code-block:: nginx |
|
236 | 236 | |
|
237 | 237 | ## rate limiter for certain pages to prevent brute force attacks |
|
238 | 238 | limit_req_zone $binary_remote_addr zone=req_limit:10m rate=1r/s; |
|
239 | 239 | |
|
240 | 240 | ## custom log format |
|
241 | 241 | log_format log_custom '$remote_addr - $remote_user [$time_local] ' |
|
242 | 242 | '"$request" $status $body_bytes_sent ' |
|
243 | 243 | '"$http_referer" "$http_user_agent" ' |
|
244 | 244 | '$request_time $upstream_response_time $pipe'; |
|
245 | 245 | |
|
246 | 246 | server { |
|
247 | 247 | listen 80; |
|
248 | 248 | server_name rc-node-2; |
|
249 | 249 | #server_name rc-node-3; |
|
250 | 250 | |
|
251 | 251 | access_log /var/log/nginx/rhodecode.access.log log_custom; |
|
252 | 252 | error_log /var/log/nginx/rhodecode.error.log; |
|
253 | 253 | |
|
254 | 254 | # example of proxy.conf can be found in our docs. |
|
255 | 255 | include /etc/nginx/proxy.conf; |
|
256 | 256 | |
|
257 | 257 | ## serve static files by Nginx, recommended for performance |
|
258 | 258 | location /_static/rhodecode { |
|
259 | 259 | gzip on; |
|
260 | 260 | gzip_min_length 500; |
|
261 | 261 | gzip_proxied any; |
|
262 | 262 | gzip_comp_level 4; |
|
263 | 263 | gzip_types text/css text/javascript text/xml text/plain text/x-component application/javascript application/json application/xml application/rss+xml font/truetype font/opentype application/vnd.ms-fontobject image/svg+xml; |
|
264 | 264 | gzip_vary on; |
|
265 | 265 | gzip_disable "msie6"; |
|
266 | 266 | expires 60d; |
|
267 | 267 | #alias /home/rcdev/.rccontrol/community-1/static; |
|
268 | 268 | alias /home/rcdev/.rccontrol/enterprise-1/static; |
|
269 | 269 | } |
|
270 | 270 | |
|
271 | 271 | |
|
272 | 272 | location /_admin/login { |
|
273 | 273 | limit_req zone=req_limit burst=10 nodelay; |
|
274 | 274 | try_files $uri @rhode; |
|
275 | 275 | } |
|
276 | 276 | |
|
277 | 277 | location / { |
|
278 | 278 | try_files $uri @rhode; |
|
279 | 279 | } |
|
280 | 280 | |
|
281 | 281 | location @rhode { |
|
282 | 282 | # Url to running RhodeCode instance. |
|
283 | 283 | # This is shown as `- URL: <host>` in output from rccontrol status. |
|
284 | 284 | proxy_pass http://127.0.0.1:10020; |
|
285 | 285 | } |
|
286 | 286 | |
|
287 | 287 | ## custom 502 error page. Will be displayed while RhodeCode server |
|
288 | 288 | ## is turned off |
|
289 | 289 | error_page 502 /502.html; |
|
290 | 290 | location = /502.html { |
|
291 | 291 | #root /home/rcdev/.rccontrol/community-1/static; |
|
292 | 292 | root /home/rcdev/.rccontrol/enterprise-1/static; |
|
293 | 293 | } |
|
294 | 294 | } |
|
295 | 295 | |
|
296 | 296 | |
|
297 | 297 | 7) Optional: Full text search. In case you use `Whoosh` full text search, we also need a |
|
298 | 298 | shared storage for the index. In our example our NFS is mounted at `/home/rcdev/storage`, |
|
299 | 299 | which represents our storage, so we can use the following: |
|
300 | 300 | |
|
301 | 301 | .. code-block:: ini |
|
302 | 302 | |
|
303 | 303 | # note the `_` prefix that allows using a directory without |
|
304 | 304 | # remap and rescan checking for vcs inside it. |
|
305 | 305 | search.location = /home/rcdev/storage/_index_data/index |
|
306 | 306 | |
|
307 | 307 | |
|
308 | 308 | .. note:: |
|
309 | 309 | |
|
310 | 310 | If you use ElasticSearch, it is shared by default, and simply running an ES node |
|
311 | 311 | is cluster compatible by default. |
|
312 | 312 | |
|
313 | 313 | |
|
314 | 314 | 8) Optional: If you intend to use mailing, all instances need to use either a shared |
|
315 | 315 | mailing node, or each will use an individual local mail agent. Simply put, node-1/2/3 |
|
316 | 316 | need to use the same mailing configuration. |
|
317 | 317 | |
|
318 | 318 | |
|
319 | 319 | |
|
320 | 320 | Setup rc-node-1 |
|
321 | 321 | ^^^^^^^^^^^^^^^ |
|
322 | 322 | |
|
323 | 323 | |
|
324 | 324 | Configure `Nginx`_ as Load Balancer to rc-node-2/3. |
|
325 | 325 | Minimal `Nginx`_ example below: |
|
326 | 326 | |
|
327 | 327 | .. code-block:: nginx |
|
328 | 328 | |
|
329 | 329 | ## define rc-cluster which contains a pool of our instances to connect to |
|
330 | 330 | upstream rc-cluster { |
|
331 | 331 | # rc-node-2/3 are stored in /etc/hosts with correct IP addresses |
|
332 | 332 | server rc-node-2:80; |
|
333 | 333 | server rc-node-3:80; |
|
334 | 334 | } |
|
335 | 335 | |
|
336 | 336 | server { |
|
337 | 337 | listen 80; |
|
338 | 338 | server_name rc-node-1; |
|
339 | 339 | |
|
340 | 340 | location / { |
|
341 | 341 | proxy_pass http://rc-cluster; |
|
342 | 342 | } |
|
343 | 343 | } |
|
344 | 344 | |
|
345 | 345 | |
|
346 | 346 | .. note:: |
|
347 | 347 | |
|
348 | 348 | You should configure your load balancing accordingly. We recommend writing |
|
349 | 349 | load balancing rules that will separate regular user traffic from |
|
350 | 350 | automated process traffic like continuous integration servers or build bots. Sticky sessions |
|
351 | 351 | are not required. |
|
352 | 352 | |
|
353 | 353 | |
|
354 | 354 | Show which instance handles a request |
|
355 | 355 | ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ |
|
356 | 356 | |
|
357 | 357 | You can easily check if load-balancing is working as expected. Visit our main node |
|
358 | 358 | `rc-node-1` URL, which at that point should already handle incoming requests and balance |
|
359 | 359 | them across node-2/3. |
|
360 | 360 | |
|
361 | 361 | Add a special GET param `?showrcid=1` to show the current instance handling your request. |
|
362 | 362 | |
|
363 | 363 | For example: visiting the url `http://rc-node-1/?showrcid=1` will show, at the bottom |
|
364 | 364 | of the screen, the cluster instance info, |
|
365 | 365 | e.g: `RhodeCode instance id: rc-node-3-rc-node-3-3246`, |
|
366 | 366 | which is generated from:: |
|
367 | 367 | |
|
368 | 368 | <NODE_HOSTNAME>-<INSTANCE_ID>-<WORKER_PID> |
|
369 | 369 | |
|
370 | 370 | |
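The id format above can also be taken apart programmatically, for example when aggregating logs per worker. A minimal Python sketch (the sample id is taken from the example output above; the parser and its assumptions about dashes are ours, not part of RhodeCode):

```python
def parse_instance_id(raw: str) -> dict:
    """Split <NODE_HOSTNAME>-<INSTANCE_ID>-<WORKER_PID> into parts.

    Hostname and instance id may themselves contain dashes
    (e.g. 'rc-node-3-rc-node-3-3246'), so we split the PID off from
    the right and assume hostname == instance_id when the remainder
    is the same string repeated, which is the common default setup.
    """
    rest, _, pid = raw.rpartition('-')
    half = len(rest) // 2
    if rest[:half] == rest[half + 1:] and rest[half:half + 1] == '-':
        hostname, instance_id = rest[:half], rest[half + 1:]
    else:
        # Ambiguous without knowing the configured instance_id.
        hostname, instance_id = rest, rest
    return {'hostname': hostname, 'instance_id': instance_id, 'pid': int(pid)}

print(parse_instance_id('rc-node-3-rc-node-3-3246'))
# {'hostname': 'rc-node-3', 'instance_id': 'rc-node-3', 'pid': 3246}
```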
|
371 | 371 | Using Celery with cluster |
|
372 | 372 | ^^^^^^^^^^^^^^^^^^^^^^^^^ |
|
373 | 373 | |
|
374 | 374 | |
|
375 | 375 | If `Celery` is used, we recommend also setting up an instance of Enterprise/Community+VCSServer |
|
376 | 376 | on the node that is running `RabbitMQ`_ or `Redis`_. Those instances will be used to |
|
377 | 377 | execute async tasks on `rc-node-1`. This is the most efficient setup. |
|
378 | 378 | `Celery` usually handles tasks such as sending emails, forking repositories, importing |
|
379 | 379 | repositories from external locations, etc. Using workers on an instance that has |
|
380 | 380 | direct access to the disks used by NFS, as well as to the email server, gives a noticeable |
|
381 | 381 | performance boost. Running workers local to the NFS storage results in faster |
|
382 | 382 | execution when forking large repositories or sending lots of emails. |
|
383 | 383 | |
|
384 | 384 | Those instances need to be configured in the same way as for other nodes. |
|
385 | 385 | The instance on rc-node-1 can be added to the cluster, but we don't recommend doing so. |
|
386 | 386 | For best results, let it be isolated to executing only `Celery` tasks in the cluster setup. |
|
387 | 387 | |
|
388 | 388 | |
|
389 | 389 | .. _Gunicorn: http://gunicorn.org/ |
|
390 | 390 | .. _Whoosh: https://pypi.python.org/pypi/Whoosh/ |
|
391 | 391 | .. _Elasticsearch: https://www.elastic.co/ |
|
392 | 392 | .. _RabbitMQ: http://www.rabbitmq.com/ |
|
393 | 393 | .. _Nginx: http://nginx.org |
|
394 | 394 | .. _Apache: http://httpd.apache.org |
|
395 | 395 | .. _Redis: http://redis.io |
|
396 | 396 |
@@ -1,67 +1,67 b'' | |||
|
1 | 1 | .. _user-session-ref: |
|
2 | 2 | |
|
3 | 3 | User Session Performance |
|
4 | 4 | ------------------------ |
|
5 | 5 | |
|
6 | 6 | The default file-based sessions are only suitable for smaller setups, or |
|
7 | 7 | instances that don't have a lot of users or traffic. |
|
8 | 8 | They are the default option because they are a setup-free solution. |
|
9 | 9 | |
|
10 | 10 | The most common issue with file-based sessions is file limit errors, which occur |
|
11 | 11 | when there are lots of session files. |
|
12 | 12 | |
|
13 | 13 | Therefore, in a large scale deployment, to give better performance, |
|
14 | 14 | scalability, and maintainability, we recommend switching from file-based |
|
15 | 15 | sessions to database-based or Redis-based user sessions. |
|
16 | 16 | |
|
17 | 17 | To switch to database-based user sessions, uncomment the following section in |
|
18 |
your :file:` |
|
|
18 | your :file:`config/_shared/rhodecode.ini` file. | |
|
19 | 19 | |
|
20 | 20 | |
|
21 | 21 | .. code-block:: ini |
|
22 | 22 | |
|
23 | 23 | ## db based session, fast, and allows easy management over logged in users |
|
24 | 24 | beaker.session.type = ext:database |
|
25 | 25 | beaker.session.table_name = db_session |
|
26 | 26 | |
|
27 | 27 | # use just one of the following according to the type of database |
|
28 | 28 | beaker.session.sa.url = postgresql://postgres:secret@localhost/rhodecode |
|
29 | 29 | # or |
|
30 | 30 | beaker.session.sa.url = mysql://root:secret@127.0.0.1/rhodecode |
|
31 | 31 | |
|
32 | 32 | beaker.session.sa.pool_recycle = 3600 |
|
33 | 33 | beaker.session.sa.echo = false |
|
34 | 34 | |
|
35 | 35 | |
|
36 | 36 | and make sure you comment out the file based sessions. |
|
37 | 37 | |
|
38 | 38 | .. code-block:: ini |
|
39 | 39 | |
|
40 | 40 | ## types are file, ext:memcached, ext:database, and memory (default). |
|
41 | 41 | #beaker.session.type = file |
|
42 | 42 | #beaker.session.data_dir = %(here)s/data/sessions/data |
|
43 | 43 | |
|
44 | 44 | |
|
45 | 45 | The `table_name` table will be automatically created on the specified database if it doesn't exist yet. |
|
46 | 46 | The database specified in `beaker.session.sa.url` can be the same one that RhodeCode |
|
47 | 47 | uses, or if required it can be a different one. We recommend using the same database. |
|
48 | 48 | |
|
49 | 49 | |
|
50 | 50 | |
|
51 | 51 | To switch to redis-based user sessions, uncomment the following section in |
|
52 |
your :file:` |
|
|
52 | your :file:`config/_shared/rhodecode.ini` file. | |
|
53 | 53 | |
|
54 | 54 | .. code-block:: ini |
|
55 | 55 | |
|
56 | 56 | ## redis sessions |
|
57 | 57 | beaker.session.type = ext:redis |
|
58 | 58 | beaker.session.url = localhost:6379 |
|
59 | 59 | |
|
60 | 60 | |
|
61 | 61 | and make sure you comment out the file based sessions. |
|
62 | 62 | |
|
63 | 63 | .. code-block:: ini |
|
64 | 64 | |
|
65 | 65 | ## types are file, ext:memcached, ext:database, and memory (default). |
|
66 | 66 | #beaker.session.type = file |
|
67 | 67 | #beaker.session.data_dir = %(here)s/data/sessions/data No newline at end of file |
@@ -1,379 +1,379 b'' | |||
|
1 | 1 | .. _vcs-server: |
|
2 | 2 | |
|
3 | 3 | VCS Server Management |
|
4 | 4 | --------------------- |
|
5 | 5 | |
|
6 | 6 | The VCS Server handles |RCE| backend functionality. You need to configure |
|
7 | 7 | a VCS Server to run with a |RCE| instance. If you do not, you will be missing |
|
8 | 8 | the connection between |RCE| and its |repos|. This will cause error messages |
|
9 | 9 | on the web interface. You can run your setup in the following configurations; |
|
10 | 10 | currently, the best performance comes from one of the following: |
|
11 | 11 | |
|
12 | 12 | * One VCS Server per |RCE| instance. |
|
13 | 13 | * One VCS Server handling multiple instances. |
|
14 | 14 | |
|
15 | 15 | .. important:: |
|
16 | 16 | |
|
17 | 17 | If your server locale settings are not correctly configured, |
|
18 | 18 | |RCE| and the VCS Server can run into issues. See this `Ask Ubuntu`_ post |
|
19 | 19 | which explains the problem and gives a solution. |
|
20 | 20 | |
|
21 | 21 | For more information, see the following sections: |
|
22 | 22 | |
|
23 | 23 | * :ref:`install-vcs` |
|
24 | 24 | * :ref:`config-vcs` |
|
25 | 25 | * :ref:`vcs-server-options` |
|
26 | 26 | * :ref:`vcs-server-versions` |
|
27 | 27 | * :ref:`vcs-server-maintain` |
|
28 | 28 | * :ref:`vcs-server-config-file` |
|
29 | 29 | * :ref:`svn-http` |
|
30 | 30 | |
|
31 | 31 | .. _install-vcs: |
|
32 | 32 | |
|
33 | 33 | VCS Server Installation |
|
34 | 34 | ^^^^^^^^^^^^^^^^^^^^^^^ |
|
35 | 35 | |
|
36 | 36 | To install a VCS Server, see |
|
37 | 37 | :ref:`Installing a VCS server <control:install-vcsserver>`. |
|
38 | 38 | |
|
39 | 39 | .. _config-vcs: |
|
40 | 40 | |
|
41 | 41 | Hooking |RCE| to its VCS Server |
|
42 | 42 | ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ |
|
43 | 43 | |
|
44 | 44 | To configure a |RCE| instance to use a VCS server, see |
|
45 | 45 | :ref:`Configuring the VCS Server connection <control:manually-vcsserver-ini>`. |
|
46 | 46 | |
|
47 | 47 | .. _vcs-server-options: |
|
48 | 48 | |
|
49 | 49 | |RCE| VCS Server Options |
|
50 | 50 | ^^^^^^^^^^^^^^^^^^^^^^^^ |
|
51 | 51 | |
|
52 | 52 | The following list shows the available options on the |RCE| side of the |
|
53 | 53 | connection to the VCS Server. The settings are configured per |
|
54 | 54 | instance in the |
|
55 |
:file:` |
|
|
55 | :file:`config/_shared/rhodecode.ini` file. | |
|
56 | 56 | |
|
57 | 57 | .. rst-class:: dl-horizontal |
|
58 | 58 | |
|
59 | 59 | \vcs.backends <available-vcs-systems> |
|
60 | 60 | Set a comma-separated list of the |repo| options available from the |
|
61 | 61 | web interface. The default is ``hg, git, svn``, |
|
62 | 62 | which is all available |repo| types. The order of the backends is also the |
|
63 | 63 | order in which the backends will try to detect the request type. |
|
64 | 64 | |
|
65 | 65 | \vcs.connection_timeout <seconds> |
|
66 | 66 | Set the length of time in seconds that the VCS Server waits for |
|
67 | 67 | requests to process. After the timeout expires, |
|
68 | 68 | the request is closed. The default is ``3600``. Set to a higher |
|
69 | 69 | number if you experience network latency, or timeout issues with very |
|
70 | 70 | large push/pull requests. |
|
71 | 71 | |
|
72 | 72 | \vcs.server.enable <boolean> |
|
73 | 73 | Enable or disable the VCS Server. The available options are ``true`` or |
|
74 | 74 | ``false``. The default is ``true``. |
|
75 | 75 | |
|
76 | 76 | \vcs.server <host:port> |
|
77 | 77 | Set the host, either hostname or IP Address, and port of the VCS server |
|
78 | 78 | you wish to run with your |RCE| instance. |
|
79 | 79 | |
|
80 | 80 | .. code-block:: ini |
|
81 | 81 | |
|
82 | 82 | ################## |
|
83 | 83 | ### VCS CONFIG ### |
|
84 | 84 | ################## |
|
85 | 85 | # set this line to match your VCS Server |
|
86 | 86 | vcs.server = 127.0.0.1:10004 |
|
87 | 87 | # Set to False to disable the VCS Server |
|
88 | 88 | vcs.server.enable = True |
|
89 | 89 | vcs.backends = hg, git, svn |
|
90 | 90 | vcs.connection_timeout = 3600 |
|
91 | 91 | |
|
92 | 92 | |
|
93 | 93 | .. _vcs-server-versions: |
|
94 | 94 | |
|
95 | 95 | VCS Server Versions |
|
96 | 96 | ^^^^^^^^^^^^^^^^^^^ |
|
97 | 97 | |
|
98 | 98 | An updated version of the VCS Server is released with each |RCE| version. Use |
|
99 | 99 | the VCS Server number that matches with the |RCE| version to pair the |
|
100 | 100 | appropriate ones together. For |RCE| versions pre 3.3.0, |
|
101 | 101 | VCS Server 1.X.Y works with |RCE| 3.X.Y, for example: |
|
102 | 102 | |
|
103 | 103 | * VCS Server 1.0.0 works with |RCE| 3.0.0 |
|
104 | 104 | * VCS Server 1.2.2 works with |RCE| 3.2.2 |
|
105 | 105 | |
|
106 | 106 | For |RCE| versions post 3.3.0, the VCS Server and |RCE| version numbers |
|
107 | 107 | match, for example: |
|
108 | 108 | |
|
109 | 109 | * VCS Server |release| works with |RCE| |release| |
|
110 | 110 | |
|
111 | 111 | .. _vcs-server-maintain: |
|
112 | 112 | |
|
113 | 113 | VCS Server Cache Optimization |
|
114 | 114 | ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ |
|
115 | 115 | |
|
116 | 116 | To optimize the VCS server to manage the cache and memory usage efficiently, it's recommended to |
|
117 | 117 | configure the Redis backend for VCSServer caches. |
|
118 | 118 | Once configured, restart the VCS Server. |
|
119 | 119 | |
|
120 | 120 | Make sure Redis is installed and running. |
|
121 | 121 | Open :file:`/home/{user}/.rccontrol/{vcsserver-id}/vcsserver.ini` |
|
122 | 122 | file and ensure the below settings for `repo_object` type cache are set: |
|
123 | 123 | |
|
124 | 124 | .. code-block:: ini |
|
125 | 125 | |
|
126 | 126 | ; ensure the default file based cache is *commented out* |
|
127 | 127 | ##rc_cache.repo_object.backend = dogpile.cache.rc.file_namespace |
|
128 | 128 | ##rc_cache.repo_object.expiration_time = 2592000 |
|
129 | 129 | |
|
130 | 130 | ; `repo_object` cache settings for vcs methods for repositories |
|
131 | 131 | rc_cache.repo_object.backend = dogpile.cache.rc.redis_msgpack |
|
132 | 132 | |
|
133 | 133 | ; cache auto-expires after N seconds |
|
134 | 134 | ; Examples: 86400 (1Day), 604800 (7Days), 1209600 (14Days), 2592000 (30days), 7776000 (90Days) |
|
135 | 135 | rc_cache.repo_object.expiration_time = 2592000 |
|
136 | 136 | |
|
137 | 137 | ; redis_expiration_time needs to be greater than expiration_time |
|
138 | 138 | rc_cache.repo_object.arguments.redis_expiration_time = 3592000 |
|
139 | 139 | |
|
140 | 140 | rc_cache.repo_object.arguments.host = localhost |
|
141 | 141 | rc_cache.repo_object.arguments.port = 6379 |
|
142 | 142 | rc_cache.repo_object.arguments.db = 5 |
|
143 | 143 | rc_cache.repo_object.arguments.socket_timeout = 30 |
|
144 | 144 | ; more Redis options: https://dogpilecache.sqlalchemy.org/en/latest/api.html#redis-backends |
|
145 | 145 | rc_cache.repo_object.arguments.distributed_lock = true |
|
146 | 146 | |
|
147 | 147 | |
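The comment above notes that `redis_expiration_time` must stay greater than `expiration_time`. A quick way to sanity-check this before restarting is a small script; the sketch below is ours and simply treats the settings as a flat dict keyed like the ini options above:

```python
def check_repo_object_cache(settings: dict) -> None:
    """Raise if the Redis TTL would evict entries dogpile still considers valid."""
    exp = int(settings['rc_cache.repo_object.expiration_time'])
    redis_exp = int(settings['rc_cache.repo_object.arguments.redis_expiration_time'])
    if redis_exp <= exp:
        raise ValueError(
            'redis_expiration_time (%d) must be greater than '
            'expiration_time (%d)' % (redis_exp, exp))

# The values from the example configuration above pass the check.
check_repo_object_cache({
    'rc_cache.repo_object.expiration_time': '2592000',
    'rc_cache.repo_object.arguments.redis_expiration_time': '3592000',
})
```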
|
148 | 148 | To clear the cache completely, you can restart the VCS Server. |
|
149 | 149 | |
|
150 | 150 | .. important:: |
|
151 | 151 | |
|
152 | 152 | While the VCS Server handles a restart gracefully on the web interface, |
|
153 | 153 | it will drop connections during push/pull requests. So it is recommended |
|
154 | 154 | you only perform this when there is very little traffic on the instance. |
|
155 | 155 | |
|
156 | 156 | Use the following example to restart your VCS Server, |
|
157 | 157 | for full details see the :ref:`RhodeCode Control CLI <control:rcc-cli>`. |
|
158 | 158 | |
|
159 | 159 | .. code-block:: bash |
|
160 | 160 | |
|
161 | 161 | $ rccontrol status |
|
162 | 162 | |
|
163 | 163 | .. code-block:: vim |
|
164 | 164 | |
|
165 | 165 | - NAME: vcsserver-1 |
|
166 | 166 | - STATUS: RUNNING |
|
167 | 167 | logs:/home/ubuntu/.rccontrol/vcsserver-1/vcsserver.log |
|
168 | 168 | - VERSION: 4.7.2 VCSServer |
|
169 | 169 | - URL: http://127.0.0.1:10008 |
|
170 | 170 | - CONFIG: /home/ubuntu/.rccontrol/vcsserver-1/vcsserver.ini |
|
171 | 171 | |
|
172 | 172 | $ rccontrol restart vcsserver-1 |
|
173 | 173 | Instance "vcsserver-1" successfully stopped. |
|
174 | 174 | Instance "vcsserver-1" successfully started. |
|
175 | 175 | |
|
176 | 176 | .. _vcs-server-config-file: |
|
177 | 177 | |
|
178 | 178 | VCS Server Configuration |
|
179 | 179 | ^^^^^^^^^^^^^^^^^^^^^^^^ |
|
180 | 180 | |
|
181 | 181 | You can configure settings for multiple VCS Servers on your |
|
182 | 182 | system using their individual configuration files. Use the following |
|
183 | 183 | properties inside the configuration file to set up your system. The default |
|
184 | 184 | location is :file:`home/{user}/.rccontrol/{vcsserver-id}/vcsserver.ini`. |
|
185 | 185 | For a more detailed explanation of the logger levels, see :ref:`debug-mode`. |
|
186 | 186 | |
|
187 | 187 | .. rst-class:: dl-horizontal |
|
188 | 188 | |
|
189 | 189 | \host <ip-address> |
|
190 | 190 | Set the host on which the VCS Server will run. VCSServer is not |
|
191 | 191 | protected by any authentication, so we *highly* recommend running it |
|
192 | 192 | on the localhost IP, that is `127.0.0.1`. |
|
193 | 193 | |
|
194 | 194 | \port <int> |
|
195 | 195 | Set the port number on which the VCS Server will be available. |
|
196 | 196 | |
|
197 | 197 | |
|
198 | 198 | .. note:: |
|
199 | 199 | |
|
200 | 200 | After making changes, you need to restart your VCS Server to pick them up. |
|
201 | 201 | |
|
202 | 202 | .. code-block:: ini |
|
203 | 203 | |
|
204 | 204 | ; ################################# |
|
205 | 205 | ; RHODECODE VCSSERVER CONFIGURATION |
|
206 | 206 | ; ################################# |
|
207 | 207 | |
|
208 | 208 | [server:main] |
|
209 | 209 | ; COMMON HOST/IP CONFIG |
|
210 | 210 | host = 127.0.0.1 |
|
211 | 211 | port = 10002 |
|
212 | 212 | |
|
213 | 213 | ; ########################### |
|
214 | 214 | ; GUNICORN APPLICATION SERVER |
|
215 | 215 | ; ########################### |
|
216 | 216 | |
|
217 | 217 | ; run with gunicorn --log-config rhodecode.ini --paste rhodecode.ini |
|
218 | 218 | |
|
219 | 219 | ; Module to use, this setting shouldn't be changed |
|
220 | 220 | use = egg:gunicorn#main |
|
221 | 221 | |
|
222 | 222 | ; Sets the number of process workers. More workers means more concurrent connections |
|
223 | 223 | ; RhodeCode can handle at the same time. Each additional worker also increases |
|
224 | 224 | ; memory usage, as each has its own set of caches. |
|
225 | 225 | ; Recommended value is (2 * NUMBER_OF_CPUS + 1), eg 2CPU = 5 workers, but no more |
|
226 | 226 | ; than 8-10, except for really big deployments, e.g. 700-1000 users. |
|
227 | 227 | ; `instance_id = *` must be set in the [app:main] section below (which is the default) |
|
228 | 228 | ; when using more than 1 worker. |
|
229 | 229 | workers = 6 |
|
230 | 230 | |
|
231 | 231 | ; Gunicorn access log level |
|
232 | 232 | loglevel = info |
|
233 | 233 | |
|
234 | 234 | ; Process name visible in process list |
|
235 | 235 | proc_name = rhodecode_vcsserver |
|
236 | 236 | |
|
237 | 237 | ; Type of worker class, one of sync, gevent |
|
238 | 238 | ; currently `sync` is the only option allowed. |
|
239 | 239 | worker_class = sync |
|
240 | 240 | |
|
241 | 241 | ; The maximum number of simultaneous clients. Valid only for gevent |
|
242 | 242 | worker_connections = 10 |
|
243 | 243 | |
|
244 | 244 | ; Max number of requests that worker will handle before being gracefully restarted. |
|
245 | 245 | ; Prevents memory leaks, jitter adds variability so not all workers are restarted at once. |
|
246 | 246 | max_requests = 1000 |
|
247 | 247 | max_requests_jitter = 30 |
|
248 | 248 | |
|
249 | 249 | ; Amount of time a worker can spend with handling a request before it |
|
250 | 250 | ; gets killed and restarted. By default set to 21600 (6hrs) |
|
251 | 251 | ; Examples: 1800 (30min), 3600 (1hr), 7200 (2hr), 43200 (12h) |
|
252 | 252 | timeout = 21600 |
|
253 | 253 | |
|
254 | 254 | ; The maximum size of HTTP request line in bytes. |
|
255 | 255 | ; 0 for unlimited |
|
256 | 256 | limit_request_line = 0 |
|
257 | 257 | |
|
258 | 258 | ; Limit the number of HTTP headers fields in a request. |
|
259 | 259 | ; By default this value is 100 and can't be larger than 32768. |
|
260 | 260 | limit_request_fields = 32768 |
|
261 | 261 | |
|
262 | 262 | ; Limit the allowed size of an HTTP request header field. |
|
263 | 263 | ; Value is a positive number or 0. |
|
264 | 264 | ; Setting it to 0 will allow unlimited header field sizes. |
|
265 | 265 | limit_request_field_size = 0 |
|
266 | 266 | |
|
267 | 267 | ; Timeout for graceful workers restart. |
|
268 | 268 | ; After receiving a restart signal, workers have this much time to finish |
|
269 | 269 | ; serving requests. Workers still alive after the timeout (starting from the |
|
270 | 270 | ; receipt of the restart signal) are force killed. |
|
271 | 271 | ; Examples: 1800 (30min), 3600 (1hr), 7200 (2hr), 43200 (12h) |
|
272 | 272 | graceful_timeout = 3600 |
|
273 | 273 | |
|
274 | 274 | # The number of seconds to wait for requests on a Keep-Alive connection. |
|
275 | 275 | # Generally set in the 1-5 seconds range. |
|
276 | 276 | keepalive = 2 |
|
277 | 277 | |
|
278 | 278 | ; Maximum memory usage that each worker can use before it will receive a |
|
279 | 279 | ; graceful restart signal 0 = memory monitoring is disabled |
|
280 | 280 | ; Examples: 268435456 (256MB), 536870912 (512MB) |
|
281 | 281 | ; 1073741824 (1GB), 2147483648 (2GB), 4294967296 (4GB) |
|
282 | 282 | memory_max_usage = 1073741824 |
|
283 | 283 | |
|
284 | 284 | ; How often in seconds to check for memory usage for each gunicorn worker |
|
285 | 285 | memory_usage_check_interval = 60 |
|
286 | 286 | |
|
287 | 287 | ; Threshold value for which we don't recycle worker if GarbageCollection |
|
288 | 288 | ; frees up enough resources. Before each restart we try to run GC on worker |
|
289 | 289 | ; in case we get enough free memory after that, restart will not happen. |
|
290 | 290 | memory_usage_recovery_threshold = 0.8 |
|
291 | 291 | |
|
292 | 292 | |
|
293 | 293 | [app:main] |
|
294 | 294 | use = egg:rhodecode-vcsserver |
|
295 | 295 | |
|
296 | 296 | pyramid.default_locale_name = en |
|
297 | 297 | pyramid.includes = |
|
298 | 298 | |
|
299 | 299 | ; default locale used by VCS systems |
|
300 | 300 | locale = en_US.UTF-8 |
|
301 | 301 | |
|
302 | 302 | ; ############# |
|
303 | 303 | ; DOGPILE CACHE |
|
304 | 304 | ; ############# |
|
305 | 305 | |
|
306 | 306 | ; Default cache dir for caches. Putting this into a ramdisk can boost performance. |
|
307 | 307 | ; eg. /tmpfs/data_ramdisk, however this directory might require large amount of space |
|
308 | 308 | cache_dir = %(here)s/data |
|
309 | 309 | |
|
310 | 310 | ; ********************************************************** |
|
311 | 311 | ; `repo_object` cache with redis backend |
|
312 | 312 | ; recommended for larger instance, or for better performance |
|
313 | 313 | ; ********************************************************** |
|
314 | 314 | |
|
315 | 315 | ; `repo_object` cache settings for vcs methods for repositories |
|
316 | 316 | rc_cache.repo_object.backend = dogpile.cache.rc.redis_msgpack |
|
317 | 317 | |
|
318 | 318 | ; cache auto-expires after N seconds |
|
319 | 319 | ; Examples: 86400 (1Day), 604800 (7Days), 1209600 (14Days), 2592000 (30days), 7776000 (90Days) |
|
320 | 320 | rc_cache.repo_object.expiration_time = 2592000 |
|
321 | 321 | |
|
322 | 322 | ; redis_expiration_time needs to be greater than expiration_time |
|
323 | 323 | rc_cache.repo_object.arguments.redis_expiration_time = 3592000 |
|
324 | 324 | |
|
325 | 325 | rc_cache.repo_object.arguments.host = localhost |
|
326 | 326 | rc_cache.repo_object.arguments.port = 6379 |
|
327 | 327 | rc_cache.repo_object.arguments.db = 5 |
|
328 | 328 | rc_cache.repo_object.arguments.socket_timeout = 30 |
|
329 | 329 | ; more Redis options: https://dogpilecache.sqlalchemy.org/en/latest/api.html#redis-backends |
|
330 | 330 | rc_cache.repo_object.arguments.distributed_lock = true |
|
331 | 331 | |
|
332 | 332 | ; ##################### |
|
333 | 333 | ; LOGGING CONFIGURATION |
|
334 | 334 | ; ##################### |
|
335 | 335 | [loggers] |
|
336 | 336 | keys = root, vcsserver |
|
337 | 337 | |
|
338 | 338 | [handlers] |
|
339 | 339 | keys = console |
|
340 | 340 | |
|
341 | 341 | [formatters] |
|
342 | 342 | keys = generic |
|
343 | 343 | |
|
344 | 344 | ; ####### |
|
345 | 345 | ; LOGGERS |
|
346 | 346 | ; ####### |
|
347 | 347 | [logger_root] |
|
348 | 348 | level = NOTSET |
|
349 | 349 | handlers = console |
|
350 | 350 | |
|
351 | 351 | [logger_vcsserver] |
|
352 | 352 | level = DEBUG |
|
353 | 353 | handlers = |
|
354 | 354 | qualname = vcsserver |
|
355 | 355 | propagate = 1 |
|
356 | 356 | |
|
357 | 357 | |
|
358 | 358 | ; ######## |
|
359 | 359 | ; HANDLERS |
|
360 | 360 | ; ######## |
|
361 | 361 | |
|
362 | 362 | [handler_console] |
|
363 | 363 | class = StreamHandler |
|
364 | 364 | args = (sys.stderr, ) |
|
365 | 365 | level = INFO |
|
366 | 366 | formatter = generic |
|
367 | 367 | |
|
368 | 368 | ; ########## |
|
369 | 369 | ; FORMATTERS |
|
370 | 370 | ; ########## |
|
371 | 371 | |
|
372 | 372 | [formatter_generic] |
|
373 | 373 | format = %(asctime)s.%(msecs)03d [%(process)d] %(levelname)-5.5s [%(name)s] %(message)s |
|
374 | 374 | datefmt = %Y-%m-%d %H:%M:%S |
|
375 | 375 | |
|
376 | 376 | |
|
377 | 377 | .. _Subversion Red Book: http://svnbook.red-bean.com/en/1.7/svn-book.html#svn.ref.svn |
|
378 | 378 | |
|
379 | 379 | .. _Ask Ubuntu: http://askubuntu.com/questions/162391/how-do-i-fix-my-locale-issue |
@@ -1,209 +1,209 b'' | |||
|
1 | 1 | .. _api: |
|
2 | 2 | |
|
3 | 3 | API Documentation |
|
4 | 4 | ================= |
|
5 | 5 | |
|
6 | 6 | The |RCE| API uses a single scheme for calling all API methods. The API is |
|
7 | 7 | implemented with JSON protocol in both directions. To send API requests to |
|
8 | 8 | your instance of |RCE|, use the following URL format |
|
9 | 9 | ``<your_server>/_admin`` |
|
10 | 10 | |
|
11 | 11 | .. note:: |
|
12 | 12 | |
|
13 | 13 | To use the API, you should configure the :file:`~/.rhoderc` file with |
|
14 | 14 | access details per instance. For more information, see |
|
15 | 15 | :ref:`config-rhoderc`. |
|
16 | 16 | |
|
17 | 17 | |
|
18 | 18 | API ACCESS FOR WEB VIEWS |
|
19 | 19 | ------------------------ |
|
20 | 20 | |
|
21 | 21 | API access can also be turned on for each web view in |RCE| that is |
|
22 | 22 | decorated with a `@LoginRequired` decorator. To enable API access, change |
|
23 | 23 | the standard login decorator to `@LoginRequired(api_access=True)`. |
|
24 | 24 | |
|
25 | 25 | From |RCE| version 1.7.0 you can configure a white list |
|
26 | 26 | of views that have API access enabled by default. To enable these, |
|
27 | 27 | edit the |RCE| configuration ``.ini`` file. The default location is: |
|
28 | 28 | |
|
29 | 29 | * |RCE| Pre-2.2.7 :file:`root/rhodecode/data/production.ini` |
|
30 |
* |RCE| 3.0 :file:` |
|
|
30 | * |RCE| 3.0 :file:`config/_shared/rhodecode.ini` | |
|
31 | 31 | |
|
32 | 32 | To configure the white list, edit this section of the file. In this |
|
33 | 33 | configuration example, API access is granted to the patch/diff raw file and |
|
34 | 34 | archive. |
|
35 | 35 | |
|
36 | 36 | .. code-block:: ini |
|
37 | 37 | |
|
38 | 38 | ## List of controllers (using glob syntax) that AUTH TOKENS could be used for access. |
|
39 | 39 | ## Adding ?auth_token = <token> to the url authenticates this request as if it |
|
40 | 40 | ## came from the logged-in user who owns this authentication token. |
|
41 | 41 | ## |
|
42 | 42 | ## Syntax is <ControllerClass>:<function_pattern>. |
|
43 | 43 | ## The list should be "," separated and on a single line. |
|
44 | 44 | ## |
|
45 | 45 | api_access_controllers_whitelist = RepoCommitsView:repo_commit_raw,RepoCommitsView:repo_commit_patch,RepoCommitsView:repo_commit_download |
|
46 | 46 | |
|
47 | 47 | After this change, a |RCE| view can be accessed without login by adding a |
|
48 | 48 | GET parameter ``?auth_token=<auth_token>`` to a url. For example to |
|
49 | 49 | access the raw diff: |
|
50 | 50 | |
|
51 | 51 | .. code-block:: html |
|
52 | 52 | |
|
53 | 53 | http://<server>/<repo>/changeset-diff/<sha>?auth_token=<auth_token> |
|
54 | 54 | |
|
55 | 55 | By default this is only enabled on RSS/ATOM feed views. Exposing raw diffs is a |
|
56 | 56 | good way to integrate with 3rd party services like code review, or build farms |
|
57 | 57 | that could download archives. |
|
58 | 58 | |
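Building such a token-authenticated URL is straightforward; a minimal Python sketch (the server, repo, commit and token values are placeholders, and the path layout mirrors the example above):

```python
from urllib.parse import urlencode

def raw_diff_url(server: str, repo: str, sha: str, auth_token: str) -> str:
    """Build a changeset-diff URL authenticated via the auth_token GET param."""
    query = urlencode({'auth_token': auth_token})
    return 'http://%s/%s/changeset-diff/%s?%s' % (server, repo, sha, query)

print(raw_diff_url('server.example.com', 'myrepo', 'deadbeef', 'secret-token'))
# http://server.example.com/myrepo/changeset-diff/deadbeef?auth_token=secret-token
```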
|
59 | 59 | API ACCESS |
|
60 | 60 | ---------- |
|
61 | 61 | |
|
62 | 62 | All clients are required to send JSON-RPC spec JSON data. |
|
63 | 63 | |
|
64 | 64 | .. code-block:: bash |
|
65 | 65 | |
|
66 | 66 | { |
|
67 | 67 | "id":"<id>", |
|
68 | 68 | "auth_token":"<auth_token>", |
|
69 | 69 | "method":"<method_name>", |
|
70 | 70 | "args":{"<arg_key>":"<arg_val>"} |
|
71 | 71 | } |
|
72 | 72 | |
|
73 | 73 | Example call for auto pulling from remote repositories using curl: |
|
74 | 74 | |
|
75 | 75 | .. code-block:: bash |
|
76 | 76 | |
|
77 | 77 | curl https://server.com/_admin/api -X POST -H 'content-type:text/plain' --data-binary '{"id":1, |
|
78 | 78 | "auth_token":"xe7cdb2v278e4evbdf5vs04v832v0efvcbcve4a3","method":"pull", "args":{"repoid":"CPython"}}' |
|
79 | 79 | |
|
80 | 80 | Provide these parameters: |
|
81 | 81 | - **id** A value of any type, which is used to match the response with the |
|
82 | 82 | request that it is replying to. |
|
83 | 83 | - **auth_token** for access and permission validation. |
|
84 | 84 | - **method** is the name of the method to call. |
|
85 | 85 | - **args** is a ``key:value`` map of arguments to pass to the method. |
|
86 | 86 | |
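The same request body can be assembled in Python rather than written as JSON by hand; a minimal sketch (the token and arguments are the placeholder values from the curl example above):

```python
import json

def build_rpc_payload(request_id, auth_token, method, args):
    """Serialize the four required fields into the JSON body the API expects."""
    return json.dumps({
        'id': request_id,
        'auth_token': auth_token,
        'method': method,
        'args': args,
    })

payload = build_rpc_payload(
    1, 'xe7cdb2v278e4evbdf5vs04v832v0efvcbcve4a3', 'pull', {'repoid': 'CPython'})
print(payload)
```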
|
87 | 87 | .. note:: |
|
88 | 88 | |
|
89 | 89 | To get your |authtoken|, from the |RCE| interface, |
|
90 | 90 | go to: |
|
91 | 91 | :menuselection:`username --> My account --> Auth tokens` |
|
92 | 92 | |
|
93 | 93 | For security reasons you should always create a dedicated |authtoken| for |
|
94 | 94 | API use only. |
|
95 | 95 | |
|
96 | 96 | |
|
97 | 97 | The |RCE| API will always return a JSON-RPC response: |
|
98 | 98 | |
|
99 | 99 | .. code-block:: bash |
|
100 | 100 | |
|
101 | 101 | { |
|
102 | 102 | "id": <id>, # matching id sent by request |
|
103 | 103 | "result": "<result>"|null, # JSON formatted result, null if any errors |
|
104 | 104 | "error": "null"|<error_message> # JSON formatted error (if any) |
|
105 | 105 | } |
|
106 | 106 | |
|
107 | 107 | All responses from API will be with `HTTP/1.0 200 OK` status code. |
|
108 | 108 | If there is an error when calling the API, the *error* key will contain a |
|
109 | 109 | failure description and the *result* will be `null`. |
|
110 | 110 | |
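Because every reply comes back as HTTP 200, a client has to inspect the body to detect failures; a hedged sketch of that unwrapping logic (the function name is ours, not part of any RhodeCode client library):

```python
import json

def unwrap_rpc_response(body: str):
    """Return the result field of a JSON-RPC reply, raising on API errors.

    The server always answers HTTP/1.0 200 OK; a failure is signalled by a
    non-null "error" field while "result" is null.
    """
    reply = json.loads(body)
    if reply.get('error') is not None:
        raise RuntimeError('API error (id=%s): %s' % (reply.get('id'), reply['error']))
    return reply['result']

print(unwrap_rpc_response('{"id": 1, "result": {"msg": "ok"}, "error": null}'))
# {'msg': 'ok'}
```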
|
111 | 111 | API CLIENT |
|
112 | 112 | ---------- |
|
113 | 113 | |
|
114 | 114 | To install the |RCE| API, see :ref:`install-tools`. To configure the API per |
|
115 | 115 | instance, see the :ref:`rc-tools` section as you need to configure a |
|
116 | 116 | :file:`~/.rhoderc` file with your |authtokens|. |
|
117 | 117 | |
|
118 | 118 | Once you have set up your instance API access, use the following examples to |
|
119 | 119 | get started. |
|
120 | 120 | |
|
121 | 121 | .. code-block:: bash |
|
122 | 122 | |
|
123 | 123 | # Getting the 'rhodecode' repository |
|
124 | 124 | # from a RhodeCode Enterprise instance |
|
125 | 125 | rhodecode-api --instance-name=enterprise-1 get_repo repoid:rhodecode |
|
126 | 126 | |
|
127 | 127 | Calling method get_repo => http://127.0.0.1:5000 |
|
128 | 128 | Server response |
|
129 | 129 | { |
|
130 | 130 | <json data> |
|
131 | 131 | } |
|
132 | 132 | |
|
133 | 133 | # Creating a new mercurial repository called 'brand-new' |
|
134 | 134 | # with a description 'Repo-description' |
|
135 | 135 | rhodecode-api --instance-name=enterprise-1 create_repo repo_name:brand-new repo_type:hg description:Repo-description |
|
136 | 136 | { |
|
137 | 137 | "error": null, |
|
138 | 138 | "id": 1110, |
|
139 | 139 | "result": { |
|
140 | 140 | "msg": "Created new repository `brand-new`", |
|
141 | 141 | "success": true, |
|
142 | 142 | "task": null |
|
143 | 143 | } |
|
144 | 144 | } |
|
145 | 145 | |
|
146 | 146 | A broken example of what not to do:
|
147 | 147 | |
|
148 | 148 | .. code-block:: bash |
|
149 | 149 | |
|
150 | 150 | # A call missing the required arguments |
|
151 | 151 | # and not specifying the instance |
|
152 | 152 | rhodecode-api get_repo |
|
153 | 153 | |
|
154 | 154 | Calling method get_repo => http://127.0.0.1:5000 |
|
155 | 155 | Server response |
|
156 | 156 | "Missing non optional `repoid` arg in JSON DATA" |
|
157 | 157 | |
|
158 | 158 | You can specify pure JSON using the ``--format`` parameter. |
|
159 | 159 | |
|
160 | 160 | .. code-block:: bash |
|
161 | 161 | |
|
162 | 162 | rhodecode-api --format=json get_repo repoid:rhodecode |
|
163 | 163 | |
|
164 | 164 | In such a case the only output the command shows is pure JSON, which can be

165 | 165 | processed by other tools.
|
166 | 166 | |
|
167 | 167 | Since the output is pure JSON, you can pipe it to a JSON formatter.
|
168 | 168 | |
|
169 | 169 | .. code-block:: bash |
|
170 | 170 | |
|
171 | 171 | rhodecode-api --instance-name=enterprise-1 --format=json get_repo repoid:rhodecode | python -m json.tool |
|
172 | 172 | |
|
173 | 173 | API METHODS |
|
174 | 174 | ----------- |
|
175 | 175 | |
|
176 | 176 | Each method by default requires the following arguments.
|
177 | 177 | |
|
178 | 178 | .. code-block:: bash |
|
179 | 179 | |
|
180 | 180 | id : "<id_for_response>" |
|
181 | 181 | auth_token : "<auth_token>" |
|
182 | 182 | method : "<method name>" |
|
183 | 183 | args : {} |
|
184 | 184 | |
|
185 | 185 | Use each **param** from the docs and put it in args. Optional parameters

186 | 186 | are not required in args.
|
187 | 187 | |
|
188 | 188 | .. code-block:: bash |
|
189 | 189 | |
|
190 | 190 | args: {"repoid": "rhodecode"} |
|
191 | 191 | |
|
192 | 192 | .. Note: From this point on things are generated by the script in |
|
193 | 193 | `scripts/fabfile.py`. To change things below, update the docstrings in the |
|
194 | 194 | ApiController. |
|
195 | 195 | |
|
196 | 196 | .. --- API DEFS MARKER --- |
|
197 | 197 | .. toctree:: |
|
198 | 198 | |
|
199 | 199 | methods/repo-methods |
|
200 | 200 | methods/store-methods |
|
201 | 201 | methods/license-methods |
|
202 | 202 | methods/deprecated-methods |
|
203 | 203 | methods/gist-methods |
|
204 | 204 | methods/pull-request-methods |
|
205 | 205 | methods/repo-group-methods |
|
206 | 206 | methods/search-methods |
|
207 | 207 | methods/server-methods |
|
208 | 208 | methods/user-methods |
|
209 | 209 | methods/user-group-methods |
@@ -1,144 +1,144 b'' | |||
|
1 | 1 | .. _ssh-connection: |
|
2 | 2 | |
|
3 | 3 | SSH Connection |
|
4 | 4 | -------------- |
|
5 | 5 | |
|
6 | 6 | If you wish to connect to your |repos| using SSH protocol, use the |
|
7 | 7 | following instructions. |
|
8 | 8 | |
|
9 | 9 | 1. Include the |RCE|-generated `authorized_keys` file in your sshd_config.
|
10 | 10 | |
|
11 | 11 | By default a file `authorized_keys_rhodecode` is created, containing the

12 | 12 | configuration and all allowed user connection keys.

13 | 13 | On each change of the keys stored inside |RCE|, this file is updated

14 | 14 | with the proper data.
|
15 | 15 | |
|
16 | 16 | .. code-block:: bash |
|
17 | 17 | |
|
18 | 18 | # Edit sshd_config file most likely at /etc/ssh/sshd_config |
|
19 | 19 | # add or edit the AuthorizedKeysFile, and set to use custom files |
|
20 | 20 | |
|
21 | 21 | AuthorizedKeysFile %h/.ssh/authorized_keys %h/.ssh/authorized_keys_rhodecode |
|
22 | 22 | |
|
23 | 23 | This way a separate file is used for regular SSH access, and another one

24 | 24 | for SSH access to |RCE| repositories.
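A sketch of the edit above, working on a temporary copy of sshd_config rather than the live file (the paths are illustrative; GNU sed is assumed):

```shell
# Append the RhodeCode keys file to an existing AuthorizedKeysFile
# directive, using a scratch copy of sshd_config for safety.
cfg="$(mktemp)"
printf 'AuthorizedKeysFile %%h/.ssh/authorized_keys\n' > "$cfg"
sed -i 's|^AuthorizedKeysFile.*|& %h/.ssh/authorized_keys_rhodecode|' "$cfg"
cat "$cfg"
```

After applying the same change to the real :file:`/etc/ssh/sshd_config`, validate it with `sshd -t` before reloading the SSH daemon.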
|
25 | 25 | |
|
26 | 26 | |
|
27 | 27 | 2. Enable the SSH module on the instance.
|
28 | 28 | |
|
29 | 29 | On the server where |RCE| is running, execute:
|
30 | 30 | |
|
31 | 31 | .. code-block:: bash |
|
32 | 32 | |
|
33 | 33 | rccontrol enable-module ssh {instance-id} |
|
34 | 34 | |
|
35 | 35 | This will add the following configuration into :file:`rhodecode.ini`. |
|
36 | 36 | This can also be done manually:
|
37 | 37 | |
|
38 | 38 | .. code-block:: ini |
|
39 | 39 | |
|
40 | 40 | ############################################################ |
|
41 | 41 | ### SSH Support Settings ### |
|
42 | 42 | ############################################################ |
|
43 | 43 | |
|
44 | 44 | ## Defines if a custom authorized_keys file should be created and written on |
|
45 | 45 | ## any change of user ssh keys. Setting this to false also disables the possibility

46 | 46 | ## of adding SSH keys by users from the web interface. Super admins can still
|
47 | 47 | ## manage SSH Keys. |
|
48 | 48 | ssh.generate_authorized_keyfile = true |
|
49 | 49 | |
|
50 | 50 | ## Options for ssh, default is `no-pty,no-port-forwarding,no-X11-forwarding,no-agent-forwarding` |
|
51 | 51 | # ssh.authorized_keys_ssh_opts = |
|
52 | 52 | |
|
53 | 53 | ## Path to the authorized_keys file where the generated entries are placed.
|
54 | 54 | ## It is possible to have multiple key files specified in `sshd_config` e.g. |
|
55 | 55 | ## AuthorizedKeysFile %h/.ssh/authorized_keys %h/.ssh/authorized_keys_rhodecode |
|
56 | 56 | ssh.authorized_keys_file_path = ~/.ssh/authorized_keys_rhodecode |
|
57 | 57 | |
|
58 | 58 | ## Command to execute the SSH wrapper. The binary is available in the |
|
59 | 59 | ## rhodecode installation directory. |
|
60 | 60 | ## e.g ~/.rccontrol/community-1/profile/bin/rc-ssh-wrapper |
|
61 | 61 | ssh.wrapper_cmd = ~/.rccontrol/community-1/rc-ssh-wrapper |
|
62 | 62 | |
|
63 | 63 | ## Allow shell when executing the ssh-wrapper command |
|
64 | 64 | ssh.wrapper_cmd_allow_shell = false |
|
65 | 65 | |
|
66 | 66 | ## Enables logging, and detailed output send back to the client during SSH |
|
67 | 67 | ## operations. Useful for debugging, shouldn't be used in production. |
|
68 | 68 | ssh.enable_debug_logging = false |
|
69 | 69 | |
|
70 | 70 | ## Paths to binary executable, by default they are the names, but we can |
|
71 | 71 | ## override them if we want to use a custom one |
|
72 | 72 | ssh.executable.hg = ~/.rccontrol/vcsserver-1/profile/bin/hg |
|
73 | 73 | ssh.executable.git = ~/.rccontrol/vcsserver-1/profile/bin/git |
|
74 | 74 | ssh.executable.svn = ~/.rccontrol/vcsserver-1/profile/bin/svnserve |
|
75 | 75 | |
|
76 | 76 | ## Enables SSH key generator web interface. Disabling this still allows users |
|
77 | 77 | ## to add their own keys. |
|
78 | 78 | ssh.enable_ui_key_generator = true |
|
79 | 79 | |
|
80 | 80 | |
|
81 | 81 | 3. Set base_url for instance to enable proper event handling (Optional): |
|
82 | 82 | |
|
83 | 83 | If you wish to have integrations working correctly via SSH, please configure

84 | 84 | the application base_url.
|
85 | 85 | |
|
86 | 86 | Use the ``rccontrol status`` command to view instance details. |
|
87 | 87 | Hostname is required for the integration to properly set the instance URL. |
|
88 | 88 | |
|
89 | 89 | When your hostname is known (e.g. https://code.rhodecode.com) please set it
|
90 | inside :file:`

90 | inside :file:`config/_shared/rhodecode.ini`
|
91 | 91 | |
|
92 | 92 | add the following configuration into the `[app:main]` section:
|
93 | 93 | |
|
94 | 94 | .. code-block:: ini |
|
95 | 95 | |
|
96 | 96 | app.base_url = https://code.rhodecode.com |
|
97 | 97 | |
|
98 | 98 | |
|
99 | 99 | 4. Add the public key to your user account for testing. |
|
100 | 100 | First generate a new key, or use your existing one and have your public key |
|
101 | 101 | at hand. |
|
102 | 102 | |
|
103 | 103 | Go to |
|
104 | 104 | :menuselection:`My Account --> SSH Keys` and add the public key with a proper description.
|
105 | 105 | |
|
106 | 106 | This will generate a new entry inside our configured `authorized_keys_rhodecode` file. |
|
107 | 107 | |
|
108 | 108 | Test the connection from your local machine using the following example: |
|
109 | 109 | |
|
110 | 110 | .. note:: |
|
111 | 111 | |
|
112 | 112 | In case of connection problems please set |
|
113 | 113 | `ssh.enable_debug_logging = true` inside the SSH configuration of |
|
114 | :file:`

114 | :file:`config/_shared/rhodecode.ini`
|
115 | 115 | Then add, remove your SSH key and try connecting again. |
|
116 | 116 | Debug logging will be printed to help find the problems on the server side. |
|
117 | 117 | |
|
118 | 118 | Test the connection using the ssh command from the local machine. Make sure

119 | 119 | to use the user who is running the |RCE| server, and not your username from
|
120 | 120 | the web interface. |
|
121 | 121 | |
|
122 | 122 | |
|
123 | 123 | For SVN: |
|
124 | 124 | |
|
125 | 125 | .. code-block:: bash |
|
126 | 126 | |
|
127 | 127 | SVN_SSH="ssh -i ~/.ssh/id_rsa_test_ssh_private.key" svn checkout svn+ssh://rhodecode@rc-server/repo_name |
|
128 | 128 | |
|
129 | 129 | For GIT: |
|
130 | 130 | |
|
131 | 131 | .. code-block:: bash |
|
132 | 132 | |
|
133 | 133 | GIT_SSH_COMMAND='ssh -i ~/.ssh/id_rsa_test_ssh_private.key' git clone ssh://rhodecode@rc-server/repo_name |
|
134 | 134 | |
|
135 | 135 | For Mercurial: |
|
136 | 136 | |
|
137 | 137 | .. code-block:: bash |
|
138 | 138 | |
|
139 | 139 | Add to hgrc: |
|
140 | 140 | |
|
141 | 141 | [ui] |
|
142 | 142 | ssh = ssh -C -i ~/.ssh/id_rsa_test_ssh_private.key |
|
143 | 143 | |
|
144 | 144 | hg clone ssh://rhodecode@rc-server/repo_name |
@@ -1,45 +1,45 b'' | |||
|
1 | 1 | .. _set-up-mail: |
|
2 | 2 | |
|
3 | 3 | Set up Email |
|
4 | 4 | ------------ |
|
5 | 5 | |
|
6 | 6 | To set up email with your |RCE| instance, open the default
|
7 | :file:`/home/{user}/.rccontrol/{instance-id}/rhodecode.ini` | |
|
7 | :file:`config/_shared/rhodecode.ini` | |
|
8 | 8 | file and uncomment and configure the email section. If it is not there, |
|
9 | 9 | use the below example to insert it. |
|
10 | 10 | |
|
11 | 11 | Once configured you can check the settings for your |RCE| instance on the |
|
12 | 12 | :menuselection:`Admin --> Settings --> Email` page. |
|
13 | 13 | |
|
14 | 14 | Please be aware that both section should be changed the `[DEFAULT]` for main applications |
|
15 | 15 | email config, and `[server:main]` for exception tracking email |
|
16 | 16 | |
|
17 | 17 | .. code-block:: ini |
|
18 | 18 | |
|
19 | 19 | [DEFAULT] |
|
20 | 20 | ; ######################################################################## |
|
21 | 21 | ; EMAIL CONFIGURATION |
|
22 | 22 | ; These settings will be used by the RhodeCode mailing system |
|
23 | 23 | ; ######################################################################## |
|
24 | 24 | |
|
25 | 25 | ; prefix all emails subjects with given prefix, helps filtering out emails |
|
26 | 26 | #email_prefix = [RhodeCode] |
|
27 | 27 | |
|
28 | 28 | ; email FROM address from which all mails will be sent
|
29 | 29 | #app_email_from = rhodecode-noreply@localhost |
|
30 | 30 | |
|
31 | 31 | #smtp_server = mail.server.com |
|
32 | 32 | #smtp_username = |
|
33 | 33 | #smtp_password = |
|
34 | 34 | #smtp_port = |
|
35 | 35 | #smtp_use_tls = false |
|
36 | 36 | #smtp_use_ssl = true |
|
37 | 37 | |
|
38 | 38 | [server:main] |
|
39 | 39 | ; Send email with exception details when it happens |
|
40 | 40 | #exception_tracker.send_email = true |
|
41 | 41 | |
|
42 | 42 | ; Comma separated list of recipients for exception emails, |
|
43 | 43 | ; e.g admin@rhodecode.com,devops@rhodecode.com |
|
44 | 44 | ; Can be left empty, then emails will be sent to ALL super-admins |
|
45 | 45 | #exception_tracker.send_email_recipients = |
@@ -1,51 +1,52 b'' | |||
|
1 | 1 | |RCE| 5.0.0 |RNS| |
|
2 | 2 | ----------------- |
|
3 | 3 | |
|
4 | 4 | Release Date |
|
5 | 5 | ^^^^^^^^^^^^ |
|
6 | 6 | |
|
7 | - TBA | |
|
7 | ||
|
8 | - 2024-05-14 | |
|
8 | 9 | |
|
9 | 10 | |
|
10 | 11 | New Features |
|
11 | 12 | ^^^^^^^^^^^^ |
|
12 | 13 | |
|
13 | 14 | - Full support of Python3 and Python3.11 |
|
14 | 15 | - Git repositories with LFS object are now pushing and pulling the LFS objects when remote sync is enabled. |
|
15 | - Archive generation: implemented a new system for caching generated archive files that allows setting cache size limit | |
|
16 | see: `archive_cache.cache_size_gb=` option. | |
|
16 | - Archive generation: implemented a new system for caching generated archive files that allows setting cache size limit see: archive_cache.cache_size_gb= option. | |
|
17 | 17 | - Introduced statsd metrics in various places for new monitoring stack to provide useful details on traffic and usage. |
|
18 | 18 | |
|
19 | 19 | |
|
20 | 20 | General |
|
21 | 21 | ^^^^^^^ |
|
22 | 22 | |
|
23 | - Upgraded all dependency libraries to their latest available versions | |
|
24 | - Dropped support for deprecated hgsubversion no longer available in python3 | |
|
23 | - Upgraded all dependency libraries to their latest available versions for python3 compatibility
|
25 | 24 | |
|
26 | 25 | |
|
27 | 26 | Security |
|
28 | 27 | ^^^^^^^^ |
|
29 | 28 | |
|
29 | - Fixed a few edge cases of permission invalidation on change of permissions
|
30 | 30 | |
|
31 | 31 | |
|
32 | 32 | Performance |
|
33 | 33 | ^^^^^^^^^^^ |
|
34 | 34 | |
|
35 | 35 | - Moved a lot of dulwich-based code to the libgit2 implementation for better performance
|
36 | 36 | |
|
37 | 37 | |
|
38 | 38 | Fixes |
|
39 | 39 | ^^^^^ |
|
40 | 40 | |
|
41 | - Various small fixes and improvements found during python3 migration | |
|
41 | 42 | |
|
42 | 43 | |
|
43 | 44 | Upgrade notes |
|
44 | 45 | ^^^^^^^^^^^^^ |
|
45 | 46 | |
|
46 | 47 | - RhodeCode 5.0.0 is a major upgrade that supports the latest Python version 3.11,
|
47 | 48 | and also comes with lots of internal changes and performance improvements. |
|
48 | 49 | - RhodeCode 5.0.0 is only supported via our new docker rcstack installer or helm/k8s stack |
|
49 | 50 | - Please see migration guide :ref:`rcstack:migration-to-rcstack` |
|
50 | 51 | - RhodeCode 5.0.0 also requires a new license for EE instances, please reach out to us via support@rhodecode.com to |
|
51 | 52 | convert your license key. |
@@ -1,168 +1,173 b'' | |||
|
1 | 1 | .. _rhodecode-release-notes-ref: |
|
2 | 2 | |
|
3 | 3 | Release Notes |
|
4 | 4 | ============= |
|
5 | 5 | |
|
6 | 6 | |RCE| 5.x Versions |
|
7 | 7 | ------------------ |
|
8 | 8 | |
|
9 | 9 | .. toctree:: |
|
10 | 10 | :maxdepth: 1 |
|
11 | 11 | |
|
12 | ||
|
13 | release-notes-5.1.0.rst | |
|
14 | release-notes-5.0.3.rst | |
|
15 | release-notes-5.0.2.rst | |
|
16 | release-notes-5.0.1.rst | |
|
12 | 17 | release-notes-5.0.0.rst |
|
13 | 18 | |
|
14 | 19 | |
|
15 | 20 | |RCE| 4.x Versions |
|
16 | 21 | ------------------ |
|
17 | 22 | |
|
18 | 23 | .. toctree:: |
|
19 | 24 | :maxdepth: 1 |
|
20 | 25 | |
|
21 | 26 | release-notes-4.27.1.rst |
|
22 | 27 | release-notes-4.27.0.rst |
|
23 | 28 | release-notes-4.26.0.rst |
|
24 | 29 | release-notes-4.25.2.rst |
|
25 | 30 | release-notes-4.25.1.rst |
|
26 | 31 | release-notes-4.25.0.rst |
|
27 | 32 | release-notes-4.24.1.rst |
|
28 | 33 | release-notes-4.24.0.rst |
|
29 | 34 | release-notes-4.23.2.rst |
|
30 | 35 | release-notes-4.23.1.rst |
|
31 | 36 | release-notes-4.23.0.rst |
|
32 | 37 | release-notes-4.22.0.rst |
|
33 | 38 | release-notes-4.21.0.rst |
|
34 | 39 | release-notes-4.20.1.rst |
|
35 | 40 | release-notes-4.20.0.rst |
|
36 | 41 | release-notes-4.19.3.rst |
|
37 | 42 | release-notes-4.19.2.rst |
|
38 | 43 | release-notes-4.19.1.rst |
|
39 | 44 | release-notes-4.19.0.rst |
|
40 | 45 | release-notes-4.18.3.rst |
|
41 | 46 | release-notes-4.18.2.rst |
|
42 | 47 | release-notes-4.18.1.rst |
|
43 | 48 | release-notes-4.18.0.rst |
|
44 | 49 | release-notes-4.17.4.rst |
|
45 | 50 | release-notes-4.17.3.rst |
|
46 | 51 | release-notes-4.17.2.rst |
|
47 | 52 | release-notes-4.17.1.rst |
|
48 | 53 | release-notes-4.17.0.rst |
|
49 | 54 | release-notes-4.16.2.rst |
|
50 | 55 | release-notes-4.16.1.rst |
|
51 | 56 | release-notes-4.16.0.rst |
|
52 | 57 | release-notes-4.15.2.rst |
|
53 | 58 | release-notes-4.15.1.rst |
|
54 | 59 | release-notes-4.15.0.rst |
|
55 | 60 | release-notes-4.14.1.rst |
|
56 | 61 | release-notes-4.14.0.rst |
|
57 | 62 | release-notes-4.13.3.rst |
|
58 | 63 | release-notes-4.13.2.rst |
|
59 | 64 | release-notes-4.13.1.rst |
|
60 | 65 | release-notes-4.13.0.rst |
|
61 | 66 | release-notes-4.12.4.rst |
|
62 | 67 | release-notes-4.12.3.rst |
|
63 | 68 | release-notes-4.12.2.rst |
|
64 | 69 | release-notes-4.12.1.rst |
|
65 | 70 | release-notes-4.12.0.rst |
|
66 | 71 | release-notes-4.11.6.rst |
|
67 | 72 | release-notes-4.11.5.rst |
|
68 | 73 | release-notes-4.11.4.rst |
|
69 | 74 | release-notes-4.11.3.rst |
|
70 | 75 | release-notes-4.11.2.rst |
|
71 | 76 | release-notes-4.11.1.rst |
|
72 | 77 | release-notes-4.11.0.rst |
|
73 | 78 | release-notes-4.10.6.rst |
|
74 | 79 | release-notes-4.10.5.rst |
|
75 | 80 | release-notes-4.10.4.rst |
|
76 | 81 | release-notes-4.10.3.rst |
|
77 | 82 | release-notes-4.10.2.rst |
|
78 | 83 | release-notes-4.10.1.rst |
|
79 | 84 | release-notes-4.10.0.rst |
|
80 | 85 | release-notes-4.9.1.rst |
|
81 | 86 | release-notes-4.9.0.rst |
|
82 | 87 | release-notes-4.8.0.rst |
|
83 | 88 | release-notes-4.7.2.rst |
|
84 | 89 | release-notes-4.7.1.rst |
|
85 | 90 | release-notes-4.7.0.rst |
|
86 | 91 | release-notes-4.6.1.rst |
|
87 | 92 | release-notes-4.6.0.rst |
|
88 | 93 | release-notes-4.5.2.rst |
|
89 | 94 | release-notes-4.5.1.rst |
|
90 | 95 | release-notes-4.5.0.rst |
|
91 | 96 | release-notes-4.4.2.rst |
|
92 | 97 | release-notes-4.4.1.rst |
|
93 | 98 | release-notes-4.4.0.rst |
|
94 | 99 | release-notes-4.3.1.rst |
|
95 | 100 | release-notes-4.3.0.rst |
|
96 | 101 | release-notes-4.2.1.rst |
|
97 | 102 | release-notes-4.2.0.rst |
|
98 | 103 | release-notes-4.1.2.rst |
|
99 | 104 | release-notes-4.1.1.rst |
|
100 | 105 | release-notes-4.1.0.rst |
|
101 | 106 | release-notes-4.0.1.rst |
|
102 | 107 | release-notes-4.0.0.rst |
|
103 | 108 | |
|
104 | 109 | |RCE| 3.x Versions |
|
105 | 110 | ------------------ |
|
106 | 111 | |
|
107 | 112 | .. toctree:: |
|
108 | 113 | :maxdepth: 1 |
|
109 | 114 | |
|
110 | 115 | release-notes-3.8.4.rst |
|
111 | 116 | release-notes-3.8.3.rst |
|
112 | 117 | release-notes-3.8.2.rst |
|
113 | 118 | release-notes-3.8.1.rst |
|
114 | 119 | release-notes-3.8.0.rst |
|
115 | 120 | release-notes-3.7.1.rst |
|
116 | 121 | release-notes-3.7.0.rst |
|
117 | 122 | release-notes-3.6.1.rst |
|
118 | 123 | release-notes-3.6.0.rst |
|
119 | 124 | release-notes-3.5.2.rst |
|
120 | 125 | release-notes-3.5.1.rst |
|
121 | 126 | release-notes-3.5.0.rst |
|
122 | 127 | release-notes-3.4.1.rst |
|
123 | 128 | release-notes-3.4.0.rst |
|
124 | 129 | release-notes-3.3.4.rst |
|
125 | 130 | release-notes-3.3.3.rst |
|
126 | 131 | release-notes-3.3.2.rst |
|
127 | 132 | release-notes-3.3.1.rst |
|
128 | 133 | release-notes-3.3.0.rst |
|
129 | 134 | release-notes-3.2.3.rst |
|
130 | 135 | release-notes-3.2.2.rst |
|
131 | 136 | release-notes-3.2.1.rst |
|
132 | 137 | release-notes-3.2.0.rst |
|
133 | 138 | release-notes-3.1.1.rst |
|
134 | 139 | release-notes-3.1.0.rst |
|
135 | 140 | release-notes-3.0.2.rst |
|
136 | 141 | release-notes-3.0.1.rst |
|
137 | 142 | release-notes-3.0.0.rst |
|
138 | 143 | |
|
139 | 144 | |RCE| 2.x Versions |
|
140 | 145 | ------------------ |
|
141 | 146 | |
|
142 | 147 | .. toctree:: |
|
143 | 148 | :maxdepth: 1 |
|
144 | 149 | |
|
145 | 150 | release-notes-2.2.8.rst |
|
146 | 151 | release-notes-2.2.7.rst |
|
147 | 152 | release-notes-2.2.6.rst |
|
148 | 153 | release-notes-2.2.5.rst |
|
149 | 154 | release-notes-2.2.4.rst |
|
150 | 155 | release-notes-2.2.3.rst |
|
151 | 156 | release-notes-2.2.2.rst |
|
152 | 157 | release-notes-2.2.1.rst |
|
153 | 158 | release-notes-2.2.0.rst |
|
154 | 159 | release-notes-2.1.0.rst |
|
155 | 160 | release-notes-2.0.2.rst |
|
156 | 161 | release-notes-2.0.1.rst |
|
157 | 162 | release-notes-2.0.0.rst |
|
158 | 163 | |
|
159 | 164 | |RCE| 1.x Versions |
|
160 | 165 | ------------------ |
|
161 | 166 | |
|
162 | 167 | .. toctree:: |
|
163 | 168 | :maxdepth: 1 |
|
164 | 169 | |
|
165 | 170 | release-notes-1.7.2.rst |
|
166 | 171 | release-notes-1.7.1.rst |
|
167 | 172 | release-notes-1.7.0.rst |
|
168 | 173 | release-notes-1.6.0.rst |
@@ -1,91 +1,91 b'' | |||
|
1 | 1 | .. _multi-instance-setup: |
|
2 | 2 | |
|
3 | 3 | Scaling |RCE| Using Multiple Instances |
|
4 | 4 | ====================================== |
|
5 | 5 | |
|
6 | 6 | Running multiple instances of |RCE| from a single database can be used to |
|
7 | 7 | scale the application for the following deployment setups: |
|
8 | 8 | |
|
9 | 9 | * Using dedicated Continuous Integrations instances. |
|
10 | 10 | * Locating instances closer to geographically dispersed development teams. |
|
11 | 11 | * Running production and testing instances, or failover instances on a |
|
12 | 12 | different server. |
|
13 | 13 | * Running proxy read-only instances for pull operations. |
|
14 | 14 | |
|
15 | 15 | If you wish to run multiple instances of |RCE| using a single database for |
|
16 | 16 | settings, use the following instructions to set this up. Before you get onto |
|
17 | 17 | multiple instances though, you should install |RCE|, and set |
|
18 | 18 | up your first instance as you see fit. You can see the full instructions here |
|
19 | 19 | :ref:`Installing RhodeCode Enterprise <control:rcc>` |
|
20 | 20 | |
|
21 | 21 | Once you have configured your first instance, you can run additional instances |
|
22 | 22 | from the same database using the following steps: |
|
23 | 23 | |
|
24 | 24 | 1. Install a new instance of |RCE|, choosing SQLite as the database. It is |
|
25 | 25 | important to choose SQLite, because this will not overwrite any other |
|
26 | 26 | database settings you may have. |
|
27 | 27 | |
|
28 | 28 | Once the new instance is installed you need to update the licence token and |
|
29 | 29 | database connection string in the |
|
30 | :file:`

30 | :file:`config/_shared/rhodecode.ini` file.
|
31 | 31 | |
|
32 | 32 | .. code-block:: bash |
|
33 | 33 | |
|
34 | 34 | $ rccontrol install Enterprise |
|
35 | 35 | |
|
36 | 36 | Agree to the licence agreement? [y/N]: y |
|
37 | 37 | Username [admin]: username |
|
38 | 38 | Password (min 6 chars): |
|
39 | 39 | Repeat for confirmation: |
|
40 | 40 | Email: user@example.com |
|
41 | 41 | Repositories location [/home/brian/repos]:
|
42 | 42 | IP to start the Enterprise server on [127.0.0.1]: |
|
43 | 43 | Port for the Enterprise server to use [10000]: |
|
44 | 44 | Database type - [s]qlite, [m]ysql, [p]ostresql: s |
|
45 | 45 | |
|
46 | 46 | 2. The licence token used on each new instance needs to be the token from your |
|
47 | 47 | initial instance. This allows multiple instances to run the same licence key. |
|
48 | 48 | |
|
49 | 49 | To get the licence token, go to the |RCE| interface of your primary |
|
50 | 50 | instance and select :menuselection:`admin --> setting --> license`. Then |
|
51 | 51 | update the licence token setting in each new instance's |
|
52 | 52 | :file:`rhodecode.ini` file. |
|
53 | 53 | |
|
54 | 54 | .. code-block:: ini |
|
55 | 55 | |
|
56 | 56 | ## generated license token, go to the license page in RhodeCode settings to get
|
57 | 57 | ## new token |
|
58 | 58 | license_token = add-token-here |
|
59 | 59 | |
|
60 | 60 | 3. Update the database connection string in the |
|
61 | 61 | :file:`rhodecode.ini` file to point to your database. For |
|
62 | 62 | more information, see :ref:`config-database`. |
|
63 | 63 | |
|
64 | 64 | .. code-block:: ini |
|
65 | 65 | |
|
66 | 66 | ######################################################### |
|
67 | 67 | ### DB CONFIGS - EACH DB WILL HAVE IT'S OWN CONFIG ### |
|
68 | 68 | ######################################################### |
|
69 | 69 | |
|
70 | 70 | # Default SQLite config |
|
71 | 71 | sqlalchemy.db1.url = sqlite:////home/user/.rccontrol/enterprise-1/rhodecode.db |
|
72 | 72 | |
|
73 | 73 | # Use this example for PostgreSQL
|
74 | 74 | sqlalchemy.db1.url = postgresql://username:password@localhost/rhodecode |
|
75 | 75 | |
|
76 | 76 | 4. Restart your updated instance. Once restarted the new instance will read |
|
77 | 77 | the licence key in the database and will function identically to the
|
78 | 78 | original instance. |
|
79 | 79 | |
|
80 | 80 | .. code-block:: bash |
|
81 | 81 | |
|
82 | 82 | $ rccontrol restart enterprise-2 |
|
83 | 83 | |
|
84 | 84 | If you wish to add additional performance to your setup, see the |
|
85 | 85 | :ref:`rhodecode-tuning-ref` section. |
|
86 | 86 | |
|
87 | 87 | Scaling Deployment Diagram |
|
88 | 88 | -------------------------- |
|
89 | 89 | |
|
90 | 90 | .. image:: ../images/scaling-diagrm.png |
|
91 | 91 | :align: center |
@@ -1,61 +1,60 b'' | |||
|
1 | 1 | { |
|
2 | 2 | "name": "rhodecode-enterprise", |
|
3 | 3 | "version": "5.0.0", |
|
4 | 4 | "private": true, |
|
5 | 5 | "description": "RhodeCode JS packaged", |
|
6 | 6 | "license": "SEE LICENSE IN LICENSE.txt", |
|
7 | 7 | "repository": { |
|
8 | 8 | "type": "hg", |
|
9 | 9 | "url": "https://code.rhodecode.com/rhodecode-enterprise-ce" |
|
10 | 10 | }, |
|
11 | 11 | "devDependencies": { |
|
12 | 12 | "@polymer/iron-a11y-keys": "^3.0.0", |
|
13 | 13 | "@polymer/iron-ajax": "^3.0.0", |
|
14 | 14 | "@polymer/iron-autogrow-textarea": "^3.0.0", |
|
15 | 15 | "@polymer/paper-button": "^3.0.0", |
|
16 | 16 | "@polymer/paper-spinner": "^3.0.0", |
|
17 | 17 | "@polymer/paper-toast": "^3.0.0", |
|
18 | 18 | "@polymer/paper-toggle-button": "^3.0.0", |
|
19 | 19 | "@polymer/paper-tooltip": "^3.0.0", |
|
20 | 20 | "@polymer/polymer": "^3.0.0", |
|
21 | 21 | "@webcomponents/webcomponentsjs": "^2.0.0", |
|
22 | 22 | "babel-core": "^6.26.3", |
|
23 | 23 | "babel-loader": "^7.1.2", |
|
24 | 24 | "babel-plugin-transform-object-rest-spread": "^6.26.0", |
|
25 | 25 | "babel-preset-env": "^1.6.0", |
|
26 | 26 | "clipboard": "^2.0.1", |
|
27 | 27 | "copy-webpack-plugin": "^4.4.2", |
|
28 | 28 | "css-loader": "^0.28.11", |
|
29 | 29 | "dropzone": "^5.5.0", |
|
30 | 30 | "exports-loader": "^0.6.4", |
|
31 | 31 | "favico.js": "^0.3.10", |
|
32 | 32 | "grunt": "^0.4.5", |
|
33 | 33 | "grunt-cli": "^1.4.3", |
|
34 | 34 | "grunt-contrib-concat": "^0.5.1", |
|
35 | 35 | "grunt-contrib-copy": "^1.0.0", |
|
36 | 36 | "grunt-contrib-jshint": "^0.12.0", |
|
37 | 37 | "grunt-contrib-less": "^1.1.0", |
|
38 | 38 | "grunt-contrib-uglify": "^4.0.1", |
|
39 | 39 | "grunt-contrib-watch": "^0.6.1", |
|
40 | 40 | "grunt-webpack": "^3.1.3", |
|
41 | 41 | "html-loader": "^0.4.4", |
|
42 | 42 | "html-webpack-plugin": "^3.2.0", |
|
43 | 43 | "imports-loader": "^0.7.1", |
|
44 | 44 | "jquery": "1.11.3", |
|
45 | 45 | "jshint": "^2.9.1-rc3", |
|
46 | 46 | "mark.js": "8.11.1", |
|
47 | 47 | "moment": "^2.18.1", |
|
48 | 48 | "mousetrap": "^1.6.1", |
|
49 | 49 | "polymer-webpack-loader": "^2.0.1", |
|
50 | "qrious": "^4.0.2", | |
|
51 | 50 | "raw-loader": "1.0.0-beta.0", |
|
52 | 51 | "sticky-sidebar": "3.3.1", |
|
53 | 52 | "style-loader": "^0.21.0", |
|
54 | 53 | "sweetalert2": "^9.10.12", |
|
55 | 54 | "ts-loader": "^1.3.3", |
|
56 | 55 | "waypoints": "4.0.1", |
|
57 | 56 | "webpack": "4.23.1", |
|
58 | 57 | "webpack-cli": "3.1.2", |
|
59 | 58 | "webpack-uglify-js-plugin": "^1.1.9" |
|
60 | 59 | } |
|
61 | 60 | } |
@@ -1,23 +1,27 b'' | |||
|
1 | 1 | [pytest] |
|
2 | 2 | testpaths = rhodecode |
|
3 | 3 | norecursedirs = rhodecode/public rhodecode/templates tests/scripts |
|
4 | 4 | cache_dir = /tmp/.pytest_cache |
|
5 | 5 | |
|
6 | 6 | pyramid_config = rhodecode/tests/rhodecode.ini |
|
7 | 7 | vcsserver_protocol = http |
|
8 | 8 | vcsserver_config_http = rhodecode/tests/vcsserver_http.ini |
|
9 | 9 | |
|
10 | 10 | addopts = |
|
11 | 11 | --pdbcls=IPython.terminal.debugger:TerminalPdb |
|
12 | 12 | --strict-markers |
|
13 | 13 | --capture=no |
|
14 | 14 | --show-capture=all |
|
15 | 15 | |
|
16 | 16 | # --test-loglevel=INFO, show log-level during execution |
|
17 | 17 | |
|
18 | 18 | markers = |
|
19 | 19 | vcs_operations: Mark tests depending on a running RhodeCode instance. |
|
20 | 20 | xfail_backends: Mark tests as xfail for given backends. |
|
21 | 21 | skip_backends: Mark tests as skipped for given backends. |
|
22 | 22 | backends: Mark backends |
|
23 | 23 | dbs: database markers for running tests for given DB |
|
24 | ||
|
25 | env = | |
|
26 | RC_TEST=1 | |
|
27 | RUN_ENV=test |
@@ -1,295 +1,313 b'' | |||
|
1 | 1 | # deps, generated via pipdeptree --exclude setuptools,wheel,pipdeptree,pip -f | tr '[:upper:]' '[:lower:]' |
|
2 | 2 | |
|
3 | alembic==1.1

3 | alembic==1.13.1
|
4 | 4 | mako==1.2.4 |
|
5 | 5 | markupsafe==2.1.2 |
|
6 | sqlalchemy==1.4.5

6 | sqlalchemy==1.4.52
|
7 | 7 | greenlet==3.0.3 |
|
8 | 8 | typing_extensions==4.9.0 |
|
9 | 9 | async-timeout==4.0.3 |
|
10 | 10 | babel==2.12.1 |
|
11 | 11 | beaker==1.12.1 |
|
12 | 12 | celery==5.3.6 |
|
13 | 13 | billiard==4.2.0 |
|
14 | 14 | click==8.1.3 |
|
15 | 15 | click-didyoumean==0.3.0 |
|
16 | 16 | click==8.1.3 |
|
17 | 17 | click-plugins==1.1.1 |
|
18 | 18 | click==8.1.3 |
|
19 | 19 | click-repl==0.2.0 |
|
20 | 20 | click==8.1.3 |
|
21 | 21 | prompt-toolkit==3.0.38 |
|
22 | 22 | wcwidth==0.2.6 |
|
23 | 23 | six==1.16.0 |
|
24 | 24 | kombu==5.3.5 |
|
25 | 25 | amqp==5.2.0 |
|
26 | 26 | vine==5.1.0 |
|
27 | 27 | vine==5.1.0 |
|
28 | 28 | python-dateutil==2.8.2 |
|
29 | 29 | six==1.16.0 |
|
30 | tzdata==202

30 | tzdata==2024.1
|
31 | 31 | vine==5.1.0 |
|
32 | 32 | channelstream==0.7.1 |
|
33 | 33 | gevent==24.2.1 |
|
34 | 34 | greenlet==3.0.3 |
|
35 | 35 | zope.event==5.0.0 |
|
36 | zope.interface==6.

36 | zope.interface==6.3.0
|
37 | 37 | itsdangerous==1.1.0 |
|
38 | 38 | marshmallow==2.18.0 |
|
39 | 39 | pyramid==2.0.2 |
|
40 | 40 | hupper==1.12 |
|
41 | 41 | plaster==1.1.2 |
|
42 | 42 | plaster-pastedeploy==1.0.1 |
|
43 | 43 | pastedeploy==3.1.0 |
|
44 | 44 | plaster==1.1.2 |
|
45 | 45 | translationstring==1.4 |
|
46 | 46 | venusian==3.0.0 |
|
47 | 47 | webob==1.8.7 |
|
48 | 48 | zope.deprecation==5.0.0 |
|
49 | zope.interface==6.

50 | pyramid-apispec==0.3.3

51 | apispec==1.3.3

49 | zope.interface==6.3.0
|
52 | 50 | pyramid-jinja2==2.10 |
|
53 | 51 | jinja2==3.1.2 |
|
54 | 52 | markupsafe==2.1.2 |
|
55 | 53 | markupsafe==2.1.2 |
|
56 | 54 | pyramid==2.0.2 |
|
57 | 55 | hupper==1.12 |
|
58 | 56 | plaster==1.1.2 |
|
59 | 57 | plaster-pastedeploy==1.0.1 |
|
60 | 58 | pastedeploy==3.1.0 |
|
61 | 59 | plaster==1.1.2 |
|
62 | 60 | translationstring==1.4 |
|
63 | 61 | venusian==3.0.0 |
|
64 | 62 | webob==1.8.7 |
|
65 | 63 | zope.deprecation==5.0.0 |
|
66 | zope.interface==6.

64 | zope.interface==6.3.0
|
67 | 65 | zope.deprecation==5.0.0 |
|
68 | 66 | python-dateutil==2.8.2 |
|
69 | 67 | six==1.16.0 |
|
70 | 68 | requests==2.28.2 |
|
71 | 69 | certifi==2022.12.7 |
|
72 | 70 | charset-normalizer==3.1.0 |
|
73 | 71 | idna==3.4 |
|
74 | 72 | urllib3==1.26.14 |
|
75 | 73 | ws4py==0.5.1 |
|
76 | 74 | deform==2.0.15 |
|
77 | 75 | chameleon==3.10.2 |
|
78 | 76 | colander==2.0 |
|
79 | 77 | iso8601==1.1.0 |
|
80 | 78 | translationstring==1.4 |
|
81 | 79 | iso8601==1.1.0 |
|
82 | 80 | peppercorn==0.6 |
|
83 | 81 | translationstring==1.4 |
|
84 | 82 | zope.deprecation==5.0.0 |
|
85 | diskcache==5.6.3 | |
|
86 | 83 | docutils==0.19 |
|
87 | dogpile.cache==1.3.

84 | dogpile.cache==1.3.3
|
88 | 85 | decorator==5.1.1 |
|
89 | 86 | stevedore==5.1.0 |
|
90 | 87 | pbr==5.11.1 |
|
91 | 88 | formencode==2.1.0 |
|
92 | 89 | six==1.16.0 |
|
90 | fsspec==2024.6.0 | |
|
93 | 91 | gunicorn==21.2.0 |
|
94 | packaging==2

92 | packaging==24.0
|
95 | 93 | gevent==24.2.1 |
|
96 | 94 | greenlet==3.0.3 |
|
97 | 95 | zope.event==5.0.0 |
|
98 | zope.interface==6.

96 | zope.interface==6.3.0
|
99 | 97 | ipython==8.14.0 |
|
100 | 98 | backcall==0.2.0 |
|
101 | 99 | decorator==5.1.1 |
|
102 | 100 | jedi==0.19.0 |
|
103 | 101 | parso==0.8.3 |
|
104 | 102 | matplotlib-inline==0.1.6 |
|
105 | 103 | traitlets==5.9.0 |
|
106 | 104 | pexpect==4.8.0 |
|
107 | 105 | ptyprocess==0.7.0 |
|
108 | 106 | pickleshare==0.7.5 |
|
109 | 107 | prompt-toolkit==3.0.38 |
|
110 | 108 | wcwidth==0.2.6 |
|
111 | 109 | pygments==2.15.1 |
|
112 | 110 | stack-data==0.6.2 |
|
113 | 111 | asttokens==2.2.1 |
|
114 | 112 | six==1.16.0 |
|
115 | 113 | executing==1.2.0 |
|
116 | 114 | pure-eval==0.2.2 |
|
117 | 115 | traitlets==5.9.0 |
|
118 | 116 | markdown==3.4.3 |
|
119 | msgpack==1.0. | |
|
117 | msgpack==1.0.8 | |
|
120 | 118 | mysqlclient==2.1.1 |
|
121 | 119 | nbconvert==7.7.3 |
|
122 | beautifulsoup4==4.1 | |
|
123 | soupsieve==2. | |
|
120 | beautifulsoup4==4.12.3 | |
|
121 | soupsieve==2.5 | |
|
124 | 122 | bleach==6.1.0 |
|
125 | 123 | six==1.16.0 |
|
126 | 124 | webencodings==0.5.1 |
|
127 | 125 | defusedxml==0.7.1 |
|
128 | 126 | jinja2==3.1.2 |
|
129 | 127 | markupsafe==2.1.2 |
|
130 | 128 | jupyter_core==5.3.1 |
|
131 | 129 | platformdirs==3.10.0 |
|
132 | 130 | traitlets==5.9.0 |
|
133 | 131 | jupyterlab-pygments==0.2.2 |
|
134 | 132 | markupsafe==2.1.2 |
|
135 | 133 | mistune==2.0.5 |
|
136 | 134 | nbclient==0.8.0 |
|
137 | 135 | jupyter_client==8.3.0 |
|
138 | 136 | jupyter_core==5.3.1 |
|
139 | 137 | platformdirs==3.10.0 |
|
140 | 138 | traitlets==5.9.0 |
|
141 | 139 | python-dateutil==2.8.2 |
|
142 | 140 | six==1.16.0 |
|
143 | 141 | pyzmq==25.0.0 |
|
144 | 142 | tornado==6.2 |
|
145 | 143 | traitlets==5.9.0 |
|
146 | 144 | jupyter_core==5.3.1 |
|
147 | 145 | platformdirs==3.10.0 |
|
148 | 146 | traitlets==5.9.0 |
|
149 | 147 | nbformat==5.9.2 |
|
150 | 148 | fastjsonschema==2.18.0 |
|
151 | 149 | jsonschema==4.18.6 |
|
152 | 150 | attrs==22.2.0 |
|
153 | 151 | pyrsistent==0.19.3 |
|
154 | 152 | jupyter_core==5.3.1 |
|
155 | 153 | platformdirs==3.10.0 |
|
156 | 154 | traitlets==5.9.0 |
|
157 | 155 | traitlets==5.9.0 |
|
158 | 156 | traitlets==5.9.0 |
|
159 | 157 | nbformat==5.9.2 |
|
160 | 158 | fastjsonschema==2.18.0 |
|
161 | 159 | jsonschema==4.18.6 |
|
162 | 160 | attrs==22.2.0 |
|
163 | 161 | pyrsistent==0.19.3 |
|
164 | 162 | jupyter_core==5.3.1 |
|
165 | 163 | platformdirs==3.10.0 |
|
166 | 164 | traitlets==5.9.0 |
|
167 | 165 | traitlets==5.9.0 |
|
168 | packaging==23.1 | |
|
169 | 166 | pandocfilters==1.5.0 |
|
170 | 167 | pygments==2.15.1 |
|
171 | 168 | tinycss2==1.2.1 |
|
172 | 169 | webencodings==0.5.1 |
|
173 | 170 | traitlets==5.9.0 |
|
174 | orjson==3. | |
|
175 | paste | |
|
176 | paste==3.7.1 | |
|
177 | six==1.16.0 | |
|
178 | pastedeploy==3.1.0 | |
|
179 | six==1.16.0 | |
|
171 | orjson==3.10.3 | |
|
172 | paste==3.10.1 | |
|
180 | 173 | premailer==3.10.0 |
|
181 | cachetools==5.3. | |
|
174 | cachetools==5.3.3 | |
|
182 | 175 | cssselect==1.2.0 |
|
183 | 176 | cssutils==2.6.0 |
|
184 | 177 | lxml==4.9.3 |
|
185 | 178 | requests==2.28.2 |
|
186 | 179 | certifi==2022.12.7 |
|
187 | 180 | charset-normalizer==3.1.0 |
|
188 | 181 | idna==3.4 |
|
189 | 182 | urllib3==1.26.14 |
|
190 | 183 | psutil==5.9.8 |
|
191 | 184 | psycopg2==2.9.9 |
|
192 | 185 | py-bcrypt==0.4 |
|
193 | 186 | pycmarkgfm==1.2.0 |
|
194 | 187 | cffi==1.16.0 |
|
195 | 188 | pycparser==2.21 |
|
196 | 189 | pycryptodome==3.17 |
|
197 | pycurl==7.45. | |
|
190 | pycurl==7.45.3 | |
|
198 | 191 | pymysql==1.0.3 |
|
199 | 192 | pyotp==2.8.0 |
|
200 | 193 | pyparsing==3.1.1 |
|
201 | pyramid-debugtoolbar==4.11 | |
|
194 | pyramid-debugtoolbar==4.12.1 | |
|
202 | 195 | pygments==2.15.1 |
|
203 | 196 | pyramid==2.0.2 |
|
204 | 197 | hupper==1.12 |
|
205 | 198 | plaster==1.1.2 |
|
206 | 199 | plaster-pastedeploy==1.0.1 |
|
207 | 200 | pastedeploy==3.1.0 |
|
208 | 201 | plaster==1.1.2 |
|
209 | 202 | translationstring==1.4 |
|
210 | 203 | venusian==3.0.0 |
|
211 | 204 | webob==1.8.7 |
|
212 | 205 | zope.deprecation==5.0.0 |
|
213 | zope.interface==6. | |
|
206 | zope.interface==6.3.0 | |
|
214 | 207 | pyramid-mako==1.1.0 |
|
215 | 208 | mako==1.2.4 |
|
216 | 209 | markupsafe==2.1.2 |
|
217 | 210 | pyramid==2.0.2 |
|
218 | 211 | hupper==1.12 |
|
219 | 212 | plaster==1.1.2 |
|
220 | 213 | plaster-pastedeploy==1.0.1 |
|
221 | 214 | pastedeploy==3.1.0 |
|
222 | 215 | plaster==1.1.2 |
|
223 | 216 | translationstring==1.4 |
|
224 | 217 | venusian==3.0.0 |
|
225 | 218 | webob==1.8.7 |
|
226 | 219 | zope.deprecation==5.0.0 |
|
227 | zope.interface==6. | |
|
220 | zope.interface==6.3.0 | |
|
228 | 221 | pyramid-mailer==0.15.1 |
|
229 | 222 | pyramid==2.0.2 |
|
230 | 223 | hupper==1.12 |
|
231 | 224 | plaster==1.1.2 |
|
232 | 225 | plaster-pastedeploy==1.0.1 |
|
233 | 226 | pastedeploy==3.1.0 |
|
234 | 227 | plaster==1.1.2 |
|
235 | 228 | translationstring==1.4 |
|
236 | 229 | venusian==3.0.0 |
|
237 | 230 | webob==1.8.7 |
|
238 | 231 | zope.deprecation==5.0.0 |
|
239 | zope.interface==6. | |
|
232 | zope.interface==6.3.0 | |
|
240 | 233 | repoze.sendmail==4.4.1 |
|
241 | 234 | transaction==3.1.0 |
|
242 | zope.interface==6. | |
|
243 | zope.interface==6. | |
|
235 | zope.interface==6.3.0 | |
|
236 | zope.interface==6.3.0 | |
|
244 | 237 | transaction==3.1.0 |
|
245 | zope.interface==6. | |
|
238 | zope.interface==6.3.0 | |
|
246 | 239 | python-ldap==3.4.3 |
|
247 | 240 | pyasn1==0.4.8 |
|
248 | 241 | pyasn1-modules==0.2.8 |
|
249 | 242 | pyasn1==0.4.8 |
|
250 | 243 | python-memcached==1.59 |
|
251 | 244 | six==1.16.0 |
|
252 | 245 | python-pam==2.0.2 |
|
253 | 246 | python3-saml==1.15.0 |
|
254 | 247 | isodate==0.6.1 |
|
255 | 248 | six==1.16.0 |
|
256 | 249 | lxml==4.9.3 |
|
257 | 250 | xmlsec==1.3.13 |
|
258 | 251 | lxml==4.9.3 |
|
259 | 252 | pyyaml==6.0.1 |
|
260 | redis==5.0. | |
|
253 | redis==5.0.4 | |
|
254 | async-timeout==4.0.3 | |
|
261 | 255 | regex==2022.10.31 |
|
262 | 256 | routes==2.5.1 |
|
263 | 257 | repoze.lru==0.7 |
|
264 | 258 | six==1.16.0 |
|
265 | simplejson==3.19.1 | |
|
259 | s3fs==2024.6.0 | |
|
260 | aiobotocore==2.13.0 | |
|
261 | aiohttp==3.9.5 | |
|
262 | aiosignal==1.3.1 | |
|
263 | frozenlist==1.4.1 | |
|
264 | attrs==22.2.0 | |
|
265 | frozenlist==1.4.1 | |
|
266 | multidict==6.0.5 | |
|
267 | yarl==1.9.4 | |
|
268 | idna==3.4 | |
|
269 | multidict==6.0.5 | |
|
270 | aioitertools==0.11.0 | |
|
271 | botocore==1.34.106 | |
|
272 | jmespath==1.0.1 | |
|
273 | python-dateutil==2.8.2 | |
|
274 | six==1.16.0 | |
|
275 | urllib3==1.26.14 | |
|
276 | wrapt==1.16.0 | |
|
277 | aiohttp==3.9.5 | |
|
278 | aiosignal==1.3.1 | |
|
279 | frozenlist==1.4.1 | |
|
280 | attrs==22.2.0 | |
|
281 | frozenlist==1.4.1 | |
|
282 | multidict==6.0.5 | |
|
283 | yarl==1.9.4 | |
|
284 | idna==3.4 | |
|
285 | multidict==6.0.5 | |
|
286 | fsspec==2024.6.0 | |
|
287 | simplejson==3.19.2 | |
|
266 | 288 | sshpubkeys==3.3.1 |
|
267 | 289 | cryptography==40.0.2 |
|
268 | 290 | cffi==1.16.0 |
|
269 | 291 | pycparser==2.21 |
|
270 | 292 | ecdsa==0.18.0 |
|
271 | 293 | six==1.16.0 |
|
272 | sqlalchemy==1.4.5 | |
|
294 | sqlalchemy==1.4.52 | |
|
273 | 295 | greenlet==3.0.3 |
|
274 | 296 | typing_extensions==4.9.0 |
|
275 | 297 | supervisor==4.2.5 |
|
276 | 298 | tzlocal==4.3 |
|
277 | 299 | pytz-deprecation-shim==0.1.0.post0 |
|
278 | tzdata==202 | |
|
300 | tzdata==2024.1 | |
|
301 | tempita==0.5.2 | |
|
279 | 302 | unidecode==1.3.6 |
|
280 | 303 | urlobject==2.4.3 |
|
281 | 304 | waitress==3.0.0 |
|
282 | weberror==0.13.1 | |
|
283 | paste==3.7.1 | |
|
284 | six==1.16.0 | |
|
285 | pygments==2.15.1 | |
|
286 | tempita==0.5.2 | |
|
287 | webob==1.8.7 | |
|
288 | webhelpers2==2.0 | |
|
305 | webhelpers2==2.1 | |
|
289 | 306 | markupsafe==2.1.2 |
|
290 | 307 | six==1.16.0 |
|
291 | 308 | whoosh==2.7.4 |
|
292 | 309 | zope.cachedescriptors==5.0.0 |
|
310 | qrcode==7.4.2 | |
|
293 | 311 | |
|
294 | 312 | ## uncomment to add the debug libraries |
|
295 | 313 | #-r requirements_debug.txt |
@@ -1,46 +1,48 b'' | |||
|
1 | 1 | # test related requirements |
|
2 | ||
|
3 | cov-core==1.15.0 | |
|
4 | coverage==7. | |
|
5 | mock==5.0.2 | |
|
6 | py==1.11.0 | |
|
7 | pytest-cov==4.0.0 | |
|
8 | coverage==7.2.3 | |
|
9 | pytest==7.3.1 | |
|
10 | attrs==22.2.0 | |
|
2 | mock==5.1.0 | |
|
3 | pytest-cov==4.1.0 | |
|
4 | coverage==7.4.3 | |
|
5 | pytest==8.1.1 | |
|
11 | 6 | iniconfig==2.0.0 |
|
12 | packaging==2 | |
|
13 | pluggy==1. | |
|
14 | pytest- | |
|
7 | packaging==24.0 | |
|
8 | pluggy==1.4.0 | |
|
9 | pytest-env==1.1.3 | |
|
10 | pytest==8.1.1 | |
|
11 | iniconfig==2.0.0 | |
|
12 | packaging==24.0 | |
|
13 | pluggy==1.4.0 | |
|
15 | 14 | pytest-profiling==1.7.0 |
|
16 | 15 | gprof2dot==2022.7.29 |
|
17 | pytest== | |
|
18 | attrs==22.2.0 | |
|
16 | pytest==8.1.1 | |
|
19 | 17 | iniconfig==2.0.0 |
|
20 | packaging==2 | |
|
21 | pluggy==1. | |
|
18 | packaging==24.0 | |
|
19 | pluggy==1.4.0 | |
|
22 | 20 | six==1.16.0 |
|
23 | pytest-r | |
|
24 | pytest-sugar==0.9.7 | |
|
25 | packaging==23.1 | |
|
26 | pytest==7.3.1 | |
|
27 | attrs==22.2.0 | |
|
21 | pytest-rerunfailures==13.0 | |
|
22 | packaging==24.0 | |
|
23 | pytest==8.1.1 | |
|
28 | 24 | iniconfig==2.0.0 |
|
29 | packaging==2 | |
|
30 | pluggy==1. | |
|
31 | termcolor==2.3.0 | |
|
32 | pytest- | |
|
33 | pytest==7.3.1 | |
|
34 | attrs==22.2.0 | |
|
25 | packaging==24.0 | |
|
26 | pluggy==1.4.0 | |
|
27 | pytest-runner==6.0.1 | |
|
28 | pytest-sugar==1.0.0 | |
|
29 | packaging==24.0 | |
|
30 | pytest==8.1.1 | |
|
35 | 31 | iniconfig==2.0.0 |
|
36 | packaging==2 | |
|
37 | pluggy==1. | |
|
32 | packaging==24.0 | |
|
33 | pluggy==1.4.0 | |
|
34 | termcolor==2.4.0 | |
|
35 | pytest-timeout==2.3.1 | |
|
36 | pytest==8.1.1 | |
|
37 | iniconfig==2.0.0 | |
|
38 | packaging==24.0 | |
|
39 | pluggy==1.4.0 | |
|
38 | 40 | webtest==3.0.0 |
|
39 | beautifulsoup4==4.1 | |
|
40 | soupsieve==2. | |
|
41 | beautifulsoup4==4.12.3 | |
|
42 | soupsieve==2.5 | |
|
41 | 43 | waitress==3.0.0 |
|
42 | 44 | webob==1.8.7 |
|
43 | 45 | |
|
44 | 46 | # RhodeCode test-data |
|
45 | 47 | rc_testdata @ https://code.rhodecode.com/upstream/rc-testdata-dist/raw/77378e9097f700b4c1b9391b56199fe63566b5c9/rc_testdata-0.11.0.tar.gz#egg=rc_testdata |
|
46 | 48 | rc_testdata==0.11.0 |
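The last two rows above pin the test-data package twice: once as a direct archive URL (a PEP 508 direct reference, `name @ url`) and once as a plain version pin. In pip's requirements-file syntax that combination looks like this (hypothetical package name and URL, for illustration only):

```text
# Direct reference: install this exact archive (PEP 508 `name @ url` form).
mypkg @ https://example.com/dist/mypkg-1.0.0.tar.gz#egg=mypkg
# Plain version pin, readable by tools that only parse version specifiers.
mypkg==1.0.0
```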
@@ -1,91 +1,91 b'' | |||
|
1 | 1 | # Copyright (C) 2010-2023 RhodeCode GmbH |
|
2 | 2 | # |
|
3 | 3 | # This program is free software: you can redistribute it and/or modify |
|
4 | 4 | # it under the terms of the GNU Affero General Public License, version 3 |
|
5 | 5 | # (only), as published by the Free Software Foundation. |
|
6 | 6 | # |
|
7 | 7 | # This program is distributed in the hope that it will be useful, |
|
8 | 8 | # but WITHOUT ANY WARRANTY; without even the implied warranty of |
|
9 | 9 | # MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the |
|
10 | 10 | # GNU General Public License for more details. |
|
11 | 11 | # |
|
12 | 12 | # You should have received a copy of the GNU Affero General Public License |
|
13 | 13 | # along with this program. If not, see <http://www.gnu.org/licenses/>. |
|
14 | 14 | # |
|
15 | 15 | # This program is dual-licensed. If you wish to learn more about the |
|
16 | 16 | # RhodeCode Enterprise Edition, including its added features, Support services, |
|
17 | 17 | # and proprietary license terms, please see https://rhodecode.com/licenses/ |
|
18 | 18 | |
|
19 | 19 | import os |
|
20 | 20 | import datetime |
|
21 | 21 | import collections |
|
22 | 22 | import logging |
|
23 | 23 | |
|
24 | 24 | |
|
25 | 25 | now = datetime.datetime.now() |
|
26 | 26 | now = now.strftime("%Y-%m-%d %H:%M:%S") + '.' + f"{int(now.microsecond/1000):03d}" |
|
27 | 27 | |
|
28 | 28 | log = logging.getLogger(__name__) |
|
29 | 29 | log.debug(f'{now} Starting RhodeCode imports...') |
|
30 | 30 | |
|
31 | 31 | |
|
32 | 32 | VERSION = tuple(open(os.path.join( |
|
33 | 33 | os.path.dirname(__file__), 'VERSION')).read().split('.')) |
|
34 | 34 | |
|
35 | 35 | BACKENDS = collections.OrderedDict() |
|
36 | 36 | |
|
37 | 37 | BACKENDS['hg'] = 'Mercurial repository' |
|
38 | 38 | BACKENDS['git'] = 'Git repository' |
|
39 | 39 | BACKENDS['svn'] = 'Subversion repository' |
|
40 | 40 | |
|
41 | 41 | |
|
42 | 42 | CELERY_ENABLED = False |
|
43 | 43 | CELERY_EAGER = False |
|
44 | 44 | |
|
45 | 45 | # link to config for pyramid |
|
46 | 46 | CONFIG = {} |
|
47 | 47 | |
|
48 | 48 | |
|
49 | 49 | class ConfigGet: |
|
50 | 50 | NotGiven = object() |
|
51 | 51 | |
|
52 | 52 | def _get_val_or_missing(self, key, missing): |
|
53 | 53 | if key not in CONFIG: |
|
54 | 54 | if missing == self.NotGiven: |
|
55 | 55 | return missing |
|
56 | 56 | # we don't get key, we don't get missing value, return nothing similar as config.get(key) |
|
57 | 57 | return None |
|
58 | 58 | else: |
|
59 | 59 | val = CONFIG[key] |
|
60 | 60 | return val |
|
61 | 61 | |
|
62 | 62 | def get_str(self, key, missing=NotGiven): |
|
63 | 63 | from rhodecode.lib.str_utils import safe_str |
|
64 | 64 | val = self._get_val_or_missing(key, missing) |
|
65 | 65 | return safe_str(val) |
|
66 | 66 | |
|
67 | 67 | def get_int(self, key, missing=NotGiven): |
|
68 | 68 | from rhodecode.lib.str_utils import safe_int |
|
69 | 69 | val = self._get_val_or_missing(key, missing) |
|
70 | 70 | return safe_int(val) |
|
71 | 71 | |
|
72 | 72 | def get_bool(self, key, missing=NotGiven): |
|
73 | 73 | from rhodecode.lib.type_utils import str2bool |
|
74 | 74 | val = self._get_val_or_missing(key, missing) |
|
75 | 75 | return str2bool(val) |
|
76 | 76 | |
|
77 | 77 | # Populated with the settings dictionary from application init in |
|
78 | 78 | # rhodecode.conf.environment.load_pyramid_environment |
|
79 | 79 | PYRAMID_SETTINGS = {} |
|
80 | 80 | |
|
81 | 81 | # Linked module for extensions |
|
82 | 82 | EXTENSIONS = {} |
|
83 | 83 | |
|
84 | 84 | __version__ = ('.'.join((str(each) for each in VERSION[:3]))) |
|
85 | __dbversion__ = 11 | |
|
85 | __dbversion__ = 115 # defines current db version for migrations | |
|
86 | 86 | __license__ = 'AGPLv3, and Commercial License' |
|
87 | 87 | __author__ = 'RhodeCode GmbH' |
|
88 | 88 | __url__ = 'https://code.rhodecode.com' |
|
89 | 89 | |
|
90 | is_test = False | |
|
90 | is_test = os.getenv('RC_TEST', '0') == '1' | |
|
91 | 91 | disable_error_handler = False |
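The `ConfigGet` helper in the hunk above uses a module-level sentinel (`NotGiven = object()`) so its getters can tell "no default supplied" apart from an explicit `None`. A minimal standalone sketch of that sentinel pattern follows; the names and the exact fallback behavior here are illustrative, not RhodeCode's actual API (`ConfigGet._get_val_or_missing` returns the sentinel object itself when no default was given):

```python
_NOT_GIVEN = object()  # unique sentinel; no caller-supplied default can be identical to it


def get_setting(config: dict, key: str, default=_NOT_GIVEN):
    """Return config[key]; fall back to `default` only if one was supplied."""
    if key in config:
        return config[key]
    if default is _NOT_GIVEN:
        # No default given: mimic dict.get() and return None.
        return None
    return default


cfg = {"app.port": "8080", "app.debug": None}
assert get_setting(cfg, "app.port") == "8080"
assert get_setting(cfg, "app.debug") is None        # a stored None is returned as-is
assert get_setting(cfg, "missing", default=5) == 5  # explicit default wins
assert get_setting(cfg, "missing") is None          # no default supplied -> None
```

The identity check `default is _NOT_GIVEN` (rather than `==`) is what makes the sentinel safe: any value a caller could pass, including `None`, compares unequal to the private object.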
@@ -1,573 +1,581 b'' | |||
|
1 | 1 | # Copyright (C) 2011-2023 RhodeCode GmbH |
|
2 | 2 | # |
|
3 | 3 | # This program is free software: you can redistribute it and/or modify |
|
4 | 4 | # it under the terms of the GNU Affero General Public License, version 3 |
|
5 | 5 | # (only), as published by the Free Software Foundation. |
|
6 | 6 | # |
|
7 | 7 | # This program is distributed in the hope that it will be useful, |
|
8 | 8 | # but WITHOUT ANY WARRANTY; without even the implied warranty of |
|
9 | 9 | # MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the |
|
10 | 10 | # GNU General Public License for more details. |
|
11 | 11 | # |
|
12 | 12 | # You should have received a copy of the GNU Affero General Public License |
|
13 | 13 | # along with this program. If not, see <http://www.gnu.org/licenses/>. |
|
14 | 14 | # |
|
15 | 15 | # This program is dual-licensed. If you wish to learn more about the |
|
16 | 16 | # RhodeCode Enterprise Edition, including its added features, Support services, |
|
17 | 17 | # and proprietary license terms, please see https://rhodecode.com/licenses/ |
|
18 | 18 | |
|
19 | 19 | import itertools |
|
20 | 20 | import logging |
|
21 | 21 | import sys |
|
22 | 22 | import fnmatch |
|
23 | 23 | |
|
24 | 24 | import decorator |
|
25 | import typing | |
|
26 | 25 | import venusian |
|
27 | 26 | from collections import OrderedDict |
|
28 | 27 | |
|
29 | 28 | from pyramid.exceptions import ConfigurationError |
|
30 | 29 | from pyramid.renderers import render |
|
31 | 30 | from pyramid.response import Response |
|
32 | 31 | from pyramid.httpexceptions import HTTPNotFound |
|
33 | 32 | |
|
34 | 33 | from rhodecode.api.exc import ( |
|
35 | 34 | JSONRPCBaseError, JSONRPCError, JSONRPCForbidden, JSONRPCValidationError) |
|
36 | 35 | from rhodecode.apps._base import TemplateArgs |
|
37 | 36 | from rhodecode.lib.auth import AuthUser |
|
38 | 37 | from rhodecode.lib.base import get_ip_addr, attach_context_attributes |
|
39 | 38 | from rhodecode.lib.exc_tracking import store_exception |
|
40 | 39 | from rhodecode.lib import ext_json |
|
41 | 40 | from rhodecode.lib.utils2 import safe_str |
|
42 | 41 | from rhodecode.lib.plugins.utils import get_plugin_settings |
|
43 | 42 | from rhodecode.model.db import User, UserApiKeys |
|
44 | 43 | |
|
45 | 44 | log = logging.getLogger(__name__) |
|
46 | 45 | |
|
47 | 46 | DEFAULT_RENDERER = 'jsonrpc_renderer' |
|
48 | DEFAULT_URL = '/_admin/api | |
|
47 | DEFAULT_URL = '/_admin/api' | |
|
48 | SERVICE_API_IDENTIFIER = 'service_' | |
|
49 | 49 | |
|
50 | 50 | |
|
51 | 51 | def find_methods(jsonrpc_methods, pattern): |
|
52 | 52 | matches = OrderedDict() |
|
53 | 53 | if not isinstance(pattern, (list, tuple)): |
|
54 | 54 | pattern = [pattern] |
|
55 | 55 | |
|
56 | 56 | for single_pattern in pattern: |
|
57 | for method_name, method in | |
|
57 | for method_name, method in filter( | |
|
58 | lambda x: not x[0].startswith(SERVICE_API_IDENTIFIER), jsonrpc_methods.items() | |
|
59 | ): | |
|
58 | 60 | if fnmatch.fnmatch(method_name, single_pattern): |
|
59 | 61 | matches[method_name] = method |
|
60 | 62 | return matches |
|
61 | 63 | |
|
62 | 64 | |
|
63 | 65 | class ExtJsonRenderer(object): |
|
64 | 66 | """ |
|
65 | 67 | Custom renderer that makes use of our ext_json lib |
|
66 | 68 | |
|
67 | 69 | """ |
|
68 | 70 | |
|
69 | 71 | def __init__(self): |
|
70 | 72 | self.serializer = ext_json.formatted_json |
|
71 | 73 | |
|
72 | 74 | def __call__(self, info): |
|
73 | 75 | """ Returns a plain JSON-encoded string with content-type |
|
74 | 76 | ``application/json``. The content-type may be overridden by |
|
75 | 77 | setting ``request.response.content_type``.""" |
|
76 | 78 | |
|
77 | 79 | def _render(value, system): |
|
78 | 80 | request = system.get('request') |
|
79 | 81 | if request is not None: |
|
80 | 82 | response = request.response |
|
81 | 83 | ct = response.content_type |
|
82 | 84 | if ct == response.default_content_type: |
|
83 | 85 | response.content_type = 'application/json' |
|
84 | 86 | |
|
85 | 87 | return self.serializer(value) |
|
86 | 88 | |
|
87 | 89 | return _render |
|
88 | 90 | |
|
89 | 91 | |
|
90 | 92 | def jsonrpc_response(request, result): |
|
91 | 93 | rpc_id = getattr(request, 'rpc_id', None) |
|
92 | 94 | |
|
93 | 95 | ret_value = '' |
|
94 | 96 | if rpc_id: |
|
95 | 97 | ret_value = {'id': rpc_id, 'result': result, 'error': None} |
|
96 | 98 | |
|
97 | 99 | # fetch deprecation warnings, and store it inside results |
|
98 | 100 | deprecation = getattr(request, 'rpc_deprecation', None) |
|
99 | 101 | if deprecation: |
|
100 | 102 | ret_value['DEPRECATION_WARNING'] = deprecation |
|
101 | 103 | |
|
102 | 104 | raw_body = render(DEFAULT_RENDERER, ret_value, request=request) |
|
103 | 105 | content_type = 'application/json' |
|
104 | 106 | content_type_header = 'Content-Type' |
|
105 | 107 | headers = { |
|
106 | 108 | content_type_header: content_type |
|
107 | 109 | } |
|
108 | 110 | return Response( |
|
109 | 111 | body=raw_body, |
|
110 | 112 | content_type=content_type, |
|
111 | 113 | headerlist=[(k, v) for k, v in headers.items()] |
|
112 | 114 | ) |
|
113 | 115 | |
|
114 | 116 | |
|
115 | 117 | def jsonrpc_error(request, message, retid=None, code: int | None = None, headers: dict | None = None): |
|
116 | 118 | """ |
|
117 | 119 | Generate a Response object with a JSON-RPC error body |
|
118 | 120 | """ |
|
119 | 121 | headers = headers or {} |
|
120 | 122 | content_type = 'application/json' |
|
121 | 123 | content_type_header = 'Content-Type' |
|
122 | 124 | if content_type_header not in headers: |
|
123 | 125 | headers[content_type_header] = content_type |
|
124 | 126 | |
|
125 | 127 | err_dict = {'id': retid, 'result': None, 'error': message} |
|
126 | 128 | raw_body = render(DEFAULT_RENDERER, err_dict, request=request) |
|
127 | 129 | |
|
128 | 130 | return Response( |
|
129 | 131 | body=raw_body, |
|
130 | 132 | status=code, |
|
131 | 133 | content_type=content_type, |
|
132 | 134 | headerlist=[(k, v) for k, v in headers.items()] |
|
133 | 135 | ) |
|
134 | 136 | |
|
135 | 137 | |
|
136 | 138 | def exception_view(exc, request): |
|
137 | 139 | rpc_id = getattr(request, 'rpc_id', None) |
|
138 | 140 | |
|
139 | 141 | if isinstance(exc, JSONRPCError): |
|
140 | 142 | fault_message = safe_str(exc) |
|
141 | 143 | log.debug('json-rpc error rpc_id:%s "%s"', rpc_id, fault_message) |
|
142 | 144 | elif isinstance(exc, JSONRPCValidationError): |
|
143 | 145 | colander_exc = exc.colander_exception |
|
144 | 146 | # TODO(marcink): think maybe of nicer way to serialize errors ? |
|
145 | 147 | fault_message = colander_exc.asdict() |
|
146 | 148 | log.debug('json-rpc colander error rpc_id:%s "%s"', rpc_id, fault_message) |
|
147 | 149 | elif isinstance(exc, JSONRPCForbidden): |
|
148 | 150 | fault_message = 'Access was denied to this resource.' |
|
149 | 151 | log.warning('json-rpc forbidden call rpc_id:%s "%s"', rpc_id, fault_message) |
|
150 | 152 | elif isinstance(exc, HTTPNotFound): |
|
151 | 153 | method = request.rpc_method |
|
152 | 154 | log.debug('json-rpc method `%s` not found in list of ' |
|
153 | 155 | 'api calls: %s, rpc_id:%s', |
|
154 | 156 | method, list(request.registry.jsonrpc_methods.keys()), rpc_id) |
|
155 | 157 | |
|
156 | 158 | similar = 'none' |
|
157 | 159 | try: |
|
158 | 160 | similar_paterns = [f'*{x}*' for x in method.split('_')] |
|
159 | 161 | similar_found = find_methods( |
|
160 | 162 | request.registry.jsonrpc_methods, similar_paterns) |
|
161 | 163 | similar = ', '.join(similar_found.keys()) or similar |
|
162 | 164 | except Exception: |
|
163 | 165 | # make the whole above block safe |
|
164 | 166 | pass |
|
165 | 167 | |
|
166 | 168 | fault_message = f"No such method: {method}. Similar methods: {similar}" |
|
167 | 169 | else: |
|
168 | 170 | fault_message = 'undefined error' |
|
169 | 171 | exc_info = exc.exc_info() |
|
170 | 172 | store_exception(id(exc_info), exc_info, prefix='rhodecode-api') |
|
171 | 173 | |
|
172 | 174 | statsd = request.registry.statsd |
|
173 | 175 | if statsd: |
|
174 | 176 | exc_type = f"{exc.__class__.__module__}.{exc.__class__.__name__}" |
|
175 | 177 | statsd.incr('rhodecode_exception_total', |
|
176 | 178 | tags=["exc_source:api", f"type:{exc_type}"]) |
|
177 | 179 | |
|
178 | 180 | return jsonrpc_error(request, fault_message, rpc_id) |
|
179 | 181 | |
|
180 | 182 | |
|
181 | 183 | def request_view(request): |
|
182 | 184 | """ |
|
183 | 185 | Main request handling method. It handles all logic to call a specific |
|
184 | 186 | exposed method |
|
185 | 187 | """ |
|
186 | 188 | # cython compatible inspect |
|
187 | 189 | from rhodecode.config.patches import inspect_getargspec |
|
188 | 190 | inspect = inspect_getargspec() |
|
189 | 191 | |
|
190 | 192 | # check if we can find this session using api_key, get_by_auth_token |
|
191 | 193 | # search not expired tokens only |
|
192 | 194 | try: |
|
193 | api_user = User.get_by_auth_token(request.rpc_api_key) | |
|
195 | if not request.rpc_method.startswith(SERVICE_API_IDENTIFIER): | |
|
196 | api_user = User.get_by_auth_token(request.rpc_api_key) | |
|
194 | 197 | |
|
195 | if api_user is None: | |
|
196 | return jsonrpc_error( | |
|
197 | request, retid=request.rpc_id, message='Invalid API KEY') | |
|
198 | if api_user is None: | |
|
199 | return jsonrpc_error( | |
|
200 | request, retid=request.rpc_id, message='Invalid API KEY') | |
|
198 | 201 | |
|
199 | if not api_user.active: | |
|
200 | return jsonrpc_error( | |
|
201 | request, retid=request.rpc_id, | |
|
202 | message='Request from this user not allowed') | |
|
202 | if not api_user.active: | |
|
203 | return jsonrpc_error( | |
|
204 | request, retid=request.rpc_id, | |
|
205 | message='Request from this user not allowed') | |
|
203 | 206 | |
|
204 | # check if we are allowed to use this IP | |
|
205 | auth_u = AuthUser( | |
|
206 | api_user.user_id, request.rpc_api_key, ip_addr=request.rpc_ip_addr) | |
|
207 | if not auth_u.ip_allowed: | |
|
208 | return jsonrpc_error( | |
|
209 | request, retid=request.rpc_id, | |
|
210 | message='Request from IP:{} not allowed'.format( | |
|
211 | request.rpc_ip_addr)) | |
|
212 | else: | |
|
213 | log.info('Access for IP:%s allowed', request.rpc_ip_addr) | |
|
207 | # check if we are allowed to use this IP | |
|
208 | auth_u = AuthUser( | |
|
209 | api_user.user_id, request.rpc_api_key, ip_addr=request.rpc_ip_addr) | |
|
210 | if not auth_u.ip_allowed: | |
|
211 | return jsonrpc_error( | |
|
212 | request, retid=request.rpc_id, | |
|
213 | message='Request from IP:{} not allowed'.format( | |
|
214 | request.rpc_ip_addr)) | |
|
215 | else: | |
|
216 | log.info('Access for IP:%s allowed', request.rpc_ip_addr) | |
|
217 | ||
|
218 | # register our auth-user | |
|
219 | request.rpc_user = auth_u | |
|
220 | request.environ['rc_auth_user_id'] = str(auth_u.user_id) | |
|
214 | 221 | |
|
215 | # register our auth-user | |
|
216 | request.rpc_user = auth_u | |
|
217 | request.environ['rc_auth_user_id'] = str(auth_u.user_id) | |
|
222 | # now check if token is valid for API | |
|
223 | auth_token = request.rpc_api_key | |
|
224 | token_match = api_user.authenticate_by_token( | |
|
225 | auth_token, roles=[UserApiKeys.ROLE_API]) | |
|
226 | invalid_token = not token_match | |
|
218 | 227 | |
|
219 | # now check if token is valid for API | |
|
220 | auth_token = request.rpc_api_key | |
|
221 | token_match = api_user.authenticate_by_token( | |
|
222 | auth_token, roles=[UserApiKeys.ROLE_API]) | |
|
223 | invalid_token = not token_match | |
|
224 | ||
|
225 | log.debug('Checking if API KEY is valid with proper role') | |
|
226 | if invalid_token: | |
|
227 | return jsonrpc_error( | |
|
228 | request, retid=request.rpc_id, | |
|
229 | message='API KEY invalid or, has bad role for an API call') | |
|
228 | log.debug('Checking if API KEY is valid with proper role') | |
|
229 | if invalid_token: | |
|
230 | return jsonrpc_error( | |
|
231 | request, retid=request.rpc_id, | |
|
232 | message='API KEY invalid or, has bad role for an API call') | |
|
233 | else: | |
|
234 | auth_u = 'service' | |
|
235 | if request.rpc_api_key != request.registry.settings['app.service_api.token']: | |
|
236 | raise Exception("Provided service secret is not recognized!") | |
|
230 | 237 | |
|
231 | 238 | except Exception: |
|
232 | 239 | log.exception('Error on API AUTH') |
|
233 | 240 | return jsonrpc_error( |
|
234 | 241 | request, retid=request.rpc_id, message='Invalid API KEY') |
|
235 | 242 | |
|
236 | 243 | method = request.rpc_method |
|
237 | 244 | func = request.registry.jsonrpc_methods[method] |
|
238 | 245 | |
|
239 | 246 | # now that we have a method, add request._req_params to |
|
240 | 247 | # self.kargs and dispatch control to WGIController |
|
241 | 248 | |
|
242 | 249 | argspec = inspect.getargspec(func) |
|
243 | 250 | arglist = argspec[0] |
|
244 | 251 | defs = argspec[3] or [] |
|
245 | 252 | defaults = [type(a) for a in defs] |
|
246 | 253 | default_empty = type(NotImplemented) |
|
247 | 254 | |
|
248 | 255 | # kw arguments required by this method |
|
249 | 256 | func_kwargs = dict(itertools.zip_longest( |
|
250 | 257 | reversed(arglist), reversed(defaults), fillvalue=default_empty)) |
|
251 | 258 | |
|
252 | 259 | # This attribute will need to be first param of a method that uses |
|
253 | 260 | # api_key, which is translated to instance of user at that name |
|
254 | 261 | user_var = 'apiuser' |
|
255 | 262 | request_var = 'request' |
|
256 | 263 | |
|
257 | 264 | for arg in [user_var, request_var]: |
|
258 | 265 | if arg not in arglist: |
|
259 | 266 | return jsonrpc_error( |
|
260 | 267 | request, |
|
261 | 268 | retid=request.rpc_id, |
|
262 | 269 | message='This method [%s] does not support ' |
|
263 | 270 | 'required parameter `%s`' % (func.__name__, arg)) |
|
264 | 271 | |
|
265 | 272 | # get our arglist and check if we provided them as args |
|
266 | 273 | for arg, default in func_kwargs.items(): |
|
267 | 274 | if arg in [user_var, request_var]: |
|
268 | 275 | # user_var and request_var are pre-hardcoded parameters and we |
|
269 | 276 | # don't need to do any translation |
|
270 | 277 | continue |
|
271 | 278 | |
|
272 | 279 | # skip the required param check if it's default value is |
|
273 | 280 | # NotImplementedType (default_empty) |
|
274 | 281 | if default == default_empty and arg not in request.rpc_params: |
|
275 | 282 | return jsonrpc_error( |
|
276 | 283 | request, |
|
277 | 284 | retid=request.rpc_id, |
|
278 | 285 | message=('Missing non optional `%s` arg in JSON DATA' % arg) |
|
279 | 286 | ) |
|
280 | 287 | |
|
281 | 288 | # sanitize extra passed arguments |
|
282 | 289 | for k in list(request.rpc_params.keys()): |
|
283 | 290 | if k not in func_kwargs: |
|
284 | 291 | del request.rpc_params[k] |
|
285 | 292 | |
|
286 | 293 | call_params = request.rpc_params |
|
287 | 294 | call_params.update({ |
|
288 | 295 | 'request': request, |
|
289 | 296 | 'apiuser': auth_u |
|
290 | 297 | }) |
|
291 | 298 | |
|
292 | 299 | # register some common functions for usage |
|
293 | attach_context_attributes(TemplateArgs(), request, request.rpc_user.user_id) | |
|
300 | rpc_user = request.rpc_user.user_id if hasattr(request, 'rpc_user') else None | |
|
301 | attach_context_attributes(TemplateArgs(), request, rpc_user) | |
|
294 | 302 | |
|
295 | 303 | statsd = request.registry.statsd |
|
296 | 304 | |
|
297 | 305 | try: |
|
298 | 306 | ret_value = func(**call_params) |
|
299 | 307 | resp = jsonrpc_response(request, ret_value) |
|
300 | 308 | if statsd: |
|
301 | 309 | statsd.incr('rhodecode_api_call_success_total') |
|
302 | 310 | return resp |
|
303 | 311 | except JSONRPCBaseError: |
|
304 | 312 | raise |
|
305 | 313 | except Exception: |
|
306 | 314 | log.exception('Unhandled exception occurred on api call: %s', func) |
|
307 | 315 | exc_info = sys.exc_info() |
|
308 | 316 | exc_id, exc_type_name = store_exception( |
|
309 | 317 | id(exc_info), exc_info, prefix='rhodecode-api') |
|
310 | 318 | error_headers = { |
|
311 | 319 | 'RhodeCode-Exception-Id': str(exc_id), |
|
312 | 320 | 'RhodeCode-Exception-Type': str(exc_type_name) |
|
313 | 321 | } |
|
314 | 322 | err_resp = jsonrpc_error( |
|
315 | 323 | request, retid=request.rpc_id, message='Internal server error', |
|
316 | 324 | headers=error_headers) |
|
317 | 325 | if statsd: |
|
318 | 326 | statsd.incr('rhodecode_api_call_fail_total') |
|
319 | 327 | return err_resp |
|
320 | 328 | |
|
321 | 329 | |
|
322 | 330 | def setup_request(request): |
|
323 | 331 | """ |
|
324 | 332 | Parse a JSON-RPC request body. It's used inside the predicates method |
|
325 | 333 | to validate and bootstrap requests for usage in rpc calls. |
|
326 | 334 | |
|
327 | 335 | We need to raise JSONRPCError here if we want to return some errors back to |
|
328 | 336 | user. |
|
329 | 337 | """ |
|
330 | 338 | |
|
331 | 339 | log.debug('Executing setup request: %r', request) |
|
332 | 340 | request.rpc_ip_addr = get_ip_addr(request.environ) |
|
333 | 341 | # TODO(marcink): deprecate GET at some point |
|
334 | 342 | if request.method not in ['POST', 'GET']: |
|
335 | 343 | log.debug('unsupported request method "%s"', request.method) |
|
336 | 344 | raise JSONRPCError( |
|
337 | 345 | 'unsupported request method "%s". Please use POST' % request.method) |
|
338 | 346 | |
|
339 | 347 | if 'CONTENT_LENGTH' not in request.environ: |
|
340 | 348 | log.debug("No Content-Length") |
|
341 | 349 | raise JSONRPCError("Empty body, No Content-Length in request") |
|
342 | 350 | |
|
343 | 351 | else: |
|
344 | 352 | length = request.environ['CONTENT_LENGTH'] |
|
345 | 353 | log.debug('Content-Length: %s', length) |
|
346 | 354 | |
|
347 | 355 | if length == 0: |
|
348 | 356 | log.debug("Content-Length is 0") |
|
349 | 357 | raise JSONRPCError("Content-Length is 0") |
|
350 | 358 | |
|
351 | 359 | raw_body = request.body |
|
352 | 360 | log.debug("Loading JSON body now") |
|
353 | 361 | try: |
|
354 | 362 | json_body = ext_json.json.loads(raw_body) |
|
355 | 363 | except ValueError as e: |
|
356 | 364 | # catch JSON errors here |
|
357 | 365 | raise JSONRPCError(f"JSON parse error ERR:{e} RAW:{raw_body!r}") |
|
358 | 366 | |
|
359 | 367 | request.rpc_id = json_body.get('id') |
|
360 | 368 | request.rpc_method = json_body.get('method') |
|
361 | 369 | |
|
362 | 370 | # check required base parameters |
|
363 | 371 | try: |
|
364 | 372 | api_key = json_body.get('api_key') |
|
365 | 373 | if not api_key: |
|
366 | 374 | api_key = json_body.get('auth_token') |
|
367 | 375 | |
|
368 | 376 | if not api_key: |
|
369 | 377 | raise KeyError('api_key or auth_token') |
|
370 | 378 | |
|
371 | 379 | # TODO(marcink): support passing in token in request header |
|
372 | 380 | |
|
373 | 381 | request.rpc_api_key = api_key |
|
374 | 382 | request.rpc_id = json_body['id'] |
|
375 | 383 | request.rpc_method = json_body['method'] |
|
376 | 384 | request.rpc_params = json_body['args'] \ |
|
377 | 385 | if isinstance(json_body['args'], dict) else {} |
|
378 | 386 | |
|
379 | 387 | log.debug('method: %s, params: %.10240r', request.rpc_method, request.rpc_params) |
|
380 | 388 | except KeyError as e: |
|
381 | 389 | raise JSONRPCError(f'Incorrect JSON data. Missing {e}') |
|
382 | 390 | |
|
383 | 391 | log.debug('setup complete, now handling method:%s rpcid:%s', |
|
384 | 392 | request.rpc_method, request.rpc_id, ) |
|
385 | 393 | |
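For reference, the request shape that `setup_request` above accepts can be sketched standalone. This is a hypothetical illustration (the token value, method name, and argument names are made up, not taken from the diff): `id`, `method`, and `args` are required keys, and either `api_key` or `auth_token` must carry the token.

```python
import json

# Hypothetical sketch of a body setup_request() would accept.
body = {
    "id": 1,
    "auth_token": "secret-token",   # 'api_key' is accepted as an alias
    "method": "get_repo",           # illustrative method name
    "args": {"repoid": "my-repo"},  # non-dict 'args' is coerced to {}
}
raw_body = json.dumps(body).encode("utf-8")

# setup_request() json-decodes the raw body and pulls out these fields:
decoded = json.loads(raw_body)
assert decoded["method"] == "get_repo"
assert decoded.get("api_key") or decoded.get("auth_token")
```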
|
386 | 394 | |
|
387 | 395 | class RoutePredicate(object): |
|
388 | 396 | def __init__(self, val, config): |
|
389 | 397 | self.val = val |
|
390 | 398 | |
|
391 | 399 | def text(self): |
|
392 | 400 | return f'jsonrpc route = {self.val}' |
|
393 | 401 | |
|
394 | 402 | phash = text |
|
395 | 403 | |
|
396 | 404 | def __call__(self, info, request): |
|
397 | 405 | if self.val: |
|
398 | 406 | # potentially setup and bootstrap our call |
|
399 | 407 | setup_request(request) |
|
400 | 408 | |
|
401 | 409 | # Always return True so that even if it isn't a valid RPC it |
|
402 | 410 | # will fall through to the underlying handlers like notfound_view |
|
403 | 411 | return True |
|
404 | 412 | |
|
405 | 413 | |
|
406 | 414 | class NotFoundPredicate(object): |
|
407 | 415 | def __init__(self, val, config): |
|
408 | 416 | self.val = val |
|
409 | 417 | self.methods = config.registry.jsonrpc_methods |
|
410 | 418 | |
|
411 | 419 | def text(self): |
|
412 | 420 | return f'jsonrpc method not found = {self.val}' |
|
413 | 421 | |
|
414 | 422 | phash = text |
|
415 | 423 | |
|
416 | 424 | def __call__(self, info, request): |
|
417 | 425 | return hasattr(request, 'rpc_method') |
|
418 | 426 | |
|
419 | 427 | |
|
420 | 428 | class MethodPredicate(object): |
|
421 | 429 | def __init__(self, val, config): |
|
422 | 430 | self.method = val |
|
423 | 431 | |
|
424 | 432 | def text(self): |
|
425 | 433 | return f'jsonrpc method = {self.method}' |
|
426 | 434 | |
|
427 | 435 | phash = text |
|
428 | 436 | |
|
429 | 437 | def __call__(self, context, request): |
|
430 | 438 | # we need to explicitly return False here, so pyramid doesn't try to |
|
431 | 439 | # execute our view directly. We need our main handler to execute things |
|
432 | 440 | return getattr(request, 'rpc_method') == self.method |
|
433 | 441 | |
|
434 | 442 | |
|
435 | 443 | def add_jsonrpc_method(config, view, **kwargs): |
|
436 | 444 | # pop the method name |
|
437 | 445 | method = kwargs.pop('method', None) |
|
438 | 446 | |
|
439 | 447 | if method is None: |
|
440 | 448 | raise ConfigurationError( |
|
441 | 449 | 'Cannot register a JSON-RPC method without specifying the "method"') |
|
442 | 450 | |
|
443 | 451 | # we define a custom predicate to enable detecting conflicting methods; |

444 | 452 | # these predicates are a kind of "translation" from the decorator variables |

445 | 453 | # to internal predicate names |
|
446 | 454 | |
|
447 | 455 | kwargs['jsonrpc_method'] = method |
|
448 | 456 | |
|
449 | 457 | # register our view into global view store for validation |
|
450 | 458 | config.registry.jsonrpc_methods[method] = view |
|
451 | 459 | |
|
452 | 460 | # we're using our main request_view handler here, so each method |
|
453 | 461 | # has a unified handler for itself |
|
454 | 462 | config.add_view(request_view, route_name='apiv2', **kwargs) |
|
455 | 463 | |
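The dispatch done by `MethodPredicate` can be illustrated with a small standalone sketch (`FakeRequest` and `method_predicate` are hypothetical names for illustration, not RhodeCode APIs): a registered view only matches when the parsed `request.rpc_method` equals the method name it was registered under.

```python
# Standalone illustration of the MethodPredicate matching logic:
# compare the parsed rpc_method against the registered method name.
class FakeRequest:
    rpc_method = "get_repo"

def method_predicate(method):
    # mirrors MethodPredicate.__call__(context, request)
    return lambda context, request: getattr(request, "rpc_method", None) == method

assert method_predicate("get_repo")(None, FakeRequest()) is True
assert method_predicate("create_repo")(None, FakeRequest()) is False
```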
|
456 | 464 | |
|
457 | 465 | class jsonrpc_method(object): |
|
458 | 466 | """ |
|
459 | 467 | decorator that works similarly to the @add_view_config decorator, |

460 | 468 | but is tailored for our JSON-RPC |
|
461 | 469 | """ |
|
462 | 470 | |
|
463 | 471 | venusian = venusian # for testing injection |
|
464 | 472 | |
|
465 | 473 | def __init__(self, method=None, **kwargs): |
|
466 | 474 | self.method = method |
|
467 | 475 | self.kwargs = kwargs |
|
468 | 476 | |
|
469 | 477 | def __call__(self, wrapped): |
|
470 | 478 | kwargs = self.kwargs.copy() |
|
471 | 479 | kwargs['method'] = self.method or wrapped.__name__ |
|
472 | 480 | depth = kwargs.pop('_depth', 0) |
|
473 | 481 | |
|
474 | 482 | def callback(context, name, ob): |
|
475 | 483 | config = context.config.with_package(info.module) |
|
476 | 484 | config.add_jsonrpc_method(view=ob, **kwargs) |
|
477 | 485 | |
|
478 | 486 | info = venusian.attach(wrapped, callback, category='pyramid', |
|
479 | 487 | depth=depth + 1) |
|
480 | 488 | if info.scope == 'class': |
|
481 | 489 | # ensure that attr is set if decorating a class method |
|
482 | 490 | kwargs.setdefault('attr', wrapped.__name__) |
|
483 | 491 | |
|
484 | 492 | kwargs['_info'] = info.codeinfo # fbo action_method |
|
485 | 493 | return wrapped |
|
486 | 494 | |
|
487 | 495 | |
|
488 | 496 | class jsonrpc_deprecated_method(object): |
|
489 | 497 | """ |
|
490 | 498 | Marks a method as deprecated, adds a log.warning, and injects a special |

491 | 499 | key into the request variable to mark the method as deprecated. |

492 | 500 | Also injects a special docstring that extract_docs will catch to mark |

493 | 501 | the method as deprecated. |
|
494 | 502 | |
|
495 | 503 | :param use_method: specify which method should be used instead of |
|
496 | 504 | the decorated one |
|
497 | 505 | |
|
498 | 506 | Use like:: |
|
499 | 507 | |
|
500 | 508 | @jsonrpc_method() |
|
501 | 509 | @jsonrpc_deprecated_method(use_method='new_func', deprecated_at_version='3.0.0') |
|
502 | 510 | def old_func(request, apiuser, arg1, arg2): |
|
503 | 511 | ... |
|
504 | 512 | """ |
|
505 | 513 | |
|
506 | 514 | def __init__(self, use_method, deprecated_at_version): |
|
507 | 515 | self.use_method = use_method |
|
508 | 516 | self.deprecated_at_version = deprecated_at_version |
|
509 | 517 | self.deprecated_msg = '' |
|
510 | 518 | |
|
511 | 519 | def __call__(self, func): |
|
512 | 520 | self.deprecated_msg = 'Please use method `{method}` instead.'.format( |
|
513 | 521 | method=self.use_method) |
|
514 | 522 | |
|
515 | 523 | docstring = """\n |
|
516 | 524 | .. deprecated:: {version} |
|
517 | 525 | |
|
518 | 526 | {deprecation_message} |
|
519 | 527 | |
|
520 | 528 | {original_docstring} |
|
521 | 529 | """ |
|
522 | 530 | func.__doc__ = docstring.format( |
|
523 | 531 | version=self.deprecated_at_version, |
|
524 | 532 | deprecation_message=self.deprecated_msg, |
|
525 | 533 | original_docstring=func.__doc__) |
|
526 | 534 | return decorator.decorator(self.__wrapper, func) |
|
527 | 535 | |
|
528 | 536 | def __wrapper(self, func, *fargs, **fkwargs): |
|
529 | 537 | log.warning('DEPRECATED API CALL on function %s, please ' |
|
530 | 538 | 'use `%s` instead', func, self.use_method) |
|
531 | 539 | # alter function docstring to mark as deprecated, this is picked up |
|
532 | 540 | # via fabric file that generates API DOC. |
|
533 | 541 | result = func(*fargs, **fkwargs) |
|
534 | 542 | |
|
535 | 543 | request = fargs[0] |
|
536 | 544 | request.rpc_deprecation = 'DEPRECATED METHOD ' + self.deprecated_msg |
|
537 | 545 | return result |
|
538 | 546 | |
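The docstring rewriting performed in `__call__` above can be sketched standalone. `mark_deprecated` is a made-up helper mirroring the `.. deprecated::` template used by `jsonrpc_deprecated_method`; it is not a RhodeCode API.

```python
# Hypothetical helper mirroring the deprecation-docstring template:
# prepend a Sphinx '.. deprecated::' block to the wrapped function's docs.
def mark_deprecated(func, use_method, deprecated_at_version):
    msg = f'Please use method `{use_method}` instead.'
    func.__doc__ = (
        f"\n    .. deprecated:: {deprecated_at_version}\n\n"
        f"       {msg}\n\n"
        f"    {func.__doc__ or ''}\n"
    )
    return func

def old_func(request, apiuser, arg1, arg2):
    """Original documentation."""

mark_deprecated(old_func, 'new_func', '3.0.0')
assert '.. deprecated:: 3.0.0' in old_func.__doc__
assert 'Please use method `new_func` instead.' in old_func.__doc__
```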
|
539 | 547 | |
|
540 | 548 | def add_api_methods(config): |
|
541 | 549 | from rhodecode.api.views import ( |
|
542 | 550 | deprecated_api, gist_api, pull_request_api, repo_api, repo_group_api, |
|
543 | 551 | server_api, search_api, testing_api, user_api, user_group_api) |
|
544 | 552 | |
|
545 | 553 | config.scan('rhodecode.api.views') |
|
546 | 554 | |
|
547 | 555 | |
|
548 | 556 | def includeme(config): |
|
549 | 557 | plugin_module = 'rhodecode.api' |
|
550 | 558 | plugin_settings = get_plugin_settings( |
|
551 | 559 | plugin_module, config.registry.settings) |
|
552 | 560 | |
|
553 | 561 | if not hasattr(config.registry, 'jsonrpc_methods'): |
|
554 | 562 | config.registry.jsonrpc_methods = OrderedDict() |
|
555 | 563 | |
|
556 | 564 | # match filter by given method only |
|
557 | 565 | config.add_view_predicate('jsonrpc_method', MethodPredicate) |
|
558 | 566 | config.add_view_predicate('jsonrpc_method_not_found', NotFoundPredicate) |
|
559 | 567 | |
|
560 | 568 | config.add_renderer(DEFAULT_RENDERER, ExtJsonRenderer()) |
|
561 | 569 | config.add_directive('add_jsonrpc_method', add_jsonrpc_method) |
|
562 | 570 | |
|
563 | 571 | config.add_route_predicate( |
|
564 | 572 | 'jsonrpc_call', RoutePredicate) |
|
565 | 573 | |
|
566 | 574 | config.add_route( |
|
567 | 575 | 'apiv2', plugin_settings.get('url', DEFAULT_URL), jsonrpc_call=True) |
|
568 | 576 | |
|
569 | 577 | # register some exception handling view |
|
570 | 578 | config.add_view(exception_view, context=JSONRPCBaseError) |
|
571 | 579 | config.add_notfound_view(exception_view, jsonrpc_method_not_found=True) |
|
572 | 580 | |
|
573 | 581 | add_api_methods(config) |
@@ -1,348 +1,348 b'' | |||
|
1 | 1 | |
|
2 | 2 | # Copyright (C) 2010-2023 RhodeCode GmbH |
|
3 | 3 | # |
|
4 | 4 | # This program is free software: you can redistribute it and/or modify |
|
5 | 5 | # it under the terms of the GNU Affero General Public License, version 3 |
|
6 | 6 | # (only), as published by the Free Software Foundation. |
|
7 | 7 | # |
|
8 | 8 | # This program is distributed in the hope that it will be useful, |
|
9 | 9 | # but WITHOUT ANY WARRANTY; without even the implied warranty of |
|
10 | 10 | # MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the |
|
11 | 11 | # GNU General Public License for more details. |
|
12 | 12 | # |
|
13 | 13 | # You should have received a copy of the GNU Affero General Public License |
|
14 | 14 | # along with this program. If not, see <http://www.gnu.org/licenses/>. |
|
15 | 15 | # |
|
16 | 16 | # This program is dual-licensed. If you wish to learn more about the |
|
17 | 17 | # RhodeCode Enterprise Edition, including its added features, Support services, |
|
18 | 18 | # and proprietary license terms, please see https://rhodecode.com/licenses/ |
|
19 | 19 | |
|
20 | 20 | import mock |
|
21 | 21 | import pytest |
|
22 | 22 | |
|
23 | 23 | from rhodecode.lib.vcs import settings |
|
24 | 24 | from rhodecode.model.meta import Session |
|
25 | 25 | from rhodecode.model.repo import RepoModel |
|
26 | 26 | from rhodecode.model.user import UserModel |
|
27 | 27 | from rhodecode.tests import TEST_USER_ADMIN_LOGIN |
|
28 | 28 | from rhodecode.api.tests.utils import ( |
|
29 | 29 | build_data, api_call, assert_ok, assert_error, crash) |
|
30 | 30 | from rhodecode.tests.fixture import Fixture |
|
31 | 31 | from rhodecode.lib.ext_json import json |
|
32 | 32 | from rhodecode.lib.str_utils import safe_str |
|
33 | 33 | |
|
34 | 34 | |
|
35 | 35 | fixture = Fixture() |
|
36 | 36 | |
|
37 | 37 | |
|
38 | 38 | @pytest.mark.usefixtures("testuser_api", "app") |
|
39 | 39 | class TestCreateRepo(object): |
|
40 | 40 | |
|
41 | 41 | @pytest.mark.parametrize('given, expected_name, expected_exc', [ |
|
42 | 42 | ('api repo-1', 'api-repo-1', False), |
|
43 | 43 | ('api-repo 1-ąć', 'api-repo-1-ąć', False), |

44 | (u'unicode-ąć', u'unicode-ąć', False), |

44 | ('unicode-ąć', u'unicode-ąć', False), |
|
45 | 45 | ('some repo v1.2', 'some-repo-v1.2', False), |
|
46 | 46 | ('v2.0', 'v2.0', False), |
|
47 | 47 | ]) |
|
48 | 48 | def test_api_create_repo(self, backend, given, expected_name, expected_exc): |
|
49 | 49 | |
|
50 | 50 | id_, params = build_data( |
|
51 | 51 | self.apikey, |
|
52 | 52 | 'create_repo', |
|
53 | 53 | repo_name=given, |
|
54 | 54 | owner=TEST_USER_ADMIN_LOGIN, |
|
55 | 55 | repo_type=backend.alias, |
|
56 | 56 | ) |
|
57 | 57 | response = api_call(self.app, params) |
|
58 | 58 | |
|
59 | 59 | ret = { |
|
60 | 60 | 'msg': 'Created new repository `%s`' % (expected_name,), |
|
61 | 61 | 'success': True, |
|
62 | 62 | 'task': None, |
|
63 | 63 | } |
|
64 | 64 | expected = ret |
|
65 | 65 | assert_ok(id_, expected, given=response.body) |
|
66 | 66 | |
|
67 | 67 | repo = RepoModel().get_by_repo_name(safe_str(expected_name)) |
|
68 | 68 | assert repo is not None |
|
69 | 69 | |
|
70 | 70 | id_, params = build_data(self.apikey, 'get_repo', repoid=expected_name) |
|
71 | 71 | response = api_call(self.app, params) |
|
72 | 72 | body = json.loads(response.body) |
|
73 | 73 | |
|
74 | 74 | assert body['result']['enable_downloads'] is False |
|
75 | 75 | assert body['result']['enable_locking'] is False |
|
76 | 76 | assert body['result']['enable_statistics'] is False |
|
77 | 77 | |
|
78 | 78 | fixture.destroy_repo(safe_str(expected_name)) |
|
79 | 79 | |
|
80 | 80 | def test_api_create_restricted_repo_type(self, backend): |
|
81 | 81 | repo_name = 'api-repo-type-{0}'.format(backend.alias) |
|
82 | 82 | id_, params = build_data( |
|
83 | 83 | self.apikey, |
|
84 | 84 | 'create_repo', |
|
85 | 85 | repo_name=repo_name, |
|
86 | 86 | owner=TEST_USER_ADMIN_LOGIN, |
|
87 | 87 | repo_type=backend.alias, |
|
88 | 88 | ) |
|
89 | 89 | git_backend = settings.BACKENDS['git'] |
|
90 | 90 | with mock.patch( |
|
91 | 91 | 'rhodecode.lib.vcs.settings.BACKENDS', {'git': git_backend}): |
|
92 | 92 | response = api_call(self.app, params) |
|
93 | 93 | |
|
94 | 94 | repo = RepoModel().get_by_repo_name(repo_name) |
|
95 | 95 | |
|
96 | 96 | if backend.alias == 'git': |
|
97 | 97 | assert repo is not None |
|
98 | 98 | expected = { |
|
99 | 99 | 'msg': 'Created new repository `{0}`'.format(repo_name,), |
|
100 | 100 | 'success': True, |
|
101 | 101 | 'task': None, |
|
102 | 102 | } |
|
103 | 103 | assert_ok(id_, expected, given=response.body) |
|
104 | 104 | else: |
|
105 | 105 | assert repo is None |
|
106 | 106 | |
|
107 | 107 | fixture.destroy_repo(repo_name) |
|
108 | 108 | |
|
109 | 109 | def test_api_create_repo_with_booleans(self, backend): |
|
110 | 110 | repo_name = 'api-repo-2' |
|
111 | 111 | id_, params = build_data( |
|
112 | 112 | self.apikey, |
|
113 | 113 | 'create_repo', |
|
114 | 114 | repo_name=repo_name, |
|
115 | 115 | owner=TEST_USER_ADMIN_LOGIN, |
|
116 | 116 | repo_type=backend.alias, |
|
117 | 117 | enable_statistics=True, |
|
118 | 118 | enable_locking=True, |
|
119 | 119 | enable_downloads=True |
|
120 | 120 | ) |
|
121 | 121 | response = api_call(self.app, params) |
|
122 | 122 | |
|
123 | 123 | repo = RepoModel().get_by_repo_name(repo_name) |
|
124 | 124 | |
|
125 | 125 | assert repo is not None |
|
126 | 126 | ret = { |
|
127 | 127 | 'msg': 'Created new repository `%s`' % (repo_name,), |
|
128 | 128 | 'success': True, |
|
129 | 129 | 'task': None, |
|
130 | 130 | } |
|
131 | 131 | expected = ret |
|
132 | 132 | assert_ok(id_, expected, given=response.body) |
|
133 | 133 | |
|
134 | 134 | id_, params = build_data(self.apikey, 'get_repo', repoid=repo_name) |
|
135 | 135 | response = api_call(self.app, params) |
|
136 | 136 | body = json.loads(response.body) |
|
137 | 137 | |
|
138 | 138 | assert body['result']['enable_downloads'] is True |
|
139 | 139 | assert body['result']['enable_locking'] is True |
|
140 | 140 | assert body['result']['enable_statistics'] is True |
|
141 | 141 | |
|
142 | 142 | fixture.destroy_repo(repo_name) |
|
143 | 143 | |
|
144 | 144 | def test_api_create_repo_in_group(self, backend): |
|
145 | 145 | repo_group_name = 'my_gr' |
|
146 | 146 | # create the parent |
|
147 | 147 | fixture.create_repo_group(repo_group_name) |
|
148 | 148 | |
|
149 | 149 | repo_name = '%s/api-repo-gr' % (repo_group_name,) |
|
150 | 150 | id_, params = build_data( |
|
151 | 151 | self.apikey, 'create_repo', |
|
152 | 152 | repo_name=repo_name, |
|
153 | 153 | owner=TEST_USER_ADMIN_LOGIN, |
|
154 | 154 | repo_type=backend.alias,) |
|
155 | 155 | response = api_call(self.app, params) |
|
156 | 156 | repo = RepoModel().get_by_repo_name(repo_name) |
|
157 | 157 | assert repo is not None |
|
158 | 158 | assert repo.group is not None |
|
159 | 159 | |
|
160 | 160 | ret = { |
|
161 | 161 | 'msg': 'Created new repository `%s`' % (repo_name,), |
|
162 | 162 | 'success': True, |
|
163 | 163 | 'task': None, |
|
164 | 164 | } |
|
165 | 165 | expected = ret |
|
166 | 166 | assert_ok(id_, expected, given=response.body) |
|
167 | 167 | fixture.destroy_repo(repo_name) |
|
168 | 168 | fixture.destroy_repo_group(repo_group_name) |
|
169 | 169 | |
|
170 | 170 | def test_create_repo_in_group_that_doesnt_exist(self, backend, user_util): |
|
171 | 171 | repo_group_name = 'fake_group' |
|
172 | 172 | |
|
173 | 173 | repo_name = '%s/api-repo-gr' % (repo_group_name,) |
|
174 | 174 | id_, params = build_data( |
|
175 | 175 | self.apikey, 'create_repo', |
|
176 | 176 | repo_name=repo_name, |
|
177 | 177 | owner=TEST_USER_ADMIN_LOGIN, |
|
178 | 178 | repo_type=backend.alias,) |
|
179 | 179 | response = api_call(self.app, params) |
|
180 | 180 | |
|
181 | 181 | expected = {'repo_group': 'Repository group `{}` does not exist'.format( |
|
182 | 182 | repo_group_name)} |
|
183 | 183 | assert_error(id_, expected, given=response.body) |
|
184 | 184 | |
|
185 | 185 | def test_api_create_repo_unknown_owner(self, backend): |
|
186 | 186 | repo_name = 'api-repo-2' |
|
187 | 187 | owner = 'i-dont-exist' |
|
188 | 188 | id_, params = build_data( |
|
189 | 189 | self.apikey, 'create_repo', |
|
190 | 190 | repo_name=repo_name, |
|
191 | 191 | owner=owner, |
|
192 | 192 | repo_type=backend.alias) |
|
193 | 193 | response = api_call(self.app, params) |
|
194 | 194 | expected = 'user `%s` does not exist' % (owner,) |
|
195 | 195 | assert_error(id_, expected, given=response.body) |
|
196 | 196 | |
|
197 | 197 | def test_api_create_repo_dont_specify_owner(self, backend): |
|
198 | 198 | repo_name = 'api-repo-3' |
|
199 | 199 | id_, params = build_data( |
|
200 | 200 | self.apikey, 'create_repo', |
|
201 | 201 | repo_name=repo_name, |
|
202 | 202 | repo_type=backend.alias) |
|
203 | 203 | response = api_call(self.app, params) |
|
204 | 204 | |
|
205 | 205 | repo = RepoModel().get_by_repo_name(repo_name) |
|
206 | 206 | assert repo is not None |
|
207 | 207 | ret = { |
|
208 | 208 | 'msg': 'Created new repository `%s`' % (repo_name,), |
|
209 | 209 | 'success': True, |
|
210 | 210 | 'task': None, |
|
211 | 211 | } |
|
212 | 212 | expected = ret |
|
213 | 213 | assert_ok(id_, expected, given=response.body) |
|
214 | 214 | fixture.destroy_repo(repo_name) |
|
215 | 215 | |
|
216 | 216 | def test_api_create_repo_by_non_admin(self, backend): |
|
217 | 217 | repo_name = 'api-repo-4' |
|
218 | 218 | id_, params = build_data( |
|
219 | 219 | self.apikey_regular, 'create_repo', |
|
220 | 220 | repo_name=repo_name, |
|
221 | 221 | repo_type=backend.alias) |
|
222 | 222 | response = api_call(self.app, params) |
|
223 | 223 | |
|
224 | 224 | repo = RepoModel().get_by_repo_name(repo_name) |
|
225 | 225 | assert repo is not None |
|
226 | 226 | ret = { |
|
227 | 227 | 'msg': 'Created new repository `%s`' % (repo_name,), |
|
228 | 228 | 'success': True, |
|
229 | 229 | 'task': None, |
|
230 | 230 | } |
|
231 | 231 | expected = ret |
|
232 | 232 | assert_ok(id_, expected, given=response.body) |
|
233 | 233 | fixture.destroy_repo(repo_name) |
|
234 | 234 | |
|
235 | 235 | def test_api_create_repo_by_non_admin_specify_owner(self, backend): |
|
236 | 236 | repo_name = 'api-repo-5' |
|
237 | 237 | owner = 'i-dont-exist' |
|
238 | 238 | id_, params = build_data( |
|
239 | 239 | self.apikey_regular, 'create_repo', |
|
240 | 240 | repo_name=repo_name, |
|
241 | 241 | repo_type=backend.alias, |
|
242 | 242 | owner=owner) |
|
243 | 243 | response = api_call(self.app, params) |
|
244 | 244 | |
|
245 | 245 | expected = 'Only RhodeCode super-admin can specify `owner` param' |
|
246 | 246 | assert_error(id_, expected, given=response.body) |
|
247 | 247 | fixture.destroy_repo(repo_name) |
|
248 | 248 | |
|
249 | 249 | def test_api_create_repo_by_non_admin_no_parent_group_perms(self, backend): |
|
250 | 250 | repo_group_name = 'no-access' |
|
251 | 251 | fixture.create_repo_group(repo_group_name) |
|
252 | 252 | repo_name = 'no-access/api-repo' |
|
253 | 253 | |
|
254 | 254 | id_, params = build_data( |
|
255 | 255 | self.apikey_regular, 'create_repo', |
|
256 | 256 | repo_name=repo_name, |
|
257 | 257 | repo_type=backend.alias) |
|
258 | 258 | response = api_call(self.app, params) |
|
259 | 259 | |
|
260 | 260 | expected = {'repo_group': 'Repository group `{}` does not exist'.format( |
|
261 | 261 | repo_group_name)} |
|
262 | 262 | assert_error(id_, expected, given=response.body) |
|
263 | 263 | fixture.destroy_repo_group(repo_group_name) |
|
264 | 264 | fixture.destroy_repo(repo_name) |
|
265 | 265 | |
|
266 | 266 | def test_api_create_repo_non_admin_no_permission_to_create_to_root_level( |
|
267 | 267 | self, backend, user_util): |
|
268 | 268 | |
|
269 | 269 | regular_user = user_util.create_user() |
|
270 | 270 | regular_user_api_key = regular_user.api_key |
|
271 | 271 | |
|
272 | 272 | usr = UserModel().get_by_username(regular_user.username) |
|
273 | 273 | usr.inherit_default_permissions = False |
|
274 | 274 | Session().add(usr) |
|
275 | 275 | |
|
276 | 276 | repo_name = backend.new_repo_name() |
|
277 | 277 | id_, params = build_data( |
|
278 | 278 | regular_user_api_key, 'create_repo', |
|
279 | 279 | repo_name=repo_name, |
|
280 | 280 | repo_type=backend.alias) |
|
281 | 281 | response = api_call(self.app, params) |
|
282 | 282 | expected = { |
|
283 | 283 | "repo_name": "You do not have the permission to " |
|
284 | 284 | "store repositories in the root location."} |
|
285 | 285 | assert_error(id_, expected, given=response.body) |
|
286 | 286 | |
|
287 | 287 | def test_api_create_repo_exists(self, backend): |
|
288 | 288 | repo_name = backend.repo_name |
|
289 | 289 | id_, params = build_data( |
|
290 | 290 | self.apikey, 'create_repo', |
|
291 | 291 | repo_name=repo_name, |
|
292 | 292 | owner=TEST_USER_ADMIN_LOGIN, |
|
293 | 293 | repo_type=backend.alias,) |
|
294 | 294 | response = api_call(self.app, params) |
|
295 | 295 | expected = { |
|
296 | 296 | 'unique_repo_name': 'Repository with name `{}` already exists'.format( |
|
297 | 297 | repo_name)} |
|
298 | 298 | assert_error(id_, expected, given=response.body) |
|
299 | 299 | |
|
300 | 300 | @mock.patch.object(RepoModel, 'create', crash) |
|
301 | 301 | def test_api_create_repo_exception_occurred(self, backend): |
|
302 | 302 | repo_name = 'api-repo-6' |
|
303 | 303 | id_, params = build_data( |
|
304 | 304 | self.apikey, 'create_repo', |
|
305 | 305 | repo_name=repo_name, |
|
306 | 306 | owner=TEST_USER_ADMIN_LOGIN, |
|
307 | 307 | repo_type=backend.alias,) |
|
308 | 308 | response = api_call(self.app, params) |
|
309 | 309 | expected = 'failed to create repository `%s`' % (repo_name,) |
|
310 | 310 | assert_error(id_, expected, given=response.body) |
|
311 | 311 | |
|
312 | 312 | @pytest.mark.parametrize('parent_group, dirty_name, expected_name', [ |
|
313 | 313 | (None, 'foo bar x', 'foo-bar-x'), |
|
314 | 314 | ('foo', '/foo//bar x', 'foo/bar-x'), |
|
315 | 315 | ('foo-bar', 'foo-bar //bar x', 'foo-bar/bar-x'), |
|
316 | 316 | ]) |
|
317 | 317 | def test_create_repo_with_extra_slashes_in_name( |
|
318 | 318 | self, backend, parent_group, dirty_name, expected_name): |
|
319 | 319 | |
|
320 | 320 | if parent_group: |
|
321 | 321 | gr = fixture.create_repo_group(parent_group) |
|
322 | 322 | assert gr.group_name == parent_group |
|
323 | 323 | |
|
324 | 324 | id_, params = build_data( |
|
325 | 325 | self.apikey, 'create_repo', |
|
326 | 326 | repo_name=dirty_name, |
|
327 | 327 | repo_type=backend.alias, |
|
328 | 328 | owner=TEST_USER_ADMIN_LOGIN,) |
|
329 | 329 | response = api_call(self.app, params) |
|
330 | 330 | expected = { |
|
331 | 331 | "msg": "Created new repository `{}`".format(expected_name), |
|
332 | 332 | "task": None, |
|
333 | 333 | "success": True |
|
334 | 334 | } |
|
335 | 335 | assert_ok(id_, expected, response.body) |
|
336 | 336 | |
|
337 | 337 | repo = RepoModel().get_by_repo_name(expected_name) |
|
338 | 338 | assert repo is not None |
|
339 | 339 | |
|
340 | 340 | expected = { |
|
341 | 341 | 'msg': 'Created new repository `%s`' % (expected_name,), |
|
342 | 342 | 'success': True, |
|
343 | 343 | 'task': None, |
|
344 | 344 | } |
|
345 | 345 | assert_ok(id_, expected, given=response.body) |
|
346 | 346 | fixture.destroy_repo(expected_name) |
|
347 | 347 | if parent_group: |
|
348 | 348 | fixture.destroy_repo_group(parent_group) |
@@ -1,288 +1,288 b'' | |||
|
1 | 1 | |
|
2 | 2 | # Copyright (C) 2010-2023 RhodeCode GmbH |
|
3 | 3 | # |
|
4 | 4 | # This program is free software: you can redistribute it and/or modify |
|
5 | 5 | # it under the terms of the GNU Affero General Public License, version 3 |
|
6 | 6 | # (only), as published by the Free Software Foundation. |
|
7 | 7 | # |
|
8 | 8 | # This program is distributed in the hope that it will be useful, |
|
9 | 9 | # but WITHOUT ANY WARRANTY; without even the implied warranty of |
|
10 | 10 | # MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the |
|
11 | 11 | # GNU General Public License for more details. |
|
12 | 12 | # |
|
13 | 13 | # You should have received a copy of the GNU Affero General Public License |
|
14 | 14 | # along with this program. If not, see <http://www.gnu.org/licenses/>. |
|
15 | 15 | # |
|
16 | 16 | # This program is dual-licensed. If you wish to learn more about the |
|
17 | 17 | # RhodeCode Enterprise Edition, including its added features, Support services, |
|
18 | 18 | # and proprietary license terms, please see https://rhodecode.com/licenses/ |
|
19 | 19 | |
|
20 | 20 | import mock |
|
21 | 21 | import pytest |
|
22 | 22 | |
|
23 | 23 | from rhodecode.model.meta import Session |
|
24 | 24 | from rhodecode.model.repo_group import RepoGroupModel |
|
25 | 25 | from rhodecode.model.user import UserModel |
|
26 | 26 | from rhodecode.tests import TEST_USER_ADMIN_LOGIN |
|
27 | 27 | from rhodecode.api.tests.utils import ( |
|
28 | 28 | build_data, api_call, assert_ok, assert_error, crash) |
|
29 | 29 | from rhodecode.tests.fixture import Fixture |
|
30 | 30 | |
|
31 | 31 | |
|
32 | 32 | fixture = Fixture() |
|
33 | 33 | |
|
34 | 34 | |
|
35 | 35 | @pytest.mark.usefixtures("testuser_api", "app") |
|
36 | 36 | class TestCreateRepoGroup(object): |
|
37 | 37 | def test_api_create_repo_group(self): |
|
38 | 38 | repo_group_name = 'api-repo-group' |
|
39 | 39 | |
|
40 | 40 | repo_group = RepoGroupModel.cls.get_by_group_name(repo_group_name) |
|
41 | 41 | assert repo_group is None |
|
42 | 42 | |
|
43 | 43 | id_, params = build_data( |
|
44 | 44 | self.apikey, 'create_repo_group', |
|
45 | 45 | group_name=repo_group_name, |
|
46 | 46 | owner=TEST_USER_ADMIN_LOGIN,) |
|
47 | 47 | response = api_call(self.app, params) |
|
48 | 48 | |
|
49 | 49 | repo_group = RepoGroupModel.cls.get_by_group_name(repo_group_name) |
|
50 | 50 | assert repo_group is not None |
|
51 | 51 | ret = { |
|
52 | 52 | 'msg': 'Created new repo group `%s`' % (repo_group_name,), |
|
53 | 53 | 'repo_group': repo_group.get_api_data() |
|
54 | 54 | } |
|
55 | 55 | expected = ret |
|
56 | 56 | try: |
|
57 | 57 | assert_ok(id_, expected, given=response.body) |
|
58 | 58 | finally: |
|
59 | 59 | fixture.destroy_repo_group(repo_group_name) |
|
60 | 60 | |
|
61 | 61 | def test_api_create_repo_group_in_another_group(self): |
|
62 | 62 | repo_group_name = 'api-repo-group' |
|
63 | 63 | |
|
64 | 64 | repo_group = RepoGroupModel.cls.get_by_group_name(repo_group_name) |
|
65 | 65 | assert repo_group is None |
|
66 | 66 | # create the parent |
|
67 | 67 | fixture.create_repo_group(repo_group_name) |
|
68 | 68 | |
|
69 | 69 | full_repo_group_name = repo_group_name+'/'+repo_group_name |
|
70 | 70 | id_, params = build_data( |
|
71 | 71 | self.apikey, 'create_repo_group', |
|
72 | 72 | group_name=full_repo_group_name, |
|
73 | 73 | owner=TEST_USER_ADMIN_LOGIN, |
|
74 | 74 | copy_permissions=True) |
|
75 | 75 | response = api_call(self.app, params) |
|
76 | 76 | |
|
77 | 77 | repo_group = RepoGroupModel.cls.get_by_group_name(full_repo_group_name) |
|
78 | 78 | assert repo_group is not None |
|
79 | 79 | ret = { |
|
80 | 80 | 'msg': 'Created new repo group `%s`' % (full_repo_group_name,), |
|
81 | 81 | 'repo_group': repo_group.get_api_data() |
|
82 | 82 | } |
|
83 | 83 | expected = ret |
|
84 | 84 | try: |
|
85 | 85 | assert_ok(id_, expected, given=response.body) |
|
86 | 86 | finally: |
|
87 | 87 | fixture.destroy_repo_group(full_repo_group_name) |
|
88 | 88 | fixture.destroy_repo_group(repo_group_name) |
|
89 | 89 | |
|
90 | 90 | def test_api_create_repo_group_in_another_group_not_existing(self): |
|
91 | 91 | repo_group_name = 'api-repo-group-no' |
|
92 | 92 | |
|
93 | 93 | repo_group = RepoGroupModel.cls.get_by_group_name(repo_group_name) |
|
94 | 94 | assert repo_group is None |
|
95 | 95 | |
|
96 | 96 | full_repo_group_name = repo_group_name+'/'+repo_group_name |
|
97 | 97 | id_, params = build_data( |
|
98 | 98 | self.apikey, 'create_repo_group', |
|
99 | 99 | group_name=full_repo_group_name, |
|
100 | 100 | owner=TEST_USER_ADMIN_LOGIN, |
|
101 | 101 | copy_permissions=True) |
|
102 | 102 | response = api_call(self.app, params) |
|
103 | 103 | expected = { |
|
104 | 104 | 'repo_group': |
|
105 | 105 | 'Parent repository group `{}` does not exist'.format( |
|
106 | 106 | repo_group_name)} |
|
107 | 107 | assert_error(id_, expected, given=response.body) |
|
108 | 108 | |
|
109 | 109 | def test_api_create_repo_group_that_exists(self): |
|
110 | 110 | repo_group_name = 'api-repo-group' |
|
111 | 111 | |
|
112 | 112 | repo_group = RepoGroupModel.cls.get_by_group_name(repo_group_name) |
|
113 | 113 | assert repo_group is None |
|
114 | 114 | |
|
115 | 115 | fixture.create_repo_group(repo_group_name) |
|
116 | 116 | id_, params = build_data( |
|
117 | 117 | self.apikey, 'create_repo_group', |
|
118 | 118 | group_name=repo_group_name, |
|
119 | 119 | owner=TEST_USER_ADMIN_LOGIN,) |
|
120 | 120 | response = api_call(self.app, params) |
|
121 | 121 | expected = { |
|
122 | 122 | 'unique_repo_group_name': |
|
123 | 123 | 'Repository group with name `{}` already exists'.format( |
|
124 | 124 | repo_group_name)} |
|
125 | 125 | try: |
|
126 | 126 | assert_error(id_, expected, given=response.body) |
|
127 | 127 | finally: |
|
128 | 128 | fixture.destroy_repo_group(repo_group_name) |
|
129 | 129 | |
|
130 | 130 | def test_api_create_repo_group_regular_user_wit_root_location_perms( |
|
131 | 131 | self, user_util): |
|
132 | 132 | regular_user = user_util.create_user() |
|
133 | 133 | regular_user_api_key = regular_user.api_key |
|
134 | 134 | |
|
135 | 135 | repo_group_name = 'api-repo-group-by-regular-user' |
|
136 | 136 | |
|
137 | 137 | usr = UserModel().get_by_username(regular_user.username) |
|
138 | 138 | usr.inherit_default_permissions = False |
|
139 | 139 | Session().add(usr) |
|
140 | 140 | |
|
141 | 141 | UserModel().grant_perm( |
|
142 | 142 | regular_user.username, 'hg.repogroup.create.true') |
|
143 | 143 | Session().commit() |
|
144 | 144 | |
|
145 | 145 | repo_group = RepoGroupModel.cls.get_by_group_name(repo_group_name) |
|
146 | 146 | assert repo_group is None |
|
147 | 147 | |
|
148 | 148 | id_, params = build_data( |
|
149 | 149 | regular_user_api_key, 'create_repo_group', |
|
150 | 150 | group_name=repo_group_name) |
|
151 | 151 | response = api_call(self.app, params) |
|
152 | 152 | |
|
153 | 153 | repo_group = RepoGroupModel.cls.get_by_group_name(repo_group_name) |
|
154 | 154 | assert repo_group is not None |
|
155 | 155 | expected = { |
|
156 | 156 | 'msg': 'Created new repo group `%s`' % (repo_group_name,), |
|
157 | 157 | 'repo_group': repo_group.get_api_data() |
|
158 | 158 | } |
|
159 | 159 | try: |
|
160 | 160 | assert_ok(id_, expected, given=response.body) |
|
161 | 161 | finally: |
|
162 | 162 | fixture.destroy_repo_group(repo_group_name) |
|
163 | 163 | |
|
164 | 164 | def test_api_create_repo_group_regular_user_with_admin_perms_to_parent( |
|
165 | 165 | self, user_util): |
|
166 | 166 | |
|
167 | 167 | repo_group_name = 'api-repo-group-parent' |
|
168 | 168 | |
|
169 | 169 | repo_group = RepoGroupModel.cls.get_by_group_name(repo_group_name) |
|
170 | 170 | assert repo_group is None |
|
171 | 171 | # create the parent |
|
172 | 172 | fixture.create_repo_group(repo_group_name) |
|
173 | 173 | |
|
174 | 174 | # user perms |
|
175 | 175 | regular_user = user_util.create_user() |
|
176 | 176 | regular_user_api_key = regular_user.api_key |
|
177 | 177 | |
|
178 | 178 | usr = UserModel().get_by_username(regular_user.username) |
|
179 | 179 | usr.inherit_default_permissions = False |
|
180 | 180 | Session().add(usr) |
|
181 | 181 | |
|
182 | 182 | RepoGroupModel().grant_user_permission( |
|
183 | 183 | repo_group_name, regular_user.username, 'group.admin') |
|
184 | 184 | Session().commit() |
|
185 | 185 | |
|
186 | 186 | full_repo_group_name = repo_group_name + '/' + repo_group_name |
|
187 | 187 | id_, params = build_data( |
|
188 | 188 | regular_user_api_key, 'create_repo_group', |
|
189 | 189 | group_name=full_repo_group_name) |
|
190 | 190 | response = api_call(self.app, params) |
|
191 | 191 | |
|
192 | 192 | repo_group = RepoGroupModel.cls.get_by_group_name(full_repo_group_name) |
|
193 | 193 | assert repo_group is not None |
|
194 | 194 | expected = { |
|
195 | 195 | 'msg': 'Created new repo group `{}`'.format(full_repo_group_name), |
|
196 | 196 | 'repo_group': repo_group.get_api_data() |
|
197 | 197 | } |
|
198 | 198 | try: |
|
199 | 199 | assert_ok(id_, expected, given=response.body) |
|
200 | 200 | finally: |
|
201 | 201 | fixture.destroy_repo_group(full_repo_group_name) |
|
202 | 202 | fixture.destroy_repo_group(repo_group_name) |
|
203 | 203 | |
|
204 | 204 | def test_api_create_repo_group_regular_user_no_permission_to_create_to_root_level(self): |
|
205 | 205 | repo_group_name = 'api-repo-group' |
|
206 | 206 | |
|
207 | 207 | id_, params = build_data( |
|
208 | 208 | self.apikey_regular, 'create_repo_group', |
|
209 | 209 | group_name=repo_group_name) |
|
210 | 210 | response = api_call(self.app, params) |
|
211 | 211 | |
|
212 | 212 | expected = { |
|
213 | 213 | 'repo_group': |
|
|
214 | 'You do not have the permission to store ' | |
|
215 | 'repository groups in the root location.'} | |
|
216 | 216 | assert_error(id_, expected, given=response.body) |
|
217 | 217 | |
|
218 | 218 | def test_api_create_repo_group_regular_user_no_parent_group_perms(self): |
|
219 | 219 | repo_group_name = 'api-repo-group-regular-user' |
|
220 | 220 | |
|
221 | 221 | repo_group = RepoGroupModel.cls.get_by_group_name(repo_group_name) |
|
222 | 222 | assert repo_group is None |
|
223 | 223 | # create the parent |
|
224 | 224 | fixture.create_repo_group(repo_group_name) |
|
225 | 225 | |
|
226 | 226 | full_repo_group_name = repo_group_name+'/'+repo_group_name |
|
227 | 227 | |
|
228 | 228 | id_, params = build_data( |
|
229 | 229 | self.apikey_regular, 'create_repo_group', |
|
230 | 230 | group_name=full_repo_group_name) |
|
231 | 231 | response = api_call(self.app, params) |
|
232 | 232 | |
|
233 | 233 | expected = { |
|
234 | 234 | 'repo_group': |
|
|
235 | "You do not have the permissions to store " | |
|
236 | "repository groups inside repository group `{}`".format(repo_group_name)} | |
|
237 | 237 | try: |
|
238 | 238 | assert_error(id_, expected, given=response.body) |
|
239 | 239 | finally: |
|
240 | 240 | fixture.destroy_repo_group(repo_group_name) |
|
241 | 241 | |
|
242 | 242 | def test_api_create_repo_group_regular_user_no_permission_to_specify_owner( |
|
243 | 243 | self): |
|
244 | 244 | repo_group_name = 'api-repo-group' |
|
245 | 245 | |
|
246 | 246 | id_, params = build_data( |
|
247 | 247 | self.apikey_regular, 'create_repo_group', |
|
248 | 248 | group_name=repo_group_name, |
|
249 | 249 | owner=TEST_USER_ADMIN_LOGIN,) |
|
250 | 250 | response = api_call(self.app, params) |
|
251 | 251 | |
|
252 | 252 | expected = "Only RhodeCode super-admin can specify `owner` param" |
|
253 | 253 | assert_error(id_, expected, given=response.body) |
|
254 | 254 | |
|
255 | 255 | @mock.patch.object(RepoGroupModel, 'create', crash) |
|
256 | 256 | def test_api_create_repo_group_exception_occurred(self): |
|
257 | 257 | repo_group_name = 'api-repo-group' |
|
258 | 258 | |
|
259 | 259 | repo_group = RepoGroupModel.cls.get_by_group_name(repo_group_name) |
|
260 | 260 | assert repo_group is None |
|
261 | 261 | |
|
262 | 262 | id_, params = build_data( |
|
263 | 263 | self.apikey, 'create_repo_group', |
|
264 | 264 | group_name=repo_group_name, |
|
265 | 265 | owner=TEST_USER_ADMIN_LOGIN,) |
|
266 | 266 | response = api_call(self.app, params) |
|
267 | 267 | expected = 'failed to create repo group `%s`' % (repo_group_name,) |
|
268 | 268 | assert_error(id_, expected, given=response.body) |
|
269 | 269 | |
|
270 | 270 | def test_create_group_with_extra_slashes_in_name(self, user_util): |
|
271 | 271 | existing_repo_group = user_util.create_repo_group() |
|
272 | 272 | dirty_group_name = '//{}//group2//'.format( |
|
273 | 273 | existing_repo_group.group_name) |
|
274 | 274 | cleaned_group_name = '{}/group2'.format( |
|
275 | 275 | existing_repo_group.group_name) |
|
276 | 276 | |
|
277 | 277 | id_, params = build_data( |
|
278 | 278 | self.apikey, 'create_repo_group', |
|
279 | 279 | group_name=dirty_group_name, |
|
280 | 280 | owner=TEST_USER_ADMIN_LOGIN,) |
|
281 | 281 | response = api_call(self.app, params) |
|
282 | 282 | repo_group = RepoGroupModel.cls.get_by_group_name(cleaned_group_name) |
|
283 | 283 | expected = { |
|
284 | 284 | 'msg': 'Created new repo group `%s`' % (cleaned_group_name,), |
|
285 | 285 | 'repo_group': repo_group.get_api_data() |
|
286 | 286 | } |
|
287 | 287 | assert_ok(id_, expected, given=response.body) |
|
288 | 288 | fixture.destroy_repo_group(cleaned_group_name) |
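
The tests above all drive RhodeCode's JSON-RPC API through a `build_data` helper. As a minimal standalone sketch (the envelope field names `id`, `auth_token`, `method`, `args` follow RhodeCode's JSON-RPC conventions but are assumptions here, not the real helper):

```python
import json

def build_rpc_payload(auth_token, method, **kwargs):
    # Hypothetical stand-in for the tests' build_data helper: wrap the
    # method name and keyword arguments into a JSON-RPC style envelope.
    request_id = 1
    payload = {
        'id': request_id,
        'auth_token': auth_token,
        'method': method,
        'args': kwargs,
    }
    return request_id, json.dumps(payload)

# Mirrors the create_repo_group calls exercised in the tests above.
id_, body = build_rpc_payload('secret-token', 'create_repo_group',
                              group_name='api-repo-group')
```

The `api_call` helper would then POST `body` to the API endpoint and the assertions compare the decoded response against `expected`.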
@@ -1,101 +1,101 b'' | |||
|
1 | 1 | |
|
2 | 2 | # Copyright (C) 2010-2023 RhodeCode GmbH |
|
3 | 3 | # |
|
4 | 4 | # This program is free software: you can redistribute it and/or modify |
|
5 | 5 | # it under the terms of the GNU Affero General Public License, version 3 |
|
6 | 6 | # (only), as published by the Free Software Foundation. |
|
7 | 7 | # |
|
8 | 8 | # This program is distributed in the hope that it will be useful, |
|
9 | 9 | # but WITHOUT ANY WARRANTY; without even the implied warranty of |
|
10 | 10 | # MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the |
|
11 | 11 | # GNU General Public License for more details. |
|
12 | 12 | # |
|
13 | 13 | # You should have received a copy of the GNU Affero General Public License |
|
14 | 14 | # along with this program. If not, see <http://www.gnu.org/licenses/>. |
|
15 | 15 | # |
|
16 | 16 | # This program is dual-licensed. If you wish to learn more about the |
|
17 | 17 | # RhodeCode Enterprise Edition, including its added features, Support services, |
|
18 | 18 | # and proprietary license terms, please see https://rhodecode.com/licenses/ |
|
19 | 19 | |
|
20 | 20 | |
|
21 | 21 | import pytest |
|
22 | 22 | |
|
23 | 23 | from rhodecode.lib.str_utils import safe_bytes |
|
24 | 24 | from rhodecode.model.db import Gist |
|
25 | 25 | from rhodecode.api.tests.utils import ( |
|
26 | 26 | build_data, api_call, assert_error, assert_ok) |
|
27 | 27 | |
|
28 | 28 | |
|
29 | 29 | @pytest.mark.usefixtures("testuser_api", "app") |
|
30 | 30 | class TestApiGetGist(object): |
|
31 | 31 | def test_api_get_gist(self, gist_util, http_host_only_stub): |
|
32 | 32 | gist = gist_util.create_gist() |
|
33 | 33 | gist_id = gist.gist_access_id |
|
34 | 34 | gist_created_on = gist.created_on |
|
35 | 35 | gist_modified_at = gist.modified_at |
|
36 | 36 | id_, params = build_data( |
|
37 | 37 | self.apikey, 'get_gist', gistid=gist_id, ) |
|
38 | 38 | response = api_call(self.app, params) |
|
39 | 39 | |
|
40 | 40 | expected = { |
|
41 | 41 | 'access_id': gist_id, |
|
42 | 42 | 'created_on': gist_created_on, |
|
43 | 43 | 'modified_at': gist_modified_at, |
|
44 | 44 | 'description': 'new-gist', |
|
45 | 45 | 'expires': -1.0, |
|
46 | 46 | 'gist_id': int(gist_id), |
|
47 | 47 | 'type': 'public', |
|
48 | 48 | 'url': 'http://%s/_admin/gists/%s' % (http_host_only_stub, gist_id,), |
|
49 | 49 | 'acl_level': Gist.ACL_LEVEL_PUBLIC, |
|
50 | 50 | 'content': None, |
|
51 | 51 | } |
|
52 | 52 | |
|
53 | 53 | assert_ok(id_, expected, given=response.body) |
|
54 | 54 | |
|
55 | 55 | def test_api_get_gist_with_content(self, gist_util, http_host_only_stub): |
|
56 | 56 | mapping = { |
|
57 | 57 | b'filename1.txt': {'content': b'hello world'}, |
|
58 | 58 | safe_bytes('filename1ą.txt'): {'content': safe_bytes('hello worldę')} |
|
59 | 59 | } |
|
60 | 60 | gist = gist_util.create_gist(gist_mapping=mapping) |
|
61 | 61 | gist_id = gist.gist_access_id |
|
62 | 62 | gist_created_on = gist.created_on |
|
63 | 63 | gist_modified_at = gist.modified_at |
|
64 | 64 | id_, params = build_data( |
|
65 | 65 | self.apikey, 'get_gist', gistid=gist_id, content=True) |
|
66 | 66 | response = api_call(self.app, params) |
|
67 | 67 | |
|
68 | 68 | expected = { |
|
69 | 69 | 'access_id': gist_id, |
|
70 | 70 | 'created_on': gist_created_on, |
|
71 | 71 | 'modified_at': gist_modified_at, |
|
72 | 72 | 'description': 'new-gist', |
|
73 | 73 | 'expires': -1.0, |
|
74 | 74 | 'gist_id': int(gist_id), |
|
75 | 75 | 'type': 'public', |
|
76 | 76 | 'url': 'http://%s/_admin/gists/%s' % (http_host_only_stub, gist_id,), |
|
77 | 77 | 'acl_level': Gist.ACL_LEVEL_PUBLIC, |
|
78 | 78 | 'content': { |
|
79 |
|
|
|
80 |
|
|
|
79 | 'filename1.txt': 'hello world', | |
|
80 | 'filename1ą.txt': 'hello worldę' | |
|
81 | 81 | }, |
|
82 | 82 | } |
|
83 | 83 | |
|
84 | 84 | assert_ok(id_, expected, given=response.body) |
|
85 | 85 | |
|
86 | 86 | def test_api_get_gist_not_existing(self): |
|
87 | 87 | id_, params = build_data( |
|
88 | 88 | self.apikey_regular, 'get_gist', gistid='12345', ) |
|
89 | 89 | response = api_call(self.app, params) |
|
90 | 90 | expected = 'gist `%s` does not exist' % ('12345',) |
|
91 | 91 | assert_error(id_, expected, given=response.body) |
|
92 | 92 | |
|
93 | 93 | def test_api_get_gist_private_gist_without_permission(self, gist_util): |
|
94 | 94 | gist = gist_util.create_gist() |
|
95 | 95 | gist_id = gist.gist_access_id |
|
96 | 96 | id_, params = build_data( |
|
97 | 97 | self.apikey_regular, 'get_gist', gistid=gist_id, ) |
|
98 | 98 | response = api_call(self.app, params) |
|
99 | 99 | |
|
100 | 100 | expected = 'gist `%s` does not exist' % (gist_id,) |
|
101 | 101 | assert_error(id_, expected, given=response.body) |
@@ -1,424 +1,423 b'' | |||
|
1 | 1 | # Copyright (C) 2011-2023 RhodeCode GmbH |
|
2 | 2 | # |
|
3 | 3 | # This program is free software: you can redistribute it and/or modify |
|
4 | 4 | # it under the terms of the GNU Affero General Public License, version 3 |
|
5 | 5 | # (only), as published by the Free Software Foundation. |
|
6 | 6 | # |
|
7 | 7 | # This program is distributed in the hope that it will be useful, |
|
8 | 8 | # but WITHOUT ANY WARRANTY; without even the implied warranty of |
|
9 | 9 | # MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the |
|
10 | 10 | # GNU General Public License for more details. |
|
11 | 11 | # |
|
12 | 12 | # You should have received a copy of the GNU Affero General Public License |
|
13 | 13 | # along with this program. If not, see <http://www.gnu.org/licenses/>. |
|
14 | 14 | # |
|
15 | 15 | # This program is dual-licensed. If you wish to learn more about the |
|
16 | 16 | # RhodeCode Enterprise Edition, including its added features, Support services, |
|
17 | 17 | # and proprietary license terms, please see https://rhodecode.com/licenses/ |
|
18 | 18 | |
|
19 | 19 | import logging |
|
20 | 20 | import itertools |
|
21 | 21 | import base64 |
|
22 | 22 | |
|
23 | 23 | from rhodecode.api import ( |
|
24 | 24 | jsonrpc_method, JSONRPCError, JSONRPCForbidden, find_methods) |
|
25 | 25 | |
|
26 | 26 | from rhodecode.api.utils import ( |
|
27 | 27 | Optional, OAttr, has_superadmin_permission, get_user_or_error) |
|
28 | from rhodecode.lib.utils import repo2db_mapper | |
|
28 | from rhodecode.lib.utils import repo2db_mapper, get_rhodecode_repo_store_path | |
|
29 | 29 | from rhodecode.lib import system_info |
|
30 | 30 | from rhodecode.lib import user_sessions |
|
31 | 31 | from rhodecode.lib import exc_tracking |
|
32 | 32 | from rhodecode.lib.ext_json import json |
|
33 | 33 | from rhodecode.lib.utils2 import safe_int |
|
34 | 34 | from rhodecode.model.db import UserIpMap |
|
35 | 35 | from rhodecode.model.scm import ScmModel |
|
36 | from rhodecode.model.settings import VcsSettingsModel | |
|
37 | 36 | from rhodecode.apps.file_store import utils |
|
38 | 37 | from rhodecode.apps.file_store.exceptions import FileNotAllowedException, \ |
|
39 | 38 | FileOverSizeException |
|
40 | 39 | |
|
41 | 40 | log = logging.getLogger(__name__) |
|
42 | 41 | |
|
43 | 42 | |
|
44 | 43 | @jsonrpc_method() |
|
45 | 44 | def get_server_info(request, apiuser): |
|
46 | 45 | """ |
|
47 | 46 | Returns the |RCE| server information. |
|
48 | 47 | |
|
49 | 48 | This includes the running version of |RCE| and all installed |
|
50 | 49 | packages. This command takes the following options: |
|
51 | 50 | |
|
52 | 51 | :param apiuser: This is filled automatically from the |authtoken|. |
|
53 | 52 | :type apiuser: AuthUser |
|
54 | 53 | |
|
55 | 54 | Example output: |
|
56 | 55 | |
|
57 | 56 | .. code-block:: bash |
|
58 | 57 | |
|
59 | 58 | id : <id_given_in_input> |
|
60 | 59 | result : { |
|
61 | 60 | 'modules': [<module name>,...] |
|
62 | 61 | 'py_version': <python version>, |
|
63 | 62 | 'platform': <platform type>, |
|
64 | 63 | 'rhodecode_version': <rhodecode version> |
|
65 | 64 | } |
|
66 | 65 | error : null |
|
67 | 66 | """ |
|
68 | 67 | |
|
69 | 68 | if not has_superadmin_permission(apiuser): |
|
70 | 69 | raise JSONRPCForbidden() |
|
71 | 70 | |
|
72 | 71 | server_info = ScmModel().get_server_info(request.environ) |
|
73 | 72 | # rhodecode-index requires those |
|
74 | 73 | |
|
75 | 74 | server_info['index_storage'] = server_info['search']['value']['location'] |
|
76 | 75 | server_info['storage'] = server_info['storage']['value']['path'] |
|
77 | 76 | |
|
78 | 77 | return server_info |
|
79 | 78 | |
|
80 | 79 | |
|
81 | 80 | @jsonrpc_method() |
|
82 | 81 | def get_repo_store(request, apiuser): |
|
83 | 82 | """ |
|
84 | 83 | Returns the |RCE| repository storage information. |
|
85 | 84 | |
|
86 | 85 | :param apiuser: This is filled automatically from the |authtoken|. |
|
87 | 86 | :type apiuser: AuthUser |
|
88 | 87 | |
|
89 | 88 | Example output: |
|
90 | 89 | |
|
91 | 90 | .. code-block:: bash |
|
92 | 91 | |
|
93 | 92 | id : <id_given_in_input> |
|
94 | 93 | result : { |
|
95 | 94 | 'path': <path_to_repo_store>
|
99 | 98 | } |
|
100 | 99 | error : null |
|
101 | 100 | """ |
|
102 | 101 | |
|
103 | 102 | if not has_superadmin_permission(apiuser): |
|
104 | 103 | raise JSONRPCForbidden() |
|
105 | 104 | |
|
106 | path = VcsSettingsModel().get_repos_location() | |
|
105 | path = get_rhodecode_repo_store_path() | |
|
107 | 106 | return {"path": path} |
|
108 | 107 | |
|
109 | 108 | |
|
110 | 109 | @jsonrpc_method() |
|
111 | 110 | def get_ip(request, apiuser, userid=Optional(OAttr('apiuser'))): |
|
112 | 111 | """ |
|
113 | 112 | Displays the IP Address as seen from the |RCE| server. |
|
114 | 113 | |
|
115 | 114 | * This command displays the IP Address, as well as all the defined IP |
|
116 | 115 | addresses for the specified user. If the ``userid`` is not set, the |
|
117 | 116 | data returned is for the user calling the method. |
|
118 | 117 | |
|
119 | 118 | This command can only be run using an |authtoken| with admin rights to |
|
120 | 119 | the specified repository. |
|
121 | 120 | |
|
122 | 121 | This command takes the following options: |
|
123 | 122 | |
|
124 | 123 | :param apiuser: This is filled automatically from |authtoken|. |
|
125 | 124 | :type apiuser: AuthUser |
|
126 | 125 | :param userid: Sets the userid for which associated IP Address data |
|
127 | 126 | is returned. |
|
128 | 127 | :type userid: Optional(str or int) |
|
129 | 128 | |
|
130 | 129 | Example output: |
|
131 | 130 | |
|
132 | 131 | .. code-block:: bash |
|
133 | 132 | |
|
134 | 133 | id : <id_given_in_input> |
|
135 | 134 | result : { |
|
136 | 135 | "server_ip_addr": "<ip_from_client>",
|
137 | 136 | "user_ips": [ |
|
138 | 137 | { |
|
139 | 138 | "ip_addr": "<ip_with_mask>", |
|
140 | 139 | "ip_range": ["<start_ip>", "<end_ip>"], |
|
141 | 140 | }, |
|
142 | 141 | ... |
|
143 | 142 | ] |
|
144 | 143 | } |
|
145 | 144 | |
|
146 | 145 | """ |
|
147 | 146 | if not has_superadmin_permission(apiuser): |
|
148 | 147 | raise JSONRPCForbidden() |
|
149 | 148 | |
|
150 | 149 | userid = Optional.extract(userid, evaluate_locals=locals()) |
|
151 | 150 | userid = getattr(userid, 'user_id', userid) |
|
152 | 151 | |
|
153 | 152 | user = get_user_or_error(userid) |
|
154 | 153 | ips = UserIpMap.query().filter(UserIpMap.user == user).all() |
|
155 | 154 | return { |
|
156 | 155 | 'server_ip_addr': request.rpc_ip_addr, |
|
157 | 156 | 'user_ips': ips |
|
158 | 157 | } |
|
159 | 158 | |
|
160 | 159 | |
|
161 | 160 | @jsonrpc_method() |
|
162 | 161 | def rescan_repos(request, apiuser, remove_obsolete=Optional(False)): |
|
163 | 162 | """ |
|
164 | 163 | Triggers a rescan of the specified repositories. |
|
165 | 164 | |
|
166 | 165 | * If the ``remove_obsolete`` option is set, it also deletes repositories |
|
167 | 166 | that are found in the database but not on the file system, so-called
|
168 | 167 | "clean zombies". |
|
169 | 168 | |
|
170 | 169 | This command can only be run using an |authtoken| with admin rights to |
|
171 | 170 | the specified repository. |
|
172 | 171 | |
|
173 | 172 | This command takes the following options: |
|
174 | 173 | |
|
175 | 174 | :param apiuser: This is filled automatically from the |authtoken|. |
|
176 | 175 | :type apiuser: AuthUser |
|
177 | 176 | :param remove_obsolete: Deletes repositories from the database that |
|
178 | 177 | are not found on the filesystem. |
|
179 | 178 | :type remove_obsolete: Optional(``True`` | ``False``) |
|
180 | 179 | |
|
181 | 180 | Example output: |
|
182 | 181 | |
|
183 | 182 | .. code-block:: bash |
|
184 | 183 | |
|
185 | 184 | id : <id_given_in_input> |
|
186 | 185 | result : { |
|
187 | 186 | 'added': [<added repository name>,...] |
|
188 | 187 | 'removed': [<removed repository name>,...] |
|
189 | 188 | } |
|
190 | 189 | error : null |
|
191 | 190 | |
|
192 | 191 | Example error output: |
|
193 | 192 | |
|
194 | 193 | .. code-block:: bash |
|
195 | 194 | |
|
196 | 195 | id : <id_given_in_input> |
|
197 | 196 | result : null |
|
198 | 197 | error : { |
|
199 | 198 | 'Error occurred during rescan repositories action' |
|
200 | 199 | } |
|
201 | 200 | |
|
202 | 201 | """ |
|
203 | 202 | if not has_superadmin_permission(apiuser): |
|
204 | 203 | raise JSONRPCForbidden() |
|
205 | 204 | |
|
206 | 205 | try: |
|
207 | 206 | rm_obsolete = Optional.extract(remove_obsolete) |
|
208 | 207 | added, removed = repo2db_mapper(ScmModel().repo_scan(), |
|
209 | 208 | remove_obsolete=rm_obsolete, force_hooks_rebuild=True) |
|
210 | 209 | return {'added': added, 'removed': removed} |
|
211 | 210 | except Exception: |
|
212 | 211 | log.exception('Failed to run repo rescan')
|
213 | 212 | raise JSONRPCError( |
|
214 | 213 | 'Error occurred during rescan repositories action' |
|
215 | 214 | ) |
|
216 | 215 | |
|
217 | 216 | |
|
218 | 217 | @jsonrpc_method() |
|
219 | 218 | def cleanup_sessions(request, apiuser, older_then=Optional(60)): |
|
220 | 219 | """ |
|
221 | 220 | Triggers a session cleanup action. |
|
222 | 221 | |
|
223 | 222 | If the ``older_then`` option is set, only sessions that haven't been
|
224 | 223 | accessed in the given number of days will be removed. |
|
225 | 224 | |
|
226 | 225 | This command can only be run using an |authtoken| with admin rights to |
|
227 | 226 | the specified repository. |
|
228 | 227 | |
|
229 | 228 | This command takes the following options: |
|
230 | 229 | |
|
231 | 230 | :param apiuser: This is filled automatically from the |authtoken|. |
|
232 | 231 | :type apiuser: AuthUser |
|
233 | 232 | :param older_then: Deletes sessions that haven't been accessed

234 | 233 | in the given number of days.
|
235 | 234 | :type older_then: Optional(int) |
|
236 | 235 | |
|
237 | 236 | Example output: |
|
238 | 237 | |
|
239 | 238 | .. code-block:: bash |
|
240 | 239 | |
|
241 | 240 | id : <id_given_in_input> |
|
242 | 241 | result: { |
|
243 | 242 | "backend": "<type of backend>", |
|
244 | 243 | "sessions_removed": <number_of_removed_sessions> |
|
245 | 244 | } |
|
246 | 245 | error : null |
|
247 | 246 | |
|
248 | 247 | Example error output: |
|
249 | 248 | |
|
250 | 249 | .. code-block:: bash |
|
251 | 250 | |
|
252 | 251 | id : <id_given_in_input> |
|
253 | 252 | result : null |
|
254 | 253 | error : { |
|
255 | 254 | 'Error occurred during session cleanup' |
|
256 | 255 | } |
|
257 | 256 | |
|
258 | 257 | """ |
|
259 | 258 | if not has_superadmin_permission(apiuser): |
|
260 | 259 | raise JSONRPCForbidden() |
|
261 | 260 | |
|
262 | 261 | older_then = safe_int(Optional.extract(older_then)) or 60 |
|
263 | 262 | older_than_seconds = 60 * 60 * 24 * older_then |
|
264 | 263 | |
|
265 | 264 | config = system_info.rhodecode_config().get_value()['value']['config'] |
|
266 | 265 | session_model = user_sessions.get_session_handler( |
|
267 | 266 | config.get('beaker.session.type', 'memory'))(config) |
|
268 | 267 | |
|
269 | 268 | backend = session_model.SESSION_TYPE |
|
270 | 269 | try: |
|
271 | 270 | cleaned = session_model.clean_sessions( |
|
272 | 271 | older_than_seconds=older_than_seconds) |
|
273 | 272 | return {'sessions_removed': cleaned, 'backend': backend} |
|
274 | 273 | except user_sessions.CleanupCommand as msg: |
|
275 | 274 | return {'cleanup_command': str(msg), 'backend': backend} |
|
276 | 275 | except Exception as e: |
|
277 | 276 | log.exception('Failed session cleanup') |
|
278 | 277 | raise JSONRPCError( |
|
279 | 278 | 'Error occurred during session cleanup' |
|
280 | 279 | ) |
|
281 | 280 | |
|
282 | 281 | |
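
The day-to-second conversion `cleanup_sessions` performs can be restated as a standalone sketch, including the 60-day fallback produced by `safe_int(...) or 60`:

```python
def older_then_to_seconds(older_then=None):
    # Mirrors the conversion above: an unset/zero/invalid value falls
    # back to 60 days, then days are converted to seconds.
    days = older_then or 60
    return 60 * 60 * 24 * days
```

So the default retention window handed to `clean_sessions` is 5,184,000 seconds (60 days).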
|
283 | 282 | @jsonrpc_method() |
|
284 | 283 | def get_method(request, apiuser, pattern=Optional('*')): |
|
285 | 284 | """ |
|
286 | 285 | Returns a list of all available API methods. By default the match pattern

287 | 286 | is "*", but any other pattern can be specified, e.g. *comment* will return

288 | 287 | all methods with "comment" in their name. If just a single method is

289 | 288 | matched, the returned data will also include the method specification.
|
290 | 289 | |
|
291 | 290 | This command can only be run using an |authtoken| with admin rights to |
|
292 | 291 | the specified repository. |
|
293 | 292 | |
|
294 | 293 | This command takes the following options: |
|
295 | 294 | |
|
296 | 295 | :param apiuser: This is filled automatically from the |authtoken|. |
|
297 | 296 | :type apiuser: AuthUser |
|
298 | 297 | :param pattern: pattern to match method names against |
|
299 | 298 | :type pattern: Optional("*") |
|
300 | 299 | |
|
301 | 300 | Example output: |
|
302 | 301 | |
|
303 | 302 | .. code-block:: bash |
|
304 | 303 | |
|
305 | 304 | id : <id_given_in_input> |
|
306 | 305 | "result": [ |
|
307 | 306 | "changeset_comment", |
|
308 | 307 | "comment_pull_request", |
|
309 | 308 | "comment_commit" |
|
310 | 309 | ] |
|
311 | 310 | error : null |
|
312 | 311 | |
|
313 | 312 | .. code-block:: bash |
|
314 | 313 | |
|
315 | 314 | id : <id_given_in_input> |
|
316 | 315 | "result": [ |
|
317 | 316 | "comment_commit", |
|
318 | 317 | { |
|
319 | 318 | "apiuser": "<RequiredType>", |
|
320 | 319 | "comment_type": "<Optional:u'note'>", |
|
321 | 320 | "commit_id": "<RequiredType>", |
|
322 | 321 | "message": "<RequiredType>", |
|
323 | 322 | "repoid": "<RequiredType>", |
|
324 | 323 | "request": "<RequiredType>", |
|
325 | 324 | "resolves_comment_id": "<Optional:None>", |
|
326 | 325 | "status": "<Optional:None>", |
|
327 | 326 | "userid": "<Optional:<OptionalAttr:apiuser>>" |
|
328 | 327 | } |
|
329 | 328 | ] |
|
330 | 329 | error : null |
|
331 | 330 | """ |
|
332 | 331 | from rhodecode.config.patches import inspect_getargspec |
|
333 | 332 | inspect = inspect_getargspec() |
|
334 | 333 | |
|
335 | 334 | if not has_superadmin_permission(apiuser): |
|
336 | 335 | raise JSONRPCForbidden() |
|
337 | 336 | |
|
338 | 337 | pattern = Optional.extract(pattern) |
|
339 | 338 | |
|
340 | 339 | matches = find_methods(request.registry.jsonrpc_methods, pattern) |
|
341 | 340 | |
|
342 | 341 | args_desc = [] |
|
343 | 342 | matches_keys = list(matches.keys()) |
|
344 | 343 | if len(matches_keys) == 1: |
|
345 | 344 | func = matches[matches_keys[0]] |
|
346 | 345 | |
|
347 | 346 | argspec = inspect.getargspec(func) |
|
348 | 347 | arglist = argspec[0] |
|
349 | 348 | defaults = list(map(repr, argspec[3] or [])) |
|
350 | 349 | |
|
351 | 350 | default_empty = '<RequiredType>' |
|
352 | 351 | |
|
353 | 352 | # kw arguments required by this method |
|
354 | 353 | func_kwargs = dict(itertools.zip_longest( |
|
355 | 354 | reversed(arglist), reversed(defaults), fillvalue=default_empty)) |
|
356 | 355 | args_desc.append(func_kwargs) |
|
357 | 356 | |
|
358 | 357 | return matches_keys + args_desc |
|
359 | 358 | |
|
360 | 359 | |
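
The `zip_longest` step in `get_method` above is worth unpacking: defaults apply to the *trailing* arguments of a function, so reversing both lists before zipping lines each default up with the argument it belongs to, while arguments without defaults get the `<RequiredType>` marker. Using `get_method` itself as the example input:

```python
import itertools

# argspec[0] and the repr()'d defaults for get_method(request, apiuser,
# pattern=Optional('*')) -- values taken from the function above.
arglist = ['request', 'apiuser', 'pattern']
defaults = ["Optional('*')"]
default_empty = '<RequiredType>'

# Reversed zip aligns defaults with trailing args; fillvalue marks the rest.
func_kwargs = dict(itertools.zip_longest(
    reversed(arglist), reversed(defaults), fillvalue=default_empty))
```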
|
361 | 360 | @jsonrpc_method() |
|
362 | 361 | def store_exception(request, apiuser, exc_data_json, prefix=Optional('rhodecode')): |
|
363 | 362 | """ |
|
364 | 363 | Stores sent exception inside the built-in exception tracker in |RCE| server. |
|
365 | 364 | |
|
366 | 365 | This command can only be run using an |authtoken| with admin rights to |
|
367 | 366 | the specified repository. |
|
368 | 367 | |
|
369 | 368 | This command takes the following options: |
|
370 | 369 | |
|
371 | 370 | :param apiuser: This is filled automatically from the |authtoken|. |
|
372 | 371 | :type apiuser: AuthUser |
|
373 | 372 | |
|
374 | 373 | :param exc_data_json: JSON data with exception e.g |
|
375 | 374 | {"exc_traceback": "Value `1` is not allowed", "exc_type_name": "ValueError"} |
|
376 | 375 | :type exc_data_json: JSON data |
|
377 | 376 | |
|
378 | 377 | :param prefix: prefix for error type, e.g 'rhodecode', 'vcsserver', 'rhodecode-tools' |
|
379 | 378 | :type prefix: Optional("rhodecode") |
|
380 | 379 | |
|
381 | 380 | Example output: |
|
382 | 381 | |
|
383 | 382 | .. code-block:: bash |
|
384 | 383 | |
|
385 | 384 | id : <id_given_in_input> |
|
386 | 385 | "result": { |
|
387 | 386 | "exc_id": 139718459226384, |
|
388 | 387 | "exc_url": "http://localhost:8080/_admin/settings/exceptions/139718459226384" |
|
389 | 388 | } |
|
390 | 389 | error : null |
|
391 | 390 | """ |
|
392 | 391 | if not has_superadmin_permission(apiuser): |
|
393 | 392 | raise JSONRPCForbidden() |
|
394 | 393 | |
|
395 | 394 | prefix = Optional.extract(prefix) |
|
396 | 395 | exc_id = exc_tracking.generate_id() |
|
397 | 396 | |
|
398 | 397 | try: |
|
399 | 398 | exc_data = json.loads(exc_data_json) |
|
400 | 399 | except Exception: |
|
401 | 400 | log.error('Failed to parse JSON: %r', exc_data_json) |
|
402 | 401 | raise JSONRPCError('Failed to parse JSON data from exc_data_json field. ' |
|
403 | 402 | 'Please make sure it contains a valid JSON.') |
|
404 | 403 | |
|
405 | 404 | try: |
|
406 | 405 | exc_traceback = exc_data['exc_traceback'] |
|
407 | 406 | exc_type_name = exc_data['exc_type_name'] |
|
408 | 407 | exc_value = '' |
|
409 | 408 | except KeyError as err: |
|
410 | 409 | raise JSONRPCError( |
|
411 | 410 | f'Missing exc_traceback, or exc_type_name ' |
|
412 | 411 | f'in exc_data_json field. Missing: {err}') |
|
413 | 412 | |
|
414 | 413 | class ExcType: |
|
415 | 414 | __name__ = exc_type_name |
|
416 | 415 | |
|
417 | 416 | exc_info = (ExcType(), exc_value, exc_traceback) |
|
418 | 417 | |
|
419 | 418 | exc_tracking._store_exception( |
|
420 | 419 | exc_id=exc_id, exc_info=exc_info, prefix=prefix) |
|
421 | 420 | |
|
422 | 421 | exc_url = request.route_url( |
|
423 | 422 | 'admin_settings_exception_tracker_show', exception_id=exc_id) |
|
424 | 423 | return {'exc_id': exc_id, 'exc_url': exc_url} |
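
The `exc_data_json` argument that `store_exception` parses has the shape given in its docstring; a small sketch with purely illustrative values:

```python
import json

# Payload shape from the store_exception docstring; both keys are
# required, as the KeyError handling in the endpoint shows.
exc_data = {
    'exc_traceback': 'Value `1` is not allowed',
    'exc_type_name': 'ValueError',
}
exc_data_json = json.dumps(exc_data)

# The endpoint json.loads() this string before extracting the two keys.
parsed = json.loads(exc_data_json)
```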
@@ -1,947 +1,987 b'' | |||
|
1 | 1 | # Copyright (C) 2016-2023 RhodeCode GmbH |
|
2 | 2 | # |
|
3 | 3 | # This program is free software: you can redistribute it and/or modify |
|
4 | 4 | # it under the terms of the GNU Affero General Public License, version 3 |
|
5 | 5 | # (only), as published by the Free Software Foundation. |
|
6 | 6 | # |
|
7 | 7 | # This program is distributed in the hope that it will be useful, |
|
8 | 8 | # but WITHOUT ANY WARRANTY; without even the implied warranty of |
|
9 | 9 | # MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the |
|
10 | 10 | # GNU General Public License for more details. |
|
11 | 11 | # |
|
12 | 12 | # You should have received a copy of the GNU Affero General Public License |
|
13 | 13 | # along with this program. If not, see <http://www.gnu.org/licenses/>. |
|
14 | 14 | # |
|
15 | 15 | # This program is dual-licensed. If you wish to learn more about the |
|
16 | 16 | # RhodeCode Enterprise Edition, including its added features, Support services, |
|
17 | 17 | # and proprietary license terms, please see https://rhodecode.com/licenses/ |
|
18 | 18 | |
|
19 | 19 | import time |
|
20 | 20 | import logging |
|
21 | 21 | import operator |
|
22 | 22 | |
|
23 | 23 | from pyramid.httpexceptions import HTTPFound, HTTPForbidden, HTTPBadRequest |
|
24 | 24 | |
|
25 | 25 | from rhodecode.lib import helpers as h, diffs, rc_cache |
|
26 | 26 | from rhodecode.lib.str_utils import safe_str |
|
27 | 27 | from rhodecode.lib.utils import repo_name_slug |
|
28 | 28 | from rhodecode.lib.utils2 import ( |
|
29 | 29 | StrictAttributeDict, |
|
30 | 30 | str2bool, |
|
31 | 31 | safe_int, |
|
32 | 32 | datetime_to_time, |
|
33 | 33 | ) |
|
34 | 34 | from rhodecode.lib.markup_renderer import MarkupRenderer, relative_links |
|
35 | 35 | from rhodecode.lib.vcs.backends.base import EmptyCommit |
|
36 | 36 | from rhodecode.lib.vcs.exceptions import RepositoryRequirementError |
|
37 | 37 | from rhodecode.model import repo |
|
38 | 38 | from rhodecode.model import repo_group |
|
39 | 39 | from rhodecode.model import user_group |
|
40 | 40 | from rhodecode.model import user |
|
41 | 41 | from rhodecode.model.db import User |
|
42 | 42 | from rhodecode.model.scm import ScmModel |
|
43 | 43 | from rhodecode.model.settings import VcsSettingsModel, IssueTrackerSettingsModel |
|
44 | 44 | from rhodecode.model.repo import ReadmeFinder |
|
45 | 45 | |
|
46 | 46 | log = logging.getLogger(__name__) |
|
47 | 47 | |
|
48 | 48 | |
|
49 | 49 | ADMIN_PREFIX: str = "/_admin" |
|
50 | 50 | STATIC_FILE_PREFIX: str = "/_static" |
|
51 | 51 | |
|
52 | 52 | URL_NAME_REQUIREMENTS = { |
|
53 | 53 | # group name can have a slash in them, but they must not end with a slash |
|
54 | 54 | "group_name": r".*?[^/]", |
|
55 | 55 | "repo_group_name": r".*?[^/]", |
|
56 | 56 | # repo names can have a slash in them, but they must not end with a slash |
|
57 | 57 | "repo_name": r".*?[^/]", |
|
58 | 58 | # file path eats up everything at the end |
|
59 | 59 | "f_path": r".*", |
|
60 | 60 | # reference types |
|
61 | 61 | "source_ref_type": r"(branch|book|tag|rev|\%\(source_ref_type\)s)", |
|
62 | 62 | "target_ref_type": r"(branch|book|tag|rev|\%\(target_ref_type\)s)", |
|
63 | 63 | } |
|
64 | 64 | |
|
65 | 65 | |
|
66 | 66 | def add_route_with_slash(config, name, pattern, **kw): |
|
67 | 67 | config.add_route(name, pattern, **kw) |
|
68 | 68 | if not pattern.endswith("/"): |
|
69 | 69 | config.add_route(name + "_slash", pattern + "/", **kw) |
|
70 | 70 | |
|
71 | 71 | |
|
72 | 72 | def add_route_requirements(route_path, requirements=None): |
|
73 | 73 | """ |
|
74 | 74 | Adds regex requirements to pyramid routes using a mapping dict |
|
75 | 75 | e.g:: |
|
76 | 76 | add_route_requirements('{repo_name}/settings') |
|
77 | 77 | """ |
|
78 | 78 | requirements = requirements or URL_NAME_REQUIREMENTS |
|
79 | 79 | for key, regex in list(requirements.items()): |
|
80 | 80 | route_path = route_path.replace("{%s}" % key, "{%s:%s}" % (key, regex)) |
|
81 | 81 | return route_path |
|
82 | 82 | |
|
83 | 83 | |
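The `add_route_requirements` helper above is a plain string transform and can be exercised standalone. A minimal sketch, with the requirements mapping trimmed to two keys copied from `URL_NAME_REQUIREMENTS`:

```python
# Hypothetical standalone sketch: expand pyramid route placeholders like
# {repo_name} into {repo_name:<regex>} requirement syntax.
REQUIREMENTS = {
    "repo_name": r".*?[^/]",  # may contain slashes, must not end with one
    "f_path": r".*",          # eats everything at the end
}

def add_route_requirements(route_path, requirements=None):
    requirements = requirements or REQUIREMENTS
    for key, regex in requirements.items():
        route_path = route_path.replace("{%s}" % key, "{%s:%s}" % (key, regex))
    return route_path

print(add_route_requirements("/{repo_name}/files/{f_path}"))
# -> /{repo_name:.*?[^/]}/files/{f_path:.*}
```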
|
84 | 84 | def get_format_ref_id(repo): |
|
85 | 85 | """Returns a `repo` specific reference formatter function""" |
|
86 | 86 | if h.is_svn(repo): |
|
87 | 87 | return _format_ref_id_svn |
|
88 | 88 | else: |
|
89 | 89 | return _format_ref_id |
|
90 | 90 | |
|
91 | 91 | |
|
92 | 92 | def _format_ref_id(name, raw_id): |
|
93 | 93 | """Default formatting of a given reference `name`""" |
|
94 | 94 | return name |
|
95 | 95 | |
|
96 | 96 | |
|
97 | 97 | def _format_ref_id_svn(name, raw_id): |
|
98 | 98 | """Special way of formatting a reference for Subversion including path""" |
|
99 | 99 | return f"{name}@{raw_id}" |
|
100 | 100 | |
|
101 | 101 | |
|
102 | 102 | class TemplateArgs(StrictAttributeDict): |
|
103 | 103 | pass |
|
104 | 104 | |
|
105 | 105 | |
|
106 | 106 | class BaseAppView(object): |
|
107 | DONT_CHECKOUT_VIEWS = ["channelstream_connect", "ops_ping"] | |
|
108 | EXTRA_VIEWS_TO_IGNORE = ['login', 'register', 'logout'] | |
|
109 | SETUP_2FA_VIEW = 'setup_2fa' | |
|
110 | VERIFY_2FA_VIEW = 'check_2fa' | |
|
111 | ||
|
107 | 112 | def __init__(self, context, request): |
|
108 | 113 | self.request = request |
|
109 | 114 | self.context = context |
|
110 | 115 | self.session = request.session |
|
111 | 116 | if not hasattr(request, "user"): |
|
112 | 117 | # NOTE(marcink): edge case, we ended up in matched route |
|
113 | 118 | # but probably out of web-app context, e.g. API CALL/VCS CALL |
|
114 | 119 | if hasattr(request, "vcs_call") or hasattr(request, "rpc_method"): |
|
115 | 120 | log.warning("Unable to process request `%s` in this scope", request) |
|
116 | 121 | raise HTTPBadRequest() |
|
117 | 122 | |
|
118 | 123 | self._rhodecode_user = request.user # auth user |
|
119 | 124 | self._rhodecode_db_user = self._rhodecode_user.get_instance() |
|
125 | self.user_data = self._rhodecode_db_user.user_data if self._rhodecode_db_user else {} | |
|
120 | 126 | self._maybe_needs_password_change( |
|
121 | 127 | request.matched_route.name, self._rhodecode_db_user |
|
122 | 128 | ) |
|
129 | self._maybe_needs_2fa_configuration( | |
|
130 | request.matched_route.name, self._rhodecode_db_user | |
|
131 | ) | |
|
132 | self._maybe_needs_2fa_check( | |
|
133 | request.matched_route.name, self._rhodecode_db_user | |
|
134 | ) | |
|
123 | 135 | |
|
124 | 136 | def _maybe_needs_password_change(self, view_name, user_obj): |
|
125 | dont_check_views = ["channelstream_connect", "ops_ping"] | |
|
126 | if view_name in dont_check_views: | |
|
137 | if view_name in self.DONT_CHECKOUT_VIEWS: | |
|
127 | 138 | return |
|
128 | 139 | |
|
129 | 140 | log.debug( |
|
130 | 141 | "Checking if user %s needs password change on view %s", user_obj, view_name |
|
131 | 142 | ) |
|
132 | 143 | |
|
133 | 144 | skip_user_views = [ |
|
134 | 145 | "logout", |
|
135 | 146 | "login", |
|
147 | "check_2fa", | |
|
136 | 148 | "my_account_password", |
|
137 | 149 | "my_account_password_update", |
|
138 | 150 | ] |
|
139 | 151 | |
|
140 | 152 | if not user_obj: |
|
141 | 153 | return |
|
142 | 154 | |
|
143 | 155 | if user_obj.username == User.DEFAULT_USER: |
|
144 | 156 | return |
|
145 | 157 | |
|
146 | 158 | now = time.time() |
|
147 | should_change =

159 | should_change = self.user_data.get("force_password_change") | |
|
148 | 160 | change_after = safe_int(should_change) or 0 |
|
149 | 161 | if should_change and now > change_after: |
|
150 | 162 | log.debug("User %s requires password change", user_obj) |
|
151 | 163 | h.flash( |
|
152 | 164 | "You are required to change your password", |
|
153 | 165 | "warning", |
|
154 | 166 | ignore_duplicate=True, |
|
155 | 167 | ) |
|
156 | 168 | |
|
157 | 169 | if view_name not in skip_user_views: |
|
158 | 170 | raise HTTPFound(self.request.route_path("my_account_password")) |
|
159 | 171 | |
|
172 | def _maybe_needs_2fa_configuration(self, view_name, user_obj): | |
|
173 | if view_name in self.DONT_CHECKOUT_VIEWS + self.EXTRA_VIEWS_TO_IGNORE: | |
|
174 | return | |
|
175 | ||
|
176 | if not user_obj: | |
|
177 | return | |
|
178 | ||
|
179 | if user_obj.needs_2fa_configure and view_name != self.SETUP_2FA_VIEW: | |
|
180 | h.flash( | |
|
181 | "You are required to configure 2FA", | |
|
182 | "warning", | |
|
183 | ignore_duplicate=False, | |
|
184 | ) | |
|
185 | # Special case for users created "on the fly" (ldap case for new user) | |
|
186 | user_obj.check_2fa_required = False | |
|
187 | raise HTTPFound(self.request.route_path(self.SETUP_2FA_VIEW)) | |
|
188 | ||
|
189 | def _maybe_needs_2fa_check(self, view_name, user_obj): | |
|
190 | if view_name in self.DONT_CHECKOUT_VIEWS + self.EXTRA_VIEWS_TO_IGNORE: | |
|
191 | return | |
|
192 | ||
|
193 | if not user_obj: | |
|
194 | return | |
|
195 | ||
|
196 | if user_obj.check_2fa_required and view_name != self.VERIFY_2FA_VIEW: | |
|
197 | raise HTTPFound(self.request.route_path(self.VERIFY_2FA_VIEW)) | |
|
198 | ||
|
160 | 199 | def _log_creation_exception(self, e, repo_name): |
|
161 | 200 | _ = self.request.translate |
|
162 | 201 | reason = None |
|
163 | 202 | if len(e.args) == 2: |
|
164 | 203 | reason = e.args[1] |
|
165 | 204 | |
|
166 | 205 | if reason == "INVALID_CERTIFICATE": |
|
167 | 206 | log.exception("Exception creating a repository: invalid certificate") |
|
168 | 207 | msg = _("Error creating repository %s: invalid certificate") % repo_name |
|
169 | 208 | else: |
|
170 | 209 | log.exception("Exception creating a repository") |
|
171 | 210 | msg = _("Error creating repository %s") % repo_name |
|
172 | 211 | return msg |
|
173 | 212 | |
|
174 | 213 | def _get_local_tmpl_context(self, include_app_defaults=True): |
|
175 | 214 | c = TemplateArgs() |
|
176 | 215 | c.auth_user = self.request.user |
|
177 | 216 | # TODO(marcink): migrate the usage of c.rhodecode_user to c.auth_user |
|
178 | 217 | c.rhodecode_user = self.request.user |
|
179 | 218 | |
|
180 | 219 | if include_app_defaults: |
|
181 | 220 | from rhodecode.lib.base import attach_context_attributes |
|
182 | 221 | |
|
183 | 222 | attach_context_attributes(c, self.request, self.request.user.user_id) |
|
184 | 223 | |
|
185 | 224 | c.is_super_admin = c.auth_user.is_admin |
|
186 | 225 | |
|
187 | 226 | c.can_create_repo = c.is_super_admin |
|
188 | 227 | c.can_create_repo_group = c.is_super_admin |
|
189 | 228 | c.can_create_user_group = c.is_super_admin |
|
190 | 229 | |
|
191 | 230 | c.is_delegated_admin = False |
|
192 | 231 | |
|
193 | 232 | if not c.auth_user.is_default and not c.is_super_admin: |
|
194 | 233 | c.can_create_repo = h.HasPermissionAny("hg.create.repository")( |
|
195 | 234 | user=self.request.user |
|
196 | 235 | ) |
|
197 | 236 | repositories = c.auth_user.repositories_admin or c.can_create_repo |
|
198 | 237 | |
|
199 | 238 | c.can_create_repo_group = h.HasPermissionAny("hg.repogroup.create.true")( |
|
200 | 239 | user=self.request.user |
|
201 | 240 | ) |
|
202 | 241 | repository_groups = ( |
|
203 | 242 | c.auth_user.repository_groups_admin or c.can_create_repo_group |
|
204 | 243 | ) |
|
205 | 244 | |
|
206 | 245 | c.can_create_user_group = h.HasPermissionAny("hg.usergroup.create.true")( |
|
207 | 246 | user=self.request.user |
|
208 | 247 | ) |
|
209 | 248 | user_groups = c.auth_user.user_groups_admin or c.can_create_user_group |
|
210 | 249 | # delegated admin can create, or manage some objects |
|
211 | 250 | c.is_delegated_admin = repositories or repository_groups or user_groups |
|
212 | 251 | return c |
|
213 | 252 | |
|
214 | 253 | def _get_template_context(self, tmpl_args, **kwargs): |
|
215 | 254 | local_tmpl_args = {"defaults": {}, "errors": {}, "c": tmpl_args} |
|
216 | 255 | local_tmpl_args.update(kwargs) |
|
217 | 256 | return local_tmpl_args |
|
218 | 257 | |
|
219 | 258 | def load_default_context(self): |
|
220 | 259 | """ |
|
221 | 260 | example: |
|
222 | 261 | |
|
223 | 262 | def load_default_context(self): |
|
224 | 263 | c = self._get_local_tmpl_context() |
|
225 | 264 | c.custom_var = 'foobar' |
|
226 | 265 | |
|
227 | 266 | return c |
|
228 | 267 | """ |
|
229 | 268 | raise NotImplementedError("Needs implementation in view class") |
|
230 | 269 | |
|
231 | 270 | |
|
232 | 271 | class RepoAppView(BaseAppView): |
|
233 | 272 | def __init__(self, context, request): |
|
234 | 273 | super().__init__(context, request) |
|
235 | 274 | self.db_repo = request.db_repo |
|
236 | 275 | self.db_repo_name = self.db_repo.repo_name |
|
237 | 276 | self.db_repo_pull_requests = ScmModel().get_pull_requests(self.db_repo) |
|
238 | 277 | self.db_repo_artifacts = ScmModel().get_artifacts(self.db_repo) |
|
239 | 278 | self.db_repo_patterns = IssueTrackerSettingsModel(repo=self.db_repo) |
|
240 | 279 | |
|
241 | 280 | def _handle_missing_requirements(self, error): |
|
242 | 281 | log.error( |
|
243 | 282 | "Requirements are missing for repository %s: %s", |
|
244 | 283 | self.db_repo_name, |
|
245 | 284 | safe_str(error), |
|
246 | 285 | ) |
|
247 | 286 | |
|
248 | 287 | def _prepare_and_set_clone_url(self, c): |
|
249 | 288 | username = "" |
|
250 | 289 | if self._rhodecode_user.username != User.DEFAULT_USER: |
|
251 | 290 | username = self._rhodecode_user.username |
|
252 | 291 | |
|
253 | 292 | _def_clone_uri = c.clone_uri_tmpl |
|
254 | 293 | _def_clone_uri_id = c.clone_uri_id_tmpl |
|
255 | 294 | _def_clone_uri_ssh = c.clone_uri_ssh_tmpl |
|
256 | 295 | |
|
257 | 296 | c.clone_repo_url = self.db_repo.clone_url( |
|
258 | 297 | user=username, uri_tmpl=_def_clone_uri |
|
259 | 298 | ) |
|
260 | 299 | c.clone_repo_url_id = self.db_repo.clone_url( |
|
261 | 300 | user=username, uri_tmpl=_def_clone_uri_id |
|
262 | 301 | ) |
|
263 | 302 | c.clone_repo_url_ssh = self.db_repo.clone_url( |
|
264 | 303 | uri_tmpl=_def_clone_uri_ssh, ssh=True |
|
265 | 304 | ) |
|
266 | 305 | |
|
267 | 306 | def _get_local_tmpl_context(self, include_app_defaults=True): |
|
268 | 307 | _ = self.request.translate |
|
269 | 308 | c = super()._get_local_tmpl_context(include_app_defaults=include_app_defaults) |
|
270 | 309 | |
|
271 | 310 | # register common vars for this type of view |
|
272 | 311 | c.rhodecode_db_repo = self.db_repo |
|
273 | 312 | c.repo_name = self.db_repo_name |
|
274 | 313 | c.repository_pull_requests = self.db_repo_pull_requests |
|
275 | 314 | c.repository_artifacts = self.db_repo_artifacts |
|
276 | 315 | c.repository_is_user_following = ScmModel().is_following_repo( |
|
277 | 316 | self.db_repo_name, self._rhodecode_user.user_id |
|
278 | 317 | ) |
|
279 | 318 | self.path_filter = PathFilter(None) |
|
280 | 319 | |
|
281 | 320 | c.repository_requirements_missing = {} |
|
282 | 321 | try: |
|
283 | 322 | self.rhodecode_vcs_repo = self.db_repo.scm_instance() |
|
284 | 323 | # NOTE(marcink): |
|
285 | 324 | # comparison to None since if it's an object __bool__ is expensive to |
|
286 | 325 | # calculate |
|
287 | 326 | if self.rhodecode_vcs_repo is not None: |
|
288 | 327 | path_perms = self.rhodecode_vcs_repo.get_path_permissions( |
|
289 | 328 | c.auth_user.username |
|
290 | 329 | ) |
|
291 | 330 | self.path_filter = PathFilter(path_perms) |
|
292 | 331 | except RepositoryRequirementError as e: |
|
293 | 332 | c.repository_requirements_missing = {"error": str(e)} |
|
294 | 333 | self._handle_missing_requirements(e) |
|
295 | 334 | self.rhodecode_vcs_repo = None |
|
296 | 335 | |
|
297 | 336 | c.path_filter = self.path_filter # used by atom_feed_entry.mako |
|
298 | 337 | |
|
299 | 338 | if self.rhodecode_vcs_repo is None: |
|
300 | 339 | # unable to fetch this repo as vcs instance, report back to user |
|
301 | 340 | log.debug( |
|
302 | 341 | "Repository was not found on filesystem, check if it exists or is not damaged" |
|
303 | 342 | ) |
|
304 | 343 | h.flash( |
|
305 | 344 | _( |
|
306 | 345 | "The repository `%(repo_name)s` cannot be loaded in filesystem. " |
|
307 | 346 | "Please check if it exist, or is not damaged." |
|
308 | 347 | ) |
|
309 | 348 | % {"repo_name": c.repo_name}, |
|
310 | 349 | category="error", |
|
311 | 350 | ignore_duplicate=True, |
|
312 | 351 | ) |
|
313 | 352 | if c.repository_requirements_missing: |
|
314 | 353 | route = self.request.matched_route.name |
|
315 | 354 | if route.startswith(("edit_repo", "repo_summary")): |
|
316 | 355 | # allow summary and edit repo on missing requirements |
|
317 | 356 | return c |
|
318 | 357 | |
|
319 | 358 | raise HTTPFound( |
|
320 | 359 | h.route_path("repo_summary", repo_name=self.db_repo_name) |
|
321 | 360 | ) |
|
322 | 361 | |
|
323 | 362 | else: # redirect if we don't show missing requirements |
|
324 | 363 | raise HTTPFound(h.route_path("home")) |
|
325 | 364 | |
|
326 | 365 | c.has_origin_repo_read_perm = False |
|
327 | 366 | if self.db_repo.fork: |
|
328 | 367 | c.has_origin_repo_read_perm = h.HasRepoPermissionAny( |
|
329 | 368 | "repository.write", "repository.read", "repository.admin" |
|
330 | 369 | )(self.db_repo.fork.repo_name, "summary fork link") |
|
331 | 370 | |
|
332 | 371 | return c |
|
333 | 372 | |
|
334 | 373 | def _get_f_path_unchecked(self, matchdict, default=None): |
|
335 | 374 | """ |
|
336 | 375 | Should only be used by redirects, everything else should call _get_f_path |
|
337 | 376 | """ |
|
338 | 377 | f_path = matchdict.get("f_path") |
|
339 | 378 | if f_path: |
|
340 | 379 | # fix for multiple initial slashes that causes errors for GIT |
|
341 | 380 | return f_path.lstrip("/") |
|
342 | 381 | |
|
343 | 382 | return default |
|
344 | 383 | |
|
345 | 384 | def _get_f_path(self, matchdict, default=None): |
|
346 | 385 | f_path_match = self._get_f_path_unchecked(matchdict, default) |
|
347 | 386 | return self.path_filter.assert_path_permissions(f_path_match) |
|
348 | 387 | |
|
349 | 388 | def _get_general_setting(self, target_repo, settings_key, default=False): |
|
350 | 389 | settings_model = VcsSettingsModel(repo=target_repo) |
|
351 | 390 | settings = settings_model.get_general_settings() |
|
352 | 391 | return settings.get(settings_key, default) |
|
353 | 392 | |
|
354 | 393 | def _get_repo_setting(self, target_repo, settings_key, default=False): |
|
355 | 394 | settings_model = VcsSettingsModel(repo=target_repo) |
|
356 | 395 | settings = settings_model.get_repo_settings_inherited() |
|
357 | 396 | return settings.get(settings_key, default) |
|
358 | 397 | |
|
359 | 398 | def _get_readme_data(self, db_repo, renderer_type, commit_id=None, path="/"): |
|
360 | 399 | log.debug("Looking for README file at path %s", path) |
|
361 | 400 | if commit_id: |
|
362 | 401 | landing_commit_id = commit_id |
|
363 | 402 | else: |
|
364 | 403 | landing_commit = db_repo.get_landing_commit() |
|
365 | 404 | if isinstance(landing_commit, EmptyCommit): |
|
366 | 405 | return None, None |
|
367 | 406 | landing_commit_id = landing_commit.raw_id |
|
368 | 407 | |
|
369 | 408 | cache_namespace_uid = f"repo.{db_repo.repo_id}" |
|
370 | 409 | region = rc_cache.get_or_create_region( |
|
371 | 410 | "cache_repo", cache_namespace_uid, use_async_runner=False |
|
372 | 411 | ) |
|
373 | 412 | start = time.time() |
|
374 | 413 | |
|
375 | 414 | @region.conditional_cache_on_arguments(namespace=cache_namespace_uid) |
|
376 | 415 | def generate_repo_readme( |
|
377 | 416 | repo_id, _commit_id, _repo_name, _readme_search_path, _renderer_type |
|
378 | 417 | ): |
|
379 | 418 | readme_data = None |
|
380 | 419 | readme_filename = None |
|
381 | 420 | |
|
382 | 421 | commit = db_repo.get_commit(_commit_id) |
|
383 | 422 | log.debug("Searching for a README file at commit %s.", _commit_id) |
|
384 | 423 | readme_node = ReadmeFinder(_renderer_type).search( |
|
385 | 424 | commit, path=_readme_search_path |
|
386 | 425 | ) |
|
387 | 426 | |
|
388 | 427 | if readme_node: |
|
389 | 428 | log.debug("Found README node: %s", readme_node) |
|
390 | 429 | |
|
391 | 430 | relative_urls = { |
|
392 | 431 | "raw": h.route_path( |
|
393 | 432 | "repo_file_raw", |
|
394 | 433 | repo_name=_repo_name, |
|
395 | 434 | commit_id=commit.raw_id, |
|
396 | 435 | f_path=readme_node.path, |
|
397 | 436 | ), |
|
398 | 437 | "standard": h.route_path( |
|
399 | 438 | "repo_files", |
|
400 | 439 | repo_name=_repo_name, |
|
401 | 440 | commit_id=commit.raw_id, |
|
402 | 441 | f_path=readme_node.path, |
|
403 | 442 | ), |
|
404 | 443 | } |
|
405 | 444 | |
|
406 | 445 | readme_data = self._render_readme_or_none( |
|
407 | 446 | commit, readme_node, relative_urls |
|
408 | 447 | ) |
|
409 | 448 | readme_filename = readme_node.str_path |
|
410 | 449 | |
|
411 | 450 | return readme_data, readme_filename |
|
412 | 451 | |
|
413 | 452 | readme_data, readme_filename = generate_repo_readme( |
|
414 | 453 | db_repo.repo_id, |
|
415 | 454 | landing_commit_id, |
|
416 | 455 | db_repo.repo_name, |
|
417 | 456 | path, |
|
418 | 457 | renderer_type, |
|
419 | 458 | ) |
|
420 | 459 | |
|
421 | 460 | compute_time = time.time() - start |
|
422 | 461 | log.debug( |
|
423 | 462 | "Repo README for path %s generated and computed in %.4fs", |
|
424 | 463 | path, |
|
425 | 464 | compute_time, |
|
426 | 465 | ) |
|
427 | 466 | return readme_data, readme_filename |
|
428 | 467 | |
|
429 | 468 | def _render_readme_or_none(self, commit, readme_node, relative_urls): |
|
430 | 469 | log.debug("Found README file `%s` rendering...", readme_node.path) |
|
431 | 470 | renderer = MarkupRenderer() |
|
432 | 471 | try: |
|
433 | 472 | html_source = renderer.render( |
|
434 | 473 | readme_node.str_content, filename=readme_node.path |
|
435 | 474 | ) |
|
436 | 475 | if relative_urls: |
|
437 | 476 | return relative_links(html_source, relative_urls) |
|
438 | 477 | return html_source |
|
439 | 478 | except Exception: |
|
440 | 479 | log.exception("Exception while trying to render the README") |
|
441 | 480 | |
|
442 | 481 | def get_recache_flag(self): |
|
443 | 482 | for flag_name in ["force_recache", "force-recache", "no-cache"]: |
|
444 | 483 | flag_val = self.request.GET.get(flag_name) |
|
445 | 484 | if str2bool(flag_val): |
|
446 | 485 | return True |
|
447 | 486 | return False |
|
448 | 487 | |
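The `get_recache_flag` check above only depends on the query string. A minimal sketch, with a stand-in for `rhodecode.lib.utils2.str2bool` (assumed semantics) and a plain dict in place of `request.GET`:

```python
# Simplified stand-in for rhodecode.lib.utils2.str2bool; the accepted
# truthy spellings here are an assumption for this sketch.
def str2bool(v):
    return str(v).strip().lower() in ("true", "yes", "on", "y", "t", "1")

def get_recache_flag(get_params):
    # any of the three flags set to a truthy value forces a cache refresh
    for flag_name in ("force_recache", "force-recache", "no-cache"):
        if str2bool(get_params.get(flag_name)):
            return True
    return False
```

Absent flags resolve to `str2bool(None)`, which is false, so the default is to serve from cache.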
|
449 | 488 | def get_commit_preload_attrs(cls): |
|
450 | 489 | pre_load = [ |
|
451 | 490 | "author", |
|
452 | 491 | "branch", |
|
453 | 492 | "date", |
|
454 | 493 | "message", |
|
455 | 494 | "parents", |
|
456 | 495 | "obsolete", |
|
457 | 496 | "phase", |
|
458 | 497 | "hidden", |
|
459 | 498 | ] |
|
460 | 499 | return pre_load |
|
461 | 500 | |
|
462 | 501 | |
|
463 | 502 | class PathFilter(object): |
|
464 | 503 | # Expects an instance of BasePathPermissionChecker or None |
|
465 | 504 | def __init__(self, permission_checker): |
|
466 | 505 | self.permission_checker = permission_checker |
|
467 | 506 | |
|
468 | 507 | def assert_path_permissions(self, path): |
|
469 | 508 | if self.path_access_allowed(path): |
|
470 | 509 | return path |
|
471 | 510 | raise HTTPForbidden() |
|
472 | 511 | |
|
473 | 512 | def path_access_allowed(self, path): |
|
474 | 513 | log.debug("Checking ACL permissions for PathFilter for `%s`", path) |
|
475 | 514 | if self.permission_checker: |
|
476 | 515 | has_access = path and self.permission_checker.has_access(path) |
|
477 | 516 | log.debug( |
|
478 | 517 | "ACL Permissions checker enabled, ACL Check has_access: %s", has_access |
|
479 | 518 | ) |
|
480 | 519 | return has_access |
|
481 | 520 | |
|
482 | 521 | log.debug("ACL permissions checker not enabled, skipping...") |
|
483 | 522 | return True |
|
484 | 523 | |
|
485 | 524 | def filter_patchset(self, patchset): |
|
486 | 525 | if not self.permission_checker or not patchset: |
|
487 | 526 | return patchset, False |
|
488 | 527 | had_filtered = False |
|
489 | 528 | filtered_patchset = [] |
|
490 | 529 | for patch in patchset: |
|
491 | 530 | filename = patch.get("filename", None) |
|
492 | 531 | if not filename or self.permission_checker.has_access(filename): |
|
493 | 532 | filtered_patchset.append(patch) |
|
494 | 533 | else: |
|
495 | 534 | had_filtered = True |
|
496 | 535 | if had_filtered: |
|
497 | 536 | if isinstance(patchset, diffs.LimitedDiffContainer): |
|
498 | 537 | filtered_patchset = diffs.LimitedDiffContainer( |
|
499 | 538 | patchset.diff_limit, patchset.cur_diff_size, filtered_patchset |
|
500 | 539 | ) |
|
501 | 540 | return filtered_patchset, True |
|
502 | 541 | else: |
|
503 | 542 | return patchset, False |
|
504 | 543 | |
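The ACL filtering in `filter_patchset` above can be sketched in isolation with a hypothetical allow-list checker (the real `BasePathPermissionChecker` is not reproduced here); patches whose filename fails the check are dropped and the caller is told whether anything was hidden:

```python
# Hypothetical checker standing in for BasePathPermissionChecker.
class AllowListChecker:
    def __init__(self, allowed):
        self.allowed = set(allowed)

    def has_access(self, path):
        return path in self.allowed

def filter_patchset(checker, patchset):
    # no checker configured, or nothing to filter: pass through unchanged
    if not checker or not patchset:
        return patchset, False
    kept = [p for p in patchset
            if not p.get("filename") or checker.has_access(p["filename"])]
    if len(kept) != len(patchset):
        return kept, True   # some entries were filtered out
    return patchset, False
```

This mirrors the structure above minus the `LimitedDiffContainer` re-wrapping, which only matters for size-limited diffs.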
|
505 | 544 | def render_patchset_filtered( |
|
506 | 545 | self, diffset, patchset, source_ref=None, target_ref=None |
|
507 | 546 | ): |
|
508 | 547 | filtered_patchset, has_hidden_changes = self.filter_patchset(patchset) |
|
509 | 548 | result = diffset.render_patchset( |
|
510 | 549 | filtered_patchset, source_ref=source_ref, target_ref=target_ref |
|
511 | 550 | ) |
|
512 | 551 | result.has_hidden_changes = has_hidden_changes |
|
513 | 552 | return result |
|
514 | 553 | |
|
515 | 554 | def get_raw_patch(self, diff_processor): |
|
516 | 555 | if self.permission_checker is None: |
|
517 | 556 | return diff_processor.as_raw() |
|
518 | 557 | elif self.permission_checker.has_full_access: |
|
519 | 558 | return diff_processor.as_raw() |
|
520 | 559 | else: |
|
521 | 560 | return "# Repository has user-specific filters, raw patch generation is disabled." |
|
522 | 561 | |
|
523 | 562 | @property |
|
524 | 563 | def is_enabled(self): |
|
525 | 564 | return self.permission_checker is not None |
|
526 | 565 | |
|
527 | 566 | |
|
528 | 567 | class RepoGroupAppView(BaseAppView): |
|
529 | 568 | def __init__(self, context, request): |
|
530 | 569 | super().__init__(context, request) |
|
531 | 570 | self.db_repo_group = request.db_repo_group |
|
532 | 571 | self.db_repo_group_name = self.db_repo_group.group_name |
|
533 | 572 | |
|
534 | 573 | def _get_local_tmpl_context(self, include_app_defaults=True): |
|
535 | 574 | _ = self.request.translate |
|
536 | 575 | c = super()._get_local_tmpl_context(include_app_defaults=include_app_defaults) |
|
537 | 576 | c.repo_group = self.db_repo_group |
|
538 | 577 | return c |
|
539 | 578 | |
|
540 | 579 | def _revoke_perms_on_yourself(self, form_result): |
|
541 | 580 | _updates = [ |
|
542 | 581 | u |
|
543 | 582 | for u in form_result["perm_updates"] |
|
544 | 583 | if self._rhodecode_user.user_id == int(u[0]) |
|
545 | 584 | ] |
|
546 | 585 | _additions = [ |
|
547 | 586 | u |
|
548 | 587 | for u in form_result["perm_additions"] |
|
549 | 588 | if self._rhodecode_user.user_id == int(u[0]) |
|
550 | 589 | ] |
|
551 | 590 | _deletions = [ |
|
552 | 591 | u |
|
553 | 592 | for u in form_result["perm_deletions"] |
|
554 | 593 | if self._rhodecode_user.user_id == int(u[0]) |
|
555 | 594 | ] |
|
556 | 595 | admin_perm = "group.admin" |
|
557 | 596 | if ( |
|
558 | 597 | _updates |
|
559 | 598 | and _updates[0][1] != admin_perm |
|
560 | 599 | or _additions |
|
561 | 600 | and _additions[0][1] != admin_perm |
|
562 | 601 | or _deletions |
|
563 | 602 | and _deletions[0][1] != admin_perm |
|
564 | 603 | ): |
|
565 | 604 | return True |
|
566 | 605 | return False |
|
567 | 606 | |
|
568 | 607 | |
|
569 | 608 | class UserGroupAppView(BaseAppView): |
|
570 | 609 | def __init__(self, context, request): |
|
571 | 610 | super().__init__(context, request) |
|
572 | 611 | self.db_user_group = request.db_user_group |
|
573 | 612 | self.db_user_group_name = self.db_user_group.users_group_name |
|
574 | 613 | |
|
575 | 614 | |
|
576 | 615 | class UserAppView(BaseAppView): |
|
577 | 616 | def __init__(self, context, request): |
|
578 | 617 | super().__init__(context, request) |
|
579 | 618 | self.db_user = request.db_user |
|
580 | 619 | self.db_user_id = self.db_user.user_id |
|
581 | 620 | |
|
582 | 621 | _ = self.request.translate |
|
583 | 622 | if not request.db_user_supports_default: |
|
584 | 623 | if self.db_user.username == User.DEFAULT_USER: |
|
585 | 624 | h.flash( |
|
586 | 625 | _("Editing user `{}` is disabled.".format(User.DEFAULT_USER)), |
|
587 | 626 | category="warning", |
|
588 | 627 | ) |
|
589 | 628 | raise HTTPFound(h.route_path("users")) |
|
590 | 629 | |
|
591 | 630 | |
|
592 | 631 | class DataGridAppView(object): |
|
593 | 632 | """ |
|
594 | 633 | Common class to have re-usable grid rendering components |
|
595 | 634 | """ |
|
596 | 635 | |
|
597 | 636 | def _extract_ordering(self, request, column_map=None): |
|
598 | 637 | column_map = column_map or {} |
|
599 | 638 | column_index = safe_int(request.GET.get("order[0][column]")) |
|
600 | 639 | order_dir = request.GET.get("order[0][dir]", "desc") |
|
601 | 640 | order_by = request.GET.get("columns[%s][data][sort]" % column_index, "name_raw") |
|
602 | 641 | |
|
603 | 642 | # translate datatable to DB columns |
|
604 | 643 | order_by = column_map.get(order_by) or order_by |
|
605 | 644 | |
|
606 | 645 | search_q = request.GET.get("search[value]") |
|
607 | 646 | return search_q, order_by, order_dir |
|
608 | 647 | |
|
609 | 648 | def _extract_chunk(self, request): |
|
610 | 649 | start = safe_int(request.GET.get("start"), 0) |
|
611 | 650 | length = safe_int(request.GET.get("length"), 25) |
|
612 | 651 | draw = safe_int(request.GET.get("draw")) |
|
613 | 652 | return draw, start, length |
|
614 | 653 | |
|
615 | 654 | def _get_order_col(self, order_by, model): |
|
616 | 655 | if isinstance(order_by, str): |
|
617 | 656 | try: |
|
618 | 657 | return operator.attrgetter(order_by)(model) |
|
619 | 658 | except AttributeError: |
|
620 | 659 | return None |
|
621 | 660 | else: |
|
622 | 661 | return order_by |
|
623 | 662 | |
|
624 | 663 | |
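The DataTables parameter extraction done by `DataGridAppView` above can be sketched with a plain dict in place of `request.GET`; `safe_int` is re-implemented here under the assumed semantics of "int or default on failure":

```python
# Assumed behaviour of rhodecode.lib.utils2.safe_int for this sketch.
def safe_int(val, default=None):
    try:
        return int(val)
    except (TypeError, ValueError):
        return default

def extract_ordering(params, column_map=None):
    # DataTables sends the sort column as an index plus per-column metadata
    column_map = column_map or {}
    column_index = safe_int(params.get("order[0][column]"))
    order_dir = params.get("order[0][dir]", "desc")
    order_by = params.get("columns[%s][data][sort]" % column_index, "name_raw")
    order_by = column_map.get(order_by) or order_by  # datatable -> DB column
    search_q = params.get("search[value]")
    return search_q, order_by, order_dir

def extract_chunk(params):
    # paging window: draw counter, offset, page size
    start = safe_int(params.get("start"), 0)
    length = safe_int(params.get("length"), 25)
    draw = safe_int(params.get("draw"))
    return draw, start, length
```

The `column_map` lookup is what lets a grid expose friendly sort keys (hypothetical example: `last_activity`) while ordering on the actual DB column.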
|
625 | 664 | class BaseReferencesView(RepoAppView): |
|
626 | 665 | """ |
|
627 | 666 | Base for reference view for branches, tags and bookmarks. |
|
628 | 667 | """ |
|
629 | 668 | |
|
630 | 669 | def load_default_context(self): |
|
631 | 670 | c = self._get_local_tmpl_context() |
|
632 | 671 | return c |
|
633 | 672 | |
|
634 | 673 | def load_refs_context(self, ref_items, partials_template): |
|
635 | 674 | _render = self.request.get_partial_renderer(partials_template) |
|
636 | 675 | pre_load = ["author", "date", "message", "parents"] |
|
637 | 676 | |
|
638 | 677 | is_svn = h.is_svn(self.rhodecode_vcs_repo) |
|
639 | 678 | is_hg = h.is_hg(self.rhodecode_vcs_repo) |
|
640 | 679 | |
|
641 | 680 | format_ref_id = get_format_ref_id(self.rhodecode_vcs_repo) |
|
642 | 681 | |
|
643 | 682 | closed_refs = {} |
|
644 | 683 | if is_hg: |
|
645 | 684 | closed_refs = self.rhodecode_vcs_repo.branches_closed |
|
646 | 685 | |
|
647 | 686 | data = [] |
|
648 | 687 | for ref_name, commit_id in ref_items: |
|
649 | 688 | commit = self.rhodecode_vcs_repo.get_commit( |
|
650 | 689 | commit_id=commit_id, pre_load=pre_load |
|
651 | 690 | ) |
|
652 | 691 | closed = ref_name in closed_refs |
|
653 | 692 | |
|
654 | 693 | # TODO: johbo: Unify generation of reference links |
|
655 | 694 | use_commit_id = "/" in ref_name or is_svn |
|
656 | 695 | |
|
657 | 696 | if use_commit_id: |
|
658 | 697 | files_url = h.route_path( |
|
659 | 698 | "repo_files", |
|
660 | 699 | repo_name=self.db_repo_name, |
|
661 | 700 | f_path=ref_name if is_svn else "", |
|
662 | 701 | commit_id=commit_id, |
|
663 | 702 | _query=dict(at=ref_name), |
|
664 | 703 | ) |
|
665 | 704 | |
|
666 | 705 | else: |
|
667 | 706 | files_url = h.route_path( |
|
668 | 707 | "repo_files", |
|
669 | 708 | repo_name=self.db_repo_name, |
|
670 | 709 | f_path=ref_name if is_svn else "", |
|
671 | 710 | commit_id=ref_name, |
|
672 | 711 | _query=dict(at=ref_name), |
|
673 | 712 | ) |
|
674 | 713 | |
|
675 | 714 | data.append( |
|
676 | 715 | { |
|
677 | 716 | "name": _render("name", ref_name, files_url, closed), |
|
678 | 717 | "name_raw": ref_name, |
|
718 | "closed": closed, | |
|
679 | 719 | "date": _render("date", commit.date), |
|
680 | 720 | "date_raw": datetime_to_time(commit.date), |
|
681 | 721 | "author": _render("author", commit.author), |
|
682 | 722 | "commit": _render( |
|
683 | 723 | "commit", commit.message, commit.raw_id, commit.idx |
|
684 | 724 | ), |
|
685 | 725 | "commit_raw": commit.idx, |
|
686 | 726 | "compare": _render( |
|
687 | 727 | "compare", format_ref_id(ref_name, commit.raw_id) |
|
688 | 728 | ), |
|
689 | 729 | } |
|
690 | 730 | ) |
|
691 | 731 | |
|
692 | 732 | return data |
|
693 | 733 | |
|
694 | 734 | |
|
695 | 735 | class RepoRoutePredicate(object): |
|
696 | 736 | def __init__(self, val, config): |
|
697 | 737 | self.val = val |
|
698 | 738 | |
|
699 | 739 | def text(self): |
|
700 | 740 | return f"repo_route = {self.val}" |
|
701 | 741 | |
|
702 | 742 | phash = text |
|
703 | 743 | |
|
704 | 744 | def __call__(self, info, request): |
|
705 | 745 | if hasattr(request, "vcs_call"): |
|
706 | 746 | # skip vcs calls |
|
707 | 747 | return |
|
708 | 748 | |
|
709 | 749 | repo_name = info["match"]["repo_name"] |
|
710 | 750 | |
|
711 | 751 | repo_name_parts = repo_name.split("/") |
|
712 | 752 | repo_slugs = [x for x in (repo_name_slug(x) for x in repo_name_parts)] |
|
713 | 753 | |
|
714 | 754 | if repo_name_parts != repo_slugs: |
|
715 | 755 | # short-skip if the repo-name doesn't follow slug rule |
|
716 | 756 | log.warning( |
|
717 | 757 | "repo_name: %s is different than slug %s", repo_name_parts, repo_slugs |
|
718 | 758 | ) |
|
719 | 759 | return False |
|
720 | 760 | |
|
721 | 761 | repo_model = repo.RepoModel() |
|
722 | 762 | |
|
723 | 763 | by_name_match = repo_model.get_by_repo_name(repo_name, cache=False) |
|
724 | 764 | |
|
725 | 765 | def redirect_if_creating(route_info, db_repo): |
|
726 | 766 | skip_views = ["edit_repo_advanced_delete"] |
|
727 | 767 | route = route_info["route"] |
|
728 | 768 | # we should skip delete view so we can actually "remove" repositories |
|
729 | 769 | # if they get stuck in creating state. |
|
730 | 770 | if route.name in skip_views: |
|
731 | 771 | return |
|
732 | 772 | |
|
733 | 773 | if db_repo.repo_state in [repo.Repository.STATE_PENDING]: |
|
734 | 774 | repo_creating_url = request.route_path( |
|
735 | 775 | "repo_creating", repo_name=db_repo.repo_name |
|
736 | 776 | ) |
|
737 | 777 | raise HTTPFound(repo_creating_url) |
|
738 | 778 | |
|
739 | 779 | if by_name_match: |
|
740 | 780 | # register this as request object we can re-use later |
|
741 | 781 | request.db_repo = by_name_match |
|
742 | 782 | request.db_repo_name = request.db_repo.repo_name |
|
743 | 783 | |
|
744 | 784 | redirect_if_creating(info, by_name_match) |
|
745 | 785 | return True |
|
746 | 786 | |
|
747 | 787 | by_id_match = repo_model.get_repo_by_id(repo_name) |
|
748 | 788 | if by_id_match: |
|
749 | 789 | request.db_repo = by_id_match |
|
750 | 790 | request.db_repo_name = request.db_repo.repo_name |
|
751 | 791 | redirect_if_creating(info, by_id_match) |
|
752 | 792 | return True |
|
753 | 793 | |
|
754 | 794 | return False |
|
755 | 795 | |
|
756 | 796 | |
|
757 | 797 | class RepoForbidArchivedRoutePredicate(object): |
|
758 | 798 | def __init__(self, val, config): |
|
759 | 799 | self.val = val |
|
760 | 800 | |
|
761 | 801 | def text(self): |
|
762 | 802 | return f"repo_forbid_archived = {self.val}" |
|
763 | 803 | |
|
764 | 804 | phash = text |
|
765 | 805 | |
|
766 | 806 | def __call__(self, info, request): |
|
767 | 807 | _ = request.translate |
|
768 | 808 | rhodecode_db_repo = request.db_repo |
|
769 | 809 | |
|
770 | 810 | log.debug( |
|
771 | 811 | "%s checking if archived flag for repo for %s", |
|
772 | 812 | self.__class__.__name__, |
|
773 | 813 | rhodecode_db_repo.repo_name, |
|
774 | 814 | ) |
|
775 | 815 | |
|
776 | 816 | if rhodecode_db_repo.archived: |
|
777 | 817 | log.warning( |
|
778 | 818 | "Current view is not supported for archived repo: %s", |
|
779 | 819 | rhodecode_db_repo.repo_name, |
|
780 | 820 | ) |
|
781 | 821 | |
|
782 | 822 | h.flash( |
|
783 | 823 | h.literal(_("Action not supported for archived repository.")), |
|
784 | 824 | category="warning", |
|
785 | 825 | ) |
|
786 | 826 | summary_url = request.route_path( |
|
787 | 827 | "repo_summary", repo_name=rhodecode_db_repo.repo_name |
|
788 | 828 | ) |
|
789 | 829 | raise HTTPFound(summary_url) |
|
790 | 830 | return True |
|
791 | 831 | |
|
792 | 832 | |
|
793 | 833 | class RepoTypeRoutePredicate(object): |
|
794 | 834 | def __init__(self, val, config): |
|
795 | 835 | self.val = val or ["hg", "git", "svn"] |
|
796 | 836 | |
|
797 | 837 | def text(self): |
|
798 | 838 | return f"repo_accepted_type = {self.val}" |
|
799 | 839 | |
|
800 | 840 | phash = text |
|
801 | 841 | |
|
802 | 842 | def __call__(self, info, request): |
|
803 | 843 | if hasattr(request, "vcs_call"): |
|
804 | 844 | # skip vcs calls |
|
805 | 845 | return |
|
806 | 846 | |
|
807 | 847 | rhodecode_db_repo = request.db_repo |
|
808 | 848 | |
|
809 | 849 | log.debug( |
|
810 | 850 | "%s checking repo type for %s in %s", |
|
811 | 851 | self.__class__.__name__, |
|
812 | 852 | rhodecode_db_repo.repo_type, |
|
813 | 853 | self.val, |
|
814 | 854 | ) |
|
815 | 855 | |
|
816 | 856 | if rhodecode_db_repo.repo_type in self.val: |
|
817 | 857 | return True |
|
818 | 858 | else: |
|
819 | 859 | log.warning( |
|
820 | 860 | "Current view is not supported for repo type:%s", |
|
821 | 861 | rhodecode_db_repo.repo_type, |
|
822 | 862 | ) |
|
823 | 863 | return False |
|
824 | 864 | |
|
825 | 865 | |
|
826 | 866 | class RepoGroupRoutePredicate(object): |
|
827 | 867 | def __init__(self, val, config): |
|
828 | 868 | self.val = val |
|
829 | 869 | |
|
830 | 870 | def text(self): |
|
831 | 871 | return f"repo_group_route = {self.val}" |
|
832 | 872 | |
|
833 | 873 | phash = text |
|
834 | 874 | |
|
835 | 875 | def __call__(self, info, request): |
|
836 | 876 | if hasattr(request, "vcs_call"): |
|
837 | 877 | # skip vcs calls |
|
838 | 878 | return |
|
839 | 879 | |
|
840 | 880 | repo_group_name = info["match"]["repo_group_name"] |
|
841 | 881 | |
|
842 | 882 | repo_group_name_parts = repo_group_name.split("/") |
|
843 | 883 | repo_group_slugs = [ |
|
844 | 884 | x for x in [repo_name_slug(x) for x in repo_group_name_parts] |
|
845 | 885 | ] |
|
846 | 886 | if repo_group_name_parts != repo_group_slugs: |
|
847 | 887 | # skip early if the repo group name doesn't follow slug rules |
|
848 | 888 | log.warning( |
|
849 | 889 | "repo_group_name: %s is different than slug %s", |
|
850 | 890 | repo_group_name_parts, |
|
851 | 891 | repo_group_slugs, |
|
852 | 892 | ) |
|
853 | 893 | return False |
|
854 | 894 | |
|
855 | 895 | repo_group_model = repo_group.RepoGroupModel() |
|
856 | 896 | by_name_match = repo_group_model.get_by_group_name(repo_group_name, cache=False) |
|
857 | 897 | |
|
858 | 898 | if by_name_match: |
|
859 | 899 | # register this on the request object so we can re-use it later |
|
860 | 900 | request.db_repo_group = by_name_match |
|
861 | 901 | request.db_repo_group_name = request.db_repo_group.group_name |
|
862 | 902 | return True |
|
863 | 903 | |
|
864 | 904 | return False |
|
865 | 905 | |
|
866 | 906 | |
|
867 | 907 | class UserGroupRoutePredicate(object): |
|
868 | 908 | def __init__(self, val, config): |
|
869 | 909 | self.val = val |
|
870 | 910 | |
|
871 | 911 | def text(self): |
|
872 | 912 | return f"user_group_route = {self.val}" |
|
873 | 913 | |
|
874 | 914 | phash = text |
|
875 | 915 | |
|
876 | 916 | def __call__(self, info, request): |
|
877 | 917 | if hasattr(request, "vcs_call"): |
|
878 | 918 | # skip vcs calls |
|
879 | 919 | return |
|
880 | 920 | |
|
881 | 921 | user_group_id = info["match"]["user_group_id"] |
|
882 | 922 | user_group_model = user_group.UserGroup() |
|
883 | 923 | by_id_match = user_group_model.get(user_group_id, cache=False) |
|
884 | 924 | |
|
885 | 925 | if by_id_match: |
|
886 | 926 | # register this on the request object so we can re-use it later |
|
887 | 927 | request.db_user_group = by_id_match |
|
888 | 928 | return True |
|
889 | 929 | |
|
890 | 930 | return False |
|
891 | 931 | |
|
892 | 932 | |
|
893 | 933 | class UserRoutePredicateBase(object): |
|
894 | 934 | supports_default = None |
|
895 | 935 | |
|
896 | 936 | def __init__(self, val, config): |
|
897 | 937 | self.val = val |
|
898 | 938 | |
|
899 | 939 | def text(self): |
|
900 | 940 | raise NotImplementedError() |
|
901 | 941 | |
|
902 | 942 | def __call__(self, info, request): |
|
903 | 943 | if hasattr(request, "vcs_call"): |
|
904 | 944 | # skip vcs calls |
|
905 | 945 | return |
|
906 | 946 | |
|
907 | 947 | user_id = info["match"]["user_id"] |
|
908 | 948 | user_model = user.User() |
|
909 | 949 | by_id_match = user_model.get(user_id, cache=False) |
|
910 | 950 | |
|
911 | 951 | if by_id_match: |
|
912 | 952 | # register this on the request object so we can re-use it later |
|
913 | 953 | request.db_user = by_id_match |
|
914 | 954 | request.db_user_supports_default = self.supports_default |
|
915 | 955 | return True |
|
916 | 956 | |
|
917 | 957 | return False |
|
918 | 958 | |
|
919 | 959 | |
|
920 | 960 | class UserRoutePredicate(UserRoutePredicateBase): |
|
921 | 961 | supports_default = False |
|
922 | 962 | |
|
923 | 963 | def text(self): |
|
924 | 964 | return f"user_route = {self.val}" |
|
925 | 965 | |
|
926 | 966 | phash = text |
|
927 | 967 | |
|
928 | 968 | |
|
929 | 969 | class UserRouteWithDefaultPredicate(UserRoutePredicateBase): |
|
930 | 970 | supports_default = True |
|
931 | 971 | |
|
932 | 972 | def text(self): |
|
933 | 973 | return f"user_with_default_route = {self.val}" |
|
934 | 974 | |
|
935 | 975 | phash = text |
|
936 | 976 | |
|
937 | 977 | |
|
938 | 978 | def includeme(config): |
|
939 | 979 | config.add_route_predicate("repo_route", RepoRoutePredicate) |
|
940 | 980 | config.add_route_predicate("repo_accepted_types", RepoTypeRoutePredicate) |
|
941 | 981 | config.add_route_predicate( |
|
942 | 982 | "repo_forbid_when_archived", RepoForbidArchivedRoutePredicate |
|
943 | 983 | ) |
|
944 | 984 | config.add_route_predicate("repo_group_route", RepoGroupRoutePredicate) |
|
945 | 985 | config.add_route_predicate("user_group_route", UserGroupRoutePredicate) |
|
946 | 986 | config.add_route_predicate("user_route_with_default", UserRouteWithDefaultPredicate) |
|
947 | 987 | config.add_route_predicate("user_route", UserRoutePredicate) |
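The predicate classes registered in `includeme` above all follow the same small protocol Pyramid expects of a custom route predicate: a constructor taking `(val, config)`, a `text()`/`phash` pair for introspection, and a `__call__(info, request)` returning a truthy/falsy match result. A minimal, framework-free sketch of that protocol, modeled on `RepoTypeRoutePredicate`; the `FakeRepo`/`FakeRequest` helpers are illustrative stand-ins, not part of the RhodeCode codebase:

```python
# Sketch of the Pyramid custom-predicate protocol used above.
# FakeRepo and FakeRequest are hypothetical test doubles.

class RepoTypePredicate:
    """Accept a route only when the matched repo's type is allowed."""

    def __init__(self, val, config=None):
        # same default as RepoTypeRoutePredicate above
        self.val = val or ["hg", "git", "svn"]

    def text(self):
        # shown by Pyramid's route introspection / debug toolbar
        return f"repo_accepted_type = {self.val}"

    phash = text  # predicates also need a phash for route uniqueness

    def __call__(self, info, request):
        # returning False makes the router skip this route entirely
        return request.db_repo.repo_type in self.val


class FakeRepo:
    def __init__(self, repo_type):
        self.repo_type = repo_type


class FakeRequest:
    def __init__(self, repo_type):
        self.db_repo = FakeRepo(repo_type)


pred = RepoTypePredicate(["git"])
print(pred(None, FakeRequest("git")))  # True: matching type, route accepted
print(pred(None, FakeRequest("svn")))  # False: route is skipped
```

Because `__call__` only returns a boolean, a non-matching request falls through to the next registered route rather than producing an error, which is why predicates like `RepoRoutePredicate` can also stash `request.db_repo` as a side effect on success.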
@@ -1,497 +1,497 b'' | |||
|
1 | 1 | |
|
2 | 2 | # Copyright (C) 2010-2023 RhodeCode GmbH |
|
3 | 3 | # |
|
4 | 4 | # This program is free software: you can redistribute it and/or modify |
|
5 | 5 | # it under the terms of the GNU Affero General Public License, version 3 |
|
6 | 6 | # (only), as published by the Free Software Foundation. |
|
7 | 7 | # |
|
8 | 8 | # This program is distributed in the hope that it will be useful, |
|
9 | 9 | # but WITHOUT ANY WARRANTY; without even the implied warranty of |
|
10 | 10 | # MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the |
|
11 | 11 | # GNU General Public License for more details. |
|
12 | 12 | # |
|
13 | 13 | # You should have received a copy of the GNU Affero General Public License |
|
14 | 14 | # along with this program. If not, see <http://www.gnu.org/licenses/>. |
|
15 | 15 | # |
|
16 | 16 | # This program is dual-licensed. If you wish to learn more about the |
|
17 | 17 | # RhodeCode Enterprise Edition, including its added features, Support services, |
|
18 | 18 | # and proprietary license terms, please see https://rhodecode.com/licenses/ |
|
19 | 19 | |
|
20 | 20 | import urllib.request |
|
21 | 21 | import urllib.parse |
|
22 | 22 | import urllib.error |
|
23 | 23 | |
|
24 | 24 | import mock |
|
25 | 25 | import pytest |
|
26 | 26 | |
|
27 | 27 | from rhodecode.apps._base import ADMIN_PREFIX |
|
28 | 28 | from rhodecode.lib import auth |
|
29 | 29 | from rhodecode.lib.utils2 import safe_str |
|
30 | 30 | from rhodecode.lib import helpers as h |
|
31 | 31 | from rhodecode.model.db import ( |
|
32 | 32 | Repository, RepoGroup, UserRepoToPerm, User, Permission) |
|
33 | 33 | from rhodecode.model.meta import Session |
|
34 | 34 | from rhodecode.model.repo import RepoModel |
|
35 | 35 | from rhodecode.model.repo_group import RepoGroupModel |
|
36 | 36 | from rhodecode.model.user import UserModel |
|
37 | 37 | from rhodecode.tests import ( |
|
38 | 38 | login_user_session, assert_session_flash, TEST_USER_ADMIN_LOGIN, |
|
39 | 39 | TEST_USER_REGULAR_LOGIN, TEST_USER_REGULAR_PASS) |
|
40 | 40 | from rhodecode.tests.fixture import Fixture, error_function |
|
41 | 41 | from rhodecode.tests.utils import repo_on_filesystem |
|
42 | 42 | from rhodecode.tests.routes import route_path |
|
43 | 43 | |
|
44 | 44 | fixture = Fixture() |
|
45 | 45 | |
|
46 | 46 | |
|
47 | 47 | def _get_permission_for_user(user, repo): |
|
48 | 48 | perm = UserRepoToPerm.query()\ |
|
49 | 49 | .filter(UserRepoToPerm.repository == |
|
50 | 50 | Repository.get_by_repo_name(repo))\ |
|
51 | 51 | .filter(UserRepoToPerm.user == User.get_by_username(user))\ |
|
52 | 52 | .all() |
|
53 | 53 | return perm |
|
54 | 54 | |
|
55 | 55 | |
|
56 | 56 | @pytest.mark.usefixtures("app") |
|
57 | 57 | class TestAdminRepos(object): |
|
58 | 58 | |
|
59 | 59 | def test_repo_list(self, autologin_user, user_util, xhr_header): |
|
60 | 60 | repo = user_util.create_repo() |
|
61 | 61 | repo_name = repo.repo_name |
|
62 | 62 | response = self.app.get( |
|
63 | 63 | route_path('repos_data'), status=200, |
|
64 | 64 | extra_environ=xhr_header) |
|
65 | 65 | |
|
66 | 66 | response.mustcontain(repo_name) |
|
67 | 67 | |
|
68 | 68 | def test_create_page_restricted_to_single_backend(self, autologin_user, backend): |
|
69 | 69 | with mock.patch('rhodecode.BACKENDS', {'git': 'git'}): |
|
70 | 70 | response = self.app.get(route_path('repo_new'), status=200) |
|
71 | 71 | assert_response = response.assert_response() |
|
72 | 72 | element = assert_response.get_element('[name=repo_type]') |
|
73 | 73 | assert element.get('value') == 'git' |
|
74 | 74 | |
|
75 | 75 | def test_create_page_non_restricted_backends(self, autologin_user, backend): |
|
76 | 76 | response = self.app.get(route_path('repo_new'), status=200) |
|
77 | 77 | assert_response = response.assert_response() |
|
78 | 78 | assert ['hg', 'git', 'svn'] == [x.get('value') for x in assert_response.get_elements('[name=repo_type]')] |
|
79 | 79 | |
|
80 | 80 | @pytest.mark.parametrize( |
|
81 | 81 | "suffix", ['', 'xxa'], ids=['', 'non-ascii']) |
|
82 | 82 | def test_create(self, autologin_user, backend, suffix, csrf_token): |
|
83 | 83 | repo_name_unicode = backend.new_repo_name(suffix=suffix) |
|
84 | 84 | repo_name = repo_name_unicode |
|
85 | 85 | |
|
86 | 86 | description_unicode = 'description for newly created repo' + suffix |
|
87 | 87 | description = description_unicode |
|
88 | 88 | |
|
89 | 89 | response = self.app.post( |
|
90 | 90 | route_path('repo_create'), |
|
91 | 91 | fixture._get_repo_create_params( |
|
92 | 92 | repo_private=False, |
|
93 | 93 | repo_name=repo_name, |
|
94 | 94 | repo_type=backend.alias, |
|
95 | 95 | repo_description=description, |
|
96 | 96 | csrf_token=csrf_token), |
|
97 | 97 | status=302) |
|
98 | 98 | |
|
99 | 99 | self.assert_repository_is_created_correctly( |
|
100 | 100 | repo_name, description, backend) |
|
101 | 101 | |
|
102 | 102 | def test_create_numeric_name(self, autologin_user, backend, csrf_token): |
|
103 | 103 | numeric_repo = '1234' |
|
104 | 104 | repo_name = numeric_repo |
|
105 | 105 | description = 'description for newly created repo' + numeric_repo |
|
106 | 106 | self.app.post( |
|
107 | 107 | route_path('repo_create'), |
|
108 | 108 | fixture._get_repo_create_params( |
|
109 | 109 | repo_private=False, |
|
110 | 110 | repo_name=repo_name, |
|
111 | 111 | repo_type=backend.alias, |
|
112 | 112 | repo_description=description, |
|
113 | 113 | csrf_token=csrf_token)) |
|
114 | 114 | |
|
115 | 115 | self.assert_repository_is_created_correctly( |
|
116 | 116 | repo_name, description, backend) |
|
117 | 117 | |
|
118 | 118 | @pytest.mark.parametrize("suffix", ['', '_ąćę'], ids=['', 'non-ascii']) |
|
119 | 119 | def test_create_in_group( |
|
120 | 120 | self, autologin_user, backend, suffix, csrf_token): |
|
121 | 121 | # create GROUP |
|
122 | 122 | group_name = f'sometest_{backend.alias}' |
|
123 | 123 | gr = RepoGroupModel().create(group_name=group_name, |
|
124 | 124 | group_description='test', |
|
125 | 125 | owner=TEST_USER_ADMIN_LOGIN) |
|
126 | 126 | Session().commit() |
|
127 | 127 | |
|
128 | 128 | repo_name = f'ingroup{suffix}' |
|
129 | 129 | repo_name_full = RepoGroup.url_sep().join([group_name, repo_name]) |
|
130 | 130 | description = 'description for newly created repo' |
|
131 | 131 | |
|
132 | 132 | self.app.post( |
|
133 | 133 | route_path('repo_create'), |
|
134 | 134 | fixture._get_repo_create_params( |
|
135 | 135 | repo_private=False, |
|
136 | 136 | repo_name=safe_str(repo_name), |
|
137 | 137 | repo_type=backend.alias, |
|
138 | 138 | repo_description=description, |
|
139 | 139 | repo_group=gr.group_id, |
|
140 | 140 | csrf_token=csrf_token)) |
|
141 | 141 | |
|
142 | 142 | # TODO: johbo: Cleanup work to fixture |
|
143 | 143 | try: |
|
144 | 144 | self.assert_repository_is_created_correctly( |
|
145 | 145 | repo_name_full, description, backend) |
|
146 | 146 | |
|
147 | 147 | new_repo = RepoModel().get_by_repo_name(repo_name_full) |
|
148 | 148 | inherited_perms = UserRepoToPerm.query().filter( |
|
149 | 149 | UserRepoToPerm.repository_id == new_repo.repo_id).all() |
|
150 | 150 | assert len(inherited_perms) == 1 |
|
151 | 151 | finally: |
|
152 | 152 | RepoModel().delete(repo_name_full) |
|
153 | 153 | RepoGroupModel().delete(group_name) |
|
154 | 154 | Session().commit() |
|
155 | 155 | |
|
156 | 156 | def test_create_in_group_numeric_name( |
|
157 | 157 | self, autologin_user, backend, csrf_token): |
|
158 | 158 | # create GROUP |
|
159 | 159 | group_name = 'sometest_%s' % backend.alias |
|
160 | 160 | gr = RepoGroupModel().create(group_name=group_name, |
|
161 | 161 | group_description='test', |
|
162 | 162 | owner=TEST_USER_ADMIN_LOGIN) |
|
163 | 163 | Session().commit() |
|
164 | 164 | |
|
165 | 165 | repo_name = '12345' |
|
166 | 166 | repo_name_full = RepoGroup.url_sep().join([group_name, repo_name]) |
|
167 | 167 | description = 'description for newly created repo' |
|
168 | 168 | self.app.post( |
|
169 | 169 | route_path('repo_create'), |
|
170 | 170 | fixture._get_repo_create_params( |
|
171 | 171 | repo_private=False, |
|
172 | 172 | repo_name=repo_name, |
|
173 | 173 | repo_type=backend.alias, |
|
174 | 174 | repo_description=description, |
|
175 | 175 | repo_group=gr.group_id, |
|
176 | 176 | csrf_token=csrf_token)) |
|
177 | 177 | |
|
178 | 178 | # TODO: johbo: Cleanup work to fixture |
|
179 | 179 | try: |
|
180 | 180 | self.assert_repository_is_created_correctly( |
|
181 | 181 | repo_name_full, description, backend) |
|
182 | 182 | |
|
183 | 183 | new_repo = RepoModel().get_by_repo_name(repo_name_full) |
|
184 | 184 | inherited_perms = UserRepoToPerm.query()\ |
|
185 | 185 | .filter(UserRepoToPerm.repository_id == new_repo.repo_id).all() |
|
186 | 186 | assert len(inherited_perms) == 1 |
|
187 | 187 | finally: |
|
188 | 188 | RepoModel().delete(repo_name_full) |
|
189 | 189 | RepoGroupModel().delete(group_name) |
|
190 | 190 | Session().commit() |
|
191 | 191 | |
|
192 | 192 | def test_create_in_group_without_needed_permissions(self, backend): |
|
193 | 193 | session = login_user_session( |
|
194 | 194 | self.app, TEST_USER_REGULAR_LOGIN, TEST_USER_REGULAR_PASS) |
|
195 | 195 | csrf_token = auth.get_csrf_token(session) |
|
196 | 196 | # revoke |
|
197 | 197 | user_model = UserModel() |
|
198 | 198 | # disable fork and create on default user |
|
199 | 199 | user_model.revoke_perm(User.DEFAULT_USER, 'hg.create.repository') |
|
200 | 200 | user_model.grant_perm(User.DEFAULT_USER, 'hg.create.none') |
|
201 | 201 | user_model.revoke_perm(User.DEFAULT_USER, 'hg.fork.repository') |
|
202 | 202 | user_model.grant_perm(User.DEFAULT_USER, 'hg.fork.none') |
|
203 | 203 | |
|
204 | 204 | # disable on regular user |
|
205 | 205 | user_model.revoke_perm(TEST_USER_REGULAR_LOGIN, 'hg.create.repository') |
|
206 | 206 | user_model.grant_perm(TEST_USER_REGULAR_LOGIN, 'hg.create.none') |
|
207 | 207 | user_model.revoke_perm(TEST_USER_REGULAR_LOGIN, 'hg.fork.repository') |
|
208 | 208 | user_model.grant_perm(TEST_USER_REGULAR_LOGIN, 'hg.fork.none') |
|
209 | 209 | Session().commit() |
|
210 | 210 | |
|
211 | 211 | # create GROUP |
|
212 | 212 | group_name = 'reg_sometest_%s' % backend.alias |
|
213 | 213 | gr = RepoGroupModel().create(group_name=group_name, |
|
214 | 214 | group_description='test', |
|
215 | 215 | owner=TEST_USER_ADMIN_LOGIN) |
|
216 | 216 | Session().commit() |
|
217 | 217 | repo_group_id = gr.group_id |
|
218 | 218 | |
|
219 | 219 | group_name_allowed = 'reg_sometest_allowed_%s' % backend.alias |
|
220 | 220 | gr_allowed = RepoGroupModel().create( |
|
221 | 221 | group_name=group_name_allowed, |
|
222 | 222 | group_description='test', |
|
223 | 223 | owner=TEST_USER_REGULAR_LOGIN) |
|
224 | 224 | allowed_repo_group_id = gr_allowed.group_id |
|
225 | 225 | Session().commit() |
|
226 | 226 | |
|
227 | 227 | repo_name = 'ingroup' |
|
228 | 228 | description = 'description for newly created repo' |
|
229 | 229 | response = self.app.post( |
|
230 | 230 | route_path('repo_create'), |
|
231 | 231 | fixture._get_repo_create_params( |
|
232 | 232 | repo_private=False, |
|
233 | 233 | repo_name=repo_name, |
|
234 | 234 | repo_type=backend.alias, |
|
235 | 235 | repo_description=description, |
|
236 | 236 | repo_group=repo_group_id, |
|
237 | 237 | csrf_token=csrf_token)) |
|
238 | 238 | |
|
239 | 239 | response.mustcontain('Invalid value') |
|
240 | 240 | |
|
241 | 241 | # user is allowed to create in this group |
|
242 | 242 | repo_name = 'ingroup' |
|
243 | 243 | repo_name_full = RepoGroup.url_sep().join( |
|
244 | 244 | [group_name_allowed, repo_name]) |
|
245 | 245 | description = 'description for newly created repo' |
|
246 | 246 | response = self.app.post( |
|
247 | 247 | route_path('repo_create'), |
|
248 | 248 | fixture._get_repo_create_params( |
|
249 | 249 | repo_private=False, |
|
250 | 250 | repo_name=repo_name, |
|
251 | 251 | repo_type=backend.alias, |
|
252 | 252 | repo_description=description, |
|
253 | 253 | repo_group=allowed_repo_group_id, |
|
254 | 254 | csrf_token=csrf_token)) |
|
255 | 255 | |
|
256 | 256 | # TODO: johbo: Cleanup in pytest fixture |
|
257 | 257 | try: |
|
258 | 258 | self.assert_repository_is_created_correctly( |
|
259 | 259 | repo_name_full, description, backend) |
|
260 | 260 | |
|
261 | 261 | new_repo = RepoModel().get_by_repo_name(repo_name_full) |
|
262 | 262 | inherited_perms = UserRepoToPerm.query().filter( |
|
263 | 263 | UserRepoToPerm.repository_id == new_repo.repo_id).all() |
|
264 | 264 | assert len(inherited_perms) == 1 |
|
265 | 265 | |
|
266 | 266 | assert repo_on_filesystem(repo_name_full) |
|
267 | 267 | finally: |
|
268 | 268 | RepoModel().delete(repo_name_full) |
|
269 | 269 | RepoGroupModel().delete(group_name) |
|
270 | 270 | RepoGroupModel().delete(group_name_allowed) |
|
271 | 271 | Session().commit() |
|
272 | 272 | |
|
273 | 273 | def test_create_in_group_inherit_permissions(self, autologin_user, backend, |
|
274 | 274 | csrf_token): |
|
275 | 275 | # create GROUP |
|
276 | 276 | group_name = 'sometest_%s' % backend.alias |
|
277 | 277 | gr = RepoGroupModel().create(group_name=group_name, |
|
278 | 278 | group_description='test', |
|
279 | 279 | owner=TEST_USER_ADMIN_LOGIN) |
|
280 | 280 | perm = Permission.get_by_key('repository.write') |
|
281 | 281 | RepoGroupModel().grant_user_permission( |
|
282 | 282 | gr, TEST_USER_REGULAR_LOGIN, perm) |
|
283 | 283 | |
|
284 | 284 | # add repo permissions |
|
285 | 285 | Session().commit() |
|
286 | 286 | repo_group_id = gr.group_id |
|
287 | 287 | repo_name = 'ingroup_inherited_%s' % backend.alias |
|
288 | 288 | repo_name_full = RepoGroup.url_sep().join([group_name, repo_name]) |
|
289 | 289 | description = 'description for newly created repo' |
|
290 | 290 | self.app.post( |
|
291 | 291 | route_path('repo_create'), |
|
292 | 292 | fixture._get_repo_create_params( |
|
293 | 293 | repo_private=False, |
|
294 | 294 | repo_name=repo_name, |
|
295 | 295 | repo_type=backend.alias, |
|
296 | 296 | repo_description=description, |
|
297 | 297 | repo_group=repo_group_id, |
|
298 | 298 | repo_copy_permissions=True, |
|
299 | 299 | csrf_token=csrf_token)) |
|
300 | 300 | |
|
301 | 301 | # TODO: johbo: Cleanup to pytest fixture |
|
302 | 302 | try: |
|
303 | 303 | self.assert_repository_is_created_correctly( |
|
304 | 304 | repo_name_full, description, backend) |
|
305 | 305 | except Exception: |
|
306 | 306 | RepoGroupModel().delete(group_name) |
|
307 | 307 | Session().commit() |
|
308 | 308 | raise |
|
309 | 309 | |
|
310 | 310 | # check if inherited permissions are applied |
|
311 | 311 | new_repo = RepoModel().get_by_repo_name(repo_name_full) |
|
312 | 312 | inherited_perms = UserRepoToPerm.query().filter( |
|
313 | 313 | UserRepoToPerm.repository_id == new_repo.repo_id).all() |
|
314 | 314 | assert len(inherited_perms) == 2 |
|
315 | 315 | |
|
316 | 316 | assert TEST_USER_REGULAR_LOGIN in [ |
|
317 | 317 | x.user.username for x in inherited_perms] |
|
318 | 318 | assert 'repository.write' in [ |
|
319 | 319 | x.permission.permission_name for x in inherited_perms] |
|
320 | 320 | |
|
321 | 321 | RepoModel().delete(repo_name_full) |
|
322 | 322 | RepoGroupModel().delete(group_name) |
|
323 | 323 | Session().commit() |
|
324 | 324 | |
|
325 | 325 | @pytest.mark.xfail_backends( |
|
326 | 326 | "git", "hg", reason="Missing reposerver support") |
|
327 | 327 | def test_create_with_clone_uri(self, autologin_user, backend, reposerver, |
|
328 | 328 | csrf_token): |
|
329 | 329 | source_repo = backend.create_repo(number_of_commits=2) |
|
330 | 330 | source_repo_name = source_repo.repo_name |
|
331 | 331 | reposerver.serve(source_repo.scm_instance()) |
|
332 | 332 | |
|
333 | 333 | repo_name = backend.new_repo_name() |
|
334 | 334 | response = self.app.post( |
|
335 | 335 | route_path('repo_create'), |
|
336 | 336 | fixture._get_repo_create_params( |
|
337 | 337 | repo_private=False, |
|
338 | 338 | repo_name=repo_name, |
|
339 | 339 | repo_type=backend.alias, |
|
340 | 340 | repo_description='', |
|
341 | 341 | clone_uri=reposerver.url, |
|
342 | 342 | csrf_token=csrf_token), |
|
343 | 343 | status=302) |
|
344 | 344 | |
|
345 | 345 | # Should be redirected to the creating page |
|
346 | 346 | response.mustcontain('repo_creating') |
|
347 | 347 | |
|
348 | 348 | # Expecting that both repositories have the same history |
|
349 | 349 | source_repo = RepoModel().get_by_repo_name(source_repo_name) |
|
350 | 350 | source_vcs = source_repo.scm_instance() |
|
351 | 351 | repo = RepoModel().get_by_repo_name(repo_name) |
|
352 | 352 | repo_vcs = repo.scm_instance() |
|
353 | 353 | assert source_vcs[0].message == repo_vcs[0].message |
|
354 | 354 | assert source_vcs.count() == repo_vcs.count() |
|
355 | 355 | assert source_vcs.commit_ids == repo_vcs.commit_ids |
|
356 | 356 | |
|
357 | 357 | @pytest.mark.xfail_backends("svn", reason="Depends on import support") |
|
358 | 358 | def test_create_remote_repo_wrong_clone_uri(self, autologin_user, backend, |
|
359 | 359 | csrf_token): |
|
360 | 360 | repo_name = backend.new_repo_name() |
|
361 | 361 | description = 'description for newly created repo' |
|
362 | 362 | response = self.app.post( |
|
363 | 363 | route_path('repo_create'), |
|
364 | 364 | fixture._get_repo_create_params( |
|
365 | 365 | repo_private=False, |
|
366 | 366 | repo_name=repo_name, |
|
367 | 367 | repo_type=backend.alias, |
|
368 | 368 | repo_description=description, |
|
369 | 369 | clone_uri='http://repo.invalid/repo', |
|
370 | 370 | csrf_token=csrf_token)) |
|
371 | 371 | response.mustcontain('invalid clone url') |
|
372 | 372 | |
|
373 | 373 | @pytest.mark.xfail_backends("svn", reason="Depends on import support") |
|
374 | 374 | def test_create_remote_repo_wrong_clone_uri_hg_svn( |
|
375 | 375 | self, autologin_user, backend, csrf_token): |
|
376 | 376 | repo_name = backend.new_repo_name() |
|
377 | 377 | description = 'description for newly created repo' |
|
378 | 378 | response = self.app.post( |
|
379 | 379 | route_path('repo_create'), |
|
380 | 380 | fixture._get_repo_create_params( |
|
381 | 381 | repo_private=False, |
|
382 | 382 | repo_name=repo_name, |
|
383 | 383 | repo_type=backend.alias, |
|
384 | 384 | repo_description=description, |
|
385 | 385 | clone_uri='svn+http://svn.invalid/repo', |
|
386 | 386 | csrf_token=csrf_token)) |
|
387 | 387 | response.mustcontain('invalid clone url') |
|
388 | 388 | |
|
389 | 389 | def test_create_with_git_suffix( |
|
390 | 390 | self, autologin_user, backend, csrf_token): |
|
391 | 391 | repo_name = backend.new_repo_name() + ".git" |
|
392 | 392 | description = 'description for newly created repo' |
|
393 | 393 | response = self.app.post( |
|
394 | 394 | route_path('repo_create'), |
|
395 | 395 | fixture._get_repo_create_params( |
|
396 | 396 | repo_private=False, |
|
397 | 397 | repo_name=repo_name, |
|
398 | 398 | repo_type=backend.alias, |
|
399 | 399 | repo_description=description, |
|
400 | 400 | csrf_token=csrf_token)) |
|
401 | 401 | response.mustcontain('Repository name cannot end with .git') |
|
402 | 402 | |
|
403 | 403 | def test_default_user_cannot_access_private_repo_in_a_group( |
|
404 | 404 | self, autologin_user, user_util, backend): |
|
405 | 405 | |
|
406 | 406 | group = user_util.create_repo_group() |
|
407 | 407 | |
|
408 | 408 | repo = backend.create_repo( |
|
409 | 409 | repo_private=True, repo_group=group, repo_copy_permissions=True) |
|
410 | 410 | |
|
411 | 411 | permissions = _get_permission_for_user( |
|
412 | 412 | user='default', repo=repo.repo_name) |
|
413 | 413 | assert len(permissions) == 1 |
|
414 | 414 | assert permissions[0].permission.permission_name == 'repository.none' |
|
415 | 415 | assert permissions[0].repository.private is True |
|
416 | 416 | |
|
417 | 417 | def test_create_on_top_level_without_permissions(self, backend): |
|
418 | 418 | session = login_user_session( |
|
419 | 419 | self.app, TEST_USER_REGULAR_LOGIN, TEST_USER_REGULAR_PASS) |
|
420 | 420 | csrf_token = auth.get_csrf_token(session) |
|
421 | 421 | |
|
422 | 422 | # revoke |
|
423 | 423 | user_model = UserModel() |
|
424 | 424 | # disable fork and create on default user |
|
425 | 425 | user_model.revoke_perm(User.DEFAULT_USER, 'hg.create.repository') |
|
426 | 426 | user_model.grant_perm(User.DEFAULT_USER, 'hg.create.none') |
|
427 | 427 | user_model.revoke_perm(User.DEFAULT_USER, 'hg.fork.repository') |
|
428 | 428 | user_model.grant_perm(User.DEFAULT_USER, 'hg.fork.none') |
|
429 | 429 | |
|
430 | 430 | # disable on regular user |
|
431 | 431 | user_model.revoke_perm(TEST_USER_REGULAR_LOGIN, 'hg.create.repository') |
|
432 | 432 | user_model.grant_perm(TEST_USER_REGULAR_LOGIN, 'hg.create.none') |
|
433 | 433 | user_model.revoke_perm(TEST_USER_REGULAR_LOGIN, 'hg.fork.repository') |
|
434 | 434 | user_model.grant_perm(TEST_USER_REGULAR_LOGIN, 'hg.fork.none') |
|
435 | 435 | Session().commit() |
|
436 | 436 | |
|
437 | 437 | repo_name = backend.new_repo_name() |
|
438 | 438 | description = 'description for newly created repo' |
|
439 | 439 | response = self.app.post( |
|
440 | 440 | route_path('repo_create'), |
|
441 | 441 | fixture._get_repo_create_params( |
|
442 | 442 | repo_private=False, |
|
443 | 443 | repo_name=repo_name, |
|
444 | 444 | repo_type=backend.alias, |
|
445 | 445 | repo_description=description, |
|
446 | 446 | csrf_token=csrf_token)) |
|
447 | 447 | |
|
448 | 448 | response.mustcontain( |
|
449 |
|
|
|
450 |
|
|
|
449 | "You do not have the permission to store repositories in " | |
|
450 | "the root location.") | |
|
451 | 451 | |
|
452 | 452 | @mock.patch.object(RepoModel, '_create_filesystem_repo', error_function) |
|
453 | 453 | def test_create_repo_when_filesystem_op_fails( |
|
454 | 454 | self, autologin_user, backend, csrf_token): |
|
455 | 455 | repo_name = backend.new_repo_name() |
|
456 | 456 | description = 'description for newly created repo' |
|
457 | 457 | |
|
458 | 458 | response = self.app.post( |
|
459 | 459 | route_path('repo_create'), |
|
460 | 460 | fixture._get_repo_create_params( |
|
461 | 461 | repo_private=False, |
|
462 | 462 | repo_name=repo_name, |
|
463 | 463 | repo_type=backend.alias, |
|
464 | 464 | repo_description=description, |
|
465 | 465 | csrf_token=csrf_token)) |
|
466 | 466 | |
|
467 | 467 | assert_session_flash( |
|
468 | 468 | response, 'Error creating repository %s' % repo_name) |
|
469 | 469 | # repo must not be in db |
|
470 | 470 | assert backend.repo is None |
|
471 | 471 | # repo must not be in filesystem ! |
|
472 | 472 | assert not repo_on_filesystem(repo_name) |
|
473 | 473 | |
|
474 | 474 | def assert_repository_is_created_correctly(self, repo_name, description, backend): |
|
475 | 475 | url_quoted_repo_name = urllib.parse.quote(repo_name) |
|
476 | 476 | |
|
477 | 477 | # run the check page that triggers the flash message |
|
478 | 478 | response = self.app.get( |
|
479 | 479 | route_path('repo_creating_check', repo_name=repo_name)) |
|
480 | 480 | assert response.json == {'result': True} |
|
481 | 481 | |
|
482 | 482 | flash_msg = 'Created repository <a href="/{}">{}</a>'.format(url_quoted_repo_name, repo_name) |
|
483 | 483 | assert_session_flash(response, flash_msg) |
|
484 | 484 | |
|
485 | 485 | # test if the repo was created in the database |
|
486 | 486 | new_repo = RepoModel().get_by_repo_name(repo_name) |
|
487 | 487 | |
|
488 | 488 | assert new_repo.repo_name == repo_name |
|
489 | 489 | assert new_repo.description == description |
|
490 | 490 | |
|
491 | 491 | # test if the repository is visible in the list ? |
|
492 | 492 | response = self.app.get( |
|
493 | 493 | h.route_path('repo_summary', repo_name=repo_name)) |
|
494 | 494 | response.mustcontain(repo_name) |
|
495 | 495 | response.mustcontain(backend.alias) |
|
496 | 496 | |
|
497 | 497 | assert repo_on_filesystem(repo_name) |
@@ -1,678 +1,678 b'' | |||
|
1 | 1 | |
|
2 | 2 | # Copyright (C) 2010-2023 RhodeCode GmbH |
|
3 | 3 | # |
|
4 | 4 | # This program is free software: you can redistribute it and/or modify |
|
5 | 5 | # it under the terms of the GNU Affero General Public License, version 3 |
|
6 | 6 | # (only), as published by the Free Software Foundation. |
|
7 | 7 | # |
|
8 | 8 | # This program is distributed in the hope that it will be useful, |
|
9 | 9 | # but WITHOUT ANY WARRANTY; without even the implied warranty of |
|
10 | 10 | # MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the |
|
11 | 11 | # GNU General Public License for more details. |
|
12 | 12 | # |
|
13 | 13 | # You should have received a copy of the GNU Affero General Public License |
|
14 | 14 | # along with this program. If not, see <http://www.gnu.org/licenses/>. |
|
15 | 15 | # |
|
16 | 16 | # This program is dual-licensed. If you wish to learn more about the |
|
17 | 17 | # RhodeCode Enterprise Edition, including its added features, Support services, |
|
18 | 18 | # and proprietary license terms, please see https://rhodecode.com/licenses/ |
|
19 | 19 | |
|
20 | 20 | import mock |
|
21 | 21 | import pytest |
|
22 | 22 | |
|
23 | 23 | import rhodecode |
|
24 | 24 | from rhodecode.apps._base import ADMIN_PREFIX |
|
25 | 25 | from rhodecode.lib.hash_utils import md5_safe |
|
26 | 26 | from rhodecode.model.db import RhodeCodeUi |
|
27 | 27 | from rhodecode.model.meta import Session |
|
28 | 28 | from rhodecode.model.settings import SettingsModel, IssueTrackerSettingsModel |
|
29 | 29 | from rhodecode.tests import assert_session_flash |
|
30 | 30 | from rhodecode.tests.routes import route_path |
|
31 | 31 | |
|
32 | 32 | |
|
33 | 33 | UPDATE_DATA_QUALNAME = 'rhodecode.model.update.UpdateModel.get_update_data' |
|
34 | 34 | |
|
35 | 35 | |
|
36 | 36 | @pytest.mark.usefixtures('autologin_user', 'app') |
|
37 | 37 | class TestAdminSettingsController(object): |
|
38 | 38 | |
|
39 | 39 | @pytest.mark.parametrize('urlname', [ |
|
40 | 40 | 'admin_settings_vcs', |
|
41 | 41 | 'admin_settings_mapping', |
|
42 | 42 | 'admin_settings_global', |
|
43 | 43 | 'admin_settings_visual', |
|
44 | 44 | 'admin_settings_email', |
|
45 | 45 | 'admin_settings_hooks', |
|
46 | 46 | 'admin_settings_search', |
|
47 | 47 | ]) |
|
48 | 48 | def test_simple_get(self, urlname): |
|
49 | 49 | self.app.get(route_path(urlname)) |
|
50 | 50 | |
|
51 | 51 | def test_create_custom_hook(self, csrf_token): |
|
52 | 52 | response = self.app.post( |
|
53 | 53 | route_path('admin_settings_hooks_update'), |
|
54 | 54 | params={ |
|
55 | 55 | 'new_hook_ui_key': 'test_hooks_1', |
|
56 | 56 | 'new_hook_ui_value': 'cd /tmp', |
|
57 | 57 | 'csrf_token': csrf_token}) |
|
58 | 58 | |
|
59 | 59 | response = response.follow() |
|
60 | 60 | response.mustcontain('test_hooks_1') |
|
61 | 61 | response.mustcontain('cd /tmp') |
|
62 | 62 | |
|
63 | 63 | def test_create_custom_hook_delete(self, csrf_token): |
|
64 | 64 | response = self.app.post( |
|
65 | 65 | route_path('admin_settings_hooks_update'), |
|
66 | 66 | params={ |
|
67 | 67 | 'new_hook_ui_key': 'test_hooks_2', |
|
68 | 68 | 'new_hook_ui_value': 'cd /tmp2', |
|
69 | 69 | 'csrf_token': csrf_token}) |
|
70 | 70 | |
|
71 | 71 | response = response.follow() |
|
72 | 72 | response.mustcontain('test_hooks_2') |
|
73 | 73 | response.mustcontain('cd /tmp2') |
|
74 | 74 | |
|
75 | 75 | hook_id = SettingsModel().get_ui_by_key('test_hooks_2').ui_id |
|
76 | 76 | |
|
77 | 77 | # delete |
|
78 | 78 | self.app.post( |
|
79 | 79 | route_path('admin_settings_hooks_delete'), |
|
80 | 80 | params={'hook_id': hook_id, 'csrf_token': csrf_token}) |
|
81 | 81 | response = self.app.get(route_path('admin_settings_hooks')) |
|
82 | 82 | response.mustcontain(no=['test_hooks_2']) |
|
83 | 83 | response.mustcontain(no=['cd /tmp2']) |
|
84 | 84 | |
|
85 | 85 | |
|
86 | 86 | @pytest.mark.usefixtures('autologin_user', 'app') |
|
87 | 87 | class TestAdminSettingsGlobal(object): |
|
88 | 88 | |
|
89 | 89 | def test_pre_post_code_code_active(self, csrf_token): |
|
90 | 90 | pre_code = 'rc-pre-code-187652122' |
|
91 | 91 | post_code = 'rc-postcode-98165231' |
|
92 | 92 | |
|
93 | 93 | response = self.post_and_verify_settings({ |
|
94 | 94 | 'rhodecode_pre_code': pre_code, |
|
95 | 95 | 'rhodecode_post_code': post_code, |
|
96 | 96 | 'csrf_token': csrf_token, |
|
97 | 97 | }) |
|
98 | 98 | |
|
99 | 99 | response = response.follow() |
|
100 | 100 | response.mustcontain(pre_code, post_code) |
|
101 | 101 | |
|
102 | 102 | def test_pre_post_code_code_inactive(self, csrf_token): |
|
103 | 103 | pre_code = 'rc-pre-code-187652122' |
|
104 | 104 | post_code = 'rc-postcode-98165231' |
|
105 | 105 | response = self.post_and_verify_settings({ |
|
106 | 106 | 'rhodecode_pre_code': '', |
|
107 | 107 | 'rhodecode_post_code': '', |
|
108 | 108 | 'csrf_token': csrf_token, |
|
109 | 109 | }) |
|
110 | 110 | |
|
111 | 111 | response = response.follow() |
|
112 | 112 | response.mustcontain(no=[pre_code, post_code]) |
|
113 | 113 | |
|
114 | 114 | def test_captcha_activate(self, csrf_token): |
|
115 | 115 | self.post_and_verify_settings({ |
|
116 | 116 | 'rhodecode_captcha_private_key': '1234567890', |
|
117 | 117 | 'rhodecode_captcha_public_key': '1234567890', |
|
118 | 118 | 'csrf_token': csrf_token, |
|
119 | 119 | }) |
|
120 | 120 | |
|
121 | 121 | response = self.app.get(ADMIN_PREFIX + '/register') |
|
122 | 122 | response.mustcontain('captcha') |
|
123 | 123 | |
|
124 | 124 | def test_captcha_deactivate(self, csrf_token): |
|
125 | 125 | self.post_and_verify_settings({ |
|
126 | 126 | 'rhodecode_captcha_private_key': '', |
|
127 | 127 | 'rhodecode_captcha_public_key': '1234567890', |
|
128 | 128 | 'csrf_token': csrf_token, |
|
129 | 129 | }) |
|
130 | 130 | |
|
131 | 131 | response = self.app.get(ADMIN_PREFIX + '/register') |
|
132 | 132 | response.mustcontain(no=['captcha']) |
|
133 | 133 | |
|
134 | 134 | def test_title_change(self, csrf_token): |
|
135 | 135 | old_title = 'RhodeCode' |
|
136 | 136 | |
|
137 | 137 | for new_title in ['Changed', 'Żółwik', old_title]: |
|
138 | 138 | response = self.post_and_verify_settings({ |
|
139 | 139 | 'rhodecode_title': new_title, |
|
140 | 140 | 'csrf_token': csrf_token, |
|
141 | 141 | }) |
|
142 | 142 | |
|
143 | 143 | response = response.follow() |
|
144 | 144 | response.mustcontain(new_title) |
|
145 | 145 | |
|
146 | 146 | def post_and_verify_settings(self, settings): |
|
147 | 147 | old_title = 'RhodeCode' |
|
148 | 148 | old_realm = 'RhodeCode authentication' |
|
149 | 149 | params = { |
|
150 | 150 | 'rhodecode_title': old_title, |
|
151 | 151 | 'rhodecode_realm': old_realm, |
|
152 | 152 | 'rhodecode_pre_code': '', |
|
153 | 153 | 'rhodecode_post_code': '', |
|
154 | 154 | 'rhodecode_captcha_private_key': '', |
|
155 | 155 | 'rhodecode_captcha_public_key': '', |
|
156 | 156 | 'rhodecode_create_personal_repo_group': False, |
|
157 | 157 | 'rhodecode_personal_repo_group_pattern': '${username}', |
|
158 | 158 | } |
|
159 | 159 | params.update(settings) |
|
160 | 160 | response = self.app.post( |
|
161 | 161 | route_path('admin_settings_global_update'), params=params) |
|
162 | 162 | |
|
163 | 163 | assert_session_flash(response, 'Updated application settings') |
|
164 | 164 | |
|
165 | 165 | app_settings = SettingsModel().get_all_settings() |
|
166 | 166 | del settings['csrf_token'] |
|
167 | 167 | for key, value in settings.items(): |
|
168 | 168 | assert app_settings[key] == value |
|
169 | 169 | |
|
170 | 170 | return response |
|
171 | 171 | |
|
172 | 172 | |
|
173 | 173 | @pytest.mark.usefixtures('autologin_user', 'app') |
|
174 | 174 | class TestAdminSettingsVcs(object): |
|
175 | 175 | |
|
176 | 176 | def test_contains_svn_default_patterns(self): |
|
177 | 177 | response = self.app.get(route_path('admin_settings_vcs')) |
|
178 | 178 | expected_patterns = [ |
|
179 | 179 | '/trunk', |
|
180 | 180 | '/branches/*', |
|
181 | 181 | '/tags/*', |
|
182 | 182 | ] |
|
183 | 183 | for pattern in expected_patterns: |
|
184 | 184 | response.mustcontain(pattern) |
|
185 | 185 | |
|
186 | 186 | def test_add_new_svn_branch_and_tag_pattern( |
|
187 | 187 | self, backend_svn, form_defaults, disable_sql_cache, |
|
188 | 188 | csrf_token): |
|
189 | 189 | form_defaults.update({ |
|
190 | 190 | 'new_svn_branch': '/exp/branches/*', |
|
191 | 191 | 'new_svn_tag': '/important_tags/*', |
|
192 | 192 | 'csrf_token': csrf_token, |
|
193 | 193 | }) |
|
194 | 194 | |
|
195 | 195 | response = self.app.post( |
|
196 | 196 | route_path('admin_settings_vcs_update'), |
|
197 | 197 | params=form_defaults, status=302) |
|
198 | 198 | response = response.follow() |
|
199 | 199 | |
|
200 | 200 | # Expect to find the new values on the page |
|
201 | 201 | response.mustcontain('/exp/branches/*') |
|
202 | 202 | response.mustcontain('/important_tags/*') |
|
203 | 203 | |
|
204 | 204 | # Expect that those patterns are used to match branches and tags now |
|
205 | 205 | repo = backend_svn['svn-simple-layout'].scm_instance() |
|
206 | 206 | assert 'exp/branches/exp-sphinx-docs' in repo.branches |
|
207 | 207 | assert 'important_tags/v0.5' in repo.tags |
|
208 | 208 | |
|
209 | 209 | def test_add_same_svn_value_twice_shows_an_error_message( |
|
210 | 210 | self, form_defaults, csrf_token, settings_util): |
|
211 | 211 | settings_util.create_rhodecode_ui('vcs_svn_branch', '/test') |
|
212 | 212 | settings_util.create_rhodecode_ui('vcs_svn_tag', '/test') |
|
213 | 213 | |
|
214 | 214 | response = self.app.post( |
|
215 | 215 | route_path('admin_settings_vcs_update'), |
|
216 | 216 | params={ |
|
217 | 217 | 'paths_root_path': form_defaults['paths_root_path'], |
|
218 | 218 | 'new_svn_branch': '/test', |
|
219 | 219 | 'new_svn_tag': '/test', |
|
220 | 220 | 'csrf_token': csrf_token, |
|
221 | 221 | }, |
|
222 | 222 | status=200) |
|
223 | 223 | |
|
224 | 224 | response.mustcontain("Pattern already exists") |
|
225 | 225 | response.mustcontain("Some form inputs contain invalid data.") |
|
226 | 226 | |
|
227 | 227 | @pytest.mark.parametrize('section', [ |
|
228 | 228 | 'vcs_svn_branch', |
|
229 | 229 | 'vcs_svn_tag', |
|
230 | 230 | ]) |
|
231 | 231 | def test_delete_svn_patterns( |
|
232 | 232 | self, section, csrf_token, settings_util): |
|
233 | 233 | setting = settings_util.create_rhodecode_ui( |
|
234 | 234 | section, '/test_delete', cleanup=False) |
|
235 | 235 | |
|
236 | 236 | self.app.post( |
|
237 | 237 | route_path('admin_settings_vcs_svn_pattern_delete'), |
|
238 | 238 | params={ |
|
239 | 239 | 'delete_svn_pattern': setting.ui_id, |
|
240 | 240 | 'csrf_token': csrf_token}, |
|
241 | 241 | headers={'X-REQUESTED-WITH': 'XMLHttpRequest'}) |
|
242 | 242 | |
|
243 | 243 | @pytest.mark.parametrize('section', [ |
|
244 | 244 | 'vcs_svn_branch', |
|
245 | 245 | 'vcs_svn_tag', |
|
246 | 246 | ]) |
|
247 | 247 | def test_delete_svn_patterns_raises_404_when_no_xhr( |
|
248 | 248 | self, section, csrf_token, settings_util): |
|
249 | 249 | setting = settings_util.create_rhodecode_ui(section, '/test_delete') |
|
250 | 250 | |
|
251 | 251 | self.app.post( |
|
252 | 252 | route_path('admin_settings_vcs_svn_pattern_delete'), |
|
253 | 253 | params={ |
|
254 | 254 | 'delete_svn_pattern': setting.ui_id, |
|
255 | 255 | 'csrf_token': csrf_token}, |
|
256 | 256 | status=404) |
|
257 | 257 | |
|
258 | 258 | def test_extensions_hgevolve(self, form_defaults, csrf_token): |
|
259 | 259 | form_defaults.update({ |
|
260 | 260 | 'csrf_token': csrf_token, |
|
261 | 261 | 'extensions_evolve': 'True', |
|
262 | 262 | }) |
|
263 | 263 | response = self.app.post( |
|
264 | 264 | route_path('admin_settings_vcs_update'), |
|
265 | 265 | params=form_defaults, |
|
266 | 266 | status=302) |
|
267 | 267 | |
|
268 | 268 | response = response.follow() |
|
269 | 269 | extensions_input = ( |
|
270 | 270 | '<input id="extensions_evolve" ' |
|
271 | 271 | 'name="extensions_evolve" type="checkbox" ' |
|
272 | 272 | 'value="True" checked="checked" />') |
|
273 | 273 | response.mustcontain(extensions_input) |
|
274 | 274 | |
|
275 | 275 | def test_has_a_section_for_pull_request_settings(self): |
|
276 | 276 | response = self.app.get(route_path('admin_settings_vcs')) |
|
277 | 277 | response.mustcontain('Pull Request Settings') |
|
278 | 278 | |
|
279 | 279 | def test_has_an_input_for_invalidation_of_inline_comments(self): |
|
280 | 280 | response = self.app.get(route_path('admin_settings_vcs')) |
|
281 | 281 | assert_response = response.assert_response() |
|
282 | 282 | assert_response.one_element_exists( |
|
283 | 283 | '[name=rhodecode_use_outdated_comments]') |
|
284 | 284 | |
|
285 | 285 | @pytest.mark.parametrize('new_value', [True, False]) |
|
286 | 286 | def test_allows_to_change_invalidation_of_inline_comments( |
|
287 | 287 | self, form_defaults, csrf_token, new_value): |
|
288 | 288 | setting_key = 'use_outdated_comments' |
|
289 | 289 | setting = SettingsModel().create_or_update_setting( |
|
290 | 290 | setting_key, not new_value, 'bool') |
|
291 | 291 | Session().add(setting) |
|
292 | 292 | Session().commit() |
|
293 | 293 | |
|
294 | 294 | form_defaults.update({ |
|
295 | 295 | 'csrf_token': csrf_token, |
|
296 | 296 | 'rhodecode_use_outdated_comments': str(new_value), |
|
297 | 297 | }) |
|
298 | 298 | response = self.app.post( |
|
299 | 299 | route_path('admin_settings_vcs_update'), |
|
300 | 300 | params=form_defaults, |
|
301 | 301 | status=302) |
|
302 | 302 | response = response.follow() |
|
303 | 303 | setting = SettingsModel().get_setting_by_name(setting_key) |
|
304 | 304 | assert setting.app_settings_value is new_value |
|
305 | 305 | |
|
306 | 306 | @pytest.mark.parametrize('new_value', [True, False]) |
|
307 | 307 | def test_allows_to_change_hg_rebase_merge_strategy( |
|
308 | 308 | self, form_defaults, csrf_token, new_value): |
|
309 | 309 | setting_key = 'hg_use_rebase_for_merging' |
|
310 | 310 | |
|
311 | 311 | form_defaults.update({ |
|
312 | 312 | 'csrf_token': csrf_token, |
|
313 | 313 | 'rhodecode_' + setting_key: str(new_value), |
|
314 | 314 | }) |
|
315 | 315 | |
|
316 | 316 | with mock.patch.dict( |
|
317 | 317 | rhodecode.CONFIG, {'labs_settings_active': 'true'}): |
|
318 | 318 | self.app.post( |
|
319 | 319 | route_path('admin_settings_vcs_update'), |
|
320 | 320 | params=form_defaults, |
|
321 | 321 | status=302) |
|
322 | 322 | |
|
323 | 323 | setting = SettingsModel().get_setting_by_name(setting_key) |
|
324 | 324 | assert setting.app_settings_value is new_value |
|
325 | 325 | |
|
326 | 326 | @pytest.fixture() |
|
327 | 327 | def disable_sql_cache(self, request): |
|
328 | 328 | # patch _do_orm_execute so it returns None, as if we were not using a cached query |
|
329 | 329 | patcher = mock.patch( |
|
330 | 330 | 'rhodecode.lib.caching_query.ORMCache._do_orm_execute', return_value=None) |
|
331 | 331 | request.addfinalizer(patcher.stop) |
|
332 | 332 | patcher.start() |
|
333 | 333 | |
|
334 | 334 | @pytest.fixture() |
|
335 | 335 | def form_defaults(self): |
|
336 | 336 | from rhodecode.apps.admin.views.settings import AdminSettingsView |
|
337 | 337 | return AdminSettingsView._form_defaults() |
|
338 | 338 | |
|
339 | 339 | # TODO: johbo: What we really want is to checkpoint before a test run and |
|
340 | 340 | # reset the session afterwards. |
|
341 | 341 | @pytest.fixture(scope='class', autouse=True) |
|
342 | 342 | def cleanup_settings(self, request, baseapp): |
|
343 | 343 | ui_id = RhodeCodeUi.ui_id |
|
344 | 344 | original_ids = [r.ui_id for r in RhodeCodeUi.query().with_entities(ui_id)] |
|
345 | 345 | |
|
346 | 346 | @request.addfinalizer |
|
347 | 347 | def cleanup(): |
|
348 | 348 | RhodeCodeUi.query().filter( |
|
349 | 349 | ui_id.notin_(original_ids)).delete(False) |
|
350 | 350 | |
|
351 | 351 | |
|
352 | 352 | @pytest.mark.usefixtures('autologin_user', 'app') |
|
353 | 353 | class TestLabsSettings(object): |
|
354 | 354 | def test_get_settings_page_disabled(self): |
|
355 | 355 | with mock.patch.dict( |
|
356 | 356 | rhodecode.CONFIG, {'labs_settings_active': 'false'}): |
|
357 | 357 | |
|
358 | 358 | response = self.app.get( |
|
359 | 359 | route_path('admin_settings_labs'), status=302) |
|
360 | 360 | |
|
361 | 361 | assert response.location.endswith(route_path('admin_settings')) |
|
362 | 362 | |
|
363 | 363 | def test_get_settings_page_enabled(self): |
|
364 | 364 | from rhodecode.apps.admin.views import settings |
|
365 | 365 | lab_settings = [ |
|
366 | 366 | settings.LabSetting( |
|
367 | 367 | key='rhodecode_bool', |
|
368 | 368 | type='bool', |
|
369 | 369 | group='bool group', |
|
370 | 370 | label='bool label', |
|
371 | 371 | help='bool help' |
|
372 | 372 | ), |
|
373 | 373 | settings.LabSetting( |
|
374 | 374 | key='rhodecode_text', |
|
375 | 375 | type='unicode', |
|
376 | 376 | group='text group', |
|
377 | 377 | label='text label', |
|
378 | 378 | help='text help' |
|
379 | 379 | ), |
|
380 | 380 | ] |
|
381 | 381 | with mock.patch.dict(rhodecode.CONFIG, |
|
382 | 382 | {'labs_settings_active': 'true'}): |
|
383 | 383 | with mock.patch.object(settings, '_LAB_SETTINGS', lab_settings): |
|
384 | 384 | response = self.app.get(route_path('admin_settings_labs')) |
|
385 | 385 | |
|
386 | 386 | assert '<label>bool group:</label>' in response |
|
387 | 387 | assert '<label for="rhodecode_bool">bool label</label>' in response |
|
388 | 388 | assert '<p class="help-block">bool help</p>' in response |
|
389 | 389 | assert 'name="rhodecode_bool" type="checkbox"' in response |
|
390 | 390 | |
|
391 | 391 | assert '<label>text group:</label>' in response |
|
392 | 392 | assert '<label for="rhodecode_text">text label</label>' in response |
|
393 | 393 | assert '<p class="help-block">text help</p>' in response |
|
394 | 394 | assert 'name="rhodecode_text" size="60" type="text"' in response |
|
395 | 395 | |
|
396 | 396 | |
|
397 | 397 | @pytest.mark.usefixtures('app') |
|
398 | 398 | class TestOpenSourceLicenses(object): |
|
399 | 399 | |
|
400 | 400 | def test_records_are_displayed(self, autologin_user): |
|
401 | 401 | sample_licenses = [ |
|
402 | 402 | { |
|
403 | 403 | "license": [ |
|
404 | 404 | { |
|
405 | 405 | "fullName": "BSD 4-clause \"Original\" or \"Old\" License", |
|
406 | 406 | "shortName": "bsdOriginal", |
|
407 | 407 | "spdxId": "BSD-4-Clause", |
|
408 | 408 | "url": "http://spdx.org/licenses/BSD-4-Clause.html" |
|
409 | 409 | } |
|
410 | 410 | ], |
|
411 | 411 | "name": "python2.7-coverage-3.7.1" |
|
412 | 412 | }, |
|
413 | 413 | { |
|
414 | 414 | "license": [ |
|
415 | 415 | { |
|
416 | 416 | "fullName": "MIT License", |
|
417 | 417 | "shortName": "mit", |
|
418 | 418 | "spdxId": "MIT", |
|
419 | 419 | "url": "http://spdx.org/licenses/MIT.html" |
|
420 | 420 | } |
|
421 | 421 | ], |
|
422 | 422 | "name": "python2.7-bootstrapped-pip-9.0.1" |
|
423 | 423 | }, |
|
424 | 424 | ] |
|
425 | 425 | read_licenses_patch = mock.patch( |
|
426 | 426 | 'rhodecode.apps.admin.views.open_source_licenses.read_opensource_licenses', |
|
427 | 427 | return_value=sample_licenses) |
|
428 | 428 | with read_licenses_patch: |
|
429 | 429 | response = self.app.get( |
|
430 | 430 | route_path('admin_settings_open_source'), status=200) |
|
431 | 431 | |
|
432 | 432 | assert_response = response.assert_response() |
|
433 | 433 | assert_response.element_contains( |
|
434 | 434 | '.panel-heading', 'Licenses of Third Party Packages') |
|
435 | 435 | for license_data in sample_licenses: |
|
436 | 436 | response.mustcontain(license_data["license"][0]["spdxId"]) |
|
437 | 437 | assert_response.element_contains('.panel-body', license_data["name"]) |
|
438 | 438 | |
|
439 | 439 | def test_records_can_be_read(self, autologin_user): |
|
440 | 440 | response = self.app.get( |
|
441 | 441 | route_path('admin_settings_open_source'), status=200) |
|
442 | 442 | assert_response = response.assert_response() |
|
443 | 443 | assert_response.element_contains( |
|
444 | 444 | '.panel-heading', 'Licenses of Third Party Packages') |
|
445 | 445 | |
|
446 | 446 | def test_forbidden_when_normal_user(self, autologin_regular_user): |
|
447 | 447 | self.app.get( |
|
448 | 448 | route_path('admin_settings_open_source'), status=404) |
|
449 | 449 | |
|
450 | 450 | |
|
451 | 451 | @pytest.mark.usefixtures('app') |
|
452 | 452 | class TestUserSessions(object): |
|
453 | 453 | |
|
454 | 454 | def test_forbidden_when_normal_user(self, autologin_regular_user): |
|
455 | 455 | self.app.get(route_path('admin_settings_sessions'), status=404) |
|
456 | 456 | |
|
457 | 457 | def test_show_sessions_page(self, autologin_user): |
|
458 | 458 | response = self.app.get(route_path('admin_settings_sessions'), status=200) |
|
459 | 459 | response.mustcontain('file') |
|
460 | 460 | |
|
461 | 461 | def test_cleanup_old_sessions(self, autologin_user, csrf_token): |
|
462 | 462 | |
|
463 | 463 | post_data = { |
|
464 | 464 | 'csrf_token': csrf_token, |
|
465 | 465 | 'expire_days': '60' |
|
466 | 466 | } |
|
467 | 467 | response = self.app.post( |
|
468 | 468 | route_path('admin_settings_sessions_cleanup'), params=post_data, |
|
469 | 469 | status=302) |
|
470 | 470 | assert_session_flash(response, 'Cleaned up old sessions') |
|
471 | 471 | |
|
472 | 472 | |
|
473 | 473 | @pytest.mark.usefixtures('app') |
|
474 | 474 | class TestAdminSystemInfo(object): |
|
475 | 475 | |
|
476 | 476 | def test_forbidden_when_normal_user(self, autologin_regular_user): |
|
477 | 477 | self.app.get(route_path('admin_settings_system'), status=404) |
|
478 | 478 | |
|
479 | 479 | def test_system_info_page(self, autologin_user): |
|
480 | 480 | response = self.app.get(route_path('admin_settings_system')) |
|
481 | 481 | response.mustcontain('RhodeCode Community Edition, version {}'.format( |
|
482 | 482 | rhodecode.__version__)) |
|
483 | 483 | |
|
484 | 484 | def test_system_update_new_version(self, autologin_user): |
|
485 | 485 | update_data = { |
|
486 | 486 | 'versions': [ |
|
487 | 487 | { |
|
488 | 'version': '100.

488 | 'version': '100.0.0', |
|
489 | 489 | 'general': 'The latest version we are ever going to ship' |
|
490 | 490 | }, |
|
491 | 491 | { |
|
492 | 492 | 'version': '0.0.0', |
|
493 | 493 | 'general': 'The first version we ever shipped' |
|
494 | 494 | } |
|
495 | 495 | ] |
|
496 | 496 | } |
|
497 | 497 | with mock.patch(UPDATE_DATA_QUALNAME, return_value=update_data): |
|
498 | 498 | response = self.app.get(route_path('admin_settings_system_update')) |
|
499 | 499 | response.mustcontain('A <b>new version</b> is available') |
|
500 | 500 | |
|
501 | 501 | def test_system_update_nothing_new(self, autologin_user): |
|
502 | 502 | update_data = { |
|
503 | 503 | 'versions': [ |
|
504 | 504 | { |
|
505 | 'version': '

505 | 'version': '4.0.0', |
|
506 | 506 | 'general': 'The first version we ever shipped' |
|
507 | 507 | } |
|
508 | 508 | ] |
|
509 | 509 | } |
|
510 | text = f"Your current version, {rhodecode.__version__}, is up-to-date as it is equal to or newer than the latest available version, 4.0.0." | |
|
510 | 511 | with mock.patch(UPDATE_DATA_QUALNAME, return_value=update_data): |
|
511 | 512 | response = self.app.get(route_path('admin_settings_system_update')) |
|
512 | response.mustcontain( | |
|
513 | 'This instance is already running the <b>latest</b> stable version') | |
|
513 | response.mustcontain(text) | |
|
514 | 514 | |
|
515 | 515 | def test_system_update_bad_response(self, autologin_user): |
|
516 | 516 | with mock.patch(UPDATE_DATA_QUALNAME, side_effect=ValueError('foo')): |
|
517 | 517 | response = self.app.get(route_path('admin_settings_system_update')) |
|
518 | 518 | response.mustcontain( |
|
519 | 519 | 'Bad data sent from update server') |
|
520 | 520 | |
|
521 | 521 | |
|
522 | 522 | @pytest.mark.usefixtures("app") |
|
523 | 523 | class TestAdminSettingsIssueTracker(object): |
|
524 | 524 | RC_PREFIX = 'rhodecode_' |
|
525 | 525 | SHORT_PATTERN_KEY = 'issuetracker_pat_' |
|
526 | 526 | PATTERN_KEY = RC_PREFIX + SHORT_PATTERN_KEY |
|
527 | 527 | DESC_KEY = RC_PREFIX + 'issuetracker_desc_' |
|
528 | 528 | |
|
529 | 529 | def test_issuetracker_index(self, autologin_user): |
|
530 | 530 | response = self.app.get(route_path('admin_settings_issuetracker')) |
|
531 | 531 | assert response.status_code == 200 |
|
532 | 532 | |
|
533 | 533 | def test_add_empty_issuetracker_pattern( |
|
534 | 534 | self, request, autologin_user, csrf_token): |
|
535 | 535 | post_url = route_path('admin_settings_issuetracker_update') |
|
536 | 536 | post_data = { |
|
537 | 537 | 'csrf_token': csrf_token |
|
538 | 538 | } |
|
539 | 539 | self.app.post(post_url, post_data, status=302) |
|
540 | 540 | |
|
541 | 541 | def test_add_issuetracker_pattern( |
|
542 | 542 | self, request, autologin_user, csrf_token): |
|
543 | 543 | pattern = 'issuetracker_pat' |
|
544 | 544 | another_pattern = pattern+'1' |
|
545 | 545 | post_url = route_path('admin_settings_issuetracker_update') |
|
546 | 546 | post_data = { |
|
547 | 547 | 'new_pattern_pattern_0': pattern, |
|
548 | 548 | 'new_pattern_url_0': 'http://url', |
|
549 | 549 | 'new_pattern_prefix_0': 'prefix', |
|
550 | 550 | 'new_pattern_description_0': 'description', |
|
551 | 551 | 'new_pattern_pattern_1': another_pattern, |
|
552 | 552 | 'new_pattern_url_1': 'https://url1', |
|
553 | 553 | 'new_pattern_prefix_1': 'prefix1', |
|
554 | 554 | 'new_pattern_description_1': 'description1', |
|
555 | 555 | 'csrf_token': csrf_token |
|
556 | 556 | } |
|
557 | 557 | self.app.post(post_url, post_data, status=302) |
|
558 | 558 | settings = SettingsModel().get_all_settings() |
|
559 | 559 | self.uid = md5_safe(pattern) |
|
560 | 560 | assert settings[self.PATTERN_KEY+self.uid] == pattern |
|
561 | 561 | self.another_uid = md5_safe(another_pattern) |
|
562 | 562 | assert settings[self.PATTERN_KEY+self.another_uid] == another_pattern |
|
563 | 563 | |
|
564 | 564 | @request.addfinalizer |
|
565 | 565 | def cleanup(): |
|
566 | 566 | defaults = SettingsModel().get_all_settings() |
|
567 | 567 | |
|
568 | 568 | entries = [name for name in defaults if ( |
|
569 | 569 | (self.uid in name) or (self.another_uid in name))] |
|
570 | 570 | start = len(self.RC_PREFIX) |
|
571 | 571 | for del_key in entries: |
|
572 | 572 | # TODO: anderson: get_by_name needs name without prefix |
|
573 | 573 | entry = SettingsModel().get_setting_by_name(del_key[start:]) |
|
574 | 574 | Session().delete(entry) |
|
575 | 575 | |
|
576 | 576 | Session().commit() |
|
577 | 577 | |
|
578 | 578 | def test_edit_issuetracker_pattern( |
|
579 | 579 | self, autologin_user, backend, csrf_token, request): |
|
580 | 580 | |
|
581 | 581 | old_pattern = 'issuetracker_pat1' |
|
582 | 582 | old_uid = md5_safe(old_pattern) |
|
583 | 583 | |
|
584 | 584 | post_url = route_path('admin_settings_issuetracker_update') |
|
585 | 585 | post_data = { |
|
586 | 586 | 'new_pattern_pattern_0': old_pattern, |
|
587 | 587 | 'new_pattern_url_0': 'http://url', |
|
588 | 588 | 'new_pattern_prefix_0': 'prefix', |
|
589 | 589 | 'new_pattern_description_0': 'description', |
|
590 | 590 | |
|
591 | 591 | 'csrf_token': csrf_token |
|
592 | 592 | } |
|
593 | 593 | self.app.post(post_url, post_data, status=302) |
|
594 | 594 | |
|
595 | 595 | new_pattern = 'issuetracker_pat1_edited' |
|
596 | 596 | self.new_uid = md5_safe(new_pattern) |
|
597 | 597 | |
|
598 | 598 | post_url = route_path('admin_settings_issuetracker_update') |
|
599 | 599 | post_data = { |
|
600 | 600 | 'new_pattern_pattern_{}'.format(old_uid): new_pattern, |
|
601 | 601 | 'new_pattern_url_{}'.format(old_uid): 'https://url_edited', |
|
602 | 602 | 'new_pattern_prefix_{}'.format(old_uid): 'prefix_edited', |
|
603 | 603 | 'new_pattern_description_{}'.format(old_uid): 'description_edited', |
|
604 | 604 | 'uid': old_uid, |
|
605 | 605 | 'csrf_token': csrf_token |
|
606 | 606 | } |
|
607 | 607 | self.app.post(post_url, post_data, status=302) |
|
608 | 608 | |
|
609 | 609 | settings = SettingsModel().get_all_settings() |
|
610 | 610 | assert settings[self.PATTERN_KEY+self.new_uid] == new_pattern |
|
611 | 611 | assert settings[self.DESC_KEY + self.new_uid] == 'description_edited' |
|
612 | 612 | assert self.PATTERN_KEY+old_uid not in settings |
|
613 | 613 | |
|
614 | 614 | @request.addfinalizer |
|
615 | 615 | def cleanup(): |
|
616 | 616 | IssueTrackerSettingsModel().delete_entries(old_uid) |
|
617 | 617 | IssueTrackerSettingsModel().delete_entries(self.new_uid) |
|
618 | 618 | |
|
619 | 619 | def test_replace_issuetracker_pattern_description( |
|
620 | 620 | self, autologin_user, csrf_token, request, settings_util): |
|
621 | 621 | prefix = 'issuetracker' |
|
622 | 622 | pattern = 'issuetracker_pat' |
|
623 | 623 | self.uid = md5_safe(pattern) |
|
624 | 624 | pattern_key = '_'.join([prefix, 'pat', self.uid]) |
|
625 | 625 | rc_pattern_key = '_'.join(['rhodecode', pattern_key]) |
|
626 | 626 | desc_key = '_'.join([prefix, 'desc', self.uid]) |
|
627 | 627 | rc_desc_key = '_'.join(['rhodecode', desc_key]) |
|
628 | 628 | new_description = 'new_description' |
|
629 | 629 | |
|
630 | 630 | settings_util.create_rhodecode_setting( |
|
631 | 631 | pattern_key, pattern, 'unicode', cleanup=False) |
|
632 | 632 | settings_util.create_rhodecode_setting( |
|
633 | 633 | desc_key, 'old description', 'unicode', cleanup=False) |
|
634 | 634 | |
|
635 | 635 | post_url = route_path('admin_settings_issuetracker_update') |
|
636 | 636 | post_data = { |
|
637 | 637 | 'new_pattern_pattern_0': pattern, |
|
638 | 638 | 'new_pattern_url_0': 'https://url', |
|
639 | 639 | 'new_pattern_prefix_0': 'prefix', |
|
640 | 640 | 'new_pattern_description_0': new_description, |
|
641 | 641 | 'uid': self.uid, |
|
642 | 642 | 'csrf_token': csrf_token |
|
643 | 643 | } |
|
644 | 644 | self.app.post(post_url, post_data, status=302) |
|
645 | 645 | settings = SettingsModel().get_all_settings() |
|
646 | 646 | assert settings[rc_pattern_key] == pattern |
|
647 | 647 | assert settings[rc_desc_key] == new_description |
|
648 | 648 | |
|
649 | 649 | @request.addfinalizer |
|
650 | 650 | def cleanup(): |
|
651 | 651 | IssueTrackerSettingsModel().delete_entries(self.uid) |
|
652 | 652 | |
|
653 | 653 | def test_delete_issuetracker_pattern( |
|
654 | 654 | self, autologin_user, backend, csrf_token, settings_util, xhr_header): |
|
655 | 655 | |
|
656 | 656 | old_pattern = 'issuetracker_pat_deleted' |
|
657 | 657 | old_uid = md5_safe(old_pattern) |
|
658 | 658 | |
|
659 | 659 | post_url = route_path('admin_settings_issuetracker_update') |
|
660 | 660 | post_data = { |
|
661 | 661 | 'new_pattern_pattern_0': old_pattern, |
|
662 | 662 | 'new_pattern_url_0': 'http://url', |
|
663 | 663 | 'new_pattern_prefix_0': 'prefix', |
|
664 | 664 | 'new_pattern_description_0': 'description', |
|
665 | 665 | |
|
666 | 666 | 'csrf_token': csrf_token |
|
667 | 667 | } |
|
668 | 668 | self.app.post(post_url, post_data, status=302) |
|
669 | 669 | |
|
670 | 670 | post_url = route_path('admin_settings_issuetracker_delete') |
|
671 | 671 | post_data = { |
|
672 | 672 | 'uid': old_uid, |
|
673 | 673 | 'csrf_token': csrf_token |
|
674 | 674 | } |
|
675 | 675 | self.app.post(post_url, post_data, extra_environ=xhr_header, status=200) |
|
676 | 676 | settings = SettingsModel().get_all_settings() |
|
677 | 677 | assert self.PATTERN_KEY+old_uid not in settings |
|
678 | 678 | assert self.DESC_KEY + old_uid not in settings |
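One change in the test file above replaces the static "latest stable version" flash message with text built from the running instance's version. A minimal sketch of the new expected string, using a stand-in value rather than the real `rhodecode.__version__` that the test module imports:

```python
# Sketch only, not the RhodeCode implementation: build the "up to date"
# message the edited test_system_update_nothing_new now asserts on.
# "current" stands in for rhodecode.__version__.
current = "5.0.3"
latest = "4.0.0"
text = (
    f"Your current version, {current}, is up-to-date as it is equal to "
    f"or newer than the latest available version, {latest}."
)
```

With a mocked update feed whose newest entry is `latest`, the view renders this string and the test checks it via `response.mustcontain(text)`.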
@@ -1,477 +1,477 b'' | |||
|
1 | 1 | # Copyright (C) 2016-2023 RhodeCode GmbH |
|
2 | 2 | # |
|
3 | 3 | # This program is free software: you can redistribute it and/or modify |
|
4 | 4 | # it under the terms of the GNU Affero General Public License, version 3 |
|
5 | 5 | # (only), as published by the Free Software Foundation. |
|
6 | 6 | # |
|
7 | 7 | # This program is distributed in the hope that it will be useful, |
|
8 | 8 | # but WITHOUT ANY WARRANTY; without even the implied warranty of |
|
9 | 9 | # MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the |
|
10 | 10 | # GNU General Public License for more details. |
|
11 | 11 | # |
|
12 | 12 | # You should have received a copy of the GNU Affero General Public License |
|
13 | 13 | # along with this program. If not, see <http://www.gnu.org/licenses/>. |
|
14 | 14 | # |
|
15 | 15 | # This program is dual-licensed. If you wish to learn more about the |
|
16 | 16 | # RhodeCode Enterprise Edition, including its added features, Support services, |
|
17 | 17 | # and proprietary license terms, please see https://rhodecode.com/licenses/ |
|
18 | 18 | |
|
19 | 19 | import re |
|
20 | 20 | import logging |
|
21 | 21 | import formencode |
|
22 | 22 | import formencode.htmlfill |
|
23 | 23 | import datetime |
|
24 | 24 | from pyramid.interfaces import IRoutesMapper |
|
25 | 25 | |
|
26 | 26 | from pyramid.httpexceptions import HTTPFound |
|
27 | 27 | from pyramid.renderers import render |
|
28 | 28 | from pyramid.response import Response |
|
29 | 29 | |
|
30 | 30 | from rhodecode.apps._base import BaseAppView, DataGridAppView |
|
31 | from rhodecode.apps.ssh_support import SshKeyFileChangeEvent | |
|
31 | from rhodecode.apps.ssh_support.events import SshKeyFileChangeEvent | |
|
32 | 32 | from rhodecode import events |
|
33 | 33 | |
|
34 | 34 | from rhodecode.lib import helpers as h |
|
35 | 35 | from rhodecode.lib.auth import ( |
|
36 | 36 | LoginRequired, HasPermissionAllDecorator, CSRFRequired) |
|
37 | 37 | from rhodecode.lib.utils2 import aslist, safe_str |
|
38 | 38 | from rhodecode.model.db import ( |
|
39 | 39 | or_, coalesce, User, UserIpMap, UserSshKeys) |
|
40 | 40 | from rhodecode.model.forms import ( |
|
41 | 41 | ApplicationPermissionsForm, ObjectPermissionsForm, UserPermissionsForm) |
|
42 | 42 | from rhodecode.model.meta import Session |
|
43 | 43 | from rhodecode.model.permission import PermissionModel |
|
44 | 44 | from rhodecode.model.settings import SettingsModel |
|
45 | 45 | |
|
46 | 46 | |
|
47 | 47 | log = logging.getLogger(__name__) |
|
48 | 48 | |
|
49 | 49 | |
|
50 | 50 | class AdminPermissionsView(BaseAppView, DataGridAppView): |
|
51 | 51 | def load_default_context(self): |
|
52 | 52 | c = self._get_local_tmpl_context() |
|
53 | 53 | PermissionModel().set_global_permission_choices( |
|
54 | 54 | c, gettext_translator=self.request.translate) |
|
55 | 55 | return c |
|
56 | 56 | |
|
57 | 57 | @LoginRequired() |
|
58 | 58 | @HasPermissionAllDecorator('hg.admin') |
|
59 | 59 | def permissions_application(self): |
|
60 | 60 | c = self.load_default_context() |
|
61 | 61 | c.active = 'application' |
|
62 | 62 | |
|
63 | 63 | c.user = User.get_default_user(refresh=True) |
|
64 | 64 | |
|
65 | 65 | app_settings = c.rc_config |
|
66 | 66 | |
|
67 | 67 | defaults = { |
|
68 | 68 | 'anonymous': c.user.active, |
|
69 | 69 | 'default_register_message': app_settings.get( |
|
70 | 70 | 'rhodecode_register_message') |
|
71 | 71 | } |
|
72 | 72 | defaults.update(c.user.get_default_perms()) |
|
73 | 73 | |
|
74 | 74 | data = render('rhodecode:templates/admin/permissions/permissions.mako', |
|
75 | 75 | self._get_template_context(c), self.request) |
|
76 | 76 | html = formencode.htmlfill.render( |
|
77 | 77 | data, |
|
78 | 78 | defaults=defaults, |
|
79 | 79 | encoding="UTF-8", |
|
80 | 80 | force_defaults=False |
|
81 | 81 | ) |
|
82 | 82 | return Response(html) |
|
83 | 83 | |
|
84 | 84 | @LoginRequired() |
|
85 | 85 | @HasPermissionAllDecorator('hg.admin') |
|
86 | 86 | @CSRFRequired() |
|
87 | 87 | def permissions_application_update(self): |
|
88 | 88 | _ = self.request.translate |
|
89 | 89 | c = self.load_default_context() |
|
90 | 90 | c.active = 'application' |
|
91 | 91 | |
|
92 | 92 | _form = ApplicationPermissionsForm( |
|
93 | 93 | self.request.translate, |
|
94 | 94 | [x[0] for x in c.register_choices], |
|
95 | 95 | [x[0] for x in c.password_reset_choices], |
|
96 | 96 | [x[0] for x in c.extern_activate_choices])() |
|
97 | 97 | |
|
98 | 98 | try: |
|
99 | 99 | form_result = _form.to_python(dict(self.request.POST)) |
|
100 | 100 | form_result.update({'perm_user_name': User.DEFAULT_USER}) |
|
101 | 101 | PermissionModel().update_application_permissions(form_result) |
|
102 | 102 | |
|
103 | 103 | settings = [ |
|
104 | 104 | ('register_message', 'default_register_message'), |
|
105 | 105 | ] |
|
106 | 106 | for setting, form_key in settings: |
|
107 | 107 | sett = SettingsModel().create_or_update_setting( |
|
108 | 108 | setting, form_result[form_key]) |
|
109 | 109 | Session().add(sett) |
|
110 | 110 | |
|
111 | 111 | Session().commit() |
|
112 | 112 | h.flash(_('Application permissions updated successfully'), |
|
113 | 113 | category='success') |
|
114 | 114 | |
|
115 | 115 | except formencode.Invalid as errors: |
|
116 | 116 | defaults = errors.value |
|
117 | 117 | |
|
118 | 118 | data = render( |
|
119 | 119 | 'rhodecode:templates/admin/permissions/permissions.mako', |
|
120 | 120 | self._get_template_context(c), self.request) |
|
121 | 121 | html = formencode.htmlfill.render( |
|
122 | 122 | data, |
|
123 | 123 | defaults=defaults, |
|
124 | 124 | errors=errors.unpack_errors() or {}, |
|
125 | 125 | prefix_error=False, |
|
126 | 126 | encoding="UTF-8", |
|
127 | 127 | force_defaults=False |
|
128 | 128 | ) |
|
129 | 129 | return Response(html) |
|
130 | 130 | |
|
131 | 131 | except Exception: |
|
132 | 132 | log.exception("Exception during update of permissions") |
|
133 | 133 | h.flash(_('Error occurred during update of permissions'), |
|
134 | 134 | category='error') |
|
135 | 135 | |
|
136 | 136 | affected_user_ids = [User.get_default_user_id()] |
|
137 | 137 | PermissionModel().trigger_permission_flush(affected_user_ids) |
|
138 | 138 | |
|
139 | 139 | raise HTTPFound(h.route_path('admin_permissions_application')) |
|
140 | 140 | |
|
141 | 141 | @LoginRequired() |
|
142 | 142 | @HasPermissionAllDecorator('hg.admin') |
|
143 | 143 | def permissions_objects(self): |
|
144 | 144 | c = self.load_default_context() |
|
145 | 145 | c.active = 'objects' |
|
146 | 146 | |
|
147 | 147 | c.user = User.get_default_user(refresh=True) |
|
148 | 148 | defaults = {} |
|
149 | 149 | defaults.update(c.user.get_default_perms()) |
|
150 | 150 | |
|
151 | 151 | data = render( |
|
152 | 152 | 'rhodecode:templates/admin/permissions/permissions.mako', |
|
153 | 153 | self._get_template_context(c), self.request) |
|
154 | 154 | html = formencode.htmlfill.render( |
|
155 | 155 | data, |
|
156 | 156 | defaults=defaults, |
|
157 | 157 | encoding="UTF-8", |
|
158 | 158 | force_defaults=False |
|
159 | 159 | ) |
|
160 | 160 | return Response(html) |
|
161 | 161 | |
|
162 | 162 | @LoginRequired() |
|
163 | 163 | @HasPermissionAllDecorator('hg.admin') |
|
164 | 164 | @CSRFRequired() |
|
165 | 165 | def permissions_objects_update(self): |
|
166 | 166 | _ = self.request.translate |
|
167 | 167 | c = self.load_default_context() |
|
168 | 168 | c.active = 'objects' |
|
169 | 169 | |
|
170 | 170 | _form = ObjectPermissionsForm( |
|
171 | 171 | self.request.translate, |
|
172 | 172 | [x[0] for x in c.repo_perms_choices], |
|
173 | 173 | [x[0] for x in c.group_perms_choices], |
|
174 | 174 | [x[0] for x in c.user_group_perms_choices], |
|
175 | 175 | )() |
|
176 | 176 | |
|
177 | 177 | try: |
|
178 | 178 | form_result = _form.to_python(dict(self.request.POST)) |
|
179 | 179 | form_result.update({'perm_user_name': User.DEFAULT_USER}) |
|
180 | 180 | PermissionModel().update_object_permissions(form_result) |
|
181 | 181 | |
|
182 | 182 | Session().commit() |
|
183 | 183 | h.flash(_('Object permissions updated successfully'), |
|
184 | 184 | category='success') |
|
185 | 185 | |
|
186 | 186 | except formencode.Invalid as errors: |
|
187 | 187 | defaults = errors.value |
|
188 | 188 | |
|
189 | 189 | data = render( |
|
190 | 190 | 'rhodecode:templates/admin/permissions/permissions.mako', |
|
191 | 191 | self._get_template_context(c), self.request) |
|
192 | 192 | html = formencode.htmlfill.render( |
|
193 | 193 | data, |
|
194 | 194 | defaults=defaults, |
|
195 | 195 | errors=errors.unpack_errors() or {}, |
|
196 | 196 | prefix_error=False, |
|
197 | 197 | encoding="UTF-8", |
|
198 | 198 | force_defaults=False |
|
199 | 199 | ) |
|
200 | 200 | return Response(html) |
|
201 | 201 | except Exception: |
|
202 | 202 | log.exception("Exception during update of permissions") |
|
203 | 203 | h.flash(_('Error occurred during update of permissions'), |
|
204 | 204 | category='error') |
|
205 | 205 | |
|
206 | 206 | affected_user_ids = [User.get_default_user_id()] |
|
207 | 207 | PermissionModel().trigger_permission_flush(affected_user_ids) |
|
208 | 208 | |
|
209 | 209 | raise HTTPFound(h.route_path('admin_permissions_object')) |
|
210 | 210 | |
|
211 | 211 | @LoginRequired() |
|
212 | 212 | @HasPermissionAllDecorator('hg.admin') |
|
213 | 213 | def permissions_branch(self): |
|
214 | 214 | c = self.load_default_context() |
|
215 | 215 | c.active = 'branch' |
|
216 | 216 | |
|
217 | 217 | c.user = User.get_default_user(refresh=True) |
|
218 | 218 | defaults = {} |
|
219 | 219 | defaults.update(c.user.get_default_perms()) |
|
220 | 220 | |
|
221 | 221 | data = render( |
|
222 | 222 | 'rhodecode:templates/admin/permissions/permissions.mako', |
|
223 | 223 | self._get_template_context(c), self.request) |
|
224 | 224 | html = formencode.htmlfill.render( |
|
225 | 225 | data, |
|
226 | 226 | defaults=defaults, |
|
227 | 227 | encoding="UTF-8", |
|
228 | 228 | force_defaults=False |
|
229 | 229 | ) |
|
230 | 230 | return Response(html) |
|
231 | 231 | |
|
232 | 232 | @LoginRequired() |
|
233 | 233 | @HasPermissionAllDecorator('hg.admin') |
|
234 | 234 | def permissions_global(self): |
|
235 | 235 | c = self.load_default_context() |
|
236 | 236 | c.active = 'global' |
|
237 | 237 | |
|
238 | 238 | c.user = User.get_default_user(refresh=True) |
|
239 | 239 | defaults = {} |
|
240 | 240 | defaults.update(c.user.get_default_perms()) |
|
241 | 241 | |
|
242 | 242 | data = render( |
|
243 | 243 | 'rhodecode:templates/admin/permissions/permissions.mako', |
|
244 | 244 | self._get_template_context(c), self.request) |
|
245 | 245 | html = formencode.htmlfill.render( |
|
246 | 246 | data, |
|
247 | 247 | defaults=defaults, |
|
248 | 248 | encoding="UTF-8", |
|
249 | 249 | force_defaults=False |
|
250 | 250 | ) |
|
251 | 251 | return Response(html) |
|
252 | 252 | |
|
253 | 253 | @LoginRequired() |
|
254 | 254 | @HasPermissionAllDecorator('hg.admin') |
|
255 | 255 | @CSRFRequired() |
|
256 | 256 | def permissions_global_update(self): |
|
257 | 257 | _ = self.request.translate |
|
258 | 258 | c = self.load_default_context() |
|
259 | 259 | c.active = 'global' |
|
260 | 260 | |
|
261 | 261 | _form = UserPermissionsForm( |
|
262 | 262 | self.request.translate, |
|
263 | 263 | [x[0] for x in c.repo_create_choices], |
|
264 | 264 | [x[0] for x in c.repo_create_on_write_choices], |
|
265 | 265 | [x[0] for x in c.repo_group_create_choices], |
|
266 | 266 | [x[0] for x in c.user_group_create_choices], |
|
267 | 267 | [x[0] for x in c.fork_choices], |
|
268 | 268 | [x[0] for x in c.inherit_default_permission_choices])() |
|
269 | 269 | |
|
270 | 270 | try: |
|
271 | 271 | form_result = _form.to_python(dict(self.request.POST)) |
|
272 | 272 | form_result.update({'perm_user_name': User.DEFAULT_USER}) |
|
273 | 273 | PermissionModel().update_user_permissions(form_result) |
|
274 | 274 | |
|
275 | 275 | Session().commit() |
|
276 | 276 | h.flash(_('Global permissions updated successfully'), |
|
277 | 277 | category='success') |
|
278 | 278 | |
|
279 | 279 | except formencode.Invalid as errors: |
|
280 | 280 | defaults = errors.value |
|
281 | 281 | |
|
282 | 282 | data = render( |
|
283 | 283 | 'rhodecode:templates/admin/permissions/permissions.mako', |
|
284 | 284 | self._get_template_context(c), self.request) |
|
285 | 285 | html = formencode.htmlfill.render( |
|
286 | 286 | data, |
|
287 | 287 | defaults=defaults, |
|
288 | 288 | errors=errors.unpack_errors() or {}, |
|
289 | 289 | prefix_error=False, |
|
290 | 290 | encoding="UTF-8", |
|
291 | 291 | force_defaults=False |
|
292 | 292 | ) |
|
293 | 293 | return Response(html) |
|
294 | 294 | except Exception: |
|
295 | 295 | log.exception("Exception during update of permissions") |
|
296 | 296 | h.flash(_('Error occurred during update of permissions'), |
|
297 | 297 | category='error') |
|
298 | 298 | |
|
299 | 299 | affected_user_ids = [User.get_default_user_id()] |
|
300 | 300 | PermissionModel().trigger_permission_flush(affected_user_ids) |
|
301 | 301 | |
|
302 | 302 | raise HTTPFound(h.route_path('admin_permissions_global')) |
|
303 | 303 | |
|
304 | 304 | @LoginRequired() |
|
305 | 305 | @HasPermissionAllDecorator('hg.admin') |
|
306 | 306 | def permissions_ips(self): |
|
307 | 307 | c = self.load_default_context() |
|
308 | 308 | c.active = 'ips' |
|
309 | 309 | |
|
310 | 310 | c.user = User.get_default_user(refresh=True) |
|
311 | 311 | c.user_ip_map = ( |
|
312 | 312 | UserIpMap.query().filter(UserIpMap.user == c.user).all()) |
|
313 | 313 | |
|
314 | 314 | return self._get_template_context(c) |
|
315 | 315 | |
|
316 | 316 | @LoginRequired() |
|
317 | 317 | @HasPermissionAllDecorator('hg.admin') |
|
318 | 318 | def permissions_overview(self): |
|
319 | 319 | c = self.load_default_context() |
|
320 | 320 | c.active = 'perms' |
|
321 | 321 | |
|
322 | 322 | c.user = User.get_default_user(refresh=True) |
|
323 | 323 | c.perm_user = c.user.AuthUser() |
|
324 | 324 | return self._get_template_context(c) |
|
325 | 325 | |
|
326 | 326 | @LoginRequired() |
|
327 | 327 | @HasPermissionAllDecorator('hg.admin') |
|
328 | 328 | def auth_token_access(self): |
|
329 | 329 | from rhodecode import CONFIG |
|
330 | 330 | |
|
331 | 331 | c = self.load_default_context() |
|
332 | 332 | c.active = 'auth_token_access' |
|
333 | 333 | |
|
334 | 334 | c.user = User.get_default_user(refresh=True) |
|
335 | 335 | c.perm_user = c.user.AuthUser() |
|
336 | 336 | |
|
337 | 337 | mapper = self.request.registry.queryUtility(IRoutesMapper) |
|
338 | 338 | c.view_data = [] |
|
339 | 339 | |
|
340 | 340 | _argument_prog = re.compile(r'\{(.*?)\}|:\((.*)\)') |
|
341 | 341 | introspector = self.request.registry.introspector |
|
342 | 342 | |
|
343 | 343 | view_intr = {} |
|
344 | 344 | for view_data in introspector.get_category('views'): |
|
345 | 345 | intr = view_data['introspectable'] |
|
346 | 346 | |
|
347 | 347 | if 'route_name' in intr and intr['attr']: |
|
348 | 348 | view_intr[intr['route_name']] = '{}:{}'.format( |
|
349 | 349 | str(intr['derived_callable'].__name__), intr['attr'] |
|
350 | 350 | ) |
|
351 | 351 | |
|
352 | 352 | c.whitelist_key = 'api_access_controllers_whitelist' |
|
353 | 353 | c.whitelist_file = CONFIG.get('__file__') |
|
354 | 354 | whitelist_views = aslist( |
|
355 | 355 | CONFIG.get(c.whitelist_key), sep=',') |
|
356 | 356 | |
|
357 | 357 | for route_info in mapper.get_routes(): |
|
358 | 358 | if not route_info.name.startswith('__'): |
|
359 | 359 | routepath = route_info.pattern |
|
360 | 360 | |
|
361 | 361 | def replace(matchobj): |
|
362 | 362 | if matchobj.group(1): |
|
363 | 363 | return "{%s}" % matchobj.group(1).split(':')[0] |
|
364 | 364 | else: |
|
365 | 365 | return "{%s}" % matchobj.group(2) |
|
366 | 366 | |
|
367 | 367 | routepath = _argument_prog.sub(replace, routepath) |
|
368 | 368 | |
|
369 | 369 | if not routepath.startswith('/'): |
|
370 | 370 | routepath = '/' + routepath |
|
371 | 371 | |
|
372 | 372 | view_fqn = view_intr.get(route_info.name, 'NOT AVAILABLE') |
|
373 | 373 | active = view_fqn in whitelist_views |
|
374 | 374 | c.view_data.append((route_info.name, view_fqn, routepath, active)) |
|
375 | 375 | |
|
376 | 376 | c.whitelist_views = whitelist_views |
|
377 | 377 | return self._get_template_context(c) |
|
378 | 378 | |
|
379 | 379 | def ssh_enabled(self): |
|
380 | 380 | return self.request.registry.settings.get( |
|
381 | 381 | 'ssh.generate_authorized_keyfile') |
|
382 | 382 | |
|
383 | 383 | @LoginRequired() |
|
384 | 384 | @HasPermissionAllDecorator('hg.admin') |
|
385 | 385 | def ssh_keys(self): |
|
386 | 386 | c = self.load_default_context() |
|
387 | 387 | c.active = 'ssh_keys' |
|
388 | 388 | c.ssh_enabled = self.ssh_enabled() |
|
389 | 389 | return self._get_template_context(c) |
|
390 | 390 | |
|
391 | 391 | @LoginRequired() |
|
392 | 392 | @HasPermissionAllDecorator('hg.admin') |
|
393 | 393 | def ssh_keys_data(self): |
|
394 | 394 | _ = self.request.translate |
|
395 | 395 | self.load_default_context() |
|
396 | 396 | column_map = { |
|
397 | 397 | 'fingerprint': 'ssh_key_fingerprint', |
|
398 | 398 | 'username': User.username |
|
399 | 399 | } |
|
400 | 400 | draw, start, limit = self._extract_chunk(self.request) |
|
401 | 401 | search_q, order_by, order_dir = self._extract_ordering( |
|
402 | 402 | self.request, column_map=column_map) |
|
403 | 403 | |
|
404 | 404 | ssh_keys_data_total_count = UserSshKeys.query()\ |
|
405 | 405 | .count() |
|
406 | 406 | |
|
407 | 407 | # json generate |
|
408 | 408 | base_q = UserSshKeys.query().join(UserSshKeys.user) |
|
409 | 409 | |
|
410 | 410 | if search_q: |
|
411 | 411 | like_expression = f'%{safe_str(search_q)}%' |
|
412 | 412 | base_q = base_q.filter(or_( |
|
413 | 413 | User.username.ilike(like_expression), |
|
414 | 414 | UserSshKeys.ssh_key_fingerprint.ilike(like_expression), |
|
415 | 415 | )) |
|
416 | 416 | |
|
417 | 417 | users_data_total_filtered_count = base_q.count() |
|
418 | 418 | |
|
419 | 419 | sort_col = self._get_order_col(order_by, UserSshKeys) |
|
420 | 420 | if sort_col: |
|
421 | 421 | if order_dir == 'asc': |
|
422 | 422 | # handle null values properly to order by NULL last |
|
423 | 423 | if order_by in ['created_on']: |
|
424 | 424 | sort_col = coalesce(sort_col, datetime.date.max) |
|
425 | 425 | sort_col = sort_col.asc() |
|
426 | 426 | else: |
|
427 | 427 | # handle null values properly to order by NULL last |
|
428 | 428 | if order_by in ['created_on']: |
|
429 | 429 | sort_col = coalesce(sort_col, datetime.date.min) |
|
430 | 430 | sort_col = sort_col.desc() |
|
431 | 431 | |
|
432 | 432 | base_q = base_q.order_by(sort_col) |
|
433 | 433 | base_q = base_q.offset(start).limit(limit) |
|
434 | 434 | |
|
435 | 435 | ssh_keys = base_q.all() |
|
436 | 436 | |
|
437 | 437 | ssh_keys_data = [] |
|
438 | 438 | for ssh_key in ssh_keys: |
|
439 | 439 | ssh_keys_data.append({ |
|
440 | 440 | "username": h.gravatar_with_user(self.request, ssh_key.user.username), |
|
441 | 441 | "fingerprint": ssh_key.ssh_key_fingerprint, |
|
442 | 442 | "description": ssh_key.description, |
|
443 | 443 | "created_on": h.format_date(ssh_key.created_on), |
|
444 | 444 | "accessed_on": h.format_date(ssh_key.accessed_on), |
|
445 | 445 | "action": h.link_to( |
|
446 | 446 | _('Edit'), h.route_path('edit_user_ssh_keys', |
|
447 | 447 | user_id=ssh_key.user.user_id)) |
|
448 | 448 | }) |
|
449 | 449 | |
|
450 | 450 | data = ({ |
|
451 | 451 | 'draw': draw, |
|
452 | 452 | 'data': ssh_keys_data, |
|
453 | 453 | 'recordsTotal': ssh_keys_data_total_count, |
|
454 | 454 | 'recordsFiltered': users_data_total_filtered_count, |
|
455 | 455 | }) |
|
456 | 456 | |
|
457 | 457 | return data |
|
458 | 458 | |
|
459 | 459 | @LoginRequired() |
|
460 | 460 | @HasPermissionAllDecorator('hg.admin') |
|
461 | 461 | @CSRFRequired() |
|
462 | 462 | def ssh_keys_update(self): |
|
463 | 463 | _ = self.request.translate |
|
464 | 464 | self.load_default_context() |
|
465 | 465 | |
|
466 | 466 | ssh_enabled = self.ssh_enabled() |
|
467 | 467 | key_file = self.request.registry.settings.get( |
|
468 | 468 | 'ssh.authorized_keys_file_path') |
|
469 | 469 | if ssh_enabled: |
|
470 | 470 | events.trigger(SshKeyFileChangeEvent(), self.request.registry) |
|
471 | 471 | h.flash(_('Updated SSH keys file: {}').format(key_file), |
|
472 | 472 | category='success') |
|
473 | 473 | else: |
|
474 | 474 | h.flash(_('SSH key support is disabled in .ini file'), |
|
475 | 475 | category='warning') |
|
476 | 476 | |
|
477 | 477 | raise HTTPFound(h.route_path('admin_permissions_ssh_keys')) |
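The `ssh_keys_data` view above assembles a DataTables-style grid response: it counts all rows, applies an optional case-insensitive `ilike` filter over username and fingerprint, pages with offset/limit, and returns `draw`, `data`, `recordsTotal`, and `recordsFiltered`. A minimal self-contained sketch of that response shape, with hypothetical in-memory dicts standing in for the SQLAlchemy query objects:

```python
def grid_response(rows, draw, start, limit, search=None):
    # Total row count before any filtering (mirrors ssh_keys_data_total_count).
    total = len(rows)

    # Case-insensitive substring match over username and fingerprint,
    # analogous to the or_(... .ilike(like_expression)) filter in the view.
    if search:
        needle = search.lower()
        rows = [r for r in rows
                if needle in r["username"].lower()
                or needle in r["fingerprint"].lower()]
    filtered = len(rows)

    # Page the filtered rows (offset(start).limit(limit) in the view).
    page = rows[start:start + limit]

    return {
        "draw": draw,
        "data": page,
        "recordsTotal": total,
        "recordsFiltered": filtered,
    }
```

This is only an illustration of the payload contract consumed by the grid widget, not RhodeCode's actual query code; the real view performs the count and filter in SQL rather than in Python.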
@@ -1,714 +1,708 b'' | |||
|
1 | 1 | # Copyright (C) 2010-2023 RhodeCode GmbH |
|
2 | 2 | # |
|
3 | 3 | # This program is free software: you can redistribute it and/or modify |
|
4 | 4 | # it under the terms of the GNU Affero General Public License, version 3 |
|
5 | 5 | # (only), as published by the Free Software Foundation. |
|
6 | 6 | # |
|
7 | 7 | # This program is distributed in the hope that it will be useful, |
|
8 | 8 | # but WITHOUT ANY WARRANTY; without even the implied warranty of |
|
9 | 9 | # MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the |
|
10 | 10 | # GNU General Public License for more details. |
|
11 | 11 | # |
|
12 | 12 | # You should have received a copy of the GNU Affero General Public License |
|
13 | 13 | # along with this program. If not, see <http://www.gnu.org/licenses/>. |
|
14 | 14 | # |
|
15 | 15 | # This program is dual-licensed. If you wish to learn more about the |
|
16 | 16 | # RhodeCode Enterprise Edition, including its added features, Support services, |
|
17 | 17 | # and proprietary license terms, please see https://rhodecode.com/licenses/ |
|
18 | 18 | |
|
19 | 19 | |
|
20 | 20 | import logging |
|
21 | 21 | import collections |
|
22 | 22 | |
|
23 | 23 | import datetime |
|
24 | 24 | import formencode |
|
25 | 25 | import formencode.htmlfill |
|
26 | 26 | |
|
27 | 27 | import rhodecode |
|
28 | 28 | |
|
29 | 29 | from pyramid.httpexceptions import HTTPFound, HTTPNotFound |
|
30 | 30 | from pyramid.renderers import render |
|
31 | 31 | from pyramid.response import Response |
|
32 | 32 | |
|
33 | 33 | from rhodecode.apps._base import BaseAppView |
|
34 | 34 | from rhodecode.apps._base.navigation import navigation_list |
|
35 | from rhodecode.apps.svn_support |

35 | from rhodecode.apps.svn_support import config_keys | |
|
36 | 36 | from rhodecode.lib import helpers as h |
|
37 | 37 | from rhodecode.lib.auth import ( |
|
38 | 38 | LoginRequired, HasPermissionAllDecorator, CSRFRequired) |
|
39 | 39 | from rhodecode.lib.celerylib import tasks, run_task |
|
40 | 40 | from rhodecode.lib.str_utils import safe_str |
|
41 | from rhodecode.lib.utils import repo2db_mapper | |
|
41 | from rhodecode.lib.utils import repo2db_mapper, get_rhodecode_repo_store_path | |
|
42 | 42 | from rhodecode.lib.utils2 import str2bool, AttributeDict |
|
43 | 43 | from rhodecode.lib.index import searcher_from_config |
|
44 | 44 | |
|
45 | 45 | from rhodecode.model.db import RhodeCodeUi, Repository |
|
46 | 46 | from rhodecode.model.forms import (ApplicationSettingsForm, |
|
47 | 47 | ApplicationUiSettingsForm, ApplicationVisualisationForm, |
|
48 | 48 | LabsSettingsForm, IssueTrackerPatternsForm) |
|
49 | 49 | from rhodecode.model.permission import PermissionModel |
|
50 | 50 | from rhodecode.model.repo_group import RepoGroupModel |
|
51 | 51 | |
|
52 | 52 | from rhodecode.model.scm import ScmModel |
|
53 | 53 | from rhodecode.model.notification import EmailNotificationModel |
|
54 | 54 | from rhodecode.model.meta import Session |
|
55 | 55 | from rhodecode.model.settings import ( |
|
56 | 56 | IssueTrackerSettingsModel, VcsSettingsModel, SettingNotFound, |
|
57 | 57 | SettingsModel) |
|
58 | 58 | |
|
59 | 59 | |
|
60 | 60 | log = logging.getLogger(__name__) |
|
61 | 61 | |
|
62 | 62 | |
|
63 | 63 | class AdminSettingsView(BaseAppView): |
|
64 | 64 | |
|
65 | 65 | def load_default_context(self): |
|
66 | 66 | c = self._get_local_tmpl_context() |
|
67 | 67 | c.labs_active = str2bool( |
|
68 | 68 | rhodecode.CONFIG.get('labs_settings_active', 'true')) |
|
69 | 69 | c.navlist = navigation_list(self.request) |
|
70 | 70 | return c |
|
71 | 71 | |
|
72 | 72 | @classmethod |
|
73 | 73 | def _get_ui_settings(cls): |
|
74 | 74 | ret = RhodeCodeUi.query().all() |
|
75 | 75 | |
|
76 | 76 | if not ret: |
|
77 | 77 | raise Exception('Could not get application ui settings !') |
|
78 | 78 | settings = {} |
|
79 | 79 | for each in ret: |
|
80 | 80 | k = each.ui_key |
|
81 | 81 | v = each.ui_value |
|
82 | 82 | if k == '/': |
|
83 | 83 | k = 'root_path' |
|
84 | 84 | |
|
85 | 85 | if k in ['push_ssl', 'publish', 'enabled']: |
|
86 | 86 | v = str2bool(v) |
|
87 | 87 | |
|
88 | 88 | if k.find('.') != -1: |
|
89 | 89 | k = k.replace('.', '_') |
|
90 | 90 | |
|
91 | 91 | if each.ui_section in ['hooks', 'extensions']: |
|
92 | 92 | v = each.ui_active |
|
93 | 93 | |
|
94 | 94 | settings[each.ui_section + '_' + k] = v |
|
95 | 95 | return settings |
|
96 | 96 | |
|
97 | 97 | @classmethod |
|
98 | 98 | def _form_defaults(cls): |
|
99 | 99 | defaults = SettingsModel().get_all_settings() |
|
100 | 100 | defaults.update(cls._get_ui_settings()) |
|
101 | 101 | |
|
102 | 102 | defaults.update({ |
|
103 | 103 | 'new_svn_branch': '', |
|
104 | 104 | 'new_svn_tag': '', |
|
105 | 105 | }) |
|
106 | 106 | return defaults |
|
107 | 107 | |
|
108 | 108 | @LoginRequired() |
|
109 | 109 | @HasPermissionAllDecorator('hg.admin') |
|
110 | 110 | def settings_vcs(self): |
|
111 | 111 | c = self.load_default_context() |
|
112 | 112 | c.active = 'vcs' |
|
113 | 113 | model = VcsSettingsModel() |
|
114 | 114 | c.svn_branch_patterns = model.get_global_svn_branch_patterns() |
|
115 | 115 | c.svn_tag_patterns = model.get_global_svn_tag_patterns() |
|
116 | ||
|
117 | settings = self.request.registry.settings | |
|
118 | c.svn_proxy_generate_config = settings[generate_config] | |
|
119 | ||
|
116 | c.svn_generate_config = rhodecode.ConfigGet().get_bool(config_keys.generate_config) | |
|
117 | c.svn_config_path = rhodecode.ConfigGet().get_str(config_keys.config_file_path) | |
|
120 | 118 | defaults = self._form_defaults() |
|
121 | 119 | |
|
122 | 120 | model.create_largeobjects_dirs_if_needed(defaults['paths_root_path']) |
|
123 | 121 | |
|
124 | 122 | data = render('rhodecode:templates/admin/settings/settings.mako', |
|
125 | 123 | self._get_template_context(c), self.request) |
|
126 | 124 | html = formencode.htmlfill.render( |
|
127 | 125 | data, |
|
128 | 126 | defaults=defaults, |
|
129 | 127 | encoding="UTF-8", |
|
130 | 128 | force_defaults=False |
|
131 | 129 | ) |
|
132 | 130 | return Response(html) |
|
133 | 131 | |
|
134 | 132 | @LoginRequired() |
|
135 | 133 | @HasPermissionAllDecorator('hg.admin') |
|
136 | 134 | @CSRFRequired() |
|
137 | 135 | def settings_vcs_update(self): |
|
138 | 136 | _ = self.request.translate |
|
139 | 137 | c = self.load_default_context() |
|
140 | 138 | c.active = 'vcs' |
|
141 | 139 | |
|
142 | 140 | model = VcsSettingsModel() |
|
143 | 141 | c.svn_branch_patterns = model.get_global_svn_branch_patterns() |
|
144 | 142 | c.svn_tag_patterns = model.get_global_svn_tag_patterns() |
|
145 | 143 | |
|
146 | settings = self.request.registry.settings | |
|
147 | c.svn_proxy_generate_config = settings[generate_config] | |
|
148 | ||
|
144 | c.svn_generate_config = rhodecode.ConfigGet().get_bool(config_keys.generate_config) | |
|
145 | c.svn_config_path = rhodecode.ConfigGet().get_str(config_keys.config_file_path) | |
|
149 | 146 | application_form = ApplicationUiSettingsForm(self.request.translate)() |
|
150 | 147 | |
|
151 | 148 | try: |
|
152 | 149 | form_result = application_form.to_python(dict(self.request.POST)) |
|
153 | 150 | except formencode.Invalid as errors: |
|
154 | 151 | h.flash( |
|
155 | 152 | _("Some form inputs contain invalid data."), |
|
156 | 153 | category='error') |
|
157 | 154 | data = render('rhodecode:templates/admin/settings/settings.mako', |
|
158 | 155 | self._get_template_context(c), self.request) |
|
159 | 156 | html = formencode.htmlfill.render( |
|
160 | 157 | data, |
|
161 | 158 | defaults=errors.value, |
|
162 | 159 | errors=errors.unpack_errors() or {}, |
|
163 | 160 | prefix_error=False, |
|
164 | 161 | encoding="UTF-8", |
|
165 | 162 | force_defaults=False |
|
166 | 163 | ) |
|
167 | 164 | return Response(html) |
|
168 | 165 | |
|
169 | 166 | try: |
|
170 | if c.visual.allow_repo_location_change: | |
|
171 | model.update_global_path_setting(form_result['paths_root_path']) | |
|
172 | ||
|
173 | 167 | model.update_global_ssl_setting(form_result['web_push_ssl']) |
|
174 | 168 | model.update_global_hook_settings(form_result) |
|
175 | 169 | |
|
176 | 170 | model.create_or_update_global_svn_settings(form_result) |
|
177 | 171 | model.create_or_update_global_hg_settings(form_result) |
|
178 | 172 | model.create_or_update_global_git_settings(form_result) |
|
179 | 173 | model.create_or_update_global_pr_settings(form_result) |
|
180 | 174 | except Exception: |
|
181 | 175 | log.exception("Exception while updating settings") |
|
182 | 176 | h.flash(_('Error occurred during updating ' |
|
183 | 177 | 'application settings'), category='error') |
|
184 | 178 | else: |
|
185 | 179 | Session().commit() |
|
186 | 180 | h.flash(_('Updated VCS settings'), category='success') |
|
187 | 181 | raise HTTPFound(h.route_path('admin_settings_vcs')) |
|
188 | 182 | |
|
189 | 183 | data = render('rhodecode:templates/admin/settings/settings.mako', |
|
190 | 184 | self._get_template_context(c), self.request) |
|
191 | 185 | html = formencode.htmlfill.render( |
|
192 | 186 | data, |
|
193 | 187 | defaults=self._form_defaults(), |
|
194 | 188 | encoding="UTF-8", |
|
195 | 189 | force_defaults=False |
|
196 | 190 | ) |
|
197 | 191 | return Response(html) |
|
198 | 192 | |
|
199 | 193 | @LoginRequired() |
|
200 | 194 | @HasPermissionAllDecorator('hg.admin') |
|
201 | 195 | @CSRFRequired() |
|
202 | 196 | def settings_vcs_delete_svn_pattern(self): |
|
203 | 197 | delete_pattern_id = self.request.POST.get('delete_svn_pattern') |
|
204 | 198 | model = VcsSettingsModel() |
|
205 | 199 | try: |
|
206 | 200 | model.delete_global_svn_pattern(delete_pattern_id) |
|
207 | 201 | except SettingNotFound: |
|
208 | 202 | log.exception( |
|
209 | 203 | 'Failed to delete svn_pattern with id %s', delete_pattern_id) |
|
210 | 204 | raise HTTPNotFound() |
|
211 | 205 | |
|
212 | 206 | Session().commit() |
|
213 | 207 | return True |
|
214 | 208 | |
|
215 | 209 | @LoginRequired() |
|
216 | 210 | @HasPermissionAllDecorator('hg.admin') |
|
217 | 211 | def settings_mapping(self): |
|
218 | 212 | c = self.load_default_context() |
|
219 | 213 | c.active = 'mapping' |
|
220 | c.storage_path = |

214 | c.storage_path = get_rhodecode_repo_store_path() | |
|
221 | 215 | data = render('rhodecode:templates/admin/settings/settings.mako', |
|
222 | 216 | self._get_template_context(c), self.request) |
|
223 | 217 | html = formencode.htmlfill.render( |
|
224 | 218 | data, |
|
225 | 219 | defaults=self._form_defaults(), |
|
226 | 220 | encoding="UTF-8", |
|
227 | 221 | force_defaults=False |
|
228 | 222 | ) |
|
229 | 223 | return Response(html) |
|
230 | 224 | |
|
231 | 225 | @LoginRequired() |
|
232 | 226 | @HasPermissionAllDecorator('hg.admin') |
|
233 | 227 | @CSRFRequired() |
|
234 | 228 | def settings_mapping_update(self): |
|
235 | 229 | _ = self.request.translate |
|
236 | 230 | c = self.load_default_context() |
|
237 | 231 | c.active = 'mapping' |
|
238 | 232 | rm_obsolete = self.request.POST.get('destroy', False) |
|
239 | 233 | invalidate_cache = self.request.POST.get('invalidate', False) |
|
240 | 234 | log.debug('rescanning repo location with destroy obsolete=%s', rm_obsolete) |
|
241 | 235 | |
|
242 | 236 | if invalidate_cache: |
|
243 | 237 | log.debug('invalidating all repositories cache') |
|
244 | 238 | for repo in Repository.get_all(): |
|
245 | 239 | ScmModel().mark_for_invalidation(repo.repo_name, delete=True) |
|
246 | 240 | |
|
247 | 241 | filesystem_repos = ScmModel().repo_scan() |
|
248 | 242 | added, removed = repo2db_mapper(filesystem_repos, rm_obsolete, force_hooks_rebuild=True) |
|
249 | 243 | PermissionModel().trigger_permission_flush() |
|
250 | 244 | |
|
251 | 245 | def _repr(rm_repo): |
|
252 | 246 | return ', '.join(map(safe_str, rm_repo)) or '-' |
|
253 | 247 | |
|
254 | 248 | h.flash(_('Repositories successfully ' |
|
255 | 249 | 'rescanned added: %s ; removed: %s') % |
|
256 | 250 | (_repr(added), _repr(removed)), |
|
257 | 251 | category='success') |
|
258 | 252 | raise HTTPFound(h.route_path('admin_settings_mapping')) |
|
259 | 253 | |
|
260 | 254 | @LoginRequired() |
|
261 | 255 | @HasPermissionAllDecorator('hg.admin') |
|
262 | 256 | def settings_global(self): |
|
263 | 257 | c = self.load_default_context() |
|
264 | 258 | c.active = 'global' |
|
265 | 259 | c.personal_repo_group_default_pattern = RepoGroupModel()\ |
|
266 | 260 | .get_personal_group_name_pattern() |
|
267 | 261 | |
|
268 | 262 | data = render('rhodecode:templates/admin/settings/settings.mako', |
|
269 | 263 | self._get_template_context(c), self.request) |
|
270 | 264 | html = formencode.htmlfill.render( |
|
271 | 265 | data, |
|
272 | 266 | defaults=self._form_defaults(), |
|
273 | 267 | encoding="UTF-8", |
|
274 | 268 | force_defaults=False |
|
275 | 269 | ) |
|
276 | 270 | return Response(html) |
|
277 | 271 | |
|
278 | 272 | @LoginRequired() |
|
279 | 273 | @HasPermissionAllDecorator('hg.admin') |
|
280 | 274 | @CSRFRequired() |
|
281 | 275 | def settings_global_update(self): |
|
282 | 276 | _ = self.request.translate |
|
283 | 277 | c = self.load_default_context() |
|
284 | 278 | c.active = 'global' |
|
285 | 279 | c.personal_repo_group_default_pattern = RepoGroupModel()\ |
|
286 | 280 | .get_personal_group_name_pattern() |
|
287 | 281 | application_form = ApplicationSettingsForm(self.request.translate)() |
|
288 | 282 | try: |
|
289 | 283 | form_result = application_form.to_python(dict(self.request.POST)) |
|
290 | 284 | except formencode.Invalid as errors: |
|
291 | 285 | h.flash( |
|
292 | 286 | _("Some form inputs contain invalid data."), |
|
293 | 287 | category='error') |
|
294 | 288 | data = render('rhodecode:templates/admin/settings/settings.mako', |
|
295 | 289 | self._get_template_context(c), self.request) |
|
296 | 290 | html = formencode.htmlfill.render( |
|
297 | 291 | data, |
|
298 | 292 | defaults=errors.value, |
|
299 | 293 | errors=errors.unpack_errors() or {}, |
|
300 | 294 | prefix_error=False, |
|
301 | 295 | encoding="UTF-8", |
|
302 | 296 | force_defaults=False |
|
303 | 297 | ) |
|
304 | 298 | return Response(html) |
|
305 | 299 | |
|
306 | 300 | settings = [ |
|
307 | 301 | ('title', 'rhodecode_title', 'unicode'), |
|
308 | 302 | ('realm', 'rhodecode_realm', 'unicode'), |
|
309 | 303 | ('pre_code', 'rhodecode_pre_code', 'unicode'), |
|
310 | 304 | ('post_code', 'rhodecode_post_code', 'unicode'), |
|
311 | 305 | ('captcha_public_key', 'rhodecode_captcha_public_key', 'unicode'), |
|
312 | 306 | ('captcha_private_key', 'rhodecode_captcha_private_key', 'unicode'), |
|
313 | 307 | ('create_personal_repo_group', 'rhodecode_create_personal_repo_group', 'bool'), |
|
314 | 308 | ('personal_repo_group_pattern', 'rhodecode_personal_repo_group_pattern', 'unicode'), |
|
315 | 309 | ] |
|
316 | 310 | |
|
317 | 311 | try: |
|
318 | 312 | for setting, form_key, type_ in settings: |
|
319 | 313 | sett = SettingsModel().create_or_update_setting( |
|
320 | 314 | setting, form_result[form_key], type_) |
|
321 | 315 | Session().add(sett) |
|
322 | 316 | |
|
323 | 317 | Session().commit() |
|
324 | 318 | SettingsModel().invalidate_settings_cache() |
|
325 | 319 | h.flash(_('Updated application settings'), category='success') |
|
326 | 320 | except Exception: |
|
327 | 321 | log.exception("Exception while updating application settings") |
|
328 | 322 | h.flash( |
|
329 | 323 | _('Error occurred during updating application settings'), |
|
330 | 324 | category='error') |
|
331 | 325 | |
|
332 | 326 | raise HTTPFound(h.route_path('admin_settings_global')) |
|
333 | 327 | |
|
334 | 328 | @LoginRequired() |
|
335 | 329 | @HasPermissionAllDecorator('hg.admin') |
|
336 | 330 | def settings_visual(self): |
|
337 | 331 | c = self.load_default_context() |
|
338 | 332 | c.active = 'visual' |
|
339 | 333 | |
|
340 | 334 | data = render('rhodecode:templates/admin/settings/settings.mako', |
|
341 | 335 | self._get_template_context(c), self.request) |
|
342 | 336 | html = formencode.htmlfill.render( |
|
343 | 337 | data, |
|
344 | 338 | defaults=self._form_defaults(), |
|
345 | 339 | encoding="UTF-8", |
|
346 | 340 | force_defaults=False |
|
347 | 341 | ) |
|
348 | 342 | return Response(html) |
|
349 | 343 | |
|
350 | 344 | @LoginRequired() |
|
351 | 345 | @HasPermissionAllDecorator('hg.admin') |
|
352 | 346 | @CSRFRequired() |
|
353 | 347 | def settings_visual_update(self): |
|
354 | 348 | _ = self.request.translate |
|
355 | 349 | c = self.load_default_context() |
|
356 | 350 | c.active = 'visual' |
|
357 | 351 | application_form = ApplicationVisualisationForm(self.request.translate)() |
|
358 | 352 | try: |
|
359 | 353 | form_result = application_form.to_python(dict(self.request.POST)) |
|
360 | 354 | except formencode.Invalid as errors: |
|
361 | 355 | h.flash( |
|
362 | 356 | _("Some form inputs contain invalid data."), |
|
363 | 357 | category='error') |
|
364 | 358 | data = render('rhodecode:templates/admin/settings/settings.mako', |
|
365 | 359 | self._get_template_context(c), self.request) |
|
366 | 360 | html = formencode.htmlfill.render( |
|
367 | 361 | data, |
|
368 | 362 | defaults=errors.value, |
|
369 | 363 | errors=errors.unpack_errors() or {}, |
|
370 | 364 | prefix_error=False, |
|
371 | 365 | encoding="UTF-8", |
|
372 | 366 | force_defaults=False |
|
373 | 367 | ) |
|
374 | 368 | return Response(html) |
|
375 | 369 | |
|
376 | 370 | try: |
|
377 | 371 | settings = [ |
|
378 | 372 | ('show_public_icon', 'rhodecode_show_public_icon', 'bool'), |
|
379 | 373 | ('show_private_icon', 'rhodecode_show_private_icon', 'bool'), |
|
380 | 374 | ('stylify_metatags', 'rhodecode_stylify_metatags', 'bool'), |
|
381 | 375 | ('repository_fields', 'rhodecode_repository_fields', 'bool'), |
|
382 | 376 | ('dashboard_items', 'rhodecode_dashboard_items', 'int'), |
|
383 | 377 | ('admin_grid_items', 'rhodecode_admin_grid_items', 'int'), |
|
384 | 378 | ('show_version', 'rhodecode_show_version', 'bool'), |
|
385 | 379 | ('use_gravatar', 'rhodecode_use_gravatar', 'bool'), |
|
386 | 380 | ('markup_renderer', 'rhodecode_markup_renderer', 'unicode'), |
|
387 | 381 | ('gravatar_url', 'rhodecode_gravatar_url', 'unicode'), |
|
388 | 382 | ('clone_uri_tmpl', 'rhodecode_clone_uri_tmpl', 'unicode'), |
|
389 | 383 | ('clone_uri_id_tmpl', 'rhodecode_clone_uri_id_tmpl', 'unicode'), |
|
390 | 384 | ('clone_uri_ssh_tmpl', 'rhodecode_clone_uri_ssh_tmpl', 'unicode'), |
|
391 | 385 | ('support_url', 'rhodecode_support_url', 'unicode'), |
|
392 | 386 | ('show_revision_number', 'rhodecode_show_revision_number', 'bool'), |
|
393 | 387 | ('show_sha_length', 'rhodecode_show_sha_length', 'int'), |
|
394 | 388 | ] |
|
395 | 389 | for setting, form_key, type_ in settings: |
|
396 | 390 | sett = SettingsModel().create_or_update_setting( |
|
397 | 391 | setting, form_result[form_key], type_) |
|
398 | 392 | Session().add(sett) |
|
399 | 393 | |
|
400 | 394 | Session().commit() |
|
401 | 395 | SettingsModel().invalidate_settings_cache() |
|
402 | 396 | h.flash(_('Updated visualisation settings'), category='success') |
|
403 | 397 | except Exception: |
|
404 | 398 | log.exception("Exception updating visualization settings") |
|
405 | 399 | h.flash(_('Error occurred during updating ' |
|
406 | 400 | 'visualisation settings'), |
|
407 | 401 | category='error') |
|
408 | 402 | |
|
409 | 403 | raise HTTPFound(h.route_path('admin_settings_visual')) |
|
410 | 404 | |
|
411 | 405 | @LoginRequired() |
|
412 | 406 | @HasPermissionAllDecorator('hg.admin') |
|
413 | 407 | def settings_issuetracker(self): |
|
414 | 408 | c = self.load_default_context() |
|
415 | 409 | c.active = 'issuetracker' |
|
416 | 410 | defaults = c.rc_config |
|
417 | 411 | |
|
418 | 412 | entry_key = 'rhodecode_issuetracker_pat_' |
|
419 | 413 | |
|
420 | 414 | c.issuetracker_entries = {} |
|
421 | 415 | for k, v in defaults.items(): |
|
422 | 416 | if k.startswith(entry_key): |
|
423 | 417 | uid = k[len(entry_key):] |
|
424 | 418 | c.issuetracker_entries[uid] = None |
|
425 | 419 | |
|
426 | 420 | for uid in c.issuetracker_entries: |
|
427 | 421 | c.issuetracker_entries[uid] = AttributeDict({ |
|
428 | 422 | 'pat': defaults.get('rhodecode_issuetracker_pat_' + uid), |
|
429 | 423 | 'url': defaults.get('rhodecode_issuetracker_url_' + uid), |
|
430 | 424 | 'pref': defaults.get('rhodecode_issuetracker_pref_' + uid), |
|
431 | 425 | 'desc': defaults.get('rhodecode_issuetracker_desc_' + uid), |
|
432 | 426 | }) |
|
433 | 427 | |
|
434 | 428 | return self._get_template_context(c) |
|
435 | 429 | |
|
436 | 430 | @LoginRequired() |
|
437 | 431 | @HasPermissionAllDecorator('hg.admin') |
|
438 | 432 | @CSRFRequired() |
|
439 | 433 | def settings_issuetracker_test(self): |
|
440 | 434 | error_container = [] |
|
441 | 435 | |
|
442 | 436 | urlified_commit = h.urlify_commit_message( |
|
443 | 437 | self.request.POST.get('test_text', ''), |
|
444 | 438 | 'repo_group/test_repo1', error_container=error_container) |
|
445 | 439 | if error_container: |
|
446 | 440 | def converter(inp): |
|
447 | 441 | return h.html_escape(inp) |
|
448 | 442 | |
|
449 | 443 | return 'ERRORS: ' + '\n'.join(map(converter, error_container)) |
|
450 | 444 | |
|
451 | 445 | return urlified_commit |
|
452 | 446 | |
|
453 | 447 | @LoginRequired() |
|
454 | 448 | @HasPermissionAllDecorator('hg.admin') |
|
455 | 449 | @CSRFRequired() |
|
456 | 450 | def settings_issuetracker_update(self): |
|
457 | 451 | _ = self.request.translate |
|
458 | 452 | self.load_default_context() |
|
459 | 453 | settings_model = IssueTrackerSettingsModel() |
|
460 | 454 | |
|
461 | 455 | try: |
|
462 | 456 | form = IssueTrackerPatternsForm(self.request.translate)() |
|
463 | 457 | data = form.to_python(self.request.POST) |
|
464 | 458 | except formencode.Invalid as errors: |
|
465 | 459 | log.exception('Failed to add new pattern') |
|
466 | 460 | error = errors |
|
467 | 461 | h.flash(_(f'Invalid issue tracker pattern: {error}'), |
|
468 | 462 | category='error') |
|
469 | 463 | raise HTTPFound(h.route_path('admin_settings_issuetracker')) |
|
470 | 464 | |
|
471 | 465 | if data: |
|
472 | 466 | for uid in data.get('delete_patterns', []): |
|
473 | 467 | settings_model.delete_entries(uid) |
|
474 | 468 | |
|
475 | 469 | for pattern in data.get('patterns', []): |
|
476 | 470 | for setting, value, type_ in pattern: |
|
477 | 471 | sett = settings_model.create_or_update_setting( |
|
478 | 472 | setting, value, type_) |
|
479 | 473 | Session().add(sett) |
|
480 | 474 | |
|
481 | 475 | Session().commit() |
|
482 | 476 | |
|
483 | 477 | SettingsModel().invalidate_settings_cache() |
|
484 | 478 | h.flash(_('Updated issue tracker entries'), category='success') |
|
485 | 479 | raise HTTPFound(h.route_path('admin_settings_issuetracker')) |
|
486 | 480 | |
|
487 | 481 | @LoginRequired() |
|
488 | 482 | @HasPermissionAllDecorator('hg.admin') |
|
489 | 483 | @CSRFRequired() |
|
490 | 484 | def settings_issuetracker_delete(self): |
|
491 | 485 | _ = self.request.translate |
|
492 | 486 | self.load_default_context() |
|
493 | 487 | uid = self.request.POST.get('uid') |
|
494 | 488 | try: |
|
495 | 489 | IssueTrackerSettingsModel().delete_entries(uid) |
|
496 | 490 | except Exception: |
|
497 | 491 | log.exception('Failed to delete issue tracker setting %s', uid) |
|
498 | 492 | raise HTTPNotFound() |
|
499 | 493 | |
|
500 | 494 | SettingsModel().invalidate_settings_cache() |
|
501 | 495 | h.flash(_('Removed issue tracker entry.'), category='success') |
|
502 | 496 | |
|
503 | 497 | return {'deleted': uid} |
|
504 | 498 | |
|
505 | 499 | @LoginRequired() |
|
506 | 500 | @HasPermissionAllDecorator('hg.admin') |
|
507 | 501 | def settings_email(self): |
|
508 | 502 | c = self.load_default_context() |
|
509 | 503 | c.active = 'email' |
|
510 | 504 | c.rhodecode_ini = rhodecode.CONFIG |
|
511 | 505 | |
|
512 | 506 | data = render('rhodecode:templates/admin/settings/settings.mako', |
|
513 | 507 | self._get_template_context(c), self.request) |
|
514 | 508 | html = formencode.htmlfill.render( |
|
515 | 509 | data, |
|
516 | 510 | defaults=self._form_defaults(), |
|
517 | 511 | encoding="UTF-8", |
|
518 | 512 | force_defaults=False |
|
519 | 513 | ) |
|
520 | 514 | return Response(html) |
|
521 | 515 | |
|
522 | 516 | @LoginRequired() |
|
523 | 517 | @HasPermissionAllDecorator('hg.admin') |
|
524 | 518 | @CSRFRequired() |
|
525 | 519 | def settings_email_update(self): |
|
526 | 520 | _ = self.request.translate |
|
527 | 521 | c = self.load_default_context() |
|
528 | 522 | c.active = 'email' |
|
529 | 523 | |
|
530 | 524 | test_email = self.request.POST.get('test_email') |
|
531 | 525 | |
|
532 | 526 | if not test_email: |
|
533 | 527 | h.flash(_('Please enter email address'), category='error') |
|
534 | 528 | raise HTTPFound(h.route_path('admin_settings_email')) |
|
535 | 529 | |
|
536 | 530 | email_kwargs = { |
|
537 | 531 | 'date': datetime.datetime.now(), |
|
538 | 532 | 'user': self._rhodecode_db_user |
|
539 | 533 | } |
|
540 | 534 | |
|
541 | 535 | (subject, email_body, email_body_plaintext) = EmailNotificationModel().render_email( |
|
542 | 536 | EmailNotificationModel.TYPE_EMAIL_TEST, **email_kwargs) |
|
543 | 537 | |
|
544 | 538 | recipients = [test_email] if test_email else None |
|
545 | 539 | |
|
546 | 540 | run_task(tasks.send_email, recipients, subject, |
|
547 | 541 | email_body_plaintext, email_body) |
|
548 | 542 | |
|
549 | 543 | h.flash(_('Send email task created'), category='success') |
|
550 | 544 | raise HTTPFound(h.route_path('admin_settings_email')) |
|
551 | 545 | |
|
552 | 546 | @LoginRequired() |
|
553 | 547 | @HasPermissionAllDecorator('hg.admin') |
|
554 | 548 | def settings_hooks(self): |
|
555 | 549 | c = self.load_default_context() |
|
556 | 550 | c.active = 'hooks' |
|
557 | 551 | |
|
558 | 552 | model = SettingsModel() |
|
559 | 553 | c.hooks = model.get_builtin_hooks() |
|
560 | 554 | c.custom_hooks = model.get_custom_hooks() |
|
561 | 555 | |
|
562 | 556 | data = render('rhodecode:templates/admin/settings/settings.mako', |
|
563 | 557 | self._get_template_context(c), self.request) |
|
564 | 558 | html = formencode.htmlfill.render( |
|
565 | 559 | data, |
|
566 | 560 | defaults=self._form_defaults(), |
|
567 | 561 | encoding="UTF-8", |
|
568 | 562 | force_defaults=False |
|
569 | 563 | ) |
|
570 | 564 | return Response(html) |
|
571 | 565 | |
|
572 | 566 | @LoginRequired() |
|
573 | 567 | @HasPermissionAllDecorator('hg.admin') |
|
574 | 568 | @CSRFRequired() |
|
575 | 569 | def settings_hooks_update(self): |
|
576 | 570 | _ = self.request.translate |
|
577 | 571 | c = self.load_default_context() |
|
578 | 572 | c.active = 'hooks' |
|
579 | 573 | if c.visual.allow_custom_hooks_settings: |
|
580 | 574 | ui_key = self.request.POST.get('new_hook_ui_key') |
|
581 | 575 | ui_value = self.request.POST.get('new_hook_ui_value') |
|
582 | 576 | |
|
583 | 577 | hook_id = self.request.POST.get('hook_id') |
|
584 | 578 | new_hook = False |
|
585 | 579 | |
|
586 | 580 | model = SettingsModel() |
|
587 | 581 | try: |
|
588 | 582 | if ui_value and ui_key: |
|
589 | 583 | model.create_or_update_hook(ui_key, ui_value) |
|
590 | 584 | h.flash(_('Added new hook'), category='success') |
|
591 | 585 | new_hook = True |
|
592 | 586 | elif hook_id: |
|
593 | 587 | RhodeCodeUi.delete(hook_id) |
|
594 | 588 | Session().commit() |
|
595 | 589 | |
|
596 | 590 | # check for edits |
|
597 | 591 | update = False |
|
598 | 592 | _d = self.request.POST.dict_of_lists() |
|
599 | 593 | for k, v in zip(_d.get('hook_ui_key', []), |
|
600 | 594 | _d.get('hook_ui_value_new', [])): |
|
601 | 595 | model.create_or_update_hook(k, v) |
|
602 | 596 | update = True |
|
603 | 597 | |
|
604 | 598 | if update and not new_hook: |
|
605 | 599 | h.flash(_('Updated hooks'), category='success') |
|
606 | 600 | Session().commit() |
|
607 | 601 | except Exception: |
|
608 | 602 | log.exception("Exception during hook creation") |
|
609 | 603 | h.flash(_('Error occurred during hook creation'), |
|
610 | 604 | category='error') |
|
611 | 605 | |
|
612 | 606 | raise HTTPFound(h.route_path('admin_settings_hooks')) |
|
613 | 607 | |
|
614 | 608 | @LoginRequired() |
|
615 | 609 | @HasPermissionAllDecorator('hg.admin') |
|
616 | 610 | def settings_search(self): |
|
617 | 611 | c = self.load_default_context() |
|
618 | 612 | c.active = 'search' |
|
619 | 613 | |
|
620 | 614 | c.searcher = searcher_from_config(self.request.registry.settings) |
|
621 | 615 | c.statistics = c.searcher.statistics(self.request.translate) |
|
622 | 616 | |
|
623 | 617 | return self._get_template_context(c) |
|
624 | 618 | |
|
625 | 619 | @LoginRequired() |
|
626 | 620 | @HasPermissionAllDecorator('hg.admin') |
|
627 | 621 | def settings_labs(self): |
|
628 | 622 | c = self.load_default_context() |
|
629 | 623 | if not c.labs_active: |
|
630 | 624 | raise HTTPFound(h.route_path('admin_settings')) |
|
631 | 625 | |
|
632 | 626 | c.active = 'labs' |
|
633 | 627 | c.lab_settings = _LAB_SETTINGS |
|
634 | 628 | |
|
635 | 629 | data = render('rhodecode:templates/admin/settings/settings.mako', |
|
636 | 630 | self._get_template_context(c), self.request) |
|
637 | 631 | html = formencode.htmlfill.render( |
|
638 | 632 | data, |
|
639 | 633 | defaults=self._form_defaults(), |
|
640 | 634 | encoding="UTF-8", |
|
641 | 635 | force_defaults=False |
|
642 | 636 | ) |
|
643 | 637 | return Response(html) |
|
644 | 638 | |
|
645 | 639 | @LoginRequired() |
|
646 | 640 | @HasPermissionAllDecorator('hg.admin') |
|
647 | 641 | @CSRFRequired() |
|
648 | 642 | def settings_labs_update(self): |
|
649 | 643 | _ = self.request.translate |
|
650 | 644 | c = self.load_default_context() |
|
651 | 645 | c.active = 'labs' |
|
652 | 646 | |
|
653 | 647 | application_form = LabsSettingsForm(self.request.translate)() |
|
654 | 648 | try: |
|
655 | 649 | form_result = application_form.to_python(dict(self.request.POST)) |
|
656 | 650 | except formencode.Invalid as errors: |
|
657 | 651 | h.flash( |
|
658 | 652 | _("Some form inputs contain invalid data."), |
|
659 | 653 | category='error') |
|
660 | 654 | data = render('rhodecode:templates/admin/settings/settings.mako', |
|
661 | 655 | self._get_template_context(c), self.request) |
|
662 | 656 | html = formencode.htmlfill.render( |
|
663 | 657 | data, |
|
664 | 658 | defaults=errors.value, |
|
665 | 659 | errors=errors.unpack_errors() or {}, |
|
666 | 660 | prefix_error=False, |
|
667 | 661 | encoding="UTF-8", |
|
668 | 662 | force_defaults=False |
|
669 | 663 | ) |
|
670 | 664 | return Response(html) |
|
671 | 665 | |
|
672 | 666 | try: |
|
673 | 667 | session = Session() |
|
674 | 668 | for setting in _LAB_SETTINGS: |
|
675 | 669 | setting_name = setting.key[len('rhodecode_'):] |
|
676 | 670 | sett = SettingsModel().create_or_update_setting( |
|
677 | 671 | setting_name, form_result[setting.key], setting.type) |
|
678 | 672 | session.add(sett) |
|
679 | 673 | |
|
680 | 674 | except Exception: |
|
681 | 675 | log.exception('Exception while updating lab settings') |
|
682 | 676 | h.flash(_('Error occurred during updating labs settings'), |
|
683 | 677 | category='error') |
|
684 | 678 | else: |
|
685 | 679 | Session().commit() |
|
686 | 680 | SettingsModel().invalidate_settings_cache() |
|
687 | 681 | h.flash(_('Updated Labs settings'), category='success') |
|
688 | 682 | raise HTTPFound(h.route_path('admin_settings_labs')) |
|
689 | 683 | |
|
690 | 684 | data = render('rhodecode:templates/admin/settings/settings.mako', |
|
691 | 685 | self._get_template_context(c), self.request) |
|
692 | 686 | html = formencode.htmlfill.render( |
|
693 | 687 | data, |
|
694 | 688 | defaults=self._form_defaults(), |
|
695 | 689 | encoding="UTF-8", |
|
696 | 690 | force_defaults=False |
|
697 | 691 | ) |
|
698 | 692 | return Response(html) |
|
699 | 693 | |
|
700 | 694 | |
|
701 | 695 | # :param key: name of the setting including the 'rhodecode_' prefix |
|
702 | 696 | # :param type: the RhodeCodeSetting type to use. |
|
703 | 697 | # :param group: the i18ned group in which we should display this setting |
|
704 | 698 | # :param label: the i18ned label we should display for this setting |
|
705 | 699 | # :param help: the i18ned help we should display for this setting |
|
706 | 700 | LabSetting = collections.namedtuple( |
|
707 | 701 | 'LabSetting', ('key', 'type', 'group', 'label', 'help')) |
|
708 | 702 | |
|
709 | 703 | |
|
710 | 704 | # This list has to be kept in sync with the form |
|
711 | 705 | # rhodecode.model.forms.LabsSettingsForm. |
|
712 | 706 | _LAB_SETTINGS = [ |
|
713 | 707 | |
|
714 | 708 | ] |
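Editor's note: the `settings_global_update` and `settings_visual_update` handlers above share one pattern — a list of `(setting, form_key, type)` tuples looped through `create_or_update_setting`. A minimal standalone sketch of that pattern follows; the `SettingsStore` class and the `CASTS` table are assumptions for illustration, not RhodeCode's actual `SettingsModel`.

```python
# Hypothetical sketch of the (setting, form_key, type_) update loop used
# in the views above; SettingsStore stands in for RhodeCode's SettingsModel.
CASTS = {
    'unicode': str,
    'bool': lambda v: str(v).lower() in ('1', 'true', 'on', 'yes'),
    'int': int,
}

class SettingsStore:
    def __init__(self):
        self._data = {}

    def create_or_update_setting(self, name, value, type_):
        # coerce the raw form value according to its declared type
        self._data[name] = CASTS[type_](value)

def apply_form(store, form_result, settings):
    # mirrors: for setting, form_key, type_ in settings: ...
    for setting, form_key, type_ in settings:
        store.create_or_update_setting(setting, form_result[form_key], type_)

store = SettingsStore()
apply_form(
    store,
    {'rhodecode_title': 'My Site', 'rhodecode_show_version': 'true'},
    [('title', 'rhodecode_title', 'unicode'),
     ('show_version', 'rhodecode_show_version', 'bool')],
)
```

In the real views this loop is wrapped in a try/except that commits the session and invalidates the settings cache on success.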
@@ -1,237 +1,243 b'' | |||
|
1 | 1 | |
|
2 | 2 | |
|
3 | 3 | # Copyright (C) 2016-2023 RhodeCode GmbH |
|
4 | 4 | # |
|
5 | 5 | # This program is free software: you can redistribute it and/or modify |
|
6 | 6 | # it under the terms of the GNU Affero General Public License, version 3 |
|
7 | 7 | # (only), as published by the Free Software Foundation. |
|
8 | 8 | # |
|
9 | 9 | # This program is distributed in the hope that it will be useful, |
|
10 | 10 | # but WITHOUT ANY WARRANTY; without even the implied warranty of |
|
11 | 11 | # MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the |
|
12 | 12 | # GNU General Public License for more details. |
|
13 | 13 | # |
|
14 | 14 | # You should have received a copy of the GNU Affero General Public License |
|
15 | 15 | # along with this program. If not, see <http://www.gnu.org/licenses/>. |
|
16 | 16 | # |
|
17 | 17 | # This program is dual-licensed. If you wish to learn more about the |
|
18 | 18 | # RhodeCode Enterprise Edition, including its added features, Support services, |
|
19 | 19 | # and proprietary license terms, please see https://rhodecode.com/licenses/ |
|
20 | 20 | |
|
21 | 21 | import logging |
|
22 | 22 | import urllib.request |
|
23 | 23 | import urllib.error |
|
24 | 24 | import urllib.parse |
|
25 | 25 | import os |
|
26 | 26 | |
|
27 | 27 | import rhodecode |
|
28 | 28 | from rhodecode.apps._base import BaseAppView |
|
29 | 29 | from rhodecode.apps._base.navigation import navigation_list |
|
30 | 30 | from rhodecode.lib import helpers as h |
|
31 | 31 | from rhodecode.lib.auth import (LoginRequired, HasPermissionAllDecorator) |
|
32 | 32 | from rhodecode.lib.utils2 import str2bool |
|
33 | 33 | from rhodecode.lib import system_info |
|
34 | 34 | from rhodecode.model.update import UpdateModel |
|
35 | 35 | |
|
36 | 36 | log = logging.getLogger(__name__) |
|
37 | 37 | |
|
38 | 38 | |
|
39 | 39 | class AdminSystemInfoSettingsView(BaseAppView): |
|
40 | 40 | def load_default_context(self): |
|
41 | 41 | c = self._get_local_tmpl_context() |
|
42 | 42 | return c |
|
43 | 43 | |
|
44 | 44 | def get_env_data(self): |
|
45 | 45 | black_list = [ |
|
46 | 46 | 'NIX_LDFLAGS', |
|
47 | 47 | 'NIX_CFLAGS_COMPILE', |
|
48 | 48 | 'propagatedBuildInputs', |
|
49 | 49 | 'propagatedNativeBuildInputs', |
|
50 | 50 | 'postInstall', |
|
51 | 51 | 'buildInputs', |
|
52 | 52 | 'buildPhase', |
|
53 | 53 | 'preShellHook', |
|
54 | 54 | 'preShellHook', |
|
55 | 55 | 'preCheck', |
|
56 | 56 | 'preBuild', |
|
57 | 57 | 'postShellHook', |
|
58 | 58 | 'postFixup', |
|
59 | 59 | 'postCheck', |
|
60 | 60 | 'nativeBuildInputs', |
|
61 | 61 | 'installPhase', |
|
62 | 62 | 'installCheckPhase', |
|
63 | 63 | 'checkPhase', |
|
64 | 64 | 'configurePhase', |
|
65 | 65 | 'shellHook' |
|
66 | 66 | ] |
|
67 | 67 | secret_list = [ |
|
68 | 68 | 'RHODECODE_USER_PASS' |
|
69 | 69 | ] |
|
70 | 70 | |
|
71 | 71 | for k, v in sorted(os.environ.items()): |
|
72 | 72 | if k in black_list: |
|
73 | 73 | continue |
|
74 | 74 | if k in secret_list: |
|
75 | 75 | v = '*****' |
|
76 | 76 | yield k, v |
|
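Editor's note: `get_env_data` above demonstrates a useful technique — dumping the process environment while skipping noisy build variables and masking secrets. A standalone sketch (the list contents below are abbreviated, not RhodeCode's full lists):

```python
# Sketch of the get_env_data pattern: blacklist noisy keys, mask secret ones.
import os

BLACK_LIST = {'NIX_LDFLAGS', 'buildInputs', 'shellHook'}   # abbreviated
SECRET_LIST = {'RHODECODE_USER_PASS'}

def get_env_data(environ=None):
    environ = os.environ if environ is None else environ
    for k, v in sorted(environ.items()):
        if k in BLACK_LIST:
            continue            # drop noisy build-time variables entirely
        if k in SECRET_LIST:
            v = '*****'         # show that the key exists, hide its value
        yield k, v

sample = {'PATH': '/usr/bin', 'RHODECODE_USER_PASS': 'hunter2', 'shellHook': 'x'}
print(dict(get_env_data(sample)))
```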
77 | 77 | |
|
78 | 78 | @LoginRequired() |
|
79 | 79 | @HasPermissionAllDecorator('hg.admin') |
|
80 | 80 | def settings_system_info(self): |
|
81 | 81 | _ = self.request.translate |
|
82 | 82 | c = self.load_default_context() |
|
83 | 83 | |
|
84 | 84 | c.active = 'system' |
|
85 | 85 | c.navlist = navigation_list(self.request) |
|
86 | 86 | |
|
87 | 87 | # TODO(marcink), figure out how to allow only selected users to do this |
|
88 | 88 | c.allowed_to_snapshot = self._rhodecode_user.admin |
|
89 | 89 | |
|
90 | 90 | snapshot = str2bool(self.request.params.get('snapshot')) |
|
91 | 91 | |
|
92 | 92 | c.rhodecode_update_url = UpdateModel().get_update_url() |
|
93 | 93 | c.env_data = self.get_env_data() |
|
94 | 94 | server_info = system_info.get_system_info(self.request.environ) |
|
95 | 95 | |
|
96 | 96 | for key, val in server_info.items(): |
|
97 | 97 | setattr(c, key, val) |
|
98 | 98 | |
|
99 | 99 | def val(name, subkey='human_value'): |
|
100 | 100 | return server_info[name][subkey] |
|
101 | 101 | |
|
102 | 102 | def state(name): |
|
103 | 103 | return server_info[name]['state'] |
|
104 | 104 | |
|
105 | 105 | def val2(name): |
|
106 | 106 | val = server_info[name]['human_value'] |
|
107 | 107 | state = server_info[name]['state'] |
|
108 | 108 | return val, state |
|
109 | 109 | |
|
110 | 110 | update_info_msg = _('Note: please make sure this server can ' |
|
111 | 111 | 'access `${url}` for the update link to work', |
|
112 | 112 | mapping=dict(url=c.rhodecode_update_url)) |
|
113 | 113 | version = UpdateModel().get_stored_version() |
|
114 | 114 | is_outdated = UpdateModel().is_outdated( |
|
115 | 115 | rhodecode.__version__, version) |
|
116 | 116 | update_state = { |
|
117 | 117 | 'type': 'warning', |
|
118 | 118 | 'message': 'New version available: {}'.format(version) |
|
119 | 119 | } \ |
|
120 | 120 | if is_outdated else {} |
|
121 | 121 | c.data_items = [ |
|
122 | 122 | # update info |
|
123 | 123 | (_('Update info'), h.literal( |
|
124 | 124 | '<span class="link" id="check_for_update" >%s.</span>' % ( |
|
125 | 125 | _('Check for updates')) + |
|
126 | 126 | '<br/> <span >%s.</span>' % (update_info_msg) |
|
127 | 127 | ), ''), |
|
128 | 128 | |
|
129 | 129 | # RhodeCode specific |
|
130 | 130 | (_('RhodeCode Version'), val('rhodecode_app')['text'], state('rhodecode_app')), |
|
131 | 131 | (_('Latest version'), version, update_state), |
|
132 | 132 | (_('RhodeCode Base URL'), val('rhodecode_config')['config'].get('app.base_url'), state('rhodecode_config')), |
|
133 | 133 | (_('RhodeCode Server IP'), val('server')['server_ip'], state('server')), |
|
134 | 134 | (_('RhodeCode Server ID'), val('server')['server_id'], state('server')), |
|
135 | 135 | (_('RhodeCode Configuration'), val('rhodecode_config')['path'], state('rhodecode_config')), |
|
136 | 136 | (_('RhodeCode Certificate'), val('rhodecode_config')['cert_path'], state('rhodecode_config')), |
|
137 | 137 | (_('Workers'), val('rhodecode_config')['config']['server:main'].get('workers', '?'), state('rhodecode_config')), |
|
138 | 138 | (_('Worker Type'), val('rhodecode_config')['config']['server:main'].get('worker_class', 'sync'), state('rhodecode_config')), |
|
139 | 139 | ('', '', ''), # spacer |
|
140 | 140 | |
|
141 | 141 | # Database |
|
142 | 142 | (_('Database'), val('database')['url'], state('database')), |
|
143 | 143 | (_('Database version'), val('database')['version'], state('database')), |
|
144 | 144 | ('', '', ''), # spacer |
|
145 | 145 | |
|
146 | 146 | # Platform/Python |
|
147 | 147 | (_('Platform'), val('platform')['name'], state('platform')), |
|
148 | 148 | (_('Platform UUID'), val('platform')['uuid'], state('platform')), |
|
149 | 149 | (_('Lang'), val('locale'), state('locale')), |
|
150 | 150 | (_('Python version'), val('python')['version'], state('python')), |
|
151 | 151 | (_('Python path'), val('python')['executable'], state('python')), |
|
152 | 152 | ('', '', ''), # spacer |
|
153 | 153 | |
|
154 | 154 | # Systems stats |
|
155 | 155 | (_('CPU'), val('cpu')['text'], state('cpu')), |
|
156 | 156 | (_('Load'), val('load')['text'], state('load')), |
|
157 | 157 | (_('Memory'), val('memory')['text'], state('memory')), |
|
158 | 158 | (_('Uptime'), val('uptime')['text'], state('uptime')), |
|
159 | 159 | ('', '', ''), # spacer |
|
160 | 160 | |
|
161 | 161 | # ulimit |
|
162 | 162 | (_('Ulimit'), val('ulimit')['text'], state('ulimit')), |
|
163 | 163 | |
|
164 | 164 | # Repo storage |
|
165 | 165 | (_('Storage location'), val('storage')['path'], state('storage')), |
|
166 | 166 | (_('Storage info'), val('storage')['text'], state('storage')), |
|
167 | 167 | (_('Storage inodes'), val('storage_inodes')['text'], state('storage_inodes')), |
|
168 | ('', '', ''), # spacer | |
|
168 | 169 | |
|
169 | 170 | (_('Gist storage location'), val('storage_gist')['path'], state('storage_gist')), |
|
170 | 171 | (_('Gist storage info'), val('storage_gist')['text'], state('storage_gist')), |
|
172 | ('', '', ''), # spacer | |
|
171 | 173 | |
|
174 | (_('Archive cache storage type'), val('storage_archive')['type'], state('storage_archive')), | |
|
172 | 175 | (_('Archive cache storage location'), val('storage_archive')['path'], state('storage_archive')), |
|
173 | 176 | (_('Archive cache info'), val('storage_archive')['text'], state('storage_archive')), |
|
177 | ('', '', ''), # spacer | |
|
174 | 178 | |
|
175 | 179 | (_('Temp storage location'), val('storage_temp')['path'], state('storage_temp')), |
|
176 | 180 | (_('Temp storage info'), val('storage_temp')['text'], state('storage_temp')), |
|
181 | ('', '', ''), # spacer | |
|
177 | 182 | |
|
178 | 183 | (_('Search info'), val('search')['text'], state('search')), |
|
179 | 184 | (_('Search location'), val('search')['location'], state('search')), |
|
180 | 185 | ('', '', ''), # spacer |
|
181 | 186 | |
|
182 | 187 | # VCS specific |
|
183 | 188 | (_('VCS Backends'), val('vcs_backends'), state('vcs_backends')), |
|
184 | 189 | (_('VCS Server'), val('vcs_server')['text'], state('vcs_server')), |
|
185 | 190 | (_('GIT'), val('git'), state('git')), |
|
186 | 191 | (_('HG'), val('hg'), state('hg')), |
|
187 | 192 | (_('SVN'), val('svn'), state('svn')), |
|
188 | 193 | |
|
189 | 194 | ] |
|
190 | 195 | |
|
191 | 196 | c.vcsserver_data_items = [ |
|
192 | (k, v) for k,v in (val('vcs_server_config') or {}).items() | |
|
197 | (k, v) for k, v in (val('vcs_server_config') or {}).items() | |
|
193 | 198 | ] |
|
194 | 199 | |
|
195 | 200 | if snapshot: |
|
196 | 201 | if c.allowed_to_snapshot: |
|
197 | 202 | c.data_items.pop(0) # remove server info |
|
198 | 203 | self.request.override_renderer = 'admin/settings/settings_system_snapshot.mako' |
|
199 | 204 | else: |
|
200 | 205 | h.flash('You are not allowed to do this', category='warning') |
|
201 | 206 | return self._get_template_context(c) |
|
202 | 207 | |
|
203 | 208 | @LoginRequired() |
|
204 | 209 | @HasPermissionAllDecorator('hg.admin') |
|
205 | 210 | def settings_system_info_check_update(self): |
|
206 | 211 | _ = self.request.translate |
|
207 | 212 | c = self.load_default_context() |
|
208 | 213 | |
|
209 | 214 | update_url = UpdateModel().get_update_url() |
|
210 | 215 | |
|
211 | 216 | def _err(s): |
|
212 | return '<div style="color:#ff8888; padding:4px 0px">{}</div>' | |
|
217 | return f'<div style="color:#ff8888; padding:4px 0px">{s}</div>' | |
|
218 | ||
|
213 | 219 | try: |
|
214 | 220 | data = UpdateModel().get_update_data(update_url) |
|
215 | 221 | except urllib.error.URLError as e: |
|
216 | 222 | log.exception("Exception contacting upgrade server") |
|
217 | 223 | self.request.override_renderer = 'string' |
|
218 | 224 | return _err('Failed to contact upgrade server: %r' % e) |
|
219 | 225 | except ValueError as e: |
|
220 | 226 | log.exception("Bad data sent from update server") |
|
221 | 227 | self.request.override_renderer = 'string' |
|
222 | 228 | return _err('Bad data sent from update server') |
|
223 | 229 | |
|
224 | 230 | latest = data['versions'][0] |
|
225 | 231 | |
|
226 | 232 | c.update_url = update_url |
|
227 | 233 | c.latest_data = latest |
|
228 | c.latest_ver = latest['version'] | |
|
229 | c.cur_ver = rhodecode.__version__ | |
|
234 | c.latest_ver = (latest['version'] or '').strip() | |
|
235 | c.cur_ver = self.request.GET.get('ver') or rhodecode.__version__ | |
|
230 | 236 | c.should_upgrade = False |
|
231 | 237 | |
|
232 | is_oudated = UpdateModel().is_outdated(c.cur_ver, c.latest_ver) | |
|
233 | if is_oudated: | |
|
238 | is_outdated = UpdateModel().is_outdated(c.cur_ver, c.latest_ver) | |
|
239 | if is_outdated: | |
|
234 | 240 | c.should_upgrade = True |
|
235 | 241 | c.important_notices = latest['general'] |
|
236 | 242 | UpdateModel().store_version(latest['version']) |
|
237 | 243 | return self._get_template_context(c) |
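Editor's note: `settings_system_info_check_update` above decides `should_upgrade` via `UpdateModel().is_outdated(c.cur_ver, c.latest_ver)`. A hedged sketch of what such a check can look like, comparing dotted version strings as integer tuples (RhodeCode's actual implementation may differ):

```python
# Illustrative is_outdated-style check: '5.0.1' < '5.0.3' as integer tuples.
def parse_version(ver: str) -> tuple:
    # '5.0.1' -> (5, 0, 1); non-numeric characters in a chunk are dropped
    parts = []
    for chunk in ver.strip().split('.'):
        digits = ''.join(ch for ch in chunk if ch.isdigit())
        if digits:
            parts.append(int(digits))
    return tuple(parts)

def is_outdated(current: str, latest: str) -> bool:
    # tuple comparison handles multi-digit components ('4.27.1' < '5.0.0')
    return parse_version(current) < parse_version(latest)

print(is_outdated('5.0.1', '5.0.3'))
print(is_outdated('5.1.0', '5.0.3'))
```

Note this is why the view strips the stored latest version before comparing: stray whitespace would otherwise defeat a naive string comparison.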
@@ -1,1321 +1,1321 b'' | |||
|
1 | 1 | # Copyright (C) 2016-2023 RhodeCode GmbH |
|
2 | 2 | # |
|
3 | 3 | # This program is free software: you can redistribute it and/or modify |
|
4 | 4 | # it under the terms of the GNU Affero General Public License, version 3 |
|
5 | 5 | # (only), as published by the Free Software Foundation. |
|
6 | 6 | # |
|
7 | 7 | # This program is distributed in the hope that it will be useful, |
|
8 | 8 | # but WITHOUT ANY WARRANTY; without even the implied warranty of |
|
9 | 9 | # MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the |
|
10 | 10 | # GNU General Public License for more details. |
|
11 | 11 | # |
|
12 | 12 | # You should have received a copy of the GNU Affero General Public License |
|
13 | 13 | # along with this program. If not, see <http://www.gnu.org/licenses/>. |
|
14 | 14 | # |
|
15 | 15 | # This program is dual-licensed. If you wish to learn more about the |
|
16 | 16 | # RhodeCode Enterprise Edition, including its added features, Support services, |
|
17 | 17 | # and proprietary license terms, please see https://rhodecode.com/licenses/ |
|
18 | 18 | |
|
19 | 19 | import logging |
|
20 | 20 | import datetime |
|
21 | 21 | import formencode |
|
22 | 22 | import formencode.htmlfill |
|
23 | 23 | |
|
24 | 24 | from pyramid.httpexceptions import HTTPFound |
|
25 | 25 | from pyramid.renderers import render |
|
26 | 26 | from pyramid.response import Response |
|
27 | 27 | |
|
28 | 28 | from rhodecode import events |
|
29 | 29 | from rhodecode.apps._base import BaseAppView, DataGridAppView, UserAppView |
|
30 | from rhodecode.apps.ssh_support import SshKeyFileChangeEvent | |
|
30 | from rhodecode.apps.ssh_support.events import SshKeyFileChangeEvent | |
|
31 | 31 | from rhodecode.authentication.base import get_authn_registry, RhodeCodeExternalAuthPlugin |
|
32 | 32 | from rhodecode.authentication.plugins import auth_rhodecode |
|
33 | 33 | from rhodecode.events import trigger |
|
34 | 34 | from rhodecode.model.db import true, UserNotice |
|
35 | 35 | |
|
36 | 36 | from rhodecode.lib import audit_logger, rc_cache, auth |
|
37 | 37 | from rhodecode.lib.exceptions import ( |
|
38 | 38 | UserCreationError, UserOwnsReposException, UserOwnsRepoGroupsException, |
|
39 | 39 | UserOwnsUserGroupsException, UserOwnsPullRequestsException, |
|
40 | 40 | UserOwnsArtifactsException, DefaultUserException) |
|
41 | 41 | from rhodecode.lib import ext_json |
|
42 | 42 | from rhodecode.lib.auth import ( |
|
43 | 43 | LoginRequired, HasPermissionAllDecorator, CSRFRequired) |
|
44 | 44 | from rhodecode.lib import helpers as h |
|
45 | 45 | from rhodecode.lib.helpers import SqlPage |
|
46 | 46 | from rhodecode.lib.utils2 import safe_int, safe_str, AttributeDict |
|
47 | 47 | from rhodecode.model.auth_token import AuthTokenModel |
|
48 | 48 | from rhodecode.model.forms import ( |
|
49 | 49 | UserForm, UserIndividualPermissionsForm, UserPermissionsForm, |
|
50 | 50 | UserExtraEmailForm, UserExtraIpForm) |
|
51 | 51 | from rhodecode.model.permission import PermissionModel |
|
52 | 52 | from rhodecode.model.repo_group import RepoGroupModel |
|
53 | 53 | from rhodecode.model.ssh_key import SshKeyModel |
|
54 | 54 | from rhodecode.model.user import UserModel |
|
55 | 55 | from rhodecode.model.user_group import UserGroupModel |
|
56 | 56 | from rhodecode.model.db import ( |
|
57 | 57 | or_, coalesce,IntegrityError, User, UserGroup, UserIpMap, UserEmailMap, |
|
58 | 58 | UserApiKeys, UserSshKeys, RepoGroup) |
|
59 | 59 | from rhodecode.model.meta import Session |
|
60 | 60 | |
|
61 | 61 | log = logging.getLogger(__name__) |
|
62 | 62 | |
|
63 | 63 | |
|
64 | 64 | class AdminUsersView(BaseAppView, DataGridAppView): |
|
65 | 65 | |
|
66 | 66 | def load_default_context(self): |
|
67 | 67 | c = self._get_local_tmpl_context() |
|
68 | 68 | return c |
|
69 | 69 | |
|
70 | 70 | @LoginRequired() |
|
71 | 71 | @HasPermissionAllDecorator('hg.admin') |
|
72 | 72 | def users_list(self): |
|
73 | 73 | c = self.load_default_context() |
|
74 | 74 | return self._get_template_context(c) |
|
75 | 75 | |
|
76 | 76 | @LoginRequired() |
|
77 | 77 | @HasPermissionAllDecorator('hg.admin') |
|
78 | 78 | def users_list_data(self): |
|
79 | 79 | self.load_default_context() |
|
80 | 80 | column_map = { |
|
81 | 81 | 'first_name': 'name', |
|
82 | 82 | 'last_name': 'lastname', |
|
83 | 83 | } |
|
84 | 84 | draw, start, limit = self._extract_chunk(self.request) |
|
85 | 85 | search_q, order_by, order_dir = self._extract_ordering( |
|
86 | 86 | self.request, column_map=column_map) |
|
87 | 87 | _render = self.request.get_partial_renderer( |
|
88 | 88 | 'rhodecode:templates/data_table/_dt_elements.mako') |
|
89 | 89 | |
|
90 | 90 | def user_actions(user_id, username): |
|
91 | 91 | return _render("user_actions", user_id, username) |
|
92 | 92 | |
|
93 | 93 | users_data_total_count = User.query()\ |
|
94 | 94 | .filter(User.username != User.DEFAULT_USER) \ |
|
95 | 95 | .count() |
|
96 | 96 | |
|
97 | 97 | users_data_total_inactive_count = User.query()\ |
|
98 | 98 | .filter(User.username != User.DEFAULT_USER) \ |
|
99 | 99 | .filter(User.active != true())\ |
|
100 | 100 | .count() |
|
101 | 101 | |
|
102 | 102 | # json generate |
|
103 | 103 | base_q = User.query().filter(User.username != User.DEFAULT_USER) |
|
104 | 104 | base_inactive_q = base_q.filter(User.active != true()) |
|
105 | 105 | |
|
106 | 106 | if search_q: |
|
107 | 107 | like_expression = '%{}%'.format(safe_str(search_q)) |
|
108 | 108 | base_q = base_q.filter(or_( |
|
109 | 109 | User.username.ilike(like_expression), |
|
110 | 110 | User._email.ilike(like_expression), |
|
111 | 111 | User.name.ilike(like_expression), |
|
112 | 112 | User.lastname.ilike(like_expression), |
|
113 | 113 | )) |
|
114 | 114 | base_inactive_q = base_q.filter(User.active != true()) |
|
115 | 115 | |
|
116 | 116 | users_data_total_filtered_count = base_q.count() |
|
117 | 117 | users_data_total_filtered_inactive_count = base_inactive_q.count() |
|
118 | 118 | |
|
119 | 119 | sort_col = getattr(User, order_by, None) |
|
120 | 120 | if sort_col: |
|
121 | 121 | if order_dir == 'asc': |
|
122 | 122 | # handle null values properly to order by NULL last |
|
123 | 123 | if order_by in ['last_activity']: |
|
124 | 124 | sort_col = coalesce(sort_col, datetime.date.max) |
|
125 | 125 | sort_col = sort_col.asc() |
|
126 | 126 | else: |
|
127 | 127 | # handle null values properly to order by NULL last |
|
128 | 128 | if order_by in ['last_activity']: |
|
129 | 129 | sort_col = coalesce(sort_col, datetime.date.min) |
|
130 | 130 | sort_col = sort_col.desc() |
|
131 | 131 | |
|
132 | 132 | base_q = base_q.order_by(sort_col) |
|
133 | 133 | base_q = base_q.offset(start).limit(limit) |
|
134 | 134 | |
|
135 | 135 | users_list = base_q.all() |
|
136 | 136 | |
|
137 | 137 | users_data = [] |
|
138 | 138 | for user in users_list: |
|
139 | 139 | users_data.append({ |
|
140 | 140 | "username": h.gravatar_with_user(self.request, user.username), |
|
141 | 141 | "email": user.email, |
|
142 | 142 | "first_name": user.first_name, |
|
143 | 143 | "last_name": user.last_name, |
|
144 | 144 | "last_login": h.format_date(user.last_login), |
|
145 | 145 | "last_activity": h.format_date(user.last_activity), |
|
146 | 146 | "active": h.bool2icon(user.active), |
|
147 | 147 | "active_raw": user.active, |
|
148 | 148 | "admin": h.bool2icon(user.admin), |
|
149 | 149 | "extern_type": user.extern_type, |
|
150 | 150 | "extern_name": user.extern_name, |
|
151 | 151 | "action": user_actions(user.user_id, user.username), |
|
152 | 152 | }) |
|
153 | 153 | data = ({ |
|
154 | 154 | 'draw': draw, |
|
155 | 155 | 'data': users_data, |
|
156 | 156 | 'recordsTotal': users_data_total_count, |
|
157 | 157 | 'recordsFiltered': users_data_total_filtered_count, |
|
158 | 158 | 'recordsTotalInactive': users_data_total_inactive_count, |
|
159 | 159 | 'recordsFilteredInactive': users_data_total_filtered_inactive_count |
|
160 | 160 | }) |
|
161 | 161 | |
|
162 | 162 | return data |
|
163 | 163 | |
|
164 | 164 | def _set_personal_repo_group_template_vars(self, c_obj): |
|
165 | 165 | DummyUser = AttributeDict({ |
|
166 | 166 | 'username': '${username}', |
|
167 | 167 | 'user_id': '${user_id}', |
|
168 | 168 | }) |
|
169 | 169 | c_obj.default_create_repo_group = RepoGroupModel() \ |
|
170 | 170 | .get_default_create_personal_repo_group() |
|
171 | 171 | c_obj.personal_repo_group_name = RepoGroupModel() \ |
|
172 | 172 | .get_personal_group_name(DummyUser) |
|
173 | 173 | |
|
174 | 174 | @LoginRequired() |
|
175 | 175 | @HasPermissionAllDecorator('hg.admin') |
|
176 | 176 | def users_new(self): |
|
177 | 177 | _ = self.request.translate |
|
178 | 178 | c = self.load_default_context() |
|
179 | 179 | c.default_extern_type = auth_rhodecode.RhodeCodeAuthPlugin.uid |
|
180 | 180 | self._set_personal_repo_group_template_vars(c) |
|
181 | 181 | return self._get_template_context(c) |
|
182 | 182 | |
|
183 | 183 | @LoginRequired() |
|
184 | 184 | @HasPermissionAllDecorator('hg.admin') |
|
185 | 185 | @CSRFRequired() |
|
186 | 186 | def users_create(self): |
|
187 | 187 | _ = self.request.translate |
|
188 | 188 | c = self.load_default_context() |
|
189 | 189 | c.default_extern_type = auth_rhodecode.RhodeCodeAuthPlugin.uid |
|
190 | 190 | user_model = UserModel() |
|
191 | 191 | user_form = UserForm(self.request.translate)() |
|
192 | 192 | try: |
|
193 | 193 | form_result = user_form.to_python(dict(self.request.POST)) |
|
194 | 194 | user = user_model.create(form_result) |
|
195 | 195 | Session().flush() |
|
196 | 196 | creation_data = user.get_api_data() |
|
197 | 197 | username = form_result['username'] |
|
198 | 198 | |
|
199 | 199 | audit_logger.store_web( |
|
200 | 200 | 'user.create', action_data={'data': creation_data}, |
|
201 | 201 | user=c.rhodecode_user) |
|
202 | 202 | |
|
203 | 203 | user_link = h.link_to( |
|
204 | 204 | h.escape(username), |
|
205 | 205 | h.route_path('user_edit', user_id=user.user_id)) |
|
206 | 206 | h.flash(h.literal(_('Created user %(user_link)s') |
|
207 | 207 | % {'user_link': user_link}), category='success') |
|
208 | 208 | Session().commit() |
|
209 | 209 | except formencode.Invalid as errors: |
|
210 | 210 | self._set_personal_repo_group_template_vars(c) |
|
211 | 211 | data = render( |
|
212 | 212 | 'rhodecode:templates/admin/users/user_add.mako', |
|
213 | 213 | self._get_template_context(c), self.request) |
|
214 | 214 | html = formencode.htmlfill.render( |
|
215 | 215 | data, |
|
216 | 216 | defaults=errors.value, |
|
   217 |                 errors=errors. |

217 | errors=errors.error_dict or {}, | |
|
218 | 218 | prefix_error=False, |
|
219 | 219 | encoding="UTF-8", |
|
220 | 220 | force_defaults=False |
|
221 | 221 | ) |
|
222 | 222 | return Response(html) |
|
223 | 223 | except UserCreationError as e: |
|
224 | 224 | h.flash(safe_str(e), 'error') |
|
225 | 225 | except Exception: |
|
226 | 226 | log.exception("Exception creation of user") |
|
227 | 227 | h.flash(_('Error occurred during creation of user %s') |
|
228 | 228 | % self.request.POST.get('username'), category='error') |
|
229 | 229 | raise HTTPFound(h.route_path('users')) |
|
230 | 230 | |
|
231 | 231 | |
|
232 | 232 | class UsersView(UserAppView): |
|
233 | 233 | ALLOW_SCOPED_TOKENS = False |
|
234 | 234 | """ |
|
235 | 235 | This view has alternative version inside EE, if modified please take a look |
|
236 | 236 | in there as well. |
|
237 | 237 | """ |
|
238 | 238 | |
|
239 | 239 | def get_auth_plugins(self): |
|
240 | 240 | valid_plugins = [] |
|
241 | 241 | authn_registry = get_authn_registry(self.request.registry) |
|
242 | 242 | for plugin in authn_registry.get_plugins_for_authentication(): |
|
243 | 243 | if isinstance(plugin, RhodeCodeExternalAuthPlugin): |
|
244 | 244 | valid_plugins.append(plugin) |
|
245 | 245 | elif plugin.name == 'rhodecode': |
|
246 | 246 | valid_plugins.append(plugin) |
|
247 | 247 | |
|
248 | 248 | # extend our choices if user has set a bound plugin which isn't enabled at the |
|
249 | 249 | # moment |
|
250 | 250 | extern_type = self.db_user.extern_type |
|
251 | 251 | if extern_type not in [x.uid for x in valid_plugins]: |
|
252 | 252 | try: |
|
253 | 253 | plugin = authn_registry.get_plugin_by_uid(extern_type) |
|
254 | 254 | if plugin: |
|
255 | 255 | valid_plugins.append(plugin) |
|
256 | 256 | |
|
257 | 257 | except Exception: |
|
258 | 258 | log.exception( |
|
259 | 259 | f'Could not extend user plugins with `{extern_type}`') |
|
260 | 260 | return valid_plugins |
|
261 | 261 | |
|
262 | 262 | def load_default_context(self): |
|
263 | 263 | req = self.request |
|
264 | 264 | |
|
265 | 265 | c = self._get_local_tmpl_context() |
|
266 | 266 | c.allow_scoped_tokens = self.ALLOW_SCOPED_TOKENS |
|
267 | 267 | c.allowed_languages = [ |
|
268 | 268 | ('en', 'English (en)'), |
|
269 | 269 | ('de', 'German (de)'), |
|
270 | 270 | ('fr', 'French (fr)'), |
|
271 | 271 | ('it', 'Italian (it)'), |
|
272 | 272 | ('ja', 'Japanese (ja)'), |
|
273 | 273 | ('pl', 'Polish (pl)'), |
|
274 | 274 | ('pt', 'Portuguese (pt)'), |
|
275 | 275 | ('ru', 'Russian (ru)'), |
|
276 | 276 | ('zh', 'Chinese (zh)'), |
|
277 | 277 | ] |
|
278 | 278 | |
|
279 | 279 | c.allowed_extern_types = [ |
|
280 | 280 | (x.uid, x.get_display_name()) for x in self.get_auth_plugins() |
|
281 | 281 | ] |
|
282 | 282 | perms = req.registry.settings.get('available_permissions') |
|
283 | 283 | if not perms: |
|
284 | 284 | # inject info about available permissions |
|
285 | 285 | auth.set_available_permissions(req.registry.settings) |
|
286 | 286 | |
|
287 | 287 | c.available_permissions = req.registry.settings['available_permissions'] |
|
288 | 288 | PermissionModel().set_global_permission_choices( |
|
289 | 289 | c, gettext_translator=req.translate) |
|
290 | 290 | |
|
291 | 291 | return c |
|
292 | 292 | |
|
293 | 293 | @LoginRequired() |
|
294 | 294 | @HasPermissionAllDecorator('hg.admin') |
|
295 | 295 | @CSRFRequired() |
|
296 | 296 | def user_update(self): |
|
297 | 297 | _ = self.request.translate |
|
298 | 298 | c = self.load_default_context() |
|
299 | 299 | |
|
300 | 300 | user_id = self.db_user_id |
|
301 | 301 | c.user = self.db_user |
|
302 | 302 | |
|
303 | 303 | c.active = 'profile' |
|
304 | 304 | c.extern_type = c.user.extern_type |
|
305 | 305 | c.extern_name = c.user.extern_name |
|
306 | 306 | c.perm_user = c.user.AuthUser(ip_addr=self.request.remote_addr) |
|
307 | 307 | available_languages = [x[0] for x in c.allowed_languages] |
|
308 | 308 | _form = UserForm(self.request.translate, edit=True, |
|
309 | 309 | available_languages=available_languages, |
|
310 | 310 | old_data={'user_id': user_id, |
|
311 | 311 | 'email': c.user.email})() |
|
312 | 312 | |
|
313 | 313 | c.edit_mode = self.request.POST.get('edit') == '1' |
|
314 | 314 | form_result = {} |
|
315 | 315 | old_values = c.user.get_api_data() |
|
316 | 316 | try: |
|
317 | 317 | form_result = _form.to_python(dict(self.request.POST)) |
|
318 | 318 | skip_attrs = ['extern_name'] |
|
319 | 319 | # TODO: plugin should define if username can be updated |
|
320 | 320 | |
|
321 | 321 | if c.extern_type != "rhodecode" and not c.edit_mode: |
|
322 | 322 | # forbid updating username for external accounts |
|
323 | 323 | skip_attrs.append('username') |
|
324 | 324 | |
|
325 | 325 | UserModel().update_user( |
|
326 | 326 | user_id, skip_attrs=skip_attrs, **form_result) |
|
327 | 327 | |
|
328 | 328 | audit_logger.store_web( |
|
329 | 329 | 'user.edit', action_data={'old_data': old_values}, |
|
330 | 330 | user=c.rhodecode_user) |
|
331 | 331 | |
|
332 | 332 | Session().commit() |
|
333 | 333 | h.flash(_('User updated successfully'), category='success') |
|
334 | 334 | except formencode.Invalid as errors: |
|
335 | 335 | data = render( |
|
336 | 336 | 'rhodecode:templates/admin/users/user_edit.mako', |
|
337 | 337 | self._get_template_context(c), self.request) |
|
338 | 338 | html = formencode.htmlfill.render( |
|
339 | 339 | data, |
|
340 | 340 | defaults=errors.value, |
|
341 | 341 | errors=errors.unpack_errors() or {}, |
|
342 | 342 | prefix_error=False, |
|
343 | 343 | encoding="UTF-8", |
|
344 | 344 | force_defaults=False |
|
345 | 345 | ) |
|
346 | 346 | return Response(html) |
|
347 | 347 | except UserCreationError as e: |
|
348 | 348 | h.flash(safe_str(e), 'error') |
|
349 | 349 | except Exception: |
|
350 | 350 | log.exception("Exception updating user") |
|
351 | 351 | h.flash(_('Error occurred during update of user %s') |
|
352 | 352 | % form_result.get('username'), category='error') |
|
353 | 353 | raise HTTPFound(h.route_path('user_edit', user_id=user_id)) |
|
354 | 354 | |
|
355 | 355 | @LoginRequired() |
|
356 | 356 | @HasPermissionAllDecorator('hg.admin') |
|
357 | 357 | @CSRFRequired() |
|
358 | 358 | def user_delete(self): |
|
359 | 359 | _ = self.request.translate |
|
360 | 360 | c = self.load_default_context() |
|
361 | 361 | c.user = self.db_user |
|
362 | 362 | |
|
363 | 363 | _repos = len(c.user.repositories) |
|
364 | 364 | _repo_groups = len(c.user.repository_groups) |
|
365 | 365 | _user_groups = len(c.user.user_groups) |
|
366 | 366 | _pull_requests = len(c.user.user_pull_requests) |
|
367 | 367 | _artifacts = len(c.user.artifacts) |
|
368 | 368 | |
|
369 | 369 | handle_repos = None |
|
370 | 370 | handle_repo_groups = None |
|
371 | 371 | handle_user_groups = None |
|
372 | 372 | handle_pull_requests = None |
|
373 | 373 | handle_artifacts = None |
|
374 | 374 | |
|
375 | 375 | # calls for flash of handle based on handle case detach or delete |
|
376 | 376 | def set_handle_flash_repos(): |
|
377 | 377 | handle = handle_repos |
|
378 | 378 | if handle == 'detach': |
|
379 | 379 | h.flash(_('Detached %s repositories') % _repos, |
|
380 | 380 | category='success') |
|
381 | 381 | elif handle == 'delete': |
|
382 | 382 | h.flash(_('Deleted %s repositories') % _repos, |
|
383 | 383 | category='success') |
|
384 | 384 | |
|
385 | 385 | def set_handle_flash_repo_groups(): |
|
386 | 386 | handle = handle_repo_groups |
|
387 | 387 | if handle == 'detach': |
|
388 | 388 | h.flash(_('Detached %s repository groups') % _repo_groups, |
|
389 | 389 | category='success') |
|
390 | 390 | elif handle == 'delete': |
|
391 | 391 | h.flash(_('Deleted %s repository groups') % _repo_groups, |
|
392 | 392 | category='success') |
|
393 | 393 | |
|
394 | 394 | def set_handle_flash_user_groups(): |
|
395 | 395 | handle = handle_user_groups |
|
396 | 396 | if handle == 'detach': |
|
397 | 397 | h.flash(_('Detached %s user groups') % _user_groups, |
|
398 | 398 | category='success') |
|
399 | 399 | elif handle == 'delete': |
|
400 | 400 | h.flash(_('Deleted %s user groups') % _user_groups, |
|
401 | 401 | category='success') |
|
402 | 402 | |
|
403 | 403 | def set_handle_flash_pull_requests(): |
|
404 | 404 | handle = handle_pull_requests |
|
405 | 405 | if handle == 'detach': |
|
406 | 406 | h.flash(_('Detached %s pull requests') % _pull_requests, |
|
407 | 407 | category='success') |
|
408 | 408 | elif handle == 'delete': |
|
409 | 409 | h.flash(_('Deleted %s pull requests') % _pull_requests, |
|
410 | 410 | category='success') |
|
411 | 411 | |
|
412 | 412 | def set_handle_flash_artifacts(): |
|
413 | 413 | handle = handle_artifacts |
|
414 | 414 | if handle == 'detach': |
|
415 | 415 | h.flash(_('Detached %s artifacts') % _artifacts, |
|
416 | 416 | category='success') |
|
417 | 417 | elif handle == 'delete': |
|
418 | 418 | h.flash(_('Deleted %s artifacts') % _artifacts, |
|
419 | 419 | category='success') |
|
420 | 420 | |
|
421 | 421 | handle_user = User.get_first_super_admin() |
|
422 | 422 | handle_user_id = safe_int(self.request.POST.get('detach_user_id')) |
|
423 | 423 | if handle_user_id: |
|
424 | 424 | # NOTE(marcink): we get new owner for objects... |
|
425 | 425 | handle_user = User.get_or_404(handle_user_id) |
|
426 | 426 | |
|
427 | 427 | if _repos and self.request.POST.get('user_repos'): |
|
428 | 428 | handle_repos = self.request.POST['user_repos'] |
|
429 | 429 | |
|
430 | 430 | if _repo_groups and self.request.POST.get('user_repo_groups'): |
|
431 | 431 | handle_repo_groups = self.request.POST['user_repo_groups'] |
|
432 | 432 | |
|
433 | 433 | if _user_groups and self.request.POST.get('user_user_groups'): |
|
434 | 434 | handle_user_groups = self.request.POST['user_user_groups'] |
|
435 | 435 | |
|
436 | 436 | if _pull_requests and self.request.POST.get('user_pull_requests'): |
|
437 | 437 | handle_pull_requests = self.request.POST['user_pull_requests'] |
|
438 | 438 | |
|
439 | 439 | if _artifacts and self.request.POST.get('user_artifacts'): |
|
440 | 440 | handle_artifacts = self.request.POST['user_artifacts'] |
|
441 | 441 | |
|
442 | 442 | old_values = c.user.get_api_data() |
|
443 | 443 | |
|
444 | 444 | try: |
|
445 | 445 | |
|
446 | 446 | UserModel().delete( |
|
447 | 447 | c.user, |
|
448 | 448 | handle_repos=handle_repos, |
|
449 | 449 | handle_repo_groups=handle_repo_groups, |
|
450 | 450 | handle_user_groups=handle_user_groups, |
|
451 | 451 | handle_pull_requests=handle_pull_requests, |
|
452 | 452 | handle_artifacts=handle_artifacts, |
|
453 | 453 | handle_new_owner=handle_user |
|
454 | 454 | ) |
|
455 | 455 | |
|
456 | 456 | audit_logger.store_web( |
|
457 | 457 | 'user.delete', action_data={'old_data': old_values}, |
|
458 | 458 | user=c.rhodecode_user) |
|
459 | 459 | |
|
460 | 460 | Session().commit() |
|
461 | 461 | set_handle_flash_repos() |
|
462 | 462 | set_handle_flash_repo_groups() |
|
463 | 463 | set_handle_flash_user_groups() |
|
464 | 464 | set_handle_flash_pull_requests() |
|
465 | 465 | set_handle_flash_artifacts() |
|
466 | 466 | username = h.escape(old_values['username']) |
|
467 | 467 | h.flash(_('Successfully deleted user `{}`').format(username), category='success') |
|
468 | 468 | except (UserOwnsReposException, UserOwnsRepoGroupsException, |
|
469 | 469 | UserOwnsUserGroupsException, UserOwnsPullRequestsException, |
|
470 | 470 | UserOwnsArtifactsException, DefaultUserException) as e: |
|
471 | 471 | |
|
472 | 472 | h.flash(safe_str(e), category='warning') |
|
473 | 473 | except Exception: |
|
474 | 474 | log.exception("Exception during deletion of user") |
|
475 | 475 | h.flash(_('An error occurred during deletion of user'), |
|
476 | 476 | category='error') |
|
477 | 477 | raise HTTPFound(h.route_path('users')) |
|
478 | 478 | |
|
479 | 479 | @LoginRequired() |
|
480 | 480 | @HasPermissionAllDecorator('hg.admin') |
|
481 | 481 | def user_edit(self): |
|
482 | 482 | _ = self.request.translate |
|
483 | 483 | c = self.load_default_context() |
|
484 | 484 | c.user = self.db_user |
|
485 | 485 | |
|
486 | 486 | c.active = 'profile' |
|
487 | 487 | c.extern_type = c.user.extern_type |
|
488 | 488 | c.extern_name = c.user.extern_name |
|
489 | 489 | c.perm_user = c.user.AuthUser(ip_addr=self.request.remote_addr) |
|
490 | 490 | c.edit_mode = self.request.GET.get('edit') == '1' |
|
491 | 491 | |
|
492 | 492 | defaults = c.user.get_dict() |
|
493 | 493 | defaults.update({'language': c.user.user_data.get('language')}) |
|
494 | 494 | |
|
495 | 495 | data = render( |
|
496 | 496 | 'rhodecode:templates/admin/users/user_edit.mako', |
|
497 | 497 | self._get_template_context(c), self.request) |
|
498 | 498 | html = formencode.htmlfill.render( |
|
499 | 499 | data, |
|
500 | 500 | defaults=defaults, |
|
501 | 501 | encoding="UTF-8", |
|
502 | 502 | force_defaults=False |
|
503 | 503 | ) |
|
504 | 504 | return Response(html) |
|
505 | 505 | |
|
506 | 506 | @LoginRequired() |
|
507 | 507 | @HasPermissionAllDecorator('hg.admin') |
|
508 | 508 | def user_edit_advanced(self): |
|
509 | 509 | _ = self.request.translate |
|
510 | 510 | c = self.load_default_context() |
|
511 | 511 | |
|
512 | 512 | user_id = self.db_user_id |
|
513 | 513 | c.user = self.db_user |
|
514 | 514 | |
|
515 | 515 | c.detach_user = User.get_first_super_admin() |
|
516 | 516 | detach_user_id = safe_int(self.request.GET.get('detach_user_id')) |
|
517 | 517 | if detach_user_id: |
|
518 | 518 | c.detach_user = User.get_or_404(detach_user_id) |
|
519 | 519 | |
|
520 | 520 | c.active = 'advanced' |
|
521 | 521 | c.personal_repo_group = RepoGroup.get_user_personal_repo_group(user_id) |
|
522 | 522 | c.personal_repo_group_name = RepoGroupModel()\ |
|
523 | 523 | .get_personal_group_name(c.user) |
|
524 | 524 | |
|
525 | 525 | c.user_to_review_rules = sorted( |
|
526 | 526 | (x.user for x in c.user.user_review_rules), |
|
527 | 527 | key=lambda u: u.username.lower()) |
|
528 | 528 | |
|
529 | 529 | defaults = c.user.get_dict() |
|
530 | 530 | |
|
531 | 531 | # Interim workaround if the user participated on any pull requests as a |
|
532 | 532 | # reviewer. |
|
533 | 533 | has_review = len(c.user.reviewer_pull_requests) |
|
534 | 534 | c.can_delete_user = not has_review |
|
535 | 535 | c.can_delete_user_message = '' |
|
536 | 536 | inactive_link = h.link_to( |
|
537 | 537 | 'inactive', h.route_path('user_edit', user_id=user_id, _anchor='active')) |
|
538 | 538 | if has_review == 1: |
|
539 | 539 | c.can_delete_user_message = h.literal(_( |
|
540 | 540 | 'The user participates as reviewer in {} pull request and ' |
|
541 | 541 | 'cannot be deleted. \nYou can set the user to ' |
|
542 | 542 | '"{}" instead of deleting it.').format( |
|
543 | 543 | has_review, inactive_link)) |
|
544 | 544 | elif has_review: |
|
545 | 545 | c.can_delete_user_message = h.literal(_( |
|
546 | 546 | 'The user participates as reviewer in {} pull requests and ' |
|
547 | 547 | 'cannot be deleted. \nYou can set the user to ' |
|
548 | 548 | '"{}" instead of deleting it.').format( |
|
549 | 549 | has_review, inactive_link)) |
|
550 | 550 | |
|
551 | 551 | data = render( |
|
552 | 552 | 'rhodecode:templates/admin/users/user_edit.mako', |
|
553 | 553 | self._get_template_context(c), self.request) |
|
554 | 554 | html = formencode.htmlfill.render( |
|
555 | 555 | data, |
|
556 | 556 | defaults=defaults, |
|
557 | 557 | encoding="UTF-8", |
|
558 | 558 | force_defaults=False |
|
559 | 559 | ) |
|
560 | 560 | return Response(html) |
|
561 | 561 | |
|
562 | 562 | @LoginRequired() |
|
563 | 563 | @HasPermissionAllDecorator('hg.admin') |
|
564 | 564 | def user_edit_global_perms(self): |
|
565 | 565 | _ = self.request.translate |
|
566 | 566 | c = self.load_default_context() |
|
567 | 567 | c.user = self.db_user |
|
568 | 568 | |
|
569 | 569 | c.active = 'global_perms' |
|
570 | 570 | |
|
571 | 571 | c.default_user = User.get_default_user() |
|
572 | 572 | defaults = c.user.get_dict() |
|
573 | 573 | defaults.update(c.default_user.get_default_perms(suffix='_inherited')) |
|
574 | 574 | defaults.update(c.default_user.get_default_perms()) |
|
575 | 575 | defaults.update(c.user.get_default_perms()) |
|
576 | 576 | |
|
577 | 577 | data = render( |
|
578 | 578 | 'rhodecode:templates/admin/users/user_edit.mako', |
|
579 | 579 | self._get_template_context(c), self.request) |
|
580 | 580 | html = formencode.htmlfill.render( |
|
581 | 581 | data, |
|
582 | 582 | defaults=defaults, |
|
583 | 583 | encoding="UTF-8", |
|
584 | 584 | force_defaults=False |
|
585 | 585 | ) |
|
586 | 586 | return Response(html) |
|
587 | 587 | |
|
588 | 588 | @LoginRequired() |
|
589 | 589 | @HasPermissionAllDecorator('hg.admin') |
|
590 | 590 | @CSRFRequired() |
|
591 | 591 | def user_edit_global_perms_update(self): |
|
592 | 592 | _ = self.request.translate |
|
593 | 593 | c = self.load_default_context() |
|
594 | 594 | |
|
595 | 595 | user_id = self.db_user_id |
|
596 | 596 | c.user = self.db_user |
|
597 | 597 | |
|
598 | 598 | c.active = 'global_perms' |
|
599 | 599 | try: |
|
600 | 600 | # first stage that verifies the checkbox |
|
601 | 601 | _form = UserIndividualPermissionsForm(self.request.translate) |
|
602 | 602 | form_result = _form.to_python(dict(self.request.POST)) |
|
603 | 603 | inherit_perms = form_result['inherit_default_permissions'] |
|
604 | 604 | c.user.inherit_default_permissions = inherit_perms |
|
605 | 605 | Session().add(c.user) |
|
606 | 606 | |
|
607 | 607 | if not inherit_perms: |
|
608 | 608 | # only update the individual ones if we un check the flag |
|
609 | 609 | _form = UserPermissionsForm( |
|
610 | 610 | self.request.translate, |
|
611 | 611 | [x[0] for x in c.repo_create_choices], |
|
612 | 612 | [x[0] for x in c.repo_create_on_write_choices], |
|
613 | 613 | [x[0] for x in c.repo_group_create_choices], |
|
614 | 614 | [x[0] for x in c.user_group_create_choices], |
|
615 | 615 | [x[0] for x in c.fork_choices], |
|
616 | 616 | [x[0] for x in c.inherit_default_permission_choices])() |
|
617 | 617 | |
|
618 | 618 | form_result = _form.to_python(dict(self.request.POST)) |
|
619 | 619 | form_result.update({'perm_user_id': c.user.user_id}) |
|
620 | 620 | |
|
621 | 621 | PermissionModel().update_user_permissions(form_result) |
|
622 | 622 | |
|
623 | 623 | # TODO(marcink): implement global permissions |
|
624 | 624 | # audit_log.store_web('user.edit.permissions') |
|
625 | 625 | |
|
626 | 626 | Session().commit() |
|
627 | 627 | |
|
628 | 628 | h.flash(_('User global permissions updated successfully'), |
|
629 | 629 | category='success') |
|
630 | 630 | |
|
631 | 631 | except formencode.Invalid as errors: |
|
632 | 632 | data = render( |
|
633 | 633 | 'rhodecode:templates/admin/users/user_edit.mako', |
|
634 | 634 | self._get_template_context(c), self.request) |
|
635 | 635 | html = formencode.htmlfill.render( |
|
636 | 636 | data, |
|
637 | 637 | defaults=errors.value, |
|
638 | 638 | errors=errors.unpack_errors() or {}, |
|
639 | 639 | prefix_error=False, |
|
640 | 640 | encoding="UTF-8", |
|
641 | 641 | force_defaults=False |
|
642 | 642 | ) |
|
643 | 643 | return Response(html) |
|
644 | 644 | except Exception: |
|
645 | 645 | log.exception("Exception during permissions saving") |
|
646 | 646 | h.flash(_('An error occurred during permissions saving'), |
|
647 | 647 | category='error') |
|
648 | 648 | |
|
649 | 649 | affected_user_ids = [user_id] |
|
650 | 650 | PermissionModel().trigger_permission_flush(affected_user_ids) |
|
651 | 651 | raise HTTPFound(h.route_path('user_edit_global_perms', user_id=user_id)) |
|
652 | 652 | |
|
653 | 653 | @LoginRequired() |
|
654 | 654 | @HasPermissionAllDecorator('hg.admin') |
|
655 | 655 | @CSRFRequired() |
|
656 | 656 | def user_enable_force_password_reset(self): |
|
657 | 657 | _ = self.request.translate |
|
658 | 658 | c = self.load_default_context() |
|
659 | 659 | |
|
660 | 660 | user_id = self.db_user_id |
|
661 | 661 | c.user = self.db_user |
|
662 | 662 | |
|
663 | 663 | try: |
|
664 | 664 | c.user.update_userdata(force_password_change=True) |
|
665 | 665 | |
|
666 | 666 | msg = _('Force password change enabled for user') |
|
667 | 667 | audit_logger.store_web('user.edit.password_reset.enabled', |
|
668 | 668 | user=c.rhodecode_user) |
|
669 | 669 | |
|
670 | 670 | Session().commit() |
|
671 | 671 | h.flash(msg, category='success') |
|
672 | 672 | except Exception: |
|
673 | 673 | log.exception("Exception during password reset for user") |
|
674 | 674 | h.flash(_('An error occurred during password reset for user'), |
|
675 | 675 | category='error') |
|
676 | 676 | |
|
677 | 677 | raise HTTPFound(h.route_path('user_edit_advanced', user_id=user_id)) |
|
678 | 678 | |
|
679 | 679 | @LoginRequired() |
|
680 | 680 | @HasPermissionAllDecorator('hg.admin') |
|
681 | 681 | @CSRFRequired() |
|
682 | 682 | def user_disable_force_password_reset(self): |
|
683 | 683 | _ = self.request.translate |
|
684 | 684 | c = self.load_default_context() |
|
685 | 685 | |
|
686 | 686 | user_id = self.db_user_id |
|
687 | 687 | c.user = self.db_user |
|
688 | 688 | |
|
689 | 689 | try: |
|
690 | 690 | c.user.update_userdata(force_password_change=False) |
|
691 | 691 | |
|
692 | 692 | msg = _('Force password change disabled for user') |
|
693 | 693 | audit_logger.store_web( |
|
694 | 694 | 'user.edit.password_reset.disabled', |
|
695 | 695 | user=c.rhodecode_user) |
|
696 | 696 | |
|
697 | 697 | Session().commit() |
|
698 | 698 | h.flash(msg, category='success') |
|
699 | 699 | except Exception: |
|
700 | 700 | log.exception("Exception during password reset for user") |
|
701 | 701 | h.flash(_('An error occurred during password reset for user'), |
|
702 | 702 | category='error') |
|
703 | 703 | |
|
704 | 704 | raise HTTPFound(h.route_path('user_edit_advanced', user_id=user_id)) |
|
705 | 705 | |
|
706 | 706 | @LoginRequired() |
|
707 | 707 | @HasPermissionAllDecorator('hg.admin') |
|
708 | 708 | @CSRFRequired() |
|
709 | 709 | def user_notice_dismiss(self): |
|
710 | 710 | _ = self.request.translate |
|
711 | 711 | c = self.load_default_context() |
|
712 | 712 | |
|
713 | 713 | user_id = self.db_user_id |
|
714 | 714 | c.user = self.db_user |
|
715 | 715 | user_notice_id = safe_int(self.request.POST.get('notice_id')) |
|
716 | 716 | notice = UserNotice().query()\ |
|
717 | 717 | .filter(UserNotice.user_id == user_id)\ |
|
718 | 718 | .filter(UserNotice.user_notice_id == user_notice_id)\ |
|
719 | 719 | .scalar() |
|
720 | 720 | read = False |
|
721 | 721 | if notice: |
|
722 | 722 | notice.notice_read = True |
|
723 | 723 | Session().add(notice) |
|
724 | 724 | Session().commit() |
|
725 | 725 | read = True |
|
726 | 726 | |
|
727 | 727 | return {'notice': user_notice_id, 'read': read} |
|
728 | 728 | |
|
729 | 729 | @LoginRequired() |
|
730 | 730 | @HasPermissionAllDecorator('hg.admin') |
|
731 | 731 | @CSRFRequired() |
|
732 | 732 | def user_create_personal_repo_group(self): |
|
733 | 733 | """ |
|
734 | 734 | Create personal repository group for this user |
|
735 | 735 | """ |
|
736 | 736 | from rhodecode.model.repo_group import RepoGroupModel |
|
737 | 737 | |
|
738 | 738 | _ = self.request.translate |
|
739 | 739 | c = self.load_default_context() |
|
740 | 740 | |
|
741 | 741 | user_id = self.db_user_id |
|
742 | 742 | c.user = self.db_user |
|
743 | 743 | |
|
744 | 744 | personal_repo_group = RepoGroup.get_user_personal_repo_group( |
|
745 | 745 | c.user.user_id) |
|
746 | 746 | if personal_repo_group: |
|
747 | 747 | raise HTTPFound(h.route_path('user_edit_advanced', user_id=user_id)) |
|
748 | 748 | |
|
749 | 749 | personal_repo_group_name = RepoGroupModel().get_personal_group_name(c.user) |
|
750 | 750 | named_personal_group = RepoGroup.get_by_group_name( |
|
751 | 751 | personal_repo_group_name) |
|
752 | 752 | try: |
|
753 | 753 | |
|
754 | 754 | if named_personal_group and named_personal_group.user_id == c.user.user_id: |
|
755 | 755 | # migrate the same named group, and mark it as personal |
|
756 | 756 | named_personal_group.personal = True |
|
757 | 757 | Session().add(named_personal_group) |
|
758 | 758 | Session().commit() |
|
759 | 759 | msg = _('Linked repository group `{}` as personal'.format( |
|
760 | 760 | personal_repo_group_name)) |
|
761 | 761 | h.flash(msg, category='success') |
|
762 | 762 | elif not named_personal_group: |
|
763 | 763 | RepoGroupModel().create_personal_repo_group(c.user) |
|
764 | 764 | |
|
765 | 765 | msg = _('Created repository group `{}`'.format( |
|
766 | 766 | personal_repo_group_name)) |
|
767 | 767 | h.flash(msg, category='success') |
|
768 | 768 | else: |
|
769 | 769 | msg = _('Repository group `{}` is already taken'.format( |
|
770 | 770 | personal_repo_group_name)) |
|
771 | 771 | h.flash(msg, category='warning') |
|
772 | 772 | except Exception: |
|
773 | 773 | log.exception("Exception during repository group creation") |
|
774 | 774 | msg = _( |
|
775 | 775 | 'An error occurred during repository group creation for user') |
|
776 | 776 | h.flash(msg, category='error') |
|
777 | 777 | Session().rollback() |
|
778 | 778 | |
|
779 | 779 | raise HTTPFound(h.route_path('user_edit_advanced', user_id=user_id)) |
|
780 | 780 | |
|
781 | 781 | @LoginRequired() |
|
782 | 782 | @HasPermissionAllDecorator('hg.admin') |
|
783 | 783 | def auth_tokens(self): |
|
784 | 784 | _ = self.request.translate |
|
785 | 785 | c = self.load_default_context() |
|
786 | 786 | c.user = self.db_user |
|
787 | 787 | |
|
788 | 788 | c.active = 'auth_tokens' |
|
789 | 789 | |
|
790 | 790 | c.lifetime_values = AuthTokenModel.get_lifetime_values(translator=_) |
|
791 | 791 | c.role_values = [ |
|
792 | 792 | (x, AuthTokenModel.cls._get_role_name(x)) |
|
793 | 793 | for x in AuthTokenModel.cls.ROLES] |
|
794 | 794 | c.role_options = [(c.role_values, _("Role"))] |
|
795 | 795 | c.user_auth_tokens = AuthTokenModel().get_auth_tokens( |
|
796 | 796 | c.user.user_id, show_expired=True) |
|
797 | 797 | c.role_vcs = AuthTokenModel.cls.ROLE_VCS |
|
798 | 798 | return self._get_template_context(c) |
|
799 | 799 | |
|
800 | 800 | @LoginRequired() |
|
801 | 801 | @HasPermissionAllDecorator('hg.admin') |
|
802 | 802 | def auth_tokens_view(self): |
|
803 | 803 | _ = self.request.translate |
|
804 | 804 | c = self.load_default_context() |
|
805 | 805 | c.user = self.db_user |
|
806 | 806 | |
|
807 | 807 | auth_token_id = self.request.POST.get('auth_token_id') |
|
808 | 808 | |
|
809 | 809 | if auth_token_id: |
|
810 | 810 | token = UserApiKeys.get_or_404(auth_token_id) |
|
811 | 811 | |
|
812 | 812 | return { |
|
813 | 813 | 'auth_token': token.api_key |
|
814 | 814 | } |
|
815 | 815 | |
|
816 | 816 | def maybe_attach_token_scope(self, token): |
|
817 | 817 | # implemented in EE edition |
|
818 | 818 | pass |
|
819 | 819 | |
|
820 | 820 | @LoginRequired() |
|
821 | 821 | @HasPermissionAllDecorator('hg.admin') |
|
822 | 822 | @CSRFRequired() |
|
823 | 823 | def auth_tokens_add(self): |
|
824 | 824 | _ = self.request.translate |
|
825 | 825 | c = self.load_default_context() |
|
826 | 826 | |
|
827 | 827 | user_id = self.db_user_id |
|
828 | 828 | c.user = self.db_user |
|
829 | 829 | |
|
830 | 830 | user_data = c.user.get_api_data() |
|
831 | 831 | lifetime = safe_int(self.request.POST.get('lifetime'), -1) |
|
832 | 832 | description = self.request.POST.get('description') |
|
833 | 833 | role = self.request.POST.get('role') |
|
834 | 834 | |
|
835 | 835 | token = UserModel().add_auth_token( |
|
836 | 836 | user=c.user.user_id, |
|
837 | 837 | lifetime_minutes=lifetime, role=role, description=description, |
|
838 | 838 | scope_callback=self.maybe_attach_token_scope) |
|
839 | 839 | token_data = token.get_api_data() |
|
840 | 840 | |
|
841 | 841 | audit_logger.store_web( |
|
842 | 842 | 'user.edit.token.add', action_data={ |
|
843 | 843 | 'data': {'token': token_data, 'user': user_data}}, |
|
844 | 844 | user=self._rhodecode_user, ) |
|
845 | 845 | Session().commit() |
|
846 | 846 | |
|
847 | 847 | h.flash(_("Auth token successfully created"), category='success') |
|
848 | 848 | return HTTPFound(h.route_path('edit_user_auth_tokens', user_id=user_id)) |
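The `auth_tokens_add` view above passes `lifetime` in minutes to `add_auth_token`, with `-1` (the `safe_int` default) meaning the token never expires. A minimal sketch of that expiry convention, assuming a unix-timestamp representation (the actual `AuthTokenModel` implementation may differ):

```python
# Hedged sketch of lifetime-minutes -> expiry handling; -1 means "no expiry".
# This is illustrative only, not RhodeCode's AuthTokenModel code.
import time

def token_expiry(lifetime_minutes: int) -> int:
    """Return a unix expiry timestamp, or -1 for tokens that never expire."""
    if lifetime_minutes < 0:
        return -1
    return int(time.time()) + lifetime_minutes * 60

print(token_expiry(-1))  # -1
```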
|
849 | 849 | |
|
850 | 850 | @LoginRequired() |
|
851 | 851 | @HasPermissionAllDecorator('hg.admin') |
|
852 | 852 | @CSRFRequired() |
|
853 | 853 | def auth_tokens_delete(self): |
|
854 | 854 | _ = self.request.translate |
|
855 | 855 | c = self.load_default_context() |
|
856 | 856 | |
|
857 | 857 | user_id = self.db_user_id |
|
858 | 858 | c.user = self.db_user |
|
859 | 859 | |
|
860 | 860 | user_data = c.user.get_api_data() |
|
861 | 861 | |
|
862 | 862 | del_auth_token = self.request.POST.get('del_auth_token') |
|
863 | 863 | |
|
864 | 864 | if del_auth_token: |
|
865 | 865 | token = UserApiKeys.get_or_404(del_auth_token) |
|
866 | 866 | token_data = token.get_api_data() |
|
867 | 867 | |
|
868 | 868 | AuthTokenModel().delete(del_auth_token, c.user.user_id) |
|
869 | 869 | audit_logger.store_web( |
|
870 | 870 | 'user.edit.token.delete', action_data={ |
|
871 | 871 | 'data': {'token': token_data, 'user': user_data}}, |
|
872 | 872 | user=self._rhodecode_user,) |
|
873 | 873 | Session().commit() |
|
874 | 874 | h.flash(_("Auth token successfully deleted"), category='success') |
|
875 | 875 | |
|
876 | 876 | return HTTPFound(h.route_path('edit_user_auth_tokens', user_id=user_id)) |
|
877 | 877 | |
|
878 | 878 | @LoginRequired() |
|
879 | 879 | @HasPermissionAllDecorator('hg.admin') |
|
880 | 880 | def ssh_keys(self): |
|
881 | 881 | _ = self.request.translate |
|
882 | 882 | c = self.load_default_context() |
|
883 | 883 | c.user = self.db_user |
|
884 | 884 | |
|
885 | 885 | c.active = 'ssh_keys' |
|
886 | 886 | c.default_key = self.request.GET.get('default_key') |
|
887 | 887 | c.user_ssh_keys = SshKeyModel().get_ssh_keys(c.user.user_id) |
|
888 | 888 | return self._get_template_context(c) |
|
889 | 889 | |
|
890 | 890 | @LoginRequired() |
|
891 | 891 | @HasPermissionAllDecorator('hg.admin') |
|
892 | 892 | def ssh_keys_generate_keypair(self): |
|
893 | 893 | _ = self.request.translate |
|
894 | 894 | c = self.load_default_context() |
|
895 | 895 | |
|
896 | 896 | c.user = self.db_user |
|
897 | 897 | |
|
898 | 898 | c.active = 'ssh_keys_generate' |
|
899 | 899 | comment = 'RhodeCode-SSH {}'.format(c.user.email or '') |
|
900 | 900 | private_format = self.request.GET.get('private_format') \ |
|
901 | 901 | or SshKeyModel.DEFAULT_PRIVATE_KEY_FORMAT |
|
902 | 902 | c.private, c.public = SshKeyModel().generate_keypair( |
|
903 | 903 | comment=comment, private_format=private_format) |
|
904 | 904 | |
|
905 | 905 | return self._get_template_context(c) |
|
906 | 906 | |
|
907 | 907 | @LoginRequired() |
|
908 | 908 | @HasPermissionAllDecorator('hg.admin') |
|
909 | 909 | @CSRFRequired() |
|
910 | 910 | def ssh_keys_add(self): |
|
911 | 911 | _ = self.request.translate |
|
912 | 912 | c = self.load_default_context() |
|
913 | 913 | |
|
914 | 914 | user_id = self.db_user_id |
|
915 | 915 | c.user = self.db_user |
|
916 | 916 | |
|
917 | 917 | user_data = c.user.get_api_data() |
|
918 | 918 | key_data = self.request.POST.get('key_data') |
|
919 | 919 | description = self.request.POST.get('description') |
|
920 | 920 | |
|
921 | 921 | fingerprint = 'unknown' |
|
922 | 922 | try: |
|
923 | 923 | if not key_data: |
|
924 | 924 | raise ValueError('Please add a valid public key') |
|
925 | 925 | |
|
926 | 926 | key = SshKeyModel().parse_key(key_data.strip()) |
|
927 | 927 | fingerprint = key.hash_md5() |
|
928 | 928 | |
|
929 | 929 | ssh_key = SshKeyModel().create( |
|
930 | 930 | c.user.user_id, fingerprint, key.keydata, description) |
|
931 | 931 | ssh_key_data = ssh_key.get_api_data() |
|
932 | 932 | |
|
933 | 933 | audit_logger.store_web( |
|
934 | 934 | 'user.edit.ssh_key.add', action_data={ |
|
935 | 935 | 'data': {'ssh_key': ssh_key_data, 'user': user_data}}, |
|
936 | 936 | user=self._rhodecode_user, ) |
|
937 | 937 | Session().commit() |
|
938 | 938 | |
|
939 | 939 | # Trigger an event on change of keys. |
|
940 | 940 | trigger(SshKeyFileChangeEvent(), self.request.registry) |
|
941 | 941 | |
|
942 | 942 | h.flash(_("Ssh Key successfully created"), category='success') |
|
943 | 943 | |
|
944 | 944 | except IntegrityError: |
|
945 | 945 | log.exception("Exception during ssh key saving") |
|
946 | 946 | err = 'Such key with fingerprint `{}` already exists, ' \ |
|
947 | 947 | 'please use a different one'.format(fingerprint) |
|
948 | 948 | h.flash(_('An error occurred during ssh key saving: {}').format(err), |
|
949 | 949 | category='error') |
|
950 | 950 | except Exception as e: |
|
951 | 951 | log.exception("Exception during ssh key saving") |
|
952 | 952 | h.flash(_('An error occurred during ssh key saving: {}').format(e), |
|
953 | 953 | category='error') |
|
954 | 954 | |
|
955 | 955 | return HTTPFound( |
|
956 | 956 | h.route_path('edit_user_ssh_keys', user_id=user_id)) |
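In `ssh_keys_add` above, the submitted key is parsed and its MD5 fingerprint stored alongside the key data. The legacy colon-separated MD5-of-blob fingerprint format can be sketched with the standard library; `SshKeyModel().parse_key` itself wraps a full key parser, and the key material below is fabricated purely for illustration:

```python
# Hedged sketch: compute the legacy MD5 fingerprint of an OpenSSH public key
# line ("<type> <base64-blob> [comment]"). Not RhodeCode's actual parser.
import base64
import hashlib

def md5_fingerprint(public_key_line: str) -> str:
    """Return the colon-separated MD5 fingerprint of the key blob."""
    blob_b64 = public_key_line.strip().split()[1]
    digest = hashlib.md5(base64.b64decode(blob_b64)).hexdigest()
    return ':'.join(digest[i:i + 2] for i in range(0, len(digest), 2))

# Hypothetical key bytes, base64-encoded only so the line parses.
fake_blob = base64.b64encode(b'\x00\x00\x00\x07ssh-rsa demo-key').decode()
fp = md5_fingerprint(f'ssh-rsa {fake_blob} user@example.com')
print(len(fp.split(':')))  # 16
```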
|
957 | 957 | |
|
958 | 958 | @LoginRequired() |
|
959 | 959 | @HasPermissionAllDecorator('hg.admin') |
|
960 | 960 | @CSRFRequired() |
|
961 | 961 | def ssh_keys_delete(self): |
|
962 | 962 | _ = self.request.translate |
|
963 | 963 | c = self.load_default_context() |
|
964 | 964 | |
|
965 | 965 | user_id = self.db_user_id |
|
966 | 966 | c.user = self.db_user |
|
967 | 967 | |
|
968 | 968 | user_data = c.user.get_api_data() |
|
969 | 969 | |
|
970 | 970 | del_ssh_key = self.request.POST.get('del_ssh_key') |
|
971 | 971 | |
|
972 | 972 | if del_ssh_key: |
|
973 | 973 | ssh_key = UserSshKeys.get_or_404(del_ssh_key) |
|
974 | 974 | ssh_key_data = ssh_key.get_api_data() |
|
975 | 975 | |
|
976 | 976 | SshKeyModel().delete(del_ssh_key, c.user.user_id) |
|
977 | 977 | audit_logger.store_web( |
|
978 | 978 | 'user.edit.ssh_key.delete', action_data={ |
|
979 | 979 | 'data': {'ssh_key': ssh_key_data, 'user': user_data}}, |
|
980 | 980 | user=self._rhodecode_user,) |
|
981 | 981 | Session().commit() |
|
982 | 982 | # Trigger an event on change of keys. |
|
983 | 983 | trigger(SshKeyFileChangeEvent(), self.request.registry) |
|
984 | 984 | h.flash(_("Ssh key successfully deleted"), category='success') |
|
985 | 985 | |
|
986 | 986 | return HTTPFound(h.route_path('edit_user_ssh_keys', user_id=user_id)) |
|
987 | 987 | |
|
988 | 988 | @LoginRequired() |
|
989 | 989 | @HasPermissionAllDecorator('hg.admin') |
|
990 | 990 | def emails(self): |
|
991 | 991 | _ = self.request.translate |
|
992 | 992 | c = self.load_default_context() |
|
993 | 993 | c.user = self.db_user |
|
994 | 994 | |
|
995 | 995 | c.active = 'emails' |
|
996 | 996 | c.user_email_map = UserEmailMap.query() \ |
|
997 | 997 | .filter(UserEmailMap.user == c.user).all() |
|
998 | 998 | |
|
999 | 999 | return self._get_template_context(c) |
|
1000 | 1000 | |
|
1001 | 1001 | @LoginRequired() |
|
1002 | 1002 | @HasPermissionAllDecorator('hg.admin') |
|
1003 | 1003 | @CSRFRequired() |
|
1004 | 1004 | def emails_add(self): |
|
1005 | 1005 | _ = self.request.translate |
|
1006 | 1006 | c = self.load_default_context() |
|
1007 | 1007 | |
|
1008 | 1008 | user_id = self.db_user_id |
|
1009 | 1009 | c.user = self.db_user |
|
1010 | 1010 | |
|
1011 | 1011 | email = self.request.POST.get('new_email') |
|
1012 | 1012 | user_data = c.user.get_api_data() |
|
1013 | 1013 | try: |
|
1014 | 1014 | |
|
1015 | 1015 | form = UserExtraEmailForm(self.request.translate)() |
|
1016 | 1016 | data = form.to_python({'email': email}) |
|
1017 | 1017 | email = data['email'] |
|
1018 | 1018 | |
|
1019 | 1019 | UserModel().add_extra_email(c.user.user_id, email) |
|
1020 | 1020 | audit_logger.store_web( |
|
1021 | 1021 | 'user.edit.email.add', |
|
1022 | 1022 | action_data={'email': email, 'user': user_data}, |
|
1023 | 1023 | user=self._rhodecode_user) |
|
1024 | 1024 | Session().commit() |
|
1025 | 1025 | h.flash(_("Added new email address `%s` for user account") % email, |
|
1026 | 1026 | category='success') |
|
1027 | 1027 | except formencode.Invalid as error: |
|
1028 | 1028 | msg = error.unpack_errors()['email'] |
|
1029 | 1029 | h.flash(h.escape(msg), category='error') |
|
1030 | 1030 | except IntegrityError: |
|
1031 | 1031 | log.warning("Email %s already exists", email) |
|
1032 | 1032 | h.flash(_('Email `{}` is already registered for another user.').format(email), |
|
1033 | 1033 | category='error') |
|
1034 | 1034 | except Exception: |
|
1035 | 1035 | log.exception("Exception during email saving") |
|
1036 | 1036 | h.flash(_('An error occurred during email saving'), |
|
1037 | 1037 | category='error') |
|
1038 | 1038 | raise HTTPFound(h.route_path('edit_user_emails', user_id=user_id)) |
|
1039 | 1039 | |
|
1040 | 1040 | @LoginRequired() |
|
1041 | 1041 | @HasPermissionAllDecorator('hg.admin') |
|
1042 | 1042 | @CSRFRequired() |
|
1043 | 1043 | def emails_delete(self): |
|
1044 | 1044 | _ = self.request.translate |
|
1045 | 1045 | c = self.load_default_context() |
|
1046 | 1046 | |
|
1047 | 1047 | user_id = self.db_user_id |
|
1048 | 1048 | c.user = self.db_user |
|
1049 | 1049 | |
|
1050 | 1050 | email_id = self.request.POST.get('del_email_id') |
|
1051 | 1051 | user_model = UserModel() |
|
1052 | 1052 | |
|
1053 | 1053 | email = UserEmailMap.query().get(email_id).email |
|
1054 | 1054 | user_data = c.user.get_api_data() |
|
1055 | 1055 | user_model.delete_extra_email(c.user.user_id, email_id) |
|
1056 | 1056 | audit_logger.store_web( |
|
1057 | 1057 | 'user.edit.email.delete', |
|
1058 | 1058 | action_data={'email': email, 'user': user_data}, |
|
1059 | 1059 | user=self._rhodecode_user) |
|
1060 | 1060 | Session().commit() |
|
1061 | 1061 | h.flash(_("Removed email address from user account"), |
|
1062 | 1062 | category='success') |
|
1063 | 1063 | raise HTTPFound(h.route_path('edit_user_emails', user_id=user_id)) |
|
1064 | 1064 | |
|
1065 | 1065 | @LoginRequired() |
|
1066 | 1066 | @HasPermissionAllDecorator('hg.admin') |
|
1067 | 1067 | def ips(self): |
|
1068 | 1068 | _ = self.request.translate |
|
1069 | 1069 | c = self.load_default_context() |
|
1070 | 1070 | c.user = self.db_user |
|
1071 | 1071 | |
|
1072 | 1072 | c.active = 'ips' |
|
1073 | 1073 | c.user_ip_map = UserIpMap.query() \ |
|
1074 | 1074 | .filter(UserIpMap.user == c.user).all() |
|
1075 | 1075 | |
|
1076 | 1076 | c.inherit_default_ips = c.user.inherit_default_permissions |
|
1077 | 1077 | c.default_user_ip_map = UserIpMap.query() \ |
|
1078 | 1078 | .filter(UserIpMap.user == User.get_default_user()).all() |
|
1079 | 1079 | |
|
1080 | 1080 | return self._get_template_context(c) |
|
1081 | 1081 | |
|
1082 | 1082 | @LoginRequired() |
|
1083 | 1083 | @HasPermissionAllDecorator('hg.admin') |
|
1084 | 1084 | @CSRFRequired() |
|
1085 | 1085 | # NOTE(marcink): this view is allowed for default users, as we can |
|
1086 | 1086 | # edit their IP white list |
|
1087 | 1087 | def ips_add(self): |
|
1088 | 1088 | _ = self.request.translate |
|
1089 | 1089 | c = self.load_default_context() |
|
1090 | 1090 | |
|
1091 | 1091 | user_id = self.db_user_id |
|
1092 | 1092 | c.user = self.db_user |
|
1093 | 1093 | |
|
1094 | 1094 | user_model = UserModel() |
|
1095 | 1095 | desc = self.request.POST.get('description') |
|
1096 | 1096 | try: |
|
1097 | 1097 | ip_list = user_model.parse_ip_range( |
|
1098 | 1098 | self.request.POST.get('new_ip')) |
|
1099 | 1099 | except Exception as e: |
|
1100 | 1100 | ip_list = [] |
|
1101 | 1101 | log.exception("Exception during ip saving") |
|
1102 | 1102 | h.flash(_('An error occurred during ip saving:%s' % (e,)), |
|
1103 | 1103 | category='error') |
|
1104 | 1104 | added = [] |
|
1105 | 1105 | user_data = c.user.get_api_data() |
|
1106 | 1106 | for ip in ip_list: |
|
1107 | 1107 | try: |
|
1108 | 1108 | form = UserExtraIpForm(self.request.translate)() |
|
1109 | 1109 | data = form.to_python({'ip': ip}) |
|
1110 | 1110 | ip = data['ip'] |
|
1111 | 1111 | |
|
1112 | 1112 | user_model.add_extra_ip(c.user.user_id, ip, desc) |
|
1113 | 1113 | audit_logger.store_web( |
|
1114 | 1114 | 'user.edit.ip.add', |
|
1115 | 1115 | action_data={'ip': ip, 'user': user_data}, |
|
1116 | 1116 | user=self._rhodecode_user) |
|
1117 | 1117 | Session().commit() |
|
1118 | 1118 | added.append(ip) |
|
1119 | 1119 | except formencode.Invalid as error: |
|
1120 | 1120 | msg = error.unpack_errors()['ip'] |
|
1121 | 1121 | h.flash(msg, category='error') |
|
1122 | 1122 | except Exception: |
|
1123 | 1123 | log.exception("Exception during ip saving") |
|
1124 | 1124 | h.flash(_('An error occurred during ip saving'), |
|
1125 | 1125 | category='error') |
|
1126 | 1126 | if added: |
|
1127 | 1127 | h.flash( |
|
1128 | 1128 | _("Added ips %s to user whitelist") % (', '.join(ip_list), ), |
|
1129 | 1129 | category='success') |
|
1130 | 1130 | if 'default_user' in self.request.POST: |
|
1131 | 1131 | # case for editing global IP list we do it for 'DEFAULT' user |
|
1132 | 1132 | raise HTTPFound(h.route_path('admin_permissions_ips')) |
|
1133 | 1133 | raise HTTPFound(h.route_path('edit_user_ips', user_id=user_id)) |
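The `ips_add` view above first expands the posted value through `user_model.parse_ip_range`, then validates each resulting entry. A hedged stdlib sketch of such an expansion, assuming single addresses and CIDR blocks are accepted (the real `UserModel.parse_ip_range` may support more forms):

```python
# Illustrative stand-in for parse_ip_range: normalize a single address or a
# CIDR block into a list of whitelist entries. Not the actual RhodeCode code.
import ipaddress

def parse_ip_range(value: str) -> list[str]:
    value = value.strip()
    if '/' in value:
        # A network: keep it as one whitelist entry in canonical CIDR form.
        return [str(ipaddress.ip_network(value, strict=False))]
    return [str(ipaddress.ip_address(value))]

print(parse_ip_range('192.168.0.0/24'))  # ['192.168.0.0/24']
print(parse_ip_range('10.0.0.1'))        # ['10.0.0.1']
```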
|
1134 | 1134 | |
|
1135 | 1135 | @LoginRequired() |
|
1136 | 1136 | @HasPermissionAllDecorator('hg.admin') |
|
1137 | 1137 | @CSRFRequired() |
|
1138 | 1138 | # NOTE(marcink): this view is allowed for default users, as we can |
|
1139 | 1139 | # edit their IP white list |
|
1140 | 1140 | def ips_delete(self): |
|
1141 | 1141 | _ = self.request.translate |
|
1142 | 1142 | c = self.load_default_context() |
|
1143 | 1143 | |
|
1144 | 1144 | user_id = self.db_user_id |
|
1145 | 1145 | c.user = self.db_user |
|
1146 | 1146 | |
|
1147 | 1147 | ip_id = self.request.POST.get('del_ip_id') |
|
1148 | 1148 | user_model = UserModel() |
|
1149 | 1149 | user_data = c.user.get_api_data() |
|
1150 | 1150 | ip = UserIpMap.query().get(ip_id).ip_addr |
|
1151 | 1151 | user_model.delete_extra_ip(c.user.user_id, ip_id) |
|
1152 | 1152 | audit_logger.store_web( |
|
1153 | 1153 | 'user.edit.ip.delete', action_data={'ip': ip, 'user': user_data}, |
|
1154 | 1154 | user=self._rhodecode_user) |
|
1155 | 1155 | Session().commit() |
|
1156 | 1156 | h.flash(_("Removed ip address from user whitelist"), category='success') |
|
1157 | 1157 | |
|
1158 | 1158 | if 'default_user' in self.request.POST: |
|
1159 | 1159 | # case for editing global IP list we do it for 'DEFAULT' user |
|
1160 | 1160 | raise HTTPFound(h.route_path('admin_permissions_ips')) |
|
1161 | 1161 | raise HTTPFound(h.route_path('edit_user_ips', user_id=user_id)) |
|
1162 | 1162 | |
|
1163 | 1163 | @LoginRequired() |
|
1164 | 1164 | @HasPermissionAllDecorator('hg.admin') |
|
1165 | 1165 | def groups_management(self): |
|
1166 | 1166 | c = self.load_default_context() |
|
1167 | 1167 | c.user = self.db_user |
|
1168 | 1168 | c.data = c.user.group_member |
|
1169 | 1169 | |
|
1170 | 1170 | groups = [UserGroupModel.get_user_groups_as_dict(group.users_group) |
|
1171 | 1171 | for group in c.user.group_member] |
|
1172 | 1172 | c.groups = ext_json.str_json(groups) |
|
1173 | 1173 | c.active = 'groups' |
|
1174 | 1174 | |
|
1175 | 1175 | return self._get_template_context(c) |
|
1176 | 1176 | |
|
1177 | 1177 | @LoginRequired() |
|
1178 | 1178 | @HasPermissionAllDecorator('hg.admin') |
|
1179 | 1179 | @CSRFRequired() |
|
1180 | 1180 | def groups_management_updates(self): |
|
1181 | 1181 | _ = self.request.translate |
|
1182 | 1182 | c = self.load_default_context() |
|
1183 | 1183 | |
|
1184 | 1184 | user_id = self.db_user_id |
|
1185 | 1185 | c.user = self.db_user |
|
1186 | 1186 | |
|
1187 | 1187 | user_groups = set(self.request.POST.getall('users_group_id')) |
|
1188 | 1188 | user_groups_objects = [] |
|
1189 | 1189 | |
|
1190 | 1190 | for ugid in user_groups: |
|
1191 | 1191 | user_groups_objects.append( |
|
1192 | 1192 | UserGroupModel().get_group(safe_int(ugid))) |
|
1193 | 1193 | user_group_model = UserGroupModel() |
|
1194 | 1194 | added_to_groups, removed_from_groups = \ |
|
1195 | 1195 | user_group_model.change_groups(c.user, user_groups_objects) |
|
1196 | 1196 | |
|
1197 | 1197 | user_data = c.user.get_api_data() |
|
1198 | 1198 | for user_group_id in added_to_groups: |
|
1199 | 1199 | user_group = UserGroup.get(user_group_id) |
|
1200 | 1200 | old_values = user_group.get_api_data() |
|
1201 | 1201 | audit_logger.store_web( |
|
1202 | 1202 | 'user_group.edit.member.add', |
|
1203 | 1203 | action_data={'user': user_data, 'old_data': old_values}, |
|
1204 | 1204 | user=self._rhodecode_user) |
|
1205 | 1205 | |
|
1206 | 1206 | for user_group_id in removed_from_groups: |
|
1207 | 1207 | user_group = UserGroup.get(user_group_id) |
|
1208 | 1208 | old_values = user_group.get_api_data() |
|
1209 | 1209 | audit_logger.store_web( |
|
1210 | 1210 | 'user_group.edit.member.delete', |
|
1211 | 1211 | action_data={'user': user_data, 'old_data': old_values}, |
|
1212 | 1212 | user=self._rhodecode_user) |
|
1213 | 1213 | |
|
1214 | 1214 | Session().commit() |
|
1215 | 1215 | c.active = 'user_groups_management' |
|
1216 | 1216 | h.flash(_("Groups successfully changed"), category='success') |
|
1217 | 1217 | |
|
1218 | 1218 | return HTTPFound(h.route_path( |
|
1219 | 1219 | 'edit_user_groups_management', user_id=user_id)) |
|
1220 | 1220 | |
|
1221 | 1221 | @LoginRequired() |
|
1222 | 1222 | @HasPermissionAllDecorator('hg.admin') |
|
1223 | 1223 | def user_audit_logs(self): |
|
1224 | 1224 | _ = self.request.translate |
|
1225 | 1225 | c = self.load_default_context() |
|
1226 | 1226 | c.user = self.db_user |
|
1227 | 1227 | |
|
1228 | 1228 | c.active = 'audit' |
|
1229 | 1229 | |
|
1230 | 1230 | p = safe_int(self.request.GET.get('page', 1), 1) |
|
1231 | 1231 | |
|
1232 | 1232 | filter_term = self.request.GET.get('filter') |
|
1233 | 1233 | user_log = UserModel().get_user_log(c.user, filter_term) |
|
1234 | 1234 | |
|
1235 | 1235 | def url_generator(page_num): |
|
1236 | 1236 | query_params = { |
|
1237 | 1237 | 'page': page_num |
|
1238 | 1238 | } |
|
1239 | 1239 | if filter_term: |
|
1240 | 1240 | query_params['filter'] = filter_term |
|
1241 | 1241 | return self.request.current_route_path(_query=query_params) |
|
1242 | 1242 | |
|
1243 | 1243 | c.audit_logs = SqlPage( |
|
1244 | 1244 | user_log, page=p, items_per_page=10, url_maker=url_generator) |
|
1245 | 1245 | c.filter_term = filter_term |
|
1246 | 1246 | return self._get_template_context(c) |
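The `url_generator` closure above is what `SqlPage` uses to build page links: the current path plus a `page` query parameter, preserving the `filter` term when present. The same pattern in isolation, with a placeholder path instead of `request.current_route_path`:

```python
# Sketch of the SqlPage url_maker pattern; the base path is a placeholder,
# the real view derives it from the Pyramid request.
from urllib.parse import urlencode

def make_url_maker(base_path, filter_term=None):
    def url_generator(page_num):
        query_params = {'page': page_num}
        if filter_term:
            query_params['filter'] = filter_term
        return f'{base_path}?{urlencode(query_params)}'
    return url_generator

url_maker = make_url_maker('/_admin/users/7/audit', filter_term='login')
print(url_maker(3))  # /_admin/users/7/audit?page=3&filter=login
```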
|
1247 | 1247 | |
|
1248 | 1248 | @LoginRequired() |
|
1249 | 1249 | @HasPermissionAllDecorator('hg.admin') |
|
1250 | 1250 | def user_audit_logs_download(self): |
|
1251 | 1251 | _ = self.request.translate |
|
1252 | 1252 | c = self.load_default_context() |
|
1253 | 1253 | c.user = self.db_user |
|
1254 | 1254 | |
|
1255 | 1255 | user_log = UserModel().get_user_log(c.user, filter_term=None) |
|
1256 | 1256 | |
|
1257 | 1257 | audit_log_data = {} |
|
1258 | 1258 | for entry in user_log: |
|
1259 | 1259 | audit_log_data[entry.user_log_id] = entry.get_dict() |
|
1260 | 1260 | |
|
1261 | 1261 | response = Response(ext_json.formatted_str_json(audit_log_data)) |
|
1262 | 1262 | response.content_disposition = f'attachment; filename=user_{c.user.user_id}_audit_logs.json' |
|
1263 | 1263 | response.content_type = 'application/json' |
|
1264 | 1264 | |
|
1265 | 1265 | return response |
|
1266 | 1266 | |
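`user_audit_logs_download` above serializes the log entries keyed by id and returns them as a JSON attachment. The response shape can be sketched with plain dicts standing in for the Pyramid `Response` (field names mirror the view; everything else is illustrative):

```python
# Hedged sketch of the JSON-attachment response; a headers dict stands in for
# pyramid.response.Response and its content_disposition attribute.
import json

def audit_logs_attachment(user_id, entries):
    """Serialize audit-log entries keyed by their id into a download payload."""
    audit_log_data = {e['user_log_id']: e for e in entries}
    body = json.dumps(audit_log_data, indent=2)
    headers = {
        'Content-Type': 'application/json',
        'Content-Disposition':
            f'attachment; filename=user_{user_id}_audit_logs.json',
    }
    return body, headers

body, headers = audit_logs_attachment(
    7, [{'user_log_id': 1, 'action': 'user.login'}])
print(headers['Content-Disposition'])
# attachment; filename=user_7_audit_logs.json
```

Note that `json.dumps` stringifies the integer ids, so consumers of the file see `"1"` rather than `1` as keys.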
|
1267 | 1267 | @LoginRequired() |
|
1268 | 1268 | @HasPermissionAllDecorator('hg.admin') |
|
1269 | 1269 | def user_perms_summary(self): |
|
1270 | 1270 | _ = self.request.translate |
|
1271 | 1271 | c = self.load_default_context() |
|
1272 | 1272 | c.user = self.db_user |
|
1273 | 1273 | |
|
1274 | 1274 | c.active = 'perms_summary' |
|
1275 | 1275 | c.perm_user = c.user.AuthUser(ip_addr=self.request.remote_addr) |
|
1276 | 1276 | |
|
1277 | 1277 | return self._get_template_context(c) |
|
1278 | 1278 | |
|
1279 | 1279 | @LoginRequired() |
|
1280 | 1280 | @HasPermissionAllDecorator('hg.admin') |
|
1281 | 1281 | def user_perms_summary_json(self): |
|
1282 | 1282 | self.load_default_context() |
|
1283 | 1283 | perm_user = self.db_user.AuthUser(ip_addr=self.request.remote_addr) |
|
1284 | 1284 | |
|
1285 | 1285 | return perm_user.permissions |
|
1286 | 1286 | |
|
1287 | 1287 | @LoginRequired() |
|
1288 | 1288 | @HasPermissionAllDecorator('hg.admin') |
|
1289 | 1289 | def user_caches(self): |
|
1290 | 1290 | _ = self.request.translate |
|
1291 | 1291 | c = self.load_default_context() |
|
1292 | 1292 | c.user = self.db_user |
|
1293 | 1293 | |
|
1294 | 1294 | c.active = 'caches' |
|
1295 | 1295 | c.perm_user = c.user.AuthUser(ip_addr=self.request.remote_addr) |
|
1296 | 1296 | |
|
1297 | 1297 | cache_namespace_uid = f'cache_user_auth.{rc_cache.PERMISSIONS_CACHE_VER}.{self.db_user.user_id}' |
|
1298 | 1298 | c.region = rc_cache.get_or_create_region('cache_perms', cache_namespace_uid) |
|
1299 | 1299 | c.backend = c.region.backend |
|
1300 | 1300 | c.user_keys = sorted(c.region.backend.list_keys(prefix=cache_namespace_uid)) |
|
1301 | 1301 | |
|
1302 | 1302 | return self._get_template_context(c) |
|
1303 | 1303 | |
|
1304 | 1304 | @LoginRequired() |
|
1305 | 1305 | @HasPermissionAllDecorator('hg.admin') |
|
1306 | 1306 | @CSRFRequired() |
|
1307 | 1307 | def user_caches_update(self): |
|
1308 | 1308 | _ = self.request.translate |
|
1309 | 1309 | c = self.load_default_context() |
|
1310 | 1310 | c.user = self.db_user |
|
1311 | 1311 | |
|
1312 | 1312 | c.active = 'caches' |
|
1313 | 1313 | c.perm_user = c.user.AuthUser(ip_addr=self.request.remote_addr) |
|
1314 | 1314 | |
|
1315 | 1315 | cache_namespace_uid = f'cache_user_auth.{rc_cache.PERMISSIONS_CACHE_VER}.{self.db_user.user_id}' |
|
1316 | 1316 | del_keys = rc_cache.clear_cache_namespace('cache_perms', cache_namespace_uid, method=rc_cache.CLEAR_DELETE) |
|
1317 | 1317 | |
|
1318 | 1318 | h.flash(_("Deleted {} cache keys").format(del_keys), category='success') |
|
1319 | 1319 | |
|
1320 | 1320 | return HTTPFound(h.route_path( |
|
1321 | 1321 | 'edit_user_caches', user_id=c.user.user_id)) |
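`user_caches` and `user_caches_update` above scope a user's permission cache under a per-user namespace (`cache_user_auth.<ver>.<user_id>`) so it can be listed and cleared without touching other users. A minimal sketch of that namespace pattern with a dict backend; RhodeCode's `rc_cache` wraps dogpile.cache regions, and the class below is a hypothetical stand-in:

```python
# Illustrative dict-backed stand-in for a cache region; list_keys/clear mirror
# the namespace-scoped operations used by the views, not the rc_cache API.
PERMISSIONS_CACHE_VER = 'v1'

class DictCacheRegion:
    def __init__(self):
        self._store = {}

    def set(self, key, value):
        self._store[key] = value

    def list_keys(self, prefix=''):
        return [k for k in self._store if k.startswith(prefix)]

    def clear_namespace(self, prefix):
        """Delete every key under the namespace; return how many were removed."""
        keys = self.list_keys(prefix)
        for k in keys:
            del self._store[k]
        return len(keys)

region = DictCacheRegion()
namespace = f'cache_user_auth.{PERMISSIONS_CACHE_VER}.42'
region.set(f'{namespace}:perms', {'repo': 'read'})
region.set(f'{namespace}:ips', ['127.0.0.1'])
region.set('cache_user_auth.v1.7:perms', {})  # another user, left untouched

deleted = region.clear_namespace(namespace)
print(deleted)  # 2
```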
@@ -1,77 +1,101 b'' | |||
|
1 | 1 | # Copyright (C) 2016-2023 RhodeCode GmbH |
|
2 | 2 | # |
|
3 | 3 | # This program is free software: you can redistribute it and/or modify |
|
4 | 4 | # it under the terms of the GNU Affero General Public License, version 3 |
|
5 | 5 | # (only), as published by the Free Software Foundation. |
|
6 | 6 | # |
|
7 | 7 | # This program is distributed in the hope that it will be useful, |
|
8 | 8 | # but WITHOUT ANY WARRANTY; without even the implied warranty of |
|
9 | 9 | # MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the |
|
10 | 10 | # GNU General Public License for more details. |
|
11 | 11 | # |
|
12 | 12 | # You should have received a copy of the GNU Affero General Public License |
|
13 | 13 | # along with this program. If not, see <http://www.gnu.org/licenses/>. |
|
14 | 14 | # |
|
15 | 15 | # This program is dual-licensed. If you wish to learn more about the |
|
16 | 16 | # RhodeCode Enterprise Edition, including its added features, Support services, |
|
17 | 17 | # and proprietary license terms, please see https://rhodecode.com/licenses/ |
|
18 | 18 | |
|
19 | 19 | |
|
20 | 20 | from rhodecode.apps._base import ADMIN_PREFIX |
|
21 | 21 | |
|
22 | 22 | |
|
23 | 23 | def includeme(config): |
|
24 | 24 | from rhodecode.apps.login.views import LoginView |
|
25 | 25 | |
|
26 | 26 | config.add_route( |
|
27 | 27 | name='login', |
|
28 | 28 | pattern=ADMIN_PREFIX + '/login') |
|
29 | 29 | config.add_view( |
|
30 | 30 | LoginView, |
|
31 | 31 | attr='login', |
|
32 | 32 | route_name='login', request_method='GET', |
|
33 | 33 | renderer='rhodecode:templates/login.mako') |
|
34 | 34 | config.add_view( |
|
35 | 35 | LoginView, |
|
36 | 36 | attr='login_post', |
|
37 | 37 | route_name='login', request_method='POST', |
|
38 | 38 | renderer='rhodecode:templates/login.mako') |
|
39 | 39 | |
|
40 | 40 | config.add_route( |
|
41 | 41 | name='logout', |
|
42 | 42 | pattern=ADMIN_PREFIX + '/logout') |
|
43 | 43 | config.add_view( |
|
44 | 44 | LoginView, |
|
45 | 45 | attr='logout', |
|
46 | 46 | route_name='logout', request_method='POST') |
|
47 | 47 | |
|
48 | 48 | config.add_route( |
|
49 | 49 | name='register', |
|
50 | 50 | pattern=ADMIN_PREFIX + '/register') |
|
51 | 51 | config.add_view( |
|
52 | 52 | LoginView, |
|
53 | 53 | attr='register', |
|
54 | 54 | route_name='register', request_method='GET', |
|
55 | 55 | renderer='rhodecode:templates/register.mako') |
|
56 | 56 | config.add_view( |
|
57 | 57 | LoginView, |
|
58 | 58 | attr='register_post', |
|
59 | 59 | route_name='register', request_method='POST', |
|
60 | 60 | renderer='rhodecode:templates/register.mako') |
|
61 | 61 | |
|
62 | 62 | config.add_route( |
|
63 | 63 | name='reset_password', |
|
64 | 64 | pattern=ADMIN_PREFIX + '/password_reset') |
|
65 | 65 | config.add_view( |
|
66 | 66 | LoginView, |
|
67 | 67 | attr='password_reset', |
|
68 | 68 | route_name='reset_password', request_method=('GET', 'POST'), |
|
69 | 69 | renderer='rhodecode:templates/password_reset.mako') |
|
70 | 70 | |
|
71 | 71 | config.add_route( |
|
72 | 72 | name='reset_password_confirmation', |
|
73 | 73 | pattern=ADMIN_PREFIX + '/password_reset_confirmation') |
|
74 | 74 | config.add_view( |
|
75 | 75 | LoginView, |
|
76 | 76 | attr='password_reset_confirmation', |
|
77 | 77 | route_name='reset_password_confirmation', request_method='GET') |
|
78 | ||
|
79 | config.add_route( | |
|
80 | name='setup_2fa', | |
|
81 | pattern=ADMIN_PREFIX + '/setup_2fa') | |
|
82 | config.add_view( | |
|
83 | LoginView, | |
|
84 | attr='setup_2fa', | |
|
85 | route_name='setup_2fa', request_method=['GET', 'POST'], | |
|
86 | renderer='rhodecode:templates/configure_2fa.mako') | |
|
87 | ||
|
88 | config.add_route( | |
|
89 | name='check_2fa', | |
|
90 | pattern=ADMIN_PREFIX + '/check_2fa') | |
|
91 | config.add_view( | |
|
92 | LoginView, | |
|
93 | attr='verify_2fa', | |
|
94 | route_name='check_2fa', request_method='GET', | |
|
95 | renderer='rhodecode:templates/verify_2fa.mako') | |
|
96 | config.add_view( | |
|
97 | LoginView, | |
|
98 | attr='verify_2fa', | |
|
99 | route_name='check_2fa', request_method='POST', | |
|
100 | renderer='rhodecode:templates/verify_2fa.mako') | |
|
101 |
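The new `setup_2fa` and `check_2fa` routes added above follow Pyramid's two-step registration: `add_route` binds a name to a URL pattern, then one `add_view` per HTTP method (or a tuple of methods) attaches a view attribute to that route. A tiny stand-in router showing the same dispatch shape, without Pyramid's actual machinery:

```python
# Hedged sketch of Pyramid-style route/view registration; MiniConfig is an
# illustrative stand-in, not pyramid.config.Configurator.
ADMIN_PREFIX = '/_admin'  # matches rhodecode.apps._base.ADMIN_PREFIX

class MiniConfig:
    def __init__(self):
        self.routes = {}   # route name -> URL pattern
        self.views = {}    # (route name, method) -> view attr

    def add_route(self, name, pattern):
        self.routes[name] = pattern

    def add_view(self, attr, route_name, request_method):
        # Pyramid accepts a single method or a sequence; normalize to a tuple.
        if isinstance(request_method, str):
            request_method = (request_method,)
        for method in request_method:
            self.views[(route_name, method)] = attr

config = MiniConfig()
config.add_route('setup_2fa', ADMIN_PREFIX + '/setup_2fa')
config.add_view('setup_2fa', route_name='setup_2fa',
                request_method=('GET', 'POST'))

print(config.routes['setup_2fa'])           # /_admin/setup_2fa
print(config.views[('setup_2fa', 'POST')])  # setup_2fa
```

Registering GET and POST separately (as `check_2fa` does) or with a method tuple (as `setup_2fa` does) produces the same dispatch table.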
@@ -1,581 +1,593 b'' | |||
|
1 | 1 | # Copyright (C) 2010-2023 RhodeCode GmbH |
|
2 | 2 | # |
|
3 | 3 | # This program is free software: you can redistribute it and/or modify |
|
4 | 4 | # it under the terms of the GNU Affero General Public License, version 3 |
|
5 | 5 | # (only), as published by the Free Software Foundation. |
|
6 | 6 | # |
|
7 | 7 | # This program is distributed in the hope that it will be useful, |
|
8 | 8 | # but WITHOUT ANY WARRANTY; without even the implied warranty of |
|
9 | 9 | # MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the |
|
10 | 10 | # GNU General Public License for more details. |
|
11 | 11 | # |
|
12 | 12 | # You should have received a copy of the GNU Affero General Public License |
|
13 | 13 | # along with this program. If not, see <http://www.gnu.org/licenses/>. |
|
14 | 14 | # |
|
15 | 15 | # This program is dual-licensed. If you wish to learn more about the |
|
16 | 16 | # RhodeCode Enterprise Edition, including its added features, Support services, |
|
17 | 17 | # and proprietary license terms, please see https://rhodecode.com/licenses/ |
|
18 | 18 | |
|
19 | 19 | import urllib.parse |
|
20 | 20 | |
|
21 | 21 | import mock |
|
22 | 22 | import pytest |
|
23 | 23 | |
|
24 | 24 | |
|
25 | 25 | from rhodecode.lib.auth import check_password |
|
26 | 26 | from rhodecode.lib import helpers as h |
|
27 | 27 | from rhodecode.model.auth_token import AuthTokenModel |
|
28 | 28 | from rhodecode.model.db import User, Notification, UserApiKeys |
|
29 | 29 | from rhodecode.model.meta import Session |
|
30 | 30 | |
|
31 | 31 | from rhodecode.tests import ( |
|
32 | 32 | assert_session_flash, HG_REPO, TEST_USER_ADMIN_LOGIN, |
|
33 | 33 | no_newline_id_generator) |
|
34 | 34 | from rhodecode.tests.fixture import Fixture |
|
35 | 35 | from rhodecode.tests.routes import route_path |
|
36 | 36 | |
|
37 | 37 | fixture = Fixture() |
|
38 | 38 | |
|
39 | 39 | whitelist_view = ['RepoCommitsView:repo_commit_raw'] |
|
40 | 40 | |
|
41 | 41 | |
|
42 | 42 | @pytest.mark.usefixtures('app') |
|
43 | 43 | class TestLoginController(object): |
|
44 | 44 | destroy_users = set() |
|
45 | 45 | |
|
46 | 46 | @classmethod |
|
47 | 47 | def teardown_class(cls): |
|
48 | 48 | fixture.destroy_users(cls.destroy_users) |
|
49 | 49 | |
|
50 | 50 | def teardown_method(self, method): |
|
51 | 51 | for n in Notification.query().all(): |
|
52 | 52 | Session().delete(n) |
|
53 | 53 | |
|
54 | 54 | Session().commit() |
|
55 | 55 | assert Notification.query().all() == [] |
|
56 | 56 | |
|
57 | 57 | def test_index(self): |
|
58 | 58 | response = self.app.get(route_path('login')) |
|
59 | 59 | assert response.status == '200 OK' |
|
60 | 60 | # Test response... |
|
61 | 61 | |
|
62 | 62 | def test_login_admin_ok(self): |
|
63 | 63 | response = self.app.post(route_path('login'), |
|
64 | 64 | {'username': 'test_admin', |
|
65 | 65 | 'password': 'test12'}, status=302) |
|
66 | 66 | response = response.follow() |
|
67 | 67 | session = response.get_session_from_response() |
|
68 | 68 | username = session['rhodecode_user'].get('username') |
|
69 | 69 | assert username == 'test_admin' |
|
70 | 70 | response.mustcontain('logout') |
|
71 | 71 | |
|
72 | 72 | def test_login_regular_ok(self): |
|
73 | 73 | response = self.app.post(route_path('login'), |
|
74 | 74 | {'username': 'test_regular', |
|
75 | 75 | 'password': 'test12'}, status=302) |
|
76 | 76 | |
|
77 | 77 | response = response.follow() |
|
78 | 78 | session = response.get_session_from_response() |
|
79 | 79 | username = session['rhodecode_user'].get('username') |
|
80 | 80 | assert username == 'test_regular' |
|
81 | 81 | response.mustcontain('logout') |
|
82 | 82 | |
|
83 | def test_login_with_primary_email(self): | |
|
84 | user_email = 'test_regular@mail.com' | |
|
85 | response = self.app.post(route_path('login'), | |
|
86 | {'username': user_email, | |
|
87 | 'password': 'test12'}, status=302) | |
|
88 | response = response.follow() | |
|
89 | session = response.get_session_from_response() | |
|
90 | user = session['rhodecode_user'] | |
|
91 | assert user['username'] == user_email.split('@')[0] | |
|
92 | assert user['is_authenticated'] | |
|
93 | response.mustcontain('logout') | |
|
94 | ||
|
83 | 95 | def test_login_regular_forbidden_when_super_admin_restriction(self): |
|
84 | 96 | from rhodecode.authentication.plugins.auth_rhodecode import RhodeCodeAuthPlugin |
|
85 | 97 | with fixture.auth_restriction(self.app._pyramid_registry, |
|
86 | 98 | RhodeCodeAuthPlugin.AUTH_RESTRICTION_SUPER_ADMIN): |
|
87 | 99 | response = self.app.post(route_path('login'), |
|
88 | 100 | {'username': 'test_regular', |
|
89 | 101 | 'password': 'test12'}) |
|
90 | 102 | |
|
91 | 103 | response.mustcontain('invalid user name') |
|
92 | 104 | response.mustcontain('invalid password') |
|
93 | 105 | |
|
94 | 106 | def test_login_regular_forbidden_when_scope_restriction(self): |
|
95 | 107 | from rhodecode.authentication.plugins.auth_rhodecode import RhodeCodeAuthPlugin |
|
96 | 108 | with fixture.scope_restriction(self.app._pyramid_registry, |
|
97 | 109 | RhodeCodeAuthPlugin.AUTH_RESTRICTION_SCOPE_VCS): |
|
98 | 110 | response = self.app.post(route_path('login'), |
|
99 | 111 | {'username': 'test_regular', |
|
100 | 112 | 'password': 'test12'}) |
|
101 | 113 | |
|
102 | 114 | response.mustcontain('invalid user name') |
|
103 | 115 | response.mustcontain('invalid password') |
|
104 | 116 | |
|
105 | 117 | def test_login_ok_came_from(self): |
|
106 | 118 | test_came_from = '/_admin/users?branch=stable' |
|
107 | 119 | _url = '{}?came_from={}'.format(route_path('login'), test_came_from) |
|
108 | 120 | response = self.app.post( |
|
109 | 121 | _url, {'username': 'test_admin', 'password': 'test12'}, status=302) |
|
110 | 122 | |
|
111 | 123 | assert 'branch=stable' in response.location |
|
112 | 124 | response = response.follow() |
|
113 | 125 | |
|
114 | 126 | assert response.status == '200 OK' |
|
115 | 127 | response.mustcontain('Users administration') |
|
116 | 128 | |
|
117 | 129 | def test_redirect_to_login_with_get_args(self): |
|
118 | 130 | with fixture.anon_access(False): |
|
119 | 131 | kwargs = {'branch': 'stable'} |
|
120 | 132 | response = self.app.get( |
|
121 | 133 | h.route_path('repo_summary', repo_name=HG_REPO, _query=kwargs), |
|
122 | 134 | status=302) |
|
123 | 135 | |
|
124 | 136 | response_query = urllib.parse.parse_qsl(response.location) |
|
125 | 137 | assert 'branch=stable' in response_query[0][1] |
|
126 | 138 | |
|
127 | 139 | def test_login_form_with_get_args(self): |
|
128 | 140 | _url = '{}?came_from=/_admin/users,branch=stable'.format(route_path('login')) |
|
129 | 141 | response = self.app.get(_url) |
|
130 | 142 | assert 'branch%3Dstable' in response.form.action |
|
131 | 143 | |
|
132 | 144 | @pytest.mark.parametrize("url_came_from", [ |
|
133 | 145 | 'data:text/html,<script>window.alert("xss")</script>', |
|
134 | 146 | 'mailto:test@rhodecode.org', |
|
135 | 147 | 'file:///etc/passwd', |
|
136 | 148 | 'ftp://some.ftp.server', |
|
137 | 149 | 'http://other.domain', |
|
138 | 150 | ], ids=no_newline_id_generator) |
|
139 | 151 | def test_login_bad_came_froms(self, url_came_from): |
|
140 | 152 | _url = '{}?came_from={}'.format(route_path('login'), url_came_from) |
|
141 | 153 | response = self.app.post( |
|
142 | 154 | _url, {'username': 'test_admin', 'password': 'test12'}, status=302) |
|
143 | 155 | assert response.status == '302 Found' |
|
144 | 156 | response = response.follow() |
|
145 | 157 | assert response.status == '200 OK' |
|
146 | 158 | assert response.request.path == '/' |
|
147 | 159 | |
|
148 | 160 | @pytest.mark.xfail(reason="newline params changed behaviour in python3") |
|
149 | 161 | @pytest.mark.parametrize("url_came_from", [ |
|
150 | 162 | '/\r\nX-Forwarded-Host: \rhttp://example.org', |
|
151 | 163 | ], ids=no_newline_id_generator) |
|
152 | 164 | def test_login_bad_came_froms_404(self, url_came_from): |
|
153 | 165 | _url = '{}?came_from={}'.format(route_path('login'), url_came_from) |
|
154 | 166 | response = self.app.post( |
|
155 | 167 | _url, {'username': 'test_admin', 'password': 'test12'}, status=302) |
|
156 | 168 | |
|
157 | 169 | response = response.follow() |
|
158 | 170 | assert response.status == '404 Not Found' |
|
159 | 171 | |
|
160 | 172 | def test_login_short_password(self): |
|
161 | 173 | response = self.app.post(route_path('login'), |
|
162 | 174 | {'username': 'test_admin', |
|
163 | 175 | 'password': 'as'}) |
|
164 | 176 | assert response.status == '200 OK' |
|
165 | 177 | |
|
166 | 178 | response.mustcontain('Enter 3 characters or more') |
|
167 | 179 | |
|
168 | 180 | def test_login_wrong_non_ascii_password(self, user_regular): |
|
169 | 181 | response = self.app.post( |
|
170 | 182 | route_path('login'), |
|
171 | 183 | {'username': user_regular.username, |
|
172 | 184 | 'password': 'invalid-non-asci\xe4'.encode('utf8')}) |
|
173 | 185 | |
|
174 | 186 | response.mustcontain('invalid user name') |
|
175 | 187 | response.mustcontain('invalid password') |
|
176 | 188 | |
|
177 | 189 | def test_login_with_non_ascii_password(self, user_util): |
|
178 | 190 | password = u'valid-non-ascii\xe4' |
|
179 | 191 | user = user_util.create_user(password=password) |
|
180 | 192 | response = self.app.post( |
|
181 | 193 | route_path('login'), |
|
182 | 194 | {'username': user.username, |
|
183 | 195 | 'password': password}) |
|
184 | 196 | assert response.status_code == 302 |
|
185 | 197 | |
|
186 | 198 | def test_login_wrong_username_password(self): |
|
187 | 199 | response = self.app.post(route_path('login'), |
|
188 | 200 | {'username': 'error', |
|
189 | 201 | 'password': 'test12'}) |
|
190 | 202 | |
|
191 | 203 | response.mustcontain('invalid user name') |
|
192 | 204 | response.mustcontain('invalid password') |
|
193 | 205 | |
|
194 | 206 | def test_login_admin_ok_password_migration(self, real_crypto_backend): |
|
195 | 207 | from rhodecode.lib import auth |
|
196 | 208 | |
|
197 | 209 | # create new user, with sha256 password |
|
198 | 210 | temp_user = 'test_admin_sha256' |
|
199 | 211 | user = fixture.create_user(temp_user) |
|
200 | 212 | user.password = auth._RhodeCodeCryptoSha256().hash_create( |
|
201 | 213 | b'test123') |
|
202 | 214 | Session().add(user) |
|
203 | 215 | Session().commit() |
|
204 | 216 | self.destroy_users.add(temp_user) |
|
205 | 217 | response = self.app.post(route_path('login'), |
|
206 | 218 | {'username': temp_user, |
|
207 | 219 | 'password': 'test123'}, status=302) |
|
208 | 220 | |
|
209 | 221 | response = response.follow() |
|
210 | 222 | session = response.get_session_from_response() |
|
211 | 223 | username = session['rhodecode_user'].get('username') |
|
212 | 224 | assert username == temp_user |
|
213 | 225 | response.mustcontain('logout') |
|
214 | 226 | |
|
215 | 227 | # new password should be bcrypted, after log-in and transfer |
|
216 | 228 | user = User.get_by_username(temp_user) |
|
217 | 229 | assert user.password.startswith('$') |
|
218 | 230 | |
|
219 | 231 | # REGISTRATIONS |
|
220 | 232 | def test_register(self): |
|
221 | 233 | response = self.app.get(route_path('register')) |
|
222 | 234 | response.mustcontain('Create an Account') |
|
223 | 235 | |
|
224 | 236 | def test_register_err_same_username(self): |
|
225 | 237 | uname = 'test_admin' |
|
226 | 238 | response = self.app.post( |
|
227 | 239 | route_path('register'), |
|
228 | 240 | { |
|
229 | 241 | 'username': uname, |
|
230 | 242 | 'password': 'test12', |
|
231 | 243 | 'password_confirmation': 'test12', |
|
232 | 244 | 'email': 'goodmail@domain.com', |
|
233 | 245 | 'firstname': 'test', |
|
234 | 246 | 'lastname': 'test' |
|
235 | 247 | } |
|
236 | 248 | ) |
|
237 | 249 | |
|
238 | 250 | assertr = response.assert_response() |
|
239 | 251 | msg = 'Username "%(username)s" already exists' |
|
240 | 252 | msg = msg % {'username': uname} |
|
241 | 253 | assertr.element_contains('#username+.error-message', msg) |
|
242 | 254 | |
|
243 | 255 | def test_register_err_same_email(self): |
|
244 | 256 | response = self.app.post( |
|
245 | 257 | route_path('register'), |
|
246 | 258 | { |
|
247 | 259 | 'username': 'test_admin_0', |
|
248 | 260 | 'password': 'test12', |
|
249 | 261 | 'password_confirmation': 'test12', |
|
250 | 262 | 'email': 'test_admin@mail.com', |
|
251 | 263 | 'firstname': 'test', |
|
252 | 264 | 'lastname': 'test' |
|
253 | 265 | } |
|
254 | 266 | ) |
|
255 | 267 | |
|
256 | 268 | assertr = response.assert_response() |
|
257 | msg = |

269 | msg = 'This e-mail address is already taken' | |
|
258 | 270 | assertr.element_contains('#email+.error-message', msg) |
|
259 | 271 | |
|
260 | 272 | def test_register_err_same_email_case_sensitive(self): |
|
261 | 273 | response = self.app.post( |
|
262 | 274 | route_path('register'), |
|
263 | 275 | { |
|
264 | 276 | 'username': 'test_admin_1', |
|
265 | 277 | 'password': 'test12', |
|
266 | 278 | 'password_confirmation': 'test12', |
|
267 | 279 | 'email': 'TesT_Admin@mail.COM', |
|
268 | 280 | 'firstname': 'test', |
|
269 | 281 | 'lastname': 'test' |
|
270 | 282 | } |
|
271 | 283 | ) |
|
272 | 284 | assertr = response.assert_response() |
|
273 | msg = |

285 | msg = 'This e-mail address is already taken' | |
|
274 | 286 | assertr.element_contains('#email+.error-message', msg) |
|
275 | 287 | |
|
276 | 288 | def test_register_err_wrong_data(self): |
|
277 | 289 | response = self.app.post( |
|
278 | 290 | route_path('register'), |
|
279 | 291 | { |
|
280 | 292 | 'username': 'xs', |
|
281 | 293 | 'password': 'test', |
|
282 | 294 | 'password_confirmation': 'test', |
|
283 | 295 | 'email': 'goodmailm', |
|
284 | 296 | 'firstname': 'test', |
|
285 | 297 | 'lastname': 'test' |
|
286 | 298 | } |
|
287 | 299 | ) |
|
288 | 300 | assert response.status == '200 OK' |
|
289 | 301 | response.mustcontain('An email address must contain a single @') |
|
290 | 302 | response.mustcontain('Enter a value 6 characters long or more') |
|
291 | 303 | |
|
292 | 304 | def test_register_err_username(self): |
|
293 | 305 | response = self.app.post( |
|
294 | 306 | route_path('register'), |
|
295 | 307 | { |
|
296 | 308 | 'username': 'error user', |
|
297 | 309 | 'password': 'test12', |
|
298 | 310 | 'password_confirmation': 'test12', |
|
299 | 311 | 'email': 'goodmailm', |
|
300 | 312 | 'firstname': 'test', |
|
301 | 313 | 'lastname': 'test' |
|
302 | 314 | } |
|
303 | 315 | ) |
|
304 | 316 | |
|
305 | 317 | response.mustcontain('An email address must contain a single @') |
|
306 | 318 | response.mustcontain( |
|
307 | 319 | 'Username may only contain ' |
|
308 | 320 | 'alphanumeric characters underscores, ' |
|
309 | 321 | 'periods or dashes and must begin with ' |
|
310 | 322 | 'alphanumeric character') |
|
311 | 323 | |
|
312 | 324 | def test_register_err_case_sensitive(self): |
|
313 | 325 | usr = 'Test_Admin' |
|
314 | 326 | response = self.app.post( |
|
315 | 327 | route_path('register'), |
|
316 | 328 | { |
|
317 | 329 | 'username': usr, |
|
318 | 330 | 'password': 'test12', |
|
319 | 331 | 'password_confirmation': 'test12', |
|
320 | 332 | 'email': 'goodmailm', |
|
321 | 333 | 'firstname': 'test', |
|
322 | 334 | 'lastname': 'test' |
|
323 | 335 | } |
|
324 | 336 | ) |
|
325 | 337 | |
|
326 | 338 | assertr = response.assert_response() |
|
327 | 339 | msg = u'Username "%(username)s" already exists' |
|
328 | 340 | msg = msg % {'username': usr} |
|
329 | 341 | assertr.element_contains('#username+.error-message', msg) |
|
330 | 342 | |
|
331 | 343 | def test_register_special_chars(self): |
|
332 | 344 | response = self.app.post( |
|
333 | 345 | route_path('register'), |
|
334 | 346 | { |
|
335 | 347 | 'username': 'xxxaxn', |
|
336 | 348 | 'password': 'ąćźżąśśśś', |
|
337 | 349 | 'password_confirmation': 'ąćźżąśśśś', |
|
338 | 350 | 'email': 'goodmailm@test.plx', |
|
339 | 351 | 'firstname': 'test', |
|
340 | 352 | 'lastname': 'test' |
|
341 | 353 | } |
|
342 | 354 | ) |
|
343 | 355 | |
|
344 | 356 | msg = u'Invalid characters (non-ascii) in password' |
|
345 | 357 | response.mustcontain(msg) |
|
346 | 358 | |
|
347 | 359 | def test_register_password_mismatch(self): |
|
348 | 360 | response = self.app.post( |
|
349 | 361 | route_path('register'), |
|
350 | 362 | { |
|
351 | 363 | 'username': 'xs', |
|
352 | 364 | 'password': '123qwe', |
|
353 | 365 | 'password_confirmation': 'qwe123', |
|
354 | 366 | 'email': 'goodmailm@test.plxa', |
|
355 | 367 | 'firstname': 'test', |
|
356 | 368 | 'lastname': 'test' |
|
357 | 369 | } |
|
358 | 370 | ) |
|
359 | 371 | msg = u'Passwords do not match' |
|
360 | 372 | response.mustcontain(msg) |
|
361 | 373 | |
|
362 | 374 | def test_register_ok(self): |
|
363 | 375 | username = 'test_regular4' |
|
364 | 376 | password = 'qweqwe' |
|
365 | 377 | email = 'marcin@test.com' |
|
366 | 378 | name = 'testname' |
|
367 | 379 | lastname = 'testlastname' |
|
368 | 380 | |
|
369 | 381 | # this initializes a session |
|
370 | 382 | response = self.app.get(route_path('register')) |
|
371 | 383 | response.mustcontain('Create an Account') |
|
372 | 384 | |
|
373 | 385 | |
|
374 | 386 | response = self.app.post( |
|
375 | 387 | route_path('register'), |
|
376 | 388 | { |
|
377 | 389 | 'username': username, |
|
378 | 390 | 'password': password, |
|
379 | 391 | 'password_confirmation': password, |
|
380 | 392 | 'email': email, |
|
381 | 393 | 'firstname': name, |
|
382 | 394 | 'lastname': lastname, |
|
383 | 395 | 'admin': True |
|
384 | 396 | }, |
|
385 | 397 | status=302 |
|
386 | 398 | ) # This should be overridden |
|
387 | 399 | |
|
388 | 400 | assert_session_flash( |
|
389 | 401 | response, 'You have successfully registered with RhodeCode. You can log-in now.') |
|
390 | 402 | |
|
391 | 403 | ret = Session().query(User).filter( |
|
392 | 404 | User.username == 'test_regular4').one() |
|
393 | 405 | assert ret.username == username |
|
394 | 406 | assert check_password(password, ret.password) |
|
395 | 407 | assert ret.email == email |
|
396 | 408 | assert ret.name == name |
|
397 | 409 | assert ret.lastname == lastname |
|
398 | 410 | assert ret.auth_tokens is not None |
|
399 | 411 | assert not ret.admin |
|
400 | 412 | |
|
401 | 413 | def test_forgot_password_wrong_mail(self): |
|
402 | 414 | bad_email = 'marcin@wrongmail.org' |
|
403 | 415 | # this initializes a session |
|
404 | 416 | self.app.get(route_path('reset_password')) |
|
405 | 417 | |
|
406 | 418 | response = self.app.post( |
|
407 | 419 | route_path('reset_password'), {'email': bad_email, } |
|
408 | 420 | ) |
|
409 | 421 | assert_session_flash(response, |
|
410 | 422 | 'If such email exists, a password reset link was sent to it.') |
|
411 | 423 | |
|
412 | 424 | def test_forgot_password(self, user_util): |
|
413 | 425 | # this initializes a session |
|
414 | 426 | self.app.get(route_path('reset_password')) |
|
415 | 427 | |
|
416 | 428 | user = user_util.create_user() |
|
417 | 429 | user_id = user.user_id |
|
418 | 430 | email = user.email |
|
419 | 431 | |
|
420 | 432 | response = self.app.post(route_path('reset_password'), {'email': email, }) |
|
421 | 433 | |
|
422 | 434 | assert_session_flash(response, |
|
423 | 435 | 'If such email exists, a password reset link was sent to it.') |
|
424 | 436 | |
|
425 | 437 | # BAD KEY |
|
426 | confirm_url = |

438 | confirm_url = route_path('reset_password_confirmation', params={'key': 'badkey'}) | |
|
427 | 439 | response = self.app.get(confirm_url, status=302) |
|
428 | 440 | assert response.location.endswith(route_path('reset_password')) |
|
429 | 441 | assert_session_flash(response, 'Given reset token is invalid') |
|
430 | 442 | |
|
431 | 443 | response.follow() # cleanup flash |
|
432 | 444 | |
|
433 | 445 | # GOOD KEY |
|
434 | 446 | key = UserApiKeys.query()\ |
|
435 | 447 | .filter(UserApiKeys.user_id == user_id)\ |
|
436 | 448 | .filter(UserApiKeys.role == UserApiKeys.ROLE_PASSWORD_RESET)\ |
|
437 | 449 | .first() |
|
438 | 450 | |
|
439 | 451 | assert key |
|
440 | 452 | |
|
441 | 453 | confirm_url = '{}?key={}'.format(route_path('reset_password_confirmation'), key.api_key) |
|
442 | 454 | response = self.app.get(confirm_url) |
|
443 | 455 | assert response.status == '302 Found' |
|
444 | 456 | assert response.location.endswith(route_path('login')) |
|
445 | 457 | |
|
446 | 458 | assert_session_flash( |
|
447 | 459 | response, |
|
448 | 460 | 'Your password reset was successful, ' |
|
449 | 461 | 'a new password has been sent to your email') |
|
450 | 462 | |
|
451 | 463 | response.follow() |
|
452 | 464 | |
|
453 | 465 | def _get_api_whitelist(self, values=None): |
|
454 | 466 | config = {'api_access_controllers_whitelist': values or []} |
|
455 | 467 | return config |
|
456 | 468 | |
|
457 | 469 | @pytest.mark.parametrize("test_name, auth_token", [ |
|
458 | 470 | ('none', None), |
|
459 | 471 | ('empty_string', ''), |
|
460 | 472 | ('fake_number', '123456'), |
|
461 | 473 | ('proper_auth_token', None) |
|
462 | 474 | ]) |
|
463 | 475 | def test_access_not_whitelisted_page_via_auth_token( |
|
464 | 476 | self, test_name, auth_token, user_admin): |
|
465 | 477 | |
|
466 | 478 | whitelist = self._get_api_whitelist([]) |
|
467 | 479 | with mock.patch.dict('rhodecode.CONFIG', whitelist): |
|
468 | 480 | assert [] == whitelist['api_access_controllers_whitelist'] |
|
469 | 481 | if test_name == 'proper_auth_token': |
|
470 | 482 | # use builtin if api_key is None |
|
471 | 483 | auth_token = user_admin.api_key |
|
472 | 484 | |
|
473 | 485 | with fixture.anon_access(False): |
|
474 | 486 | # webtest uses linter to check if response is bytes, |
|
475 | 487 | # and we use memoryview here as a wrapper, quick turn-off |
|
476 | 488 | self.app.lint = False |
|
477 | 489 | |
|
478 | 490 | self.app.get( |
|
479 | 491 | route_path('repo_commit_raw', |
|
480 | 492 | repo_name=HG_REPO, commit_id='tip', |
|
481 | 493 | params=dict(api_key=auth_token)), |
|
482 | 494 | status=302) |
|
483 | 495 | |
|
484 | 496 | @pytest.mark.parametrize("test_name, auth_token, code", [ |
|
485 | 497 | ('none', None, 302), |
|
486 | 498 | ('empty_string', '', 302), |
|
487 | 499 | ('fake_number', '123456', 302), |
|
488 | 500 | ('proper_auth_token', None, 200) |
|
489 | 501 | ]) |
|
490 | 502 | def test_access_whitelisted_page_via_auth_token( |
|
491 | 503 | self, test_name, auth_token, code, user_admin): |
|
492 | 504 | |
|
493 | 505 | whitelist = self._get_api_whitelist(whitelist_view) |
|
494 | 506 | |
|
495 | 507 | with mock.patch.dict('rhodecode.CONFIG', whitelist): |
|
496 | 508 | assert whitelist_view == whitelist['api_access_controllers_whitelist'] |
|
497 | 509 | |
|
498 | 510 | if test_name == 'proper_auth_token': |
|
499 | 511 | auth_token = user_admin.api_key |
|
500 | 512 | assert auth_token |
|
501 | 513 | |
|
502 | 514 | with fixture.anon_access(False): |
|
503 | 515 | # webtest uses linter to check if response is bytes, |
|
504 | 516 | # and we use memoryview here as a wrapper, quick turn-off |
|
505 | 517 | self.app.lint = False |
|
506 | 518 | self.app.get( |
|
507 | 519 | route_path('repo_commit_raw', |
|
508 | 520 | repo_name=HG_REPO, commit_id='tip', |
|
509 | 521 | params=dict(api_key=auth_token)), |
|
510 | 522 | status=code) |
|
511 | 523 | |
|
512 | 524 | @pytest.mark.parametrize("test_name, auth_token, code", [ |
|
513 | 525 | ('proper_auth_token', None, 200), |
|
514 | 526 | ('wrong_auth_token', '123456', 302), |
|
515 | 527 | ]) |
|
516 | 528 | def test_access_whitelisted_page_via_auth_token_bound_to_token( |
|
517 | 529 | self, test_name, auth_token, code, user_admin): |
|
518 | 530 | |
|
519 | 531 | expected_token = auth_token |
|
520 | 532 | if test_name == 'proper_auth_token': |
|
521 | 533 | auth_token = user_admin.api_key |
|
522 | 534 | expected_token = auth_token |
|
523 | 535 | assert auth_token |
|
524 | 536 | |
|
525 | 537 | whitelist = self._get_api_whitelist([ |
|
526 | 538 | 'RepoCommitsView:repo_commit_raw@{}'.format(expected_token)]) |
|
527 | 539 | |
|
528 | 540 | with mock.patch.dict('rhodecode.CONFIG', whitelist): |
|
529 | 541 | |
|
530 | 542 | with fixture.anon_access(False): |
|
531 | 543 | # webtest uses linter to check if response is bytes, |
|
532 | 544 | # and we use memoryview here as a wrapper, quick turn-off |
|
533 | 545 | self.app.lint = False |
|
534 | 546 | |
|
535 | 547 | self.app.get( |
|
536 | 548 | route_path('repo_commit_raw', |
|
537 | 549 | repo_name=HG_REPO, commit_id='tip', |
|
538 | 550 | params=dict(api_key=auth_token)), |
|
539 | 551 | status=code) |
|
540 | 552 | |
|
541 | 553 | def test_access_page_via_extra_auth_token(self): |
|
542 | 554 | whitelist = self._get_api_whitelist(whitelist_view) |
|
543 | 555 | with mock.patch.dict('rhodecode.CONFIG', whitelist): |
|
544 | 556 | assert whitelist_view == \ |
|
545 | 557 | whitelist['api_access_controllers_whitelist'] |
|
546 | 558 | |
|
547 | 559 | new_auth_token = AuthTokenModel().create( |
|
548 | 560 | TEST_USER_ADMIN_LOGIN, 'test') |
|
549 | 561 | Session().commit() |
|
550 | 562 | with fixture.anon_access(False): |
|
551 | 563 | # webtest uses linter to check if response is bytes, |
|
552 | 564 | # and we use memoryview here as a wrapper, quick turn-off |
|
553 | 565 | self.app.lint = False |
|
554 | 566 | self.app.get( |
|
555 | 567 | route_path('repo_commit_raw', |
|
556 | 568 | repo_name=HG_REPO, commit_id='tip', |
|
557 | 569 | params=dict(api_key=new_auth_token.api_key)), |
|
558 | 570 | status=200) |
|
559 | 571 | |
|
560 | 572 | def test_access_page_via_expired_auth_token(self): |
|
561 | 573 | whitelist = self._get_api_whitelist(whitelist_view) |
|
562 | 574 | with mock.patch.dict('rhodecode.CONFIG', whitelist): |
|
563 | 575 | assert whitelist_view == \ |
|
564 | 576 | whitelist['api_access_controllers_whitelist'] |
|
565 | 577 | |
|
566 | 578 | new_auth_token = AuthTokenModel().create( |
|
567 | 579 | TEST_USER_ADMIN_LOGIN, 'test') |
|
568 | 580 | Session().commit() |
|
569 | 581 | # patch the api key and make it expired |
|
570 | 582 | new_auth_token.expires = 0 |
|
571 | 583 | Session().add(new_auth_token) |
|
572 | 584 | Session().commit() |
|
573 | 585 | with fixture.anon_access(False): |
|
574 | 586 | # webtest uses linter to check if response is bytes, |
|
575 | 587 | # and we use memoryview here as a wrapper, quick turn-off |
|
576 | 588 | self.app.lint = False |
|
577 | 589 | self.app.get( |
|
578 | 590 | route_path('repo_commit_raw', |
|
579 | 591 | repo_name=HG_REPO, commit_id='tip', |
|
580 | 592 | params=dict(api_key=new_auth_token.api_key)), |
|
581 | 593 | status=302) |
|
1 | NO CONTENT: modified file | |
The requested commit or file is too big and content was truncated. |
|
|
1 | NO CONTENT: file renamed from rhodecode/apps/ssh_support/lib/ssh_wrapper.py to rhodecode/apps/ssh_support/lib/ssh_wrapper_v1.py | |
|
|
1 | NO CONTENT: file renamed from rhodecode/lib/rc_cache/archive_cache.py to rhodecode/lib/archive_cache/backends/fanout_cache.py | |
|
1 | NO CONTENT: modified file | |
The requested commit or file is too big and content was truncated. Show full diff |
|
1 | NO CONTENT: modified file | |
The requested commit or file is too big and content was truncated. Show full diff |
|
1 | NO CONTENT: modified file | |
The requested commit or file is too big and content was truncated. Show full diff |
|
1 | NO CONTENT: modified file | |
The requested commit or file is too big and content was truncated. Show full diff |
|
1 | NO CONTENT: modified file | |
The requested commit or file is too big and content was truncated. Show full diff |
|
1 | NO CONTENT: modified file | |
The requested commit or file is too big and content was truncated. Show full diff |
|
1 | NO CONTENT: modified file | |
The requested commit or file is too big and content was truncated. Show full diff |
|
1 | NO CONTENT: modified file | |
The requested commit or file is too big and content was truncated. Show full diff |
|
1 | NO CONTENT: modified file | |
The requested commit or file is too big and content was truncated. Show full diff |
|
1 | NO CONTENT: modified file | |
The requested commit or file is too big and content was truncated. Show full diff |
|
1 | NO CONTENT: modified file | |
The requested commit or file is too big and content was truncated. Show full diff |
|
1 | NO CONTENT: modified file | |
The requested commit or file is too big and content was truncated. Show full diff |
|
1 | NO CONTENT: modified file | |
The requested commit or file is too big and content was truncated. Show full diff |
|
1 | NO CONTENT: modified file | |
The requested commit or file is too big and content was truncated. Show full diff |
|
1 | NO CONTENT: modified file | |
The requested commit or file is too big and content was truncated. Show full diff |
|
1 | NO CONTENT: modified file | |
The requested commit or file is too big and content was truncated. Show full diff |
|
1 | NO CONTENT: modified file | |
The requested commit or file is too big and content was truncated. Show full diff |
|
1 | NO CONTENT: modified file | |
The requested commit or file is too big and content was truncated. Show full diff |
|
1 | NO CONTENT: modified file | |
The requested commit or file is too big and content was truncated. Show full diff |
|
1 | NO CONTENT: modified file | |
The requested commit or file is too big and content was truncated. Show full diff |
|
1 | NO CONTENT: modified file | |
The requested commit or file is too big and content was truncated. Show full diff |
|
1 | NO CONTENT: modified file | |
The requested commit or file is too big and content was truncated. Show full diff |
|
1 | NO CONTENT: modified file | |
The requested commit or file is too big and content was truncated. Show full diff |
|
1 | NO CONTENT: modified file | |
The requested commit or file is too big and content was truncated. Show full diff |
|
1 | NO CONTENT: modified file | |
The requested commit or file is too big and content was truncated. Show full diff |
|
1 | NO CONTENT: modified file | |
The requested commit or file is too big and content was truncated. Show full diff |
|
1 | NO CONTENT: modified file | |
The requested commit or file is too big and content was truncated. Show full diff |
|
1 | NO CONTENT: modified file | |
The requested commit or file is too big and content was truncated. Show full diff |
|
1 | NO CONTENT: modified file | |
The requested commit or file is too big and content was truncated. Show full diff |
|
1 | NO CONTENT: modified file | |
The requested commit or file is too big and content was truncated. Show full diff |
|
1 | NO CONTENT: modified file | |
The requested commit or file is too big and content was truncated. Show full diff |
|
1 | NO CONTENT: modified file | |
The requested commit or file is too big and content was truncated. Show full diff |
|
1 | NO CONTENT: modified file | |
The requested commit or file is too big and content was truncated. Show full diff |
|
1 | NO CONTENT: modified file | |
The requested commit or file is too big and content was truncated. Show full diff |
|
1 | NO CONTENT: modified file | |
The requested commit or file is too big and content was truncated. Show full diff |
|
1 | NO CONTENT: modified file | |
The requested commit or file is too big and content was truncated. Show full diff |
|
1 | NO CONTENT: modified file | |
The requested commit or file is too big and content was truncated. Show full diff |
|
1 | NO CONTENT: modified file | |
The requested commit or file is too big and content was truncated. Show full diff |
|
1 | NO CONTENT: modified file | |
The requested commit or file is too big and content was truncated. Show full diff |
|
1 | NO CONTENT: modified file | |
The requested commit or file is too big and content was truncated. Show full diff |
|
1 | NO CONTENT: modified file | |
The requested commit or file is too big and content was truncated. Show full diff |
|
1 | NO CONTENT: modified file | |
The requested commit or file is too big and content was truncated. Show full diff |
|
1 | NO CONTENT: modified file | |
The requested commit or file is too big and content was truncated. Show full diff |
|
1 | NO CONTENT: modified file | |
The requested commit or file is too big and content was truncated. Show full diff |
|
1 | NO CONTENT: modified file | |
The requested commit or file is too big and content was truncated. Show full diff |
|
1 | NO CONTENT: modified file | |
The requested commit or file is too big and content was truncated. Show full diff |
|
1 | NO CONTENT: modified file | |
The requested commit or file is too big and content was truncated. Show full diff |
|
1 | NO CONTENT: modified file | |
The requested commit or file is too big and content was truncated. Show full diff |
|
1 | NO CONTENT: modified file | |
The requested commit or file is too big and content was truncated. Show full diff |
|
1 | NO CONTENT: modified file | |
The requested commit or file is too big and content was truncated. Show full diff |
|
1 | NO CONTENT: modified file | |
The requested commit or file is too big and content was truncated. Show full diff |
|
1 | NO CONTENT: modified file | |
The requested commit or file is too big and content was truncated. Show full diff |
|
1 | NO CONTENT: modified file | |
The requested commit or file is too big and content was truncated. Show full diff |
|
1 | NO CONTENT: modified file | |
The requested commit or file is too big and content was truncated. Show full diff |
|
1 | NO CONTENT: modified file | |
The requested commit or file is too big and content was truncated. Show full diff |
|
1 | NO CONTENT: modified file | |
The requested commit or file is too big and content was truncated. Show full diff |
|
1 | NO CONTENT: modified file | |
The requested commit or file is too big and content was truncated. Show full diff |
|
1 | NO CONTENT: modified file | |
The requested commit or file is too big and content was truncated. Show full diff |
|
1 | NO CONTENT: modified file | |
The requested commit or file is too big and content was truncated. Show full diff |
|
1 | NO CONTENT: modified file | |
The requested commit or file is too big and content was truncated. Show full diff |
|
1 | NO CONTENT: modified file | |
The requested commit or file is too big and content was truncated. Show full diff |
|
1 | NO CONTENT: modified file | |
The requested commit or file is too big and content was truncated. Show full diff |
|
1 | NO CONTENT: modified file | |
The requested commit or file is too big and content was truncated. Show full diff |
|
1 | NO CONTENT: modified file | |
The requested commit or file is too big and content was truncated. Show full diff |
|
1 | NO CONTENT: modified file | |
The requested commit or file is too big and content was truncated. Show full diff |
|
1 | NO CONTENT: modified file | |
The requested commit or file is too big and content was truncated. Show full diff |
|
1 | NO CONTENT: modified file | |
The requested commit or file is too big and content was truncated. Show full diff |
|
1 | NO CONTENT: modified file | |
The requested commit or file is too big and content was truncated. Show full diff |
|
1 | NO CONTENT: modified file | |
The requested commit or file is too big and content was truncated. Show full diff |
|
1 | NO CONTENT: modified file | |
The requested commit or file is too big and content was truncated. Show full diff |
|
1 | NO CONTENT: modified file | |
The requested commit or file is too big and content was truncated. Show full diff |
|
1 | NO CONTENT: modified file | |
The requested commit or file is too big and content was truncated. Show full diff |
|
1 | NO CONTENT: file renamed from rhodecode/tests/vcs_operations/test_vcs_operations.py to rhodecode/tests/vcs_operations/test_vcs_operations_git.py | |
The requested commit or file is too big and content was truncated. Show full diff |
|