@@ -0,0 +1,161 @@

.. _config-saml-azure-ref:


SAML 2.0 with Azure Entra ID
----------------------------

**This plugin is available only in EE Edition.**

|RCE| supports SAML 2.0 authentication with the Azure Entra ID provider. This allows
users to log in to RhodeCode via the SSO mechanism of an external identity provider
such as Azure AD. The login can be triggered either by the external IdP, or internally
by clicking the specific authentication button on the log-in page.


Configuration steps
^^^^^^^^^^^^^^^^^^^

To configure Azure Entra ID SAML authentication, use the following steps:

1. From the |RCE| interface, select
   :menuselection:`Admin --> Authentication`
2. Activate the `Azure Entra ID` plugin and select :guilabel:`Save`
3. Go to the newly available menu option called `Azure Entra ID` on the left side.
4. Check the `enabled` check box in the plugin configuration section,
   and fill in the required SAML information and :guilabel:`Save`. For more details,
   see :ref:`config-saml-azure`


.. _config-saml-azure:


Example SAML Azure Entra ID configuration
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

Example configuration for SAML 2.0 with the Azure Entra ID provider:


Enabled
    `True`:

    .. note::
        Enable or disable this authentication plugin.


Auth Cache TTL
    `30`:

    .. note::
        Amount of seconds to cache the authentication and permissions check response call for this plugin.
        Useful for expensive calls like LDAP to improve the performance of the system (0 means disabled).

Debug
    `True`:

    .. note::
        Enable or disable debug mode that shows SAML errors in the RhodeCode logs.


Auth button name
    `Azure Entra ID`:

    .. note::
        Alternative authentication display name. E.g. AzureAuth, CorporateID etc.


Entity ID
    `https://sts.windows.net/APP_ID/`:

    .. note::
        Identity Provider entity/metadata URI, known as the "Microsoft Entra Identifier".
        E.g. https://sts.windows.net/abcd-c655-dcee-aab7-abcd/

SSO URL
    `https://login.microsoftonline.com/APP_ID/saml2`:

    .. note::
        SSO (SingleSignOn) endpoint URL of the IdP. This can be used to initiate login; also known as the Login URL.
        E.g. https://login.microsoftonline.com/abcd-c655-dcee-aab7-abcd/saml2

SLO URL
    `https://login.microsoftonline.com/APP_ID/saml2`:

    .. note::
        SLO (SingleLogout) endpoint URL of the IdP; also known as the Logout URL.
        E.g. https://login.microsoftonline.com/abcd-c655-dcee-aab7-abcd/saml2

x509cert
    `<CERTIFICATE_STRING>`:

    .. note::
        Identity provider public x509 certificate. It will be converted to single-line format without headers.
        Download the raw base64 encoded certificate from the Identity provider and paste it here.

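    A minimal sketch of that single-line conversion, assuming a standard
    PEM-encoded certificate as input (the helper name is illustrative, not
    part of the plugin API):

    .. code-block:: python

        def pem_to_single_line(pem_text: str) -> str:
            # Drop the BEGIN/END CERTIFICATE header lines and join the
            # base64 body into a single line without any whitespace.
            lines = [
                line.strip() for line in pem_text.splitlines()
                if line.strip() and 'CERTIFICATE' not in line
            ]
            return ''.join(lines)
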
SAML Signature
    `sha-256`:

    .. note::
        Type of algorithm to use for verification of the SAML signature on the Identity provider side.

SAML Digest
    `sha-256`:

    .. note::
        Type of algorithm to use for verification of the SAML digest on the Identity provider side.

Service Provider Cert Dir
    `/etc/rhodecode/conf/saml_ssl/`:

    .. note::
        Optional directory to store the service provider certificate and private keys.
        Expected certs for the SP should be stored in this folder as:

        * sp.key      Private Key
        * sp.crt      Public cert
        * sp_new.crt  Future Public cert

        You can also use another cert to sign the metadata of the SP using:

        * metadata.key
        * metadata.crt

        One way to generate such a key/cert pair is sketched right after this list.

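    A hypothetical example of generating a self-signed `sp.key`/`sp.crt` pair
    with the Python ``cryptography`` package (the subject name and output paths
    are illustrative assumptions, not requirements of the plugin):

    .. code-block:: python

        import datetime
        from cryptography import x509
        from cryptography.x509.oid import NameOID
        from cryptography.hazmat.primitives import hashes, serialization
        from cryptography.hazmat.primitives.asymmetric import rsa

        # SP private key (sp.key)
        key = rsa.generate_private_key(public_exponent=65537, key_size=2048)
        name = x509.Name([x509.NameAttribute(NameOID.COMMON_NAME, 'rhodecode-sp')])

        # Self-signed SP certificate (sp.crt), valid for one year
        cert = (
            x509.CertificateBuilder()
            .subject_name(name)
            .issuer_name(name)
            .public_key(key.public_key())
            .serial_number(x509.random_serial_number())
            .not_valid_before(datetime.datetime.utcnow())
            .not_valid_after(datetime.datetime.utcnow() + datetime.timedelta(days=365))
            .sign(key, hashes.SHA256())
        )

        with open('/etc/rhodecode/conf/saml_ssl/sp.key', 'wb') as f:
            f.write(key.private_bytes(
                serialization.Encoding.PEM,
                serialization.PrivateFormat.TraditionalOpenSSL,
                serialization.NoEncryption(),
            ))
        with open('/etc/rhodecode/conf/saml_ssl/sp.crt', 'wb') as f:
            f.write(cert.public_bytes(serialization.Encoding.PEM))
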
Expected NameID Format
    `nameid-format:emailAddress`:

    .. note::
        The format that specifies how the NameID is sent to the service provider.

User ID Attribute
    `user.email`:

    .. note::
        User ID Attribute name. This defines which attribute in the SAML response will be used to link accounts via a unique id.
        Ensure this attribute is actually returned from Azure, e.g. via user.email.

Username Attribute
    `user.username`:

    .. note::
        Username Attribute name. This defines which attribute in the SAML response will map to a username.

Email Attribute
    `user.email`:

    .. note::
        Email Attribute name. This defines which attribute in the SAML response will map to an email address.



Below is an example setup from the Azure administration page that can be used with the above config.

.. image:: ../images/saml-azure-service-provider-example.png
   :alt: Azure SAML setup example
   :scale: 50 %


Below is an example attribute mapping set for the IdP provider, required by the above config.


.. image:: ../images/saml-azure-attributes-example.png
   :alt: Azure SAML setup example
   :scale: 50 %
\ No newline at end of file

NO CONTENT: new file 100644, binary diff hidden

NO CONTENT: new file 100644, binary diff hidden
@@ -0,0 +1,40 @@

|RCE| 5.1.1 |RNS|
-----------------

Release Date
^^^^^^^^^^^^

- 2024-07-23


New Features
^^^^^^^^^^^^



General
^^^^^^^



Security
^^^^^^^^



Performance
^^^^^^^^^^^




Fixes
^^^^^

- Fixed problems with the JS static files build


Upgrade notes
^^^^^^^^^^^^^

- RhodeCode 5.1.1 is an unscheduled bugfix release to address some build issues with the 5.1 images
@@ -0,0 +1,41 @@

|RCE| 5.1.2 |RNS|
-----------------

Release Date
^^^^^^^^^^^^

- 2024-09-12


New Features
^^^^^^^^^^^^



General
^^^^^^^



Security
^^^^^^^^



Performance
^^^^^^^^^^^




Fixes
^^^^^

- Fixed problems with Mercurial authentication after enabling httppostargs.
  Currently this protocol will be disabled until a proper fix is in place


Upgrade notes
^^^^^^^^^^^^^

- RhodeCode 5.1.2 is an unscheduled bugfix release to address some build issues with the 5.1 images
@@ -0,0 +1,55 @@

|RCE| 5.2.0 |RNS|
-----------------

Release Date
^^^^^^^^^^^^

- 2024-10-09


New Features
^^^^^^^^^^^^

- New artifact storage engines allowing S3-based uploads
- Enterprise version only: added a security tab to the admin interface and the possibility to whitelist specific vcs client versions. Some older client versions have known security vulnerabilities; now you can disallow them.
- Enterprise version only: implemented support for Azure SAML authentication


General
^^^^^^^
- Bumped versions of packaging, gunicorn, orjson, zope.interface and some other requirements
- A few tweaks and changes to the saml plugins to allow easier setup
- Configs: allow json log format for gunicorn
- Configs: deprecated the old ssh wrapper command and made v2 the default one
- Make sure commit-caches propagate to parent repo groups
- Configs: moved the git lfs path and the hg largefiles path to the ini file

Security
^^^^^^^^



Performance
^^^^^^^^^^^

- Description escaper for better performance

Fixes
^^^^^

- Email notifications not working properly
- Removed waitress as the default runner
- Fixed an issue with branch permissions
- Ldap: fixed nested groups extraction logic
- Fixed possible db corruption in case of filesystem problems
- Cleanup and improvements to documentation
- Added a Kubernetes deployment section to the documentation
- Added default values for celery result and broker
- Fixed a broken backends function after the python3 migration
- Explicitly disable the mercurial web_push ssl flag to prevent errors about ssl being required
- VCS: fixed problems with locked repos and with branch permissions reporting

Upgrade notes
^^^^^^^^^^^^^

- RhodeCode 5.2.0 is a planned major release featuring Azure SAML, a whitelist for client versions, the S3 artifacts backend and more!
@@ -0,0 +1,46 @@

# Copyright (C) 2010-2024 RhodeCode GmbH
#
# This program is free software: you can redistribute it and/or modify
# it under the terms of the GNU Affero General Public License, version 3
# (only), as published by the Free Software Foundation.
#
# This program is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU General Public License for more details.
#
# You should have received a copy of the GNU Affero General Public License
# along with this program. If not, see <http://www.gnu.org/licenses/>.
#
# This program is dual-licensed. If you wish to learn more about the
# RhodeCode Enterprise Edition, including its added features, Support services,
# and proprietary license terms, please see https://rhodecode.com/licenses/

import logging

from rhodecode.apps._base import BaseAppView
from rhodecode.lib.auth import LoginRequired, HasPermissionAllDecorator

log = logging.getLogger(__name__)


class AdminSecurityView(BaseAppView):

    def load_default_context(self):
        c = self._get_local_tmpl_context()
        return c

    @LoginRequired()
    @HasPermissionAllDecorator('hg.admin')
    def security(self):
        c = self.load_default_context()
        c.active = 'security'
        return self._get_template_context(c)

    @LoginRequired()
    @HasPermissionAllDecorator('hg.admin')
    def admin_security_modify_allowed_vcs_client_versions(self):
        c = self.load_default_context()
        c.active = 'security'
        return self._get_template_context(c)
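
A hypothetical wiring sketch for the view above (the route name, URL pattern,
and template path are illustrative assumptions, not taken from this changeset);
Pyramid applications like RhodeCode typically register such views via
``add_route``/``add_view`` (or ``view_config``) wiring:

    def includeme(config):
        # route for the security admin page (name/pattern assumed)
        config.add_route(
            name='admin_security',
            pattern='/security')
        # bind the view class method to that route
        config.add_view(
            AdminSecurityView,
            attr='security',
            route_name='admin_security',
            request_method='GET',
            renderer='rhodecode:templates/admin/security/security.mako')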
@@ -0,0 +1,269 @@

# Copyright (C) 2016-2023 RhodeCode GmbH
#
# This program is free software: you can redistribute it and/or modify
# it under the terms of the GNU Affero General Public License, version 3
# (only), as published by the Free Software Foundation.
#
# This program is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU General Public License for more details.
#
# You should have received a copy of the GNU Affero General Public License
# along with this program. If not, see <http://www.gnu.org/licenses/>.
#
# This program is dual-licensed. If you wish to learn more about the
# RhodeCode Enterprise Edition, including its added features, Support services,
# and proprietary license terms, please see https://rhodecode.com/licenses/

import os
import fsspec  # noqa
import logging

from rhodecode.lib.ext_json import json

from rhodecode.apps.file_store.utils import sha256_safe, ShardFileReader, get_uid_filename
from rhodecode.apps.file_store.extensions import resolve_extensions
from rhodecode.apps.file_store.exceptions import FileNotAllowedException, FileOverSizeException  # noqa: F401

log = logging.getLogger(__name__)


class BaseShard:

    metadata_suffix: str = '.metadata'
    storage_type: str = ''
    fs = None

    @property
    def storage_medium(self):
        if not self.storage_type:
            raise ValueError('No storage type set for this shard storage_type=""')
        return getattr(self, self.storage_type)

    def __contains__(self, key):
        full_path = self.store_path(key)
        return self.fs.exists(full_path)

    def metadata_convert(self, uid_filename, metadata):
        return metadata

    def get_metadata_filename(self, uid_filename) -> tuple[str, str]:
        metadata_file: str = f'{uid_filename}{self.metadata_suffix}'
        return metadata_file, self.store_path(metadata_file)

    def get_metadata(self, uid_filename, ignore_missing=False) -> dict:
        _metadata_file, metadata_file_path = self.get_metadata_filename(uid_filename)
        if ignore_missing and not self.fs.exists(metadata_file_path):
            return {}

        with self.fs.open(metadata_file_path, 'rb') as f:
            metadata = json.loads(f.read())

        metadata = self.metadata_convert(uid_filename, metadata)
        return metadata

    def _store(self, key: str, uid_key: str, value_reader, max_filesize: int | None = None, metadata: dict | None = None, **kwargs):
        raise NotImplementedError

    def store(self, key: str, uid_key: str, value_reader, max_filesize: int | None = None, metadata: dict | None = None, **kwargs):
        return self._store(key, uid_key, value_reader, max_filesize, metadata, **kwargs)

    def _fetch(self, key, presigned_url_expires: int = 0):
        raise NotImplementedError

    def fetch(self, key, **kwargs) -> tuple[ShardFileReader, dict]:
        return self._fetch(key)

    def _delete(self, key):
        if key not in self:
            log.exception(f'requested key={key} not found in {self}')
            raise KeyError(key)

        metadata = self.get_metadata(key)
        _metadata_file, metadata_file_path = self.get_metadata_filename(key)
        artifact_file_path = metadata['filename_uid_path']
        self.fs.rm(artifact_file_path)
        self.fs.rm(metadata_file_path)

        return 1

    def delete(self, key):
        raise NotImplementedError

    def store_path(self, uid_filename):
        raise NotImplementedError


class BaseFileStoreBackend:
    _shards = tuple()
    _shard_cls = BaseShard
    _config: dict | None = None
    _storage_path: str = ''

    def __init__(self, settings, extension_groups=None):
        self._config = settings
        extension_groups = extension_groups or ['any']
        self.extensions = resolve_extensions([], groups=extension_groups)

    def __contains__(self, key):
        return self.filename_exists(key)

    def __repr__(self):
        return f'<{self.__class__.__name__}(storage={self.storage_path})>'

    @property
    def storage_path(self):
        return self._storage_path

    @classmethod
    def get_shard_index(cls, filename: str, num_shards) -> int:
        # Generate a hash value from the filename
        hash_value = sha256_safe(filename)

        # Convert the hash value to an integer
        hash_int = int(hash_value, 16)

        # Map the hash integer to a shard number between 0 and num_shards - 1
        shard_number = (hash_int % num_shards)

        return shard_number

131 | ||
|
132 | @classmethod | |
|
133 | def apply_counter(cls, counter: int, filename: str) -> str: | |
|
134 | """ | |
|
135 | Apply a counter to the filename. | |
|
136 | ||
|
137 | :param counter: The counter value to apply. | |
|
138 | :param filename: The original filename. | |
|
139 | :return: The modified filename with the counter. | |
|
140 | """ | |
|
141 | name_counted = f'{counter:d}-{filename}' | |
|
142 | return name_counted | |
|
143 | ||
|
144 | def _get_shard(self, key) -> _shard_cls: | |
|
145 | index = self.get_shard_index(key, len(self._shards)) | |
|
146 | shard = self._shards[index] | |
|
147 | return shard | |
|
148 | ||
|
149 | def get_conf(self, key, pop=False): | |
|
150 | if key not in self._config: | |
|
151 | raise ValueError( | |
|
152 | f"No configuration key '{key}', please make sure it exists in filestore config") | |
|
153 | val = self._config[key] | |
|
154 | if pop: | |
|
155 | del self._config[key] | |
|
156 | return val | |
|
157 | ||
|
158 | def filename_allowed(self, filename, extensions=None): | |
|
159 | """Checks if a filename has an allowed extension | |
|
160 | ||
|
161 | :param filename: base name of file | |
|
162 | :param extensions: iterable of extensions (or self.extensions) | |
|
163 | """ | |
|
164 | _, ext = os.path.splitext(filename) | |
|
165 | return self.extension_allowed(ext, extensions) | |
|
166 | ||
|
167 | def extension_allowed(self, ext, extensions=None): | |
|
168 | """ | |
|
169 | Checks if an extension is permitted. Both e.g. ".jpg" and | |
|
170 | "jpg" can be passed in. Extension lookup is case-insensitive. | |
|
171 | ||
|
172 | :param ext: extension to check | |
|
173 | :param extensions: iterable of extensions to validate against (or self.extensions) | |
|
174 | """ | |
|
175 | def normalize_ext(_ext): | |
|
176 | if _ext.startswith('.'): | |
|
177 | _ext = _ext[1:] | |
|
178 | return _ext.lower() | |
|
179 | ||
|
180 | extensions = extensions or self.extensions | |
|
181 | if not extensions: | |
|
182 | return True | |
|
183 | ||
|
184 | ext = normalize_ext(ext) | |
|
185 | ||
|
186 | return ext in [normalize_ext(x) for x in extensions] | |
|
187 | ||
|
188 | def filename_exists(self, uid_filename): | |
|
189 | shard = self._get_shard(uid_filename) | |
|
190 | return uid_filename in shard | |
|
191 | ||
|
192 | def store_path(self, uid_filename): | |
|
193 | """ | |
|
194 | Returns absolute file path of the uid_filename | |
|
195 | """ | |
|
196 | shard = self._get_shard(uid_filename) | |
|
197 | return shard.store_path(uid_filename) | |
|
198 | ||
|
199 | def store_metadata(self, uid_filename): | |
|
200 | shard = self._get_shard(uid_filename) | |
|
201 | return shard.get_metadata_filename(uid_filename) | |
|
202 | ||
|
203 | def store(self, filename, value_reader, extensions=None, metadata=None, max_filesize=None, randomized_name=True, **kwargs): | |
|
204 | extensions = extensions or self.extensions | |
|
205 | ||
|
206 | if not self.filename_allowed(filename, extensions): | |
|
207 | msg = f'filename {filename} does not allow extensions {extensions}' | |
|
208 | raise FileNotAllowedException(msg) | |
|
209 | ||
|
210 | # # TODO: check why we need this setting ? it looks stupid... | |
|
211 | # no_body_seek is used in stream mode importer somehow | |
|
212 | # no_body_seek = kwargs.pop('no_body_seek', False) | |
|
213 | # if no_body_seek: | |
|
214 | # pass | |
|
215 | # else: | |
|
216 | # value_reader.seek(0) | |
|
217 | ||
|
218 | uid_filename = kwargs.pop('uid_filename', None) | |
|
219 | if uid_filename is None: | |
|
220 | uid_filename = get_uid_filename(filename, randomized=randomized_name) | |
|
221 | ||
|
222 | shard = self._get_shard(uid_filename) | |
|
223 | ||
|
224 | return shard.store(filename, uid_filename, value_reader, max_filesize, metadata, **kwargs) | |
|
225 | ||
|
226 | def import_to_store(self, value_reader, org_filename, uid_filename, metadata, **kwargs): | |
|
227 | shard = self._get_shard(uid_filename) | |
|
228 | max_filesize = None | |
|
229 | return shard.store(org_filename, uid_filename, value_reader, max_filesize, metadata, import_mode=True) | |
|
230 | ||
|
231 | def delete(self, uid_filename): | |
|
232 | shard = self._get_shard(uid_filename) | |
|
233 | return shard.delete(uid_filename) | |
|
234 | ||
|
235 | def fetch(self, uid_filename) -> tuple[ShardFileReader, dict]: | |
|
236 | shard = self._get_shard(uid_filename) | |
|
237 | return shard.fetch(uid_filename) | |
|
238 | ||
|
239 | def get_metadata(self, uid_filename, ignore_missing=False) -> dict: | |
|
240 | shard = self._get_shard(uid_filename) | |
|
241 | return shard.get_metadata(uid_filename, ignore_missing=ignore_missing) | |
|
242 | ||
|
243 | def iter_keys(self): | |
|
244 | for shard in self._shards: | |
|
245 | if shard.fs.exists(shard.storage_medium): | |
|
246 | for path, _dirs, _files in shard.fs.walk(shard.storage_medium): | |
|
247 | for key_file_path in _files: | |
|
248 | if key_file_path.endswith(shard.metadata_suffix): | |
|
249 | yield shard, key_file_path | |
|
250 | ||
|
251 | def iter_artifacts(self): | |
|
252 | for shard, key_file in self.iter_keys(): | |
|
253 | json_key = f"{shard.storage_medium}/{key_file}" | |
|
254 | with shard.fs.open(json_key, 'rb') as f: | |
|
255 | yield shard, json.loads(f.read())['filename_uid'] | |
|
256 | ||
|
257 | def get_statistics(self): | |
|
258 | total_files = 0 | |
|
259 | total_size = 0 | |
|
260 | meta = {} | |
|
261 | ||
|
262 | for shard, key_file in self.iter_keys(): | |
|
263 | json_key = f"{shard.storage_medium}/{key_file}" | |
|
264 | with shard.fs.open(json_key, 'rb') as f: | |
|
265 | total_files += 1 | |
|
266 | metadata = json.loads(f.read()) | |
|
267 | total_size += metadata['size'] | |
|
268 | ||
|
269 | return total_files, total_size, meta |
@@ -0,0 +1,183 @@

# Copyright (C) 2016-2023 RhodeCode GmbH
#
# This program is free software: you can redistribute it and/or modify
# it under the terms of the GNU Affero General Public License, version 3
# (only), as published by the Free Software Foundation.
#
# This program is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU General Public License for more details.
#
# You should have received a copy of the GNU Affero General Public License
# along with this program. If not, see <http://www.gnu.org/licenses/>.
#
# This program is dual-licensed. If you wish to learn more about the
# RhodeCode Enterprise Edition, including its added features, Support services,
# and proprietary license terms, please see https://rhodecode.com/licenses/

import os
import hashlib
import functools
import time
import logging

from .. import config_keys
from ..exceptions import FileOverSizeException
from ..backends.base import BaseFileStoreBackend, fsspec, BaseShard, ShardFileReader

from ....lib.ext_json import json

log = logging.getLogger(__name__)


class FileSystemShard(BaseShard):
    METADATA_VER = 'v2'
    BACKEND_TYPE = config_keys.backend_filesystem
    storage_type: str = 'directory'

    def __init__(self, index, directory, directory_folder, fs, **settings):
        self._index: int = index
        self._directory: str = directory
        self._directory_folder: str = directory_folder
        self.fs = fs

    @property
    def directory(self) -> str:
        """Cache directory final path."""
        return os.path.join(self._directory, self._directory_folder)

    def _write_file(self, full_path, iterator, max_filesize, mode='wb'):

        # ensure dir exists
        destination, _ = os.path.split(full_path)
        if not self.fs.exists(destination):
            self.fs.makedirs(destination)

        writer = self.fs.open(full_path, mode)

        digest = hashlib.sha256()
        oversize_cleanup = False
        with writer:
            size = 0
            for chunk in iterator:
                size += len(chunk)
                digest.update(chunk)
                writer.write(chunk)

                if max_filesize and size > max_filesize:
                    oversize_cleanup = True
                    # free up the copied file, and raise exc
                    break

            writer.flush()
            # Get the file descriptor
            fd = writer.fileno()

            # Sync the file descriptor to disk, helps with NFS cases...
            os.fsync(fd)

        if oversize_cleanup:
            self.fs.rm(full_path)
            raise FileOverSizeException(f'given file is over size limit ({max_filesize}): {full_path}')

        sha256 = digest.hexdigest()
        log.debug('written new artifact under %s, sha256: %s', full_path, sha256)
        return size, sha256

    def _store(self, key: str, uid_key, value_reader, max_filesize: int | None = None, metadata: dict | None = None, **kwargs):

        filename = key
        uid_filename = uid_key
        full_path = self.store_path(uid_filename)

        # STORE METADATA
        _metadata = {
            "version": self.METADATA_VER,
            "store_type": self.BACKEND_TYPE,

            "filename": filename,
            "filename_uid_path": full_path,
            "filename_uid": uid_filename,
            "sha256": "",  # NOTE: filled in by reader iteration

            "store_time": time.time(),

            "size": 0
        }

        if metadata:
            if kwargs.pop('import_mode', False):
                # in import mode, we don't need to compute metadata, we just take the old version
                _metadata["import_mode"] = True
            else:
                _metadata.update(metadata)

        read_iterator = iter(functools.partial(value_reader.read, 2**22), b'')
        size, sha256 = self._write_file(full_path, read_iterator, max_filesize)
        _metadata['size'] = size
        _metadata['sha256'] = sha256

        # after storing the artifacts, we write the metadata present
        _metadata_file, metadata_file_path = self.get_metadata_filename(uid_key)

        with self.fs.open(metadata_file_path, 'wb') as f:
            f.write(json.dumps(_metadata))

        return uid_filename, _metadata

    def store_path(self, uid_filename):
        """
        Returns absolute file path of the uid_filename
        """
        return os.path.join(self._directory, self._directory_folder, uid_filename)

    def _fetch(self, key, presigned_url_expires: int = 0):
        if key not in self:
            log.exception(f'requested key={key} not found in {self}')
            raise KeyError(key)

        metadata = self.get_metadata(key)

        file_path = metadata['filename_uid_path']
        if presigned_url_expires and presigned_url_expires > 0:
            metadata['url'] = self.fs.url(file_path, expires=presigned_url_expires)

        return ShardFileReader(self.fs.open(file_path, 'rb')), metadata

    def delete(self, key):
        return self._delete(key)


class FileSystemBackend(BaseFileStoreBackend):
    shard_name: str = 'shard_{:03d}'
    _shard_cls = FileSystemShard

    def __init__(self, settings):
        super().__init__(settings)

        store_dir = self.get_conf(config_keys.filesystem_storage_path)
        directory = os.path.expanduser(store_dir)

        self._directory = directory
        self._storage_path = directory  # common path for all from BaseCache
        self._shard_count = int(self.get_conf(config_keys.filesystem_shards, pop=True))
        if self._shard_count < 1:
            raise ValueError(f'{config_keys.filesystem_shards} must be 1 or more')

        log.debug('Initializing %s file_store instance', self)
        fs = fsspec.filesystem('file')

        if not fs.exists(self._directory):
            fs.makedirs(self._directory, exist_ok=True)

        self._shards = tuple(
            self._shard_cls(
                index=num,
                directory=directory,
                directory_folder=self.shard_name.format(num),
                fs=fs,
                **settings,
            )
            for num in range(self._shard_count)
        )
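
A hypothetical usage sketch for the backend above (the storage path and shard
count are illustrative; real code obtains the backend via
``store_utils.get_filestore_backend()``, as the tests further below show):

    from rhodecode.apps.file_store import config_keys
    from rhodecode.apps.file_store.backends.filesystem import FileSystemBackend

    settings = {
        config_keys.filesystem_storage_path: '/var/opt/rhodecode_data/file_store',
        config_keys.filesystem_shards: 8,  # files are spread over 8 shard dirs
    }
    f_store = FileSystemBackend(settings)

    with open('report.pdf', 'rb') as reader:
        store_fid, metadata = f_store.store('report.pdf', reader)

    # read-after-write: fetch returns a reader plus the stored metadata
    reader, metadata = f_store.fetch(store_fid)
    assert metadata['sha256']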
@@ -0,0 +1,278 @@

# Copyright (C) 2016-2023 RhodeCode GmbH
#
# This program is free software: you can redistribute it and/or modify
# it under the terms of the GNU Affero General Public License, version 3
# (only), as published by the Free Software Foundation.
#
# This program is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU General Public License for more details.
#
# You should have received a copy of the GNU Affero General Public License
# along with this program. If not, see <http://www.gnu.org/licenses/>.
#
# This program is dual-licensed. If you wish to learn more about the
# RhodeCode Enterprise Edition, including its added features, Support services,
# and proprietary license terms, please see https://rhodecode.com/licenses/
import errno
import os
import hashlib
import functools
import time
import logging

from .. import config_keys
from ..exceptions import FileOverSizeException
from ..backends.base import BaseFileStoreBackend, fsspec, BaseShard, ShardFileReader

from ....lib.ext_json import json

log = logging.getLogger(__name__)


class LegacyFileSystemShard(BaseShard):
    # legacy ver
    METADATA_VER = 'v2'
    BACKEND_TYPE = config_keys.backend_legacy_filesystem
    storage_type: str = 'dir_struct'

    # legacy suffix
    metadata_suffix: str = '.meta'

    @classmethod
    def _sub_store_from_filename(cls, filename):
        return filename[:2]

    @classmethod
    def apply_counter(cls, counter, filename):
        name_counted = '%d-%s' % (counter, filename)
        return name_counted

    @classmethod
    def safe_make_dirs(cls, dir_path):
        if not os.path.exists(dir_path):
            try:
                os.makedirs(dir_path)
            except OSError as e:
                if e.errno != errno.EEXIST:
                    raise
        return

    @classmethod
    def resolve_name(cls, name, directory):
        """
        Resolves a unique name and the correct path. If a filename
        for that path already exists then a numeric prefix with values > 0 will be
        added, for example test.jpg -> 1-test.jpg etc. Initially a file gets the 0 prefix.

        :param name: base name of file
        :param directory: absolute directory path
        """

        counter = 0
        while True:
            name_counted = cls.apply_counter(counter, name)

            # sub_store prefix to optimize disk usage, e.g some_path/ab/final_file
            sub_store: str = cls._sub_store_from_filename(name_counted)
            sub_store_path: str = os.path.join(directory, sub_store)
            cls.safe_make_dirs(sub_store_path)

            path = os.path.join(sub_store_path, name_counted)
            if not os.path.exists(path):
                return name_counted, path
            counter += 1

87 | def __init__(self, index, directory, directory_folder, fs, **settings): | |
|
88 | self._index: int = index | |
|
89 | self._directory: str = directory | |
|
90 | self._directory_folder: str = directory_folder | |
|
91 | self.fs = fs | |
|
92 | ||
|
93 | @property | |
|
94 | def dir_struct(self) -> str: | |
|
95 | """Cache directory final path.""" | |
|
96 | return os.path.join(self._directory, '0-') | |
|
97 | ||
|
98 | def _write_file(self, full_path, iterator, max_filesize, mode='wb'): | |
|
99 | ||
|
100 | # ensure dir exists | |
|
101 | destination, _ = os.path.split(full_path) | |
|
102 | if not self.fs.exists(destination): | |
|
103 | self.fs.makedirs(destination) | |
|
104 | ||
|
105 | writer = self.fs.open(full_path, mode) | |
|
106 | ||
|
107 | digest = hashlib.sha256() | |
|
108 | oversize_cleanup = False | |
|
109 | with writer: | |
|
110 | size = 0 | |
|
111 | for chunk in iterator: | |
|
112 | size += len(chunk) | |
|
113 | digest.update(chunk) | |
|
114 | writer.write(chunk) | |
|
115 | ||
|
116 | if max_filesize and size > max_filesize: | |
|
117 | # free up the copied file, and raise exc | |
|
118 | oversize_cleanup = True | |
|
119 | break | |
|
120 | ||
|
121 | writer.flush() | |
|
122 | # Get the file descriptor | |
|
123 | fd = writer.fileno() | |
|
124 | ||
|
125 | # Sync the file descriptor to disk, helps with NFS cases... | |
|
126 | os.fsync(fd) | |
|
127 | ||
|
128 | if oversize_cleanup: | |
|
129 | self.fs.rm(full_path) | |
|
130 | raise FileOverSizeException(f'given file is over size limit ({max_filesize}): {full_path}') | |
|
131 | ||
|
132 | sha256 = digest.hexdigest() | |
|
133 | log.debug('written new artifact under %s, sha256: %s', full_path, sha256) | |
|
134 | return size, sha256 | |
|
135 | ||
|
136 | def _store(self, key: str, uid_key, value_reader, max_filesize: int | None = None, metadata: dict | None = None, **kwargs): | |
|
137 | ||
|
138 | filename = key | |
|
139 | uid_filename = uid_key | |
|
140 | ||
|
141 | # NOTE:, also apply N- Counter... | |
|
142 | uid_filename, full_path = self.resolve_name(uid_filename, self._directory) | |
|
143 | ||
|
144 | # STORE METADATA | |
|
145 | # TODO: make it compatible, and backward proof | |
|
146 | _metadata = { | |
|
147 | "version": self.METADATA_VER, | |
|
148 | ||
|
149 | "filename": filename, | |
|
150 | "filename_uid_path": full_path, | |
|
151 | "filename_uid": uid_filename, | |
|
152 | "sha256": "", # NOTE: filled in by reader iteration | |
|
153 | ||
|
154 | "store_time": time.time(), | |
|
155 | ||
|
156 | "size": 0 | |
|
157 | } | |
|
158 | if metadata: | |
|
159 | _metadata.update(metadata) | |
|
160 | ||
|
161 | read_iterator = iter(functools.partial(value_reader.read, 2**22), b'') | |
|
162 | size, sha256 = self._write_file(full_path, read_iterator, max_filesize) | |
|
163 | _metadata['size'] = size | |
|
164 | _metadata['sha256'] = sha256 | |
|
165 | ||
|
166 | # after storing the artifacts, we write the metadata present | |
|
167 | _metadata_file, metadata_file_path = self.get_metadata_filename(uid_filename) | |
|
168 | ||
|
169 | with self.fs.open(metadata_file_path, 'wb') as f: | |
|
170 | f.write(json.dumps(_metadata)) | |
|
171 | ||
|
172 | return uid_filename, _metadata | |
|
173 | ||
|
174 | def store_path(self, uid_filename): | |
|
175 | """ | |
|
176 | Returns absolute file path of the uid_filename | |
|
177 | """ | |
|
178 | prefix_dir = '' | |
|
179 | if '/' in uid_filename: | |
|
180 | prefix_dir, filename = uid_filename.split('/') | |
|
181 | sub_store = self._sub_store_from_filename(filename) | |
|
182 | else: | |
|
183 | sub_store = self._sub_store_from_filename(uid_filename) | |
|
184 | ||
|
185 | return os.path.join(self._directory, prefix_dir, sub_store, uid_filename) | |
|
186 | ||
|
187 | def metadata_convert(self, uid_filename, metadata): | |
|
188 | # NOTE: backward compat mode here... this is for file created PRE 5.2 system | |
|
189 | if 'meta_ver' in metadata: | |
|
190 | full_path = self.store_path(uid_filename) | |
|
191 | metadata = { | |
|
192 | "_converted": True, | |
|
193 | "_org": metadata, | |
|
194 | "version": self.METADATA_VER, | |
|
195 | "store_type": self.BACKEND_TYPE, | |
|
196 | ||
|
197 | "filename": metadata['filename'], | |
|
198 | "filename_uid_path": full_path, | |
|
199 | "filename_uid": uid_filename, | |
|
200 | "sha256": metadata['sha256'], | |
|
201 | ||
|
202 | "store_time": metadata['time'], | |
|
203 | ||
|
204 | "size": metadata['size'] | |
|
205 | } | |
|
206 | return metadata | |
|
207 | ||
|
208 | def _fetch(self, key, presigned_url_expires: int = 0): | |
|
209 | if key not in self: | |
|
210 | log.exception(f'requested key={key} not found in {self}') | |
|
211 | raise KeyError(key) | |
|
212 | ||
|
213 | metadata = self.get_metadata(key) | |
|
214 | ||
|
215 | file_path = metadata['filename_uid_path'] | |
|
216 | if presigned_url_expires and presigned_url_expires > 0: | |
|
217 | metadata['url'] = self.fs.url(file_path, expires=presigned_url_expires) | |
|
218 | ||
|
219 | return ShardFileReader(self.fs.open(file_path, 'rb')), metadata | |
|
220 | ||
|
221 | def delete(self, key): | |
|
222 | return self._delete(key) | |
|
223 | ||
|
224 | def _delete(self, key): | |
|
225 | if key not in self: | |
|
226 | log.exception(f'requested key={key} not found in {self}') | |
|
227 | raise KeyError(key) | |
|
228 | ||
|
229 | metadata = self.get_metadata(key) | |
|
230 | metadata_file, metadata_file_path = self.get_metadata_filename(key) | |
|
231 | artifact_file_path = metadata['filename_uid_path'] | |
|
232 | self.fs.rm(artifact_file_path) | |
|
233 | self.fs.rm(metadata_file_path) | |
|
234 | ||
|
235 | def get_metadata_filename(self, uid_filename) -> tuple[str, str]: | |
|
236 | ||
|
237 | metadata_file: str = f'{uid_filename}{self.metadata_suffix}' | |
|
238 | uid_path_in_store = self.store_path(uid_filename) | |
|
239 | ||
|
240 | metadata_file_path = f'{uid_path_in_store}{self.metadata_suffix}' | |
|
241 | return metadata_file, metadata_file_path | |
|
242 | ||
|
243 | ||
|
244 | class LegacyFileSystemBackend(BaseFileStoreBackend): | |
|
245 | _shard_cls = LegacyFileSystemShard | |
|
246 | ||
|
247 | def __init__(self, settings): | |
|
248 | super().__init__(settings) | |
|
249 | ||
|
250 | store_dir = self.get_conf(config_keys.legacy_filesystem_storage_path) | |
|
251 | directory = os.path.expanduser(store_dir) | |
|
252 | ||
|
253 | self._directory = directory | |
|
254 | self._storage_path = directory # common path for all from BaseCache | |
|
255 | ||
|
256 | log.debug('Initializing %s file_store instance', self) | |
|
257 | fs = fsspec.filesystem('file') | |
|
258 | ||
|
259 | if not fs.exists(self._directory): | |
|
260 | fs.makedirs(self._directory, exist_ok=True) | |
|
261 | ||
|
262 | # legacy system uses single shard | |
|
263 | self._shards = tuple( | |
|
264 | [ | |
|
265 | self._shard_cls( | |
|
266 | index=0, | |
|
267 | directory=directory, | |
|
268 | directory_folder='', | |
|
269 | fs=fs, | |
|
270 | **settings, | |
|
271 | ) | |
|
272 | ] | |
|
273 | ) | |
|
274 | ||
|
275 | @classmethod | |
|
276 | def get_shard_index(cls, filename: str, num_shards) -> int: | |
|
277 | # legacy filesystem doesn't use shards, and always uses single shard | |
|
278 | return 0 |
@@ -0,0 +1,184 @@

# Copyright (C) 2016-2023 RhodeCode GmbH
#
# This program is free software: you can redistribute it and/or modify
# it under the terms of the GNU Affero General Public License, version 3
# (only), as published by the Free Software Foundation.
#
# This program is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU General Public License for more details.
#
# You should have received a copy of the GNU Affero General Public License
# along with this program. If not, see <http://www.gnu.org/licenses/>.
#
# This program is dual-licensed. If you wish to learn more about the
# RhodeCode Enterprise Edition, including its added features, Support services,
# and proprietary license terms, please see https://rhodecode.com/licenses/

import os
import hashlib
import functools
import time
import logging

from .. import config_keys
from ..exceptions import FileOverSizeException
from ..backends.base import BaseFileStoreBackend, fsspec, BaseShard, ShardFileReader

from ....lib.ext_json import json

log = logging.getLogger(__name__)


class S3Shard(BaseShard):
    METADATA_VER = 'v2'
    BACKEND_TYPE = config_keys.backend_objectstore
    storage_type: str = 'bucket'

    def __init__(self, index, bucket, bucket_folder, fs, **settings):
        self._index: int = index
        self._bucket_main: str = bucket
        self._bucket_folder: str = bucket_folder

        self.fs = fs

    @property
    def bucket(self) -> str:
        """Cache bucket final path."""
        return os.path.join(self._bucket_main, self._bucket_folder)

    def _write_file(self, full_path, iterator, max_filesize, mode='wb'):

        # ensure dir exists
        destination, _ = os.path.split(full_path)
        if not self.fs.exists(destination):
            self.fs.makedirs(destination)

        writer = self.fs.open(full_path, mode)

        digest = hashlib.sha256()
        oversize_cleanup = False
        with writer:
            size = 0
            for chunk in iterator:
                size += len(chunk)
                digest.update(chunk)
                writer.write(chunk)

                if max_filesize and size > max_filesize:
                    oversize_cleanup = True
                    # free up the copied file, and raise exc
                    break

        if oversize_cleanup:
            self.fs.rm(full_path)
            raise FileOverSizeException(f'given file is over size limit ({max_filesize}): {full_path}')

        sha256 = digest.hexdigest()
        log.debug('written new artifact under %s, sha256: %s', full_path, sha256)
        return size, sha256

    def _store(self, key: str, uid_key, value_reader, max_filesize: int | None = None, metadata: dict | None = None, **kwargs):

        filename = key
        uid_filename = uid_key
        full_path = self.store_path(uid_filename)

        # STORE METADATA
        _metadata = {
            "version": self.METADATA_VER,
            "store_type": self.BACKEND_TYPE,

            "filename": filename,
            "filename_uid_path": full_path,
            "filename_uid": uid_filename,
            "sha256": "",  # NOTE: filled in by reader iteration

            "store_time": time.time(),

            "size": 0
        }

        if metadata:
            if kwargs.pop('import_mode', False):
                # in import mode, we don't need to compute metadata, we just take the old version
                _metadata["import_mode"] = True
            else:
                _metadata.update(metadata)

        read_iterator = iter(functools.partial(value_reader.read, 2**22), b'')
        size, sha256 = self._write_file(full_path, read_iterator, max_filesize)
        _metadata['size'] = size
        _metadata['sha256'] = sha256

        # after storing the artifacts, we write the metadata present
        metadata_file, metadata_file_path = self.get_metadata_filename(uid_key)

        with self.fs.open(metadata_file_path, 'wb') as f:
            f.write(json.dumps(_metadata))

        return uid_filename, _metadata

    def store_path(self, uid_filename):
        """
        Returns absolute file path of the uid_filename
        """
        return os.path.join(self._bucket_main, self._bucket_folder, uid_filename)

    def _fetch(self, key, presigned_url_expires: int = 0):
        if key not in self:
            log.exception(f'requested key={key} not found in {self}')
            raise KeyError(key)

        metadata_file, metadata_file_path = self.get_metadata_filename(key)
        with self.fs.open(metadata_file_path, 'rb') as f:
            metadata = json.loads(f.read())

        file_path = metadata['filename_uid_path']
        if presigned_url_expires and presigned_url_expires > 0:
            metadata['url'] = self.fs.url(file_path, expires=presigned_url_expires)

        return ShardFileReader(self.fs.open(file_path, 'rb')), metadata

    def delete(self, key):
        return self._delete(key)


class ObjectStoreBackend(BaseFileStoreBackend):
    shard_name: str = 'shard-{:03d}'
    _shard_cls = S3Shard

    def __init__(self, settings):
        super().__init__(settings)

        self._shard_count = int(self.get_conf(config_keys.objectstore_bucket_shards, pop=True))
        if self._shard_count < 1:
            raise ValueError(f'{config_keys.objectstore_bucket_shards} must be 1 or more')

        self._bucket = settings.pop(config_keys.objectstore_bucket)
        if not self._bucket:
            raise ValueError(f'{config_keys.objectstore_bucket} needs to have a value')

        objectstore_url = self.get_conf(config_keys.objectstore_url)
        key = settings.pop(config_keys.objectstore_key)
        secret = settings.pop(config_keys.objectstore_secret)

        self._storage_path = objectstore_url  # common path for all from BaseCache
        log.debug('Initializing %s file_store instance', self)
        fs = fsspec.filesystem('s3', anon=False, endpoint_url=objectstore_url, key=key, secret=secret)

        # init main bucket
        if not fs.exists(self._bucket):
            fs.mkdir(self._bucket)

        self._shards = tuple(
            self._shard_cls(
                index=num,
                bucket=self._bucket,
                bucket_folder=self.shard_name.format(num),
                fs=fs,
                **settings,
            )
            for num in range(self._shard_count)
        )
@@ -0,0 +1,128 @@

# Copyright (C) 2010-2023 RhodeCode GmbH
#
# This program is free software: you can redistribute it and/or modify
# it under the terms of the GNU Affero General Public License, version 3
# (only), as published by the Free Software Foundation.
#
# This program is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU General Public License for more details.
#
# You should have received a copy of the GNU Affero General Public License
# along with this program. If not, see <http://www.gnu.org/licenses/>.
#
# This program is dual-licensed. If you wish to learn more about the
# RhodeCode Enterprise Edition, including its added features, Support services,
# and proprietary license terms, please see https://rhodecode.com/licenses/
import pytest

from rhodecode.apps import file_store
from rhodecode.apps.file_store import config_keys
from rhodecode.apps.file_store.backends.filesystem_legacy import LegacyFileSystemBackend
from rhodecode.apps.file_store.backends.filesystem import FileSystemBackend
from rhodecode.apps.file_store.backends.objectstore import ObjectStoreBackend
from rhodecode.apps.file_store.exceptions import FileNotAllowedException, FileOverSizeException

from rhodecode.apps.file_store import utils as store_utils
from rhodecode.apps.file_store.tests import random_binary_file, file_store_instance


class TestFileStoreBackends:

    @pytest.mark.parametrize('backend_type, expected_instance', [
        (config_keys.backend_legacy_filesystem, LegacyFileSystemBackend),
        (config_keys.backend_filesystem, FileSystemBackend),
        (config_keys.backend_objectstore, ObjectStoreBackend),
    ])
    def test_get_backend(self, backend_type, expected_instance, ini_settings):
        config = ini_settings
        config[config_keys.backend_type] = backend_type
        f_store = store_utils.get_filestore_backend(config=config, always_init=True)
        assert isinstance(f_store, expected_instance)

    @pytest.mark.parametrize('backend_type, expected_instance', [
        (config_keys.backend_legacy_filesystem, LegacyFileSystemBackend),
        (config_keys.backend_filesystem, FileSystemBackend),
        (config_keys.backend_objectstore, ObjectStoreBackend),
    ])
    def test_store_and_read(self, backend_type, expected_instance, ini_settings, random_binary_file):
        filename, temp_file = random_binary_file
        config = ini_settings
        config[config_keys.backend_type] = backend_type
        f_store = store_utils.get_filestore_backend(config=config, always_init=True)
        metadata = {
            'user_uploaded': {
                'username': 'user1',
                'user_id': 10,
                'ip': '10.20.30.40'
            }
        }
        store_fid, metadata = f_store.store(filename, temp_file, extra_metadata=metadata)
        assert store_fid
        assert metadata

        # read-after-write
        reader, metadata2 = f_store.fetch(store_fid)
        assert reader
        assert metadata2['filename'] == filename

    @pytest.mark.parametrize('backend_type, expected_instance', [
        (config_keys.backend_legacy_filesystem, LegacyFileSystemBackend),
        (config_keys.backend_filesystem, FileSystemBackend),
        (config_keys.backend_objectstore, ObjectStoreBackend),
    ])
    def test_store_file_not_allowed(self, backend_type, expected_instance, ini_settings, random_binary_file):
        filename, temp_file = random_binary_file
        config = ini_settings
        config[config_keys.backend_type] = backend_type
        f_store = store_utils.get_filestore_backend(config=config, always_init=True)
        with pytest.raises(FileNotAllowedException):
            f_store.store('notallowed.exe', temp_file, extensions=['.txt'])

    @pytest.mark.parametrize('backend_type, expected_instance', [
        (config_keys.backend_legacy_filesystem, LegacyFileSystemBackend),
        (config_keys.backend_filesystem, FileSystemBackend),
        (config_keys.backend_objectstore, ObjectStoreBackend),
    ])
    def test_store_file_over_size(self, backend_type, expected_instance, ini_settings, random_binary_file):
        filename, temp_file = random_binary_file
        config = ini_settings
        config[config_keys.backend_type] = backend_type
        f_store = store_utils.get_filestore_backend(config=config, always_init=True)
        with pytest.raises(FileOverSizeException):
            f_store.store('toobig.exe', temp_file, extensions=['.exe'], max_filesize=124)

    @pytest.mark.parametrize('backend_type, expected_instance, extra_conf', [
        (config_keys.backend_legacy_filesystem, LegacyFileSystemBackend, {}),
        (config_keys.backend_filesystem, FileSystemBackend, {config_keys.filesystem_storage_path: '/tmp/test-fs-store'}),
        (config_keys.backend_objectstore, ObjectStoreBackend, {config_keys.objectstore_bucket: 'test-bucket'}),
    ])
    def test_store_stats_and_keys(self, backend_type, expected_instance, extra_conf, ini_settings, random_binary_file):
        config = ini_settings
        config[config_keys.backend_type] = backend_type
        config.update(extra_conf)

        f_store = store_utils.get_filestore_backend(config=config, always_init=True)

        # purge storage before running
        for shard, k in f_store.iter_artifacts():
            f_store.delete(k)

        for i in range(10):
            filename, temp_file = random_binary_file

            metadata = {
                'user_uploaded': {
                    'username': 'user1',
                    'user_id': 10,
                    'ip': '10.20.30.40'
                }
            }
            store_fid, metadata = f_store.store(filename, temp_file, extra_metadata=metadata)
            assert store_fid
            assert metadata

        cnt, size, meta = f_store.get_statistics()
        assert cnt == 10
        assert 10 == len(list(f_store.iter_keys()))
@@ -0,0 +1,52 b'' | |||
|
1 | # Copyright (C) 2010-2023 RhodeCode GmbH | |
|
2 | # | |
|
3 | # This program is free software: you can redistribute it and/or modify | |
|
4 | # it under the terms of the GNU Affero General Public License, version 3 | |
|
5 | # (only), as published by the Free Software Foundation. | |
|
6 | # | |
|
7 | # This program is distributed in the hope that it will be useful, | |
|
8 | # but WITHOUT ANY WARRANTY; without even the implied warranty of | |
|
9 | # MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the | |
|
10 | # GNU General Public License for more details. | |
|
11 | # | |
|
12 | # You should have received a copy of the GNU Affero General Public License | |
|
13 | # along with this program. If not, see <http://www.gnu.org/licenses/>. | |
|
14 | # | |
|
15 | # This program is dual-licensed. If you wish to learn more about the | |
|
16 | # RhodeCode Enterprise Edition, including its added features, Support services, | |
|
17 | # and proprietary license terms, please see https://rhodecode.com/licenses/ | |
|
18 | import pytest | |
|
19 | ||
|
20 | from rhodecode.apps.file_store import utils as store_utils | |
|
21 | from rhodecode.apps.file_store import config_keys | |
|
22 | from rhodecode.apps.file_store.tests import generate_random_filename | |
|
23 | ||
|
24 | ||
|
25 | @pytest.fixture() | |
|
26 | def file_store_filesystem_instance(ini_settings): | |
|
27 | config = ini_settings | |
|
28 | config[config_keys.backend_type] = config_keys.backend_filesystem | |
|
29 | f_store = store_utils.get_filestore_backend(config=config, always_init=True) | |
|
30 | return f_store | |
|
31 | ||
|
32 | ||
|
33 | class TestFileStoreFileSystemBackend: | |
|
34 | ||
|
35 | @pytest.mark.parametrize('filename', [generate_random_filename() for _ in range(10)]) | |
|
36 | def test_get_shard_number(self, filename, file_store_filesystem_instance): | |
|
37 | shard_number = file_store_filesystem_instance.get_shard_index(filename, len(file_store_filesystem_instance._shards)) | |
|
38 | # Check that the shard index is in the range [0, number of shards) | |
|
39 | assert 0 <= shard_number < len(file_store_filesystem_instance._shards) | |
|
40 | ||
|
41 | @pytest.mark.parametrize('filename, expected_shard_num', [ | |
|
42 | ('my-name-1', 3), | |
|
43 | ('my-name-2', 2), | |
|
44 | ('my-name-3', 4), | |
|
45 | ('my-name-4', 1), | |
|
46 | ||
|
47 | ('rhodecode-enterprise-ce', 5), | |
|
48 | ('rhodecode-enterprise-ee', 6), | |
|
49 | ]) | |
|
50 | def test_get_shard_number_consistency(self, filename, expected_shard_num, file_store_filesystem_instance): | |
|
51 | shard_number = file_store_filesystem_instance.get_shard_index(filename, len(file_store_filesystem_instance._shards)) | |
|
52 | assert expected_shard_num == shard_number |
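
The consistency test above pins concrete shard numbers, which implies ``get_shard_index`` is a pure function of the filename and the shard count. A minimal sketch of such a function follows; the hashing scheme RhodeCode actually uses is not shown in this diff, so the concrete values the test expects (e.g. ``my-name-1`` -> 3) are not reproduced by this sketch.

.. code-block:: python

    import hashlib

    def get_shard_index(filename: str, num_shards: int) -> int:
        # deterministic: the same filename always lands on the same shard,
        # and a cryptographic hash spreads names roughly evenly
        digest = hashlib.sha256(filename.encode('utf8')).hexdigest()
        return int(digest, 16) % num_shards

    # same input, same shard -- the property the parametrized test relies on
    assert get_shard_index('rhodecode-enterprise-ce', 8) == get_shard_index('rhodecode-enterprise-ce', 8)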
@@ -0,0 +1,17 b'' | |||
|
1 | # Copyright (C) 2010-2023 RhodeCode GmbH | |
|
2 | # | |
|
3 | # This program is free software: you can redistribute it and/or modify | |
|
4 | # it under the terms of the GNU Affero General Public License, version 3 | |
|
5 | # (only), as published by the Free Software Foundation. | |
|
6 | # | |
|
7 | # This program is distributed in the hope that it will be useful, | |
|
8 | # but WITHOUT ANY WARRANTY; without even the implied warranty of | |
|
9 | # MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the | |
|
10 | # GNU General Public License for more details. | |
|
11 | # | |
|
12 | # You should have received a copy of the GNU Affero General Public License | |
|
13 | # along with this program. If not, see <http://www.gnu.org/licenses/>. | |
|
14 | # | |
|
15 | # This program is dual-licensed. If you wish to learn more about the | |
|
16 | # RhodeCode Enterprise Edition, including its added features, Support services, | |
|
17 | # and proprietary license terms, please see https://rhodecode.com/licenses/ No newline at end of file |
@@ -0,0 +1,52 b'' | |||
|
1 | # Copyright (C) 2010-2023 RhodeCode GmbH | |
|
2 | # | |
|
3 | # This program is free software: you can redistribute it and/or modify | |
|
4 | # it under the terms of the GNU Affero General Public License, version 3 | |
|
5 | # (only), as published by the Free Software Foundation. | |
|
6 | # | |
|
7 | # This program is distributed in the hope that it will be useful, | |
|
8 | # but WITHOUT ANY WARRANTY; without even the implied warranty of | |
|
9 | # MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the | |
|
10 | # GNU General Public License for more details. | |
|
11 | # | |
|
12 | # You should have received a copy of the GNU Affero General Public License | |
|
13 | # along with this program. If not, see <http://www.gnu.org/licenses/>. | |
|
14 | # | |
|
15 | # This program is dual-licensed. If you wish to learn more about the | |
|
16 | # RhodeCode Enterprise Edition, including its added features, Support services, | |
|
17 | # and proprietary license terms, please see https://rhodecode.com/licenses/ | |
|
18 | import pytest | |
|
19 | ||
|
20 | from rhodecode.apps.file_store import utils as store_utils | |
|
21 | from rhodecode.apps.file_store import config_keys | |
|
22 | from rhodecode.apps.file_store.tests import generate_random_filename | |
|
23 | ||
|
24 | ||
|
25 | @pytest.fixture() | |
|
26 | def file_store_legacy_instance(ini_settings): | |
|
27 | config = ini_settings | |
|
28 | config[config_keys.backend_type] = config_keys.backend_legacy_filesystem | |
|
29 | f_store = store_utils.get_filestore_backend(config=config, always_init=True) | |
|
30 | return f_store | |
|
31 | ||
|
32 | ||
|
33 | class TestFileStoreLegacyBackend: | |
|
34 | ||
|
35 | @pytest.mark.parametrize('filename', [generate_random_filename() for _ in range(10)]) | |
|
36 | def test_get_shard_number(self, filename, file_store_legacy_instance): | |
|
37 | shard_number = file_store_legacy_instance.get_shard_index(filename, len(file_store_legacy_instance._shards)) | |
|
38 | # Check that the shard number is always 0; the legacy filesystem store does not use shards | |
|
39 | assert shard_number == 0 | |
|
40 | ||
|
41 | @pytest.mark.parametrize('filename, expected_shard_num', [ | |
|
42 | ('my-name-1', 0), | |
|
43 | ('my-name-2', 0), | |
|
44 | ('my-name-3', 0), | |
|
45 | ('my-name-4', 0), | |
|
46 | ||
|
47 | ('rhodecode-enterprise-ce', 0), | |
|
48 | ('rhodecode-enterprise-ee', 0), | |
|
49 | ]) | |
|
50 | def test_get_shard_number_consistency(self, filename, expected_shard_num, file_store_legacy_instance): | |
|
51 | shard_number = file_store_legacy_instance.get_shard_index(filename, len(file_store_legacy_instance._shards)) | |
|
52 | assert expected_shard_num == shard_number |
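
A hypothetical sketch of how a legacy backend can opt out of sharding, which is the behaviour the assertions above verify: every filename maps to shard 0. The real ``LegacyFileSystemBackend`` may differ in detail.

.. code-block:: python

    class LegacyShardingMixin:
        """Hypothetical mixin; not RhodeCode's actual implementation."""

        def get_shard_index(self, filename: str, num_shards: int) -> int:
            # the legacy flat filesystem layout keeps everything in one place
            return 0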
@@ -0,0 +1,52 b'' | |||
|
1 | # Copyright (C) 2010-2023 RhodeCode GmbH | |
|
2 | # | |
|
3 | # This program is free software: you can redistribute it and/or modify | |
|
4 | # it under the terms of the GNU Affero General Public License, version 3 | |
|
5 | # (only), as published by the Free Software Foundation. | |
|
6 | # | |
|
7 | # This program is distributed in the hope that it will be useful, | |
|
8 | # but WITHOUT ANY WARRANTY; without even the implied warranty of | |
|
9 | # MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the | |
|
10 | # GNU General Public License for more details. | |
|
11 | # | |
|
12 | # You should have received a copy of the GNU Affero General Public License | |
|
13 | # along with this program. If not, see <http://www.gnu.org/licenses/>. | |
|
14 | # | |
|
15 | # This program is dual-licensed. If you wish to learn more about the | |
|
16 | # RhodeCode Enterprise Edition, including its added features, Support services, | |
|
17 | # and proprietary license terms, please see https://rhodecode.com/licenses/ | |
|
18 | import pytest | |
|
19 | ||
|
20 | from rhodecode.apps.file_store import utils as store_utils | |
|
21 | from rhodecode.apps.file_store import config_keys | |
|
22 | from rhodecode.apps.file_store.tests import generate_random_filename | |
|
23 | ||
|
24 | ||
|
25 | @pytest.fixture() | |
|
26 | def file_store_objectstore_instance(ini_settings): | |
|
27 | config = ini_settings | |
|
28 | config[config_keys.backend_type] = config_keys.backend_objectstore | |
|
29 | f_store = store_utils.get_filestore_backend(config=config, always_init=True) | |
|
30 | return f_store | |
|
31 | ||
|
32 | ||
|
33 | class TestFileStoreObjectStoreBackend: | |
|
34 | ||
|
35 | @pytest.mark.parametrize('filename', [generate_random_filename() for _ in range(10)]) | |
|
36 | def test_get_shard_number(self, filename, file_store_objectstore_instance): | |
|
37 | shard_number = file_store_objectstore_instance.get_shard_index(filename, len(file_store_objectstore_instance._shards)) | |
|
38 | # Check that the shard index is in the range [0, number of shards) | |
|
39 | assert 0 <= shard_number < len(file_store_objectstore_instance._shards) | |
|
40 | ||
|
41 | @pytest.mark.parametrize('filename, expected_shard_num', [ | |
|
42 | ('my-name-1', 3), | |
|
43 | ('my-name-2', 2), | |
|
44 | ('my-name-3', 4), | |
|
45 | ('my-name-4', 1), | |
|
46 | ||
|
47 | ('rhodecode-enterprise-ce', 5), | |
|
48 | ('rhodecode-enterprise-ee', 6), | |
|
49 | ]) | |
|
50 | def test_get_shard_number_consistency(self, filename, expected_shard_num, file_store_objectstore_instance): | |
|
51 | shard_number = file_store_objectstore_instance.get_shard_index(filename, len(file_store_objectstore_instance._shards)) | |
|
52 | assert expected_shard_num == shard_number |
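
The objectstore backend satisfies the same consistency expectations as the filesystem backend. Combined with the sharded bucket layout described in the .ini later in this commit (objects stored under ``<bucket>/shard-N``), a hedged sketch of the index-to-location mapping could look like this; ``get_shard_index`` is the sketch from above, and the path scheme is taken from the .ini comments.

.. code-block:: python

    def shard_bucket_path(bucket: str, filename: str, num_shards: int = 8) -> str:
        # returns e.g. 'rhodecode-file-store/shard-5' for a name hashing to shard 5
        shard = get_shard_index(filename, num_shards)
        return f'{bucket}/shard-{shard}'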
|
1 | NO CONTENT: new file 100644 (content truncated) | |
|
1 | NO CONTENT: new file 100644 (content truncated) | |
|
1 | NO CONTENT: new file 100644 (content truncated) | |
@@ -1,5 +1,5 b'' | |||
|
1 | 1 | [bumpversion] |
|
2 | current_version = 5. | |
|
2 | current_version = 5.2.0 | |
|
3 | 3 | message = release: Bump version {current_version} to {new_version} |
|
4 | 4 | |
|
5 | 5 | [bumpversion:file:rhodecode/VERSION] |
@@ -1,71 +1,71 b'' | |||
|
1 | 1 | syntax: glob |
|
2 | 2 | |
|
3 | 3 | *.egg |
|
4 | 4 | *.egg-info |
|
5 | 5 | *.idea |
|
6 | 6 | *.orig |
|
7 | 7 | *.pyc |
|
8 | 8 | *.sqlite-journal |
|
9 | 9 | *.swp |
|
10 | 10 | *.tox |
|
11 | 11 | *.DS_Store* |
|
12 | 12 | rhodecode/public/js/src/components/**/*.css |
|
13 | 13 | |
|
14 | 14 | syntax: regexp |
|
15 | 15 | |
|
16 | 16 | #.filename |
|
17 | 17 | ^\.settings$ |
|
18 | 18 | ^\.project$ |
|
19 | 19 | ^\.pydevproject$ |
|
20 | 20 | ^\.coverage$ |
|
21 | 21 | ^\.cache.*$ |
|
22 | 22 | ^\.ruff_cache.*$ |
|
23 | 23 | ^\.rhodecode$ |
|
24 | 24 | |
|
25 | 25 | ^rcextensions |
|
26 | 26 | ^.dev |
|
27 | 27 | ^._dev |
|
28 | 28 | ^build/ |
|
29 | 29 | ^coverage\.xml$ |
|
30 | 30 | ^data$ |
|
31 | 31 | ^\.eggs/ |
|
32 | 32 | ^configs/data$ |
|
33 | 33 | ^dev.ini$ |
|
34 | 34 | ^acceptance_tests/dev.*\.ini$ |
|
35 | 35 | ^dist/ |
|
36 | 36 | ^fabfile.py |
|
37 | 37 | ^htmlcov |
|
38 | 38 | ^junit\.xml$ |
|
39 | 39 | ^node_modules/ |
|
40 | 40 | ^node_binaries/ |
|
41 | 41 | ^pylint.log$ |
|
42 | 42 | ^rcextensions/ |
|
43 | 43 | ^result$ |
|
44 | 44 | ^rhodecode/public/css/style.css$ |
|
45 | 45 | ^rhodecode/public/css/style-polymer.css$ |
|
46 | 46 | ^rhodecode/public/css/style-ipython.css$ |
|
47 | 47 | ^rhodecode/public/js/rhodecode-components.html$ |
|
48 | 48 | ^rhodecode/public/js/rhodecode-components.js$ |
|
49 | 49 | ^rhodecode/public/js/scripts.js$ |
|
50 | 50 | ^rhodecode/public/js/scripts.min.js$ |
|
51 | 51 | ^rhodecode/public/js/src/components/root-styles.gen.html$ |
|
52 | 52 | ^rhodecode/public/js/vendors/webcomponentsjs/ |
|
53 | 53 | ^rhodecode\.db$ |
|
54 | 54 | ^rhodecode\.log$ |
|
55 | 55 | ^rhodecode_dev\.log$ |
|
56 | 56 | ^test\.db$ |
|
57 | ||
|
57 | ^venv/ | |
|
58 | 58 | |
|
59 | 59 | # ac-tests |
|
60 | 60 | ^acceptance_tests/\.cache.*$ |
|
61 | 61 | ^acceptance_tests/externals |
|
62 | 62 | ^acceptance_tests/ghostdriver.log$ |
|
63 | 63 | ^acceptance_tests/local(_.+)?\.ini$ |
|
64 | 64 | |
|
65 | 65 | # docs |
|
66 | 66 | ^docs/_build$ |
|
67 | 67 | ^docs/result$ |
|
68 | 68 | ^docs-internal/_build$ |
|
69 | 69 | |
|
70 | 70 | # Cythonized things |
|
71 | 71 | ^rhodecode/.*\.(c|so)$ |
@@ -1,172 +1,158 b'' | |||
|
1 | .DEFAULT_GOAL := help | |
|
2 | ||
|
3 | # Pretty print values cf. https://misc.flogisoft.com/bash/tip_colors_and_formatting | |
|
4 | RESET := \033[0m # Reset all formatting | |
|
5 | GREEN := \033[0;32m # Resets before setting 16b colour (32 -- green) | |
|
6 | YELLOW := \033[0;33m | |
|
7 | ORANGE := \033[0;38;5;208m # Reset then set 256b colour (208 -- orange) | |
|
8 | PEACH := \033[0;38;5;216m | |
|
9 | ||
|
10 | ||
|
11 | ## ---------------------------------------------------------------------------------- ## | |
|
12 | ## ------------------------- Help usage builder ------------------------------------- ## | |
|
13 | ## ---------------------------------------------------------------------------------- ## | |
|
14 | # use '# >>> Build commands' to create section | |
|
15 | # use '# target: target description' to create help for target | |
|
16 | .PHONY: help | |
|
17 | help: | |
|
18 | @echo "Usage:" | |
|
19 | @cat $(MAKEFILE_LIST) | grep -E '^# >>>|^# [A-Za-z0-9_.-]+:' | sed -E 's/^# //' | awk ' \ | |
|
20 | BEGIN { \ | |
|
21 | green="\033[32m"; \ | |
|
22 | yellow="\033[33m"; \ | |
|
23 | reset="\033[0m"; \ | |
|
24 | section=""; \ | |
|
25 | } \ | |
|
26 | /^>>>/ { \ | |
|
27 | section=substr($$0, 5); \ | |
|
28 | printf "\n" green ">>> %s" reset "\n", section; \ | |
|
29 | next; \ | |
|
30 | } \ | |
|
31 | /^([A-Za-z0-9_.-]+):/ { \ | |
|
32 | target=$$1; \ | |
|
33 | gsub(/:$$/, "", target); \ | |
|
34 | description=substr($$0, index($$0, ":") + 2); \ | |
|
35 | if (description == "") { description="-"; } \ | |
|
36 | printf " - " yellow "%-35s" reset " %s\n", target, description; \ | |
|
37 | } \ | |
|
38 | ' | |
|
39 | ||
|
1 | 40 | # required for pushd to work.. |
|
2 | 41 | SHELL = /bin/bash |
|
3 | 42 | |
|
4 | ||
|
5 | # set by: PATH_TO_OUTDATED_PACKAGES=/some/path/outdated_packages.py | |
|
6 | OUTDATED_PACKAGES = ${PATH_TO_OUTDATED_PACKAGES} | |
|
43 | # >>> Tests commands | |
|
7 | 44 | |
|
8 | 45 | .PHONY: clean |
|
9 | ## Cleanup compiled and cache py files | |
|
46 | # clean: Cleanup compiled and cache py files | |
|
10 | 47 | clean: |
|
11 | 48 | make test-clean |
|
12 | 49 | find . -type f \( -iname '*.c' -o -iname '*.pyc' -o -iname '*.so' -o -iname '*.orig' \) -exec rm '{}' ';' |
|
13 | 50 | find . -type d -name "build" -prune -exec rm -rf '{}' ';' |
|
14 | 51 | |
|
15 | 52 | |
|
16 | 53 | .PHONY: test |
|
17 | ## run test-clean and tests | |
|
54 | # test: run test-clean and tests | |
|
18 | 55 | test: |
|
19 | 56 | make test-clean |
|
20 | make test-only | |
|
57 | unset RC_SQLALCHEMY_DB1_URL && unset RC_DB_URL && make test-only | |
|
21 | 58 | |
|
22 | 59 | |
|
23 | 60 | .PHONY: test-clean |
|
24 | ## run test-clean and tests | |
|
61 | # test-clean: run test-clean and tests | |
|
25 | 62 | test-clean: |
|
26 | 63 | rm -rf coverage.xml htmlcov junit.xml pylint.log result |
|
27 | 64 | find . -type d -name "__pycache__" -prune -exec rm -rf '{}' ';' |
|
28 | 65 | find . -type f \( -iname '.coverage.*' \) -exec rm '{}' ';' |
|
29 | 66 | |
|
30 | 67 | |
|
31 | 68 | .PHONY: test-only |
|
32 | ## Run tests only without cleanup | |
|
69 | # test-only: Run tests only without cleanup | |
|
33 | 70 | test-only: |
|
34 | 71 | PYTHONHASHSEED=random \ |
|
35 | 72 | py.test -x -vv -r xw -p no:sugar \ |
|
36 | 73 | --cov-report=term-missing --cov-report=html \ |
|
37 | 74 | --cov=rhodecode rhodecode |
|
38 | 75 | |
|
76 | # >>> Docs commands | |
|
39 | 77 | |
|
40 | 78 | .PHONY: docs |
|
41 | ## build docs | |
|
79 | # docs: build docs | |
|
42 | 80 | docs: |
|
43 | 81 | (cd docs; docker run --rm -v $(PWD):/project --workdir=/project/docs sphinx-doc-build-rc make clean html SPHINXOPTS="-W") |
|
44 | 82 | |
|
45 | 83 | |
|
46 | 84 | .PHONY: docs-clean |
|
47 | ## Cleanup docs | |
|
85 | # docs-clean: Cleanup docs | |
|
48 | 86 | docs-clean: |
|
49 | 87 | (cd docs; docker run --rm -v $(PWD):/project --workdir=/project/docs sphinx-doc-build-rc make clean) |
|
50 | 88 | |
|
51 | 89 | |
|
52 | 90 | .PHONY: docs-cleanup |
|
53 | ## Cleanup docs | |
|
91 | # docs-cleanup: Cleanup docs | |
|
54 | 92 | docs-cleanup: |
|
55 | 93 | (cd docs; docker run --rm -v $(PWD):/project --workdir=/project/docs sphinx-doc-build-rc make cleanup) |
|
56 | 94 | |
|
95 | # >>> Dev commands | |
|
57 | 96 | |
|
58 | 97 | .PHONY: web-build |
|
59 | ## Build JS packages static/js | |
|
98 | # web-build: Build JS packages static/js | |
|
60 | 99 | web-build: |
|
61 | 100 | rm -rf node_modules |
|
62 | 101 | docker run -it --rm -v $(PWD):/project --workdir=/project rhodecode/static-files-build:16 -c "npm install && /project/node_modules/.bin/grunt" |
|
63 | 102 | # run static file check |
|
64 | 103 | ./rhodecode/tests/scripts/static-file-check.sh rhodecode/public/ |
|
65 | 104 | rm -rf node_modules |
|
66 | 105 | |
|
67 | .PHONY: ruff-check | |
|
68 | ## run a ruff analysis | |
|
69 | ruff-check: | |
|
70 | ruff check --ignore F401 --ignore I001 --ignore E402 --ignore E501 --ignore F841 --exclude rhodecode/lib/dbmigrate --exclude .eggs --exclude .dev . | |
|
71 | ||
|
72 | .PHONY: pip-packages | |
|
73 | ## Show outdated packages | |
|
74 | pip-packages: | |
|
75 | python ${OUTDATED_PACKAGES} | |
|
76 | ||
|
77 | ||
|
78 | .PHONY: build | |
|
79 | ## Build sdist/egg | |
|
80 | build: | |
|
81 | python -m build | |
|
82 | ||
|
83 | 106 | |
|
84 | 107 | .PHONY: dev-sh |
|
85 | ## make dev-sh | |
|
108 | # dev-sh: make dev-sh | |
|
86 | 109 | dev-sh: |
|
87 | 110 | sudo echo "deb [trusted=yes] https://apt.fury.io/rsteube/ /" | sudo tee -a "/etc/apt/sources.list.d/fury.list" |
|
88 | 111 | sudo apt-get update |
|
89 | 112 | sudo apt-get install -y zsh carapace-bin |
|
90 | 113 | rm -rf /home/rhodecode/.oh-my-zsh |
|
91 | 114 | curl https://raw.githubusercontent.com/robbyrussell/oh-my-zsh/master/tools/install.sh | sh |
|
92 | 115 | @echo "source <(carapace _carapace)" > /home/rhodecode/.zsrc |
|
93 | 116 | @echo "${RC_DEV_CMD_HELP}" |
|
94 | 117 | @PROMPT='%(?.%F{green}√.%F{red}?%?)%f %B%F{240}%1~%f%b %# ' zsh |
|
95 | 118 | |
|
96 | 119 | |
|
97 | 120 | .PHONY: dev-cleanup |
|
98 | ## Cleanup: pip freeze | grep -v "^-e" | grep -v "@" | xargs pip uninstall -y | |
|
121 | # dev-cleanup: Cleanup: pip freeze | grep -v "^-e" | grep -v "@" | xargs pip uninstall -y | |
|
99 | 122 | dev-cleanup: |
|
100 | 123 | pip freeze | grep -v "^-e" | grep -v "@" | xargs pip uninstall -y |
|
101 | 124 | rm -rf /tmp/* |
|
102 | 125 | |
|
103 | 126 | |
|
104 | 127 | .PHONY: dev-env |
|
105 | ## make dev-env based on the requirements files and install packages in develop mode | |
|
128 | # dev-env: make dev-env based on the requirements files and install packages in develop mode | |
|
106 | 129 | ## Cleanup: pip freeze | grep -v "^-e" | grep -v "@" | xargs pip uninstall -y |
|
107 | 130 | dev-env: |
|
108 | 131 | sudo -u root chown rhodecode:rhodecode /home/rhodecode/.cache/pip/ |
|
109 | 132 | pip install build virtualenv |
|
110 | 133 | pushd ../rhodecode-vcsserver/ && make dev-env && popd |
|
111 | 134 | pip wheel --wheel-dir=/home/rhodecode/.cache/pip/wheels -r requirements.txt -r requirements_rc_tools.txt -r requirements_test.txt -r requirements_debug.txt |
|
112 | 135 | pip install --no-index --find-links=/home/rhodecode/.cache/pip/wheels -r requirements.txt -r requirements_rc_tools.txt -r requirements_test.txt -r requirements_debug.txt |
|
113 | 136 | pip install -e . |
|
114 | 137 | |
|
115 | 138 | |
|
116 | 139 | .PHONY: sh |
|
117 | ## shortcut for make dev-sh dev-env | |
|
140 | # sh: shortcut for make dev-sh dev-env | |
|
118 | 141 | sh: |
|
119 | 142 | make dev-env |
|
120 | 143 | make dev-sh |
|
121 | 144 | |
|
122 | 145 | |
|
123 | 146 | ## Allows changes of workers e.g make dev-srv-g workers=2 |
|
124 | 147 | workers?=1 |
|
125 | 148 | |
|
126 | 149 | .PHONY: dev-srv |
|
127 | ## run gunicorn web server with reloader; use workers=N to set multi-worker mode | |
|
150 | # dev-srv: run gunicorn web server with reloader; use workers=N to set multi-worker mode | |
|
128 | 151 | dev-srv: |
|
129 | 152 | gunicorn --paste=.dev/dev.ini --bind=0.0.0.0:10020 --config=.dev/gunicorn_config.py --timeout=120 --reload --workers=$(workers) |
|
130 | 153 | |
|
131 | ||
|
132 | # Default command on calling make | |
|
133 | .DEFAULT_GOAL := show-help | |
|
154 | .PHONY: ruff-check | |
|
155 | # ruff-check: run a ruff analysis | |
|
156 | ruff-check: | |
|
157 | ruff check --ignore F401 --ignore I001 --ignore E402 --ignore E501 --ignore F841 --exclude rhodecode/lib/dbmigrate --exclude .eggs --exclude .dev . | |
|
134 | 158 |
|
|
135 | .PHONY: show-help | |
|
136 | show-help: | |
|
137 | @echo "$$(tput bold)Available rules:$$(tput sgr0)" | |
|
138 | @echo | |
|
139 | @sed -n -e "/^## / { \ | |
|
140 | h; \ | |
|
141 | s/.*//; \ | |
|
142 | :doc" \ | |
|
143 | -e "H; \ | |
|
144 | n; \ | |
|
145 | s/^## //; \ | |
|
146 | t doc" \ | |
|
147 | -e "s/:.*//; \ | |
|
148 | G; \ | |
|
149 | s/\\n## /---/; \ | |
|
150 | s/\\n/ /g; \ | |
|
151 | p; \ | |
|
152 | }" ${MAKEFILE_LIST} \ | |
|
153 | | LC_ALL='C' sort --ignore-case \ | |
|
154 | | awk -F '---' \ | |
|
155 | -v ncol=$$(tput cols) \ | |
|
156 | -v indent=19 \ | |
|
157 | -v col_on="$$(tput setaf 6)" \ | |
|
158 | -v col_off="$$(tput sgr0)" \ | |
|
159 | '{ \ | |
|
160 | printf "%s%*s%s ", col_on, -indent, $$1, col_off; \ | |
|
161 | n = split($$2, words, " "); \ | |
|
162 | line_length = ncol - indent; \ | |
|
163 | for (i = 1; i <= n; i++) { \ | |
|
164 | line_length -= length(words[i]) + 1; \ | |
|
165 | if (line_length <= 0) { \ | |
|
166 | line_length = ncol - indent - length(words[i]) - 1; \ | |
|
167 | printf "\n%*s ", -indent, " "; \ | |
|
168 | } \ | |
|
169 | printf "%s ", words[i]; \ | |
|
170 | } \ | |
|
171 | printf "\n"; \ | |
|
172 | }' |
@@ -1,856 +1,912 b'' | |||
|
1 | 1 | |
|
2 | 2 | ; ######################################### |
|
3 | 3 | ; RHODECODE COMMUNITY EDITION CONFIGURATION |
|
4 | 4 | ; ######################################### |
|
5 | 5 | |
|
6 | 6 | [DEFAULT] |
|
7 | 7 | ; Debug flag sets all loggers to debug, and enables request tracking |
|
8 | 8 | debug = true |
|
9 | 9 | |
|
10 | 10 | ; ######################################################################## |
|
11 | 11 | ; EMAIL CONFIGURATION |
|
12 | 12 | ; These settings will be used by the RhodeCode mailing system |
|
13 | 13 | ; ######################################################################## |
|
14 | 14 | |
|
15 | 15 | ; prefix all emails subjects with given prefix, helps filtering out emails |
|
16 | 16 | #email_prefix = [RhodeCode] |
|
17 | 17 | |
|
18 | 18 | ; email FROM address all mails will be sent |
|
19 | 19 | #app_email_from = rhodecode-noreply@localhost |
|
20 | 20 | |
|
21 | 21 | #smtp_server = mail.server.com |
|
22 | 22 | #smtp_username = |
|
23 | 23 | #smtp_password = |
|
24 | 24 | #smtp_port = |
|
25 | 25 | #smtp_use_tls = false |
|
26 | 26 | #smtp_use_ssl = true |
|
27 | 27 | |
|
28 | 28 | [server:main] |
|
29 | 29 | ; COMMON HOST/IP CONFIG, This applies mostly to develop setup, |
|
30 | 30 | ; Host port for gunicorn are controlled by gunicorn_conf.py |
|
31 | 31 | host = 127.0.0.1 |
|
32 | 32 | port = 10020 |
|
33 | 33 | |
|
34 | 34 | |
|
35 | 35 | ; ########################### |
|
36 | 36 | ; GUNICORN APPLICATION SERVER |
|
37 | 37 | ; ########################### |
|
38 | 38 | |
|
39 | 39 | ; run with gunicorn --config gunicorn_conf.py --paste rhodecode.ini |
|
40 | 40 | |
|
41 | 41 | ; Module to use, this setting shouldn't be changed |
|
42 | 42 | use = egg:gunicorn#main |
|
43 | 43 | |
|
44 | 44 | ; Prefix middleware for RhodeCode. |
|
45 | 45 | ; recommended when using proxy setup. |
|
46 | 46 | ; allows to set RhodeCode under a prefix in server. |
|
47 | 47 | ; eg https://server.com/custom_prefix. Enable `filter-with =` option below as well. |
|
48 | 48 | ; And set your prefix like: `prefix = /custom_prefix` |
|
49 | 49 | ; be sure to also set beaker.session.cookie_path = /custom_prefix if you need |
|
50 | 50 | ; to make your cookies only work on prefix url |
|
51 | 51 | [filter:proxy-prefix] |
|
52 | 52 | use = egg:PasteDeploy#prefix |
|
53 | 53 | prefix = / |
|
54 | 54 | |
|
55 | 55 | [app:main] |
|
56 | 56 | ; The %(here)s variable will be replaced with the absolute path of parent directory |
|
57 | 57 | ; of this file |
|
58 | 58 | ; Each option in the app:main section can be overridden by an environment variable |
|
59 | 59 | ; |
|
60 | 60 | ;To override an option: |
|
61 | 61 | ; |
|
62 | 62 | ;RC_<KeyName> |
|
63 | 63 | ;Everything should be uppercase, . and - should be replaced by _. |
|
64 | 64 | ;For example, if you have these configuration settings: |
|
65 | 65 | ;rc_cache.repo_object.backend = foo |
|
66 | 66 | ;can be overridden by |
|
67 | 67 | ;export RC_CACHE_REPO_OBJECT_BACKEND=foo |
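
A small sketch of the key-to-variable mapping described above (uppercase, ``.`` and ``-`` replaced by ``_``, prefixed with ``RC_``). Whether keys that already start with ``rc_`` receive a second prefix is not spelled out here; this sketch assumes they do not, matching the ``RC_CACHE_REPO_OBJECT_BACKEND`` example.

.. code-block:: python

    def ini_key_to_env_var(key: str) -> str:
        name = key.upper().replace('.', '_').replace('-', '_')
        # assumption: avoid a double RC_ prefix for keys already starting with rc_
        return name if name.startswith('RC_') else 'RC_' + name

    assert ini_key_to_env_var('rc_cache.repo_object.backend') == 'RC_CACHE_REPO_OBJECT_BACKEND'
    assert ini_key_to_env_var('vcs.hooks.host') == 'RC_VCS_HOOKS_HOST'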
|
68 | 68 | |
|
69 | 69 | use = egg:rhodecode-enterprise-ce |
|
70 | 70 | |
|
71 | 71 | ; enable proxy prefix middleware, defined above |
|
72 | 72 | #filter-with = proxy-prefix |
|
73 | 73 | |
|
74 | 74 | ; ############# |
|
75 | 75 | ; DEBUG OPTIONS |
|
76 | 76 | ; ############# |
|
77 | 77 | |
|
78 | 78 | pyramid.reload_templates = true |
|
79 | 79 | |
|
80 | 80 | # During development we want to have the debug toolbar enabled |
|
81 | 81 | pyramid.includes = |
|
82 | 82 | pyramid_debugtoolbar |
|
83 | 83 | |
|
84 | 84 | debugtoolbar.hosts = 0.0.0.0/0 |
|
85 | 85 | debugtoolbar.exclude_prefixes = |
|
86 | 86 | /css |
|
87 | 87 | /fonts |
|
88 | 88 | /images |
|
89 | 89 | /js |
|
90 | 90 | |
|
91 | 91 | ## RHODECODE PLUGINS ## |
|
92 | 92 | rhodecode.includes = |
|
93 | 93 | rhodecode.api |
|
94 | 94 | |
|
95 | 95 | |
|
96 | 96 | # api prefix url |
|
97 | 97 | rhodecode.api.url = /_admin/api |
|
98 | 98 | |
|
99 | 99 | ; enable debug style page |
|
100 | 100 | debug_style = true |
|
101 | 101 | |
|
102 | 102 | ; ################# |
|
103 | 103 | ; END DEBUG OPTIONS |
|
104 | 104 | ; ################# |
|
105 | 105 | |
|
106 | 106 | ; encryption key used to encrypt social plugin tokens, |
|
107 | 107 | ; remote_urls with credentials etc, if not set it defaults to |
|
108 | 108 | ; `beaker.session.secret` |
|
109 | 109 | #rhodecode.encrypted_values.secret = |
|
110 | 110 | |
|
111 | 111 | ; decryption strict mode (enabled by default). It controls if decryption raises |
|
112 | 112 | ; `SignatureVerificationError` in case of wrong key, or damaged encryption data. |
|
113 | 113 | #rhodecode.encrypted_values.strict = false |
|
114 | 114 | |
|
115 | 115 | ; Pick algorithm for encryption. Either fernet (more secure) or aes (default) |
|
116 | 116 | ; fernet is safer, and we strongly recommend switching to it. |
|
117 | 117 | ; Due to backward compatibility aes is used as default. |
|
118 | 118 | #rhodecode.encrypted_values.algorithm = fernet |
|
119 | 119 | |
|
120 | 120 | ; Return gzipped responses from RhodeCode (static files/application) |
|
121 | 121 | gzip_responses = false |
|
122 | 122 | |
|
123 | 123 | ; Auto-generate javascript routes file on startup |
|
124 | 124 | generate_js_files = false |
|
125 | 125 | |
|
126 | 126 | ; System global default language. |
|
127 | 127 | ; All available languages: en (default), be, de, es, fr, it, ja, pl, pt, ru, zh |
|
128 | 128 | lang = en |
|
129 | 129 | |
|
130 | 130 | ; Perform a full repository scan and import on each server start. |
|
131 | 131 | ; Setting this to true could lead to very long startup times. |
|
132 | 132 | startup.import_repos = false |
|
133 | 133 | |
|
134 | 134 | ; URL at which the application is running. This is used for Bootstrapping |
|
135 | 135 | ; requests in context when no web request is available. Used in ishell, or |
|
136 | 136 | ; SSH calls. Set this for events to receive proper url for SSH calls. |
|
137 | 137 | app.base_url = http://rhodecode.local |
|
138 | 138 | |
|
139 | 139 | ; Host at which the Service API is running. |
|
140 | 140 | app.service_api.host = http://rhodecode.local:10020 |
|
141 | 141 | |
|
142 | 142 | ; Secret for Service API authentication. |
|
143 | 143 | app.service_api.token = |
|
144 | 144 | |
|
145 | 145 | ; Unique application ID. Should be a random unique string for security. |
|
146 | 146 | app_instance_uuid = rc-production |
|
147 | 147 | |
|
148 | 148 | ; Cut off limit for large diffs (size in bytes). If overall diff size on |
|
149 | 149 | ; commit, or pull request exceeds this limit this diff will be displayed |
|
150 | 150 | ; partially. E.g 512000 == 512Kb |
|
151 | 151 | cut_off_limit_diff = 512000 |
|
152 | 152 | |
|
153 | 153 | ; Cut off limit for large files inside diffs (size in bytes). Each individual |
|
154 | 154 | ; file inside diff which exceeds this limit will be displayed partially. |
|
155 | 155 | ; E.g 128000 == 128Kb |
|
156 | 156 | cut_off_limit_file = 128000 |
|
157 | 157 | |
|
158 | 158 | ; Use cached version of vcs repositories everywhere. Recommended to be `true` |
|
159 | 159 | vcs_full_cache = true |
|
160 | 160 | |
|
161 | 161 | ; Force https in RhodeCode, fixes https redirects, assumes it's always https. |
|
162 | 162 | ; Normally this is controlled by proper flags sent from http server such as Nginx or Apache |
|
163 | 163 | force_https = false |
|
164 | 164 | |
|
165 | 165 | ; use Strict-Transport-Security headers |
|
166 | 166 | use_htsts = false |
|
167 | 167 | |
|
168 | 168 | ; Set to true if your repos are exposed using the dumb protocol |
|
169 | 169 | git_update_server_info = false |
|
170 | 170 | |
|
171 | 171 | ; RSS/ATOM feed options |
|
172 | 172 | rss_cut_off_limit = 256000 |
|
173 | 173 | rss_items_per_page = 10 |
|
174 | 174 | rss_include_diff = false |
|
175 | 175 | |
|
176 | 176 | ; gist URL alias, used to create nicer urls for gist. This should be an |
|
177 | 177 | ; url that does rewrites to _admin/gists/{gistid}. |
|
178 | 178 | ; example: http://gist.rhodecode.org/{gistid}. Empty means use the internal |
|
179 | 179 | ; RhodeCode url, ie. http[s]://rhodecode.server/_admin/gists/{gistid} |
|
180 | 180 | gist_alias_url = |
|
181 | 181 | |
|
182 | 182 | ; List of views (using glob pattern syntax) that AUTH TOKENS could be |
|
183 | 183 | ; used for access. |
|
184 | 184 | ; Adding ?auth_token=TOKEN_HASH to the url authenticates this request as if it |
|
185 | 185 | ; came from the logged-in user who owns this authentication token. |
|
186 | 186 | ; Additionally, the @TOKEN syntax can be used to bind the view to a specific |
|
187 | 187 | ; authentication token. Such view would be only accessible when used together |
|
188 | 188 | ; with this authentication token |
|
189 | 189 | ; list of all views can be found under `/_admin/permissions/auth_token_access` |
|
190 | 190 | ; The list should be "," separated and on a single line. |
|
191 | 191 | ; Most common views to enable: |
|
192 | 192 | |
|
193 | 193 | # RepoCommitsView:repo_commit_download |
|
194 | 194 | # RepoCommitsView:repo_commit_patch |
|
195 | 195 | # RepoCommitsView:repo_commit_raw |
|
196 | 196 | # RepoCommitsView:repo_commit_raw@TOKEN |
|
197 | 197 | # RepoFilesView:repo_files_diff |
|
198 | 198 | # RepoFilesView:repo_archivefile |
|
199 | 199 | # RepoFilesView:repo_file_raw |
|
200 | 200 | # GistView:* |
|
201 | 201 | api_access_controllers_whitelist = |
|
202 | 202 | |
|
203 | 203 | ; Default encoding used to convert from and to unicode |
|
204 | 204 | ; can be also a comma separated list of encoding in case of mixed encodings |
|
205 | 205 | default_encoding = UTF-8 |
|
206 | 206 | |
|
207 | 207 | ; instance-id prefix |
|
208 | 208 | ; a prefix key for this instance used for cache invalidation when running |
|
209 | 209 | ; multiple instances of RhodeCode, make sure it's globally unique for |
|
210 | 210 | ; all running RhodeCode instances. Leave empty if you don't use it |
|
211 | 211 | instance_id = |
|
212 | 212 | |
|
213 | 213 | ; Fallback authentication plugin. Set this to a plugin ID to force the usage |
|
214 | 214 | ; of an authentication plugin even if it is disabled by its settings. |
|
215 | 215 | ; This could be useful if you are unable to log in to the system due to broken |
|
216 | 216 | ; authentication settings. Then you can enable e.g. the internal RhodeCode auth |
|
217 | 217 | ; module to log in again and fix the settings. |
|
218 | 218 | ; Available builtin plugin IDs (hash is part of the ID): |
|
219 | 219 | ; egg:rhodecode-enterprise-ce#rhodecode |
|
220 | 220 | ; egg:rhodecode-enterprise-ce#pam |
|
221 | 221 | ; egg:rhodecode-enterprise-ce#ldap |
|
222 | 222 | ; egg:rhodecode-enterprise-ce#jasig_cas |
|
223 | 223 | ; egg:rhodecode-enterprise-ce#headers |
|
224 | 224 | ; egg:rhodecode-enterprise-ce#crowd |
|
225 | 225 | |
|
226 | 226 | #rhodecode.auth_plugin_fallback = egg:rhodecode-enterprise-ce#rhodecode |
|
227 | 227 | |
|
228 | 228 | ; Flag to control loading of legacy plugins in py:/path format |
|
229 | 229 | auth_plugin.import_legacy_plugins = true |
|
230 | 230 | |
|
231 | 231 | ; alternative return HTTP header for failed authentication. Default HTTP |
|
232 | 232 | ; response is 401 HTTPUnauthorized. Currently HG clients have trouble |
|
233 | 233 | ; handling that, causing a series of failed authentication calls. |
|
234 | 234 | ; Set this variable to 403 to return HTTPForbidden, or any other HTTP code |
|
235 | 235 | ; This will be served instead of default 401 on bad authentication |
|
236 | 236 | auth_ret_code = |
|
237 | 237 | |
|
238 | 238 | ; use special detection method when serving auth_ret_code, instead of serving |
|
239 | 239 | ; ret_code directly, use 401 initially (Which triggers credentials prompt) |
|
240 | 240 | ; and then serve auth_ret_code to clients |
|
241 | 241 | auth_ret_code_detection = false |
|
242 | 242 | |
|
243 | 243 | ; locking return code. When repository is locked return this HTTP code. 2XX |
|
244 | 244 | ; codes don't break the transactions while 4XX codes do |
|
245 | 245 | lock_ret_code = 423 |
|
246 | 246 | |
|
247 | 247 | ; Filesystem location where repositories should be stored |
|
248 | 248 | repo_store.path = /var/opt/rhodecode_repo_store |
|
249 | 249 | |
|
250 | 250 | ; allows to setup custom hooks in settings page |
|
251 | 251 | allow_custom_hooks_settings = true |
|
252 | 252 | |
|
253 | 253 | ; Generated license token required for EE edition license. |
|
254 | 254 | ; New generated token value can be found in Admin > settings > license page. |
|
255 | 255 | license_token = |
|
256 | 256 | |
|
257 | 257 | ; This flag hides sensitive information on the license page such as token, and license data |
|
258 | 258 | license.hide_license_info = false |
|
259 | 259 | |
|
260 | ; Import EE license from this license path | |
|
261 | #license.import_path = %(here)s/rhodecode_enterprise.license | |
|
262 | ||
|
263 | ; import license 'if-missing' or 'force' (always override) | |
|
264 | ; if-missing means apply license if it doesn't exist. 'force' option always overrides it | |
|
265 | license.import_path_mode = if-missing | |
|
266 | ||
|
260 | 267 | ; supervisor connection uri, for managing supervisor and logs. |
|
261 | 268 | supervisor.uri = |
|
262 | 269 | |
|
263 | 270 | ; supervisord group name/id we only want this RC instance to handle |
|
264 | 271 | supervisor.group_id = dev |
|
265 | 272 | |
|
266 | 273 | ; Display extended labs settings |
|
267 | 274 | labs_settings_active = true |
|
268 | 275 | |
|
269 | 276 | ; Custom exception store path, defaults to TMPDIR |
|
270 | 277 | ; This is used to store exception from RhodeCode in shared directory |
|
271 | 278 | #exception_tracker.store_path = |
|
272 | 279 | |
|
273 | 280 | ; Send email with exception details when it happens |
|
274 | 281 | #exception_tracker.send_email = false |
|
275 | 282 | |
|
276 | 283 | ; Comma separated list of recipients for exception emails, |
|
277 | 284 | ; e.g admin@rhodecode.com,devops@rhodecode.com |
|
278 | 285 | ; Can be left empty, then emails will be sent to ALL super-admins |
|
279 | 286 | #exception_tracker.send_email_recipients = |
|
280 | 287 | |
|
281 | 288 | ; optional prefix to Add to email Subject |
|
282 | 289 | #exception_tracker.email_prefix = [RHODECODE ERROR] |
|
283 | 290 | |
|
284 | ; File store configuration. This is used to store and serve uploaded files | |
|
285 | file_store.enabled = true | |
|
291 | ; NOTE: this setting IS DEPRECATED: | |
|
292 | ; file_store backend is always enabled | |
|
293 | #file_store.enabled = true | |
|
286 | 294 | |
|
295 | ; NOTE: this setting IS DEPRECATED: | |
|
296 | ; file_store.backend = X -> use `file_store.backend.type = filesystem_v2` instead | |
|
287 | 297 | ; Storage backend, available options are: local |
|
288 | file_store.backend = local | |
|
298 | #file_store.backend = local | |
|
289 | 299 | |
|
300 | ; NOTE: this setting IS DEPRECATED: | |
|
301 | ; file_store.storage_path = X -> use `file_store.filesystem_v2.storage_path = X` instead | |
|
290 | 302 | ; path to store the uploaded binaries and artifacts |
|
291 | file_store.storage_path = /var/opt/rhodecode_data/file_store | |
|
303 | #file_store.storage_path = /var/opt/rhodecode_data/file_store | |
|
304 | ||
|
305 | ; Artifacts file-store is used to store comment attachments and artifact uploads. | |
|
306 | ; file_store backend type: filesystem_v1, filesystem_v2 or objectstore (s3-based) are available as options | |
|
307 | ; filesystem_v1 is backwards compat with pre 5.1 storage changes | |
|
308 | ; new installations should choose filesystem_v2 or objectstore (s3-based), pick filesystem when migrating from | |
|
309 | ; previous installations to keep the artifacts without a need of migration | |
|
310 | #file_store.backend.type = filesystem_v2 | |
|
311 | ||
|
312 | ; filesystem options... | |
|
313 | #file_store.filesystem_v1.storage_path = /var/opt/rhodecode_data/artifacts_file_store | |
|
314 | ||
|
315 | ; filesystem_v2 options... | |
|
316 | #file_store.filesystem_v2.storage_path = /var/opt/rhodecode_data/artifacts_file_store | |
|
317 | #file_store.filesystem_v2.shards = 8 | |
|
292 | 318 | |
|
319 | ; objectstore options... | |
|
320 | ; url for s3 compatible storage that allows to upload artifacts | |
|
321 | ; e.g http://minio:9000 | |
|
322 | #file_store.backend.type = objectstore | |
|
323 | #file_store.objectstore.url = http://s3-minio:9000 | |
|
324 | ||
|
325 | ; a top-level bucket to put all other shards in | |
|
326 | ; objects will be stored in rhodecode-file-store/shard-N based on the bucket_shards number | |
|
327 | #file_store.objectstore.bucket = rhodecode-file-store | |
|
328 | ||
|
329 | ; number of sharded buckets to create to distribute archives across | |
|
330 | ; default is 8 shards | |
|
331 | #file_store.objectstore.bucket_shards = 8 | |
|
332 | ||
|
333 | ; key for s3 auth | |
|
334 | #file_store.objectstore.key = s3admin | |
|
335 | ||
|
336 | ; secret for s3 auth | |
|
337 | #file_store.objectstore.secret = s3secret4 | |
|
338 | ||
|
339 | ;region for s3 storage | |
|
340 | #file_store.objectstore.region = eu-central-1 | |
|
293 | 341 | |
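
Tying these settings back to the tests earlier in this commit: the backend type key selects which store implementation ``get_filestore_backend`` returns. A minimal sketch follows; ``parsed_ini`` stands in for the parsed ``[app:main]`` settings and is hypothetical.

.. code-block:: python

    from rhodecode.apps.file_store import utils as store_utils
    from rhodecode.apps.file_store import config_keys

    config = dict(parsed_ini)  # hypothetical: the parsed [app:main] settings
    # pick the s3-based backend and its top-level bucket, as the tests do
    config[config_keys.backend_type] = config_keys.backend_objectstore
    config[config_keys.objectstore_bucket] = 'rhodecode-file-store'
    f_store = store_utils.get_filestore_backend(config=config, always_init=True)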
|
294 | 342 | ; Redis url to acquire/check generation of archives locks |
|
295 | 343 | archive_cache.locking.url = redis://redis:6379/1 |
|
296 | 344 | |
|
297 | 345 | ; Storage backend, only 'filesystem' and 'objectstore' are available now |
|
298 | 346 | archive_cache.backend.type = filesystem |
|
299 | 347 | |
|
300 | 348 | ; url for s3 compatible storage that allows to upload artifacts |
|
301 | 349 | ; e.g http://minio:9000 |
|
302 | 350 | archive_cache.objectstore.url = http://s3-minio:9000 |
|
303 | 351 | |
|
304 | 352 | ; key for s3 auth |
|
305 | 353 | archive_cache.objectstore.key = key |
|
306 | 354 | |
|
307 | 355 | ; secret for s3 auth |
|
308 | 356 | archive_cache.objectstore.secret = secret |
|
309 | 357 | |
|
310 | 358 | ;region for s3 storage |
|
311 | 359 | archive_cache.objectstore.region = eu-central-1 |
|
312 | 360 | |
|
313 | 361 | ; number of sharded buckets to create to distribute archives across |
|
314 | 362 | ; default is 8 shards |
|
315 | 363 | archive_cache.objectstore.bucket_shards = 8 |
|
316 | 364 | |
|
317 | 365 | ; a top-level bucket to put all other shards in |
|
318 | 366 | ; objects will be stored in rhodecode-archive-cache/shard-N based on the bucket_shards number |
|
319 | 367 | archive_cache.objectstore.bucket = rhodecode-archive-cache |
|
320 | 368 | |
|
321 | 369 | ; if true, this cache will retry fetches up to retry_attempts=N times, waiting retry_backoff seconds between tries |
|
322 | 370 | archive_cache.objectstore.retry = false |
|
323 | 371 | |
|
324 | 372 | ; number of seconds to wait for next try using retry |
|
325 | 373 | archive_cache.objectstore.retry_backoff = 1 |
|
326 | 374 | |
|
327 | 375 | ; how many tries to do for a retry fetch from this backend |
|
328 | 376 | archive_cache.objectstore.retry_attempts = 10 |
|
329 | 377 | |
|
330 | 378 | ; Default is $cache_dir/archive_cache if not set |
|
331 | 379 | ; Generated repo archives will be cached at this location |
|
332 | 380 | ; and served from the cache during subsequent requests for the same archive of |
|
333 | 381 | ; the repository. This path must be shared across filesystems and between |
|
334 | 382 | ; RhodeCode and vcsserver |
|
335 | 383 | archive_cache.filesystem.store_dir = /var/opt/rhodecode_data/archive_cache |
|
336 | 384 | |
|
337 | 385 | ; The limit in GB sets how much data we cache before recycling last used, defaults to 10 gb |
|
338 | 386 | archive_cache.filesystem.cache_size_gb = 1 |
|
339 | 387 | |
|
340 | 388 | ; Eviction policy used to clear out after cache_size_gb limit is reached |
|
341 | 389 | archive_cache.filesystem.eviction_policy = least-recently-stored |
|
342 | 390 | |
|
343 | 391 | ; By default cache uses sharding technique, this specifies how many shards are there |
|
344 | 392 | ; default is 8 shards |
|
345 | 393 | archive_cache.filesystem.cache_shards = 8 |
|
346 | 394 | |
|
347 | 395 | ; if true, this cache will retry fetches up to retry_attempts=N times, waiting retry_backoff seconds between tries |
|
348 | 396 | archive_cache.filesystem.retry = false |
|
349 | 397 | |
|
350 | 398 | ; number of seconds to wait for next try using retry |
|
351 | 399 | archive_cache.filesystem.retry_backoff = 1 |
|
352 | 400 | |
|
353 | 401 | ; how many tries to do for a retry fetch from this backend |
|
354 | 402 | archive_cache.filesystem.retry_attempts = 10 |
|
355 | 403 | |
|
356 | 404 | |
|
357 | 405 | ; ############# |
|
358 | 406 | ; CELERY CONFIG |
|
359 | 407 | ; ############# |
|
360 | 408 | |
|
361 | 409 | ; manually run celery: /path/to/celery worker --task-events --beat --app rhodecode.lib.celerylib.loader --scheduler rhodecode.lib.celerylib.scheduler.RcScheduler --loglevel DEBUG --ini /path/to/rhodecode.ini |
|
362 | 410 | |
|
363 | 411 | use_celery = true |
|
364 | 412 | |
|
365 | 413 | ; path to store schedule database |
|
366 | 414 | #celerybeat-schedule.path = |
|
367 | 415 | |
|
368 | 416 | ; connection url to the message broker (default redis) |
|
369 | 417 | celery.broker_url = redis://redis:6379/8 |
|
370 | 418 | |
|
371 | 419 | ; results backend to get results for (default redis) |
|
372 | 420 | celery.result_backend = redis://redis:6379/8 |
|
373 | 421 | |
|
374 | 422 | ; rabbitmq example |
|
375 | 423 | #celery.broker_url = amqp://rabbitmq:qweqwe@localhost:5672/rabbitmqhost |
|
376 | 424 | |
|
377 | 425 | ; maximum tasks to execute before worker restart |
|
378 | 426 | celery.max_tasks_per_child = 20 |
|
379 | 427 | |
|
380 | 428 | ; tasks will never be sent to the queue, but executed locally instead. |
|
381 | 429 | celery.task_always_eager = false |
|
382 | 430 | |
|
383 | 431 | ; ############# |
|
384 | 432 | ; DOGPILE CACHE |
|
385 | 433 | ; ############# |
|
386 | 434 | |
|
387 | 435 | ; Default cache dir for caches. Putting this into a ramdisk can boost performance. |
|
388 | 436 | ; eg. /tmpfs/data_ramdisk, however this directory might require large amount of space |
|
389 | 437 | cache_dir = /var/opt/rhodecode_data |
|
390 | 438 | |
|
391 | 439 | ; ********************************************* |
|
392 | 440 | ; `sql_cache_short` cache for heavy SQL queries |
|
393 | 441 | ; Only supported backend is `memory_lru` |
|
394 | 442 | ; ********************************************* |
|
395 | 443 | rc_cache.sql_cache_short.backend = dogpile.cache.rc.memory_lru |
|
396 | 444 | rc_cache.sql_cache_short.expiration_time = 30 |
|
397 | 445 | |
|
398 | 446 | |
|
399 | 447 | ; ***************************************************** |
|
400 | 448 | ; `cache_repo_longterm` cache for repo object instances |
|
401 | 449 | ; Only supported backend is `memory_lru` |
|
402 | 450 | ; ***************************************************** |
|
403 | 451 | rc_cache.cache_repo_longterm.backend = dogpile.cache.rc.memory_lru |
|
404 | 452 | ; by default we use 30 Days, cache is still invalidated on push |
|
405 | 453 | rc_cache.cache_repo_longterm.expiration_time = 2592000 |
|
406 | 454 | ; max items in LRU cache, set to smaller number to save memory, and expire last used caches |
|
407 | 455 | rc_cache.cache_repo_longterm.max_size = 10000 |
|
408 | 456 | |
|
409 | 457 | |
|
410 | 458 | ; ********************************************* |
|
411 | 459 | ; `cache_general` cache for general purpose use |
|
412 | 460 | ; for simplicity use rc.file_namespace backend, |
|
413 | 461 | ; for performance and scale use rc.redis |
|
414 | 462 | ; ********************************************* |
|
415 | 463 | rc_cache.cache_general.backend = dogpile.cache.rc.file_namespace |
|
416 | 464 | rc_cache.cache_general.expiration_time = 43200 |
|
417 | 465 | ; file cache store path. Defaults to `cache_dir =` value or tempdir if both values are not set |
|
418 | 466 | #rc_cache.cache_general.arguments.filename = /tmp/cache_general_db |
|
419 | 467 | |
|
420 | 468 | ; alternative `cache_general` redis backend with distributed lock |
|
421 | 469 | #rc_cache.cache_general.backend = dogpile.cache.rc.redis |
|
422 | 470 | #rc_cache.cache_general.expiration_time = 300 |
|
423 | 471 | |
|
424 | 472 | ; redis_expiration_time needs to be greater than expiration_time |
|
425 | 473 | #rc_cache.cache_general.arguments.redis_expiration_time = 7200 |
|
426 | 474 | |
|
427 | 475 | #rc_cache.cache_general.arguments.host = localhost |
|
428 | 476 | #rc_cache.cache_general.arguments.port = 6379 |
|
429 | 477 | #rc_cache.cache_general.arguments.db = 0 |
|
430 | 478 | #rc_cache.cache_general.arguments.socket_timeout = 30 |
|
431 | 479 | ; more Redis options: https://dogpilecache.sqlalchemy.org/en/latest/api.html#redis-backends |
|
432 | 480 | #rc_cache.cache_general.arguments.distributed_lock = true |
|
433 | 481 | |
|
434 | 482 | ; auto-renew lock to prevent stale locks, slower but safer. Use only if problems happen |
|
435 | 483 | #rc_cache.cache_general.arguments.lock_auto_renewal = true |
|
436 | 484 | |
|
437 | 485 | ; ************************************************* |
|
438 | 486 | ; `cache_perms` cache for permission tree, auth TTL |
|
439 | 487 | ; for simplicity use rc.file_namespace backend, |
|
440 | 488 | ; for performance and scale use rc.redis |
|
441 | 489 | ; ************************************************* |
|
442 | 490 | rc_cache.cache_perms.backend = dogpile.cache.rc.file_namespace |
|
443 | 491 | rc_cache.cache_perms.expiration_time = 3600 |
|
444 | 492 | ; file cache store path. Defaults to `cache_dir =` value or tempdir if both values are not set |
|
445 | 493 | #rc_cache.cache_perms.arguments.filename = /tmp/cache_perms_db |
|
446 | 494 | |
|
447 | 495 | ; alternative `cache_perms` redis backend with distributed lock |
|
448 | 496 | #rc_cache.cache_perms.backend = dogpile.cache.rc.redis |
|
449 | 497 | #rc_cache.cache_perms.expiration_time = 300 |
|
450 | 498 | |
|
451 | 499 | ; redis_expiration_time needs to be greater than expiration_time |
|
452 | 500 | #rc_cache.cache_perms.arguments.redis_expiration_time = 7200 |
|
453 | 501 | |
|
454 | 502 | #rc_cache.cache_perms.arguments.host = localhost |
|
455 | 503 | #rc_cache.cache_perms.arguments.port = 6379 |
|
456 | 504 | #rc_cache.cache_perms.arguments.db = 0 |
|
457 | 505 | #rc_cache.cache_perms.arguments.socket_timeout = 30 |
|
458 | 506 | ; more Redis options: https://dogpilecache.sqlalchemy.org/en/latest/api.html#redis-backends |
|
459 | 507 | #rc_cache.cache_perms.arguments.distributed_lock = true |
|
460 | 508 | |
|
461 | 509 | ; auto-renew lock to prevent stale locks, slower but safer. Use only if problems happen |
|
462 | 510 | #rc_cache.cache_perms.arguments.lock_auto_renewal = true |
|
463 | 511 | |
|
464 | 512 | ; *************************************************** |
|
465 | 513 | ; `cache_repo` cache for file tree, Readme, RSS FEEDS |
|
466 | 514 | ; for simplicity use rc.file_namespace backend, |
|
467 | 515 | ; for performance and scale use rc.redis |
|
468 | 516 | ; *************************************************** |
|
469 | 517 | rc_cache.cache_repo.backend = dogpile.cache.rc.file_namespace |
|
470 | 518 | rc_cache.cache_repo.expiration_time = 2592000 |
|
471 | 519 | ; file cache store path. Defaults to `cache_dir =` value or tempdir if both values are not set |
|
472 | 520 | #rc_cache.cache_repo.arguments.filename = /tmp/cache_repo_db |
|
473 | 521 | |
|
474 | 522 | ; alternative `cache_repo` redis backend with distributed lock |
|
475 | 523 | #rc_cache.cache_repo.backend = dogpile.cache.rc.redis |
|
476 | 524 | #rc_cache.cache_repo.expiration_time = 2592000 |
|
477 | 525 | |
|
478 | 526 | ; redis_expiration_time needs to be greater than expiration_time |
|
479 | 527 | #rc_cache.cache_repo.arguments.redis_expiration_time = 2678400 |
|
480 | 528 | |
|
481 | 529 | #rc_cache.cache_repo.arguments.host = localhost |
|
482 | 530 | #rc_cache.cache_repo.arguments.port = 6379 |
|
483 | 531 | #rc_cache.cache_repo.arguments.db = 1 |
|
484 | 532 | #rc_cache.cache_repo.arguments.socket_timeout = 30 |
|
485 | 533 | ; more Redis options: https://dogpilecache.sqlalchemy.org/en/latest/api.html#redis-backends |
|
486 | 534 | #rc_cache.cache_repo.arguments.distributed_lock = true |
|
487 | 535 | |
|
488 | 536 | ; auto-renew lock to prevent stale locks, slower but safer. Use only if problems happen |
|
489 | 537 | #rc_cache.cache_repo.arguments.lock_auto_renewal = true |
|
490 | 538 | |
|
491 | 539 | ; ############## |
|
492 | 540 | ; BEAKER SESSION |
|
493 | 541 | ; ############## |
|
494 | 542 | |
|
495 | 543 | ; beaker.session.type is type of storage options for the logged users sessions. Current allowed |
|
496 | 544 | ; types are file, ext:redis, ext:database, ext:memcached |
|
497 | 545 | ; Fastest ones are ext:redis and ext:database, DO NOT use memory type for session |
|
498 | 546 | #beaker.session.type = file |
|
499 | 547 | #beaker.session.data_dir = %(here)s/data/sessions |
|
500 | 548 | |
|
501 | 549 | ; Redis based sessions |
|
502 | 550 | beaker.session.type = ext:redis |
|
503 | 551 | beaker.session.url = redis://redis:6379/2 |
|
504 | 552 | |
|
505 | 553 | ; DB based session, fast, and allows easy management over logged in users |
|
506 | 554 | #beaker.session.type = ext:database |
|
507 | 555 | #beaker.session.table_name = db_session |
|
508 | 556 | #beaker.session.sa.url = postgresql://postgres:secret@localhost/rhodecode |
|
509 | 557 | #beaker.session.sa.url = mysql://root:secret@127.0.0.1/rhodecode |
|
510 | 558 | #beaker.session.sa.pool_recycle = 3600 |
|
511 | 559 | #beaker.session.sa.echo = false |
|
512 | 560 | |
|
513 | 561 | beaker.session.key = rhodecode |
|
514 | 562 | beaker.session.secret = develop-rc-uytcxaz |
|
515 | 563 | beaker.session.lock_dir = /data_ramdisk/lock |
|
516 | 564 | |
|
517 | 565 | ; Secure encrypted cookie. Requires AES and AES python libraries |
|
518 | 566 | ; you must disable beaker.session.secret to use this |
|
519 | 567 | #beaker.session.encrypt_key = key_for_encryption |
|
520 | 568 | #beaker.session.validate_key = validation_key |
|
521 | 569 | |
|
522 | 570 | ; Sets the session as invalid (also logging out the user) if it has not been |
|
523 | 571 | ; accessed for given amount of time in seconds |
|
524 | 572 | beaker.session.timeout = 2592000 |
|
525 | 573 | beaker.session.httponly = true |
|
526 | 574 | |
|
527 | 575 | ; Path to use for the cookie. Set to prefix if you use prefix middleware |
|
528 | 576 | #beaker.session.cookie_path = /custom_prefix |
|
529 | 577 | |
|
530 | 578 | ; Set https secure cookie |
|
531 | 579 | beaker.session.secure = false |
|
532 | 580 | |
|
533 | 581 | ; default cookie expiration time in seconds, set to `true` to set expire |
|
534 | 582 | ; at browser close |
|
535 | 583 | #beaker.session.cookie_expires = 3600 |
|
536 | 584 | |
|
537 | 585 | ; ############################# |
|
538 | 586 | ; SEARCH INDEXING CONFIGURATION |
|
539 | 587 | ; ############################# |
|
540 | 588 | |
|
541 | 589 | ; Full text search indexer is available in rhodecode-tools under |
|
542 | 590 | ; `rhodecode-tools index` command |
|
543 | 591 | |
|
544 | 592 | ; WHOOSH Backend, doesn't require additional services to run |
|
545 | 593 | ; it works well with a few dozen repos |
|
546 | 594 | search.module = rhodecode.lib.index.whoosh |
|
547 | 595 | search.location = %(here)s/data/index |
|
548 | 596 | |
|
549 | 597 | ; #################### |
|
550 | 598 | ; CHANNELSTREAM CONFIG |
|
551 | 599 | ; #################### |
|
552 | 600 | |
|
553 | 601 | ; channelstream enables persistent connections and live notification |
|
554 | 602 | ; in the system. It's also used by the chat system |
|
555 | 603 | |
|
556 | 604 | channelstream.enabled = true |
|
557 | 605 | |
|
558 | 606 | ; server address for channelstream server on the backend |
|
559 | 607 | channelstream.server = channelstream:9800 |
|
560 | 608 | |
|
561 | 609 | ; location of the channelstream server from outside world |
|
562 | 610 | ; use ws:// for http or wss:// for https. This address needs to be handled |
|
563 | 611 | ; by external HTTP server such as Nginx or Apache |
|
564 | 612 | ; see Nginx/Apache configuration examples in our docs |
|
565 | 613 | channelstream.ws_url = ws://rhodecode.yourserver.com/_channelstream |
|
566 | 614 | channelstream.secret = ENV_GENERATED |
|
567 | 615 | channelstream.history.location = /var/opt/rhodecode_data/channelstream_history |
|
568 | 616 | |
|
569 | 617 | ; Internal application path that Javascript uses to connect into. |
|
570 | 618 | ; If you use proxy-prefix the prefix should be added before /_channelstream |
|
571 | 619 | channelstream.proxy_path = /_channelstream |
|
572 | 620 | |
|
573 | 621 | |
|
574 | 622 | ; ############################## |
|
575 | 623 | ; MAIN RHODECODE DATABASE CONFIG |
|
576 | 624 | ; ############################## |
|
577 | 625 | |
|
578 | 626 | #sqlalchemy.db1.url = sqlite:///%(here)s/rhodecode.db?timeout=30 |
|
579 | 627 | #sqlalchemy.db1.url = postgresql://postgres:qweqwe@localhost/rhodecode |
|
580 | 628 | #sqlalchemy.db1.url = mysql://root:qweqwe@localhost/rhodecode?charset=utf8 |
|
581 | 629 | ; pymysql is an alternative driver for MySQL, use in case of problems with default one |
|
582 | 630 | #sqlalchemy.db1.url = mysql+pymysql://root:qweqwe@localhost/rhodecode |
|
583 | 631 | |
|
584 | 632 | sqlalchemy.db1.url = sqlite:///%(here)s/rhodecode.db?timeout=30 |
|
585 | 633 | |
|
586 | 634 | ; see sqlalchemy docs for other advanced settings |
|
587 | 635 | ; print the sql statements to output |
|
588 | 636 | sqlalchemy.db1.echo = false |
|
589 | 637 | |
|
590 | 638 | ; recycle the connections after this amount of seconds |
|
591 | 639 | sqlalchemy.db1.pool_recycle = 3600 |
|
592 | 640 | |
|
593 | 641 | ; the number of connections to keep open inside the connection pool. |
|
594 | 642 | ; 0 indicates no limit |
|
595 | 643 | ; the general calculus with gevent is: |
|
596 | 644 | ; if your system allows 500 concurrent greenlets (max_connections) that all do database access, |
|
597 | 645 | ; then increase pool size + max overflow so that they add up to 500. |
|
598 | 646 | #sqlalchemy.db1.pool_size = 5 |
|
599 | 647 | |
|
600 | 648 | ; The number of connections to allow in connection pool "overflow", that is |
|
601 | 649 | ; connections that can be opened above and beyond the pool_size setting, |
|
602 | 650 | ; which defaults to five. |
|
603 | 651 | #sqlalchemy.db1.max_overflow = 10 |
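
A worked instance of the sizing rule above, with assumed example values rather than the defaults:

.. code-block:: python

    # 500 concurrent greenlets that all touch the database (assumed figure)
    max_db_clients = 500
    pool_size = 400
    max_overflow = 100
    # pool size plus overflow should add up to the concurrency ceiling
    assert pool_size + max_overflow == max_db_clients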
|
604 | 652 | |
|
605 | 653 | ; Connection check ping, used to detect broken database connections |
|
606 | 654 | ; can be enabled to better handle 'MySQL server has gone away' errors |
|
607 | 655 | #sqlalchemy.db1.ping_connection = true |
|
608 | 656 | |
|
609 | 657 | ; ########## |
|
610 | 658 | ; VCS CONFIG |
|
611 | 659 | ; ########## |
|
612 | 660 | vcs.server.enable = true |
|
613 | 661 | vcs.server = vcsserver:10010 |
|
614 | 662 | |
|
615 | 663 | ; Web server connectivity protocol, responsible for web based VCS operations |
|
616 | 664 | ; Available protocols are: |
|
617 | 665 | ; `http` - use http-rpc backend (default) |
|
618 | 666 | vcs.server.protocol = http |
|
619 | 667 | |
|
620 | 668 | ; Push/Pull operations protocol, available options are: |
|
621 | 669 | ; `http` - use http-rpc backend (default) |
|
622 | 670 | vcs.scm_app_implementation = http |
|
623 | 671 | |
|
624 | 672 | ; Push/Pull operations hooks protocol, available options are: |
|
625 | 673 | ; `http` - use http-rpc backend (default) |
|
626 | 674 | ; `celery` - use celery based hooks |
|
627 | vcs.hooks.protocol = http | |
|
675 | #DEPRECATED:vcs.hooks.protocol = http | |
|
676 | vcs.hooks.protocol.v2 = celery | |
|
628 | 677 | |
|
629 | 678 | ; Host on which this instance is listening for hooks. vcsserver will call this host to pull/push hooks so it should be |
|
630 | 679 | ; accessible via network. |
|
631 | 680 | ; Use vcs.hooks.host = "*" to bind to current hostname (for Docker) |
|
632 | 681 | vcs.hooks.host = * |
|
633 | 682 | |
|
634 | 683 | ; Start VCSServer with this instance as a subprocess, useful for development |
|
635 | 684 | vcs.start_server = false |
|
636 | 685 | |
|
637 | 686 | ; List of enabled VCS backends, available options are: |
|
638 | 687 | ; `hg` - mercurial |
|
639 | 688 | ; `git` - git |
|
640 | 689 | ; `svn` - subversion |
|
641 | 690 | vcs.backends = hg, git, svn |
|
642 | 691 | |
|
643 | 692 | ; Wait this number of seconds before killing connection to the vcsserver |
|
644 | 693 | vcs.connection_timeout = 3600 |
|
645 | 694 | |
|
646 | 695 | ; Cache flag to cache vcsserver remote calls locally |
|
647 | 696 | ; It uses cache_region `cache_repo` |
|
648 | 697 | vcs.methods.cache = true |
|
649 | 698 | |
|
699 | ; Filesystem location where Git lfs objects should be stored | |
|
700 | vcs.git.lfs.storage_location = /var/opt/rhodecode_repo_store/.cache/git_lfs_store | |
|
701 | ||
|
702 | ; Filesystem location where Mercurial largefile objects should be stored | |
|
703 | vcs.hg.largefiles.storage_location = /var/opt/rhodecode_repo_store/.cache/hg_largefiles_store | |
|
704 | ||
|
650 | 705 | ; #################################################### |
|
651 | 706 | ; Subversion proxy support (mod_dav_svn) |
|
652 | 707 | ; Maps RhodeCode repo groups into SVN paths for Apache |
|
653 | 708 | ; #################################################### |
|
654 | 709 | |
|
655 | 710 | ; Compatibility version when creating SVN repositories. Defaults to newest version when commented out. |
|
656 | 711 | ; Set a numeric version for your current SVN, e.g. 1.8 or 1.12 |
|
657 | 712 | ; Legacy available options are: pre-1.4-compatible, pre-1.5-compatible, pre-1.6-compatible, pre-1.8-compatible, pre-1.9-compatible |
|
658 | 713 | #vcs.svn.compatible_version = 1.8 |
|
659 | 714 | |
|
660 | 715 | ; Redis connection settings for svn integrations logic |
|
661 | 716 | ; This connection string needs to be the same on CE and vcsserver |
|
662 | 717 | vcs.svn.redis_conn = redis://redis:6379/0 |
|
663 | 718 | |
|
664 | 719 | ; Enable SVN proxy of requests over HTTP |
|
665 | 720 | vcs.svn.proxy.enabled = true |
|
666 | 721 | |
|
667 | 722 | ; host to connect to running SVN subsystem |
|
668 | 723 | vcs.svn.proxy.host = http://svn:8090 |
|
669 | 724 | |
|
670 | 725 | ; Enable or disable the config file generation. |
|
671 | 726 | svn.proxy.generate_config = true |
|
672 | 727 | |
|
673 | 728 | ; Generate config file with `SVNListParentPath` set to `On`. |
|
674 | 729 | svn.proxy.list_parent_path = true |
|
675 | 730 | |
|
676 | 731 | ; Set location and file name of generated config file. |
|
677 | 732 | svn.proxy.config_file_path = /etc/rhodecode/conf/svn/mod_dav_svn.conf |
|
678 | 733 | |
|
679 | 734 | ; alternative mod_dav config template. This needs to be a valid mako template |
|
680 | 735 | ; Example template can be found in the source code: |
|
681 | 736 | ; rhodecode/apps/svn_support/templates/mod-dav-svn.conf.mako |
|
682 | 737 | #svn.proxy.config_template = ~/.rccontrol/enterprise-1/custom_svn_conf.mako |
|
683 | 738 | |
|
684 | 739 | ; Used as a prefix to the `Location` block in the generated config file. |
|
685 | 740 | ; In most cases it should be set to `/`. |
|
686 | 741 | svn.proxy.location_root = / |
|
687 | 742 | |
|
688 | 743 | ; Command to reload the mod dav svn configuration on change. |
|
689 | 744 | ; Example: `/etc/init.d/apache2 reload` or /home/USER/apache_reload.sh |
|
690 | 745 | ; Make sure user who runs RhodeCode process is allowed to reload Apache |
|
691 | 746 | #svn.proxy.reload_cmd = /etc/init.d/apache2 reload |
|
692 | 747 | |
|
693 | 748 | ; If the timeout expires before the reload command finishes, the command will |
|
694 | 749 | ; be killed. Setting it to zero means no timeout. Defaults to 10 seconds. |
|
695 | 750 | #svn.proxy.reload_timeout = 10 |
|
696 | 751 | |
|
697 | 752 | ; #################### |
|
698 | 753 | ; SSH Support Settings |
|
699 | 754 | ; #################### |
|
700 | 755 | |
|
701 | 756 | ; Defines if a custom authorized_keys file should be created and written on |
|
702 | 757 | ; any change of user SSH keys. Setting this to false also disables the possibility |
|
703 | 758 | ; of adding SSH keys by users from the web interface. Super admins can still |
|
704 | 759 | ; manage SSH keys. |
|
705 | 760 | ssh.generate_authorized_keyfile = true |
|
706 | 761 | |
|
707 | 762 | ; Options for ssh, default is `no-pty,no-port-forwarding,no-X11-forwarding,no-agent-forwarding` |
|
708 | 763 | # ssh.authorized_keys_ssh_opts = |
|
709 | 764 | |
|
710 | 765 | ; Path to the authorized_keys file where the generated entries are placed. |
|
711 | 766 | ; It is possible to have multiple key files specified in `sshd_config` e.g. |
|
712 | 767 | ; AuthorizedKeysFile %h/.ssh/authorized_keys %h/.ssh/authorized_keys_rhodecode |
|
713 | 768 | ssh.authorized_keys_file_path = /etc/rhodecode/conf/ssh/authorized_keys_rhodecode |
|
714 | 769 | |
|
715 | 770 | ; Command to execute the SSH wrapper. The binary is available in the |
|
716 | 771 | ; RhodeCode installation directory. |
|
717 | 772 | ; legacy: /usr/local/bin/rhodecode_bin/bin/rc-ssh-wrapper |
|
718 | 773 | ; new rewrite: /usr/local/bin/rhodecode_bin/bin/rc-ssh-wrapper-v2 |
|
719 | ssh.wrapper_cmd = /usr/local/bin/rhodecode_bin/bin/rc-ssh-wrapper | |
|
774 | #DEPRECATED: ssh.wrapper_cmd = /usr/local/bin/rhodecode_bin/bin/rc-ssh-wrapper | |
|
775 | ssh.wrapper_cmd.v2 = /usr/local/bin/rhodecode_bin/bin/rc-ssh-wrapper-v2 | |
|
720 | 776 | |
|
721 | 777 | ; Allow shell when executing the ssh-wrapper command |
|
722 | 778 | ssh.wrapper_cmd_allow_shell = false |
|
723 | 779 | |
|
724 | 780 | ; Enables logging and detailed output sent back to the client during SSH |
|
725 | 781 | ; operations. Useful for debugging; shouldn't be used in production. |
|
726 | 782 | ssh.enable_debug_logging = true |
|
727 | 783 | |
|
728 | 784 | ; Paths to binary executables; by default these are just the binary names, but |
|
729 | 785 | ; they can be overridden to point at custom ones |
|
730 | 786 | ssh.executable.hg = /usr/local/bin/rhodecode_bin/vcs_bin/hg |
|
731 | 787 | ssh.executable.git = /usr/local/bin/rhodecode_bin/vcs_bin/git |
|
732 | 788 | ssh.executable.svn = /usr/local/bin/rhodecode_bin/vcs_bin/svnserve |
|
733 | 789 | |
|
734 | 790 | ; Enables SSH key generator web interface. Disabling this still allows users |
|
735 | 791 | ; to add their own keys. |
|
736 | 792 | ssh.enable_ui_key_generator = true |
|
737 | 793 | |
|
738 | 794 | ; Statsd client config, this is used to send metrics to statsd |
|
739 | 795 | ; We recommend setting up statsd_exporter and scraping the metrics with Prometheus |
|
740 | 796 | #statsd.enabled = false |
|
741 | 797 | #statsd.statsd_host = 0.0.0.0 |
|
742 | 798 | #statsd.statsd_port = 8125 |
|
743 | 799 | #statsd.statsd_prefix = |
|
744 | 800 | #statsd.statsd_ipv6 = false |
|
745 | 801 | |
|
746 | 802 | ; Configure logging automatically at server startup. Set to false |
|
747 | 803 | ; to use the custom logging config below. |
|
748 | 804 | ; RC_LOGGING_FORMATTER |
|
749 | 805 | ; RC_LOGGING_LEVEL |
|
750 | 806 | ; env variables can control the logging settings when autoconfigure is used |
|
751 | 807 | |
|
752 | 808 | #logging.autoconfigure = true |
|
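; e.g. to keep autoconfigure on but switch formatting and level via the
|
; environment (illustrative values; 'json' matches the json formatter defined below):
|
;   export RC_LOGGING_FORMATTER=json
|
;   export RC_LOGGING_LEVEL=DEBUG
|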
753 | 809 | |
|
754 | 810 | ; specify your own custom logging config file to configure logging |
|
755 | 811 | #logging.logging_conf_file = /path/to/custom_logging.ini |
|
756 | 812 | |
|
757 | 813 | ; Dummy marker to add new entries after. |
|
758 | 814 | ; Add any custom entries below. Please don't remove this marker. |
|
759 | 815 | custom.conf = 1 |
|
760 | 816 | |
|
761 | 817 | |
|
762 | 818 | ; ##################### |
|
763 | 819 | ; LOGGING CONFIGURATION |
|
764 | 820 | ; ##################### |
|
765 | 821 | |
|
766 | 822 | [loggers] |
|
767 | 823 | keys = root, sqlalchemy, beaker, celery, rhodecode, ssh_wrapper |
|
768 | 824 | |
|
769 | 825 | [handlers] |
|
770 | 826 | keys = console, console_sql |
|
771 | 827 | |
|
772 | 828 | [formatters] |
|
773 | 829 | keys = generic, json, color_formatter, color_formatter_sql |
|
774 | 830 | |
|
775 | 831 | ; ####### |
|
776 | 832 | ; LOGGERS |
|
777 | 833 | ; ####### |
|
778 | 834 | [logger_root] |
|
779 | 835 | level = NOTSET |
|
780 | 836 | handlers = console |
|
781 | 837 | |
|
782 | 838 | [logger_sqlalchemy] |
|
783 | 839 | level = INFO |
|
784 | 840 | handlers = console_sql |
|
785 | 841 | qualname = sqlalchemy.engine |
|
786 | 842 | propagate = 0 |
|
787 | 843 | |
|
788 | 844 | [logger_beaker] |
|
789 | 845 | level = DEBUG |
|
790 | 846 | handlers = |
|
791 | 847 | qualname = beaker.container |
|
792 | 848 | propagate = 1 |
|
793 | 849 | |
|
794 | 850 | [logger_rhodecode] |
|
795 | 851 | level = DEBUG |
|
796 | 852 | handlers = |
|
797 | 853 | qualname = rhodecode |
|
798 | 854 | propagate = 1 |
|
799 | 855 | |
|
800 | 856 | [logger_ssh_wrapper] |
|
801 | 857 | level = DEBUG |
|
802 | 858 | handlers = |
|
803 | 859 | qualname = ssh_wrapper |
|
804 | 860 | propagate = 1 |
|
805 | 861 | |
|
806 | 862 | [logger_celery] |
|
807 | 863 | level = DEBUG |
|
808 | 864 | handlers = |
|
809 | 865 | qualname = celery |
|
810 | 866 | |
|
811 | 867 | |
|
812 | 868 | ; ######## |
|
813 | 869 | ; HANDLERS |
|
814 | 870 | ; ######## |
|
815 | 871 | |
|
816 | 872 | [handler_console] |
|
817 | 873 | class = StreamHandler |
|
818 | 874 | args = (sys.stderr, ) |
|
819 | 875 | level = DEBUG |
|
820 | 876 | ; To enable JSON formatted logs replace 'generic/color_formatter' with 'json' |
|
821 | 877 | ; This allows sending properly formatted logs to grafana loki or elasticsearch |
|
822 | 878 | formatter = color_formatter |
|
823 | 879 | |
|
824 | 880 | [handler_console_sql] |
|
825 | 881 | ; "level = DEBUG" logs SQL queries and results. |
|
826 | 882 | ; "level = INFO" logs SQL queries. |
|
827 | 883 | ; "level = WARN" logs neither. (Recommended for production systems.) |
|
828 | 884 | class = StreamHandler |
|
829 | 885 | args = (sys.stderr, ) |
|
830 | 886 | level = WARN |
|
831 | 887 | ; To enable JSON formatted logs replace 'generic/color_formatter_sql' with 'json' |
|
832 | 888 | ; This allows sending properly formatted logs to grafana loki or elasticsearch |
|
833 | 889 | formatter = color_formatter_sql |
|
834 | 890 | |
|
835 | 891 | ; ########## |
|
836 | 892 | ; FORMATTERS |
|
837 | 893 | ; ########## |
|
838 | 894 | |
|
839 | 895 | [formatter_generic] |
|
840 | 896 | class = rhodecode.lib.logging_formatter.ExceptionAwareFormatter |
|
841 | 897 | format = %(asctime)s.%(msecs)03d [%(process)d] %(levelname)-5.5s [%(name)s] %(message)s |
|
842 | 898 | datefmt = %Y-%m-%d %H:%M:%S |
|
843 | 899 | |
|
844 | 900 | [formatter_color_formatter] |
|
845 | 901 | class = rhodecode.lib.logging_formatter.ColorFormatter |
|
846 | 902 | format = %(asctime)s.%(msecs)03d [%(process)d] %(levelname)-5.5s [%(name)s] %(message)s |
|
847 | 903 | datefmt = %Y-%m-%d %H:%M:%S |
|
848 | 904 | |
|
849 | 905 | [formatter_color_formatter_sql] |
|
850 | 906 | class = rhodecode.lib.logging_formatter.ColorFormatterSql |
|
851 | 907 | format = %(asctime)s.%(msecs)03d [%(process)d] %(levelname)-5.5s [%(name)s] %(message)s |
|
852 | 908 | datefmt = %Y-%m-%d %H:%M:%S |
|
853 | 909 | |
|
854 | 910 | [formatter_json] |
|
855 | 911 | format = %(timestamp)s %(levelname)s %(name)s %(message)s %(req_id)s |
|
856 | 912 | class = rhodecode.lib._vendor.jsonlogger.JsonFormatter |
@@ -1,520 +1,545 b'' | |||
|
1 | 1 | """ |
|
2 | 2 | Gunicorn config extension and hooks. This config file adds some extra settings and memory management. |
|
3 | 3 | Gunicorn configuration should be managed by .ini files entries of RhodeCode or VCSServer |
|
4 | 4 | """ |
|
5 | 5 | |
|
6 | 6 | import gc |
|
7 | 7 | import os |
|
8 | 8 | import sys |
|
9 | 9 | import math |
|
10 | 10 | import time |
|
11 | 11 | import threading |
|
12 | 12 | import traceback |
|
13 | 13 | import random |
|
14 | 14 | import socket |
|
15 | 15 | import dataclasses |
|
16 | import json | |
|
16 | 17 | from gunicorn.glogging import Logger |
|
17 | 18 | |
|
18 | 19 | |
|
19 | 20 | def get_workers(): |
|
20 | 21 | import multiprocessing |
|
21 | 22 | return multiprocessing.cpu_count() * 2 + 1 |
|
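# e.g. on a 2-CPU machine this opt-in helper yields 2 * 2 + 1 = 5 workers,
|
# matching the (2 * NUMBER_OF_CPUS + 1) recommendation used for `workers` below
|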
22 | 23 | |
|
23 | 24 | |
|
24 | 25 | bind = "127.0.0.1:10020" |
|
25 | 26 | |
|
26 | 27 | |
|
27 | 28 | # Error logging output for gunicorn (-) is stdout |
|
28 | 29 | errorlog = '-' |
|
29 | 30 | |
|
30 | 31 | # Access logging output for gunicorn (-) is stdout |
|
31 | 32 | accesslog = '-' |
|
32 | 33 | |
|
33 | 34 | |
|
34 | 35 | # SERVER MECHANICS |
|
35 | 36 | # None == system temp dir |
|
36 | 37 | # worker_tmp_dir is recommended to be set to some tmpfs |
|
37 | 38 | worker_tmp_dir = None |
|
38 | 39 | tmp_upload_dir = None |
|
39 | 40 | |
|
40 | # use re-use port logic | |
|
41 |
|
41 | # use re-use port logic to let linux internals load-balance the requests better. | |
|
42 | reuse_port = True | |
|
42 | 43 | |
|
43 | 44 | # Custom log format |
|
44 | 45 | #access_log_format = ( |
|
45 | 46 | # '%(t)s %(p)s INFO [GNCRN] %(h)-15s rqt:%(L)s %(s)s %(b)-6s "%(m)s:%(U)s %(q)s" usr:%(u)s "%(f)s" "%(a)s"') |
|
46 | 47 | |
|
47 | 48 | # loki format for easier parsing in grafana |
|
48 | access_log_format = ( | |
|
49 | loki_access_log_format = ( | |
|
49 | 50 | 'time="%(t)s" pid=%(p)s level="INFO" type="[GNCRN]" ip="%(h)-15s" rqt="%(L)s" response_code="%(s)s" response_bytes="%(b)-6s" uri="%(m)s:%(U)s %(q)s" user=":%(u)s" user_agent="%(a)s"') |
|
50 | 51 | |
|
52 | # JSON format | |
|
53 | json_access_log_format = json.dumps({ | |
|
54 | 'time': r'%(t)s', | |
|
55 | 'pid': r'%(p)s', | |
|
56 | 'level': 'INFO', | |
|
57 | 'ip': r'%(h)s', | |
|
58 | 'request_time': r'%(L)s', | |
|
59 | 'remote_address': r'%(h)s', | |
|
60 | 'user_name': r'%(u)s', | |
|
61 | 'status': r'%(s)s', | |
|
62 | 'method': r'%(m)s', | |
|
63 | 'url_path': r'%(U)s', | |
|
64 | 'query_string': r'%(q)s', | |
|
65 | 'protocol': r'%(H)s', | |
|
66 | 'response_length': r'%(B)s', | |
|
67 | 'referer': r'%(f)s', | |
|
68 | 'user_agent': r'%(a)s', | |
|
69 | ||
|
70 | }) | |
|
71 | ||
|
72 | access_log_format = loki_access_log_format | |
|
73 | if os.environ.get('RC_LOGGING_FORMATTER') == 'json': | |
|
74 | access_log_format = json_access_log_format | |
|
75 | ||
|
51 | 76 | # self adjust workers based on CPU count, to use maximum of CPU and not overquota the resources |
|
52 | 77 | # workers = get_workers() |
|
53 | 78 | |
|
54 | 79 | # Gunicorn access log level |
|
55 | 80 | loglevel = 'info' |
|
56 | 81 | |
|
57 | 82 | # Process name visible in a process list |
|
58 | 83 | proc_name = 'rhodecode_enterprise' |
|
59 | 84 | |
|
60 | 85 | # Type of worker class, one of `sync`, `gevent` or `gthread` |
|
61 | 86 | # currently `sync` is the only option allowed for vcsserver; for rhodecode all 3 are allowed |
|
62 | 87 | # gevent: |
|
63 | 88 | # In this case, the maximum number of concurrent requests is (N workers * X worker_connections) |
|
64 | 89 | # e.g. workers = 3, worker_connections = 10 -> 3*10 = 30 concurrent requests can be handled |
|
65 | 90 | # gthread: |
|
66 | 91 | # In this case, the maximum number of concurrent requests is (N workers * X threads) |
|
67 | 92 | # e.g. workers = 3, threads = 3 -> 3*3 = 9 concurrent requests can be handled |
|
68 | 93 | worker_class = 'gthread' |
|
69 | 94 | |
|
70 | 95 | # Sets the number of process workers. More workers means more concurrent connections |
|
71 | 96 | # RhodeCode can handle at the same time. Each additional worker also increases |
|
72 | 97 | # memory usage, as each has its own set of caches. |
|
73 | 98 | # The recommended value is (2 * NUMBER_OF_CPUS + 1), e.g. 2 CPUs = 5 workers, but no more |
|
74 | 99 | # than 8-10 unless for huge deployments, e.g. 700-1000 users. |
|
75 | 100 | # `instance_id = *` must be set in the [app:main] section below (which is the default) |
|
76 | 101 | # when using more than 1 worker. |
|
77 | 102 | workers = 2 |
|
78 | 103 | |
|
79 | 104 | # Threads numbers for worker class gthread |
|
80 | 105 | threads = 1 |
|
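# worked example: with worker_class = 'gthread', workers = 2 and threads = 1 as set
|
# above, this instance handles at most 2 * 1 = 2 concurrent requests
|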
81 | 106 | |
|
82 | 107 | # The maximum number of simultaneous clients. Valid only for gevent |
|
83 | 108 | # In this case, the maximum number of concurrent requests is (N workers * X worker_connections) |
|
84 | 109 | # e.g. workers = 3, worker_connections = 10 -> 3*10 = 30 concurrent requests can be handled |
|
85 | 110 | worker_connections = 10 |
|
86 | 111 | |
|
87 | 112 | # Max number of requests that worker will handle before being gracefully restarted. |
|
88 | 113 | # Prevents memory leaks, jitter adds variability so not all workers are restarted at once. |
|
89 | 114 | max_requests = 2000 |
|
90 | 115 | max_requests_jitter = int(max_requests * 0.2) # 20% of max_requests |
|
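# worked example: with max_requests = 2000 and a jitter of 400, each worker is
|
# recycled after roughly 2000-2400 handled requests, staggered so not all workers
|
# restart at once
|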
91 | 116 | |
|
92 | 117 | # The maximum number of pending connections. |
|
93 | 118 | # Exceeding this number results in the client getting an error when attempting to connect. |
|
94 | 119 | backlog = 64 |
|
95 | 120 | |
|
96 | 121 | # The amount of time a worker can spend handling a request before it |
|
97 | 122 | # gets killed and restarted. By default, set to 21600 (6hrs) |
|
98 | 123 | # Examples: 1800 (30min), 3600 (1hr), 7200 (2hr), 43200 (12h) |
|
99 | 124 | timeout = 21600 |
|
100 | 125 | |
|
101 | 126 | # The maximum size of HTTP request line in bytes. |
|
102 | 127 | # 0 for unlimited |
|
103 | 128 | limit_request_line = 0 |
|
104 | 129 | |
|
105 | 130 | # Limit the number of HTTP headers fields in a request. |
|
106 | 131 | # By default this value is 100 and can't be larger than 32768. |
|
107 | 132 | limit_request_fields = 32768 |
|
108 | 133 | |
|
109 | 134 | # Limit the allowed size of an HTTP request header field. |
|
110 | 135 | # Value is a positive number or 0. |
|
111 | 136 | # Setting it to 0 will allow unlimited header field sizes. |
|
112 | 137 | limit_request_field_size = 0 |
|
113 | 138 | |
|
114 | 139 | # Timeout for graceful workers restart. |
|
115 | 140 | # After receiving a restart signal, workers have this much time to finish |
|
116 | 141 | # serving requests. Workers still alive after the timeout (starting from the |
|
117 | 142 | # receipt of the restart signal) are force killed. |
|
118 | 143 | # Examples: 1800 (30min), 3600 (1hr), 7200 (2hr), 43200 (12h) |
|
119 | 144 | graceful_timeout = 21600 |
|
120 | 145 | |
|
121 | 146 | # The number of seconds to wait for requests on a Keep-Alive connection. |
|
122 | 147 | # Generally set in the 1-5 seconds range. |
|
123 | 148 | keepalive = 2 |
|
124 | 149 | |
|
125 | 150 | # Maximum memory usage that each worker can use before it will receive a |
|
126 | 151 | # graceful restart signal. 0 = memory monitoring is disabled |
|
127 | 152 | # Examples: 268435456 (256MB), 536870912 (512MB) |
|
128 | 153 | # 1073741824 (1GB), 2147483648 (2GB), 4294967296 (4GB) |
|
129 | 154 | # Dynamic formula 1024 * 1024 * 256 == 256MBs |
|
130 | 155 | memory_max_usage = 0 |
|
131 | 156 | |
|
132 | 157 | # How often in seconds to check for memory usage for each gunicorn worker |
|
133 | 158 | memory_usage_check_interval = 60 |
|
134 | 159 | |
|
135 | 160 | # Threshold below which we don't recycle a worker if garbage collection |
|
136 | 161 | # frees up enough resources. Before each restart we try to run GC on the worker; |
|
137 | 162 | # if enough memory is freed after that, the restart will not happen. |
|
138 | 163 | memory_usage_recovery_threshold = 0.8 |
|
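# worked example (illustrative values): with memory_max_usage = 536870912 (512MB)
|
# and memory_usage_recovery_threshold = 0.8, a worker whose RSS still exceeds
|
# 512MB * 0.8 = ~410MB after a forced gc is gracefully restarted
|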
139 | 164 | |
|
140 | 165 | |
|
141 | 166 | @dataclasses.dataclass |
|
142 | 167 | class MemoryCheckConfig: |
|
143 | 168 | max_usage: int |
|
144 | 169 | check_interval: int |
|
145 | 170 | recovery_threshold: float |
|
146 | 171 | |
|
147 | 172 | |
|
148 | 173 | def _get_process_rss(pid=None): |
|
149 | 174 | try: |
|
150 | 175 | import psutil |
|
151 | 176 | if pid: |
|
152 | 177 | proc = psutil.Process(pid) |
|
153 | 178 | else: |
|
154 | 179 | proc = psutil.Process() |
|
155 | 180 | return proc.memory_info().rss |
|
156 | 181 | except Exception: |
|
157 | 182 | return None |
|
158 | 183 | |
|
159 | 184 | |
|
160 | 185 | def _get_config(ini_path): |
|
161 | 186 | import configparser |
|
162 | 187 | |
|
163 | 188 | try: |
|
164 | 189 | config = configparser.RawConfigParser() |
|
165 | 190 | config.read(ini_path) |
|
166 | 191 | return config |
|
167 | 192 | except Exception: |
|
168 | 193 | return None |
|
169 | 194 | |
|
170 | 195 | |
|
171 | 196 | def get_memory_usage_params(config=None): |
|
172 | 197 | # memory spec defaults |
|
173 | 198 | _memory_max_usage = memory_max_usage |
|
174 | 199 | _memory_usage_check_interval = memory_usage_check_interval |
|
175 | 200 | _memory_usage_recovery_threshold = memory_usage_recovery_threshold |
|
176 | 201 | |
|
177 | 202 | if config: |
|
178 | 203 | ini_path = os.path.abspath(config) |
|
179 | 204 | conf = _get_config(ini_path) |
|
180 | 205 | |
|
181 | 206 | section = 'server:main' |
|
182 | 207 | if conf and conf.has_section(section): |
|
183 | 208 | |
|
184 | 209 | if conf.has_option(section, 'memory_max_usage'): |
|
185 | 210 | _memory_max_usage = conf.getint(section, 'memory_max_usage') |
|
186 | 211 | |
|
187 | 212 | if conf.has_option(section, 'memory_usage_check_interval'): |
|
188 | 213 | _memory_usage_check_interval = conf.getint(section, 'memory_usage_check_interval') |
|
189 | 214 | |
|
190 | 215 | if conf.has_option(section, 'memory_usage_recovery_threshold'): |
|
191 | 216 | _memory_usage_recovery_threshold = conf.getfloat(section, 'memory_usage_recovery_threshold') |
|
192 | 217 | |
|
193 | 218 | _memory_max_usage = int(os.environ.get('RC_GUNICORN_MEMORY_MAX_USAGE', '') |
|
194 | 219 | or _memory_max_usage) |
|
195 | 220 | _memory_usage_check_interval = int(os.environ.get('RC_GUNICORN_MEMORY_USAGE_CHECK_INTERVAL', '') |
|
196 | 221 | or _memory_usage_check_interval) |
|
197 | 222 | _memory_usage_recovery_threshold = float(os.environ.get('RC_GUNICORN_MEMORY_USAGE_RECOVERY_THRESHOLD', '') |
|
198 | 223 | or _memory_usage_recovery_threshold) |
|
199 | 224 | |
|
200 | 225 | return MemoryCheckConfig(_memory_max_usage, _memory_usage_check_interval, _memory_usage_recovery_threshold) |
|
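# hypothetical usage: the RC_GUNICORN_* env variables above override the .ini, e.g.
|
#   RC_GUNICORN_MEMORY_MAX_USAGE=1073741824 gunicorn --config gunicorn_conf.py --paste rhodecode.ini
|
# caps each worker at 1 GiB regardless of the memory_max_usage value in the config
|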
201 | 226 | |
|
202 | 227 | |
|
203 | 228 | def _time_with_offset(check_interval): |
|
204 | 229 | return time.time() - random.randint(0, int(check_interval / 2)) |
|
205 | 230 | |
|
206 | 231 | |
|
207 | 232 | def pre_fork(server, worker): |
|
208 | 233 | pass |
|
209 | 234 | |
|
210 | 235 | |
|
211 | 236 | def post_fork(server, worker): |
|
212 | 237 | |
|
213 | 238 | memory_conf = get_memory_usage_params() |
|
214 | 239 | _memory_max_usage = memory_conf.max_usage |
|
215 | 240 | _memory_usage_check_interval = memory_conf.check_interval |
|
216 | 241 | _memory_usage_recovery_threshold = memory_conf.recovery_threshold |
|
217 | 242 | |
|
218 | 243 | worker._memory_max_usage = int(os.environ.get('RC_GUNICORN_MEMORY_MAX_USAGE', '') |
|
219 | 244 | or _memory_max_usage) |
|
220 | 245 | worker._memory_usage_check_interval = int(os.environ.get('RC_GUNICORN_MEMORY_USAGE_CHECK_INTERVAL', '') |
|
221 | 246 | or _memory_usage_check_interval) |
|
222 | 247 | worker._memory_usage_recovery_threshold = float(os.environ.get('RC_GUNICORN_MEMORY_USAGE_RECOVERY_THRESHOLD', '') |
|
223 | 248 | or _memory_usage_recovery_threshold) |
|
224 | 249 | |
|
225 | 250 | # register memory last check time, with some random offset so we don't recycle all |
|
226 | 251 | # at once |
|
227 | 252 | worker._last_memory_check_time = _time_with_offset(_memory_usage_check_interval) |
|
228 | 253 | |
|
229 | 254 | if _memory_max_usage: |
|
230 | 255 | server.log.info("pid=[%-10s] WORKER spawned with max memory set at %s", worker.pid, |
|
231 | 256 | _format_data_size(_memory_max_usage)) |
|
232 | 257 | else: |
|
233 | 258 | server.log.info("pid=[%-10s] WORKER spawned", worker.pid) |
|
234 | 259 | |
|
235 | 260 | |
|
236 | 261 | def pre_exec(server): |
|
237 | 262 | server.log.info("Forked child, re-executing.") |
|
238 | 263 | |
|
239 | 264 | |
|
240 | 265 | def on_starting(server): |
|
241 | 266 | server_lbl = '{} {}'.format(server.proc_name, server.address) |
|
242 | 267 | server.log.info("Server %s is starting.", server_lbl) |
|
243 | 268 | server.log.info('Config:') |
|
244 | 269 | server.log.info(f"\n{server.cfg}") |
|
245 | 270 | server.log.info(get_memory_usage_params()) |
|
246 | 271 | |
|
247 | 272 | |
|
248 | 273 | def when_ready(server): |
|
249 | 274 | server.log.info("Server %s is ready. Spawning workers", server) |
|
250 | 275 | |
|
251 | 276 | |
|
252 | 277 | def on_reload(server): |
|
253 | 278 | pass |
|
254 | 279 | |
|
255 | 280 | |
|
256 | 281 | def _format_data_size(size, unit="B", precision=1, binary=True): |
|
257 | 282 | """Format a number using SI units (kilo, mega, etc.). |
|
258 | 283 | |
|
259 | 284 | ``size``: The number as a float or int. |
|
260 | 285 | |
|
261 | 286 | ``unit``: The unit name in plural form. Examples: "bytes", "B". |
|
262 | 287 | |
|
263 | 288 | ``precision``: How many digits to the right of the decimal point. Default |
|
264 | 289 | is 1. 0 suppresses the decimal point. |
|
265 | 290 | |
|
266 | 291 | ``binary``: If false, use base-10 decimal prefixes (kilo = K = 1000). |
|
267 | 292 | If true, use base-2 binary prefixes (kibi = Ki = 1024). |
|
268 | 293 | |
|
269 | 294 | ``full_name``: If false (default), use the prefix abbreviation ("k" or |
|
270 | 295 | "Ki"). If true, use the full prefix ("kilo" or "kibi"). |
|
271 | 296 |  |
|
272 | 297 | |
|
273 | 298 | """ |
|
274 | 299 | |
|
275 | 300 | if not binary: |
|
276 | 301 | base = 1000 |
|
277 | 302 | multiples = ('', 'k', 'M', 'G', 'T', 'P', 'E', 'Z', 'Y') |
|
278 | 303 | else: |
|
279 | 304 | base = 1024 |
|
280 | 305 | multiples = ('', 'Ki', 'Mi', 'Gi', 'Ti', 'Pi', 'Ei', 'Zi', 'Yi') |
|
281 | 306 | |
|
282 | 307 | sign = "" |
|
283 | 308 | if size > 0: |
|
284 | 309 | m = int(math.log(size, base)) |
|
285 | 310 | elif size < 0: |
|
286 | 311 | sign = "-" |
|
287 | 312 | size = -size |
|
288 | 313 | m = int(math.log(size, base)) |
|
289 | 314 | else: |
|
290 | 315 | m = 0 |
|
291 | 316 | if m > 8: |
|
292 | 317 | m = 8 |
|
293 | 318 | |
|
294 | 319 | if m == 0: |
|
295 | 320 | precision = '%.0f' |
|
296 | 321 | else: |
|
297 | 322 | precision = '%%.%df' % precision |
|
298 | 323 | |
|
299 | 324 | size = precision % (size / math.pow(base, m)) |
|
300 | 325 | |
|
301 | 326 | return '%s%s %s%s' % (sign, size.strip(), multiples[m], unit) |
|
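# quick sanity checks of the formatter (hypothetical calls, not part of the original file):
|
#   _format_data_size(1073741824)          -> '1.0 GiB'
|
#   _format_data_size(1500, binary=False)  -> '1.5 kB'
|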
302 | 327 | |
|
303 | 328 | |
|
304 | 329 | def _check_memory_usage(worker): |
|
305 | 330 | _memory_max_usage = worker._memory_max_usage |
|
306 | 331 | if not _memory_max_usage: |
|
307 | 332 | return |
|
308 | 333 | |
|
309 | 334 | _memory_usage_check_interval = worker._memory_usage_check_interval |
|
310 | 335 | _memory_usage_recovery_threshold = _memory_max_usage * worker._memory_usage_recovery_threshold |
|
311 | 336 | |
|
312 | 337 | elapsed = time.time() - worker._last_memory_check_time |
|
313 | 338 | if elapsed > _memory_usage_check_interval: |
|
314 | 339 | mem_usage = _get_process_rss() |
|
315 | 340 | if mem_usage and mem_usage > _memory_max_usage: |
|
316 | 341 | worker.log.info( |
|
317 | 342 | "memory usage %s > %s, forcing gc", |
|
318 | 343 | _format_data_size(mem_usage), _format_data_size(_memory_max_usage)) |
|
319 | 344 | # Try to clean it up by forcing a full collection. |
|
320 | 345 | gc.collect() |
|
321 | 346 | mem_usage = _get_process_rss() |
|
322 | 347 | if mem_usage and mem_usage > _memory_usage_recovery_threshold: |
|
323 | 348 | # Didn't clean up enough, we'll have to terminate. |
|
324 | 349 | worker.log.warning( |
|
325 | 350 | "memory usage %s > %s after gc, quitting", |
|
326 | 351 | _format_data_size(mem_usage), _format_data_size(_memory_max_usage)) |
|
327 | 352 | # This will cause worker to auto-restart itself |
|
328 | 353 | worker.alive = False |
|
329 | 354 | worker._last_memory_check_time = time.time() |
|
330 | 355 | |
|
331 | 356 | |
|
332 | 357 | def worker_int(worker): |
|
333 | 358 | worker.log.info("pid=[%-10s] worker received INT or QUIT signal", worker.pid) |
|
334 | 359 | |
|
335 | 360 | # get traceback info, when a worker crashes |
|
336 | 361 | def get_thread_id(t_id): |
|
337 | 362 | id2name = dict([(th.ident, th.name) for th in threading.enumerate()]) |
|
338 | 363 | return id2name.get(t_id, "unknown_thread_id") |
|
339 | 364 | |
|
340 | 365 | code = [] |
|
341 | 366 | for thread_id, stack in sys._current_frames().items(): # noqa |
|
342 | 367 | code.append( |
|
343 | 368 | "\n# Thread: %s(%d)" % (get_thread_id(thread_id), thread_id)) |
|
344 | 369 | for fname, lineno, name, line in traceback.extract_stack(stack): |
|
345 | 370 | code.append('File: "%s", line %d, in %s' % (fname, lineno, name)) |
|
346 | 371 | if line: |
|
347 | 372 | code.append(" %s" % (line.strip())) |
|
348 | 373 | worker.log.debug("\n".join(code)) |
|
349 | 374 | |
|
350 | 375 | |
|
351 | 376 | def worker_abort(worker): |
|
352 | 377 | worker.log.info("pid=[%-10s] worker received SIGABRT signal", worker.pid) |
|
353 | 378 | |
|
354 | 379 | |
|
355 | 380 | def worker_exit(server, worker): |
|
356 | 381 | worker.log.info("pid=[%-10s] worker exit", worker.pid) |
|
357 | 382 | |
|
358 | 383 | |
|
359 | 384 | def child_exit(server, worker): |
|
360 | 385 | worker.log.info("pid=[%-10s] worker child exit", worker.pid) |
|
361 | 386 | |
|
362 | 387 | |
|
363 | 388 | def pre_request(worker, req): |
|
364 | 389 | worker.start_time = time.time() |
|
365 | 390 | worker.log.debug( |
|
366 | 391 | "GNCRN PRE WORKER [cnt:%s]: %s %s", worker.nr, req.method, req.path) |
|
367 | 392 | |
|
368 | 393 | |
|
369 | 394 | def post_request(worker, req, environ, resp): |
|
370 | 395 | total_time = time.time() - worker.start_time |
|
371 | 396 | # Gunicorn sometimes has problems with reading the status_code |
|
372 | 397 | status_code = getattr(resp, 'status_code', '') |
|
373 | 398 | worker.log.debug( |
|
374 | 399 | "GNCRN POST WORKER [cnt:%s]: %s %s resp: %s, Load Time: %.4fs", |
|
375 | 400 | worker.nr, req.method, req.path, status_code, total_time) |
|
376 | 401 | _check_memory_usage(worker) |
|
377 | 402 | |
|
378 | 403 | |
|
379 | 404 | def _filter_proxy(ip): |
|
380 | 405 | """ |
|
381 | 406 | Passed in IP addresses in HEADERS can be in a special format of multiple |
|
382 | 407 | ips. Those comma separated IPs are passed from various proxies in the |
|
383 | 408 | chain of request processing. The left-most is the original client. |
|
384 | 409 | We only care about the first IP, which came from the original client. |
|
385 | 410 | |
|
386 | 411 | :param ip: ip string from headers |
|
387 | 412 | """ |
|
388 | 413 | if ',' in ip: |
|
389 | 414 | _ips = ip.split(',') |
|
390 | 415 | _first_ip = _ips[0].strip() |
|
391 | 416 | return _first_ip |
|
392 | 417 | return ip |
|
393 | 418 | |
|
394 | 419 | |
|
395 | 420 | def _filter_port(ip): |
|
396 | 421 | """ |
|
397 | 422 | Removes a port from ip, there are 4 main cases to handle here. |
|
398 | 423 | - ipv4 eg. 127.0.0.1 |
|
399 | 424 | - ipv6 eg. ::1 |
|
400 | 425 | - ipv4+port eg. 127.0.0.1:8080 |
|
401 | 426 | - ipv6+port eg. [::1]:8080 |
|
402 | 427 | |
|
403 | 428 | :param ip: |
|
404 | 429 | """ |
|
405 | 430 | def is_ipv6(ip_addr): |
|
406 | 431 | if hasattr(socket, 'inet_pton'): |
|
407 | 432 | try: |
|
408 | 433 | socket.inet_pton(socket.AF_INET6, ip_addr) |
|
409 | 434 | except socket.error: |
|
410 | 435 | return False |
|
411 | 436 | else: |
|
412 | 437 | return False |
|
413 | 438 | return True |
|
414 | 439 | |
|
415 | 440 | if ':' not in ip: # must be ipv4 pure ip |
|
416 | 441 | return ip |
|
417 | 442 | |
|
418 | 443 | if '[' in ip and ']' in ip: # ipv6 with port |
|
419 | 444 | return ip.split(']')[0][1:].lower() |
|
420 | 445 | |
|
421 | 446 | # must be ipv6 or ipv4 with port |
|
422 | 447 | if is_ipv6(ip): |
|
423 | 448 | return ip |
|
424 | 449 | else: |
|
425 | 450 | ip, _port = ip.split(':')[:2] # means ipv4+port |
|
426 | 451 | return ip |
|
427 | 452 | |
|
428 | 453 | |
|
429 | 454 | def get_ip_addr(environ): |
|
430 | 455 | proxy_key = 'HTTP_X_REAL_IP' |
|
431 | 456 | proxy_key2 = 'HTTP_X_FORWARDED_FOR' |
|
432 | 457 | def_key = 'REMOTE_ADDR' |
|
433 | 458 | |
|
434 | 459 | def _filters(x): |
|
435 | 460 | return _filter_port(_filter_proxy(x)) |
|
436 | 461 | |
|
437 | 462 | ip = environ.get(proxy_key) |
|
438 | 463 | if ip: |
|
439 | 464 | return _filters(ip) |
|
440 | 465 | |
|
441 | 466 | ip = environ.get(proxy_key2) |
|
442 | 467 | if ip: |
|
443 | 468 | return _filters(ip) |
|
444 | 469 | |
|
445 | 470 | ip = environ.get(def_key, '0.0.0.0') |
|
446 | 471 | return _filters(ip) |
|
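# quick sanity checks (hypothetical values):
|
#   _filter_proxy('203.0.113.5, 10.0.0.1')              -> '203.0.113.5' (left-most, original client)
|
#   _filter_port('[::1]:8080')                          -> '::1'
|
#   get_ip_addr({'HTTP_X_REAL_IP': '203.0.113.5:443'})  -> '203.0.113.5'
|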
447 | 472 | |
|
448 | 473 | |
|
449 | 474 | class RhodeCodeLogger(Logger): |
|
450 | 475 | """ |
|
451 | 476 | Custom Logger that allows some customization that gunicorn doesn't allow |
|
452 | 477 | """ |
|
453 | 478 | |
|
454 | 479 | datefmt = r"%Y-%m-%d %H:%M:%S" |
|
455 | 480 | |
|
456 | 481 | def __init__(self, cfg): |
|
457 | 482 | Logger.__init__(self, cfg) |
|
458 | 483 | |
|
459 | 484 | def now(self): |
|
460 | 485 | """ return date in RhodeCode Log format """ |
|
461 | 486 | now = time.time() |
|
462 | 487 | msecs = int((now - int(now)) * 1000) |
|
463 | 488 | return time.strftime(self.datefmt, time.localtime(now)) + '.{0:03d}'.format(msecs) |
|
464 | 489 | |
|
465 | 490 | def atoms(self, resp, req, environ, request_time): |
|
466 | 491 | """ Gets atoms for log formatting. |
|
467 | 492 | """ |
|
468 | 493 | status = resp.status |
|
469 | 494 | if isinstance(status, str): |
|
470 | 495 | status = status.split(None, 1)[0] |
|
471 | 496 | atoms = { |
|
472 | 497 | 'h': get_ip_addr(environ), |
|
473 | 498 | 'l': '-', |
|
474 | 499 | 'u': self._get_user(environ) or '-', |
|
475 | 500 | 't': self.now(), |
|
476 | 501 | 'r': "%s %s %s" % (environ['REQUEST_METHOD'], |
|
477 | 502 | environ['RAW_URI'], |
|
478 | 503 | environ["SERVER_PROTOCOL"]), |
|
479 | 504 | 's': status, |
|
480 | 505 | 'm': environ.get('REQUEST_METHOD'), |
|
481 | 506 | 'U': environ.get('PATH_INFO'), |
|
482 | 507 | 'q': environ.get('QUERY_STRING'), |
|
483 | 508 | 'H': environ.get('SERVER_PROTOCOL'), |
|
484 | 509 | 'b': getattr(resp, 'sent', None) is not None and str(resp.sent) or '-', |
|
485 | 510 | 'B': getattr(resp, 'sent', None), |
|
486 | 511 | 'f': environ.get('HTTP_REFERER', '-'), |
|
487 | 512 | 'a': environ.get('HTTP_USER_AGENT', '-'), |
|
488 | 513 | 'T': request_time.seconds, |
|
489 | 514 | 'D': (request_time.seconds * 1000000) + request_time.microseconds, |
|
490 | 515 | 'M': (request_time.seconds * 1000) + int(request_time.microseconds/1000), |
|
491 | 516 | 'L': "%d.%06d" % (request_time.seconds, request_time.microseconds), |
|
492 | 517 | 'p': "<%s>" % os.getpid() |
|
493 | 518 | } |
|
494 | 519 | |
|
495 | 520 | # add request headers |
|
496 | 521 | if hasattr(req, 'headers'): |
|
497 | 522 | req_headers = req.headers |
|
498 | 523 | else: |
|
499 | 524 | req_headers = req |
|
500 | 525 | |
|
501 | 526 | if hasattr(req_headers, "items"): |
|
502 | 527 | req_headers = req_headers.items() |
|
503 | 528 | |
|
504 | 529 | atoms.update({"{%s}i" % k.lower(): v for k, v in req_headers}) |
|
505 | 530 | |
|
506 | 531 | resp_headers = resp.headers |
|
507 | 532 | if hasattr(resp_headers, "items"): |
|
508 | 533 | resp_headers = resp_headers.items() |
|
509 | 534 | |
|
510 | 535 | # add response headers |
|
511 | 536 | atoms.update({"{%s}o" % k.lower(): v for k, v in resp_headers}) |
|
512 | 537 | |
|
513 | 538 | # add environ variables |
|
514 | 539 | environ_variables = environ.items() |
|
515 | 540 | atoms.update({"{%s}e" % k.lower(): v for k, v in environ_variables}) |
|
516 | 541 | |
|
517 | 542 | return atoms |
|
518 | 543 | |
|
519 | 544 | |
|
520 | 545 | logger_class = RhodeCodeLogger |
@@ -1,824 +1,880 b'' | |||
|
1 | 1 | |
|
2 | 2 | ; ######################################### |
|
3 | 3 | ; RHODECODE COMMUNITY EDITION CONFIGURATION |
|
4 | 4 | ; ######################################### |
|
5 | 5 | |
|
6 | 6 | [DEFAULT] |
|
7 | 7 | ; Debug flag sets all loggers to debug, and enables request tracking |
|
8 | 8 | debug = false |
|
9 | 9 | |
|
10 | 10 | ; ######################################################################## |
|
11 | 11 | ; EMAIL CONFIGURATION |
|
12 | 12 | ; These settings will be used by the RhodeCode mailing system |
|
13 | 13 | ; ######################################################################## |
|
14 | 14 | |
|
15 | 15 | ; prefix all emails subjects with given prefix, helps filtering out emails |
|
16 | 16 | #email_prefix = [RhodeCode] |
|
17 | 17 | |
|
18 | 18 | ; email FROM address from which all mails will be sent |
|
19 | 19 | #app_email_from = rhodecode-noreply@localhost |
|
20 | 20 | |
|
21 | 21 | #smtp_server = mail.server.com |
|
22 | 22 | #smtp_username = |
|
23 | 23 | #smtp_password = |
|
24 | 24 | #smtp_port = |
|
25 | 25 | #smtp_use_tls = false |
|
26 | 26 | #smtp_use_ssl = true |
|
27 | 27 | |
|
28 | 28 | [server:main] |
|
29 | 29 | ; COMMON HOST/IP CONFIG, This applies mostly to develop setup, |
|
30 | 30 | ; Host port for gunicorn are controlled by gunicorn_conf.py |
|
31 | 31 | host = 127.0.0.1 |
|
32 | 32 | port = 10020 |
|
33 | 33 | |
|
34 | 34 | |
|
35 | 35 | ; ########################### |
|
36 | 36 | ; GUNICORN APPLICATION SERVER |
|
37 | 37 | ; ########################### |
|
38 | 38 | |
|
39 | 39 | ; run with gunicorn --config gunicorn_conf.py --paste rhodecode.ini |
|
40 | 40 | |
|
41 | 41 | ; Module to use, this setting shouldn't be changed |
|
42 | 42 | use = egg:gunicorn#main |
|
43 | 43 | |
|
44 | 44 | ; Prefix middleware for RhodeCode. |
|
45 | 45 | ; recommended when using a proxy setup. |
|
46 | 46 | ; allows serving RhodeCode under a prefix on the server, |
|
47 | 47 | ; e.g. https://server.com/custom_prefix. Enable the `filter-with =` option below as well, |
|
48 | 48 | ; and set your prefix like: `prefix = /custom_prefix` |
|
49 | 49 | ; be sure to also set beaker.session.cookie_path = /custom_prefix if you need |
|
50 | 50 | ; to make your cookies work only on the prefix url |
|
51 | 51 | [filter:proxy-prefix] |
|
52 | 52 | use = egg:PasteDeploy#prefix |
|
53 | 53 | prefix = / |
|
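; a minimal worked example for serving RhodeCode under /custom_prefix (hypothetical
|
; prefix), combining the settings described above:
|
;   [filter:proxy-prefix]
|
;   prefix = /custom_prefix
|
;   ... and in [app:main]: filter-with = proxy-prefix
|
;   plus beaker.session.cookie_path = /custom_prefix if cookies must be prefix-scoped
|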
54 | 54 | |
|
55 | 55 | [app:main] |
|
56 | 56 | ; The %(here)s variable will be replaced with the absolute path of parent directory |
|
57 | 57 | ; of this file |
|
58 | 58 | ; Each option in the app:main section can be overridden by an environment variable |
|
59 | 59 | ; |
|
60 | 60 | ;To override an option: |
|
61 | 61 | ; |
|
62 | 62 | ;RC_<KeyName> |
|
63 | 63 | ;Everything should be uppercase, . and - should be replaced by _. |
|
64 | 64 | ;For example, if you have these configuration settings: |
|
65 | 65 | ;rc_cache.repo_object.backend = foo |
|
66 | 66 | ;can be overridden by |
|
67 | 67 | ;export RC_CACHE_REPO_OBJECT_BACKEND=foo |
|
68 | 68 | |
|
69 | 69 | use = egg:rhodecode-enterprise-ce |
|
70 | 70 | |
|
71 | 71 | ; enable proxy prefix middleware, defined above |
|
72 | 72 | #filter-with = proxy-prefix |
|
73 | 73 | |
|
74 | 74 | ; encryption key used to encrypt social plugin tokens, |
|
75 | 75 | ; remote_urls with credentials etc, if not set it defaults to |
|
76 | 76 | ; `beaker.session.secret` |
|
77 | 77 | #rhodecode.encrypted_values.secret = |
|
78 | 78 | |
|
79 | 79 | ; decryption strict mode (enabled by default). It controls if decryption raises |
|
80 | 80 | ; `SignatureVerificationError` in case of wrong key, or damaged encryption data. |
|
81 | 81 | #rhodecode.encrypted_values.strict = false |
|
82 | 82 | |
|
83 | 83 | ; Pick algorithm for encryption. Either fernet (more secure) or aes (default) |
|
84 | 84 | ; fernet is safer, and we strongly recommend switching to it. |
|
85 | 85 | ; Due to backward compatibility aes is used as default. |
|
86 | 86 | #rhodecode.encrypted_values.algorithm = fernet |
|
87 | 87 | |
|
88 | 88 | ; Return gzipped responses from RhodeCode (static files/application) |
|
89 | 89 | gzip_responses = false |
|
90 | 90 | |
|
91 | 91 | ; Auto-generate javascript routes file on startup |
|
92 | 92 | generate_js_files = false |
|
93 | 93 | |
|
94 | 94 | ; System global default language. |
|
95 | 95 | ; All available languages: en (default), be, de, es, fr, it, ja, pl, pt, ru, zh |
|
96 | 96 | lang = en |
|
97 | 97 | |
|
98 | 98 | ; Perform a full repository scan and import on each server start. |
|
99 | 99 | ; Setting this to true could lead to a very long startup time. |
|
100 | 100 | startup.import_repos = false |
|
101 | 101 | |
|
102 | 102 | ; URL at which the application is running. This is used for Bootstrapping |
|
103 | 103 | ; requests in context when no web request is available. Used in ishell, or |
|
104 | 104 | ; SSH calls. Set this for events to receive proper url for SSH calls. |
|
105 | 105 | app.base_url = http://rhodecode.local |
|
106 | 106 | |
|
107 | 107 | ; Host at which the Service API is running. |
|
108 | 108 | app.service_api.host = http://rhodecode.local:10020 |
|
109 | 109 | |
|
110 | 110 | ; Secret for Service API authentication. |
|
111 | 111 | app.service_api.token = |
|
112 | 112 | |
|
113 | 113 | ; Unique application ID. Should be a random unique string for security. |
|
114 | 114 | app_instance_uuid = rc-production |
|
115 | 115 | |
|
116 | 116 | ; Cut off limit for large diffs (size in bytes). If the overall diff size of a |
|
117 | 117 | ; commit or pull request exceeds this limit, the diff will be displayed |
|
118 | 118 | ; partially. E.g. 512000 == 512KB |
|
119 | 119 | cut_off_limit_diff = 512000 |
|
120 | 120 | |
|
121 | 121 | ; Cut off limit for large files inside diffs (size in bytes). Each individual |
|
122 | 122 | ; file inside a diff which exceeds this limit will be displayed partially. |
|
123 | 123 | ; E.g. 128000 == 128KB |
|
124 | 124 | cut_off_limit_file = 128000 |
|
125 | 125 | |
|
126 | 126 | ; Use cached version of vcs repositories everywhere. Recommended to be `true` |
|
127 | 127 | vcs_full_cache = true |
|
128 | 128 | |
|
129 | 129 | ; Force https in RhodeCode, fixes https redirects, assumes it's always https. |
|
130 | 130 | ; Normally this is controlled by proper flags sent from http server such as Nginx or Apache |
|
131 | 131 | force_https = false |
|
132 | 132 | |
|
133 | 133 | ; use Strict-Transport-Security headers |
|
134 | 134 | use_htsts = false |
|
135 | 135 | |
|
136 | 136 | ; Set to true if your repos are exposed using the dumb protocol |
|
137 | 137 | git_update_server_info = false |
|
138 | 138 | |
|
139 | 139 | ; RSS/ATOM feed options |
|
140 | 140 | rss_cut_off_limit = 256000 |
|
141 | 141 | rss_items_per_page = 10 |
|
142 | 142 | rss_include_diff = false |
|
143 | 143 | |
|
144 | 144 | ; gist URL alias, used to create nicer urls for gist. This should be an |
|
145 | 145 | ; url that does rewrites to _admin/gists/{gistid}. |
|
146 | 146 | ; example: http://gist.rhodecode.org/{gistid}. Empty means use the internal |
|
147 | 147 | ; RhodeCode url, ie. http[s]://rhodecode.server/_admin/gists/{gistid} |
|
148 | 148 | gist_alias_url = |
|
149 | 149 | |
|
150 | 150 | ; List of views (using glob pattern syntax) that AUTH TOKENS could be |
|
151 | 151 | ; used for access. |
|
152 | 152 | ; Adding ?auth_token=TOKEN_HASH to the url authenticates this request as if it |
|
153 | 153 | ; came from the logged-in user who owns this authentication token. |
|
154 | 154 | ; Additionally the @TOKEN syntax can be used to bind the view to a specific |
|
155 | 155 | ; authentication token. Such a view is only accessible when used together |
|
156 | 156 | ; with this authentication token |
|
157 | 157 | ; list of all views can be found under `/_admin/permissions/auth_token_access` |
|
158 | 158 | ; The list should be "," separated and on a single line. |
|
159 | 159 | ; Most common views to enable: |
|
160 | 160 | |
|
161 | 161 | # RepoCommitsView:repo_commit_download |
|
162 | 162 | # RepoCommitsView:repo_commit_patch |
|
163 | 163 | # RepoCommitsView:repo_commit_raw |
|
164 | 164 | # RepoCommitsView:repo_commit_raw@TOKEN |
|
165 | 165 | # RepoFilesView:repo_files_diff |
|
166 | 166 | # RepoFilesView:repo_archivefile |
|
167 | 167 | # RepoFilesView:repo_file_raw |
|
168 | 168 | # GistView:* |
|
169 | 169 | api_access_controllers_whitelist = |
|
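; e.g. to allow token access to raw commits and all gists (an illustrative
|
; selection from the list above):
|
;   api_access_controllers_whitelist = RepoCommitsView:repo_commit_raw, GistView:*
|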
170 | 170 | |
|
171 | 171 | ; Default encoding used to convert from and to unicode |
|
172 | 172 | ; can be also a comma separated list of encoding in case of mixed encodings |
|
173 | 173 | default_encoding = UTF-8 |
|
174 | 174 | |
|
175 | 175 | ; instance-id prefix |
|
176 | 176 | ; a prefix key for this instance used for cache invalidation when running |
|
177 | 177 | ; multiple instances of RhodeCode; make sure it's globally unique for |
|
178 | 178 | ; all running RhodeCode instances. Leave empty if you don't use it |
|
179 | 179 | instance_id = |
|
180 | 180 | |
|
181 | 181 | ; Fallback authentication plugin. Set this to a plugin ID to force the usage |
|
182 | 182 | ; of an authentication plugin even if it is disabled by its settings. |
|
183 | 183 | ; This could be useful if you are unable to log in to the system due to broken |
|
184 | 184 | ; authentication settings. Then you can enable e.g. the internal RhodeCode auth |
|
185 | 185 | ; module to log in again and fix the settings. |
|
186 | 186 | ; Available builtin plugin IDs (hash is part of the ID): |
|
187 | 187 | ; egg:rhodecode-enterprise-ce#rhodecode |
|
188 | 188 | ; egg:rhodecode-enterprise-ce#pam |
|
189 | 189 | ; egg:rhodecode-enterprise-ce#ldap |
|
190 | 190 | ; egg:rhodecode-enterprise-ce#jasig_cas |
|
191 | 191 | ; egg:rhodecode-enterprise-ce#headers |
|
192 | 192 | ; egg:rhodecode-enterprise-ce#crowd |
|
193 | 193 | |
|
194 | 194 | #rhodecode.auth_plugin_fallback = egg:rhodecode-enterprise-ce#rhodecode |
|
195 | 195 | |
|
196 | 196 | ; Flag to control loading of legacy plugins in py:/path format |
|
197 | 197 | auth_plugin.import_legacy_plugins = true |
|
198 | 198 | |
|
199 | 199 | ; alternative HTTP response code for failed authentication. The default HTTP |
|
200 | 200 | ; response is 401 HTTPUnauthorized. Currently HG clients have trouble |
|
201 | 201 | ; handling that, causing a series of failed authentication calls. |
|
202 | 202 | ; Set this variable to 403 to return HTTPForbidden, or any other HTTP code |
|
203 | 203 | ; This will be served instead of default 401 on bad authentication |
|
204 | 204 | auth_ret_code = |
|
205 | 205 | |
|
206 | 206 | ; use special detection method when serving auth_ret_code, instead of serving |
|
207 | 207 | ; ret_code directly, use 401 initially (Which triggers credentials prompt) |
|
208 | 208 | ; and then serve auth_ret_code to clients |
|
209 | 209 | auth_ret_code_detection = false |
|
210 | 210 | |
|
211 | 211 | ; locking return code. When repository is locked return this HTTP code. 2XX |
|
212 | 212 | ; codes don't break the transactions while 4XX codes do |
|
213 | 213 | lock_ret_code = 423 |
|
214 | 214 | |
|
215 | 215 | ; Filesystem location where repositories should be stored |
|
216 | 216 | repo_store.path = /var/opt/rhodecode_repo_store |
|
217 | 217 | |
|
218 | 218 | ; allows setting up custom hooks in the settings page |
|
219 | 219 | allow_custom_hooks_settings = true |
|
220 | 220 | |
|
221 | 221 | ; Generated license token required for EE edition license. |
|
222 | 222 | ; New generated token value can be found in Admin > settings > license page. |
|
223 | 223 | license_token = |
|
224 | 224 | |
|
225 | 225 | ; This flag hides sensitive information on the license page such as token, and license data |
|
226 | 226 | license.hide_license_info = false |
|
227 | 227 | |
|
228 | ; Import EE license from this license path | |
|
229 | #license.import_path = %(here)s/rhodecode_enterprise.license | |
|
230 | ||
|
231 | ; import license 'if-missing' or 'force' (always override) | |
|
232 | ; if-missing means apply license if it doesn't exist. 'force' option always overrides it | |
|
233 | license.import_path_mode = if-missing | |
|
234 | ||
|
228 | 235 | ; supervisor connection uri, for managing supervisor and logs. |
|
229 | 236 | supervisor.uri = |
|
230 | 237 | |
|
231 | 238 | ; supervisord group name/id we only want this RC instance to handle |
|
232 | 239 | supervisor.group_id = prod |
|
233 | 240 | |
|
234 | 241 | ; Display extended labs settings |
|
235 | 242 | labs_settings_active = true |
|
236 | 243 | |
|
237 | 244 | ; Custom exception store path, defaults to TMPDIR |
|
238 | 245 | ; This is used to store exception from RhodeCode in shared directory |
|
239 | 246 | #exception_tracker.store_path = |
|
240 | 247 | |
|
241 | 248 | ; Send email with exception details when it happens |
|
242 | 249 | #exception_tracker.send_email = false |
|
243 | 250 | |
|
244 | 251 | ; Comma separated list of recipients for exception emails, |
|
245 | 252 | ; e.g admin@rhodecode.com,devops@rhodecode.com |
|
246 | 253 | ; Can be left empty, then emails will be sent to ALL super-admins |
|
247 | 254 | #exception_tracker.send_email_recipients = |
|
248 | 255 | |
|
249 | 256 | ; optional prefix to Add to email Subject |
|
250 | 257 | #exception_tracker.email_prefix = [RHODECODE ERROR] |
|
251 | 258 | |
|
252 | ; File store configuration. This is used to store and serve uploaded files | |
|
253 | file_store.enabled = true | |
|
259 | ; NOTE: this setting IS DEPRECATED: | |
|
260 | ; file_store backend is always enabled | |
|
261 | #file_store.enabled = true | |
|
254 | 262 | |
|
263 | ; NOTE: this setting IS DEPRECATED: | |
|
264 | ; file_store.backend = X -> use `file_store.backend.type = filesystem_v2` instead | |
|
255 | 265 | ; Storage backend, available options are: local |
|
256 | file_store.backend = local | |
|
266 | #file_store.backend = local | |
|
257 | 267 | |
|
268 | ; NOTE: this setting IS DEPRECATED: | |
|
269 | ; file_store.storage_path = X -> use `file_store.filesystem_v2.storage_path = X` instead | |
|
258 | 270 | ; path to store the uploaded binaries and artifacts |
|
259 | file_store.storage_path = /var/opt/rhodecode_data/file_store | |
|
271 | #file_store.storage_path = /var/opt/rhodecode_data/file_store | |
|
272 | ||
|
273 | ; The artifacts file-store is used to store comment attachments and artifact uploads. | 
|
274 | ; file_store backend type: filesystem_v1, filesystem_v2 or objectstore (s3-based) are available as options | |
|
275 | ; filesystem_v1 is backwards compat with pre 5.1 storage changes | |
|
276 | ; new installations should choose filesystem_v2 or objectstore (s3-based); pick filesystem_v1 when migrating from | 
|
277 | ; previous installations to keep existing artifacts without the need for a migration | 
|
278 | #file_store.backend.type = filesystem_v2 | |
|
279 | ||
|
280 | ; filesystem options... | |
|
281 | #file_store.filesystem_v1.storage_path = /var/opt/rhodecode_data/artifacts_file_store | |
|
282 | ||
|
283 | ; filesystem_v2 options... | |
|
284 | #file_store.filesystem_v2.storage_path = /var/opt/rhodecode_data/artifacts_file_store | |
|
285 | #file_store.filesystem_v2.shards = 8 | |
|
260 | 286 | |
|
287 | ; objectstore options... | |
|
288 | ; url for s3 compatible storage that allows to upload artifacts | |
|
289 | ; e.g http://minio:9000 | |
|
290 | #file_store.backend.type = objectstore | |
|
291 | #file_store.objectstore.url = http://s3-minio:9000 | |
|
292 | ||
|
293 | ; a top-level bucket to put all other shards in | |
|
294 | ; objects will be stored in rhodecode-file-store/shard-N based on the bucket_shards number | |
|
295 | #file_store.objectstore.bucket = rhodecode-file-store | |
|
296 | ||
|
297 | ; number of sharded buckets to create to distribute archives across | |
|
298 | ; default is 8 shards | |
|
299 | #file_store.objectstore.bucket_shards = 8 | |
|
300 | ||
|
301 | ; key for s3 auth | |
|
302 | #file_store.objectstore.key = s3admin | |
|
303 | ||
|
304 | ; secret for s3 auth | |
|
305 | #file_store.objectstore.secret = s3secret4 | |
|
306 | ||
|
307 | ; region for s3 storage | 
|
308 | #file_store.objectstore.region = eu-central-1 | |
|
261 | 309 | |
|
262 | 310 | ; Redis url to acquire/check generation of archives locks |
|
263 | 311 | archive_cache.locking.url = redis://redis:6379/1 |
|
264 | 312 | |
|
265 | 313 | ; Storage backend, only 'filesystem' and 'objectstore' are available now |
|
266 | 314 | archive_cache.backend.type = filesystem |
|
267 | 315 | |
|
268 | 316 | ; url for s3 compatible storage that allows to upload artifacts |
|
269 | 317 | ; e.g http://minio:9000 |
|
270 | 318 | archive_cache.objectstore.url = http://s3-minio:9000 |
|
271 | 319 | |
|
272 | 320 | ; key for s3 auth |
|
273 | 321 | archive_cache.objectstore.key = key |
|
274 | 322 | |
|
275 | 323 | ; secret for s3 auth |
|
276 | 324 | archive_cache.objectstore.secret = secret |
|
277 | 325 | |
|
278 | 326 | ;region for s3 storage |
|
279 | 327 | ; region for s3 storage |
|
280 | 328 | |
|
281 | 329 | ; number of sharded buckets to create to distribute archives across |
|
282 | 330 | ; default is 8 shards |
|
283 | 331 | archive_cache.objectstore.bucket_shards = 8 |
|
284 | 332 | |
|
285 | 333 | ; a top-level bucket to put all other shards in |
|
286 | 334 | ; objects will be stored in rhodecode-archive-cache/shard-N based on the bucket_shards number |
|
287 | 335 | archive_cache.objectstore.bucket = rhodecode-archive-cache |
|
288 | 336 | |
|
289 | 337 | ; if true, this cache will retry up to retry_attempts=N times, waiting retry_backoff seconds between tries |
|
290 | 338 | archive_cache.objectstore.retry = false |
|
291 | 339 | |
|
292 | 340 | ; number of seconds to wait for next try using retry |
|
293 | 341 | archive_cache.objectstore.retry_backoff = 1 |
|
294 | 342 | |
|
295 | 343 | ; how many tries do do a retry fetch from this backend |
|
296 | 344 | ; how many times to retry a fetch from this backend |
|
297 | 345 | |
|
298 | 346 | ; Default is $cache_dir/archive_cache if not set |
|
299 | 347 | ; Generated repo archives will be cached at this location |
|
300 | 348 | ; and served from the cache during subsequent requests for the same archive of |
|
301 | 349 | ; the repository. This path is important to be shared across filesystems and with |
|
302 | 350 | ; RhodeCode and vcsserver |
|
303 | 351 | archive_cache.filesystem.store_dir = /var/opt/rhodecode_data/archive_cache |
|
304 | 352 | |
|
305 | 353 | ; The limit in GB sets how much data we cache before recycling the least recently used entries; defaults to 10 GB |
|
306 | 354 | archive_cache.filesystem.cache_size_gb = 40 |
|
307 | 355 | |
|
308 | 356 | ; Eviction policy used to clear out after cache_size_gb limit is reached |
|
309 | 357 | archive_cache.filesystem.eviction_policy = least-recently-stored |
|
310 | 358 | |
|
311 | 359 | ; By default cache uses sharding technique, this specifies how many shards are there |
|
312 | 360 | ; default is 8 shards |
|
313 | 361 | archive_cache.filesystem.cache_shards = 8 |
|
314 | 362 | |
|
315 | 363 | ; if true, this cache will retry up to retry_attempts=N times, waiting retry_backoff seconds between tries |
|
316 | 364 | archive_cache.filesystem.retry = false |
|
317 | 365 | |
|
318 | 366 | ; number of seconds to wait for next try using retry |
|
319 | 367 | archive_cache.filesystem.retry_backoff = 1 |
|
320 | 368 | |
|
321 | 369 | ; how many tries do do a retry fetch from this backend |
|
322 | 370 | ; how many times to retry a fetch from this backend |
|
323 | 371 | |
|
324 | 372 | |
|
325 | 373 | ; ############# |
|
326 | 374 | ; CELERY CONFIG |
|
327 | 375 | ; ############# |
|
328 | 376 | |
|
329 | 377 | ; manually run celery: /path/to/celery worker --task-events --beat --app rhodecode.lib.celerylib.loader --scheduler rhodecode.lib.celerylib.scheduler.RcScheduler --loglevel DEBUG --ini /path/to/rhodecode.ini |
|
330 | 378 | |
|
331 | 379 | use_celery = true |
|
332 | 380 | |
|
333 | 381 | ; path to store schedule database |
|
334 | 382 | #celerybeat-schedule.path = |
|
335 | 383 | |
|
336 | 384 | ; connection url to the message broker (default redis) |
|
337 | 385 | celery.broker_url = redis://redis:6379/8 |
|
338 | 386 | |
|
339 | 387 | ; results backend to get results for (default redis) |
|
340 | 388 | celery.result_backend = redis://redis:6379/8 |
|
341 | 389 | |
|
342 | 390 | ; rabbitmq example |
|
343 | 391 | #celery.broker_url = amqp://rabbitmq:qweqwe@localhost:5672/rabbitmqhost |
|
344 | 392 | |
|
345 | 393 | ; maximum tasks to execute before worker restart |
|
346 | 394 | celery.max_tasks_per_child = 20 |
|
347 | 395 | |
|
348 | 396 | ; tasks will never be sent to the queue, but executed locally instead. |
|
349 | 397 | celery.task_always_eager = false |
|
350 | 398 | |
|
351 | 399 | ; ############# |
|
352 | 400 | ; DOGPILE CACHE |
|
353 | 401 | ; ############# |
|
354 | 402 | |
|
355 | 403 | ; Default cache dir for caches. Putting this into a ramdisk can boost performance. |
|
356 | 404 | ; eg. /tmpfs/data_ramdisk, however this directory might require large amount of space |
|
357 | 405 | cache_dir = /var/opt/rhodecode_data |
|
358 | 406 | |
|
359 | 407 | ; ********************************************* |
|
360 | 408 | ; `sql_cache_short` cache for heavy SQL queries |
|
361 | 409 | ; Only supported backend is `memory_lru` |
|
362 | 410 | ; ********************************************* |
|
363 | 411 | rc_cache.sql_cache_short.backend = dogpile.cache.rc.memory_lru |
|
364 | 412 | rc_cache.sql_cache_short.expiration_time = 30 |
|
365 | 413 | |
|
366 | 414 | |
|
367 | 415 | ; ***************************************************** |
|
368 | 416 | ; `cache_repo_longterm` cache for repo object instances |
|
369 | 417 | ; Only supported backend is `memory_lru` |
|
370 | 418 | ; ***************************************************** |
|
371 | 419 | rc_cache.cache_repo_longterm.backend = dogpile.cache.rc.memory_lru |
|
372 | 420 | ; by default we use 30 days; the cache is still invalidated on push |
|
373 | 421 | rc_cache.cache_repo_longterm.expiration_time = 2592000 |
|
374 | 422 | ; max items in the LRU cache; set to a smaller number to save memory and expire least recently used caches |
|
375 | 423 | rc_cache.cache_repo_longterm.max_size = 10000 |
|
376 | 424 | |
|
377 | 425 | |
|
378 | 426 | ; ********************************************* |
|
379 | 427 | ; `cache_general` cache for general purpose use |
|
380 | 428 | ; for simplicity use rc.file_namespace backend, |
|
381 | 429 | ; for performance and scale use rc.redis |
|
382 | 430 | ; ********************************************* |
|
383 | 431 | rc_cache.cache_general.backend = dogpile.cache.rc.file_namespace |
|
384 | 432 | rc_cache.cache_general.expiration_time = 43200 |
|
385 | 433 | ; file cache store path. Defaults to `cache_dir =` value or tempdir if both values are not set |
|
386 | 434 | #rc_cache.cache_general.arguments.filename = /tmp/cache_general_db |
|
387 | 435 | |
|
388 | 436 | ; alternative `cache_general` redis backend with distributed lock |
|
389 | 437 | #rc_cache.cache_general.backend = dogpile.cache.rc.redis |
|
390 | 438 | #rc_cache.cache_general.expiration_time = 300 |
|
391 | 439 | |
|
392 | 440 | ; redis_expiration_time needs to be greater than expiration_time |
|
393 | 441 | #rc_cache.cache_general.arguments.redis_expiration_time = 7200 |
|
394 | 442 | |
|
395 | 443 | #rc_cache.cache_general.arguments.host = localhost |
|
396 | 444 | #rc_cache.cache_general.arguments.port = 6379 |
|
397 | 445 | #rc_cache.cache_general.arguments.db = 0 |
|
398 | 446 | #rc_cache.cache_general.arguments.socket_timeout = 30 |
|
399 | 447 | ; more Redis options: https://dogpilecache.sqlalchemy.org/en/latest/api.html#redis-backends |
|
400 | 448 | #rc_cache.cache_general.arguments.distributed_lock = true |
|
401 | 449 | |
|
402 | 450 | ; auto-renew lock to prevent stale locks, slower but safer. Use only if problems happen |
|
403 | 451 | #rc_cache.cache_general.arguments.lock_auto_renewal = true |
|
404 | 452 | |
|
405 | 453 | ; ************************************************* |
|
406 | 454 | ; `cache_perms` cache for permission tree, auth TTL |
|
407 | 455 | ; for simplicity use rc.file_namespace backend, |
|
408 | 456 | ; for performance and scale use rc.redis |
|
409 | 457 | ; ************************************************* |
|
410 | 458 | rc_cache.cache_perms.backend = dogpile.cache.rc.file_namespace |
|
411 | 459 | rc_cache.cache_perms.expiration_time = 3600 |
|
412 | 460 | ; file cache store path. Defaults to `cache_dir =` value or tempdir if both values are not set |
|
413 | 461 | #rc_cache.cache_perms.arguments.filename = /tmp/cache_perms_db |
|
414 | 462 | |
|
415 | 463 | ; alternative `cache_perms` redis backend with distributed lock |
|
416 | 464 | #rc_cache.cache_perms.backend = dogpile.cache.rc.redis |
|
417 | 465 | #rc_cache.cache_perms.expiration_time = 300 |
|
418 | 466 | |
|
419 | 467 | ; redis_expiration_time needs to be greater than expiration_time |
|
420 | 468 | #rc_cache.cache_perms.arguments.redis_expiration_time = 7200 |
|
421 | 469 | |
|
422 | 470 | #rc_cache.cache_perms.arguments.host = localhost |
|
423 | 471 | #rc_cache.cache_perms.arguments.port = 6379 |
|
424 | 472 | #rc_cache.cache_perms.arguments.db = 0 |
|
425 | 473 | #rc_cache.cache_perms.arguments.socket_timeout = 30 |
|
426 | 474 | ; more Redis options: https://dogpilecache.sqlalchemy.org/en/latest/api.html#redis-backends |
|
427 | 475 | #rc_cache.cache_perms.arguments.distributed_lock = true |
|
428 | 476 | |
|
429 | 477 | ; auto-renew lock to prevent stale locks, slower but safer. Use only if problems happen |
|
430 | 478 | #rc_cache.cache_perms.arguments.lock_auto_renewal = true |
|
431 | 479 | |
|
432 | 480 | ; *************************************************** |
|
433 | 481 | ; `cache_repo` cache for file tree, Readme, RSS FEEDS |
|
434 | 482 | ; for simplicity use rc.file_namespace backend, |
|
435 | 483 | ; for performance and scale use rc.redis |
|
436 | 484 | ; *************************************************** |
|
437 | 485 | rc_cache.cache_repo.backend = dogpile.cache.rc.file_namespace |
|
438 | 486 | rc_cache.cache_repo.expiration_time = 2592000 |
|
439 | 487 | ; file cache store path. Defaults to `cache_dir =` value or tempdir if both values are not set |
|
440 | 488 | #rc_cache.cache_repo.arguments.filename = /tmp/cache_repo_db |
|
441 | 489 | |
|
442 | 490 | ; alternative `cache_repo` redis backend with distributed lock |
|
443 | 491 | #rc_cache.cache_repo.backend = dogpile.cache.rc.redis |
|
444 | 492 | #rc_cache.cache_repo.expiration_time = 2592000 |
|
445 | 493 | |
|
446 | 494 | ; redis_expiration_time needs to be greater than expiration_time |
|
447 | 495 | #rc_cache.cache_repo.arguments.redis_expiration_time = 2678400 |
|
448 | 496 | |
|
449 | 497 | #rc_cache.cache_repo.arguments.host = localhost |
|
450 | 498 | #rc_cache.cache_repo.arguments.port = 6379 |
|
451 | 499 | #rc_cache.cache_repo.arguments.db = 1 |
|
452 | 500 | #rc_cache.cache_repo.arguments.socket_timeout = 30 |
|
453 | 501 | ; more Redis options: https://dogpilecache.sqlalchemy.org/en/latest/api.html#redis-backends |
|
454 | 502 | #rc_cache.cache_repo.arguments.distributed_lock = true |
|
455 | 503 | |
|
456 | 504 | ; auto-renew lock to prevent stale locks, slower but safer. Use only if problems happen |
|
457 | 505 | #rc_cache.cache_repo.arguments.lock_auto_renewal = true |
|
458 | 506 | |
|
459 | 507 | ; ############## |
|
460 | 508 | ; BEAKER SESSION |
|
461 | 509 | ; ############## |
|
462 | 510 | |
|
463 | 511 | ; beaker.session.type is the storage type for logged-in users' sessions. Currently allowed |
|
464 | 512 | ; types are file, ext:redis, ext:database, ext:memcached |
|
465 | 513 | ; The fastest ones are ext:redis and ext:database; DO NOT use the memory type for sessions |
|
466 | 514 | #beaker.session.type = file |
|
467 | 515 | #beaker.session.data_dir = %(here)s/data/sessions |
|
468 | 516 | |
|
469 | 517 | ; Redis based sessions |
|
470 | 518 | beaker.session.type = ext:redis |
|
471 | 519 | beaker.session.url = redis://redis:6379/2 |
|
472 | 520 | |
|
473 | 521 | ; DB based session, fast, and allows easy management of logged-in users |
|
474 | 522 | #beaker.session.type = ext:database |
|
475 | 523 | #beaker.session.table_name = db_session |
|
476 | 524 | #beaker.session.sa.url = postgresql://postgres:secret@localhost/rhodecode |
|
477 | 525 | #beaker.session.sa.url = mysql://root:secret@127.0.0.1/rhodecode |
|
478 | 526 | #beaker.session.sa.pool_recycle = 3600 |
|
479 | 527 | #beaker.session.sa.echo = false |
|
480 | 528 | |
|
481 | 529 | beaker.session.key = rhodecode |
|
482 | 530 | beaker.session.secret = production-rc-uytcxaz |
|
483 | 531 | beaker.session.lock_dir = /data_ramdisk/lock |
|
484 | 532 | |
|
485 | 533 | ; Secure encrypted cookie. Requires AES and AES python libraries |
|
486 | 534 | ; you must disable beaker.session.secret to use this |
|
487 | 535 | #beaker.session.encrypt_key = key_for_encryption |
|
488 | 536 | #beaker.session.validate_key = validation_key |
|
489 | 537 | |
|
490 | 538 | ; Sets the session as invalid (also logging out the user) if it has not been |

491 | 539 | ; accessed for the given amount of time in seconds (2592000 seconds == 30 days) |
|
492 | 540 | beaker.session.timeout = 2592000 |
|
493 | 541 | beaker.session.httponly = true |
|
494 | 542 | |
|
495 | 543 | ; Path to use for the cookie. Set to prefix if you use prefix middleware |
|
496 | 544 | #beaker.session.cookie_path = /custom_prefix |
|
497 | 545 | |
|
498 | 546 | ; Set https secure cookie |
|
499 | 547 | beaker.session.secure = false |
|
500 | 548 | |
|
501 | 549 | ; default cookie expiration time in seconds, set to `true` to expire |
|
502 | 550 | ; at browser close |
|
503 | 551 | #beaker.session.cookie_expires = 3600 |
|
504 | 552 | |
|
505 | 553 | ; ############################# |
|
506 | 554 | ; SEARCH INDEXING CONFIGURATION |
|
507 | 555 | ; ############################# |
|
508 | 556 | |
|
509 | 557 | ; Full text search indexer is available in rhodecode-tools under |
|
510 | 558 | ; `rhodecode-tools index` command |
|
511 | 559 | |
|
512 | 560 | ; WHOOSH Backend, doesn't require additional services to run |
|
513 | 561 | ; it works well with a few dozen repos |
|
514 | 562 | search.module = rhodecode.lib.index.whoosh |
|
515 | 563 | search.location = %(here)s/data/index |
|
516 | 564 | |
|
517 | 565 | ; #################### |
|
518 | 566 | ; CHANNELSTREAM CONFIG |
|
519 | 567 | ; #################### |
|
520 | 568 | |
|
521 | 569 | ; channelstream enables persistent connections and live notification |
|
522 | 570 | ; in the system. It's also used by the chat system |
|
523 | 571 | |
|
524 | 572 | channelstream.enabled = true |
|
525 | 573 | |
|
526 | 574 | ; server address for channelstream server on the backend |
|
527 | 575 | channelstream.server = channelstream:9800 |
|
528 | 576 | |
|
529 | 577 | ; location of the channelstream server from outside world |
|
530 | 578 | ; use ws:// for http or wss:// for https. This address needs to be handled |
|
531 | 579 | ; by external HTTP server such as Nginx or Apache |
|
532 | 580 | ; see Nginx/Apache configuration examples in our docs |
|
533 | 581 | channelstream.ws_url = ws://rhodecode.yourserver.com/_channelstream |
|
534 | 582 | channelstream.secret = ENV_GENERATED |
|
535 | 583 | channelstream.history.location = /var/opt/rhodecode_data/channelstream_history |
|
536 | 584 | |
|
537 | 585 | ; Internal application path that JavaScript uses to connect to. |
|
538 | 586 | ; If you use proxy-prefix the prefix should be added before /_channelstream |
|
539 | 587 | channelstream.proxy_path = /_channelstream |
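; For example (illustrative value, assuming the instance is served under the
; same /custom_prefix proxy prefix used in the cookie_path example below):
#channelstream.proxy_path = /custom_prefix/_channelstream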
|
540 | 588 | |
|
541 | 589 | |
|
542 | 590 | ; ############################## |
|
543 | 591 | ; MAIN RHODECODE DATABASE CONFIG |
|
544 | 592 | ; ############################## |
|
545 | 593 | |
|
546 | 594 | #sqlalchemy.db1.url = sqlite:///%(here)s/rhodecode.db?timeout=30 |
|
547 | 595 | #sqlalchemy.db1.url = postgresql://postgres:qweqwe@localhost/rhodecode |
|
548 | 596 | #sqlalchemy.db1.url = mysql://root:qweqwe@localhost/rhodecode?charset=utf8 |
|
549 | 597 | ; pymysql is an alternative driver for MySQL, use in case of problems with default one |
|
550 | 598 | #sqlalchemy.db1.url = mysql+pymysql://root:qweqwe@localhost/rhodecode |
|
551 | 599 | |
|
552 | 600 | sqlalchemy.db1.url = postgresql://postgres:qweqwe@localhost/rhodecode |
|
553 | 601 | |
|
554 | 602 | ; see sqlalchemy docs for other advanced settings |
|
555 | 603 | ; print the sql statements to output |
|
556 | 604 | sqlalchemy.db1.echo = false |
|
557 | 605 | |
|
558 | 606 | ; recycle the connections after this amount of seconds |
|
559 | 607 | sqlalchemy.db1.pool_recycle = 3600 |
|
560 | 608 | |
|
561 | 609 | ; the number of connections to keep open inside the connection pool. |
|
562 | 610 | ; 0 indicates no limit |
|
563 | 611 | ; the general rule of thumb with gevent is: |
|
564 | 612 | ; if your system allows 500 concurrent greenlets (max_connections) that all do database access, |
|
565 | 613 | ; then increase pool size + max overflow so that they add up to 500. |
|
566 | 614 | #sqlalchemy.db1.pool_size = 5 |
|
567 | 615 | |
|
568 | 616 | ; The number of connections to allow in connection pool "overflow", that is |
|
569 | 617 | ; connections that can be opened above and beyond the pool_size setting, |
|
570 | 618 | ; which defaults to five. |
|
571 | 619 | #sqlalchemy.db1.max_overflow = 10 |
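; Worked example of the calculation above (values illustrative only): with
; gevent and 500 concurrent greenlets that all do database access, pool_size
; and max_overflow should add up to 500, e.g.:
#sqlalchemy.db1.pool_size = 100
#sqlalchemy.db1.max_overflow = 400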
|
572 | 620 | |
|
573 | 621 | ; Connection check ping, used to detect broken database connections |
|
574 | 622 | ; can be enabled to better handle cases of "MySQL has gone away" errors |
|
575 | 623 | #sqlalchemy.db1.ping_connection = true |
|
576 | 624 | |
|
577 | 625 | ; ########## |
|
578 | 626 | ; VCS CONFIG |
|
579 | 627 | ; ########## |
|
580 | 628 | vcs.server.enable = true |
|
581 | 629 | vcs.server = vcsserver:10010 |
|
582 | 630 | |
|
583 | 631 | ; Web server connectivity protocol, responsible for web based VCS operations |
|
584 | 632 | ; Available protocols are: |
|
585 | 633 | ; `http` - use http-rpc backend (default) |
|
586 | 634 | vcs.server.protocol = http |
|
587 | 635 | |
|
588 | 636 | ; Push/Pull operations protocol, available options are: |
|
589 | 637 | ; `http` - use http-rpc backend (default) |
|
590 | 638 | vcs.scm_app_implementation = http |
|
591 | 639 | |
|
592 | 640 | ; Push/Pull operations hooks protocol, available options are: |
|
593 | 641 | ; `http` - use http-rpc backend (default) |
|
594 | 642 | ; `celery` - use celery based hooks |
|
595 | vcs.hooks.protocol = http | |
|
643 | #DEPRECATED:vcs.hooks.protocol = http | |
|
644 | vcs.hooks.protocol.v2 = celery | |
|
596 | 645 | |
|
597 | 646 | ; Host on which this instance is listening for hooks. vcsserver will call this host to pull/push hooks so it should be |
|
598 | 647 | ; accessible via network. |
|
599 | 648 | ; Use vcs.hooks.host = "*" to bind to current hostname (for Docker) |
|
600 | 649 | vcs.hooks.host = * |
|
601 | 650 | |
|
602 | 651 | ; Start VCSServer with this instance as a subprocess, useful for development |
|
603 | 652 | vcs.start_server = false |
|
604 | 653 | |
|
605 | 654 | ; List of enabled VCS backends, available options are: |
|
606 | 655 | ; `hg` - mercurial |
|
607 | 656 | ; `git` - git |
|
608 | 657 | ; `svn` - subversion |
|
609 | 658 | vcs.backends = hg, git, svn |
|
610 | 659 | |
|
611 | 660 | ; Wait this number of seconds before killing connection to the vcsserver |
|
612 | 661 | vcs.connection_timeout = 3600 |
|
613 | 662 | |
|
614 | 663 | ; Cache flag to cache vcsserver remote calls locally |
|
615 | 664 | ; It uses cache_region `cache_repo` |
|
616 | 665 | vcs.methods.cache = true |
|
617 | 666 | |
|
667 | ; Filesystem location where Git lfs objects should be stored | |
|
668 | vcs.git.lfs.storage_location = /var/opt/rhodecode_repo_store/.cache/git_lfs_store | |
|
669 | ||
|
670 | ; Filesystem location where Mercurial largefile objects should be stored | |
|
671 | vcs.hg.largefiles.storage_location = /var/opt/rhodecode_repo_store/.cache/hg_largefiles_store | |
|
672 | ||
|
618 | 673 | ; #################################################### |
|
619 | 674 | ; Subversion proxy support (mod_dav_svn) |
|
620 | 675 | ; Maps RhodeCode repo groups into SVN paths for Apache |
|
621 | 676 | ; #################################################### |
|
622 | 677 | |
|
623 | 678 | ; Compatibility version when creating SVN repositories. Defaults to newest version when commented out. |
|
624 | 679 | ; Set a numeric version for your current SVN, e.g. 1.8 or 1.12 |
|
625 | 680 | ; Legacy available options are: pre-1.4-compatible, pre-1.5-compatible, pre-1.6-compatible, pre-1.8-compatible, pre-1.9-compatible |
|
626 | 681 | #vcs.svn.compatible_version = 1.8 |
|
627 | 682 | |
|
628 | 683 | ; Redis connection settings for svn integrations logic |
|
629 | 684 | ; This connection string needs to be the same on ce and vcsserver |
|
630 | 685 | vcs.svn.redis_conn = redis://redis:6379/0 |
|
631 | 686 | |
|
632 | 687 | ; Enable SVN proxy of requests over HTTP |
|
633 | 688 | vcs.svn.proxy.enabled = true |
|
634 | 689 | |
|
635 | 690 | ; host to connect to running SVN subsystem |
|
636 | 691 | vcs.svn.proxy.host = http://svn:8090 |
|
637 | 692 | |
|
638 | 693 | ; Enable or disable the config file generation. |
|
639 | 694 | svn.proxy.generate_config = true |
|
640 | 695 | |
|
641 | 696 | ; Generate config file with `SVNListParentPath` set to `On`. |
|
642 | 697 | svn.proxy.list_parent_path = true |
|
643 | 698 | |
|
644 | 699 | ; Set location and file name of generated config file. |
|
645 | 700 | svn.proxy.config_file_path = /etc/rhodecode/conf/svn/mod_dav_svn.conf |
|
646 | 701 | |
|
647 | 702 | ; alternative mod_dav config template. This needs to be a valid mako template |
|
648 | 703 | ; Example template can be found in the source code: |
|
649 | 704 | ; rhodecode/apps/svn_support/templates/mod-dav-svn.conf.mako |
|
650 | 705 | #svn.proxy.config_template = ~/.rccontrol/enterprise-1/custom_svn_conf.mako |
|
651 | 706 | |
|
652 | 707 | ; Used as a prefix to the `Location` block in the generated config file. |
|
653 | 708 | ; In most cases it should be set to `/`. |
|
654 | 709 | svn.proxy.location_root = / |
|
655 | 710 | |
|
656 | 711 | ; Command to reload the mod dav svn configuration on change. |
|
657 | 712 | ; Example: `/etc/init.d/apache2 reload` or /home/USER/apache_reload.sh |
|
658 | 713 | ; Make sure user who runs RhodeCode process is allowed to reload Apache |
|
659 | 714 | #svn.proxy.reload_cmd = /etc/init.d/apache2 reload |
|
660 | 715 | |
|
661 | 716 | ; If the timeout expires before the reload command finishes, the command will |
|
662 | 717 | ; be killed. Setting it to zero means no timeout. Defaults to 10 seconds. |
|
663 | 718 | #svn.proxy.reload_timeout = 10 |
|
664 | 719 | |
|
665 | 720 | ; #################### |
|
666 | 721 | ; SSH Support Settings |
|
667 | 722 | ; #################### |
|
668 | 723 | |
|
669 | 724 | ; Defines if a custom authorized_keys file should be created and written on |
|
670 | 725 | ; any change of user SSH keys. Setting this to false also disables the possibility |

671 | 726 | ; for users to add SSH keys from the web interface. Super admins can still |
|
672 | 727 | ; manage SSH Keys. |
|
673 | 728 | ssh.generate_authorized_keyfile = true |
|
674 | 729 | |
|
675 | 730 | ; Options for ssh, default is `no-pty,no-port-forwarding,no-X11-forwarding,no-agent-forwarding` |
|
676 | 731 | # ssh.authorized_keys_ssh_opts = |
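; e.g. spelling out the default listed above explicitly:
# ssh.authorized_keys_ssh_opts = no-pty,no-port-forwarding,no-X11-forwarding,no-agent-forwarding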
|
677 | 732 | |
|
678 | 733 | ; Path to the authorized_keys file where the generated entries are placed. |
|
679 | 734 | ; It is possible to have multiple key files specified in `sshd_config` e.g. |
|
680 | 735 | ; AuthorizedKeysFile %h/.ssh/authorized_keys %h/.ssh/authorized_keys_rhodecode |
|
681 | 736 | ssh.authorized_keys_file_path = /etc/rhodecode/conf/ssh/authorized_keys_rhodecode |
|
682 | 737 | |
|
683 | 738 | ; Command to execute the SSH wrapper. The binary is available in the |
|
684 | 739 | ; RhodeCode installation directory. |
|
685 | 740 | ; legacy: /usr/local/bin/rhodecode_bin/bin/rc-ssh-wrapper |
|
686 | 741 | ; new rewrite: /usr/local/bin/rhodecode_bin/bin/rc-ssh-wrapper-v2 |
|
687 | ssh.wrapper_cmd = /usr/local/bin/rhodecode_bin/bin/rc-ssh-wrapper | |
|
742 | #DEPRECATED: ssh.wrapper_cmd = /usr/local/bin/rhodecode_bin/bin/rc-ssh-wrapper | |
|
743 | ssh.wrapper_cmd.v2 = /usr/local/bin/rhodecode_bin/bin/rc-ssh-wrapper-v2 | |
|
688 | 744 | |
|
689 | 745 | ; Allow shell when executing the ssh-wrapper command |
|
690 | 746 | ssh.wrapper_cmd_allow_shell = false |
|
691 | 747 | |
|
692 | 748 | ; Enables logging and detailed output sent back to the client during SSH |
|
693 | 749 | ; operations. Useful for debugging, shouldn't be used in production. |
|
694 | 750 | ssh.enable_debug_logging = false |
|
695 | 751 | |
|
696 | 752 | ; Paths to binary executable, by default they are the names, but we can |
|
697 | 753 | ; override them if we want to use a custom one |
|
698 | 754 | ssh.executable.hg = /usr/local/bin/rhodecode_bin/vcs_bin/hg |
|
699 | 755 | ssh.executable.git = /usr/local/bin/rhodecode_bin/vcs_bin/git |
|
700 | 756 | ssh.executable.svn = /usr/local/bin/rhodecode_bin/vcs_bin/svnserve |
|
701 | 757 | |
|
702 | 758 | ; Enables SSH key generator web interface. Disabling this still allows users |
|
703 | 759 | ; to add their own keys. |
|
704 | 760 | ssh.enable_ui_key_generator = true |
|
705 | 761 | |
|
706 | 762 | ; Statsd client config, this is used to send metrics to statsd |
|
707 | 763 | ; We recommend setting statsd_exported and scraping the metrics using Prometheus |
|
708 | 764 | #statsd.enabled = false |
|
709 | 765 | #statsd.statsd_host = 0.0.0.0 |
|
710 | 766 | #statsd.statsd_port = 8125 |
|
711 | 767 | #statsd.statsd_prefix = |
|
712 | 768 | #statsd.statsd_ipv6 = false |
|
713 | 769 | |
|
714 | 770 | ; configure logging automatically at server startup; set to false |
|
715 | 771 | ; to use the below custom logging config. |
|
716 | 772 | ; RC_LOGGING_FORMATTER |
|
717 | 773 | ; RC_LOGGING_LEVEL |
|
718 | 774 | ; these env variables can control the logging settings when autoconfigure is used |
|
719 | 775 | |
|
720 | 776 | #logging.autoconfigure = true |
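; e.g. an illustrative way to drive the autoconfigured logging via environment
; variables before starting the server:
#   export RC_LOGGING_LEVEL=DEBUG
#   export RC_LOGGING_FORMATTER=json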
|
721 | 777 | |
|
722 | 778 | ; specify your own custom logging config file to configure logging |
|
723 | 779 | #logging.logging_conf_file = /path/to/custom_logging.ini |
|
724 | 780 | |
|
725 | 781 | ; Dummy marker to add new entries after. |
|
726 | 782 | ; Add any custom entries below. Please don't remove this marker. |
|
727 | 783 | custom.conf = 1 |
|
728 | 784 | |
|
729 | 785 | |
|
730 | 786 | ; ##################### |
|
731 | 787 | ; LOGGING CONFIGURATION |
|
732 | 788 | ; ##################### |
|
733 | 789 | |
|
734 | 790 | [loggers] |
|
735 | 791 | keys = root, sqlalchemy, beaker, celery, rhodecode, ssh_wrapper |
|
736 | 792 | |
|
737 | 793 | [handlers] |
|
738 | 794 | keys = console, console_sql |
|
739 | 795 | |
|
740 | 796 | [formatters] |
|
741 | 797 | keys = generic, json, color_formatter, color_formatter_sql |
|
742 | 798 | |
|
743 | 799 | ; ####### |
|
744 | 800 | ; LOGGERS |
|
745 | 801 | ; ####### |
|
746 | 802 | [logger_root] |
|
747 | 803 | level = NOTSET |
|
748 | 804 | handlers = console |
|
749 | 805 | |
|
750 | 806 | [logger_sqlalchemy] |
|
751 | 807 | level = INFO |
|
752 | 808 | handlers = console_sql |
|
753 | 809 | qualname = sqlalchemy.engine |
|
754 | 810 | propagate = 0 |
|
755 | 811 | |
|
756 | 812 | [logger_beaker] |
|
757 | 813 | level = DEBUG |
|
758 | 814 | handlers = |
|
759 | 815 | qualname = beaker.container |
|
760 | 816 | propagate = 1 |
|
761 | 817 | |
|
762 | 818 | [logger_rhodecode] |
|
763 | 819 | level = DEBUG |
|
764 | 820 | handlers = |
|
765 | 821 | qualname = rhodecode |
|
766 | 822 | propagate = 1 |
|
767 | 823 | |
|
768 | 824 | [logger_ssh_wrapper] |
|
769 | 825 | level = DEBUG |
|
770 | 826 | handlers = |
|
771 | 827 | qualname = ssh_wrapper |
|
772 | 828 | propagate = 1 |
|
773 | 829 | |
|
774 | 830 | [logger_celery] |
|
775 | 831 | level = DEBUG |
|
776 | 832 | handlers = |
|
777 | 833 | qualname = celery |
|
778 | 834 | |
|
779 | 835 | |
|
780 | 836 | ; ######## |
|
781 | 837 | ; HANDLERS |
|
782 | 838 | ; ######## |
|
783 | 839 | |
|
784 | 840 | [handler_console] |
|
785 | 841 | class = StreamHandler |
|
786 | 842 | args = (sys.stderr, ) |
|
787 | 843 | level = INFO |
|
788 | 844 | ; To enable JSON formatted logs replace 'generic/color_formatter' with 'json' |
|
789 | 845 | ; This allows sending properly formatted logs to grafana loki or elasticsearch |
|
790 | 846 | formatter = generic |
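; e.g. uncomment to switch this handler to JSON output as described above:
#formatter = json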
|
791 | 847 | |
|
792 | 848 | [handler_console_sql] |
|
793 | 849 | ; "level = DEBUG" logs SQL queries and results. |
|
794 | 850 | ; "level = INFO" logs SQL queries. |
|
795 | 851 | ; "level = WARN" logs neither. (Recommended for production systems.) |
|
796 | 852 | class = StreamHandler |
|
797 | 853 | args = (sys.stderr, ) |
|
798 | 854 | level = WARN |
|
799 | 855 | ; To enable JSON formatted logs replace 'generic/color_formatter_sql' with 'json' |
|
800 | 856 | ; This allows sending properly formatted logs to grafana loki or elasticsearch |
|
801 | 857 | formatter = generic |
|
802 | 858 | |
|
803 | 859 | ; ########## |
|
804 | 860 | ; FORMATTERS |
|
805 | 861 | ; ########## |
|
806 | 862 | |
|
807 | 863 | [formatter_generic] |
|
808 | 864 | class = rhodecode.lib.logging_formatter.ExceptionAwareFormatter |
|
809 | 865 | format = %(asctime)s.%(msecs)03d [%(process)d] %(levelname)-5.5s [%(name)s] %(message)s |
|
810 | 866 | datefmt = %Y-%m-%d %H:%M:%S |
|
811 | 867 | |
|
812 | 868 | [formatter_color_formatter] |
|
813 | 869 | class = rhodecode.lib.logging_formatter.ColorFormatter |
|
814 | 870 | format = %(asctime)s.%(msecs)03d [%(process)d] %(levelname)-5.5s [%(name)s] %(message)s |
|
815 | 871 | datefmt = %Y-%m-%d %H:%M:%S |
|
816 | 872 | |
|
817 | 873 | [formatter_color_formatter_sql] |
|
818 | 874 | class = rhodecode.lib.logging_formatter.ColorFormatterSql |
|
819 | 875 | format = %(asctime)s.%(msecs)03d [%(process)d] %(levelname)-5.5s [%(name)s] %(message)s |
|
820 | 876 | datefmt = %Y-%m-%d %H:%M:%S |
|
821 | 877 | |
|
822 | 878 | [formatter_json] |
|
823 | 879 | format = %(timestamp)s %(levelname)s %(name)s %(message)s %(req_id)s |
|
824 | 880 | class = rhodecode.lib._vendor.jsonlogger.JsonFormatter |
@@ -1,33 +1,39 b'' | |||
|
1 | 1 | FROM python:3.12.0-bullseye |
|
2 | 2 | |
|
3 | 3 | WORKDIR /project |
|
4 | 4 | |
|
5 | 5 | RUN apt-get update \ |
|
6 | 6 | && apt-get install --no-install-recommends --yes \ |
|
7 | 7 | curl \ |
|
8 | 8 | zip \ |
|
9 | 9 | graphviz \ |
|
10 | 10 | dvipng \ |
|
11 | 11 | imagemagick \ |
|
12 | 12 | make \ |
|
13 | 13 | latexmk \ |
|
14 | 14 | texlive-latex-recommended \ |
|
15 | 15 | texlive-latex-extra \ |
|
16 | 16 | texlive-xetex \ |
|
17 | 17 | fonts-freefont-otf \ |
|
18 | 18 | texlive-fonts-recommended \ |
|
19 | 19 | texlive-lang-greek \ |
|
20 | 20 | tex-gyre \ |
|
21 | 21 | && apt-get autoremove \ |
|
22 | 22 | && apt-get clean \ |
|
23 | 23 | && rm -rf /var/lib/apt/lists/* |
|
24 | 24 | |
|
25 | RUN curl "https://awscli.amazonaws.com/awscli-exe-linux-x86_64.zip" -o "awscliv2.zip" && \ | |
|
26 | unzip awscliv2.zip && \ | |
|
27 | ./aws/install && \ | |
|
28 | rm -rf ./aws && \ | |
|
29 | rm awscliv2.zip | |
|
30 | ||
|
25 | 31 | RUN \ |
|
26 | 32 | python3 -m pip install --no-cache-dir --upgrade pip && \ |
|
27 | 33 | python3 -m pip install --no-cache-dir Sphinx Pillow |
|
28 | 34 | |
|
29 | 35 | ADD requirements_docs.txt /project |
|
30 | 36 | RUN \ |
|
31 | 37 | python3 -m pip install -r requirements_docs.txt |
|
32 | 38 | |
|
33 | 39 | CMD ["sphinx-build", "-M", "html", ".", "_build"] |
@@ -1,172 +1,168 b'' | |||
|
1 | 1 | .. _system-overview-ref: |
|
2 | 2 | |
|
3 | 3 | System Overview |
|
4 | 4 | =============== |
|
5 | 5 | |
|
6 | 6 | Latest Version |
|
7 | 7 | -------------- |
|
8 | 8 | |
|
9 | 9 | * |release| on Unix and Windows systems. |
|
10 | 10 | |
|
11 | 11 | System Architecture |
|
12 | 12 | ------------------- |
|
13 | 13 | |
|
14 | 14 | The following diagram shows a typical production architecture. |
|
15 | 15 | |
|
16 | 16 | .. image:: ../images/architecture-diagram.png |
|
17 | 17 | :align: center |
|
18 | 18 | |
|
19 | 19 | Supported Operating Systems |
|
20 | 20 | --------------------------- |
|
21 | 21 | |
|
22 | 22 | Linux |
|
23 | 23 | ^^^^^ |
|
24 | 24 | |
|
25 | 25 | * Ubuntu 14.04+ |
|
26 | 26 | * CentOS 6.2, 7 and 8 |
|
27 | 27 | * RHEL 6.2, 7 and 8 |
|
28 | 28 | * Debian 7.8 |
|
29 | 29 | * RedHat Fedora |
|
30 | 30 | * Arch Linux |
|
31 | 31 | * SUSE Linux |
|
32 | 32 | |
|
33 | 33 | Windows |
|
34 | 34 | ^^^^^^^ |
|
35 | 35 | |
|
36 | 36 | * Windows Vista Ultimate 64bit |
|
37 | 37 | * Windows 7 Ultimate 64bit |
|
38 | 38 | * Windows 8 Professional 64bit |
|
39 | 39 | * Windows 8.1 Enterprise 64bit |
|
40 | 40 | * Windows Server 2008 64bit |
|
41 | 41 | * Windows Server 2008-R2 64bit |
|
42 | 42 | * Windows Server 2012 64bit |
|
43 | 43 | |
|
44 | 44 | Supported Databases |
|
45 | 45 | ------------------- |
|
46 | 46 | |
|
47 | 47 | * SQLite |
|
48 | 48 | * MySQL |
|
49 | 49 | * MariaDB |
|
50 | 50 | * PostgreSQL |
|
51 | 51 | |
|
52 | 52 | Supported Browsers |
|
53 | 53 | ------------------ |
|
54 | 54 | |
|
55 | 55 | * Chrome |
|
56 | 56 | * Safari |
|
57 | 57 | * Firefox |
|
58 | 58 | * Internet Explorer 10 & 11 |
|
59 | 59 | |
|
60 | 60 | System Requirements |
|
61 | 61 | ------------------- |
|
62 | 62 | |
|
63 | 63 | |RCE| performs best on machines with ultra-fast hard disks. Generally disk |
|
64 | 64 | performance is more important than CPU performance. In a corporate production |
|
65 | 65 | environment handling 1000s of users and |repos| you should deploy on a 12+ |
|
66 | 66 | core 64GB RAM server. In short, the more RAM the better. |
|
67 | 67 | |
|
68 | 68 | |
|
69 | 69 | For example: |
|
70 | 70 | |
|
71 | 71 | - for a team of 1 - 5 active users you can run on a 1GB RAM machine with 1 CPU |
|
72 | 72 | - above 250 active users, |RCE| needs at least 8GB of memory. |
|
73 | 73 | The number of CPUs is less important, but it is recommended to have at least 2-3 CPUs |
|
74 | 74 | |
|
75 | 75 | |
|
76 | 76 | .. _config-rce-files: |
|
77 | 77 | |
|
78 | 78 | Configuration Files |
|
79 | 79 | ------------------- |
|
80 | 80 | |
|
81 | 81 | * :file:`config/_shared/rhodecode.ini` |
|
82 | 82 | * :file:`/home/{user}/.rccontrol/{instance-id}/search_mapping.ini` |
|
83 | 83 | * :file:`/home/{user}/.rccontrol/{vcsserver-id}/vcsserver.ini` |
|
84 | 84 | * :file:`/home/{user}/.rccontrol/supervisor/supervisord.ini` |
|
85 | 85 | * :file:`/home/{user}/.rccontrol.ini` |
|
86 | 86 | * :file:`/home/{user}/.rhoderc` |
|
87 | 87 | * :file:`/home/{user}/.rccontrol/cache/MANIFEST` |
|
88 | 88 | |
|
89 | 89 | For more information, see the :ref:`config-files` section. |
|
90 | 90 | |
|
91 | 91 | Log Files |
|
92 | 92 | --------- |
|
93 | 93 | |
|
94 | 94 | * :file:`/home/{user}/.rccontrol/{instance-id}/enterprise.log` |
|
95 | 95 | * :file:`/home/{user}/.rccontrol/{vcsserver-id}/vcsserver.log` |
|
96 | 96 | * :file:`/home/{user}/.rccontrol/supervisor/supervisord.log` |
|
97 | 97 | * :file:`/tmp/rccontrol.log` |
|
98 | 98 | * :file:`/tmp/rhodecode_tools.log` |
|
99 | 99 | |
|
100 | 100 | Storage Files |
|
101 | 101 | ------------- |
|
102 | 102 | |
|
103 | 103 | * :file:`/home/{user}/.rccontrol/{instance-id}/data/index/{index-file.toc}` |
|
104 | 104 | * :file:`/home/{user}/repos/.rc_gist_store` |
|
105 | 105 | * :file:`/home/{user}/.rccontrol/{instance-id}/rhodecode.db` |
|
106 | 106 | * :file:`/opt/rhodecode/store/{unique-hash}` |
|
107 | 107 | |
|
108 | 108 | Default Repositories Location |
|
109 | 109 | ----------------------------- |
|
110 | 110 | |
|
111 | 111 | * :file:`/home/{user}/repos` |
|
112 | 112 | |
|
113 | 113 | Connection Methods |
|
114 | 114 | ------------------ |
|
115 | 115 | |
|
116 | 116 | * HTTPS |
|
117 | 117 | * SSH |
|
118 | 118 | * |RCE| API |
|
119 | 119 | |
|
120 | 120 | Internationalization Support |
|
121 | 121 | ---------------------------- |
|
122 | 122 | |
|
123 | 123 | Currently available in the following languages, see `Transifex`_ for the |
|
124 | 124 | latest details. If you want a new language added, please contact us. To |
|
125 | 125 | configure your language settings, see the :ref:`set-lang` section. |
|
126 | 126 | |
|
127 | 127 | .. hlist:: |
|
128 | 128 | |
|
129 | 129 | * Belorussian |
|
130 | 130 | * Chinese |
|
131 | 131 | * French |
|
132 | 132 | * German |
|
133 | 133 | * Italian |
|
134 | 134 | * Japanese |
|
135 | 135 | * Portuguese |
|
136 | 136 | * Polish |
|
137 | 137 | * Russian |
|
138 | 138 | * Spanish |
|
139 | 139 | |
|
140 | 140 | Licencing Information |
|
141 | 141 | --------------------- |
|
142 | 142 | |
|
143 | 143 | * See licencing information `here`_ |
|
144 | 144 | |
|
145 | 145 | Peer-to-peer Failover Support |
|
146 | 146 | ----------------------------- |
|
147 | 147 | |
|
148 | 148 | * Yes |
|
149 | 149 | |
|
150 | Additional Binaries | |
|
151 | ------------------- | |
|
152 | ||
|
153 | * Yes, see :ref:`rhodecode-nix-ref` for full details. | |
|
154 | 150 | |
|
155 | 151 | Remote Connectivity |
|
156 | 152 | ------------------- |
|
157 | 153 | |
|
158 | 154 | * Available |
|
159 | 155 | |
|
160 | 156 | Executable Files |
|
161 | 157 | ---------------- |
|
162 | 158 | |
|
163 | 159 | Windows: :file:`RhodeCode-installer-{version}.exe` |
|
164 | 160 | |
|
165 | 161 | Deprecated Support |
|
166 | 162 | ------------------ |
|
167 | 163 | |
|
168 | 164 | - Internet Explorer 8 support deprecated since version 3.7.0. |
|
169 | 165 | - Internet Explorer 9 support deprecated since version 3.8.0. |
|
170 | 166 | |
|
171 | 167 | .. _here: https://rhodecode.com/licenses/ |
|
172 | 168 | .. _Transifex: https://explore.transifex.com/rhodecode/RhodeCode/ |
@@ -1,88 +1,90 b'' | |||
|
1 | 1 | .. _auth-saml-bulk-enroll-users-ref: |
|
2 | 2 | |
|
3 | 3 | |
|
4 | 4 | Bulk enroll multiple existing users |
|
5 | 5 | ----------------------------------- |
|
6 | 6 | |
|
7 | 7 | |
|
8 | 8 | RhodeCode supports standard SAML 2.0 SSO for the web-application part. |
|
9 | 9 | Below is an example of how to enroll a list of all or some users to use SAML authentication. |
|
10 | 10 | This method simply enables SAML authentication for many users at once. |
|
11 | 11 | |
|
12 | 12 | |
|
13 | 13 | From the server where RhodeCode Enterprise is running, run ishell on the instance to which we |

14 | 14 | want to apply the SAML migration:: |
|
15 | 15 | |
|
16 | rccontrol ishell enterprise-1 | |
|
16 | ./rcstack cli ishell | |
|
17 | 17 | |
|
18 | 18 | Follow these steps to enable SAML authentication for multiple users. |
|
19 | 19 | |
|
20 | 20 | |
|
21 | 21 | 1) Create a user_id => attribute mapping |
|
22 | 22 | |
|
23 | 23 | |
|
24 | 24 | `saml2user` is a mapping of external ID from SAML provider such as OneLogin, DuoSecurity, Google. |
|
25 | 25 | This mapping consists of a local rhodecode user_id mapped to the set of required attributes needed to bind a SAML |

26 | 26 | account to an internal rhodecode user. |

27 | 27 | For example, 123 is a local rhodecode user_id, and '48253211' is a OneLogin ID. |

28 | 28 | For other providers you'd have to figure out what the user-id would be; sometimes it's the email, e.g. for Google. |

29 | 29 | Most importantly, this id needs to be unique for each user. |
|
30 | 30 | |
|
31 | 31 | .. code-block:: python |
|
32 | 32 | |
|
33 | 33 | In [1]: saml2user = { |
|
34 | 34 | ...: # OneLogin, uses externalID available to read from in the UI |
|
35 | 35 | ...: 123: {'id': '48253211'}, |
|
36 | 36 | ...: # for Google/DuoSecurity email is also an option for unique ID |
|
37 | 37 | ...: 124: {'id': 'email@domain.com'}, |
|
38 | 38 | ...: } |
|
39 | 39 | |
|
40 | 40 | |
|
41 | 41 | 2) Import the plugin you want to run migration for. |
|
42 | 42 | |
|
43 | 43 | From available options pick only one and run the `import` statement |
|
44 | 44 | |
|
45 | 45 | .. code-block:: python |
|
46 | 46 | |
|
47 | 47 | # for Duo Security |
|
48 | 48 | In [2]: from rc_auth_plugins.auth_duo_security import RhodeCodeAuthPlugin |
|
49 | # for Azure Entra | |
|
50 | In [2]: from rc_auth_plugins.auth_azure import RhodeCodeAuthPlugin | |
|
49 | 51 | # for OneLogin |
|
50 | 52 | In [2]: from rc_auth_plugins.auth_onelogin import RhodeCodeAuthPlugin |
|
51 | 53 | # generic SAML plugin |
|
52 | 54 | In [2]: from rc_auth_plugins.auth_saml import RhodeCodeAuthPlugin |
|
53 | 55 | |
|
54 | 56 | 3) Run the migration based on saml2user mapping. |
|
55 | 57 | |
|
56 | 58 | Enter the following in the ishell prompt |
|
57 | 59 | |
|
58 | 60 | .. code-block:: python |
|
59 | 61 | |
|
60 | 62 | In [3]: for user in User.get_all(): |
|
61 | 63 | ...: existing_identity = ExternalIdentity().query().filter(ExternalIdentity.local_user_id == user.user_id).scalar() |
|
62 | 64 | ...: attrs = saml2user.get(user.user_id) |
|
63 | 65 | ...: provider = RhodeCodeAuthPlugin.uid |
|
64 | 66 | ...: if existing_identity: |
|
65 | ...: print('Identity for user `{}` already exists, skipping'.format(user.username)) |
|
|
67 | ...: print(f'Identity for user `{user.username}` already exists, skipping') | |
|
66 | 68 | ...: continue |
|
67 | 69 | ...: if attrs: |
|
68 | 70 | ...: external_id = attrs['id'] |
|
69 | 71 | ...: new_external_identity = ExternalIdentity() |
|
70 | 72 | ...: new_external_identity.external_id = external_id |
|
71 | ...: new_external_identity.external_username = '{}-saml-{}'.format(user.username, user.user_id) |
|
|
73 | ...: new_external_identity.external_username = f'{user.username}-saml-{user.user_id}' | |
|
72 | 74 | ...: new_external_identity.provider_name = provider |
|
73 | 75 | ...: new_external_identity.local_user_id = user.user_id |
|
74 | 76 | ...: new_external_identity.access_token = '' |
|
75 | 77 | ...: new_external_identity.token_secret = '' |
|
76 | 78 | ...: new_external_identity.alt_token = '' |
|
77 | 79 | ...: Session().add(new_external_identity) |
|
78 | 80 | ...: Session().commit() |
|
79 | ...: print('Set user `{}` external identity bound to ExternalID:{}'.format(user.username, external_id)) |
|
|
81 | ...: print(f'Set user `{user.username}` external identity bound to ExternalID:{external_id}') | |
|
80 | 82 | |
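As a quick sanity check you can count the stored identities in the same ishell
session; this is a minimal sketch, assuming the `ExternalIdentity` model used above:

.. code-block:: python

    # total number of external identities after the migration
    In [4]: ExternalIdentity().query().count()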
|
81 | 83 | .. note:: |
|
82 | 84 | |
|
83 | 85 | saml2user can be really big and hard to maintain in ishell. It's also possible |

84 | 86 | to load it from a JSON file prepared beforehand and stored on disk. To do so run:: |
|
85 | 87 | |
|
86 | 88 | import json |
|
87 | 89 | saml2user = json.loads(open('/path/to/saml2user.json','rb').read()) |
|
88 | 90 |
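Keep in mind that JSON object keys always load as strings, while the mapping above
uses integer user ids. A minimal sketch (the file path is hypothetical) that keeps
the `saml2user.get(user.user_id)` lookups working:

.. code-block:: python

    import json

    with open('/path/to/saml2user.json', 'rb') as f:
        # convert the string keys back to int so lookups by user_id match
        saml2user = {int(k): v for k, v in json.loads(f.read()).items()}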
@@ -1,105 +1,161 b'' | |||
|
1 | 1 | .. _config-saml-duosecurity-ref: |
|
2 | 2 | |
|
3 | 3 | |
|
4 | 4 | SAML 2.0 with Duo Security |
|
5 | 5 | -------------------------- |
|
6 | 6 | |
|
7 | 7 | **This plugin is available only in EE Edition.** |
|
8 | 8 | |
|
9 | 9 | |RCE| supports SAML 2.0 Authentication with Duo Security provider. This allows |
|
10 | 10 | users to log-in to RhodeCode via SSO mechanism of external identity provider |
|
11 | 11 | such as Duo. The login can be triggered either by the external IDP, or internally |
|
12 | 12 | by clicking specific authentication button on the log-in page. |
|
13 | 13 | |
|
14 | 14 | |
|
15 | 15 | Configuration steps |
|
16 | 16 | ^^^^^^^^^^^^^^^^^^^ |
|
17 | 17 | |
|
18 | 18 | To configure Duo Security SAML authentication, use the following steps: |
|
19 | 19 | |
|
20 | 20 | 1. From the |RCE| interface, select |
|
21 | 21 | :menuselection:`Admin --> Authentication` |
|
22 | 22 | 2. Activate the `Duo Security` plugin and select :guilabel:`Save` |
|
23 | 23 | 3. Go to newly available menu option called `Duo Security` on the left side. |
|
24 | 24 | 4. Check the `enabled` check box in the plugin configuration section, |
|
25 | 25 | and fill in the required SAML information and :guilabel:`Save`, for more details, |
|
26 | 26 | see :ref:`config-saml-duosecurity` |
|
27 | 27 | |
|
28 | 28 | |
|
29 | 29 | .. _config-saml-duosecurity: |
|
30 | 30 | |
|
31 | 31 | |
|
32 | 32 | Example SAML Duo Security configuration |
|
33 | 33 | ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ |
|
34 | 34 | |
|
35 | Example configuration for SAML 2.0 with Duo Security provider |
|
|
35 | Example configuration for SAML 2.0 with Duo Security provider | |
|
36 | ||
|
37 | ||
|
38 | Enabled | |
|
39 | `True`: | |
|
36 | 40 | |
|
37 | *option*: `enabled` => `True` | |
|
38 |
|
|
|
41 | .. note:: | |
|
42 | Enable or disable this authentication plugin. | |
|
43 | ||
|
44 | ||
|
45 | Auth Cache TTL | |
|
46 | `30`: | |
|
39 | 47 | |
|
40 | *option*: `cache_ttl` => `0` | |
|
41 |
|
|
|
42 |
|
|
|
48 | .. note:: | |
|
49 | Amount of seconds to cache the authentication and permissions check response call for this plugin. | |
|
50 | Useful for expensive calls like LDAP to improve the performance of the system (0 means disabled). | |
|
51 | ||
|
52 | Debug | |
|
53 | `True`: | |
|
43 | 54 | |
|
44 | *option*: `debug` => `True` | |
|
45 |
|
|
|
55 | .. note:: | |
|
56 | Enable or disable debug mode that shows SAML errors in the RhodeCode logs. | |
|
57 | ||
|
58 | ||
|
59 | Auth button name | |
|
60 | `Duo Security`: | 
|
46 | 61 | |
|
47 | *option*: `entity_id` => `http://rc-app.com/dag/saml2/idp/metadata.php` | |
|
48 | # Identity Provider entity/metadata URI. | |
|
49 | # E.g. https://duo-gateway.com/dag/saml2/idp/metadata.php | |
|
62 | .. note:: | |
|
63 | Alternative authentication display name. E.g. AzureAuth, CorporateID etc. | 
|
64 | ||
|
65 | ||
|
66 | Entity ID | |
|
67 | `https://my-duo-gateway.com/dag/saml2/idp/metadata.php`: | |
|
68 | ||
|
69 | .. note:: | |
|
70 | Identity Provider entity/metadata URI. | |
|
71 | E.g. https://duo-gateway.com/dag/saml2/idp/metadata.php | |
|
72 | ||
|
73 | SSO URL | |
|
74 | `https://duo-gateway.com/dag/saml2/idp/SSOService.php?spentityid=<metadata_entity_id>`: | |
|
50 | 75 | |
|
51 | *option*: `sso_service_url` => `http://rc-app.com/dag/saml2/idp/SSOService.php?spentityid=http://rc.local.pl/_admin/auth/duosecurity/saml-metadata` | |
|
52 |
|
|
|
53 |
|
|
|
76 | .. note:: | |
|
77 | SSO (SingleSignOn) endpoint URL of the IdP. This can be used to initialize login. Also known as Login URL. | 
|
78 | E.g. http://rc-app.com/dag/saml2/idp/SSOService.php?spentityid=https://docker-dev/_admin/auth/duosecurity/saml-metadata | |
|
79 | ||
|
80 | SLO URL | |
|
81 | `https://duo-gateway.com/dag/saml2/idp/SingleLogoutService.php?ReturnTo=<return_url>`: | |
|
54 | 82 | |
|
55 | *option*: `slo_service_url` => `http://rc-app.com/dag/saml2/idp/SingleLogoutService.php?ReturnTo=http://rc-app.com/dag/module.php/duosecurity/logout.php` | |
|
56 |
|
|
|
57 |
|
|
|
83 | .. note:: | |
|
84 | SLO (SingleLogout) endpoint URL of the IdP. Also known as Logout URL. | 
|
85 | E.g. http://rc-app.com/dag/saml2/idp/SingleLogoutService.php?ReturnTo=https://docker-dev/_admin/auth/duosecurity/saml-sign-out-endpoint | |
|
58 | 86 | |
|
59 | *option*: `x509cert` => `<CERTIFICATE_STRING>` | |
|
60 | # Identity provider public x509 certificate. It will be converted to single-line format without headers | |
|
87 | x509cert | |
|
88 | `<CERTIFICATE_STRING>`: | |
|
61 | 89 | |
|
62 | *option*: `name_id_format` => `sha-1` | |
|
63 | # The format that specifies how the NameID is sent to the service provider. | |
|
90 | .. note:: | |
|
91 | Identity provider public x509 certificate. It will be converted to single-line format without headers. | |
|
92 | Download the raw base64 encoded certificate from the Identity provider and paste it here. | |
|
93 | ||
|
94 | SAML Signature | |
|
95 | `sha-256`: | |
|
96 | ||
|
97 | .. note:: | |
|
98 | Type of Algorithm to use for verification of SAML signature on Identity provider side. | |
|
99 | ||
|
100 | SAML Digest | |
|
101 | `sha-256`: | |
|
64 | 102 | |
|
65 | *option*: `signature_algo` => `sha-256` | |
|
66 |
|
|
|
103 | .. note:: | |
|
104 | Type of Algorithm to use for verification of SAML digest on Identity provider side. | |
|
105 | ||
|
106 | Service Provider Cert Dir | |
|
107 | `/etc/rhodecode/conf/saml_ssl/`: | |
|
67 | 108 | |
|
68 | *option*: `digest_algo` => `sha-256` | |
|
69 | # Type of Algorithm to use for verification of SAML digest on Identity provider side | |
|
109 | .. note:: | |
|
110 | Optional directory to store service provider certificate and private keys. | |
|
111 | Expected certs for the SP should be stored in this folder as: | |
|
112 | ||
|
113 | * sp.key Private Key | |
|
114 | * sp.crt Public cert | |
|
115 | * sp_new.crt Future Public cert | |
|
116 | ||
|
117 | Also you can use other cert to sign the metadata of the SP using the: | |
|
70 | 118 | |
|
71 | *option*: `cert_dir` => `/etc/saml/` | |
|
72 | # Optional directory to store service provider certificate and private keys. | |
|
73 | # Expected certs for the SP should be stored in this folder as: | |
|
74 | # * sp.key Private Key | |
|
75 | # * sp.crt Public cert | |
|
76 | # * sp_new.crt Future Public cert | |
|
77 | # | |
|
78 | # Also you can use other cert to sign the metadata of the SP using the: | |
|
79 | # * metadata.key | |
|
80 | # * metadata.crt | |
|
119 | * metadata.key | |
|
120 | * metadata.crt | |
|
121 | ||
|
122 | Expected NameID Format | |
|
123 | `nameid-format:emailAddress`: | |
|
124 | ||
|
125 | .. note:: | |
|
126 | The format that specifies how the NameID is sent to the service provider. | |
|
127 | ||
|
128 | User ID Attribute | |
|
129 | `PersonImmutableID`: | |
|
81 | 130 | |
|
82 | *option*: `user_id_attribute` => `PersonImmutableID` | |
|
83 |
|
|
|
84 |
|
|
|
131 | .. note:: | |
|
132 | User ID Attribute name. This defines which attribute in SAML response will be used to link accounts via unique id. | |
|
133 | Ensure this is returned from DuoSecurity for example via duo_username. | |
|
134 | ||
|
135 | Username Attribute | |
|
136 | `User.username`: | |
|
85 | 137 | |
|
86 | *option*: `username_attribute` => `User.username` | |
|
87 |
|
|
|
138 | .. note:: | |
|
139 | Username Attribute name. This defines which attribute in SAML response will map to a username. | |
|
88 | 140 | |
|
89 | *option*: `email_attribute` => `User.email` | |
|
90 | # Email Attribute name. This defines which attribute in SAML response will map to an email address. | |
|
141 | Email Attribute | |
|
142 | `User.email`: | |
|
143 | ||
|
144 | .. note:: | |
|
145 | Email Attribute name. This defines which attribute in SAML response will map to an email address. | |
|
146 | ||
|
91 | 147 | |
|
92 | 148 | |
|
93 | 149 | Below is example setup from DUO Administration page that can be used with above config. |
|
94 | 150 | |
|
95 | 151 | .. image:: ../images/saml-duosecurity-service-provider-example.png |
|
96 | 152 | :alt: DUO Security SAML setup example |
|
97 | 153 | :scale: 50 % |
|
98 | 154 | |
|
99 | 155 | |
|
100 | 156 | Below is an example attribute mapping set for IDP provider required by the above config. |
|
101 | 157 | |
|
102 | 158 | |
|
103 | 159 | .. image:: ../images/saml-duosecurity-attributes-example.png |
|
104 | 160 | :alt: DUO Security SAML setup example |
|
105 | 161 | :scale: 50 % No newline at end of file |
@@ -1,19 +1,20 b'' | |||
|
1 | 1 | .. _config-saml-generic-ref: |
|
2 | 2 | |
|
3 | 3 | |
|
4 | 4 | SAML 2.0 Authentication |
|
5 | 5 | ----------------------- |
|
6 | 6 | |
|
7 | 7 | |
|
8 | 8 | **This plugin is available only in EE Edition.** |
|
9 | 9 | |
|
10 | 10 | RhodeCode supports standard SAML 2.0 SSO for the web-application part. |
|
11 | 11 | |
|
12 | 12 | Please check for reference two example providers: |
|
13 | 13 | |
|
14 | 14 | .. toctree:: |
|
15 | 15 | |
|
16 | 16 | auth-saml-duosecurity |
|
17 | 17 | auth-saml-onelogin |
|
18 | auth-saml-azure | |
|
18 | 19 | auth-saml-bulk-enroll-users |
|
19 | 20 |
@@ -1,106 +1,161 b'' | |||
|
1 | 1 | .. _config-saml-onelogin-ref: |
|
2 | 2 | |
|
3 | 3 | |
|
4 | 4 | SAML 2.0 with One Login |
|
5 | 5 | ----------------------- |
|
6 | 6 | |
|
7 | 7 | **This plugin is available only in EE Edition.** |
|
8 | 8 | |
|
9 | 9 | |RCE| supports SAML 2.0 Authentication with OneLogin provider. This allows |
|
10 | 10 | users to log-in to RhodeCode via SSO mechanism of external identity provider |
|
11 | 11 | such as OneLogin. The login can be triggered either by the external IDP, or internally |
|
12 | 12 | by clicking specific authentication button on the log-in page. |
|
13 | 13 | |
|
14 | 14 | |
|
15 | 15 | Configuration steps |
|
16 | 16 | ^^^^^^^^^^^^^^^^^^^ |
|
17 | 17 | |
|
18 | 18 | To configure OneLogin SAML authentication, use the following steps: |
|
19 | 19 | |
|
20 | 20 | 1. From the |RCE| interface, select |
|
21 | 21 | :menuselection:`Admin --> Authentication` |
|
22 | 22 | 2. Activate the `OneLogin` plugin and select :guilabel:`Save` |
|
23 | 23 | 3. Go to newly available menu option called `OneLogin` on the left side. |
|
24 | 24 | 4. Check the `enabled` check box in the plugin configuration section, |
|
25 | 25 | and fill in the required SAML information and :guilabel:`Save`, for more details, |
|
26 | 26 | see :ref:`config-saml-onelogin` |
|
27 | 27 | |
|
28 | 28 | |
|
29 | 29 | .. _config-saml-onelogin: |
|
30 | 30 | |
|
31 | 31 | |
|
32 | 32 | Example SAML OneLogin configuration |
|
33 | 33 | ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ |
|
34 | 34 | |
|
35 | Example configuration for SAML 2.0 with OneLogin provider |
|
|
35 | Example configuration for SAML 2.0 with OneLogin provider | |
|
36 | ||
|
37 | ||
|
38 | Enabled | |
|
39 | `True`: | |
|
36 | 40 | |
|
37 | *option*: `enabled` => `True` | |
|
38 |
|
|
|
41 | .. note:: | |
|
42 | Enable or disable this authentication plugin. | |
|
43 | ||
|
44 | ||
|
45 | Auth Cache TTL | |
|
46 | `30`: | |
|
39 | 47 | |
|
40 | *option*: `cache_ttl` => `0` | |
|
41 |
|
|
|
42 |
|
|
|
48 | .. note:: | |
|
49 | Amount of seconds to cache the authentication and permissions check response call for this plugin. | |
|
50 | Useful for expensive calls like LDAP to improve the performance of the system (0 means disabled). | |
|
51 | ||
|
52 | Debug | |
|
53 | `True`: | |
|
43 | 54 | |
|
44 | *option*: `debug` => `True` | |
|
45 |
|
|
|
55 | .. note:: | |
|
56 | Enable or disable debug mode that shows SAML errors in the RhodeCode logs. | |
|
57 | ||
|
58 | ||
|
59 | Auth button name | |
|
60 | `OneLogin`: | 
|
46 | 61 | |
|
47 | *option*: `entity_id` => `https://app.onelogin.com/saml/metadata/xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx` | |
|
48 | # Identity Provider entity/metadata URI. | |
|
49 | # E.g. https://app.onelogin.com/saml/metadata/<onelogin_connector_id> | |
|
62 | .. note:: | |
|
63 | Alternative authentication display name. E.g. AzureAuth, CorporateID etc. | 
|
64 | ||
|
65 | ||
|
66 | Entity ID | |
|
67 | `https://app.onelogin.com/saml/metadata/<onelogin_connector_id>`: | |
|
68 | ||
|
69 | .. note:: | |
|
70 | Identity Provider entity/metadata URI. | |
|
71 | E.g. https://app.onelogin.com/saml/metadata/xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx | |
|
72 | ||
|
73 | SSO URL | |
|
74 | `https://app.onelogin.com/trust/saml2/http-post/sso/<onelogin_connector_id>`: | |
|
50 | 75 | |
|
51 | *option*: `sso_service_url` => `https://customer-domain.onelogin.com/trust/saml2/http-post/sso/xxxxxx` | |
|
52 |
|
|
|
53 |
|
|
|
76 | .. note:: | |
|
77 | SSO (SingleSignOn) endpoint URL of the IdP. This can be used to initialize login. Also known as Login URL. | 
|
78 | E.g. https://app.onelogin.com/trust/saml2/http-post/sso/<onelogin_connector_id> | |
|
79 | ||
|
80 | SLO URL | |
|
81 | `https://app.onelogin.com/trust/saml2/http-redirect/slo/<onelogin_connector_id>`: | |
|
54 | 82 | |
|
55 | *option*: `slo_service_url` => `https://customer-domain.onelogin.com/trust/saml2/http-redirect/slo/xxxxxx` | |
|
56 |
|
|
|
57 |
|
|
|
83 | .. note:: | |
|
84 | SLO (SingleLogout) endpoint URL of the IdP. Also known as Logout URL. | 
|
85 | E.g. https://app.onelogin.com/trust/saml2/http-redirect/slo/<onelogin_connector_id> | |
|
58 | 86 | |
|
59 | *option*: `x509cert` => `<CERTIFICATE_STRING>` | |
|
60 | # Identity provider public x509 certificate. It will be converted to single-line format without headers | |
|
87 | x509cert | |
|
88 | `<CERTIFICATE_STRING>`: | |
|
61 | 89 | |
|
62 | *option*: `name_id_format` => `sha-1` | |
|
63 | # The format that specifies how the NameID is sent to the service provider. | |
|
90 | .. note:: | |
|
91 | Identity provider public x509 certificate. It will be converted to single-line format without headers. | |
|
92 | Download the raw base64 encoded certificate from the Identity provider and paste it here. | |
|
93 | ||
|
94 | SAML Signature | |
|
95 | `sha-256`: | |
|
96 | ||
|
97 | .. note:: | |
|
98 | Type of Algorithm to use for verification of SAML signature on Identity provider side. | |
|
99 | ||
|
100 | SAML Digest | |
|
101 | `sha-256`: | |
|
64 | 102 | |
|
65 | *option*: `signature_algo` => `sha-256` | |
|
66 |
|
|
|
103 | .. note:: | |
|
104 | Type of Algorithm to use for verification of SAML digest on Identity provider side. | |
|
105 | ||
|
106 | Service Provider Cert Dir | |
|
107 | `/etc/rhodecode/conf/saml_ssl/`: | |
|
67 | 108 | |
|
68 | *option*: `digest_algo` => `sha-256` | |
|
69 | # Type of Algorithm to use for verification of SAML digest on Identity provider side | |
|
109 | .. note:: | |
|
110 | Optional directory to store service provider certificate and private keys. | |
|
111 | Expected certs for the SP should be stored in this folder as: | |
|
112 | ||
|
113 | * sp.key Private Key | |
|
114 | * sp.crt Public cert | |
|
115 | * sp_new.crt Future Public cert | |
|
70 | 116 | |
|
71 | *option*: `cert_dir` => `/etc/saml/` | |
|
72 | # Optional directory to store service provider certificate and private keys. | |
|
73 | # Expected certs for the SP should be stored in this folder as: | |
|
74 | # * sp.key Private Key | |
|
75 | # * sp.crt Public cert | |
|
76 | # * sp_new.crt Future Public cert | |
|
77 | # | |
|
78 | # Also you can use other cert to sign the metadata of the SP using the: | |
|
79 | # * metadata.key | |
|
80 | # * metadata.crt | |
|
117 | Also you can use other cert to sign the metadata of the SP using the: | |
|
118 | ||
|
119 | * metadata.key | |
|
120 | * metadata.crt | |
|
121 | ||
|
122 | Expected NameID Format | |
|
123 | `nameid-format:emailAddress`: | |
|
124 | ||
|
125 | .. note:: | |
|
126 | The format that specifies how the NameID is sent to the service provider. | |
|
127 | ||
|
128 | User ID Attribute | |
|
129 | `PersonImmutableID`: | |
|
81 | 130 | |
|
82 | *option*: `user_id_attribute` => `PersonImmutableID` | |
|
83 |
|
|
|
84 |
|
|
|
131 | .. note:: | |
|
132 | User ID Attribute name. This defines which attribute in SAML response will be used to link accounts via unique id. | |
|
133 | Ensure this is returned from OneLogin, for example via a mapped unique user attribute. | 
|
134 | ||
|
135 | Username Attribute | |
|
136 | `User.username`: | |
|
85 | 137 | |
|
86 | *option*: `username_attribute` => `User.username` | |
|
87 |
|
|
|
138 | .. note:: | |
|
139 | Username Attribute name. This defines which attribute in SAML response will map to a username. | |
|
88 | 140 | |
|
89 | *option*: `email_attribute` => `User.email` | |
|
90 | # Email Attribute name. This defines which attribute in SAML response will map to an email address. | |
|
141 | Email Attribute | |
|
142 | `User.email`: | |
|
143 | ||
|
144 | .. note:: | |
|
145 | Email Attribute name. This defines which attribute in SAML response will map to an email address. | |
|
91 | 146 | |
|
92 | 147 | |
|
93 | 148 | |
|
94 | 149 | Below is example setup that can be used with OneLogin SAML authentication that can be used with above config.. |
|
95 | 150 | |
|
96 | 151 | .. image:: ../images/saml-onelogin-config-example.png |
|
97 | 152 | :alt: OneLogin SAML setup example |
|
98 | 153 | :scale: 50 % |
|
99 | 154 | |
|
100 | 155 | |
|
101 | 156 | Below is an example attribute mapping set for IDP provider required by the above config. |
|
102 | 157 | |
|
103 | 158 | |
|
104 | 159 | .. image:: ../images/saml-onelogin-attributes-example.png |
|
105 | 160 | :alt: OneLogin SAML setup example |
|
106 | 161 | :scale: 50 % No newline at end of file |
@@ -1,34 +1,35 b'' | |||
|
1 | 1 | .. _authentication-ref: |
|
2 | 2 | |
|
3 | 3 | Authentication Options |
|
4 | 4 | ====================== |
|
5 | 5 | |
|
6 | 6 | |RCE| provides a built in authentication against its own database. This is |
|
7 | 7 | implemented using ``RhodeCode Internal`` plugin. This plugin is enabled by default. |
|
8 | 8 | Additionally, |RCE| provides a Pluggable Authentication System. This gives the |
|
9 | 9 | administrator greater control over how users authenticate with the system. |
|
10 | 10 | |
|
11 | 11 | .. important:: |
|
12 | 12 | |
|
13 | 13 | You can disable the built in |RCE| authentication plugin |
|
14 | 14 | ``RhodeCode Internal`` and force all authentication to go |
|
15 | 15 | through your authentication plugin of choice e.g LDAP only. |
|
16 | 16 | However, if you do this, and your external authentication tools fails, |
|
17 | 17 | accessing |RCE| will be blocked unless a fallback plugin is |
|
18 | 18 | enabled via :file: rhodecode.ini |
|
19 | 19 | |
|
20 | 20 | |
|
21 | 21 | |RCE| comes with the following user authentication management plugins: |
|
22 | 22 | |
|
23 | 23 | |
|
24 | 24 | .. toctree:: |
|
25 | 25 | |
|
26 | 26 | auth-token |
|
27 | 27 | auth-ldap |
|
28 | 28 | auth-ldap-groups |
|
29 | 29 | auth-saml-generic |
|
30 | 30 | auth-saml-onelogin |
|
31 | 31 | auth-saml-duosecurity |
|
32 | auth-saml-azure | |
|
32 | 33 | auth-crowd |
|
33 | 34 | auth-pam |
|
34 | 35 | ssh-connection |
@@ -1,243 +1,14 b'' | |||
|
1 | 1 | .. _dev-setup: |
|
2 | 2 | |
|
3 | 3 | =================== |
|
4 | 4 | Development setup |
|
5 | 5 | =================== |
|
6 | 6 | |
|
7 | ||
|
8 | RhodeCode Enterprise runs inside a Nix managed environment. This ensures build | |
|
9 | environment dependencies are correctly declared and installed during setup. | |
|
10 | It also enables atomic upgrades, rollbacks, and multiple instances of RhodeCode | |
|
11 | Enterprise running with isolation. | |
|
12 | ||
|
13 | To set up RhodeCode Enterprise inside the Nix environment, use the following steps: | |
|
14 | ||
|
15 | ||
|
16 | ||
|
17 | Setup Nix Package Manager | |
|
18 | ------------------------- | |
|
19 | ||
|
20 | To install the Nix Package Manager, please run:: | |
|
21 | ||
|
22 | $ curl https://releases.nixos.org/nix/nix-2.3.4/install | sh | |
|
23 | ||
|
24 | or go to https://nixos.org/nix/ and follow the installation instructions. | |
|
25 | Once this is correctly set up on your system, you should be able to use the | |
|
26 | following commands: | |
|
27 | ||
|
28 | * `nix-env` | |
|
29 | ||
|
30 | * `nix-shell` | |
|
31 | ||
|
32 | ||
|
33 | .. tip:: | |
|
34 | ||
|
35 | Update your channels frequently by running ``nix-channel --update``. | |
|
36 | ||
|
37 | .. note:: | |
|
38 | ||
|
39 | To uninstall nix run the following: | |
|
40 | ||
|
41 | remove the . "$HOME/.nix-profile/etc/profile.d/nix.sh" line in your ~/.profile or ~/.bash_profile | |
|
42 | rm -rf $HOME/{.nix-channels,.nix-defexpr,.nix-profile,.config/nixpkgs} | |
|
43 | sudo rm -rf /nix | |
|
44 | ||
|
45 | Switch nix to the latest STABLE channel | |
|
46 | --------------------------------------- | |
|
47 | ||
|
48 | run:: | |
|
49 | ||
|
50 | nix-channel --add https://nixos.org/channels/nixos-20.03 nixpkgs | |
|
51 | ||
|
52 | Followed by:: | |
|
53 | ||
|
54 | nix-channel --update | |
|
55 | nix-env -i nix-2.3.4 | |
|
56 | ||
|
57 | ||
|
58 | Install required binaries | |
|
59 | ------------------------- | |
|
60 | ||
|
61 | We need some handy tools first. | |
|
62 | ||
|
63 | run:: | |
|
64 | ||
|
65 | nix-env -i nix-prefetch-hg | |
|
66 | nix-env -i nix-prefetch-git | |
|
67 | ||
|
68 | ||
|
69 | Speed up JS build by installing PhantomJS | |
|
70 | ----------------------------------------- | |
|
71 | ||
|
72 | PhantomJS will be downloaded each time nix-shell is invoked. To speed this by | |
|
73 | setting already downloaded version do this:: | |
|
74 | ||
|
75 | nix-env -i phantomjs-2.1.1 | |
|
76 | ||
|
77 | # and set nix bin path | |
|
78 | export PATH=$PATH:~/.nix-profile/bin | |
|
79 | ||
|
80 | ||
|
81 | Clone the required repositories | |
|
82 | ------------------------------- | |
|
83 | ||
|
84 | After Nix is set up, clone the RhodeCode Enterprise Community Edition and | |
|
85 | RhodeCode VCSServer repositories into the same directory. | |
|
86 | RhodeCode currently is using Mercurial Version Control System, please make sure | |
|
87 | you have it installed before continuing. | |
|
88 | ||
|
89 | To obtain the required sources, use the following commands:: | |
|
90 | ||
|
91 | mkdir rhodecode-develop && cd rhodecode-develop | |
|
92 | hg clone -u default https://code.rhodecode.com/rhodecode-enterprise-ce | |
|
93 | hg clone -u default https://code.rhodecode.com/rhodecode-vcsserver | |
|
94 | ||
|
95 | .. note:: | |
|
96 | ||
|
97 | If you cannot clone the repository, please contact us via support@rhodecode.com | |
|
98 | ||
|
99 | ||
|
100 | Install some required libraries | |
|
101 | ------------------------------- | |
|
102 | ||
|
103 | There are some required drivers and dev libraries that we need to install to | |
|
104 | test RhodeCode under different types of databases. For example in Ubuntu we | |
|
105 | need to install the following. | |
|
106 | ||
|
107 | required libraries:: | |
|
108 | ||
|
109 | # svn related | |
|
110 | sudo apt-get install libapr1-dev libaprutil1-dev | |
|
111 | sudo apt-get install libsvn-dev | |
|
112 | # libcurl required too | |
|
113 | sudo apt-get install libcurl4-openssl-dev | |
|
114 | # mysql/pg server for development, optional | |
|
115 | sudo apt-get install mysql-server libmysqlclient-dev | |
|
116 | sudo apt-get install postgresql postgresql-contrib libpq-dev | |
|
117 | ||
|
118 | ||
|
119 | ||
|
120 | Enter the Development Shell | |
|
121 | --------------------------- | |
|
122 | ||
|
123 | The final step is to start the development shells. To do this, run the | |
|
124 | following command from inside the cloned repository:: | |
|
125 | ||
|
126 | # first, the vcsserver | |
|
127 | cd ~/rhodecode-vcsserver | |
|
128 | nix-shell | |
|
129 | ||
|
130 | # then enterprise sources | |
|
131 | cd ~/rhodecode-enterprise-ce | |
|
132 | nix-shell | |
|
133 | ||
|
134 | .. note:: | |
|
135 | ||
|
136 | On the first run, this will take a while to download and optionally compile | |
|
137 | a few things. The following runs will be faster. The development shell works | |
|
138 | fine on both MacOS and Linux platforms. | |
|
139 | ||
|
140 | ||
|
141 | Create config.nix for development | |
|
142 | --------------------------------- | |
|
143 | ||
|
144 | In order to run proper tests and setup linking across projects, a config.nix | |
|
145 | file needs to be setup:: | |
|
146 | ||
|
147 | # create config | |
|
148 | mkdir -p ~/.nixpkgs | |
|
149 | touch ~/.nixpkgs/config.nix | |
|
150 | ||
|
151 | # put the below content into the ~/.nixpkgs/config.nix file | |
|
152 | # adjusts, the path to where you cloned your repositories. | |
|
153 | ||
|
154 | { | |
|
155 | rc = { | |
|
156 | sources = { | |
|
157 | rhodecode-vcsserver = "/home/dev/rhodecode-vcsserver"; | |
|
158 | rhodecode-enterprise-ce = "/home/dev/rhodecode-enterprise-ce"; | |
|
159 | rhodecode-enterprise-ee = "/home/dev/rhodecode-enterprise-ee"; | |
|
160 | }; | |
|
161 | }; | |
|
162 | } | |
|
163 | ||
|
164 | ||
|
165 | ||
|
166 | Creating a Development Configuration | |
|
167 | ------------------------------------ | |
|
168 | ||
|
169 | To create a development environment for RhodeCode Enterprise, | |
|
170 | use the following steps: | |
|
171 | ||
|
172 | 1. Create a copy of vcsserver config: | |
|
173 | `cp ~/rhodecode-vcsserver/configs/development.ini ~/rhodecode-vcsserver/configs/dev.ini` | |
|
174 | 2. Create a copy of rhodocode config: | |
|
175 | `cp ~/rhodecode-enterprise-ce/configs/development.ini ~/rhodecode-enterprise-ce/configs/dev.ini` | |
|
176 | 3. Adjust the configuration settings to your needs if needed. | |
|
177 | ||
|
178 | .. note:: | |
|
179 | ||
|
180 | It is recommended to use the name `dev.ini` since it's included in .hgignore file. | |
|
181 | ||
|
182 | ||
|
183 | Setup the Development Database | |
|
184 | ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ | |
|
185 | ||
|
186 | To create a development database, use the following example. This is a one | |
|
187 | time operation executed from the nix-shell of rhodecode-enterprise-ce sources :: | |
|
188 | ||
|
189 | rc-setup-app dev.ini \ | |
|
190 | --user=admin --password=secret \ | |
|
191 | --email=admin@example.com \ | |
|
192 | --repos=~/my_dev_repos | |
|
193 | ||
|
194 | ||
|
195 | Compile CSS and JavaScript | |
|
196 | ^^^^^^^^^^^^^^^^^^^^^^^^^^ | |
|
197 | ||
|
198 | To use the application's frontend and prepare it for production deployment, | |
|
199 | you will need to compile the CSS and JavaScript with Grunt. | |
|
200 | This is easily done from within the nix-shell using the following command:: | |
|
201 | ||
|
202 | make web-build | |
|
203 | ||
|
204 | When developing new features you will need to recompile following any | |
|
205 | changes made to the CSS or JavaScript files when developing the code:: | |
|
206 | ||
|
207 | grunt watch | |
|
208 | ||
|
209 | This prepares the development (with comments/whitespace) versions of files. | |
|
210 | ||
|
211 | Start the Development Servers | |
|
212 | ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ | |
|
213 | ||
|
214 | From the rhodecode-vcsserver directory, start the development server in another | |
|
215 | nix-shell, using the following command:: | |
|
216 | ||
|
217 | pserve configs/dev.ini | |
|
218 | ||
|
219 | In the adjacent nix-shell which you created for your development server, you may | |
|
220 | now start CE with the following command:: | |
|
221 | ||
|
222 | ||
|
223 | pserve --reload configs/dev.ini | |
|
224 | ||
|
225 | .. note:: | |
|
226 | ||
|
227 | `--reload` flag will automatically reload the server when source file changes. | |
|
228 | ||
|
229 | ||
|
230 | Run the Environment Tests | |
|
231 | ^^^^^^^^^^^^^^^^^^^^^^^^^ | |
|
232 | ||
|
233 | Please make sure that the tests are passing to verify that your environment is | |
|
234 | set up correctly. RhodeCode uses py.test to run tests. | |
|
235 | While your instance is running, start a new nix-shell and simply run | |
|
236 | ``make test`` to run the basic test suite. | |
|
237 | ||
|
7 | Please refer to the RCstack documentation for instructions on setting up a dev environment: |
|
8 | https://docs.rhodecode.com/rcstack/dev/dev-setup.html | |
|
238 | 9 | |
|
239 | 10 | Need Help? |
|
240 | 11 | ^^^^^^^^^^ |
|
241 | 12 | |
|
242 | 13 | Join us on Slack via https://rhodecode.com/join or post questions in our |
|
243 | 14 | Community Portal at https://community.rhodecode.com |
@@ -1,92 +1,104 b'' | |||
|
1 | 1 | |RCE| |
|
2 | 2 | ===== |
|
3 | 3 | |
|
4 | 4 | |RCE| is a high-performance source code management and collaboration system. |
|
5 | 5 | It enables you to develop projects securely behind the firewall while |
|
6 | 6 | providing collaboration tools that work with |git|, |hg|, |
|
7 | 7 | and |svn| |repos|. The user interface allows you to create, edit, |
|
8 | 8 | and commit files and |repos| while managing their security permissions. |
|
9 | 9 | |
|
10 | 10 | |RCE| provides the following features: |
|
11 | 11 | |
|
12 | 12 | * Source code management. |
|
13 | 13 | * Extended permissions management. |
|
14 | 14 | * Integrated code collaboration tools. |
|
15 | 15 | * Integrated code review and notifications. |
|
16 | 16 | * Scalability provided by multi-node setup. |
|
17 | 17 | * Fully programmable automation API. |
|
18 | 18 | * Web-based hook management. |
|
19 | 19 | * Native |svn| support. |
|
20 | 20 | * Migration from existing databases. |
|
21 | 21 | * |RCE| SDK. |
|
22 | 22 | * Built-in analytics. |
|
23 | 23 | * Built-in integrations including: Slack, Webhooks (used for Jenkins/TeamCity and other CIs), Jira, Redmine, Hipchat. |
|
24 | 24 | * Pluggable authentication system. |
|
25 | 25 | * Support for AD, |LDAP|, Crowd, CAS, PAM. |
|
26 | 26 | * Support for external authentication via OAuth: Google, GitHub, Bitbucket, Twitter. |
|
27 | 27 | * Debug modes of operation. |
|
28 | 28 | * Private and public gists. |
|
29 | 29 | * Gists with limited lifetimes and within instance only sharing. |
|
30 | 30 | * Fully integrated code search function. |
|
31 | 31 | * Always on SSL connectivity. |
|
32 | 32 | |
|
33 | 33 | .. only:: html |
|
34 | 34 | |
|
35 | 35 | Table of Contents |
|
36 | 36 | ----------------- |
|
37 | 37 | |
|
38 | 38 | .. toctree:: |
|
39 | 39 | :maxdepth: 1 |
|
40 | :caption: Documentation directory | |
|
41 | ||
|
42 | Back to documentation directory <https://docs.rhodecode.com/> | |
|
43 | ||
|
44 | .. toctree:: | |
|
45 | :maxdepth: 1 | |
|
46 | :caption: RhodeCode RCstack Documentation | |
|
47 | ||
|
48 | RhodeCode RCstack Installer <https://docs.rhodecode.com/rcstack/> | |
|
49 | ||
|
50 | .. toctree:: | |
|
51 | :maxdepth: 1 | |
|
40 | 52 | :caption: Admin Documentation |
|
41 | 53 | |
|
42 | 54 | install/quick-start |
|
43 | 55 | install/install-database |
|
44 | 56 | install/install-steps |
|
45 | 57 | admin/system-overview |
|
46 | 58 | admin/system-admin |
|
47 | 59 | admin/user-admin |
|
48 | 60 | admin/repo-admin |
|
49 | 61 | admin/security-tips |
|
50 | 62 | auth/auth |
|
51 | 63 | issue-trackers/issue-trackers |
|
52 | 64 | admin/lab-settings |
|
53 | 65 | |
|
54 | 66 | .. toctree:: |
|
55 | 67 | :maxdepth: 1 |
|
56 | 68 | :caption: Feature Documentation |
|
57 | 69 | |
|
58 | 70 | collaboration/collaboration |
|
59 | 71 | collaboration/review-notifications |
|
60 | 72 | collaboration/pull-requests |
|
61 | 73 | code-review/code-review |
|
62 | 74 | integrations/integrations |
|
63 | 75 | |
|
64 | 76 | .. toctree:: |
|
65 | 77 | :maxdepth: 1 |
|
66 | 78 | :caption: User Documentation |
|
67 | 79 | |
|
68 | 80 | usage/basic-usage |
|
69 | 81 | tutorials/tutorials |
|
70 | 82 | |
|
71 | 83 | .. toctree:: |
|
72 | 84 | :maxdepth: 1 |
|
73 | 85 | :caption: Developer Documentation |
|
74 | 86 | |
|
75 | 87 | api/api |
|
76 | 88 | tools/rhodecode-tools |
|
77 | 89 | extensions/extensions-hooks |
|
78 | 90 | contributing/contributing |
|
79 | 91 | |
|
80 | 92 | .. toctree:: |
|
81 | 93 | :maxdepth: 2 |
|
82 | 94 | :caption: RhodeCode rcstack Documentation |
|
83 | 95 | |
|
84 | 96 | RhodeCode Installer <https://docs.rhodecode.com/rcstack/> |
|
85 | 97 | |
|
86 | 98 | .. toctree:: |
|
87 | 99 | :maxdepth: 1 |
|
88 | 100 | :caption: About |
|
89 | 101 | |
|
90 | 102 | release-notes/release-notes |
|
91 | 103 | known-issues/known-issues |
|
92 | 104 | admin/glossary |
@@ -1,92 +1,93 b'' | |||
|
1 | 1 | .. _quick-start: |
|
2 | 2 | |
|
3 | 3 | Quick Start Installation Guide |
|
4 | 4 | ============================== |
|
5 | 5 | |
|
6 | 6 | .. important:: |
|
7 | 7 | |
|
8 | 8 | These are quick start instructions. To optimize your |RCE|, |
|
9 | 9 | |RCC|, and |RCT| usage, read the more detailed instructions in our guides. |
|
10 | 10 | For detailed installation instructions, see |
|
11 | 11 | :ref:`RhodeCode rcstack Documentation <rcstack:installation>` |
|
12 | 12 | |
|
13 | 13 | |
|
14 | 14 | |
|
15 | 15 | To get |RCE| up and running, run through the below steps: |
|
16 | 16 | |
|
17 | 17 | 1. Register to get the latest |RCC| installer instructions from `rhodecode.com/download`_. |
|
18 | 18 | If you don't have an account, sign up at `rhodecode.com/register`_. |
|
19 | 19 | |
|
20 | 20 | 2. Run the |RCS| installer and start the init process, |
|
21 | 21 | following this example: |
|
22 | 22 | |
|
23 | 23 | .. code-block:: bash |
|
24 | 24 | |
|
25 | 25 | mkdir docker-rhodecode && cd docker-rhodecode |
|
26 | 26 | curl -L -s -o rcstack https://dls.rhodecode.com/get-rcstack && chmod +x rcstack |
|
27 | 27 | |
|
28 | 28 | ./rcstack init |
|
29 | 29 | |
|
30 | 30 | |
|
31 | 31 | .. important:: |
|
32 | 32 | |
|
33 | 33 | We recommend running RhodeCode as a non-root user, such as `rhodecode`; |
|
34 | 34 | this user must have a proper home directory and sudo permissions (to start Docker) |
|
35 | 35 | Either log in as that user to install the software, or do it as root |
|
36 | 36 | with `sudo -i -u rhodecode ./rcstack init` |
|
37 | 37 | |
|
38 | 38 | |
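A minimal sketch of creating such a user (assumptions: Docker is already installed and a `docker` group exists; group names and sudo setup vary by distro):

.. code-block:: bash

    # create a dedicated user with a proper home directory
    sudo useradd -m -s /bin/bash rhodecode
    # allow it to start Docker and to use sudo (group names are assumptions)
    sudo usermod -aG docker,sudo rhodecode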
|
39 | 39 | 3. Follow instructions on |RCS| documentation pages |
|
40 | 40 | |
|
41 | 41 | :ref:`Quick install tutorial <rcstack:quick_installation>` |
|
42 | 42 | |
|
43 | 43 | 4. Check stack status |
|
44 | 44 | |
|
45 | 45 | .. code-block:: bash |
|
46 | 46 | |
|
47 | 47 | ./rcstack status |
|
48 | 48 | |
|
49 | 49 | |
|
50 | 50 | Output should look similar to this: |
|
51 | 51 | |
|
52 | 52 | .. code-block:: bash |
|
53 | 53 | |
|
54 | 54 | --- |
|
55 | 55 | CONTAINER ID IMAGE STATUS NAMES PORTS |
|
56 | 56 | ef54fc528e3a traefik:v2.9.5 Up 2 hours rc_cluster_router-traefik-1 0.0.0.0:80->80/tcp, :::80->80/tcp |
|
57 | 57 | f3ea0539e8b0 rhodecode/rhodecode-ee:4.28.0 Up 2 hours (healthy) rc_cluster_apps-rhodecode-1 0.0.0.0:10020->10020/tcp, :::10020->10020/tcp |
|
58 | 58 | 2be52ba58ffe rhodecode/rhodecode-ee:4.28.0 Up 2 hours (healthy) rc_cluster_apps-vcsserver-1 |
|
59 | 59 | 7cd730ad3263 rhodecode/rhodecode-ee:4.28.0 Up 2 hours (healthy) rc_cluster_apps-celery-1 |
|
60 | 60 | dfa231342c87 rhodecode/rhodecode-ee:4.28.0 Up 2 hours (healthy) rc_cluster_apps-celery-beat-1 |
|
61 | 61 | d3d76ce2de96 rhodecode/rhodecode-ee:4.28.0 Up 2 hours (healthy) rc_cluster_apps-sshd-1 |
|
62 | 62 | daaac329414b rhodecode/rhodecode-ee:4.28.0 Up 2 hours (healthy) rc_cluster_apps-svn-1 |
|
63 | 63 | 7b8504fb9acb nginx:1.23.2 Up 2 hours (healthy) rc_cluster_services-nginx-1 80/tcp |
|
64 | 64 | 7279c25feb6b elasticsearch:6.8.23 Up 2 hours (healthy) rc_cluster_services-elasticsearch-1 9200/tcp, 9300/tcp |
|
65 | 65 | 19fb93587493 redis:7.0.5 Up 2 hours (healthy) rc_cluster_services-redis-1 6379/tcp |
|
66 | 66 | fb77fb6496c6 channelstream/channelstream:0.7.1 Up 2 hours (healthy) rc_cluster_services-channelstream-1 8000/tcp |
|
67 | 67 | cb6c5c022f5b postgres:14.6 Up 2 hours (healthy) rc_cluster_services-database-1 5432/tcp |
|
68 | 68 | |
|
69 | ||
|
69 | 70 | At this point you should be able to access: |
|
70 | 71 | |
|
71 | 72 | - RhodeCode instance at the domain you entered, e.g. http://rhodecode.local; the default access |
|
72 | 73 | credentials are generated and stored inside .runtime.env. |
|
73 | 74 | For example:: |
|
74 | 75 | |
|
75 | 76 | RHODECODE_USER_NAME=admin |
|
76 | 77 | RHODECODE_USER_PASS=super-secret-password |
|
77 | 78 | |
|
78 | 79 | |
|
80 | ||
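A quick way to view the generated credentials (run from the `docker-rhodecode` directory created above, assuming the default file location):

.. code-block:: bash

    # show the auto-generated admin login stored by the installer
    grep '^RHODECODE_USER_' .runtime.env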
|
79 | 81 | .. note:: |
|
80 | 82 | |
|
81 | 83 | Recommended post quick start install instructions: |
|
82 | 84 | |
|
83 | 85 | * Read the documentation |
|
84 | 86 | * Carry out the :ref:`rhodecode-post-install-ref` |
|
85 | 87 | * Set up :ref:`indexing-ref` |
|
86 | 88 | * Familiarise yourself with the :ref:`rhodecode-admin-ref` section. |
|
87 | 89 | |
|
88 | .. _rhodecode.com/download/: https://rhodecode.com/download/ | |
|
89 | 90 | .. _rhodecode.com: https://rhodecode.com/ |
|
90 | 91 | .. _rhodecode.com/register: https://rhodecode.com/register/ |
|
91 | 92 | .. _rhodecode.com/download: https://rhodecode.com/download/ |
|
92 | 93 |
@@ -1,32 +1,21 b'' | |||
|
1 | 1 | .. _install-sqlite-database: |
|
2 | 2 | |
|
3 | SQLite | |
|
4 | ------ | |
|
3 | SQLite (Deprecated) | |
|
4 | ------------------- | |
|
5 | 5 | |
|
6 | 6 | .. important:: |
|
7 | 7 | |
|
8 | We do not recommend using SQLite in a large development environment | |
|
9 | as it has an internal locking mechanism which can become a performance | |
|
10 | bottleneck when there are more than 5 concurrent users. | |
|
8 | As of 5.x, SQLite is no longer supported; we advise migrating to MySQL or PostgreSQL. |
|
11 | 9 | |
|
12 | |RCE| installs SQLite as the default database if you do not specify another | |
|
13 | during installation. SQLite is suitable for small teams, | |
|
14 | projects with a low load, and evaluation purposes since it is built into | |
|
15 | |RCE| and does not require any additional database server. | |
|
16 | ||
|
17 | Using MySQL or PostgreSQL in an large setup gives you much greater | |
|
18 | performance, and while migration tools exist to move from one database type | |
|
19 | to another, it is better to get it right first time and to immediately use | |
|
20 | MySQL or PostgreSQL when you deploy |RCE| in a production environment. | |
|
21 | 10 | |
|
22 | 11 | Migrating From SQLite to PostgreSQL |
|
23 | 12 | ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ |
|
24 | 13 | |
|
25 | 14 | If you started working with SQLite and now need to migrate your database |
|
26 | 15 | to PostgreSQL, you can contact support@rhodecode.com for some help. We have a |
|
27 | 16 | set of scripts that enable SQLite to PostgreSQL migration. These scripts have |
|
28 | 17 | been tested, and work with PostgreSQL 9.1+. |
|
29 | 18 | |
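Whatever migration path you take, back up the SQLite database file first; a minimal sketch (the database file name and location are assumptions and depend on your installation):

.. code-block:: bash

    # create a consistent copy of the SQLite DB before migrating
    sqlite3 /path/to/rhodecode.db ".backup '/path/to/rhodecode.db.bak'"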
|
30 | 19 | .. note:: |
|
31 | 20 | |
|
32 | 21 | There are no SQLite to MySQL or MariaDB scripts available. |
@@ -1,95 +1,117 b'' | |||
|
1 | 1 | .. _known-issues: |
|
2 | 2 | |
|
3 | 3 | Known Issues |
|
4 | 4 | ============ |
|
5 | 5 | |
|
6 | 6 | Windows Upload |
|
7 | 7 | -------------- |
|
8 | 8 | |
|
9 | 9 | There can be an issue with uploading files from web interface on Windows, |
|
10 | 10 | and afterwards users cannot properly clone or synchronize with the repository. |
|
11 | 11 | |
|
12 | 12 | Early testing shows that often uploading files via HTML forms on Windows |
|
13 | 13 | includes the full path of the file being uploaded and not the name of the file. |
|
14 | 14 | |
|
15 | 15 | Old Format of Git Repositories |
|
16 | 16 | ------------------------------ |
|
17 | 17 | |
|
18 | 18 | There is an issue when trying to import old |git| format |repos| into recent |
|
19 | 19 | versions of |RCE|. This issue can occur when importing from external |git| |
|
20 | 20 | repositories or from older versions of |RCE| (<=2.2.7). |
|
21 | 21 | |
|
22 | 22 | To convert the old version into a current version, clone the old |
|
23 | 23 | |repo| into a local machine using a recent |git| client, then push it to a new |
|
24 | 24 | |repo| inside |RCE|. |
|
25 | 25 | |
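A sketch of that conversion from the command line (repository URLs are illustrative):

.. code-block:: bash

    # clone the old repository with a recent git client
    git clone https://old-server/old-repo.git && cd old-repo
    # push everything into a freshly created repository inside RhodeCode
    git remote add rhodecode https://rhodecode.local/new-repo
    git push rhodecode --all
    git push rhodecode --tags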
|
26 | 26 | |
|
27 | 27 | VCS Server Memory Consumption |
|
28 | 28 | ----------------------------- |
|
29 | 29 | |
|
30 | 30 | The VCS Server cache grows without limits if not configured correctly. This |
|
31 | 31 | applies to |RCE| versions prior to the 3.3.2 releases, as 3.3.2 |
|
32 | 32 | shipped with the optimal configuration as default. See the |
|
33 | 33 | :ref:`vcs-server-maintain` section for details. |
|
34 | 34 | |
|
35 | 35 | To fix this issue, upgrade to |RCE| 3.3.2 or greater, and if you discover |
|
36 | 36 | memory consumption issues check the VCS Server settings. |
|
37 | 37 | |
|
38 | 38 | Newer Operating system locales |
|
39 | 39 | ------------------------------ |
|
40 | 40 | |
|
41 | 41 | |RCC| has a known problem with locales, due to changes in glibc 2.27+ which affect |
|
42 | 42 | the locale-archive format, which is now incompatible with the glibc 2.26 we use. |
|
43 | 43 | |
|
44 | 44 | Mostly affected are: |
|
45 | ||
|
45 | 46 | - Fedora 23+ |
|
46 | 47 | - Ubuntu 18.04 |
|
47 | 48 | - CentOS / RHEL 8 |
|
48 | 49 | |
|
49 | 50 | To work around this problem, you need to point ``$LOCALE_ARCHIVE`` to the |
|
50 | 51 | locale package in the older, pre glibc 2.27 format, or set `LC_ALL=C` in your environment. |
|
51 | 52 | |
|
52 | 53 | To use the pre 2.27 locale-archive fix follow these steps: |
|
53 | 54 | |
|
54 | 55 | 1. Download the pre 2.27 locale-archive package |
|
55 | 56 | |
|
56 | 57 | .. code-block:: bash |
|
57 | 58 | |
|
58 | 59 | wget https://dls.rhodecode.com/assets/locale-archive |
|
59 | 60 | |
|
60 | 61 | |
|
61 | 62 | 2. Point ``$LOCALE_ARCHIVE`` to the locale package. |
|
62 | 63 | |
|
63 | 64 | .. code-block:: bash |
|
64 | 65 | |
|
65 | 66 | $ export LOCALE_ARCHIVE=/home/USER/locale-archive # change to your path |
|
66 | 67 | |
|
67 | 68 | This should be added *both* to the `environment` variable of `~/.rccontrol/supervisor/supervisord.ini`, |
|
68 | 69 | e.g. |
|
69 | 70 | |
|
70 | 71 | ``` |
|
71 | 72 | [supervisord] |
|
72 | 73 | environment = HOME=/home/user/rhodecode,LOCALE_ARCHIVE=/YOUR-PATH/locale-archive |
|
73 | 74 | ``` |
|
74 | 75 | |
|
75 | 76 | and in user .bashrc/.zshrc etc, or via a startup script that |
|
76 | 77 | runs `rccontrol self-init` |
|
77 | 78 | |
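After setting the variable, a quick sanity check from the same shell (standard tooling only, no extra assumptions):

.. code-block:: bash

    # confirm the variable is exported where RhodeCode will be started
    echo "$LOCALE_ARCHIVE"
    # 'locale' should now print settings without warnings
    locale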
|
78 | 79 | If you happen to be running |RCC| from systemd, use the following |
|
79 | 80 | example to pass the correct locale information on boot. |
|
80 | 81 | |
|
81 | 82 | .. code-block:: ini |
|
82 | 83 | |
|
83 | 84 | [Unit] |
|
84 | 85 | Description=Rhodecode |
|
85 | 86 | After=network.target |
|
86 | 87 | |
|
87 | 88 | [Service] |
|
88 | 89 | Type=forking |
|
89 | 90 | User=scm |
|
90 | 91 | Environment="LOCALE_ARCHIVE=/YOUR-PATH/locale-archive" |
|
91 | 92 | ExecStart=/YOUR-PATH/.rccontrol-profile/bin/rccontrol-self-init |
|
92 | 93 | |
|
93 | 94 | [Install] |
|
94 | 95 | WantedBy=multi-user.target |
|
95 | 96 | |
|
97 | ||
|
98 | Merge stuck in "merging" status |
|
99 | -------------------------------- | |
|
100 | ||
|
101 | Similar issues: | |
|
102 | ||
|
103 | - Pull Request duplicated and/or stuck in "creating" status. |
|
104 | ||
|
105 | Mostly affected are: | |
|
106 | ||
|
107 | - Kubernetes AWS EKS setup with NFS as shared storage | |
|
108 | - AWS EFS as shared storage | |
|
109 | ||
|
110 | Workaround: | |
|
111 | ||
|
112 | 1. Manually clear the repo cache via UI: | |
|
113 | :menuselection:`Repository Settings --> Caches --> Invalidate repository cache` | |
|
114 | ||
|
115 | 2. Open the problematic PR and reset its status to "created" |
|
116 | ||
|
117 | Now you can merge the PR normally. |
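With many repositories, step 1 can also be scripted via the JSON-RPC API; a minimal sketch, assuming an auth token with the API role and the ``invalidate_cache`` method being available in your version (server URL, token and repository name are placeholders):

.. code-block:: bash

    # hypothetical values: adjust the server URL, token and repository name
    curl -s -X POST https://rhodecode.local/_admin/api \
      -H 'Content-Type: application/json' \
      -d '{"id": 1, "auth_token": "SECRET", "method": "invalidate_cache", "args": {"repoid": "my-repo"}}'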
@@ -1,59 +1,58 b'' | |||
|
1 | 1 | |RCE| 5.1.0 |RNS| |
|
2 | 2 | ----------------- |
|
3 | 3 | |
|
4 | 4 | Release Date |
|
5 | 5 | ^^^^^^^^^^^^ |
|
6 | 6 | |
|
7 | 7 | - 2024-07-18 |
|
8 | 8 | |
|
9 | 9 | |
|
10 | 10 | New Features |
|
11 | 11 | ^^^^^^^^^^^^ |
|
12 | 12 | |
|
13 | - We've introduced 2FA for users. Now alongside the external auth 2 |
|
13 | - We've introduced 2FA for users. Now, alongside the external auth 2FA support, RhodeCode allows enabling 2FA for users. |
|
14 | 14 | 2FA options are available for each user individually, or can be enforced via authentication plugins like LDAP, or the internal one. |
|
15 | 15 | - Email based log-in. RhodeCode now allows logging in with an email address as well as a username for the main authentication type. |
|
16 | 16 | - Ability to replace a file using web UI. Now one can replace an existing file from the web-ui. |
|
17 | 17 | - GIT LFS Sync automation. Remote push/pull commands now can also sync GIT LFS objects. |
|
18 | - Added ability to remove or close branches from the web ui | |
|
19 | - Added ability to delete a branch automatically after merging PR for git repositories | |
|
20 | - Added support for S3 based archive_cache |
|
18 | - Added ability to remove or close branches from the web ui. | |
|
19 | - Added ability to delete a branch automatically after merging PR for git repositories. | |
|
20 | - Added support for S3 based archive_cache that allows storing cached archives in S3 compatible object store. | |
|
21 | 21 | |
|
22 | 22 | |
|
23 | 23 | General |
|
24 | 24 | ^^^^^^^ |
|
25 | 25 | |
|
26 | - Upgraded all dependency libraries to their latest available versions | |
|
26 | - Upgraded all dependency libraries to their latest available versions. | |
|
27 | 27 | - Repository storage is no longer controlled via DB settings, but .ini file. This allows easier automated deployments. |
|
28 | 28 | - Bumped mercurial to 6.7.4 |
|
29 | 29 | - Mercurial: enable httppostarguments for better support of large repositories with lots of heads. |
|
30 | 30 | - Added explicit db-migrate step to update hooks for 5.X release. |
|
31 | 31 | |
|
32 | 32 | |
|
33 | 33 | Security |
|
34 | 34 | ^^^^^^^^ |
|
35 | 35 | |
|
36 | 36 | |
|
37 | 37 | |
|
38 | 38 | Performance |
|
39 | 39 | ^^^^^^^^^^^ |
|
40 | 40 | |
|
41 | 41 | - Introduced a full rewrite of ssh backend for performance. The result is 2-5x speed improvement for operation with ssh. |
|
42 |
|
43 | - Introduced a new hooks subsystem that is more scalable and faster, enable it by setting |
|
42 | Enable new ssh wrapper by setting: `ssh.wrapper_cmd = /home/rhodecode/venv/bin/rc-ssh-wrapper-v2` | |
|
43 | - Introduced a new hooks subsystem that is more scalable and faster, enable it by setting: `vcs.hooks.protocol = celery` | |
|
44 | 44 | |
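Both switches live in `rhodecode.ini`; a sketch of applying them from a shell (the ini path and venv location are illustrative, the option names and values come from the notes above):

.. code-block:: bash

    INI=/etc/rhodecode/rhodecode.ini  # adjust to your deployment
    sed -i \
      -e 's|^#\?ssh\.wrapper_cmd *=.*|ssh.wrapper_cmd = /home/rhodecode/venv/bin/rc-ssh-wrapper-v2|' \
      -e 's|^#\?vcs\.hooks\.protocol *=.*|vcs.hooks.protocol = celery|' "$INI"
    # verify the result
    grep -E '^(ssh\.wrapper_cmd|vcs\.hooks\.protocol)' "$INI"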
|
45 | 45 | |
|
46 | 46 | Fixes |
|
47 | 47 | ^^^^^ |
|
48 | 48 | |
|
49 | - Archives: Zip archive download breaks when a gitmodules file is present | |
|
50 | - Branch permissions: fixed bug preventing to specify own rules from 4.X install | |
|
51 | - SVN: refactored svn events, thus fixing support for it in dockerized env | |
|
52 | - Fixed empty server url in PR link after push from cli | |
|
49 | - Archives: Zip archive download breaks when a gitmodules file is present. | |
|
50 | - Branch permissions: fixed bug preventing to specify own rules from 4.X install. | |
|
51 | - SVN: refactored svn events, thus fixing support for it in dockerized environment. | |
|
52 | - Fixed empty server url in PR link after push from cli. | |
|
53 | 53 | |
|
54 | 54 | |
|
55 | 55 | Upgrade notes |
|
56 | 56 | ^^^^^^^^^^^^^ |
|
57 | 57 | |
|
58 | - RhodeCode 5.1.0 is a ma |
|
59 | rich release | |
|
58 | - RhodeCode 5.1.0 is a major feature release after the big 5.0.0 python3 migration. We are happy to ship this first feature-rich release since then. |
@@ -1,173 +1,175 b'' | |||
|
1 | 1 | .. _rhodecode-release-notes-ref: |
|
2 | 2 | |
|
3 | 3 | Release Notes |
|
4 | 4 | ============= |
|
5 | 5 | |
|
6 | 6 | |RCE| 5.x Versions |
|
7 | 7 | ------------------ |
|
8 | 8 | |
|
9 | 9 | .. toctree:: |
|
10 | 10 | :maxdepth: 1 |
|
11 | 11 | |
|
12 | ||
|
12 | release-notes-5.2.0.rst | |
|
13 | release-notes-5.1.2.rst | |
|
14 | release-notes-5.1.1.rst | |
|
13 | 15 | release-notes-5.1.0.rst |
|
14 | 16 | release-notes-5.0.3.rst |
|
15 | 17 | release-notes-5.0.2.rst |
|
16 | 18 | release-notes-5.0.1.rst |
|
17 | 19 | release-notes-5.0.0.rst |
|
18 | 20 | |
|
19 | 21 | |
|
20 | 22 | |RCE| 4.x Versions |
|
21 | 23 | ------------------ |
|
22 | 24 | |
|
23 | 25 | .. toctree:: |
|
24 | 26 | :maxdepth: 1 |
|
25 | 27 | |
|
26 | 28 | release-notes-4.27.1.rst |
|
27 | 29 | release-notes-4.27.0.rst |
|
28 | 30 | release-notes-4.26.0.rst |
|
29 | 31 | release-notes-4.25.2.rst |
|
30 | 32 | release-notes-4.25.1.rst |
|
31 | 33 | release-notes-4.25.0.rst |
|
32 | 34 | release-notes-4.24.1.rst |
|
33 | 35 | release-notes-4.24.0.rst |
|
34 | 36 | release-notes-4.23.2.rst |
|
35 | 37 | release-notes-4.23.1.rst |
|
36 | 38 | release-notes-4.23.0.rst |
|
37 | 39 | release-notes-4.22.0.rst |
|
38 | 40 | release-notes-4.21.0.rst |
|
39 | 41 | release-notes-4.20.1.rst |
|
40 | 42 | release-notes-4.20.0.rst |
|
41 | 43 | release-notes-4.19.3.rst |
|
42 | 44 | release-notes-4.19.2.rst |
|
43 | 45 | release-notes-4.19.1.rst |
|
44 | 46 | release-notes-4.19.0.rst |
|
45 | 47 | release-notes-4.18.3.rst |
|
46 | 48 | release-notes-4.18.2.rst |
|
47 | 49 | release-notes-4.18.1.rst |
|
48 | 50 | release-notes-4.18.0.rst |
|
49 | 51 | release-notes-4.17.4.rst |
|
50 | 52 | release-notes-4.17.3.rst |
|
51 | 53 | release-notes-4.17.2.rst |
|
52 | 54 | release-notes-4.17.1.rst |
|
53 | 55 | release-notes-4.17.0.rst |
|
54 | 56 | release-notes-4.16.2.rst |
|
55 | 57 | release-notes-4.16.1.rst |
|
56 | 58 | release-notes-4.16.0.rst |
|
57 | 59 | release-notes-4.15.2.rst |
|
58 | 60 | release-notes-4.15.1.rst |
|
59 | 61 | release-notes-4.15.0.rst |
|
60 | 62 | release-notes-4.14.1.rst |
|
61 | 63 | release-notes-4.14.0.rst |
|
62 | 64 | release-notes-4.13.3.rst |
|
63 | 65 | release-notes-4.13.2.rst |
|
64 | 66 | release-notes-4.13.1.rst |
|
65 | 67 | release-notes-4.13.0.rst |
|
66 | 68 | release-notes-4.12.4.rst |
|
67 | 69 | release-notes-4.12.3.rst |
|
68 | 70 | release-notes-4.12.2.rst |
|
69 | 71 | release-notes-4.12.1.rst |
|
70 | 72 | release-notes-4.12.0.rst |
|
71 | 73 | release-notes-4.11.6.rst |
|
72 | 74 | release-notes-4.11.5.rst |
|
73 | 75 | release-notes-4.11.4.rst |
|
74 | 76 | release-notes-4.11.3.rst |
|
75 | 77 | release-notes-4.11.2.rst |
|
76 | 78 | release-notes-4.11.1.rst |
|
77 | 79 | release-notes-4.11.0.rst |
|
78 | 80 | release-notes-4.10.6.rst |
|
79 | 81 | release-notes-4.10.5.rst |
|
80 | 82 | release-notes-4.10.4.rst |
|
81 | 83 | release-notes-4.10.3.rst |
|
82 | 84 | release-notes-4.10.2.rst |
|
83 | 85 | release-notes-4.10.1.rst |
|
84 | 86 | release-notes-4.10.0.rst |
|
85 | 87 | release-notes-4.9.1.rst |
|
86 | 88 | release-notes-4.9.0.rst |
|
87 | 89 | release-notes-4.8.0.rst |
|
88 | 90 | release-notes-4.7.2.rst |
|
89 | 91 | release-notes-4.7.1.rst |
|
90 | 92 | release-notes-4.7.0.rst |
|
91 | 93 | release-notes-4.6.1.rst |
|
92 | 94 | release-notes-4.6.0.rst |
|
93 | 95 | release-notes-4.5.2.rst |
|
94 | 96 | release-notes-4.5.1.rst |
|
95 | 97 | release-notes-4.5.0.rst |
|
96 | 98 | release-notes-4.4.2.rst |
|
97 | 99 | release-notes-4.4.1.rst |
|
98 | 100 | release-notes-4.4.0.rst |
|
99 | 101 | release-notes-4.3.1.rst |
|
100 | 102 | release-notes-4.3.0.rst |
|
101 | 103 | release-notes-4.2.1.rst |
|
102 | 104 | release-notes-4.2.0.rst |
|
103 | 105 | release-notes-4.1.2.rst |
|
104 | 106 | release-notes-4.1.1.rst |
|
105 | 107 | release-notes-4.1.0.rst |
|
106 | 108 | release-notes-4.0.1.rst |
|
107 | 109 | release-notes-4.0.0.rst |
|
108 | 110 | |
|
109 | 111 | |RCE| 3.x Versions |
|
110 | 112 | ------------------ |
|
111 | 113 | |
|
112 | 114 | .. toctree:: |
|
113 | 115 | :maxdepth: 1 |
|
114 | 116 | |
|
115 | 117 | release-notes-3.8.4.rst |
|
116 | 118 | release-notes-3.8.3.rst |
|
117 | 119 | release-notes-3.8.2.rst |
|
118 | 120 | release-notes-3.8.1.rst |
|
119 | 121 | release-notes-3.8.0.rst |
|
120 | 122 | release-notes-3.7.1.rst |
|
121 | 123 | release-notes-3.7.0.rst |
|
122 | 124 | release-notes-3.6.1.rst |
|
123 | 125 | release-notes-3.6.0.rst |
|
124 | 126 | release-notes-3.5.2.rst |
|
125 | 127 | release-notes-3.5.1.rst |
|
126 | 128 | release-notes-3.5.0.rst |
|
127 | 129 | release-notes-3.4.1.rst |
|
128 | 130 | release-notes-3.4.0.rst |
|
129 | 131 | release-notes-3.3.4.rst |
|
130 | 132 | release-notes-3.3.3.rst |
|
131 | 133 | release-notes-3.3.2.rst |
|
132 | 134 | release-notes-3.3.1.rst |
|
133 | 135 | release-notes-3.3.0.rst |
|
134 | 136 | release-notes-3.2.3.rst |
|
135 | 137 | release-notes-3.2.2.rst |
|
136 | 138 | release-notes-3.2.1.rst |
|
137 | 139 | release-notes-3.2.0.rst |
|
138 | 140 | release-notes-3.1.1.rst |
|
139 | 141 | release-notes-3.1.0.rst |
|
140 | 142 | release-notes-3.0.2.rst |
|
141 | 143 | release-notes-3.0.1.rst |
|
142 | 144 | release-notes-3.0.0.rst |
|
143 | 145 | |
|
144 | 146 | |RCE| 2.x Versions |
|
145 | 147 | ------------------ |
|
146 | 148 | |
|
147 | 149 | .. toctree:: |
|
148 | 150 | :maxdepth: 1 |
|
149 | 151 | |
|
150 | 152 | release-notes-2.2.8.rst |
|
151 | 153 | release-notes-2.2.7.rst |
|
152 | 154 | release-notes-2.2.6.rst |
|
153 | 155 | release-notes-2.2.5.rst |
|
154 | 156 | release-notes-2.2.4.rst |
|
155 | 157 | release-notes-2.2.3.rst |
|
156 | 158 | release-notes-2.2.2.rst |
|
157 | 159 | release-notes-2.2.1.rst |
|
158 | 160 | release-notes-2.2.0.rst |
|
159 | 161 | release-notes-2.1.0.rst |
|
160 | 162 | release-notes-2.0.2.rst |
|
161 | 163 | release-notes-2.0.1.rst |
|
162 | 164 | release-notes-2.0.0.rst |
|
163 | 165 | |
|
164 | 166 | |RCE| 1.x Versions |
|
165 | 167 | ------------------ |
|
166 | 168 | |
|
167 | 169 | .. toctree:: |
|
168 | 170 | :maxdepth: 1 |
|
169 | 171 | |
|
170 | 172 | release-notes-1.7.2.rst |
|
171 | 173 | release-notes-1.7.1.rst |
|
172 | 174 | release-notes-1.7.0.rst |
|
173 | 175 | release-notes-1.6.0.rst |
@@ -1,11 +1,11 b'' | |||
|
1 | 1 | sphinx==7.2.6 |
|
2 | 2 | |
|
3 | 3 | furo==2023.9.10 |
|
4 | 4 | sphinx-press-theme==0.8.0 |
|
5 | 5 | sphinx-rtd-theme==1.3.0 |
|
6 | 6 | |
|
7 | pygments==2.1 |
|
7 | pygments==2.18.0 | |
|
8 | 8 | |
|
9 | 9 | docutils<0.19 |
|
10 | 10 | markupsafe==2.1.3 |
|
11 | 11 | jinja2==3.1.2 |
@@ -1,313 +1,299 b'' | |||
|
1 | 1 | # deps, generated via pipdeptree --exclude setuptools,wheel,pipdeptree,pip -f | tr '[:upper:]' '[:lower:]' |
|
2 | 2 | |
|
3 | 3 | alembic==1.13.1 |
|
4 | 4 | mako==1.2.4 |
|
5 | 5 | markupsafe==2.1.2 |
|
6 | 6 | sqlalchemy==1.4.52 |
|
7 | 7 | greenlet==3.0.3 |
|
8 | typing_extensions==4. |
|
8 | typing_extensions==4.12.2 | |
|
9 | 9 | async-timeout==4.0.3 |
|
10 | 10 | babel==2.12.1 |
|
11 | 11 | beaker==1.12.1 |
|
12 | 12 | celery==5.3.6 |
|
13 | 13 | billiard==4.2.0 |
|
14 | 14 | click==8.1.3 |
|
15 | 15 | click-didyoumean==0.3.0 |
|
16 | 16 | click==8.1.3 |
|
17 | 17 | click-plugins==1.1.1 |
|
18 | 18 | click==8.1.3 |
|
19 | 19 | click-repl==0.2.0 |
|
20 | 20 | click==8.1.3 |
|
21 | prompt |
|
22 | wcwidth==0.2. |
|
21 | prompt_toolkit==3.0.47 | |
|
22 | wcwidth==0.2.13 | |
|
23 | 23 | six==1.16.0 |
|
24 | 24 | kombu==5.3.5 |
|
25 | 25 | amqp==5.2.0 |
|
26 | 26 | vine==5.1.0 |
|
27 | 27 | vine==5.1.0 |
|
28 | 28 | python-dateutil==2.8.2 |
|
29 | 29 | six==1.16.0 |
|
30 | 30 | tzdata==2024.1 |
|
31 | 31 | vine==5.1.0 |
|
32 | 32 | channelstream==0.7.1 |
|
33 | 33 | gevent==24.2.1 |
|
34 | 34 | greenlet==3.0.3 |
|
35 | 35 | zope.event==5.0.0 |
|
36 | zope.interface== |
|
36 | zope.interface==7.0.3 | |
|
37 | 37 | itsdangerous==1.1.0 |
|
38 | 38 | marshmallow==2.18.0 |
|
39 | 39 | pyramid==2.0.2 |
|
40 | 40 | hupper==1.12 |
|
41 | 41 | plaster==1.1.2 |
|
42 | 42 | plaster-pastedeploy==1.0.1 |
|
43 | 43 | pastedeploy==3.1.0 |
|
44 | 44 | plaster==1.1.2 |
|
45 | 45 | translationstring==1.4 |
|
46 | 46 | venusian==3.0.0 |
|
47 | 47 | webob==1.8.7 |
|
48 | 48 | zope.deprecation==5.0.0 |
|
49 | zope.interface== |
|
49 | zope.interface==7.0.3 | |
|
50 | 50 | pyramid-jinja2==2.10 |
|
51 | 51 | jinja2==3.1.2 |
|
52 | 52 | markupsafe==2.1.2 |
|
53 | 53 | markupsafe==2.1.2 |
|
54 | 54 | pyramid==2.0.2 |
|
55 | 55 | hupper==1.12 |
|
56 | 56 | plaster==1.1.2 |
|
57 | 57 | plaster-pastedeploy==1.0.1 |
|
58 | 58 | pastedeploy==3.1.0 |
|
59 | 59 | plaster==1.1.2 |
|
60 | 60 | translationstring==1.4 |
|
61 | 61 | venusian==3.0.0 |
|
62 | 62 | webob==1.8.7 |
|
63 | 63 | zope.deprecation==5.0.0 |
|
64 | zope.interface== |
|
64 | zope.interface==7.0.3 | |
|
65 | 65 | zope.deprecation==5.0.0 |
|
66 | 66 | python-dateutil==2.8.2 |
|
67 | 67 | six==1.16.0 |
|
68 | 68 | requests==2.28.2 |
|
69 | 69 | certifi==2022.12.7 |
|
70 | 70 | charset-normalizer==3.1.0 |
|
71 | 71 | idna==3.4 |
|
72 | 72 | urllib3==1.26.14 |
|
73 | 73 | ws4py==0.5.1 |
|
74 | 74 | deform==2.0.15 |
|
75 | 75 | chameleon==3.10.2 |
|
76 | 76 | colander==2.0 |
|
77 | 77 | iso8601==1.1.0 |
|
78 | 78 | translationstring==1.4 |
|
79 | 79 | iso8601==1.1.0 |
|
80 | 80 | peppercorn==0.6 |
|
81 | 81 | translationstring==1.4 |
|
82 | 82 | zope.deprecation==5.0.0 |
|
83 | 83 | docutils==0.19 |
|
84 | 84 | dogpile.cache==1.3.3 |
|
85 | 85 | decorator==5.1.1 |
|
86 | 86 | stevedore==5.1.0 |
|
87 | 87 | pbr==5.11.1 |
|
88 | 88 | formencode==2.1.0 |
|
89 | 89 | six==1.16.0 |
|
90 | fsspec==2024. |
|
91 | gunicorn==2 |
|
92 | packaging==24. |
|
90 | fsspec==2024.9.0 | |
|
91 | gunicorn==23.0.0 | |
|
92 | packaging==24.1 | |
|
93 | 93 | gevent==24.2.1 |
|
94 | 94 | greenlet==3.0.3 |
|
95 | 95 | zope.event==5.0.0 |
|
96 | zope.interface== |
|
97 | ipython==8. |
|
98 | backcall==0.2.0 | |
|
96 | zope.interface==7.0.3 | |
|
97 | ipython==8.26.0 | |
|
99 | 98 | decorator==5.1.1 |
|
100 | jedi==0.19. |
|
101 | parso==0.8. |
|
102 | matplotlib-inline==0.1. |
|
103 | traitlets==5. |
|
104 | pexpect==4. |
|
99 | jedi==0.19.1 | |
|
100 | parso==0.8.4 | |
|
101 | matplotlib-inline==0.1.7 | |
|
102 | traitlets==5.14.3 | |
|
103 | pexpect==4.9.0 | |
|
105 | 104 | ptyprocess==0.7.0 |
|
106 | pickleshare==0.7.5 | |
|
107 | prompt-toolkit==3.0.38 | |
|
108 | wcwidth==0.2.6 | |
|
109 | pygments==2.15.1 | |
|
110 | stack-data==0.6.2 | |
|
111 | asttokens==2.2.1 | |
|
105 | prompt_toolkit==3.0.47 | |
|
106 | wcwidth==0.2.13 | |
|
107 | pygments==2.18.0 | |
|
108 | stack-data==0.6.3 | |
|
109 | asttokens==2.4.1 | |
|
112 | 110 | six==1.16.0 |
|
113 | executing== |
|
114 | pure |
|
115 | traitlets==5. |
|
111 | executing==2.0.1 | |
|
112 | pure_eval==0.2.3 | |
|
113 | traitlets==5.14.3 | |
|
114 | typing_extensions==4.12.2 | |
|
116 | 115 | markdown==3.4.3 |
|
117 | 116 | msgpack==1.0.8 |
|
118 | 117 | mysqlclient==2.1.1 |
|
119 | 118 | nbconvert==7.7.3 |
|
120 | 119 | beautifulsoup4==4.12.3 |
|
121 | 120 | soupsieve==2.5 |
|
122 | 121 | bleach==6.1.0 |
|
123 | 122 | six==1.16.0 |
|
124 | 123 | webencodings==0.5.1 |
|
125 | 124 | defusedxml==0.7.1 |
|
126 | 125 | jinja2==3.1.2 |
|
127 | 126 | markupsafe==2.1.2 |
|
128 | 127 | jupyter_core==5.3.1 |
|
129 | 128 | platformdirs==3.10.0 |
|
130 | traitlets==5. |
|
129 | traitlets==5.14.3 | |
|
131 | 130 | jupyterlab-pygments==0.2.2 |
|
132 | 131 | markupsafe==2.1.2 |
|
133 | 132 | mistune==2.0.5 |
|
134 | 133 | nbclient==0.8.0 |
|
135 | 134 | jupyter_client==8.3.0 |
|
136 | 135 | jupyter_core==5.3.1 |
|
137 | 136 | platformdirs==3.10.0 |
|
138 | traitlets==5. |
|
137 | traitlets==5.14.3 | |
|
139 | 138 | python-dateutil==2.8.2 |
|
140 | 139 | six==1.16.0 |
|
141 | 140 | pyzmq==25.0.0 |
|
142 | 141 | tornado==6.2 |
|
143 | traitlets==5. |
|
142 | traitlets==5.14.3 | |
|
144 | 143 | jupyter_core==5.3.1 |
|
145 | 144 | platformdirs==3.10.0 |
|
146 | traitlets==5. |
|
145 | traitlets==5.14.3 | |
|
147 | 146 | nbformat==5.9.2 |
|
148 | 147 | fastjsonschema==2.18.0 |
|
149 | 148 | jsonschema==4.18.6 |
|
150 | 149 | attrs==22.2.0 |
|
151 | 150 | pyrsistent==0.19.3 |
|
152 | 151 | jupyter_core==5.3.1 |
|
153 | 152 | platformdirs==3.10.0 |
|
154 | traitlets==5. |
|
155 | traitlets==5. |
|
156 | traitlets==5. |
|
153 | traitlets==5.14.3 | |
|
154 | traitlets==5.14.3 | |
|
155 | traitlets==5.14.3 | |
|
157 | 156 | nbformat==5.9.2 |
|
158 | 157 | fastjsonschema==2.18.0 |
|
159 | 158 | jsonschema==4.18.6 |
|
160 | 159 | attrs==22.2.0 |
|
161 | 160 | pyrsistent==0.19.3 |
|
162 | 161 | jupyter_core==5.3.1 |
|
163 | 162 | platformdirs==3.10.0 |
|
164 | traitlets==5. |
|
165 | traitlets==5. |
|
163 | traitlets==5.14.3 | |
|
164 | traitlets==5.14.3 | |
|
166 | 165 | pandocfilters==1.5.0 |
|
167 | pygments==2.1 |
|
166 | pygments==2.18.0 | |
|
168 | 167 | tinycss2==1.2.1 |
|
169 | 168 | webencodings==0.5.1 |
|
170 | traitlets==5. |
|
171 | orjson==3.10. |
|
169 | traitlets==5.14.3 | |
|
170 | orjson==3.10.7 | |
|
172 | 171 | paste==3.10.1 |
|
173 | 172 | premailer==3.10.0 |
|
174 | 173 | cachetools==5.3.3 |
|
175 | 174 | cssselect==1.2.0 |
|
176 | 175 | cssutils==2.6.0 |
|
177 | lxml== |
|
176 | lxml==5.3.0 | |
|
178 | 177 | requests==2.28.2 |
|
179 | 178 | certifi==2022.12.7 |
|
180 | 179 | charset-normalizer==3.1.0 |
|
181 | 180 | idna==3.4 |
|
182 | 181 | urllib3==1.26.14 |
|
183 | 182 | psutil==5.9.8 |
|
184 | 183 | psycopg2==2.9.9 |
|
185 | 184 | py-bcrypt==0.4 |
|
186 | 185 | pycmarkgfm==1.2.0 |
|
187 | 186 | cffi==1.16.0 |
|
188 | 187 | pycparser==2.21 |
|
189 | 188 | pycryptodome==3.17 |
|
190 | 189 | pycurl==7.45.3 |
|
191 | 190 | pymysql==1.0.3 |
|
192 | 191 | pyotp==2.8.0 |
|
193 | 192 | pyparsing==3.1.1 |
|
194 | pyramid-debugtoolbar==4.12.1 | |
|
195 | pygments==2.15.1 | |
|
196 | pyramid==2.0.2 | |
|
197 | hupper==1.12 | |
|
198 | plaster==1.1.2 | |
|
199 | plaster-pastedeploy==1.0.1 | |
|
200 | pastedeploy==3.1.0 | |
|
201 | plaster==1.1.2 | |
|
202 | translationstring==1.4 | |
|
203 | venusian==3.0.0 | |
|
204 | webob==1.8.7 | |
|
205 | zope.deprecation==5.0.0 | |
|
206 | zope.interface==6.3.0 | |
|
207 | pyramid-mako==1.1.0 | |
|
208 | mako==1.2.4 | |
|
209 | markupsafe==2.1.2 | |
|
210 | pyramid==2.0.2 | |
|
211 | hupper==1.12 | |
|
212 | plaster==1.1.2 | |
|
213 | plaster-pastedeploy==1.0.1 | |
|
214 | pastedeploy==3.1.0 | |
|
215 | plaster==1.1.2 | |
|
216 | translationstring==1.4 | |
|
217 | venusian==3.0.0 | |
|
218 | webob==1.8.7 | |
|
219 | zope.deprecation==5.0.0 | |
|
220 | zope.interface==6.3.0 | |
|
221 | 193 | pyramid-mailer==0.15.1 |
|
222 | 194 | pyramid==2.0.2 |
|
223 | 195 | hupper==1.12 |
|
224 | 196 | plaster==1.1.2 |
|
225 | 197 | plaster-pastedeploy==1.0.1 |
|
226 | 198 | pastedeploy==3.1.0 |
|
227 | 199 | plaster==1.1.2 |
|
228 | 200 | translationstring==1.4 |
|
229 | 201 | venusian==3.0.0 |
|
230 | 202 | webob==1.8.7 |
|
231 | 203 | zope.deprecation==5.0.0 |
|
232 | zope.interface== |
|
204 | zope.interface==7.0.3 | |
|
233 | 205 | repoze.sendmail==4.4.1 |
|
234 | transaction== |
|
235 | zope.interface== |
|
236 | zope.interface== |
|
237 | transaction== |
|
238 | zope.interface== |
|
206 | transaction==5.0.0 | |
|
207 | zope.interface==7.0.3 | |
|
208 | zope.interface==7.0.3 | |
|
209 | transaction==5.0.0 | |
|
210 | zope.interface==7.0.3 | |
|
211 | pyramid-mako==1.1.0 | |
|
212 | mako==1.2.4 | |
|
213 | markupsafe==2.1.2 | |
|
214 | pyramid==2.0.2 | |
|
215 | hupper==1.12 | |
|
216 | plaster==1.1.2 | |
|
217 | plaster-pastedeploy==1.0.1 | |
|
218 | pastedeploy==3.1.0 | |
|
219 | plaster==1.1.2 | |
|
220 | translationstring==1.4 | |
|
221 | venusian==3.0.0 | |
|
222 | webob==1.8.7 | |
|
223 | zope.deprecation==5.0.0 | |
|
224 | zope.interface==7.0.3 | |
|
239 | 225 | python-ldap==3.4.3 |
|
240 | 226 | pyasn1==0.4.8 |
|
241 | 227 | pyasn1-modules==0.2.8 |
|
242 | 228 | pyasn1==0.4.8 |
|
243 | 229 | python-memcached==1.59 |
|
244 | 230 | six==1.16.0 |
|
245 | 231 | python-pam==2.0.2 |
|
246 | python3-saml==1.1 |
|
232 | python3-saml==1.16.0 | |
|
247 | 233 | isodate==0.6.1 |
|
248 | 234 | six==1.16.0 |
|
249 | lxml== |
|
250 | xmlsec==1.3.1 |
|
251 | lxml== |
|
235 | lxml==5.3.0 | |
|
236 | xmlsec==1.3.14 | |
|
237 | lxml==5.3.0 | |
|
252 | 238 | pyyaml==6.0.1 |
|
253 | redis==5. |
|
239 | redis==5.1.0 | |
|
254 | 240 | async-timeout==4.0.3 |
|
255 | 241 | regex==2022.10.31 |
|
256 | 242 | routes==2.5.1 |
|
257 | 243 | repoze.lru==0.7 |
|
258 | 244 | six==1.16.0 |
|
259 | s3fs==2024. |
|
245 | s3fs==2024.9.0 | |
|
260 | 246 | aiobotocore==2.13.0 |
|
261 | 247 | aiohttp==3.9.5 |
|
262 | 248 | aiosignal==1.3.1 |
|
263 | 249 | frozenlist==1.4.1 |
|
264 | 250 | attrs==22.2.0 |
|
265 | 251 | frozenlist==1.4.1 |
|
266 | 252 | multidict==6.0.5 |
|
267 | 253 | yarl==1.9.4 |
|
268 | 254 | idna==3.4 |
|
269 | 255 | multidict==6.0.5 |
|
270 | 256 | aioitertools==0.11.0 |
|
271 | 257 | botocore==1.34.106 |
|
272 | 258 | jmespath==1.0.1 |
|
273 | 259 | python-dateutil==2.8.2 |
|
274 | 260 | six==1.16.0 |
|
275 | 261 | urllib3==1.26.14 |
|
276 | 262 | wrapt==1.16.0 |
|
277 | 263 | aiohttp==3.9.5 |
|
278 | 264 | aiosignal==1.3.1 |
|
279 | 265 | frozenlist==1.4.1 |
|
280 | 266 | attrs==22.2.0 |
|
281 | 267 | frozenlist==1.4.1 |
|
282 | 268 | multidict==6.0.5 |
|
283 | 269 | yarl==1.9.4 |
|
284 | 270 | idna==3.4 |
|
285 | 271 | multidict==6.0.5 |
|
286 | fsspec==2024. |
|
272 | fsspec==2024.9.0 | |
|
287 | 273 | simplejson==3.19.2 |
|
288 | 274 | sshpubkeys==3.3.1 |
|
289 | 275 | cryptography==40.0.2 |
|
290 | 276 | cffi==1.16.0 |
|
291 | 277 | pycparser==2.21 |
|
292 | 278 | ecdsa==0.18.0 |
|
293 | 279 | six==1.16.0 |
|
294 | 280 | sqlalchemy==1.4.52 |
|
295 | 281 | greenlet==3.0.3 |
|
296 | typing_extensions==4. |
|
282 | typing_extensions==4.12.2 | |
|
297 | 283 | supervisor==4.2.5 |
|
298 | 284 | tzlocal==4.3 |
|
299 | 285 | pytz-deprecation-shim==0.1.0.post0 |
|
300 | 286 | tzdata==2024.1 |
|
301 | 287 | tempita==0.5.2 |
|
302 | 288 | unidecode==1.3.6 |
|
303 | 289 | urlobject==2.4.3 |
|
304 | 290 | waitress==3.0.0 |
|
305 | 291 | webhelpers2==2.1 |
|
306 | 292 | markupsafe==2.1.2 |
|
307 | 293 | six==1.16.0 |
|
308 | 294 | whoosh==2.7.4 |
|
309 | 295 | zope.cachedescriptors==5.0.0 |
|
310 | 296 | qrcode==7.4.2 |
|
311 | 297 | |
|
312 | 298 | ## uncomment to add the debug libraries |
|
313 | 299 | #-r requirements_debug.txt |
@@ -1,28 +1,29 b'' | |||
|
1 | 1 | ## special libraries we could extend the requirements.txt file with to add some |
|
2 | 2 | ## custom libraries useful for debug and memory tracing |
|
3 | 3 | |
|
4 | 4 | objgraph |
|
5 | 5 | memory-profiler |
|
6 | 6 | pympler |
|
7 | 7 | |
|
8 | 8 | ## debug |
|
9 | 9 | ipdb |
|
10 | 10 | ipython |
|
11 | 11 | rich |
|
12 | pyramid-debugtoolbar | |
|
12 | 13 | |
|
13 | 14 | # format |
|
14 | 15 | flake8 |
|
15 | 16 | ruff |
|
16 | 17 | |
|
17 | 18 | pipdeptree==2.7.1 |
|
18 | 19 | invoke==2.0.0 |
|
19 | 20 | bumpversion==0.6.0 |
|
20 | 21 | bump2version==1.0.1 |
|
21 | 22 | |
|
22 | 23 | docutils-stubs |
|
23 | 24 | types-redis |
|
24 | 25 | types-requests==2.31.0.6 |
|
25 | 26 | types-sqlalchemy |
|
26 | 27 | types-psutil |
|
27 | 28 | types-pycurl |
|
28 | 29 | types-ujson |
@@ -1,48 +1,48 b'' | |||
|
1 | 1 | # test related requirements |
|
2 | 2 | mock==5.1.0 |
|
3 | 3 | pytest-cov==4.1.0 |
|
4 | 4 | coverage==7.4.3 |
|
5 | 5 | pytest==8.1.1 |
|
6 | 6 | iniconfig==2.0.0 |
|
7 | packaging==24. |
|
7 | packaging==24.1 | |
|
8 | 8 | pluggy==1.4.0 |
|
9 | 9 | pytest-env==1.1.3 |
|
10 | 10 | pytest==8.1.1 |
|
11 | 11 | iniconfig==2.0.0 |
|
12 | packaging==24. |
|
12 | packaging==24.1 | |
|
13 | 13 | pluggy==1.4.0 |
|
14 | 14 | pytest-profiling==1.7.0 |
|
15 | 15 | gprof2dot==2022.7.29 |
|
16 | 16 | pytest==8.1.1 |
|
17 | 17 | iniconfig==2.0.0 |
|
18 | packaging==24. |
|
18 | packaging==24.1 | |
|
19 | 19 | pluggy==1.4.0 |
|
20 | 20 | six==1.16.0 |
|
21 | 21 | pytest-rerunfailures==13.0 |
|
22 | packaging==24. |
|
22 | packaging==24.1 | |
|
23 | 23 | pytest==8.1.1 |
|
24 | 24 | iniconfig==2.0.0 |
|
25 | packaging==24. |
|
25 | packaging==24.1 | |
|
26 | 26 | pluggy==1.4.0 |
|
27 | 27 | pytest-runner==6.0.1 |
|
28 | 28 | pytest-sugar==1.0.0 |
|
29 | packaging==24. |
|
29 | packaging==24.1 | |
|
30 | 30 | pytest==8.1.1 |
|
31 | 31 | iniconfig==2.0.0 |
|
32 | packaging==24. |
|
32 | packaging==24.1 | |
|
33 | 33 | pluggy==1.4.0 |
|
34 | 34 | termcolor==2.4.0 |
|
35 | 35 | pytest-timeout==2.3.1 |
|
36 | 36 | pytest==8.1.1 |
|
37 | 37 | iniconfig==2.0.0 |
|
38 | packaging==24. |
|
38 | packaging==24.1 | |
|
39 | 39 | pluggy==1.4.0 |
|
40 | 40 | webtest==3.0.0 |
|
41 | 41 | beautifulsoup4==4.12.3 |
|
42 | 42 | soupsieve==2.5 |
|
43 | 43 | waitress==3.0.0 |
|
44 | 44 | webob==1.8.7 |
|
45 | 45 | |
|
46 | 46 | # RhodeCode test-data |
|
47 | 47 | rc_testdata @ https://code.rhodecode.com/upstream/rc-testdata-dist/raw/77378e9097f700b4c1b9391b56199fe63566b5c9/rc_testdata-0.11.0.tar.gz#egg=rc_testdata |
|
48 | 48 | rc_testdata==0.11.0 |
@@ -1,581 +1,581 b'' | |||
|
1 | 1 | # Copyright (C) 2011-2023 RhodeCode GmbH |
|
2 | 2 | # |
|
3 | 3 | # This program is free software: you can redistribute it and/or modify |
|
4 | 4 | # it under the terms of the GNU Affero General Public License, version 3 |
|
5 | 5 | # (only), as published by the Free Software Foundation. |
|
6 | 6 | # |
|
7 | 7 | # This program is distributed in the hope that it will be useful, |
|
8 | 8 | # but WITHOUT ANY WARRANTY; without even the implied warranty of |
|
9 | 9 | # MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the |
|
10 | 10 | # GNU General Public License for more details. |
|
11 | 11 | # |
|
12 | 12 | # You should have received a copy of the GNU Affero General Public License |
|
13 | 13 | # along with this program. If not, see <http://www.gnu.org/licenses/>. |
|
14 | 14 | # |
|
15 | 15 | # This program is dual-licensed. If you wish to learn more about the |
|
16 | 16 | # RhodeCode Enterprise Edition, including its added features, Support services, |
|
17 | 17 | # and proprietary license terms, please see https://rhodecode.com/licenses/ |
|
18 | 18 | |
|
19 | 19 | import itertools |
|
20 | 20 | import logging |
|
21 | 21 | import sys |
|
22 | 22 | import fnmatch |
|
23 | 23 | |
|
24 | 24 | import decorator |
|
25 | 25 | import venusian |
|
26 | 26 | from collections import OrderedDict |
|
27 | 27 | |
|
28 | 28 | from pyramid.exceptions import ConfigurationError |
|
29 | 29 | from pyramid.renderers import render |
|
30 | 30 | from pyramid.response import Response |
|
31 | 31 | from pyramid.httpexceptions import HTTPNotFound |
|
32 | 32 | |
|
33 | 33 | from rhodecode.api.exc import ( |
|
34 | 34 | JSONRPCBaseError, JSONRPCError, JSONRPCForbidden, JSONRPCValidationError) |
|
35 | 35 | from rhodecode.apps._base import TemplateArgs |
|
36 | 36 | from rhodecode.lib.auth import AuthUser |
|
37 | 37 | from rhodecode.lib.base import get_ip_addr, attach_context_attributes |
|
38 | 38 | from rhodecode.lib.exc_tracking import store_exception |
|
39 | 39 | from rhodecode.lib import ext_json |
|
40 | 40 | from rhodecode.lib.utils2 import safe_str |
|
41 | 41 | from rhodecode.lib.plugins.utils import get_plugin_settings |
|
42 | 42 | from rhodecode.model.db import User, UserApiKeys |
|
43 | from rhodecode.config.patches import inspect_getargspec | |
|
43 | 44 | |
|
44 | 45 | log = logging.getLogger(__name__) |
|
45 | 46 | |
|
46 | 47 | DEFAULT_RENDERER = 'jsonrpc_renderer' |
|
47 | 48 | DEFAULT_URL = '/_admin/api' |
|
48 | 49 | SERVICE_API_IDENTIFIER = 'service_' |
|
49 | 50 | |
|
50 | 51 | |
|
51 | 52 | def find_methods(jsonrpc_methods, pattern): |
|
52 | 53 | matches = OrderedDict() |
|
53 | 54 | if not isinstance(pattern, (list, tuple)): |
|
54 | 55 | pattern = [pattern] |
|
55 | 56 | |
|
56 | 57 | for single_pattern in pattern: |
|
57 | 58 | for method_name, method in filter( |
|
58 | 59 | lambda x: not x[0].startswith(SERVICE_API_IDENTIFIER), jsonrpc_methods.items() |
|
59 | 60 | ): |
|
60 | 61 | if fnmatch.fnmatch(method_name, single_pattern): |
|
61 | 62 | matches[method_name] = method |
|
62 | 63 | return matches |
|
63 | 64 | |
|
64 | 65 | |
|
65 | 66 | class ExtJsonRenderer(object): |
|
66 | 67 | """ |
|
67 | 68 | Custom renderer that makes use of our ext_json lib |
|
68 | 69 | |
|
69 | 70 | """ |
|
70 | 71 | |
|
71 | 72 | def __init__(self): |
|
72 | 73 | self.serializer = ext_json.formatted_json |
|
73 | 74 | |
|
74 | 75 | def __call__(self, info): |
|
75 | 76 | """ Returns a plain JSON-encoded string with content-type |
|
76 | 77 | ``application/json``. The content-type may be overridden by |
|
77 | 78 | setting ``request.response.content_type``.""" |
|
78 | 79 | |
|
79 | 80 | def _render(value, system): |
|
80 | 81 | request = system.get('request') |
|
81 | 82 | if request is not None: |
|
82 | 83 | response = request.response |
|
83 | 84 | ct = response.content_type |
|
84 | 85 | if ct == response.default_content_type: |
|
85 | 86 | response.content_type = 'application/json' |
|
86 | 87 | |
|
87 | 88 | return self.serializer(value) |
|
88 | 89 | |
|
89 | 90 | return _render |
|
90 | 91 | |
|
91 | 92 | |
|
92 | 93 | def jsonrpc_response(request, result): |
|
93 | 94 | rpc_id = getattr(request, 'rpc_id', None) |
|
94 | 95 | |
|
95 | 96 | ret_value = '' |
|
96 | 97 | if rpc_id: |
|
97 | 98 | ret_value = {'id': rpc_id, 'result': result, 'error': None} |
|
98 | 99 | |
|
99 | 100 | # fetch deprecation warnings, and store it inside results |
|
100 | 101 | deprecation = getattr(request, 'rpc_deprecation', None) |
|
101 | 102 | if deprecation: |
|
102 | 103 | ret_value['DEPRECATION_WARNING'] = deprecation |
|
103 | 104 | |
|
104 | 105 | raw_body = render(DEFAULT_RENDERER, ret_value, request=request) |
|
105 | 106 | content_type = 'application/json' |
|
106 | 107 | content_type_header = 'Content-Type' |
|
107 | 108 | headers = { |
|
108 | 109 | content_type_header: content_type |
|
109 | 110 | } |
|
110 | 111 | return Response( |
|
111 | 112 | body=raw_body, |
|
112 | 113 | content_type=content_type, |
|
113 | 114 | headerlist=[(k, v) for k, v in headers.items()] |
|
114 | 115 | ) |
|
115 | 116 | |
|
116 | 117 | |
|
117 | 118 | def jsonrpc_error(request, message, retid=None, code: int | None = None, headers: dict | None = None): |
|
118 | 119 | """ |
|
119 | 120 | Generate a Response object with a JSON-RPC error body |
|
120 | 121 | """ |
|
121 | 122 | headers = headers or {} |
|
122 | 123 | content_type = 'application/json' |
|
123 | 124 | content_type_header = 'Content-Type' |
|
124 | 125 | if content_type_header not in headers: |
|
125 | 126 | headers[content_type_header] = content_type |
|
126 | 127 | |
|
127 | 128 | err_dict = {'id': retid, 'result': None, 'error': message} |
|
128 | 129 | raw_body = render(DEFAULT_RENDERER, err_dict, request=request) |
|
129 | 130 | |
|
130 | 131 | return Response( |
|
131 | 132 | body=raw_body, |
|
132 | 133 | status=code, |
|
133 | 134 | content_type=content_type, |
|
134 | 135 | headerlist=[(k, v) for k, v in headers.items()] |
|
135 | 136 | ) |
|
136 | 137 | |
|
137 | 138 | |
|
138 | 139 | def exception_view(exc, request): |
|
139 | 140 | rpc_id = getattr(request, 'rpc_id', None) |
|
140 | 141 | |
|
141 | 142 | if isinstance(exc, JSONRPCError): |
|
142 | 143 | fault_message = safe_str(exc) |
|
143 | 144 | log.debug('json-rpc error rpc_id:%s "%s"', rpc_id, fault_message) |
|
144 | 145 | elif isinstance(exc, JSONRPCValidationError): |
|
145 | 146 | colander_exc = exc.colander_exception |
|
146 | 147 | # TODO(marcink): think maybe of nicer way to serialize errors ? |
|
147 | 148 | fault_message = colander_exc.asdict() |
|
148 | 149 | log.debug('json-rpc colander error rpc_id:%s "%s"', rpc_id, fault_message) |
|
149 | 150 | elif isinstance(exc, JSONRPCForbidden): |
|
150 | 151 | fault_message = 'Access was denied to this resource.' |
|
151 | 152 | log.warning('json-rpc forbidden call rpc_id:%s "%s"', rpc_id, fault_message) |
|
152 | 153 | elif isinstance(exc, HTTPNotFound): |
|
153 | 154 | method = request.rpc_method |
|
154 | 155 | log.debug('json-rpc method `%s` not found in list of ' |
|
155 | 156 | 'api calls: %s, rpc_id:%s', |
|
156 | 157 | method, list(request.registry.jsonrpc_methods.keys()), rpc_id) |
|
157 | 158 | |
|
158 | 159 | similar = 'none' |
|
159 | 160 | try: |
|
160 | 161 | similar_paterns = [f'*{x}*' for x in method.split('_')] |
|
161 | 162 | similar_found = find_methods( |
|
162 | 163 | request.registry.jsonrpc_methods, similar_paterns) |
|
163 | 164 | similar = ', '.join(similar_found.keys()) or similar |
|
164 | 165 | except Exception: |
|
165 | 166 | # make the whole above block safe |
|
166 | 167 | pass |
|
167 | 168 | |
|
168 | 169 | fault_message = f"No such method: {method}. Similar methods: {similar}" |
|
169 | 170 | else: |
|
170 | 171 | fault_message = 'undefined error' |
|
171 | 172 | exc_info = exc.exc_info() |
|
172 | 173 | store_exception(id(exc_info), exc_info, prefix='rhodecode-api') |
|
173 | 174 | |
|
174 | 175 | statsd = request.registry.statsd |
|
175 | 176 | if statsd: |
|
176 | 177 | exc_type = f"{exc.__class__.__module__}.{exc.__class__.__name__}" |
|
177 | 178 | statsd.incr('rhodecode_exception_total', |
|
178 | 179 | tags=["exc_source:api", f"type:{exc_type}"]) |
|
179 | 180 | |
|
180 | 181 | return jsonrpc_error(request, fault_message, rpc_id) |
|
181 | 182 | |
|
182 | 183 | |
|
183 | 184 | def request_view(request): |
|
184 | 185 | """ |
|
185 | 186 | Main request handling method. It handles all logic to call a specific |
|
186 | 187 | exposed method |
|
187 | 188 | """ |
|
188 | 189 | # cython compatible inspect |
|
189 | from rhodecode.config.patches import inspect_getargspec | |
|
190 | 190 | inspect = inspect_getargspec() |
|
191 | 191 | |
|
192 | 192 | # check if we can find this session using api_key, get_by_auth_token |
|
193 | 193 | # search not expired tokens only |
|
194 | 194 | try: |
|
195 | 195 | if not request.rpc_method.startswith(SERVICE_API_IDENTIFIER): |
|
196 | 196 | api_user = User.get_by_auth_token(request.rpc_api_key) |
|
197 | 197 | |
|
198 | 198 | if api_user is None: |
|
199 | 199 | return jsonrpc_error( |
|
200 | 200 | request, retid=request.rpc_id, message='Invalid API KEY') |
|
201 | 201 | |
|
202 | 202 | if not api_user.active: |
|
203 | 203 | return jsonrpc_error( |
|
204 | 204 | request, retid=request.rpc_id, |
|
205 | 205 | message='Request from this user not allowed') |
|
206 | 206 | |
|
207 | 207 | # check if we are allowed to use this IP |
|
208 | 208 | auth_u = AuthUser( |
|
209 | 209 | api_user.user_id, request.rpc_api_key, ip_addr=request.rpc_ip_addr) |
|
210 | 210 | if not auth_u.ip_allowed: |
|
211 | 211 | return jsonrpc_error( |
|
212 | 212 | request, retid=request.rpc_id, |
|
213 | 213 | message='Request from IP:{} not allowed'.format( |
|
214 | 214 | request.rpc_ip_addr)) |
|
215 | 215 | else: |
|
216 | 216 | log.info('Access for IP:%s allowed', request.rpc_ip_addr) |
|
217 | 217 | |
|
218 | 218 | # register our auth-user |
|
219 | 219 | request.rpc_user = auth_u |
|
220 | 220 | request.environ['rc_auth_user_id'] = str(auth_u.user_id) |
|
221 | 221 | |
|
222 | 222 | # now check if token is valid for API |
|
223 | 223 | auth_token = request.rpc_api_key |
|
224 | 224 | token_match = api_user.authenticate_by_token( |
|
225 | 225 | auth_token, roles=[UserApiKeys.ROLE_API]) |
|
226 | 226 | invalid_token = not token_match |
|
227 | 227 | |
|
228 | 228 | log.debug('Checking if API KEY is valid with proper role') |
|
229 | 229 | if invalid_token: |
|
230 | 230 | return jsonrpc_error( |
|
231 | 231 | request, retid=request.rpc_id, |
|
232 | 232 | message='API KEY invalid or has a bad role for an API call') |
|
233 | 233 | else: |
|
234 | 234 | auth_u = 'service' |
|
235 | 235 | if request.rpc_api_key != request.registry.settings['app.service_api.token']: |
|
236 | 236 | raise Exception("Provided service secret is not recognized!") |
|
237 | 237 | |
|
238 | 238 | except Exception: |
|
239 | 239 | log.exception('Error on API AUTH') |
|
240 | 240 | return jsonrpc_error( |
|
241 | 241 | request, retid=request.rpc_id, message='Invalid API KEY') |
|
242 | 242 | |
|
243 | 243 | method = request.rpc_method |
|
244 | 244 | func = request.registry.jsonrpc_methods[method] |
|
245 | 245 | |
|
246 | 246 | # now that we have a method, add request._req_params to |
|
247 | 247 | # self.kwargs and dispatch control to WSGIController |
|
248 | 248 | |
|
249 | 249 | argspec = inspect.getargspec(func) |
|
250 | 250 | arglist = argspec[0] |
|
251 | 251 | defs = argspec[3] or [] |
|
252 | 252 | defaults = [type(a) for a in defs] |
|
253 | 253 | default_empty = type(NotImplemented) |
|
254 | 254 | |
|
255 | 255 | # kw arguments required by this method |
|
256 | 256 | func_kwargs = dict(itertools.zip_longest( |
|
257 | 257 | reversed(arglist), reversed(defaults), fillvalue=default_empty)) |
|
258 | 258 | |
|
259 | 259 | # This attribute will need to be first param of a method that uses |
|
260 | 260 | # api_key, which is translated to instance of user at that name |
|
261 | 261 | user_var = 'apiuser' |
|
262 | 262 | request_var = 'request' |
|
263 | 263 | |
|
264 | 264 | for arg in [user_var, request_var]: |
|
265 | 265 | if arg not in arglist: |
|
266 | 266 | return jsonrpc_error( |
|
267 | 267 | request, |
|
268 | 268 | retid=request.rpc_id, |
|
269 | 269 | message='This method [%s] does not support ' |
|
270 | 270 | 'required parameter `%s`' % (func.__name__, arg)) |
|
271 | 271 | |
|
272 | 272 | # get our arglist and check if we provided them as args |
|
273 | 273 | for arg, default in func_kwargs.items(): |
|
274 | 274 | if arg in [user_var, request_var]: |
|
275 | 275 | # user_var and request_var are pre-hardcoded parameters and we |
|
276 | 276 | # don't need to do any translation |
|
277 | 277 | continue |
|
278 | 278 | |
|
279 | 279 | # skip the required param check if its default value is |
|
280 | 280 | # NotImplementedType (default_empty) |
|
281 | 281 | if default == default_empty and arg not in request.rpc_params: |
|
282 | 282 | return jsonrpc_error( |
|
283 | 283 | request, |
|
284 | 284 | retid=request.rpc_id, |
|
285 | 285 | message=('Missing non-optional `%s` arg in JSON DATA' % arg) |
|
286 | 286 | ) |
|
287 | 287 | |
|
288 | 288 | # sanitize extra passed arguments |
|
289 | 289 | for k in list(request.rpc_params.keys()): |
|
290 | 290 | if k not in func_kwargs: |
|
291 | 291 | del request.rpc_params[k] |
|
292 | 292 | |
|
293 | 293 | call_params = request.rpc_params |
|
294 | 294 | call_params.update({ |
|
295 | 295 | 'request': request, |
|
296 | 296 | 'apiuser': auth_u |
|
297 | 297 | }) |
|
298 | 298 | |
|
299 | 299 | # register some common functions for usage |
|
300 | 300 | rpc_user = request.rpc_user.user_id if hasattr(request, 'rpc_user') else None |
|
301 | 301 | attach_context_attributes(TemplateArgs(), request, rpc_user) |
|
302 | 302 | |
|
303 | 303 | statsd = request.registry.statsd |
|
304 | 304 | |
|
305 | 305 | try: |
|
306 | 306 | ret_value = func(**call_params) |
|
307 | 307 | resp = jsonrpc_response(request, ret_value) |
|
308 | 308 | if statsd: |
|
309 | 309 | statsd.incr('rhodecode_api_call_success_total') |
|
310 | 310 | return resp |
|
311 | 311 | except JSONRPCBaseError: |
|
312 | 312 | raise |
|
313 | 313 | except Exception: |
|
314 | 314 | log.exception('Unhandled exception occurred on api call: %s', func) |
|
315 | 315 | exc_info = sys.exc_info() |
|
316 | 316 | exc_id, exc_type_name = store_exception( |
|
317 | 317 | id(exc_info), exc_info, prefix='rhodecode-api') |
|
318 | 318 | error_headers = { |
|
319 | 319 | 'RhodeCode-Exception-Id': str(exc_id), |
|
320 | 320 | 'RhodeCode-Exception-Type': str(exc_type_name) |
|
321 | 321 | } |
|
322 | 322 | err_resp = jsonrpc_error( |
|
323 | 323 | request, retid=request.rpc_id, message='Internal server error', |
|
324 | 324 | headers=error_headers) |
|
325 | 325 | if statsd: |
|
326 | 326 | statsd.incr('rhodecode_api_call_fail_total') |
|
327 | 327 | return err_resp |
|
328 | 328 | |
|
329 | 329 | |
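The required/optional split above hinges on pairing the *reversed* argument list with the *reversed* default list, so that arguments without a default end up mapped to the ``default_empty`` sentinel. A small standalone illustration of the same pairing, with a hypothetical method signature and ``inspect.getfullargspec`` standing in for the patched ``getargspec``:

.. code-block:: python

    import inspect
    import itertools

    def example(request, apiuser, repoid, cache=True, limit=None):
        pass

    default_empty = type(NotImplemented)
    spec = inspect.getfullargspec(example)
    defaults = [type(d) for d in (spec.defaults or [])]
    func_kwargs = dict(itertools.zip_longest(
        reversed(spec.args), reversed(defaults), fillvalue=default_empty))
    # {'limit': <class 'NoneType'>, 'cache': <class 'bool'>,
    #  'repoid': <class 'NotImplementedType'>,
    #  'apiuser': <class 'NotImplementedType'>,
    #  'request': <class 'NotImplementedType'>}
    # anything still mapped to the sentinel is treated as required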
|
330 | 330 | def setup_request(request): |
|
331 | 331 | """ |
|
332 | 332 | Parse a JSON-RPC request body. It is used inside the predicate methods |
|
333 | 333 | to validate and bootstrap requests for usage in rpc calls. |
|
334 | 334 | |
|
335 | 335 | We need to raise JSONRPCError here if we want to return some errors back to |
|
336 | 336 | the user. |
|
337 | 337 | """ |
|
338 | 338 | |
|
339 | 339 | log.debug('Executing setup request: %r', request) |
|
340 | 340 | request.rpc_ip_addr = get_ip_addr(request.environ) |
|
341 | 341 | # TODO(marcink): deprecate GET at some point |
|
342 | 342 | if request.method not in ['POST', 'GET']: |
|
343 | 343 | log.debug('unsupported request method "%s"', request.method) |
|
344 | 344 | raise JSONRPCError( |
|
345 | 345 | 'unsupported request method "%s". Please use POST' % request.method) |
|
346 | 346 | |
|
347 | 347 | if 'CONTENT_LENGTH' not in request.environ: |
|
348 | 348 | log.debug("No Content-Length") |
|
349 | 349 | raise JSONRPCError("Empty body, No Content-Length in request") |
|
350 | 350 | |
|
351 | 351 | else: |
|
352 | 352 | length = request.environ['CONTENT_LENGTH'] |
|
353 | 353 | log.debug('Content-Length: %s', length) |
|
354 | 354 | |
|
355 | 355 | if length == 0: |
|
356 | 356 | log.debug("Content-Length is 0") |
|
357 | 357 | raise JSONRPCError("Content-Length is 0") |
|
358 | 358 | |
|
359 | 359 | raw_body = request.body |
|
360 | 360 | log.debug("Loading JSON body now") |
|
361 | 361 | try: |
|
362 | 362 | json_body = ext_json.json.loads(raw_body) |
|
363 | 363 | except ValueError as e: |
|
364 | 364 | # catch JSON errors here |
|
365 | 365 | raise JSONRPCError(f"JSON parse error ERR:{e} RAW:{raw_body!r}") |
|
366 | 366 | |
|
367 | 367 | request.rpc_id = json_body.get('id') |
|
368 | 368 | request.rpc_method = json_body.get('method') |
|
369 | 369 | |
|
370 | 370 | # check required base parameters |
|
371 | 371 | try: |
|
372 | 372 | api_key = json_body.get('api_key') |
|
373 | 373 | if not api_key: |
|
374 | 374 | api_key = json_body.get('auth_token') |
|
375 | 375 | |
|
376 | 376 | if not api_key: |
|
377 | 377 | raise KeyError('api_key or auth_token') |
|
378 | 378 | |
|
379 | 379 | # TODO(marcink): support passing in token in request header |
|
380 | 380 | |
|
381 | 381 | request.rpc_api_key = api_key |
|
382 | 382 | request.rpc_id = json_body['id'] |
|
383 | 383 | request.rpc_method = json_body['method'] |
|
384 | 384 | request.rpc_params = json_body['args'] \ |
|
385 | 385 | if isinstance(json_body['args'], dict) else {} |
|
386 | 386 | |
|
387 | 387 | log.debug('method: %s, params: %.10240r', request.rpc_method, request.rpc_params) |
|
388 | 388 | except KeyError as e: |
|
389 | 389 | raise JSONRPCError(f'Incorrect JSON data. Missing {e}') |
|
390 | 390 | |
|
391 | 391 | log.debug('setup complete, now handling method:%s rpcid:%s', |
|
392 | 392 | request.rpc_method, request.rpc_id, ) |
|
393 | 393 | |
|
394 | 394 | |
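Putting the checks in ``setup_request`` together, a minimal request body must arrive as a POST with a Content-Length, contain valid JSON, and carry an ``id``, a ``method``, an ``args`` dict, and either ``auth_token`` or the older ``api_key``. A hedged client sketch (URL and token are placeholders; the endpoint path depends on the configured plugin ``url``):

.. code-block:: python

    import json
    import urllib.request

    payload = {
        "id": 1,                      # echoed back as rpc_id
        "auth_token": "<API_TOKEN>",  # 'api_key' is accepted as an older alias
        "method": "get_ip",
        "args": {},                   # non-dict values are coerced to {}
    }
    req = urllib.request.Request(
        "https://rhodecode.example.com/_admin/api",  # assumed API URL
        data=json.dumps(payload).encode("utf-8"),
        headers={"Content-Type": "application/json"})
    reply = json.load(urllib.request.urlopen(req))   # {'id', 'result', 'error'}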
|
395 | 395 | class RoutePredicate(object): |
|
396 | 396 | def __init__(self, val, config): |
|
397 | 397 | self.val = val |
|
398 | 398 | |
|
399 | 399 | def text(self): |
|
400 | 400 | return f'jsonrpc route = {self.val}' |
|
401 | 401 | |
|
402 | 402 | phash = text |
|
403 | 403 | |
|
404 | 404 | def __call__(self, info, request): |
|
405 | 405 | if self.val: |
|
406 | 406 | # potentially setup and bootstrap our call |
|
407 | 407 | setup_request(request) |
|
408 | 408 | |
|
409 | 409 | # Always return True so that even if it isn't a valid RPC it |
|
410 | 410 | # will fall through to the underlying handlers like notfound_view |
|
411 | 411 | return True |
|
412 | 412 | |
|
413 | 413 | |
|
414 | 414 | class NotFoundPredicate(object): |
|
415 | 415 | def __init__(self, val, config): |
|
416 | 416 | self.val = val |
|
417 | 417 | self.methods = config.registry.jsonrpc_methods |
|
418 | 418 | |
|
419 | 419 | def text(self): |
|
420 | 420 | return f'jsonrpc method not found = {self.val}' |
|
421 | 421 | |
|
422 | 422 | phash = text |
|
423 | 423 | |
|
424 | 424 | def __call__(self, info, request): |
|
425 | 425 | return hasattr(request, 'rpc_method') |
|
426 | 426 | |
|
427 | 427 | |
|
428 | 428 | class MethodPredicate(object): |
|
429 | 429 | def __init__(self, val, config): |
|
430 | 430 | self.method = val |
|
431 | 431 | |
|
432 | 432 | def text(self): |
|
433 | 433 | return f'jsonrpc method = {self.method}' |
|
434 | 434 | |
|
435 | 435 | phash = text |
|
436 | 436 | |
|
437 | 437 | def __call__(self, context, request): |
|
438 | 438 | # we need to explicitly return False here, so pyramid doesn't try to |
|
439 | 439 | # execute our view directly. We need our main handler to execute things |
|
440 | 440 | return getattr(request, 'rpc_method') == self.method |
|
441 | 441 | |
|
442 | 442 | |
|
443 | 443 | def add_jsonrpc_method(config, view, **kwargs): |
|
444 | 444 | # pop the method name |
|
445 | 445 | method = kwargs.pop('method', None) |
|
446 | 446 | |
|
447 | 447 | if method is None: |
|
448 | 448 | raise ConfigurationError( |
|
449 | 449 | 'Cannot register a JSON-RPC method without specifying the "method"') |
|
450 | 450 | |
|
451 | 451 | # we define custom predicate, to enable to detect conflicting methods, |
|
452 | 452 | # those predicates are kind of "translation" from the decorator variables |
|
453 | 453 | # to internal predicates names |
|
454 | 454 | |
|
455 | 455 | kwargs['jsonrpc_method'] = method |
|
456 | 456 | |
|
457 | 457 | # register our view into global view store for validation |
|
458 | 458 | config.registry.jsonrpc_methods[method] = view |
|
459 | 459 | |
|
460 | 460 | # we're using our main request_view handler, here, so each method |
|
461 | 461 | # has a unified handler for itself |
|
462 | 462 | config.add_view(request_view, route_name='apiv2', **kwargs) |
|
463 | 463 | |
|
464 | 464 | |
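Because ``add_jsonrpc_method`` is registered as a config directive (see ``includeme`` below), methods can also be wired up imperatively instead of via the decorator. A hypothetical sketch:

.. code-block:: python

    def ping(request, apiuser):
        return 'pong'

    def includeme(config):
        # equivalent to decorating ping with @jsonrpc_method(method='ping');
        # the jsonrpc_method predicate lets Pyramid flag conflicting names
        config.add_jsonrpc_method(view=ping, method='ping')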
|
465 | 465 | class jsonrpc_method(object): |
|
466 | 466 | """ |
|
467 | 467 | decorator that works similarly to the @add_view_config decorator, |
|
468 | 468 | but tailored for our JSON RPC |
|
469 | 469 | """ |
|
470 | 470 | |
|
471 | 471 | venusian = venusian # for testing injection |
|
472 | 472 | |
|
473 | 473 | def __init__(self, method=None, **kwargs): |
|
474 | 474 | self.method = method |
|
475 | 475 | self.kwargs = kwargs |
|
476 | 476 | |
|
477 | 477 | def __call__(self, wrapped): |
|
478 | 478 | kwargs = self.kwargs.copy() |
|
479 | 479 | kwargs['method'] = self.method or wrapped.__name__ |
|
480 | 480 | depth = kwargs.pop('_depth', 0) |
|
481 | 481 | |
|
482 | 482 | def callback(context, name, ob): |
|
483 | 483 | config = context.config.with_package(info.module) |
|
484 | 484 | config.add_jsonrpc_method(view=ob, **kwargs) |
|
485 | 485 | |
|
486 | 486 | info = venusian.attach(wrapped, callback, category='pyramid', |
|
487 | 487 | depth=depth + 1) |
|
488 | 488 | if info.scope == 'class': |
|
489 | 489 | # ensure that attr is set if decorating a class method |
|
490 | 490 | kwargs.setdefault('attr', wrapped.__name__) |
|
491 | 491 | |
|
492 | 492 | kwargs['_info'] = info.codeinfo # fbo action_method |
|
493 | 493 | return wrapped |
|
494 | 494 | |
|
495 | 495 | |
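Usage follows the same pattern as Pyramid's view decorators: with no argument, the wrapped function's name becomes the RPC method name, and an explicit ``method=`` overrides it. A hypothetical example:

.. code-block:: python

    @jsonrpc_method()
    def get_server_time(request, apiuser):
        ...  # exposed as 'get_server_time'

    @jsonrpc_method(method='server_echo')
    def _echo_impl(request, apiuser, msg):
        return msg  # exposed as 'server_echo', not '_echo_impl'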
|
496 | 496 | class jsonrpc_deprecated_method(object): |
|
497 | 497 | """ |
|
498 | 498 | Marks a method as deprecated, adds a log.warning, and injects a special key into |
|
499 | 499 | the request variable to mark the method as deprecated. |
|
500 | 500 | Also injects a special docstring that extract_docs will catch to mark |
|
501 | 501 | the method as deprecated. |
|
502 | 502 | |
|
503 | 503 | :param use_method: specify which method should be used instead of |
|
504 | 504 | the decorated one |
|
505 | 505 | |
|
506 | 506 | Use like:: |
|
507 | 507 | |
|
508 | 508 | @jsonrpc_method() |
|
509 | 509 | @jsonrpc_deprecated_method(use_method='new_func', deprecated_at_version='3.0.0') |
|
510 | 510 | def old_func(request, apiuser, arg1, arg2): |
|
511 | 511 | ... |
|
512 | 512 | """ |
|
513 | 513 | |
|
514 | 514 | def __init__(self, use_method, deprecated_at_version): |
|
515 | 515 | self.use_method = use_method |
|
516 | 516 | self.deprecated_at_version = deprecated_at_version |
|
517 | 517 | self.deprecated_msg = '' |
|
518 | 518 | |
|
519 | 519 | def __call__(self, func): |
|
520 | 520 | self.deprecated_msg = 'Please use method `{method}` instead.'.format( |
|
521 | 521 | method=self.use_method) |
|
522 | 522 | |
|
523 | 523 | docstring = """\n |
|
524 | 524 | .. deprecated:: {version} |
|
525 | 525 | |
|
526 | 526 | {deprecation_message} |
|
527 | 527 | |
|
528 | 528 | {original_docstring} |
|
529 | 529 | """ |
|
530 | 530 | func.__doc__ = docstring.format( |
|
531 | 531 | version=self.deprecated_at_version, |
|
532 | 532 | deprecation_message=self.deprecated_msg, |
|
533 | 533 | original_docstring=func.__doc__) |
|
534 | 534 | return decorator.decorator(self.__wrapper, func) |
|
535 | 535 | |
|
536 | 536 | def __wrapper(self, func, *fargs, **fkwargs): |
|
537 | 537 | log.warning('DEPRECATED API CALL on function %s, please ' |
|
538 | 538 | 'use `%s` instead', func, self.use_method) |
|
539 | 539 | # alter function docstring to mark as deprecated; this is picked up |
|
540 | 540 | # via the fabric file that generates the API docs. |
|
541 | 541 | result = func(*fargs, **fkwargs) |
|
542 | 542 | |
|
543 | 543 | request = fargs[0] |
|
544 | 544 | request.rpc_deprecation = 'DEPRECATED METHOD ' + self.deprecated_msg |
|
545 | 545 | return result |
|
546 | 546 | |
|
547 | 547 | |
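Note that the wrapper is applied with ``decorator.decorator`` rather than ``functools.wraps``: the ``decorator`` package rebuilds the wrapper with the original call signature, which matters here because ``request_view`` introspects the argspec to decide which parameters are required. A quick check, assuming the ``decorator`` package behaves as documented:

.. code-block:: python

    import inspect
    import decorator

    def _wrapper(func, *args, **kwargs):
        return func(*args, **kwargs)

    def old_func(request, apiuser, arg1, arg2=None):
        pass

    wrapped = decorator.decorator(_wrapper, old_func)
    print(inspect.getfullargspec(wrapped).args)
    # ['request', 'apiuser', 'arg1', 'arg2'] -- signature survives wrapping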
|
548 | 548 | def add_api_methods(config): |
|
549 | 549 | from rhodecode.api.views import ( |
|
550 | 550 | deprecated_api, gist_api, pull_request_api, repo_api, repo_group_api, |
|
551 | 551 | server_api, search_api, testing_api, user_api, user_group_api) |
|
552 | 552 | |
|
553 | 553 | config.scan('rhodecode.api.views') |
|
554 | 554 | |
|
555 | 555 | |
|
556 | 556 | def includeme(config): |
|
557 | 557 | plugin_module = 'rhodecode.api' |
|
558 | 558 | plugin_settings = get_plugin_settings( |
|
559 | 559 | plugin_module, config.registry.settings) |
|
560 | 560 | |
|
561 | 561 | if not hasattr(config.registry, 'jsonrpc_methods'): |
|
562 | 562 | config.registry.jsonrpc_methods = OrderedDict() |
|
563 | 563 | |
|
564 | 564 | # match filter by given method only |
|
565 | 565 | config.add_view_predicate('jsonrpc_method', MethodPredicate) |
|
566 | 566 | config.add_view_predicate('jsonrpc_method_not_found', NotFoundPredicate) |
|
567 | 567 | |
|
568 | 568 | config.add_renderer(DEFAULT_RENDERER, ExtJsonRenderer()) |
|
569 | 569 | config.add_directive('add_jsonrpc_method', add_jsonrpc_method) |
|
570 | 570 | |
|
571 | 571 | config.add_route_predicate( |
|
572 | 572 | 'jsonrpc_call', RoutePredicate) |
|
573 | 573 | |
|
574 | 574 | config.add_route( |
|
575 | 575 | 'apiv2', plugin_settings.get('url', DEFAULT_URL), jsonrpc_call=True) |
|
576 | 576 | |
|
577 | 577 | # register some exception handling view |
|
578 | 578 | config.add_view(exception_view, context=JSONRPCBaseError) |
|
579 | 579 | config.add_notfound_view(exception_view, jsonrpc_method_not_found=True) |
|
580 | 580 | |
|
581 | 581 | add_api_methods(config) |
@@ -1,423 +1,423 b'' | |||
|
1 | 1 | # Copyright (C) 2011-2023 RhodeCode GmbH |
|
2 | 2 | # |
|
3 | 3 | # This program is free software: you can redistribute it and/or modify |
|
4 | 4 | # it under the terms of the GNU Affero General Public License, version 3 |
|
5 | 5 | # (only), as published by the Free Software Foundation. |
|
6 | 6 | # |
|
7 | 7 | # This program is distributed in the hope that it will be useful, |
|
8 | 8 | # but WITHOUT ANY WARRANTY; without even the implied warranty of |
|
9 | 9 | # MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the |
|
10 | 10 | # GNU General Public License for more details. |
|
11 | 11 | # |
|
12 | 12 | # You should have received a copy of the GNU Affero General Public License |
|
13 | 13 | # along with this program. If not, see <http://www.gnu.org/licenses/>. |
|
14 | 14 | # |
|
15 | 15 | # This program is dual-licensed. If you wish to learn more about the |
|
16 | 16 | # RhodeCode Enterprise Edition, including its added features, Support services, |
|
17 | 17 | # and proprietary license terms, please see https://rhodecode.com/licenses/ |
|
18 | 18 | |
|
19 | 19 | import logging |
|
20 | 20 | import itertools |
|
21 | 21 | import base64 |
|
22 | 22 | |
|
23 | 23 | from rhodecode.api import ( |
|
24 | 24 | jsonrpc_method, JSONRPCError, JSONRPCForbidden, find_methods) |
|
25 | 25 | |
|
26 | 26 | from rhodecode.api.utils import ( |
|
27 | 27 | Optional, OAttr, has_superadmin_permission, get_user_or_error) |
|
28 | 28 | from rhodecode.lib.utils import repo2db_mapper, get_rhodecode_repo_store_path |
|
29 | 29 | from rhodecode.lib import system_info |
|
30 | 30 | from rhodecode.lib import user_sessions |
|
31 | 31 | from rhodecode.lib import exc_tracking |
|
32 | 32 | from rhodecode.lib.ext_json import json |
|
33 | 33 | from rhodecode.lib.utils2 import safe_int |
|
34 | 34 | from rhodecode.model.db import UserIpMap |
|
35 | 35 | from rhodecode.model.scm import ScmModel |
|
36 | from rhodecode.apps.file_store import utils | |
|
36 | from rhodecode.apps.file_store import utils as store_utils | |
|
37 | 37 | from rhodecode.apps.file_store.exceptions import FileNotAllowedException, \ |
|
38 | 38 | FileOverSizeException |
|
39 | 39 | |
|
40 | 40 | log = logging.getLogger(__name__) |
|
41 | 41 | |
|
42 | 42 | |
|
43 | 43 | @jsonrpc_method() |
|
44 | 44 | def get_server_info(request, apiuser): |
|
45 | 45 | """ |
|
46 | 46 | Returns the |RCE| server information. |
|
47 | 47 | |
|
48 | 48 | This includes the running version of |RCE| and all installed |
|
49 | 49 | packages. This command takes the following options: |
|
50 | 50 | |
|
51 | 51 | :param apiuser: This is filled automatically from the |authtoken|. |
|
52 | 52 | :type apiuser: AuthUser |
|
53 | 53 | |
|
54 | 54 | Example output: |
|
55 | 55 | |
|
56 | 56 | .. code-block:: bash |
|
57 | 57 | |
|
58 | 58 | id : <id_given_in_input> |
|
59 | 59 | result : { |
|
60 | 60 | 'modules': [<module name>,...] |
|
61 | 61 | 'py_version': <python version>, |
|
62 | 62 | 'platform': <platform type>, |
|
63 | 63 | 'rhodecode_version': <rhodecode version> |
|
64 | 64 | } |
|
65 | 65 | error : null |
|
66 | 66 | """ |
|
67 | 67 | |
|
68 | 68 | if not has_superadmin_permission(apiuser): |
|
69 | 69 | raise JSONRPCForbidden() |
|
70 | 70 | |
|
71 | 71 | server_info = ScmModel().get_server_info(request.environ) |
|
72 | 72 | # rhodecode-index requires those |
|
73 | 73 | |
|
74 | 74 | server_info['index_storage'] = server_info['search']['value']['location'] |
|
75 | 75 | server_info['storage'] = server_info['storage']['value']['path'] |
|
76 | 76 | |
|
77 | 77 | return server_info |
|
78 | 78 | |
|
79 | 79 | |
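A hedged sketch of consuming this method's response envelope; ``call_api`` is a hypothetical helper that POSTs the JSON-RPC payload as shown earlier in this changeset:

.. code-block:: python

    resp = call_api("get_server_info", args={})   # hypothetical helper
    if resp["error"] is not None:
        raise RuntimeError(f"API error: {resp['error']}")
    info = resp["result"]
    # 'index_storage' and 'storage' are the flattened keys added above
    print(info["rhodecode_version"], info["index_storage"], info["storage"])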
|
80 | 80 | @jsonrpc_method() |
|
81 | 81 | def get_repo_store(request, apiuser): |
|
82 | 82 | """ |
|
83 | 83 | Returns the |RCE| repository storage information. |
|
84 | 84 | |
|
85 | 85 | :param apiuser: This is filled automatically from the |authtoken|. |
|
86 | 86 | :type apiuser: AuthUser |
|
87 | 87 | |
|
88 | 88 | Example output: |
|
89 | 89 | |
|
90 | 90 | .. code-block:: bash |
|
91 | 91 | |
|
92 | 92 | id : <id_given_in_input> |
|
93 | 93 | result : { |
|
94 | 94 | 'path': '<path_to_repo_store>' |
|
98 | 98 | } |
|
99 | 99 | error : null |
|
100 | 100 | """ |
|
101 | 101 | |
|
102 | 102 | if not has_superadmin_permission(apiuser): |
|
103 | 103 | raise JSONRPCForbidden() |
|
104 | 104 | |
|
105 | 105 | path = get_rhodecode_repo_store_path() |
|
106 | 106 | return {"path": path} |
|
107 | 107 | |
|
108 | 108 | |
|
109 | 109 | @jsonrpc_method() |
|
110 | 110 | def get_ip(request, apiuser, userid=Optional(OAttr('apiuser'))): |
|
111 | 111 | """ |
|
112 | 112 | Displays the IP Address as seen from the |RCE| server. |
|
113 | 113 | |
|
114 | 114 | * This command displays the IP Address, as well as all the defined IP |
|
115 | 115 | addresses for the specified user. If the ``userid`` is not set, the |
|
116 | 116 | data returned is for the user calling the method. |
|
117 | 117 | |
|
118 | 118 | This command can only be run using an |authtoken| with admin rights to |
|
119 | 119 | the specified repository. |
|
120 | 120 | |
|
121 | 121 | This command takes the following options: |
|
122 | 122 | |
|
123 | 123 | :param apiuser: This is filled automatically from |authtoken|. |
|
124 | 124 | :type apiuser: AuthUser |
|
125 | 125 | :param userid: Sets the userid for which associated IP Address data |
|
126 | 126 | is returned. |
|
127 | 127 | :type userid: Optional(str or int) |
|
128 | 128 | |
|
129 | 129 | Example output: |
|
130 | 130 | |
|
131 | 131 | .. code-block:: bash |
|
132 | 132 | |
|
133 | 133 | id : <id_given_in_input> |
|
134 | 134 | result : { |
|
135 | 135 | "server_ip_addr": "<ip_from_client>", |
|
136 | 136 | "user_ips": [ |
|
137 | 137 | { |
|
138 | 138 | "ip_addr": "<ip_with_mask>", |
|
139 | 139 | "ip_range": ["<start_ip>", "<end_ip>"], |
|
140 | 140 | }, |
|
141 | 141 | ... |
|
142 | 142 | ] |
|
143 | 143 | } |
|
144 | 144 | |
|
145 | 145 | """ |
|
146 | 146 | if not has_superadmin_permission(apiuser): |
|
147 | 147 | raise JSONRPCForbidden() |
|
148 | 148 | |
|
149 | 149 | userid = Optional.extract(userid, evaluate_locals=locals()) |
|
150 | 150 | userid = getattr(userid, 'user_id', userid) |
|
151 | 151 | |
|
152 | 152 | user = get_user_or_error(userid) |
|
153 | 153 | ips = UserIpMap.query().filter(UserIpMap.user == user).all() |
|
154 | 154 | return { |
|
155 | 155 | 'server_ip_addr': request.rpc_ip_addr, |
|
156 | 156 | 'user_ips': ips |
|
157 | 157 | } |
|
158 | 158 | |
|
159 | 159 | |
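The ``Optional(...)`` wrapper used in these signatures lets the API tell "argument omitted" apart from "argument explicitly passed", with ``Optional.extract`` doing the unwrapping. A simplified sketch of the idea (not RhodeCode's actual implementation, which also supports ``OAttr`` and ``evaluate_locals``):

.. code-block:: python

    class Optional:
        def __init__(self, default):
            self.default = default

        @classmethod
        def extract(cls, val):
            # unwrap only when the caller never supplied a real value
            return val.default if isinstance(val, cls) else val

    def cleanup(older_then=Optional(60)):
        return Optional.extract(older_then)

    assert cleanup() == 60                 # omitted -> default unwrapped
    assert cleanup(older_then=30) == 30    # explicit value passes through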
|
160 | 160 | @jsonrpc_method() |
|
161 | 161 | def rescan_repos(request, apiuser, remove_obsolete=Optional(False)): |
|
162 | 162 | """ |
|
163 | 163 | Triggers a rescan of the specified repositories. |
|
164 | 164 | |
|
165 | 165 | * If the ``remove_obsolete`` option is set, it also deletes repositories |
|
166 | 166 | that are found in the database but not on the file system, so called |
|
167 | 167 | "clean zombies". |
|
168 | 168 | |
|
169 | 169 | This command can only be run using an |authtoken| with admin rights to |
|
170 | 170 | the specified repository. |
|
171 | 171 | |
|
172 | 172 | This command takes the following options: |
|
173 | 173 | |
|
174 | 174 | :param apiuser: This is filled automatically from the |authtoken|. |
|
175 | 175 | :type apiuser: AuthUser |
|
176 | 176 | :param remove_obsolete: Deletes repositories from the database that |
|
177 | 177 | are not found on the filesystem. |
|
178 | 178 | :type remove_obsolete: Optional(``True`` | ``False``) |
|
179 | 179 | |
|
180 | 180 | Example output: |
|
181 | 181 | |
|
182 | 182 | .. code-block:: bash |
|
183 | 183 | |
|
184 | 184 | id : <id_given_in_input> |
|
185 | 185 | result : { |
|
186 | 186 | 'added': [<added repository name>,...] |
|
187 | 187 | 'removed': [<removed repository name>,...] |
|
188 | 188 | } |
|
189 | 189 | error : null |
|
190 | 190 | |
|
191 | 191 | Example error output: |
|
192 | 192 | |
|
193 | 193 | .. code-block:: bash |
|
194 | 194 | |
|
195 | 195 | id : <id_given_in_input> |
|
196 | 196 | result : null |
|
197 | 197 | error : { |
|
198 | 198 | 'Error occurred during rescan repositories action' |
|
199 | 199 | } |
|
200 | 200 | |
|
201 | 201 | """ |
|
202 | 202 | if not has_superadmin_permission(apiuser): |
|
203 | 203 | raise JSONRPCForbidden() |
|
204 | 204 | |
|
205 | 205 | try: |
|
206 | 206 | rm_obsolete = Optional.extract(remove_obsolete) |
|
207 | 207 | added, removed = repo2db_mapper(ScmModel().repo_scan(), |
|
208 | 208 | remove_obsolete=rm_obsolete, force_hooks_rebuild=True) |
|
209 | 209 | return {'added': added, 'removed': removed} |
|
210 | 210 | except Exception: |
|
211 | 211 | log.exception('Failed to run repo rescan') |
|
212 | 212 | raise JSONRPCError( |
|
213 | 213 | 'Error occurred during rescan repositories action' |
|
214 | 214 | ) |
|
215 | 215 | |
|
216 | 216 | |
|
217 | 217 | @jsonrpc_method() |
|
218 | 218 | def cleanup_sessions(request, apiuser, older_then=Optional(60)): |
|
219 | 219 | """ |
|
220 | 220 | Triggers a session cleanup action. |
|
221 | 221 | |
|
222 | 222 | If the ``older_then`` option is set, only sessions that haven't been |
|
223 | 223 | accessed in the given number of days will be removed. |
|
224 | 224 | |
|
225 | 225 | This command can only be run using an |authtoken| with admin rights to |
|
226 | 226 | the specified repository. |
|
227 | 227 | |
|
228 | 228 | This command takes the following options: |
|
229 | 229 | |
|
230 | 230 | :param apiuser: This is filled automatically from the |authtoken|. |
|
231 | 231 | :type apiuser: AuthUser |
|
232 | 232 | :param older_then: Deletes sessions that haven't been accessed |
|
233 | 233 | in the given number of days. |
|
234 | 234 | :type older_then: Optional(int) |
|
235 | 235 | |
|
236 | 236 | Example output: |
|
237 | 237 | |
|
238 | 238 | .. code-block:: bash |
|
239 | 239 | |
|
240 | 240 | id : <id_given_in_input> |
|
241 | 241 | result: { |
|
242 | 242 | "backend": "<type of backend>", |
|
243 | 243 | "sessions_removed": <number_of_removed_sessions> |
|
244 | 244 | } |
|
245 | 245 | error : null |
|
246 | 246 | |
|
247 | 247 | Example error output: |
|
248 | 248 | |
|
249 | 249 | .. code-block:: bash |
|
250 | 250 | |
|
251 | 251 | id : <id_given_in_input> |
|
252 | 252 | result : null |
|
253 | 253 | error : { |
|
254 | 254 | 'Error occurred during session cleanup' |
|
255 | 255 | } |
|
256 | 256 | |
|
257 | 257 | """ |
|
258 | 258 | if not has_superadmin_permission(apiuser): |
|
259 | 259 | raise JSONRPCForbidden() |
|
260 | 260 | |
|
261 | 261 | older_then = safe_int(Optional.extract(older_then)) or 60 |
|
262 | 262 | older_than_seconds = 60 * 60 * 24 * older_then |
|
263 | 263 | |
|
264 | 264 | config = system_info.rhodecode_config().get_value()['value']['config'] |
|
265 | 265 | session_model = user_sessions.get_session_handler( |
|
266 | 266 | config.get('beaker.session.type', 'memory'))(config) |
|
267 | 267 | |
|
268 | 268 | backend = session_model.SESSION_TYPE |
|
269 | 269 | try: |
|
270 | 270 | cleaned = session_model.clean_sessions( |
|
271 | 271 | older_than_seconds=older_than_seconds) |
|
272 | 272 | return {'sessions_removed': cleaned, 'backend': backend} |
|
273 | 273 | except user_sessions.CleanupCommand as msg: |
|
274 | 274 | return {'cleanup_command': str(msg), 'backend': backend} |
|
275 | 275 | except Exception as e: |
|
276 | 276 | log.exception('Failed session cleanup') |
|
277 | 277 | raise JSONRPCError( |
|
278 | 278 | 'Error occurred during session cleanup' |
|
279 | 279 | ) |
|
280 | 280 | |
|
281 | 281 | |
|
282 | 282 | @jsonrpc_method() |
|
283 | 283 | def get_method(request, apiuser, pattern=Optional('*')): |
|
284 | 284 | """ |
|
285 | 285 | Returns a list of all available API methods. By default the match pattern |
|
286 | 286 | is "*", but any other pattern can be specified, e.g. *comment* will return |
|
287 | 287 | all methods with "comment" inside them. If just a single method is matched, |
|
288 | 288 | the returned data will also include the method specification. |
|
289 | 289 | |
|
290 | 290 | This command can only be run using an |authtoken| with admin rights to |
|
291 | 291 | the specified repository. |
|
292 | 292 | |
|
293 | 293 | This command takes the following options: |
|
294 | 294 | |
|
295 | 295 | :param apiuser: This is filled automatically from the |authtoken|. |
|
296 | 296 | :type apiuser: AuthUser |
|
297 | 297 | :param pattern: pattern to match method names against |
|
298 | 298 | :type pattern: Optional("*") |
|
299 | 299 | |
|
300 | 300 | Example output: |
|
301 | 301 | |
|
302 | 302 | .. code-block:: bash |
|
303 | 303 | |
|
304 | 304 | id : <id_given_in_input> |
|
305 | 305 | "result": [ |
|
306 | 306 | "changeset_comment", |
|
307 | 307 | "comment_pull_request", |
|
308 | 308 | "comment_commit" |
|
309 | 309 | ] |
|
310 | 310 | error : null |
|
311 | 311 | |
|
312 | 312 | .. code-block:: bash |
|
313 | 313 | |
|
314 | 314 | id : <id_given_in_input> |
|
315 | 315 | "result": [ |
|
316 | 316 | "comment_commit", |
|
317 | 317 | { |
|
318 | 318 | "apiuser": "<RequiredType>", |
|
319 | 319 | "comment_type": "<Optional:u'note'>", |
|
320 | 320 | "commit_id": "<RequiredType>", |
|
321 | 321 | "message": "<RequiredType>", |
|
322 | 322 | "repoid": "<RequiredType>", |
|
323 | 323 | "request": "<RequiredType>", |
|
324 | 324 | "resolves_comment_id": "<Optional:None>", |
|
325 | 325 | "status": "<Optional:None>", |
|
326 | 326 | "userid": "<Optional:<OptionalAttr:apiuser>>" |
|
327 | 327 | } |
|
328 | 328 | ] |
|
329 | 329 | error : null |
|
330 | 330 | """ |
|
331 | from rhodecode.config.patches import inspect_getargspec |
|
332 | inspect = inspect_getargspec() |
|
331 | from rhodecode.config import patches | |
|
332 | inspect = patches.inspect_getargspec() | |
|
333 | 333 | |
|
334 | 334 | if not has_superadmin_permission(apiuser): |
|
335 | 335 | raise JSONRPCForbidden() |
|
336 | 336 | |
|
337 | 337 | pattern = Optional.extract(pattern) |
|
338 | 338 | |
|
339 | 339 | matches = find_methods(request.registry.jsonrpc_methods, pattern) |
|
340 | 340 | |
|
341 | 341 | args_desc = [] |
|
342 | 342 | matches_keys = list(matches.keys()) |
|
343 | 343 | if len(matches_keys) == 1: |
|
344 | 344 | func = matches[matches_keys[0]] |
|
345 | 345 | |
|
346 | 346 | argspec = inspect.getargspec(func) |
|
347 | 347 | arglist = argspec[0] |
|
348 | 348 | defaults = list(map(repr, argspec[3] or [])) |
|
349 | 349 | |
|
350 | 350 | default_empty = '<RequiredType>' |
|
351 | 351 | |
|
352 | 352 | # kw arguments required by this method |
|
353 | 353 | func_kwargs = dict(itertools.zip_longest( |
|
354 | 354 | reversed(arglist), reversed(defaults), fillvalue=default_empty)) |
|
355 | 355 | args_desc.append(func_kwargs) |
|
356 | 356 | |
|
357 | 357 | return matches_keys + args_desc |
|
358 | 358 | |
|
359 | 359 | |
|
360 | 360 | @jsonrpc_method() |
|
361 | 361 | def store_exception(request, apiuser, exc_data_json, prefix=Optional('rhodecode')): |
|
362 | 362 | """ |
|
363 | 363 | Stores sent exception inside the built-in exception tracker in |RCE| server. |
|
364 | 364 | |
|
365 | 365 | This command can only be run using an |authtoken| with admin rights to |
|
366 | 366 | the specified repository. |
|
367 | 367 | |
|
368 | 368 | This command takes the following options: |
|
369 | 369 | |
|
370 | 370 | :param apiuser: This is filled automatically from the |authtoken|. |
|
371 | 371 | :type apiuser: AuthUser |
|
372 | 372 | |
|
373 | 373 | :param exc_data_json: JSON data with the exception, e.g. |
|
374 | 374 | {"exc_traceback": "Value `1` is not allowed", "exc_type_name": "ValueError"} |
|
375 | 375 | :type exc_data_json: JSON data |
|
376 | 376 | |
|
377 | 377 | :param prefix: prefix for error type, e.g 'rhodecode', 'vcsserver', 'rhodecode-tools' |
|
378 | 378 | :type prefix: Optional("rhodecode") |
|
379 | 379 | |
|
380 | 380 | Example output: |
|
381 | 381 | |
|
382 | 382 | .. code-block:: bash |
|
383 | 383 | |
|
384 | 384 | id : <id_given_in_input> |
|
385 | 385 | "result": { |
|
386 | 386 | "exc_id": 139718459226384, |
|
387 | 387 | "exc_url": "http://localhost:8080/_admin/settings/exceptions/139718459226384" |
|
388 | 388 | } |
|
389 | 389 | error : null |
|
390 | 390 | """ |
|
391 | 391 | if not has_superadmin_permission(apiuser): |
|
392 | 392 | raise JSONRPCForbidden() |
|
393 | 393 | |
|
394 | 394 | prefix = Optional.extract(prefix) |
|
395 | 395 | exc_id = exc_tracking.generate_id() |
|
396 | 396 | |
|
397 | 397 | try: |
|
398 | 398 | exc_data = json.loads(exc_data_json) |
|
399 | 399 | except Exception: |
|
400 | 400 | log.error('Failed to parse JSON: %r', exc_data_json) |
|
401 | 401 | raise JSONRPCError('Failed to parse JSON data from exc_data_json field. ' |
|
402 | 402 | 'Please make sure it contains valid JSON.') |
|
403 | 403 | |
|
404 | 404 | try: |
|
405 | 405 | exc_traceback = exc_data['exc_traceback'] |
|
406 | 406 | exc_type_name = exc_data['exc_type_name'] |
|
407 | 407 | exc_value = '' |
|
408 | 408 | except KeyError as err: |
|
409 | 409 | raise JSONRPCError( |
|
410 | 410 | f'Missing exc_traceback or exc_type_name ' |
|
411 | 411 | f'in exc_data_json field. Missing: {err}') |
|
412 | 412 | |
|
413 | 413 | class ExcType: |
|
414 | 414 | __name__ = exc_type_name |
|
415 | 415 | |
|
416 | 416 | exc_info = (ExcType(), exc_value, exc_traceback) |
|
417 | 417 | |
|
418 | 418 | exc_tracking._store_exception( |
|
419 | 419 | exc_id=exc_id, exc_info=exc_info, prefix=prefix) |
|
420 | 420 | |
|
421 | 421 | exc_url = request.route_url( |
|
422 | 422 | 'admin_settings_exception_tracker_show', exception_id=exc_id) |
|
423 | 423 | return {'exc_id': exc_id, 'exc_url': exc_url} |
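A sketch of building the ``exc_data_json`` argument on the sending side; only the two keys read above are required, everything else is illustrative:

.. code-block:: python

    import json
    import traceback

    try:
        1 / 0
    except ZeroDivisionError as exc:
        exc_data_json = json.dumps({
            "exc_traceback": traceback.format_exc(),  # required key
            "exc_type_name": type(exc).__name__,      # required key
        })
    # send exc_data_json (optionally with prefix='vcsserver') as the API args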
@@ -1,1093 +1,1124 b'' | |||
|
1 | 1 | # Copyright (C) 2016-2023 RhodeCode GmbH |
|
2 | 2 | # |
|
3 | 3 | # This program is free software: you can redistribute it and/or modify |
|
4 | 4 | # it under the terms of the GNU Affero General Public License, version 3 |
|
5 | 5 | # (only), as published by the Free Software Foundation. |
|
6 | 6 | # |
|
7 | 7 | # This program is distributed in the hope that it will be useful, |
|
8 | 8 | # but WITHOUT ANY WARRANTY; without even the implied warranty of |
|
9 | 9 | # MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the |
|
10 | 10 | # GNU General Public License for more details. |
|
11 | 11 | # |
|
12 | 12 | # You should have received a copy of the GNU Affero General Public License |
|
13 | 13 | # along with this program. If not, see <http://www.gnu.org/licenses/>. |
|
14 | 14 | # |
|
15 | 15 | # This program is dual-licensed. If you wish to learn more about the |
|
16 | 16 | # RhodeCode Enterprise Edition, including its added features, Support services, |
|
17 | 17 | # and proprietary license terms, please see https://rhodecode.com/licenses/ |
|
18 | 18 | |
|
19 | 19 | |
|
20 | 20 | from rhodecode.apps._base import ADMIN_PREFIX |
|
21 | 21 | from rhodecode.apps._base.navigation import includeme as nav_includeme |
|
22 | 22 | from rhodecode.apps.admin.views.main_views import AdminMainView |
|
23 | 23 | |
|
24 | 24 | |
|
25 | 25 | def admin_routes(config): |
|
26 | 26 | """ |
|
27 | 27 | Admin prefixed routes |
|
28 | 28 | """ |
|
29 | 29 | from rhodecode.apps.admin.views.audit_logs import AdminAuditLogsView |
|
30 | 30 | from rhodecode.apps.admin.views.artifacts import AdminArtifactsView |
|
31 | 31 | from rhodecode.apps.admin.views.automation import AdminAutomationView |
|
32 | 32 | from rhodecode.apps.admin.views.scheduler import AdminSchedulerView |
|
33 | 33 | from rhodecode.apps.admin.views.defaults import AdminDefaultSettingsView |
|
34 | 34 | from rhodecode.apps.admin.views.exception_tracker import ExceptionsTrackerView |
|
35 | 35 | from rhodecode.apps.admin.views.open_source_licenses import OpenSourceLicensesAdminSettingsView |
|
36 | 36 | from rhodecode.apps.admin.views.permissions import AdminPermissionsView |
|
37 | 37 | from rhodecode.apps.admin.views.process_management import AdminProcessManagementView |
|
38 | 38 | from rhodecode.apps.admin.views.repo_groups import AdminRepoGroupsView |
|
39 | 39 | from rhodecode.apps.admin.views.repositories import AdminReposView |
|
40 | 40 | from rhodecode.apps.admin.views.sessions import AdminSessionSettingsView |
|
41 | 41 | from rhodecode.apps.admin.views.settings import AdminSettingsView |
|
42 | 42 | from rhodecode.apps.admin.views.svn_config import AdminSvnConfigView |
|
43 | 43 | from rhodecode.apps.admin.views.system_info import AdminSystemInfoSettingsView |
|
44 | 44 | from rhodecode.apps.admin.views.user_groups import AdminUserGroupsView |
|
45 | 45 | from rhodecode.apps.admin.views.users import AdminUsersView, UsersView |
|
46 | ||
|
46 | from rhodecode.apps.admin.views.security import AdminSecurityView | |
|
47 | ||
|
48 | # Security EE feature | |
|
49 | ||
|
50 | config.add_route( | |
|
51 | 'admin_security', | |
|
52 | pattern='/security') | |
|
53 | config.add_view( | |
|
54 | AdminSecurityView, | |
|
55 | attr='security', | |
|
56 | route_name='admin_security', request_method='GET', | |
|
57 | renderer='rhodecode:templates/admin/security/security.mako') | |
|
58 | ||
|
59 | config.add_route( | |
|
60 | name='admin_security_update', | |
|
61 | pattern='/security/update') | |
|
62 | config.add_view( | |
|
63 | AdminSecurityView, | |
|
64 | attr='security_update', | |
|
65 | route_name='admin_security_update', request_method='POST', | |
|
66 | renderer='rhodecode:templates/admin/security/security.mako') | |
|
67 | ||
|
68 | config.add_route( | |
|
69 | name='admin_security_modify_allowed_vcs_client_versions', | |
|
70 | pattern=ADMIN_PREFIX + '/security/modify/allowed_vcs_client_versions') | |
|
71 | config.add_view( | |
|
72 | AdminSecurityView, | |
|
73 | attr='vcs_whitelisted_client_versions_edit', | |
|
74 | route_name='admin_security_modify_allowed_vcs_client_versions', request_method=('GET', 'POST'), | |
|
75 | renderer='rhodecode:templates/admin/security/edit_allowed_vcs_client_versions.mako') | |
|
76 | ||
|
77 | ||
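Every entry in this module follows the same two-step convention as the security block above: ``add_route`` binds a name to a URL pattern, then ``add_view`` attaches one method of a class-based view to that route. A hypothetical new entry would look like this (route name, view attribute, and template are invented for illustration):

.. code-block:: python

    config.add_route(
        name='admin_security_audit',          # hypothetical route
        pattern='/security/audit')
    config.add_view(
        AdminSecurityView,
        attr='security_audit',                # hypothetical view method
        route_name='admin_security_audit', request_method='GET',
        renderer='rhodecode:templates/admin/security/security.mako')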
|
47 | 78 | config.add_route( |
|
48 | 79 | name='admin_audit_logs', |
|
49 | 80 | pattern='/audit_logs') |
|
50 | 81 | config.add_view( |
|
51 | 82 | AdminAuditLogsView, |
|
52 | 83 | attr='admin_audit_logs', |
|
53 | 84 | route_name='admin_audit_logs', request_method='GET', |
|
54 | 85 | renderer='rhodecode:templates/admin/admin_audit_logs.mako') |
|
55 | 86 | |
|
56 | 87 | config.add_route( |
|
57 | 88 | name='admin_audit_log_entry', |
|
58 | 89 | pattern='/audit_logs/{audit_log_id}') |
|
59 | 90 | config.add_view( |
|
60 | 91 | AdminAuditLogsView, |
|
61 | 92 | attr='admin_audit_log_entry', |
|
62 | 93 | route_name='admin_audit_log_entry', request_method='GET', |
|
63 | 94 | renderer='rhodecode:templates/admin/admin_audit_log_entry.mako') |
|
64 | 95 | |
|
65 | 96 | # Artifacts EE feature |
|
66 | 97 | config.add_route( |
|
67 | 98 | 'admin_artifacts', |
|
68 | 99 | pattern=ADMIN_PREFIX + '/artifacts') |
|
69 | 100 | config.add_route( |
|
70 | 101 | 'admin_artifacts_show_all', |
|
71 | 102 | pattern=ADMIN_PREFIX + '/artifacts') |
|
72 | 103 | config.add_view( |
|
73 | 104 | AdminArtifactsView, |
|
74 | 105 | attr='artifacts', |
|
75 | 106 | route_name='admin_artifacts', request_method='GET', |
|
76 | 107 | renderer='rhodecode:templates/admin/artifacts/artifacts.mako') |
|
77 | 108 | config.add_view( |
|
78 | 109 | AdminArtifactsView, |
|
79 | 110 | attr='artifacts', |
|
80 | 111 | route_name='admin_artifacts_show_all', request_method='GET', |
|
81 | 112 | renderer='rhodecode:templates/admin/artifacts/artifacts.mako') |
|
82 | 113 | |
|
83 | 114 | # EE views |
|
84 | 115 | config.add_route( |
|
85 | 116 | name='admin_artifacts_show_info', |
|
86 | 117 | pattern=ADMIN_PREFIX + '/artifacts/{uid}') |
|
87 | 118 | config.add_route( |
|
88 | 119 | name='admin_artifacts_delete', |
|
89 | 120 | pattern=ADMIN_PREFIX + '/artifacts/{uid}/delete') |
|
90 | 121 | config.add_route( |
|
91 | 122 | name='admin_artifacts_update', |
|
92 | 123 | pattern=ADMIN_PREFIX + '/artifacts/{uid}/update') |
|
93 | 124 | |
|
94 | 125 | # Automation EE feature |
|
95 | 126 | config.add_route( |
|
96 | 127 | 'admin_automation', |
|
97 | 128 | pattern=ADMIN_PREFIX + '/automation') |
|
98 | 129 | config.add_view( |
|
99 | 130 | AdminAutomationView, |
|
100 | 131 | attr='automation', |
|
101 | 132 | route_name='admin_automation', request_method='GET', |
|
102 | 133 | renderer='rhodecode:templates/admin/automation/automation.mako') |
|
103 | 134 | |
|
104 | 135 | # Scheduler EE feature |
|
105 | 136 | config.add_route( |
|
106 | 137 | 'admin_scheduler', |
|
107 | 138 | pattern=ADMIN_PREFIX + '/scheduler') |
|
108 | 139 | config.add_view( |
|
109 | 140 | AdminSchedulerView, |
|
110 | 141 | attr='scheduler', |
|
111 | 142 | route_name='admin_scheduler', request_method='GET', |
|
112 | 143 | renderer='rhodecode:templates/admin/scheduler/scheduler.mako') |
|
113 | 144 | |
|
114 | 145 | config.add_route( |
|
115 | 146 | name='admin_settings_open_source', |
|
116 | 147 | pattern='/settings/open_source') |
|
117 | 148 | config.add_view( |
|
118 | 149 | OpenSourceLicensesAdminSettingsView, |
|
119 | 150 | attr='open_source_licenses', |
|
120 | 151 | route_name='admin_settings_open_source', request_method='GET', |
|
121 | 152 | renderer='rhodecode:templates/admin/settings/settings.mako') |
|
122 | 153 | |
|
123 | 154 | config.add_route( |
|
124 | 155 | name='admin_settings_vcs_svn_generate_cfg', |
|
125 | 156 | pattern='/settings/vcs/svn_generate_cfg') |
|
126 | 157 | config.add_view( |
|
127 | 158 | AdminSvnConfigView, |
|
128 | 159 | attr='vcs_svn_generate_config', |
|
129 | 160 | route_name='admin_settings_vcs_svn_generate_cfg', |
|
130 | 161 | request_method='POST', renderer='json') |
|
131 | 162 | |
|
132 | 163 | config.add_route( |
|
133 | 164 | name='admin_settings_system', |
|
134 | 165 | pattern='/settings/system') |
|
135 | 166 | config.add_view( |
|
136 | 167 | AdminSystemInfoSettingsView, |
|
137 | 168 | attr='settings_system_info', |
|
138 | 169 | route_name='admin_settings_system', request_method='GET', |
|
139 | 170 | renderer='rhodecode:templates/admin/settings/settings.mako') |
|
140 | 171 | |
|
141 | 172 | config.add_route( |
|
142 | 173 | name='admin_settings_system_update', |
|
143 | 174 | pattern='/settings/system/updates') |
|
144 | 175 | config.add_view( |
|
145 | 176 | AdminSystemInfoSettingsView, |
|
146 | 177 | attr='settings_system_info_check_update', |
|
147 | 178 | route_name='admin_settings_system_update', request_method='GET', |
|
148 | 179 | renderer='rhodecode:templates/admin/settings/settings_system_update.mako') |
|
149 | 180 | |
|
150 | 181 | config.add_route( |
|
151 | 182 | name='admin_settings_exception_tracker', |
|
152 | 183 | pattern='/settings/exceptions') |
|
153 | 184 | config.add_view( |
|
154 | 185 | ExceptionsTrackerView, |
|
155 | 186 | attr='browse_exceptions', |
|
156 | 187 | route_name='admin_settings_exception_tracker', request_method='GET', |
|
157 | 188 | renderer='rhodecode:templates/admin/settings/settings.mako') |
|
158 | 189 | |
|
159 | 190 | config.add_route( |
|
160 | 191 | name='admin_settings_exception_tracker_delete_all', |
|
161 | 192 | pattern='/settings/exceptions_delete_all') |
|
162 | 193 | config.add_view( |
|
163 | 194 | ExceptionsTrackerView, |
|
164 | 195 | attr='exception_delete_all', |
|
165 | 196 | route_name='admin_settings_exception_tracker_delete_all', request_method='POST', |
|
166 | 197 | renderer='rhodecode:templates/admin/settings/settings.mako') |
|
167 | 198 | |
|
168 | 199 | config.add_route( |
|
169 | 200 | name='admin_settings_exception_tracker_show', |
|
170 | 201 | pattern='/settings/exceptions/{exception_id}') |
|
171 | 202 | config.add_view( |
|
172 | 203 | ExceptionsTrackerView, |
|
173 | 204 | attr='exception_show', |
|
174 | 205 | route_name='admin_settings_exception_tracker_show', request_method='GET', |
|
175 | 206 | renderer='rhodecode:templates/admin/settings/settings.mako') |
|
176 | 207 | |
|
177 | 208 | config.add_route( |
|
178 | 209 | name='admin_settings_exception_tracker_delete', |
|
179 | 210 | pattern='/settings/exceptions/{exception_id}/delete') |
|
180 | 211 | config.add_view( |
|
181 | 212 | ExceptionsTrackerView, |
|
182 | 213 | attr='exception_delete', |
|
183 | 214 | route_name='admin_settings_exception_tracker_delete', request_method='POST', |
|
184 | 215 | renderer='rhodecode:templates/admin/settings/settings.mako') |
|
185 | 216 | |
|
186 | 217 | config.add_route( |
|
187 | 218 | name='admin_settings_sessions', |
|
188 | 219 | pattern='/settings/sessions') |
|
189 | 220 | config.add_view( |
|
190 | 221 | AdminSessionSettingsView, |
|
191 | 222 | attr='settings_sessions', |
|
192 | 223 | route_name='admin_settings_sessions', request_method='GET', |
|
193 | 224 | renderer='rhodecode:templates/admin/settings/settings.mako') |
|
194 | 225 | |
|
195 | 226 | config.add_route( |
|
196 | 227 | name='admin_settings_sessions_cleanup', |
|
197 | 228 | pattern='/settings/sessions/cleanup') |
|
198 | 229 | config.add_view( |
|
199 | 230 | AdminSessionSettingsView, |
|
200 | 231 | attr='settings_sessions_cleanup', |
|
201 | 232 | route_name='admin_settings_sessions_cleanup', request_method='POST') |
|
202 | 233 | |
|
203 | 234 | config.add_route( |
|
204 | 235 | name='admin_settings_process_management', |
|
205 | 236 | pattern='/settings/process_management') |
|
206 | 237 | config.add_view( |
|
207 | 238 | AdminProcessManagementView, |
|
208 | 239 | attr='process_management', |
|
209 | 240 | route_name='admin_settings_process_management', request_method='GET', |
|
210 | 241 | renderer='rhodecode:templates/admin/settings/settings.mako') |
|
211 | 242 | |
|
212 | 243 | config.add_route( |
|
213 | 244 | name='admin_settings_process_management_data', |
|
214 | 245 | pattern='/settings/process_management/data') |
|
215 | 246 | config.add_view( |
|
216 | 247 | AdminProcessManagementView, |
|
217 | 248 | attr='process_management_data', |
|
218 | 249 | route_name='admin_settings_process_management_data', request_method='GET', |
|
219 | 250 | renderer='rhodecode:templates/admin/settings/settings_process_management_data.mako') |
|
220 | 251 | |
|
221 | 252 | config.add_route( |
|
222 | 253 | name='admin_settings_process_management_signal', |
|
223 | 254 | pattern='/settings/process_management/signal') |
|
224 | 255 | config.add_view( |
|
225 | 256 | AdminProcessManagementView, |
|
226 | 257 | attr='process_management_signal', |
|
227 | 258 | route_name='admin_settings_process_management_signal', |
|
228 | 259 | request_method='POST', renderer='json_ext') |
|
229 | 260 | |
|
230 | 261 | config.add_route( |
|
231 | 262 | name='admin_settings_process_management_master_signal', |
|
232 | 263 | pattern='/settings/process_management/master_signal') |
|
233 | 264 | config.add_view( |
|
234 | 265 | AdminProcessManagementView, |
|
235 | 266 | attr='process_management_master_signal', |
|
236 | 267 | route_name='admin_settings_process_management_master_signal', |
|
237 | 268 | request_method='POST', renderer='json_ext') |
|
238 | 269 | |
|
239 | 270 | # default settings |
|
240 | 271 | config.add_route( |
|
241 | 272 | name='admin_defaults_repositories', |
|
242 | 273 | pattern='/defaults/repositories') |
|
243 | 274 | config.add_view( |
|
244 | 275 | AdminDefaultSettingsView, |
|
245 | 276 | attr='defaults_repository_show', |
|
246 | 277 | route_name='admin_defaults_repositories', request_method='GET', |
|
247 | 278 | renderer='rhodecode:templates/admin/defaults/defaults.mako') |
|
248 | 279 | |
|
249 | 280 | config.add_route( |
|
250 | 281 | name='admin_defaults_repositories_update', |
|
251 | 282 | pattern='/defaults/repositories/update') |
|
252 | 283 | config.add_view( |
|
253 | 284 | AdminDefaultSettingsView, |
|
254 | 285 | attr='defaults_repository_update', |
|
255 | 286 | route_name='admin_defaults_repositories_update', request_method='POST', |
|
256 | 287 | renderer='rhodecode:templates/admin/defaults/defaults.mako') |
|
257 | 288 | |
|
258 | 289 | # admin settings |
|
259 | 290 | |
|
260 | 291 | config.add_route( |
|
261 | 292 | name='admin_settings', |
|
262 | 293 | pattern='/settings') |
|
263 | 294 | config.add_view( |
|
264 | 295 | AdminSettingsView, |
|
265 | 296 | attr='settings_global', |
|
266 | 297 | route_name='admin_settings', request_method='GET', |
|
267 | 298 | renderer='rhodecode:templates/admin/settings/settings.mako') |
|
268 | 299 | |
|
269 | 300 | config.add_route( |
|
270 | 301 | name='admin_settings_update', |
|
271 | 302 | pattern='/settings/update') |
|
272 | 303 | config.add_view( |
|
273 | 304 | AdminSettingsView, |
|
274 | 305 | attr='settings_global_update', |
|
275 | 306 | route_name='admin_settings_update', request_method='POST', |
|
276 | 307 | renderer='rhodecode:templates/admin/settings/settings.mako') |
|
277 | 308 | |
|
278 | 309 | config.add_route( |
|
279 | 310 | name='admin_settings_global', |
|
280 | 311 | pattern='/settings/global') |
|
281 | 312 | config.add_view( |
|
282 | 313 | AdminSettingsView, |
|
283 | 314 | attr='settings_global', |
|
284 | 315 | route_name='admin_settings_global', request_method='GET', |
|
285 | 316 | renderer='rhodecode:templates/admin/settings/settings.mako') |
|
286 | 317 | |
|
287 | 318 | config.add_route( |
|
288 | 319 | name='admin_settings_global_update', |
|
289 | 320 | pattern='/settings/global/update') |
|
290 | 321 | config.add_view( |
|
291 | 322 | AdminSettingsView, |
|
292 | 323 | attr='settings_global_update', |
|
293 | 324 | route_name='admin_settings_global_update', request_method='POST', |
|
294 | 325 | renderer='rhodecode:templates/admin/settings/settings.mako') |
|
295 | 326 | |
|
296 | 327 | config.add_route( |
|
297 | 328 | name='admin_settings_vcs', |
|
298 | 329 | pattern='/settings/vcs') |
|
299 | 330 | config.add_view( |
|
300 | 331 | AdminSettingsView, |
|
301 | 332 | attr='settings_vcs', |
|
302 | 333 | route_name='admin_settings_vcs', request_method='GET', |
|
303 | 334 | renderer='rhodecode:templates/admin/settings/settings.mako') |
|
304 | 335 | |
|
305 | 336 | config.add_route( |
|
306 | 337 | name='admin_settings_vcs_update', |
|
307 | 338 | pattern='/settings/vcs/update') |
|
308 | 339 | config.add_view( |
|
309 | 340 | AdminSettingsView, |
|
310 | 341 | attr='settings_vcs_update', |
|
311 | 342 | route_name='admin_settings_vcs_update', request_method='POST', |
|
312 | 343 | renderer='rhodecode:templates/admin/settings/settings.mako') |
|
313 | 344 | |
|
314 | 345 | config.add_route( |
|
315 | 346 | name='admin_settings_vcs_svn_pattern_delete', |
|
316 | 347 | pattern='/settings/vcs/svn_pattern_delete') |
|
317 | 348 | config.add_view( |
|
318 | 349 | AdminSettingsView, |
|
319 | 350 | attr='settings_vcs_delete_svn_pattern', |
|
320 | 351 | route_name='admin_settings_vcs_svn_pattern_delete', request_method='POST', |
|
321 | 352 | renderer='json_ext', xhr=True) |
|
322 | 353 | |
|
323 | 354 | config.add_route( |
|
324 | 355 | name='admin_settings_mapping', |
|
325 | 356 | pattern='/settings/mapping') |
|
326 | 357 | config.add_view( |
|
327 | 358 | AdminSettingsView, |
|
328 | 359 | attr='settings_mapping', |
|
329 | 360 | route_name='admin_settings_mapping', request_method='GET', |
|
330 | 361 | renderer='rhodecode:templates/admin/settings/settings.mako') |
|
331 | 362 | |
|
332 | 363 | config.add_route( |
|
333 | 364 | name='admin_settings_mapping_update', |
|
334 | 365 | pattern='/settings/mapping/update') |
|
335 | 366 | config.add_view( |
|
336 | 367 | AdminSettingsView, |
|
337 | 368 | attr='settings_mapping_update', |
|
338 | 369 | route_name='admin_settings_mapping_update', request_method='POST', |
|
339 | 370 | renderer='rhodecode:templates/admin/settings/settings.mako') |
|
340 | 371 | |
|
341 | 372 | config.add_route( |
|
342 | 373 | name='admin_settings_visual', |
|
343 | 374 | pattern='/settings/visual') |
|
344 | 375 | config.add_view( |
|
345 | 376 | AdminSettingsView, |
|
346 | 377 | attr='settings_visual', |
|
347 | 378 | route_name='admin_settings_visual', request_method='GET', |
|
348 | 379 | renderer='rhodecode:templates/admin/settings/settings.mako') |
|
349 | 380 | |
|
350 | 381 | config.add_route( |
|
351 | 382 | name='admin_settings_visual_update', |
|
352 | 383 | pattern='/settings/visual/update') |
|
353 | 384 | config.add_view( |
|
354 | 385 | AdminSettingsView, |
|
355 | 386 | attr='settings_visual_update', |
|
356 | 387 | route_name='admin_settings_visual_update', request_method='POST', |
|
357 | 388 | renderer='rhodecode:templates/admin/settings/settings.mako') |
|
358 | 389 | |
|
359 | 390 | config.add_route( |
|
360 | 391 | name='admin_settings_issuetracker', |
|
361 | 392 | pattern='/settings/issue-tracker') |
|
362 | 393 | config.add_view( |
|
363 | 394 | AdminSettingsView, |
|
364 | 395 | attr='settings_issuetracker', |
|
365 | 396 | route_name='admin_settings_issuetracker', request_method='GET', |
|
366 | 397 | renderer='rhodecode:templates/admin/settings/settings.mako') |
|
367 | 398 | |
|
368 | 399 | config.add_route( |
|
369 | 400 | name='admin_settings_issuetracker_update', |
|
370 | 401 | pattern='/settings/issue-tracker/update') |
|
371 | 402 | config.add_view( |
|
372 | 403 | AdminSettingsView, |
|
373 | 404 | attr='settings_issuetracker_update', |
|
374 | 405 | route_name='admin_settings_issuetracker_update', request_method='POST', |
|
375 | 406 | renderer='rhodecode:templates/admin/settings/settings.mako') |
|
376 | 407 | |
|
377 | 408 | config.add_route( |
|
378 | 409 | name='admin_settings_issuetracker_test', |
|
379 | 410 | pattern='/settings/issue-tracker/test') |
|
380 | 411 | config.add_view( |
|
381 | 412 | AdminSettingsView, |
|
382 | 413 | attr='settings_issuetracker_test', |
|
383 | 414 | route_name='admin_settings_issuetracker_test', request_method='POST', |
|
384 | 415 | renderer='string', xhr=True) |
|
385 | 416 | |
|
386 | 417 | config.add_route( |
|
387 | 418 | name='admin_settings_issuetracker_delete', |
|
388 | 419 | pattern='/settings/issue-tracker/delete') |
|
389 | 420 | config.add_view( |
|
390 | 421 | AdminSettingsView, |
|
391 | 422 | attr='settings_issuetracker_delete', |
|
392 | 423 | route_name='admin_settings_issuetracker_delete', request_method='POST', |
|
393 | 424 | renderer='json_ext', xhr=True) |
|
394 | 425 | |
|
395 | 426 | config.add_route( |
|
396 | 427 | name='admin_settings_email', |
|
397 | 428 | pattern='/settings/email') |
|
398 | 429 | config.add_view( |
|
399 | 430 | AdminSettingsView, |
|
400 | 431 | attr='settings_email', |
|
401 | 432 | route_name='admin_settings_email', request_method='GET', |
|
402 | 433 | renderer='rhodecode:templates/admin/settings/settings.mako') |
|
403 | 434 | |
|
404 | 435 | config.add_route( |
|
405 | 436 | name='admin_settings_email_update', |
|
406 | 437 | pattern='/settings/email/update') |
|
407 | 438 | config.add_view( |
|
408 | 439 | AdminSettingsView, |
|
409 | 440 | attr='settings_email_update', |
|
410 | 441 | route_name='admin_settings_email_update', request_method='POST', |
|
411 | 442 | renderer='rhodecode:templates/admin/settings/settings.mako') |
|
412 | 443 | |
|
413 | 444 | config.add_route( |
|
414 | 445 | name='admin_settings_hooks', |
|
415 | 446 | pattern='/settings/hooks') |
|
416 | 447 | config.add_view( |
|
417 | 448 | AdminSettingsView, |
|
418 | 449 | attr='settings_hooks', |
|
419 | 450 | route_name='admin_settings_hooks', request_method='GET', |
|
420 | 451 | renderer='rhodecode:templates/admin/settings/settings.mako') |
|
421 | 452 | |
|
422 | 453 | config.add_route( |
|
423 | 454 | name='admin_settings_hooks_update', |
|
424 | 455 | pattern='/settings/hooks/update') |
|
425 | 456 | config.add_view( |
|
426 | 457 | AdminSettingsView, |
|
427 | 458 | attr='settings_hooks_update', |
|
428 | 459 | route_name='admin_settings_hooks_update', request_method='POST', |
|
429 | 460 | renderer='rhodecode:templates/admin/settings/settings.mako') |
|
430 | 461 | |
|
431 | 462 | config.add_route( |
|
432 | 463 | name='admin_settings_hooks_delete', |
|
433 | 464 | pattern='/settings/hooks/delete') |
|
434 | 465 | config.add_view( |
|
435 | 466 | AdminSettingsView, |
|
436 | 467 | attr='settings_hooks_update', |
|
437 | 468 | route_name='admin_settings_hooks_delete', request_method='POST', |
|
438 | 469 | renderer='rhodecode:templates/admin/settings/settings.mako') |
|
439 | 470 | |
|
440 | 471 | config.add_route( |
|
441 | 472 | name='admin_settings_search', |
|
442 | 473 | pattern='/settings/search') |
|
443 | 474 | config.add_view( |
|
444 | 475 | AdminSettingsView, |
|
445 | 476 | attr='settings_search', |
|
446 | 477 | route_name='admin_settings_search', request_method='GET', |
|
447 | 478 | renderer='rhodecode:templates/admin/settings/settings.mako') |
|
448 | 479 | |
|
449 | 480 | config.add_route( |
|
450 | 481 | name='admin_settings_labs', |
|
451 | 482 | pattern='/settings/labs') |
|
452 | 483 | config.add_view( |
|
453 | 484 | AdminSettingsView, |
|
454 | 485 | attr='settings_labs', |
|
455 | 486 | route_name='admin_settings_labs', request_method='GET', |
|
456 | 487 | renderer='rhodecode:templates/admin/settings/settings.mako') |
|
457 | 488 | |
|
458 | 489 | config.add_route( |
|
459 | 490 | name='admin_settings_labs_update', |
|
460 | 491 | pattern='/settings/labs/update') |
|
461 | 492 | config.add_view( |
|
462 | 493 | AdminSettingsView, |
|
463 | 494 | attr='settings_labs_update', |
|
464 | 495 | route_name='admin_settings_labs_update', request_method='POST', |
|
465 | 496 | renderer='rhodecode:templates/admin/settings/settings.mako') |
|
466 | 497 | |
|
467 | 498 | # global permissions |
|
468 | 499 | |
|
469 | 500 | config.add_route( |
|
470 | 501 | name='admin_permissions_application', |
|
471 | 502 | pattern='/permissions/application') |
|
472 | 503 | config.add_view( |
|
473 | 504 | AdminPermissionsView, |
|
474 | 505 | attr='permissions_application', |
|
475 | 506 | route_name='admin_permissions_application', request_method='GET', |
|
476 | 507 | renderer='rhodecode:templates/admin/permissions/permissions.mako') |
|
477 | 508 | |
|
478 | 509 | config.add_route( |
|
479 | 510 | name='admin_permissions_application_update', |
|
480 | 511 | pattern='/permissions/application/update') |
|
481 | 512 | config.add_view( |
|
482 | 513 | AdminPermissionsView, |
|
483 | 514 | attr='permissions_application_update', |
|
484 | 515 | route_name='admin_permissions_application_update', request_method='POST', |
|
485 | 516 | renderer='rhodecode:templates/admin/permissions/permissions.mako') |
|
486 | 517 | |
|
487 | 518 | config.add_route( |
|
488 | 519 | name='admin_permissions_global', |
|
489 | 520 | pattern='/permissions/global') |
|
490 | 521 | config.add_view( |
|
491 | 522 | AdminPermissionsView, |
|
492 | 523 | attr='permissions_global', |
|
493 | 524 | route_name='admin_permissions_global', request_method='GET', |
|
494 | 525 | renderer='rhodecode:templates/admin/permissions/permissions.mako') |
|
495 | 526 | |
|
496 | 527 | config.add_route( |
|
497 | 528 | name='admin_permissions_global_update', |
|
498 | 529 | pattern='/permissions/global/update') |
|
499 | 530 | config.add_view( |
|
500 | 531 | AdminPermissionsView, |
|
501 | 532 | attr='permissions_global_update', |
|
502 | 533 | route_name='admin_permissions_global_update', request_method='POST', |
|
503 | 534 | renderer='rhodecode:templates/admin/permissions/permissions.mako') |
|
504 | 535 | |
|
505 | 536 | config.add_route( |
|
506 | 537 | name='admin_permissions_object', |
|
507 | 538 | pattern='/permissions/object') |
|
508 | 539 | config.add_view( |
|
509 | 540 | AdminPermissionsView, |
|
510 | 541 | attr='permissions_objects', |
|
511 | 542 | route_name='admin_permissions_object', request_method='GET', |
|
512 | 543 | renderer='rhodecode:templates/admin/permissions/permissions.mako') |
|
513 | 544 | |
|
514 | 545 | config.add_route( |
|
515 | 546 | name='admin_permissions_object_update', |
|
516 | 547 | pattern='/permissions/object/update') |
|
517 | 548 | config.add_view( |
|
518 | 549 | AdminPermissionsView, |
|
519 | 550 | attr='permissions_objects_update', |
|
520 | 551 | route_name='admin_permissions_object_update', request_method='POST', |
|
521 | 552 | renderer='rhodecode:templates/admin/permissions/permissions.mako') |
|
522 | 553 | |
|
523 | 554 | # Branch perms EE feature |
|
524 | 555 | config.add_route( |
|
525 | 556 | name='admin_permissions_branch', |
|
526 | 557 | pattern='/permissions/branch') |
|
527 | 558 | config.add_view( |
|
528 | 559 | AdminPermissionsView, |
|
529 | 560 | attr='permissions_branch', |
|
530 | 561 | route_name='admin_permissions_branch', request_method='GET', |
|
531 | 562 | renderer='rhodecode:templates/admin/permissions/permissions.mako') |
|
532 | 563 | |
|
533 | 564 | config.add_route( |
|
534 | 565 | name='admin_permissions_ips', |
|
535 | 566 | pattern='/permissions/ips') |
|
536 | 567 | config.add_view( |
|
537 | 568 | AdminPermissionsView, |
|
538 | 569 | attr='permissions_ips', |
|
539 | 570 | route_name='admin_permissions_ips', request_method='GET', |
|
540 | 571 | renderer='rhodecode:templates/admin/permissions/permissions.mako') |
|
541 | 572 | |
|
542 | 573 | config.add_route( |
|
543 | 574 | name='admin_permissions_overview', |
|
544 | 575 | pattern='/permissions/overview') |
|
545 | 576 | config.add_view( |
|
546 | 577 | AdminPermissionsView, |
|
547 | 578 | attr='permissions_overview', |
|
548 | 579 | route_name='admin_permissions_overview', request_method='GET', |
|
549 | 580 | renderer='rhodecode:templates/admin/permissions/permissions.mako') |
|
550 | 581 | |
|
551 | 582 | config.add_route( |
|
552 | 583 | name='admin_permissions_auth_token_access', |
|
553 | 584 | pattern='/permissions/auth_token_access') |
|
554 | 585 | config.add_view( |
|
555 | 586 | AdminPermissionsView, |
|
556 | 587 | attr='auth_token_access', |
|
557 | 588 | route_name='admin_permissions_auth_token_access', request_method='GET', |
|
558 | 589 | renderer='rhodecode:templates/admin/permissions/permissions.mako') |
|
559 | 590 | |
|
560 | 591 | config.add_route( |
|
561 | 592 | name='admin_permissions_ssh_keys', |
|
562 | 593 | pattern='/permissions/ssh_keys') |
|
563 | 594 | config.add_view( |
|
564 | 595 | AdminPermissionsView, |
|
565 | 596 | attr='ssh_keys', |
|
566 | 597 | route_name='admin_permissions_ssh_keys', request_method='GET', |
|
567 | 598 | renderer='rhodecode:templates/admin/permissions/permissions.mako') |
|
568 | 599 | |
|
569 | 600 | config.add_route( |
|
570 | 601 | name='admin_permissions_ssh_keys_data', |
|
571 | 602 | pattern='/permissions/ssh_keys/data') |
|
572 | 603 | config.add_view( |
|
573 | 604 | AdminPermissionsView, |
|
574 | 605 | attr='ssh_keys_data', |
|
575 | 606 | route_name='admin_permissions_ssh_keys_data', request_method='GET', |
|
576 | 607 | renderer='json_ext', xhr=True) |
|
577 | 608 | |
|
578 | 609 | config.add_route( |
|
579 | 610 | name='admin_permissions_ssh_keys_update', |
|
580 | 611 | pattern='/permissions/ssh_keys/update') |
|
581 | 612 | config.add_view( |
|
582 | 613 | AdminPermissionsView, |
|
583 | 614 | attr='ssh_keys_update', |
|
584 | 615 | route_name='admin_permissions_ssh_keys_update', request_method='POST', |
|
585 | 616 | renderer='rhodecode:templates/admin/permissions/permissions.mako') |
|
586 | 617 | |
|
587 | 618 | # users admin |
|
588 | 619 | config.add_route( |
|
589 | 620 | name='users', |
|
590 | 621 | pattern='/users') |
|
591 | 622 | config.add_view( |
|
592 | 623 | AdminUsersView, |
|
593 | 624 | attr='users_list', |
|
594 | 625 | route_name='users', request_method='GET', |
|
595 | 626 | renderer='rhodecode:templates/admin/users/users.mako') |
|
596 | 627 | |
|
597 | 628 | config.add_route( |
|
598 | 629 | name='users_data', |
|
599 | 630 | pattern='/users_data') |
|
600 | 631 | config.add_view( |
|
601 | 632 | AdminUsersView, |
|
602 | 633 | attr='users_list_data', |
|
603 | 634 | # renderer defined below |
|
604 | 635 | route_name='users_data', request_method='GET', |
|
605 | 636 | renderer='json_ext', xhr=True) |
|
606 | 637 | |
|
607 | 638 | config.add_route( |
|
608 | 639 | name='users_create', |
|
609 | 640 | pattern='/users/create') |
|
610 | 641 | config.add_view( |
|
611 | 642 | AdminUsersView, |
|
612 | 643 | attr='users_create', |
|
613 | 644 | route_name='users_create', request_method='POST', |
|
614 | 645 | renderer='rhodecode:templates/admin/users/user_add.mako') |
|
615 | 646 | |
|
616 | 647 | config.add_route( |
|
617 | 648 | name='users_new', |
|
618 | 649 | pattern='/users/new') |
|
619 | 650 | config.add_view( |
|
620 | 651 | AdminUsersView, |
|
621 | 652 | attr='users_new', |
|
622 | 653 | route_name='users_new', request_method='GET', |
|
623 | 654 | renderer='rhodecode:templates/admin/users/user_add.mako') |
|
624 | 655 | |
|
625 | 656 | # user management |
|
626 | 657 | config.add_route( |
|
627 | 658 | name='user_edit', |
|
628 | 659 | pattern=r'/users/{user_id:\d+}/edit', |
|
629 | 660 | user_route=True) |
|
630 | 661 | config.add_view( |
|
631 | 662 | UsersView, |
|
632 | 663 | attr='user_edit', |
|
633 | 664 | route_name='user_edit', request_method='GET', |
|
634 | 665 | renderer='rhodecode:templates/admin/users/user_edit.mako') |
|
635 | 666 | |
|
636 | 667 | config.add_route( |
|
637 | 668 | name='user_edit_advanced', |
|
638 | 669 | pattern=r'/users/{user_id:\d+}/edit/advanced', |
|
639 | 670 | user_route=True) |
|
640 | 671 | config.add_view( |
|
641 | 672 | UsersView, |
|
642 | 673 | attr='user_edit_advanced', |
|
643 | 674 | route_name='user_edit_advanced', request_method='GET', |
|
644 | 675 | renderer='rhodecode:templates/admin/users/user_edit.mako') |
|
645 | 676 | |
|
646 | 677 | config.add_route( |
|
647 | 678 | name='user_edit_global_perms', |
|
648 | 679 | pattern=r'/users/{user_id:\d+}/edit/global_permissions', |
|
649 | 680 | user_route=True) |
|
650 | 681 | config.add_view( |
|
651 | 682 | UsersView, |
|
652 | 683 | attr='user_edit_global_perms', |
|
653 | 684 | route_name='user_edit_global_perms', request_method='GET', |
|
654 | 685 | renderer='rhodecode:templates/admin/users/user_edit.mako') |
|
655 | 686 | |
|
656 | 687 | config.add_route( |
|
657 | 688 | name='user_edit_global_perms_update', |
|
658 | 689 | pattern=r'/users/{user_id:\d+}/edit/global_permissions/update', |
|
659 | 690 | user_route=True) |
|
660 | 691 | config.add_view( |
|
661 | 692 | UsersView, |
|
662 | 693 | attr='user_edit_global_perms_update', |
|
663 | 694 | route_name='user_edit_global_perms_update', request_method='POST', |
|
664 | 695 | renderer='rhodecode:templates/admin/users/user_edit.mako') |
|
665 | 696 | |
|
666 | 697 | config.add_route( |
|
667 | 698 | name='user_update', |
|
668 | 699 | pattern=r'/users/{user_id:\d+}/update', |
|
669 | 700 | user_route=True) |
|
670 | 701 | config.add_view( |
|
671 | 702 | UsersView, |
|
672 | 703 | attr='user_update', |
|
673 | 704 | route_name='user_update', request_method='POST', |
|
674 | 705 | renderer='rhodecode:templates/admin/users/user_edit.mako') |
|
675 | 706 | |
|
676 | 707 | config.add_route( |
|
677 | 708 | name='user_delete', |
|
678 | 709 | pattern=r'/users/{user_id:\d+}/delete', |
|
679 | 710 | user_route=True) |
|
680 | 711 | config.add_view( |
|
681 | 712 | UsersView, |
|
682 | 713 | attr='user_delete', |
|
683 | 714 | route_name='user_delete', request_method='POST', |
|
684 | 715 | renderer='rhodecode:templates/admin/users/user_edit.mako') |
|
685 | 716 | |
|
686 | 717 | config.add_route( |
|
687 | 718 | name='user_enable_force_password_reset', |
|
688 | 719 | pattern=r'/users/{user_id:\d+}/password_reset_enable', |
|
689 | 720 | user_route=True) |
|
690 | 721 | config.add_view( |
|
691 | 722 | UsersView, |
|
692 | 723 | attr='user_enable_force_password_reset', |
|
693 | 724 | route_name='user_enable_force_password_reset', request_method='POST', |
|
694 | 725 | renderer='rhodecode:templates/admin/users/user_edit.mako') |
|
695 | 726 | |
|
696 | 727 | config.add_route( |
|
697 | 728 | name='user_disable_force_password_reset', |
|
698 | 729 | pattern=r'/users/{user_id:\d+}/password_reset_disable', |
|
699 | 730 | user_route=True) |
|
700 | 731 | config.add_view( |
|
701 | 732 | UsersView, |
|
702 | 733 | attr='user_disable_force_password_reset', |
|
703 | 734 | route_name='user_disable_force_password_reset', request_method='POST', |
|
704 | 735 | renderer='rhodecode:templates/admin/users/user_edit.mako') |
|
705 | 736 | |
|
706 | 737 | config.add_route( |
|
707 | 738 | name='user_create_personal_repo_group', |
|
708 | 739 | pattern=r'/users/{user_id:\d+}/create_repo_group', |
|
709 | 740 | user_route=True) |
|
710 | 741 | config.add_view( |
|
711 | 742 | UsersView, |
|
712 | 743 | attr='user_create_personal_repo_group', |
|
713 | 744 | route_name='user_create_personal_repo_group', request_method='POST', |
|
714 | 745 | renderer='rhodecode:templates/admin/users/user_edit.mako') |
|
715 | 746 | |
|
716 | 747 | # user notice |
|
717 | 748 | config.add_route( |
|
718 | 749 | name='user_notice_dismiss', |
|
719 | 750 | pattern=r'/users/{user_id:\d+}/notice_dismiss', |
|
720 | 751 | user_route=True) |
|
721 | 752 | config.add_view( |
|
722 | 753 | UsersView, |
|
723 | 754 | attr='user_notice_dismiss', |
|
724 | 755 | route_name='user_notice_dismiss', request_method='POST', |
|
725 | 756 | renderer='json_ext', xhr=True) |
|
726 | 757 | |
|
727 | 758 | # user auth tokens |
|
728 | 759 | config.add_route( |
|
729 | 760 | name='edit_user_auth_tokens', |
|
730 | 761 | pattern=r'/users/{user_id:\d+}/edit/auth_tokens', |
|
731 | 762 | user_route=True) |
|
732 | 763 | config.add_view( |
|
733 | 764 | UsersView, |
|
734 | 765 | attr='auth_tokens', |
|
735 | 766 | route_name='edit_user_auth_tokens', request_method='GET', |
|
736 | 767 | renderer='rhodecode:templates/admin/users/user_edit.mako') |
|
737 | 768 | |
|
738 | 769 | config.add_route( |
|
739 | 770 | name='edit_user_auth_tokens_view', |
|
740 | 771 | pattern=r'/users/{user_id:\d+}/edit/auth_tokens/view', |
|
741 | 772 | user_route=True) |
|
742 | 773 | config.add_view( |
|
743 | 774 | UsersView, |
|
744 | 775 | attr='auth_tokens_view', |
|
745 | 776 | route_name='edit_user_auth_tokens_view', request_method='POST', |
|
746 | 777 | renderer='json_ext', xhr=True) |
|
747 | 778 | |
|
748 | 779 | config.add_route( |
|
749 | 780 | name='edit_user_auth_tokens_add', |
|
750 | 781 | pattern=r'/users/{user_id:\d+}/edit/auth_tokens/new', |
|
751 | 782 | user_route=True) |
|
752 | 783 | config.add_view( |
|
753 | 784 | UsersView, |
|
754 | 785 | attr='auth_tokens_add', |
|
755 | 786 | route_name='edit_user_auth_tokens_add', request_method='POST') |
|
756 | 787 | |
|
757 | 788 | config.add_route( |
|
758 | 789 | name='edit_user_auth_tokens_delete', |
|
759 | 790 | pattern=r'/users/{user_id:\d+}/edit/auth_tokens/delete', |
|
760 | 791 | user_route=True) |
|
761 | 792 | config.add_view( |
|
762 | 793 | UsersView, |
|
763 | 794 | attr='auth_tokens_delete', |
|
764 | 795 | route_name='edit_user_auth_tokens_delete', request_method='POST') |
|
765 | 796 | |
|
766 | 797 | # user ssh keys |
|
767 | 798 | config.add_route( |
|
768 | 799 | name='edit_user_ssh_keys', |
|
769 | 800 | pattern=r'/users/{user_id:\d+}/edit/ssh_keys', |
|
770 | 801 | user_route=True) |
|
771 | 802 | config.add_view( |
|
772 | 803 | UsersView, |
|
773 | 804 | attr='ssh_keys', |
|
774 | 805 | route_name='edit_user_ssh_keys', request_method='GET', |
|
775 | 806 | renderer='rhodecode:templates/admin/users/user_edit.mako') |
|
776 | 807 | |
|
777 | 808 | config.add_route( |
|
778 | 809 | name='edit_user_ssh_keys_generate_keypair', |
|
779 | 810 | pattern=r'/users/{user_id:\d+}/edit/ssh_keys/generate', |
|
780 | 811 | user_route=True) |
|
781 | 812 | config.add_view( |
|
782 | 813 | UsersView, |
|
783 | 814 | attr='ssh_keys_generate_keypair', |
|
784 | 815 | route_name='edit_user_ssh_keys_generate_keypair', request_method='GET', |
|
785 | 816 | renderer='rhodecode:templates/admin/users/user_edit.mako') |
|
786 | 817 | |
|
787 | 818 | config.add_route( |
|
788 | 819 | name='edit_user_ssh_keys_add', |
|
789 | 820 | pattern=r'/users/{user_id:\d+}/edit/ssh_keys/new', |
|
790 | 821 | user_route=True) |
|
791 | 822 | config.add_view( |
|
792 | 823 | UsersView, |
|
793 | 824 | attr='ssh_keys_add', |
|
794 | 825 | route_name='edit_user_ssh_keys_add', request_method='POST') |
|
795 | 826 | |
|
796 | 827 | config.add_route( |
|
797 | 828 | name='edit_user_ssh_keys_delete', |
|
798 | 829 | pattern=r'/users/{user_id:\d+}/edit/ssh_keys/delete', |
|
799 | 830 | user_route=True) |
|
800 | 831 | config.add_view( |
|
801 | 832 | UsersView, |
|
802 | 833 | attr='ssh_keys_delete', |
|
803 | 834 | route_name='edit_user_ssh_keys_delete', request_method='POST') |
|
804 | 835 | |
|
805 | 836 | # user emails |
|
806 | 837 | config.add_route( |
|
807 | 838 | name='edit_user_emails', |
|
808 | 839 | pattern=r'/users/{user_id:\d+}/edit/emails', |
|
809 | 840 | user_route=True) |
|
810 | 841 | config.add_view( |
|
811 | 842 | UsersView, |
|
812 | 843 | attr='emails', |
|
813 | 844 | route_name='edit_user_emails', request_method='GET', |
|
814 | 845 | renderer='rhodecode:templates/admin/users/user_edit.mako') |
|
815 | 846 | |
|
816 | 847 | config.add_route( |
|
817 | 848 | name='edit_user_emails_add', |
|
818 | 849 | pattern=r'/users/{user_id:\d+}/edit/emails/new', |
|
819 | 850 | user_route=True) |
|
820 | 851 | config.add_view( |
|
821 | 852 | UsersView, |
|
822 | 853 | attr='emails_add', |
|
823 | 854 | route_name='edit_user_emails_add', request_method='POST') |
|
824 | 855 | |
|
825 | 856 | config.add_route( |
|
826 | 857 | name='edit_user_emails_delete', |
|
827 | 858 | pattern=r'/users/{user_id:\d+}/edit/emails/delete', |
|
828 | 859 | user_route=True) |
|
829 | 860 | config.add_view( |
|
830 | 861 | UsersView, |
|
831 | 862 | attr='emails_delete', |
|
832 | 863 | route_name='edit_user_emails_delete', request_method='POST') |
|
833 | 864 | |
|
834 | 865 | # user IPs |
|
835 | 866 | config.add_route( |
|
836 | 867 | name='edit_user_ips', |
|
837 | 868 | pattern=r'/users/{user_id:\d+}/edit/ips', |
|
838 | 869 | user_route=True) |
|
839 | 870 | config.add_view( |
|
840 | 871 | UsersView, |
|
841 | 872 | attr='ips', |
|
842 | 873 | route_name='edit_user_ips', request_method='GET', |
|
843 | 874 | renderer='rhodecode:templates/admin/users/user_edit.mako') |
|
844 | 875 | |
|
845 | 876 | config.add_route( |
|
846 | 877 | name='edit_user_ips_add', |
|
847 | 878 | pattern=r'/users/{user_id:\d+}/edit/ips/new', |
|
848 | 879 | user_route_with_default=True) # enabled for default user too |
|
849 | 880 | config.add_view( |
|
850 | 881 | UsersView, |
|
851 | 882 | attr='ips_add', |
|
852 | 883 | route_name='edit_user_ips_add', request_method='POST') |
|
853 | 884 | |
|
854 | 885 | config.add_route( |
|
855 | 886 | name='edit_user_ips_delete', |
|
856 | 887 | pattern=r'/users/{user_id:\d+}/edit/ips/delete', |
|
857 | 888 | user_route_with_default=True) # enabled for default user too |
|
858 | 889 | config.add_view( |
|
859 | 890 | UsersView, |
|
860 | 891 | attr='ips_delete', |
|
861 | 892 | route_name='edit_user_ips_delete', request_method='POST') |
|
862 | 893 | |
|
863 | 894 | # user perms |
|
864 | 895 | config.add_route( |
|
865 | 896 | name='edit_user_perms_summary', |
|
866 | 897 | pattern=r'/users/{user_id:\d+}/edit/permissions_summary', |
|
867 | 898 | user_route=True) |
|
868 | 899 | config.add_view( |
|
869 | 900 | UsersView, |
|
870 | 901 | attr='user_perms_summary', |
|
871 | 902 | route_name='edit_user_perms_summary', request_method='GET', |
|
872 | 903 | renderer='rhodecode:templates/admin/users/user_edit.mako') |
|
873 | 904 | |
|
874 | 905 | config.add_route( |
|
875 | 906 | name='edit_user_perms_summary_json', |
|
876 | 907 | pattern=r'/users/{user_id:\d+}/edit/permissions_summary/json', |
|
877 | 908 | user_route=True) |
|
878 | 909 | config.add_view( |
|
879 | 910 | UsersView, |
|
880 | 911 | attr='user_perms_summary_json', |
|
881 | 912 | route_name='edit_user_perms_summary_json', request_method='GET', |
|
882 | 913 | renderer='json_ext') |
|
883 | 914 | |
|
884 | 915 | # user user groups management |
|
885 | 916 | config.add_route( |
|
886 | 917 | name='edit_user_groups_management', |
|
887 | 918 | pattern=r'/users/{user_id:\d+}/edit/groups_management', |
|
888 | 919 | user_route=True) |
|
889 | 920 | config.add_view( |
|
890 | 921 | UsersView, |
|
891 | 922 | attr='groups_management', |
|
892 | 923 | route_name='edit_user_groups_management', request_method='GET', |
|
893 | 924 | renderer='rhodecode:templates/admin/users/user_edit.mako') |
|
894 | 925 | |
|
895 | 926 | config.add_route( |
|
896 | 927 | name='edit_user_groups_management_updates', |
|
897 | 928 | pattern=r'/users/{user_id:\d+}/edit/edit_user_groups_management/updates', |
|
898 | 929 | user_route=True) |
|
899 | 930 | config.add_view( |
|
900 | 931 | UsersView, |
|
901 | 932 | attr='groups_management_updates', |
|
902 | 933 | route_name='edit_user_groups_management_updates', request_method='POST') |
|
903 | 934 | |
|
904 | 935 | # user audit logs |
|
905 | 936 | config.add_route( |
|
906 | 937 | name='edit_user_audit_logs', |
|
907 | 938 | pattern=r'/users/{user_id:\d+}/edit/audit', user_route=True) |
|
908 | 939 | config.add_view( |
|
909 | 940 | UsersView, |
|
910 | 941 | attr='user_audit_logs', |
|
911 | 942 | route_name='edit_user_audit_logs', request_method='GET', |
|
912 | 943 | renderer='rhodecode:templates/admin/users/user_edit.mako') |
|
913 | 944 | |
|
914 | 945 | config.add_route( |
|
915 | 946 | name='edit_user_audit_logs_download', |
|
916 | 947 | pattern=r'/users/{user_id:\d+}/edit/audit/download', user_route=True) |
|
917 | 948 | config.add_view( |
|
918 | 949 | UsersView, |
|
919 | 950 | attr='user_audit_logs_download', |
|
920 | 951 | route_name='edit_user_audit_logs_download', request_method='GET', |
|
921 | 952 | renderer='string') |
|
922 | 953 | |
|
923 | 954 | # user caches |
|
924 | 955 | config.add_route( |
|
925 | 956 | name='edit_user_caches', |
|
926 | 957 | pattern=r'/users/{user_id:\d+}/edit/caches', |
|
927 | 958 | user_route=True) |
|
928 | 959 | config.add_view( |
|
929 | 960 | UsersView, |
|
930 | 961 | attr='user_caches', |
|
931 | 962 | route_name='edit_user_caches', request_method='GET', |
|
932 | 963 | renderer='rhodecode:templates/admin/users/user_edit.mako') |
|
933 | 964 | |
|
934 | 965 | config.add_route( |
|
935 | 966 | name='edit_user_caches_update', |
|
936 | 967 | pattern=r'/users/{user_id:\d+}/edit/caches/update', |
|
937 | 968 | user_route=True) |
|
938 | 969 | config.add_view( |
|
939 | 970 | UsersView, |
|
940 | 971 | attr='user_caches_update', |
|
941 | 972 | route_name='edit_user_caches_update', request_method='POST') |
|
942 | 973 | |
|
943 | 974 | # user-groups admin |
|
944 | 975 | config.add_route( |
|
945 | 976 | name='user_groups', |
|
946 | 977 | pattern='/user_groups') |
|
947 | 978 | config.add_view( |
|
948 | 979 | AdminUserGroupsView, |
|
949 | 980 | attr='user_groups_list', |
|
950 | 981 | route_name='user_groups', request_method='GET', |
|
951 | 982 | renderer='rhodecode:templates/admin/user_groups/user_groups.mako') |
|
952 | 983 | |
|
953 | 984 | config.add_route( |
|
954 | 985 | name='user_groups_data', |
|
955 | 986 | pattern='/user_groups_data') |
|
956 | 987 | config.add_view( |
|
957 | 988 | AdminUserGroupsView, |
|
958 | 989 | attr='user_groups_list_data', |
|
959 | 990 | route_name='user_groups_data', request_method='GET', |
|
960 | 991 | renderer='json_ext', xhr=True) |
|
961 | 992 | |
|
962 | 993 | config.add_route( |
|
963 | 994 | name='user_groups_new', |
|
964 | 995 | pattern='/user_groups/new') |
|
965 | 996 | config.add_view( |
|
966 | 997 | AdminUserGroupsView, |
|
967 | 998 | attr='user_groups_new', |
|
968 | 999 | route_name='user_groups_new', request_method='GET', |
|
969 | 1000 | renderer='rhodecode:templates/admin/user_groups/user_group_add.mako') |
|
970 | 1001 | |
|
971 | 1002 | config.add_route( |
|
972 | 1003 | name='user_groups_create', |
|
973 | 1004 | pattern='/user_groups/create') |
|
974 | 1005 | config.add_view( |
|
975 | 1006 | AdminUserGroupsView, |
|
976 | 1007 | attr='user_groups_create', |
|
977 | 1008 | route_name='user_groups_create', request_method='POST', |
|
978 | 1009 | renderer='rhodecode:templates/admin/user_groups/user_group_add.mako') |
|
979 | 1010 | |
|
980 | 1011 | # repos admin |
|
981 | 1012 | config.add_route( |
|
982 | 1013 | name='repos', |
|
983 | 1014 | pattern='/repos') |
|
984 | 1015 | config.add_view( |
|
985 | 1016 | AdminReposView, |
|
986 | 1017 | attr='repository_list', |
|
987 | 1018 | route_name='repos', request_method='GET', |
|
988 | 1019 | renderer='rhodecode:templates/admin/repos/repos.mako') |
|
989 | 1020 | |
|
990 | 1021 | config.add_route( |
|
991 | 1022 | name='repos_data', |
|
992 | 1023 | pattern='/repos_data') |
|
993 | 1024 | config.add_view( |
|
994 | 1025 | AdminReposView, |
|
995 | 1026 | attr='repository_list_data', |
|
996 | 1027 | route_name='repos_data', request_method='GET', |
|
997 | 1028 | renderer='json_ext', xhr=True) |
|
998 | 1029 | |
|
999 | 1030 | config.add_route( |
|
1000 | 1031 | name='repo_new', |
|
1001 | 1032 | pattern='/repos/new') |
|
1002 | 1033 | config.add_view( |
|
1003 | 1034 | AdminReposView, |
|
1004 | 1035 | attr='repository_new', |
|
1005 | 1036 | route_name='repo_new', request_method='GET', |
|
1006 | 1037 | renderer='rhodecode:templates/admin/repos/repo_add.mako') |
|
1007 | 1038 | |
|
1008 | 1039 | config.add_route( |
|
1009 | 1040 | name='repo_create', |
|
1010 | 1041 | pattern='/repos/create') |
|
1011 | 1042 | config.add_view( |
|
1012 | 1043 | AdminReposView, |
|
1013 | 1044 | attr='repository_create', |
|
1014 | 1045 | route_name='repo_create', request_method='POST', |
|
1015 | 1046 | renderer='rhodecode:templates/admin/repos/repos.mako') |
|
1016 | 1047 | |
|
1017 | 1048 | # repo groups admin |
|
1018 | 1049 | config.add_route( |
|
1019 | 1050 | name='repo_groups', |
|
1020 | 1051 | pattern='/repo_groups') |
|
1021 | 1052 | config.add_view( |
|
1022 | 1053 | AdminRepoGroupsView, |
|
1023 | 1054 | attr='repo_group_list', |
|
1024 | 1055 | route_name='repo_groups', request_method='GET', |
|
1025 | 1056 | renderer='rhodecode:templates/admin/repo_groups/repo_groups.mako') |
|
1026 | 1057 | |
|
1027 | 1058 | config.add_route( |
|
1028 | 1059 | name='repo_groups_data', |
|
1029 | 1060 | pattern='/repo_groups_data') |
|
1030 | 1061 | config.add_view( |
|
1031 | 1062 | AdminRepoGroupsView, |
|
1032 | 1063 | attr='repo_group_list_data', |
|
1033 | 1064 | route_name='repo_groups_data', request_method='GET', |
|
1034 | 1065 | renderer='json_ext', xhr=True) |
|
1035 | 1066 | |
|
1036 | 1067 | config.add_route( |
|
1037 | 1068 | name='repo_group_new', |
|
1038 | 1069 | pattern='/repo_group/new') |
|
1039 | 1070 | config.add_view( |
|
1040 | 1071 | AdminRepoGroupsView, |
|
1041 | 1072 | attr='repo_group_new', |
|
1042 | 1073 | route_name='repo_group_new', request_method='GET', |
|
1043 | 1074 | renderer='rhodecode:templates/admin/repo_groups/repo_group_add.mako') |
|
1044 | 1075 | |
|
1045 | 1076 | config.add_route( |
|
1046 | 1077 | name='repo_group_create', |
|
1047 | 1078 | pattern='/repo_group/create') |
|
1048 | 1079 | config.add_view( |
|
1049 | 1080 | AdminRepoGroupsView, |
|
1050 | 1081 | attr='repo_group_create', |
|
1051 | 1082 | route_name='repo_group_create', request_method='POST', |
|
1052 | 1083 | renderer='rhodecode:templates/admin/repo_groups/repo_group_add.mako') |
|
1053 | 1084 | |
|
1054 | 1085 | |
|
1055 | 1086 | def includeme(config): |
|
1056 | 1087 | # Create admin navigation registry and add it to the pyramid registry. |
|
1057 | 1088 | nav_includeme(config) |
|
1058 | 1089 | |
|
1059 | 1090 | # main admin routes |
|
1060 | 1091 | config.add_route( |
|
1061 | 1092 | name='admin_home', pattern=ADMIN_PREFIX) |
|
1062 | 1093 | config.add_view( |
|
1063 | 1094 | AdminMainView, |
|
1064 | 1095 | attr='admin_main', |
|
1065 | 1096 | route_name='admin_home', request_method='GET', |
|
1066 | 1097 | renderer='rhodecode:templates/admin/main.mako') |
|
1067 | 1098 | |
|
1068 | 1099 | # pr global redirect |
|
1069 | 1100 | config.add_route( |
|
1070 | 1101 | name='pull_requests_global_0', # backward compat |
|
1071 | 1102 | pattern=ADMIN_PREFIX + r'/pull_requests/{pull_request_id:\d+}') |
|
1072 | 1103 | config.add_view( |
|
1073 | 1104 | AdminMainView, |
|
1074 | 1105 | attr='pull_requests', |
|
1075 | 1106 | route_name='pull_requests_global_0', request_method='GET') |
|
1076 | 1107 | |
|
1077 | 1108 | config.add_route( |
|
1078 | 1109 | name='pull_requests_global_1', # backward compat |
|
1079 | 1110 | pattern=ADMIN_PREFIX + r'/pull-requests/{pull_request_id:\d+}') |
|
1080 | 1111 | config.add_view( |
|
1081 | 1112 | AdminMainView, |
|
1082 | 1113 | attr='pull_requests', |
|
1083 | 1114 | route_name='pull_requests_global_1', request_method='GET') |
|
1084 | 1115 | |
|
1085 | 1116 | config.add_route( |
|
1086 | 1117 | name='pull_requests_global', |
|
1087 | 1118 | pattern=ADMIN_PREFIX + r'/pull-request/{pull_request_id:\d+}') |
|
1088 | 1119 | config.add_view( |
|
1089 | 1120 | AdminMainView, |
|
1090 | 1121 | attr='pull_requests', |
|
1091 | 1122 | route_name='pull_requests_global', request_method='GET') |
|
1092 | 1123 | |
|
1093 | 1124 | config.include(admin_routes, route_prefix=ADMIN_PREFIX) |
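
This routes module registers every URL as an explicit `config.add_route()` / `config.add_view()` pair rather than using `@view_config` decorators, and `includeme()` mounts the whole set under `ADMIN_PREFIX` via `config.include(admin_routes, route_prefix=...)`. A minimal, self-contained sketch of the same wiring; the route name, pattern, and view class here are illustrative, not taken from the diff:

```python
# Minimal sketch of the add_route()/add_view() pairing used above.
# DemoView, the pattern, and the port are illustrative only.
from wsgiref.simple_server import make_server

from pyramid.config import Configurator
from pyramid.response import Response


class DemoView:
    def __init__(self, request):
        self.request = request

    def user_edit(self):
        # the {user_id:\d+} placeholder from the pattern lands in matchdict
        user_id = self.request.matchdict['user_id']
        return Response(f'editing user {user_id}')


if __name__ == '__main__':
    config = Configurator()
    config.add_route(name='user_edit', pattern=r'/users/{user_id:\d+}/edit')
    config.add_view(
        DemoView,
        attr='user_edit',            # which method of the view class to call
        route_name='user_edit',
        request_method='GET')
    # URL generation resolves by route name, e.g.
    # request.route_path('user_edit', user_id=3) -> '/users/3/edit'
    make_server('0.0.0.0', 8080, config.make_wsgi_app()).serve_forever()
```

Keying views to route names like this keeps URL generation stable: callers such as `h.route_path('user_edit', user_id=...)` keep working even if the URL pattern changes.
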
@@ -1,708 +1,715 b'' | |||
|
1 | 1 | # Copyright (C) 2010-2023 RhodeCode GmbH |
|
2 | 2 | # |
|
3 | 3 | # This program is free software: you can redistribute it and/or modify |
|
4 | 4 | # it under the terms of the GNU Affero General Public License, version 3 |
|
5 | 5 | # (only), as published by the Free Software Foundation. |
|
6 | 6 | # |
|
7 | 7 | # This program is distributed in the hope that it will be useful, |
|
8 | 8 | # but WITHOUT ANY WARRANTY; without even the implied warranty of |
|
9 | 9 | # MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the |
|
10 | 10 | # GNU General Public License for more details. |
|
11 | 11 | # |
|
12 | 12 | # You should have received a copy of the GNU Affero General Public License |
|
13 | 13 | # along with this program. If not, see <http://www.gnu.org/licenses/>. |
|
14 | 14 | # |
|
15 | 15 | # This program is dual-licensed. If you wish to learn more about the |
|
16 | 16 | # RhodeCode Enterprise Edition, including its added features, Support services, |
|
17 | 17 | # and proprietary license terms, please see https://rhodecode.com/licenses/ |
|
18 | 18 | |
|
19 | 19 | |
|
20 | 20 | import logging |
|
21 | 21 | import collections |
|
22 | 22 | |
|
23 | 23 | import datetime |
|
24 | 24 | import formencode |
|
25 | 25 | import formencode.htmlfill |
|
26 | 26 | |
|
27 | 27 | import rhodecode |
|
28 | 28 | |
|
29 | 29 | from pyramid.httpexceptions import HTTPFound, HTTPNotFound |
|
30 | 30 | from pyramid.renderers import render |
|
31 | 31 | from pyramid.response import Response |
|
32 | 32 | |
|
33 | 33 | from rhodecode.apps._base import BaseAppView |
|
34 | 34 | from rhodecode.apps._base.navigation import navigation_list |
|
35 | 35 | from rhodecode.apps.svn_support import config_keys |
|
36 | 36 | from rhodecode.lib import helpers as h |
|
37 | 37 | from rhodecode.lib.auth import ( |
|
38 | 38 | LoginRequired, HasPermissionAllDecorator, CSRFRequired) |
|
39 | 39 | from rhodecode.lib.celerylib import tasks, run_task |
|
40 | 40 | from rhodecode.lib.str_utils import safe_str |
|
41 | 41 | from rhodecode.lib.utils import repo2db_mapper, get_rhodecode_repo_store_path |
|
42 | 42 | from rhodecode.lib.utils2 import str2bool, AttributeDict |
|
43 | 43 | from rhodecode.lib.index import searcher_from_config |
|
44 | 44 | |
|
45 | 45 | from rhodecode.model.db import RhodeCodeUi, Repository |
|
46 | 46 | from rhodecode.model.forms import (ApplicationSettingsForm, |
|
47 | 47 | ApplicationUiSettingsForm, ApplicationVisualisationForm, |
|
48 | 48 | LabsSettingsForm, IssueTrackerPatternsForm) |
|
49 | 49 | from rhodecode.model.permission import PermissionModel |
|
50 | 50 | from rhodecode.model.repo_group import RepoGroupModel |
|
51 | 51 | |
|
52 | 52 | from rhodecode.model.scm import ScmModel |
|
53 | 53 | from rhodecode.model.notification import EmailNotificationModel |
|
54 | 54 | from rhodecode.model.meta import Session |
|
55 | 55 | from rhodecode.model.settings import ( |
|
56 | 56 | IssueTrackerSettingsModel, VcsSettingsModel, SettingNotFound, |
|
57 | 57 | SettingsModel) |
|
58 | 58 | |
|
59 | 59 | |
|
60 | 60 | log = logging.getLogger(__name__) |
|
61 | 61 | |
|
62 | 62 | |
|
63 | 63 | class AdminSettingsView(BaseAppView): |
|
64 | 64 | |
|
65 | 65 | def load_default_context(self): |
|
66 | 66 | c = self._get_local_tmpl_context() |
|
67 | 67 | c.labs_active = str2bool( |
|
68 | 68 | rhodecode.CONFIG.get('labs_settings_active', 'true')) |
|
69 | 69 | c.navlist = navigation_list(self.request) |
|
70 | 70 | return c |
|
71 | 71 | |
|
72 | 72 | @classmethod |
|
73 | 73 | def _get_ui_settings(cls): |
|
74 | 74 | ret = RhodeCodeUi.query().all() |
|
75 | 75 | |
|
76 | 76 | if not ret: |
|
77 | 77 | raise Exception('Could not get application ui settings !') |
|
78 | settings = { |
|
78 | settings = { | |
|
79 | # legacy param that needs to be kept | |
|
80 | 'web_push_ssl': False | |
|
81 | } | |
|
79 | 82 | for each in ret: |
|
80 | 83 | k = each.ui_key |
|
81 | 84 | v = each.ui_value |
|
85 | # skip some options if they are defined | |
|
86 | if k in ['push_ssl']: | |
|
87 | continue | |
|
88 | ||
|
82 | 89 | if k == '/': |
|
83 | 90 | k = 'root_path' |
|
84 | 91 | |
|
85 | if k in [ |
|
92 | if k in ['publish', 'enabled']: | |
|
86 | 93 | v = str2bool(v) |
|
87 | 94 | |
|
88 | 95 | if k.find('.') != -1: |
|
89 | 96 | k = k.replace('.', '_') |
|
90 | 97 | |
|
91 | 98 | if each.ui_section in ['hooks', 'extensions']: |
|
92 | 99 | v = each.ui_active |
|
93 | 100 | |
|
94 | 101 | settings[each.ui_section + '_' + k] = v |
|
102 | ||
|
95 | 103 | return settings |
|
96 | 104 | |
|
97 | 105 | @classmethod |
|
98 | 106 | def _form_defaults(cls): |
|
99 | 107 | defaults = SettingsModel().get_all_settings() |
|
100 | 108 | defaults.update(cls._get_ui_settings()) |
|
101 | 109 | |
|
102 | 110 | defaults.update({ |
|
103 | 111 | 'new_svn_branch': '', |
|
104 | 112 | 'new_svn_tag': '', |
|
105 | 113 | }) |
|
106 | 114 | return defaults |
|
107 | 115 | |
|
108 | 116 | @LoginRequired() |
|
109 | 117 | @HasPermissionAllDecorator('hg.admin') |
|
110 | 118 | def settings_vcs(self): |
|
111 | 119 | c = self.load_default_context() |
|
112 | 120 | c.active = 'vcs' |
|
113 | 121 | model = VcsSettingsModel() |
|
114 | 122 | c.svn_branch_patterns = model.get_global_svn_branch_patterns() |
|
115 | 123 | c.svn_tag_patterns = model.get_global_svn_tag_patterns() |
|
116 | 124 | c.svn_generate_config = rhodecode.ConfigGet().get_bool(config_keys.generate_config) |
|
117 | 125 | c.svn_config_path = rhodecode.ConfigGet().get_str(config_keys.config_file_path) |
|
118 | 126 | defaults = self._form_defaults() |
|
119 | 127 | |
|
120 | 128 | model.create_largeobjects_dirs_if_needed(defaults['paths_root_path']) |
|
121 | 129 | |
|
122 | 130 | data = render('rhodecode:templates/admin/settings/settings.mako', |
|
123 | 131 | self._get_template_context(c), self.request) |
|
124 | 132 | html = formencode.htmlfill.render( |
|
125 | 133 | data, |
|
126 | 134 | defaults=defaults, |
|
127 | 135 | encoding="UTF-8", |
|
128 | 136 | force_defaults=False |
|
129 | 137 | ) |
|
130 | 138 | return Response(html) |
|
131 | 139 | |
|
132 | 140 | @LoginRequired() |
|
133 | 141 | @HasPermissionAllDecorator('hg.admin') |
|
134 | 142 | @CSRFRequired() |
|
135 | 143 | def settings_vcs_update(self): |
|
136 | 144 | _ = self.request.translate |
|
137 | 145 | c = self.load_default_context() |
|
138 | 146 | c.active = 'vcs' |
|
139 | 147 | |
|
140 | 148 | model = VcsSettingsModel() |
|
141 | 149 | c.svn_branch_patterns = model.get_global_svn_branch_patterns() |
|
142 | 150 | c.svn_tag_patterns = model.get_global_svn_tag_patterns() |
|
143 | 151 | |
|
144 | 152 | c.svn_generate_config = rhodecode.ConfigGet().get_bool(config_keys.generate_config) |
|
145 | 153 | c.svn_config_path = rhodecode.ConfigGet().get_str(config_keys.config_file_path) |
|
146 | 154 | application_form = ApplicationUiSettingsForm(self.request.translate)() |
|
147 | 155 | |
|
148 | 156 | try: |
|
149 | 157 | form_result = application_form.to_python(dict(self.request.POST)) |
|
150 | 158 | except formencode.Invalid as errors: |
|
151 | 159 | h.flash( |
|
152 | 160 | _("Some form inputs contain invalid data."), |
|
153 | 161 | category='error') |
|
154 | 162 | data = render('rhodecode:templates/admin/settings/settings.mako', |
|
155 | 163 | self._get_template_context(c), self.request) |
|
156 | 164 | html = formencode.htmlfill.render( |
|
157 | 165 | data, |
|
158 | 166 | defaults=errors.value, |
|
159 | 167 | errors=errors.unpack_errors() or {}, |
|
160 | 168 | prefix_error=False, |
|
161 | 169 | encoding="UTF-8", |
|
162 | 170 | force_defaults=False |
|
163 | 171 | ) |
|
164 | 172 | return Response(html) |
|
165 | 173 | |
|
166 | 174 | try: |
|
167 | model.update_global_ssl_setting(form_result['web_push_ssl']) | |
|
168 | 175 | model.update_global_hook_settings(form_result) |
|
169 | 176 | |
|
170 | 177 | model.create_or_update_global_svn_settings(form_result) |
|
171 | 178 | model.create_or_update_global_hg_settings(form_result) |
|
172 | 179 | model.create_or_update_global_git_settings(form_result) |
|
173 | 180 | model.create_or_update_global_pr_settings(form_result) |
|
174 | 181 | except Exception: |
|
175 | 182 | log.exception("Exception while updating settings") |
|
176 | 183 | h.flash(_('Error occurred during updating ' |
|
177 | 184 | 'application settings'), category='error') |
|
178 | 185 | else: |
|
179 | 186 | Session().commit() |
|
180 | 187 | h.flash(_('Updated VCS settings'), category='success') |
|
181 | 188 | raise HTTPFound(h.route_path('admin_settings_vcs')) |
|
182 | 189 | |
|
183 | 190 | data = render('rhodecode:templates/admin/settings/settings.mako', |
|
184 | 191 | self._get_template_context(c), self.request) |
|
185 | 192 | html = formencode.htmlfill.render( |
|
186 | 193 | data, |
|
187 | 194 | defaults=self._form_defaults(), |
|
188 | 195 | encoding="UTF-8", |
|
189 | 196 | force_defaults=False |
|
190 | 197 | ) |
|
191 | 198 | return Response(html) |
|
192 | 199 | |
|
193 | 200 | @LoginRequired() |
|
194 | 201 | @HasPermissionAllDecorator('hg.admin') |
|
195 | 202 | @CSRFRequired() |
|
196 | 203 | def settings_vcs_delete_svn_pattern(self): |
|
197 | 204 | delete_pattern_id = self.request.POST.get('delete_svn_pattern') |
|
198 | 205 | model = VcsSettingsModel() |
|
199 | 206 | try: |
|
200 | 207 | model.delete_global_svn_pattern(delete_pattern_id) |
|
201 | 208 | except SettingNotFound: |
|
202 | 209 | log.exception( |
|
203 | 210 | 'Failed to delete svn_pattern with id %s', delete_pattern_id) |
|
204 | 211 | raise HTTPNotFound() |
|
205 | 212 | |
|
206 | 213 | Session().commit() |
|
207 | 214 | return True |
|
208 | 215 | |
|
209 | 216 | @LoginRequired() |
|
210 | 217 | @HasPermissionAllDecorator('hg.admin') |
|
211 | 218 | def settings_mapping(self): |
|
212 | 219 | c = self.load_default_context() |
|
213 | 220 | c.active = 'mapping' |
|
214 | 221 | c.storage_path = get_rhodecode_repo_store_path() |
|
215 | 222 | data = render('rhodecode:templates/admin/settings/settings.mako', |
|
216 | 223 | self._get_template_context(c), self.request) |
|
217 | 224 | html = formencode.htmlfill.render( |
|
218 | 225 | data, |
|
219 | 226 | defaults=self._form_defaults(), |
|
220 | 227 | encoding="UTF-8", |
|
221 | 228 | force_defaults=False |
|
222 | 229 | ) |
|
223 | 230 | return Response(html) |
|
224 | 231 | |
|
225 | 232 | @LoginRequired() |
|
226 | 233 | @HasPermissionAllDecorator('hg.admin') |
|
227 | 234 | @CSRFRequired() |
|
228 | 235 | def settings_mapping_update(self): |
|
229 | 236 | _ = self.request.translate |
|
230 | 237 | c = self.load_default_context() |
|
231 | 238 | c.active = 'mapping' |
|
232 | 239 | rm_obsolete = self.request.POST.get('destroy', False) |
|
233 | 240 | invalidate_cache = self.request.POST.get('invalidate', False) |
|
234 | 241 | log.debug('rescanning repo location with destroy obsolete=%s', rm_obsolete) |
|
235 | 242 | |
|
236 | 243 | if invalidate_cache: |
|
237 | 244 | log.debug('invalidating all repositories cache') |
|
238 | 245 | for repo in Repository.get_all(): |
|
239 | 246 | ScmModel().mark_for_invalidation(repo.repo_name, delete=True) |
|
240 | 247 | |
|
241 | 248 | filesystem_repos = ScmModel().repo_scan() |
|
242 | 249 | added, removed = repo2db_mapper(filesystem_repos, rm_obsolete, force_hooks_rebuild=True) |
|
243 | 250 | PermissionModel().trigger_permission_flush() |
|
244 | 251 | |
|
245 | 252 | def _repr(rm_repo): |
|
246 | 253 | return ', '.join(map(safe_str, rm_repo)) or '-' |
|
247 | 254 | |
|
248 | 255 | h.flash(_('Repositories successfully ' |
|
249 | 256 | 'rescanned added: %s ; removed: %s') % |
|
250 | 257 | (_repr(added), _repr(removed)), |
|
251 | 258 | category='success') |
|
252 | 259 | raise HTTPFound(h.route_path('admin_settings_mapping')) |
|
253 | 260 | |
|
254 | 261 | @LoginRequired() |
|
255 | 262 | @HasPermissionAllDecorator('hg.admin') |
|
256 | 263 | def settings_global(self): |
|
257 | 264 | c = self.load_default_context() |
|
258 | 265 | c.active = 'global' |
|
259 | 266 | c.personal_repo_group_default_pattern = RepoGroupModel()\ |
|
260 | 267 | .get_personal_group_name_pattern() |
|
261 | 268 | |
|
262 | 269 | data = render('rhodecode:templates/admin/settings/settings.mako', |
|
263 | 270 | self._get_template_context(c), self.request) |
|
264 | 271 | html = formencode.htmlfill.render( |
|
265 | 272 | data, |
|
266 | 273 | defaults=self._form_defaults(), |
|
267 | 274 | encoding="UTF-8", |
|
268 | 275 | force_defaults=False |
|
269 | 276 | ) |
|
270 | 277 | return Response(html) |
|
271 | 278 | |
|
272 | 279 | @LoginRequired() |
|
273 | 280 | @HasPermissionAllDecorator('hg.admin') |
|
274 | 281 | @CSRFRequired() |
|
275 | 282 | def settings_global_update(self): |
|
276 | 283 | _ = self.request.translate |
|
277 | 284 | c = self.load_default_context() |
|
278 | 285 | c.active = 'global' |
|
279 | 286 | c.personal_repo_group_default_pattern = RepoGroupModel()\ |
|
280 | 287 | .get_personal_group_name_pattern() |
|
281 | 288 | application_form = ApplicationSettingsForm(self.request.translate)() |
|
282 | 289 | try: |
|
283 | 290 | form_result = application_form.to_python(dict(self.request.POST)) |
|
284 | 291 | except formencode.Invalid as errors: |
|
285 | 292 | h.flash( |
|
286 | 293 | _("Some form inputs contain invalid data."), |
|
287 | 294 | category='error') |
|
288 | 295 | data = render('rhodecode:templates/admin/settings/settings.mako', |
|
289 | 296 | self._get_template_context(c), self.request) |
|
290 | 297 | html = formencode.htmlfill.render( |
|
291 | 298 | data, |
|
292 | 299 | defaults=errors.value, |
|
293 | 300 | errors=errors.unpack_errors() or {}, |
|
294 | 301 | prefix_error=False, |
|
295 | 302 | encoding="UTF-8", |
|
296 | 303 | force_defaults=False |
|
297 | 304 | ) |
|
298 | 305 | return Response(html) |
|
299 | 306 | |
|
300 | 307 | settings = [ |
|
301 | 308 | ('title', 'rhodecode_title', 'unicode'), |
|
302 | 309 | ('realm', 'rhodecode_realm', 'unicode'), |
|
303 | 310 | ('pre_code', 'rhodecode_pre_code', 'unicode'), |
|
304 | 311 | ('post_code', 'rhodecode_post_code', 'unicode'), |
|
305 | 312 | ('captcha_public_key', 'rhodecode_captcha_public_key', 'unicode'), |
|
306 | 313 | ('captcha_private_key', 'rhodecode_captcha_private_key', 'unicode'), |
|
307 | 314 | ('create_personal_repo_group', 'rhodecode_create_personal_repo_group', 'bool'), |
|
308 | 315 | ('personal_repo_group_pattern', 'rhodecode_personal_repo_group_pattern', 'unicode'), |
|
309 | 316 | ] |
|
310 | 317 | |
|
311 | 318 | try: |
|
312 | 319 | for setting, form_key, type_ in settings: |
|
313 | 320 | sett = SettingsModel().create_or_update_setting( |
|
314 | 321 | setting, form_result[form_key], type_) |
|
315 | 322 | Session().add(sett) |
|
316 | 323 | |
|
317 | 324 | Session().commit() |
|
318 | 325 | SettingsModel().invalidate_settings_cache() |
|
319 | 326 | h.flash(_('Updated application settings'), category='success') |
|
320 | 327 | except Exception: |
|
321 | 328 | log.exception("Exception while updating application settings") |
|
322 | 329 | h.flash( |
|
323 | 330 | _('Error occurred during updating application settings'), |
|
324 | 331 | category='error') |
|
325 | 332 | |
|
326 | 333 | raise HTTPFound(h.route_path('admin_settings_global')) |
|
327 | 334 | |
|
328 | 335 | @LoginRequired() |
|
329 | 336 | @HasPermissionAllDecorator('hg.admin') |
|
330 | 337 | def settings_visual(self): |
|
331 | 338 | c = self.load_default_context() |
|
332 | 339 | c.active = 'visual' |
|
333 | 340 | |
|
334 | 341 | data = render('rhodecode:templates/admin/settings/settings.mako', |
|
335 | 342 | self._get_template_context(c), self.request) |
|
336 | 343 | html = formencode.htmlfill.render( |
|
337 | 344 | data, |
|
338 | 345 | defaults=self._form_defaults(), |
|
339 | 346 | encoding="UTF-8", |
|
340 | 347 | force_defaults=False |
|
341 | 348 | ) |
|
342 | 349 | return Response(html) |
|
343 | 350 | |
|
344 | 351 | @LoginRequired() |
|
345 | 352 | @HasPermissionAllDecorator('hg.admin') |
|
346 | 353 | @CSRFRequired() |
|
347 | 354 | def settings_visual_update(self): |
|
348 | 355 | _ = self.request.translate |
|
349 | 356 | c = self.load_default_context() |
|
350 | 357 | c.active = 'visual' |
|
351 | 358 | application_form = ApplicationVisualisationForm(self.request.translate)() |
|
352 | 359 | try: |
|
353 | 360 | form_result = application_form.to_python(dict(self.request.POST)) |
|
354 | 361 | except formencode.Invalid as errors: |
|
355 | 362 | h.flash( |
|
356 | 363 | _("Some form inputs contain invalid data."), |
|
357 | 364 | category='error') |
|
358 | 365 | data = render('rhodecode:templates/admin/settings/settings.mako', |
|
359 | 366 | self._get_template_context(c), self.request) |
|
360 | 367 | html = formencode.htmlfill.render( |
|
361 | 368 | data, |
|
362 | 369 | defaults=errors.value, |
|
363 | 370 | errors=errors.unpack_errors() or {}, |
|
364 | 371 | prefix_error=False, |
|
365 | 372 | encoding="UTF-8", |
|
366 | 373 | force_defaults=False |
|
367 | 374 | ) |
|
368 | 375 | return Response(html) |
|
369 | 376 | |
|
370 | 377 | try: |
|
371 | 378 | settings = [ |
|
372 | 379 | ('show_public_icon', 'rhodecode_show_public_icon', 'bool'), |
|
373 | 380 | ('show_private_icon', 'rhodecode_show_private_icon', 'bool'), |
|
374 | 381 | ('stylify_metatags', 'rhodecode_stylify_metatags', 'bool'), |
|
375 | 382 | ('repository_fields', 'rhodecode_repository_fields', 'bool'), |
|
376 | 383 | ('dashboard_items', 'rhodecode_dashboard_items', 'int'), |
|
377 | 384 | ('admin_grid_items', 'rhodecode_admin_grid_items', 'int'), |
|
378 | 385 | ('show_version', 'rhodecode_show_version', 'bool'), |
|
379 | 386 | ('use_gravatar', 'rhodecode_use_gravatar', 'bool'), |
|
380 | 387 | ('markup_renderer', 'rhodecode_markup_renderer', 'unicode'), |
|
381 | 388 | ('gravatar_url', 'rhodecode_gravatar_url', 'unicode'), |
|
382 | 389 | ('clone_uri_tmpl', 'rhodecode_clone_uri_tmpl', 'unicode'), |
|
383 | 390 | ('clone_uri_id_tmpl', 'rhodecode_clone_uri_id_tmpl', 'unicode'), |
|
384 | 391 | ('clone_uri_ssh_tmpl', 'rhodecode_clone_uri_ssh_tmpl', 'unicode'), |
|
385 | 392 | ('support_url', 'rhodecode_support_url', 'unicode'), |
|
386 | 393 | ('show_revision_number', 'rhodecode_show_revision_number', 'bool'), |
|
387 | 394 | ('show_sha_length', 'rhodecode_show_sha_length', 'int'), |
|
388 | 395 | ] |
|
389 | 396 | for setting, form_key, type_ in settings: |
|
390 | 397 | sett = SettingsModel().create_or_update_setting( |
|
391 | 398 | setting, form_result[form_key], type_) |
|
392 | 399 | Session().add(sett) |
|
393 | 400 | |
|
394 | 401 | Session().commit() |
|
395 | 402 | SettingsModel().invalidate_settings_cache() |
|
396 | 403 | h.flash(_('Updated visualisation settings'), category='success') |
|
397 | 404 | except Exception: |
|
398 | 405 | log.exception("Exception updating visualization settings") |
|
399 | 406 | h.flash(_('Error occurred during updating ' |
|
400 | 407 | 'visualisation settings'), |
|
401 | 408 | category='error') |
|
402 | 409 | |
|
403 | 410 | raise HTTPFound(h.route_path('admin_settings_visual')) |
|
404 | 411 | |
|
405 | 412 | @LoginRequired() |
|
406 | 413 | @HasPermissionAllDecorator('hg.admin') |
|
407 | 414 | def settings_issuetracker(self): |
|
408 | 415 | c = self.load_default_context() |
|
409 | 416 | c.active = 'issuetracker' |
|
410 | 417 | defaults = c.rc_config |
|
411 | 418 | |
|
412 | 419 | entry_key = 'rhodecode_issuetracker_pat_' |
|
413 | 420 | |
|
414 | 421 | c.issuetracker_entries = {} |
|
415 | 422 | for k, v in defaults.items(): |
|
416 | 423 | if k.startswith(entry_key): |
|
417 | 424 | uid = k[len(entry_key):] |
|
418 | 425 | c.issuetracker_entries[uid] = None |
|
419 | 426 | |
|
420 | 427 | for uid in c.issuetracker_entries: |
|
421 | 428 | c.issuetracker_entries[uid] = AttributeDict({ |
|
422 | 429 | 'pat': defaults.get('rhodecode_issuetracker_pat_' + uid), |
|
423 | 430 | 'url': defaults.get('rhodecode_issuetracker_url_' + uid), |
|
424 | 431 | 'pref': defaults.get('rhodecode_issuetracker_pref_' + uid), |
|
425 | 432 | 'desc': defaults.get('rhodecode_issuetracker_desc_' + uid), |
|
426 | 433 | }) |
|
427 | 434 | |
|
428 | 435 | return self._get_template_context(c) |
|
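
The two loops above first discover pattern uids from keys prefixed `rhodecode_issuetracker_pat_`, then gather the sibling `url`/`pref`/`desc` keys for each uid into one entry. A small sketch of that grouping with an invented uid and settings dict:

```python
# Sketch of the flat-key -> per-uid grouping above; keys/values invented.
defaults = {
    'rhodecode_issuetracker_pat_abc123': r'#(?P<issue_id>\d+)',
    'rhodecode_issuetracker_url_abc123': 'https://issues.example.com/${issue_id}',
    'rhodecode_issuetracker_pref_abc123': '#',
    'rhodecode_issuetracker_desc_abc123': 'Example tracker',
}

entry_key = 'rhodecode_issuetracker_pat_'
entries = {}
for k in defaults:
    if k.startswith(entry_key):
        uid = k[len(entry_key):]
        entries[uid] = {
            field: defaults.get(f'rhodecode_issuetracker_{field}_{uid}')
            for field in ('pat', 'url', 'pref', 'desc')
        }

assert entries['abc123']['url'] == 'https://issues.example.com/${issue_id}'
```
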
429 | 436 | |
|
430 | 437 | @LoginRequired() |
|
431 | 438 | @HasPermissionAllDecorator('hg.admin') |
|
432 | 439 | @CSRFRequired() |
|
433 | 440 | def settings_issuetracker_test(self): |
|
434 | 441 | error_container = [] |
|
435 | 442 | |
|
436 | 443 | urlified_commit = h.urlify_commit_message( |
|
437 | 444 | self.request.POST.get('test_text', ''), |
|
438 | 445 | 'repo_group/test_repo1', error_container=error_container) |
|
439 | 446 | if error_container: |
|
440 | 447 | def converter(inp): |
|
441 | 448 | return h.html_escape(inp) |
|
442 | 449 | |
|
443 | 450 | return 'ERRORS: ' + '\n'.join(map(converter, error_container)) |
|
444 | 451 | |
|
445 | 452 | return urlified_commit |
|
446 | 453 | |
|
447 | 454 | @LoginRequired() |
|
448 | 455 | @HasPermissionAllDecorator('hg.admin') |
|
449 | 456 | @CSRFRequired() |
|
450 | 457 | def settings_issuetracker_update(self): |
|
451 | 458 | _ = self.request.translate |
|
452 | 459 | self.load_default_context() |
|
453 | 460 | settings_model = IssueTrackerSettingsModel() |
|
454 | 461 | |
|
455 | 462 | try: |
|
456 | 463 | form = IssueTrackerPatternsForm(self.request.translate)() |
|
457 | 464 | data = form.to_python(self.request.POST) |
|
458 | 465 | except formencode.Invalid as errors: |
|
459 | 466 | log.exception('Failed to add new pattern') |
|
460 | 467 | error = errors |
|
461 | 468 | h.flash(_(f'Invalid issue tracker pattern: {error}'), |
|
462 | 469 | category='error') |
|
463 | 470 | raise HTTPFound(h.route_path('admin_settings_issuetracker')) |
|
464 | 471 | |
|
465 | 472 | if data: |
|
466 | 473 | for uid in data.get('delete_patterns', []): |
|
467 | 474 | settings_model.delete_entries(uid) |
|
468 | 475 | |
|
469 | 476 | for pattern in data.get('patterns', []): |
|
470 | 477 | for setting, value, type_ in pattern: |
|
471 | 478 | sett = settings_model.create_or_update_setting( |
|
472 | 479 | setting, value, type_) |
|
473 | 480 | Session().add(sett) |
|
474 | 481 | |
|
475 | 482 | Session().commit() |
|
476 | 483 | |
|
477 | 484 | SettingsModel().invalidate_settings_cache() |
|
478 | 485 | h.flash(_('Updated issue tracker entries'), category='success') |
|
479 | 486 | raise HTTPFound(h.route_path('admin_settings_issuetracker')) |
|
480 | 487 | |
|
481 | 488 | @LoginRequired() |
|
482 | 489 | @HasPermissionAllDecorator('hg.admin') |
|
483 | 490 | @CSRFRequired() |
|
484 | 491 | def settings_issuetracker_delete(self): |
|
485 | 492 | _ = self.request.translate |
|
486 | 493 | self.load_default_context() |
|
487 | 494 | uid = self.request.POST.get('uid') |
|
488 | 495 | try: |
|
489 | 496 | IssueTrackerSettingsModel().delete_entries(uid) |
|
490 | 497 | except Exception: |
|
491 | 498 | log.exception('Failed to delete issue tracker setting %s', uid) |
|
492 | 499 | raise HTTPNotFound() |
|
493 | 500 | |
|
494 | 501 | SettingsModel().invalidate_settings_cache() |
|
495 | 502 | h.flash(_('Removed issue tracker entry.'), category='success') |
|
496 | 503 | |
|
497 | 504 | return {'deleted': uid} |
|
498 | 505 | |
|
499 | 506 | @LoginRequired() |
|
500 | 507 | @HasPermissionAllDecorator('hg.admin') |
|
501 | 508 | def settings_email(self): |
|
502 | 509 | c = self.load_default_context() |
|
503 | 510 | c.active = 'email' |
|
504 | 511 | c.rhodecode_ini = rhodecode.CONFIG |
|
505 | 512 | |
|
506 | 513 | data = render('rhodecode:templates/admin/settings/settings.mako', |
|
507 | 514 | self._get_template_context(c), self.request) |
|
508 | 515 | html = formencode.htmlfill.render( |
|
509 | 516 | data, |
|
510 | 517 | defaults=self._form_defaults(), |
|
511 | 518 | encoding="UTF-8", |
|
512 | 519 | force_defaults=False |
|
513 | 520 | ) |
|
514 | 521 | return Response(html) |
|
515 | 522 | |
|
516 | 523 | @LoginRequired() |
|
517 | 524 | @HasPermissionAllDecorator('hg.admin') |
|
518 | 525 | @CSRFRequired() |
|
519 | 526 | def settings_email_update(self): |
|
520 | 527 | _ = self.request.translate |
|
521 | 528 | c = self.load_default_context() |
|
522 | 529 | c.active = 'email' |
|
523 | 530 | |
|
524 | 531 | test_email = self.request.POST.get('test_email') |
|
525 | 532 | |
|
526 | 533 | if not test_email: |
|
527 | 534 | h.flash(_('Please enter email address'), category='error') |
|
528 | 535 | raise HTTPFound(h.route_path('admin_settings_email')) |
|
529 | 536 | |
|
530 | 537 | email_kwargs = { |
|
531 | 538 | 'date': datetime.datetime.now(), |
|
532 | 539 | 'user': self._rhodecode_db_user |
|
533 | 540 | } |
|
534 | 541 | |
|
535 | 542 | (subject, email_body, email_body_plaintext) = EmailNotificationModel().render_email( |
|
536 | 543 | EmailNotificationModel.TYPE_EMAIL_TEST, **email_kwargs) |
|
537 | 544 | |
|
538 | 545 | recipients = [test_email] if test_email else None |
|
539 | 546 | |
|
540 | 547 | run_task(tasks.send_email, recipients, subject, |
|
541 | 548 | email_body_plaintext, email_body) |
|
542 | 549 | |
|
543 | 550 | h.flash(_('Send email task created'), category='success') |
|
544 | 551 | raise HTTPFound(h.route_path('admin_settings_email')) |
|
545 | 552 | |
|
546 | 553 | @LoginRequired() |
|
547 | 554 | @HasPermissionAllDecorator('hg.admin') |
|
548 | 555 | def settings_hooks(self): |
|
549 | 556 | c = self.load_default_context() |
|
550 | 557 | c.active = 'hooks' |
|
551 | 558 | |
|
552 | 559 | model = SettingsModel() |
|
553 | 560 | c.hooks = model.get_builtin_hooks() |
|
554 | 561 | c.custom_hooks = model.get_custom_hooks() |
|
555 | 562 | |
|
556 | 563 | data = render('rhodecode:templates/admin/settings/settings.mako', |
|
557 | 564 | self._get_template_context(c), self.request) |
|
558 | 565 | html = formencode.htmlfill.render( |
|
559 | 566 | data, |
|
560 | 567 | defaults=self._form_defaults(), |
|
561 | 568 | encoding="UTF-8", |
|
562 | 569 | force_defaults=False |
|
563 | 570 | ) |
|
564 | 571 | return Response(html) |
|
565 | 572 | |
|
566 | 573 | @LoginRequired() |
|
567 | 574 | @HasPermissionAllDecorator('hg.admin') |
|
568 | 575 | @CSRFRequired() |
|
569 | 576 | def settings_hooks_update(self): |
|
570 | 577 | _ = self.request.translate |
|
571 | 578 | c = self.load_default_context() |
|
572 | 579 | c.active = 'hooks' |
|
573 | 580 | if c.visual.allow_custom_hooks_settings: |
|
574 | 581 | ui_key = self.request.POST.get('new_hook_ui_key') |
|
575 | 582 | ui_value = self.request.POST.get('new_hook_ui_value') |
|
576 | 583 | |
|
577 | 584 | hook_id = self.request.POST.get('hook_id') |
|
578 | 585 | new_hook = False |
|
579 | 586 | |
|
580 | 587 | model = SettingsModel() |
|
581 | 588 | try: |
|
582 | 589 | if ui_value and ui_key: |
|
583 | 590 | model.create_or_update_hook(ui_key, ui_value) |
|
584 | 591 | h.flash(_('Added new hook'), category='success') |
|
585 | 592 | new_hook = True |
|
586 | 593 | elif hook_id: |
|
587 | 594 | RhodeCodeUi.delete(hook_id) |
|
588 | 595 | Session().commit() |
|
589 | 596 | |
|
590 | 597 | # check for edits |
|
591 | 598 | update = False |
|
592 | 599 | _d = self.request.POST.dict_of_lists() |
|
593 | 600 | for k, v in zip(_d.get('hook_ui_key', []), |
|
594 | 601 | _d.get('hook_ui_value_new', [])): |
|
595 | 602 | model.create_or_update_hook(k, v) |
|
596 | 603 | update = True |
|
597 | 604 | |
|
598 | 605 | if update and not new_hook: |
|
599 | 606 | h.flash(_('Updated hooks'), category='success') |
|
600 | 607 | Session().commit() |
|
601 | 608 | except Exception: |
|
602 | 609 | log.exception("Exception during hook creation") |
|
603 | 610 | h.flash(_('Error occurred during hook creation'), |
|
604 | 611 | category='error') |
|
605 | 612 | |
|
606 | 613 | raise HTTPFound(h.route_path('admin_settings_hooks')) |
|
607 | 614 | |
|
608 | 615 | @LoginRequired() |
|
609 | 616 | @HasPermissionAllDecorator('hg.admin') |
|
610 | 617 | def settings_search(self): |
|
611 | 618 | c = self.load_default_context() |
|
612 | 619 | c.active = 'search' |
|
613 | 620 | |
|
614 | 621 | c.searcher = searcher_from_config(self.request.registry.settings) |
|
615 | 622 | c.statistics = c.searcher.statistics(self.request.translate) |
|
616 | 623 | |
|
617 | 624 | return self._get_template_context(c) |
|
618 | 625 | |
|
619 | 626 | @LoginRequired() |
|
620 | 627 | @HasPermissionAllDecorator('hg.admin') |
|
621 | 628 | def settings_labs(self): |
|
622 | 629 | c = self.load_default_context() |
|
623 | 630 | if not c.labs_active: |
|
624 | 631 | raise HTTPFound(h.route_path('admin_settings')) |
|
625 | 632 | |
|
626 | 633 | c.active = 'labs' |
|
627 | 634 | c.lab_settings = _LAB_SETTINGS |
|
628 | 635 | |
|
629 | 636 | data = render('rhodecode:templates/admin/settings/settings.mako', |
|
630 | 637 | self._get_template_context(c), self.request) |
|
631 | 638 | html = formencode.htmlfill.render( |
|
632 | 639 | data, |
|
633 | 640 | defaults=self._form_defaults(), |
|
634 | 641 | encoding="UTF-8", |
|
635 | 642 | force_defaults=False |
|
636 | 643 | ) |
|
637 | 644 | return Response(html) |
|
638 | 645 | |
|
639 | 646 | @LoginRequired() |
|
640 | 647 | @HasPermissionAllDecorator('hg.admin') |
|
641 | 648 | @CSRFRequired() |
|
642 | 649 | def settings_labs_update(self): |
|
643 | 650 | _ = self.request.translate |
|
644 | 651 | c = self.load_default_context() |
|
645 | 652 | c.active = 'labs' |
|
646 | 653 | |
|
647 | 654 | application_form = LabsSettingsForm(self.request.translate)() |
|
648 | 655 | try: |
|
649 | 656 | form_result = application_form.to_python(dict(self.request.POST)) |
|
650 | 657 | except formencode.Invalid as errors: |
|
651 | 658 | h.flash( |
|
652 | 659 | _("Some form inputs contain invalid data."), |
|
653 | 660 | category='error') |
|
654 | 661 | data = render('rhodecode:templates/admin/settings/settings.mako', |
|
655 | 662 | self._get_template_context(c), self.request) |
|
656 | 663 | html = formencode.htmlfill.render( |
|
657 | 664 | data, |
|
658 | 665 | defaults=errors.value, |
|
659 | 666 | errors=errors.unpack_errors() or {}, |
|
660 | 667 | prefix_error=False, |
|
661 | 668 | encoding="UTF-8", |
|
662 | 669 | force_defaults=False |
|
663 | 670 | ) |
|
664 | 671 | return Response(html) |
|
665 | 672 | |
|
666 | 673 | try: |
|
667 | 674 | session = Session() |
|
668 | 675 | for setting in _LAB_SETTINGS: |
|
669 | 676 | setting_name = setting.key[len('rhodecode_'):] |
|
670 | 677 | sett = SettingsModel().create_or_update_setting( |
|
671 | 678 | setting_name, form_result[setting.key], setting.type) |
|
672 | 679 | session.add(sett) |
|
673 | 680 | |
|
674 | 681 | except Exception: |
|
675 | 682 | log.exception('Exception while updating lab settings') |
|
676 | 683 | h.flash(_('Error occurred during updating labs settings'), |
|
677 | 684 | category='error') |
|
678 | 685 | else: |
|
679 | 686 | Session().commit() |
|
680 | 687 | SettingsModel().invalidate_settings_cache() |
|
681 | 688 | h.flash(_('Updated Labs settings'), category='success') |
|
682 | 689 | raise HTTPFound(h.route_path('admin_settings_labs')) |
|
683 | 690 | |
|
684 | 691 | data = render('rhodecode:templates/admin/settings/settings.mako', |
|
685 | 692 | self._get_template_context(c), self.request) |
|
686 | 693 | html = formencode.htmlfill.render( |
|
687 | 694 | data, |
|
688 | 695 | defaults=self._form_defaults(), |
|
689 | 696 | encoding="UTF-8", |
|
690 | 697 | force_defaults=False |
|
691 | 698 | ) |
|
692 | 699 | return Response(html) |
|
693 | 700 | |
|
694 | 701 | |
|
695 | 702 | # :param key: name of the setting including the 'rhodecode_' prefix |
|
696 | 703 | # :param type: the RhodeCodeSetting type to use. |
|
697 | 704 | # :param group: the i18ned group in which we should display this setting |
|
698 | 705 | # :param label: the i18ned label we should display for this setting |
|
699 | 706 | # :param help: the i18ned help we should display for this setting |
|
700 | 707 | LabSetting = collections.namedtuple( |
|
701 | 708 | 'LabSetting', ('key', 'type', 'group', 'label', 'help')) |
|
702 | 709 | |
|
703 | 710 | |
|
704 | 711 | # This list has to be kept in sync with the form |
|
705 | 712 | # rhodecode.model.forms.LabsSettingsForm. |
|
706 | 713 | _LAB_SETTINGS = [ |
|
707 | 714 | |
|
708 | 715 | ] |
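
Note on the `LabSetting` entries consumed by `settings_labs_update` above: each `setting.key` must carry the `rhodecode_` prefix, since it is stripped before `create_or_update_setting` is called. `_LAB_SETTINGS` ships empty in this revision; the entry below is a minimal hypothetical sketch, not code from this changeset:

    # hypothetical LabSetting entry; key, type, and texts are illustrative only
    _LAB_SETTINGS = [
        LabSetting(
            key='rhodecode_hidden_feature',  # the 'rhodecode_' prefix is required
            type='bool',                     # RhodeCodeSetting type used on save
            group='Experimental',
            label='Enable hidden feature',
            help='Toggles a hypothetical experimental behaviour.'),
    ]
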
@@ -1,243 +1,249 b'' | |||
|
1 | 1 | |
|
2 | 2 | |
|
3 | 3 | # Copyright (C) 2016-2023 RhodeCode GmbH |
|
4 | 4 | # |
|
5 | 5 | # This program is free software: you can redistribute it and/or modify |
|
6 | 6 | # it under the terms of the GNU Affero General Public License, version 3 |
|
7 | 7 | # (only), as published by the Free Software Foundation. |
|
8 | 8 | # |
|
9 | 9 | # This program is distributed in the hope that it will be useful, |
|
10 | 10 | # but WITHOUT ANY WARRANTY; without even the implied warranty of |
|
11 | 11 | # MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the |
|
12 | 12 | # GNU General Public License for more details. |
|
13 | 13 | # |
|
14 | 14 | # You should have received a copy of the GNU Affero General Public License |
|
15 | 15 | # along with this program. If not, see <http://www.gnu.org/licenses/>. |
|
16 | 16 | # |
|
17 | 17 | # This program is dual-licensed. If you wish to learn more about the |
|
18 | 18 | # RhodeCode Enterprise Edition, including its added features, Support services, |
|
19 | 19 | # and proprietary license terms, please see https://rhodecode.com/licenses/ |
|
20 | 20 | |
|
21 | 21 | import logging |
|
22 | 22 | import urllib.request |
|
23 | 23 | import urllib.error |
|
24 | 24 | import urllib.parse |
|
25 | 25 | import os |
|
26 | 26 | |
|
27 | 27 | import rhodecode |
|
28 | 28 | from rhodecode.apps._base import BaseAppView |
|
29 | 29 | from rhodecode.apps._base.navigation import navigation_list |
|
30 | 30 | from rhodecode.lib import helpers as h |
|
31 | 31 | from rhodecode.lib.auth import (LoginRequired, HasPermissionAllDecorator) |
|
32 | 32 | from rhodecode.lib.utils2 import str2bool |
|
33 | 33 | from rhodecode.lib import system_info |
|
34 | 34 | from rhodecode.model.update import UpdateModel |
|
35 | 35 | |
|
36 | 36 | log = logging.getLogger(__name__) |
|
37 | 37 | |
|
38 | 38 | |
|
39 | 39 | class AdminSystemInfoSettingsView(BaseAppView): |
|
40 | 40 | def load_default_context(self): |
|
41 | 41 | c = self._get_local_tmpl_context() |
|
42 | 42 | return c |
|
43 | 43 | |
|
44 | 44 | def get_env_data(self): |
|
45 | 45 | black_list = [ |
|
46 | 46 | 'NIX_LDFLAGS', |
|
47 | 47 | 'NIX_CFLAGS_COMPILE', |
|
48 | 48 | 'propagatedBuildInputs', |
|
49 | 49 | 'propagatedNativeBuildInputs', |
|
50 | 50 | 'postInstall', |
|
51 | 51 | 'buildInputs', |
|
52 | 52 | 'buildPhase', |
|
53 | 53 | 'preShellHook', |
|
54 | 54 | 'preShellHook', |
|
55 | 55 | 'preCheck', |
|
56 | 56 | 'preBuild', |
|
57 | 57 | 'postShellHook', |
|
58 | 58 | 'postFixup', |
|
59 | 59 | 'postCheck', |
|
60 | 60 | 'nativeBuildInputs', |
|
61 | 61 | 'installPhase', |
|
62 | 62 | 'installCheckPhase', |
|
63 | 63 | 'checkPhase', |
|
64 | 64 | 'configurePhase', |
|
65 | 65 | 'shellHook' |
|
66 | 66 | ] |
|
67 | 67 | secret_list = [ |
|
68 | 68 | 'RHODECODE_USER_PASS' |
|
69 | 69 | ] |
|
70 | 70 | |
|
71 | 71 | for k, v in sorted(os.environ.items()): |
|
72 | 72 | if k in black_list: |
|
73 | 73 | continue |
|
74 | 74 | if k in secret_list: |
|
75 | 75 | v = '*****' |
|
76 | 76 | yield k, v |
|
77 | 77 | |
|
78 | 78 | @LoginRequired() |
|
79 | 79 | @HasPermissionAllDecorator('hg.admin') |
|
80 | 80 | def settings_system_info(self): |
|
81 | 81 | _ = self.request.translate |
|
82 | 82 | c = self.load_default_context() |
|
83 | 83 | |
|
84 | 84 | c.active = 'system' |
|
85 | 85 | c.navlist = navigation_list(self.request) |
|
86 | 86 | |
|
87 | 87 | # TODO(marcink), figure out how to allow only selected users to do this |
|
88 | 88 | c.allowed_to_snapshot = self._rhodecode_user.admin |
|
89 | 89 | |
|
90 | 90 | snapshot = str2bool(self.request.params.get('snapshot')) |
|
91 | 91 | |
|
92 | 92 | c.rhodecode_update_url = UpdateModel().get_update_url() |
|
93 | 93 | c.env_data = self.get_env_data() |
|
94 | 94 | server_info = system_info.get_system_info(self.request.environ) |
|
95 | 95 | |
|
96 | 96 | for key, val in server_info.items(): |
|
97 | 97 | setattr(c, key, val) |
|
98 | 98 | |
|
99 | 99 | def val(name, subkey='human_value'): |
|
100 | 100 | return server_info[name][subkey] |
|
101 | 101 | |
|
102 | 102 | def state(name): |
|
103 | 103 | return server_info[name]['state'] |
|
104 | 104 | |
|
105 | 105 | def val2(name): |
|
106 | 106 | val = server_info[name]['human_value'] |
|
107 | 107 | state = server_info[name]['state'] |
|
108 | 108 | return val, state |
|
109 | 109 | |
|
110 | 110 | update_info_msg = _('Note: please make sure this server can ' |
|
111 | 111 | 'access `${url}` for the update link to work', |
|
112 | 112 | mapping=dict(url=c.rhodecode_update_url)) |
|
113 | 113 | version = UpdateModel().get_stored_version() |
|
114 | 114 | is_outdated = UpdateModel().is_outdated( |
|
115 | 115 | rhodecode.__version__, version) |
|
116 | 116 | update_state = { |
|
117 | 117 | 'type': 'warning', |
|
118 | 118 | 'message': 'New version available: {}'.format(version) |
|
119 | 119 | } \ |
|
120 | 120 | if is_outdated else {} |
|
121 | 121 | c.data_items = [ |
|
122 | 122 | # update info |
|
123 | 123 | (_('Update info'), h.literal( |
|
124 | 124 | '<span class="link" id="check_for_update" >%s.</span>' % ( |
|
125 | 125 | _('Check for updates')) + |
|
126 | 126 | '<br/> <span >%s.</span>' % (update_info_msg) |
|
127 | 127 | ), ''), |
|
128 | 128 | |
|
129 | 129 | # RhodeCode specific |
|
130 | 130 | (_('RhodeCode Version'), val('rhodecode_app')['text'], state('rhodecode_app')), |
|
131 | 131 | (_('Latest version'), version, update_state), |
|
132 | 132 | (_('RhodeCode Base URL'), val('rhodecode_config')['config'].get('app.base_url'), state('rhodecode_config')), |
|
133 | 133 | (_('RhodeCode Server IP'), val('server')['server_ip'], state('server')), |
|
134 | 134 | (_('RhodeCode Server ID'), val('server')['server_id'], state('server')), |
|
135 | 135 | (_('RhodeCode Configuration'), val('rhodecode_config')['path'], state('rhodecode_config')), |
|
136 | 136 | (_('RhodeCode Certificate'), val('rhodecode_config')['cert_path'], state('rhodecode_config')), |
|
137 | 137 | (_('Workers'), val('rhodecode_config')['config']['server:main'].get('workers', '?'), state('rhodecode_config')), |
|
138 | 138 | (_('Worker Type'), val('rhodecode_config')['config']['server:main'].get('worker_class', 'sync'), state('rhodecode_config')), |
|
139 | 139 | ('', '', ''), # spacer |
|
140 | 140 | |
|
141 | 141 | # Database |
|
142 | 142 | (_('Database'), val('database')['url'], state('database')), |
|
143 | 143 | (_('Database version'), val('database')['version'], state('database')), |
|
144 | 144 | ('', '', ''), # spacer |
|
145 | 145 | |
|
146 | 146 | # Platform/Python |
|
147 | 147 | (_('Platform'), val('platform')['name'], state('platform')), |
|
148 | 148 | (_('Platform UUID'), val('platform')['uuid'], state('platform')), |
|
149 | 149 | (_('Lang'), val('locale'), state('locale')), |
|
150 | 150 | (_('Python version'), val('python')['version'], state('python')), |
|
151 | 151 | (_('Python path'), val('python')['executable'], state('python')), |
|
152 | 152 | ('', '', ''), # spacer |
|
153 | 153 | |
|
154 | 154 | # Systems stats |
|
155 | 155 | (_('CPU'), val('cpu')['text'], state('cpu')), |
|
156 | 156 | (_('Load'), val('load')['text'], state('load')), |
|
157 | 157 | (_('Memory'), val('memory')['text'], state('memory')), |
|
158 | 158 | (_('Uptime'), val('uptime')['text'], state('uptime')), |
|
159 | 159 | ('', '', ''), # spacer |
|
160 | 160 | |
|
161 | 161 | # ulimit |
|
162 | 162 | (_('Ulimit'), val('ulimit')['text'], state('ulimit')), |
|
163 | 163 | |
|
164 | 164 | # Repo storage |
|
165 | 165 | (_('Storage location'), val('storage')['path'], state('storage')), |
|
166 | 166 | (_('Storage info'), val('storage')['text'], state('storage')), |
|
167 | 167 | (_('Storage inodes'), val('storage_inodes')['text'], state('storage_inodes')), |
|
168 | 168 | ('', '', ''), # spacer |
|
169 | 169 | |
|
170 | 170 | (_('Gist storage location'), val('storage_gist')['path'], state('storage_gist')), |
|
171 | 171 | (_('Gist storage info'), val('storage_gist')['text'], state('storage_gist')), |
|
172 | 172 | ('', '', ''), # spacer |
|
173 | 173 | |
|
174 | (_('Artifacts storage'), val('storage_artifacts')['path'], state('storage_artifacts')), |

174 | (_('Artifacts storage backend'), val('storage_artifacts')['type'], state('storage_artifacts')), | |
|
175 | (_('Artifacts storage location'), val('storage_artifacts')['path'], state('storage_artifacts')), | |
|
176 | (_('Artifacts info'), val('storage_artifacts')['text'], state('storage_artifacts')), | |
|
177 | ('', '', ''), # spacer | |
|
178 | ||
|
179 | (_('Archive cache storage backend'), val('storage_archive')['type'], state('storage_archive')), | |
|
175 | 180 | (_('Archive cache storage location'), val('storage_archive')['path'], state('storage_archive')), |
|
176 | 181 | (_('Archive cache info'), val('storage_archive')['text'], state('storage_archive')), |
|
177 | 182 | ('', '', ''), # spacer |
|
178 | 183 | |
|
184 | ||
|
179 | 185 | (_('Temp storage location'), val('storage_temp')['path'], state('storage_temp')), |
|
180 | 186 | (_('Temp storage info'), val('storage_temp')['text'], state('storage_temp')), |
|
181 | 187 | ('', '', ''), # spacer |
|
182 | 188 | |
|
183 | 189 | (_('Search info'), val('search')['text'], state('search')), |
|
184 | 190 | (_('Search location'), val('search')['location'], state('search')), |
|
185 | 191 | ('', '', ''), # spacer |
|
186 | 192 | |
|
187 | 193 | # VCS specific |
|
188 | 194 | (_('VCS Backends'), val('vcs_backends'), state('vcs_backends')), |
|
189 | 195 | (_('VCS Server'), val('vcs_server')['text'], state('vcs_server')), |
|
190 | 196 | (_('GIT'), val('git'), state('git')), |
|
191 | 197 | (_('HG'), val('hg'), state('hg')), |
|
192 | 198 | (_('SVN'), val('svn'), state('svn')), |
|
193 | 199 | |
|
194 | 200 | ] |
|
195 | 201 | |
|
196 | 202 | c.vcsserver_data_items = [ |
|
197 | 203 | (k, v) for k, v in (val('vcs_server_config') or {}).items() |
|
198 | 204 | ] |
|
199 | 205 | |
|
200 | 206 | if snapshot: |
|
201 | 207 | if c.allowed_to_snapshot: |
|
202 | 208 | c.data_items.pop(0) # remove server info |
|
203 | 209 | self.request.override_renderer = 'admin/settings/settings_system_snapshot.mako' |
|
204 | 210 | else: |
|
205 | 211 | h.flash('You are not allowed to do this', category='warning') |
|
206 | 212 | return self._get_template_context(c) |
|
207 | 213 | |
|
208 | 214 | @LoginRequired() |
|
209 | 215 | @HasPermissionAllDecorator('hg.admin') |
|
210 | 216 | def settings_system_info_check_update(self): |
|
211 | 217 | _ = self.request.translate |
|
212 | 218 | c = self.load_default_context() |
|
213 | 219 | |
|
214 | 220 | update_url = UpdateModel().get_update_url() |
|
215 | 221 | |
|
216 | 222 | def _err(s): |
|
217 | 223 | return f'<div style="color:#ff8888; padding:4px 0px">{s}</div>' |
|
218 | 224 | |
|
219 | 225 | try: |
|
220 | 226 | data = UpdateModel().get_update_data(update_url) |
|
221 | 227 | except urllib.error.URLError as e: |
|
222 | 228 | log.exception("Exception contacting upgrade server") |
|
223 | 229 | self.request.override_renderer = 'string' |
|
224 | 230 | return _err('Failed to contact upgrade server: %r' % e) |
|
225 | 231 | except ValueError as e: |
|
226 | 232 | log.exception("Bad data sent from update server") |
|
227 | 233 | self.request.override_renderer = 'string' |
|
228 | 234 | return _err('Bad data sent from update server') |
|
229 | 235 | |
|
230 | 236 | latest = data['versions'][0] |
|
231 | 237 | |
|
232 | 238 | c.update_url = update_url |
|
233 | 239 | c.latest_data = latest |
|
234 | 240 | c.latest_ver = (latest['version'] or '').strip() |
|
235 | 241 | c.cur_ver = self.request.GET.get('ver') or rhodecode.__version__ |
|
236 | 242 | c.should_upgrade = False |
|
237 | 243 | |
|
238 | 244 | is_outdated = UpdateModel().is_outdated(c.cur_ver, c.latest_ver) |
|
239 | 245 | if is_outdated: |
|
240 | 246 | c.should_upgrade = True |
|
241 | 247 | c.important_notices = latest['general'] |
|
242 | 248 | UpdateModel().store_version(latest['version']) |
|
243 | 249 | return self._get_template_context(c) |
@@ -1,66 +1,97 b'' | |||
|
1 | 1 | # Copyright (C) 2016-2023 RhodeCode GmbH |
|
2 | 2 | # |
|
3 | 3 | # This program is free software: you can redistribute it and/or modify |
|
4 | 4 | # it under the terms of the GNU Affero General Public License, version 3 |
|
5 | 5 | # (only), as published by the Free Software Foundation. |
|
6 | 6 | # |
|
7 | 7 | # This program is distributed in the hope that it will be useful, |
|
8 | 8 | # but WITHOUT ANY WARRANTY; without even the implied warranty of |
|
9 | 9 | # MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the |
|
10 | 10 | # GNU General Public License for more details. |
|
11 | 11 | # |
|
12 | 12 | # You should have received a copy of the GNU Affero General Public License |
|
13 | 13 | # along with this program. If not, see <http://www.gnu.org/licenses/>. |
|
14 | 14 | # |
|
15 | 15 | # This program is dual-licensed. If you wish to learn more about the |
|
16 | 16 | # RhodeCode Enterprise Edition, including its added features, Support services, |
|
17 | 17 | # and proprietary license terms, please see https://rhodecode.com/licenses/ |
|
18 | 18 | import os |
|
19 | from rhodecode.apps.file_store import config_keys | |
|
19 | ||
|
20 | ||
|
20 | 21 | from rhodecode.config.settings_maker import SettingsMaker |
|
21 | 22 | |
|
22 | 23 | |
|
23 | 24 | def _sanitize_settings_and_apply_defaults(settings): |
|
24 | 25 | """ |
|
25 | 26 | Set defaults, convert to python types and validate settings. |
|
26 | 27 | """ |
|
28 | from rhodecode.apps.file_store import config_keys | |
|
29 | ||
|
30 | # translate "legacy" params into new config | |
|
31 | settings.pop(config_keys.deprecated_enabled, True) | |
|
32 | if config_keys.deprecated_backend in settings: | |
|
33 | # if legacy backend key is detected we use "legacy" backward compat setting | |
|
34 | settings.pop(config_keys.deprecated_backend) | |
|
35 | settings[config_keys.backend_type] = config_keys.backend_legacy_filesystem | |
|
36 | ||
|
37 | if config_keys.deprecated_store_path in settings: | |
|
38 | store_path = settings.pop(config_keys.deprecated_store_path) | |
|
39 | settings[config_keys.legacy_filesystem_storage_path] = store_path | |
|
40 | ||
|
27 | 41 | settings_maker = SettingsMaker(settings) |
|
28 | 42 | |
|
29 | settings_maker.make_setting(config_keys.enabled, True, parser='bool') | |
|
30 | settings_maker.make_setting(config_keys.backend, 'local') | |
|
43 | default_cache_dir = settings['cache_dir'] | |
|
44 | default_store_dir = os.path.join(default_cache_dir, 'artifacts_filestore') | |
|
45 | ||
|
46 | # set default backend | |
|
47 | settings_maker.make_setting(config_keys.backend_type, config_keys.backend_legacy_filesystem) | |
|
48 | ||
|
49 | # legacy filesystem defaults | |
|
50 | settings_maker.make_setting(config_keys.legacy_filesystem_storage_path, default_store_dir, default_when_empty=True, ) | |
|
31 | 51 | |
|
32 | default_store = os.path.join(os.path.dirname(settings['__file__']), 'upload_store') | |
|
33 | settings_maker.make_setting(config_keys.store_path, default_store) | |
|
52 | # filesystem defaults | |
|
53 | settings_maker.make_setting(config_keys.filesystem_storage_path, default_store_dir, default_when_empty=True,) | |
|
54 | settings_maker.make_setting(config_keys.filesystem_shards, 8, parser='int') | |
|
55 | ||
|
56 | # objectstore defaults | |
|
57 | settings_maker.make_setting(config_keys.objectstore_url, 'http://s3-minio:9000') | |
|
58 | settings_maker.make_setting(config_keys.objectstore_bucket, 'rhodecode-artifacts-filestore') | |
|
59 | settings_maker.make_setting(config_keys.objectstore_bucket_shards, 8, parser='int') | |
|
60 | ||
|
61 | settings_maker.make_setting(config_keys.objectstore_region, '') | |
|
62 | settings_maker.make_setting(config_keys.objectstore_key, '') | |
|
63 | settings_maker.make_setting(config_keys.objectstore_secret, '') | |
|
34 | 64 | |
|
35 | 65 | settings_maker.env_expand() |
|
36 | 66 | |
|
37 | 67 | |
|
38 | 68 | def includeme(config): |
|
69 | ||
|
39 | 70 | from rhodecode.apps.file_store.views import FileStoreView |
|
40 | 71 | |
|
41 | 72 | settings = config.registry.settings |
|
42 | 73 | _sanitize_settings_and_apply_defaults(settings) |
|
43 | 74 | |
|
44 | 75 | config.add_route( |
|
45 | 76 | name='upload_file', |
|
46 | 77 | pattern='/_file_store/upload') |
|
47 | 78 | config.add_view( |
|
48 | 79 | FileStoreView, |
|
49 | 80 | attr='upload_file', |
|
50 | 81 | route_name='upload_file', request_method='POST', renderer='json_ext') |
|
51 | 82 | |
|
52 | 83 | config.add_route( |
|
53 | 84 | name='download_file', |
|
54 | 85 | pattern='/_file_store/download/{fid:.*}') |
|
55 | 86 | config.add_view( |
|
56 | 87 | FileStoreView, |
|
57 | 88 | attr='download_file', |
|
58 | 89 | route_name='download_file') |
|
59 | 90 | |
|
60 | 91 | config.add_route( |
|
61 | 92 | name='download_file_by_token', |
|
62 | 93 | pattern='/_file_store/token-download/{_auth_token}/{fid:.*}') |
|
63 | 94 | config.add_view( |
|
64 | 95 | FileStoreView, |
|
65 | 96 | attr='download_file_by_token', |
|
66 | 97 | route_name='download_file_by_token') |
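
The sanitize step above first rewrites any deprecated `file_store.*` keys before applying defaults. A rough sketch of the effective translation, assuming a plain settings dict (paths are illustrative):

    # hedged sketch of the legacy-key translation performed above
    settings = {
        'file_store.enabled': 'true',             # deprecated, popped and ignored
        'file_store.backend': 'local',            # deprecated, triggers legacy mode
        'file_store.storage_path': '/var/store',  # deprecated, migrated
        'cache_dir': '/var/cache/rc',
    }
    _sanitize_settings_and_apply_defaults(settings)
    # expected afterwards, per the code above:
    #   settings['file_store.backend.type'] == 'filesystem_v1'
    #   settings['file_store.filesystem_v1.storage_path'] == '/var/store'
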
@@ -1,25 +1,57 b'' | |||
|
1 | 1 | # Copyright (C) 2016-2023 RhodeCode GmbH |
|
2 | 2 | # |
|
3 | 3 | # This program is free software: you can redistribute it and/or modify |
|
4 | 4 | # it under the terms of the GNU Affero General Public License, version 3 |
|
5 | 5 | # (only), as published by the Free Software Foundation. |
|
6 | 6 | # |
|
7 | 7 | # This program is distributed in the hope that it will be useful, |
|
8 | 8 | # but WITHOUT ANY WARRANTY; without even the implied warranty of |
|
9 | 9 | # MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the |
|
10 | 10 | # GNU General Public License for more details. |
|
11 | 11 | # |
|
12 | 12 | # You should have received a copy of the GNU Affero General Public License |
|
13 | 13 | # along with this program. If not, see <http://www.gnu.org/licenses/>. |
|
14 | 14 | # |
|
15 | 15 | # This program is dual-licensed. If you wish to learn more about the |
|
16 | 16 | # RhodeCode Enterprise Edition, including its added features, Support services, |
|
17 | 17 | # and proprietary license terms, please see https://rhodecode.com/licenses/ |
|
18 | 18 | |
|
19 | 19 | |
|
20 | 20 | # Definition of setting keys used to configure this module. Defined here to |
|
21 | 21 | # avoid repetition of keys throughout the module. |
|
22 | 22 | |
|
23 | enabled = 'file_store.enabled' | |
|
24 | backend = 'file_store.backend' |

25 | store_path = 'file_store.storage_path' | |
|
23 | # OLD and deprecated keys not used anymore | |
|
24 | deprecated_enabled = 'file_store.enabled' | |
|
25 | deprecated_backend = 'file_store.backend' | |
|
26 | deprecated_store_path = 'file_store.storage_path' | |
|
27 | ||
|
28 | ||
|
29 | backend_type = 'file_store.backend.type' | |
|
30 | ||
|
31 | backend_legacy_filesystem = 'filesystem_v1' | |
|
32 | backend_filesystem = 'filesystem_v2' | |
|
33 | backend_objectstore = 'objectstore' | |
|
34 | ||
|
35 | backend_types = [ | |
|
36 | backend_legacy_filesystem, | |
|
37 | backend_filesystem, | |
|
38 | backend_objectstore, | |
|
39 | ] | |
|
40 | ||
|
41 | # filesystem_v1 legacy | |
|
42 | legacy_filesystem_storage_path = 'file_store.filesystem_v1.storage_path' | |
|
43 | ||
|
44 | ||
|
45 | # filesystem_v2 new option | |
|
46 | filesystem_storage_path = 'file_store.filesystem_v2.storage_path' | |
|
47 | filesystem_shards = 'file_store.filesystem_v2.shards' | |
|
48 | ||
|
49 | # objectstore | |
|
50 | objectstore_url = 'file_store.objectstore.url' | |
|
51 | objectstore_bucket = 'file_store.objectstore.bucket' | |
|
52 | objectstore_bucket_shards = 'file_store.objectstore.bucket_shards' | |
|
53 | ||
|
54 | objectstore_region = 'file_store.objectstore.region' | |
|
55 | objectstore_key = 'file_store.objectstore.key' | |
|
56 | objectstore_secret = 'file_store.objectstore.secret' | |
|
57 |
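
Together these keys describe three mutually exclusive backends (`filesystem_v1`, `filesystem_v2`, `objectstore`). A hedged sketch of the settings a deployment might supply for the objectstore backend; URL and bucket repeat the defaults applied in `__init__.py` above, and the credentials are placeholders:

    objectstore_settings = {
        'file_store.backend.type': 'objectstore',
        'file_store.objectstore.url': 'http://s3-minio:9000',
        'file_store.objectstore.bucket': 'rhodecode-artifacts-filestore',
        'file_store.objectstore.bucket_shards': '8',
        'file_store.objectstore.key': 'ACCESS_KEY',     # placeholder
        'file_store.objectstore.secret': 'SECRET_KEY',  # placeholder
    }
    assert objectstore_settings['file_store.backend.type'] in backend_types
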
@@ -1,18 +1,57 b'' | |||
|
1 | 1 | # Copyright (C) 2016-2023 RhodeCode GmbH |
|
2 | 2 | # |
|
3 | 3 | # This program is free software: you can redistribute it and/or modify |
|
4 | 4 | # it under the terms of the GNU Affero General Public License, version 3 |
|
5 | 5 | # (only), as published by the Free Software Foundation. |
|
6 | 6 | # |
|
7 | 7 | # This program is distributed in the hope that it will be useful, |
|
8 | 8 | # but WITHOUT ANY WARRANTY; without even the implied warranty of |
|
9 | 9 | # MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the |
|
10 | 10 | # GNU General Public License for more details. |
|
11 | 11 | # |
|
12 | 12 | # You should have received a copy of the GNU Affero General Public License |
|
13 | 13 | # along with this program. If not, see <http://www.gnu.org/licenses/>. |
|
14 | 14 | # |
|
15 | 15 | # This program is dual-licensed. If you wish to learn more about the |
|
16 | 16 | # RhodeCode Enterprise Edition, including its added features, Support services, |
|
17 | 17 | # and proprietary license terms, please see https://rhodecode.com/licenses/ |
|
18 | 18 | |
|
19 | import os | |
|
20 | import random | |
|
21 | import tempfile | |
|
22 | import string | |
|
23 | ||
|
24 | import pytest | |
|
25 | ||
|
26 | from rhodecode.apps.file_store import utils as store_utils | |
|
27 | ||
|
28 | ||
|
29 | @pytest.fixture() | |
|
30 | def file_store_instance(ini_settings): | |
|
31 | config = ini_settings | |
|
32 | f_store = store_utils.get_filestore_backend(config=config, always_init=True) | |
|
33 | return f_store | |
|
34 | ||
|
35 | ||
|
36 | @pytest.fixture | |
|
37 | def random_binary_file(): | |
|
38 | # Generate random binary data | |
|
39 | data = bytearray(random.getrandbits(8) for _ in range(1024 * 512)) # 512 KB of random data | |
|
40 | ||
|
41 | # Create a temporary file | |
|
42 | temp_file = tempfile.NamedTemporaryFile(delete=False) | |
|
43 | filename = temp_file.name | |
|
44 | ||
|
45 | try: | |
|
46 | # Write the random binary data to the file | |
|
47 | temp_file.write(data) | |
|
48 | temp_file.seek(0) # Rewind the file pointer to the beginning | |
|
49 | yield filename, temp_file | |
|
50 | finally: | |
|
51 | # Close and delete the temporary file after the test | |
|
52 | temp_file.close() | |
|
53 | os.remove(filename) | |
|
54 | ||
|
55 | ||
|
56 | def generate_random_filename(length=10): | |
|
57 | return ''.join(random.choices(string.ascii_letters + string.digits, k=length)) No newline at end of file |
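
These fixtures compose into a store round-trip; a minimal hypothetical test body (not part of this changeset) could look like:

    # hedged sketch: exercising the fixtures above
    def test_store_random_file(file_store_instance, random_binary_file):
        filename, temp_file = random_binary_file
        store_uid, metadata = file_store_instance.store(
            generate_random_filename(), temp_file, metadata={'filename': filename})
        assert metadata['size'] == 1024 * 512  # matches the fixture's payload size
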
@@ -1,246 +1,253 b'' | |||
|
1 | 1 | # Copyright (C) 2010-2023 RhodeCode GmbH |
|
2 | 2 | # |
|
3 | 3 | # This program is free software: you can redistribute it and/or modify |
|
4 | 4 | # it under the terms of the GNU Affero General Public License, version 3 |
|
5 | 5 | # (only), as published by the Free Software Foundation. |
|
6 | 6 | # |
|
7 | 7 | # This program is distributed in the hope that it will be useful, |
|
8 | 8 | # but WITHOUT ANY WARRANTY; without even the implied warranty of |
|
9 | 9 | # MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the |
|
10 | 10 | # GNU General Public License for more details. |
|
11 | 11 | # |
|
12 | 12 | # You should have received a copy of the GNU Affero General Public License |
|
13 | 13 | # along with this program. If not, see <http://www.gnu.org/licenses/>. |
|
14 | 14 | # |
|
15 | 15 | # This program is dual-licensed. If you wish to learn more about the |
|
16 | 16 | # RhodeCode Enterprise Edition, including its added features, Support services, |
|
17 | 17 | # and proprietary license terms, please see https://rhodecode.com/licenses/ |
|
18 | ||
|
18 | 19 | import os |
|
20 | ||
|
19 | 21 | import pytest |
|
20 | 22 | |
|
21 | 23 | from rhodecode.lib.ext_json import json |
|
22 | 24 | from rhodecode.model.auth_token import AuthTokenModel |
|
23 | 25 | from rhodecode.model.db import Session, FileStore, Repository, User |
|
24 | from rhodecode.apps.file_store import utils |

26 | from rhodecode.apps.file_store import utils as store_utils | |
|
27 | from rhodecode.apps.file_store import config_keys | |
|
25 | 28 | |
|
26 | 29 | from rhodecode.tests import TestController |
|
27 | 30 | from rhodecode.tests.routes import route_path |
|
28 | 31 | |
|
29 | 32 | |
|
30 | 33 | class TestFileStoreViews(TestController): |
|
31 | 34 | |
|
35 | @pytest.fixture() | |
|
36 | def create_artifact_factory(self, tmpdir, ini_settings): | |
|
37 | ||
|
38 | def factory(user_id, content, f_name='example.txt'): | |
|
39 | ||
|
40 | config = ini_settings | |
|
41 | config[config_keys.backend_type] = config_keys.backend_legacy_filesystem | |
|
42 | ||
|
43 | f_store = store_utils.get_filestore_backend(config) | |
|
44 | ||
|
45 | filesystem_file = os.path.join(str(tmpdir), f_name) | |
|
46 | with open(filesystem_file, 'wt') as f: | |
|
47 | f.write(content) | |
|
48 | ||
|
49 | with open(filesystem_file, 'rb') as f: | |
|
50 | store_uid, metadata = f_store.store(f_name, f, metadata={'filename': f_name}) | |
|
51 | os.remove(filesystem_file) | |
|
52 | ||
|
53 | entry = FileStore.create( | |
|
54 | file_uid=store_uid, filename=metadata["filename"], | |
|
55 | file_hash=metadata["sha256"], file_size=metadata["size"], | |
|
56 | file_display_name='file_display_name', | |
|
57 | file_description='repo artifact `{}`'.format(metadata["filename"]), | |
|
58 | check_acl=True, user_id=user_id, | |
|
59 | ) | |
|
60 | Session().add(entry) | |
|
61 | Session().commit() | |
|
62 | return entry | |
|
63 | return factory | |
|
64 | ||
|
32 | 65 | @pytest.mark.parametrize("fid, content, exists", [ |
|
33 | 66 | ('abcde-0.jpg', "xxxxx", True), |
|
34 | 67 | ('abcde-0.exe', "1234567", True), |
|
35 | 68 | ('abcde-0.jpg', "xxxxx", False), |
|
36 | 69 | ]) |
|
37 | def test_get_files_from_store(self, fid, content, exists, tmpdir, user_util): | |
|
70 | def test_get_files_from_store(self, fid, content, exists, tmpdir, user_util, ini_settings): | |
|
38 | 71 | user = self.log_user() |
|
39 | 72 | user_id = user['user_id'] |
|
40 | 73 | repo_id = user_util.create_repo().repo_id |
|
41 | store_path = self.app._pyramid_settings[config_keys.store_path] | |
|
74 | ||
|
75 | config = ini_settings | |
|
76 | config[config_keys.backend_type] = config_keys.backend_legacy_filesystem | |
|
77 | ||
|
42 | 78 | store_uid = fid |
|
43 | 79 | |
|
44 | 80 | if exists: |
|
45 | 81 | status = 200 |
|
46 | store = utils.get_file_storage({config_keys.store_path: store_path}) |

82 | f_store = store_utils.get_filestore_backend(config) | |
|
47 | 83 | filesystem_file = os.path.join(str(tmpdir), fid) |
|
48 | 84 | with open(filesystem_file, 'wt') as f: |
|
49 | 85 | f.write(content) |
|
50 | 86 | |
|
51 | 87 | with open(filesystem_file, 'rb') as f: |
|
52 | store_uid, metadata = store.save_file(f, fid, extra_metadata={'filename': fid}) |

88 | store_uid, metadata = f_store.store(fid, f, metadata={'filename': fid}) | |
|
89 | os.remove(filesystem_file) | |
|
53 | 90 | |
|
54 | 91 | entry = FileStore.create( |
|
55 | 92 | file_uid=store_uid, filename=metadata["filename"], |
|
56 | 93 | file_hash=metadata["sha256"], file_size=metadata["size"], |
|
57 | 94 | file_display_name='file_display_name', |
|
58 | 95 | file_description='repo artifact `{}`'.format(metadata["filename"]), |
|
59 | 96 | check_acl=True, user_id=user_id, |
|
60 | 97 | scope_repo_id=repo_id |
|
61 | 98 | ) |
|
62 | 99 | Session().add(entry) |
|
63 | 100 | Session().commit() |
|
64 | 101 | |
|
65 | 102 | else: |
|
66 | 103 | status = 404 |
|
67 | 104 | |
|
68 | 105 | response = self.app.get(route_path('download_file', fid=store_uid), status=status) |
|
69 | 106 | |
|
70 | 107 | if exists: |
|
71 | 108 | assert response.text == content |
|
72 | file_store_path = os.path.dirname(store.resolve_name(store_uid, store_path)[1]) | |
|
73 | metadata_file = os.path.join(file_store_path, store_uid + '.meta') | |
|
74 | assert os.path.exists(metadata_file) | |
|
75 | with open(metadata_file, 'rb') as f: | |
|
76 | json_data = json.loads(f.read()) | |
|
77 | 109 | |
|
78 | assert json_data | |
|
79 | assert 'size' in json_data | |
|
110 | metadata = f_store.get_metadata(store_uid) | |
|
111 | ||
|
112 | assert 'size' in metadata | |
|
80 | 113 | |
|
81 | 114 | def test_upload_files_without_content_to_store(self): |
|
82 | 115 | self.log_user() |
|
83 | 116 | response = self.app.post( |
|
84 | 117 | route_path('upload_file'), |
|
85 | 118 | params={'csrf_token': self.csrf_token}, |
|
86 | 119 | status=200) |
|
87 | 120 | |
|
88 | 121 | assert response.json == { |
|
89 | 122 | 'error': 'store_file data field is missing', |
|
90 | 123 | 'access_path': None, |
|
91 | 124 | 'store_fid': None} |
|
92 | 125 | |
|
93 | 126 | def test_upload_files_bogus_content_to_store(self): |
|
94 | 127 | self.log_user() |
|
95 | 128 | response = self.app.post( |
|
96 | 129 | route_path('upload_file'), |
|
97 | 130 | params={'csrf_token': self.csrf_token, 'store_file': 'bogus'}, |
|
98 | 131 | status=200) |
|
99 | 132 | |
|
100 | 133 | assert response.json == { |
|
101 | 134 | 'error': 'filename cannot be read from the data field', |
|
102 | 135 | 'access_path': None, |
|
103 | 136 | 'store_fid': None} |
|
104 | 137 | |
|
105 | 138 | def test_upload_content_to_store(self): |
|
106 | 139 | self.log_user() |
|
107 | 140 | response = self.app.post( |
|
108 | 141 | route_path('upload_file'), |
|
109 | 142 | upload_files=[('store_file', b'myfile.txt', b'SOME CONTENT')], |
|
110 | 143 | params={'csrf_token': self.csrf_token}, |
|
111 | 144 | status=200) |
|
112 | 145 | |
|
113 | 146 | assert response.json['store_fid'] |
|
114 | 147 | |
|
115 | @pytest.fixture() | |
|
116 | def create_artifact_factory(self, tmpdir): | |
|
117 | def factory(user_id, content): | |
|
118 | store_path = self.app._pyramid_settings[config_keys.store_path] | |
|
119 | store = utils.get_file_storage({config_keys.store_path: store_path}) | |
|
120 | fid = 'example.txt' | |
|
121 | ||
|
122 | filesystem_file = os.path.join(str(tmpdir), fid) | |
|
123 | with open(filesystem_file, 'wt') as f: | |
|
124 | f.write(content) | |
|
125 | ||
|
126 | with open(filesystem_file, 'rb') as f: | |
|
127 | store_uid, metadata = store.save_file(f, fid, extra_metadata={'filename': fid}) | |
|
128 | ||
|
129 | entry = FileStore.create( | |
|
130 | file_uid=store_uid, filename=metadata["filename"], | |
|
131 | file_hash=metadata["sha256"], file_size=metadata["size"], | |
|
132 | file_display_name='file_display_name', | |
|
133 | file_description='repo artifact `{}`'.format(metadata["filename"]), | |
|
134 | check_acl=True, user_id=user_id, | |
|
135 | ) | |
|
136 | Session().add(entry) | |
|
137 | Session().commit() | |
|
138 | return entry | |
|
139 | return factory | |
|
140 | ||
|
141 | 148 | def test_download_file_non_scoped(self, user_util, create_artifact_factory): |
|
142 | 149 | user = self.log_user() |
|
143 | 150 | user_id = user['user_id'] |
|
144 | 151 | content = 'HELLO MY NAME IS ARTIFACT !' |
|
145 | 152 | |
|
146 | 153 | artifact = create_artifact_factory(user_id, content) |
|
147 | 154 | file_uid = artifact.file_uid |
|
148 | 155 | response = self.app.get(route_path('download_file', fid=file_uid), status=200) |
|
149 | 156 | assert response.text == content |
|
150 | 157 | |
|
151 | 158 | # log-in to new user and test download again |
|
152 | 159 | user = user_util.create_user(password='qweqwe') |
|
153 | 160 | self.log_user(user.username, 'qweqwe') |
|
154 | 161 | response = self.app.get(route_path('download_file', fid=file_uid), status=200) |
|
155 | 162 | assert response.text == content |
|
156 | 163 | |
|
157 | 164 | def test_download_file_scoped_to_repo(self, user_util, create_artifact_factory): |
|
158 | 165 | user = self.log_user() |
|
159 | 166 | user_id = user['user_id'] |
|
160 | 167 | content = 'HELLO MY NAME IS ARTIFACT !' |
|
161 | 168 | |
|
162 | 169 | artifact = create_artifact_factory(user_id, content) |
|
163 | 170 | # bind to repo |
|
164 | 171 | repo = user_util.create_repo() |
|
165 | 172 | repo_id = repo.repo_id |
|
166 | 173 | artifact.scope_repo_id = repo_id |
|
167 | 174 | Session().add(artifact) |
|
168 | 175 | Session().commit() |
|
169 | 176 | |
|
170 | 177 | file_uid = artifact.file_uid |
|
171 | 178 | response = self.app.get(route_path('download_file', fid=file_uid), status=200) |
|
172 | 179 | assert response.text == content |
|
173 | 180 | |
|
174 | 181 | # log-in to new user and test download again |
|
175 | 182 | user = user_util.create_user(password='qweqwe') |
|
176 | 183 | self.log_user(user.username, 'qweqwe') |
|
177 | 184 | response = self.app.get(route_path('download_file', fid=file_uid), status=200) |
|
178 | 185 | assert response.text == content |
|
179 | 186 | |
|
180 | 187 | # forbid user the rights to repo |
|
181 | 188 | repo = Repository.get(repo_id) |
|
182 | 189 | user_util.grant_user_permission_to_repo(repo, user, 'repository.none') |
|
183 | 190 | self.app.get(route_path('download_file', fid=file_uid), status=404) |
|
184 | 191 | |
|
185 | 192 | def test_download_file_scoped_to_user(self, user_util, create_artifact_factory): |
|
186 | 193 | user = self.log_user() |
|
187 | 194 | user_id = user['user_id'] |
|
188 | 195 | content = 'HELLO MY NAME IS ARTIFACT !' |
|
189 | 196 | |
|
190 | 197 | artifact = create_artifact_factory(user_id, content) |
|
191 | 198 | # bind to user |
|
192 | 199 | user = user_util.create_user(password='qweqwe') |
|
193 | 200 | |
|
194 | 201 | artifact.scope_user_id = user.user_id |
|
195 | 202 | Session().add(artifact) |
|
196 | 203 | Session().commit() |
|
197 | 204 | |
|
198 | 205 | # artifact creator doesn't have access since it's bind to another user |
|
199 | 206 | file_uid = artifact.file_uid |
|
200 | 207 | self.app.get(route_path('download_file', fid=file_uid), status=404) |
|
201 | 208 | |
|
202 | 209 | # log-in to new user and test download again, should be ok since we're bind to this artifact |
|
203 | 210 | self.log_user(user.username, 'qweqwe') |
|
204 | 211 | response = self.app.get(route_path('download_file', fid=file_uid), status=200) |
|
205 | 212 | assert response.text == content |
|
206 | 213 | |
|
207 | 214 | def test_download_file_scoped_to_repo_with_bad_token(self, user_util, create_artifact_factory): |
|
208 | 215 | user_id = User.get_first_super_admin().user_id |
|
209 | 216 | content = 'HELLO MY NAME IS ARTIFACT !' |
|
210 | 217 | |
|
211 | 218 | artifact = create_artifact_factory(user_id, content) |
|
212 | 219 | # bind to repo |
|
213 | 220 | repo = user_util.create_repo() |
|
214 | 221 | repo_id = repo.repo_id |
|
215 | 222 | artifact.scope_repo_id = repo_id |
|
216 | 223 | Session().add(artifact) |
|
217 | 224 | Session().commit() |
|
218 | 225 | |
|
219 | 226 | file_uid = artifact.file_uid |
|
220 | 227 | self.app.get(route_path('download_file_by_token', |
|
221 | 228 | _auth_token='bogus', fid=file_uid), status=302) |
|
222 | 229 | |
|
223 | 230 | def test_download_file_scoped_to_repo_with_token(self, user_util, create_artifact_factory): |
|
224 | 231 | user = User.get_first_super_admin() |
|
225 | 232 | AuthTokenModel().create(user, 'test artifact token', |
|
226 | 233 | role=AuthTokenModel.cls.ROLE_ARTIFACT_DOWNLOAD) |
|
227 | 234 | |
|
228 | 235 | user = User.get_first_super_admin() |
|
229 | 236 | artifact_token = user.artifact_token |
|
230 | 237 | |
|
231 | 238 | user_id = User.get_first_super_admin().user_id |
|
232 | 239 | content = 'HELLO MY NAME IS ARTIFACT !' |
|
233 | 240 | |
|
234 | 241 | artifact = create_artifact_factory(user_id, content) |
|
235 | 242 | # bind to repo |
|
236 | 243 | repo = user_util.create_repo() |
|
237 | 244 | repo_id = repo.repo_id |
|
238 | 245 | artifact.scope_repo_id = repo_id |
|
239 | 246 | Session().add(artifact) |
|
240 | 247 | Session().commit() |
|
241 | 248 | |
|
242 | 249 | file_uid = artifact.file_uid |
|
243 | 250 | response = self.app.get( |
|
244 | 251 | route_path('download_file_by_token', |
|
245 | 252 | _auth_token=artifact_token, fid=file_uid), status=200) |
|
246 | 253 | assert response.text == content |
@@ -1,55 +1,145 b'' | |||
|
1 | 1 | # Copyright (C) 2016-2023 RhodeCode GmbH |
|
2 | 2 | # |
|
3 | 3 | # This program is free software: you can redistribute it and/or modify |
|
4 | 4 | # it under the terms of the GNU Affero General Public License, version 3 |
|
5 | 5 | # (only), as published by the Free Software Foundation. |
|
6 | 6 | # |
|
7 | 7 | # This program is distributed in the hope that it will be useful, |
|
8 | 8 | # but WITHOUT ANY WARRANTY; without even the implied warranty of |
|
9 | 9 | # MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the |
|
10 | 10 | # GNU General Public License for more details. |
|
11 | 11 | # |
|
12 | 12 | # You should have received a copy of the GNU Affero General Public License |
|
13 | 13 | # along with this program. If not, see <http://www.gnu.org/licenses/>. |
|
14 | 14 | # |
|
15 | 15 | # This program is dual-licensed. If you wish to learn more about the |
|
16 | 16 | # RhodeCode Enterprise Edition, including its added features, Support services, |
|
17 | 17 | # and proprietary license terms, please see https://rhodecode.com/licenses/ |
|
18 | 18 | |
|
19 | 19 | import io |
|
20 | 20 | import uuid |
|
21 | 21 | import pathlib |
|
22 | import s3fs | |
|
23 | ||
|
24 | from rhodecode.lib.hash_utils import sha256_safe | |
|
25 | from rhodecode.apps.file_store import config_keys | |
|
26 | ||
|
27 | ||
|
28 | file_store_meta = None | |
|
29 | ||
|
30 | ||
|
31 | def get_filestore_config(config) -> dict: | |
|
32 | ||
|
33 | final_config = {} | |
|
34 | ||
|
35 | for k, v in config.items(): | |
|
36 | if k.startswith('file_store'): | |
|
37 | final_config[k] = v | |
|
38 | ||
|
39 | return final_config | |
|
22 | 40 | |
|
23 | 41 | |
|
24 | def get_file_storage(settings): | |
|
25 | from rhodecode.apps.file_store.backends.local_store import LocalFileStorage | |
|
26 | from rhodecode.apps.file_store import config_keys | |
|
27 | store_path = settings.get(config_keys.store_path) | |
|
28 | return LocalFileStorage(base_path=store_path) | |
|
42 | def get_filestore_backend(config, always_init=False): | |
|
43 | """ | |
|
44 | ||
|
45 | usage:: | |
|
46 | from rhodecode.apps.file_store import get_filestore_backend | |
|
47 | f_store = get_filestore_backend(config=CONFIG) | |
|
48 | ||
|
49 | :param config: | |
|
50 | :param always_init: | |
|
51 | :return: | |
|
52 | """ | |
|
53 | ||
|
54 | global file_store_meta | |
|
55 | if file_store_meta is not None and not always_init: | |
|
56 | return file_store_meta | |
|
57 | ||
|
58 | config = get_filestore_config(config) | |
|
59 | backend = config[config_keys.backend_type] | |
|
60 | ||
|
61 | match backend: | |
|
62 | case config_keys.backend_legacy_filesystem: | |
|
63 | # Legacy backward compatible storage | |
|
64 | from rhodecode.apps.file_store.backends.filesystem_legacy import LegacyFileSystemBackend | |
|
65 | d_cache = LegacyFileSystemBackend( | |
|
66 | settings=config | |
|
67 | ) | |
|
68 | case config_keys.backend_filesystem: | |
|
69 | from rhodecode.apps.file_store.backends.filesystem import FileSystemBackend | |
|
70 | d_cache = FileSystemBackend( | |
|
71 | settings=config | |
|
72 | ) | |
|
73 | case config_keys.backend_objectstore: | |
|
74 | from rhodecode.apps.file_store.backends.objectstore import ObjectStoreBackend | |
|
75 | d_cache = ObjectStoreBackend( | |
|
76 | settings=config | |
|
77 | ) | |
|
78 | case _: | |
|
79 | raise ValueError( | |
|
80 | f'file_store.backend.type only supports "{config_keys.backend_types}" got {backend}' | |
|
81 | ) | |
|
82 | ||
|
83 | cache_meta = d_cache | |
|
84 | return cache_meta | |
|
29 | 85 | |
|
30 | 86 | |
|
31 | 87 | def splitext(filename): |
|
32 | ext = ''.join(pathlib.Path(filename).suffixes) | |
|
88 | final_ext = [] | |
|
89 | for suffix in pathlib.Path(filename).suffixes: | |
|
90 | if not suffix.isascii(): | |
|
91 | continue | |
|
92 | ||
|
93 | suffix = " ".join(suffix.split()).replace(" ", "") | |
|
94 | final_ext.append(suffix) | |
|
95 | ext = ''.join(final_ext) | |
|
33 | 96 | return filename, ext |
|
34 | 97 | |
|
35 | 98 | |
|
36 | def uid_filename(filename, randomized=True): | |
|
99 | def get_uid_filename(filename, randomized=True): | |
|
37 | 100 | """ |
|
38 | 101 | Generates a randomized or stable (uuid) filename, |
|
39 | 102 | preserving the original extension. |
|
40 | 103 | |
|
41 | 104 | :param filename: the original filename |
|
42 | 105 | :param randomized: define if filename should be stable (sha1 based) or randomized |
|
43 | 106 | """ |
|
44 | 107 | |
|
45 | 108 | _, ext = splitext(filename) |
|
46 | 109 | if randomized: |
|
47 | 110 | uid = uuid.uuid4() |
|
48 | 111 | else: |
|
49 | hash_key = '{}.{}'.format(filename, 'store') | |
|
112 | store_suffix = "store" | |
|
113 | hash_key = f'{filename}.{store_suffix}' | |
|
50 | 114 | uid = uuid.uuid5(uuid.NAMESPACE_URL, hash_key) |
|
51 | 115 | return str(uid) + ext.lower() |
|
52 | 116 | |
|
53 | 117 | |
|
54 | 118 | def bytes_to_file_obj(bytes_data): |
|
55 | return io.StringIO(bytes_data) |

119 | return io.BytesIO(bytes_data) | |
|
120 | ||
|
121 | ||
|
122 | class ShardFileReader: | |
|
123 | ||
|
124 | def __init__(self, file_like_reader): | |
|
125 | self._file_like_reader = file_like_reader | |
|
126 | ||
|
127 | def __getattr__(self, item): | |
|
128 | if isinstance(self._file_like_reader, s3fs.core.S3File): | |
|
129 | match item: | |
|
130 | case 'name': | |
|
131 | # S3 FileWrapper doesn't support name attribute, and we use it | |
|
132 | return self._file_like_reader.full_name | |
|
133 | case _: | |
|
134 | return getattr(self._file_like_reader, item) | |
|
135 | else: | |
|
136 | return getattr(self._file_like_reader, item) | |
|
137 | ||
|
138 | ||
|
139 | def archive_iterator(_reader, block_size: int = 4096 * 512): | |
|
140 | # default block_size: 4096 * 512 = 2MB |
|
141 | while 1: | |
|
142 | data = _reader.read(block_size) | |
|
143 | if not data: | |
|
144 | break | |
|
145 | yield data |
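
The helpers above define the new backend surface used throughout this changeset: `get_filestore_backend()` memoizes one backend instance per process, `store()`/`fetch()` move content in and out, and `archive_iterator()` streams a reader in fixed blocks (2 MB by default). A hedged round-trip sketch; `CONFIG` stands in for the parsed ini settings and is not defined here:

    import io
    from rhodecode.apps.file_store import utils as store_utils

    f_store = store_utils.get_filestore_backend(config=CONFIG)  # CONFIG: assumed settings dict

    # store() returns the generated uid plus metadata (filename, sha256, size, ...)
    store_uid, metadata = f_store.store(
        'report.txt', io.BytesIO(b'example data'), metadata={'filename': 'report.txt'})

    # fetch() returns a reader; archive_iterator() chunks it for streaming responses
    reader, _meta = f_store.fetch(store_uid)
    for chunk in store_utils.archive_iterator(reader):
        pass  # e.g. write chunk to a response body
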
@@ -1,200 +1,197 b'' | |||
|
1 | 1 | # Copyright (C) 2016-2023 RhodeCode GmbH |
|
2 | 2 | # |
|
3 | 3 | # This program is free software: you can redistribute it and/or modify |
|
4 | 4 | # it under the terms of the GNU Affero General Public License, version 3 |
|
5 | 5 | # (only), as published by the Free Software Foundation. |
|
6 | 6 | # |
|
7 | 7 | # This program is distributed in the hope that it will be useful, |
|
8 | 8 | # but WITHOUT ANY WARRANTY; without even the implied warranty of |
|
9 | 9 | # MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the |
|
10 | 10 | # GNU General Public License for more details. |
|
11 | 11 | # |
|
12 | 12 | # You should have received a copy of the GNU Affero General Public License |
|
13 | 13 | # along with this program. If not, see <http://www.gnu.org/licenses/>. |
|
14 | 14 | # |
|
15 | 15 | # This program is dual-licensed. If you wish to learn more about the |
|
16 | 16 | # RhodeCode Enterprise Edition, including its added features, Support services, |
|
17 | 17 | # and proprietary license terms, please see https://rhodecode.com/licenses/ |
|
18 | 18 | import logging |
|
19 | 19 | |
|
20 | ||
|
21 | from pyramid.response import FileResponse | |
|
20 | from pyramid.response import Response | |
|
22 | 21 | from pyramid.httpexceptions import HTTPFound, HTTPNotFound |
|
23 | 22 | |
|
24 | 23 | from rhodecode.apps._base import BaseAppView |
|
25 | from rhodecode.apps.file_store import utils | |
|
24 | from rhodecode.apps.file_store import utils as store_utils | |
|
26 | 25 | from rhodecode.apps.file_store.exceptions import ( |
|
27 | 26 | FileNotAllowedException, FileOverSizeException) |
|
28 | 27 | |
|
29 | 28 | from rhodecode.lib import helpers as h |
|
30 | 29 | from rhodecode.lib import audit_logger |
|
31 | 30 | from rhodecode.lib.auth import ( |
|
32 | 31 | CSRFRequired, NotAnonymous, HasRepoPermissionAny, HasRepoGroupPermissionAny, |
|
33 | 32 | LoginRequired) |
|
33 | from rhodecode.lib.str_utils import header_safe_str | |
|
34 | 34 | from rhodecode.lib.vcs.conf.mtypes import get_mimetypes_db |
|
35 | 35 | from rhodecode.model.db import Session, FileStore, UserApiKeys |
|
36 | 36 | |
|
37 | 37 | log = logging.getLogger(__name__) |
|
38 | 38 | |
|
39 | 39 | |
|
40 | 40 | class FileStoreView(BaseAppView): |
|
41 | 41 | upload_key = 'store_file' |
|
42 | 42 | |
|
43 | 43 | def load_default_context(self): |
|
44 | 44 | c = self._get_local_tmpl_context() |
|
45 | self.storage = utils.get_file_storage(self.request.registry.settings) |

45 | self.f_store = store_utils.get_filestore_backend(self.request.registry.settings) | |
|
46 | 46 | return c |
|
47 | 47 | |
|
48 | 48 | def _guess_type(self, file_name): |
|
49 | 49 | """ |
|
50 | 50 | Our own type guesser for mimetypes using the rich DB |
|
51 | 51 | """ |
|
52 | 52 | if not hasattr(self, 'db'): |
|
53 | 53 | self.db = get_mimetypes_db() |
|
54 | 54 | _content_type, _encoding = self.db.guess_type(file_name, strict=False) |
|
55 | 55 | return _content_type, _encoding |
|
56 | 56 | |
|
57 | 57 | def _serve_file(self, file_uid): |
|
58 | if not self.storage.exists(file_uid): |

59 | store_path = self.storage.store_path(file_uid) |

60 | log.debug('File with FID:%s not found in the store under `%s`', |

60 | log.warning('File with FID:%s not found in the store under `%s`', | |
|
61 | file_uid, store_path) | |
|
62 | 62 | raise HTTPNotFound() |
|
63 | 63 | |
|
64 | 64 | db_obj = FileStore.get_by_store_uid(file_uid, safe=True) |
|
65 | 65 | if not db_obj: |
|
66 | 66 | raise HTTPNotFound() |
|
67 | 67 | |
|
68 | 68 | # private upload for user |
|
69 | 69 | if db_obj.check_acl and db_obj.scope_user_id: |
|
70 | 70 | log.debug('Artifact: checking scope access for bound artifact user: `%s`', |
|
71 | 71 | db_obj.scope_user_id) |
|
72 | 72 | user = db_obj.user |
|
73 | 73 | if self._rhodecode_db_user.user_id != user.user_id: |
|
74 | 74 | log.warning('Access to file store object forbidden') |
|
75 | 75 | raise HTTPNotFound() |
|
76 | 76 | |
|
77 | 77 | # scoped to repository permissions |
|
78 | 78 | if db_obj.check_acl and db_obj.scope_repo_id: |
|
79 | 79 | log.debug('Artifact: checking scope access for bound artifact repo: `%s`', |
|
80 | 80 | db_obj.scope_repo_id) |
|
81 | 81 | repo = db_obj.repo |
|
82 | 82 | perm_set = ['repository.read', 'repository.write', 'repository.admin'] |
|
83 | 83 | has_perm = HasRepoPermissionAny(*perm_set)(repo.repo_name, 'FileStore check') |
|
84 | 84 | if not has_perm: |
|
85 | 85 | log.warning('Access to file store object `%s` forbidden', file_uid) |
|
86 | 86 | raise HTTPNotFound() |
|
87 | 87 | |
|
88 | 88 | # scoped to repository group permissions |
|
89 | 89 | if db_obj.check_acl and db_obj.scope_repo_group_id: |
|
90 | 90 | log.debug('Artifact: checking scope access for bound artifact repo group: `%s`', |
|
91 | 91 | db_obj.scope_repo_group_id) |
|
92 | 92 | repo_group = db_obj.repo_group |
|
93 | 93 | perm_set = ['group.read', 'group.write', 'group.admin'] |
|
94 | 94 | has_perm = HasRepoGroupPermissionAny(*perm_set)(repo_group.group_name, 'FileStore check') |
|
95 | 95 | if not has_perm: |
|
96 | 96 | log.warning('Access to file store object `%s` forbidden', file_uid) |
|
97 | 97 | raise HTTPNotFound() |
|
98 | 98 | |
|
99 | 99 | FileStore.bump_access_counter(file_uid) |
|
100 | 100 | |
|
101 | file_path = self.storage.store_path(file_uid) | |
|
101 | file_name = db_obj.file_display_name | |
|
102 | 102 | content_type = 'application/octet-stream' |
|
103 | content_encoding = None | |
|
104 | 103 | |
|
105 | _content_type, _encoding = self._guess_type(file_path) |

104 | _content_type, _encoding = self._guess_type(file_name) | |
|
106 | 105 | if _content_type: |
|
107 | 106 | content_type = _content_type |
|
108 | 107 | |
|
109 | 108 | # For file store we don't submit any session data, this logic tells the |
|
110 | 109 | # Session lib to skip it |
|
111 | 110 | setattr(self.request, '_file_response', True) |
|
112 | response = FileResponse( | |
|
113 | file_path, request=self.request, | |
|
114 | content_type=content_type, content_encoding=content_encoding) | |
|
111 | reader, _meta = self.f_store.fetch(file_uid) | |
|
115 | 112 | |
|
116 | file_name = db_obj.file_display_name | |
|
113 | response = Response(app_iter=store_utils.archive_iterator(reader)) | |
|
117 | 114 | |
|
118 | response.headers["Content-Disposition"] = ( | |
|
119 | f'attachment; filename="{str(file_name)}"' |

120 | ) | |
|
115 | response.content_type = str(content_type) | |
|
116 | response.content_disposition = f'attachment; filename="{header_safe_str(file_name)}"' | |
|
117 | ||
|
121 | 118 | response.headers["X-RC-Artifact-Id"] = str(db_obj.file_store_id) |
|
122 | response.headers["X-RC-Artifact-Desc"] = str(db_obj.file_description) | |
|
119 | response.headers["X-RC-Artifact-Desc"] = header_safe_str(db_obj.file_description) | |
|
123 | 120 | response.headers["X-RC-Artifact-Sha256"] = str(db_obj.file_hash) |
|
124 | 121 | return response |
|
125 | 122 | |
|
126 | 123 | @LoginRequired() |
|
127 | 124 | @NotAnonymous() |
|
128 | 125 | @CSRFRequired() |
|
129 | 126 | def upload_file(self): |
|
130 | 127 | self.load_default_context() |
|
131 | 128 | file_obj = self.request.POST.get(self.upload_key) |
|
132 | 129 | |
|
133 | 130 | if file_obj is None: |
|
134 | 131 | return {'store_fid': None, |
|
135 | 132 | 'access_path': None, |
|
136 | 133 | 'error': f'{self.upload_key} data field is missing'} |
|
137 | 134 | |
|
138 | 135 | if not hasattr(file_obj, 'filename'): |
|
139 | 136 | return {'store_fid': None, |
|
140 | 137 | 'access_path': None, |
|
141 | 138 | 'error': 'filename cannot be read from the data field'} |
|
142 | 139 | |
|
143 | 140 | filename = file_obj.filename |
|
144 | 141 | |
|
145 | 142 | metadata = { |
|
146 | 143 | 'user_uploaded': {'username': self._rhodecode_user.username, |
|
147 | 144 | 'user_id': self._rhodecode_user.user_id, |
|
148 | 145 | 'ip': self._rhodecode_user.ip_addr}} |
|
149 | 146 | try: |
|
150 | store_uid, metadata = self.storage.save_file( |

151 | file_obj.file, filename, extra_metadata=metadata) |

147 | store_uid, metadata = self.f_store.store( | |
|
148 | filename, file_obj.file, extra_metadata=metadata) | |
|
152 | 149 | except FileNotAllowedException: |
|
153 | 150 | return {'store_fid': None, |
|
154 | 151 | 'access_path': None, |
|
155 | 152 | 'error': f'File {filename} is not allowed.'} |
|
156 | 153 | |
|
157 | 154 | except FileOverSizeException: |
|
158 | 155 | return {'store_fid': None, |
|
159 | 156 | 'access_path': None, |
|
160 | 157 | 'error': f'File {filename} is exceeding allowed limit.'} |
|
161 | 158 | |
|
162 | 159 | try: |
|
163 | 160 | entry = FileStore.create( |
|
164 | 161 | file_uid=store_uid, filename=metadata["filename"], |
|
165 | 162 | file_hash=metadata["sha256"], file_size=metadata["size"], |
|
166 | 163 | file_description='upload attachment', |
|
167 | 164 | check_acl=False, user_id=self._rhodecode_user.user_id |
|
168 | 165 | ) |
|
169 | 166 | Session().add(entry) |
|
170 | 167 | Session().commit() |
|
171 | 168 | log.debug('Stored upload in DB as %s', entry) |
|
172 | 169 | except Exception: |
|
173 | 170 | log.exception('Failed to store file %s', filename) |
|
174 | 171 | return {'store_fid': None, |
|
175 | 172 | 'access_path': None, |
|
176 | 173 | 'error': f'File {filename} failed to store in DB.'} |
|
177 | 174 | |
|
178 | 175 | return {'store_fid': store_uid, |
|
179 | 176 | 'access_path': h.route_path('download_file', fid=store_uid)} |
|
180 | 177 | |
|
181 | 178 | # ACL is checked by scopes, if no scope the file is accessible to all |
|
182 | 179 | def download_file(self): |
|
183 | 180 | self.load_default_context() |
|
184 | 181 | file_uid = self.request.matchdict['fid'] |
|
185 | log.debug('Requesting FID:%s from store %s', file_uid, self.storage) |

182 | log.debug('Requesting FID:%s from store %s', file_uid, self.f_store) | |
|
186 | 183 | return self._serve_file(file_uid) |
|
187 | 184 | |
|
188 | 185 | # in addition to @LoginRequired ACL is checked by scopes |
|
189 | 186 | @LoginRequired(auth_token_access=[UserApiKeys.ROLE_ARTIFACT_DOWNLOAD]) |
|
190 | 187 | @NotAnonymous() |
|
191 | 188 | def download_file_by_token(self): |
|
192 | 189 | """ |
|
193 | 190 | Special view that allows to access the download file by special URL that |
|
194 | 191 | is stored inside the URL. |
|
195 | 192 | |
|
196 | 193 | http://example.com/_file_store/token-download/TOKEN/FILE_UID |
|
197 | 194 | """ |
|
198 | 195 | self.load_default_context() |
|
199 | 196 | file_uid = self.request.matchdict['fid'] |
|
200 | 197 | return self._serve_file(file_uid) |
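
`download_file_by_token` above allows session-less artifact downloads; the URL shape comes from its docstring and the route pattern, and the tests earlier in this changeset confirm that a bad token answers with a 302 redirect. A hedged client-side sketch with purely illustrative host, token, and uid values:

    import urllib.request

    host = 'https://code.example.com'   # illustrative
    auth_token = 'ARTIFACT_TOKEN'       # an auth token with the artifact-download role
    file_uid = 'FILE_UID'               # illustrative

    url = f'{host}/_file_store/token-download/{auth_token}/{file_uid}'
    with urllib.request.urlopen(url) as resp:
        payload = resp.read()
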
@@ -1,830 +1,830 b'' | |||
|
1 | 1 | # Copyright (C) 2010-2023 RhodeCode GmbH |
|
2 | 2 | # |
|
3 | 3 | # This program is free software: you can redistribute it and/or modify |
|
4 | 4 | # it under the terms of the GNU Affero General Public License, version 3 |
|
5 | 5 | # (only), as published by the Free Software Foundation. |
|
6 | 6 | # |
|
7 | 7 | # This program is distributed in the hope that it will be useful, |
|
8 | 8 | # but WITHOUT ANY WARRANTY; without even the implied warranty of |
|
9 | 9 | # MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the |
|
10 | 10 | # GNU General Public License for more details. |
|
11 | 11 | # |
|
12 | 12 | # You should have received a copy of the GNU Affero General Public License |
|
13 | 13 | # along with this program. If not, see <http://www.gnu.org/licenses/>. |
|
14 | 14 | # |
|
15 | 15 | # This program is dual-licensed. If you wish to learn more about the |
|
16 | 16 | # RhodeCode Enterprise Edition, including its added features, Support services, |
|
17 | 17 | # and proprietary license terms, please see https://rhodecode.com/licenses/ |
|
18 | 18 | |
|
19 | 19 | import logging |
|
20 | 20 | import collections |
|
21 | 21 | |
|
22 | 22 | from pyramid.httpexceptions import ( |
|
23 | 23 | HTTPNotFound, HTTPBadRequest, HTTPFound, HTTPForbidden, HTTPConflict) |
|
24 | 24 | from pyramid.renderers import render |
|
25 | 25 | from pyramid.response import Response |
|
26 | 26 | |
|
27 | 27 | from rhodecode.apps._base import RepoAppView |
|
28 | 28 | from rhodecode.apps.file_store import utils as store_utils |
|
29 | 29 | from rhodecode.apps.file_store.exceptions import FileNotAllowedException, FileOverSizeException |
|
30 | 30 | |
|
31 | 31 | from rhodecode.lib import diffs, codeblocks, channelstream |
|
32 | 32 | from rhodecode.lib.auth import ( |
|
33 | 33 | LoginRequired, HasRepoPermissionAnyDecorator, NotAnonymous, CSRFRequired) |
|
34 | 34 | from rhodecode.lib import ext_json |
|
35 | 35 | from collections import OrderedDict |
|
36 | 36 | from rhodecode.lib.diffs import ( |
|
37 | 37 | cache_diff, load_cached_diff, diff_cache_exist, get_diff_context, |
|
38 | 38 | get_diff_whitespace_flag) |
|
39 | 39 | from rhodecode.lib.exceptions import StatusChangeOnClosedPullRequestError, CommentVersionMismatch |
|
40 | 40 | import rhodecode.lib.helpers as h |
|
41 | 41 | from rhodecode.lib.utils2 import str2bool, StrictAttributeDict, safe_str |
|
42 | 42 | from rhodecode.lib.vcs.backends.base import EmptyCommit |
|
43 | 43 | from rhodecode.lib.vcs.exceptions import ( |
|
44 | 44 | RepositoryError, CommitDoesNotExistError) |
|
45 | 45 | from rhodecode.model.db import ChangesetComment, ChangesetStatus, FileStore, \ |
|
46 | 46 | ChangesetCommentHistory |
|
47 | 47 | from rhodecode.model.changeset_status import ChangesetStatusModel |
|
48 | 48 | from rhodecode.model.comment import CommentsModel |
|
49 | 49 | from rhodecode.model.meta import Session |
|
50 | 50 | from rhodecode.model.settings import VcsSettingsModel |
|
51 | 51 | |
|
52 | 52 | log = logging.getLogger(__name__) |
|
53 | 53 | |
|
54 | 54 | |
|
55 | 55 | def _update_with_GET(params, request): |
|
56 | 56 | for k in ['diff1', 'diff2', 'diff']: |
|
57 | 57 | params[k] += request.GET.getall(k) |
|
58 | 58 | |
|
59 | 59 | |
|
60 | 60 | class RepoCommitsView(RepoAppView): |
|
61 | 61 | def load_default_context(self): |
|
62 | 62 | c = self._get_local_tmpl_context(include_app_defaults=True) |
|
63 | 63 | c.rhodecode_repo = self.rhodecode_vcs_repo |
|
64 | 64 | |
|
65 | 65 | return c |
|
66 | 66 | |
|
67 | 67 | def _is_diff_cache_enabled(self, target_repo): |
|
68 | 68 | caching_enabled = self._get_general_setting( |
|
69 | 69 | target_repo, 'rhodecode_diff_cache') |
|
70 | 70 | log.debug('Diff caching enabled: %s', caching_enabled) |
|
71 | 71 | return caching_enabled |
|
72 | 72 | |
|
73 | 73 | def _commit(self, commit_id_range, method): |
|
74 | 74 | _ = self.request.translate |
|
75 | 75 | c = self.load_default_context() |
|
76 | 76 | c.fulldiff = self.request.GET.get('fulldiff') |
|
77 | 77 | redirect_to_combined = str2bool(self.request.GET.get('redirect_combined')) |
|
78 | 78 | |
|
79 | 79 | # fetch global flags of ignore ws or context lines |
|
80 | 80 | diff_context = get_diff_context(self.request) |
|
81 | 81 | hide_whitespace_changes = get_diff_whitespace_flag(self.request) |
|
82 | 82 | |
|
83 | 83 | # diff_limit will cut off the whole diff if the limit is applied |
|
84 | 84 | # otherwise it will just hide the big files from the front-end |
|
85 | 85 | diff_limit = c.visual.cut_off_limit_diff |
|
86 | 86 | file_limit = c.visual.cut_off_limit_file |
|
87 | 87 | |
|
88 | 88 | # get ranges of commit ids if present |
|
89 | 89 | commit_range = commit_id_range.split('...')[:2] |
|
90 | 90 | |
|
91 | 91 | try: |
|
92 | 92 | pre_load = ['affected_files', 'author', 'branch', 'date', |
|
93 | 93 | 'message', 'parents'] |
|
94 | 94 | if self.rhodecode_vcs_repo.alias == 'hg': |
|
95 | 95 | pre_load += ['hidden', 'obsolete', 'phase'] |
|
96 | 96 | |
|
97 | 97 | if len(commit_range) == 2: |
|
98 | 98 | commits = self.rhodecode_vcs_repo.get_commits( |
|
99 | 99 | start_id=commit_range[0], end_id=commit_range[1], |
|
100 | 100 | pre_load=pre_load, translate_tags=False) |
|
101 | 101 | commits = list(commits) |
|
102 | 102 | else: |
|
103 | 103 | commits = [self.rhodecode_vcs_repo.get_commit( |
|
104 | 104 | commit_id=commit_id_range, pre_load=pre_load)] |
|
105 | 105 | |
|
106 | 106 | c.commit_ranges = commits |
|
107 | 107 | if not c.commit_ranges: |
|
108 | 108 | raise RepositoryError('The commit range returned an empty result') |
|
109 | 109 | except CommitDoesNotExistError as e: |
|
110 | 110 | msg = _('No such commit exists. Org exception: `{}`').format(safe_str(e)) |
|
111 | 111 | h.flash(msg, category='error') |
|
112 | 112 | raise HTTPNotFound() |
|
113 | 113 | except Exception: |
|
114 | 114 | log.exception("General failure") |
|
115 | 115 | raise HTTPNotFound() |
|
116 | 116 | single_commit = len(c.commit_ranges) == 1 |
|
117 | 117 | |
|
118 | 118 | if redirect_to_combined and not single_commit: |
|
119 | 119 | source_ref = getattr(c.commit_ranges[0].parents[0] |
|
120 | 120 | if c.commit_ranges[0].parents else h.EmptyCommit(), 'raw_id') |
|
121 | 121 | target_ref = c.commit_ranges[-1].raw_id |
|
122 | 122 | next_url = h.route_path( |
|
123 | 123 | 'repo_compare', |
|
124 | 124 | repo_name=c.repo_name, |
|
125 | 125 | source_ref_type='rev', |
|
126 | 126 | source_ref=source_ref, |
|
127 | 127 | target_ref_type='rev', |
|
128 | 128 | target_ref=target_ref) |
|
129 | 129 | raise HTTPFound(next_url) |
|
130 | 130 | |
|
131 | 131 | c.changes = OrderedDict() |
|
132 | 132 | c.lines_added = 0 |
|
133 | 133 | c.lines_deleted = 0 |
|
134 | 134 | |
|
135 | 135 | # auto collapse if we have more than limit |
|
136 | 136 | collapse_limit = diffs.DiffProcessor._collapse_commits_over |
|
137 | 137 | c.collapse_all_commits = len(c.commit_ranges) > collapse_limit |
|
138 | 138 | |
|
139 | 139 | c.commit_statuses = ChangesetStatus.STATUSES |
|
140 | 140 | c.inline_comments = [] |
|
141 | 141 | c.files = [] |
|
142 | 142 | |
|
143 | 143 | c.comments = [] |
|
144 | 144 | c.unresolved_comments = [] |
|
145 | 145 | c.resolved_comments = [] |
|
146 | 146 | |
|
147 | 147 | # Single commit |
|
148 | 148 | if single_commit: |
|
149 | 149 | commit = c.commit_ranges[0] |
|
150 | 150 | c.comments = CommentsModel().get_comments( |
|
151 | 151 | self.db_repo.repo_id, |
|
152 | 152 | revision=commit.raw_id) |
|
153 | 153 | |
|
154 | 154 | # comments from PR |
|
155 | 155 | statuses = ChangesetStatusModel().get_statuses( |
|
156 | 156 | self.db_repo.repo_id, commit.raw_id, |
|
157 | 157 | with_revisions=True) |
|
158 | 158 | |
|
159 | 159 | prs = set() |
|
160 | 160 | reviewers = list() |
|
161 | 161 | reviewers_duplicates = set() # to not have duplicates from multiple votes |
|
162 | 162 | for c_status in statuses: |
|
163 | 163 | |
|
164 | 164 | # extract associated pull-requests from votes |
|
165 | 165 | if c_status.pull_request: |
|
166 | 166 | prs.add(c_status.pull_request) |
|
167 | 167 | |
|
168 | 168 | # extract reviewers |
|
169 | 169 | _user_id = c_status.author.user_id |
|
170 | 170 | if _user_id not in reviewers_duplicates: |
|
171 | 171 | reviewers.append( |
|
172 | 172 | StrictAttributeDict({ |
|
173 | 173 | 'user': c_status.author, |
|
174 | 174 | |
|
175 | 175 | # fake attributes for commit page that we don't have |
|
176 | 176 | # but we share the display with PR page |
|
177 | 177 | 'mandatory': False, |
|
178 | 178 | 'reasons': [], |
|
179 | 179 | 'rule_user_group_data': lambda: None |
|
180 | 180 | }) |
|
181 | 181 | ) |
|
182 | 182 | reviewers_duplicates.add(_user_id) |
|
183 | 183 | |
|
184 | 184 | c.reviewers_count = len(reviewers) |
|
185 | 185 | c.observers_count = 0 |
|
186 | 186 | |
|
187 | 187 | # from associated statuses, check the pull requests, and |
|
188 | 188 | # show comments from them |
|
189 | 189 | for pr in prs: |
|
190 | 190 | c.comments.extend(pr.comments) |
|
191 | 191 | |
|
192 | 192 | c.unresolved_comments = CommentsModel()\ |
|
193 | 193 | .get_commit_unresolved_todos(commit.raw_id) |
|
194 | 194 | c.resolved_comments = CommentsModel()\ |
|
195 | 195 | .get_commit_resolved_todos(commit.raw_id) |
|
196 | 196 | |
|
197 | 197 | c.inline_comments_flat = CommentsModel()\ |
|
198 | 198 | .get_commit_inline_comments(commit.raw_id) |
|
199 | 199 | |
|
200 | 200 | review_statuses = ChangesetStatusModel().aggregate_votes_by_user( |
|
201 | 201 | statuses, reviewers) |
|
202 | 202 | |
|
203 | 203 | c.commit_review_status = ChangesetStatus.STATUS_NOT_REVIEWED |
|
204 | 204 | |
|
205 | 205 | c.commit_set_reviewers_data_json = collections.OrderedDict({'reviewers': []}) |
|
206 | 206 | |
|
207 | 207 | for review_obj, member, reasons, mandatory, status in review_statuses: |
|
208 | 208 | member_reviewer = h.reviewer_as_json( |
|
209 | 209 | member, reasons=reasons, mandatory=mandatory, role=None, |
|
210 | 210 | user_group=None |
|
211 | 211 | ) |
|
212 | 212 | |
|
213 | 213 | current_review_status = status[0][1].status if status else ChangesetStatus.STATUS_NOT_REVIEWED |
|
214 | 214 | member_reviewer['review_status'] = current_review_status |
|
215 | 215 | member_reviewer['review_status_label'] = h.commit_status_lbl(current_review_status) |
|
216 | 216 | member_reviewer['allowed_to_update'] = False |
|
217 | 217 | c.commit_set_reviewers_data_json['reviewers'].append(member_reviewer) |
|
218 | 218 | |
|
219 | 219 | c.commit_set_reviewers_data_json = ext_json.str_json(c.commit_set_reviewers_data_json) |
|
220 | 220 | |
|
221 | 221 | # NOTE(marcink): this uses the same voting logic as in pull-requests |
|
222 | 222 | c.commit_review_status = ChangesetStatusModel().calculate_status(review_statuses) |
|
223 | 223 | c.commit_broadcast_channel = channelstream.comment_channel(c.repo_name, commit_obj=commit) |
|
224 | 224 | |
|
225 | 225 | diff = None |
|
226 | 226 | # Iterate over ranges (default commit view is always one commit) |
|
227 | 227 | for commit in c.commit_ranges: |
|
228 | 228 | c.changes[commit.raw_id] = [] |
|
229 | 229 | |
|
230 | 230 | commit2 = commit |
|
231 | 231 | commit1 = commit.first_parent |
|
232 | 232 | |
|
233 | 233 | if method == 'show': |
|
234 | 234 | inline_comments = CommentsModel().get_inline_comments( |
|
235 | 235 | self.db_repo.repo_id, revision=commit.raw_id) |
|
236 | 236 | c.inline_cnt = len(CommentsModel().get_inline_comments_as_list( |
|
237 | 237 | inline_comments)) |
|
238 | 238 | c.inline_comments = inline_comments |
|
239 | 239 | |
|
240 | 240 | cache_path = self.rhodecode_vcs_repo.get_create_shadow_cache_pr_path( |
|
241 | 241 | self.db_repo) |
|
242 | 242 | cache_file_path = diff_cache_exist( |
|
243 | 243 | cache_path, 'diff', commit.raw_id, |
|
244 | 244 | hide_whitespace_changes, diff_context, c.fulldiff) |
|
245 | 245 | |
|
246 | 246 | caching_enabled = self._is_diff_cache_enabled(self.db_repo) |
|
247 | 247 | force_recache = str2bool(self.request.GET.get('force_recache')) |
|
248 | 248 | |
|
249 | 249 | cached_diff = None |
|
250 | 250 | if caching_enabled: |
|
251 | 251 | cached_diff = load_cached_diff(cache_file_path) |
|
252 | 252 | |
|
253 | 253 | has_proper_diff_cache = cached_diff and cached_diff.get('diff') |
|
254 | 254 | if not force_recache and has_proper_diff_cache: |
|
255 | 255 | diffset = cached_diff['diff'] |
|
256 | 256 | else: |
|
257 | 257 | vcs_diff = self.rhodecode_vcs_repo.get_diff( |
|
258 | 258 | commit1, commit2, |
|
259 | 259 | ignore_whitespace=hide_whitespace_changes, |
|
260 | 260 | context=diff_context) |
|
261 | 261 | |
|
262 | 262 | diff_processor = diffs.DiffProcessor(vcs_diff, diff_format='newdiff', |
|
263 | 263 | diff_limit=diff_limit, |
|
264 | 264 | file_limit=file_limit, |
|
265 | 265 | show_full_diff=c.fulldiff) |
|
266 | 266 | |
|
267 | 267 | _parsed = diff_processor.prepare() |
|
268 | 268 | |
|
269 | 269 | diffset = codeblocks.DiffSet( |
|
270 | 270 | repo_name=self.db_repo_name, |
|
271 | 271 | source_node_getter=codeblocks.diffset_node_getter(commit1), |
|
272 | 272 | target_node_getter=codeblocks.diffset_node_getter(commit2)) |
|
273 | 273 | |
|
274 | 274 | diffset = self.path_filter.render_patchset_filtered( |
|
275 | 275 | diffset, _parsed, commit1.raw_id, commit2.raw_id) |
|
276 | 276 | |
|
277 | 277 | # save cached diff |
|
278 | 278 | if caching_enabled: |
|
279 | 279 | cache_diff(cache_file_path, diffset, None) |
|
280 | 280 | |
|
281 | 281 | c.limited_diff = diffset.limited_diff |
|
282 | 282 | c.changes[commit.raw_id] = diffset |
|
283 | 283 | else: |
|
284 | 284 | # TODO(marcink): no cache usage here... |
|
285 | 285 | _diff = self.rhodecode_vcs_repo.get_diff( |
|
286 | 286 | commit1, commit2, |
|
287 | 287 | ignore_whitespace=hide_whitespace_changes, context=diff_context) |
|
288 | 288 | diff_processor = diffs.DiffProcessor(_diff, diff_format='newdiff', |
|
289 | 289 | diff_limit=diff_limit, |
|
290 | 290 | file_limit=file_limit, show_full_diff=c.fulldiff) |
|
291 | 291 | # downloads/raw we only need RAW diff nothing else |
|
292 | 292 | diff = self.path_filter.get_raw_patch(diff_processor) |
|
293 | 293 | c.changes[commit.raw_id] = [None, None, None, None, diff, None, None] |
|
294 | 294 | |
|
295 | 295 | # sort comments by how they were generated |
|
296 | 296 | c.comments = sorted(c.comments, key=lambda x: x.comment_id) |
|
297 | 297 | c.at_version_num = None |
|
298 | 298 | |
|
299 | 299 | if len(c.commit_ranges) == 1: |
|
300 | 300 | c.commit = c.commit_ranges[0] |
|
301 | 301 | c.parent_tmpl = ''.join( |
|
302 | 302 | '# Parent %s\n' % x.raw_id for x in c.commit.parents) |
|
303 | 303 | |
|
304 | 304 | if method == 'download': |
|
305 | 305 | response = Response(diff) |
|
306 | 306 | response.content_type = 'text/plain' |
|
307 | 307 | response.content_disposition = ( |
|
308 | 308 | 'attachment; filename=%s.diff' % commit_id_range[:12]) |
|
309 | 309 | return response |
|
310 | 310 | elif method == 'patch': |
|
311 | 311 | |
|
312 | 312 | c.diff = safe_str(diff) |
|
313 | 313 | patch = render( |
|
314 | 314 | 'rhodecode:templates/changeset/patch_changeset.mako', |
|
315 | 315 | self._get_template_context(c), self.request) |
|
316 | 316 | response = Response(patch) |
|
317 | 317 | response.content_type = 'text/plain' |
|
318 | 318 | return response |
|
319 | 319 | elif method == 'raw': |
|
320 | 320 | response = Response(diff) |
|
321 | 321 | response.content_type = 'text/plain' |
|
322 | 322 | return response |
|
323 | 323 | elif method == 'show': |
|
324 | 324 | if len(c.commit_ranges) == 1: |
|
325 | 325 | html = render( |
|
326 | 326 | 'rhodecode:templates/changeset/changeset.mako', |
|
327 | 327 | self._get_template_context(c), self.request) |
|
328 | 328 | return Response(html) |
|
329 | 329 | else: |
|
330 | 330 | c.ancestor = None |
|
331 | 331 | c.target_repo = self.db_repo |
|
332 | 332 | html = render( |
|
333 | 333 | 'rhodecode:templates/changeset/changeset_range.mako', |
|
334 | 334 | self._get_template_context(c), self.request) |
|
335 | 335 | return Response(html) |
|
336 | 336 | |
|
337 | 337 | raise HTTPBadRequest() |
|
338 | 338 | |
|
339 | 339 | @LoginRequired() |
|
340 | 340 | @HasRepoPermissionAnyDecorator( |
|
341 | 341 | 'repository.read', 'repository.write', 'repository.admin') |
|
342 | 342 | def repo_commit_show(self): |
|
343 | 343 | commit_id = self.request.matchdict['commit_id'] |
|
344 | 344 | return self._commit(commit_id, method='show') |
|
345 | 345 | |
|
346 | 346 | @LoginRequired() |
|
347 | 347 | @HasRepoPermissionAnyDecorator( |
|
348 | 348 | 'repository.read', 'repository.write', 'repository.admin') |
|
349 | 349 | def repo_commit_raw(self): |
|
350 | 350 | commit_id = self.request.matchdict['commit_id'] |
|
351 | 351 | return self._commit(commit_id, method='raw') |
|
352 | 352 | |
|
353 | 353 | @LoginRequired() |
|
354 | 354 | @HasRepoPermissionAnyDecorator( |
|
355 | 355 | 'repository.read', 'repository.write', 'repository.admin') |
|
356 | 356 | def repo_commit_patch(self): |
|
357 | 357 | commit_id = self.request.matchdict['commit_id'] |
|
358 | 358 | return self._commit(commit_id, method='patch') |
|
359 | 359 | |
|
360 | 360 | @LoginRequired() |
|
361 | 361 | @HasRepoPermissionAnyDecorator( |
|
362 | 362 | 'repository.read', 'repository.write', 'repository.admin') |
|
363 | 363 | def repo_commit_download(self): |
|
364 | 364 | commit_id = self.request.matchdict['commit_id'] |
|
365 | 365 | return self._commit(commit_id, method='download') |
|
366 | 366 | |
|
367 | 367 | def _commit_comments_create(self, commit_id, comments): |
|
368 | 368 | _ = self.request.translate |
|
369 | 369 | data = {} |
|
370 | 370 | if not comments: |
|
371 | 371 | return |
|
372 | 372 | |
|
373 | 373 | commit = self.db_repo.get_commit(commit_id) |
|
374 | 374 | |
|
375 | 375 | all_drafts = len([x for x in comments if str2bool(x['is_draft'])]) == len(comments) |
|
376 | 376 | for entry in comments: |
|
377 | 377 | c = self.load_default_context() |
|
378 | 378 | comment_type = entry['comment_type'] |
|
379 | 379 | text = entry['text'] |
|
380 | 380 | status = entry['status'] |
|
381 | 381 | is_draft = str2bool(entry['is_draft']) |
|
382 | 382 | resolves_comment_id = entry['resolves_comment_id'] |
|
383 | 383 | f_path = entry['f_path'] |
|
384 | 384 | line_no = entry['line'] |
|
385 | 385 | target_elem_id = f'file-{h.safeid(h.safe_str(f_path))}' |
|
386 | 386 | |
|
387 | 387 | if status: |
|
388 | 388 | text = text or (_('Status change %(transition_icon)s %(status)s') |
|
389 | 389 | % {'transition_icon': '>', |
|
390 | 390 | 'status': ChangesetStatus.get_status_lbl(status)}) |
|
391 | 391 | |
|
392 | 392 | comment = CommentsModel().create( |
|
393 | 393 | text=text, |
|
394 | 394 | repo=self.db_repo.repo_id, |
|
395 | 395 | user=self._rhodecode_db_user.user_id, |
|
396 | 396 | commit_id=commit_id, |
|
397 | 397 | f_path=f_path, |
|
398 | 398 | line_no=line_no, |
|
399 | 399 | status_change=(ChangesetStatus.get_status_lbl(status) |
|
400 | 400 | if status else None), |
|
401 | 401 | status_change_type=status, |
|
402 | 402 | comment_type=comment_type, |
|
403 | 403 | is_draft=is_draft, |
|
404 | 404 | resolves_comment_id=resolves_comment_id, |
|
405 | 405 | auth_user=self._rhodecode_user, |
|
406 | 406 | send_email=not is_draft, # skip notification for draft comments |
|
407 | 407 | ) |
|
408 | 408 | is_inline = comment.is_inline |
|
409 | 409 | |
|
410 | 410 | # get status if set ! |
|
411 | 411 | if status: |
|
412 | 412 | # `dont_allow_on_closed_pull_request = True` means |
|
413 | 413 | # if latest status was from pull request and it's closed |
|
414 | 414 | # disallow changing status ! |
|
415 | 415 | |
|
416 | 416 | try: |
|
417 | 417 | ChangesetStatusModel().set_status( |
|
418 | 418 | self.db_repo.repo_id, |
|
419 | 419 | status, |
|
420 | 420 | self._rhodecode_db_user.user_id, |
|
421 | 421 | comment, |
|
422 | 422 | revision=commit_id, |
|
423 | 423 | dont_allow_on_closed_pull_request=True |
|
424 | 424 | ) |
|
425 | 425 | except StatusChangeOnClosedPullRequestError: |
|
426 | 426 | msg = _('Changing the status of a commit associated with ' |
|
427 | 427 | 'a closed pull request is not allowed') |
|
428 | 428 | log.exception(msg) |
|
429 | 429 | h.flash(msg, category='warning') |
|
430 | 430 | raise HTTPFound(h.route_path( |
|
431 | 431 | 'repo_commit', repo_name=self.db_repo_name, |
|
432 | 432 | commit_id=commit_id)) |
|
433 | 433 | |
|
434 | 434 | Session().flush() |
|
435 | 435 | # this is somehow required to get access to some relationship |
|
436 | 436 | # loaded on comment |
|
437 | 437 | Session().refresh(comment) |
|
438 | 438 | |
|
439 | 439 | # skip notifications for drafts |
|
440 | 440 | if not is_draft: |
|
441 | 441 | CommentsModel().trigger_commit_comment_hook( |
|
442 | 442 | self.db_repo, self._rhodecode_user, 'create', |
|
443 | 443 | data={'comment': comment, 'commit': commit}) |
|
444 | 444 | |
|
445 | 445 | comment_id = comment.comment_id |
|
446 | 446 | data[comment_id] = { |
|
447 | 447 | 'target_id': target_elem_id |
|
448 | 448 | } |
|
449 | 449 | Session().flush() |
|
450 | 450 | |
|
451 | 451 | c.co = comment |
|
452 | 452 | c.at_version_num = 0 |
|
453 | 453 | c.is_new = True |
|
454 | 454 | rendered_comment = render( |
|
455 | 455 | 'rhodecode:templates/changeset/changeset_comment_block.mako', |
|
456 | 456 | self._get_template_context(c), self.request) |
|
457 | 457 | |
|
458 | 458 | data[comment_id].update(comment.get_dict()) |
|
459 | 459 | data[comment_id].update({'rendered_text': rendered_comment}) |
|
460 | 460 | |
|
461 | 461 | # finalize, commit and redirect |
|
462 | 462 | Session().commit() |
|
463 | 463 | |
|
464 | 464 | # skip channelstream for draft comments |
|
465 | 465 | if not all_drafts: |
|
466 | 466 | comment_broadcast_channel = channelstream.comment_channel( |
|
467 | 467 | self.db_repo_name, commit_obj=commit) |
|
468 | 468 | |
|
469 | 469 | comment_data = data |
|
470 | 470 | posted_comment_type = 'inline' if is_inline else 'general' |
|
471 | 471 | if len(data) == 1: |
|
472 | 472 | msg = _('posted {} new {} comment').format(len(data), posted_comment_type) |
|
473 | 473 | else: |
|
474 | 474 | msg = _('posted {} new {} comments').format(len(data), posted_comment_type) |
|
475 | 475 | |
|
476 | 476 | channelstream.comment_channelstream_push( |
|
477 | 477 | self.request, comment_broadcast_channel, self._rhodecode_user, msg, |
|
478 | 478 | comment_data=comment_data) |
|
479 | 479 | |
|
480 | 480 | return data |
|
481 | 481 | |
|
482 | 482 | @LoginRequired() |
|
483 | 483 | @NotAnonymous() |
|
484 | 484 | @HasRepoPermissionAnyDecorator( |
|
485 | 485 | 'repository.read', 'repository.write', 'repository.admin') |
|
486 | 486 | @CSRFRequired() |
|
487 | 487 | def repo_commit_comment_create(self): |
|
488 | 488 | _ = self.request.translate |
|
489 | 489 | commit_id = self.request.matchdict['commit_id'] |
|
490 | 490 | |
|
491 | 491 | multi_commit_ids = [] |
|
492 | 492 | for _commit_id in self.request.POST.get('commit_ids', '').split(','): |
|
493 | 493 | if _commit_id not in ['', None, EmptyCommit.raw_id]: |
|
494 | 494 | if _commit_id not in multi_commit_ids: |
|
495 | 495 | multi_commit_ids.append(_commit_id) |
|
496 | 496 | |
|
497 | 497 | commit_ids = multi_commit_ids or [commit_id] |
|
498 | 498 | |
|
499 | 499 | data = [] |
|
500 | 500 | # Multiple comments for each passed commit id |
|
501 | 501 | for current_id in filter(None, commit_ids): |
|
502 | 502 | comment_data = { |
|
503 | 503 | 'comment_type': self.request.POST.get('comment_type'), |
|
504 | 504 | 'text': self.request.POST.get('text'), |
|
505 | 505 | 'status': self.request.POST.get('changeset_status', None), |
|
506 | 506 | 'is_draft': self.request.POST.get('draft'), |
|
507 | 507 | 'resolves_comment_id': self.request.POST.get('resolves_comment_id', None), |
|
508 | 508 | 'close_pull_request': self.request.POST.get('close_pull_request'), |
|
509 | 509 | 'f_path': self.request.POST.get('f_path'), |
|
510 | 510 | 'line': self.request.POST.get('line'), |
|
511 | 511 | } |
|
512 | 512 | comment = self._commit_comments_create(commit_id=current_id, comments=[comment_data]) |
|
513 | 513 | data.append(comment) |
|
514 | 514 | |
|
515 | 515 | return data if len(data) > 1 else data[0] |
|
516 | 516 | |
|
517 | 517 | @LoginRequired() |
|
518 | 518 | @NotAnonymous() |
|
519 | 519 | @HasRepoPermissionAnyDecorator( |
|
520 | 520 | 'repository.read', 'repository.write', 'repository.admin') |
|
521 | 521 | @CSRFRequired() |
|
522 | 522 | def repo_commit_comment_preview(self): |
|
523 | 523 | # Technically a CSRF token is not needed as no state changes with this |
|
524 | 524 | # call. However, as this is a POST it is better to have it, so automated |
|
525 | 525 | # tools don't flag it as potential CSRF. |
|
526 | 526 | # Post is required because the payload could be bigger than the maximum |
|
527 | 527 | # allowed by GET. |
|
528 | 528 | |
|
529 | 529 | text = self.request.POST.get('text') |
|
530 | 530 | renderer = self.request.POST.get('renderer') or 'rst' |
|
531 | 531 | if text: |
|
532 | 532 | return h.render(text, renderer=renderer, mentions=True, |
|
533 | 533 | repo_name=self.db_repo_name) |
|
534 | 534 | return '' |
|
535 | 535 | |
|
536 | 536 | @LoginRequired() |
|
537 | 537 | @HasRepoPermissionAnyDecorator( |
|
538 | 538 | 'repository.read', 'repository.write', 'repository.admin') |
|
539 | 539 | @CSRFRequired() |
|
540 | 540 | def repo_commit_comment_history_view(self): |
|
541 | 541 | c = self.load_default_context() |
|
542 | 542 | comment_id = self.request.matchdict['comment_id'] |
|
543 | 543 | comment_history_id = self.request.matchdict['comment_history_id'] |
|
544 | 544 | |
|
545 | 545 | comment = ChangesetComment.get_or_404(comment_id) |
|
546 | 546 | comment_owner = (comment.author.user_id == self._rhodecode_db_user.user_id) |
|
547 | 547 | if comment.draft and not comment_owner: |
|
548 | 548 | # if we see draft comments history, we only allow this for owner |
|
549 | 549 | raise HTTPNotFound() |
|
550 | 550 | |
|
551 | 551 | comment_history = ChangesetCommentHistory.get_or_404(comment_history_id) |
|
552 | 552 | is_repo_comment = comment_history.comment.repo.repo_id == self.db_repo.repo_id |
|
553 | 553 | |
|
554 | 554 | if is_repo_comment: |
|
555 | 555 | c.comment_history = comment_history |
|
556 | 556 | |
|
557 | 557 | rendered_comment = render( |
|
558 | 558 | 'rhodecode:templates/changeset/comment_history.mako', |
|
559 | 559 | self._get_template_context(c), self.request) |
|
560 | 560 | return rendered_comment |
|
561 | 561 | else: |
|
562 | 562 | log.warning('No permissions for user %s to show comment_history_id: %s', |
|
563 | 563 | self._rhodecode_db_user, comment_history_id) |
|
564 | 564 | raise HTTPNotFound() |
|
565 | 565 | |
|
566 | 566 | @LoginRequired() |
|
567 | 567 | @NotAnonymous() |
|
568 | 568 | @HasRepoPermissionAnyDecorator( |
|
569 | 569 | 'repository.read', 'repository.write', 'repository.admin') |
|
570 | 570 | @CSRFRequired() |
|
571 | 571 | def repo_commit_comment_attachment_upload(self): |
|
572 | 572 | c = self.load_default_context() |
|
573 | 573 | upload_key = 'attachment' |
|
574 | 574 | |
|
575 | 575 | file_obj = self.request.POST.get(upload_key) |
|
576 | 576 | |
|
577 | 577 | if file_obj is None: |
|
578 | 578 | self.request.response.status = 400 |
|
579 | 579 | return {'store_fid': None, |
|
580 | 580 | 'access_path': None, |
|
581 | 581 | 'error': f'{upload_key} data field is missing'} |
|
582 | 582 | |
|
583 | 583 | if not hasattr(file_obj, 'filename'): |
|
584 | 584 | self.request.response.status = 400 |
|
585 | 585 | return {'store_fid': None, |
|
586 | 586 | 'access_path': None, |
|
587 | 587 | 'error': 'filename cannot be read from the data field'} |
|
588 | 588 | |
|
589 | 589 | filename = file_obj.filename |
|
590 | 590 | file_display_name = filename |
|
591 | 591 | |
|
592 | 592 | metadata = { |
|
593 | 593 | 'user_uploaded': {'username': self._rhodecode_user.username, |
|
594 | 594 | 'user_id': self._rhodecode_user.user_id, |
|
595 | 595 | 'ip': self._rhodecode_user.ip_addr}} |
|
596 | 596 | |
|
597 | 597 | # TODO(marcink): allow .ini configuration for allowed_extensions, and file-size |
|
598 | 598 | allowed_extensions = [ |
|
599 | 599 | 'gif', '.jpeg', '.jpg', '.png', '.docx', '.gz', '.log', '.pdf', |
|
600 | 600 | '.pptx', '.txt', '.xlsx', '.zip'] |
|
601 | 601 | max_file_size = 10 * 1024 * 1024 # 10MB, also validated via dropzone.js |
|
602 | 602 | |
|
603 | 603 | try: |
|
604 | stor… |

605 | store_uid, metadata = stor… |

606 | file_obj.file,… |
|
604 | f_store = store_utils.get_filestore_backend(self.request.registry.settings) | |
|
605 | store_uid, metadata = f_store.store( | |
|
606 | filename, file_obj.file, metadata=metadata, | |
|
607 | 607 | extensions=allowed_extensions, max_filesize=max_file_size) |
|
608 | 608 | except FileNotAllowedException: |
|
609 | 609 | self.request.response.status = 400 |
|
610 | 610 | permitted_extensions = ', '.join(allowed_extensions) |
|
611 | error_msg = 'File `{}` is not allowed. ' \ | |
|
612 | 'Only following extensions are permitted: {}'.format( |
|
|
613 | filename, permitted_extensions) | |
|
611 | error_msg = f'File `{filename}` is not allowed. ' \ | |
|
612 | f'Only following extensions are permitted: {permitted_extensions}' | |
|
613 | ||
|
614 | 614 | return {'store_fid': None, |
|
615 | 615 | 'access_path': None, |
|
616 | 616 | 'error': error_msg} |
|
617 | 617 | except FileOverSizeException: |
|
618 | 618 | self.request.response.status = 400 |
|
619 | 619 | limit_mb = h.format_byte_size_binary(max_file_size) |
|
620 | error_msg = f'File {filename} is exceeding allowed limit of {limit_mb}.' | |
|
620 | 621 | return {'store_fid': None, |
|
621 | 622 | 'access_path': None, |
|
622 | 'error': 'File {} is exceeding allowed limit of {}.'.format( | |
|
623 | filename, limit_mb)} | |
|
623 | 'error': error_msg} | |
|
624 | 624 | |
|
625 | 625 | try: |
|
626 | 626 | entry = FileStore.create( |
|
627 | 627 | file_uid=store_uid, filename=metadata["filename"], |
|
628 | 628 | file_hash=metadata["sha256"], file_size=metadata["size"], |
|
629 | 629 | file_display_name=file_display_name, |
|
630 | 630 | file_description=f'comment attachment `{safe_str(filename)}`', |
|
631 | 631 | hidden=True, check_acl=True, user_id=self._rhodecode_user.user_id, |
|
632 | 632 | scope_repo_id=self.db_repo.repo_id |
|
633 | 633 | ) |
|
634 | 634 | Session().add(entry) |
|
635 | 635 | Session().commit() |
|
636 | 636 | log.debug('Stored upload in DB as %s', entry) |
|
637 | 637 | except Exception: |
|
638 | 638 | log.exception('Failed to store file %s', filename) |
|
639 | 639 | self.request.response.status = 400 |
|
640 | 640 | return {'store_fid': None, |
|
641 | 641 | 'access_path': None, |
|
642 | 642 | 'error': f'File {filename} failed to store in DB.'} |
|
643 | 643 | |
|
644 | 644 | Session().commit() |
|
645 | 645 | |
|
646 | 646 | data = { |
|
647 | 647 | 'store_fid': store_uid, |
|
648 | 648 | 'access_path': h.route_path( |
|
649 | 649 | 'download_file', fid=store_uid), |
|
650 | 650 | 'fqn_access_path': h.route_url( |
|
651 | 651 | 'download_file', fid=store_uid), |
|
652 | 652 | # for EE those are replaced by FQN links on repo-only like |
|
653 | 653 | 'repo_access_path': h.route_url( |
|
654 | 654 | 'download_file', fid=store_uid), |
|
655 | 655 | 'repo_fqn_access_path': h.route_url( |
|
656 | 656 | 'download_file', fid=store_uid), |
|
657 | 657 | } |
|
658 | 658 | # this data is a part of CE/EE additional code |
|
659 | 659 | if c.rhodecode_edition_id == 'EE': |
|
660 | 660 | data.update({ |
|
661 | 661 | 'repo_access_path': h.route_path( |
|
662 | 662 | 'repo_artifacts_get', repo_name=self.db_repo_name, uid=store_uid), |
|
663 | 663 | 'repo_fqn_access_path': h.route_url( |
|
664 | 664 | 'repo_artifacts_get', repo_name=self.db_repo_name, uid=store_uid), |
|
665 | 665 | }) |
|
666 | 666 | |
|
667 | 667 | return data |
|
668 | 668 | |
|
669 | 669 | @LoginRequired() |
|
670 | 670 | @NotAnonymous() |
|
671 | 671 | @HasRepoPermissionAnyDecorator( |
|
672 | 672 | 'repository.read', 'repository.write', 'repository.admin') |
|
673 | 673 | @CSRFRequired() |
|
674 | 674 | def repo_commit_comment_delete(self): |
|
675 | 675 | commit_id = self.request.matchdict['commit_id'] |
|
676 | 676 | comment_id = self.request.matchdict['comment_id'] |
|
677 | 677 | |
|
678 | 678 | comment = ChangesetComment.get_or_404(comment_id) |
|
679 | 679 | if not comment: |
|
680 | 680 | log.debug('Comment with id:%s not found, skipping', comment_id) |
|
681 | 681 | # comment already deleted in another call probably |
|
682 | 682 | return True |
|
683 | 683 | |
|
684 | 684 | if comment.immutable: |
|
685 | 685 | # don't allow deleting comments that are immutable |
|
686 | 686 | raise HTTPForbidden() |
|
687 | 687 | |
|
688 | 688 | is_repo_admin = h.HasRepoPermissionAny('repository.admin')(self.db_repo_name) |
|
689 | 689 | super_admin = h.HasPermissionAny('hg.admin')() |
|
690 | 690 | comment_owner = (comment.author.user_id == self._rhodecode_db_user.user_id) |
|
691 | 691 | is_repo_comment = comment.repo.repo_id == self.db_repo.repo_id |
|
692 | 692 | comment_repo_admin = is_repo_admin and is_repo_comment |
|
693 | 693 | |
|
694 | 694 | if comment.draft and not comment_owner: |
|
695 | 695 | # We never allow to delete draft comments for other than owners |
|
696 | 696 | raise HTTPNotFound() |
|
697 | 697 | |
|
698 | 698 | if super_admin or comment_owner or comment_repo_admin: |
|
699 | 699 | CommentsModel().delete(comment=comment, auth_user=self._rhodecode_user) |
|
700 | 700 | Session().commit() |
|
701 | 701 | return True |
|
702 | 702 | else: |
|
703 | 703 | log.warning('No permissions for user %s to delete comment_id: %s', |
|
704 | 704 | self._rhodecode_db_user, comment_id) |
|
705 | 705 | raise HTTPNotFound() |
|
706 | 706 | |
|
707 | 707 | @LoginRequired() |
|
708 | 708 | @NotAnonymous() |
|
709 | 709 | @HasRepoPermissionAnyDecorator( |
|
710 | 710 | 'repository.read', 'repository.write', 'repository.admin') |
|
711 | 711 | @CSRFRequired() |
|
712 | 712 | def repo_commit_comment_edit(self): |
|
713 | 713 | self.load_default_context() |
|
714 | 714 | |
|
715 | 715 | commit_id = self.request.matchdict['commit_id'] |
|
716 | 716 | comment_id = self.request.matchdict['comment_id'] |
|
717 | 717 | comment = ChangesetComment.get_or_404(comment_id) |
|
718 | 718 | |
|
719 | 719 | if comment.immutable: |
|
720 | 720 | # don't allow editing comments that are immutable |
|
721 | 721 | raise HTTPForbidden() |
|
722 | 722 | |
|
723 | 723 | is_repo_admin = h.HasRepoPermissionAny('repository.admin')(self.db_repo_name) |
|
724 | 724 | super_admin = h.HasPermissionAny('hg.admin')() |
|
725 | 725 | comment_owner = (comment.author.user_id == self._rhodecode_db_user.user_id) |
|
726 | 726 | is_repo_comment = comment.repo.repo_id == self.db_repo.repo_id |
|
727 | 727 | comment_repo_admin = is_repo_admin and is_repo_comment |
|
728 | 728 | |
|
729 | 729 | if super_admin or comment_owner or comment_repo_admin: |
|
730 | 730 | text = self.request.POST.get('text') |
|
731 | 731 | version = self.request.POST.get('version') |
|
732 | 732 | if text == comment.text: |
|
733 | 733 | log.warning( |
|
734 | 734 | 'Comment(repo): ' |
|
735 | 735 | 'Trying to create new version ' |
|
736 | 736 | 'with the same comment body {}'.format( |
|
737 | 737 | comment_id, |
|
738 | 738 | ) |
|
739 | 739 | ) |
|
740 | 740 | raise HTTPNotFound() |
|
741 | 741 | |
|
742 | 742 | if version.isdigit(): |
|
743 | 743 | version = int(version) |
|
744 | 744 | else: |
|
745 | 745 | log.warning( |
|
746 | 746 | 'Comment(repo): Wrong version type {} {} ' |
|
747 | 747 | 'for comment {}'.format( |
|
748 | 748 | version, |
|
749 | 749 | type(version), |
|
750 | 750 | comment_id, |
|
751 | 751 | ) |
|
752 | 752 | ) |
|
753 | 753 | raise HTTPNotFound() |
|
754 | 754 | |
|
755 | 755 | try: |
|
756 | 756 | comment_history = CommentsModel().edit( |
|
757 | 757 | comment_id=comment_id, |
|
758 | 758 | text=text, |
|
759 | 759 | auth_user=self._rhodecode_user, |
|
760 | 760 | version=version, |
|
761 | 761 | ) |
|
762 | 762 | except CommentVersionMismatch: |
|
763 | 763 | raise HTTPConflict() |
|
764 | 764 | |
|
765 | 765 | if not comment_history: |
|
766 | 766 | raise HTTPNotFound() |
|
767 | 767 | |
|
768 | 768 | if not comment.draft: |
|
769 | 769 | commit = self.db_repo.get_commit(commit_id) |
|
770 | 770 | CommentsModel().trigger_commit_comment_hook( |
|
771 | 771 | self.db_repo, self._rhodecode_user, 'edit', |
|
772 | 772 | data={'comment': comment, 'commit': commit}) |
|
773 | 773 | |
|
774 | 774 | Session().commit() |
|
775 | 775 | return { |
|
776 | 776 | 'comment_history_id': comment_history.comment_history_id, |
|
777 | 777 | 'comment_id': comment.comment_id, |
|
778 | 778 | 'comment_version': comment_history.version, |
|
779 | 779 | 'comment_author_username': comment_history.author.username, |
|
780 | 780 | 'comment_author_gravatar': h.gravatar_url(comment_history.author.email, 16, request=self.request), |
|
781 | 781 | 'comment_created_on': h.age_component(comment_history.created_on, |
|
782 | 782 | time_is_local=True), |
|
783 | 783 | } |
|
784 | 784 | else: |
|
785 | 785 | log.warning('No permissions for user %s to edit comment_id: %s', |
|
786 | 786 | self._rhodecode_db_user, comment_id) |
|
787 | 787 | raise HTTPNotFound() |
|
788 | 788 | |
|
789 | 789 | @LoginRequired() |
|
790 | 790 | @HasRepoPermissionAnyDecorator( |
|
791 | 791 | 'repository.read', 'repository.write', 'repository.admin') |
|
792 | 792 | def repo_commit_data(self): |
|
793 | 793 | commit_id = self.request.matchdict['commit_id'] |
|
794 | 794 | self.load_default_context() |
|
795 | 795 | |
|
796 | 796 | try: |
|
797 | 797 | return self.rhodecode_vcs_repo.get_commit(commit_id=commit_id) |
|
798 | 798 | except CommitDoesNotExistError as e: |
|
799 | 799 | return EmptyCommit(message=str(e)) |
|
800 | 800 | |
|
801 | 801 | @LoginRequired() |
|
802 | 802 | @HasRepoPermissionAnyDecorator( |
|
803 | 803 | 'repository.read', 'repository.write', 'repository.admin') |
|
804 | 804 | def repo_commit_children(self): |
|
805 | 805 | commit_id = self.request.matchdict['commit_id'] |
|
806 | 806 | self.load_default_context() |
|
807 | 807 | |
|
808 | 808 | try: |
|
809 | 809 | commit = self.rhodecode_vcs_repo.get_commit(commit_id=commit_id) |
|
810 | 810 | children = commit.children |
|
811 | 811 | except CommitDoesNotExistError: |
|
812 | 812 | children = [] |
|
813 | 813 | |
|
814 | 814 | result = {"results": children} |
|
815 | 815 | return result |
|
816 | 816 | |
|
817 | 817 | @LoginRequired() |
|
818 | 818 | @HasRepoPermissionAnyDecorator( |
|
819 | 819 | 'repository.read', 'repository.write', 'repository.admin') |
|
820 | 820 | def repo_commit_parents(self): |
|
821 | 821 | commit_id = self.request.matchdict['commit_id'] |
|
822 | 822 | self.load_default_context() |
|
823 | 823 | |
|
824 | 824 | try: |
|
825 | 825 | commit = self.rhodecode_vcs_repo.get_commit(commit_id=commit_id) |
|
826 | 826 | parents = commit.parents |
|
827 | 827 | except CommitDoesNotExistError: |
|
828 | 828 | parents = [] |
|
829 | 829 | result = {"results": parents} |
|
830 | 830 | return result |
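
The ``_commit()`` view above picks between a cached and a freshly computed diffset. Condensed, the decision looks like the sketch below; ``load_cached_diff`` and ``cache_diff`` come from the imports in the hunk, while the ``compute`` callable stands in for the vcs-diff, ``DiffProcessor`` and ``DiffSet`` pipeline and is an assumption of this sketch::

    # sketch: cached-diff lookup as done in RepoCommitsView._commit
    from rhodecode.lib.diffs import cache_diff, load_cached_diff

    def get_diffset(cache_file_path, caching_enabled, force_recache, compute):
        cached_diff = None
        if caching_enabled:
            cached_diff = load_cached_diff(cache_file_path)

        has_proper_diff_cache = cached_diff and cached_diff.get('diff')
        if not force_recache and has_proper_diff_cache:
            # serve the previously rendered diffset
            return cached_diff['diff']

        # expensive path: compute, then cache for the next request
        diffset = compute()
        if caching_enabled:
            cache_diff(cache_file_path, diffset, None)
        return diffset
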
@@ -1,1716 +1,1716 b'' | |||
|
1 | 1 | # Copyright (C) 2011-2023 RhodeCode GmbH |
|
2 | 2 | # |
|
3 | 3 | # This program is free software: you can redistribute it and/or modify |
|
4 | 4 | # it under the terms of the GNU Affero General Public License, version 3 |
|
5 | 5 | # (only), as published by the Free Software Foundation. |
|
6 | 6 | # |
|
7 | 7 | # This program is distributed in the hope that it will be useful, |
|
8 | 8 | # but WITHOUT ANY WARRANTY; without even the implied warranty of |
|
9 | 9 | # MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the |
|
10 | 10 | # GNU General Public License for more details. |
|
11 | 11 | # |
|
12 | 12 | # You should have received a copy of the GNU Affero General Public License |
|
13 | 13 | # along with this program. If not, see <http://www.gnu.org/licenses/>. |
|
14 | 14 | # |
|
15 | 15 | # This program is dual-licensed. If you wish to learn more about the |
|
16 | 16 | # RhodeCode Enterprise Edition, including its added features, Support services, |
|
17 | 17 | # and proprietary license terms, please see https://rhodecode.com/licenses/ |
|
18 | 18 | |
|
19 | 19 | import itertools |
|
20 | 20 | import logging |
|
21 | 21 | import os |
|
22 | 22 | import collections |
|
23 | 23 | import urllib.request |
|
24 | 24 | import urllib.parse |
|
25 | 25 | import urllib.error |
|
26 | 26 | import pathlib |
|
27 | 27 | import time |
|
28 | 28 | import random |
|
29 | 29 | |
|
30 | 30 | from pyramid.httpexceptions import HTTPNotFound, HTTPBadRequest, HTTPFound |
|
31 | 31 | |
|
32 | 32 | from pyramid.renderers import render |
|
33 | 33 | from pyramid.response import Response |
|
34 | 34 | |
|
35 | 35 | import rhodecode |
|
36 | 36 | from rhodecode.apps._base import RepoAppView |
|
37 | 37 | |
|
38 | 38 | |
|
39 | 39 | from rhodecode.lib import diffs, helpers as h, rc_cache |
|
40 | 40 | from rhodecode.lib import audit_logger |
|
41 | 41 | from rhodecode.lib.hash_utils import sha1_safe |
|
42 | 42 | from rhodecode.lib.archive_cache import ( |
|
43 | 43 | get_archival_cache_store, get_archival_config, ArchiveCacheGenerationLock, archive_iterator) |
|
44 | 44 | from rhodecode.lib.str_utils import safe_bytes, convert_special_chars |
|
45 | 45 | from rhodecode.lib.view_utils import parse_path_ref |
|
46 | 46 | from rhodecode.lib.exceptions import NonRelativePathError |
|
47 | 47 | from rhodecode.lib.codeblocks import ( |
|
48 | 48 | filenode_as_lines_tokens, filenode_as_annotated_lines_tokens) |
|
49 | 49 | from rhodecode.lib.utils2 import convert_line_endings, detect_mode |
|
50 | 50 | from rhodecode.lib.type_utils import str2bool |
|
51 | from rhodecode.lib.str_utils import safe_str, safe_int | |
|
51 | from rhodecode.lib.str_utils import safe_str, safe_int, header_safe_str | |
|
52 | 52 | from rhodecode.lib.auth import ( |
|
53 | 53 | LoginRequired, HasRepoPermissionAnyDecorator, CSRFRequired) |
|
54 | 54 | from rhodecode.lib.vcs import path as vcspath |
|
55 | 55 | from rhodecode.lib.vcs.backends.base import EmptyCommit |
|
56 | 56 | from rhodecode.lib.vcs.conf import settings |
|
57 | 57 | from rhodecode.lib.vcs.nodes import FileNode |
|
58 | 58 | from rhodecode.lib.vcs.exceptions import ( |
|
59 | 59 | RepositoryError, CommitDoesNotExistError, EmptyRepositoryError, |
|
60 | 60 | ImproperArchiveTypeError, VCSError, NodeAlreadyExistsError, |
|
61 | 61 | NodeDoesNotExistError, CommitError, NodeError) |
|
62 | 62 | |
|
63 | 63 | from rhodecode.model.scm import ScmModel |
|
64 | 64 | from rhodecode.model.db import Repository |
|
65 | 65 | |
|
66 | 66 | log = logging.getLogger(__name__) |
|
67 | 67 | |
|
68 | 68 | |
|
69 | 69 | def get_archive_name(db_repo_id, db_repo_name, commit_sha, ext, subrepos=False, path_sha='', with_hash=True): |
|
70 | 70 | # original backward compat name of archive |
|
71 | 71 | clean_name = safe_str(convert_special_chars(db_repo_name).replace('/', '_')) |
|
72 | 72 | |
|
73 | 73 | # e.g vcsserver-id-abcd-sub-1-abcfdef-archive-all.zip |
|
74 | 74 | # vcsserver-id-abcd-sub-0-abcfdef-COMMIT_SHA-PATH_SHA.zip |
|
75 | 75 | id_sha = sha1_safe(str(db_repo_id))[:4] |
|
76 | 76 | sub_repo = 'sub-1' if subrepos else 'sub-0' |
|
77 | 77 | commit = commit_sha if with_hash else 'archive' |
|
78 | 78 | path_marker = (path_sha if with_hash else '') or 'all' |
|
79 | 79 | archive_name = f'{clean_name}-id-{id_sha}-{sub_repo}-{commit}-{path_marker}{ext}' |
|
80 | 80 | |
|
81 | 81 | return archive_name |
|
82 | 82 | |
|
83 | 83 | |
|
84 | 84 | def get_path_sha(at_path): |
|
85 | 85 | return safe_str(sha1_safe(at_path)[:8]) |
|
86 | 86 | |
|
87 | 87 | |
|
88 | 88 | def _get_archive_spec(fname): |
|
89 | 89 | log.debug('Detecting archive spec for: `%s`', fname) |
|
90 | 90 | |
|
91 | 91 | fileformat = None |
|
92 | 92 | ext = None |
|
93 | 93 | content_type = None |
|
94 | 94 | for a_type, content_type, extension in settings.ARCHIVE_SPECS: |
|
95 | 95 | |
|
96 | 96 | if fname.endswith(extension): |
|
97 | 97 | fileformat = a_type |
|
98 | 98 | log.debug('archive is of type: %s', fileformat) |
|
99 | 99 | ext = extension |
|
100 | 100 | break |
|
101 | 101 | |
|
102 | 102 | if not fileformat: |
|
103 | 103 | raise ValueError() |
|
104 | 104 | |
|
105 | 105 | # the leftover part of the whole fname is the commit |
|
106 | 106 | commit_id = fname[:-len(ext)] |
|
107 | 107 | |
|
108 | 108 | return commit_id, ext, fileformat, content_type |
|
109 | 109 | |
|
110 | 110 | |
|
111 | 111 | class RepoFilesView(RepoAppView): |
|
112 | 112 | |
|
113 | 113 | @staticmethod |
|
114 | 114 | def adjust_file_path_for_svn(f_path, repo): |
|
115 | 115 | """ |
|
116 | 116 | Computes the relative path of `f_path`. |
|
117 | 117 | |
|
118 | 118 | This is mainly based on prefix matching of the recognized tags and |
|
119 | 119 | branches in the underlying repository. |
|
120 | 120 | """ |
|
121 | 121 | tags_and_branches = itertools.chain( |
|
122 | 122 | repo.branches.keys(), |
|
123 | 123 | repo.tags.keys()) |
|
124 | 124 | tags_and_branches = sorted(tags_and_branches, key=len, reverse=True) |
|
125 | 125 | |
|
126 | 126 | for name in tags_and_branches: |
|
127 | 127 | if f_path.startswith(f'{name}/'): |
|
128 | 128 | f_path = vcspath.relpath(f_path, name) |
|
129 | 129 | break |
|
130 | 130 | return f_path |
|
131 | 131 | |
|
132 | 132 | def load_default_context(self): |
|
133 | 133 | c = self._get_local_tmpl_context(include_app_defaults=True) |
|
134 | 134 | c.rhodecode_repo = self.rhodecode_vcs_repo |
|
135 | 135 | c.enable_downloads = self.db_repo.enable_downloads |
|
136 | 136 | return c |
|
137 | 137 | |
|
138 | 138 | def _ensure_not_locked(self, commit_id='tip'): |
|
139 | 139 | _ = self.request.translate |
|
140 | 140 | |
|
141 | 141 | repo = self.db_repo |
|
142 | 142 | if repo.enable_locking and repo.locked[0]: |
|
143 | 143 | h.flash(_('This repository has been locked by %s on %s') |
|
144 | 144 | % (h.person_by_id(repo.locked[0]), |
|
145 | 145 | h.format_date(h.time_to_datetime(repo.locked[1]))), |
|
146 | 146 | 'warning') |
|
147 | 147 | files_url = h.route_path( |
|
148 | 148 | 'repo_files:default_path', |
|
149 | 149 | repo_name=self.db_repo_name, commit_id=commit_id) |
|
150 | 150 | raise HTTPFound(files_url) |
|
151 | 151 | |
|
152 | 152 | def forbid_non_head(self, is_head, f_path, commit_id='tip', json_mode=False): |
|
153 | 153 | _ = self.request.translate |
|
154 | 154 | |
|
155 | 155 | if not is_head: |
|
156 | 156 | message = _('Cannot modify file. ' |
|
157 | 157 | 'Given commit `{}` is not head of a branch.').format(commit_id) |
|
158 | 158 | h.flash(message, category='warning') |
|
159 | 159 | |
|
160 | 160 | if json_mode: |
|
161 | 161 | return message |
|
162 | 162 | |
|
163 | 163 | files_url = h.route_path( |
|
164 | 164 | 'repo_files', repo_name=self.db_repo_name, commit_id=commit_id, |
|
165 | 165 | f_path=f_path) |
|
166 | 166 | raise HTTPFound(files_url) |
|
167 | 167 | |
|
168 | 168 | def check_branch_permission(self, branch_name, commit_id='tip', json_mode=False): |
|
169 | 169 | _ = self.request.translate |
|
170 | 170 | |
|
171 | 171 | rule, branch_perm = self._rhodecode_user.get_rule_and_branch_permission( |
|
172 | 172 | self.db_repo_name, branch_name) |
|
173 | 173 | if branch_perm and branch_perm not in ['branch.push', 'branch.push_force']: |
|
174 | 174 | message = _('Branch `{}` changes forbidden by rule {}.').format( |
|
175 | 175 | h.escape(branch_name), h.escape(rule)) |
|
176 | 176 | h.flash(message, 'warning') |
|
177 | 177 | |
|
178 | 178 | if json_mode: |
|
179 | 179 | return message |
|
180 | 180 | |
|
181 | 181 | files_url = h.route_path( |
|
182 | 182 | 'repo_files:default_path', repo_name=self.db_repo_name, commit_id=commit_id) |
|
183 | 183 | |
|
184 | 184 | raise HTTPFound(files_url) |
|
185 | 185 | |
|
186 | 186 | def _get_commit_and_path(self): |
|
187 | 187 | default_commit_id = self.db_repo.landing_ref_name |
|
188 | 188 | default_f_path = '/' |
|
189 | 189 | |
|
190 | 190 | commit_id = self.request.matchdict.get( |
|
191 | 191 | 'commit_id', default_commit_id) |
|
192 | 192 | f_path = self._get_f_path(self.request.matchdict, default_f_path) |
|
193 | 193 | return commit_id, f_path |
|
194 | 194 | |
|
195 | 195 | def _get_default_encoding(self, c): |
|
196 | 196 | enc_list = getattr(c, 'default_encodings', []) |
|
197 | 197 | return enc_list[0] if enc_list else 'UTF-8' |
|
198 | 198 | |
|
199 | 199 | def _get_commit_or_redirect(self, commit_id, redirect_after=True): |
|
200 | 200 | """ |
|
201 | 201 | This is a safe way to get a commit. If an error occurs it redirects to |

202 | 202 | tip with a proper message |
|
203 | 203 | |
|
204 | 204 | :param commit_id: id of commit to fetch |
|
205 | 205 | :param redirect_after: toggle redirection |
|
206 | 206 | """ |
|
207 | 207 | _ = self.request.translate |
|
208 | 208 | |
|
209 | 209 | try: |
|
210 | 210 | return self.rhodecode_vcs_repo.get_commit(commit_id) |
|
211 | 211 | except EmptyRepositoryError: |
|
212 | 212 | if not redirect_after: |
|
213 | 213 | return None |
|
214 | 214 | |
|
215 | 215 | add_new = upload_new = "" |
|
216 | 216 | if h.HasRepoPermissionAny( |
|
217 | 217 | 'repository.write', 'repository.admin')(self.db_repo_name): |
|
218 | 218 | _url = h.route_path( |
|
219 | 219 | 'repo_files_add_file', |
|
220 | 220 | repo_name=self.db_repo_name, commit_id=0, f_path='') |
|
221 | 221 | add_new = h.link_to( |
|
222 | 222 | _('add a new file'), _url, class_="alert-link") |
|
223 | 223 | |
|
224 | 224 | _url_upld = h.route_path( |
|
225 | 225 | 'repo_files_upload_file', |
|
226 | 226 | repo_name=self.db_repo_name, commit_id=0, f_path='') |
|
227 | 227 | upload_new = h.link_to( |
|
228 | 228 | _('upload a new file'), _url_upld, class_="alert-link") |
|
229 | 229 | |
|
230 | 230 | h.flash(h.literal( |
|
231 | 231 | _('There are no files yet. Click here to %s or %s.') % (add_new, upload_new)), category='warning') |
|
232 | 232 | raise HTTPFound( |
|
233 | 233 | h.route_path('repo_summary', repo_name=self.db_repo_name)) |
|
234 | 234 | |
|
235 | 235 | except (CommitDoesNotExistError, LookupError) as e: |
|
236 | 236 | msg = _('No such commit exists for this repository. Commit: {}').format(commit_id) |
|
237 | 237 | h.flash(msg, category='error') |
|
238 | 238 | raise HTTPNotFound() |
|
239 | 239 | except RepositoryError as e: |
|
240 | 240 | h.flash(h.escape(safe_str(e)), category='error') |
|
241 | 241 | raise HTTPNotFound() |
|
242 | 242 | |
|
243 | 243 | def _get_filenode_or_redirect(self, commit_obj, path, pre_load=None): |
|
244 | 244 | """ |
|
245 | 245 | Returns file_node; if an error occurs or the given path is a directory, |

246 | 246 | it'll redirect to the top-level path |
|
247 | 247 | """ |
|
248 | 248 | _ = self.request.translate |
|
249 | 249 | |
|
250 | 250 | try: |
|
251 | 251 | file_node = commit_obj.get_node(path, pre_load=pre_load) |
|
252 | 252 | if file_node.is_dir(): |
|
253 | 253 | raise RepositoryError('The given path is a directory') |
|
254 | 254 | except CommitDoesNotExistError: |
|
255 | 255 | log.exception('No such commit exists for this repository') |
|
256 | 256 | h.flash(_('No such commit exists for this repository'), category='error') |
|
257 | 257 | raise HTTPNotFound() |
|
258 | 258 | except RepositoryError as e: |
|
259 | 259 | log.warning('Repository error while fetching filenode `%s`. Err:%s', path, e) |
|
260 | 260 | h.flash(h.escape(safe_str(e)), category='error') |
|
261 | 261 | raise HTTPNotFound() |
|
262 | 262 | |
|
263 | 263 | return file_node |
|
264 | 264 | |
|
265 | 265 | def _is_valid_head(self, commit_id, repo, landing_ref): |
|
266 | 266 | branch_name = sha_commit_id = '' |
|
267 | 267 | is_head = False |
|
268 | 268 | log.debug('Checking if commit_id `%s` is a head for %s.', commit_id, repo) |
|
269 | 269 | |
|
270 | 270 | for _branch_name, branch_commit_id in repo.branches.items(): |
|
271 | 271 | # simple case we pass in branch name, it's a HEAD |
|
272 | 272 | if commit_id == _branch_name: |
|
273 | 273 | is_head = True |
|
274 | 274 | branch_name = _branch_name |
|
275 | 275 | sha_commit_id = branch_commit_id |
|
276 | 276 | break |
|
277 | 277 | # case when we pass in full sha commit_id, which is a head |
|
278 | 278 | elif commit_id == branch_commit_id: |
|
279 | 279 | is_head = True |
|
280 | 280 | branch_name = _branch_name |
|
281 | 281 | sha_commit_id = branch_commit_id |
|
282 | 282 | break |
|
283 | 283 | |
|
284 | 284 | if h.is_svn(repo) and not repo.is_empty(): |
|
285 | 285 | # Note: Subversion only has one head. |
|
286 | 286 | if commit_id == repo.get_commit(commit_idx=-1).raw_id: |
|
287 | 287 | is_head = True |
|
288 | 288 | return branch_name, sha_commit_id, is_head |
|
289 | 289 | |
|
290 | 290 | # checked branches, means we only need to try to get the branch/commit_sha |
|
291 | 291 | if repo.is_empty(): |
|
292 | 292 | is_head = True |
|
293 | 293 | branch_name = landing_ref |
|
294 | 294 | sha_commit_id = EmptyCommit().raw_id |
|
295 | 295 | else: |
|
296 | 296 | commit = repo.get_commit(commit_id=commit_id) |
|
297 | 297 | if commit: |
|
298 | 298 | branch_name = commit.branch |
|
299 | 299 | sha_commit_id = commit.raw_id |
|
300 | 300 | |
|
301 | 301 | return branch_name, sha_commit_id, is_head |
|
302 | 302 | |
|
303 | 303 | def _get_tree_at_commit(self, c, commit_id, f_path, full_load=False, at_rev=None): |
|
304 | 304 | |
|
305 | 305 | repo_id = self.db_repo.repo_id |
|
306 | 306 | force_recache = self.get_recache_flag() |
|
307 | 307 | |
|
308 | 308 | cache_seconds = safe_int( |
|
309 | 309 | rhodecode.CONFIG.get('rc_cache.cache_repo.expiration_time')) |
|
310 | 310 | cache_on = not force_recache and cache_seconds > 0 |
|
311 | 311 | log.debug( |
|
312 | 312 | 'Computing FILE TREE for repo_id %s commit_id `%s` and path `%s`' |
|
313 | 313 | ' with caching: %s [TTL: %ss]' % ( |
|
314 | 314 | repo_id, commit_id, f_path, cache_on, cache_seconds or 0)) |
|
315 | 315 | |
|
316 | 316 | cache_namespace_uid = f'repo.{rc_cache.FILE_TREE_CACHE_VER}.{repo_id}' |
|
317 | 317 | region = rc_cache.get_or_create_region('cache_repo', cache_namespace_uid) |
|
318 | 318 | |
|
319 | 319 | @region.conditional_cache_on_arguments(namespace=cache_namespace_uid, condition=cache_on) |
|
320 | 320 | def compute_file_tree(_name_hash, _repo_id, _commit_id, _f_path, _full_load, _at_rev): |
|
321 | 321 | log.debug('Generating cached file tree at for repo_id: %s, %s, %s', |
|
322 | 322 | _repo_id, _commit_id, _f_path) |
|
323 | 323 | |
|
324 | 324 | c.full_load = _full_load |
|
325 | 325 | return render( |
|
326 | 326 | 'rhodecode:templates/files/files_browser_tree.mako', |
|
327 | 327 | self._get_template_context(c), self.request, _at_rev) |
|
328 | 328 | |
|
329 | 329 | return compute_file_tree( |
|
330 | 330 | self.db_repo.repo_name_hash, self.db_repo.repo_id, commit_id, f_path, full_load, at_rev) |
|
331 | 331 | |
|
332 | 332 | def create_pure_path(self, *parts): |
|
333 | 333 | # Split paths and sanitize them, removing any ../ etc |
|
334 | 334 | sanitized_path = [ |
|
335 | 335 | x for x in pathlib.PurePath(*parts).parts |
|
336 | 336 | if x not in ['.', '..']] |
|
337 | 337 | |
|
338 | 338 | pure_path = pathlib.PurePath(*sanitized_path) |
|
339 | 339 | return pure_path |
|
340 | 340 | |
|
341 | 341 | def _is_lf_enabled(self, target_repo): |
|
342 | 342 | lf_enabled = False |
|
343 | 343 | |
|
344 | 344 | lf_key_for_vcs_map = { |
|
345 | 345 | 'hg': 'extensions_largefiles', |
|
346 | 346 | 'git': 'vcs_git_lfs_enabled' |
|
347 | 347 | } |
|
348 | 348 | |
|
349 | 349 | lf_key_for_vcs = lf_key_for_vcs_map.get(target_repo.repo_type) |
|
350 | 350 | |
|
351 | 351 | if lf_key_for_vcs: |
|
352 | 352 | lf_enabled = self._get_repo_setting(target_repo, lf_key_for_vcs) |
|
353 | 353 | |
|
354 | 354 | return lf_enabled |
|
355 | 355 | |
|
356 | 356 | @LoginRequired() |
|
357 | 357 | @HasRepoPermissionAnyDecorator( |
|
358 | 358 | 'repository.read', 'repository.write', 'repository.admin') |
|
359 | 359 | def repo_archivefile(self): |
|
360 | 360 | # archive cache config |
|
361 | 361 | from rhodecode import CONFIG |
|
362 | 362 | _ = self.request.translate |
|
363 | 363 | self.load_default_context() |
|
364 | 364 | default_at_path = '/' |
|
365 | 365 | fname = self.request.matchdict['fname'] |
|
366 | 366 | subrepos = self.request.GET.get('subrepos') == 'true' |
|
367 | 367 | with_hash = str2bool(self.request.GET.get('with_hash', '1')) |
|
368 | 368 | at_path = self.request.GET.get('at_path') or default_at_path |
|
369 | 369 | |
|
370 | 370 | if not self.db_repo.enable_downloads: |
|
371 | 371 | return Response(_('Downloads disabled')) |
|
372 | 372 | |
|
373 | 373 | try: |
|
374 | 374 | commit_id, ext, fileformat, content_type = \ |
|
375 | 375 | _get_archive_spec(fname) |
|
376 | 376 | except ValueError: |
|
377 | 377 | return Response(_('Unknown archive type for: `{}`').format( |
|
378 | 378 | h.escape(fname))) |
|
379 | 379 | |
|
380 | 380 | try: |
|
381 | 381 | commit = self.rhodecode_vcs_repo.get_commit(commit_id) |
|
382 | 382 | except CommitDoesNotExistError: |
|
383 | 383 | return Response(_('Unknown commit_id {}').format( |
|
384 | 384 | h.escape(commit_id))) |
|
385 | 385 | except EmptyRepositoryError: |
|
386 | 386 | return Response(_('Empty repository')) |
|
387 | 387 | |
|
388 | 388 | # we used a ref or a shortened hash; redirect the client to the explicit full hash |
|
389 | 389 | if commit_id != commit.raw_id: |
|
390 | 390 | fname = f'{commit.raw_id}{ext}' |
|
391 | 391 | raise HTTPFound(self.request.current_route_path(fname=fname)) |
|
392 | 392 | |
|
393 | 393 | try: |
|
394 | 394 | at_path = commit.get_node(at_path).path or default_at_path |
|
395 | 395 | except Exception: |
|
396 | 396 | return Response(_('No node at path {} for this repository').format(h.escape(at_path))) |
|
397 | 397 | |
|
398 | 398 | path_sha = get_path_sha(at_path) |
|
399 | 399 | |
|
400 | 400 | # used for cache etc, consistent unique archive name |
|
401 | 401 | archive_name_key = get_archive_name( |
|
402 | 402 | self.db_repo.repo_id, self.db_repo_name, commit_sha=commit.short_id, ext=ext, subrepos=subrepos, |
|
403 | 403 | path_sha=path_sha, with_hash=True) |
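# NOTE: the cache key is always computed with with_hash=True, so requests
# with and without the hash suffix resolve to the same cached archive entry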
|
404 | 404 | |
|
405 | 405 | if not with_hash: |
|
406 | 406 | path_sha = '' |
|
407 | 407 | |
|
408 | 408 | # what end client gets served |
|
409 | 409 | response_archive_name = get_archive_name( |
|
410 | 410 | self.db_repo.repo_id, self.db_repo_name, commit_sha=commit.short_id, ext=ext, subrepos=subrepos, |
|
411 | 411 | path_sha=path_sha, with_hash=with_hash) |
|
412 | 412 | |
|
413 | 413 | # remove extension from our archive directory name |
|
414 | 414 | archive_dir_name = response_archive_name[:-len(ext)] |
|
415 | 415 | |
|
416 | 416 | archive_cache_disable = self.request.GET.get('no_cache') |
|
417 | 417 | |
|
418 | 418 | d_cache = get_archival_cache_store(config=CONFIG) |
|
419 | 419 | |
|
420 | 420 | # NOTE: we get the config to pass to a call to lazy-init the SAME type of cache on vcsserver |
|
421 | 421 | d_cache_conf = get_archival_config(config=CONFIG) |
|
422 | 422 | |
|
423 | 423 | # This is also a cache key, and lock key |
|
424 | 424 | reentrant_lock_key = archive_name_key + '.lock' |
|
425 | 425 | |
|
426 | 426 | use_cached_archive = False |
|
427 | 427 | if not archive_cache_disable and archive_name_key in d_cache: |
|
428 | 428 | reader, metadata = d_cache.fetch(archive_name_key) |
|
429 | 429 | |
|
430 | 430 | use_cached_archive = True |
|
431 | 431 | log.debug('Found cached archive as key=%s tag=%s, serving archive from cache reader=%s', |
|
432 | 432 | archive_name_key, metadata, reader.name) |
|
433 | 433 | else: |
|
434 | 434 | reader = None |
|
435 | 435 | log.debug('Archive with key=%s is not yet cached, creating one now...', archive_name_key) |
|
436 | 436 | |
|
437 | 437 | if not reader: |
|
438 | 438 | # generate new archive, as previous was not found in the cache |
|
439 | 439 | try: |
|
440 | 440 | with d_cache.get_lock(reentrant_lock_key): |
|
441 | 441 | try: |
|
442 | 442 | commit.archive_repo(archive_name_key, archive_dir_name=archive_dir_name, |
|
443 | 443 | kind=fileformat, subrepos=subrepos, |
|
444 | 444 | archive_at_path=at_path, cache_config=d_cache_conf) |
|
445 | 445 | except ImproperArchiveTypeError: |
|
446 | 446 | return Response(_('Unknown archive type')) |
|
447 | 447 | |
|
448 | 448 | except ArchiveCacheGenerationLock: |
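# another worker currently holds the generation lock; instead of blocking,
# tell the client to retry the same URL after a short randomized back-off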
|
449 | 449 | retry_after = round(random.uniform(0.3, 3.0), 1) |
|
450 | 450 | time.sleep(retry_after) |
|
451 | 451 | |
|
452 | 452 | location = self.request.url |
|
453 | 453 | response = Response( |
|
454 | 454 | f"archive {archive_name_key} generation in progress, Retry-After={retry_after}, Location={location}" |
|
455 | 455 | ) |
|
456 | 456 | response.headers["Retry-After"] = str(retry_after) |
|
457 | 457 | response.status_code = 307 # temporary redirect |
|
458 | 458 | |
|
459 | 459 | response.location = location |
|
460 | 460 | return response |
|
461 | 461 | |
|
462 | 462 | reader, metadata = d_cache.fetch(archive_name_key, retry=True, retry_attempts=30) |
|
463 | 463 | |
|
464 | 464 | response = Response(app_iter=archive_iterator(reader)) |
|
465 | 465 | response.content_disposition = f'attachment; filename={response_archive_name}' |
|
466 | 466 | response.content_type = str(content_type) |
|
467 | 467 | |
|
468 | 468 | try: |
|
469 | 469 | return response |
|
470 | 470 | finally: |
|
471 | 471 | # store download action |
|
472 | 472 | audit_logger.store_web( |
|
473 | 473 | 'repo.archive.download', action_data={ |
|
474 | 474 | 'user_agent': self.request.user_agent, |
|
475 | 475 | 'archive_name': archive_name_key, |
|
476 | 476 | 'archive_spec': fname, |
|
477 | 477 | 'archive_cached': use_cached_archive}, |
|
478 | 478 | user=self._rhodecode_user, |
|
479 | 479 | repo=self.db_repo, |
|
480 | 480 | commit=True |
|
481 | 481 | ) |
|
482 | 482 | |
|
483 | 483 | def _get_file_node(self, commit_id, f_path): |
|
484 | 484 | if commit_id not in ['', None, 'None', '0' * 12, '0' * 40]: |
|
485 | 485 | commit = self.rhodecode_vcs_repo.get_commit(commit_id=commit_id) |
|
486 | 486 | try: |
|
487 | 487 | node = commit.get_node(f_path) |
|
488 | 488 | if node.is_dir(): |
|
489 | 489 | raise NodeError(f'{node} path is a {type(node)}, not a file') |
|
490 | 490 | except NodeDoesNotExistError: |
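# the path is absent at this commit: fall back to an empty FileNode bound to
# a synthetic commit, so a diff against it can render as a full add or remove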
|
491 | 491 | commit = EmptyCommit( |
|
492 | 492 | commit_id=commit_id, |
|
493 | 493 | idx=commit.idx, |
|
494 | 494 | repo=commit.repository, |
|
495 | 495 | alias=commit.repository.alias, |
|
496 | 496 | message=commit.message, |
|
497 | 497 | author=commit.author, |
|
498 | 498 | date=commit.date) |
|
499 | 499 | node = FileNode(safe_bytes(f_path), b'', commit=commit) |
|
500 | 500 | else: |
|
501 | 501 | commit = EmptyCommit( |
|
502 | 502 | repo=self.rhodecode_vcs_repo, |
|
503 | 503 | alias=self.rhodecode_vcs_repo.alias) |
|
504 | 504 | node = FileNode(safe_bytes(f_path), b'', commit=commit) |
|
505 | 505 | return node |
|
506 | 506 | |
|
507 | 507 | @LoginRequired() |
|
508 | 508 | @HasRepoPermissionAnyDecorator( |
|
509 | 509 | 'repository.read', 'repository.write', 'repository.admin') |
|
510 | 510 | def repo_files_diff(self): |
|
511 | 511 | c = self.load_default_context() |
|
512 | 512 | f_path = self._get_f_path(self.request.matchdict) |
|
513 | 513 | diff1 = self.request.GET.get('diff1', '') |
|
514 | 514 | diff2 = self.request.GET.get('diff2', '') |
|
515 | 515 | |
|
516 | 516 | path1, diff1 = parse_path_ref(diff1, default_path=f_path) |
|
517 | 517 | |
|
518 | 518 | ignore_whitespace = str2bool(self.request.GET.get('ignorews')) |
|
519 | 519 | line_context = self.request.GET.get('context', 3) |
|
520 | 520 | |
|
521 | 521 | if not any((diff1, diff2)): |
|
522 | 522 | h.flash( |
|
523 | 523 | 'Need query parameter "diff1" or "diff2" to generate a diff.', |
|
524 | 524 | category='error') |
|
525 | 525 | raise HTTPBadRequest() |
|
526 | 526 | |
|
527 | 527 | c.action = self.request.GET.get('diff') |
|
528 | 528 | if c.action not in ['download', 'raw']: |
|
529 | 529 | compare_url = h.route_path( |
|
530 | 530 | 'repo_compare', |
|
531 | 531 | repo_name=self.db_repo_name, |
|
532 | 532 | source_ref_type='rev', |
|
533 | 533 | source_ref=diff1, |
|
534 | 534 | target_repo=self.db_repo_name, |
|
535 | 535 | target_ref_type='rev', |
|
536 | 536 | target_ref=diff2, |
|
537 | 537 | _query=dict(f_path=f_path)) |
|
538 | 538 | # redirect to new view if we render diff |
|
539 | 539 | raise HTTPFound(compare_url) |
|
540 | 540 | |
|
541 | 541 | try: |
|
542 | 542 | node1 = self._get_file_node(diff1, path1) |
|
543 | 543 | node2 = self._get_file_node(diff2, f_path) |
|
544 | 544 | except (RepositoryError, NodeError): |
|
545 | 545 | log.exception("Exception while trying to get node from repository") |
|
546 | 546 | raise HTTPFound( |
|
547 | 547 | h.route_path('repo_files', repo_name=self.db_repo_name, |
|
548 | 548 | commit_id='tip', f_path=f_path)) |
|
549 | 549 | |
|
550 | 550 | if all(isinstance(node.commit, EmptyCommit) |
|
551 | 551 | for node in (node1, node2)): |
|
552 | 552 | raise HTTPNotFound() |
|
553 | 553 | |
|
554 | 554 | c.commit_1 = node1.commit |
|
555 | 555 | c.commit_2 = node2.commit |
|
556 | 556 | |
|
557 | 557 | if c.action == 'download': |
|
558 | 558 | _diff = diffs.get_gitdiff(node1, node2, |
|
559 | 559 | ignore_whitespace=ignore_whitespace, |
|
560 | 560 | context=line_context) |
|
561 | 561 | # NOTE: this was using diff_format='gitdiff' |
|
562 | 562 | diff = diffs.DiffProcessor(_diff, diff_format='newdiff') |
|
563 | 563 | |
|
564 | 564 | response = Response(self.path_filter.get_raw_patch(diff)) |
|
565 | 565 | response.content_type = 'text/plain' |
|
566 | 566 | response.content_disposition = ( |
|
567 | 567 | f'attachment; filename={f_path}_{diff1}_vs_{diff2}.diff' |
|
568 | 568 | ) |
|
569 | 569 | charset = self._get_default_encoding(c) |
|
570 | 570 | if charset: |
|
571 | 571 | response.charset = charset |
|
572 | 572 | return response |
|
573 | 573 | |
|
574 | 574 | elif c.action == 'raw': |
|
575 | 575 | _diff = diffs.get_gitdiff(node1, node2, |
|
576 | 576 | ignore_whitespace=ignore_whitespace, |
|
577 | 577 | context=line_context) |
|
578 | 578 | # NOTE: this was using diff_format='gitdiff' |
|
579 | 579 | diff = diffs.DiffProcessor(_diff, diff_format='newdiff') |
|
580 | 580 | |
|
581 | 581 | response = Response(self.path_filter.get_raw_patch(diff)) |
|
582 | 582 | response.content_type = 'text/plain' |
|
583 | 583 | charset = self._get_default_encoding(c) |
|
584 | 584 | if charset: |
|
585 | 585 | response.charset = charset |
|
586 | 586 | return response |
|
587 | 587 | |
|
588 | 588 | # in case we ever end up here |
|
589 | 589 | raise HTTPNotFound() |
|
590 | 590 | |
|
591 | 591 | @LoginRequired() |
|
592 | 592 | @HasRepoPermissionAnyDecorator( |
|
593 | 593 | 'repository.read', 'repository.write', 'repository.admin') |
|
594 | 594 | def repo_files_diff_2way_redirect(self): |
|
595 | 595 | """ |
|
596 | 596 | Kept only to make OLD links work |
|
597 | 597 | """ |
|
598 | 598 | f_path = self._get_f_path_unchecked(self.request.matchdict) |
|
599 | 599 | diff1 = self.request.GET.get('diff1', '') |
|
600 | 600 | diff2 = self.request.GET.get('diff2', '') |
|
601 | 601 | |
|
602 | 602 | if not any((diff1, diff2)): |
|
603 | 603 | h.flash( |
|
604 | 604 | 'Need query parameter "diff1" or "diff2" to generate a diff.', |
|
605 | 605 | category='error') |
|
606 | 606 | raise HTTPBadRequest() |
|
607 | 607 | |
|
608 | 608 | compare_url = h.route_path( |
|
609 | 609 | 'repo_compare', |
|
610 | 610 | repo_name=self.db_repo_name, |
|
611 | 611 | source_ref_type='rev', |
|
612 | 612 | source_ref=diff1, |
|
613 | 613 | target_ref_type='rev', |
|
614 | 614 | target_ref=diff2, |
|
615 | 615 | _query=dict(f_path=f_path, diffmode='sideside', |
|
616 | 616 | target_repo=self.db_repo_name,)) |
|
617 | 617 | raise HTTPFound(compare_url) |
|
618 | 618 | |
|
619 | 619 | @LoginRequired() |
|
620 | 620 | def repo_files_default_commit_redirect(self): |
|
621 | 621 | """ |
|
622 | 622 | Special page that redirects to the landing page of files based on the default |
|
623 | 623 | commit for repository |
|
624 | 624 | """ |
|
625 | 625 | c = self.load_default_context() |
|
626 | 626 | ref_name = c.rhodecode_db_repo.landing_ref_name |
|
627 | 627 | landing_url = h.repo_files_by_ref_url( |
|
628 | 628 | c.rhodecode_db_repo.repo_name, |
|
629 | 629 | c.rhodecode_db_repo.repo_type, |
|
630 | 630 | f_path='', |
|
631 | 631 | ref_name=ref_name, |
|
632 | 632 | commit_id='tip', |
|
633 | 633 | query=dict(at=ref_name) |
|
634 | 634 | ) |
|
635 | 635 | |
|
636 | 636 | raise HTTPFound(landing_url) |
|
637 | 637 | |
|
638 | 638 | @LoginRequired() |
|
639 | 639 | @HasRepoPermissionAnyDecorator( |
|
640 | 640 | 'repository.read', 'repository.write', 'repository.admin') |
|
641 | 641 | def repo_files(self): |
|
642 | 642 | c = self.load_default_context() |
|
643 | 643 | |
|
644 | 644 | view_name = getattr(self.request.matched_route, 'name', None) |
|
645 | 645 | |
|
646 | 646 | c.annotate = view_name == 'repo_files:annotated' |
|
647 | 647 | # default is False, but .rst/.md files are auto-rendered later; we can |

648 | 648 | # override the auto rendering by setting this GET flag |
|
649 | 649 | c.renderer = view_name == 'repo_files:rendered' or not self.request.GET.get('no-render', False) |
|
650 | 650 | |
|
651 | 651 | commit_id, f_path = self._get_commit_and_path() |
|
652 | 652 | |
|
653 | 653 | c.commit = self._get_commit_or_redirect(commit_id) |
|
654 | 654 | c.branch = self.request.GET.get('branch', None) |
|
655 | 655 | c.f_path = f_path |
|
656 | 656 | at_rev = self.request.GET.get('at') |
|
657 | 657 | |
|
658 | 658 | # files or dirs |
|
659 | 659 | try: |
|
660 | 660 | c.file = c.commit.get_node(f_path, pre_load=['is_binary', 'size', 'data']) |
|
661 | 661 | |
|
662 | 662 | c.file_author = True |
|
663 | 663 | c.file_tree = '' |
|
664 | 664 | |
|
665 | 665 | # prev link |
|
666 | 666 | try: |
|
667 | 667 | prev_commit = c.commit.prev(c.branch) |
|
668 | 668 | c.prev_commit = prev_commit |
|
669 | 669 | c.url_prev = h.route_path( |
|
670 | 670 | 'repo_files', repo_name=self.db_repo_name, |
|
671 | 671 | commit_id=prev_commit.raw_id, f_path=f_path) |
|
672 | 672 | if c.branch: |
|
673 | 673 | c.url_prev += '?branch=%s' % c.branch |
|
674 | 674 | except (CommitDoesNotExistError, VCSError): |
|
675 | 675 | c.url_prev = '#' |
|
676 | 676 | c.prev_commit = EmptyCommit() |
|
677 | 677 | |
|
678 | 678 | # next link |
|
679 | 679 | try: |
|
680 | 680 | next_commit = c.commit.next(c.branch) |
|
681 | 681 | c.next_commit = next_commit |
|
682 | 682 | c.url_next = h.route_path( |
|
683 | 683 | 'repo_files', repo_name=self.db_repo_name, |
|
684 | 684 | commit_id=next_commit.raw_id, f_path=f_path) |
|
685 | 685 | if c.branch: |
|
686 | 686 | c.url_next += '?branch=%s' % c.branch |
|
687 | 687 | except (CommitDoesNotExistError, VCSError): |
|
688 | 688 | c.url_next = '#' |
|
689 | 689 | c.next_commit = EmptyCommit() |
|
690 | 690 | |
|
691 | 691 | # load file content |
|
692 | 692 | if c.file.is_file(): |
|
693 | 693 | |
|
694 | 694 | c.lf_node = {} |
|
695 | 695 | |
|
696 | 696 | has_lf_enabled = self._is_lf_enabled(self.db_repo) |
|
697 | 697 | if has_lf_enabled: |
|
698 | 698 | c.lf_node = c.file.get_largefile_node() |
|
699 | 699 | |
|
700 | 700 | c.file_source_page = 'true' |
|
701 | 701 | c.file_last_commit = c.file.last_commit |
|
702 | 702 | |
|
703 | 703 | c.file_size_too_big = c.file.size > c.visual.cut_off_limit_file |
|
704 | 704 | |
|
705 | 705 | if not (c.file_size_too_big or c.file.is_binary): |
|
706 | 706 | if c.annotate: # annotation has precedence over renderer |
|
707 | 707 | c.annotated_lines = filenode_as_annotated_lines_tokens( |
|
708 | 708 | c.file |
|
709 | 709 | ) |
|
710 | 710 | else: |
|
711 | 711 | c.renderer = ( |
|
712 | 712 | c.renderer and h.renderer_from_filename(c.file.path) |
|
713 | 713 | ) |
|
714 | 714 | if not c.renderer: |
|
715 | 715 | c.lines = filenode_as_lines_tokens(c.file) |
|
716 | 716 | |
|
717 | 717 | _branch_name, _sha_commit_id, is_head = \ |
|
718 | 718 | self._is_valid_head(commit_id, self.rhodecode_vcs_repo, |
|
719 | 719 | landing_ref=self.db_repo.landing_ref_name) |
|
720 | 720 | c.on_branch_head = is_head |
|
721 | 721 | |
|
722 | 722 | branch = c.commit.branch if ( |
|
723 | 723 | c.commit.branch and '/' not in c.commit.branch) else None |
|
724 | 724 | c.branch_or_raw_id = branch or c.commit.raw_id |
|
725 | 725 | c.branch_name = c.commit.branch or h.short_id(c.commit.raw_id) |
|
726 | 726 | |
|
727 | 727 | author = c.file_last_commit.author |
|
728 | 728 | c.authors = [[ |
|
729 | 729 | h.email(author), |
|
730 | 730 | h.person(author, 'username_or_name_or_email'), |
|
731 | 731 | 1 |
|
732 | 732 | ]] |
|
733 | 733 | |
|
734 | 734 | else: # load tree content at path |
|
735 | 735 | c.file_source_page = 'false' |
|
736 | 736 | c.authors = [] |
|
737 | 737 | # this loads a simple tree without metadata to speed things up |
|
738 | 738 | # later, via ajax, we call repo_nodetree_full and fetch the whole tree |
|
739 | 739 | c.file_tree = self._get_tree_at_commit(c, c.commit.raw_id, f_path, at_rev=at_rev) |
|
740 | 740 | |
|
741 | 741 | c.readme_data, c.readme_file = \ |
|
742 | 742 | self._get_readme_data(self.db_repo, c.visual.default_renderer, |
|
743 | 743 | c.commit.raw_id, f_path) |
|
744 | 744 | |
|
745 | 745 | except RepositoryError as e: |
|
746 | 746 | h.flash(h.escape(safe_str(e)), category='error') |
|
747 | 747 | raise HTTPNotFound() |
|
748 | 748 | |
|
749 | 749 | if self.request.environ.get('HTTP_X_PJAX'): |
|
750 | 750 | html = render('rhodecode:templates/files/files_pjax.mako', |
|
751 | 751 | self._get_template_context(c), self.request) |
|
752 | 752 | else: |
|
753 | 753 | html = render('rhodecode:templates/files/files.mako', |
|
754 | 754 | self._get_template_context(c), self.request) |
|
755 | 755 | return Response(html) |
|
756 | 756 | |
|
757 | 757 | @HasRepoPermissionAnyDecorator( |
|
758 | 758 | 'repository.read', 'repository.write', 'repository.admin') |
|
759 | 759 | def repo_files_annotated_previous(self): |
|
760 | 760 | self.load_default_context() |
|
761 | 761 | |
|
762 | 762 | commit_id, f_path = self._get_commit_and_path() |
|
763 | 763 | commit = self._get_commit_or_redirect(commit_id) |
|
764 | 764 | prev_commit_id = commit.raw_id |
|
765 | 765 | line_anchor = self.request.GET.get('line_anchor') |
|
766 | 766 | is_file = False |
|
767 | 767 | try: |
|
768 | 768 | _file = commit.get_node(f_path) |
|
769 | 769 | is_file = _file.is_file() |
|
770 | 770 | except (NodeDoesNotExistError, CommitDoesNotExistError, VCSError): |
|
771 | 771 | pass |
|
772 | 772 | |
|
773 | 773 | if is_file: |
|
774 | 774 | history = commit.get_path_history(f_path) |
|
775 | 775 | prev_commit_id = history[1].raw_id \ |
|
776 | 776 | if len(history) > 1 else prev_commit_id |
|
777 | 777 | prev_url = h.route_path( |
|
778 | 778 | 'repo_files:annotated', repo_name=self.db_repo_name, |
|
779 | 779 | commit_id=prev_commit_id, f_path=f_path, |
|
780 | 780 | _anchor=f'L{line_anchor}') |
|
781 | 781 | |
|
782 | 782 | raise HTTPFound(prev_url) |
|
783 | 783 | |
|
784 | 784 | @LoginRequired() |
|
785 | 785 | @HasRepoPermissionAnyDecorator( |
|
786 | 786 | 'repository.read', 'repository.write', 'repository.admin') |
|
787 | 787 | def repo_nodetree_full(self): |
|
788 | 788 | """ |
|
789 | 789 | Returns rendered html of file tree that contains commit date, |
|
790 | 790 | author, commit_id for the specified combination of |
|
791 | 791 | repo, commit_id and file path |
|
792 | 792 | """ |
|
793 | 793 | c = self.load_default_context() |
|
794 | 794 | |
|
795 | 795 | commit_id, f_path = self._get_commit_and_path() |
|
796 | 796 | commit = self._get_commit_or_redirect(commit_id) |
|
797 | 797 | try: |
|
798 | 798 | dir_node = commit.get_node(f_path) |
|
799 | 799 | except RepositoryError as e: |
|
800 | 800 | return Response(f'error: {h.escape(safe_str(e))}') |
|
801 | 801 | |
|
802 | 802 | if dir_node.is_file(): |
|
803 | 803 | return Response('') |
|
804 | 804 | |
|
805 | 805 | c.file = dir_node |
|
806 | 806 | c.commit = commit |
|
807 | 807 | at_rev = self.request.GET.get('at') |
|
808 | 808 | |
|
809 | 809 | html = self._get_tree_at_commit( |
|
810 | 810 | c, commit.raw_id, dir_node.path, full_load=True, at_rev=at_rev) |
|
811 | 811 | |
|
812 | 812 | return Response(html) |
|
813 | 813 | |
|
814 | 814 | def _get_attachement_headers(self, f_path): |
|
815 | 815 | f_name = safe_str(f_path.split(Repository.NAME_SEP)[-1]) |
|
816 | 816 | safe_path = f_name.replace('"', '\\"') |
|
817 | 817 | encoded_path = urllib.parse.quote(f_name) |
|
818 | 818 | |
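# build an RFC 6266 style Content-Disposition: a plain quoted "filename"
# fallback plus an RFC 5987 "filename*" variant carrying the UTF-8
# percent-encoded name for non-ascii paths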
|
819 | 819 | headers = "attachment; " \ |
|
820 | 820 | "filename=\"{}\"; " \ |
|
821 | 821 | "filename*=UTF-8\'\'{}".format(safe_path, encoded_path) |
|
822 | 822 | |
|
823 | return safe_bytes(headers).decode('latin-1', errors='replace') | |
|
823 | return header_safe_str(headers) | |
|
824 | 824 | |
|
825 | 825 | @LoginRequired() |
|
826 | 826 | @HasRepoPermissionAnyDecorator( |
|
827 | 827 | 'repository.read', 'repository.write', 'repository.admin') |
|
828 | 828 | def repo_file_raw(self): |
|
829 | 829 | """ |
|
830 | 830 | Action for "show as raw"; some mimetypes are rendered inline, |

831 | 831 | e.g. images and icons. |
|
832 | 832 | """ |
|
833 | 833 | c = self.load_default_context() |
|
834 | 834 | |
|
835 | 835 | commit_id, f_path = self._get_commit_and_path() |
|
836 | 836 | commit = self._get_commit_or_redirect(commit_id) |
|
837 | 837 | file_node = self._get_filenode_or_redirect(commit, f_path) |
|
838 | 838 | |
|
839 | 839 | raw_mimetype_mapping = { |
|
840 | 840 | # map original mimetype to a mimetype used for "show as raw" |
|
841 | 841 | # you can also provide a content-disposition to override the |
|
842 | 842 | # default "attachment" disposition. |
|
843 | 843 | # orig_type: (new_type, new_dispo) |
|
844 | 844 | |
|
845 | 845 | # show images inline: |
|
846 | 846 | # Do not re-add SVG: it is unsafe and permits XSS attacks. One can |
|
847 | 847 | # for example render an SVG with javascript inside or even render |
|
848 | 848 | # HTML. |
|
849 | 849 | 'image/x-icon': ('image/x-icon', 'inline'), |
|
850 | 850 | 'image/png': ('image/png', 'inline'), |
|
851 | 851 | 'image/gif': ('image/gif', 'inline'), |
|
852 | 852 | 'image/jpeg': ('image/jpeg', 'inline'), |
|
853 | 853 | 'application/pdf': ('application/pdf', 'inline'), |
|
854 | 854 | } |
|
855 | 855 | |
|
856 | 856 | mimetype = file_node.mimetype |
|
857 | 857 | try: |
|
858 | 858 | mimetype, disposition = raw_mimetype_mapping[mimetype] |
|
859 | 859 | except KeyError: |
|
860 | 860 | # we don't know anything special about this, handle it safely |
|
861 | 861 | if file_node.is_binary: |
|
862 | 862 | # do same as download raw for binary files |
|
863 | 863 | mimetype, disposition = 'application/octet-stream', 'attachment' |
|
864 | 864 | else: |
|
865 | 865 | # do not just use the original mimetype, but force text/plain, |
|
866 | 866 | # otherwise it would serve text/html and that might be unsafe. |
|
867 | 867 | # Note: underlying vcs library fakes text/plain mimetype if the |
|
868 | 868 | # mimetype cannot be determined and it thinks it is not |

869 | 869 | # binary. This might lead to erroneous text display in some |
|
870 | 870 | # cases, but helps in other cases, like with text files |
|
871 | 871 | # without extension. |
|
872 | 872 | mimetype, disposition = 'text/plain', 'inline' |
|
873 | 873 | |
|
874 | 874 | if disposition == 'attachment': |
|
875 | 875 | disposition = self._get_attachement_headers(f_path) |
|
876 | 876 | |
|
877 | 877 | stream_content = file_node.stream_bytes() |
|
878 | 878 | |
|
879 | 879 | response = Response(app_iter=stream_content) |
|
880 | 880 | response.content_disposition = disposition |
|
881 | 881 | response.content_type = mimetype |
|
882 | 882 | |
|
883 | 883 | charset = self._get_default_encoding(c) |
|
884 | 884 | if charset: |
|
885 | 885 | response.charset = charset |
|
886 | 886 | |
|
887 | 887 | return response |
|
888 | 888 | |
|
889 | 889 | @LoginRequired() |
|
890 | 890 | @HasRepoPermissionAnyDecorator( |
|
891 | 891 | 'repository.read', 'repository.write', 'repository.admin') |
|
892 | 892 | def repo_file_download(self): |
|
893 | 893 | c = self.load_default_context() |
|
894 | 894 | |
|
895 | 895 | commit_id, f_path = self._get_commit_and_path() |
|
896 | 896 | commit = self._get_commit_or_redirect(commit_id) |
|
897 | 897 | file_node = self._get_filenode_or_redirect(commit, f_path) |
|
898 | 898 | |
|
899 | 899 | if self.request.GET.get('lf'): |
|
900 | 900 | # only if the 'lf' GET flag is passed do we download this file |
|
901 | 901 | # as LFS/Largefile |
|
902 | 902 | lf_node = file_node.get_largefile_node() |
|
903 | 903 | if lf_node: |
|
904 | 904 | # overwrite our pointer with the REAL large-file |
|
905 | 905 | file_node = lf_node |
|
906 | 906 | |
|
907 | 907 | disposition = self._get_attachement_headers(f_path) |
|
908 | 908 | |
|
909 | 909 | stream_content = file_node.stream_bytes() |
|
910 | 910 | |
|
911 | 911 | response = Response(app_iter=stream_content) |
|
912 | 912 | response.content_disposition = disposition |
|
913 | 913 | response.content_type = file_node.mimetype |
|
914 | 914 | |
|
915 | 915 | charset = self._get_default_encoding(c) |
|
916 | 916 | if charset: |
|
917 | 917 | response.charset = charset |
|
918 | 918 | |
|
919 | 919 | return response |
|
920 | 920 | |
|
921 | 921 | def _get_nodelist_at_commit(self, repo_name, repo_id, commit_id, f_path): |
|
922 | 922 | |
|
923 | 923 | cache_seconds = safe_int( |
|
924 | 924 | rhodecode.CONFIG.get('rc_cache.cache_repo.expiration_time')) |
|
925 | 925 | cache_on = cache_seconds > 0 |
|
926 | 926 | log.debug( |
|
927 | 927 | 'Computing FILE SEARCH for repo_id %s commit_id `%s` and path `%s` ' |

928 | 928 | 'with caching: %s [TTL: %ss]' % ( |
|
929 | 929 | repo_id, commit_id, f_path, cache_on, cache_seconds or 0)) |
|
930 | 930 | |
|
931 | 931 | cache_namespace_uid = f'repo.{repo_id}' |
|
932 | 932 | region = rc_cache.get_or_create_region('cache_repo', cache_namespace_uid) |
|
933 | 933 | |
|
934 | 934 | @region.conditional_cache_on_arguments(namespace=cache_namespace_uid, condition=cache_on) |
|
935 | 935 | def compute_file_search(_name_hash, _repo_id, _commit_id, _f_path): |
|
936 | 936 | log.debug('Generating cached nodelist for repo_id:%s, %s, %s', |
|
937 | 937 | _repo_id, _commit_id, _f_path) |
|
938 | 938 | try: |
|
939 | 939 | _d, _f = ScmModel().get_quick_filter_nodes(repo_name, _commit_id, _f_path) |
|
940 | 940 | except (RepositoryError, CommitDoesNotExistError, Exception) as e: |
|
941 | 941 | log.exception(safe_str(e)) |
|
942 | 942 | h.flash(h.escape(safe_str(e)), category='error') |
|
943 | 943 | raise HTTPFound(h.route_path( |
|
944 | 944 | 'repo_files', repo_name=self.db_repo_name, |
|
945 | 945 | commit_id='tip', f_path='/')) |
|
946 | 946 | |
|
947 | 947 | return _d + _f |
|
948 | 948 | |
|
949 | 949 | result = compute_file_search(self.db_repo.repo_name_hash, self.db_repo.repo_id, |
|
950 | 950 | commit_id, f_path) |
|
951 | 951 | return filter(lambda n: self.path_filter.path_access_allowed(n['name']), result) |
|
952 | 952 | |
|
953 | 953 | @LoginRequired() |
|
954 | 954 | @HasRepoPermissionAnyDecorator( |
|
955 | 955 | 'repository.read', 'repository.write', 'repository.admin') |
|
956 | 956 | def repo_nodelist(self): |
|
957 | 957 | self.load_default_context() |
|
958 | 958 | |
|
959 | 959 | commit_id, f_path = self._get_commit_and_path() |
|
960 | 960 | commit = self._get_commit_or_redirect(commit_id) |
|
961 | 961 | |
|
962 | 962 | metadata = self._get_nodelist_at_commit( |
|
963 | 963 | self.db_repo_name, self.db_repo.repo_id, commit.raw_id, f_path) |
|
964 | 964 | return {'nodes': [x for x in metadata]} |
|
965 | 965 | |
|
966 | 966 | def _create_references(self, branches_or_tags, symbolic_reference, f_path, ref_type): |
|
967 | 967 | items = [] |
|
968 | 968 | for name, commit_id in branches_or_tags.items(): |
|
969 | 969 | sym_ref = symbolic_reference(commit_id, name, f_path, ref_type) |
|
970 | 970 | items.append((sym_ref, name, ref_type)) |
|
971 | 971 | return items |
|
972 | 972 | |
|
973 | 973 | def _symbolic_reference(self, commit_id, name, f_path, ref_type): |
|
974 | 974 | return commit_id |
|
975 | 975 | |
|
976 | 976 | def _symbolic_reference_svn(self, commit_id, name, f_path, ref_type): |
|
977 | 977 | return commit_id |
|
978 | 978 | |
|
979 | 979 | # NOTE(dan): old code we used in "diff" mode compare |
|
980 | 980 | new_f_path = vcspath.join(name, f_path) |
|
981 | 981 | return f'{new_f_path}@{commit_id}' |
|
982 | 982 | |
|
983 | 983 | def _get_node_history(self, commit_obj, f_path, commits=None): |
|
984 | 984 | """ |
|
985 | 985 | get commit history for given node |
|
986 | 986 | |
|
987 | 987 | :param commit_obj: commit to calculate history |
|
988 | 988 | :param f_path: path for node to calculate history for |
|
989 | 989 | :param commits: if passed, don't calculate history and take the |

990 | 990 | commits defined in this list |
|
991 | 991 | """ |
|
992 | 992 | _ = self.request.translate |
|
993 | 993 | |
|
994 | 994 | # calculate history based on tip |
|
995 | 995 | tip = self.rhodecode_vcs_repo.get_commit() |
|
996 | 996 | if commits is None: |
|
997 | 997 | pre_load = ["author", "branch"] |
|
998 | 998 | try: |
|
999 | 999 | commits = tip.get_path_history(f_path, pre_load=pre_load) |
|
1000 | 1000 | except (NodeDoesNotExistError, CommitError): |
|
1001 | 1001 | # this node is not present at tip! |
|
1002 | 1002 | commits = commit_obj.get_path_history(f_path, pre_load=pre_load) |
|
1003 | 1003 | |
|
1004 | 1004 | history = [] |
|
1005 | 1005 | commits_group = ([], _("Changesets")) |
|
1006 | 1006 | for commit in commits: |
|
1007 | 1007 | branch = ' (%s)' % commit.branch if commit.branch else '' |
|
1008 | 1008 | n_desc = f'r{commit.idx}:{commit.short_id}{branch}' |
|
1009 | 1009 | commits_group[0].append((commit.raw_id, n_desc, 'sha')) |
|
1010 | 1010 | history.append(commits_group) |
|
1011 | 1011 | |
|
1012 | 1012 | symbolic_reference = self._symbolic_reference |
|
1013 | 1013 | |
|
1014 | 1014 | if self.rhodecode_vcs_repo.alias == 'svn': |
|
1015 | 1015 | adjusted_f_path = RepoFilesView.adjust_file_path_for_svn( |
|
1016 | 1016 | f_path, self.rhodecode_vcs_repo) |
|
1017 | 1017 | if adjusted_f_path != f_path: |
|
1018 | 1018 | log.debug( |
|
1019 | 1019 | 'Recognized svn tag or branch in file "%s", using svn ' |
|
1020 | 1020 | 'specific symbolic references', f_path) |
|
1021 | 1021 | f_path = adjusted_f_path |
|
1022 | 1022 | symbolic_reference = self._symbolic_reference_svn |
|
1023 | 1023 | |
|
1024 | 1024 | branches = self._create_references( |
|
1025 | 1025 | self.rhodecode_vcs_repo.branches, symbolic_reference, f_path, 'branch') |
|
1026 | 1026 | branches_group = (branches, _("Branches")) |
|
1027 | 1027 | |
|
1028 | 1028 | tags = self._create_references( |
|
1029 | 1029 | self.rhodecode_vcs_repo.tags, symbolic_reference, f_path, 'tag') |
|
1030 | 1030 | tags_group = (tags, _("Tags")) |
|
1031 | 1031 | |
|
1032 | 1032 | history.append(branches_group) |
|
1033 | 1033 | history.append(tags_group) |
|
1034 | 1034 | |
|
1035 | 1035 | return history, commits |
|
1036 | 1036 | |
|
1037 | 1037 | @LoginRequired() |
|
1038 | 1038 | @HasRepoPermissionAnyDecorator( |
|
1039 | 1039 | 'repository.read', 'repository.write', 'repository.admin') |
|
1040 | 1040 | def repo_file_history(self): |
|
1041 | 1041 | self.load_default_context() |
|
1042 | 1042 | |
|
1043 | 1043 | commit_id, f_path = self._get_commit_and_path() |
|
1044 | 1044 | commit = self._get_commit_or_redirect(commit_id) |
|
1045 | 1045 | file_node = self._get_filenode_or_redirect(commit, f_path) |
|
1046 | 1046 | |
|
1047 | 1047 | if file_node.is_file(): |
|
1048 | 1048 | file_history, _hist = self._get_node_history(commit, f_path) |
|
1049 | 1049 | |
|
1050 | 1050 | res = [] |
|
1051 | 1051 | for section_items, section in file_history: |
|
1052 | 1052 | items = [] |
|
1053 | 1053 | for obj_id, obj_text, obj_type in section_items: |
|
1054 | 1054 | at_rev = '' |
|
1055 | 1055 | if obj_type in ['branch', 'bookmark', 'tag']: |
|
1056 | 1056 | at_rev = obj_text |
|
1057 | 1057 | entry = { |
|
1058 | 1058 | 'id': obj_id, |
|
1059 | 1059 | 'text': obj_text, |
|
1060 | 1060 | 'type': obj_type, |
|
1061 | 1061 | 'at_rev': at_rev |
|
1062 | 1062 | } |
|
1063 | 1063 | |
|
1064 | 1064 | items.append(entry) |
|
1065 | 1065 | |
|
1066 | 1066 | res.append({ |
|
1067 | 1067 | 'text': section, |
|
1068 | 1068 | 'children': items |
|
1069 | 1069 | }) |
|
1070 | 1070 | |
|
1071 | 1071 | data = { |
|
1072 | 1072 | 'more': False, |
|
1073 | 1073 | 'results': res |
|
1074 | 1074 | } |
|
1075 | 1075 | return data |
|
1076 | 1076 | |
|
1077 | 1077 | log.warning('Cannot fetch history for directory') |
|
1078 | 1078 | raise HTTPBadRequest() |
|
1079 | 1079 | |
|
1080 | 1080 | @LoginRequired() |
|
1081 | 1081 | @HasRepoPermissionAnyDecorator( |
|
1082 | 1082 | 'repository.read', 'repository.write', 'repository.admin') |
|
1083 | 1083 | def repo_file_authors(self): |
|
1084 | 1084 | c = self.load_default_context() |
|
1085 | 1085 | |
|
1086 | 1086 | commit_id, f_path = self._get_commit_and_path() |
|
1087 | 1087 | commit = self._get_commit_or_redirect(commit_id) |
|
1088 | 1088 | file_node = self._get_filenode_or_redirect(commit, f_path) |
|
1089 | 1089 | |
|
1090 | 1090 | if not file_node.is_file(): |
|
1091 | 1091 | raise HTTPBadRequest() |
|
1092 | 1092 | |
|
1093 | 1093 | c.file_last_commit = file_node.last_commit |
|
1094 | 1094 | if self.request.GET.get('annotate') == '1': |
|
1095 | 1095 | # use _hist from annotation if annotation mode is on |
|
1096 | 1096 | commit_ids = {x[1] for x in file_node.annotate} |
|
1097 | 1097 | _hist = ( |
|
1098 | 1098 | self.rhodecode_vcs_repo.get_commit(commit_id) |
|
1099 | 1099 | for commit_id in commit_ids) |
|
1100 | 1100 | else: |
|
1101 | 1101 | _f_history, _hist = self._get_node_history(commit, f_path) |
|
1102 | 1102 | c.file_author = False |
|
1103 | 1103 | |
|
1104 | 1104 | unique = collections.OrderedDict() |
|
1105 | 1105 | for commit in _hist: |
|
1106 | 1106 | author = commit.author |
|
1107 | 1107 | if author not in unique: |
|
1108 | 1108 | unique[commit.author] = [ |
|
1109 | 1109 | h.email(author), |
|
1110 | 1110 | h.person(author, 'username_or_name_or_email'), |
|
1111 | 1111 | 1 # counter |
|
1112 | 1112 | ] |
|
1113 | 1113 | |
|
1114 | 1114 | else: |
|
1115 | 1115 | # increase counter |
|
1116 | 1116 | unique[commit.author][2] += 1 |
|
1117 | 1117 | |
|
1118 | 1118 | c.authors = list(unique.values()) |
|
1119 | 1119 | |
|
1120 | 1120 | return self._get_template_context(c) |
|
1121 | 1121 | |
|
1122 | 1122 | @LoginRequired() |
|
1123 | 1123 | @HasRepoPermissionAnyDecorator('repository.write', 'repository.admin') |
|
1124 | 1124 | def repo_files_check_head(self): |
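# XHR helper (apparently driven by the create/upload file UI): reports
# whether the given commit is still a branch head and whether the target
# path already exists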
|
1125 | 1125 | self.load_default_context() |
|
1126 | 1126 | |
|
1127 | 1127 | commit_id, f_path = self._get_commit_and_path() |
|
1128 | 1128 | _branch_name, _sha_commit_id, is_head = \ |
|
1129 | 1129 | self._is_valid_head(commit_id, self.rhodecode_vcs_repo, |
|
1130 | 1130 | landing_ref=self.db_repo.landing_ref_name) |
|
1131 | 1131 | |
|
1132 | 1132 | new_path = self.request.POST.get('path') |
|
1133 | 1133 | operation = self.request.POST.get('operation') |
|
1134 | 1134 | path_exist = '' |
|
1135 | 1135 | |
|
1136 | 1136 | if new_path and operation in ['create', 'upload']: |
|
1137 | 1137 | new_f_path = os.path.join(f_path.lstrip('/'), new_path) |
|
1138 | 1138 | try: |
|
1139 | 1139 | commit_obj = self.rhodecode_vcs_repo.get_commit(commit_id) |
|
1140 | 1140 | # NOTE(dan): construct whole path without leading / |
|
1141 | 1141 | file_node = commit_obj.get_node(new_f_path) |
|
1142 | 1142 | if file_node is not None: |
|
1143 | 1143 | path_exist = new_f_path |
|
1144 | 1144 | except EmptyRepositoryError: |
|
1145 | 1145 | pass |
|
1146 | 1146 | except Exception: |
|
1147 | 1147 | pass |
|
1148 | 1148 | |
|
1149 | 1149 | return { |
|
1150 | 1150 | 'branch': _branch_name, |
|
1151 | 1151 | 'sha': _sha_commit_id, |
|
1152 | 1152 | 'is_head': is_head, |
|
1153 | 1153 | 'path_exists': path_exist |
|
1154 | 1154 | } |
|
1155 | 1155 | |
|
1156 | 1156 | @LoginRequired() |
|
1157 | 1157 | @HasRepoPermissionAnyDecorator('repository.write', 'repository.admin') |
|
1158 | 1158 | def repo_files_remove_file(self): |
|
1159 | 1159 | _ = self.request.translate |
|
1160 | 1160 | c = self.load_default_context() |
|
1161 | 1161 | commit_id, f_path = self._get_commit_and_path() |
|
1162 | 1162 | |
|
1163 | 1163 | self._ensure_not_locked() |
|
1164 | 1164 | _branch_name, _sha_commit_id, is_head = \ |
|
1165 | 1165 | self._is_valid_head(commit_id, self.rhodecode_vcs_repo, |
|
1166 | 1166 | landing_ref=self.db_repo.landing_ref_name) |
|
1167 | 1167 | |
|
1168 | 1168 | self.forbid_non_head(is_head, f_path) |
|
1169 | 1169 | self.check_branch_permission(_branch_name) |
|
1170 | 1170 | |
|
1171 | 1171 | c.commit = self._get_commit_or_redirect(commit_id) |
|
1172 | 1172 | c.file = self._get_filenode_or_redirect(c.commit, f_path) |
|
1173 | 1173 | |
|
1174 | 1174 | c.default_message = _( |
|
1175 | 1175 | 'Deleted file {} via RhodeCode Enterprise').format(f_path) |
|
1176 | 1176 | c.f_path = f_path |
|
1177 | 1177 | |
|
1178 | 1178 | return self._get_template_context(c) |
|
1179 | 1179 | |
|
1180 | 1180 | @LoginRequired() |
|
1181 | 1181 | @HasRepoPermissionAnyDecorator('repository.write', 'repository.admin') |
|
1182 | 1182 | @CSRFRequired() |
|
1183 | 1183 | def repo_files_delete_file(self): |
|
1184 | 1184 | _ = self.request.translate |
|
1185 | 1185 | |
|
1186 | 1186 | c = self.load_default_context() |
|
1187 | 1187 | commit_id, f_path = self._get_commit_and_path() |
|
1188 | 1188 | |
|
1189 | 1189 | self._ensure_not_locked() |
|
1190 | 1190 | _branch_name, _sha_commit_id, is_head = \ |
|
1191 | 1191 | self._is_valid_head(commit_id, self.rhodecode_vcs_repo, |
|
1192 | 1192 | landing_ref=self.db_repo.landing_ref_name) |
|
1193 | 1193 | |
|
1194 | 1194 | self.forbid_non_head(is_head, f_path) |
|
1195 | 1195 | self.check_branch_permission(_branch_name) |
|
1196 | 1196 | |
|
1197 | 1197 | c.commit = self._get_commit_or_redirect(commit_id) |
|
1198 | 1198 | c.file = self._get_filenode_or_redirect(c.commit, f_path) |
|
1199 | 1199 | |
|
1200 | 1200 | c.default_message = _( |
|
1201 | 1201 | 'Deleted file {} via RhodeCode Enterprise').format(f_path) |
|
1202 | 1202 | c.f_path = f_path |
|
1203 | 1203 | node_path = f_path |
|
1204 | 1204 | author = self._rhodecode_db_user.full_contact |
|
1205 | 1205 | message = self.request.POST.get('message') or c.default_message |
|
1206 | 1206 | try: |
|
1207 | 1207 | nodes = { |
|
1208 | 1208 | safe_bytes(node_path): { |
|
1209 | 1209 | 'content': b'' |
|
1210 | 1210 | } |
|
1211 | 1211 | } |
|
1212 | 1212 | ScmModel().delete_nodes( |
|
1213 | 1213 | user=self._rhodecode_db_user.user_id, repo=self.db_repo, |
|
1214 | 1214 | message=message, |
|
1215 | 1215 | nodes=nodes, |
|
1216 | 1216 | parent_commit=c.commit, |
|
1217 | 1217 | author=author, |
|
1218 | 1218 | ) |
|
1219 | 1219 | |
|
1220 | 1220 | h.flash( |
|
1221 | 1221 | _('Successfully deleted file `{}`').format( |
|
1222 | 1222 | h.escape(f_path)), category='success') |
|
1223 | 1223 | except Exception: |
|
1224 | 1224 | log.exception('Error during commit operation') |
|
1225 | 1225 | h.flash(_('Error occurred during commit'), category='error') |
|
1226 | 1226 | raise HTTPFound( |
|
1227 | 1227 | h.route_path('repo_commit', repo_name=self.db_repo_name, |
|
1228 | 1228 | commit_id='tip')) |
|
1229 | 1229 | |
|
1230 | 1230 | @LoginRequired() |
|
1231 | 1231 | @HasRepoPermissionAnyDecorator('repository.write', 'repository.admin') |
|
1232 | 1232 | def repo_files_edit_file(self): |
|
1233 | 1233 | _ = self.request.translate |
|
1234 | 1234 | c = self.load_default_context() |
|
1235 | 1235 | commit_id, f_path = self._get_commit_and_path() |
|
1236 | 1236 | |
|
1237 | 1237 | self._ensure_not_locked() |
|
1238 | 1238 | _branch_name, _sha_commit_id, is_head = \ |
|
1239 | 1239 | self._is_valid_head(commit_id, self.rhodecode_vcs_repo, |
|
1240 | 1240 | landing_ref=self.db_repo.landing_ref_name) |
|
1241 | 1241 | |
|
1242 | 1242 | self.forbid_non_head(is_head, f_path, commit_id=commit_id) |
|
1243 | 1243 | self.check_branch_permission(_branch_name, commit_id=commit_id) |
|
1244 | 1244 | |
|
1245 | 1245 | c.commit = self._get_commit_or_redirect(commit_id) |
|
1246 | 1246 | c.file = self._get_filenode_or_redirect(c.commit, f_path) |
|
1247 | 1247 | |
|
1248 | 1248 | if c.file.is_binary: |
|
1249 | 1249 | files_url = h.route_path( |
|
1250 | 1250 | 'repo_files', |
|
1251 | 1251 | repo_name=self.db_repo_name, |
|
1252 | 1252 | commit_id=c.commit.raw_id, f_path=f_path) |
|
1253 | 1253 | raise HTTPFound(files_url) |
|
1254 | 1254 | |
|
1255 | 1255 | c.default_message = _('Edited file {} via RhodeCode Enterprise').format(f_path) |
|
1256 | 1256 | c.f_path = f_path |
|
1257 | 1257 | |
|
1258 | 1258 | return self._get_template_context(c) |
|
1259 | 1259 | |
|
1260 | 1260 | @LoginRequired() |
|
1261 | 1261 | @HasRepoPermissionAnyDecorator('repository.write', 'repository.admin') |
|
1262 | 1262 | @CSRFRequired() |
|
1263 | 1263 | def repo_files_update_file(self): |
|
1264 | 1264 | _ = self.request.translate |
|
1265 | 1265 | c = self.load_default_context() |
|
1266 | 1266 | commit_id, f_path = self._get_commit_and_path() |
|
1267 | 1267 | |
|
1268 | 1268 | self._ensure_not_locked() |
|
1269 | 1269 | |
|
1270 | 1270 | c.commit = self._get_commit_or_redirect(commit_id) |
|
1271 | 1271 | c.file = self._get_filenode_or_redirect(c.commit, f_path) |
|
1272 | 1272 | |
|
1273 | 1273 | if c.file.is_binary: |
|
1274 | 1274 | raise HTTPFound(h.route_path('repo_files', repo_name=self.db_repo_name, |
|
1275 | 1275 | commit_id=c.commit.raw_id, f_path=f_path)) |
|
1276 | 1276 | |
|
1277 | 1277 | _branch_name, _sha_commit_id, is_head = \ |
|
1278 | 1278 | self._is_valid_head(commit_id, self.rhodecode_vcs_repo, |
|
1279 | 1279 | landing_ref=self.db_repo.landing_ref_name) |
|
1280 | 1280 | |
|
1281 | 1281 | self.forbid_non_head(is_head, f_path, commit_id=commit_id) |
|
1282 | 1282 | self.check_branch_permission(_branch_name, commit_id=commit_id) |
|
1283 | 1283 | |
|
1284 | 1284 | c.default_message = _('Edited file {} via RhodeCode Enterprise').format(f_path) |
|
1285 | 1285 | c.f_path = f_path |
|
1286 | 1286 | |
|
1287 | 1287 | old_content = c.file.str_content |
|
1288 | 1288 | sl = old_content.splitlines(True) |
|
1289 | 1289 | first_line = sl[0] if sl else '' |
|
1290 | 1290 | |
|
1291 | 1291 | r_post = self.request.POST |
|
1292 | 1292 | # line endings: 0 - Unix, 1 - Mac, 2 - DOS |
|
1293 | 1293 | line_ending_mode = detect_mode(first_line, 0) |
|
1294 | 1294 | content = convert_line_endings(r_post.get('content', ''), line_ending_mode) |
|
1295 | 1295 | |
|
1296 | 1296 | message = r_post.get('message') or c.default_message |
|
1297 | 1297 | |
|
1298 | 1298 | org_node_path = c.file.str_path |
|
1299 | 1299 | filename = r_post['filename'] |
|
1300 | 1300 | |
|
1301 | 1301 | root_path = c.file.dir_path |
|
1302 | 1302 | pure_path = self.create_pure_path(root_path, filename) |
|
1303 | 1303 | node_path = pure_path.as_posix() |
|
1304 | 1304 | |
|
1305 | 1305 | default_redirect_url = h.route_path('repo_commit', repo_name=self.db_repo_name, |
|
1306 | 1306 | commit_id=commit_id) |
|
1307 | 1307 | if content == old_content and node_path == org_node_path: |
|
1308 | 1308 | h.flash(_('No changes detected on {}').format(h.escape(org_node_path)), |
|
1309 | 1309 | category='warning') |
|
1310 | 1310 | raise HTTPFound(default_redirect_url) |
|
1311 | 1311 | |
|
1312 | 1312 | try: |
|
1313 | 1313 | mapping = { |
|
1314 | 1314 | c.file.bytes_path: { |
|
1315 | 1315 | 'org_filename': org_node_path, |
|
1316 | 1316 | 'filename': safe_bytes(node_path), |
|
1317 | 1317 | 'content': safe_bytes(content), |
|
1318 | 1318 | 'lexer': '', |
|
1319 | 1319 | 'op': 'mod', |
|
1320 | 1320 | 'mode': c.file.mode |
|
1321 | 1321 | } |
|
1322 | 1322 | } |
|
1323 | 1323 | |
|
1324 | 1324 | commit = ScmModel().update_nodes( |
|
1325 | 1325 | user=self._rhodecode_db_user.user_id, |
|
1326 | 1326 | repo=self.db_repo, |
|
1327 | 1327 | message=message, |
|
1328 | 1328 | nodes=mapping, |
|
1329 | 1329 | parent_commit=c.commit, |
|
1330 | 1330 | ) |
|
1331 | 1331 | |
|
1332 | 1332 | h.flash(_('Successfully committed changes to file `{}`').format( |
|
1333 | 1333 | h.escape(f_path)), category='success') |
|
1334 | 1334 | default_redirect_url = h.route_path( |
|
1335 | 1335 | 'repo_commit', repo_name=self.db_repo_name, commit_id=commit.raw_id) |
|
1336 | 1336 | |
|
1337 | 1337 | except Exception: |
|
1338 | 1338 | log.exception('Error occurred during commit') |
|
1339 | 1339 | h.flash(_('Error occurred during commit'), category='error') |
|
1340 | 1340 | |
|
1341 | 1341 | raise HTTPFound(default_redirect_url) |
|
1342 | 1342 | |
|
1343 | 1343 | @LoginRequired() |
|
1344 | 1344 | @HasRepoPermissionAnyDecorator('repository.write', 'repository.admin') |
|
1345 | 1345 | def repo_files_add_file(self): |
|
1346 | 1346 | _ = self.request.translate |
|
1347 | 1347 | c = self.load_default_context() |
|
1348 | 1348 | commit_id, f_path = self._get_commit_and_path() |
|
1349 | 1349 | |
|
1350 | 1350 | self._ensure_not_locked() |
|
1351 | 1351 | |
|
1352 | 1352 | # Check if we need to use this page to upload binary |
|
1353 | 1353 | upload_binary = str2bool(self.request.params.get('upload_binary', False)) |
|
1354 | 1354 | |
|
1355 | 1355 | c.commit = self._get_commit_or_redirect(commit_id, redirect_after=False) |
|
1356 | 1356 | if c.commit is None: |
|
1357 | 1357 | c.commit = EmptyCommit(alias=self.rhodecode_vcs_repo.alias) |
|
1358 | 1358 | |
|
1359 | 1359 | if self.rhodecode_vcs_repo.is_empty(): |
|
1360 | 1360 | # for empty repository we cannot check for current branch, we rely on |
|
1361 | 1361 | # c.commit.branch instead |
|
1362 | 1362 | _branch_name, _sha_commit_id, is_head = c.commit.branch, '', True |
|
1363 | 1363 | else: |
|
1364 | 1364 | _branch_name, _sha_commit_id, is_head = \ |
|
1365 | 1365 | self._is_valid_head(commit_id, self.rhodecode_vcs_repo, |
|
1366 | 1366 | landing_ref=self.db_repo.landing_ref_name) |
|
1367 | 1367 | |
|
1368 | 1368 | self.forbid_non_head(is_head, f_path, commit_id=commit_id) |
|
1369 | 1369 | self.check_branch_permission(_branch_name, commit_id=commit_id) |
|
1370 | 1370 | |
|
1371 | 1371 | c.default_message = (_('Added file via RhodeCode Enterprise')) \ |
|
1372 | 1372 | if not upload_binary else (_('Edited file {} via RhodeCode Enterprise').format(f_path)) |
|
1373 | 1373 | c.f_path = f_path.lstrip('/') # ensure not relative path |
|
1374 | 1374 | c.replace_binary = upload_binary |
|
1375 | 1375 | |
|
1376 | 1376 | return self._get_template_context(c) |
|
1377 | 1377 | |
|
1378 | 1378 | @LoginRequired() |
|
1379 | 1379 | @HasRepoPermissionAnyDecorator('repository.write', 'repository.admin') |
|
1380 | 1380 | @CSRFRequired() |
|
1381 | 1381 | def repo_files_create_file(self): |
|
1382 | 1382 | _ = self.request.translate |
|
1383 | 1383 | c = self.load_default_context() |
|
1384 | 1384 | commit_id, f_path = self._get_commit_and_path() |
|
1385 | 1385 | |
|
1386 | 1386 | self._ensure_not_locked() |
|
1387 | 1387 | |
|
1388 | 1388 | c.commit = self._get_commit_or_redirect(commit_id, redirect_after=False) |
|
1389 | 1389 | if c.commit is None: |
|
1390 | 1390 | c.commit = EmptyCommit(alias=self.rhodecode_vcs_repo.alias) |
|
1391 | 1391 | |
|
1392 | 1392 | # calculate redirect URL |
|
1393 | 1393 | if self.rhodecode_vcs_repo.is_empty(): |
|
1394 | 1394 | default_redirect_url = h.route_path( |
|
1395 | 1395 | 'repo_summary', repo_name=self.db_repo_name) |
|
1396 | 1396 | else: |
|
1397 | 1397 | default_redirect_url = h.route_path( |
|
1398 | 1398 | 'repo_commit', repo_name=self.db_repo_name, commit_id='tip') |
|
1399 | 1399 | |
|
1400 | 1400 | if self.rhodecode_vcs_repo.is_empty(): |
|
1401 | 1401 | # for empty repository we cannot check for current branch, we rely on |
|
1402 | 1402 | # c.commit.branch instead |
|
1403 | 1403 | _branch_name, _sha_commit_id, is_head = c.commit.branch, '', True |
|
1404 | 1404 | else: |
|
1405 | 1405 | _branch_name, _sha_commit_id, is_head = \ |
|
1406 | 1406 | self._is_valid_head(commit_id, self.rhodecode_vcs_repo, |
|
1407 | 1407 | landing_ref=self.db_repo.landing_ref_name) |
|
1408 | 1408 | |
|
1409 | 1409 | self.forbid_non_head(is_head, f_path, commit_id=commit_id) |
|
1410 | 1410 | self.check_branch_permission(_branch_name, commit_id=commit_id) |
|
1411 | 1411 | |
|
1412 | 1412 | c.default_message = (_('Added file via RhodeCode Enterprise')) |
|
1413 | 1413 | c.f_path = f_path |
|
1414 | 1414 | |
|
1415 | 1415 | r_post = self.request.POST |
|
1416 | 1416 | message = r_post.get('message') or c.default_message |
|
1417 | 1417 | filename = r_post.get('filename') |
|
1418 | 1418 | unix_mode = 0 |
|
1419 | 1419 | |
|
1420 | 1420 | if not filename: |
|
1421 | 1421 | # If there's no commit, redirect to repo summary |
|
1422 | 1422 | if type(c.commit) is EmptyCommit: |
|
1423 | 1423 | redirect_url = h.route_path( |
|
1424 | 1424 | 'repo_summary', repo_name=self.db_repo_name) |
|
1425 | 1425 | else: |
|
1426 | 1426 | redirect_url = default_redirect_url |
|
1427 | 1427 | h.flash(_('No filename specified'), category='warning') |
|
1428 | 1428 | raise HTTPFound(redirect_url) |
|
1429 | 1429 | |
|
1430 | 1430 | root_path = f_path |
|
1431 | 1431 | pure_path = self.create_pure_path(root_path, filename) |
|
1432 | 1432 | node_path = pure_path.as_posix().lstrip('/') |
|
1433 | 1433 | |
|
1434 | 1434 | author = self._rhodecode_db_user.full_contact |
|
1435 | 1435 | content = convert_line_endings(r_post.get('content', ''), unix_mode) |
|
1436 | 1436 | nodes = { |
|
1437 | 1437 | safe_bytes(node_path): { |
|
1438 | 1438 | 'content': safe_bytes(content) |
|
1439 | 1439 | } |
|
1440 | 1440 | } |
|
1441 | 1441 | |
|
1442 | 1442 | try: |
|
1443 | 1443 | |
|
1444 | 1444 | commit = ScmModel().create_nodes( |
|
1445 | 1445 | user=self._rhodecode_db_user.user_id, |
|
1446 | 1446 | repo=self.db_repo, |
|
1447 | 1447 | message=message, |
|
1448 | 1448 | nodes=nodes, |
|
1449 | 1449 | parent_commit=c.commit, |
|
1450 | 1450 | author=author, |
|
1451 | 1451 | ) |
|
1452 | 1452 | |
|
1453 | 1453 | h.flash(_('Successfully committed new file `{}`').format( |
|
1454 | 1454 | h.escape(node_path)), category='success') |
|
1455 | 1455 | |
|
1456 | 1456 | default_redirect_url = h.route_path( |
|
1457 | 1457 | 'repo_commit', repo_name=self.db_repo_name, commit_id=commit.raw_id) |
|
1458 | 1458 | |
|
1459 | 1459 | except NonRelativePathError: |
|
1460 | 1460 | log.exception('Non Relative path found') |
|
1461 | 1461 | h.flash(_('The location specified must be a relative path and must not ' |
|
1462 | 1462 | 'contain .. in the path'), category='warning') |
|
1463 | 1463 | raise HTTPFound(default_redirect_url) |
|
1464 | 1464 | except (NodeError, NodeAlreadyExistsError) as e: |
|
1465 | 1465 | h.flash(h.escape(safe_str(e)), category='error') |
|
1466 | 1466 | except Exception: |
|
1467 | 1467 | log.exception('Error occurred during commit') |
|
1468 | 1468 | h.flash(_('Error occurred during commit'), category='error') |
|
1469 | 1469 | |
|
1470 | 1470 | raise HTTPFound(default_redirect_url) |
|
1471 | 1471 | |
|
1472 | 1472 | @LoginRequired() |
|
1473 | 1473 | @HasRepoPermissionAnyDecorator('repository.write', 'repository.admin') |
|
1474 | 1474 | @CSRFRequired() |
|
1475 | 1475 | def repo_files_upload_file(self): |
|
1476 | 1476 | _ = self.request.translate |
|
1477 | 1477 | c = self.load_default_context() |
|
1478 | 1478 | commit_id, f_path = self._get_commit_and_path() |
|
1479 | 1479 | |
|
1480 | 1480 | self._ensure_not_locked() |
|
1481 | 1481 | |
|
1482 | 1482 | c.commit = self._get_commit_or_redirect(commit_id, redirect_after=False) |
|
1483 | 1483 | if c.commit is None: |
|
1484 | 1484 | c.commit = EmptyCommit(alias=self.rhodecode_vcs_repo.alias) |
|
1485 | 1485 | |
|
1486 | 1486 | # calculate redirect URL |
|
1487 | 1487 | if self.rhodecode_vcs_repo.is_empty(): |
|
1488 | 1488 | default_redirect_url = h.route_path( |
|
1489 | 1489 | 'repo_summary', repo_name=self.db_repo_name) |
|
1490 | 1490 | else: |
|
1491 | 1491 | default_redirect_url = h.route_path( |
|
1492 | 1492 | 'repo_commit', repo_name=self.db_repo_name, commit_id='tip') |
|
1493 | 1493 | |
|
1494 | 1494 | if self.rhodecode_vcs_repo.is_empty(): |
|
1495 | 1495 | # for empty repository we cannot check for current branch, we rely on |
|
1496 | 1496 | # c.commit.branch instead |
|
1497 | 1497 | _branch_name, _sha_commit_id, is_head = c.commit.branch, '', True |
|
1498 | 1498 | else: |
|
1499 | 1499 | _branch_name, _sha_commit_id, is_head = \ |
|
1500 | 1500 | self._is_valid_head(commit_id, self.rhodecode_vcs_repo, |
|
1501 | 1501 | landing_ref=self.db_repo.landing_ref_name) |
|
1502 | 1502 | |
|
1503 | 1503 | error = self.forbid_non_head(is_head, f_path, json_mode=True) |
|
1504 | 1504 | if error: |
|
1505 | 1505 | return { |
|
1506 | 1506 | 'error': error, |
|
1507 | 1507 | 'redirect_url': default_redirect_url |
|
1508 | 1508 | } |
|
1509 | 1509 | error = self.check_branch_permission(_branch_name, json_mode=True) |
|
1510 | 1510 | if error: |
|
1511 | 1511 | return { |
|
1512 | 1512 | 'error': error, |
|
1513 | 1513 | 'redirect_url': default_redirect_url |
|
1514 | 1514 | } |
|
1515 | 1515 | |
|
1516 | 1516 | c.default_message = (_('Added file via RhodeCode Enterprise')) |
|
1517 | 1517 | c.f_path = f_path |
|
1518 | 1518 | |
|
1519 | 1519 | r_post = self.request.POST |
|
1520 | 1520 | |
|
1521 | 1521 | message = c.default_message |
|
1522 | 1522 | user_message = r_post.getall('message') |
|
1523 | 1523 | if isinstance(user_message, list) and user_message: |
|
1524 | 1524 | # we take the first from duplicated results if it's not empty |
|
1525 | 1525 | message = user_message[0] if user_message[0] else message |
|
1526 | 1526 | |
|
1527 | 1527 | nodes = {} |
|
1528 | 1528 | |
|
1529 | 1529 | for file_obj in r_post.getall('files_upload') or []: |
|
1530 | 1530 | content = file_obj.file |
|
1531 | 1531 | filename = file_obj.filename |
|
1532 | 1532 | |
|
1533 | 1533 | root_path = f_path |
|
1534 | 1534 | pure_path = self.create_pure_path(root_path, filename) |
|
1535 | 1535 | node_path = pure_path.as_posix().lstrip('/') |
|
1536 | 1536 | |
|
1537 | 1537 | nodes[safe_bytes(node_path)] = { |
|
1538 | 1538 | 'content': content |
|
1539 | 1539 | } |
|
1540 | 1540 | |
|
1541 | 1541 | if not nodes: |
|
1542 | 1542 | error = 'missing files' |
|
1543 | 1543 | return { |
|
1544 | 1544 | 'error': error, |
|
1545 | 1545 | 'redirect_url': default_redirect_url |
|
1546 | 1546 | } |
|
1547 | 1547 | |
|
1548 | 1548 | author = self._rhodecode_db_user.full_contact |
|
1549 | 1549 | |
|
1550 | 1550 | try: |
|
1551 | 1551 | commit = ScmModel().create_nodes( |
|
1552 | 1552 | user=self._rhodecode_db_user.user_id, |
|
1553 | 1553 | repo=self.db_repo, |
|
1554 | 1554 | message=message, |
|
1555 | 1555 | nodes=nodes, |
|
1556 | 1556 | parent_commit=c.commit, |
|
1557 | 1557 | author=author, |
|
1558 | 1558 | ) |
|
1559 | 1559 | if len(nodes) == 1: |

1560 | 1560 | flash_message = _('Successfully committed 1 new file') |

1561 | 1561 | else: |

1562 | 1562 | flash_message = _('Successfully committed {} new files').format(len(nodes)) |
|
1563 | 1563 | |
|
1564 | 1564 | h.flash(flash_message, category='success') |
|
1565 | 1565 | |
|
1566 | 1566 | default_redirect_url = h.route_path( |
|
1567 | 1567 | 'repo_commit', repo_name=self.db_repo_name, commit_id=commit.raw_id) |
|
1568 | 1568 | |
|
1569 | 1569 | except NonRelativePathError: |
|
1570 | 1570 | log.exception('Non Relative path found') |
|
1571 | 1571 | error = _('The location specified must be a relative path and must not ' |
|
1572 | 1572 | 'contain .. in the path') |
|
1573 | 1573 | h.flash(error, category='warning') |
|
1574 | 1574 | |
|
1575 | 1575 | return { |
|
1576 | 1576 | 'error': error, |
|
1577 | 1577 | 'redirect_url': default_redirect_url |
|
1578 | 1578 | } |
|
1579 | 1579 | except (NodeError, NodeAlreadyExistsError) as e: |
|
1580 | 1580 | error = h.escape(safe_str(e)) |
|
1581 | 1581 | h.flash(error, category='error') |
|
1582 | 1582 | |
|
1583 | 1583 | return { |
|
1584 | 1584 | 'error': error, |
|
1585 | 1585 | 'redirect_url': default_redirect_url |
|
1586 | 1586 | } |
|
1587 | 1587 | except Exception: |
|
1588 | 1588 | log.exception('Error occurred during commit') |
|
1589 | 1589 | error = _('Error occurred during commit') |
|
1590 | 1590 | h.flash(error, category='error') |
|
1591 | 1591 | return { |
|
1592 | 1592 | 'error': error, |
|
1593 | 1593 | 'redirect_url': default_redirect_url |
|
1594 | 1594 | } |
|
1595 | 1595 | |
|
1596 | 1596 | return { |
|
1597 | 1597 | 'error': None, |
|
1598 | 1598 | 'redirect_url': default_redirect_url |
|
1599 | 1599 | } |
|
1600 | 1600 | |
|
1601 | 1601 | @LoginRequired() |
|
1602 | 1602 | @HasRepoPermissionAnyDecorator('repository.write', 'repository.admin') |
|
1603 | 1603 | @CSRFRequired() |
|
1604 | 1604 | def repo_files_replace_file(self): |
|
1605 | 1605 | _ = self.request.translate |
|
1606 | 1606 | c = self.load_default_context() |
|
1607 | 1607 | commit_id, f_path = self._get_commit_and_path() |
|
1608 | 1608 | |
|
1609 | 1609 | self._ensure_not_locked() |
|
1610 | 1610 | |
|
1611 | 1611 | c.commit = self._get_commit_or_redirect(commit_id, redirect_after=False) |
|
1612 | 1612 | if c.commit is None: |
|
1613 | 1613 | c.commit = EmptyCommit(alias=self.rhodecode_vcs_repo.alias) |
|
1614 | 1614 | |
|
1615 | 1615 | if self.rhodecode_vcs_repo.is_empty(): |
|
1616 | 1616 | default_redirect_url = h.route_path( |
|
1617 | 1617 | 'repo_summary', repo_name=self.db_repo_name) |
|
1618 | 1618 | else: |
|
1619 | 1619 | default_redirect_url = h.route_path( |
|
1620 | 1620 | 'repo_commit', repo_name=self.db_repo_name, commit_id='tip') |
|
1621 | 1621 | |
|
1622 | 1622 | if self.rhodecode_vcs_repo.is_empty(): |
|
1623 | 1623 | # for an empty repository we cannot check the current branch; we rely on |

1624 | 1624 | # c.commit.branch instead |
|
1625 | 1625 | _branch_name, _sha_commit_id, is_head = c.commit.branch, '', True |
|
1626 | 1626 | else: |
|
1627 | 1627 | _branch_name, _sha_commit_id, is_head = \ |
|
1628 | 1628 | self._is_valid_head(commit_id, self.rhodecode_vcs_repo, |
|
1629 | 1629 | landing_ref=self.db_repo.landing_ref_name) |
|
1630 | 1630 | |
|
1631 | 1631 | error = self.forbid_non_head(is_head, f_path, json_mode=True) |
|
1632 | 1632 | if error: |
|
1633 | 1633 | return { |
|
1634 | 1634 | 'error': error, |
|
1635 | 1635 | 'redirect_url': default_redirect_url |
|
1636 | 1636 | } |
|
1637 | 1637 | error = self.check_branch_permission(_branch_name, json_mode=True) |
|
1638 | 1638 | if error: |
|
1639 | 1639 | return { |
|
1640 | 1640 | 'error': error, |
|
1641 | 1641 | 'redirect_url': default_redirect_url |
|
1642 | 1642 | } |
|
1643 | 1643 | |
|
1644 | 1644 | c.default_message = (_('Edited file {} via RhodeCode Enterprise').format(f_path)) |
|
1645 | 1645 | c.f_path = f_path |
|
1646 | 1646 | |
|
1647 | 1647 | r_post = self.request.POST |
|
1648 | 1648 | |
|
1649 | 1649 | message = c.default_message |
|
1650 | 1650 | user_message = r_post.getall('message') |
|
1651 | 1651 | if isinstance(user_message, list) and user_message: |
|
1652 | 1652 | # we take the first from duplicated results if it's not empty |
|
1653 | 1653 | message = user_message[0] if user_message[0] else message |
|
1654 | 1654 | |
|
1655 | 1655 | data_for_replacement = r_post.getall('files_upload') or [] |
|
1656 | 1656 | if (objects_count := len(data_for_replacement)) > 1: |
|
1657 | 1657 | return { |
|
1658 | 1658 | 'error': 'too many files for replacement', |
|
1659 | 1659 | 'redirect_url': default_redirect_url |
|
1660 | 1660 | } |
|
1661 | 1661 | elif not objects_count: |
|
1662 | 1662 | return { |
|
1663 | 1663 | 'error': 'missing files', |
|
1664 | 1664 | 'redirect_url': default_redirect_url |
|
1665 | 1665 | } |
|
1666 | 1666 | |
|
1667 | 1667 | content = data_for_replacement[0].file |
|
1668 | 1668 | retrieved_filename = data_for_replacement[0].filename |
|
1669 | 1669 | |
|
1670 | 1670 | if retrieved_filename.split('.')[-1] != f_path.split('.')[-1]: |
|
1671 | 1671 | return { |
|
1672 | 1672 | 'error': 'file extension of uploaded file doesn\'t match the original file\'s extension', |
|
1673 | 1673 | 'redirect_url': default_redirect_url |
|
1674 | 1674 | } |
|
1675 | 1675 | |
|
1676 | 1676 | author = self._rhodecode_db_user.full_contact |
|
1677 | 1677 | |
|
1678 | 1678 | try: |
|
1679 | 1679 | commit = ScmModel().update_binary_node( |
|
1680 | 1680 | user=self._rhodecode_db_user.user_id, |
|
1681 | 1681 | repo=self.db_repo, |
|
1682 | 1682 | message=message, |
|
1683 | 1683 | node={ |
|
1684 | 1684 | 'content': content, |
|
1685 | 1685 | 'file_path': f_path.encode(), |
|
1686 | 1686 | }, |
|
1687 | 1687 | parent_commit=c.commit, |
|
1688 | 1688 | author=author, |
|
1689 | 1689 | ) |
|
1690 | 1690 | |
|
1691 | 1691 | h.flash(_('Successfully committed 1 new file'), category='success') |
|
1692 | 1692 | |
|
1693 | 1693 | default_redirect_url = h.route_path( |
|
1694 | 1694 | 'repo_commit', repo_name=self.db_repo_name, commit_id=commit.raw_id) |
|
1695 | 1695 | |
|
1696 | 1696 | except (NodeError, NodeAlreadyExistsError) as e: |
|
1697 | 1697 | error = h.escape(e) |
|
1698 | 1698 | h.flash(error, category='error') |
|
1699 | 1699 | |
|
1700 | 1700 | return { |
|
1701 | 1701 | 'error': error, |
|
1702 | 1702 | 'redirect_url': default_redirect_url |
|
1703 | 1703 | } |
|
1704 | 1704 | except Exception: |
|
1705 | 1705 | log.exception('Error occurred during commit') |
|
1706 | 1706 | error = _('Error occurred during commit') |
|
1707 | 1707 | h.flash(error, category='error') |
|
1708 | 1708 | return { |
|
1709 | 1709 | 'error': error, |
|
1710 | 1710 | 'redirect_url': default_redirect_url |
|
1711 | 1711 | } |
|
1712 | 1712 | |
|
1713 | 1713 | return { |
|
1714 | 1714 | 'error': None, |
|
1715 | 1715 | 'redirect_url': default_redirect_url |
|
1716 | 1716 | } |
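
The guard in ``repo_files_replace_file`` compares only the text after the last dot in each name. A self-contained sketch of that comparison (``extensions_match`` is a hypothetical helper, not part of the change), including the dotless edge case where the whole basename is compared::

    def extensions_match(uploaded_name: str, target_path: str) -> bool:
        # Mirror the check above: compare everything after the last '.'.
        # For names without a dot, split('.')[-1] is the whole string.
        return uploaded_name.split('.')[-1] == target_path.split('.')[-1]

    assert extensions_match('logo_v2.png', 'static/logo.png')
    assert not extensions_match('logo.jpg', 'static/logo.png')
    assert not extensions_match('Makefile', 'static/logo.png')
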
@@ -1,302 +1,309 b'' | |||
|
1 | 1 | # Copyright (C) 2011-2023 RhodeCode GmbH |
|
2 | 2 | # |
|
3 | 3 | # This program is free software: you can redistribute it and/or modify |
|
4 | 4 | # it under the terms of the GNU Affero General Public License, version 3 |
|
5 | 5 | # (only), as published by the Free Software Foundation. |
|
6 | 6 | # |
|
7 | 7 | # This program is distributed in the hope that it will be useful, |
|
8 | 8 | # but WITHOUT ANY WARRANTY; without even the implied warranty of |
|
9 | 9 | # MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the |
|
10 | 10 | # GNU General Public License for more details. |
|
11 | 11 | # |
|
12 | 12 | # You should have received a copy of the GNU Affero General Public License |
|
13 | 13 | # along with this program. If not, see <http://www.gnu.org/licenses/>. |
|
14 | 14 | # |
|
15 | 15 | # This program is dual-licensed. If you wish to learn more about the |
|
16 | 16 | # RhodeCode Enterprise Edition, including its added features, Support services, |
|
17 | 17 | # and proprietary license terms, please see https://rhodecode.com/licenses/ |
|
18 | 18 | |
|
19 | 19 | import logging |
|
20 | 20 | |
|
21 | 21 | |
|
22 | 22 | from pyramid.httpexceptions import HTTPFound |
|
23 | 23 | from packaging.version import Version |
|
24 | 24 | |
|
25 | 25 | from rhodecode import events |
|
26 | 26 | from rhodecode.apps._base import RepoAppView |
|
27 | 27 | from rhodecode.lib import helpers as h |
|
28 | 28 | from rhodecode.lib import audit_logger |
|
29 | 29 | from rhodecode.lib.auth import ( |
|
30 | 30 | LoginRequired, HasRepoPermissionAnyDecorator, CSRFRequired, |
|
31 | 31 | HasRepoPermissionAny) |
|
32 | from rhodecode.lib.exceptions import AttachedForksError, AttachedPullRequestsError | |
|
32 | from rhodecode.lib.exceptions import AttachedForksError, AttachedPullRequestsError, AttachedArtifactsError | |
|
33 | 33 | from rhodecode.lib.utils2 import safe_int |
|
34 | 34 | from rhodecode.lib.vcs import RepositoryError |
|
35 | 35 | from rhodecode.model.db import Session, UserFollowing, User, Repository |
|
36 | 36 | from rhodecode.model.permission import PermissionModel |
|
37 | 37 | from rhodecode.model.repo import RepoModel |
|
38 | 38 | from rhodecode.model.scm import ScmModel |
|
39 | 39 | |
|
40 | 40 | log = logging.getLogger(__name__) |
|
41 | 41 | |
|
42 | 42 | |
|
43 | 43 | class RepoSettingsAdvancedView(RepoAppView): |
|
44 | 44 | |
|
45 | 45 | def load_default_context(self): |
|
46 | 46 | c = self._get_local_tmpl_context() |
|
47 | 47 | return c |
|
48 | 48 | |
|
49 | 49 | def _get_users_with_permissions(self): |
|
50 | 50 | user_permissions = {} |
|
51 | 51 | for perm in self.db_repo.permissions(): |
|
52 | 52 | user_permissions[perm.user_id] = perm |
|
53 | 53 | |
|
54 | 54 | return user_permissions |
|
55 | 55 | |
|
56 | 56 | @LoginRequired() |
|
57 | 57 | @HasRepoPermissionAnyDecorator('repository.admin') |
|
58 | 58 | def edit_advanced(self): |
|
59 | 59 | _ = self.request.translate |
|
60 | 60 | c = self.load_default_context() |
|
61 | 61 | c.active = 'advanced' |
|
62 | 62 | |
|
63 | 63 | c.default_user_id = User.get_default_user_id() |
|
64 | 64 | c.in_public_journal = UserFollowing.query() \ |
|
65 | 65 | .filter(UserFollowing.user_id == c.default_user_id) \ |
|
66 | 66 | .filter(UserFollowing.follows_repository == self.db_repo).scalar() |
|
67 | 67 | |
|
68 | 68 | c.ver_info_dict = self.rhodecode_vcs_repo.get_hooks_info() |
|
69 | 69 | c.hooks_outdated = False |
|
70 | 70 | |
|
71 | 71 | try: |
|
72 | 72 | if Version(c.ver_info_dict['pre_version']) < Version(c.rhodecode_version): |
|
73 | 73 | c.hooks_outdated = True |
|
74 | 74 | except Exception: |
|
75 | 75 | pass |
|
76 | 76 | |
|
77 | 77 | # update commit cache if GET flag is present |
|
78 | 78 | if self.request.GET.get('update_commit_cache'): |
|
79 | 79 | self.db_repo.update_commit_cache() |
|
80 | 80 | h.flash(_('updated commit cache'), category='success') |
|
81 | 81 | |
|
82 | 82 | return self._get_template_context(c) |
|
83 | 83 | |
|
84 | 84 | @LoginRequired() |
|
85 | 85 | @HasRepoPermissionAnyDecorator('repository.admin') |
|
86 | 86 | @CSRFRequired() |
|
87 | 87 | def edit_advanced_archive(self): |
|
88 | 88 | """ |
|
89 | 89 | Archives the repository. It becomes read-only and hidden from search |

90 | 90 | and other queries, but remains visible to super-admins. |
|
91 | 91 | """ |
|
92 | 92 | |
|
93 | 93 | _ = self.request.translate |
|
94 | 94 | |
|
95 | 95 | try: |
|
96 | 96 | old_data = self.db_repo.get_api_data() |
|
97 | 97 | RepoModel().archive(self.db_repo) |
|
98 | 98 | |
|
99 | 99 | repo = audit_logger.RepoWrap(repo_id=None, repo_name=self.db_repo.repo_name) |
|
100 | 100 | audit_logger.store_web( |
|
101 | 101 | 'repo.archive', action_data={'old_data': old_data}, |
|
102 | 102 | user=self._rhodecode_user, repo=repo) |
|
103 | 103 | |
|
104 | 104 | ScmModel().mark_for_invalidation(self.db_repo_name, delete=True) |
|
105 | 105 | h.flash( |
|
106 | 106 | _('Archived repository `%s`') % self.db_repo_name, |
|
107 | 107 | category='success') |
|
108 | 108 | Session().commit() |
|
109 | 109 | except Exception: |
|
110 | 110 | log.exception("Exception during archiving of repository") |
|
111 | 111 | h.flash(_('An error occurred during archiving of `%s`') |
|
112 | 112 | % self.db_repo_name, category='error') |
|
113 | 113 | # redirect to advanced for more deletion options |
|
114 | 114 | raise HTTPFound( |
|
115 | 115 | h.route_path('edit_repo_advanced', repo_name=self.db_repo_name, |
|
116 | 116 | _anchor='advanced-archive')) |
|
117 | 117 | |
|
118 | 118 | # flush permissions for all users defined in permissions |
|
119 | 119 | affected_user_ids = self._get_users_with_permissions().keys() |
|
120 | 120 | PermissionModel().trigger_permission_flush(affected_user_ids) |
|
121 | 121 | |
|
122 | 122 | raise HTTPFound(h.route_path('home')) |
|
123 | 123 | |
|
124 | 124 | @LoginRequired() |
|
125 | 125 | @HasRepoPermissionAnyDecorator('repository.admin') |
|
126 | 126 | @CSRFRequired() |
|
127 | 127 | def edit_advanced_delete(self): |
|
128 | 128 | """ |
|
129 | 129 | Deletes the repository, or shows warnings if deletion is not possible |
|
130 | 130 | because of attached forks or other errors. |
|
131 | 131 | """ |
|
132 | 132 | _ = self.request.translate |
|
133 | 133 | handle_forks = self.request.POST.get('forks', None) |
|
134 | 134 | if handle_forks == 'detach_forks': |
|
135 | 135 | handle_forks = 'detach' |
|
136 | 136 | elif handle_forks == 'delete_forks': |
|
137 | 137 | handle_forks = 'delete' |
|
138 | 138 | |
|
139 | repo_advanced_url = h.route_path( | |
|
140 | 'edit_repo_advanced', repo_name=self.db_repo_name, | |
|
141 | _anchor='advanced-delete') | |
|
139 | 142 | try: |
|
140 | 143 | old_data = self.db_repo.get_api_data() |
|
141 | 144 | RepoModel().delete(self.db_repo, forks=handle_forks) |
|
142 | 145 | |
|
143 | 146 | _forks = self.db_repo.forks.count() |
|
144 | 147 | if _forks and handle_forks: |
|
145 | 148 | if handle_forks == 'detach': |

146 | 149 | h.flash(_('Detached %s forks') % _forks, category='success') |

147 | 150 | elif handle_forks == 'delete': |
|
148 | 151 | h.flash(_('Deleted %s forks') % _forks, category='success') |
|
149 | 152 | |
|
150 | 153 | repo = audit_logger.RepoWrap(repo_id=None, repo_name=self.db_repo.repo_name) |
|
151 | 154 | audit_logger.store_web( |
|
152 | 155 | 'repo.delete', action_data={'old_data': old_data}, |
|
153 | 156 | user=self._rhodecode_user, repo=repo) |
|
154 | 157 | |
|
155 | 158 | ScmModel().mark_for_invalidation(self.db_repo_name, delete=True) |
|
156 | 159 | h.flash( |
|
157 | 160 | _('Deleted repository `%s`') % self.db_repo_name, |
|
158 | 161 | category='success') |
|
159 | 162 | Session().commit() |
|
160 | 163 | except AttachedForksError: |
|
161 | repo_advanced_url = h.route_path( | |
|
162 | 'edit_repo_advanced', repo_name=self.db_repo_name, | |
|
163 | _anchor='advanced-delete') | |
|
164 | 164 | delete_anchor = h.link_to(_('detach or delete'), repo_advanced_url) |
|
165 | 165 | h.flash(_('Cannot delete `{repo}`, it still contains attached forks. ' |
|
166 | 166 | 'Try using {delete_or_detach} option.') |
|
167 | 167 | .format(repo=self.db_repo_name, delete_or_detach=delete_anchor), |
|
168 | 168 | category='warning') |
|
169 | 169 | |
|
170 | 170 | # redirect to advanced settings to handle attached forks |
|
171 | 171 | raise HTTPFound(repo_advanced_url) |
|
172 | 172 | |
|
173 | 173 | except AttachedPullRequestsError: |
|
174 | repo_advanced_url = h.route_path( | |
|
175 | 'edit_repo_advanced', repo_name=self.db_repo_name, | |
|
176 | _anchor='advanced-delete') | |
|
177 | 174 | attached_prs = len(self.db_repo.pull_requests_source + |
|
178 | 175 | self.db_repo.pull_requests_target) |
|
179 | 176 | h.flash( |
|
180 | 177 | _('Cannot delete `{repo}`, it still contains {num} attached pull requests. ' |
|
181 | 178 | 'Consider archiving the repository instead.').format( |
|
182 | 179 | repo=self.db_repo_name, num=attached_prs), category='warning') |
|
183 | 180 | |
|
184 | 181 | # redirect back to the advanced settings page |
|
185 | 182 | raise HTTPFound(repo_advanced_url) |
|
186 | 183 | |
|
184 | except AttachedArtifactsError: | |
|
185 | ||
|
186 | attached_artifacts = len(self.db_repo.artifacts) | |
|
187 | h.flash( | |
|
188 | _('Cannot delete `{repo}`, it still contains {num} attached artifacts. ' | |
|
189 | 'Consider archiving the repository instead.').format( | |
|
190 | repo=self.db_repo_name, num=attached_artifacts), category='warning') | |
|
191 | ||
|
192 | # redirect back to the advanced settings page | |
|
193 | raise HTTPFound(repo_advanced_url) | |
|
187 | 194 | except Exception: |
|
188 | 195 | log.exception("Exception during deletion of repository") |
|
189 | 196 | h.flash(_('An error occurred during deletion of `%s`') |
|
190 | 197 | % self.db_repo_name, category='error') |
|
191 | 198 | # redirect to advanced for more deletion options |
|
192 | 199 | raise HTTPFound( |
|
193 | 200 | h.route_path('edit_repo_advanced', repo_name=self.db_repo_name, |
|
194 | 201 | _anchor='advanced-delete')) |
|
195 | 202 | |
|
196 | 203 | raise HTTPFound(h.route_path('home')) |
|
197 | 204 | |
|
198 | 205 | @LoginRequired() |
|
199 | 206 | @HasRepoPermissionAnyDecorator('repository.admin') |
|
200 | 207 | @CSRFRequired() |
|
201 | 208 | def edit_advanced_journal(self): |
|
202 | 209 | """ |
|
203 | 210 | Sets this repository to be visible in the public journal, |

204 | 211 | in other words making the default user follow this repo |
|
205 | 212 | """ |
|
206 | 213 | _ = self.request.translate |
|
207 | 214 | |
|
208 | 215 | try: |
|
209 | 216 | user_id = User.get_default_user_id() |
|
210 | 217 | ScmModel().toggle_following_repo(self.db_repo.repo_id, user_id) |
|
211 | 218 | h.flash(_('Updated repository visibility in public journal'), |
|
212 | 219 | category='success') |
|
213 | 220 | Session().commit() |
|
214 | 221 | except Exception: |
|
215 | 222 | h.flash(_('An error occurred during setting this ' |
|
216 | 223 | 'repository in public journal'), |
|
217 | 224 | category='error') |
|
218 | 225 | |
|
219 | 226 | raise HTTPFound( |
|
220 | 227 | h.route_path('edit_repo_advanced', repo_name=self.db_repo_name)) |
|
221 | 228 | |
|
222 | 229 | @LoginRequired() |
|
223 | 230 | @HasRepoPermissionAnyDecorator('repository.admin') |
|
224 | 231 | @CSRFRequired() |
|
225 | 232 | def edit_advanced_fork(self): |
|
226 | 233 | """ |
|
227 | 234 | Mark given repository as a fork of another |
|
228 | 235 | """ |
|
229 | 236 | _ = self.request.translate |
|
230 | 237 | |
|
231 | 238 | new_fork_id = safe_int(self.request.POST.get('id_fork_of')) |
|
232 | 239 | |
|
233 | 240 | # valid repo, re-check permissions |
|
234 | 241 | if new_fork_id: |
|
235 | 242 | repo = Repository.get(new_fork_id) |
|
236 | 243 | # ensure we have at least read access to the repo we mark |
|
237 | 244 | perm_check = HasRepoPermissionAny( |
|
238 | 245 | 'repository.read', 'repository.write', 'repository.admin') |
|
239 | 246 | |
|
240 | 247 | if repo and perm_check(repo_name=repo.repo_name): |
|
241 | 248 | new_fork_id = repo.repo_id |
|
242 | 249 | else: |
|
243 | 250 | new_fork_id = None |
|
244 | 251 | |
|
245 | 252 | try: |
|
246 | 253 | repo = ScmModel().mark_as_fork( |
|
247 | 254 | self.db_repo_name, new_fork_id, self._rhodecode_user.user_id) |
|
248 | 255 | fork = repo.fork.repo_name if repo.fork else _('Nothing') |
|
249 | 256 | Session().commit() |
|
250 | 257 | h.flash( |
|
251 | 258 | _('Marked repo %s as fork of %s') % (self.db_repo_name, fork), |
|
252 | 259 | category='success') |
|
253 | 260 | except RepositoryError as e: |
|
254 | 261 | log.exception("Repository Error occurred") |
|
255 | 262 | h.flash(str(e), category='error') |
|
256 | 263 | except Exception: |
|
257 | 264 | log.exception("Exception while editing fork") |
|
258 | 265 | h.flash(_('An error occurred during this operation'), |
|
259 | 266 | category='error') |
|
260 | 267 | |
|
261 | 268 | raise HTTPFound( |
|
262 | 269 | h.route_path('edit_repo_advanced', repo_name=self.db_repo_name)) |
|
263 | 270 | |
|
264 | 271 | @LoginRequired() |
|
265 | 272 | @HasRepoPermissionAnyDecorator('repository.admin') |
|
266 | 273 | @CSRFRequired() |
|
267 | 274 | def edit_advanced_toggle_locking(self): |
|
268 | 275 | """ |
|
269 | 276 | Toggle locking of repository |
|
270 | 277 | """ |
|
271 | 278 | _ = self.request.translate |
|
272 | 279 | set_lock = self.request.POST.get('set_lock') |
|
273 | 280 | set_unlock = self.request.POST.get('set_unlock') |
|
274 | 281 | |
|
275 | 282 | try: |
|
276 | 283 | if set_lock: |
|
277 | 284 | Repository.lock(self.db_repo, self._rhodecode_user.user_id, |
|
278 | 285 | lock_reason=Repository.LOCK_WEB) |
|
279 | 286 | h.flash(_('Locked repository'), category='success') |
|
280 | 287 | elif set_unlock: |
|
281 | 288 | Repository.unlock(self.db_repo) |
|
282 | 289 | h.flash(_('Unlocked repository'), category='success') |
|
283 | 290 | except Exception: |

284 | 291 | log.exception("Exception during locking/unlocking") |

285 | 292 | h.flash(_('An error occurred during locking/unlocking'), category='error') |
|
286 | 293 | |
|
287 | 294 | raise HTTPFound( |
|
288 | 295 | h.route_path('edit_repo_advanced', repo_name=self.db_repo_name)) |
|
289 | 296 | |
|
290 | 297 | @LoginRequired() |
|
291 | 298 | @HasRepoPermissionAnyDecorator('repository.admin') |
|
292 | 299 | def edit_advanced_install_hooks(self): |
|
293 | 300 | """ |
|
294 | 301 | Install Hooks for repository |
|
295 | 302 | """ |
|
296 | 303 | _ = self.request.translate |
|
297 | 304 | self.load_default_context() |
|
298 | 305 | self.rhodecode_vcs_repo.install_hooks(force=True) |
|
299 | 306 | h.flash(_('installed updated hooks into this repository'), |
|
300 | 307 | category='success') |
|
301 | 308 | raise HTTPFound( |
|
302 | 309 | h.route_path('edit_repo_advanced', repo_name=self.db_repo_name)) |
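
The deletion changes above hoist ``repo_advanced_url`` out of the individual ``except`` blocks and add an ``AttachedArtifactsError`` branch beside the existing forks and pull-request guards. A self-contained sketch of the resulting pattern, using illustrative stand-ins rather than the real RhodeCode API::

    class AttachedForksError(Exception): ...
    class AttachedPullRequestsError(Exception): ...
    class AttachedArtifactsError(Exception): ...

    def delete_repo(repo_name, perform_delete):
        # Build the fallback redirect once; every failure branch reuses it.
        repo_advanced_url = f'/{repo_name}/settings/advanced#advanced-delete'
        try:
            perform_delete(repo_name)
        except (AttachedForksError, AttachedPullRequestsError,
                AttachedArtifactsError) as exc:
            print(f'cannot delete {repo_name}: {exc}')
            return repo_advanced_url
        return '/'
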
@@ -1,60 +1,60 b'' | |||
|
1 | 1 | # Copyright (C) 2016-2023 RhodeCode GmbH |
|
2 | 2 | # |
|
3 | 3 | # This program is free software: you can redistribute it and/or modify |
|
4 | 4 | # it under the terms of the GNU Affero General Public License, version 3 |
|
5 | 5 | # (only), as published by the Free Software Foundation. |
|
6 | 6 | # |
|
7 | 7 | # This program is distributed in the hope that it will be useful, |
|
8 | 8 | # but WITHOUT ANY WARRANTY; without even the implied warranty of |
|
9 | 9 | # MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the |
|
10 | 10 | # GNU General Public License for more details. |
|
11 | 11 | # |
|
12 | 12 | # You should have received a copy of the GNU Affero General Public License |
|
13 | 13 | # along with this program. If not, see <http://www.gnu.org/licenses/>. |
|
14 | 14 | # |
|
15 | 15 | # This program is dual-licensed. If you wish to learn more about the |
|
16 | 16 | # RhodeCode Enterprise Edition, including its added features, Support services, |
|
17 | 17 | # and proprietary license terms, please see https://rhodecode.com/licenses/ |
|
18 | 18 | |
|
19 | 19 | import logging |
|
20 | 20 | |
|
21 | 21 | from . import config_keys |
|
22 | 22 | |
|
23 | 23 | from rhodecode.config.settings_maker import SettingsMaker |
|
24 | 24 | |
|
25 | 25 | log = logging.getLogger(__name__) |
|
26 | 26 | |
|
27 | 27 | |
|
28 | 28 | def _sanitize_settings_and_apply_defaults(settings): |
|
29 | 29 | """ |
|
30 | 30 | Set defaults, convert to python types and validate settings. |
|
31 | 31 | """ |
|
32 | 32 | settings_maker = SettingsMaker(settings) |
|
33 | 33 | |
|
34 | 34 | settings_maker.make_setting(config_keys.generate_authorized_keyfile, False, parser='bool') |
|
35 | 35 | settings_maker.make_setting(config_keys.wrapper_allow_shell, False, parser='bool') |
|
36 | 36 | settings_maker.make_setting(config_keys.enable_debug_logging, False, parser='bool') |
|
37 | 37 | settings_maker.make_setting(config_keys.ssh_key_generator_enabled, True, parser='bool') |
|
38 | 38 | |
|
39 | 39 | settings_maker.make_setting(config_keys.authorized_keys_file_path, '~/.ssh/authorized_keys_rhodecode') |
|
40 | settings_maker.make_setting(config_keys.wrapper_cmd, '') | |
|
40 | settings_maker.make_setting(config_keys.wrapper_cmd, '/usr/local/bin/rhodecode_bin/bin/rc-ssh-wrapper-v2') | |
|
41 | 41 | settings_maker.make_setting(config_keys.authorized_keys_line_ssh_opts, '') |
|
42 | 42 | |
|
43 | 43 | settings_maker.make_setting(config_keys.ssh_hg_bin, '/usr/local/bin/rhodecode_bin/vcs_bin/hg') |
|
44 | 44 | settings_maker.make_setting(config_keys.ssh_git_bin, '/usr/local/bin/rhodecode_bin/vcs_bin/git') |
|
45 | 45 | settings_maker.make_setting(config_keys.ssh_svn_bin, '/usr/local/bin/rhodecode_bin/vcs_bin/svnserve') |
|
46 | 46 | |
|
47 | 47 | settings_maker.env_expand() |
|
48 | 48 | |
|
49 | 49 | |
|
50 | 50 | def includeme(config): |
|
51 | 51 | settings = config.registry.settings |
|
52 | 52 | _sanitize_settings_and_apply_defaults(settings) |
|
53 | 53 | |
|
54 | 54 | # if we have enabled generation of the keyfile, subscribe to the event |
|
55 | 55 | if settings[config_keys.generate_authorized_keyfile]: |
|
56 | 56 | # lazy import here for faster code reading... via sshwrapper-v2 mode |
|
57 | 57 | from .subscribers import generate_ssh_authorized_keys_file_subscriber |
|
58 | 58 | from .events import SshKeyFileChangeEvent |
|
59 | 59 | config.add_subscriber( |
|
60 | 60 | generate_ssh_authorized_keys_file_subscriber, SshKeyFileChangeEvent) |
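
For orientation, a minimal stand-in for the defaulting contract the ``make_setting`` calls above rely on; this is a sketch of the assumed behaviour, not the real ``SettingsMaker`` implementation::

    def make_setting(settings, key, default, parser=None):
        # Keep the configured value when present, else apply the default;
        # the 'bool' parser coerces common ini-style truthy strings.
        val = settings.get(key, default)
        if parser == 'bool' and isinstance(val, str):
            val = val.strip().lower() in ('true', '1', 'yes', 'on')
        settings[key] = val
        return val

    settings = {'ssh.enable_debug_logging': 'true'}
    make_setting(settings, 'ssh.enable_debug_logging', False, parser='bool')
    make_setting(settings, 'ssh.wrapper_cmd.v2',
                 '/usr/local/bin/rhodecode_bin/bin/rc-ssh-wrapper-v2')
    assert settings['ssh.enable_debug_logging'] is True
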
@@ -1,32 +1,32 b'' | |||
|
1 | 1 | # Copyright (C) 2016-2023 RhodeCode GmbH |
|
2 | 2 | # |
|
3 | 3 | # This program is free software: you can redistribute it and/or modify |
|
4 | 4 | # it under the terms of the GNU Affero General Public License, version 3 |
|
5 | 5 | # (only), as published by the Free Software Foundation. |
|
6 | 6 | # |
|
7 | 7 | # This program is distributed in the hope that it will be useful, |
|
8 | 8 | # but WITHOUT ANY WARRANTY; without even the implied warranty of |
|
9 | 9 | # MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the |
|
10 | 10 | # GNU General Public License for more details. |
|
11 | 11 | # |
|
12 | 12 | # You should have received a copy of the GNU Affero General Public License |
|
13 | 13 | # along with this program. If not, see <http://www.gnu.org/licenses/>. |
|
14 | 14 | # |
|
15 | 15 | # This program is dual-licensed. If you wish to learn more about the |
|
16 | 16 | # RhodeCode Enterprise Edition, including its added features, Support services, |
|
17 | 17 | # and proprietary license terms, please see https://rhodecode.com/licenses/ |
|
18 | 18 | |
|
19 | 19 | |
|
20 | 20 | # Definition of setting keys used to configure this module. Defined here to |
|
21 | 21 | # avoid repetition of keys throughout the module. |
|
22 | 22 | generate_authorized_keyfile = 'ssh.generate_authorized_keyfile' |
|
23 | 23 | authorized_keys_file_path = 'ssh.authorized_keys_file_path' |
|
24 | 24 | authorized_keys_line_ssh_opts = 'ssh.authorized_keys_ssh_opts' |
|
25 | 25 | ssh_key_generator_enabled = 'ssh.enable_ui_key_generator' |
|
26 | wrapper_cmd = 'ssh.wrapper_cmd' | |
|
26 | wrapper_cmd = 'ssh.wrapper_cmd.v2' | |
|
27 | 27 | wrapper_allow_shell = 'ssh.wrapper_cmd_allow_shell' |
|
28 | 28 | enable_debug_logging = 'ssh.enable_debug_logging' |
|
29 | 29 | |
|
30 | 30 | ssh_hg_bin = 'ssh.executable.hg' |
|
31 | 31 | ssh_git_bin = 'ssh.executable.git' |
|
32 | 32 | ssh_svn_bin = 'ssh.executable.svn' |
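
Because consumers read these module-level constants rather than raw strings, renaming the underlying key to ``ssh.wrapper_cmd.v2`` is transparent to callers. A self-contained sketch (constants copied from the keys above; ``resolve_wrapper_cmd`` is a hypothetical consumer)::

    WRAPPER_CMD_KEY = 'ssh.wrapper_cmd.v2'   # was 'ssh.wrapper_cmd'
    DEFAULT_WRAPPER_CMD = '/usr/local/bin/rhodecode_bin/bin/rc-ssh-wrapper-v2'

    def resolve_wrapper_cmd(settings: dict) -> str:
        # Fall back to the packaged v2 wrapper when the key is absent.
        return settings.get(WRAPPER_CMD_KEY, DEFAULT_WRAPPER_CMD)

    assert resolve_wrapper_cmd({}) == DEFAULT_WRAPPER_CMD
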
|