# This software may be used and distributed according to the terms of the
# GNU General Public License version 2 or any later version.

"""advertise pre-generated bundles to seed clones

"clonebundles" is a server-side extension used to advertise the existence
of pre-generated, externally hosted bundle files to clients that are
cloning so that cloning can be faster, more reliable, and require fewer
resources on the server. "pullbundles" is a related feature for sending
pre-generated bundle files to clients as part of pull operations.

Cloning can be a CPU and I/O intensive operation on servers. Traditionally,
the server, in response to a client's request to clone, dynamically generates
a bundle containing the entire repository content and sends it to the client.
There is no caching on the server and the server will have to redundantly
generate the same outgoing bundle in response to each clone request. For
servers with large repositories or with high clone volume, the load from
clones can make scaling the server challenging and costly.

This extension provides server operators the ability to offload
potentially expensive clone load to an external service. Pre-generated
bundles also allow using more CPU intensive compression, reducing the
effective bandwidth requirements.

Here's how clone bundles work:

1. A server operator establishes a mechanism for making bundle files available
   on a hosting service where Mercurial clients can fetch them.
2. A manifest file listing available bundle URLs and some optional metadata
   is added to the Mercurial repository on the server.
3. A client initiates a clone against a clone bundles aware server.
4. The client sees the server is advertising clone bundles and fetches the
   manifest listing available bundles.
5. The client filters and sorts the available bundles based on what it
   supports and prefers.
6. The client downloads and applies an available bundle from the
   server-specified URL.
7. The client reconnects to the original server and performs the equivalent
   of :hg:`pull` to retrieve all repository data not in the bundle. (The
   repository could have been updated between when the bundle was created
   and when the client started the clone.) This may use "pullbundles".

Instead of the server generating full repository bundles for every clone
request, it generates full bundles once and they are subsequently reused to
bootstrap new clones. The server may still transfer data at clone time.
However, this is only data that has been added/changed since the bundle was
created. For large, established repositories, this can reduce server load for
clones to less than 1% of original.

Here's how pullbundles work:

1. A manifest file listing available bundles and describing the revisions
   is added to the Mercurial repository on the server.
2. A new-enough client informs the server that it supports partial pulls
   and initiates a pull.
3. If the server has pull bundles enabled and sees the client advertising
   partial pulls, it checks for a matching pull bundle in the manifest.
   A bundle matches if its format is supported by the client, the client
   already has the required revisions, and it needs something from the bundle.
4. If there is at least one matching bundle, the server sends it to the client.
5. The client applies the bundle and notices that the server reply was
   incomplete. It initiates another pull.

To work, this extension requires the following of server operators:

* Generating bundle files of repository content (typically periodically,
  such as once per day).
* Clone bundles: A file server that clients have network access to and that
  Python knows how to talk to through its normal URL handling facility
  (typically an HTTP/HTTPS server).
* A process for keeping the bundles manifest in sync with available bundle
  files.

Strictly speaking, using a static file hosting server isn't required: a server
operator could use a dynamic service for retrieving bundle data. However,
static file hosting services are simple and scalable and should be sufficient
for most needs.

Bundle files can be generated with the :hg:`bundle` command. Typically
:hg:`bundle --all` is used to produce a bundle of the entire repository.
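
For example, a server operator could generate a zstd-compressed full bundle
like this (the file name and compression choice are arbitrary)::

  $ hg bundle --all --type 'zstd-v2' full.hg
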

:hg:`debugcreatestreamclonebundle` can be used to produce a special
*streaming clonebundle*. These are bundle files that are extremely efficient
to produce and consume (read: fast). However, they are larger than
traditional bundle formats and require that clients support the exact set
of repository data store formats in use by the repository that created them.
Typically, a newer server can serve data that is compatible with older clients.
However, *streaming clone bundles* don't have this guarantee. **Server
operators need to be aware that newer versions of Mercurial may produce
streaming clone bundles incompatible with older Mercurial versions.**

A server operator is responsible for creating a ``.hg/clonebundles.manifest``
file containing the list of available bundle files suitable for seeding
clones. If this file does not exist, the repository will not advertise the
existence of clone bundles when clients connect. For pull bundles,
``.hg/pullbundles.manifest`` is used.

The manifest file contains a newline (\\n) delimited list of entries.

Each line in this file defines an available bundle. Lines have the format:

    <URL> [<key>=<value>[ <key>=<value>]]

That is, a URL followed by an optional, space-delimited list of key=value
pairs describing additional properties of this bundle. Both keys and values
are URI encoded.

For pull bundles, the URL is a path under the ``.hg`` directory of the
repository.

Keys in UPPERCASE are reserved for use by Mercurial and are defined below.
All non-uppercase keys can be used by site installations. An example use
for custom properties is to use the *datacenter* attribute to define which
data center a file is hosted in. Clients could then prefer a server in the
data center closest to them.
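
For instance, a clone bundles manifest with two entries could look like this
(the URLs and attribute values are hypothetical)::

  https://cdn.example.com/full.zstd.hg BUNDLESPEC=zstd-v2 datacenter=us-west
  https://cdn.example.com/full.gzip.hg BUNDLESPEC=gzip-v2 datacenter=eu-central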

The following reserved keys are currently defined:

BUNDLESPEC
   A "bundle specification" string that describes the type of the bundle.

   These are string values that are accepted by the "--type" argument of
   :hg:`bundle`.

   The values are parsed in strict mode, which means they must be of the
   "<compression>-<type>" form. See
   mercurial.exchange.parsebundlespec() for more details.

   :hg:`debugbundle --spec` can be used to print the bundle specification
   string for a bundle file. The output of this command can be used verbatim
   for the value of ``BUNDLESPEC`` (it is already escaped).
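
   For instance, inspecting a hypothetical ``full.hg`` bundle generated with
   zstd compression might print::

     $ hg debugbundle --spec full.hg
     zstd-v2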

   Clients will automatically filter out specifications that are unknown or
   unsupported so they won't attempt to download something that likely won't
   apply.

   The actual value doesn't impact client behavior beyond filtering:
   clients will still sniff the bundle type from the header of downloaded
   files.

   **Use of this key is highly recommended**, as it allows clients to
   easily skip unsupported bundles. If this key is not defined, an old
   client may attempt to apply a bundle that it is incapable of reading.

REQUIRESNI
   Whether Server Name Indication (SNI) is required to connect to the URL.
   SNI allows servers to use multiple certificates on the same IP. It is
   somewhat common in CDNs and other hosting providers. Older Python
   versions do not support SNI. Defining this attribute enables clients
   with older Python versions to filter this entry without experiencing
   an opaque SSL failure at connection time.

   If this is defined, it is important to advertise a non-SNI fallback
   URL or clients running old Python releases may not be able to clone
   with the clonebundles facility.

   Value should be "true".

REQUIREDRAM
   Value specifies expected memory requirements to decode the payload.
   Values can have suffixes for common byte sizes, e.g. "64MB".

   This key is often used with zstd-compressed bundles using a high
   compression level / window size, which can require 100+ MB of memory
   to decode.

heads
   Used for pull bundles. This contains the ``;`` separated changeset
   hashes of the heads of the bundle content.

bases
   Used for pull bundles. This contains the ``;`` separated changeset
   hashes of the roots of the bundle content. This can be skipped if
   the bundle was created without ``--base``.
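
Putting this together, a pull bundle manifest entry could look like this
(hypothetical file name; ``<hash1>`` etc. stand for full changeset hashes)::

  partial-1.hg BUNDLESPEC=gzip-v2 heads=<hash1>;<hash2> bases=<hash3>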

Manifests can contain multiple entries. Assuming metadata is defined, clients
will filter entries from the manifest that they don't support. The remaining
entries are optionally sorted by client preferences
(``ui.clonebundleprefers`` config option). The client then attempts
to fetch the bundle at the first URL in the remaining list.
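
For example, a client could prefer zstd-compressed bundles hosted in a nearby
data center like this (the attribute names and values are hypothetical and
must match what the manifest advertises)::

  [ui]
  clonebundleprefers = COMPRESSION=zstd, datacenter=us-west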

**Errors when downloading a bundle will fail the entire clone operation:
clients do not automatically fall back to a traditional clone.** The reason
for this is that if a server is using clone bundles, it is probably doing so
because the feature is necessary to help it scale. In other words, there
is an assumption that clone load will be offloaded to another service and
that the Mercurial server isn't responsible for serving this clone load.
If that other service experiences issues and clients start mass falling back to
the original Mercurial server, the added clone load could overwhelm the server
due to unexpected load and effectively take it offline. Not having clients
automatically fall back to cloning from the original server mitigates this
scenario.

Because there is no automatic Mercurial server fallback on failure of the
bundle hosting service, it is important for server operators to view the bundle
hosting service as an extension of the Mercurial server in terms of
availability and service level agreements: if the bundle hosting service goes
down, so does the ability for clients to clone. Note: clients will see a
message informing them how to bypass the clone bundles facility when a failure
occurs. So server operators should prepare for some people to follow these
instructions when a failure occurs, thus driving more load to the original
Mercurial server when the bundle hosting service fails.

inline clonebundles
-------------------

It is possible to transmit clonebundles inline in case repositories are
accessed over SSH. This avoids having to set up an external HTTPS server
and results in the same access control as already present for the SSH setup.

Inline clonebundles should be placed into the `.hg/bundle-cache` directory.
A clonebundle at `.hg/bundle-cache/mybundle.bundle` is referred to
in the `clonebundles.manifest` file as `peer-bundle-cache://mybundle.bundle`.

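A manifest entry for that inline bundle could then read (bundle name and
bundle spec are hypothetical)::

  peer-bundle-cache://mybundle.bundle BUNDLESPEC=zstd-v2
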
auto-generation of clone bundles
--------------------------------

It is possible to set Mercurial to automatically re-generate clone bundles when
enough new content is available.

Mercurial will take care of the process asynchronously. The defined list of
bundle-types will be generated, uploaded, and advertised. Older bundles will get
decommissioned as newer ones replace them.

Bundles Generation:
...................

The extension can generate multiple variants of the clone bundle. Each
variant is defined by the "bundle-spec" it uses::

  [clone-bundles]
  auto-generate.formats= zstd-v2, gzip-v2

See `hg help bundlespec` for details about available options.

By default, new bundles are generated when 5% of the repository contents or at
least 1000 revisions are not contained in the cached bundles. This can be
controlled by the `clone-bundles.trigger.below-bundled-ratio` option
(default 0.95) and the `clone-bundles.trigger.revs` option (default 1000)::

  [clone-bundles]
  trigger.below-bundled-ratio=0.95
  trigger.revs=1000

This logic can be manually triggered using the `admin::clone-bundles-refresh`
command, or automatically on each repository change if
`clone-bundles.auto-generate.on-change` is set to `yes`::

  [clone-bundles]
  auto-generate.on-change=yes
  auto-generate.formats= zstd-v2, gzip-v2

Bundles Upload and Serving:
...........................

The generated bundles need to be made available to users through a "public" URL.
This should be done through the `clone-bundles.upload-command` configuration.
The value of this option should be a shell command. It will have access to the
bundle file path through the `$HGCB_BUNDLE_PATH` variable, and the expected
basename in the "public" URL is accessible through the `$HGCB_BUNDLE_BASENAME`
variable::

  [clone-bundles]
  upload-command=sftp put $HGCB_BUNDLE_PATH \
      sftp://bundles.host/clone-bundles/$HGCB_BUNDLE_BASENAME

If the file was already uploaded, the command must still succeed.

After upload, the file should be available at a URL defined by
`clone-bundles.url-template`::

  [clone-bundles]
  url-template=https://bundles.host/cache/clone-bundles/{basename}

Old bundles cleanup:
....................

When new bundles are generated, the older ones are no longer necessary and can
be removed from storage. This is done through the `clone-bundles.delete-command`
configuration. The command is given the URL of the artifact to delete through
the `$HGCB_BUNDLE_URL` environment variable::

  [clone-bundles]
  delete-command=sftp rm sftp://bundles.host/clone-bundles/$HGCB_BUNDLE_BASENAME

If the file was already deleted, the command must still succeed.
"""


import os
import weakref

from mercurial.i18n import _

from mercurial import (
    bundlecaches,
    commands,
    error,
    extensions,
    localrepo,
    lock,
    node,
    registrar,
    util,
    wireprotov1server,
)


from mercurial.utils import (
    procutil,
)

testedwith = b'ships-with-hg-core'


def capabilities(orig, repo, proto):
    caps = orig(repo, proto)

    # Only advertise if a manifest exists. This does add some I/O to requests.
    # But this should be cheaper than a wasted network round trip due to
    # a missing file.
    if repo.vfs.exists(bundlecaches.CB_MANIFEST_FILE):
        caps.append(b'clonebundles')

    return caps


def extsetup(ui):
    extensions.wrapfunction(wireprotov1server, b'_capabilities', capabilities)


# logic for bundle auto-generation


configtable = {}
configitem = registrar.configitem(configtable)

cmdtable = {}
command = registrar.command(cmdtable)

configitem(b'clone-bundles', b'auto-generate.on-change', default=False)
configitem(b'clone-bundles', b'auto-generate.formats', default=list)
configitem(b'clone-bundles', b'trigger.below-bundled-ratio', default=0.95)
configitem(b'clone-bundles', b'trigger.revs', default=1000)

configitem(b'clone-bundles', b'upload-command', default=None)

configitem(b'clone-bundles', b'delete-command', default=None)

configitem(b'clone-bundles', b'url-template', default=None)

configitem(b'devel', b'debug.clonebundles', default=False)


# category for the post-close transaction hooks
CAT_POSTCLOSE = b"clonebundles-autobundles"

# template for bundle file names
BUNDLE_MASK = (
    b"full-%(bundle_type)s-%(revs)d_revs-%(tip_short)s_tip-%(op_id)s.hg"
)


# file in .hg/ used to track clonebundles being auto-generated
AUTO_GEN_FILE = b'clonebundles.auto-gen'


class BundleBase(object):
    """represents the core properties that matter for us in a bundle

    :bundle_type: the bundlespec (see hg help bundlespec)
    :revs: the number of revisions in the repo at bundle creation time
    :tip_rev: the rev-num of the tip revision
    :tip_node: the node id of the tip-most revision in the bundle

    :ready: True if the bundle is ready to be served
    """

    ready = False

    def __init__(self, bundle_type, revs, tip_rev, tip_node):
        self.bundle_type = bundle_type
        self.revs = revs
        self.tip_rev = tip_rev
        self.tip_node = tip_node

    def valid_for(self, repo):
        """is this bundle applicable to the current repository

        This is useful for detecting bundles made irrelevant by stripping.
        """
        tip_node = node.bin(self.tip_node)
        return repo.changelog.index.get_rev(tip_node) == self.tip_rev

    def __eq__(self, other):
        left = (self.ready, self.bundle_type, self.tip_rev, self.tip_node)
        right = (other.ready, other.bundle_type, other.tip_rev, other.tip_node)
        return left == right

    def __neq__(self, other):
        return not self == other

    def __cmp__(self, other):
        if self == other:
            return 0
        return -1


class RequestedBundle(BundleBase):
    """A bundle that should be generated.

    Additional attributes compared to BundleBase:

    :head_revs: list of head revisions (as rev-nums)
    :op_id: a "unique" identifier for the operation triggering the change
    """

    def __init__(self, bundle_type, revs, tip_rev, tip_node, head_revs, op_id):
        self.head_revs = head_revs
        self.op_id = op_id
        super(RequestedBundle, self).__init__(
            bundle_type,
            revs,
            tip_rev,
            tip_node,
        )

    @property
    def suggested_filename(self):
        """A filename that can be used for the generated bundle"""
        data = {
            b'bundle_type': self.bundle_type,
            b'revs': self.revs,
            b'heads': self.head_revs,
            b'tip_rev': self.tip_rev,
            b'tip_node': self.tip_node,
            b'tip_short': self.tip_node[:12],
            b'op_id': self.op_id,
        }
        return BUNDLE_MASK % data

    def generate_bundle(self, repo, file_path):
        """generate the bundle at `file_path`"""
        commands.bundle(
            repo.ui,
            repo,
            file_path,
            base=[b"null"],
            rev=self.head_revs,
            type=self.bundle_type,
            quiet=True,
        )

    def generating(self, file_path, hostname=None, pid=None):
        """return a GeneratingBundle object from this object"""
        if pid is None:
            pid = os.getpid()
        if hostname is None:
            hostname = lock._getlockprefix()
        return GeneratingBundle(
            self.bundle_type,
            self.revs,
            self.tip_rev,
            self.tip_node,
            hostname,
            pid,
            file_path,
        )


class GeneratingBundle(BundleBase):
    """A bundle being generated

    extra attributes compared to BundleBase:

    :hostname: the hostname of the machine generating the bundle
    :pid: the pid of the process generating the bundle
    :filepath: the target filename of the bundle

    These attributes exist to help detect stalled generation processes.
    """

    ready = False

    def __init__(
        self, bundle_type, revs, tip_rev, tip_node, hostname, pid, filepath
    ):
        self.hostname = hostname
        self.pid = pid
        self.filepath = filepath
        super(GeneratingBundle, self).__init__(
            bundle_type, revs, tip_rev, tip_node
        )

    @classmethod
    def from_line(cls, line):
        """create an object by deserializing a line from AUTO_GEN_FILE"""
        assert line.startswith(b'PENDING-v1 ')
        (
            __,
            bundle_type,
            revs,
            tip_rev,
            tip_node,
            hostname,
            pid,
            filepath,
        ) = line.split()
        hostname = util.urlreq.unquote(hostname)
        filepath = util.urlreq.unquote(filepath)
        revs = int(revs)
        tip_rev = int(tip_rev)
        pid = int(pid)
        return cls(
            bundle_type, revs, tip_rev, tip_node, hostname, pid, filepath
        )

    def to_line(self):
        """serialize the object to include as a line in AUTO_GEN_FILE"""
        templ = b"PENDING-v1 %s %d %d %s %s %d %s"
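        # a serialized entry reads, with placeholder values:
        # PENDING-v1 <bundle-spec> <revs> <tip-rev> <tip-node> <hostname> <pid> <filepath>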
        data = (
            self.bundle_type,
            self.revs,
            self.tip_rev,
            self.tip_node,
            util.urlreq.quote(self.hostname),
            self.pid,
            util.urlreq.quote(self.filepath),
        )
        return templ % data

    def __eq__(self, other):
        if not super(GeneratingBundle, self).__eq__(other):
            return False
        left = (self.hostname, self.pid, self.filepath)
        right = (other.hostname, other.pid, other.filepath)
        return left == right

    def uploaded(self, url, basename):
        """return a GeneratedBundle from this object"""
        return GeneratedBundle(
            self.bundle_type,
            self.revs,
            self.tip_rev,
            self.tip_node,
            url,
            basename,
        )


class GeneratedBundle(BundleBase):
    """A bundle that is done being generated and can be served

    extra attributes compared to BundleBase:

    :file_url: the URL where the bundle is available.
    :basename: the "basename" used to upload (useful for deletion)

    These attributes exist to generate a bundle manifest
    (.hg/clonebundles.manifest)
    """

    ready = True

    def __init__(
        self, bundle_type, revs, tip_rev, tip_node, file_url, basename
    ):
        self.file_url = file_url
        self.basename = basename
        super(GeneratedBundle, self).__init__(
            bundle_type, revs, tip_rev, tip_node
        )

    @classmethod
    def from_line(cls, line):
        """create an object by deserializing a line from AUTO_GEN_FILE"""
        assert line.startswith(b'DONE-v1 ')
        (
            __,
            bundle_type,
            revs,
            tip_rev,
            tip_node,
            file_url,
            basename,
        ) = line.split()
        revs = int(revs)
        tip_rev = int(tip_rev)
        file_url = util.urlreq.unquote(file_url)
        return cls(bundle_type, revs, tip_rev, tip_node, file_url, basename)

    def to_line(self):
        """serialize the object to include as a line in AUTO_GEN_FILE"""
        templ = b"DONE-v1 %s %d %d %s %s %s"
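        # a serialized entry reads, with placeholder values:
        # DONE-v1 <bundle-spec> <revs> <tip-rev> <tip-node> <file-url> <basename>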
        data = (
            self.bundle_type,
            self.revs,
            self.tip_rev,
            self.tip_node,
            util.urlreq.quote(self.file_url),
            self.basename,
        )
        return templ % data

    def manifest_line(self):
        """serialize the object to include as a line in the clone bundles manifest"""
        templ = b"%s BUNDLESPEC=%s REQUIRESNI=true"
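        # renders to e.g. (hypothetical URL):
        # https://bundles.example.com/full.hg BUNDLESPEC=zstd-v2 REQUIRESNI=true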
        return templ % (self.file_url, self.bundle_type)

    def __eq__(self, other):
        if not super(GeneratedBundle, self).__eq__(other):
            return False
        return self.file_url == other.file_url


def parse_auto_gen(content):
    """parse the AUTO_GEN_FILE to return a list of Bundle objects"""
    bundles = []
    for line in content.splitlines():
        if line.startswith(b'PENDING-v1 '):
            bundles.append(GeneratingBundle.from_line(line))
        elif line.startswith(b'DONE-v1 '):
            bundles.append(GeneratedBundle.from_line(line))
    return bundles


def dumps_auto_gen(bundles):
    """serialize a list of Bundle objects as AUTO_GEN_FILE content"""
    lines = []
    for b in bundles:
        lines.append(b"%s\n" % b.to_line())
    lines.sort()
    return b"".join(lines)


def read_auto_gen(repo):
    """read the AUTO_GEN_FILE of <repo> and return a list of Bundle objects"""
    data = repo.vfs.tryread(AUTO_GEN_FILE)
    if not data:
        return []
    return parse_auto_gen(data)


def write_auto_gen(repo, bundles):
    """write a list of Bundle objects into the repo's AUTO_GEN_FILE"""
    assert repo._cb_lock_ref is not None
    data = dumps_auto_gen(bundles)
    with repo.vfs(AUTO_GEN_FILE, mode=b'wb', atomictemp=True) as f:
        f.write(data)


def generate_manifest(bundles):
    """serialize a list of Bundle objects as clone bundles manifest content"""
    bundles = list(bundles)
    bundles.sort(key=lambda b: b.bundle_type)
    lines = []
    for b in bundles:
        lines.append(b"%s\n" % b.manifest_line())
    return b"".join(lines)


def update_ondisk_manifest(repo):
    """update the clone bundles manifest with the latest URLs"""
    with repo.clonebundles_lock():
        bundles = read_auto_gen(repo)

        per_types = {}
        for b in bundles:
            if not (b.ready and b.valid_for(repo)):
                continue
            current = per_types.get(b.bundle_type)
            if current is not None and current.revs >= b.revs:
                continue
            # keep only the most up-to-date ready bundle of each type
            per_types[b.bundle_type] = b
        manifest = generate_manifest(per_types.values())
        with repo.vfs(
            bundlecaches.CB_MANIFEST_FILE, mode=b"wb", atomictemp=True
        ) as f:
            f.write(manifest)


669 |
|
681 | |||
670 | def update_bundle_list(repo, new_bundles=(), del_bundles=()): |
|
682 | def update_bundle_list(repo, new_bundles=(), del_bundles=()): | |
671 | """modify the repo's AUTO_GEN_FILE |
|
683 | """modify the repo's AUTO_GEN_FILE | |
672 |
|
684 | |||
673 | This method also regenerates the clone bundle manifest when needed""" |
|
685 | This method also regenerates the clone bundle manifest when needed""" | |
674 | with repo.clonebundles_lock(): |
|
686 | with repo.clonebundles_lock(): | |
675 | bundles = read_auto_gen(repo) |
|
687 | bundles = read_auto_gen(repo) | |
676 | if del_bundles: |
|
688 | if del_bundles: | |
677 | bundles = [b for b in bundles if b not in del_bundles] |
|
689 | bundles = [b for b in bundles if b not in del_bundles] | |
678 | new_bundles = [b for b in new_bundles if b not in bundles] |
|
690 | new_bundles = [b for b in new_bundles if b not in bundles] | |
679 | bundles.extend(new_bundles) |
|
691 | bundles.extend(new_bundles) | |
680 | write_auto_gen(repo, bundles) |
|
692 | write_auto_gen(repo, bundles) | |
681 | all_changed = [] |
|
693 | all_changed = [] | |
682 | all_changed.extend(new_bundles) |
|
694 | all_changed.extend(new_bundles) | |
683 | all_changed.extend(del_bundles) |
|
695 | all_changed.extend(del_bundles) | |
684 | if any(b.ready for b in all_changed): |
|
696 | if any(b.ready for b in all_changed): | |
685 | update_ondisk_manifest(repo) |
|
697 | update_ondisk_manifest(repo) | |
686 |
|
698 | |||
687 |
|
699 | |||
def cleanup_tmp_bundle(repo, target):
    """remove a GeneratingBundle file and entry"""
    assert not target.ready
    with repo.clonebundles_lock():
        repo.vfs.tryunlink(target.filepath)
        update_bundle_list(repo, del_bundles=[target])


def finalize_one_bundle(repo, target):
    """upload a generated bundle and advertise it in the clonebundles.manifest"""
    with repo.clonebundles_lock():
        bundles = read_auto_gen(repo)
        if target in bundles and target.valid_for(repo):
            result = upload_bundle(repo, target)
            update_bundle_list(repo, new_bundles=[result])
        cleanup_tmp_bundle(repo, target)


def find_outdated_bundles(repo, bundles):
    """find outdated bundles"""
    olds = []
    per_types = {}
    for b in bundles:
        if not b.valid_for(repo):
            olds.append(b)
            continue
        l = per_types.setdefault(b.bundle_type, [])
        l.append(b)
    for key in sorted(per_types):
        all = per_types[key]
        if len(all) > 1:
            all.sort(key=lambda b: b.revs, reverse=True)
            olds.extend(all[1:])
    return olds


def collect_garbage(repo):
    """find outdated bundles and delete them"""
    with repo.clonebundles_lock():
        bundles = read_auto_gen(repo)
        olds = find_outdated_bundles(repo, bundles)
        for o in olds:
            delete_bundle(repo, o)
        update_bundle_list(repo, del_bundles=olds)


def upload_bundle(repo, bundle):
    """upload the result of a GeneratingBundle and return a GeneratedBundle

    The upload is done using the `clone-bundles.upload-command`
    """
    cmd = repo.ui.config(b'clone-bundles', b'upload-command')
    url = repo.ui.config(b'clone-bundles', b'url-template')
    basename = repo.vfs.basename(bundle.filepath)
    filepath = procutil.shellquote(bundle.filepath)
    variables = {
        b'HGCB_BUNDLE_PATH': filepath,
        b'HGCB_BUNDLE_BASENAME': basename,
    }
    env = procutil.shellenviron(environ=variables)
    ret = repo.ui.system(cmd, environ=env)
    if ret:
        raise error.Abort(b"command returned status %d: %s" % (ret, cmd))
    url = (
        url.decode('utf8')
        .format(basename=basename.decode('utf8'))
        .encode('utf8')
    )
    return bundle.uploaded(url, basename)


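# A minimal, hypothetical configuration for the two settings used above.
# The s3cmd invocation, bucket, and host name are illustrative only; the
# environment variables and the {basename} template field are the ones the
# extension actually provides:
#
#   [clone-bundles]
#   upload-command = s3cmd put $HGCB_BUNDLE_PATH s3://bundles/$HGCB_BUNDLE_BASENAME
#   url-template = https://bundles.example.com/{basename}

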
def delete_bundle(repo, bundle):
    """delete a bundle from storage"""
    assert bundle.ready
    msg = b'clone-bundles: deleting bundle %s\n'
    msg %= bundle.basename
    if repo.ui.configbool(b'devel', b'debug.clonebundles'):
        repo.ui.write(msg)
    else:
        repo.ui.debug(msg)

    cmd = repo.ui.config(b'clone-bundles', b'delete-command')
    variables = {
        b'HGCB_BUNDLE_URL': bundle.file_url,
        b'HGCB_BASENAME': bundle.basename,
    }
    env = procutil.shellenviron(environ=variables)
    ret = repo.ui.system(cmd, environ=env)
    if ret:
        raise error.Abort(b"command returned status %d: %s" % (ret, cmd))


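# A matching, hypothetical delete command (tool and bucket are again
# illustrative; $HGCB_BASENAME and $HGCB_BUNDLE_URL are set by the code
# above):
#
#   [clone-bundles]
#   delete-command = s3cmd rm s3://bundles/$HGCB_BASENAME

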
def auto_bundle_needed_actions(repo, bundles, op_id):
    """find the list of bundles that need action

    returns a list of RequestedBundle objects that need to be generated and
    uploaded."""
    create_bundles = []
    delete_bundles = []
    repo = repo.filtered(b"immutable")
    targets = repo.ui.configlist(b'clone-bundles', b'auto-generate.formats')
    ratio = float(
        repo.ui.config(b'clone-bundles', b'trigger.below-bundled-ratio')
    )
    abs_revs = repo.ui.configint(b'clone-bundles', b'trigger.revs')
    revs = len(repo.changelog)
    generic_data = {
        'revs': revs,
        'head_revs': repo.changelog.headrevs(),
        'tip_rev': repo.changelog.tiprev(),
        'tip_node': node.hex(repo.changelog.tip()),
        'op_id': op_id,
    }
    for t in targets:
        if new_bundle_needed(repo, bundles, ratio, abs_revs, t, revs):
            data = generic_data.copy()
            data['bundle_type'] = t
            b = RequestedBundle(**data)
            create_bundles.append(b)
    delete_bundles.extend(find_outdated_bundles(repo, bundles))
    return create_bundles, delete_bundles


def new_bundle_needed(repo, bundles, ratio, abs_revs, bundle_type, revs):
    """consider the current cached content and trigger new bundles if needed"""
    threshold = max((revs * ratio), (revs - abs_revs))
    for b in bundles:
        if not b.valid_for(repo) or b.bundle_type != bundle_type:
            continue
        if b.revs > threshold:
            return False
    return True


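# Worked example for the threshold in new_bundle_needed() (the numbers are
# illustrative, not defaults): with revs=1000, trigger.below-bundled-ratio=0.95
# and trigger.revs=100, threshold = max(1000 * 0.95, 1000 - 100) = 950.
# An existing valid bundle of the requested type covering more than 950
# revisions is fresh enough, so no new bundle is scheduled; anything at or
# below 950 triggers regeneration.

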
def start_one_bundle(repo, bundle):
    """start the generation of a single bundle file

    the `bundle` argument should be a RequestedBundle object.

    This data is passed to the `debugmakeclonebundles` command "as is".
    """
    data = util.pickle.dumps(bundle)
    cmd = [procutil.hgexecutable(), b'--cwd', repo.path, INTERNAL_CMD]
    env = procutil.shellenviron()
    msg = b'clone-bundles: starting bundle generation: %s\n'
    stdout = None
    stderr = None
    waits = []
    record_wait = None
    if repo.ui.configbool(b'devel', b'debug.clonebundles'):
        stdout = procutil.stdout
        stderr = procutil.stderr
        repo.ui.write(msg % bundle.bundle_type)
        record_wait = waits.append
    else:
        repo.ui.debug(msg % bundle.bundle_type)
    bg = procutil.runbgcommand
    bg(
        cmd,
        env,
        stdin_bytes=data,
        stdout=stdout,
        stderr=stderr,
        record_wait=record_wait,
    )
    for f in waits:
        f()


INTERNAL_CMD = b'debug::internal-make-clone-bundles'


@command(INTERNAL_CMD, [], b'')
def debugmakeclonebundles(ui, repo):
    """Internal command to auto-generate clone bundles"""
    requested_bundle = util.pickle.load(procutil.stdin)
    procutil.stdin.close()

    collect_garbage(repo)

    fname = requested_bundle.suggested_filename
    repo.vfs.makedirs(b'tmp-bundles')
    fpath = repo.vfs.join(b'tmp-bundles', fname)
    bundle = requested_bundle.generating(fpath)
    update_bundle_list(repo, new_bundles=[bundle])

    requested_bundle.generate_bundle(repo, fpath)

    repo.invalidate()
    finalize_one_bundle(repo, bundle)


def make_auto_bundler(source_repo):
    reporef = weakref.ref(source_repo)

    def autobundle(tr):
        repo = reporef()
        assert repo is not None
        bundles = read_auto_gen(repo)
        new, __ = auto_bundle_needed_actions(repo, bundles, b"%d_txn" % id(tr))
        for data in new:
            start_one_bundle(repo, data)
        return None

    return autobundle


def reposetup(ui, repo):
    """install the two pieces needed for automatic clonebundle generation

    - add a "post-close" hook that fires bundling when needed
    - introduce a clone-bundle lock to let multiple processes meddle with the
      state files.
    """
    if not repo.local():
        return

    class autobundlesrepo(repo.__class__):
        def transaction(self, *args, **kwargs):
            tr = super(autobundlesrepo, self).transaction(*args, **kwargs)
            enabled = repo.ui.configbool(
                b'clone-bundles',
                b'auto-generate.on-change',
            )
            targets = repo.ui.configlist(
                b'clone-bundles', b'auto-generate.formats'
            )
            if enabled and targets:
                tr.addpostclose(CAT_POSTCLOSE, make_auto_bundler(self))
            return tr

        @localrepo.unfilteredmethod
        def clonebundles_lock(self, wait=True):
            '''Lock the repository file related to clone bundles'''
            if not util.safehasattr(self, '_cb_lock_ref'):
                self._cb_lock_ref = None
            l = self._currentlock(self._cb_lock_ref)
            if l is not None:
                l.lock()
                return l

            l = self._lock(
                vfs=self.vfs,
                lockname=b"clonebundleslock",
                wait=wait,
                releasefn=None,
                acquirefn=None,
                desc=_(b'repository %s') % self.origroot,
            )
            self._cb_lock_ref = weakref.ref(l)
            return l

    repo._wlockfreeprefix.add(AUTO_GEN_FILE)
    repo._wlockfreeprefix.add(bundlecaches.CB_MANIFEST_FILE)
    repo.__class__ = autobundlesrepo


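# Hypothetical configuration exercising the transaction hook installed by
# reposetup() above (the format list is illustrative; any supported bundle
# spec can be listed):
#
#   [clone-bundles]
#   auto-generate.on-change = yes
#   auto-generate.formats = zstd-v2, gzip-v2

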
@command(
    b'admin::clone-bundles-refresh',
    [
        (
            b'',
            b'background',
            False,
            _(b'start bundle generation in the background'),
        ),
    ],
    b'',
)
def cmd_admin_clone_bundles_refresh(
    ui,
    repo: localrepo.localrepository,
    background=False,
):
    """generate clone bundles according to the configuration

    This runs the logic for automatic generation, removing outdated bundles and
    generating new ones if necessary. See :hg:`help -e clone-bundles` for
    details about how to configure this feature.
    """
    debug = repo.ui.configbool(b'devel', b'debug.clonebundles')
    bundles = read_auto_gen(repo)
    op_id = b"%d_acbr" % os.getpid()
    create, delete = auto_bundle_needed_actions(repo, bundles, op_id)

    # if some bundles are scheduled for creation in the background, they will
    # deal with garbage collection too, so there is no need to do it
    # synchronously.
    #
    # However if no bundles are scheduled for creation, we need to explicitly
    # do it here.
    if not (background and create):
        # we clean up outdated bundles before generating new ones to keep the
        # last two versions of the bundle around for a while and avoid having
        # to deal with clients that just got served a manifest.
        for o in delete:
            delete_bundle(repo, o)
        update_bundle_list(repo, del_bundles=delete)

    if create:
        repo.vfs.makedirs(b'tmp-bundles')

    if background:
        for requested_bundle in create:
            start_one_bundle(repo, requested_bundle)
    else:
        for requested_bundle in create:
            if debug:
                msg = b'clone-bundles: starting bundle generation: %s\n'
                repo.ui.write(msg % requested_bundle.bundle_type)
            fname = requested_bundle.suggested_filename
            fpath = repo.vfs.join(b'tmp-bundles', fname)
            generating_bundle = requested_bundle.generating(fpath)
            update_bundle_list(repo, new_bundles=[generating_bundle])
            requested_bundle.generate_bundle(repo, fpath)
            result = upload_bundle(repo, generating_bundle)
            update_bundle_list(repo, new_bundles=[result])
            update_ondisk_manifest(repo)
            cleanup_tmp_bundle(repo, generating_bundle)


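# Typical invocations of the command defined above:
#
#   $ hg admin::clone-bundles-refresh               # generate synchronously
#   $ hg admin::clone-bundles-refresh --background  # spawn a background worker

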
@command(b'admin::clone-bundles-clear', [], b'')
def cmd_admin_clone_bundles_clear(ui, repo: localrepo.localrepository):
    """remove existing clone bundle caches

    See `hg help admin::clone-bundles-refresh` for details on how to regenerate
    them.

    This command only affects bundles currently available; it does not
    affect bundles being asynchronously generated.
    """
    bundles = read_auto_gen(repo)
    delete = [b for b in bundles if b.ready]
    for o in delete:
        delete_bundle(repo, o)
    update_bundle_list(repo, del_bundles=delete)

@@ -1,538 +1,540 @@
# bundlecaches.py - utility to deal with pre-computed bundles for servers
#
# This software may be used and distributed according to the terms of the
# GNU General Public License version 2 or any later version.

import collections

from typing import (
    cast,
)

from .i18n import _

from .thirdparty import attr

from . import (
    error,
    requirements as requirementsmod,
    sslutil,
    util,
)
from .utils import stringutil

urlreq = util.urlreq

BUNDLE_CACHE_DIR = b'bundle-cache'
CB_MANIFEST_FILE = b'clonebundles.manifest'
CLONEBUNDLESCHEME = b"peer-bundle-cache://"


def get_manifest(repo):
    """get the bundle manifest to be served to a client from a server"""
    raw_text = repo.vfs.tryread(CB_MANIFEST_FILE)
    entries = [e.split(b' ', 1) for e in raw_text.splitlines()]

    new_lines = []
    for e in entries:
        url = alter_bundle_url(repo, e[0])
        if len(e) == 1:
            line = url + b'\n'
        else:
            line = b"%s %s\n" % (url, e[1])
        new_lines.append(line)
    return b''.join(new_lines)


def alter_bundle_url(repo, url):
    """a function that exists to help extensions and hosting services alter the url

    This will typically be used to inject authentication information in the url
    of cached bundles."""
    return url


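# A minimal sketch of the extension point above: a hosting extension could
# wrap alter_bundle_url() to append a signed token to each advertised URL.
# make_signed_token() is hypothetical; only the wrapping pattern comes from
# mercurial.extensions.
#
#   from mercurial import bundlecaches, extensions
#
#   def _alter(orig, repo, url):
#       url = orig(repo, url)
#       return url + b'?token=' + make_signed_token(repo, url)  # hypothetical
#
#   def extsetup(ui):
#       extensions.wrapfunction(bundlecaches, 'alter_bundle_url', _alter)

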
@attr.s
class bundlespec:
    compression = attr.ib()
    wirecompression = attr.ib()
    version = attr.ib()
    wireversion = attr.ib()
    # parameters explicitly overwritten by the config or the specification
    _explicit_params = attr.ib()
    # default parameter for the version
    #
    # Keeping it separated is useful to check what was actually overwritten.
    _default_opts = attr.ib()

    @property
    def params(self):
        return collections.ChainMap(self._explicit_params, self._default_opts)

    @property
    def contentopts(self):
        # kept for Backward Compatibility concerns.
        return self.params

    def set_param(self, key, value, overwrite=True):
        """Set a bundle parameter value.

        Will only overwrite if overwrite is true"""
        if overwrite or key not in self._explicit_params:
            self._explicit_params[key] = value


# Maps bundle version human names to changegroup versions.
_bundlespeccgversions = {
    b'v1': b'01',
    b'v2': b'02',
    b'v3': b'03',
    b'packed1': b's1',
    b'bundle2': b'02',  # legacy
}

# Maps each bundle version to content opts, to choose which parts to bundle
_bundlespeccontentopts = {
    b'v1': {
        b'changegroup': True,
        b'cg.version': b'01',
        b'obsolescence': False,
        b'phases': False,
        b'tagsfnodescache': False,
        b'revbranchcache': False,
    },
    b'v2': {
        b'changegroup': True,
        b'cg.version': b'02',
        b'obsolescence': False,
        b'phases': False,
        b'tagsfnodescache': True,
        b'revbranchcache': True,
    },
    b'v3': {
        b'changegroup': True,
        b'cg.version': b'03',
        b'obsolescence': False,
        b'phases': True,
        b'tagsfnodescache': True,
        b'revbranchcache': True,
    },
    b'streamv2': {
        b'changegroup': False,
        b'cg.version': b'02',
        b'obsolescence': False,
        b'phases': False,
        b"streamv2": True,
        b'tagsfnodescache': False,
        b'revbranchcache': False,
    },
    b'streamv3-exp': {
        b'changegroup': False,
        b'cg.version': b'03',
        b'obsolescence': False,
        b'phases': False,
        b"streamv3-exp": True,
        b'tagsfnodescache': False,
        b'revbranchcache': False,
    },
    b'packed1': {
        b'cg.version': b's1',
    },
    b'bundle2': {  # legacy
        b'cg.version': b'02',
    },
}
_bundlespeccontentopts[b'bundle2'] = _bundlespeccontentopts[b'v2']

_bundlespecvariants = {b"streamv2": {}}

# Compression engines allowed in version 1. THIS SHOULD NEVER CHANGE.
_bundlespecv1compengines = {b'gzip', b'bzip2', b'none'}


def param_bool(key, value):
    """make a boolean out of a parameter value"""
    b = stringutil.parsebool(value)
    if b is None:
        msg = _(b"parameter %s should be a boolean ('%s')")
        msg %= (key, value)
        raise error.InvalidBundleSpecification(msg)
    return b


# mapping of known parameter names whose values need processing
bundle_spec_param_processing = {
    b"obsolescence": param_bool,
    b"obsolescence-mandatory": param_bool,
    b"phases": param_bool,
}


def _parseparams(s):
    """parse bundlespec parameter section

    input: "comp-version;params" string

    return: (spec, {param_key: param_value})
    """
    if b';' not in s:
        return s, {}

    params = {}
    version, paramstr = s.split(b';', 1)

    err = _(b'invalid bundle specification: missing "=" in parameter: %s')
    for p in paramstr.split(b';'):
        if b'=' not in p:
            msg = err % p
            raise error.InvalidBundleSpecification(msg)

        key, value = p.split(b'=', 1)
        key = urlreq.unquote(key)
        value = urlreq.unquote(value)
        process = bundle_spec_param_processing.get(key)
        if process is not None:
            value = process(key, value)
        params[key] = value

    return version, params


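# For illustration, _parseparams() splits everything after the first ";"
# into URI-decoded key=value pairs, running the processors registered in
# bundle_spec_param_processing:
#
#   _parseparams(b"v2;obsolescence=true;phases=false")
#   # -> (b'v2', {b'obsolescence': True, b'phases': False})

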
def parsebundlespec(repo, spec, strict=True):
    """Parse a bundle string specification into parts.

    Bundle specifications denote a well-defined bundle/exchange format.
    The content of a given specification should not change over time in
    order to ensure that bundles produced by a newer version of Mercurial are
    readable from an older version.

    The string currently has the form:

       <compression>-<type>[;<parameter0>[;<parameter1>]]

    Where <compression> is one of the supported compression formats
    and <type> is (currently) a version string. A ";" can follow the type and
    all text afterwards is interpreted as URI encoded, ";" delimited key=value
    pairs.

    If ``strict`` is True (the default) <compression> is required. Otherwise,
    it is optional.

    Returns a bundlespec object of (compression, version, parameters).
    Compression will be ``None`` if not in strict mode and a compression isn't
    defined.

    An ``InvalidBundleSpecification`` is raised when the specification is
    not syntactically well formed.

    An ``UnsupportedBundleSpecification`` is raised when the compression or
    bundle type/version is not recognized.

    Note: this function will likely eventually return a more complex data
    structure, including bundle2 part information.
    """
    if strict and b'-' not in spec:
        raise error.InvalidBundleSpecification(
            _(
                b'invalid bundle specification; '
                b'must be prefixed with compression: %s'
            )
            % spec
        )

    pre_args = spec.split(b';', 1)[0]
    if b'-' in pre_args:
        compression, version = spec.split(b'-', 1)

        if compression not in util.compengines.supportedbundlenames:
            raise error.UnsupportedBundleSpecification(
                _(b'%s compression is not supported') % compression
            )

        version, params = _parseparams(version)

        if version not in _bundlespeccontentopts:
            raise error.UnsupportedBundleSpecification(
                _(b'%s is not a recognized bundle version') % version
            )
    else:
        # Value could be just the compression or just the version, in which
        # case some defaults are assumed (but only when not in strict mode).
        assert not strict

        spec, params = _parseparams(spec)

        if spec in util.compengines.supportedbundlenames:
            compression = spec
            version = b'v1'
            # Generaldelta repos require v2.
            if requirementsmod.GENERALDELTA_REQUIREMENT in repo.requirements:
                version = b'v2'
            elif requirementsmod.REVLOGV2_REQUIREMENT in repo.requirements:
                version = b'v2'
            # Modern compression engines require v2.
            if compression not in _bundlespecv1compengines:
                version = b'v2'
        elif spec in _bundlespeccontentopts:
            if spec == b'packed1':
                compression = b'none'
            else:
                compression = b'bzip2'
            version = spec
        else:
            raise error.UnsupportedBundleSpecification(
                _(b'%s is not a recognized bundle specification') % spec
            )

    # Bundle version 1 only supports a known set of compression engines.
    if version == b'v1' and compression not in _bundlespecv1compengines:
        raise error.UnsupportedBundleSpecification(
            _(b'compression engine %s is not supported on v1 bundles')
            % compression
        )

    # The specification for packed1 can optionally declare the data formats
    # required to apply it. If we see this metadata, compare against what the
    # repo supports and error if the bundle isn't compatible.
    if version == b'packed1' and b'requirements' in params:
        requirements = set(cast(bytes, params[b'requirements']).split(b','))
        missingreqs = requirements - requirementsmod.STREAM_FIXED_REQUIREMENTS
        if missingreqs:
            raise error.UnsupportedBundleSpecification(
                _(b'missing support for repository features: %s')
                % b', '.join(sorted(missingreqs))
            )

    # Compute contentopts based on the version
    if b"stream" in params:
        # This case is fishy as this mostly derails the version selection
        # mechanism. `stream` bundles are quite specific and used differently
        # than "normal" bundles.
        #
        # (we should probably define a cleaner way to do this and raise a
        # warning when the old way is encountered)
        if params[b"stream"] == b"v2":
            version = b"streamv2"
        if params[b"stream"] == b"v3-exp":
            version = b"streamv3-exp"
    contentopts = _bundlespeccontentopts.get(version, {}).copy()
    if version == b"streamv2" or version == b"streamv3-exp":
        # streamv2 has been reported as "v2" for a while.
        version = b"v2"

    engine = util.compengines.forbundlename(compression)
    compression, wirecompression = engine.bundletype()
    wireversion = _bundlespeccontentopts[version][b'cg.version']

    return bundlespec(
        compression, wirecompression, version, wireversion, params, contentopts
    )


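# For illustration (behaviour inferred from the code above, not from running
# it): parsebundlespec(repo, b"zstd-v2") accepts the spec when the zstd
# engine is available and returns a bundlespec whose version is b'v2' with
# the default v2 content options, while parsebundlespec(repo, b"v2",
# strict=True) raises InvalidBundleSpecification because the compression
# prefix is mandatory in strict mode.

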
def parseclonebundlesmanifest(repo, s):
    """Parses the raw text of a clone bundles manifest.

    Returns a list of dicts. The dicts have a ``URL`` key corresponding
    to the URL and other keys are the attributes for the entry.
    """
    m = []
    for line in s.splitlines():
        fields = line.split()
        if not fields:
            continue
        attrs = {b'URL': fields[0]}
        for rawattr in fields[1:]:
            key, value = rawattr.split(b'=', 1)
            key = util.urlreq.unquote(key)
            value = util.urlreq.unquote(value)
            attrs[key] = value

            # Parse BUNDLESPEC into components. This makes client-side
            # preferences easier to specify since you can prefer a single
            # component of the BUNDLESPEC.
            if key == b'BUNDLESPEC':
                try:
                    bundlespec = parsebundlespec(repo, value)
                    attrs[b'COMPRESSION'] = bundlespec.compression
                    attrs[b'VERSION'] = bundlespec.version
                except error.InvalidBundleSpecification:
                    pass
                except error.UnsupportedBundleSpecification:
                    pass

        m.append(attrs)

    return m


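# For illustration, a manifest line such as (URL and values made up):
#
#   https://bundles.example.com/full.hg BUNDLESPEC=zstd-v2 REQUIRESNI=true
#
# parses into one dict per line; COMPRESSION and VERSION are only added when
# the client recognizes the BUNDLESPEC:
#
#   {b'URL': b'https://bundles.example.com/full.hg',
#    b'BUNDLESPEC': b'zstd-v2', b'COMPRESSION': b'zstd', b'VERSION': b'v2',
#    b'REQUIRESNI': b'true'}

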
def isstreamclonespec(bundlespec):
    # Stream clone v1
    if bundlespec.wirecompression == b'UN' and bundlespec.wireversion == b's1':
        return True

    # Stream clone v2
    if (
        bundlespec.wirecompression == b'UN'
        and bundlespec.wireversion == b'02'
        and (
            bundlespec.contentopts.get(b'streamv2')
            or bundlespec.contentopts.get(b'streamv3-exp')
        )
    ):
        return True

    return False


def filterclonebundleentries(repo, entries, streamclonerequested=False):
    """Remove incompatible clone bundle manifest entries.

    Accepts a list of entries parsed with ``parseclonebundlesmanifest``
    and returns a new list consisting of only the entries that this client
    should be able to apply.

    There is no guarantee we'll be able to apply all returned entries because
    the metadata we use to filter on may be missing or wrong.
    """
    newentries = []
    for entry in entries:
        spec = entry.get(b'BUNDLESPEC')
        if spec:
            try:
                bundlespec = parsebundlespec(repo, spec, strict=True)

                # If a stream clone was requested, filter out non-streamclone
                # entries.
                if streamclonerequested and not isstreamclonespec(bundlespec):
                    repo.ui.debug(
                        b'filtering %s because not a stream clone\n'
                        % entry[b'URL']
                    )
                    continue

            except error.InvalidBundleSpecification as e:
                repo.ui.debug(stringutil.forcebytestr(e) + b'\n')
                continue
            except error.UnsupportedBundleSpecification as e:
                repo.ui.debug(
                    b'filtering %s because unsupported bundle '
                    b'spec: %s\n' % (entry[b'URL'], stringutil.forcebytestr(e))
                )
                continue
        # If we don't have a spec and requested a stream clone, we don't know
        # what the entry is, so don't attempt to apply it.
        elif streamclonerequested:
            repo.ui.debug(
                b'filtering %s because cannot determine if a stream '
                b'clone bundle\n' % entry[b'URL']
            )
            continue

        if b'REQUIRESNI' in entry and not sslutil.hassni:
            repo.ui.debug(
                b'filtering %s because SNI not supported\n' % entry[b'URL']
            )
            continue

        if b'REQUIREDRAM' in entry:
            try:
                requiredram = util.sizetoint(entry[b'REQUIREDRAM'])
            except error.ParseError:
                repo.ui.debug(
                    b'filtering %s due to a bad REQUIREDRAM attribute\n'
                    % entry[b'URL']
                )
                continue
            actualram = repo.ui.estimatememory()
            if actualram is not None and actualram * 0.66 < requiredram:
                repo.ui.debug(
                    b'filtering %s as it needs more than 2/3 of system memory\n'
                    % entry[b'URL']
                )
                continue

        newentries.append(entry)

    return newentries
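

# Standalone sketch (not part of the original module): the REQUIREDRAM guard
# above keeps an entry only when the advertised requirement fits within two
# thirds of the estimated system memory, and stays permissive when no
# estimate is available.
def _fits_in_memory(requiredram, actualram):
    # Mirrors the check in filterclonebundleentries().
    return actualram is None or actualram * 0.66 >= requiredram


assert _fits_in_memory(requiredram=2 * 1024 ** 3, actualram=None)
assert not _fits_in_memory(requiredram=8 * 1024 ** 3, actualram=8 * 1024 ** 3)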


class clonebundleentry:
    """Represents an item in a clone bundles manifest.

    This rich class is needed to support sorting since sorted() in Python 3
    doesn't support ``cmp`` and our comparison is complex enough that ``key=``
    won't work.
    """

    def __init__(self, value, prefers):
        self.value = value
        self.prefers = prefers

    def _cmp(self, other):
        for prefkey, prefvalue in self.prefers:
            avalue = self.value.get(prefkey)
            bvalue = other.value.get(prefkey)

            # Special case for b missing attribute and a matches exactly.
            if avalue is not None and bvalue is None and avalue == prefvalue:
                return -1

            # Special case for a missing attribute and b matches exactly.
            if bvalue is not None and avalue is None and bvalue == prefvalue:
                return 1

            # We can't compare unless attribute present on both.
            if avalue is None or bvalue is None:
                continue

            # Same values should fall back to next attribute.
            if avalue == bvalue:
                continue

            # Exact matches come first.
            if avalue == prefvalue:
                return -1
            if bvalue == prefvalue:
                return 1

            # Fall back to next attribute.
            continue

        # If we got here we couldn't sort by attributes and prefers. Fall
        # back to index order.
        return 0

    def __lt__(self, other):
        return self._cmp(other) < 0

    def __gt__(self, other):
        return self._cmp(other) > 0

    def __eq__(self, other):
        return self._cmp(other) == 0

    def __le__(self, other):
        return self._cmp(other) <= 0

    def __ge__(self, other):
        return self._cmp(other) >= 0

    def __ne__(self, other):
        return self._cmp(other) != 0
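

# Illustrative sketch (not part of the original module): sorting two entries
# with a preference for zstd compression. The attribute name and values are
# made up for the example.
def _demo_prefer_zstd():
    prefers = [(b'COMPRESSION', b'zstd')]
    values = [{b'COMPRESSION': b'gzip'}, {b'COMPRESSION': b'zstd'}]
    items = sorted(clonebundleentry(v, prefers) for v in values)
    # The zstd entry compares "smaller" against the preference list and
    # therefore sorts first.
    return [i.value for i in items]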


def sortclonebundleentries(ui, entries):
    prefers = ui.configlist(b'ui', b'clonebundleprefers')
    if not prefers:
        return list(entries)

    def _split(p):
        if b'=' not in p:
            hint = _(b"each comma separated item should be key=value pairs")
            raise error.Abort(
                _(b"invalid ui.clonebundleprefers item: %s") % p, hint=hint
            )
        return p.split(b'=', 1)

    prefers = [_split(p) for p in prefers]

    items = sorted(clonebundleentry(v, prefers) for v in entries)
    return [i.value for i in items]
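
# A client would feed the prefers list above from its configuration, for
# example (illustrative attribute name and value):
#
#   [ui]
#   clonebundleprefers = COMPRESSION=zstd
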
@@ -1,2881 +1,2892 @@
# exchange.py - utility to exchange data between repos.
#
# Copyright 2005-2007 Olivia Mackall <olivia@selenic.com>
#
# This software may be used and distributed according to the terms of the
# GNU General Public License version 2 or any later version.


import collections
import weakref

from .i18n import _
from .node import (
    hex,
    nullrev,
)
from . import (
    bookmarks as bookmod,
    bundle2,
    bundlecaches,
    changegroup,
    discovery,
    error,
    lock as lockmod,
    logexchange,
    narrowspec,
    obsolete,
    obsutil,
    phases,
    pushkey,
    pycompat,
    requirements,
    scmutil,
    streamclone,
    url as urlmod,
    util,
    wireprototypes,
)
from .utils import (
    hashutil,
    stringutil,
    urlutil,
)
from .interfaces import repository

urlerr = util.urlerr
urlreq = util.urlreq

_NARROWACL_SECTION = b'narrowacl'


def readbundle(ui, fh, fname, vfs=None):
    header = changegroup.readexactly(fh, 4)

    alg = None
    if not fname:
        fname = b"stream"
        if not header.startswith(b'HG') and header.startswith(b'\0'):
            fh = changegroup.headerlessfixup(fh, header)
            header = b"HG10"
            alg = b'UN'
    elif vfs:
        fname = vfs.join(fname)

    magic, version = header[0:2], header[2:4]

    if magic != b'HG':
        raise error.Abort(_(b'%s: not a Mercurial bundle') % fname)
    if version == b'10':
        if alg is None:
            alg = changegroup.readexactly(fh, 2)
        return changegroup.cg1unpacker(fh, alg)
    elif version.startswith(b'2'):
        return bundle2.getunbundler(ui, fh, magicstring=magic + version)
    elif version == b'S1':
        return streamclone.streamcloneapplier(fh)
    else:
        raise error.Abort(
            _(b'%s: unknown bundle version %s') % (fname, version)
        )


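# Standalone sketch (not part of the original module): the dispatch above
# keyed on the four header bytes, without performing any I/O.
def _classify_bundle_header(header):
    if not header.startswith(b'HG'):
        # Headerless changegroups start with a NUL byte and are fixed up
        # into bundle1 ("HG10UN") above.
        return b'headerless' if header.startswith(b'\0') else None
    version = header[2:4]
    if version == b'10':
        return b'bundle1'
    if version.startswith(b'2'):
        return b'bundle2'
    if version == b'S1':
        return b'stream'
    return None


assert _classify_bundle_header(b'HG20') == b'bundle2'

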
def _format_params(params):
    parts = []
    for key, value in sorted(params.items()):
        value = urlreq.quote(value)
        parts.append(b"%s=%s" % (key, value))
    return b';'.join(parts)


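# For instance (illustrative), _format_params({b'obsolescence': b'yes',
# b'cg.version': b'03'}) yields b'cg.version=03;obsolescence=yes': keys are
# emitted in sorted order and values are URL-quoted.

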
def getbundlespec(ui, fh):
    """Infer the bundlespec from a bundle file handle.

    The input file handle is seeked and the original seek position is not
    restored.
    """

    def speccompression(alg):
        try:
            return util.compengines.forbundletype(alg).bundletype()[0]
        except KeyError:
            return None

    params = {}

    b = readbundle(ui, fh, None)
    if isinstance(b, changegroup.cg1unpacker):
        alg = b._type
        if alg == b'_truncatedBZ':
            alg = b'BZ'
        comp = speccompression(alg)
        if not comp:
            raise error.Abort(_(b'unknown compression algorithm: %s') % alg)
        return b'%s-v1' % comp
    elif isinstance(b, bundle2.unbundle20):
        if b'Compression' in b.params:
            comp = speccompression(b.params[b'Compression'])
            if not comp:
                raise error.Abort(
                    _(b'unknown compression algorithm: %s')
                    % b.params[b'Compression']
                )
        else:
            comp = b'none'

        version = None
        for part in b.iterparts():
            if part.type == b'changegroup':
                cgversion = part.params[b'version']
                if cgversion in (b'01', b'02'):
                    version = b'v2'
                elif cgversion in (b'03',):
                    version = b'v2'
                    params[b'cg.version'] = cgversion
                else:
                    raise error.Abort(
                        _(
                            b'changegroup version %s does not have '
                            b'a known bundlespec'
                        )
                        % cgversion,
                        hint=_(b'try upgrading your Mercurial client'),
                    )
            elif part.type == b'stream2' and version is None:
                # A stream2 part is required to be part of a v2 bundle.
                requirements = urlreq.unquote(part.params[b'requirements'])
                splitted = requirements.split()
                params = bundle2._formatrequirementsparams(splitted)
                return b'none-v2;stream=v2;%s' % params
            elif part.type == b'stream3-exp' and version is None:
                # A stream3 part is required to be part of a v2 bundle.
                requirements = urlreq.unquote(part.params[b'requirements'])
                splitted = requirements.split()
                params = bundle2._formatrequirementsparams(splitted)
                return b'none-v2;stream=v3-exp;%s' % params
            elif part.type == b'obsmarkers':
                params[b'obsolescence'] = b'yes'
                if not part.mandatory:
                    params[b'obsolescence-mandatory'] = b'no'

        if not version:
            raise error.Abort(
                _(b'could not identify changegroup version in bundle')
            )
        spec = b'%s-%s' % (comp, version)
        if params:
            spec += b';'
            spec += _format_params(params)
        return spec

    elif isinstance(b, streamclone.streamcloneapplier):
        requirements = streamclone.readbundle1header(fh)[2]
        formatted = bundle2._formatrequirementsparams(requirements)
        return b'none-packed1;%s' % formatted
    else:
        raise error.Abort(_(b'unknown bundle type: %s') % b)


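# The inferred specs come out in forms like these (illustrative values):
#
#   b'bzip2-v1'                  # bundle1 with BZ compression
#   b'zstd-v2;obsolescence=yes'  # bundle2 carrying an obsmarkers part
#   b'none-v2;stream=v2;requirements%3Dgeneraldelta%2Crevlogv1'  # stream

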
def _computeoutgoing(repo, heads, common):
    """Computes which revs are outgoing given a set of common
    and a set of heads.

    This is a separate function so extensions can have access to
    the logic.

    Returns a discovery.outgoing object.
    """
    cl = repo.changelog
    if common:
        hasnode = cl.hasnode
        common = [n for n in common if hasnode(n)]
    else:
        common = [repo.nullid]
    if not heads:
        heads = cl.heads()
    return discovery.outgoing(repo, common, heads)


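# Standalone sketch (not part of the original module): in set terms, the
# outgoing revisions are ::heads minus ::common; with no known common nodes
# the null revision stands in, so everything under heads counts as missing.
def _demo_outgoing_sets():
    ancestors_of_common = {0, 1, 2}
    ancestors_of_heads = {0, 1, 2, 3, 4}
    return ancestors_of_heads - ancestors_of_common  # {3, 4} are outgoing

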
def _checkpublish(pushop):
    repo = pushop.repo
    ui = repo.ui
    behavior = ui.config(b'experimental', b'auto-publish')
    if pushop.publish or behavior not in (b'warn', b'confirm', b'abort'):
        return
    remotephases = listkeys(pushop.remote, b'phases')
    if not remotephases.get(b'publishing', False):
        return

    if pushop.revs is None:
        published = repo.filtered(b'served').revs(b'not public()')
    else:
        published = repo.revs(b'::%ln - public()', pushop.revs)
        # we want to use pushop.revs in the revset even if they themselves are
        # secret, but we don't want to have anything that the server won't see
        # in the result of this expression
        published &= repo.filtered(b'served')
    if published:
        if behavior == b'warn':
            ui.warn(
                _(b'%i changesets about to be published\n') % len(published)
            )
        elif behavior == b'confirm':
            if ui.promptchoice(
                _(b'push and publish %i changesets (yn)?$$ &Yes $$ &No')
                % len(published)
            ):
                raise error.CanceledError(_(b'user quit'))
        elif behavior == b'abort':
            msg = _(b'push would publish %i changesets') % len(published)
            hint = _(
                b"use --publish or adjust 'experimental.auto-publish'"
                b" config"
            )
            raise error.Abort(msg, hint=hint)


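# Example configuration (illustrative) for the check above:
#
#   [experimental]
#   auto-publish = confirm   # or: warn, abort

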
def _forcebundle1(op):
    """return true if a pull/push must use bundle1

    This function is used to allow testing of the older bundle version"""
    ui = op.repo.ui
    # The goal of this config is to allow developers to choose which bundle
    # version is used during exchange. This is especially handy during tests.
    # The value is a list of bundle versions to pick from; the highest
    # supported version should be used.
    #
    # developer config: devel.legacy.exchange
    exchange = ui.configlist(b'devel', b'legacy.exchange')
    forcebundle1 = b'bundle2' not in exchange and b'bundle1' in exchange
    return forcebundle1 or not op.remote.capable(b'bundle2')


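# For example (illustrative), a test could force the legacy format with:
#
#   [devel]
#   legacy.exchange = bundle1

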
class pushoperation:
    """An object that represents a single push operation.

    Its purpose is to carry push related state and very common operations.

    A new pushoperation should be created at the beginning of each push and
    discarded afterward.
    """

    def __init__(
        self,
        repo,
        remote,
        force=False,
        revs=None,
        newbranch=False,
        bookmarks=(),
        publish=False,
        pushvars=None,
    ):
        # repo we push from
        self.repo = repo
        self.ui = repo.ui
        # repo we push to
        self.remote = remote
        # force option provided
        self.force = force
        # revs to be pushed (None is "all")
        self.revs = revs
        # bookmarks explicitly pushed
        self.bookmarks = bookmarks
        # allow push of new branch
        self.newbranch = newbranch
        # steps already performed
        # (used to check what steps have been already performed through bundle2)
        self.stepsdone = set()
        # Integer version of the changegroup push result
        # - None means nothing to push
        # - 0 means HTTP error
        # - 1 means we pushed and remote head count is unchanged *or*
        #   we have outgoing changesets but refused to push
        # - other values as described by addchangegroup()
        self.cgresult = None
        # Boolean value for the bookmark push
        self.bkresult = None
        # discovery.outgoing object (contains common and outgoing data)
        self.outgoing = None
        # all remote topological heads before the push
        self.remoteheads = None
        # Details of the remote branch pre and post push
        #
        # mapping: {'branch': ([remoteheads],
        #                      [newheads],
        #                      [unsyncedheads],
        #                      [discardedheads])}
        # - branch: the branch name
        # - remoteheads: the list of remote heads known locally
        #                None if the branch is new
        # - newheads: the new remote heads (known locally) with outgoing pushed
        # - unsyncedheads: the list of remote heads unknown locally.
        # - discardedheads: the list of remote heads made obsolete by the push
        self.pushbranchmap = None
        # testable as a boolean indicating if any nodes are missing locally.
        self.incoming = None
        # summary of the remote phase situation
        self.remotephases = None
        # phase changes that must be pushed alongside the changesets
        self.outdatedphases = None
        # phase changes that must be pushed if the changeset push fails
        self.fallbackoutdatedphases = None
        # outgoing obsmarkers
        self.outobsmarkers = set()
        # outgoing bookmarks, list of (bm, oldnode | '', newnode | '')
        self.outbookmarks = []
        # transaction manager
        self.trmanager = None
        # map { pushkey partid -> callback handling failure}
        # used to handle exception from mandatory pushkey part failure
        self.pkfailcb = {}
        # an iterable of pushvars or None
        self.pushvars = pushvars
        # publish pushed changesets
        self.publish = publish

    @util.propertycache
    def futureheads(self):
        """future remote heads if the changeset push succeeds"""
        return self.outgoing.ancestorsof

    @util.propertycache
    def fallbackheads(self):
        """future remote heads if the changeset push fails"""
        if self.revs is None:
            # no target to push, all common heads are relevant
            return self.outgoing.commonheads
        unfi = self.repo.unfiltered()
        # I want cheads = heads(::ancestorsof and ::commonheads)
        # (ancestorsof is revs with secret changesets filtered out)
        #
        # This can be expressed as:
        #     cheads = ((ancestorsof and ::commonheads)
        #               + (commonheads and ::ancestorsof))
        #
        # while trying to push we already computed the following:
        #     common = (::commonheads)
        #     missing = ((commonheads::ancestorsof) - commonheads)
        #
        # We can pick:
        # * the ancestorsof part of common (::commonheads)
        common = self.outgoing.common
        rev = self.repo.changelog.index.rev
        cheads = [node for node in self.revs if rev(node) in common]
        # and
        # * the commonheads parents on missing
        revset = unfi.set(
            b'%ln and parents(roots(%ln))',
            self.outgoing.commonheads,
            self.outgoing.missing,
        )
        cheads.extend(c.node() for c in revset)
        return cheads

    @property
    def commonheads(self):
        """set of all common heads after changeset bundle push"""
        if self.cgresult:
            return self.futureheads
        else:
            return self.fallbackheads


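# Illustrative summary (not part of the original module) of the commonheads
# property above: after a successful changegroup push the remote can be
# assumed to have everything we sent; otherwise only the fallback set holds.
def _demo_commonheads(cgresult, futureheads, fallbackheads):
    return futureheads if cgresult else fallbackheads

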
# mapping of messages used when pushing bookmarks
bookmsgmap = {
    b'update': (
        _(b"updating bookmark %s\n"),
        _(b'updating bookmark %s failed\n'),
    ),
    b'export': (
        _(b"exporting bookmark %s\n"),
        _(b'exporting bookmark %s failed\n'),
    ),
    b'delete': (
        _(b"deleting remote bookmark %s\n"),
        _(b'deleting remote bookmark %s failed\n'),
    ),
}


def push(
    repo,
    remote,
    force=False,
    revs=None,
    newbranch=False,
    bookmarks=(),
    publish=False,
    opargs=None,
):
    """Push outgoing changesets (limited by revs) from a local
    repository to remote. Return an integer:
      - None means nothing to push
      - 0 means HTTP error
      - 1 means we pushed and remote head count is unchanged *or*
        we have outgoing changesets but refused to push
      - other values as described by addchangegroup()
    """
    if opargs is None:
        opargs = {}
    pushop = pushoperation(
        repo,
        remote,
        force,
        revs,
        newbranch,
        bookmarks,
        publish,
        **pycompat.strkwargs(opargs)
    )
    if pushop.remote.local():
        missing = (
            set(pushop.repo.requirements) - pushop.remote.local().supported
        )
        if missing:
            msg = _(
                b"required features are not"
                b" supported in the destination:"
                b" %s"
            ) % (b', '.join(sorted(missing)))
            raise error.Abort(msg)

    if not pushop.remote.canpush():
        raise error.Abort(_(b"destination does not support push"))

    if not pushop.remote.capable(b'unbundle'):
        raise error.Abort(
            _(
                b'cannot push: destination does not support the '
                b'unbundle wire protocol command'
            )
        )
    for category in sorted(bundle2.read_remote_wanted_sidedata(pushop.remote)):
        # Check that a computer is registered for that category for at least
        # one revlog kind.
        for kind, computers in repo._sidedata_computers.items():
            if computers.get(category):
                break
        else:
            raise error.Abort(
                _(
                    b'cannot push: required sidedata category not supported'
                    b" by this client: '%s'"
                )
                % pycompat.bytestr(category)
            )
    # get lock as we might write phase data
    wlock = lock = None
    try:
        # bundle2 push may receive a reply bundle touching bookmarks
        # requiring the wlock. Take it now to ensure proper ordering.
        maypushback = pushop.ui.configbool(b'experimental', b'bundle2.pushback')
        if (
            (not _forcebundle1(pushop))
            and maypushback
            and not bookmod.bookmarksinstore(repo)
        ):
            wlock = pushop.repo.wlock()
        lock = pushop.repo.lock()
        pushop.trmanager = transactionmanager(
            pushop.repo, b'push-response', pushop.remote.url()
        )
    except error.LockUnavailable as err:
        # source repo cannot be locked.
        # We do not abort the push, but just disable the local phase
        # synchronisation.
        msg = b'cannot lock source repository: %s\n' % stringutil.forcebytestr(
            err
        )
        pushop.ui.debug(msg)

    with wlock or util.nullcontextmanager():
        with lock or util.nullcontextmanager():
            with pushop.trmanager or util.nullcontextmanager():
                pushop.repo.checkpush(pushop)
                _checkpublish(pushop)
                _pushdiscovery(pushop)
                if not pushop.force:
                    _checksubrepostate(pushop)
                if not _forcebundle1(pushop):
                    _pushbundle2(pushop)
                _pushchangeset(pushop)
                _pushsyncphase(pushop)
                _pushobsolete(pushop)
                _pushbookmark(pushop)

    if repo.ui.configbool(b'experimental', b'remotenames'):
        logexchange.pullremotenames(repo, remote)

    return pushop


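# Minimal usage sketch (illustrative; assumes `repo` and a `remote` peer were
# obtained elsewhere, e.g. through mercurial.hg.repository() and hg.peer()):
def _demo_push_tip(repo, remote):
    pushop = push(repo, remote, revs=[repo[b'tip'].node()])
    return pushop.cgresult

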
# list of steps to perform discovery before push
pushdiscoveryorder = []

# Mapping between step name and function
#
# This exists to help extensions wrap steps if necessary
pushdiscoverymapping = {}


def pushdiscovery(stepname):
    """decorator for a function performing discovery before push

    The function is added to the step -> function mapping and appended to the
    list of steps. Beware that decorated functions will be added in order
    (this may matter).

    You can only use this decorator for a new step; if you want to wrap a step
    from an extension, change the pushdiscoverymapping dictionary directly."""

    def dec(func):
        assert stepname not in pushdiscoverymapping
        pushdiscoverymapping[stepname] = func
        pushdiscoveryorder.append(stepname)
        return func

    return dec


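# An extension would register its own step like this (illustrative; the step
# name b'demo' is made up):
#
#     @pushdiscovery(b'demo')
#     def _pushdiscoverydemo(pushop):
#         pushop.ui.debug(b'demo discovery step ran\n')

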
def _pushdiscovery(pushop):
    """Run all discovery steps"""
    for stepname in pushdiscoveryorder:
        step = pushdiscoverymapping[stepname]
        step(pushop)


def _checksubrepostate(pushop):
    """Ensure all outgoing referenced subrepo revisions are present locally"""

    repo = pushop.repo

    # If the repository does not use subrepos, skip the expensive
    # manifest checks.
    if not len(repo.file(b'.hgsub')) or not len(repo.file(b'.hgsubstate')):
        return

    for n in pushop.outgoing.missing:
        ctx = repo[n]

        if b'.hgsub' in ctx.manifest() and b'.hgsubstate' in ctx.files():
            for subpath in sorted(ctx.substate):
                sub = ctx.sub(subpath)
                sub.verify(onpush=True)


@pushdiscovery(b'changeset')
def _pushdiscoverychangeset(pushop):
    """discover the changesets that need to be pushed"""
    fci = discovery.findcommonincoming
    if pushop.revs:
        commoninc = fci(
            pushop.repo,
            pushop.remote,
            force=pushop.force,
            ancestorsof=pushop.revs,
        )
    else:
        commoninc = fci(pushop.repo, pushop.remote, force=pushop.force)
    common, inc, remoteheads = commoninc
    fco = discovery.findcommonoutgoing
    outgoing = fco(
        pushop.repo,
        pushop.remote,
        onlyheads=pushop.revs,
        commoninc=commoninc,
        force=pushop.force,
    )
    pushop.outgoing = outgoing
    pushop.remoteheads = remoteheads
    pushop.incoming = inc


@pushdiscovery(b'phase')
def _pushdiscoveryphase(pushop):
    """discover the phases that need to be pushed

    (computed for both success and failure case for changesets push)"""
    outgoing = pushop.outgoing
    unfi = pushop.repo.unfiltered()
    remotephases = listkeys(pushop.remote, b'phases')

    if (
        pushop.ui.configbool(b'ui', b'_usedassubrepo')
        and remotephases  # server supports phases
        and not pushop.outgoing.missing  # no changesets to be pushed
        and remotephases.get(b'publishing', False)
    ):
        # When:
        # - this is a subrepo push
        # - and the remote supports phases
        # - and no changesets are to be pushed
        # - and the remote is publishing
        # We may be in issue 3781 case!
        # We drop the possible phase synchronisation done by
        # courtesy to publish changesets possibly locally draft
        # on the remote.
        pushop.outdatedphases = []
        pushop.fallbackoutdatedphases = []
        return

    pushop.remotephases = phases.remotephasessummary(
        pushop.repo, pushop.fallbackheads, remotephases
    )
    droots = pushop.remotephases.draftroots

    extracond = b''
    if not pushop.remotephases.publishing:
        extracond = b' and public()'
    revset = b'heads((%%ln::%%ln) %s)' % extracond
    # Get the list of all revs draft on remote but public here.
    # XXX Beware that the revset breaks if droots is not strictly roots;
    # XXX we may want to ensure it is, but that is costly.
    fallback = list(unfi.set(revset, droots, pushop.fallbackheads))
    if not pushop.remotephases.publishing and pushop.publish:
        future = list(
            unfi.set(
                b'%ln and (not public() or %ln::)', pushop.futureheads, droots
            )
        )
    elif not outgoing.missing:
        future = fallback
    else:
        # adds the changesets we are going to push as draft
        #
        # should not be necessary for publishing server, but because of an
        # issue fixed in xxxxx we have to do it anyway.
        fdroots = list(
            unfi.set(b'roots(%ln + %ln::)', outgoing.missing, droots)
        )
        fdroots = [f.node() for f in fdroots]
        future = list(unfi.set(revset, fdroots, pushop.futureheads))
    pushop.outdatedphases = future
    pushop.fallbackoutdatedphases = fallback


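# Worked example (illustrative): with remote draft roots {D} and fallback
# heads {H}, the revset b'heads((%ln::%ln) and public())' selects the topmost
# changesets between D and H that are public locally but still draft on the
# remote -- exactly the phase updates the remote is missing.

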
@pushdiscovery(b'obsmarker')
def _pushdiscoveryobsmarkers(pushop):
    if not obsolete.isenabled(pushop.repo, obsolete.exchangeopt):
        return

    if not pushop.repo.obsstore:
        return

    if b'obsolete' not in listkeys(pushop.remote, b'namespaces'):
        return

    repo = pushop.repo
    # very naive computation that can be quite expensive on big repos.
    # However: evolution is currently slow on them anyway.
    nodes = (c.node() for c in repo.set(b'::%ln', pushop.futureheads))
    pushop.outobsmarkers = pushop.repo.obsstore.relevantmarkers(nodes)


@pushdiscovery(b'bookmarks')
def _pushdiscoverybookmarks(pushop):
    ui = pushop.ui
    repo = pushop.repo.unfiltered()
    remote = pushop.remote
    ui.debug(b"checking for updated bookmarks\n")
    ancestors = ()
    if pushop.revs:
        revnums = pycompat.maplist(repo.changelog.rev, pushop.revs)
        ancestors = repo.changelog.ancestors(revnums, inclusive=True)

    remotebookmark = bookmod.unhexlifybookmarks(listkeys(remote, b'bookmarks'))

    explicit = {
        repo._bookmarks.expandname(bookmark) for bookmark in pushop.bookmarks
    }

    comp = bookmod.comparebookmarks(repo, repo._bookmarks, remotebookmark)
    return _processcompared(pushop, ancestors, explicit, remotebookmark, comp)


def _processcompared(pushop, pushed, explicit, remotebms, comp):
    """decide which bookmarks to push to the remote repo

    Exists to help extensions alter this behavior.
    """
    addsrc, adddst, advsrc, advdst, diverge, differ, invalid, same = comp
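    # The buckets returned by bookmod.comparebookmarks, roughly: bookmarks
    # added locally (addsrc) or remotely (adddst), fast-forwarded locally
    # (advsrc) or remotely (advdst), diverged, differing, invalid on both
    # sides, and identical on both sides.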

    repo = pushop.repo

    for b, scid, dcid in advsrc:
        if b in explicit:
            explicit.remove(b)
        if not pushed or repo[scid].rev() in pushed:
            pushop.outbookmarks.append((b, dcid, scid))
    # search for added bookmarks
    for b, scid, dcid in addsrc:
        if b in explicit:
            explicit.remove(b)
        if bookmod.isdivergent(b):
            pushop.ui.warn(_(b'cannot push divergent bookmark %s!\n') % b)
            pushop.bkresult = 2
        else:
            pushop.outbookmarks.append((b, b'', scid))
    # search for overwritten bookmarks
    for b, scid, dcid in list(advdst) + list(diverge) + list(differ):
        if b in explicit:
            explicit.remove(b)
        pushop.outbookmarks.append((b, dcid, scid))
    # search for bookmarks to delete
    for b, scid, dcid in adddst:
        if b in explicit:
            explicit.remove(b)
        # treat as "deleted locally"
        pushop.outbookmarks.append((b, dcid, b''))
    # identical bookmarks shouldn't get reported
    for b, scid, dcid in same:
        if b in explicit:
            explicit.remove(b)

    if explicit:
        explicit = sorted(explicit)
        # we should probably list all of them
        pushop.ui.warn(
            _(
                b'bookmark %s does not exist on the local '
                b'or remote repository!\n'
            )
            % explicit[0]
        )
        pushop.bkresult = 2

    pushop.outbookmarks.sort()


def _pushcheckoutgoing(pushop):
    outgoing = pushop.outgoing
    unfi = pushop.repo.unfiltered()
    if not outgoing.missing:
        # nothing to push
        scmutil.nochangesfound(unfi.ui, unfi, outgoing.excluded)
        return False
    # something to push
    if not pushop.force:
        # if repo.obsstore == False --> no obsolete
        # then, save the iteration
        if unfi.obsstore:
            # these messages are defined up front to stay within the 80 char
            # limit below
            mso = _(b"push includes obsolete changeset: %s!")
            mspd = _(b"push includes phase-divergent changeset: %s!")
            mscd = _(b"push includes content-divergent changeset: %s!")
            mst = {
                b"orphan": _(b"push includes orphan changeset: %s!"),
                b"phase-divergent": mspd,
                b"content-divergent": mscd,
            }
            # If there is at least one obsolete or unstable changeset in
            # missing, then at least one of the missing heads will be
            # obsolete or unstable. So checking heads only is ok.
            for node in outgoing.ancestorsof:
                ctx = unfi[node]
                if ctx.obsolete():
                    raise error.Abort(mso % ctx)
                elif ctx.isunstable():
                    # TODO print more than one instability in the abort
                    # message
                    raise error.Abort(mst[ctx.instabilities()[0]] % ctx)

    discovery.checkheads(pushop)
    return True


# List of names of steps to perform for an outgoing bundle2, order matters.
b2partsgenorder = []

# Mapping between step name and function
#
# This exists to help extensions wrap steps if necessary
b2partsgenmapping = {}


def b2partsgenerator(stepname, idx=None):
    """decorator for function generating bundle2 part

    The function is added to the step -> function mapping and appended to the
    list of steps. Beware that decorated functions will be added in order
    (this may matter).

    You can only use this decorator for new steps; if you want to wrap a step
    from an extension, modify the b2partsgenmapping dictionary directly."""

    def dec(func):
        assert stepname not in b2partsgenmapping
        b2partsgenmapping[stepname] = func
        if idx is None:
            b2partsgenorder.append(stepname)
        else:
            b2partsgenorder.insert(idx, stepname)
        return func

    return dec


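# For illustration only: a hypothetical extension could register an extra
# part generator with the decorator above (the part name, function name and
# payload below are made up, not part of Mercurial):
#
#     @b2partsgenerator(b'my-extension-part')
#     def _pushb2myextension(pushop, bundler):
#         if b'my-extension-part' in pushop.stepsdone:
#             return
#         pushop.stepsdone.add(b'my-extension-part')
#         bundler.newpart(b'my-extension-part', data=b'...')

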
def _pushb2ctxcheckheads(pushop, bundler):
    """Generate race condition checking parts

    Exists as an independent function to aid extensions
    """
    # * 'force' does not check for push races,
    # * if we don't push anything, there is nothing to check.
    if not pushop.force and pushop.outgoing.ancestorsof:
        allowunrelated = b'related' in bundler.capabilities.get(
            b'checkheads', ()
        )
        emptyremote = pushop.pushbranchmap is None
        if not allowunrelated or emptyremote:
            bundler.newpart(b'check:heads', data=iter(pushop.remoteheads))
        else:
            affected = set()
            for branch, heads in pushop.pushbranchmap.items():
                remoteheads, newheads, unsyncedheads, discardedheads = heads
                if remoteheads is not None:
                    remote = set(remoteheads)
                    affected |= set(discardedheads) & remote
                    affected |= remote - set(newheads)
            if affected:
                data = iter(sorted(affected))
                bundler.newpart(b'check:updated-heads', data=data)
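
# In short: when the server cannot check unrelated heads (no b'related'
# value in its checkheads capability) or looks empty, the full
# b'check:heads' part is emitted; otherwise only the heads the push actually
# rewrites go into a b'check:updated-heads' part, which tolerates unrelated
# concurrent activity on the remote.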


def _pushing(pushop):
    """return True if we are pushing anything"""
    return bool(
        pushop.outgoing.missing
        or pushop.outdatedphases
        or pushop.outobsmarkers
        or pushop.outbookmarks
    )


@b2partsgenerator(b'check-bookmarks')
def _pushb2checkbookmarks(pushop, bundler):
    """insert bookmark move checking"""
    if not _pushing(pushop) or pushop.force:
        return
    b2caps = bundle2.bundle2caps(pushop.remote)
    hasbookmarkcheck = b'bookmarks' in b2caps
    if not (pushop.outbookmarks and hasbookmarkcheck):
        return
    data = []
    for book, old, new in pushop.outbookmarks:
        data.append((book, old))
    checkdata = bookmod.binaryencode(pushop.repo, data)
    bundler.newpart(b'check:bookmarks', data=checkdata)


@b2partsgenerator(b'check-phases')
def _pushb2checkphases(pushop, bundler):
    """insert phase move checking"""
    if not _pushing(pushop) or pushop.force:
        return
    b2caps = bundle2.bundle2caps(pushop.remote)
    hasphaseheads = b'heads' in b2caps.get(b'phases', ())
    if pushop.remotephases is not None and hasphaseheads:
        # check that the remote phase has not changed
        checks = {p: [] for p in phases.allphases}
        checks[phases.public].extend(pushop.remotephases.publicheads)
        checks[phases.draft].extend(pushop.remotephases.draftroots)
        if any(checks.values()):
            for phase in checks:
                checks[phase].sort()
            checkdata = phases.binaryencode(checks)
            bundler.newpart(b'check:phases', data=checkdata)


@b2partsgenerator(b'changeset')
def _pushb2ctx(pushop, bundler):
    """handle changegroup push through bundle2

    addchangegroup result is stored in the ``pushop.cgresult`` attribute.
    """
    if b'changesets' in pushop.stepsdone:
        return
    pushop.stepsdone.add(b'changesets')
    # Send known heads to the server for race detection.
    if not _pushcheckoutgoing(pushop):
        return
    pushop.repo.prepushoutgoinghooks(pushop)

    _pushb2ctxcheckheads(pushop, bundler)

    b2caps = bundle2.bundle2caps(pushop.remote)
    version = b'01'
    cgversions = b2caps.get(b'changegroup')
    if cgversions:  # 3.1 and 3.2 ship with an empty value
        cgversions = [
            v
            for v in cgversions
            if v in changegroup.supportedoutgoingversions(pushop.repo)
        ]
        if not cgversions:
            raise error.Abort(_(b'no common changegroup version'))
        version = max(cgversions)

    remote_sidedata = bundle2.read_remote_wanted_sidedata(pushop.remote)
    cgstream = changegroup.makestream(
        pushop.repo,
        pushop.outgoing,
        version,
        b'push',
        bundlecaps=b2caps,
        remote_sidedata=remote_sidedata,
    )
    cgpart = bundler.newpart(b'changegroup', data=cgstream)
    if cgversions:
        cgpart.addparam(b'version', version)
    if scmutil.istreemanifest(pushop.repo):
        cgpart.addparam(b'treemanifest', b'1')
    if repository.REPO_FEATURE_SIDE_DATA in pushop.repo.features:
        cgpart.addparam(b'exp-sidedata', b'1')

    def handlereply(op):
        """extract addchangegroup returns from server reply"""
        cgreplies = op.records.getreplies(cgpart.id)
        assert len(cgreplies[b'changegroup']) == 1
        pushop.cgresult = cgreplies[b'changegroup'][0][b'return']

    return handlereply


@b2partsgenerator(b'phase')
def _pushb2phases(pushop, bundler):
    """handle phase push through bundle2"""
    if b'phases' in pushop.stepsdone:
        return
    b2caps = bundle2.bundle2caps(pushop.remote)
    ui = pushop.repo.ui

    legacyphase = b'phases' in ui.configlist(b'devel', b'legacy.exchange')
    haspushkey = b'pushkey' in b2caps
    hasphaseheads = b'heads' in b2caps.get(b'phases', ())

    if hasphaseheads and not legacyphase:
        return _pushb2phaseheads(pushop, bundler)
    elif haspushkey:
        return _pushb2phasespushkey(pushop, bundler)
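
# The binary b'phase-heads' part is preferred whenever the server advertises
# it and the devel.legacy.exchange knob does not force the old behaviour;
# otherwise phases fall back to one pushkey part per head, as implemented by
# the two helpers below.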


def _pushb2phaseheads(pushop, bundler):
    """push phase information through a bundle2 - binary part"""
    pushop.stepsdone.add(b'phases')
    if pushop.outdatedphases:
        updates = {p: [] for p in phases.allphases}
        # updates[0] is the bucket for phases.public (== 0): the heads that
        # should now be public
        updates[0].extend(h.node() for h in pushop.outdatedphases)
        phasedata = phases.binaryencode(updates)
        bundler.newpart(b'phase-heads', data=phasedata)


def _pushb2phasespushkey(pushop, bundler):
    """push phase information through a bundle2 - pushkey part"""
    pushop.stepsdone.add(b'phases')
    part2node = []

    def handlefailure(pushop, exc):
        targetid = int(exc.partid)
        for partid, node in part2node:
            if partid == targetid:
                raise error.Abort(_(b'updating %s to public failed') % node)

    enc = pushkey.encode
    for newremotehead in pushop.outdatedphases:
        part = bundler.newpart(b'pushkey')
        part.addparam(b'namespace', enc(b'phases'))
        part.addparam(b'key', enc(newremotehead.hex()))
        part.addparam(b'old', enc(b'%d' % phases.draft))
        part.addparam(b'new', enc(b'%d' % phases.public))
        part2node.append((part.id, newremotehead))
        pushop.pkfailcb[part.id] = handlefailure

    def handlereply(op):
        for partid, node in part2node:
            partrep = op.records.getreplies(partid)
            results = partrep[b'pushkey']
            assert len(results) <= 1
            msg = None
            if not results:
                msg = _(b'server ignored update of %s to public!\n') % node
            elif not int(results[0][b'return']):
                msg = _(b'updating %s to public failed!\n') % node
            if msg is not None:
                pushop.ui.warn(msg)

    return handlereply


@b2partsgenerator(b'obsmarkers')
def _pushb2obsmarkers(pushop, bundler):
    if b'obsmarkers' in pushop.stepsdone:
        return
    remoteversions = bundle2.obsmarkersversion(bundler.capabilities)
    if obsolete.commonversion(remoteversions) is None:
        return
    pushop.stepsdone.add(b'obsmarkers')
    if pushop.outobsmarkers:
        markers = obsutil.sortedmarkers(pushop.outobsmarkers)
        bundle2.buildobsmarkerspart(bundler, markers)


@b2partsgenerator(b'bookmarks')
def _pushb2bookmarks(pushop, bundler):
    """handle bookmark push through bundle2"""
    if b'bookmarks' in pushop.stepsdone:
        return
    b2caps = bundle2.bundle2caps(pushop.remote)

    legacy = pushop.repo.ui.configlist(b'devel', b'legacy.exchange')
    legacybooks = b'bookmarks' in legacy

    if not legacybooks and b'bookmarks' in b2caps:
        return _pushb2bookmarkspart(pushop, bundler)
    elif b'pushkey' in b2caps:
        return _pushb2bookmarkspushkey(pushop, bundler)


def _bmaction(old, new):
    """small utility for bookmark pushing"""
    if not old:
        return b'export'
    elif not new:
        return b'delete'
    return b'update'
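
# _bmaction maps (old, new) to an action name: no old node means b'export'
# (the bookmark is new on the remote), no new node means b'delete', and
# anything else is a b'update'.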


def _abortonsecretctx(pushop, node, b):
    """abort if a given bookmark points to a secret changeset"""
    if node and pushop.repo[node].phase() == phases.secret:
        raise error.Abort(
            _(b'cannot push bookmark %s as it points to a secret changeset') % b
        )


def _pushb2bookmarkspart(pushop, bundler):
    pushop.stepsdone.add(b'bookmarks')
    if not pushop.outbookmarks:
        return

    allactions = []
    data = []
    for book, old, new in pushop.outbookmarks:
        _abortonsecretctx(pushop, new, book)
        data.append((book, new))
        allactions.append((book, _bmaction(old, new)))
    checkdata = bookmod.binaryencode(pushop.repo, data)
    bundler.newpart(b'bookmarks', data=checkdata)

    def handlereply(op):
        ui = pushop.ui
        # if success
        for book, action in allactions:
            ui.status(bookmsgmap[action][0] % book)

    return handlereply


def _pushb2bookmarkspushkey(pushop, bundler):
    pushop.stepsdone.add(b'bookmarks')
    part2book = []
    enc = pushkey.encode

    def handlefailure(pushop, exc):
        targetid = int(exc.partid)
        for partid, book, action in part2book:
            if partid == targetid:
                raise error.Abort(bookmsgmap[action][1].rstrip() % book)
        # we should not be called for a part we did not generate
        assert False

    for book, old, new in pushop.outbookmarks:
        _abortonsecretctx(pushop, new, book)
        part = bundler.newpart(b'pushkey')
        part.addparam(b'namespace', enc(b'bookmarks'))
        part.addparam(b'key', enc(book))
        part.addparam(b'old', enc(hex(old)))
        part.addparam(b'new', enc(hex(new)))
        action = b'update'
        if not old:
            action = b'export'
        elif not new:
            action = b'delete'
        part2book.append((part.id, book, action))
        pushop.pkfailcb[part.id] = handlefailure

    def handlereply(op):
        ui = pushop.ui
        for partid, book, action in part2book:
            partrep = op.records.getreplies(partid)
            results = partrep[b'pushkey']
            assert len(results) <= 1
            if not results:
                pushop.ui.warn(_(b'server ignored bookmark %s update\n') % book)
            else:
                ret = int(results[0][b'return'])
                if ret:
                    ui.status(bookmsgmap[action][0] % book)
                else:
                    ui.warn(bookmsgmap[action][1] % book)
                    if pushop.bkresult is not None:
                        pushop.bkresult = 1

    return handlereply


@b2partsgenerator(b'pushvars', idx=0)
def _getbundlesendvars(pushop, bundler):
    '''send shellvars via bundle2'''
    pushvars = pushop.pushvars
    if pushvars:
        shellvars = {}
        for raw in pushvars:
            if b'=' not in raw:
                msg = (
                    b"unable to parse variable '%s', should follow "
                    b"'KEY=VALUE' or 'KEY=' format"
                )
                raise error.Abort(msg % raw)
            k, v = raw.split(b'=', 1)
            shellvars[k] = v

        part = bundler.newpart(b'pushvars')

        for key, value in shellvars.items():
            part.addparam(key, value, mandatory=False)
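
# These variables typically come from `hg push --pushvars "KEY=VALUE"`; on
# the receiving side they are exposed to hooks as HG_USERVAR_* environment
# variables (see the pushvars documentation for details).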


def _pushbundle2(pushop):
    """push data to the remote using bundle2

    The only currently supported type of data is changegroup but this will
    evolve in the future."""
    bundler = bundle2.bundle20(pushop.ui, bundle2.bundle2caps(pushop.remote))
    pushback = pushop.trmanager and pushop.ui.configbool(
        b'experimental', b'bundle2.pushback'
    )

    # create reply capability
    capsblob = bundle2.encodecaps(
        bundle2.getrepocaps(pushop.repo, allowpushback=pushback, role=b'client')
    )
    bundler.newpart(b'replycaps', data=capsblob)
    replyhandlers = []
    for partgenname in b2partsgenorder:
        partgen = b2partsgenmapping[partgenname]
        ret = partgen(pushop, bundler)
        if callable(ret):
            replyhandlers.append(ret)
    # do not push if nothing to push: the b'replycaps' part is always
    # present, so a single part means an otherwise empty bundle
    if bundler.nbparts <= 1:
        return
    stream = util.chunkbuffer(bundler.getchunks())
    try:
        try:
            with pushop.remote.commandexecutor() as e:
                reply = e.callcommand(
                    b'unbundle',
                    {
                        b'bundle': stream,
                        b'heads': [b'force'],
                        b'url': pushop.remote.url(),
                    },
                ).result()
        except error.BundleValueError as exc:
            raise error.RemoteError(_(b'missing support for %s') % exc)
        try:
            trgetter = None
            if pushback:
                trgetter = pushop.trmanager.transaction
            op = bundle2.processbundle(
                pushop.repo,
                reply,
                trgetter,
                remote=pushop.remote,
            )
        except error.BundleValueError as exc:
            raise error.RemoteError(_(b'missing support for %s') % exc)
        except bundle2.AbortFromPart as exc:
            pushop.ui.error(_(b'remote: %s\n') % exc)
            if exc.hint is not None:
                pushop.ui.error(_(b'remote: %s\n') % (b'(%s)' % exc.hint))
            raise error.RemoteError(_(b'push failed on remote'))
    except error.PushkeyFailed as exc:
        partid = int(exc.partid)
        if partid not in pushop.pkfailcb:
            raise
        pushop.pkfailcb[partid](pushop, exc)
    for rephand in replyhandlers:
        rephand(op)
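
# Each registered part generator runs in b2partsgenorder order; a generator
# that returns a callable gets that callable invoked afterwards with the
# decoded bundle2 reply, which is how the changegroup, phase and bookmark
# outcomes above are reported back into the push operation.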


def _pushchangeset(pushop):
    """Make the actual push of changeset bundle to remote repo"""
    if b'changesets' in pushop.stepsdone:
        return
    pushop.stepsdone.add(b'changesets')
    if not _pushcheckoutgoing(pushop):
        return

    # Should have verified this in push().
    assert pushop.remote.capable(b'unbundle')

    pushop.repo.prepushoutgoinghooks(pushop)
    outgoing = pushop.outgoing
    # TODO: get bundlecaps from remote
    bundlecaps = None
    # create a changegroup from local
    if pushop.revs is None and not (
        outgoing.excluded or pushop.repo.changelog.filteredrevs
    ):
        # push everything,
        # use the fast path, no race possible on push
        cg = changegroup.makechangegroup(
            pushop.repo,
            outgoing,
            b'01',
            b'push',
            fastpath=True,
            bundlecaps=bundlecaps,
        )
    else:
        cg = changegroup.makechangegroup(
            pushop.repo, outgoing, b'01', b'push', bundlecaps=bundlecaps
        )

    # apply changegroup to remote
    # local repo finds heads on server, finds out what
    # revs it must push. once revs transferred, if server
    # finds it has different heads (someone else won
    # commit/push race), server aborts.
    if pushop.force:
        remoteheads = [b'force']
    else:
        remoteheads = pushop.remoteheads
    # ssh: return remote's addchangegroup()
    # http: return remote's addchangegroup() or 0 for error
    pushop.cgresult = pushop.remote.unbundle(cg, remoteheads, pushop.repo.url())


def _pushsyncphase(pushop):
    """synchronise phase information locally and remotely"""
    cheads = pushop.commonheads
    # even when we don't push, exchanging phase data is useful
    remotephases = listkeys(pushop.remote, b'phases')
    if (
        pushop.ui.configbool(b'ui', b'_usedassubrepo')
        and remotephases  # server supports phases
        and pushop.cgresult is None  # nothing was pushed
        and remotephases.get(b'publishing', False)
    ):
        # When:
        # - this is a subrepo push
        # - and remote supports phases
        # - and no changeset was pushed
        # - and remote is publishing
        # We may be in the issue 3871 case!
        # We drop the possible phase synchronisation done by
        # courtesy to publish changesets possibly locally draft
        # on the remote.
        remotephases = {b'publishing': b'True'}
    if not remotephases:  # old server or public only reply from non-publishing
        _localphasemove(pushop, cheads)
        # don't push any phase data as there is nothing to push
    else:
        ana = phases.analyzeremotephases(pushop.repo, cheads, remotephases)
        pheads, droots = ana
        ### Apply remote phase on local
        if remotephases.get(b'publishing', False):
            _localphasemove(pushop, cheads)
        else:  # publish = False
            _localphasemove(pushop, pheads)
            _localphasemove(pushop, cheads, phases.draft)
        ### Apply local phase on remote

        if pushop.cgresult:
            if b'phases' in pushop.stepsdone:
                # phases already pushed through bundle2
                return
            outdated = pushop.outdatedphases
        else:
            outdated = pushop.fallbackoutdatedphases

        pushop.stepsdone.add(b'phases')

        # filter heads already turned public by the push
        outdated = [c for c in outdated if c.node() not in pheads]
        # fallback to independent pushkey command
        for newremotehead in outdated:
            with pushop.remote.commandexecutor() as e:
                r = e.callcommand(
                    b'pushkey',
                    {
                        b'namespace': b'phases',
                        b'key': newremotehead.hex(),
                        b'old': b'%d' % phases.draft,
                        b'new': b'%d' % phases.public,
                    },
                ).result()

            if not r:
                pushop.ui.warn(
                    _(b'updating %s to public failed!\n') % newremotehead
                )


def _localphasemove(pushop, nodes, phase=phases.public):
    """move <nodes> to <phase> in the local source repo"""
    if pushop.trmanager:
        phases.advanceboundary(
            pushop.repo, pushop.trmanager.transaction(), phase, nodes
        )
    else:
        # repo is not locked, do not change any phases!
        # Inform the user that phases should have been moved when
        # applicable.
        actualmoves = [n for n in nodes if phase < pushop.repo[n].phase()]
        phasestr = phases.phasenames[phase]
        if actualmoves:
            pushop.ui.status(
                _(
                    b'cannot lock source repo, skipping '
                    b'local %s phase update\n'
                )
                % phasestr
            )


def _pushobsolete(pushop):
    """utility function to push obsolete markers to a remote"""
    if b'obsmarkers' in pushop.stepsdone:
        return
    repo = pushop.repo
    remote = pushop.remote
    pushop.stepsdone.add(b'obsmarkers')
    if pushop.outobsmarkers:
        pushop.ui.debug(b'try to push obsolete markers to remote\n')
        rslts = []
        markers = obsutil.sortedmarkers(pushop.outobsmarkers)
        remotedata = obsolete._pushkeyescape(markers)
        for key in sorted(remotedata, reverse=True):
            # reverse sort to ensure we end with dump0
            data = remotedata[key]
            rslts.append(remote.pushkey(b'obsolete', key, b'', data))
        if [r for r in rslts if not r]:
            msg = _(b'failed to push some obsolete markers!\n')
            repo.ui.warn(msg)


def _pushbookmark(pushop):
    """Update bookmark position on remote"""
    if pushop.cgresult == 0 or b'bookmarks' in pushop.stepsdone:
        return
    pushop.stepsdone.add(b'bookmarks')
    ui = pushop.ui
    remote = pushop.remote

    for b, old, new in pushop.outbookmarks:
        action = b'update'
        if not old:
            action = b'export'
        elif not new:
            action = b'delete'

        with remote.commandexecutor() as e:
            r = e.callcommand(
                b'pushkey',
                {
                    b'namespace': b'bookmarks',
                    b'key': b,
                    b'old': hex(old),
                    b'new': hex(new),
                },
            ).result()

        if r:
            ui.status(bookmsgmap[action][0] % b)
        else:
            ui.warn(bookmsgmap[action][1] % b)
            # discovery may have set the value from an invalid entry
            if pushop.bkresult is not None:
                pushop.bkresult = 1


class pulloperation:
    """An object that represents a single pull operation

    Its purpose is to carry pull-related state and very common operations.

    A new one should be created at the beginning of each pull and discarded
    afterward.
    """

    def __init__(
        self,
        repo,
        remote,
        heads=None,
        force=False,
        bookmarks=(),
        remotebookmarks=None,
        streamclonerequested=None,
        includepats=None,
        excludepats=None,
        depth=None,
        path=None,
    ):
        # repo we pull into
        self.repo = repo
        # repo we pull from
        self.remote = remote
        # path object used to build this remote
        #
        # Ideally, the remote peer would carry that directly.
        self.remote_path = path
        # revision we try to pull (None is "all")
        self.heads = heads
        # bookmarks pulled explicitly
        self.explicitbookmarks = [
            repo._bookmarks.expandname(bookmark) for bookmark in bookmarks
        ]
        # do we force pull?
        self.force = force
        # whether a streaming clone was requested
        self.streamclonerequested = streamclonerequested
        # transaction manager
        self.trmanager = None
        # set of common changesets between local and remote before pull
        self.common = None
        # set of pulled heads
        self.rheads = None
        # list of missing changesets to fetch remotely
        self.fetch = None
        # remote bookmarks data
        self.remotebookmarks = remotebookmarks
        # result of changegroup pulling (used as return code by pull)
        self.cgresult = None
        # list of steps already done
        self.stepsdone = set()
        # Whether we attempted a clone from pre-generated bundles.
        self.clonebundleattempted = False
        # Set of file patterns to include.
        self.includepats = includepats
        # Set of file patterns to exclude.
        self.excludepats = excludepats
        # Number of ancestor changesets to pull from each pulled head.
        self.depth = depth

    @util.propertycache
    def pulledsubset(self):
        """heads of the set of changesets targeted by the pull"""
        # compute target subset
        if self.heads is None:
            # We pulled everything possible
            # sync on everything common
            c = set(self.common)
            ret = list(self.common)
            for n in self.rheads:
                if n not in c:
                    ret.append(n)
            return ret
        else:
            # We pulled a specific subset
            # sync on this subset
            return self.heads

    @util.propertycache
    def canusebundle2(self):
        return not _forcebundle1(self)

    @util.propertycache
    def remotebundle2caps(self):
        return bundle2.bundle2caps(self.remote)

    def gettransaction(self):
        # deprecated; talk to trmanager directly
        return self.trmanager.transaction()


class transactionmanager(util.transactional):
    """An object to manage the life cycle of a transaction.

    It creates the transaction on demand and calls the appropriate hooks when
    closing the transaction."""

    def __init__(self, repo, source, url):
        self.repo = repo
        self.source = source
        self.url = url
        self._tr = None

    def transaction(self):
        """Return an open transaction object, constructing if necessary"""
        if not self._tr:
            trname = b'%s\n%s' % (self.source, urlutil.hidepassword(self.url))
            self._tr = self.repo.transaction(trname)
            self._tr.hookargs[b'source'] = self.source
            self._tr.hookargs[b'url'] = self.url
        return self._tr

    def close(self):
        """close transaction if created"""
        if self._tr is not None:
            self._tr.close()

    def release(self):
        """release transaction if created"""
        if self._tr is not None:
            self._tr.release()

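# Illustrative sketch (not part of the original module): because transaction()
# is lazy, callers can hold a manager unconditionally and a repository
# transaction is only opened when something is actually written. It is assumed
# here that util.transactional supplies __enter__/__exit__ that call close()
# on success and release() on the way out:
#
#     trmanager = transactionmanager(repo, b'pull', b'https://example.com/r')
#     with trmanager:
#         tr = trmanager.transaction()  # created on first use
#         ...  # apply pulled data under tr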

def listkeys(remote, namespace):
    with remote.commandexecutor() as e:
        return e.callcommand(b'listkeys', {b'namespace': namespace}).result()


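# Illustrative usage (mirrors how this helper is called below; for the
# bookmarks namespace the result maps bookmark name to hex node):
#
#     books = listkeys(remote, b'bookmarks')
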
def _fullpullbundle2(repo, pullop):
    # The server may send a partial reply, i.e. when inlining
    # pre-computed bundles. In that case, update the common
    # set based on the results and pull another bundle.
    #
    # There are two indicators that the process is finished:
    # - no changeset has been added, or
    # - all remote heads are known locally.
    # The head check must use the unfiltered view as obsolescence
    # markers can hide heads.
    unfi = repo.unfiltered()
    unficl = unfi.changelog

    def headsofdiff(h1, h2):
        """Returns heads(h1 % h2)"""
        res = unfi.set(b'heads(%ln %% %ln)', h1, h2)
        return {ctx.node() for ctx in res}

    def headsofunion(h1, h2):
        """Returns heads((h1 + h2) - null)"""
        res = unfi.set(b'heads((%ln + %ln - null))', h1, h2)
        return {ctx.node() for ctx in res}

    while True:
        old_heads = unficl.heads()
        clstart = len(unficl)
        _pullbundle2(pullop)
        if requirements.NARROW_REQUIREMENT in repo.requirements:
            # XXX narrow clones filter the heads on the server side during
            # XXX getbundle and result in partial replies as well.
            # XXX Disable pull bundles in this case as a band-aid to avoid
            # XXX extra round trips.
            break
        if clstart == len(unficl):
            break
        if all(unficl.hasnode(n) for n in pullop.rheads):
            break
        new_heads = headsofdiff(unficl.heads(), old_heads)
        pullop.common = headsofunion(new_heads, pullop.common)
        pullop.rheads = set(pullop.rheads) - pullop.common


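# Illustrative note (an assumption about revset semantics, for readers of the
# helpers above): the %ln placeholders expand node lists, and "x % y" is the
# only() operator, i.e. ancestors of x that are not ancestors of y, so
# headsofdiff() returns the heads added since the previous iteration.
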
def add_confirm_callback(repo, pullop):
    """adds a finalize callback to the transaction which can be used to show
    stats to the user and confirm the pull before committing the transaction"""

    tr = pullop.trmanager.transaction()
    scmutil.registersummarycallback(
        repo, tr, txnname=b'pull', as_validator=True
    )
    reporef = weakref.ref(repo.unfiltered())

    def prompt(tr):
        repo = reporef()
        cm = _(b'accept incoming changes (yn)?$$ &Yes $$ &No')
        if repo.ui.promptchoice(cm):
            raise error.Abort(b"user aborted")

    tr.addvalidator(b'900-pull-prompt', prompt)


def pull(
    repo,
    remote,
    path=None,
    heads=None,
    force=False,
    bookmarks=(),
    opargs=None,
    streamclonerequested=None,
    includepats=None,
    excludepats=None,
    depth=None,
    confirm=None,
):
    """Fetch repository data from a remote.

    This is the main function used to retrieve data from a remote repository.

    ``repo`` is the local repository to clone into.
    ``remote`` is a peer instance.
    ``heads`` is an iterable of revisions we want to pull. ``None`` (the
    default) means to pull everything from the remote.
    ``bookmarks`` is an iterable of bookmarks requested to be pulled. By
    default, all remote bookmarks are pulled.
    ``opargs`` are additional keyword arguments to pass to ``pulloperation``
    initialization.
    ``streamclonerequested`` is a boolean indicating whether a "streaming
    clone" is requested. A "streaming clone" is essentially a raw file copy
    of revlogs from the server. This only works when the local repository is
    empty. The default value of ``None`` means to respect the server
    configuration for preferring stream clones.
    ``includepats`` and ``excludepats`` define explicit file patterns to
    include and exclude in storage, respectively. If not defined, narrow
    patterns from the repo instance are used, if available.
    ``depth`` is an integer indicating the DAG depth of history we're
    interested in. If defined, for each revision specified in ``heads``, we
    will fetch up to this many of its ancestors and data associated with them.
    ``confirm`` is a boolean indicating whether the pull should be confirmed
    before committing the transaction. This overrides HGPLAIN.

    Returns the ``pulloperation`` created for this pull.
    """
    if opargs is None:
        opargs = {}

    # We allow the narrow patterns to be passed in explicitly to provide more
    # flexibility for API consumers.
    if includepats is not None or excludepats is not None:
        includepats = includepats or set()
        excludepats = excludepats or set()
    else:
        includepats, excludepats = repo.narrowpats

    narrowspec.validatepatterns(includepats)
    narrowspec.validatepatterns(excludepats)

    pullop = pulloperation(
        repo,
        remote,
        path=path,
        heads=heads,
        force=force,
        bookmarks=bookmarks,
        streamclonerequested=streamclonerequested,
        includepats=includepats,
        excludepats=excludepats,
        depth=depth,
        **pycompat.strkwargs(opargs)
    )

    peerlocal = pullop.remote.local()
    if peerlocal:
        missing = set(peerlocal.requirements) - pullop.repo.supported
        if missing:
            msg = _(
                b"required features are not"
                b" supported in the destination:"
                b" %s"
            ) % (b', '.join(sorted(missing)))
            raise error.Abort(msg)

    for category in repo._wanted_sidedata:
        # Check that a computer is registered for that category for at least
        # one revlog kind.
        for kind, computers in repo._sidedata_computers.items():
            if computers.get(category):
                break
        else:
            # This should never happen since repos are supposed to be able to
            # generate the sidedata they require.
            raise error.ProgrammingError(
                _(
                    b'sidedata category requested by local side without local'
                    b"support: '%s'"
                )
                % pycompat.bytestr(category)
            )

    pullop.trmanager = transactionmanager(repo, b'pull', remote.url())
    wlock = util.nullcontextmanager()
    if not bookmod.bookmarksinstore(repo):
        wlock = repo.wlock()
    with wlock, repo.lock(), pullop.trmanager:
        if confirm or (
            repo.ui.configbool(b"pull", b"confirm") and not repo.ui.plain()
        ):
            add_confirm_callback(repo, pullop)

        # This should ideally be in _pullbundle2(). However, it needs to run
        # before discovery to avoid extra work.
        _maybeapplyclonebundle(pullop)
        streamclone.maybeperformlegacystreamclone(pullop)
        _pulldiscovery(pullop)
        if pullop.canusebundle2:
            _fullpullbundle2(repo, pullop)
        _pullchangeset(pullop)
        _pullphase(pullop)
        _pullbookmarks(pullop)
        _pullobsolete(pullop)

    # storing remotenames
    if repo.ui.configbool(b'experimental', b'remotenames'):
        logexchange.pullremotenames(repo, remote)

    return pullop


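# Illustrative usage sketch (not part of the original module; the peer setup
# below is an assumption about the calling environment):
#
#     from mercurial import hg, ui as uimod
#
#     u = uimod.ui.load()
#     repo = hg.repository(u, b'/path/to/local/repo')
#     other = hg.peer(u, {}, b'https://example.com/repo')
#     pullop = pull(repo, other)  # heads=None pulls everything
#     if pullop.cgresult == 0:
#         u.status(b'no changesets added\n')
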
# list of steps to perform discovery before pull
pulldiscoveryorder = []

# Mapping between step name and function
#
# This exists to help extensions wrap steps if necessary
pulldiscoverymapping = {}


def pulldiscovery(stepname):
    """decorator for functions performing discovery before pull

    The function is added to the step -> function mapping and appended to the
    list of steps. Beware that decorated functions will be added in order
    (this may matter).

    You can only use this decorator for a new step; if you want to wrap a step
    from an extension, change the pulldiscoverymapping dictionary directly."""

    def dec(func):
        assert stepname not in pulldiscoverymapping
        pulldiscoverymapping[stepname] = func
        pulldiscoveryorder.append(stepname)
        return func

    return dec


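# Illustrative sketch (hypothetical step name and function, not part of the
# original module): an extension could register an extra discovery step like
# this; since registration order is preserved, it would run after the
# built-in steps defined below:
#
#     @pulldiscovery(b'my-ext:custom-step')
#     def _pulldiscoverycustom(pullop):
#         pullop.repo.ui.debug(b'running custom pull discovery\n')
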
def _pulldiscovery(pullop):
    """Run all discovery steps"""
    for stepname in pulldiscoveryorder:
        step = pulldiscoverymapping[stepname]
        step(pullop)


@pulldiscovery(b'b1:bookmarks')
def _pullbookmarkbundle1(pullop):
    """fetch bookmark data in the bundle1 case

    If not using bundle2, we have to fetch bookmarks before changeset
    discovery to reduce the chance and impact of race conditions."""
    if pullop.remotebookmarks is not None:
        return
    if pullop.canusebundle2 and b'listkeys' in pullop.remotebundle2caps:
        # all known bundle2 servers now support listkeys, but let's be nice
        # with new implementations.
        return
    books = listkeys(pullop.remote, b'bookmarks')
    pullop.remotebookmarks = bookmod.unhexlifybookmarks(books)


@pulldiscovery(b'changegroup')
def _pulldiscoverychangegroup(pullop):
    """discovery phase for the pull

    Currently handles changeset discovery only; will handle all discovery
    at some point."""
    tmp = discovery.findcommonincoming(
        pullop.repo, pullop.remote, heads=pullop.heads, force=pullop.force
    )
    common, fetch, rheads = tmp
    has_node = pullop.repo.unfiltered().changelog.index.has_node
    if fetch and rheads:
        # If a remote head is filtered locally, put it back in common.
        #
        # This is a hackish solution to catch most "common but locally
        # hidden" situations. We do not perform discovery on the unfiltered
        # repository because it ends up doing a pathological number of round
        # trips for a huge number of changesets we do not care about.
        #
        # If a set of such "common but filtered" changesets exists on the
        # server but does not include a remote head, we'll not be able to
        # detect it.
        scommon = set(common)
        for n in rheads:
            if has_node(n):
                if n not in scommon:
                    common.append(n)
        if set(rheads).issubset(set(common)):
            fetch = []
    pullop.common = common
    pullop.fetch = fetch
    pullop.rheads = rheads


def _pullbundle2(pullop):
    """pull data using bundle2

    For now, the only supported data is the changegroup."""
    kwargs = {b'bundlecaps': caps20to10(pullop.repo, role=b'client')}

    # make ui easier to access
    ui = pullop.repo.ui

    # At the moment we don't do stream clones over bundle2. If that is
    # implemented then here's where the check for that will go.
    streaming = streamclone.canperformstreamclone(pullop, bundle2=True)[0]

    # declare pull perimeters
    kwargs[b'common'] = pullop.common
    kwargs[b'heads'] = pullop.heads or pullop.rheads

    # check whether the server supports narrow, then add includepats and
    # excludepats
    servernarrow = pullop.remote.capable(wireprototypes.NARROWCAP)
    if servernarrow and pullop.includepats:
        kwargs[b'includepats'] = pullop.includepats
    if servernarrow and pullop.excludepats:
        kwargs[b'excludepats'] = pullop.excludepats

    if streaming:
        kwargs[b'cg'] = False
        kwargs[b'stream'] = True
        pullop.stepsdone.add(b'changegroup')
        pullop.stepsdone.add(b'phases')

    else:
        # pulling changegroup
        pullop.stepsdone.add(b'changegroup')

        kwargs[b'cg'] = pullop.fetch

        legacyphase = b'phases' in ui.configlist(b'devel', b'legacy.exchange')
        hasbinaryphase = b'heads' in pullop.remotebundle2caps.get(b'phases', ())
        if not legacyphase and hasbinaryphase:
            kwargs[b'phases'] = True
            pullop.stepsdone.add(b'phases')

        if b'listkeys' in pullop.remotebundle2caps:
            if b'phases' not in pullop.stepsdone:
                kwargs[b'listkeys'] = [b'phases']

    bookmarksrequested = False
    legacybookmark = b'bookmarks' in ui.configlist(b'devel', b'legacy.exchange')
    hasbinarybook = b'bookmarks' in pullop.remotebundle2caps

    if pullop.remotebookmarks is not None:
        pullop.stepsdone.add(b'request-bookmarks')

    if (
        b'request-bookmarks' not in pullop.stepsdone
        and pullop.remotebookmarks is None
        and not legacybookmark
        and hasbinarybook
    ):
        kwargs[b'bookmarks'] = True
        bookmarksrequested = True

    if b'listkeys' in pullop.remotebundle2caps:
        if b'request-bookmarks' not in pullop.stepsdone:
            # make sure to always include bookmark data when migrating
            # `hg incoming --bundle` to using this function.
            pullop.stepsdone.add(b'request-bookmarks')
            kwargs.setdefault(b'listkeys', []).append(b'bookmarks')

    # If this is a full pull / clone and the server supports the clone bundles
    # feature, tell the server whether we attempted a clone bundle. The
    # presence of this flag indicates the client supports clone bundles. This
    # will enable the server to treat clients that support clone bundles
    # differently from those that don't.
    if (
        pullop.remote.capable(b'clonebundles')
        and pullop.heads is None
        and list(pullop.common) == [pullop.repo.nullid]
    ):
        kwargs[b'cbattempted'] = pullop.clonebundleattempted

    if streaming:
        pullop.repo.ui.status(_(b'streaming all changes\n'))
    elif not pullop.fetch:
        pullop.repo.ui.status(_(b"no changes found\n"))
        pullop.cgresult = 0
    else:
        if pullop.heads is None and list(pullop.common) == [pullop.repo.nullid]:
            pullop.repo.ui.status(_(b"requesting all changes\n"))
    if obsolete.isenabled(pullop.repo, obsolete.exchangeopt):
        remoteversions = bundle2.obsmarkersversion(pullop.remotebundle2caps)
        if obsolete.commonversion(remoteversions) is not None:
            kwargs[b'obsmarkers'] = True
            pullop.stepsdone.add(b'obsmarkers')
    _pullbundle2extraprepare(pullop, kwargs)

    remote_sidedata = bundle2.read_remote_wanted_sidedata(pullop.remote)
    if remote_sidedata:
        kwargs[b'remote_sidedata'] = remote_sidedata

    with pullop.remote.commandexecutor() as e:
        args = dict(kwargs)
        args[b'source'] = b'pull'
        bundle = e.callcommand(b'getbundle', args).result()

    try:
        op = bundle2.bundleoperation(
            pullop.repo,
            pullop.gettransaction,
            source=b'pull',
            remote=pullop.remote,
        )
        op.modes[b'bookmarks'] = b'records'
        bundle2.processbundle(
            pullop.repo,
            bundle,
            op=op,
            remote=pullop.remote,
        )
    except bundle2.AbortFromPart as exc:
        pullop.repo.ui.error(_(b'remote: abort: %s\n') % exc)
        raise error.RemoteError(_(b'pull failed on remote'), hint=exc.hint)
    except error.BundleValueError as exc:
        raise error.RemoteError(_(b'missing support for %s') % exc)

    if pullop.fetch:
        pullop.cgresult = bundle2.combinechangegroupresults(op)

    # processing phases change
    for namespace, value in op.records[b'listkeys']:
        if namespace == b'phases':
            _pullapplyphases(pullop, value)

    # processing bookmark update
    if bookmarksrequested:
        books = {}
        for record in op.records[b'bookmarks']:
            books[record[b'bookmark']] = record[b"node"]
        pullop.remotebookmarks = books
    else:
        for namespace, value in op.records[b'listkeys']:
            if namespace == b'bookmarks':
                pullop.remotebookmarks = bookmod.unhexlifybookmarks(value)

    # bookmark data was either already there or pulled in the bundle
    if pullop.remotebookmarks is not None:
        _pullbookmarks(pullop)


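# Illustrative sketch (values are assumptions for illustration only): for a
# full clone from a modern bundle2 server, the getbundle arguments assembled
# in _pullbundle2() above typically end up looking like:
#
#     {
#         b'bundlecaps': ...,          # from caps20to10()
#         b'common': [repo.nullid],    # empty local repo
#         b'heads': [...],             # remote heads from discovery
#         b'cg': True,                 # request a changegroup
#         b'phases': True,             # binary phases part supported
#         b'bookmarks': True,          # binary bookmarks part supported
#         b'cbattempted': False,       # no clone bundle was attempted
#         b'source': b'pull',
#     }
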
def _pullbundle2extraprepare(pullop, kwargs):
    """hook function so that extensions can extend the getbundle call"""


def _pullchangeset(pullop):
    """pull changesets from unbundle into the local repo"""
    # We delay opening the transaction as long as possible so we don't open
    # a transaction for nothing, and so we don't break future useful
    # rollback calls.
    if b'changegroup' in pullop.stepsdone:
        return
    pullop.stepsdone.add(b'changegroup')
    if not pullop.fetch:
        pullop.repo.ui.status(_(b"no changes found\n"))
        pullop.cgresult = 0
        return
    tr = pullop.gettransaction()
    if pullop.heads is None and list(pullop.common) == [pullop.repo.nullid]:
        pullop.repo.ui.status(_(b"requesting all changes\n"))
    elif pullop.heads is None and pullop.remote.capable(b'changegroupsubset'):
        # issue1320, avoid a race if remote changed after discovery
        pullop.heads = pullop.rheads

    if pullop.remote.capable(b'getbundle'):
        # TODO: get bundlecaps from remote
        cg = pullop.remote.getbundle(
            b'pull', common=pullop.common, heads=pullop.heads or pullop.rheads
        )
    elif pullop.heads is None:
        with pullop.remote.commandexecutor() as e:
            cg = e.callcommand(
                b'changegroup',
                {
                    b'nodes': pullop.fetch,
                    b'source': b'pull',
                },
            ).result()

    elif not pullop.remote.capable(b'changegroupsubset'):
        raise error.Abort(
            _(
                b"partial pull cannot be done because "
                b"other repository doesn't support "
                b"changegroupsubset."
            )
        )
    else:
        with pullop.remote.commandexecutor() as e:
            cg = e.callcommand(
                b'changegroupsubset',
                {
                    b'bases': pullop.fetch,
                    b'heads': pullop.heads,
                    b'source': b'pull',
                },
            ).result()

    bundleop = bundle2.applybundle(
        pullop.repo,
        cg,
        tr,
        b'pull',
        pullop.remote.url(),
        remote=pullop.remote,
    )
    pullop.cgresult = bundle2.combinechangegroupresults(bundleop)


def _pullphase(pullop):
    # Get remote phases data from remote
    if b'phases' in pullop.stepsdone:
        return
    remotephases = listkeys(pullop.remote, b'phases')
    _pullapplyphases(pullop, remotephases)


def _pullapplyphases(pullop, remotephases):
    """apply phase movement from observed remote state"""
    if b'phases' in pullop.stepsdone:
        return
    pullop.stepsdone.add(b'phases')
    publishing = bool(remotephases.get(b'publishing', False))
    if remotephases and not publishing:
        # remote is new and non-publishing
        pheads, _dr = phases.analyzeremotephases(
            pullop.repo, pullop.pulledsubset, remotephases
        )
        dheads = pullop.pulledsubset
    else:
        # Remote is old or publishing all common changesets
        # should be seen as public
        pheads = pullop.pulledsubset
        dheads = []
    unfi = pullop.repo.unfiltered()
    phase = unfi._phasecache.phase
    rev = unfi.changelog.index.get_rev
    public = phases.public
    draft = phases.draft

    # exclude changesets already public locally and update the others
    pheads = [pn for pn in pheads if phase(unfi, rev(pn)) > public]
    if pheads:
        tr = pullop.gettransaction()
        phases.advanceboundary(pullop.repo, tr, public, pheads)

    # exclude changesets already draft locally and update the others
    dheads = [pn for pn in dheads if phase(unfi, rev(pn)) > draft]
    if dheads:
        tr = pullop.gettransaction()
        phases.advanceboundary(pullop.repo, tr, draft, dheads)


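# Illustrative note (behavioral summary, assumed from the API and its usage
# above): phases.advanceboundary() only moves changesets toward a more public
# phase (e.g. secret -> draft -> public) and never retreats a boundary, so
# filtering out changesets whose phase is already low enough merely avoids
# opening a transaction for no-op work.
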
def _pullbookmarks(pullop):
    """process the remote bookmark information to update the local ones"""
    if b'bookmarks' in pullop.stepsdone:
        return
    pullop.stepsdone.add(b'bookmarks')
    repo = pullop.repo
    remotebookmarks = pullop.remotebookmarks
    bookmarks_mode = None
    if pullop.remote_path is not None:
        bookmarks_mode = pullop.remote_path.bookmarks_mode
    bookmod.updatefromremote(
        repo.ui,
        repo,
        remotebookmarks,
        pullop.remote.url(),
        pullop.gettransaction,
        explicit=pullop.explicitbookmarks,
        mode=bookmarks_mode,
    )


def _pullobsolete(pullop):
    """utility function to pull obsolete markers from a remote

    `gettransaction` is a function that returns the pull transaction, creating
    one if necessary. We return the transaction to inform the calling code
    that a new transaction has been created (when applicable).

    Exists mostly to allow overriding for experimentation purposes."""
    if b'obsmarkers' in pullop.stepsdone:
        return
    pullop.stepsdone.add(b'obsmarkers')
    tr = None
    if obsolete.isenabled(pullop.repo, obsolete.exchangeopt):
        pullop.repo.ui.debug(b'fetching remote obsolete markers\n')
        remoteobs = listkeys(pullop.remote, b'obsolete')
        if b'dump0' in remoteobs:
            tr = pullop.gettransaction()
            markers = []
            for key in sorted(remoteobs, reverse=True):
                if key.startswith(b'dump'):
                    data = util.b85decode(remoteobs[key])
                    version, newmarks = obsolete._readmarkers(data)
                    markers += newmarks
            if markers:
                pullop.repo.obsstore.add(tr, markers)
            pullop.repo.invalidatevolatilesets()
    return tr


2121 | def applynarrowacl(repo, kwargs): |
|
2121 | def applynarrowacl(repo, kwargs): | |
2122 | """Apply narrow fetch access control. |
|
2122 | """Apply narrow fetch access control. | |
2123 |
|
2123 | |||
2124 | This massages the named arguments for getbundle wire protocol commands |
|
2124 | This massages the named arguments for getbundle wire protocol commands | |
2125 | so requested data is filtered through access control rules. |
|
2125 | so requested data is filtered through access control rules. | |
2126 | """ |
|
2126 | """ | |
2127 | ui = repo.ui |
|
2127 | ui = repo.ui | |
2128 | # TODO this assumes existence of HTTP and is a layering violation. |
|
2128 | # TODO this assumes existence of HTTP and is a layering violation. | |
2129 | username = ui.shortuser(ui.environ.get(b'REMOTE_USER') or ui.username()) |
|
2129 | username = ui.shortuser(ui.environ.get(b'REMOTE_USER') or ui.username()) | |
2130 | user_includes = ui.configlist( |
|
2130 | user_includes = ui.configlist( | |
2131 | _NARROWACL_SECTION, |
|
2131 | _NARROWACL_SECTION, | |
2132 | username + b'.includes', |
|
2132 | username + b'.includes', | |
2133 | ui.configlist(_NARROWACL_SECTION, b'default.includes'), |
|
2133 | ui.configlist(_NARROWACL_SECTION, b'default.includes'), | |
2134 | ) |
|
2134 | ) | |
2135 | user_excludes = ui.configlist( |
|
2135 | user_excludes = ui.configlist( | |
2136 | _NARROWACL_SECTION, |
|
2136 | _NARROWACL_SECTION, | |
2137 | username + b'.excludes', |
|
2137 | username + b'.excludes', | |
2138 | ui.configlist(_NARROWACL_SECTION, b'default.excludes'), |
|
2138 | ui.configlist(_NARROWACL_SECTION, b'default.excludes'), | |
2139 | ) |
|
2139 | ) | |
2140 | if not user_includes: |
|
2140 | if not user_includes: | |
2141 | raise error.Abort( |
|
2141 | raise error.Abort( | |
2142 | _(b"%s configuration for user %s is empty") |
|
2142 | _(b"%s configuration for user %s is empty") | |
2143 | % (_NARROWACL_SECTION, username) |
|
2143 | % (_NARROWACL_SECTION, username) | |
2144 | ) |
|
2144 | ) | |
2145 |
|
2145 | |||
2146 | user_includes = [ |
|
2146 | user_includes = [ | |
2147 | b'path:.' if p == b'*' else b'path:' + p for p in user_includes |
|
2147 | b'path:.' if p == b'*' else b'path:' + p for p in user_includes | |
2148 | ] |
|
2148 | ] | |
2149 | user_excludes = [ |
|
2149 | user_excludes = [ | |
2150 | b'path:.' if p == b'*' else b'path:' + p for p in user_excludes |
|
2150 | b'path:.' if p == b'*' else b'path:' + p for p in user_excludes | |
2151 | ] |
|
2151 | ] | |
2152 |
|
2152 | |||
2153 | req_includes = set(kwargs.get('includepats', [])) |
|
2153 | req_includes = set(kwargs.get('includepats', [])) | |
2154 | req_excludes = set(kwargs.get('excludepats', [])) |
|
2154 | req_excludes = set(kwargs.get('excludepats', [])) | |
2155 |
|
2155 | |||
2156 | req_includes, req_excludes, invalid_includes = narrowspec.restrictpatterns( |
|
2156 | req_includes, req_excludes, invalid_includes = narrowspec.restrictpatterns( | |
2157 | req_includes, req_excludes, user_includes, user_excludes |
|
2157 | req_includes, req_excludes, user_includes, user_excludes | |
2158 | ) |
|
2158 | ) | |
2159 |
|
2159 | |||
2160 | if invalid_includes: |
|
2160 | if invalid_includes: | |
2161 | raise error.Abort( |
|
2161 | raise error.Abort( | |
2162 | _(b"The following includes are not accessible for %s: %s") |
|
2162 | _(b"The following includes are not accessible for %s: %s") | |
2163 | % (username, stringutil.pprint(invalid_includes)) |
|
2163 | % (username, stringutil.pprint(invalid_includes)) | |
2164 | ) |
|
2164 | ) | |
2165 |
|
2165 | |||
2166 | new_args = {} |
|
2166 | new_args = {} | |
2167 | new_args.update(kwargs) |
|
2167 | new_args.update(kwargs) | |
2168 | new_args['narrow'] = True |
|
2168 | new_args['narrow'] = True | |
2169 | new_args['narrow_acl'] = True |
|
2169 | new_args['narrow_acl'] = True | |
2170 | new_args['includepats'] = req_includes |
|
2170 | new_args['includepats'] = req_includes | |
2171 | if req_excludes: |
|
2171 | if req_excludes: | |
2172 | new_args['excludepats'] = req_excludes |
|
2172 | new_args['excludepats'] = req_excludes | |
2173 |
|
2173 | |||
2174 | return new_args |
|
2174 | return new_args | |
2175 |
|
2175 | |||
2176 |
|
2176 | |||
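A minimal plain-Python sketch of the per-user pattern resolution performed by applynarrowacl above, with a dict standing in for ui.configlist and made-up section contents; real lookups go through the config section named by _NARROWACL_SECTION:

    # Hypothetical stand-in for the narrow ACL config read above; keys
    # mirror the '<user>.includes' / 'default.includes' naming.
    narrowacl = {
        b'alice.includes': [b'src', b'docs'],
        b'default.includes': [b'*'],
        b'default.excludes': [b'secrets'],
    }

    def configlist(key, default):
        return narrowacl.get(key, default)

    user_includes = configlist(b'alice.includes',
                               configlist(b'default.includes', []))
    user_excludes = configlist(b'alice.excludes',
                               configlist(b'default.excludes', []))
    # Normalization as in the comprehensions above: '*' means the whole
    # repo (b'path:.'), anything else becomes a b'path:' pattern.
    user_includes = [b'path:.' if p == b'*' else b'path:' + p
                     for p in user_includes]
    # user_includes == [b'path:src', b'path:docs']
    # user_excludes == [b'secrets'] (fell back to the default)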
2177 | def _computeellipsis(repo, common, heads, known, match, depth=None): |
|
2177 | def _computeellipsis(repo, common, heads, known, match, depth=None): | |
2178 | """Compute the shape of a narrowed DAG. |
|
2178 | """Compute the shape of a narrowed DAG. | |
2179 |
|
2179 | |||
2180 | Args: |
|
2180 | Args: | |
2181 | repo: The repository we're transferring. |
|
2181 | repo: The repository we're transferring. | |
2182 | common: The roots of the DAG range we're transferring. |
|
2182 | common: The roots of the DAG range we're transferring. | |
2183 | May be just [nullid], which means all ancestors of heads. |
|
2183 | May be just [nullid], which means all ancestors of heads. | |
2184 | heads: The heads of the DAG range we're transferring. |
|
2184 | heads: The heads of the DAG range we're transferring. | |
2185 | match: The narrowmatcher that allows us to identify relevant changes. |
|
2185 | match: The narrowmatcher that allows us to identify relevant changes. | |
2186 | depth: If not None, only consider nodes to be full nodes if they are at |
|
2186 | depth: If not None, only consider nodes to be full nodes if they are at | |
2187 | most depth changesets away from one of heads. |
|
2187 | most depth changesets away from one of heads. | |
2188 |
|
2188 | |||
2189 | Returns: |
|
2189 | Returns: | |
2190 | A tuple of (visitnodes, relevant_nodes, ellipsisroots) where: |
|
2190 | A tuple of (visitnodes, relevant_nodes, ellipsisroots) where: | |
2191 |
|
2191 | |||
2192 | visitnodes: The list of nodes (either full or ellipsis) which |
|
2192 | visitnodes: The list of nodes (either full or ellipsis) which | |
2193 | need to be sent to the client. |
|
2193 | need to be sent to the client. | |
2194 | relevant_nodes: The set of changelog nodes which change a file inside |
|
2194 | relevant_nodes: The set of changelog nodes which change a file inside | |
2195 | the narrowspec. The client needs these as non-ellipsis nodes. |
|
2195 | the narrowspec. The client needs these as non-ellipsis nodes. | |
2196 | ellipsisroots: A dict of {rev: parents} that is used in |
|
2196 | ellipsisroots: A dict of {rev: parents} that is used in | |
2197 | narrowchangegroup to produce ellipsis nodes with the |
|
2197 | narrowchangegroup to produce ellipsis nodes with the | |
2198 | correct parents. |
|
2198 | correct parents. | |
2199 | """ |
|
2199 | """ | |
2200 | cl = repo.changelog |
|
2200 | cl = repo.changelog | |
2201 | mfl = repo.manifestlog |
|
2201 | mfl = repo.manifestlog | |
2202 |
|
2202 | |||
2203 | clrev = cl.rev |
|
2203 | clrev = cl.rev | |
2204 |
|
2204 | |||
2205 | commonrevs = {clrev(n) for n in common} | {nullrev} |
|
2205 | commonrevs = {clrev(n) for n in common} | {nullrev} | |
2206 | headsrevs = {clrev(n) for n in heads} |
|
2206 | headsrevs = {clrev(n) for n in heads} | |
2207 |
|
2207 | |||
2208 | if depth: |
|
2208 | if depth: | |
2209 | revdepth = {h: 0 for h in headsrevs} |
|
2209 | revdepth = {h: 0 for h in headsrevs} | |
2210 |
|
2210 | |||
2211 | ellipsisheads = collections.defaultdict(set) |
|
2211 | ellipsisheads = collections.defaultdict(set) | |
2212 | ellipsisroots = collections.defaultdict(set) |
|
2212 | ellipsisroots = collections.defaultdict(set) | |
2213 |
|
2213 | |||
2214 | def addroot(head, curchange): |
|
2214 | def addroot(head, curchange): | |
2215 | """Add a root to an ellipsis head, splitting heads with 3 roots.""" |
|
2215 | """Add a root to an ellipsis head, splitting heads with 3 roots.""" | |
2216 | ellipsisroots[head].add(curchange) |
|
2216 | ellipsisroots[head].add(curchange) | |
2217 | # Recursively split ellipsis heads with 3 roots by finding the |
|
2217 | # Recursively split ellipsis heads with 3 roots by finding the | |
2218 | # roots' youngest common descendant which is an elided merge commit. |
|
2218 | # roots' youngest common descendant which is an elided merge commit. | |
2219 | # That descendant takes 2 of the 3 roots as its own, and becomes a |
|
2219 | # That descendant takes 2 of the 3 roots as its own, and becomes a | |
2220 | # root of the head. |
|
2220 | # root of the head. | |
2221 | while len(ellipsisroots[head]) > 2: |
|
2221 | while len(ellipsisroots[head]) > 2: | |
2222 | child, roots = splithead(head) |
|
2222 | child, roots = splithead(head) | |
2223 | splitroots(head, child, roots) |
|
2223 | splitroots(head, child, roots) | |
2224 | head = child # Recurse in case we just added a 3rd root |
|
2224 | head = child # Recurse in case we just added a 3rd root | |
2225 |
|
2225 | |||
2226 | def splitroots(head, child, roots): |
|
2226 | def splitroots(head, child, roots): | |
2227 | ellipsisroots[head].difference_update(roots) |
|
2227 | ellipsisroots[head].difference_update(roots) | |
2228 | ellipsisroots[head].add(child) |
|
2228 | ellipsisroots[head].add(child) | |
2229 | ellipsisroots[child].update(roots) |
|
2229 | ellipsisroots[child].update(roots) | |
2230 | ellipsisroots[child].discard(child) |
|
2230 | ellipsisroots[child].discard(child) | |
2231 |
|
2231 | |||
2232 | def splithead(head): |
|
2232 | def splithead(head): | |
2233 | r1, r2, r3 = sorted(ellipsisroots[head]) |
|
2233 | r1, r2, r3 = sorted(ellipsisroots[head]) | |
2234 | for nr1, nr2 in ((r2, r3), (r1, r3), (r1, r2)): |
|
2234 | for nr1, nr2 in ((r2, r3), (r1, r3), (r1, r2)): | |
2235 | mid = repo.revs( |
|
2235 | mid = repo.revs( | |
2236 | b'sort(merge() & %d::%d & %d::%d, -rev)', nr1, head, nr2, head |
|
2236 | b'sort(merge() & %d::%d & %d::%d, -rev)', nr1, head, nr2, head | |
2237 | ) |
|
2237 | ) | |
2238 | for j in mid: |
|
2238 | for j in mid: | |
2239 | if j == nr2: |
|
2239 | if j == nr2: | |
2240 | return nr2, (nr1, nr2) |
|
2240 | return nr2, (nr1, nr2) | |
2241 | if j not in ellipsisroots or len(ellipsisroots[j]) < 2: |
|
2241 | if j not in ellipsisroots or len(ellipsisroots[j]) < 2: | |
2242 | return j, (nr1, nr2) |
|
2242 | return j, (nr1, nr2) | |
2243 | raise error.Abort( |
|
2243 | raise error.Abort( | |
2244 | _( |
|
2244 | _( | |
2245 | b'Failed to split up ellipsis node! head: %d, ' |
|
2245 | b'Failed to split up ellipsis node! head: %d, ' | |
2246 | b'roots: %d %d %d' |
|
2246 | b'roots: %d %d %d' | |
2247 | ) |
|
2247 | ) | |
2248 | % (head, r1, r2, r3) |
|
2248 | % (head, r1, r2, r3) | |
2249 | ) |
|
2249 | ) | |
2250 |
|
2250 | |||
2251 | missing = list(cl.findmissingrevs(common=commonrevs, heads=headsrevs)) |
|
2251 | missing = list(cl.findmissingrevs(common=commonrevs, heads=headsrevs)) | |
2252 | visit = reversed(missing) |
|
2252 | visit = reversed(missing) | |
2253 | relevant_nodes = set() |
|
2253 | relevant_nodes = set() | |
2254 | visitnodes = [cl.node(m) for m in missing] |
|
2254 | visitnodes = [cl.node(m) for m in missing] | |
2255 | required = set(headsrevs) | known |
|
2255 | required = set(headsrevs) | known | |
2256 | for rev in visit: |
|
2256 | for rev in visit: | |
2257 | clrev = cl.changelogrevision(rev) |
|
2257 | clrev = cl.changelogrevision(rev) | |
2258 | ps = [prev for prev in cl.parentrevs(rev) if prev != nullrev] |
|
2258 | ps = [prev for prev in cl.parentrevs(rev) if prev != nullrev] | |
2259 | if depth is not None: |
|
2259 | if depth is not None: | |
2260 | curdepth = revdepth[rev] |
|
2260 | curdepth = revdepth[rev] | |
2261 | for p in ps: |
|
2261 | for p in ps: | |
2262 | revdepth[p] = min(curdepth + 1, revdepth.get(p, depth + 1)) |
|
2262 | revdepth[p] = min(curdepth + 1, revdepth.get(p, depth + 1)) | |
2263 | needed = False |
|
2263 | needed = False | |
2264 | shallow_enough = depth is None or revdepth[rev] <= depth |
|
2264 | shallow_enough = depth is None or revdepth[rev] <= depth | |
2265 | if shallow_enough: |
|
2265 | if shallow_enough: | |
2266 | curmf = mfl[clrev.manifest].read() |
|
2266 | curmf = mfl[clrev.manifest].read() | |
2267 | if ps: |
|
2267 | if ps: | |
2268 | # We choose to not trust the changed files list in |
|
2268 | # We choose to not trust the changed files list in | |
2269 | # changesets because it's not always correct. TODO: could |
|
2269 | # changesets because it's not always correct. TODO: could | |
2270 | # we trust it for the non-merge case? |
|
2270 | # we trust it for the non-merge case? | |
2271 | p1mf = mfl[cl.changelogrevision(ps[0]).manifest].read() |
|
2271 | p1mf = mfl[cl.changelogrevision(ps[0]).manifest].read() | |
2272 | needed = bool(curmf.diff(p1mf, match)) |
|
2272 | needed = bool(curmf.diff(p1mf, match)) | |
2273 | if not needed and len(ps) > 1: |
|
2273 | if not needed and len(ps) > 1: | |
2274 | # For merge changes, the list of changed files is not |
|
2274 | # For merge changes, the list of changed files is not | |
2275 | # helpful, since we need to emit the merge if a file |
|
2275 | # helpful, since we need to emit the merge if a file | |
2276 | # in the narrow spec has changed on either side of the |
|
2276 | # in the narrow spec has changed on either side of the | |
2277 | # merge. As a result, we do a manifest diff to check. |
|
2277 | # merge. As a result, we do a manifest diff to check. | |
2278 | p2mf = mfl[cl.changelogrevision(ps[1]).manifest].read() |
|
2278 | p2mf = mfl[cl.changelogrevision(ps[1]).manifest].read() | |
2279 | needed = bool(curmf.diff(p2mf, match)) |
|
2279 | needed = bool(curmf.diff(p2mf, match)) | |
2280 | else: |
|
2280 | else: | |
2281 | # For a root node, we need to include the node if any |
|
2281 | # For a root node, we need to include the node if any | |
2282 | # files in the node match the narrowspec. |
|
2282 | # files in the node match the narrowspec. | |
2283 | needed = any(curmf.walk(match)) |
|
2283 | needed = any(curmf.walk(match)) | |
2284 |
|
2284 | |||
2285 | if needed: |
|
2285 | if needed: | |
2286 | for head in ellipsisheads[rev]: |
|
2286 | for head in ellipsisheads[rev]: | |
2287 | addroot(head, rev) |
|
2287 | addroot(head, rev) | |
2288 | for p in ps: |
|
2288 | for p in ps: | |
2289 | required.add(p) |
|
2289 | required.add(p) | |
2290 | relevant_nodes.add(cl.node(rev)) |
|
2290 | relevant_nodes.add(cl.node(rev)) | |
2291 | else: |
|
2291 | else: | |
2292 | if not ps: |
|
2292 | if not ps: | |
2293 | ps = [nullrev] |
|
2293 | ps = [nullrev] | |
2294 | if rev in required: |
|
2294 | if rev in required: | |
2295 | for head in ellipsisheads[rev]: |
|
2295 | for head in ellipsisheads[rev]: | |
2296 | addroot(head, rev) |
|
2296 | addroot(head, rev) | |
2297 | for p in ps: |
|
2297 | for p in ps: | |
2298 | ellipsisheads[p].add(rev) |
|
2298 | ellipsisheads[p].add(rev) | |
2299 | else: |
|
2299 | else: | |
2300 | for p in ps: |
|
2300 | for p in ps: | |
2301 | ellipsisheads[p] |= ellipsisheads[rev] |
|
2301 | ellipsisheads[p] |= ellipsisheads[rev] | |
2302 |
|
2302 | |||
2303 | # add common changesets as roots of their reachable ellipsis heads |
|
2303 | # add common changesets as roots of their reachable ellipsis heads | |
2304 | for c in commonrevs: |
|
2304 | for c in commonrevs: | |
2305 | for head in ellipsisheads[c]: |
|
2305 | for head in ellipsisheads[c]: | |
2306 | addroot(head, c) |
|
2306 | addroot(head, c) | |
2307 | return visitnodes, relevant_nodes, ellipsisroots |
|
2307 | return visitnodes, relevant_nodes, ellipsisroots | |
2308 |
|
2308 | |||
2309 |
|
2309 | |||
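The depth bookkeeping in _computeellipsis above is easiest to see on a toy DAG; a hypothetical linear history with a single head, showing how revdepth propagates and which revisions are "shallow enough":

    # Hypothetical linear history 0 <- 1 <- 2 <- 3 with head 3, depth=2.
    parents = {3: [2], 2: [1], 1: [0], 0: []}
    depth = 2
    revdepth = {3: 0}                 # heads start at depth 0
    for rev in (3, 2, 1, 0):          # newest first, like reversed(missing)
        for p in parents[rev]:
            revdepth[p] = min(revdepth[rev] + 1, revdepth.get(p, depth + 1))
    shallow = sorted(r for r in revdepth if revdepth[r] <= depth)
    # revdepth == {3: 0, 2: 1, 1: 2, 0: 3}; shallow == [1, 2, 3], so rev 0
    # could only ever be sent as an ellipsis node.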
2310 | def caps20to10(repo, role): |
|
2310 | def caps20to10(repo, role): | |
2311 | """return a set with appropriate options to use bundle20 during getbundle""" |
|
2311 | """return a set with appropriate options to use bundle20 during getbundle""" | |
2312 | caps = {b'HG20'} |
|
2312 | caps = {b'HG20'} | |
2313 | capsblob = bundle2.encodecaps(bundle2.getrepocaps(repo, role=role)) |
|
2313 | capsblob = bundle2.encodecaps(bundle2.getrepocaps(repo, role=role)) | |
2314 | caps.add(b'bundle2=' + urlreq.quote(capsblob)) |
|
2314 | caps.add(b'bundle2=' + urlreq.quote(capsblob)) | |
2315 | return caps |
|
2315 | return caps | |
2316 |
|
2316 | |||
2317 |
|
2317 | |||
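caps20to10 above simply pairs the literal HG20 marker with a URL-quoted capabilities blob; a small sketch of the same shape using the stdlib quoter (the blob contents are made up, and Mercurial itself goes through its urlreq wrapper):

    from urllib.parse import quote

    capsblob = b'HG20\nchangegroup=01,02'   # made-up encodecaps() output
    caps = {b'HG20', b'bundle2=' + quote(capsblob).encode('ascii')}
    # caps == {b'HG20', b'bundle2=HG20%0Achangegroup%3D01%2C02'}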
2318 | # List of names of steps to perform for a bundle2 for getbundle, order matters. |
|
2318 | # List of names of steps to perform for a bundle2 for getbundle, order matters. | |
2319 | getbundle2partsorder = [] |
|
2319 | getbundle2partsorder = [] | |
2320 |
|
2320 | |||
2321 | # Mapping between step name and function |
|
2321 | # Mapping between step name and function | |
2322 | # |
|
2322 | # | |
2323 | # This exists to help extensions wrap steps if necessary |
|
2323 | # This exists to help extensions wrap steps if necessary | |
2324 | getbundle2partsmapping = {} |
|
2324 | getbundle2partsmapping = {} | |
2325 |
|
2325 | |||
2326 |
|
2326 | |||
2327 | def getbundle2partsgenerator(stepname, idx=None): |
|
2327 | def getbundle2partsgenerator(stepname, idx=None): | |
2328 | """decorator for function generating bundle2 part for getbundle |
|
2328 | """decorator for function generating bundle2 part for getbundle | |
2329 |
|
2329 | |||
2330 | The function is added to the step -> function mapping and appended to the |
|
2330 | The function is added to the step -> function mapping and appended to the | |
2331 | list of steps. Beware that decorated functions will be added in order |
|
2331 | list of steps. Beware that decorated functions will be added in order | |
2332 | (this may matter). |
|
2332 | (this may matter). | |
2333 |
|
2333 | |||
2334 | You can only use this decorator for new steps; if you want to wrap a step |
|
2334 | You can only use this decorator for new steps; if you want to wrap a step | |
2335 | from an extension, modify the getbundle2partsmapping dictionary directly.""" |
|
2335 | from an extension, modify the getbundle2partsmapping dictionary directly.""" | |
2336 |
|
2336 | |||
2337 | def dec(func): |
|
2337 | def dec(func): | |
2338 | assert stepname not in getbundle2partsmapping |
|
2338 | assert stepname not in getbundle2partsmapping | |
2339 | getbundle2partsmapping[stepname] = func |
|
2339 | getbundle2partsmapping[stepname] = func | |
2340 | if idx is None: |
|
2340 | if idx is None: | |
2341 | getbundle2partsorder.append(stepname) |
|
2341 | getbundle2partsorder.append(stepname) | |
2342 | else: |
|
2342 | else: | |
2343 | getbundle2partsorder.insert(idx, stepname) |
|
2343 | getbundle2partsorder.insert(idx, stepname) | |
2344 | return func |
|
2344 | return func | |
2345 |
|
2345 | |||
2346 | return dec |
|
2346 | return dec | |
2347 |
|
2347 | |||
2348 |
|
2348 | |||
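A hedged sketch of how an extension might register an extra step with the decorator above; the part name, capability check, and payload are invented for illustration:

    @getbundle2partsgenerator(b'myext-stats')
    def _getbundlemyextpart(bundler, repo, source, bundlecaps=None,
                            b2caps=None, **kwargs):
        """Hypothetical extra part, only sent if the client advertised it."""
        if not b2caps or b'myext-stats' not in b2caps:
            return
        bundler.newpart(b'myext-stats', data=b'%d' % len(repo),
                        mandatory=False)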
2349 | def bundle2requested(bundlecaps): |
|
2349 | def bundle2requested(bundlecaps): | |
2350 | if bundlecaps is not None: |
|
2350 | if bundlecaps is not None: | |
2351 | return any(cap.startswith(b'HG2') for cap in bundlecaps) |
|
2351 | return any(cap.startswith(b'HG2') for cap in bundlecaps) | |
2352 | return False |
|
2352 | return False | |
2353 |
|
2353 | |||
2354 |
|
2354 | |||
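For clarity, the predicate above on a few sample inputs (the capability strings are illustrative):

    assert bundle2requested([b'HG20', b'bundle2=...']) is True
    assert bundle2requested([b'HG10UN']) is False
    assert bundle2requested(None) is False    # no caps advertised at all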
2355 | def getbundlechunks( |
|
2355 | def getbundlechunks( | |
2356 | repo, |
|
2356 | repo, | |
2357 | source, |
|
2357 | source, | |
2358 | heads=None, |
|
2358 | heads=None, | |
2359 | common=None, |
|
2359 | common=None, | |
2360 | bundlecaps=None, |
|
2360 | bundlecaps=None, | |
2361 | remote_sidedata=None, |
|
2361 | remote_sidedata=None, | |
2362 | **kwargs |
|
2362 | **kwargs | |
2363 | ): |
|
2363 | ): | |
2364 | """Return chunks constituting a bundle's raw data. |
|
2364 | """Return chunks constituting a bundle's raw data. | |
2365 |
|
2365 | |||
2366 | Could be a bundle HG10 or a bundle HG20 depending on bundlecaps |
|
2366 | Could be a bundle HG10 or a bundle HG20 depending on bundlecaps | |
2367 | passed. |
|
2367 | passed. | |
2368 |
|
2368 | |||
2369 | Returns a 2-tuple of a dict with metadata about the generated bundle |
|
2369 | Returns a 2-tuple of a dict with metadata about the generated bundle | |
2370 | and an iterator over raw chunks (of varying sizes). |
|
2370 | and an iterator over raw chunks (of varying sizes). | |
2371 | """ |
|
2371 | """ | |
2372 | kwargs = pycompat.byteskwargs(kwargs) |
|
2372 | kwargs = pycompat.byteskwargs(kwargs) | |
2373 | info = {} |
|
2373 | info = {} | |
2374 | usebundle2 = bundle2requested(bundlecaps) |
|
2374 | usebundle2 = bundle2requested(bundlecaps) | |
2375 | # bundle10 case |
|
2375 | # bundle10 case | |
2376 | if not usebundle2: |
|
2376 | if not usebundle2: | |
2377 | if bundlecaps and not kwargs.get(b'cg', True): |
|
2377 | if bundlecaps and not kwargs.get(b'cg', True): | |
2378 | raise ValueError( |
|
2378 | raise ValueError( | |
2379 | _(b'request for bundle10 must include changegroup') |
|
2379 | _(b'request for bundle10 must include changegroup') | |
2380 | ) |
|
2380 | ) | |
2381 |
|
2381 | |||
2382 | if kwargs: |
|
2382 | if kwargs: | |
2383 | raise ValueError( |
|
2383 | raise ValueError( | |
2384 | _(b'unsupported getbundle arguments: %s') |
|
2384 | _(b'unsupported getbundle arguments: %s') | |
2385 | % b', '.join(sorted(kwargs.keys())) |
|
2385 | % b', '.join(sorted(kwargs.keys())) | |
2386 | ) |
|
2386 | ) | |
2387 | outgoing = _computeoutgoing(repo, heads, common) |
|
2387 | outgoing = _computeoutgoing(repo, heads, common) | |
2388 | info[b'bundleversion'] = 1 |
|
2388 | info[b'bundleversion'] = 1 | |
2389 | return ( |
|
2389 | return ( | |
2390 | info, |
|
2390 | info, | |
2391 | changegroup.makestream( |
|
2391 | changegroup.makestream( | |
2392 | repo, |
|
2392 | repo, | |
2393 | outgoing, |
|
2393 | outgoing, | |
2394 | b'01', |
|
2394 | b'01', | |
2395 | source, |
|
2395 | source, | |
2396 | bundlecaps=bundlecaps, |
|
2396 | bundlecaps=bundlecaps, | |
2397 | remote_sidedata=remote_sidedata, |
|
2397 | remote_sidedata=remote_sidedata, | |
2398 | ), |
|
2398 | ), | |
2399 | ) |
|
2399 | ) | |
2400 |
|
2400 | |||
2401 | # bundle20 case |
|
2401 | # bundle20 case | |
2402 | info[b'bundleversion'] = 2 |
|
2402 | info[b'bundleversion'] = 2 | |
2403 | b2caps = {} |
|
2403 | b2caps = {} | |
2404 | for bcaps in bundlecaps: |
|
2404 | for bcaps in bundlecaps: | |
2405 | if bcaps.startswith(b'bundle2='): |
|
2405 | if bcaps.startswith(b'bundle2='): | |
2406 | blob = urlreq.unquote(bcaps[len(b'bundle2=') :]) |
|
2406 | blob = urlreq.unquote(bcaps[len(b'bundle2=') :]) | |
2407 | b2caps.update(bundle2.decodecaps(blob)) |
|
2407 | b2caps.update(bundle2.decodecaps(blob)) | |
2408 | bundler = bundle2.bundle20(repo.ui, b2caps) |
|
2408 | bundler = bundle2.bundle20(repo.ui, b2caps) | |
2409 |
|
2409 | |||
2410 | kwargs[b'heads'] = heads |
|
2410 | kwargs[b'heads'] = heads | |
2411 | kwargs[b'common'] = common |
|
2411 | kwargs[b'common'] = common | |
2412 |
|
2412 | |||
2413 | for name in getbundle2partsorder: |
|
2413 | for name in getbundle2partsorder: | |
2414 | func = getbundle2partsmapping[name] |
|
2414 | func = getbundle2partsmapping[name] | |
2415 | func( |
|
2415 | func( | |
2416 | bundler, |
|
2416 | bundler, | |
2417 | repo, |
|
2417 | repo, | |
2418 | source, |
|
2418 | source, | |
2419 | bundlecaps=bundlecaps, |
|
2419 | bundlecaps=bundlecaps, | |
2420 | b2caps=b2caps, |
|
2420 | b2caps=b2caps, | |
2421 | remote_sidedata=remote_sidedata, |
|
2421 | remote_sidedata=remote_sidedata, | |
2422 | **pycompat.strkwargs(kwargs) |
|
2422 | **pycompat.strkwargs(kwargs) | |
2423 | ) |
|
2423 | ) | |
2424 |
|
2424 | |||
2425 | info[b'prefercompressed'] = bundler.prefercompressed |
|
2425 | info[b'prefercompressed'] = bundler.prefercompressed | |
2426 |
|
2426 | |||
2427 | return info, bundler.getchunks() |
|
2427 | return info, bundler.getchunks() | |
2428 |
|
2428 | |||
2429 |
|
2429 | |||
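A hedged consumer sketch for getbundlechunks above; 'repo' is assumed to be an open localrepository, advertising HG20 in bundlecaps selects the bundle2 path, and the returned iterator must be drained fully:

    info, chunks = getbundlechunks(repo, b'serve', bundlecaps={b'HG20'})
    assert info[b'bundleversion'] == 2        # because HG20 was requested
    with open('out.hg', 'wb') as fh:
        for chunk in chunks:                  # chunks vary in size
            fh.write(chunk)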
2430 | @getbundle2partsgenerator(b'stream') |
|
2430 | @getbundle2partsgenerator(b'stream') | |
2431 | def _getbundlestream2(bundler, repo, *args, **kwargs): |
|
2431 | def _getbundlestream2(bundler, repo, *args, **kwargs): | |
2432 | return bundle2.addpartbundlestream2(bundler, repo, **kwargs) |
|
2432 | return bundle2.addpartbundlestream2(bundler, repo, **kwargs) | |
2433 |
|
2433 | |||
2434 |
|
2434 | |||
2435 | @getbundle2partsgenerator(b'changegroup') |
|
2435 | @getbundle2partsgenerator(b'changegroup') | |
2436 | def _getbundlechangegrouppart( |
|
2436 | def _getbundlechangegrouppart( | |
2437 | bundler, |
|
2437 | bundler, | |
2438 | repo, |
|
2438 | repo, | |
2439 | source, |
|
2439 | source, | |
2440 | bundlecaps=None, |
|
2440 | bundlecaps=None, | |
2441 | b2caps=None, |
|
2441 | b2caps=None, | |
2442 | heads=None, |
|
2442 | heads=None, | |
2443 | common=None, |
|
2443 | common=None, | |
2444 | remote_sidedata=None, |
|
2444 | remote_sidedata=None, | |
2445 | **kwargs |
|
2445 | **kwargs | |
2446 | ): |
|
2446 | ): | |
2447 | """add a changegroup part to the requested bundle""" |
|
2447 | """add a changegroup part to the requested bundle""" | |
2448 | if not kwargs.get('cg', True) or not b2caps: |
|
2448 | if not kwargs.get('cg', True) or not b2caps: | |
2449 | return |
|
2449 | return | |
2450 |
|
2450 | |||
2451 | version = b'01' |
|
2451 | version = b'01' | |
2452 | cgversions = b2caps.get(b'changegroup') |
|
2452 | cgversions = b2caps.get(b'changegroup') | |
2453 | if cgversions: # 3.1 and 3.2 ship with an empty value |
|
2453 | if cgversions: # 3.1 and 3.2 ship with an empty value | |
2454 | cgversions = [ |
|
2454 | cgversions = [ | |
2455 | v |
|
2455 | v | |
2456 | for v in cgversions |
|
2456 | for v in cgversions | |
2457 | if v in changegroup.supportedoutgoingversions(repo) |
|
2457 | if v in changegroup.supportedoutgoingversions(repo) | |
2458 | ] |
|
2458 | ] | |
2459 | if not cgversions: |
|
2459 | if not cgversions: | |
2460 | raise error.Abort(_(b'no common changegroup version')) |
|
2460 | raise error.Abort(_(b'no common changegroup version')) | |
2461 | version = max(cgversions) |
|
2461 | version = max(cgversions) | |
2462 |
|
2462 | |||
2463 | outgoing = _computeoutgoing(repo, heads, common) |
|
2463 | outgoing = _computeoutgoing(repo, heads, common) | |
2464 | if not outgoing.missing: |
|
2464 | if not outgoing.missing: | |
2465 | return |
|
2465 | return | |
2466 |
|
2466 | |||
2467 | if kwargs.get('narrow', False): |
|
2467 | if kwargs.get('narrow', False): | |
2468 | include = sorted(filter(bool, kwargs.get('includepats', []))) |
|
2468 | include = sorted(filter(bool, kwargs.get('includepats', []))) | |
2469 | exclude = sorted(filter(bool, kwargs.get('excludepats', []))) |
|
2469 | exclude = sorted(filter(bool, kwargs.get('excludepats', []))) | |
2470 | matcher = narrowspec.match(repo.root, include=include, exclude=exclude) |
|
2470 | matcher = narrowspec.match(repo.root, include=include, exclude=exclude) | |
2471 | else: |
|
2471 | else: | |
2472 | matcher = None |
|
2472 | matcher = None | |
2473 |
|
2473 | |||
2474 | cgstream = changegroup.makestream( |
|
2474 | cgstream = changegroup.makestream( | |
2475 | repo, |
|
2475 | repo, | |
2476 | outgoing, |
|
2476 | outgoing, | |
2477 | version, |
|
2477 | version, | |
2478 | source, |
|
2478 | source, | |
2479 | bundlecaps=bundlecaps, |
|
2479 | bundlecaps=bundlecaps, | |
2480 | matcher=matcher, |
|
2480 | matcher=matcher, | |
2481 | remote_sidedata=remote_sidedata, |
|
2481 | remote_sidedata=remote_sidedata, | |
2482 | ) |
|
2482 | ) | |
2483 |
|
2483 | |||
2484 | part = bundler.newpart(b'changegroup', data=cgstream) |
|
2484 | part = bundler.newpart(b'changegroup', data=cgstream) | |
2485 | if cgversions: |
|
2485 | if cgversions: | |
2486 | part.addparam(b'version', version) |
|
2486 | part.addparam(b'version', version) | |
2487 |
|
2487 | |||
2488 | part.addparam(b'nbchanges', b'%d' % len(outgoing.missing), mandatory=False) |
|
2488 | part.addparam(b'nbchanges', b'%d' % len(outgoing.missing), mandatory=False) | |
2489 |
|
2489 | |||
2490 | if scmutil.istreemanifest(repo): |
|
2490 | if scmutil.istreemanifest(repo): | |
2491 | part.addparam(b'treemanifest', b'1') |
|
2491 | part.addparam(b'treemanifest', b'1') | |
2492 |
|
2492 | |||
2493 | if repository.REPO_FEATURE_SIDE_DATA in repo.features: |
|
2493 | if repository.REPO_FEATURE_SIDE_DATA in repo.features: | |
2494 | part.addparam(b'exp-sidedata', b'1') |
|
2494 | part.addparam(b'exp-sidedata', b'1') | |
2495 | sidedata = bundle2.format_remote_wanted_sidedata(repo) |
|
2495 | sidedata = bundle2.format_remote_wanted_sidedata(repo) | |
2496 | part.addparam(b'exp-wanted-sidedata', sidedata) |
|
2496 | part.addparam(b'exp-wanted-sidedata', sidedata) | |
2497 |
|
2497 | |||
2498 | if ( |
|
2498 | if ( | |
2499 | kwargs.get('narrow', False) |
|
2499 | kwargs.get('narrow', False) | |
2500 | and kwargs.get('narrow_acl', False) |
|
2500 | and kwargs.get('narrow_acl', False) | |
2501 | and (include or exclude) |
|
2501 | and (include or exclude) | |
2502 | ): |
|
2502 | ): | |
2503 | # this is mandatory because otherwise ACL clients won't work |
|
2503 | # this is mandatory because otherwise ACL clients won't work | |
2504 | narrowspecpart = bundler.newpart(b'Narrow:responsespec') |
|
2504 | narrowspecpart = bundler.newpart(b'Narrow:responsespec') | |
2505 | narrowspecpart.data = b'%s\0%s' % ( |
|
2505 | narrowspecpart.data = b'%s\0%s' % ( | |
2506 | b'\n'.join(include), |
|
2506 | b'\n'.join(include), | |
2507 | b'\n'.join(exclude), |
|
2507 | b'\n'.join(exclude), | |
2508 | ) |
|
2508 | ) | |
2509 |
|
2509 | |||
2510 |
|
2510 | |||
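The Narrow:responsespec payload built above is newline-joined includes, a NUL byte, then newline-joined excludes; a round-trip sketch with made-up patterns:

    include = [b'path:src', b'path:docs']
    exclude = [b'path:src/secrets']
    data = b'%s\0%s' % (b'\n'.join(include), b'\n'.join(exclude))
    # data == b'path:src\npath:docs\x00path:src/secrets'
    inc_blob, exc_blob = data.split(b'\0', 1)
    assert inc_blob.split(b'\n') == include
    assert exc_blob.split(b'\n') == exclude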
2511 | @getbundle2partsgenerator(b'bookmarks') |
|
2511 | @getbundle2partsgenerator(b'bookmarks') | |
2512 | def _getbundlebookmarkpart( |
|
2512 | def _getbundlebookmarkpart( | |
2513 | bundler, repo, source, bundlecaps=None, b2caps=None, **kwargs |
|
2513 | bundler, repo, source, bundlecaps=None, b2caps=None, **kwargs | |
2514 | ): |
|
2514 | ): | |
2515 | """add a bookmark part to the requested bundle""" |
|
2515 | """add a bookmark part to the requested bundle""" | |
2516 | if not kwargs.get('bookmarks', False): |
|
2516 | if not kwargs.get('bookmarks', False): | |
2517 | return |
|
2517 | return | |
2518 | if not b2caps or b'bookmarks' not in b2caps: |
|
2518 | if not b2caps or b'bookmarks' not in b2caps: | |
2519 | raise error.Abort(_(b'no common bookmarks exchange method')) |
|
2519 | raise error.Abort(_(b'no common bookmarks exchange method')) | |
2520 | books = bookmod.listbinbookmarks(repo) |
|
2520 | books = bookmod.listbinbookmarks(repo) | |
2521 | data = bookmod.binaryencode(repo, books) |
|
2521 | data = bookmod.binaryencode(repo, books) | |
2522 | if data: |
|
2522 | if data: | |
2523 | bundler.newpart(b'bookmarks', data=data) |
|
2523 | bundler.newpart(b'bookmarks', data=data) | |
2524 |
|
2524 | |||
2525 |
|
2525 | |||
2526 | @getbundle2partsgenerator(b'listkeys') |
|
2526 | @getbundle2partsgenerator(b'listkeys') | |
2527 | def _getbundlelistkeysparts( |
|
2527 | def _getbundlelistkeysparts( | |
2528 | bundler, repo, source, bundlecaps=None, b2caps=None, **kwargs |
|
2528 | bundler, repo, source, bundlecaps=None, b2caps=None, **kwargs | |
2529 | ): |
|
2529 | ): | |
2530 | """add parts containing listkeys namespaces to the requested bundle""" |
|
2530 | """add parts containing listkeys namespaces to the requested bundle""" | |
2531 | listkeys = kwargs.get('listkeys', ()) |
|
2531 | listkeys = kwargs.get('listkeys', ()) | |
2532 | for namespace in listkeys: |
|
2532 | for namespace in listkeys: | |
2533 | part = bundler.newpart(b'listkeys') |
|
2533 | part = bundler.newpart(b'listkeys') | |
2534 | part.addparam(b'namespace', namespace) |
|
2534 | part.addparam(b'namespace', namespace) | |
2535 | keys = repo.listkeys(namespace).items() |
|
2535 | keys = repo.listkeys(namespace).items() | |
2536 | part.data = pushkey.encodekeys(keys) |
|
2536 | part.data = pushkey.encodekeys(keys) | |
2537 |
|
2537 | |||
2538 |
|
2538 | |||
2539 | @getbundle2partsgenerator(b'obsmarkers') |
|
2539 | @getbundle2partsgenerator(b'obsmarkers') | |
2540 | def _getbundleobsmarkerpart( |
|
2540 | def _getbundleobsmarkerpart( | |
2541 | bundler, repo, source, bundlecaps=None, b2caps=None, heads=None, **kwargs |
|
2541 | bundler, repo, source, bundlecaps=None, b2caps=None, heads=None, **kwargs | |
2542 | ): |
|
2542 | ): | |
2543 | """add an obsolescence markers part to the requested bundle""" |
|
2543 | """add an obsolescence markers part to the requested bundle""" | |
2544 | if kwargs.get('obsmarkers', False): |
|
2544 | if kwargs.get('obsmarkers', False): | |
2545 | if heads is None: |
|
2545 | if heads is None: | |
2546 | heads = repo.heads() |
|
2546 | heads = repo.heads() | |
2547 | subset = [c.node() for c in repo.set(b'::%ln', heads)] |
|
2547 | subset = [c.node() for c in repo.set(b'::%ln', heads)] | |
2548 | markers = repo.obsstore.relevantmarkers(subset) |
|
2548 | markers = repo.obsstore.relevantmarkers(subset) | |
2549 | markers = obsutil.sortedmarkers(markers) |
|
2549 | markers = obsutil.sortedmarkers(markers) | |
2550 | bundle2.buildobsmarkerspart(bundler, markers) |
|
2550 | bundle2.buildobsmarkerspart(bundler, markers) | |
2551 |
|
2551 | |||
2552 |
|
2552 | |||
2553 | @getbundle2partsgenerator(b'phases') |
|
2553 | @getbundle2partsgenerator(b'phases') | |
2554 | def _getbundlephasespart( |
|
2554 | def _getbundlephasespart( | |
2555 | bundler, repo, source, bundlecaps=None, b2caps=None, heads=None, **kwargs |
|
2555 | bundler, repo, source, bundlecaps=None, b2caps=None, heads=None, **kwargs | |
2556 | ): |
|
2556 | ): | |
2557 | """add phase heads part to the requested bundle""" |
|
2557 | """add phase heads part to the requested bundle""" | |
2558 | if kwargs.get('phases', False): |
|
2558 | if kwargs.get('phases', False): | |
2559 | if not b2caps or b'heads' not in b2caps.get(b'phases'): |
|
2559 | if not b2caps or b'heads' not in b2caps.get(b'phases'): | |
2560 | raise error.Abort(_(b'no common phases exchange method')) |
|
2560 | raise error.Abort(_(b'no common phases exchange method')) | |
2561 | if heads is None: |
|
2561 | if heads is None: | |
2562 | heads = repo.heads() |
|
2562 | heads = repo.heads() | |
2563 |
|
2563 | |||
2564 | headsbyphase = collections.defaultdict(set) |
|
2564 | headsbyphase = collections.defaultdict(set) | |
2565 | if repo.publishing(): |
|
2565 | if repo.publishing(): | |
2566 | headsbyphase[phases.public] = heads |
|
2566 | headsbyphase[phases.public] = heads | |
2567 | else: |
|
2567 | else: | |
2568 | # find the appropriate heads to move |
|
2568 | # find the appropriate heads to move | |
2569 |
|
2569 | |||
2570 | phase = repo._phasecache.phase |
|
2570 | phase = repo._phasecache.phase | |
2571 | node = repo.changelog.node |
|
2571 | node = repo.changelog.node | |
2572 | rev = repo.changelog.rev |
|
2572 | rev = repo.changelog.rev | |
2573 | for h in heads: |
|
2573 | for h in heads: | |
2574 | headsbyphase[phase(repo, rev(h))].add(h) |
|
2574 | headsbyphase[phase(repo, rev(h))].add(h) | |
2575 | seenphases = list(headsbyphase.keys()) |
|
2575 | seenphases = list(headsbyphase.keys()) | |
2576 |
|
2576 | |||
2577 | # We do not handle anything but public and draft phases for now |
|
2577 | # We do not handle anything but public and draft phases for now | |
2578 | if seenphases: |
|
2578 | if seenphases: | |
2579 | assert max(seenphases) <= phases.draft |
|
2579 | assert max(seenphases) <= phases.draft | |
2580 |
|
2580 | |||
2581 | # if client is pulling non-public changesets, we need to find |
|
2581 | # if client is pulling non-public changesets, we need to find | |
2582 | # intermediate public heads. |
|
2582 | # intermediate public heads. | |
2583 | draftheads = headsbyphase.get(phases.draft, set()) |
|
2583 | draftheads = headsbyphase.get(phases.draft, set()) | |
2584 | if draftheads: |
|
2584 | if draftheads: | |
2585 | publicheads = headsbyphase.get(phases.public, set()) |
|
2585 | publicheads = headsbyphase.get(phases.public, set()) | |
2586 |
|
2586 | |||
2587 | revset = b'heads(only(%ln, %ln) and public())' |
|
2587 | revset = b'heads(only(%ln, %ln) and public())' | |
2588 | extraheads = repo.revs(revset, draftheads, publicheads) |
|
2588 | extraheads = repo.revs(revset, draftheads, publicheads) | |
2589 | for r in extraheads: |
|
2589 | for r in extraheads: | |
2590 | headsbyphase[phases.public].add(node(r)) |
|
2590 | headsbyphase[phases.public].add(node(r)) | |
2591 |
|
2591 | |||
2592 | # transform data in a format used by the encoding function |
|
2592 | # transform data in a format used by the encoding function | |
2593 | phasemapping = { |
|
2593 | phasemapping = { | |
2594 | phase: sorted(headsbyphase[phase]) for phase in phases.allphases |
|
2594 | phase: sorted(headsbyphase[phase]) for phase in phases.allphases | |
2595 | } |
|
2595 | } | |
2596 |
|
2596 | |||
2597 | # generate the actual part |
|
2597 | # generate the actual part | |
2598 | phasedata = phases.binaryencode(phasemapping) |
|
2598 | phasedata = phases.binaryencode(phasemapping) | |
2599 | bundler.newpart(b'phase-heads', data=phasedata) |
|
2599 | bundler.newpart(b'phase-heads', data=phasedata) | |
2600 |
|
2600 | |||
2601 |
|
2601 | |||
2602 | @getbundle2partsgenerator(b'hgtagsfnodes') |
|
2602 | @getbundle2partsgenerator(b'hgtagsfnodes') | |
2603 | def _getbundletagsfnodes( |
|
2603 | def _getbundletagsfnodes( | |
2604 | bundler, |
|
2604 | bundler, | |
2605 | repo, |
|
2605 | repo, | |
2606 | source, |
|
2606 | source, | |
2607 | bundlecaps=None, |
|
2607 | bundlecaps=None, | |
2608 | b2caps=None, |
|
2608 | b2caps=None, | |
2609 | heads=None, |
|
2609 | heads=None, | |
2610 | common=None, |
|
2610 | common=None, | |
2611 | **kwargs |
|
2611 | **kwargs | |
2612 | ): |
|
2612 | ): | |
2613 | """Transfer the .hgtags filenodes mapping. |
|
2613 | """Transfer the .hgtags filenodes mapping. | |
2614 |
|
2614 | |||
2615 | Only values for heads in this bundle will be transferred. |
|
2615 | Only values for heads in this bundle will be transferred. | |
2616 |
|
2616 | |||
2617 | The part data consists of pairs of 20-byte changeset nodes and raw |
|
2617 | The part data consists of pairs of 20-byte changeset nodes and raw | |
2618 | .hgtags filenode values. |
|
2618 | .hgtags filenode values. | |
2619 | """ |
|
2619 | """ | |
2620 | # Don't send unless: |
|
2620 | # Don't send unless: | |
2621 | # - changesets are being exchanged, |
|
2621 | # - changesets are being exchanged, | |
2622 | # - the client supports it. |
|
2622 | # - the client supports it. | |
2623 | if not b2caps or not (kwargs.get('cg', True) and b'hgtagsfnodes' in b2caps): |
|
2623 | if not b2caps or not (kwargs.get('cg', True) and b'hgtagsfnodes' in b2caps): | |
2624 | return |
|
2624 | return | |
2625 |
|
2625 | |||
2626 | outgoing = _computeoutgoing(repo, heads, common) |
|
2626 | outgoing = _computeoutgoing(repo, heads, common) | |
2627 | bundle2.addparttagsfnodescache(repo, bundler, outgoing) |
|
2627 | bundle2.addparttagsfnodescache(repo, bundler, outgoing) | |
2628 |
|
2628 | |||
2629 |
|
2629 | |||
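Per the docstring above, the part body is a flat run of fixed-width pairs; a slicing sketch with made-up values (the 20-byte filenode width assumes sha1 hashes):

    # One record: 20-byte changeset node, then the raw .hgtags filenode.
    record = b'\x11' * 20 + b'\x22' * 20
    assert len(record) == 40
    cnode, tagsfnode = record[:20], record[20:]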
2630 | @getbundle2partsgenerator(b'cache:rev-branch-cache') |
|
2630 | @getbundle2partsgenerator(b'cache:rev-branch-cache') | |
2631 | def _getbundlerevbranchcache( |
|
2631 | def _getbundlerevbranchcache( | |
2632 | bundler, |
|
2632 | bundler, | |
2633 | repo, |
|
2633 | repo, | |
2634 | source, |
|
2634 | source, | |
2635 | bundlecaps=None, |
|
2635 | bundlecaps=None, | |
2636 | b2caps=None, |
|
2636 | b2caps=None, | |
2637 | heads=None, |
|
2637 | heads=None, | |
2638 | common=None, |
|
2638 | common=None, | |
2639 | **kwargs |
|
2639 | **kwargs | |
2640 | ): |
|
2640 | ): | |
2641 | """Transfer the rev-branch-cache mapping |
|
2641 | """Transfer the rev-branch-cache mapping | |
2642 |
|
2642 | |||
2643 | The payload is a series of records, one per branch |
|
2643 | The payload is a series of records, one per branch | |
2644 |
|
2644 | |||
2645 | 1) branch name length |
|
2645 | 1) branch name length | |
2646 | 2) number of open heads |
|
2646 | 2) number of open heads | |
2647 | 3) number of closed heads |
|
2647 | 3) number of closed heads | |
2648 | 4) open heads nodes |
|
2648 | 4) open heads nodes | |
2649 | 5) closed heads nodes |
|
2649 | 5) closed heads nodes | |
2650 | """ |
|
2650 | """ | |
2651 | # Don't send unless: |
|
2651 | # Don't send unless: | |
2652 | # - changesets are being exchanged, |
|
2652 | # - changesets are being exchanged, | |
2653 | # - the client supports it. |
|
2653 | # - the client supports it. | |
2654 | # - narrow bundle isn't in play (not currently compatible). |
|
2654 | # - narrow bundle isn't in play (not currently compatible). | |
2655 | if ( |
|
2655 | if ( | |
2656 | not kwargs.get('cg', True) |
|
2656 | not kwargs.get('cg', True) | |
2657 | or not b2caps |
|
2657 | or not b2caps | |
2658 | or b'rev-branch-cache' not in b2caps |
|
2658 | or b'rev-branch-cache' not in b2caps | |
2659 | or kwargs.get('narrow', False) |
|
2659 | or kwargs.get('narrow', False) | |
2660 | or repo.ui.has_section(_NARROWACL_SECTION) |
|
2660 | or repo.ui.has_section(_NARROWACL_SECTION) | |
2661 | ): |
|
2661 | ): | |
2662 | return |
|
2662 | return | |
2663 |
|
2663 | |||
2664 | outgoing = _computeoutgoing(repo, heads, common) |
|
2664 | outgoing = _computeoutgoing(repo, heads, common) | |
2665 | bundle2.addpartrevbranchcache(repo, bundler, outgoing) |
|
2665 | bundle2.addpartrevbranchcache(repo, bundler, outgoing) | |
2666 |
|
2666 | |||
2667 |
|
2667 | |||
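A hedged sketch of one per-branch record from the payload list in the docstring above; the three big-endian u32 header fields are an assumption inferred from the field order, and bundle2.addpartrevbranchcache remains the authoritative encoder:

    import struct

    header = struct.Struct('>III')      # name length, #open, #closed (assumed)
    branch = b'default'
    open_heads = [b'\x01' * 20]         # made-up 20-byte nodes
    closed_heads = []
    record = header.pack(len(branch), len(open_heads), len(closed_heads))
    record += branch + b''.join(open_heads) + b''.join(closed_heads)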
2668 | def check_heads(repo, their_heads, context): |
|
2668 | def check_heads(repo, their_heads, context): | |
2669 | """check if the heads of a repo have been modified |
|
2669 | """check if the heads of a repo have been modified | |
2670 |
|
2670 | |||
2671 | Used by peer for unbundling. |
|
2671 | Used by peer for unbundling. | |
2672 | """ |
|
2672 | """ | |
2673 | heads = repo.heads() |
|
2673 | heads = repo.heads() | |
2674 | heads_hash = hashutil.sha1(b''.join(sorted(heads))).digest() |
|
2674 | heads_hash = hashutil.sha1(b''.join(sorted(heads))).digest() | |
2675 | if not ( |
|
2675 | if not ( | |
2676 | their_heads == [b'force'] |
|
2676 | their_heads == [b'force'] | |
2677 | or their_heads == heads |
|
2677 | or their_heads == heads | |
2678 | or their_heads == [b'hashed', heads_hash] |
|
2678 | or their_heads == [b'hashed', heads_hash] | |
2679 | ): |
|
2679 | ): | |
2680 | # someone else committed/pushed/unbundled while we |
|
2680 | # someone else committed/pushed/unbundled while we | |
2681 | # were transferring data |
|
2681 | # were transferring data | |
2682 | raise error.PushRaced( |
|
2682 | raise error.PushRaced( | |
2683 | b'repository changed while %s - please try again' % context |
|
2683 | b'repository changed while %s - please try again' % context | |
2684 | ) |
|
2684 | ) | |
2685 |
|
2685 | |||
2686 |
|
2686 | |||
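check_heads above accepts three spellings of "the heads I saw": the literal b'force', the raw head list, or a sha1 digest of the sorted heads; a sketch of the digest variant with made-up nodes:

    import hashlib

    heads = [b'\x02' * 20, b'\x01' * 20]          # made-up head nodes
    heads_hash = hashlib.sha1(b''.join(sorted(heads))).digest()
    their_heads = [b'hashed', heads_hash]         # what a client would send
    raced = not (their_heads == [b'force']
                 or their_heads == heads
                 or their_heads == [b'hashed', heads_hash])
    # raced is False here; it turns True when another push lands in
    # between, which is exactly when check_heads raises PushRaced.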
2687 | def unbundle(repo, cg, heads, source, url): |
|
2687 | def unbundle(repo, cg, heads, source, url): | |
2688 | """Apply a bundle to a repo. |
|
2688 | """Apply a bundle to a repo. | |
2689 |
|
2689 | |||
2690 | This function makes sure the repo is locked during the application and has |
|
2690 | This function makes sure the repo is locked during the application and has | |
2691 | a mechanism to check that no push race occurred between the creation of the |
|
2691 | a mechanism to check that no push race occurred between the creation of the | |
2692 | bundle and its application. |
|
2692 | bundle and its application. | |
2693 |
|
2693 | |||
2694 | If the push was raced, a PushRaced exception is raised.""" |
|
2694 | If the push was raced, a PushRaced exception is raised.""" | |
2695 | r = 0 |
|
2695 | r = 0 | |
2696 | # need a transaction when processing a bundle2 stream |
|
2696 | # need a transaction when processing a bundle2 stream | |
2697 | # [wlock, lock, tr] - needs to be an array so nested functions can modify it |
|
2697 | # [wlock, lock, tr] - needs to be an array so nested functions can modify it | |
2698 | lockandtr = [None, None, None] |
|
2698 | lockandtr = [None, None, None] | |
2699 | recordout = None |
|
2699 | recordout = None | |
2700 | # quick fix for output mismatch with bundle2 in 3.4 |
|
2700 | # quick fix for output mismatch with bundle2 in 3.4 | |
2701 | captureoutput = repo.ui.configbool( |
|
2701 | captureoutput = repo.ui.configbool( | |
2702 | b'experimental', b'bundle2-output-capture' |
|
2702 | b'experimental', b'bundle2-output-capture' | |
2703 | ) |
|
2703 | ) | |
2704 | if url.startswith(b'remote:http:') or url.startswith(b'remote:https:'): |
|
2704 | if url.startswith(b'remote:http:') or url.startswith(b'remote:https:'): | |
2705 | captureoutput = True |
|
2705 | captureoutput = True | |
2706 | try: |
|
2706 | try: | |
2707 | # note: outside bundle1, 'heads' is expected to be empty and this |
|
2707 | # note: outside bundle1, 'heads' is expected to be empty and this | |
2708 | # 'check_heads' call will be a no-op |
|
2708 | # 'check_heads' call will be a no-op | |
2709 | check_heads(repo, heads, b'uploading changes') |
|
2709 | check_heads(repo, heads, b'uploading changes') | |
2710 | # push can proceed |
|
2710 | # push can proceed | |
2711 | if not isinstance(cg, bundle2.unbundle20): |
|
2711 | if not isinstance(cg, bundle2.unbundle20): | |
2712 | # legacy case: bundle1 (changegroup 01) |
|
2712 | # legacy case: bundle1 (changegroup 01) | |
2713 | txnname = b"\n".join([source, urlutil.hidepassword(url)]) |
|
2713 | txnname = b"\n".join([source, urlutil.hidepassword(url)]) | |
2714 | with repo.lock(), repo.transaction(txnname) as tr: |
|
2714 | with repo.lock(), repo.transaction(txnname) as tr: | |
2715 | op = bundle2.applybundle(repo, cg, tr, source, url) |
|
2715 | op = bundle2.applybundle(repo, cg, tr, source, url) | |
2716 | r = bundle2.combinechangegroupresults(op) |
|
2716 | r = bundle2.combinechangegroupresults(op) | |
2717 | else: |
|
2717 | else: | |
2718 | r = None |
|
2718 | r = None | |
2719 | try: |
|
2719 | try: | |
2720 |
|
2720 | |||
2721 | def gettransaction(): |
|
2721 | def gettransaction(): | |
2722 | if not lockandtr[2]: |
|
2722 | if not lockandtr[2]: | |
2723 | if not bookmod.bookmarksinstore(repo): |
|
2723 | if not bookmod.bookmarksinstore(repo): | |
2724 | lockandtr[0] = repo.wlock() |
|
2724 | lockandtr[0] = repo.wlock() | |
2725 | lockandtr[1] = repo.lock() |
|
2725 | lockandtr[1] = repo.lock() | |
2726 | lockandtr[2] = repo.transaction(source) |
|
2726 | lockandtr[2] = repo.transaction(source) | |
2727 | lockandtr[2].hookargs[b'source'] = source |
|
2727 | lockandtr[2].hookargs[b'source'] = source | |
2728 | lockandtr[2].hookargs[b'url'] = url |
|
2728 | lockandtr[2].hookargs[b'url'] = url | |
2729 | lockandtr[2].hookargs[b'bundle2'] = b'1' |
|
2729 | lockandtr[2].hookargs[b'bundle2'] = b'1' | |
2730 | return lockandtr[2] |
|
2730 | return lockandtr[2] | |
2731 |
|
2731 | |||
2732 | # Do greedy locking by default until we're satisfied with lazy |
|
2732 | # Do greedy locking by default until we're satisfied with lazy | |
2733 | # locking. |
|
2733 | # locking. | |
2734 | if not repo.ui.configbool( |
|
2734 | if not repo.ui.configbool( | |
2735 | b'experimental', b'bundle2lazylocking' |
|
2735 | b'experimental', b'bundle2lazylocking' | |
2736 | ): |
|
2736 | ): | |
2737 | gettransaction() |
|
2737 | gettransaction() | |
2738 |
|
2738 | |||
2739 | op = bundle2.bundleoperation( |
|
2739 | op = bundle2.bundleoperation( | |
2740 | repo, |
|
2740 | repo, | |
2741 | gettransaction, |
|
2741 | gettransaction, | |
2742 | captureoutput=captureoutput, |
|
2742 | captureoutput=captureoutput, | |
2743 | source=b'push', |
|
2743 | source=b'push', | |
2744 | ) |
|
2744 | ) | |
2745 | try: |
|
2745 | try: | |
2746 | op = bundle2.processbundle(repo, cg, op=op) |
|
2746 | op = bundle2.processbundle(repo, cg, op=op) | |
2747 | finally: |
|
2747 | finally: | |
2748 | r = op.reply |
|
2748 | r = op.reply | |
2749 | if captureoutput and r is not None: |
|
2749 | if captureoutput and r is not None: | |
2750 | repo.ui.pushbuffer(error=True, subproc=True) |
|
2750 | repo.ui.pushbuffer(error=True, subproc=True) | |
2751 |
|
2751 | |||
2752 | def recordout(output): |
|
2752 | def recordout(output): | |
2753 | r.newpart(b'output', data=output, mandatory=False) |
|
2753 | r.newpart(b'output', data=output, mandatory=False) | |
2754 |
|
2754 | |||
2755 | if lockandtr[2] is not None: |
|
2755 | if lockandtr[2] is not None: | |
2756 | lockandtr[2].close() |
|
2756 | lockandtr[2].close() | |
2757 | except BaseException as exc: |
|
2757 | except BaseException as exc: | |
2758 | exc.duringunbundle2 = True |
|
2758 | exc.duringunbundle2 = True | |
2759 | if captureoutput and r is not None: |
|
2759 | if captureoutput and r is not None: | |
2760 | parts = exc._bundle2salvagedoutput = r.salvageoutput() |
|
2760 | parts = exc._bundle2salvagedoutput = r.salvageoutput() | |
2761 |
|
2761 | |||
2762 | def recordout(output): |
|
2762 | def recordout(output): | |
2763 | part = bundle2.bundlepart( |
|
2763 | part = bundle2.bundlepart( | |
2764 | b'output', data=output, mandatory=False |
|
2764 | b'output', data=output, mandatory=False | |
2765 | ) |
|
2765 | ) | |
2766 | parts.append(part) |
|
2766 | parts.append(part) | |
2767 |
|
2767 | |||
2768 | raise |
|
2768 | raise | |
2769 | finally: |
|
2769 | finally: | |
2770 | lockmod.release(lockandtr[2], lockandtr[1], lockandtr[0]) |
|
2770 | lockmod.release(lockandtr[2], lockandtr[1], lockandtr[0]) | |
2771 | if recordout is not None: |
|
2771 | if recordout is not None: | |
2772 | recordout(repo.ui.popbuffer()) |
|
2772 | recordout(repo.ui.popbuffer()) | |
2773 | return r |
|
2773 | return r | |
2774 |
|
2774 | |||
2775 |
|
2775 | |||
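The lockandtr list above exists so the nested gettransaction closure can create the wlock/lock/transaction lazily on first use while the finally clause can still release whatever was taken; a minimal stand-in for the one-time-creation pattern:

    # object() stands in for the real lock and transaction handles.
    lockandtr = [None, None, None]    # [wlock, lock, tr]

    def gettransaction():
        if not lockandtr[2]:
            lockandtr[0] = object()   # repo.wlock() in the real code
            lockandtr[1] = object()   # repo.lock()
            lockandtr[2] = object()   # repo.transaction(source)
        return lockandtr[2]

    assert gettransaction() is gettransaction()  # created once, then reused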
2776 | def _maybeapplyclonebundle(pullop): |
|
2776 | def _maybeapplyclonebundle(pullop): | |
2777 | """Apply a clone bundle from a remote, if possible.""" |
|
2777 | """Apply a clone bundle from a remote, if possible.""" | |
2778 |
|
2778 | |||
2779 | repo = pullop.repo |
|
2779 | repo = pullop.repo | |
2780 | remote = pullop.remote |
|
2780 | remote = pullop.remote | |
2781 |
|
2781 | |||
2782 | if not repo.ui.configbool(b'ui', b'clonebundles'): |
|
2782 | if not repo.ui.configbool(b'ui', b'clonebundles'): | |
2783 | return |
|
2783 | return | |
2784 |
|
2784 | |||
2785 | # Only run if local repo is empty. |
|
2785 | # Only run if local repo is empty. | |
2786 | if len(repo): |
|
2786 | if len(repo): | |
2787 | return |
|
2787 | return | |
2788 |
|
2788 | |||
2789 | if pullop.heads: |
|
2789 | if pullop.heads: | |
2790 | return |
|
2790 | return | |
2791 |
|
2791 | |||
2792 | if not remote.capable(b'clonebundles'): |
|
2792 | if not remote.capable(b'clonebundles'): | |
2793 | return |
|
2793 | return | |
2794 |
|
2794 | |||
2795 | with remote.commandexecutor() as e: |
|
2795 | with remote.commandexecutor() as e: | |
2796 | res = e.callcommand(b'clonebundles', {}).result() |
|
2796 | res = e.callcommand(b'clonebundles', {}).result() | |
2797 |
|
2797 | |||
2798 | # If we call the wire protocol command, that's good enough to record the |
|
2798 | # If we call the wire protocol command, that's good enough to record the | |
2799 | # attempt. |
|
2799 | # attempt. | |
2800 | pullop.clonebundleattempted = True |
|
2800 | pullop.clonebundleattempted = True | |
2801 |
|
2801 | |||
2802 | entries = bundlecaches.parseclonebundlesmanifest(repo, res) |
|
2802 | entries = bundlecaches.parseclonebundlesmanifest(repo, res) | |
2803 | if not entries: |
|
2803 | if not entries: | |
2804 | repo.ui.note( |
|
2804 | repo.ui.note( | |
2805 | _( |
|
2805 | _( | |
2806 | b'no clone bundles available on remote; ' |
|
2806 | b'no clone bundles available on remote; ' | |
2807 | b'falling back to regular clone\n' |
|
2807 | b'falling back to regular clone\n' | |
2808 | ) |
|
2808 | ) | |
2809 | ) |
|
2809 | ) | |
2810 | return |
|
2810 | return | |
2811 |
|
2811 | |||
2812 | entries = bundlecaches.filterclonebundleentries( |
|
2812 | entries = bundlecaches.filterclonebundleentries( | |
2813 | repo, entries, streamclonerequested=pullop.streamclonerequested |
|
2813 | repo, entries, streamclonerequested=pullop.streamclonerequested | |
2814 | ) |
|
2814 | ) | |
2815 |
|
2815 | |||
2816 | if not entries: |
|
2816 | if not entries: | |
2817 | # There is a thundering herd concern here. However, if a server |
|
2817 | # There is a thundering herd concern here. However, if a server | |
2818 | # operator doesn't advertise bundles appropriate for its clients, |
|
2818 | # operator doesn't advertise bundles appropriate for its clients, | |
2819 | # they deserve what's coming. Furthermore, from a client's |
|
2819 | # they deserve what's coming. Furthermore, from a client's | |
2820 | # perspective, no automatic fallback would mean not being able to |
|
2820 | # perspective, no automatic fallback would mean not being able to | |
2821 | # clone! |
|
2821 | # clone! | |
2822 | repo.ui.warn( |
|
2822 | repo.ui.warn( | |
2823 | _( |
|
2823 | _( | |
2824 | b'no compatible clone bundles available on server; ' |
|
2824 | b'no compatible clone bundles available on server; ' | |
2825 | b'falling back to regular clone\n' |
|
2825 | b'falling back to regular clone\n' | |
2826 | ) |
|
2826 | ) | |
2827 | ) |
|
2827 | ) | |
2828 | repo.ui.warn( |
|
2828 | repo.ui.warn( | |
2829 | _(b'(you may want to report this to the server operator)\n') |
|
2829 | _(b'(you may want to report this to the server operator)\n') | |
2830 | ) |
|
2830 | ) | |
2831 | return |
|
2831 | return | |
2832 |
|
2832 | |||
2833 | entries = bundlecaches.sortclonebundleentries(repo.ui, entries) |
|
2833 | entries = bundlecaches.sortclonebundleentries(repo.ui, entries) | |
2834 |
|
2834 | |||
2835 | url = entries[0][b'URL'] |
|
2835 | url = entries[0][b'URL'] | |
2836 | repo.ui.status(_(b'applying clone bundle from %s\n') % url) |
|
2836 | repo.ui.status(_(b'applying clone bundle from %s\n') % url) | |
2837 | if trypullbundlefromurl(repo.ui, repo, url): |
|
2837 | if trypullbundlefromurl(repo.ui, repo, url, remote): | |
2838 | repo.ui.status(_(b'finished applying clone bundle\n')) |
|
2838 | repo.ui.status(_(b'finished applying clone bundle\n')) | |
2839 | # Bundle failed. |
|
2839 | # Bundle failed. | |
2840 | # |
|
2840 | # | |
2841 | # We abort by default to avoid the thundering herd of |
|
2841 | # We abort by default to avoid the thundering herd of | |
2842 | # clients flooding a server that was expecting expensive |
|
2842 | # clients flooding a server that was expecting expensive | |
2843 | # clone load to be offloaded. |
|
2843 | # clone load to be offloaded. | |
2844 | elif repo.ui.configbool(b'ui', b'clonebundlefallback'): |
|
2844 | elif repo.ui.configbool(b'ui', b'clonebundlefallback'): | |
2845 | repo.ui.warn(_(b'falling back to normal clone\n')) |
|
2845 | repo.ui.warn(_(b'falling back to normal clone\n')) | |
2846 | else: |
|
2846 | else: | |
2847 | raise error.Abort( |
|
2847 | raise error.Abort( | |
2848 | _(b'error applying bundle'), |
|
2848 | _(b'error applying bundle'), | |
2849 | hint=_( |
|
2849 | hint=_( | |
2850 | b'if this error persists, consider contacting ' |
|
2850 | b'if this error persists, consider contacting ' | |
2851 | b'the server operator or disable clone ' |
|
2851 | b'the server operator or disable clone ' | |
2852 | b'bundles via ' |
|
2852 | b'bundles via ' | |
2853 | b'"--config ui.clonebundles=false"' |
|
2853 | b'"--config ui.clonebundles=false"' | |
2854 | ), |
|
2854 | ), | |
2855 | ) |
|
2855 | ) | |
2856 |
|
2856 | |||
2857 |
|
2857 | |||
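For orientation, the manifest fetched above through the clonebundles wire command is line-oriented: a URL followed by optional space-separated key=value attributes; a parsing sketch with made-up entries (bundlecaches.parseclonebundlesmanifest is the real parser):

    manifest = (
        b'https://cdn.example.com/full.zstd.hg BUNDLESPEC=zstd-v2\n'
        b'https://cdn.example.com/full.gzip.hg BUNDLESPEC=gzip-v2\n'
    )
    entries = []
    for line in manifest.splitlines():
        fields = line.split()
        attrs = {b'URL': fields[0]}
        attrs.update(f.split(b'=', 1) for f in fields[1:])
        entries.append(attrs)
    # entries[0][b'URL'] is the first candidate tried, as in the code above.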
2858 | def trypullbundlefromurl(ui, repo, url): |
|
2858 | def inline_clone_bundle_open(ui, url, peer): | |
|
2859 | if not peer: | |||
|
2860 | raise error.Abort(_(b'no remote repository supplied for %s') % url) | |
|
2861 | clonebundleid = url[len(bundlecaches.CLONEBUNDLESCHEME) :] | |||
|
2862 | peerclonebundle = peer.get_inline_clone_bundle(clonebundleid) | |||
|
2863 | return util.chunkbuffer(peerclonebundle) | |||
|
2864 | ||||
|
2865 | ||||
|
2866 | def trypullbundlefromurl(ui, repo, url, peer): | |||
2859 | """Attempt to apply a bundle from a URL.""" |
|
2867 | """Attempt to apply a bundle from a URL.""" | |
2860 | with repo.lock(), repo.transaction(b'bundleurl') as tr: |
|
2868 | with repo.lock(), repo.transaction(b'bundleurl') as tr: | |
2861 | try: |
|
2869 | try: | |
2862 | fh = urlmod.open(ui, url) |
|
2870 | if url.startswith(bundlecaches.CLONEBUNDLESCHEME): | |
|
2871 | fh = inline_clone_bundle_open(ui, url, peer) | |||
|
2872 | else: | |||
|
2873 | fh = urlmod.open(ui, url) | |||
2863 | cg = readbundle(ui, fh, b'stream') |
|
2874 | cg = readbundle(ui, fh, b'stream') | |
2864 |
|
2875 | |||
2865 | if isinstance(cg, streamclone.streamcloneapplier): |
|
2876 | if isinstance(cg, streamclone.streamcloneapplier): | |
2866 | cg.apply(repo) |
|
2877 | cg.apply(repo) | |
2867 | else: |
|
2878 | else: | |
2868 | bundle2.applybundle(repo, cg, tr, b'clonebundles', url) |
|
2879 | bundle2.applybundle(repo, cg, tr, b'clonebundles', url) | |
2869 | return True |
|
2880 | return True | |
2870 | except urlerr.httperror as e: |
|
2881 | except urlerr.httperror as e: | |
2871 | ui.warn( |
|
2882 | ui.warn( | |
2872 | _(b'HTTP error fetching bundle: %s\n') |
|
2883 | _(b'HTTP error fetching bundle: %s\n') | |
2873 | % stringutil.forcebytestr(e) |
|
2884 | % stringutil.forcebytestr(e) | |
2874 | ) |
|
2885 | ) | |
2875 | except urlerr.urlerror as e: |
|
2886 | except urlerr.urlerror as e: | |
2876 | ui.warn( |
|
2887 | ui.warn( | |
2877 | _(b'error fetching bundle: %s\n') |
|
2888 | _(b'error fetching bundle: %s\n') | |
2878 | % stringutil.forcebytestr(e.reason) |
|
2889 | % stringutil.forcebytestr(e.reason) | |
2879 | ) |
|
2890 | ) | |
2880 |
|
2891 | |||
2881 | return False |
|
2892 | return False |
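The new peer argument threaded through above exists for URLs using the inline clone bundle scheme, which names a bundle cached on the server itself instead of an HTTP location; a dispatch sketch, where the b'peer-bundle-cache://' prefix is an assumption about the value of bundlecaches.CLONEBUNDLESCHEME:

    CLONEBUNDLESCHEME = b'peer-bundle-cache://'   # assumed constant value

    def is_inline(url):
        return url.startswith(CLONEBUNDLESCHEME)

    url = b'peer-bundle-cache://full.hg'
    if is_inline(url):
        clonebundleid = url[len(CLONEBUNDLESCHEME):]   # b'full.hg'
        # the peer then serves this id via get_inline_clone_bundle()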
@@ -1,3383 +1,3389 b'' | |||||
1 | The Mercurial system uses a set of configuration files to control |
|
1 | The Mercurial system uses a set of configuration files to control | |
2 | aspects of its behavior. |
|
2 | aspects of its behavior. | |
3 |
|
3 | |||
4 | Troubleshooting |
|
4 | Troubleshooting | |
5 | =============== |
|
5 | =============== | |
6 |
|
6 | |||
7 | If you're having problems with your configuration, |
|
7 | If you're having problems with your configuration, | |
8 | :hg:`config --source` can help you understand what is introducing |
|
8 | :hg:`config --source` can help you understand what is introducing | |
9 | a setting into your environment. |
|
9 | a setting into your environment. | |
10 |
|
10 | |||
11 | See :hg:`help config.syntax` and :hg:`help config.files` |
|
11 | See :hg:`help config.syntax` and :hg:`help config.files` | |
12 | for information about how and where to override things. |
|
12 | for information about how and where to override things. | |

Structure
=========

The configuration files use a simple ini-file format. A configuration
file consists of sections, led by a ``[section]`` header and followed
by ``name = value`` entries::

    [ui]
    username = Firstname Lastname <firstname.lastname@example.net>
    verbose = True

The above entries will be referred to as ``ui.username`` and
``ui.verbose``, respectively. See :hg:`help config.syntax`.

Files
=====

Mercurial reads configuration data from several files, if they exist.
These files do not exist by default and you will have to create the
appropriate configuration files yourself:

Local configuration is put into the per-repository ``<repo>/.hg/hgrc`` file.

Global configuration like the username setting is typically put into:

.. container:: windows

  - ``%USERPROFILE%\mercurial.ini`` (on Windows)

.. container:: unix.plan9

  - ``$HOME/.hgrc`` (on Unix, Plan9)

The names of these files depend on the system on which Mercurial is
installed. ``*.rc`` files from a single directory are read in
alphabetical order, later ones overriding earlier ones. Where multiple
paths are given below, settings from earlier paths override later
ones.
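
For example, assuming both files below exist, the per-user file
appears earlier in the lists that follow, so its value wins (the
contents here are illustrative)::

    # /etc/mercurial/hgrc (per-system)
    [ui]
    username = Build Robot <robot@example.net>

    # $HOME/.hgrc (per-user; earlier path, so this value is used)
    [ui]
    username = Firstname Lastname <firstname.lastname@example.net>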

.. container:: verbose.unix

  On Unix, the following files are consulted:

  - ``<repo>/.hg/hgrc-not-shared`` (per-repository)
  - ``<repo>/.hg/hgrc`` (per-repository)
  - ``$HOME/.hgrc`` (per-user)
  - ``${XDG_CONFIG_HOME:-$HOME/.config}/hg/hgrc`` (per-user)
  - ``<install-root>/etc/mercurial/hgrc`` (per-installation)
  - ``<install-root>/etc/mercurial/hgrc.d/*.rc`` (per-installation)
  - ``/etc/mercurial/hgrc`` (per-system)
  - ``/etc/mercurial/hgrc.d/*.rc`` (per-system)
  - ``<internal>/*.rc`` (defaults)

.. container:: verbose.windows

  On Windows, the following files are consulted:

  - ``<repo>/.hg/hgrc-not-shared`` (per-repository)
  - ``<repo>/.hg/hgrc`` (per-repository)
  - ``%USERPROFILE%\.hgrc`` (per-user)
  - ``%USERPROFILE%\Mercurial.ini`` (per-user)
  - ``%HOME%\.hgrc`` (per-user)
  - ``%HOME%\Mercurial.ini`` (per-user)
  - ``HKEY_LOCAL_MACHINE\SOFTWARE\Mercurial`` (per-system)
  - ``<install-dir>\hgrc.d\*.rc`` (per-installation)
  - ``<install-dir>\Mercurial.ini`` (per-installation)
  - ``%PROGRAMDATA%\Mercurial\hgrc`` (per-system)
  - ``%PROGRAMDATA%\Mercurial\Mercurial.ini`` (per-system)
  - ``%PROGRAMDATA%\Mercurial\hgrc.d\*.rc`` (per-system)
  - ``<internal>/*.rc`` (defaults)

  .. note::

    The registry key ``HKEY_LOCAL_MACHINE\SOFTWARE\Wow6432Node\Mercurial``
    is used when running 32-bit Python on 64-bit Windows.

.. container:: verbose.plan9

  On Plan9, the following files are consulted:

  - ``<repo>/.hg/hgrc-not-shared`` (per-repository)
  - ``<repo>/.hg/hgrc`` (per-repository)
  - ``$home/lib/hgrc`` (per-user)
  - ``<install-root>/lib/mercurial/hgrc`` (per-installation)
  - ``<install-root>/lib/mercurial/hgrc.d/*.rc`` (per-installation)
  - ``/lib/mercurial/hgrc`` (per-system)
  - ``/lib/mercurial/hgrc.d/*.rc`` (per-system)
  - ``<internal>/*.rc`` (defaults)

Per-repository configuration options only apply in a
particular repository. This file is not version-controlled, and
will not get transferred during a "clone" operation. Options in
this file override options in all other configuration files.

.. container:: unix.plan9

  On Plan 9 and Unix, most of this file will be ignored if it doesn't
  belong to a trusted user or to a trusted group. See
  :hg:`help config.trusted` for more details.

Per-user configuration file(s) are for the user running Mercurial. Options
in these files apply to all Mercurial commands executed by this user in any
directory. Options in these files override per-system and per-installation
options.

Per-installation configuration files are searched for in the
directory where Mercurial is installed. ``<install-root>`` is the
parent directory of the **hg** executable (or symlink) being run.

.. container:: unix.plan9

  For example, if installed in ``/shared/tools/bin/hg``, Mercurial
  will look in ``/shared/tools/etc/mercurial/hgrc``. Options in these
  files apply to all Mercurial commands executed by any user in any
  directory.

Per-installation configuration files are for the system on
which Mercurial is running. Options in these files apply to all
Mercurial commands executed by any user in any directory. Registry
keys contain PATH-like strings, every part of which must reference
a ``Mercurial.ini`` file or be a directory where ``*.rc`` files will
be read. Mercurial checks each of these locations in the specified
order until one or more configuration files are detected.

Per-system configuration files are for the system on which Mercurial
is running. Options in these files apply to all Mercurial commands
executed by any user in any directory. Options in these files
override per-installation options.

Mercurial comes with some default configuration. The default configuration
files are installed with Mercurial and will be overwritten on upgrades. Default
configuration files should never be edited by users or administrators but can
be overridden in other configuration files. So far the directory only contains
merge tool configuration but packagers can also put other default configuration
there.

On versions 5.7 and later, if share-safe functionality is enabled,
shares will also read the configuration file of the share source:
`<share-source/.hg/hgrc>` is read before `<repo/.hg/hgrc>`.

For configs which should not be shared, `<repo/.hg/hgrc-not-shared>`
should be used.

Syntax
======

A configuration file consists of sections, led by a ``[section]`` header
and followed by ``name = value`` entries (sometimes called
``configuration keys``)::

    [spam]
    eggs=ham
    green=
       eggs

Each line contains one entry. If the lines that follow are indented,
they are treated as continuations of that entry. Leading whitespace is
removed from values. Empty lines are skipped. Lines beginning with
``#`` or ``;`` are ignored and may be used to provide comments.

Configuration keys can be set multiple times, in which case Mercurial
will use the value that was configured last. As an example::

    [spam]
    eggs=large
    ham=serrano
    eggs=small

This would set the configuration key named ``eggs`` to ``small``.

It is also possible to define a section multiple times. A section can
be redefined in the same and/or in different configuration files. For
example::

    [foo]
    eggs=large
    ham=serrano
    eggs=small

    [bar]
    eggs=ham
    green=
       eggs

    [foo]
    ham=prosciutto
    eggs=medium
    bread=toasted

This would set the ``eggs``, ``ham``, and ``bread`` configuration keys
of the ``foo`` section to ``medium``, ``prosciutto``, and ``toasted``,
respectively. As you can see, the only thing that matters is the last
value that was set for each of the configuration keys.

If a configuration key is set multiple times in different
configuration files, the final value will depend on the order in which
the different configuration files are read, with settings from earlier
paths overriding later ones as described in the ``Files`` section
above.

A line of the form ``%include file`` will include ``file`` into the
current configuration file. The inclusion is recursive, which means
that included files can include other files. Filenames are relative to
the configuration file in which the ``%include`` directive is found.
Environment variables and ``~user`` constructs are expanded in
``file``. This lets you do something like::

    %include ~/.hgrc.d/$HOST.rc

to include a different configuration file on each computer you use.

A line with ``%unset name`` will remove ``name`` from the current
section, if it has been set previously.
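
For example, a per-repository file could drop a verbose setting that
was enabled in an earlier-read file such as the per-user one::

    [ui]
    %unset verbose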

The values are either free-form text strings, lists of text strings,
or Boolean values. Boolean values can be set to true using any of "1",
"yes", "true", or "on" and to false using "0", "no", "false", or "off"
(all case insensitive).
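
For instance, ``verbose = on``, ``verbose = True``, and ``verbose = 1``
are all equivalent ways of setting ``ui.verbose`` to true::

    [ui]
    verbose = on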

List values are separated by whitespace or comma, except when values are
placed in double quotation marks::

    allow_read = "John Doe, PhD", brian, betty

Quotation marks can be escaped by prefixing them with a backslash. Only
a quotation mark at the beginning of a word counts as the start of a
quoted value (e.g., ``foo"bar baz`` is the list of ``foo"bar`` and
``baz``).
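
As a sketch of the escaping rule, a backslash keeps a literal
quotation mark inside a quoted item (``web.allow_read`` is one setting
that takes a list)::

    [web]
    allow_read = "John \"Johnny\" Doe", brian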

Sections
========

This section describes the different sections that may appear in a
Mercurial configuration file, the purpose of each section, its possible
keys, and their possible values.

``alias``
---------

Defines command aliases.

Aliases allow you to define your own commands in terms of other
commands (or aliases), optionally including arguments. Positional
arguments in the form of ``$1``, ``$2``, etc. in the alias definition
are expanded by Mercurial before execution. Positional arguments not
already used by ``$N`` in the definition are put at the end of the
command to be executed.

Alias definitions consist of lines of the form::

    <alias> = <command> [<argument>]...

For example, this definition::

    latest = log --limit 5

creates a new command ``latest`` that shows only the five most recent
changesets. You can define subsequent aliases using earlier ones::

    stable5 = latest -b stable

.. note::

   It is possible to create aliases with the same names as
   existing commands, which will then override the original
   definitions. This is almost always a bad idea!

An alias can start with an exclamation point (``!``) to make it a
shell alias. A shell alias is executed with the shell and will let you
run arbitrary commands. As an example, ::

    echo = !echo $@

will let you do ``hg echo foo`` to have ``foo`` printed in your
terminal. A better example might be::

    purge = !$HG status --no-status --unknown -0 re: | xargs -0 rm -f

which will make ``hg purge`` delete all unknown files in the
repository in the same manner as the purge extension.

Positional arguments like ``$1``, ``$2``, etc. in the alias definition
expand to the command arguments. Unmatched arguments are
removed. ``$0`` expands to the alias name and ``$@`` expands to all
arguments separated by a space. ``"$@"`` (with quotes) expands to all
arguments quoted individually and separated by a space. These expansions
happen before the command is passed to the shell.
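
As a sketch, a shell alias taking two revisions as positional
arguments might look like this (``rdiff`` is a hypothetical name)::

    [alias]
    rdiff = !$HG diff -r $1 -r $2

Running ``hg rdiff 1.0 tip`` would then expand to
``hg diff -r 1.0 -r tip``.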

Shell aliases are executed in an environment where ``$HG`` expands to
the path of the Mercurial that was used to execute the alias. This is
useful when you want to call further Mercurial commands in a shell
alias, as was done above for the purge alias. In addition,
``$HG_ARGS`` expands to the arguments given to Mercurial. In the ``hg
echo foo`` call above, ``$HG_ARGS`` would expand to ``echo foo``.

.. note::

   Some global configuration options such as ``-R`` are
   processed before shell aliases and will thus not be passed to
   aliases.


``annotate``
------------

Settings used when displaying file annotations. All values are
Booleans and default to False. See :hg:`help config.diff` for
related options for the diff command.

``ignorews``
    Ignore white space when comparing lines.

``ignorewseol``
    Ignore white space at the end of a line when comparing lines.

``ignorewsamount``
    Ignore changes in the amount of white space.

``ignoreblanklines``
    Ignore changes whose lines are all blank.


``auth``
--------

Authentication credentials and other authentication-like configuration
for HTTP connections. This section allows you to store usernames and
passwords for use when logging *into* HTTP servers. See
:hg:`help config.web` if you want to configure *who* can login to
your HTTP server.

The following options apply to all hosts.

``cookiefile``
    Path to a file containing HTTP cookie lines. Cookies matching a
    host will be sent automatically.

    The file format uses the Mozilla cookies.txt format, which defines cookies
    on their own lines. Each line contains 7 fields delimited by the tab
    character (domain, is_domain_cookie, path, is_secure, expires, name,
    value). For more info, do an Internet search for "Netscape cookies.txt
    format."

    Note: the cookies parser does not handle port numbers on domains. You
    will need to remove ports from the domain for the cookie to be recognized.
    This could result in a cookie being disclosed to an unwanted server.

    The cookies file is read-only.

Other options in this section are grouped by name and have the following
format::

    <name>.<argument> = <value>

where ``<name>`` is used to group arguments into authentication
entries. Example::

    foo.prefix = hg.intevation.de/mercurial
    foo.username = foo
    foo.password = bar
    foo.schemes = http https

    bar.prefix = secure.example.org
    bar.key = path/to/file.key
    bar.cert = path/to/file.cert
    bar.schemes = https

Supported arguments:

``prefix``
    Either ``*`` or a URI prefix with or without the scheme part.
    The authentication entry with the longest matching prefix is used
    (where ``*`` matches everything and counts as a match of length
    1). If the prefix doesn't include a scheme, the match is performed
    against the URI with its scheme stripped as well, and the schemes
    argument, q.v., is then consulted.

``username``
    Optional. Username to authenticate with. If not given, and the
    remote site requires basic or digest authentication, the user will
    be prompted for it. Environment variables are expanded in the
    username letting you do ``foo.username = $USER``. If the URI
    includes a username, only ``[auth]`` entries with a matching
    username or without a username will be considered.

``password``
    Optional. Password to authenticate with. If not given, and the
    remote site requires basic or digest authentication, the user
    will be prompted for it.

``key``
    Optional. PEM encoded client certificate key file. Environment
    variables are expanded in the filename.

``cert``
    Optional. PEM encoded client certificate chain file. Environment
    variables are expanded in the filename.

``schemes``
    Optional. Space separated list of URI schemes to use this
    authentication entry with. Only used if the prefix doesn't include
    a scheme. Supported schemes are http and https. They will match
    static-http and static-https respectively, as well.
    (default: https)

If no suitable authentication entry is found, the user is prompted
for credentials as usual if required by the remote.
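
To illustrate prefix matching with the example entries above: a pull
from ``https://hg.intevation.de/mercurial/hello`` would select the
``foo`` entry (the longest matching prefix, with ``https`` listed in
``foo.schemes``), while ``https://secure.example.org/repo`` would
select ``bar`` and present its client certificate.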

``cmdserver``
-------------

Controls command server settings. (ADVANCED)

``message-encodings``
    List of encodings for the ``m`` (message) channel. The first encoding
    supported by the server will be selected and advertised in the hello
    message. This is useful only when ``ui.message-output`` is set to
    ``channel``. Supported encodings are ``cbor``.

``shutdown-on-interrupt``
    If set to false, the server's main loop will continue running after
    a SIGINT is received. ``runcommand`` requests can still be interrupted
    by SIGINT. Close the write end of the pipe to shut down the server
    process gracefully.
    (default: True)

``color``
---------

Configure the Mercurial color mode. For details about how to define your custom
effect and style see :hg:`help color`.

``mode``
    String: control the method used to output color. One of ``auto``, ``ansi``,
    ``win32``, ``terminfo`` or ``debug``. In auto mode, Mercurial will
    use ANSI mode by default (or win32 mode prior to Windows 10) if it detects a
    terminal. Any invalid value will disable color.

``pagermode``
    String: optional override of ``color.mode`` used with pager.

    On some systems, terminfo mode may cause problems when using
    color with ``less -R`` as a pager program. less with the -R option
    will only display ECMA-48 color codes, and terminfo mode may sometimes
    emit codes that less doesn't understand. You can work around this by
    either using ansi mode (or auto mode), or by using less -r (which will
    pass through all terminal control codes, not just color control
    codes).
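
    A minimal sketch of that workaround, keeping terminfo mode for
    direct terminal output while falling back to ANSI codes under the
    pager::

      [color]
      mode = terminfo
      pagermode = ansi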

    On some systems (such as MSYS in Windows), the terminal may support
    a different color mode than the pager program.

``commands``
------------

``commit.post-status``
    Show status of files in the working directory after successful commit.
    (default: False)

``merge.require-rev``
    Require that the revision to merge the current commit with be specified on
    the command line. If this is enabled and a revision is not specified, the
    command aborts.
    (default: False)

``push.require-revs``
    Require revisions to push be specified using one or more mechanisms such as
    specifying them positionally on the command line, using ``-r``, ``-b``,
    and/or ``-B`` on the command line, or using ``paths.<path>:pushrev`` in the
    configuration. If this is enabled and revisions are not specified, the
    command aborts.
    (default: False)

``resolve.confirm``
    Confirm before performing action if no filename is passed.
    (default: False)

``resolve.explicit-re-merge``
    Require uses of ``hg resolve`` to specify which action it should perform,
    instead of re-merging files by default.
    (default: False)

``resolve.mark-check``
    Determines what level of checking :hg:`resolve --mark` will perform before
    marking files as resolved. Valid values are ``none``, ``warn``, and
    ``abort``. ``warn`` will output a warning listing the file(s) that still
    have conflict markers in them, but will still mark everything resolved.
    ``abort`` will output the same warning but will not mark things as resolved.
    If --all is passed and this is set to ``abort``, only a warning will be
    shown (an error will not be raised).
    (default: ``none``)

``status.relative``
    Make paths in :hg:`status` output relative to the current directory.
    (default: False)

``status.terse``
    Default value for the --terse flag, which condenses status output.
    (default: empty)

``update.check``
    Determines what level of checking :hg:`update` will perform before moving
    to a destination revision. Valid values are ``abort``, ``none``,
    ``linear``, and ``noconflict``.

    - ``abort`` always fails if the working directory has uncommitted changes.

    - ``none`` performs no checking, and may result in a merge with uncommitted
      changes.

    - ``linear`` allows any update as long as it follows a straight line in the
      revision history, and may trigger a merge with uncommitted changes.

    - ``noconflict`` will allow any update which would not trigger a merge with
      uncommitted changes, if any are present.

    (default: ``linear``)

``update.requiredest``
    Require that the user pass a destination when running :hg:`update`.
    For example, :hg:`update .::` will be allowed, but a plain :hg:`update`
    will be disallowed.
    (default: False)
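
As a sketch, a ``[commands]`` block combining a few of the options
above might look like::

    [commands]
    status.relative = True
    update.check = noconflict
    commit.post-status = True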

``committemplate``
------------------

``changeset``
    String: configuration in this section is used as the template to
    customize the text shown in the editor when committing.

In addition to the pre-defined template keywords, the commit-log-specific
keyword below can be used for customization:

``extramsg``
    String: Extra message (typically 'Leave message empty to abort
    commit.'). This may be changed by some commands or extensions.

For example, the template configuration below shows the same text as
is shown by default::

    [committemplate]
    changeset = {desc}\n\n
        HG: Enter commit message. Lines beginning with 'HG:' are removed.
        HG: {extramsg}
        HG: --
        HG: user: {author}\n{ifeq(p2rev, "-1", "",
       "HG: branch merge\n")
       }HG: branch '{branch}'\n{if(activebookmark,
       "HG: bookmark '{activebookmark}'\n") }{subrepos %
       "HG: subrepo {subrepo}\n" }{file_adds %
       "HG: added {file}\n" }{file_mods %
       "HG: changed {file}\n" }{file_dels %
       "HG: removed {file}\n" }{if(files, "",
       "HG: no files changed\n")}

``diff()``
    String: show the diff (see :hg:`help templates` for detail)

Sometimes it is helpful to show the diff of the changeset in the editor without
having to prefix 'HG: ' to each line so that highlighting works correctly. For
this, Mercurial provides a special string which will ignore everything below
it::

    HG: ------------------------ >8 ------------------------

For example, the template configuration below will show the diff below the
extra message::

    [committemplate]
    changeset = {desc}\n\n
        HG: Enter commit message. Lines beginning with 'HG:' are removed.
        HG: {extramsg}
        HG: ------------------------ >8 ------------------------
        HG: Do not touch the line above.
        HG: Everything below will be removed.
        {diff()}

.. note::

   For some problematic encodings (see :hg:`help win32mbcs` for
   detail), this customization should be configured carefully, to
   avoid showing broken characters.

   For example, if a multibyte character ending with backslash (0x5c) is
   followed by the ASCII character 'n' in the customized template,
   the sequence of backslash and 'n' is treated as line-feed unexpectedly
   (and the multibyte character is broken, too).

Customized template is used for the commands below (``--edit`` may be
required):

- :hg:`backout`
- :hg:`commit`
- :hg:`fetch` (for merge commit only)
- :hg:`graft`
- :hg:`histedit`
- :hg:`import`
- :hg:`qfold`, :hg:`qnew` and :hg:`qrefresh`
- :hg:`rebase`
- :hg:`shelve`
- :hg:`sign`
- :hg:`tag`
- :hg:`transplant`

Configuring the items below instead of ``changeset`` allows showing a
customized message only for specific actions, or showing different
messages for each action.

- ``changeset.backout`` for :hg:`backout`
- ``changeset.commit.amend.merge`` for :hg:`commit --amend` on merges
- ``changeset.commit.amend.normal`` for :hg:`commit --amend` on other
- ``changeset.commit.normal.merge`` for :hg:`commit` on merges
- ``changeset.commit.normal.normal`` for :hg:`commit` on other
- ``changeset.fetch`` for :hg:`fetch` (implying merge commit)
- ``changeset.gpg.sign`` for :hg:`sign`
- ``changeset.graft`` for :hg:`graft`
- ``changeset.histedit.edit`` for ``edit`` of :hg:`histedit`
- ``changeset.histedit.fold`` for ``fold`` of :hg:`histedit`
- ``changeset.histedit.mess`` for ``mess`` of :hg:`histedit`
- ``changeset.histedit.pick`` for ``pick`` of :hg:`histedit`
- ``changeset.import.bypass`` for :hg:`import --bypass`
- ``changeset.import.normal.merge`` for :hg:`import` on merges
- ``changeset.import.normal.normal`` for :hg:`import` on other
- ``changeset.mq.qnew`` for :hg:`qnew`
- ``changeset.mq.qfold`` for :hg:`qfold`
- ``changeset.mq.qrefresh`` for :hg:`qrefresh`
- ``changeset.rebase.collapse`` for :hg:`rebase --collapse`
- ``changeset.rebase.merge`` for :hg:`rebase` on merges
- ``changeset.rebase.normal`` for :hg:`rebase` on other
- ``changeset.shelve.shelve`` for :hg:`shelve`
- ``changeset.tag.add`` for :hg:`tag` without ``--remove``
- ``changeset.tag.remove`` for :hg:`tag --remove`
- ``changeset.transplant.merge`` for :hg:`transplant` on merges
- ``changeset.transplant.normal`` for :hg:`transplant` on other

These dot-separated lists of names are treated as hierarchical ones.
For example, ``changeset.tag.remove`` customizes the commit message
only for :hg:`tag --remove`, but ``changeset.tag`` customizes the
commit message for :hg:`tag` regardless of the ``--remove`` option.
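
For example, a single template covering both :hg:`tag` and
:hg:`tag --remove` could be sketched as::

    [committemplate]
    changeset.tag = {desc}\n\n
        HG: Enter a tag commit message.
        HG: {extramsg}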
652 |
|
652 | |||
653 | When the external editor is invoked for a commit, the corresponding |
|
653 | When the external editor is invoked for a commit, the corresponding | |
654 | dot-separated list of names without the ``changeset.`` prefix |
|
654 | dot-separated list of names without the ``changeset.`` prefix | |
655 | (e.g. ``commit.normal.normal``) is in the ``HGEDITFORM`` environment |
|
655 | (e.g. ``commit.normal.normal``) is in the ``HGEDITFORM`` environment | |
656 | variable. |
|
656 | variable. | |
657 |
|
657 | |||
658 | In this section, items other than ``changeset`` can be referred from |
|
658 | In this section, items other than ``changeset`` can be referred from | |
659 | others. For example, the configuration to list committed files up |
|
659 | others. For example, the configuration to list committed files up | |
660 | below can be referred as ``{listupfiles}``:: |
|
660 | below can be referred as ``{listupfiles}``:: | |

  [committemplate]
  listupfiles = {file_adds %
    "HG: added {file}\n" }{file_mods %
    "HG: changed {file}\n" }{file_dels %
    "HG: removed {file}\n" }{if(files, "",
    "HG: no files changed\n")}

``decode/encode``
-----------------

Filters for transforming files on checkout/checkin. This would
typically be used for newline processing or other
localization/canonicalization of files.

Filters consist of a filter pattern followed by a filter command.
Filter patterns are globs by default, rooted at the repository root.
For example, to match any file ending in ``.txt`` in the root
directory only, use the pattern ``*.txt``. To match any file ending
in ``.c`` anywhere in the repository, use the pattern ``**.c``.
For each file only the first matching filter applies.

The filter command can start with a specifier, either ``pipe:`` or
``tempfile:``. If no specifier is given, ``pipe:`` is used by default.

A ``pipe:`` command must accept data on stdin and return the transformed
data on stdout.

Pipe example::

  [encode]
  # uncompress gzip files on checkin to improve delta compression
  # note: not necessarily a good idea, just an example
  *.gz = pipe: gunzip

  [decode]
  # recompress gzip files when writing them to the working dir (we
  # can safely omit "pipe:", because it's the default)
  *.gz = gzip

A ``tempfile:`` command is a template. The string ``INFILE`` is replaced
with the name of a temporary file that contains the data to be
filtered by the command. The string ``OUTFILE`` is replaced with the name
of an empty temporary file, where the filtered data must be written by
the command.
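
Tempfile example (a sketch: it assumes the ``dos2unix``/``unix2dos``
tools are installed, and the ``**.txt`` pattern is illustrative)::

  [encode]
  # normalize text files to LF line endings on checkin
  **.txt = tempfile: dos2unix -n INFILE OUTFILE

  [decode]
  # convert back to CRLF when writing to the working directory
  **.txt = tempfile: unix2dos -n INFILE OUTFILE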

.. container:: windows

  .. note::

    The tempfile mechanism is recommended for Windows systems,
    where the standard shell I/O redirection operators often have
    strange effects and may corrupt the contents of your files.

This filter mechanism is used internally by the ``eol`` extension to
translate line ending characters between Windows (CRLF) and Unix (LF)
format. We suggest you use the ``eol`` extension for convenience.


``defaults``
------------

(defaults are deprecated. Don't use them. Use aliases instead.)

Use the ``[defaults]`` section to define command defaults, i.e. the
default options/arguments to pass to the specified commands.

The following example makes :hg:`log` run in verbose mode, and
:hg:`status` show only the modified files, by default::

  [defaults]
  log = -v
  status = -m

The actual commands, instead of their aliases, must be used when
defining command defaults. The command defaults will also be applied
to the aliases of the commands defined.


``diff``
--------

Settings used when displaying diffs. Everything except for ``unified``
is a Boolean and defaults to False. See :hg:`help config.annotate`
for related options for the annotate command.

``git``
    Use git extended diff format.

``nobinary``
    Omit git binary patches.

``nodates``
    Don't include dates in diff headers.

``noprefix``
    Omit 'a/' and 'b/' prefixes from filenames. Ignored in plain mode.

``showfunc``
    Show which function each change is in.

``ignorews``
    Ignore white space when comparing lines.

``ignorewsamount``
    Ignore changes in the amount of white space.

``ignoreblanklines``
    Ignore changes whose lines are all blank.

``unified``
    Number of lines of context to show.

``word-diff``
    Highlight changed words.
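
For instance, the following combination (one possible choice, not a
recommendation) enables git-style diffs with function context and more
context lines::

  [diff]
  git = True
  showfunc = True
  unified = 8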

``email``
---------

Settings for extensions that send email messages.

``from``
    Optional. Email address to use in "From" header and SMTP envelope
    of outgoing messages.

``to``
    Optional. Comma-separated list of recipients' email addresses.

``cc``
    Optional. Comma-separated list of carbon copy recipients'
    email addresses.

``bcc``
    Optional. Comma-separated list of blind carbon copy recipients'
    email addresses.

``method``
    Optional. Method to use to send email messages. If value is ``smtp``
    (default), use SMTP (see the ``[smtp]`` section for configuration).
    Otherwise, use as the name of a program to run that acts like sendmail
    (takes ``-f`` option for sender, list of recipients on command line,
    message on stdin). Normally, setting this to ``sendmail`` or
    ``/usr/sbin/sendmail`` is enough to use sendmail to send messages.

``charsets``
    Optional. Comma-separated list of character sets considered
    convenient for recipients. Addresses, headers, and parts not
    containing patches of outgoing messages will be encoded in the
    first character set to which conversion from local encoding
    (``$HGENCODING``, ``ui.fallbackencoding``) succeeds. If correct
    conversion fails, the text in question is sent as is.
    (default: '')

    Order of outgoing email character sets:

    1. ``us-ascii``: always first, regardless of settings
    2. ``email.charsets``: in order given by user
    3. ``ui.fallbackencoding``: if not in email.charsets
    4. ``$HGENCODING``: if not in email.charsets
    5. ``utf-8``: always last, regardless of settings

Email example::

  [email]
  from = Joseph User <joe.user@example.com>
  method = /usr/sbin/sendmail
  # charsets for western Europeans
  # us-ascii, utf-8 omitted, as they are tried first and last
  charsets = iso-8859-1, iso-8859-15, windows-1252


``extensions``
--------------

Mercurial has an extension mechanism for adding new features. To
enable an extension, create an entry for it in this section.

If you know that the extension is already in Python's search path,
you can give the name of the module, followed by ``=``, with nothing
after the ``=``.

Otherwise, give a name that you choose, followed by ``=``, followed by
the path to the ``.py`` file (including the file name extension) that
defines the extension.

To explicitly disable an extension that is enabled in an hgrc of
broader scope, prepend its path with ``!``, as in ``foo = !/ext/path``
or ``foo = !`` when no path is supplied.

Example for ``~/.hgrc``::

  [extensions]
  # (the churn extension will get loaded from Mercurial's path)
  churn =
  # (this extension will get loaded from the file specified)
  myfeature = ~/.hgext/myfeature.py

If an extension fails to load, a warning will be issued, and Mercurial
will proceed. To enforce that an extension must be loaded, one can set
the `required` suboption in the config::

  [extensions]
  myfeature = ~/.hgext/myfeature.py
  myfeature:required = yes

To debug an extension loading issue, one can add `--traceback` to the
Mercurial invocation.

A default setting can be set using the special `*` extension key::

  [extensions]
  *:required = yes
  myfeature = ~/.hgext/myfeature.py
  rebase =


``format``
----------

Configuration that controls the repository format. Newer format options
are more powerful, but incompatible with some older versions of Mercurial.
Format options are considered at repository initialization only. You need
to make a new clone for config changes to be taken into account.

For more details about repository format and version compatibility, see
https://www.mercurial-scm.org/wiki/MissingRequirement
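
For example, because format options take effect when the repository is
created, they are typically passed on the :hg:`init` or :hg:`clone`
command line (the URL and directory name here are placeholders)::

  $ hg clone --config format.revlog-compression=zlib https://example.com/repo repo-zlib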

``usegeneraldelta``
    Enable or disable the "generaldelta" repository format which improves
    repository compression by allowing "revlog" to store deltas against
    arbitrary revisions instead of the previously stored one. This provides
    significant improvement for repositories with branches.

    Repositories with this on-disk format require Mercurial version 1.9.

    Enabled by default.

``dotencode``
    Enable or disable the "dotencode" repository format which enhances
    the "fncache" repository format (which has to be enabled to use
    dotencode) to avoid issues with filenames starting with "._" on
    Mac OS X and spaces on Windows.

    Repositories with this on-disk format require Mercurial version 1.7.

    Enabled by default.

``usefncache``
    Enable or disable the "fncache" repository format which enhances
    the "store" repository format (which has to be enabled to use
    fncache) to allow longer filenames and avoids using Windows
    reserved names, e.g. "nul".

    Repositories with this on-disk format require Mercurial version 1.1.

    Enabled by default.

``use-dirstate-v2``
    Enable or disable the experimental "dirstate-v2" feature. The dirstate
    functionality is shared by all commands interacting with the working copy.
    The new version is more robust, faster, and stores more information.

    The performance-improving version of this feature is currently only
    implemented in Rust (see :hg:`help rust`), so people not using a version of
    Mercurial compiled with the Rust parts might actually suffer some slowdown.
    For this reason, such versions will by default refuse to access repositories
    with "dirstate-v2" enabled.

    This behavior can be adjusted via configuration: check
    :hg:`help config.storage.dirstate-v2.slow-path` for details.

    Repositories with this on-disk format require Mercurial 6.0 or above.

    By default, this format variant is enabled if the fast implementation is
    available, and disabled otherwise.

    To accommodate installations of Mercurial without the fast implementation,
    you can downgrade your repository. To do so, run the following command::

      $ hg debugupgraderepo \
          --run \
          --config format.use-dirstate-v2=False \
          --config storage.dirstate-v2.slow-path=allow

    For a more comprehensive guide, see :hg:`help internals.dirstate-v2`.

``use-dirstate-v2.automatic-upgrade-of-mismatching-repositories``
    When enabled, an automatic upgrade will be triggered when a repository
    format does not match its `use-dirstate-v2` config.

    This is an advanced behavior that most users will not need. We recommend
    you don't use this unless you are a seasoned administrator of a Mercurial
    install base.

    Automatic upgrade means that any process accessing the repository will
    upgrade the repository format to use `dirstate-v2`. This only triggers if
    a change is needed. This also applies to operations that would have been
    read-only (like hg status).

    If the repository cannot be locked, the automatic-upgrade operation will
    be skipped. The next operation will attempt it again.

    This configuration will apply for moves in any direction, either adding
    the `dirstate-v2` format if `format.use-dirstate-v2=yes` or removing the
    `dirstate-v2` requirement if `format.use-dirstate-v2=no`. So we recommend
    setting both this value and `format.use-dirstate-v2` at the same time.
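
    For example, a minimal configuration following that recommendation
    might look like this::

      [format]
      use-dirstate-v2 = yes
      use-dirstate-v2.automatic-upgrade-of-mismatching-repositories = yes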

``use-dirstate-v2.automatic-upgrade-of-mismatching-repositories:quiet``
    Hide the message printed when performing such an automatic upgrade.

``use-dirstate-tracked-hint``
    Enable or disable the writing of a "tracked key" file alongside the
    dirstate. (defaults to disabled)

    That "tracked-hint" can help external automation detect changes to the
    set of tracked files (i.e. the result of `hg files` or `hg status -macd`).

    The tracked-hint is written in a new file, `.hg/dirstate-tracked-hint`.
    That file contains two lines:

    - the first line is the file version (currently: 1),
    - the second line contains the "tracked-hint".

    That file is written right after the dirstate is written.

    The tracked-hint changes whenever the set of files tracked in the dirstate
    changes. The general idea is:

    - if the hint is identical, the set of tracked files SHOULD be identical,
    - if the hint is different, the set of tracked files MIGHT be different.

    The "hint is identical" case uses `SHOULD` because the dirstate and the
    hint file are two distinct files and therefore cannot be read or written
    atomically. If the hint is identical, nothing guarantees that the dirstate
    was not updated right after the hint file. This is considered a negligible
    limitation for the intended use case. It is actually possible to prevent
    this race by taking the repository lock during read operations.

    There are two ways to use this feature:

    1) monitoring changes to the `.hg/dirstate-tracked-hint` file: if it
       changes, the tracked set might have changed;

    2) storing the value and comparing it to a later value, as in the
       sketch below.
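
    For example, a polling script might cache the hint value and refresh
    its view of the tracked set only when the hint changes. A minimal
    sketch (the cache and output file names are illustrative)::

      #!/bin/sh
      # run from the repository root; line 2 of the hint file is the hint
      new=$(sed -n 2p .hg/dirstate-tracked-hint)
      old=$(cat .tracked-hint-cache 2>/dev/null)
      if [ "$new" != "$old" ]; then
          hg files > tracked-files.txt    # tracked set might have changed
          printf '%s\n' "$new" > .tracked-hint-cache
      fi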

``use-dirstate-tracked-hint.automatic-upgrade-of-mismatching-repositories``
    When enabled, an automatic upgrade will be triggered when a repository
    format does not match its `use-dirstate-tracked-hint` config.

    This is an advanced behavior that most users will not need. We recommend
    you don't use this unless you are a seasoned administrator of a Mercurial
    install base.

    Automatic upgrade means that any process accessing the repository will
    upgrade the repository format to use `dirstate-tracked-hint`. This only
    triggers if a change is needed. This also applies to operations that would
    have been read-only (like hg status).

    If the repository cannot be locked, the automatic-upgrade operation will
    be skipped. The next operation will attempt it again.

    This configuration will apply for moves in any direction, either adding
    the `dirstate-tracked-hint` format if `format.use-dirstate-tracked-hint=yes`
    or removing the `dirstate-tracked-hint` requirement if
    `format.use-dirstate-tracked-hint=no`. So we recommend setting both this
    value and `format.use-dirstate-tracked-hint` at the same time.

``use-dirstate-tracked-hint.automatic-upgrade-of-mismatching-repositories:quiet``
    Hide the message printed when performing such an automatic upgrade.

``use-persistent-nodemap``
    Enable or disable the "persistent-nodemap" feature which improves
    performance if the Rust extensions are available.

    The "persistent-nodemap" persists the "node -> rev" mapping on disk,
    removing the need to dynamically build that mapping for each Mercurial
    invocation. This significantly reduces the startup cost of various local
    and server-side operations for larger repositories.

    The performance-improving version of this feature is currently only
    implemented in Rust (see :hg:`help rust`), so people not using a version of
    Mercurial compiled with the Rust parts might actually suffer some slowdown.
    For this reason, such versions will by default refuse to access repositories
    with "persistent-nodemap".

    This behavior can be adjusted via configuration: check
    :hg:`help config.storage.revlog.persistent-nodemap.slow-path` for details.

    Repositories with this on-disk format require Mercurial 5.4 or above.

    By default, this format variant is enabled if the fast implementation is
    available, and disabled otherwise.

    To accommodate installations of Mercurial without the fast implementation,
    you can downgrade your repository. To do so, run the following command::

      $ hg debugupgraderepo \
          --run \
          --config format.use-persistent-nodemap=False \
          --config storage.revlog.persistent-nodemap.slow-path=allow

``use-share-safe``
    Enforce "safe" behaviors for all "shares" that access this repository.

    With this feature, "shares" using this repository as a source will:

    * read the source repository's configuration (`<source>/.hg/hgrc`).
    * read and use the source repository's "requirements"
      (except the working copy specific one).

    Without this feature, "shares" using this repository as a source will:

    * keep tracking the repository "requirements" in the share only, ignoring
      the source "requirements", possibly diverging from them.
    * ignore source repository config. This can create problems, like silently
      ignoring important hooks.

    Beware that existing shares will not be upgraded/downgraded, and by
    default, Mercurial will refuse to interact with them until the mismatch
    is resolved. See :hg:`help config.share.safe-mismatch.source-safe` and
    :hg:`help config.share.safe-mismatch.source-not-safe` for details.

    Introduced in Mercurial 5.7.

    Enabled by default in Mercurial 6.1.

``use-share-safe.automatic-upgrade-of-mismatching-repositories``
    When enabled, an automatic upgrade will be triggered when a repository
    format does not match its `use-share-safe` config.

    This is an advanced behavior that most users will not need. We recommend
    you don't use this unless you are a seasoned administrator of a Mercurial
    install base.

    Automatic upgrade means that any process accessing the repository will
    upgrade the repository format to use `share-safe`. This only triggers if
    a change is needed. This also applies to operations that would have been
    read-only (like hg status).

    If the repository cannot be locked, the automatic-upgrade operation will
    be skipped. The next operation will attempt it again.

    This configuration will apply for moves in any direction, either adding
    the `share-safe` format if `format.use-share-safe=yes` or removing the
    `share-safe` requirement if `format.use-share-safe=no`. So we recommend
    setting both this value and `format.use-share-safe` at the same time.
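
    For example, a minimal configuration following that recommendation
    might look like this::

      [format]
      use-share-safe = yes
      use-share-safe.automatic-upgrade-of-mismatching-repositories = yes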

``use-share-safe.automatic-upgrade-of-mismatching-repositories:quiet``
    Hide the message printed when performing such an automatic upgrade.

``usestore``
    Enable or disable the "store" repository format which improves
    compatibility with systems that fold case or otherwise mangle
    filenames. Disabling this option will allow you to store longer filenames
    in some situations at the expense of compatibility.

    Repositories with this on-disk format require Mercurial version 0.9.4.

    Enabled by default.

``sparse-revlog``
    Enable or disable the ``sparse-revlog`` delta strategy. This format
    improves delta re-use inside revlog. For very branchy repositories, it
    results in a smaller store. For repositories with many revisions, it also
    helps performance (by using shortened delta chains).

    Repositories with this on-disk format require Mercurial version 4.7.

    Enabled by default.

``revlog-compression``
    Compression algorithm used by revlog. Supported values are `zlib` and
    `zstd`. The `zlib` engine is the historical default of Mercurial. `zstd`
    is a newer format that is usually a net win over `zlib`, operating faster
    at better compression rates. Use `zstd` to reduce CPU usage. Multiple
    values can be specified; the first available one will be used.

    On some systems, the Mercurial installation may lack `zstd` support.

    Default is `zstd` if available, `zlib` otherwise.
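
    For example, to prefer `zstd` but fall back to `zlib` on installations
    without `zstd` support::

      [format]
      revlog-compression = zstd, zlib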

``bookmarks-in-store``
    Store bookmarks in .hg/store/. This means that bookmarks are shared when
    using `hg share` regardless of the `-B` option.

    Repositories with this on-disk format require Mercurial version 5.1.

    Disabled by default.


``graph``
---------

Web graph view configuration. This section lets you change graph
element display properties per branch, for instance to make the
``default`` branch stand out.

Each line has the following format::

  <branch>.<argument> = <value>

where ``<branch>`` is the name of the branch being
customized. Example::

  [graph]
  # 2px width
  default.width = 2
  # red color
  default.color = FF0000

Supported arguments:

``width``
    Set branch edges width in pixels.

``color``
    Set branch edges color in hexadecimal RGB notation.

``hooks``
---------

Commands or Python functions that get automatically executed by
various actions such as starting or finishing a commit. Multiple
hooks can be run for the same action by appending a suffix to the
action. Overriding a site-wide hook can be done by changing its
value or setting it to an empty string. Hooks can be prioritized
by adding a prefix of ``priority.`` to the hook name on a new line
and setting the priority. The default priority is 0.

Example ``.hg/hgrc``::

  [hooks]
  # update working directory after adding changesets
  changegroup.update = hg update
  # do not use the site-wide hook
  incoming =
  incoming.email = /my/email/hook
  incoming.autobuild = /my/build/hook
  # force autobuild hook to run before other incoming hooks
  priority.incoming.autobuild = 1
  ### control HGPLAIN setting when running autobuild hook
  # HGPLAIN always set (default from Mercurial 5.7)
  incoming.autobuild:run-with-plain = yes
  # HGPLAIN never set
  incoming.autobuild:run-with-plain = no
  # HGPLAIN inherited from environment (default before Mercurial 5.7)
  incoming.autobuild:run-with-plain = auto

Most hooks are run with environment variables set that give useful
additional information. For each hook below, the environment variables
it is passed are listed with names in the form ``$HG_foo``. The
``$HG_HOOKTYPE`` and ``$HG_HOOKNAME`` variables are set for all hooks.
They contain the type of hook which triggered the run and the full name
of the hook in the config, respectively. In the example above, this will
be ``$HG_HOOKTYPE=incoming`` and ``$HG_HOOKNAME=incoming.email``.

.. container:: windows

  Some basic Unix syntax can be enabled for portability, including ``$VAR``
  and ``${VAR}`` style variables. A ``~`` followed by ``\`` or ``/`` will
  be expanded to ``%USERPROFILE%`` to simulate a subset of tilde expansion
  on Unix. To use a literal ``$`` or ``~``, it must be escaped with a
  backslash or inside of a strong quote. Strong quotes will be replaced by
  double quotes after processing.

  This feature is enabled by adding a prefix of ``tonative.`` to the hook
  name on a new line, and setting it to ``True``. For example::

    [hooks]
    incoming.autobuild = /my/build/hook
    # enable translation to cmd.exe syntax for autobuild hook
    tonative.incoming.autobuild = True

``changegroup``
    Run after a changegroup has been added via push, pull or unbundle. The ID
    of the first new changeset is in ``$HG_NODE`` and last is in
    ``$HG_NODE_LAST``. The URL from which changes came is in ``$HG_URL``.

``commit``
    Run after a changeset has been created in the local repository. The ID
    of the newly created changeset is in ``$HG_NODE``. Parent changeset
    IDs are in ``$HG_PARENT1`` and ``$HG_PARENT2``.

``incoming``
    Run after a changeset has been pulled, pushed, or unbundled into
    the local repository. The ID of the newly arrived changeset is in
    ``$HG_NODE``. The URL that was source of the changes is in ``$HG_URL``.

``outgoing``
    Run after sending changes from the local repository to another. The ID of
    first changeset sent is in ``$HG_NODE``. The source of operation is in
    ``$HG_SOURCE``. Also see :hg:`help config.hooks.preoutgoing`.

``post-<command>``
    Run after successful invocations of the associated command. The
    contents of the command line are passed as ``$HG_ARGS`` and the result
    code in ``$HG_RESULT``. Parsed command line arguments are passed as
    ``$HG_PATS`` and ``$HG_OPTS``. These contain string representations of
    the python data internally passed to <command>. ``$HG_OPTS`` is a
    dictionary of options (with unspecified options set to their defaults).
    ``$HG_PATS`` is a list of arguments. Hook failure is ignored.
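
    For example, a hypothetical notification hook (the script path is
    illustrative, in the style of the examples above) could run after
    every successful pull::

      [hooks]
      # runs only when `hg pull` exits successfully
      post-pull = /my/notify/hook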

``fail-<command>``
    Run after a failed invocation of an associated command. The contents
    of the command line are passed as ``$HG_ARGS``. Parsed command line
    arguments are passed as ``$HG_PATS`` and ``$HG_OPTS``. These contain
    string representations of the python data internally passed to
    <command>. ``$HG_OPTS`` is a dictionary of options (with unspecified
    options set to their defaults). ``$HG_PATS`` is a list of arguments.
    Hook failure is ignored.

``pre-<command>``
    Run before executing the associated command. The contents of the
    command line are passed as ``$HG_ARGS``. Parsed command line arguments
    are passed as ``$HG_PATS`` and ``$HG_OPTS``. These contain string
    representations of the data internally passed to <command>. ``$HG_OPTS``
    is a dictionary of options (with unspecified options set to their
    defaults). ``$HG_PATS`` is a list of arguments. If the hook returns
    failure, the command doesn't execute and Mercurial returns the failure
    code.
1281 |
|
1281 | |||
``prechangegroup``
  Run before a changegroup is added via push, pull or unbundle. Exit
  status 0 allows the changegroup to proceed. A non-zero status will
  cause the push, pull or unbundle to fail. The URL from which changes
  will come is in ``$HG_URL``.

``precommit``
  Run before starting a local commit. Exit status 0 allows the
  commit to proceed. A non-zero status will cause the commit to fail.
  Parent changeset IDs are in ``$HG_PARENT1`` and ``$HG_PARENT2``.

``prelistkeys``
  Run before listing pushkeys (like bookmarks) in the
  repository. A non-zero status will cause failure. The key namespace is
  in ``$HG_NAMESPACE``.

``preoutgoing``
  Run before collecting changes to send from the local repository to
  another. A non-zero status will cause failure. This lets you prevent
  pull over HTTP or SSH. It can also prevent propagating commits (via
  local pull, push (outbound) or bundle commands), but not completely,
  since you can just copy files instead. The source of the operation is in
  ``$HG_SOURCE``. If "serve", the operation is happening on behalf of a remote
  SSH or HTTP repository. If "push", "pull" or "bundle", the operation
  is happening on behalf of a repository on the same system.

``prepushkey``
  Run before a pushkey (like a bookmark) is added to the
  repository. A non-zero status will cause the key to be rejected. The
  key namespace is in ``$HG_NAMESPACE``, the key is in ``$HG_KEY``,
  the old value (if any) is in ``$HG_OLD``, and the new value is in
  ``$HG_NEW``.

``pretag``
  Run before creating a tag. Exit status 0 allows the tag to be
  created. A non-zero status will cause the tag to fail. The ID of the
  changeset to tag is in ``$HG_NODE``. The name of the tag is in ``$HG_TAG``.
  The tag is local if ``$HG_LOCAL=1``, or in the repository if
  ``$HG_LOCAL=0``.

``pretransmit-inline-clone-bundle``
  Run before transferring an inline clonebundle to the peer.
  If the exit status is 0, the inline clonebundle will be allowed to be
  transferred. A non-zero status will cause the transfer to fail.
  The path of the inline clonebundle is in ``$HG_CLONEBUNDLEPATH``.

``pretxnopen``
  Run before any new repository transaction is opened. The reason for the
  transaction will be in ``$HG_TXNNAME``, and a unique identifier for the
  transaction will be in ``$HG_TXNID``. A non-zero status will prevent the
  transaction from being opened.

``pretxnclose``
  Run right before the transaction is actually finalized. Any repository change
  will be visible to the hook program. This lets you validate the transaction
  content or change it. Exit status 0 allows the commit to proceed. A non-zero
  status will cause the transaction to be rolled back. The reason for the
  transaction opening will be in ``$HG_TXNNAME``, and a unique identifier for
  the transaction will be in ``$HG_TXNID``. The rest of the available data will
  vary according to the transaction type. Changes unbundled to the repository
  will add ``$HG_URL`` and ``$HG_SOURCE``. New changesets will add ``$HG_NODE``
  (the ID of the first added changeset) and ``$HG_NODE_LAST`` (the ID of the
  last added changeset). Bookmark and phase changes will set
  ``$HG_BOOKMARK_MOVED`` and ``$HG_PHASES_MOVED`` to ``1`` respectively. The
  number of new obsmarkers, if any, will be in ``$HG_NEW_OBSMARKERS``, etc.

``pretxnclose-bookmark``
  Run right before a bookmark change is actually finalized. Any repository
  change will be visible to the hook program. This lets you validate the
  transaction content or change it. Exit status 0 allows the commit to
  proceed. A non-zero status will cause the transaction to be rolled back.
  The name of the bookmark will be available in ``$HG_BOOKMARK``, the new
  bookmark location will be available in ``$HG_NODE`` while the previous
  location will be available in ``$HG_OLDNODE``. In case of a bookmark
  creation ``$HG_OLDNODE`` will be empty. In case of deletion ``$HG_NODE``
  will be empty.
  In addition, the reason for the transaction opening will be in
  ``$HG_TXNNAME``, and a unique identifier for the transaction will be in
  ``$HG_TXNID``.

``pretxnclose-phase``
  Run right before a phase change is actually finalized. Any repository change
  will be visible to the hook program. This lets you validate the transaction
  content or change it. Exit status 0 allows the commit to proceed. A non-zero
  status will cause the transaction to be rolled back. The hook is called
  multiple times, once for each revision affected by a phase change.
  The affected node is available in ``$HG_NODE``, the new phase in
  ``$HG_PHASE``, and the previous phase in ``$HG_OLDPHASE``. In case of a new
  node, ``$HG_OLDPHASE`` will be empty. In addition, the reason for the
  transaction opening will be in ``$HG_TXNNAME``, and a unique identifier for
  the transaction will be in ``$HG_TXNID``. The hook is also run for newly
  added revisions. In this case the ``$HG_OLDPHASE`` entry will be empty.

``txnclose``
  Run after any repository transaction has been committed. At this
  point, the transaction can no longer be rolled back. The hook will run
  after the lock is released. See :hg:`help config.hooks.pretxnclose` for
  details about available variables.

``txnclose-bookmark``
  Run after any bookmark change has been committed. At this point, the
  transaction can no longer be rolled back. The hook will run after the lock
  is released. See :hg:`help config.hooks.pretxnclose-bookmark` for details
  about available variables.

``txnclose-phase``
  Run after any phase change has been committed. At this point, the
  transaction can no longer be rolled back. The hook will run after the lock
  is released. See :hg:`help config.hooks.pretxnclose-phase` for details about
  available variables.

``txnabort``
  Run when a transaction is aborted. See :hg:`help config.hooks.pretxnclose`
  for details about available variables.

``pretxnchangegroup``
  Run after a changegroup has been added via push, pull or unbundle, but before
  the transaction has been committed. The changegroup is visible to the hook
  program. This allows validation of incoming changes before accepting them.
  The ID of the first new changeset is in ``$HG_NODE`` and the last is in
  ``$HG_NODE_LAST``. Exit status 0 allows the transaction to commit. A non-zero
  status will cause the transaction to be rolled back, and the push, pull or
  unbundle will fail. The URL that was the source of changes is in ``$HG_URL``.

``pretxncommit``
  Run after a changeset has been created, but before the transaction is
  committed. The changeset is visible to the hook program. This allows
  validation of the commit message and changes. Exit status 0 allows the
  commit to proceed. A non-zero status will cause the transaction to
  be rolled back. The ID of the new changeset is in ``$HG_NODE``. The parent
  changeset IDs are in ``$HG_PARENT1`` and ``$HG_PARENT2``.

``preupdate``
  Run before updating the working directory. Exit status 0 allows
  the update to proceed. A non-zero status will prevent the update.
  The changeset ID of the first new parent is in ``$HG_PARENT1``. If updating
  to a merge, the ID of the second new parent is in ``$HG_PARENT2``.

``listkeys``
  Run after listing pushkeys (like bookmarks) in the repository. The
  key namespace is in ``$HG_NAMESPACE``. ``$HG_VALUES`` is a
  dictionary containing the keys and values.

``pushkey``
  Run after a pushkey (like a bookmark) is added to the
  repository. The key namespace is in ``$HG_NAMESPACE``, the key is in
  ``$HG_KEY``, the old value (if any) is in ``$HG_OLD``, and the new
  value is in ``$HG_NEW``.

``tag``
  Run after a tag is created. The ID of the tagged changeset is in ``$HG_NODE``.
  The name of the tag is in ``$HG_TAG``. The tag is local if ``$HG_LOCAL=1``, or
  in the repository if ``$HG_LOCAL=0``.

``update``
  Run after updating the working directory. The changeset ID of the first
  new parent is in ``$HG_PARENT1``. If updating to a merge, the ID of the
  second new parent is in ``$HG_PARENT2``. If the update succeeded,
  ``$HG_ERROR=0``. If the update failed (e.g. because conflicts were not
  resolved), ``$HG_ERROR=1``.

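As a combined example, here is a minimal sketch of a ``[hooks]`` section
wiring up some of the hooks above. The script path, the ``.hg/frozen``
marker-file convention, and the hook suffixes are hypothetical placeholders
for illustration, not part of Mercurial::

  [hooks]
  # pre-<command>/post-<command>: wrap the "commit" command itself. A
  # non-zero exit from pre-commit stops the command before it runs;
  # post-commit failures are ignored. (Assumes hg is run from the repo root.)
  pre-commit = test ! -f .hg/frozen
  post-commit = echo "commit exited with $HG_RESULT" >> /tmp/hg-activity.log
  # pretxnchangegroup: vet incoming changesets before the transaction
  # commits; a non-zero exit rolls back the push, pull or unbundle.
  pretxnchangegroup.check = /usr/local/bin/check-incoming "$HG_NODE" "$HG_NODE_LAST"
  # pretag: allow any local tag, but require non-local tags to look
  # like version numbers ("v" followed by a digit).
  pretag.naming = test "$HG_LOCAL" = 1 || expr "$HG_TAG" : 'v[0-9]' >/dev/null

Suffixes such as ``.check`` and ``.naming`` allow several hooks to share one
event; a pre-hook aborts the operation as soon as any of them fails.
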
.. note::

   It is generally better to use standard hooks rather than the
   generic pre- and post- command hooks, as they are guaranteed to be
   called in the appropriate contexts for influencing transactions.
   Also, hooks like "commit" will be called in all contexts that
   generate a commit (e.g. tag) and not just the commit command.

.. note::

   Environment variables with empty values may not be passed to
   hooks on platforms such as Windows. As an example, ``$HG_PARENT2``
   will have an empty value under Unix-like platforms for non-merge
   changesets, while it will not be available at all under Windows.

The syntax for Python hooks is as follows::

  hookname = python:modulename.submodule.callable
  hookname = python:/path/to/python/module.py:callable

Python hooks are run within the Mercurial process. Each hook is
called with at least three keyword arguments: a ui object (keyword
``ui``), a repository object (keyword ``repo``), and a ``hooktype``
keyword that tells what kind of hook is used. Arguments listed as
environment variables above are passed as keyword arguments, with no
``HG_`` prefix, and names in lower case.

If a Python hook returns a "true" value or raises an exception, this
is treated as a failure.

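To make that calling convention concrete, here is a hedged sketch of a
``pretxncommit`` Python hook. The module path and the issue-reference policy
are inventions for illustration; only the keyword-argument convention and the
truthy-return-means-failure rule come from the text above::

  # /etc/mercurial/hooks.py (hypothetical location)
  import re

  def checkmessage(ui, repo, hooktype, node=None, **kwargs):
      """Reject commits whose message lacks an issue reference.

      ``node`` corresponds to ``$HG_NODE``, passed as a keyword
      argument with the ``HG_`` prefix dropped and lower-cased.
      """
      ctx = repo[node]  # Mercurial's internal APIs use bytes, not str
      if re.search(rb'\bissue\d+\b', ctx.description()):
          return False  # false-ish return value: success
      ui.warn(b'commit message must mention an issueNNN reference\n')
      return True  # true-ish return value: failure, transaction rolled back

  # Enabled in an hgrc with:
  #   [hooks]
  #   pretxncommit.message = python:/etc/mercurial/hooks.py:checkmessage
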

``hostfingerprints``
--------------------

(Deprecated. Use ``[hostsecurity]``'s ``fingerprints`` options instead.)

Fingerprints of the certificates of known HTTPS servers.

An HTTPS connection to a server with a fingerprint configured here will
only succeed if the server's certificate matches the fingerprint.
This is very similar to how SSH known hosts work.

The fingerprint is the SHA-1 hash value of the DER encoded certificate.
Multiple values can be specified (separated by spaces or commas). This can
be used to define both old and new fingerprints while a host transitions
to a new certificate.

The CA chain and web.cacerts are not used for servers with a fingerprint.

For example::

  [hostfingerprints]
  hg.intevation.de = fc:e2:8d:d9:51:cd:cb:c1:4d:18:6b:b7:44:8d:49:72:57:e6:cd:33
  hg.intevation.org = fc:e2:8d:d9:51:cd:cb:c1:4d:18:6b:b7:44:8d:49:72:57:e6:cd:33

``hostsecurity``
----------------

Used to specify global and per-host security settings for connecting to
other machines.

The following options control default behavior for all hosts.

``ciphers``
  Defines the cryptographic ciphers to use for connections.

  Value must be a valid OpenSSL Cipher List Format as documented at
  https://www.openssl.org/docs/manmaster/apps/ciphers.html#CIPHER-LIST-FORMAT.

  This setting is for advanced users only. Setting to incorrect values
  can significantly lower connection security or decrease performance.
  You have been warned.

  This option requires Python 2.7.

``minimumprotocol``
  Defines the minimum channel encryption protocol to use.

  By default, the highest version of TLS supported by both client and server
  is used.

  Allowed values are: ``tls1.0``, ``tls1.1``, ``tls1.2``.

  When running on an old Python version, only ``tls1.0`` is allowed since
  old versions of Python only support up to TLS 1.0.

  When running a Python that supports modern TLS versions, the default is
  ``tls1.1``. ``tls1.0`` can still be used to allow TLS 1.0. However, this
  weakens security and should only be used as a feature of last resort if
  a server does not support TLS 1.1+.

Options in the ``[hostsecurity]`` section can have the form
``hostname``:``setting``. This allows multiple settings to be defined on a
per-host basis.

The following per-host settings can be defined.

``ciphers``
  This behaves like ``ciphers`` as described above except it only applies
  to the host on which it is defined.

``fingerprints``
  A list of hashes of the DER encoded peer/remote certificate. Values have
  the form ``algorithm``:``fingerprint``. e.g.
  ``sha256:c3ab8ff13720e8ad9047dd39466b3c8974e592c2fa383d4a3960714caef0c4f2``.
  In addition, colons (``:``) can appear in the fingerprint part.

  The following algorithms/prefixes are supported: ``sha1``, ``sha256``,
  ``sha512``.

  Use of ``sha256`` or ``sha512`` is preferred.

  If a fingerprint is specified, the CA chain is not validated for this
  host and Mercurial will require the remote certificate to match one
  of the fingerprints specified. This means if the server updates its
  certificate, Mercurial will abort until a new fingerprint is defined.
  This can provide stronger security than traditional CA-based validation
  at the expense of convenience.

  This option takes precedence over ``verifycertsfile``.

``minimumprotocol``
  This behaves like ``minimumprotocol`` as described above except it
  only applies to the host on which it is defined.

``verifycertsfile``
  Path to a file containing a list of PEM encoded certificates used to
  verify the server certificate. Environment variables and ``~user``
  constructs are expanded in the filename.

  The server certificate or the certificate's certificate authority (CA)
  must match a certificate from this file or certificate verification
  will fail and connections to the server will be refused.

  If defined, only certificates provided by this file will be used:
  ``web.cacerts`` and any system/default certificates will not be
  used.

  This option has no effect if the per-host ``fingerprints`` option
  is set.

  The format of the file is as follows::

    -----BEGIN CERTIFICATE-----
    ... (certificate in base64 PEM encoding) ...
    -----END CERTIFICATE-----
    -----BEGIN CERTIFICATE-----
    ... (certificate in base64 PEM encoding) ...
    -----END CERTIFICATE-----

For example::

  [hostsecurity]
  hg.example.com:fingerprints = sha256:c3ab8ff13720e8ad9047dd39466b3c8974e592c2fa383d4a3960714caef0c4f2
  hg2.example.com:fingerprints = sha1:914f1aff87249c09b6859b88b1906d30756491ca, sha1:fc:e2:8d:d9:51:cd:cb:c1:4d:18:6b:b7:44:8d:49:72:57:e6:cd:33
  hg3.example.com:fingerprints = sha256:9a:b0:dc:e2:75:ad:8a:b7:84:58:e5:1f:07:32:f1:87:e6:bd:24:22:af:b7:ce:8e:9c:b4:10:cf:b9:f4:0e:d2
  foo.example.com:verifycertsfile = /etc/ssl/trusted-ca-certs.pem

To change the default minimum protocol version to TLS 1.2 but to allow TLS 1.1
when connecting to ``hg.example.com``::

  [hostsecurity]
  minimumprotocol = tls1.2
  hg.example.com:minimumprotocol = tls1.1

``http_proxy``
--------------

Used to access web-based Mercurial repositories through an HTTP
proxy.

``host``
  Host name and (optional) port of the proxy server, for example
  "myproxy:8000".

``no``
  Optional. Comma-separated list of host names that should bypass
  the proxy.

``passwd``
  Optional. Password to authenticate with at the proxy server.

``user``
  Optional. User name to authenticate with at the proxy server.

``always``
  Optional. Always use the proxy, even for localhost and any entries
  in ``http_proxy.no``. (default: False)

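For example, a minimal sketch of this section (the proxy host, bypass list
and credentials are hypothetical placeholders)::

  [http_proxy]
  host = myproxy:8000
  no = localhost, intranet.example.com
  user = proxyuser
  passwd = proxypass
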
``http``
--------

Used to configure access to Mercurial repositories via HTTP.

``timeout``
  If set, blocking operations will time out after that many seconds.
  (default: None)

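For example, to give up on HTTP operations that block for more than a minute
(the value shown is only illustrative)::

  [http]
  timeout = 60
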
``merge``
---------

This section specifies behavior during merges and updates.

``checkignored``
  Controls behavior when an ignored file on disk has the same name as a tracked
  file in the changeset being merged or updated to, and has different
  contents. Options are ``abort``, ``warn`` and ``ignore``. With ``abort``,
  abort on such files. With ``warn``, warn on such files and back them up as
  ``.orig``. With ``ignore``, don't print a warning and back them up as
  ``.orig``. (default: ``abort``)

``checkunknown``
  Controls behavior when an unknown file that isn't ignored has the same name
  as a tracked file in the changeset being merged or updated to, and has
  different contents. Similar to ``merge.checkignored``, except for files that
  are not ignored. (default: ``abort``)

``on-failure``
  When set to ``continue`` (the default), the merge process attempts to
  merge all unresolved files using the chosen merge tool, regardless of
  whether previous file merge attempts during the process succeeded or not.
  Setting this to ``prompt`` will ask, after any merge tool failure, whether
  to continue or halt the merge process. Setting this to ``halt`` will
  automatically halt the merge process on any merge tool failure. The merge
  process can be restarted by using the ``resolve`` command. When a merge is
  halted, the repository is left in a normal ``unresolved`` merge state.
  (default: ``continue``)

``strict-capability-check``
  Whether capabilities of internal merge tools are checked strictly
  or not while examining rules to decide which merge tool to use.
  (default: False)

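A minimal sketch of this section; the values are illustrative choices, not
recommendations::

  [merge]
  # warn (and create .orig backups) instead of aborting on unknown files
  checkunknown = warn
  # silently back up conflicting ignored files
  checkignored = ignore
  # ask whether to keep going after each failed file merge
  on-failure = prompt
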
``merge-patterns``
------------------

This section specifies merge tools to associate with particular file
patterns. Tools matched here will take precedence over the default
merge tool. Patterns are globs by default, rooted at the repository
root.

Example::

  [merge-patterns]
  **.c = kdiff3
  **.jpg = myimgmerge

``merge-tools``
---------------

This section configures external merge tools to use for file-level
merges. This section has likely been preconfigured at install time.
Use :hg:`config merge-tools` to check the existing configuration.
Also see :hg:`help merge-tools` for more details.

Example ``~/.hgrc``::

  [merge-tools]
  # Override stock tool location
  kdiff3.executable = ~/bin/kdiff3
  # Specify command line
  kdiff3.args = $base $local $other -o $output
  # Give higher priority
  kdiff3.priority = 1

  # Changing the priority of a preconfigured tool
  meld.priority = 0

  # Disable a preconfigured tool
  vimdiff.disabled = yes

  # Define a new tool
  myHtmlTool.args = -m $local $other $base $output
  myHtmlTool.regkey = Software\FooSoftware\HtmlMerge
  myHtmlTool.priority = 1

Supported arguments:

``priority``
  The priority in which to evaluate this tool.
  (default: 0)

``executable``
  Either just the name of the executable or its pathname.

  .. container:: windows

    On Windows, the path can use environment variables with ${ProgramFiles}
    syntax.

  (default: the tool name)

``args``
  The arguments to pass to the tool executable. You can refer to the
  files being merged as well as the output file through these
  variables: ``$base``, ``$local``, ``$other``, ``$output``.

  The meaning of ``$local`` and ``$other`` can vary depending on which action
  is being performed. During an update or merge, ``$local`` represents the
  original state of the file, while ``$other`` represents the commit you are
  updating to or the commit you are merging with. During a rebase, ``$local``
  represents the destination of the rebase, and ``$other`` represents the
  commit being rebased.

  Some operations define custom labels to assist with identifying the
  revisions, accessible via ``$labellocal``, ``$labelother``, and
  ``$labelbase``. If custom labels are not available, these will be
  ``local``, ``other``, and ``base``, respectively.
  (default: ``$local $base $other``)

``premerge``
  Attempt to run the internal non-interactive 3-way merge tool before
  launching the external tool. Options are ``true``, ``false``, ``keep``,
  ``keep-merge3``, or ``keep-mergediff`` (experimental). The ``keep`` option
  will leave markers in the file if the premerge fails. The ``keep-merge3``
  option will do the same but include information about the base of the merge
  in the markers (see internal :merge3 in :hg:`help merge-tools`). The
  ``keep-mergediff`` option is similar but uses a different marker style
  (see internal :mergediff in :hg:`help merge-tools`). (default: True)

``binary``
  This tool can merge binary files. (default: False, unless tool
  was selected by file pattern match)

``symlink``
  This tool can merge symlinks. (default: False)

``check``
  A list of merge success-checking options:

  ``changed``
    Ask whether merge was successful when the merged file shows no changes.
  ``conflicts``
    Check whether there are conflicts even though the tool reported success.
  ``prompt``
    Always prompt for merge success, regardless of success reported by tool.

``fixeol``
  Attempt to fix up EOL changes caused by the merge tool.
  (default: False)

``gui``
  This tool requires a graphical interface to run. (default: False)

``mergemarkers``
  Controls whether the labels passed via ``$labellocal``, ``$labelother``, and
  ``$labelbase`` are ``detailed`` (respecting ``mergemarkertemplate``) or
  ``basic``. If ``premerge`` is ``keep`` or ``keep-merge3``, the conflict
  markers generated during premerge will be ``detailed`` if either this option
  or the corresponding option in the ``[ui]`` section is ``detailed``.
  (default: ``basic``)

``mergemarkertemplate``
  This setting can be used to override ``mergemarker`` from the
  ``[command-templates]`` section on a per-tool basis; this applies to the
  ``$label``-prefixed variables and to the conflict markers that are generated
  if ``premerge`` is ``keep`` or ``keep-merge3``. See the corresponding
  variable in ``[ui]`` for more information.

.. container:: windows

  ``regkey``
    Windows registry key which describes the install location of this
    tool. Mercurial will search for this key first under
    ``HKEY_CURRENT_USER`` and then under ``HKEY_LOCAL_MACHINE``.
    (default: None)

  ``regkeyalt``
    An alternate Windows registry key to try if the first key is not
    found. The alternate key uses the same ``regname`` and ``regappend``
    semantics of the primary key. The most common use for this key
    is to search for 32bit applications on 64bit operating systems.
    (default: None)

  ``regname``
    Name of the value to read from the specified registry key.
    (default: the unnamed (default) value)

  ``regappend``
    String to append to the value read from the registry, typically
    the executable name of the tool.
    (default: None)

``pager``
---------

Setting used to control when to paginate and with what external tool. See
:hg:`help pager` for details.

``pager``
  Define the external tool used as pager.

  If no pager is set, Mercurial uses the environment variable $PAGER.
  If neither pager.pager nor $PAGER is set, a default pager will be
  used, typically `less` on Unix and `more` on Windows. Example::

    [pager]
    pager = less -FRX

``ignore``
  List of commands to disable the pager for. Example::

    [pager]
    ignore = version, help, update

``patch``
---------

Settings used when applying patches, for instance through the 'import'
command or with the Mercurial Queues extension.

``eol``
    When set to ``strict``, the end of lines of the patch content and the
    patched files are preserved. When set to ``lf`` or ``crlf``, end of
    lines in both files are ignored when patching and the resulting line
    endings are normalized to either LF (Unix) or CRLF (Windows). When set
    to ``auto``, end of lines are again ignored while patching but line
    endings in patched files are normalized to their original setting on a
    per-file basis. If the target file does not exist or has no end of
    line, patch line endings are preserved.
    (default: strict)

``fuzz``
    The number of lines of 'fuzz' to allow when applying patches. This
    controls how much context the patcher is allowed to ignore when
    trying to apply a patch.
    (default: 2)

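For instance, a repository that receives patches from mixed platforms
might normalize line endings and tolerate a bit more context drift (a
minimal sketch; the values are illustrative, not recommendations)::

    [patch]
    eol = auto
    fuzz = 4
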
``paths``
---------

Assigns symbolic names and behavior to repositories.

Options are symbolic names defining the URL or directory that is the
location of the repository. Example::

    [paths]
    my_server = https://example.com/my_repo
    local_path = /home/me/repo

These symbolic names can be used from the command line. To pull
from ``my_server``: :hg:`pull my_server`. To push to ``local_path``:
:hg:`push local_path`. You can check :hg:`help urls` for details about
valid URLs.

Options containing colons (``:``) denote sub-options that can influence
behavior for that specific path. Example::

    [paths]
    my_server = https://example.com/my_path
    my_server:pushurl = ssh://example.com/my_path

Paths using the `path://otherpath` scheme will inherit the sub-option
values from the path they point to.

The following sub-options can be defined:

``multi-urls``
    A boolean option. When enabled, the value of the `[paths]` entry will
    be parsed as a list and the alias will resolve to multiple
    destinations. If some of the list entries use the `path://` syntax,
    the sub-options will be inherited individually.

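    For example, an alias could fan out to a primary and a mirror URL (a
    sketch; both URLs are placeholders)::

      [paths]
      everywhere = https://example.com/repo https://mirror.example.com/repo
      everywhere:multi-urls = yes
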
``pushurl``
    The URL to use for push operations. If not defined, the location
    defined by the path's main entry is used.

``pushrev``
    A revset defining which revisions to push by default.

    When :hg:`push` is executed without a ``-r`` argument, the revset
    defined by this sub-option is evaluated to determine what to push.

    For example, a value of ``.`` will push the working directory's
    revision by default.

    Revsets specifying bookmarks will not result in the bookmark being
    pushed.

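    Putting the sub-options together, a path that pushes over SSH and only
    sends the current revision by default could look like this (a sketch
    reusing the placeholder URLs from above)::

      [paths]
      my_server = https://example.com/my_path
      my_server:pushurl = ssh://example.com/my_path
      my_server:pushrev = .
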
``bookmarks.mode``
    How bookmarks are dealt with during the exchange. It supports the
    following values:

    - ``default``: the default behavior; local and remote bookmarks are
      "merged" on push/pull.

    - ``mirror``: when pulling, replace local bookmarks with remote
      bookmarks. This is useful to replicate a repository, or as an
      optimization.

    - ``ignore``: ignore bookmarks during exchange.
      (This currently only affects pulling.)

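    For instance, a path used only to replicate an upstream repository
    could mirror its bookmarks (a sketch; the alias name is illustrative)::

      [paths]
      upstream = https://example.com/repo
      upstream:bookmarks.mode = mirror
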
.. container:: verbose

    ``pulled-delta-reuse-policy``
        Control the policy regarding deltas sent by the remote during
        pulls.

        This is an advanced option that non-admin users should not need to
        understand or set. This option can be used to speed up pulls from
        trusted central servers, or to fix up deltas from older servers.

        It supports the following values:

        - ``default``: use the policy defined by
          `storage.revlog.reuse-external-delta-parent`.

        - ``no-reuse``: start a new optimal delta search for each new
          revision we add to the repository. The deltas from the server
          will still be reused when the base they apply to is tried (this
          can be frequent if that base is the one and only parent of the
          revision). This can significantly slow down pulls but will
          result in optimized storage space if the remote peer is sending
          poor quality deltas.

        - ``try-base``: try to reuse the deltas from the remote peer as
          long as they create a valid delta-chain in the local repository.
          This speeds up the unbundling process, but can result in
          sub-optimal storage space if the remote peer is sending poor
          quality deltas.

        - ``forced``: the deltas from the peer will be reused in all
          cases, even if the resulting delta-chain is "invalid". This
          setting will ensure the bundle is applied at minimal CPU cost,
          but it can result in longer delta chains being created on the
          client, making revisions potentially slower to access in the
          future. If you think you need this option, you should make sure
          you are also talking to the Mercurial developer community to get
          confirmation.

        See `hg help config.storage.revlog.reuse-external-delta-parent`
        for a similar global option. That option defines the behavior of
        `default`.

The following special named paths exist:

``default``
    The URL or directory to use when no source or remote is specified.

    :hg:`clone` will automatically define this path to the location the
    repository was cloned from.

``default-push``
    (deprecated) The URL or directory for the default :hg:`push` location.
    ``default:pushurl`` should be used instead.

``phases``
----------

Specifies default handling of phases. See :hg:`help phases` for more
information about working with phases.

``publish``
    Controls draft phase behavior when working as a server. When true,
    pushed changesets are set to public in both client and server and
    pulled or cloned changesets are set to public in the client.
    (default: True)

``new-commit``
    Phase of newly-created commits.
    (default: draft)

``checksubrepos``
    Check the phase of the current revision of each subrepository. Allowed
    values are "ignore", "follow" and "abort". For settings other than
    "ignore", the phase of the current revision of each subrepository is
    checked before committing the parent repository. If any of those
    phases is greater than the phase of the parent repository (e.g. if a
    subrepo is in a "secret" phase while the parent repo is in "draft"
    phase), the commit is either aborted (if checksubrepos is set to
    "abort") or the higher phase is used for the parent repository commit
    (if set to "follow").
    (default: follow)

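For example, a server that should keep pushed changesets in the draft
phase (a non-publishing server) can be configured like this::

    [phases]
    publish = False
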
``profiling``
-------------

Specifies profiling type, format, and file output. Two profilers are
supported: an instrumenting profiler (named ``ls``), and a sampling
profiler (named ``stat``).

In this section description, 'profiling data' stands for the raw data
collected during profiling, while 'profiling report' stands for a
statistical text report generated from the profiling data.

``enabled``
    Enable the profiler.
    (default: false)

    This is equivalent to passing ``--profile`` on the command line.

``type``
    The type of profiler to use.
    (default: stat)

    ``ls``
      Use Python's built-in instrumenting profiler. This profiler
      works on all platforms, but each line number it reports is the
      first line of a function. This restriction makes it difficult to
      identify the expensive parts of a non-trivial function.
    ``stat``
      Use a statistical profiler, statprof. This profiler is most
      useful for profiling commands that run for longer than about 0.1
      seconds.

``format``
    Profiling format. Specific to the ``ls`` instrumenting profiler.
    (default: text)

    ``text``
      Generate a profiling report. When saving to a file, it should be
      noted that only the report is saved, and the profiling data is
      not kept.
    ``kcachegrind``
      Format profiling data for kcachegrind use: when saving to a
      file, the generated file can directly be loaded into
      kcachegrind.

``statformat``
    Profiling format for the ``stat`` profiler.
    (default: hotpath)

    ``hotpath``
      Show a tree-based display containing the hot path of execution
      (where most time was spent).
    ``bymethod``
      Show a table of methods ordered by how frequently they are active.
    ``byline``
      Show a table of lines in files ordered by how frequently they are
      active.
    ``json``
      Render profiling data as JSON.

``freq``
    Sampling frequency. Specific to the ``stat`` sampling profiler.
    (default: 1000)

``output``
    File path where profiling data or report should be saved. If the
    file exists, it is replaced. (default: None, data is printed on
    stderr)

``sort``
    Sort field. Specific to the ``ls`` instrumenting profiler.
    One of ``callcount``, ``reccallcount``, ``totaltime`` and
    ``inlinetime``.
    (default: inlinetime)

``time-track``
    Controls whether the stat profiler tracks ``cpu`` or ``real`` time.
    (default: ``cpu`` on Windows, otherwise ``real``)

``limit``
    Number of lines to show. Specific to the ``ls`` instrumenting
    profiler.
    (default: 30)

``nested``
    Show at most this number of lines of drill-down info after each main
    entry. This can help explain the difference between Total and Inline.
    Specific to the ``ls`` instrumenting profiler.
    (default: 0)

``showmin``
    Minimum fraction of samples an entry must have for it to be displayed.
    Can be specified as a float between ``0.0`` and ``1.0`` or can have a
    ``%`` afterwards to allow values up to ``100``, e.g. ``5%``.

    Only used by the ``stat`` profiler.

    For the ``hotpath`` format, the default is ``0.05``.
    For the ``chrome`` format, the default is ``0.005``.

    The option is unused by other formats.

``showmax``
    Maximum fraction of samples an entry can have before it is ignored in
    display. The value format is the same as for ``showmin``.

    Only used by the ``stat`` profiler.

    For the ``chrome`` format, the default is ``0.999``.

    The option is unused by other formats.

``showtime``
    Show time taken as absolute durations, in addition to percentages.
    Only used by the ``hotpath`` format.
    (default: true)

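As a concrete starting point, the following enables the sampling profiler
and writes the hot-path report to a file (a sketch; the output path is a
placeholder)::

    [profiling]
    enabled = true
    type = stat
    statformat = hotpath
    output = /tmp/hg-profile.txt
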
``progress``
------------

Mercurial commands can draw progress bars that are as informative as
possible. Some progress bars only offer indeterminate information, while
others have a definite end point.

``debug``
    Whether to print debug info when updating the progress bar.
    (default: False)

``delay``
    Number of seconds (float) before showing the progress bar. (default: 3)

``changedelay``
    Minimum delay before showing a new topic. When set to less than
    3 * refresh, that value will be used instead. (default: 1)

``estimateinterval``
    Maximum sampling interval in seconds for speed and estimated time
    calculation. (default: 60)

``refresh``
    Time in seconds between refreshes of the progress bar. (default: 0.1)

``format``
    Format of the progress bar.

    Valid entries for the format field are ``topic``, ``bar``, ``number``,
    ``unit``, ``estimate``, ``speed``, and ``item``. ``item`` defaults to
    the last 20 characters of the item, but this can be changed by adding
    either ``-<num>`` which would take the last num characters, or
    ``+<num>`` for the first num characters.

    (default: topic bar number estimate)

``width``
    If set, the maximum width of the progress information (that is,
    min(width, term width)) will be used.

``clear-complete``
    Clear the progress bar after it's done. (default: True)

``disable``
    If true, don't show a progress bar.

``assume-tty``
    If true, ALWAYS show a progress bar, unless disable is given.

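For instance, to show the bar sooner and append the item being processed
(a sketch; ``item-10`` keeps the last 10 characters of each item, per the
``format`` rules above)::

    [progress]
    delay = 1
    format = topic bar number estimate item-10
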
``rebase``
----------

``evolution.allowdivergence``
    Defaults to False. When True, allows creating divergence when
    performing a rebase of obsolete changesets.

``revsetalias``
---------------

Alias definitions for revsets. See :hg:`help revsets` for details.

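For example, a shorthand for one's own unpublished changesets might look
like this (a sketch; the author name is a placeholder)::

    [revsetalias]
    mine = author(alice) and not public()
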
``rewrite``
-----------

``backup-bundle``
    Whether to save stripped changesets to a bundle file. (default: True)

``update-timestamp``
    If true, updates the date and time of the changeset to the current
    time. It is only applicable for `hg amend`, `hg commit --amend` and
    `hg uncommit` in the current version.

``empty-successor``
    Control what happens with empty successors that are the result of
    rewrite operations. If set to ``skip``, the successor is not created.
    If set to ``keep``, the empty successor is created and kept.

    Currently, only the rebase and absorb commands consider this
    configuration. (EXPERIMENTAL)

``rhg``
-------

The pure Rust fast-path for Mercurial. See `rust/README.rst` in the
Mercurial repository.

``fallback-executable``
    Path to the executable to run in a sub-process when falling back to
    another implementation of Mercurial.

``fallback-immediately``
    Fall back to ``fallback-executable`` as soon as possible, regardless
    of the `rhg.on-unsupported` configuration. Useful for debugging, for
    example to bypass `rhg` if the default `hg` points to `rhg`.

    Note that because this requires loading the configuration, it is
    possible that `rhg` errors out before being able to fall back.

``ignored-extensions``
    Controls which extensions should be ignored by `rhg`. By default,
    `rhg` triggers the `rhg.on-unsupported` behavior for any unsupported
    extension. Users can disable that behavior when they know that a given
    extension does not need support from `rhg`.

    Expects a list of extension names, or ``*`` to ignore all extensions.

    Note: ``*:<suboption>`` is also a valid extension name for this
    configuration option.
    As of this writing, the only valid "global" suboption is ``required``.

``on-unsupported``
    Controls the behavior of `rhg` when detecting unsupported features.

    Possible values are `abort` (default), `abort-silent` and `fallback`.

    ``abort``
      Print an error message describing what feature is not supported,
      and exit with code 252.

    ``abort-silent``
      Silently exit with code 252.

    ``fallback``
      Try running the fallback executable with the same parameters
      (and trace the fallback reason; use `RUST_LOG=trace` to see it).

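Putting these together, a setup that transparently falls back to a Python
`hg` when `rhg` hits an unsupported feature might look like this (a
sketch; the executable path is a placeholder)::

    [rhg]
    on-unsupported = fallback
    fallback-executable = /usr/bin/hg
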
``share``
---------

``safe-mismatch.source-safe``
    Controls what happens when the shared repository does not use the
    share-safe mechanism but its source repository does.

    Possible values are `abort` (default), `allow`, `upgrade-abort` and
    `upgrade-allow`.

    ``abort``
      Disallows running any command and aborts
    ``allow``
      Respects the feature presence in the share source
    ``upgrade-abort``
      Tries to upgrade the share to use share-safe; if it fails, aborts
    ``upgrade-allow``
      Tries to upgrade the share; if it fails, continues by
      respecting the share source setting

    Check :hg:`help config.format.use-share-safe` for details about the
    share-safe feature.

``safe-mismatch.source-safe:verbose-upgrade``
    Display a message when upgrading. (default: True)

``safe-mismatch.source-safe.warn``
    Shows a warning on operations if the shared repository does not use
    share-safe, but the source repository does.
    (default: True)

``safe-mismatch.source-not-safe``
    Controls what happens when the shared repository uses the share-safe
    mechanism but its source does not.

    Possible values are `abort` (default), `allow`, `downgrade-abort` and
    `downgrade-allow`.

    ``abort``
      Disallows running any command and aborts
    ``allow``
      Respects the feature presence in the share source
    ``downgrade-abort``
      Tries to downgrade the share to not use share-safe; if it fails,
      aborts
    ``downgrade-allow``
      Tries to downgrade the share to not use share-safe;
      if it fails, continues by respecting the shared source setting

    Check :hg:`help config.format.use-share-safe` for details about the
    share-safe feature.

``safe-mismatch.source-not-safe:verbose-upgrade``
    Display a message when upgrading. (default: True)

``safe-mismatch.source-not-safe.warn``
    Shows a warning on operations if the shared repository uses
    share-safe, but the source repository does not.
    (default: True)

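For example, to let shares transparently follow their source onto or off
the share-safe format instead of aborting (a sketch)::

    [share]
    safe-mismatch.source-safe = upgrade-allow
    safe-mismatch.source-not-safe = downgrade-allow
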
``storage``
-----------

Control the strategy Mercurial uses internally to store history. Options
in this category impact performance and repository size.

``revlog.issue6528.fix-incoming``
    Version 5.8 of Mercurial had a bug leading to altering the parents of
    file revisions with copy information (or any other metadata) on
    exchange. This caused the copy metadata to be overlooked by various
    internal logic. The issue was fixed in Mercurial 5.8.1.
    (See https://bz.mercurial-scm.org/show_bug.cgi?id=6528 for details)

    As a result, Mercurial now checks and fixes incoming file revisions
    to make sure their parents are in the right order. This behavior can
    be disabled by setting this option to `no`. This applies to revisions
    added through push, pull, clone and unbundle.

    To fix affected revisions that already exist within the repository,
    one can use :hg:`debug-repair-issue-6528`.

.. container:: verbose

    ``revlog.delta-parent-search.candidate-group-chunk-size``
        Tune the number of delta bases the storage will consider in the
        same "round" of search. In some very rare cases, using a smaller
        value might result in faster processing at the possible expense
        of storage space, while using larger values might result in
        slower processing at the possible benefit of storage space. A
        value of "0" means no limitation.

        default: no limitation

        It is unlikely that you'll have to tune this configuration. If
        you think you do, consider talking with the Mercurial developer
        community about your repositories.

``revlog.optimize-delta-parent-choice``
    When storing a merge revision, both parents will be equally considered
    as a possible delta base. This results in better delta selection and
    improved revlog compression. This option is enabled by default.

    Turning this option off can result in a large increase in repository
    size for repositories with many merges.

``revlog.persistent-nodemap.mmap``
    Whether to use the Operating System "memory mapping" feature (when
    possible) to access the persistent nodemap data. This improves
    performance and reduces memory pressure.

    Defaults to True.

    For details on the "persistent-nodemap" feature, see:
    :hg:`help config.format.use-persistent-nodemap`.

``revlog.persistent-nodemap.slow-path``
    Control the behavior of Mercurial when using a repository with a
    "persistent" nodemap with an installation of Mercurial without a fast
    implementation for the feature:

    ``allow``: Silently use the slower implementation to access the
    repository.
    ``warn``: Warn, but use the slower implementation to access the
    repository.
    ``abort``: Prevent access to such repositories. (This is the default)

    For details on the "persistent-nodemap" feature, see:
    :hg:`help config.format.use-persistent-nodemap`.

``revlog.reuse-external-delta-parent``
    Control the order in which delta parents are considered when adding
    new revisions from an external source.
    (typically: apply bundle from `hg pull` or `hg push`).

    New revisions are usually provided as a delta against other revisions.
    By default, Mercurial will try to reuse this delta first, therefore
    using the same "delta parent" as the source. Directly using deltas
    from the source reduces CPU usage and usually speeds up the operation.
    However, in some cases, the source might have sub-optimal delta bases
    and forcing their reevaluation is useful. For example, pushes from an
    old client could have sub-optimal delta parents that the server wants
    to optimize (lack of general delta, bad parent choice, lack of
    sparse-revlog, etc.).

    This option is enabled by default. Turning it off will ensure bad
    delta parent choices from older clients do not propagate to this
    repository, at the cost of a small increase in CPU consumption.

    Note: this option only controls the order in which delta parents are
    considered. Even when disabled, the existing delta from the source
    will be reused if the same delta parent is selected.

``revlog.reuse-external-delta``
    Control the reuse of deltas from external sources.
    (typically: apply bundle from `hg pull` or `hg push`).

    New revisions are usually provided as a delta against another
    revision. By default, Mercurial will not recompute the same delta
    again, trusting externally provided deltas. There have been rare cases
    of small adjustments to the diffing algorithm in the past. So in some
    rare cases, recomputing deltas provided by ancient clients can provide
    better results. Disabling this option means going through a full delta
    recomputation for all incoming revisions. It means a large increase in
    CPU usage and will slow operations down.

    This option is enabled by default. When disabled, it also disables the
    related ``storage.revlog.reuse-external-delta-parent`` option.

``revlog.zlib.level``
    Zlib compression level used when storing data into the repository.
    Accepted values range from 1 (lowest compression) to 9 (highest
    compression). The zlib default value is 6.

``revlog.zstd.level``
    zstd compression level used when storing data into the repository.
    Accepted values range from 1 (lowest compression) to 22 (highest
    compression).
    (default 3)

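For example, on a zstd-backed repository where disk space matters more
than write speed, the compression level could be raised (a sketch; 6 is
an illustrative value)::

    [storage]
    revlog.zstd.level = 6
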
``server``
----------

Controls generic server settings.

``bookmarks-pushkey-compat``
    Trigger the pushkey hook when bookmark updates are pushed. This config
    exists for compatibility purposes. (default: True)

    If you use ``pushkey`` and ``pre-pushkey`` hooks to control bookmark
    movement, we recommend you migrate them to ``txnclose-bookmark`` and
    ``pretxnclose-bookmark``.

``compressionengines``
    List of compression engines and their relative priority to advertise
    to clients.

    The order of compression engines determines their priority, the first
    having the highest priority. If a compression engine is not listed
    here, it won't be advertised to clients.

    If not set (the default), built-in defaults are used. Run
    :hg:`debuginstall` to list available compression engines and their
    default wire protocol priority.

    Older Mercurial clients only support zlib compression and this setting
    has no effect for legacy clients.

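    For instance, a server preferring zstd while keeping zlib available
    might advertise (a sketch; engine availability depends on the server's
    build, see :hg:`debuginstall`)::

      [server]
      compressionengines = zstd, zlib
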
``uncompressed``
    Whether to allow clients to clone a repository using the
    uncompressed streaming protocol. This transfers about 40% more
    data than a regular clone, but uses less memory and CPU on both
    server and client. Over a LAN (100 Mbps or better) or a very fast
    WAN, an uncompressed streaming clone is a lot faster (~10x) than a
    regular clone. Over most WAN connections (anything slower than
    about 6 Mbps), uncompressed streaming is slower, because of the
    extra data transfer overhead. This mode will also temporarily hold
    the write lock while determining what data to transfer.
    (default: True)

``uncompressedallowsecret``
    Whether to allow stream clones when the repository contains secret
    changesets. (default: False)

``preferuncompressed``
    When set, clients will try to use the uncompressed streaming
    protocol. (default: False)

``disablefullbundle``
    When set, servers will refuse attempts to do pull-based clones.
    If this option is set, ``preferuncompressed`` and/or clone bundles
    are highly recommended. Partial clones will still be allowed.
    (default: False)

``streamunbundle``
    When set, servers will apply data sent from the client directly,
    otherwise it will be written to a temporary file first. This option
    effectively prevents concurrent pushes.

``pullbundle``
    When set, the server will check pullbundles.manifest for bundles
    covering the requested heads and common nodes. The first matching
    entry will be streamed to the client.

    For HTTP transport, the stream will still use zlib compression
    for older clients.

2481 | ``concurrent-push-mode`` |
|
2487 | ``concurrent-push-mode`` | |
2482 | Level of allowed race condition between two pushing clients. |
|
2488 | Level of allowed race condition between two pushing clients. | |
2483 |
|
2489 | |||
2484 | - 'strict': push is abort if another client touched the repository |
|
2490 | - 'strict': push is abort if another client touched the repository | |
2485 | while the push was preparing. |
|
2491 | while the push was preparing. | |
2486 | - 'check-related': push is only aborted if it affects head that got also |
|
2492 | - 'check-related': push is only aborted if it affects head that got also | |
2487 | affected while the push was preparing. (default since 5.4) |
|
2493 | affected while the push was preparing. (default since 5.4) | |
2488 |
|
2494 | |||
2489 | 'check-related' only takes effect for compatible clients (version |
|
2495 | 'check-related' only takes effect for compatible clients (version | |
2490 | 4.3 and later). Older clients will use 'strict'. |
|
2496 | 4.3 and later). Older clients will use 'strict'. | |
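
    For instance, to restore the stricter pre-5.4 behavior (value taken
    from the list above)::

      [server]
      concurrent-push-mode = strict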

``validate``
    Whether to validate the completeness of pushed changesets by
    checking that all new file revisions specified in manifests are
    present. (default: False)

``maxhttpheaderlen``
    Instruct HTTP clients not to send request headers longer than this
    many bytes. (default: 1024)

``bundle1``
    Whether to allow clients to push and pull using the legacy bundle1
    exchange format. (default: True)

``bundle1gd``
    Like ``bundle1`` but only used if the repository is using the
    *generaldelta* storage format. (default: True)

``bundle1.push``
    Whether to allow clients to push using the legacy bundle1 exchange
    format. (default: True)

``bundle1gd.push``
    Like ``bundle1.push`` but only used if the repository is using the
    *generaldelta* storage format. (default: True)

``bundle1.pull``
    Whether to allow clients to pull using the legacy bundle1 exchange
    format. (default: True)

``bundle1gd.pull``
    Like ``bundle1.pull`` but only used if the repository is using the
    *generaldelta* storage format. (default: True)

    Large repositories using the *generaldelta* storage format should
    consider setting this option because converting *generaldelta*
    repositories to the exchange format required by the bundle1 data
    format can consume a lot of CPU.

``bundle2.stream``
    Whether to allow clients to pull using the bundle2 streaming protocol.
    (default: True)

``zliblevel``
    Integer between ``-1`` and ``9`` that controls the zlib compression level
    for wire protocol commands that send zlib compressed output (notably the
    commands that send repository history data).

    The default (``-1``) uses the default zlib compression level, which is
    likely equivalent to ``6``. ``0`` means no compression. ``9`` means
    maximum compression.

    Setting this option allows server operators to make trade-offs between
    bandwidth and CPU used. Lowering the compression lowers CPU utilization
    but sends more bytes to clients.

    This option only impacts the HTTP server.

``zstdlevel``
    Integer between ``1`` and ``22`` that controls the zstd compression level
    for wire protocol commands. ``1`` is the minimal amount of compression and
    ``22`` is the highest amount of compression.

    The default (``3``) should be significantly faster than zlib while likely
    delivering better compression ratios.

    This option only impacts the HTTP server.

    See also ``server.zliblevel``.
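
    As an illustration of the bandwidth/CPU trade-off described above, a
    CPU-bound server on a fast network might lower both levels (the values
    here are illustrative, not recommendations)::

      [server]
      zliblevel = 1
      zstdlevel = 1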

``view``
    Repository filter used when exchanging revisions with the peer.

    The default view (``served``) excludes secret and hidden changesets.
    Another useful value is ``immutable`` (no draft, secret or hidden
    changesets). (EXPERIMENTAL)

``smtp``
--------

Configuration for extensions that need to send email messages.

``host``
    Host name of mail server, e.g. "mail.example.com".

``port``
    Optional. Port to connect to on mail server. (default: 465 if
    ``tls`` is smtps; 25 otherwise)

``tls``
    Optional. Method to enable TLS when connecting to mail server: starttls,
    smtps or none. (default: none)

``username``
    Optional. User name for authenticating with the SMTP server.
    (default: None)

``password``
    Optional. Password for authenticating with the SMTP server. If not
    specified, interactive sessions will prompt the user for a
    password; non-interactive sessions will fail. (default: None)

``local_hostname``
    Optional. The hostname that the sender can use to identify
    itself to the MTA.
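
A minimal sketch tying these options together, using the documented
example host and a placeholder account name (substitute your own mail
server and credentials)::

  [smtp]
  host = mail.example.com
  port = 465
  tls = smtps
  username = hg-notify
  # password omitted: interactive sessions will prompt for it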


``subpaths``
------------

Subrepository source URLs can go stale if a remote server changes name
or becomes temporarily unavailable. This section lets you define
rewrite rules of the form::

  <pattern> = <replacement>

where ``pattern`` is a regular expression matching a subrepository
source URL and ``replacement`` is the replacement string used to
rewrite it. Groups can be matched in ``pattern`` and referenced in
``replacement``. For instance::

  http://server/(.*)-hg/ = http://hg.server/\1/

rewrites ``http://server/foo-hg/`` into ``http://hg.server/foo/``.

Relative subrepository paths are first made absolute, and the
rewrite rules are then applied on the full (absolute) path. If ``pattern``
doesn't match the full path, an attempt is made to apply it on the
relative path alone. The rules are applied in definition order.
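
Several rules can be combined; they are applied in the order given. For
example (hostnames here are hypothetical)::

  [subpaths]
  http://server/(.*)-hg/ = http://hg.server/\1/
  http://old-host/(.*) = https://new-host/\1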

``subrepos``
------------

This section contains options that control the behavior of the
subrepositories feature. See also :hg:`help subrepos`.

Security note: auditing in Mercurial is known to be insufficient to
prevent clone-time code execution with carefully constructed Git
subrepos. It is unknown if a similar defect is present in Subversion
subrepos. Both Git and Subversion subrepos are disabled by default
out of security concerns. These subrepo types can be enabled using
the respective options below.

``allowed``
    Whether subrepositories are allowed in the working directory.

    When false, commands involving subrepositories (like :hg:`update`)
    will fail for all subrepository types.
    (default: true)

``hg:allowed``
    Whether Mercurial subrepositories are allowed in the working
    directory. This option only has an effect if ``subrepos.allowed``
    is true.
    (default: true)

``git:allowed``
    Whether Git subrepositories are allowed in the working directory.
    This option only has an effect if ``subrepos.allowed`` is true.

    See the security note above before enabling Git subrepos.
    (default: false)

``svn:allowed``
    Whether Subversion subrepositories are allowed in the working
    directory. This option only has an effect if ``subrepos.allowed``
    is true.

    See the security note above before enabling Subversion subrepos.
    (default: false)
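
For example, a repository owner who has read the security note above and
still needs Git subrepos (but not Subversion ones) could opt in like this::

  [subrepos]
  allowed = true
  git:allowed = true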

``templatealias``
-----------------

Alias definitions for templates. See :hg:`help templates` for details.

``templates``
-------------

Use the ``[templates]`` section to define template strings.
See :hg:`help templates` for details.

``trusted``
-----------

Mercurial will not use the settings in the
``.hg/hgrc`` file from a repository if it doesn't belong to a trusted
user or to a trusted group, as various hgrc features allow arbitrary
commands to be run. This issue is often encountered when configuring
hooks or extensions for shared repositories or servers. However,
the web interface will use some safe settings from the ``[web]``
section.

This section specifies what users and groups are trusted. The
current user is always trusted. To trust everybody, list a user or a
group with name ``*``. These settings must be placed in an
*already-trusted file* to take effect, such as ``$HOME/.hgrc`` of the
user or service running Mercurial.

``users``
    Comma-separated list of trusted users.

``groups``
    Comma-separated list of trusted groups.
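
For example, in the ``$HOME/.hgrc`` of the account that serves the
repositories (user and group names here are placeholders)::

  [trusted]
  users = alice, bob
  groups = hgadmins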


``ui``
------

User interface controls.

``archivemeta``
    Whether to include the .hg_archival.txt file containing metadata
    (hashes for the repository base and for tip) in archives created
    by the :hg:`archive` command or downloaded via hgweb.
    (default: True)

``askusername``
    Whether to prompt for a username when committing. If True, and
    neither ``$HGUSER`` nor ``$EMAIL`` has been specified, then the user will
    be prompted to enter a username. If no username is entered, the
    default ``USER@HOST`` is used instead.
    (default: False)

``clonebundles``
    Whether the "clone bundles" feature is enabled.

    When enabled, :hg:`clone` may download and apply a server-advertised
    bundle file from a URL instead of using the normal exchange mechanism.

    This can likely result in faster and more reliable clones.

    (default: True)

``clonebundlefallback``
    Whether failure to apply an advertised "clone bundle" from a server
    should result in fallback to a regular clone.

    This is disabled by default because servers advertising "clone
    bundles" often do so to reduce server load. If advertised bundles
    start failing en masse and clients automatically fall back to a regular
    clone, this would add significant and unexpected load to the server
    since the server is expecting clone operations to be offloaded to
    pre-generated bundles. Failing fast (the default behavior) ensures
    clients don't overwhelm the server when "clone bundle" application
    fails.

    (default: False)

``clonebundleprefers``
    Defines preferences for which "clone bundles" to use.

    Servers advertising "clone bundles" may advertise multiple available
    bundles. Each bundle may have different attributes, such as the bundle
    type and compression format. This option is used to prefer a particular
    bundle over another.

    The following keys are defined by Mercurial:

    BUNDLESPEC
        A bundle type specifier. These are strings passed to
        :hg:`bundle -t`, e.g. ``gzip-v2`` or ``bzip2-v1``.

    COMPRESSION
        The compression format of the bundle, e.g. ``gzip`` and ``bzip2``.

    Server operators may define custom keys.

    Example values: ``COMPRESSION=bzip2``,
    ``BUNDLESPEC=gzip-v2, COMPRESSION=gzip``.

    By default, the first bundle advertised by the server is used.
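
    A preference list built from the documented example values, trying
    ``gzip-v2`` bundles first and then anything gzip-compressed::

      [ui]
      clonebundleprefers = BUNDLESPEC=gzip-v2, COMPRESSION=gzip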

``color``
    When to colorize output. Possible values are Boolean ("yes" or "no"),
    "debug", or "always". (default: "yes"). "yes" will use color whenever it
    seems possible. See :hg:`help color` for details.

``commitsubrepos``
    Whether to commit modified subrepositories when committing the
    parent repository. If False and one subrepository has uncommitted
    changes, abort the commit.
    (default: False)

``debug``
    Print debugging information. (default: False)

``editor``
    The editor to use during a commit. (default: ``$EDITOR`` or ``vi``)

``fallbackencoding``
    Encoding to try if it's not possible to decode the changelog using
    UTF-8. (default: ISO-8859-1)

``graphnodetemplate``
    (DEPRECATED) Use ``command-templates.graphnode`` instead.

``ignore``
    A file to read per-user ignore patterns from. This file should be
    in the same format as a repository-wide .hgignore file. Filenames
    are relative to the repository root. This option supports hook syntax,
    so if you want to specify multiple ignore files, you can do so by
    setting something like ``ignore.other = ~/.hgignore2``. For details
    of the ignore file format, see the ``hgignore(5)`` man page.
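
    For example, combining a primary ignore file with the documented
    ``ignore.other`` form (the paths are illustrative)::

      [ui]
      ignore = ~/.hgignore
      ignore.other = ~/.hgignore2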

``interactive``
    Whether to allow prompting the user. (default: True)

``interface``
    Select the default interface for interactive features (default: text).
    Possible values are 'text' and 'curses'.

``interface.chunkselector``
    Select the interface for change recording (e.g. :hg:`commit -i`).
    Possible values are 'text' and 'curses'.
    This config overrides the interface specified by ui.interface.

``large-file-limit``
    Largest file size that gives no memory use warning.
    Possible values are integers, or 0 to disable the check.
    The value is expressed in bytes by default, but standard units can be
    used for convenience (e.g. 10MB, 0.1GB, etc). (default: 10MB)

``logtemplate``
    (DEPRECATED) Use ``command-templates.log`` instead.

``merge``
    The conflict resolution program to use during a manual merge.
    For more information on merge tools see :hg:`help merge-tools`.
    For configuring merge tools see the ``[merge-tools]`` section.

``mergemarkers``
    Sets the merge conflict marker label styling. The ``detailed`` style
    uses the ``command-templates.mergemarker`` setting to style the labels.
    The ``basic`` style just uses 'local' and 'other' as the marker labels.
    One of ``basic`` or ``detailed``.
    (default: ``basic``)

``mergemarkertemplate``
    (DEPRECATED) Use ``command-templates.mergemarker`` instead.

``message-output``
    Where to write status and error messages. (default: ``stdio``)

    ``channel``
        Use a separate channel for structured output. (Command-server only)
    ``stderr``
        Everything to stderr.
    ``stdio``
        Status to stdout, and errors to stderr.

``origbackuppath``
    The path to a directory used to store generated .orig files. If the path is
    not a directory, one will be created. If set, files stored in this
    directory have the same name as the original file and do not have a .orig
    suffix.

``paginate``
    Control the pagination of command output (default: True). See :hg:`help pager`
    for details.

``patch``
    An optional external tool that ``hg import`` and some extensions
    will use for applying patches. By default Mercurial uses an
    internal patch utility. The external tool must work like the common
    Unix ``patch`` program. In particular, it must accept a ``-p``
    argument to strip patch headers, a ``-d`` argument to specify the
    current directory, a file name to patch, and a patch file to take
    from stdin.

    It is possible to specify a patch tool together with extra
    arguments. For example, setting this option to ``patch --merge``
    will use the ``patch`` program with its 2-way merge option.
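
    For instance, using the example from the paragraph above::

      [ui]
      patch = patch --merge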

``portablefilenames``
    Check for portable filenames. Can be ``warn``, ``ignore`` or ``abort``.
    (default: ``warn``)

    ``warn``
        Print a warning message on POSIX platforms if a file with a non-portable
        filename is added (e.g. a file with a name that can't be created on
        Windows because it contains reserved parts like ``AUX``, reserved
        characters like ``:``, or would cause a case collision with an existing
        file).

    ``ignore``
        Don't print a warning.

    ``abort``
        The command is aborted.

    ``true``
        Alias for ``warn``.

    ``false``
        Alias for ``ignore``.

    .. container:: windows

        On Windows, this configuration option is ignored and the command aborted.

``pre-merge-tool-output-template``
    (DEPRECATED) Use ``command-templates.pre-merge-tool-output`` instead.

``quiet``
    Reduce the amount of output printed.
    (default: False)

``relative-paths``
    Prefer relative paths in the UI.

``remotecmd``
    Remote command to use for clone/push/pull operations.
    (default: ``hg``)

``report_untrusted``
    Warn if a ``.hg/hgrc`` file is ignored due to not being owned by a
    trusted user or group.
    (default: True)

``slash``
    (Deprecated. Use the ``slashpath`` template filter instead.)

    Display paths using a slash (``/``) as the path separator. This
    only makes a difference on systems where the default path
    separator is not the slash character (e.g. Windows uses the
    backslash character (``\``)).
    (default: False)

``statuscopies``
    Display copies in the status command.

``ssh``
    Command to use for SSH connections. (default: ``ssh``)

``ssherrorhint``
    A hint shown to the user in the case of an SSH error (e.g.
    ``Please see http://company/internalwiki/ssh.html``).

``strict``
    Require exact command names, instead of allowing unambiguous
    abbreviations. (default: False)

``style``
    Name of the style to use for command output.

``supportcontact``
    A URL where users should report a Mercurial traceback. Use this if you are a
    large organisation with its own Mercurial deployment process and crash
    reports should be addressed to your internal support.

``textwidth``
    Maximum width of help text. A longer line generated by ``hg help`` or
    ``hg subcommand --help`` will be broken after white space to get this
    width or the terminal width, whichever comes first.
    A non-positive value will disable this and the terminal width will be
    used. (default: 78)

``timeout``
    The timeout used when a lock is held (in seconds); a negative value
    means no timeout. (default: 600)

``timeout.warn``
    Time (in seconds) before a warning is printed about a held lock. A negative
    value means no warning. (default: 0)
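
    For example, to give up waiting for a lock after one minute but warn
    after ten seconds (illustrative values)::

      [ui]
      timeout = 60
      timeout.warn = 10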

``traceback``
    Mercurial always prints a traceback when an unknown exception
    occurs. Setting this to True will make Mercurial print a traceback
    on all exceptions, even those recognized by Mercurial (such as
    IOError or MemoryError). (default: False)

``tweakdefaults``

    By default Mercurial's behavior changes very little from release
    to release, but over time the recommended config settings
    shift. Enable this config to opt in to automatic tweaks to
    Mercurial's behavior over time. This config setting will have no
    effect if ``HGPLAIN`` is set or ``HGPLAINEXCEPT`` is set and does
    not include ``tweakdefaults``. (default: False)

    It currently means::

      .. tweakdefaultsmarker

``username``
    The committer of a changeset created when running "commit".
    Typically a person's name and email address, e.g. ``Fred Widget
    <fred@example.com>``. Environment variables in the
    username are expanded.

    (default: ``$EMAIL`` or ``username@hostname``. If the username in
    hgrc is empty, e.g. if the system admin set ``username =`` in the
    system hgrc, it has to be specified manually or in a different
    hgrc file)
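
    For example, using the name and address format shown above::

      [ui]
      username = Fred Widget <fred@example.com>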

``verbose``
    Increase the amount of output printed. (default: False)


``command-templates``
---------------------

Templates used for customizing the output of commands.

``graphnode``
    The template used to print changeset nodes in an ASCII revision graph.
    (default: ``{graphnode}``)

``log``
    Template string for commands that print changesets.

``mergemarker``
    The template used to print the commit description next to each conflict
    marker during merge conflicts. See :hg:`help templates` for the template
    format.

    Defaults to showing the hash, tags, branches, bookmarks, author, and
    the first line of the commit description.

    If you use non-ASCII characters in names for tags, branches, bookmarks,
    authors, and/or commit descriptions, you must pay attention to encodings of
    managed files. At template expansion, non-ASCII characters use the encoding
    specified by the ``--encoding`` global option, ``HGENCODING`` or other
    environment variables that govern your locale. If the encoding of the merge
    markers is different from the encoding of the merged files,
    serious problems may occur.

    Can be overridden per-merge-tool, see the ``[merge-tools]`` section.

``oneline-summary``
    A template used by `hg rebase` and other commands for showing a one-line
    summary of a commit. If the template configured here is longer than one
    line, then only the first line is used.

    The template can be overridden per command by defining a template in
    `oneline-summary.<command>`, where `<command>` can be e.g. "rebase".
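
    A minimal sketch using common template keywords (see :hg:`help templates`
    for what is available)::

      [command-templates]
      oneline-summary = {rev}:{node|short} {desc|firstline}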

``pre-merge-tool-output``
    A template that is printed before executing an external merge tool. This can
    be used to print out additional context that might be useful to have during
    the conflict resolution, such as the description of the various commits
    involved or bookmarks/tags.

    Additional information is available in the ``local``, ``base``, and ``other``
    dicts. For example: ``{local.label}``, ``{base.name}``, or
    ``{other.islink}``.
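
    A sketch built from the fields listed above (the message text is
    arbitrary)::

      [command-templates]
      pre-merge-tool-output = Running merge tool for {base.name}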


``web``
-------

Web interface configuration. The settings in this section apply to
both the builtin webserver (started by :hg:`serve`) and the script you
run through a webserver (``hgweb.cgi`` and the derivatives for FastCGI
and WSGI).

The Mercurial webserver does no authentication (it does not prompt for
usernames and passwords to validate *who* users are), but it does do
authorization (it grants or denies access for *authenticated users*
based on settings in this section). You must either configure your
webserver to do authentication for you, or disable the authorization
checks.

For a quick setup in a trusted environment, e.g., a private LAN, where
you want it to accept pushes from anybody, you can use the following
command line::

  $ hg --config web.allow-push=* --config web.push_ssl=False serve

Note that this will allow anybody to push anything to the server and
that this should not be used for public servers.

The full set of options is:

``accesslog``
    Where to output the access log. (default: stdout)

``address``
    Interface address to bind to. (default: all)

``allow-archive``
    List of archive formats (bz2, gz, zip) allowed for downloading.
    (default: empty)

``allowbz2``
    (DEPRECATED) Whether to allow .tar.bz2 downloading of repository
    revisions.
    (default: False)

``allowgz``
    (DEPRECATED) Whether to allow .tar.gz downloading of repository
    revisions.
    (default: False)

``allow-pull``
    Whether to allow pulling from the repository. (default: True)

``allow-push``
    Whether to allow pushing to the repository. If empty or not set,
    pushing is not allowed. If the special value ``*``, any remote
    user can push, including unauthenticated users. Otherwise, the
    remote user must have been authenticated, and the authenticated
    user name must be present in this list. The contents of the
    allow-push list are examined after the deny_push list.

``allow_read``
    If the user has not already been denied repository access due to
    the contents of deny_read, this list determines whether to grant
    repository access to the user. If this list is not empty, and the
    user is unauthenticated or not present in the list, then access is
    denied for the user. If the list is empty or not set, then access
    is permitted to all users by default. Setting allow_read to the
    special value ``*`` is equivalent to it not being set (i.e. access
    is permitted to all users). The contents of the allow_read list are
    examined after the deny_read list.
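
    For example, an access policy that lets everyone read but restricts
    pushing to two authenticated accounts (the user names are placeholders)::

      [web]
      allow_read = *
      allow-push = alice, bob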

``allowzip``
    (DEPRECATED) Whether to allow .zip downloading of repository
    revisions. This feature creates temporary files.
    (default: False)

``archivesubrepos``
    Whether to recurse into subrepositories when archiving.
    (default: False)

``baseurl``
    Base URL to use when publishing URLs in other locations, so
    third-party tools like email notification hooks can construct
    URLs. Example: ``http://hgserver/repos/``.

``cacerts``
    Path to a file containing a list of PEM encoded certificate
    authority certificates. Environment variables and ``~user``
    constructs are expanded in the filename. If specified on the
    client, then it will verify the identity of remote HTTPS servers
    with these certificates.

    To disable SSL verification temporarily, specify ``--insecure`` on the
    command line.

    You can use OpenSSL's CA certificate file if your platform has
    one. On most Linux systems this will be
    ``/etc/ssl/certs/ca-certificates.crt``. Otherwise you will have to
    generate this file manually. The form must be as follows::

      -----BEGIN CERTIFICATE-----
      ... (certificate in base64 PEM encoding) ...
      -----END CERTIFICATE-----
      -----BEGIN CERTIFICATE-----
      ... (certificate in base64 PEM encoding) ...
      -----END CERTIFICATE-----

``cache``
    Whether to support caching in hgweb. (default: True)

``certificate``
    Certificate to use when running :hg:`serve`.

``collapse``
    With ``descend`` enabled, repositories in subdirectories are shown at
    a single level alongside repositories in the current path. With
    ``collapse`` also enabled, repositories residing at a deeper level than
    the current path are grouped behind navigable directory entries that
    lead to the locations of these repositories. In effect, this setting
    collapses each collection of repositories found within a subdirectory
    into a single entry for that subdirectory. (default: False)

``comparisoncontext``
    Number of lines of context to show in side-by-side file comparison. If
    negative or the value ``full``, whole files are shown. (default: 5)

    This setting can be overridden by a ``context`` request parameter to the
    ``comparison`` command, taking the same values.

``contact``
    Name or email address of the person in charge of the repository.
    (default: ui.username or ``$EMAIL`` or "unknown" if unset or empty)
3169 |
|
3175 | |||
3170 | ``csp`` |
|
3176 | ``csp`` | |
3171 | Send a ``Content-Security-Policy`` HTTP header with this value. |
|
3177 | Send a ``Content-Security-Policy`` HTTP header with this value. | |
3172 |
|
3178 | |||
3173 | The value may contain a special string ``%nonce%``, which will be replaced |
|
3179 | The value may contain a special string ``%nonce%``, which will be replaced | |
3174 | by a randomly-generated one-time use value. If the value contains |
|
3180 | by a randomly-generated one-time use value. If the value contains | |
3175 | ``%nonce%``, ``web.cache`` will be disabled, as caching undermines the |
|
3181 | ``%nonce%``, ``web.cache`` will be disabled, as caching undermines the | |
3176 | one-time property of the nonce. This nonce will also be inserted into |
|
3182 | one-time property of the nonce. This nonce will also be inserted into | |
3177 | ``<script>`` elements containing inline JavaScript. |
|
3183 | ``<script>`` elements containing inline JavaScript. | |
3178 |
|
3184 | |||
3179 | Note: lots of HTML content sent by the server is derived from repository |
|
3185 | Note: lots of HTML content sent by the server is derived from repository | |
3180 | data. Please consider the potential for malicious repository data to |
|
3186 | data. Please consider the potential for malicious repository data to | |
3181 | "inject" itself into generated HTML content as part of your security |
|
3187 | "inject" itself into generated HTML content as part of your security | |
3182 | threat model. |
|
3188 | threat model. | |
3183 |
|
3189 | |||
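
    For example, the following is an illustrative sketch (the policy value
    is an example, not a recommendation)::

        [web]
        csp = default-src 'none'; script-src 'self' 'nonce-%nonce%'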

``deny_push``
    Whether to deny pushing to the repository. If empty or not set,
    push is not denied. If the special value ``*``, all remote users are
    denied push. Otherwise, unauthenticated users are all denied, and
    any authenticated user name present in this list is also denied. The
    contents of the deny_push list are examined before the allow-push list.

``deny_read``
    Whether to deny reading/viewing of the repository. If this list is
    not empty, unauthenticated users are all denied, and any
    authenticated user name present in this list is also denied access to
    the repository. If set to the special value ``*``, all remote users
    are denied access (rarely needed ;). If deny_read is empty or not set,
    the determination of repository access depends on the presence and
    content of the allow_read list (see description). If both
    deny_read and allow_read are empty or not set, then access is
    permitted to all users by default. If the repository is being
    served via hgwebdir, denied users will not be able to see it in
    the list of repositories. The contents of the deny_read list have
    priority over (are examined before) the contents of the allow_read
    list.
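
    For example, the following sketch (user names are hypothetical) denies
    ``mallory`` outright, then grants read access only to ``alice`` and
    ``bob``; deny_read is examined before allow_read::

        [web]
        deny_read = mallory
        allow_read = alice, bob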

``descend``
    hgwebdir indexes will not descend into subdirectories. Only repositories
    directly in the current path will be shown (other repositories are still
    available from the index corresponding to their containing path).

``description``
    Textual description of the repository's purpose or contents.
    (default: "unknown")

``encoding``
    Character encoding name. (default: the current locale charset)
    Example: "UTF-8".

``errorlog``
    Where to output the error log. (default: stderr)

``guessmime``
    Control MIME types for raw download of file content.
    Set to True to let hgweb guess the content type from the file
    extension. This will serve HTML files as ``text/html`` and might
    allow cross-site scripting attacks when serving untrusted
    repositories. (default: False)

``hidden``
    Whether to hide the repository in the hgwebdir index.
    (default: False)

``ipv6``
    Whether to use IPv6. (default: False)

``labels``
    List of string *labels* associated with the repository.

    Labels are exposed as a template keyword and can be used to customize
    output. e.g. the ``index`` template can group or filter repositories
    by labels and the ``summary`` template can display additional content
    if a specific label is present.

``logoimg``
    File name of the logo image that some templates display on each page.
    The file name is relative to ``staticurl``. That is, the full path to
    the logo image is "staticurl/logoimg".
    If unset, ``hglogo.png`` will be used.

``logourl``
    Base URL to use for logos. If unset, ``https://mercurial-scm.org/``
    will be used.

``maxchanges``
    Maximum number of changes to list on the changelog. (default: 10)

``maxfiles``
    Maximum number of files to list per changeset. (default: 10)

``maxshortchanges``
    Maximum number of changes to list on the shortlog, graph or filelog
    pages. (default: 60)

``name``
    Repository name to use in the web interface.
    (default: current working directory)

``port``
    Port to listen on. (default: 8000)

``prefix``
    Prefix path to serve from. (default: '' (server root))
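
    As a hedged example (hostnames and paths are illustrative), a server
    published behind a reverse proxy might combine ``prefix``, ``port``
    and ``baseurl``::

        [web]
        baseurl = https://example.com/hg/
        prefix = /hg
        port = 8000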

``push_ssl``
    Whether to require that inbound pushes be transported over SSL to
    prevent password sniffing. (default: True)

``refreshinterval``
    How frequently directory listings re-scan the filesystem for new
    repositories, in seconds. This is relevant when wildcards are used
    to define paths. Depending on how much filesystem traversal is
    required, refreshing may negatively impact performance.

    Values less than or equal to 0 always refresh.
    (default: 20)

``server-header``
    Value for HTTP ``Server`` response header.

``static``
    Directory where static files are served from.

``staticurl``
    Base URL to use for static files. If unset, static files (e.g. the
    hgicon.png favicon) will be served by the CGI script itself. Use
    this setting to serve them directly with the HTTP server.
    Example: ``http://hgserver/static/``.

``stripes``
    How many lines a "zebra stripe" should span in multi-line output.
    Set to 0 to disable. (default: 1)

``style``
    Which template map style to use. The available options are the names of
    subdirectories in the HTML templates path. (default: ``paper``)
    Example: ``monoblue``.

``templates``
    Where to find the HTML templates. The default path to the HTML templates
    can be obtained from ``hg debuginstall``.

``websub``
----------

Web substitution filter definition. You can use this section to
define a set of regular expression substitution patterns which
let you automatically modify the hgweb server output.

The default hgweb templates only apply these substitution patterns
on the revision description fields. You can apply them anywhere
you want when you create your own templates by adding calls to the
"websub" filter (usually after calling the "escape" filter).

This can be used, for example, to convert issue references to links
to your issue tracker, or to convert "markdown-like" syntax into
HTML (see the examples below).

Each entry in this section names a substitution filter.
The value of each entry defines the substitution expression itself.
The websub expressions follow the old interhg extension syntax,
which in turn imitates the Unix sed replacement syntax::

    patternname = s/SEARCH_REGEX/REPLACE_EXPRESSION/[i]

You can use any separator other than "/". The final "i" is optional
and indicates that the search must be case insensitive.

Examples::

    [websub]
    issues = s|issue(\d+)|<a href="http://bts.example.org/issue\1">issue\1</a>|i
    italic = s/\b_(\S+)_\b/<i>\1<\/i>/
    bold = s/\*\b(\S+)\b\*/<b>\1<\/b>/

``worker``
----------

Parallel master/worker configuration. We currently perform working
directory updates in parallel on Unix-like systems, which greatly
helps performance.

``enabled``
    Whether to enable use of the worker code.
    (default: true)

``numcpus``
    Number of CPUs to use for parallel operations. A zero or
    negative value is treated as ``use the default``.
    (default: 4 or the number of CPUs on the system, whichever is larger)

``backgroundclose``
    Whether to enable closing file handles on background threads during certain
    operations. Some platforms aren't very efficient at closing file
    handles that have been written or appended to. By performing file closing
    on background threads, file write rate can increase substantially.
    (default: true on Windows, false elsewhere)

``backgroundcloseminfilecount``
    Minimum number of files required to trigger background file closing.
    Operations not writing this many files won't start background close
    threads.
    (default: 2048)

``backgroundclosemaxqueue``
    The maximum number of opened file handles waiting to be closed in the
    background. This option only has an effect if ``backgroundclose`` is
    enabled.
    (default: 384)

``backgroundclosethreadcount``
    Number of threads to process background file closes. Only relevant if
    ``backgroundclose`` is enabled.
    (default: 4)
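
As a rough sketch (the values are illustrative, not tuned recommendations),
a server hosting large repositories on Windows might set::

    [worker]
    numcpus = 8
    backgroundclose = true
    backgroundcloseminfilecount = 1024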

@@ -1,663 +1,670 @@
# httppeer.py - HTTP repository proxy classes for mercurial
#
# Copyright 2005, 2006 Olivia Mackall <olivia@selenic.com>
# Copyright 2006 Vadim Gelfer <vadim.gelfer@gmail.com>
#
# This software may be used and distributed according to the terms of the
# GNU General Public License version 2 or any later version.


import errno
import io
import os
import socket
import struct

from concurrent import futures
from .i18n import _
from .pycompat import getattr
from . import (
    bundle2,
    error,
    httpconnection,
    pycompat,
    statichttprepo,
    url as urlmod,
    util,
    wireprotov1peer,
)
from .utils import urlutil

httplib = util.httplib
urlerr = util.urlerr
urlreq = util.urlreq


def encodevalueinheaders(value, header, limit):
    """Encode a string value into multiple HTTP headers.

    ``value`` will be encoded into 1 or more HTTP headers with the names
    ``header-<N>`` where ``<N>`` is an integer starting at 1. Each header
    name + value will be at most ``limit`` bytes long.

    Returns an iterable of 2-tuples consisting of header names and
    values as native strings.
    """
    # HTTP Headers are ASCII. Python 3 requires them to be unicodes,
    # not bytes. This function always takes bytes in as arguments.
    fmt = pycompat.strurl(header) + r'-%s'
    # Note: it is *NOT* a bug that the last bit here is a bytestring
    # and not a unicode: we're just getting the encoded length anyway,
    # and using an r-string to make it portable between Python 2 and 3
    # doesn't work because then the \r is a literal backslash-r
    # instead of a carriage return.
    valuelen = limit - len(fmt % '000') - len(b': \r\n')
    result = []

    n = 0
    for i in range(0, len(value), valuelen):
        n += 1
        result.append((fmt % str(n), pycompat.strurl(value[i : i + valuelen])))

    return result
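
# A minimal sketch of the encoding above (hypothetical values): with a
# 30-byte limit, fmt is 'X-HgArg-%s', so each header can carry
# 30 - len('X-HgArg-000') - len(': \r\n') = 15 bytes of value:
#
#   encodevalueinheaders(b'cmd=heads&key=1', b'X-HgArg', 30)
#   -> [('X-HgArg-1', 'cmd=heads&key=1')]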


class _multifile:
    def __init__(self, *fileobjs):
        for f in fileobjs:
            if not util.safehasattr(f, 'length'):
                raise ValueError(
                    b'_multifile only supports file objects that '
                    b'have a length but this one does not:',
                    type(f),
                    f,
                )
        self._fileobjs = fileobjs
        self._index = 0

    @property
    def length(self):
        return sum(f.length for f in self._fileobjs)

    def read(self, amt=None):
        if amt <= 0:
            return b''.join(f.read() for f in self._fileobjs)
        parts = []
        while amt and self._index < len(self._fileobjs):
            parts.append(self._fileobjs[self._index].read(amt))
            got = len(parts[-1])
            if got < amt:
                self._index += 1
            amt -= got
        return b''.join(parts)

    def seek(self, offset, whence=os.SEEK_SET):
        if whence != os.SEEK_SET:
            raise NotImplementedError(
                b'_multifile does not support anything other'
                b' than os.SEEK_SET for whence on seek()'
            )
        if offset != 0:
            raise NotImplementedError(
                b'_multifile only supports seeking to start, but that '
                b'could be fixed if you need it'
            )
        for f in self._fileobjs:
            f.seek(0)
        self._index = 0
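
# Usage sketch for the class above (hypothetical values), mirroring how
# makev1commandrequest() tags BytesIO objects with a ``length`` attribute:
#
#   a = io.BytesIO(b'foo'); a.length = 3
#   b = io.BytesIO(b'bar'); b.length = 3
#   _multifile(a, b).read(6)  ->  b'foobar'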


def makev1commandrequest(
    ui,
    requestbuilder,
    caps,
    capablefn,
    repobaseurl,
    cmd,
    args,
    remotehidden=False,
):
    """Make an HTTP request to run a command for a version 1 client.

    ``caps`` is a set of known server capabilities. The value may be
    None if capabilities are not yet known.

    ``capablefn`` is a function to evaluate a capability.

    ``cmd``, ``args``, and ``data`` define the command, its arguments, and
    raw data to pass to it.
    """
    if cmd == b'pushkey':
        args[b'data'] = b''
    data = args.pop(b'data', None)
    headers = args.pop(b'headers', {})

    ui.debug(b"sending %s command\n" % cmd)
    q = [(b'cmd', cmd)]
    if remotehidden:
        q.append(('access-hidden', '1'))
    headersize = 0
    # Important: don't use self.capable() here or else you end up
    # with infinite recursion when trying to look up capabilities
    # for the first time.
    postargsok = caps is not None and b'httppostargs' in caps

    # Send arguments via POST.
    if postargsok and args:
        strargs = urlreq.urlencode(sorted(args.items()))
        if not data:
            data = strargs
        else:
            if isinstance(data, bytes):
                i = io.BytesIO(data)
                i.length = len(data)
                data = i
            argsio = io.BytesIO(strargs)
            argsio.length = len(strargs)
            data = _multifile(argsio, data)
        headers['X-HgArgs-Post'] = len(strargs)
    elif args:
        # Calling self.capable() can infinite loop if we are calling
        # "capabilities". But that command should never accept wire
        # protocol arguments. So this should never happen.
        assert cmd != b'capabilities'
        httpheader = capablefn(b'httpheader')
        if httpheader:
            headersize = int(httpheader.split(b',', 1)[0])

        # Send arguments via HTTP headers.
        if headersize > 0:
            # The headers can typically carry more data than the URL.
            encoded_args = urlreq.urlencode(sorted(args.items()))
            for header, value in encodevalueinheaders(
                encoded_args, b'X-HgArg', headersize
            ):
                headers[header] = value
        # Send arguments via query string (Mercurial <1.9).
        else:
            q += sorted(args.items())

    qs = b'?%s' % urlreq.urlencode(q)
    cu = b"%s%s" % (repobaseurl, qs)
    size = 0
    if util.safehasattr(data, 'length'):
        size = data.length
    elif data is not None:
        size = len(data)
    if data is not None and 'Content-Type' not in headers:
        headers['Content-Type'] = 'application/mercurial-0.1'

    # Tell the server we accept application/mercurial-0.2 and multiple
    # compression formats if the server is capable of emitting those
    # payloads.
    # Note: Keep this set empty by default, as client advertisement of
    # protocol parameters should only occur after the handshake.
    protoparams = set()

    mediatypes = set()
    if caps is not None:
        mt = capablefn(b'httpmediatype')
        if mt:
            protoparams.add(b'0.1')
            mediatypes = set(mt.split(b','))

        protoparams.add(b'partial-pull')

    if b'0.2tx' in mediatypes:
        protoparams.add(b'0.2')

    if b'0.2tx' in mediatypes and capablefn(b'compression'):
        # We /could/ compare supported compression formats and prune
        # non-mutually supported or error if nothing is mutually supported.
        # For now, send the full list to the server and have it error.
        comps = [
            e.wireprotosupport().name
            for e in util.compengines.supportedwireengines(util.CLIENTROLE)
        ]
        protoparams.add(b'comp=%s' % b','.join(comps))

    if protoparams:
        protoheaders = encodevalueinheaders(
            b' '.join(sorted(protoparams)), b'X-HgProto', headersize or 1024
        )
        for header, value in protoheaders:
            headers[header] = value

    varyheaders = []
    for header in headers:
        if header.lower().startswith('x-hg'):
            varyheaders.append(header)

    if varyheaders:
        headers['Vary'] = ','.join(sorted(varyheaders))

    req = requestbuilder(pycompat.strurl(cu), data, headers)

    if data is not None:
        ui.debug(b"sending %d bytes\n" % size)
        req.add_unredirected_header('Content-Length', '%d' % size)

    return req, cu, qs


def sendrequest(ui, opener, req):
    """Send a prepared HTTP request.

    Returns the response object.
    """
    dbg = ui.debug
    if ui.debugflag and ui.configbool(b'devel', b'debug.peer-request'):
        line = b'devel-peer-request: %s\n'
        dbg(
            line
            % b'%s %s'
            % (
                pycompat.bytesurl(req.get_method()),
                pycompat.bytesurl(req.get_full_url()),
            )
        )
        hgargssize = None

        for header, value in sorted(req.header_items()):
            header = pycompat.bytesurl(header)
            value = pycompat.bytesurl(value)
            if header.startswith(b'X-hgarg-'):
                if hgargssize is None:
                    hgargssize = 0
                hgargssize += len(value)
            else:
                dbg(line % b'  %s %s' % (header, value))

        if hgargssize is not None:
            dbg(
                line
                % b'  %d bytes of commands arguments in headers'
                % hgargssize
            )
        data = req.data
        if data is not None:
            length = getattr(data, 'length', None)
            if length is None:
                length = len(data)
            dbg(line % b'  %d bytes of data' % length)

    start = util.timer()

    res = None
    try:
        res = opener.open(req)
    except urlerr.httperror as inst:
        if inst.code == 401:
            raise error.Abort(_(b'authorization failed'))
        raise
    except httplib.HTTPException as inst:
        ui.debug(
            b'http error requesting %s\n'
            % urlutil.hidepassword(req.get_full_url())
        )
        ui.traceback()
        raise IOError(None, inst)
    finally:
        if ui.debugflag and ui.configbool(b'devel', b'debug.peer-request'):
            code = res.code if res else -1
            dbg(
                line
                % b'  finished in %.4f seconds (%d)'
                % (util.timer() - start, code)
            )

    # Insert error handlers for common I/O failures.
    urlmod.wrapresponse(res)

    return res


class RedirectedRepoError(error.RepoError):
    def __init__(self, msg, respurl):
        super(RedirectedRepoError, self).__init__(msg)
        self.respurl = respurl


def parsev1commandresponse(ui, baseurl, requrl, qs, resp, compressible):
    # record the url we got redirected to
    redirected = False
    respurl = pycompat.bytesurl(resp.geturl())
    if respurl.endswith(qs):
        respurl = respurl[: -len(qs)]
        qsdropped = False
    else:
        qsdropped = True

    if baseurl.rstrip(b'/') != respurl.rstrip(b'/'):
        redirected = True
        if not ui.quiet:
            ui.warn(_(b'real URL is %s\n') % respurl)

    try:
        proto = pycompat.bytesurl(resp.getheader('content-type', ''))
    except AttributeError:
        proto = pycompat.bytesurl(resp.headers.get('content-type', ''))

    safeurl = urlutil.hidepassword(baseurl)
    if proto.startswith(b'application/hg-error'):
        raise error.OutOfBandError(resp.read())

    # Pre 1.0 versions of Mercurial used text/plain and
    # application/hg-changegroup. We don't support such old servers.
    if not proto.startswith(b'application/mercurial-'):
        ui.debug(b"requested URL: '%s'\n" % urlutil.hidepassword(requrl))
        msg = _(
            b"'%s' does not appear to be an hg repository:\n"
            b"---%%<--- (%s)\n%s\n---%%<---\n"
        ) % (safeurl, proto or b'no content-type', resp.read(1024))

        # Some servers may strip the query string from the redirect. We
        # raise a special error type so callers can react to this specially.
        if redirected and qsdropped:
            raise RedirectedRepoError(msg, respurl)
        else:
            raise error.RepoError(msg)

    try:
        subtype = proto.split(b'-', 1)[1]

        version_info = tuple([int(n) for n in subtype.split(b'.')])
    except ValueError:
        raise error.RepoError(
            _(b"'%s' sent a broken Content-Type header (%s)") % (safeurl, proto)
        )

    # TODO consider switching to a decompression reader that uses
    # generators.
    if version_info == (0, 1):
        if compressible:
            resp = util.compengines[b'zlib'].decompressorreader(resp)

    elif version_info == (0, 2):
        # application/mercurial-0.2 always identifies the compression
        # engine in the payload header.
        elen = struct.unpack(b'B', util.readexactly(resp, 1))[0]
        ename = util.readexactly(resp, elen)
        engine = util.compengines.forwiretype(ename)

        resp = engine.decompressorreader(resp)
    else:
        raise error.RepoError(
            _(b"'%s' uses newer protocol %s") % (safeurl, subtype)
        )

    return respurl, proto, resp
390 |
|
390 | |||
391 |
|
391 | |||
392 | class httppeer(wireprotov1peer.wirepeer): |
|
392 | class httppeer(wireprotov1peer.wirepeer): | |
393 | def __init__( |
|
393 | def __init__( | |
394 | self, ui, path, url, opener, requestbuilder, caps, remotehidden=False |
|
394 | self, ui, path, url, opener, requestbuilder, caps, remotehidden=False | |
395 | ): |
|
395 | ): | |
396 | super().__init__(ui, path=path, remotehidden=remotehidden) |
|
396 | super().__init__(ui, path=path, remotehidden=remotehidden) | |
397 | self._url = url |
|
397 | self._url = url | |
398 | self._caps = caps |
|
398 | self._caps = caps | |
399 | self.limitedarguments = caps is not None and b'httppostargs' not in caps |
|
399 | self.limitedarguments = caps is not None and b'httppostargs' not in caps | |
400 | self._urlopener = opener |
|
400 | self._urlopener = opener | |
401 | self._requestbuilder = requestbuilder |
|
401 | self._requestbuilder = requestbuilder | |
402 | self._remotehidden = remotehidden |
|
402 | self._remotehidden = remotehidden | |
403 |
|
403 | |||
404 | def __del__(self): |
|
404 | def __del__(self): | |
405 | for h in self._urlopener.handlers: |
|
405 | for h in self._urlopener.handlers: | |
406 | h.close() |
|
406 | h.close() | |
407 | getattr(h, "close_all", lambda: None)() |
|
407 | getattr(h, "close_all", lambda: None)() | |
408 |
|
408 | |||
409 | # Begin of ipeerconnection interface. |
|
409 | # Begin of ipeerconnection interface. | |
410 |
|
410 | |||
411 | def url(self): |
|
411 | def url(self): | |
412 | return self.path.loc |
|
412 | return self.path.loc | |
413 |
|
413 | |||
414 | def local(self): |
|
414 | def local(self): | |
415 | return None |
|
415 | return None | |
416 |
|
416 | |||
417 | def canpush(self): |
|
417 | def canpush(self): | |
418 | return True |
|
418 | return True | |
419 |
|
419 | |||
420 | def close(self): |
|
420 | def close(self): | |
421 | try: |
|
421 | try: | |
422 | reqs, sent, recv = ( |
|
422 | reqs, sent, recv = ( | |
423 | self._urlopener.requestscount, |
|
423 | self._urlopener.requestscount, | |
424 | self._urlopener.sentbytescount, |
|
424 | self._urlopener.sentbytescount, | |
425 | self._urlopener.receivedbytescount, |
|
425 | self._urlopener.receivedbytescount, | |
426 | ) |
|
426 | ) | |
427 | except AttributeError: |
|
427 | except AttributeError: | |
428 | return |
|
428 | return | |
429 | self.ui.note( |
|
429 | self.ui.note( | |
430 | _( |
|
430 | _( | |
431 | b'(sent %d HTTP requests and %d bytes; ' |
|
431 | b'(sent %d HTTP requests and %d bytes; ' | |
432 | b'received %d bytes in responses)\n' |
|
432 | b'received %d bytes in responses)\n' | |
433 | ) |
|
433 | ) | |
434 | % (reqs, sent, recv) |
|
434 | % (reqs, sent, recv) | |
435 | ) |
|
435 | ) | |
436 |
|
436 | |||
437 | # End of ipeerconnection interface. |
|
437 | # End of ipeerconnection interface. | |
438 |
|
438 | |||
439 | # Begin of ipeercommands interface. |
|
439 | # Begin of ipeercommands interface. | |
440 |
|
440 | |||
441 | def capabilities(self): |
|
441 | def capabilities(self): | |
442 | return self._caps |
|
442 | return self._caps | |
443 |
|
443 | |||
|
444 | def _finish_inline_clone_bundle(self, stream): | |||
|
445 | # HTTP streams must hit the end to process the last empty | |||
|
446 | # chunk of Chunked-Encoding so the connection can be reused. | |||
|
447 | chunk = stream.read(1) | |||
|
448 | if chunk: | |||
|
449 | self._abort(error.ResponseError(_(b"unexpected response:"), chunk)) | |||
|
450 | ||||
     # End of ipeercommands interface.

     def _callstream(self, cmd, _compressible=False, **args):
         args = pycompat.byteskwargs(args)

         req, cu, qs = makev1commandrequest(
             self.ui,
             self._requestbuilder,
             self._caps,
             self.capable,
             self._url,
             cmd,
             args,
             self._remotehidden,
         )

         resp = sendrequest(self.ui, self._urlopener, req)

         self._url, ct, resp = parsev1commandresponse(
             self.ui, self._url, cu, qs, resp, _compressible
         )

         return resp

     def _call(self, cmd, **args):
         fp = self._callstream(cmd, **args)
         try:
             return fp.read()
         finally:
             # if using keepalive, allow connection to be reused
             fp.close()

     def _callpush(self, cmd, cg, **args):
         # have to stream bundle to a temp file because we do not have
         # http 1.1 chunked transfer.

         types = self.capable(b'unbundle')
         try:
             types = types.split(b',')
         except AttributeError:
             # servers older than d1b16a746db6 will send 'unbundle' as a
             # boolean capability. They only support headerless/uncompressed
             # bundles.
             types = [b""]
         for x in types:
             if x in bundle2.bundletypes:
                 type = x
                 break

         tempname = bundle2.writebundle(self.ui, cg, None, type)
         fp = httpconnection.httpsendfile(self.ui, tempname, b"rb")
         headers = {'Content-Type': 'application/mercurial-0.1'}

         try:
             r = self._call(cmd, data=fp, headers=headers, **args)
             vals = r.split(b'\n', 1)
             if len(vals) < 2:
                 raise error.ResponseError(_(b"unexpected response:"), r)
             return vals
         except urlerr.httperror:
             # Catch and re-raise these so we don't try and treat them
             # like generic socket errors. They lack any values in
             # .args on Python 3 which breaks our socket.error block.
             raise
         except socket.error as err:
             if err.args[0] in (errno.ECONNRESET, errno.EPIPE):
                 raise error.Abort(_(b'push failed: %s') % err.args[1])
             raise error.Abort(err.args[1])
         finally:
             fp.close()
             os.unlink(tempname)
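The `unbundle` capability consumed above is a comma-separated list of bundle types the server accepts for push; the loop picks the first advertised type this client knows how to write. A small self-contained sketch with a fabricated capability value:

    # Fabricated capability string; the list stands in for
    # bundle2.bundletypes on the client side.
    server_value = b'HG20,HG10GZ,HG10BZ,HG10UN'
    client_known = [b'HG20', b'HG10BZ', b'HG10UN']

    chosen = next(t for t in server_value.split(b',') if t in client_known)
    assert chosen == b'HG20'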

     def _calltwowaystream(self, cmd, fp, **args):
         filename = None
         try:
             # dump bundle to disk
             fd, filename = pycompat.mkstemp(prefix=b"hg-bundle-", suffix=b".hg")
             with os.fdopen(fd, "wb") as fh:
                 d = fp.read(4096)
                 while d:
                     fh.write(d)
                     d = fp.read(4096)
             # start http push
             with httpconnection.httpsendfile(self.ui, filename, b"rb") as fp_:
                 headers = {'Content-Type': 'application/mercurial-0.1'}
                 return self._callstream(cmd, data=fp_, headers=headers, **args)
         finally:
             if filename is not None:
                 os.unlink(filename)

     def _callcompressable(self, cmd, **args):
         return self._callstream(cmd, _compressible=True, **args)

     def _abort(self, exception):
         raise exception


 class queuedcommandfuture(futures.Future):
     """Wraps result() on command futures to trigger submission on call."""

     def result(self, timeout=None):
         if self.done():
             return futures.Future.result(self, timeout)

         self._peerexecutor.sendcommands()

         # sendcommands() will restore the original __class__ and self.result
         # will resolve to Future.result.
         return self.result(timeout)


 def performhandshake(ui, url, opener, requestbuilder):
     # The handshake is a request to the capabilities command.

     caps = None

     def capable(x):
         raise error.ProgrammingError(b'should not be called')

     args = {}

     req, requrl, qs = makev1commandrequest(
         ui, requestbuilder, caps, capable, url, b'capabilities', args
     )
     resp = sendrequest(ui, opener, req)

     # The server may redirect us to the repo root, stripping the
     # ?cmd=capabilities query string from the URL. The server would likely
     # return HTML in this case and ``parsev1commandresponse()`` would raise.
     # We catch this special case and re-issue the capabilities request against
     # the new URL.
     #
     # We should ideally not do this, as a redirect that drops the query
     # string from the URL is arguably a server bug. (Garbage in, garbage out).
     # However, Mercurial clients for several years appeared to handle this
     # issue without behavior degradation. And according to issue 5860, it may
     # be a longstanding bug in some server implementations. So we allow a
     # redirect that drops the query string to "just work."
     try:
         respurl, ct, resp = parsev1commandresponse(
             ui, url, requrl, qs, resp, compressible=False
         )
     except RedirectedRepoError as e:
         req, requrl, qs = makev1commandrequest(
             ui, requestbuilder, caps, capable, e.respurl, b'capabilities', args
         )
         resp = sendrequest(ui, opener, req)
         respurl, ct, resp = parsev1commandresponse(
             ui, url, requrl, qs, resp, compressible=False
         )

     try:
         rawdata = resp.read()
     finally:
         resp.close()

     if not ct.startswith(b'application/mercurial-'):
         raise error.ProgrammingError(b'unexpected content-type: %s' % ct)

     info = {b'v1capabilities': set(rawdata.split())}

     return respurl, info


 def _make_peer(
     ui, path, opener=None, requestbuilder=urlreq.request, remotehidden=False
 ):
     """Construct an appropriate HTTP peer instance.

     ``opener`` is an ``url.opener`` that should be used to establish
     connections and perform HTTP requests.

     ``requestbuilder`` is the type used for constructing HTTP requests.
     It exists as an argument so extensions can override the default.
     """
     if path.url.query or path.url.fragment:
         msg = _(b'unsupported URL component: "%s"')
         msg %= path.url.query or path.url.fragment
         raise error.Abort(msg)

     # urllib cannot handle URLs with embedded user or passwd.
     url, authinfo = path.url.authinfo()
     ui.debug(b'using %s\n' % url)

     opener = opener or urlmod.opener(ui, authinfo)

     respurl, info = performhandshake(ui, url, opener, requestbuilder)

     return httppeer(
         ui,
         path,
         respurl,
         opener,
         requestbuilder,
         info[b'v1capabilities'],
         remotehidden=remotehidden,
     )


 def make_peer(
     ui, path, create, intents=None, createopts=None, remotehidden=False
 ):
     if create:
         raise error.Abort(_(b'cannot create new http repository'))
     try:
         if path.url.scheme == b'https' and not urlmod.has_https:
             raise error.Abort(
                 _(b'Python support for SSL and HTTPS is not installed')
             )

         inst = _make_peer(ui, path, remotehidden=remotehidden)

         return inst
     except error.RepoError as httpexception:
         try:
             r = statichttprepo.make_peer(ui, b"static-" + path.loc, create)
             ui.note(_(b'(falling back to static-http)\n'))
             return r
         except error.RepoError:
             raise httpexception  # use the original http RepoError instead
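For context on the handshake result consumed by `_make_peer()`: the body of the `capabilities` response is a space-separated token list, stored as a set. A sketch with a fabricated response body, matching the shapes `performhandshake()` parses:

    # Fabricated response body; real servers advertise their own tokens.
    rawdata = b'lookup branchmap pushkey unbundle=HG10GZ,HG10BZ,HG10UN'
    info = {b'v1capabilities': set(rawdata.split())}

    assert b'lookup' in info[b'v1capabilities']
    assert any(c.startswith(b'unbundle=') for c in info[b'v1capabilities'])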
@@ -1,2073 +1,2079 @@
 # repository.py - Interfaces and base classes for repositories and peers.
 # coding: utf-8
 #
 # Copyright 2017 Gregory Szorc <gregory.szorc@gmail.com>
 #
 # This software may be used and distributed according to the terms of the
 # GNU General Public License version 2 or any later version.


 from ..i18n import _
 from .. import error
 from . import util as interfaceutil

 # Local repository feature string.

 # Revlogs are being used for file storage.
 REPO_FEATURE_REVLOG_FILE_STORAGE = b'revlogfilestorage'
 # The storage part of the repository is shared from an external source.
 REPO_FEATURE_SHARED_STORAGE = b'sharedstore'
 # LFS supported for backing file storage.
 REPO_FEATURE_LFS = b'lfs'
 # Repository supports being stream cloned.
 REPO_FEATURE_STREAM_CLONE = b'streamclone'
 # Repository supports (at least) some sidedata to be stored
 REPO_FEATURE_SIDE_DATA = b'side-data'
 # Files storage may lack data for all ancestors.
 REPO_FEATURE_SHALLOW_FILE_STORAGE = b'shallowfilestorage'

 REVISION_FLAG_CENSORED = 1 << 15
 REVISION_FLAG_ELLIPSIS = 1 << 14
 REVISION_FLAG_EXTSTORED = 1 << 13
 REVISION_FLAG_HASCOPIESINFO = 1 << 12

 REVISION_FLAGS_KNOWN = (
     REVISION_FLAG_CENSORED
     | REVISION_FLAG_ELLIPSIS
     | REVISION_FLAG_EXTSTORED
     | REVISION_FLAG_HASCOPIESINFO
 )

 CG_DELTAMODE_STD = b'default'
 CG_DELTAMODE_PREV = b'previous'
 CG_DELTAMODE_FULL = b'fulltext'
 CG_DELTAMODE_P1 = b'p1'


 ## Cache related constants:
 #
 # Used to control which cache should be warmed in a repo.updatecaches(…) call.

 # Warm branchmaps of all known repoview's filter-level
 CACHE_BRANCHMAP_ALL = b"branchmap-all"
 # Warm branchmaps of repoview's filter-level used by server
 CACHE_BRANCHMAP_SERVED = b"branchmap-served"
 # Warm internal changelog cache (eg: persistent nodemap)
 CACHE_CHANGELOG_CACHE = b"changelog-cache"
 # Warm full manifest cache
 CACHE_FULL_MANIFEST = b"full-manifest"
 # Warm file-node-tags cache
 CACHE_FILE_NODE_TAGS = b"file-node-tags"
 # Warm internal manifestlog cache (eg: persistent nodemap)
 CACHE_MANIFESTLOG_CACHE = b"manifestlog-cache"
 # Warm rev branch cache
 CACHE_REV_BRANCH = b"rev-branch-cache"
 # Warm tags' cache for the default repoview
 CACHE_TAGS_DEFAULT = b"tags-default"
 # Warm tags' cache for repoview's filter-level used by server
 CACHE_TAGS_SERVED = b"tags-served"

 # the caches to warm by default after a simple transaction
 # (this is a mutable set to let extensions update it)
 CACHES_DEFAULT = {
     CACHE_BRANCHMAP_SERVED,
 }

 # the caches to warm when warming all of them
 # (this is a mutable set to let extensions update it)
 CACHES_ALL = {
     CACHE_BRANCHMAP_SERVED,
     CACHE_BRANCHMAP_ALL,
     CACHE_CHANGELOG_CACHE,
     CACHE_FILE_NODE_TAGS,
     CACHE_FULL_MANIFEST,
     CACHE_MANIFESTLOG_CACHE,
     CACHE_TAGS_DEFAULT,
     CACHE_TAGS_SERVED,
 }

 # the caches to warm by default after a clone
 # (this is a mutable set to let extensions update it)
 CACHES_POST_CLONE = CACHES_ALL.copy()
 CACHES_POST_CLONE.discard(CACHE_FILE_NODE_TAGS)

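Per the comment above, these constants feed `repo.updatecaches(…)`. A minimal sketch of a post-clone warm-up; it assumes this module is importable as `mercurial.interfaces.repository` and that `updatecaches` accepts a `caches` keyword (both are assumptions of this sketch, not guarantees of the patch):

    # Sketch only: `repo` is assumed to be an open localrepository instance.
    from mercurial.interfaces import repository as repo_iface

    def warm_after_clone(repo):
        # CACHES_POST_CLONE is CACHES_ALL minus the file-node-tags cache,
        # which is expensive and rarely needed right after a clone.
        repo.updatecaches(caches=repo_iface.CACHES_POST_CLONE)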

 class ipeerconnection(interfaceutil.Interface):
     """Represents a "connection" to a repository.

     This is the base interface for representing a connection to a repository.
     It holds basic properties and methods applicable to all peer types.

     This is not a complete interface definition and should not be used
     outside of this module.
     """

     ui = interfaceutil.Attribute("""ui.ui instance""")
     path = interfaceutil.Attribute("""a urlutil.path instance or None""")

     def url():
         """Returns a URL string representing this peer.

         Currently, implementations expose the raw URL used to construct the
         instance. It may contain credentials as part of the URL. The
         expectations of the value aren't well-defined and this could lead to
         data leakage.

         TODO audit/clean consumers and more clearly define the contents of this
         value.
         """

     def local():
         """Returns a local repository instance.

         If the peer represents a local repository, returns an object that
         can be used to interface with it. Otherwise returns ``None``.
         """

     def canpush():
         """Returns a boolean indicating if this peer can be pushed to."""

     def close():
         """Close the connection to this peer.

         This is called when the peer will no longer be used. Resources
         associated with the peer should be cleaned up.
         """


 class ipeercapabilities(interfaceutil.Interface):
     """Peer sub-interface related to capabilities."""

     def capable(name):
         """Determine support for a named capability.

         Returns ``False`` if capability not supported.

         Returns ``True`` if boolean capability is supported. Returns a string
         if capability support is non-boolean.

         Capability strings may or may not map to wire protocol capabilities.
         """

     def requirecap(name, purpose):
         """Require a capability to be present.

         Raises a ``CapabilityError`` if the capability isn't present.
         """


 class ipeercommands(interfaceutil.Interface):
     """Client-side interface for communicating over the wire protocol.

     This interface is used as a gateway to the Mercurial wire protocol.
     Methods commonly call wire protocol commands of the same name.
     """

     def branchmap():
         """Obtain heads in named branches.

         Returns a dict mapping branch name to an iterable of nodes that are
         heads on that branch.
         """

     def capabilities():
         """Obtain capabilities of the peer.

         Returns a set of string capabilities.
         """

+    def get_inline_clone_bundle(path):
+        """Retrieve a clone bundle across the wire.
+
+        Returns a chunkbuffer.
+        """
+
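A consumer of the new interface method might look like the following sketch. The peer object, bundle path, and output filename are hypothetical; `get_inline_clone_bundle()` is only specified to return a chunkbuffer-style object with a `read()` method:

    # Sketch only: writes an inline clone bundle to disk in fixed-size chunks.
    def save_inline_bundle(peer, bundle_path, out_name='clone.hg'):
        stream = peer.get_inline_clone_bundle(bundle_path)
        with open(out_name, 'wb') as fh:
            while True:
                chunk = stream.read(4096)
                if not chunk:
                    break
                fh.write(chunk)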
     def clonebundles():
         """Obtains the clone bundles manifest for the repo.

         Returns the manifest as unparsed bytes.
         """

     def debugwireargs(one, two, three=None, four=None, five=None):
         """Used to facilitate debugging of arguments passed over the wire."""

     def getbundle(source, **kwargs):
         """Obtain remote repository data as a bundle.

         This command is how the bulk of repository data is transferred from
         the peer to the local repository.

         Returns a generator of bundle data.
         """

     def heads():
         """Determine all known head revisions in the peer.

         Returns an iterable of binary nodes.
         """

     def known(nodes):
         """Determine whether multiple nodes are known.

         Accepts an iterable of nodes to check for.

         Returns an iterable of booleans indicating whether the corresponding
         node at that index is known to the peer.
         """

     def listkeys(namespace):
         """Obtain all keys in a pushkey namespace.

         Returns an iterable of key names.
         """

     def lookup(key):
         """Resolve a value to a known revision.

         Returns a binary node of the resolved revision on success.
         """

     def pushkey(namespace, key, old, new):
         """Set a value using the ``pushkey`` protocol.

         Arguments correspond to the pushkey namespace and key to operate on and
         the old and new values for that key.

         Returns a string with the peer result. The value inside varies by the
         namespace.
         """

     def stream_out():
         """Obtain streaming clone data.

         Successful result should be a generator of data chunks.
         """

     def unbundle(bundle, heads, url):
         """Transfer repository data to the peer.

         This is how the bulk of data during a push is transferred.

         Returns the integer number of heads added to the peer.
         """


 class ipeerlegacycommands(interfaceutil.Interface):
     """Interface for implementing support for legacy wire protocol commands.

     Wire protocol commands transition to legacy status when they are no longer
     used by modern clients. To facilitate identifying which commands are
     legacy, the interfaces are split.
     """

     def between(pairs):
         """Obtain nodes between pairs of nodes.

         ``pairs`` is an iterable of node pairs.

         Returns an iterable of iterables of nodes corresponding to each
         requested pair.
         """

     def branches(nodes):
         """Obtain ancestor changesets of specific nodes back to a branch point.

         For each requested node, the peer finds the first ancestor node that is
         a DAG root or is a merge.

         Returns an iterable of iterables with the resolved values for each node.
         """

     def changegroup(nodes, source):
         """Obtain a changegroup with data for descendants of specified nodes."""

     def changegroupsubset(bases, heads, source):
         pass


 class ipeercommandexecutor(interfaceutil.Interface):
     """Represents a mechanism to execute remote commands.

     This is the primary interface for requesting that wire protocol commands
     be executed. Instances of this interface are active in a context manager
     and have a well-defined lifetime. When the context manager exits, all
     outstanding requests are waited on.
     """

     def callcommand(name, args):
         """Request that a named command be executed.

         Receives the command name and a dictionary of command arguments.

         Returns a ``concurrent.futures.Future`` that will resolve to the
         result of that command request. That exact value is left up to
         the implementation and possibly varies by command.

         Not all commands can coexist with other commands in an executor
         instance: it depends on the underlying wire protocol transport being
         used and the command itself.

         Implementations MAY call ``sendcommands()`` automatically if the
         requested command cannot coexist with other commands in this executor.

         Implementations MAY call ``sendcommands()`` automatically when the
         future's ``result()`` is called. So, consumers using multiple
         commands with an executor MUST ensure that ``result()`` is not called
         until all command requests have been issued.
         """

     def sendcommands():
         """Trigger submission of queued command requests.

         Not all transports submit commands as soon as they are requested to
         run. When called, this method forces queued command requests to be
         issued. It will no-op if all commands have already been sent.

         When called, no more new commands may be issued with this executor.
         """

     def close():
         """Signal that this command request is finished.

         When called, no more new commands may be issued. All outstanding
         commands that have previously been issued are waited on before
         returning. This not only includes waiting for the futures to resolve,
         but also waiting for all response data to arrive. In other words,
         calling this waits for all on-wire state for issued command requests
         to finish.

         When used as a context manager, this method is called when exiting the
         context manager.

         This method may call ``sendcommands()`` if there are buffered commands.
         """


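The executor contract documented above is easiest to see in use. This is the standard calling pattern; the command names and arguments here are illustrative, and per the docstring, `result()` must not be called until every command has been requested:

    # Sketch: issue two commands in one round trip where the transport allows.
    def fetch_basics(peer):
        with peer.commandexecutor() as e:
            f_heads = e.callcommand(b'heads', {})
            f_keys = e.callcommand(b'listkeys', {b'namespace': b'phases'})
            # result() may trigger submission, so request everything first.
            return f_heads.result(), f_keys.result()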
 class ipeerrequests(interfaceutil.Interface):
     """Interface for executing commands on a peer."""

     limitedarguments = interfaceutil.Attribute(
         """True if the peer cannot receive large argument values for commands."""
     )

     def commandexecutor():
         """A context manager that resolves to an ipeercommandexecutor.

         The object this resolves to can be used to issue command requests
         to the peer.

         Callers should call its ``callcommand`` method to issue command
         requests.

         A new executor should be obtained for each distinct set of commands
         (possibly just a single command) that the consumer wants to execute
         as part of a single operation or round trip. This is because some
         peers are half-duplex and/or don't support persistent connections.
         e.g. in the case of HTTP peers, commands sent to an executor represent
         a single HTTP request. While some peers may support multiple command
         sends over the wire per executor, consumers need to code to the least
         capable peer. So it should be assumed that command executors buffer
         called commands until they are told to send them and that each
         command executor could result in a new connection or wire-level request
         being issued.
         """


 class ipeerbase(ipeerconnection, ipeercapabilities, ipeerrequests):
     """Unified interface for peer repositories.

     All peer instances must conform to this interface.
     """


 class ipeerv2(ipeerconnection, ipeercapabilities, ipeerrequests):
     """Unified peer interface for wire protocol version 2 peers."""

     apidescriptor = interfaceutil.Attribute(
         """Data structure holding description of server API."""
     )


 @interfaceutil.implementer(ipeerbase)
 class peer:
     """Base class for peer repositories."""

     limitedarguments = False

     def __init__(self, ui, path=None, remotehidden=False):
         self.ui = ui
         self.path = path

     def capable(self, name):
         caps = self.capabilities()
         if name in caps:
             return True

         name = b'%s=' % name
         for cap in caps:
             if cap.startswith(name):
                 return cap[len(name) :]

         return False
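The tri-state contract of `capable()` in one runnable sketch (capability names fabricated); the helper below mirrors the method body above:

    def capable(caps, name):
        # Mirrors peer.capable(): True, a value string, or False.
        if name in caps:
            return True
        prefix = b'%s=' % name
        for cap in caps:
            if cap.startswith(prefix):
                return cap[len(prefix):]
        return False

    caps = {b'branchmap', b'unbundle=HG10GZ,HG10BZ,HG10UN'}
    assert capable(caps, b'branchmap') is True
    assert capable(caps, b'unbundle') == b'HG10GZ,HG10BZ,HG10UN'
    assert capable(caps, b'missing') is False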

     def requirecap(self, name, purpose):
         if self.capable(name):
             return

         raise error.CapabilityError(
             _(
                 b'cannot %s; remote repository does not support the '
                 b'\'%s\' capability'
             )
             % (purpose, name)
         )


 class iverifyproblem(interfaceutil.Interface):
     """Represents a problem with the integrity of the repository.

     Instances of this interface are emitted to describe an integrity issue
     with a repository (e.g. corrupt storage, missing data, etc).

     Instances are essentially messages associated with severity.
     """

     warning = interfaceutil.Attribute(
         """Message indicating a non-fatal problem."""
     )

     error = interfaceutil.Attribute("""Message indicating a fatal problem.""")

     node = interfaceutil.Attribute(
         """Revision encountering the problem.

         ``None`` means the problem doesn't apply to a single revision.
         """
     )


 class irevisiondelta(interfaceutil.Interface):
     """Represents a delta between one revision and another.

     Instances convey enough information to allow a revision to be exchanged
     with another repository.

     Instances represent the fulltext revision data or a delta against
     another revision. Therefore the ``revision`` and ``delta`` attributes
     are mutually exclusive.

     Typically used for changegroup generation.
     """

     node = interfaceutil.Attribute("""20 byte node of this revision.""")

     p1node = interfaceutil.Attribute(
         """20 byte node of 1st parent of this revision."""
     )

     p2node = interfaceutil.Attribute(
         """20 byte node of 2nd parent of this revision."""
     )

     linknode = interfaceutil.Attribute(
         """20 byte node of the changelog revision this node is linked to."""
     )

     flags = interfaceutil.Attribute(
         """2 bytes of integer flags that apply to this revision.

         This is a bitwise composition of the ``REVISION_FLAG_*`` constants.
         """
     )

     basenode = interfaceutil.Attribute(
         """20 byte node of the revision this data is a delta against.

         ``nullid`` indicates that the revision is a full revision and not
         a delta.
         """
     )

     baserevisionsize = interfaceutil.Attribute(
         """Size of base revision this delta is against.

         May be ``None`` if ``basenode`` is ``nullid``.
         """
     )

     revision = interfaceutil.Attribute(
         """Raw fulltext of revision data for this node."""
     )

     delta = interfaceutil.Attribute(
         """Delta between ``basenode`` and ``node``.

         Stored in the bdiff delta format.
         """
     )

     sidedata = interfaceutil.Attribute(
         """Raw sidedata bytes for the given revision."""
     )

     protocol_flags = interfaceutil.Attribute(
         """Single byte of integer flags that can influence the protocol.

         This is a bitwise composition of the ``storageutil.CG_FLAG*`` constants.
         """
     )


 class ifilerevisionssequence(interfaceutil.Interface):
     """Contains index data for all revisions of a file.

     Types implementing this behave like lists of tuples. The index
     in the list corresponds to the revision number. The values contain
     index metadata.

     The *null* revision (revision number -1) is always the last item
     in the index.
     """

     def __len__():
         """The total number of revisions."""

     def __getitem__(rev):
         """Returns the object having a specific revision number.

         Returns an 8-tuple with the following fields:

         offset+flags
             Contains the offset and flags for the revision. 64-bit unsigned
             integer where first 6 bytes are the offset and the next 2 bytes
             are flags. The offset can be 0 if it is not used by the store.
         compressed size
             Size of the revision data in the store. It can be 0 if it isn't
             needed by the store.
         uncompressed size
             Fulltext size. It can be 0 if it isn't needed by the store.
         base revision
             Revision number of revision the delta for storage is encoded
             against. -1 indicates not encoded against a base revision.
         link revision
             Revision number of changelog revision this entry is related to.
         p1 revision
             Revision number of 1st parent. -1 if no 1st parent.
         p2 revision
             Revision number of 2nd parent. -1 if no 2nd parent.
         node
             Binary node value for this revision number.

         Negative values should index off the end of the sequence. ``-1``
         should return the null revision. ``-2`` should return the most
         recent revision.
         """
559 |
|
565 | |||
560 | def __contains__(rev): |
|
566 | def __contains__(rev): | |
561 | """Whether a revision number exists.""" |
|
567 | """Whether a revision number exists.""" | |
562 |
|
568 | |||
563 | def insert(self, i, entry): |
|
569 | def insert(self, i, entry): | |
564 | """Add an item to the index at specific revision.""" |
|
570 | """Add an item to the index at specific revision.""" | |
565 |
|
571 | |||
566 |
|
572 | |||
class ifileindex(interfaceutil.Interface):
    """Storage interface for index data of a single file.

    File storage data is divided into index metadata and data storage.
    This interface defines the index portion of the interface.

    The index logically consists of:

    * A mapping between revision numbers and nodes.
    * DAG data (storing and querying the relationship between nodes).
    * Metadata to facilitate storage.
    """

    nullid = interfaceutil.Attribute(
        """node for the null revision for use as delta base."""
    )

    def __len__():
        """Obtain the number of revisions stored for this file."""

    def __iter__():
        """Iterate over revision numbers for this file."""

    def hasnode(node):
        """Returns a bool indicating if a node is known to this store.

        Implementations must only return True for full, binary node values:
        hex nodes, revision numbers, and partial node matches must be
        rejected.

        The null node is never present.
        """

    def revs(start=0, stop=None):
        """Iterate over revision numbers for this file, with control."""

    def parents(node):
        """Returns a 2-tuple of parent nodes for a revision.

        Values will be ``nullid`` if the parent is empty.
        """

    def parentrevs(rev):
        """Like parents() but operates on revision numbers."""

    def rev(node):
        """Obtain the revision number given a node.

        Raises ``error.LookupError`` if the node is not known.
        """

    def node(rev):
        """Obtain the node value given a revision number.

        Raises ``IndexError`` if the revision is not known.
        """

    def lookup(node):
        """Attempt to resolve a value to a node.

        Value can be a binary node, hex node, revision number, or a string
        that can be converted to an integer.

        Raises ``error.LookupError`` if a node could not be resolved.
        """

    def linkrev(rev):
        """Obtain the changeset revision number a revision is linked to."""

    def iscensored(rev):
        """Return whether a revision's content has been censored."""

    def commonancestorsheads(node1, node2):
        """Obtain an iterable of nodes containing heads of common ancestors.

        See ``ancestor.commonancestorsheads()``.
        """

    def descendants(revs):
        """Obtain descendant revision numbers for a set of revision numbers.

        If ``nullrev`` is in the set, this is equivalent to ``revs()``.
        """

    def heads(start=None, stop=None):
        """Obtain a list of nodes that are DAG heads, with control.

        The set of revisions examined can be limited by specifying
        ``start`` and ``stop``. ``start`` is a node. ``stop`` is an
        iterable of nodes. DAG traversal starts at the earlier revision
        ``start`` and iterates forward until any node in ``stop`` is
        encountered.
        """

    def children(node):
        """Obtain nodes that are children of a node.

        Returns a list of nodes.
        """
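
# A minimal sketch (not part of the interface): walking a file DAG with
# the ``ifileindex`` methods declared above. ``store`` is assumed to
# conform to ``ifileindex``; ``-1`` is the null revision, per the index
# entry documentation.
def _example_ancestor_revs(store, node):
    """Yield the revision number of ``node`` and of all its ancestors."""
    seen = set()
    pending = [store.rev(node)]
    while pending:
        rev = pending.pop()
        if rev == -1 or rev in seen:
            continue
        seen.add(rev)
        yield rev
        pending.extend(store.parentrevs(rev))
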
class ifiledata(interfaceutil.Interface):
    """Storage interface for data storage of a specific file.

    This complements ``ifileindex`` and provides an interface for accessing
    data for a tracked file.
    """

    def size(rev):
        """Obtain the fulltext size of file data.

        Any metadata is excluded from size measurements.
        """

    def revision(node, raw=False):
        """Obtain fulltext data for a node.

        By default, any storage transformations are applied before the data
        is returned. If ``raw`` is True, non-raw storage transformations
        are not applied.

        The fulltext data may contain a header containing metadata. Most
        consumers should use ``read()`` to obtain the actual file data.
        """

    def rawdata(node):
        """Obtain raw data for a node."""

    def read(node):
        """Resolve file fulltext data.

        This is similar to ``revision()`` except any metadata in the data
        headers is stripped.
        """

    def renamed(node):
        """Obtain copy metadata for a node.

        Returns ``False`` if no copy metadata is stored or a 2-tuple of
        (path, node) from which this revision was copied.
        """

    def cmp(node, fulltext):
        """Compare fulltext to another revision.

        Returns True if the fulltext is different from what is stored.

        This takes copy metadata into account.

        TODO better document the copy metadata and censoring logic.
        """

    def emitrevisions(
        nodes,
        nodesorder=None,
        revisiondata=False,
        assumehaveparentrevisions=False,
        deltamode=CG_DELTAMODE_STD,
    ):
        """Produce ``irevisiondelta`` for revisions.

        Given an iterable of nodes, emits objects conforming to the
        ``irevisiondelta`` interface that describe revisions in storage.

        This method is a generator.

        The input nodes may be unordered. Implementations must ensure that a
        node's parents are emitted before the node itself. Transitively, this
        means that a node may only be emitted once all its ancestors in
        ``nodes`` have also been emitted.

        By default, emits "index" data (the ``node``, ``p1node``, and
        ``p2node`` attributes). If ``revisiondata`` is set, revision data
        will also be present on the emitted objects.

        With default argument values, implementations can choose to emit
        either fulltext revision data or a delta. When emitting deltas,
        implementations must consider whether the delta's base revision
        fulltext is available to the receiver.

        The base revision fulltext is guaranteed to be available if any of
        the following are met:

        * Its fulltext revision was emitted by this method call.
        * A delta for that revision was emitted by this method call.
        * ``assumehaveparentrevisions`` is True and the base revision is a
          parent of the node.

        ``nodesorder`` can be used to control the order that revisions are
        emitted. By default, revisions can be reordered as long as they are
        in DAG topological order (see above). If the value is ``nodes``,
        the iteration order from ``nodes`` should be used. If the value is
        ``storage``, then the native order from the backing storage layer
        is used. (Not all storage layers will have a strong ordering, and
        the behavior of this mode is storage-dependent.) ``nodes`` ordering
        can force revisions to be emitted before their ancestors, so
        consumers should use it with care.

        The ``linknode`` attribute on the returned ``irevisiondelta`` may not
        be set and it is the caller's responsibility to resolve it, if needed.

        If ``deltamode`` is CG_DELTAMODE_PREV and revision data is requested,
        all revision data should be emitted as deltas against the revision
        emitted just prior. The initial revision should be a delta against its
        1st parent.
        """
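
# A minimal sketch (not part of the interface): one way a consumer might
# drain ``emitrevisions()``. The attribute names (``node``, ``revision``,
# ``basenode``, ``delta``) follow the ``irevisiondelta`` interface; since
# implementations may emit either a fulltext or a delta, both cases are
# handled. ``applydelta`` is a caller-supplied, hypothetical resolver.
def _example_consume_revisions(store, nodes, applydelta):
    """Yield (node, fulltext) pairs for the requested ``nodes``."""
    for rev in store.emitrevisions(nodes, revisiondata=True):
        if rev.revision is not None:
            yield rev.node, rev.revision  # a fulltext was emitted
        else:
            # a delta against ``rev.basenode`` was emitted; the guarantees
            # documented above ensure the caller can resolve that base
            yield rev.node, applydelta(rev.basenode, rev.delta)
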
class ifilemutation(interfaceutil.Interface):
    """Storage interface for mutation events of a tracked file."""

    def add(filedata, meta, transaction, linkrev, p1, p2):
        """Add a new revision to the store.

        Takes file data, a dictionary of metadata, a transaction, a linkrev,
        and parent nodes.

        Returns the node that was added.

        May no-op if a revision matching the supplied data is already stored.
        """

    def addrevision(
        revisiondata,
        transaction,
        linkrev,
        p1,
        p2,
        node=None,
        flags=0,
        cachedelta=None,
    ):
        """Add a new revision to the store and return its number.

        This is similar to ``add()`` except it operates at a lower level.

        The data passed in already contains a metadata header, if any.

        ``node`` and ``flags`` can be used to define the expected node and
        the flags to use with storage. ``flags`` is a bitwise value composed
        of the various ``REVISION_FLAG_*`` constants.

        ``add()`` is usually called when adding files from e.g. the working
        directory. ``addrevision()`` is often called by ``add()`` and for
        scenarios where revision data has already been computed, such as when
        applying raw data from a peer repo.
        """

    def addgroup(
        deltas,
        linkmapper,
        transaction,
        addrevisioncb=None,
        duplicaterevisioncb=None,
        maybemissingparents=False,
    ):
        """Process a series of deltas for storage.

        ``deltas`` is an iterable of 7-tuples of
        (node, p1, p2, linknode, deltabase, delta, flags) defining revisions
        to add.

        The ``delta`` field contains ``mpatch`` data to apply to a base
        revision, identified by ``deltabase``. The base node can be
        ``nullid``, in which case the header from the delta can be ignored
        and the delta used as the fulltext.

        ``alwayscache`` instructs the lower layers to cache the content of the
        newly added revision, even if it needs to be explicitly computed.
        This used to be the default when ``addrevisioncb`` was provided up to
        Mercurial 5.8.

        ``addrevisioncb`` should be called for each new rev as it is committed.
        ``duplicaterevisioncb`` should be called for all revs with a
        pre-existing node.

        ``maybemissingparents`` is a bool indicating whether the incoming
        data may reference parents/ancestor revisions that aren't present.
        This flag is set when receiving data into a "shallow" store that
        doesn't hold all history.

        Returns a list of nodes that were processed. A node will be in the list
        even if it existed in the store previously.
        """

    def censorrevision(tr, node, tombstone=b''):
        """Remove the content of a single revision.

        The specified ``node`` will have its content purged from storage.
        Future attempts to access the revision data for this node will
        result in failure.

        A ``tombstone`` message can optionally be stored. This message may be
        displayed to users when they attempt to access the missing revision
        data.

        Storage backends may have stored deltas against the previous content
        in this revision. As part of censoring a revision, these storage
        backends are expected to rewrite any internally stored deltas such
        that they no longer reference the deleted content.
        """

    def getstrippoint(minlink):
        """Find the minimum revision that must be stripped to strip a linkrev.

        Returns a 2-tuple containing the minimum revision number and a set
        of all revision numbers that would be broken by this strip.

        TODO this is highly revlog centric and should be abstracted into
        a higher-level deletion API. ``repair.strip()`` relies on this.
        """

    def strip(minlink, transaction):
        """Remove storage of items starting at a linkrev.

        This uses ``getstrippoint()`` to determine the first node to remove.
        Then it effectively truncates storage for all revisions after that.

        TODO this is highly revlog centric and should be abstracted into a
        higher-level deletion API.
        """
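
# A minimal sketch (not part of the interface): the shape of one entry in
# the ``deltas`` iterable consumed by ``addgroup()``, following the
# 7-tuple layout documented above. All values below are hypothetical
# placeholders.
_example_delta_entry = (
    b'\x11' * 20,  # node expected to result from applying the delta
    b'\x22' * 20,  # p1 node
    b'\x00' * 20,  # p2 node (nullid here: no second parent)
    b'\x33' * 20,  # linknode: changelog revision this one relates to
    b'\x00' * 20,  # deltabase (nullid: treat ``delta`` as the fulltext)
    b'<mpatch data>',  # delta to apply to the base revision
    0,  # flags: bitwise REVISION_FLAG_* values
)
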
class ifilestorage(ifileindex, ifiledata, ifilemutation):
    """Complete storage interface for a single tracked file."""

    def files():
        """Obtain paths that are backing storage for this file.

        TODO this is used heavily by verify code and there should probably
        be a better API for that.
        """

    def storageinfo(
        exclusivefiles=False,
        sharedfiles=False,
        revisionscount=False,
        trackedsize=False,
        storedsize=False,
    ):
        """Obtain information about storage for this file's data.

        Returns a dict describing storage for this tracked path. The keys
        in the dict map to arguments of the same name. The arguments are
        bools indicating whether to calculate and obtain that data.

        exclusivefiles
           Iterable of (vfs, path) describing files that are exclusively
           used to back storage for this tracked path.

        sharedfiles
           Iterable of (vfs, path) describing files that are used to back
           storage for this tracked path. Those files may also provide storage
           for other stored entities.

        revisionscount
           Number of revisions available for retrieval.

        trackedsize
           Total size in bytes of all tracked revisions. This is a sum of the
           length of the fulltext of all revisions.

        storedsize
           Total size in bytes used to store data for all tracked revisions.
           This is commonly less than ``trackedsize`` due to internal usage
           of deltas rather than fulltext revisions.

        Not all storage backends may support all queries or have a reasonable
        value to use. In that case, the value should be set to ``None`` and
        callers are expected to handle this special value.
        """

    def verifyintegrity(state):
        """Verifies the integrity of file storage.

        ``state`` is a dict holding state of the verifier process. It can be
        used to communicate data between invocations of multiple storage
        primitives.

        If individual revisions cannot have their revision content resolved,
        the method is expected to set the ``skipread`` key to a set of nodes
        that encountered problems. If set, the method can also add the node(s)
        to ``safe_renamed`` in order to indicate nodes that may perform the
        rename checks with currently accessible data.

        The method yields objects conforming to the ``iverifyproblem``
        interface.
        """
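
# A minimal sketch (not part of the interface): querying ``storageinfo()``
# and coping with backends that answer ``None`` for unsupported queries.
# Bytestring keys mirroring the argument names are an assumption of this
# sketch.
def _example_storage_ratio(fl):
    info = fl.storageinfo(trackedsize=True, storedsize=True)
    tracked = info[b'trackedsize']
    stored = info[b'storedsize']
    if tracked is None or stored is None or not tracked:
        return None  # backend cannot answer; callers must handle this
    # commonly < 1.0 because deltas are stored instead of fulltexts
    return stored / tracked
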
class idirs(interfaceutil.Interface):
    """Interface representing a collection of directories from paths.

    This interface is essentially a derived data structure representing
    directories from a collection of paths.
    """

    def addpath(path):
        """Add a path to the collection.

        All directories in the path will be added to the collection.
        """

    def delpath(path):
        """Remove a path from the collection.

        If the removal was the last path in a particular directory, the
        directory is removed from the collection.
        """

    def __iter__():
        """Iterate over the directories in this collection of paths."""

    def __contains__(path):
        """Whether a specific directory is in this collection."""
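
# A minimal sketch (not part of the interface): the ``idirs`` semantics
# documented above. ``makedirs`` is a hypothetical factory returning an
# ``idirs``-conforming object.
def _example_idirs(makedirs):
    d = makedirs()
    d.addpath(b'a/b/c.txt')
    assert b'a' in d and b'a/b' in d  # every directory in the path
    d.delpath(b'a/b/c.txt')
    assert b'a/b' not in d  # last path under a/b was removed
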
class imanifestdict(interfaceutil.Interface):
    """Interface representing a manifest data structure.

    A manifest is effectively a dict mapping paths to entries. Each entry
    consists of a binary node and extra flags affecting that entry.
    """

    def __getitem__(path):
        """Returns the binary node value for a path in the manifest.

        Raises ``KeyError`` if the path does not exist in the manifest.

        Equivalent to ``self.find(path)[0]``.
        """

    def find(path):
        """Returns the entry for a path in the manifest.

        Returns a 2-tuple of (node, flags).

        Raises ``KeyError`` if the path does not exist in the manifest.
        """

    def __len__():
        """Return the number of entries in the manifest."""

    def __nonzero__():
        """Returns True if the manifest has entries, False otherwise."""

    __bool__ = __nonzero__

    def __setitem__(path, node):
        """Define the node value for a path in the manifest.

        If the path is already in the manifest, its flags will be copied to
        the new entry.
        """

    def __contains__(path):
        """Whether a path exists in the manifest."""

    def __delitem__(path):
        """Remove a path from the manifest.

        Raises ``KeyError`` if the path is not in the manifest.
        """

    def __iter__():
        """Iterate over paths in the manifest."""

    def iterkeys():
        """Iterate over paths in the manifest."""

    def keys():
        """Obtain a list of paths in the manifest."""

    def filesnotin(other, match=None):
        """Obtain the set of paths in this manifest but not in another.

        ``match`` is an optional matcher function to be applied to both
        manifests.

        Returns a set of paths.
        """

    def dirs():
        """Returns an object implementing the ``idirs`` interface."""

    def hasdir(dir):
        """Returns a bool indicating if a directory is in this manifest."""

    def walk(match):
        """Generator of paths in manifest satisfying a matcher.

        If the matcher has explicit files listed and they don't exist in
        the manifest, ``match.bad()`` is called for each missing file.
        """

    def diff(other, match=None, clean=False):
        """Find differences between this manifest and another.

        This manifest is compared to ``other``.

        If ``match`` is provided, the two manifests are filtered against this
        matcher and only entries satisfying the matcher are compared.

        If ``clean`` is True, unchanged files are included in the returned
        object.

        Returns a dict with paths as keys and values of 2-tuples of 2-tuples
        of the form ``((node1, flag1), (node2, flag2))`` where
        ``(node1, flag1)`` represents the node and flags for this manifest
        and ``(node2, flag2)`` are the same for the other manifest.
        """

    def setflag(path, flag):
        """Set the flag value for a given path.

        Raises ``KeyError`` if the path is not already in the manifest.
        """

    def get(path, default=None):
        """Obtain the node value for a path or a default value if missing."""

    def flags(path):
        """Return the flags value for a path (default: empty bytestring)."""

    def copy():
        """Return a copy of this manifest."""

    def items():
        """Returns an iterable of (path, node) for items in this manifest."""

    def iteritems():
        """Identical to items()."""

    def iterentries():
        """Returns an iterable of (path, node, flags) for this manifest.

        Similar to ``iteritems()`` except items are a 3-tuple and include
        flags.
        """

    def text():
        """Obtain the raw data representation for this manifest.

        Result is used to create a manifest revision.
        """

    def fastdelta(base, changes):
        """Obtain a delta between this manifest and another given changes.

        ``base`` is the raw data representation for another manifest.

        ``changes`` is an iterable of ``(path, to_delete)``.

        Returns a 2-tuple containing ``bytearray(self.text())`` and the
        delta between ``base`` and this manifest.

        If this manifest implementation can't support ``fastdelta()``,
        raise ``mercurial.manifest.FastdeltaUnavailable``.
        """
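
# A minimal sketch (not part of the interface): classifying the result of
# ``imanifestdict.diff()``. That entries missing on one side carry a
# ``None`` node is an assumption of this sketch.
def _example_classify_diff(m1, m2):
    added, removed, changed = [], [], []
    for path, ((n1, fl1), (n2, fl2)) in m1.diff(m2).items():
        if n1 is None:
            added.append(path)  # only present in ``m2``
        elif n2 is None:
            removed.append(path)  # only present in ``m1``
        else:
            changed.append(path)
    return added, removed, changed
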
class imanifestrevisionbase(interfaceutil.Interface):
    """Base interface representing a single revision of a manifest.

    Should not be used as a primary interface: should always be inherited
    as part of a larger interface.
    """

    def copy():
        """Obtain a copy of this manifest instance.

        Returns an object conforming to the ``imanifestrevisionwritable``
        interface. The instance will be associated with the same
        ``imanifestlog`` collection as this instance.
        """

    def read():
        """Obtain the parsed manifest data structure.

        The returned object conforms to the ``imanifestdict`` interface.
        """


class imanifestrevisionstored(imanifestrevisionbase):
    """Interface representing a manifest revision committed to storage."""

    def node():
        """The binary node for this manifest."""

    parents = interfaceutil.Attribute(
        """List of binary nodes that are parents for this manifest revision."""
    )

    def readdelta(shallow=False):
        """Obtain the manifest data structure representing changes from parent.

        This manifest is compared to its 1st parent. A new manifest
        representing those differences is constructed.

        The returned object conforms to the ``imanifestdict`` interface.
        """

    def readfast(shallow=False):
        """Calls either ``read()`` or ``readdelta()``.

        The faster of the two options is called.
        """

    def find(key):
        """Calls ``self.read().find(key)``.

        Returns a 2-tuple of ``(node, flags)`` or raises ``KeyError``.
        """
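
# A minimal sketch (not part of the interface): listing files touched
# relative to the first parent via ``readdelta()``. ``mctx`` is assumed
# to conform to ``imanifestrevisionstored``.
def _example_touched_paths(mctx):
    # the delta manifest conforms to ``imanifestdict``, so its keys are
    # the paths that differ from the 1st parent
    return sorted(mctx.readdelta().keys())
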
class imanifestrevisionwritable(imanifestrevisionbase):
    """Interface representing a manifest revision that can be committed."""

    def write(transaction, linkrev, p1node, p2node, added, removed, match=None):
        """Add this revision to storage.

        Takes a transaction object, the changeset revision number it will
        be associated with, its parent nodes, and lists of added and
        removed paths.

        If match is provided, storage can choose not to inspect or write out
        items that do not match. Storage is still required to be able to
        provide the full manifest in the future for any directories written
        (these manifests should not be "narrowed on disk").

        Returns the binary node of the created revision.
        """
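
# A minimal sketch (not part of the interface): the read-modify-write
# cycle across the manifest revision interfaces above. ``mctx`` is
# assumed to conform to ``imanifestrevisionstored``; ``nullid`` is passed
# in by the caller rather than imported.
def _example_rewrite_manifest(mctx, transaction, linkrev, path, filenode, nullid):
    writable = mctx.copy()  # imanifestrevisionwritable
    m = writable.read()  # imanifestdict backing the copy
    m[path] = filenode
    # parent the new revision on the one we started from
    return writable.write(
        transaction, linkrev, mctx.node(), nullid, [path], []
    )
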
class imanifeststorage(interfaceutil.Interface):
    """Storage interface for manifest data."""

    nodeconstants = interfaceutil.Attribute(
        """nodeconstants used by the current repository."""
    )

    tree = interfaceutil.Attribute(
        """The path to the directory this manifest tracks.

        The empty bytestring represents the root manifest.
        """
    )

    index = interfaceutil.Attribute(
        """An ``ifilerevisionssequence`` instance."""
    )

    opener = interfaceutil.Attribute(
        """VFS opener to use to access underlying files used for storage.

        TODO this is revlog specific and should not be exposed.
        """
    )

    _generaldelta = interfaceutil.Attribute(
        """Whether generaldelta storage is being used.

        TODO this is revlog specific and should not be exposed.
        """
    )

    fulltextcache = interfaceutil.Attribute(
        """Dict with cache of fulltexts.

        TODO this doesn't feel appropriate for the storage interface.
        """
    )

    def __len__():
        """Obtain the number of revisions stored for this manifest."""

    def __iter__():
        """Iterate over revision numbers for this manifest."""

    def rev(node):
        """Obtain the revision number given a binary node.

        Raises ``error.LookupError`` if the node is not known.
        """

    def node(rev):
        """Obtain the node value given a revision number.

        Raises ``error.LookupError`` if the revision is not known.
        """

    def lookup(value):
        """Attempt to resolve a value to a node.

        Value can be a binary node, hex node, revision number, or a bytes
        that can be converted to an integer.

        Raises ``error.LookupError`` if a node could not be resolved.
        """

    def parents(node):
        """Returns a 2-tuple of parent nodes for a node.

        Values will be ``nullid`` if the parent is empty.
        """

    def parentrevs(rev):
        """Like parents() but operates on revision numbers."""

    def linkrev(rev):
        """Obtain the changeset revision number a revision is linked to."""

    def revision(node, _df=None):
        """Obtain fulltext data for a node."""

    def rawdata(node, _df=None):
        """Obtain raw data for a node."""

    def revdiff(rev1, rev2):
        """Obtain a delta between two revision numbers.

        The returned data is the result of ``bdiff.bdiff()`` on the raw
        revision data.
        """

    def cmp(node, fulltext):
        """Compare fulltext to another revision.

        Returns True if the fulltext is different from what is stored.
        """

    def emitrevisions(
        nodes,
        nodesorder=None,
        revisiondata=False,
        assumehaveparentrevisions=False,
    ):
        """Produce ``irevisiondelta`` describing revisions.

        See the documentation for ``ifiledata`` for more.
        """

    def addgroup(
        deltas,
        linkmapper,
        transaction,
        addrevisioncb=None,
        duplicaterevisioncb=None,
    ):
        """Process a series of deltas for storage.

        See the documentation in ``ifilemutation`` for more.
        """

    def rawsize(rev):
        """Obtain the size of tracked data.

        Is equivalent to ``len(m.rawdata(node))``.

        TODO this method is only used by upgrade code and may be removed.
        """

    def getstrippoint(minlink):
        """Find minimum revision that must be stripped to strip a linkrev.

        See the documentation in ``ifilemutation`` for more.
        """

    def strip(minlink, transaction):
        """Remove storage of items starting at a linkrev.

        See the documentation in ``ifilemutation`` for more.
        """

    def checksize():
        """Obtain the expected sizes of backing files.

        TODO this is used by verify and it should not be part of the interface.
        """

    def files():
        """Obtain paths that are backing storage for this manifest.

        TODO this is used by verify and there should probably be a better API
        for this functionality.
        """

    def deltaparent(rev):
        """Obtain the revision that a revision is delta'd against.

        TODO delta encoding is an implementation detail of storage and should
        not be exposed to the storage interface.
        """

    def clone(tr, dest, **kwargs):
        """Clone this instance to another."""

    def clearcaches(clear_persisted_data=False):
        """Clear any caches associated with this instance."""

    def dirlog(d):
        """Obtain a manifest storage instance for a tree."""

    def add(
        m, transaction, link, p1, p2, added, removed, readtree=None, match=None
    ):
        """Add a revision to storage.

        ``m`` is an object conforming to ``imanifestdict``.

        ``link`` is the linkrev revision number.

        ``p1`` and ``p2`` are the parent revision numbers.

        ``added`` and ``removed`` are iterables of added and removed paths,
|
1387 | ``added`` and ``removed`` are iterables of added and removed paths, | |
1382 | respectively. |
|
1388 | respectively. | |
1383 |
|
1389 | |||
1384 | ``readtree`` is a function that can be used to read the child tree(s) |
|
1390 | ``readtree`` is a function that can be used to read the child tree(s) | |
1385 | when recursively writing the full tree structure when using |
|
1391 | when recursively writing the full tree structure when using | |
1386 | treemanifests. |

1392 | treemanifests. | |
1387 |
|
1393 | |||
1388 | ``match`` is a matcher that can be used to hint to storage that not all |
|
1394 | ``match`` is a matcher that can be used to hint to storage that not all | |
1389 | paths must be inspected; this is an optimization and can be safely |
|
1395 | paths must be inspected; this is an optimization and can be safely | |
1390 | ignored. Note that the storage must still be able to reproduce a full |
|
1396 | ignored. Note that the storage must still be able to reproduce a full | |
1391 | manifest including files that did not match. |
|
1397 | manifest including files that did not match. | |
1392 | """ |
|
1398 | """ | |
1393 |
|
1399 | |||
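A hedged usage sketch for ``add()`` as documented above; the transaction name and the assumption that ``repo.transaction()`` can be used as a context manager are for illustration only, and the caller supplies ``m``, ``link``, ``p1``, ``p2``, ``added``, and ``removed`` per the docstring:

    def write_manifest(repo, m, link, p1, p2, added, removed):
        # Sketch: write a new root-manifest revision inside a transaction.
        with repo.transaction(b'example-manifest-add') as tr:
            store = repo.manifestlog.getstorage(b'')  # root tree storage
            return store.add(m, tr, link, p1, p2, added, removed)
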
1394 | def storageinfo( |
|
1400 | def storageinfo( | |
1395 | exclusivefiles=False, |
|
1401 | exclusivefiles=False, | |
1396 | sharedfiles=False, |
|
1402 | sharedfiles=False, | |
1397 | revisionscount=False, |
|
1403 | revisionscount=False, | |
1398 | trackedsize=False, |
|
1404 | trackedsize=False, | |
1399 | storedsize=False, |
|
1405 | storedsize=False, | |
1400 | ): |
|
1406 | ): | |
1401 | """Obtain information about storage for this manifest's data. |
|
1407 | """Obtain information about storage for this manifest's data. | |
1402 |
|
1408 | |||
1403 | See ``ifilestorage.storageinfo()`` for a description of this method. |
|
1409 | See ``ifilestorage.storageinfo()`` for a description of this method. | |
1404 | This one behaves the same way, except for manifest data. |
|
1410 | This one behaves the same way, except for manifest data. | |
1405 | """ |
|
1411 | """ | |
1406 |
|
1412 | |||
1407 | def get_revlog(): |
|
1413 | def get_revlog(): | |
1408 | """return an actual revlog instance if any |
|
1414 | """return an actual revlog instance if any | |
1409 |
|
1415 | |||
1410 | This exist because a lot of code leverage the fact the underlying |
|
1416 | This exist because a lot of code leverage the fact the underlying | |
1411 | storage is a revlog for optimization, so giving simple way to access |
|
1417 | storage is a revlog for optimization, so giving simple way to access | |
1412 | the revlog instance helps such code. |
|
1418 | the revlog instance helps such code. | |
1413 | """ |
|
1419 | """ | |
1414 |
|
1420 | |||
1415 |
|
1421 | |||
1416 | class imanifestlog(interfaceutil.Interface): |
|
1422 | class imanifestlog(interfaceutil.Interface): | |
1417 | """Interface representing a collection of manifest snapshots. |
|
1423 | """Interface representing a collection of manifest snapshots. | |
1418 |
|
1424 | |||
1419 | Represents the root manifest in a repository. |
|
1425 | Represents the root manifest in a repository. | |
1420 |
|
1426 | |||
1421 | Also serves as a means to access nested tree manifests and to cache |
|
1427 | Also serves as a means to access nested tree manifests and to cache | |
1422 | tree manifests. |
|
1428 | tree manifests. | |
1423 | """ |
|
1429 | """ | |
1424 |
|
1430 | |||
1425 | nodeconstants = interfaceutil.Attribute( |
|
1431 | nodeconstants = interfaceutil.Attribute( | |
1426 | """nodeconstants used by the current repository.""" |
|
1432 | """nodeconstants used by the current repository.""" | |
1427 | ) |
|
1433 | ) | |
1428 |
|
1434 | |||
1429 | def __getitem__(node): |
|
1435 | def __getitem__(node): | |
1430 | """Obtain a manifest instance for a given binary node. |
|
1436 | """Obtain a manifest instance for a given binary node. | |
1431 |
|
1437 | |||
1432 | Equivalent to calling ``self.get('', node)``. |
|
1438 | Equivalent to calling ``self.get('', node)``. | |
1433 |
|
1439 | |||
1434 | The returned object conforms to the ``imanifestrevisionstored`` |
|
1440 | The returned object conforms to the ``imanifestrevisionstored`` | |
1435 | interface. |
|
1441 | interface. | |
1436 | """ |
|
1442 | """ | |
1437 |
|
1443 | |||
1438 | def get(tree, node, verify=True): |
|
1444 | def get(tree, node, verify=True): | |
1439 | """Retrieve the manifest instance for a given directory and binary node. |
|
1445 | """Retrieve the manifest instance for a given directory and binary node. | |
1440 |
|
1446 | |||
1441 | ``node`` always refers to the node of the root manifest (which will be |
|
1447 | ``node`` always refers to the node of the root manifest (which will be | |
1442 | the only manifest if flat manifests are being used). |
|
1448 | the only manifest if flat manifests are being used). | |
1443 |
|
1449 | |||
1444 | If ``tree`` is the empty string, the root manifest is returned. |
|
1450 | If ``tree`` is the empty string, the root manifest is returned. | |
1445 | Otherwise the manifest for the specified directory will be returned |
|
1451 | Otherwise the manifest for the specified directory will be returned | |
1446 | (requires tree manifests). |
|
1452 | (requires tree manifests). | |
1447 |
|
1453 | |||
1448 | If ``verify`` is True, ``LookupError`` is raised if the node is not |
|
1454 | If ``verify`` is True, ``LookupError`` is raised if the node is not | |
1449 | known. |
|
1455 | known. | |
1450 |
|
1456 | |||
1451 | The returned object conforms to the ``imanifestrevisionstored`` |
|
1457 | The returned object conforms to the ``imanifestrevisionstored`` | |
1452 | interface. |
|
1458 | interface. | |
1453 | """ |
|
1459 | """ | |
1454 |
|
1460 | |||
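A short usage sketch of the lookup methods above; ``b'some/dir/'`` is a placeholder that only resolves when tree manifests are in use:

    def manifests_for_tip(repo):
        ml = repo.manifestlog
        node = repo[b'tip'].manifestnode()
        root = ml[node]                   # same as ml.get(b'', node)
        sub = ml.get(b'some/dir/', node)  # nested tree (tree manifests only)
        return root, sub, ml.rev(node)
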
1455 | def getstorage(tree): |
|
1461 | def getstorage(tree): | |
1456 | """Retrieve an interface to storage for a particular tree. |
|
1462 | """Retrieve an interface to storage for a particular tree. | |
1457 |
|
1463 | |||
1458 | If ``tree`` is the empty bytestring, storage for the root manifest will |
|
1464 | If ``tree`` is the empty bytestring, storage for the root manifest will | |
1459 | be returned. Otherwise storage for a tree manifest is returned. |
|
1465 | be returned. Otherwise storage for a tree manifest is returned. | |
1460 |
|
1466 | |||
1461 | TODO formalize interface for returned object. |
|
1467 | TODO formalize interface for returned object. | |
1462 | """ |
|
1468 | """ | |
1463 |
|
1469 | |||
1464 | def clearcaches(): |
|
1470 | def clearcaches(): | |
1465 | """Clear caches associated with this collection.""" |
|
1471 | """Clear caches associated with this collection.""" | |
1466 |
|
1472 | |||
1467 | def rev(node): |
|
1473 | def rev(node): | |
1468 | """Obtain the revision number for a binary node. |
|
1474 | """Obtain the revision number for a binary node. | |
1469 |
|
1475 | |||
1470 | Raises ``error.LookupError`` if the node is not known. |
|
1476 | Raises ``error.LookupError`` if the node is not known. | |
1471 | """ |
|
1477 | """ | |
1472 |
|
1478 | |||
1473 | def update_caches(transaction): |
|
1479 | def update_caches(transaction): | |
1474 | """update whatever cache are relevant for the used storage.""" |
|
1480 | """update whatever cache are relevant for the used storage.""" | |
1475 |
|
1481 | |||
1476 |
|
1482 | |||
1477 | class ilocalrepositoryfilestorage(interfaceutil.Interface): |
|
1483 | class ilocalrepositoryfilestorage(interfaceutil.Interface): | |
1478 | """Local repository sub-interface providing access to tracked file storage. |
|
1484 | """Local repository sub-interface providing access to tracked file storage. | |
1479 |
|
1485 | |||
1480 | This interface defines how a repository accesses storage for a single |
|
1486 | This interface defines how a repository accesses storage for a single | |
1481 | tracked file path. |
|
1487 | tracked file path. | |
1482 | """ |
|
1488 | """ | |
1483 |
|
1489 | |||
1484 | def file(f): |
|
1490 | def file(f): | |
1485 | """Obtain a filelog for a tracked path. |
|
1491 | """Obtain a filelog for a tracked path. | |
1486 |
|
1492 | |||
1487 | The returned type conforms to the ``ifilestorage`` interface. |
|
1493 | The returned type conforms to the ``ifilestorage`` interface. | |
1488 | """ |
|
1494 | """ | |
1489 |
|
1495 | |||
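A hypothetical sketch of reaching per-file history through ``file()``; the path is an assumption for the example, and ``len()``/``node()``/``revision()`` come from the ``ifilestorage`` interface the docstring references:

    def tip_fulltext(repo, path=b'README'):
        fl = repo.file(path)  # ifilestorage-conforming filelog
        if not len(fl):       # len() == number of stored revisions
            return None
        return fl.revision(fl.node(len(fl) - 1))
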
1490 |
|
1496 | |||
1491 | class ilocalrepositorymain(interfaceutil.Interface): |
|
1497 | class ilocalrepositorymain(interfaceutil.Interface): | |
1492 | """Main interface for local repositories. |
|
1498 | """Main interface for local repositories. | |
1493 |
|
1499 | |||
1494 | This currently captures the reality of things - not how things should be. |
|
1500 | This currently captures the reality of things - not how things should be. | |
1495 | """ |
|
1501 | """ | |
1496 |
|
1502 | |||
1497 | nodeconstants = interfaceutil.Attribute( |
|
1503 | nodeconstants = interfaceutil.Attribute( | |
1498 | """Constant nodes matching the hash function used by the repository.""" |
|
1504 | """Constant nodes matching the hash function used by the repository.""" | |
1499 | ) |
|
1505 | ) | |
1500 | nullid = interfaceutil.Attribute( |
|
1506 | nullid = interfaceutil.Attribute( | |
1501 | """null revision for the hash function used by the repository.""" |
|
1507 | """null revision for the hash function used by the repository.""" | |
1502 | ) |
|
1508 | ) | |
1503 |
|
1509 | |||
1504 | supported = interfaceutil.Attribute( |
|
1510 | supported = interfaceutil.Attribute( | |
1505 | """Set of requirements that this repo is capable of opening.""" |
|
1511 | """Set of requirements that this repo is capable of opening.""" | |
1506 | ) |
|
1512 | ) | |
1507 |
|
1513 | |||
1508 | requirements = interfaceutil.Attribute( |
|
1514 | requirements = interfaceutil.Attribute( | |
1509 | """Set of requirements this repo uses.""" |
|
1515 | """Set of requirements this repo uses.""" | |
1510 | ) |
|
1516 | ) | |
1511 |
|
1517 | |||
1512 | features = interfaceutil.Attribute( |
|
1518 | features = interfaceutil.Attribute( | |
1513 | """Set of "features" this repository supports. |
|
1519 | """Set of "features" this repository supports. | |
1514 |
|
1520 | |||
1515 | A "feature" is a loosely-defined term. It can refer to a feature |
|
1521 | A "feature" is a loosely-defined term. It can refer to a feature | |
1516 | in the classical sense or can describe an implementation detail |
|
1522 | in the classical sense or can describe an implementation detail | |
1517 | of the repository. For example, a ``readonly`` feature may denote |
|
1523 | of the repository. For example, a ``readonly`` feature may denote | |
1518 | the repository as read-only. Or a ``revlogfilestore`` feature may |
|
1524 | the repository as read-only. Or a ``revlogfilestore`` feature may | |
1519 | denote that the repository is using revlogs for file storage. |
|
1525 | denote that the repository is using revlogs for file storage. | |
1520 |
|
1526 | |||
1521 | The intent of features is to provide a machine-queryable mechanism |
|
1527 | The intent of features is to provide a machine-queryable mechanism | |
1522 | for repo consumers to test for various repository characteristics. |
|
1528 | for repo consumers to test for various repository characteristics. | |
1523 |
|
1529 | |||
1524 | Features are similar to ``requirements``. The main difference is that |
|
1530 | Features are similar to ``requirements``. The main difference is that | |
1525 | requirements are stored on-disk and represent requirements to open the |
|
1531 | requirements are stored on-disk and represent requirements to open the | |
1526 | repository. Features are more run-time capabilities of the repository |
|
1532 | repository. Features are more run-time capabilities of the repository | |
1527 | and more granular capabilities (which may be derived from requirements). |
|
1533 | and more granular capabilities (which may be derived from requirements). | |
1528 | """ |
|
1534 | """ | |
1529 | ) |
|
1535 | ) | |
1530 |
|
1536 | |||
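A sketch of the distinction drawn above, testing a run-time feature versus an on-disk requirement; the feature constant mirrors the ``revlogfilestore`` example in the docstring, and the ``b'generaldelta'`` requirement name is assumed for illustration:

    from mercurial.interfaces import repository

    def uses_revlog_files(repo):
        # Feature: run-time capability, possibly derived from requirements.
        return repository.REPO_FEATURE_REVLOG_FILE_STORAGE in repo.features

    def has_generaldelta(repo):
        # Requirement: stored on disk; gates whether the repo opens at all.
        return b'generaldelta' in repo.requirements
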
1531 | filtername = interfaceutil.Attribute( |
|
1537 | filtername = interfaceutil.Attribute( | |
1532 | """Name of the repoview that is active on this repo.""" |
|
1538 | """Name of the repoview that is active on this repo.""" | |
1533 | ) |
|
1539 | ) | |
1534 |
|
1540 | |||
1535 | vfs_map = interfaceutil.Attribute( |
|
1541 | vfs_map = interfaceutil.Attribute( | |
1536 | """a bytes-key → vfs mapping used by transaction and others""" |
|
1542 | """a bytes-key → vfs mapping used by transaction and others""" | |
1537 | ) |
|
1543 | ) | |
1538 |
|
1544 | |||
1539 | wvfs = interfaceutil.Attribute( |
|
1545 | wvfs = interfaceutil.Attribute( | |
1540 | """VFS used to access the working directory.""" |
|
1546 | """VFS used to access the working directory.""" | |
1541 | ) |
|
1547 | ) | |
1542 |
|
1548 | |||
1543 | vfs = interfaceutil.Attribute( |
|
1549 | vfs = interfaceutil.Attribute( | |
1544 | """VFS rooted at the .hg directory. |
|
1550 | """VFS rooted at the .hg directory. | |
1545 |
|
1551 | |||
1546 | Used to access repository data not in the store. |
|
1552 | Used to access repository data not in the store. | |
1547 | """ |
|
1553 | """ | |
1548 | ) |
|
1554 | ) | |
1549 |
|
1555 | |||
1550 | svfs = interfaceutil.Attribute( |
|
1556 | svfs = interfaceutil.Attribute( | |
1551 | """VFS rooted at the store. |
|
1557 | """VFS rooted at the store. | |
1552 |
|
1558 | |||
1553 | Used to access repository data in the store. Typically .hg/store. |
|
1559 | Used to access repository data in the store. Typically .hg/store. | |
1554 | But can point elsewhere if the store is shared. |
|
1560 | But can point elsewhere if the store is shared. | |
1555 | """ |
|
1561 | """ | |
1556 | ) |
|
1562 | ) | |
1557 |
|
1563 | |||
1558 | root = interfaceutil.Attribute( |
|
1564 | root = interfaceutil.Attribute( | |
1559 | """Path to the root of the working directory.""" |
|
1565 | """Path to the root of the working directory.""" | |
1560 | ) |
|
1566 | ) | |
1561 |
|
1567 | |||
1562 | path = interfaceutil.Attribute("""Path to the .hg directory.""") |
|
1568 | path = interfaceutil.Attribute("""Path to the .hg directory.""") | |
1563 |
|
1569 | |||
1564 | origroot = interfaceutil.Attribute( |
|
1570 | origroot = interfaceutil.Attribute( | |
1565 | """The filesystem path that was used to construct the repo.""" |
|
1571 | """The filesystem path that was used to construct the repo.""" | |
1566 | ) |
|
1572 | ) | |
1567 |
|
1573 | |||
1568 | auditor = interfaceutil.Attribute( |
|
1574 | auditor = interfaceutil.Attribute( | |
1569 | """A pathauditor for the working directory. |
|
1575 | """A pathauditor for the working directory. | |
1570 |
|
1576 | |||
1571 | This checks if a path refers to a nested repository. |
|
1577 | This checks if a path refers to a nested repository. | |
1572 |
|
1578 | |||
1573 | Operates on the filesystem. |
|
1579 | Operates on the filesystem. | |
1574 | """ |
|
1580 | """ | |
1575 | ) |
|
1581 | ) | |
1576 |
|
1582 | |||
1577 | nofsauditor = interfaceutil.Attribute( |
|
1583 | nofsauditor = interfaceutil.Attribute( | |
1578 | """A pathauditor for the working directory. |
|
1584 | """A pathauditor for the working directory. | |
1579 |
|
1585 | |||
1580 | This is like ``auditor`` except it doesn't do filesystem checks. |
|
1586 | This is like ``auditor`` except it doesn't do filesystem checks. | |
1581 | """ |
|
1587 | """ | |
1582 | ) |
|
1588 | ) | |
1583 |
|
1589 | |||
1584 | baseui = interfaceutil.Attribute( |
|
1590 | baseui = interfaceutil.Attribute( | |
1585 | """Original ui instance passed into constructor.""" |
|
1591 | """Original ui instance passed into constructor.""" | |
1586 | ) |
|
1592 | ) | |
1587 |
|
1593 | |||
1588 | ui = interfaceutil.Attribute("""Main ui instance for this repository.""") |

1594 | ui = interfaceutil.Attribute("""Main ui instance for this repository.""") | |
1589 |
|
1595 | |||
1590 | sharedpath = interfaceutil.Attribute( |
|
1596 | sharedpath = interfaceutil.Attribute( | |
1591 | """Path to the .hg directory of the repo this repo was shared from.""" |
|
1597 | """Path to the .hg directory of the repo this repo was shared from.""" | |
1592 | ) |
|
1598 | ) | |
1593 |
|
1599 | |||
1594 | store = interfaceutil.Attribute("""A store instance.""") |
|
1600 | store = interfaceutil.Attribute("""A store instance.""") | |
1595 |
|
1601 | |||
1596 | spath = interfaceutil.Attribute("""Path to the store.""") |
|
1602 | spath = interfaceutil.Attribute("""Path to the store.""") | |
1597 |
|
1603 | |||
1598 | sjoin = interfaceutil.Attribute("""Alias to self.store.join.""") |
|
1604 | sjoin = interfaceutil.Attribute("""Alias to self.store.join.""") | |
1599 |
|
1605 | |||
1600 | cachevfs = interfaceutil.Attribute( |
|
1606 | cachevfs = interfaceutil.Attribute( | |
1601 | """A VFS used to access the cache directory. |
|
1607 | """A VFS used to access the cache directory. | |
1602 |
|
1608 | |||
1603 | Typically .hg/cache. |
|
1609 | Typically .hg/cache. | |
1604 | """ |
|
1610 | """ | |
1605 | ) |
|
1611 | ) | |
1606 |
|
1612 | |||
1607 | wcachevfs = interfaceutil.Attribute( |
|
1613 | wcachevfs = interfaceutil.Attribute( | |
1608 | """A VFS used to access the cache directory dedicated to working copy |
|
1614 | """A VFS used to access the cache directory dedicated to working copy | |
1609 |
|
1615 | |||
1610 | Typically .hg/wcache. |
|
1616 | Typically .hg/wcache. | |
1611 | """ |
|
1617 | """ | |
1612 | ) |
|
1618 | ) | |
1613 |
|
1619 | |||
1614 | filteredrevcache = interfaceutil.Attribute( |
|
1620 | filteredrevcache = interfaceutil.Attribute( | |
1615 | """Holds sets of revisions to be filtered.""" |
|
1621 | """Holds sets of revisions to be filtered.""" | |
1616 | ) |
|
1622 | ) | |
1617 |
|
1623 | |||
1618 | names = interfaceutil.Attribute("""A ``namespaces`` instance.""") |
|
1624 | names = interfaceutil.Attribute("""A ``namespaces`` instance.""") | |
1619 |
|
1625 | |||
1620 | filecopiesmode = interfaceutil.Attribute( |
|
1626 | filecopiesmode = interfaceutil.Attribute( | |
1621 | """The way files copies should be dealt with in this repo.""" |
|
1627 | """The way files copies should be dealt with in this repo.""" | |
1622 | ) |
|
1628 | ) | |
1623 |
|
1629 | |||
1624 | def close(): |
|
1630 | def close(): | |
1625 | """Close the handle on this repository.""" |
|
1631 | """Close the handle on this repository.""" | |
1626 |
|
1632 | |||
1627 | def peer(path=None): |
|
1633 | def peer(path=None): | |
1628 | """Obtain an object conforming to the ``peer`` interface.""" |
|
1634 | """Obtain an object conforming to the ``peer`` interface.""" | |
1629 |
|
1635 | |||
1630 | def unfiltered(): |
|
1636 | def unfiltered(): | |
1631 | """Obtain an unfiltered/raw view of this repo.""" |
|
1637 | """Obtain an unfiltered/raw view of this repo.""" | |
1632 |
|
1638 | |||
1633 | def filtered(name, visibilityexceptions=None): |
|
1639 | def filtered(name, visibilityexceptions=None): | |
1634 | """Obtain a named view of this repository.""" |
|
1640 | """Obtain a named view of this repository.""" | |
1635 |
|
1641 | |||
1636 | obsstore = interfaceutil.Attribute("""A store of obsolescence data.""") |
|
1642 | obsstore = interfaceutil.Attribute("""A store of obsolescence data.""") | |
1637 |
|
1643 | |||
1638 | changelog = interfaceutil.Attribute("""A handle on the changelog revlog.""") |
|
1644 | changelog = interfaceutil.Attribute("""A handle on the changelog revlog.""") | |
1639 |
|
1645 | |||
1640 | manifestlog = interfaceutil.Attribute( |
|
1646 | manifestlog = interfaceutil.Attribute( | |
1641 | """An instance conforming to the ``imanifestlog`` interface. |
|
1647 | """An instance conforming to the ``imanifestlog`` interface. | |
1642 |
|
1648 | |||
1643 | Provides access to manifests for the repository. |
|
1649 | Provides access to manifests for the repository. | |
1644 | """ |
|
1650 | """ | |
1645 | ) |
|
1651 | ) | |
1646 |
|
1652 | |||
1647 | dirstate = interfaceutil.Attribute("""Working directory state.""") |
|
1653 | dirstate = interfaceutil.Attribute("""Working directory state.""") | |
1648 |
|
1654 | |||
1649 | narrowpats = interfaceutil.Attribute( |
|
1655 | narrowpats = interfaceutil.Attribute( | |
1650 | """Matcher patterns for this repository's narrowspec.""" |
|
1656 | """Matcher patterns for this repository's narrowspec.""" | |
1651 | ) |
|
1657 | ) | |
1652 |
|
1658 | |||
1653 | def narrowmatch(match=None, includeexact=False): |
|
1659 | def narrowmatch(match=None, includeexact=False): | |
1654 | """Obtain a matcher for the narrowspec.""" |
|
1660 | """Obtain a matcher for the narrowspec.""" | |
1655 |
|
1661 | |||
1656 | def setnarrowpats(newincludes, newexcludes): |
|
1662 | def setnarrowpats(newincludes, newexcludes): | |
1657 | """Define the narrowspec for this repository.""" |
|
1663 | """Define the narrowspec for this repository.""" | |
1658 |
|
1664 | |||
1659 | def __getitem__(changeid): |
|
1665 | def __getitem__(changeid): | |
1660 | """Try to resolve a changectx.""" |
|
1666 | """Try to resolve a changectx.""" | |
1661 |
|
1667 | |||
1662 | def __contains__(changeid): |
|
1668 | def __contains__(changeid): | |
1663 | """Whether a changeset exists.""" |
|
1669 | """Whether a changeset exists.""" | |
1664 |
|
1670 | |||
1665 | def __nonzero__(): |
|
1671 | def __nonzero__(): | |
1666 | """Always returns True.""" |
|
1672 | """Always returns True.""" | |
1667 | return True |
|
1673 | return True | |
1668 |
|
1674 | |||
1669 | __bool__ = __nonzero__ |
|
1675 | __bool__ = __nonzero__ | |
1670 |
|
1676 | |||
1671 | def __len__(): |
|
1677 | def __len__(): | |
1672 | """Returns the number of changesets in the repo.""" |
|
1678 | """Returns the number of changesets in the repo.""" | |
1673 |
|
1679 | |||
1674 | def __iter__(): |
|
1680 | def __iter__(): | |
1675 | """Iterate over revisions in the changelog.""" |
|
1681 | """Iterate over revisions in the changelog.""" | |
1676 |
|
1682 | |||
1677 | def revs(expr, *args): |
|
1683 | def revs(expr, *args): | |
1678 | """Evaluate a revset. |
|
1684 | """Evaluate a revset. | |
1679 |
|
1685 | |||
1680 | Emits revisions. |
|
1686 | Emits revisions. | |
1681 | """ |
|
1687 | """ | |
1682 |
|
1688 | |||
1683 | def set(expr, *args): |
|
1689 | def set(expr, *args): | |
1684 | """Evaluate a revset. |
|
1690 | """Evaluate a revset. | |
1685 |
|
1691 | |||
1686 | Emits changectx instances. |
|
1692 | Emits changectx instances. | |
1687 | """ |
|
1693 | """ | |
1688 |
|
1694 | |||
1689 | def anyrevs(specs, user=False, localalias=None): |
|
1695 | def anyrevs(specs, user=False, localalias=None): | |
1690 | """Find revisions matching one of the given revsets.""" |
|
1696 | """Find revisions matching one of the given revsets.""" | |
1691 |
|
1697 | |||
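A brief usage sketch of the revset methods above; the expressions are arbitrary examples:

    def example_revsets(repo):
        # revs() emits integer revision numbers.
        head_revs = list(repo.revs(b'heads(all())'))
        # set() emits changectx instances; %s performs revset quoting.
        ancestors = [ctx.hex() for ctx in repo.set(b'ancestors(%s)', b'tip')]
        return head_revs, ancestors
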
1692 | def url(): |
|
1698 | def url(): | |
1693 | """Returns a string representing the location of this repo.""" |
|
1699 | """Returns a string representing the location of this repo.""" | |
1694 |
|
1700 | |||
1695 | def hook(name, throw=False, **args): |
|
1701 | def hook(name, throw=False, **args): | |
1696 | """Call a hook.""" |
|
1702 | """Call a hook.""" | |
1697 |
|
1703 | |||
1698 | def tags(): |
|
1704 | def tags(): | |
1699 | """Return a mapping of tag to node.""" |
|
1705 | """Return a mapping of tag to node.""" | |
1700 |
|
1706 | |||
1701 | def tagtype(tagname): |
|
1707 | def tagtype(tagname): | |
1702 | """Return the type of a given tag.""" |
|
1708 | """Return the type of a given tag.""" | |
1703 |
|
1709 | |||
1704 | def tagslist(): |
|
1710 | def tagslist(): | |
1705 | """Return a list of tags ordered by revision.""" |
|
1711 | """Return a list of tags ordered by revision.""" | |
1706 |
|
1712 | |||
1707 | def nodetags(node): |
|
1713 | def nodetags(node): | |
1708 | """Return the tags associated with a node.""" |
|
1714 | """Return the tags associated with a node.""" | |
1709 |
|
1715 | |||
1710 | def nodebookmarks(node): |
|
1716 | def nodebookmarks(node): | |
1711 | """Return the list of bookmarks pointing to the specified node.""" |
|
1717 | """Return the list of bookmarks pointing to the specified node.""" | |
1712 |
|
1718 | |||
1713 | def branchmap(): |
|
1719 | def branchmap(): | |
1714 | """Return a mapping of branch to heads in that branch.""" |
|
1720 | """Return a mapping of branch to heads in that branch.""" | |
1715 |
|
1721 | |||
1716 | def revbranchcache(): |
|
1722 | def revbranchcache(): | |
1717 | pass |
|
1723 | pass | |
1718 |
|
1724 | |||
1719 | def register_changeset(rev, changelogrevision): |
|
1725 | def register_changeset(rev, changelogrevision): | |
1720 | """Extension point for caches for new nodes. |
|
1726 | """Extension point for caches for new nodes. | |
1721 |
|
1727 | |||
1722 | Multiple consumers are expected to need parts of the changelogrevision, |
|
1728 | Multiple consumers are expected to need parts of the changelogrevision, | |
1723 | so it is provided as an optimization to avoid duplicate lookups. A simple |

1729 | so it is provided as an optimization to avoid duplicate lookups. A simple | |
1724 | cache would be fragile when other revisions are accessed, too.""" |
|
1730 | cache would be fragile when other revisions are accessed, too.""" | |
1725 | pass |
|
1731 | pass | |
1726 |
|
1732 | |||
1727 | def branchtip(branchtip, ignoremissing=False): |
|
1733 | def branchtip(branchtip, ignoremissing=False): | |
1728 | """Return the tip node for a given branch.""" |
|
1734 | """Return the tip node for a given branch.""" | |
1729 |
|
1735 | |||
1730 | def lookup(key): |
|
1736 | def lookup(key): | |
1731 | """Resolve the node for a revision.""" |
|
1737 | """Resolve the node for a revision.""" | |
1732 |
|
1738 | |||
1733 | def lookupbranch(key): |
|
1739 | def lookupbranch(key): | |
1734 | """Look up the branch name of the given revision or branch name.""" |
|
1740 | """Look up the branch name of the given revision or branch name.""" | |
1735 |
|
1741 | |||
1736 | def known(nodes): |
|
1742 | def known(nodes): | |
1737 | """Determine whether a series of nodes is known. |
|
1743 | """Determine whether a series of nodes is known. | |
1738 |
|
1744 | |||
1739 | Returns a list of bools. |
|
1745 | Returns a list of bools. | |
1740 | """ |
|
1746 | """ | |
1741 |
|
1747 | |||
1742 | def local(): |
|
1748 | def local(): | |
1743 | """Whether the repository is local.""" |
|
1749 | """Whether the repository is local.""" | |
1744 | return True |
|
1750 | return True | |
1745 |
|
1751 | |||
1746 | def publishing(): |
|
1752 | def publishing(): | |
1747 | """Whether the repository is a publishing repository.""" |
|
1753 | """Whether the repository is a publishing repository.""" | |
1748 |
|
1754 | |||
1749 | def cancopy(): |
|
1755 | def cancopy(): | |
1750 | pass |
|
1756 | pass | |
1751 |
|
1757 | |||
1752 | def shared(): |
|
1758 | def shared(): | |
1753 | """The type of shared repository or None.""" |
|
1759 | """The type of shared repository or None.""" | |
1754 |
|
1760 | |||
1755 | def wjoin(f, *insidef): |
|
1761 | def wjoin(f, *insidef): | |
1756 | """Calls self.vfs.reljoin(self.root, f, *insidef)""" |
|
1762 | """Calls self.vfs.reljoin(self.root, f, *insidef)""" | |
1757 |
|
1763 | |||
1758 | def setparents(p1, p2): |
|
1764 | def setparents(p1, p2): | |
1759 | """Set the parent nodes of the working directory.""" |
|
1765 | """Set the parent nodes of the working directory.""" | |
1760 |
|
1766 | |||
1761 | def filectx(path, changeid=None, fileid=None): |
|
1767 | def filectx(path, changeid=None, fileid=None): | |
1762 | """Obtain a filectx for the given file revision.""" |
|
1768 | """Obtain a filectx for the given file revision.""" | |
1763 |
|
1769 | |||
1764 | def getcwd(): |
|
1770 | def getcwd(): | |
1765 | """Obtain the current working directory from the dirstate.""" |
|
1771 | """Obtain the current working directory from the dirstate.""" | |
1766 |
|
1772 | |||
1767 | def pathto(f, cwd=None): |
|
1773 | def pathto(f, cwd=None): | |
1768 | """Obtain the relative path to a file.""" |
|
1774 | """Obtain the relative path to a file.""" | |
1769 |
|
1775 | |||
1770 | def adddatafilter(name, fltr): |
|
1776 | def adddatafilter(name, fltr): | |
1771 | pass |
|
1777 | pass | |
1772 |
|
1778 | |||
1773 | def wread(filename): |
|
1779 | def wread(filename): | |
1774 | """Read a file from wvfs, using data filters.""" |
|
1780 | """Read a file from wvfs, using data filters.""" | |
1775 |
|
1781 | |||
1776 | def wwrite(filename, data, flags, backgroundclose=False, **kwargs): |
|
1782 | def wwrite(filename, data, flags, backgroundclose=False, **kwargs): | |
1777 | """Write data to a file in the wvfs, using data filters.""" |
|
1783 | """Write data to a file in the wvfs, using data filters.""" | |
1778 |
|
1784 | |||
1779 | def wwritedata(filename, data): |
|
1785 | def wwritedata(filename, data): | |
1780 | """Resolve data for writing to the wvfs, using data filters.""" |
|
1786 | """Resolve data for writing to the wvfs, using data filters.""" | |
1781 |
|
1787 | |||
1782 | def currenttransaction(): |
|
1788 | def currenttransaction(): | |
1783 | """Obtain the current transaction instance or None.""" |
|
1789 | """Obtain the current transaction instance or None.""" | |
1784 |
|
1790 | |||
1785 | def transaction(desc, report=None): |
|
1791 | def transaction(desc, report=None): | |
1786 | """Open a new transaction to write to the repository.""" |
|
1792 | """Open a new transaction to write to the repository.""" | |
1787 |
|
1793 | |||
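A sketch of the expected write pattern, assuming the returned transaction object supports the context-manager protocol (true of the stock implementation, but stated here as an assumption):

    def guarded_write(repo):
        with repo.lock():
            with repo.transaction(b'example-write') as tr:
                ...  # mutations performed here are journaled through tr
            # Leaving the block closes the transaction; an exception
            # raised inside it aborts and rolls the journal back.
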
1788 | def undofiles(): |
|
1794 | def undofiles(): | |
1789 | """Returns a list of (vfs, path) for files to undo transactions.""" |
|
1795 | """Returns a list of (vfs, path) for files to undo transactions.""" | |
1790 |
|
1796 | |||
1791 | def recover(): |
|
1797 | def recover(): | |
1792 | """Roll back an interrupted transaction.""" |
|
1798 | """Roll back an interrupted transaction.""" | |
1793 |
|
1799 | |||
1794 | def rollback(dryrun=False, force=False): |
|
1800 | def rollback(dryrun=False, force=False): | |
1795 | """Undo the last transaction. |
|
1801 | """Undo the last transaction. | |
1796 |
|
1802 | |||
1797 | DANGEROUS. |
|
1803 | DANGEROUS. | |
1798 | """ |
|
1804 | """ | |
1799 |
|
1805 | |||
1800 | def updatecaches(tr=None, full=False, caches=None): |
|
1806 | def updatecaches(tr=None, full=False, caches=None): | |
1801 | """Warm repo caches.""" |
|
1807 | """Warm repo caches.""" | |
1802 |
|
1808 | |||
1803 | def invalidatecaches(): |
|
1809 | def invalidatecaches(): | |
1804 | """Invalidate cached data due to the repository mutating.""" |
|
1810 | """Invalidate cached data due to the repository mutating.""" | |
1805 |
|
1811 | |||
1806 | def invalidatevolatilesets(): |
|
1812 | def invalidatevolatilesets(): | |
1807 | pass |
|
1813 | pass | |
1808 |
|
1814 | |||
1809 | def invalidatedirstate(): |
|
1815 | def invalidatedirstate(): | |
1810 | """Invalidate the dirstate.""" |
|
1816 | """Invalidate the dirstate.""" | |
1811 |
|
1817 | |||
1812 | def invalidate(clearfilecache=False): |
|
1818 | def invalidate(clearfilecache=False): | |
1813 | pass |
|
1819 | pass | |
1814 |
|
1820 | |||
1815 | def invalidateall(): |
|
1821 | def invalidateall(): | |
1816 | pass |
|
1822 | pass | |
1817 |
|
1823 | |||
1818 | def lock(wait=True): |
|
1824 | def lock(wait=True): | |
1819 | """Lock the repository store and return a lock instance.""" |
|
1825 | """Lock the repository store and return a lock instance.""" | |
1820 |
|
1826 | |||
1821 | def currentlock(): |
|
1827 | def currentlock(): | |
1822 | """Return the lock if it's held or None.""" |
|
1828 | """Return the lock if it's held or None.""" | |
1823 |
|
1829 | |||
1824 | def wlock(wait=True): |
|
1830 | def wlock(wait=True): | |
1825 | """Lock the non-store parts of the repository.""" |
|
1831 | """Lock the non-store parts of the repository.""" | |
1826 |
|
1832 | |||
1827 | def currentwlock(): |
|
1833 | def currentwlock(): | |
1828 | """Return the wlock if it's held or None.""" |
|
1834 | """Return the wlock if it's held or None.""" | |
1829 |
|
1835 | |||
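A sketch of conventional lock ordering with the two methods above: the working-copy lock is taken before the store lock, an invariant of the stock implementation that is assumed here:

    def mutate_everything(repo):
        with repo.wlock():     # non-store state: dirstate and friends
            with repo.lock():  # store: changelog, manifests, filelogs
                ...            # safe to touch both sides here
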
1830 | def checkcommitpatterns(wctx, match, status, fail): |
|
1836 | def checkcommitpatterns(wctx, match, status, fail): | |
1831 | pass |
|
1837 | pass | |
1832 |
|
1838 | |||
1833 | def commit( |
|
1839 | def commit( | |
1834 | text=b'', |
|
1840 | text=b'', | |
1835 | user=None, |
|
1841 | user=None, | |
1836 | date=None, |
|
1842 | date=None, | |
1837 | match=None, |
|
1843 | match=None, | |
1838 | force=False, |
|
1844 | force=False, | |
1839 | editor=False, |
|
1845 | editor=False, | |
1840 | extra=None, |
|
1846 | extra=None, | |
1841 | ): |
|
1847 | ): | |
1842 | """Add a new revision to the repository.""" |
|
1848 | """Add a new revision to the repository.""" | |
1843 |
|
1849 | |||
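A hedged sketch of driving ``commit()``; the message, user, and file list are placeholders, and ``scmutil.matchfiles()`` is used on the assumption that a matcher is wanted to limit the commit:

    from mercurial import scmutil

    def commit_one_file(repo):
        match = scmutil.matchfiles(repo, [b'src/module.py'])
        return repo.commit(
            text=b'example: fix empty-input handling',
            user=b'Jane Doe <jane@example.com>',
            match=match,  # restrict the commit to matched files
        )
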
1844 | def commitctx(ctx, error=False, origctx=None): |
|
1850 | def commitctx(ctx, error=False, origctx=None): | |
1845 | """Commit a commitctx instance to the repository.""" |
|
1851 | """Commit a commitctx instance to the repository.""" | |
1846 |
|
1852 | |||
1847 | def destroying(): |
|
1853 | def destroying(): | |
1848 | """Inform the repository that nodes are about to be destroyed.""" |
|
1854 | """Inform the repository that nodes are about to be destroyed.""" | |
1849 |
|
1855 | |||
1850 | def destroyed(): |
|
1856 | def destroyed(): | |
1851 | """Inform the repository that nodes have been destroyed.""" |
|
1857 | """Inform the repository that nodes have been destroyed.""" | |
1852 |
|
1858 | |||
1853 | def status( |
|
1859 | def status( | |
1854 | node1=b'.', |
|
1860 | node1=b'.', | |
1855 | node2=None, |
|
1861 | node2=None, | |
1856 | match=None, |
|
1862 | match=None, | |
1857 | ignored=False, |
|
1863 | ignored=False, | |
1858 | clean=False, |
|
1864 | clean=False, | |
1859 | unknown=False, |
|
1865 | unknown=False, | |
1860 | listsubrepos=False, |
|
1866 | listsubrepos=False, | |
1861 | ): |
|
1867 | ): | |
1862 | """Convenience method to call repo[x].status().""" |
|
1868 | """Convenience method to call repo[x].status().""" | |
1863 |
|
1869 | |||
1864 | def addpostdsstatus(ps): |
|
1870 | def addpostdsstatus(ps): | |
1865 | pass |
|
1871 | pass | |
1866 |
|
1872 | |||
1867 | def postdsstatus(): |
|
1873 | def postdsstatus(): | |
1868 | pass |
|
1874 | pass | |
1869 |
|
1875 | |||
1870 | def clearpostdsstatus(): |
|
1876 | def clearpostdsstatus(): | |
1871 | pass |
|
1877 | pass | |
1872 |
|
1878 | |||
1873 | def heads(start=None): |
|
1879 | def heads(start=None): | |
1874 | """Obtain list of nodes that are DAG heads.""" |
|
1880 | """Obtain list of nodes that are DAG heads.""" | |
1875 |
|
1881 | |||
1876 | def branchheads(branch=None, start=None, closed=False): |
|
1882 | def branchheads(branch=None, start=None, closed=False): | |
1877 | pass |
|
1883 | pass | |
1878 |
|
1884 | |||
1879 | def branches(nodes): |
|
1885 | def branches(nodes): | |
1880 | pass |
|
1886 | pass | |
1881 |
|
1887 | |||
1882 | def between(pairs): |
|
1888 | def between(pairs): | |
1883 | pass |
|
1889 | pass | |
1884 |
|
1890 | |||
1885 | def checkpush(pushop): |
|
1891 | def checkpush(pushop): | |
1886 | pass |
|
1892 | pass | |
1887 |
|
1893 | |||
1888 | prepushoutgoinghooks = interfaceutil.Attribute("""util.hooks instance.""") |
|
1894 | prepushoutgoinghooks = interfaceutil.Attribute("""util.hooks instance.""") | |
1889 |
|
1895 | |||
1890 | def pushkey(namespace, key, old, new): |
|
1896 | def pushkey(namespace, key, old, new): | |
1891 | pass |
|
1897 | pass | |
1892 |
|
1898 | |||
1893 | def listkeys(namespace): |
|
1899 | def listkeys(namespace): | |
1894 | pass |
|
1900 | pass | |
1895 |
|
1901 | |||
1896 | def debugwireargs(one, two, three=None, four=None, five=None): |
|
1902 | def debugwireargs(one, two, three=None, four=None, five=None): | |
1897 | pass |
|
1903 | pass | |
1898 |
|
1904 | |||
1899 | def savecommitmessage(text): |
|
1905 | def savecommitmessage(text): | |
1900 | pass |
|
1906 | pass | |
1901 |
|
1907 | |||
1902 | def register_sidedata_computer( |
|
1908 | def register_sidedata_computer( | |
1903 | kind, category, keys, computer, flags, replace=False |
|
1909 | kind, category, keys, computer, flags, replace=False | |
1904 | ): |
|
1910 | ): | |
1905 | pass |
|
1911 | pass | |
1906 |
|
1912 | |||
1907 | def register_wanted_sidedata(category): |
|
1913 | def register_wanted_sidedata(category): | |
1908 | pass |
|
1914 | pass | |
1909 |
|
1915 | |||
1910 |
|
1916 | |||
1911 | class completelocalrepository( |
|
1917 | class completelocalrepository( | |
1912 | ilocalrepositorymain, ilocalrepositoryfilestorage |
|
1918 | ilocalrepositorymain, ilocalrepositoryfilestorage | |
1913 | ): |
|
1919 | ): | |
1914 | """Complete interface for a local repository.""" |
|
1920 | """Complete interface for a local repository.""" | |
1915 |
|
1921 | |||
1916 |
|
1922 | |||
1917 | class iwireprotocolcommandcacher(interfaceutil.Interface): |
|
1923 | class iwireprotocolcommandcacher(interfaceutil.Interface): | |
1918 | """Represents a caching backend for wire protocol commands. |
|
1924 | """Represents a caching backend for wire protocol commands. | |
1919 |
|
1925 | |||
1920 | Wire protocol version 2 supports transparent caching of many commands. |
|
1926 | Wire protocol version 2 supports transparent caching of many commands. | |
1921 | To leverage this caching, servers can activate objects that cache |
|
1927 | To leverage this caching, servers can activate objects that cache | |
1922 | command responses. Objects handle both cache writing and reading. |
|
1928 | command responses. Objects handle both cache writing and reading. | |
1923 | This interface defines how that response caching mechanism works. |
|
1929 | This interface defines how that response caching mechanism works. | |
1924 |
|
1930 | |||
1925 | Wire protocol version 2 commands emit a series of objects that are |
|
1931 | Wire protocol version 2 commands emit a series of objects that are | |
1926 | serialized and sent to the client. The caching layer exists between |
|
1932 | serialized and sent to the client. The caching layer exists between | |
1927 | the invocation of the command function and the sending of its output |
|
1933 | the invocation of the command function and the sending of its output | |
1928 | objects to an output layer. |
|
1934 | objects to an output layer. | |
1929 |
|
1935 | |||
1930 | Instances of this interface represent a binding to a cache that |
|
1936 | Instances of this interface represent a binding to a cache that | |
1931 | can serve a response (in place of calling a command function) and/or |
|
1937 | can serve a response (in place of calling a command function) and/or | |
1932 | write responses to a cache for subsequent use. |
|
1938 | write responses to a cache for subsequent use. | |
1933 |
|
1939 | |||
1934 | When a command request arrives, the following happens with regards |
|
1940 | When a command request arrives, the following happens with regards | |
1935 | to this interface: |
|
1941 | to this interface: | |
1936 |
|
1942 | |||
1937 | 1. The server determines whether the command request is cacheable. |
|
1943 | 1. The server determines whether the command request is cacheable. | |
1938 | 2. If it is, an instance of this interface is spawned. |
|
1944 | 2. If it is, an instance of this interface is spawned. | |
1939 | 3. The cacher is activated in a context manager (``__enter__`` is called). |
|
1945 | 3. The cacher is activated in a context manager (``__enter__`` is called). | |
1940 | 4. A cache *key* for that request is derived. This will call the |
|
1946 | 4. A cache *key* for that request is derived. This will call the | |
1941 | instance's ``adjustcachekeystate()`` method so the derivation |
|
1947 | instance's ``adjustcachekeystate()`` method so the derivation | |
1942 | can be influenced. |
|
1948 | can be influenced. | |
1943 | 5. The cacher is informed of the derived cache key via a call to |
|
1949 | 5. The cacher is informed of the derived cache key via a call to | |
1944 | ``setcachekey()``. |
|
1950 | ``setcachekey()``. | |
1945 | 6. The cacher's ``lookup()`` method is called to test for presence of |
|
1951 | 6. The cacher's ``lookup()`` method is called to test for presence of | |
1946 | the derived key in the cache. |
|
1952 | the derived key in the cache. | |
1947 | 7. If ``lookup()`` returns a hit, that cached result is used in place |
|
1953 | 7. If ``lookup()`` returns a hit, that cached result is used in place | |
1948 | of invoking the command function. ``__exit__`` is called and the instance |
|
1954 | of invoking the command function. ``__exit__`` is called and the instance | |
1949 | is discarded. |
|
1955 | is discarded. | |
1950 | 8. The command function is invoked. |
|
1956 | 8. The command function is invoked. | |
1951 | 9. ``onobject()`` is called for each object emitted by the command |
|
1957 | 9. ``onobject()`` is called for each object emitted by the command | |
1952 | function. |
|
1958 | function. | |
1953 | 10. After the final object is seen, ``onfinished()`` is called. |
|
1959 | 10. After the final object is seen, ``onfinished()`` is called. | |
1954 | 11. ``__exit__`` is called to signal the end of use of the instance. |
|
1960 | 11. ``__exit__`` is called to signal the end of use of the instance. | |
1955 |
|
1961 | |||
1956 | Cache *key* derivation can be influenced by the instance. |
|
1962 | Cache *key* derivation can be influenced by the instance. | |
1957 |
|
1963 | |||
1958 | Cache keys are initially derived from a deterministic representation of |

1964 | Cache keys are initially derived from a deterministic representation of | |
1959 | the command request. This includes the command name, arguments, protocol |
|
1965 | the command request. This includes the command name, arguments, protocol | |
1960 | version, etc. This initial key derivation is performed by CBOR-encoding a |
|
1966 | version, etc. This initial key derivation is performed by CBOR-encoding a | |
1961 | data structure and feeding that output into a hasher. |
|
1967 | data structure and feeding that output into a hasher. | |
1962 |
|
1968 | |||
1963 | Instances of this interface can influence this initial key derivation |
|
1969 | Instances of this interface can influence this initial key derivation | |
1964 | via ``adjustcachekeystate()``. |
|
1970 | via ``adjustcachekeystate()``. | |
1965 |
|
1971 | |||
1966 | The instance is informed of the derived cache key via a call to |
|
1972 | The instance is informed of the derived cache key via a call to | |
1967 | ``setcachekey()``. The instance must store the key locally so it can |
|
1973 | ``setcachekey()``. The instance must store the key locally so it can | |
1968 | be consulted on subsequent operations that may require it. |
|
1974 | be consulted on subsequent operations that may require it. | |
1969 |
|
1975 | |||
1970 | When constructed, the instance has access to a callable that can be used |
|
1976 | When constructed, the instance has access to a callable that can be used | |
1971 | for encoding response objects. This callable receives as its single |
|
1977 | for encoding response objects. This callable receives as its single | |
1972 | argument an object emitted by a command function. It returns an iterable |
|
1978 | argument an object emitted by a command function. It returns an iterable | |
1973 | of bytes chunks representing the encoded object. Unless the cacher is |
|
1979 | of bytes chunks representing the encoded object. Unless the cacher is | |
1974 | caching native Python objects in memory or has a way of reconstructing |
|
1980 | caching native Python objects in memory or has a way of reconstructing | |
1975 | the original Python objects, implementations typically call this function |
|
1981 | the original Python objects, implementations typically call this function | |
1976 | to produce bytes from the output objects and then store those bytes in |
|
1982 | to produce bytes from the output objects and then store those bytes in | |
1977 | the cache. When it comes time to re-emit those bytes, they are wrapped |
|
1983 | the cache. When it comes time to re-emit those bytes, they are wrapped | |
1978 | in a ``wireprototypes.encodedresponse`` instance to tell the output |
|
1984 | in a ``wireprototypes.encodedresponse`` instance to tell the output | |
1979 | layer that they are pre-encoded. |
|
1985 | layer that they are pre-encoded. | |
1980 |
|
1986 | |||
1981 | When receiving the objects emitted by the command function, instances |
|
1987 | When receiving the objects emitted by the command function, instances | |
1982 | can choose what to do with those objects. The simplest thing to do is |
|
1988 | can choose what to do with those objects. The simplest thing to do is | |
1983 | re-emit the original objects. They will be forwarded to the output |
|
1989 | re-emit the original objects. They will be forwarded to the output | |
1984 | layer and will be processed as if the cacher did not exist. |
|
1990 | layer and will be processed as if the cacher did not exist. | |
1985 |
|
1991 | |||
1986 | Implementations could also choose to not emit objects - instead locally |
|
1992 | Implementations could also choose to not emit objects - instead locally | |
1987 | buffering objects or their encoded representation. They could then emit |
|
1993 | buffering objects or their encoded representation. They could then emit | |
1988 | a single "coalesced" object when ``onfinished()`` is called. In |
|
1994 | a single "coalesced" object when ``onfinished()`` is called. In | |
1989 | this way, the implementation would function as a filtering layer of |
|
1995 | this way, the implementation would function as a filtering layer of | |
1990 | sorts. |
|
1996 | sorts. | |
1991 |
|
1997 | |||
1992 | When caching objects, typically the encoded form of the object will |
|
1998 | When caching objects, typically the encoded form of the object will | |
1993 | be stored. Keep in mind that if the original object is forwarded to |
|
1999 | be stored. Keep in mind that if the original object is forwarded to | |
1994 | the output layer, it will need to be encoded there as well. For large |
|
2000 | the output layer, it will need to be encoded there as well. For large | |
1995 | output, this redundant encoding could add overhead. Implementations |
|
2001 | output, this redundant encoding could add overhead. Implementations | |
1996 | could wrap the encoded object data in ``wireprototypes.encodedresponse`` |
|
2002 | could wrap the encoded object data in ``wireprototypes.encodedresponse`` | |
1997 | instances to avoid this overhead. |
|
2003 | instances to avoid this overhead. | |
1998 | """ |
|
2004 | """ | |
1999 |
|
2005 | |||
2000 | def __enter__(): |
|
2006 | def __enter__(): | |
2001 | """Marks the instance as active. |
|
2007 | """Marks the instance as active. | |
2002 |
|
2008 | |||
2003 | Should return self. |
|
2009 | Should return self. | |
2004 | """ |
|
2010 | """ | |
2005 |
|
2011 | |||
2006 | def __exit__(exctype, excvalue, exctb): |
|
2012 | def __exit__(exctype, excvalue, exctb): | |
2007 | """Called when cacher is no longer used. |
|
2013 | """Called when cacher is no longer used. | |
2008 |
|
2014 | |||
2009 | This can be used by implementations to perform cleanup actions (e.g. |
|
2015 | This can be used by implementations to perform cleanup actions (e.g. | |
2010 | disconnecting network sockets, aborting a partially cached response. |
|
2016 | disconnecting network sockets, aborting a partially cached response. | |
2011 | """ |
|
2017 | """ | |
2012 |
|
2018 | |||
2013 | def adjustcachekeystate(state): |
|
2019 | def adjustcachekeystate(state): | |
2014 | """Influences cache key derivation by adjusting state to derive key. |
|
2020 | """Influences cache key derivation by adjusting state to derive key. | |
2015 |
|
2021 | |||
2016 | A dict defining the state used to derive the cache key is passed. |
|
2022 | A dict defining the state used to derive the cache key is passed. | |
2017 |
|
2023 | |||
2018 | Implementations can modify this dict to record additional state that |

2024 | Implementations can modify this dict to record additional state that | |
2019 | should influence key derivation. |

2025 | should influence key derivation. | |
2020 |
|
2026 | |||
2021 | Implementations are *highly* encouraged to not modify or delete |
|
2027 | Implementations are *highly* encouraged to not modify or delete | |
2022 | existing keys. |
|
2028 | existing keys. | |
2023 | """ |
|
2029 | """ | |
2024 |
|
2030 | |||
2025 | def setcachekey(key): |
|
2031 | def setcachekey(key): | |
2026 | """Record the derived cache key for this request. |
|
2032 | """Record the derived cache key for this request. | |
2027 |
|
2033 | |||
2028 | Instances may mutate the key for internal usage, as desired; e.g. |

2034 | Instances may mutate the key for internal usage, as desired; e.g. | |
2029 | instances may wish to prepend the repo name, introduce path |
|
2035 | instances may wish to prepend the repo name, introduce path | |
2030 | components for filesystem or URL addressing, etc. Behavior is up to |
|
2036 | components for filesystem or URL addressing, etc. Behavior is up to | |
2031 | the cache. |
|
2037 | the cache. | |
2032 |
|
2038 | |||
2033 | Returns a bool indicating if the request is cacheable by this |
|
2039 | Returns a bool indicating if the request is cacheable by this | |
2034 | instance. |
|
2040 | instance. | |
2035 | """ |
|
2041 | """ | |
2036 |
|
2042 | |||
2037 | def lookup(): |
|
2043 | def lookup(): | |
2038 | """Attempt to resolve an entry in the cache. |
|
2044 | """Attempt to resolve an entry in the cache. | |
2039 |
|
2045 | |||
2040 | The instance is instructed to look for the cache key that it was |
|
2046 | The instance is instructed to look for the cache key that it was | |
2041 | informed about via the call to ``setcachekey()``. |
|
2047 | informed about via the call to ``setcachekey()``. | |
2042 |
|
2048 | |||
2043 | If there's no cache hit or the cacher doesn't wish to use the cached |
|
2049 | If there's no cache hit or the cacher doesn't wish to use the cached | |
2044 | entry, ``None`` should be returned. |
|
2050 | entry, ``None`` should be returned. | |
2045 |
|
2051 | |||
2046 | Else, a dict defining the cached result should be returned. The |
|
2052 | Else, a dict defining the cached result should be returned. The | |
2047 | dict may have the following keys: |
|
2053 | dict may have the following keys: | |
2048 |
|
2054 | |||
2049 | objs |
|
2055 | objs | |
2050 | An iterable of objects that should be sent to the client. That |
|
2056 | An iterable of objects that should be sent to the client. That | |
2051 | iterable of objects is expected to be what the command function |
|
2057 | iterable of objects is expected to be what the command function | |
2052 | would return if invoked or an equivalent representation thereof. |
|
2058 | would return if invoked or an equivalent representation thereof. | |
2053 | """ |
|
2059 | """ | |
2054 |
|
2060 | |||
2055 | def onobject(obj): |
|
2061 | def onobject(obj): | |
2056 | """Called when a new object is emitted from the command function. |
|
2062 | """Called when a new object is emitted from the command function. | |
2057 |
|
2063 | |||
2058 | Receives as its argument the object that was emitted from the |
|
2064 | Receives as its argument the object that was emitted from the | |
2059 | command function. |
|
2065 | command function. | |
2060 |
|
2066 | |||
2061 | This method returns an iterator of objects to forward to the output |
|
2067 | This method returns an iterator of objects to forward to the output | |
2062 | layer. The easiest implementation is a generator that just |
|
2068 | layer. The easiest implementation is a generator that just | |
2063 | ``yield obj``. |
|
2069 | ``yield obj``. | |
2064 | """ |
|
2070 | """ | |
2065 |
|
2071 | |||
2066 | def onfinished(): |
|
2072 | def onfinished(): | |
2067 | """Called after all objects have been emitted from the command function. |
|
2073 | """Called after all objects have been emitted from the command function. | |
2068 |
|
2074 | |||
2069 | Implementations should return an iterator of objects to forward to |
|
2075 | Implementations should return an iterator of objects to forward to | |
2070 | the output layer. |
|
2076 | the output layer. | |
2071 |
|
2077 | |||
2072 | This method can be a generator. |
|
2078 | This method can be a generator. | |
2073 | """ |
|
2079 | """ |
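To tie the lifecycle above together, here is a minimal in-memory cacher sketch. The dict backend, the ``encodefn`` constructor argument, and the ``'objs'`` result key follow the docstrings on this page but are otherwise assumptions, not an implementation shipped with Mercurial:

    from mercurial import wireprototypes

    class memorycacher:
        _cache = {}  # cache key -> list of pre-encoded byte chunks

        def __init__(self, encodefn):
            self.encodefn = encodefn  # encodes one output object to chunks
            self.key = None
            self.buffered = []

        def __enter__(self):
            return self  # step 3: activation

        def __exit__(self, exctype, excvalue, exctb):
            if exctype is not None:
                self.buffered = None  # drop a partially cached response

        def adjustcachekeystate(self, state):
            # Step 4: mix extra state into key derivation; existing keys
            # are left untouched, as the docstring urges.
            state[b'example-cacher-version'] = b'1'

        def setcachekey(self, key):
            self.key = key  # step 5
            return True     # this request is cacheable

        def lookup(self):
            # Steps 6-7: serve pre-encoded chunks on a hit, wrapped so
            # the output layer does not encode them a second time.
            if self.key not in self._cache:
                return None
            return {
                'objs': [
                    wireprototypes.encodedresponse(chunk)
                    for chunk in self._cache[self.key]
                ],
            }

        def onobject(self, obj):
            # Step 9: buffer the encoded form, forward the original.
            self.buffered.extend(self.encodefn(obj))
            yield obj

        def onfinished(self):
            # Step 10: persist the buffered response for future requests.
            self._cache[self.key] = self.buffered
            return []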