# This software may be used and distributed according to the terms of the
# GNU General Public License version 2 or any later version.

"""advertise pre-generated bundles to seed clones

"clonebundles" is a server-side extension used to advertise the existence
of pre-generated, externally hosted bundle files to clients that are
cloning so that cloning can be faster, more reliable, and require fewer
resources on the server. "pullbundles" is a related feature for sending
pre-generated bundle files to clients as part of pull operations.

Cloning can be a CPU and I/O intensive operation on servers. Traditionally,
the server, in response to a client's request to clone, dynamically generates
a bundle containing the entire repository content and sends it to the client.
There is no caching on the server and the server will have to redundantly
generate the same outgoing bundle in response to each clone request. For
servers with large repositories or with high clone volume, the load from
clones can make scaling the server challenging and costly.

This extension provides server operators the ability to offload
potentially expensive clone load to an external service. Pre-generated
bundles also allow using more CPU intensive compression, reducing the
effective bandwidth requirements.

Here's how clone bundles work:

1. A server operator establishes a mechanism for making bundle files available
   on a hosting service where Mercurial clients can fetch them.
2. A manifest file listing available bundle URLs and some optional metadata
   is added to the Mercurial repository on the server.
3. A client initiates a clone against a clone bundles aware server.
4. The client sees the server is advertising clone bundles and fetches the
   manifest listing available bundles.
5. The client filters and sorts the available bundles based on what it
   supports and prefers.
6. The client downloads and applies an available bundle from the
   server-specified URL.
7. The client reconnects to the original server and performs the equivalent
   of :hg:`pull` to retrieve all repository data not in the bundle. (The
   repository could have been updated between when the bundle was created
   and when the client started the clone.) This may use "pullbundles".

Instead of the server generating full repository bundles for every clone
request, it generates full bundles once and they are subsequently reused to
bootstrap new clones. The server may still transfer data at clone time.
However, this is only data that has been added/changed since the bundle was
created. For large, established repositories, this can reduce server load for
clones to less than 1% of original.

Here's how pullbundles work:

1. A manifest file listing available bundles and describing the revisions
   is added to the Mercurial repository on the server.
2. A new-enough client informs the server that it supports partial pulls
   and initiates a pull.
3. If the server has pull bundles enabled and sees the client advertising
   partial pulls, it checks for a matching pull bundle in the manifest.
   A bundle matches if its format is supported by the client, the client
   already has the required revisions, and the client needs something
   from the bundle.
4. If there is at least one matching bundle, the server sends it to the client.
5. The client applies the bundle and notices that the server reply was
   incomplete. It initiates another pull.

To work, this extension requires the following of server operators:

* Generating bundle files of repository content (typically periodically,
  such as once per day).
* Clone bundles: A file server that clients have network access to and that
  Python knows how to talk to through its normal URL handling facility
  (typically an HTTP/HTTPS server).
* A process for keeping the bundles manifest in sync with available bundle
  files.

Strictly speaking, using a static file hosting server isn't required: a server
operator could use a dynamic service for retrieving bundle data. However,
static file hosting services are simple and scalable and should be sufficient
for most needs.

Bundle files can be generated with the :hg:`bundle` command. Typically
:hg:`bundle --all` is used to produce a bundle of the entire repository.

The bundlespec option `stream` (see :hg:`help bundlespec`) can be used to
produce a special *streaming clonebundle*, typically using
:hg:`bundle --all --type="none-streamv2"`. These are bundle files that are
extremely efficient to produce and consume (read: fast). However, they are
larger than traditional bundle formats and require that clients support the
exact set of repository data store formats in use by the repository that
created them. Typically, a newer server can serve data that is compatible
with older clients. However, *streaming clone bundles* don't have this
guarantee. **Server operators need to be aware that newer versions of
Mercurial may produce streaming clone bundles incompatible with older
Mercurial versions.**

A server operator is responsible for creating a ``.hg/clonebundles.manifest``
file containing the list of available bundle files suitable for seeding
clones. If this file does not exist, the repository will not advertise the
existence of clone bundles when clients connect. For pull bundles,
``.hg/pullbundles.manifest`` is used.

The manifest file contains a newline (\\n) delimited list of entries.

Each line in this file defines an available bundle. Lines have the format:

    <URL> [<key>=<value>[ <key>=<value>]]

That is, a URL followed by an optional, space-delimited list of key=value
pairs describing additional properties of this bundle. Both keys and values
are URI encoded.
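
As an illustrative sketch of this format (this is not Mercurial's actual
parsing code, and the URL and attributes in the example are hypothetical),
one manifest line could be decoded in Python like so:

```python
from urllib.parse import unquote


def parse_manifest_line(line):
    """Split one manifest entry into (URL, attributes dict).

    Fields are whitespace-separated; keys and values are URI encoded,
    so percent-escapes are decoded after splitting.
    """
    fields = line.split()
    url = unquote(fields[0])
    attrs = {}
    for field in fields[1:]:
        key, _sep, value = field.partition('=')
        attrs[unquote(key)] = unquote(value)
    return url, attrs


url, attrs = parse_manifest_line(
    'https://example.com/full.hg BUNDLESPEC=zstd-v2 REQUIREDRAM=64MB'
)
```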

For pull bundles, the URL is a path under the ``.hg`` directory of the
repository.

Keys in UPPERCASE are reserved for use by Mercurial and are defined below.
All non-uppercase keys can be used by site installations. An example use
for custom properties is to use the *datacenter* attribute to define which
data center a file is hosted in. Clients could then prefer a server in the
data center closest to them.
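
For instance, a hypothetical manifest advertising the same bundle from two
data centers (all URLs and values below are made up) could look like::

    https://mirror-us.example.com/full.hg BUNDLESPEC=zstd-v2 datacenter=us-west
    https://mirror-eu.example.com/full.hg BUNDLESPEC=zstd-v2 datacenter=eu-central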

The following reserved keys are currently defined:

BUNDLESPEC
    A "bundle specification" string that describes the type of the bundle.

    These are string values that are accepted by the "--type" argument of
    :hg:`bundle`.

    The values are parsed in strict mode, which means they must be of the
    "<compression>-<type>" form. See
    mercurial.exchange.parsebundlespec() for more details.

    :hg:`debugbundle --spec` can be used to print the bundle specification
    string for a bundle file. The output of this command can be used verbatim
    for the value of ``BUNDLESPEC`` (it is already escaped).
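
    For example, given a bundle generated with
    :hg:`bundle --all --type 'zstd-v2' full.hg` (the file name here is
    illustrative), the specification string to put in the manifest can be
    printed with::

        $ hg debugbundle --spec full.hg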

    Clients will automatically filter out specifications that are unknown or
    unsupported so they won't attempt to download something that likely won't
    apply.

    The actual value doesn't impact client behavior beyond filtering:
    clients will still sniff the bundle type from the header of downloaded
    files.

    **Use of this key is highly recommended**, as it allows clients to
    easily skip unsupported bundles. If this key is not defined, an old
    client may attempt to apply a bundle that it is incapable of reading.

REQUIRESNI
    Whether Server Name Indication (SNI) is required to connect to the URL.
    SNI allows servers to use multiple certificates on the same IP. It is
    somewhat common in CDNs and other hosting providers. Older Python
    versions do not support SNI. Defining this attribute enables clients
    with older Python versions to filter this entry without experiencing
    an opaque SSL failure at connection time.

    If this is defined, it is important to advertise a non-SNI fallback
    URL or clients running old Python releases may not be able to clone
    with the clonebundles facility.

    Value should be "true".

REQUIREDRAM
    Value specifies expected memory requirements to decode the payload.
    Values can have suffixes for common byte sizes, e.g. "64MB".

    This key is often used with zstd-compressed bundles using a high
    compression level / window size, which can require 100+ MB of memory
    to decode.

heads
    Used for pull bundles. This contains the ``;`` separated changeset
    hashes of the heads of the bundle content.

bases
    Used for pull bundles. This contains the ``;`` separated changeset
    hashes of the roots of the bundle content. This can be skipped if
    the bundle was created without ``--base``.
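
A hypothetical ``.hg/pullbundles.manifest`` entry using these keys (the
file name and changeset hashes below are fabricated placeholders) could
look like::

    partial-0.hg BUNDLESPEC=gzip-v2 heads=f00dbeefcafe;0123456789ab bases=ba5eba5eba5e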

Manifests can contain multiple entries. Assuming metadata is defined, clients
will filter entries from the manifest that they don't support. The remaining
entries are optionally sorted by client preferences
(``ui.clonebundleprefers`` config option). The client then attempts
to fetch the bundle at the first URL in the remaining list.
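
For example, a client could state a preference for zstd-compressed bundles
hosted in a nearby data center (this sketch assumes the server publishes a
custom ``datacenter`` attribute as described above)::

    [ui]
    clonebundleprefers=COMPRESSION=zstd, datacenter=us-west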

**Errors when downloading a bundle will fail the entire clone operation:
clients do not automatically fall back to a traditional clone.** The reason
for this is that if a server is using clone bundles, it is probably doing so
because the feature is necessary to help it scale. In other words, there
is an assumption that clone load will be offloaded to another service and
that the Mercurial server isn't responsible for serving this clone load.
If that other service experiences issues and clients start falling back to
the original Mercurial server en masse, the added clone load could overwhelm
the server and effectively take it offline. Not having clients
automatically fall back to cloning from the original server mitigates this
scenario.

Because there is no automatic Mercurial server fallback on failure of the
bundle hosting service, it is important for server operators to view the bundle
hosting service as an extension of the Mercurial server in terms of
availability and service level agreements: if the bundle hosting service goes
down, so does the ability for clients to clone. Note: clients will see a
message informing them how to bypass the clone bundles facility when a failure
occurs. So server operators should prepare for some people to follow these
instructions when a failure occurs, thus driving more load to the original
Mercurial server when the bundle hosting service fails.


inline clonebundles
-------------------

It is possible to transmit clonebundles inline in case repositories are
accessed over SSH. This avoids having to set up an external HTTPS server
and results in the same access control as already present for the SSH setup.

Inline clonebundles should be placed into the `.hg/bundle-cache` directory.
A clonebundle at `.hg/bundle-cache/mybundle.bundle` is referred to
in the `clonebundles.manifest` file as `peer-bundle-cache://mybundle.bundle`.
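
Such an inline bundle is then advertised like any other entry; a hypothetical
manifest line (the file name and bundlespec are illustrative) could be::

    peer-bundle-cache://mybundle.bundle BUNDLESPEC=gzip-v2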


auto-generation of clone bundles
--------------------------------

It is possible to set Mercurial to automatically re-generate clone bundles when
enough new content is available.

Mercurial will take care of the process asynchronously. The defined list of
bundle-types will be generated, uploaded, and advertised. Older bundles will get
decommissioned as newer ones replace them.

Bundles Generation:
...................

The extension can generate multiple variants of the clone bundle. Each
variant is defined by the "bundle-spec" it uses::

  [clone-bundles]
  auto-generate.formats= zstd-v2, gzip-v2

See `hg help bundlespec` for details about available options.

By default, new bundles are generated when 5% of the repository contents or at
least 1000 revisions are not contained in the cached bundles. These thresholds
can be controlled by the `clone-bundles.trigger.below-bundled-ratio` option
(default 0.95) and the `clone-bundles.trigger.revs` option (default 1000)::

  [clone-bundles]
  trigger.below-bundled-ratio=0.95
  trigger.revs=1000

This logic can be manually triggered using the `admin::clone-bundles-refresh`
command, or automatically on each repository change if
`clone-bundles.auto-generate.on-change` is set to `yes`::

  [clone-bundles]
  auto-generate.on-change=yes
  auto-generate.formats= zstd-v2, gzip-v2

Bundles Upload and Serving:
...........................

The generated bundles need to be made available to users through a "public" URL.
This should be done through the `clone-bundles.upload-command` configuration.
The value of this command should be a shell command. It will have access to
the bundle file path through the `$HGCB_BUNDLE_PATH` variable, and the
expected basename in the "public" URL is available as
`$HGCB_BUNDLE_BASENAME`::

  [clone-bundles]
  upload-command=sftp put $HGCB_BUNDLE_PATH \
      sftp://bundles.host/clone-bundles/$HGCB_BUNDLE_BASENAME

If the file was already uploaded, the command must still succeed.

After upload, the file should be available at a URL defined by
`clone-bundles.url-template`::

  [clone-bundles]
  url-template=https://bundles.host/cache/clone-bundles/{basename}

Old bundles cleanup:
....................

When new bundles are generated, the older ones are no longer necessary and can
be removed from storage. This is done through the `clone-bundles.delete-command`
configuration. The command is given the URL of the artifact to delete through
the `$HGCB_BUNDLE_URL` environment variable::

  [clone-bundles]
  delete-command=sftp rm sftp://bundles.host/clone-bundles/$HGCB_BUNDLE_BASENAME

If the file was already deleted, the command must still succeed.
"""


import os
import weakref

from mercurial.i18n import _

from mercurial import (
    bundlecaches,
    commands,
    error,
    extensions,
    localrepo,
    lock,
    node,
    registrar,
    util,
    wireprotov1server,
)


from mercurial.utils import (
    procutil,
)

testedwith = b'ships-with-hg-core'


def capabilities(orig, repo, proto):
    caps = orig(repo, proto)

    # Only advertise if a manifest exists. This does add some I/O to requests.
    # But this should be cheaper than a wasted network round trip due to a
    # missing file.
    if repo.vfs.exists(bundlecaches.CB_MANIFEST_FILE):
        caps.append(b'clonebundles')
        caps.append(b'clonebundles_manifest')

    return caps


def extsetup(ui):
    extensions.wrapfunction(wireprotov1server, b'_capabilities', capabilities)


# logic for bundle auto-generation


configtable = {}
configitem = registrar.configitem(configtable)

cmdtable = {}
command = registrar.command(cmdtable)

configitem(b'clone-bundles', b'auto-generate.on-change', default=False)
configitem(b'clone-bundles', b'auto-generate.formats', default=list)
configitem(b'clone-bundles', b'trigger.below-bundled-ratio', default=0.95)
configitem(b'clone-bundles', b'trigger.revs', default=1000)

configitem(b'clone-bundles', b'upload-command', default=None)

configitem(b'clone-bundles', b'delete-command', default=None)

configitem(b'clone-bundles', b'url-template', default=None)

configitem(b'devel', b'debug.clonebundles', default=False)


# category for the post-close transaction hooks
CAT_POSTCLOSE = b"clonebundles-autobundles"

# template for bundle file names
BUNDLE_MASK = (
    b"full-%(bundle_type)s-%(revs)d_revs-%(tip_short)s_tip-%(op_id)s.hg"
)


# file in .hg/ used to track clonebundles being auto-generated
AUTO_GEN_FILE = b'clonebundles.auto-gen'


class BundleBase(object):
    """represents the core properties that matter for us in a bundle

    :bundle_type: the bundlespec (see hg help bundlespec)
    :revs: the number of revisions in the repo at bundle creation time
    :tip_rev: the rev-num of the tip revision
    :tip_node: the node id of the tip-most revision in the bundle

    :ready: True if the bundle is ready to be served
    """

    ready = False

    def __init__(self, bundle_type, revs, tip_rev, tip_node):
        self.bundle_type = bundle_type
        self.revs = revs
        self.tip_rev = tip_rev
        self.tip_node = tip_node

    def valid_for(self, repo):
        """is this bundle applicable to the current repository

        This is useful for detecting bundles made irrelevant by stripping.
        """
        tip_node = node.bin(self.tip_node)
        return repo.changelog.index.get_rev(tip_node) == self.tip_rev

    def __eq__(self, other):
        left = (self.ready, self.bundle_type, self.tip_rev, self.tip_node)
        right = (other.ready, other.bundle_type, other.tip_rev, other.tip_node)
        return left == right

    def __ne__(self, other):
        return not self == other

    def __cmp__(self, other):
        if self == other:
            return 0
        return -1


class RequestedBundle(BundleBase):
    """A bundle that should be generated.

    Additional attributes compared to BundleBase:
    :heads: list of head revisions (as rev-nums)
    :op_id: a "unique" identifier for the operation triggering the change
    """

    def __init__(self, bundle_type, revs, tip_rev, tip_node, head_revs, op_id):
        self.head_revs = head_revs
        self.op_id = op_id
        super(RequestedBundle, self).__init__(
            bundle_type,
            revs,
            tip_rev,
            tip_node,
        )

    @property
    def suggested_filename(self):
        """A filename that can be used for the generated bundle"""
        data = {
            b'bundle_type': self.bundle_type,
            b'revs': self.revs,
            b'heads': self.head_revs,
            b'tip_rev': self.tip_rev,
            b'tip_node': self.tip_node,
            b'tip_short': self.tip_node[:12],
            b'op_id': self.op_id,
        }
        return BUNDLE_MASK % data

    def generate_bundle(self, repo, file_path):
        """generate the bundle at `file_path`"""
        commands.bundle(
            repo.ui,
            repo,
            file_path,
            base=[b"null"],
            rev=self.head_revs,
            type=self.bundle_type,
            quiet=True,
        )

    def generating(self, file_path, hostname=None, pid=None):
        """return a GeneratingBundle object from this object"""
        if pid is None:
            pid = os.getpid()
        if hostname is None:
            hostname = lock._getlockprefix()
        return GeneratingBundle(
            self.bundle_type,
            self.revs,
            self.tip_rev,
            self.tip_node,
            hostname,
            pid,
            file_path,
        )


class GeneratingBundle(BundleBase):
    """A bundle being generated

    extra attributes compared to BundleBase:

    :hostname: the hostname of the machine generating the bundle
    :pid: the pid of the process generating the bundle
    :filepath: the target filename of the bundle

    These attributes exist to help detect stalled generation processes.
    """

    ready = False

    def __init__(
        self, bundle_type, revs, tip_rev, tip_node, hostname, pid, filepath
    ):
        self.hostname = hostname
        self.pid = pid
        self.filepath = filepath
        super(GeneratingBundle, self).__init__(
            bundle_type, revs, tip_rev, tip_node
        )

    @classmethod
    def from_line(cls, line):
        """create an object by deserializing a line from AUTO_GEN_FILE"""
        assert line.startswith(b'PENDING-v1 ')
        (
            __,
            bundle_type,
            revs,
            tip_rev,
            tip_node,
            hostname,
            pid,
            filepath,
        ) = line.split()
        hostname = util.urlreq.unquote(hostname)
        filepath = util.urlreq.unquote(filepath)
        revs = int(revs)
        tip_rev = int(tip_rev)
        pid = int(pid)
        return cls(
            bundle_type, revs, tip_rev, tip_node, hostname, pid, filepath
        )

    def to_line(self):
        """serialize the object to include as a line in AUTO_GEN_FILE"""
        templ = b"PENDING-v1 %s %d %d %s %s %d %s"
        data = (
            self.bundle_type,
            self.revs,
            self.tip_rev,
            self.tip_node,
            util.urlreq.quote(self.hostname),
            self.pid,
            util.urlreq.quote(self.filepath),
        )
        return templ % data

    def __eq__(self, other):
        if not super(GeneratingBundle, self).__eq__(other):
            return False
        left = (self.hostname, self.pid, self.filepath)
        right = (other.hostname, other.pid, other.filepath)
        return left == right

    def uploaded(self, url, basename):
        """return a GeneratedBundle from this object"""
        return GeneratedBundle(
            self.bundle_type,
            self.revs,
            self.tip_rev,
            self.tip_node,
            url,
            basename,
        )


class GeneratedBundle(BundleBase):
    """A bundle that is done being generated and can be served

    extra attributes compared to BundleBase:

    :file_url: the url where the bundle is available.
    :basename: the "basename" used to upload (useful for deletion)

    These attributes exist to generate a bundle manifest
    (.hg/pullbundles.manifest)
    """

    ready = True

    def __init__(
        self, bundle_type, revs, tip_rev, tip_node, file_url, basename
    ):
        self.file_url = file_url
        self.basename = basename
        super(GeneratedBundle, self).__init__(
            bundle_type, revs, tip_rev, tip_node
        )

    @classmethod
    def from_line(cls, line):
        """create an object by deserializing a line from AUTO_GEN_FILE"""
        assert line.startswith(b'DONE-v1 ')
        (
            __,
            bundle_type,
            revs,
            tip_rev,
            tip_node,
            file_url,
            basename,
        ) = line.split()
        revs = int(revs)
        tip_rev = int(tip_rev)
        file_url = util.urlreq.unquote(file_url)
        return cls(bundle_type, revs, tip_rev, tip_node, file_url, basename)

    def to_line(self):
        """serialize the object to include as a line in AUTO_GEN_FILE"""
        templ = b"DONE-v1 %s %d %d %s %s %s"
        data = (
            self.bundle_type,
            self.revs,
            self.tip_rev,
            self.tip_node,
            util.urlreq.quote(self.file_url),
            self.basename,
        )
        return templ % data

    def manifest_line(self):
        """serialize the object to include as a line in pullbundles.manifest"""
        templ = b"%s BUNDLESPEC=%s REQUIRESNI=true"
        return templ % (self.file_url, self.bundle_type)

    def __eq__(self, other):
        if not super(GeneratedBundle, self).__eq__(other):
            return False
        return self.file_url == other.file_url


def parse_auto_gen(content):
    """parse the AUTO_GEN_FILE content into a list of Bundle objects"""
    bundles = []
    for line in content.splitlines():
        if line.startswith(b'PENDING-v1 '):
            bundles.append(GeneratingBundle.from_line(line))
        elif line.startswith(b'DONE-v1 '):
            bundles.append(GeneratedBundle.from_line(line))
    return bundles


def dumps_auto_gen(bundles):
    """serialize a list of Bundle objects as AUTO_GEN_FILE content"""
    lines = []
    for b in bundles:
        lines.append(b"%s\n" % b.to_line())
    lines.sort()
    return b"".join(lines)


def read_auto_gen(repo):
    """read the <repo>'s AUTO_GEN_FILE and return a list of Bundle objects"""
    data = repo.vfs.tryread(AUTO_GEN_FILE)
    if not data:
        return []
    return parse_auto_gen(data)


def write_auto_gen(repo, bundles):
    """write a list of Bundle objects into the repo's AUTO_GEN_FILE"""
    assert repo._cb_lock_ref is not None
    data = dumps_auto_gen(bundles)
    with repo.vfs(AUTO_GEN_FILE, mode=b'wb', atomictemp=True) as f:
        f.write(data)


def generate_manifest(bundles):
    """serialize a list of GeneratedBundle objects as clonebundles manifest
    content"""
    bundles = list(bundles)
    bundles.sort(key=lambda b: b.bundle_type)
    lines = []
    for b in bundles:
        lines.append(b"%s\n" % b.manifest_line())
    return b"".join(lines)


def update_ondisk_manifest(repo):
    """update the clonebundles manifest with the latest urls"""
    with repo.clonebundles_lock():
        bundles = read_auto_gen(repo)

        per_types = {}
        for b in bundles:
            if not (b.ready and b.valid_for(repo)):
                continue
            current = per_types.get(b.bundle_type)
            if current is not None and current.revs >= b.revs:
                continue
            per_types[b.bundle_type] = b
        manifest = generate_manifest(per_types.values())
        with repo.vfs(
            bundlecaches.CB_MANIFEST_FILE, mode=b"wb", atomictemp=True
        ) as f:
            f.write(manifest)


def update_bundle_list(repo, new_bundles=(), del_bundles=()):
    """modify the repo's AUTO_GEN_FILE

    This method also regenerates the clone bundle manifest when needed"""
    with repo.clonebundles_lock():
        bundles = read_auto_gen(repo)
        if del_bundles:
            bundles = [b for b in bundles if b not in del_bundles]
        new_bundles = [b for b in new_bundles if b not in bundles]
        bundles.extend(new_bundles)
        write_auto_gen(repo, bundles)
        all_changed = []
        all_changed.extend(new_bundles)
        all_changed.extend(del_bundles)
        if any(b.ready for b in all_changed):
            update_ondisk_manifest(repo)


def cleanup_tmp_bundle(repo, target):
    """remove a GeneratingBundle file and entry"""
    assert not target.ready
    with repo.clonebundles_lock():
        repo.vfs.tryunlink(target.filepath)
        update_bundle_list(repo, del_bundles=[target])


def finalize_one_bundle(repo, target):
    """upload a generated bundle and advertise it in the clonebundles.manifest"""
    with repo.clonebundles_lock():
        bundles = read_auto_gen(repo)
        if target in bundles and target.valid_for(repo):
            result = upload_bundle(repo, target)
            update_bundle_list(repo, new_bundles=[result])
        cleanup_tmp_bundle(repo, target)


def find_outdated_bundles(repo, bundles):
    """finds outdated bundles"""
    olds = []
    per_types = {}
    for b in bundles:
        if not b.valid_for(repo):
            olds.append(b)
            continue
        l = per_types.setdefault(b.bundle_type, [])
        l.append(b)
    for key in sorted(per_types):
        all = per_types[key]
        if len(all) > 1:
            all.sort(key=lambda b: b.revs, reverse=True)
            olds.extend(all[1:])
    return olds


def collect_garbage(repo):
    """finds outdated bundles and gets them deleted"""
    with repo.clonebundles_lock():
        bundles = read_auto_gen(repo)
        olds = find_outdated_bundles(repo, bundles)
        for o in olds:
            delete_bundle(repo, o)
        update_bundle_list(repo, del_bundles=olds)


748 | def upload_bundle(repo, bundle): |
|
749 | def upload_bundle(repo, bundle): | |
749 | """upload the result of a GeneratingBundle and return a GeneratedBundle |
|
750 | """upload the result of a GeneratingBundle and return a GeneratedBundle | |
750 |
|
751 | |||
751 | The upload is done using the `clone-bundles.upload-command` |
|
752 | The upload is done using the `clone-bundles.upload-command` | |
752 | """ |
|
753 | """ | |
753 | cmd = repo.ui.config(b'clone-bundles', b'upload-command') |
|
754 | cmd = repo.ui.config(b'clone-bundles', b'upload-command') | |
754 | url = repo.ui.config(b'clone-bundles', b'url-template') |
|
755 | url = repo.ui.config(b'clone-bundles', b'url-template') | |
755 | basename = repo.vfs.basename(bundle.filepath) |
|
756 | basename = repo.vfs.basename(bundle.filepath) | |
756 | filepath = procutil.shellquote(bundle.filepath) |
|
757 | filepath = procutil.shellquote(bundle.filepath) | |
757 | variables = { |
|
758 | variables = { | |
758 | b'HGCB_BUNDLE_PATH': filepath, |
|
759 | b'HGCB_BUNDLE_PATH': filepath, | |
759 | b'HGCB_BUNDLE_BASENAME': basename, |
|
760 | b'HGCB_BUNDLE_BASENAME': basename, | |
760 | } |
|
761 | } | |
761 | env = procutil.shellenviron(environ=variables) |
|
762 | env = procutil.shellenviron(environ=variables) | |
762 | ret = repo.ui.system(cmd, environ=env) |
|
763 | ret = repo.ui.system(cmd, environ=env) | |
763 | if ret: |
|
764 | if ret: | |
764 | raise error.Abort(b"command returned status %d: %s" % (ret, cmd)) |
|
765 | raise error.Abort(b"command returned status %d: %s" % (ret, cmd)) | |
765 | url = ( |
|
766 | url = ( | |
766 | url.decode('utf8') |
|
767 | url.decode('utf8') | |
767 | .format(basename=basename.decode('utf8')) |
|
768 | .format(basename=basename.decode('utf8')) | |
768 | .encode('utf8') |
|
769 | .encode('utf8') | |
769 | ) |
|
770 | ) | |
770 | return bundle.uploaded(url, basename) |
|
771 | return bundle.uploaded(url, basename) | |
771 |
|
772 | |||
772 |
|
773 | |||
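The `url-template` expansion in `upload_bundle` has to round-trip through `str` because bytes objects have no `.format()`. A minimal standalone sketch of just that step (the template and basename values below are made-up examples, not anything the extension ships):

```python
def expand_url_template(url_template: bytes, basename: bytes) -> bytes:
    """Substitute {basename} into a bytes URL template.

    Mirrors the decode/format/encode dance in upload_bundle: bytes lack
    str.format, so we decode to str, substitute, and re-encode.
    """
    return (
        url_template.decode('utf8')
        .format(basename=basename.decode('utf8'))
        .encode('utf8')
    )
```

With a hypothetical template `b'https://cdn.example.com/{basename}'` and basename `b'full-v2.hg'`, this yields `b'https://cdn.example.com/full-v2.hg'`.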
def delete_bundle(repo, bundle):
    """delete a bundle from storage"""
    assert bundle.ready
    msg = b'clone-bundles: deleting bundle %s\n'
    msg %= bundle.basename
    if repo.ui.configbool(b'devel', b'debug.clonebundles'):
        repo.ui.write(msg)
    else:
        repo.ui.debug(msg)

    cmd = repo.ui.config(b'clone-bundles', b'delete-command')
    variables = {
        b'HGCB_BUNDLE_URL': bundle.file_url,
        b'HGCB_BASENAME': bundle.basename,
    }
    env = procutil.shellenviron(environ=variables)
    ret = repo.ui.system(cmd, environ=env)
    if ret:
        raise error.Abort(b"command returned status %d: %s" % (ret, cmd))


def auto_bundle_needed_actions(repo, bundles, op_id):
    """find the list of bundles that need action

    returns a list of RequestedBundle objects that need to be generated and
    uploaded."""
    create_bundles = []
    delete_bundles = []
    repo = repo.filtered(b"immutable")
    targets = repo.ui.configlist(b'clone-bundles', b'auto-generate.formats')
    ratio = float(
        repo.ui.config(b'clone-bundles', b'trigger.below-bundled-ratio')
    )
    abs_revs = repo.ui.configint(b'clone-bundles', b'trigger.revs')
    revs = len(repo.changelog)
    generic_data = {
        'revs': revs,
        'head_revs': repo.changelog.headrevs(),
        'tip_rev': repo.changelog.tiprev(),
        'tip_node': node.hex(repo.changelog.tip()),
        'op_id': op_id,
    }
    for t in targets:
        if new_bundle_needed(repo, bundles, ratio, abs_revs, t, revs):
            data = generic_data.copy()
            data['bundle_type'] = t
            b = RequestedBundle(**data)
            create_bundles.append(b)
    delete_bundles.extend(find_outdated_bundles(repo, bundles))
    return create_bundles, delete_bundles


def new_bundle_needed(repo, bundles, ratio, abs_revs, bundle_type, revs):
    """consider the current cached content and trigger new bundles if needed"""
    threshold = max((revs * ratio), (revs - abs_revs))
    for b in bundles:
        if not b.valid_for(repo) or b.bundle_type != bundle_type:
            continue
        if b.revs > threshold:
            return False
    return True

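The trigger condition in `new_bundle_needed` reduces to a single threshold: an existing bundle keeps covering the repository only while it contains more than `max(revs * ratio, revs - abs_revs)` revisions, i.e. both a ratio-based and an absolute-count floor must hold. A standalone sketch of that arithmetic (the ratio and count values below are made-up, not the extension's defaults):

```python
def bundle_still_fresh(bundle_revs, repo_revs, ratio, abs_revs):
    # A cached bundle stays fresh while it covers more revisions than
    # both floors: the ratio-based one and the absolute-count one.
    threshold = max(repo_revs * ratio, repo_revs - abs_revs)
    return bundle_revs > threshold
```

With `ratio=0.95` and `abs_revs=25` on a 1000-revision repository, the threshold is `max(950, 975) = 975`: a 990-revision bundle is still fresh, a 970-revision one triggers regeneration.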
def start_one_bundle(repo, bundle):
    """start the generation of a single bundle file

    the `bundle` argument should be a RequestedBundle object.

    This data is passed to the `debugmakeclonebundles` "as is".
    """
    data = util.pickle.dumps(bundle)
    cmd = [procutil.hgexecutable(), b'--cwd', repo.path, INTERNAL_CMD]
    env = procutil.shellenviron()
    msg = b'clone-bundles: starting bundle generation: %s\n'
    stdout = None
    stderr = None
    waits = []
    record_wait = None
    if repo.ui.configbool(b'devel', b'debug.clonebundles'):
        stdout = procutil.stdout
        stderr = procutil.stderr
        repo.ui.write(msg % bundle.bundle_type)
        record_wait = waits.append
    else:
        repo.ui.debug(msg % bundle.bundle_type)
    bg = procutil.runbgcommand
    bg(
        cmd,
        env,
        stdin_bytes=data,
        stdout=stdout,
        stderr=stderr,
        record_wait=record_wait,
    )
    for f in waits:
        f()


INTERNAL_CMD = b'debug::internal-make-clone-bundles'


@command(INTERNAL_CMD, [], b'')
def debugmakeclonebundles(ui, repo):
    """Internal command to auto-generate debug bundles"""
    requested_bundle = util.pickle.load(procutil.stdin)
    procutil.stdin.close()

    collect_garbage(repo)

    fname = requested_bundle.suggested_filename
    fpath = repo.vfs.makedirs(b'tmp-bundles')
    fpath = repo.vfs.join(b'tmp-bundles', fname)
    bundle = requested_bundle.generating(fpath)
    update_bundle_list(repo, new_bundles=[bundle])

    requested_bundle.generate_bundle(repo, fpath)

    repo.invalidate()
    finalize_one_bundle(repo, bundle)

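The background path hands the requested bundle to the child `hg debug::internal-make-clone-bundles` process as a pickle on its stdin: `start_one_bundle` serializes with `util.pickle.dumps` and `debugmakeclonebundles` deserializes with `util.pickle.load`. The handoff can be sketched with plain `pickle` and an in-memory stream (the payload class below is a stand-in, not the real `RequestedBundle`):

```python
import io
import pickle


class FakeRequestedBundle:
    # Stand-in for the real RequestedBundle; only the fields the
    # sketch needs.
    def __init__(self, bundle_type, revs):
        self.bundle_type = bundle_type
        self.revs = revs


# Parent side: serialize the request before feeding it to the
# child's stdin, as start_one_bundle does.
data = pickle.dumps(FakeRequestedBundle(b'zstd-v2', 1000))

# Child side: debugmakeclonebundles reads one pickled object
# from stdin and then closes it.
received = pickle.load(io.BytesIO(data))
```

Because the payload is a pickle, parent and child must run the same code (here, the same in-process definitions); in the extension, both ends are the same `hg` installation.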
def make_auto_bundler(source_repo):
    reporef = weakref.ref(source_repo)

    def autobundle(tr):
        repo = reporef()
        assert repo is not None
        bundles = read_auto_gen(repo)
        new, __ = auto_bundle_needed_actions(repo, bundles, b"%d_txn" % id(tr))
        for data in new:
            start_one_bundle(repo, data)
        return None

    return autobundle


def reposetup(ui, repo):
    """install the two pieces needed for automatic clonebundle generation

    - add a "post-close" hook that fires bundling when needed
    - introduce a clone-bundle lock to let multiple processes meddle with the
      state files.
    """
    if not repo.local():
        return

    class autobundlesrepo(repo.__class__):
        def transaction(self, *args, **kwargs):
            tr = super(autobundlesrepo, self).transaction(*args, **kwargs)
            enabled = repo.ui.configbool(
                b'clone-bundles',
                b'auto-generate.on-change',
            )
            targets = repo.ui.configlist(
                b'clone-bundles', b'auto-generate.formats'
            )
            if enabled and targets:
                tr.addpostclose(CAT_POSTCLOSE, make_auto_bundler(self))
            return tr

        @localrepo.unfilteredmethod
        def clonebundles_lock(self, wait=True):
            '''Lock the repository file related to clone bundles'''
            if not util.safehasattr(self, '_cb_lock_ref'):
                self._cb_lock_ref = None
            l = self._currentlock(self._cb_lock_ref)
            if l is not None:
                l.lock()
                return l

            l = self._lock(
                vfs=self.vfs,
                lockname=b"clonebundleslock",
                wait=wait,
                releasefn=None,
                acquirefn=None,
                desc=_(b'repository %s') % self.origroot,
            )
            self._cb_lock_ref = weakref.ref(l)
            return l

    repo._wlockfreeprefix.add(AUTO_GEN_FILE)
    repo._wlockfreeprefix.add(bundlecaches.CB_MANIFEST_FILE)
    repo.__class__ = autobundlesrepo


@command(
    b'admin::clone-bundles-refresh',
    [
        (
            b'',
            b'background',
            False,
            _(b'start bundle generation in the background'),
        ),
    ],
    b'',
)
def cmd_admin_clone_bundles_refresh(
    ui,
    repo: localrepo.localrepository,
    background=False,
):
    """generate clone bundles according to the configuration

    This runs the logic for automatic generation, removing outdated bundles and
    generating new ones if necessary. See :hg:`help -e clone-bundles` for
    details about how to configure this feature.
    """
    debug = repo.ui.configbool(b'devel', b'debug.clonebundles')
    bundles = read_auto_gen(repo)
    op_id = b"%d_acbr" % os.getpid()
    create, delete = auto_bundle_needed_actions(repo, bundles, op_id)

    # If some bundles are scheduled for creation in the background, they will
    # deal with garbage collection too, so there is no need to do it
    # synchronously here.
    #
    # However, if no bundles are scheduled for creation, we need to do it
    # explicitly.
    if not (background and create):
        # We clean up outdated bundles before generating new ones to keep the
        # last two versions of the bundle around for a while and avoid having
        # to deal with clients that just got served a manifest.
        for o in delete:
            delete_bundle(repo, o)
        update_bundle_list(repo, del_bundles=delete)

    if create:
        fpath = repo.vfs.makedirs(b'tmp-bundles')

    if background:
        for requested_bundle in create:
            start_one_bundle(repo, requested_bundle)
    else:
        for requested_bundle in create:
            if debug:
                msg = b'clone-bundles: starting bundle generation: %s\n'
                repo.ui.write(msg % requested_bundle.bundle_type)
            fname = requested_bundle.suggested_filename
            fpath = repo.vfs.join(b'tmp-bundles', fname)
            generating_bundle = requested_bundle.generating(fpath)
            update_bundle_list(repo, new_bundles=[generating_bundle])
            requested_bundle.generate_bundle(repo, fpath)
            result = upload_bundle(repo, generating_bundle)
            update_bundle_list(repo, new_bundles=[result])
            update_ondisk_manifest(repo)
            cleanup_tmp_bundle(repo, generating_bundle)


@command(b'admin::clone-bundles-clear', [], b'')
def cmd_admin_clone_bundles_clear(ui, repo: localrepo.localrepository):
    """remove existing clone bundle caches

    See `hg help admin::clone-bundles-refresh` for details on how to regenerate
    them.

    This command will only affect bundles currently available; it will not
    affect bundles being asynchronously generated.
    """
    bundles = read_auto_gen(repo)
    delete = [b for b in bundles if b.ready]
    for o in delete:
        delete_bundle(repo, o)
    update_bundle_list(repo, del_bundles=delete)
@@ -1,661 +1,664 @@

# wireprotov1peer.py - Client-side functionality for wire protocol version 1.
#
# Copyright 2005-2010 Olivia Mackall <olivia@selenic.com>
#
# This software may be used and distributed according to the terms of the
# GNU General Public License version 2 or any later version.


import sys
import weakref

from concurrent import futures
from .i18n import _
from .node import bin
from .pycompat import (
    getattr,
    setattr,
)
from . import (
    bundle2,
    changegroup as changegroupmod,
    encoding,
    error,
    pushkey as pushkeymod,
    pycompat,
    util,
    wireprototypes,
)
from .interfaces import (
    repository,
    util as interfaceutil,
)
from .utils import hashutil

urlreq = util.urlreq


def batchable(f):
    """annotation for batchable methods

    Such methods must implement a coroutine as follows:

    @batchable
    def sample(self, one, two=None):
        # Build list of encoded arguments suitable for your wire protocol:
        encoded_args = [('one', encode(one),), ('two', encode(two),)]
        # Return it, along with a function that will receive the result
        # from the batched request.
        return encoded_args, decode

    The decorator returns a function which wraps this coroutine as a plain
    method, but adds the original method as an attribute called "batchable",
    which is used by remotebatch to split the call into separate encoding and
    decoding phases.
    """

    def plain(*args, **opts):
        encoded_args_or_res, decode = f(*args, **opts)
        if not decode:
            return encoded_args_or_res  # a local result in this case
        self = args[0]
        cmd = pycompat.bytesurl(f.__name__)  # ensure cmd is ascii bytestr
        encoded_res = self._submitone(cmd, encoded_args_or_res)
        return decode(encoded_res)

    setattr(plain, 'batchable', f)
    setattr(plain, '__name__', f.__name__)
    return plain

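The contract `batchable` describes is: the decorated method returns `(encoded_args, decode)`, and the generated `plain` wrapper either returns a local result (when `decode` is falsy) or submits the command and decodes the reply. That split is what lets a batcher run all the encode phases first and all the decode phases later. A self-contained sketch of the wrapper against a toy peer, in place of a real wire protocol (all names here are illustrative):

```python
def batchable_sketch(f):
    # Mirrors the decorator above: call the method to get the encoded
    # arguments plus a decode callback, then submit and decode.
    def plain(*args, **opts):
        encoded_args_or_res, decode = f(*args, **opts)
        if not decode:
            return encoded_args_or_res  # a local result in this case
        self = args[0]
        encoded_res = self._submitone(f.__name__, encoded_args_or_res)
        return decode(encoded_res)

    plain.batchable = f  # lets a batcher split encode/decode phases
    return plain


class ToyPeer:
    def _submitone(self, cmd, encoded_args):
        # Pretend server: echo "cmd:value1,value2" back to the caller.
        values = b','.join(v for _, v in encoded_args)
        return cmd.encode('ascii') + b':' + values

    @batchable_sketch
    def heads(self, branch=b'default'):
        encoded_args = [(b'branch', branch)]
        # decode strips the echoed command name from the reply
        return encoded_args, lambda res: res.split(b':', 1)[1]
```

Calling `ToyPeer().heads(branch=b'stable')` runs the encode phase, submits once, and hands the raw reply to the decode callback, exactly the shape `_submitone` expects in the real peer.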
72 | """Return a ``cmds`` argument value for the ``batch`` command.""" |
|
72 | """Return a ``cmds`` argument value for the ``batch`` command.""" | |
73 | escapearg = wireprototypes.escapebatcharg |
|
73 | escapearg = wireprototypes.escapebatcharg | |
74 |
|
74 | |||
75 | cmds = [] |
|
75 | cmds = [] | |
76 | for op, argsdict in req: |
|
76 | for op, argsdict in req: | |
77 | # Old servers didn't properly unescape argument names. So prevent |
|
77 | # Old servers didn't properly unescape argument names. So prevent | |
78 | # the sending of argument names that may not be decoded properly by |
|
78 | # the sending of argument names that may not be decoded properly by | |
79 | # servers. |
|
79 | # servers. | |
80 | assert all(escapearg(k) == k for k in argsdict) |
|
80 | assert all(escapearg(k) == k for k in argsdict) | |
81 |
|
81 | |||
82 | args = b','.join( |
|
82 | args = b','.join( | |
83 | b'%s=%s' % (escapearg(k), escapearg(v)) for k, v in argsdict.items() |
|
83 | b'%s=%s' % (escapearg(k), escapearg(v)) for k, v in argsdict.items() | |
84 | ) |
|
84 | ) | |
85 | cmds.append(b'%s %s' % (op, args)) |
|
85 | cmds.append(b'%s %s' % (op, args)) | |
86 |
|
86 | |||
87 | return b';'.join(cmds) |
|
87 | return b';'.join(cmds) | |
88 |
|
88 | |||
89 |
|
89 | |||
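The on-the-wire shape `encodebatchcmds` produces is `cmd1 k=v,k=v;cmd2 k=v`, so the framing characters `:`, `,`, `;`, and `=` must be escaped inside values. A sketch with a simplified stand-in for `wireprototypes.escapebatcharg` (the exact escape table here is an assumption; only the ordering point is essential: `:` must be escaped first, since it introduces every escape sequence):

```python
def escape_sketch(plain: bytes) -> bytes:
    # Stand-in for wireprototypes.escapebatcharg. Escape b':' first so
    # the b':x' sequences produced below are never re-escaped.
    return (
        plain.replace(b':', b':c')
        .replace(b',', b':o')
        .replace(b';', b':s')
        .replace(b'=', b':e')
    )


def encodebatch_sketch(req):
    # Same framing as encodebatchcmds: "op k=v,k=v" joined with b';'.
    cmds = []
    for op, argsdict in req:
        args = b','.join(
            b'%s=%s' % (escape_sketch(k), escape_sketch(v))
            for k, v in sorted(argsdict.items())
        )
        cmds.append(b'%s %s' % (op, args))
    return b';'.join(cmds)
```

For example, a value containing `;` is escaped so the server can still split commands on the literal `;` separator.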
90 | class unsentfuture(futures.Future): |
|
90 | class unsentfuture(futures.Future): | |
91 | """A Future variation to represent an unsent command. |
|
91 | """A Future variation to represent an unsent command. | |
92 |
|
92 | |||
93 | Because we buffer commands and don't submit them immediately, calling |
|
93 | Because we buffer commands and don't submit them immediately, calling | |
94 | ``result()`` on an unsent future could deadlock. Futures for buffered |
|
94 | ``result()`` on an unsent future could deadlock. Futures for buffered | |
95 | commands are represented by this type, which wraps ``result()`` to |
|
95 | commands are represented by this type, which wraps ``result()`` to | |
96 | call ``sendcommands()``. |
|
96 | call ``sendcommands()``. | |
97 | """ |
|
97 | """ | |
98 |
|
98 | |||
99 | def result(self, timeout=None): |
|
99 | def result(self, timeout=None): | |
100 | if self.done(): |
|
100 | if self.done(): | |
101 | return futures.Future.result(self, timeout) |
|
101 | return futures.Future.result(self, timeout) | |
102 |
|
102 | |||
103 | self._peerexecutor.sendcommands() |
|
103 | self._peerexecutor.sendcommands() | |
104 |
|
104 | |||
105 | # This looks like it will infinitely recurse. However, |
|
105 | # This looks like it will infinitely recurse. However, | |
106 | # sendcommands() should modify __class__. This call serves as a check |
|
106 | # sendcommands() should modify __class__. This call serves as a check | |
107 | # on that. |
|
107 | # on that. | |
108 | return self.result(timeout) |
|
108 | return self.result(timeout) | |
109 |
|
109 | |||
110 |
|
110 | |||
111 | @interfaceutil.implementer(repository.ipeercommandexecutor) |
|
111 | @interfaceutil.implementer(repository.ipeercommandexecutor) | |
112 | class peerexecutor: |
|
112 | class peerexecutor: | |
113 | def __init__(self, peer): |
|
113 | def __init__(self, peer): | |
114 | self._peer = peer |
|
114 | self._peer = peer | |
115 | self._sent = False |
|
115 | self._sent = False | |
116 | self._closed = False |
|
116 | self._closed = False | |
117 | self._calls = [] |
|
117 | self._calls = [] | |
118 | self._futures = weakref.WeakSet() |
|
118 | self._futures = weakref.WeakSet() | |
119 | self._responseexecutor = None |
|
119 | self._responseexecutor = None | |
120 | self._responsef = None |
|
120 | self._responsef = None | |
121 |
|
121 | |||
122 | def __enter__(self): |
|
122 | def __enter__(self): | |
123 | return self |
|
123 | return self | |
124 |
|
124 | |||
125 | def __exit__(self, exctype, excvalee, exctb): |
|
125 | def __exit__(self, exctype, excvalee, exctb): | |
126 | self.close() |
|
126 | self.close() | |
127 |
|
127 | |||
128 | def callcommand(self, command, args): |
|
128 | def callcommand(self, command, args): | |
129 | if self._sent: |
|
129 | if self._sent: | |
130 | raise error.ProgrammingError( |
|
130 | raise error.ProgrammingError( | |
131 | b'callcommand() cannot be used after commands are sent' |
|
131 | b'callcommand() cannot be used after commands are sent' | |
132 | ) |
|
132 | ) | |
133 |
|
133 | |||
134 | if self._closed: |
|
134 | if self._closed: | |
135 | raise error.ProgrammingError( |
|
135 | raise error.ProgrammingError( | |
136 | b'callcommand() cannot be used after close()' |
|
136 | b'callcommand() cannot be used after close()' | |
137 | ) |
|
137 | ) | |
138 |
|
138 | |||
139 | # Commands are dispatched through methods on the peer. |
|
139 | # Commands are dispatched through methods on the peer. | |
140 | fn = getattr(self._peer, pycompat.sysstr(command), None) |
|
140 | fn = getattr(self._peer, pycompat.sysstr(command), None) | |
141 |
|
141 | |||
142 | if not fn: |
|
142 | if not fn: | |
143 | raise error.ProgrammingError( |
|
143 | raise error.ProgrammingError( | |
144 | b'cannot call command %s: method of same name not available ' |
|
144 | b'cannot call command %s: method of same name not available ' | |
145 | b'on peer' % command |
|
145 | b'on peer' % command | |
146 | ) |
|
146 | ) | |
147 |
|
147 | |||
148 | # Commands are either batchable or they aren't. If a command |
|
148 | # Commands are either batchable or they aren't. If a command | |
149 | # isn't batchable, we send it immediately because the executor |
|
149 | # isn't batchable, we send it immediately because the executor | |
150 | # can no longer accept new commands after a non-batchable command. |
|
150 | # can no longer accept new commands after a non-batchable command. | |
151 | # If a command is batchable, we queue it for later. But we have |
|
151 | # If a command is batchable, we queue it for later. But we have | |
152 | # to account for the case of a non-batchable command arriving after |
|
152 | # to account for the case of a non-batchable command arriving after | |
153 | # a batchable one and refuse to service it. |
|
153 | # a batchable one and refuse to service it. | |
154 |
|
154 | |||
155 | def addcall(): |
|
155 | def addcall(): | |
156 | f = futures.Future() |
|
156 | f = futures.Future() | |
157 | self._futures.add(f) |
|
157 | self._futures.add(f) | |
158 | self._calls.append((command, args, fn, f)) |
|
158 | self._calls.append((command, args, fn, f)) | |
159 | return f |
|
159 | return f | |
160 |
|
160 | |||
161 | if getattr(fn, 'batchable', False): |
|
161 | if getattr(fn, 'batchable', False): | |
162 | f = addcall() |
|
162 | f = addcall() | |
163 |
|
163 | |||
164 | # But since we don't issue it immediately, we wrap its result() |
|
164 | # But since we don't issue it immediately, we wrap its result() | |
165 | # to trigger sending so we avoid deadlocks. |
|
165 | # to trigger sending so we avoid deadlocks. | |
166 | f.__class__ = unsentfuture |
|
166 | f.__class__ = unsentfuture | |
167 | f._peerexecutor = self |
|
167 | f._peerexecutor = self | |
168 | else: |
|
168 | else: | |
169 | if self._calls: |
|
169 | if self._calls: | |
170 | raise error.ProgrammingError( |
|
170 | raise error.ProgrammingError( | |
171 | b'%s is not batchable and cannot be called on a command ' |
|
171 | b'%s is not batchable and cannot be called on a command ' | |
172 | b'executor along with other commands' % command |
|
172 | b'executor along with other commands' % command | |
173 | ) |
|
173 | ) | |
174 |
|
174 | |||
175 | f = addcall() |
|
175 | f = addcall() | |
176 |
|
176 | |||
177 | # Non-batchable commands can never coexist with another command |
|
177 | # Non-batchable commands can never coexist with another command | |
178 | # in this executor. So send the command immediately. |
|
178 | # in this executor. So send the command immediately. | |
179 | self.sendcommands() |
|
179 | self.sendcommands() | |
180 |
|
180 | |||
181 | return f |
|
181 | return f | |
182 |
|
182 | |||
183 | def sendcommands(self): |
|
183 | def sendcommands(self): | |
184 | if self._sent: |
|
184 | if self._sent: | |
185 | return |
|
185 | return | |
186 |
|
186 | |||
187 | if not self._calls: |
|
187 | if not self._calls: | |
188 | return |
|
188 | return | |
189 |
|
189 | |||
190 | self._sent = True |
|
190 | self._sent = True | |
191 |
|
191 | |||
192 | # Unhack any future types so caller sees a clean type and to break |
|
192 | # Unhack any future types so caller sees a clean type and to break | |
193 | # cycle between us and futures. |
|
193 | # cycle between us and futures. | |
194 | for f in self._futures: |
|
194 | for f in self._futures: | |
195 | if isinstance(f, unsentfuture): |
|
195 | if isinstance(f, unsentfuture): | |
196 | f.__class__ = futures.Future |
|
196 | f.__class__ = futures.Future | |
197 | f._peerexecutor = None |
|
197 | f._peerexecutor = None | |
198 |
|
198 | |||
199 | calls = self._calls |
|
199 | calls = self._calls | |
200 | # Mainly to destroy references to futures. |
|
200 | # Mainly to destroy references to futures. | |
201 | self._calls = None |
|
201 | self._calls = None | |
202 |
|
202 | |||
203 | # Simple case of a single command. We call it synchronously. |
|
203 | # Simple case of a single command. We call it synchronously. | |
204 | if len(calls) == 1: |
|
204 | if len(calls) == 1: | |
205 | command, args, fn, f = calls[0] |
|
205 | command, args, fn, f = calls[0] | |
206 |
|
206 | |||
207 | # Future was cancelled. Ignore it. |
|
207 | # Future was cancelled. Ignore it. | |
208 | if not f.set_running_or_notify_cancel(): |
|
208 | if not f.set_running_or_notify_cancel(): | |
209 | return |
|
209 | return | |
210 |
|
210 | |||
211 | try: |
|
211 | try: | |
212 | result = fn(**pycompat.strkwargs(args)) |
|
212 | result = fn(**pycompat.strkwargs(args)) | |
213 | except Exception: |
|
213 | except Exception: | |
214 | pycompat.future_set_exception_info(f, sys.exc_info()[1:]) |
|
214 | pycompat.future_set_exception_info(f, sys.exc_info()[1:]) | |
215 | else: |
|
215 | else: | |
216 | f.set_result(result) |
|
216 | f.set_result(result) | |
217 |
|
217 | |||
218 | return |
|
218 | return | |
219 |
|
219 | |||
220 | # Batch commands are a bit harder. First, we have to deal with the |
|
220 | # Batch commands are a bit harder. First, we have to deal with the | |
221 | # @batchable coroutine. That's a bit annoying. Furthermore, we also |
|
221 | # @batchable coroutine. That's a bit annoying. Furthermore, we also | |
222 | # need to preserve streaming. i.e. it should be possible for the |
|
222 | # need to preserve streaming. i.e. it should be possible for the | |
223 | # futures to resolve as data is coming in off the wire without having |
|
223 | # futures to resolve as data is coming in off the wire without having | |
224 | # to wait for the final byte of the final response. We do this by |
|
224 | # to wait for the final byte of the final response. We do this by | |
225 | # spinning up a thread to read the responses. |
|
225 | # spinning up a thread to read the responses. | |
226 |
|
226 | |||
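The streaming design described in the comment above can be sketched outside Mercurial with plain `concurrent.futures`: a single reader thread resolves each future the moment its response arrives, so callers consume early results while later ones are still on the wire. All names below are illustrative, not Mercurial APIs.

```python
from concurrent.futures import Future, ThreadPoolExecutor

def resolve_as_received(futs, responses):
    # Runs in the reader thread: resolve each future as soon as its
    # response is available instead of waiting for the whole batch.
    for f, resp in zip(futs, responses):
        f.set_result(resp)

futs = [Future() for _ in range(3)]
responses = iter([b'one', b'two', b'three'])  # stands in for wire data
pool = ThreadPoolExecutor(1)  # a single thread, as in the code above
reader = pool.submit(resolve_as_received, futs, responses)
first = futs[0].result()  # can return before the batch is fully read
reader.result()
pool.shutdown(wait=True)
```

As in `sendcommands` above, the thread pool is sized to one: the point is not parallelism but letting `concurrent.futures` handle thread lifetime and error propagation.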
227 | requests = [] |
|
227 | requests = [] | |
228 | states = [] |
|
228 | states = [] | |
229 |
|
229 | |||
230 | for command, args, fn, f in calls: |
|
230 | for command, args, fn, f in calls: | |
231 | # Future was cancelled. Ignore it. |
|
231 | # Future was cancelled. Ignore it. | |
232 | if not f.set_running_or_notify_cancel(): |
|
232 | if not f.set_running_or_notify_cancel(): | |
233 | continue |
|
233 | continue | |
234 |
|
234 | |||
235 | try: |
|
235 | try: | |
236 | encoded_args_or_res, decode = fn.batchable( |
|
236 | encoded_args_or_res, decode = fn.batchable( | |
237 | fn.__self__, **pycompat.strkwargs(args) |
|
237 | fn.__self__, **pycompat.strkwargs(args) | |
238 | ) |
|
238 | ) | |
239 | except Exception: |
|
239 | except Exception: | |
240 | pycompat.future_set_exception_info(f, sys.exc_info()[1:]) |
|
240 | pycompat.future_set_exception_info(f, sys.exc_info()[1:]) | |
241 | return |
|
241 | return | |
242 |
|
242 | |||
243 | if not decode: |
|
243 | if not decode: | |
244 | f.set_result(encoded_args_or_res) |
|
244 | f.set_result(encoded_args_or_res) | |
245 | else: |
|
245 | else: | |
246 | requests.append((command, encoded_args_or_res)) |
|
246 | requests.append((command, encoded_args_or_res)) | |
247 | states.append((command, f, batchable, decode)) |
|
247 | states.append((command, f, batchable, decode)) | |
248 |
|
248 | |||
249 | if not requests: |
|
249 | if not requests: | |
250 | return |
|
250 | return | |
251 |
|
251 | |||
252 | # This will emit responses in order they were executed. |
|
252 | # This will emit responses in order they were executed. | |
253 | wireresults = self._peer._submitbatch(requests) |
|
253 | wireresults = self._peer._submitbatch(requests) | |
254 |
|
254 | |||
255 | # The use of a thread pool executor here is a bit weird for something |
|
255 | # The use of a thread pool executor here is a bit weird for something | |
256 | # that only spins up a single thread. However, thread management is |
|
256 | # that only spins up a single thread. However, thread management is | |
257 | # hard and it is easy to encounter race conditions, deadlocks, etc. |
|
257 | # hard and it is easy to encounter race conditions, deadlocks, etc. | |
258 | # concurrent.futures already solves these problems and its thread pool |
|
258 | # concurrent.futures already solves these problems and its thread pool | |
259 | # executor has minimal overhead. So we use it. |
|
259 | # executor has minimal overhead. So we use it. | |
260 | self._responseexecutor = futures.ThreadPoolExecutor(1) |
|
260 | self._responseexecutor = futures.ThreadPoolExecutor(1) | |
261 | self._responsef = self._responseexecutor.submit( |
|
261 | self._responsef = self._responseexecutor.submit( | |
262 | self._readbatchresponse, states, wireresults |
|
262 | self._readbatchresponse, states, wireresults | |
263 | ) |
|
263 | ) | |
264 |
|
264 | |||
265 | def close(self): |
|
265 | def close(self): | |
266 | self.sendcommands() |
|
266 | self.sendcommands() | |
267 |
|
267 | |||
268 | if self._closed: |
|
268 | if self._closed: | |
269 | return |
|
269 | return | |
270 |
|
270 | |||
271 | self._closed = True |
|
271 | self._closed = True | |
272 |
|
272 | |||
273 | if not self._responsef: |
|
273 | if not self._responsef: | |
274 | return |
|
274 | return | |
275 |
|
275 | |||
276 | # We need to wait on our in-flight response and then shut down the |
|
276 | # We need to wait on our in-flight response and then shut down the | |
277 | # executor once we have a result. |
|
277 | # executor once we have a result. | |
278 | try: |
|
278 | try: | |
279 | self._responsef.result() |
|
279 | self._responsef.result() | |
280 | finally: |
|
280 | finally: | |
281 | self._responseexecutor.shutdown(wait=True) |
|
281 | self._responseexecutor.shutdown(wait=True) | |
282 | self._responsef = None |
|
282 | self._responsef = None | |
283 | self._responseexecutor = None |
|
283 | self._responseexecutor = None | |
284 |
|
284 | |||
285 | # If any of our futures are still in progress, mark them as |
|
285 | # If any of our futures are still in progress, mark them as | |
286 | # errored. Otherwise a result() could wait indefinitely. |
|
286 | # errored. Otherwise a result() could wait indefinitely. | |
287 | for f in self._futures: |
|
287 | for f in self._futures: | |
288 | if not f.done(): |
|
288 | if not f.done(): | |
289 | f.set_exception( |
|
289 | f.set_exception( | |
290 | error.ResponseError( |
|
290 | error.ResponseError( | |
291 | _(b'unfulfilled batch command response'), None |
|
291 | _(b'unfulfilled batch command response'), None | |
292 | ) |
|
292 | ) | |
293 | ) |
|
293 | ) | |
294 |
|
294 | |||
295 | self._futures = None |
|
295 | self._futures = None | |
296 |
|
296 | |||
297 | def _readbatchresponse(self, states, wireresults): |
|
297 | def _readbatchresponse(self, states, wireresults): | |
298 | # Executes in a thread to read data off the wire. |
|
298 | # Executes in a thread to read data off the wire. | |
299 |
|
299 | |||
300 | for command, f, batchable, decode in states: |
|
300 | for command, f, batchable, decode in states: | |
301 | # Grab raw result off the wire and teach the internal future |
|
301 | # Grab raw result off the wire and teach the internal future | |
302 | # about it. |
|
302 | # about it. | |
303 | try: |
|
303 | try: | |
304 | remoteresult = next(wireresults) |
|
304 | remoteresult = next(wireresults) | |
305 | except StopIteration: |
|
305 | except StopIteration: | |
306 | # This can happen in particular because next(batchable) |
|
306 | # This can happen in particular because next(batchable) | |
307 | # in the previous iteration can call peer._abort, which |
|
307 | # in the previous iteration can call peer._abort, which | |
308 | # may close the peer. |
|
308 | # may close the peer. | |
309 | f.set_exception( |
|
309 | f.set_exception( | |
310 | error.ResponseError( |
|
310 | error.ResponseError( | |
311 | _(b'unfulfilled batch command response'), None |
|
311 | _(b'unfulfilled batch command response'), None | |
312 | ) |
|
312 | ) | |
313 | ) |
|
313 | ) | |
314 | else: |
|
314 | else: | |
315 | try: |
|
315 | try: | |
316 | result = decode(remoteresult) |
|
316 | result = decode(remoteresult) | |
317 | except Exception: |
|
317 | except Exception: | |
318 | pycompat.future_set_exception_info(f, sys.exc_info()[1:]) |
|
318 | pycompat.future_set_exception_info(f, sys.exc_info()[1:]) | |
319 | else: |
|
319 | else: | |
320 | f.set_result(result) |
|
320 | f.set_result(result) | |
321 |
|
321 | |||
322 |
|
322 | |||
323 | @interfaceutil.implementer( |
|
323 | @interfaceutil.implementer( | |
324 | repository.ipeercommands, repository.ipeerlegacycommands |
|
324 | repository.ipeercommands, repository.ipeerlegacycommands | |
325 | ) |
|
325 | ) | |
326 | class wirepeer(repository.peer): |
|
326 | class wirepeer(repository.peer): | |
327 | """Client-side interface for communicating with a peer repository. |
|
327 | """Client-side interface for communicating with a peer repository. | |
328 |
|
328 | |||
329 | Methods commonly call wire protocol commands of the same name. |
|
329 | Methods commonly call wire protocol commands of the same name. | |
330 |
|
330 | |||
331 | See also httppeer.py and sshpeer.py for protocol-specific |
|
331 | See also httppeer.py and sshpeer.py for protocol-specific | |
332 | implementations of this interface. |
|
332 | implementations of this interface. | |
333 | """ |
|
333 | """ | |
334 |
|
334 | |||
335 | def commandexecutor(self): |
|
335 | def commandexecutor(self): | |
336 | return peerexecutor(self) |
|
336 | return peerexecutor(self) | |
337 |
|
337 | |||
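`commandexecutor()` hands back the batching executor defined above. The usual client pattern is a context manager that queues `callcommand` calls and flushes them on exit; the sketch below mimics that contract with a toy stand-in class (`FakeExecutor` and its canned responses are invented for illustration, not Mercurial code).

```python
from concurrent.futures import Future

class FakeExecutor:
    # Toy stand-in for peerexecutor: queues calls, resolves all the
    # queued futures when the batch is sent (here, on context exit).
    def __init__(self, responses):
        self._responses = responses
        self._queued = []

    def __enter__(self):
        return self

    def __exit__(self, *exc):
        self.sendcommands()

    def callcommand(self, command, args):
        f = Future()
        self._queued.append((command, f))
        return f

    def sendcommands(self):
        for command, f in self._queued:
            f.set_result(self._responses[command])
        self._queued = []

with FakeExecutor({b'heads': [b'abc'], b'listkeys': {}}) as e:
    fheads = e.callcommand(b'heads', {})
    fkeys = e.callcommand(b'listkeys', {b'namespace': b'phases'})
heads = fheads.result()  # resolved once the batch was sent
```

The futures stay pending until the executor sends the batch, which is why the real `unsentfuture` above has to trigger `sendcommands()` from `result()` to avoid deadlocks.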
338 | # Begin of ipeercommands interface. |
|
338 | # Begin of ipeercommands interface. | |
339 |
|
339 | |||
340 | def clonebundles(self): |
|
340 | def clonebundles(self): | |
|
341 | if self.capable(b'clonebundles_manifest'): | |||
|
342 | return self._call(b'clonebundles_manifest') | |||
|
343 | else: | |||
341 | self.requirecap(b'clonebundles', _(b'clone bundles')) |
|
344 | self.requirecap(b'clonebundles', _(b'clone bundles')) | |
342 | return self._call(b'clonebundles') |
|
345 | return self._call(b'clonebundles') | |
343 |
|
346 | |||
344 | def _finish_inline_clone_bundle(self, stream): |
|
347 | def _finish_inline_clone_bundle(self, stream): | |
345 | pass # allow override for httppeer |
|
348 | pass # allow override for httppeer | |
346 |
|
349 | |||
347 | def get_cached_bundle_inline(self, path): |
|
350 | def get_cached_bundle_inline(self, path): | |
348 | stream = self._callstream(b"get_cached_bundle_inline", path=path) |
|
351 | stream = self._callstream(b"get_cached_bundle_inline", path=path) | |
349 | length = util.uvarintdecodestream(stream) |
|
352 | length = util.uvarintdecodestream(stream) | |
350 |
|
353 | |||
351 | # SSH streams will block if reading more than length |
|
354 | # SSH streams will block if reading more than length | |
352 | for chunk in util.filechunkiter(stream, limit=length): |
|
355 | for chunk in util.filechunkiter(stream, limit=length): | |
353 | yield chunk |
|
356 | yield chunk | |
354 |
|
357 | |||
355 | self._finish_inline_clone_bundle(stream) |
|
358 | self._finish_inline_clone_bundle(stream) | |
356 |
|
359 | |||
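`util.uvarintdecodestream` reads the length prefix that frames the inline bundle stream above. A standalone sketch of that decoding, assuming the usual unsigned-varint framing (seven payload bits per byte, high bit set on continuation bytes):

```python
import io

def uvarint_decode_stream(stream):
    # Unsigned varint: 7 payload bits per byte, least-significant
    # group first; the high bit means "more bytes follow".
    result = 0
    shift = 0
    while True:
        byte = stream.read(1)[0]
        result |= (byte & 0x7F) << shift
        if not (byte & 0x80):
            return result
        shift += 7

stream = io.BytesIO(b'\xac\x02' + b'x' * 300)
length = uvarint_decode_stream(stream)  # 0x2c | (0x02 << 7) = 300
payload = stream.read(length)
```

Reading exactly `length` bytes afterwards is what makes the SSH case above safe: as the comment notes, reading past the frame would block.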
357 | @batchable |
|
360 | @batchable | |
358 | def lookup(self, key): |
|
361 | def lookup(self, key): | |
359 | self.requirecap(b'lookup', _(b'look up remote revision')) |
|
362 | self.requirecap(b'lookup', _(b'look up remote revision')) | |
360 |
|
363 | |||
361 | def decode(d): |
|
364 | def decode(d): | |
362 | success, data = d[:-1].split(b" ", 1) |
|
365 | success, data = d[:-1].split(b" ", 1) | |
363 | if int(success): |
|
366 | if int(success): | |
364 | return bin(data) |
|
367 | return bin(data) | |
365 | else: |
|
368 | else: | |
366 | self._abort(error.RepoError(data)) |
|
369 | self._abort(error.RepoError(data)) | |
367 |
|
370 | |||
368 | return {b'key': encoding.fromlocal(key)}, decode |
|
371 | return {b'key': encoding.fromlocal(key)}, decode | |
369 |
|
372 | |||
370 | @batchable |
|
373 | @batchable | |
371 | def heads(self): |
|
374 | def heads(self): | |
372 | def decode(d): |
|
375 | def decode(d): | |
373 | try: |
|
376 | try: | |
374 | return wireprototypes.decodelist(d[:-1]) |
|
377 | return wireprototypes.decodelist(d[:-1]) | |
375 | except ValueError: |
|
378 | except ValueError: | |
376 | self._abort(error.ResponseError(_(b"unexpected response:"), d)) |
|
379 | self._abort(error.ResponseError(_(b"unexpected response:"), d)) | |
377 |
|
380 | |||
378 | return {}, decode |
|
381 | return {}, decode | |
379 |
|
382 | |||
380 | @batchable |
|
383 | @batchable | |
381 | def known(self, nodes): |
|
384 | def known(self, nodes): | |
382 | def decode(d): |
|
385 | def decode(d): | |
383 | try: |
|
386 | try: | |
384 | return [bool(int(b)) for b in pycompat.iterbytestr(d)] |
|
387 | return [bool(int(b)) for b in pycompat.iterbytestr(d)] | |
385 | except ValueError: |
|
388 | except ValueError: | |
386 | self._abort(error.ResponseError(_(b"unexpected response:"), d)) |
|
389 | self._abort(error.ResponseError(_(b"unexpected response:"), d)) | |
387 |
|
390 | |||
388 | return {b'nodes': wireprototypes.encodelist(nodes)}, decode |
|
391 | return {b'nodes': wireprototypes.encodelist(nodes)}, decode | |
389 |
|
392 | |||
390 | @batchable |
|
393 | @batchable | |
391 | def branchmap(self): |
|
394 | def branchmap(self): | |
392 | def decode(d): |
|
395 | def decode(d): | |
393 | try: |
|
396 | try: | |
394 | branchmap = {} |
|
397 | branchmap = {} | |
395 | for branchpart in d.splitlines(): |
|
398 | for branchpart in d.splitlines(): | |
396 | branchname, branchheads = branchpart.split(b' ', 1) |
|
399 | branchname, branchheads = branchpart.split(b' ', 1) | |
397 | branchname = encoding.tolocal(urlreq.unquote(branchname)) |
|
400 | branchname = encoding.tolocal(urlreq.unquote(branchname)) | |
398 | branchheads = wireprototypes.decodelist(branchheads) |
|
401 | branchheads = wireprototypes.decodelist(branchheads) | |
399 | branchmap[branchname] = branchheads |
|
402 | branchmap[branchname] = branchheads | |
400 | return branchmap |
|
403 | return branchmap | |
401 | except TypeError: |
|
404 | except TypeError: | |
402 | self._abort(error.ResponseError(_(b"unexpected response:"), d)) |
|
405 | self._abort(error.ResponseError(_(b"unexpected response:"), d)) | |
403 |
|
406 | |||
404 | return {}, decode |
|
407 | return {}, decode | |
405 |
|
408 | |||
406 | @batchable |
|
409 | @batchable | |
407 | def listkeys(self, namespace): |
|
410 | def listkeys(self, namespace): | |
408 | if not self.capable(b'pushkey'): |
|
411 | if not self.capable(b'pushkey'): | |
409 | return {}, None |
|
412 | return {}, None | |
410 | self.ui.debug(b'preparing listkeys for "%s"\n' % namespace) |
|
413 | self.ui.debug(b'preparing listkeys for "%s"\n' % namespace) | |
411 |
|
414 | |||
412 | def decode(d): |
|
415 | def decode(d): | |
413 | self.ui.debug( |
|
416 | self.ui.debug( | |
414 | b'received listkey for "%s": %i bytes\n' % (namespace, len(d)) |
|
417 | b'received listkey for "%s": %i bytes\n' % (namespace, len(d)) | |
415 | ) |
|
418 | ) | |
416 | return pushkeymod.decodekeys(d) |
|
419 | return pushkeymod.decodekeys(d) | |
417 |
|
420 | |||
418 | return {b'namespace': encoding.fromlocal(namespace)}, decode |
|
421 | return {b'namespace': encoding.fromlocal(namespace)}, decode | |
419 |
|
422 | |||
420 | @batchable |
|
423 | @batchable | |
421 | def pushkey(self, namespace, key, old, new): |
|
424 | def pushkey(self, namespace, key, old, new): | |
422 | if not self.capable(b'pushkey'): |
|
425 | if not self.capable(b'pushkey'): | |
423 | return False, None |
|
426 | return False, None | |
424 | self.ui.debug(b'preparing pushkey for "%s:%s"\n' % (namespace, key)) |
|
427 | self.ui.debug(b'preparing pushkey for "%s:%s"\n' % (namespace, key)) | |
425 |
|
428 | |||
426 | def decode(d): |
|
429 | def decode(d): | |
427 | d, output = d.split(b'\n', 1) |
|
430 | d, output = d.split(b'\n', 1) | |
428 | try: |
|
431 | try: | |
429 | d = bool(int(d)) |
|
432 | d = bool(int(d)) | |
430 | except ValueError: |
|
433 | except ValueError: | |
431 | raise error.ResponseError( |
|
434 | raise error.ResponseError( | |
432 | _(b'push failed (unexpected response):'), d |
|
435 | _(b'push failed (unexpected response):'), d | |
433 | ) |
|
436 | ) | |
434 | for l in output.splitlines(True): |
|
437 | for l in output.splitlines(True): | |
435 | self.ui.status(_(b'remote: '), l) |
|
438 | self.ui.status(_(b'remote: '), l) | |
436 | return d |
|
439 | return d | |
437 |
|
440 | |||
438 | return { |
|
441 | return { | |
439 | b'namespace': encoding.fromlocal(namespace), |
|
442 | b'namespace': encoding.fromlocal(namespace), | |
440 | b'key': encoding.fromlocal(key), |
|
443 | b'key': encoding.fromlocal(key), | |
441 | b'old': encoding.fromlocal(old), |
|
444 | b'old': encoding.fromlocal(old), | |
442 | b'new': encoding.fromlocal(new), |
|
445 | b'new': encoding.fromlocal(new), | |
443 | }, decode |
|
446 | }, decode | |
444 |
|
447 | |||
445 | def stream_out(self): |
|
448 | def stream_out(self): | |
446 | return self._callstream(b'stream_out') |
|
449 | return self._callstream(b'stream_out') | |
447 |
|
450 | |||
448 | def getbundle(self, source, **kwargs): |
|
451 | def getbundle(self, source, **kwargs): | |
449 | kwargs = pycompat.byteskwargs(kwargs) |
|
452 | kwargs = pycompat.byteskwargs(kwargs) | |
450 | self.requirecap(b'getbundle', _(b'look up remote changes')) |
|
453 | self.requirecap(b'getbundle', _(b'look up remote changes')) | |
451 | opts = {} |
|
454 | opts = {} | |
452 | bundlecaps = kwargs.get(b'bundlecaps') or set() |
|
455 | bundlecaps = kwargs.get(b'bundlecaps') or set() | |
453 | for key, value in kwargs.items(): |
|
456 | for key, value in kwargs.items(): | |
454 | if value is None: |
|
457 | if value is None: | |
455 | continue |
|
458 | continue | |
456 | keytype = wireprototypes.GETBUNDLE_ARGUMENTS.get(key) |
|
459 | keytype = wireprototypes.GETBUNDLE_ARGUMENTS.get(key) | |
457 | if keytype is None: |
|
460 | if keytype is None: | |
458 | raise error.ProgrammingError( |
|
461 | raise error.ProgrammingError( | |
459 | b'Unexpectedly None keytype for key %s' % key |
|
462 | b'Unexpectedly None keytype for key %s' % key | |
460 | ) |
|
463 | ) | |
461 | elif keytype == b'nodes': |
|
464 | elif keytype == b'nodes': | |
462 | value = wireprototypes.encodelist(value) |
|
465 | value = wireprototypes.encodelist(value) | |
463 | elif keytype == b'csv': |
|
466 | elif keytype == b'csv': | |
464 | value = b','.join(value) |
|
467 | value = b','.join(value) | |
465 | elif keytype == b'scsv': |
|
468 | elif keytype == b'scsv': | |
466 | value = b','.join(sorted(value)) |
|
469 | value = b','.join(sorted(value)) | |
467 | elif keytype == b'boolean': |
|
470 | elif keytype == b'boolean': | |
468 | value = b'%i' % bool(value) |
|
471 | value = b'%i' % bool(value) | |
469 | elif keytype != b'plain': |
|
472 | elif keytype != b'plain': | |
470 | raise KeyError(b'unknown getbundle option type %s' % keytype) |
|
473 | raise KeyError(b'unknown getbundle option type %s' % keytype) | |
471 | opts[key] = value |
|
474 | opts[key] = value | |
472 | f = self._callcompressable(b"getbundle", **pycompat.strkwargs(opts)) |
|
475 | f = self._callcompressable(b"getbundle", **pycompat.strkwargs(opts)) | |
473 | if any((cap.startswith(b'HG2') for cap in bundlecaps)): |
|
476 | if any((cap.startswith(b'HG2') for cap in bundlecaps)): | |
474 | return bundle2.getunbundler(self.ui, f) |
|
477 | return bundle2.getunbundler(self.ui, f) | |
475 | else: |
|
478 | else: | |
476 | return changegroupmod.cg1unpacker(f, b'UN') |
|
479 | return changegroupmod.cg1unpacker(f, b'UN') | |
477 |
|
480 | |||
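The keytype dispatch in `getbundle` above maps each argument type onto a wire encoding. The same table can be sketched standalone; the hex-joined `nodes` case mirrors what `wireprototypes.encodelist` does, and the function name here is illustrative.

```python
import binascii

def encode_getbundle_arg(keytype, value):
    # One wire encoding per argument type, as in the dispatch above.
    if keytype == b'nodes':
        # hex-encode each binary node and join with spaces
        return b' '.join(binascii.hexlify(n) for n in value)
    if keytype == b'csv':
        return b','.join(value)
    if keytype == b'scsv':
        # like csv, but order-insensitive on the wire
        return b','.join(sorted(value))
    if keytype == b'boolean':
        return b'%i' % bool(value)
    if keytype == b'plain':
        return value
    raise KeyError(b'unknown getbundle option type %s' % keytype)
```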
478 | def unbundle(self, bundle, heads, url): |
|
481 | def unbundle(self, bundle, heads, url): | |
479 | """Send cg (a readable file-like object representing the |
|
482 | """Send cg (a readable file-like object representing the | |
480 | changegroup to push, typically a chunkbuffer object) to the |
|
483 | changegroup to push, typically a chunkbuffer object) to the | |
481 | remote server as a bundle. |
|
484 | remote server as a bundle. | |
482 |
|
485 | |||
483 | When pushing a bundle10 stream, return an integer indicating the |
|
486 | When pushing a bundle10 stream, return an integer indicating the | |
484 | result of the push (see changegroup.apply()). |
|
487 | result of the push (see changegroup.apply()). | |
485 |
|
488 | |||
486 | When pushing a bundle20 stream, return a bundle20 stream. |
|
489 | When pushing a bundle20 stream, return a bundle20 stream. | |
487 |
|
490 | |||
488 | `url` is the url the client thinks it's pushing to, which is |
|
491 | `url` is the url the client thinks it's pushing to, which is | |
489 | visible to hooks. |
|
492 | visible to hooks. | |
490 | """ |
|
493 | """ | |
491 |
|
494 | |||
492 | if heads != [b'force'] and self.capable(b'unbundlehash'): |
|
495 | if heads != [b'force'] and self.capable(b'unbundlehash'): | |
493 | heads = wireprototypes.encodelist( |
|
496 | heads = wireprototypes.encodelist( | |
494 | [b'hashed', hashutil.sha1(b''.join(sorted(heads))).digest()] |
|
497 | [b'hashed', hashutil.sha1(b''.join(sorted(heads))).digest()] | |
495 | ) |
|
498 | ) | |
496 | else: |
|
499 | else: | |
497 | heads = wireprototypes.encodelist(heads) |
|
500 | heads = wireprototypes.encodelist(heads) | |
498 |
|
501 | |||
499 | if util.safehasattr(bundle, 'deltaheader'): |
|
502 | if util.safehasattr(bundle, 'deltaheader'): | |
500 | # this is a bundle10, do the old style call sequence |
|
503 | # this is a bundle10, do the old style call sequence | |
501 | ret, output = self._callpush(b"unbundle", bundle, heads=heads) |
|
504 | ret, output = self._callpush(b"unbundle", bundle, heads=heads) | |
502 | if ret == b"": |
|
505 | if ret == b"": | |
503 | raise error.ResponseError(_(b'push failed:'), output) |
|
506 | raise error.ResponseError(_(b'push failed:'), output) | |
504 | try: |
|
507 | try: | |
505 | ret = int(ret) |
|
508 | ret = int(ret) | |
506 | except ValueError: |
|
509 | except ValueError: | |
507 | raise error.ResponseError( |
|
510 | raise error.ResponseError( | |
508 | _(b'push failed (unexpected response):'), ret |
|
511 | _(b'push failed (unexpected response):'), ret | |
509 | ) |
|
512 | ) | |
510 |
|
513 | |||
511 | for l in output.splitlines(True): |
|
514 | for l in output.splitlines(True): | |
512 | self.ui.status(_(b'remote: '), l) |
|
515 | self.ui.status(_(b'remote: '), l) | |
513 | else: |
|
516 | else: | |
514 | # bundle2 push. Send a stream, fetch a stream. |
|
517 | # bundle2 push. Send a stream, fetch a stream. | |
515 | stream = self._calltwowaystream(b'unbundle', bundle, heads=heads) |
|
518 | stream = self._calltwowaystream(b'unbundle', bundle, heads=heads) | |
516 | ret = bundle2.getunbundler(self.ui, stream) |
|
519 | ret = bundle2.getunbundler(self.ui, stream) | |
517 | return ret |
|
520 | return ret | |
518 |
|
521 | |||
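When the server advertises `unbundlehash`, `unbundle` above sends a single sha1 over the sorted heads instead of listing every node, so the request size stays constant however many heads the repository has. The trick in isolation (node values are illustrative):

```python
import hashlib

def heads_digest(heads):
    # Sorting first makes the digest independent of head order, so
    # client and server agree on the hash for the same set of heads.
    return hashlib.sha1(b''.join(sorted(heads))).digest()

d1 = heads_digest([b'node-b', b'node-a'])
d2 = heads_digest([b'node-a', b'node-b'])
```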
519 | # End of ipeercommands interface. |
|
522 | # End of ipeercommands interface. | |
520 |
|
523 | |||
521 | # Begin of ipeerlegacycommands interface. |
|
524 | # Begin of ipeerlegacycommands interface. | |
522 |
|
525 | |||
523 | def branches(self, nodes): |
|
526 | def branches(self, nodes): | |
524 | n = wireprototypes.encodelist(nodes) |
|
527 | n = wireprototypes.encodelist(nodes) | |
525 | d = self._call(b"branches", nodes=n) |
|
528 | d = self._call(b"branches", nodes=n) | |
526 | try: |
|
529 | try: | |
527 | br = [tuple(wireprototypes.decodelist(b)) for b in d.splitlines()] |
|
530 | br = [tuple(wireprototypes.decodelist(b)) for b in d.splitlines()] | |
528 | return br |
|
531 | return br | |
529 | except ValueError: |
|
532 | except ValueError: | |
530 | self._abort(error.ResponseError(_(b"unexpected response:"), d)) |
|
533 | self._abort(error.ResponseError(_(b"unexpected response:"), d)) | |
531 |
|
534 | |||
532 | def between(self, pairs): |
|
535 | def between(self, pairs): | |
533 | batch = 8 # avoid giant requests |
|
536 | batch = 8 # avoid giant requests | |
534 | r = [] |
|
537 | r = [] | |
535 | for i in range(0, len(pairs), batch): |
|
538 | for i in range(0, len(pairs), batch): | |
536 | n = b" ".join( |
|
539 | n = b" ".join( | |
537 | [ |
|
540 | [ | |
538 | wireprototypes.encodelist(p, b'-') |
|
541 | wireprototypes.encodelist(p, b'-') | |
539 | for p in pairs[i : i + batch] |
|
542 | for p in pairs[i : i + batch] | |
540 | ] |
|
543 | ] | |
541 | ) |
|
544 | ) | |
542 | d = self._call(b"between", pairs=n) |
|
545 | d = self._call(b"between", pairs=n) | |
543 | try: |
|
546 | try: | |
544 | r.extend( |
|
547 | r.extend( | |
545 | l and wireprototypes.decodelist(l) or [] |
|
548 | l and wireprototypes.decodelist(l) or [] | |
546 | for l in d.splitlines() |
|
549 | for l in d.splitlines() | |
547 | ) |
|
550 | ) | |
548 | except ValueError: |
|
551 | except ValueError: | |
549 | self._abort(error.ResponseError(_(b"unexpected response:"), d)) |
|
552 | self._abort(error.ResponseError(_(b"unexpected response:"), d)) | |
550 | return r |
|
553 | return r | |
551 |
|
554 | |||
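`between` above slices its query pairs into fixed-size groups of eight so no single request grows too large. The chunking idiom on its own (names illustrative):

```python
def chunked(pairs, size=8):
    # Yield successive fixed-size slices; the last may be shorter.
    for i in range(0, len(pairs), size):
        yield pairs[i:i + size]

batches = list(chunked(list(range(20)), size=8))
```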
552 | def changegroup(self, nodes, source): |
|
555 | def changegroup(self, nodes, source): | |
553 | n = wireprototypes.encodelist(nodes) |
|
556 | n = wireprototypes.encodelist(nodes) | |
554 | f = self._callcompressable(b"changegroup", roots=n) |
|
557 | f = self._callcompressable(b"changegroup", roots=n) | |
555 | return changegroupmod.cg1unpacker(f, b'UN') |
|
558 | return changegroupmod.cg1unpacker(f, b'UN') | |
556 |
|
559 | |||
557 | def changegroupsubset(self, bases, heads, source): |
|
560 | def changegroupsubset(self, bases, heads, source): | |
558 | self.requirecap(b'changegroupsubset', _(b'look up remote changes')) |
|
561 | self.requirecap(b'changegroupsubset', _(b'look up remote changes')) | |
559 | bases = wireprototypes.encodelist(bases) |
|
562 | bases = wireprototypes.encodelist(bases) | |
560 | heads = wireprototypes.encodelist(heads) |
|
563 | heads = wireprototypes.encodelist(heads) | |
561 | f = self._callcompressable( |
|
564 | f = self._callcompressable( | |
562 | b"changegroupsubset", bases=bases, heads=heads |
|
565 | b"changegroupsubset", bases=bases, heads=heads | |
563 | ) |
|
566 | ) | |
564 | return changegroupmod.cg1unpacker(f, b'UN') |
|
567 | return changegroupmod.cg1unpacker(f, b'UN') | |
565 |
|
568 | |||
566 | # End of ipeerlegacycommands interface. |
|
569 | # End of ipeerlegacycommands interface. | |
567 |
|
570 | |||
568 | def _submitbatch(self, req): |
|
571 | def _submitbatch(self, req): | |
569 | """run batch request <req> on the server |
|
572 | """run batch request <req> on the server | |
570 |
|
573 | |||
571 | Returns an iterator of the raw responses from the server. |
|
574 | Returns an iterator of the raw responses from the server. | |
572 | """ |
|
575 | """ | |
573 | ui = self.ui |
|
576 | ui = self.ui | |
574 | if ui.debugflag and ui.configbool(b'devel', b'debug.peer-request'): |
|
577 | if ui.debugflag and ui.configbool(b'devel', b'debug.peer-request'): | |
575 | ui.debug(b'devel-peer-request: batched-content\n') |
|
578 | ui.debug(b'devel-peer-request: batched-content\n') | |
576 | for op, args in req: |
|
579 | for op, args in req: | |
577 | msg = b'devel-peer-request: - %s (%d arguments)\n' |
|
580 | msg = b'devel-peer-request: - %s (%d arguments)\n' | |
578 | ui.debug(msg % (op, len(args))) |
|
581 | ui.debug(msg % (op, len(args))) | |
579 |
|
582 | |||
580 | unescapearg = wireprototypes.unescapebatcharg |
|
583 | unescapearg = wireprototypes.unescapebatcharg | |
581 |
|
584 | |||
582 | rsp = self._callstream(b"batch", cmds=encodebatchcmds(req)) |
|
585 | rsp = self._callstream(b"batch", cmds=encodebatchcmds(req)) | |
583 | chunk = rsp.read(1024) |
|
586 | chunk = rsp.read(1024) | |
584 | work = [chunk] |
|
587 | work = [chunk] | |
585 | while chunk: |
|
588 | while chunk: | |
586 | while b';' not in chunk and chunk: |
|
589 | while b';' not in chunk and chunk: | |
587 | chunk = rsp.read(1024) |
|
590 | chunk = rsp.read(1024) | |
588 | work.append(chunk) |
|
591 | work.append(chunk) | |
589 | merged = b''.join(work) |
|
592 | merged = b''.join(work) | |
590 | while b';' in merged: |
|
593 | while b';' in merged: | |
591 | one, merged = merged.split(b';', 1) |
|
594 | one, merged = merged.split(b';', 1) | |
592 | yield unescapearg(one) |
|
595 | yield unescapearg(one) | |
593 | chunk = rsp.read(1024) |
|
596 | chunk = rsp.read(1024) | |
594 | work = [merged, chunk] |
|
597 | work = [merged, chunk] | |
595 | yield unescapearg(b''.join(work)) |
|
598 | yield unescapearg(b''.join(work)) | |
596 |
|
599 | |||
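`_submitbatch` above reads the response in 1024-byte chunks, splits on `;`, and un-escapes each piece with `wireprototypes.unescapebatcharg`. A standalone sketch of the same incremental split; the `:c`/`:o`/`:s`/`:e` escape table is my reading of Mercurial's batch escaping and should be treated as an assumption here.

```python
import io

def unescape(piece):
    # Assumed escape table: ':e'->'=', ':s'->';', ':o'->',', ':c'->':'.
    # Multi-character sequences must be undone before the bare ':c'.
    return (piece.replace(b':e', b'=').replace(b':s', b';')
                 .replace(b':o', b',').replace(b':c', b':'))

def iter_batch_responses(stream, chunksize=1024):
    # Accumulate chunks until a ';' separator appears, yield each
    # complete piece, and keep the remainder for the next round.
    buf = b''
    while True:
        chunk = stream.read(chunksize)
        if not chunk:
            break
        buf += chunk
        while b';' in buf:
            piece, buf = buf.split(b';', 1)
            yield unescape(piece)
    yield unescape(buf)

# the escaped ':s' inside the first result survives the split on ';'
stream = io.BytesIO(b'ok:s really;second result')
parts = list(iter_batch_responses(stream, chunksize=4))
```

Splitting only on raw `;` works because any `;` inside a result was escaped before transmission, which is exactly why the incremental loop above is safe.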
597 | def _submitone(self, op, args): |
|
600 | def _submitone(self, op, args): | |
598 | return self._call(op, **pycompat.strkwargs(args)) |
|
601 | return self._call(op, **pycompat.strkwargs(args)) | |
599 |
|
602 | |||
600 | def debugwireargs(self, one, two, three=None, four=None, five=None): |
|
603 | def debugwireargs(self, one, two, three=None, four=None, five=None): | |
601 | # don't pass optional arguments left at their default value |
|
604 | # don't pass optional arguments left at their default value | |
602 | opts = {} |
|
605 | opts = {} | |
603 | if three is not None: |
|
606 | if three is not None: | |
604 | opts['three'] = three |
|
607 | opts['three'] = three | |
605 | if four is not None: |
|
608 | if four is not None: | |
606 | opts['four'] = four |
|
609 | opts['four'] = four | |
607 | return self._call(b'debugwireargs', one=one, two=two, **opts) |
|
610 | return self._call(b'debugwireargs', one=one, two=two, **opts) | |
608 |
|
611 | |||
609 | def _call(self, cmd, **args): |
|
612 | def _call(self, cmd, **args): | |
610 | """execute <cmd> on the server |
|
613 | """execute <cmd> on the server | |
611 |
|
614 | |||
612 | The command is expected to return a simple string. |
|
615 | The command is expected to return a simple string. | |
613 |
|
616 | |||
614 | returns the server reply as a string.""" |
|
617 | returns the server reply as a string.""" | |
615 | raise NotImplementedError() |
|
618 | raise NotImplementedError() | |
616 |
|
619 | |||
617 | def _callstream(self, cmd, **args): |
|
620 | def _callstream(self, cmd, **args): | |
618 | """execute <cmd> on the server |
|
621 | """execute <cmd> on the server | |
619 |
|
622 | |||
620 | The command is expected to return a stream. Note that if the |
|
623 | The command is expected to return a stream. Note that if the | |
621 | command doesn't return a stream, _callstream behaves |
|
624 | command doesn't return a stream, _callstream behaves | |
622 | differently for ssh and http peers. |
|
625 | differently for ssh and http peers. | |
623 |
|
626 | |||
624 | returns the server reply as a file like object. |
|
627 | returns the server reply as a file like object. | |
625 | """ |
|
628 | """ | |
626 | raise NotImplementedError() |
|
629 | raise NotImplementedError() | |
627 |
|
630 | |||
628 | def _callcompressable(self, cmd, **args): |
|
631 | def _callcompressable(self, cmd, **args): | |
629 | """execute <cmd> on the server |
|
632 | """execute <cmd> on the server | |
630 |
|
633 | |||
631 | The command is expected to return a stream. |
|
634 | The command is expected to return a stream. | |
632 |
|
635 | |||
633 | The stream may have been compressed in some implementations. This |
|
636 | The stream may have been compressed in some implementations. This | |
634 | function takes care of the decompression. This is the only difference |
|
637 | function takes care of the decompression. This is the only difference | |
635 | with _callstream. |
|
638 | with _callstream. | |
636 |
|
639 | |||
637 | returns the server reply as a file like object. |
|
640 | returns the server reply as a file like object. | |
638 | """ |
|
641 | """ | |
639 | raise NotImplementedError() |
|
642 | raise NotImplementedError() | |
640 |
|
643 | |||
641 | def _callpush(self, cmd, fp, **args): |
|
644 | def _callpush(self, cmd, fp, **args): | |
642 | """execute a <cmd> on server |
|
645 | """execute a <cmd> on server | |
643 |
|
646 | |||
644 | The command is expected to be related to a push. Push has a special |
|
647 | The command is expected to be related to a push. Push has a special | |
645 | return method. |
|
648 | return method. | |
646 |
|
649 | |||
647 | returns the server reply as a (ret, output) tuple. ret is either |
|
650 | returns the server reply as a (ret, output) tuple. ret is either | |
648 | empty (error) or a stringified int. |
|
651 | empty (error) or a stringified int. | |
649 | """ |
|
652 | """ | |
650 | raise NotImplementedError() |
|
653 | raise NotImplementedError() | |
651 |
|
654 | |||
652 | def _calltwowaystream(self, cmd, fp, **args): |
|
655 | def _calltwowaystream(self, cmd, fp, **args): | |
653 | """execute <cmd> on server |
|
656 | """execute <cmd> on server | |
654 |
|
657 | |||
655 | The command will send a stream to the server and get a stream in reply. |
|
658 | The command will send a stream to the server and get a stream in reply. | |
656 | """ |
|
659 | """ | |
657 | raise NotImplementedError() |
|
660 | raise NotImplementedError() | |
658 |
|
661 | |||
659 | def _abort(self, exception): |
|
662 | def _abort(self, exception): | |
660 | """clearly abort the wire protocol connection and raise the exception""" |
|
663 | """clearly abort the wire protocol connection and raise the exception""" | |
661 | raise NotImplementedError() |
|
664 | raise NotImplementedError() |
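The `_call*` methods above are the abstract transport interface that concrete peers (ssh, http) implement. A minimal in-memory sketch, with a made-up `echopeer` class standing in for a real transport:

```python
class echopeer:
    """Toy stand-in for a concrete peer: _call just echoes its input.

    Real peers (sshpeer, httppeer) send the command over their transport
    and return the server's reply bytes; this class only illustrates the
    calling convention of the abstract interface.
    """

    def _call(self, cmd, **args):
        # Encode the command name and sorted keyword arguments as the "reply".
        parts = [cmd]
        for k, v in sorted(args.items()):
            parts.append(b'%s=%s' % (k.encode('ascii'), v))
        return b' '.join(parts)


peer = echopeer()
reply = peer._call(b'heads', key=b'tip')
print(reply)  # b'heads key=tip'
```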
@@ -1,795 +1,804 @@
# wireprotov1server.py - Wire protocol version 1 server functionality
#
# Copyright 2005-2010 Olivia Mackall <olivia@selenic.com>
#
# This software may be used and distributed according to the terms of the
# GNU General Public License version 2 or any later version.


import binascii
import os

from .i18n import _
from .node import hex
from .pycompat import getattr

from . import (
    bundle2,
    bundlecaches,
    changegroup as changegroupmod,
    discovery,
    encoding,
    error,
    exchange,
    hook,
    pushkey as pushkeymod,
    pycompat,
    repoview,
    requirements as requirementsmod,
    streamclone,
    util,
    wireprototypes,
)

from .utils import (
    procutil,
    stringutil,
)

urlerr = util.urlerr
urlreq = util.urlreq

bundle2requiredmain = _(b'incompatible Mercurial client; bundle2 required')
bundle2requiredhint = _(
    b'see https://www.mercurial-scm.org/wiki/IncompatibleClient'
)
bundle2required = b'%s\n(%s)\n' % (bundle2requiredmain, bundle2requiredhint)


def clientcompressionsupport(proto):
    """Returns a list of compression methods supported by the client.

    Returns a list of the compression methods supported by the client
    according to the protocol capabilities. If no such capability has
    been announced, fallback to the default of zlib and uncompressed.
    """
    for cap in proto.getprotocaps():
        if cap.startswith(b'comp='):
            return cap[5:].split(b',')
    return [b'zlib', b'none']
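`clientcompressionsupport` scans the announced protocol capabilities for a `comp=` entry and splits its value on commas. The same lookup as a standalone function (the capability strings below are invented for illustration):

```python
def compression_support(protocaps):
    """Return compression engines from a 'comp=' capability, else defaults."""
    for cap in protocaps:
        if cap.startswith(b'comp='):
            # Everything after 'comp=' is a comma-separated engine list.
            return cap[5:].split(b',')
    # No capability announced: fall back to zlib and uncompressed.
    return [b'zlib', b'none']


print(compression_support([b'partial-pull', b'comp=zstd,zlib,none']))
print(compression_support([b'partial-pull']))
```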


# wire protocol command can either return a string or one of these classes.


def getdispatchrepo(repo, proto, command, accesshidden=False):
    """Obtain the repo used for processing wire protocol commands.

    The intent of this function is to serve as a monkeypatch point for
    extensions that need commands to operate on different repo views under
    specialized circumstances.
    """
    viewconfig = repo.ui.config(b'server', b'view')

    # Only works if the filter actually supports being upgraded to show hidden
    # changesets.
    if (
        accesshidden
        and viewconfig is not None
        and viewconfig + b'.hidden' in repoview.filtertable
    ):
        viewconfig += b'.hidden'

    return repo.filtered(viewconfig)


def dispatch(repo, proto, command, accesshidden=False):
    repo = getdispatchrepo(repo, proto, command, accesshidden=accesshidden)

    func, spec = commands[command]
    args = proto.getargs(spec)

    return func(repo, proto, *args)


def options(cmd, keys, others):
    opts = {}
    for k in keys:
        if k in others:
            opts[k] = others[k]
            del others[k]
    if others:
        procutil.stderr.write(
            b"warning: %s ignored unexpected arguments %s\n"
            % (cmd, b",".join(others))
        )
    return opts
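`options` pulls the expected keys out of the received argument dict and warns about anything left over. A self-contained version of the same split, with the stderr warning swapped for a returned list purely so the behavior is easy to assert (the key names below are invented):

```python
def split_options(keys, others):
    """Extract known keys from `others`; return (opts, unexpected keys)."""
    opts = {}
    for k in keys:
        if k in others:
            # pop() both reads and removes the key, like the del above.
            opts[k] = others.pop(k)
    return opts, sorted(others)


opts, extra = split_options(
    [b'heads', b'common'],
    {b'heads': b'abc', b'bogus': b'1'},
)
print(opts)   # {b'heads': b'abc'}
print(extra)  # [b'bogus']
```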


def bundle1allowed(repo, action):
    """Whether a bundle1 operation is allowed from the server.

    Priority is:

    1. server.bundle1gd.<action> (if generaldelta active)
    2. server.bundle1.<action>
    3. server.bundle1gd (if generaldelta active)
    4. server.bundle1
    """
    ui = repo.ui
    gd = requirementsmod.GENERALDELTA_REQUIREMENT in repo.requirements

    if gd:
        v = ui.configbool(b'server', b'bundle1gd.%s' % action)
        if v is not None:
            return v

    v = ui.configbool(b'server', b'bundle1.%s' % action)
    if v is not None:
        return v

    if gd:
        v = ui.configbool(b'server', b'bundle1gd')
        if v is not None:
            return v

    return ui.configbool(b'server', b'bundle1')
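`bundle1allowed` resolves four config knobs in a fixed priority order, stopping at the first one that is explicitly set. The same cascade over a plain dict (setting names come from the docstring above; the dict values and the final `True` default are assumptions for this sketch, not Mercurial's actual defaults):

```python
def bundle1_allowed(config, action, generaldelta):
    """Resolve server.bundle1* settings in priority order.

    Priority: bundle1gd.<action>, bundle1.<action>, bundle1gd, bundle1.
    `config` maps setting name to True/False; a missing key means unset.
    """
    candidates = []
    if generaldelta:
        candidates.append('bundle1gd.%s' % action)
    candidates.append('bundle1.%s' % action)
    if generaldelta:
        candidates.append('bundle1gd')
    candidates.append('bundle1')
    for name in candidates:
        v = config.get(name)
        if v is not None:
            return v
    return True  # assumed default when everything is unset


cfg = {'bundle1gd': False, 'bundle1.pull': True}
print(bundle1_allowed(cfg, 'pull', generaldelta=True))   # True
print(bundle1_allowed(cfg, 'push', generaldelta=True))   # False
```

The per-action key beats the general key, and the generaldelta-specific key beats the plain one at each level, which is exactly the four-step list in the docstring.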


commands = wireprototypes.commanddict()


def wireprotocommand(name, args=None, permission=b'push'):
    """Decorator to declare a wire protocol command.

    ``name`` is the name of the wire protocol command being provided.

    ``args`` defines the named arguments accepted by the command. It is
    a space-delimited list of argument names. ``*`` denotes a special value
    that says to accept all named arguments.

    ``permission`` defines the permission type needed to run this command.
    Can be ``push`` or ``pull``. These roughly map to read-write and read-only,
    respectively. Default is to assume command requires ``push`` permissions
    because otherwise commands not declaring their permissions could modify
    a repository that is supposed to be read-only.
    """
    transports = {
        k for k, v in wireprototypes.TRANSPORTS.items() if v[b'version'] == 1
    }

    if permission not in (b'push', b'pull'):
        raise error.ProgrammingError(
            b'invalid wire protocol permission; '
            b'got %s; expected "push" or "pull"' % permission
        )

    if args is None:
        args = b''

    if not isinstance(args, bytes):
        raise error.ProgrammingError(
            b'arguments for version 1 commands must be declared as bytes'
        )

    def register(func):
        if name in commands:
            raise error.ProgrammingError(
                b'%s command already registered for version 1' % name
            )
        commands[name] = wireprototypes.commandentry(
            func, args=args, transports=transports, permission=permission
        )

        return func

    return register


# TODO define a more appropriate permissions type to use for this.
@wireprotocommand(b'batch', b'cmds *', permission=b'pull')
def batch(repo, proto, cmds, others):
    unescapearg = wireprototypes.unescapebatcharg
    res = []
    for pair in cmds.split(b';'):
        op, args = pair.split(b' ', 1)
        vals = {}
        for a in args.split(b','):
            if a:
                n, v = a.split(b'=')
                vals[unescapearg(n)] = unescapearg(v)
        func, spec = commands[op]

        # Validate that client has permissions to perform this command.
        perm = commands[op].permission
        assert perm in (b'push', b'pull')
        proto.checkperm(perm)

        if spec:
            keys = spec.split()
            data = {}
            for k in keys:
                if k == b'*':
                    star = {}
                    for key in vals.keys():
                        if key not in keys:
                            star[key] = vals[key]
                    data[b'*'] = star
                else:
                    data[k] = vals[k]
            result = func(repo, proto, *[data[k] for k in keys])
        else:
            result = func(repo, proto)
        if isinstance(result, wireprototypes.ooberror):
            return result

        # For now, all batchable commands must return bytesresponse or
        # raw bytes (for backwards compatibility).
        assert isinstance(result, (wireprototypes.bytesresponse, bytes))
        if isinstance(result, wireprototypes.bytesresponse):
            result = result.data
        res.append(wireprototypes.escapebatcharg(result))

    return wireprototypes.bytesresponse(b';'.join(res))
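The `batch` command packs many commands into one request, separating commands with `;`, arguments with `,`, and names from values with `=`, so argument bytes must escape those separators. A sketch of a round-trippable escaping in the style of `wireprototypes.escapebatcharg`/`unescapebatcharg` (the exact scheme lives in `mercurial/wireprototypes.py`; treat this as an illustration):

```python
def escape_batch_arg(plain):
    # ':' is the escape character, so it must be escaped first.
    return (
        plain.replace(b':', b'::')
        .replace(b',', b':,')
        .replace(b';', b':;')
        .replace(b'=', b':=')
    )


def unescape_batch_arg(escaped):
    # Undo the replacements in the reverse order they were applied.
    return (
        escaped.replace(b':=', b'=')
        .replace(b':;', b';')
        .replace(b':,', b',')
        .replace(b'::', b':')
    )


raw = b'key=a;b,c:d'
wire = escape_batch_arg(raw)
print(wire)  # b'key:=a:;b:,c::d'
assert unescape_batch_arg(wire) == raw
```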


@wireprotocommand(b'between', b'pairs', permission=b'pull')
def between(repo, proto, pairs):
    pairs = [wireprototypes.decodelist(p, b'-') for p in pairs.split(b" ")]
    r = []
    for b in repo.between(pairs):
        r.append(wireprototypes.encodelist(b) + b"\n")

    return wireprototypes.bytesresponse(b''.join(r))


@wireprotocommand(b'branchmap', permission=b'pull')
def branchmap(repo, proto):
    branchmap = repo.branchmap()
    heads = []
    for branch, nodes in branchmap.items():
        branchname = urlreq.quote(encoding.fromlocal(branch))
        branchnodes = wireprototypes.encodelist(nodes)
        heads.append(b'%s %s' % (branchname, branchnodes))

    return wireprototypes.bytesresponse(b'\n'.join(heads))
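`branchmap` emits one line per branch: the URL-quoted branch name, a space, then the node list hex-encoded by `wireprototypes.encodelist`. A sketch of that wire format using stdlib equivalents (`urllib.parse.quote` and `binascii.hexlify` stand in for the real helpers; node values are made up):

```python
import binascii
from urllib.parse import quote


def encode_branchmap(branchmap):
    """branchmap: dict of branch name (bytes) -> list of 20-byte nodes."""
    heads = []
    for branch, nodes in sorted(branchmap.items()):
        # Branch names may contain spaces, so they are URL-quoted.
        branchname = quote(branch.decode('latin-1')).encode('ascii')
        # encodelist joins hex node ids with single spaces.
        branchnodes = b' '.join(binascii.hexlify(n) for n in nodes)
        heads.append(b'%s %s' % (branchname, branchnodes))
    return b'\n'.join(heads)


resp = encode_branchmap(
    {b'default': [b'\x00' * 20], b'stable branch': [b'\xff' * 20]}
)
print(resp.decode('ascii'))
```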


@wireprotocommand(b'branches', b'nodes', permission=b'pull')
def branches(repo, proto, nodes):
    nodes = wireprototypes.decodelist(nodes)
    r = []
    for b in repo.branches(nodes):
        r.append(wireprototypes.encodelist(b) + b"\n")

    return wireprototypes.bytesresponse(b''.join(r))


@wireprotocommand(b'get_cached_bundle_inline', b'path', permission=b'pull')
def get_cached_bundle_inline(repo, proto, path):
    """
    Server command to send a clonebundle to the client
    """
    if hook.hashook(repo.ui, b'pretransmit-inline-clone-bundle'):
        hook.hook(
            repo.ui,
            repo,
            b'pretransmit-inline-clone-bundle',
            throw=True,
            clonebundlepath=path,
        )

    bundle_dir = repo.vfs.join(bundlecaches.BUNDLE_CACHE_DIR)
    clonebundlepath = repo.vfs.join(bundle_dir, path)
    if not repo.vfs.exists(clonebundlepath):
        raise error.Abort(b'clonebundle %s does not exist' % path)

    clonebundles_dir = os.path.realpath(bundle_dir)
    if not os.path.realpath(clonebundlepath).startswith(clonebundles_dir):
        raise error.Abort(b'clonebundle %s is using an illegal path' % path)

    def generator(vfs, bundle_path):
        with vfs(bundle_path) as f:
            length = os.fstat(f.fileno())[6]
            yield util.uvarintencode(length)
            for chunk in util.filechunkiter(f):
                yield chunk

    stream = generator(repo.vfs, clonebundlepath)
    return wireprototypes.streamres(gen=stream, prefer_uncompressed=True)
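The inline-bundle stream above is framed as a varint-encoded length followed by the file chunks. A sketch of the base-128 varint that `util.uvarintencode` is built around (assumed to match the common LEB128-style encoding; `mercurial/util.py` is authoritative):

```python
def uvarint_encode(value):
    """Encode an unsigned int as little-endian base-128 varint bytes."""
    out = bytearray()
    while True:
        byte = value & 0x7F
        value >>= 7
        if value:
            out.append(byte | 0x80)  # continuation bit: more bytes follow
        else:
            out.append(byte)
            return bytes(out)


def uvarint_decode(data):
    """Decode a varint from the front of `data`; return (value, bytes used)."""
    value = 0
    for i, byte in enumerate(data):
        value |= (byte & 0x7F) << (7 * i)
        if not byte & 0x80:
            return value, i + 1
    raise ValueError('incomplete varint')


assert uvarint_encode(127) == b'\x7f'
assert uvarint_decode(uvarint_encode(300))[0] == 300
```

Sending the length first lets the client preallocate or validate the payload before reading the chunked bundle body.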


@wireprotocommand(b'clonebundles', b'', permission=b'pull')
def clonebundles(repo, proto):
+    """A legacy version of clonebundles_manifest
+
+    This version filtered out new url scheme (like peer-bundle-cache://) to
+    avoid confusion in older clients.
+    """
+    manifest_contents = bundlecaches.get_manifest(repo)
+    # Filter out peer-bundle-cache:// entries
+    modified_manifest = []
+    for line in manifest_contents.splitlines():
+        if line.startswith(bundlecaches.CLONEBUNDLESCHEME):
+            continue
+        modified_manifest.append(line)
+    return wireprototypes.bytesresponse(b'\n'.join(modified_manifest))
+
+
+@wireprotocommand(b'clonebundles_manifest', b'*', permission=b'pull')
+def clonebundles_2(repo, proto, args):
    """Server command for returning info for available bundles to seed clones.

    Clients will parse this response and determine what bundle to fetch.

    Extensions may wrap this command to filter or dynamically emit data
    depending on the request. e.g. you could advertise URLs for the closest
    data center given the client's IP address.

    The only filter on the server side is filtering out inline clonebundles
    in case a client does not support them.
    Otherwise, older clients would retrieve and error out on those.
    """
    manifest_contents = bundlecaches.get_manifest(repo)
-    clientcapabilities = proto.getprotocaps()
-    if b'inlineclonebundles' in clientcapabilities:
-        return wireprototypes.bytesresponse(manifest_contents)
-    modified_manifest = []
-    for line in manifest_contents.splitlines():
-        if line.startswith(bundlecaches.CLONEBUNDLESCHEME):
-            continue
-        modified_manifest.append(line)
-    return wireprototypes.bytesresponse(b'\n'.join(modified_manifest))
+    return wireprototypes.bytesresponse(manifest_contents)
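The legacy `clonebundles` command drops manifest lines whose URL uses the new `peer-bundle-cache://` scheme so that old clients never see entries they cannot handle. The same filtering as a standalone function (the manifest contents and the CDN URL are invented; the scheme prefix is the one named in the diff above):

```python
CLONEBUNDLESCHEME = b"peer-bundle-cache://"


def legacy_manifest(manifest_contents):
    """Drop manifest entries that older clients would not understand."""
    modified = []
    for line in manifest_contents.splitlines():
        if line.startswith(CLONEBUNDLESCHEME):
            continue
        modified.append(line)
    return b'\n'.join(modified)


manifest = (
    b'https://cdn.example.com/full.hg BUNDLESPEC=gzip-v2\n'
    b'peer-bundle-cache://full.hg BUNDLESPEC=gzip-v2\n'
)
print(legacy_manifest(manifest).decode())
```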


wireprotocaps = [
    b'lookup',
    b'branchmap',
    b'pushkey',
    b'known',
    b'getbundle',
    b'unbundlehash',
]


def _capabilities(repo, proto):
    """return a list of capabilities for a repo

    This function exists to allow extensions to easily wrap capabilities
    computation

    - returns a lists: easy to alter
    - change done here will be propagated to both `capabilities` and `hello`
      command without any other action needed.
    """
    # copy to prevent modification of the global list
    caps = list(wireprotocaps)

    # Command of same name as capability isn't exposed to version 1 of
    # transports. So conditionally add it.
    if commands.commandavailable(b'changegroupsubset', proto):
        caps.append(b'changegroupsubset')

    if streamclone.allowservergeneration(repo):
        if repo.ui.configbool(b'server', b'preferuncompressed'):
            caps.append(b'stream-preferred')
        requiredformats = streamclone.streamed_requirements(repo)
        # if our local revlogs are just revlogv1, add 'stream' cap
        if not requiredformats - {requirementsmod.REVLOGV1_REQUIREMENT}:
            caps.append(b'stream')
        # otherwise, add 'streamreqs' detailing our local revlog format
        else:
            caps.append(b'streamreqs=%s' % b','.join(sorted(requiredformats)))
    if repo.ui.configbool(b'experimental', b'bundle2-advertise'):
        capsblob = bundle2.encodecaps(bundle2.getrepocaps(repo, role=b'server'))
        caps.append(b'bundle2=' + urlreq.quote(capsblob))
    caps.append(b'unbundle=%s' % b','.join(bundle2.bundlepriority))

    if repo.ui.configbool(b'experimental', b'narrow'):
        caps.append(wireprototypes.NARROWCAP)
        if repo.ui.configbool(b'experimental', b'narrowservebrokenellipses'):
            caps.append(wireprototypes.ELLIPSESCAP)

    return proto.addcapabilities(repo, caps)


# If you are writing an extension and consider wrapping this function. Wrap
# `_capabilities` instead.
@wireprotocommand(b'capabilities', permission=b'pull')
def capabilities(repo, proto):
    caps = _capabilities(repo, proto)
    return wireprototypes.bytesresponse(b' '.join(sorted(caps)))


@wireprotocommand(b'changegroup', b'roots', permission=b'pull')
def changegroup(repo, proto, roots):
    nodes = wireprototypes.decodelist(roots)
    outgoing = discovery.outgoing(
        repo, missingroots=nodes, ancestorsof=repo.heads()
    )
    cg = changegroupmod.makechangegroup(repo, outgoing, b'01', b'serve')
    gen = iter(lambda: cg.read(32768), b'')
    return wireprototypes.streamres(gen=gen)
398 | @wireprotocommand(b'changegroupsubset', b'bases heads', permission=b'pull') |
|
407 | @wireprotocommand(b'changegroupsubset', b'bases heads', permission=b'pull') | |
399 | def changegroupsubset(repo, proto, bases, heads): |
|
408 | def changegroupsubset(repo, proto, bases, heads): | |
400 | bases = wireprototypes.decodelist(bases) |
|
409 | bases = wireprototypes.decodelist(bases) | |
401 | heads = wireprototypes.decodelist(heads) |
|
410 | heads = wireprototypes.decodelist(heads) | |
402 | outgoing = discovery.outgoing(repo, missingroots=bases, ancestorsof=heads) |
|
411 | outgoing = discovery.outgoing(repo, missingroots=bases, ancestorsof=heads) | |
403 | cg = changegroupmod.makechangegroup(repo, outgoing, b'01', b'serve') |
|
412 | cg = changegroupmod.makechangegroup(repo, outgoing, b'01', b'serve') | |
404 | gen = iter(lambda: cg.read(32768), b'') |
|
413 | gen = iter(lambda: cg.read(32768), b'') | |
405 | return wireprototypes.streamres(gen=gen) |
|
414 | return wireprototypes.streamres(gen=gen) | |
406 |
|
415 | |||
407 |
|
416 | |||
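Both changegroup commands stream their output with the two-argument `iter(callable, sentinel)` idiom used above. A minimal, self-contained sketch of that chunking pattern, using `io.BytesIO` as a stand-in for the changegroup reader:

```python
import io

def chunked(fp, size=4):
    # iter(callable, sentinel) keeps calling fp.read(size) and stops as
    # soon as it returns the sentinel b'' (end of stream)
    return iter(lambda: fp.read(size), b'')

fp = io.BytesIO(b'abcdefghij')  # stand-in for the changegroup reader
chunks = list(chunked(fp))
# chunks == [b'abcd', b'efgh', b'ij']
```

The same shape works for any read-like callable, which is why the server can hand the generator straight to `streamres` without buffering the whole bundle.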
@wireprotocommand(b'debugwireargs', b'one two *', permission=b'pull')
def debugwireargs(repo, proto, one, two, others):
    # only accept optional args from the known set
    opts = options(b'debugwireargs', [b'three', b'four'], others)
    return wireprototypes.bytesresponse(
        repo.debugwireargs(one, two, **pycompat.strkwargs(opts))
    )


def find_pullbundle(repo, proto, opts, clheads, heads, common):
    """Return a file object for the first matching pullbundle.

    Pullbundles are specified in .hg/pullbundles.manifest, similarly to
    clonebundles.
    For each entry, the bundle specification is checked for compatibility:
    - Client features vs the BUNDLESPEC.
    - Revisions shared with the clients vs base revisions of the bundle.
      A bundle can be applied only if all its base revisions are known by
      the client.
    - At least one leaf of the bundle's DAG is missing on the client.
    - Every leaf of the bundle's DAG is part of the node set the client
      wants. E.g. do not send a bundle of all changes if the client wants
      only one specific branch of many.
    """

    def decodehexstring(s):
        return {binascii.unhexlify(h) for h in s.split(b';')}

    manifest = repo.vfs.tryread(b'pullbundles.manifest')
    if not manifest:
        return None
    res = bundlecaches.parseclonebundlesmanifest(repo, manifest)
    res = bundlecaches.filterclonebundleentries(repo, res, pullbundles=True)
    if not res:
        return None
    cl = repo.unfiltered().changelog
    heads_anc = cl.ancestors([cl.rev(rev) for rev in heads], inclusive=True)
    common_anc = cl.ancestors([cl.rev(rev) for rev in common], inclusive=True)
    compformats = clientcompressionsupport(proto)
    for entry in res:
        comp = entry.get(b'COMPRESSION')
        altcomp = util.compengines._bundlenames.get(comp)
        if comp and comp not in compformats and altcomp not in compformats:
            continue
        # No test yet for VERSION, since V2 is supported by any client
        # that advertises partial pulls
        if b'heads' in entry:
            try:
                bundle_heads = decodehexstring(entry[b'heads'])
            except TypeError:
                # Bad heads entry
                continue
            if bundle_heads.issubset(common):
                continue  # Nothing new
            if all(cl.rev(rev) in common_anc for rev in bundle_heads):
                continue  # Still nothing new
            if any(
                cl.rev(rev) not in heads_anc and cl.rev(rev) not in common_anc
                for rev in bundle_heads
            ):
                continue
        if b'bases' in entry:
            try:
                bundle_bases = decodehexstring(entry[b'bases'])
            except TypeError:
                # Bad bases entry
                continue
            if not all(cl.rev(rev) in common_anc for rev in bundle_bases):
                continue
        path = entry[b'URL']
        repo.ui.debug(b'sending pullbundle "%s"\n' % path)
        try:
            return repo.vfs.open(path)
        except IOError:
            repo.ui.debug(b'pullbundle "%s" not accessible\n' % path)
            continue
    return None

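The `heads`/`bases` attributes that `find_pullbundle` checks are semicolon-separated hex nodes. A small sketch of the decoding and the "nothing new" subset test, using hypothetical 4-byte node values for brevity (real Mercurial nodes are 20 bytes):

```python
import binascii

def decodehexstring(s):
    # b"<hex>;<hex>" -> {binary_node, ...}; mirrors the helper defined
    # inside find_pullbundle
    return {binascii.unhexlify(h) for h in s.split(b';')}

# hypothetical manifest 'heads' attribute with two illustrative nodes
bundle_heads = decodehexstring(b'deadbeef;cafebabe')
# set of nodes the client already has (here: the same two)
common = {binascii.unhexlify(b'deadbeef'), binascii.unhexlify(b'cafebabe')}
# a bundle whose heads the client already has contributes nothing new,
# so the server skips it and moves to the next manifest entry
skip = bundle_heads.issubset(common)
```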
@wireprotocommand(b'getbundle', b'*', permission=b'pull')
def getbundle(repo, proto, others):
    opts = options(
        b'getbundle', wireprototypes.GETBUNDLE_ARGUMENTS.keys(), others
    )
    for k, v in opts.items():
        keytype = wireprototypes.GETBUNDLE_ARGUMENTS[k]
        if keytype == b'nodes':
            opts[k] = wireprototypes.decodelist(v)
        elif keytype == b'csv':
            opts[k] = list(v.split(b','))
        elif keytype == b'scsv':
            opts[k] = set(v.split(b','))
        elif keytype == b'boolean':
            # Clients should serialize False as '0', which is a non-empty
            # string and would otherwise evaluate as a True bool.
            if v == b'0':
                opts[k] = False
            else:
                opts[k] = bool(v)
        elif keytype != b'plain':
            raise KeyError(b'unknown getbundle option type %s' % keytype)

    if not bundle1allowed(repo, b'pull'):
        if not exchange.bundle2requested(opts.get(b'bundlecaps')):
            if proto.name == b'http-v1':
                return wireprototypes.ooberror(bundle2required)
            raise error.Abort(bundle2requiredmain, hint=bundle2requiredhint)

    try:
        clheads = set(repo.changelog.heads())
        heads = set(opts.get(b'heads', set()))
        common = set(opts.get(b'common', set()))
        common.discard(repo.nullid)
        if (
            repo.ui.configbool(b'server', b'pullbundle')
            and b'partial-pull' in proto.getprotocaps()
        ):
            # Check if a pre-built bundle covers this request.
            bundle = find_pullbundle(repo, proto, opts, clheads, heads, common)
            if bundle:
                return wireprototypes.streamres(
                    gen=util.filechunkiter(bundle), prefer_uncompressed=True
                )

        if repo.ui.configbool(b'server', b'disablefullbundle'):
            # Check to see if this is a full clone.
            changegroup = opts.get(b'cg', True)
            if changegroup and not common and clheads == heads:
                raise error.Abort(
                    _(b'server has pull-based clones disabled'),
                    hint=_(b'remove --pull if specified or upgrade Mercurial'),
                )

        info, chunks = exchange.getbundlechunks(
            repo, b'serve', **pycompat.strkwargs(opts)
        )
        prefercompressed = info.get(b'prefercompressed', True)
    except error.Abort as exc:
        # cleanly forward Abort error to the client
        if not exchange.bundle2requested(opts.get(b'bundlecaps')):
            if proto.name == b'http-v1':
                return wireprototypes.ooberror(exc.message + b'\n')
            raise  # cannot do better for bundle1 + ssh
        # a bundle2 request expects a bundle2 reply
        bundler = bundle2.bundle20(repo.ui)
        manargs = [(b'message', exc.message)]
        advargs = []
        if exc.hint is not None:
            advargs.append((b'hint', exc.hint))
        bundler.addpart(bundle2.bundlepart(b'error:abort', manargs, advargs))
        chunks = bundler.getchunks()
        prefercompressed = False

    return wireprototypes.streamres(
        gen=chunks, prefer_uncompressed=not prefercompressed
    )

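The boolean branch in `getbundle`'s option decoding exists because every wire argument arrives as a bytestring, so `b'0'` would otherwise be truthy. A minimal sketch of that decoding rule:

```python
def decode_boolean(v):
    # Wire values are bytestrings; b'0' is non-empty and would be truthy,
    # so False needs an explicit spelling before falling back to bool().
    if v == b'0':
        return False
    return bool(v)

false_wire = decode_boolean(b'0')  # explicit False
true_wire = decode_boolean(b'1')   # any other non-empty value is True
empty_wire = decode_boolean(b'')   # missing/empty still decodes as False
```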
@wireprotocommand(b'heads', permission=b'pull')
def heads(repo, proto):
    h = repo.heads()
    return wireprototypes.bytesresponse(wireprototypes.encodelist(h) + b'\n')


@wireprotocommand(b'hello', permission=b'pull')
def hello(repo, proto):
    """Called as part of SSH handshake to obtain server info.

    Returns a list of lines describing interesting things about the
    server, in an RFC822-like format.

    Currently, the only one defined is ``capabilities``, which consists of a
    line of space separated tokens describing server abilities:

        capabilities: <token0> <token1> <token2>
    """
    caps = capabilities(repo, proto).data
    return wireprototypes.bytesresponse(b'capabilities: %s\n' % caps)

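On the other end of the handshake, a client splits the `hello` reply on the field separator and tokenizes the capabilities line. A client-side sketch; `parse_hello` is a hypothetical helper, not part of Mercurial's API:

```python
def parse_hello(data):
    # RFC822-ish payload: one "key: value" field per line
    fields = {}
    for line in data.splitlines():
        key, _, value = line.partition(b': ')
        fields[key] = value
    return fields

reply = b'capabilities: lookup branchmap pushkey'
caps = set(parse_hello(reply)[b'capabilities'].split(b' '))
# caps == {b'lookup', b'branchmap', b'pushkey'}
```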
@wireprotocommand(b'listkeys', b'namespace', permission=b'pull')
def listkeys(repo, proto, namespace):
    d = sorted(repo.listkeys(encoding.tolocal(namespace)).items())
    return wireprototypes.bytesresponse(pushkeymod.encodekeys(d))


@wireprotocommand(b'lookup', b'key', permission=b'pull')
def lookup(repo, proto, key):
    try:
        k = encoding.tolocal(key)
        n = repo.lookup(k)
        r = hex(n)
        success = 1
    except Exception as inst:
        r = stringutil.forcebytestr(inst)
        success = 0
    return wireprototypes.bytesresponse(b'%d %s\n' % (success, r))


@wireprotocommand(b'known', b'nodes *', permission=b'pull')
def known(repo, proto, nodes, others):
    v = b''.join(
        b and b'1' or b'0' for b in repo.known(wireprototypes.decodelist(nodes))
    )
    return wireprototypes.bytesresponse(v)

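The `known` reply is simply one `b'0'`/`b'1'` character per queried node, in query order. A sketch of encoding and decoding that shape (helper names are illustrative, not Mercurial API):

```python
def encode_known(flags):
    # list of bools -> compact b'0'/b'1' bytestring, the command's reply shape
    return b''.join(b'1' if f else b'0' for f in flags)

def decode_known(v):
    # iterating a bytestring yields integer byte values, hence b'1'[0]
    return [ch == b'1'[0] for ch in v]

wire = encode_known([True, False, True])  # -> b'101'
roundtrip = decode_known(wire)
```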
@wireprotocommand(b'protocaps', b'caps', permission=b'pull')
def protocaps(repo, proto, caps):
    if proto.name == wireprototypes.SSHV1:
        proto._protocaps = set(caps.split(b' '))
    return wireprototypes.bytesresponse(b'OK')


@wireprotocommand(b'pushkey', b'namespace key old new', permission=b'push')
def pushkey(repo, proto, namespace, key, old, new):
    # compatibility with pre-1.8 clients which were accidentally
    # sending raw binary nodes rather than utf-8-encoded hex
    if len(new) == 20 and stringutil.escapestr(new) != new:
        # looks like it could be a binary node
        try:
            new.decode('utf-8')
            new = encoding.tolocal(new)  # but cleanly decodes as UTF-8
        except UnicodeDecodeError:
            pass  # binary, leave unmodified
    else:
        new = encoding.tolocal(new)  # normal path

    with proto.mayberedirectstdio() as output:
        r = (
            repo.pushkey(
                encoding.tolocal(namespace),
                encoding.tolocal(key),
                encoding.tolocal(old),
                new,
            )
            or False
        )

    output = output.getvalue() if output else b''
    return wireprototypes.bytesresponse(b'%d\n%s' % (int(r), output))

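The pre-1.8 compatibility check in `pushkey` distinguishes raw 20-byte binary nodes from text values. A rough standalone approximation, using the stdlib `unicode_escape` codec in place of `stringutil.escapestr` (an assumption: the real helper escapes bytes somewhat differently, but the decision logic is the same):

```python
def looks_binary_node(new):
    # pre-1.8 clients sent raw 20-byte binary nodes; modern clients send
    # 40-char hex strings, which are never 20 bytes of unprintable data
    if len(new) != 20:
        return False
    # crude stand-in for stringutil.escapestr: escaping changes the value
    # only when it contains unprintable bytes
    escaped = new.decode('latin-1').encode('unicode_escape')
    if escaped == new:
        return False  # all printable, treat as text
    try:
        new.decode('utf-8')
        return False  # cleanly decodes as UTF-8, keep as text
    except UnicodeDecodeError:
        return True   # genuinely binary, leave unmodified upstream

binnode = bytes(range(236, 256))    # 20 high bytes: looks binary
hexnode = b'0123456789abcdefghij'   # 20 printable bytes: text
is_binary = looks_binary_node(binnode)
is_text = looks_binary_node(hexnode)
```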
651 | @wireprotocommand(b'stream_out', permission=b'pull') |
|
660 | @wireprotocommand(b'stream_out', permission=b'pull') | |
652 | def stream(repo, proto): |
|
661 | def stream(repo, proto): | |
653 | """If the server supports streaming clone, it advertises the "stream" |
|
662 | """If the server supports streaming clone, it advertises the "stream" | |
654 | capability with a value representing the version and flags of the repo |
|
663 | capability with a value representing the version and flags of the repo | |
655 | it is serving. Client checks to see if it understands the format. |
|
664 | it is serving. Client checks to see if it understands the format. | |
656 | """ |
|
665 | """ | |
657 | return wireprototypes.streamreslegacy(streamclone.generatev1wireproto(repo)) |
|
666 | return wireprototypes.streamreslegacy(streamclone.generatev1wireproto(repo)) | |
658 |
|
667 | |||
659 |
|
668 | |||
660 | @wireprotocommand(b'unbundle', b'heads', permission=b'push') |
|
669 | @wireprotocommand(b'unbundle', b'heads', permission=b'push') | |
661 | def unbundle(repo, proto, heads): |
|
670 | def unbundle(repo, proto, heads): | |
662 | their_heads = wireprototypes.decodelist(heads) |
|
671 | their_heads = wireprototypes.decodelist(heads) | |
663 |
|
672 | |||
664 | with proto.mayberedirectstdio() as output: |
|
673 | with proto.mayberedirectstdio() as output: | |
665 | try: |
|
674 | try: | |
666 | exchange.check_heads(repo, their_heads, b'preparing changes') |
|
675 | exchange.check_heads(repo, their_heads, b'preparing changes') | |
667 | cleanup = lambda: None |
|
676 | cleanup = lambda: None | |
668 | try: |
|
677 | try: | |
669 | payload = proto.getpayload() |
|
678 | payload = proto.getpayload() | |
670 | if repo.ui.configbool(b'server', b'streamunbundle'): |
|
679 | if repo.ui.configbool(b'server', b'streamunbundle'): | |
671 |
|
680 | |||
672 | def cleanup(): |
|
681 | def cleanup(): | |
673 | # Ensure that the full payload is consumed, so |
|
682 | # Ensure that the full payload is consumed, so | |
674 | # that the connection doesn't contain trailing garbage. |
|
683 | # that the connection doesn't contain trailing garbage. | |
675 | for p in payload: |
|
684 | for p in payload: | |
676 | pass |
|
685 | pass | |
677 |
|
686 | |||
678 | fp = util.chunkbuffer(payload) |
|
687 | fp = util.chunkbuffer(payload) | |
679 | else: |
|
688 | else: | |
680 | # write bundle data to temporary file as it can be big |
|
689 | # write bundle data to temporary file as it can be big | |
681 | fp, tempname = None, None |
|
690 | fp, tempname = None, None | |
682 |
|
691 | |||
683 | def cleanup(): |
|
692 | def cleanup(): | |
684 | if fp: |
|
693 | if fp: | |
685 | fp.close() |
|
694 | fp.close() | |
686 | if tempname: |
|
695 | if tempname: | |
687 | os.unlink(tempname) |
|
696 | os.unlink(tempname) | |
688 |
|
697 | |||
689 | fd, tempname = pycompat.mkstemp(prefix=b'hg-unbundle-') |
|
698 | fd, tempname = pycompat.mkstemp(prefix=b'hg-unbundle-') | |
690 | repo.ui.debug( |
|
699 | repo.ui.debug( | |
691 | b'redirecting incoming bundle to %s\n' % tempname |
|
700 | b'redirecting incoming bundle to %s\n' % tempname | |
692 | ) |
|
701 | ) | |
693 | fp = os.fdopen(fd, pycompat.sysstr(b'wb+')) |
|
702 | fp = os.fdopen(fd, pycompat.sysstr(b'wb+')) | |
694 | for p in payload: |
|
703 | for p in payload: | |
695 | fp.write(p) |
|
704 | fp.write(p) | |
696 | fp.seek(0) |
|
705 | fp.seek(0) | |
697 |
|
706 | |||
698 | gen = exchange.readbundle(repo.ui, fp, None) |
|
707 | gen = exchange.readbundle(repo.ui, fp, None) | |
699 | if isinstance( |
|
708 | if isinstance( | |
700 | gen, changegroupmod.cg1unpacker |
|
709 | gen, changegroupmod.cg1unpacker | |
701 | ) and not bundle1allowed(repo, b'push'): |
|
710 | ) and not bundle1allowed(repo, b'push'): | |
702 | if proto.name == b'http-v1': |
|
711 | if proto.name == b'http-v1': | |
703 | # need to special case http because stderr do not get to |
|
712 | # need to special case http because stderr do not get to | |
704 | # the http client on failed push so we need to abuse |
|
713 | # the http client on failed push so we need to abuse | |
705 | # some other error type to make sure the message get to |
|
714 | # some other error type to make sure the message get to | |
706 | # the user. |
|
715 | # the user. | |
707 | return wireprototypes.ooberror(bundle2required) |
|
716 | return wireprototypes.ooberror(bundle2required) | |
708 | raise error.Abort( |
|
717 | raise error.Abort( | |
709 | bundle2requiredmain, hint=bundle2requiredhint |
|
718 | bundle2requiredmain, hint=bundle2requiredhint | |
710 | ) |
|
719 | ) | |
711 |
|
720 | |||
712 | r = exchange.unbundle( |
|
721 | r = exchange.unbundle( | |
713 | repo, gen, their_heads, b'serve', proto.client() |
|
722 | repo, gen, their_heads, b'serve', proto.client() | |
714 | ) |
|
723 | ) | |
715 | if util.safehasattr(r, 'addpart'): |
|
724 | if util.safehasattr(r, 'addpart'): | |
716 | # The return looks streamable, we are in the bundle2 case |
|
725 | # The return looks streamable, we are in the bundle2 case | |
717 | # and should return a stream. |
|
726 | # and should return a stream. | |
718 | return wireprototypes.streamreslegacy(gen=r.getchunks()) |
|
727 | return wireprototypes.streamreslegacy(gen=r.getchunks()) | |
719 | return wireprototypes.pushres( |
|
728 | return wireprototypes.pushres( | |
720 | r, output.getvalue() if output else b'' |
|
729 | r, output.getvalue() if output else b'' | |
721 | ) |
|
730 | ) | |
722 |
|
731 | |||
723 | finally: |
|
732 | finally: | |
724 | cleanup() |
|
733 | cleanup() | |
725 |
|
734 | |||
726 | except (error.BundleValueError, error.Abort, error.PushRaced) as exc: |
|
735 | except (error.BundleValueError, error.Abort, error.PushRaced) as exc: | |
727 | # handle non-bundle2 case first |
|
736 | # handle non-bundle2 case first | |
728 | if not getattr(exc, 'duringunbundle2', False): |
|
737 | if not getattr(exc, 'duringunbundle2', False): | |
729 | try: |
|
738 | try: | |
730 | raise |
|
739 | raise | |
731 | except error.Abort as exc: |
|
740 | except error.Abort as exc: | |
732 | # The old code we moved used procutil.stderr directly. |
|
741 | # The old code we moved used procutil.stderr directly. | |
733 | # We did not change it to minimise code change. |
|
742 | # We did not change it to minimise code change. | |
734 | # This need to be moved to something proper. |
|
743 | # This need to be moved to something proper. | |
735 | # Feel free to do it. |
|
744 | # Feel free to do it. | |
736 | procutil.stderr.write(exc.format()) |
|
745 | procutil.stderr.write(exc.format()) | |
737 | procutil.stderr.flush() |
|
746 | procutil.stderr.flush() | |
738 | return wireprototypes.pushres( |
|
747 | return wireprototypes.pushres( | |
739 | 0, output.getvalue() if output else b'' |
|
748 | 0, output.getvalue() if output else b'' | |
740 | ) |
|
749 | ) | |
741 | except error.PushRaced: |
|
750 | except error.PushRaced: | |
742 | return wireprototypes.pusherr( |
|
751 | return wireprototypes.pusherr( | |
743 | pycompat.bytestr(exc), |
|
752 | pycompat.bytestr(exc), | |
744 | output.getvalue() if output else b'', |
|
753 | output.getvalue() if output else b'', | |
745 | ) |
|
754 | ) | |
746 |
|
755 | |||
747 | bundler = bundle2.bundle20(repo.ui) |
|
756 | bundler = bundle2.bundle20(repo.ui) | |
748 | for out in getattr(exc, '_bundle2salvagedoutput', ()): |
|
757 | for out in getattr(exc, '_bundle2salvagedoutput', ()): | |
749 | bundler.addpart(out) |
|
758 | bundler.addpart(out) | |
750 | try: |
|
759 | try: | |
751 | try: |
|
760 | try: | |
752 | raise |
|
761 | raise | |
753 | except error.PushkeyFailed as exc: |
|
762 | except error.PushkeyFailed as exc: | |
754 | # check client caps |
|
763 | # check client caps | |
755 | remotecaps = getattr(exc, '_replycaps', None) |
|
764 | remotecaps = getattr(exc, '_replycaps', None) | |
756 | if ( |
|
765 | if ( | |
757 | remotecaps is not None |
|
766 | remotecaps is not None | |
758 | and b'pushkey' not in remotecaps.get(b'error', ()) |
|
767 | and b'pushkey' not in remotecaps.get(b'error', ()) | |
759 | ): |
|
768 | ): | |
760 | # no support remote side, fallback to Abort handler. |
|
769 | # no support remote side, fallback to Abort handler. | |
761 | raise |
|
770 | raise | |
762 | part = bundler.newpart(b'error:pushkey') |
|
771 | part = bundler.newpart(b'error:pushkey') | |
763 | part.addparam(b'in-reply-to', exc.partid) |
|
772 | part.addparam(b'in-reply-to', exc.partid) | |
764 | if exc.namespace is not None: |
|
773 | if exc.namespace is not None: | |
765 | part.addparam( |
|
774 | part.addparam( | |
766 | b'namespace', exc.namespace, mandatory=False |
|
775 | b'namespace', exc.namespace, mandatory=False | |
767 | ) |
|
776 | ) | |
768 | if exc.key is not None: |
|
777 | if exc.key is not None: | |
769 | part.addparam(b'key', exc.key, mandatory=False) |
|
778 | part.addparam(b'key', exc.key, mandatory=False) | |
770 | if exc.new is not None: |
|
779 | if exc.new is not None: | |
771 | part.addparam(b'new', exc.new, mandatory=False) |
|
780 | part.addparam(b'new', exc.new, mandatory=False) | |
772 | if exc.old is not None: |
|
781 | if exc.old is not None: | |
773 | part.addparam(b'old', exc.old, mandatory=False) |
|
782 | part.addparam(b'old', exc.old, mandatory=False) | |
774 | if exc.ret is not None: |
|
783 | if exc.ret is not None: | |
775 | part.addparam(b'ret', exc.ret, mandatory=False) |
|
784 | part.addparam(b'ret', exc.ret, mandatory=False) | |
776 | except error.BundleValueError as exc: |
|
785 | except error.BundleValueError as exc: | |
777 | errpart = bundler.newpart(b'error:unsupportedcontent') |
|
786 | errpart = bundler.newpart(b'error:unsupportedcontent') | |
778 | if exc.parttype is not None: |
|
787 | if exc.parttype is not None: | |
779 | errpart.addparam(b'parttype', exc.parttype) |
|
788 | errpart.addparam(b'parttype', exc.parttype) | |
780 | if exc.params: |
|
789 | if exc.params: | |
781 | errpart.addparam(b'params', b'\0'.join(exc.params)) |
|
790 | errpart.addparam(b'params', b'\0'.join(exc.params)) | |
782 | except error.Abort as exc: |
|
791 | except error.Abort as exc: | |
783 | manargs = [(b'message', exc.message)] |
|
792 | manargs = [(b'message', exc.message)] | |
784 | advargs = [] |
|
793 | advargs = [] | |
785 | if exc.hint is not None: |
|
794 | if exc.hint is not None: | |
786 | advargs.append((b'hint', exc.hint)) |
|
795 | advargs.append((b'hint', exc.hint)) | |
787 | bundler.addpart( |
|
796 | bundler.addpart( | |
788 | bundle2.bundlepart(b'error:abort', manargs, advargs) |
|
797 | bundle2.bundlepart(b'error:abort', manargs, advargs) | |
789 | ) |
|
798 | ) | |
790 | except error.PushRaced as exc: |
|
799 | except error.PushRaced as exc: | |
791 | bundler.newpart( |
|
800 | bundler.newpart( | |
792 | b'error:pushraced', |
|
801 | b'error:pushraced', | |
793 | [(b'message', stringutil.forcebytestr(exc))], |
|
802 | [(b'message', stringutil.forcebytestr(exc))], | |
794 | ) |
|
803 | ) | |
795 | return wireprototypes.streamreslegacy(gen=bundler.getchunks()) |
|
804 | return wireprototypes.streamreslegacy(gen=bundler.getchunks()) |
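The except clauses above all follow one pattern: catch a server-side exception, translate it into a bundle2 error part (`error:pushkey`, `error:unsupportedcontent`, `error:abort`, `error:pushraced`) with the exception's details attached as part parameters, and stream the resulting bundle back to the client. A minimal sketch of that mapping, using simplified stand-in `Part`/`Bundler` classes rather than Mercurial's real bundle2 objects:

```python
# Sketch of the exception -> bundle2 error-part mapping shown above.
# Part, Bundler and Abort are simplified stand-ins, not Mercurial's classes.

class Part:
    def __init__(self, parttype):
        self.parttype = parttype
        self.params = {}

    def addparam(self, key, value, mandatory=True):
        self.params[key] = (value, mandatory)

class Bundler:
    def __init__(self):
        self.parts = []

    def newpart(self, parttype):
        part = Part(parttype)
        self.parts.append(part)
        return part

class Abort(Exception):
    def __init__(self, message, hint=None):
        super().__init__(message)
        self.message = message
        self.hint = hint

def reply_for_exception(exc):
    """Build a one-part error reply, mirroring the pattern above."""
    bundler = Bundler()
    if isinstance(exc, Abort):
        # error:abort carries the message, plus an advisory hint if present
        part = bundler.newpart(b'error:abort')
        part.addparam(b'message', exc.message)
        if exc.hint is not None:
            part.addparam(b'hint', exc.hint, mandatory=False)
    else:
        # anything else is reported as a push race in this sketch
        part = bundler.newpart(b'error:pushraced')
        part.addparam(b'message', str(exc).encode())
    return bundler

reply = reply_for_exception(Abort(b'push failed', hint=b'try pulling first'))
print(reply.parts[0].parttype)  # b'error:abort'
```

The key property, visible in the real code above as well, is that the reply is still a well-formed bundle: the error travels as an ordinary part, so old and new clients alike can decode it instead of seeing a broken stream.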
@@ -1,838 +1,837 b'' | |||||
1 | #require no-reposimplestore no-chg |
|
1 | #require no-reposimplestore no-chg | |
2 |
|
2 | |||
3 | Set up a server |
|
3 | Set up a server | |
4 |
|
4 | |||
5 | $ hg init server |
|
5 | $ hg init server | |
6 | $ cd server |
|
6 | $ cd server | |
7 | $ cat >> .hg/hgrc << EOF |
|
7 | $ cat >> .hg/hgrc << EOF | |
8 | > [extensions] |
|
8 | > [extensions] | |
9 | > clonebundles = |
|
9 | > clonebundles = | |
10 | > EOF |
|
10 | > EOF | |
11 |
|
11 | |||
12 | $ touch foo |
|
12 | $ touch foo | |
13 | $ hg -q commit -A -m 'add foo' |
|
13 | $ hg -q commit -A -m 'add foo' | |
14 | $ touch bar |
|
14 | $ touch bar | |
15 | $ hg -q commit -A -m 'add bar' |
|
15 | $ hg -q commit -A -m 'add bar' | |
16 |
|
16 | |||
17 | $ hg serve -d -p $HGPORT --pid-file hg.pid --accesslog access.log |
|
17 | $ hg serve -d -p $HGPORT --pid-file hg.pid --accesslog access.log | |
18 | $ cat hg.pid >> $DAEMON_PIDS |
|
18 | $ cat hg.pid >> $DAEMON_PIDS | |
19 | $ cd .. |
|
19 | $ cd .. | |
20 |
|
20 | |||
21 | Missing manifest should not result in server lookup |
|
21 | Missing manifest should not result in server lookup | |
22 |
|
22 | |||
23 | $ hg --verbose clone -U http://localhost:$HGPORT no-manifest |
|
23 | $ hg --verbose clone -U http://localhost:$HGPORT no-manifest | |
24 | requesting all changes |
|
24 | requesting all changes | |
25 | adding changesets |
|
25 | adding changesets | |
26 | adding manifests |
|
26 | adding manifests | |
27 | adding file changes |
|
27 | adding file changes | |
28 | added 2 changesets with 2 changes to 2 files |
|
28 | added 2 changesets with 2 changes to 2 files | |
29 | new changesets 53245c60e682:aaff8d2ffbbf |
|
29 | new changesets 53245c60e682:aaff8d2ffbbf | |
30 | (sent 3 HTTP requests and * bytes; received * bytes in responses) (glob) |
|
30 | (sent 3 HTTP requests and * bytes; received * bytes in responses) (glob) | |
31 |
|
31 | |||
32 | $ cat server/access.log |
|
32 | $ cat server/access.log | |
33 | * - - [*] "GET /?cmd=capabilities HTTP/1.1" 200 - (glob) |
|
33 | * - - [*] "GET /?cmd=capabilities HTTP/1.1" 200 - (glob) | |
34 | $LOCALIP - - [$LOGDATE$] "GET /?cmd=batch HTTP/1.1" 200 - x-hgarg-1:cmds=heads+%3Bknown+nodes%3D x-hgproto-1:0.1 0.2 comp=$USUAL_COMPRESSIONS$ partial-pull (glob) |
|
34 | $LOCALIP - - [$LOGDATE$] "GET /?cmd=batch HTTP/1.1" 200 - x-hgarg-1:cmds=heads+%3Bknown+nodes%3D x-hgproto-1:0.1 0.2 comp=$USUAL_COMPRESSIONS$ partial-pull (glob) | |
35 | $LOCALIP - - [$LOGDATE$] "GET /?cmd=getbundle HTTP/1.1" 200 - x-hgarg-1:bookmarks=1&$USUAL_BUNDLE_CAPS$&cg=1&common=0000000000000000000000000000000000000000&heads=aaff8d2ffbbf07a46dd1f05d8ae7877e3f56e2a2&listkeys=bookmarks&phases=1 x-hgproto-1:0.1 0.2 comp=$USUAL_COMPRESSIONS$ partial-pull (glob) |
|
35 | $LOCALIP - - [$LOGDATE$] "GET /?cmd=getbundle HTTP/1.1" 200 - x-hgarg-1:bookmarks=1&$USUAL_BUNDLE_CAPS$&cg=1&common=0000000000000000000000000000000000000000&heads=aaff8d2ffbbf07a46dd1f05d8ae7877e3f56e2a2&listkeys=bookmarks&phases=1 x-hgproto-1:0.1 0.2 comp=$USUAL_COMPRESSIONS$ partial-pull (glob) | |
36 |
|
36 | |||
37 | Empty manifest file results in retrieval |
|
37 | Empty manifest file results in retrieval | |
38 | (the extension only checks if the manifest file exists) |
|
38 | (the extension only checks if the manifest file exists) | |
39 |
|
39 | |||
40 | $ touch server/.hg/clonebundles.manifest |
|
40 | $ touch server/.hg/clonebundles.manifest | |
41 | $ hg --verbose clone -U http://localhost:$HGPORT empty-manifest |
|
41 | $ hg --verbose clone -U http://localhost:$HGPORT empty-manifest | |
42 | no clone bundles available on remote; falling back to regular clone |
|
42 | no clone bundles available on remote; falling back to regular clone | |
43 | requesting all changes |
|
43 | requesting all changes | |
44 | adding changesets |
|
44 | adding changesets | |
45 | adding manifests |
|
45 | adding manifests | |
46 | adding file changes |
|
46 | adding file changes | |
47 | added 2 changesets with 2 changes to 2 files |
|
47 | added 2 changesets with 2 changes to 2 files | |
48 | new changesets 53245c60e682:aaff8d2ffbbf |
|
48 | new changesets 53245c60e682:aaff8d2ffbbf | |
49 | (sent 4 HTTP requests and * bytes; received * bytes in responses) (glob) |
|
49 | (sent 4 HTTP requests and * bytes; received * bytes in responses) (glob) | |
50 |
|
50 | |||
51 | Manifest file with invalid URL aborts |
|
51 | Manifest file with invalid URL aborts | |
52 |
|
52 | |||
53 | $ echo 'http://does.not.exist/bundle.hg' > server/.hg/clonebundles.manifest |
|
53 | $ echo 'http://does.not.exist/bundle.hg' > server/.hg/clonebundles.manifest | |
54 | $ hg clone http://localhost:$HGPORT 404-url |
|
54 | $ hg clone http://localhost:$HGPORT 404-url | |
55 | applying clone bundle from http://does.not.exist/bundle.hg |
|
55 | applying clone bundle from http://does.not.exist/bundle.hg | |
56 | error fetching bundle: (.* not known|(\[Errno -?\d+] )?([Nn]o address associated with (host)?name|Temporary failure in name resolution|Name does not resolve)) (re) (no-windows !) |
|
56 | error fetching bundle: (.* not known|(\[Errno -?\d+] )?([Nn]o address associated with (host)?name|Temporary failure in name resolution|Name does not resolve)) (re) (no-windows !) | |
57 | error fetching bundle: [Errno 1100*] getaddrinfo failed (glob) (windows !) |
|
57 | error fetching bundle: [Errno 1100*] getaddrinfo failed (glob) (windows !) | |
58 | abort: error applying bundle |
|
58 | abort: error applying bundle | |
59 | (if this error persists, consider contacting the server operator or disable clone bundles via "--config ui.clonebundles=false") |
|
59 | (if this error persists, consider contacting the server operator or disable clone bundles via "--config ui.clonebundles=false") | |
60 | [255] |
|
60 | [255] | |
61 |
|
61 | |||
62 | Manifest file with URL with unknown scheme skips the URL |
|
62 | Manifest file with URL with unknown scheme skips the URL | |
63 | $ echo 'weirdscheme://does.not.exist/bundle.hg' > server/.hg/clonebundles.manifest |
|
63 | $ echo 'weirdscheme://does.not.exist/bundle.hg' > server/.hg/clonebundles.manifest | |
64 | $ hg clone http://localhost:$HGPORT unknown-scheme |
|
64 | $ hg clone http://localhost:$HGPORT unknown-scheme | |
65 | no compatible clone bundles available on server; falling back to regular clone |
|
65 | no compatible clone bundles available on server; falling back to regular clone | |
66 | (you may want to report this to the server operator) |
|
66 | (you may want to report this to the server operator) | |
67 | requesting all changes |
|
67 | requesting all changes | |
68 | adding changesets |
|
68 | adding changesets | |
69 | adding manifests |
|
69 | adding manifests | |
70 | adding file changes |
|
70 | adding file changes | |
71 | added 2 changesets with 2 changes to 2 files |
|
71 | added 2 changesets with 2 changes to 2 files | |
72 | new changesets 53245c60e682:aaff8d2ffbbf |
|
72 | new changesets 53245c60e682:aaff8d2ffbbf | |
73 | updating to branch default |
|
73 | updating to branch default | |
74 | 2 files updated, 0 files merged, 0 files removed, 0 files unresolved |
|
74 | 2 files updated, 0 files merged, 0 files removed, 0 files unresolved | |
75 |
|
75 | |||
76 | Server is not running aborts |
|
76 | Server is not running aborts | |
77 |
|
77 | |||
78 | $ echo "http://localhost:$HGPORT1/bundle.hg" > server/.hg/clonebundles.manifest |
|
78 | $ echo "http://localhost:$HGPORT1/bundle.hg" > server/.hg/clonebundles.manifest | |
79 | $ hg clone http://localhost:$HGPORT server-not-runner |
|
79 | $ hg clone http://localhost:$HGPORT server-not-runner | |
80 | applying clone bundle from http://localhost:$HGPORT1/bundle.hg |
|
80 | applying clone bundle from http://localhost:$HGPORT1/bundle.hg | |
81 | error fetching bundle: (.* refused.*|Protocol not supported|(.* )?\$EADDRNOTAVAIL\$|.* No route to host) (re) |
|
81 | error fetching bundle: (.* refused.*|Protocol not supported|(.* )?\$EADDRNOTAVAIL\$|.* No route to host) (re) | |
82 | abort: error applying bundle |
|
82 | abort: error applying bundle | |
83 | (if this error persists, consider contacting the server operator or disable clone bundles via "--config ui.clonebundles=false") |
|
83 | (if this error persists, consider contacting the server operator or disable clone bundles via "--config ui.clonebundles=false") | |
84 | [255] |
|
84 | [255] | |
85 |
|
85 | |||
86 | Server returns 404 |
|
86 | Server returns 404 | |
87 |
|
87 | |||
88 | $ "$PYTHON" $TESTDIR/dumbhttp.py -p $HGPORT1 --pid http.pid |
|
88 | $ "$PYTHON" $TESTDIR/dumbhttp.py -p $HGPORT1 --pid http.pid | |
89 | $ cat http.pid >> $DAEMON_PIDS |
|
89 | $ cat http.pid >> $DAEMON_PIDS | |
90 | $ hg clone http://localhost:$HGPORT running-404 |
|
90 | $ hg clone http://localhost:$HGPORT running-404 | |
91 | applying clone bundle from http://localhost:$HGPORT1/bundle.hg |
|
91 | applying clone bundle from http://localhost:$HGPORT1/bundle.hg | |
92 | HTTP error fetching bundle: HTTP Error 404: File not found |
|
92 | HTTP error fetching bundle: HTTP Error 404: File not found | |
93 | abort: error applying bundle |
|
93 | abort: error applying bundle | |
94 | (if this error persists, consider contacting the server operator or disable clone bundles via "--config ui.clonebundles=false") |
|
94 | (if this error persists, consider contacting the server operator or disable clone bundles via "--config ui.clonebundles=false") | |
95 | [255] |
|
95 | [255] | |
96 |
|
96 | |||
97 | We can override failure to fall back to regular clone |
|
97 | We can override failure to fall back to regular clone | |
98 |
|
98 | |||
99 | $ hg --config ui.clonebundlefallback=true clone -U http://localhost:$HGPORT 404-fallback |
|
99 | $ hg --config ui.clonebundlefallback=true clone -U http://localhost:$HGPORT 404-fallback | |
100 | applying clone bundle from http://localhost:$HGPORT1/bundle.hg |
|
100 | applying clone bundle from http://localhost:$HGPORT1/bundle.hg | |
101 | HTTP error fetching bundle: HTTP Error 404: File not found |
|
101 | HTTP error fetching bundle: HTTP Error 404: File not found | |
102 | falling back to normal clone |
|
102 | falling back to normal clone | |
103 | requesting all changes |
|
103 | requesting all changes | |
104 | adding changesets |
|
104 | adding changesets | |
105 | adding manifests |
|
105 | adding manifests | |
106 | adding file changes |
|
106 | adding file changes | |
107 | added 2 changesets with 2 changes to 2 files |
|
107 | added 2 changesets with 2 changes to 2 files | |
108 | new changesets 53245c60e682:aaff8d2ffbbf |
|
108 | new changesets 53245c60e682:aaff8d2ffbbf | |
109 |
|
109 | |||
110 | Bundle with partial content works |
|
110 | Bundle with partial content works | |
111 |
|
111 | |||
112 | $ hg -R server bundle --type gzip-v1 --base null -r 53245c60e682 partial.hg |
|
112 | $ hg -R server bundle --type gzip-v1 --base null -r 53245c60e682 partial.hg | |
113 | 1 changesets found |
|
113 | 1 changesets found | |
114 |
|
114 | |||
115 | We verify exact bundle content as an extra check against accidental future |
|
115 | We verify exact bundle content as an extra check against accidental future | |
116 | changes. If this output changes, we could break old clients. |
|
116 | changes. If this output changes, we could break old clients. | |
117 |
|
117 | |||
118 | $ f --size --hexdump partial.hg |
|
118 | $ f --size --hexdump partial.hg | |
119 | partial.hg: size=207 |
|
119 | partial.hg: size=207 | |
120 | 0000: 48 47 31 30 47 5a 78 9c 63 60 60 98 17 ac 12 93 |HG10GZx.c``.....| |
|
120 | 0000: 48 47 31 30 47 5a 78 9c 63 60 60 98 17 ac 12 93 |HG10GZx.c``.....| | |
121 | 0010: f0 ac a9 23 45 70 cb bf 0d 5f 59 4e 4a 7f 79 21 |...#Ep..._YNJ.y!| |
|
121 | 0010: f0 ac a9 23 45 70 cb bf 0d 5f 59 4e 4a 7f 79 21 |...#Ep..._YNJ.y!| | |
122 | 0020: 9b cc 40 24 20 a0 d7 ce 2c d1 38 25 cd 24 25 d5 |..@$ ...,.8%.$%.| |
|
122 | 0020: 9b cc 40 24 20 a0 d7 ce 2c d1 38 25 cd 24 25 d5 |..@$ ...,.8%.$%.| | |
123 | 0030: d8 c2 22 cd 38 d9 24 cd 22 d5 c8 22 cd 24 cd 32 |..".8.$."..".$.2| |
|
123 | 0030: d8 c2 22 cd 38 d9 24 cd 22 d5 c8 22 cd 24 cd 32 |..".8.$."..".$.2| | |
124 | 0040: d1 c2 d0 c4 c8 d2 32 d1 38 39 29 c9 34 cd d4 80 |......2.89).4...| |
|
124 | 0040: d1 c2 d0 c4 c8 d2 32 d1 38 39 29 c9 34 cd d4 80 |......2.89).4...| | |
125 | 0050: ab 24 b5 b8 84 cb 40 c1 80 2b 2d 3f 9f 8b 2b 31 |.$....@..+-?..+1| |
|
125 | 0050: ab 24 b5 b8 84 cb 40 c1 80 2b 2d 3f 9f 8b 2b 31 |.$....@..+-?..+1| | |
126 | 0060: 25 45 01 c8 80 9a d2 9b 65 fb e5 9e 45 bf 8d 7f |%E......e...E...| |
|
126 | 0060: 25 45 01 c8 80 9a d2 9b 65 fb e5 9e 45 bf 8d 7f |%E......e...E...| | |
127 | 0070: 9f c6 97 9f 2b 44 34 67 d9 ec 8e 0f a0 92 0b 75 |....+D4g.......u| |
|
127 | 0070: 9f c6 97 9f 2b 44 34 67 d9 ec 8e 0f a0 92 0b 75 |....+D4g.......u| | |
128 | 0080: 41 d6 24 59 18 a4 a4 9a a6 18 1a 5b 98 9b 5a 98 |A.$Y.......[..Z.| |
|
128 | 0080: 41 d6 24 59 18 a4 a4 9a a6 18 1a 5b 98 9b 5a 98 |A.$Y.......[..Z.| | |
129 | 0090: 9a 18 26 9b a6 19 98 1a 99 99 26 a6 18 9a 98 24 |..&.......&....$| |
|
129 | 0090: 9a 18 26 9b a6 19 98 1a 99 99 26 a6 18 9a 98 24 |..&.......&....$| | |
130 | 00a0: 26 59 a6 25 5a 98 a5 18 a6 24 71 41 35 b1 43 dc |&Y.%Z....$qA5.C.| |
|
130 | 00a0: 26 59 a6 25 5a 98 a5 18 a6 24 71 41 35 b1 43 dc |&Y.%Z....$qA5.C.| | |
131 | 00b0: 16 b2 83 f7 e9 45 8b d2 56 c7 a3 1f 82 52 d7 8a |.....E..V....R..| |
|
131 | 00b0: 16 b2 83 f7 e9 45 8b d2 56 c7 a3 1f 82 52 d7 8a |.....E..V....R..| | |
132 | 00c0: 78 ed fc d5 76 f1 36 35 dc 05 00 36 ed 5e c7 |x...v.65...6.^.| |
|
132 | 00c0: 78 ed fc d5 76 f1 36 35 dc 05 00 36 ed 5e c7 |x...v.65...6.^.| | |
133 |
|
133 | |||
134 | $ echo "http://localhost:$HGPORT1/partial.hg" > server/.hg/clonebundles.manifest |
|
134 | $ echo "http://localhost:$HGPORT1/partial.hg" > server/.hg/clonebundles.manifest | |
135 | $ hg clone -U http://localhost:$HGPORT partial-bundle |
|
135 | $ hg clone -U http://localhost:$HGPORT partial-bundle | |
136 | applying clone bundle from http://localhost:$HGPORT1/partial.hg |
|
136 | applying clone bundle from http://localhost:$HGPORT1/partial.hg | |
137 | adding changesets |
|
137 | adding changesets | |
138 | adding manifests |
|
138 | adding manifests | |
139 | adding file changes |
|
139 | adding file changes | |
140 | added 1 changesets with 1 changes to 1 files |
|
140 | added 1 changesets with 1 changes to 1 files | |
141 | finished applying clone bundle |
|
141 | finished applying clone bundle | |
142 | searching for changes |
|
142 | searching for changes | |
143 | adding changesets |
|
143 | adding changesets | |
144 | adding manifests |
|
144 | adding manifests | |
145 | adding file changes |
|
145 | adding file changes | |
146 | added 1 changesets with 1 changes to 1 files |
|
146 | added 1 changesets with 1 changes to 1 files | |
147 | new changesets aaff8d2ffbbf |
|
147 | new changesets aaff8d2ffbbf | |
148 | 1 local changesets published |
|
148 | 1 local changesets published | |
149 |
|
149 | |||
150 | Incremental pull doesn't fetch bundle |
|
150 | Incremental pull doesn't fetch bundle | |
151 |
|
151 | |||
152 | $ hg clone -r 53245c60e682 -U http://localhost:$HGPORT partial-clone |
|
152 | $ hg clone -r 53245c60e682 -U http://localhost:$HGPORT partial-clone | |
153 | adding changesets |
|
153 | adding changesets | |
154 | adding manifests |
|
154 | adding manifests | |
155 | adding file changes |
|
155 | adding file changes | |
156 | added 1 changesets with 1 changes to 1 files |
|
156 | added 1 changesets with 1 changes to 1 files | |
157 | new changesets 53245c60e682 |
|
157 | new changesets 53245c60e682 | |
158 |
|
158 | |||
159 | $ cd partial-clone |
|
159 | $ cd partial-clone | |
160 | $ hg pull |
|
160 | $ hg pull | |
161 | pulling from http://localhost:$HGPORT/ |
|
161 | pulling from http://localhost:$HGPORT/ | |
162 | searching for changes |
|
162 | searching for changes | |
163 | adding changesets |
|
163 | adding changesets | |
164 | adding manifests |
|
164 | adding manifests | |
165 | adding file changes |
|
165 | adding file changes | |
166 | added 1 changesets with 1 changes to 1 files |
|
166 | added 1 changesets with 1 changes to 1 files | |
167 | new changesets aaff8d2ffbbf |
|
167 | new changesets aaff8d2ffbbf | |
168 | (run 'hg update' to get a working copy) |
|
168 | (run 'hg update' to get a working copy) | |
169 | $ cd .. |
|
169 | $ cd .. | |
170 |
|
170 | |||
171 | Bundle with full content works |
|
171 | Bundle with full content works | |
172 |
|
172 | |||
173 | $ hg -R server bundle --type gzip-v2 --base null -r tip full.hg |
|
173 | $ hg -R server bundle --type gzip-v2 --base null -r tip full.hg | |
174 | 2 changesets found |
|
174 | 2 changesets found | |
175 |
|
175 | |||
176 | Again, we perform an extra check against bundle content changes. If this content |
|
176 | Again, we perform an extra check against bundle content changes. If this content | |
177 | changes, clone bundles produced by new Mercurial versions may not be readable |
|
177 | changes, clone bundles produced by new Mercurial versions may not be readable | |
178 | by old clients. |
|
178 | by old clients. | |
179 |
|
179 | |||
180 | $ f --size --hexdump full.hg |
|
180 | $ f --size --hexdump full.hg | |
181 | full.hg: size=442 |
|
181 | full.hg: size=442 | |
182 | 0000: 48 47 32 30 00 00 00 0e 43 6f 6d 70 72 65 73 73 |HG20....Compress| |
|
182 | 0000: 48 47 32 30 00 00 00 0e 43 6f 6d 70 72 65 73 73 |HG20....Compress| | |
183 | 0010: 69 6f 6e 3d 47 5a 78 9c 63 60 60 d0 e4 76 f6 70 |ion=GZx.c``..v.p| |
|
183 | 0010: 69 6f 6e 3d 47 5a 78 9c 63 60 60 d0 e4 76 f6 70 |ion=GZx.c``..v.p| | |
184 | 0020: f4 73 77 75 0f f2 0f 0d 60 00 02 46 46 76 26 4e |.swu....`..FFv&N| |
|
184 | 0020: f4 73 77 75 0f f2 0f 0d 60 00 02 46 46 76 26 4e |.swu....`..FFv&N| | |
185 | 0030: c6 b2 d4 a2 e2 cc fc 3c 03 a3 bc a4 e4 8c c4 bc |.......<........| |
|
185 | 0030: c6 b2 d4 a2 e2 cc fc 3c 03 a3 bc a4 e4 8c c4 bc |.......<........| | |
186 | 0040: f4 d4 62 23 06 06 e6 19 40 f9 4d c1 2a 31 09 cf |..b#....@.M.*1..| |
|
186 | 0040: f4 d4 62 23 06 06 e6 19 40 f9 4d c1 2a 31 09 cf |..b#....@.M.*1..| | |
187 | 0050: 9a 3a 52 04 b7 fc db f0 95 e5 a4 f4 97 17 b2 c9 |.:R.............| |
|
187 | 0050: 9a 3a 52 04 b7 fc db f0 95 e5 a4 f4 97 17 b2 c9 |.:R.............| | |
188 | 0060: 0c 14 00 02 e6 d9 99 25 1a a7 a4 99 a4 a4 1a 5b |.......%.......[| |
|
188 | 0060: 0c 14 00 02 e6 d9 99 25 1a a7 a4 99 a4 a4 1a 5b |.......%.......[| | |
189 | 0070: 58 a4 19 27 9b a4 59 a4 1a 59 a4 99 a4 59 26 5a |X..'..Y..Y...Y&Z| |
|
189 | 0070: 58 a4 19 27 9b a4 59 a4 1a 59 a4 99 a4 59 26 5a |X..'..Y..Y...Y&Z| | |
190 | 0080: 18 9a 18 59 5a 26 1a 27 27 25 99 a6 99 1a 70 95 |...YZ&.''%....p.| |
|
190 | 0080: 18 9a 18 59 5a 26 1a 27 27 25 99 a6 99 1a 70 95 |...YZ&.''%....p.| | |
191 | 0090: a4 16 97 70 19 28 18 70 a5 e5 e7 73 71 25 a6 a4 |...p.(.p...sq%..| |
|
191 | 0090: a4 16 97 70 19 28 18 70 a5 e5 e7 73 71 25 a6 a4 |...p.(.p...sq%..| | |
192 | 00a0: 28 00 19 20 17 af fa df ab ff 7b 3f fb 92 dc 8b |(.. ......{?....| |
|
192 | 00a0: 28 00 19 20 17 af fa df ab ff 7b 3f fb 92 dc 8b |(.. ......{?....| | |
193 | 00b0: 1f 62 bb 9e b7 d7 d9 87 3d 5a 44 89 2f b0 99 87 |.b......=ZD./...| |
|
193 | 00b0: 1f 62 bb 9e b7 d7 d9 87 3d 5a 44 89 2f b0 99 87 |.b......=ZD./...| | |
194 | 00c0: ec e2 54 63 43 e3 b4 64 43 73 23 33 43 53 0b 63 |..TcC..dCs#3CS.c| |
|
194 | 00c0: ec e2 54 63 43 e3 b4 64 43 73 23 33 43 53 0b 63 |..TcC..dCs#3CS.c| | |
195 | 00d0: d3 14 23 03 a0 fb 2c 2c 0c d3 80 1e 30 49 49 b1 |..#...,,....0II.| |
|
195 | 00d0: d3 14 23 03 a0 fb 2c 2c 0c d3 80 1e 30 49 49 b1 |..#...,,....0II.| | |
196 | 00e0: 4c 4a 32 48 33 30 b0 34 42 b8 38 29 b1 08 e2 62 |LJ2H30.4B.8)...b| |
|
196 | 00e0: 4c 4a 32 48 33 30 b0 34 42 b8 38 29 b1 08 e2 62 |LJ2H30.4B.8)...b| | |
197 | 00f0: 20 03 6a ca c2 2c db 2f f7 2c fa 6d fc fb 34 be | .j..,./.,.m..4.| |
|
197 | 00f0: 20 03 6a ca c2 2c db 2f f7 2c fa 6d fc fb 34 be | .j..,./.,.m..4.| | |
198 | 0100: fc 5c 21 a2 39 cb 66 77 7c 00 0d c3 59 17 14 58 |.\!.9.fw|...Y..X| |
|
198 | 0100: fc 5c 21 a2 39 cb 66 77 7c 00 0d c3 59 17 14 58 |.\!.9.fw|...Y..X| | |
199 | 0110: 49 16 06 29 a9 a6 29 86 c6 16 e6 a6 16 a6 26 86 |I..)..).......&.| |
|
199 | 0110: 49 16 06 29 a9 a6 29 86 c6 16 e6 a6 16 a6 26 86 |I..)..).......&.| | |
200 | 0120: c9 a6 69 06 a6 46 66 a6 89 29 86 26 26 89 49 96 |..i..Ff..).&&.I.| |
|
200 | 0120: c9 a6 69 06 a6 46 66 a6 89 29 86 26 26 89 49 96 |..i..Ff..).&&.I.| | |
201 | 0130: 69 89 16 66 29 86 29 49 5c 20 07 3e 16 fe 23 ae |i..f).)I\ .>..#.| |
|
201 | 0130: 69 89 16 66 29 86 29 49 5c 20 07 3e 16 fe 23 ae |i..f).)I\ .>..#.| | |
202 | 0140: 26 da 1c ab 10 1f d1 f8 e3 b3 ef cd dd fc 0c 93 |&...............| |
|
202 | 0140: 26 da 1c ab 10 1f d1 f8 e3 b3 ef cd dd fc 0c 93 |&...............| | |
203 | 0150: 88 75 34 36 75 04 82 55 17 14 36 a4 38 10 04 d8 |.u46u..U..6.8...| |
|
203 | 0150: 88 75 34 36 75 04 82 55 17 14 36 a4 38 10 04 d8 |.u46u..U..6.8...| | |
204 | 0160: 21 01 9a b1 83 f7 e9 45 8b d2 56 c7 a3 1f 82 52 |!......E..V....R| |
|
204 | 0160: 21 01 9a b1 83 f7 e9 45 8b d2 56 c7 a3 1f 82 52 |!......E..V....R| | |
205 | 0170: d7 8a 78 ed fc d5 76 f1 36 25 81 89 c7 ad ec 90 |..x...v.6%......| |
|
205 | 0170: d7 8a 78 ed fc d5 76 f1 36 25 81 89 c7 ad ec 90 |..x...v.6%......| | |
206 | 0180: 54 47 75 2b 89 48 b1 b2 62 c9 89 c9 19 a9 56 45 |TGu+.H..b.....VE| |
|
206 | 0180: 54 47 75 2b 89 48 b1 b2 62 c9 89 c9 19 a9 56 45 |TGu+.H..b.....VE| | |
207 | 0190: a9 65 ba 49 45 89 79 c9 19 ba 60 01 a0 14 23 58 |.e.IE.y...`...#X| |
|
207 | 0190: a9 65 ba 49 45 89 79 c9 19 ba 60 01 a0 14 23 58 |.e.IE.y...`...#X| | |
208 | 01a0: 81 35 c8 7d 40 cc 04 e2 a4 a4 a6 25 96 e6 94 60 |.5.}@......%...`| |
|
208 | 01a0: 81 35 c8 7d 40 cc 04 e2 a4 a4 a6 25 96 e6 94 60 |.5.}@......%...`| | |
209 | 01b0: 33 17 5f 54 00 00 d3 1b 0d 4c |3._T.....L| |
|
209 | 01b0: 33 17 5f 54 00 00 d3 1b 0d 4c |3._T.....L| | |
210 |
|
210 | |||
211 | $ echo "http://localhost:$HGPORT1/full.hg" > server/.hg/clonebundles.manifest |
|
211 | $ echo "http://localhost:$HGPORT1/full.hg" > server/.hg/clonebundles.manifest | |
212 | $ hg clone -U http://localhost:$HGPORT full-bundle |
|
212 | $ hg clone -U http://localhost:$HGPORT full-bundle | |
213 | applying clone bundle from http://localhost:$HGPORT1/full.hg |
|
213 | applying clone bundle from http://localhost:$HGPORT1/full.hg | |
214 | adding changesets |
|
214 | adding changesets | |
215 | adding manifests |
|
215 | adding manifests | |
216 | adding file changes |
|
216 | adding file changes | |
217 | added 2 changesets with 2 changes to 2 files |
|
217 | added 2 changesets with 2 changes to 2 files | |
218 | finished applying clone bundle |
|
218 | finished applying clone bundle | |
219 | searching for changes |
|
219 | searching for changes | |
220 | no changes found |
|
220 | no changes found | |
221 | 2 local changesets published |
|
221 | 2 local changesets published | |
222 |
|
222 | |||
223 | Feature works over SSH |
|
223 | Feature works over SSH | |
224 |
|
224 | |||
225 | $ hg clone -U ssh://user@dummy/server ssh-full-clone |
|
225 | $ hg clone -U ssh://user@dummy/server ssh-full-clone | |
226 | applying clone bundle from http://localhost:$HGPORT1/full.hg |
|
226 | applying clone bundle from http://localhost:$HGPORT1/full.hg | |
227 | adding changesets |
|
227 | adding changesets | |
228 | adding manifests |
|
228 | adding manifests | |
229 | adding file changes |
|
229 | adding file changes | |
230 | added 2 changesets with 2 changes to 2 files |
|
230 | added 2 changesets with 2 changes to 2 files | |
231 | finished applying clone bundle |
|
231 | finished applying clone bundle | |
232 | searching for changes |
|
232 | searching for changes | |
233 | no changes found |
|
233 | no changes found | |
234 | 2 local changesets published |
|
234 | 2 local changesets published | |
235 |
|
235 | |||
236 | Inline bundle |
|
236 | Inline bundle | |
237 | ============= |
|
237 | ============= | |
238 |
|
238 | |||
239 | Checking bundle retrieved over the wireprotocol |
|
239 | Checking bundle retrieved over the wireprotocol | |
240 |
|
240 | |||
241 | Feature works over SSH with inline bundle |
|
241 | Feature works over SSH with inline bundle | |
242 | ----------------------------------------- |
|
242 | ----------------------------------------- | |
243 |
|
243 | |||
244 | $ mkdir server/.hg/bundle-cache/ |
|
244 | $ mkdir server/.hg/bundle-cache/ | |
245 | $ cp full.hg server/.hg/bundle-cache/ |
|
245 | $ cp full.hg server/.hg/bundle-cache/ | |
246 | $ echo "peer-bundle-cache://full.hg" > server/.hg/clonebundles.manifest |
|
246 | $ echo "peer-bundle-cache://full.hg" > server/.hg/clonebundles.manifest | |
247 | $ hg clone -U ssh://user@dummy/server ssh-inline-clone |
|
247 | $ hg clone -U ssh://user@dummy/server ssh-inline-clone | |
248 | applying clone bundle from peer-bundle-cache://full.hg |
|
248 | applying clone bundle from peer-bundle-cache://full.hg | |
249 | adding changesets |
|
249 | adding changesets | |
250 | adding manifests |
|
250 | adding manifests | |
251 | adding file changes |
|
251 | adding file changes | |
252 | added 2 changesets with 2 changes to 2 files |
|
252 | added 2 changesets with 2 changes to 2 files | |
253 | finished applying clone bundle |
|
253 | finished applying clone bundle | |
254 | searching for changes |
|
254 | searching for changes | |
255 | no changes found |
|
255 | no changes found | |
256 | 2 local changesets published |
|
256 | 2 local changesets published | |
257 |
|
257 | |||
258 | HTTP Supports |
|
258 | HTTP Supports | |
259 | ------------- |
|
259 | ------------- | |
260 |
|
260 | |||
261 | Or lack of it actually |
|
|||
262 |
|
||||
263 | Feature does not use inline bundle over HTTP(S) because there is no protocaps support |
|
|||
264 | (so no way for the client to announce that it supports inline clonebundles) |
|
|||
265 | $ hg clone -U http://localhost:$HGPORT http-inline-clone
|
261 | $ hg clone -U http://localhost:$HGPORT http-inline-clone | |
266 | requesting all changes |
|
262 | applying clone bundle from peer-bundle-cache://full.hg | |
267 | adding changesets |
|
263 | adding changesets | |
268 | adding manifests |
|
264 | adding manifests | |
269 | adding file changes |
|
265 | adding file changes | |
270 | added 2 changesets with 2 changes to 2 files |
|
266 | added 2 changesets with 2 changes to 2 files | |
271 | new changesets 53245c60e682:aaff8d2ffbbf |
|
267 | finished applying clone bundle | |
|
268 | searching for changes | |||
|
269 | no changes found | |||
|
270 | 2 local changesets published | |||
272 |
|
271 | |||
273 | Pre-transmit Hook |
|
272 | Pre-transmit Hook | |
274 | ----------------- |
|
273 | ----------------- | |
275 |
|
274 | |||
276 | Hooks work with inline bundle |
|
275 | Hooks work with inline bundle | |
277 |
|
276 | |||
278 | $ cp server/.hg/hgrc server/.hg/hgrc-beforeinlinehooks |
|
277 | $ cp server/.hg/hgrc server/.hg/hgrc-beforeinlinehooks | |
279 | $ echo "[hooks]" >> server/.hg/hgrc |
|
278 | $ echo "[hooks]" >> server/.hg/hgrc | |
280 | $ echo "pretransmit-inline-clone-bundle=echo foo" >> server/.hg/hgrc |
|
279 | $ echo "pretransmit-inline-clone-bundle=echo foo" >> server/.hg/hgrc | |
281 | $ hg clone -U ssh://user@dummy/server ssh-inline-clone-hook |
|
280 | $ hg clone -U ssh://user@dummy/server ssh-inline-clone-hook | |
282 | applying clone bundle from peer-bundle-cache://full.hg |
|
281 | applying clone bundle from peer-bundle-cache://full.hg | |
283 | remote: foo |
|
282 | remote: foo | |
284 | adding changesets |
|
283 | adding changesets | |
285 | adding manifests |
|
284 | adding manifests | |
286 | adding file changes |
|
285 | adding file changes | |
287 | added 2 changesets with 2 changes to 2 files |
|
286 | added 2 changesets with 2 changes to 2 files | |
288 | finished applying clone bundle |
|
287 | finished applying clone bundle | |
289 | searching for changes |
|
288 | searching for changes | |
290 | no changes found |
|
289 | no changes found | |
291 | 2 local changesets published |
|
290 | 2 local changesets published | |
292 |
|
291 | |||
293 | Hooks can make an inline bundle fail |
|
292 | Hooks can make an inline bundle fail | |
294 |
|
293 | |||
295 | $ cp server/.hg/hgrc-beforeinlinehooks server/.hg/hgrc |
|
294 | $ cp server/.hg/hgrc-beforeinlinehooks server/.hg/hgrc | |
296 | $ echo "[hooks]" >> server/.hg/hgrc |
|
295 | $ echo "[hooks]" >> server/.hg/hgrc | |
297 | $ echo "pretransmit-inline-clone-bundle=echo bar && false" >> server/.hg/hgrc |
|
296 | $ echo "pretransmit-inline-clone-bundle=echo bar && false" >> server/.hg/hgrc | |
298 | $ hg clone -U ssh://user@dummy/server ssh-inline-clone-hook-fail |
|
297 | $ hg clone -U ssh://user@dummy/server ssh-inline-clone-hook-fail | |
299 | applying clone bundle from peer-bundle-cache://full.hg |
|
298 | applying clone bundle from peer-bundle-cache://full.hg | |
300 | remote: bar |
|
299 | remote: bar | |
  remote: abort: pretransmit-inline-clone-bundle hook exited with status 1
  abort: stream ended unexpectedly (got 0 bytes, expected 1)
  [255]
  $ cp server/.hg/hgrc-beforeinlinehooks server/.hg/hgrc

Other tests
===========

Entry with unknown BUNDLESPEC is filtered and not used

  $ cat > server/.hg/clonebundles.manifest << EOF
  > http://bad.entry1 BUNDLESPEC=UNKNOWN
  > http://bad.entry2 BUNDLESPEC=xz-v1
  > http://bad.entry3 BUNDLESPEC=none-v100
  > http://localhost:$HGPORT1/full.hg BUNDLESPEC=gzip-v2
  > EOF

  $ hg clone -U http://localhost:$HGPORT filter-unknown-type
  applying clone bundle from http://localhost:$HGPORT1/full.hg
  adding changesets
  adding manifests
  adding file changes
  added 2 changesets with 2 changes to 2 files
  finished applying clone bundle
  searching for changes
  no changes found
  2 local changesets published

Automatic fallback when all entries are filtered

  $ cat > server/.hg/clonebundles.manifest << EOF
  > http://bad.entry BUNDLESPEC=UNKNOWN
  > EOF

  $ hg clone -U http://localhost:$HGPORT filter-all
  no compatible clone bundles available on server; falling back to regular clone
  (you may want to report this to the server operator)
  requesting all changes
  adding changesets
  adding manifests
  adding file changes
  added 2 changesets with 2 changes to 2 files
  new changesets 53245c60e682:aaff8d2ffbbf

We require a Python version that supports SNI. Therefore, URLs requiring SNI
are not filtered.

  $ cp full.hg sni.hg
  $ cat > server/.hg/clonebundles.manifest << EOF
  > http://localhost:$HGPORT1/sni.hg REQUIRESNI=true
  > http://localhost:$HGPORT1/full.hg
  > EOF

  $ hg clone -U http://localhost:$HGPORT sni-supported
  applying clone bundle from http://localhost:$HGPORT1/sni.hg
  adding changesets
  adding manifests
  adding file changes
  added 2 changesets with 2 changes to 2 files
  finished applying clone bundle
  searching for changes
  no changes found
  2 local changesets published

Stream clone bundles are supported

  $ hg -R server debugcreatestreamclonebundle packed.hg
  writing 613 bytes for 4 files
  bundle requirements: generaldelta, revlogv1, sparserevlog (no-rust no-zstd !)
  bundle requirements: generaldelta, revlog-compression-zstd, revlogv1, sparserevlog (no-rust zstd !)
  bundle requirements: generaldelta, revlog-compression-zstd, revlogv1, sparserevlog (rust !)

No bundle spec should work

  $ cat > server/.hg/clonebundles.manifest << EOF
  > http://localhost:$HGPORT1/packed.hg
  > EOF

  $ hg clone -U http://localhost:$HGPORT stream-clone-no-spec
  applying clone bundle from http://localhost:$HGPORT1/packed.hg
  4 files to transfer, 613 bytes of data
  transferred 613 bytes in *.* seconds (*) (glob)
  finished applying clone bundle
  searching for changes
  no changes found

Bundle spec without parameters should work

  $ cat > server/.hg/clonebundles.manifest << EOF
  > http://localhost:$HGPORT1/packed.hg BUNDLESPEC=none-packed1
  > EOF

  $ hg clone -U http://localhost:$HGPORT stream-clone-vanilla-spec
  applying clone bundle from http://localhost:$HGPORT1/packed.hg
  4 files to transfer, 613 bytes of data
  transferred 613 bytes in *.* seconds (*) (glob)
  finished applying clone bundle
  searching for changes
  no changes found

Bundle spec with format requirements should work

  $ cat > server/.hg/clonebundles.manifest << EOF
  > http://localhost:$HGPORT1/packed.hg BUNDLESPEC=none-packed1;requirements%3Drevlogv1
  > EOF

  $ hg clone -U http://localhost:$HGPORT stream-clone-supported-requirements
  applying clone bundle from http://localhost:$HGPORT1/packed.hg
  4 files to transfer, 613 bytes of data
  transferred 613 bytes in *.* seconds (*) (glob)
  finished applying clone bundle
  searching for changes
  no changes found

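(Not part of the test: the `requirements%3Drevlogv1` above works because
attribute values in a clonebundles.manifest line are percent-encoded, so the
`=` inside a bundle spec does not clash with the `KEY=VALUE` attribute syntax.
A rough sketch of that decoding, with a hypothetical helper name, not
Mercurial's actual parser:)

```python
from urllib.parse import unquote

def parse_manifest_line(line):
    """Split one clonebundles.manifest line into (url, attrs).

    Attribute values are percent-decoded, so '%3D' in a value
    becomes '=' without breaking the KEY=VALUE field syntax.
    """
    fields = line.split()
    url, attrs = fields[0], {}
    for field in fields[1:]:
        key, _, value = field.partition('=')
        attrs[key] = unquote(value)
    return url, attrs

url, attrs = parse_manifest_line(
    'http://example.com/packed.hg '
    'BUNDLESPEC=none-packed1;requirements%3Drevlogv1'
)
# attrs['BUNDLESPEC'] is now 'none-packed1;requirements=revlogv1'
```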
Stream bundle spec with unknown requirements should be filtered out

  $ cat > server/.hg/clonebundles.manifest << EOF
  > http://localhost:$HGPORT1/packed.hg BUNDLESPEC=none-packed1;requirements%3Drevlogv42
  > EOF

  $ hg clone -U http://localhost:$HGPORT stream-clone-unsupported-requirements
  no compatible clone bundles available on server; falling back to regular clone
  (you may want to report this to the server operator)
  requesting all changes
  adding changesets
  adding manifests
  adding file changes
  added 2 changesets with 2 changes to 2 files
  new changesets 53245c60e682:aaff8d2ffbbf

Set up manifest for testing preferences
(Remember, the TYPE does not have to match reality - the URL is
important)

  $ cp full.hg gz-a.hg
  $ cp full.hg gz-b.hg
  $ cp full.hg bz2-a.hg
  $ cp full.hg bz2-b.hg
  $ cat > server/.hg/clonebundles.manifest << EOF
  > http://localhost:$HGPORT1/gz-a.hg BUNDLESPEC=gzip-v2 extra=a
  > http://localhost:$HGPORT1/bz2-a.hg BUNDLESPEC=bzip2-v2 extra=a
  > http://localhost:$HGPORT1/gz-b.hg BUNDLESPEC=gzip-v2 extra=b
  > http://localhost:$HGPORT1/bz2-b.hg BUNDLESPEC=bzip2-v2 extra=b
  > EOF

Preferring an undefined attribute will take first entry

  $ hg --config ui.clonebundleprefers=foo=bar clone -U http://localhost:$HGPORT prefer-foo
  applying clone bundle from http://localhost:$HGPORT1/gz-a.hg
  adding changesets
  adding manifests
  adding file changes
  added 2 changesets with 2 changes to 2 files
  finished applying clone bundle
  searching for changes
  no changes found
  2 local changesets published

Preferring bz2 type will download first entry of that type

  $ hg --config ui.clonebundleprefers=COMPRESSION=bzip2 clone -U http://localhost:$HGPORT prefer-bz
  applying clone bundle from http://localhost:$HGPORT1/bz2-a.hg
  adding changesets
  adding manifests
  adding file changes
  added 2 changesets with 2 changes to 2 files
  finished applying clone bundle
  searching for changes
  no changes found
  2 local changesets published

Preferring multiple values of an option works

  $ hg --config ui.clonebundleprefers=COMPRESSION=unknown,COMPRESSION=bzip2 clone -U http://localhost:$HGPORT prefer-multiple-bz
  applying clone bundle from http://localhost:$HGPORT1/bz2-a.hg
  adding changesets
  adding manifests
  adding file changes
  added 2 changesets with 2 changes to 2 files
  finished applying clone bundle
  searching for changes
  no changes found
  2 local changesets published

Sorting multiple values should get us back to original first entry

  $ hg --config ui.clonebundleprefers=BUNDLESPEC=unknown,BUNDLESPEC=gzip-v2,BUNDLESPEC=bzip2-v2 clone -U http://localhost:$HGPORT prefer-multiple-gz
  applying clone bundle from http://localhost:$HGPORT1/gz-a.hg
  adding changesets
  adding manifests
  adding file changes
  added 2 changesets with 2 changes to 2 files
  finished applying clone bundle
  searching for changes
  no changes found
  2 local changesets published

Preferring multiple attributes has correct order

  $ hg --config ui.clonebundleprefers=extra=b,BUNDLESPEC=bzip2-v2 clone -U http://localhost:$HGPORT prefer-separate-attributes
  applying clone bundle from http://localhost:$HGPORT1/bz2-b.hg
  adding changesets
  adding manifests
  adding file changes
  added 2 changesets with 2 changes to 2 files
  finished applying clone bundle
  searching for changes
  no changes found
  2 local changesets published

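(Not part of the test: the ordering exercised above can be sketched as a
stable sort where each `ui.clonebundleprefers` pair, in order, ranks matching
entries first; on a tie the manifest order is preserved. The helper below is
a hypothetical sketch of that rule, not Mercurial's implementation:)

```python
def sort_by_prefs(entries, prefs):
    """Stable-sort (url, attrs) manifest entries by a list of
    (key, value) preferences: for each preference in order, entries
    whose attribute matches rank ahead of those that do not."""
    def score(entry):
        _url, attrs = entry
        # 0 sorts before 1; earlier preferences dominate later ones.
        return tuple(0 if attrs.get(k) == v else 1 for k, v in prefs)
    return sorted(entries, key=score)

entries = [
    ('gz-a.hg',  {'BUNDLESPEC': 'gzip-v2',  'extra': 'a'}),
    ('bz2-a.hg', {'BUNDLESPEC': 'bzip2-v2', 'extra': 'a'}),
    ('gz-b.hg',  {'BUNDLESPEC': 'gzip-v2',  'extra': 'b'}),
    ('bz2-b.hg', {'BUNDLESPEC': 'bzip2-v2', 'extra': 'b'}),
]
# extra=b dominates, then bzip2-v2 breaks the tie: bz2-b.hg comes first,
# matching the prefer-separate-attributes run above.
first = sort_by_prefs(entries, [('extra', 'b'), ('BUNDLESPEC', 'bzip2-v2')])[0]
```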
Test where attribute is missing from some entries

  $ cat > server/.hg/clonebundles.manifest << EOF
  > http://localhost:$HGPORT1/gz-a.hg BUNDLESPEC=gzip-v2
  > http://localhost:$HGPORT1/bz2-a.hg BUNDLESPEC=bzip2-v2
  > http://localhost:$HGPORT1/gz-b.hg BUNDLESPEC=gzip-v2 extra=b
  > http://localhost:$HGPORT1/bz2-b.hg BUNDLESPEC=bzip2-v2 extra=b
  > EOF

  $ hg --config ui.clonebundleprefers=extra=b clone -U http://localhost:$HGPORT prefer-partially-defined-attribute
  applying clone bundle from http://localhost:$HGPORT1/gz-b.hg
  adding changesets
  adding manifests
  adding file changes
  added 2 changesets with 2 changes to 2 files
  finished applying clone bundle
  searching for changes
  no changes found
  2 local changesets published

Test a bad attribute list

  $ hg --config ui.clonebundleprefers=bad clone -U http://localhost:$HGPORT bad-input
  abort: invalid ui.clonebundleprefers item: bad
  (each comma separated item should be key=value pairs)
  [255]
  $ hg --config ui.clonebundleprefers=key=val,bad,key2=val2 clone \
  > -U http://localhost:$HGPORT bad-input
  abort: invalid ui.clonebundleprefers item: bad
  (each comma separated item should be key=value pairs)
  [255]


Test interaction between clone bundles and --stream

A manifest with just a gzip bundle

  $ cat > server/.hg/clonebundles.manifest << EOF
  > http://localhost:$HGPORT1/gz-a.hg BUNDLESPEC=gzip-v2
  > EOF

  $ hg clone -U --stream http://localhost:$HGPORT uncompressed-gzip
  no compatible clone bundles available on server; falling back to regular clone
  (you may want to report this to the server operator)
  streaming all changes
  9 files to transfer, 816 bytes of data
  transferred 816 bytes in * seconds (*) (glob)

A manifest with a stream clone but no BUNDLESPEC

  $ cat > server/.hg/clonebundles.manifest << EOF
  > http://localhost:$HGPORT1/packed.hg
  > EOF

  $ hg clone -U --stream http://localhost:$HGPORT uncompressed-no-bundlespec
  no compatible clone bundles available on server; falling back to regular clone
  (you may want to report this to the server operator)
  streaming all changes
  9 files to transfer, 816 bytes of data
  transferred 816 bytes in * seconds (*) (glob)

A manifest with a gzip bundle and a stream clone

  $ cat > server/.hg/clonebundles.manifest << EOF
  > http://localhost:$HGPORT1/gz-a.hg BUNDLESPEC=gzip-v2
  > http://localhost:$HGPORT1/packed.hg BUNDLESPEC=none-packed1
  > EOF

  $ hg clone -U --stream http://localhost:$HGPORT uncompressed-gzip-packed
  applying clone bundle from http://localhost:$HGPORT1/packed.hg
  4 files to transfer, 613 bytes of data
  transferred 613 bytes in * seconds (*) (glob)
  finished applying clone bundle
  searching for changes
  no changes found

A manifest with a gzip bundle and stream clone with supported requirements

  $ cat > server/.hg/clonebundles.manifest << EOF
  > http://localhost:$HGPORT1/gz-a.hg BUNDLESPEC=gzip-v2
  > http://localhost:$HGPORT1/packed.hg BUNDLESPEC=none-packed1;requirements%3Drevlogv1
  > EOF

  $ hg clone -U --stream http://localhost:$HGPORT uncompressed-gzip-packed-requirements
  applying clone bundle from http://localhost:$HGPORT1/packed.hg
  4 files to transfer, 613 bytes of data
  transferred 613 bytes in * seconds (*) (glob)
  finished applying clone bundle
  searching for changes
  no changes found

A manifest with a gzip bundle and a stream clone with unsupported requirements

  $ cat > server/.hg/clonebundles.manifest << EOF
  > http://localhost:$HGPORT1/gz-a.hg BUNDLESPEC=gzip-v2
  > http://localhost:$HGPORT1/packed.hg BUNDLESPEC=none-packed1;requirements%3Drevlogv42
  > EOF

  $ hg clone -U --stream http://localhost:$HGPORT uncompressed-gzip-packed-unsupported-requirements
  no compatible clone bundles available on server; falling back to regular clone
  (you may want to report this to the server operator)
  streaming all changes
  9 files to transfer, 816 bytes of data
  transferred 816 bytes in * seconds (*) (glob)

Test clone bundle retrieved through bundle2

  $ cat << EOF >> $HGRCPATH
  > [extensions]
  > largefiles=
  > EOF
  $ killdaemons.py
  $ hg -R server serve -d -p $HGPORT --pid-file hg.pid --accesslog access.log
  $ cat hg.pid >> $DAEMON_PIDS

  $ hg -R server debuglfput gz-a.hg
  1f74b3d08286b9b3a16fb3fa185dd29219cbc6ae

  $ cat > server/.hg/clonebundles.manifest << EOF
  > largefile://1f74b3d08286b9b3a16fb3fa185dd29219cbc6ae BUNDLESPEC=gzip-v2
  > EOF

  $ hg clone -U http://localhost:$HGPORT largefile-provided --traceback
  applying clone bundle from largefile://1f74b3d08286b9b3a16fb3fa185dd29219cbc6ae
  adding changesets
  adding manifests
  adding file changes
  added 2 changesets with 2 changes to 2 files
  finished applying clone bundle
  searching for changes
  no changes found
  2 local changesets published
  $ killdaemons.py

A manifest with a gzip bundle requiring too much memory for a 16MB system and working
on a 32MB system.

  $ "$PYTHON" $TESTDIR/dumbhttp.py -p $HGPORT1 --pid http.pid
  $ cat http.pid >> $DAEMON_PIDS
  $ hg -R server serve -d -p $HGPORT --pid-file hg.pid --accesslog access.log
  $ cat hg.pid >> $DAEMON_PIDS

  $ cat > server/.hg/clonebundles.manifest << EOF
  > http://localhost:$HGPORT1/gz-a.hg BUNDLESPEC=gzip-v2 REQUIREDRAM=12MB
  > EOF

  $ hg clone -U --debug --config ui.available-memory=16MB http://localhost:$HGPORT gzip-too-large
  using http://localhost:$HGPORT/
  sending capabilities command
  sending clonebundles_manifest command
  filtering http://localhost:$HGPORT1/gz-a.hg as it needs more than 2/3 of system memory
  no compatible clone bundles available on server; falling back to regular clone
  (you may want to report this to the server operator)
  query 1; heads
  sending batch command
  requesting all changes
  sending getbundle command
  bundle2-input-bundle: with-transaction
  bundle2-input-part: "changegroup" (params: 1 mandatory 1 advisory) supported
  adding changesets
  add changeset 53245c60e682
  add changeset aaff8d2ffbbf
  adding manifests
  adding file changes
  adding bar revisions
  adding foo revisions
  bundle2-input-part: total payload size 936
  bundle2-input-part: "listkeys" (params: 1 mandatory) supported
  bundle2-input-part: "phase-heads" supported
  bundle2-input-part: total payload size 24
  bundle2-input-bundle: 3 parts total
  checking for updated bookmarks
  updating the branch cache
  added 2 changesets with 2 changes to 2 files
  new changesets 53245c60e682:aaff8d2ffbbf
  calling hook changegroup.lfiles: hgext.largefiles.reposetup.checkrequireslfiles
  updating the branch cache
  (sent 4 HTTP requests and * bytes; received * bytes in responses) (glob)

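(Not part of the test: the filtering seen in the --debug output above applies
the rule that a REQUIREDRAM entry is kept only when it needs no more than 2/3
of the memory declared via `ui.available-memory`. A rough sketch of that
arithmetic with hypothetical helper names, not Mercurial's implementation:)

```python
def parse_size(value):
    """Parse a size such as '12MB' into bytes (simplified sketch)."""
    units = {'KB': 1 << 10, 'MB': 1 << 20, 'GB': 1 << 30}
    for suffix, factor in units.items():
        if value.upper().endswith(suffix):
            return int(value[:-len(suffix)]) * factor
    return int(value)

def bundle_fits(required, available):
    """Keep a bundle only if it needs <= 2/3 of available memory."""
    return required <= (available * 2) // 3

twelve_mb = parse_size('12MB')
# 12MB exceeds 2/3 of 16MB, so the entry is filtered on the 16MB run...
fits_16 = bundle_fits(twelve_mb, parse_size('16MB'))  # False
# ...but fits within 2/3 of 32MB, so the 32MB run applies the bundle.
fits_32 = bundle_fits(twelve_mb, parse_size('32MB'))  # True
```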
  $ hg clone -U --debug --config ui.available-memory=32MB http://localhost:$HGPORT gzip-too-large2
  using http://localhost:$HGPORT/
  sending capabilities command
  sending clonebundles_manifest command
  applying clone bundle from http://localhost:$HGPORT1/gz-a.hg
  bundle2-input-bundle: 1 params with-transaction
  bundle2-input-part: "changegroup" (params: 1 mandatory 1 advisory) supported
  adding changesets
  add changeset 53245c60e682
  add changeset aaff8d2ffbbf
  adding manifests
  adding file changes
  adding bar revisions
  adding foo revisions
  bundle2-input-part: total payload size 920
  bundle2-input-part: "cache:rev-branch-cache" (advisory) supported
  bundle2-input-part: total payload size 59
  bundle2-input-bundle: 2 parts total
  updating the branch cache
  added 2 changesets with 2 changes to 2 files
  finished applying clone bundle
  query 1; heads
712 | sending batch command |
|
711 | sending batch command | |
713 | searching for changes |
|
712 | searching for changes | |
714 | all remote heads known locally |
|
713 | all remote heads known locally | |
715 | no changes found |
|
714 | no changes found | |
716 | sending getbundle command |
|
715 | sending getbundle command | |
717 | bundle2-input-bundle: with-transaction |
|
716 | bundle2-input-bundle: with-transaction | |
718 | bundle2-input-part: "listkeys" (params: 1 mandatory) supported |
|
717 | bundle2-input-part: "listkeys" (params: 1 mandatory) supported | |
719 | bundle2-input-part: "phase-heads" supported |
|
718 | bundle2-input-part: "phase-heads" supported | |
720 | bundle2-input-part: total payload size 24 |
|
719 | bundle2-input-part: total payload size 24 | |
721 | bundle2-input-bundle: 2 parts total |
|
720 | bundle2-input-bundle: 2 parts total | |
722 | checking for updated bookmarks |
|
721 | checking for updated bookmarks | |
723 | 2 local changesets published |
|
722 | 2 local changesets published | |
724 | calling hook changegroup.lfiles: hgext.largefiles.reposetup.checkrequireslfiles |
|
723 | calling hook changegroup.lfiles: hgext.largefiles.reposetup.checkrequireslfiles | |
725 | updating the branch cache |
|
724 | updating the branch cache | |
726 | (sent 4 HTTP requests and * bytes; received * bytes in responses) (glob) |
|
725 | (sent 4 HTTP requests and * bytes; received * bytes in responses) (glob) | |
727 | $ killdaemons.py |
|
726 | $ killdaemons.py | |
728 |
|
727 | |||
729 | Testing a clone bundle that involves revlog splitting (issue6811) |
|
728 | Testing a clone bundle that involves revlog splitting (issue6811) | |
730 | ================================================================== |
|
729 | ================================================================== | |
731 |
|
730 | |||
732 | $ cat >> $HGRCPATH << EOF |
|
731 | $ cat >> $HGRCPATH << EOF | |
733 | > [format] |
|
732 | > [format] | |
734 | > revlog-compression=none |
|
733 | > revlog-compression=none | |
735 | > use-persistent-nodemap=no |
|
734 | > use-persistent-nodemap=no | |
736 | > EOF |
|
735 | > EOF | |
737 |
|
736 | |||
738 | $ hg init server-revlog-split/ |
|
737 | $ hg init server-revlog-split/ | |
739 | $ cd server-revlog-split |
|
738 | $ cd server-revlog-split | |
740 | $ cat >> .hg/hgrc << EOF |
|
739 | $ cat >> .hg/hgrc << EOF | |
741 | > [extensions] |
|
740 | > [extensions] | |
742 | > clonebundles = |
|
741 | > clonebundles = | |
743 | > EOF |
|
742 | > EOF | |
744 | $ echo foo > A |
|
743 | $ echo foo > A | |
745 | $ hg add A |
|
744 | $ hg add A | |
746 | $ hg commit -m 'initial commit' |
|
745 | $ hg commit -m 'initial commit' | |
747 | IMPORTANT: the revlogs must not be split |
|
746 | IMPORTANT: the revlogs must not be split | |
748 | $ ls -1 .hg/store/00manifest.* |
|
747 | $ ls -1 .hg/store/00manifest.* | |
749 | .hg/store/00manifest.i |
|
748 | .hg/store/00manifest.i | |
750 | $ ls -1 .hg/store/data/_a.* |
|
749 | $ ls -1 .hg/store/data/_a.* | |
751 | .hg/store/data/_a.i |
|
750 | .hg/store/data/_a.i | |
752 |
|
751 | |||
753 | do a big enough update to split the revlogs |
|
752 | do a big enough update to split the revlogs | |
754 |
|
753 | |||
755 | $ $TESTDIR/seq.py 100000 > A |
|
754 | $ $TESTDIR/seq.py 100000 > A | |
756 | $ mkdir foo |
|
755 | $ mkdir foo | |
757 | $ cd foo |
|
756 | $ cd foo | |
758 | $ touch `$TESTDIR/seq.py 10000` |
|
757 | $ touch `$TESTDIR/seq.py 10000` | |
759 | $ cd .. |
|
758 | $ cd .. | |
760 | $ hg add -q foo |
|
759 | $ hg add -q foo | |
761 | $ hg commit -m 'split the manifest and one filelog' |
|
760 | $ hg commit -m 'split the manifest and one filelog' | |
762 |
|
761 | |||
763 | IMPORTANT: now the revlogs must be split |
|
762 | IMPORTANT: now the revlogs must be split | |
764 | $ ls -1 .hg/store/00manifest.* |
|
763 | $ ls -1 .hg/store/00manifest.* | |
765 | .hg/store/00manifest.d |
|
764 | .hg/store/00manifest.d | |
766 | .hg/store/00manifest.i |
|
765 | .hg/store/00manifest.i | |
767 | $ ls -1 .hg/store/data/_a.* |
|
766 | $ ls -1 .hg/store/data/_a.* | |
768 | .hg/store/data/_a.d |
|
767 | .hg/store/data/_a.d | |
769 | .hg/store/data/_a.i |
|
768 | .hg/store/data/_a.i | |
770 |
|
769 | |||
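The `.i`/`.d` listings above reflect Mercurial's inline-revlog layout: a small revlog keeps its data inside the `.i` index file, and the data is split out into a separate `.d` file once the revlog grows. As a minimal sketch (the helper name and `store` argument are illustrative, not part of Mercurial's API), split detection amounts to checking for the `.d` file next to the `.i` file:

```python
import os

def revlog_is_split(store, name):
    """Return True if the revlog `name` has been split, i.e. both
    NAME.i (index) and NAME.d (data) exist in the store directory."""
    has_index = os.path.exists(os.path.join(store, name + ".i"))
    has_data = os.path.exists(os.path.join(store, name + ".d"))
    return has_index and has_data

# Matching the test above:
#   .hg/store/00manifest.i          -> inline, not split
#   .hg/store/00manifest.i + .d     -> split
```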
771 | Add an extra commit on top of that |
|
770 | Add an extra commit on top of that | |
772 |
|
771 | |||
773 | $ echo foo >> A |
|
772 | $ echo foo >> A | |
774 | $ hg commit -m 'one extra commit' |
|
773 | $ hg commit -m 'one extra commit' | |
775 |
|
774 | |||
776 | $ cd .. |
|
775 | $ cd .. | |
777 |
|
776 | |||
778 | Create a bundle that contains the split, but not the update |
|
777 | Create a bundle that contains the split, but not the update | |
779 |
|
778 | |||
780 | $ hg bundle --exact --rev '::(default~1)' -R server-revlog-split/ --type gzip-v2 split-test.hg |
|
779 | $ hg bundle --exact --rev '::(default~1)' -R server-revlog-split/ --type gzip-v2 split-test.hg | |
781 | 2 changesets found |
|
780 | 2 changesets found | |
782 |
|
781 | |||
783 | $ cat > server-revlog-split/.hg/clonebundles.manifest << EOF |
|
782 | $ cat > server-revlog-split/.hg/clonebundles.manifest << EOF | |
784 | > http://localhost:$HGPORT1/split-test.hg BUNDLESPEC=gzip-v2 |
|
783 | > http://localhost:$HGPORT1/split-test.hg BUNDLESPEC=gzip-v2 | |
785 | > EOF |
|
784 | > EOF | |
786 |
|
785 | |||
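Each line of the `clonebundles.manifest` written above is a URL followed by optional space-separated `KEY=VALUE` attributes such as `BUNDLESPEC`. A minimal sketch of parsing one such entry (illustrative only, not Mercurial's actual manifest parser; the URL is the placeholder from the test):

```python
def parse_manifest_line(line):
    """Split a clonebundles.manifest entry into (url, attrs).
    Format: URL [KEY=VALUE ...], one entry per line."""
    fields = line.strip().split()
    url, attrs = fields[0], {}
    for field in fields[1:]:
        key, _, value = field.partition("=")
        attrs[key] = value
    return url, attrs

url, attrs = parse_manifest_line(
    "http://localhost:8000/split-test.hg BUNDLESPEC=gzip-v2"
)
```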
787 | start the necessary server |
|
786 | start the necessary server | |
788 |
|
787 | |||
789 | $ "$PYTHON" $TESTDIR/dumbhttp.py -p $HGPORT1 --pid http.pid |
|
788 | $ "$PYTHON" $TESTDIR/dumbhttp.py -p $HGPORT1 --pid http.pid | |
790 | $ cat http.pid >> $DAEMON_PIDS |
|
789 | $ cat http.pid >> $DAEMON_PIDS | |
791 | $ hg -R server-revlog-split serve -d -p $HGPORT --pid-file hg.pid --accesslog access.log |
|
790 | $ hg -R server-revlog-split serve -d -p $HGPORT --pid-file hg.pid --accesslog access.log | |
792 | $ cat hg.pid >> $DAEMON_PIDS |
|
791 | $ cat hg.pid >> $DAEMON_PIDS | |
793 |
|
792 | |||
794 | Check that clone works fine |
|
793 | Check that clone works fine | |
795 | =========================== |
|
794 | =========================== | |
796 |
|
795 | |||
797 | Here, the initial clone will trigger a revlog split (which is a bit clowny in |
|
796 | Here, the initial clone will trigger a revlog split (which is a bit clowny in | |
798 | itself, but whatever). The split revlogs will see additional data added to |
|
797 | itself, but whatever). The split revlogs will see additional data added to | |
799 | them in the subsequent pull. This should not be a problem. |
|
798 | them in the subsequent pull. This should not be a problem. | |
800 |
|
799 | |||
801 | $ hg clone http://localhost:$HGPORT revlog-split-in-the-bundle |
|
800 | $ hg clone http://localhost:$HGPORT revlog-split-in-the-bundle | |
802 | applying clone bundle from http://localhost:$HGPORT1/split-test.hg |
|
801 | applying clone bundle from http://localhost:$HGPORT1/split-test.hg | |
803 | adding changesets |
|
802 | adding changesets | |
804 | adding manifests |
|
803 | adding manifests | |
805 | adding file changes |
|
804 | adding file changes | |
806 | added 2 changesets with 10002 changes to 10001 files |
|
805 | added 2 changesets with 10002 changes to 10001 files | |
807 | finished applying clone bundle |
|
806 | finished applying clone bundle | |
808 | searching for changes |
|
807 | searching for changes | |
809 | adding changesets |
|
808 | adding changesets | |
810 | adding manifests |
|
809 | adding manifests | |
811 | adding file changes |
|
810 | adding file changes | |
812 | added 1 changesets with 1 changes to 1 files |
|
811 | added 1 changesets with 1 changes to 1 files | |
813 | new changesets e3879eaa1db7 |
|
812 | new changesets e3879eaa1db7 | |
814 | 2 local changesets published |
|
813 | 2 local changesets published | |
815 | updating to branch default |
|
814 | updating to branch default | |
816 | 10001 files updated, 0 files merged, 0 files removed, 0 files unresolved |
|
815 | 10001 files updated, 0 files merged, 0 files removed, 0 files unresolved | |
817 |
|
816 | |||
818 | check the results |
|
817 | check the results | |
819 |
|
818 | |||
820 | $ cd revlog-split-in-the-bundle |
|
819 | $ cd revlog-split-in-the-bundle | |
821 | $ f --size .hg/store/00manifest.* |
|
820 | $ f --size .hg/store/00manifest.* | |
822 | .hg/store/00manifest.d: size=499037 |
|
821 | .hg/store/00manifest.d: size=499037 | |
823 | .hg/store/00manifest.i: size=192 |
|
822 | .hg/store/00manifest.i: size=192 | |
824 | $ f --size .hg/store/data/_a.* |
|
823 | $ f --size .hg/store/data/_a.* | |
825 | .hg/store/data/_a.d: size=588917 |
|
824 | .hg/store/data/_a.d: size=588917 | |
826 | .hg/store/data/_a.i: size=192 |
|
825 | .hg/store/data/_a.i: size=192 | |
827 |
|
826 | |||
828 | manifest should work |
|
827 | manifest should work | |
829 |
|
828 | |||
830 | $ hg files -r tip | wc -l |
|
829 | $ hg files -r tip | wc -l | |
831 | \s*10001 (re) |
|
830 | \s*10001 (re) | |
832 |
|
831 | |||
833 | file content should work |
|
832 | file content should work | |
834 |
|
833 | |||
835 | $ hg cat -r tip A | wc -l |
|
834 | $ hg cat -r tip A | wc -l | |
836 | \s*100001 (re) |
|
835 | \s*100001 (re) | |
837 |
|
836 | |||
838 |
|
837 |
@@ -1,204 +1,204 b'' | |||||
1 | #require no-reposimplestore |
|
1 | #require no-reposimplestore | |
2 |
|
2 | |||
3 | #testcases stream-v2 stream-v3 |
|
3 | #testcases stream-v2 stream-v3 | |
4 |
|
4 | |||
5 | #if stream-v2 |
|
5 | #if stream-v2 | |
6 | $ bundle_format="streamv2" |
|
6 | $ bundle_format="streamv2" | |
7 | $ stream_version="v2" |
|
7 | $ stream_version="v2" | |
8 | #endif |
|
8 | #endif | |
9 | #if stream-v3 |
|
9 | #if stream-v3 | |
10 | $ bundle_format="streamv3-exp" |
|
10 | $ bundle_format="streamv3-exp" | |
11 | $ stream_version="v3-exp" |
|
11 | $ stream_version="v3-exp" | |
12 | $ cat << EOF >> $HGRCPATH |
|
12 | $ cat << EOF >> $HGRCPATH | |
13 | > [experimental] |
|
13 | > [experimental] | |
14 | > stream-v3=yes |
|
14 | > stream-v3=yes | |
15 | > EOF |
|
15 | > EOF | |
16 | #endif |
|
16 | #endif | |
17 |
|
17 | |||
18 | Test creating and consuming stream bundle v2 and v3 |
|
18 | Test creating and consuming stream bundle v2 and v3 | |
19 |
|
19 | |||
20 | $ getmainid() { |
|
20 | $ getmainid() { | |
21 | > hg -R main log --template '{node}\n' --rev "$1" |
|
21 | > hg -R main log --template '{node}\n' --rev "$1" | |
22 | > } |
|
22 | > } | |
23 |
|
23 | |||
24 | $ cp $HGRCPATH $TESTTMP/hgrc.orig |
|
24 | $ cp $HGRCPATH $TESTTMP/hgrc.orig | |
25 |
|
25 | |||
26 | $ cat >> $HGRCPATH << EOF |
|
26 | $ cat >> $HGRCPATH << EOF | |
27 | > [experimental] |
|
27 | > [experimental] | |
28 | > evolution.createmarkers=True |
|
28 | > evolution.createmarkers=True | |
29 | > evolution.exchange=True |
|
29 | > evolution.exchange=True | |
30 | > bundle2-output-capture=True |
|
30 | > bundle2-output-capture=True | |
31 | > [ui] |
|
31 | > [ui] | |
32 | > logtemplate={rev}:{node|short} {phase} {author} {bookmarks} {desc|firstline} |
|
32 | > logtemplate={rev}:{node|short} {phase} {author} {bookmarks} {desc|firstline} | |
33 | > [web] |
|
33 | > [web] | |
34 | > push_ssl = false |
|
34 | > push_ssl = false | |
35 | > allow_push = * |
|
35 | > allow_push = * | |
36 | > [phases] |
|
36 | > [phases] | |
37 | > publish=False |
|
37 | > publish=False | |
38 | > [extensions] |
|
38 | > [extensions] | |
39 | > drawdag=$TESTDIR/drawdag.py |
|
39 | > drawdag=$TESTDIR/drawdag.py | |
40 | > clonebundles= |
|
40 | > clonebundles= | |
41 | > EOF |
|
41 | > EOF | |
42 |
|
42 | |||
43 | The extension requires a repo (currently unused) |
|
43 | The extension requires a repo (currently unused) | |
44 |
|
44 | |||
45 | $ hg init main |
|
45 | $ hg init main | |
46 | $ cd main |
|
46 | $ cd main | |
47 |
|
47 | |||
48 | $ hg debugdrawdag <<'EOF' |
|
48 | $ hg debugdrawdag <<'EOF' | |
49 | > E |
|
49 | > E | |
50 | > | |
|
50 | > | | |
51 | > D |
|
51 | > D | |
52 | > | |
|
52 | > | | |
53 | > C |
|
53 | > C | |
54 | > | |
|
54 | > | | |
55 | > B |
|
55 | > B | |
56 | > | |
|
56 | > | | |
57 | > A |
|
57 | > A | |
58 | > EOF |
|
58 | > EOF | |
59 |
|
59 | |||
60 | $ hg bundle -a --type="none-v2;stream=$stream_version" bundle.hg |
|
60 | $ hg bundle -a --type="none-v2;stream=$stream_version" bundle.hg | |
61 | $ hg debugbundle bundle.hg |
|
61 | $ hg debugbundle bundle.hg | |
62 | Stream params: {} |
|
62 | Stream params: {} | |
63 | stream2 -- {bytecount: 1693, filecount: 11, requirements: generaldelta%2Crevlogv1%2Csparserevlog} (mandatory: True) (stream-v2 no-zstd !) |
|
63 | stream2 -- {bytecount: 1693, filecount: 11, requirements: generaldelta%2Crevlogv1%2Csparserevlog} (mandatory: True) (stream-v2 no-zstd !) | |
64 | stream2 -- {bytecount: 1693, filecount: 11, requirements: generaldelta%2Crevlog-compression-zstd%2Crevlogv1%2Csparserevlog} (mandatory: True) (stream-v2 zstd no-rust !) |
|
64 | stream2 -- {bytecount: 1693, filecount: 11, requirements: generaldelta%2Crevlog-compression-zstd%2Crevlogv1%2Csparserevlog} (mandatory: True) (stream-v2 zstd no-rust !) | |
65 | stream2 -- {bytecount: 1693, filecount: 11, requirements: generaldelta%2Crevlog-compression-zstd%2Crevlogv1%2Csparserevlog} (mandatory: True) (stream-v2 rust !) |
|
65 | stream2 -- {bytecount: 1693, filecount: 11, requirements: generaldelta%2Crevlog-compression-zstd%2Crevlogv1%2Csparserevlog} (mandatory: True) (stream-v2 rust !) | |
66 | stream3-exp -- {bytecount: 1693, filecount: 11, requirements: generaldelta%2Crevlogv1%2Csparserevlog} (mandatory: True) (stream-v3 no-zstd !) |
|
66 | stream3-exp -- {bytecount: 1693, filecount: 11, requirements: generaldelta%2Crevlogv1%2Csparserevlog} (mandatory: True) (stream-v3 no-zstd !) | |
67 | stream3-exp -- {bytecount: 1693, filecount: 11, requirements: generaldelta%2Crevlog-compression-zstd%2Crevlogv1%2Csparserevlog} (mandatory: True) (stream-v3 zstd no-rust !) |
|
67 | stream3-exp -- {bytecount: 1693, filecount: 11, requirements: generaldelta%2Crevlog-compression-zstd%2Crevlogv1%2Csparserevlog} (mandatory: True) (stream-v3 zstd no-rust !) | |
68 | stream3-exp -- {bytecount: 1693, filecount: 11, requirements: generaldelta%2Crevlog-compression-zstd%2Crevlogv1%2Csparserevlog} (mandatory: True) (stream-v3 rust !) |
|
68 | stream3-exp -- {bytecount: 1693, filecount: 11, requirements: generaldelta%2Crevlog-compression-zstd%2Crevlogv1%2Csparserevlog} (mandatory: True) (stream-v3 rust !) | |
69 | $ hg debugbundle --spec bundle.hg |
|
69 | $ hg debugbundle --spec bundle.hg | |
70 | none-v2;stream=v2;requirements%3Dgeneraldelta%2Crevlogv1%2Csparserevlog (stream-v2 no-zstd !) |
|
70 | none-v2;stream=v2;requirements%3Dgeneraldelta%2Crevlogv1%2Csparserevlog (stream-v2 no-zstd !) | |
71 | none-v2;stream=v2;requirements%3Dgeneraldelta%2Crevlog-compression-zstd%2Crevlogv1%2Csparserevlog (stream-v2 zstd no-rust !) |
|
71 | none-v2;stream=v2;requirements%3Dgeneraldelta%2Crevlog-compression-zstd%2Crevlogv1%2Csparserevlog (stream-v2 zstd no-rust !) | |
72 | none-v2;stream=v2;requirements%3Dgeneraldelta%2Crevlog-compression-zstd%2Crevlogv1%2Csparserevlog (stream-v2 rust !) |
|
72 | none-v2;stream=v2;requirements%3Dgeneraldelta%2Crevlog-compression-zstd%2Crevlogv1%2Csparserevlog (stream-v2 rust !) | |
73 | none-v2;stream=v3-exp;requirements%3Dgeneraldelta%2Crevlogv1%2Csparserevlog (stream-v3 no-zstd !) |
|
73 | none-v2;stream=v3-exp;requirements%3Dgeneraldelta%2Crevlogv1%2Csparserevlog (stream-v3 no-zstd !) | |
74 | none-v2;stream=v3-exp;requirements%3Dgeneraldelta%2Crevlog-compression-zstd%2Crevlogv1%2Csparserevlog (stream-v3 zstd no-rust !) |
|
74 | none-v2;stream=v3-exp;requirements%3Dgeneraldelta%2Crevlog-compression-zstd%2Crevlogv1%2Csparserevlog (stream-v3 zstd no-rust !) | |
75 | none-v2;stream=v3-exp;requirements%3Dgeneraldelta%2Crevlog-compression-zstd%2Crevlogv1%2Csparserevlog (stream-v3 rust !) |
|
75 | none-v2;stream=v3-exp;requirements%3Dgeneraldelta%2Crevlog-compression-zstd%2Crevlogv1%2Csparserevlog (stream-v3 rust !) | |
76 |
|
76 | |||
77 | Test that we can apply the bundle as a stream clone bundle |
|
77 | Test that we can apply the bundle as a stream clone bundle | |
78 |
|
78 | |||
79 | $ cat > .hg/clonebundles.manifest << EOF |
|
79 | $ cat > .hg/clonebundles.manifest << EOF | |
80 | > http://localhost:$HGPORT1/bundle.hg BUNDLESPEC=`hg debugbundle --spec bundle.hg` |
|
80 | > http://localhost:$HGPORT1/bundle.hg BUNDLESPEC=`hg debugbundle --spec bundle.hg` | |
81 | > EOF |
|
81 | > EOF | |
82 |
|
82 | |||
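The `BUNDLESPEC` values substituted into the manifest above by `hg debugbundle --spec` percent-encode the `=` and `,` characters inside the `requirements` parameter (`%3D` and `%2C`). A hedged sketch of decoding such a spec string (Mercurial has its own bundlespec parser; this only illustrates the `compression-version;key=value;...` shape seen in the test output):

```python
from urllib.parse import unquote

def parse_bundlespec(spec):
    """Split 'compression-version;param;param...' into its pieces,
    percent-decoding each parameter (e.g. %3D -> '=', %2C -> ',')."""
    head, *raw_params = spec.split(";")
    compression, _, version = head.partition("-")
    params = {}
    for raw in raw_params:
        key, _, value = unquote(raw).partition("=")
        params[key] = value
    return compression, version, params
```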
83 | $ hg serve -d -p $HGPORT --pid-file hg.pid --accesslog access.log |
|
83 | $ hg serve -d -p $HGPORT --pid-file hg.pid --accesslog access.log | |
84 | $ cat hg.pid >> $DAEMON_PIDS |
|
84 | $ cat hg.pid >> $DAEMON_PIDS | |
85 |
|
85 | |||
86 | $ "$PYTHON" $TESTDIR/dumbhttp.py -p $HGPORT1 --pid http.pid |
|
86 | $ "$PYTHON" $TESTDIR/dumbhttp.py -p $HGPORT1 --pid http.pid | |
87 | $ cat http.pid >> $DAEMON_PIDS |
|
87 | $ cat http.pid >> $DAEMON_PIDS | |
88 |
|
88 | |||
89 | $ cd .. |
|
89 | $ cd .. | |
90 | $ hg clone http://localhost:$HGPORT stream-clone-implicit --debug |
|
90 | $ hg clone http://localhost:$HGPORT stream-clone-implicit --debug | |
91 | using http://localhost:$HGPORT/ |
|
91 | using http://localhost:$HGPORT/ | |
92 | sending capabilities command |
|
92 | sending capabilities command | |
93 | sending clonebundles command |
|
93 | sending clonebundles_manifest command | |
94 | applying clone bundle from http://localhost:$HGPORT1/bundle.hg |
|
94 | applying clone bundle from http://localhost:$HGPORT1/bundle.hg | |
95 | bundle2-input-bundle: with-transaction |
|
95 | bundle2-input-bundle: with-transaction | |
96 | bundle2-input-part: "stream2" (params: 3 mandatory) supported (stream-v2 !) |
|
96 | bundle2-input-part: "stream2" (params: 3 mandatory) supported (stream-v2 !) | |
97 | bundle2-input-part: "stream3-exp" (params: 3 mandatory) supported (stream-v3 !) |
|
97 | bundle2-input-part: "stream3-exp" (params: 3 mandatory) supported (stream-v3 !) | |
98 | applying stream bundle |
|
98 | applying stream bundle | |
99 | 11 files to transfer, 1.65 KB of data |
|
99 | 11 files to transfer, 1.65 KB of data | |
100 | starting 4 threads for background file closing (?) |
|
100 | starting 4 threads for background file closing (?) | |
101 | starting 4 threads for background file closing (?) |
|
101 | starting 4 threads for background file closing (?) | |
102 | adding [s] data/A.i (66 bytes) |
|
102 | adding [s] data/A.i (66 bytes) | |
103 | adding [s] data/B.i (66 bytes) |
|
103 | adding [s] data/B.i (66 bytes) | |
104 | adding [s] data/C.i (66 bytes) |
|
104 | adding [s] data/C.i (66 bytes) | |
105 | adding [s] data/D.i (66 bytes) |
|
105 | adding [s] data/D.i (66 bytes) | |
106 | adding [s] data/E.i (66 bytes) |
|
106 | adding [s] data/E.i (66 bytes) | |
107 | adding [s] phaseroots (43 bytes) |
|
107 | adding [s] phaseroots (43 bytes) | |
108 | adding [s] 00manifest.i (584 bytes) |
|
108 | adding [s] 00manifest.i (584 bytes) | |
109 | adding [s] 00changelog.i (595 bytes) |
|
109 | adding [s] 00changelog.i (595 bytes) | |
110 | adding [c] branch2-served (94 bytes) |
|
110 | adding [c] branch2-served (94 bytes) | |
111 | adding [c] rbc-names-v1 (7 bytes) |
|
111 | adding [c] rbc-names-v1 (7 bytes) | |
112 | adding [c] rbc-revs-v1 (40 bytes) |
|
112 | adding [c] rbc-revs-v1 (40 bytes) | |
113 | transferred 1.65 KB in * seconds (* */sec) (glob) |
|
113 | transferred 1.65 KB in * seconds (* */sec) (glob) | |
114 | bundle2-input-part: total payload size 1840 |
|
114 | bundle2-input-part: total payload size 1840 | |
115 | bundle2-input-bundle: 1 parts total |
|
115 | bundle2-input-bundle: 1 parts total | |
116 | updating the branch cache |
|
116 | updating the branch cache | |
117 | finished applying clone bundle |
|
117 | finished applying clone bundle | |
118 | query 1; heads |
|
118 | query 1; heads | |
119 | sending batch command |
|
119 | sending batch command | |
120 | searching for changes |
|
120 | searching for changes | |
121 | all remote heads known locally |
|
121 | all remote heads known locally | |
122 | no changes found |
|
122 | no changes found | |
123 | sending getbundle command |
|
123 | sending getbundle command | |
124 | bundle2-input-bundle: with-transaction |
|
124 | bundle2-input-bundle: with-transaction | |
125 | bundle2-input-part: "listkeys" (params: 1 mandatory) supported |
|
125 | bundle2-input-part: "listkeys" (params: 1 mandatory) supported | |
126 | bundle2-input-part: "phase-heads" supported |
|
126 | bundle2-input-part: "phase-heads" supported | |
127 | bundle2-input-part: total payload size 24 |
|
127 | bundle2-input-part: total payload size 24 | |
128 | bundle2-input-bundle: 2 parts total |
|
128 | bundle2-input-bundle: 2 parts total | |
129 | checking for updated bookmarks |
|
129 | checking for updated bookmarks | |
130 | updating to branch default |
|
130 | updating to branch default | |
131 | resolving manifests |
|
131 | resolving manifests | |
132 | branchmerge: False, force: False, partial: False |
|
132 | branchmerge: False, force: False, partial: False | |
133 | ancestor: 000000000000, local: 000000000000+, remote: 9bc730a19041 |
|
133 | ancestor: 000000000000, local: 000000000000+, remote: 9bc730a19041 | |
134 | A: remote created -> g |
|
134 | A: remote created -> g | |
135 | getting A |
|
135 | getting A | |
136 | B: remote created -> g |
|
136 | B: remote created -> g | |
137 | getting B |
|
137 | getting B | |
138 | C: remote created -> g |
|
138 | C: remote created -> g | |
139 | getting C |
|
139 | getting C | |
140 | D: remote created -> g |
|
140 | D: remote created -> g | |
141 | getting D |
|
141 | getting D | |
142 | E: remote created -> g |
|
142 | E: remote created -> g | |
143 | getting E |
|
143 | getting E | |
144 | 5 files updated, 0 files merged, 0 files removed, 0 files unresolved |
|
144 | 5 files updated, 0 files merged, 0 files removed, 0 files unresolved | |
145 | updating the branch cache |
|
145 | updating the branch cache | |
146 | (sent 4 HTTP requests and * bytes; received * bytes in responses) (glob) |
|
146 | (sent 4 HTTP requests and * bytes; received * bytes in responses) (glob) | |
147 |
|
147 | |||
148 | $ hg clone --stream http://localhost:$HGPORT stream-clone-explicit --debug |
|
148 | $ hg clone --stream http://localhost:$HGPORT stream-clone-explicit --debug | |
149 | using http://localhost:$HGPORT/ |
|
149 | using http://localhost:$HGPORT/ | |
150 | sending capabilities command |
|
150 | sending capabilities command | |
151 | sending clonebundles command |
|
151 | sending clonebundles_manifest command | |
152 | applying clone bundle from http://localhost:$HGPORT1/bundle.hg |
|
152 | applying clone bundle from http://localhost:$HGPORT1/bundle.hg | |
153 | bundle2-input-bundle: with-transaction |
|
153 | bundle2-input-bundle: with-transaction | |
154 | bundle2-input-part: "stream2" (params: 3 mandatory) supported (stream-v2 !) |
|
154 | bundle2-input-part: "stream2" (params: 3 mandatory) supported (stream-v2 !) | |
155 | bundle2-input-part: "stream3-exp" (params: 3 mandatory) supported (stream-v3 !) |
|
155 | bundle2-input-part: "stream3-exp" (params: 3 mandatory) supported (stream-v3 !) | |
156 | applying stream bundle |
|
156 | applying stream bundle | |
157 | 11 files to transfer, 1.65 KB of data |
|
157 | 11 files to transfer, 1.65 KB of data | |
158 | starting 4 threads for background file closing (?) |
|
158 | starting 4 threads for background file closing (?) | |
159 | starting 4 threads for background file closing (?) |
|
159 | starting 4 threads for background file closing (?) | |
160 | adding [s] data/A.i (66 bytes) |
|
160 | adding [s] data/A.i (66 bytes) | |
161 | adding [s] data/B.i (66 bytes) |
|
161 | adding [s] data/B.i (66 bytes) | |
162 | adding [s] data/C.i (66 bytes) |
|
162 | adding [s] data/C.i (66 bytes) | |
163 | adding [s] data/D.i (66 bytes) |
|
163 | adding [s] data/D.i (66 bytes) | |
164 | adding [s] data/E.i (66 bytes) |
|
164 | adding [s] data/E.i (66 bytes) | |
165 | adding [s] phaseroots (43 bytes) |
|
165 | adding [s] phaseroots (43 bytes) | |
166 | adding [s] 00manifest.i (584 bytes) |
|
166 | adding [s] 00manifest.i (584 bytes) | |
167 | adding [s] 00changelog.i (595 bytes) |
|
167 | adding [s] 00changelog.i (595 bytes) | |
168 | adding [c] branch2-served (94 bytes) |
|
168 | adding [c] branch2-served (94 bytes) | |
169 | adding [c] rbc-names-v1 (7 bytes) |
|
169 | adding [c] rbc-names-v1 (7 bytes) | |
170 | adding [c] rbc-revs-v1 (40 bytes) |
|
170 | adding [c] rbc-revs-v1 (40 bytes) | |
171 | transferred 1.65 KB in * seconds (* */sec) (glob) |
|
171 | transferred 1.65 KB in * seconds (* */sec) (glob) | |
172 | bundle2-input-part: total payload size 1840 |
|
172 | bundle2-input-part: total payload size 1840 | |
173 | bundle2-input-bundle: 1 parts total |
|
173 | bundle2-input-bundle: 1 parts total | |
174 | updating the branch cache |
|
174 | updating the branch cache | |
175 | finished applying clone bundle |
|
175 | finished applying clone bundle | |
176 | query 1; heads |
|
176 | query 1; heads | |
177 | sending batch command |
|
177 | sending batch command | |
178 | searching for changes |
|
178 | searching for changes | |
179 | all remote heads known locally |
|
179 | all remote heads known locally | |
180 | no changes found |
|
180 | no changes found | |
181 | sending getbundle command |
|
181 | sending getbundle command | |
182 | bundle2-input-bundle: with-transaction |
|
182 | bundle2-input-bundle: with-transaction | |
183 | bundle2-input-part: "listkeys" (params: 1 mandatory) supported |
|
183 | bundle2-input-part: "listkeys" (params: 1 mandatory) supported | |
184 | bundle2-input-part: "phase-heads" supported |
|
184 | bundle2-input-part: "phase-heads" supported | |
185 | bundle2-input-part: total payload size 24 |
|
185 | bundle2-input-part: total payload size 24 | |
186 | bundle2-input-bundle: 2 parts total |
|
186 | bundle2-input-bundle: 2 parts total | |
187 | checking for updated bookmarks |
|
187 | checking for updated bookmarks | |
188 | updating to branch default |
|
188 | updating to branch default | |
189 | resolving manifests |
|
189 | resolving manifests | |
190 | branchmerge: False, force: False, partial: False |
|
190 | branchmerge: False, force: False, partial: False | |
191 | ancestor: 000000000000, local: 000000000000+, remote: 9bc730a19041 |
|
191 | ancestor: 000000000000, local: 000000000000+, remote: 9bc730a19041 | |
192 | A: remote created -> g |
|
192 | A: remote created -> g | |
193 | getting A |
|
193 | getting A | |
194 | B: remote created -> g |
|
194 | B: remote created -> g | |
195 | getting B |
|
195 | getting B | |
196 | C: remote created -> g |
|
196 | C: remote created -> g | |
197 | getting C |
|
197 | getting C | |
198 | D: remote created -> g |
|
198 | D: remote created -> g | |
199 | getting D |
|
199 | getting D | |
200 | E: remote created -> g |
|
200 | E: remote created -> g | |
201 | getting E |
|
201 | getting E | |
202 | 5 files updated, 0 files merged, 0 files removed, 0 files unresolved |
|
202 | 5 files updated, 0 files merged, 0 files removed, 0 files unresolved | |
203 | updating the branch cache |
|
203 | updating the branch cache | |
204 | (sent 4 HTTP requests and * bytes; received * bytes in responses) (glob) |
|
204 | (sent 4 HTTP requests and * bytes; received * bytes in responses) (glob) |