wireprotov2: unify file revision collection and linknode derivation...
Gregory Szorc
r40960:08cfa77d default
@@ -1,722 +1,724 b''
**Experimental and under active development**

This section documents the wire protocol commands exposed to transports
using the frame-based protocol. The set of commands exposed through
these transports is distinct from the set of commands exposed to legacy
transports.

The frame-based protocol uses CBOR to encode command execution requests.
All command arguments must be mapped to a specific CBOR data type or set
of CBOR data types.

The response to many commands is also CBOR. There is no common response
format: each command defines its own response format.

TODOs
=====

* Add "node namespace" support to each command. In order to support
  SHA-1 hash transition, we want servers to be able to expose different
  "node namespaces" for the same data. Every command operating on nodes
  should specify which "node namespace" it is operating on and responses
  should encode the "node namespace" accordingly.

Commands
========

The sections below detail all commands available to wire protocol version
2.

branchmap
---------

Obtain heads in named branches.

Receives no arguments.

The response is a map with bytestring keys defining the branch name.
Values are arrays of bytestrings defining raw changeset nodes.

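For example, a decoded ``branchmap`` response could look like the following
(shown here as a Python literal for the decoded CBOR map; the node values are
placeholders rather than real 20-byte hashes)::

  {
    b'default': [b'<20-byte node>', b'<20-byte node>'],
    b'stable': [b'<20-byte node>'],
  }
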
capabilities
------------

Obtain the server's capabilities.

Receives no arguments.

This command is typically called only as part of the handshake during
initial connection establishment.

The response is a map with bytestring keys defining server information.

The defined keys are:

commands
A map defining available wire protocol commands on this server.

Keys in the map are the names of commands that can be invoked. Values
are maps defining information about that command. The bytestring keys
are:

args
(map) Describes arguments accepted by the command.

Keys are bytestrings denoting the argument name.

Values are maps describing the argument. The map has the following
bytestring keys:

default
(varied) The default value for this argument if not specified. Only
present if ``required`` is not true.

required
(boolean) Whether the argument must be specified. Failure to send
required arguments will result in an error executing the command.

type
(bytestring) The type of the argument. e.g. ``bytes`` or ``bool``.

validvalues
(set) Values that are recognized for this argument. Some arguments
only allow a fixed set of values to be specified. These arguments
may advertise that set in this key. If this set is advertised and
a value not in this set is specified, the command should result
in an error.

permissions
An array of permissions required to execute this command.

*
(various) Individual commands may define extra keys that supplement
generic command metadata. See the command definition for more.

framingmediatypes
An array of bytestrings defining the supported framing protocol
media types. Servers will not accept media types not in this list.

pathfilterprefixes
(set of bytestring) Matcher prefixes that are recognized when performing
path filtering. Specifying a path filter whose type/prefix does not
match one in this set will likely be rejected by the server.

rawrepoformats
An array of storage formats the repository is using. This set of
requirements can be used to determine whether a client can read a
*raw* copy of file data available.

redirect
A map declaring potential *content redirects* that may be used by this
server. Contains the following bytestring keys:

targets
(array of maps) Potential redirect targets. Values are maps describing
this target in more detail. Each map has the following bytestring keys:

name
(bytestring) Identifier for this target. The identifier will be used
by clients to uniquely identify this target.

protocol
(bytestring) High-level network protocol. Values can be
``http``, ``https``, ``ssh``, etc.

uris
(array of bytestrings) Representative URIs for this target.

snirequired (optional)
(boolean) Indicates whether Server Name Indication is required
to use this target. Defaults to False.

tlsversions (optional)
(array of bytestring) Indicates which TLS versions are supported by
this target. Values are ``1.1``, ``1.2``, ``1.3``, etc.

hashes
(array of bytestring) Indicates support for hashing algorithms that are
used to ensure content integrity. Values include ``sha1``, ``sha256``,
etc.

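An abridged ``capabilities`` response might decode to something like the
following (a sketch in Python syntax; the exact commands, argument
descriptors, and values vary by server and are illustrative only)::

  {
    b'commands': {
      b'heads': {
        b'args': {
          b'publiconly': {
            b'default': False,
            b'required': False,
            b'type': b'bool',
          },
        },
        b'permissions': [b'pull'],
      },
      b'manifestdata': {
        b'args': {...},
        b'permissions': [b'pull'],
        b'recommendedbatchsize': 10000,
      },
    },
    b'framingmediatypes': [b'application/mercurial-exp-framing-0006'],
    b'pathfilterprefixes': {b'path:', b'rootfilesin:'},
    b'rawrepoformats': [b'generaldelta', b'revlogv1'],
  }
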
changesetdata
-------------

Obtain various data related to changesets.

The command accepts the following arguments:

revisions
(array of maps) Specifies revisions whose data is being requested. Each
value in the array is a map describing revisions. See the
*Revision Specifiers* section below for the format of this map.

Data will be sent for the union of all revisions resolved by all
revision specifiers.

Only revision specifiers operating on changeset revisions are allowed.

fields
(set of bytestring) Which data associated with changelog revisions to
fetch. The following values are recognized:

bookmarks
Bookmarks associated with a revision.

parents
Parent revisions.

phase
The phase state of a revision.

revision
The raw revision data for the changelog entry. The hash of this data
will match the revision's node value.

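As an example, a request for the parents and raw revision data of every
changeset between a known common node and a remote head could be encoded as
(Python syntax for the CBOR argument map; node values are placeholders)::

  {
    b'fields': {b'parents', b'revision'},
    b'revisions': [
      {
        b'type': b'changesetdagrange',
        b'roots': [b'<20-byte common node>'],
        b'heads': [b'<20-byte remote head node>'],
      },
    ],
  }
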
The response bytestream starts with a CBOR map describing the data that follows.
This map has the following bytestring keys:

totalitems
(unsigned integer) Total number of changelog revisions whose data is being
transferred. This maps to the set of revisions in the requested node
range, not the total number of records that follow (see below for why).

Following the map header is a series of 0 or more CBOR values. If values
are present, the first value will always be a map describing a single changeset
revision.

If the ``fieldsfollowing`` key is present, the map will immediately be followed
by N CBOR bytestring values, where N is the number of elements in
``fieldsfollowing``. Each bytestring value corresponds to a field denoted
by ``fieldsfollowing``.

Following the optional bytestring field values is the next revision descriptor
map, or end of stream.

Each revision descriptor map has the following bytestring keys:

node
(bytestring) The node value for this revision. This is the SHA-1 hash of
the raw revision data.

bookmarks (optional)
(array of bytestrings) Bookmarks attached to this revision. Only present
if ``bookmarks`` data is being requested and the revision has bookmarks
attached.

fieldsfollowing (optional)
(array of 2-array) Denotes what fields immediately follow this map. Each
value is an array with 2 elements: the bytestring field name and an unsigned
integer describing the length of the data, in bytes.

If this key isn't present, no special fields will follow this map.

The following fields may be present:

revision
Raw revision data for the changelog entry. Contains a serialized form
of the changeset data, including the author, date, commit message, set
of changed files, manifest node, and other metadata.

Only present if the ``revision`` field was requested.

parents (optional)
(array of bytestrings) The nodes representing the parent revisions of this
revision. Only present if ``parents`` data is being requested.

phase (optional)
(bytestring) The phase that a revision is in. Recognized values are
``secret``, ``draft``, and ``public``. Only present if ``phase`` data
is being requested.

The set of changeset revisions emitted may not match the exact set of
changesets requested. Furthermore, the set of keys present on each
map may vary. This is to facilitate emitting changeset updates as well
as new revisions.

For example, if the request wants ``phase`` and ``revision`` data,
the response may contain entries for each changeset in the common nodes
set with the ``phase`` key and without the ``revision`` key in order
to reflect a phase-only update.

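As a sketch, the response stream for a request asking for ``parents`` and
``revision`` data of two changesets could decode to the following sequence of
CBOR values (nodes and lengths are placeholders)::

  {b'totalitems': 2}

  {
    b'node': b'<20-byte node>',
    b'parents': [b'<20-byte node>', b'<20-byte null node>'],
    b'fieldsfollowing': [[b'revision', 242]],
  }
  <242-byte bytestring with the raw changelog entry>

  {
    b'node': b'<20-byte node>',
    b'parents': [b'<20-byte node>', b'<20-byte null node>'],
    b'fieldsfollowing': [[b'revision', 187]],
  }
  <187-byte bytestring with the raw changelog entry>
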
TODO support different revision selection mechanisms (e.g. non-public, specific
revisions)
TODO support different hash "namespaces" for revisions (e.g. sha-1 versus other)
TODO support emitting obsolescence data
TODO support filtering based on relevant paths (narrow clone)
TODO support hgtagsfnodes cache / tags data
TODO support branch heads cache
TODO consider unifying the query mechanism, e.g. as an array of "query descriptors"
rather than a set of top-level arguments that have semantics when combined.

filedata
--------

Obtain various data related to an individual tracked file.

The command accepts the following arguments:

fields
(set of bytestring) Which data associated with a file to fetch.
The following values are recognized:

linknode
The changeset node introducing this revision.

parents
Parent nodes for the revision.

revision
The raw revision data for a file.

haveparents
(bool) Whether the client has the parent revisions of all requested
nodes. If set, the server may emit revision data as deltas against
any parent revision. If not set, the server MUST only emit deltas for
revisions previously emitted by this command.

False is assumed in the absence of any value.

nodes
(array of bytestrings) File nodes whose data to retrieve.

path
(bytestring) Path of the tracked file whose data to retrieve.

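For example, a request for the parents and revision data of two revisions of a
single file might be encoded as (Python syntax for the CBOR argument map; the
path and nodes are placeholders)::

  {
    b'path': b'dir/file.txt',
    b'nodes': [b'<20-byte file node>', b'<20-byte file node>'],
    b'fields': {b'parents', b'revision'},
    b'haveparents': True,
  }
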
TODO allow specifying revisions via alternate means (such as from
changeset revisions or ranges)

The response bytestream starts with a CBOR map describing the data that
follows. It has the following bytestring keys:

totalitems
(unsigned integer) Total number of file revisions whose data is
being returned.

Following the map header is a series of 0 or more CBOR values. If values
are present, the first value will always be a map describing a single file
revision.

If the ``fieldsfollowing`` key is present, the map will immediately be followed
by N CBOR bytestring values, where N is the number of elements in
``fieldsfollowing``. Each bytestring value corresponds to a field denoted
by ``fieldsfollowing``.

Following the optional bytestring field values is the next revision descriptor
map, or end of stream.

Each revision descriptor map has the following bytestring keys:

node
(bytestring) The node of the file revision whose data is represented.

deltabasenode
(bytestring) Node of the file revision the following delta is against.

Only present if the ``revision`` field is requested and delta data
follows this map.

fieldsfollowing
(array of 2-array) Denotes extra bytestring fields that follow this map.
See the documentation for ``changesetdata`` for semantics.

The following named fields may be present:

``delta``
The delta data to use to construct the fulltext revision.

Only present if the ``revision`` field is requested and a delta is
being emitted. The ``deltabasenode`` top-level key will also be
present if this field is being emitted.

``revision``
The fulltext revision data for this file revision. Only present if the
``revision`` field is requested and a fulltext revision is being emitted.

parents
(array of bytestring) The nodes of the parents of this file revision.

Only present if the ``parents`` field is requested.

When ``revision`` data is requested, the server chooses to emit either fulltext
revision data or a delta. What the server decides can be inferred by looking
for the presence of the ``delta`` or ``revision`` keys in the
``fieldsfollowing`` array.

filesdata
---------

Obtain various data related to multiple tracked files for specific changesets.

This command is similar to ``filedata`` with the main difference being that
individual requests operate on multiple file paths. This allows clients to
request data for multiple paths by issuing a single command.

The command accepts the following arguments:

fields
(set of bytestring) Which data associated with a file to fetch.
The following values are recognized:

linknode
The changeset node introducing this revision.

parents
Parent nodes for the revision.

revision
The raw revision data for a file.

haveparents
(bool) Whether the client has the parent revisions of all requested
nodes.

pathfilter
(map) Defines a filter that determines what file paths are relevant.

See the *Path Filters* section for more.

If the argument is omitted, it is assumed that all paths are relevant.

revisions
(array of maps) Specifies revisions whose data is being requested. Each value
in the array is a map describing revisions. See the *Revision Specifiers*
section below for the format of this map.

Data will be sent for the union of all revisions resolved by all revision
specifiers.

Only revision specifiers operating on changeset revisions are allowed.

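For example, a request for all file data introduced by a single changeset,
restricted to paths under ``lib/``, might be encoded as (Python syntax; the
node is a placeholder)::

  {
    b'fields': {b'linknode', b'parents', b'revision'},
    b'haveparents': True,
    b'pathfilter': {
      b'include': [b'path:lib'],
    },
    b'revisions': [
      {
        b'type': b'changesetexplicit',
        b'nodes': [b'<20-byte changeset node>'],
      },
    ],
  }
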
The response bytestream starts with a CBOR map describing the data that
follows. This map has the following bytestring keys:

totalpaths
(unsigned integer) Total number of paths whose data is being transferred.

totalitems
(unsigned integer) Total number of file revisions whose data is being
transferred.

Following the map header are 0 or more sequences of CBOR values. Each sequence
represents data for a specific tracked path. Each sequence begins with a CBOR
map describing the file data that follows. Following that map are N CBOR values
describing file revision data. The format of this data is identical to that
returned by the ``filedata`` command.

Each sequence's map header has the following bytestring keys:

path
(bytestring) The tracked file path whose data follows.

totalitems
(unsigned integer) Total number of file revisions whose data is being
transferred.

The ``haveparents`` argument has significant implications on the data
transferred.

When ``haveparents`` is true, the command MAY only emit data for file
revisions introduced by the set of changeset revisions whose data is being
requested. In other words, the command may assume that all file revisions
for all relevant paths for ancestors of the requested changeset revisions
are present on the receiver.

When ``haveparents`` is false, the command MUST assume that the receiver
has no file revision data. This means that all referenced file revisions
in the queried set of changeset revisions will be sent.

TODO we want a more complicated mechanism for the client to specify which
ancestor revisions are known. This is needed so intelligent deltas can be
emitted and so updated linknodes can be sent if the client needs to adjust
its linknodes for existing file nodes to older changeset revisions.
TODO we may want to make linknodes an array so multiple changesets can be
marked as introducing a file revision, since this can occur with e.g. hidden
changesets.

heads
-----

Obtain DAG heads in the repository.

The command accepts the following arguments:

publiconly (optional)
(boolean) If set, operate on the DAG for public phase changesets only.
Non-public (i.e. draft) phase DAG heads will not be returned.

The response is a CBOR array of bytestrings defining changeset nodes
of DAG heads. The array can be empty if the repository is empty or no
changesets satisfied the request.

TODO consider exposing phase of heads in response

known
-----

Determine whether a series of changeset nodes is known to the server.

The command accepts the following arguments:

nodes
(array of bytestrings) List of changeset nodes whose presence to
query.

The response is a bytestring where each byte contains a 0 or 1 for the
corresponding requested node at the same index.

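For example, querying three nodes where only the second is known to the server
would produce a three byte response (a sketch, assuming the 0 and 1 values are
encoded as the ASCII characters ``0`` and ``1``)::

  request:  {b'nodes': [b'<node A>', b'<node B>', b'<node C>']}
  response: b'010'
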
TODO use a bit array for even more compact response

listkeys
--------

List values in a specified ``pushkey`` namespace.

The command receives the following arguments:

namespace
(bytestring) Pushkey namespace to query.

The response is a map with bytestring keys and values.

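For example, listing the ``bookmarks`` namespace might return a map of
bookmark names to hex changeset nodes (a sketch; the bookmark name and node
are placeholders)::

  request:  {b'namespace': b'bookmarks'}
  response: {b'@': b'<40-character hex node>'}
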
TODO consider using binary to represent nodes in certain pushkey namespaces.

lookup
------

Try to resolve a value to a changeset revision.

Unlike ``known`` which operates on changeset nodes, lookup operates on
node fragments and other names that a user may use.

The command receives the following arguments:

key
(bytestring) Value to try to resolve.

On success, returns a bytestring containing the resolved node.

manifestdata
------------

Obtain various data related to manifests (which are lists of files in
a revision).

The command accepts the following arguments:

fields
(set of bytestring) Which data associated with manifests to fetch.
The following values are recognized:

parents
Parent nodes for the manifest.

revision
The raw revision data for the manifest.

haveparents
(bool) Whether the client has the parent revisions of all requested
nodes. If set, the server may emit revision data as deltas against
any parent revision. If not set, the server MUST only emit deltas for
revisions previously emitted by this command.

False is assumed in the absence of any value.

nodes
(array of bytestring) Manifest nodes whose data to retrieve.

tree
(bytestring) Path to manifest to retrieve. The empty bytestring represents
the root manifest. All other values represent directories/trees within
the repository.

TODO allow specifying revisions via alternate means (such as from changeset
revisions or ranges)
TODO consider recursive expansion of manifests (with path filtering for
narrow use cases)

The response bytestream starts with a CBOR map describing the data that
follows. It has the following bytestring keys:

totalitems
(unsigned integer) Total number of manifest revisions whose data is
being returned.

Following the map header is a series of 0 or more CBOR values. If values
are present, the first value will always be a map describing a single manifest
revision.

If the ``fieldsfollowing`` key is present, the map will immediately be followed
by N CBOR bytestring values, where N is the number of elements in
``fieldsfollowing``. Each bytestring value corresponds to a field denoted
by ``fieldsfollowing``.

Following the optional bytestring field values is the next revision descriptor
map, or end of stream.

Each revision descriptor map has the following bytestring keys:

node
(bytestring) The node of the manifest revision whose data is represented.

deltabasenode
(bytestring) The node that the delta representation of this revision is
computed against. Only present if the ``revision`` field is requested and
a delta is being emitted.

fieldsfollowing
(array of 2-array) Denotes extra bytestring fields that follow this map.
See the documentation for ``changesetdata`` for semantics.

The following named fields may be present:

``delta``
The delta data to use to construct the fulltext revision.

Only present if the ``revision`` field is requested and a delta is
being emitted. The ``deltabasenode`` top-level key will also be
present if this field is being emitted.

``revision``
The fulltext revision data for this manifest. Only present if the
``revision`` field is requested and a fulltext revision is being emitted.

parents
(array of bytestring) The nodes of the parents of this manifest revision.
Only present if the ``parents`` field is requested.

When ``revision`` data is requested, the server chooses to emit either fulltext
revision data or a delta. What the server decides can be inferred by looking
for the presence of ``delta`` or ``revision`` in the ``fieldsfollowing`` array.

Servers MAY advertise the following extra fields in the capabilities
descriptor for this command:

recommendedbatchsize
(unsigned integer) Number of revisions the server recommends as a batch
query size. If defined, clients needing to issue multiple ``manifestdata``
commands to obtain needed data SHOULD construct their commands to have
this many revisions per request.

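For example, a client honoring ``recommendedbatchsize`` could split a large
set of manifest nodes across multiple ``manifestdata`` requests. The following
sketch assumes a hypothetical ``issuecommand(name, args)`` helper that sends a
single command request over an established connection::

  def fetchmanifests(issuecommand, nodes, batchsize):
      # Issue one manifestdata command per batch of the recommended size.
      for i in range(0, len(nodes), batchsize):
          issuecommand(b'manifestdata', {
              b'tree': b'',
              b'nodes': nodes[i:i + batchsize],
              b'fields': {b'parents', b'revision'},
              b'haveparents': True,
          })
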
pushkey
-------

Set a value using the ``pushkey`` protocol.

The command receives the following arguments:

namespace
(bytestring) Pushkey namespace to operate on.
key
(bytestring) The pushkey key to set.
old
(bytestring) Old value for this key.
new
(bytestring) New value for this key.

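For example, moving a bookmark through the ``bookmarks`` namespace might send
(a sketch; the hex nodes are placeholders)::

  {
    b'namespace': b'bookmarks',
    b'key': b'@',
    b'old': b'<40-character hex node>',
    b'new': b'<40-character hex node>',
  }
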
TODO consider using binary to represent nodes in certain pushkey namespaces.
TODO better define response type and meaning.

rawstorefiledata
----------------

Allows retrieving raw files used to store repository data.

The command accepts the following arguments:

files
(array of bytestring) Describes the files that should be retrieved.

The meaning of values in this array is dependent on the storage backend used
by the server.

The response bytestream starts with a CBOR map describing the data that follows.
This map has the following bytestring keys:

filecount
(unsigned integer) Total number of files whose data is being transferred.

totalsize
(unsigned integer) Total size in bytes of the file data that will be
transferred. This is the on-disk file size, not the wire size.

Following the map header are N file segments. Each file segment consists of a
CBOR map followed by an indefinite length bytestring. Each map has the following
bytestring keys:

location
(bytestring) Denotes the location in the repository where the file should be
written. Values map to vfs instances to use for the writing.

path
(bytestring) Path of file being transferred. Path is the raw store
path and can be any sequence of bytes that can be tracked in a Mercurial
manifest.

size
(unsigned integer) Size of file data. This will be the final written
file size. The total size of the data that follows the CBOR map
will be greater due to encoding overhead of CBOR.

TODO this command is woefully incomplete. If we are to move forward with a
stream clone analog, it needs a lot more metadata around how to describe what
files are available to retrieve, and other semantics.

Revision Specifiers
===================

A *revision specifier* is a map that evaluates to a set of revisions.

A *revision specifier* has a ``type`` key that defines the revision
selection type to perform. Other keys in the map are used in a
type-specific manner.

The following types are defined:

changesetexplicit
An explicit set of enumerated changeset revisions.

The ``nodes`` key MUST contain an array of full binary nodes, expressed
as bytestrings.

changesetexplicitdepth
Like ``changesetexplicit``, but contains a ``depth`` key defining the
unsigned integer number of ancestor revisions to also resolve. For each
value in ``nodes``, DAG ancestors will be walked until up to N total
revisions from that ancestry walk are present in the final resolved set.

changesetdagrange
Defines revisions via a DAG range of changesets on the changelog.

The ``roots`` key MUST contain an array of full, binary node values
representing the *root* revisions.

The ``heads`` key MUST contain an array of full, binary node values
representing the *head* revisions.

The DAG range between ``roots`` and ``heads`` will be resolved and all
revisions between will be used. Nodes in ``roots`` are not part of the
resolved set. Nodes in ``heads`` are. The ``roots`` array may be empty.
The ``heads`` array MUST be defined.

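The following sketch shows one specifier of each type (Python syntax; node
values are placeholders)::

  {b'type': b'changesetexplicit',
   b'nodes': [b'<20-byte node>']}

  {b'type': b'changesetexplicitdepth',
   b'nodes': [b'<20-byte node>'],
   b'depth': 10}

  {b'type': b'changesetdagrange',
   b'roots': [],
   b'heads': [b'<20-byte node>']}
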
Path Filters
============

Various commands accept a *path filter* argument that defines the set of file
paths relevant to the request.

A *path filter* is defined as a map with the bytestring keys ``include`` and
``exclude``. Each is an array of bytestring values. Each value defines a pattern
rule (see :hg:`help patterns`) that is used to match file paths.

A path matches the path filter if it is matched by a rule in the ``include``
set but doesn't match a rule in the ``exclude`` set. In other words, a path
matcher takes the union of all ``include`` patterns and then subtracts the
union of all ``exclude`` patterns.

Patterns MUST be prefixed with their pattern type. Only the following pattern
types are allowed: ``path:``, ``rootfilesin:``.

If the ``include`` key is omitted, it is assumed that all paths are
relevant. The patterns from ``exclude`` will still be used, if defined.

An example value is ``path:tests/foo``, which would match a file named
``tests/foo`` or a directory ``tests/foo`` and all files under it.
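
For example, the following path filter matches everything under ``lib/``
except files directly inside ``lib/tests`` (a sketch)::

  {
    b'include': [b'path:lib'],
    b'exclude': [b'rootfilesin:lib/tests'],
  }
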
@@ -1,1465 +1,1455 b''
# Copyright 21 May 2005 - (c) 2005 Jake Edge <jake@edge2.net>
# Copyright 2005-2007 Matt Mackall <mpm@selenic.com>
#
# This software may be used and distributed according to the terms of the
# GNU General Public License version 2 or any later version.

from __future__ import absolute_import

import collections
import contextlib
import hashlib

from .i18n import _
from .node import (
    hex,
    nullid,
)
from . import (
    discovery,
    encoding,
    error,
    match as matchmod,
    narrowspec,
    pycompat,
    streamclone,
    util,
    wireprotoframing,
    wireprototypes,
)
from .utils import (
    cborutil,
    interfaceutil,
    stringutil,
)

FRAMINGTYPE = b'application/mercurial-exp-framing-0006'

HTTP_WIREPROTO_V2 = wireprototypes.HTTP_WIREPROTO_V2

COMMANDS = wireprototypes.commanddict()

# Value inserted into cache key computation function. Change the value to
# force new cache keys for every command request. This should be done when
# there is a change to how caching works, etc.
GLOBAL_CACHE_VERSION = 1

def handlehttpv2request(rctx, req, res, checkperm, urlparts):
    from .hgweb import common as hgwebcommon

    # URL space looks like: <permissions>/<command>, where <permission> can
    # be ``ro`` or ``rw`` to signal read-only or read-write, respectively.

    # Root URL does nothing meaningful... yet.
    if not urlparts:
        res.status = b'200 OK'
        res.headers[b'Content-Type'] = b'text/plain'
        res.setbodybytes(_('HTTP version 2 API handler'))
        return

    if len(urlparts) == 1:
        res.status = b'404 Not Found'
        res.headers[b'Content-Type'] = b'text/plain'
        res.setbodybytes(_('do not know how to process %s\n') %
                         req.dispatchpath)
        return

    permission, command = urlparts[0:2]

    if permission not in (b'ro', b'rw'):
        res.status = b'404 Not Found'
        res.headers[b'Content-Type'] = b'text/plain'
        res.setbodybytes(_('unknown permission: %s') % permission)
        return

    if req.method != 'POST':
        res.status = b'405 Method Not Allowed'
        res.headers[b'Allow'] = b'POST'
        res.setbodybytes(_('commands require POST requests'))
        return

    # At some point we'll want to use our own API instead of recycling the
    # behavior of version 1 of the wire protocol...
    # TODO return reasonable responses - not responses that overload the
    # HTTP status line message for error reporting.
    try:
        checkperm(rctx, req, 'pull' if permission == b'ro' else 'push')
    except hgwebcommon.ErrorResponse as e:
        res.status = hgwebcommon.statusmessage(e.code, pycompat.bytestr(e))
        for k, v in e.headers:
            res.headers[k] = v
        res.setbodybytes('permission denied')
        return

    # We have a special endpoint to reflect the request back at the client.
    if command == b'debugreflect':
        _processhttpv2reflectrequest(rctx.repo.ui, rctx.repo, req, res)
        return

    # Extra commands that we handle that aren't really wire protocol
    # commands. Think extra hard before making this hackery available to
    # extension.
    extracommands = {'multirequest'}

    if command not in COMMANDS and command not in extracommands:
        res.status = b'404 Not Found'
        res.headers[b'Content-Type'] = b'text/plain'
        res.setbodybytes(_('unknown wire protocol command: %s\n') % command)
        return

    repo = rctx.repo
    ui = repo.ui

    proto = httpv2protocolhandler(req, ui)

    if (not COMMANDS.commandavailable(command, proto)
        and command not in extracommands):
        res.status = b'404 Not Found'
        res.headers[b'Content-Type'] = b'text/plain'
        res.setbodybytes(_('invalid wire protocol command: %s') % command)
        return

    # TODO consider cases where proxies may add additional Accept headers.
    if req.headers.get(b'Accept') != FRAMINGTYPE:
        res.status = b'406 Not Acceptable'
        res.headers[b'Content-Type'] = b'text/plain'
        res.setbodybytes(_('client MUST specify Accept header with value: %s\n')
                         % FRAMINGTYPE)
127 % FRAMINGTYPE)
128 return
128 return
129
129
130 if req.headers.get(b'Content-Type') != FRAMINGTYPE:
130 if req.headers.get(b'Content-Type') != FRAMINGTYPE:
131 res.status = b'415 Unsupported Media Type'
131 res.status = b'415 Unsupported Media Type'
132 # TODO we should send a response with appropriate media type,
132 # TODO we should send a response with appropriate media type,
133 # since client does Accept it.
133 # since client does Accept it.
134 res.headers[b'Content-Type'] = b'text/plain'
134 res.headers[b'Content-Type'] = b'text/plain'
135 res.setbodybytes(_('client MUST send Content-Type header with '
135 res.setbodybytes(_('client MUST send Content-Type header with '
136 'value: %s\n') % FRAMINGTYPE)
136 'value: %s\n') % FRAMINGTYPE)
137 return
137 return
138
138
139 _processhttpv2request(ui, repo, req, res, permission, command, proto)
139 _processhttpv2request(ui, repo, req, res, permission, command, proto)
140
140
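# Illustrative sketch (not part of the original module): the request shape
# that handlehttpv2request() above will accept. Only the trailing
# "<permission>/<command>" URL parts and the header/method checks are taken
# from the code above; any URL prefix in front of them is server specific
# and the command name shown is just an example.
_EXAMPLE_REQUEST_SHAPE = {
    'method': 'POST',                      # anything else -> 405
    'urlparts': (b'ro', b'capabilities'),  # <permission>/<command>
    'headers': {
        b'Accept': FRAMINGTYPE,            # otherwise -> 406 Not Acceptable
        b'Content-Type': FRAMINGTYPE,      # otherwise -> 415 Unsupported Media Type
    },
    # The request body consists of frames understood by wireprotoframing,
    # carrying a CBOR-encoded command request.
}
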
def _processhttpv2reflectrequest(ui, repo, req, res):
    """Reads unified frame protocol request and dumps out state to client.

    This special endpoint can be used to help debug the wire protocol.

    Instead of routing the request through the normal dispatch mechanism,
    we instead read all frames, decode them, and feed them into our state
    tracker. We then dump the log of all that activity back out to the
    client.
    """
    import json

    # Reflection APIs have a history of being abused, accidentally disclosing
    # sensitive data, etc. So we have a config knob.
    if not ui.configbool('experimental', 'web.api.debugreflect'):
        res.status = b'404 Not Found'
        res.headers[b'Content-Type'] = b'text/plain'
        res.setbodybytes(_('debugreflect service not available'))
        return

    # We assume we have a unified framing protocol request body.

    reactor = wireprotoframing.serverreactor(ui)
    states = []

    while True:
        frame = wireprotoframing.readframe(req.bodyfh)

        if not frame:
            states.append(b'received: <no frame>')
            break

        states.append(b'received: %d %d %d %s' % (frame.typeid, frame.flags,
                                                  frame.requestid,
                                                  frame.payload))

        action, meta = reactor.onframerecv(frame)
        states.append(json.dumps((action, meta), sort_keys=True,
                                 separators=(', ', ': ')))

    action, meta = reactor.oninputeof()
    meta['action'] = action
    states.append(json.dumps(meta, sort_keys=True, separators=(', ', ': ')))

    res.status = b'200 OK'
    res.headers[b'Content-Type'] = b'text/plain'
    res.setbodybytes(b'\n'.join(states))

def _processhttpv2request(ui, repo, req, res, authedperm, reqcommand, proto):
    """Post-validation handler for HTTPv2 requests.

    Called when the HTTP request contains unified frame-based protocol
    frames for evaluation.
    """
    # TODO Some HTTP clients are full duplex and can receive data before
    # the entire request is transmitted. Figure out a way to indicate support
    # for that so we can opt into full duplex mode.
    reactor = wireprotoframing.serverreactor(ui, deferoutput=True)
    seencommand = False

    outstream = None

    while True:
        frame = wireprotoframing.readframe(req.bodyfh)
        if not frame:
            break

        action, meta = reactor.onframerecv(frame)

        if action == 'wantframe':
            # Need more data before we can do anything.
            continue
        elif action == 'runcommand':
            # Defer creating output stream because we need to wait for
            # protocol settings frames so proper encoding can be applied.
            if not outstream:
                outstream = reactor.makeoutputstream()

            sentoutput = _httpv2runcommand(ui, repo, req, res, authedperm,
                                           reqcommand, reactor, outstream,
                                           meta, issubsequent=seencommand)

            if sentoutput:
                return

            seencommand = True

        elif action == 'error':
            # TODO define proper error mechanism.
            res.status = b'200 OK'
            res.headers[b'Content-Type'] = b'text/plain'
            res.setbodybytes(meta['message'] + b'\n')
            return
        else:
            raise error.ProgrammingError(
                'unhandled action from frame processor: %s' % action)

    action, meta = reactor.oninputeof()
    if action == 'sendframes':
        # We assume we haven't started sending the response yet. If we're
        # wrong, the response type will raise an exception.
        res.status = b'200 OK'
        res.headers[b'Content-Type'] = FRAMINGTYPE
        res.setbodygen(meta['framegen'])
    elif action == 'noop':
        pass
    else:
        raise error.ProgrammingError('unhandled action from frame processor: %s'
                                     % action)

def _httpv2runcommand(ui, repo, req, res, authedperm, reqcommand, reactor,
                      outstream, command, issubsequent):
    """Dispatch a wire protocol command made from HTTPv2 requests.

    The authenticated permission (``authedperm``) along with the original
    command from the URL (``reqcommand``) are passed in.
    """
    # We already validated that the session has permissions to perform the
    # actions in ``authedperm``. In the unified frame protocol, the canonical
    # command to run is expressed in a frame. However, the URL also requested
    # to run a specific command. We need to be careful that the command we
    # run doesn't have permissions requirements greater than what was granted
    # by ``authedperm``.
    #
    # Our rule for this is we only allow one command per HTTP request and
    # that command must match the command in the URL. However, we make
    # an exception for the ``multirequest`` URL. This URL is allowed to
    # execute multiple commands. We double check permissions of each command
    # as it is invoked to ensure there is no privilege escalation.
    # TODO consider allowing multiple commands to regular command URLs
    # iff each command is the same.

    proto = httpv2protocolhandler(req, ui, args=command['args'])

    if reqcommand == b'multirequest':
        if not COMMANDS.commandavailable(command['command'], proto):
            # TODO proper error mechanism
            res.status = b'200 OK'
            res.headers[b'Content-Type'] = b'text/plain'
            res.setbodybytes(_('wire protocol command not available: %s') %
                             command['command'])
            return True

        # TODO don't use assert here, since it may be elided by -O.
        assert authedperm in (b'ro', b'rw')
        wirecommand = COMMANDS[command['command']]
        assert wirecommand.permission in ('push', 'pull')

        if authedperm == b'ro' and wirecommand.permission != 'pull':
            # TODO proper error mechanism
            res.status = b'403 Forbidden'
            res.headers[b'Content-Type'] = b'text/plain'
            res.setbodybytes(_('insufficient permissions to execute '
                               'command: %s') % command['command'])
            return True

        # TODO should we also call checkperm() here? Maybe not if we're going
        # to overhaul that API. The granted scope from the URL check should
        # be good enough.

    else:
        # Don't allow multiple commands outside of ``multirequest`` URL.
        if issubsequent:
            # TODO proper error mechanism
            res.status = b'200 OK'
            res.headers[b'Content-Type'] = b'text/plain'
            res.setbodybytes(_('multiple commands cannot be issued to this '
                               'URL'))
            return True

        if reqcommand != command['command']:
            # TODO define proper error mechanism
            res.status = b'200 OK'
            res.headers[b'Content-Type'] = b'text/plain'
            res.setbodybytes(_('command in frame must match command in URL'))
            return True

    res.status = b'200 OK'
    res.headers[b'Content-Type'] = FRAMINGTYPE

    try:
        objs = dispatch(repo, proto, command['command'], command['redirect'])

        action, meta = reactor.oncommandresponsereadyobjects(
            outstream, command['requestid'], objs)

    except error.WireprotoCommandError as e:
        action, meta = reactor.oncommanderror(
            outstream, command['requestid'], e.message, e.messageargs)

    except Exception as e:
        action, meta = reactor.onservererror(
            outstream, command['requestid'],
            _('exception when invoking command: %s') %
            stringutil.forcebytestr(e))

    if action == 'sendframes':
        res.setbodygen(meta['framegen'])
        return True
    elif action == 'noop':
        return False
    else:
        raise error.ProgrammingError('unhandled event from reactor: %s' %
                                     action)

def getdispatchrepo(repo, proto, command):
    return repo.filtered('served')

def dispatch(repo, proto, command, redirect):
    """Run a wire protocol command.

    Returns an iterable of objects that will be sent to the client.
    """
    repo = getdispatchrepo(repo, proto, command)

    entry = COMMANDS[command]
    func = entry.func
    spec = entry.args

    args = proto.getargs(spec)

    # There is some duplicate boilerplate code here for calling the command and
    # emitting objects. It is either that or a lot of indented code that looks
    # like a pyramid (since there are a lot of code paths that result in not
    # using the cacher).
    callcommand = lambda: func(repo, proto, **pycompat.strkwargs(args))

    # Request is not cacheable. Don't bother instantiating a cacher.
    if not entry.cachekeyfn:
        for o in callcommand():
            yield o
        return

    if redirect:
        redirecttargets = redirect[b'targets']
        redirecthashes = redirect[b'hashes']
    else:
        redirecttargets = []
        redirecthashes = []

    cacher = makeresponsecacher(repo, proto, command, args,
                                cborutil.streamencode,
                                redirecttargets=redirecttargets,
                                redirecthashes=redirecthashes)

    # But we have no cacher. Do default handling.
    if not cacher:
        for o in callcommand():
            yield o
        return

    with cacher:
        cachekey = entry.cachekeyfn(repo, proto, cacher, **args)

        # No cache key or the cacher doesn't like it. Do default handling.
        if cachekey is None or not cacher.setcachekey(cachekey):
            for o in callcommand():
                yield o
            return

        # Serve it from the cache, if possible.
        cached = cacher.lookup()

        if cached:
            for o in cached['objs']:
                yield o
            return

        # Else call the command and feed its output into the cacher, allowing
        # the cacher to buffer/mutate objects as it desires.
        for o in callcommand():
            for o in cacher.onobject(o):
                yield o

        for o in cacher.onfinished():
            yield o

@interfaceutil.implementer(wireprototypes.baseprotocolhandler)
class httpv2protocolhandler(object):
    def __init__(self, req, ui, args=None):
        self._req = req
        self._ui = ui
        self._args = args

    @property
    def name(self):
        return HTTP_WIREPROTO_V2

    def getargs(self, args):
        # First look for args that were passed but aren't registered on this
        # command.
        extra = set(self._args) - set(args)
        if extra:
            raise error.WireprotoCommandError(
                'unsupported argument to command: %s' %
                ', '.join(sorted(extra)))

        # And look for required arguments that are missing.
        missing = {a for a in args if args[a]['required']} - set(self._args)

        if missing:
            raise error.WireprotoCommandError(
                'missing required arguments: %s' % ', '.join(sorted(missing)))

        # Now derive the arguments to pass to the command, taking into
        # account the arguments specified by the client.
        data = {}
        for k, meta in sorted(args.items()):
            # This argument wasn't passed by the client.
            if k not in self._args:
                data[k] = meta['default']()
                continue

            v = self._args[k]

            # Sets may be expressed as lists. Silently normalize.
            if meta['type'] == 'set' and isinstance(v, list):
                v = set(v)

            # TODO consider more/stronger type validation.

            data[k] = v

        return data

    def getprotocaps(self):
        # Protocol capabilities are currently not implemented for HTTP V2.
        return set()

    def getpayload(self):
        raise NotImplementedError

    @contextlib.contextmanager
    def mayberedirectstdio(self):
        raise NotImplementedError

    def client(self):
        raise NotImplementedError

    def addcapabilities(self, repo, caps):
        return caps

    def checkperm(self, perm):
        raise NotImplementedError

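# Illustrative sketch (not part of the original module): how getargs() above
# normalizes client-supplied arguments against an argument spec. getargs()
# only consults the ``args`` passed at construction time, so None
# placeholders for req and ui suffice here. The spec and values below are
# invented for demonstration and the function is never called by the module.
def _examplegetargs():
    spec = {
        'fields': {
            'type': 'set',
            'required': False,
            'default': set,
        },
        'path': {
            'type': 'bytes',
            'required': True,
            'default': lambda: None,
        },
    }
    # The client sent only ``path``; ``fields`` falls back to its default.
    # A list sent for a ``set`` argument would be silently converted.
    proto = httpv2protocolhandler(None, None, args={'path': b'foo'})
    return proto.getargs(spec)
    # -> {'fields': set(), 'path': b'foo'}
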
def httpv2apidescriptor(req, repo):
    proto = httpv2protocolhandler(req, repo.ui)

    return _capabilitiesv2(repo, proto)

def _capabilitiesv2(repo, proto):
    """Obtain the set of capabilities for version 2 transports.

    These capabilities are distinct from the capabilities for version 1
    transports.
    """
    caps = {
        'commands': {},
        'framingmediatypes': [FRAMINGTYPE],
        'pathfilterprefixes': set(narrowspec.VALID_PREFIXES),
    }

    for command, entry in COMMANDS.items():
        args = {}

        for arg, meta in entry.args.items():
            args[arg] = {
                # TODO should this be a normalized type using CBOR's
                # terminology?
                b'type': meta['type'],
                b'required': meta['required'],
            }

            if not meta['required']:
                args[arg][b'default'] = meta['default']()

            if meta['validvalues']:
                args[arg][b'validvalues'] = meta['validvalues']

        # TODO this type of check should be defined in a per-command callback.
        if (command == b'rawstorefiledata'
            and not streamclone.allowservergeneration(repo)):
            continue

        caps['commands'][command] = {
            'args': args,
            'permissions': [entry.permission],
        }

        if entry.extracapabilitiesfn:
            extracaps = entry.extracapabilitiesfn(repo, proto)
            caps['commands'][command].update(extracaps)

    caps['rawrepoformats'] = sorted(repo.requirements &
                                    repo.supportedformats)

    targets = getadvertisedredirecttargets(repo, proto)
    if targets:
        caps[b'redirect'] = {
            b'targets': [],
            b'hashes': [b'sha256', b'sha1'],
        }

        for target in targets:
            entry = {
                b'name': target['name'],
                b'protocol': target['protocol'],
                b'uris': target['uris'],
            }

            for key in ('snirequired', 'tlsversions'):
                if key in target:
                    entry[key] = target[key]

            caps[b'redirect'][b'targets'].append(entry)

    return proto.addcapabilities(repo, caps)

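# Illustrative sketch (not part of the original module): the rough shape of
# the capabilities data assembled by _capabilitiesv2() above for a server
# advertising a single command and no redirect targets. The command entry
# and the repository requirement names shown are example values.
_EXAMPLE_CAPABILITIES_SHAPE = {
    'commands': {
        b'capabilities': {
            'args': {},
            'permissions': ['pull'],
        },
    },
    'framingmediatypes': [FRAMINGTYPE],
    'pathfilterprefixes': set(narrowspec.VALID_PREFIXES),
    'rawrepoformats': ['generaldelta', 'revlogv1'],
    # A b'redirect' key containing b'targets' and b'hashes' is added when
    # getadvertisedredirecttargets() returns any entries.
}
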
def getadvertisedredirecttargets(repo, proto):
    """Obtain a list of content redirect targets.

    Returns a list containing potential redirect targets that will be
    advertised in capabilities data. Each dict MUST have the following
    keys:

    name
       The name of this redirect target. This is the identifier clients use
       to refer to a target. It is transferred as part of every command
       request.

    protocol
       Network protocol used by this target. Typically this is the string
       in front of the ``://`` in a URL. e.g. ``https``.

    uris
       List of representative URIs for this target. Clients can use the
       URIs to test parsing for compatibility or for ordering preference
       for which target to use.

    The following optional keys are recognized:

    snirequired
       Bool indicating if Server Name Indication (SNI) is required to
       connect to this target.

    tlsversions
       List of bytes indicating which TLS versions are supported by this
       target.

    By default, clients reflect the target order advertised by servers
    and servers will use the first client-advertised target when picking
    a redirect target. So targets should be advertised in the order the
    server prefers they be used.
    """
    return []

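# Illustrative sketch (not part of the original module): what an extension
# wrapping getadvertisedredirecttargets() might return to advertise a
# hypothetical CDN endpoint. The name, protocol, URIs, and TLS details are
# invented; only the key layout comes from the docstring above and from the
# way _capabilitiesv2() consumes each target.
def _exampleredirecttargets(repo, proto):
    return [{
        'name': b'cdn',
        'protocol': b'https',
        'uris': [b'https://cdn.example.com/'],
        # Optional keys recognized by _capabilitiesv2():
        'snirequired': True,
        'tlsversions': [b'1.2', b'1.3'],
    }]
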
def wireprotocommand(name, args=None, permission='push', cachekeyfn=None,
                     extracapabilitiesfn=None):
    """Decorator to declare a wire protocol command.

    ``name`` is the name of the wire protocol command being provided.

    ``args`` is a dict defining arguments accepted by the command. Keys are
    the argument name. Values are dicts with the following keys:

       ``type``
          The argument data type. Must be one of the following string
          literals: ``bytes``, ``int``, ``list``, ``dict``, ``set``,
          or ``bool``.

       ``default``
          A callable returning the default value for this argument. If not
          specified, ``None`` will be the default value.

       ``example``
          An example value for this argument.

       ``validvalues``
          Set of recognized values for this argument.

    ``permission`` defines the permission type needed to run this command.
    Can be ``push`` or ``pull``. These roughly map to read-write and read-only,
    respectively. Default is to assume command requires ``push`` permissions
    because otherwise commands not declaring their permissions could modify
    a repository that is supposed to be read-only.

    ``cachekeyfn`` defines an optional callable that can derive the
    cache key for this request.

    ``extracapabilitiesfn`` defines an optional callable that defines extra
    command capabilities/parameters that are advertised next to the command
    in the capabilities data structure describing the server. The callable
    receives as arguments the repository and protocol objects. It returns
    a dict of extra fields to add to the command descriptor.

    Wire protocol commands are generators of objects to be serialized and
    sent to the client.

    If a command raises an uncaught exception, this will be translated into
    a command error.

    All commands can opt in to being cacheable by defining a function
    (``cachekeyfn``) that is called to derive a cache key. This function
    receives the same arguments as the command itself plus a ``cacher``
    argument containing the active cacher for the request and returns bytes
    containing the cache key that the response to this command may be cached
    under.
    """
    transports = {k for k, v in wireprototypes.TRANSPORTS.items()
                  if v['version'] == 2}

    if permission not in ('push', 'pull'):
        raise error.ProgrammingError('invalid wire protocol permission; '
                                     'got %s; expected "push" or "pull"' %
                                     permission)

    if args is None:
        args = {}

    if not isinstance(args, dict):
        raise error.ProgrammingError('arguments for version 2 commands '
                                     'must be declared as dicts')

    for arg, meta in args.items():
        if arg == '*':
            raise error.ProgrammingError('* argument name not allowed on '
                                         'version 2 commands')

        if not isinstance(meta, dict):
            raise error.ProgrammingError('arguments for version 2 commands '
                                         'must declare metadata as a dict')

        if 'type' not in meta:
            raise error.ProgrammingError('%s argument for command %s does not '
                                         'declare type field' % (arg, name))

        if meta['type'] not in ('bytes', 'int', 'list', 'dict', 'set', 'bool'):
            raise error.ProgrammingError('%s argument for command %s has '
                                         'illegal type: %s' % (arg, name,
                                                               meta['type']))

        if 'example' not in meta:
            raise error.ProgrammingError('%s argument for command %s does not '
                                         'declare example field' % (arg, name))

        meta['required'] = 'default' not in meta

        meta.setdefault('default', lambda: None)
        meta.setdefault('validvalues', None)

    def register(func):
        if name in COMMANDS:
            raise error.ProgrammingError('%s command already registered '
                                         'for version 2' % name)

        COMMANDS[name] = wireprototypes.commandentry(
            func, args=args, transports=transports, permission=permission,
            cachekeyfn=cachekeyfn, extracapabilitiesfn=extracapabilitiesfn)

        return func

    return register

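# Illustrative sketch (not part of the original module): registering a
# hypothetical read-only command with the decorator above. The command name
# 'exampleheads' and its argument are invented; the registration is wrapped
# in a function so merely reading this sketch has no side effects on
# COMMANDS.
def _registerexamplecommand():
    @wireprotocommand(
        'exampleheads',
        args={
            'publiconly': {
                'type': 'bool',
                'default': lambda: False,
                'example': True,
            },
        },
        permission='pull')
    def exampleheads(repo, proto, publiconly):
        # Commands are generators of CBOR-encodable objects.
        heads = repo.heads()
        if publiconly:
            heads = [n for n in heads if repo[n].phasestr() == 'public']
        yield {b'heads': heads}
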
def makecommandcachekeyfn(command, localversion=None, allargs=False):
    """Construct a cache key derivation function with common features.

    By default, the cache key is a hash of:

    * The command name.
    * A global cache version number.
    * A local cache version number (passed via ``localversion``).
    * All the arguments passed to the command.
    * The media type used.
    * Wire protocol version string.
    * The repository path.
    """
    if not allargs:
        raise error.ProgrammingError('only allargs=True is currently supported')

    if localversion is None:
        raise error.ProgrammingError('must set localversion argument value')

    def cachekeyfn(repo, proto, cacher, **args):
        spec = COMMANDS[command]

        # Commands that mutate the repo can not be cached.
        if spec.permission == 'push':
            return None

        # TODO config option to disable caching.

        # Our key derivation strategy is to construct a data structure
        # holding everything that could influence cacheability and to hash
        # the CBOR representation of that. Using CBOR seems like it might
        # be overkill. However, simpler hashing mechanisms are prone to
        # duplicate input issues. e.g. if you just concatenate two values,
        # "foo"+"bar" is identical to "fo"+"obar". Using CBOR provides
        # "padding" between values and prevents these problems.

        # Seed the hash with various data.
        state = {
            # To invalidate all cache keys.
            b'globalversion': GLOBAL_CACHE_VERSION,
            # More granular cache key invalidation.
            b'localversion': localversion,
            # Cache keys are segmented by command.
            b'command': pycompat.sysbytes(command),
            # Throw in the media type and API version strings so changes
            # to exchange semantics invalidate the cache.
            b'mediatype': FRAMINGTYPE,
            b'version': HTTP_WIREPROTO_V2,
            # So same requests for different repos don't share cache keys.
            b'repo': repo.root,
        }

        # The arguments passed to us will have already been normalized.
        # Default values will be set, etc. This is important because it
        # means that it doesn't matter if clients send an explicit argument
        # or rely on the default value: it will all normalize to the same
        # set of arguments on the server and therefore the same cache key.
        #
        # Arguments by their very nature must support being encoded to CBOR.
        # And the CBOR encoder is deterministic. So we hash the arguments
        # by feeding the CBOR of their representation into the hasher.
        if allargs:
            state[b'args'] = pycompat.byteskwargs(args)

        cacher.adjustcachekeystate(state)

        hasher = hashlib.sha1()
        for chunk in cborutil.streamencode(state):
            hasher.update(chunk)

        return pycompat.sysbytes(hasher.hexdigest())

    return cachekeyfn

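# Illustrative sketch (not part of the original module): opting a
# hypothetical pull command into caching by passing a key derivation
# function built with makecommandcachekeyfn() above. The command name
# 'exampledata' and localversion=1 are invented; as before, the
# registration is wrapped in a function so it only happens when called.
def _registerexamplecachedcommand():
    @wireprotocommand(
        'exampledata',
        args={
            'nodes': {
                'type': 'list',
                'default': list,
                'example': [b'0123456789abcdef0123'],
            },
        },
        permission='pull',
        cachekeyfn=makecommandcachekeyfn('exampledata', localversion=1,
                                         allargs=True))
    def exampledata(repo, proto, nodes):
        yield {b'totalitems': len(nodes)}
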
def makeresponsecacher(repo, proto, command, args, objencoderfn,
                       redirecttargets, redirecthashes):
    """Construct a cacher for a cacheable command.

    Returns an ``iwireprotocolcommandcacher`` instance.

    Extensions can monkeypatch this function to provide custom caching
    backends.
    """
    return None

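# Illustrative sketch (not part of the original module): a minimal in-memory
# cacher exposing the methods that dispatch() and cachekeyfn() above rely on
# (context manager protocol, adjustcachekeystate, setcachekey, lookup,
# onobject, onfinished). A real extension would implement the
# ``iwireprotocolcommandcacher`` interface and monkeypatch
# makeresponsecacher(); the dict-backed storage here is purely for
# demonstration and is never instantiated by this module.
class _examplememorycacher(object):
    _cache = {}  # shared across instances; acceptable for a sketch only

    def __init__(self):
        self._key = None
        self._pending = []

    def __enter__(self):
        return self

    def __exit__(self, exctype, excvalue, exctb):
        self._pending = []

    def adjustcachekeystate(self, state):
        # Hook for mixing extra data into the cache key state dict.
        pass

    def setcachekey(self, key):
        self._key = key
        return True

    def lookup(self):
        if self._key in self._cache:
            return {'objs': self._cache[self._key]}
        return None

    def onobject(self, obj):
        # Buffer the object and pass it through to the client unchanged.
        self._pending.append(obj)
        yield obj

    def onfinished(self):
        self._cache[self._key] = self._pending
        self._pending = []
        return []
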
def resolvenodes(repo, revisions):
    """Resolve nodes from a revisions specifier data structure."""
    cl = repo.changelog
    clhasnode = cl.hasnode

    seen = set()
    nodes = []

    if not isinstance(revisions, list):
        raise error.WireprotoCommandError('revisions must be defined as an '
                                          'array')

    for spec in revisions:
        if b'type' not in spec:
            raise error.WireprotoCommandError(
                'type key not present in revision specifier')

        typ = spec[b'type']

        if typ == b'changesetexplicit':
            if b'nodes' not in spec:
                raise error.WireprotoCommandError(
                    'nodes key not present in changesetexplicit revision '
                    'specifier')

            for node in spec[b'nodes']:
                if node not in seen:
                    nodes.append(node)
                    seen.add(node)

        elif typ == b'changesetexplicitdepth':
            for key in (b'nodes', b'depth'):
                if key not in spec:
                    raise error.WireprotoCommandError(
                        '%s key not present in changesetexplicitdepth revision '
                        'specifier', (key,))

            for rev in repo.revs(b'ancestors(%ln, %d)', spec[b'nodes'],
                                 spec[b'depth'] - 1):
                node = cl.node(rev)

                if node not in seen:
                    nodes.append(node)
                    seen.add(node)

        elif typ == b'changesetdagrange':
            for key in (b'roots', b'heads'):
                if key not in spec:
                    raise error.WireprotoCommandError(
                        '%s key not present in changesetdagrange revision '
                        'specifier', (key,))

            if not spec[b'heads']:
                raise error.WireprotoCommandError(
                    'heads key in changesetdagrange cannot be empty')

            if spec[b'roots']:
                common = [n for n in spec[b'roots'] if clhasnode(n)]
            else:
                common = [nullid]

            for n in discovery.outgoing(repo, common, spec[b'heads']).missing:
                if n not in seen:
                    nodes.append(n)
                    seen.add(n)

        else:
            raise error.WireprotoCommandError(
                'unknown revision specifier type: %s', (typ,))

    return nodes

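# Illustrative sketch (not part of the original module): the three revision
# specifier forms resolvenodes() above understands. Node values are
# placeholder 20-byte identifiers, not real changesets.
_EXAMPLE_REVISION_SPECIFIERS = [
    # An explicit list of changeset nodes.
    {
        b'type': b'changesetexplicit',
        b'nodes': [b'\x01' * 20],
    },
    # The listed nodes and their ancestors, limited to the requested depth
    # (the listed nodes themselves count as depth 1).
    {
        b'type': b'changesetexplicitdepth',
        b'nodes': [b'\x02' * 20],
        b'depth': 10,
    },
    # Changesets that are ancestors of ``heads`` (inclusive) but not
    # ancestors of ``roots``.
    {
        b'type': b'changesetdagrange',
        b'roots': [b'\x03' * 20],
        b'heads': [b'\x04' * 20],
    },
]
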
861 @wireprotocommand('branchmap', permission='pull')
861 @wireprotocommand('branchmap', permission='pull')
862 def branchmapv2(repo, proto):
862 def branchmapv2(repo, proto):
863 yield {encoding.fromlocal(k): v
863 yield {encoding.fromlocal(k): v
864 for k, v in repo.branchmap().iteritems()}
864 for k, v in repo.branchmap().iteritems()}
865
865
866 @wireprotocommand('capabilities', permission='pull')
866 @wireprotocommand('capabilities', permission='pull')
867 def capabilitiesv2(repo, proto):
867 def capabilitiesv2(repo, proto):
868 yield _capabilitiesv2(repo, proto)
868 yield _capabilitiesv2(repo, proto)
869
869
870 @wireprotocommand(
870 @wireprotocommand(
871 'changesetdata',
871 'changesetdata',
872 args={
872 args={
873 'revisions': {
873 'revisions': {
874 'type': 'list',
874 'type': 'list',
875 'example': [{
875 'example': [{
876 b'type': b'changesetexplicit',
876 b'type': b'changesetexplicit',
877 b'nodes': [b'abcdef...'],
877 b'nodes': [b'abcdef...'],
878 }],
878 }],
879 },
879 },
880 'fields': {
880 'fields': {
881 'type': 'set',
881 'type': 'set',
882 'default': set,
882 'default': set,
883 'example': {b'parents', b'revision'},
883 'example': {b'parents', b'revision'},
884 'validvalues': {b'bookmarks', b'parents', b'phase', b'revision'},
884 'validvalues': {b'bookmarks', b'parents', b'phase', b'revision'},
885 },
885 },
886 },
886 },
887 permission='pull')
887 permission='pull')
888 def changesetdata(repo, proto, revisions, fields):
888 def changesetdata(repo, proto, revisions, fields):
889 # TODO look for unknown fields and abort when they can't be serviced.
889 # TODO look for unknown fields and abort when they can't be serviced.
890 # This could probably be validated by dispatcher using validvalues.
    # This could probably be validated by dispatcher using validvalues.

    cl = repo.changelog
    outgoing = resolvenodes(repo, revisions)
    publishing = repo.publishing()

    if outgoing:
        repo.hook('preoutgoing', throw=True, source='serve')

    yield {
        b'totalitems': len(outgoing),
    }

    # The phases of nodes already transferred to the client may have changed
    # since the client last requested data. We send phase-only records
    # for these revisions, if requested.
    # TODO actually do this. We'll probably want to emit phase heads
    # in the ancestry set of the outgoing revisions. This will ensure
    # that phase updates within that set are seen.
    if b'phase' in fields:
        pass

    nodebookmarks = {}
    for mark, node in repo._bookmarks.items():
        nodebookmarks.setdefault(node, set()).add(mark)

    # It is already topologically sorted by revision number.
    for node in outgoing:
        d = {
            b'node': node,
        }

        if b'parents' in fields:
            d[b'parents'] = cl.parents(node)

        if b'phase' in fields:
            if publishing:
                d[b'phase'] = b'public'
            else:
                ctx = repo[node]
                d[b'phase'] = ctx.phasestr()

        if b'bookmarks' in fields and node in nodebookmarks:
            d[b'bookmarks'] = sorted(nodebookmarks[node])
            del nodebookmarks[node]

        followingmeta = []
        followingdata = []

        if b'revision' in fields:
            revisiondata = cl.revision(node, raw=True)
            followingmeta.append((b'revision', len(revisiondata)))
            followingdata.append(revisiondata)

        # TODO make it possible for extensions to wrap a function or register
        # a handler to service custom fields.

        if followingmeta:
            d[b'fieldsfollowing'] = followingmeta

        yield d

        for extra in followingdata:
            yield extra

    # If requested, send bookmarks from nodes that didn't have revision
    # data sent so receiver is aware of any bookmark updates.
    if b'bookmarks' in fields:
        for node, marks in sorted(nodebookmarks.iteritems()):
            yield {
                b'node': node,
                b'bookmarks': sorted(marks),
            }

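# A note on the emission pattern above (illustrative sketch, not part of
# this module): commands such as changesetdata announce large payloads via
# a b'fieldsfollowing' list of (field, size) pairs and then emit the raw
# bytestrings immediately afterwards. A receiver could re-associate them
# roughly like this; the helper name is hypothetical.
def pairfieldsfollowing(objects):
    """Pair each metadata map with the raw payloads it announced.

    ``objects`` is an iterator of decoded response objects in emission
    order. Yields (metadata, {field: bytes}) tuples.
    """
    objects = iter(objects)
    for obj in objects:
        extras = {}
        for field, size in obj.get(b'fieldsfollowing', []):
            data = next(objects)
            assert len(data) == size, 'payload must match announced size'
            extras[field] = data
        yield obj, extras
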
class FileAccessError(Exception):
    """Represents an error accessing a specific file."""

    def __init__(self, path, msg, args):
        self.path = path
        self.msg = msg
        self.args = args

def getfilestore(repo, proto, path):
    """Obtain a file storage object for use with wire protocol.

    Exists as a standalone function so extensions can monkeypatch to add
    access control.
    """
    # This seems to work even if the file doesn't exist. So catch
    # "empty" files and return an error.
    fl = repo.file(path)

    if not len(fl):
        raise FileAccessError(path, 'unknown file: %s', (path,))

    return fl

def emitfilerevisions(repo, path, revisions, linknodes, fields):
    for revision in revisions:
        d = {
            b'node': revision.node,
        }

        if b'parents' in fields:
            d[b'parents'] = [revision.p1node, revision.p2node]

        if b'linknode' in fields:
            d[b'linknode'] = linknodes[revision.node]

        followingmeta = []
        followingdata = []

        if b'revision' in fields:
            if revision.revision is not None:
                followingmeta.append((b'revision', len(revision.revision)))
                followingdata.append(revision.revision)
            else:
                d[b'deltabasenode'] = revision.basenode
                followingmeta.append((b'delta', len(revision.delta)))
                followingdata.append(revision.delta)

        if followingmeta:
            d[b'fieldsfollowing'] = followingmeta

        yield d

        for extra in followingdata:
            yield extra

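# Illustrative sketch (not part of this module): emitfilerevisions() sends
# either a b'revision' fulltext or a b'delta' against b'deltabasenode', as
# the branches above show. A consumer can rebuild fulltexts along these
# lines; ``applybdiff`` stands in for a real binary-delta routine and both
# names here are hypothetical.
def resolvefulltext(d, payload, fulltexts, applybdiff):
    """Return the fulltext for record ``d`` whose raw payload is ``payload``.

    ``fulltexts`` maps node -> known fulltext and is updated in place.
    """
    if b'deltabasenode' in d:
        text = applybdiff(fulltexts[d[b'deltabasenode']], payload)
    else:
        text = payload
    fulltexts[d[b'node']] = text
    return text
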
def makefilematcher(repo, pathfilter):
    """Construct a matcher from a path filter dict."""

    # Validate values.
    if pathfilter:
        for key in (b'include', b'exclude'):
            for pattern in pathfilter.get(key, []):
                if not pattern.startswith((b'path:', b'rootfilesin:')):
                    raise error.WireprotoCommandError(
                        '%s pattern must begin with `path:` or `rootfilesin:`; '
                        'got %s', (key, pattern))

    if pathfilter:
        matcher = matchmod.match(repo.root, b'',
                                 include=pathfilter.get(b'include', []),
                                 exclude=pathfilter.get(b'exclude', []))
    else:
        matcher = matchmod.match(repo.root, b'')

    # Requested patterns could include files not in the local store. So
    # filter those out.
    return repo.narrowmatch(matcher)

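# Illustrative sketch (not part of this module): pathfilter values are maps
# of b'include'/b'exclude' pattern lists, and only b'path:'/b'rootfilesin:'
# prefixes survive the validation above. The example values are invented.
def validpathfilterpattern(pattern):
    """Mirror the prefix check makefilematcher() applies to each pattern."""
    return pattern.startswith((b'path:', b'rootfilesin:'))

examplepathfilter = {
    b'include': [b'path:dir0'],
    b'exclude': [b'rootfilesin:dir0/child0'],
}
assert all(validpathfilterpattern(p)
           for pats in examplepathfilter.values() for p in pats)
assert not validpathfilterpattern(b'glob:*.txt')  # rejected server-side
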
@wireprotocommand(
    'filedata',
    args={
        'haveparents': {
            'type': 'bool',
            'default': lambda: False,
            'example': True,
        },
        'nodes': {
            'type': 'list',
            'example': [b'0123456...'],
        },
        'fields': {
            'type': 'set',
            'default': set,
            'example': {b'parents', b'revision'},
            'validvalues': {b'parents', b'revision', b'linknode'},
        },
        'path': {
            'type': 'bytes',
            'example': b'foo.txt',
        }
    },
    permission='pull',
    # TODO censoring a file revision won't invalidate the cache.
    # Figure out a way to take censoring into account when deriving
    # the cache key.
    cachekeyfn=makecommandcachekeyfn('filedata', 1, allargs=True))
def filedata(repo, proto, haveparents, nodes, fields, path):
    # TODO this API allows access to file revisions that are attached to
    # secret changesets. filesdata does not have this problem. Maybe this
    # API should be deleted?

    try:
        # Extensions may wish to access the protocol handler.
        store = getfilestore(repo, proto, path)
    except FileAccessError as e:
        raise error.WireprotoCommandError(e.msg, e.args)

    clnode = repo.changelog.node
    linknodes = {}

    # Validate requested nodes.
    for node in nodes:
        try:
            store.rev(node)
        except error.LookupError:
            raise error.WireprotoCommandError('unknown file node: %s',
                                              (hex(node),))

        # TODO by creating the filectx against a specific file revision
        # instead of changeset, linkrev() is always used. This is wrong for
        # cases where linkrev() may refer to a hidden changeset. But since this
        # API doesn't know anything about changesets, we're not sure how to
        # disambiguate the linknode. Perhaps we should delete this API?
        fctx = repo.filectx(path, fileid=node)
        linknodes[node] = clnode(fctx.introrev())

    revisions = store.emitrevisions(nodes,
                                    revisiondata=b'revision' in fields,
                                    assumehaveparentrevisions=haveparents)

    yield {
        b'totalitems': len(nodes),
    }

    for o in emitfilerevisions(repo, path, revisions, linknodes, fields):
        yield o

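# Illustrative sketch (not part of this module): per the argument spec
# above, a filedata request carries arguments shaped like this. The node
# bytes are placeholders, and the Python set/list types mirror the declared
# 'set'/'list' argument types rather than any particular CBOR encoding.
examplefiledataargs = {
    b'path': b'foo.txt',
    b'nodes': [b'\x01' * 20],                 # 20-byte binary file nodes
    b'fields': {b'parents', b'revision'},
    b'haveparents': False,
}
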
def filesdatacapabilities(repo, proto):
    batchsize = repo.ui.configint(
        b'experimental', b'server.filesdata.recommended-batch-size')
    return {
        b'recommendedbatchsize': batchsize,
    }

@wireprotocommand(
    'filesdata',
    args={
        'haveparents': {
            'type': 'bool',
            'default': lambda: False,
            'example': True,
        },
        'fields': {
            'type': 'set',
            'default': set,
            'example': {b'parents', b'revision'},
            'validvalues': {b'firstchangeset', b'linknode', b'parents',
                            b'revision'},
        },
        'pathfilter': {
            'type': 'dict',
            'default': lambda: None,
            'example': {b'include': [b'path:tests']},
        },
        'revisions': {
            'type': 'list',
            'example': [{
                b'type': b'changesetexplicit',
                b'nodes': [b'abcdef...'],
            }],
        },
    },
    permission='pull',
    # TODO censoring a file revision won't invalidate the cache.
    # Figure out a way to take censoring into account when deriving
    # the cache key.
    cachekeyfn=makecommandcachekeyfn('filesdata', 1, allargs=True),
    extracapabilitiesfn=filesdatacapabilities)
def filesdata(repo, proto, haveparents, fields, pathfilter, revisions):
    # TODO This should operate on a repo that exposes obsolete changesets. There
    # is a race between a client making a push that obsoletes a changeset and
    # another client fetching files data for that changeset. If a client has a
    # changeset, it should probably be allowed to access files data for that
    # changeset.

-    cl = repo.changelog
-    clnode = cl.node
    outgoing = resolvenodes(repo, revisions)
    filematcher = makefilematcher(repo, pathfilter)

-    # Figure out what needs to be emitted.
-    changedpaths = set()
    # path -> {fnode: linknode}
    fnodes = collections.defaultdict(dict)

-    for node in outgoing:
-        ctx = repo[node]
-        changedpaths.update(ctx.files())
-
-    changedpaths = sorted(p for p in changedpaths if filematcher(p))
-
-    # If ancestors are known, we send file revisions having a linkrev in the
-    # outgoing set of changeset revisions.
-    if haveparents:
-        outgoingclrevs = set(cl.rev(n) for n in outgoing)
-
-        for path in changedpaths:
-            try:
-                store = getfilestore(repo, proto, path)
-            except FileAccessError as e:
-                raise error.WireprotoCommandError(e.msg, e.args)
-
-            for rev in store:
-                linkrev = store.linkrev(rev)
-
-                if linkrev in outgoingclrevs:
-                    fnodes[path].setdefault(store.node(rev), clnode(linkrev))
-
-    # If ancestors aren't known, we walk the manifests and send all
-    # encountered file revisions.
-    else:
-        for node in outgoing:
-            mctx = repo[node].manifestctx()
-
-            for path, fnode in mctx.read().items():
-                if filematcher(path):
-                    fnodes[path].setdefault(fnode, node)
+    # We collect the set of relevant file revisions by iterating the changeset
+    # revisions and either walking the set of files recorded in the changeset
+    # or by walking the manifest at that revision. There is probably room for a
+    # storage-level API to request this data, as it can be expensive to compute
+    # and would benefit from caching or alternate storage from what revlogs
+    # provide.
+    for node in outgoing:
+        ctx = repo[node]
+        mctx = ctx.manifestctx()
+        md = mctx.read()
+
+        if haveparents:
+            checkpaths = ctx.files()
+        else:
+            checkpaths = md.keys()
+
+        for path in checkpaths:
+            fnode = md[path]
+
+            if path in fnodes and fnode in fnodes[path]:
+                continue
+
+            if not filematcher(path):
+                continue
+
+            fnodes[path].setdefault(fnode, node)

    yield {
        b'totalpaths': len(fnodes),
        b'totalitems': sum(len(v) for v in fnodes.values())
    }

    for path, filenodes in sorted(fnodes.items()):
        try:
            store = getfilestore(repo, proto, path)
        except FileAccessError as e:
            raise error.WireprotoCommandError(e.msg, e.args)

        yield {
            b'path': path,
            b'totalitems': len(filenodes),
        }

        revisions = store.emitrevisions(filenodes.keys(),
                                        revisiondata=b'revision' in fields,
                                        assumehaveparentrevisions=haveparents)

        for o in emitfilerevisions(repo, path, revisions, filenodes, fields):
            yield o

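# Illustrative sketch (not part of this module) of the unified collection
# loop added above, over plain dicts: each outgoing changeset contributes
# the file nodes it touches (or its whole manifest when parents are not
# assumed), and the first changeset seen for a given file node supplies the
# linknode. The path filter is omitted and all values below are invented.
def collectfilenodes(outgoing, haveparents=False):
    fnodes = {}
    for cset in outgoing:
        checkpaths = cset['files'] if haveparents else cset['manifest'].keys()
        for path in checkpaths:
            fnode = cset['manifest'][path]
            fnodes.setdefault(path, {}).setdefault(fnode, cset['node'])
    return fnodes

# Two heads introducing the same file node: the first changeset iterated
# supplies the linknode (the situation the dupe-file heads in the test
# repository below are set up to exercise).
dupe1 = {'node': b'\x01' * 20, 'files': [b'dupe-file'],
         'manifest': {b'dupe-file': b'\xaa' * 20}}
dupe2 = {'node': b'\x02' * 20, 'files': [b'dupe-file'],
         'manifest': {b'dupe-file': b'\xaa' * 20}}
assert collectfilenodes([dupe1, dupe2])[b'dupe-file'][b'\xaa' * 20] == b'\x01' * 20
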
@wireprotocommand(
    'heads',
    args={
        'publiconly': {
            'type': 'bool',
            'default': lambda: False,
            'example': False,
        },
    },
    permission='pull')
def headsv2(repo, proto, publiconly):
    if publiconly:
        repo = repo.filtered('immutable')

    yield repo.heads()

@wireprotocommand(
    'known',
    args={
        'nodes': {
            'type': 'list',
            'default': list,
            'example': [b'deadbeef'],
        },
    },
    permission='pull')
def knownv2(repo, proto, nodes):
    result = b''.join(b'1' if n else b'0' for n in repo.known(nodes))
    yield result

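# Illustrative note (not part of this module): the known response packs one
# ASCII digit per queried node, b'1' if the node is known and b'0' if not,
# in request order. The membership values here are invented.
exampleknown = [True, False, True]
assert b''.join(b'1' if n else b'0' for n in exampleknown) == b'101'
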
@wireprotocommand(
    'listkeys',
    args={
        'namespace': {
            'type': 'bytes',
            'example': b'ns',
        },
    },
    permission='pull')
def listkeysv2(repo, proto, namespace):
    keys = repo.listkeys(encoding.tolocal(namespace))
    keys = {encoding.fromlocal(k): encoding.fromlocal(v)
            for k, v in keys.iteritems()}

    yield keys

@wireprotocommand(
    'lookup',
    args={
        'key': {
            'type': 'bytes',
            'example': b'foo',
        },
    },
    permission='pull')
def lookupv2(repo, proto, key):
    key = encoding.tolocal(key)

    # TODO handle exception.
    node = repo.lookup(key)

    yield node

def manifestdatacapabilities(repo, proto):
    batchsize = repo.ui.configint(
        b'experimental', b'server.manifestdata.recommended-batch-size')

    return {
        b'recommendedbatchsize': batchsize,
    }

@wireprotocommand(
    'manifestdata',
    args={
        'nodes': {
            'type': 'list',
            'example': [b'0123456...'],
        },
        'haveparents': {
            'type': 'bool',
            'default': lambda: False,
            'example': True,
        },
        'fields': {
            'type': 'set',
            'default': set,
            'example': {b'parents', b'revision'},
            'validvalues': {b'parents', b'revision'},
        },
        'tree': {
            'type': 'bytes',
            'example': b'',
        },
    },
    permission='pull',
    cachekeyfn=makecommandcachekeyfn('manifestdata', 1, allargs=True),
    extracapabilitiesfn=manifestdatacapabilities)
def manifestdata(repo, proto, haveparents, nodes, fields, tree):
    store = repo.manifestlog.getstorage(tree)

    # Validate the node is known and abort on unknown revisions.
    for node in nodes:
        try:
            store.rev(node)
        except error.LookupError:
            raise error.WireprotoCommandError(
                'unknown node: %s', (node,))

    revisions = store.emitrevisions(nodes,
                                    revisiondata=b'revision' in fields,
                                    assumehaveparentrevisions=haveparents)

    yield {
        b'totalitems': len(nodes),
    }

    for revision in revisions:
        d = {
            b'node': revision.node,
        }

        if b'parents' in fields:
            d[b'parents'] = [revision.p1node, revision.p2node]

        followingmeta = []
        followingdata = []

        if b'revision' in fields:
            if revision.revision is not None:
                followingmeta.append((b'revision', len(revision.revision)))
                followingdata.append(revision.revision)
            else:
                d[b'deltabasenode'] = revision.basenode
                followingmeta.append((b'delta', len(revision.delta)))
                followingdata.append(revision.delta)

        if followingmeta:
            d[b'fieldsfollowing'] = followingmeta

        yield d

        for extra in followingdata:
            yield extra

1368
1358
1369 @wireprotocommand(
1359 @wireprotocommand(
1370 'pushkey',
1360 'pushkey',
1371 args={
1361 args={
1372 'namespace': {
1362 'namespace': {
1373 'type': 'bytes',
1363 'type': 'bytes',
1374 'example': b'ns',
1364 'example': b'ns',
1375 },
1365 },
1376 'key': {
1366 'key': {
1377 'type': 'bytes',
1367 'type': 'bytes',
1378 'example': b'key',
1368 'example': b'key',
1379 },
1369 },
1380 'old': {
1370 'old': {
1381 'type': 'bytes',
1371 'type': 'bytes',
1382 'example': b'old',
1372 'example': b'old',
1383 },
1373 },
1384 'new': {
1374 'new': {
1385 'type': 'bytes',
1375 'type': 'bytes',
1386 'example': 'new',
1376 'example': 'new',
1387 },
1377 },
1388 },
1378 },
1389 permission='push')
1379 permission='push')
1390 def pushkeyv2(repo, proto, namespace, key, old, new):
1380 def pushkeyv2(repo, proto, namespace, key, old, new):
1391 # TODO handle ui output redirection
1381 # TODO handle ui output redirection
1392 yield repo.pushkey(encoding.tolocal(namespace),
1382 yield repo.pushkey(encoding.tolocal(namespace),
1393 encoding.tolocal(key),
1383 encoding.tolocal(key),
1394 encoding.tolocal(old),
1384 encoding.tolocal(old),
1395 encoding.tolocal(new))
1385 encoding.tolocal(new))
1396
1386
1397
1387
@wireprotocommand(
    'rawstorefiledata',
    args={
        'files': {
            'type': 'list',
            'example': [b'changelog', b'manifestlog'],
        },
        'pathfilter': {
            'type': 'list',
            'default': lambda: None,
            'example': {b'include': [b'path:tests']},
        },
    },
    permission='pull')
def rawstorefiledata(repo, proto, files, pathfilter):
    if not streamclone.allowservergeneration(repo):
        raise error.WireprotoCommandError(b'stream clone is disabled')

    # TODO support dynamically advertising what store files "sets" are
    # available. For now, we support changelog, manifestlog, and files.
    files = set(files)
    allowedfiles = {b'changelog', b'manifestlog'}

    unsupported = files - allowedfiles
    if unsupported:
        raise error.WireprotoCommandError(b'unknown file type: %s',
                                          (b', '.join(sorted(unsupported)),))

    with repo.lock():
        topfiles = list(repo.store.topfiles())

    sendfiles = []
    totalsize = 0

    # TODO this is a bunch of storage layer interface abstractions because
    # it assumes revlogs.
    for name, encodedname, size in topfiles:
        if b'changelog' in files and name.startswith(b'00changelog'):
            pass
        elif b'manifestlog' in files and name.startswith(b'00manifest'):
            pass
        else:
            continue

        sendfiles.append((b'store', name, size))
        totalsize += size

    yield {
        b'filecount': len(sendfiles),
        b'totalsize': totalsize,
    }

    for location, name, size in sendfiles:
        yield {
            b'location': location,
            b'path': name,
            b'size': size,
        }

        # We have to use a closure for this to ensure the context manager is
        # closed only after sending the final chunk.
        def getfiledata():
            with repo.svfs(name, 'rb', auditpath=False) as fh:
                for chunk in util.filechunkiter(fh, limit=size):
                    yield chunk

        yield wireprototypes.indefinitebytestringresponse(
            getfiledata())
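
# Illustrative note (not part of either file): the test requests that
# follow pass the filesdata 'revisions' argument as a list of revision
# specifiers. A b'changesetexplicit' specifier carries raw 20-byte
# changeset nodes, which resolvenodes() turns into the outgoing set. The
# node below is copied from the transcript ("commit 1",
# 5b0b1a23577e205ea240e39c9704e28d7697cbd8).
examplerevisions = [{
    b'type': b'changesetexplicit',
    b'nodes': [
        b'\x5b\x0b\x1a\x23\x57\x7e\x20\x5e\xa2\x40'
        b'\xe3\x9c\x97\x04\xe2\x8d\x76\x97\xcb\xd8',
    ],
}]
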
@@ -1,1292 +1,1298 b''
1 $ . $TESTDIR/wireprotohelpers.sh
1 $ . $TESTDIR/wireprotohelpers.sh
2
2
3 $ hg init server
3 $ hg init server
4 $ enablehttpv2 server
4 $ enablehttpv2 server
5 $ cd server
5 $ cd server
6 $ cat > a << EOF
6 $ cat > a << EOF
7 > a0
7 > a0
8 > 00000000000000000000000000000000000000
8 > 00000000000000000000000000000000000000
9 > 11111111111111111111111111111111111111
9 > 11111111111111111111111111111111111111
10 > EOF
10 > EOF
11 $ cat > b << EOF
11 $ cat > b << EOF
12 > b0
12 > b0
13 > aaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa
13 > aaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa
14 > bbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbb
14 > bbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbb
15 > EOF
15 > EOF
16 $ mkdir -p dir0/child0 dir0/child1 dir1
16 $ mkdir -p dir0/child0 dir0/child1 dir1
17 $ echo c0 > dir0/c
17 $ echo c0 > dir0/c
18 $ echo d0 > dir0/d
18 $ echo d0 > dir0/d
19 $ echo e0 > dir0/child0/e
19 $ echo e0 > dir0/child0/e
20 $ echo f0 > dir0/child1/f
20 $ echo f0 > dir0/child1/f
21 $ hg -q commit -A -m 'commit 0'
21 $ hg -q commit -A -m 'commit 0'
22
22
23 $ echo a1 >> a
23 $ echo a1 >> a
24 $ echo d1 > dir0/d
24 $ echo d1 > dir0/d
25 $ echo g0 > g
25 $ echo g0 > g
26 $ echo h0 > h
26 $ echo h0 > h
27 $ hg -q commit -A -m 'commit 1'
27 $ hg -q commit -A -m 'commit 1'
28 $ echo f1 > dir0/child1/f
28 $ echo f1 > dir0/child1/f
29 $ echo i0 > dir0/i
29 $ echo i0 > dir0/i
30 $ hg -q commit -A -m 'commit 2'
30 $ hg -q commit -A -m 'commit 2'
31
31
32 $ hg -q up -r 0
32 $ hg -q up -r 0
33 $ echo a2 >> a
33 $ echo a2 >> a
34 $ hg commit -m 'commit 3'
34 $ hg commit -m 'commit 3'
35 created new head
35 created new head
36
36
37 Create multiple heads introducing the same file node
37 Create multiple heads introducing the same file nodefile node
38
38
39 $ hg -q up -r 0
39 $ hg -q up -r 0
40 $ echo foo > dupe-file
40 $ echo foo > dupe-file
41 $ hg commit -Am 'dupe 1'
41 $ hg commit -Am 'dupe 1'
42 adding dupe-file
42 adding dupe-file
43 created new head
43 created new head
44 $ hg -q up -r 0
44 $ hg -q up -r 0
45 $ echo foo > dupe-file
45 $ echo foo > dupe-file
46 $ hg commit -Am 'dupe 2'
46 $ hg commit -Am 'dupe 2'
47 adding dupe-file
47 adding dupe-file
48 created new head
48 created new head
49
49
50 $ hg log -G -T '{rev}:{node} {desc}\n'
50 $ hg log -G -T '{rev}:{node} {desc}\n'
51 @ 5:47fc30580911232cb264675b402819deddf6c6f0 dupe 2
51 @ 5:47fc30580911232cb264675b402819deddf6c6f0 dupe 2
52 |
52 |
53 | o 4:b16cce2967c1749ef4f4e3086a806cfbad8a3af7 dupe 1
53 | o 4:b16cce2967c1749ef4f4e3086a806cfbad8a3af7 dupe 1
54 |/
54 |/
55 | o 3:476fbf122cd82f6726f0191ff146f67140946abc commit 3
55 | o 3:476fbf122cd82f6726f0191ff146f67140946abc commit 3
56 |/
56 |/
57 | o 2:b91c03cbba3519ab149b6cd0a0afbdb5cf1b5c8a commit 2
57 | o 2:b91c03cbba3519ab149b6cd0a0afbdb5cf1b5c8a commit 2
58 | |
58 | |
59 | o 1:5b0b1a23577e205ea240e39c9704e28d7697cbd8 commit 1
59 | o 1:5b0b1a23577e205ea240e39c9704e28d7697cbd8 commit 1
60 |/
60 |/
61 o 0:6e875ff18c227659ad6143bb3580c65700734884 commit 0
61 o 0:6e875ff18c227659ad6143bb3580c65700734884 commit 0
62
62
63
63
64 $ hg serve -p $HGPORT -d --pid-file hg.pid -E error.log
64 $ hg serve -p $HGPORT -d --pid-file hg.pid -E error.log
65 $ cat hg.pid > $DAEMON_PIDS
65 $ cat hg.pid > $DAEMON_PIDS
66
66
67 Missing arguments is an error
67 Missing arguments is an error
68
68
69 $ sendhttpv2peer << EOF
69 $ sendhttpv2peer << EOF
70 > command filesdata
70 > command filesdata
71 > EOF
71 > EOF
72 creating http peer for wire protocol version 2
72 creating http peer for wire protocol version 2
73 sending filesdata command
73 sending filesdata command
74 abort: missing required arguments: revisions!
74 abort: missing required arguments: revisions!
75 [255]
75 [255]
76
76
77 Bad pattern to pathfilter is rejected
77 Bad pattern to pathfilter is rejected
78
78
79 $ sendhttpv2peer << EOF
79 $ sendhttpv2peer << EOF
80 > command filesdata
80 > command filesdata
81 > revisions eval:[{
81 > revisions eval:[{
82 > b'type': b'changesetexplicit',
82 > b'type': b'changesetexplicit',
83 > b'nodes': [
83 > b'nodes': [
84 > b'\x5b\x0b\x1a\x23\x57\x7e\x20\x5e\xa2\x40\xe3\x9c\x97\x04\xe2\x8d\x76\x97\xcb\xd8',
84 > b'\x5b\x0b\x1a\x23\x57\x7e\x20\x5e\xa2\x40\xe3\x9c\x97\x04\xe2\x8d\x76\x97\xcb\xd8',
85 > ]}]
85 > ]}]
86 > pathfilter eval:{b'include': [b'bad:foo']}
86 > pathfilter eval:{b'include': [b'bad:foo']}
87 > EOF
87 > EOF
88 creating http peer for wire protocol version 2
88 creating http peer for wire protocol version 2
89 sending filesdata command
89 sending filesdata command
90 abort: include pattern must begin with `path:` or `rootfilesin:`; got bad:foo!
90 abort: include pattern must begin with `path:` or `rootfilesin:`; got bad:foo!
91 [255]
91 [255]
92
92
93 $ sendhttpv2peer << EOF
93 $ sendhttpv2peer << EOF
94 > command filesdata
94 > command filesdata
95 > revisions eval:[{
95 > revisions eval:[{
96 > b'type': b'changesetexplicit',
96 > b'type': b'changesetexplicit',
97 > b'nodes': [
97 > b'nodes': [
98 > b'\x5b\x0b\x1a\x23\x57\x7e\x20\x5e\xa2\x40\xe3\x9c\x97\x04\xe2\x8d\x76\x97\xcb\xd8',
98 > b'\x5b\x0b\x1a\x23\x57\x7e\x20\x5e\xa2\x40\xe3\x9c\x97\x04\xe2\x8d\x76\x97\xcb\xd8',
99 > ]}]
99 > ]}]
100 > pathfilter eval:{b'exclude': [b'glob:foo']}
100 > pathfilter eval:{b'exclude': [b'glob:foo']}
101 > EOF
101 > EOF
102 creating http peer for wire protocol version 2
102 creating http peer for wire protocol version 2
103 sending filesdata command
103 sending filesdata command
104 abort: exclude pattern must begin with `path:` or `rootfilesin:`; got glob:foo!
104 abort: exclude pattern must begin with `path:` or `rootfilesin:`; got glob:foo!
105 [255]
105 [255]
106
106
107 Fetching a single changeset without parents fetches all files
107 Fetching a single changeset without parents fetches all files
108
108
109 $ sendhttpv2peer << EOF
109 $ sendhttpv2peer << EOF
110 > command filesdata
110 > command filesdata
111 > revisions eval:[{
111 > revisions eval:[{
112 > b'type': b'changesetexplicit',
112 > b'type': b'changesetexplicit',
113 > b'nodes': [
113 > b'nodes': [
114 > b'\x5b\x0b\x1a\x23\x57\x7e\x20\x5e\xa2\x40\xe3\x9c\x97\x04\xe2\x8d\x76\x97\xcb\xd8',
114 > b'\x5b\x0b\x1a\x23\x57\x7e\x20\x5e\xa2\x40\xe3\x9c\x97\x04\xe2\x8d\x76\x97\xcb\xd8',
115 > ]}]
115 > ]}]
116 > EOF
116 > EOF
117 creating http peer for wire protocol version 2
117 creating http peer for wire protocol version 2
118 sending filesdata command
118 sending filesdata command
119 response: gen[
119 response: gen[
120 {
120 {
121 b'totalitems': 8,
121 b'totalitems': 8,
122 b'totalpaths': 8
122 b'totalpaths': 8
123 },
123 },
124 {
124 {
125 b'path': b'a',
125 b'path': b'a',
126 b'totalitems': 1
126 b'totalitems': 1
127 },
127 },
128 {
128 {
129 b'node': b'\n\x862\x1f\x13y\xd1\xa9\xec\xd0W\x9a"\x97z\xf7\xa5\xac\xaf\x11'
129 b'node': b'\n\x862\x1f\x13y\xd1\xa9\xec\xd0W\x9a"\x97z\xf7\xa5\xac\xaf\x11'
130 },
130 },
131 {
131 {
132 b'path': b'b',
132 b'path': b'b',
133 b'totalitems': 1
133 b'totalitems': 1
134 },
134 },
135 {
135 {
136 b'node': b'\x88\xbac\xb8\xd8\xc6 :\xc6z\xc9\x98\xac\xd9\x17K\xf7\x05!\xb2'
136 b'node': b'\x88\xbac\xb8\xd8\xc6 :\xc6z\xc9\x98\xac\xd9\x17K\xf7\x05!\xb2'
137 },
137 },
138 {
138 {
139 b'path': b'dir0/c',
139 b'path': b'dir0/c',
140 b'totalitems': 1
140 b'totalitems': 1
141 },
141 },
142 {
142 {
143 b'node': b'\x91DE4j\x0c\xa0b\x9b\xd4|\xeb]\xfe\x07\xe4\xd4\xcf%\x01'
143 b'node': b'\x91DE4j\x0c\xa0b\x9b\xd4|\xeb]\xfe\x07\xe4\xd4\xcf%\x01'
144 },
144 },
145 {
145 {
146 b'path': b'dir0/child0/e',
146 b'path': b'dir0/child0/e',
147 b'totalitems': 1
147 b'totalitems': 1
148 },
148 },
149 {
149 {
150 b'node': b'\xbb\xbal\x06\xb3\x0fD=4\xff\x84\x1b\xc9\x85\xc4\xd0\x82|k\xe4'
150 b'node': b'\xbb\xbal\x06\xb3\x0fD=4\xff\x84\x1b\xc9\x85\xc4\xd0\x82|k\xe4'
151 },
151 },
152 {
152 {
153 b'path': b'dir0/child1/f',
153 b'path': b'dir0/child1/f',
154 b'totalitems': 1
154 b'totalitems': 1
155 },
155 },
156 {
156 {
157 b'node': b'\x12\xfc}\xcdw;Z\n\x92\x9c\xe1\x95"\x80\x83\xc6\xdd\xc9\xce\xc4'
157 b'node': b'\x12\xfc}\xcdw;Z\n\x92\x9c\xe1\x95"\x80\x83\xc6\xdd\xc9\xce\xc4'
158 },
158 },
159 {
159 {
160 b'path': b'dir0/d',
160 b'path': b'dir0/d',
161 b'totalitems': 1
161 b'totalitems': 1
162 },
162 },
163 {
163 {
164 b'node': b'\x93\x88)\xad\x01R}2\xba\x06_\x81#6\xfe\xc7\x9d\xdd9G'
164 b'node': b'\x93\x88)\xad\x01R}2\xba\x06_\x81#6\xfe\xc7\x9d\xdd9G'
165 },
165 },
166 {
166 {
167 b'path': b'g',
167 b'path': b'g',
168 b'totalitems': 1
168 b'totalitems': 1
169 },
169 },
170 {
170 {
171 b'node': b'\xde\xca\xba5DFjI\x95r\xe9\x0f\xac\xe6\xfa\x0c!k\xba\x8c'
171 b'node': b'\xde\xca\xba5DFjI\x95r\xe9\x0f\xac\xe6\xfa\x0c!k\xba\x8c'
172 },
172 },
173 {
173 {
174 b'path': b'h',
174 b'path': b'h',
175 b'totalitems': 1
175 b'totalitems': 1
176 },
176 },
177 {
177 {
178 b'node': b'\x03A\xfc\x84\x1b\xb5\xb4\xba\x93\xb2mM\xdaa\xf7y6]\xb3K'
178 b'node': b'\x03A\xfc\x84\x1b\xb5\xb4\xba\x93\xb2mM\xdaa\xf7y6]\xb3K'
179 }
179 }
180 ]
180 ]
181
181
182 Fetching a single changeset saying parents data is available fetches just new files
182 Fetching a single changeset saying parents data is available fetches just new files
183
183
184 $ sendhttpv2peer << EOF
184 $ sendhttpv2peer << EOF
185 > command filesdata
185 > command filesdata
186 > revisions eval:[{
186 > revisions eval:[{
187 > b'type': b'changesetexplicit',
187 > b'type': b'changesetexplicit',
188 > b'nodes': [
188 > b'nodes': [
189 > b'\x5b\x0b\x1a\x23\x57\x7e\x20\x5e\xa2\x40\xe3\x9c\x97\x04\xe2\x8d\x76\x97\xcb\xd8',
189 > b'\x5b\x0b\x1a\x23\x57\x7e\x20\x5e\xa2\x40\xe3\x9c\x97\x04\xe2\x8d\x76\x97\xcb\xd8',
190 > ]}]
190 > ]}]
191 > haveparents eval:True
191 > haveparents eval:True
192 > EOF
192 > EOF
193 creating http peer for wire protocol version 2
193 creating http peer for wire protocol version 2
194 sending filesdata command
194 sending filesdata command
195 response: gen[
195 response: gen[
196 {
196 {
197 b'totalitems': 4,
197 b'totalitems': 4,
198 b'totalpaths': 4
198 b'totalpaths': 4
199 },
199 },
200 {
200 {
201 b'path': b'a',
201 b'path': b'a',
202 b'totalitems': 1
202 b'totalitems': 1
203 },
203 },
204 {
204 {
205 b'node': b'\n\x862\x1f\x13y\xd1\xa9\xec\xd0W\x9a"\x97z\xf7\xa5\xac\xaf\x11'
205 b'node': b'\n\x862\x1f\x13y\xd1\xa9\xec\xd0W\x9a"\x97z\xf7\xa5\xac\xaf\x11'
206 },
206 },
207 {
207 {
208 b'path': b'dir0/d',
208 b'path': b'dir0/d',
209 b'totalitems': 1
209 b'totalitems': 1
210 },
210 },
211 {
211 {
212 b'node': b'\x93\x88)\xad\x01R}2\xba\x06_\x81#6\xfe\xc7\x9d\xdd9G'
212 b'node': b'\x93\x88)\xad\x01R}2\xba\x06_\x81#6\xfe\xc7\x9d\xdd9G'
213 },
213 },
214 {
214 {
215 b'path': b'g',
215 b'path': b'g',
216 b'totalitems': 1
216 b'totalitems': 1
217 },
217 },
218 {
218 {
219 b'node': b'\xde\xca\xba5DFjI\x95r\xe9\x0f\xac\xe6\xfa\x0c!k\xba\x8c'
219 b'node': b'\xde\xca\xba5DFjI\x95r\xe9\x0f\xac\xe6\xfa\x0c!k\xba\x8c'
220 },
220 },
221 {
221 {
222 b'path': b'h',
222 b'path': b'h',
223 b'totalitems': 1
223 b'totalitems': 1
224 },
224 },
225 {
225 {
226 b'node': b'\x03A\xfc\x84\x1b\xb5\xb4\xba\x93\xb2mM\xdaa\xf7y6]\xb3K'
226 b'node': b'\x03A\xfc\x84\x1b\xb5\xb4\xba\x93\xb2mM\xdaa\xf7y6]\xb3K'
227 }
227 }
228 ]
228 ]
229
229
230 A path filter for a sub-directory is honored
230 A path filter for a sub-directory is honored
231
231
232 $ sendhttpv2peer << EOF
232 $ sendhttpv2peer << EOF
233 > command filesdata
233 > command filesdata
234 > revisions eval:[{
234 > revisions eval:[{
235 > b'type': b'changesetexplicit',
235 > b'type': b'changesetexplicit',
236 > b'nodes': [
236 > b'nodes': [
237 > b'\x5b\x0b\x1a\x23\x57\x7e\x20\x5e\xa2\x40\xe3\x9c\x97\x04\xe2\x8d\x76\x97\xcb\xd8',
237 > b'\x5b\x0b\x1a\x23\x57\x7e\x20\x5e\xa2\x40\xe3\x9c\x97\x04\xe2\x8d\x76\x97\xcb\xd8',
238 > ]}]
238 > ]}]
239 > haveparents eval:True
239 > haveparents eval:True
240 > pathfilter eval:{b'include': [b'path:dir0']}
240 > pathfilter eval:{b'include': [b'path:dir0']}
241 > EOF
241 > EOF
242 creating http peer for wire protocol version 2
242 creating http peer for wire protocol version 2
243 sending filesdata command
243 sending filesdata command
244 response: gen[
244 response: gen[
245 {
245 {
246 b'totalitems': 1,
246 b'totalitems': 1,
247 b'totalpaths': 1
247 b'totalpaths': 1
248 },
248 },
249 {
249 {
250 b'path': b'dir0/d',
250 b'path': b'dir0/d',
251 b'totalitems': 1
251 b'totalitems': 1
252 },
252 },
253 {
253 {
254 b'node': b'\x93\x88)\xad\x01R}2\xba\x06_\x81#6\xfe\xc7\x9d\xdd9G'
254 b'node': b'\x93\x88)\xad\x01R}2\xba\x06_\x81#6\xfe\xc7\x9d\xdd9G'
255 }
255 }
256 ]
256 ]
257
257
258 $ sendhttpv2peer << EOF
258 $ sendhttpv2peer << EOF
259 > command filesdata
259 > command filesdata
260 > revisions eval:[{
260 > revisions eval:[{
261 > b'type': b'changesetexplicit',
261 > b'type': b'changesetexplicit',
262 > b'nodes': [
262 > b'nodes': [
263 > b'\x5b\x0b\x1a\x23\x57\x7e\x20\x5e\xa2\x40\xe3\x9c\x97\x04\xe2\x8d\x76\x97\xcb\xd8',
263 > b'\x5b\x0b\x1a\x23\x57\x7e\x20\x5e\xa2\x40\xe3\x9c\x97\x04\xe2\x8d\x76\x97\xcb\xd8',
264 > ]}]
264 > ]}]
265 > haveparents eval:True
265 > haveparents eval:True
266 > pathfilter eval:{b'exclude': [b'path:a', b'path:g']}
266 > pathfilter eval:{b'exclude': [b'path:a', b'path:g']}
267 > EOF
267 > EOF
268 creating http peer for wire protocol version 2
268 creating http peer for wire protocol version 2
269 sending filesdata command
269 sending filesdata command
270 response: gen[
270 response: gen[
271 {
271 {
272 b'totalitems': 2,
272 b'totalitems': 2,
273 b'totalpaths': 2
273 b'totalpaths': 2
274 },
274 },
275 {
275 {
276 b'path': b'dir0/d',
276 b'path': b'dir0/d',
277 b'totalitems': 1
277 b'totalitems': 1
278 },
278 },
279 {
279 {
280 b'node': b'\x93\x88)\xad\x01R}2\xba\x06_\x81#6\xfe\xc7\x9d\xdd9G'
280 b'node': b'\x93\x88)\xad\x01R}2\xba\x06_\x81#6\xfe\xc7\x9d\xdd9G'
281 },
281 },
282 {
282 {
283 b'path': b'h',
283 b'path': b'h',
284 b'totalitems': 1
284 b'totalitems': 1
285 },
285 },
286 {
286 {
287 b'node': b'\x03A\xfc\x84\x1b\xb5\xb4\xba\x93\xb2mM\xdaa\xf7y6]\xb3K'
287 b'node': b'\x03A\xfc\x84\x1b\xb5\xb4\xba\x93\xb2mM\xdaa\xf7y6]\xb3K'
288 }
288 }
289 ]
289 ]
290
290
291 Requesting multiple changeset nodes without haveparents sends all data for both
291 Requesting multiple changeset nodes without haveparents sends all data for both
292
292
293 $ sendhttpv2peer << EOF
293 $ sendhttpv2peer << EOF
294 > command filesdata
294 > command filesdata
295 > revisions eval:[{
295 > revisions eval:[{
296 > b'type': b'changesetexplicit',
296 > b'type': b'changesetexplicit',
297 > b'nodes': [
297 > b'nodes': [
298 > b'\x5b\x0b\x1a\x23\x57\x7e\x20\x5e\xa2\x40\xe3\x9c\x97\x04\xe2\x8d\x76\x97\xcb\xd8',
298 > b'\x5b\x0b\x1a\x23\x57\x7e\x20\x5e\xa2\x40\xe3\x9c\x97\x04\xe2\x8d\x76\x97\xcb\xd8',
299 > b'\xb9\x1c\x03\xcb\xba\x35\x19\xab\x14\x9b\x6c\xd0\xa0\xaf\xbd\xb5\xcf\x1b\x5c\x8a',
299 > b'\xb9\x1c\x03\xcb\xba\x35\x19\xab\x14\x9b\x6c\xd0\xa0\xaf\xbd\xb5\xcf\x1b\x5c\x8a',
300 > ]}]
300 > ]}]
301 > EOF
301 > EOF
302 creating http peer for wire protocol version 2
302 creating http peer for wire protocol version 2
303 sending filesdata command
303 sending filesdata command
304 response: gen[
304 response: gen[
305 {
305 {
306 b'totalitems': 10,
306 b'totalitems': 10,
307 b'totalpaths': 9
307 b'totalpaths': 9
308 },
308 },
309 {
309 {
310 b'path': b'a',
310 b'path': b'a',
311 b'totalitems': 1
311 b'totalitems': 1
312 },
312 },
313 {
313 {
314 b'node': b'\n\x862\x1f\x13y\xd1\xa9\xec\xd0W\x9a"\x97z\xf7\xa5\xac\xaf\x11'
314 b'node': b'\n\x862\x1f\x13y\xd1\xa9\xec\xd0W\x9a"\x97z\xf7\xa5\xac\xaf\x11'
315 },
315 },
316 {
316 {
317 b'path': b'b',
317 b'path': b'b',
318 b'totalitems': 1
318 b'totalitems': 1
319 },
319 },
320 {
320 {
321 b'node': b'\x88\xbac\xb8\xd8\xc6 :\xc6z\xc9\x98\xac\xd9\x17K\xf7\x05!\xb2'
321 b'node': b'\x88\xbac\xb8\xd8\xc6 :\xc6z\xc9\x98\xac\xd9\x17K\xf7\x05!\xb2'
322 },
322 },
323 {
323 {
324 b'path': b'dir0/c',
324 b'path': b'dir0/c',
325 b'totalitems': 1
325 b'totalitems': 1
326 },
326 },
327 {
327 {
328 b'node': b'\x91DE4j\x0c\xa0b\x9b\xd4|\xeb]\xfe\x07\xe4\xd4\xcf%\x01'
328 b'node': b'\x91DE4j\x0c\xa0b\x9b\xd4|\xeb]\xfe\x07\xe4\xd4\xcf%\x01'
329 },
329 },
330 {
330 {
331 b'path': b'dir0/child0/e',
331 b'path': b'dir0/child0/e',
332 b'totalitems': 1
332 b'totalitems': 1
333 },
333 },
334 {
334 {
335 b'node': b'\xbb\xbal\x06\xb3\x0fD=4\xff\x84\x1b\xc9\x85\xc4\xd0\x82|k\xe4'
335 b'node': b'\xbb\xbal\x06\xb3\x0fD=4\xff\x84\x1b\xc9\x85\xc4\xd0\x82|k\xe4'
336 },
336 },
337 {
337 {
338 b'path': b'dir0/child1/f',
338 b'path': b'dir0/child1/f',
339 b'totalitems': 2
339 b'totalitems': 2
340 },
340 },
341 {
341 {
342 b'node': b'\x12\xfc}\xcdw;Z\n\x92\x9c\xe1\x95"\x80\x83\xc6\xdd\xc9\xce\xc4'
342 b'node': b'\x12\xfc}\xcdw;Z\n\x92\x9c\xe1\x95"\x80\x83\xc6\xdd\xc9\xce\xc4'
343 },
343 },
344 {
344 {
345 b'node': b'(\xc7v\xae\x08\xd0\xd5^\xb4\x06H\xb4\x01\xb9\x0f\xf5DH4\x8e'
345 b'node': b'(\xc7v\xae\x08\xd0\xd5^\xb4\x06H\xb4\x01\xb9\x0f\xf5DH4\x8e'
346 },
346 },
347 {
347 {
348 b'path': b'dir0/d',
348 b'path': b'dir0/d',
349 b'totalitems': 1
349 b'totalitems': 1
350 },
350 },
351 {
351 {
352 b'node': b'\x93\x88)\xad\x01R}2\xba\x06_\x81#6\xfe\xc7\x9d\xdd9G'
352 b'node': b'\x93\x88)\xad\x01R}2\xba\x06_\x81#6\xfe\xc7\x9d\xdd9G'
353 },
353 },
354 {
354 {
355 b'path': b'dir0/i',
355 b'path': b'dir0/i',
356 b'totalitems': 1
356 b'totalitems': 1
357 },
357 },
358 {
358 {
359 b'node': b'\xd7t\xb5\x80Jq\xfd1\xe1\xae\x05\xea\x8e2\xdd\x9b\xa3\xd8S\xd7'
359 b'node': b'\xd7t\xb5\x80Jq\xfd1\xe1\xae\x05\xea\x8e2\xdd\x9b\xa3\xd8S\xd7'
360 },
360 },
361 {
361 {
362 b'path': b'g',
362 b'path': b'g',
363 b'totalitems': 1
363 b'totalitems': 1
364 },
364 },
365 {
365 {
366 b'node': b'\xde\xca\xba5DFjI\x95r\xe9\x0f\xac\xe6\xfa\x0c!k\xba\x8c'
366 b'node': b'\xde\xca\xba5DFjI\x95r\xe9\x0f\xac\xe6\xfa\x0c!k\xba\x8c'
367 },
367 },
368 {
368 {
369 b'path': b'h',
369 b'path': b'h',
370 b'totalitems': 1
370 b'totalitems': 1
371 },
371 },
372 {
372 {
373 b'node': b'\x03A\xfc\x84\x1b\xb5\xb4\xba\x93\xb2mM\xdaa\xf7y6]\xb3K'
373 b'node': b'\x03A\xfc\x84\x1b\xb5\xb4\xba\x93\xb2mM\xdaa\xf7y6]\xb3K'
374 }
374 }
375 ]
375 ]
376
376
377 Requesting multiple changeset nodes with haveparents sends incremental data for both
377 Requesting multiple changeset nodes with haveparents sends incremental data for both
378
378
379 $ sendhttpv2peer << EOF
379 $ sendhttpv2peer << EOF
380 > command filesdata
380 > command filesdata
381 > revisions eval:[{
381 > revisions eval:[{
382 > b'type': b'changesetexplicit',
382 > b'type': b'changesetexplicit',
383 > b'nodes': [
383 > b'nodes': [
384 > b'\x5b\x0b\x1a\x23\x57\x7e\x20\x5e\xa2\x40\xe3\x9c\x97\x04\xe2\x8d\x76\x97\xcb\xd8',
384 > b'\x5b\x0b\x1a\x23\x57\x7e\x20\x5e\xa2\x40\xe3\x9c\x97\x04\xe2\x8d\x76\x97\xcb\xd8',
385 > b'\xb9\x1c\x03\xcb\xba\x35\x19\xab\x14\x9b\x6c\xd0\xa0\xaf\xbd\xb5\xcf\x1b\x5c\x8a',
385 > b'\xb9\x1c\x03\xcb\xba\x35\x19\xab\x14\x9b\x6c\xd0\xa0\xaf\xbd\xb5\xcf\x1b\x5c\x8a',
386 > ]}]
386 > ]}]
387 > haveparents eval:True
387 > haveparents eval:True
388 > EOF
388 > EOF
389 creating http peer for wire protocol version 2
389 creating http peer for wire protocol version 2
390 sending filesdata command
390 sending filesdata command
391 response: gen[
391 response: gen[
392 {
392 {
393 b'totalitems': 6,
393 b'totalitems': 6,
394 b'totalpaths': 6
394 b'totalpaths': 6
395 },
395 },
396 {
396 {
397 b'path': b'a',
397 b'path': b'a',
398 b'totalitems': 1
398 b'totalitems': 1
399 },
399 },
400 {
400 {
401 b'node': b'\n\x862\x1f\x13y\xd1\xa9\xec\xd0W\x9a"\x97z\xf7\xa5\xac\xaf\x11'
401 b'node': b'\n\x862\x1f\x13y\xd1\xa9\xec\xd0W\x9a"\x97z\xf7\xa5\xac\xaf\x11'
402 },
402 },
403 {
403 {
404 b'path': b'dir0/child1/f',
404 b'path': b'dir0/child1/f',
405 b'totalitems': 1
405 b'totalitems': 1
406 },
406 },
407 {
407 {
408 b'node': b'(\xc7v\xae\x08\xd0\xd5^\xb4\x06H\xb4\x01\xb9\x0f\xf5DH4\x8e'
408 b'node': b'(\xc7v\xae\x08\xd0\xd5^\xb4\x06H\xb4\x01\xb9\x0f\xf5DH4\x8e'
409 },
409 },
410 {
410 {
411 b'path': b'dir0/d',
411 b'path': b'dir0/d',
412 b'totalitems': 1
412 b'totalitems': 1
413 },
413 },
414 {
414 {
415 b'node': b'\x93\x88)\xad\x01R}2\xba\x06_\x81#6\xfe\xc7\x9d\xdd9G'
415 b'node': b'\x93\x88)\xad\x01R}2\xba\x06_\x81#6\xfe\xc7\x9d\xdd9G'
416 },
416 },
417 {
417 {
418 b'path': b'dir0/i',
418 b'path': b'dir0/i',
419 b'totalitems': 1
419 b'totalitems': 1
420 },
420 },
421 {
421 {
422 b'node': b'\xd7t\xb5\x80Jq\xfd1\xe1\xae\x05\xea\x8e2\xdd\x9b\xa3\xd8S\xd7'
422 b'node': b'\xd7t\xb5\x80Jq\xfd1\xe1\xae\x05\xea\x8e2\xdd\x9b\xa3\xd8S\xd7'
423 },
423 },
424 {
424 {
425 b'path': b'g',
425 b'path': b'g',
426 b'totalitems': 1
426 b'totalitems': 1
427 },
427 },
428 {
428 {
429 b'node': b'\xde\xca\xba5DFjI\x95r\xe9\x0f\xac\xe6\xfa\x0c!k\xba\x8c'
429 b'node': b'\xde\xca\xba5DFjI\x95r\xe9\x0f\xac\xe6\xfa\x0c!k\xba\x8c'
430 },
430 },
431 {
431 {
432 b'path': b'h',
432 b'path': b'h',
433 b'totalitems': 1
433 b'totalitems': 1
434 },
434 },
435 {
435 {
436 b'node': b'\x03A\xfc\x84\x1b\xb5\xb4\xba\x93\xb2mM\xdaa\xf7y6]\xb3K'
436 b'node': b'\x03A\xfc\x84\x1b\xb5\xb4\xba\x93\xb2mM\xdaa\xf7y6]\xb3K'
437 }
437 }
438 ]
438 ]
439
439
440 Requesting parents works
440 Requesting parents works
441
441
442 $ sendhttpv2peer << EOF
442 $ sendhttpv2peer << EOF
443 > command filesdata
443 > command filesdata
444 > revisions eval:[{
444 > revisions eval:[{
445 > b'type': b'changesetexplicit',
445 > b'type': b'changesetexplicit',
446 > b'nodes': [
446 > b'nodes': [
447 > b'\x5b\x0b\x1a\x23\x57\x7e\x20\x5e\xa2\x40\xe3\x9c\x97\x04\xe2\x8d\x76\x97\xcb\xd8',
447 > b'\x5b\x0b\x1a\x23\x57\x7e\x20\x5e\xa2\x40\xe3\x9c\x97\x04\xe2\x8d\x76\x97\xcb\xd8',
448 > ]}]
448 > ]}]
449 > fields eval:[b'parents']
449 > fields eval:[b'parents']
450 > EOF
450 > EOF
451 creating http peer for wire protocol version 2
451 creating http peer for wire protocol version 2
452 sending filesdata command
452 sending filesdata command
453 response: gen[
453 response: gen[
454 {
454 {
455 b'totalitems': 8,
455 b'totalitems': 8,
456 b'totalpaths': 8
456 b'totalpaths': 8
457 },
457 },
458 {
458 {
459 b'path': b'a',
459 b'path': b'a',
460 b'totalitems': 1
460 b'totalitems': 1
461 },
461 },
462 {
462 {
463 b'node': b'\n\x862\x1f\x13y\xd1\xa9\xec\xd0W\x9a"\x97z\xf7\xa5\xac\xaf\x11',
463 b'node': b'\n\x862\x1f\x13y\xd1\xa9\xec\xd0W\x9a"\x97z\xf7\xa5\xac\xaf\x11',
464 b'parents': [
464 b'parents': [
465 b'd\x9d\x14\x9d\xf4=\x83\x88%#\xb7\xfb\x1ej:\xf6\xf1\x90{9',
465 b'd\x9d\x14\x9d\xf4=\x83\x88%#\xb7\xfb\x1ej:\xf6\xf1\x90{9',
466 b'\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00'
466 b'\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00'
467 ]
467 ]
468 },
468 },
469 {
469 {
470 b'path': b'b',
470 b'path': b'b',
471 b'totalitems': 1
471 b'totalitems': 1
472 },
472 },
473 {
473 {
474 b'node': b'\x88\xbac\xb8\xd8\xc6 :\xc6z\xc9\x98\xac\xd9\x17K\xf7\x05!\xb2',
474 b'node': b'\x88\xbac\xb8\xd8\xc6 :\xc6z\xc9\x98\xac\xd9\x17K\xf7\x05!\xb2',
475 b'parents': [
475 b'parents': [
476 b'\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00',
476 b'\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00',
477 b'\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00'
477 b'\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00'
478 ]
478 ]
479 },
479 },
480 {
480 {
481 b'path': b'dir0/c',
481 b'path': b'dir0/c',
482 b'totalitems': 1
482 b'totalitems': 1
483 },
483 },
484 {
484 {
485 b'node': b'\x91DE4j\x0c\xa0b\x9b\xd4|\xeb]\xfe\x07\xe4\xd4\xcf%\x01',
485 b'node': b'\x91DE4j\x0c\xa0b\x9b\xd4|\xeb]\xfe\x07\xe4\xd4\xcf%\x01',
486 b'parents': [
486 b'parents': [
487 b'\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00',
487 b'\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00',
488 b'\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00'
488 b'\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00'
489 ]
489 ]
490 },
490 },
491 {
491 {
492 b'path': b'dir0/child0/e',
492 b'path': b'dir0/child0/e',
493 b'totalitems': 1
493 b'totalitems': 1
494 },
494 },
495 {
495 {
496 b'node': b'\xbb\xbal\x06\xb3\x0fD=4\xff\x84\x1b\xc9\x85\xc4\xd0\x82|k\xe4',
496 b'node': b'\xbb\xbal\x06\xb3\x0fD=4\xff\x84\x1b\xc9\x85\xc4\xd0\x82|k\xe4',
497 b'parents': [
497 b'parents': [
498 b'\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00',
498 b'\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00',
499 b'\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00'
499 b'\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00'
500 ]
500 ]
501 },
501 },
502 {
502 {
503 b'path': b'dir0/child1/f',
503 b'path': b'dir0/child1/f',
504 b'totalitems': 1
504 b'totalitems': 1
505 },
505 },
506 {
506 {
507 b'node': b'\x12\xfc}\xcdw;Z\n\x92\x9c\xe1\x95"\x80\x83\xc6\xdd\xc9\xce\xc4',
507 b'node': b'\x12\xfc}\xcdw;Z\n\x92\x9c\xe1\x95"\x80\x83\xc6\xdd\xc9\xce\xc4',
508 b'parents': [
508 b'parents': [
509 b'\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00',
509 b'\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00',
510 b'\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00'
510 b'\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00'
511 ]
511 ]
512 },
512 },
513 {
513 {
514 b'path': b'dir0/d',
514 b'path': b'dir0/d',
515 b'totalitems': 1
515 b'totalitems': 1
516 },
516 },
517 {
517 {
518 b'node': b'\x93\x88)\xad\x01R}2\xba\x06_\x81#6\xfe\xc7\x9d\xdd9G',
518 b'node': b'\x93\x88)\xad\x01R}2\xba\x06_\x81#6\xfe\xc7\x9d\xdd9G',
519 b'parents': [
519 b'parents': [
520 b'S\x82\x06\xdc\x97\x1eR\x15@\xd6\x84:\xbf\xe6\xd1`2\xf6\xd4&',
520 b'S\x82\x06\xdc\x97\x1eR\x15@\xd6\x84:\xbf\xe6\xd1`2\xf6\xd4&',
521 b'\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00'
521 b'\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00'
522 ]
522 ]
523 },
523 },
524 {
524 {
525 b'path': b'g',
525 b'path': b'g',
526 b'totalitems': 1
526 b'totalitems': 1
527 },
527 },
528 {
528 {
529 b'node': b'\xde\xca\xba5DFjI\x95r\xe9\x0f\xac\xe6\xfa\x0c!k\xba\x8c',
529 b'node': b'\xde\xca\xba5DFjI\x95r\xe9\x0f\xac\xe6\xfa\x0c!k\xba\x8c',
530 b'parents': [
530 b'parents': [
531 b'\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00',
531 b'\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00',
532 b'\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00'
532 b'\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00'
533 ]
533 ]
534 },
534 },
535 {
535 {
536 b'path': b'h',
536 b'path': b'h',
537 b'totalitems': 1
537 b'totalitems': 1
538 },
538 },
539 {
539 {
540 b'node': b'\x03A\xfc\x84\x1b\xb5\xb4\xba\x93\xb2mM\xdaa\xf7y6]\xb3K',
540 b'node': b'\x03A\xfc\x84\x1b\xb5\xb4\xba\x93\xb2mM\xdaa\xf7y6]\xb3K',
541 b'parents': [
541 b'parents': [
542 b'\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00',
542 b'\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00',
543 b'\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00'
543 b'\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00'
544 ]
544 ]
545 }
545 }
546 ]
546 ]
547
547
548 Requesting revision data works
548 Requesting revision data works
549 (haveparents defaults to False, so fulltext is emitted)
549 (haveparents defaults to False, so fulltext is emitted)
550
550
551 $ sendhttpv2peer << EOF
551 $ sendhttpv2peer << EOF
552 > command filesdata
552 > command filesdata
553 > revisions eval:[{
553 > revisions eval:[{
554 > b'type': b'changesetexplicit',
554 > b'type': b'changesetexplicit',
555 > b'nodes': [
555 > b'nodes': [
556 > b'\x5b\x0b\x1a\x23\x57\x7e\x20\x5e\xa2\x40\xe3\x9c\x97\x04\xe2\x8d\x76\x97\xcb\xd8',
556 > b'\x5b\x0b\x1a\x23\x57\x7e\x20\x5e\xa2\x40\xe3\x9c\x97\x04\xe2\x8d\x76\x97\xcb\xd8',
557 > ]}]
557 > ]}]
558 > fields eval:[b'revision']
558 > fields eval:[b'revision']
559 > EOF
559 > EOF
560 creating http peer for wire protocol version 2
560 creating http peer for wire protocol version 2
561 sending filesdata command
561 sending filesdata command
562 response: gen[
562 response: gen[
563 {
563 {
564 b'totalitems': 8,
564 b'totalitems': 8,
565 b'totalpaths': 8
565 b'totalpaths': 8
566 },
566 },
567 {
567 {
568 b'path': b'a',
568 b'path': b'a',
569 b'totalitems': 1
569 b'totalitems': 1
570 },
570 },
571 {
571 {
572 b'fieldsfollowing': [
572 b'fieldsfollowing': [
573 [
573 [
574 b'revision',
574 b'revision',
575 84
575 84
576 ]
576 ]
577 ],
577 ],
578 b'node': b'\n\x862\x1f\x13y\xd1\xa9\xec\xd0W\x9a"\x97z\xf7\xa5\xac\xaf\x11'
578 b'node': b'\n\x862\x1f\x13y\xd1\xa9\xec\xd0W\x9a"\x97z\xf7\xa5\xac\xaf\x11'
579 },
579 },
580 b'a0\n00000000000000000000000000000000000000\n11111111111111111111111111111111111111\na1\n',
580 b'a0\n00000000000000000000000000000000000000\n11111111111111111111111111111111111111\na1\n',
581 {
581 {
582 b'path': b'b',
582 b'path': b'b',
583 b'totalitems': 1
583 b'totalitems': 1
584 },
584 },
585 {
585 {
586 b'fieldsfollowing': [
586 b'fieldsfollowing': [
587 [
587 [
588 b'revision',
588 b'revision',
589 81
589 81
590 ]
590 ]
591 ],
591 ],
592 b'node': b'\x88\xbac\xb8\xd8\xc6 :\xc6z\xc9\x98\xac\xd9\x17K\xf7\x05!\xb2'
592 b'node': b'\x88\xbac\xb8\xd8\xc6 :\xc6z\xc9\x98\xac\xd9\x17K\xf7\x05!\xb2'
593 },
593 },
594 b'b0\naaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa\nbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbb\n',
594 b'b0\naaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa\nbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbb\n',
595 {
595 {
596 b'path': b'dir0/c',
596 b'path': b'dir0/c',
597 b'totalitems': 1
597 b'totalitems': 1
598 },
598 },
599 {
599 {
600 b'fieldsfollowing': [
600 b'fieldsfollowing': [
601 [
601 [
602 b'revision',
602 b'revision',
603 3
603 3
604 ]
604 ]
605 ],
605 ],
606 b'node': b'\x91DE4j\x0c\xa0b\x9b\xd4|\xeb]\xfe\x07\xe4\xd4\xcf%\x01'
606 b'node': b'\x91DE4j\x0c\xa0b\x9b\xd4|\xeb]\xfe\x07\xe4\xd4\xcf%\x01'
607 },
607 },
608 b'c0\n',
608 b'c0\n',
609 {
609 {
610 b'path': b'dir0/child0/e',
610 b'path': b'dir0/child0/e',
611 b'totalitems': 1
611 b'totalitems': 1
612 },
612 },
613 {
613 {
614 b'fieldsfollowing': [
614 b'fieldsfollowing': [
615 [
615 [
616 b'revision',
616 b'revision',
617 3
617 3
618 ]
618 ]
619 ],
619 ],
620 b'node': b'\xbb\xbal\x06\xb3\x0fD=4\xff\x84\x1b\xc9\x85\xc4\xd0\x82|k\xe4'
620 b'node': b'\xbb\xbal\x06\xb3\x0fD=4\xff\x84\x1b\xc9\x85\xc4\xd0\x82|k\xe4'
621 },
621 },
622 b'e0\n',
622 b'e0\n',
623 {
623 {
624 b'path': b'dir0/child1/f',
624 b'path': b'dir0/child1/f',
625 b'totalitems': 1
625 b'totalitems': 1
626 },
626 },
627 {
627 {
628 b'fieldsfollowing': [
628 b'fieldsfollowing': [
629 [
629 [
630 b'revision',
630 b'revision',
631 3
631 3
632 ]
632 ]
633 ],
633 ],
634 b'node': b'\x12\xfc}\xcdw;Z\n\x92\x9c\xe1\x95"\x80\x83\xc6\xdd\xc9\xce\xc4'
634 b'node': b'\x12\xfc}\xcdw;Z\n\x92\x9c\xe1\x95"\x80\x83\xc6\xdd\xc9\xce\xc4'
635 },
635 },
636 b'f0\n',
636 b'f0\n',
637 {
637 {
638 b'path': b'dir0/d',
638 b'path': b'dir0/d',
639 b'totalitems': 1
639 b'totalitems': 1
640 },
640 },
641 {
641 {
642 b'fieldsfollowing': [
642 b'fieldsfollowing': [
643 [
643 [
644 b'revision',
644 b'revision',
645 3
645 3
646 ]
646 ]
647 ],
647 ],
648 b'node': b'\x93\x88)\xad\x01R}2\xba\x06_\x81#6\xfe\xc7\x9d\xdd9G'
648 b'node': b'\x93\x88)\xad\x01R}2\xba\x06_\x81#6\xfe\xc7\x9d\xdd9G'
649 },
649 },
650 b'd1\n',
650 b'd1\n',
651 {
651 {
652 b'path': b'g',
652 b'path': b'g',
653 b'totalitems': 1
653 b'totalitems': 1
654 },
654 },
655 {
655 {
656 b'fieldsfollowing': [
656 b'fieldsfollowing': [
657 [
657 [
658 b'revision',
658 b'revision',
659 3
659 3
660 ]
660 ]
661 ],
661 ],
662 b'node': b'\xde\xca\xba5DFjI\x95r\xe9\x0f\xac\xe6\xfa\x0c!k\xba\x8c'
662 b'node': b'\xde\xca\xba5DFjI\x95r\xe9\x0f\xac\xe6\xfa\x0c!k\xba\x8c'
663 },
663 },
664 b'g0\n',
664 b'g0\n',
665 {
665 {
666 b'path': b'h',
666 b'path': b'h',
667 b'totalitems': 1
667 b'totalitems': 1
668 },
668 },
669 {
669 {
670 b'fieldsfollowing': [
670 b'fieldsfollowing': [
671 [
671 [
672 b'revision',
672 b'revision',
673 3
673 3
674 ]
674 ]
675 ],
675 ],
676 b'node': b'\x03A\xfc\x84\x1b\xb5\xb4\xba\x93\xb2mM\xdaa\xf7y6]\xb3K'
676 b'node': b'\x03A\xfc\x84\x1b\xb5\xb4\xba\x93\xb2mM\xdaa\xf7y6]\xb3K'
677 },
677 },
678 b'h0\n'
678 b'h0\n'
679 ]
679 ]
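The transcript above shows the request shape for the filesdata command: when haveparents is left at its default of False, the server cannot assume the client holds a delta base, so it sends fulltext 'revision' payloads. Below is a minimal sketch of how such an argument map could be assembled on the client side, assuming plain Python objects that the transport would then CBOR-encode (build_filesdata_args is a hypothetical helper for illustration, not a real Mercurial API):

  def build_filesdata_args(changeset_nodes, fields, haveparents=False):
      # haveparents tells the server whether the client already has the
      # parents of the requested changesets; False (the default, as above)
      # means the server cannot assume a delta base exists on the client
      # and emits fulltext 'revision' payloads instead of deltas.
      return {
          b'revisions': [
              {
                  b'type': b'changesetexplicit',
                  b'nodes': list(changeset_nodes),
              },
          ],
          b'fields': set(fields),
          b'haveparents': haveparents,
      }

  # Mirrors the request sent in the transcript above:
  args = build_filesdata_args(
      [b'\x5b\x0b\x1a\x23\x57\x7e\x20\x5e'
       b'\xa2\x40\xe3\x9c\x97\x04\xe2\x8d\x76\x97\xcb\xd8'],
      [b'revision'])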
680
680
681 haveparents=False should be the same as above
681 haveparents=False should be the same as above
682
682
683 $ sendhttpv2peer << EOF
683 $ sendhttpv2peer << EOF
684 > command filesdata
684 > command filesdata
685 > revisions eval:[{
685 > revisions eval:[{
686 > b'type': b'changesetexplicit',
686 > b'type': b'changesetexplicit',
687 > b'nodes': [
687 > b'nodes': [
688 > b'\x5b\x0b\x1a\x23\x57\x7e\x20\x5e\xa2\x40\xe3\x9c\x97\x04\xe2\x8d\x76\x97\xcb\xd8',
688 > b'\x5b\x0b\x1a\x23\x57\x7e\x20\x5e\xa2\x40\xe3\x9c\x97\x04\xe2\x8d\x76\x97\xcb\xd8',
689 > ]}]
689 > ]}]
690 > fields eval:[b'revision']
690 > fields eval:[b'revision']
691 > haveparents eval:False
691 > haveparents eval:False
692 > EOF
692 > EOF
693 creating http peer for wire protocol version 2
693 creating http peer for wire protocol version 2
694 sending filesdata command
694 sending filesdata command
695 response: gen[
695 response: gen[
696 {
696 {
697 b'totalitems': 8,
697 b'totalitems': 8,
698 b'totalpaths': 8
698 b'totalpaths': 8
699 },
699 },
700 {
700 {
701 b'path': b'a',
701 b'path': b'a',
702 b'totalitems': 1
702 b'totalitems': 1
703 },
703 },
704 {
704 {
705 b'fieldsfollowing': [
705 b'fieldsfollowing': [
706 [
706 [
707 b'revision',
707 b'revision',
708 84
708 84
709 ]
709 ]
710 ],
710 ],
711 b'node': b'\n\x862\x1f\x13y\xd1\xa9\xec\xd0W\x9a"\x97z\xf7\xa5\xac\xaf\x11'
711 b'node': b'\n\x862\x1f\x13y\xd1\xa9\xec\xd0W\x9a"\x97z\xf7\xa5\xac\xaf\x11'
712 },
712 },
713 b'a0\n00000000000000000000000000000000000000\n11111111111111111111111111111111111111\na1\n',
713 b'a0\n00000000000000000000000000000000000000\n11111111111111111111111111111111111111\na1\n',
714 {
714 {
715 b'path': b'b',
715 b'path': b'b',
716 b'totalitems': 1
716 b'totalitems': 1
717 },
717 },
718 {
718 {
719 b'fieldsfollowing': [
719 b'fieldsfollowing': [
720 [
720 [
721 b'revision',
721 b'revision',
722 81
722 81
723 ]
723 ]
724 ],
724 ],
725 b'node': b'\x88\xbac\xb8\xd8\xc6 :\xc6z\xc9\x98\xac\xd9\x17K\xf7\x05!\xb2'
725 b'node': b'\x88\xbac\xb8\xd8\xc6 :\xc6z\xc9\x98\xac\xd9\x17K\xf7\x05!\xb2'
726 },
726 },
727 b'b0\naaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa\nbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbb\n',
727 b'b0\naaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa\nbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbb\n',
728 {
728 {
729 b'path': b'dir0/c',
729 b'path': b'dir0/c',
730 b'totalitems': 1
730 b'totalitems': 1
731 },
731 },
732 {
732 {
733 b'fieldsfollowing': [
733 b'fieldsfollowing': [
734 [
734 [
735 b'revision',
735 b'revision',
736 3
736 3
737 ]
737 ]
738 ],
738 ],
739 b'node': b'\x91DE4j\x0c\xa0b\x9b\xd4|\xeb]\xfe\x07\xe4\xd4\xcf%\x01'
739 b'node': b'\x91DE4j\x0c\xa0b\x9b\xd4|\xeb]\xfe\x07\xe4\xd4\xcf%\x01'
740 },
740 },
741 b'c0\n',
741 b'c0\n',
742 {
742 {
743 b'path': b'dir0/child0/e',
743 b'path': b'dir0/child0/e',
744 b'totalitems': 1
744 b'totalitems': 1
745 },
745 },
746 {
746 {
747 b'fieldsfollowing': [
747 b'fieldsfollowing': [
748 [
748 [
749 b'revision',
749 b'revision',
750 3
750 3
751 ]
751 ]
752 ],
752 ],
753 b'node': b'\xbb\xbal\x06\xb3\x0fD=4\xff\x84\x1b\xc9\x85\xc4\xd0\x82|k\xe4'
753 b'node': b'\xbb\xbal\x06\xb3\x0fD=4\xff\x84\x1b\xc9\x85\xc4\xd0\x82|k\xe4'
754 },
754 },
755 b'e0\n',
755 b'e0\n',
756 {
756 {
757 b'path': b'dir0/child1/f',
757 b'path': b'dir0/child1/f',
758 b'totalitems': 1
758 b'totalitems': 1
759 },
759 },
760 {
760 {
761 b'fieldsfollowing': [
761 b'fieldsfollowing': [
762 [
762 [
763 b'revision',
763 b'revision',
764 3
764 3
765 ]
765 ]
766 ],
766 ],
767 b'node': b'\x12\xfc}\xcdw;Z\n\x92\x9c\xe1\x95"\x80\x83\xc6\xdd\xc9\xce\xc4'
767 b'node': b'\x12\xfc}\xcdw;Z\n\x92\x9c\xe1\x95"\x80\x83\xc6\xdd\xc9\xce\xc4'
768 },
768 },
769 b'f0\n',
769 b'f0\n',
770 {
770 {
771 b'path': b'dir0/d',
771 b'path': b'dir0/d',
772 b'totalitems': 1
772 b'totalitems': 1
773 },
773 },
774 {
774 {
775 b'fieldsfollowing': [
775 b'fieldsfollowing': [
776 [
776 [
777 b'revision',
777 b'revision',
778 3
778 3
779 ]
779 ]
780 ],
780 ],
781 b'node': b'\x93\x88)\xad\x01R}2\xba\x06_\x81#6\xfe\xc7\x9d\xdd9G'
781 b'node': b'\x93\x88)\xad\x01R}2\xba\x06_\x81#6\xfe\xc7\x9d\xdd9G'
782 },
782 },
783 b'd1\n',
783 b'd1\n',
784 {
784 {
785 b'path': b'g',
785 b'path': b'g',
786 b'totalitems': 1
786 b'totalitems': 1
787 },
787 },
788 {
788 {
789 b'fieldsfollowing': [
789 b'fieldsfollowing': [
790 [
790 [
791 b'revision',
791 b'revision',
792 3
792 3
793 ]
793 ]
794 ],
794 ],
795 b'node': b'\xde\xca\xba5DFjI\x95r\xe9\x0f\xac\xe6\xfa\x0c!k\xba\x8c'
795 b'node': b'\xde\xca\xba5DFjI\x95r\xe9\x0f\xac\xe6\xfa\x0c!k\xba\x8c'
796 },
796 },
797 b'g0\n',
797 b'g0\n',
798 {
798 {
799 b'path': b'h',
799 b'path': b'h',
800 b'totalitems': 1
800 b'totalitems': 1
801 },
801 },
802 {
802 {
803 b'fieldsfollowing': [
803 b'fieldsfollowing': [
804 [
804 [
805 b'revision',
805 b'revision',
806 3
806 3
807 ]
807 ]
808 ],
808 ],
809 b'node': b'\x03A\xfc\x84\x1b\xb5\xb4\xba\x93\xb2mM\xdaa\xf7y6]\xb3K'
809 b'node': b'\x03A\xfc\x84\x1b\xb5\xb4\xba\x93\xb2mM\xdaa\xf7y6]\xb3K'
810 },
810 },
811 b'h0\n'
811 b'h0\n'
812 ]
812 ]
813
813
814 haveparents=True should emit a delta
814 haveparents=True should emit a delta
815
815
816 $ sendhttpv2peer << EOF
816 $ sendhttpv2peer << EOF
817 > command filesdata
817 > command filesdata
818 > revisions eval:[{
818 > revisions eval:[{
819 > b'type': b'changesetexplicit',
819 > b'type': b'changesetexplicit',
820 > b'nodes': [
820 > b'nodes': [
821 > b'\x5b\x0b\x1a\x23\x57\x7e\x20\x5e\xa2\x40\xe3\x9c\x97\x04\xe2\x8d\x76\x97\xcb\xd8',
821 > b'\x5b\x0b\x1a\x23\x57\x7e\x20\x5e\xa2\x40\xe3\x9c\x97\x04\xe2\x8d\x76\x97\xcb\xd8',
822 > ]}]
822 > ]}]
823 > fields eval:[b'revision']
823 > fields eval:[b'revision']
824 > haveparents eval:True
824 > haveparents eval:True
825 > EOF
825 > EOF
826 creating http peer for wire protocol version 2
826 creating http peer for wire protocol version 2
827 sending filesdata command
827 sending filesdata command
828 response: gen[
828 response: gen[
829 {
829 {
830 b'totalitems': 4,
830 b'totalitems': 4,
831 b'totalpaths': 4
831 b'totalpaths': 4
832 },
832 },
833 {
833 {
834 b'path': b'a',
834 b'path': b'a',
835 b'totalitems': 1
835 b'totalitems': 1
836 },
836 },
837 {
837 {
838 b'deltabasenode': b'd\x9d\x14\x9d\xf4=\x83\x88%#\xb7\xfb\x1ej:\xf6\xf1\x90{9',
838 b'deltabasenode': b'd\x9d\x14\x9d\xf4=\x83\x88%#\xb7\xfb\x1ej:\xf6\xf1\x90{9',
839 b'fieldsfollowing': [
839 b'fieldsfollowing': [
840 [
840 [
841 b'delta',
841 b'delta',
842 15
842 15
843 ]
843 ]
844 ],
844 ],
845 b'node': b'\n\x862\x1f\x13y\xd1\xa9\xec\xd0W\x9a"\x97z\xf7\xa5\xac\xaf\x11'
845 b'node': b'\n\x862\x1f\x13y\xd1\xa9\xec\xd0W\x9a"\x97z\xf7\xa5\xac\xaf\x11'
846 },
846 },
847 b'\x00\x00\x00Q\x00\x00\x00Q\x00\x00\x00\x03a1\n',
847 b'\x00\x00\x00Q\x00\x00\x00Q\x00\x00\x00\x03a1\n',
848 {
848 {
849 b'path': b'dir0/d',
849 b'path': b'dir0/d',
850 b'totalitems': 1
850 b'totalitems': 1
851 },
851 },
852 {
852 {
853 b'deltabasenode': b'S\x82\x06\xdc\x97\x1eR\x15@\xd6\x84:\xbf\xe6\xd1`2\xf6\xd4&',
853 b'deltabasenode': b'S\x82\x06\xdc\x97\x1eR\x15@\xd6\x84:\xbf\xe6\xd1`2\xf6\xd4&',
854 b'fieldsfollowing': [
854 b'fieldsfollowing': [
855 [
855 [
856 b'delta',
856 b'delta',
857 15
857 15
858 ]
858 ]
859 ],
859 ],
860 b'node': b'\x93\x88)\xad\x01R}2\xba\x06_\x81#6\xfe\xc7\x9d\xdd9G'
860 b'node': b'\x93\x88)\xad\x01R}2\xba\x06_\x81#6\xfe\xc7\x9d\xdd9G'
861 },
861 },
862 b'\x00\x00\x00\x00\x00\x00\x00\x03\x00\x00\x00\x03d1\n',
862 b'\x00\x00\x00\x00\x00\x00\x00\x03\x00\x00\x00\x03d1\n',
863 {
863 {
864 b'path': b'g',
864 b'path': b'g',
865 b'totalitems': 1
865 b'totalitems': 1
866 },
866 },
867 {
867 {
868 b'fieldsfollowing': [
868 b'fieldsfollowing': [
869 [
869 [
870 b'revision',
870 b'revision',
871 3
871 3
872 ]
872 ]
873 ],
873 ],
874 b'node': b'\xde\xca\xba5DFjI\x95r\xe9\x0f\xac\xe6\xfa\x0c!k\xba\x8c'
874 b'node': b'\xde\xca\xba5DFjI\x95r\xe9\x0f\xac\xe6\xfa\x0c!k\xba\x8c'
875 },
875 },
876 b'g0\n',
876 b'g0\n',
877 {
877 {
878 b'path': b'h',
878 b'path': b'h',
879 b'totalitems': 1
879 b'totalitems': 1
880 },
880 },
881 {
881 {
882 b'fieldsfollowing': [
882 b'fieldsfollowing': [
883 [
883 [
884 b'revision',
884 b'revision',
885 3
885 3
886 ]
886 ]
887 ],
887 ],
888 b'node': b'\x03A\xfc\x84\x1b\xb5\xb4\xba\x93\xb2mM\xdaa\xf7y6]\xb3K'
888 b'node': b'\x03A\xfc\x84\x1b\xb5\xb4\xba\x93\xb2mM\xdaa\xf7y6]\xb3K'
889 },
889 },
890 b'h0\n'
890 b'h0\n'
891 ]
891 ]
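With haveparents=True, the entries for 'a' and 'dir0/d' above carry a deltabasenode plus a 'delta' payload instead of a fulltext. The payload follows the revlog delta encoding: repeated (start, end, datalen) big-endian uint32 headers, each followed by datalen bytes that replace base[start:end]. The sketch below is a simplified illustration of applying one such delta, not Mercurial's actual patch routine:

  import struct

  def apply_delta(base, delta):
      out = []
      pos = 0  # read position in the base fulltext
      i = 0    # read position in the delta stream
      while i < len(delta):
          start, end, datalen = struct.unpack('>III', delta[i:i + 12])
          i += 12
          out.append(base[pos:start])       # unchanged bytes before the hunk
          out.append(delta[i:i + datalen])  # replacement bytes
          i += datalen
          pos = end
      out.append(base[pos:])                # unchanged tail
      return b''.join(out)

  # The delta for 'a' above, b'\x00\x00\x00Q\x00\x00\x00Q\x00\x00\x00\x03a1\n',
  # inserts the 3 bytes b'a1\n' at offset 0x51 (81) of the parent fulltext,
  # producing the 84-byte revision reported earlier in this file.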
892
892
893 Requesting multiple revisions works
893 Requesting multiple revisions works
894 (first revision is a fulltext since haveparents=False by default)
894 (first revision is a fulltext since haveparents=False by default)
895
895
896 $ sendhttpv2peer << EOF
896 $ sendhttpv2peer << EOF
897 > command filesdata
897 > command filesdata
898 > revisions eval:[{
898 > revisions eval:[{
899 > b'type': b'changesetexplicit',
899 > b'type': b'changesetexplicit',
900 > b'nodes': [
900 > b'nodes': [
901 > b'\x6e\x87\x5f\xf1\x8c\x22\x76\x59\xad\x61\x43\xbb\x35\x80\xc6\x57\x00\x73\x48\x84',
901 > b'\x6e\x87\x5f\xf1\x8c\x22\x76\x59\xad\x61\x43\xbb\x35\x80\xc6\x57\x00\x73\x48\x84',
902 > b'\x5b\x0b\x1a\x23\x57\x7e\x20\x5e\xa2\x40\xe3\x9c\x97\x04\xe2\x8d\x76\x97\xcb\xd8',
902 > b'\x5b\x0b\x1a\x23\x57\x7e\x20\x5e\xa2\x40\xe3\x9c\x97\x04\xe2\x8d\x76\x97\xcb\xd8',
903 > b'\xb9\x1c\x03\xcb\xba\x35\x19\xab\x14\x9b\x6c\xd0\xa0\xaf\xbd\xb5\xcf\x1b\x5c\x8a',
903 > b'\xb9\x1c\x03\xcb\xba\x35\x19\xab\x14\x9b\x6c\xd0\xa0\xaf\xbd\xb5\xcf\x1b\x5c\x8a',
904 > ]}]
904 > ]}]
905 > fields eval:[b'revision']
905 > fields eval:[b'revision']
906 > EOF
906 > EOF
907 creating http peer for wire protocol version 2
907 creating http peer for wire protocol version 2
908 sending filesdata command
908 sending filesdata command
909 response: gen[
909 response: gen[
910 {
910 {
911 b'totalitems': 12,
911 b'totalitems': 12,
912 b'totalpaths': 9
912 b'totalpaths': 9
913 },
913 },
914 {
914 {
915 b'path': b'a',
915 b'path': b'a',
916 b'totalitems': 2
916 b'totalitems': 2
917 },
917 },
918 {
918 {
919 b'fieldsfollowing': [
919 b'fieldsfollowing': [
920 [
920 [
921 b'revision',
921 b'revision',
922 81
922 81
923 ]
923 ]
924 ],
924 ],
925 b'node': b'd\x9d\x14\x9d\xf4=\x83\x88%#\xb7\xfb\x1ej:\xf6\xf1\x90{9'
925 b'node': b'd\x9d\x14\x9d\xf4=\x83\x88%#\xb7\xfb\x1ej:\xf6\xf1\x90{9'
926 },
926 },
927 b'a0\n00000000000000000000000000000000000000\n11111111111111111111111111111111111111\n',
927 b'a0\n00000000000000000000000000000000000000\n11111111111111111111111111111111111111\n',
928 {
928 {
929 b'deltabasenode': b'd\x9d\x14\x9d\xf4=\x83\x88%#\xb7\xfb\x1ej:\xf6\xf1\x90{9',
929 b'deltabasenode': b'd\x9d\x14\x9d\xf4=\x83\x88%#\xb7\xfb\x1ej:\xf6\xf1\x90{9',
930 b'fieldsfollowing': [
930 b'fieldsfollowing': [
931 [
931 [
932 b'delta',
932 b'delta',
933 15
933 15
934 ]
934 ]
935 ],
935 ],
936 b'node': b'\n\x862\x1f\x13y\xd1\xa9\xec\xd0W\x9a"\x97z\xf7\xa5\xac\xaf\x11'
936 b'node': b'\n\x862\x1f\x13y\xd1\xa9\xec\xd0W\x9a"\x97z\xf7\xa5\xac\xaf\x11'
937 },
937 },
938 b'\x00\x00\x00Q\x00\x00\x00Q\x00\x00\x00\x03a1\n',
938 b'\x00\x00\x00Q\x00\x00\x00Q\x00\x00\x00\x03a1\n',
939 {
939 {
940 b'path': b'b',
940 b'path': b'b',
941 b'totalitems': 1
941 b'totalitems': 1
942 },
942 },
943 {
943 {
944 b'fieldsfollowing': [
944 b'fieldsfollowing': [
945 [
945 [
946 b'revision',
946 b'revision',
947 81
947 81
948 ]
948 ]
949 ],
949 ],
950 b'node': b'\x88\xbac\xb8\xd8\xc6 :\xc6z\xc9\x98\xac\xd9\x17K\xf7\x05!\xb2'
950 b'node': b'\x88\xbac\xb8\xd8\xc6 :\xc6z\xc9\x98\xac\xd9\x17K\xf7\x05!\xb2'
951 },
951 },
952 b'b0\naaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa\nbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbb\n',
952 b'b0\naaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa\nbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbb\n',
953 {
953 {
954 b'path': b'dir0/c',
954 b'path': b'dir0/c',
955 b'totalitems': 1
955 b'totalitems': 1
956 },
956 },
957 {
957 {
958 b'fieldsfollowing': [
958 b'fieldsfollowing': [
959 [
959 [
960 b'revision',
960 b'revision',
961 3
961 3
962 ]
962 ]
963 ],
963 ],
964 b'node': b'\x91DE4j\x0c\xa0b\x9b\xd4|\xeb]\xfe\x07\xe4\xd4\xcf%\x01'
964 b'node': b'\x91DE4j\x0c\xa0b\x9b\xd4|\xeb]\xfe\x07\xe4\xd4\xcf%\x01'
965 },
965 },
966 b'c0\n',
966 b'c0\n',
967 {
967 {
968 b'path': b'dir0/child0/e',
968 b'path': b'dir0/child0/e',
969 b'totalitems': 1
969 b'totalitems': 1
970 },
970 },
971 {
971 {
972 b'fieldsfollowing': [
972 b'fieldsfollowing': [
973 [
973 [
974 b'revision',
974 b'revision',
975 3
975 3
976 ]
976 ]
977 ],
977 ],
978 b'node': b'\xbb\xbal\x06\xb3\x0fD=4\xff\x84\x1b\xc9\x85\xc4\xd0\x82|k\xe4'
978 b'node': b'\xbb\xbal\x06\xb3\x0fD=4\xff\x84\x1b\xc9\x85\xc4\xd0\x82|k\xe4'
979 },
979 },
980 b'e0\n',
980 b'e0\n',
981 {
981 {
982 b'path': b'dir0/child1/f',
982 b'path': b'dir0/child1/f',
983 b'totalitems': 2
983 b'totalitems': 2
984 },
984 },
985 {
985 {
986 b'fieldsfollowing': [
986 b'fieldsfollowing': [
987 [
987 [
988 b'revision',
988 b'revision',
989 3
989 3
990 ]
990 ]
991 ],
991 ],
992 b'node': b'\x12\xfc}\xcdw;Z\n\x92\x9c\xe1\x95"\x80\x83\xc6\xdd\xc9\xce\xc4'
992 b'node': b'\x12\xfc}\xcdw;Z\n\x92\x9c\xe1\x95"\x80\x83\xc6\xdd\xc9\xce\xc4'
993 },
993 },
994 b'f0\n',
994 b'f0\n',
995 {
995 {
996 b'deltabasenode': b'\x12\xfc}\xcdw;Z\n\x92\x9c\xe1\x95"\x80\x83\xc6\xdd\xc9\xce\xc4',
996 b'deltabasenode': b'\x12\xfc}\xcdw;Z\n\x92\x9c\xe1\x95"\x80\x83\xc6\xdd\xc9\xce\xc4',
997 b'fieldsfollowing': [
997 b'fieldsfollowing': [
998 [
998 [
999 b'delta',
999 b'delta',
1000 15
1000 15
1001 ]
1001 ]
1002 ],
1002 ],
1003 b'node': b'(\xc7v\xae\x08\xd0\xd5^\xb4\x06H\xb4\x01\xb9\x0f\xf5DH4\x8e'
1003 b'node': b'(\xc7v\xae\x08\xd0\xd5^\xb4\x06H\xb4\x01\xb9\x0f\xf5DH4\x8e'
1004 },
1004 },
1005 b'\x00\x00\x00\x00\x00\x00\x00\x03\x00\x00\x00\x03f1\n',
1005 b'\x00\x00\x00\x00\x00\x00\x00\x03\x00\x00\x00\x03f1\n',
1006 {
1006 {
1007 b'path': b'dir0/d',
1007 b'path': b'dir0/d',
1008 b'totalitems': 2
1008 b'totalitems': 2
1009 },
1009 },
1010 {
1010 {
1011 b'fieldsfollowing': [
1011 b'fieldsfollowing': [
1012 [
1012 [
1013 b'revision',
1013 b'revision',
1014 3
1014 3
1015 ]
1015 ]
1016 ],
1016 ],
1017 b'node': b'S\x82\x06\xdc\x97\x1eR\x15@\xd6\x84:\xbf\xe6\xd1`2\xf6\xd4&'
1017 b'node': b'S\x82\x06\xdc\x97\x1eR\x15@\xd6\x84:\xbf\xe6\xd1`2\xf6\xd4&'
1018 },
1018 },
1019 b'd0\n',
1019 b'd0\n',
1020 {
1020 {
1021 b'deltabasenode': b'S\x82\x06\xdc\x97\x1eR\x15@\xd6\x84:\xbf\xe6\xd1`2\xf6\xd4&',
1021 b'deltabasenode': b'S\x82\x06\xdc\x97\x1eR\x15@\xd6\x84:\xbf\xe6\xd1`2\xf6\xd4&',
1022 b'fieldsfollowing': [
1022 b'fieldsfollowing': [
1023 [
1023 [
1024 b'delta',
1024 b'delta',
1025 15
1025 15
1026 ]
1026 ]
1027 ],
1027 ],
1028 b'node': b'\x93\x88)\xad\x01R}2\xba\x06_\x81#6\xfe\xc7\x9d\xdd9G'
1028 b'node': b'\x93\x88)\xad\x01R}2\xba\x06_\x81#6\xfe\xc7\x9d\xdd9G'
1029 },
1029 },
1030 b'\x00\x00\x00\x00\x00\x00\x00\x03\x00\x00\x00\x03d1\n',
1030 b'\x00\x00\x00\x00\x00\x00\x00\x03\x00\x00\x00\x03d1\n',
1031 {
1031 {
1032 b'path': b'dir0/i',
1032 b'path': b'dir0/i',
1033 b'totalitems': 1
1033 b'totalitems': 1
1034 },
1034 },
1035 {
1035 {
1036 b'fieldsfollowing': [
1036 b'fieldsfollowing': [
1037 [
1037 [
1038 b'revision',
1038 b'revision',
1039 3
1039 3
1040 ]
1040 ]
1041 ],
1041 ],
1042 b'node': b'\xd7t\xb5\x80Jq\xfd1\xe1\xae\x05\xea\x8e2\xdd\x9b\xa3\xd8S\xd7'
1042 b'node': b'\xd7t\xb5\x80Jq\xfd1\xe1\xae\x05\xea\x8e2\xdd\x9b\xa3\xd8S\xd7'
1043 },
1043 },
1044 b'i0\n',
1044 b'i0\n',
1045 {
1045 {
1046 b'path': b'g',
1046 b'path': b'g',
1047 b'totalitems': 1
1047 b'totalitems': 1
1048 },
1048 },
1049 {
1049 {
1050 b'fieldsfollowing': [
1050 b'fieldsfollowing': [
1051 [
1051 [
1052 b'revision',
1052 b'revision',
1053 3
1053 3
1054 ]
1054 ]
1055 ],
1055 ],
1056 b'node': b'\xde\xca\xba5DFjI\x95r\xe9\x0f\xac\xe6\xfa\x0c!k\xba\x8c'
1056 b'node': b'\xde\xca\xba5DFjI\x95r\xe9\x0f\xac\xe6\xfa\x0c!k\xba\x8c'
1057 },
1057 },
1058 b'g0\n',
1058 b'g0\n',
1059 {
1059 {
1060 b'path': b'h',
1060 b'path': b'h',
1061 b'totalitems': 1
1061 b'totalitems': 1
1062 },
1062 },
1063 {
1063 {
1064 b'fieldsfollowing': [
1064 b'fieldsfollowing': [
1065 [
1065 [
1066 b'revision',
1066 b'revision',
1067 3
1067 3
1068 ]
1068 ]
1069 ],
1069 ],
1070 b'node': b'\x03A\xfc\x84\x1b\xb5\xb4\xba\x93\xb2mM\xdaa\xf7y6]\xb3K'
1070 b'node': b'\x03A\xfc\x84\x1b\xb5\xb4\xba\x93\xb2mM\xdaa\xf7y6]\xb3K'
1071 },
1071 },
1072 b'h0\n'
1072 b'h0\n'
1073 ]
1073 ]
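When several changesets are requested, the server may emit the first revision of a file as a fulltext and later revisions of the same file as deltas against it (as seen for 'a' and 'dir0/d' above), even with haveparents=False, because the delta base is guaranteed to have been emitted earlier in the same response. A hypothetical consumer loop, reusing the apply_delta sketch above and assuming 'entries' yields decoded (metadata map, raw payload) pairs:

  def collect_fulltexts(entries):
      fulltexts = {}  # file node -> resolved fulltext
      for meta, data in entries:
          if b'deltabasenode' in meta:
              # Base emitted earlier in this response; a real client would
              # also fall back to its local store when haveparents=True.
              base = fulltexts[meta[b'deltabasenode']]
              fulltexts[meta[b'node']] = apply_delta(base, data)
          else:
              fulltexts[meta[b'node']] = data
      return fulltexts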
1074
1074
1075 Requesting linknode field works
1075 Requesting linknode field works
1076
1076
1077 $ sendhttpv2peer << EOF
1077 $ sendhttpv2peer << EOF
1078 > command filesdata
1078 > command filesdata
1079 > revisions eval:[{
1079 > revisions eval:[{
1080 > b'type': b'changesetexplicit',
1080 > b'type': b'changesetexplicit',
1081 > b'nodes': [
1081 > b'nodes': [
1082 > b'\x6e\x87\x5f\xf1\x8c\x22\x76\x59\xad\x61\x43\xbb\x35\x80\xc6\x57\x00\x73\x48\x84',
1082 > b'\x6e\x87\x5f\xf1\x8c\x22\x76\x59\xad\x61\x43\xbb\x35\x80\xc6\x57\x00\x73\x48\x84',
1083 > b'\x5b\x0b\x1a\x23\x57\x7e\x20\x5e\xa2\x40\xe3\x9c\x97\x04\xe2\x8d\x76\x97\xcb\xd8',
1083 > b'\x5b\x0b\x1a\x23\x57\x7e\x20\x5e\xa2\x40\xe3\x9c\x97\x04\xe2\x8d\x76\x97\xcb\xd8',
1084 > b'\xb9\x1c\x03\xcb\xba\x35\x19\xab\x14\x9b\x6c\xd0\xa0\xaf\xbd\xb5\xcf\x1b\x5c\x8a',
1084 > b'\xb9\x1c\x03\xcb\xba\x35\x19\xab\x14\x9b\x6c\xd0\xa0\xaf\xbd\xb5\xcf\x1b\x5c\x8a',
1085 > ]}]
1085 > ]}]
1086 > fields eval:[b'linknode']
1086 > fields eval:[b'linknode']
1087 > EOF
1087 > EOF
1088 creating http peer for wire protocol version 2
1088 creating http peer for wire protocol version 2
1089 sending filesdata command
1089 sending filesdata command
1090 response: gen[
1090 response: gen[
1091 {
1091 {
1092 b'totalitems': 12,
1092 b'totalitems': 12,
1093 b'totalpaths': 9
1093 b'totalpaths': 9
1094 },
1094 },
1095 {
1095 {
1096 b'path': b'a',
1096 b'path': b'a',
1097 b'totalitems': 2
1097 b'totalitems': 2
1098 },
1098 },
1099 {
1099 {
1100 b'linknode': b'n\x87_\xf1\x8c"vY\xadaC\xbb5\x80\xc6W\x00sH\x84',
1100 b'linknode': b'n\x87_\xf1\x8c"vY\xadaC\xbb5\x80\xc6W\x00sH\x84',
1101 b'node': b'd\x9d\x14\x9d\xf4=\x83\x88%#\xb7\xfb\x1ej:\xf6\xf1\x90{9'
1101 b'node': b'd\x9d\x14\x9d\xf4=\x83\x88%#\xb7\xfb\x1ej:\xf6\xf1\x90{9'
1102 },
1102 },
1103 {
1103 {
1104 b'linknode': b'[\x0b\x1a#W~ ^\xa2@\xe3\x9c\x97\x04\xe2\x8dv\x97\xcb\xd8',
1104 b'linknode': b'[\x0b\x1a#W~ ^\xa2@\xe3\x9c\x97\x04\xe2\x8dv\x97\xcb\xd8',
1105 b'node': b'\n\x862\x1f\x13y\xd1\xa9\xec\xd0W\x9a"\x97z\xf7\xa5\xac\xaf\x11'
1105 b'node': b'\n\x862\x1f\x13y\xd1\xa9\xec\xd0W\x9a"\x97z\xf7\xa5\xac\xaf\x11'
1106 },
1106 },
1107 {
1107 {
1108 b'path': b'b',
1108 b'path': b'b',
1109 b'totalitems': 1
1109 b'totalitems': 1
1110 },
1110 },
1111 {
1111 {
1112 b'linknode': b'n\x87_\xf1\x8c"vY\xadaC\xbb5\x80\xc6W\x00sH\x84',
1112 b'linknode': b'n\x87_\xf1\x8c"vY\xadaC\xbb5\x80\xc6W\x00sH\x84',
1113 b'node': b'\x88\xbac\xb8\xd8\xc6 :\xc6z\xc9\x98\xac\xd9\x17K\xf7\x05!\xb2'
1113 b'node': b'\x88\xbac\xb8\xd8\xc6 :\xc6z\xc9\x98\xac\xd9\x17K\xf7\x05!\xb2'
1114 },
1114 },
1115 {
1115 {
1116 b'path': b'dir0/c',
1116 b'path': b'dir0/c',
1117 b'totalitems': 1
1117 b'totalitems': 1
1118 },
1118 },
1119 {
1119 {
1120 b'linknode': b'n\x87_\xf1\x8c"vY\xadaC\xbb5\x80\xc6W\x00sH\x84',
1120 b'linknode': b'n\x87_\xf1\x8c"vY\xadaC\xbb5\x80\xc6W\x00sH\x84',
1121 b'node': b'\x91DE4j\x0c\xa0b\x9b\xd4|\xeb]\xfe\x07\xe4\xd4\xcf%\x01'
1121 b'node': b'\x91DE4j\x0c\xa0b\x9b\xd4|\xeb]\xfe\x07\xe4\xd4\xcf%\x01'
1122 },
1122 },
1123 {
1123 {
1124 b'path': b'dir0/child0/e',
1124 b'path': b'dir0/child0/e',
1125 b'totalitems': 1
1125 b'totalitems': 1
1126 },
1126 },
1127 {
1127 {
1128 b'linknode': b'n\x87_\xf1\x8c"vY\xadaC\xbb5\x80\xc6W\x00sH\x84',
1128 b'linknode': b'n\x87_\xf1\x8c"vY\xadaC\xbb5\x80\xc6W\x00sH\x84',
1129 b'node': b'\xbb\xbal\x06\xb3\x0fD=4\xff\x84\x1b\xc9\x85\xc4\xd0\x82|k\xe4'
1129 b'node': b'\xbb\xbal\x06\xb3\x0fD=4\xff\x84\x1b\xc9\x85\xc4\xd0\x82|k\xe4'
1130 },
1130 },
1131 {
1131 {
1132 b'path': b'dir0/child1/f',
1132 b'path': b'dir0/child1/f',
1133 b'totalitems': 2
1133 b'totalitems': 2
1134 },
1134 },
1135 {
1135 {
1136 b'linknode': b'n\x87_\xf1\x8c"vY\xadaC\xbb5\x80\xc6W\x00sH\x84',
1136 b'linknode': b'n\x87_\xf1\x8c"vY\xadaC\xbb5\x80\xc6W\x00sH\x84',
1137 b'node': b'\x12\xfc}\xcdw;Z\n\x92\x9c\xe1\x95"\x80\x83\xc6\xdd\xc9\xce\xc4'
1137 b'node': b'\x12\xfc}\xcdw;Z\n\x92\x9c\xe1\x95"\x80\x83\xc6\xdd\xc9\xce\xc4'
1138 },
1138 },
1139 {
1139 {
1140 b'linknode': b'\xb9\x1c\x03\xcb\xba5\x19\xab\x14\x9bl\xd0\xa0\xaf\xbd\xb5\xcf\x1b\\\x8a',
1140 b'linknode': b'\xb9\x1c\x03\xcb\xba5\x19\xab\x14\x9bl\xd0\xa0\xaf\xbd\xb5\xcf\x1b\\\x8a',
1141 b'node': b'(\xc7v\xae\x08\xd0\xd5^\xb4\x06H\xb4\x01\xb9\x0f\xf5DH4\x8e'
1141 b'node': b'(\xc7v\xae\x08\xd0\xd5^\xb4\x06H\xb4\x01\xb9\x0f\xf5DH4\x8e'
1142 },
1142 },
1143 {
1143 {
1144 b'path': b'dir0/d',
1144 b'path': b'dir0/d',
1145 b'totalitems': 2
1145 b'totalitems': 2
1146 },
1146 },
1147 {
1147 {
1148 b'linknode': b'n\x87_\xf1\x8c"vY\xadaC\xbb5\x80\xc6W\x00sH\x84',
1148 b'linknode': b'n\x87_\xf1\x8c"vY\xadaC\xbb5\x80\xc6W\x00sH\x84',
1149 b'node': b'S\x82\x06\xdc\x97\x1eR\x15@\xd6\x84:\xbf\xe6\xd1`2\xf6\xd4&'
1149 b'node': b'S\x82\x06\xdc\x97\x1eR\x15@\xd6\x84:\xbf\xe6\xd1`2\xf6\xd4&'
1150 },
1150 },
1151 {
1151 {
1152 b'linknode': b'[\x0b\x1a#W~ ^\xa2@\xe3\x9c\x97\x04\xe2\x8dv\x97\xcb\xd8',
1152 b'linknode': b'[\x0b\x1a#W~ ^\xa2@\xe3\x9c\x97\x04\xe2\x8dv\x97\xcb\xd8',
1153 b'node': b'\x93\x88)\xad\x01R}2\xba\x06_\x81#6\xfe\xc7\x9d\xdd9G'
1153 b'node': b'\x93\x88)\xad\x01R}2\xba\x06_\x81#6\xfe\xc7\x9d\xdd9G'
1154 },
1154 },
1155 {
1155 {
1156 b'path': b'dir0/i',
1156 b'path': b'dir0/i',
1157 b'totalitems': 1
1157 b'totalitems': 1
1158 },
1158 },
1159 {
1159 {
1160 b'linknode': b'\xb9\x1c\x03\xcb\xba5\x19\xab\x14\x9bl\xd0\xa0\xaf\xbd\xb5\xcf\x1b\\\x8a',
1160 b'linknode': b'\xb9\x1c\x03\xcb\xba5\x19\xab\x14\x9bl\xd0\xa0\xaf\xbd\xb5\xcf\x1b\\\x8a',
1161 b'node': b'\xd7t\xb5\x80Jq\xfd1\xe1\xae\x05\xea\x8e2\xdd\x9b\xa3\xd8S\xd7'
1161 b'node': b'\xd7t\xb5\x80Jq\xfd1\xe1\xae\x05\xea\x8e2\xdd\x9b\xa3\xd8S\xd7'
1162 },
1162 },
1163 {
1163 {
1164 b'path': b'g',
1164 b'path': b'g',
1165 b'totalitems': 1
1165 b'totalitems': 1
1166 },
1166 },
1167 {
1167 {
1168 b'linknode': b'[\x0b\x1a#W~ ^\xa2@\xe3\x9c\x97\x04\xe2\x8dv\x97\xcb\xd8',
1168 b'linknode': b'[\x0b\x1a#W~ ^\xa2@\xe3\x9c\x97\x04\xe2\x8dv\x97\xcb\xd8',
1169 b'node': b'\xde\xca\xba5DFjI\x95r\xe9\x0f\xac\xe6\xfa\x0c!k\xba\x8c'
1169 b'node': b'\xde\xca\xba5DFjI\x95r\xe9\x0f\xac\xe6\xfa\x0c!k\xba\x8c'
1170 },
1170 },
1171 {
1171 {
1172 b'path': b'h',
1172 b'path': b'h',
1173 b'totalitems': 1
1173 b'totalitems': 1
1174 },
1174 },
1175 {
1175 {
1176 b'linknode': b'[\x0b\x1a#W~ ^\xa2@\xe3\x9c\x97\x04\xe2\x8dv\x97\xcb\xd8',
1176 b'linknode': b'[\x0b\x1a#W~ ^\xa2@\xe3\x9c\x97\x04\xe2\x8dv\x97\xcb\xd8',
1177 b'node': b'\x03A\xfc\x84\x1b\xb5\xb4\xba\x93\xb2mM\xdaa\xf7y6]\xb3K'
1177 b'node': b'\x03A\xfc\x84\x1b\xb5\xb4\xba\x93\xb2mM\xdaa\xf7y6]\xb3K'
1178 }
1178 }
1179 ]
1179 ]
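The linknode returned for each file revision above is always one of the requested changeset nodes: the first requested changeset whose manifest carries that file node. A conceptual sketch of that derivation, assuming we can enumerate (path, filenode) pairs per changeset (the real logic lives in the server-side wire protocol v2 code; this only illustrates the idea):

  def derive_linknodes(changeset_nodes, manifest_entries):
      # manifest_entries: changeset node -> iterable of (path, filenode)
      linknodes = {}  # (path, filenode) -> introducing changeset node
      for csnode in changeset_nodes:  # walked in request order
          for path, fnode in manifest_entries[csnode]:
              # setdefault keeps the first changeset that carried the node
              linknodes.setdefault((path, fnode), csnode)
      return linknodes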
1180
1180
1181 Test behavior where a file node is introduced in 2 DAG heads
1181 Test behavior where a file node is introduced in 2 DAG heads
1182
1182
1183 Request for changeset introducing filenode returns linknode as self
1183 Request for changeset introducing filenode returns linknode as self
1184
1184
1185 $ sendhttpv2peer << EOF
1185 $ sendhttpv2peer << EOF
1186 > command filesdata
1186 > command filesdata
1187 > revisions eval:[{
1187 > revisions eval:[{
1188 > b'type': b'changesetexplicit',
1188 > b'type': b'changesetexplicit',
1189 > b'nodes': [
1189 > b'nodes': [
1190 > b'\xb1\x6c\xce\x29\x67\xc1\x74\x9e\xf4\xf4\xe3\x08\x6a\x80\x6c\xfb\xad\x8a\x3a\xf7',
1190 > b'\xb1\x6c\xce\x29\x67\xc1\x74\x9e\xf4\xf4\xe3\x08\x6a\x80\x6c\xfb\xad\x8a\x3a\xf7',
1191 > ]}]
1191 > ]}]
1192 > fields eval:[b'linknode']
1192 > fields eval:[b'linknode']
1193 > pathfilter eval:{b'include': [b'path:dupe-file']}
1193 > pathfilter eval:{b'include': [b'path:dupe-file']}
1194 > EOF
1194 > EOF
1195 creating http peer for wire protocol version 2
1195 creating http peer for wire protocol version 2
1196 sending filesdata command
1196 sending filesdata command
1197 response: gen[
1197 response: gen[
1198 {
1198 {
1199 b'totalitems': 1,
1199 b'totalitems': 1,
1200 b'totalpaths': 1
1200 b'totalpaths': 1
1201 },
1201 },
1202 {
1202 {
1203 b'path': b'dupe-file',
1203 b'path': b'dupe-file',
1204 b'totalitems': 1
1204 b'totalitems': 1
1205 },
1205 },
1206 {
1206 {
1207 b'linknode': b'\xb1l\xce)g\xc1t\x9e\xf4\xf4\xe3\x08j\x80l\xfb\xad\x8a:\xf7',
1207 b'linknode': b'\xb1l\xce)g\xc1t\x9e\xf4\xf4\xe3\x08j\x80l\xfb\xad\x8a:\xf7',
1208 b'node': b'.\xd2\xa3\x91*\x0b$P C\xea\xe8N\xe4\xb2y\xc1\x8b\x90\xdd'
1208 b'node': b'.\xd2\xa3\x91*\x0b$P C\xea\xe8N\xe4\xb2y\xc1\x8b\x90\xdd'
1209 }
1209 }
1210 ]
1210 ]
1211
1211
1212 $ sendhttpv2peer << EOF
1212 $ sendhttpv2peer << EOF
1213 > command filesdata
1213 > command filesdata
1214 > revisions eval:[{
1214 > revisions eval:[{
1215 > b'type': b'changesetexplicit',
1215 > b'type': b'changesetexplicit',
1216 > b'nodes': [
1216 > b'nodes': [
1217 > b'\xb1\x6c\xce\x29\x67\xc1\x74\x9e\xf4\xf4\xe3\x08\x6a\x80\x6c\xfb\xad\x8a\x3a\xf7',
1217 > b'\xb1\x6c\xce\x29\x67\xc1\x74\x9e\xf4\xf4\xe3\x08\x6a\x80\x6c\xfb\xad\x8a\x3a\xf7',
1218 > ]}]
1218 > ]}]
1219 > fields eval:[b'linknode']
1219 > fields eval:[b'linknode']
1220 > haveparents eval:True
1220 > haveparents eval:True
1221 > pathfilter eval:{b'include': [b'path:dupe-file']}
1221 > pathfilter eval:{b'include': [b'path:dupe-file']}
1222 > EOF
1222 > EOF
1223 creating http peer for wire protocol version 2
1223 creating http peer for wire protocol version 2
1224 sending filesdata command
1224 sending filesdata command
1225 response: gen[
1225 response: gen[
1226 {
1226 {
1227 b'totalitems': 1,
1227 b'totalitems': 1,
1228 b'totalpaths': 1
1228 b'totalpaths': 1
1229 },
1229 },
1230 {
1230 {
1231 b'path': b'dupe-file',
1231 b'path': b'dupe-file',
1232 b'totalitems': 1
1232 b'totalitems': 1
1233 },
1233 },
1234 {
1234 {
1235 b'linknode': b'\xb1l\xce)g\xc1t\x9e\xf4\xf4\xe3\x08j\x80l\xfb\xad\x8a:\xf7',
1235 b'linknode': b'\xb1l\xce)g\xc1t\x9e\xf4\xf4\xe3\x08j\x80l\xfb\xad\x8a:\xf7',
1236 b'node': b'.\xd2\xa3\x91*\x0b$P C\xea\xe8N\xe4\xb2y\xc1\x8b\x90\xdd'
1236 b'node': b'.\xd2\xa3\x91*\x0b$P C\xea\xe8N\xe4\xb2y\xc1\x8b\x90\xdd'
1237 }
1237 }
1238 ]
1238 ]
1239
1239
1240 A request for a changeset whose recorded linknode isn't in the DAG ancestry
1240 A request for a changeset whose recorded linknode isn't in the DAG ancestry
1241 will have the linknode rewritten accordingly
1241 will have the linknode rewritten accordingly
1242
1242
1243 $ sendhttpv2peer << EOF
1243 $ sendhttpv2peer << EOF
1244 > command filesdata
1244 > command filesdata
1245 > revisions eval:[{
1245 > revisions eval:[{
1246 > b'type': b'changesetexplicit',
1246 > b'type': b'changesetexplicit',
1247 > b'nodes': [
1247 > b'nodes': [
1248 > b'\x47\xfc\x30\x58\x09\x11\x23\x2c\xb2\x64\x67\x5b\x40\x28\x19\xde\xdd\xf6\xc6\xf0',
1248 > b'\x47\xfc\x30\x58\x09\x11\x23\x2c\xb2\x64\x67\x5b\x40\x28\x19\xde\xdd\xf6\xc6\xf0',
1249 > ]}]
1249 > ]}]
1250 > fields eval:[b'linknode']
1250 > fields eval:[b'linknode']
1251 > pathfilter eval:{b'include': [b'path:dupe-file']}
1251 > pathfilter eval:{b'include': [b'path:dupe-file']}
1252 > EOF
1252 > EOF
1253 creating http peer for wire protocol version 2
1253 creating http peer for wire protocol version 2
1254 sending filesdata command
1254 sending filesdata command
1255 response: gen[
1255 response: gen[
1256 {
1256 {
1257 b'totalitems': 1,
1257 b'totalitems': 1,
1258 b'totalpaths': 1
1258 b'totalpaths': 1
1259 },
1259 },
1260 {
1260 {
1261 b'path': b'dupe-file',
1261 b'path': b'dupe-file',
1262 b'totalitems': 1
1262 b'totalitems': 1
1263 },
1263 },
1264 {
1264 {
1265 b'linknode': b'G\xfc0X\t\x11#,\xb2dg[@(\x19\xde\xdd\xf6\xc6\xf0',
1265 b'linknode': b'G\xfc0X\t\x11#,\xb2dg[@(\x19\xde\xdd\xf6\xc6\xf0',
1266 b'node': b'.\xd2\xa3\x91*\x0b$P C\xea\xe8N\xe4\xb2y\xc1\x8b\x90\xdd'
1266 b'node': b'.\xd2\xa3\x91*\x0b$P C\xea\xe8N\xe4\xb2y\xc1\x8b\x90\xdd'
1267 }
1267 }
1268 ]
1268 ]
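The rewriting shown above falls out of deriving linknodes from the requested changesets rather than trusting the linkrev recorded in the filelog: when the recorded introducer (here the other DAG head) is not an ancestor of the request, the file node is attributed to the requested changeset instead. A small check against the derive_linknodes sketch given earlier, with shortened hypothetical stand-ins for the real 20-byte nodes:

  # Shortened, hypothetical stand-ins for the node hashes in the transcript.
  requested = [b'47fc3058']
  manifests = {b'47fc3058': [(b'dupe-file', b'2ed2a391')]}
  links = derive_linknodes(requested, manifests)
  assert links[(b'dupe-file', b'2ed2a391')] == b'47fc3058'
  # The filelog may record the other head as the introducer, but that
  # changeset is outside the requested ancestry, so the linknode is
  # rewritten to the requested changeset.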
1269
1269
1270 TODO this is buggy
1271
1272 $ sendhttpv2peer << EOF
1270 $ sendhttpv2peer << EOF
1273 > command filesdata
1271 > command filesdata
1274 > revisions eval:[{
1272 > revisions eval:[{
1275 > b'type': b'changesetexplicit',
1273 > b'type': b'changesetexplicit',
1276 > b'nodes': [
1274 > b'nodes': [
1277 > b'\x47\xfc\x30\x58\x09\x11\x23\x2c\xb2\x64\x67\x5b\x40\x28\x19\xde\xdd\xf6\xc6\xf0',
1275 > b'\x47\xfc\x30\x58\x09\x11\x23\x2c\xb2\x64\x67\x5b\x40\x28\x19\xde\xdd\xf6\xc6\xf0',
1278 > ]}]
1276 > ]}]
1279 > fields eval:[b'linknode']
1277 > fields eval:[b'linknode']
1280 > haveparents eval:True
1278 > haveparents eval:True
1281 > pathfilter eval:{b'include': [b'path:dupe-file']}
1279 > pathfilter eval:{b'include': [b'path:dupe-file']}
1282 > EOF
1280 > EOF
1283 creating http peer for wire protocol version 2
1281 creating http peer for wire protocol version 2
1284 sending filesdata command
1282 sending filesdata command
1285 response: gen[
1283 response: gen[
1286 {
1284 {
1287 b'totalitems': 0,
1285 b'totalitems': 1,
1288 b'totalpaths': 0
1286 b'totalpaths': 1
1287 },
1288 {
1289 b'path': b'dupe-file',
1290 b'totalitems': 1
1291 },
1292 {
1293 b'linknode': b'G\xfc0X\t\x11#,\xb2dg[@(\x19\xde\xdd\xf6\xc6\xf0',
1294 b'node': b'.\xd2\xa3\x91*\x0b$P C\xea\xe8N\xe4\xb2y\xc1\x8b\x90\xdd'
1289 }
1295 }
1290 ]
1296 ]
1291
1297
1292 $ cat error.log
1298 $ cat error.log
@@ -1,1320 +1,1305 b''
1 Tests for wire protocol version 2 exchange.
1 Tests for wire protocol version 2 exchange.
2 Tests in this file should be folded into existing tests once protocol
2 Tests in this file should be folded into existing tests once protocol
3 v2 has enough features that it can be enabled via #testcase in existing
3 v2 has enough features that it can be enabled via #testcase in existing
4 tests.
4 tests.
5
5
6 $ . $TESTDIR/wireprotohelpers.sh
6 $ . $TESTDIR/wireprotohelpers.sh
7 $ enablehttpv2client
7 $ enablehttpv2client
8
8
9 $ hg init server-simple
9 $ hg init server-simple
10 $ enablehttpv2 server-simple
10 $ enablehttpv2 server-simple
11 $ cd server-simple
11 $ cd server-simple
12 $ cat >> .hg/hgrc << EOF
12 $ cat >> .hg/hgrc << EOF
13 > [phases]
13 > [phases]
14 > publish = false
14 > publish = false
15 > EOF
15 > EOF
16 $ echo a0 > a
16 $ echo a0 > a
17 $ echo b0 > b
17 $ echo b0 > b
18 $ hg -q commit -A -m 'commit 0'
18 $ hg -q commit -A -m 'commit 0'
19
19
20 $ echo a1 > a
20 $ echo a1 > a
21 $ hg commit -m 'commit 1'
21 $ hg commit -m 'commit 1'
22 $ hg phase --public -r .
22 $ hg phase --public -r .
23 $ echo a2 > a
23 $ echo a2 > a
24 $ hg commit -m 'commit 2'
24 $ hg commit -m 'commit 2'
25
25
26 $ hg -q up -r 0
26 $ hg -q up -r 0
27 $ echo b1 > b
27 $ echo b1 > b
28 $ hg -q commit -m 'head 2 commit 1'
28 $ hg -q commit -m 'head 2 commit 1'
29 $ echo b2 > b
29 $ echo b2 > b
30 $ hg -q commit -m 'head 2 commit 2'
30 $ hg -q commit -m 'head 2 commit 2'
31
31
32 $ hg serve -p $HGPORT -d --pid-file hg.pid -E error.log
32 $ hg serve -p $HGPORT -d --pid-file hg.pid -E error.log
33 $ cat hg.pid > $DAEMON_PIDS
33 $ cat hg.pid > $DAEMON_PIDS
34
34
35 $ cd ..
35 $ cd ..
36
36
37 Test basic clone
37 Test basic clone
38
38
39 $ hg --debug clone -U http://localhost:$HGPORT client-simple
39 $ hg --debug clone -U http://localhost:$HGPORT client-simple
40 using http://localhost:$HGPORT/
40 using http://localhost:$HGPORT/
41 sending capabilities command
41 sending capabilities command
42 query 1; heads
42 query 1; heads
43 sending 2 commands
43 sending 2 commands
44 sending command heads: {}
44 sending command heads: {}
45 sending command known: {
45 sending command known: {
46 'nodes': []
46 'nodes': []
47 }
47 }
48 received frame(size=9; request=1; stream=2; streamflags=stream-begin; type=stream-settings; flags=eos)
48 received frame(size=9; request=1; stream=2; streamflags=stream-begin; type=stream-settings; flags=eos)
49 received frame(size=11; request=1; stream=2; streamflags=encoded; type=command-response; flags=continuation)
49 received frame(size=11; request=1; stream=2; streamflags=encoded; type=command-response; flags=continuation)
50 received frame(size=43; request=1; stream=2; streamflags=encoded; type=command-response; flags=continuation)
50 received frame(size=43; request=1; stream=2; streamflags=encoded; type=command-response; flags=continuation)
51 received frame(size=0; request=1; stream=2; streamflags=; type=command-response; flags=eos)
51 received frame(size=0; request=1; stream=2; streamflags=; type=command-response; flags=eos)
52 received frame(size=11; request=3; stream=2; streamflags=encoded; type=command-response; flags=continuation)
52 received frame(size=11; request=3; stream=2; streamflags=encoded; type=command-response; flags=continuation)
53 received frame(size=1; request=3; stream=2; streamflags=encoded; type=command-response; flags=continuation)
53 received frame(size=1; request=3; stream=2; streamflags=encoded; type=command-response; flags=continuation)
54 received frame(size=0; request=3; stream=2; streamflags=; type=command-response; flags=eos)
54 received frame(size=0; request=3; stream=2; streamflags=; type=command-response; flags=eos)
55 sending 1 commands
55 sending 1 commands
56 sending command changesetdata: {
56 sending command changesetdata: {
57 'fields': set([
57 'fields': set([
58 'bookmarks',
58 'bookmarks',
59 'parents',
59 'parents',
60 'phase',
60 'phase',
61 'revision'
61 'revision'
62 ]),
62 ]),
63 'revisions': [
63 'revisions': [
64 {
64 {
65 'heads': [
65 'heads': [
66 '\xca\xa2\xa4eE\x1d\xd1\xfa\xcd\xa0\xf5\xb1#\x12\xc3UXA\x88\xa1',
66 '\xca\xa2\xa4eE\x1d\xd1\xfa\xcd\xa0\xf5\xb1#\x12\xc3UXA\x88\xa1',
67 '\xcd%4vk\xec\xe18\xc7\xc1\xaf\xdch%0/\x0fb\xd8\x1f'
67 '\xcd%4vk\xec\xe18\xc7\xc1\xaf\xdch%0/\x0fb\xd8\x1f'
68 ],
68 ],
69 'roots': [],
69 'roots': [],
70 'type': 'changesetdagrange'
70 'type': 'changesetdagrange'
71 }
71 }
72 ]
72 ]
73 }
73 }
74 received frame(size=9; request=1; stream=2; streamflags=stream-begin; type=stream-settings; flags=eos)
74 received frame(size=9; request=1; stream=2; streamflags=stream-begin; type=stream-settings; flags=eos)
75 received frame(size=11; request=1; stream=2; streamflags=encoded; type=command-response; flags=continuation)
75 received frame(size=11; request=1; stream=2; streamflags=encoded; type=command-response; flags=continuation)
76 received frame(size=941; request=1; stream=2; streamflags=encoded; type=command-response; flags=continuation)
76 received frame(size=941; request=1; stream=2; streamflags=encoded; type=command-response; flags=continuation)
77 received frame(size=0; request=1; stream=2; streamflags=; type=command-response; flags=eos)
77 received frame(size=0; request=1; stream=2; streamflags=; type=command-response; flags=eos)
78 add changeset 3390ef850073
78 add changeset 3390ef850073
79 add changeset 4432d83626e8
79 add changeset 4432d83626e8
80 add changeset cd2534766bec
80 add changeset cd2534766bec
81 add changeset e96ae20f4188
81 add changeset e96ae20f4188
82 add changeset caa2a465451d
82 add changeset caa2a465451d
83 checking for updated bookmarks
83 checking for updated bookmarks
84 sending 1 commands
84 sending 1 commands
85 sending command manifestdata: {
85 sending command manifestdata: {
86 'fields': set([
86 'fields': set([
87 'parents',
87 'parents',
88 'revision'
88 'revision'
89 ]),
89 ]),
90 'haveparents': True,
90 'haveparents': True,
91 'nodes': [
91 'nodes': [
92 '\x99/Gy\x02\x9a=\xf8\xd0fm\x00\xbb\x92OicN&A',
92 '\x99/Gy\x02\x9a=\xf8\xd0fm\x00\xbb\x92OicN&A',
93 '\xa9\x88\xfbCX>\x87\x1d\x1e\xd5u\x0e\xe0t\xc6\xd8@\xbb\xbf\xc8',
93 '\xa9\x88\xfbCX>\x87\x1d\x1e\xd5u\x0e\xe0t\xc6\xd8@\xbb\xbf\xc8',
94 '\xec\x80NH\x8c \x88\xc25\t\x9a\x10 u\x13\xbe\xcd\xc3\xdd\xa5',
94 '\xec\x80NH\x8c \x88\xc25\t\x9a\x10 u\x13\xbe\xcd\xc3\xdd\xa5',
95 '\x04\\\x7f9\'\xda\x13\xe7Z\xf8\xf0\xe4\xf0HI\xe4a\xa9x\x0f',
95 '\x04\\\x7f9\'\xda\x13\xe7Z\xf8\xf0\xe4\xf0HI\xe4a\xa9x\x0f',
96 '7\x9c\xb0\xc2\xe6d\\y\xdd\xc5\x9a\x1dG\'\xa9\xfb\x83\n\xeb&'
96 '7\x9c\xb0\xc2\xe6d\\y\xdd\xc5\x9a\x1dG\'\xa9\xfb\x83\n\xeb&'
97 ],
97 ],
98 'tree': ''
98 'tree': ''
99 }
99 }
100 received frame(size=9; request=1; stream=2; streamflags=stream-begin; type=stream-settings; flags=eos)
100 received frame(size=9; request=1; stream=2; streamflags=stream-begin; type=stream-settings; flags=eos)
101 received frame(size=11; request=1; stream=2; streamflags=encoded; type=command-response; flags=continuation)
101 received frame(size=11; request=1; stream=2; streamflags=encoded; type=command-response; flags=continuation)
102 received frame(size=992; request=1; stream=2; streamflags=encoded; type=command-response; flags=continuation)
102 received frame(size=992; request=1; stream=2; streamflags=encoded; type=command-response; flags=continuation)
103 received frame(size=0; request=1; stream=2; streamflags=; type=command-response; flags=eos)
103 received frame(size=0; request=1; stream=2; streamflags=; type=command-response; flags=eos)
104 sending 1 commands
104 sending 1 commands
105 sending command filesdata: {
105 sending command filesdata: {
106 'fields': set([
106 'fields': set([
107 'parents',
107 'parents',
108 'revision'
108 'revision'
109 ]),
109 ]),
110 'haveparents': True,
110 'haveparents': True,
111 'revisions': [
111 'revisions': [
112 {
112 {
113 'nodes': [
113 'nodes': [
114 '3\x90\xef\x85\x00s\xfb\xc2\xf0\xdf\xff"D4,\x8e\x92)\x01:',
114 '3\x90\xef\x85\x00s\xfb\xc2\xf0\xdf\xff"D4,\x8e\x92)\x01:',
115 'D2\xd86&\xe8\xa9\x86U\xf0b\xec\x1f*C\xb0\x7f\x7f\xbb\xb0',
115 'D2\xd86&\xe8\xa9\x86U\xf0b\xec\x1f*C\xb0\x7f\x7f\xbb\xb0',
116 '\xcd%4vk\xec\xe18\xc7\xc1\xaf\xdch%0/\x0fb\xd8\x1f',
116 '\xcd%4vk\xec\xe18\xc7\xc1\xaf\xdch%0/\x0fb\xd8\x1f',
117 '\xe9j\xe2\x0fA\x88H{\x9a\xe4\xef9A\xc2|\x81\x141F\xe5',
117 '\xe9j\xe2\x0fA\x88H{\x9a\xe4\xef9A\xc2|\x81\x141F\xe5',
118 '\xca\xa2\xa4eE\x1d\xd1\xfa\xcd\xa0\xf5\xb1#\x12\xc3UXA\x88\xa1'
118 '\xca\xa2\xa4eE\x1d\xd1\xfa\xcd\xa0\xf5\xb1#\x12\xc3UXA\x88\xa1'
119 ],
119 ],
120 'type': 'changesetexplicit'
120 'type': 'changesetexplicit'
121 }
121 }
122 ]
122 ]
123 }
123 }
124 received frame(size=9; request=1; stream=2; streamflags=stream-begin; type=stream-settings; flags=eos)
124 received frame(size=9; request=1; stream=2; streamflags=stream-begin; type=stream-settings; flags=eos)
125 received frame(size=11; request=1; stream=2; streamflags=encoded; type=command-response; flags=continuation)
125 received frame(size=11; request=1; stream=2; streamflags=encoded; type=command-response; flags=continuation)
126 received frame(size=901; request=1; stream=2; streamflags=encoded; type=command-response; flags=continuation)
126 received frame(size=901; request=1; stream=2; streamflags=encoded; type=command-response; flags=continuation)
127 received frame(size=0; request=1; stream=2; streamflags=; type=command-response; flags=eos)
127 received frame(size=0; request=1; stream=2; streamflags=; type=command-response; flags=eos)
128 updating the branch cache
128 updating the branch cache
129 new changesets 3390ef850073:caa2a465451d (3 drafts)
129 new changesets 3390ef850073:caa2a465451d (3 drafts)
130 (sent 5 HTTP requests and * bytes; received * bytes in responses) (glob)
130 (sent 5 HTTP requests and * bytes; received * bytes in responses) (glob)
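The --debug output above captures the full command sequence of a wire protocol version 2 clone: a capabilities handshake, heads/known discovery, then changesetdata, manifestdata and filesdata transfers. A condensed, descriptive summary of that sequence follows (plain data for illustration, not a real client API; placeholder strings stand in for the node lists shown in the transcript):

  clone_sequence = [
      (b'capabilities', {}),
      (b'heads', {}),                      # discovery: what does the server have?
      (b'known', {b'nodes': []}),          # discovery: an empty local repo knows nothing
      (b'changesetdata', {
          b'fields': {b'bookmarks', b'parents', b'phase', b'revision'},
          b'revisions': [{b'type': b'changesetdagrange',
                          b'roots': [], b'heads': [b'<remote heads>']}],
      }),
      (b'manifestdata', {
          b'fields': {b'parents', b'revision'},
          b'haveparents': True, b'tree': b'',
          b'nodes': [b'<manifest nodes named by the new changesets>'],
      }),
      (b'filesdata', {
          b'fields': {b'parents', b'revision'},
          b'haveparents': True,
          b'revisions': [{b'type': b'changesetexplicit',
                          b'nodes': [b'<new changeset nodes>']}],
      }),
  ]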
131
131
132 All changesets should have been transferred
132 All changesets should have been transferred
133
133
134 $ hg -R client-simple debugindex -c
134 $ hg -R client-simple debugindex -c
135 rev linkrev nodeid p1 p2
135 rev linkrev nodeid p1 p2
136 0 0 3390ef850073 000000000000 000000000000
136 0 0 3390ef850073 000000000000 000000000000
137 1 1 4432d83626e8 3390ef850073 000000000000
137 1 1 4432d83626e8 3390ef850073 000000000000
138 2 2 cd2534766bec 4432d83626e8 000000000000
138 2 2 cd2534766bec 4432d83626e8 000000000000
139 3 3 e96ae20f4188 3390ef850073 000000000000
139 3 3 e96ae20f4188 3390ef850073 000000000000
140 4 4 caa2a465451d e96ae20f4188 000000000000
140 4 4 caa2a465451d e96ae20f4188 000000000000
141
141
142 $ hg -R client-simple log -G -T '{rev} {node} {phase}\n'
142 $ hg -R client-simple log -G -T '{rev} {node} {phase}\n'
143 o 4 caa2a465451dd1facda0f5b12312c355584188a1 draft
143 o 4 caa2a465451dd1facda0f5b12312c355584188a1 draft
144 |
144 |
145 o 3 e96ae20f4188487b9ae4ef3941c27c81143146e5 draft
145 o 3 e96ae20f4188487b9ae4ef3941c27c81143146e5 draft
146 |
146 |
147 | o 2 cd2534766bece138c7c1afdc6825302f0f62d81f draft
147 | o 2 cd2534766bece138c7c1afdc6825302f0f62d81f draft
148 | |
148 | |
149 | o 1 4432d83626e8a98655f062ec1f2a43b07f7fbbb0 public
149 | o 1 4432d83626e8a98655f062ec1f2a43b07f7fbbb0 public
150 |/
150 |/
151 o 0 3390ef850073fbc2f0dfff2244342c8e9229013a public
151 o 0 3390ef850073fbc2f0dfff2244342c8e9229013a public
152
152
153
153
154 All manifests should have been transferred
154 All manifests should have been transferred
155
155
156 $ hg -R client-simple debugindex -m
156 $ hg -R client-simple debugindex -m
157 rev linkrev nodeid p1 p2
157 rev linkrev nodeid p1 p2
158 0 0 992f4779029a 000000000000 000000000000
158 0 0 992f4779029a 000000000000 000000000000
159 1 1 a988fb43583e 992f4779029a 000000000000
159 1 1 a988fb43583e 992f4779029a 000000000000
160 2 2 ec804e488c20 a988fb43583e 000000000000
160 2 2 ec804e488c20 a988fb43583e 000000000000
161 3 3 045c7f3927da 992f4779029a 000000000000
161 3 3 045c7f3927da 992f4779029a 000000000000
162 4 4 379cb0c2e664 045c7f3927da 000000000000
162 4 4 379cb0c2e664 045c7f3927da 000000000000
163
163
164 Cloning only a specific revision works
164 Cloning only a specific revision works
165
165
166 $ hg --debug clone -U -r 4432d83626e8 http://localhost:$HGPORT client-singlehead
166 $ hg --debug clone -U -r 4432d83626e8 http://localhost:$HGPORT client-singlehead
167 using http://localhost:$HGPORT/
167 using http://localhost:$HGPORT/
168 sending capabilities command
168 sending capabilities command
169 sending 1 commands
169 sending 1 commands
170 sending command lookup: {
170 sending command lookup: {
171 'key': '4432d83626e8'
171 'key': '4432d83626e8'
172 }
172 }
173 received frame(size=9; request=1; stream=2; streamflags=stream-begin; type=stream-settings; flags=eos)
173 received frame(size=9; request=1; stream=2; streamflags=stream-begin; type=stream-settings; flags=eos)
174 received frame(size=11; request=1; stream=2; streamflags=encoded; type=command-response; flags=continuation)
174 received frame(size=11; request=1; stream=2; streamflags=encoded; type=command-response; flags=continuation)
175 received frame(size=21; request=1; stream=2; streamflags=encoded; type=command-response; flags=continuation)
175 received frame(size=21; request=1; stream=2; streamflags=encoded; type=command-response; flags=continuation)
176 received frame(size=0; request=1; stream=2; streamflags=; type=command-response; flags=eos)
176 received frame(size=0; request=1; stream=2; streamflags=; type=command-response; flags=eos)
177 query 1; heads
177 query 1; heads
178 sending 2 commands
178 sending 2 commands
179 sending command heads: {}
179 sending command heads: {}
180 sending command known: {
180 sending command known: {
181 'nodes': []
181 'nodes': []
182 }
182 }
183 received frame(size=9; request=1; stream=2; streamflags=stream-begin; type=stream-settings; flags=eos)
183 received frame(size=9; request=1; stream=2; streamflags=stream-begin; type=stream-settings; flags=eos)
184 received frame(size=11; request=1; stream=2; streamflags=encoded; type=command-response; flags=continuation)
184 received frame(size=11; request=1; stream=2; streamflags=encoded; type=command-response; flags=continuation)
185 received frame(size=43; request=1; stream=2; streamflags=encoded; type=command-response; flags=continuation)
185 received frame(size=43; request=1; stream=2; streamflags=encoded; type=command-response; flags=continuation)
186 received frame(size=0; request=1; stream=2; streamflags=; type=command-response; flags=eos)
186 received frame(size=0; request=1; stream=2; streamflags=; type=command-response; flags=eos)
187 received frame(size=11; request=3; stream=2; streamflags=encoded; type=command-response; flags=continuation)
187 received frame(size=11; request=3; stream=2; streamflags=encoded; type=command-response; flags=continuation)
188 received frame(size=1; request=3; stream=2; streamflags=encoded; type=command-response; flags=continuation)
188 received frame(size=1; request=3; stream=2; streamflags=encoded; type=command-response; flags=continuation)
189 received frame(size=0; request=3; stream=2; streamflags=; type=command-response; flags=eos)
189 received frame(size=0; request=3; stream=2; streamflags=; type=command-response; flags=eos)
190 sending 1 commands
190 sending 1 commands
191 sending command changesetdata: {
191 sending command changesetdata: {
192 'fields': set([
192 'fields': set([
193 'bookmarks',
193 'bookmarks',
194 'parents',
194 'parents',
195 'phase',
195 'phase',
196 'revision'
196 'revision'
197 ]),
197 ]),
198 'revisions': [
198 'revisions': [
199 {
199 {
200 'heads': [
200 'heads': [
201 'D2\xd86&\xe8\xa9\x86U\xf0b\xec\x1f*C\xb0\x7f\x7f\xbb\xb0'
201 'D2\xd86&\xe8\xa9\x86U\xf0b\xec\x1f*C\xb0\x7f\x7f\xbb\xb0'
202 ],
202 ],
203 'roots': [],
203 'roots': [],
204 'type': 'changesetdagrange'
204 'type': 'changesetdagrange'
205 }
205 }
206 ]
206 ]
207 }
207 }
208 received frame(size=9; request=1; stream=2; streamflags=stream-begin; type=stream-settings; flags=eos)
208 received frame(size=9; request=1; stream=2; streamflags=stream-begin; type=stream-settings; flags=eos)
209 received frame(size=11; request=1; stream=2; streamflags=encoded; type=command-response; flags=continuation)
209 received frame(size=11; request=1; stream=2; streamflags=encoded; type=command-response; flags=continuation)
210 received frame(size=381; request=1; stream=2; streamflags=encoded; type=command-response; flags=continuation)
210 received frame(size=381; request=1; stream=2; streamflags=encoded; type=command-response; flags=continuation)
211 received frame(size=0; request=1; stream=2; streamflags=; type=command-response; flags=eos)
211 received frame(size=0; request=1; stream=2; streamflags=; type=command-response; flags=eos)
212 add changeset 3390ef850073
212 add changeset 3390ef850073
213 add changeset 4432d83626e8
213 add changeset 4432d83626e8
214 checking for updated bookmarks
214 checking for updated bookmarks
215 sending 1 commands
215 sending 1 commands
216 sending command manifestdata: {
216 sending command manifestdata: {
217 'fields': set([
217 'fields': set([
218 'parents',
218 'parents',
219 'revision'
219 'revision'
220 ]),
220 ]),
221 'haveparents': True,
221 'haveparents': True,
222 'nodes': [
222 'nodes': [
223 '\x99/Gy\x02\x9a=\xf8\xd0fm\x00\xbb\x92OicN&A',
223 '\x99/Gy\x02\x9a=\xf8\xd0fm\x00\xbb\x92OicN&A',
224 '\xa9\x88\xfbCX>\x87\x1d\x1e\xd5u\x0e\xe0t\xc6\xd8@\xbb\xbf\xc8'
224 '\xa9\x88\xfbCX>\x87\x1d\x1e\xd5u\x0e\xe0t\xc6\xd8@\xbb\xbf\xc8'
225 ],
225 ],
226 'tree': ''
226 'tree': ''
227 }
227 }
228 received frame(size=9; request=1; stream=2; streamflags=stream-begin; type=stream-settings; flags=eos)
228 received frame(size=9; request=1; stream=2; streamflags=stream-begin; type=stream-settings; flags=eos)
229 received frame(size=11; request=1; stream=2; streamflags=encoded; type=command-response; flags=continuation)
229 received frame(size=11; request=1; stream=2; streamflags=encoded; type=command-response; flags=continuation)
230 received frame(size=404; request=1; stream=2; streamflags=encoded; type=command-response; flags=continuation)
230 received frame(size=404; request=1; stream=2; streamflags=encoded; type=command-response; flags=continuation)
231 received frame(size=0; request=1; stream=2; streamflags=; type=command-response; flags=eos)
231 received frame(size=0; request=1; stream=2; streamflags=; type=command-response; flags=eos)
232 sending 1 commands
232 sending 1 commands
233 sending command filesdata: {
233 sending command filesdata: {
234 'fields': set([
234 'fields': set([
235 'parents',
235 'parents',
236 'revision'
236 'revision'
237 ]),
237 ]),
238 'haveparents': True,
238 'haveparents': True,
239 'revisions': [
239 'revisions': [
240 {
240 {
241 'nodes': [
241 'nodes': [
242 '3\x90\xef\x85\x00s\xfb\xc2\xf0\xdf\xff"D4,\x8e\x92)\x01:',
242 '3\x90\xef\x85\x00s\xfb\xc2\xf0\xdf\xff"D4,\x8e\x92)\x01:',
243 'D2\xd86&\xe8\xa9\x86U\xf0b\xec\x1f*C\xb0\x7f\x7f\xbb\xb0'
243 'D2\xd86&\xe8\xa9\x86U\xf0b\xec\x1f*C\xb0\x7f\x7f\xbb\xb0'
244 ],
244 ],
245 'type': 'changesetexplicit'
245 'type': 'changesetexplicit'
246 }
246 }
247 ]
247 ]
248 }
248 }
249 received frame(size=9; request=1; stream=2; streamflags=stream-begin; type=stream-settings; flags=eos)
249 received frame(size=9; request=1; stream=2; streamflags=stream-begin; type=stream-settings; flags=eos)
250 received frame(size=11; request=1; stream=2; streamflags=encoded; type=command-response; flags=continuation)
250 received frame(size=11; request=1; stream=2; streamflags=encoded; type=command-response; flags=continuation)
251 received frame(size=439; request=1; stream=2; streamflags=encoded; type=command-response; flags=continuation)
251 received frame(size=439; request=1; stream=2; streamflags=encoded; type=command-response; flags=continuation)
252 received frame(size=0; request=1; stream=2; streamflags=; type=command-response; flags=eos)
252 received frame(size=0; request=1; stream=2; streamflags=; type=command-response; flags=eos)
253 updating the branch cache
253 updating the branch cache
254 new changesets 3390ef850073:4432d83626e8
254 new changesets 3390ef850073:4432d83626e8
255 (sent 6 HTTP requests and * bytes; received * bytes in responses) (glob)
255 (sent 6 HTTP requests and * bytes; received * bytes in responses) (glob)
256
256
257 $ cd client-singlehead
257 $ cd client-singlehead
258
258
259 $ hg log -G -T '{rev} {node} {phase}\n'
259 $ hg log -G -T '{rev} {node} {phase}\n'
260 o 1 4432d83626e8a98655f062ec1f2a43b07f7fbbb0 public
260 o 1 4432d83626e8a98655f062ec1f2a43b07f7fbbb0 public
261 |
261 |
262 o 0 3390ef850073fbc2f0dfff2244342c8e9229013a public
262 o 0 3390ef850073fbc2f0dfff2244342c8e9229013a public
263
263
264
264
265 $ hg debugindex -m
265 $ hg debugindex -m
266 rev linkrev nodeid p1 p2
266 rev linkrev nodeid p1 p2
267 0 0 992f4779029a 000000000000 000000000000
267 0 0 992f4779029a 000000000000 000000000000
268 1 1 a988fb43583e 992f4779029a 000000000000
268 1 1 a988fb43583e 992f4779029a 000000000000
269
269
Incremental pull works

$ hg --debug pull
pulling from http://localhost:$HGPORT/
using http://localhost:$HGPORT/
sending capabilities command
query 1; heads
sending 2 commands
sending command heads: {}
sending command known: {
'nodes': [
'D2\xd86&\xe8\xa9\x86U\xf0b\xec\x1f*C\xb0\x7f\x7f\xbb\xb0'
]
}
received frame(size=9; request=1; stream=2; streamflags=stream-begin; type=stream-settings; flags=eos)
received frame(size=11; request=1; stream=2; streamflags=encoded; type=command-response; flags=continuation)
received frame(size=43; request=1; stream=2; streamflags=encoded; type=command-response; flags=continuation)
received frame(size=0; request=1; stream=2; streamflags=; type=command-response; flags=eos)
received frame(size=11; request=3; stream=2; streamflags=encoded; type=command-response; flags=continuation)
received frame(size=2; request=3; stream=2; streamflags=encoded; type=command-response; flags=continuation)
received frame(size=0; request=3; stream=2; streamflags=; type=command-response; flags=eos)
searching for changes
all local heads known remotely
sending 1 commands
sending command changesetdata: {
'fields': set([
'bookmarks',
'parents',
'phase',
'revision'
]),
'revisions': [
{
'heads': [
'\xca\xa2\xa4eE\x1d\xd1\xfa\xcd\xa0\xf5\xb1#\x12\xc3UXA\x88\xa1',
'\xcd%4vk\xec\xe18\xc7\xc1\xaf\xdch%0/\x0fb\xd8\x1f'
],
'roots': [
'D2\xd86&\xe8\xa9\x86U\xf0b\xec\x1f*C\xb0\x7f\x7f\xbb\xb0'
],
'type': 'changesetdagrange'
}
]
}
received frame(size=9; request=1; stream=2; streamflags=stream-begin; type=stream-settings; flags=eos)
received frame(size=11; request=1; stream=2; streamflags=encoded; type=command-response; flags=continuation)
received frame(size=573; request=1; stream=2; streamflags=encoded; type=command-response; flags=continuation)
received frame(size=0; request=1; stream=2; streamflags=; type=command-response; flags=eos)
add changeset cd2534766bec
add changeset e96ae20f4188
add changeset caa2a465451d
checking for updated bookmarks
sending 1 commands
sending command manifestdata: {
'fields': set([
'parents',
'revision'
]),
'haveparents': True,
'nodes': [
'\xec\x80NH\x8c \x88\xc25\t\x9a\x10 u\x13\xbe\xcd\xc3\xdd\xa5',
'\x04\\\x7f9\'\xda\x13\xe7Z\xf8\xf0\xe4\xf0HI\xe4a\xa9x\x0f',
'7\x9c\xb0\xc2\xe6d\\y\xdd\xc5\x9a\x1dG\'\xa9\xfb\x83\n\xeb&'
],
'tree': ''
}
received frame(size=9; request=1; stream=2; streamflags=stream-begin; type=stream-settings; flags=eos)
received frame(size=11; request=1; stream=2; streamflags=encoded; type=command-response; flags=continuation)
received frame(size=601; request=1; stream=2; streamflags=encoded; type=command-response; flags=continuation)
received frame(size=0; request=1; stream=2; streamflags=; type=command-response; flags=eos)
sending 1 commands
sending command filesdata: {
'fields': set([
'parents',
'revision'
]),
'haveparents': True,
'revisions': [
{
'nodes': [
'\xcd%4vk\xec\xe18\xc7\xc1\xaf\xdch%0/\x0fb\xd8\x1f',
'\xe9j\xe2\x0fA\x88H{\x9a\xe4\xef9A\xc2|\x81\x141F\xe5',
'\xca\xa2\xa4eE\x1d\xd1\xfa\xcd\xa0\xf5\xb1#\x12\xc3UXA\x88\xa1'
],
'type': 'changesetexplicit'
}
]
}
received frame(size=9; request=1; stream=2; streamflags=stream-begin; type=stream-settings; flags=eos)
received frame(size=11; request=1; stream=2; streamflags=encoded; type=command-response; flags=continuation)
received frame(size=527; request=1; stream=2; streamflags=encoded; type=command-response; flags=continuation)
received frame(size=0; request=1; stream=2; streamflags=; type=command-response; flags=eos)
updating the branch cache
new changesets cd2534766bec:caa2a465451d (3 drafts)
(run 'hg update' to get a working copy)
(sent 5 HTTP requests and * bytes; received * bytes in responses) (glob)

$ hg log -G -T '{rev} {node} {phase}\n'
o 4 caa2a465451dd1facda0f5b12312c355584188a1 draft
|
o 3 e96ae20f4188487b9ae4ef3941c27c81143146e5 draft
|
| o 2 cd2534766bece138c7c1afdc6825302f0f62d81f draft
| |
| o 1 4432d83626e8a98655f062ec1f2a43b07f7fbbb0 public
|/
o 0 3390ef850073fbc2f0dfff2244342c8e9229013a public


$ hg debugindex -m
rev linkrev nodeid p1 p2
0 0 992f4779029a 000000000000 000000000000
1 1 a988fb43583e 992f4779029a 000000000000
2 2 ec804e488c20 a988fb43583e 000000000000
3 3 045c7f3927da 992f4779029a 000000000000
4 4 379cb0c2e664 045c7f3927da 000000000000

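The 'changesetdagrange' revision specifier in the changesetdata request above names the wanted changesets by DAG ancestry instead of listing every node: ancestors of the given heads that are not ancestors of the given roots. A minimal illustrative sketch of that selection, not Mercurial's implementation (the parents map and the short node names are invented):

  # Sketch: nodes reachable from `heads` minus nodes reachable from `roots`.
  def ancestors(parents, nodes):
      seen = set()
      stack = list(nodes)
      while stack:
          node = stack.pop()
          if node in seen:
              continue
          seen.add(node)
          stack.extend(parents.get(node, ()))
      return seen

  def dagrange(parents, roots, heads):
      return ancestors(parents, heads) - ancestors(parents, roots)

  parents = {
      'rev0': (),
      'rev1': ('rev0',),
      'rev2': ('rev1',),
      'rev3': ('rev1',),
      'rev4': ('rev3',),
  }

  # The common head known to both sides is rev1; the remote heads are rev2
  # and rev4, so an incremental pull only needs rev2, rev3 and rev4.
  print(sorted(dagrange(parents, roots=['rev1'], heads=['rev2', 'rev4'])))
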
Phase-only update works
TODO this doesn't work

$ hg -R ../server-simple phase --public -r caa2a465451dd
$ hg --debug pull
pulling from http://localhost:$HGPORT/
using http://localhost:$HGPORT/
sending capabilities command
query 1; heads
sending 2 commands
sending command heads: {}
sending command known: {
'nodes': [
'\xcd%4vk\xec\xe18\xc7\xc1\xaf\xdch%0/\x0fb\xd8\x1f',
'\xca\xa2\xa4eE\x1d\xd1\xfa\xcd\xa0\xf5\xb1#\x12\xc3UXA\x88\xa1'
]
}
received frame(size=9; request=1; stream=2; streamflags=stream-begin; type=stream-settings; flags=eos)
received frame(size=11; request=1; stream=2; streamflags=encoded; type=command-response; flags=continuation)
received frame(size=43; request=1; stream=2; streamflags=encoded; type=command-response; flags=continuation)
received frame(size=0; request=1; stream=2; streamflags=; type=command-response; flags=eos)
received frame(size=11; request=3; stream=2; streamflags=encoded; type=command-response; flags=continuation)
received frame(size=3; request=3; stream=2; streamflags=encoded; type=command-response; flags=continuation)
received frame(size=0; request=3; stream=2; streamflags=; type=command-response; flags=eos)
searching for changes
all remote heads known locally
sending 1 commands
sending command changesetdata: {
'fields': set([
'bookmarks',
'parents',
'phase',
'revision'
]),
'revisions': [
{
'heads': [
'\xca\xa2\xa4eE\x1d\xd1\xfa\xcd\xa0\xf5\xb1#\x12\xc3UXA\x88\xa1',
'\xcd%4vk\xec\xe18\xc7\xc1\xaf\xdch%0/\x0fb\xd8\x1f'
],
'roots': [
'\xca\xa2\xa4eE\x1d\xd1\xfa\xcd\xa0\xf5\xb1#\x12\xc3UXA\x88\xa1',
'\xcd%4vk\xec\xe18\xc7\xc1\xaf\xdch%0/\x0fb\xd8\x1f'
],
'type': 'changesetdagrange'
}
]
}
received frame(size=9; request=1; stream=2; streamflags=stream-begin; type=stream-settings; flags=eos)
received frame(size=11; request=1; stream=2; streamflags=encoded; type=command-response; flags=continuation)
received frame(size=13; request=1; stream=2; streamflags=encoded; type=command-response; flags=continuation)
received frame(size=0; request=1; stream=2; streamflags=; type=command-response; flags=eos)
checking for updated bookmarks
(run 'hg update' to get a working copy)
(sent 3 HTTP requests and * bytes; received * bytes in responses) (glob)

$ hg log -G -T '{rev} {node} {phase}\n'
o 4 caa2a465451dd1facda0f5b12312c355584188a1 draft
|
o 3 e96ae20f4188487b9ae4ef3941c27c81143146e5 draft
|
| o 2 cd2534766bece138c7c1afdc6825302f0f62d81f draft
| |
| o 1 4432d83626e8a98655f062ec1f2a43b07f7fbbb0 public
|/
o 0 3390ef850073fbc2f0dfff2244342c8e9229013a public


$ cd ..

Bookmarks are transferred on clone

$ hg -R server-simple bookmark -r 3390ef850073fbc2f0dfff2244342c8e9229013a book-1
$ hg -R server-simple bookmark -r cd2534766bece138c7c1afdc6825302f0f62d81f book-2

$ hg --debug clone -U http://localhost:$HGPORT/ client-bookmarks
using http://localhost:$HGPORT/
sending capabilities command
query 1; heads
sending 2 commands
sending command heads: {}
sending command known: {
'nodes': []
}
received frame(size=9; request=1; stream=2; streamflags=stream-begin; type=stream-settings; flags=eos)
received frame(size=11; request=1; stream=2; streamflags=encoded; type=command-response; flags=continuation)
received frame(size=43; request=1; stream=2; streamflags=encoded; type=command-response; flags=continuation)
received frame(size=0; request=1; stream=2; streamflags=; type=command-response; flags=eos)
received frame(size=11; request=3; stream=2; streamflags=encoded; type=command-response; flags=continuation)
received frame(size=1; request=3; stream=2; streamflags=encoded; type=command-response; flags=continuation)
received frame(size=0; request=3; stream=2; streamflags=; type=command-response; flags=eos)
sending 1 commands
sending command changesetdata: {
'fields': set([
'bookmarks',
'parents',
'phase',
'revision'
]),
'revisions': [
{
'heads': [
'\xca\xa2\xa4eE\x1d\xd1\xfa\xcd\xa0\xf5\xb1#\x12\xc3UXA\x88\xa1',
'\xcd%4vk\xec\xe18\xc7\xc1\xaf\xdch%0/\x0fb\xd8\x1f'
],
'roots': [],
'type': 'changesetdagrange'
}
]
}
received frame(size=9; request=1; stream=2; streamflags=stream-begin; type=stream-settings; flags=eos)
received frame(size=11; request=1; stream=2; streamflags=encoded; type=command-response; flags=continuation)
received frame(size=979; request=1; stream=2; streamflags=encoded; type=command-response; flags=continuation)
received frame(size=0; request=1; stream=2; streamflags=; type=command-response; flags=eos)
add changeset 3390ef850073
add changeset 4432d83626e8
add changeset cd2534766bec
add changeset e96ae20f4188
add changeset caa2a465451d
checking for updated bookmarks
adding remote bookmark book-1
adding remote bookmark book-2
sending 1 commands
sending command manifestdata: {
'fields': set([
'parents',
'revision'
]),
'haveparents': True,
'nodes': [
'\x99/Gy\x02\x9a=\xf8\xd0fm\x00\xbb\x92OicN&A',
'\xa9\x88\xfbCX>\x87\x1d\x1e\xd5u\x0e\xe0t\xc6\xd8@\xbb\xbf\xc8',
'\xec\x80NH\x8c \x88\xc25\t\x9a\x10 u\x13\xbe\xcd\xc3\xdd\xa5',
'\x04\\\x7f9\'\xda\x13\xe7Z\xf8\xf0\xe4\xf0HI\xe4a\xa9x\x0f',
'7\x9c\xb0\xc2\xe6d\\y\xdd\xc5\x9a\x1dG\'\xa9\xfb\x83\n\xeb&'
],
'tree': ''
}
received frame(size=9; request=1; stream=2; streamflags=stream-begin; type=stream-settings; flags=eos)
received frame(size=11; request=1; stream=2; streamflags=encoded; type=command-response; flags=continuation)
received frame(size=992; request=1; stream=2; streamflags=encoded; type=command-response; flags=continuation)
received frame(size=0; request=1; stream=2; streamflags=; type=command-response; flags=eos)
sending 1 commands
sending command filesdata: {
'fields': set([
'parents',
'revision'
]),
'haveparents': True,
'revisions': [
{
'nodes': [
'3\x90\xef\x85\x00s\xfb\xc2\xf0\xdf\xff"D4,\x8e\x92)\x01:',
'D2\xd86&\xe8\xa9\x86U\xf0b\xec\x1f*C\xb0\x7f\x7f\xbb\xb0',
'\xcd%4vk\xec\xe18\xc7\xc1\xaf\xdch%0/\x0fb\xd8\x1f',
'\xe9j\xe2\x0fA\x88H{\x9a\xe4\xef9A\xc2|\x81\x141F\xe5',
'\xca\xa2\xa4eE\x1d\xd1\xfa\xcd\xa0\xf5\xb1#\x12\xc3UXA\x88\xa1'
],
'type': 'changesetexplicit'
}
]
}
received frame(size=9; request=1; stream=2; streamflags=stream-begin; type=stream-settings; flags=eos)
received frame(size=11; request=1; stream=2; streamflags=encoded; type=command-response; flags=continuation)
received frame(size=901; request=1; stream=2; streamflags=encoded; type=command-response; flags=continuation)
received frame(size=0; request=1; stream=2; streamflags=; type=command-response; flags=eos)
updating the branch cache
new changesets 3390ef850073:caa2a465451d (1 drafts)
(sent 5 HTTP requests and * bytes; received * bytes in responses) (glob)

$ hg -R client-bookmarks bookmarks
book-1 0:3390ef850073
book-2 2:cd2534766bec

Server-side bookmark moves are reflected during `hg pull`

$ hg -R server-simple bookmark -r cd2534766bece138c7c1afdc6825302f0f62d81f book-1
moving bookmark 'book-1' forward from 3390ef850073

$ hg -R client-bookmarks --debug pull
pulling from http://localhost:$HGPORT/
using http://localhost:$HGPORT/
sending capabilities command
query 1; heads
sending 2 commands
sending command heads: {}
sending command known: {
'nodes': [
'\xcd%4vk\xec\xe18\xc7\xc1\xaf\xdch%0/\x0fb\xd8\x1f',
'\xca\xa2\xa4eE\x1d\xd1\xfa\xcd\xa0\xf5\xb1#\x12\xc3UXA\x88\xa1'
]
}
received frame(size=9; request=1; stream=2; streamflags=stream-begin; type=stream-settings; flags=eos)
received frame(size=11; request=1; stream=2; streamflags=encoded; type=command-response; flags=continuation)
received frame(size=43; request=1; stream=2; streamflags=encoded; type=command-response; flags=continuation)
received frame(size=0; request=1; stream=2; streamflags=; type=command-response; flags=eos)
received frame(size=11; request=3; stream=2; streamflags=encoded; type=command-response; flags=continuation)
received frame(size=3; request=3; stream=2; streamflags=encoded; type=command-response; flags=continuation)
received frame(size=0; request=3; stream=2; streamflags=; type=command-response; flags=eos)
searching for changes
all remote heads known locally
sending 1 commands
sending command changesetdata: {
'fields': set([
'bookmarks',
'parents',
'phase',
'revision'
]),
'revisions': [
{
'heads': [
'\xca\xa2\xa4eE\x1d\xd1\xfa\xcd\xa0\xf5\xb1#\x12\xc3UXA\x88\xa1',
'\xcd%4vk\xec\xe18\xc7\xc1\xaf\xdch%0/\x0fb\xd8\x1f'
],
'roots': [
'\xca\xa2\xa4eE\x1d\xd1\xfa\xcd\xa0\xf5\xb1#\x12\xc3UXA\x88\xa1',
'\xcd%4vk\xec\xe18\xc7\xc1\xaf\xdch%0/\x0fb\xd8\x1f'
],
'type': 'changesetdagrange'
}
]
}
received frame(size=9; request=1; stream=2; streamflags=stream-begin; type=stream-settings; flags=eos)
received frame(size=11; request=1; stream=2; streamflags=encoded; type=command-response; flags=continuation)
received frame(size=65; request=1; stream=2; streamflags=encoded; type=command-response; flags=continuation)
received frame(size=0; request=1; stream=2; streamflags=; type=command-response; flags=eos)
checking for updated bookmarks
updating bookmark book-1
(run 'hg update' to get a working copy)
(sent 3 HTTP requests and * bytes; received * bytes in responses) (glob)

$ hg -R client-bookmarks bookmarks
book-1 2:cd2534766bec
book-2 2:cd2534766bec

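The two runs above show the client adding bookmarks it had never seen and moving one whose target changed on the server. A rough sketch of that reconciliation, assuming the received 'bookmarks' field has already been decoded into a name-to-node mapping (the logic below is illustrative only; the real code also handles divergence and deletions, and the short hashes are taken from the runs above):

  # Sketch: fold server bookmarks into a local {name: node} mapping,
  # reporting additions and moves like the debug output above.
  def apply_remote_bookmarks(local, remote):
      for name, node in sorted(remote.items()):
          if name not in local:
              print('adding remote bookmark %s' % name)
              local[name] = node
          elif local[name] != node:
              print('updating bookmark %s' % name)
              local[name] = node

  local = {'book-1': '3390ef850073'}
  remote = {'book-1': 'cd2534766bec', 'book-2': 'cd2534766bec'}
  apply_remote_bookmarks(local, remote)
  print(local)
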
$ killdaemons.py

Let's set up a slightly more complicated server

$ hg init server-2
$ enablehttpv2 server-2
$ cd server-2
$ mkdir dir0 dir1
$ echo a0 > a
$ echo b0 > b
$ hg -q commit -A -m 'commit 0'
$ echo c0 > dir0/c
$ echo d0 > dir0/d
$ hg -q commit -A -m 'commit 1'
$ echo e0 > dir1/e
$ echo f0 > dir1/f
$ hg -q commit -A -m 'commit 2'
$ echo c1 > dir0/c
$ echo e1 > dir1/e
$ hg commit -m 'commit 3'
$ hg serve -p $HGPORT -d --pid-file hg.pid -E error.log
$ cat hg.pid > $DAEMON_PIDS

$ cd ..

Narrow clone only fetches some files

$ hg --config extensions.pullext=$TESTDIR/pullext.py --debug clone -U --include dir0/ http://localhost:$HGPORT/ client-narrow-0
using http://localhost:$HGPORT/
sending capabilities command
query 1; heads
sending 2 commands
sending command heads: {}
sending command known: {
'nodes': []
}
received frame(size=9; request=1; stream=2; streamflags=stream-begin; type=stream-settings; flags=eos)
received frame(size=11; request=1; stream=2; streamflags=encoded; type=command-response; flags=continuation)
received frame(size=22; request=1; stream=2; streamflags=encoded; type=command-response; flags=continuation)
received frame(size=0; request=1; stream=2; streamflags=; type=command-response; flags=eos)
received frame(size=11; request=3; stream=2; streamflags=encoded; type=command-response; flags=continuation)
received frame(size=1; request=3; stream=2; streamflags=encoded; type=command-response; flags=continuation)
received frame(size=0; request=3; stream=2; streamflags=; type=command-response; flags=eos)
sending 1 commands
sending command changesetdata: {
'fields': set([
'bookmarks',
'parents',
'phase',
'revision'
]),
'revisions': [
{
'heads': [
'\x97v_\xc3\xcdbO\xd1\xfa\x01v\x93,!\xff\xd1j\xdfC.'
],
'roots': [],
'type': 'changesetdagrange'
}
]
}
received frame(size=9; request=1; stream=2; streamflags=stream-begin; type=stream-settings; flags=eos)
received frame(size=11; request=1; stream=2; streamflags=encoded; type=command-response; flags=continuation)
received frame(size=783; request=1; stream=2; streamflags=encoded; type=command-response; flags=continuation)
received frame(size=0; request=1; stream=2; streamflags=; type=command-response; flags=eos)
add changeset 3390ef850073
add changeset b709380892b1
add changeset 47fe012ab237
add changeset 97765fc3cd62
checking for updated bookmarks
sending 1 commands
sending command manifestdata: {
'fields': set([
'parents',
'revision'
]),
'haveparents': True,
'nodes': [
'\x99/Gy\x02\x9a=\xf8\xd0fm\x00\xbb\x92OicN&A',
'|2 \x1a\xa3\xa1R\xa9\xe6\xa9"+?\xa8\xd0\xe3\x0f\xc2V\xe8',
'\x8d\xd0W<\x7f\xaf\xe2\x04F\xcc\xea\xac\x05N\xea\xa4x\x91M\xdb',
'113\x85\xf2!\x8b\x08^\xb2Z\x821\x1e*\xdd\x0e\xeb\x8c3'
],
'tree': ''
}
received frame(size=9; request=1; stream=2; streamflags=stream-begin; type=stream-settings; flags=eos)
received frame(size=11; request=1; stream=2; streamflags=encoded; type=command-response; flags=continuation)
received frame(size=967; request=1; stream=2; streamflags=encoded; type=command-response; flags=continuation)
received frame(size=0; request=1; stream=2; streamflags=; type=command-response; flags=eos)
sending 1 commands
sending command filesdata: {
'fields': set([
'parents',
'revision'
]),
'haveparents': True,
'pathfilter': {
'include': [
'path:dir0'
]
},
'revisions': [
{
'nodes': [
'3\x90\xef\x85\x00s\xfb\xc2\xf0\xdf\xff"D4,\x8e\x92)\x01:',
'\xb7\t8\x08\x92\xb1\x93\xc1\t\x1d:\x81\x7fp`R\xe3F\x82\x1b',
'G\xfe\x01*\xb27\xa8\xc7\xfc\x0cx\xf9\xf2mXf\xee\xf3\xf8%',
'\x97v_\xc3\xcdbO\xd1\xfa\x01v\x93,!\xff\xd1j\xdfC.'
],
'type': 'changesetexplicit'
}
]
}
received frame(size=9; request=1; stream=2; streamflags=stream-begin; type=stream-settings; flags=eos)
received frame(size=11; request=1; stream=2; streamflags=encoded; type=command-response; flags=continuation)
received frame(size=449; request=1; stream=2; streamflags=encoded; type=command-response; flags=continuation)
received frame(size=0; request=1; stream=2; streamflags=; type=command-response; flags=eos)
updating the branch cache
new changesets 3390ef850073:97765fc3cd62
(sent 5 HTTP requests and * bytes; received * bytes in responses) (glob)

#if reporevlogstore
$ find client-narrow-0/.hg/store -type f -name '*.i' | sort
client-narrow-0/.hg/store/00changelog.i
client-narrow-0/.hg/store/00manifest.i
client-narrow-0/.hg/store/data/dir0/c.i
client-narrow-0/.hg/store/data/dir0/d.i
#endif

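The store listing above contains filelogs only for files under dir0/ because the clone sent a 'pathfilter' with include ['path:dir0'], so only matching file revisions were requested and stored. A toy illustration of that kind of 'path:' prefix matching (deliberately simplified; not Mercurial's matcher, but the file names are the ones from the test repository above):

  # Sketch: keep only the files selected by 'path:' include patterns.
  # 'path:dir0' matches dir0 itself and anything beneath it.
  def matches(pattern, path):
      assert pattern.startswith('path:')
      prefix = pattern[len('path:'):]
      return prefix in ('', '.') or path == prefix or path.startswith(prefix + '/')

  def filter_files(paths, includes):
      return [p for p in paths if any(matches(i, p) for i in includes)]

  files = ['a', 'b', 'dir0/c', 'dir0/d', 'dir1/e', 'dir1/f']
  print(filter_files(files, includes=['path:dir0']))   # ['dir0/c', 'dir0/d']
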
--exclude by itself works

$ hg --config extensions.pullext=$TESTDIR/pullext.py --debug clone -U --exclude dir0/ http://localhost:$HGPORT/ client-narrow-1
using http://localhost:$HGPORT/
sending capabilities command
query 1; heads
sending 2 commands
sending command heads: {}
sending command known: {
'nodes': []
}
received frame(size=9; request=1; stream=2; streamflags=stream-begin; type=stream-settings; flags=eos)
received frame(size=11; request=1; stream=2; streamflags=encoded; type=command-response; flags=continuation)
received frame(size=22; request=1; stream=2; streamflags=encoded; type=command-response; flags=continuation)
received frame(size=0; request=1; stream=2; streamflags=; type=command-response; flags=eos)
received frame(size=11; request=3; stream=2; streamflags=encoded; type=command-response; flags=continuation)
received frame(size=1; request=3; stream=2; streamflags=encoded; type=command-response; flags=continuation)
received frame(size=0; request=3; stream=2; streamflags=; type=command-response; flags=eos)
sending 1 commands
sending command changesetdata: {
'fields': set([
'bookmarks',
'parents',
'phase',
'revision'
]),
'revisions': [
{
'heads': [
'\x97v_\xc3\xcdbO\xd1\xfa\x01v\x93,!\xff\xd1j\xdfC.'
],
'roots': [],
'type': 'changesetdagrange'
}
]
}
received frame(size=9; request=1; stream=2; streamflags=stream-begin; type=stream-settings; flags=eos)
received frame(size=11; request=1; stream=2; streamflags=encoded; type=command-response; flags=continuation)
received frame(size=783; request=1; stream=2; streamflags=encoded; type=command-response; flags=continuation)
received frame(size=0; request=1; stream=2; streamflags=; type=command-response; flags=eos)
add changeset 3390ef850073
add changeset b709380892b1
add changeset 47fe012ab237
add changeset 97765fc3cd62
checking for updated bookmarks
sending 1 commands
sending command manifestdata: {
'fields': set([
'parents',
'revision'
]),
'haveparents': True,
'nodes': [
'\x99/Gy\x02\x9a=\xf8\xd0fm\x00\xbb\x92OicN&A',
'|2 \x1a\xa3\xa1R\xa9\xe6\xa9"+?\xa8\xd0\xe3\x0f\xc2V\xe8',
'\x8d\xd0W<\x7f\xaf\xe2\x04F\xcc\xea\xac\x05N\xea\xa4x\x91M\xdb',
'113\x85\xf2!\x8b\x08^\xb2Z\x821\x1e*\xdd\x0e\xeb\x8c3'
],
'tree': ''
}
received frame(size=9; request=1; stream=2; streamflags=stream-begin; type=stream-settings; flags=eos)
received frame(size=11; request=1; stream=2; streamflags=encoded; type=command-response; flags=continuation)
received frame(size=967; request=1; stream=2; streamflags=encoded; type=command-response; flags=continuation)
received frame(size=0; request=1; stream=2; streamflags=; type=command-response; flags=eos)
sending 1 commands
sending command filesdata: {
'fields': set([
'parents',
'revision'
]),
'haveparents': True,
'pathfilter': {
'exclude': [
'path:dir0'
],
'include': [
'path:.'
]
},
'revisions': [
{
'nodes': [
'3\x90\xef\x85\x00s\xfb\xc2\xf0\xdf\xff"D4,\x8e\x92)\x01:',
'\xb7\t8\x08\x92\xb1\x93\xc1\t\x1d:\x81\x7fp`R\xe3F\x82\x1b',
'G\xfe\x01*\xb27\xa8\xc7\xfc\x0cx\xf9\xf2mXf\xee\xf3\xf8%',
'\x97v_\xc3\xcdbO\xd1\xfa\x01v\x93,!\xff\xd1j\xdfC.'
],
'type': 'changesetexplicit'
}
]
}
received frame(size=9; request=1; stream=2; streamflags=stream-begin; type=stream-settings; flags=eos)
received frame(size=11; request=1; stream=2; streamflags=encoded; type=command-response; flags=continuation)
received frame(size=709; request=1; stream=2; streamflags=encoded; type=command-response; flags=continuation)
received frame(size=0; request=1; stream=2; streamflags=; type=command-response; flags=eos)
updating the branch cache
new changesets 3390ef850073:97765fc3cd62
(sent 5 HTTP requests and * bytes; received * bytes in responses) (glob)

#if reporevlogstore
$ find client-narrow-1/.hg/store -type f -name '*.i' | sort
client-narrow-1/.hg/store/00changelog.i
client-narrow-1/.hg/store/00manifest.i
client-narrow-1/.hg/store/data/a.i
client-narrow-1/.hg/store/data/b.i
client-narrow-1/.hg/store/data/dir1/e.i
client-narrow-1/.hg/store/data/dir1/f.i
#endif

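Note that the request above carried include ['path:.'] even though only --exclude was given on the command line: with no explicit includes, the client appears to fall back to including everything and then carving out the excludes. A small sketch of building such a 'pathfilter' argument (the argument shape is copied from the requests above; the helper itself is hypothetical, not hg's code):

  # Sketch: turn --include/--exclude lists into the 'pathfilter' shape seen
  # in the filesdata requests above. Hypothetical helper for illustration.
  def build_pathfilter(includes, excludes):
      if not includes and not excludes:
          return None                      # no filter: fetch everything
      pathfilter = {}
      if excludes:
          pathfilter['exclude'] = sorted('path:%s' % e.rstrip('/') for e in excludes)
      # With excludes but no includes, include everything ('path:.') explicitly.
      pathfilter['include'] = sorted('path:%s' % i.rstrip('/') for i in includes) or ['path:.']
      return pathfilter

  print(build_pathfilter([], ['dir0/']))
  # {'exclude': ['path:dir0'], 'include': ['path:.']}
  print(build_pathfilter(['dir0/'], ['dir0/c']))
  # {'exclude': ['path:dir0/c'], 'include': ['path:dir0']}
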
Mixing --include and --exclude works

$ hg --config extensions.pullext=$TESTDIR/pullext.py --debug clone -U --include dir0/ --exclude dir0/c http://localhost:$HGPORT/ client-narrow-2
using http://localhost:$HGPORT/
sending capabilities command
query 1; heads
sending 2 commands
sending command heads: {}
sending command known: {
'nodes': []
}
received frame(size=9; request=1; stream=2; streamflags=stream-begin; type=stream-settings; flags=eos)
received frame(size=11; request=1; stream=2; streamflags=encoded; type=command-response; flags=continuation)
received frame(size=22; request=1; stream=2; streamflags=encoded; type=command-response; flags=continuation)
received frame(size=0; request=1; stream=2; streamflags=; type=command-response; flags=eos)
received frame(size=11; request=3; stream=2; streamflags=encoded; type=command-response; flags=continuation)
received frame(size=1; request=3; stream=2; streamflags=encoded; type=command-response; flags=continuation)
received frame(size=0; request=3; stream=2; streamflags=; type=command-response; flags=eos)
sending 1 commands
sending command changesetdata: {
'fields': set([
'bookmarks',
'parents',
'phase',
'revision'
]),
'revisions': [
{
'heads': [
'\x97v_\xc3\xcdbO\xd1\xfa\x01v\x93,!\xff\xd1j\xdfC.'
],
'roots': [],
'type': 'changesetdagrange'
}
]
}
received frame(size=9; request=1; stream=2; streamflags=stream-begin; type=stream-settings; flags=eos)
received frame(size=11; request=1; stream=2; streamflags=encoded; type=command-response; flags=continuation)
received frame(size=783; request=1; stream=2; streamflags=encoded; type=command-response; flags=continuation)
received frame(size=0; request=1; stream=2; streamflags=; type=command-response; flags=eos)
add changeset 3390ef850073
add changeset b709380892b1
add changeset 47fe012ab237
add changeset 97765fc3cd62
checking for updated bookmarks
sending 1 commands
sending command manifestdata: {
'fields': set([
'parents',
'revision'
]),
'haveparents': True,
'nodes': [
'\x99/Gy\x02\x9a=\xf8\xd0fm\x00\xbb\x92OicN&A',
'|2 \x1a\xa3\xa1R\xa9\xe6\xa9"+?\xa8\xd0\xe3\x0f\xc2V\xe8',
'\x8d\xd0W<\x7f\xaf\xe2\x04F\xcc\xea\xac\x05N\xea\xa4x\x91M\xdb',
'113\x85\xf2!\x8b\x08^\xb2Z\x821\x1e*\xdd\x0e\xeb\x8c3'
],
'tree': ''
}
received frame(size=9; request=1; stream=2; streamflags=stream-begin; type=stream-settings; flags=eos)
received frame(size=11; request=1; stream=2; streamflags=encoded; type=command-response; flags=continuation)
received frame(size=967; request=1; stream=2; streamflags=encoded; type=command-response; flags=continuation)
received frame(size=0; request=1; stream=2; streamflags=; type=command-response; flags=eos)
sending 1 commands
sending command filesdata: {
'fields': set([
'parents',
'revision'
]),
'haveparents': True,
'pathfilter': {
'exclude': [
'path:dir0/c'
],
'include': [
'path:dir0'
]
},
'revisions': [
{
'nodes': [
'3\x90\xef\x85\x00s\xfb\xc2\xf0\xdf\xff"D4,\x8e\x92)\x01:',
'\xb7\t8\x08\x92\xb1\x93\xc1\t\x1d:\x81\x7fp`R\xe3F\x82\x1b',
'G\xfe\x01*\xb27\xa8\xc7\xfc\x0cx\xf9\xf2mXf\xee\xf3\xf8%',
'\x97v_\xc3\xcdbO\xd1\xfa\x01v\x93,!\xff\xd1j\xdfC.'
],
'type': 'changesetexplicit'
}
]
}
received frame(size=9; request=1; stream=2; streamflags=stream-begin; type=stream-settings; flags=eos)
received frame(size=11; request=1; stream=2; streamflags=encoded; type=command-response; flags=continuation)
received frame(size=160; request=1; stream=2; streamflags=encoded; type=command-response; flags=continuation)
received frame(size=0; request=1; stream=2; streamflags=; type=command-response; flags=eos)
updating the branch cache
new changesets 3390ef850073:97765fc3cd62
(sent 5 HTTP requests and * bytes; received * bytes in responses) (glob)

#if reporevlogstore
$ find client-narrow-2/.hg/store -type f -name '*.i' | sort
client-narrow-2/.hg/store/00changelog.i
client-narrow-2/.hg/store/00manifest.i
client-narrow-2/.hg/store/data/dir0/d.i
#endif

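Only dir0/d.i made it into this store: dir0 was included, dir0/c was explicitly excluded, and the exclude takes precedence over the include here. A self-contained toy matcher showing the effect (the 'path:' handling is simplified and is not Mercurial's actual matcher):

  # Sketch: excludes take precedence over includes when both are given.
  def path_match(pattern, path):
      prefix = pattern[len('path:'):]
      return prefix in ('', '.') or path == prefix or path.startswith(prefix + '/')

  def narrow_select(paths, includes, excludes):
      return [p for p in paths
              if any(path_match(i, p) for i in includes)
              and not any(path_match(e, p) for e in excludes)]

  files = ['a', 'b', 'dir0/c', 'dir0/d', 'dir1/e', 'dir1/f']
  print(narrow_select(files, ['path:dir0'], ['path:dir0/c']))   # ['dir0/d']
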
--stream will use rawstorefiledata to transfer the changelog and manifestlog,
then fall through to the regular pull mechanism to fetch file data

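Before the transcript that follows, a rough sketch of that two-phase shape: a streaming clone first fetches raw changelog and manifestlog data with rawstorefiledata, then falls back to the normal discovery and filesdata path for file revisions. The `send` helper below is an invented stub that only records what a client would issue; it is not a Mercurial API, and the short hashes are taken from the transcript.

  # Sketch of the request sequence only; nothing here talks to a server.
  def send(command, args):
      print('sending command %s: %r' % (command, args))

  def stream_clone(common, remote_heads, wanted_nodes):
      # Phase 1: copy the changelog and manifestlog revlogs wholesale.
      send('rawstorefiledata', {'files': ['changelog', 'manifestlog']})
      # Phase 2: after a heads/known discovery exchange (omitted here), run
      # the ordinary pull machinery. Changeset and manifest data is already
      # present, so only file revisions still need to be fetched.
      send('changesetdata', {
          'fields': ['bookmarks', 'parents', 'phase', 'revision'],
          'revisions': [{'type': 'changesetdagrange',
                         'roots': common, 'heads': remote_heads}],
      })
      send('filesdata', {
          'fields': ['parents', 'revision'],
          'haveparents': True,
          'revisions': [{'type': 'changesetexplicit', 'nodes': wanted_nodes}],
      })

  stream_clone(common=['97765fc3cd62'], remote_heads=['97765fc3cd62'],
               wanted_nodes=['3390ef850073', 'b709380892b1',
                             '47fe012ab237', '97765fc3cd62'])
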
$ hg --debug clone --stream -U http://localhost:$HGPORT client-stream-0
using http://localhost:$HGPORT/
sending capabilities command
sending 1 commands
sending command rawstorefiledata: {
'files': [
'changelog',
'manifestlog'
]
}
received frame(size=9; request=1; stream=2; streamflags=stream-begin; type=stream-settings; flags=eos)
received frame(size=11; request=1; stream=2; streamflags=encoded; type=command-response; flags=continuation)
received frame(size=1275; request=1; stream=2; streamflags=encoded; type=command-response; flags=continuation)
received frame(size=0; request=1; stream=2; streamflags=; type=command-response; flags=eos)
updating the branch cache
query 1; heads
sending 2 commands
sending command heads: {}
sending command known: {
'nodes': [
'\x97v_\xc3\xcdbO\xd1\xfa\x01v\x93,!\xff\xd1j\xdfC.'
]
}
received frame(size=9; request=1; stream=2; streamflags=stream-begin; type=stream-settings; flags=eos)
received frame(size=11; request=1; stream=2; streamflags=encoded; type=command-response; flags=continuation)
received frame(size=22; request=1; stream=2; streamflags=encoded; type=command-response; flags=continuation)
received frame(size=0; request=1; stream=2; streamflags=; type=command-response; flags=eos)
received frame(size=11; request=3; stream=2; streamflags=encoded; type=command-response; flags=continuation)
received frame(size=2; request=3; stream=2; streamflags=encoded; type=command-response; flags=continuation)
received frame(size=0; request=3; stream=2; streamflags=; type=command-response; flags=eos)
searching for changes
all remote heads known locally
sending 1 commands
sending command changesetdata: {
'fields': set([
'bookmarks',
'parents',
'phase',
'revision'
]),
'revisions': [
{
'heads': [
'\x97v_\xc3\xcdbO\xd1\xfa\x01v\x93,!\xff\xd1j\xdfC.'
],
'roots': [
'\x97v_\xc3\xcdbO\xd1\xfa\x01v\x93,!\xff\xd1j\xdfC.'
],
'type': 'changesetdagrange'
}
]
}
received frame(size=9; request=1; stream=2; streamflags=stream-begin; type=stream-settings; flags=eos)
received frame(size=11; request=1; stream=2; streamflags=encoded; type=command-response; flags=continuation)
received frame(size=13; request=1; stream=2; streamflags=encoded; type=command-response; flags=continuation)
received frame(size=0; request=1; stream=2; streamflags=; type=command-response; flags=eos)
checking for updated bookmarks
sending 1 commands
sending command filesdata: {
'fields': set([
'parents',
'revision'
]),
'haveparents': True,
'revisions': [
{
'nodes': [
'3\x90\xef\x85\x00s\xfb\xc2\xf0\xdf\xff"D4,\x8e\x92)\x01:',
'\xb7\t8\x08\x92\xb1\x93\xc1\t\x1d:\x81\x7fp`R\xe3F\x82\x1b',
'G\xfe\x01*\xb27\xa8\xc7\xfc\x0cx\xf9\xf2mXf\xee\xf3\xf8%',
'\x97v_\xc3\xcdbO\xd1\xfa\x01v\x93,!\xff\xd1j\xdfC.'
],
'type': 'changesetexplicit'
}
]
}
received frame(size=9; request=1; stream=2; streamflags=stream-begin; type=stream-settings; flags=eos)
received frame(size=11; request=1; stream=2; streamflags=encoded; type=command-response; flags=continuation)
1047 received frame(size=11; request=1; stream=2; streamflags=encoded; type=command-response; flags=continuation)
1048 received frame(size=1133; request=1; stream=2; streamflags=encoded; type=command-response; flags=continuation)
1048 received frame(size=1133; request=1; stream=2; streamflags=encoded; type=command-response; flags=continuation)
1049 received frame(size=0; request=1; stream=2; streamflags=; type=command-response; flags=eos)
1049 received frame(size=0; request=1; stream=2; streamflags=; type=command-response; flags=eos)
1050 (sent 5 HTTP requests and * bytes; received * bytes in responses) (glob)
1050 (sent 5 HTTP requests and * bytes; received * bytes in responses) (glob)
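
For reference, here is a rough sketch of how the three request payloads issued above could be assembled and CBOR-encoded outside of Mercurial. It uses the third-party cbor2 package purely for illustration (Mercurial ships its own CBOR encoder and frame writer), uses lists where the debug output shows sets, and only lists the single head node where the real filesdata request names all four changesets:

  import cbor2  # third-party CBOR encoder, stand-in for Mercurial's own

  # the single remote head reported above (hex form of the escaped bytes)
  HEAD = bytes.fromhex('97765fc3cd624fd1fa0176932c21ffd16adf432e')

  # 1. rawstorefiledata: pull the changelog and manifestlog wholesale
  rawstorefiledata = {b'files': [b'changelog', b'manifestlog']}

  # 2. changesetdata: everything in the DAG range; roots == heads here
  #    because all remote heads were already known locally
  changesetdata = {
      b'fields': [b'bookmarks', b'parents', b'phase', b'revision'],
      b'revisions': [{
          b'type': b'changesetdagrange',
          b'roots': [HEAD],
          b'heads': [HEAD],
      }],
  }

  # 3. filesdata: file revisions for the changesets whose manifests are
  #    already present, so haveparents can be True
  filesdata = {
      b'fields': [b'parents', b'revision'],
      b'haveparents': True,
      b'revisions': [{
          b'type': b'changesetexplicit',
          b'nodes': [HEAD],
      }],
  }

  payloads = [cbor2.dumps(p) for p in (rawstorefiledata, changesetdata, filesdata)]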

--stream + --include/--exclude will only obtain some files

$ hg --debug --config extensions.pullext=$TESTDIR/pullext.py clone --stream --include dir0/ -U http://localhost:$HGPORT client-stream-2
using http://localhost:$HGPORT/
sending capabilities command
sending 1 commands
sending command rawstorefiledata: {
'files': [
'changelog',
'manifestlog'
]
}
received frame(size=9; request=1; stream=2; streamflags=stream-begin; type=stream-settings; flags=eos)
received frame(size=11; request=1; stream=2; streamflags=encoded; type=command-response; flags=continuation)
received frame(size=1275; request=1; stream=2; streamflags=encoded; type=command-response; flags=continuation)
received frame(size=0; request=1; stream=2; streamflags=; type=command-response; flags=eos)
updating the branch cache
query 1; heads
sending 2 commands
sending command heads: {}
sending command known: {
'nodes': [
'\x97v_\xc3\xcdbO\xd1\xfa\x01v\x93,!\xff\xd1j\xdfC.'
]
}
received frame(size=9; request=1; stream=2; streamflags=stream-begin; type=stream-settings; flags=eos)
received frame(size=11; request=1; stream=2; streamflags=encoded; type=command-response; flags=continuation)
received frame(size=22; request=1; stream=2; streamflags=encoded; type=command-response; flags=continuation)
received frame(size=0; request=1; stream=2; streamflags=; type=command-response; flags=eos)
received frame(size=11; request=3; stream=2; streamflags=encoded; type=command-response; flags=continuation)
received frame(size=2; request=3; stream=2; streamflags=encoded; type=command-response; flags=continuation)
received frame(size=0; request=3; stream=2; streamflags=; type=command-response; flags=eos)
searching for changes
all remote heads known locally
sending 1 commands
sending command changesetdata: {
'fields': set([
'bookmarks',
'parents',
'phase',
'revision'
]),
'revisions': [
{
'heads': [
'\x97v_\xc3\xcdbO\xd1\xfa\x01v\x93,!\xff\xd1j\xdfC.'
],
'roots': [
'\x97v_\xc3\xcdbO\xd1\xfa\x01v\x93,!\xff\xd1j\xdfC.'
],
'type': 'changesetdagrange'
}
]
}
received frame(size=9; request=1; stream=2; streamflags=stream-begin; type=stream-settings; flags=eos)
received frame(size=11; request=1; stream=2; streamflags=encoded; type=command-response; flags=continuation)
received frame(size=13; request=1; stream=2; streamflags=encoded; type=command-response; flags=continuation)
received frame(size=0; request=1; stream=2; streamflags=; type=command-response; flags=eos)
checking for updated bookmarks
sending 1 commands
sending command filesdata: {
'fields': set([
'parents',
'revision'
]),
'haveparents': True,
'pathfilter': {
'include': [
'path:dir0'
]
},
'revisions': [
{
'nodes': [
'3\x90\xef\x85\x00s\xfb\xc2\xf0\xdf\xff"D4,\x8e\x92)\x01:',
'\xb7\t8\x08\x92\xb1\x93\xc1\t\x1d:\x81\x7fp`R\xe3F\x82\x1b',
'G\xfe\x01*\xb27\xa8\xc7\xfc\x0cx\xf9\xf2mXf\xee\xf3\xf8%',
'\x97v_\xc3\xcdbO\xd1\xfa\x01v\x93,!\xff\xd1j\xdfC.'
],
'type': 'changesetexplicit'
}
]
}
received frame(size=9; request=1; stream=2; streamflags=stream-begin; type=stream-settings; flags=eos)
received frame(size=11; request=1; stream=2; streamflags=encoded; type=command-response; flags=continuation)
received frame(size=449; request=1; stream=2; streamflags=encoded; type=command-response; flags=continuation)
received frame(size=0; request=1; stream=2; streamflags=; type=command-response; flags=eos)
(sent 5 HTTP requests and * bytes; received * bytes in responses) (glob)
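
The only difference from the previous clone is the pathfilter member in the filesdata request. A hypothetical helper (the function name and defaults are ours, not Mercurial's API) showing how such a narrow request could be put together:

  def filesdata_request(nodes, include=(), exclude=()):
      """Build a filesdata payload shaped like the one above.

      include/exclude are repository-relative directories (as bytes); the
      server only returns revisions of files matching the resulting
      patterns.  The 'fields' value is encoded as a CBOR set on the wire.
      """
      req = {
          b'fields': [b'parents', b'revision'],
          b'haveparents': True,
          b'revisions': [{b'type': b'changesetexplicit', b'nodes': list(nodes)}],
      }
      pathfilter = {}
      if include:
          pathfilter[b'include'] = [b'path:' + p for p in include]
      if exclude:
          pathfilter[b'exclude'] = [b'path:' + p for p in exclude]
      if pathfilter:
          req[b'pathfilter'] = pathfilter
      return req

  # --include dir0/ above becomes {'include': ['path:dir0']}, which is why
  # only the revlogs under data/dir0/ show up in the store listing below.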

#if reporevlogstore
$ find client-stream-2/.hg/store -type f -name '*.i' | sort
client-stream-2/.hg/store/00changelog.i
client-stream-2/.hg/store/00manifest.i
client-stream-2/.hg/store/data/dir0/c.i
client-stream-2/.hg/store/data/dir0/d.i
#endif

Shallow clone doesn't work with revlogs

$ hg --debug --config extensions.pullext=$TESTDIR/pullext.py clone --depth 1 -U http://localhost:$HGPORT client-shallow-revlogs
using http://localhost:$HGPORT/
sending capabilities command
query 1; heads
sending 2 commands
sending command heads: {}
sending command known: {
'nodes': []
}
received frame(size=9; request=1; stream=2; streamflags=stream-begin; type=stream-settings; flags=eos)
received frame(size=11; request=1; stream=2; streamflags=encoded; type=command-response; flags=continuation)
received frame(size=22; request=1; stream=2; streamflags=encoded; type=command-response; flags=continuation)
received frame(size=0; request=1; stream=2; streamflags=; type=command-response; flags=eos)
received frame(size=11; request=3; stream=2; streamflags=encoded; type=command-response; flags=continuation)
received frame(size=1; request=3; stream=2; streamflags=encoded; type=command-response; flags=continuation)
received frame(size=0; request=3; stream=2; streamflags=; type=command-response; flags=eos)
sending 1 commands
sending command changesetdata: {
'fields': set([
'bookmarks',
'parents',
'phase',
'revision'
]),
'revisions': [
{
'heads': [
'\x97v_\xc3\xcdbO\xd1\xfa\x01v\x93,!\xff\xd1j\xdfC.'
],
'roots': [],
'type': 'changesetdagrange'
}
]
}
received frame(size=9; request=1; stream=2; streamflags=stream-begin; type=stream-settings; flags=eos)
received frame(size=11; request=1; stream=2; streamflags=encoded; type=command-response; flags=continuation)
received frame(size=783; request=1; stream=2; streamflags=encoded; type=command-response; flags=continuation)
received frame(size=0; request=1; stream=2; streamflags=; type=command-response; flags=eos)
add changeset 3390ef850073
add changeset b709380892b1
add changeset 47fe012ab237
add changeset 97765fc3cd62
checking for updated bookmarks
sending 1 commands
sending command manifestdata: {
'fields': set([
'parents',
'revision'
]),
'haveparents': True,
'nodes': [
'\x99/Gy\x02\x9a=\xf8\xd0fm\x00\xbb\x92OicN&A',
'|2 \x1a\xa3\xa1R\xa9\xe6\xa9"+?\xa8\xd0\xe3\x0f\xc2V\xe8',
'\x8d\xd0W<\x7f\xaf\xe2\x04F\xcc\xea\xac\x05N\xea\xa4x\x91M\xdb',
'113\x85\xf2!\x8b\x08^\xb2Z\x821\x1e*\xdd\x0e\xeb\x8c3'
],
'tree': ''
}
received frame(size=9; request=1; stream=2; streamflags=stream-begin; type=stream-settings; flags=eos)
received frame(size=11; request=1; stream=2; streamflags=encoded; type=command-response; flags=continuation)
received frame(size=967; request=1; stream=2; streamflags=encoded; type=command-response; flags=continuation)
received frame(size=0; request=1; stream=2; streamflags=; type=command-response; flags=eos)
sending 1 commands
sending command filesdata: {
'fields': set([
'linknode',
'parents',
'revision'
]),
'haveparents': False,
'revisions': [
{
'nodes': [
'\x97v_\xc3\xcdbO\xd1\xfa\x01v\x93,!\xff\xd1j\xdfC.'
],
'type': 'changesetexplicit'
}
]
}
received frame(size=9; request=1; stream=2; streamflags=stream-begin; type=stream-settings; flags=eos)
received frame(size=11; request=1; stream=2; streamflags=encoded; type=command-response; flags=continuation)
received frame(size=1005; request=1; stream=2; streamflags=encoded; type=command-response; flags=continuation)
received frame(size=0; request=1; stream=2; streamflags=; type=command-response; flags=eos)
transaction abort!
rollback completed
(sent 5 HTTP requests and * bytes; received * bytes in responses) (glob)
abort: revlog storage does not support missing parents write mode
[255]
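
A --depth clone requests file revisions with haveparents set to False, so the client may be handed revisions whose parents it will never receive. Revlog-based storage cannot record such a revision, which is what the abort above reflects. A toy sketch of that gate, with invented names and not Mercurial's actual storage interface:

  NULL_NODE = b'\x00' * 20

  class StorageError(Exception):
      pass

  class RevlogLikeStore:
      """Toy stand-in for revlog storage: every non-null parent must exist."""

      supports_missing_parents = False

      def __init__(self):
          self.nodes = {NULL_NODE}

      def add_revision(self, node, p1, p2, text):
          missing = {p1, p2} - self.nodes
          if missing and not self.supports_missing_parents:
              # mirrors the abort in the transcript above
              raise StorageError(
                  'revlog storage does not support missing parents write mode')
          self.nodes.add(node)

  # a shallow pull delivers a revision whose parent was clamped away by --depth
  store = RevlogLikeStore()
  try:
      store.add_revision(b'\x01' * 20, b'\x02' * 20, NULL_NODE, b'data')
  except StorageError as exc:
      print('abort:', exc)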

$ killdaemons.py

Repo with 2 DAG branches introducing same filenode, to test linknode adjustment

$ hg init server-linknode
$ enablehttpv2 server-linknode
$ cd server-linknode
$ touch foo
$ hg -q commit -Am initial
$ echo foo > dupe-file
$ hg commit -Am 'dupe 1'
adding dupe-file
$ hg -q up -r 0
$ echo foo > dupe-file
$ hg commit -Am 'dupe 2'
adding dupe-file
created new head
$ hg serve -p $HGPORT -d --pid-file hg.pid -E error.log
$ cat hg.pid > $DAEMON_PIDS
$ cd ..
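
Both heads add dupe-file with identical content and null parents, so they introduce one and the same filenode. A small sketch of Mercurial's revision hash (SHA-1 over the sorted parent nodes followed by the text) shows why; the helper name is ours, and it ignores copy metadata, which this plain add does not have:

  import hashlib

  NULL = b'\x00' * 20

  def filelog_node(text, p1=NULL, p2=NULL):
      # Mercurial hashes min(p1, p2) + max(p1, p2) + text with SHA-1.
      a, b = sorted((p1, p2))
      return hashlib.sha1(a + b + text).hexdigest()

  # 'dupe 1' and 'dupe 2' both store dupe-file as b'foo\n' with null parents,
  # so they compute the identical filenode; only the changeset the filelog
  # links it to (the linkrev/linknode) can differ between repositories.
  assert filelog_node(b'foo\n') == filelog_node(b'foo\n')
  print(filelog_node(b'foo\n')[:12])  # should print 2ed2a3912a0b, the nodeid
                                      # shown by debugrevlogindex below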

Perform an incremental pull of both heads and ensure linkrev is written out properly

$ hg clone -r 96ee1d7354c4 http://localhost:$HGPORT client-linknode-1
new changesets 96ee1d7354c4
updating to branch default
1 files updated, 0 files merged, 0 files removed, 0 files unresolved
$ cd client-linknode-1
$ touch extra
$ hg commit -Am extra
adding extra
$ cd ..

$ hg clone -r 96ee1d7354c4 http://localhost:$HGPORT client-linknode-2
new changesets 96ee1d7354c4
updating to branch default
1 files updated, 0 files merged, 0 files removed, 0 files unresolved
$ cd client-linknode-2
$ touch extra
$ hg commit -Am extra
adding extra
$ cd ..

$ hg -R client-linknode-1 pull -r 1681c33f9f80
pulling from http://localhost:$HGPORT/
searching for changes
new changesets 1681c33f9f80
(run 'hg update' to get a working copy)

#if reporevlogstore
$ hg -R client-linknode-1 debugrevlogindex dupe-file
rev linkrev nodeid p1 p2
0 2 2ed2a3912a0b 000000000000 000000000000
#endif
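
The server's dupe-file filelog holds a single revision whose recorded linkrev points at whichever 'dupe' changeset was committed first. A client that pulls only the other head never receives that changeset, so the file revision has to be re-linked to a changeset that is actually part of the transfer. A simplified sketch of that adjustment, as an illustration rather than Mercurial's implementation:

  def derive_linknode(recorded_linknode, introducing_changesets, outgoing):
      """Pick a linknode the receiving repository will actually have.

      recorded_linknode      -- changeset the server's filelog points at
      introducing_changesets -- every changeset that introduces this filenode
      outgoing               -- set of changeset nodes included in this pull
      """
      if recorded_linknode in outgoing:
          return recorded_linknode
      for candidate in introducing_changesets:
          if candidate in outgoing:
              return candidate
      # Nothing suitable is being sent; the receiver would end up with a
      # linkrev pointing at a changeset it does not have.
      return recorded_linknode

  # For client-linknode-2's pull only 639c8990d6a5 is outgoing, so the
  # dupe-file revision must be linked to it rather than to the other head.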

$ hg -R client-linknode-2 pull -r 639c8990d6a5
pulling from http://localhost:$HGPORT/
searching for changes
new changesets 639c8990d6a5
(run 'hg update' to get a working copy)

#if reporevlogstore
$ hg -R client-linknode-2 debugrevlogindex dupe-file
rev linkrev nodeid p1 p2
0 2 2ed2a3912a0b 000000000000 000000000000
#endif

$ hg -R client-linknode-2 verify
checking changesets
checking manifests
crosschecking files in changesets and manifests
checking files
warning: revlog 'data/dupe-file.i' not in fncache!
2: empty or missing dupe-file
dupe-file@2: manifest refers to unknown revision 2ed2a3912a0b
checked 3 changesets with 2 changes to 3 files
1 warnings encountered!
hint: run "hg debugrebuildfncache" to recover from corrupt fncache
2 integrity errors encountered!
(first damaged changeset appears to be 2)
[1]