clonebundles: introduce a new write protocol command...
Author: marmoute
Changeset: r51594:4238e6b2 (default branch)
hgext/clonebundles.py
@@ -1,1036 +1,1037 @@
1 1 # This software may be used and distributed according to the terms of the
2 2 # GNU General Public License version 2 or any later version.
3 3
4 4 """advertise pre-generated bundles to seed clones
5 5
6 6 "clonebundles" is a server-side extension used to advertise the existence
7 7 of pre-generated, externally hosted bundle files to clients that are
8 8 cloning so that cloning can be faster, more reliable, and require less
9 9 resources on the server. "pullbundles" is a related feature for sending
10 10 pre-generated bundle files to clients as part of pull operations.
11 11
12 12 Cloning can be a CPU and I/O intensive operation on servers. Traditionally,
13 13 the server, in response to a client's request to clone, dynamically generates
14 14 a bundle containing the entire repository content and sends it to the client.
15 15 There is no caching on the server and the server will have to redundantly
16 16 generate the same outgoing bundle in response to each clone request. For
17 17 servers with large repositories or with high clone volume, the load from
18 18 clones can make scaling the server challenging and costly.
19 19
20 20 This extension provides server operators the ability to offload
21 21 potentially expensive clone load to an external service. Pre-generated
22 22 bundles also allow using more CPU intensive compression, reducing the
23 23 effective bandwidth requirements.
24 24
25 25 Here's how clone bundles work:
26 26
27 27 1. A server operator establishes a mechanism for making bundle files available
28 28 on a hosting service where Mercurial clients can fetch them.
29 29 2. A manifest file listing available bundle URLs and some optional metadata
30 30 is added to the Mercurial repository on the server.
31 31 3. A client initiates a clone against a clone bundles aware server.
32 32 4. The client sees the server is advertising clone bundles and fetches the
33 33 manifest listing available bundles.
34 34 5. The client filters and sorts the available bundles based on what it
35 35 supports and prefers.
36 36 6. The client downloads and applies an available bundle from the
37 37 server-specified URL.
38 38 7. The client reconnects to the original server and performs the equivalent
39 39 of :hg:`pull` to retrieve all repository data not in the bundle. (The
40 40 repository could have been updated between when the bundle was created
41 41 and when the client started the clone.) This may use "pullbundles".
42 42
43 43 Instead of the server generating full repository bundles for every clone
44 44 request, it generates full bundles once and they are subsequently reused to
45 45 bootstrap new clones. The server may still transfer data at clone time.
46 46 However, this is only data that has been added/changed since the bundle was
47 47 created. For large, established repositories, this can reduce server load for
48 48 clones to less than 1% of original.
49 49
50 50 Here's how pullbundles work:
51 51
52 52 1. A manifest file listing available bundles and describing the revisions
53 53 is added to the Mercurial repository on the server.
54 54 2. A new-enough client informs the server that it supports partial pulls
55 55 and initiates a pull.
56 56 3. If the server has pull bundles enabled and sees the client advertising
57 57 partial pulls, it checks for a matching pull bundle in the manifest.
58 58 A bundle matches if the format is supported by the client, the client
59 59 has the required revisions already and needs something from the bundle.
60 60 4. If there is at least one matching bundle, the server sends it to the client.
61 61 5. The client applies the bundle and notices that the server reply was
62 62 incomplete. It initiates another pull.
63 63
64 64 To work, this extension requires the following of server operators:
65 65
66 66 * Generating bundle files of repository content (typically periodically,
67 67 such as once per day).
68 68 * Clone bundles: A file server that clients have network access to and that
69 69 Python knows how to talk to through its normal URL handling facility
70 70 (typically an HTTP/HTTPS server).
71 71 * A process for keeping the bundles manifest in sync with available bundle
72 72 files.
73 73
74 74 Strictly speaking, using a static file hosting server isn't required: a server
75 75 operator could use a dynamic service for retrieving bundle data. However,
76 76 static file hosting services are simple and scalable and should be sufficient
77 77 for most needs.
78 78
79 79 Bundle files can be generated with the :hg:`bundle` command. Typically
80 80 :hg:`bundle --all` is used to produce a bundle of the entire repository.
81 81
82 82 The bundlespec option `stream` (see :hg:`help bundlespec`)
83 83 can be used to produce a special *streaming clonebundle*, typically using
84 84 :hg:`bundle --all --type="none-streamv2"`.
85 85 These are bundle files that are extremely efficient
86 86 to produce and consume (read: fast). However, they are larger than
87 87 traditional bundle formats and require that clients support the exact set
88 88 of repository data store formats in use by the repository that created them.
89 89 Typically, a newer server can serve data that is compatible with older clients.
90 90 However, *streaming clone bundles* don't have this guarantee. **Server
91 91 operators need to be aware that newer versions of Mercurial may produce
92 92 streaming clone bundles incompatible with older Mercurial versions.**
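
For illustration, a server operator could pre-generate both a general-purpose
bundle and a streaming clone bundle; the file names below are only examples::

  $ hg bundle --all --type="zstd-v2" full-zstd-v2.hg
  $ hg bundle --all --type="none-streamv2" full-stream.hg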
93 93
94 94 A server operator is responsible for creating a ``.hg/clonebundles.manifest``
95 95 file containing the list of available bundle files suitable for seeding
96 96 clones. If this file does not exist, the repository will not advertise the
97 97 existence of clone bundles when clients connect. For pull bundles,
98 98 ``.hg/pullbundles.manifest`` is used.
99 99
100 100 The manifest file contains a newline (\\n) delimited list of entries.
101 101
102 102 Each line in this file defines an available bundle. Lines have the format:
103 103
104 104 <URL> [<key>=<value>[ <key>=<value>]]
105 105
106 106 That is, a URL followed by an optional, space-delimited list of key=value
107 107 pairs describing additional properties of this bundle. Both keys and values
108 108 are URI encoded.
109 109
110 110 For pull bundles, the URL is a path under the ``.hg`` directory of the
111 111 repository.
112 112
113 113 Keys in UPPERCASE are reserved for use by Mercurial and are defined below.
114 114 All non-uppercase keys can be used by site installations. An example use
115 115 for custom properties is to use the *datacenter* attribute to define which
116 116 data center a file is hosted in. Clients could then prefer a server in the
117 117 data center closest to them.
118 118
119 119 The following reserved keys are currently defined:
120 120
121 121 BUNDLESPEC
122 122 A "bundle specification" string that describes the type of the bundle.
123 123
124 124 These are string values that are accepted by the "--type" argument of
125 125 :hg:`bundle`.
126 126
127 127 The values are parsed in strict mode, which means they must be of the
128 128 "<compression>-<type>" form. See
129 129 mercurial.exchange.parsebundlespec() for more details.
130 130
131 131 :hg:`debugbundle --spec` can be used to print the bundle specification
132 132 string for a bundle file. The output of this command can be used verbatim
133 133 for the value of ``BUNDLESPEC`` (it is already escaped).
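
For instance (the exact output depends on how the bundle was created)::

  $ hg debugbundle --spec full-zstd-v2.hg
  zstd-v2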
134 134
135 135 Clients will automatically filter out specifications that are unknown or
136 136 unsupported so they won't attempt to download something that likely won't
137 137 apply.
138 138
139 139 The actual value doesn't impact client behavior beyond filtering:
140 140 clients will still sniff the bundle type from the header of downloaded
141 141 files.
142 142
143 143 **Use of this key is highly recommended**, as it allows clients to
144 144 easily skip unsupported bundles. If this key is not defined, an old
145 145 client may attempt to apply a bundle that it is incapable of reading.
146 146
147 147 REQUIRESNI
148 148 Whether Server Name Indication (SNI) is required to connect to the URL.
149 149 SNI allows servers to use multiple certificates on the same IP. It is
150 150 somewhat common in CDNs and other hosting providers. Older Python
151 151 versions do not support SNI. Defining this attribute enables clients
152 152 with older Python versions to filter this entry without experiencing
153 153 an opaque SSL failure at connection time.
154 154
155 155 If this is defined, it is important to advertise a non-SNI fallback
156 156 URL or clients running old Python releases may not be able to clone
157 157 with the clonebundles facility.
158 158
159 159 Value should be "true".
160 160
161 161 REQUIREDRAM
162 162 Value specifies expected memory requirements to decode the payload.
163 163 Values can have suffixes for common byte sizes, e.g. "64MB".
164 164
165 165 This key is often used with zstd-compressed bundles using a high
166 166 compression level / window size, which can require 100+ MB of memory
167 167 to decode.
168 168
169 169 heads
170 170 Used for pull bundles. This contains the ``;`` separated changeset
171 171 hashes of the heads of the bundle content.
172 172
173 173 bases
174 174 Used for pull bundles. This contains the ``;`` separated changeset
175 175 hashes of the roots of the bundle content. This can be skipped if
176 176 the bundle was created without ``--base``.
177 177
178 178 Manifests can contain multiple entries. Assuming metadata is defined, clients
179 179 will filter entries from the manifest that they don't support. The remaining
180 180 entries are optionally sorted by client preferences
181 181 (``ui.clonebundleprefers`` config option). The client then attempts
182 182 to fetch the bundle at the first URL in the remaining list.
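
For instance, a client that prefers zstd-compressed bundles but accepts gzip as
a fallback could be configured with something like::

  [ui]
  clonebundleprefers = COMPRESSION=zstd, COMPRESSION=gzip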
183 183
184 184 **Errors when downloading a bundle will fail the entire clone operation:
185 185 clients do not automatically fall back to a traditional clone.** The reason
186 186 for this is that if a server is using clone bundles, it is probably doing so
187 187 because the feature is necessary to help it scale. In other words, there
188 188 is an assumption that clone load will be offloaded to another service and
189 189 that the Mercurial server isn't responsible for serving this clone load.
190 190 If that other service experiences issues and clients start mass falling back to
191 191 the original Mercurial server, the added clone load could overwhelm the server
192 192 due to unexpected load and effectively take it offline. Not having clients
193 193 automatically fall back to cloning from the original server mitigates this
194 194 scenario.
195 195
196 196 Because there is no automatic Mercurial server fallback on failure of the
197 197 bundle hosting service, it is important for server operators to view the bundle
198 198 hosting service as an extension of the Mercurial server in terms of
199 199 availability and service level agreements: if the bundle hosting service goes
200 200 down, so does the ability for clients to clone. Note: when a failure occurs,
201 201 clients will see a message informing them how to bypass the clone bundles
202 202 facility. Server operators should therefore expect some people to follow these
203 203 instructions, driving more load to the original Mercurial server when the
204 204 bundle hosting service fails.
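
For reference, a client can bypass the clone bundles facility entirely by
disabling it for a single operation, e.g. (URL illustrative)::

  $ hg clone --config ui.clonebundles=false https://hg.example.com/repo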
205 205
206 206
207 207 inline clonebundles
208 208 -------------------
209 209
210 210 It is possible to transmit clonebundles inline in case repositories are
211 211 accessed over SSH. This avoids having to set up an external HTTPS server
212 212 and results in the same access control as already present for the SSH setup.
213 213
214 214 Inline clonebundles should be placed into the `.hg/bundle-cache` directory.
215 215 A clonebundle at `.hg/bundle-cache/mybundle.bundle` is referred to
216 216 in the `clonebundles.manifest` file as `peer-bundle-cache://mybundle.bundle`.
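
The corresponding manifest entry could then look like (bundle name and spec
are illustrative)::

  peer-bundle-cache://mybundle.bundle BUNDLESPEC=gzip-v2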
217 217
218 218
219 219 auto-generation of clone bundles
220 220 --------------------------------
221 221
222 222 It is possible to set Mercurial to automatically re-generate clone bundles when
223 223 enough new content is available.
224 224
225 225 Mercurial will take care of the process asynchronously. The configured list of
226 226 bundle types will be generated, uploaded, and advertised. Older bundles will get
227 227 decommissioned as newer ones replace them.
228 228
229 229 Bundles Generation:
230 230 ...................
231 231
232 232 The extension can generate multiple variants of the clone bundle. Each
233 233 variant is defined by the "bundle-spec" it uses::
234 234
235 235 [clone-bundles]
236 236 auto-generate.formats= zstd-v2, gzip-v2
237 237
238 238 See `hg help bundlespec` for details about available options.
239 239
240 240 By default, new bundles are generated when 5% of the repository contents or at
241 241 least 1000 revisions are not contained in the cached bundles. This behavior can
242 242 be controlled by the `clone-bundles.trigger.below-bundled-ratio` option
243 243 (default 0.95) and the `clone-bundles.trigger.revs` option (default 1000)::
244 244
245 245 [clone-bundles]
246 246 trigger.below-bundled-ratio=0.95
247 247 trigger.revs=1000
248 248
249 249 This logic can be manually triggered using the `admin::clone-bundles-refresh`
250 250 command, or automatically on each repository change if
251 251 `clone-bundles.auto-generate.on-change` is set to `yes`::
252 252
253 253 [clone-bundles]
254 254 auto-generate.on-change=yes
255 255 auto-generate.formats= zstd-v2, gzip-v2
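
This refresh can also be triggered by hand, for instance from a cron job::

  $ hg admin::clone-bundles-refresh --background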
256 256
257 257 Bundles Upload and Serving:
258 258 ...........................
259 259
260 260 The generated bundles need to be made available to users through a "public" URL.
261 261 This should be done through the `clone-bundles.upload-command` configuration. The
262 262 value of this option should be a shell command. It will have access to the
263 263 bundle file path through the `$HGCB_BUNDLE_PATH` variable, and the expected
264 264 basename in the "public" URL is available in the `$HGCB_BUNDLE_BASENAME` variable, for example::
265 265
266 266 [clone-bundles]
267 267 upload-command=sftp put $HGCB_BUNDLE_PATH \
268 268 sftp://bundles.host/clone-bundles/$HGCB_BUNDLE_BASENAME
269 269
270 270 If the file was already uploaded, the command must still succeed.
271 271
272 272 After upload, the file should be available at a URL defined by
273 273 `clone-bundles.url-template`::
274 274
275 275 [clone-bundles]
276 276 url-template=https://bundles.host/cache/clone-bundles/{basename}
277 277
278 278 Old bundles cleanup:
279 279 ....................
280 280
281 281 When new bundles are generated, the older ones are no longer necessary and can
282 282 be removed from storage. This is done through the `clone-bundles.delete-command`
283 283 configuration. The command is given the URL of the artifact to delete through
284 284 the `$HGCB_BUNDLE_URL` environment variable::
285 285
286 286 [clone-bundles]
287 287 delete-command=sftp rm sftp://bundles.host/clone-bundles/$HGCB_BUNDLE_BASENAME
288 288
289 289 If the file was already deleted, the command must still succeed.
290 290 """
291 291
292 292
293 293 import os
294 294 import weakref
295 295
296 296 from mercurial.i18n import _
297 297
298 298 from mercurial import (
299 299 bundlecaches,
300 300 commands,
301 301 error,
302 302 extensions,
303 303 localrepo,
304 304 lock,
305 305 node,
306 306 registrar,
307 307 util,
308 308 wireprotov1server,
309 309 )
310 310
311 311
312 312 from mercurial.utils import (
313 313 procutil,
314 314 )
315 315
316 316 testedwith = b'ships-with-hg-core'
317 317
318 318
319 319 def capabilities(orig, repo, proto):
320 320 caps = orig(repo, proto)
321 321
322 322 # Only advertise if a manifest exists. This does add some I/O to requests.
323 323 # But this should be cheaper than a wasted network round trip due to
324 324 # a missing file.
325 325 if repo.vfs.exists(bundlecaches.CB_MANIFEST_FILE):
326 326 caps.append(b'clonebundles')
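# Also advertise the new 'clonebundles_manifest' capability so up-to-date
# clients fetch the manifest through the dedicated wire protocol command,
# while older clients keep using the legacy 'clonebundles' call.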
327 caps.append(b'clonebundles_manifest')
327 328
328 329 return caps
329 330
330 331
331 332 def extsetup(ui):
332 333 extensions.wrapfunction(wireprotov1server, b'_capabilities', capabilities)
333 334
334 335
335 336 # logic for bundle auto-generation
336 337
337 338
338 339 configtable = {}
339 340 configitem = registrar.configitem(configtable)
340 341
341 342 cmdtable = {}
342 343 command = registrar.command(cmdtable)
343 344
344 345 configitem(b'clone-bundles', b'auto-generate.on-change', default=False)
345 346 configitem(b'clone-bundles', b'auto-generate.formats', default=list)
346 347 configitem(b'clone-bundles', b'trigger.below-bundled-ratio', default=0.95)
347 348 configitem(b'clone-bundles', b'trigger.revs', default=1000)
348 349
349 350 configitem(b'clone-bundles', b'upload-command', default=None)
350 351
351 352 configitem(b'clone-bundles', b'delete-command', default=None)
352 353
353 354 configitem(b'clone-bundles', b'url-template', default=None)
354 355
355 356 configitem(b'devel', b'debug.clonebundles', default=False)
356 357
357 358
358 359 # category for the post-close transaction hooks
359 360 CAT_POSTCLOSE = b"clonebundles-autobundles"
360 361
361 362 # template for bundle file names
362 363 BUNDLE_MASK = (
363 364 b"full-%(bundle_type)s-%(revs)d_revs-%(tip_short)s_tip-%(op_id)s.hg"
364 365 )
365 366
366 367
367 368 # file in .hg/ used to track clonebundles being auto-generated
368 369 AUTO_GEN_FILE = b'clonebundles.auto-gen'
369 370
370 371
371 372 class BundleBase(object):
372 373 """represents the core of properties that matters for us in a bundle
373 374
374 375 :bundle_type: the bundlespec (see hg help bundlespec)
375 376 :revs: the number of revisions in the repo at bundle creation time
376 377 :tip_rev: the rev-num of the tip revision
377 378 :tip_node: the node id of the tip-most revision in the bundle
378 379
379 380 :ready: True if the bundle is ready to be served
380 381 """
381 382
382 383 ready = False
383 384
384 385 def __init__(self, bundle_type, revs, tip_rev, tip_node):
385 386 self.bundle_type = bundle_type
386 387 self.revs = revs
387 388 self.tip_rev = tip_rev
388 389 self.tip_node = tip_node
389 390
390 391 def valid_for(self, repo):
391 392 """is this bundle applicable to the current repository
392 393
393 394 This is useful for detecting bundles made irrelevant by stripping.
394 395 """
395 396 tip_node = node.bin(self.tip_node)
396 397 return repo.changelog.index.get_rev(tip_node) == self.tip_rev
397 398
398 399 def __eq__(self, other):
399 400 left = (self.ready, self.bundle_type, self.tip_rev, self.tip_node)
400 401 right = (other.ready, other.bundle_type, other.tip_rev, other.tip_node)
401 402 return left == right
402 403
403 404 def __neq__(self, other):
404 405 return not self == other
405 406
406 407 def __cmp__(self, other):
407 408 if self == other:
408 409 return 0
409 410 return -1
410 411
411 412
412 413 class RequestedBundle(BundleBase):
413 414 """A bundle that should be generated.
414 415
415 416 Additional attributes compared to BundleBase
416 417 :heads: list of head revisions (as rev-num)
417 418 :op_id: a "unique" identifier for the operation triggering the change
418 419 """
419 420
420 421 def __init__(self, bundle_type, revs, tip_rev, tip_node, head_revs, op_id):
421 422 self.head_revs = head_revs
422 423 self.op_id = op_id
423 424 super(RequestedBundle, self).__init__(
424 425 bundle_type,
425 426 revs,
426 427 tip_rev,
427 428 tip_node,
428 429 )
429 430
430 431 @property
431 432 def suggested_filename(self):
432 433 """A filename that can be used for the generated bundle"""
433 434 data = {
434 435 b'bundle_type': self.bundle_type,
435 436 b'revs': self.revs,
436 437 b'heads': self.head_revs,
437 438 b'tip_rev': self.tip_rev,
438 439 b'tip_node': self.tip_node,
439 440 b'tip_short': self.tip_node[:12],
440 441 b'op_id': self.op_id,
441 442 }
442 443 return BUNDLE_MASK % data
443 444
444 445 def generate_bundle(self, repo, file_path):
445 446 """generate the bundle at `filepath`"""
446 447 commands.bundle(
447 448 repo.ui,
448 449 repo,
449 450 file_path,
450 451 base=[b"null"],
451 452 rev=self.head_revs,
452 453 type=self.bundle_type,
453 454 quiet=True,
454 455 )
455 456
456 457 def generating(self, file_path, hostname=None, pid=None):
457 458 """return a GeneratingBundle object from this object"""
458 459 if pid is None:
459 460 pid = os.getpid()
460 461 if hostname is None:
461 462 hostname = lock._getlockprefix()
462 463 return GeneratingBundle(
463 464 self.bundle_type,
464 465 self.revs,
465 466 self.tip_rev,
466 467 self.tip_node,
467 468 hostname,
468 469 pid,
469 470 file_path,
470 471 )
471 472
472 473
473 474 class GeneratingBundle(BundleBase):
474 475 """A bundle being generated
475 476
476 477 extra attributes compared to BundleBase:
477 478
478 479 :hostname: the hostname of the machine generating the bundle
479 480 :pid: the pid of the process generating the bundle
480 481 :filepath: the target filename of the bundle
481 482
482 483 These attributes exist to help detect stalled generation processes.
483 484 """
484 485
485 486 ready = False
486 487
487 488 def __init__(
488 489 self, bundle_type, revs, tip_rev, tip_node, hostname, pid, filepath
489 490 ):
490 491 self.hostname = hostname
491 492 self.pid = pid
492 493 self.filepath = filepath
493 494 super(GeneratingBundle, self).__init__(
494 495 bundle_type, revs, tip_rev, tip_node
495 496 )
496 497
497 498 @classmethod
498 499 def from_line(cls, line):
499 500 """create an object by deserializing a line from AUTO_GEN_FILE"""
500 501 assert line.startswith(b'PENDING-v1 ')
501 502 (
502 503 __,
503 504 bundle_type,
504 505 revs,
505 506 tip_rev,
506 507 tip_node,
507 508 hostname,
508 509 pid,
509 510 filepath,
510 511 ) = line.split()
511 512 hostname = util.urlreq.unquote(hostname)
512 513 filepath = util.urlreq.unquote(filepath)
513 514 revs = int(revs)
514 515 tip_rev = int(tip_rev)
515 516 pid = int(pid)
516 517 return cls(
517 518 bundle_type, revs, tip_rev, tip_node, hostname, pid, filepath
518 519 )
519 520
520 521 def to_line(self):
521 522 """serialize the object to include as a line in AUTO_GEN_FILE"""
522 523 templ = b"PENDING-v1 %s %d %d %s %s %d %s"
523 524 data = (
524 525 self.bundle_type,
525 526 self.revs,
526 527 self.tip_rev,
527 528 self.tip_node,
528 529 util.urlreq.quote(self.hostname),
529 530 self.pid,
530 531 util.urlreq.quote(self.filepath),
531 532 )
532 533 return templ % data
533 534
534 535 def __eq__(self, other):
535 536 if not super(GeneratingBundle, self).__eq__(other):
536 537 return False
537 538 left = (self.hostname, self.pid, self.filepath)
538 539 right = (other.hostname, other.pid, other.filepath)
539 540 return left == right
540 541
541 542 def uploaded(self, url, basename):
542 543 """return a GeneratedBundle from this object"""
543 544 return GeneratedBundle(
544 545 self.bundle_type,
545 546 self.revs,
546 547 self.tip_rev,
547 548 self.tip_node,
548 549 url,
549 550 basename,
550 551 )
551 552
552 553
553 554 class GeneratedBundle(BundleBase):
554 555 """A bundle that is done being generated and can be served
555 556
556 557 extra attributes compared to BundleBase:
557 558
558 559 :file_url: the url where the bundle is available.
559 560 :basename: the "basename" used to upload (useful for deletion)
560 561
561 562 These attributes exist to generate a bundle manifest
562 563 (.hg/clonebundles.manifest)
563 564 """
564 565
565 566 ready = True
566 567
567 568 def __init__(
568 569 self, bundle_type, revs, tip_rev, tip_node, file_url, basename
569 570 ):
570 571 self.file_url = file_url
571 572 self.basename = basename
572 573 super(GeneratedBundle, self).__init__(
573 574 bundle_type, revs, tip_rev, tip_node
574 575 )
575 576
576 577 @classmethod
577 578 def from_line(cls, line):
578 579 """create an object by deserializing a line from AUTO_GEN_FILE"""
579 580 assert line.startswith(b'DONE-v1 ')
580 581 (
581 582 __,
582 583 bundle_type,
583 584 revs,
584 585 tip_rev,
585 586 tip_node,
586 587 file_url,
587 588 basename,
588 589 ) = line.split()
589 590 revs = int(revs)
590 591 tip_rev = int(tip_rev)
591 592 file_url = util.urlreq.unquote(file_url)
592 593 return cls(bundle_type, revs, tip_rev, tip_node, file_url, basename)
593 594
594 595 def to_line(self):
595 596 """serialize the object to include as a line in AUTO_GEN_FILE"""
596 597 templ = b"DONE-v1 %s %d %d %s %s %s"
597 598 data = (
598 599 self.bundle_type,
599 600 self.revs,
600 601 self.tip_rev,
601 602 self.tip_node,
602 603 util.urlreq.quote(self.file_url),
603 604 self.basename,
604 605 )
605 606 return templ % data
606 607
607 608 def manifest_line(self):
608 609 """serialize the object to include as a line in pullbundles.manifest"""
609 610 templ = b"%s BUNDLESPEC=%s REQUIRESNI=true"
610 611 return templ % (self.file_url, self.bundle_type)
611 612
612 613 def __eq__(self, other):
613 614 if not super(GeneratedBundle, self).__eq__(other):
614 615 return False
615 616 return self.file_url == other.file_url
616 617
617 618
618 619 def parse_auto_gen(content):
619 620 """parse the AUTO_GEN_FILE to return a list of Bundle object"""
620 621 bundles = []
621 622 for line in content.splitlines():
622 623 if line.startswith(b'PENDING-v1 '):
623 624 bundles.append(GeneratingBundle.from_line(line))
624 625 elif line.startswith(b'DONE-v1 '):
625 626 bundles.append(GeneratedBundle.from_line(line))
626 627 return bundles
627 628
628 629
629 630 def dumps_auto_gen(bundles):
630 631 """serialize a list of Bundle as a AUTO_GEN_FILE content"""
631 632 lines = []
632 633 for b in bundles:
633 634 lines.append(b"%s\n" % b.to_line())
634 635 lines.sort()
635 636 return b"".join(lines)
636 637
637 638
638 639 def read_auto_gen(repo):
639 640 """read the AUTO_GEN_FILE for the <repo> a list of Bundle object"""
640 641 data = repo.vfs.tryread(AUTO_GEN_FILE)
641 642 if not data:
642 643 return []
643 644 return parse_auto_gen(data)
644 645
645 646
646 647 def write_auto_gen(repo, bundles):
647 648 """write a list of Bundle objects into the repo's AUTO_GEN_FILE"""
648 649 assert repo._cb_lock_ref is not None
649 650 data = dumps_auto_gen(bundles)
650 651 with repo.vfs(AUTO_GEN_FILE, mode=b'wb', atomictemp=True) as f:
651 652 f.write(data)
652 653
653 654
654 655 def generate_manifest(bundles):
655 656 """write a list of Bundle objects into the repo's AUTO_GEN_FILE"""
656 657 bundles = list(bundles)
657 658 bundles.sort(key=lambda b: b.bundle_type)
658 659 lines = []
659 660 for b in bundles:
660 661 lines.append(b"%s\n" % b.manifest_line())
661 662 return b"".join(lines)
662 663
663 664
664 665 def update_ondisk_manifest(repo):
665 666 """update the clonebundle manifest with latest url"""
666 667 with repo.clonebundles_lock():
667 668 bundles = read_auto_gen(repo)
668 669
669 670 per_types = {}
670 671 for b in bundles:
671 672 if not (b.ready and b.valid_for(repo)):
672 673 continue
673 674 current = per_types.get(b.bundle_type)
674 675 if current is not None and current.revs >= b.revs:
675 676 continue
676 677 per_types[b.bundle_type] = b
677 678 manifest = generate_manifest(per_types.values())
678 679 with repo.vfs(
679 680 bundlecaches.CB_MANIFEST_FILE, mode=b"wb", atomictemp=True
680 681 ) as f:
681 682 f.write(manifest)
682 683
683 684
684 685 def update_bundle_list(repo, new_bundles=(), del_bundles=()):
685 686 """modify the repo's AUTO_GEN_FILE
686 687
687 688 This method also regenerates the clone bundle manifest when needed"""
688 689 with repo.clonebundles_lock():
689 690 bundles = read_auto_gen(repo)
690 691 if del_bundles:
691 692 bundles = [b for b in bundles if b not in del_bundles]
692 693 new_bundles = [b for b in new_bundles if b not in bundles]
693 694 bundles.extend(new_bundles)
694 695 write_auto_gen(repo, bundles)
695 696 all_changed = []
696 697 all_changed.extend(new_bundles)
697 698 all_changed.extend(del_bundles)
698 699 if any(b.ready for b in all_changed):
699 700 update_ondisk_manifest(repo)
700 701
701 702
702 703 def cleanup_tmp_bundle(repo, target):
703 704 """remove a GeneratingBundle file and entry"""
704 705 assert not target.ready
705 706 with repo.clonebundles_lock():
706 707 repo.vfs.tryunlink(target.filepath)
707 708 update_bundle_list(repo, del_bundles=[target])
708 709
709 710
710 711 def finalize_one_bundle(repo, target):
711 712 """upload a generated bundle and advertise it in the clonebundles.manifest"""
712 713 with repo.clonebundles_lock():
713 714 bundles = read_auto_gen(repo)
714 715 if target in bundles and target.valid_for(repo):
715 716 result = upload_bundle(repo, target)
716 717 update_bundle_list(repo, new_bundles=[result])
717 718 cleanup_tmp_bundle(repo, target)
718 719
719 720
720 721 def find_outdated_bundles(repo, bundles):
721 722 """finds outdated bundles"""
722 723 olds = []
723 724 per_types = {}
724 725 for b in bundles:
725 726 if not b.valid_for(repo):
726 727 olds.append(b)
727 728 continue
728 729 l = per_types.setdefault(b.bundle_type, [])
729 730 l.append(b)
730 731 for key in sorted(per_types):
731 732 all = per_types[key]
732 733 if len(all) > 1:
733 734 all.sort(key=lambda b: b.revs, reverse=True)
734 735 olds.extend(all[1:])
735 736 return olds
736 737
737 738
738 739 def collect_garbage(repo):
739 740 """finds outdated bundles and get them deleted"""
740 741 with repo.clonebundles_lock():
741 742 bundles = read_auto_gen(repo)
742 743 olds = find_outdated_bundles(repo, bundles)
743 744 for o in olds:
744 745 delete_bundle(repo, o)
745 746 update_bundle_list(repo, del_bundles=olds)
746 747
747 748
748 749 def upload_bundle(repo, bundle):
749 750 """upload the result of a GeneratingBundle and return a GeneratedBundle
750 751
751 752 The upload is done using the `clone-bundles.upload-command`
752 753 """
753 754 cmd = repo.ui.config(b'clone-bundles', b'upload-command')
754 755 url = repo.ui.config(b'clone-bundles', b'url-template')
755 756 basename = repo.vfs.basename(bundle.filepath)
756 757 filepath = procutil.shellquote(bundle.filepath)
757 758 variables = {
758 759 b'HGCB_BUNDLE_PATH': filepath,
759 760 b'HGCB_BUNDLE_BASENAME': basename,
760 761 }
761 762 env = procutil.shellenviron(environ=variables)
762 763 ret = repo.ui.system(cmd, environ=env)
763 764 if ret:
764 765 raise error.Abort(b"command returned status %d: %s" % (ret, cmd))
765 766 url = (
766 767 url.decode('utf8')
767 768 .format(basename=basename.decode('utf8'))
768 769 .encode('utf8')
769 770 )
770 771 return bundle.uploaded(url, basename)
771 772
772 773
773 774 def delete_bundle(repo, bundle):
774 775 """delete a bundle from storage"""
775 776 assert bundle.ready
776 777 msg = b'clone-bundles: deleting bundle %s\n'
777 778 msg %= bundle.basename
778 779 if repo.ui.configbool(b'devel', b'debug.clonebundles'):
779 780 repo.ui.write(msg)
780 781 else:
781 782 repo.ui.debug(msg)
782 783
783 784 cmd = repo.ui.config(b'clone-bundles', b'delete-command')
784 785 variables = {
785 786 b'HGCB_BUNDLE_URL': bundle.file_url,
786 787 b'HGCB_BASENAME': bundle.basename,
787 788 }
788 789 env = procutil.shellenviron(environ=variables)
789 790 ret = repo.ui.system(cmd, environ=env)
790 791 if ret:
791 792 raise error.Abort(b"command returned status %d: %s" % (ret, cmd))
792 793
793 794
794 795 def auto_bundle_needed_actions(repo, bundles, op_id):
795 796 """find the list of bundles that need action
796 797
797 798 returns a pair of lists: the RequestedBundle objects that need to be generated
798 799 and uploaded, and the existing bundles that are now outdated and can be deleted.
799 800 create_bundles = []
800 801 delete_bundles = []
801 802 repo = repo.filtered(b"immutable")
802 803 targets = repo.ui.configlist(b'clone-bundles', b'auto-generate.formats')
803 804 ratio = float(
804 805 repo.ui.config(b'clone-bundles', b'trigger.below-bundled-ratio')
805 806 )
806 807 abs_revs = repo.ui.configint(b'clone-bundles', b'trigger.revs')
807 808 revs = len(repo.changelog)
808 809 generic_data = {
809 810 'revs': revs,
810 811 'head_revs': repo.changelog.headrevs(),
811 812 'tip_rev': repo.changelog.tiprev(),
812 813 'tip_node': node.hex(repo.changelog.tip()),
813 814 'op_id': op_id,
814 815 }
815 816 for t in targets:
816 817 if new_bundle_needed(repo, bundles, ratio, abs_revs, t, revs):
817 818 data = generic_data.copy()
818 819 data['bundle_type'] = t
819 820 b = RequestedBundle(**data)
820 821 create_bundles.append(b)
821 822 delete_bundles.extend(find_outdated_bundles(repo, bundles))
822 823 return create_bundles, delete_bundles
823 824
824 825
825 826 def new_bundle_needed(repo, bundles, ratio, abs_revs, bundle_type, revs):
826 827 """consider the current cached content and trigger new bundles if needed"""
827 828 threshold = max((revs * ratio), (revs - abs_revs))
828 829 for b in bundles:
829 830 if not b.valid_for(repo) or b.bundle_type != bundle_type:
830 831 continue
831 832 if b.revs > threshold:
832 833 return False
833 834 return True
834 835
835 836
836 837 def start_one_bundle(repo, bundle):
837 838 """start the generation of a single bundle file
838 839
839 840 the `bundle` argument should be a RequestedBundle object.
840 841
841 842 This data is passed to the `debugmakeclonebundles` "as is".
842 843 """
843 844 data = util.pickle.dumps(bundle)
844 845 cmd = [procutil.hgexecutable(), b'--cwd', repo.path, INTERNAL_CMD]
845 846 env = procutil.shellenviron()
846 847 msg = b'clone-bundles: starting bundle generation: %s\n'
847 848 stdout = None
848 849 stderr = None
849 850 waits = []
850 851 record_wait = None
851 852 if repo.ui.configbool(b'devel', b'debug.clonebundles'):
852 853 stdout = procutil.stdout
853 854 stderr = procutil.stderr
854 855 repo.ui.write(msg % bundle.bundle_type)
855 856 record_wait = waits.append
856 857 else:
857 858 repo.ui.debug(msg % bundle.bundle_type)
858 859 bg = procutil.runbgcommand
859 860 bg(
860 861 cmd,
861 862 env,
862 863 stdin_bytes=data,
863 864 stdout=stdout,
864 865 stderr=stderr,
865 866 record_wait=record_wait,
866 867 )
867 868 for f in waits:
868 869 f()
869 870
870 871
871 872 INTERNAL_CMD = b'debug::internal-make-clone-bundles'
872 873
873 874
874 875 @command(INTERNAL_CMD, [], b'')
875 876 def debugmakeclonebundles(ui, repo):
876 877 """Internal command to auto-generate debug bundles"""
877 878 requested_bundle = util.pickle.load(procutil.stdin)
878 879 procutil.stdin.close()
879 880
880 881 collect_garbage(repo)
881 882
882 883 fname = requested_bundle.suggested_filename
883 884 fpath = repo.vfs.makedirs(b'tmp-bundles')
884 885 fpath = repo.vfs.join(b'tmp-bundles', fname)
885 886 bundle = requested_bundle.generating(fpath)
886 887 update_bundle_list(repo, new_bundles=[bundle])
887 888
888 889 requested_bundle.generate_bundle(repo, fpath)
889 890
890 891 repo.invalidate()
891 892 finalize_one_bundle(repo, bundle)
892 893
893 894
894 895 def make_auto_bundler(source_repo):
895 896 reporef = weakref.ref(source_repo)
896 897
897 898 def autobundle(tr):
898 899 repo = reporef()
899 900 assert repo is not None
900 901 bundles = read_auto_gen(repo)
901 902 new, __ = auto_bundle_needed_actions(repo, bundles, b"%d_txn" % id(tr))
902 903 for data in new:
903 904 start_one_bundle(repo, data)
904 905 return None
905 906
906 907 return autobundle
907 908
908 909
909 910 def reposetup(ui, repo):
910 911 """install the two pieces needed for automatic clonebundle generation
911 912
912 913 - add a "post-close" hook that fires bundling when needed
913 914 - introduce a clone-bundle lock to let multiple processes meddle with the
914 915 state files.
915 916 """
916 917 if not repo.local():
917 918 return
918 919
919 920 class autobundlesrepo(repo.__class__):
920 921 def transaction(self, *args, **kwargs):
921 922 tr = super(autobundlesrepo, self).transaction(*args, **kwargs)
922 923 enabled = repo.ui.configbool(
923 924 b'clone-bundles',
924 925 b'auto-generate.on-change',
925 926 )
926 927 targets = repo.ui.configlist(
927 928 b'clone-bundles', b'auto-generate.formats'
928 929 )
929 930 if enabled and targets:
930 931 tr.addpostclose(CAT_POSTCLOSE, make_auto_bundler(self))
931 932 return tr
932 933
933 934 @localrepo.unfilteredmethod
934 935 def clonebundles_lock(self, wait=True):
935 936 '''Lock the repository file related to clone bundles'''
936 937 if not util.safehasattr(self, '_cb_lock_ref'):
937 938 self._cb_lock_ref = None
938 939 l = self._currentlock(self._cb_lock_ref)
939 940 if l is not None:
940 941 l.lock()
941 942 return l
942 943
943 944 l = self._lock(
944 945 vfs=self.vfs,
945 946 lockname=b"clonebundleslock",
946 947 wait=wait,
947 948 releasefn=None,
948 949 acquirefn=None,
949 950 desc=_(b'repository %s') % self.origroot,
950 951 )
951 952 self._cb_lock_ref = weakref.ref(l)
952 953 return l
953 954
954 955 repo._wlockfreeprefix.add(AUTO_GEN_FILE)
955 956 repo._wlockfreeprefix.add(bundlecaches.CB_MANIFEST_FILE)
956 957 repo.__class__ = autobundlesrepo
957 958
958 959
959 960 @command(
960 961 b'admin::clone-bundles-refresh',
961 962 [
962 963 (
963 964 b'',
964 965 b'background',
965 966 False,
966 967 _(b'start bundle generation in the background'),
967 968 ),
968 969 ],
969 970 b'',
970 971 )
971 972 def cmd_admin_clone_bundles_refresh(
972 973 ui,
973 974 repo: localrepo.localrepository,
974 975 background=False,
975 976 ):
976 977 """generate clone bundles according to the configuration
977 978
978 979 This runs the logic for automatic generation, removing outdated bundles and
979 980 generating new ones if necessary. See :hg:`help -e clone-bundles` for
980 981 details about how to configure this feature.
981 982 """
982 983 debug = repo.ui.configbool(b'devel', b'debug.clonebundles')
983 984 bundles = read_auto_gen(repo)
984 985 op_id = b"%d_acbr" % os.getpid()
985 986 create, delete = auto_bundle_needed_actions(repo, bundles, op_id)
986 987
987 988 # if some bundles are scheduled for creation in the background, they will
988 989 # deal with garbage collection too, so no need to synchronously do it.
989 990 #
990 991 # However if no bundles are scheduled for creation, we need to explicitly do
991 992 # it here.
992 993 if not (background and create):
993 994 # we clean up outdated bundles before generating new ones to keep the
994 995 # last two versions of the bundle around for a while and avoid having to
995 996 # deal with clients that just got served a manifest.
996 997 for o in delete:
997 998 delete_bundle(repo, o)
998 999 update_bundle_list(repo, del_bundles=delete)
999 1000
1000 1001 if create:
1001 1002 fpath = repo.vfs.makedirs(b'tmp-bundles')
1002 1003
1003 1004 if background:
1004 1005 for requested_bundle in create:
1005 1006 start_one_bundle(repo, requested_bundle)
1006 1007 else:
1007 1008 for requested_bundle in create:
1008 1009 if debug:
1009 1010 msg = b'clone-bundles: starting bundle generation: %s\n'
1010 1011 repo.ui.write(msg % requested_bundle.bundle_type)
1011 1012 fname = requested_bundle.suggested_filename
1012 1013 fpath = repo.vfs.join(b'tmp-bundles', fname)
1013 1014 generating_bundle = requested_bundle.generating(fpath)
1014 1015 update_bundle_list(repo, new_bundles=[generating_bundle])
1015 1016 requested_bundle.generate_bundle(repo, fpath)
1016 1017 result = upload_bundle(repo, generating_bundle)
1017 1018 update_bundle_list(repo, new_bundles=[result])
1018 1019 update_ondisk_manifest(repo)
1019 1020 cleanup_tmp_bundle(repo, generating_bundle)
1020 1021
1021 1022
1022 1023 @command(b'admin::clone-bundles-clear', [], b'')
1023 1024 def cmd_admin_clone_bundles_clear(ui, repo: localrepo.localrepository):
1024 1025 """remove existing clone bundle caches
1025 1026
1026 1027 See `hg help admin::clone-bundles-refresh` for details on how to regenerate
1027 1028 them.
1028 1029
1029 1030 This command will only affect bundles currently available, it will not
1030 1031 affect bundles being asynchronously generated.
1031 1032 """
1032 1033 bundles = read_auto_gen(repo)
1033 1034 delete = [b for b in bundles if b.ready]
1034 1035 for o in delete:
1035 1036 delete_bundle(repo, o)
1036 1037 update_bundle_list(repo, del_bundles=delete)
mercurial/wireprotov1peer.py
@@ -1,661 +1,664 @@
1 1 # wireprotov1peer.py - Client-side functionality for wire protocol version 1.
2 2 #
3 3 # Copyright 2005-2010 Olivia Mackall <olivia@selenic.com>
4 4 #
5 5 # This software may be used and distributed according to the terms of the
6 6 # GNU General Public License version 2 or any later version.
7 7
8 8
9 9 import sys
10 10 import weakref
11 11
12 12 from concurrent import futures
13 13 from .i18n import _
14 14 from .node import bin
15 15 from .pycompat import (
16 16 getattr,
17 17 setattr,
18 18 )
19 19 from . import (
20 20 bundle2,
21 21 changegroup as changegroupmod,
22 22 encoding,
23 23 error,
24 24 pushkey as pushkeymod,
25 25 pycompat,
26 26 util,
27 27 wireprototypes,
28 28 )
29 29 from .interfaces import (
30 30 repository,
31 31 util as interfaceutil,
32 32 )
33 33 from .utils import hashutil
34 34
35 35 urlreq = util.urlreq
36 36
37 37
38 38 def batchable(f):
39 39 """annotation for batchable methods
40 40
41 41 Such methods must implement a coroutine as follows:
42 42
43 43 @batchable
44 44 def sample(self, one, two=None):
45 45 # Build list of encoded arguments suitable for your wire protocol:
46 46 encoded_args = [('one', encode(one),), ('two', encode(two),)]
47 47 # Return it, along with a function that will receive the result
48 48 # from the batched request.
49 49 return encoded_args, decode
50 50
51 51 The decorator returns a function which wraps this coroutine as a plain
52 52 method, but adds the original method as an attribute called "batchable",
53 53 which is used by remotebatch to split the call into separate encoding and
54 54 decoding phases.
55 55 """
56 56
57 57 def plain(*args, **opts):
58 58 encoded_args_or_res, decode = f(*args, **opts)
59 59 if not decode:
60 60 return encoded_args_or_res # a local result in this case
61 61 self = args[0]
62 62 cmd = pycompat.bytesurl(f.__name__) # ensure cmd is ascii bytestr
63 63 encoded_res = self._submitone(cmd, encoded_args_or_res)
64 64 return decode(encoded_res)
65 65
66 66 setattr(plain, 'batchable', f)
67 67 setattr(plain, '__name__', f.__name__)
68 68 return plain
69 69
70 70
71 71 def encodebatchcmds(req):
72 72 """Return a ``cmds`` argument value for the ``batch`` command."""
73 73 escapearg = wireprototypes.escapebatcharg
74 74
75 75 cmds = []
76 76 for op, argsdict in req:
77 77 # Old servers didn't properly unescape argument names. So prevent
78 78 # the sending of argument names that may not be decoded properly by
79 79 # servers.
80 80 assert all(escapearg(k) == k for k in argsdict)
81 81
82 82 args = b','.join(
83 83 b'%s=%s' % (escapearg(k), escapearg(v)) for k, v in argsdict.items()
84 84 )
85 85 cmds.append(b'%s %s' % (op, args))
86 86
87 87 return b';'.join(cmds)
88 88
89 89
90 90 class unsentfuture(futures.Future):
91 91 """A Future variation to represent an unsent command.
92 92
93 93 Because we buffer commands and don't submit them immediately, calling
94 94 ``result()`` on an unsent future could deadlock. Futures for buffered
95 95 commands are represented by this type, which wraps ``result()`` to
96 96 call ``sendcommands()``.
97 97 """
98 98
99 99 def result(self, timeout=None):
100 100 if self.done():
101 101 return futures.Future.result(self, timeout)
102 102
103 103 self._peerexecutor.sendcommands()
104 104
105 105 # This looks like it will infinitely recurse. However,
106 106 # sendcommands() should modify __class__. This call serves as a check
107 107 # on that.
108 108 return self.result(timeout)
109 109
110 110
111 111 @interfaceutil.implementer(repository.ipeercommandexecutor)
112 112 class peerexecutor:
113 113 def __init__(self, peer):
114 114 self._peer = peer
115 115 self._sent = False
116 116 self._closed = False
117 117 self._calls = []
118 118 self._futures = weakref.WeakSet()
119 119 self._responseexecutor = None
120 120 self._responsef = None
121 121
122 122 def __enter__(self):
123 123 return self
124 124
125 125 def __exit__(self, exctype, excvalee, exctb):
126 126 self.close()
127 127
128 128 def callcommand(self, command, args):
129 129 if self._sent:
130 130 raise error.ProgrammingError(
131 131 b'callcommand() cannot be used after commands are sent'
132 132 )
133 133
134 134 if self._closed:
135 135 raise error.ProgrammingError(
136 136 b'callcommand() cannot be used after close()'
137 137 )
138 138
139 139 # Commands are dispatched through methods on the peer.
140 140 fn = getattr(self._peer, pycompat.sysstr(command), None)
141 141
142 142 if not fn:
143 143 raise error.ProgrammingError(
144 144 b'cannot call command %s: method of same name not available '
145 145 b'on peer' % command
146 146 )
147 147
148 148 # Commands are either batchable or they aren't. If a command
149 149 # isn't batchable, we send it immediately because the executor
150 150 # can no longer accept new commands after a non-batchable command.
151 151 # If a command is batchable, we queue it for later. But we have
152 152 # to account for the case of a non-batchable command arriving after
153 153 # a batchable one and refuse to service it.
154 154
155 155 def addcall():
156 156 f = futures.Future()
157 157 self._futures.add(f)
158 158 self._calls.append((command, args, fn, f))
159 159 return f
160 160
161 161 if getattr(fn, 'batchable', False):
162 162 f = addcall()
163 163
164 164 # But since we don't issue it immediately, we wrap its result()
165 165 # to trigger sending so we avoid deadlocks.
166 166 f.__class__ = unsentfuture
167 167 f._peerexecutor = self
168 168 else:
169 169 if self._calls:
170 170 raise error.ProgrammingError(
171 171 b'%s is not batchable and cannot be called on a command '
172 172 b'executor along with other commands' % command
173 173 )
174 174
175 175 f = addcall()
176 176
177 177 # Non-batchable commands can never coexist with another command
178 178 # in this executor. So send the command immediately.
179 179 self.sendcommands()
180 180
181 181 return f
182 182
183 183 def sendcommands(self):
184 184 if self._sent:
185 185 return
186 186
187 187 if not self._calls:
188 188 return
189 189
190 190 self._sent = True
191 191
192 192 # Unhack any future types so caller sees a clean type and to break
193 193 # cycle between us and futures.
194 194 for f in self._futures:
195 195 if isinstance(f, unsentfuture):
196 196 f.__class__ = futures.Future
197 197 f._peerexecutor = None
198 198
199 199 calls = self._calls
200 200 # Mainly to destroy references to futures.
201 201 self._calls = None
202 202
203 203 # Simple case of a single command. We call it synchronously.
204 204 if len(calls) == 1:
205 205 command, args, fn, f = calls[0]
206 206
207 207 # Future was cancelled. Ignore it.
208 208 if not f.set_running_or_notify_cancel():
209 209 return
210 210
211 211 try:
212 212 result = fn(**pycompat.strkwargs(args))
213 213 except Exception:
214 214 pycompat.future_set_exception_info(f, sys.exc_info()[1:])
215 215 else:
216 216 f.set_result(result)
217 217
218 218 return
219 219
220 220 # Batch commands are a bit harder. First, we have to deal with the
221 221 # @batchable coroutine. That's a bit annoying. Furthermore, we also
222 222 # need to preserve streaming. i.e. it should be possible for the
223 223 # futures to resolve as data is coming in off the wire without having
224 224 # to wait for the final byte of the final response. We do this by
225 225 # spinning up a thread to read the responses.
226 226
227 227 requests = []
228 228 states = []
229 229
230 230 for command, args, fn, f in calls:
231 231 # Future was cancelled. Ignore it.
232 232 if not f.set_running_or_notify_cancel():
233 233 continue
234 234
235 235 try:
236 236 encoded_args_or_res, decode = fn.batchable(
237 237 fn.__self__, **pycompat.strkwargs(args)
238 238 )
239 239 except Exception:
240 240 pycompat.future_set_exception_info(f, sys.exc_info()[1:])
241 241 return
242 242
243 243 if not decode:
244 244 f.set_result(encoded_args_or_res)
245 245 else:
246 246 requests.append((command, encoded_args_or_res))
247 247 states.append((command, f, batchable, decode))
248 248
249 249 if not requests:
250 250 return
251 251
252 252 # This will emit responses in order they were executed.
253 253 wireresults = self._peer._submitbatch(requests)
254 254
255 255 # The use of a thread pool executor here is a bit weird for something
256 256 # that only spins up a single thread. However, thread management is
257 257 # hard and it is easy to encounter race conditions, deadlocks, etc.
258 258 # concurrent.futures already solves these problems and its thread pool
259 259 # executor has minimal overhead. So we use it.
260 260 self._responseexecutor = futures.ThreadPoolExecutor(1)
261 261 self._responsef = self._responseexecutor.submit(
262 262 self._readbatchresponse, states, wireresults
263 263 )
264 264
265 265 def close(self):
266 266 self.sendcommands()
267 267
268 268 if self._closed:
269 269 return
270 270
271 271 self._closed = True
272 272
273 273 if not self._responsef:
274 274 return
275 275
276 276 # We need to wait on our in-flight response and then shut down the
277 277 # executor once we have a result.
278 278 try:
279 279 self._responsef.result()
280 280 finally:
281 281 self._responseexecutor.shutdown(wait=True)
282 282 self._responsef = None
283 283 self._responseexecutor = None
284 284
285 285 # If any of our futures are still in progress, mark them as
286 286 # errored. Otherwise a result() could wait indefinitely.
287 287 for f in self._futures:
288 288 if not f.done():
289 289 f.set_exception(
290 290 error.ResponseError(
291 291 _(b'unfulfilled batch command response'), None
292 292 )
293 293 )
294 294
295 295 self._futures = None
296 296
297 297 def _readbatchresponse(self, states, wireresults):
298 298 # Executes in a thread to read data off the wire.
299 299
300 300 for command, f, batchable, decode in states:
301 301 # Grab raw result off the wire and teach the internal future
302 302 # about it.
303 303 try:
304 304 remoteresult = next(wireresults)
305 305 except StopIteration:
306 306 # This can happen in particular because next(batchable)
307 307 # in the previous iteration can call peer._abort, which
308 308 # may close the peer.
309 309 f.set_exception(
310 310 error.ResponseError(
311 311 _(b'unfulfilled batch command response'), None
312 312 )
313 313 )
314 314 else:
315 315 try:
316 316 result = decode(remoteresult)
317 317 except Exception:
318 318 pycompat.future_set_exception_info(f, sys.exc_info()[1:])
319 319 else:
320 320 f.set_result(result)
321 321
322 322
323 323 @interfaceutil.implementer(
324 324 repository.ipeercommands, repository.ipeerlegacycommands
325 325 )
326 326 class wirepeer(repository.peer):
327 327 """Client-side interface for communicating with a peer repository.
328 328
329 329 Methods commonly call wire protocol commands of the same name.
330 330
331 331 See also httppeer.py and sshpeer.py for protocol-specific
332 332 implementations of this interface.
333 333 """
334 334
335 335 def commandexecutor(self):
336 336 return peerexecutor(self)
337 337
338 338 # Begin of ipeercommands interface.
339 339
340 340 def clonebundles(self):
341 self.requirecap(b'clonebundles', _(b'clone bundles'))
342 return self._call(b'clonebundles')
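# Prefer the new 'clonebundles_manifest' command when the server advertises
# it; otherwise fall back to the legacy 'clonebundles' command.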
341 if self.capable(b'clonebundles_manifest'):
342 return self._call(b'clonebundles_manifest')
343 else:
344 self.requirecap(b'clonebundles', _(b'clone bundles'))
345 return self._call(b'clonebundles')
343 346
344 347 def _finish_inline_clone_bundle(self, stream):
345 348 pass # allow override for httppeer
346 349
347 350 def get_cached_bundle_inline(self, path):
348 351 stream = self._callstream(b"get_cached_bundle_inline", path=path)
349 352 length = util.uvarintdecodestream(stream)
350 353
351 354 # SSH streams will block if reading more than length
352 355 for chunk in util.filechunkiter(stream, limit=length):
353 356 yield chunk
354 357
355 358 self._finish_inline_clone_bundle(stream)
356 359
357 360 @batchable
358 361 def lookup(self, key):
359 362 self.requirecap(b'lookup', _(b'look up remote revision'))
360 363
361 364 def decode(d):
362 365 success, data = d[:-1].split(b" ", 1)
363 366 if int(success):
364 367 return bin(data)
365 368 else:
366 369 self._abort(error.RepoError(data))
367 370
368 371 return {b'key': encoding.fromlocal(key)}, decode
369 372
370 373 @batchable
371 374 def heads(self):
372 375 def decode(d):
373 376 try:
374 377 return wireprototypes.decodelist(d[:-1])
375 378 except ValueError:
376 379 self._abort(error.ResponseError(_(b"unexpected response:"), d))
377 380
378 381 return {}, decode
379 382
380 383 @batchable
381 384 def known(self, nodes):
382 385 def decode(d):
383 386 try:
384 387 return [bool(int(b)) for b in pycompat.iterbytestr(d)]
385 388 except ValueError:
386 389 self._abort(error.ResponseError(_(b"unexpected response:"), d))
387 390
388 391 return {b'nodes': wireprototypes.encodelist(nodes)}, decode
389 392
390 393 @batchable
391 394 def branchmap(self):
392 395 def decode(d):
393 396 try:
394 397 branchmap = {}
395 398 for branchpart in d.splitlines():
396 399 branchname, branchheads = branchpart.split(b' ', 1)
397 400 branchname = encoding.tolocal(urlreq.unquote(branchname))
398 401 branchheads = wireprototypes.decodelist(branchheads)
399 402 branchmap[branchname] = branchheads
400 403 return branchmap
401 404 except TypeError:
402 405 self._abort(error.ResponseError(_(b"unexpected response:"), d))
403 406
404 407 return {}, decode
405 408
406 409 @batchable
407 410 def listkeys(self, namespace):
408 411 if not self.capable(b'pushkey'):
409 412 return {}, None
410 413 self.ui.debug(b'preparing listkeys for "%s"\n' % namespace)
411 414
412 415 def decode(d):
413 416 self.ui.debug(
414 417 b'received listkey for "%s": %i bytes\n' % (namespace, len(d))
415 418 )
416 419 return pushkeymod.decodekeys(d)
417 420
418 421 return {b'namespace': encoding.fromlocal(namespace)}, decode
419 422
420 423 @batchable
421 424 def pushkey(self, namespace, key, old, new):
422 425 if not self.capable(b'pushkey'):
423 426 return False, None
424 427 self.ui.debug(b'preparing pushkey for "%s:%s"\n' % (namespace, key))
425 428
426 429 def decode(d):
427 430 d, output = d.split(b'\n', 1)
428 431 try:
429 432 d = bool(int(d))
430 433 except ValueError:
431 434 raise error.ResponseError(
432 435 _(b'push failed (unexpected response):'), d
433 436 )
434 437 for l in output.splitlines(True):
435 438 self.ui.status(_(b'remote: '), l)
436 439 return d
437 440
438 441 return {
439 442 b'namespace': encoding.fromlocal(namespace),
440 443 b'key': encoding.fromlocal(key),
441 444 b'old': encoding.fromlocal(old),
442 445 b'new': encoding.fromlocal(new),
443 446 }, decode
444 447
445 448 def stream_out(self):
446 449 return self._callstream(b'stream_out')
447 450
448 451 def getbundle(self, source, **kwargs):
449 452 kwargs = pycompat.byteskwargs(kwargs)
450 453 self.requirecap(b'getbundle', _(b'look up remote changes'))
451 454 opts = {}
452 455 bundlecaps = kwargs.get(b'bundlecaps') or set()
453 456 for key, value in kwargs.items():
454 457 if value is None:
455 458 continue
456 459 keytype = wireprototypes.GETBUNDLE_ARGUMENTS.get(key)
457 460 if keytype is None:
458 461 raise error.ProgrammingError(
459 462 b'Unexpectedly None keytype for key %s' % key
460 463 )
461 464 elif keytype == b'nodes':
462 465 value = wireprototypes.encodelist(value)
463 466 elif keytype == b'csv':
464 467 value = b','.join(value)
465 468 elif keytype == b'scsv':
466 469 value = b','.join(sorted(value))
467 470 elif keytype == b'boolean':
468 471 value = b'%i' % bool(value)
469 472 elif keytype != b'plain':
470 473 raise KeyError(b'unknown getbundle option type %s' % keytype)
471 474 opts[key] = value
472 475 f = self._callcompressable(b"getbundle", **pycompat.strkwargs(opts))
473 476 if any((cap.startswith(b'HG2') for cap in bundlecaps)):
474 477 return bundle2.getunbundler(self.ui, f)
475 478 else:
476 479 return changegroupmod.cg1unpacker(f, b'UN')
477 480
478 481 def unbundle(self, bundle, heads, url):
479 482 """Send cg (a readable file-like object representing the
480 483 changegroup to push, typically a chunkbuffer object) to the
481 484 remote server as a bundle.
482 485
483 486 When pushing a bundle10 stream, return an integer indicating the
484 487 result of the push (see changegroup.apply()).
485 488
486 489 When pushing a bundle20 stream, return a bundle20 stream.
487 490
488 491 `url` is the url the client thinks it's pushing to, which is
489 492 visible to hooks.
490 493 """
491 494
492 495 if heads != [b'force'] and self.capable(b'unbundlehash'):
493 496 heads = wireprototypes.encodelist(
494 497 [b'hashed', hashutil.sha1(b''.join(sorted(heads))).digest()]
495 498 )
496 499 else:
497 500 heads = wireprototypes.encodelist(heads)
498 501
499 502 if util.safehasattr(bundle, 'deltaheader'):
500 503 # this is a bundle10, do the old style call sequence
501 504 ret, output = self._callpush(b"unbundle", bundle, heads=heads)
502 505 if ret == b"":
503 506 raise error.ResponseError(_(b'push failed:'), output)
504 507 try:
505 508 ret = int(ret)
506 509 except ValueError:
507 510 raise error.ResponseError(
508 511 _(b'push failed (unexpected response):'), ret
509 512 )
510 513
511 514 for l in output.splitlines(True):
512 515 self.ui.status(_(b'remote: '), l)
513 516 else:
514 517 # bundle2 push. Send a stream, fetch a stream.
515 518 stream = self._calltwowaystream(b'unbundle', bundle, heads=heads)
516 519 ret = bundle2.getunbundler(self.ui, stream)
517 520 return ret
518 521
519 522 # End of ipeercommands interface.
520 523
521 524 # Begin of ipeerlegacycommands interface.
522 525
523 526 def branches(self, nodes):
524 527 n = wireprototypes.encodelist(nodes)
525 528 d = self._call(b"branches", nodes=n)
526 529 try:
527 530 br = [tuple(wireprototypes.decodelist(b)) for b in d.splitlines()]
528 531 return br
529 532 except ValueError:
530 533 self._abort(error.ResponseError(_(b"unexpected response:"), d))
531 534
532 535 def between(self, pairs):
533 536 batch = 8 # avoid giant requests
534 537 r = []
535 538 for i in range(0, len(pairs), batch):
536 539 n = b" ".join(
537 540 [
538 541 wireprototypes.encodelist(p, b'-')
539 542 for p in pairs[i : i + batch]
540 543 ]
541 544 )
542 545 d = self._call(b"between", pairs=n)
543 546 try:
544 547 r.extend(
545 548 l and wireprototypes.decodelist(l) or []
546 549 for l in d.splitlines()
547 550 )
548 551 except ValueError:
549 552 self._abort(error.ResponseError(_(b"unexpected response:"), d))
550 553 return r
551 554
552 555 def changegroup(self, nodes, source):
553 556 n = wireprototypes.encodelist(nodes)
554 557 f = self._callcompressable(b"changegroup", roots=n)
555 558 return changegroupmod.cg1unpacker(f, b'UN')
556 559
557 560 def changegroupsubset(self, bases, heads, source):
558 561 self.requirecap(b'changegroupsubset', _(b'look up remote changes'))
559 562 bases = wireprototypes.encodelist(bases)
560 563 heads = wireprototypes.encodelist(heads)
561 564 f = self._callcompressable(
562 565 b"changegroupsubset", bases=bases, heads=heads
563 566 )
564 567 return changegroupmod.cg1unpacker(f, b'UN')
565 568
566 569 # End of ipeerlegacycommands interface.
567 570
568 571 def _submitbatch(self, req):
569 572 """run batch request <req> on the server
570 573
571 574 Returns an iterator of the raw responses from the server.
572 575 """
573 576 ui = self.ui
574 577 if ui.debugflag and ui.configbool(b'devel', b'debug.peer-request'):
575 578 ui.debug(b'devel-peer-request: batched-content\n')
576 579 for op, args in req:
577 580 msg = b'devel-peer-request: - %s (%d arguments)\n'
578 581 ui.debug(msg % (op, len(args)))
579 582
580 583 unescapearg = wireprototypes.unescapebatcharg
581 584
582 585 rsp = self._callstream(b"batch", cmds=encodebatchcmds(req))
583 586 chunk = rsp.read(1024)
584 587 work = [chunk]
585 588 while chunk:
586 589 while b';' not in chunk and chunk:
587 590 chunk = rsp.read(1024)
588 591 work.append(chunk)
589 592 merged = b''.join(work)
590 593 while b';' in merged:
591 594 one, merged = merged.split(b';', 1)
592 595 yield unescapearg(one)
593 596 chunk = rsp.read(1024)
594 597 work = [merged, chunk]
595 598 yield unescapearg(b''.join(work))
596 599
597 600 def _submitone(self, op, args):
598 601 return self._call(op, **pycompat.strkwargs(args))
599 602
600 603 def debugwireargs(self, one, two, three=None, four=None, five=None):
601 604 # don't pass optional arguments left at their default value
602 605 opts = {}
603 606 if three is not None:
604 607 opts['three'] = three
605 608 if four is not None:
606 609 opts['four'] = four
607 610 return self._call(b'debugwireargs', one=one, two=two, **opts)
608 611
609 612 def _call(self, cmd, **args):
610 613 """execute <cmd> on the server
611 614
612 615 The command is expected to return a simple string.
613 616
614 617 returns the server reply as a string."""
615 618 raise NotImplementedError()
616 619
617 620 def _callstream(self, cmd, **args):
618 621 """execute <cmd> on the server
619 622
620 623 The command is expected to return a stream. Note that if the
621 624 command doesn't return a stream, _callstream behaves
622 625 differently for ssh and http peers.
623 626
624 627 returns the server reply as a file like object.
625 628 """
626 629 raise NotImplementedError()
627 630
628 631 def _callcompressable(self, cmd, **args):
629 632 """execute <cmd> on the server
630 633
631 634 The command is expected to return a stream.
632 635
633 636 The stream may have been compressed in some implementations. This
634 637 function takes care of the decompression. This is the only difference
635 638 from _callstream.
636 639
637 640 returns the server reply as a file like object.
638 641 """
639 642 raise NotImplementedError()
640 643
641 644 def _callpush(self, cmd, fp, **args):
642 645 """execute <cmd> on the server
643 646
644 647 The command is expected to be related to a push. Push has a special
645 648 return method.
646 649
647 650 returns the server reply as a (ret, output) tuple. ret is either
648 651 empty (error) or a stringified int.
649 652 """
650 653 raise NotImplementedError()
651 654
652 655 def _calltwowaystream(self, cmd, fp, **args):
653 656 """execute <cmd> on the server
654 657
655 658 The command will send a stream to the server and get a stream in reply.
656 659 """
657 660 raise NotImplementedError()
658 661
659 662 def _abort(self, exception):
660 663 """cleanly abort the wire protocol connection and raise the exception"""
661 664 raise NotImplementedError()
@@ -1,795 +1,804 b''
1 1 # wireprotov1server.py - Wire protocol version 1 server functionality
2 2 #
3 3 # Copyright 2005-2010 Olivia Mackall <olivia@selenic.com>
4 4 #
5 5 # This software may be used and distributed according to the terms of the
6 6 # GNU General Public License version 2 or any later version.
7 7
8 8
9 9 import binascii
10 10 import os
11 11
12 12 from .i18n import _
13 13 from .node import hex
14 14 from .pycompat import getattr
15 15
16 16 from . import (
17 17 bundle2,
18 18 bundlecaches,
19 19 changegroup as changegroupmod,
20 20 discovery,
21 21 encoding,
22 22 error,
23 23 exchange,
24 24 hook,
25 25 pushkey as pushkeymod,
26 26 pycompat,
27 27 repoview,
28 28 requirements as requirementsmod,
29 29 streamclone,
30 30 util,
31 31 wireprototypes,
32 32 )
33 33
34 34 from .utils import (
35 35 procutil,
36 36 stringutil,
37 37 )
38 38
39 39 urlerr = util.urlerr
40 40 urlreq = util.urlreq
41 41
42 42 bundle2requiredmain = _(b'incompatible Mercurial client; bundle2 required')
43 43 bundle2requiredhint = _(
44 44 b'see https://www.mercurial-scm.org/wiki/IncompatibleClient'
45 45 )
46 46 bundle2required = b'%s\n(%s)\n' % (bundle2requiredmain, bundle2requiredhint)
47 47
48 48
49 49 def clientcompressionsupport(proto):
50 50 """Returns a list of compression methods supported by the client.
51 51
52 52 Returns a list of the compression methods supported by the client
53 53 according to the protocol capabilities. If no such capability has
54 54 been announced, fall back to the default of zlib and uncompressed.
55 55 """
56 56 for cap in proto.getprotocaps():
57 57 if cap.startswith(b'comp='):
58 58 return cap[5:].split(b',')
59 59 return [b'zlib', b'none']
60 60
61 61
62 62 # wire protocol command can either return a string or one of these classes.
63 63
64 64
65 65 def getdispatchrepo(repo, proto, command, accesshidden=False):
66 66 """Obtain the repo used for processing wire protocol commands.
67 67
68 68 The intent of this function is to serve as a monkeypatch point for
69 69 extensions that need commands to operate on different repo views under
70 70 specialized circumstances.
71 71 """
72 72 viewconfig = repo.ui.config(b'server', b'view')
73 73
74 74 # Only works if the filter actually supports being upgraded to show hidden
75 75 # changesets.
76 76 if (
77 77 accesshidden
78 78 and viewconfig is not None
79 79 and viewconfig + b'.hidden' in repoview.filtertable
80 80 ):
81 81 viewconfig += b'.hidden'
82 82
83 83 return repo.filtered(viewconfig)
84 84
85 85
86 86 def dispatch(repo, proto, command, accesshidden=False):
87 87 repo = getdispatchrepo(repo, proto, command, accesshidden=accesshidden)
88 88
89 89 func, spec = commands[command]
90 90 args = proto.getargs(spec)
91 91
92 92 return func(repo, proto, *args)
93 93
94 94
95 95 def options(cmd, keys, others):
96 96 opts = {}
97 97 for k in keys:
98 98 if k in others:
99 99 opts[k] = others[k]
100 100 del others[k]
101 101 if others:
102 102 procutil.stderr.write(
103 103 b"warning: %s ignored unexpected arguments %s\n"
104 104 % (cmd, b",".join(others))
105 105 )
106 106 return opts
107 107
108 108
109 109 def bundle1allowed(repo, action):
110 110 """Whether a bundle1 operation is allowed from the server.
111 111
112 112 Priority is:
113 113
114 114 1. server.bundle1gd.<action> (if generaldelta active)
115 115 2. server.bundle1.<action>
116 116 3. server.bundle1gd (if generaldelta active)
117 117 4. server.bundle1
118 118 """
119 119 ui = repo.ui
120 120 gd = requirementsmod.GENERALDELTA_REQUIREMENT in repo.requirements
121 121
122 122 if gd:
123 123 v = ui.configbool(b'server', b'bundle1gd.%s' % action)
124 124 if v is not None:
125 125 return v
126 126
127 127 v = ui.configbool(b'server', b'bundle1.%s' % action)
128 128 if v is not None:
129 129 return v
130 130
131 131 if gd:
132 132 v = ui.configbool(b'server', b'bundle1gd')
133 133 if v is not None:
134 134 return v
135 135
136 136 return ui.configbool(b'server', b'bundle1')
137 137
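# Illustrative hgrc settings (example values, not defaults) and how the
# priority above resolves them on a generaldelta repository:
#
#   [server]
#   bundle1gd.push = false    # bundle1allowed(repo, b'push') -> False
#   bundle1.pull = true       # bundle1allowed(repo, b'pull') -> True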
138 138
139 139 commands = wireprototypes.commanddict()
140 140
141 141
142 142 def wireprotocommand(name, args=None, permission=b'push'):
143 143 """Decorator to declare a wire protocol command.
144 144
145 145 ``name`` is the name of the wire protocol command being provided.
146 146
147 147 ``args`` defines the named arguments accepted by the command. It is
148 148 a space-delimited list of argument names. ``*`` denotes a special value
149 149 that says to accept all named arguments.
150 150
151 151 ``permission`` defines the permission type needed to run this command.
152 152 Can be ``push`` or ``pull``. These roughly map to read-write and read-only,
153 153 respectively. Default is to assume command requires ``push`` permissions
154 154 because otherwise commands not declaring their permissions could modify
155 155 a repository that is supposed to be read-only.
156 156 """
157 157 transports = {
158 158 k for k, v in wireprototypes.TRANSPORTS.items() if v[b'version'] == 1
159 159 }
160 160
161 161 if permission not in (b'push', b'pull'):
162 162 raise error.ProgrammingError(
163 163 b'invalid wire protocol permission; '
164 164 b'got %s; expected "push" or "pull"' % permission
165 165 )
166 166
167 167 if args is None:
168 168 args = b''
169 169
170 170 if not isinstance(args, bytes):
171 171 raise error.ProgrammingError(
172 172 b'arguments for version 1 commands must be declared as bytes'
173 173 )
174 174
175 175 def register(func):
176 176 if name in commands:
177 177 raise error.ProgrammingError(
178 178 b'%s command already registered for version 1' % name
179 179 )
180 180 commands[name] = wireprototypes.commandentry(
181 181 func, args=args, transports=transports, permission=permission
182 182 )
183 183
184 184 return func
185 185
186 186 return register
187 187
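# Illustrative sketch (hypothetical command name, not part of this change):
# a minimal read-only command registered with the decorator above. It is
# called with the repo, the protocol handler and the declared arguments, and
# returns bytes wrapped in a response object.
@wireprotocommand(b'echokey', b'key', permission=b'pull')
def echokey(repo, proto, key):
    return wireprototypes.bytesresponse(b'key=' + key)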
188 188
189 189 # TODO define a more appropriate permissions type to use for this.
190 190 @wireprotocommand(b'batch', b'cmds *', permission=b'pull')
191 191 def batch(repo, proto, cmds, others):
192 192 unescapearg = wireprototypes.unescapebatcharg
193 193 res = []
194 194 for pair in cmds.split(b';'):
195 195 op, args = pair.split(b' ', 1)
196 196 vals = {}
197 197 for a in args.split(b','):
198 198 if a:
199 199 n, v = a.split(b'=')
200 200 vals[unescapearg(n)] = unescapearg(v)
201 201 func, spec = commands[op]
202 202
203 203 # Validate that client has permissions to perform this command.
204 204 perm = commands[op].permission
205 205 assert perm in (b'push', b'pull')
206 206 proto.checkperm(perm)
207 207
208 208 if spec:
209 209 keys = spec.split()
210 210 data = {}
211 211 for k in keys:
212 212 if k == b'*':
213 213 star = {}
214 214 for key in vals.keys():
215 215 if key not in keys:
216 216 star[key] = vals[key]
217 217 data[b'*'] = star
218 218 else:
219 219 data[k] = vals[k]
220 220 result = func(repo, proto, *[data[k] for k in keys])
221 221 else:
222 222 result = func(repo, proto)
223 223 if isinstance(result, wireprototypes.ooberror):
224 224 return result
225 225
226 226 # For now, all batchable commands must return bytesresponse or
227 227 # raw bytes (for backwards compatibility).
228 228 assert isinstance(result, (wireprototypes.bytesresponse, bytes))
229 229 if isinstance(result, wireprototypes.bytesresponse):
230 230 result = result.data
231 231 res.append(wireprototypes.escapebatcharg(result))
232 232
233 233 return wireprototypes.bytesresponse(b';'.join(res))
234 234
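# Illustrative batched payload (matching the client encoding seen in the
# tests): cmds=b'heads ;known nodes=' is split above into ('heads', {}) and
# ('known', {b'nodes': b''}), and the individual results are escaped and
# joined with ';' in the response.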
235 235
236 236 @wireprotocommand(b'between', b'pairs', permission=b'pull')
237 237 def between(repo, proto, pairs):
238 238 pairs = [wireprototypes.decodelist(p, b'-') for p in pairs.split(b" ")]
239 239 r = []
240 240 for b in repo.between(pairs):
241 241 r.append(wireprototypes.encodelist(b) + b"\n")
242 242
243 243 return wireprototypes.bytesresponse(b''.join(r))
244 244
245 245
246 246 @wireprotocommand(b'branchmap', permission=b'pull')
247 247 def branchmap(repo, proto):
248 248 branchmap = repo.branchmap()
249 249 heads = []
250 250 for branch, nodes in branchmap.items():
251 251 branchname = urlreq.quote(encoding.fromlocal(branch))
252 252 branchnodes = wireprototypes.encodelist(nodes)
253 253 heads.append(b'%s %s' % (branchname, branchnodes))
254 254
255 255 return wireprototypes.bytesresponse(b'\n'.join(heads))
256 256
257 257
258 258 @wireprotocommand(b'branches', b'nodes', permission=b'pull')
259 259 def branches(repo, proto, nodes):
260 260 nodes = wireprototypes.decodelist(nodes)
261 261 r = []
262 262 for b in repo.branches(nodes):
263 263 r.append(wireprototypes.encodelist(b) + b"\n")
264 264
265 265 return wireprototypes.bytesresponse(b''.join(r))
266 266
267 267
268 268 @wireprotocommand(b'get_cached_bundle_inline', b'path', permission=b'pull')
269 269 def get_cached_bundle_inline(repo, proto, path):
270 270 """
271 271 Server command to send a clonebundle to the client
272 272 """
273 273 if hook.hashook(repo.ui, b'pretransmit-inline-clone-bundle'):
274 274 hook.hook(
275 275 repo.ui,
276 276 repo,
277 277 b'pretransmit-inline-clone-bundle',
278 278 throw=True,
279 279 clonebundlepath=path,
280 280 )
281 281
282 282 bundle_dir = repo.vfs.join(bundlecaches.BUNDLE_CACHE_DIR)
283 283 clonebundlepath = repo.vfs.join(bundle_dir, path)
284 284 if not repo.vfs.exists(clonebundlepath):
285 285 raise error.Abort(b'clonebundle %s does not exist' % path)
286 286
287 287 clonebundles_dir = os.path.realpath(bundle_dir)
288 288 if not os.path.realpath(clonebundlepath).startswith(clonebundles_dir):
289 289 raise error.Abort(b'clonebundle %s is using an illegal path' % path)
290 290
291 291 def generator(vfs, bundle_path):
292 292 with vfs(bundle_path) as f:
293 293 length = os.fstat(f.fileno())[6]
294 294 yield util.uvarintencode(length)
295 295 for chunk in util.filechunkiter(f):
296 296 yield chunk
297 297
298 298 stream = generator(repo.vfs, clonebundlepath)
299 299 return wireprototypes.streamres(gen=stream, prefer_uncompressed=True)
300 300
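# Illustrative framing of the stream generated above (hypothetical size):
#   util.uvarintencode(442) + <the 442 raw bytes of the bundle file>
# i.e. a uvarint length prefix followed by the file content, so the client
# knows exactly how many inline-bundle bytes to read from the connection.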
301 301
302 302 @wireprotocommand(b'clonebundles', b'', permission=b'pull')
303 303 def clonebundles(repo, proto):
304 """A legacy version of clonebundles_manifest
305
306 This version filters out new URL schemes (like peer-bundle-cache://) to
307 avoid confusion in older clients.
308 """
309 manifest_contents = bundlecaches.get_manifest(repo)
310 # Filter out peer-bundle-cache:// entries
311 modified_manifest = []
312 for line in manifest_contents.splitlines():
313 if line.startswith(bundlecaches.CLONEBUNDLESCHEME):
314 continue
315 modified_manifest.append(line)
316 return wireprototypes.bytesresponse(b'\n'.join(modified_manifest))
317
318
319 @wireprotocommand(b'clonebundles_manifest', b'*', permission=b'pull')
320 def clonebundles_2(repo, proto, args):
304 321 """Server command for returning info for available bundles to seed clones.
305 322
306 323 Clients will parse this response and determine what bundle to fetch.
307 324
308 325 Extensions may wrap this command to filter or dynamically emit data
309 326 depending on the request. e.g. you could advertise URLs for the closest
310 327 data center given the client's IP address.
311 328
312 329 The only filtering done on the server side is stripping inline clonebundles
313 330 for clients that do not support them (handled by the legacy `clonebundles`
314 331 command above); otherwise, older clients would retrieve and error out on those.
315 332 """
316 333 manifest_contents = bundlecaches.get_manifest(repo)
317 clientcapabilities = proto.getprotocaps()
318 if b'inlineclonebundles' in clientcapabilities:
319 return wireprototypes.bytesresponse(manifest_contents)
320 modified_manifest = []
321 for line in manifest_contents.splitlines():
322 if line.startswith(bundlecaches.CLONEBUNDLESCHEME):
323 continue
324 modified_manifest.append(line)
325 return wireprototypes.bytesresponse(b'\n'.join(modified_manifest))
334 return wireprototypes.bytesresponse(manifest_contents)
326 335
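# Illustrative manifest (hypothetical entries) and what each command returns:
#
#   peer-bundle-cache://full.hg
#   https://cdn.example.com/full.hg BUNDLESPEC=gzip-v2
#
# the legacy 'clonebundles' command strips the peer-bundle-cache:// line,
# while 'clonebundles_manifest' returns the manifest unmodified.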
327 336
328 337 wireprotocaps = [
329 338 b'lookup',
330 339 b'branchmap',
331 340 b'pushkey',
332 341 b'known',
333 342 b'getbundle',
334 343 b'unbundlehash',
335 344 ]
336 345
337 346
338 347 def _capabilities(repo, proto):
339 348 """return a list of capabilities for a repo
340 349
341 350 This function exists to allow extensions to easily wrap capabilities
342 351 computation
343 352
344 353 - returns a list: easy to alter
345 354 - changes done here will be propagated to both the `capabilities` and `hello`
346 355 commands without any other action needed.
347 356 """
348 357 # copy to prevent modification of the global list
349 358 caps = list(wireprotocaps)
350 359
351 360 # Command of same name as capability isn't exposed to version 1 of
352 361 # transports. So conditionally add it.
353 362 if commands.commandavailable(b'changegroupsubset', proto):
354 363 caps.append(b'changegroupsubset')
355 364
356 365 if streamclone.allowservergeneration(repo):
357 366 if repo.ui.configbool(b'server', b'preferuncompressed'):
358 367 caps.append(b'stream-preferred')
359 368 requiredformats = streamclone.streamed_requirements(repo)
360 369 # if our local revlogs are just revlogv1, add 'stream' cap
361 370 if not requiredformats - {requirementsmod.REVLOGV1_REQUIREMENT}:
362 371 caps.append(b'stream')
363 372 # otherwise, add 'streamreqs' detailing our local revlog format
364 373 else:
365 374 caps.append(b'streamreqs=%s' % b','.join(sorted(requiredformats)))
366 375 if repo.ui.configbool(b'experimental', b'bundle2-advertise'):
367 376 capsblob = bundle2.encodecaps(bundle2.getrepocaps(repo, role=b'server'))
368 377 caps.append(b'bundle2=' + urlreq.quote(capsblob))
369 378 caps.append(b'unbundle=%s' % b','.join(bundle2.bundlepriority))
370 379
371 380 if repo.ui.configbool(b'experimental', b'narrow'):
372 381 caps.append(wireprototypes.NARROWCAP)
373 382 if repo.ui.configbool(b'experimental', b'narrowservebrokenellipses'):
374 383 caps.append(wireprototypes.ELLIPSESCAP)
375 384
376 385 return proto.addcapabilities(repo, caps)
377 386
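# Illustrative sketch of such a wrapper, written as it could appear in a
# separate, hypothetical extension module (the capability token is made up):
from mercurial import extensions, wireprotov1server


def _extracaps(orig, repo, proto):
    # call the wrapped _capabilities and append one extra token
    caps = orig(repo, proto)
    caps.append(b'my-experimental-cap')
    return caps


def extsetup(ui):
    extensions.wrapfunction(wireprotov1server, '_capabilities', _extracaps)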
378 387
379 388 # If you are writing an extension and are considering wrapping this
380 389 # function, wrap `_capabilities` instead.
381 390 @wireprotocommand(b'capabilities', permission=b'pull')
382 391 def capabilities(repo, proto):
383 392 caps = _capabilities(repo, proto)
384 393 return wireprototypes.bytesresponse(b' '.join(sorted(caps)))
385 394
386 395
387 396 @wireprotocommand(b'changegroup', b'roots', permission=b'pull')
388 397 def changegroup(repo, proto, roots):
389 398 nodes = wireprototypes.decodelist(roots)
390 399 outgoing = discovery.outgoing(
391 400 repo, missingroots=nodes, ancestorsof=repo.heads()
392 401 )
393 402 cg = changegroupmod.makechangegroup(repo, outgoing, b'01', b'serve')
394 403 gen = iter(lambda: cg.read(32768), b'')
395 404 return wireprototypes.streamres(gen=gen)
396 405
397 406
398 407 @wireprotocommand(b'changegroupsubset', b'bases heads', permission=b'pull')
399 408 def changegroupsubset(repo, proto, bases, heads):
400 409 bases = wireprototypes.decodelist(bases)
401 410 heads = wireprototypes.decodelist(heads)
402 411 outgoing = discovery.outgoing(repo, missingroots=bases, ancestorsof=heads)
403 412 cg = changegroupmod.makechangegroup(repo, outgoing, b'01', b'serve')
404 413 gen = iter(lambda: cg.read(32768), b'')
405 414 return wireprototypes.streamres(gen=gen)
406 415
407 416
408 417 @wireprotocommand(b'debugwireargs', b'one two *', permission=b'pull')
409 418 def debugwireargs(repo, proto, one, two, others):
410 419 # only accept optional args from the known set
411 420 opts = options(b'debugwireargs', [b'three', b'four'], others)
412 421 return wireprototypes.bytesresponse(
413 422 repo.debugwireargs(one, two, **pycompat.strkwargs(opts))
414 423 )
415 424
416 425
417 426 def find_pullbundle(repo, proto, opts, clheads, heads, common):
418 427 """Return a file object for the first matching pullbundle.
419 428
420 429 Pullbundles are specified in .hg/pullbundles.manifest similar to
421 430 clonebundles.
422 431 For each entry, the bundle specification is checked for compatibility:
423 432 - Client features vs the BUNDLESPEC.
424 433 - Revisions shared with the clients vs base revisions of the bundle.
425 434 A bundle can be applied only if all its base revisions are known by
426 435 the client.
427 436 - At least one leaf of the bundle's DAG is missing on the client.
428 437 - Every leaf of the bundle's DAG is part of the node set the client wants.
429 438 E.g. do not send a bundle of all changes if the client wants only
430 439 one specific branch of many.
431 440 """
432 441
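# An illustrative pullbundles.manifest entry (hypothetical path and hashes):
#   partial-default.hg BUNDLESPEC=gzip-v2 heads=<hexnode>;<hexnode> bases=<hexnode>
# where 'heads' and 'bases' are ';'-separated hex node ids decoded below.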
433 442 def decodehexstring(s):
434 443 return {binascii.unhexlify(h) for h in s.split(b';')}
435 444
436 445 manifest = repo.vfs.tryread(b'pullbundles.manifest')
437 446 if not manifest:
438 447 return None
439 448 res = bundlecaches.parseclonebundlesmanifest(repo, manifest)
440 449 res = bundlecaches.filterclonebundleentries(repo, res, pullbundles=True)
441 450 if not res:
442 451 return None
443 452 cl = repo.unfiltered().changelog
444 453 heads_anc = cl.ancestors([cl.rev(rev) for rev in heads], inclusive=True)
445 454 common_anc = cl.ancestors([cl.rev(rev) for rev in common], inclusive=True)
446 455 compformats = clientcompressionsupport(proto)
447 456 for entry in res:
448 457 comp = entry.get(b'COMPRESSION')
449 458 altcomp = util.compengines._bundlenames.get(comp)
450 459 if comp and comp not in compformats and altcomp not in compformats:
451 460 continue
452 461 # No test yet for VERSION, since V2 is supported by any client
453 462 # that advertises partial pulls
454 463 if b'heads' in entry:
455 464 try:
456 465 bundle_heads = decodehexstring(entry[b'heads'])
457 466 except TypeError:
458 467 # Bad heads entry
459 468 continue
460 469 if bundle_heads.issubset(common):
461 470 continue # Nothing new
462 471 if all(cl.rev(rev) in common_anc for rev in bundle_heads):
463 472 continue # Still nothing new
464 473 if any(
465 474 cl.rev(rev) not in heads_anc and cl.rev(rev) not in common_anc
466 475 for rev in bundle_heads
467 476 ):
468 477 continue
469 478 if b'bases' in entry:
470 479 try:
471 480 bundle_bases = decodehexstring(entry[b'bases'])
472 481 except TypeError:
473 482 # Bad bases entry
474 483 continue
475 484 if not all(cl.rev(rev) in common_anc for rev in bundle_bases):
476 485 continue
477 486 path = entry[b'URL']
478 487 repo.ui.debug(b'sending pullbundle "%s"\n' % path)
479 488 try:
480 489 return repo.vfs.open(path)
481 490 except IOError:
482 491 repo.ui.debug(b'pullbundle "%s" not accessible\n' % path)
483 492 continue
484 493 return None
485 494
486 495
487 496 @wireprotocommand(b'getbundle', b'*', permission=b'pull')
488 497 def getbundle(repo, proto, others):
489 498 opts = options(
490 499 b'getbundle', wireprototypes.GETBUNDLE_ARGUMENTS.keys(), others
491 500 )
492 501 for k, v in opts.items():
493 502 keytype = wireprototypes.GETBUNDLE_ARGUMENTS[k]
494 503 if keytype == b'nodes':
495 504 opts[k] = wireprototypes.decodelist(v)
496 505 elif keytype == b'csv':
497 506 opts[k] = list(v.split(b','))
498 507 elif keytype == b'scsv':
499 508 opts[k] = set(v.split(b','))
500 509 elif keytype == b'boolean':
501 510 # Client should serialize False as '0', which is a non-empty string
502 511 # so it evaluates as a True bool.
503 512 if v == b'0':
504 513 opts[k] = False
505 514 else:
506 515 opts[k] = bool(v)
507 516 elif keytype != b'plain':
508 517 raise KeyError(b'unknown getbundle option type %s' % keytype)
509 518
510 519 if not bundle1allowed(repo, b'pull'):
511 520 if not exchange.bundle2requested(opts.get(b'bundlecaps')):
512 521 if proto.name == b'http-v1':
513 522 return wireprototypes.ooberror(bundle2required)
514 523 raise error.Abort(bundle2requiredmain, hint=bundle2requiredhint)
515 524
516 525 try:
517 526 clheads = set(repo.changelog.heads())
518 527 heads = set(opts.get(b'heads', set()))
519 528 common = set(opts.get(b'common', set()))
520 529 common.discard(repo.nullid)
521 530 if (
522 531 repo.ui.configbool(b'server', b'pullbundle')
523 532 and b'partial-pull' in proto.getprotocaps()
524 533 ):
525 534 # Check if a pre-built bundle covers this request.
526 535 bundle = find_pullbundle(repo, proto, opts, clheads, heads, common)
527 536 if bundle:
528 537 return wireprototypes.streamres(
529 538 gen=util.filechunkiter(bundle), prefer_uncompressed=True
530 539 )
531 540
532 541 if repo.ui.configbool(b'server', b'disablefullbundle'):
533 542 # Check to see if this is a full clone.
534 543 changegroup = opts.get(b'cg', True)
535 544 if changegroup and not common and clheads == heads:
536 545 raise error.Abort(
537 546 _(b'server has pull-based clones disabled'),
538 547 hint=_(b'remove --pull if specified or upgrade Mercurial'),
539 548 )
540 549
541 550 info, chunks = exchange.getbundlechunks(
542 551 repo, b'serve', **pycompat.strkwargs(opts)
543 552 )
544 553 prefercompressed = info.get(b'prefercompressed', True)
545 554 except error.Abort as exc:
546 555 # cleanly forward Abort error to the client
547 556 if not exchange.bundle2requested(opts.get(b'bundlecaps')):
548 557 if proto.name == b'http-v1':
549 558 return wireprototypes.ooberror(exc.message + b'\n')
550 559 raise # cannot do better for bundle1 + ssh
551 560 # bundle2 request expect a bundle2 reply
552 561 bundler = bundle2.bundle20(repo.ui)
553 562 manargs = [(b'message', exc.message)]
554 563 advargs = []
555 564 if exc.hint is not None:
556 565 advargs.append((b'hint', exc.hint))
557 566 bundler.addpart(bundle2.bundlepart(b'error:abort', manargs, advargs))
558 567 chunks = bundler.getchunks()
559 568 prefercompressed = False
560 569
561 570 return wireprototypes.streamres(
562 571 gen=chunks, prefer_uncompressed=not prefercompressed
563 572 )
564 573
565 574
566 575 @wireprotocommand(b'heads', permission=b'pull')
567 576 def heads(repo, proto):
568 577 h = repo.heads()
569 578 return wireprototypes.bytesresponse(wireprototypes.encodelist(h) + b'\n')
570 579
571 580
572 581 @wireprotocommand(b'hello', permission=b'pull')
573 582 def hello(repo, proto):
574 583 """Called as part of SSH handshake to obtain server info.
575 584
576 585 Returns a list of lines describing interesting things about the
577 586 server, in an RFC822-like format.
578 587
579 588 Currently, the only one defined is ``capabilities``, which consists of a
580 589 line of space separated tokens describing server abilities:
581 590
582 591 capabilities: <token0> <token1> <token2>
583 592 """
584 593 caps = capabilities(repo, proto).data
585 594 return wireprototypes.bytesresponse(b'capabilities: %s\n' % caps)
586 595
587 596
588 597 @wireprotocommand(b'listkeys', b'namespace', permission=b'pull')
589 598 def listkeys(repo, proto, namespace):
590 599 d = sorted(repo.listkeys(encoding.tolocal(namespace)).items())
591 600 return wireprototypes.bytesresponse(pushkeymod.encodekeys(d))
592 601
593 602
594 603 @wireprotocommand(b'lookup', b'key', permission=b'pull')
595 604 def lookup(repo, proto, key):
596 605 try:
597 606 k = encoding.tolocal(key)
598 607 n = repo.lookup(k)
599 608 r = hex(n)
600 609 success = 1
601 610 except Exception as inst:
602 611 r = stringutil.forcebytestr(inst)
603 612 success = 0
604 613 return wireprototypes.bytesresponse(b'%d %s\n' % (success, r))
605 614
606 615
607 616 @wireprotocommand(b'known', b'nodes *', permission=b'pull')
608 617 def known(repo, proto, nodes, others):
609 618 v = b''.join(
610 619 b and b'1' or b'0' for b in repo.known(wireprototypes.decodelist(nodes))
611 620 )
612 621 return wireprototypes.bytesresponse(v)
613 622
614 623
615 624 @wireprotocommand(b'protocaps', b'caps', permission=b'pull')
616 625 def protocaps(repo, proto, caps):
617 626 if proto.name == wireprototypes.SSHV1:
618 627 proto._protocaps = set(caps.split(b' '))
619 628 return wireprototypes.bytesresponse(b'OK')
620 629
621 630
622 631 @wireprotocommand(b'pushkey', b'namespace key old new', permission=b'push')
623 632 def pushkey(repo, proto, namespace, key, old, new):
624 633 # compatibility with pre-1.8 clients which were accidentally
625 634 # sending raw binary nodes rather than utf-8-encoded hex
626 635 if len(new) == 20 and stringutil.escapestr(new) != new:
627 636 # looks like it could be a binary node
628 637 try:
629 638 new.decode('utf-8')
630 639 new = encoding.tolocal(new) # but cleanly decodes as UTF-8
631 640 except UnicodeDecodeError:
632 641 pass # binary, leave unmodified
633 642 else:
634 643 new = encoding.tolocal(new) # normal path
635 644
636 645 with proto.mayberedirectstdio() as output:
637 646 r = (
638 647 repo.pushkey(
639 648 encoding.tolocal(namespace),
640 649 encoding.tolocal(key),
641 650 encoding.tolocal(old),
642 651 new,
643 652 )
644 653 or False
645 654 )
646 655
647 656 output = output.getvalue() if output else b''
648 657 return wireprototypes.bytesresponse(b'%d\n%s' % (int(r), output))
649 658
650 659
651 660 @wireprotocommand(b'stream_out', permission=b'pull')
652 661 def stream(repo, proto):
653 662 """If the server supports streaming clone, it advertises the "stream"
654 663 capability with a value representing the version and flags of the repo
655 664 it is serving. Client checks to see if it understands the format.
656 665 """
657 666 return wireprototypes.streamreslegacy(streamclone.generatev1wireproto(repo))
658 667
659 668
660 669 @wireprotocommand(b'unbundle', b'heads', permission=b'push')
661 670 def unbundle(repo, proto, heads):
662 671 their_heads = wireprototypes.decodelist(heads)
663 672
664 673 with proto.mayberedirectstdio() as output:
665 674 try:
666 675 exchange.check_heads(repo, their_heads, b'preparing changes')
667 676 cleanup = lambda: None
668 677 try:
669 678 payload = proto.getpayload()
670 679 if repo.ui.configbool(b'server', b'streamunbundle'):
671 680
672 681 def cleanup():
673 682 # Ensure that the full payload is consumed, so
674 683 # that the connection doesn't contain trailing garbage.
675 684 for p in payload:
676 685 pass
677 686
678 687 fp = util.chunkbuffer(payload)
679 688 else:
680 689 # write bundle data to a temporary file as it can be big
681 690 fp, tempname = None, None
682 691
683 692 def cleanup():
684 693 if fp:
685 694 fp.close()
686 695 if tempname:
687 696 os.unlink(tempname)
688 697
689 698 fd, tempname = pycompat.mkstemp(prefix=b'hg-unbundle-')
690 699 repo.ui.debug(
691 700 b'redirecting incoming bundle to %s\n' % tempname
692 701 )
693 702 fp = os.fdopen(fd, pycompat.sysstr(b'wb+'))
694 703 for p in payload:
695 704 fp.write(p)
696 705 fp.seek(0)
697 706
698 707 gen = exchange.readbundle(repo.ui, fp, None)
699 708 if isinstance(
700 709 gen, changegroupmod.cg1unpacker
701 710 ) and not bundle1allowed(repo, b'push'):
702 711 if proto.name == b'http-v1':
703 712 # need to special case http because stderr does not get to
704 713 # the http client on a failed push, so we need to abuse
705 714 # some other error type to make sure the message gets to
706 715 # the user.
707 716 return wireprototypes.ooberror(bundle2required)
708 717 raise error.Abort(
709 718 bundle2requiredmain, hint=bundle2requiredhint
710 719 )
711 720
712 721 r = exchange.unbundle(
713 722 repo, gen, their_heads, b'serve', proto.client()
714 723 )
715 724 if util.safehasattr(r, 'addpart'):
716 725 # The return looks streamable, we are in the bundle2 case
717 726 # and should return a stream.
718 727 return wireprototypes.streamreslegacy(gen=r.getchunks())
719 728 return wireprototypes.pushres(
720 729 r, output.getvalue() if output else b''
721 730 )
722 731
723 732 finally:
724 733 cleanup()
725 734
726 735 except (error.BundleValueError, error.Abort, error.PushRaced) as exc:
727 736 # handle non-bundle2 case first
728 737 if not getattr(exc, 'duringunbundle2', False):
729 738 try:
730 739 raise
731 740 except error.Abort as exc:
732 741 # The old code we moved used procutil.stderr directly.
733 742 # We did not change it to minimise code change.
734 743 # This needs to be moved to something proper.
735 744 # Feel free to do it.
736 745 procutil.stderr.write(exc.format())
737 746 procutil.stderr.flush()
738 747 return wireprototypes.pushres(
739 748 0, output.getvalue() if output else b''
740 749 )
741 750 except error.PushRaced:
742 751 return wireprototypes.pusherr(
743 752 pycompat.bytestr(exc),
744 753 output.getvalue() if output else b'',
745 754 )
746 755
747 756 bundler = bundle2.bundle20(repo.ui)
748 757 for out in getattr(exc, '_bundle2salvagedoutput', ()):
749 758 bundler.addpart(out)
750 759 try:
751 760 try:
752 761 raise
753 762 except error.PushkeyFailed as exc:
754 763 # check client caps
755 764 remotecaps = getattr(exc, '_replycaps', None)
756 765 if (
757 766 remotecaps is not None
758 767 and b'pushkey' not in remotecaps.get(b'error', ())
759 768 ):
760 769 # not supported on the remote side, fall back to the Abort handler.
761 770 raise
762 771 part = bundler.newpart(b'error:pushkey')
763 772 part.addparam(b'in-reply-to', exc.partid)
764 773 if exc.namespace is not None:
765 774 part.addparam(
766 775 b'namespace', exc.namespace, mandatory=False
767 776 )
768 777 if exc.key is not None:
769 778 part.addparam(b'key', exc.key, mandatory=False)
770 779 if exc.new is not None:
771 780 part.addparam(b'new', exc.new, mandatory=False)
772 781 if exc.old is not None:
773 782 part.addparam(b'old', exc.old, mandatory=False)
774 783 if exc.ret is not None:
775 784 part.addparam(b'ret', exc.ret, mandatory=False)
776 785 except error.BundleValueError as exc:
777 786 errpart = bundler.newpart(b'error:unsupportedcontent')
778 787 if exc.parttype is not None:
779 788 errpart.addparam(b'parttype', exc.parttype)
780 789 if exc.params:
781 790 errpart.addparam(b'params', b'\0'.join(exc.params))
782 791 except error.Abort as exc:
783 792 manargs = [(b'message', exc.message)]
784 793 advargs = []
785 794 if exc.hint is not None:
786 795 advargs.append((b'hint', exc.hint))
787 796 bundler.addpart(
788 797 bundle2.bundlepart(b'error:abort', manargs, advargs)
789 798 )
790 799 except error.PushRaced as exc:
791 800 bundler.newpart(
792 801 b'error:pushraced',
793 802 [(b'message', stringutil.forcebytestr(exc))],
794 803 )
795 804 return wireprototypes.streamreslegacy(gen=bundler.getchunks())
@@ -1,838 +1,837 b''
1 1 #require no-reposimplestore no-chg
2 2
3 3 Set up a server
4 4
5 5 $ hg init server
6 6 $ cd server
7 7 $ cat >> .hg/hgrc << EOF
8 8 > [extensions]
9 9 > clonebundles =
10 10 > EOF
11 11
12 12 $ touch foo
13 13 $ hg -q commit -A -m 'add foo'
14 14 $ touch bar
15 15 $ hg -q commit -A -m 'add bar'
16 16
17 17 $ hg serve -d -p $HGPORT --pid-file hg.pid --accesslog access.log
18 18 $ cat hg.pid >> $DAEMON_PIDS
19 19 $ cd ..
20 20
21 21 Missing manifest should not result in server lookup
22 22
23 23 $ hg --verbose clone -U http://localhost:$HGPORT no-manifest
24 24 requesting all changes
25 25 adding changesets
26 26 adding manifests
27 27 adding file changes
28 28 added 2 changesets with 2 changes to 2 files
29 29 new changesets 53245c60e682:aaff8d2ffbbf
30 30 (sent 3 HTTP requests and * bytes; received * bytes in responses) (glob)
31 31
32 32 $ cat server/access.log
33 33 * - - [*] "GET /?cmd=capabilities HTTP/1.1" 200 - (glob)
34 34 $LOCALIP - - [$LOGDATE$] "GET /?cmd=batch HTTP/1.1" 200 - x-hgarg-1:cmds=heads+%3Bknown+nodes%3D x-hgproto-1:0.1 0.2 comp=$USUAL_COMPRESSIONS$ partial-pull (glob)
35 35 $LOCALIP - - [$LOGDATE$] "GET /?cmd=getbundle HTTP/1.1" 200 - x-hgarg-1:bookmarks=1&$USUAL_BUNDLE_CAPS$&cg=1&common=0000000000000000000000000000000000000000&heads=aaff8d2ffbbf07a46dd1f05d8ae7877e3f56e2a2&listkeys=bookmarks&phases=1 x-hgproto-1:0.1 0.2 comp=$USUAL_COMPRESSIONS$ partial-pull (glob)
36 36
37 37 Empty manifest file results in retrieval
38 38 (the extension only checks if the manifest file exists)
39 39
40 40 $ touch server/.hg/clonebundles.manifest
41 41 $ hg --verbose clone -U http://localhost:$HGPORT empty-manifest
42 42 no clone bundles available on remote; falling back to regular clone
43 43 requesting all changes
44 44 adding changesets
45 45 adding manifests
46 46 adding file changes
47 47 added 2 changesets with 2 changes to 2 files
48 48 new changesets 53245c60e682:aaff8d2ffbbf
49 49 (sent 4 HTTP requests and * bytes; received * bytes in responses) (glob)
50 50
51 51 Manifest file with invalid URL aborts
52 52
53 53 $ echo 'http://does.not.exist/bundle.hg' > server/.hg/clonebundles.manifest
54 54 $ hg clone http://localhost:$HGPORT 404-url
55 55 applying clone bundle from http://does.not.exist/bundle.hg
56 56 error fetching bundle: (.* not known|(\[Errno -?\d+] )?([Nn]o address associated with (host)?name|Temporary failure in name resolution|Name does not resolve)) (re) (no-windows !)
57 57 error fetching bundle: [Errno 1100*] getaddrinfo failed (glob) (windows !)
58 58 abort: error applying bundle
59 59 (if this error persists, consider contacting the server operator or disable clone bundles via "--config ui.clonebundles=false")
60 60 [255]
61 61
62 62 Manifest file with URL with unknown scheme skips the URL
63 63 $ echo 'weirdscheme://does.not.exist/bundle.hg' > server/.hg/clonebundles.manifest
64 64 $ hg clone http://localhost:$HGPORT unknown-scheme
65 65 no compatible clone bundles available on server; falling back to regular clone
66 66 (you may want to report this to the server operator)
67 67 requesting all changes
68 68 adding changesets
69 69 adding manifests
70 70 adding file changes
71 71 added 2 changesets with 2 changes to 2 files
72 72 new changesets 53245c60e682:aaff8d2ffbbf
73 73 updating to branch default
74 74 2 files updated, 0 files merged, 0 files removed, 0 files unresolved
75 75
76 76 Server is not running aborts
77 77
78 78 $ echo "http://localhost:$HGPORT1/bundle.hg" > server/.hg/clonebundles.manifest
79 79 $ hg clone http://localhost:$HGPORT server-not-runner
80 80 applying clone bundle from http://localhost:$HGPORT1/bundle.hg
81 81 error fetching bundle: (.* refused.*|Protocol not supported|(.* )?\$EADDRNOTAVAIL\$|.* No route to host) (re)
82 82 abort: error applying bundle
83 83 (if this error persists, consider contacting the server operator or disable clone bundles via "--config ui.clonebundles=false")
84 84 [255]
85 85
86 86 Server returns 404
87 87
88 88 $ "$PYTHON" $TESTDIR/dumbhttp.py -p $HGPORT1 --pid http.pid
89 89 $ cat http.pid >> $DAEMON_PIDS
90 90 $ hg clone http://localhost:$HGPORT running-404
91 91 applying clone bundle from http://localhost:$HGPORT1/bundle.hg
92 92 HTTP error fetching bundle: HTTP Error 404: File not found
93 93 abort: error applying bundle
94 94 (if this error persists, consider contacting the server operator or disable clone bundles via "--config ui.clonebundles=false")
95 95 [255]
96 96
97 97 We can override failure to fall back to regular clone
98 98
99 99 $ hg --config ui.clonebundlefallback=true clone -U http://localhost:$HGPORT 404-fallback
100 100 applying clone bundle from http://localhost:$HGPORT1/bundle.hg
101 101 HTTP error fetching bundle: HTTP Error 404: File not found
102 102 falling back to normal clone
103 103 requesting all changes
104 104 adding changesets
105 105 adding manifests
106 106 adding file changes
107 107 added 2 changesets with 2 changes to 2 files
108 108 new changesets 53245c60e682:aaff8d2ffbbf
109 109
110 110 Bundle with partial content works
111 111
112 112 $ hg -R server bundle --type gzip-v1 --base null -r 53245c60e682 partial.hg
113 113 1 changesets found
114 114
115 115 We verify exact bundle content as an extra check against accidental future
116 116 changes. If this output changes, we could break old clients.
117 117
118 118 $ f --size --hexdump partial.hg
119 119 partial.hg: size=207
120 120 0000: 48 47 31 30 47 5a 78 9c 63 60 60 98 17 ac 12 93 |HG10GZx.c``.....|
121 121 0010: f0 ac a9 23 45 70 cb bf 0d 5f 59 4e 4a 7f 79 21 |...#Ep..._YNJ.y!|
122 122 0020: 9b cc 40 24 20 a0 d7 ce 2c d1 38 25 cd 24 25 d5 |..@$ ...,.8%.$%.|
123 123 0030: d8 c2 22 cd 38 d9 24 cd 22 d5 c8 22 cd 24 cd 32 |..".8.$."..".$.2|
124 124 0040: d1 c2 d0 c4 c8 d2 32 d1 38 39 29 c9 34 cd d4 80 |......2.89).4...|
125 125 0050: ab 24 b5 b8 84 cb 40 c1 80 2b 2d 3f 9f 8b 2b 31 |.$....@..+-?..+1|
126 126 0060: 25 45 01 c8 80 9a d2 9b 65 fb e5 9e 45 bf 8d 7f |%E......e...E...|
127 127 0070: 9f c6 97 9f 2b 44 34 67 d9 ec 8e 0f a0 92 0b 75 |....+D4g.......u|
128 128 0080: 41 d6 24 59 18 a4 a4 9a a6 18 1a 5b 98 9b 5a 98 |A.$Y.......[..Z.|
129 129 0090: 9a 18 26 9b a6 19 98 1a 99 99 26 a6 18 9a 98 24 |..&.......&....$|
130 130 00a0: 26 59 a6 25 5a 98 a5 18 a6 24 71 41 35 b1 43 dc |&Y.%Z....$qA5.C.|
131 131 00b0: 16 b2 83 f7 e9 45 8b d2 56 c7 a3 1f 82 52 d7 8a |.....E..V....R..|
132 132 00c0: 78 ed fc d5 76 f1 36 35 dc 05 00 36 ed 5e c7 |x...v.65...6.^.|
133 133
134 134 $ echo "http://localhost:$HGPORT1/partial.hg" > server/.hg/clonebundles.manifest
135 135 $ hg clone -U http://localhost:$HGPORT partial-bundle
136 136 applying clone bundle from http://localhost:$HGPORT1/partial.hg
137 137 adding changesets
138 138 adding manifests
139 139 adding file changes
140 140 added 1 changesets with 1 changes to 1 files
141 141 finished applying clone bundle
142 142 searching for changes
143 143 adding changesets
144 144 adding manifests
145 145 adding file changes
146 146 added 1 changesets with 1 changes to 1 files
147 147 new changesets aaff8d2ffbbf
148 148 1 local changesets published
149 149
150 150 Incremental pull doesn't fetch bundle
151 151
152 152 $ hg clone -r 53245c60e682 -U http://localhost:$HGPORT partial-clone
153 153 adding changesets
154 154 adding manifests
155 155 adding file changes
156 156 added 1 changesets with 1 changes to 1 files
157 157 new changesets 53245c60e682
158 158
159 159 $ cd partial-clone
160 160 $ hg pull
161 161 pulling from http://localhost:$HGPORT/
162 162 searching for changes
163 163 adding changesets
164 164 adding manifests
165 165 adding file changes
166 166 added 1 changesets with 1 changes to 1 files
167 167 new changesets aaff8d2ffbbf
168 168 (run 'hg update' to get a working copy)
169 169 $ cd ..
170 170
171 171 Bundle with full content works
172 172
173 173 $ hg -R server bundle --type gzip-v2 --base null -r tip full.hg
174 174 2 changesets found
175 175
176 176 Again, we perform an extra check against bundle content changes. If this content
177 177 changes, clone bundles produced by new Mercurial versions may not be readable
178 178 by old clients.
179 179
180 180 $ f --size --hexdump full.hg
181 181 full.hg: size=442
182 182 0000: 48 47 32 30 00 00 00 0e 43 6f 6d 70 72 65 73 73 |HG20....Compress|
183 183 0010: 69 6f 6e 3d 47 5a 78 9c 63 60 60 d0 e4 76 f6 70 |ion=GZx.c``..v.p|
184 184 0020: f4 73 77 75 0f f2 0f 0d 60 00 02 46 46 76 26 4e |.swu....`..FFv&N|
185 185 0030: c6 b2 d4 a2 e2 cc fc 3c 03 a3 bc a4 e4 8c c4 bc |.......<........|
186 186 0040: f4 d4 62 23 06 06 e6 19 40 f9 4d c1 2a 31 09 cf |..b#....@.M.*1..|
187 187 0050: 9a 3a 52 04 b7 fc db f0 95 e5 a4 f4 97 17 b2 c9 |.:R.............|
188 188 0060: 0c 14 00 02 e6 d9 99 25 1a a7 a4 99 a4 a4 1a 5b |.......%.......[|
189 189 0070: 58 a4 19 27 9b a4 59 a4 1a 59 a4 99 a4 59 26 5a |X..'..Y..Y...Y&Z|
190 190 0080: 18 9a 18 59 5a 26 1a 27 27 25 99 a6 99 1a 70 95 |...YZ&.''%....p.|
191 191 0090: a4 16 97 70 19 28 18 70 a5 e5 e7 73 71 25 a6 a4 |...p.(.p...sq%..|
192 192 00a0: 28 00 19 20 17 af fa df ab ff 7b 3f fb 92 dc 8b |(.. ......{?....|
193 193 00b0: 1f 62 bb 9e b7 d7 d9 87 3d 5a 44 89 2f b0 99 87 |.b......=ZD./...|
194 194 00c0: ec e2 54 63 43 e3 b4 64 43 73 23 33 43 53 0b 63 |..TcC..dCs#3CS.c|
195 195 00d0: d3 14 23 03 a0 fb 2c 2c 0c d3 80 1e 30 49 49 b1 |..#...,,....0II.|
196 196 00e0: 4c 4a 32 48 33 30 b0 34 42 b8 38 29 b1 08 e2 62 |LJ2H30.4B.8)...b|
197 197 00f0: 20 03 6a ca c2 2c db 2f f7 2c fa 6d fc fb 34 be | .j..,./.,.m..4.|
198 198 0100: fc 5c 21 a2 39 cb 66 77 7c 00 0d c3 59 17 14 58 |.\!.9.fw|...Y..X|
199 199 0110: 49 16 06 29 a9 a6 29 86 c6 16 e6 a6 16 a6 26 86 |I..)..).......&.|
200 200 0120: c9 a6 69 06 a6 46 66 a6 89 29 86 26 26 89 49 96 |..i..Ff..).&&.I.|
201 201 0130: 69 89 16 66 29 86 29 49 5c 20 07 3e 16 fe 23 ae |i..f).)I\ .>..#.|
202 202 0140: 26 da 1c ab 10 1f d1 f8 e3 b3 ef cd dd fc 0c 93 |&...............|
203 203 0150: 88 75 34 36 75 04 82 55 17 14 36 a4 38 10 04 d8 |.u46u..U..6.8...|
204 204 0160: 21 01 9a b1 83 f7 e9 45 8b d2 56 c7 a3 1f 82 52 |!......E..V....R|
205 205 0170: d7 8a 78 ed fc d5 76 f1 36 25 81 89 c7 ad ec 90 |..x...v.6%......|
206 206 0180: 54 47 75 2b 89 48 b1 b2 62 c9 89 c9 19 a9 56 45 |TGu+.H..b.....VE|
207 207 0190: a9 65 ba 49 45 89 79 c9 19 ba 60 01 a0 14 23 58 |.e.IE.y...`...#X|
208 208 01a0: 81 35 c8 7d 40 cc 04 e2 a4 a4 a6 25 96 e6 94 60 |.5.}@......%...`|
209 209 01b0: 33 17 5f 54 00 00 d3 1b 0d 4c |3._T.....L|
210 210
211 211 $ echo "http://localhost:$HGPORT1/full.hg" > server/.hg/clonebundles.manifest
212 212 $ hg clone -U http://localhost:$HGPORT full-bundle
213 213 applying clone bundle from http://localhost:$HGPORT1/full.hg
214 214 adding changesets
215 215 adding manifests
216 216 adding file changes
217 217 added 2 changesets with 2 changes to 2 files
218 218 finished applying clone bundle
219 219 searching for changes
220 220 no changes found
221 221 2 local changesets published
222 222
223 223 Feature works over SSH
224 224
225 225 $ hg clone -U ssh://user@dummy/server ssh-full-clone
226 226 applying clone bundle from http://localhost:$HGPORT1/full.hg
227 227 adding changesets
228 228 adding manifests
229 229 adding file changes
230 230 added 2 changesets with 2 changes to 2 files
231 231 finished applying clone bundle
232 232 searching for changes
233 233 no changes found
234 234 2 local changesets published
235 235
236 236 Inline bundle
237 237 =============
238 238
239 239 Checking bundle retrieved over the wireprotocol
240 240
241 241 Feature works over SSH with inline bundle
242 242 -----------------------------------------
243 243
244 244 $ mkdir server/.hg/bundle-cache/
245 245 $ cp full.hg server/.hg/bundle-cache/
246 246 $ echo "peer-bundle-cache://full.hg" > server/.hg/clonebundles.manifest
247 247 $ hg clone -U ssh://user@dummy/server ssh-inline-clone
248 248 applying clone bundle from peer-bundle-cache://full.hg
249 249 adding changesets
250 250 adding manifests
251 251 adding file changes
252 252 added 2 changesets with 2 changes to 2 files
253 253 finished applying clone bundle
254 254 searching for changes
255 255 no changes found
256 256 2 local changesets published
257 257
258 258 HTTP Supports
259 259 -------------
260 260
261 Or lack of it actually
262
263 Feature does not use inline bundle over HTTP(S) because there is no protocaps support
264 (so no way for the client to announce that it supports inline clonebundles)
265 261 $ hg clone -U http://localhost:$HGPORT http-inline-clone
266 requesting all changes
262 applying clone bundle from peer-bundle-cache://full.hg
267 263 adding changesets
268 264 adding manifests
269 265 adding file changes
270 266 added 2 changesets with 2 changes to 2 files
271 new changesets 53245c60e682:aaff8d2ffbbf
267 finished applying clone bundle
268 searching for changes
269 no changes found
270 2 local changesets published
272 271
273 272 Pre-transmit Hook
274 273 -----------------
275 274
276 275 Hooks work with inline bundle
277 276
278 277 $ cp server/.hg/hgrc server/.hg/hgrc-beforeinlinehooks
279 278 $ echo "[hooks]" >> server/.hg/hgrc
280 279 $ echo "pretransmit-inline-clone-bundle=echo foo" >> server/.hg/hgrc
281 280 $ hg clone -U ssh://user@dummy/server ssh-inline-clone-hook
282 281 applying clone bundle from peer-bundle-cache://full.hg
283 282 remote: foo
284 283 adding changesets
285 284 adding manifests
286 285 adding file changes
287 286 added 2 changesets with 2 changes to 2 files
288 287 finished applying clone bundle
289 288 searching for changes
290 289 no changes found
291 290 2 local changesets published
292 291
293 292 Hooks can make an inline bundle fail
294 293
295 294 $ cp server/.hg/hgrc-beforeinlinehooks server/.hg/hgrc
296 295 $ echo "[hooks]" >> server/.hg/hgrc
297 296 $ echo "pretransmit-inline-clone-bundle=echo bar && false" >> server/.hg/hgrc
298 297 $ hg clone -U ssh://user@dummy/server ssh-inline-clone-hook-fail
299 298 applying clone bundle from peer-bundle-cache://full.hg
300 299 remote: bar
301 300 remote: abort: pretransmit-inline-clone-bundle hook exited with status 1
302 301 abort: stream ended unexpectedly (got 0 bytes, expected 1)
303 302 [255]
304 303 $ cp server/.hg/hgrc-beforeinlinehooks server/.hg/hgrc
305 304
306 305 Other tests
307 306 ===========
308 307
309 308 Entry with unknown BUNDLESPEC is filtered and not used
310 309
311 310 $ cat > server/.hg/clonebundles.manifest << EOF
312 311 > http://bad.entry1 BUNDLESPEC=UNKNOWN
313 312 > http://bad.entry2 BUNDLESPEC=xz-v1
314 313 > http://bad.entry3 BUNDLESPEC=none-v100
315 314 > http://localhost:$HGPORT1/full.hg BUNDLESPEC=gzip-v2
316 315 > EOF
317 316
318 317 $ hg clone -U http://localhost:$HGPORT filter-unknown-type
319 318 applying clone bundle from http://localhost:$HGPORT1/full.hg
320 319 adding changesets
321 320 adding manifests
322 321 adding file changes
323 322 added 2 changesets with 2 changes to 2 files
324 323 finished applying clone bundle
325 324 searching for changes
326 325 no changes found
327 326 2 local changesets published
328 327
329 328 Automatic fallback when all entries are filtered
330 329
331 330 $ cat > server/.hg/clonebundles.manifest << EOF
332 331 > http://bad.entry BUNDLESPEC=UNKNOWN
333 332 > EOF
334 333
335 334 $ hg clone -U http://localhost:$HGPORT filter-all
336 335 no compatible clone bundles available on server; falling back to regular clone
337 336 (you may want to report this to the server operator)
338 337 requesting all changes
339 338 adding changesets
340 339 adding manifests
341 340 adding file changes
342 341 added 2 changesets with 2 changes to 2 files
343 342 new changesets 53245c60e682:aaff8d2ffbbf
344 343
345 344 We require a Python version that supports SNI. Therefore, URLs requiring SNI
346 345 are not filtered.
347 346
348 347 $ cp full.hg sni.hg
349 348 $ cat > server/.hg/clonebundles.manifest << EOF
350 349 > http://localhost:$HGPORT1/sni.hg REQUIRESNI=true
351 350 > http://localhost:$HGPORT1/full.hg
352 351 > EOF
353 352
354 353 $ hg clone -U http://localhost:$HGPORT sni-supported
355 354 applying clone bundle from http://localhost:$HGPORT1/sni.hg
356 355 adding changesets
357 356 adding manifests
358 357 adding file changes
359 358 added 2 changesets with 2 changes to 2 files
360 359 finished applying clone bundle
361 360 searching for changes
362 361 no changes found
363 362 2 local changesets published
364 363
365 364 Stream clone bundles are supported
366 365
367 366 $ hg -R server debugcreatestreamclonebundle packed.hg
368 367 writing 613 bytes for 4 files
369 368 bundle requirements: generaldelta, revlogv1, sparserevlog (no-rust no-zstd !)
370 369 bundle requirements: generaldelta, revlog-compression-zstd, revlogv1, sparserevlog (no-rust zstd !)
371 370 bundle requirements: generaldelta, revlog-compression-zstd, revlogv1, sparserevlog (rust !)
372 371
373 372 No bundle spec should work
374 373
375 374 $ cat > server/.hg/clonebundles.manifest << EOF
376 375 > http://localhost:$HGPORT1/packed.hg
377 376 > EOF
378 377
379 378 $ hg clone -U http://localhost:$HGPORT stream-clone-no-spec
380 379 applying clone bundle from http://localhost:$HGPORT1/packed.hg
381 380 4 files to transfer, 613 bytes of data
382 381 transferred 613 bytes in *.* seconds (*) (glob)
383 382 finished applying clone bundle
384 383 searching for changes
385 384 no changes found
386 385
387 386 Bundle spec without parameters should work
388 387
389 388 $ cat > server/.hg/clonebundles.manifest << EOF
390 389 > http://localhost:$HGPORT1/packed.hg BUNDLESPEC=none-packed1
391 390 > EOF
392 391
393 392 $ hg clone -U http://localhost:$HGPORT stream-clone-vanilla-spec
394 393 applying clone bundle from http://localhost:$HGPORT1/packed.hg
395 394 4 files to transfer, 613 bytes of data
396 395 transferred 613 bytes in *.* seconds (*) (glob)
397 396 finished applying clone bundle
398 397 searching for changes
399 398 no changes found
400 399
401 400 Bundle spec with format requirements should work
402 401
403 402 $ cat > server/.hg/clonebundles.manifest << EOF
404 403 > http://localhost:$HGPORT1/packed.hg BUNDLESPEC=none-packed1;requirements%3Drevlogv1
405 404 > EOF
406 405
407 406 $ hg clone -U http://localhost:$HGPORT stream-clone-supported-requirements
408 407 applying clone bundle from http://localhost:$HGPORT1/packed.hg
409 408 4 files to transfer, 613 bytes of data
410 409 transferred 613 bytes in *.* seconds (*) (glob)
411 410 finished applying clone bundle
412 411 searching for changes
413 412 no changes found
414 413
415 414 Stream bundle spec with unknown requirements should be filtered out
416 415
417 416 $ cat > server/.hg/clonebundles.manifest << EOF
418 417 > http://localhost:$HGPORT1/packed.hg BUNDLESPEC=none-packed1;requirements%3Drevlogv42
419 418 > EOF
420 419
421 420 $ hg clone -U http://localhost:$HGPORT stream-clone-unsupported-requirements
422 421 no compatible clone bundles available on server; falling back to regular clone
423 422 (you may want to report this to the server operator)
424 423 requesting all changes
425 424 adding changesets
426 425 adding manifests
427 426 adding file changes
428 427 added 2 changesets with 2 changes to 2 files
429 428 new changesets 53245c60e682:aaff8d2ffbbf
430 429
431 430 Set up manifest for testing preferences
432 431 (Remember, the TYPE does not have to match reality - the URL is
433 432 important)
434 433
435 434 $ cp full.hg gz-a.hg
436 435 $ cp full.hg gz-b.hg
437 436 $ cp full.hg bz2-a.hg
438 437 $ cp full.hg bz2-b.hg
439 438 $ cat > server/.hg/clonebundles.manifest << EOF
440 439 > http://localhost:$HGPORT1/gz-a.hg BUNDLESPEC=gzip-v2 extra=a
441 440 > http://localhost:$HGPORT1/bz2-a.hg BUNDLESPEC=bzip2-v2 extra=a
442 441 > http://localhost:$HGPORT1/gz-b.hg BUNDLESPEC=gzip-v2 extra=b
443 442 > http://localhost:$HGPORT1/bz2-b.hg BUNDLESPEC=bzip2-v2 extra=b
444 443 > EOF
445 444
446 445 Preferring an undefined attribute will take first entry
447 446
448 447 $ hg --config ui.clonebundleprefers=foo=bar clone -U http://localhost:$HGPORT prefer-foo
449 448 applying clone bundle from http://localhost:$HGPORT1/gz-a.hg
450 449 adding changesets
451 450 adding manifests
452 451 adding file changes
453 452 added 2 changesets with 2 changes to 2 files
454 453 finished applying clone bundle
455 454 searching for changes
456 455 no changes found
457 456 2 local changesets published
458 457
459 458 Preferring bz2 type will download first entry of that type
460 459
461 460 $ hg --config ui.clonebundleprefers=COMPRESSION=bzip2 clone -U http://localhost:$HGPORT prefer-bz
462 461 applying clone bundle from http://localhost:$HGPORT1/bz2-a.hg
463 462 adding changesets
464 463 adding manifests
465 464 adding file changes
466 465 added 2 changesets with 2 changes to 2 files
467 466 finished applying clone bundle
468 467 searching for changes
469 468 no changes found
470 469 2 local changesets published
471 470
472 471 Preferring multiple values of an option works
473 472
474 473 $ hg --config ui.clonebundleprefers=COMPRESSION=unknown,COMPRESSION=bzip2 clone -U http://localhost:$HGPORT prefer-multiple-bz
475 474 applying clone bundle from http://localhost:$HGPORT1/bz2-a.hg
476 475 adding changesets
477 476 adding manifests
478 477 adding file changes
479 478 added 2 changesets with 2 changes to 2 files
480 479 finished applying clone bundle
481 480 searching for changes
482 481 no changes found
483 482 2 local changesets published
484 483
485 484 Sorting multiple values should get us back to the original first entry
486 485
487 486 $ hg --config ui.clonebundleprefers=BUNDLESPEC=unknown,BUNDLESPEC=gzip-v2,BUNDLESPEC=bzip2-v2 clone -U http://localhost:$HGPORT prefer-multiple-gz
488 487 applying clone bundle from http://localhost:$HGPORT1/gz-a.hg
489 488 adding changesets
490 489 adding manifests
491 490 adding file changes
492 491 added 2 changesets with 2 changes to 2 files
493 492 finished applying clone bundle
494 493 searching for changes
495 494 no changes found
496 495 2 local changesets published
497 496
498 497 Preferring multiple attributes respects their order
499 498
500 499 $ hg --config ui.clonebundleprefers=extra=b,BUNDLESPEC=bzip2-v2 clone -U http://localhost:$HGPORT prefer-separate-attributes
501 500 applying clone bundle from http://localhost:$HGPORT1/bz2-b.hg
502 501 adding changesets
503 502 adding manifests
504 503 adding file changes
505 504 added 2 changesets with 2 changes to 2 files
506 505 finished applying clone bundle
507 506 searching for changes
508 507 no changes found
509 508 2 local changesets published
510 509
511 510 Test where an attribute is missing from some entries
512 511
513 512 $ cat > server/.hg/clonebundles.manifest << EOF
514 513 > http://localhost:$HGPORT1/gz-a.hg BUNDLESPEC=gzip-v2
515 514 > http://localhost:$HGPORT1/bz2-a.hg BUNDLESPEC=bzip2-v2
516 515 > http://localhost:$HGPORT1/gz-b.hg BUNDLESPEC=gzip-v2 extra=b
517 516 > http://localhost:$HGPORT1/bz2-b.hg BUNDLESPEC=bzip2-v2 extra=b
518 517 > EOF
519 518
520 519 $ hg --config ui.clonebundleprefers=extra=b clone -U http://localhost:$HGPORT prefer-partially-defined-attribute
521 520 applying clone bundle from http://localhost:$HGPORT1/gz-b.hg
522 521 adding changesets
523 522 adding manifests
524 523 adding file changes
525 524 added 2 changesets with 2 changes to 2 files
526 525 finished applying clone bundle
527 526 searching for changes
528 527 no changes found
529 528 2 local changesets published
530 529
531 530 Test a bad attribute list
532 531
533 532 $ hg --config ui.clonebundleprefers=bad clone -U http://localhost:$HGPORT bad-input
534 533 abort: invalid ui.clonebundleprefers item: bad
535 534 (each comma separated item should be key=value pairs)
536 535 [255]
537 536 $ hg --config ui.clonebundleprefers=key=val,bad,key2=val2 clone \
538 537 > -U http://localhost:$HGPORT bad-input
539 538 abort: invalid ui.clonebundleprefers item: bad
540 539 (each comma separated item should be key=value pairs)
541 540 [255]
542 541
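The preferences exercised above can also be configured persistently on the
client instead of being passed via --config; the value is a comma separated
list of key=value pairs and earlier items outrank later ones. A minimal sketch
of the equivalent hgrc setting (illustrative only, not executed as part of this
test run):

  $ cat >> $HGRCPATH << EOF
  > [ui]
  > clonebundleprefers = COMPRESSION=bzip2,BUNDLESPEC=gzip-v2
  > EOF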
543 542
544 543 Test interaction between clone bundles and --stream
545 544
546 545 A manifest with just a gzip bundle
547 546
548 547 $ cat > server/.hg/clonebundles.manifest << EOF
549 548 > http://localhost:$HGPORT1/gz-a.hg BUNDLESPEC=gzip-v2
550 549 > EOF
551 550
552 551 $ hg clone -U --stream http://localhost:$HGPORT uncompressed-gzip
553 552 no compatible clone bundles available on server; falling back to regular clone
554 553 (you may want to report this to the server operator)
555 554 streaming all changes
556 555 9 files to transfer, 816 bytes of data
557 556 transferred 816 bytes in * seconds (*) (glob)
558 557
559 558 A manifest with a stream clone but no BUNDLESPEC
560 559
561 560 $ cat > server/.hg/clonebundles.manifest << EOF
562 561 > http://localhost:$HGPORT1/packed.hg
563 562 > EOF
564 563
565 564 $ hg clone -U --stream http://localhost:$HGPORT uncompressed-no-bundlespec
566 565 no compatible clone bundles available on server; falling back to regular clone
567 566 (you may want to report this to the server operator)
568 567 streaming all changes
569 568 9 files to transfer, 816 bytes of data
570 569 transferred 816 bytes in * seconds (*) (glob)
571 570
572 571 A manifest with a gzip bundle and a stream clone
573 572
574 573 $ cat > server/.hg/clonebundles.manifest << EOF
575 574 > http://localhost:$HGPORT1/gz-a.hg BUNDLESPEC=gzip-v2
576 575 > http://localhost:$HGPORT1/packed.hg BUNDLESPEC=none-packed1
577 576 > EOF
578 577
579 578 $ hg clone -U --stream http://localhost:$HGPORT uncompressed-gzip-packed
580 579 applying clone bundle from http://localhost:$HGPORT1/packed.hg
581 580 4 files to transfer, 613 bytes of data
582 581 transferred 613 bytes in * seconds (*) (glob)
583 582 finished applying clone bundle
584 583 searching for changes
585 584 no changes found
586 585
587 586 A manifest with a gzip bundle and stream clone with supported requirements
588 587
589 588 $ cat > server/.hg/clonebundles.manifest << EOF
590 589 > http://localhost:$HGPORT1/gz-a.hg BUNDLESPEC=gzip-v2
591 590 > http://localhost:$HGPORT1/packed.hg BUNDLESPEC=none-packed1;requirements%3Drevlogv1
592 591 > EOF
593 592
594 593 $ hg clone -U --stream http://localhost:$HGPORT uncompressed-gzip-packed-requirements
595 594 applying clone bundle from http://localhost:$HGPORT1/packed.hg
596 595 4 files to transfer, 613 bytes of data
597 596 transferred 613 bytes in * seconds (*) (glob)
598 597 finished applying clone bundle
599 598 searching for changes
600 599 no changes found
601 600
602 601 A manifest with a gzip bundle and a stream clone with unsupported requirements
603 602
604 603 $ cat > server/.hg/clonebundles.manifest << EOF
605 604 > http://localhost:$HGPORT1/gz-a.hg BUNDLESPEC=gzip-v2
606 605 > http://localhost:$HGPORT1/packed.hg BUNDLESPEC=none-packed1;requirements%3Drevlogv42
607 606 > EOF
608 607
609 608 $ hg clone -U --stream http://localhost:$HGPORT uncompressed-gzip-packed-unsupported-requirements
610 609 no compatible clone bundles available on server; falling back to regular clone
611 610 (you may want to report this to the server operator)
612 611 streaming all changes
613 612 9 files to transfer, 816 bytes of data
614 613 transferred 816 bytes in * seconds (*) (glob)
615 614
616 615 Test clone bundle retrieved through bundle2
617 616
618 617 $ cat << EOF >> $HGRCPATH
619 618 > [extensions]
620 619 > largefiles=
621 620 > EOF
622 621 $ killdaemons.py
623 622 $ hg -R server serve -d -p $HGPORT --pid-file hg.pid --accesslog access.log
624 623 $ cat hg.pid >> $DAEMON_PIDS
625 624
626 625 $ hg -R server debuglfput gz-a.hg
627 626 1f74b3d08286b9b3a16fb3fa185dd29219cbc6ae
628 627
629 628 $ cat > server/.hg/clonebundles.manifest << EOF
630 629 > largefile://1f74b3d08286b9b3a16fb3fa185dd29219cbc6ae BUNDLESPEC=gzip-v2
631 630 > EOF
632 631
633 632 $ hg clone -U http://localhost:$HGPORT largefile-provided --traceback
634 633 applying clone bundle from largefile://1f74b3d08286b9b3a16fb3fa185dd29219cbc6ae
635 634 adding changesets
636 635 adding manifests
637 636 adding file changes
638 637 added 2 changesets with 2 changes to 2 files
639 638 finished applying clone bundle
640 639 searching for changes
641 640 no changes found
642 641 2 local changesets published
643 642 $ killdaemons.py
644 643
645 644 A manifest with a gzip bundle requiring too much memory for a 16MB system and working
646 645 on a 32MB system.
647 646
648 647 $ "$PYTHON" $TESTDIR/dumbhttp.py -p $HGPORT1 --pid http.pid
649 648 $ cat http.pid >> $DAEMON_PIDS
650 649 $ hg -R server serve -d -p $HGPORT --pid-file hg.pid --accesslog access.log
651 650 $ cat hg.pid >> $DAEMON_PIDS
652 651
653 652 $ cat > server/.hg/clonebundles.manifest << EOF
654 653 > http://localhost:$HGPORT1/gz-a.hg BUNDLESPEC=gzip-v2 REQUIREDRAM=12MB
655 654 > EOF
656 655
657 656 $ hg clone -U --debug --config ui.available-memory=16MB http://localhost:$HGPORT gzip-too-large
658 657 using http://localhost:$HGPORT/
659 658 sending capabilities command
660 sending clonebundles command
659 sending clonebundles_manifest command
661 660 filtering http://localhost:$HGPORT1/gz-a.hg as it needs more than 2/3 of system memory
662 661 no compatible clone bundles available on server; falling back to regular clone
663 662 (you may want to report this to the server operator)
664 663 query 1; heads
665 664 sending batch command
666 665 requesting all changes
667 666 sending getbundle command
668 667 bundle2-input-bundle: with-transaction
669 668 bundle2-input-part: "changegroup" (params: 1 mandatory 1 advisory) supported
670 669 adding changesets
671 670 add changeset 53245c60e682
672 671 add changeset aaff8d2ffbbf
673 672 adding manifests
674 673 adding file changes
675 674 adding bar revisions
676 675 adding foo revisions
677 676 bundle2-input-part: total payload size 936
678 677 bundle2-input-part: "listkeys" (params: 1 mandatory) supported
679 678 bundle2-input-part: "phase-heads" supported
680 679 bundle2-input-part: total payload size 24
681 680 bundle2-input-bundle: 3 parts total
682 681 checking for updated bookmarks
683 682 updating the branch cache
684 683 added 2 changesets with 2 changes to 2 files
685 684 new changesets 53245c60e682:aaff8d2ffbbf
686 685 calling hook changegroup.lfiles: hgext.largefiles.reposetup.checkrequireslfiles
687 686 updating the branch cache
688 687 (sent 4 HTTP requests and * bytes; received * bytes in responses) (glob)
689 688
690 689 $ hg clone -U --debug --config ui.available-memory=32MB http://localhost:$HGPORT gzip-too-large2
691 690 using http://localhost:$HGPORT/
692 691 sending capabilities command
693 sending clonebundles command
692 sending clonebundles_manifest command
694 693 applying clone bundle from http://localhost:$HGPORT1/gz-a.hg
695 694 bundle2-input-bundle: 1 params with-transaction
696 695 bundle2-input-part: "changegroup" (params: 1 mandatory 1 advisory) supported
697 696 adding changesets
698 697 add changeset 53245c60e682
699 698 add changeset aaff8d2ffbbf
700 699 adding manifests
701 700 adding file changes
702 701 adding bar revisions
703 702 adding foo revisions
704 703 bundle2-input-part: total payload size 920
705 704 bundle2-input-part: "cache:rev-branch-cache" (advisory) supported
706 705 bundle2-input-part: total payload size 59
707 706 bundle2-input-bundle: 2 parts total
708 707 updating the branch cache
709 708 added 2 changesets with 2 changes to 2 files
710 709 finished applying clone bundle
711 710 query 1; heads
712 711 sending batch command
713 712 searching for changes
714 713 all remote heads known locally
715 714 no changes found
716 715 sending getbundle command
717 716 bundle2-input-bundle: with-transaction
718 717 bundle2-input-part: "listkeys" (params: 1 mandatory) supported
719 718 bundle2-input-part: "phase-heads" supported
720 719 bundle2-input-part: total payload size 24
721 720 bundle2-input-bundle: 2 parts total
722 721 checking for updated bookmarks
723 722 2 local changesets published
724 723 calling hook changegroup.lfiles: hgext.largefiles.reposetup.checkrequireslfiles
725 724 updating the branch cache
726 725 (sent 4 HTTP requests and * bytes; received * bytes in responses) (glob)
727 726 $ killdaemons.py
728 727
729 728 Testing a clone bundle that involves revlog splitting (issue6811)
730 729 ==================================================================
731 730
732 731 $ cat >> $HGRCPATH << EOF
733 732 > [format]
734 733 > revlog-compression=none
735 734 > use-persistent-nodemap=no
736 735 > EOF
737 736
738 737 $ hg init server-revlog-split/
739 738 $ cd server-revlog-split
740 739 $ cat >> .hg/hgrc << EOF
741 740 > [extensions]
742 741 > clonebundles =
743 742 > EOF
744 743 $ echo foo > A
745 744 $ hg add A
746 745 $ hg commit -m 'initial commit'
747 746 IMPORTANT: the revlogs must not be split
748 747 $ ls -1 .hg/store/00manifest.*
749 748 .hg/store/00manifest.i
750 749 $ ls -1 .hg/store/data/_a.*
751 750 .hg/store/data/_a.i
752 751
753 752 do a big enough update to split the revlogs
754 753
755 754 $ $TESTDIR/seq.py 100000 > A
756 755 $ mkdir foo
757 756 $ cd foo
758 757 $ touch `$TESTDIR/seq.py 10000`
759 758 $ cd ..
760 759 $ hg add -q foo
761 760 $ hg commit -m 'split the manifest and one filelog'
762 761
763 762 IMPORTANT: now the revlogs must be split
764 763 $ ls -1 .hg/store/00manifest.*
765 764 .hg/store/00manifest.d
766 765 .hg/store/00manifest.i
767 766 $ ls -1 .hg/store/data/_a.*
768 767 .hg/store/data/_a.d
769 768 .hg/store/data/_a.i
770 769
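An inline revlog keeps its index and data together in the ".i" file; once the
data outgrows a size threshold, Mercurial splits it into a separate ".i" index
and ".d" data file, which is what the listings above check for. As a sketch
(not part of the recorded run), the manifest revlog's inline status can also be
inspected with:

  $ hg debugrevlog -m | grep flags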
771 770 Add an extra commit on top of that
772 771
773 772 $ echo foo >> A
774 773 $ hg commit -m 'one extra commit'
775 774
776 775 $ cd ..
777 776
778 777 Do a bundle that contains the split, but not the extra commit on top
779 778
780 779 $ hg bundle --exact --rev '::(default~1)' -R server-revlog-split/ --type gzip-v2 split-test.hg
781 780 2 changesets found
782 781
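The revset '::(default~1)' used above selects every ancestor of the first
parent of the default head, i.e. the two changesets that predate the extra
commit, so the bundle covers the revlog split but stops short of the last
changeset. A sketch for double-checking which revisions such a bundle would
contain (not part of the recorded run):

  $ hg log -R server-revlog-split -r '::(default~1)' -T '{rev}: {desc|firstline}\n'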
783 782 $ cat > server-revlog-split/.hg/clonebundles.manifest << EOF
784 783 > http://localhost:$HGPORT1/split-test.hg BUNDLESPEC=gzip-v2
785 784 > EOF
786 785
787 786 start the necessary servers
788 787
789 788 $ "$PYTHON" $TESTDIR/dumbhttp.py -p $HGPORT1 --pid http.pid
790 789 $ cat http.pid >> $DAEMON_PIDS
791 790 $ hg -R server-revlog-split serve -d -p $HGPORT --pid-file hg.pid --accesslog access.log
792 791 $ cat hg.pid >> $DAEMON_PIDS
793 792
794 793 Check that clone works fine
795 794 ===========================
796 795
797 796 Here, the initial clone will trigger a revlog split (which is a bit clowny in
798 797 itself, but whatever). The split revlogs will see additional data added to
799 798 them in the subsequent pull. This should not be a problem.
800 799
801 800 $ hg clone http://localhost:$HGPORT revlog-split-in-the-bundle
802 801 applying clone bundle from http://localhost:$HGPORT1/split-test.hg
803 802 adding changesets
804 803 adding manifests
805 804 adding file changes
806 805 added 2 changesets with 10002 changes to 10001 files
807 806 finished applying clone bundle
808 807 searching for changes
809 808 adding changesets
810 809 adding manifests
811 810 adding file changes
812 811 added 1 changesets with 1 changes to 1 files
813 812 new changesets e3879eaa1db7
814 813 2 local changesets published
815 814 updating to branch default
816 815 10001 files updated, 0 files merged, 0 files removed, 0 files unresolved
817 816
818 817 check the results
819 818
820 819 $ cd revlog-split-in-the-bundle
821 820 $ f --size .hg/store/00manifest.*
822 821 .hg/store/00manifest.d: size=499037
823 822 .hg/store/00manifest.i: size=192
824 823 $ f --size .hg/store/data/_a.*
825 824 .hg/store/data/_a.d: size=588917
826 825 .hg/store/data/_a.i: size=192
827 826
828 827 manifest should work
829 828
830 829 $ hg files -r tip | wc -l
831 830 \s*10001 (re)
832 831
833 832 file content should work
834 833
835 834 $ hg cat -r tip A | wc -l
836 835 \s*100001 (re)
837 836
838 837
@@ -1,204 +1,204 b''
1 1 #require no-reposimplestore
2 2
3 3 #testcases stream-v2 stream-v3
4 4
5 5 #if stream-v2
6 6 $ bundle_format="streamv2"
7 7 $ stream_version="v2"
8 8 #endif
9 9 #if stream-v3
10 10 $ bundle_format="streamv3-exp"
11 11 $ stream_version="v3-exp"
12 12 $ cat << EOF >> $HGRCPATH
13 13 > [experimental]
14 14 > stream-v3=yes
15 15 > EOF
16 16 #endif
17 17
18 18 Test creating and consuming stream bundles v2 and v3
19 19
20 20 $ getmainid() {
21 21 > hg -R main log --template '{node}\n' --rev "$1"
22 22 > }
23 23
24 24 $ cp $HGRCPATH $TESTTMP/hgrc.orig
25 25
26 26 $ cat >> $HGRCPATH << EOF
27 27 > [experimental]
28 28 > evolution.createmarkers=True
29 29 > evolution.exchange=True
30 30 > bundle2-output-capture=True
31 31 > [ui]
32 32 > logtemplate={rev}:{node|short} {phase} {author} {bookmarks} {desc|firstline}
33 33 > [web]
34 34 > push_ssl = false
35 35 > allow_push = *
36 36 > [phases]
37 37 > publish=False
38 38 > [extensions]
39 39 > drawdag=$TESTDIR/drawdag.py
40 40 > clonebundles=
41 41 > EOF
42 42
43 43 The extension requires a repo (currently unused)
44 44
45 45 $ hg init main
46 46 $ cd main
47 47
48 48 $ hg debugdrawdag <<'EOF'
49 49 > E
50 50 > |
51 51 > D
52 52 > |
53 53 > C
54 54 > |
55 55 > B
56 56 > |
57 57 > A
58 58 > EOF
59 59
60 60 $ hg bundle -a --type="none-v2;stream=$stream_version" bundle.hg
61 61 $ hg debugbundle bundle.hg
62 62 Stream params: {}
63 63 stream2 -- {bytecount: 1693, filecount: 11, requirements: generaldelta%2Crevlogv1%2Csparserevlog} (mandatory: True) (stream-v2 no-zstd !)
64 64 stream2 -- {bytecount: 1693, filecount: 11, requirements: generaldelta%2Crevlog-compression-zstd%2Crevlogv1%2Csparserevlog} (mandatory: True) (stream-v2 zstd no-rust !)
65 65 stream2 -- {bytecount: 1693, filecount: 11, requirements: generaldelta%2Crevlog-compression-zstd%2Crevlogv1%2Csparserevlog} (mandatory: True) (stream-v2 rust !)
66 66 stream3-exp -- {bytecount: 1693, filecount: 11, requirements: generaldelta%2Crevlogv1%2Csparserevlog} (mandatory: True) (stream-v3 no-zstd !)
67 67 stream3-exp -- {bytecount: 1693, filecount: 11, requirements: generaldelta%2Crevlog-compression-zstd%2Crevlogv1%2Csparserevlog} (mandatory: True) (stream-v3 zstd no-rust !)
68 68 stream3-exp -- {bytecount: 1693, filecount: 11, requirements: generaldelta%2Crevlog-compression-zstd%2Crevlogv1%2Csparserevlog} (mandatory: True) (stream-v3 rust !)
69 69 $ hg debugbundle --spec bundle.hg
70 70 none-v2;stream=v2;requirements%3Dgeneraldelta%2Crevlogv1%2Csparserevlog (stream-v2 no-zstd !)
71 71 none-v2;stream=v2;requirements%3Dgeneraldelta%2Crevlog-compression-zstd%2Crevlogv1%2Csparserevlog (stream-v2 zstd no-rust !)
72 72 none-v2;stream=v2;requirements%3Dgeneraldelta%2Crevlog-compression-zstd%2Crevlogv1%2Csparserevlog (stream-v2 rust !)
73 73 none-v2;stream=v3-exp;requirements%3Dgeneraldelta%2Crevlogv1%2Csparserevlog (stream-v3 no-zstd !)
74 74 none-v2;stream=v3-exp;requirements%3Dgeneraldelta%2Crevlog-compression-zstd%2Crevlogv1%2Csparserevlog (stream-v3 zstd no-rust !)
75 75 none-v2;stream=v3-exp;requirements%3Dgeneraldelta%2Crevlog-compression-zstd%2Crevlogv1%2Csparserevlog (stream-v3 rust !)
76 76
77 77 Test that we can apply the bundle as a stream clone bundle
78 78
79 79 $ cat > .hg/clonebundles.manifest << EOF
80 80 > http://localhost:$HGPORT1/bundle.hg BUNDLESPEC=`hg debugbundle --spec bundle.hg`
81 81 > EOF
82 82
83 83 $ hg serve -d -p $HGPORT --pid-file hg.pid --accesslog access.log
84 84 $ cat hg.pid >> $DAEMON_PIDS
85 85
86 86 $ "$PYTHON" $TESTDIR/dumbhttp.py -p $HGPORT1 --pid http.pid
87 87 $ cat http.pid >> $DAEMON_PIDS
88 88
89 89 $ cd ..
90 90 $ hg clone http://localhost:$HGPORT stream-clone-implicit --debug
91 91 using http://localhost:$HGPORT/
92 92 sending capabilities command
93 sending clonebundles command
93 sending clonebundles_manifest command
94 94 applying clone bundle from http://localhost:$HGPORT1/bundle.hg
95 95 bundle2-input-bundle: with-transaction
96 96 bundle2-input-part: "stream2" (params: 3 mandatory) supported (stream-v2 !)
97 97 bundle2-input-part: "stream3-exp" (params: 3 mandatory) supported (stream-v3 !)
98 98 applying stream bundle
99 99 11 files to transfer, 1.65 KB of data
100 100 starting 4 threads for background file closing (?)
101 101 starting 4 threads for background file closing (?)
102 102 adding [s] data/A.i (66 bytes)
103 103 adding [s] data/B.i (66 bytes)
104 104 adding [s] data/C.i (66 bytes)
105 105 adding [s] data/D.i (66 bytes)
106 106 adding [s] data/E.i (66 bytes)
107 107 adding [s] phaseroots (43 bytes)
108 108 adding [s] 00manifest.i (584 bytes)
109 109 adding [s] 00changelog.i (595 bytes)
110 110 adding [c] branch2-served (94 bytes)
111 111 adding [c] rbc-names-v1 (7 bytes)
112 112 adding [c] rbc-revs-v1 (40 bytes)
113 113 transferred 1.65 KB in * seconds (* */sec) (glob)
114 114 bundle2-input-part: total payload size 1840
115 115 bundle2-input-bundle: 1 parts total
116 116 updating the branch cache
117 117 finished applying clone bundle
118 118 query 1; heads
119 119 sending batch command
120 120 searching for changes
121 121 all remote heads known locally
122 122 no changes found
123 123 sending getbundle command
124 124 bundle2-input-bundle: with-transaction
125 125 bundle2-input-part: "listkeys" (params: 1 mandatory) supported
126 126 bundle2-input-part: "phase-heads" supported
127 127 bundle2-input-part: total payload size 24
128 128 bundle2-input-bundle: 2 parts total
129 129 checking for updated bookmarks
130 130 updating to branch default
131 131 resolving manifests
132 132 branchmerge: False, force: False, partial: False
133 133 ancestor: 000000000000, local: 000000000000+, remote: 9bc730a19041
134 134 A: remote created -> g
135 135 getting A
136 136 B: remote created -> g
137 137 getting B
138 138 C: remote created -> g
139 139 getting C
140 140 D: remote created -> g
141 141 getting D
142 142 E: remote created -> g
143 143 getting E
144 144 5 files updated, 0 files merged, 0 files removed, 0 files unresolved
145 145 updating the branch cache
146 146 (sent 4 HTTP requests and * bytes; received * bytes in responses) (glob)
147 147
148 148 $ hg clone --stream http://localhost:$HGPORT stream-clone-explicit --debug
149 149 using http://localhost:$HGPORT/
150 150 sending capabilities command
151 sending clonebundles command
151 sending clonebundles_manifest command
152 152 applying clone bundle from http://localhost:$HGPORT1/bundle.hg
153 153 bundle2-input-bundle: with-transaction
154 154 bundle2-input-part: "stream2" (params: 3 mandatory) supported (stream-v2 !)
155 155 bundle2-input-part: "stream3-exp" (params: 3 mandatory) supported (stream-v3 !)
156 156 applying stream bundle
157 157 11 files to transfer, 1.65 KB of data
158 158 starting 4 threads for background file closing (?)
159 159 starting 4 threads for background file closing (?)
160 160 adding [s] data/A.i (66 bytes)
161 161 adding [s] data/B.i (66 bytes)
162 162 adding [s] data/C.i (66 bytes)
163 163 adding [s] data/D.i (66 bytes)
164 164 adding [s] data/E.i (66 bytes)
165 165 adding [s] phaseroots (43 bytes)
166 166 adding [s] 00manifest.i (584 bytes)
167 167 adding [s] 00changelog.i (595 bytes)
168 168 adding [c] branch2-served (94 bytes)
169 169 adding [c] rbc-names-v1 (7 bytes)
170 170 adding [c] rbc-revs-v1 (40 bytes)
171 171 transferred 1.65 KB in * seconds (* */sec) (glob)
172 172 bundle2-input-part: total payload size 1840
173 173 bundle2-input-bundle: 1 parts total
174 174 updating the branch cache
175 175 finished applying clone bundle
176 176 query 1; heads
177 177 sending batch command
178 178 searching for changes
179 179 all remote heads known locally
180 180 no changes found
181 181 sending getbundle command
182 182 bundle2-input-bundle: with-transaction
183 183 bundle2-input-part: "listkeys" (params: 1 mandatory) supported
184 184 bundle2-input-part: "phase-heads" supported
185 185 bundle2-input-part: total payload size 24
186 186 bundle2-input-bundle: 2 parts total
187 187 checking for updated bookmarks
188 188 updating to branch default
189 189 resolving manifests
190 190 branchmerge: False, force: False, partial: False
191 191 ancestor: 000000000000, local: 000000000000+, remote: 9bc730a19041
192 192 A: remote created -> g
193 193 getting A
194 194 B: remote created -> g
195 195 getting B
196 196 C: remote created -> g
197 197 getting C
198 198 D: remote created -> g
199 199 getting D
200 200 E: remote created -> g
201 201 getting E
202 202 5 files updated, 0 files merged, 0 files removed, 0 files unresolved
203 203 updating the branch cache
204 204 (sent 4 HTTP requests and * bytes; received * bytes in responses) (glob)