push: include a 'check:bookmarks' part when possible...
Boris Feld
r35260:ad5f2b92 default
1 1 # bundle2.py - generic container format to transmit arbitrary data.
2 2 #
3 3 # Copyright 2013 Facebook, Inc.
4 4 #
5 5 # This software may be used and distributed according to the terms of the
6 6 # GNU General Public License version 2 or any later version.
7 7 """Handling of the new bundle2 format
8 8
9 9 The goal of bundle2 is to act as an atomic packet to transmit a set of
10 10 payloads in an application agnostic way. It consists of a sequence of "parts"
11 11 that will be handed to and processed by the application layer.
12 12
13 13
14 14 General format architecture
15 15 ===========================
16 16
18 18 The format is structured as follows:
18 18
19 19 - magic string
20 20 - stream level parameters
21 21 - payload parts (any number)
22 22 - end of stream marker.
23 23
24 24 The binary format
25 25 ============================
26 26
27 27 All numbers are unsigned and big-endian.
28 28
29 29 stream level parameters
30 30 ------------------------
31 31
33 33 The binary format is as follows:
33 33
34 34 :params size: int32
35 35
36 36 The total number of bytes used by the parameters
37 37
38 38 :params value: arbitrary number of bytes
39 39
40 40 A blob of `params size` bytes containing the serialized version of all
41 41 stream level parameters.
42 42
43 43 The blob contains a space separated list of parameters. Parameters with value
44 44 are stored in the form `<name>=<value>`. Both name and value are urlquoted.
45 45
46 46 Empty names are forbidden.
47 47
48 48 Names MUST start with a letter. If this first letter is lower case, the
49 49 parameter is advisory and can be safely ignored. However, when the first
50 50 letter is capital, the parameter is mandatory and the bundling process MUST
51 51 stop if it is not able to process it.
52 52
53 53 Stream parameters use a simple textual format for two main reasons:
54 54
55 55 - Stream level parameters should remain simple and we want to discourage any
56 56 crazy usage.
57 57 - Textual data allow easy human inspection of a bundle2 header in case of
58 58 troubles.
59 59
60 60 Any application level options MUST go into a bundle2 part instead.
61 61
62 62 Payload part
63 63 ------------------------
64 64
65 65 The binary format is as follows:
66 66
67 67 :header size: int32
68 68
69 69 The total number of bytes used by the part header. When the header is empty
70 70 (size = 0) this is interpreted as the end of stream marker.
71 71
72 72 :header:
73 73
74 74 The header defines how to interpret the part. It contains two pieces of
75 75 data: the part type, and the part parameters.
76 76 
77 77 The part type is used to route the part to an application level handler
78 78 that can interpret the payload.
79 79
80 80 Part parameters are passed to the application level handler. They are
81 81 meant to convey information that will help the application level object to
82 82 interpret the part payload.
83 83
84 84 The binary format of the header is as follows:
85 85
86 86 :typesize: (one byte)
87 87
88 88 :parttype: alphanumerical part name (restricted to [a-zA-Z0-9_:-]*)
89 89
90 90 :partid: A 32-bit integer (unique in the bundle) that can be used to refer
91 91 to this part.
92 92
93 93 :parameters:
94 94
95 95 A part's parameters may have arbitrary content; the binary structure is::
96 96
97 97 <mandatory-count><advisory-count><param-sizes><param-data>
98 98
99 99 :mandatory-count: 1 byte, number of mandatory parameters
100 100
101 101 :advisory-count: 1 byte, number of advisory parameters
102 102
103 103 :param-sizes:
104 104
105 105 N pairs of bytes, where N is the total number of parameters. Each
106 106 pair contains (<size-of-key>, <size-of-value>) for one parameter.
107 107
108 108 :param-data:
109 109
110 110 A blob of bytes from which each parameter key and value can be
111 111 retrieved using the list of size pairs stored in the previous
112 112 field.
113 113
114 114 Mandatory parameters come first, then the advisory ones.
115 115
116 116 Each parameter's key MUST be unique within the part.
117 117
118 118 :payload:
119 119
120 120 payload is a series of `<chunksize><chunkdata>`.
121 121
122 122 `chunksize` is an int32, `chunkdata` are plain bytes (as many as
123 123 `chunksize` says). The payload part is concluded by a zero size chunk.
124 124
125 125 The current implementation always produces either zero or one chunk.
126 126 This is an implementation limitation that will ultimately be lifted.
127 127
128 128 `chunksize` can be negative to trigger special case processing. No such
129 129 processing is in place yet.
130 130
131 131 Bundle processing
132 132 ============================
133 133
134 134 Each part is processed in order using a "part handler". Handlers are
135 135 registered for a certain part type.
136 136
137 137 The matching of a part to its handler is case insensitive. The case of the
138 138 part type is used to know if a part is mandatory or advisory. If the part type
139 139 contains any uppercase char it is considered mandatory. When no handler is
140 140 known for a mandatory part, the process is aborted and an exception is raised.
141 141 If the part is advisory and no handler is known, the part is ignored. When the
142 142 process is aborted, the full bundle is still read from the stream to keep the
143 143 channel usable. But none of the parts read after an abort are processed. In
144 144 the future, dropping the stream may become an option for channels we do not
145 145 care to preserve.
146 146 """
147 147
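# The framing described in the docstring above (magic string, int32 stream
# parameter size, payload parts, zero-size part header as end-of-stream
# marker) can be sketched by hand-packing a minimal empty bundle. This is an
# illustrative reconstruction for documentation purposes, not how this module
# builds bundles (bundle20.getchunks does that); the function name is made up.

```python
import struct

def minimal_empty_bundle():
    """Hand-pack an empty bundle2 stream: magic string, a zero-length
    stream-parameter block, and a zero-size part header that acts as the
    end-of-stream marker."""
    chunks = [b'HG20']                   # magic string
    chunks.append(struct.pack('>i', 0))  # stream params size: 0 bytes
    chunks.append(struct.pack('>i', 0))  # empty part header = end of stream
    return b''.join(chunks)
```

# Such a stream decodes as a valid bundle2 container with zero parts.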
148 148 from __future__ import absolute_import, division
149 149
150 150 import errno
151 151 import os
152 152 import re
153 153 import string
154 154 import struct
155 155 import sys
156 156
157 157 from .i18n import _
158 158 from . import (
159 159 bookmarks,
160 160 changegroup,
161 161 error,
162 162 node as nodemod,
163 163 obsolete,
164 164 phases,
165 165 pushkey,
166 166 pycompat,
167 167 tags,
168 168 url,
169 169 util,
170 170 )
171 171
172 172 urlerr = util.urlerr
173 173 urlreq = util.urlreq
174 174
175 175 _pack = struct.pack
176 176 _unpack = struct.unpack
177 177
178 178 _fstreamparamsize = '>i'
179 179 _fpartheadersize = '>i'
180 180 _fparttypesize = '>B'
181 181 _fpartid = '>I'
182 182 _fpayloadsize = '>i'
183 183 _fpartparamcount = '>BB'
184 184
185 185 preferedchunksize = 4096
186 186
187 187 _parttypeforbidden = re.compile('[^a-zA-Z0-9_:-]')
188 188
189 189 def outdebug(ui, message):
190 190 """debug regarding output stream (bundling)"""
191 191 if ui.configbool('devel', 'bundle2.debug'):
192 192 ui.debug('bundle2-output: %s\n' % message)
193 193
194 194 def indebug(ui, message):
195 195 """debug on input stream (unbundling)"""
196 196 if ui.configbool('devel', 'bundle2.debug'):
197 197 ui.debug('bundle2-input: %s\n' % message)
198 198
199 199 def validateparttype(parttype):
200 200 """raise ValueError if a parttype contains invalid character"""
201 201 if _parttypeforbidden.search(parttype):
202 202 raise ValueError(parttype)
203 203
204 204 def _makefpartparamsizes(nbparams):
205 205 """return a struct format to read part parameter sizes
206 206
207 207 The number of parameters is variable so we need to build that format
208 208 dynamically.
209 209 """
210 210 return '>'+('BB'*nbparams)
211 211
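# As a quick illustration of the dynamic struct format built by
# _makefpartparamsizes: N (key-size, value-size) byte pairs, one per
# parameter. The helper is re-stated here so the snippet is self-contained.

```python
import struct

def makefpartparamsizes(nbparams):
    # one (key-size, value-size) unsigned-byte pair per parameter
    return '>' + ('BB' * nbparams)

fmt = makefpartparamsizes(2)           # '>BBBB' for two parameters
blob = struct.pack(fmt, 3, 5, 4, 0)    # sizes (3, 5) and (4, 0)
sizes = struct.unpack(fmt, blob)
pairs = list(zip(sizes[::2], sizes[1::2]))
```

# The param-data blob that follows is then sliced using these size pairs.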
212 212 parthandlermapping = {}
213 213
214 214 def parthandler(parttype, params=()):
215 215 """decorator that register a function as a bundle2 part handler
216 216
217 217 eg::
218 218
219 219 @parthandler('myparttype', ('mandatory', 'param', 'handled'))
220 220 def myparttypehandler(...):
221 221 '''process a part of type "my part".'''
222 222 ...
223 223 """
224 224 validateparttype(parttype)
225 225 def _decorator(func):
226 226 lparttype = parttype.lower() # enforce lower case matching.
227 227 assert lparttype not in parthandlermapping
228 228 parthandlermapping[lparttype] = func
229 229 func.params = frozenset(params)
230 230 return func
231 231 return _decorator
232 232
233 233 class unbundlerecords(object):
234 234 """keep record of what happens during and unbundle
235 235
236 236 New records are added using `records.add('cat', obj)`. Where 'cat' is a
237 237 category of record and obj is an arbitrary object.
238 238
239 239 `records['cat']` will return all entries of this category 'cat'.
240 240
241 241 Iterating on the object itself will yield `('category', obj)` tuples
242 242 for all entries.
243 243
244 244 All iterations happen in chronological order.
245 245 """
246 246
247 247 def __init__(self):
248 248 self._categories = {}
249 249 self._sequences = []
250 250 self._replies = {}
251 251
252 252 def add(self, category, entry, inreplyto=None):
253 253 """add a new record of a given category.
254 254
255 255 The entry can then be retrieved in the list returned by
256 256 self['category']."""
257 257 self._categories.setdefault(category, []).append(entry)
258 258 self._sequences.append((category, entry))
259 259 if inreplyto is not None:
260 260 self.getreplies(inreplyto).add(category, entry)
261 261
262 262 def getreplies(self, partid):
263 263 """get the records that are replies to a specific part"""
264 264 return self._replies.setdefault(partid, unbundlerecords())
265 265
266 266 def __getitem__(self, cat):
267 267 return tuple(self._categories.get(cat, ()))
268 268
269 269 def __iter__(self):
270 270 return iter(self._sequences)
271 271
272 272 def __len__(self):
273 273 return len(self._sequences)
274 274
275 275 def __nonzero__(self):
276 276 return bool(self._sequences)
277 277
278 278 __bool__ = __nonzero__
279 279
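# The record-keeping behaviour of unbundlerecords can be illustrated with a
# minimal standalone re-statement (reply tracking omitted); the class name
# `records` here is invented for the sketch.

```python
class records(object):
    """Minimal re-statement of unbundlerecords' category bookkeeping."""
    def __init__(self):
        self._categories = {}
        self._sequences = []
    def add(self, category, entry):
        # entries are kept both per-category and in chronological order
        self._categories.setdefault(category, []).append(entry)
        self._sequences.append((category, entry))
    def __getitem__(self, cat):
        return tuple(self._categories.get(cat, ()))
    def __iter__(self):
        return iter(self._sequences)

r = records()
r.add('changegroup', {'return': 1})
r.add('output', 'hello')
r.add('changegroup', {'return': 0})
```

# r['changegroup'] yields both changegroup entries in order, while iterating
# r itself interleaves all categories chronologically.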
280 280 class bundleoperation(object):
281 281 """an object that represents a single bundling process
282 282
283 283 Its purpose is to carry unbundle-related objects and states.
284 284
285 285 A new object should be created at the beginning of each bundle processing.
286 286 The object is to be returned by the processing function.
287 287
288 288 The object has very little content now; it will ultimately contain:
289 289 * an access to the repo the bundle is applied to,
290 290 * a ui object,
291 291 * a way to retrieve a transaction to add changes to the repo,
292 292 * a way to record the result of processing each part,
293 293 * a way to construct a bundle response when applicable.
294 294 """
295 295
296 296 def __init__(self, repo, transactiongetter, captureoutput=True):
297 297 self.repo = repo
298 298 self.ui = repo.ui
299 299 self.records = unbundlerecords()
300 300 self.reply = None
301 301 self.captureoutput = captureoutput
302 302 self.hookargs = {}
303 303 self._gettransaction = transactiongetter
304 304
305 305 def gettransaction(self):
306 306 transaction = self._gettransaction()
307 307
308 308 if self.hookargs:
309 309 # the ones added to the transaction supersede those added
310 310 # to the operation.
311 311 self.hookargs.update(transaction.hookargs)
312 312 transaction.hookargs = self.hookargs
313 313
314 314 # mark the hookargs as flushed. further attempts to add to
315 315 # hookargs will result in an abort.
316 316 self.hookargs = None
317 317
318 318 return transaction
319 319
320 320 def addhookargs(self, hookargs):
321 321 if self.hookargs is None:
322 322 raise error.ProgrammingError('attempted to add hookargs to '
323 323 'operation after transaction started')
324 324 self.hookargs.update(hookargs)
325 325
326 326 class TransactionUnavailable(RuntimeError):
327 327 pass
328 328
329 329 def _notransaction():
330 330 """default method to get a transaction while processing a bundle
331 331
332 332 Raise an exception to highlight the fact that no transaction was expected
333 333 to be created"""
334 334 raise TransactionUnavailable()
335 335
336 336 def applybundle(repo, unbundler, tr, source=None, url=None, **kwargs):
337 337 # transform me into unbundler.apply() as soon as the freeze is lifted
338 338 if isinstance(unbundler, unbundle20):
339 339 tr.hookargs['bundle2'] = '1'
340 340 if source is not None and 'source' not in tr.hookargs:
341 341 tr.hookargs['source'] = source
342 342 if url is not None and 'url' not in tr.hookargs:
343 343 tr.hookargs['url'] = url
344 344 return processbundle(repo, unbundler, lambda: tr)
345 345 else:
346 346 # the transactiongetter won't be used, but we might as well set it
347 347 op = bundleoperation(repo, lambda: tr)
348 348 _processchangegroup(op, unbundler, tr, source, url, **kwargs)
349 349 return op
350 350
351 351 class partiterator(object):
352 352 def __init__(self, repo, op, unbundler):
353 353 self.repo = repo
354 354 self.op = op
355 355 self.unbundler = unbundler
356 356 self.iterator = None
357 357 self.count = 0
358 358 self.current = None
359 359
360 360 def __enter__(self):
361 361 def func():
362 362 itr = enumerate(self.unbundler.iterparts())
363 363 for count, p in itr:
364 364 self.count = count
365 365 self.current = p
366 366 yield p
367 367 p.consume()
368 368 self.current = None
369 369 self.iterator = func()
370 370 return self.iterator
371 371
372 372 def __exit__(self, type, exc, tb):
373 373 if not self.iterator:
374 374 return
375 375
376 376 # Only gracefully abort in a normal exception situation. User aborts
377 377 # like Ctrl+C throw a KeyboardInterrupt which is not a base Exception,
378 378 # and should not gracefully cleanup.
379 379 if isinstance(exc, Exception):
380 380 # Any exceptions seeking to the end of the bundle at this point are
381 381 # almost certainly related to the underlying stream being bad.
382 382 # And, chances are that the exception we're handling is related to
383 383 # getting in that bad state. So, we swallow the seeking error and
384 384 # re-raise the original error.
385 385 seekerror = False
386 386 try:
387 387 if self.current:
388 388 # consume the part content to not corrupt the stream.
389 389 self.current.consume()
390 390
391 391 for part in self.iterator:
392 392 # consume the bundle content
393 393 part.consume()
394 394 except Exception:
395 395 seekerror = True
396 396
397 397 # Small hack to let caller code distinguish exceptions from bundle2
398 398 # processing from processing the old format. This is mostly needed
399 399 # to handle different return codes to unbundle according to the type
400 400 # of bundle. We should probably clean up or drop this return code
401 401 # craziness in a future version.
402 402 exc.duringunbundle2 = True
403 403 salvaged = []
404 404 replycaps = None
405 405 if self.op.reply is not None:
406 406 salvaged = self.op.reply.salvageoutput()
407 407 replycaps = self.op.reply.capabilities
408 408 exc._replycaps = replycaps
409 409 exc._bundle2salvagedoutput = salvaged
410 410
411 411 # Re-raising from a variable loses the original stack. So only use
412 412 # that form if we need to.
413 413 if seekerror:
414 414 raise exc
415 415
416 416 self.repo.ui.debug('bundle2-input-bundle: %i parts total\n' %
417 417 self.count)
418 418
419 419 def processbundle(repo, unbundler, transactiongetter=None, op=None):
420 420 """This function process a bundle, apply effect to/from a repo
421 421
422 422 It iterates over each part then searches for and uses the proper handling
423 423 code to process the part. Parts are processed in order.
424 424
425 425 An unknown mandatory part will abort the process.
426 426
427 427 It is temporarily possible to provide a prebuilt bundleoperation to the
428 428 function. This is used to ensure output is properly propagated in case of
429 429 an error during the unbundling. This output capturing part will likely be
430 430 reworked and this ability will probably go away in the process.
431 431 """
432 432 if op is None:
433 433 if transactiongetter is None:
434 434 transactiongetter = _notransaction
435 435 op = bundleoperation(repo, transactiongetter)
436 436 # todo:
437 437 # - replace this with an init function soon.
438 438 # - exception catching
439 439 unbundler.params
440 440 if repo.ui.debugflag:
441 441 msg = ['bundle2-input-bundle:']
442 442 if unbundler.params:
443 443 msg.append(' %i params' % len(unbundler.params))
444 444 if op._gettransaction is None or op._gettransaction is _notransaction:
445 445 msg.append(' no-transaction')
446 446 else:
447 447 msg.append(' with-transaction')
448 448 msg.append('\n')
449 449 repo.ui.debug(''.join(msg))
450 450
451 451 processparts(repo, op, unbundler)
452 452
453 453 return op
454 454
455 455 def processparts(repo, op, unbundler):
456 456 with partiterator(repo, op, unbundler) as parts:
457 457 for part in parts:
458 458 _processpart(op, part)
459 459
460 460 def _processchangegroup(op, cg, tr, source, url, **kwargs):
461 461 ret = cg.apply(op.repo, tr, source, url, **kwargs)
462 462 op.records.add('changegroup', {
463 463 'return': ret,
464 464 })
465 465 return ret
466 466
467 467 def _gethandler(op, part):
468 468 status = 'unknown' # used by debug output
469 469 try:
470 470 handler = parthandlermapping.get(part.type)
471 471 if handler is None:
472 472 status = 'unsupported-type'
473 473 raise error.BundleUnknownFeatureError(parttype=part.type)
474 474 indebug(op.ui, 'found a handler for part %s' % part.type)
475 475 unknownparams = part.mandatorykeys - handler.params
476 476 if unknownparams:
477 477 unknownparams = list(unknownparams)
478 478 unknownparams.sort()
479 479 status = 'unsupported-params (%s)' % ', '.join(unknownparams)
480 480 raise error.BundleUnknownFeatureError(parttype=part.type,
481 481 params=unknownparams)
482 482 status = 'supported'
483 483 except error.BundleUnknownFeatureError as exc:
484 484 if part.mandatory: # mandatory parts
485 485 raise
486 486 indebug(op.ui, 'ignoring unsupported advisory part %s' % exc)
487 487 return # skip to part processing
488 488 finally:
489 489 if op.ui.debugflag:
490 490 msg = ['bundle2-input-part: "%s"' % part.type]
491 491 if not part.mandatory:
492 492 msg.append(' (advisory)')
493 493 nbmp = len(part.mandatorykeys)
494 494 nbap = len(part.params) - nbmp
495 495 if nbmp or nbap:
496 496 msg.append(' (params:')
497 497 if nbmp:
498 498 msg.append(' %i mandatory' % nbmp)
499 499 if nbap:
500 500 msg.append(' %i advisory' % nbap)
501 501 msg.append(')')
502 502 msg.append(' %s\n' % status)
503 503 op.ui.debug(''.join(msg))
504 504
505 505 return handler
506 506
507 507 def _processpart(op, part):
508 508 """process a single part from a bundle
509 509
510 510 The part is guaranteed to have been fully consumed when the function exits
511 511 (even if an exception is raised)."""
512 512 handler = _gethandler(op, part)
513 513 if handler is None:
514 514 return
515 515
516 516 # handler is called outside the above try block so that we don't
517 517 # risk catching KeyErrors from anything other than the
518 518 # parthandlermapping lookup (any KeyError raised by handler()
519 519 # itself represents a defect of a different variety).
520 520 output = None
521 521 if op.captureoutput and op.reply is not None:
522 522 op.ui.pushbuffer(error=True, subproc=True)
523 523 output = ''
524 524 try:
525 525 handler(op, part)
526 526 finally:
527 527 if output is not None:
528 528 output = op.ui.popbuffer()
529 529 if output:
530 530 outpart = op.reply.newpart('output', data=output,
531 531 mandatory=False)
532 532 outpart.addparam(
533 533 'in-reply-to', pycompat.bytestr(part.id), mandatory=False)
534 534
535 535 def decodecaps(blob):
536 536 """decode a bundle2 caps bytes blob into a dictionary
537 537
538 538 The blob is a list of capabilities (one per line)
539 539 Capabilities may have values using a line of the form::
540 540
541 541 capability=value1,value2,value3
542 542
543 543 The values are always a list."""
544 544 caps = {}
545 545 for line in blob.splitlines():
546 546 if not line:
547 547 continue
548 548 if '=' not in line:
549 549 key, vals = line, ()
550 550 else:
551 551 key, vals = line.split('=', 1)
552 552 vals = vals.split(',')
553 553 key = urlreq.unquote(key)
554 554 vals = [urlreq.unquote(v) for v in vals]
555 555 caps[key] = vals
556 556 return caps
557 557
558 558 def encodecaps(caps):
559 559 """encode a bundle2 caps dictionary into a bytes blob"""
560 560 chunks = []
561 561 for ca in sorted(caps):
562 562 vals = caps[ca]
563 563 ca = urlreq.quote(ca)
564 564 vals = [urlreq.quote(v) for v in vals]
565 565 if vals:
566 566 ca = "%s=%s" % (ca, ','.join(vals))
567 567 chunks.append(ca)
568 568 return '\n'.join(chunks)
569 569
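# The caps blob format handled by decodecaps/encodecaps round-trips cleanly.
# This standalone sketch mirrors the two functions above, substituting
# Python 3's urllib.parse quoting for the module's urlreq aliases.

```python
from urllib.parse import quote, unquote

def encodecaps(caps):
    """encode a caps dictionary into a newline-separated bytes-style blob"""
    chunks = []
    for ca in sorted(caps):
        vals = [quote(v) for v in caps[ca]]
        ca = quote(ca)
        if vals:
            ca = "%s=%s" % (ca, ','.join(vals))
        chunks.append(ca)
    return '\n'.join(chunks)

def decodecaps(blob):
    """decode a caps blob back into a dictionary of value lists"""
    caps = {}
    for line in blob.splitlines():
        if not line:
            continue
        if '=' not in line:
            key, vals = line, ()
        else:
            key, vals = line.split('=', 1)
            vals = vals.split(',')
        caps[unquote(key)] = [unquote(v) for v in vals]
    return caps

caps = {'HG20': [], 'changegroup': ['01', '02']}
blob = encodecaps(caps)
```

# A capability with no values encodes as a bare name; values are always
# decoded back into a list.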
570 570 bundletypes = {
571 571 "": ("", 'UN'), # only when using unbundle on ssh and old http servers
572 572 # since the unification ssh accepts a header but there
573 573 # is no capability signaling it.
574 574 "HG20": (), # special-cased below
575 575 "HG10UN": ("HG10UN", 'UN'),
576 576 "HG10BZ": ("HG10", 'BZ'),
577 577 "HG10GZ": ("HG10GZ", 'GZ'),
578 578 }
579 579
580 580 # hgweb uses this list to communicate its preferred type
581 581 bundlepriority = ['HG10GZ', 'HG10BZ', 'HG10UN']
582 582
583 583 class bundle20(object):
584 584 """represent an outgoing bundle2 container
585 585
586 586 Use the `addparam` method to add a stream level parameter, and `newpart` to
587 587 populate it. Then call `getchunks` to retrieve all the binary chunks of
588 588 data that compose the bundle2 container."""
589 589
590 590 _magicstring = 'HG20'
591 591
592 592 def __init__(self, ui, capabilities=()):
593 593 self.ui = ui
594 594 self._params = []
595 595 self._parts = []
596 596 self.capabilities = dict(capabilities)
597 597 self._compengine = util.compengines.forbundletype('UN')
598 598 self._compopts = None
599 599
600 600 def setcompression(self, alg, compopts=None):
601 601 """setup core part compression to <alg>"""
602 602 if alg in (None, 'UN'):
603 603 return
604 604 assert not any(n.lower() == 'compression' for n, v in self._params)
605 605 self.addparam('Compression', alg)
606 606 self._compengine = util.compengines.forbundletype(alg)
607 607 self._compopts = compopts
608 608
609 609 @property
610 610 def nbparts(self):
611 611 """total number of parts added to the bundler"""
612 612 return len(self._parts)
613 613
614 614 # methods used to define the bundle2 content
615 615 def addparam(self, name, value=None):
616 616 """add a stream level parameter"""
617 617 if not name:
618 618 raise ValueError(r'empty parameter name')
619 619 if name[0:1] not in pycompat.bytestr(string.ascii_letters):
620 620 raise ValueError(r'non letter first character: %s' % name)
621 621 self._params.append((name, value))
622 622
623 623 def addpart(self, part):
624 624 """add a new part to the bundle2 container
625 625
626 626 Parts contain the actual applicative payload.
627 627 assert part.id is None
628 628 part.id = len(self._parts) # very cheap counter
629 629 self._parts.append(part)
630 630
631 631 def newpart(self, typeid, *args, **kwargs):
632 632 """create a new part and add it to the containers
633 633
634 634 As the part is directly added to the containers. For now, this means
635 635 that any failure to properly initialize the part after calling
636 636 ``newpart`` should result in a failure of the whole bundling process.
637 637
638 638 You can still fall back to manually create and add if you need better
639 639 control."""
640 640 part = bundlepart(typeid, *args, **kwargs)
641 641 self.addpart(part)
642 642 return part
643 643
644 644 # methods used to generate the bundle2 stream
645 645 def getchunks(self):
646 646 if self.ui.debugflag:
647 647 msg = ['bundle2-output-bundle: "%s",' % self._magicstring]
648 648 if self._params:
649 649 msg.append(' (%i params)' % len(self._params))
650 650 msg.append(' %i parts total\n' % len(self._parts))
651 651 self.ui.debug(''.join(msg))
652 652 outdebug(self.ui, 'start emission of %s stream' % self._magicstring)
653 653 yield self._magicstring
654 654 param = self._paramchunk()
655 655 outdebug(self.ui, 'bundle parameter: %s' % param)
656 656 yield _pack(_fstreamparamsize, len(param))
657 657 if param:
658 658 yield param
659 659 for chunk in self._compengine.compressstream(self._getcorechunk(),
660 660 self._compopts):
661 661 yield chunk
662 662
663 663 def _paramchunk(self):
664 664 """return a encoded version of all stream parameters"""
665 665 blocks = []
666 666 for par, value in self._params:
667 667 par = urlreq.quote(par)
668 668 if value is not None:
669 669 value = urlreq.quote(value)
670 670 par = '%s=%s' % (par, value)
671 671 blocks.append(par)
672 672 return ' '.join(blocks)
673 673
674 674 def _getcorechunk(self):
675 675 """yield chunk for the core part of the bundle
676 676
677 677 (all but headers and parameters)"""
678 678 outdebug(self.ui, 'start of parts')
679 679 for part in self._parts:
680 680 outdebug(self.ui, 'bundle part: "%s"' % part.type)
681 681 for chunk in part.getchunks(ui=self.ui):
682 682 yield chunk
683 683 outdebug(self.ui, 'end of bundle')
684 684 yield _pack(_fpartheadersize, 0)
685 685
686 686
687 687 def salvageoutput(self):
688 688 """return a list with a copy of all output parts in the bundle
689 689
690 690 This is meant to be used during error handling to make sure we preserve
691 691 server output"""
692 692 salvaged = []
693 693 for part in self._parts:
694 694 if part.type.startswith('output'):
695 695 salvaged.append(part.copy())
696 696 return salvaged
697 697
698 698
699 699 class unpackermixin(object):
700 700 """A mixin to extract bytes and struct data from a stream"""
701 701
702 702 def __init__(self, fp):
703 703 self._fp = fp
704 704
705 705 def _unpack(self, format):
706 706 """unpack this struct format from the stream
707 707
708 708 This method is meant for internal usage by the bundle2 protocol only.
709 709 It directly manipulates the low level stream, including bundle2 level
710 710 instructions.
711 711
712 712 Do not use it to implement higher-level logic or methods."""
713 713 data = self._readexact(struct.calcsize(format))
714 714 return _unpack(format, data)
715 715
716 716 def _readexact(self, size):
717 717 """read exactly <size> bytes from the stream
718 718
719 719 This method is meant for internal usage by the bundle2 protocol only.
720 720 It directly manipulates the low level stream, including bundle2 level
721 721 instructions.
722 722
723 723 Do not use it to implement higher-level logic or methods."""
724 724 return changegroup.readexactly(self._fp, size)
725 725
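# The `<chunksize><chunkdata>` payload framing described in the module
# docstring can be consumed like this: a standalone sketch using io.BytesIO,
# ignoring the negative-size special cases (interrupts) which the module
# handles elsewhere.

```python
import io
import struct

def iterchunks(fp):
    """Yield payload chunks until the zero-size terminating chunk."""
    while True:
        size = struct.unpack('>i', fp.read(4))[0]
        if size == 0:        # zero-size chunk concludes the payload
            return
        if size < 0:
            raise ValueError('negative chunk size: %i' % size)
        yield fp.read(size)

payload = (struct.pack('>i', 3) + b'foo'
           + struct.pack('>i', 4) + b'barz'
           + struct.pack('>i', 0))
chunks = list(iterchunks(io.BytesIO(payload)))
```

# Each chunk is length-prefixed, so the reader never needs a delimiter scan.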
726 726 def getunbundler(ui, fp, magicstring=None):
727 727 """return a valid unbundler object for a given magicstring"""
728 728 if magicstring is None:
729 729 magicstring = changegroup.readexactly(fp, 4)
730 730 magic, version = magicstring[0:2], magicstring[2:4]
731 731 if magic != 'HG':
732 732 ui.debug(
733 733 "error: invalid magic: %r (version %r), should be 'HG'\n"
734 734 % (magic, version))
735 735 raise error.Abort(_('not a Mercurial bundle'))
736 736 unbundlerclass = formatmap.get(version)
737 737 if unbundlerclass is None:
738 738 raise error.Abort(_('unknown bundle version %s') % version)
739 739 unbundler = unbundlerclass(ui, fp)
740 740 indebug(ui, 'start processing of %s stream' % magicstring)
741 741 return unbundler
742 742
743 743 class unbundle20(unpackermixin):
744 744 """interpret a bundle2 stream
745 745
746 746 This class is fed with a binary stream and yields parts through its
747 747 `iterparts` method."""
748 748
749 749 _magicstring = 'HG20'
750 750
751 751 def __init__(self, ui, fp):
752 752 """If header is specified, we do not read it out of the stream."""
753 753 self.ui = ui
754 754 self._compengine = util.compengines.forbundletype('UN')
755 755 self._compressed = None
756 756 super(unbundle20, self).__init__(fp)
757 757
758 758 @util.propertycache
759 759 def params(self):
760 760 """dictionary of stream level parameters"""
761 761 indebug(self.ui, 'reading bundle2 stream parameters')
762 762 params = {}
763 763 paramssize = self._unpack(_fstreamparamsize)[0]
764 764 if paramssize < 0:
765 765 raise error.BundleValueError('negative bundle param size: %i'
766 766 % paramssize)
767 767 if paramssize:
768 768 params = self._readexact(paramssize)
769 769 params = self._processallparams(params)
770 770 return params
771 771
772 772 def _processallparams(self, paramsblock):
773 773 """"""
774 774 params = util.sortdict()
775 775 for p in paramsblock.split(' '):
776 776 p = p.split('=', 1)
777 777 p = [urlreq.unquote(i) for i in p]
778 778 if len(p) < 2:
779 779 p.append(None)
780 780 self._processparam(*p)
781 781 params[p[0]] = p[1]
782 782 return params
783 783
784 784
785 785 def _processparam(self, name, value):
786 786 """process a parameter, applying its effect if needed
787 787
788 788 Parameters starting with a lower case letter are advisory and will be
789 789 ignored when unknown. Those starting with an upper case letter are
790 790 mandatory and this function will raise a KeyError when they are unknown.
791 791 
792 792 Note: no options are currently supported. Any input will either be
793 793 ignored or fail.
794 794 """
795 795 if not name:
796 796 raise ValueError(r'empty parameter name')
797 797 if name[0:1] not in pycompat.bytestr(string.ascii_letters):
798 798 raise ValueError(r'non letter first character: %s' % name)
799 799 try:
800 800 handler = b2streamparamsmap[name.lower()]
801 801 except KeyError:
802 802 if name[0:1].islower():
803 803 indebug(self.ui, "ignoring unknown parameter %s" % name)
804 804 else:
805 805 raise error.BundleUnknownFeatureError(params=(name,))
806 806 else:
807 807 handler(self, name, value)
808 808
809 809 def _forwardchunks(self):
810 810 """utility to transfer a bundle2 as binary
811 811
812 812 This is made necessary by the fact that the 'getbundle' command over 'ssh'
813 813 has no way to know when the reply ends, relying on the bundle being
814 814 interpreted to know its end. This is terrible and we are sorry, but we
815 815 needed to move forward to get general delta enabled.
816 816 """
817 817 yield self._magicstring
818 818 assert 'params' not in vars(self)
819 819 paramssize = self._unpack(_fstreamparamsize)[0]
820 820 if paramssize < 0:
821 821 raise error.BundleValueError('negative bundle param size: %i'
822 822 % paramssize)
823 823 yield _pack(_fstreamparamsize, paramssize)
824 824 if paramssize:
825 825 params = self._readexact(paramssize)
826 826 self._processallparams(params)
827 827 yield params
828 828 assert self._compengine.bundletype == 'UN'
829 829 # From there, payload might need to be decompressed
830 830 self._fp = self._compengine.decompressorreader(self._fp)
831 831 emptycount = 0
832 832 while emptycount < 2:
833 833 # so we can brainlessly loop
834 834 assert _fpartheadersize == _fpayloadsize
835 835 size = self._unpack(_fpartheadersize)[0]
836 836 yield _pack(_fpartheadersize, size)
837 837 if size:
838 838 emptycount = 0
839 839 else:
840 840 emptycount += 1
841 841 continue
842 842 if size == flaginterrupt:
843 843 continue
844 844 elif size < 0:
845 845 raise error.BundleValueError('negative chunk size: %i' % size)
846 846 yield self._readexact(size)
847 847
848 848
849 849 def iterparts(self, seekable=False):
850 850 """yield all parts contained in the stream"""
851 851 cls = seekableunbundlepart if seekable else unbundlepart
852 852 # make sure params have been loaded
853 853 self.params
854 854 # From there, payload needs to be decompressed
855 855 self._fp = self._compengine.decompressorreader(self._fp)
856 856 indebug(self.ui, 'start extraction of bundle2 parts')
857 857 headerblock = self._readpartheader()
858 858 while headerblock is not None:
859 859 part = cls(self.ui, headerblock, self._fp)
860 860 yield part
861 861 # Ensure part is fully consumed so we can start reading the next
862 862 # part.
863 863 part.consume()
864 864
865 865 headerblock = self._readpartheader()
866 866 indebug(self.ui, 'end of bundle2 stream')
867 867
868 868 def _readpartheader(self):
869 869 """reads a part header size and return the bytes blob
870 870
871 871 returns None if empty"""
872 872 headersize = self._unpack(_fpartheadersize)[0]
873 873 if headersize < 0:
874 874 raise error.BundleValueError('negative part header size: %i'
875 875 % headersize)
876 876 indebug(self.ui, 'part header size: %i' % headersize)
877 877 if headersize:
878 878 return self._readexact(headersize)
879 879 return None
880 880
881 881 def compressed(self):
882 882 self.params # load params
883 883 return self._compressed
884 884
885 885 def close(self):
886 886 """close underlying file"""
887 887 if util.safehasattr(self._fp, 'close'):
888 888 return self._fp.close()
889 889
890 890 formatmap = {'20': unbundle20}
891 891
892 892 b2streamparamsmap = {}
893 893
894 894 def b2streamparamhandler(name):
895 895 """register a handler for a stream level parameter"""
896 896 def decorator(func):
897 897 assert name not in b2streamparamsmap
898 898 b2streamparamsmap[name] = func
899 899 return func
900 900 return decorator
901 901
902 902 @b2streamparamhandler('compression')
903 903 def processcompression(unbundler, param, value):
904 904 """read compression parameter and install payload decompression"""
905 905 if value not in util.compengines.supportedbundletypes:
906 906 raise error.BundleUnknownFeatureError(params=(param,),
907 907 values=(value,))
908 908 unbundler._compengine = util.compengines.forbundletype(value)
909 909 if value is not None:
910 910 unbundler._compressed = True
911 911
912 912 class bundlepart(object):
913 913 """A bundle2 part contains application level payload
914 914
915 915 The part `type` is used to route the part to the application level
916 916 handler.
917 917
918 918 The part payload is contained in ``part.data``. It could be raw bytes or a
919 919 generator of byte chunks.
920 920
921 921 You can add parameters to the part using the ``addparam`` method.
922 922 Parameters can be either mandatory (default) or advisory. Remote side
923 923 should be able to safely ignore the advisory ones.
924 924
925 925 Neither data nor parameters can be modified after generation has begun.
926 926 """
927 927
928 928 def __init__(self, parttype, mandatoryparams=(), advisoryparams=(),
929 929 data='', mandatory=True):
930 930 validateparttype(parttype)
931 931 self.id = None
932 932 self.type = parttype
933 933 self._data = data
934 934 self._mandatoryparams = list(mandatoryparams)
935 935 self._advisoryparams = list(advisoryparams)
936 936 # checking for duplicated entries
937 937 self._seenparams = set()
938 938 for pname, __ in self._mandatoryparams + self._advisoryparams:
939 939 if pname in self._seenparams:
940 940 raise error.ProgrammingError('duplicated params: %s' % pname)
941 941 self._seenparams.add(pname)
942 942 # status of the part's generation:
943 943 # - None: not started,
944 944 # - False: currently generated,
945 945 # - True: generation done.
946 946 self._generated = None
947 947 self.mandatory = mandatory
948 948
949 949 def __repr__(self):
950 950 cls = "%s.%s" % (self.__class__.__module__, self.__class__.__name__)
951 951 return ('<%s object at %x; id: %s; type: %s; mandatory: %s>'
952 952 % (cls, id(self), self.id, self.type, self.mandatory))
953 953
954 954 def copy(self):
955 955 """return a copy of the part
956 956
957 957 The new part has the very same content but no partid assigned yet.
958 958 Parts with generated data cannot be copied."""
959 959 assert not util.safehasattr(self.data, 'next')
960 960 return self.__class__(self.type, self._mandatoryparams,
961 961 self._advisoryparams, self._data, self.mandatory)
962 962
963 963 # methods used to define the part content
964 964 @property
965 965 def data(self):
966 966 return self._data
967 967
968 968 @data.setter
969 969 def data(self, data):
970 970 if self._generated is not None:
971 971 raise error.ReadOnlyPartError('part is being generated')
972 972 self._data = data
973 973
974 974 @property
975 975 def mandatoryparams(self):
976 976 # make it an immutable tuple to force people through ``addparam``
977 977 return tuple(self._mandatoryparams)
978 978
979 979 @property
980 980 def advisoryparams(self):
981 981 # make it an immutable tuple to force people through ``addparam``
982 982 return tuple(self._advisoryparams)
983 983
984 984 def addparam(self, name, value='', mandatory=True):
985 985 """add a parameter to the part
986 986
987 987 If 'mandatory' is set to True, the remote handler must claim support
988 988 for this parameter or the unbundling will be aborted.
989 989
990 990 The 'name' and 'value' cannot exceed 255 bytes each.
991 991 """
992 992 if self._generated is not None:
993 993 raise error.ReadOnlyPartError('part is being generated')
994 994 if name in self._seenparams:
995 995 raise ValueError('duplicated params: %s' % name)
996 996 self._seenparams.add(name)
997 997 params = self._advisoryparams
998 998 if mandatory:
999 999 params = self._mandatoryparams
1000 1000 params.append((name, value))
1001 1001
1002 1002 # methods used to generate the bundle2 stream
1003 1003 def getchunks(self, ui):
1004 1004 if self._generated is not None:
1005 1005 raise error.ProgrammingError('part can only be consumed once')
1006 1006 self._generated = False
1007 1007
1008 1008 if ui.debugflag:
1009 1009 msg = ['bundle2-output-part: "%s"' % self.type]
1010 1010 if not self.mandatory:
1011 1011 msg.append(' (advisory)')
1012 1012 nbmp = len(self.mandatoryparams)
1013 1013 nbap = len(self.advisoryparams)
1014 1014 if nbmp or nbap:
1015 1015 msg.append(' (params:')
1016 1016 if nbmp:
1017 1017 msg.append(' %i mandatory' % nbmp)
1018 1018 if nbap:
1019 1019 msg.append(' %i advisory' % nbap)
1020 1020 msg.append(')')
1021 1021 if not self.data:
1022 1022 msg.append(' empty payload')
1023 1023 elif (util.safehasattr(self.data, 'next')
1024 1024 or util.safehasattr(self.data, '__next__')):
1025 1025 msg.append(' streamed payload')
1026 1026 else:
1027 1027 msg.append(' %i bytes payload' % len(self.data))
1028 1028 msg.append('\n')
1029 1029 ui.debug(''.join(msg))
1030 1030
1031 1031 #### header
1032 1032 if self.mandatory:
1033 1033 parttype = self.type.upper()
1034 1034 else:
1035 1035 parttype = self.type.lower()
1036 1036 outdebug(ui, 'part %s: "%s"' % (pycompat.bytestr(self.id), parttype))
1037 1037 ## parttype
1038 1038 header = [_pack(_fparttypesize, len(parttype)),
1039 1039 parttype, _pack(_fpartid, self.id),
1040 1040 ]
1041 1041 ## parameters
1042 1042 # count
1043 1043 manpar = self.mandatoryparams
1044 1044 advpar = self.advisoryparams
1045 1045 header.append(_pack(_fpartparamcount, len(manpar), len(advpar)))
1046 1046 # size
1047 1047 parsizes = []
1048 1048 for key, value in manpar:
1049 1049 parsizes.append(len(key))
1050 1050 parsizes.append(len(value))
1051 1051 for key, value in advpar:
1052 1052 parsizes.append(len(key))
1053 1053 parsizes.append(len(value))
1054 1054 paramsizes = _pack(_makefpartparamsizes(len(parsizes) // 2), *parsizes)
1055 1055 header.append(paramsizes)
1056 1056 # key, value
1057 1057 for key, value in manpar:
1058 1058 header.append(key)
1059 1059 header.append(value)
1060 1060 for key, value in advpar:
1061 1061 header.append(key)
1062 1062 header.append(value)
1063 1063 ## finalize header
1064 1064 try:
1065 1065 headerchunk = ''.join(header)
1066 1066 except TypeError:
1067 1067 raise TypeError(r'Found a non-bytes trying to '
1068 1068 r'build bundle part header: %r' % header)
1069 1069 outdebug(ui, 'header chunk size: %i' % len(headerchunk))
1070 1070 yield _pack(_fpartheadersize, len(headerchunk))
1071 1071 yield headerchunk
1072 1072 ## payload
1073 1073 try:
1074 1074 for chunk in self._payloadchunks():
1075 1075 outdebug(ui, 'payload chunk size: %i' % len(chunk))
1076 1076 yield _pack(_fpayloadsize, len(chunk))
1077 1077 yield chunk
1078 1078 except GeneratorExit:
1079 1079 # GeneratorExit means that nobody is listening for our
1080 1080 # results anyway, so just bail quickly rather than trying
1081 1081 # to produce an error part.
1082 1082 ui.debug('bundle2-generatorexit\n')
1083 1083 raise
1084 1084 except BaseException as exc:
1085 1085 bexc = util.forcebytestr(exc)
1086 1086 # backup exception data for later
1087 1087 ui.debug('bundle2-input-stream-interrupt: encoding exception %s'
1088 1088 % bexc)
1089 1089 tb = sys.exc_info()[2]
1090 1090 msg = 'unexpected error: %s' % bexc
1091 1091 interpart = bundlepart('error:abort', [('message', msg)],
1092 1092 mandatory=False)
1093 1093 interpart.id = 0
1094 1094 yield _pack(_fpayloadsize, -1)
1095 1095 for chunk in interpart.getchunks(ui=ui):
1096 1096 yield chunk
1097 1097 outdebug(ui, 'closing payload chunk')
1098 1098 # abort current part payload
1099 1099 yield _pack(_fpayloadsize, 0)
1100 1100 pycompat.raisewithtb(exc, tb)
1101 1101 # end of payload
1102 1102 outdebug(ui, 'closing payload chunk')
1103 1103 yield _pack(_fpayloadsize, 0)
1104 1104 self._generated = True
1105 1105
1106 1106 def _payloadchunks(self):
1107 1107 """yield chunks of the part payload
1108 1108
1109 1109 Exists to handle the different ways data can be provided to a part."""
1110 1110 # we only support fixed size data now.
1111 1111 # This will be improved in the future.
1112 1112 if (util.safehasattr(self.data, 'next')
1113 1113 or util.safehasattr(self.data, '__next__')):
1114 1114 buff = util.chunkbuffer(self.data)
1115 1115 chunk = buff.read(preferedchunksize)
1116 1116 while chunk:
1117 1117 yield chunk
1118 1118 chunk = buff.read(preferedchunksize)
1119 1119 elif len(self.data):
1120 1120 yield self.data
1121 1121
1122 1122
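# The header assembled in getchunks above follows a fixed binary layout:
# uint8 type length, the type itself, int32 part id, uint8 mandatory and
# advisory parameter counts, uint8 key/value sizes, then the raw keys and
# values. The sketch below packs that layout with plain struct calls,
# assuming the formats match bundle2's _fparttypesize ('>B'), _fpartid
# ('>i') and _fpartparamcount ('>BB'); the helper name is hypothetical and
# the int32 length prefix that frames the whole header is omitted.

```python
import struct

def packpartheader(parttype, partid, manpar=(), advpar=()):
    """Pack a simplified bundle2-style part header (illustrative sketch)."""
    header = [struct.pack('>B', len(parttype)), parttype,
              struct.pack('>i', partid),
              struct.pack('>BB', len(manpar), len(advpar))]
    # one uint8 size for each key and each value, mandatory params first
    sizes = []
    for key, value in list(manpar) + list(advpar):
        sizes.extend([len(key), len(value)])
    header.append(struct.pack('>' + 'BB' * (len(sizes) // 2), *sizes))
    # the raw keys and values follow, in the same order as their sizes
    for key, value in list(manpar) + list(advpar):
        header.extend([key, value])
    return b''.join(header)
```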
1123 1123 flaginterrupt = -1
1124 1124
1125 1125 class interrupthandler(unpackermixin):
1126 1126 """read one part and process it with restricted capability
1127 1127
1128 1128 This allows transmitting exceptions raised on the producer side during part
1129 1129 iteration while the consumer is reading a part.
1130 1130
1131 1131 Parts processed in this manner only have access to a ui object."""
1132 1132
1133 1133 def __init__(self, ui, fp):
1134 1134 super(interrupthandler, self).__init__(fp)
1135 1135 self.ui = ui
1136 1136
1137 1137 def _readpartheader(self):
1138 1138 """reads a part header size and return the bytes blob
1139 1139
1140 1140 returns None if empty"""
1141 1141 headersize = self._unpack(_fpartheadersize)[0]
1142 1142 if headersize < 0:
1143 1143 raise error.BundleValueError('negative part header size: %i'
1144 1144 % headersize)
1145 1145 indebug(self.ui, 'part header size: %i\n' % headersize)
1146 1146 if headersize:
1147 1147 return self._readexact(headersize)
1148 1148 return None
1149 1149
1150 1150 def __call__(self):
1151 1151
1152 1152 self.ui.debug('bundle2-input-stream-interrupt:'
1153 1153 ' opening out of band context\n')
1154 1154 indebug(self.ui, 'bundle2 stream interruption, looking for a part.')
1155 1155 headerblock = self._readpartheader()
1156 1156 if headerblock is None:
1157 1157 indebug(self.ui, 'no part found during interruption.')
1158 1158 return
1159 1159 part = unbundlepart(self.ui, headerblock, self._fp)
1160 1160 op = interruptoperation(self.ui)
1161 1161 hardabort = False
1162 1162 try:
1163 1163 _processpart(op, part)
1164 1164 except (SystemExit, KeyboardInterrupt):
1165 1165 hardabort = True
1166 1166 raise
1167 1167 finally:
1168 1168 if not hardabort:
1169 1169 part.consume()
1170 1170 self.ui.debug('bundle2-input-stream-interrupt:'
1171 1171 ' closing out of band context\n')
1172 1172
1173 1173 class interruptoperation(object):
1174 1174 """A limited operation to be used by part handlers during interruption
1175 1175
1176 1176 It only has access to a ui object.
1177 1177 """
1178 1178
1179 1179 def __init__(self, ui):
1180 1180 self.ui = ui
1181 1181 self.reply = None
1182 1182 self.captureoutput = False
1183 1183
1184 1184 @property
1185 1185 def repo(self):
1186 1186 raise error.ProgrammingError('no repo access from stream interruption')
1187 1187
1188 1188 def gettransaction(self):
1189 1189 raise TransactionUnavailable('no repo access from stream interruption')
1190 1190
1191 1191 def decodepayloadchunks(ui, fh):
1192 1192 """Reads bundle2 part payload data into chunks.
1193 1193
1194 1194 Part payload data consists of framed chunks. This function takes
1195 1195 a file handle and emits those chunks.
1196 1196 """
1197 1197 dolog = ui.configbool('devel', 'bundle2.debug')
1198 1198 debug = ui.debug
1199 1199
1200 1200 headerstruct = struct.Struct(_fpayloadsize)
1201 1201 headersize = headerstruct.size
1202 1202 unpack = headerstruct.unpack
1203 1203
1204 1204 readexactly = changegroup.readexactly
1205 1205 read = fh.read
1206 1206
1207 1207 chunksize = unpack(readexactly(fh, headersize))[0]
1208 1208 indebug(ui, 'payload chunk size: %i' % chunksize)
1209 1209
1210 1210 # changegroup.readexactly() is inlined below for performance.
1211 1211 while chunksize:
1212 1212 if chunksize >= 0:
1213 1213 s = read(chunksize)
1214 1214 if len(s) < chunksize:
1215 1215 raise error.Abort(_('stream ended unexpectedly '
1216 1216 ' (got %d bytes, expected %d)') %
1217 1217 (len(s), chunksize))
1218 1218
1219 1219 yield s
1220 1220 elif chunksize == flaginterrupt:
1221 1221 # Interrupt "signal" detected. The regular stream is interrupted
1222 1222 # and a bundle2 part follows. Consume it.
1223 1223 interrupthandler(ui, fh)()
1224 1224 else:
1225 1225 raise error.BundleValueError(
1226 1226 'negative payload chunk size: %s' % chunksize)
1227 1227
1228 1228 s = read(headersize)
1229 1229 if len(s) < headersize:
1230 1230 raise error.Abort(_('stream ended unexpectedly '
1231 1231 ' (got %d bytes, expected %d)') %
1232 1232 (len(s), headersize))
1233 1233
1234 1234 chunksize = unpack(s)[0]
1235 1235
1236 1236 # indebug() inlined for performance.
1237 1237 if dolog:
1238 1238 debug('bundle2-input: payload chunk size: %i\n' % chunksize)
1239 1239
1240 1240 class unbundlepart(unpackermixin):
1241 1241 """a bundle part read from a bundle"""
1242 1242
1243 1243 def __init__(self, ui, header, fp):
1244 1244 super(unbundlepart, self).__init__(fp)
1245 1245 self._seekable = (util.safehasattr(fp, 'seek') and
1246 1246 util.safehasattr(fp, 'tell'))
1247 1247 self.ui = ui
1248 1248 # unbundle state attr
1249 1249 self._headerdata = header
1250 1250 self._headeroffset = 0
1251 1251 self._initialized = False
1252 1252 self.consumed = False
1253 1253 # part data
1254 1254 self.id = None
1255 1255 self.type = None
1256 1256 self.mandatoryparams = None
1257 1257 self.advisoryparams = None
1258 1258 self.params = None
1259 1259 self.mandatorykeys = ()
1260 1260 self._readheader()
1261 1261 self._mandatory = None
1262 1262 self._pos = 0
1263 1263
1264 1264 def _fromheader(self, size):
1265 1265 """return the next <size> byte from the header"""
1266 1266 offset = self._headeroffset
1267 1267 data = self._headerdata[offset:(offset + size)]
1268 1268 self._headeroffset = offset + size
1269 1269 return data
1270 1270
1271 1271 def _unpackheader(self, format):
1272 1272 """read given format from header
1273 1273
1274 1274 This automatically computes the size of the format to read.
1275 1275 data = self._fromheader(struct.calcsize(format))
1276 1276 return _unpack(format, data)
1277 1277
1278 1278 def _initparams(self, mandatoryparams, advisoryparams):
1279 1279 """internal function to set up all logic-related parameters"""
1280 1280 # make it read only to prevent people touching it by mistake.
1281 1281 self.mandatoryparams = tuple(mandatoryparams)
1282 1282 self.advisoryparams = tuple(advisoryparams)
1283 1283 # user friendly UI
1284 1284 self.params = util.sortdict(self.mandatoryparams)
1285 1285 self.params.update(self.advisoryparams)
1286 1286 self.mandatorykeys = frozenset(p[0] for p in mandatoryparams)
1287 1287
1288 1288 def _readheader(self):
1289 1289 """read the header and setup the object"""
1290 1290 typesize = self._unpackheader(_fparttypesize)[0]
1291 1291 self.type = self._fromheader(typesize)
1292 1292 indebug(self.ui, 'part type: "%s"' % self.type)
1293 1293 self.id = self._unpackheader(_fpartid)[0]
1294 1294 indebug(self.ui, 'part id: "%s"' % pycompat.bytestr(self.id))
1295 1295 # extract mandatory bit from type
1296 1296 self.mandatory = (self.type != self.type.lower())
1297 1297 self.type = self.type.lower()
1298 1298 ## reading parameters
1299 1299 # param count
1300 1300 mancount, advcount = self._unpackheader(_fpartparamcount)
1301 1301 indebug(self.ui, 'part parameters: %i' % (mancount + advcount))
1302 1302 # param size
1303 1303 fparamsizes = _makefpartparamsizes(mancount + advcount)
1304 1304 paramsizes = self._unpackheader(fparamsizes)
1305 1305 # make it a list of couples again
1306 1306 paramsizes = list(zip(paramsizes[::2], paramsizes[1::2]))
1307 1307 # split mandatory from advisory
1308 1308 mansizes = paramsizes[:mancount]
1309 1309 advsizes = paramsizes[mancount:]
1310 1310 # retrieve param value
1311 1311 manparams = []
1312 1312 for key, value in mansizes:
1313 1313 manparams.append((self._fromheader(key), self._fromheader(value)))
1314 1314 advparams = []
1315 1315 for key, value in advsizes:
1316 1316 advparams.append((self._fromheader(key), self._fromheader(value)))
1317 1317 self._initparams(manparams, advparams)
1318 1318 ## part payload
1319 1319 self._payloadstream = util.chunkbuffer(self._payloadchunks())
1320 1320 # we read the data, tell it
1321 1321 self._initialized = True
1322 1322
1323 1323 def _payloadchunks(self):
1324 1324 """Generator of decoded chunks in the payload."""
1325 1325 return decodepayloadchunks(self.ui, self._fp)
1326 1326
1327 1327 def consume(self):
1328 1328 """Read the part payload until completion.
1329 1329
1330 1330 By consuming the part data, the underlying stream read offset will
1331 1331 be advanced to the next part (or end of stream).
1332 1332 """
1333 1333 if self.consumed:
1334 1334 return
1335 1335
1336 1336 chunk = self.read(32768)
1337 1337 while chunk:
1338 1338 self._pos += len(chunk)
1339 1339 chunk = self.read(32768)
1340 1340
1341 1341 def read(self, size=None):
1342 1342 """read payload data"""
1343 1343 if not self._initialized:
1344 1344 self._readheader()
1345 1345 if size is None:
1346 1346 data = self._payloadstream.read()
1347 1347 else:
1348 1348 data = self._payloadstream.read(size)
1349 1349 self._pos += len(data)
1350 1350 if size is None or len(data) < size:
1351 1351 if not self.consumed and self._pos:
1352 1352 self.ui.debug('bundle2-input-part: total payload size %i\n'
1353 1353 % self._pos)
1354 1354 self.consumed = True
1355 1355 return data
1356 1356
1357 1357 class seekableunbundlepart(unbundlepart):
1358 1358 """A bundle2 part in a bundle that is seekable.
1359 1359
1360 1360 Regular ``unbundlepart`` instances can only be read once. This class
1361 1361 extends ``unbundlepart`` to enable bi-directional seeking within the
1362 1362 part.
1363 1363
1364 1364 Bundle2 part data consists of framed chunks. Offsets when seeking
1365 1365 refer to the decoded data, not the offsets in the underlying bundle2
1366 1366 stream.
1367 1367
1368 1368 To facilitate quickly seeking within the decoded data, instances of this
1369 1369 class maintain a mapping between offsets in the underlying stream and
1370 1370 the decoded payload. This mapping will consume memory in proportion
1371 1371 to the number of chunks within the payload (which almost certainly
1372 1372 increases in proportion with the size of the part).
1373 1373 """
1374 1374 def __init__(self, ui, header, fp):
1375 1375 # (payload, file) offsets for chunk starts.
1376 1376 self._chunkindex = []
1377 1377
1378 1378 super(seekableunbundlepart, self).__init__(ui, header, fp)
1379 1379
1380 1380 def _payloadchunks(self, chunknum=0):
1381 1381 '''seek to specified chunk and start yielding data'''
1382 1382 if len(self._chunkindex) == 0:
1383 1383 assert chunknum == 0, 'Must start with chunk 0'
1384 1384 self._chunkindex.append((0, self._tellfp()))
1385 1385 else:
1386 1386 assert chunknum < len(self._chunkindex), \
1387 1387 'Unknown chunk %d' % chunknum
1388 1388 self._seekfp(self._chunkindex[chunknum][1])
1389 1389
1390 1390 pos = self._chunkindex[chunknum][0]
1391 1391
1392 1392 for chunk in decodepayloadchunks(self.ui, self._fp):
1393 1393 chunknum += 1
1394 1394 pos += len(chunk)
1395 1395 if chunknum == len(self._chunkindex):
1396 1396 self._chunkindex.append((pos, self._tellfp()))
1397 1397
1398 1398 yield chunk
1399 1399
1400 1400 def _findchunk(self, pos):
1401 1401 '''for a given payload position, return a chunk number and offset'''
1402 1402 for chunk, (ppos, fpos) in enumerate(self._chunkindex):
1403 1403 if ppos == pos:
1404 1404 return chunk, 0
1405 1405 elif ppos > pos:
1406 1406 return chunk - 1, pos - self._chunkindex[chunk - 1][0]
1407 1407 raise ValueError('Unknown chunk')
1408 1408
1409 1409 def tell(self):
1410 1410 return self._pos
1411 1411
1412 1412 def seek(self, offset, whence=os.SEEK_SET):
1413 1413 if whence == os.SEEK_SET:
1414 1414 newpos = offset
1415 1415 elif whence == os.SEEK_CUR:
1416 1416 newpos = self._pos + offset
1417 1417 elif whence == os.SEEK_END:
1418 1418 if not self.consumed:
1419 1419 # Can't use self.consume() here because it advances self._pos.
1420 1420 chunk = self.read(32768)
1421 1421 while chunk:
1422 1422 chunk = self.read(32768)
1423 1423 newpos = self._chunkindex[-1][0] - offset
1424 1424 else:
1425 1425 raise ValueError('Unknown whence value: %r' % (whence,))
1426 1426
1427 1427 if newpos > self._chunkindex[-1][0] and not self.consumed:
1428 1428 # Can't use self.consume() here because it advances self._pos.
1429 1429 chunk = self.read(32768)
1430 1430 while chunk:
1431 1431 chunk = self.read(32768)
1432 1432
1433 1433 if not 0 <= newpos <= self._chunkindex[-1][0]:
1434 1434 raise ValueError('Offset out of range')
1435 1435
1436 1436 if self._pos != newpos:
1437 1437 chunk, internaloffset = self._findchunk(newpos)
1438 1438 self._payloadstream = util.chunkbuffer(self._payloadchunks(chunk))
1439 1439 adjust = self.read(internaloffset)
1440 1440 if len(adjust) != internaloffset:
1441 1441 raise error.Abort(_('Seek failed\n'))
1442 1442 self._pos = newpos
1443 1443
1444 1444 def _seekfp(self, offset, whence=0):
1445 1445 """move the underlying file pointer
1446 1446
1447 1447 This method is meant for internal usage by the bundle2 protocol only.
1448 1448 It directly manipulates the low-level stream, including bundle2-level
1449 1449 instructions.
1450 1450
1451 1451 Do not use it to implement higher-level logic or methods."""
1452 1452 if self._seekable:
1453 1453 return self._fp.seek(offset, whence)
1454 1454 else:
1455 1455 raise NotImplementedError(_('File pointer is not seekable'))
1456 1456
1457 1457 def _tellfp(self):
1458 1458 """return the file offset, or None if file is not seekable
1459 1459
1460 1460 This method is meant for internal usage by the bundle2 protocol only.
1461 1461 It directly manipulates the low-level stream, including bundle2-level
1462 1462 instructions.
1463 1463
1464 1464 Do not use it to implement higher-level logic or methods."""
1465 1465 if self._seekable:
1466 1466 try:
1467 1467 return self._fp.tell()
1468 1468 except IOError as e:
1469 1469 if e.errno == errno.ESPIPE:
1470 1470 self._seekable = False
1471 1471 else:
1472 1472 raise
1473 1473 return None
1474 1474
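# The seek machinery above relies on the (payload offset, file offset) pairs
# recorded in _chunkindex. The lookup performed by _findchunk can be
# illustrated with a standalone function over such an index; the function
# name and sample data are illustrative only.

```python
def findchunk(chunkindex, pos):
    """Map a payload position to (chunk number, offset inside that chunk).

    `chunkindex` is a list of (payload offset, file offset) pairs marking
    chunk starts, as maintained by seekableunbundlepart.
    """
    for chunk, (ppos, fpos) in enumerate(chunkindex):
        if ppos == pos:
            # position falls exactly on a chunk boundary
            return chunk, 0
        elif ppos > pos:
            # position is inside the previous chunk
            return chunk - 1, pos - chunkindex[chunk - 1][0]
    raise ValueError('Unknown chunk')
```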
1475 1475 # These are only the static capabilities.
1476 1476 # Check the 'getrepocaps' function for the rest.
1477 1477 capabilities = {'HG20': (),
1478 'bookmarks': (),
1478 1479 'error': ('abort', 'unsupportedcontent', 'pushraced',
1479 1480 'pushkey'),
1480 1481 'listkeys': (),
1481 1482 'pushkey': (),
1482 1483 'digests': tuple(sorted(util.DIGESTS.keys())),
1483 1484 'remote-changegroup': ('http', 'https'),
1484 1485 'hgtagsfnodes': (),
1485 1486 'phases': ('heads',),
1486 1487 }
1487 1488
1488 1489 def getrepocaps(repo, allowpushback=False):
1489 1490 """return the bundle2 capabilities for a given repo
1490 1491
1491 1492 Exists to allow extensions (like evolution) to mutate the capabilities.
1492 1493 """
1493 1494 caps = capabilities.copy()
1494 1495 caps['changegroup'] = tuple(sorted(
1495 1496 changegroup.supportedincomingversions(repo)))
1496 1497 if obsolete.isenabled(repo, obsolete.exchangeopt):
1497 1498 supportedformat = tuple('V%i' % v for v in obsolete.formats)
1498 1499 caps['obsmarkers'] = supportedformat
1499 1500 if allowpushback:
1500 1501 caps['pushback'] = ()
1501 1502 cpmode = repo.ui.config('server', 'concurrent-push-mode')
1502 1503 if cpmode == 'check-related':
1503 1504 caps['checkheads'] = ('related',)
1504 1505 if 'phases' in repo.ui.configlist('devel', 'legacy.exchange'):
1505 1506 caps.pop('phases')
1506 1507 return caps
1507 1508
1508 1509 def bundle2caps(remote):
1509 1510 """return the bundle capabilities of a peer as dict"""
1510 1511 raw = remote.capable('bundle2')
1511 1512 if not raw and raw != '':
1512 1513 return {}
1513 1514 capsblob = urlreq.unquote(remote.capable('bundle2'))
1514 1515 return decodecaps(capsblob)
1515 1516
1516 1517 def obsmarkersversion(caps):
1517 1518 """extract the list of supported obsmarkers versions from a bundle2caps dict
1518 1519 """
1519 1520 obscaps = caps.get('obsmarkers', ())
1520 1521 return [int(c[1:]) for c in obscaps if c.startswith('V')]
1521 1522
1522 1523 def writenewbundle(ui, repo, source, filename, bundletype, outgoing, opts,
1523 1524 vfs=None, compression=None, compopts=None):
1524 1525 if bundletype.startswith('HG10'):
1525 1526 cg = changegroup.makechangegroup(repo, outgoing, '01', source)
1526 1527 return writebundle(ui, cg, filename, bundletype, vfs=vfs,
1527 1528 compression=compression, compopts=compopts)
1528 1529 elif not bundletype.startswith('HG20'):
1529 1530 raise error.ProgrammingError('unknown bundle type: %s' % bundletype)
1530 1531
1531 1532 caps = {}
1532 1533 if 'obsolescence' in opts:
1533 1534 caps['obsmarkers'] = ('V1',)
1534 1535 bundle = bundle20(ui, caps)
1535 1536 bundle.setcompression(compression, compopts)
1536 1537 _addpartsfromopts(ui, repo, bundle, source, outgoing, opts)
1537 1538 chunkiter = bundle.getchunks()
1538 1539
1539 1540 return changegroup.writechunks(ui, chunkiter, filename, vfs=vfs)
1540 1541
1541 1542 def _addpartsfromopts(ui, repo, bundler, source, outgoing, opts):
1542 1543 # We should eventually reconcile this logic with the one behind
1543 1544 # 'exchange.getbundle2partsgenerator'.
1544 1545 #
1545 1546 # The type of input from 'getbundle' and 'writenewbundle' are a bit
1546 1547 # different right now. So we keep them separated for now for the sake of
1547 1548 # simplicity.
1548 1549
1549 1550 # we always want a changegroup in such bundle
1550 1551 cgversion = opts.get('cg.version')
1551 1552 if cgversion is None:
1552 1553 cgversion = changegroup.safeversion(repo)
1553 1554 cg = changegroup.makechangegroup(repo, outgoing, cgversion, source)
1554 1555 part = bundler.newpart('changegroup', data=cg.getchunks())
1555 1556 part.addparam('version', cg.version)
1556 1557 if 'clcount' in cg.extras:
1557 1558 part.addparam('nbchanges', '%d' % cg.extras['clcount'],
1558 1559 mandatory=False)
1559 1560 if opts.get('phases') and repo.revs('%ln and secret()',
1560 1561 outgoing.missingheads):
1561 1562 part.addparam('targetphase', '%d' % phases.secret, mandatory=False)
1562 1563
1563 1564 addparttagsfnodescache(repo, bundler, outgoing)
1564 1565
1565 1566 if opts.get('obsolescence', False):
1566 1567 obsmarkers = repo.obsstore.relevantmarkers(outgoing.missing)
1567 1568 buildobsmarkerspart(bundler, obsmarkers)
1568 1569
1569 1570 if opts.get('phases', False):
1570 1571 headsbyphase = phases.subsetphaseheads(repo, outgoing.missing)
1571 1572 phasedata = phases.binaryencode(headsbyphase)
1572 1573 bundler.newpart('phase-heads', data=phasedata)
1573 1574
1574 1575 def addparttagsfnodescache(repo, bundler, outgoing):
1575 1576 # we include the tags fnode cache for the bundle changeset
1576 1577 # (as an optional parts)
1577 1578 cache = tags.hgtagsfnodescache(repo.unfiltered())
1578 1579 chunks = []
1579 1580
1580 1581 # .hgtags fnodes are only relevant for head changesets. While we could
1581 1582 # transfer values for all known nodes, there will likely be little to
1582 1583 # no benefit.
1583 1584 #
1584 1585 # We don't bother using a generator to produce output data because
1585 1586 # a) we only have 40 bytes per head and even esoteric numbers of heads
1586 1587 # consume little memory (1M heads is 40MB) b) we don't want to send the
1587 1588 # part if we don't have entries and knowing if we have entries requires
1588 1589 # cache lookups.
1589 1590 for node in outgoing.missingheads:
1590 1591 # Don't compute missing, as this may slow down serving.
1591 1592 fnode = cache.getfnode(node, computemissing=False)
1592 1593 if fnode is not None:
1593 1594 chunks.extend([node, fnode])
1594 1595
1595 1596 if chunks:
1596 1597 bundler.newpart('hgtagsfnodes', data=''.join(chunks))
1597 1598
1598 1599 def buildobsmarkerspart(bundler, markers):
1599 1600 """add an obsmarker part to the bundler with <markers>
1600 1601
1601 1602 No part is created if markers is empty.
1602 1603 Raises ValueError if the bundler doesn't support any known obsmarker format.
1603 1604 """
1604 1605 if not markers:
1605 1606 return None
1606 1607
1607 1608 remoteversions = obsmarkersversion(bundler.capabilities)
1608 1609 version = obsolete.commonversion(remoteversions)
1609 1610 if version is None:
1610 1611 raise ValueError('bundler does not support common obsmarker format')
1611 1612 stream = obsolete.encodemarkers(markers, True, version=version)
1612 1613 return bundler.newpart('obsmarkers', data=stream)
1613 1614
1614 1615 def writebundle(ui, cg, filename, bundletype, vfs=None, compression=None,
1615 1616 compopts=None):
1616 1617 """Write a bundle file and return its filename.
1617 1618
1618 1619 Existing files will not be overwritten.
1619 1620 If no filename is specified, a temporary file is created.
1620 1621 bz2 compression can be turned off.
1621 1622 The bundle file will be deleted in case of errors.
1622 1623 """
1623 1624
1624 1625 if bundletype == "HG20":
1625 1626 bundle = bundle20(ui)
1626 1627 bundle.setcompression(compression, compopts)
1627 1628 part = bundle.newpart('changegroup', data=cg.getchunks())
1628 1629 part.addparam('version', cg.version)
1629 1630 if 'clcount' in cg.extras:
1630 1631 part.addparam('nbchanges', '%d' % cg.extras['clcount'],
1631 1632 mandatory=False)
1632 1633 chunkiter = bundle.getchunks()
1633 1634 else:
1634 1635 # compression argument is only for the bundle2 case
1635 1636 assert compression is None
1636 1637 if cg.version != '01':
1637 1638 raise error.Abort(_('old bundle types only support v1 '
1638 1639 'changegroups'))
1639 1640 header, comp = bundletypes[bundletype]
1640 1641 if comp not in util.compengines.supportedbundletypes:
1641 1642 raise error.Abort(_('unknown stream compression type: %s')
1642 1643 % comp)
1643 1644 compengine = util.compengines.forbundletype(comp)
1644 1645 def chunkiter():
1645 1646 yield header
1646 1647 for chunk in compengine.compressstream(cg.getchunks(), compopts):
1647 1648 yield chunk
1648 1649 chunkiter = chunkiter()
1649 1650
1650 1651 # parse the changegroup data, otherwise we will block
1651 1652 # in case of sshrepo because we don't know the end of the stream
1652 1653 return changegroup.writechunks(ui, chunkiter, filename, vfs=vfs)
1653 1654
1654 1655 def combinechangegroupresults(op):
1655 1656 """logic to combine 0 or more addchangegroup results into one"""
1656 1657 results = [r.get('return', 0)
1657 1658 for r in op.records['changegroup']]
1658 1659 changedheads = 0
1659 1660 result = 1
1660 1661 for ret in results:
1661 1662 # If any changegroup result is 0, return 0
1662 1663 if ret == 0:
1663 1664 result = 0
1664 1665 break
1665 1666 if ret < -1:
1666 1667 changedheads += ret + 1
1667 1668 elif ret > 1:
1668 1669 changedheads += ret - 1
1669 1670 if changedheads > 0:
1670 1671 result = 1 + changedheads
1671 1672 elif changedheads < 0:
1672 1673 result = -1 + changedheads
1673 1674 return result
1674 1675
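# The combination rule above is easier to see as a pure function over the raw
# return codes: 0 means failure, 1 means success with no head change, 1 + n
# means n heads were added, and -1 - n means n heads were removed. A
# standalone sketch with the record plumbing stripped out; the function name
# is hypothetical.

```python
def combineresults(results):
    """Combine addchangegroup-style return codes into one (sketch)."""
    changedheads = 0
    for ret in results:
        if ret == 0:
            # any failure makes the combined result a failure
            return 0
        if ret < -1:
            changedheads += ret + 1  # ret removed -(ret + 1) heads
        elif ret > 1:
            changedheads += ret - 1  # ret added (ret - 1) heads
    if changedheads > 0:
        return 1 + changedheads
    elif changedheads < 0:
        return -1 + changedheads
    return 1
```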
1675 1676 @parthandler('changegroup', ('version', 'nbchanges', 'treemanifest',
1676 1677 'targetphase'))
1677 1678 def handlechangegroup(op, inpart):
1678 1679 """apply a changegroup part on the repo
1679 1680
1680 1681 This is a very early implementation that will see massive rework before being
1681 1682 inflicted on any end-user.
1682 1683 """
1683 1684 tr = op.gettransaction()
1684 1685 unpackerversion = inpart.params.get('version', '01')
1685 1686 # We should raise an appropriate exception here
1686 1687 cg = changegroup.getunbundler(unpackerversion, inpart, None)
1687 1688 # the source and url passed here are overwritten by the one contained in
1688 1689 # the transaction.hookargs argument. So 'bundle2' is a placeholder
1689 1690 nbchangesets = None
1690 1691 if 'nbchanges' in inpart.params:
1691 1692 nbchangesets = int(inpart.params.get('nbchanges'))
1692 1693 if ('treemanifest' in inpart.params and
1693 1694 'treemanifest' not in op.repo.requirements):
1694 1695 if len(op.repo.changelog) != 0:
1695 1696 raise error.Abort(_(
1696 1697 "bundle contains tree manifests, but local repo is "
1697 1698 "non-empty and does not use tree manifests"))
1698 1699 op.repo.requirements.add('treemanifest')
1699 1700 op.repo._applyopenerreqs()
1700 1701 op.repo._writerequirements()
1701 1702 extrakwargs = {}
1702 1703 targetphase = inpart.params.get('targetphase')
1703 1704 if targetphase is not None:
1704 1705 extrakwargs['targetphase'] = int(targetphase)
1705 1706 ret = _processchangegroup(op, cg, tr, 'bundle2', 'bundle2',
1706 1707 expectedtotal=nbchangesets, **extrakwargs)
1707 1708 if op.reply is not None:
1708 1709 # This is definitely not the final form of this
1709 1710 # return. But one needs to start somewhere.
1710 1711 part = op.reply.newpart('reply:changegroup', mandatory=False)
1711 1712 part.addparam(
1712 1713 'in-reply-to', pycompat.bytestr(inpart.id), mandatory=False)
1713 1714 part.addparam('return', '%i' % ret, mandatory=False)
1714 1715 assert not inpart.read()
1715 1716
1716 1717 _remotechangegroupparams = tuple(['url', 'size', 'digests'] +
1717 1718 ['digest:%s' % k for k in util.DIGESTS.keys()])
1718 1719 @parthandler('remote-changegroup', _remotechangegroupparams)
1719 1720 def handleremotechangegroup(op, inpart):
1720 1721 """apply a bundle10 on the repo, given an url and validation information
1721 1722
1722 1723 All the information about the remote bundle to import is given as
1723 1724 parameters. The parameters include:
1724 1725 - url: the url to the bundle10.
1725 1726 - size: the bundle10 file size. It is used to validate what was
1726 1727 retrieved by the client matches the server knowledge about the bundle.
1727 1728 - digests: a space separated list of the digest types provided as
1728 1729 parameters.
1729 1730 - digest:<digest-type>: the hexadecimal representation of the digest with
1730 1731 that name. Like the size, it is used to validate what was retrieved by
1731 1732 the client matches what the server knows about the bundle.
1732 1733
1733 1734 When multiple digest types are given, all of them are checked.
1734 1735 """
1735 1736 try:
1736 1737 raw_url = inpart.params['url']
1737 1738 except KeyError:
1738 1739 raise error.Abort(_('remote-changegroup: missing "%s" param') % 'url')
1739 1740 parsed_url = util.url(raw_url)
1740 1741 if parsed_url.scheme not in capabilities['remote-changegroup']:
1741 1742 raise error.Abort(_('remote-changegroup does not support %s urls') %
1742 1743 parsed_url.scheme)
1743 1744
1744 1745 try:
1745 1746 size = int(inpart.params['size'])
1746 1747 except ValueError:
1747 1748 raise error.Abort(_('remote-changegroup: invalid value for param "%s"')
1748 1749 % 'size')
1749 1750 except KeyError:
1750 1751 raise error.Abort(_('remote-changegroup: missing "%s" param') % 'size')
1751 1752
1752 1753 digests = {}
1753 1754 for typ in inpart.params.get('digests', '').split():
1754 1755 param = 'digest:%s' % typ
1755 1756 try:
1756 1757 value = inpart.params[param]
1757 1758 except KeyError:
1758 1759 raise error.Abort(_('remote-changegroup: missing "%s" param') %
1759 1760 param)
1760 1761 digests[typ] = value
1761 1762
1762 1763 real_part = util.digestchecker(url.open(op.ui, raw_url), size, digests)
1763 1764
1764 1765 tr = op.gettransaction()
1765 1766 from . import exchange
1766 1767 cg = exchange.readbundle(op.repo.ui, real_part, raw_url)
1767 1768 if not isinstance(cg, changegroup.cg1unpacker):
1768 1769 raise error.Abort(_('%s: not a bundle version 1.0') %
1769 1770 util.hidepassword(raw_url))
1770 1771 ret = _processchangegroup(op, cg, tr, 'bundle2', 'bundle2')
1771 1772 if op.reply is not None:
1772 1773 # This is definitely not the final form of this
1773 1774 # return. But one needs to start somewhere.
1774 1775 part = op.reply.newpart('reply:changegroup')
1775 1776 part.addparam(
1776 1777 'in-reply-to', pycompat.bytestr(inpart.id), mandatory=False)
1777 1778 part.addparam('return', '%i' % ret, mandatory=False)
1778 1779 try:
1779 1780 real_part.validate()
1780 1781 except error.Abort as e:
1781 1782 raise error.Abort(_('bundle at %s is corrupted:\n%s') %
1782 1783 (util.hidepassword(raw_url), str(e)))
1783 1784 assert not inpart.read()
1784 1785
1785 1786 @parthandler('reply:changegroup', ('return', 'in-reply-to'))
1786 1787 def handlereplychangegroup(op, inpart):
1787 1788 ret = int(inpart.params['return'])
1788 1789 replyto = int(inpart.params['in-reply-to'])
1789 1790 op.records.add('changegroup', {'return': ret}, replyto)
1790 1791
1791 1792 @parthandler('check:bookmarks')
1792 1793 def handlecheckbookmarks(op, inpart):
1793 1794 """check location of bookmarks
1794 1795
1795 1796 This part is used to detect push races regarding bookmarks. It
1796 1797 contains binary encoded (bookmark, node) tuples. If the local state does
1797 1798 not match the one in the part, a PushRaced exception is raised
1798 1799 """
1799 1800 bookdata = bookmarks.binarydecode(inpart)
1800 1801
1801 1802 msgstandard = ('repository changed while pushing - please try again '
1802 1803 '(bookmark "%s" move from %s to %s)')
1803 1804 msgmissing = ('repository changed while pushing - please try again '
1804 1805 '(bookmark "%s" is missing, expected %s)')
1805 1806 msgexist = ('repository changed while pushing - please try again '
1806 1807 '(bookmark "%s" set on %s, expected missing)')
1807 1808 for book, node in bookdata:
1808 1809 currentnode = op.repo._bookmarks.get(book)
1809 1810 if currentnode != node:
1810 1811 if node is None:
1811 1812 finalmsg = msgexist % (book, nodemod.short(currentnode))
1812 1813 elif currentnode is None:
1813 1814 finalmsg = msgmissing % (book, nodemod.short(node))
1814 1815 else:
1815 1816 finalmsg = msgstandard % (book, nodemod.short(node),
1816 1817 nodemod.short(currentnode))
1817 1818 raise error.PushRaced(finalmsg)
1818 1819
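The race check in handlecheckbookmarks can be pictured with plain dictionaries. A minimal sketch, with the binary part already decoded into (bookmark, node) pairs and ValueError standing in for error.PushRaced:

```python
def check_bookmarks(expected_pairs, current_bookmarks):
    """Fail if the server's bookmarks no longer match what the pusher saw.

    expected_pairs: iterable of (name, node) recorded by the client, where
    node is None when the client expects the bookmark to be absent.
    current_bookmarks: the server's name -> node mapping.
    """
    for book, node in expected_pairs:
        currentnode = current_bookmarks.get(book)
        if currentnode != node:
            if node is None:
                raise ValueError('bookmark %r set on %s, expected missing'
                                 % (book, currentnode))
            elif currentnode is None:
                raise ValueError('bookmark %r is missing, expected %s'
                                 % (book, node))
            raise ValueError('bookmark %r moved from %s to %s'
                             % (book, node, currentnode))
```

Any mismatch aborts the whole push before bookmarks are touched, which is what makes the part usable as a race guard.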
1819 1820 @parthandler('check:heads')
1820 1821 def handlecheckheads(op, inpart):
1821 1822 """check that the heads of the repo did not change
1822 1823
1823 1824 This is used to detect a push race when using unbundle.
1824 1825 This replaces the "heads" argument of unbundle."""
1825 1826 h = inpart.read(20)
1826 1827 heads = []
1827 1828 while len(h) == 20:
1828 1829 heads.append(h)
1829 1830 h = inpart.read(20)
1830 1831 assert not h
1831 1832 # Trigger a transaction so that we are guaranteed to have the lock now.
1832 1833 if op.ui.configbool('experimental', 'bundle2lazylocking'):
1833 1834 op.gettransaction()
1834 1835 if sorted(heads) != sorted(op.repo.heads()):
1835 1836 raise error.PushRaced('repository changed while pushing - '
1836 1837 'please try again')
1837 1838
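The payload of 'check:heads' (and 'check:updated-heads' below) is just a flat concatenation of 20-byte binary nodes, read until the stream runs dry. A small sketch of that record-reading loop, with a ValueError in place of the assert for a truncated part:

```python
import io

def read_nodes(fh, recordsize=20):
    """Read fixed-width binary node records from a file-like object."""
    nodes = []
    chunk = fh.read(recordsize)
    while len(chunk) == recordsize:
        nodes.append(chunk)
        chunk = fh.read(recordsize)
    if chunk:
        # a short, non-empty trailing read means the part was cut off
        raise ValueError('truncated node list')
    return nodes
```

Because every record has a fixed size, no length prefix or delimiter is needed in the wire format.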
1838 1839 @parthandler('check:updated-heads')
1839 1840 def handlecheckupdatedheads(op, inpart):
1840 1841 """check for race on the heads touched by a push
1841 1842
1842 1843 This is similar to 'check:heads' but focuses on the heads actually updated
1843 1844 during the push. If other activity happens on unrelated heads, it is
1844 1845 ignored.
1845 1846
1846 1847 This allows servers with high traffic to avoid push contention as long as
1847 1848 only unrelated parts of the graph are involved.
1848 1849 h = inpart.read(20)
1849 1850 heads = []
1850 1851 while len(h) == 20:
1851 1852 heads.append(h)
1852 1853 h = inpart.read(20)
1853 1854 assert not h
1854 1855 # trigger a transaction so that we are guaranteed to have the lock now.
1855 1856 if op.ui.configbool('experimental', 'bundle2lazylocking'):
1856 1857 op.gettransaction()
1857 1858
1858 1859 currentheads = set()
1859 1860 for ls in op.repo.branchmap().itervalues():
1860 1861 currentheads.update(ls)
1861 1862
1862 1863 for h in heads:
1863 1864 if h not in currentheads:
1864 1865 raise error.PushRaced('repository changed while pushing - '
1865 1866 'please try again')
1866 1867
1867 1868 @parthandler('check:phases')
1868 1869 def handlecheckphases(op, inpart):
1869 1870 """check that phase boundaries of the repository did not change
1870 1871
1871 1872 This is used to detect a push race.
1872 1873 """
1873 1874 phasetonodes = phases.binarydecode(inpart)
1874 1875 unfi = op.repo.unfiltered()
1875 1876 cl = unfi.changelog
1876 1877 phasecache = unfi._phasecache
1877 1878 msg = ('repository changed while pushing - please try again '
1878 1879 '(%s is %s expected %s)')
1879 1880 for expectedphase, nodes in enumerate(phasetonodes):
1880 1881 for n in nodes:
1881 1882 actualphase = phasecache.phase(unfi, cl.rev(n))
1882 1883 if actualphase != expectedphase:
1883 1884 finalmsg = msg % (nodemod.short(n),
1884 1885 phases.phasenames[actualphase],
1885 1886 phases.phasenames[expectedphase])
1886 1887 raise error.PushRaced(finalmsg)
1887 1888
1888 1889 @parthandler('output')
1889 1890 def handleoutput(op, inpart):
1890 1891 """forward output captured on the server to the client"""
1891 1892 for line in inpart.read().splitlines():
1892 1893 op.ui.status(_('remote: %s\n') % line)
1893 1894
1894 1895 @parthandler('replycaps')
1895 1896 def handlereplycaps(op, inpart):
1896 1897 """Notify that a reply bundle should be created
1897 1898
1898 1899 The payload contains the capabilities information for the reply"""
1899 1900 caps = decodecaps(inpart.read())
1900 1901 if op.reply is None:
1901 1902 op.reply = bundle20(op.ui, caps)
1902 1903
1903 1904 class AbortFromPart(error.Abort):
1904 1905 """Sub-class of Abort that denotes an error from a bundle2 part."""
1905 1906
1906 1907 @parthandler('error:abort', ('message', 'hint'))
1907 1908 def handleerrorabort(op, inpart):
1908 1909 """Used to transmit abort error over the wire"""
1909 1910 raise AbortFromPart(inpart.params['message'],
1910 1911 hint=inpart.params.get('hint'))
1911 1912
1912 1913 @parthandler('error:pushkey', ('namespace', 'key', 'new', 'old', 'ret',
1913 1914 'in-reply-to'))
1914 1915 def handleerrorpushkey(op, inpart):
1915 1916 """Used to transmit failure of a mandatory pushkey over the wire"""
1916 1917 kwargs = {}
1917 1918 for name in ('namespace', 'key', 'new', 'old', 'ret'):
1918 1919 value = inpart.params.get(name)
1919 1920 if value is not None:
1920 1921 kwargs[name] = value
1921 1922 raise error.PushkeyFailed(inpart.params['in-reply-to'], **kwargs)
1922 1923
1923 1924 @parthandler('error:unsupportedcontent', ('parttype', 'params'))
1924 1925 def handleerrorunsupportedcontent(op, inpart):
1925 1926 """Used to transmit unknown content error over the wire"""
1926 1927 kwargs = {}
1927 1928 parttype = inpart.params.get('parttype')
1928 1929 if parttype is not None:
1929 1930 kwargs['parttype'] = parttype
1930 1931 params = inpart.params.get('params')
1931 1932 if params is not None:
1932 1933 kwargs['params'] = params.split('\0')
1933 1934
1934 1935 raise error.BundleUnknownFeatureError(**kwargs)
1935 1936
1936 1937 @parthandler('error:pushraced', ('message',))
1937 1938 def handleerrorpushraced(op, inpart):
1938 1939 """Used to transmit push race error over the wire"""
1939 1940 raise error.ResponseError(_('push failed:'), inpart.params['message'])
1940 1941
1941 1942 @parthandler('listkeys', ('namespace',))
1942 1943 def handlelistkeys(op, inpart):
1943 1944 """retrieve pushkey namespace content stored in a bundle2"""
1944 1945 namespace = inpart.params['namespace']
1945 1946 r = pushkey.decodekeys(inpart.read())
1946 1947 op.records.add('listkeys', (namespace, r))
1947 1948
1948 1949 @parthandler('pushkey', ('namespace', 'key', 'old', 'new'))
1949 1950 def handlepushkey(op, inpart):
1950 1951 """process a pushkey request"""
1951 1952 dec = pushkey.decode
1952 1953 namespace = dec(inpart.params['namespace'])
1953 1954 key = dec(inpart.params['key'])
1954 1955 old = dec(inpart.params['old'])
1955 1956 new = dec(inpart.params['new'])
1956 1957 # Grab the transaction to ensure that we have the lock before performing the
1957 1958 # pushkey.
1958 1959 if op.ui.configbool('experimental', 'bundle2lazylocking'):
1959 1960 op.gettransaction()
1960 1961 ret = op.repo.pushkey(namespace, key, old, new)
1961 1962 record = {'namespace': namespace,
1962 1963 'key': key,
1963 1964 'old': old,
1964 1965 'new': new}
1965 1966 op.records.add('pushkey', record)
1966 1967 if op.reply is not None:
1967 1968 rpart = op.reply.newpart('reply:pushkey')
1968 1969 rpart.addparam(
1969 1970 'in-reply-to', pycompat.bytestr(inpart.id), mandatory=False)
1970 1971 rpart.addparam('return', '%i' % ret, mandatory=False)
1971 1972 if inpart.mandatory and not ret:
1972 1973 kwargs = {}
1973 1974 for key in ('namespace', 'key', 'new', 'old', 'ret'):
1974 1975 if key in inpart.params:
1975 1976 kwargs[key] = inpart.params[key]
1976 1977 raise error.PushkeyFailed(partid=str(inpart.id), **kwargs)
1977 1978
1978 1979 @parthandler('phase-heads')
1979 1980 def handlephases(op, inpart):
1980 1981 """apply phases from bundle part to repo"""
1981 1982 headsbyphase = phases.binarydecode(inpart)
1982 1983 phases.updatephases(op.repo.unfiltered(), op.gettransaction, headsbyphase)
1983 1984
1984 1985 @parthandler('reply:pushkey', ('return', 'in-reply-to'))
1985 1986 def handlepushkeyreply(op, inpart):
1986 1987 """retrieve the result of a pushkey request"""
1987 1988 ret = int(inpart.params['return'])
1988 1989 partid = int(inpart.params['in-reply-to'])
1989 1990 op.records.add('pushkey', {'return': ret}, partid)
1990 1991
1991 1992 @parthandler('obsmarkers')
1992 1993 def handleobsmarker(op, inpart):
1993 1994 """add a stream of obsmarkers to the repo"""
1994 1995 tr = op.gettransaction()
1995 1996 markerdata = inpart.read()
1996 1997 if op.ui.config('experimental', 'obsmarkers-exchange-debug'):
1997 1998 op.ui.write(('obsmarker-exchange: %i bytes received\n')
1998 1999 % len(markerdata))
1999 2000 # The mergemarkers call will crash if marker creation is not enabled.
2000 2001 # we want to avoid this if the part is advisory.
2001 2002 if not inpart.mandatory and op.repo.obsstore.readonly:
2002 2003 op.repo.ui.debug('ignoring obsolescence markers, feature not enabled')
2003 2004 return
2004 2005 new = op.repo.obsstore.mergemarkers(tr, markerdata)
2005 2006 op.repo.invalidatevolatilesets()
2006 2007 if new:
2007 2008 op.repo.ui.status(_('%i new obsolescence markers\n') % new)
2008 2009 op.records.add('obsmarkers', {'new': new})
2009 2010 if op.reply is not None:
2010 2011 rpart = op.reply.newpart('reply:obsmarkers')
2011 2012 rpart.addparam(
2012 2013 'in-reply-to', pycompat.bytestr(inpart.id), mandatory=False)
2013 2014 rpart.addparam('new', '%i' % new, mandatory=False)
2014 2015
2015 2016
2016 2017 @parthandler('reply:obsmarkers', ('new', 'in-reply-to'))
2017 2018 def handleobsmarkerreply(op, inpart):
2018 2019 """retrieve the result of an obsmarkers request"""
2019 2020 ret = int(inpart.params['new'])
2020 2021 partid = int(inpart.params['in-reply-to'])
2021 2022 op.records.add('obsmarkers', {'new': ret}, partid)
2022 2023
2023 2024 @parthandler('hgtagsfnodes')
2024 2025 def handlehgtagsfnodes(op, inpart):
2025 2026 """Applies .hgtags fnodes cache entries to the local repo.
2026 2027
2027 2028 Payload is pairs of 20 byte changeset nodes and filenodes.
2028 2029 """
2029 2030 # Grab the transaction so we ensure that we have the lock at this point.
2030 2031 if op.ui.configbool('experimental', 'bundle2lazylocking'):
2031 2032 op.gettransaction()
2032 2033 cache = tags.hgtagsfnodescache(op.repo.unfiltered())
2033 2034
2034 2035 count = 0
2035 2036 while True:
2036 2037 node = inpart.read(20)
2037 2038 fnode = inpart.read(20)
2038 2039 if len(node) < 20 or len(fnode) < 20:
2039 2040 op.ui.debug('ignoring incomplete received .hgtags fnodes data\n')
2040 2041 break
2041 2042 cache.setfnode(node, fnode)
2042 2043 count += 1
2043 2044
2044 2045 cache.write()
2045 2046 op.ui.debug('applied %i hgtags fnodes cache entries\n' % count)
2046 2047
2047 2048 @parthandler('pushvars')
2048 2049 def bundle2getvars(op, part):
2049 2050 '''unbundle a bundle2 containing shellvars on the server'''
2050 2051 # An option to disable unbundling on server-side for security reasons
2051 2052 if op.ui.configbool('push', 'pushvars.server'):
2052 2053 hookargs = {}
2053 2054 for key, value in part.advisoryparams:
2054 2055 key = key.upper()
2055 2056 # We want pushed variables to have USERVAR_ prepended so we know
2056 2057 # they came from the --pushvar flag.
2057 2058 key = "USERVAR_" + key
2058 2059 hookargs[key] = value
2059 2060 op.addhookargs(hookargs)
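The pushvars handler's name mangling is worth seeing in isolation: advisory params become hook arguments with a USERVAR_ prefix so hooks can tell user-supplied variables from built-in ones. A one-line sketch of that transform:

```python
def mangle_pushvars(params):
    """Turn (key, value) pushvar pairs into USERVAR_-prefixed hook args."""
    return {'USERVAR_' + key.upper(): value for key, value in params}
```

For example, a client's `--pushvar debug=1` reaches server-side hooks as `USERVAR_DEBUG`.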
@@ -1,2120 +1,2137 b''
1 1 # exchange.py - utility to exchange data between repos.
2 2 #
3 3 # Copyright 2005-2007 Matt Mackall <mpm@selenic.com>
4 4 #
5 5 # This software may be used and distributed according to the terms of the
6 6 # GNU General Public License version 2 or any later version.
7 7
8 8 from __future__ import absolute_import
9 9
10 10 import collections
11 11 import errno
12 12 import hashlib
13 13
14 14 from .i18n import _
15 15 from .node import (
16 bin,
16 17 hex,
17 18 nullid,
18 19 )
19 20 from . import (
20 21 bookmarks as bookmod,
21 22 bundle2,
22 23 changegroup,
23 24 discovery,
24 25 error,
25 26 lock as lockmod,
26 27 obsolete,
27 28 phases,
28 29 pushkey,
29 30 pycompat,
30 31 remotenames,
31 32 scmutil,
32 33 sslutil,
33 34 streamclone,
34 35 url as urlmod,
35 36 util,
36 37 )
37 38
38 39 urlerr = util.urlerr
39 40 urlreq = util.urlreq
40 41
41 42 # Maps bundle version human names to changegroup versions.
42 43 _bundlespeccgversions = {'v1': '01',
43 44 'v2': '02',
44 45 'packed1': 's1',
45 46 'bundle2': '02', #legacy
46 47 }
47 48
48 49 # Compression engines allowed in version 1. THIS SHOULD NEVER CHANGE.
49 50 _bundlespecv1compengines = {'gzip', 'bzip2', 'none'}
50 51
51 52 def parsebundlespec(repo, spec, strict=True, externalnames=False):
52 53 """Parse a bundle string specification into parts.
53 54
54 55 Bundle specifications denote a well-defined bundle/exchange format.
55 56 The content of a given specification should not change over time in
56 57 order to ensure that bundles produced by a newer version of Mercurial are
57 58 readable from an older version.
58 59
59 60 The string currently has the form:
60 61
61 62 <compression>-<type>[;<parameter0>[;<parameter1>]]
62 63
63 64 Where <compression> is one of the supported compression formats
64 65 and <type> is (currently) a version string. A ";" can follow the type and
65 66 all text afterwards is interpreted as URI encoded, ";" delimited key=value
66 67 pairs.
67 68
68 69 If ``strict`` is True (the default) <compression> is required. Otherwise,
69 70 it is optional.
70 71
71 72 If ``externalnames`` is False (the default), the human-centric names will
72 73 be converted to their internal representation.
73 74
74 75 Returns a 3-tuple of (compression, version, parameters). Compression will
75 76 be ``None`` if not in strict mode and a compression isn't defined.
76 77
77 78 An ``InvalidBundleSpecification`` is raised when the specification is
78 79 not syntactically well formed.
79 80
80 81 An ``UnsupportedBundleSpecification`` is raised when the compression or
81 82 bundle type/version is not recognized.
82 83
83 84 Note: this function will likely eventually return a more complex data
84 85 structure, including bundle2 part information.
85 86 """
86 87 def parseparams(s):
87 88 if ';' not in s:
88 89 return s, {}
89 90
90 91 params = {}
91 92 version, paramstr = s.split(';', 1)
92 93
93 94 for p in paramstr.split(';'):
94 95 if '=' not in p:
95 96 raise error.InvalidBundleSpecification(
96 97 _('invalid bundle specification: '
97 98 'missing "=" in parameter: %s') % p)
98 99
99 100 key, value = p.split('=', 1)
100 101 key = urlreq.unquote(key)
101 102 value = urlreq.unquote(value)
102 103 params[key] = value
103 104
104 105 return version, params
105 106
106 107
107 108 if strict and '-' not in spec:
108 109 raise error.InvalidBundleSpecification(
109 110 _('invalid bundle specification; '
110 111 'must be prefixed with compression: %s') % spec)
111 112
112 113 if '-' in spec:
113 114 compression, version = spec.split('-', 1)
114 115
115 116 if compression not in util.compengines.supportedbundlenames:
116 117 raise error.UnsupportedBundleSpecification(
117 118 _('%s compression is not supported') % compression)
118 119
119 120 version, params = parseparams(version)
120 121
121 122 if version not in _bundlespeccgversions:
122 123 raise error.UnsupportedBundleSpecification(
123 124 _('%s is not a recognized bundle version') % version)
124 125 else:
125 126 # Value could be just the compression or just the version, in which
126 127 # case some defaults are assumed (but only when not in strict mode).
127 128 assert not strict
128 129
129 130 spec, params = parseparams(spec)
130 131
131 132 if spec in util.compengines.supportedbundlenames:
132 133 compression = spec
133 134 version = 'v1'
134 135 # Generaldelta repos require v2.
135 136 if 'generaldelta' in repo.requirements:
136 137 version = 'v2'
137 138 # Modern compression engines require v2.
138 139 if compression not in _bundlespecv1compengines:
139 140 version = 'v2'
140 141 elif spec in _bundlespeccgversions:
141 142 if spec == 'packed1':
142 143 compression = 'none'
143 144 else:
144 145 compression = 'bzip2'
145 146 version = spec
146 147 else:
147 148 raise error.UnsupportedBundleSpecification(
148 149 _('%s is not a recognized bundle specification') % spec)
149 150
150 151 # Bundle version 1 only supports a known set of compression engines.
151 152 if version == 'v1' and compression not in _bundlespecv1compengines:
152 153 raise error.UnsupportedBundleSpecification(
153 154 _('compression engine %s is not supported on v1 bundles') %
154 155 compression)
155 156
156 157 # The specification for packed1 can optionally declare the data formats
157 158 # required to apply it. If we see this metadata, compare against what the
158 159 # repo supports and error if the bundle isn't compatible.
159 160 if version == 'packed1' and 'requirements' in params:
160 161 requirements = set(params['requirements'].split(','))
161 162 missingreqs = requirements - repo.supportedformats
162 163 if missingreqs:
163 164 raise error.UnsupportedBundleSpecification(
164 165 _('missing support for repository features: %s') %
165 166 ', '.join(sorted(missingreqs)))
166 167
167 168 if not externalnames:
168 169 engine = util.compengines.forbundlename(compression)
169 170 compression = engine.bundletype()[1]
170 171 version = _bundlespeccgversions[version]
171 172 return compression, version, params
172 173
173 174 def readbundle(ui, fh, fname, vfs=None):
174 175 header = changegroup.readexactly(fh, 4)
175 176
176 177 alg = None
177 178 if not fname:
178 179 fname = "stream"
179 180 if not header.startswith('HG') and header.startswith('\0'):
180 181 fh = changegroup.headerlessfixup(fh, header)
181 182 header = "HG10"
182 183 alg = 'UN'
183 184 elif vfs:
184 185 fname = vfs.join(fname)
185 186
186 187 magic, version = header[0:2], header[2:4]
187 188
188 189 if magic != 'HG':
189 190 raise error.Abort(_('%s: not a Mercurial bundle') % fname)
190 191 if version == '10':
191 192 if alg is None:
192 193 alg = changegroup.readexactly(fh, 2)
193 194 return changegroup.cg1unpacker(fh, alg)
194 195 elif version.startswith('2'):
195 196 return bundle2.getunbundler(ui, fh, magicstring=magic + version)
196 197 elif version == 'S1':
197 198 return streamclone.streamcloneapplier(fh)
198 199 else:
199 200 raise error.Abort(_('%s: unknown bundle version %s') % (fname, version))
200 201
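readbundle dispatches on a four-byte magic header. A simplified sketch of that sniffing, returning illustrative labels rather than the real unpacker classes, and omitting the headerless-`'\0'` fixup and compression-byte handling of the real function:

```python
def sniff_bundle_type(header4):
    """Map a 4-byte bundle header to a coarse format name."""
    magic, version = header4[0:2], header4[2:4]
    if magic != b'HG':
        raise ValueError('not a Mercurial bundle')
    if version == b'10':
        return 'bundle1'          # changegroup v1 container
    elif version.startswith(b'2'):
        return 'bundle2'          # e.g. HG20
    elif version == b'S1':
        return 'stream'           # stream clone bundle
    raise ValueError('unknown bundle version %r' % version)
```
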
201 202 def getbundlespec(ui, fh):
202 203 """Infer the bundlespec from a bundle file handle.
203 204
204 205 The input file handle is seeked and the original seek position is not
205 206 restored.
206 207 """
207 208 def speccompression(alg):
208 209 try:
209 210 return util.compengines.forbundletype(alg).bundletype()[0]
210 211 except KeyError:
211 212 return None
212 213
213 214 b = readbundle(ui, fh, None)
214 215 if isinstance(b, changegroup.cg1unpacker):
215 216 alg = b._type
216 217 if alg == '_truncatedBZ':
217 218 alg = 'BZ'
218 219 comp = speccompression(alg)
219 220 if not comp:
220 221 raise error.Abort(_('unknown compression algorithm: %s') % alg)
221 222 return '%s-v1' % comp
222 223 elif isinstance(b, bundle2.unbundle20):
223 224 if 'Compression' in b.params:
224 225 comp = speccompression(b.params['Compression'])
225 226 if not comp:
226 227 raise error.Abort(_('unknown compression algorithm: %s') % comp)
227 228 else:
228 229 comp = 'none'
229 230
230 231 version = None
231 232 for part in b.iterparts():
232 233 if part.type == 'changegroup':
233 234 version = part.params['version']
234 235 if version in ('01', '02'):
235 236 version = 'v2'
236 237 else:
237 238 raise error.Abort(_('changegroup version %s does not have '
238 239 'a known bundlespec') % version,
239 240 hint=_('try upgrading your Mercurial '
240 241 'client'))
241 242
242 243 if not version:
243 244 raise error.Abort(_('could not identify changegroup version in '
244 245 'bundle'))
245 246
246 247 return '%s-%s' % (comp, version)
247 248 elif isinstance(b, streamclone.streamcloneapplier):
248 249 requirements = streamclone.readbundle1header(fh)[2]
249 250 params = 'requirements=%s' % ','.join(sorted(requirements))
250 251 return 'none-packed1;%s' % urlreq.quote(params)
251 252 else:
252 253 raise error.Abort(_('unknown bundle type: %s') % b)
253 254
254 255 def _computeoutgoing(repo, heads, common):
255 256 """Computes which revs are outgoing given a set of common
256 257 and a set of heads.
257 258
258 259 This is a separate function so extensions can have access to
259 260 the logic.
260 261
261 262 Returns a discovery.outgoing object.
262 263 """
263 264 cl = repo.changelog
264 265 if common:
265 266 hasnode = cl.hasnode
266 267 common = [n for n in common if hasnode(n)]
267 268 else:
268 269 common = [nullid]
269 270 if not heads:
270 271 heads = cl.heads()
271 272 return discovery.outgoing(repo, common, heads)
272 273
273 274 def _forcebundle1(op):
274 275 """return true if a pull/push must use bundle1
275 276
276 277 This function is used to allow testing of the older bundle version"""
277 278 ui = op.repo.ui
278 279 forcebundle1 = False
279 280 # The goal of this config is to allow developers to choose the bundle
280 281 # version used during exchange. This is especially handy during tests.
281 282 # Value is a list of bundle versions to pick from; the highest version
282 283 # should be used.
283 284 #
284 285 # developer config: devel.legacy.exchange
285 286 exchange = ui.configlist('devel', 'legacy.exchange')
286 287 forcebundle1 = 'bundle2' not in exchange and 'bundle1' in exchange
287 288 return forcebundle1 or not op.remote.capable('bundle2')
288 289
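The bundle1 fallback decision in _forcebundle1 combines the devel.legacy.exchange override with the remote's advertised capabilities. A sketch of just that predicate, taking the config list and capability set as plain arguments:

```python
def force_bundle1(exchange_config, remote_caps):
    """Return True when a pull/push must fall back to bundle1.

    exchange_config: the devel.legacy.exchange list ('bundle1'/'bundle2').
    remote_caps: set of capability names advertised by the remote.
    """
    forced = ('bundle2' not in exchange_config
              and 'bundle1' in exchange_config)
    return forced or 'bundle2' not in remote_caps
```

Listing both versions in the config leaves the choice to capabilities, so only a bundle1-only config actually forces the old format.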
289 290 class pushoperation(object):
290 291 """An object that represents a single push operation
291 292
292 293 Its purpose is to carry push related state and very common operations.
293 294
294 295 A new pushoperation should be created at the beginning of each push and
295 296 discarded afterward.
296 297 """
297 298
298 299 def __init__(self, repo, remote, force=False, revs=None, newbranch=False,
299 300 bookmarks=(), pushvars=None):
300 301 # repo we push from
301 302 self.repo = repo
302 303 self.ui = repo.ui
303 304 # repo we push to
304 305 self.remote = remote
305 306 # force option provided
306 307 self.force = force
307 308 # revs to be pushed (None is "all")
308 309 self.revs = revs
309 310 # bookmark explicitly pushed
310 311 self.bookmarks = bookmarks
311 312 # allow push of new branch
312 313 self.newbranch = newbranch
313 314 # steps already performed
314 315 # (used to check what steps have already been performed through bundle2)
315 316 self.stepsdone = set()
316 317 # Integer version of the changegroup push result
317 318 # - None means nothing to push
318 319 # - 0 means HTTP error
319 320 # - 1 means we pushed and remote head count is unchanged *or*
320 321 # we have outgoing changesets but refused to push
321 322 # - other values as described by addchangegroup()
322 323 self.cgresult = None
323 324 # Boolean value for the bookmark push
324 325 self.bkresult = None
325 326 # discovery.outgoing object (contains common and outgoing data)
326 327 self.outgoing = None
327 328 # all remote topological heads before the push
328 329 self.remoteheads = None
329 330 # Details of the remote branch pre and post push
330 331 #
331 332 # mapping: {'branch': ([remoteheads],
332 333 # [newheads],
333 334 # [unsyncedheads],
334 335 # [discardedheads])}
335 336 # - branch: the branch name
336 337 # - remoteheads: the list of remote heads known locally
337 338 # None if the branch is new
338 339 # - newheads: the new remote heads (known locally) with outgoing pushed
339 340 # - unsyncedheads: the list of remote heads unknown locally.
340 341 # - discardedheads: the list of remote heads made obsolete by the push
341 342 self.pushbranchmap = None
342 343 # testable as a boolean indicating if any nodes are missing locally.
343 344 self.incoming = None
344 345 # summary of the remote phase situation
345 346 self.remotephases = None
346 347 # phases changes that must be pushed along side the changesets
347 348 self.outdatedphases = None
348 349 # phases changes that must be pushed if changeset push fails
349 350 self.fallbackoutdatedphases = None
350 351 # outgoing obsmarkers
351 352 self.outobsmarkers = set()
352 353 # outgoing bookmarks
353 354 self.outbookmarks = []
354 355 # transaction manager
355 356 self.trmanager = None
356 357 # map { pushkey partid -> callback handling failure}
357 358 # used to handle exception from mandatory pushkey part failure
358 359 self.pkfailcb = {}
359 360 # an iterable of pushvars or None
360 361 self.pushvars = pushvars
361 362
362 363 @util.propertycache
363 364 def futureheads(self):
364 365 """future remote heads if the changeset push succeeds"""
365 366 return self.outgoing.missingheads
366 367
367 368 @util.propertycache
368 369 def fallbackheads(self):
369 370 """future remote heads if the changeset push fails"""
370 371 if self.revs is None:
371 372 # no targets to push, all common heads are relevant
372 373 return self.outgoing.commonheads
373 374 unfi = self.repo.unfiltered()
374 375 # I want cheads = heads(::missingheads and ::commonheads)
375 376 # (missingheads is revs with secret changeset filtered out)
376 377 #
377 378 # This can be expressed as:
378 379 # cheads = ( (missingheads and ::commonheads)
379 380 # + (commonheads and ::missingheads))"
380 381 # )
381 382 #
382 383 # while trying to push we already computed the following:
383 384 # common = (::commonheads)
384 385 # missing = ((commonheads::missingheads) - commonheads)
385 386 #
386 387 # We can pick:
387 388 # * missingheads part of common (::commonheads)
388 389 common = self.outgoing.common
389 390 nm = self.repo.changelog.nodemap
390 391 cheads = [node for node in self.revs if nm[node] in common]
391 392 # and
392 393 # * commonheads parents on missing
393 394 revset = unfi.set('%ln and parents(roots(%ln))',
394 395 self.outgoing.commonheads,
395 396 self.outgoing.missing)
396 397 cheads.extend(c.node() for c in revset)
397 398 return cheads
398 399
399 400 @property
400 401 def commonheads(self):
401 402 """set of all common heads after changeset bundle push"""
402 403 if self.cgresult:
403 404 return self.futureheads
404 405 else:
405 406 return self.fallbackheads
406 407
407 408 # mapping of message used when pushing bookmark
408 409 bookmsgmap = {'update': (_("updating bookmark %s\n"),
409 410 _('updating bookmark %s failed!\n')),
410 411 'export': (_("exporting bookmark %s\n"),
411 412 _('exporting bookmark %s failed!\n')),
412 413 'delete': (_("deleting remote bookmark %s\n"),
413 414 _('deleting remote bookmark %s failed!\n')),
414 415 }
415 416
416 417
417 418 def push(repo, remote, force=False, revs=None, newbranch=False, bookmarks=(),
418 419 opargs=None):
419 420 '''Push outgoing changesets (limited by revs) from a local
420 421 repository to remote. Return the pushoperation object; its
421 422 cgresult attribute is:
422 423 - None means nothing to push
423 424 - 0 means HTTP error
424 425 - 1 means we pushed and remote head count is unchanged *or*
425 426 we have outgoing changesets but refused to push
426 427 '''
427 428 if opargs is None:
428 429 opargs = {}
429 430 pushop = pushoperation(repo, remote, force, revs, newbranch, bookmarks,
430 431 **pycompat.strkwargs(opargs))
431 432 if pushop.remote.local():
432 433 missing = (set(pushop.repo.requirements)
433 434 - pushop.remote.local().supported)
434 435 if missing:
435 436 msg = _("required features are not"
436 437 " supported in the destination:"
437 438 " %s") % (', '.join(sorted(missing)))
438 439 raise error.Abort(msg)
439 440
440 441 if not pushop.remote.canpush():
441 442 raise error.Abort(_("destination does not support push"))
442 443
443 444 if not pushop.remote.capable('unbundle'):
444 445 raise error.Abort(_('cannot push: destination does not support the '
445 446 'unbundle wire protocol command'))
446 447
447 448 # get lock as we might write phase data
448 449 wlock = lock = None
449 450 try:
450 451 # bundle2 push may receive a reply bundle touching bookmarks or other
451 452 # things requiring the wlock. Take it now to ensure proper ordering.
452 453 maypushback = pushop.ui.configbool('experimental', 'bundle2.pushback')
453 454 if (not _forcebundle1(pushop)) and maypushback:
454 455 wlock = pushop.repo.wlock()
455 456 lock = pushop.repo.lock()
456 457 pushop.trmanager = transactionmanager(pushop.repo,
457 458 'push-response',
458 459 pushop.remote.url())
459 460 except IOError as err:
460 461 if err.errno != errno.EACCES:
461 462 raise
462 463 # source repo cannot be locked.
463 464 # We do not abort the push, but just disable the local phase
464 465 # synchronisation.
465 466 msg = 'cannot lock source repository: %s\n' % err
466 467 pushop.ui.debug(msg)
467 468
468 469 with wlock or util.nullcontextmanager(), \
469 470 lock or util.nullcontextmanager(), \
470 471 pushop.trmanager or util.nullcontextmanager():
471 472 pushop.repo.checkpush(pushop)
472 473 _pushdiscovery(pushop)
473 474 if not _forcebundle1(pushop):
474 475 _pushbundle2(pushop)
475 476 _pushchangeset(pushop)
476 477 _pushsyncphase(pushop)
477 478 _pushobsolete(pushop)
478 479 _pushbookmark(pushop)
479 480
480 481 return pushop
481 482
482 483 # list of steps to perform discovery before push
483 484 pushdiscoveryorder = []
484 485
485 486 # Mapping between step name and function
486 487 #
487 488 # This exists to help extensions wrap steps if necessary
488 489 pushdiscoverymapping = {}
489 490
490 491 def pushdiscovery(stepname):
491 492 """decorator for function performing discovery before push
492 493
493 494 The function is added to the step -> function mapping and appended to the
494 495 list of steps. Beware that decorated functions will be added in order (this
495 496 may matter).
496 497
497 498 You can only use this decorator for a new step; if you want to wrap a step
498 499 from an extension, change the pushdiscoverymapping dictionary directly."""
499 500 def dec(func):
500 501 assert stepname not in pushdiscoverymapping
501 502 pushdiscoverymapping[stepname] = func
502 503 pushdiscoveryorder.append(stepname)
503 504 return func
504 505 return dec
505 506
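The discovery-step registry above pairs an ordered list with a name-to-function mapping, both filled by a decorator. A minimal standalone sketch of that pattern (hypothetical names, not Mercurial's actual API):

```python
# Standalone sketch of the step-registration pattern used above
# (hypothetical names, not Mercurial's own).
steporder = []      # ordered list of step names
stepmapping = {}    # step name -> function

def registerstep(stepname):
    """Decorator adding a function to the registry, preserving order."""
    def dec(func):
        assert stepname not in stepmapping
        stepmapping[stepname] = func
        steporder.append(stepname)
        return func
    return dec

@registerstep('changeset')
def discoverchangesets(state):
    state.append('changeset')

@registerstep('bookmarks')
def discoverbookmarks(state):
    state.append('bookmarks')

def rundiscovery(state):
    """Run every registered step in registration order."""
    for stepname in steporder:
        stepmapping[stepname](state)

ran = []
rundiscovery(ran)
# steps execute in registration order
```

An extension wrapping an existing step would replace the entry in the mapping directly, leaving the order list untouched.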
506 507 def _pushdiscovery(pushop):
507 508 """Run all discovery steps"""
508 509 for stepname in pushdiscoveryorder:
509 510 step = pushdiscoverymapping[stepname]
510 511 step(pushop)
511 512
512 513 @pushdiscovery('changeset')
513 514 def _pushdiscoverychangeset(pushop):
514 515 """discover the changesets that need to be pushed"""
515 516 fci = discovery.findcommonincoming
516 517 commoninc = fci(pushop.repo, pushop.remote, force=pushop.force)
517 518 common, inc, remoteheads = commoninc
518 519 fco = discovery.findcommonoutgoing
519 520 outgoing = fco(pushop.repo, pushop.remote, onlyheads=pushop.revs,
520 521 commoninc=commoninc, force=pushop.force)
521 522 pushop.outgoing = outgoing
522 523 pushop.remoteheads = remoteheads
523 524 pushop.incoming = inc
524 525
525 526 @pushdiscovery('phase')
526 527 def _pushdiscoveryphase(pushop):
527 528 """discover the phases that need to be pushed
528 529
529 530 (computed for both the success and failure cases of the changeset push)"""
530 531 outgoing = pushop.outgoing
531 532 unfi = pushop.repo.unfiltered()
532 533 remotephases = pushop.remote.listkeys('phases')
533 534 if (pushop.ui.configbool('ui', '_usedassubrepo')
534 535 and remotephases # server supports phases
535 536 and not pushop.outgoing.missing # no changesets to be pushed
536 537 and remotephases.get('publishing', False)):
537 538 # When:
538 539 # - this is a subrepo push
539 540 # - and remote supports phases
540 541 # - and no changesets are to be pushed
541 542 # - and remote is publishing
542 543 # We may be in the issue 3871 case!
543 544 # We drop the phase synchronisation normally done as a
544 545 # courtesy, which could otherwise publish changesets that
545 546 # are still draft locally.
546 547 pushop.outdatedphases = []
547 548 pushop.fallbackoutdatedphases = []
548 549 return
549 550
550 551 pushop.remotephases = phases.remotephasessummary(pushop.repo,
551 552 pushop.fallbackheads,
552 553 remotephases)
553 554 droots = pushop.remotephases.draftroots
554 555
555 556 extracond = ''
556 557 if not pushop.remotephases.publishing:
557 558 extracond = ' and public()'
558 559 revset = 'heads((%%ln::%%ln) %s)' % extracond
559 560 # Get the list of all revs draft on remote but public here.
560 561 # XXX Beware that this revset breaks if droots are not strictly
561 562 # XXX roots; we may want to ensure they are, but that is costly
562 563 fallback = list(unfi.set(revset, droots, pushop.fallbackheads))
563 564 if not outgoing.missing:
564 565 future = fallback
565 566 else:
566 567 # adds changeset we are going to push as draft
567 568 #
568 569 # should not be necessary for a publishing server, but because of an
569 570 # issue fixed in xxxxx we have to do it anyway.
570 571 fdroots = list(unfi.set('roots(%ln + %ln::)',
571 572 outgoing.missing, droots))
572 573 fdroots = [f.node() for f in fdroots]
573 574 future = list(unfi.set(revset, fdroots, pushop.futureheads))
574 575 pushop.outdatedphases = future
575 576 pushop.fallbackoutdatedphases = fallback
576 577
577 578 @pushdiscovery('obsmarker')
578 579 def _pushdiscoveryobsmarkers(pushop):
579 580 if (obsolete.isenabled(pushop.repo, obsolete.exchangeopt)
580 581 and pushop.repo.obsstore
581 582 and 'obsolete' in pushop.remote.listkeys('namespaces')):
582 583 repo = pushop.repo
583 584 # a very naive computation that can be quite expensive on big repos.
584 585 # However: evolution is currently slow on them anyway.
585 586 nodes = (c.node() for c in repo.set('::%ln', pushop.futureheads))
586 587 pushop.outobsmarkers = pushop.repo.obsstore.relevantmarkers(nodes)
587 588
588 589 @pushdiscovery('bookmarks')
589 590 def _pushdiscoverybookmarks(pushop):
590 591 ui = pushop.ui
591 592 repo = pushop.repo.unfiltered()
592 593 remote = pushop.remote
593 594 ui.debug("checking for updated bookmarks\n")
594 595 ancestors = ()
595 596 if pushop.revs:
596 597 revnums = map(repo.changelog.rev, pushop.revs)
597 598 ancestors = repo.changelog.ancestors(revnums, inclusive=True)
598 599 remotebookmark = remote.listkeys('bookmarks')
599 600
600 601 explicit = set([repo._bookmarks.expandname(bookmark)
601 602 for bookmark in pushop.bookmarks])
602 603
603 604 remotebookmark = bookmod.unhexlifybookmarks(remotebookmark)
604 605 comp = bookmod.comparebookmarks(repo, repo._bookmarks, remotebookmark)
605 606
606 607 def safehex(x):
607 608 if x is None:
608 609 return x
609 610 return hex(x)
610 611
611 612 def hexifycompbookmarks(bookmarks):
612 613 for b, scid, dcid in bookmarks:
613 614 yield b, safehex(scid), safehex(dcid)
614 615
615 616 comp = [hexifycompbookmarks(marks) for marks in comp]
616 617 addsrc, adddst, advsrc, advdst, diverge, differ, invalid, same = comp
617 618
618 619 for b, scid, dcid in advsrc:
619 620 if b in explicit:
620 621 explicit.remove(b)
621 622 if not ancestors or repo[scid].rev() in ancestors:
622 623 pushop.outbookmarks.append((b, dcid, scid))
623 624 # search added bookmark
624 625 for b, scid, dcid in addsrc:
625 626 if b in explicit:
626 627 explicit.remove(b)
627 628 pushop.outbookmarks.append((b, '', scid))
628 629 # search for overwritten bookmark
629 630 for b, scid, dcid in list(advdst) + list(diverge) + list(differ):
630 631 if b in explicit:
631 632 explicit.remove(b)
632 633 pushop.outbookmarks.append((b, dcid, scid))
633 634 # search for bookmark to delete
634 635 for b, scid, dcid in adddst:
635 636 if b in explicit:
636 637 explicit.remove(b)
637 638 # treat as "deleted locally"
638 639 pushop.outbookmarks.append((b, dcid, ''))
639 640 # identical bookmarks shouldn't get reported
640 641 for b, scid, dcid in same:
641 642 if b in explicit:
642 643 explicit.remove(b)
643 644
644 645 if explicit:
645 646 explicit = sorted(explicit)
646 647 # we should probably list all of them
647 648 ui.warn(_('bookmark %s does not exist on the local '
648 649 'or remote repository!\n') % explicit[0])
649 650 pushop.bkresult = 2
650 651
651 652 pushop.outbookmarks.sort()
652 653
653 654 def _pushcheckoutgoing(pushop):
654 655 outgoing = pushop.outgoing
655 656 unfi = pushop.repo.unfiltered()
656 657 if not outgoing.missing:
657 658 # nothing to push
658 659 scmutil.nochangesfound(unfi.ui, unfi, outgoing.excluded)
659 660 return False
660 661 # something to push
661 662 if not pushop.force:
662 663 # if repo.obsstore == False --> no obsolete
663 664 # then, save the iteration
664 665 if unfi.obsstore:
665 666 # these messages are defined here because of the 80-char line limit
666 667 mso = _("push includes obsolete changeset: %s!")
667 668 mspd = _("push includes phase-divergent changeset: %s!")
668 669 mscd = _("push includes content-divergent changeset: %s!")
669 670 mst = {"orphan": _("push includes orphan changeset: %s!"),
670 671 "phase-divergent": mspd,
671 672 "content-divergent": mscd}
672 673 # If we are to push and there is at least one
673 674 # obsolete or unstable changeset in missing, at
674 675 # least one of the missingheads will be obsolete or
675 676 # unstable. So checking only the heads is ok
676 677 for node in outgoing.missingheads:
677 678 ctx = unfi[node]
678 679 if ctx.obsolete():
679 680 raise error.Abort(mso % ctx)
680 681 elif ctx.isunstable():
681 682 # TODO print more than one instability in the abort
682 683 # message
683 684 raise error.Abort(mst[ctx.instabilities()[0]] % ctx)
684 685
685 686 discovery.checkheads(pushop)
686 687 return True
687 688
688 689 # List of names of steps to perform for an outgoing bundle2, order matters.
689 690 b2partsgenorder = []
690 691
691 692 # Mapping between step name and function
692 693 #
693 694 # This exists to help extensions wrap steps if necessary
694 695 b2partsgenmapping = {}
695 696
696 697 def b2partsgenerator(stepname, idx=None):
697 698 """decorator for function generating bundle2 part
698 699
699 700 The function is added to the step -> function mapping and appended to the
700 701 list of steps. Beware that decorated functions will be added in order
701 702 (this may matter).
702 703
703 704 You can only use this decorator for new steps; if you want to wrap a step
704 705 from an extension, change the b2partsgenmapping dictionary directly."""
705 706 def dec(func):
706 707 assert stepname not in b2partsgenmapping
707 708 b2partsgenmapping[stepname] = func
708 709 if idx is None:
709 710 b2partsgenorder.append(stepname)
710 711 else:
711 712 b2partsgenorder.insert(idx, stepname)
712 713 return func
713 714 return dec
714 715
715 716 def _pushb2ctxcheckheads(pushop, bundler):
716 717 """Generate race condition checking parts
717 718
718 719 Exists as an independent function to aid extensions
719 720 """
720 721 # * 'force' does not check for push races,
721 722 # * if we don't push anything, there is nothing to check.
722 723 if not pushop.force and pushop.outgoing.missingheads:
723 724 allowunrelated = 'related' in bundler.capabilities.get('checkheads', ())
724 725 emptyremote = pushop.pushbranchmap is None
725 726 if not allowunrelated or emptyremote:
726 727 bundler.newpart('check:heads', data=iter(pushop.remoteheads))
727 728 else:
728 729 affected = set()
729 730 for branch, heads in pushop.pushbranchmap.iteritems():
730 731 remoteheads, newheads, unsyncedheads, discardedheads = heads
731 732 if remoteheads is not None:
732 733 remote = set(remoteheads)
733 734 affected |= set(discardedheads) & remote
734 735 affected |= remote - set(newheads)
735 736 if affected:
736 737 data = iter(sorted(affected))
737 738 bundler.newpart('check:updated-heads', data=data)
738 739
739 740 def _pushing(pushop):
740 741 """return True if we are pushing anything"""
741 742 return bool(pushop.outgoing.missing
742 743 or pushop.outdatedphases
743 744 or pushop.outobsmarkers
744 745 or pushop.outbookmarks)
745 746
747 @b2partsgenerator('check-bookmarks')
748 def _pushb2checkbookmarks(pushop, bundler):
749 """insert bookmark move checking"""
750 if not _pushing(pushop) or pushop.force:
751 return
752 b2caps = bundle2.bundle2caps(pushop.remote)
753 hasbookmarkcheck = 'bookmarks' in b2caps
754 if not (pushop.outbookmarks and hasbookmarkcheck):
755 return
756 data = []
757 for book, old, new in pushop.outbookmarks:
758 old = bin(old)
759 data.append((book, old))
760 checkdata = bookmod.binaryencode(data)
761 bundler.newpart('check:bookmarks', data=checkdata)
762
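The new part serializes (bookmark, old node) pairs via `bookmod.binaryencode`. Purely as an illustration, a fixed-width encoding of such pairs could look like the sketch below; the field layout here (20-byte node, 2-byte name length, then the name) is an assumption and not necessarily Mercurial's actual wire format:

```python
import struct

# Hypothetical encoding of (bookmark, node) pairs into a binary
# payload. The layout is an assumption for illustration only and
# may not match Mercurial's real 'check:bookmarks' part format.
_entry = struct.Struct('>20sH')  # node, then bookmark-name length

def encodebookmarks(data):
    chunks = []
    for book, node in data:
        chunks.append(_entry.pack(node, len(book)))
        chunks.append(book)
    return b''.join(chunks)

def decodebookmarks(payload):
    entries = []
    off = 0
    while off < len(payload):
        node, booklen = _entry.unpack_from(payload, off)
        off += _entry.size
        entries.append((payload[off:off + booklen], node))
        off += booklen
    return entries

pairs = [(b'stable', b'\x11' * 20), (b'@', b'\x22' * 20)]
payload = encodebookmarks(pairs)
# decodebookmarks(payload) round-trips back to the original pairs
```

The server side would decode the payload and abort the push if any listed bookmark no longer points at the expected node.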
746 763 @b2partsgenerator('check-phases')
747 764 def _pushb2checkphases(pushop, bundler):
748 765 """insert phase move checking"""
749 766 if not _pushing(pushop) or pushop.force:
750 767 return
751 768 b2caps = bundle2.bundle2caps(pushop.remote)
752 769 hasphaseheads = 'heads' in b2caps.get('phases', ())
753 770 if pushop.remotephases is not None and hasphaseheads:
754 771 # check that the remote phase has not changed
755 772 checks = [[] for p in phases.allphases]
756 773 checks[phases.public].extend(pushop.remotephases.publicheads)
757 774 checks[phases.draft].extend(pushop.remotephases.draftroots)
758 775 if any(checks):
759 776 for nodes in checks:
760 777 nodes.sort()
761 778 checkdata = phases.binaryencode(checks)
762 779 bundler.newpart('check:phases', data=checkdata)
763 780
764 781 @b2partsgenerator('changeset')
765 782 def _pushb2ctx(pushop, bundler):
766 783 """handle changegroup push through bundle2
767 784
768 785 addchangegroup result is stored in the ``pushop.cgresult`` attribute.
769 786 """
770 787 if 'changesets' in pushop.stepsdone:
771 788 return
772 789 pushop.stepsdone.add('changesets')
773 790 # Send known heads to the server for race detection.
774 791 if not _pushcheckoutgoing(pushop):
775 792 return
776 793 pushop.repo.prepushoutgoinghooks(pushop)
777 794
778 795 _pushb2ctxcheckheads(pushop, bundler)
779 796
780 797 b2caps = bundle2.bundle2caps(pushop.remote)
781 798 version = '01'
782 799 cgversions = b2caps.get('changegroup')
783 800 if cgversions: # 3.1 and 3.2 ship with an empty value
784 801 cgversions = [v for v in cgversions
785 802 if v in changegroup.supportedoutgoingversions(
786 803 pushop.repo)]
787 804 if not cgversions:
788 805 raise ValueError(_('no common changegroup version'))
789 806 version = max(cgversions)
790 807 cgstream = changegroup.makestream(pushop.repo, pushop.outgoing, version,
791 808 'push')
792 809 cgpart = bundler.newpart('changegroup', data=cgstream)
793 810 if cgversions:
794 811 cgpart.addparam('version', version)
795 812 if 'treemanifest' in pushop.repo.requirements:
796 813 cgpart.addparam('treemanifest', '1')
797 814 def handlereply(op):
798 815 """extract addchangegroup returns from server reply"""
799 816 cgreplies = op.records.getreplies(cgpart.id)
800 817 assert len(cgreplies['changegroup']) == 1
801 818 pushop.cgresult = cgreplies['changegroup'][0]['return']
802 819 return handlereply
803 820
804 821 @b2partsgenerator('phase')
805 822 def _pushb2phases(pushop, bundler):
806 823 """handle phase push through bundle2"""
807 824 if 'phases' in pushop.stepsdone:
808 825 return
809 826 b2caps = bundle2.bundle2caps(pushop.remote)
810 827 ui = pushop.repo.ui
811 828
812 829 legacyphase = 'phases' in ui.configlist('devel', 'legacy.exchange')
813 830 haspushkey = 'pushkey' in b2caps
814 831 hasphaseheads = 'heads' in b2caps.get('phases', ())
815 832
816 833 if hasphaseheads and not legacyphase:
817 834 return _pushb2phaseheads(pushop, bundler)
818 835 elif haspushkey:
819 836 return _pushb2phasespushkey(pushop, bundler)
820 837
821 838 def _pushb2phaseheads(pushop, bundler):
822 839 """push phase information through a bundle2 - binary part"""
823 840 pushop.stepsdone.add('phases')
824 841 if pushop.outdatedphases:
825 842 updates = [[] for p in phases.allphases]
826 843 updates[0].extend(h.node() for h in pushop.outdatedphases)
827 844 phasedata = phases.binaryencode(updates)
828 845 bundler.newpart('phase-heads', data=phasedata)
829 846
830 847 def _pushb2phasespushkey(pushop, bundler):
831 848 """push phase information through a bundle2 - pushkey part"""
832 849 pushop.stepsdone.add('phases')
833 850 part2node = []
834 851
835 852 def handlefailure(pushop, exc):
836 853 targetid = int(exc.partid)
837 854 for partid, node in part2node:
838 855 if partid == targetid:
839 856 raise error.Abort(_('updating %s to public failed') % node)
840 857
841 858 enc = pushkey.encode
842 859 for newremotehead in pushop.outdatedphases:
843 860 part = bundler.newpart('pushkey')
844 861 part.addparam('namespace', enc('phases'))
845 862 part.addparam('key', enc(newremotehead.hex()))
846 863 part.addparam('old', enc('%d' % phases.draft))
847 864 part.addparam('new', enc('%d' % phases.public))
848 865 part2node.append((part.id, newremotehead))
849 866 pushop.pkfailcb[part.id] = handlefailure
850 867
851 868 def handlereply(op):
852 869 for partid, node in part2node:
853 870 partrep = op.records.getreplies(partid)
854 871 results = partrep['pushkey']
855 872 assert len(results) <= 1
856 873 msg = None
857 874 if not results:
858 875 msg = _('server ignored update of %s to public!\n') % node
859 876 elif not int(results[0]['return']):
860 877 msg = _('updating %s to public failed!\n') % node
861 878 if msg is not None:
862 879 pushop.ui.warn(msg)
863 880 return handlereply
864 881
865 882 @b2partsgenerator('obsmarkers')
866 883 def _pushb2obsmarkers(pushop, bundler):
867 884 if 'obsmarkers' in pushop.stepsdone:
868 885 return
869 886 remoteversions = bundle2.obsmarkersversion(bundler.capabilities)
870 887 if obsolete.commonversion(remoteversions) is None:
871 888 return
872 889 pushop.stepsdone.add('obsmarkers')
873 890 if pushop.outobsmarkers:
874 891 markers = sorted(pushop.outobsmarkers)
875 892 bundle2.buildobsmarkerspart(bundler, markers)
876 893
877 894 @b2partsgenerator('bookmarks')
878 895 def _pushb2bookmarks(pushop, bundler):
879 896 """handle bookmark push through bundle2"""
880 897 if 'bookmarks' in pushop.stepsdone:
881 898 return
882 899 b2caps = bundle2.bundle2caps(pushop.remote)
883 900 if 'pushkey' not in b2caps:
884 901 return
885 902 pushop.stepsdone.add('bookmarks')
886 903 part2book = []
887 904 enc = pushkey.encode
888 905
889 906 def handlefailure(pushop, exc):
890 907 targetid = int(exc.partid)
891 908 for partid, book, action in part2book:
892 909 if partid == targetid:
893 910 raise error.Abort(bookmsgmap[action][1].rstrip() % book)
894 911 # we should not be called for parts we did not generate
895 912 assert False
896 913
897 914 for book, old, new in pushop.outbookmarks:
898 915 part = bundler.newpart('pushkey')
899 916 part.addparam('namespace', enc('bookmarks'))
900 917 part.addparam('key', enc(book))
901 918 part.addparam('old', enc(old))
902 919 part.addparam('new', enc(new))
903 920 action = 'update'
904 921 if not old:
905 922 action = 'export'
906 923 elif not new:
907 924 action = 'delete'
908 925 part2book.append((part.id, book, action))
909 926 pushop.pkfailcb[part.id] = handlefailure
910 927
911 928 def handlereply(op):
912 929 ui = pushop.ui
913 930 for partid, book, action in part2book:
914 931 partrep = op.records.getreplies(partid)
915 932 results = partrep['pushkey']
916 933 assert len(results) <= 1
917 934 if not results:
918 935 pushop.ui.warn(_('server ignored bookmark %s update\n') % book)
919 936 else:
920 937 ret = int(results[0]['return'])
921 938 if ret:
922 939 ui.status(bookmsgmap[action][0] % book)
923 940 else:
924 941 ui.warn(bookmsgmap[action][1] % book)
925 942 if pushop.bkresult is not None:
926 943 pushop.bkresult = 1
927 944 return handlereply
928 945
929 946 @b2partsgenerator('pushvars', idx=0)
930 947 def _getbundlesendvars(pushop, bundler):
931 948 '''send shellvars via bundle2'''
932 949 pushvars = pushop.pushvars
933 950 if pushvars:
934 951 shellvars = {}
935 952 for raw in pushvars:
936 953 if '=' not in raw:
937 954 msg = ("unable to parse variable '%s', should follow "
938 955 "'KEY=VALUE' or 'KEY=' format")
939 956 raise error.Abort(msg % raw)
940 957 k, v = raw.split('=', 1)
941 958 shellvars[k] = v
942 959
943 960 part = bundler.newpart('pushvars')
944 961
945 962 for key, value in shellvars.iteritems():
946 963 part.addparam(key, value, mandatory=False)
947 964
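The pushvars parsing above splits each variable on the first '=' only, so values may themselves contain '='. A standalone sketch of just that parsing step (hypothetical helper name):

```python
# Standalone sketch of the KEY=VALUE parsing used above
# (hypothetical helper name).
def parsepushvars(pushvars):
    shellvars = {}
    for raw in pushvars:
        if '=' not in raw:
            raise ValueError(
                "unable to parse variable %r, should follow "
                "'KEY=VALUE' or 'KEY=' format" % raw)
        # split only on the first '=' so values may contain '='
        k, v = raw.split('=', 1)
        shellvars[k] = v
    return shellvars

parsed = parsepushvars(['DEBUG=1', 'MSG=a=b', 'EMPTY='])
# -> {'DEBUG': '1', 'MSG': 'a=b', 'EMPTY': ''}
```

`'KEY='` is deliberately legal: it sends the variable with an empty value rather than rejecting it.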
948 965 def _pushbundle2(pushop):
949 966 """push data to the remote using bundle2
950 967
951 968 The only currently supported type of data is changegroup but this will
952 969 evolve in the future."""
953 970 bundler = bundle2.bundle20(pushop.ui, bundle2.bundle2caps(pushop.remote))
954 971 pushback = (pushop.trmanager
955 972 and pushop.ui.configbool('experimental', 'bundle2.pushback'))
956 973
957 974 # create reply capability
958 975 capsblob = bundle2.encodecaps(bundle2.getrepocaps(pushop.repo,
959 976 allowpushback=pushback))
960 977 bundler.newpart('replycaps', data=capsblob)
961 978 replyhandlers = []
962 979 for partgenname in b2partsgenorder:
963 980 partgen = b2partsgenmapping[partgenname]
964 981 ret = partgen(pushop, bundler)
965 982 if callable(ret):
966 983 replyhandlers.append(ret)
967 984 # do not push if nothing to push
968 985 if bundler.nbparts <= 1:
969 986 return
970 987 stream = util.chunkbuffer(bundler.getchunks())
971 988 try:
972 989 try:
973 990 reply = pushop.remote.unbundle(
974 991 stream, ['force'], pushop.remote.url())
975 992 except error.BundleValueError as exc:
976 993 raise error.Abort(_('missing support for %s') % exc)
977 994 try:
978 995 trgetter = None
979 996 if pushback:
980 997 trgetter = pushop.trmanager.transaction
981 998 op = bundle2.processbundle(pushop.repo, reply, trgetter)
982 999 except error.BundleValueError as exc:
983 1000 raise error.Abort(_('missing support for %s') % exc)
984 1001 except bundle2.AbortFromPart as exc:
985 1002 pushop.ui.status(_('remote: %s\n') % exc)
986 1003 if exc.hint is not None:
987 1004 pushop.ui.status(_('remote: %s\n') % ('(%s)' % exc.hint))
988 1005 raise error.Abort(_('push failed on remote'))
989 1006 except error.PushkeyFailed as exc:
990 1007 partid = int(exc.partid)
991 1008 if partid not in pushop.pkfailcb:
992 1009 raise
993 1010 pushop.pkfailcb[partid](pushop, exc)
994 1011 for rephand in replyhandlers:
995 1012 rephand(op)
996 1013
997 1014 def _pushchangeset(pushop):
998 1015 """Make the actual push of changeset bundle to remote repo"""
999 1016 if 'changesets' in pushop.stepsdone:
1000 1017 return
1001 1018 pushop.stepsdone.add('changesets')
1002 1019 if not _pushcheckoutgoing(pushop):
1003 1020 return
1004 1021
1005 1022 # Should have verified this in push().
1006 1023 assert pushop.remote.capable('unbundle')
1007 1024
1008 1025 pushop.repo.prepushoutgoinghooks(pushop)
1009 1026 outgoing = pushop.outgoing
1010 1027 # TODO: get bundlecaps from remote
1011 1028 bundlecaps = None
1012 1029 # create a changegroup from local
1013 1030 if pushop.revs is None and not (outgoing.excluded
1014 1031 or pushop.repo.changelog.filteredrevs):
1015 1032 # push everything,
1016 1033 # use the fast path, no race possible on push
1017 1034 cg = changegroup.makechangegroup(pushop.repo, outgoing, '01', 'push',
1018 1035 fastpath=True, bundlecaps=bundlecaps)
1019 1036 else:
1020 1037 cg = changegroup.makechangegroup(pushop.repo, outgoing, '01',
1021 1038 'push', bundlecaps=bundlecaps)
1022 1039
1023 1040 # apply changegroup to remote
1024 1041 # local repo finds heads on server, finds out what
1025 1042 # revs it must push. once revs transferred, if server
1026 1043 # finds it has different heads (someone else won
1027 1044 # commit/push race), server aborts.
1028 1045 if pushop.force:
1029 1046 remoteheads = ['force']
1030 1047 else:
1031 1048 remoteheads = pushop.remoteheads
1032 1049 # ssh: return remote's addchangegroup()
1033 1050 # http: return remote's addchangegroup() or 0 for error
1034 1051 pushop.cgresult = pushop.remote.unbundle(cg, remoteheads,
1035 1052 pushop.repo.url())
1036 1053
1037 1054 def _pushsyncphase(pushop):
1038 1055 """synchronise phase information locally and remotely"""
1039 1056 cheads = pushop.commonheads
1040 1057 # even when we don't push, exchanging phase data is useful
1041 1058 remotephases = pushop.remote.listkeys('phases')
1042 1059 if (pushop.ui.configbool('ui', '_usedassubrepo')
1043 1060 and remotephases # server supports phases
1044 1061 and pushop.cgresult is None # nothing was pushed
1045 1062 and remotephases.get('publishing', False)):
1046 1063 # When:
1047 1064 # - this is a subrepo push
1048 1065 # - and remote supports phases
1049 1066 # - and no changeset was pushed
1050 1067 # - and remote is publishing
1051 1068 # We may be in issue 3871 case!
1052 1069 # We drop the phase synchronisation normally done as a
1053 1070 # courtesy, which could otherwise publish changesets that
1054 1071 # are still draft locally.
1055 1072 remotephases = {'publishing': 'True'}
1056 1073 if not remotephases: # old server, or public-only reply from a non-publishing one
1057 1074 _localphasemove(pushop, cheads)
1058 1075 # don't push any phase data as there is nothing to push
1059 1076 else:
1060 1077 ana = phases.analyzeremotephases(pushop.repo, cheads,
1061 1078 remotephases)
1062 1079 pheads, droots = ana
1063 1080 ### Apply remote phase on local
1064 1081 if remotephases.get('publishing', False):
1065 1082 _localphasemove(pushop, cheads)
1066 1083 else: # publish = False
1067 1084 _localphasemove(pushop, pheads)
1068 1085 _localphasemove(pushop, cheads, phases.draft)
1069 1086 ### Apply local phase on remote
1070 1087
1071 1088 if pushop.cgresult:
1072 1089 if 'phases' in pushop.stepsdone:
1073 1090 # phases already pushed through bundle2
1074 1091 return
1075 1092 outdated = pushop.outdatedphases
1076 1093 else:
1077 1094 outdated = pushop.fallbackoutdatedphases
1078 1095
1079 1096 pushop.stepsdone.add('phases')
1080 1097
1081 1098 # filter heads already turned public by the push
1082 1099 outdated = [c for c in outdated if c.node() not in pheads]
1083 1100 # fallback to independent pushkey command
1084 1101 for newremotehead in outdated:
1085 1102 r = pushop.remote.pushkey('phases',
1086 1103 newremotehead.hex(),
1087 1104 str(phases.draft),
1088 1105 str(phases.public))
1089 1106 if not r:
1090 1107 pushop.ui.warn(_('updating %s to public failed!\n')
1091 1108 % newremotehead)
1092 1109
1093 1110 def _localphasemove(pushop, nodes, phase=phases.public):
1094 1111 """move <nodes> to <phase> in the local source repo"""
1095 1112 if pushop.trmanager:
1096 1113 phases.advanceboundary(pushop.repo,
1097 1114 pushop.trmanager.transaction(),
1098 1115 phase,
1099 1116 nodes)
1100 1117 else:
1101 1118 # repo is not locked, do not change any phases!
1102 1119 # Informs the user that phases should have been moved when
1103 1120 # applicable.
1104 1121 actualmoves = [n for n in nodes if phase < pushop.repo[n].phase()]
1105 1122 phasestr = phases.phasenames[phase]
1106 1123 if actualmoves:
1107 1124 pushop.ui.status(_('cannot lock source repo, skipping '
1108 1125 'local %s phase update\n') % phasestr)
1109 1126
1110 1127 def _pushobsolete(pushop):
1111 1128 """utility function to push obsolete markers to a remote"""
1112 1129 if 'obsmarkers' in pushop.stepsdone:
1113 1130 return
1114 1131 repo = pushop.repo
1115 1132 remote = pushop.remote
1116 1133 pushop.stepsdone.add('obsmarkers')
1117 1134 if pushop.outobsmarkers:
1118 1135 pushop.ui.debug('try to push obsolete markers to remote\n')
1119 1136 rslts = []
1120 1137 remotedata = obsolete._pushkeyescape(sorted(pushop.outobsmarkers))
1121 1138 for key in sorted(remotedata, reverse=True):
1122 1139 # reverse sort to ensure we end with dump0
1123 1140 data = remotedata[key]
1124 1141 rslts.append(remote.pushkey('obsolete', key, '', data))
1125 1142 if [r for r in rslts if not r]:
1126 1143 msg = _('failed to push some obsolete markers!\n')
1127 1144 repo.ui.warn(msg)
1128 1145
1129 1146 def _pushbookmark(pushop):
1130 1147 """Update bookmark position on remote"""
1131 1148 if pushop.cgresult == 0 or 'bookmarks' in pushop.stepsdone:
1132 1149 return
1133 1150 pushop.stepsdone.add('bookmarks')
1134 1151 ui = pushop.ui
1135 1152 remote = pushop.remote
1136 1153
1137 1154 for b, old, new in pushop.outbookmarks:
1138 1155 action = 'update'
1139 1156 if not old:
1140 1157 action = 'export'
1141 1158 elif not new:
1142 1159 action = 'delete'
1143 1160 if remote.pushkey('bookmarks', b, old, new):
1144 1161 ui.status(bookmsgmap[action][0] % b)
1145 1162 else:
1146 1163 ui.warn(bookmsgmap[action][1] % b)
1147 1164 # discovery can have set the value from an invalid entry
1148 1165 if pushop.bkresult is not None:
1149 1166 pushop.bkresult = 1
1150 1167
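Both the bundle2 path and the pushkey fallback above derive the same bookmark action from the old/new values before looking up `bookmsgmap`. That rule in isolation (hypothetical helper name):

```python
# Standalone restatement of the bookmark action rule used above:
# no old value  -> the bookmark is new on the remote ('export'),
# no new value  -> it is being removed from the remote ('delete'),
# otherwise     -> an ordinary move ('update').
def bookmarkaction(old, new):
    if not old:
        return 'export'
    if not new:
        return 'delete'
    return 'update'
```

The action is then used to pick the matching success/failure message pair from `bookmsgmap`.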
1151 1168 class pulloperation(object):
1152 1169 """An object that represents a single pull operation
1153 1170
1154 1171 Its purpose is to carry pull-related state and very common operations.
1155 1172
1156 1173 A new one should be created at the beginning of each pull and discarded
1157 1174 afterward.
1158 1175 """
1159 1176
1160 1177 def __init__(self, repo, remote, heads=None, force=False, bookmarks=(),
1161 1178 remotebookmarks=None, streamclonerequested=None):
1162 1179 # repo we pull into
1163 1180 self.repo = repo
1164 1181 # repo we pull from
1165 1182 self.remote = remote
1166 1183 # revision we try to pull (None is "all")
1167 1184 self.heads = heads
1168 1185 # bookmark pulled explicitly
1169 1186 self.explicitbookmarks = [repo._bookmarks.expandname(bookmark)
1170 1187 for bookmark in bookmarks]
1171 1188 # do we force pull?
1172 1189 self.force = force
1173 1190 # whether a streaming clone was requested
1174 1191 self.streamclonerequested = streamclonerequested
1175 1192 # transaction manager
1176 1193 self.trmanager = None
1177 1194 # set of common changeset between local and remote before pull
1178 1195 self.common = None
1179 1196 # set of pulled head
1180 1197 self.rheads = None
1181 1198 # list of missing changeset to fetch remotely
1182 1199 self.fetch = None
1183 1200 # remote bookmarks data
1184 1201 self.remotebookmarks = remotebookmarks
1185 1202 # result of changegroup pulling (used as return code by pull)
1186 1203 self.cgresult = None
1187 1204 # list of step already done
1188 1205 self.stepsdone = set()
1189 1206 # Whether we attempted a clone from pre-generated bundles.
1190 1207 self.clonebundleattempted = False
1191 1208
1192 1209 @util.propertycache
1193 1210 def pulledsubset(self):
1194 1211 """heads of the set of changeset target by the pull"""
1195 1212 # compute target subset
1196 1213 if self.heads is None:
1197 1214 # We pulled everything possible
1198 1215 # sync on everything common
1199 1216 c = set(self.common)
1200 1217 ret = list(self.common)
1201 1218 for n in self.rheads:
1202 1219 if n not in c:
1203 1220 ret.append(n)
1204 1221 return ret
1205 1222 else:
1206 1223 # We pulled a specific subset
1207 1224 # sync on this subset
1208 1225 return self.heads

    @util.propertycache
    def canusebundle2(self):
        return not _forcebundle1(self)

    @util.propertycache
    def remotebundle2caps(self):
        return bundle2.bundle2caps(self.remote)

    def gettransaction(self):
        # deprecated; talk to trmanager directly
        return self.trmanager.transaction()

class transactionmanager(util.transactional):
    """An object to manage the life cycle of a transaction

    It creates the transaction on demand and calls the appropriate hooks when
    closing the transaction."""
    def __init__(self, repo, source, url):
        self.repo = repo
        self.source = source
        self.url = url
        self._tr = None

    def transaction(self):
        """Return an open transaction object, constructing if necessary"""
        if not self._tr:
            trname = '%s\n%s' % (self.source, util.hidepassword(self.url))
            self._tr = self.repo.transaction(trname)
            self._tr.hookargs['source'] = self.source
            self._tr.hookargs['url'] = self.url
        return self._tr

    def close(self):
        """close transaction if created"""
        if self._tr is not None:
            self._tr.close()

    def release(self):
        """release transaction if created"""
        if self._tr is not None:
            self._tr.release()

def pull(repo, remote, heads=None, force=False, bookmarks=(), opargs=None,
         streamclonerequested=None):
    """Fetch repository data from a remote.

    This is the main function used to retrieve data from a remote repository.

    ``repo`` is the local repository to clone into.
    ``remote`` is a peer instance.
    ``heads`` is an iterable of revisions we want to pull. ``None`` (the
    default) means to pull everything from the remote.
    ``bookmarks`` is an iterable of bookmarks requested to be pulled. By
    default, all remote bookmarks are pulled.
    ``opargs`` are additional keyword arguments to pass to ``pulloperation``
    initialization.
    ``streamclonerequested`` is a boolean indicating whether a "streaming
    clone" is requested. A "streaming clone" is essentially a raw file copy
    of revlogs from the server. This only works when the local repository is
    empty. The default value of ``None`` means to respect the server
    configuration for preferring stream clones.

    Returns the ``pulloperation`` created for this pull.
    """
    if opargs is None:
        opargs = {}
    pullop = pulloperation(repo, remote, heads, force, bookmarks=bookmarks,
                           streamclonerequested=streamclonerequested, **opargs)

    peerlocal = pullop.remote.local()
    if peerlocal:
        missing = set(peerlocal.requirements) - pullop.repo.supported
        if missing:
            msg = _("required features are not"
                    " supported in the destination:"
                    " %s") % (', '.join(sorted(missing)))
            raise error.Abort(msg)

    wlock = lock = None
    try:
        wlock = pullop.repo.wlock()
        lock = pullop.repo.lock()
        pullop.trmanager = transactionmanager(repo, 'pull', remote.url())
        # This should ideally be in _pullbundle2(). However, it needs to run
        # before discovery to avoid extra work.
        _maybeapplyclonebundle(pullop)
        streamclone.maybeperformlegacystreamclone(pullop)
        _pulldiscovery(pullop)
        if pullop.canusebundle2:
            _pullbundle2(pullop)
        _pullchangeset(pullop)
        _pullphase(pullop)
        _pullbookmarks(pullop)
        _pullobsolete(pullop)
        pullop.trmanager.close()
    finally:
        lockmod.release(pullop.trmanager, lock, wlock)

    # storing remotenames
    if repo.ui.configbool('experimental', 'remotenames'):
        remotenames.pullremotenames(repo, remote)

    return pullop

# list of steps to perform discovery before pull
pulldiscoveryorder = []

# Mapping between step name and function
#
# This exists to help extensions wrap steps if necessary
pulldiscoverymapping = {}

def pulldiscovery(stepname):
    """decorator for functions performing discovery before pull

    The function is added to the step -> function mapping and appended to the
    list of steps. Beware that decorated functions will be added in order
    (this may matter).

    You can only use this decorator for a new step; if you want to wrap a step
    from an extension, change the pulldiscoverymapping dictionary directly."""
    def dec(func):
        assert stepname not in pulldiscoverymapping
        pulldiscoverymapping[stepname] = func
        pulldiscoveryorder.append(stepname)
        return func
    return dec
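The ordered step-registry pattern above (a list of step names plus a name-to-function mapping, populated by a decorator) can be sketched in isolation. All names below (`discoverystep`, `rundiscovery`, the fake step) are illustrative, not part of Mercurial's API:

```python
# Minimal sketch of the ordered step registry used by pulldiscovery().
discoveryorder = []    # step names, in registration order
discoverymapping = {}  # step name -> function

def discoverystep(stepname):
    """Register a discovery step under ``stepname`` (hypothetical)."""
    def dec(func):
        # refuse duplicate registrations, as the real decorator does
        assert stepname not in discoverymapping
        discoverymapping[stepname] = func
        discoveryorder.append(stepname)
        return func
    return dec

@discoverystep('changegroup')
def _discoverchangegroup(pullop):
    return 'discovered:%s' % pullop

def rundiscovery(pullop):
    """Run every registered step in order (mirrors _pulldiscovery)."""
    return [discoverymapping[name](pullop) for name in discoveryorder]
```

Extensions that need to wrap an existing step replace the entry in the mapping; the order list stays untouched, so the step still runs at its original position.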

def _pulldiscovery(pullop):
    """Run all discovery steps"""
    for stepname in pulldiscoveryorder:
        step = pulldiscoverymapping[stepname]
        step(pullop)

@pulldiscovery('b1:bookmarks')
def _pullbookmarkbundle1(pullop):
    """fetch bookmark data in the bundle1 case

    If not using bundle2, we have to fetch bookmarks before changeset
    discovery to reduce the chance and impact of race conditions."""
    if pullop.remotebookmarks is not None:
        return
    if pullop.canusebundle2 and 'listkeys' in pullop.remotebundle2caps:
        # all known bundle2 servers now support listkeys, but let's be nice
        # with new implementations.
        return
    books = pullop.remote.listkeys('bookmarks')
    pullop.remotebookmarks = bookmod.unhexlifybookmarks(books)


@pulldiscovery('changegroup')
def _pulldiscoverychangegroup(pullop):
    """discovery phase for the pull

    Currently handles changeset discovery only; will change to handle all
    discovery at some point."""
    tmp = discovery.findcommonincoming(pullop.repo,
                                       pullop.remote,
                                       heads=pullop.heads,
                                       force=pullop.force)
    common, fetch, rheads = tmp
    nm = pullop.repo.unfiltered().changelog.nodemap
    if fetch and rheads:
        # If a remote head is filtered locally, put it back in common.
        #
        # This is a hackish solution to catch most "common but locally
        # hidden" situations. We do not perform discovery on the unfiltered
        # repository because it ends up doing a pathological number of round
        # trips for a huge amount of changesets we do not care about.
        #
        # If a set of such "common but filtered" changesets exists on the
        # server but does not include a remote head, we'll not be able to
        # detect it.
        scommon = set(common)
        for n in rheads:
            if n in nm:
                if n not in scommon:
                    common.append(n)
        if set(rheads).issubset(set(common)):
            fetch = []
    pullop.common = common
    pullop.fetch = fetch
    pullop.rheads = rheads

def _pullbundle2(pullop):
    """pull data using bundle2

    For now, the only supported data are changegroups."""
    kwargs = {'bundlecaps': caps20to10(pullop.repo)}

    # At the moment we don't do stream clones over bundle2. If that is
    # implemented then here's where the check for that will go.
    streaming = False

    # pulling changegroup
    pullop.stepsdone.add('changegroup')

    kwargs['common'] = pullop.common
    kwargs['heads'] = pullop.heads or pullop.rheads
    kwargs['cg'] = pullop.fetch

    ui = pullop.repo.ui
    legacyphase = 'phases' in ui.configlist('devel', 'legacy.exchange')
    hasbinaryphase = 'heads' in pullop.remotebundle2caps.get('phases', ())
    if not legacyphase and hasbinaryphase:
        kwargs['phases'] = True
        pullop.stepsdone.add('phases')

    if 'listkeys' in pullop.remotebundle2caps:
        if 'phases' not in pullop.stepsdone:
            kwargs['listkeys'] = ['phases']
        if pullop.remotebookmarks is None:
            # make sure to always include bookmark data when migrating
            # `hg incoming --bundle` to using this function.
            kwargs.setdefault('listkeys', []).append('bookmarks')

    # If this is a full pull / clone and the server supports the clone bundles
    # feature, tell the server whether we attempted a clone bundle. The
    # presence of this flag indicates the client supports clone bundles. This
    # will enable the server to treat clients that support clone bundles
    # differently from those that don't.
    if (pullop.remote.capable('clonebundles')
        and pullop.heads is None and list(pullop.common) == [nullid]):
        kwargs['cbattempted'] = pullop.clonebundleattempted

    if streaming:
        pullop.repo.ui.status(_('streaming all changes\n'))
    elif not pullop.fetch:
        pullop.repo.ui.status(_("no changes found\n"))
        pullop.cgresult = 0
    else:
        if pullop.heads is None and list(pullop.common) == [nullid]:
            pullop.repo.ui.status(_("requesting all changes\n"))
    if obsolete.isenabled(pullop.repo, obsolete.exchangeopt):
        remoteversions = bundle2.obsmarkersversion(pullop.remotebundle2caps)
        if obsolete.commonversion(remoteversions) is not None:
            kwargs['obsmarkers'] = True
            pullop.stepsdone.add('obsmarkers')
    _pullbundle2extraprepare(pullop, kwargs)
    bundle = pullop.remote.getbundle('pull', **pycompat.strkwargs(kwargs))
    try:
        op = bundle2.processbundle(pullop.repo, bundle, pullop.gettransaction)
    except bundle2.AbortFromPart as exc:
        pullop.repo.ui.status(_('remote: abort: %s\n') % exc)
        raise error.Abort(_('pull failed on remote'), hint=exc.hint)
    except error.BundleValueError as exc:
        raise error.Abort(_('missing support for %s') % exc)

    if pullop.fetch:
        pullop.cgresult = bundle2.combinechangegroupresults(op)

    # processing phases change
    for namespace, value in op.records['listkeys']:
        if namespace == 'phases':
            _pullapplyphases(pullop, value)

    # processing bookmark update
    for namespace, value in op.records['listkeys']:
        if namespace == 'bookmarks':
            pullop.remotebookmarks = bookmod.unhexlifybookmarks(value)

    # bookmark data were either already there or pulled in the bundle
    if pullop.remotebookmarks is not None:
        _pullbookmarks(pullop)

def _pullbundle2extraprepare(pullop, kwargs):
    """hook function so that extensions can extend the getbundle call"""

def _pullchangeset(pullop):
    """pull changesets from unbundle into the local repo"""
    # We delay opening the transaction as late as possible so we don't open
    # a transaction for nothing and don't break future useful rollback calls.
    if 'changegroup' in pullop.stepsdone:
        return
    pullop.stepsdone.add('changegroup')
    if not pullop.fetch:
        pullop.repo.ui.status(_("no changes found\n"))
        pullop.cgresult = 0
        return
    tr = pullop.gettransaction()
    if pullop.heads is None and list(pullop.common) == [nullid]:
        pullop.repo.ui.status(_("requesting all changes\n"))
    elif pullop.heads is None and pullop.remote.capable('changegroupsubset'):
        # issue1320, avoid a race if remote changed after discovery
        pullop.heads = pullop.rheads

    if pullop.remote.capable('getbundle'):
        # TODO: get bundlecaps from remote
        cg = pullop.remote.getbundle('pull', common=pullop.common,
                                     heads=pullop.heads or pullop.rheads)
    elif pullop.heads is None:
        cg = pullop.remote.changegroup(pullop.fetch, 'pull')
    elif not pullop.remote.capable('changegroupsubset'):
        raise error.Abort(_("partial pull cannot be done because "
                            "other repository doesn't support "
                            "changegroupsubset."))
    else:
        cg = pullop.remote.changegroupsubset(pullop.fetch, pullop.heads, 'pull')
    bundleop = bundle2.applybundle(pullop.repo, cg, tr, 'pull',
                                   pullop.remote.url())
    pullop.cgresult = bundle2.combinechangegroupresults(bundleop)

def _pullphase(pullop):
    # Get remote phases data from remote
    if 'phases' in pullop.stepsdone:
        return
    remotephases = pullop.remote.listkeys('phases')
    _pullapplyphases(pullop, remotephases)

def _pullapplyphases(pullop, remotephases):
    """apply phase movement from observed remote state"""
    if 'phases' in pullop.stepsdone:
        return
    pullop.stepsdone.add('phases')
    publishing = bool(remotephases.get('publishing', False))
    if remotephases and not publishing:
        # remote is new and non-publishing
        pheads, _dr = phases.analyzeremotephases(pullop.repo,
                                                 pullop.pulledsubset,
                                                 remotephases)
        dheads = pullop.pulledsubset
    else:
        # Remote is old or publishing; all common changesets
        # should be seen as public
        pheads = pullop.pulledsubset
        dheads = []
    unfi = pullop.repo.unfiltered()
    phase = unfi._phasecache.phase
    rev = unfi.changelog.nodemap.get
    public = phases.public
    draft = phases.draft

    # exclude changesets already public locally and update the others
    pheads = [pn for pn in pheads if phase(unfi, rev(pn)) > public]
    if pheads:
        tr = pullop.gettransaction()
        phases.advanceboundary(pullop.repo, tr, public, pheads)

    # exclude changesets already draft locally and update the others
    dheads = [pn for pn in dheads if phase(unfi, rev(pn)) > draft]
    if dheads:
        tr = pullop.gettransaction()
        phases.advanceboundary(pullop.repo, tr, draft, dheads)

def _pullbookmarks(pullop):
    """process the remote bookmark information to update the local one"""
    if 'bookmarks' in pullop.stepsdone:
        return
    pullop.stepsdone.add('bookmarks')
    repo = pullop.repo
    remotebookmarks = pullop.remotebookmarks
    bookmod.updatefromremote(repo.ui, repo, remotebookmarks,
                             pullop.remote.url(),
                             pullop.gettransaction,
                             explicit=pullop.explicitbookmarks)

def _pullobsolete(pullop):
    """utility function to pull obsolete markers from a remote

    `gettransaction` is a function that returns the pull transaction, creating
    one if necessary. We return the transaction to inform the calling code that
    a new transaction has been created (when applicable).

    Exists mostly to allow overriding for experimentation purposes."""
    if 'obsmarkers' in pullop.stepsdone:
        return
    pullop.stepsdone.add('obsmarkers')
    tr = None
    if obsolete.isenabled(pullop.repo, obsolete.exchangeopt):
        pullop.repo.ui.debug('fetching remote obsolete markers\n')
        remoteobs = pullop.remote.listkeys('obsolete')
        if 'dump0' in remoteobs:
            tr = pullop.gettransaction()
            markers = []
            for key in sorted(remoteobs, reverse=True):
                if key.startswith('dump'):
                    data = util.b85decode(remoteobs[key])
                    version, newmarks = obsolete._readmarkers(data)
                    markers += newmarks
            if markers:
                pullop.repo.obsstore.add(tr, markers)
            pullop.repo.invalidatevolatilesets()
    return tr

def caps20to10(repo):
    """return a set with appropriate options to use bundle20 during getbundle"""
    caps = {'HG20'}
    capsblob = bundle2.encodecaps(bundle2.getrepocaps(repo))
    caps.add('bundle2=' + urlreq.quote(capsblob))
    return caps

# List of names of steps to perform for a bundle2 for getbundle, order matters.
getbundle2partsorder = []

# Mapping between step name and function
#
# This exists to help extensions wrap steps if necessary
getbundle2partsmapping = {}

def getbundle2partsgenerator(stepname, idx=None):
    """decorator for functions generating bundle2 parts for getbundle

    The function is added to the step -> function mapping and appended to the
    list of steps. Beware that decorated functions will be added in order
    (this may matter).

    You can only use this decorator for new steps; if you want to wrap a step
    from an extension, change the getbundle2partsmapping dictionary directly."""
    def dec(func):
        assert stepname not in getbundle2partsmapping
        getbundle2partsmapping[stepname] = func
        if idx is None:
            getbundle2partsorder.append(stepname)
        else:
            getbundle2partsorder.insert(idx, stepname)
        return func
    return dec

def bundle2requested(bundlecaps):
    if bundlecaps is not None:
        return any(cap.startswith('HG2') for cap in bundlecaps)
    return False

def getbundlechunks(repo, source, heads=None, common=None, bundlecaps=None,
                    **kwargs):
    """Return chunks constituting a bundle's raw data.

    Could be a bundle HG10 or a bundle HG20 depending on the bundlecaps
    passed.

    Returns an iterator over raw chunks (of varying sizes).
    """
    kwargs = pycompat.byteskwargs(kwargs)
    usebundle2 = bundle2requested(bundlecaps)
    # bundle10 case
    if not usebundle2:
        if bundlecaps and not kwargs.get('cg', True):
            raise ValueError(_('request for bundle10 must include changegroup'))

        if kwargs:
            raise ValueError(_('unsupported getbundle arguments: %s')
                             % ', '.join(sorted(kwargs.keys())))
        outgoing = _computeoutgoing(repo, heads, common)
        return changegroup.makestream(repo, outgoing, '01', source,
                                      bundlecaps=bundlecaps)

    # bundle20 case
    b2caps = {}
    for bcaps in bundlecaps:
        if bcaps.startswith('bundle2='):
            blob = urlreq.unquote(bcaps[len('bundle2='):])
            b2caps.update(bundle2.decodecaps(blob))
    bundler = bundle2.bundle20(repo.ui, b2caps)

    kwargs['heads'] = heads
    kwargs['common'] = common

    for name in getbundle2partsorder:
        func = getbundle2partsmapping[name]
        func(bundler, repo, source, bundlecaps=bundlecaps, b2caps=b2caps,
             **pycompat.strkwargs(kwargs))

    return bundler.getchunks()

@getbundle2partsgenerator('changegroup')
def _getbundlechangegrouppart(bundler, repo, source, bundlecaps=None,
                              b2caps=None, heads=None, common=None, **kwargs):
    """add a changegroup part to the requested bundle"""
    cgstream = None
    if kwargs.get('cg', True):
        # build changegroup bundle here.
        version = '01'
        cgversions = b2caps.get('changegroup')
        if cgversions:  # 3.1 and 3.2 ship with an empty value
            cgversions = [v for v in cgversions
                          if v in changegroup.supportedoutgoingversions(repo)]
            if not cgversions:
                raise ValueError(_('no common changegroup version'))
            version = max(cgversions)
        outgoing = _computeoutgoing(repo, heads, common)
        if outgoing.missing:
            cgstream = changegroup.makestream(repo, outgoing, version, source,
                                              bundlecaps=bundlecaps)

    if cgstream:
        part = bundler.newpart('changegroup', data=cgstream)
        if cgversions:
            part.addparam('version', version)
        part.addparam('nbchanges', '%d' % len(outgoing.missing),
                      mandatory=False)
        if 'treemanifest' in repo.requirements:
            part.addparam('treemanifest', '1')

@getbundle2partsgenerator('listkeys')
def _getbundlelistkeysparts(bundler, repo, source, bundlecaps=None,
                            b2caps=None, **kwargs):
    """add parts containing listkeys namespaces to the requested bundle"""
    listkeys = kwargs.get('listkeys', ())
    for namespace in listkeys:
        part = bundler.newpart('listkeys')
        part.addparam('namespace', namespace)
        keys = repo.listkeys(namespace).items()
        part.data = pushkey.encodekeys(keys)

@getbundle2partsgenerator('obsmarkers')
def _getbundleobsmarkerpart(bundler, repo, source, bundlecaps=None,
                            b2caps=None, heads=None, **kwargs):
    """add an obsolescence markers part to the requested bundle"""
    if kwargs.get('obsmarkers', False):
        if heads is None:
            heads = repo.heads()
        subset = [c.node() for c in repo.set('::%ln', heads)]
        markers = repo.obsstore.relevantmarkers(subset)
        markers = sorted(markers)
        bundle2.buildobsmarkerspart(bundler, markers)

@getbundle2partsgenerator('phases')
def _getbundlephasespart(bundler, repo, source, bundlecaps=None,
                         b2caps=None, heads=None, **kwargs):
    """add a phase heads part to the requested bundle"""
    if kwargs.get('phases', False):
        if 'heads' not in b2caps.get('phases'):
            raise ValueError(_('no common phases exchange method'))
        if heads is None:
            heads = repo.heads()

        headsbyphase = collections.defaultdict(set)
        if repo.publishing():
            headsbyphase[phases.public] = heads
        else:
            # find the appropriate heads to move

            phase = repo._phasecache.phase
            node = repo.changelog.node
            rev = repo.changelog.rev
            for h in heads:
                headsbyphase[phase(repo, rev(h))].add(h)
            seenphases = list(headsbyphase.keys())

            # We do not handle anything but public and draft phases for now
            if seenphases:
                assert max(seenphases) <= phases.draft

            # if the client is pulling non-public changesets, we need to find
            # intermediate public heads.
            draftheads = headsbyphase.get(phases.draft, set())
            if draftheads:
                publicheads = headsbyphase.get(phases.public, set())

                revset = 'heads(only(%ln, %ln) and public())'
                extraheads = repo.revs(revset, draftheads, publicheads)
                for r in extraheads:
                    headsbyphase[phases.public].add(node(r))

        # transform data in a format used by the encoding function
        phasemapping = []
        for phase in phases.allphases:
            phasemapping.append(sorted(headsbyphase[phase]))

        # generate the actual part
        phasedata = phases.binaryencode(phasemapping)
        bundler.newpart('phase-heads', data=phasedata)

@getbundle2partsgenerator('hgtagsfnodes')
def _getbundletagsfnodes(bundler, repo, source, bundlecaps=None,
                         b2caps=None, heads=None, common=None,
                         **kwargs):
    """Transfer the .hgtags filenodes mapping.

    Only values for heads in this bundle will be transferred.

    The part data consists of pairs of 20 byte changeset nodes and .hgtags
    filenodes raw values.
    """
    # Don't send unless:
    # - changesets are being exchanged,
    # - the client supports it.
    if not (kwargs.get('cg', True) and 'hgtagsfnodes' in b2caps):
        return

    outgoing = _computeoutgoing(repo, heads, common)
    bundle2.addparttagsfnodescache(repo, bundler, outgoing)

def check_heads(repo, their_heads, context):
    """check if the heads of a repo have been modified

    Used by peer for unbundling.
    """
    heads = repo.heads()
    heads_hash = hashlib.sha1(''.join(sorted(heads))).digest()
    if not (their_heads == ['force'] or their_heads == heads or
            their_heads == ['hashed', heads_hash]):
        # someone else committed/pushed/unbundled while we
        # were transferring data
        raise error.PushRaced('repository changed while %s - '
                              'please try again' % context)
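The 'hashed' comparison above relies on both sides computing SHA-1 over the sorted concatenation of the head nodes, so the digest is independent of the order heads are listed in. A standalone sketch of that digest (the node values below are made up; real nodes are 20-byte binary changeset hashes):

```python
import hashlib

def headshash(heads):
    """SHA-1 digest of sorted, concatenated head nodes, as in check_heads()."""
    return hashlib.sha1(b''.join(sorted(heads))).digest()

# Two fake 20-byte "nodes"; sorting makes the digest order-independent.
n1 = b'\x01' * 20
n2 = b'\x02' * 20
same = headshash([n1, n2]) == headshash([n2, n1])
```

A client sends `['hashed', digest]` instead of the full head list, and the server recomputes the digest over its current heads to detect a push race cheaply.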

def unbundle(repo, cg, heads, source, url):
    """Apply a bundle to a repo.

    This function makes sure the repo is locked during the application and has
    a mechanism to check that no push race occurred between the creation of the
    bundle and its application.

    If the push was raced, a PushRaced exception is raised."""
    r = 0
    # need a transaction when processing a bundle2 stream
    # [wlock, lock, tr] - needs to be an array so nested functions can modify it
    lockandtr = [None, None, None]
    recordout = None
    # quick fix for output mismatch with bundle2 in 3.4
    captureoutput = repo.ui.configbool('experimental', 'bundle2-output-capture')
    if url.startswith('remote:http:') or url.startswith('remote:https:'):
        captureoutput = True
    try:
        # note: outside bundle1, 'heads' is expected to be empty and this
        # 'check_heads' call will be a no-op
        check_heads(repo, heads, 'uploading changes')
        # push can proceed
        if not isinstance(cg, bundle2.unbundle20):
            # legacy case: bundle1 (changegroup 01)
            txnname = "\n".join([source, util.hidepassword(url)])
            with repo.lock(), repo.transaction(txnname) as tr:
                op = bundle2.applybundle(repo, cg, tr, source, url)
                r = bundle2.combinechangegroupresults(op)
        else:
            r = None
            try:
                def gettransaction():
                    if not lockandtr[2]:
                        lockandtr[0] = repo.wlock()
                        lockandtr[1] = repo.lock()
                        lockandtr[2] = repo.transaction(source)
                        lockandtr[2].hookargs['source'] = source
                        lockandtr[2].hookargs['url'] = url
                        lockandtr[2].hookargs['bundle2'] = '1'
                    return lockandtr[2]

                # Do greedy locking by default until we're satisfied with lazy
                # locking.
                if not repo.ui.configbool('experimental', 'bundle2lazylocking'):
                    gettransaction()

                op = bundle2.bundleoperation(repo, gettransaction,
                                             captureoutput=captureoutput)
                try:
                    op = bundle2.processbundle(repo, cg, op=op)
                finally:
                    r = op.reply
                    if captureoutput and r is not None:
                        repo.ui.pushbuffer(error=True, subproc=True)
                        def recordout(output):
                            r.newpart('output', data=output, mandatory=False)
                if lockandtr[2] is not None:
                    lockandtr[2].close()
            except BaseException as exc:
                exc.duringunbundle2 = True
                if captureoutput and r is not None:
                    parts = exc._bundle2salvagedoutput = r.salvageoutput()
                    def recordout(output):
                        part = bundle2.bundlepart('output', data=output,
                                                  mandatory=False)
                        parts.append(part)
                raise
    finally:
        lockmod.release(lockandtr[2], lockandtr[1], lockandtr[0])
        if recordout is not None:
            recordout(repo.ui.popbuffer())
    return r

def _maybeapplyclonebundle(pullop):
    """Apply a clone bundle from a remote, if possible."""

    repo = pullop.repo
    remote = pullop.remote

    if not repo.ui.configbool('ui', 'clonebundles'):
        return

    # Only run if local repo is empty.
    if len(repo):
        return

    if pullop.heads:
        return

    if not remote.capable('clonebundles'):
        return

    res = remote._call('clonebundles')

    # If we call the wire protocol command, that's good enough to record the
    # attempt.
    pullop.clonebundleattempted = True

    entries = parseclonebundlesmanifest(repo, res)
    if not entries:
        repo.ui.note(_('no clone bundles available on remote; '
                       'falling back to regular clone\n'))
        return

    entries = filterclonebundleentries(
        repo, entries, streamclonerequested=pullop.streamclonerequested)

    if not entries:
        # There is a thundering herd concern here. However, if a server
        # operator doesn't advertise bundles appropriate for its clients,
        # they deserve what's coming. Furthermore, from a client's
        # perspective, no automatic fallback would mean not being able to
        # clone!
        repo.ui.warn(_('no compatible clone bundles available on server; '
                       'falling back to regular clone\n'))
        repo.ui.warn(_('(you may want to report this to the server '
                       'operator)\n'))
        return

    entries = sortclonebundleentries(repo.ui, entries)

    url = entries[0]['URL']
    repo.ui.status(_('applying clone bundle from %s\n') % url)
    if trypullbundlefromurl(repo.ui, repo, url):
        repo.ui.status(_('finished applying clone bundle\n'))
    # Bundle failed.
    #
    # We abort by default to avoid the thundering herd of
    # clients flooding a server that was expecting expensive
    # clone load to be offloaded.
    elif repo.ui.configbool('ui', 'clonebundlefallback'):
        repo.ui.warn(_('falling back to normal clone\n'))
    else:
        raise error.Abort(_('error applying bundle'),
                          hint=_('if this error persists, consider contacting '
                                 'the server operator or disable clone '
                                 'bundles via '
                                 '"--config ui.clonebundles=false"'))

def parseclonebundlesmanifest(repo, s):
    """Parses the raw text of a clone bundles manifest.

    Returns a list of dicts. The dicts have a ``URL`` key corresponding
    to the URL and other keys are the attributes for the entry.
    """
    m = []
    for line in s.splitlines():
        fields = line.split()
        if not fields:
            continue
        attrs = {'URL': fields[0]}
        for rawattr in fields[1:]:
            key, value = rawattr.split('=', 1)
            key = urlreq.unquote(key)
            value = urlreq.unquote(value)
            attrs[key] = value

            # Parse BUNDLESPEC into components. This makes client-side
            # preferences easier to specify since you can prefer a single
            # component of the BUNDLESPEC.
            if key == 'BUNDLESPEC':
                try:
                    comp, version, params = parsebundlespec(repo, value,
                                                            externalnames=True)
                    attrs['COMPRESSION'] = comp
                    attrs['VERSION'] = version
                except error.InvalidBundleSpecification:
                    pass
                except error.UnsupportedBundleSpecification:
                    pass

        m.append(attrs)

    return m
1981 1998
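The docstring above describes the manifest wire format: one entry per line, a URL followed by space-separated, URL-quoted `KEY=VALUE` attributes. A minimal standalone sketch of that parsing, using `urllib.parse.unquote` in place of Mercurial's `urlreq.unquote` and omitting the BUNDLESPEC expansion:

```python
from urllib.parse import unquote

def parse_manifest(text):
    """Parse clone-bundles manifest lines: URL, then KEY=VALUE attributes."""
    entries = []
    for line in text.splitlines():
        fields = line.split()
        if not fields:
            continue  # skip blank lines, as the real parser does
        attrs = {'URL': fields[0]}
        for rawattr in fields[1:]:
            key, value = rawattr.split('=', 1)
            attrs[unquote(key)] = unquote(value)
        entries.append(attrs)
    return entries

manifest = "https://example.com/full.hg BUNDLESPEC=gzip-v2 REQUIRESNI=true\n"
print(parse_manifest(manifest))
```

The URL and attribute values here are illustrative, not taken from any real server manifest.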
1982 1999 def filterclonebundleentries(repo, entries, streamclonerequested=False):
1983 2000 """Remove incompatible clone bundle manifest entries.
1984 2001
1985 2002 Accepts a list of entries parsed with ``parseclonebundlesmanifest``
1986 2003 and returns a new list consisting of only the entries that this client
1987 2004 should be able to apply.
1988 2005
1989 2006 There is no guarantee we'll be able to apply all returned entries because
1990 2007 the metadata we use to filter on may be missing or wrong.
1991 2008 """
1992 2009 newentries = []
1993 2010 for entry in entries:
1994 2011 spec = entry.get('BUNDLESPEC')
1995 2012 if spec:
1996 2013 try:
1997 2014 comp, version, params = parsebundlespec(repo, spec, strict=True)
1998 2015
1999 2016 # If a stream clone was requested, filter out non-streamclone
2000 2017 # entries.
2001 2018 if streamclonerequested and (comp != 'UN' or version != 's1'):
2002 2019 repo.ui.debug('filtering %s because not a stream clone\n' %
2003 2020 entry['URL'])
2004 2021 continue
2005 2022
2006 2023 except error.InvalidBundleSpecification as e:
2007 2024 repo.ui.debug(str(e) + '\n')
2008 2025 continue
2009 2026 except error.UnsupportedBundleSpecification as e:
2010 2027 repo.ui.debug('filtering %s because unsupported bundle '
2011 2028 'spec: %s\n' % (entry['URL'], str(e)))
2012 2029 continue
2013 2030 # If we don't have a spec and requested a stream clone, we don't know
2014 2031 # what the entry is so don't attempt to apply it.
2015 2032 elif streamclonerequested:
2016 2033 repo.ui.debug('filtering %s because cannot determine if a stream '
2017 2034 'clone bundle\n' % entry['URL'])
2018 2035 continue
2019 2036
2020 2037 if 'REQUIRESNI' in entry and not sslutil.hassni:
2021 2038 repo.ui.debug('filtering %s because SNI not supported\n' %
2022 2039 entry['URL'])
2023 2040 continue
2024 2041
2025 2042 newentries.append(entry)
2026 2043
2027 2044 return newentries
2028 2045
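The filtering above drops entries the client cannot prove it can apply: non-stream entries when a stream clone was requested, entries with unparsable specs, and SNI-requiring entries on clients without SNI. A rough sketch of that shape, where the `'none-packed1'` literal stands in for the real `comp == 'UN' and version == 's1'` check:

```python
def filter_entries(entries, sni_supported=True, stream_requested=False):
    """Keep only the manifest entries this client could plausibly apply."""
    kept = []
    for entry in entries:
        spec = entry.get('BUNDLESPEC')
        # A stream clone needs an entry we can positively identify as one;
        # without a spec (or with a non-stream spec) we must skip it.
        if stream_requested and spec != 'none-packed1':
            continue
        if entry.get('REQUIRESNI') == 'true' and not sni_supported:
            continue
        kept.append(entry)
    return kept

entries = [
    {'URL': 'a', 'BUNDLESPEC': 'gzip-v2'},
    {'URL': 'b', 'BUNDLESPEC': 'none-packed1'},
    {'URL': 'c', 'BUNDLESPEC': 'gzip-v2', 'REQUIRESNI': 'true'},
]
print(filter_entries(entries, sni_supported=False))
```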
2029 2046 class clonebundleentry(object):
2030 2047 """Represents an item in a clone bundles manifest.
2031 2048
2032 2049 This rich class is needed to support sorting since sorted() in Python 3
2033 2050 doesn't support ``cmp`` and our comparison is complex enough that ``key=``
2034 2051 won't work.
2035 2052 """
2036 2053
2037 2054 def __init__(self, value, prefers):
2038 2055 self.value = value
2039 2056 self.prefers = prefers
2040 2057
2041 2058 def _cmp(self, other):
2042 2059 for prefkey, prefvalue in self.prefers:
2043 2060 avalue = self.value.get(prefkey)
2044 2061 bvalue = other.value.get(prefkey)
2045 2062
2046 2063 # Special case for b missing attribute and a matches exactly.
2047 2064 if avalue is not None and bvalue is None and avalue == prefvalue:
2048 2065 return -1
2049 2066
2050 2067 # Special case for a missing attribute and b matches exactly.
2051 2068 if bvalue is not None and avalue is None and bvalue == prefvalue:
2052 2069 return 1
2053 2070
2054 2071 # We can't compare unless attribute present on both.
2055 2072 if avalue is None or bvalue is None:
2056 2073 continue
2057 2074
2058 2075 # Same values should fall back to next attribute.
2059 2076 if avalue == bvalue:
2060 2077 continue
2061 2078
2062 2079 # Exact matches come first.
2063 2080 if avalue == prefvalue:
2064 2081 return -1
2065 2082 if bvalue == prefvalue:
2066 2083 return 1
2067 2084
2068 2085 # Fall back to next attribute.
2069 2086 continue
2070 2087
2071 2088 # If we got here we couldn't sort by attributes and prefers. Fall
2072 2089 # back to index order.
2073 2090 return 0
2074 2091
2075 2092 def __lt__(self, other):
2076 2093 return self._cmp(other) < 0
2077 2094
2078 2095 def __gt__(self, other):
2079 2096 return self._cmp(other) > 0
2080 2097
2081 2098 def __eq__(self, other):
2082 2099 return self._cmp(other) == 0
2083 2100
2084 2101 def __le__(self, other):
2085 2102 return self._cmp(other) <= 0
2086 2103
2087 2104 def __ge__(self, other):
2088 2105 return self._cmp(other) >= 0
2089 2106
2090 2107 def __ne__(self, other):
2091 2108 return self._cmp(other) != 0
2092 2109
2093 2110 def sortclonebundleentries(ui, entries):
2094 2111 prefers = ui.configlist('ui', 'clonebundleprefers')
2095 2112 if not prefers:
2096 2113 return list(entries)
2097 2114
2098 2115 prefers = [p.split('=', 1) for p in prefers]
2099 2116
2100 2117 items = sorted(clonebundleentry(v, prefers) for v in entries)
2101 2118 return [i.value for i in items]
2102 2119
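The rich-comparison class exists because Python 3's `sorted()` dropped `cmp`. The same preference ordering can be sketched with `functools.cmp_to_key`; since `sorted()` is stable, ties naturally fall back to index order, matching the `return 0` case in `_cmp` above:

```python
from functools import cmp_to_key

def make_cmp(prefers):
    """Build a cmp function ranking exact preference matches first."""
    def _cmp(a, b):
        for key, val in prefers:
            av, bv = a.get(key), b.get(key)
            # One side missing the attribute, other side matches exactly.
            if av is not None and bv is None and av == val:
                return -1
            if bv is not None and av is None and bv == val:
                return 1
            # Can't compare unless both sides carry the attribute.
            if av is None or bv is None or av == bv:
                continue
            if av == val:
                return -1
            if bv == val:
                return 1
        return 0  # stable sort preserves manifest (index) order
    return _cmp

entries = [{'URL': 'a', 'COMPRESSION': 'gzip'},
           {'URL': 'b', 'COMPRESSION': 'zstd'}]
prefers = [('COMPRESSION', 'zstd')]
ordered = sorted(entries, key=cmp_to_key(make_cmp(prefers)))
print([e['URL'] for e in ordered])
```

The entry and preference values are illustrative; real preferences come from `ui.clonebundleprefers`.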
2103 2120 def trypullbundlefromurl(ui, repo, url):
2104 2121 """Attempt to apply a bundle from a URL."""
2105 2122 with repo.lock(), repo.transaction('bundleurl') as tr:
2106 2123 try:
2107 2124 fh = urlmod.open(ui, url)
2108 2125 cg = readbundle(ui, fh, 'stream')
2109 2126
2110 2127 if isinstance(cg, streamclone.streamcloneapplier):
2111 2128 cg.apply(repo)
2112 2129 else:
2113 2130 bundle2.applybundle(repo, cg, tr, 'clonebundles', url)
2114 2131 return True
2115 2132 except urlerr.httperror as e:
2116 2133 ui.warn(_('HTTP error fetching bundle: %s\n') % str(e))
2117 2134 except urlerr.urlerror as e:
2118 2135 ui.warn(_('error fetching bundle: %s\n') % e.reason)
2119 2136
2120 2137 return False
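`trypullbundlefromurl` returns a boolean rather than raising on network failure, so the caller can decide whether to fall back to a regular clone. A generic sketch of that pattern, with the opener injected so it can be exercised without a network (the callable names here are hypothetical, not Mercurial APIs):

```python
from urllib.request import urlopen
from urllib.error import HTTPError, URLError

def try_pull_bundle(url, apply_bundle, opener=urlopen):
    """Fetch a bundle from `url` and apply it; return True on success.

    Network errors are reported, not raised, so the caller can fall
    back to a normal clone instead of aborting.
    """
    try:
        with opener(url) as fh:
            apply_bundle(fh)
        return True
    except HTTPError as e:
        print('HTTP error fetching bundle: %s' % e)
    except URLError as e:
        print('error fetching bundle: %s' % e.reason)
    return False

class _StubResponse(object):
    """Minimal context manager standing in for a urlopen response."""
    def __enter__(self):
        return b'bundle-data'
    def __exit__(self, *exc):
        return False

applied = []
ok = try_pull_bundle('http://example.com/bundle.hg',
                     applied.append,
                     opener=lambda url: _StubResponse())
```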
@@ -1,75 +1,77 b''
1 1 # common patterns in tests that can safely be replaced
2 2 from __future__ import absolute_import
3 3
4 4 substitutions = [
5 5 # list of possible compressions
6 6 (br'(zstd,)?zlib,none,bzip2',
7 7 br'$USUAL_COMPRESSIONS$'
8 8 ),
9 9 # capabilities sent through http
10 10 (br'bundlecaps=HG20%2Cbundle2%3DHG20%250A'
11 br'bookmarks%250A'
11 12 br'changegroup%253D01%252C02%250A'
12 13 br'digests%253Dmd5%252Csha1%252Csha512%250A'
13 14 br'error%253Dabort%252Cunsupportedcontent%252Cpushraced%252Cpushkey%250A'
14 15 br'hgtagsfnodes%250A'
15 16 br'listkeys%250A'
16 17 br'phases%253Dheads%250A'
17 18 br'pushkey%250A'
18 19 br'remote-changegroup%253Dhttp%252Chttps',
19 20 # (the replacement patterns)
20 21 br'$USUAL_BUNDLE_CAPS$'
21 22 ),
22 23 # bundle2 capabilities sent through ssh
23 24 (br'bundle2=HG20%0A'
25 br'bookmarks%0A'
24 26 br'changegroup%3D01%2C02%0A'
25 27 br'digests%3Dmd5%2Csha1%2Csha512%0A'
26 28 br'error%3Dabort%2Cunsupportedcontent%2Cpushraced%2Cpushkey%0A'
27 29 br'hgtagsfnodes%0A'
28 30 br'listkeys%0A'
29 31 br'phases%3Dheads%0A'
30 32 br'pushkey%0A'
31 33 br'remote-changegroup%3Dhttp%2Chttps',
32 34 # (replacement patterns)
33 35 br'$USUAL_BUNDLE2_CAPS$'
34 36 ),
35 37 # HTTP log dates
36 38 (br' - - \[\d\d/.../2\d\d\d \d\d:\d\d:\d\d] "GET',
37 39 br' - - [$LOGDATE$] "GET'
38 40 ),
39 41 ]
40 42
41 43 # Various platform error strings, keyed on a common replacement string
42 44 _errors = {
43 45 br'$ENOENT$': (
44 46 # strerror()
45 47 br'No such file or directory',
46 48
47 49 # FormatMessage(ERROR_FILE_NOT_FOUND)
48 50 br'The system cannot find the file specified',
49 51 ),
50 52 br'$ENOTDIR$': (
51 53 # strerror()
52 54 br'Not a directory',
53 55
54 56 # FormatMessage(ERROR_PATH_NOT_FOUND)
55 57 br'The system cannot find the path specified',
56 58 ),
57 59 br'$ECONNRESET$': (
58 60 # strerror()
59 61 br'Connection reset by peer',
60 62
61 63 # FormatMessage(WSAECONNRESET)
62 64 br'An existing connection was forcibly closed by the remote host',
63 65 ),
64 66 br'$EADDRINUSE$': (
65 67 # strerror()
66 68 br'Address already in use',
67 69
68 70 # FormatMessage(WSAEADDRINUSE)
69 71 br'Only one usage of each socket address'
70 72 br' \(protocol/network address/port\) is normally permitted',
71 73 ),
72 74 }
73 75
74 76 for replace, msgs in _errors.items():
75 77 substitutions.extend((m, replace) for m in msgs)
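The `(pattern, replacement)` pairs above are applied to test output with regex substitution, normalizing environment-dependent strings into stable placeholders. A minimal sketch of how such a pass works, using two of the patterns from the list:

```python
import re

# Two substitution pairs copied from the list above.
substitutions = [
    (br'(zstd,)?zlib,none,bzip2', br'$USUAL_COMPRESSIONS$'),
    (br' - - \[\d\d/.../2\d\d\d \d\d:\d\d:\d\d] "GET',
     br' - - [$LOGDATE$] "GET'),
]

def normalize(line):
    """Apply every substitution in order to one line of test output."""
    for pattern, replacement in substitutions:
        line = re.sub(pattern, replacement, line)
    return line

print(normalize(b'compressions: zstd,zlib,none,bzip2'))
```

Note that `$` has no special meaning in a `re.sub` replacement string, so the placeholder text passes through literally.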
@@ -1,2183 +1,2183 b''
1 1 > do_push()
2 2 > {
3 3 > user=$1
4 4 > shift
5 5 > echo "Pushing as user $user"
6 6 > echo 'hgrc = """'
7 7 > sed -n '/\[[ha]/,$p' b/.hg/hgrc | grep -v fakegroups.py
8 8 > echo '"""'
9 9 > if test -f acl.config; then
10 10 > echo 'acl.config = """'
11 11 > cat acl.config
12 12 > echo '"""'
13 13 > fi
14 14 > # On AIX /etc/profile sets LOGNAME read-only. So
15 15 > # LOGNAME=$user hg --cwd a --debug push ../b
16 16 > # fails with "This variable is read only."
17 17 > # Use env to work around this.
18 18 > env LOGNAME=$user hg --cwd a --debug push ../b
19 19 > hg --cwd b rollback
20 20 > hg --cwd b --quiet tip
21 21 > echo
22 22 > }
23 23
24 24 > init_config()
25 25 > {
26 26 > cat > fakegroups.py <<EOF
27 27 > from hgext import acl
28 28 > def fakegetusers(ui, group):
29 29 > try:
30 30 > return acl._getusersorig(ui, group)
31 31 > except:
32 32 > return ["fred", "betty"]
33 33 > acl._getusersorig = acl._getusers
34 34 > acl._getusers = fakegetusers
35 35 > EOF
36 36 > rm -f acl.config
37 37 > cat > $config <<EOF
38 38 > [hooks]
39 39 > pretxnchangegroup.acl = python:hgext.acl.hook
40 40 > [acl]
41 41 > sources = push
42 42 > [extensions]
43 43 > f=`pwd`/fakegroups.py
44 44 > EOF
45 45 > }
46 46
47 47 $ hg init a
48 48 $ cd a
49 49 $ mkdir foo foo/Bar quux
50 50 $ echo 'in foo' > foo/file.txt
51 51 $ echo 'in foo/Bar' > foo/Bar/file.txt
52 52 $ echo 'in quux' > quux/file.py
53 53 $ hg add -q
54 54 $ hg ci -m 'add files' -d '1000000 0'
55 55 $ echo >> foo/file.txt
56 56 $ hg ci -m 'change foo/file' -d '1000001 0'
57 57 $ echo >> foo/Bar/file.txt
58 58 $ hg ci -m 'change foo/Bar/file' -d '1000002 0'
59 59 $ echo >> quux/file.py
60 60 $ hg ci -m 'change quux/file' -d '1000003 0'
61 61 $ hg tip --quiet
62 62 3:911600dab2ae
63 63
64 64 $ cd ..
65 65 $ hg clone -r 0 a b
66 66 adding changesets
67 67 adding manifests
68 68 adding file changes
69 69 added 1 changesets with 3 changes to 3 files
70 70 new changesets 6675d58eff77
71 71 updating to branch default
72 72 3 files updated, 0 files merged, 0 files removed, 0 files unresolved
73 73
74 74 $ config=b/.hg/hgrc
75 75
76 76 Extension disabled for lack of a hook
77 77
78 78 $ do_push fred
79 79 Pushing as user fred
80 80 hgrc = """
81 81 """
82 82 pushing to ../b
83 83 query 1; heads
84 84 searching for changes
85 85 all remote heads known locally
86 86 listing keys for "phases"
87 87 checking for updated bookmarks
88 88 listing keys for "bookmarks"
89 89 listing keys for "bookmarks"
90 90 3 changesets found
91 91 list of changesets:
92 92 ef1ea85a6374b77d6da9dcda9541f498f2d17df7
93 93 f9cafe1212c8c6fa1120d14a556e18cc44ff8bdd
94 94 911600dab2ae7a9baff75958b84fe606851ce955
95 95 bundle2-output-bundle: "HG20", 5 parts total
96 bundle2-output-part: "replycaps" 168 bytes payload
96 bundle2-output-part: "replycaps" 178 bytes payload
97 97 bundle2-output-part: "check:phases" 24 bytes payload
98 98 bundle2-output-part: "check:heads" streamed payload
99 99 bundle2-output-part: "changegroup" (params: 1 mandatory) streamed payload
100 100 bundle2-output-part: "phase-heads" 24 bytes payload
101 101 bundle2-input-bundle: with-transaction
102 102 bundle2-input-part: "replycaps" supported
103 bundle2-input-part: total payload size 168
103 bundle2-input-part: total payload size 178
104 104 bundle2-input-part: "check:phases" supported
105 105 bundle2-input-part: total payload size 24
106 106 bundle2-input-part: "check:heads" supported
107 107 bundle2-input-part: total payload size 20
108 108 bundle2-input-part: "changegroup" (params: 1 mandatory) supported
109 109 adding changesets
110 110 add changeset ef1ea85a6374
111 111 add changeset f9cafe1212c8
112 112 add changeset 911600dab2ae
113 113 adding manifests
114 114 adding file changes
115 115 adding foo/Bar/file.txt revisions
116 116 adding foo/file.txt revisions
117 117 adding quux/file.py revisions
118 118 added 3 changesets with 3 changes to 3 files
119 119 bundle2-input-part: total payload size 1553
120 120 bundle2-input-part: "phase-heads" supported
121 121 bundle2-input-part: total payload size 24
122 122 bundle2-input-bundle: 4 parts total
123 123 updating the branch cache
124 124 bundle2-output-bundle: "HG20", 1 parts total
125 125 bundle2-output-part: "reply:changegroup" (advisory) (params: 0 advisory) empty payload
126 126 bundle2-input-bundle: no-transaction
127 127 bundle2-input-part: "reply:changegroup" (advisory) (params: 0 advisory) supported
128 128 bundle2-input-bundle: 0 parts total
129 129 listing keys for "phases"
130 130 repository tip rolled back to revision 0 (undo push)
131 131 0:6675d58eff77
132 132
133 133
134 134 $ echo '[hooks]' >> $config
135 135 $ echo 'pretxnchangegroup.acl = python:hgext.acl.hook' >> $config
136 136
137 137 Extension disabled for lack of acl.sources
138 138
139 139 $ do_push fred
140 140 Pushing as user fred
141 141 hgrc = """
142 142 [hooks]
143 143 pretxnchangegroup.acl = python:hgext.acl.hook
144 144 """
145 145 pushing to ../b
146 146 query 1; heads
147 147 searching for changes
148 148 all remote heads known locally
149 149 listing keys for "phases"
150 150 checking for updated bookmarks
151 151 listing keys for "bookmarks"
152 152 listing keys for "bookmarks"
153 153 3 changesets found
154 154 list of changesets:
155 155 ef1ea85a6374b77d6da9dcda9541f498f2d17df7
156 156 f9cafe1212c8c6fa1120d14a556e18cc44ff8bdd
157 157 911600dab2ae7a9baff75958b84fe606851ce955
158 158 bundle2-output-bundle: "HG20", 5 parts total
159 bundle2-output-part: "replycaps" 168 bytes payload
159 bundle2-output-part: "replycaps" 178 bytes payload
160 160 bundle2-output-part: "check:phases" 24 bytes payload
161 161 bundle2-output-part: "check:heads" streamed payload
162 162 bundle2-output-part: "changegroup" (params: 1 mandatory) streamed payload
163 163 bundle2-output-part: "phase-heads" 24 bytes payload
164 164 bundle2-input-bundle: with-transaction
165 165 bundle2-input-part: "replycaps" supported
166 bundle2-input-part: total payload size 168
166 bundle2-input-part: total payload size 178
167 167 bundle2-input-part: "check:phases" supported
168 168 bundle2-input-part: total payload size 24
169 169 bundle2-input-part: "check:heads" supported
170 170 bundle2-input-part: total payload size 20
171 171 bundle2-input-part: "changegroup" (params: 1 mandatory) supported
172 172 adding changesets
173 173 add changeset ef1ea85a6374
174 174 add changeset f9cafe1212c8
175 175 add changeset 911600dab2ae
176 176 adding manifests
177 177 adding file changes
178 178 adding foo/Bar/file.txt revisions
179 179 adding foo/file.txt revisions
180 180 adding quux/file.py revisions
181 181 added 3 changesets with 3 changes to 3 files
182 182 calling hook pretxnchangegroup.acl: hgext.acl.hook
183 183 acl: changes have source "push" - skipping
184 184 bundle2-input-part: total payload size 1553
185 185 bundle2-input-part: "phase-heads" supported
186 186 bundle2-input-part: total payload size 24
187 187 bundle2-input-bundle: 4 parts total
188 188 updating the branch cache
189 189 bundle2-output-bundle: "HG20", 1 parts total
190 190 bundle2-output-part: "reply:changegroup" (advisory) (params: 0 advisory) empty payload
191 191 bundle2-input-bundle: no-transaction
192 192 bundle2-input-part: "reply:changegroup" (advisory) (params: 0 advisory) supported
193 193 bundle2-input-bundle: 0 parts total
194 194 listing keys for "phases"
195 195 repository tip rolled back to revision 0 (undo push)
196 196 0:6675d58eff77
197 197
198 198
199 199 No [acl.allow]/[acl.deny]
200 200
201 201 $ echo '[acl]' >> $config
202 202 $ echo 'sources = push' >> $config
203 203 $ do_push fred
204 204 Pushing as user fred
205 205 hgrc = """
206 206 [hooks]
207 207 pretxnchangegroup.acl = python:hgext.acl.hook
208 208 [acl]
209 209 sources = push
210 210 """
211 211 pushing to ../b
212 212 query 1; heads
213 213 searching for changes
214 214 all remote heads known locally
215 215 listing keys for "phases"
216 216 checking for updated bookmarks
217 217 listing keys for "bookmarks"
218 218 listing keys for "bookmarks"
219 219 3 changesets found
220 220 list of changesets:
221 221 ef1ea85a6374b77d6da9dcda9541f498f2d17df7
222 222 f9cafe1212c8c6fa1120d14a556e18cc44ff8bdd
223 223 911600dab2ae7a9baff75958b84fe606851ce955
224 224 bundle2-output-bundle: "HG20", 5 parts total
225 bundle2-output-part: "replycaps" 168 bytes payload
225 bundle2-output-part: "replycaps" 178 bytes payload
226 226 bundle2-output-part: "check:phases" 24 bytes payload
227 227 bundle2-output-part: "check:heads" streamed payload
228 228 bundle2-output-part: "changegroup" (params: 1 mandatory) streamed payload
229 229 bundle2-output-part: "phase-heads" 24 bytes payload
230 230 bundle2-input-bundle: with-transaction
231 231 bundle2-input-part: "replycaps" supported
232 bundle2-input-part: total payload size 168
232 bundle2-input-part: total payload size 178
233 233 bundle2-input-part: "check:phases" supported
234 234 bundle2-input-part: total payload size 24
235 235 bundle2-input-part: "check:heads" supported
236 236 bundle2-input-part: total payload size 20
237 237 bundle2-input-part: "changegroup" (params: 1 mandatory) supported
238 238 adding changesets
239 239 add changeset ef1ea85a6374
240 240 add changeset f9cafe1212c8
241 241 add changeset 911600dab2ae
242 242 adding manifests
243 243 adding file changes
244 244 adding foo/Bar/file.txt revisions
245 245 adding foo/file.txt revisions
246 246 adding quux/file.py revisions
247 247 added 3 changesets with 3 changes to 3 files
248 248 calling hook pretxnchangegroup.acl: hgext.acl.hook
249 249 acl: checking access for user "fred"
250 250 acl: acl.allow.branches not enabled
251 251 acl: acl.deny.branches not enabled
252 252 acl: acl.allow not enabled
253 253 acl: acl.deny not enabled
254 254 acl: branch access granted: "ef1ea85a6374" on branch "default"
255 255 acl: path access granted: "ef1ea85a6374"
256 256 acl: branch access granted: "f9cafe1212c8" on branch "default"
257 257 acl: path access granted: "f9cafe1212c8"
258 258 acl: branch access granted: "911600dab2ae" on branch "default"
259 259 acl: path access granted: "911600dab2ae"
260 260 bundle2-input-part: total payload size 1553
261 261 bundle2-input-part: "phase-heads" supported
262 262 bundle2-input-part: total payload size 24
263 263 bundle2-input-bundle: 4 parts total
264 264 updating the branch cache
265 265 bundle2-output-bundle: "HG20", 1 parts total
266 266 bundle2-output-part: "reply:changegroup" (advisory) (params: 0 advisory) empty payload
267 267 bundle2-input-bundle: no-transaction
268 268 bundle2-input-part: "reply:changegroup" (advisory) (params: 0 advisory) supported
269 269 bundle2-input-bundle: 0 parts total
270 270 listing keys for "phases"
271 271 repository tip rolled back to revision 0 (undo push)
272 272 0:6675d58eff77
273 273
274 274
275 275 Empty [acl.allow]
276 276
277 277 $ echo '[acl.allow]' >> $config
278 278 $ do_push fred
279 279 Pushing as user fred
280 280 hgrc = """
281 281 [hooks]
282 282 pretxnchangegroup.acl = python:hgext.acl.hook
283 283 [acl]
284 284 sources = push
285 285 [acl.allow]
286 286 """
287 287 pushing to ../b
288 288 query 1; heads
289 289 searching for changes
290 290 all remote heads known locally
291 291 listing keys for "phases"
292 292 checking for updated bookmarks
293 293 listing keys for "bookmarks"
294 294 listing keys for "bookmarks"
295 295 3 changesets found
296 296 list of changesets:
297 297 ef1ea85a6374b77d6da9dcda9541f498f2d17df7
298 298 f9cafe1212c8c6fa1120d14a556e18cc44ff8bdd
299 299 911600dab2ae7a9baff75958b84fe606851ce955
300 300 bundle2-output-bundle: "HG20", 5 parts total
301 bundle2-output-part: "replycaps" 168 bytes payload
301 bundle2-output-part: "replycaps" 178 bytes payload
302 302 bundle2-output-part: "check:phases" 24 bytes payload
303 303 bundle2-output-part: "check:heads" streamed payload
304 304 bundle2-output-part: "changegroup" (params: 1 mandatory) streamed payload
305 305 bundle2-output-part: "phase-heads" 24 bytes payload
306 306 bundle2-input-bundle: with-transaction
307 307 bundle2-input-part: "replycaps" supported
308 bundle2-input-part: total payload size 168
308 bundle2-input-part: total payload size 178
309 309 bundle2-input-part: "check:phases" supported
310 310 bundle2-input-part: total payload size 24
311 311 bundle2-input-part: "check:heads" supported
312 312 bundle2-input-part: total payload size 20
313 313 bundle2-input-part: "changegroup" (params: 1 mandatory) supported
314 314 adding changesets
315 315 add changeset ef1ea85a6374
316 316 add changeset f9cafe1212c8
317 317 add changeset 911600dab2ae
318 318 adding manifests
319 319 adding file changes
320 320 adding foo/Bar/file.txt revisions
321 321 adding foo/file.txt revisions
322 322 adding quux/file.py revisions
323 323 added 3 changesets with 3 changes to 3 files
324 324 calling hook pretxnchangegroup.acl: hgext.acl.hook
325 325 acl: checking access for user "fred"
326 326 acl: acl.allow.branches not enabled
327 327 acl: acl.deny.branches not enabled
328 328 acl: acl.allow enabled, 0 entries for user fred
329 329 acl: acl.deny not enabled
330 330 acl: branch access granted: "ef1ea85a6374" on branch "default"
331 331 error: pretxnchangegroup.acl hook failed: acl: user "fred" not allowed on "foo/file.txt" (changeset "ef1ea85a6374")
332 332 bundle2-input-part: total payload size 1553
333 333 bundle2-input-part: total payload size 24
334 334 bundle2-input-bundle: 4 parts total
335 335 transaction abort!
336 336 rollback completed
337 337 abort: acl: user "fred" not allowed on "foo/file.txt" (changeset "ef1ea85a6374")
338 338 no rollback information available
339 339 0:6675d58eff77
340 340
341 341
342 342 fred is allowed inside foo/
343 343
344 344 $ echo 'foo/** = fred' >> $config
345 345 $ do_push fred
346 346 Pushing as user fred
347 347 hgrc = """
348 348 [hooks]
349 349 pretxnchangegroup.acl = python:hgext.acl.hook
350 350 [acl]
351 351 sources = push
352 352 [acl.allow]
353 353 foo/** = fred
354 354 """
355 355 pushing to ../b
356 356 query 1; heads
357 357 searching for changes
358 358 all remote heads known locally
359 359 listing keys for "phases"
360 360 checking for updated bookmarks
361 361 listing keys for "bookmarks"
362 362 listing keys for "bookmarks"
363 363 3 changesets found
364 364 list of changesets:
365 365 ef1ea85a6374b77d6da9dcda9541f498f2d17df7
366 366 f9cafe1212c8c6fa1120d14a556e18cc44ff8bdd
367 367 911600dab2ae7a9baff75958b84fe606851ce955
368 368 bundle2-output-bundle: "HG20", 5 parts total
369 bundle2-output-part: "replycaps" 168 bytes payload
369 bundle2-output-part: "replycaps" 178 bytes payload
370 370 bundle2-output-part: "check:phases" 24 bytes payload
371 371 bundle2-output-part: "check:heads" streamed payload
372 372 bundle2-output-part: "changegroup" (params: 1 mandatory) streamed payload
373 373 bundle2-output-part: "phase-heads" 24 bytes payload
374 374 bundle2-input-bundle: with-transaction
375 375 bundle2-input-part: "replycaps" supported
376 bundle2-input-part: total payload size 168
376 bundle2-input-part: total payload size 178
377 377 bundle2-input-part: "check:phases" supported
378 378 bundle2-input-part: total payload size 24
379 379 bundle2-input-part: "check:heads" supported
380 380 bundle2-input-part: total payload size 20
381 381 bundle2-input-part: "changegroup" (params: 1 mandatory) supported
382 382 adding changesets
383 383 add changeset ef1ea85a6374
384 384 add changeset f9cafe1212c8
385 385 add changeset 911600dab2ae
386 386 adding manifests
387 387 adding file changes
388 388 adding foo/Bar/file.txt revisions
389 389 adding foo/file.txt revisions
390 390 adding quux/file.py revisions
391 391 added 3 changesets with 3 changes to 3 files
392 392 calling hook pretxnchangegroup.acl: hgext.acl.hook
393 393 acl: checking access for user "fred"
394 394 acl: acl.allow.branches not enabled
395 395 acl: acl.deny.branches not enabled
396 396 acl: acl.allow enabled, 1 entries for user fred
397 397 acl: acl.deny not enabled
398 398 acl: branch access granted: "ef1ea85a6374" on branch "default"
399 399 acl: path access granted: "ef1ea85a6374"
400 400 acl: branch access granted: "f9cafe1212c8" on branch "default"
401 401 acl: path access granted: "f9cafe1212c8"
402 402 acl: branch access granted: "911600dab2ae" on branch "default"
403 403 error: pretxnchangegroup.acl hook failed: acl: user "fred" not allowed on "quux/file.py" (changeset "911600dab2ae")
404 404 bundle2-input-part: total payload size 1553
405 405 bundle2-input-part: total payload size 24
406 406 bundle2-input-bundle: 4 parts total
407 407 transaction abort!
408 408 rollback completed
409 409 abort: acl: user "fred" not allowed on "quux/file.py" (changeset "911600dab2ae")
410 410 no rollback information available
411 411 0:6675d58eff77
412 412
413 413
414 414 Empty [acl.deny]
415 415
416 416 $ echo '[acl.deny]' >> $config
417 417 $ do_push barney
418 418 Pushing as user barney
419 419 hgrc = """
420 420 [hooks]
421 421 pretxnchangegroup.acl = python:hgext.acl.hook
422 422 [acl]
423 423 sources = push
424 424 [acl.allow]
425 425 foo/** = fred
426 426 [acl.deny]
427 427 """
428 428 pushing to ../b
429 429 query 1; heads
430 430 searching for changes
431 431 all remote heads known locally
432 432 listing keys for "phases"
433 433 checking for updated bookmarks
434 434 listing keys for "bookmarks"
435 435 listing keys for "bookmarks"
436 436 3 changesets found
437 437 list of changesets:
438 438 ef1ea85a6374b77d6da9dcda9541f498f2d17df7
439 439 f9cafe1212c8c6fa1120d14a556e18cc44ff8bdd
440 440 911600dab2ae7a9baff75958b84fe606851ce955
441 441 bundle2-output-bundle: "HG20", 5 parts total
442 bundle2-output-part: "replycaps" 168 bytes payload
442 bundle2-output-part: "replycaps" 178 bytes payload
443 443 bundle2-output-part: "check:phases" 24 bytes payload
444 444 bundle2-output-part: "check:heads" streamed payload
445 445 bundle2-output-part: "changegroup" (params: 1 mandatory) streamed payload
446 446 bundle2-output-part: "phase-heads" 24 bytes payload
447 447 bundle2-input-bundle: with-transaction
448 448 bundle2-input-part: "replycaps" supported
449 bundle2-input-part: total payload size 168
449 bundle2-input-part: total payload size 178
450 450 bundle2-input-part: "check:phases" supported
451 451 bundle2-input-part: total payload size 24
452 452 bundle2-input-part: "check:heads" supported
453 453 bundle2-input-part: total payload size 20
454 454 bundle2-input-part: "changegroup" (params: 1 mandatory) supported
455 455 adding changesets
456 456 add changeset ef1ea85a6374
457 457 add changeset f9cafe1212c8
458 458 add changeset 911600dab2ae
459 459 adding manifests
460 460 adding file changes
461 461 adding foo/Bar/file.txt revisions
462 462 adding foo/file.txt revisions
463 463 adding quux/file.py revisions
464 464 added 3 changesets with 3 changes to 3 files
465 465 calling hook pretxnchangegroup.acl: hgext.acl.hook
466 466 acl: checking access for user "barney"
467 467 acl: acl.allow.branches not enabled
468 468 acl: acl.deny.branches not enabled
469 469 acl: acl.allow enabled, 0 entries for user barney
470 470 acl: acl.deny enabled, 0 entries for user barney
471 471 acl: branch access granted: "ef1ea85a6374" on branch "default"
472 472 error: pretxnchangegroup.acl hook failed: acl: user "barney" not allowed on "foo/file.txt" (changeset "ef1ea85a6374")
473 473 bundle2-input-part: total payload size 1553
474 474 bundle2-input-part: total payload size 24
475 475 bundle2-input-bundle: 4 parts total
476 476 transaction abort!
477 477 rollback completed
478 478 abort: acl: user "barney" not allowed on "foo/file.txt" (changeset "ef1ea85a6374")
479 479 no rollback information available
480 480 0:6675d58eff77
481 481
482 482
483 483 fred is allowed inside foo/, but not foo/bar/ (case matters)
484 484
485 485 $ echo 'foo/bar/** = fred' >> $config
486 486 $ do_push fred
487 487 Pushing as user fred
488 488 hgrc = """
489 489 [hooks]
490 490 pretxnchangegroup.acl = python:hgext.acl.hook
491 491 [acl]
492 492 sources = push
493 493 [acl.allow]
494 494 foo/** = fred
495 495 [acl.deny]
496 496 foo/bar/** = fred
497 497 """
498 498 pushing to ../b
499 499 query 1; heads
500 500 searching for changes
501 501 all remote heads known locally
502 502 listing keys for "phases"
503 503 checking for updated bookmarks
504 504 listing keys for "bookmarks"
505 505 listing keys for "bookmarks"
506 506 3 changesets found
507 507 list of changesets:
508 508 ef1ea85a6374b77d6da9dcda9541f498f2d17df7
509 509 f9cafe1212c8c6fa1120d14a556e18cc44ff8bdd
510 510 911600dab2ae7a9baff75958b84fe606851ce955
511 511 bundle2-output-bundle: "HG20", 5 parts total
512 bundle2-output-part: "replycaps" 168 bytes payload
512 bundle2-output-part: "replycaps" 178 bytes payload
513 513 bundle2-output-part: "check:phases" 24 bytes payload
514 514 bundle2-output-part: "check:heads" streamed payload
515 515 bundle2-output-part: "changegroup" (params: 1 mandatory) streamed payload
516 516 bundle2-output-part: "phase-heads" 24 bytes payload
517 517 bundle2-input-bundle: with-transaction
518 518 bundle2-input-part: "replycaps" supported
519 bundle2-input-part: total payload size 168
519 bundle2-input-part: total payload size 178
520 520 bundle2-input-part: "check:phases" supported
521 521 bundle2-input-part: total payload size 24
522 522 bundle2-input-part: "check:heads" supported
523 523 bundle2-input-part: total payload size 20
524 524 bundle2-input-part: "changegroup" (params: 1 mandatory) supported
525 525 adding changesets
526 526 add changeset ef1ea85a6374
527 527 add changeset f9cafe1212c8
528 528 add changeset 911600dab2ae
529 529 adding manifests
530 530 adding file changes
531 531 adding foo/Bar/file.txt revisions
532 532 adding foo/file.txt revisions
533 533 adding quux/file.py revisions
534 534 added 3 changesets with 3 changes to 3 files
535 535 calling hook pretxnchangegroup.acl: hgext.acl.hook
536 536 acl: checking access for user "fred"
537 537 acl: acl.allow.branches not enabled
538 538 acl: acl.deny.branches not enabled
539 539 acl: acl.allow enabled, 1 entries for user fred
540 540 acl: acl.deny enabled, 1 entries for user fred
541 541 acl: branch access granted: "ef1ea85a6374" on branch "default"
542 542 acl: path access granted: "ef1ea85a6374"
543 543 acl: branch access granted: "f9cafe1212c8" on branch "default"
544 544 acl: path access granted: "f9cafe1212c8"
545 545 acl: branch access granted: "911600dab2ae" on branch "default"
546 546 error: pretxnchangegroup.acl hook failed: acl: user "fred" not allowed on "quux/file.py" (changeset "911600dab2ae")
547 547 bundle2-input-part: total payload size 1553
548 548 bundle2-input-part: total payload size 24
549 549 bundle2-input-bundle: 4 parts total
550 550 transaction abort!
551 551 rollback completed
552 552 abort: acl: user "fred" not allowed on "quux/file.py" (changeset "911600dab2ae")
553 553 no rollback information available
554 554 0:6675d58eff77
555 555
556 556
557 557 fred is allowed inside foo/, but not foo/Bar/
558 558
559 559 $ echo 'foo/Bar/** = fred' >> $config
560 560 $ do_push fred
561 561 Pushing as user fred
562 562 hgrc = """
563 563 [hooks]
564 564 pretxnchangegroup.acl = python:hgext.acl.hook
565 565 [acl]
566 566 sources = push
567 567 [acl.allow]
568 568 foo/** = fred
569 569 [acl.deny]
570 570 foo/bar/** = fred
571 571 foo/Bar/** = fred
572 572 """
573 573 pushing to ../b
574 574 query 1; heads
575 575 searching for changes
576 576 all remote heads known locally
577 577 listing keys for "phases"
578 578 checking for updated bookmarks
579 579 listing keys for "bookmarks"
580 580 listing keys for "bookmarks"
581 581 3 changesets found
582 582 list of changesets:
583 583 ef1ea85a6374b77d6da9dcda9541f498f2d17df7
584 584 f9cafe1212c8c6fa1120d14a556e18cc44ff8bdd
585 585 911600dab2ae7a9baff75958b84fe606851ce955
586 586 bundle2-output-bundle: "HG20", 5 parts total
587 bundle2-output-part: "replycaps" 168 bytes payload
587 bundle2-output-part: "replycaps" 178 bytes payload
588 588 bundle2-output-part: "check:phases" 24 bytes payload
589 589 bundle2-output-part: "check:heads" streamed payload
590 590 bundle2-output-part: "changegroup" (params: 1 mandatory) streamed payload
591 591 bundle2-output-part: "phase-heads" 24 bytes payload
592 592 bundle2-input-bundle: with-transaction
593 593 bundle2-input-part: "replycaps" supported
594 bundle2-input-part: total payload size 168
594 bundle2-input-part: total payload size 178
595 595 bundle2-input-part: "check:phases" supported
596 596 bundle2-input-part: total payload size 24
597 597 bundle2-input-part: "check:heads" supported
598 598 bundle2-input-part: total payload size 20
599 599 bundle2-input-part: "changegroup" (params: 1 mandatory) supported
600 600 adding changesets
601 601 add changeset ef1ea85a6374
602 602 add changeset f9cafe1212c8
603 603 add changeset 911600dab2ae
604 604 adding manifests
605 605 adding file changes
606 606 adding foo/Bar/file.txt revisions
607 607 adding foo/file.txt revisions
608 608 adding quux/file.py revisions
609 609 added 3 changesets with 3 changes to 3 files
610 610 calling hook pretxnchangegroup.acl: hgext.acl.hook
611 611 acl: checking access for user "fred"
612 612 acl: acl.allow.branches not enabled
613 613 acl: acl.deny.branches not enabled
614 614 acl: acl.allow enabled, 1 entries for user fred
615 615 acl: acl.deny enabled, 2 entries for user fred
616 616 acl: branch access granted: "ef1ea85a6374" on branch "default"
617 617 acl: path access granted: "ef1ea85a6374"
618 618 acl: branch access granted: "f9cafe1212c8" on branch "default"
619 619 error: pretxnchangegroup.acl hook failed: acl: user "fred" denied on "foo/Bar/file.txt" (changeset "f9cafe1212c8")
620 620 bundle2-input-part: total payload size 1553
621 621 bundle2-input-part: total payload size 24
622 622 bundle2-input-bundle: 4 parts total
623 623 transaction abort!
624 624 rollback completed
625 625 abort: acl: user "fred" denied on "foo/Bar/file.txt" (changeset "f9cafe1212c8")
626 626 no rollback information available
627 627 0:6675d58eff77
628 628
629 629
630 630 $ echo 'barney is not mentioned => not allowed anywhere'
631 631 barney is not mentioned => not allowed anywhere
632 632 $ do_push barney
633 633 Pushing as user barney
634 634 hgrc = """
635 635 [hooks]
636 636 pretxnchangegroup.acl = python:hgext.acl.hook
637 637 [acl]
638 638 sources = push
639 639 [acl.allow]
640 640 foo/** = fred
641 641 [acl.deny]
642 642 foo/bar/** = fred
643 643 foo/Bar/** = fred
644 644 """
645 645 pushing to ../b
646 646 query 1; heads
647 647 searching for changes
648 648 all remote heads known locally
649 649 listing keys for "phases"
650 650 checking for updated bookmarks
651 651 listing keys for "bookmarks"
652 652 listing keys for "bookmarks"
653 653 3 changesets found
654 654 list of changesets:
655 655 ef1ea85a6374b77d6da9dcda9541f498f2d17df7
656 656 f9cafe1212c8c6fa1120d14a556e18cc44ff8bdd
657 657 911600dab2ae7a9baff75958b84fe606851ce955
658 658 bundle2-output-bundle: "HG20", 5 parts total
659 bundle2-output-part: "replycaps" 168 bytes payload
659 bundle2-output-part: "replycaps" 178 bytes payload
660 660 bundle2-output-part: "check:phases" 24 bytes payload
661 661 bundle2-output-part: "check:heads" streamed payload
662 662 bundle2-output-part: "changegroup" (params: 1 mandatory) streamed payload
663 663 bundle2-output-part: "phase-heads" 24 bytes payload
664 664 bundle2-input-bundle: with-transaction
665 665 bundle2-input-part: "replycaps" supported
666 bundle2-input-part: total payload size 168
666 bundle2-input-part: total payload size 178
667 667 bundle2-input-part: "check:phases" supported
668 668 bundle2-input-part: total payload size 24
669 669 bundle2-input-part: "check:heads" supported
670 670 bundle2-input-part: total payload size 20
671 671 bundle2-input-part: "changegroup" (params: 1 mandatory) supported
672 672 adding changesets
673 673 add changeset ef1ea85a6374
674 674 add changeset f9cafe1212c8
675 675 add changeset 911600dab2ae
676 676 adding manifests
677 677 adding file changes
678 678 adding foo/Bar/file.txt revisions
679 679 adding foo/file.txt revisions
680 680 adding quux/file.py revisions
681 681 added 3 changesets with 3 changes to 3 files
682 682 calling hook pretxnchangegroup.acl: hgext.acl.hook
683 683 acl: checking access for user "barney"
684 684 acl: acl.allow.branches not enabled
685 685 acl: acl.deny.branches not enabled
686 686 acl: acl.allow enabled, 0 entries for user barney
687 687 acl: acl.deny enabled, 0 entries for user barney
688 688 acl: branch access granted: "ef1ea85a6374" on branch "default"
689 689 error: pretxnchangegroup.acl hook failed: acl: user "barney" not allowed on "foo/file.txt" (changeset "ef1ea85a6374")
690 690 bundle2-input-part: total payload size 1553
691 691 bundle2-input-part: total payload size 24
692 692 bundle2-input-bundle: 4 parts total
693 693 transaction abort!
694 694 rollback completed
695 695 abort: acl: user "barney" not allowed on "foo/file.txt" (changeset "ef1ea85a6374")
696 696 no rollback information available
697 697 0:6675d58eff77
698 698
699 699
700 700 barney is allowed everywhere
701 701
702 702 $ echo '[acl.allow]' >> $config
703 703 $ echo '** = barney' >> $config
704 704 $ do_push barney
705 705 Pushing as user barney
706 706 hgrc = """
707 707 [hooks]
708 708 pretxnchangegroup.acl = python:hgext.acl.hook
709 709 [acl]
710 710 sources = push
711 711 [acl.allow]
712 712 foo/** = fred
713 713 [acl.deny]
714 714 foo/bar/** = fred
715 715 foo/Bar/** = fred
716 716 [acl.allow]
717 717 ** = barney
718 718 """
719 719 pushing to ../b
720 720 query 1; heads
721 721 searching for changes
722 722 all remote heads known locally
723 723 listing keys for "phases"
724 724 checking for updated bookmarks
725 725 listing keys for "bookmarks"
726 726 listing keys for "bookmarks"
727 727 3 changesets found
728 728 list of changesets:
729 729 ef1ea85a6374b77d6da9dcda9541f498f2d17df7
730 730 f9cafe1212c8c6fa1120d14a556e18cc44ff8bdd
731 731 911600dab2ae7a9baff75958b84fe606851ce955
732 732 bundle2-output-bundle: "HG20", 5 parts total
733 bundle2-output-part: "replycaps" 168 bytes payload
733 bundle2-output-part: "replycaps" 178 bytes payload
734 734 bundle2-output-part: "check:phases" 24 bytes payload
735 735 bundle2-output-part: "check:heads" streamed payload
736 736 bundle2-output-part: "changegroup" (params: 1 mandatory) streamed payload
737 737 bundle2-output-part: "phase-heads" 24 bytes payload
738 738 bundle2-input-bundle: with-transaction
739 739 bundle2-input-part: "replycaps" supported
740 bundle2-input-part: total payload size 168
740 bundle2-input-part: total payload size 178
741 741 bundle2-input-part: "check:phases" supported
742 742 bundle2-input-part: total payload size 24
743 743 bundle2-input-part: "check:heads" supported
744 744 bundle2-input-part: total payload size 20
745 745 bundle2-input-part: "changegroup" (params: 1 mandatory) supported
746 746 adding changesets
747 747 add changeset ef1ea85a6374
748 748 add changeset f9cafe1212c8
749 749 add changeset 911600dab2ae
750 750 adding manifests
751 751 adding file changes
752 752 adding foo/Bar/file.txt revisions
753 753 adding foo/file.txt revisions
754 754 adding quux/file.py revisions
755 755 added 3 changesets with 3 changes to 3 files
756 756 calling hook pretxnchangegroup.acl: hgext.acl.hook
757 757 acl: checking access for user "barney"
758 758 acl: acl.allow.branches not enabled
759 759 acl: acl.deny.branches not enabled
760 760 acl: acl.allow enabled, 1 entries for user barney
761 761 acl: acl.deny enabled, 0 entries for user barney
762 762 acl: branch access granted: "ef1ea85a6374" on branch "default"
763 763 acl: path access granted: "ef1ea85a6374"
764 764 acl: branch access granted: "f9cafe1212c8" on branch "default"
765 765 acl: path access granted: "f9cafe1212c8"
766 766 acl: branch access granted: "911600dab2ae" on branch "default"
767 767 acl: path access granted: "911600dab2ae"
768 768 bundle2-input-part: total payload size 1553
769 769 bundle2-input-part: "phase-heads" supported
770 770 bundle2-input-part: total payload size 24
771 771 bundle2-input-bundle: 4 parts total
772 772 updating the branch cache
773 773 bundle2-output-bundle: "HG20", 1 parts total
774 774 bundle2-output-part: "reply:changegroup" (advisory) (params: 0 advisory) empty payload
775 775 bundle2-input-bundle: no-transaction
776 776 bundle2-input-part: "reply:changegroup" (advisory) (params: 0 advisory) supported
777 777 bundle2-input-bundle: 0 parts total
778 778 listing keys for "phases"
779 779 repository tip rolled back to revision 0 (undo push)
780 780 0:6675d58eff77
781 781
782 782
783 783 wilma can change files with a .txt extension
784 784
785 785 $ echo '**/*.txt = wilma' >> $config
786 786 $ do_push wilma
787 787 Pushing as user wilma
788 788 hgrc = """
789 789 [hooks]
790 790 pretxnchangegroup.acl = python:hgext.acl.hook
791 791 [acl]
792 792 sources = push
793 793 [acl.allow]
794 794 foo/** = fred
795 795 [acl.deny]
796 796 foo/bar/** = fred
797 797 foo/Bar/** = fred
798 798 [acl.allow]
799 799 ** = barney
800 800 **/*.txt = wilma
801 801 """
802 802 pushing to ../b
803 803 query 1; heads
804 804 searching for changes
805 805 all remote heads known locally
806 806 listing keys for "phases"
807 807 checking for updated bookmarks
808 808 listing keys for "bookmarks"
809 809 listing keys for "bookmarks"
810 810 3 changesets found
811 811 list of changesets:
812 812 ef1ea85a6374b77d6da9dcda9541f498f2d17df7
813 813 f9cafe1212c8c6fa1120d14a556e18cc44ff8bdd
814 814 911600dab2ae7a9baff75958b84fe606851ce955
815 815 bundle2-output-bundle: "HG20", 5 parts total
816 bundle2-output-part: "replycaps" 168 bytes payload
816 bundle2-output-part: "replycaps" 178 bytes payload
817 817 bundle2-output-part: "check:phases" 24 bytes payload
818 818 bundle2-output-part: "check:heads" streamed payload
819 819 bundle2-output-part: "changegroup" (params: 1 mandatory) streamed payload
820 820 bundle2-output-part: "phase-heads" 24 bytes payload
821 821 bundle2-input-bundle: with-transaction
822 822 bundle2-input-part: "replycaps" supported
823 bundle2-input-part: total payload size 168
823 bundle2-input-part: total payload size 178
824 824 bundle2-input-part: "check:phases" supported
825 825 bundle2-input-part: total payload size 24
826 826 bundle2-input-part: "check:heads" supported
827 827 bundle2-input-part: total payload size 20
828 828 bundle2-input-part: "changegroup" (params: 1 mandatory) supported
829 829 adding changesets
830 830 add changeset ef1ea85a6374
831 831 add changeset f9cafe1212c8
832 832 add changeset 911600dab2ae
833 833 adding manifests
834 834 adding file changes
835 835 adding foo/Bar/file.txt revisions
836 836 adding foo/file.txt revisions
837 837 adding quux/file.py revisions
838 838 added 3 changesets with 3 changes to 3 files
839 839 calling hook pretxnchangegroup.acl: hgext.acl.hook
840 840 acl: checking access for user "wilma"
841 841 acl: acl.allow.branches not enabled
842 842 acl: acl.deny.branches not enabled
843 843 acl: acl.allow enabled, 1 entries for user wilma
844 844 acl: acl.deny enabled, 0 entries for user wilma
845 845 acl: branch access granted: "ef1ea85a6374" on branch "default"
846 846 acl: path access granted: "ef1ea85a6374"
847 847 acl: branch access granted: "f9cafe1212c8" on branch "default"
848 848 acl: path access granted: "f9cafe1212c8"
849 849 acl: branch access granted: "911600dab2ae" on branch "default"
850 850 error: pretxnchangegroup.acl hook failed: acl: user "wilma" not allowed on "quux/file.py" (changeset "911600dab2ae")
851 851 bundle2-input-part: total payload size 1553
852 852 bundle2-input-part: total payload size 24
853 853 bundle2-input-bundle: 4 parts total
854 854 transaction abort!
855 855 rollback completed
856 856 abort: acl: user "wilma" not allowed on "quux/file.py" (changeset "911600dab2ae")
857 857 no rollback information available
858 858 0:6675d58eff77
859 859
860 860
861 861 file specified by acl.config does not exist
862 862
863 863 $ echo '[acl]' >> $config
864 864 $ echo 'config = ../acl.config' >> $config
865 865 $ do_push barney
866 866 Pushing as user barney
867 867 hgrc = """
868 868 [hooks]
869 869 pretxnchangegroup.acl = python:hgext.acl.hook
870 870 [acl]
871 871 sources = push
872 872 [acl.allow]
873 873 foo/** = fred
874 874 [acl.deny]
875 875 foo/bar/** = fred
876 876 foo/Bar/** = fred
877 877 [acl.allow]
878 878 ** = barney
879 879 **/*.txt = wilma
880 880 [acl]
881 881 config = ../acl.config
882 882 """
883 883 pushing to ../b
884 884 query 1; heads
885 885 searching for changes
886 886 all remote heads known locally
887 887 listing keys for "phases"
888 888 checking for updated bookmarks
889 889 listing keys for "bookmarks"
890 890 listing keys for "bookmarks"
891 891 3 changesets found
892 892 list of changesets:
893 893 ef1ea85a6374b77d6da9dcda9541f498f2d17df7
894 894 f9cafe1212c8c6fa1120d14a556e18cc44ff8bdd
895 895 911600dab2ae7a9baff75958b84fe606851ce955
896 896 bundle2-output-bundle: "HG20", 5 parts total
897 bundle2-output-part: "replycaps" 168 bytes payload
897 bundle2-output-part: "replycaps" 178 bytes payload
898 898 bundle2-output-part: "check:phases" 24 bytes payload
899 899 bundle2-output-part: "check:heads" streamed payload
900 900 bundle2-output-part: "changegroup" (params: 1 mandatory) streamed payload
901 901 bundle2-output-part: "phase-heads" 24 bytes payload
902 902 bundle2-input-bundle: with-transaction
903 903 bundle2-input-part: "replycaps" supported
904 bundle2-input-part: total payload size 168
904 bundle2-input-part: total payload size 178
905 905 bundle2-input-part: "check:phases" supported
906 906 bundle2-input-part: total payload size 24
907 907 bundle2-input-part: "check:heads" supported
908 908 bundle2-input-part: total payload size 20
909 909 bundle2-input-part: "changegroup" (params: 1 mandatory) supported
910 910 adding changesets
911 911 add changeset ef1ea85a6374
912 912 add changeset f9cafe1212c8
913 913 add changeset 911600dab2ae
914 914 adding manifests
915 915 adding file changes
916 916 adding foo/Bar/file.txt revisions
917 917 adding foo/file.txt revisions
918 918 adding quux/file.py revisions
919 919 added 3 changesets with 3 changes to 3 files
920 920 calling hook pretxnchangegroup.acl: hgext.acl.hook
921 921 acl: checking access for user "barney"
922 922 error: pretxnchangegroup.acl hook raised an exception: [Errno *] * (glob)
923 923 bundle2-input-part: total payload size 1553
924 924 bundle2-input-part: total payload size 24
925 925 bundle2-input-bundle: 4 parts total
926 926 transaction abort!
927 927 rollback completed
928 928 abort: $ENOENT$: ../acl.config
929 929 no rollback information available
930 930 0:6675d58eff77
931 931
932 932
933 933 betty is allowed inside foo/ by an acl.config file
934 934
935 935 $ echo '[acl.allow]' >> acl.config
936 936 $ echo 'foo/** = betty' >> acl.config
937 937 $ do_push betty
938 938 Pushing as user betty
939 939 hgrc = """
940 940 [hooks]
941 941 pretxnchangegroup.acl = python:hgext.acl.hook
942 942 [acl]
943 943 sources = push
944 944 [acl.allow]
945 945 foo/** = fred
946 946 [acl.deny]
947 947 foo/bar/** = fred
948 948 foo/Bar/** = fred
949 949 [acl.allow]
950 950 ** = barney
951 951 **/*.txt = wilma
952 952 [acl]
953 953 config = ../acl.config
954 954 """
955 955 acl.config = """
956 956 [acl.allow]
957 957 foo/** = betty
958 958 """
959 959 pushing to ../b
960 960 query 1; heads
961 961 searching for changes
962 962 all remote heads known locally
963 963 listing keys for "phases"
964 964 checking for updated bookmarks
965 965 listing keys for "bookmarks"
966 966 listing keys for "bookmarks"
967 967 3 changesets found
968 968 list of changesets:
969 969 ef1ea85a6374b77d6da9dcda9541f498f2d17df7
970 970 f9cafe1212c8c6fa1120d14a556e18cc44ff8bdd
971 971 911600dab2ae7a9baff75958b84fe606851ce955
972 972 bundle2-output-bundle: "HG20", 5 parts total
973 bundle2-output-part: "replycaps" 168 bytes payload
973 bundle2-output-part: "replycaps" 178 bytes payload
974 974 bundle2-output-part: "check:phases" 24 bytes payload
975 975 bundle2-output-part: "check:heads" streamed payload
976 976 bundle2-output-part: "changegroup" (params: 1 mandatory) streamed payload
977 977 bundle2-output-part: "phase-heads" 24 bytes payload
978 978 bundle2-input-bundle: with-transaction
979 979 bundle2-input-part: "replycaps" supported
980 bundle2-input-part: total payload size 168
980 bundle2-input-part: total payload size 178
981 981 bundle2-input-part: "check:phases" supported
982 982 bundle2-input-part: total payload size 24
983 983 bundle2-input-part: "check:heads" supported
984 984 bundle2-input-part: total payload size 20
985 985 bundle2-input-part: "changegroup" (params: 1 mandatory) supported
986 986 adding changesets
987 987 add changeset ef1ea85a6374
988 988 add changeset f9cafe1212c8
989 989 add changeset 911600dab2ae
990 990 adding manifests
991 991 adding file changes
992 992 adding foo/Bar/file.txt revisions
993 993 adding foo/file.txt revisions
994 994 adding quux/file.py revisions
995 995 added 3 changesets with 3 changes to 3 files
996 996 calling hook pretxnchangegroup.acl: hgext.acl.hook
997 997 acl: checking access for user "betty"
998 998 acl: acl.allow.branches not enabled
999 999 acl: acl.deny.branches not enabled
1000 1000 acl: acl.allow enabled, 1 entries for user betty
1001 1001 acl: acl.deny enabled, 0 entries for user betty
1002 1002 acl: branch access granted: "ef1ea85a6374" on branch "default"
1003 1003 acl: path access granted: "ef1ea85a6374"
1004 1004 acl: branch access granted: "f9cafe1212c8" on branch "default"
1005 1005 acl: path access granted: "f9cafe1212c8"
1006 1006 acl: branch access granted: "911600dab2ae" on branch "default"
1007 1007 error: pretxnchangegroup.acl hook failed: acl: user "betty" not allowed on "quux/file.py" (changeset "911600dab2ae")
1008 1008 bundle2-input-part: total payload size 1553
1009 1009 bundle2-input-part: total payload size 24
1010 1010 bundle2-input-bundle: 4 parts total
1011 1011 transaction abort!
1012 1012 rollback completed
1013 1013 abort: acl: user "betty" not allowed on "quux/file.py" (changeset "911600dab2ae")
1014 1014 no rollback information available
1015 1015 0:6675d58eff77
1016 1016
1017 1017
1018 1018 acl.config can set only [acl.allow]/[acl.deny]
1019 1019
1020 1020 $ echo '[hooks]' >> acl.config
1021 1021 $ echo 'changegroup.acl = false' >> acl.config
1022 1022 $ do_push barney
1023 1023 Pushing as user barney
1024 1024 hgrc = """
1025 1025 [hooks]
1026 1026 pretxnchangegroup.acl = python:hgext.acl.hook
1027 1027 [acl]
1028 1028 sources = push
1029 1029 [acl.allow]
1030 1030 foo/** = fred
1031 1031 [acl.deny]
1032 1032 foo/bar/** = fred
1033 1033 foo/Bar/** = fred
1034 1034 [acl.allow]
1035 1035 ** = barney
1036 1036 **/*.txt = wilma
1037 1037 [acl]
1038 1038 config = ../acl.config
1039 1039 """
1040 1040 acl.config = """
1041 1041 [acl.allow]
1042 1042 foo/** = betty
1043 1043 [hooks]
1044 1044 changegroup.acl = false
1045 1045 """
1046 1046 pushing to ../b
1047 1047 query 1; heads
1048 1048 searching for changes
1049 1049 all remote heads known locally
1050 1050 listing keys for "phases"
1051 1051 checking for updated bookmarks
1052 1052 listing keys for "bookmarks"
1053 1053 listing keys for "bookmarks"
1054 1054 3 changesets found
1055 1055 list of changesets:
1056 1056 ef1ea85a6374b77d6da9dcda9541f498f2d17df7
1057 1057 f9cafe1212c8c6fa1120d14a556e18cc44ff8bdd
1058 1058 911600dab2ae7a9baff75958b84fe606851ce955
1059 1059 bundle2-output-bundle: "HG20", 5 parts total
1060 bundle2-output-part: "replycaps" 168 bytes payload
1060 bundle2-output-part: "replycaps" 178 bytes payload
1061 1061 bundle2-output-part: "check:phases" 24 bytes payload
1062 1062 bundle2-output-part: "check:heads" streamed payload
1063 1063 bundle2-output-part: "changegroup" (params: 1 mandatory) streamed payload
1064 1064 bundle2-output-part: "phase-heads" 24 bytes payload
1065 1065 bundle2-input-bundle: with-transaction
1066 1066 bundle2-input-part: "replycaps" supported
1067 bundle2-input-part: total payload size 168
1067 bundle2-input-part: total payload size 178
1068 1068 bundle2-input-part: "check:phases" supported
1069 1069 bundle2-input-part: total payload size 24
1070 1070 bundle2-input-part: "check:heads" supported
1071 1071 bundle2-input-part: total payload size 20
1072 1072 bundle2-input-part: "changegroup" (params: 1 mandatory) supported
1073 1073 adding changesets
1074 1074 add changeset ef1ea85a6374
1075 1075 add changeset f9cafe1212c8
1076 1076 add changeset 911600dab2ae
1077 1077 adding manifests
1078 1078 adding file changes
1079 1079 adding foo/Bar/file.txt revisions
1080 1080 adding foo/file.txt revisions
1081 1081 adding quux/file.py revisions
1082 1082 added 3 changesets with 3 changes to 3 files
1083 1083 calling hook pretxnchangegroup.acl: hgext.acl.hook
1084 1084 acl: checking access for user "barney"
1085 1085 acl: acl.allow.branches not enabled
1086 1086 acl: acl.deny.branches not enabled
1087 1087 acl: acl.allow enabled, 1 entries for user barney
1088 1088 acl: acl.deny enabled, 0 entries for user barney
1089 1089 acl: branch access granted: "ef1ea85a6374" on branch "default"
1090 1090 acl: path access granted: "ef1ea85a6374"
1091 1091 acl: branch access granted: "f9cafe1212c8" on branch "default"
1092 1092 acl: path access granted: "f9cafe1212c8"
1093 1093 acl: branch access granted: "911600dab2ae" on branch "default"
1094 1094 acl: path access granted: "911600dab2ae"
1095 1095 bundle2-input-part: total payload size 1553
1096 1096 bundle2-input-part: "phase-heads" supported
1097 1097 bundle2-input-part: total payload size 24
1098 1098 bundle2-input-bundle: 4 parts total
1099 1099 updating the branch cache
1100 1100 bundle2-output-bundle: "HG20", 1 parts total
1101 1101 bundle2-output-part: "reply:changegroup" (advisory) (params: 0 advisory) empty payload
1102 1102 bundle2-input-bundle: no-transaction
1103 1103 bundle2-input-part: "reply:changegroup" (advisory) (params: 0 advisory) supported
1104 1104 bundle2-input-bundle: 0 parts total
1105 1105 listing keys for "phases"
1106 1106 repository tip rolled back to revision 0 (undo push)
1107 1107 0:6675d58eff77
1108 1108
1109 1109
1110 1110 asterisk
1111 1111
1112 1112 $ init_config
1113 1113
1114 1114 asterisk test
1115 1115
1116 1116 $ echo '[acl.allow]' >> $config
1117 1117 $ echo "** = fred" >> $config
1118 1118
1119 1119 fred is always allowed
1120 1120
1121 1121 $ do_push fred
1122 1122 Pushing as user fred
1123 1123 hgrc = """
1124 1124 [hooks]
1125 1125 pretxnchangegroup.acl = python:hgext.acl.hook
1126 1126 [acl]
1127 1127 sources = push
1128 1128 [extensions]
1129 1129 [acl.allow]
1130 1130 ** = fred
1131 1131 """
1132 1132 pushing to ../b
1133 1133 query 1; heads
1134 1134 searching for changes
1135 1135 all remote heads known locally
1136 1136 listing keys for "phases"
1137 1137 checking for updated bookmarks
1138 1138 listing keys for "bookmarks"
1139 1139 listing keys for "bookmarks"
1140 1140 3 changesets found
1141 1141 list of changesets:
1142 1142 ef1ea85a6374b77d6da9dcda9541f498f2d17df7
1143 1143 f9cafe1212c8c6fa1120d14a556e18cc44ff8bdd
1144 1144 911600dab2ae7a9baff75958b84fe606851ce955
1145 1145 bundle2-output-bundle: "HG20", 5 parts total
1146 bundle2-output-part: "replycaps" 168 bytes payload
1146 bundle2-output-part: "replycaps" 178 bytes payload
1147 1147 bundle2-output-part: "check:phases" 24 bytes payload
1148 1148 bundle2-output-part: "check:heads" streamed payload
1149 1149 bundle2-output-part: "changegroup" (params: 1 mandatory) streamed payload
1150 1150 bundle2-output-part: "phase-heads" 24 bytes payload
1151 1151 bundle2-input-bundle: with-transaction
1152 1152 bundle2-input-part: "replycaps" supported
1153 bundle2-input-part: total payload size 168
1153 bundle2-input-part: total payload size 178
1154 1154 bundle2-input-part: "check:phases" supported
1155 1155 bundle2-input-part: total payload size 24
1156 1156 bundle2-input-part: "check:heads" supported
1157 1157 bundle2-input-part: total payload size 20
1158 1158 bundle2-input-part: "changegroup" (params: 1 mandatory) supported
1159 1159 adding changesets
1160 1160 add changeset ef1ea85a6374
1161 1161 add changeset f9cafe1212c8
1162 1162 add changeset 911600dab2ae
1163 1163 adding manifests
1164 1164 adding file changes
1165 1165 adding foo/Bar/file.txt revisions
1166 1166 adding foo/file.txt revisions
1167 1167 adding quux/file.py revisions
1168 1168 added 3 changesets with 3 changes to 3 files
1169 1169 calling hook pretxnchangegroup.acl: hgext.acl.hook
1170 1170 acl: checking access for user "fred"
1171 1171 acl: acl.allow.branches not enabled
1172 1172 acl: acl.deny.branches not enabled
1173 1173 acl: acl.allow enabled, 1 entries for user fred
1174 1174 acl: acl.deny not enabled
1175 1175 acl: branch access granted: "ef1ea85a6374" on branch "default"
1176 1176 acl: path access granted: "ef1ea85a6374"
1177 1177 acl: branch access granted: "f9cafe1212c8" on branch "default"
1178 1178 acl: path access granted: "f9cafe1212c8"
1179 1179 acl: branch access granted: "911600dab2ae" on branch "default"
1180 1180 acl: path access granted: "911600dab2ae"
1181 1181 bundle2-input-part: total payload size 1553
1182 1182 bundle2-input-part: "phase-heads" supported
1183 1183 bundle2-input-part: total payload size 24
1184 1184 bundle2-input-bundle: 4 parts total
1185 1185 updating the branch cache
1186 1186 bundle2-output-bundle: "HG20", 1 parts total
1187 1187 bundle2-output-part: "reply:changegroup" (advisory) (params: 0 advisory) empty payload
1188 1188 bundle2-input-bundle: no-transaction
1189 1189 bundle2-input-part: "reply:changegroup" (advisory) (params: 0 advisory) supported
1190 1190 bundle2-input-bundle: 0 parts total
1191 1191 listing keys for "phases"
1192 1192 repository tip rolled back to revision 0 (undo push)
1193 1193 0:6675d58eff77
1194 1194
1195 1195
1196 1196 $ echo '[acl.deny]' >> $config
1197 1197 $ echo "foo/Bar/** = *" >> $config
1198 1198
1199 1199 no one is allowed inside foo/Bar/
1200 1200
1201 1201 $ do_push fred
1202 1202 Pushing as user fred
1203 1203 hgrc = """
1204 1204 [hooks]
1205 1205 pretxnchangegroup.acl = python:hgext.acl.hook
1206 1206 [acl]
1207 1207 sources = push
1208 1208 [extensions]
1209 1209 [acl.allow]
1210 1210 ** = fred
1211 1211 [acl.deny]
1212 1212 foo/Bar/** = *
1213 1213 """
1214 1214 pushing to ../b
1215 1215 query 1; heads
1216 1216 searching for changes
1217 1217 all remote heads known locally
1218 1218 listing keys for "phases"
1219 1219 checking for updated bookmarks
1220 1220 listing keys for "bookmarks"
1221 1221 listing keys for "bookmarks"
1222 1222 3 changesets found
1223 1223 list of changesets:
1224 1224 ef1ea85a6374b77d6da9dcda9541f498f2d17df7
1225 1225 f9cafe1212c8c6fa1120d14a556e18cc44ff8bdd
1226 1226 911600dab2ae7a9baff75958b84fe606851ce955
1227 1227 bundle2-output-bundle: "HG20", 5 parts total
1228 bundle2-output-part: "replycaps" 168 bytes payload
1228 bundle2-output-part: "replycaps" 178 bytes payload
1229 1229 bundle2-output-part: "check:phases" 24 bytes payload
1230 1230 bundle2-output-part: "check:heads" streamed payload
1231 1231 bundle2-output-part: "changegroup" (params: 1 mandatory) streamed payload
1232 1232 bundle2-output-part: "phase-heads" 24 bytes payload
1233 1233 bundle2-input-bundle: with-transaction
1234 1234 bundle2-input-part: "replycaps" supported
1235 bundle2-input-part: total payload size 168
1235 bundle2-input-part: total payload size 178
1236 1236 bundle2-input-part: "check:phases" supported
1237 1237 bundle2-input-part: total payload size 24
1238 1238 bundle2-input-part: "check:heads" supported
1239 1239 bundle2-input-part: total payload size 20
1240 1240 bundle2-input-part: "changegroup" (params: 1 mandatory) supported
1241 1241 adding changesets
1242 1242 add changeset ef1ea85a6374
1243 1243 add changeset f9cafe1212c8
1244 1244 add changeset 911600dab2ae
1245 1245 adding manifests
1246 1246 adding file changes
1247 1247 adding foo/Bar/file.txt revisions
1248 1248 adding foo/file.txt revisions
1249 1249 adding quux/file.py revisions
1250 1250 added 3 changesets with 3 changes to 3 files
1251 1251 calling hook pretxnchangegroup.acl: hgext.acl.hook
1252 1252 acl: checking access for user "fred"
1253 1253 acl: acl.allow.branches not enabled
1254 1254 acl: acl.deny.branches not enabled
1255 1255 acl: acl.allow enabled, 1 entries for user fred
1256 1256 acl: acl.deny enabled, 1 entries for user fred
1257 1257 acl: branch access granted: "ef1ea85a6374" on branch "default"
1258 1258 acl: path access granted: "ef1ea85a6374"
1259 1259 acl: branch access granted: "f9cafe1212c8" on branch "default"
1260 1260 error: pretxnchangegroup.acl hook failed: acl: user "fred" denied on "foo/Bar/file.txt" (changeset "f9cafe1212c8")
1261 1261 bundle2-input-part: total payload size 1553
1262 1262 bundle2-input-part: total payload size 24
1263 1263 bundle2-input-bundle: 4 parts total
1264 1264 transaction abort!
1265 1265 rollback completed
1266 1266 abort: acl: user "fred" denied on "foo/Bar/file.txt" (changeset "f9cafe1212c8")
1267 1267 no rollback information available
1268 1268 0:6675d58eff77
1269 1269
1270 1270
1271 1271 Groups
1272 1272
1273 1273 $ init_config
1274 1274
1275 1275 OS-level groups
1276 1276
1277 1277 $ echo '[acl.allow]' >> $config
1278 1278 $ echo "** = @group1" >> $config
1279 1279
1280 1280 @group1 is always allowed
1281 1281
1282 1282 $ do_push fred
1283 1283 Pushing as user fred
1284 1284 hgrc = """
1285 1285 [hooks]
1286 1286 pretxnchangegroup.acl = python:hgext.acl.hook
1287 1287 [acl]
1288 1288 sources = push
1289 1289 [extensions]
1290 1290 [acl.allow]
1291 1291 ** = @group1
1292 1292 """
1293 1293 pushing to ../b
1294 1294 query 1; heads
1295 1295 searching for changes
1296 1296 all remote heads known locally
1297 1297 listing keys for "phases"
1298 1298 checking for updated bookmarks
1299 1299 listing keys for "bookmarks"
1300 1300 listing keys for "bookmarks"
1301 1301 3 changesets found
1302 1302 list of changesets:
1303 1303 ef1ea85a6374b77d6da9dcda9541f498f2d17df7
1304 1304 f9cafe1212c8c6fa1120d14a556e18cc44ff8bdd
1305 1305 911600dab2ae7a9baff75958b84fe606851ce955
1306 1306 bundle2-output-bundle: "HG20", 5 parts total
1307 bundle2-output-part: "replycaps" 168 bytes payload
1307 bundle2-output-part: "replycaps" 178 bytes payload
1308 1308 bundle2-output-part: "check:phases" 24 bytes payload
1309 1309 bundle2-output-part: "check:heads" streamed payload
1310 1310 bundle2-output-part: "changegroup" (params: 1 mandatory) streamed payload
1311 1311 bundle2-output-part: "phase-heads" 24 bytes payload
1312 1312 bundle2-input-bundle: with-transaction
1313 1313 bundle2-input-part: "replycaps" supported
1314 bundle2-input-part: total payload size 168
1314 bundle2-input-part: total payload size 178
1315 1315 bundle2-input-part: "check:phases" supported
1316 1316 bundle2-input-part: total payload size 24
1317 1317 bundle2-input-part: "check:heads" supported
1318 1318 bundle2-input-part: total payload size 20
1319 1319 bundle2-input-part: "changegroup" (params: 1 mandatory) supported
1320 1320 adding changesets
1321 1321 add changeset ef1ea85a6374
1322 1322 add changeset f9cafe1212c8
1323 1323 add changeset 911600dab2ae
1324 1324 adding manifests
1325 1325 adding file changes
1326 1326 adding foo/Bar/file.txt revisions
1327 1327 adding foo/file.txt revisions
1328 1328 adding quux/file.py revisions
1329 1329 added 3 changesets with 3 changes to 3 files
1330 1330 calling hook pretxnchangegroup.acl: hgext.acl.hook
1331 1331 acl: checking access for user "fred"
1332 1332 acl: acl.allow.branches not enabled
1333 1333 acl: acl.deny.branches not enabled
1334 1334 acl: "group1" not defined in [acl.groups]
1335 1335 acl: acl.allow enabled, 1 entries for user fred
1336 1336 acl: acl.deny not enabled
1337 1337 acl: branch access granted: "ef1ea85a6374" on branch "default"
1338 1338 acl: path access granted: "ef1ea85a6374"
1339 1339 acl: branch access granted: "f9cafe1212c8" on branch "default"
1340 1340 acl: path access granted: "f9cafe1212c8"
1341 1341 acl: branch access granted: "911600dab2ae" on branch "default"
1342 1342 acl: path access granted: "911600dab2ae"
1343 1343 bundle2-input-part: total payload size 1553
1344 1344 bundle2-input-part: "phase-heads" supported
1345 1345 bundle2-input-part: total payload size 24
1346 1346 bundle2-input-bundle: 4 parts total
1347 1347 updating the branch cache
1348 1348 bundle2-output-bundle: "HG20", 1 parts total
1349 1349 bundle2-output-part: "reply:changegroup" (advisory) (params: 0 advisory) empty payload
1350 1350 bundle2-input-bundle: no-transaction
1351 1351 bundle2-input-part: "reply:changegroup" (advisory) (params: 0 advisory) supported
1352 1352 bundle2-input-bundle: 0 parts total
1353 1353 listing keys for "phases"
1354 1354 repository tip rolled back to revision 0 (undo push)
1355 1355 0:6675d58eff77
1356 1356
1357 1357
1358 1358 $ echo '[acl.deny]' >> $config
1359 1359 $ echo "foo/Bar/** = @group1" >> $config
1360 1360
1361 1361 @group1 is allowed inside anything but foo/Bar/
1362 1362
1363 1363 $ do_push fred
1364 1364 Pushing as user fred
1365 1365 hgrc = """
1366 1366 [hooks]
1367 1367 pretxnchangegroup.acl = python:hgext.acl.hook
1368 1368 [acl]
1369 1369 sources = push
1370 1370 [extensions]
1371 1371 [acl.allow]
1372 1372 ** = @group1
1373 1373 [acl.deny]
1374 1374 foo/Bar/** = @group1
1375 1375 """
1376 1376 pushing to ../b
1377 1377 query 1; heads
1378 1378 searching for changes
1379 1379 all remote heads known locally
1380 1380 listing keys for "phases"
1381 1381 checking for updated bookmarks
1382 1382 listing keys for "bookmarks"
1383 1383 listing keys for "bookmarks"
1384 1384 3 changesets found
1385 1385 list of changesets:
1386 1386 ef1ea85a6374b77d6da9dcda9541f498f2d17df7
1387 1387 f9cafe1212c8c6fa1120d14a556e18cc44ff8bdd
1388 1388 911600dab2ae7a9baff75958b84fe606851ce955
1389 1389 bundle2-output-bundle: "HG20", 5 parts total
1390 bundle2-output-part: "replycaps" 168 bytes payload
1390 bundle2-output-part: "replycaps" 178 bytes payload
1391 1391 bundle2-output-part: "check:phases" 24 bytes payload
1392 1392 bundle2-output-part: "check:heads" streamed payload
1393 1393 bundle2-output-part: "changegroup" (params: 1 mandatory) streamed payload
1394 1394 bundle2-output-part: "phase-heads" 24 bytes payload
1395 1395 bundle2-input-bundle: with-transaction
1396 1396 bundle2-input-part: "replycaps" supported
1397 bundle2-input-part: total payload size 168
1397 bundle2-input-part: total payload size 178
1398 1398 bundle2-input-part: "check:phases" supported
1399 1399 bundle2-input-part: total payload size 24
1400 1400 bundle2-input-part: "check:heads" supported
1401 1401 bundle2-input-part: total payload size 20
1402 1402 bundle2-input-part: "changegroup" (params: 1 mandatory) supported
1403 1403 adding changesets
1404 1404 add changeset ef1ea85a6374
1405 1405 add changeset f9cafe1212c8
1406 1406 add changeset 911600dab2ae
1407 1407 adding manifests
1408 1408 adding file changes
1409 1409 adding foo/Bar/file.txt revisions
1410 1410 adding foo/file.txt revisions
1411 1411 adding quux/file.py revisions
1412 1412 added 3 changesets with 3 changes to 3 files
1413 1413 calling hook pretxnchangegroup.acl: hgext.acl.hook
1414 1414 acl: checking access for user "fred"
1415 1415 acl: acl.allow.branches not enabled
1416 1416 acl: acl.deny.branches not enabled
1417 1417 acl: "group1" not defined in [acl.groups]
1418 1418 acl: acl.allow enabled, 1 entries for user fred
1419 1419 acl: "group1" not defined in [acl.groups]
1420 1420 acl: acl.deny enabled, 1 entries for user fred
1421 1421 acl: branch access granted: "ef1ea85a6374" on branch "default"
1422 1422 acl: path access granted: "ef1ea85a6374"
1423 1423 acl: branch access granted: "f9cafe1212c8" on branch "default"
1424 1424 error: pretxnchangegroup.acl hook failed: acl: user "fred" denied on "foo/Bar/file.txt" (changeset "f9cafe1212c8")
1425 1425 bundle2-input-part: total payload size 1553
1426 1426 bundle2-input-part: total payload size 24
1427 1427 bundle2-input-bundle: 4 parts total
1428 1428 transaction abort!
1429 1429 rollback completed
1430 1430 abort: acl: user "fred" denied on "foo/Bar/file.txt" (changeset "f9cafe1212c8")
1431 1431 no rollback information available
1432 1432 0:6675d58eff77
1433 1433
1434 1434
1435 1435 Invalid group
1436 1436
1437 1437 Disable the fakegroups trick to get real failures
1438 1438
1439 1439 $ grep -v fakegroups $config > config.tmp
1440 1440 $ mv config.tmp $config
1441 1441 $ echo '[acl.allow]' >> $config
1442 1442 $ echo "** = @unlikelytoexist" >> $config
1443 1443 $ do_push fred 2>&1 | grep unlikelytoexist
1444 1444 ** = @unlikelytoexist
1445 1445 acl: "unlikelytoexist" not defined in [acl.groups]
1446 1446 error: pretxnchangegroup.acl hook failed: group 'unlikelytoexist' is undefined
1447 1447 abort: group 'unlikelytoexist' is undefined
1448 1448
1449 1449
1450 1450 Branch acl tests setup
1451 1451
1452 1452 $ init_config
1453 1453 $ cd b
1454 1454 $ hg up
1455 1455 0 files updated, 0 files merged, 0 files removed, 0 files unresolved
1456 1456 $ hg branch foobar
1457 1457 marked working directory as branch foobar
1458 1458 (branches are permanent and global, did you want a bookmark?)
1459 1459 $ hg commit -m 'create foobar'
1460 1460 $ echo 'foo contents' > abc.txt
1461 1461 $ hg add abc.txt
1462 1462 $ hg commit -m 'foobar contents'
1463 1463 $ cd ..
1464 1464 $ hg --cwd a pull ../b
1465 1465 pulling from ../b
1466 1466 searching for changes
1467 1467 adding changesets
1468 1468 adding manifests
1469 1469 adding file changes
1470 1470 added 2 changesets with 1 changes to 1 files (+1 heads)
1471 1471 new changesets 81fbf4469322:fb35475503ef
1472 1472 (run 'hg heads' to see heads)
1473 1473
1474 1474 Create additional changeset on foobar branch
1475 1475
1476 1476 $ cd a
1477 1477 $ hg up -C foobar
1478 1478 4 files updated, 0 files merged, 0 files removed, 0 files unresolved
1479 1479 $ echo 'foo contents2' > abc.txt
1480 1480 $ hg commit -m 'foobar contents2'
1481 1481 $ cd ..
1482 1482
1483 1483
1484 1484 No branch acls specified
1485 1485
1486 1486 $ do_push astro
1487 1487 Pushing as user astro
1488 1488 hgrc = """
1489 1489 [hooks]
1490 1490 pretxnchangegroup.acl = python:hgext.acl.hook
1491 1491 [acl]
1492 1492 sources = push
1493 1493 [extensions]
1494 1494 """
1495 1495 pushing to ../b
1496 1496 query 1; heads
1497 1497 searching for changes
1498 1498 all remote heads known locally
1499 1499 listing keys for "phases"
1500 1500 checking for updated bookmarks
1501 1501 listing keys for "bookmarks"
1502 1502 listing keys for "bookmarks"
1503 1503 4 changesets found
1504 1504 list of changesets:
1505 1505 ef1ea85a6374b77d6da9dcda9541f498f2d17df7
1506 1506 f9cafe1212c8c6fa1120d14a556e18cc44ff8bdd
1507 1507 911600dab2ae7a9baff75958b84fe606851ce955
1508 1508 e8fc755d4d8217ee5b0c2bb41558c40d43b92c01
1509 1509 bundle2-output-bundle: "HG20", 5 parts total
1510 bundle2-output-part: "replycaps" 168 bytes payload
1510 bundle2-output-part: "replycaps" 178 bytes payload
1511 1511 bundle2-output-part: "check:phases" 48 bytes payload
1512 1512 bundle2-output-part: "check:heads" streamed payload
1513 1513 bundle2-output-part: "changegroup" (params: 1 mandatory) streamed payload
1514 1514 bundle2-output-part: "phase-heads" 48 bytes payload
1515 1515 bundle2-input-bundle: with-transaction
1516 1516 bundle2-input-part: "replycaps" supported
1517 bundle2-input-part: total payload size 168
1517 bundle2-input-part: total payload size 178
1518 1518 bundle2-input-part: "check:phases" supported
1519 1519 bundle2-input-part: total payload size 48
1520 1520 bundle2-input-part: "check:heads" supported
1521 1521 bundle2-input-part: total payload size 20
1522 1522 bundle2-input-part: "changegroup" (params: 1 mandatory) supported
1523 1523 adding changesets
1524 1524 add changeset ef1ea85a6374
1525 1525 add changeset f9cafe1212c8
1526 1526 add changeset 911600dab2ae
1527 1527 add changeset e8fc755d4d82
1528 1528 adding manifests
1529 1529 adding file changes
1530 1530 adding abc.txt revisions
1531 1531 adding foo/Bar/file.txt revisions
1532 1532 adding foo/file.txt revisions
1533 1533 adding quux/file.py revisions
1534 1534 added 4 changesets with 4 changes to 4 files (+1 heads)
1535 1535 calling hook pretxnchangegroup.acl: hgext.acl.hook
1536 1536 acl: checking access for user "astro"
1537 1537 acl: acl.allow.branches not enabled
1538 1538 acl: acl.deny.branches not enabled
1539 1539 acl: acl.allow not enabled
1540 1540 acl: acl.deny not enabled
1541 1541 acl: branch access granted: "ef1ea85a6374" on branch "default"
1542 1542 acl: path access granted: "ef1ea85a6374"
1543 1543 acl: branch access granted: "f9cafe1212c8" on branch "default"
1544 1544 acl: path access granted: "f9cafe1212c8"
1545 1545 acl: branch access granted: "911600dab2ae" on branch "default"
1546 1546 acl: path access granted: "911600dab2ae"
1547 1547 acl: branch access granted: "e8fc755d4d82" on branch "foobar"
1548 1548 acl: path access granted: "e8fc755d4d82"
1549 1549 bundle2-input-part: total payload size 2068
1550 1550 bundle2-input-part: "phase-heads" supported
1551 1551 bundle2-input-part: total payload size 48
1552 1552 bundle2-input-bundle: 4 parts total
1553 1553 updating the branch cache
1554 1554 bundle2-output-bundle: "HG20", 1 parts total
1555 1555 bundle2-output-part: "reply:changegroup" (advisory) (params: 0 advisory) empty payload
1556 1556 bundle2-input-bundle: no-transaction
1557 1557 bundle2-input-part: "reply:changegroup" (advisory) (params: 0 advisory) supported
1558 1558 bundle2-input-bundle: 0 parts total
1559 1559 listing keys for "phases"
1560 1560 repository tip rolled back to revision 2 (undo push)
1561 1561 2:fb35475503ef
1562 1562
1563 1563
1564 1564 Branch acl deny test
1565 1565
1566 1566 $ echo "[acl.deny.branches]" >> $config
1567 1567 $ echo "foobar = *" >> $config
1568 1568 $ do_push astro
1569 1569 Pushing as user astro
1570 1570 hgrc = """
1571 1571 [hooks]
1572 1572 pretxnchangegroup.acl = python:hgext.acl.hook
1573 1573 [acl]
1574 1574 sources = push
1575 1575 [extensions]
1576 1576 [acl.deny.branches]
1577 1577 foobar = *
1578 1578 """
1579 1579 pushing to ../b
1580 1580 query 1; heads
1581 1581 searching for changes
1582 1582 all remote heads known locally
1583 1583 listing keys for "phases"
1584 1584 checking for updated bookmarks
1585 1585 listing keys for "bookmarks"
1586 1586 listing keys for "bookmarks"
1587 1587 4 changesets found
1588 1588 list of changesets:
1589 1589 ef1ea85a6374b77d6da9dcda9541f498f2d17df7
1590 1590 f9cafe1212c8c6fa1120d14a556e18cc44ff8bdd
1591 1591 911600dab2ae7a9baff75958b84fe606851ce955
1592 1592 e8fc755d4d8217ee5b0c2bb41558c40d43b92c01
1593 1593 bundle2-output-bundle: "HG20", 5 parts total
1594 bundle2-output-part: "replycaps" 168 bytes payload
1594 bundle2-output-part: "replycaps" 178 bytes payload
1595 1595 bundle2-output-part: "check:phases" 48 bytes payload
1596 1596 bundle2-output-part: "check:heads" streamed payload
1597 1597 bundle2-output-part: "changegroup" (params: 1 mandatory) streamed payload
1598 1598 bundle2-output-part: "phase-heads" 48 bytes payload
1599 1599 bundle2-input-bundle: with-transaction
1600 1600 bundle2-input-part: "replycaps" supported
1601 bundle2-input-part: total payload size 168
1601 bundle2-input-part: total payload size 178
1602 1602 bundle2-input-part: "check:phases" supported
1603 1603 bundle2-input-part: total payload size 48
1604 1604 bundle2-input-part: "check:heads" supported
1605 1605 bundle2-input-part: total payload size 20
1606 1606 bundle2-input-part: "changegroup" (params: 1 mandatory) supported
1607 1607 adding changesets
1608 1608 add changeset ef1ea85a6374
1609 1609 add changeset f9cafe1212c8
1610 1610 add changeset 911600dab2ae
1611 1611 add changeset e8fc755d4d82
1612 1612 adding manifests
1613 1613 adding file changes
1614 1614 adding abc.txt revisions
1615 1615 adding foo/Bar/file.txt revisions
1616 1616 adding foo/file.txt revisions
1617 1617 adding quux/file.py revisions
1618 1618 added 4 changesets with 4 changes to 4 files (+1 heads)
1619 1619 calling hook pretxnchangegroup.acl: hgext.acl.hook
1620 1620 acl: checking access for user "astro"
1621 1621 acl: acl.allow.branches not enabled
1622 1622 acl: acl.deny.branches enabled, 1 entries for user astro
1623 1623 acl: acl.allow not enabled
1624 1624 acl: acl.deny not enabled
1625 1625 acl: branch access granted: "ef1ea85a6374" on branch "default"
1626 1626 acl: path access granted: "ef1ea85a6374"
1627 1627 acl: branch access granted: "f9cafe1212c8" on branch "default"
1628 1628 acl: path access granted: "f9cafe1212c8"
1629 1629 acl: branch access granted: "911600dab2ae" on branch "default"
1630 1630 acl: path access granted: "911600dab2ae"
1631 1631 error: pretxnchangegroup.acl hook failed: acl: user "astro" denied on branch "foobar" (changeset "e8fc755d4d82")
1632 1632 bundle2-input-part: total payload size 2068
1633 1633 bundle2-input-part: total payload size 48
1634 1634 bundle2-input-bundle: 4 parts total
1635 1635 transaction abort!
1636 1636 rollback completed
1637 1637 abort: acl: user "astro" denied on branch "foobar" (changeset "e8fc755d4d82")
1638 1638 no rollback information available
1639 1639 2:fb35475503ef
1640 1640
1641 1641
1642 1642 Branch acl empty allow test
1643 1643
1644 1644 $ init_config
1645 1645 $ echo "[acl.allow.branches]" >> $config
1646 1646 $ do_push astro
1647 1647 Pushing as user astro
1648 1648 hgrc = """
1649 1649 [hooks]
1650 1650 pretxnchangegroup.acl = python:hgext.acl.hook
1651 1651 [acl]
1652 1652 sources = push
1653 1653 [extensions]
1654 1654 [acl.allow.branches]
1655 1655 """
1656 1656 pushing to ../b
1657 1657 query 1; heads
1658 1658 searching for changes
1659 1659 all remote heads known locally
1660 1660 listing keys for "phases"
1661 1661 checking for updated bookmarks
1662 1662 listing keys for "bookmarks"
1663 1663 listing keys for "bookmarks"
1664 1664 4 changesets found
1665 1665 list of changesets:
1666 1666 ef1ea85a6374b77d6da9dcda9541f498f2d17df7
1667 1667 f9cafe1212c8c6fa1120d14a556e18cc44ff8bdd
1668 1668 911600dab2ae7a9baff75958b84fe606851ce955
1669 1669 e8fc755d4d8217ee5b0c2bb41558c40d43b92c01
1670 1670 bundle2-output-bundle: "HG20", 5 parts total
1671 bundle2-output-part: "replycaps" 168 bytes payload
1671 bundle2-output-part: "replycaps" 178 bytes payload
1672 1672 bundle2-output-part: "check:phases" 48 bytes payload
1673 1673 bundle2-output-part: "check:heads" streamed payload
1674 1674 bundle2-output-part: "changegroup" (params: 1 mandatory) streamed payload
1675 1675 bundle2-output-part: "phase-heads" 48 bytes payload
1676 1676 bundle2-input-bundle: with-transaction
1677 1677 bundle2-input-part: "replycaps" supported
1678 bundle2-input-part: total payload size 168
1678 bundle2-input-part: total payload size 178
1679 1679 bundle2-input-part: "check:phases" supported
1680 1680 bundle2-input-part: total payload size 48
1681 1681 bundle2-input-part: "check:heads" supported
1682 1682 bundle2-input-part: total payload size 20
1683 1683 bundle2-input-part: "changegroup" (params: 1 mandatory) supported
1684 1684 adding changesets
1685 1685 add changeset ef1ea85a6374
1686 1686 add changeset f9cafe1212c8
1687 1687 add changeset 911600dab2ae
1688 1688 add changeset e8fc755d4d82
1689 1689 adding manifests
1690 1690 adding file changes
1691 1691 adding abc.txt revisions
1692 1692 adding foo/Bar/file.txt revisions
1693 1693 adding foo/file.txt revisions
1694 1694 adding quux/file.py revisions
1695 1695 added 4 changesets with 4 changes to 4 files (+1 heads)
1696 1696 calling hook pretxnchangegroup.acl: hgext.acl.hook
1697 1697 acl: checking access for user "astro"
1698 1698 acl: acl.allow.branches enabled, 0 entries for user astro
1699 1699 acl: acl.deny.branches not enabled
1700 1700 acl: acl.allow not enabled
1701 1701 acl: acl.deny not enabled
1702 1702 error: pretxnchangegroup.acl hook failed: acl: user "astro" not allowed on branch "default" (changeset "ef1ea85a6374")
1703 1703 bundle2-input-part: total payload size 2068
1704 1704 bundle2-input-part: total payload size 48
1705 1705 bundle2-input-bundle: 4 parts total
1706 1706 transaction abort!
1707 1707 rollback completed
1708 1708 abort: acl: user "astro" not allowed on branch "default" (changeset "ef1ea85a6374")
1709 1709 no rollback information available
1710 1710 2:fb35475503ef
1711 1711
1712 1712
1713 1713 Branch acl allow other
1714 1714
1715 1715 $ init_config
1716 1716 $ echo "[acl.allow.branches]" >> $config
1717 1717 $ echo "* = george" >> $config
1718 1718 $ do_push astro
1719 1719 Pushing as user astro
1720 1720 hgrc = """
1721 1721 [hooks]
1722 1722 pretxnchangegroup.acl = python:hgext.acl.hook
1723 1723 [acl]
1724 1724 sources = push
1725 1725 [extensions]
1726 1726 [acl.allow.branches]
1727 1727 * = george
1728 1728 """
1729 1729 pushing to ../b
1730 1730 query 1; heads
1731 1731 searching for changes
1732 1732 all remote heads known locally
1733 1733 listing keys for "phases"
1734 1734 checking for updated bookmarks
1735 1735 listing keys for "bookmarks"
1736 1736 listing keys for "bookmarks"
1737 1737 4 changesets found
1738 1738 list of changesets:
1739 1739 ef1ea85a6374b77d6da9dcda9541f498f2d17df7
1740 1740 f9cafe1212c8c6fa1120d14a556e18cc44ff8bdd
1741 1741 911600dab2ae7a9baff75958b84fe606851ce955
1742 1742 e8fc755d4d8217ee5b0c2bb41558c40d43b92c01
1743 1743 bundle2-output-bundle: "HG20", 5 parts total
1744 bundle2-output-part: "replycaps" 168 bytes payload
1744 bundle2-output-part: "replycaps" 178 bytes payload
1745 1745 bundle2-output-part: "check:phases" 48 bytes payload
1746 1746 bundle2-output-part: "check:heads" streamed payload
1747 1747 bundle2-output-part: "changegroup" (params: 1 mandatory) streamed payload
1748 1748 bundle2-output-part: "phase-heads" 48 bytes payload
1749 1749 bundle2-input-bundle: with-transaction
1750 1750 bundle2-input-part: "replycaps" supported
1751 bundle2-input-part: total payload size 168
1751 bundle2-input-part: total payload size 178
1752 1752 bundle2-input-part: "check:phases" supported
1753 1753 bundle2-input-part: total payload size 48
1754 1754 bundle2-input-part: "check:heads" supported
1755 1755 bundle2-input-part: total payload size 20
1756 1756 bundle2-input-part: "changegroup" (params: 1 mandatory) supported
1757 1757 adding changesets
1758 1758 add changeset ef1ea85a6374
1759 1759 add changeset f9cafe1212c8
1760 1760 add changeset 911600dab2ae
1761 1761 add changeset e8fc755d4d82
1762 1762 adding manifests
1763 1763 adding file changes
1764 1764 adding abc.txt revisions
1765 1765 adding foo/Bar/file.txt revisions
1766 1766 adding foo/file.txt revisions
1767 1767 adding quux/file.py revisions
1768 1768 added 4 changesets with 4 changes to 4 files (+1 heads)
1769 1769 calling hook pretxnchangegroup.acl: hgext.acl.hook
1770 1770 acl: checking access for user "astro"
1771 1771 acl: acl.allow.branches enabled, 0 entries for user astro
1772 1772 acl: acl.deny.branches not enabled
1773 1773 acl: acl.allow not enabled
1774 1774 acl: acl.deny not enabled
1775 1775 error: pretxnchangegroup.acl hook failed: acl: user "astro" not allowed on branch "default" (changeset "ef1ea85a6374")
1776 1776 bundle2-input-part: total payload size 2068
1777 1777 bundle2-input-part: total payload size 48
1778 1778 bundle2-input-bundle: 4 parts total
1779 1779 transaction abort!
1780 1780 rollback completed
1781 1781 abort: acl: user "astro" not allowed on branch "default" (changeset "ef1ea85a6374")
1782 1782 no rollback information available
1783 1783 2:fb35475503ef
1784 1784
1785 1785 $ do_push george
1786 1786 Pushing as user george
1787 1787 hgrc = """
1788 1788 [hooks]
1789 1789 pretxnchangegroup.acl = python:hgext.acl.hook
1790 1790 [acl]
1791 1791 sources = push
1792 1792 [extensions]
1793 1793 [acl.allow.branches]
1794 1794 * = george
1795 1795 """
1796 1796 pushing to ../b
1797 1797 query 1; heads
1798 1798 searching for changes
1799 1799 all remote heads known locally
1800 1800 listing keys for "phases"
1801 1801 checking for updated bookmarks
1802 1802 listing keys for "bookmarks"
1803 1803 listing keys for "bookmarks"
1804 1804 4 changesets found
1805 1805 list of changesets:
1806 1806 ef1ea85a6374b77d6da9dcda9541f498f2d17df7
1807 1807 f9cafe1212c8c6fa1120d14a556e18cc44ff8bdd
1808 1808 911600dab2ae7a9baff75958b84fe606851ce955
1809 1809 e8fc755d4d8217ee5b0c2bb41558c40d43b92c01
1810 1810 bundle2-output-bundle: "HG20", 5 parts total
1811 bundle2-output-part: "replycaps" 168 bytes payload
1811 bundle2-output-part: "replycaps" 178 bytes payload
1812 1812 bundle2-output-part: "check:phases" 48 bytes payload
1813 1813 bundle2-output-part: "check:heads" streamed payload
1814 1814 bundle2-output-part: "changegroup" (params: 1 mandatory) streamed payload
1815 1815 bundle2-output-part: "phase-heads" 48 bytes payload
1816 1816 bundle2-input-bundle: with-transaction
1817 1817 bundle2-input-part: "replycaps" supported
1818 bundle2-input-part: total payload size 168
1818 bundle2-input-part: total payload size 178
1819 1819 bundle2-input-part: "check:phases" supported
1820 1820 bundle2-input-part: total payload size 48
1821 1821 bundle2-input-part: "check:heads" supported
1822 1822 bundle2-input-part: total payload size 20
1823 1823 bundle2-input-part: "changegroup" (params: 1 mandatory) supported
1824 1824 adding changesets
1825 1825 add changeset ef1ea85a6374
1826 1826 add changeset f9cafe1212c8
1827 1827 add changeset 911600dab2ae
1828 1828 add changeset e8fc755d4d82
1829 1829 adding manifests
1830 1830 adding file changes
1831 1831 adding abc.txt revisions
1832 1832 adding foo/Bar/file.txt revisions
1833 1833 adding foo/file.txt revisions
1834 1834 adding quux/file.py revisions
1835 1835 added 4 changesets with 4 changes to 4 files (+1 heads)
1836 1836 calling hook pretxnchangegroup.acl: hgext.acl.hook
1837 1837 acl: checking access for user "george"
1838 1838 acl: acl.allow.branches enabled, 1 entries for user george
1839 1839 acl: acl.deny.branches not enabled
1840 1840 acl: acl.allow not enabled
1841 1841 acl: acl.deny not enabled
1842 1842 acl: branch access granted: "ef1ea85a6374" on branch "default"
1843 1843 acl: path access granted: "ef1ea85a6374"
1844 1844 acl: branch access granted: "f9cafe1212c8" on branch "default"
1845 1845 acl: path access granted: "f9cafe1212c8"
1846 1846 acl: branch access granted: "911600dab2ae" on branch "default"
1847 1847 acl: path access granted: "911600dab2ae"
1848 1848 acl: branch access granted: "e8fc755d4d82" on branch "foobar"
1849 1849 acl: path access granted: "e8fc755d4d82"
1850 1850 bundle2-input-part: total payload size 2068
1851 1851 bundle2-input-part: "phase-heads" supported
1852 1852 bundle2-input-part: total payload size 48
1853 1853 bundle2-input-bundle: 4 parts total
1854 1854 updating the branch cache
1855 1855 bundle2-output-bundle: "HG20", 1 parts total
1856 1856 bundle2-output-part: "reply:changegroup" (advisory) (params: 0 advisory) empty payload
1857 1857 bundle2-input-bundle: no-transaction
1858 1858 bundle2-input-part: "reply:changegroup" (advisory) (params: 0 advisory) supported
1859 1859 bundle2-input-bundle: 0 parts total
1860 1860 listing keys for "phases"
1861 1861 repository tip rolled back to revision 2 (undo push)
1862 1862 2:fb35475503ef
1863 1863
1864 1864
1865 1865 Branch acl conflicting allow
1866 1866 the asterisk entry applies to all branches, so it allows george to
1867 1867 push foobar to the remote
1868 1868
1869 1869 $ init_config
1870 1870 $ echo "[acl.allow.branches]" >> $config
1871 1871 $ echo "foobar = astro" >> $config
1872 1872 $ echo "* = george" >> $config
1873 1873 $ do_push george
1874 1874 Pushing as user george
1875 1875 hgrc = """
1876 1876 [hooks]
1877 1877 pretxnchangegroup.acl = python:hgext.acl.hook
1878 1878 [acl]
1879 1879 sources = push
1880 1880 [extensions]
1881 1881 [acl.allow.branches]
1882 1882 foobar = astro
1883 1883 * = george
1884 1884 """
1885 1885 pushing to ../b
1886 1886 query 1; heads
1887 1887 searching for changes
1888 1888 all remote heads known locally
1889 1889 listing keys for "phases"
1890 1890 checking for updated bookmarks
1891 1891 listing keys for "bookmarks"
1892 1892 listing keys for "bookmarks"
1893 1893 4 changesets found
1894 1894 list of changesets:
1895 1895 ef1ea85a6374b77d6da9dcda9541f498f2d17df7
1896 1896 f9cafe1212c8c6fa1120d14a556e18cc44ff8bdd
1897 1897 911600dab2ae7a9baff75958b84fe606851ce955
1898 1898 e8fc755d4d8217ee5b0c2bb41558c40d43b92c01
1899 1899 bundle2-output-bundle: "HG20", 5 parts total
1900 bundle2-output-part: "replycaps" 168 bytes payload
1900 bundle2-output-part: "replycaps" 178 bytes payload
1901 1901 bundle2-output-part: "check:phases" 48 bytes payload
1902 1902 bundle2-output-part: "check:heads" streamed payload
1903 1903 bundle2-output-part: "changegroup" (params: 1 mandatory) streamed payload
1904 1904 bundle2-output-part: "phase-heads" 48 bytes payload
1905 1905 bundle2-input-bundle: with-transaction
1906 1906 bundle2-input-part: "replycaps" supported
1907 bundle2-input-part: total payload size 168
1907 bundle2-input-part: total payload size 178
1908 1908 bundle2-input-part: "check:phases" supported
1909 1909 bundle2-input-part: total payload size 48
1910 1910 bundle2-input-part: "check:heads" supported
1911 1911 bundle2-input-part: total payload size 20
1912 1912 bundle2-input-part: "changegroup" (params: 1 mandatory) supported
1913 1913 adding changesets
1914 1914 add changeset ef1ea85a6374
1915 1915 add changeset f9cafe1212c8
1916 1916 add changeset 911600dab2ae
1917 1917 add changeset e8fc755d4d82
1918 1918 adding manifests
1919 1919 adding file changes
1920 1920 adding abc.txt revisions
1921 1921 adding foo/Bar/file.txt revisions
1922 1922 adding foo/file.txt revisions
1923 1923 adding quux/file.py revisions
1924 1924 added 4 changesets with 4 changes to 4 files (+1 heads)
1925 1925 calling hook pretxnchangegroup.acl: hgext.acl.hook
1926 1926 acl: checking access for user "george"
1927 1927 acl: acl.allow.branches enabled, 1 entries for user george
1928 1928 acl: acl.deny.branches not enabled
1929 1929 acl: acl.allow not enabled
1930 1930 acl: acl.deny not enabled
1931 1931 acl: branch access granted: "ef1ea85a6374" on branch "default"
1932 1932 acl: path access granted: "ef1ea85a6374"
1933 1933 acl: branch access granted: "f9cafe1212c8" on branch "default"
1934 1934 acl: path access granted: "f9cafe1212c8"
1935 1935 acl: branch access granted: "911600dab2ae" on branch "default"
1936 1936 acl: path access granted: "911600dab2ae"
1937 1937 acl: branch access granted: "e8fc755d4d82" on branch "foobar"
1938 1938 acl: path access granted: "e8fc755d4d82"
1939 1939 bundle2-input-part: total payload size 2068
1940 1940 bundle2-input-part: "phase-heads" supported
1941 1941 bundle2-input-part: total payload size 48
1942 1942 bundle2-input-bundle: 4 parts total
1943 1943 updating the branch cache
1944 1944 bundle2-output-bundle: "HG20", 1 parts total
1945 1945 bundle2-output-part: "reply:changegroup" (advisory) (params: 0 advisory) empty payload
1946 1946 bundle2-input-bundle: no-transaction
1947 1947 bundle2-input-part: "reply:changegroup" (advisory) (params: 0 advisory) supported
1948 1948 bundle2-input-bundle: 0 parts total
1949 1949 listing keys for "phases"
1950 1950 repository tip rolled back to revision 2 (undo push)
1951 1951 2:fb35475503ef
1952 1952
1953 1953 Branch acl conflicting deny
1954 1954
1955 1955 $ init_config
1956 1956 $ echo "[acl.deny.branches]" >> $config
1957 1957 $ echo "foobar = astro" >> $config
1958 1958 $ echo "default = astro" >> $config
1959 1959 $ echo "* = george" >> $config
1960 1960 $ do_push george
1961 1961 Pushing as user george
1962 1962 hgrc = """
1963 1963 [hooks]
1964 1964 pretxnchangegroup.acl = python:hgext.acl.hook
1965 1965 [acl]
1966 1966 sources = push
1967 1967 [extensions]
1968 1968 [acl.deny.branches]
1969 1969 foobar = astro
1970 1970 default = astro
1971 1971 * = george
1972 1972 """
1973 1973 pushing to ../b
1974 1974 query 1; heads
1975 1975 searching for changes
1976 1976 all remote heads known locally
1977 1977 listing keys for "phases"
1978 1978 checking for updated bookmarks
1979 1979 listing keys for "bookmarks"
1980 1980 listing keys for "bookmarks"
1981 1981 4 changesets found
1982 1982 list of changesets:
1983 1983 ef1ea85a6374b77d6da9dcda9541f498f2d17df7
1984 1984 f9cafe1212c8c6fa1120d14a556e18cc44ff8bdd
1985 1985 911600dab2ae7a9baff75958b84fe606851ce955
1986 1986 e8fc755d4d8217ee5b0c2bb41558c40d43b92c01
1987 1987 bundle2-output-bundle: "HG20", 5 parts total
1988 bundle2-output-part: "replycaps" 168 bytes payload
1988 bundle2-output-part: "replycaps" 178 bytes payload
1989 1989 bundle2-output-part: "check:phases" 48 bytes payload
1990 1990 bundle2-output-part: "check:heads" streamed payload
1991 1991 bundle2-output-part: "changegroup" (params: 1 mandatory) streamed payload
1992 1992 bundle2-output-part: "phase-heads" 48 bytes payload
1993 1993 bundle2-input-bundle: with-transaction
1994 1994 bundle2-input-part: "replycaps" supported
1995 bundle2-input-part: total payload size 168
1995 bundle2-input-part: total payload size 178
1996 1996 bundle2-input-part: "check:phases" supported
1997 1997 bundle2-input-part: total payload size 48
1998 1998 bundle2-input-part: "check:heads" supported
1999 1999 bundle2-input-part: total payload size 20
2000 2000 bundle2-input-part: "changegroup" (params: 1 mandatory) supported
2001 2001 adding changesets
2002 2002 add changeset ef1ea85a6374
2003 2003 add changeset f9cafe1212c8
2004 2004 add changeset 911600dab2ae
2005 2005 add changeset e8fc755d4d82
2006 2006 adding manifests
2007 2007 adding file changes
2008 2008 adding abc.txt revisions
2009 2009 adding foo/Bar/file.txt revisions
2010 2010 adding foo/file.txt revisions
2011 2011 adding quux/file.py revisions
2012 2012 added 4 changesets with 4 changes to 4 files (+1 heads)
2013 2013 calling hook pretxnchangegroup.acl: hgext.acl.hook
2014 2014 acl: checking access for user "george"
2015 2015 acl: acl.allow.branches not enabled
2016 2016 acl: acl.deny.branches enabled, 1 entries for user george
2017 2017 acl: acl.allow not enabled
2018 2018 acl: acl.deny not enabled
2019 2019 error: pretxnchangegroup.acl hook failed: acl: user "george" denied on branch "default" (changeset "ef1ea85a6374")
2020 2020 bundle2-input-part: total payload size 2068
2021 2021 bundle2-input-part: total payload size 48
2022 2022 bundle2-input-bundle: 4 parts total
2023 2023 transaction abort!
2024 2024 rollback completed
2025 2025 abort: acl: user "george" denied on branch "default" (changeset "ef1ea85a6374")
2026 2026 no rollback information available
2027 2027 2:fb35475503ef
2028 2028
2029 2029 User 'astro' must not be denied ('!astro' matches every user except astro)
2030 2030
2031 2031 $ init_config
2032 2032 $ echo "[acl.deny.branches]" >> $config
2033 2033 $ echo "default = !astro" >> $config
2034 2034 $ do_push astro
2035 2035 Pushing as user astro
2036 2036 hgrc = """
2037 2037 [hooks]
2038 2038 pretxnchangegroup.acl = python:hgext.acl.hook
2039 2039 [acl]
2040 2040 sources = push
2041 2041 [extensions]
2042 2042 [acl.deny.branches]
2043 2043 default = !astro
2044 2044 """
2045 2045 pushing to ../b
2046 2046 query 1; heads
2047 2047 searching for changes
2048 2048 all remote heads known locally
2049 2049 listing keys for "phases"
2050 2050 checking for updated bookmarks
2051 2051 listing keys for "bookmarks"
2052 2052 listing keys for "bookmarks"
2053 2053 4 changesets found
2054 2054 list of changesets:
2055 2055 ef1ea85a6374b77d6da9dcda9541f498f2d17df7
2056 2056 f9cafe1212c8c6fa1120d14a556e18cc44ff8bdd
2057 2057 911600dab2ae7a9baff75958b84fe606851ce955
2058 2058 e8fc755d4d8217ee5b0c2bb41558c40d43b92c01
2059 2059 bundle2-output-bundle: "HG20", 5 parts total
2060 bundle2-output-part: "replycaps" 168 bytes payload
2060 bundle2-output-part: "replycaps" 178 bytes payload
2061 2061 bundle2-output-part: "check:phases" 48 bytes payload
2062 2062 bundle2-output-part: "check:heads" streamed payload
2063 2063 bundle2-output-part: "changegroup" (params: 1 mandatory) streamed payload
2064 2064 bundle2-output-part: "phase-heads" 48 bytes payload
2065 2065 bundle2-input-bundle: with-transaction
2066 2066 bundle2-input-part: "replycaps" supported
2067 bundle2-input-part: total payload size 168
2067 bundle2-input-part: total payload size 178
2068 2068 bundle2-input-part: "check:phases" supported
2069 2069 bundle2-input-part: total payload size 48
2070 2070 bundle2-input-part: "check:heads" supported
2071 2071 bundle2-input-part: total payload size 20
2072 2072 bundle2-input-part: "changegroup" (params: 1 mandatory) supported
2073 2073 adding changesets
2074 2074 add changeset ef1ea85a6374
2075 2075 add changeset f9cafe1212c8
2076 2076 add changeset 911600dab2ae
2077 2077 add changeset e8fc755d4d82
2078 2078 adding manifests
2079 2079 adding file changes
2080 2080 adding abc.txt revisions
2081 2081 adding foo/Bar/file.txt revisions
2082 2082 adding foo/file.txt revisions
2083 2083 adding quux/file.py revisions
2084 2084 added 4 changesets with 4 changes to 4 files (+1 heads)
2085 2085 calling hook pretxnchangegroup.acl: hgext.acl.hook
2086 2086 acl: checking access for user "astro"
2087 2087 acl: acl.allow.branches not enabled
2088 2088 acl: acl.deny.branches enabled, 0 entries for user astro
2089 2089 acl: acl.allow not enabled
2090 2090 acl: acl.deny not enabled
2091 2091 acl: branch access granted: "ef1ea85a6374" on branch "default"
2092 2092 acl: path access granted: "ef1ea85a6374"
2093 2093 acl: branch access granted: "f9cafe1212c8" on branch "default"
2094 2094 acl: path access granted: "f9cafe1212c8"
2095 2095 acl: branch access granted: "911600dab2ae" on branch "default"
2096 2096 acl: path access granted: "911600dab2ae"
2097 2097 acl: branch access granted: "e8fc755d4d82" on branch "foobar"
2098 2098 acl: path access granted: "e8fc755d4d82"
2099 2099 bundle2-input-part: total payload size 2068
2100 2100 bundle2-input-part: "phase-heads" supported
2101 2101 bundle2-input-part: total payload size 48
2102 2102 bundle2-input-bundle: 4 parts total
2103 2103 updating the branch cache
2104 2104 bundle2-output-bundle: "HG20", 1 parts total
2105 2105 bundle2-output-part: "reply:changegroup" (advisory) (params: 0 advisory) empty payload
2106 2106 bundle2-input-bundle: no-transaction
2107 2107 bundle2-input-part: "reply:changegroup" (advisory) (params: 0 advisory) supported
2108 2108 bundle2-input-bundle: 0 parts total
2109 2109 listing keys for "phases"
2110 2110 repository tip rolled back to revision 2 (undo push)
2111 2111 2:fb35475503ef
2112 2112
2113 2113
2114 2114 Non-astro users must be denied
2115 2115
2116 2116 $ do_push george
2117 2117 Pushing as user george
2118 2118 hgrc = """
2119 2119 [hooks]
2120 2120 pretxnchangegroup.acl = python:hgext.acl.hook
2121 2121 [acl]
2122 2122 sources = push
2123 2123 [extensions]
2124 2124 [acl.deny.branches]
2125 2125 default = !astro
2126 2126 """
2127 2127 pushing to ../b
2128 2128 query 1; heads
2129 2129 searching for changes
2130 2130 all remote heads known locally
2131 2131 listing keys for "phases"
2132 2132 checking for updated bookmarks
2133 2133 listing keys for "bookmarks"
2134 2134 listing keys for "bookmarks"
2135 2135 4 changesets found
2136 2136 list of changesets:
2137 2137 ef1ea85a6374b77d6da9dcda9541f498f2d17df7
2138 2138 f9cafe1212c8c6fa1120d14a556e18cc44ff8bdd
2139 2139 911600dab2ae7a9baff75958b84fe606851ce955
2140 2140 e8fc755d4d8217ee5b0c2bb41558c40d43b92c01
2141 2141 bundle2-output-bundle: "HG20", 5 parts total
2142 bundle2-output-part: "replycaps" 168 bytes payload
2142 bundle2-output-part: "replycaps" 178 bytes payload
2143 2143 bundle2-output-part: "check:phases" 48 bytes payload
2144 2144 bundle2-output-part: "check:heads" streamed payload
2145 2145 bundle2-output-part: "changegroup" (params: 1 mandatory) streamed payload
2146 2146 bundle2-output-part: "phase-heads" 48 bytes payload
2147 2147 bundle2-input-bundle: with-transaction
2148 2148 bundle2-input-part: "replycaps" supported
2149 bundle2-input-part: total payload size 168
2149 bundle2-input-part: total payload size 178
2150 2150 bundle2-input-part: "check:phases" supported
2151 2151 bundle2-input-part: total payload size 48
2152 2152 bundle2-input-part: "check:heads" supported
2153 2153 bundle2-input-part: total payload size 20
2154 2154 bundle2-input-part: "changegroup" (params: 1 mandatory) supported
2155 2155 adding changesets
2156 2156 add changeset ef1ea85a6374
2157 2157 add changeset f9cafe1212c8
2158 2158 add changeset 911600dab2ae
2159 2159 add changeset e8fc755d4d82
2160 2160 adding manifests
2161 2161 adding file changes
2162 2162 adding abc.txt revisions
2163 2163 adding foo/Bar/file.txt revisions
2164 2164 adding foo/file.txt revisions
2165 2165 adding quux/file.py revisions
2166 2166 added 4 changesets with 4 changes to 4 files (+1 heads)
2167 2167 calling hook pretxnchangegroup.acl: hgext.acl.hook
2168 2168 acl: checking access for user "george"
2169 2169 acl: acl.allow.branches not enabled
2170 2170 acl: acl.deny.branches enabled, 1 entries for user george
2171 2171 acl: acl.allow not enabled
2172 2172 acl: acl.deny not enabled
2173 2173 error: pretxnchangegroup.acl hook failed: acl: user "george" denied on branch "default" (changeset "ef1ea85a6374")
2174 2174 bundle2-input-part: total payload size 2068
2175 2175 bundle2-input-part: total payload size 48
2176 2176 bundle2-input-bundle: 4 parts total
2177 2177 transaction abort!
2178 2178 rollback completed
2179 2179 abort: acl: user "george" denied on branch "default" (changeset "ef1ea85a6374")
2180 2180 no rollback information available
2181 2181 2:fb35475503ef
2182 2182
2183 2183
@@ -1,995 +1,1010 b''
1 1 #require serve
2 2
3 3 $ cat << EOF >> $HGRCPATH
4 4 > [ui]
5 5 > logtemplate={rev}:{node|short} {desc|firstline}
6 6 > [phases]
7 7 > publish=False
8 8 > [experimental]
9 9 > evolution.createmarkers=True
10 10 > evolution.exchange=True
11 11 > EOF
12 12
13 13 $ cat > $TESTTMP/hook.sh <<'EOF'
14 14 > echo "test-hook-bookmark: $HG_BOOKMARK: $HG_OLDNODE -> $HG_NODE"
15 15 > EOF
16 16 $ TESTHOOK="hooks.txnclose-bookmark.test=sh $TESTTMP/hook.sh"
17 17
18 18 initialize
19 19
20 20 $ hg init a
21 21 $ cd a
22 22 $ echo 'test' > test
23 23 $ hg commit -Am'test'
24 24 adding test
25 25
26 26 set bookmarks
27 27
28 28 $ hg bookmark X
29 29 $ hg bookmark Y
30 30 $ hg bookmark Z
31 31
32 32 import bookmark by name
33 33
34 34 $ hg init ../b
35 35 $ cd ../b
36 36 $ hg book Y
37 37 $ hg book
38 38 * Y -1:000000000000
39 39 $ hg pull ../a --config "$TESTHOOK"
40 40 pulling from ../a
41 41 requesting all changes
42 42 adding changesets
43 43 adding manifests
44 44 adding file changes
45 45 added 1 changesets with 1 changes to 1 files
46 46 adding remote bookmark X
47 47 updating bookmark Y
48 48 adding remote bookmark Z
49 49 new changesets 4e3505fd9583
50 50 test-hook-bookmark: X: -> 4e3505fd95835d721066b76e75dbb8cc554d7f77
51 51 test-hook-bookmark: Y: 0000000000000000000000000000000000000000 -> 4e3505fd95835d721066b76e75dbb8cc554d7f77
52 52 test-hook-bookmark: Z: -> 4e3505fd95835d721066b76e75dbb8cc554d7f77
53 53 (run 'hg update' to get a working copy)
54 54 $ hg bookmarks
55 55 X 0:4e3505fd9583
56 56 * Y 0:4e3505fd9583
57 57 Z 0:4e3505fd9583
58 58 $ hg debugpushkey ../a namespaces
59 59 bookmarks
60 60 namespaces
61 61 obsolete
62 62 phases
63 63 $ hg debugpushkey ../a bookmarks
64 64 X 4e3505fd95835d721066b76e75dbb8cc554d7f77
65 65 Y 4e3505fd95835d721066b76e75dbb8cc554d7f77
66 66 Z 4e3505fd95835d721066b76e75dbb8cc554d7f77
67 67
68 68 delete the bookmark to re-pull it
69 69
70 70 $ hg book -d X
71 71 $ hg pull -B X ../a
72 72 pulling from ../a
73 73 no changes found
74 74 adding remote bookmark X
75 75
76 76 finally no-op pull
77 77
78 78 $ hg pull -B X ../a
79 79 pulling from ../a
80 80 no changes found
81 81 $ hg bookmark
82 82 X 0:4e3505fd9583
83 83 * Y 0:4e3505fd9583
84 84 Z 0:4e3505fd9583
85 85
86 86 export bookmark by name
87 87
88 88 $ hg bookmark W
89 89 $ hg bookmark foo
90 90 $ hg bookmark foobar
91 91 $ hg push -B W ../a
92 92 pushing to ../a
93 93 searching for changes
94 94 no changes found
95 95 exporting bookmark W
96 96 [1]
97 97 $ hg -R ../a bookmarks
98 98 W -1:000000000000
99 99 X 0:4e3505fd9583
100 100 Y 0:4e3505fd9583
101 101 * Z 0:4e3505fd9583
102 102
103 103 delete a remote bookmark
104 104
105 105 $ hg book -d W
106 106 $ hg push -B W ../a --config "$TESTHOOK" --debug --config devel.bundle2.debug=yes
107 107 pushing to ../a
108 108 query 1; heads
109 109 searching for changes
110 110 all remote heads known locally
111 111 listing keys for "phases"
112 112 checking for updated bookmarks
113 113 listing keys for "bookmarks"
114 114 no changes found
115 bundle2-output-bundle: "HG20", 3 parts total
115 bundle2-output-bundle: "HG20", 4 parts total
116 116 bundle2-output: start emission of HG20 stream
117 117 bundle2-output: bundle parameter:
118 118 bundle2-output: start of parts
119 119 bundle2-output: bundle part: "replycaps"
120 bundle2-output-part: "replycaps" 185 bytes payload
120 bundle2-output-part: "replycaps" 195 bytes payload
121 121 bundle2-output: part 0: "REPLYCAPS"
122 122 bundle2-output: header chunk size: 16
123 bundle2-output: payload chunk size: 185
123 bundle2-output: payload chunk size: 195
124 bundle2-output: closing payload chunk
125 bundle2-output: bundle part: "check:bookmarks"
126 bundle2-output-part: "check:bookmarks" 23 bytes payload
127 bundle2-output: part 1: "CHECK:BOOKMARKS"
128 bundle2-output: header chunk size: 22
129 bundle2-output: payload chunk size: 23
124 130 bundle2-output: closing payload chunk
125 131 bundle2-output: bundle part: "check:phases"
126 132 bundle2-output-part: "check:phases" 48 bytes payload
127 bundle2-output: part 1: "CHECK:PHASES"
133 bundle2-output: part 2: "CHECK:PHASES"
128 134 bundle2-output: header chunk size: 19
129 135 bundle2-output: payload chunk size: 48
130 136 bundle2-output: closing payload chunk
131 137 bundle2-output: bundle part: "pushkey"
132 138 bundle2-output-part: "pushkey" (params: 4 mandatory) empty payload
133 bundle2-output: part 2: "PUSHKEY"
139 bundle2-output: part 3: "PUSHKEY"
134 140 bundle2-output: header chunk size: 90
135 141 bundle2-output: closing payload chunk
136 142 bundle2-output: end of bundle
137 143 bundle2-input: start processing of HG20 stream
138 144 bundle2-input: reading bundle2 stream parameters
139 145 bundle2-input-bundle: with-transaction
140 146 bundle2-input: start extraction of bundle2 parts
141 147 bundle2-input: part header size: 16
142 148 bundle2-input: part type: "REPLYCAPS"
143 149 bundle2-input: part id: "0"
144 150 bundle2-input: part parameters: 0
145 151 bundle2-input: found a handler for part replycaps
146 152 bundle2-input-part: "replycaps" supported
147 bundle2-input: payload chunk size: 185
153 bundle2-input: payload chunk size: 195
148 154 bundle2-input: payload chunk size: 0
149 bundle2-input-part: total payload size 185
155 bundle2-input-part: total payload size 195
156 bundle2-input: part header size: 22
157 bundle2-input: part type: "CHECK:BOOKMARKS"
158 bundle2-input: part id: "1"
159 bundle2-input: part parameters: 0
160 bundle2-input: found a handler for part check:bookmarks
161 bundle2-input-part: "check:bookmarks" supported
162 bundle2-input: payload chunk size: 23
163 bundle2-input: payload chunk size: 0
164 bundle2-input-part: total payload size 23
150 165 bundle2-input: part header size: 19
151 166 bundle2-input: part type: "CHECK:PHASES"
152 bundle2-input: part id: "1"
167 bundle2-input: part id: "2"
153 168 bundle2-input: part parameters: 0
154 169 bundle2-input: found a handler for part check:phases
155 170 bundle2-input-part: "check:phases" supported
156 171 bundle2-input: payload chunk size: 48
157 172 bundle2-input: payload chunk size: 0
158 173 bundle2-input-part: total payload size 48
159 174 bundle2-input: part header size: 90
160 175 bundle2-input: part type: "PUSHKEY"
161 bundle2-input: part id: "2"
176 bundle2-input: part id: "3"
162 177 bundle2-input: part parameters: 4
163 178 bundle2-input: found a handler for part pushkey
164 179 bundle2-input-part: "pushkey" (params: 4 mandatory) supported
165 180 pushing key for "bookmarks:W"
166 181 bundle2-input: payload chunk size: 0
167 182 bundle2-input: part header size: 0
168 183 bundle2-input: end of bundle2 stream
169 bundle2-input-bundle: 2 parts total
184 bundle2-input-bundle: 3 parts total
170 185 running hook txnclose-bookmark.test: sh $TESTTMP/hook.sh
171 186 test-hook-bookmark: W: 0000000000000000000000000000000000000000 ->
172 187 bundle2-output-bundle: "HG20", 1 parts total
173 188 bundle2-output: start emission of HG20 stream
174 189 bundle2-output: bundle parameter:
175 190 bundle2-output: start of parts
176 191 bundle2-output: bundle part: "reply:pushkey"
177 192 bundle2-output-part: "reply:pushkey" (params: 0 advisory) empty payload
178 193 bundle2-output: part 0: "REPLY:PUSHKEY"
179 194 bundle2-output: header chunk size: 43
180 195 bundle2-output: closing payload chunk
181 196 bundle2-output: end of bundle
182 197 bundle2-input: start processing of HG20 stream
183 198 bundle2-input: reading bundle2 stream parameters
184 199 bundle2-input-bundle: no-transaction
185 200 bundle2-input: start extraction of bundle2 parts
186 201 bundle2-input: part header size: 43
187 202 bundle2-input: part type: "REPLY:PUSHKEY"
188 203 bundle2-input: part id: "0"
189 204 bundle2-input: part parameters: 2
190 205 bundle2-input: found a handler for part reply:pushkey
191 206 bundle2-input-part: "reply:pushkey" (params: 0 advisory) supported
192 207 bundle2-input: payload chunk size: 0
193 208 bundle2-input: part header size: 0
194 209 bundle2-input: end of bundle2 stream
195 210 bundle2-input-bundle: 0 parts total
196 211 deleting remote bookmark W
197 212 listing keys for "phases"
198 213 [1]
199 214
200 215 export the active bookmark
201 216
202 217 $ hg bookmark V
203 218 $ hg push -B . ../a
204 219 pushing to ../a
205 220 searching for changes
206 221 no changes found
207 222 exporting bookmark V
208 223 [1]
209 224
210 225 exporting the active bookmark with 'push -B .'
211 226 demands that one of the bookmarks is activated
212 227
213 228 $ hg update -r default
214 229 1 files updated, 0 files merged, 0 files removed, 0 files unresolved
215 230 (leaving bookmark V)
216 231 $ hg push -B . ../a
217 232 abort: no active bookmark
218 233 [255]
219 234 $ hg update -r V
220 235 0 files updated, 0 files merged, 1 files removed, 0 files unresolved
221 236 (activating bookmark V)
222 237
223 238 delete the bookmark
224 239
225 240 $ hg book -d V
226 241 $ hg push -B V ../a
227 242 pushing to ../a
228 243 searching for changes
229 244 no changes found
230 245 deleting remote bookmark V
231 246 [1]
232 247 $ hg up foobar
233 248 0 files updated, 0 files merged, 0 files removed, 0 files unresolved
234 249 (activating bookmark foobar)
235 250
236 251 push/pull name that doesn't exist
237 252
238 253 $ hg push -B badname ../a
239 254 pushing to ../a
240 255 searching for changes
241 256 bookmark badname does not exist on the local or remote repository!
242 257 no changes found
243 258 [2]
244 259 $ hg pull -B anotherbadname ../a
245 260 pulling from ../a
246 261 abort: remote bookmark anotherbadname not found!
247 262 [255]
248 263
249 264 divergent bookmarks
250 265
251 266 $ cd ../a
252 267 $ echo c1 > f1
253 268 $ hg ci -Am1
254 269 adding f1
255 270 $ hg book -f @
256 271 $ hg book -f X
257 272 $ hg book
258 273 @ 1:0d2164f0ce0d
259 274 * X 1:0d2164f0ce0d
260 275 Y 0:4e3505fd9583
261 276 Z 1:0d2164f0ce0d
262 277
263 278 $ cd ../b
264 279 $ hg up
265 280 1 files updated, 0 files merged, 0 files removed, 0 files unresolved
266 281 updating bookmark foobar
267 282 $ echo c2 > f2
268 283 $ hg ci -Am2
269 284 adding f2
270 285 $ hg book -if @
271 286 $ hg book -if X
272 287 $ hg book
273 288 @ 1:9b140be10808
274 289 X 1:9b140be10808
275 290 Y 0:4e3505fd9583
276 291 Z 0:4e3505fd9583
277 292 foo -1:000000000000
278 293 * foobar 1:9b140be10808
279 294
280 295 $ hg pull --config paths.foo=../a foo --config "$TESTHOOK"
281 296 pulling from $TESTTMP/a (glob)
282 297 searching for changes
283 298 adding changesets
284 299 adding manifests
285 300 adding file changes
286 301 added 1 changesets with 1 changes to 1 files (+1 heads)
287 302 divergent bookmark @ stored as @foo
288 303 divergent bookmark X stored as X@foo
289 304 updating bookmark Z
290 305 new changesets 0d2164f0ce0d
291 306 test-hook-bookmark: @foo: -> 0d2164f0ce0d8f1d6f94351eba04b794909be66c
292 307 test-hook-bookmark: X@foo: -> 0d2164f0ce0d8f1d6f94351eba04b794909be66c
293 308 test-hook-bookmark: Z: 4e3505fd95835d721066b76e75dbb8cc554d7f77 -> 0d2164f0ce0d8f1d6f94351eba04b794909be66c
294 309 (run 'hg heads' to see heads, 'hg merge' to merge)
295 310 $ hg book
296 311 @ 1:9b140be10808
297 312 @foo 2:0d2164f0ce0d
298 313 X 1:9b140be10808
299 314 X@foo 2:0d2164f0ce0d
300 315 Y 0:4e3505fd9583
301 316 Z 2:0d2164f0ce0d
302 317 foo -1:000000000000
303 318 * foobar 1:9b140be10808
304 319
305 320 (test that too many divergent bookmarks are handled)
306 321
307 322 $ $PYTHON $TESTDIR/seq.py 1 100 | while read i; do hg bookmarks -r 000000000000 "X@${i}"; done
308 323 $ hg pull ../a
309 324 pulling from ../a
310 325 searching for changes
311 326 no changes found
312 327 warning: failed to assign numbered name to divergent bookmark X
313 328 divergent bookmark @ stored as @1
314 329 $ hg bookmarks | grep '^ X' | grep -v ':000000000000'
315 330 X 1:9b140be10808
316 331 X@foo 2:0d2164f0ce0d
317 332
318 333 (test that remotely diverged bookmarks are reused if they aren't changed)
319 334
320 335 $ hg bookmarks | grep '^ @'
321 336 @ 1:9b140be10808
322 337 @1 2:0d2164f0ce0d
323 338 @foo 2:0d2164f0ce0d
324 339 $ hg pull ../a
325 340 pulling from ../a
326 341 searching for changes
327 342 no changes found
328 343 warning: failed to assign numbered name to divergent bookmark X
329 344 divergent bookmark @ stored as @1
330 345 $ hg bookmarks | grep '^ @'
331 346 @ 1:9b140be10808
332 347 @1 2:0d2164f0ce0d
333 348 @foo 2:0d2164f0ce0d
334 349
335 350 $ $PYTHON $TESTDIR/seq.py 1 100 | while read i; do hg bookmarks -d "X@${i}"; done
336 351 $ hg bookmarks -d "@1"
337 352
338 353 $ hg push -f ../a
339 354 pushing to ../a
340 355 searching for changes
341 356 adding changesets
342 357 adding manifests
343 358 adding file changes
344 359 added 1 changesets with 1 changes to 1 files (+1 heads)
345 360 $ hg -R ../a book
346 361 @ 1:0d2164f0ce0d
347 362 * X 1:0d2164f0ce0d
348 363 Y 0:4e3505fd9583
349 364 Z 1:0d2164f0ce0d
350 365
351 366 explicit pull should overwrite the local version (issue4439)
352 367
353 368 $ hg update -r X
354 369 0 files updated, 0 files merged, 0 files removed, 0 files unresolved
355 370 (activating bookmark X)
356 371 $ hg pull --config paths.foo=../a foo -B . --config "$TESTHOOK"
357 372 pulling from $TESTTMP/a (glob)
358 373 no changes found
359 374 divergent bookmark @ stored as @foo
360 375 importing bookmark X
361 376 test-hook-bookmark: @foo: 0d2164f0ce0d8f1d6f94351eba04b794909be66c -> 0d2164f0ce0d8f1d6f94351eba04b794909be66c
362 377 test-hook-bookmark: X: 9b140be1080824d768c5a4691a564088eede71f9 -> 0d2164f0ce0d8f1d6f94351eba04b794909be66c
363 378
364 379 reinstall state for further testing:
365 380
366 381 $ hg book -fr 9b140be10808 X
367 382
368 383 revsets should not ignore divergent bookmarks
369 384
370 385 $ hg bookmark -fr 1 Z
371 386 $ hg log -r 'bookmark()' --template '{rev}:{node|short} {bookmarks}\n'
372 387 0:4e3505fd9583 Y
373 388 1:9b140be10808 @ X Z foobar
374 389 2:0d2164f0ce0d @foo X@foo
375 390 $ hg log -r 'bookmark("X@foo")' --template '{rev}:{node|short} {bookmarks}\n'
376 391 2:0d2164f0ce0d @foo X@foo
377 392 $ hg log -r 'bookmark("re:X@foo")' --template '{rev}:{node|short} {bookmarks}\n'
378 393 2:0d2164f0ce0d @foo X@foo
379 394
380 395 update a remote bookmark from a non-head to a head
381 396
382 397 $ hg up -q Y
383 398 $ echo c3 > f2
384 399 $ hg ci -Am3
385 400 adding f2
386 401 created new head
387 402 $ hg push ../a --config "$TESTHOOK"
388 403 pushing to ../a
389 404 searching for changes
390 405 adding changesets
391 406 adding manifests
392 407 adding file changes
393 408 added 1 changesets with 1 changes to 1 files (+1 heads)
394 409 test-hook-bookmark: Y: 4e3505fd95835d721066b76e75dbb8cc554d7f77 -> f6fc62dde3c0771e29704af56ba4d8af77abcc2f
395 410 updating bookmark Y
396 411 $ hg -R ../a book
397 412 @ 1:0d2164f0ce0d
398 413 * X 1:0d2164f0ce0d
399 414 Y 3:f6fc62dde3c0
400 415 Z 1:0d2164f0ce0d
401 416
402 417 update a bookmark in the middle of a client pulling changes
403 418
404 419 $ cd ..
405 420 $ hg clone -q a pull-race
406 421
407 422 We want to use http because it is stateless and therefore more susceptible to
408 423 race conditions
409 424
410 425 $ hg serve -R pull-race -p $HGPORT -d --pid-file=pull-race.pid -E main-error.log
411 426 $ cat pull-race.pid >> $DAEMON_PIDS
412 427
413 428 $ cat <<EOF > $TESTTMP/out_makecommit.sh
414 429 > #!/bin/sh
415 430 > hg ci -Am5
416 431 > echo committed in pull-race
417 432 > EOF
418 433
419 434 $ hg clone -q http://localhost:$HGPORT/ pull-race2 --config "$TESTHOOK"
420 435 test-hook-bookmark: @: -> 0d2164f0ce0d8f1d6f94351eba04b794909be66c
421 436 test-hook-bookmark: X: -> 0d2164f0ce0d8f1d6f94351eba04b794909be66c
422 437 test-hook-bookmark: Y: -> f6fc62dde3c0771e29704af56ba4d8af77abcc2f
423 438 test-hook-bookmark: Z: -> 0d2164f0ce0d8f1d6f94351eba04b794909be66c
424 439 $ cd pull-race
425 440 $ hg up -q Y
426 441 $ echo c4 > f2
427 442 $ hg ci -Am4
428 443 $ echo c5 > f3
429 444 $ cat <<EOF > .hg/hgrc
430 445 > [hooks]
431 446 > outgoing.makecommit = sh $TESTTMP/out_makecommit.sh
432 447 > EOF
433 448
434 449 (new config needs a server restart)
435 450
436 451 $ cd ..
437 452 $ killdaemons.py
438 453 $ hg serve -R pull-race -p $HGPORT -d --pid-file=pull-race.pid -E main-error.log
439 454 $ cat pull-race.pid >> $DAEMON_PIDS
440 455 $ cd pull-race2
441 456 $ hg -R $TESTTMP/pull-race book
442 457 @ 1:0d2164f0ce0d
443 458 X 1:0d2164f0ce0d
444 459 * Y 4:b0a5eff05604
445 460 Z 1:0d2164f0ce0d
446 461 $ hg pull
447 462 pulling from http://localhost:$HGPORT/
448 463 searching for changes
449 464 adding changesets
450 465 adding manifests
451 466 adding file changes
452 467 added 1 changesets with 1 changes to 1 files
453 468 updating bookmark Y
454 469 new changesets b0a5eff05604
455 470 (run 'hg update' to get a working copy)
456 471 $ hg book
457 472 * @ 1:0d2164f0ce0d
458 473 X 1:0d2164f0ce0d
459 474 Y 4:b0a5eff05604
460 475 Z 1:0d2164f0ce0d
461 476
462 477 Update a bookmark right after the initial lookup -B (issue4689)
463 478
464 479 $ echo c6 > ../pull-race/f3 # to be committed during the race
465 480 $ cat <<EOF > $TESTTMP/listkeys_makecommit.sh
466 481 > #!/bin/sh
467 482 > if hg st | grep -q M; then
468 483 > hg commit -m race
469 484 > echo committed in pull-race
470 485 > else
471 486 > exit 0
472 487 > fi
473 488 > EOF
474 489 $ cat <<EOF > ../pull-race/.hg/hgrc
475 490 > [hooks]
476 491 > # If anything to commit, commit it right after the first key listing used
477 492 > # during lookup. This makes the commit appear before the actual getbundle
478 493 > # call.
479 494 > listkeys.makecommit= sh $TESTTMP/listkeys_makecommit.sh
480 495 > EOF
481 496
482 497 (new config needs a server restart)
483 498
484 499 $ killdaemons.py
485 500 $ hg serve -R ../pull-race -p $HGPORT -d --pid-file=../pull-race.pid -E main-error.log
486 501 $ cat ../pull-race.pid >> $DAEMON_PIDS
487 502
488 503 $ hg -R $TESTTMP/pull-race book
489 504 @ 1:0d2164f0ce0d
490 505 X 1:0d2164f0ce0d
491 506 * Y 5:35d1ef0a8d1b
492 507 Z 1:0d2164f0ce0d
493 508 $ hg update -r Y
494 509 1 files updated, 0 files merged, 1 files removed, 0 files unresolved
495 510 (activating bookmark Y)
496 511 $ hg pull -B .
497 512 pulling from http://localhost:$HGPORT/
498 513 searching for changes
499 514 adding changesets
500 515 adding manifests
501 516 adding file changes
502 517 added 1 changesets with 1 changes to 1 files
503 518 updating bookmark Y
504 519 new changesets 35d1ef0a8d1b
505 520 (run 'hg update' to get a working copy)
506 521 $ hg book
507 522 @ 1:0d2164f0ce0d
508 523 X 1:0d2164f0ce0d
509 524 * Y 5:35d1ef0a8d1b
510 525 Z 1:0d2164f0ce0d
511 526
512 527 (done with this section of the test)
513 528
514 529 $ killdaemons.py
515 530 $ cd ../b
516 531
517 532 diverging a remote bookmark fails
518 533
519 534 $ hg up -q 4e3505fd9583
520 535 $ echo c4 > f2
521 536 $ hg ci -Am4
522 537 adding f2
523 538 created new head
524 539 $ echo c5 > f2
525 540 $ hg ci -Am5
526 541 $ hg log -G
527 542 @ 5:c922c0139ca0 5
528 543 |
529 544 o 4:4efff6d98829 4
530 545 |
531 546 | o 3:f6fc62dde3c0 3
532 547 |/
533 548 | o 2:0d2164f0ce0d 1
534 549 |/
535 550 | o 1:9b140be10808 2
536 551 |/
537 552 o 0:4e3505fd9583 test
538 553
539 554
540 555 $ hg book -f Y
541 556
542 557 $ cat <<EOF > ../a/.hg/hgrc
543 558 > [web]
544 559 > push_ssl = false
545 560 > allow_push = *
546 561 > EOF
547 562
548 563 $ hg serve -R ../a -p $HGPORT2 -d --pid-file=../hg2.pid
549 564 $ cat ../hg2.pid >> $DAEMON_PIDS
550 565
551 566 $ hg push http://localhost:$HGPORT2/
552 567 pushing to http://localhost:$HGPORT2/
553 568 searching for changes
554 569 abort: push creates new remote head c922c0139ca0 with bookmark 'Y'!
555 570 (merge or see 'hg help push' for details about pushing new heads)
556 571 [255]
557 572 $ hg -R ../a book
558 573 @ 1:0d2164f0ce0d
559 574 * X 1:0d2164f0ce0d
560 575 Y 3:f6fc62dde3c0
561 576 Z 1:0d2164f0ce0d
562 577
563 578
564 579 Unrelated marker does not alter the decision
565 580
566 581 $ hg debugobsolete aaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa bbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbb
567 582 $ hg push http://localhost:$HGPORT2/
568 583 pushing to http://localhost:$HGPORT2/
569 584 searching for changes
570 585 abort: push creates new remote head c922c0139ca0 with bookmark 'Y'!
571 586 (merge or see 'hg help push' for details about pushing new heads)
572 587 [255]
573 588 $ hg -R ../a book
574 589 @ 1:0d2164f0ce0d
575 590 * X 1:0d2164f0ce0d
576 591 Y 3:f6fc62dde3c0
577 592 Z 1:0d2164f0ce0d
578 593
579 594 Update to a successor works
580 595
581 596 $ hg id --debug -r 3
582 597 f6fc62dde3c0771e29704af56ba4d8af77abcc2f
583 598 $ hg id --debug -r 4
584 599 4efff6d98829d9c824c621afd6e3f01865f5439f
585 600 $ hg id --debug -r 5
586 601 c922c0139ca03858f655e4a2af4dd02796a63969 tip Y
587 602 $ hg debugobsolete f6fc62dde3c0771e29704af56ba4d8af77abcc2f cccccccccccccccccccccccccccccccccccccccc
588 603 obsoleted 1 changesets
589 604 $ hg debugobsolete cccccccccccccccccccccccccccccccccccccccc 4efff6d98829d9c824c621afd6e3f01865f5439f
590 605 $ hg push http://localhost:$HGPORT2/
591 606 pushing to http://localhost:$HGPORT2/
592 607 searching for changes
593 608 remote: adding changesets
594 609 remote: adding manifests
595 610 remote: adding file changes
596 611 remote: added 2 changesets with 2 changes to 1 files (+1 heads)
597 612 remote: 2 new obsolescence markers
598 613 remote: obsoleted 1 changesets
599 614 updating bookmark Y
600 615 $ hg -R ../a book
601 616 @ 1:0d2164f0ce0d
602 617 * X 1:0d2164f0ce0d
603 618 Y 5:c922c0139ca0
604 619 Z 1:0d2164f0ce0d
605 620
606 621 hgweb
607 622
608 623 $ cat <<EOF > .hg/hgrc
609 624 > [web]
610 625 > push_ssl = false
611 626 > allow_push = *
612 627 > EOF
613 628
614 629 $ hg serve -p $HGPORT -d --pid-file=../hg.pid -E errors.log
615 630 $ cat ../hg.pid >> $DAEMON_PIDS
616 631 $ cd ../a
617 632
618 633 $ hg debugpushkey http://localhost:$HGPORT/ namespaces
619 634 bookmarks
620 635 namespaces
621 636 obsolete
622 637 phases
623 638 $ hg debugpushkey http://localhost:$HGPORT/ bookmarks
624 639 @ 9b140be1080824d768c5a4691a564088eede71f9
625 640 X 9b140be1080824d768c5a4691a564088eede71f9
626 641 Y c922c0139ca03858f655e4a2af4dd02796a63969
627 642 Z 9b140be1080824d768c5a4691a564088eede71f9
628 643 foo 0000000000000000000000000000000000000000
629 644 foobar 9b140be1080824d768c5a4691a564088eede71f9
630 645 $ hg out -B http://localhost:$HGPORT/
631 646 comparing with http://localhost:$HGPORT/
632 647 searching for changed bookmarks
633 648 @ 0d2164f0ce0d
634 649 X 0d2164f0ce0d
635 650 Z 0d2164f0ce0d
636 651 foo
637 652 foobar
638 653 $ hg push -B Z http://localhost:$HGPORT/
639 654 pushing to http://localhost:$HGPORT/
640 655 searching for changes
641 656 no changes found
642 657 updating bookmark Z
643 658 [1]
644 659 $ hg book -d Z
645 660 $ hg in -B http://localhost:$HGPORT/
646 661 comparing with http://localhost:$HGPORT/
647 662 searching for changed bookmarks
648 663 @ 9b140be10808
649 664 X 9b140be10808
650 665 Z 0d2164f0ce0d
651 666 foo 000000000000
652 667 foobar 9b140be10808
653 668 $ hg pull -B Z http://localhost:$HGPORT/
654 669 pulling from http://localhost:$HGPORT/
655 670 no changes found
656 671 divergent bookmark @ stored as @1
657 672 divergent bookmark X stored as X@1
658 673 adding remote bookmark Z
659 674 adding remote bookmark foo
660 675 adding remote bookmark foobar
661 676 $ hg clone http://localhost:$HGPORT/ cloned-bookmarks
662 677 requesting all changes
663 678 adding changesets
664 679 adding manifests
665 680 adding file changes
666 681 added 5 changesets with 5 changes to 3 files (+2 heads)
667 682 2 new obsolescence markers
668 683 new changesets 4e3505fd9583:c922c0139ca0
669 684 updating to bookmark @
670 685 2 files updated, 0 files merged, 0 files removed, 0 files unresolved
671 686 $ hg -R cloned-bookmarks bookmarks
672 687 * @ 1:9b140be10808
673 688 X 1:9b140be10808
674 689 Y 4:c922c0139ca0
675 690 Z 2:0d2164f0ce0d
676 691 foo -1:000000000000
677 692 foobar 1:9b140be10808
678 693
679 694 $ cd ..
680 695
681 696 Test to show result of bookmarks comparison
682 697
683 698 $ mkdir bmcomparison
684 699 $ cd bmcomparison
685 700
686 701 $ hg init source
687 702 $ hg -R source debugbuilddag '+2*2*3*4'
688 703 $ hg -R source log -G --template '{rev}:{node|short}'
689 704 o 4:e7bd5218ca15
690 705 |
691 706 | o 3:6100d3090acf
692 707 |/
693 708 | o 2:fa942426a6fd
694 709 |/
695 710 | o 1:66f7d451a68b
696 711 |/
697 712 o 0:1ea73414a91b
698 713
699 714 $ hg -R source bookmarks -r 0 SAME
700 715 $ hg -R source bookmarks -r 0 ADV_ON_REPO1
701 716 $ hg -R source bookmarks -r 0 ADV_ON_REPO2
702 717 $ hg -R source bookmarks -r 0 DIFF_ADV_ON_REPO1
703 718 $ hg -R source bookmarks -r 0 DIFF_ADV_ON_REPO2
704 719 $ hg -R source bookmarks -r 1 DIVERGED
705 720
706 721 $ hg clone -U source repo1
707 722
708 723 (test that incoming/outgoing exit with 1 if there is no bookmark to
709 724 be exchanged)
710 725
711 726 $ hg -R repo1 incoming -B
712 727 comparing with $TESTTMP/bmcomparison/source (glob)
713 728 searching for changed bookmarks
714 729 no changed bookmarks found
715 730 [1]
716 731 $ hg -R repo1 outgoing -B
717 732 comparing with $TESTTMP/bmcomparison/source (glob)
718 733 searching for changed bookmarks
719 734 no changed bookmarks found
720 735 [1]
721 736
722 737 $ hg -R repo1 bookmarks -f -r 1 ADD_ON_REPO1
723 738 $ hg -R repo1 bookmarks -f -r 2 ADV_ON_REPO1
724 739 $ hg -R repo1 bookmarks -f -r 3 DIFF_ADV_ON_REPO1
725 740 $ hg -R repo1 bookmarks -f -r 3 DIFF_DIVERGED
726 741 $ hg -R repo1 -q --config extensions.mq= strip 4
727 742 $ hg -R repo1 log -G --template '{node|short} ({bookmarks})'
728 743 o 6100d3090acf (DIFF_ADV_ON_REPO1 DIFF_DIVERGED)
729 744 |
730 745 | o fa942426a6fd (ADV_ON_REPO1)
731 746 |/
732 747 | o 66f7d451a68b (ADD_ON_REPO1 DIVERGED)
733 748 |/
734 749 o 1ea73414a91b (ADV_ON_REPO2 DIFF_ADV_ON_REPO2 SAME)
735 750
736 751
737 752 $ hg clone -U source repo2
738 753 $ hg -R repo2 bookmarks -f -r 1 ADD_ON_REPO2
739 754 $ hg -R repo2 bookmarks -f -r 1 ADV_ON_REPO2
740 755 $ hg -R repo2 bookmarks -f -r 2 DIVERGED
741 756 $ hg -R repo2 bookmarks -f -r 4 DIFF_ADV_ON_REPO2
742 757 $ hg -R repo2 bookmarks -f -r 4 DIFF_DIVERGED
743 758 $ hg -R repo2 -q --config extensions.mq= strip 3
744 759 $ hg -R repo2 log -G --template '{node|short} ({bookmarks})'
745 760 o e7bd5218ca15 (DIFF_ADV_ON_REPO2 DIFF_DIVERGED)
746 761 |
747 762 | o fa942426a6fd (DIVERGED)
748 763 |/
749 764 | o 66f7d451a68b (ADD_ON_REPO2 ADV_ON_REPO2)
750 765 |/
751 766 o 1ea73414a91b (ADV_ON_REPO1 DIFF_ADV_ON_REPO1 SAME)
752 767
753 768
754 769 (test that differences in bookmarks between repositories are fully shown)
755 770
756 771 $ hg -R repo1 incoming -B repo2 -v
757 772 comparing with repo2
758 773 searching for changed bookmarks
759 774 ADD_ON_REPO2 66f7d451a68b added
760 775 ADV_ON_REPO2 66f7d451a68b advanced
761 776 DIFF_ADV_ON_REPO2 e7bd5218ca15 changed
762 777 DIFF_DIVERGED e7bd5218ca15 changed
763 778 DIVERGED fa942426a6fd diverged
764 779 $ hg -R repo1 outgoing -B repo2 -v
765 780 comparing with repo2
766 781 searching for changed bookmarks
767 782 ADD_ON_REPO1 66f7d451a68b added
768 783 ADD_ON_REPO2 deleted
769 784 ADV_ON_REPO1 fa942426a6fd advanced
770 785 DIFF_ADV_ON_REPO1 6100d3090acf advanced
771 786 DIFF_ADV_ON_REPO2 1ea73414a91b changed
772 787 DIFF_DIVERGED 6100d3090acf changed
773 788 DIVERGED 66f7d451a68b diverged
774 789
775 790 $ hg -R repo2 incoming -B repo1 -v
776 791 comparing with repo1
777 792 searching for changed bookmarks
778 793 ADD_ON_REPO1 66f7d451a68b added
779 794 ADV_ON_REPO1 fa942426a6fd advanced
780 795 DIFF_ADV_ON_REPO1 6100d3090acf changed
781 796 DIFF_DIVERGED 6100d3090acf changed
782 797 DIVERGED 66f7d451a68b diverged
783 798 $ hg -R repo2 outgoing -B repo1 -v
784 799 comparing with repo1
785 800 searching for changed bookmarks
786 801 ADD_ON_REPO1 deleted
787 802 ADD_ON_REPO2 66f7d451a68b added
788 803 ADV_ON_REPO2 66f7d451a68b advanced
789 804 DIFF_ADV_ON_REPO1 1ea73414a91b changed
790 805 DIFF_ADV_ON_REPO2 e7bd5218ca15 advanced
791 806 DIFF_DIVERGED e7bd5218ca15 changed
792 807 DIVERGED fa942426a6fd diverged
793 808
794 809 $ cd ..
795 810
796 811 Pushing a bookmark should only push the changes required by that
797 812 bookmark, not all outgoing changes:
798 813 $ hg clone http://localhost:$HGPORT/ addmarks
799 814 requesting all changes
800 815 adding changesets
801 816 adding manifests
802 817 adding file changes
803 818 added 5 changesets with 5 changes to 3 files (+2 heads)
804 819 2 new obsolescence markers
805 820 new changesets 4e3505fd9583:c922c0139ca0
806 821 updating to bookmark @
807 822 2 files updated, 0 files merged, 0 files removed, 0 files unresolved
808 823 $ cd addmarks
809 824 $ echo foo > foo
810 825 $ hg add foo
811 826 $ hg commit -m 'add foo'
812 827 $ echo bar > bar
813 828 $ hg add bar
814 829 $ hg commit -m 'add bar'
815 830 $ hg co "tip^"
816 831 0 files updated, 0 files merged, 1 files removed, 0 files unresolved
817 832 (leaving bookmark @)
818 833 $ hg book add-foo
819 834 $ hg book -r tip add-bar
820 835 Note: this push *must* push only a single changeset, as that's the point
821 836 of this test.
822 837 $ hg push -B add-foo --traceback
823 838 pushing to http://localhost:$HGPORT/
824 839 searching for changes
825 840 remote: adding changesets
826 841 remote: adding manifests
827 842 remote: adding file changes
828 843 remote: added 1 changesets with 1 changes to 1 files
829 844 exporting bookmark add-foo
830 845
831 846 pushing a new bookmark on a new head does not require -f if -B is specified
832 847
833 848 $ hg up -q X
834 849 $ hg book W
835 850 $ echo c5 > f2
836 851 $ hg ci -Am5
837 852 created new head
838 853 $ hg push -B .
839 854 pushing to http://localhost:$HGPORT/
840 855 searching for changes
841 856 remote: adding changesets
842 857 remote: adding manifests
843 858 remote: adding file changes
844 859 remote: added 1 changesets with 1 changes to 1 files (+1 heads)
845 860 exporting bookmark W
846 861 $ hg -R ../b id -r W
847 862 cc978a373a53 tip W
848 863
849 864 pushing an existing but divergent bookmark with -B still requires -f
850 865
851 866 $ hg clone -q . ../r
852 867 $ hg up -q X
853 868 $ echo 1 > f2
854 869 $ hg ci -qAml
855 870
856 871 $ cd ../r
857 872 $ hg up -q X
858 873 $ echo 2 > f2
859 874 $ hg ci -qAmr
860 875 $ hg push -B X
861 876 pushing to $TESTTMP/addmarks (glob)
862 877 searching for changes
863 878 remote has heads on branch 'default' that are not known locally: a2a606d9ff1b
864 879 abort: push creates new remote head 54694f811df9 with bookmark 'X'!
865 880 (pull and merge or see 'hg help push' for details about pushing new heads)
866 881 [255]
867 882 $ cd ../addmarks
868 883
869 884 Check summary output for incoming/outgoing bookmarks
870 885
871 886 $ hg bookmarks -d X
872 887 $ hg bookmarks -d Y
873 888 $ hg summary --remote | grep '^remote:'
874 889 remote: *, 2 incoming bookmarks, 1 outgoing bookmarks (glob)
875 890
876 891 $ cd ..
877 892
878 893 pushing an unchanged bookmark should result in no changes
879 894
880 895 $ hg init unchanged-a
881 896 $ hg init unchanged-b
882 897 $ cd unchanged-a
883 898 $ echo initial > foo
884 899 $ hg commit -A -m initial
885 900 adding foo
886 901 $ hg bookmark @
887 902 $ hg push -B @ ../unchanged-b
888 903 pushing to ../unchanged-b
889 904 searching for changes
890 905 adding changesets
891 906 adding manifests
892 907 adding file changes
893 908 added 1 changesets with 1 changes to 1 files
894 909 exporting bookmark @
895 910
896 911 $ hg push -B @ ../unchanged-b
897 912 pushing to ../unchanged-b
898 913 searching for changes
899 914 no changes found
900 915 [1]
901 916
902 917
903 918 Check hook preventing push (issue4455)
904 919 ======================================
905 920
906 921 $ hg bookmarks
907 922 * @ 0:55482a6fb4b1
908 923 $ hg log -G
909 924 @ 0:55482a6fb4b1 initial
910 925
911 926 $ hg init ../issue4455-dest
912 927 $ hg push ../issue4455-dest # changesets only
913 928 pushing to ../issue4455-dest
914 929 searching for changes
915 930 adding changesets
916 931 adding manifests
917 932 adding file changes
918 933 added 1 changesets with 1 changes to 1 files
919 934 $ cat >> .hg/hgrc << EOF
920 935 > [paths]
921 936 > local=../issue4455-dest/
922 937 > ssh=ssh://user@dummy/issue4455-dest
923 938 > http=http://localhost:$HGPORT/
924 939 > [ui]
925 940 > ssh=$PYTHON "$TESTDIR/dummyssh"
926 941 > EOF
927 942 $ cat >> ../issue4455-dest/.hg/hgrc << EOF
928 943 > [hooks]
929 944 > prepushkey=false
930 945 > [web]
931 946 > push_ssl = false
932 947 > allow_push = *
933 948 > EOF
934 949 $ killdaemons.py
935 950 $ hg serve -R ../issue4455-dest -p $HGPORT -d --pid-file=../issue4455.pid -E ../issue4455-error.log
936 951 $ cat ../issue4455.pid >> $DAEMON_PIDS
937 952
938 953 Local push
939 954 ----------
940 955
941 956 $ hg push -B @ local
942 957 pushing to $TESTTMP/issue4455-dest (glob)
943 958 searching for changes
944 959 no changes found
945 960 pushkey-abort: prepushkey hook exited with status 1
946 961 abort: exporting bookmark @ failed!
947 962 [255]
948 963 $ hg -R ../issue4455-dest/ bookmarks
949 964 no bookmarks set
950 965
951 966 Using ssh
952 967 ---------
953 968
954 969 $ hg push -B @ ssh # bundle2+
955 970 pushing to ssh://user@dummy/issue4455-dest
956 971 searching for changes
957 972 no changes found
958 973 remote: pushkey-abort: prepushkey hook exited with status 1
959 974 abort: exporting bookmark @ failed!
960 975 [255]
961 976 $ hg -R ../issue4455-dest/ bookmarks
962 977 no bookmarks set
963 978
964 979 $ hg push -B @ ssh --config devel.legacy.exchange=bundle1
965 980 pushing to ssh://user@dummy/issue4455-dest
966 981 searching for changes
967 982 no changes found
968 983 remote: pushkey-abort: prepushkey hook exited with status 1
969 984 exporting bookmark @ failed!
970 985 [1]
971 986 $ hg -R ../issue4455-dest/ bookmarks
972 987 no bookmarks set
973 988
974 989 Using http
975 990 ----------
976 991
977 992 $ hg push -B @ http # bundle2+
978 993 pushing to http://localhost:$HGPORT/
979 994 searching for changes
980 995 no changes found
981 996 remote: pushkey-abort: prepushkey hook exited with status 1
982 997 abort: exporting bookmark @ failed!
983 998 [255]
984 999 $ hg -R ../issue4455-dest/ bookmarks
985 1000 no bookmarks set
986 1001
987 1002 $ hg push -B @ http --config devel.legacy.exchange=bundle1
988 1003 pushing to http://localhost:$HGPORT/
989 1004 searching for changes
990 1005 no changes found
991 1006 remote: pushkey-abort: prepushkey hook exited with status 1
992 1007 exporting bookmark @ failed!
993 1008 [1]
994 1009 $ hg -R ../issue4455-dest/ bookmarks
995 1010 no bookmarks set
@@ -1,226 +1,227 b''
1 1 $ cat << EOF >> $HGRCPATH
2 2 > [format]
3 3 > usegeneraldelta=yes
4 4 > EOF
5 5
6 6 $ hg init debugrevlog
7 7 $ cd debugrevlog
8 8 $ echo a > a
9 9 $ hg ci -Am adda
10 10 adding a
11 11 $ hg debugrevlog -m
12 12 format : 1
13 13 flags : inline, generaldelta
14 14
15 15 revisions : 1
16 16 merges : 0 ( 0.00%)
17 17 normal : 1 (100.00%)
18 18 revisions : 1
19 19 full : 1 (100.00%)
20 20 deltas : 0 ( 0.00%)
21 21 revision size : 44
22 22 full : 44 (100.00%)
23 23 deltas : 0 ( 0.00%)
24 24
25 25 chunks : 1
26 26 0x75 (u) : 1 (100.00%)
27 27 chunks size : 44
28 28 0x75 (u) : 44 (100.00%)
29 29
30 30 avg chain length : 0
31 31 max chain length : 0
32 32 max chain reach : 44
33 33 compression ratio : 0
34 34
35 35 uncompressed data size (min/max/avg) : 43 / 43 / 43
36 36 full revision size (min/max/avg) : 44 / 44 / 44
37 37 delta size (min/max/avg) : 0 / 0 / 0
38 38
39 39 Test debugindex, with and without the --debug flag
40 40 $ hg debugindex a
41 41 rev offset length ..... linkrev nodeid p1 p2 (re)
42 42 0 0 3 .... 0 b789fdd96dc2 000000000000 000000000000 (re)
43 43 $ hg --debug debugindex a
44 44 rev offset length ..... linkrev nodeid p1 p2 (re)
45 45 0 0 3 .... 0 b789fdd96dc2f3bd229c1dd8eedf0fc60e2b68e3 0000000000000000000000000000000000000000 0000000000000000000000000000000000000000 (re)
46 46 $ hg debugindex -f 1 a
47 47 rev flag offset length size ..... link p1 p2 nodeid (re)
48 48 0 0000 0 3 2 .... 0 -1 -1 b789fdd96dc2 (re)
49 49 $ hg --debug debugindex -f 1 a
50 50 rev flag offset length size ..... link p1 p2 nodeid (re)
51 51 0 0000 0 3 2 .... 0 -1 -1 b789fdd96dc2f3bd229c1dd8eedf0fc60e2b68e3 (re)
52 52
53 53 debugdelta chain basic output
54 54
55 55 $ hg debugdeltachain -m
56 56 rev chain# chainlen prev delta size rawsize chainsize ratio lindist extradist extraratio
57 57 0 1 1 -1 base 44 43 44 1.02326 44 0 0.00000
58 58
59 59 $ hg debugdeltachain -m -T '{rev} {chainid} {chainlen}\n'
60 60 0 1 1
61 61
62 62 $ hg debugdeltachain -m -Tjson
63 63 [
64 64 {
65 65 "chainid": 1,
66 66 "chainlen": 1,
67 67 "chainratio": 1.02325581395,
68 68 "chainsize": 44,
69 69 "compsize": 44,
70 70 "deltatype": "base",
71 71 "extradist": 0,
72 72 "extraratio": 0.0,
73 73 "lindist": 44,
74 74 "prevrev": -1,
75 75 "rev": 0,
76 76 "uncompsize": 43
77 77 }
78 78 ]
79 79
80 80 debugdelta chain with sparse read enabled
81 81
82 82 $ cat >> $HGRCPATH <<EOF
83 83 > [experimental]
84 84 > sparse-read = True
85 85 > EOF
86 86 $ hg debugdeltachain -m
87 87 rev chain# chainlen prev delta size rawsize chainsize ratio lindist extradist extraratio readsize largestblk rddensity
88 88 0 1 1 -1 base 44 43 44 1.02326 44 0 0.00000 44 44 1.00000
89 89
90 90 $ hg debugdeltachain -m -T '{rev} {chainid} {chainlen} {readsize} {largestblock} {readdensity}\n'
91 91 0 1 1 44 44 1.0
92 92
93 93 $ hg debugdeltachain -m -Tjson
94 94 [
95 95 {
96 96 "chainid": 1,
97 97 "chainlen": 1,
98 98 "chainratio": 1.02325581395,
99 99 "chainsize": 44,
100 100 "compsize": 44,
101 101 "deltatype": "base",
102 102 "extradist": 0,
103 103 "extraratio": 0.0,
104 104 "largestblock": 44,
105 105 "lindist": 44,
106 106 "prevrev": -1,
107 107 "readdensity": 1.0,
108 108 "readsize": 44,
109 109 "rev": 0,
110 110 "uncompsize": 43
111 111 }
112 112 ]
113 113
114 114 Test max chain len
115 115 $ cat >> $HGRCPATH << EOF
116 116 > [format]
117 117 > maxchainlen=4
118 118 > EOF
119 119
120 120 $ printf "This test checks if maxchainlen config value is respected also it can serve as basic test for debugrevlog -d <file>.\n" >> a
121 121 $ hg ci -m a
122 122 $ printf "b\n" >> a
123 123 $ hg ci -m a
124 124 $ printf "c\n" >> a
125 125 $ hg ci -m a
126 126 $ printf "d\n" >> a
127 127 $ hg ci -m a
128 128 $ printf "e\n" >> a
129 129 $ hg ci -m a
130 130 $ printf "f\n" >> a
131 131 $ hg ci -m a
132 132 $ printf 'g\n' >> a
133 133 $ hg ci -m a
134 134 $ printf 'h\n' >> a
135 135 $ hg ci -m a
136 136 $ hg debugrevlog -d a
137 137 # rev p1rev p2rev start end deltastart base p1 p2 rawsize totalsize compression heads chainlen
138 138 0 -1 -1 0 ??? 0 0 0 0 ??? ???? ? 1 0 (glob)
139 139 1 0 -1 ??? ??? 0 0 0 0 ??? ???? ? 1 1 (glob)
140 140 2 1 -1 ??? ??? ??? ??? ??? 0 ??? ???? ? 1 2 (glob)
141 141 3 2 -1 ??? ??? ??? ??? ??? 0 ??? ???? ? 1 3 (glob)
142 142 4 3 -1 ??? ??? ??? ??? ??? 0 ??? ???? ? 1 4 (glob)
143 143 5 4 -1 ??? ??? ??? ??? ??? 0 ??? ???? ? 1 0 (glob)
144 144 6 5 -1 ??? ??? ??? ??? ??? 0 ??? ???? ? 1 1 (glob)
145 145 7 6 -1 ??? ??? ??? ??? ??? 0 ??? ???? ? 1 2 (glob)
146 146 8 7 -1 ??? ??? ??? ??? ??? 0 ??? ???? ? 1 3 (glob)
147 147
148 148 Test WdirUnsupported exception
149 149
150 150 $ hg debugdata -c ffffffffffffffffffffffffffffffffffffffff
151 151 abort: working directory revision cannot be specified
152 152 [255]
153 153
154 154 Test cache warming command
155 155
156 156 $ rm -rf .hg/cache/
157 157 $ hg debugupdatecaches --debug
158 158 updating the branch cache
159 159 $ ls -r .hg/cache/*
160 160 .hg/cache/rbc-revs-v1
161 161 .hg/cache/rbc-names-v1
162 162 .hg/cache/branch2-served
163 163
164 164 $ cd ..
165 165
166 166 Test internal debugstacktrace command
167 167
168 168 $ cat > debugstacktrace.py << EOF
169 169 > from __future__ import absolute_import
170 170 > import sys
171 171 > from mercurial import util
172 172 > def f():
173 173 > util.debugstacktrace(f=sys.stdout)
174 174 > g()
175 175 > def g():
176 176 > util.dst('hello from g\\n', skip=1)
177 177 > h()
178 178 > def h():
179 179 > util.dst('hi ...\\nfrom h hidden in g', 1, depth=2)
180 180 > f()
181 181 > EOF
182 182 $ $PYTHON debugstacktrace.py
183 183 stacktrace at:
184 184 debugstacktrace.py:12 in * (glob)
185 185 debugstacktrace.py:5 in f
186 186 hello from g at:
187 187 debugstacktrace.py:12 in * (glob)
188 188 debugstacktrace.py:6 in f
189 189 hi ...
190 190 from h hidden in g at:
191 191 debugstacktrace.py:6 in f
192 192 debugstacktrace.py:9 in g
193 193
194 194 Test debugcapabilities command:
195 195
196 196 $ hg debugcapabilities ./debugrevlog/
197 197 Main capabilities:
198 198 branchmap
199 199 $USUAL_BUNDLE2_CAPS$
200 200 getbundle
201 201 known
202 202 lookup
203 203 pushkey
204 204 unbundle
205 205 Bundle2 capabilities:
206 206 HG20
207 bookmarks
207 208 changegroup
208 209 01
209 210 02
210 211 digests
211 212 md5
212 213 sha1
213 214 sha512
214 215 error
215 216 abort
216 217 unsupportedcontent
217 218 pushraced
218 219 pushkey
219 220 hgtagsfnodes
220 221 listkeys
221 222 phases
222 223 heads
223 224 pushkey
224 225 remote-changegroup
225 226 http
226 227 https
@@ -1,910 +1,911 b''
1 1 #require killdaemons serve zstd
2 2
3 3 Client version is embedded in HTTP request and is effectively dynamic. Pin the
4 4 version so behavior is deterministic.
5 5
6 6 $ cat > fakeversion.py << EOF
7 7 > from mercurial import util
8 8 > util.version = lambda: '4.2'
9 9 > EOF
10 10
11 11 $ cat >> $HGRCPATH << EOF
12 12 > [extensions]
13 13 > fakeversion = `pwd`/fakeversion.py
14 14 > [devel]
15 15 > legacy.exchange = phases
16 16 > EOF
17 17
18 18 $ hg init server0
19 19 $ cd server0
20 20 $ touch foo
21 21 $ hg -q commit -A -m initial
22 22
23 23 Also disable compression, because zstd is optional and causes output to vary,
24 24 and because debugging partial responses is hard when compression is involved
25 25
26 26 $ cat > .hg/hgrc << EOF
27 27 > [extensions]
28 28 > badserver = $TESTDIR/badserverext.py
29 29 > [server]
30 30 > compressionengines = none
31 31 > EOF
32 32
33 33 Failure to accept() socket should result in connection related error message
34 34
35 35 $ hg serve --config badserver.closebeforeaccept=true -p $HGPORT -d --pid-file=hg.pid
36 36 $ cat hg.pid > $DAEMON_PIDS
37 37
38 38 $ hg clone http://localhost:$HGPORT/ clone
39 39 abort: error: $ECONNRESET$
40 40 [255]
41 41
42 42 (The server exits on its own, but there is a race between that and starting a new server.
43 43 So ensure the process is dead.)
44 44
45 45 $ killdaemons.py $DAEMON_PIDS
46 46
47 47 Failure immediately after accept() should yield connection related error message
48 48
49 49 $ hg serve --config badserver.closeafteraccept=true -p $HGPORT -d --pid-file=hg.pid
50 50 $ cat hg.pid > $DAEMON_PIDS
51 51
52 52 TODO: this usually outputs good results, but sometimes emits abort:
53 53 error: '' on FreeBSD and OS X.
54 54 What we ideally want is:
55 55
56 56 abort: error: $ECONNRESET$
57 57
58 58 The flakiness in this output was observable easily with
59 59 --runs-per-test=20 on macOS 10.12 during the freeze for 4.2.
60 60 $ hg clone http://localhost:$HGPORT/ clone
61 61 abort: error: * (glob)
62 62 [255]
63 63
64 64 $ killdaemons.py $DAEMON_PIDS
65 65
66 66 Failure to read all bytes in initial HTTP request should yield connection related error message
67 67
68 68 $ hg serve --config badserver.closeafterrecvbytes=1 -p $HGPORT -d --pid-file=hg.pid -E error.log
69 69 $ cat hg.pid > $DAEMON_PIDS
70 70
71 71 $ hg clone http://localhost:$HGPORT/ clone
72 72 abort: error: bad HTTP status line: ''
73 73 [255]
74 74
75 75 $ killdaemons.py $DAEMON_PIDS
76 76
77 77 $ cat error.log
78 78 readline(1 from 65537) -> (1) G
79 79 read limit reached; closing socket
80 80
81 81 $ rm -f error.log
82 82
83 83 Same failure, but server reads full HTTP request line
84 84
85 85 $ hg serve --config badserver.closeafterrecvbytes=40 -p $HGPORT -d --pid-file=hg.pid -E error.log
86 86 $ cat hg.pid > $DAEMON_PIDS
87 87 $ hg clone http://localhost:$HGPORT/ clone
88 88 abort: error: bad HTTP status line: ''
89 89 [255]
90 90
91 91 $ killdaemons.py $DAEMON_PIDS
92 92
93 93 $ cat error.log
94 94 readline(40 from 65537) -> (33) GET /?cmd=capabilities HTTP/1.1\r\n
95 95 readline(7 from -1) -> (7) Accept-
96 96 read limit reached; closing socket
97 97
98 98 $ rm -f error.log
99 99
100 100 Failure on subsequent HTTP request on the same socket (cmd?batch)
101 101
102 102 $ hg serve --config badserver.closeafterrecvbytes=210 -p $HGPORT -d --pid-file=hg.pid -E error.log
103 103 $ cat hg.pid > $DAEMON_PIDS
104 104 $ hg clone http://localhost:$HGPORT/ clone
105 105 abort: error: bad HTTP status line: ''
106 106 [255]
107 107
108 108 $ killdaemons.py $DAEMON_PIDS
109 109
110 110 $ cat error.log
111 111 readline(210 from 65537) -> (33) GET /?cmd=capabilities HTTP/1.1\r\n
112 112 readline(177 from -1) -> (27) Accept-Encoding: identity\r\n
113 113 readline(150 from -1) -> (35) accept: application/mercurial-0.1\r\n
114 114 readline(115 from -1) -> (2?) host: localhost:$HGPORT\r\n (glob)
115 115 readline(9? from -1) -> (49) user-agent: mercurial/proto-1.0 (Mercurial 4.2)\r\n (glob)
116 116 readline(4? from -1) -> (2) \r\n (glob)
117 117 write(36) -> HTTP/1.1 200 Script output follows\r\n
118 118 write(23) -> Server: badhttpserver\r\n
119 119 write(37) -> Date: Fri, 14 Apr 2017 00:00:00 GMT\r\n
120 120 write(41) -> Content-Type: application/mercurial-0.1\r\n
121 write(21) -> Content-Length: 405\r\n
121 write(21) -> Content-Length: 417\r\n
122 122 write(2) -> \r\n
123 write(405) -> lookup changegroupsubset branchmap pushkey known getbundle unbundlehash batch streamreqs=generaldelta,revlogv1 bundle2=HG20%0Achangegroup%3D01%2C02%0Adigests%3Dmd5%2Csha1%2Csha512%0Aerror%3Dabort%2Cunsupportedcontent%2Cpushraced%2Cpushkey%0Ahgtagsfnodes%0Alistkeys%0Apushkey%0Aremote-changegroup%3Dhttp%2Chttps unbundle=HG10GZ,HG10BZ,HG10UN httpheader=1024 httpmediatype=0.1rx,0.1tx,0.2tx compression=none
123 write(417) -> lookup changegroupsubset branchmap pushkey known getbundle unbundlehash batch streamreqs=generaldelta,revlogv1 bundle2=HG20%0Abookmarks%0Achangegroup%3D01%2C02%0Adigests%3Dmd5%2Csha1%2Csha512%0Aerror%3Dabort%2Cunsupportedcontent%2Cpushraced%2Cpushkey%0Ahgtagsfnodes%0Alistkeys%0Apushkey%0Aremote-changegroup%3Dhttp%2Chttps unbundle=HG10GZ,HG10BZ,HG10UN httpheader=1024 httpmediatype=0.1rx,0.1tx,0.2tx compression=none
124 124 readline(4? from 65537) -> (26) GET /?cmd=batch HTTP/1.1\r\n (glob)
125 125 readline(1? from -1) -> (1?) Accept-Encoding* (glob)
126 126 read limit reached; closing socket
127 127 readline(210 from 65537) -> (26) GET /?cmd=batch HTTP/1.1\r\n
128 128 readline(184 from -1) -> (27) Accept-Encoding: identity\r\n
129 129 readline(157 from -1) -> (29) vary: X-HgArg-1,X-HgProto-1\r\n
130 130 readline(128 from -1) -> (41) x-hgarg-1: cmds=heads+%3Bknown+nodes%3D\r\n
131 131 readline(87 from -1) -> (48) x-hgproto-1: 0.1 0.2 comp=$USUAL_COMPRESSIONS$\r\n
132 132 readline(39 from -1) -> (35) accept: application/mercurial-0.1\r\n
133 133 readline(4 from -1) -> (4) host
134 134 read limit reached; closing socket
135 135
136 136 $ rm -f error.log
137 137
138 138 Failure to read getbundle HTTP request
139 139
140 140 $ hg serve --config badserver.closeafterrecvbytes=292 -p $HGPORT -d --pid-file=hg.pid -E error.log
141 141 $ cat hg.pid > $DAEMON_PIDS
142 142 $ hg clone http://localhost:$HGPORT/ clone
143 143 requesting all changes
144 144 abort: error: bad HTTP status line: ''
145 145 [255]
146 146
147 147 $ killdaemons.py $DAEMON_PIDS
148 148
149 149 $ cat error.log
150 150 readline(292 from 65537) -> (33) GET /?cmd=capabilities HTTP/1.1\r\n
151 151 readline(259 from -1) -> (27) Accept-Encoding: identity\r\n
152 152 readline(232 from -1) -> (35) accept: application/mercurial-0.1\r\n
153 153 readline(197 from -1) -> (2?) host: localhost:$HGPORT\r\n (glob)
154 154 readline(17? from -1) -> (49) user-agent: mercurial/proto-1.0 (Mercurial 4.2)\r\n (glob)
155 155 readline(12? from -1) -> (2) \r\n (glob)
156 156 write(36) -> HTTP/1.1 200 Script output follows\r\n
157 157 write(23) -> Server: badhttpserver\r\n
158 158 write(37) -> Date: Fri, 14 Apr 2017 00:00:00 GMT\r\n
159 159 write(41) -> Content-Type: application/mercurial-0.1\r\n
160 write(21) -> Content-Length: 405\r\n
160 readline(1 from -1) -> (1) x (?)
161 write(21) -> Content-Length: 417\r\n
161 162 write(2) -> \r\n
162 write(405) -> lookup changegroupsubset branchmap pushkey known getbundle unbundlehash batch streamreqs=generaldelta,revlogv1 bundle2=HG20%0Achangegroup%3D01%2C02%0Adigests%3Dmd5%2Csha1%2Csha512%0Aerror%3Dabort%2Cunsupportedcontent%2Cpushraced%2Cpushkey%0Ahgtagsfnodes%0Alistkeys%0Apushkey%0Aremote-changegroup%3Dhttp%2Chttps unbundle=HG10GZ,HG10BZ,HG10UN httpheader=1024 httpmediatype=0.1rx,0.1tx,0.2tx compression=none
163 write(417) -> lookup changegroupsubset branchmap pushkey known getbundle unbundlehash batch streamreqs=generaldelta,revlogv1 bundle2=HG20%0Abookmarks%0Achangegroup%3D01%2C02%0Adigests%3Dmd5%2Csha1%2Csha512%0Aerror%3Dabort%2Cunsupportedcontent%2Cpushraced%2Cpushkey%0Ahgtagsfnodes%0Alistkeys%0Apushkey%0Aremote-changegroup%3Dhttp%2Chttps unbundle=HG10GZ,HG10BZ,HG10UN httpheader=1024 httpmediatype=0.1rx,0.1tx,0.2tx compression=none
163 164 readline\(12[34] from 65537\) -> \(2[67]\) GET /\?cmd=batch HTTP/1.1\\r\\n (re)
164 165 readline(9? from -1) -> (27) Accept-Encoding: identity\r\n (glob)
165 166 readline(7? from -1) -> (29) vary: X-HgArg-1,X-HgProto-1\r\n (glob)
166 167 readline(4? from -1) -> (41) x-hgarg-1: cmds=heads+%3Bknown+nodes%3D\r\n (glob)
167 168 readline(1 from -1) -> (1) x (?)
168 169 read limit reached; closing socket
169 170 readline(292 from 65537) -> (26) GET /?cmd=batch HTTP/1.1\r\n
170 171 readline(266 from -1) -> (27) Accept-Encoding: identity\r\n
171 172 readline(239 from -1) -> (29) vary: X-HgArg-1,X-HgProto-1\r\n
172 173 readline(210 from -1) -> (41) x-hgarg-1: cmds=heads+%3Bknown+nodes%3D\r\n
173 174 readline(169 from -1) -> (48) x-hgproto-1: 0.1 0.2 comp=$USUAL_COMPRESSIONS$\r\n
174 175 readline(121 from -1) -> (35) accept: application/mercurial-0.1\r\n
175 176 readline(86 from -1) -> (2?) host: localhost:$HGPORT\r\n (glob)
176 177 readline(6? from -1) -> (49) user-agent: mercurial/proto-1.0 (Mercurial 4.2)\r\n (glob)
177 178 readline(1? from -1) -> (2) \r\n (glob)
178 179 write(36) -> HTTP/1.1 200 Script output follows\r\n
179 180 write(23) -> Server: badhttpserver\r\n
180 181 write(37) -> Date: Fri, 14 Apr 2017 00:00:00 GMT\r\n
181 182 write(41) -> Content-Type: application/mercurial-0.1\r\n
182 183 write(20) -> Content-Length: 42\r\n
183 184 write(2) -> \r\n
184 185 write(42) -> 96ee1d7354c4ad7372047672c36a1f561e3a6a4c\n;
185 186 readline\(1[23] from 65537\) -> \(1[23]\) GET /\?cmd=ge.? (re)
186 187 read limit reached; closing socket
187 188 readline(292 from 65537) -> (30) GET /?cmd=getbundle HTTP/1.1\r\n
188 189 readline(262 from -1) -> (27) Accept-Encoding: identity\r\n
189 190 readline(235 from -1) -> (29) vary: X-HgArg-1,X-HgProto-1\r\n
190 readline(206 from -1) -> (206) x-hgarg-1: bundlecaps=HG20%2Cbundle2%3DHG20%250Achangegroup%253D01%252C02%250Adigests%253Dmd5%252Csha1%252Csha512%250Aerror%253Dabort%252Cunsupportedcontent%252Cpushraced%252Cpushkey%250Ahgtagsfnodes%250Ali
191 readline(206 from -1) -> (206) x-hgarg-1: bundlecaps=HG20%2Cbundle2%3DHG20%250Abookmarks%250Achangegroup%253D01%252C02%250Adigests%253Dmd5%252Csha1%252Csha512%250Aerror%253Dabort%252Cunsupportedcontent%252Cpushraced%252Cpushkey%250Ahgtag
191 192 read limit reached; closing socket
192 193
193 194 $ rm -f error.log
194 195
195 196 Now do a variation using POST to send arguments
196 197
197 198 $ hg serve --config experimental.httppostargs=true --config badserver.closeafterrecvbytes=315 -p $HGPORT -d --pid-file=hg.pid -E error.log
198 199 $ cat hg.pid > $DAEMON_PIDS
199 200
200 201 $ hg clone http://localhost:$HGPORT/ clone
201 202 abort: error: bad HTTP status line: ''
202 203 [255]
203 204
204 205 $ killdaemons.py $DAEMON_PIDS
205 206
206 207 $ cat error.log
207 208 readline(315 from 65537) -> (33) GET /?cmd=capabilities HTTP/1.1\r\n
208 209 readline(282 from -1) -> (27) Accept-Encoding: identity\r\n
209 210 readline(255 from -1) -> (35) accept: application/mercurial-0.1\r\n
210 211 readline(220 from -1) -> (2?) host: localhost:$HGPORT\r\n (glob)
211 212 readline(19? from -1) -> (49) user-agent: mercurial/proto-1.0 (Mercurial 4.2)\r\n (glob)
212 213 readline(14? from -1) -> (2) \r\n (glob)
213 214 write(36) -> HTTP/1.1 200 Script output follows\r\n
214 215 write(23) -> Server: badhttpserver\r\n
215 216 write(37) -> Date: Fri, 14 Apr 2017 00:00:00 GMT\r\n
216 217 write(41) -> Content-Type: application/mercurial-0.1\r\n
217 write(21) -> Content-Length: 418\r\n
218 write(21) -> Content-Length: 430\r\n
218 219 write(2) -> \r\n
219 write(418) -> lookup changegroupsubset branchmap pushkey known getbundle unbundlehash batch streamreqs=generaldelta,revlogv1 bundle2=HG20%0Achangegroup%3D01%2C02%0Adigests%3Dmd5%2Csha1%2Csha512%0Aerror%3Dabort%2Cunsupportedcontent%2Cpushraced%2Cpushkey%0Ahgtagsfnodes%0Alistkeys%0Apushkey%0Aremote-changegroup%3Dhttp%2Chttps unbundle=HG10GZ,HG10BZ,HG10UN httpheader=1024 httppostargs httpmediatype=0.1rx,0.1tx,0.2tx compression=none
220 write(430) -> lookup changegroupsubset branchmap pushkey known getbundle unbundlehash batch streamreqs=generaldelta,revlogv1 bundle2=HG20%0Abookmarks%0Achangegroup%3D01%2C02%0Adigests%3Dmd5%2Csha1%2Csha512%0Aerror%3Dabort%2Cunsupportedcontent%2Cpushraced%2Cpushkey%0Ahgtagsfnodes%0Alistkeys%0Apushkey%0Aremote-changegroup%3Dhttp%2Chttps unbundle=HG10GZ,HG10BZ,HG10UN httpheader=1024 httppostargs httpmediatype=0.1rx,0.1tx,0.2tx compression=none
220 221 readline\(14[67] from 65537\) -> \(2[67]\) POST /\?cmd=batch HTTP/1.1\\r\\n (re)
221 222 readline\(1(19|20) from -1\) -> \(27\) Accept-Encoding: identity\\r\\n (re)
222 223 readline(9? from -1) -> (41) content-type: application/mercurial-0.1\r\n (glob)
223 224 readline(5? from -1) -> (19) vary: X-HgProto-1\r\n (glob)
224 225 readline(3? from -1) -> (19) x-hgargs-post: 28\r\n (glob)
225 226 readline(1? from -1) -> (1?) x-hgproto-1: * (glob)
226 227 read limit reached; closing socket
227 228 readline(315 from 65537) -> (27) POST /?cmd=batch HTTP/1.1\r\n
228 229 readline(288 from -1) -> (27) Accept-Encoding: identity\r\n
229 230 readline(261 from -1) -> (41) content-type: application/mercurial-0.1\r\n
230 231 readline(220 from -1) -> (19) vary: X-HgProto-1\r\n
231 232 readline(201 from -1) -> (19) x-hgargs-post: 28\r\n
232 233 readline(182 from -1) -> (48) x-hgproto-1: 0.1 0.2 comp=$USUAL_COMPRESSIONS$\r\n
233 234 readline(134 from -1) -> (35) accept: application/mercurial-0.1\r\n
234 235 readline(99 from -1) -> (20) content-length: 28\r\n
235 236 readline(79 from -1) -> (2?) host: localhost:$HGPORT\r\n (glob)
236 237 readline(5? from -1) -> (49) user-agent: mercurial/proto-1.0 (Mercurial 4.2)\r\n (glob)
237 238 readline(? from -1) -> (2) \r\n (glob)
238 239 read(? from 28) -> (?) cmds=* (glob)
239 240 read limit reached, closing socket
240 241 write(36) -> HTTP/1.1 500 Internal Server Error\r\n
241 242
242 243 $ rm -f error.log
243 244
244 245 Now move on to partial server responses
245 246
246 247 Server sends a single character from the HTTP response line
247 248
248 249 $ hg serve --config badserver.closeaftersendbytes=1 -p $HGPORT -d --pid-file=hg.pid -E error.log
249 250 $ cat hg.pid > $DAEMON_PIDS
250 251
251 252 $ hg clone http://localhost:$HGPORT/ clone
252 253 abort: error: bad HTTP status line: H
253 254 [255]
254 255
255 256 $ killdaemons.py $DAEMON_PIDS
256 257
257 258 $ cat error.log
258 259 readline(65537) -> (33) GET /?cmd=capabilities HTTP/1.1\r\n
259 260 readline(-1) -> (27) Accept-Encoding: identity\r\n
260 261 readline(-1) -> (35) accept: application/mercurial-0.1\r\n
261 262 readline(-1) -> (2?) host: localhost:$HGPORT\r\n (glob)
262 263 readline(-1) -> (49) user-agent: mercurial/proto-1.0 (Mercurial 4.2)\r\n
263 264 readline(-1) -> (2) \r\n
264 265 write(1 from 36) -> (0) H
265 266 write limit reached; closing socket
266 267 write(36) -> HTTP/1.1 500 Internal Server Error\r\n
267 268
268 269 $ rm -f error.log
269 270
270 271 Server sends an incomplete capabilities response body
271 272
272 273 $ hg serve --config badserver.closeaftersendbytes=180 -p $HGPORT -d --pid-file=hg.pid -E error.log
273 274 $ cat hg.pid > $DAEMON_PIDS
274 275
275 276 $ hg clone http://localhost:$HGPORT/ clone
276 abort: HTTP request error (incomplete response; expected 385 bytes got 20)
277 abort: HTTP request error (incomplete response; expected 397 bytes got 20)
277 278 (this may be an intermittent network failure; if the error persists, consider contacting the network or server operator)
278 279 [255]
279 280
280 281 $ killdaemons.py $DAEMON_PIDS
281 282
282 283 $ cat error.log
283 284 readline(65537) -> (33) GET /?cmd=capabilities HTTP/1.1\r\n
284 285 readline(-1) -> (27) Accept-Encoding: identity\r\n
285 286 readline(-1) -> (35) accept: application/mercurial-0.1\r\n
286 287 readline(-1) -> (2?) host: localhost:$HGPORT\r\n (glob)
287 288 readline(-1) -> (49) user-agent: mercurial/proto-1.0 (Mercurial 4.2)\r\n
288 289 readline(-1) -> (2) \r\n
289 290 write(36 from 36) -> (144) HTTP/1.1 200 Script output follows\r\n
290 291 write(23 from 23) -> (121) Server: badhttpserver\r\n
291 292 write(37 from 37) -> (84) Date: Fri, 14 Apr 2017 00:00:00 GMT\r\n
292 293 write(41 from 41) -> (43) Content-Type: application/mercurial-0.1\r\n
293 write(21 from 21) -> (22) Content-Length: 405\r\n
294 write(21 from 21) -> (22) Content-Length: 417\r\n
294 295 write(2 from 2) -> (20) \r\n
295 write(20 from 405) -> (0) lookup changegroupsu
296 write(20 from 417) -> (0) lookup changegroupsu
296 297 write limit reached; closing socket
297 298
298 299 $ rm -f error.log
299 300
300 301 Server sends incomplete headers for batch request
301 302
302 303 $ hg serve --config badserver.closeaftersendbytes=695 -p $HGPORT -d --pid-file=hg.pid -E error.log
303 304 $ cat hg.pid > $DAEMON_PIDS
304 305
305 306 TODO this output is horrible
306 307
307 308 $ hg clone http://localhost:$HGPORT/ clone
308 309 abort: 'http://localhost:$HGPORT/' does not appear to be an hg repository:
309 ---%<--- (application/mercuria)
310 ---%<--- (applicat)
310 311
311 312 ---%<---
312 313 !
313 314 [255]
314 315
315 316 $ killdaemons.py $DAEMON_PIDS
316 317
317 318 $ cat error.log
318 319 readline(65537) -> (33) GET /?cmd=capabilities HTTP/1.1\r\n
319 320 readline(-1) -> (27) Accept-Encoding: identity\r\n
320 321 readline(-1) -> (35) accept: application/mercurial-0.1\r\n
321 322 readline(-1) -> (2?) host: localhost:$HGPORT\r\n (glob)
322 323 readline(-1) -> (49) user-agent: mercurial/proto-1.0 (Mercurial 4.2)\r\n
323 324 readline(-1) -> (2) \r\n
324 325 write(36 from 36) -> (659) HTTP/1.1 200 Script output follows\r\n
325 326 write(23 from 23) -> (636) Server: badhttpserver\r\n
326 327 write(37 from 37) -> (599) Date: Fri, 14 Apr 2017 00:00:00 GMT\r\n
327 328 write(41 from 41) -> (558) Content-Type: application/mercurial-0.1\r\n
328 write(21 from 21) -> (537) Content-Length: 405\r\n
329 write(21 from 21) -> (537) Content-Length: 417\r\n
329 330 write(2 from 2) -> (535) \r\n
330 write(405 from 405) -> (130) lookup changegroupsubset branchmap pushkey known getbundle unbundlehash batch streamreqs=generaldelta,revlogv1 bundle2=HG20%0Achangegroup%3D01%2C02%0Adigests%3Dmd5%2Csha1%2Csha512%0Aerror%3Dabort%2Cunsupportedcontent%2Cpushraced%2Cpushkey%0Ahgtagsfnodes%0Alistkeys%0Apushkey%0Aremote-changegroup%3Dhttp%2Chttps unbundle=HG10GZ,HG10BZ,HG10UN httpheader=1024 httpmediatype=0.1rx,0.1tx,0.2tx compression=none
331 write(417 from 417) -> (118) lookup changegroupsubset branchmap pushkey known getbundle unbundlehash batch streamreqs=generaldelta,revlogv1 bundle2=HG20%0Abookmarks%0Achangegroup%3D01%2C02%0Adigests%3Dmd5%2Csha1%2Csha512%0Aerror%3Dabort%2Cunsupportedcontent%2Cpushraced%2Cpushkey%0Ahgtagsfnodes%0Alistkeys%0Apushkey%0Aremote-changegroup%3Dhttp%2Chttps unbundle=HG10GZ,HG10BZ,HG10UN httpheader=1024 httpmediatype=0.1rx,0.1tx,0.2tx compression=none
331 332 readline(65537) -> (26) GET /?cmd=batch HTTP/1.1\r\n
332 333 readline(-1) -> (27) Accept-Encoding: identity\r\n
333 334 readline(-1) -> (29) vary: X-HgArg-1,X-HgProto-1\r\n
334 335 readline(-1) -> (41) x-hgarg-1: cmds=heads+%3Bknown+nodes%3D\r\n
335 336 readline(-1) -> (48) x-hgproto-1: 0.1 0.2 comp=$USUAL_COMPRESSIONS$\r\n
336 337 readline(-1) -> (35) accept: application/mercurial-0.1\r\n
337 338 readline(-1) -> (2?) host: localhost:$HGPORT\r\n (glob)
338 339 readline(-1) -> (49) user-agent: mercurial/proto-1.0 (Mercurial 4.2)\r\n
339 340 readline(-1) -> (2) \r\n
340 write(36 from 36) -> (94) HTTP/1.1 200 Script output follows\r\n
341 write(23 from 23) -> (71) Server: badhttpserver\r\n
342 write(37 from 37) -> (34) Date: Fri, 14 Apr 2017 00:00:00 GMT\r\n
343 write(34 from 41) -> (0) Content-Type: application/mercuria
341 write(36 from 36) -> (82) HTTP/1.1 200 Script output follows\r\n
342 write(23 from 23) -> (59) Server: badhttpserver\r\n
343 write(37 from 37) -> (22) Date: Fri, 14 Apr 2017 00:00:00 GMT\r\n
344 write(22 from 41) -> (0) Content-Type: applicat
344 345 write limit reached; closing socket
345 346 write(36) -> HTTP/1.1 500 Internal Server Error\r\n
346 347
347 348 $ rm -f error.log
348 349
349 350 Server sends an incomplete HTTP response body to batch request
350 351
351 352 $ hg serve --config badserver.closeaftersendbytes=760 -p $HGPORT -d --pid-file=hg.pid -E error.log
352 353 $ cat hg.pid > $DAEMON_PIDS
353 354
354 355 TODO client spews a stack due to uncaught ValueError in batch.results()
355 356 #if no-chg
356 357 $ hg clone http://localhost:$HGPORT/ clone 2> /dev/null
357 358 [1]
358 359 #else
359 360 $ hg clone http://localhost:$HGPORT/ clone 2> /dev/null
360 361 [255]
361 362 #endif
362 363
363 364 $ killdaemons.py $DAEMON_PIDS
364 365
365 366 $ cat error.log
366 367 readline(65537) -> (33) GET /?cmd=capabilities HTTP/1.1\r\n
367 368 readline(-1) -> (27) Accept-Encoding: identity\r\n
368 369 readline(-1) -> (35) accept: application/mercurial-0.1\r\n
369 370 readline(-1) -> (2?) host: localhost:$HGPORT\r\n (glob)
370 371 readline(-1) -> (49) user-agent: mercurial/proto-1.0 (Mercurial 4.2)\r\n
371 372 readline(-1) -> (2) \r\n
372 373 write(36 from 36) -> (724) HTTP/1.1 200 Script output follows\r\n
373 374 write(23 from 23) -> (701) Server: badhttpserver\r\n
374 375 write(37 from 37) -> (664) Date: Fri, 14 Apr 2017 00:00:00 GMT\r\n
375 376 write(41 from 41) -> (623) Content-Type: application/mercurial-0.1\r\n
376 write(21 from 21) -> (602) Content-Length: 405\r\n
377 write(21 from 21) -> (602) Content-Length: 417\r\n
377 378 write(2 from 2) -> (600) \r\n
378 write(405 from 405) -> (195) lookup changegroupsubset branchmap pushkey known getbundle unbundlehash batch streamreqs=generaldelta,revlogv1 bundle2=HG20%0Achangegroup%3D01%2C02%0Adigests%3Dmd5%2Csha1%2Csha512%0Aerror%3Dabort%2Cunsupportedcontent%2Cpushraced%2Cpushkey%0Ahgtagsfnodes%0Alistkeys%0Apushkey%0Aremote-changegroup%3Dhttp%2Chttps unbundle=HG10GZ,HG10BZ,HG10UN httpheader=1024 httpmediatype=0.1rx,0.1tx,0.2tx compression=none
379 write(417 from 417) -> (183) lookup changegroupsubset branchmap pushkey known getbundle unbundlehash batch streamreqs=generaldelta,revlogv1 bundle2=HG20%0Abookmarks%0Achangegroup%3D01%2C02%0Adigests%3Dmd5%2Csha1%2Csha512%0Aerror%3Dabort%2Cunsupportedcontent%2Cpushraced%2Cpushkey%0Ahgtagsfnodes%0Alistkeys%0Apushkey%0Aremote-changegroup%3Dhttp%2Chttps unbundle=HG10GZ,HG10BZ,HG10UN httpheader=1024 httpmediatype=0.1rx,0.1tx,0.2tx compression=none
379 380 readline(65537) -> (26) GET /?cmd=batch HTTP/1.1\r\n
380 381 readline(-1) -> (27) Accept-Encoding: identity\r\n
381 382 readline(-1) -> (29) vary: X-HgArg-1,X-HgProto-1\r\n
382 383 readline(-1) -> (41) x-hgarg-1: cmds=heads+%3Bknown+nodes%3D\r\n
383 384 readline(-1) -> (48) x-hgproto-1: 0.1 0.2 comp=$USUAL_COMPRESSIONS$\r\n
384 385 readline(-1) -> (35) accept: application/mercurial-0.1\r\n
385 386 readline(-1) -> (2?) host: localhost:$HGPORT\r\n (glob)
386 387 readline(-1) -> (49) user-agent: mercurial/proto-1.0 (Mercurial 4.2)\r\n
387 388 readline(-1) -> (2) \r\n
388 write(36 from 36) -> (159) HTTP/1.1 200 Script output follows\r\n
389 write(23 from 23) -> (136) Server: badhttpserver\r\n
390 write(37 from 37) -> (99) Date: Fri, 14 Apr 2017 00:00:00 GMT\r\n
391 write(41 from 41) -> (58) Content-Type: application/mercurial-0.1\r\n
392 write(20 from 20) -> (38) Content-Length: 42\r\n
393 write(2 from 2) -> (36) \r\n
394 write(36 from 42) -> (0) 96ee1d7354c4ad7372047672c36a1f561e3a
389 write(36 from 36) -> (147) HTTP/1.1 200 Script output follows\r\n
390 write(23 from 23) -> (124) Server: badhttpserver\r\n
391 write(37 from 37) -> (87) Date: Fri, 14 Apr 2017 00:00:00 GMT\r\n
392 write(41 from 41) -> (46) Content-Type: application/mercurial-0.1\r\n
393 write(20 from 20) -> (26) Content-Length: 42\r\n
394 write(2 from 2) -> (24) \r\n
395 write(24 from 42) -> (0) 96ee1d7354c4ad7372047672
395 396 write limit reached; closing socket
396 397
397 398 $ rm -f error.log
398 399
399 400 Server sends incomplete headers for getbundle response
400 401
401 402 $ hg serve --config badserver.closeaftersendbytes=895 -p $HGPORT -d --pid-file=hg.pid -E error.log
402 403 $ cat hg.pid > $DAEMON_PIDS
403 404
404 405 TODO this output is terrible
405 406
406 407 $ hg clone http://localhost:$HGPORT/ clone
407 408 requesting all changes
408 409 abort: 'http://localhost:$HGPORT/' does not appear to be an hg repository:
409 ---%<--- (application/mercuri)
410 ---%<--- (applica)
410 411
411 412 ---%<---
412 413 !
413 414 [255]
414 415
415 416 $ killdaemons.py $DAEMON_PIDS
416 417
417 418 $ cat error.log
418 419 readline(65537) -> (33) GET /?cmd=capabilities HTTP/1.1\r\n
419 420 readline(-1) -> (27) Accept-Encoding: identity\r\n
420 421 readline(-1) -> (35) accept: application/mercurial-0.1\r\n
421 422 readline(-1) -> (2?) host: localhost:$HGPORT\r\n (glob)
422 423 readline(-1) -> (49) user-agent: mercurial/proto-1.0 (Mercurial 4.2)\r\n
423 424 readline(-1) -> (2) \r\n
424 425 write(36 from 36) -> (859) HTTP/1.1 200 Script output follows\r\n
425 426 write(23 from 23) -> (836) Server: badhttpserver\r\n
426 427 write(37 from 37) -> (799) Date: Fri, 14 Apr 2017 00:00:00 GMT\r\n
427 428 write(41 from 41) -> (758) Content-Type: application/mercurial-0.1\r\n
428 write(21 from 21) -> (737) Content-Length: 405\r\n
429 write(21 from 21) -> (737) Content-Length: 417\r\n
429 430 write(2 from 2) -> (735) \r\n
430 write(405 from 405) -> (330) lookup changegroupsubset branchmap pushkey known getbundle unbundlehash batch streamreqs=generaldelta,revlogv1 bundle2=HG20%0Achangegroup%3D01%2C02%0Adigests%3Dmd5%2Csha1%2Csha512%0Aerror%3Dabort%2Cunsupportedcontent%2Cpushraced%2Cpushkey%0Ahgtagsfnodes%0Alistkeys%0Apushkey%0Aremote-changegroup%3Dhttp%2Chttps unbundle=HG10GZ,HG10BZ,HG10UN httpheader=1024 httpmediatype=0.1rx,0.1tx,0.2tx compression=none
431 write(417 from 417) -> (318) lookup changegroupsubset branchmap pushkey known getbundle unbundlehash batch streamreqs=generaldelta,revlogv1 bundle2=HG20%0Abookmarks%0Achangegroup%3D01%2C02%0Adigests%3Dmd5%2Csha1%2Csha512%0Aerror%3Dabort%2Cunsupportedcontent%2Cpushraced%2Cpushkey%0Ahgtagsfnodes%0Alistkeys%0Apushkey%0Aremote-changegroup%3Dhttp%2Chttps unbundle=HG10GZ,HG10BZ,HG10UN httpheader=1024 httpmediatype=0.1rx,0.1tx,0.2tx compression=none
431 432 readline(65537) -> (26) GET /?cmd=batch HTTP/1.1\r\n
432 433 readline(-1) -> (27) Accept-Encoding: identity\r\n
433 434 readline(-1) -> (29) vary: X-HgArg-1,X-HgProto-1\r\n
434 435 readline(-1) -> (41) x-hgarg-1: cmds=heads+%3Bknown+nodes%3D\r\n
435 436 readline(-1) -> (48) x-hgproto-1: 0.1 0.2 comp=$USUAL_COMPRESSIONS$\r\n
436 437 readline(-1) -> (35) accept: application/mercurial-0.1\r\n
437 438 readline(-1) -> (2?) host: localhost:$HGPORT\r\n (glob)
438 439 readline(-1) -> (49) user-agent: mercurial/proto-1.0 (Mercurial 4.2)\r\n
439 440 readline(-1) -> (2) \r\n
440 write(36 from 36) -> (294) HTTP/1.1 200 Script output follows\r\n
441 write(23 from 23) -> (271) Server: badhttpserver\r\n
442 write(37 from 37) -> (234) Date: Fri, 14 Apr 2017 00:00:00 GMT\r\n
443 write(41 from 41) -> (193) Content-Type: application/mercurial-0.1\r\n
444 write(20 from 20) -> (173) Content-Length: 42\r\n
445 write(2 from 2) -> (171) \r\n
446 write(42 from 42) -> (129) 96ee1d7354c4ad7372047672c36a1f561e3a6a4c\n;
441 write(36 from 36) -> (282) HTTP/1.1 200 Script output follows\r\n
442 write(23 from 23) -> (259) Server: badhttpserver\r\n
443 write(37 from 37) -> (222) Date: Fri, 14 Apr 2017 00:00:00 GMT\r\n
444 write(41 from 41) -> (181) Content-Type: application/mercurial-0.1\r\n
445 write(20 from 20) -> (161) Content-Length: 42\r\n
446 write(2 from 2) -> (159) \r\n
447 write(42 from 42) -> (117) 96ee1d7354c4ad7372047672c36a1f561e3a6a4c\n;
447 448 readline(65537) -> (30) GET /?cmd=getbundle HTTP/1.1\r\n
448 449 readline(-1) -> (27) Accept-Encoding: identity\r\n
449 450 readline(-1) -> (29) vary: X-HgArg-1,X-HgProto-1\r\n
450 readline(-1) -> (396) x-hgarg-1: bundlecaps=HG20%2Cbundle2%3DHG20%250Achangegroup%253D01%252C02%250Adigests%253Dmd5%252Csha1%252Csha512%250Aerror%253Dabort%252Cunsupportedcontent%252Cpushraced%252Cpushkey%250Ahgtagsfnodes%250Alistkeys%250Apushkey%250Aremote-changegroup%253Dhttp%252Chttps&cg=1&common=0000000000000000000000000000000000000000&heads=96ee1d7354c4ad7372047672c36a1f561e3a6a4c&listkeys=phases%2Cbookmarks\r\n
451 readline(-1) -> (410) x-hgarg-1: bundlecaps=HG20%2Cbundle2%3DHG20%250Abookmarks%250Achangegroup%253D01%252C02%250Adigests%253Dmd5%252Csha1%252Csha512%250Aerror%253Dabort%252Cunsupportedcontent%252Cpushraced%252Cpushkey%250Ahgtagsfnodes%250Alistkeys%250Apushkey%250Aremote-changegroup%253Dhttp%252Chttps&cg=1&common=0000000000000000000000000000000000000000&heads=96ee1d7354c4ad7372047672c36a1f561e3a6a4c&listkeys=phases%2Cbookmarks\r\n
451 452 readline(-1) -> (48) x-hgproto-1: 0.1 0.2 comp=$USUAL_COMPRESSIONS$\r\n
452 453 readline(-1) -> (35) accept: application/mercurial-0.1\r\n
453 454 readline(-1) -> (2?) host: localhost:$HGPORT\r\n (glob)
454 455 readline(-1) -> (49) user-agent: mercurial/proto-1.0 (Mercurial 4.2)\r\n
455 456 readline(-1) -> (2) \r\n
456 write(36 from 36) -> (93) HTTP/1.1 200 Script output follows\r\n
457 write(23 from 23) -> (70) Server: badhttpserver\r\n
458 write(37 from 37) -> (33) Date: Fri, 14 Apr 2017 00:00:00 GMT\r\n
459 write(33 from 41) -> (0) Content-Type: application/mercuri
457 write(36 from 36) -> (81) HTTP/1.1 200 Script output follows\r\n
458 write(23 from 23) -> (58) Server: badhttpserver\r\n
459 write(37 from 37) -> (21) Date: Fri, 14 Apr 2017 00:00:00 GMT\r\n
460 write(21 from 41) -> (0) Content-Type: applica
460 461 write limit reached; closing socket
461 462 write(36) -> HTTP/1.1 500 Internal Server Error\r\n
462 463
463 464 $ rm -f error.log
464 465
465 466 Server sends empty HTTP body for getbundle
466 467
467 $ hg serve --config badserver.closeaftersendbytes=933 -p $HGPORT -d --pid-file=hg.pid -E error.log
468 $ hg serve --config badserver.closeaftersendbytes=945 -p $HGPORT -d --pid-file=hg.pid -E error.log
468 469 $ cat hg.pid > $DAEMON_PIDS
469 470
470 471 $ hg clone http://localhost:$HGPORT/ clone
471 472 requesting all changes
472 473 abort: HTTP request error (incomplete response)
473 474 (this may be an intermittent network failure; if the error persists, consider contacting the network or server operator)
474 475 [255]
475 476
476 477 $ killdaemons.py $DAEMON_PIDS
477 478
478 479 $ cat error.log
479 480 readline(65537) -> (33) GET /?cmd=capabilities HTTP/1.1\r\n
480 481 readline(-1) -> (27) Accept-Encoding: identity\r\n
481 482 readline(-1) -> (35) accept: application/mercurial-0.1\r\n
482 483 readline(-1) -> (2?) host: localhost:$HGPORT\r\n (glob)
483 484 readline(-1) -> (49) user-agent: mercurial/proto-1.0 (Mercurial 4.2)\r\n
484 485 readline(-1) -> (2) \r\n
485 write(36 from 36) -> (897) HTTP/1.1 200 Script output follows\r\n
486 write(23 from 23) -> (874) Server: badhttpserver\r\n
487 write(37 from 37) -> (837) Date: Fri, 14 Apr 2017 00:00:00 GMT\r\n
488 write(41 from 41) -> (796) Content-Type: application/mercurial-0.1\r\n
489 write(21 from 21) -> (775) Content-Length: 405\r\n
490 write(2 from 2) -> (773) \r\n
491 write(405 from 405) -> (368) lookup changegroupsubset branchmap pushkey known getbundle unbundlehash batch streamreqs=generaldelta,revlogv1 bundle2=HG20%0Achangegroup%3D01%2C02%0Adigests%3Dmd5%2Csha1%2Csha512%0Aerror%3Dabort%2Cunsupportedcontent%2Cpushraced%2Cpushkey%0Ahgtagsfnodes%0Alistkeys%0Apushkey%0Aremote-changegroup%3Dhttp%2Chttps unbundle=HG10GZ,HG10BZ,HG10UN httpheader=1024 httpmediatype=0.1rx,0.1tx,0.2tx compression=none
486 write(36 from 36) -> (909) HTTP/1.1 200 Script output follows\r\n
487 write(23 from 23) -> (886) Server: badhttpserver\r\n
488 write(37 from 37) -> (849) Date: Fri, 14 Apr 2017 00:00:00 GMT\r\n
489 write(41 from 41) -> (808) Content-Type: application/mercurial-0.1\r\n
490 write(21 from 21) -> (787) Content-Length: 417\r\n
491 write(2 from 2) -> (785) \r\n
492 write(417 from 417) -> (368) lookup changegroupsubset branchmap pushkey known getbundle unbundlehash batch streamreqs=generaldelta,revlogv1 bundle2=HG20%0Abookmarks%0Achangegroup%3D01%2C02%0Adigests%3Dmd5%2Csha1%2Csha512%0Aerror%3Dabort%2Cunsupportedcontent%2Cpushraced%2Cpushkey%0Ahgtagsfnodes%0Alistkeys%0Apushkey%0Aremote-changegroup%3Dhttp%2Chttps unbundle=HG10GZ,HG10BZ,HG10UN httpheader=1024 httpmediatype=0.1rx,0.1tx,0.2tx compression=none
492 493 readline(65537) -> (26) GET /?cmd=batch HTTP/1.1\r\n
493 494 readline(-1) -> (27) Accept-Encoding: identity\r\n
494 495 readline(-1) -> (29) vary: X-HgArg-1,X-HgProto-1\r\n
495 496 readline(-1) -> (41) x-hgarg-1: cmds=heads+%3Bknown+nodes%3D\r\n
496 497 readline(-1) -> (48) x-hgproto-1: 0.1 0.2 comp=$USUAL_COMPRESSIONS$\r\n
497 498 readline(-1) -> (35) accept: application/mercurial-0.1\r\n
498 499 readline(-1) -> (2?) host: localhost:$HGPORT\r\n (glob)
499 500 readline(-1) -> (49) user-agent: mercurial/proto-1.0 (Mercurial 4.2)\r\n
500 501 readline(-1) -> (2) \r\n
501 502 write(36 from 36) -> (332) HTTP/1.1 200 Script output follows\r\n
502 503 write(23 from 23) -> (309) Server: badhttpserver\r\n
503 504 write(37 from 37) -> (272) Date: Fri, 14 Apr 2017 00:00:00 GMT\r\n
504 505 write(41 from 41) -> (231) Content-Type: application/mercurial-0.1\r\n
505 506 write(20 from 20) -> (211) Content-Length: 42\r\n
506 507 write(2 from 2) -> (209) \r\n
507 508 write(42 from 42) -> (167) 96ee1d7354c4ad7372047672c36a1f561e3a6a4c\n;
508 509 readline(65537) -> (30) GET /?cmd=getbundle HTTP/1.1\r\n
509 510 readline(-1) -> (27) Accept-Encoding: identity\r\n
510 511 readline(-1) -> (29) vary: X-HgArg-1,X-HgProto-1\r\n
511 readline(-1) -> (396) x-hgarg-1: bundlecaps=HG20%2Cbundle2%3DHG20%250Achangegroup%253D01%252C02%250Adigests%253Dmd5%252Csha1%252Csha512%250Aerror%253Dabort%252Cunsupportedcontent%252Cpushraced%252Cpushkey%250Ahgtagsfnodes%250Alistkeys%250Apushkey%250Aremote-changegroup%253Dhttp%252Chttps&cg=1&common=0000000000000000000000000000000000000000&heads=96ee1d7354c4ad7372047672c36a1f561e3a6a4c&listkeys=phases%2Cbookmarks\r\n
512 readline(-1) -> (410) x-hgarg-1: bundlecaps=HG20%2Cbundle2%3DHG20%250Abookmarks%250Achangegroup%253D01%252C02%250Adigests%253Dmd5%252Csha1%252Csha512%250Aerror%253Dabort%252Cunsupportedcontent%252Cpushraced%252Cpushkey%250Ahgtagsfnodes%250Alistkeys%250Apushkey%250Aremote-changegroup%253Dhttp%252Chttps&cg=1&common=0000000000000000000000000000000000000000&heads=96ee1d7354c4ad7372047672c36a1f561e3a6a4c&listkeys=phases%2Cbookmarks\r\n
512 513 readline(-1) -> (48) x-hgproto-1: 0.1 0.2 comp=$USUAL_COMPRESSIONS$\r\n
513 514 readline(-1) -> (35) accept: application/mercurial-0.1\r\n
514 515 readline(-1) -> (2?) host: localhost:$HGPORT\r\n (glob)
515 516 readline(-1) -> (49) user-agent: mercurial/proto-1.0 (Mercurial 4.2)\r\n
516 517 readline(-1) -> (2) \r\n
517 518 write(36 from 36) -> (131) HTTP/1.1 200 Script output follows\r\n
518 519 write(23 from 23) -> (108) Server: badhttpserver\r\n
519 520 write(37 from 37) -> (71) Date: Fri, 14 Apr 2017 00:00:00 GMT\r\n
520 521 write(41 from 41) -> (30) Content-Type: application/mercurial-0.2\r\n
521 522 write(28 from 28) -> (2) Transfer-Encoding: chunked\r\n
522 523 write(2 from 2) -> (0) \r\n
523 524 write limit reached; closing socket
524 525 write(36) -> HTTP/1.1 500 Internal Server Error\r\n
525 526
526 527 $ rm -f error.log
527 528
528 529 Server sends partial compression string
529 530
530 $ hg serve --config badserver.closeaftersendbytes=945 -p $HGPORT -d --pid-file=hg.pid -E error.log
531 $ hg serve --config badserver.closeaftersendbytes=957 -p $HGPORT -d --pid-file=hg.pid -E error.log
531 532 $ cat hg.pid > $DAEMON_PIDS
532 533
533 534 $ hg clone http://localhost:$HGPORT/ clone
534 535 requesting all changes
535 536 abort: HTTP request error (incomplete response; expected 1 bytes got 3)
536 537 (this may be an intermittent network failure; if the error persists, consider contacting the network or server operator)
537 538 [255]
538 539
539 540 $ killdaemons.py $DAEMON_PIDS
540 541
541 542 $ cat error.log
542 543 readline(65537) -> (33) GET /?cmd=capabilities HTTP/1.1\r\n
543 544 readline(-1) -> (27) Accept-Encoding: identity\r\n
544 545 readline(-1) -> (35) accept: application/mercurial-0.1\r\n
545 546 readline(-1) -> (2?) host: localhost:$HGPORT\r\n (glob)
546 547 readline(-1) -> (49) user-agent: mercurial/proto-1.0 (Mercurial 4.2)\r\n
547 548 readline(-1) -> (2) \r\n
548 write(36 from 36) -> (909) HTTP/1.1 200 Script output follows\r\n
549 write(23 from 23) -> (886) Server: badhttpserver\r\n
550 write(37 from 37) -> (849) Date: Fri, 14 Apr 2017 00:00:00 GMT\r\n
551 write(41 from 41) -> (808) Content-Type: application/mercurial-0.1\r\n
552 write(21 from 21) -> (787) Content-Length: 405\r\n
553 write(2 from 2) -> (785) \r\n
554 write(405 from 405) -> (380) lookup changegroupsubset branchmap pushkey known getbundle unbundlehash batch streamreqs=generaldelta,revlogv1 bundle2=HG20%0Achangegroup%3D01%2C02%0Adigests%3Dmd5%2Csha1%2Csha512%0Aerror%3Dabort%2Cunsupportedcontent%2Cpushraced%2Cpushkey%0Ahgtagsfnodes%0Alistkeys%0Apushkey%0Aremote-changegroup%3Dhttp%2Chttps unbundle=HG10GZ,HG10BZ,HG10UN httpheader=1024 httpmediatype=0.1rx,0.1tx,0.2tx compression=none
549 write(36 from 36) -> (921) HTTP/1.1 200 Script output follows\r\n
550 write(23 from 23) -> (898) Server: badhttpserver\r\n
551 write(37 from 37) -> (861) Date: Fri, 14 Apr 2017 00:00:00 GMT\r\n
552 write(41 from 41) -> (820) Content-Type: application/mercurial-0.1\r\n
553 write(21 from 21) -> (799) Content-Length: 417\r\n
554 write(2 from 2) -> (797) \r\n
555 write(417 from 417) -> (380) lookup changegroupsubset branchmap pushkey known getbundle unbundlehash batch streamreqs=generaldelta,revlogv1 bundle2=HG20%0Abookmarks%0Achangegroup%3D01%2C02%0Adigests%3Dmd5%2Csha1%2Csha512%0Aerror%3Dabort%2Cunsupportedcontent%2Cpushraced%2Cpushkey%0Ahgtagsfnodes%0Alistkeys%0Apushkey%0Aremote-changegroup%3Dhttp%2Chttps unbundle=HG10GZ,HG10BZ,HG10UN httpheader=1024 httpmediatype=0.1rx,0.1tx,0.2tx compression=none
555 556 readline(65537) -> (26) GET /?cmd=batch HTTP/1.1\r\n
556 557 readline(-1) -> (27) Accept-Encoding: identity\r\n
557 558 readline(-1) -> (29) vary: X-HgArg-1,X-HgProto-1\r\n
558 559 readline(-1) -> (41) x-hgarg-1: cmds=heads+%3Bknown+nodes%3D\r\n
559 560 readline(-1) -> (48) x-hgproto-1: 0.1 0.2 comp=$USUAL_COMPRESSIONS$\r\n
560 561 readline(-1) -> (35) accept: application/mercurial-0.1\r\n
561 562 readline(-1) -> (2?) host: localhost:$HGPORT\r\n (glob)
562 563 readline(-1) -> (49) user-agent: mercurial/proto-1.0 (Mercurial 4.2)\r\n
563 564 readline(-1) -> (2) \r\n
564 565 write(36 from 36) -> (344) HTTP/1.1 200 Script output follows\r\n
565 566 write(23 from 23) -> (321) Server: badhttpserver\r\n
566 567 write(37 from 37) -> (284) Date: Fri, 14 Apr 2017 00:00:00 GMT\r\n
567 568 write(41 from 41) -> (243) Content-Type: application/mercurial-0.1\r\n
568 569 write(20 from 20) -> (223) Content-Length: 42\r\n
569 570 write(2 from 2) -> (221) \r\n
570 571 write(42 from 42) -> (179) 96ee1d7354c4ad7372047672c36a1f561e3a6a4c\n;
571 572 readline(65537) -> (30) GET /?cmd=getbundle HTTP/1.1\r\n
572 573 readline(-1) -> (27) Accept-Encoding: identity\r\n
573 574 readline(-1) -> (29) vary: X-HgArg-1,X-HgProto-1\r\n
574 readline(-1) -> (396) x-hgarg-1: bundlecaps=HG20%2Cbundle2%3DHG20%250Achangegroup%253D01%252C02%250Adigests%253Dmd5%252Csha1%252Csha512%250Aerror%253Dabort%252Cunsupportedcontent%252Cpushraced%252Cpushkey%250Ahgtagsfnodes%250Alistkeys%250Apushkey%250Aremote-changegroup%253Dhttp%252Chttps&cg=1&common=0000000000000000000000000000000000000000&heads=96ee1d7354c4ad7372047672c36a1f561e3a6a4c&listkeys=phases%2Cbookmarks\r\n
575 readline(-1) -> (410) x-hgarg-1: bundlecaps=HG20%2Cbundle2%3DHG20%250Abookmarks%250Achangegroup%253D01%252C02%250Adigests%253Dmd5%252Csha1%252Csha512%250Aerror%253Dabort%252Cunsupportedcontent%252Cpushraced%252Cpushkey%250Ahgtagsfnodes%250Alistkeys%250Apushkey%250Aremote-changegroup%253Dhttp%252Chttps&cg=1&common=0000000000000000000000000000000000000000&heads=96ee1d7354c4ad7372047672c36a1f561e3a6a4c&listkeys=phases%2Cbookmarks\r\n
575 576 readline(-1) -> (48) x-hgproto-1: 0.1 0.2 comp=$USUAL_COMPRESSIONS$\r\n
576 577 readline(-1) -> (35) accept: application/mercurial-0.1\r\n
577 578 readline(-1) -> (2?) host: localhost:$HGPORT\r\n (glob)
578 579 readline(-1) -> (49) user-agent: mercurial/proto-1.0 (Mercurial 4.2)\r\n
579 580 readline(-1) -> (2) \r\n
580 581 write(36 from 36) -> (143) HTTP/1.1 200 Script output follows\r\n
581 582 write(23 from 23) -> (120) Server: badhttpserver\r\n
582 583 write(37 from 37) -> (83) Date: Fri, 14 Apr 2017 00:00:00 GMT\r\n
583 584 write(41 from 41) -> (42) Content-Type: application/mercurial-0.2\r\n
584 585 write(28 from 28) -> (14) Transfer-Encoding: chunked\r\n
585 586 write(2 from 2) -> (12) \r\n
586 587 write(6 from 6) -> (6) 1\\r\\n\x04\\r\\n (esc)
587 588 write(6 from 9) -> (0) 4\r\nnon
588 589 write limit reached; closing socket
589 590 write(27) -> 15\r\nInternal Server Error\r\n
590 591
591 592 $ rm -f error.log
592 593
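The framed writes in the logs above (``1\r\n\x04\r\n``, ``4\r\nnone\r\n``) are standard HTTP/1.1 chunked transfer-encoding: a hex-encoded length, CRLF, the payload, CRLF. A minimal sketch (the helper name is hypothetical, not part of the test server):

```python
def chunk(payload):
    # Frame one HTTP/1.1 chunked-transfer chunk (RFC 7230, section 4.1):
    # hex-encoded length, CRLF, payload bytes, CRLF.
    return b'%x\r\n%s\r\n' % (len(payload), payload)


# These match the framed writes seen in the error logs above: the
# 4-byte compression string 'none' becomes a 9-byte chunk on the wire.
chunk(b'\x04')   # -> b'1\r\n\x04\r\n' (6 bytes)
chunk(b'none')   # -> b'4\r\nnone\r\n' (9 bytes)
```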
593 594 Server sends partial bundle2 header magic
594 595
595 $ hg serve --config badserver.closeaftersendbytes=954 -p $HGPORT -d --pid-file=hg.pid -E error.log
596 $ hg serve --config badserver.closeaftersendbytes=966 -p $HGPORT -d --pid-file=hg.pid -E error.log
596 597 $ cat hg.pid > $DAEMON_PIDS
597 598
598 599 $ hg clone http://localhost:$HGPORT/ clone
599 600 requesting all changes
600 601 abort: HTTP request error (incomplete response; expected 1 bytes got 3)
601 602 (this may be an intermittent network failure; if the error persists, consider contacting the network or server operator)
602 603 [255]
603 604
604 605 $ killdaemons.py $DAEMON_PIDS
605 606
606 607 $ tail -7 error.log
607 608 write(28 from 28) -> (23) Transfer-Encoding: chunked\r\n
608 609 write(2 from 2) -> (21) \r\n
609 610 write(6 from 6) -> (15) 1\\r\\n\x04\\r\\n (esc)
610 611 write(9 from 9) -> (6) 4\r\nnone\r\n
611 612 write(6 from 9) -> (0) 4\r\nHG2
612 613 write limit reached; closing socket
613 614 write(27) -> 15\r\nInternal Server Error\r\n
614 615
615 616 $ rm -f error.log
616 617
617 618 Server sends incomplete bundle2 stream params length
618 619
619 $ hg serve --config badserver.closeaftersendbytes=963 -p $HGPORT -d --pid-file=hg.pid -E error.log
620 $ hg serve --config badserver.closeaftersendbytes=975 -p $HGPORT -d --pid-file=hg.pid -E error.log
620 621 $ cat hg.pid > $DAEMON_PIDS
621 622
622 623 $ hg clone http://localhost:$HGPORT/ clone
623 624 requesting all changes
624 625 abort: HTTP request error (incomplete response; expected 1 bytes got 3)
625 626 (this may be an intermittent network failure; if the error persists, consider contacting the network or server operator)
626 627 [255]
627 628
628 629 $ killdaemons.py $DAEMON_PIDS
629 630
630 631 $ tail -8 error.log
631 632 write(28 from 28) -> (32) Transfer-Encoding: chunked\r\n
632 633 write(2 from 2) -> (30) \r\n
633 634 write(6 from 6) -> (24) 1\\r\\n\x04\\r\\n (esc)
634 635 write(9 from 9) -> (15) 4\r\nnone\r\n
635 636 write(9 from 9) -> (6) 4\r\nHG20\r\n
636 637 write(6 from 9) -> (0) 4\\r\\n\x00\x00\x00 (esc)
637 638 write limit reached; closing socket
638 639 write(27) -> 15\r\nInternal Server Error\r\n
639 640
640 641 $ rm -f error.log
641 642
642 643 Server stops after bundle2 stream params header
643 644
644 $ hg serve --config badserver.closeaftersendbytes=966 -p $HGPORT -d --pid-file=hg.pid -E error.log
645 $ hg serve --config badserver.closeaftersendbytes=978 -p $HGPORT -d --pid-file=hg.pid -E error.log
645 646 $ cat hg.pid > $DAEMON_PIDS
646 647
647 648 $ hg clone http://localhost:$HGPORT/ clone
648 649 requesting all changes
649 650 abort: HTTP request error (incomplete response)
650 651 (this may be an intermittent network failure; if the error persists, consider contacting the network or server operator)
651 652 [255]
652 653
653 654 $ killdaemons.py $DAEMON_PIDS
654 655
655 656 $ tail -8 error.log
656 657 write(28 from 28) -> (35) Transfer-Encoding: chunked\r\n
657 658 write(2 from 2) -> (33) \r\n
658 659 write(6 from 6) -> (27) 1\\r\\n\x04\\r\\n (esc)
659 660 write(9 from 9) -> (18) 4\r\nnone\r\n
660 661 write(9 from 9) -> (9) 4\r\nHG20\r\n
661 662 write(9 from 9) -> (0) 4\\r\\n\x00\x00\x00\x00\\r\\n (esc)
662 663 write limit reached; closing socket
663 664 write(27) -> 15\r\nInternal Server Error\r\n
664 665
665 666 $ rm -f error.log
666 667
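The writes above spell out the start of a bundle2 stream: the ``HG20`` magic, then a big-endian uint32 giving the size of the stream-level parameters (here ``\x00\x00\x00\x00``, i.e. no parameters). A minimal parsing sketch under that layout (the function name is hypothetical):

```python
import struct


def read_stream_header(data):
    # A bundle2 stream opens with a 4-byte magic ('HG20') followed by a
    # big-endian uint32: the byte length of the stream-level parameters.
    magic = data[:4]
    if magic != b'HG20':
        raise ValueError('unknown bundle2 magic: %r' % magic)
    (paramssize,) = struct.unpack('>I', data[4:8])
    params = data[8:8 + paramssize]
    rest = data[8 + paramssize:]
    return params, rest


# The server above wrote 'HG20' then \x00\x00\x00\x00: an empty
# parameter blob, with part data following immediately.
params, rest = read_stream_header(b'HG20\x00\x00\x00\x00')
```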
667 668 Server stops sending after bundle2 part header length
668 669
669 $ hg serve --config badserver.closeaftersendbytes=975 -p $HGPORT -d --pid-file=hg.pid -E error.log
670 $ hg serve --config badserver.closeaftersendbytes=987 -p $HGPORT -d --pid-file=hg.pid -E error.log
670 671 $ cat hg.pid > $DAEMON_PIDS
671 672
672 673 $ hg clone http://localhost:$HGPORT/ clone
673 674 requesting all changes
674 675 abort: HTTP request error (incomplete response)
675 676 (this may be an intermittent network failure; if the error persists, consider contacting the network or server operator)
676 677 [255]
677 678
678 679 $ killdaemons.py $DAEMON_PIDS
679 680
680 681 $ tail -9 error.log
681 682 write(28 from 28) -> (44) Transfer-Encoding: chunked\r\n
682 683 write(2 from 2) -> (42) \r\n
683 684 write(6 from 6) -> (36) 1\\r\\n\x04\\r\\n (esc)
684 685 write(9 from 9) -> (27) 4\r\nnone\r\n
685 686 write(9 from 9) -> (18) 4\r\nHG20\r\n
686 687 write(9 from 9) -> (9) 4\\r\\n\x00\x00\x00\x00\\r\\n (esc)
687 688 write(9 from 9) -> (0) 4\\r\\n\x00\x00\x00)\\r\\n (esc)
688 689 write limit reached; closing socket
689 690 write(27) -> 15\r\nInternal Server Error\r\n
690 691
691 692 $ rm -f error.log
692 693
693 694 Server stops sending after bundle2 part header
694 695
695 $ hg serve --config badserver.closeaftersendbytes=1022 -p $HGPORT -d --pid-file=hg.pid -E error.log
696 $ hg serve --config badserver.closeaftersendbytes=1034 -p $HGPORT -d --pid-file=hg.pid -E error.log
696 697 $ cat hg.pid > $DAEMON_PIDS
697 698
698 699 $ hg clone http://localhost:$HGPORT/ clone
699 700 requesting all changes
700 701 adding changesets
701 702 transaction abort!
702 703 rollback completed
703 704 abort: HTTP request error (incomplete response)
704 705 (this may be an intermittent network failure; if the error persists, consider contacting the network or server operator)
705 706 [255]
706 707
707 708 $ killdaemons.py $DAEMON_PIDS
708 709
709 710 $ tail -10 error.log
710 711 write(28 from 28) -> (91) Transfer-Encoding: chunked\r\n
711 712 write(2 from 2) -> (89) \r\n
712 713 write(6 from 6) -> (83) 1\\r\\n\x04\\r\\n (esc)
713 714 write(9 from 9) -> (74) 4\r\nnone\r\n
714 715 write(9 from 9) -> (65) 4\r\nHG20\r\n
715 716 write(9 from 9) -> (56) 4\\r\\n\x00\x00\x00\x00\\r\\n (esc)
716 717 write(9 from 9) -> (47) 4\\r\\n\x00\x00\x00)\\r\\n (esc)
717 718 write(47 from 47) -> (0) 29\\r\\n\x0bCHANGEGROUP\x00\x00\x00\x00\x01\x01\x07\x02 \x01version02nbchanges1\\r\\n (esc)
718 719 write limit reached; closing socket
719 720 write(27) -> 15\r\nInternal Server Error\r\n
720 721
721 722 $ rm -f error.log
722 723
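The 0x29 (41) bytes written after the part-header-size field above are a bundle2 part header. Per the bundle2 format documentation, it opens with a uint8 part-type length, the part type name, and a big-endian uint32 part id, with parameter counts, sizes, and values following. A sketch that decodes just the type and id from the bytes in the log (parameter decoding omitted; the header bytes are reconstructed from the escaped output above, with ``\x09`` assumed for the literal tab):

```python
import struct


def parse_part_type(header):
    # A bundle2 part header starts with a uint8 giving the length of the
    # part type name, the name itself, then a big-endian uint32 part id;
    # parameter counts, sizes, and values follow (not decoded here).
    (typelen,) = struct.unpack('>B', header[0:1])
    parttype = header[1:1 + typelen]
    (partid,) = struct.unpack('>I', header[1 + typelen:5 + typelen])
    return parttype, partid


# The 41-byte header above decodes to part type 'CHANGEGROUP' with
# part id 0, followed by the 'version'/'02' and 'nbchanges'/'1' params.
header = (b'\x0bCHANGEGROUP\x00\x00\x00\x00'
          b'\x01\x01\x07\x02\x09\x01version02nbchanges1')
parttype, partid = parse_part_type(header)
```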
723 724 Server stops after bundle2 part payload chunk size
724 725
725 $ hg serve --config badserver.closeaftersendbytes=1031 -p $HGPORT -d --pid-file=hg.pid -E error.log
726 $ hg serve --config badserver.closeaftersendbytes=1043 -p $HGPORT -d --pid-file=hg.pid -E error.log
726 727 $ cat hg.pid > $DAEMON_PIDS
727 728
728 729 $ hg clone http://localhost:$HGPORT/ clone
729 730 requesting all changes
730 731 adding changesets
731 732 transaction abort!
732 733 rollback completed
733 734 abort: HTTP request error (incomplete response)
734 735 (this may be an intermittent network failure; if the error persists, consider contacting the network or server operator)
735 736 [255]
736 737
737 738 $ killdaemons.py $DAEMON_PIDS
738 739
739 740 $ tail -11 error.log
740 741 write(28 from 28) -> (100) Transfer-Encoding: chunked\r\n
741 742 write(2 from 2) -> (98) \r\n
742 743 write(6 from 6) -> (92) 1\\r\\n\x04\\r\\n (esc)
743 744 write(9 from 9) -> (83) 4\r\nnone\r\n
744 745 write(9 from 9) -> (74) 4\r\nHG20\r\n
745 746 write(9 from 9) -> (65) 4\\r\\n\x00\x00\x00\x00\\r\\n (esc)
746 747 write(9 from 9) -> (56) 4\\r\\n\x00\x00\x00)\\r\\n (esc)
747 748 write(47 from 47) -> (9) 29\\r\\n\x0bCHANGEGROUP\x00\x00\x00\x00\x01\x01\x07\x02 \x01version02nbchanges1\\r\\n (esc)
748 749 write(9 from 9) -> (0) 4\\r\\n\x00\x00\x01\xd2\\r\\n (esc)
749 750 write limit reached; closing socket
750 751 write(27) -> 15\r\nInternal Server Error\r\n
751 752
752 753 $ rm -f error.log
753 754
754 755 Server stops sending in middle of bundle2 payload chunk
755 756
756 $ hg serve --config badserver.closeaftersendbytes=1504 -p $HGPORT -d --pid-file=hg.pid -E error.log
757 $ hg serve --config badserver.closeaftersendbytes=1516 -p $HGPORT -d --pid-file=hg.pid -E error.log
757 758 $ cat hg.pid > $DAEMON_PIDS
758 759
759 760 $ hg clone http://localhost:$HGPORT/ clone
760 761 requesting all changes
761 762 adding changesets
762 763 transaction abort!
763 764 rollback completed
764 765 abort: HTTP request error (incomplete response)
765 766 (this may be an intermittent network failure; if the error persists, consider contacting the network or server operator)
766 767 [255]
767 768
768 769 $ killdaemons.py $DAEMON_PIDS
769 770
770 771 $ tail -12 error.log
771 772 write(28 from 28) -> (573) Transfer-Encoding: chunked\r\n
772 773 write(2 from 2) -> (571) \r\n
773 774 write(6 from 6) -> (565) 1\\r\\n\x04\\r\\n (esc)
774 775 write(9 from 9) -> (556) 4\r\nnone\r\n
775 776 write(9 from 9) -> (547) 4\r\nHG20\r\n
776 777 write(9 from 9) -> (538) 4\\r\\n\x00\x00\x00\x00\\r\\n (esc)
777 778 write(9 from 9) -> (529) 4\\r\\n\x00\x00\x00)\\r\\n (esc)
778 779 write(47 from 47) -> (482) 29\\r\\n\x0bCHANGEGROUP\x00\x00\x00\x00\x01\x01\x07\x02 \x01version02nbchanges1\\r\\n (esc)
779 780 write(9 from 9) -> (473) 4\\r\\n\x00\x00\x01\xd2\\r\\n (esc)
780 781 write(473 from 473) -> (0) 1d2\\r\\n\x00\x00\x00\xb2\x96\xee\x1dsT\xc4\xadsr\x04vr\xc3j\x1fV\x1e:jL\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x96\xee\x1dsT\xc4\xadsr\x04vr\xc3j\x1fV\x1e:jL\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00>6a3df4de388f3c4f8e28f4f9a814299a3cbb5f50\\ntest\\n0 0\\nfoo\\n\\ninitial\x00\x00\x00\x00\x00\x00\x00\xa1j=\xf4\xde8\x8f<O\x8e(\xf4\xf9\xa8\x14)\x9a<\xbb_P\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x96\xee\x1dsT\xc4\xadsr\x04vr\xc3j\x1fV\x1e:jL\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00-foo\x00b80de5d138758541c5f05265ad144ab9fa86d1db\\n\x00\x00\x00\x00\x00\x00\x00\x07foo\x00\x00\x00h\xb8\\r\xe5\xd18u\x85A\xc5\xf0Re\xad\x14J\xb9\xfa\x86\xd1\xdb\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x96\xee\x1dsT\xc4\xadsr\x04vr\xc3j\x1fV\x1e:jL\x00\x00\x00\x00\x00\x00\x00\x00\\r\\n (esc)
781 782 write limit reached; closing socket
782 783 write(27) -> 15\r\nInternal Server Error\r\n
783 784
784 785 $ rm -f error.log
785 786
786 787 Server stops sending after 0 length payload chunk size
787 788
788 $ hg serve --config badserver.closeaftersendbytes=1513 -p $HGPORT -d --pid-file=hg.pid -E error.log
789 $ hg serve --config badserver.closeaftersendbytes=1525 -p $HGPORT -d --pid-file=hg.pid -E error.log
789 790 $ cat hg.pid > $DAEMON_PIDS
790 791
791 792 $ hg clone http://localhost:$HGPORT/ clone
792 793 requesting all changes
793 794 adding changesets
794 795 adding manifests
795 796 adding file changes
796 797 added 1 changesets with 1 changes to 1 files
797 798 transaction abort!
798 799 rollback completed
799 800 abort: HTTP request error (incomplete response)
800 801 (this may be an intermittent network failure; if the error persists, consider contacting the network or server operator)
801 802 [255]
802 803
803 804 $ killdaemons.py $DAEMON_PIDS
804 805
805 806 $ tail -13 error.log
806 807 write(28 from 28) -> (582) Transfer-Encoding: chunked\r\n
807 808 write(2 from 2) -> (580) \r\n
808 809 write(6 from 6) -> (574) 1\\r\\n\x04\\r\\n (esc)
809 810 write(9 from 9) -> (565) 4\r\nnone\r\n
810 811 write(9 from 9) -> (556) 4\r\nHG20\r\n
811 812 write(9 from 9) -> (547) 4\\r\\n\x00\x00\x00\x00\\r\\n (esc)
812 813 write(9 from 9) -> (538) 4\\r\\n\x00\x00\x00)\\r\\n (esc)
813 814 write(47 from 47) -> (491) 29\\r\\n\x0bCHANGEGROUP\x00\x00\x00\x00\x01\x01\x07\x02 \x01version02nbchanges1\\r\\n (esc)
814 815 write(9 from 9) -> (482) 4\\r\\n\x00\x00\x01\xd2\\r\\n (esc)
815 816 write(473 from 473) -> (9) 1d2\\r\\n\x00\x00\x00\xb2\x96\xee\x1dsT\xc4\xadsr\x04vr\xc3j\x1fV\x1e:jL\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x96\xee\x1dsT\xc4\xadsr\x04vr\xc3j\x1fV\x1e:jL\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00>6a3df4de388f3c4f8e28f4f9a814299a3cbb5f50\\ntest\\n0 0\\nfoo\\n\\ninitial\x00\x00\x00\x00\x00\x00\x00\xa1j=\xf4\xde8\x8f<O\x8e(\xf4\xf9\xa8\x14)\x9a<\xbb_P\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x96\xee\x1dsT\xc4\xadsr\x04vr\xc3j\x1fV\x1e:jL\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00-foo\x00b80de5d138758541c5f05265ad144ab9fa86d1db\\n\x00\x00\x00\x00\x00\x00\x00\x07foo\x00\x00\x00h\xb8\\r\xe5\xd18u\x85A\xc5\xf0Re\xad\x14J\xb9\xfa\x86\xd1\xdb\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x96\xee\x1dsT\xc4\xadsr\x04vr\xc3j\x1fV\x1e:jL\x00\x00\x00\x00\x00\x00\x00\x00\\r\\n (esc)
816 817 write(9 from 9) -> (0) 4\\r\\n\x00\x00\x00\x00\\r\\n (esc)
817 818 write limit reached; closing socket
818 819 write(27) -> 15\r\nInternal Server Error\r\n
819 820
820 821 $ rm -f error.log
821 822
822 823 Server stops sending after 0 part bundle part header (indicating end of bundle2 payload)
823 824 This is before the 0 size chunked transfer part that signals end of HTTP response.
824 825
825 $ hg serve --config badserver.closeaftersendbytes=1710 -p $HGPORT -d --pid-file=hg.pid -E error.log
826 $ hg serve --config badserver.closeaftersendbytes=1722 -p $HGPORT -d --pid-file=hg.pid -E error.log
826 827 $ cat hg.pid > $DAEMON_PIDS
827 828
828 829 $ hg clone http://localhost:$HGPORT/ clone
829 830 requesting all changes
830 831 adding changesets
831 832 adding manifests
832 833 adding file changes
833 834 added 1 changesets with 1 changes to 1 files
834 835 new changesets 96ee1d7354c4
835 836 updating to branch default
836 837 1 files updated, 0 files merged, 0 files removed, 0 files unresolved
837 838
838 839 $ killdaemons.py $DAEMON_PIDS
839 840
840 841 $ tail -22 error.log
841 842 write(28 from 28) -> (779) Transfer-Encoding: chunked\r\n
842 843 write(2 from 2) -> (777) \r\n
843 844 write(6 from 6) -> (771) 1\\r\\n\x04\\r\\n (esc)
844 845 write(9 from 9) -> (762) 4\r\nnone\r\n
845 846 write(9 from 9) -> (753) 4\r\nHG20\r\n
846 847 write(9 from 9) -> (744) 4\\r\\n\x00\x00\x00\x00\\r\\n (esc)
847 848 write(9 from 9) -> (735) 4\\r\\n\x00\x00\x00)\\r\\n (esc)
848 849 write(47 from 47) -> (688) 29\\r\\n\x0bCHANGEGROUP\x00\x00\x00\x00\x01\x01\x07\x02 \x01version02nbchanges1\\r\\n (esc)
849 850 write(9 from 9) -> (679) 4\\r\\n\x00\x00\x01\xd2\\r\\n (esc)
850 851 write(473 from 473) -> (206) 1d2\\r\\n\x00\x00\x00\xb2\x96\xee\x1dsT\xc4\xadsr\x04vr\xc3j\x1fV\x1e:jL\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x96\xee\x1dsT\xc4\xadsr\x04vr\xc3j\x1fV\x1e:jL\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00>6a3df4de388f3c4f8e28f4f9a814299a3cbb5f50\\ntest\\n0 0\\nfoo\\n\\ninitial\x00\x00\x00\x00\x00\x00\x00\xa1j=\xf4\xde8\x8f<O\x8e(\xf4\xf9\xa8\x14)\x9a<\xbb_P\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x96\xee\x1dsT\xc4\xadsr\x04vr\xc3j\x1fV\x1e:jL\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00-foo\x00b80de5d138758541c5f05265ad144ab9fa86d1db\\n\x00\x00\x00\x00\x00\x00\x00\x07foo\x00\x00\x00h\xb8\\r\xe5\xd18u\x85A\xc5\xf0Re\xad\x14J\xb9\xfa\x86\xd1\xdb\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x96\xee\x1dsT\xc4\xadsr\x04vr\xc3j\x1fV\x1e:jL\x00\x00\x00\x00\x00\x00\x00\x00\\r\\n (esc)
851 852 write(9 from 9) -> (197) 4\\r\\n\x00\x00\x00\x00\\r\\n (esc)
852 853 write(9 from 9) -> (188) 4\\r\\n\x00\x00\x00 \\r\\n (esc)
853 854 write(38 from 38) -> (150) 20\\r\\n\x08LISTKEYS\x00\x00\x00\x01\x01\x00 \x06namespacephases\\r\\n (esc)
854 855 write(9 from 9) -> (141) 4\\r\\n\x00\x00\x00:\\r\\n (esc)
855 856 write(64 from 64) -> (77) 3a\r\n96ee1d7354c4ad7372047672c36a1f561e3a6a4c 1\npublishing True\r\n
856 857 write(9 from 9) -> (68) 4\\r\\n\x00\x00\x00\x00\\r\\n (esc)
857 858 write(9 from 9) -> (59) 4\\r\\n\x00\x00\x00#\\r\\n (esc)
858 859 write(41 from 41) -> (18) 23\\r\\n\x08LISTKEYS\x00\x00\x00\x02\x01\x00 namespacebookmarks\\r\\n (esc)
859 860 write(9 from 9) -> (9) 4\\r\\n\x00\x00\x00\x00\\r\\n (esc)
860 861 write(9 from 9) -> (0) 4\\r\\n\x00\x00\x00\x00\\r\\n (esc)
861 862 write limit reached; closing socket
862 863 write(27) -> 15\r\nInternal Server Error\r\n
863 864
864 865 $ rm -f error.log
865 866 $ rm -rf clone
866 867
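The escaped 4-byte values in these logs (`\x00\x00\x01\xd2`, then `\x00\x00\x00\x00`) are big-endian uint32 bundle2 payload-chunk sizes; a zero size marks the end of a part's payload. A rough sketch of that framing, inferred from the log output rather than taken from the real bundle2 implementation:

```python
import struct

def payload_chunks(payload: bytes, chunksize: int = 0x1D2):
    # Emit each payload chunk prefixed with its big-endian uint32 size,
    # then a zero size to mark the end of the part's payload.
    for i in range(0, len(payload), chunksize):
        part = payload[i:i + chunksize]
        yield struct.pack(">I", len(part)) + part
    yield struct.pack(">I", 0)

# A 0x1d2-byte payload produces the \x00\x00\x01\xd2 size prefix
# followed by the zero end-of-payload marker, as in the log.
chunks = list(payload_chunks(b"\x00" * 0x1D2))
```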
867 868 Server sends a size-0 chunked-transfer chunk without its terminating \r\n
868 869
869 $ hg serve --config badserver.closeaftersendbytes=1713 -p $HGPORT -d --pid-file=hg.pid -E error.log
870 $ hg serve --config badserver.closeaftersendbytes=1725 -p $HGPORT -d --pid-file=hg.pid -E error.log
870 871 $ cat hg.pid > $DAEMON_PIDS
871 872
872 873 $ hg clone http://localhost:$HGPORT/ clone
873 874 requesting all changes
874 875 adding changesets
875 876 adding manifests
876 877 adding file changes
877 878 added 1 changesets with 1 changes to 1 files
878 879 new changesets 96ee1d7354c4
879 880 updating to branch default
880 881 1 files updated, 0 files merged, 0 files removed, 0 files unresolved
881 882
882 883 $ killdaemons.py $DAEMON_PIDS
883 884
884 885 $ tail -23 error.log
885 886 write(28 from 28) -> (782) Transfer-Encoding: chunked\r\n
886 887 write(2 from 2) -> (780) \r\n
887 888 write(6 from 6) -> (774) 1\\r\\n\x04\\r\\n (esc)
888 889 write(9 from 9) -> (765) 4\r\nnone\r\n
889 890 write(9 from 9) -> (756) 4\r\nHG20\r\n
890 891 write(9 from 9) -> (747) 4\\r\\n\x00\x00\x00\x00\\r\\n (esc)
891 892 write(9 from 9) -> (738) 4\\r\\n\x00\x00\x00)\\r\\n (esc)
892 893 write(47 from 47) -> (691) 29\\r\\n\x0bCHANGEGROUP\x00\x00\x00\x00\x01\x01\x07\x02 \x01version02nbchanges1\\r\\n (esc)
893 894 write(9 from 9) -> (682) 4\\r\\n\x00\x00\x01\xd2\\r\\n (esc)
894 895 write(473 from 473) -> (209) 1d2\\r\\n\x00\x00\x00\xb2\x96\xee\x1dsT\xc4\xadsr\x04vr\xc3j\x1fV\x1e:jL\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x96\xee\x1dsT\xc4\xadsr\x04vr\xc3j\x1fV\x1e:jL\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00>6a3df4de388f3c4f8e28f4f9a814299a3cbb5f50\\ntest\\n0 0\\nfoo\\n\\ninitial\x00\x00\x00\x00\x00\x00\x00\xa1j=\xf4\xde8\x8f<O\x8e(\xf4\xf9\xa8\x14)\x9a<\xbb_P\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x96\xee\x1dsT\xc4\xadsr\x04vr\xc3j\x1fV\x1e:jL\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00-foo\x00b80de5d138758541c5f05265ad144ab9fa86d1db\\n\x00\x00\x00\x00\x00\x00\x00\x07foo\x00\x00\x00h\xb8\\r\xe5\xd18u\x85A\xc5\xf0Re\xad\x14J\xb9\xfa\x86\xd1\xdb\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x96\xee\x1dsT\xc4\xadsr\x04vr\xc3j\x1fV\x1e:jL\x00\x00\x00\x00\x00\x00\x00\x00\\r\\n (esc)
895 896 write(9 from 9) -> (200) 4\\r\\n\x00\x00\x00\x00\\r\\n (esc)
896 897 write(9 from 9) -> (191) 4\\r\\n\x00\x00\x00 \\r\\n (esc)
897 898 write(38 from 38) -> (153) 20\\r\\n\x08LISTKEYS\x00\x00\x00\x01\x01\x00 \x06namespacephases\\r\\n (esc)
898 899 write(9 from 9) -> (144) 4\\r\\n\x00\x00\x00:\\r\\n (esc)
899 900 write(64 from 64) -> (80) 3a\r\n96ee1d7354c4ad7372047672c36a1f561e3a6a4c 1\npublishing True\r\n
900 901 write(9 from 9) -> (71) 4\\r\\n\x00\x00\x00\x00\\r\\n (esc)
901 902 write(9 from 9) -> (62) 4\\r\\n\x00\x00\x00#\\r\\n (esc)
902 903 write(41 from 41) -> (21) 23\\r\\n\x08LISTKEYS\x00\x00\x00\x02\x01\x00 namespacebookmarks\\r\\n (esc)
903 904 write(9 from 9) -> (12) 4\\r\\n\x00\x00\x00\x00\\r\\n (esc)
904 905 write(9 from 9) -> (3) 4\\r\\n\x00\x00\x00\x00\\r\\n (esc)
905 906 write(3 from 5) -> (0) 0\r\n
906 907 write limit reached; closing socket
907 908 write(27) -> 15\r\nInternal Server Error\r\n
908 909
909 910 $ rm -f error.log
910 911 $ rm -rf clone
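Every `badserver.closeaftersendbytes` scenario above follows the same pattern: writes go through until a byte budget is exhausted, then the socket is closed mid-stream ("write limit reached; closing socket"). A hypothetical sketch of that mechanism, with illustrative names that do not match the real badserver extension:

```python
class LimitedWriter:
    """Write through to a sink until a byte budget runs out, then close."""

    def __init__(self, sink, limit):
        self._sink = sink        # e.g. a list collecting written chunks
        self.remaining = limit   # closeaftersendbytes-style budget
        self.closed = False

    def write(self, data):
        if self.closed:
            raise OSError("socket closed")
        chunk = data[:self.remaining]
        self._sink.append(chunk)
        self.remaining -= len(chunk)
        if self.remaining == 0:
            self.closed = True   # "write limit reached; closing socket"
        return len(chunk)
```

With a budget of 5, writing `b"abc"` then `b"defg"` records `b"abcde"` and closes, mirroring the truncated `write(3 from 5)` entries in the logs.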
@@ -1,565 +1,565 b''
1 1 This test is a duplicate of 'test-http.t'; feel free to factor out
2 2 parts that are not bundle1/bundle2 specific.
3 3
4 4 $ cat << EOF >> $HGRCPATH
5 5 > [devel]
6 6 > # This test is dedicated to interaction through old bundle
7 7 > legacy.exchange = bundle1
8 8 > [format] # temporary settings
9 9 > usegeneraldelta=yes
10 10 > EOF
11 11
12 12
13 13 This test tries to exercise the ssh functionality with a dummy script
14 14
15 15 creating 'remote' repo
16 16
17 17 $ hg init remote
18 18 $ cd remote
19 19 $ echo this > foo
20 20 $ echo this > fooO
21 21 $ hg ci -A -m "init" foo fooO
22 22
23 23 insert a closed branch (issue4428)
24 24
25 25 $ hg up null
26 26 0 files updated, 0 files merged, 2 files removed, 0 files unresolved
27 27 $ hg branch closed
28 28 marked working directory as branch closed
29 29 (branches are permanent and global, did you want a bookmark?)
30 30 $ hg ci -mc0
31 31 $ hg ci --close-branch -mc1
32 32 $ hg up -q default
33 33
34 34 configure for serving
35 35
36 36 $ cat <<EOF > .hg/hgrc
37 37 > [server]
38 38 > uncompressed = True
39 39 >
40 40 > [hooks]
41 41 > changegroup = sh -c "printenv.py changegroup-in-remote 0 ../dummylog"
42 42 > EOF
43 43 $ cd ..
44 44
45 45 repo not found error
46 46
47 47 $ hg clone -e "\"$PYTHON\" \"$TESTDIR/dummyssh\"" ssh://user@dummy/nonexistent local
48 48 remote: abort: repository nonexistent not found!
49 49 abort: no suitable response from remote hg!
50 50 [255]
51 51
52 52 non-existent absolute path
53 53
54 54 $ hg clone -e "\"$PYTHON\" \"$TESTDIR/dummyssh\"" ssh://user@dummy//`pwd`/nonexistent local
55 55 remote: abort: repository /$TESTTMP/nonexistent not found!
56 56 abort: no suitable response from remote hg!
57 57 [255]
58 58
59 59 clone remote via stream
60 60
61 61 $ hg clone -e "\"$PYTHON\" \"$TESTDIR/dummyssh\"" --stream ssh://user@dummy/remote local-stream
62 62 streaming all changes
63 63 4 files to transfer, 602 bytes of data
64 64 transferred 602 bytes in * seconds (*) (glob)
65 65 searching for changes
66 66 no changes found
67 67 updating to branch default
68 68 2 files updated, 0 files merged, 0 files removed, 0 files unresolved
69 69 $ cd local-stream
70 70 $ hg verify
71 71 checking changesets
72 72 checking manifests
73 73 crosschecking files in changesets and manifests
74 74 checking files
75 75 2 files, 3 changesets, 2 total revisions
76 76 $ hg branches
77 77 default 0:1160648e36ce
78 78 $ cd ..
79 79
80 80 clone bookmarks via stream
81 81
82 82 $ hg -R local-stream book mybook
83 83 $ hg clone -e "\"$PYTHON\" \"$TESTDIR/dummyssh\"" --stream ssh://user@dummy/local-stream stream2
84 84 streaming all changes
85 85 4 files to transfer, 602 bytes of data
86 86 transferred 602 bytes in * seconds (*) (glob)
87 87 searching for changes
88 88 no changes found
89 89 updating to branch default
90 90 2 files updated, 0 files merged, 0 files removed, 0 files unresolved
91 91 $ cd stream2
92 92 $ hg book
93 93 mybook 0:1160648e36ce
94 94 $ cd ..
95 95 $ rm -rf local-stream stream2
96 96
97 97 clone remote via pull
98 98
99 99 $ hg clone -e "\"$PYTHON\" \"$TESTDIR/dummyssh\"" ssh://user@dummy/remote local
100 100 requesting all changes
101 101 adding changesets
102 102 adding manifests
103 103 adding file changes
104 104 added 3 changesets with 2 changes to 2 files
105 105 new changesets 1160648e36ce:ad076bfb429d
106 106 updating to branch default
107 107 2 files updated, 0 files merged, 0 files removed, 0 files unresolved
108 108
109 109 verify
110 110
111 111 $ cd local
112 112 $ hg verify
113 113 checking changesets
114 114 checking manifests
115 115 crosschecking files in changesets and manifests
116 116 checking files
117 117 2 files, 3 changesets, 2 total revisions
118 118 $ cat >> .hg/hgrc <<EOF
119 119 > [hooks]
120 120 > changegroup = sh -c "printenv.py changegroup-in-local 0 ../dummylog"
121 121 > EOF
122 122
123 123 empty default pull
124 124
125 125 $ hg paths
126 126 default = ssh://user@dummy/remote
127 127 $ hg pull -e "\"$PYTHON\" \"$TESTDIR/dummyssh\""
128 128 pulling from ssh://user@dummy/remote
129 129 searching for changes
130 130 no changes found
131 131
132 132 pull from wrong ssh URL
133 133
134 134 $ hg pull -e "\"$PYTHON\" \"$TESTDIR/dummyssh\"" ssh://user@dummy/doesnotexist
135 135 pulling from ssh://user@dummy/doesnotexist
136 136 remote: abort: repository doesnotexist not found!
137 137 abort: no suitable response from remote hg!
138 138 [255]
139 139
140 140 local change
141 141
142 142 $ echo bleah > foo
143 143 $ hg ci -m "add"
144 144
145 145 updating rc
146 146
147 147 $ echo "default-push = ssh://user@dummy/remote" >> .hg/hgrc
148 148 $ echo "[ui]" >> .hg/hgrc
149 149 $ echo "ssh = \"$PYTHON\" \"$TESTDIR/dummyssh\"" >> .hg/hgrc
150 150
151 151 find outgoing
152 152
153 153 $ hg out ssh://user@dummy/remote
154 154 comparing with ssh://user@dummy/remote
155 155 searching for changes
156 156 changeset: 3:a28a9d1a809c
157 157 tag: tip
158 158 parent: 0:1160648e36ce
159 159 user: test
160 160 date: Thu Jan 01 00:00:00 1970 +0000
161 161 summary: add
162 162
163 163
164 164 find incoming on the remote side
165 165
166 166 $ hg incoming -R ../remote -e "\"$PYTHON\" \"$TESTDIR/dummyssh\"" ssh://user@dummy/local
167 167 comparing with ssh://user@dummy/local
168 168 searching for changes
169 169 changeset: 3:a28a9d1a809c
170 170 tag: tip
171 171 parent: 0:1160648e36ce
172 172 user: test
173 173 date: Thu Jan 01 00:00:00 1970 +0000
174 174 summary: add
175 175
176 176
177 177 find incoming on the remote side (using absolute path)
178 178
179 179 $ hg incoming -R ../remote -e "\"$PYTHON\" \"$TESTDIR/dummyssh\"" "ssh://user@dummy/`pwd`"
180 180 comparing with ssh://user@dummy/$TESTTMP/local
181 181 searching for changes
182 182 changeset: 3:a28a9d1a809c
183 183 tag: tip
184 184 parent: 0:1160648e36ce
185 185 user: test
186 186 date: Thu Jan 01 00:00:00 1970 +0000
187 187 summary: add
188 188
189 189
190 190 push
191 191
192 192 $ hg push
193 193 pushing to ssh://user@dummy/remote
194 194 searching for changes
195 195 remote: adding changesets
196 196 remote: adding manifests
197 197 remote: adding file changes
198 198 remote: added 1 changesets with 1 changes to 1 files
199 199 $ cd ../remote
200 200
201 201 check remote tip
202 202
203 203 $ hg tip
204 204 changeset: 3:a28a9d1a809c
205 205 tag: tip
206 206 parent: 0:1160648e36ce
207 207 user: test
208 208 date: Thu Jan 01 00:00:00 1970 +0000
209 209 summary: add
210 210
211 211 $ hg verify
212 212 checking changesets
213 213 checking manifests
214 214 crosschecking files in changesets and manifests
215 215 checking files
216 216 2 files, 4 changesets, 3 total revisions
217 217 $ hg cat -r tip foo
218 218 bleah
219 219 $ echo z > z
220 220 $ hg ci -A -m z z
221 221 created new head
222 222
223 223 test pushkeys and bookmarks
224 224
225 225 $ cd ../local
226 226 $ hg debugpushkey --config ui.ssh="\"$PYTHON\" \"$TESTDIR/dummyssh\"" ssh://user@dummy/remote namespaces
227 227 bookmarks
228 228 namespaces
229 229 phases
230 230 $ hg book foo -r 0
231 231 $ hg out -B
232 232 comparing with ssh://user@dummy/remote
233 233 searching for changed bookmarks
234 234 foo 1160648e36ce
235 235 $ hg push -B foo
236 236 pushing to ssh://user@dummy/remote
237 237 searching for changes
238 238 no changes found
239 239 exporting bookmark foo
240 240 [1]
241 241 $ hg debugpushkey --config ui.ssh="\"$PYTHON\" \"$TESTDIR/dummyssh\"" ssh://user@dummy/remote bookmarks
242 242 foo 1160648e36cec0054048a7edc4110c6f84fde594
243 243 $ hg book -f foo
244 244 $ hg push --traceback
245 245 pushing to ssh://user@dummy/remote
246 246 searching for changes
247 247 no changes found
248 248 updating bookmark foo
249 249 [1]
250 250 $ hg book -d foo
251 251 $ hg in -B
252 252 comparing with ssh://user@dummy/remote
253 253 searching for changed bookmarks
254 254 foo a28a9d1a809c
255 255 $ hg book -f -r 0 foo
256 256 $ hg pull -B foo
257 257 pulling from ssh://user@dummy/remote
258 258 no changes found
259 259 updating bookmark foo
260 260 $ hg book -d foo
261 261 $ hg push -B foo
262 262 pushing to ssh://user@dummy/remote
263 263 searching for changes
264 264 no changes found
265 265 deleting remote bookmark foo
266 266 [1]
267 267
268 268 a bad, evil hook that prints to stdout
269 269
270 270 $ cat <<EOF > $TESTTMP/badhook
271 271 > import sys
272 272 > sys.stdout.write("KABOOM\n")
273 273 > EOF
274 274
275 275 $ echo '[hooks]' >> ../remote/.hg/hgrc
276 276 $ echo "changegroup.stdout = \"$PYTHON\" $TESTTMP/badhook" >> ../remote/.hg/hgrc
277 277 $ echo r > r
278 278 $ hg ci -A -m z r
279 279
280 280 push should succeed even though it has an unexpected response
281 281
282 282 $ hg push
283 283 pushing to ssh://user@dummy/remote
284 284 searching for changes
285 285 remote has heads on branch 'default' that are not known locally: 6c0482d977a3
286 286 remote: adding changesets
287 287 remote: adding manifests
288 288 remote: adding file changes
289 289 remote: added 1 changesets with 1 changes to 1 files
290 290 remote: KABOOM
291 291 $ hg -R ../remote heads
292 292 changeset: 5:1383141674ec
293 293 tag: tip
294 294 parent: 3:a28a9d1a809c
295 295 user: test
296 296 date: Thu Jan 01 00:00:00 1970 +0000
297 297 summary: z
298 298
299 299 changeset: 4:6c0482d977a3
300 300 parent: 0:1160648e36ce
301 301 user: test
302 302 date: Thu Jan 01 00:00:00 1970 +0000
303 303 summary: z
304 304
305 305
306 306 clone bookmarks
307 307
308 308 $ hg -R ../remote bookmark test
309 309 $ hg -R ../remote bookmarks
310 310 * test 4:6c0482d977a3
311 311 $ hg clone -e "\"$PYTHON\" \"$TESTDIR/dummyssh\"" ssh://user@dummy/remote local-bookmarks
312 312 requesting all changes
313 313 adding changesets
314 314 adding manifests
315 315 adding file changes
316 316 added 6 changesets with 5 changes to 4 files (+1 heads)
317 317 new changesets 1160648e36ce:1383141674ec
318 318 updating to branch default
319 319 3 files updated, 0 files merged, 0 files removed, 0 files unresolved
320 320 $ hg -R local-bookmarks bookmarks
321 321 test 4:6c0482d977a3
322 322
323 323 passwords in ssh urls are not supported
324 324 (we use a glob here because different Python versions give different
325 325 results here)
326 326
327 327 $ hg push ssh://user:erroneouspwd@dummy/remote
328 328 pushing to ssh://user:*@dummy/remote (glob)
329 329 abort: password in URL not supported!
330 330 [255]
331 331
332 332 $ cd ..
333 333
334 334 hide outer repo
335 335 $ hg init
336 336
337 337 Test remote paths with spaces (issue2983):
338 338
339 339 $ hg init --ssh "\"$PYTHON\" \"$TESTDIR/dummyssh\"" "ssh://user@dummy/a repo"
340 340 $ touch "$TESTTMP/a repo/test"
341 341 $ hg -R 'a repo' commit -A -m "test"
342 342 adding test
343 343 $ hg -R 'a repo' tag tag
344 344 $ hg id --ssh "\"$PYTHON\" \"$TESTDIR/dummyssh\"" "ssh://user@dummy/a repo"
345 345 73649e48688a
346 346
347 347 $ hg id --ssh "\"$PYTHON\" \"$TESTDIR/dummyssh\"" "ssh://user@dummy/a repo#noNoNO"
348 348 abort: unknown revision 'noNoNO'!
349 349 [255]
350 350
351 351 Test (non-)escaping of remote paths with spaces when cloning (issue3145):
352 352
353 353 $ hg clone --ssh "\"$PYTHON\" \"$TESTDIR/dummyssh\"" "ssh://user@dummy/a repo"
354 354 destination directory: a repo
355 355 abort: destination 'a repo' is not empty
356 356 [255]
357 357
358 358 Test hg-ssh using a helper script that will restore PYTHONPATH (which might
359 359 have been cleared by a hg.exe wrapper) and invoke hg-ssh with the right
360 360 parameters:
361 361
362 362 $ cat > ssh.sh << EOF
363 363 > userhost="\$1"
364 364 > SSH_ORIGINAL_COMMAND="\$2"
365 365 > export SSH_ORIGINAL_COMMAND
366 366 > PYTHONPATH="$PYTHONPATH"
367 367 > export PYTHONPATH
368 368 > "$PYTHON" "$TESTDIR/../contrib/hg-ssh" "$TESTTMP/a repo"
369 369 > EOF
370 370
371 371 $ hg id --ssh "sh ssh.sh" "ssh://user@dummy/a repo"
372 372 73649e48688a
373 373
374 374 $ hg id --ssh "sh ssh.sh" "ssh://user@dummy/a'repo"
375 375 remote: Illegal repository "$TESTTMP/a'repo" (glob)
376 376 abort: no suitable response from remote hg!
377 377 [255]
378 378
379 379 $ hg id --ssh "sh ssh.sh" --remotecmd hacking "ssh://user@dummy/a'repo"
380 380 remote: Illegal command "hacking -R 'a'\''repo' serve --stdio"
381 381 abort: no suitable response from remote hg!
382 382 [255]
383 383
384 384 $ SSH_ORIGINAL_COMMAND="'hg' serve -R 'a'repo' --stdio" $PYTHON "$TESTDIR/../contrib/hg-ssh"
385 385 Illegal command "'hg' serve -R 'a'repo' --stdio": No closing quotation
386 386 [255]
387 387
388 388 Test hg-ssh in read-only mode:
389 389
390 390 $ cat > ssh.sh << EOF
391 391 > userhost="\$1"
392 392 > SSH_ORIGINAL_COMMAND="\$2"
393 393 > export SSH_ORIGINAL_COMMAND
394 394 > PYTHONPATH="$PYTHONPATH"
395 395 > export PYTHONPATH
396 396 > "$PYTHON" "$TESTDIR/../contrib/hg-ssh" --read-only "$TESTTMP/remote"
397 397 > EOF
398 398
399 399 $ hg clone --ssh "sh ssh.sh" "ssh://user@dummy/$TESTTMP/remote" read-only-local
400 400 requesting all changes
401 401 adding changesets
402 402 adding manifests
403 403 adding file changes
404 404 added 6 changesets with 5 changes to 4 files (+1 heads)
405 405 new changesets 1160648e36ce:1383141674ec
406 406 updating to branch default
407 407 3 files updated, 0 files merged, 0 files removed, 0 files unresolved
408 408
409 409 $ cd read-only-local
410 410 $ echo "baz" > bar
411 411 $ hg ci -A -m "unpushable commit" bar
412 412 $ hg push --ssh "sh ../ssh.sh"
413 413 pushing to ssh://user@dummy/*/remote (glob)
414 414 searching for changes
415 415 remote: Permission denied
416 416 remote: abort: pretxnopen.hg-ssh hook failed
417 417 remote: Permission denied
418 418 remote: pushkey-abort: prepushkey.hg-ssh hook failed
419 419 updating 6c0482d977a3 to public failed!
420 420 [1]
421 421
422 422 $ cd ..
423 423
424 424 stderr from remote commands should be printed before stdout from local code (issue4336)
425 425
426 426 $ hg clone remote stderr-ordering
427 427 updating to branch default
428 428 3 files updated, 0 files merged, 0 files removed, 0 files unresolved
429 429 $ cd stderr-ordering
430 430 $ cat >> localwrite.py << EOF
431 431 > from mercurial import exchange, extensions
432 432 >
433 433 > def wrappedpush(orig, repo, *args, **kwargs):
434 434 > res = orig(repo, *args, **kwargs)
435 435 > repo.ui.write('local stdout\n')
436 436 > return res
437 437 >
438 438 > def extsetup(ui):
439 439 > extensions.wrapfunction(exchange, 'push', wrappedpush)
440 440 > EOF
441 441
442 442 $ cat >> .hg/hgrc << EOF
443 443 > [paths]
444 444 > default-push = ssh://user@dummy/remote
445 445 > [ui]
446 446 > ssh = "$PYTHON" "$TESTDIR/dummyssh"
447 447 > [extensions]
448 448 > localwrite = localwrite.py
449 449 > EOF
450 450
451 451 $ echo localwrite > foo
452 452 $ hg commit -m 'testing localwrite'
453 453 $ hg push
454 454 pushing to ssh://user@dummy/remote
455 455 searching for changes
456 456 remote: adding changesets
457 457 remote: adding manifests
458 458 remote: adding file changes
459 459 remote: added 1 changesets with 1 changes to 1 files
460 460 remote: KABOOM
461 461 local stdout
462 462
463 463 debug output
464 464
465 465 $ hg pull --debug ssh://user@dummy/remote
466 466 pulling from ssh://user@dummy/remote
467 467 running .* ".*/dummyssh" ['"]user@dummy['"] ('|")hg -R remote serve --stdio('|") (re)
468 468 sending hello command
469 469 sending between command
470 remote: 372
470 remote: 384
471 471 remote: capabilities: lookup changegroupsubset branchmap pushkey known getbundle unbundlehash batch streamreqs=generaldelta,revlogv1 $USUAL_BUNDLE2_CAPS$ unbundle=HG10GZ,HG10BZ,HG10UN
472 472 remote: 1
473 473 preparing listkeys for "bookmarks"
474 474 sending listkeys command
475 475 received listkey for "bookmarks": 45 bytes
476 476 query 1; heads
477 477 sending batch command
478 478 searching for changes
479 479 all remote heads known locally
480 480 no changes found
481 481 preparing listkeys for "phases"
482 482 sending listkeys command
483 483 received listkey for "phases": 15 bytes
484 484 checking for updated bookmarks
485 485
486 486 $ cd ..
487 487
488 488 $ cat dummylog
489 489 Got arguments 1:user@dummy 2:hg -R nonexistent serve --stdio
490 490 Got arguments 1:user@dummy 2:hg -R /$TESTTMP/nonexistent serve --stdio
491 491 Got arguments 1:user@dummy 2:hg -R remote serve --stdio
492 492 Got arguments 1:user@dummy 2:hg -R local-stream serve --stdio
493 493 Got arguments 1:user@dummy 2:hg -R remote serve --stdio
494 494 Got arguments 1:user@dummy 2:hg -R remote serve --stdio
495 495 Got arguments 1:user@dummy 2:hg -R doesnotexist serve --stdio
496 496 Got arguments 1:user@dummy 2:hg -R remote serve --stdio
497 497 Got arguments 1:user@dummy 2:hg -R local serve --stdio
498 498 Got arguments 1:user@dummy 2:hg -R $TESTTMP/local serve --stdio
499 499 Got arguments 1:user@dummy 2:hg -R remote serve --stdio
500 500 changegroup-in-remote hook: HG_HOOKNAME=changegroup HG_HOOKTYPE=changegroup HG_NODE=a28a9d1a809cab7d4e2fde4bee738a9ede948b60 HG_NODE_LAST=a28a9d1a809cab7d4e2fde4bee738a9ede948b60 HG_SOURCE=serve HG_TXNID=TXN:$ID$ HG_URL=remote:ssh:$LOCALIP
501 501 Got arguments 1:user@dummy 2:hg -R remote serve --stdio
502 502 Got arguments 1:user@dummy 2:hg -R remote serve --stdio
503 503 Got arguments 1:user@dummy 2:hg -R remote serve --stdio
504 504 Got arguments 1:user@dummy 2:hg -R remote serve --stdio
505 505 Got arguments 1:user@dummy 2:hg -R remote serve --stdio
506 506 Got arguments 1:user@dummy 2:hg -R remote serve --stdio
507 507 Got arguments 1:user@dummy 2:hg -R remote serve --stdio
508 508 Got arguments 1:user@dummy 2:hg -R remote serve --stdio
509 509 Got arguments 1:user@dummy 2:hg -R remote serve --stdio
510 510 changegroup-in-remote hook: HG_HOOKNAME=changegroup HG_HOOKTYPE=changegroup HG_NODE=1383141674ec756a6056f6a9097618482fe0f4a6 HG_NODE_LAST=1383141674ec756a6056f6a9097618482fe0f4a6 HG_SOURCE=serve HG_TXNID=TXN:$ID$ HG_URL=remote:ssh:$LOCALIP
511 511 Got arguments 1:user@dummy 2:hg -R remote serve --stdio
512 512 Got arguments 1:user@dummy 2:hg init 'a repo'
513 513 Got arguments 1:user@dummy 2:hg -R 'a repo' serve --stdio
514 514 Got arguments 1:user@dummy 2:hg -R 'a repo' serve --stdio
515 515 Got arguments 1:user@dummy 2:hg -R 'a repo' serve --stdio
516 516 Got arguments 1:user@dummy 2:hg -R 'a repo' serve --stdio
517 517 Got arguments 1:user@dummy 2:hg -R remote serve --stdio
518 518 changegroup-in-remote hook: HG_HOOKNAME=changegroup HG_HOOKTYPE=changegroup HG_NODE=65c38f4125f9602c8db4af56530cc221d93b8ef8 HG_NODE_LAST=65c38f4125f9602c8db4af56530cc221d93b8ef8 HG_SOURCE=serve HG_TXNID=TXN:$ID$ HG_URL=remote:ssh:$LOCALIP
519 519 Got arguments 1:user@dummy 2:hg -R remote serve --stdio
520 520
521 521 remote hook failure is attributed to remote
522 522
523 523 $ cat > $TESTTMP/failhook << EOF
524 524 > def hook(ui, repo, **kwargs):
525 525 > ui.write('hook failure!\n')
526 526 > ui.flush()
527 527 > return 1
528 528 > EOF
529 529
530 530 $ echo "pretxnchangegroup.fail = python:$TESTTMP/failhook:hook" >> remote/.hg/hgrc
531 531
532 532 $ hg -q --config ui.ssh="\"$PYTHON\" $TESTDIR/dummyssh" clone ssh://user@dummy/remote hookout
533 533 $ cd hookout
534 534 $ touch hookfailure
535 535 $ hg -q commit -A -m 'remote hook failure'
536 536 $ hg --config ui.ssh="\"$PYTHON\" $TESTDIR/dummyssh" push
537 537 pushing to ssh://user@dummy/remote
538 538 searching for changes
539 539 remote: adding changesets
540 540 remote: adding manifests
541 541 remote: adding file changes
542 542 remote: added 1 changesets with 1 changes to 1 files
543 543 remote: hook failure!
544 544 remote: transaction abort!
545 545 remote: rollback completed
546 546 remote: abort: pretxnchangegroup.fail hook failed
547 547 [1]
548 548
549 549 abort during pull is properly reported as such
550 550
551 551 $ echo morefoo >> ../remote/foo
552 552 $ hg -R ../remote commit --message "more foo to be pulled"
553 553 $ cat >> ../remote/.hg/hgrc << EOF
554 554 > [extensions]
555 555 > crash = ${TESTDIR}/crashgetbundler.py
556 556 > EOF
557 557 $ hg --config ui.ssh="\"$PYTHON\" $TESTDIR/dummyssh" pull
558 558 pulling from ssh://user@dummy/remote
559 559 searching for changes
560 560 adding changesets
561 561 remote: abort: this is an exercise
562 562 transaction abort!
563 563 rollback completed
564 564 abort: stream ended unexpectedly (got 0 bytes, expected 4)
565 565 [255]
@@ -1,596 +1,596 @@
1 1
2 2 This test tries to exercise the ssh functionality with a dummy script
3 3
4 4 $ cat <<EOF >> $HGRCPATH
5 5 > [format]
6 6 > usegeneraldelta=yes
7 7 > EOF
8 8
9 9 creating 'remote' repo
10 10
11 11 $ hg init remote
12 12 $ cd remote
13 13 $ echo this > foo
14 14 $ echo this > fooO
15 15 $ hg ci -A -m "init" foo fooO
16 16
17 17 insert a closed branch (issue4428)
18 18
19 19 $ hg up null
20 20 0 files updated, 0 files merged, 2 files removed, 0 files unresolved
21 21 $ hg branch closed
22 22 marked working directory as branch closed
23 23 (branches are permanent and global, did you want a bookmark?)
24 24 $ hg ci -mc0
25 25 $ hg ci --close-branch -mc1
26 26 $ hg up -q default
27 27
28 28 configure for serving
29 29
30 30 $ cat <<EOF > .hg/hgrc
31 31 > [server]
32 32 > uncompressed = True
33 33 >
34 34 > [hooks]
35 35 > changegroup = sh -c "printenv.py changegroup-in-remote 0 ../dummylog"
36 36 > EOF
37 37 $ cd ..
38 38
39 39 repo not found error
40 40
41 41 $ hg clone -e "\"$PYTHON\" \"$TESTDIR/dummyssh\"" ssh://user@dummy/nonexistent local
42 42 remote: abort: repository nonexistent not found!
43 43 abort: no suitable response from remote hg!
44 44 [255]
45 45
46 46 non-existent absolute path
47 47
48 48 $ hg clone -e "\"$PYTHON\" \"$TESTDIR/dummyssh\"" ssh://user@dummy/`pwd`/nonexistent local
49 49 remote: abort: repository $TESTTMP/nonexistent not found!
50 50 abort: no suitable response from remote hg!
51 51 [255]
52 52
53 53 clone remote via stream
54 54
55 55 $ hg clone -e "\"$PYTHON\" \"$TESTDIR/dummyssh\"" --stream ssh://user@dummy/remote local-stream
56 56 streaming all changes
57 57 4 files to transfer, 602 bytes of data
58 58 transferred 602 bytes in * seconds (*) (glob)
59 59 searching for changes
60 60 no changes found
61 61 updating to branch default
62 62 2 files updated, 0 files merged, 0 files removed, 0 files unresolved
63 63 $ cd local-stream
64 64 $ hg verify
65 65 checking changesets
66 66 checking manifests
67 67 crosschecking files in changesets and manifests
68 68 checking files
69 69 2 files, 3 changesets, 2 total revisions
70 70 $ hg branches
71 71 default 0:1160648e36ce
72 72 $ cd ..
73 73
74 74 clone bookmarks via stream
75 75
76 76 $ hg -R local-stream book mybook
77 77 $ hg clone -e "\"$PYTHON\" \"$TESTDIR/dummyssh\"" --stream ssh://user@dummy/local-stream stream2
78 78 streaming all changes
79 79 4 files to transfer, 602 bytes of data
80 80 transferred 602 bytes in * seconds (*) (glob)
81 81 searching for changes
82 82 no changes found
83 83 updating to branch default
84 84 2 files updated, 0 files merged, 0 files removed, 0 files unresolved
85 85 $ cd stream2
86 86 $ hg book
87 87 mybook 0:1160648e36ce
88 88 $ cd ..
89 89 $ rm -rf local-stream stream2
90 90
91 91 clone remote via pull
92 92
93 93 $ hg clone -e "\"$PYTHON\" \"$TESTDIR/dummyssh\"" ssh://user@dummy/remote local
94 94 requesting all changes
95 95 adding changesets
96 96 adding manifests
97 97 adding file changes
98 98 added 3 changesets with 2 changes to 2 files
99 99 new changesets 1160648e36ce:ad076bfb429d
100 100 updating to branch default
101 101 2 files updated, 0 files merged, 0 files removed, 0 files unresolved
102 102
103 103 verify
104 104
105 105 $ cd local
106 106 $ hg verify
107 107 checking changesets
108 108 checking manifests
109 109 crosschecking files in changesets and manifests
110 110 checking files
111 111 2 files, 3 changesets, 2 total revisions
112 112 $ cat >> .hg/hgrc <<EOF
113 113 > [hooks]
114 114 > changegroup = sh -c "printenv.py changegroup-in-local 0 ../dummylog"
115 115 > EOF
116 116
117 117 empty default pull
118 118
119 119 $ hg paths
120 120 default = ssh://user@dummy/remote
121 121 $ hg pull -e "\"$PYTHON\" \"$TESTDIR/dummyssh\""
122 122 pulling from ssh://user@dummy/remote
123 123 searching for changes
124 124 no changes found
125 125
126 126 pull from wrong ssh URL
127 127
128 128 $ hg pull -e "\"$PYTHON\" \"$TESTDIR/dummyssh\"" ssh://user@dummy/doesnotexist
129 129 pulling from ssh://user@dummy/doesnotexist
130 130 remote: abort: repository doesnotexist not found!
131 131 abort: no suitable response from remote hg!
132 132 [255]
133 133
134 134 local change
135 135
136 136 $ echo bleah > foo
137 137 $ hg ci -m "add"
138 138
139 139 updating rc
140 140
141 141 $ echo "default-push = ssh://user@dummy/remote" >> .hg/hgrc
142 142 $ echo "[ui]" >> .hg/hgrc
143 143 $ echo "ssh = \"$PYTHON\" \"$TESTDIR/dummyssh\"" >> .hg/hgrc
144 144
145 145 find outgoing
146 146
147 147 $ hg out ssh://user@dummy/remote
148 148 comparing with ssh://user@dummy/remote
149 149 searching for changes
150 150 changeset: 3:a28a9d1a809c
151 151 tag: tip
152 152 parent: 0:1160648e36ce
153 153 user: test
154 154 date: Thu Jan 01 00:00:00 1970 +0000
155 155 summary: add
156 156
157 157
158 158 find incoming on the remote side
159 159
160 160 $ hg incoming -R ../remote -e "\"$PYTHON\" \"$TESTDIR/dummyssh\"" ssh://user@dummy/local
161 161 comparing with ssh://user@dummy/local
162 162 searching for changes
163 163 changeset: 3:a28a9d1a809c
164 164 tag: tip
165 165 parent: 0:1160648e36ce
166 166 user: test
167 167 date: Thu Jan 01 00:00:00 1970 +0000
168 168 summary: add
169 169
170 170
171 171 find incoming on the remote side (using absolute path)
172 172
173 173 $ hg incoming -R ../remote -e "\"$PYTHON\" \"$TESTDIR/dummyssh\"" "ssh://user@dummy/`pwd`"
174 174 comparing with ssh://user@dummy/$TESTTMP/local
175 175 searching for changes
176 176 changeset: 3:a28a9d1a809c
177 177 tag: tip
178 178 parent: 0:1160648e36ce
179 179 user: test
180 180 date: Thu Jan 01 00:00:00 1970 +0000
181 181 summary: add
182 182
183 183
184 184 push
185 185
186 186 $ hg push
187 187 pushing to ssh://user@dummy/remote
188 188 searching for changes
189 189 remote: adding changesets
190 190 remote: adding manifests
191 191 remote: adding file changes
192 192 remote: added 1 changesets with 1 changes to 1 files
193 193 $ cd ../remote
194 194
195 195 check remote tip
196 196
197 197 $ hg tip
198 198 changeset: 3:a28a9d1a809c
199 199 tag: tip
200 200 parent: 0:1160648e36ce
201 201 user: test
202 202 date: Thu Jan 01 00:00:00 1970 +0000
203 203 summary: add
204 204
205 205 $ hg verify
206 206 checking changesets
207 207 checking manifests
208 208 crosschecking files in changesets and manifests
209 209 checking files
210 210 2 files, 4 changesets, 3 total revisions
211 211 $ hg cat -r tip foo
212 212 bleah
213 213 $ echo z > z
214 214 $ hg ci -A -m z z
215 215 created new head
216 216
217 217 test pushkeys and bookmarks
218 218
219 219 $ cd ../local
220 220 $ hg debugpushkey --config ui.ssh="\"$PYTHON\" \"$TESTDIR/dummyssh\"" ssh://user@dummy/remote namespaces
221 221 bookmarks
222 222 namespaces
223 223 phases
224 224 $ hg book foo -r 0
225 225 $ hg out -B
226 226 comparing with ssh://user@dummy/remote
227 227 searching for changed bookmarks
228 228 foo 1160648e36ce
229 229 $ hg push -B foo
230 230 pushing to ssh://user@dummy/remote
231 231 searching for changes
232 232 no changes found
233 233 exporting bookmark foo
234 234 [1]
235 235 $ hg debugpushkey --config ui.ssh="\"$PYTHON\" \"$TESTDIR/dummyssh\"" ssh://user@dummy/remote bookmarks
236 236 foo 1160648e36cec0054048a7edc4110c6f84fde594
237 237 $ hg book -f foo
238 238 $ hg push --traceback
239 239 pushing to ssh://user@dummy/remote
240 240 searching for changes
241 241 no changes found
242 242 updating bookmark foo
243 243 [1]
244 244 $ hg book -d foo
245 245 $ hg in -B
246 246 comparing with ssh://user@dummy/remote
247 247 searching for changed bookmarks
248 248 foo a28a9d1a809c
249 249 $ hg book -f -r 0 foo
250 250 $ hg pull -B foo
251 251 pulling from ssh://user@dummy/remote
252 252 no changes found
253 253 updating bookmark foo
254 254 $ hg book -d foo
255 255 $ hg push -B foo
256 256 pushing to ssh://user@dummy/remote
257 257 searching for changes
258 258 no changes found
259 259 deleting remote bookmark foo
260 260 [1]
261 261
262 262 a bad, evil hook that prints to stdout
263 263
264 264 $ cat <<EOF > $TESTTMP/badhook
265 265 > import sys
266 266 > sys.stdout.write("KABOOM\n")
267 267 > EOF
268 268
269 269 $ cat <<EOF > $TESTTMP/badpyhook.py
270 270 > import sys
271 271 > def hook(ui, repo, hooktype, **kwargs):
272 272 > sys.stdout.write("KABOOM IN PROCESS\n")
273 273 > EOF
274 274
275 275 $ cat <<EOF >> ../remote/.hg/hgrc
276 276 > [hooks]
277 277 > changegroup.stdout = $PYTHON $TESTTMP/badhook
278 278 > changegroup.pystdout = python:$TESTTMP/badpyhook.py:hook
279 279 > EOF
280 280 $ echo r > r
281 281 $ hg ci -A -m z r
282 282
283 283 push should succeed even though it has an unexpected response
284 284
285 285 $ hg push
286 286 pushing to ssh://user@dummy/remote
287 287 searching for changes
288 288 remote has heads on branch 'default' that are not known locally: 6c0482d977a3
289 289 remote: adding changesets
290 290 remote: adding manifests
291 291 remote: adding file changes
292 292 remote: added 1 changesets with 1 changes to 1 files
293 293 remote: KABOOM
294 294 remote: KABOOM IN PROCESS
295 295 $ hg -R ../remote heads
296 296 changeset: 5:1383141674ec
297 297 tag: tip
298 298 parent: 3:a28a9d1a809c
299 299 user: test
300 300 date: Thu Jan 01 00:00:00 1970 +0000
301 301 summary: z
302 302
303 303 changeset: 4:6c0482d977a3
304 304 parent: 0:1160648e36ce
305 305 user: test
306 306 date: Thu Jan 01 00:00:00 1970 +0000
307 307 summary: z
308 308
309 309
310 310 clone bookmarks
311 311
312 312 $ hg -R ../remote bookmark test
313 313 $ hg -R ../remote bookmarks
314 314 * test 4:6c0482d977a3
315 315 $ hg clone -e "\"$PYTHON\" \"$TESTDIR/dummyssh\"" ssh://user@dummy/remote local-bookmarks
316 316 requesting all changes
317 317 adding changesets
318 318 adding manifests
319 319 adding file changes
320 320 added 6 changesets with 5 changes to 4 files (+1 heads)
321 321 new changesets 1160648e36ce:1383141674ec
322 322 updating to branch default
323 323 3 files updated, 0 files merged, 0 files removed, 0 files unresolved
324 324 $ hg -R local-bookmarks bookmarks
325 325 test 4:6c0482d977a3
326 326
327 327 passwords in ssh urls are not supported
328 328 (we use a glob here because different Python versions give different
329 329 results here)
330 330
331 331 $ hg push ssh://user:erroneouspwd@dummy/remote
332 332 pushing to ssh://user:*@dummy/remote (glob)
333 333 abort: password in URL not supported!
334 334 [255]
335 335
336 336 $ cd ..
337 337
338 338 hide outer repo
339 339 $ hg init
340 340
341 341 Test remote paths with spaces (issue2983):
342 342
343 343 $ hg init --ssh "\"$PYTHON\" \"$TESTDIR/dummyssh\"" "ssh://user@dummy/a repo"
344 344 $ touch "$TESTTMP/a repo/test"
345 345 $ hg -R 'a repo' commit -A -m "test"
346 346 adding test
347 347 $ hg -R 'a repo' tag tag
348 348 $ hg id --ssh "\"$PYTHON\" \"$TESTDIR/dummyssh\"" "ssh://user@dummy/a repo"
349 349 73649e48688a
350 350
351 351 $ hg id --ssh "\"$PYTHON\" \"$TESTDIR/dummyssh\"" "ssh://user@dummy/a repo#noNoNO"
352 352 abort: unknown revision 'noNoNO'!
353 353 [255]
354 354
355 355 Test (non-)escaping of remote paths with spaces when cloning (issue3145):
356 356
357 357 $ hg clone --ssh "\"$PYTHON\" \"$TESTDIR/dummyssh\"" "ssh://user@dummy/a repo"
358 358 destination directory: a repo
359 359 abort: destination 'a repo' is not empty
360 360 [255]
361 361
362 362 Make sure hg is really paranoid in serve --stdio mode. It used to be
363 363 possible to get a debugger REPL by specifying a repo named --debugger.
364 364 $ hg -R --debugger serve --stdio
365 365 abort: potentially unsafe serve --stdio invocation: ['-R', '--debugger', 'serve', '--stdio']
366 366 [255]
367 367 $ hg -R --config=ui.debugger=yes serve --stdio
368 368 abort: potentially unsafe serve --stdio invocation: ['-R', '--config=ui.debugger=yes', 'serve', '--stdio']
369 369 [255]
370 370 Abbreviations of 'serve' also don't work, to avoid shenanigans.
371 371 $ hg -R narf serv --stdio
372 372 abort: potentially unsafe serve --stdio invocation: ['-R', 'narf', 'serv', '--stdio']
373 373 [255]
374 374
375 375 Test hg-ssh using a helper script that will restore PYTHONPATH (which might
376 376 have been cleared by a hg.exe wrapper) and invoke hg-ssh with the right
377 377 parameters:
378 378
379 379 $ cat > ssh.sh << EOF
380 380 > userhost="\$1"
381 381 > SSH_ORIGINAL_COMMAND="\$2"
382 382 > export SSH_ORIGINAL_COMMAND
383 383 > PYTHONPATH="$PYTHONPATH"
384 384 > export PYTHONPATH
385 385 > "$PYTHON" "$TESTDIR/../contrib/hg-ssh" "$TESTTMP/a repo"
386 386 > EOF
387 387
388 388 $ hg id --ssh "sh ssh.sh" "ssh://user@dummy/a repo"
389 389 73649e48688a
390 390
391 391 $ hg id --ssh "sh ssh.sh" "ssh://user@dummy/a'repo"
392 392 remote: Illegal repository "$TESTTMP/a'repo" (glob)
393 393 abort: no suitable response from remote hg!
394 394 [255]
395 395
396 396 $ hg id --ssh "sh ssh.sh" --remotecmd hacking "ssh://user@dummy/a'repo"
397 397 remote: Illegal command "hacking -R 'a'\''repo' serve --stdio"
398 398 abort: no suitable response from remote hg!
399 399 [255]
400 400
401 401 $ SSH_ORIGINAL_COMMAND="'hg' -R 'a'repo' serve --stdio" $PYTHON "$TESTDIR/../contrib/hg-ssh"
402 402 Illegal command "'hg' -R 'a'repo' serve --stdio": No closing quotation
403 403 [255]
404 404
405 405 Test hg-ssh in read-only mode:
406 406
407 407 $ cat > ssh.sh << EOF
408 408 > userhost="\$1"
409 409 > SSH_ORIGINAL_COMMAND="\$2"
410 410 > export SSH_ORIGINAL_COMMAND
411 411 > PYTHONPATH="$PYTHONPATH"
412 412 > export PYTHONPATH
413 413 > "$PYTHON" "$TESTDIR/../contrib/hg-ssh" --read-only "$TESTTMP/remote"
414 414 > EOF
415 415
416 416 $ hg clone --ssh "sh ssh.sh" "ssh://user@dummy/$TESTTMP/remote" read-only-local
417 417 requesting all changes
418 418 adding changesets
419 419 adding manifests
420 420 adding file changes
421 421 added 6 changesets with 5 changes to 4 files (+1 heads)
422 422 new changesets 1160648e36ce:1383141674ec
423 423 updating to branch default
424 424 3 files updated, 0 files merged, 0 files removed, 0 files unresolved
425 425
426 426 $ cd read-only-local
427 427 $ echo "baz" > bar
428 428 $ hg ci -A -m "unpushable commit" bar
429 429 $ hg push --ssh "sh ../ssh.sh"
430 430 pushing to ssh://user@dummy/*/remote (glob)
431 431 searching for changes
432 432 remote: Permission denied
433 433 remote: pretxnopen.hg-ssh hook failed
434 434 abort: push failed on remote
435 435 [255]
436 436
437 437 $ cd ..
438 438
439 439 stderr from remote commands should be printed before stdout from local code (issue4336)
440 440
441 441 $ hg clone remote stderr-ordering
442 442 updating to branch default
443 443 3 files updated, 0 files merged, 0 files removed, 0 files unresolved
444 444 $ cd stderr-ordering
445 445 $ cat >> localwrite.py << EOF
446 446 > from mercurial import exchange, extensions
447 447 >
448 448 > def wrappedpush(orig, repo, *args, **kwargs):
449 449 > res = orig(repo, *args, **kwargs)
450 450 > repo.ui.write('local stdout\n')
451 451 > return res
452 452 >
453 453 > def extsetup(ui):
454 454 > extensions.wrapfunction(exchange, 'push', wrappedpush)
455 455 > EOF
456 456
457 457 $ cat >> .hg/hgrc << EOF
458 458 > [paths]
459 459 > default-push = ssh://user@dummy/remote
460 460 > [ui]
461 461 > ssh = "$PYTHON" "$TESTDIR/dummyssh"
462 462 > [extensions]
463 463 > localwrite = localwrite.py
464 464 > EOF
465 465
466 466 $ echo localwrite > foo
467 467 $ hg commit -m 'testing localwrite'
468 468 $ hg push
469 469 pushing to ssh://user@dummy/remote
470 470 searching for changes
471 471 remote: adding changesets
472 472 remote: adding manifests
473 473 remote: adding file changes
474 474 remote: added 1 changesets with 1 changes to 1 files
475 475 remote: KABOOM
476 476 remote: KABOOM IN PROCESS
477 477 local stdout
478 478
479 479 debug output
480 480
481 481 $ hg pull --debug ssh://user@dummy/remote
482 482 pulling from ssh://user@dummy/remote
483 483 running .* ".*/dummyssh" ['"]user@dummy['"] ('|")hg -R remote serve --stdio('|") (re)
484 484 sending hello command
485 485 sending between command
486 remote: 372
486 remote: 384
487 487 remote: capabilities: lookup changegroupsubset branchmap pushkey known getbundle unbundlehash batch streamreqs=generaldelta,revlogv1 $USUAL_BUNDLE2_CAPS$ unbundle=HG10GZ,HG10BZ,HG10UN
488 488 remote: 1
489 489 query 1; heads
490 490 sending batch command
491 491 searching for changes
492 492 all remote heads known locally
493 493 no changes found
494 494 sending getbundle command
495 495 bundle2-input-bundle: with-transaction
496 496 bundle2-input-part: "listkeys" (params: 1 mandatory) supported
497 497 bundle2-input-part: total payload size 45
498 498 bundle2-input-part: "phase-heads" supported
499 499 bundle2-input-part: total payload size 72
500 500 bundle2-input-bundle: 1 parts total
501 501 checking for updated bookmarks
502 502
503 503 $ cd ..
504 504
505 505 $ cat dummylog
506 506 Got arguments 1:user@dummy 2:hg -R nonexistent serve --stdio
507 507 Got arguments 1:user@dummy 2:hg -R $TESTTMP/nonexistent serve --stdio
508 508 Got arguments 1:user@dummy 2:hg -R remote serve --stdio
509 509 Got arguments 1:user@dummy 2:hg -R local-stream serve --stdio
510 510 Got arguments 1:user@dummy 2:hg -R remote serve --stdio
511 511 Got arguments 1:user@dummy 2:hg -R remote serve --stdio
512 512 Got arguments 1:user@dummy 2:hg -R doesnotexist serve --stdio
513 513 Got arguments 1:user@dummy 2:hg -R remote serve --stdio
514 514 Got arguments 1:user@dummy 2:hg -R local serve --stdio
515 515 Got arguments 1:user@dummy 2:hg -R $TESTTMP/local serve --stdio
516 516 Got arguments 1:user@dummy 2:hg -R remote serve --stdio
517 517 changegroup-in-remote hook: HG_BUNDLE2=1 HG_HOOKNAME=changegroup HG_HOOKTYPE=changegroup HG_NODE=a28a9d1a809cab7d4e2fde4bee738a9ede948b60 HG_NODE_LAST=a28a9d1a809cab7d4e2fde4bee738a9ede948b60 HG_SOURCE=serve HG_TXNID=TXN:$ID$ HG_URL=remote:ssh:$LOCALIP
518 518 Got arguments 1:user@dummy 2:hg -R remote serve --stdio
519 519 Got arguments 1:user@dummy 2:hg -R remote serve --stdio
520 520 Got arguments 1:user@dummy 2:hg -R remote serve --stdio
521 521 Got arguments 1:user@dummy 2:hg -R remote serve --stdio
522 522 Got arguments 1:user@dummy 2:hg -R remote serve --stdio
523 523 Got arguments 1:user@dummy 2:hg -R remote serve --stdio
524 524 Got arguments 1:user@dummy 2:hg -R remote serve --stdio
525 525 Got arguments 1:user@dummy 2:hg -R remote serve --stdio
526 526 Got arguments 1:user@dummy 2:hg -R remote serve --stdio
527 527 changegroup-in-remote hook: HG_BUNDLE2=1 HG_HOOKNAME=changegroup HG_HOOKTYPE=changegroup HG_NODE=1383141674ec756a6056f6a9097618482fe0f4a6 HG_NODE_LAST=1383141674ec756a6056f6a9097618482fe0f4a6 HG_SOURCE=serve HG_TXNID=TXN:$ID$ HG_URL=remote:ssh:$LOCALIP
528 528 Got arguments 1:user@dummy 2:hg -R remote serve --stdio
529 529 Got arguments 1:user@dummy 2:hg init 'a repo'
530 530 Got arguments 1:user@dummy 2:hg -R 'a repo' serve --stdio
531 531 Got arguments 1:user@dummy 2:hg -R 'a repo' serve --stdio
532 532 Got arguments 1:user@dummy 2:hg -R 'a repo' serve --stdio
533 533 Got arguments 1:user@dummy 2:hg -R 'a repo' serve --stdio
534 534 Got arguments 1:user@dummy 2:hg -R remote serve --stdio
535 535 changegroup-in-remote hook: HG_BUNDLE2=1 HG_HOOKNAME=changegroup HG_HOOKTYPE=changegroup HG_NODE=65c38f4125f9602c8db4af56530cc221d93b8ef8 HG_NODE_LAST=65c38f4125f9602c8db4af56530cc221d93b8ef8 HG_SOURCE=serve HG_TXNID=TXN:$ID$ HG_URL=remote:ssh:$LOCALIP
536 536 Got arguments 1:user@dummy 2:hg -R remote serve --stdio
537 537
538 538 remote hook failure is attributed to remote
539 539
540 540 $ cat > $TESTTMP/failhook << EOF
541 541 > def hook(ui, repo, **kwargs):
542 542 > ui.write('hook failure!\n')
543 543 > ui.flush()
544 544 > return 1
545 545 > EOF
546 546
547 547 $ echo "pretxnchangegroup.fail = python:$TESTTMP/failhook:hook" >> remote/.hg/hgrc
548 548
549 549 $ hg -q --config ui.ssh="\"$PYTHON\" $TESTDIR/dummyssh" clone ssh://user@dummy/remote hookout
550 550 $ cd hookout
551 551 $ touch hookfailure
552 552 $ hg -q commit -A -m 'remote hook failure'
553 553 $ hg --config ui.ssh="\"$PYTHON\" $TESTDIR/dummyssh" push
554 554 pushing to ssh://user@dummy/remote
555 555 searching for changes
556 556 remote: adding changesets
557 557 remote: adding manifests
558 558 remote: adding file changes
559 559 remote: added 1 changesets with 1 changes to 1 files
560 560 remote: hook failure!
561 561 remote: transaction abort!
562 562 remote: rollback completed
563 563 remote: pretxnchangegroup.fail hook failed
564 564 abort: push failed on remote
565 565 [255]
566 566
567 567 abort during pull is properly reported as such
568 568
569 569 $ echo morefoo >> ../remote/foo
570 570 $ hg -R ../remote commit --message "more foo to be pulled"
571 571 $ cat >> ../remote/.hg/hgrc << EOF
572 572 > [extensions]
573 573 > crash = ${TESTDIR}/crashgetbundler.py
574 574 > EOF
575 575 $ hg --config ui.ssh="\"$PYTHON\" $TESTDIR/dummyssh" pull
576 576 pulling from ssh://user@dummy/remote
577 577 searching for changes
578 578 remote: abort: this is an exercise
579 579 abort: pull failed on remote
580 580 [255]
581 581
582 582 abort with no error hint when there is a ssh problem when pulling
583 583
584 584 $ hg pull ssh://brokenrepository -e "\"$PYTHON\" \"$TESTDIR/dummyssh\""
585 585 pulling from ssh://brokenrepository/
586 586 abort: no suitable response from remote hg!
587 587 [255]
588 588
589 589 abort with configured error hint when there is a ssh problem when pulling
590 590
591 591 $ hg pull ssh://brokenrepository -e "\"$PYTHON\" \"$TESTDIR/dummyssh\"" \
592 592 > --config ui.ssherrorhint="Please see http://company/internalwiki/ssh.html"
593 593 pulling from ssh://brokenrepository/
594 594 abort: no suitable response from remote hg!
595 595 (Please see http://company/internalwiki/ssh.html)
596 596 [255]