push: add a way to allow concurrent pushes on unrelated heads...
marmoute
r32709:16ada4cb default
# bundle2.py - generic container format to transmit arbitrary data.
#
# Copyright 2013 Facebook, Inc.
#
# This software may be used and distributed according to the terms of the
# GNU General Public License version 2 or any later version.
7 """Handling of the new bundle2 format
7 """Handling of the new bundle2 format
8
8
9 The goal of bundle2 is to act as an atomically packet to transmit a set of
9 The goal of bundle2 is to act as an atomically packet to transmit a set of
10 payloads in an application agnostic way. It consist in a sequence of "parts"
10 payloads in an application agnostic way. It consist in a sequence of "parts"
11 that will be handed to and processed by the application layer.
11 that will be handed to and processed by the application layer.
12
12
13
13
14 General format architecture
14 General format architecture
15 ===========================
15 ===========================
16
16
17 The format is architectured as follow
17 The format is architectured as follow
18
18
19 - magic string
19 - magic string
20 - stream level parameters
20 - stream level parameters
21 - payload parts (any number)
21 - payload parts (any number)
22 - end of stream marker.
22 - end of stream marker.
23
23
24 the Binary format
24 the Binary format
25 ============================
25 ============================
26
26
27 All numbers are unsigned and big-endian.
27 All numbers are unsigned and big-endian.
28
28
29 stream level parameters
29 stream level parameters
30 ------------------------
30 ------------------------
31
31
32 Binary format is as follow
32 Binary format is as follow
33
33
34 :params size: int32
34 :params size: int32
35
35
36 The total number of Bytes used by the parameters
36 The total number of Bytes used by the parameters
37
37
38 :params value: arbitrary number of Bytes
38 :params value: arbitrary number of Bytes
39
39
40 A blob of `params size` containing the serialized version of all stream level
40 A blob of `params size` containing the serialized version of all stream level
41 parameters.
41 parameters.
42
42
43 The blob contains a space separated list of parameters. Parameters with value
43 The blob contains a space separated list of parameters. Parameters with value
44 are stored in the form `<name>=<value>`. Both name and value are urlquoted.
44 are stored in the form `<name>=<value>`. Both name and value are urlquoted.
45
45
46 Empty name are obviously forbidden.
46 Empty name are obviously forbidden.
47
47
48 Name MUST start with a letter. If this first letter is lower case, the
48 Name MUST start with a letter. If this first letter is lower case, the
49 parameter is advisory and can be safely ignored. However when the first
49 parameter is advisory and can be safely ignored. However when the first
50 letter is capital, the parameter is mandatory and the bundling process MUST
50 letter is capital, the parameter is mandatory and the bundling process MUST
51 stop if he is not able to proceed it.
51 stop if he is not able to proceed it.
52
52
53 Stream parameters use a simple textual format for two main reasons:
53 Stream parameters use a simple textual format for two main reasons:
54
54
55 - Stream level parameters should remain simple and we want to discourage any
55 - Stream level parameters should remain simple and we want to discourage any
56 crazy usage.
56 crazy usage.
57 - Textual data allow easy human inspection of a bundle2 header in case of
57 - Textual data allow easy human inspection of a bundle2 header in case of
58 troubles.
58 troubles.
59
59
60 Any Applicative level options MUST go into a bundle2 part instead.
60 Any Applicative level options MUST go into a bundle2 part instead.
61
61
62 Payload part
62 Payload part
63 ------------------------
63 ------------------------
64
64
65 Binary format is as follow
65 Binary format is as follow
66
66
67 :header size: int32
67 :header size: int32
68
68
69 The total number of Bytes used by the part header. When the header is empty
69 The total number of Bytes used by the part header. When the header is empty
70 (size = 0) this is interpreted as the end of stream marker.
70 (size = 0) this is interpreted as the end of stream marker.
71
71
72 :header:
72 :header:
73
73
74 The header defines how to interpret the part. It contains two piece of
74 The header defines how to interpret the part. It contains two piece of
75 data: the part type, and the part parameters.
75 data: the part type, and the part parameters.
76
76
77 The part type is used to route an application level handler, that can
77 The part type is used to route an application level handler, that can
78 interpret payload.
78 interpret payload.
79
79
80 Part parameters are passed to the application level handler. They are
80 Part parameters are passed to the application level handler. They are
81 meant to convey information that will help the application level object to
81 meant to convey information that will help the application level object to
82 interpret the part payload.
82 interpret the part payload.
83
83
84 The binary format of the header is has follow
84 The binary format of the header is has follow
85
85
86 :typesize: (one byte)
86 :typesize: (one byte)
87
87
88 :parttype: alphanumerical part name (restricted to [a-zA-Z0-9_:-]*)
88 :parttype: alphanumerical part name (restricted to [a-zA-Z0-9_:-]*)
89
89
90 :partid: A 32bits integer (unique in the bundle) that can be used to refer
90 :partid: A 32bits integer (unique in the bundle) that can be used to refer
91 to this part.
91 to this part.
92
92
93 :parameters:
93 :parameters:
94
94
95 Part's parameter may have arbitrary content, the binary structure is::
95 Part's parameter may have arbitrary content, the binary structure is::
96
96
97 <mandatory-count><advisory-count><param-sizes><param-data>
97 <mandatory-count><advisory-count><param-sizes><param-data>
98
98
99 :mandatory-count: 1 byte, number of mandatory parameters
99 :mandatory-count: 1 byte, number of mandatory parameters
100
100
101 :advisory-count: 1 byte, number of advisory parameters
101 :advisory-count: 1 byte, number of advisory parameters
102
102
103 :param-sizes:
103 :param-sizes:
104
104
105 N couple of bytes, where N is the total number of parameters. Each
105 N couple of bytes, where N is the total number of parameters. Each
106 couple contains (<size-of-key>, <size-of-value) for one parameter.
106 couple contains (<size-of-key>, <size-of-value) for one parameter.
107
107
108 :param-data:
108 :param-data:
109
109
110 A blob of bytes from which each parameter key and value can be
110 A blob of bytes from which each parameter key and value can be
111 retrieved using the list of size couples stored in the previous
111 retrieved using the list of size couples stored in the previous
112 field.
112 field.
113
113
114 Mandatory parameters comes first, then the advisory ones.
114 Mandatory parameters comes first, then the advisory ones.
115
115
116 Each parameter's key MUST be unique within the part.
116 Each parameter's key MUST be unique within the part.
117
117
118 :payload:
118 :payload:
119
119
120 payload is a series of `<chunksize><chunkdata>`.
120 payload is a series of `<chunksize><chunkdata>`.
121
121
122 `chunksize` is an int32, `chunkdata` are plain bytes (as much as
122 `chunksize` is an int32, `chunkdata` are plain bytes (as much as
123 `chunksize` says)` The payload part is concluded by a zero size chunk.
123 `chunksize` says)` The payload part is concluded by a zero size chunk.
124
124
125 The current implementation always produces either zero or one chunk.
125 The current implementation always produces either zero or one chunk.
126 This is an implementation limitation that will ultimately be lifted.
126 This is an implementation limitation that will ultimately be lifted.
127
127
128 `chunksize` can be negative to trigger special case processing. No such
128 `chunksize` can be negative to trigger special case processing. No such
129 processing is in place yet.
129 processing is in place yet.
130
130
131 Bundle processing
131 Bundle processing
132 ============================
132 ============================
133
133
134 Each part is processed in order using a "part handler". Handler are registered
134 Each part is processed in order using a "part handler". Handler are registered
135 for a certain part type.
135 for a certain part type.
136
136
137 The matching of a part to its handler is case insensitive. The case of the
137 The matching of a part to its handler is case insensitive. The case of the
138 part type is used to know if a part is mandatory or advisory. If the Part type
138 part type is used to know if a part is mandatory or advisory. If the Part type
139 contains any uppercase char it is considered mandatory. When no handler is
139 contains any uppercase char it is considered mandatory. When no handler is
140 known for a Mandatory part, the process is aborted and an exception is raised.
140 known for a Mandatory part, the process is aborted and an exception is raised.
141 If the part is advisory and no handler is known, the part is ignored. When the
141 If the part is advisory and no handler is known, the part is ignored. When the
142 process is aborted, the full bundle is still read from the stream to keep the
142 process is aborted, the full bundle is still read from the stream to keep the
143 channel usable. But none of the part read from an abort are processed. In the
143 channel usable. But none of the part read from an abort are processed. In the
144 future, dropping the stream may become an option for channel we do not care to
144 future, dropping the stream may become an option for channel we do not care to
145 preserve.
145 preserve.
146 """
146 """

from __future__ import absolute_import

import errno
import re
import string
import struct
import sys

from .i18n import _
from . import (
    changegroup,
    error,
    obsolete,
    pushkey,
    pycompat,
    tags,
    url,
    util,
)

urlerr = util.urlerr
urlreq = util.urlreq

_pack = struct.pack
_unpack = struct.unpack

_fstreamparamsize = '>i'
_fpartheadersize = '>i'
_fparttypesize = '>B'
_fpartid = '>I'
_fpayloadsize = '>i'
_fpartparamcount = '>BB'

preferedchunksize = 4096

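# Illustrative sketch, not part of the original module (helper name is
# hypothetical): the stream level framing documented above can be exercised
# with nothing more than the struct formats defined here. This builds the
# smallest possible HG20 stream -- magic string, zero-length parameter block,
# end-of-stream marker -- and reads it back.
def _example_streamframing():
    import io
    stream = io.BytesIO()
    stream.write('HG20')                       # magic string
    stream.write(_pack(_fstreamparamsize, 0))  # empty stream parameter block
    stream.write(_pack(_fpartheadersize, 0))   # empty header: end of stream
    stream.seek(0)
    assert stream.read(4) == 'HG20'
    assert _unpack(_fstreamparamsize, stream.read(4))[0] == 0  # no parameters
    assert _unpack(_fpartheadersize, stream.read(4))[0] == 0   # end marker
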
_parttypeforbidden = re.compile('[^a-zA-Z0-9_:-]')

def outdebug(ui, message):
    """debug regarding output stream (bundling)"""
    if ui.configbool('devel', 'bundle2.debug', False):
        ui.debug('bundle2-output: %s\n' % message)

def indebug(ui, message):
    """debug on input stream (unbundling)"""
    if ui.configbool('devel', 'bundle2.debug', False):
        ui.debug('bundle2-input: %s\n' % message)

def validateparttype(parttype):
    """raise ValueError if a parttype contains an invalid character"""
    if _parttypeforbidden.search(parttype):
        raise ValueError(parttype)

def _makefpartparamsizes(nbparams):
    """return a struct format to read part parameter sizes

    The number of parameters is variable so we need to build that format
    dynamically.
    """
    return '>'+('BB'*nbparams)

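# Illustrative sketch, not part of the original module (helper name is
# hypothetical): for two parameters, _makefpartparamsizes(2) returns '>BBBB',
# one (<size-of-key>, <size-of-value>) byte couple per parameter, matching the
# part header layout described in the module docstring.
def _example_paramsizes():
    fmt = _makefpartparamsizes(2)
    assert fmt == '>BBBB'
    data = struct.pack(fmt, 3, 5, 4, 0)  # two params: 3/5 and 4/0 byte sizes
    assert struct.unpack(fmt, data) == (3, 5, 4, 0)
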
parthandlermapping = {}

def parthandler(parttype, params=()):
    """decorator that registers a function as a bundle2 part handler

    eg::

        @parthandler('myparttype', ('mandatory', 'param', 'handled'))
        def myparttypehandler(...):
            '''process a part of type "my part".'''
            ...
    """
    validateparttype(parttype)
    def _decorator(func):
        lparttype = parttype.lower() # enforce lower case matching.
        assert lparttype not in parthandlermapping
        parthandlermapping[lparttype] = func
        func.params = frozenset(params)
        return func
    return _decorator

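# Illustrative sketch, not part of the original module (names hypothetical):
# registering a handler stores it under the lowercased part type, which is how
# _processpart later resolves `parthandlermapping.get(part.type)`.
def _example_parthandler():
    @parthandler('example:NOOP', ('param',))
    def noophandler(op, part):
        '''swallow the part without doing anything'''
        pass
    # the mapping key is lower case; declared params become a frozenset
    assert 'example:noop' in parthandlermapping
    assert noophandler.params == frozenset(['param'])
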
class unbundlerecords(object):
    """keep record of what happens during an unbundle

    New records are added using `records.add('cat', obj)`. Where 'cat' is a
    category of record and obj is an arbitrary object.

    `records['cat']` will return all entries of this category 'cat'.

    Iterating on the object itself will yield `('category', obj)` tuples
    for all entries.

    All iterations happen in chronological order.
    """

    def __init__(self):
        self._categories = {}
        self._sequences = []
        self._replies = {}

    def add(self, category, entry, inreplyto=None):
        """add a new record of a given category.

        The entry can then be retrieved in the list returned by
        self['category']."""
        self._categories.setdefault(category, []).append(entry)
        self._sequences.append((category, entry))
        if inreplyto is not None:
            self.getreplies(inreplyto).add(category, entry)

    def getreplies(self, partid):
        """get the records that are replies to a specific part"""
        return self._replies.setdefault(partid, unbundlerecords())

    def __getitem__(self, cat):
        return tuple(self._categories.get(cat, ()))

    def __iter__(self):
        return iter(self._sequences)

    def __len__(self):
        return len(self._sequences)

    def __nonzero__(self):
        return bool(self._sequences)

    __bool__ = __nonzero__

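# Illustrative sketch, not part of the original module (helper name is
# hypothetical): basic use of unbundlerecords as described in its docstring.
def _example_records():
    records = unbundlerecords()
    records.add('changegroup', {'return': 1})
    records.add('output', 'hello', inreplyto=0)
    assert records['changegroup'] == ({'return': 1},)
    assert list(records) == [('changegroup', {'return': 1}),
                             ('output', 'hello')]
    # replies are grouped per originating part id
    assert records.getreplies(0)['output'] == ('hello',)
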
class bundleoperation(object):
    """an object that represents a single bundling process

    Its purpose is to carry unbundle-related objects and states.

    A new object should be created at the beginning of each bundle processing.
    The object is to be returned by the processing function.

    The object has very little content now; it will ultimately contain:
    * an access to the repo the bundle is applied to,
    * a ui object,
    * a way to retrieve a transaction to add changes to the repo,
    * a way to record the result of processing each part,
    * a way to construct a bundle response when applicable.
    """

    def __init__(self, repo, transactiongetter, captureoutput=True):
        self.repo = repo
        self.ui = repo.ui
        self.records = unbundlerecords()
        self.gettransaction = transactiongetter
        self.reply = None
        self.captureoutput = captureoutput

class TransactionUnavailable(RuntimeError):
    pass

def _notransaction():
    """default method to get a transaction while processing a bundle

    Raise an exception to highlight the fact that no transaction was expected
    to be created"""
    raise TransactionUnavailable()

def applybundle(repo, unbundler, tr, source=None, url=None, op=None):
    # transform me into unbundler.apply() as soon as the freeze is lifted
    tr.hookargs['bundle2'] = '1'
    if source is not None and 'source' not in tr.hookargs:
        tr.hookargs['source'] = source
    if url is not None and 'url' not in tr.hookargs:
        tr.hookargs['url'] = url
    return processbundle(repo, unbundler, lambda: tr, op=op)

def processbundle(repo, unbundler, transactiongetter=None, op=None):
    """This function processes a bundle, applying effects to/from a repo

    It iterates over each part then searches for and uses the proper handling
    code to process the part. Parts are processed in order.

    An unknown Mandatory part will abort the process.

    It is temporarily possible to provide a prebuilt bundleoperation to the
    function. This is used to ensure output is properly propagated in case of
    an error during the unbundling. This output capturing part will likely be
    reworked and this ability will probably go away in the process.
    """
    if op is None:
        if transactiongetter is None:
            transactiongetter = _notransaction
        op = bundleoperation(repo, transactiongetter)
    # todo:
    # - replace this with an init function soon.
    # - exception catching
    unbundler.params
    if repo.ui.debugflag:
        msg = ['bundle2-input-bundle:']
        if unbundler.params:
            msg.append(' %i params' % len(unbundler.params))
        if op.gettransaction is None:
            msg.append(' no-transaction')
        else:
            msg.append(' with-transaction')
        msg.append('\n')
        repo.ui.debug(''.join(msg))
    iterparts = enumerate(unbundler.iterparts())
    part = None
    nbpart = 0
    try:
        for nbpart, part in iterparts:
            _processpart(op, part)
    except Exception as exc:
        # Any exceptions seeking to the end of the bundle at this point are
        # almost certainly related to the underlying stream being bad.
        # And, chances are that the exception we're handling is related to
        # getting in that bad state. So, we swallow the seeking error and
        # re-raise the original error.
        seekerror = False
        try:
            for nbpart, part in iterparts:
                # consume the bundle content
                part.seek(0, 2)
        except Exception:
            seekerror = True

        # Small hack to let caller code distinguish exceptions from bundle2
        # processing from exceptions raised when processing the old format.
        # This is mostly needed to handle different return codes to unbundle
        # according to the type of bundle. We should probably clean up or drop
        # this return code craziness in a future version.
        exc.duringunbundle2 = True
        salvaged = []
        replycaps = None
        if op.reply is not None:
            salvaged = op.reply.salvageoutput()
            replycaps = op.reply.capabilities
        exc._replycaps = replycaps
        exc._bundle2salvagedoutput = salvaged

        # Re-raising from a variable loses the original stack. So only use
        # that form if we need to.
        if seekerror:
            raise exc
        else:
            raise
    finally:
        repo.ui.debug('bundle2-input-bundle: %i parts total\n' % nbpart)

    return op

def _processpart(op, part):
    """process a single part from a bundle

    The part is guaranteed to have been fully consumed when the function exits
    (even if an exception is raised)."""
    status = 'unknown' # used by debug output
    hardabort = False
    try:
        try:
            handler = parthandlermapping.get(part.type)
            if handler is None:
                status = 'unsupported-type'
                raise error.BundleUnknownFeatureError(parttype=part.type)
            indebug(op.ui, 'found a handler for part %r' % part.type)
            unknownparams = part.mandatorykeys - handler.params
            if unknownparams:
                unknownparams = list(unknownparams)
                unknownparams.sort()
                status = 'unsupported-params (%s)' % unknownparams
                raise error.BundleUnknownFeatureError(parttype=part.type,
                                                      params=unknownparams)
            status = 'supported'
        except error.BundleUnknownFeatureError as exc:
            if part.mandatory: # mandatory parts
                raise
            indebug(op.ui, 'ignoring unsupported advisory part %s' % exc)
            return # skip to part processing
        finally:
            if op.ui.debugflag:
                msg = ['bundle2-input-part: "%s"' % part.type]
                if not part.mandatory:
                    msg.append(' (advisory)')
                nbmp = len(part.mandatorykeys)
                nbap = len(part.params) - nbmp
                if nbmp or nbap:
                    msg.append(' (params:')
                    if nbmp:
                        msg.append(' %i mandatory' % nbmp)
                    if nbap:
                        msg.append(' %i advisory' % nbap)
                    msg.append(')')
                msg.append(' %s\n' % status)
                op.ui.debug(''.join(msg))

        # handler is called outside the above try block so that we don't
        # risk catching KeyErrors from anything other than the
        # parthandlermapping lookup (any KeyError raised by handler()
        # itself represents a defect of a different variety).
        output = None
        if op.captureoutput and op.reply is not None:
            op.ui.pushbuffer(error=True, subproc=True)
            output = ''
        try:
            handler(op, part)
        finally:
            if output is not None:
                output = op.ui.popbuffer()
            if output:
                outpart = op.reply.newpart('output', data=output,
                                           mandatory=False)
                outpart.addparam('in-reply-to', str(part.id), mandatory=False)
        # If exiting or interrupted, do not attempt to seek the stream in the
        # finally block below. This makes abort faster.
    except (SystemExit, KeyboardInterrupt):
        hardabort = True
        raise
    finally:
        # consume the part content to not corrupt the stream.
        if not hardabort:
            part.seek(0, 2)


def decodecaps(blob):
    """decode a bundle2 caps bytes blob into a dictionary

    The blob is a list of capabilities (one per line)
    Capabilities may have values using a line of the form::

        capability=value1,value2,value3

    The values are always a list."""
    caps = {}
    for line in blob.splitlines():
        if not line:
            continue
        if '=' not in line:
            key, vals = line, ()
        else:
            key, vals = line.split('=', 1)
            vals = vals.split(',')
        key = urlreq.unquote(key)
        vals = [urlreq.unquote(v) for v in vals]
        caps[key] = vals
    return caps

def encodecaps(caps):
    """encode a bundle2 caps dictionary into a bytes blob"""
    chunks = []
    for ca in sorted(caps):
        vals = caps[ca]
        ca = urlreq.quote(ca)
        vals = [urlreq.quote(v) for v in vals]
        if vals:
            ca = "%s=%s" % (ca, ','.join(vals))
        chunks.append(ca)
    return '\n'.join(chunks)

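# Illustrative sketch, not part of the original module (helper name is
# hypothetical): encodecaps/decodecaps round-trip. Note the asymmetry the
# docstrings describe: decodecaps always returns list values, so a caps dict
# should use lists for a clean round-trip.
def _example_capsroundtrip():
    caps = {'HG20': [], 'changegroup': ['01', '02']}
    blob = encodecaps(caps)
    assert blob == 'HG20\nchangegroup=01,02'
    assert decodecaps(blob) == {'HG20': [], 'changegroup': ['01', '02']}
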
bundletypes = {
    "": ("", 'UN'),       # only when using unbundle on ssh and old http servers
                          # since the unification ssh accepts a header but there
                          # is no capability signaling it.
    "HG20": (), # special-cased below
    "HG10UN": ("HG10UN", 'UN'),
    "HG10BZ": ("HG10", 'BZ'),
    "HG10GZ": ("HG10GZ", 'GZ'),
}

# hgweb uses this list to communicate its preferred type
bundlepriority = ['HG10GZ', 'HG10BZ', 'HG10UN']

class bundle20(object):
    """represent an outgoing bundle2 container

    Use the `addparam` method to add stream level parameters and `newpart` to
    populate it. Then call `getchunks` to retrieve all the binary chunks of
    data that compose the bundle2 container."""

    _magicstring = 'HG20'

    def __init__(self, ui, capabilities=()):
        self.ui = ui
        self._params = []
        self._parts = []
        self.capabilities = dict(capabilities)
        self._compengine = util.compengines.forbundletype('UN')
        self._compopts = None

    def setcompression(self, alg, compopts=None):
        """setup core part compression to <alg>"""
        if alg in (None, 'UN'):
            return
        assert not any(n.lower() == 'compression' for n, v in self._params)
        self.addparam('Compression', alg)
        self._compengine = util.compengines.forbundletype(alg)
        self._compopts = compopts

    @property
    def nbparts(self):
        """total number of parts added to the bundler"""
        return len(self._parts)

    # methods used to define the bundle2 content
    def addparam(self, name, value=None):
        """add a stream level parameter"""
        if not name:
            raise ValueError('empty parameter name')
        if name[0] not in string.letters:
            raise ValueError('non letter first character: %r' % name)
        self._params.append((name, value))

    def addpart(self, part):
        """add a new part to the bundle2 container

        Parts contain the actual applicative payload."""
        assert part.id is None
        part.id = len(self._parts) # very cheap counter
        self._parts.append(part)

    def newpart(self, typeid, *args, **kwargs):
        """create a new part and add it to the container

        Since the part is directly added to the container, any failure to
        properly initialize the part after calling ``newpart`` should result
        in a failure of the whole bundling process.

        You can still fall back to manually creating and adding the part if
        you need better control."""
        part = bundlepart(typeid, *args, **kwargs)
        self.addpart(part)
        return part

    # methods used to generate the bundle2 stream
    def getchunks(self):
        if self.ui.debugflag:
            msg = ['bundle2-output-bundle: "%s",' % self._magicstring]
            if self._params:
                msg.append(' (%i params)' % len(self._params))
            msg.append(' %i parts total\n' % len(self._parts))
            self.ui.debug(''.join(msg))
        outdebug(self.ui, 'start emission of %s stream' % self._magicstring)
        yield self._magicstring
        param = self._paramchunk()
        outdebug(self.ui, 'bundle parameter: %s' % param)
        yield _pack(_fstreamparamsize, len(param))
        if param:
            yield param
        for chunk in self._compengine.compressstream(self._getcorechunk(),
                                                     self._compopts):
            yield chunk

    def _paramchunk(self):
        """return an encoded version of all stream parameters"""
        blocks = []
        for par, value in self._params:
            par = urlreq.quote(par)
            if value is not None:
                value = urlreq.quote(value)
                par = '%s=%s' % (par, value)
            blocks.append(par)
        return ' '.join(blocks)

    def _getcorechunk(self):
        """yield chunks for the core part of the bundle

        (all but headers and parameters)"""
        outdebug(self.ui, 'start of parts')
        for part in self._parts:
            outdebug(self.ui, 'bundle part: "%s"' % part.type)
            for chunk in part.getchunks(ui=self.ui):
                yield chunk
        outdebug(self.ui, 'end of bundle')
        yield _pack(_fpartheadersize, 0)

    def salvageoutput(self):
        """return a list with a copy of all output parts in the bundle

        This is meant to be used during error handling to make sure we
        preserve server output"""
        salvaged = []
        for part in self._parts:
            if part.type.startswith('output'):
                salvaged.append(part.copy())
        return salvaged

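# Illustrative sketch, not part of the original module: assembling an empty
# bundle20 container and concatenating its chunks. The stub ui class is a
# hypothetical stand-in for a mercurial.ui.ui object; it only provides the
# attributes this code path touches.
def _example_emptybundle20():
    class stubui(object):
        debugflag = False
        def configbool(self, section, name, default=False):
            return default
        def debug(self, msg):
            pass
    bundler = bundle20(stubui())
    bundler.addparam('obsmarkers', 'yes')  # hypothetical advisory parameter
    data = ''.join(bundler.getchunks())
    param = 'obsmarkers=yes'
    # magic string, sized parameter block, end-of-stream marker
    assert data == ('HG20' + _pack(_fstreamparamsize, len(param)) + param
                    + _pack(_fpartheadersize, 0))
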
class unpackermixin(object):
    """A mixin to extract bytes and struct data from a stream"""

    def __init__(self, fp):
        self._fp = fp

    def _unpack(self, format):
        """unpack this struct format from the stream

        This method is meant for internal usage by the bundle2 protocol only.
        It directly manipulates the low level stream, including bundle2 level
        instructions.

        Do not use it to implement higher-level logic or methods."""
        data = self._readexact(struct.calcsize(format))
        return _unpack(format, data)

    def _readexact(self, size):
        """read exactly <size> bytes from the stream

        This method is meant for internal usage by the bundle2 protocol only.
        It directly manipulates the low level stream, including bundle2 level
        instructions.

        Do not use it to implement higher-level logic or methods."""
        return changegroup.readexactly(self._fp, size)

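# Illustrative sketch, not part of the original module (helper name is
# hypothetical): any readable file pointer can be wrapped in unpackermixin to
# get exact-size and struct-format reads; readexactly raises on a truncated
# stream rather than returning short data.
def _example_unpackermixin():
    import io
    reader = unpackermixin(io.BytesIO(_pack(_fpartid, 42) + 'tail'))
    assert reader._unpack(_fpartid) == (42,)
    assert reader._readexact(4) == 'tail'
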
def getunbundler(ui, fp, magicstring=None):
    """return a valid unbundler object for a given magicstring"""
    if magicstring is None:
        magicstring = changegroup.readexactly(fp, 4)
    magic, version = magicstring[0:2], magicstring[2:4]
    if magic != 'HG':
        raise error.Abort(_('not a Mercurial bundle'))
    unbundlerclass = formatmap.get(version)
    if unbundlerclass is None:
        raise error.Abort(_('unknown bundle version %s') % version)
    unbundler = unbundlerclass(ui, fp)
    indebug(ui, 'start processing of %s stream' % magicstring)
    return unbundler

class unbundle20(unpackermixin):
    """interpret a bundle2 stream

    This class is fed with a binary stream and yields parts through its
    `iterparts` method."""

    _magicstring = 'HG20'

    def __init__(self, ui, fp):
        """If header is specified, we do not read it out of the stream."""
        self.ui = ui
        self._compengine = util.compengines.forbundletype('UN')
        self._compressed = None
        super(unbundle20, self).__init__(fp)

    @util.propertycache
    def params(self):
        """dictionary of stream level parameters"""
        indebug(self.ui, 'reading bundle2 stream parameters')
        params = {}
        paramssize = self._unpack(_fstreamparamsize)[0]
        if paramssize < 0:
            raise error.BundleValueError('negative bundle param size: %i'
                                         % paramssize)
        if paramssize:
            params = self._readexact(paramssize)
            params = self._processallparams(params)
        return params

    def _processallparams(self, paramsblock):
        """process the whole block of stream level parameters"""
        params = util.sortdict()
        for p in paramsblock.split(' '):
            p = p.split('=', 1)
            p = [urlreq.unquote(i) for i in p]
            if len(p) < 2:
                p.append(None)
            self._processparam(*p)
            params[p[0]] = p[1]
        return params

    def _processparam(self, name, value):
        """process a parameter, applying its effect if needed

        Parameters starting with a lower case letter are advisory and will be
        ignored when unknown. Those starting with an upper case letter are
        mandatory, and this function will raise a KeyError when they are
        unknown.

        Note: no options are currently supported. Any input will be either
        ignored or failing.
        """
        if not name:
            raise ValueError('empty parameter name')
        if name[0] not in string.letters:
            raise ValueError('non letter first character: %r' % name)
        try:
            handler = b2streamparamsmap[name.lower()]
        except KeyError:
            if name[0].islower():
                indebug(self.ui, "ignoring unknown parameter %r" % name)
            else:
                raise error.BundleUnknownFeatureError(params=(name,))
        else:
            handler(self, name, value)

    def _forwardchunks(self):
        """utility to transfer a bundle2 as binary

        This is made necessary by the fact the 'getbundle' command over 'ssh'
        has no way to know when the reply ends, relying on the bundle to be
        interpreted to know its end. This is terrible and we are sorry, but we
        needed to move forward to get general delta enabled.
        """
        yield self._magicstring
        assert 'params' not in vars(self)
        paramssize = self._unpack(_fstreamparamsize)[0]
        if paramssize < 0:
            raise error.BundleValueError('negative bundle param size: %i'
                                         % paramssize)
        yield _pack(_fstreamparamsize, paramssize)
        if paramssize:
            params = self._readexact(paramssize)
            self._processallparams(params)
            yield params
        assert self._compengine.bundletype == 'UN'
        # From there, payload might need to be decompressed
        self._fp = self._compengine.decompressorreader(self._fp)
        emptycount = 0
        while emptycount < 2:
            # so we can brainlessly loop
            assert _fpartheadersize == _fpayloadsize
            size = self._unpack(_fpartheadersize)[0]
            yield _pack(_fpartheadersize, size)
            if size:
                emptycount = 0
            else:
                emptycount += 1
                continue
            if size == flaginterrupt:
                continue
            elif size < 0:
                raise error.BundleValueError('negative chunk size: %i' % size)
            yield self._readexact(size)

    def iterparts(self):
        """yield all parts contained in the stream"""
        # make sure params have been loaded
        self.params
        # From there, payload needs to be decompressed
        self._fp = self._compengine.decompressorreader(self._fp)
        indebug(self.ui, 'start extraction of bundle2 parts')
        headerblock = self._readpartheader()
        while headerblock is not None:
            part = unbundlepart(self.ui, headerblock, self._fp)
            yield part
            part.seek(0, 2)
            headerblock = self._readpartheader()
        indebug(self.ui, 'end of bundle2 stream')

    def _readpartheader(self):
        """reads a part header size and returns the bytes blob

        returns None if empty"""
        headersize = self._unpack(_fpartheadersize)[0]
        if headersize < 0:
            raise error.BundleValueError('negative part header size: %i'
                                         % headersize)
        indebug(self.ui, 'part header size: %i' % headersize)
        if headersize:
            return self._readexact(headersize)
        return None

    def compressed(self):
        self.params # load params
        return self._compressed

    def close(self):
        """close underlying file"""
        if util.safehasattr(self._fp, 'close'):
            return self._fp.close()

formatmap = {'20': unbundle20}

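# Illustrative sketch, not part of the original module: feeding the empty
# HG20 stream from the framing example above through unbundle20. The stub ui
# is again a hypothetical stand-in providing just what indebug needs; the
# unbundler is constructed with the 4-byte magic string already consumed, as
# getunbundler would do.
def _example_emptyunbundle20():
    import io
    class stubui(object):
        def configbool(self, section, name, default=False):
            return default
        def debug(self, msg):
            pass
    body = _pack(_fstreamparamsize, 0) + _pack(_fpartheadersize, 0)
    unbundler = unbundle20(stubui(), io.BytesIO(body))
    assert unbundler.params == {}              # empty parameter block
    assert list(unbundler.iterparts()) == []   # end-of-stream marker only
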
b2streamparamsmap = {}

def b2streamparamhandler(name):
    """register a handler for a stream level parameter"""
    def decorator(func):
        assert name not in b2streamparamsmap
        b2streamparamsmap[name] = func
        return func
    return decorator

@b2streamparamhandler('compression')
def processcompression(unbundler, param, value):
    """read compression parameter and install payload decompression"""
    if value not in util.compengines.supportedbundletypes:
        raise error.BundleUnknownFeatureError(params=(param,),
                                              values=(value,))
    unbundler._compengine = util.compengines.forbundletype(value)
    if value is not None:
        unbundler._compressed = True

837 class bundlepart(object):
837 class bundlepart(object):
838 """A bundle2 part contains application level payload
838 """A bundle2 part contains application level payload
839
839
840 The part `type` is used to route the part to the application level
840 The part `type` is used to route the part to the application level
841 handler.
841 handler.
842
842
843 The part payload is contained in ``part.data``. It could be raw bytes or a
843 The part payload is contained in ``part.data``. It could be raw bytes or a
844 generator of byte chunks.
844 generator of byte chunks.
845
845
846 You can add parameters to the part using the ``addparam`` method.
846 You can add parameters to the part using the ``addparam`` method.
847 Parameters can be either mandatory (default) or advisory. Remote side
847 Parameters can be either mandatory (default) or advisory. Remote side
848 should be able to safely ignore the advisory ones.
848 should be able to safely ignore the advisory ones.
849
849
850 Both data and parameters cannot be modified after the generation has begun.
850 Both data and parameters cannot be modified after the generation has begun.
851 """
851 """

    def __init__(self, parttype, mandatoryparams=(), advisoryparams=(),
                 data='', mandatory=True):
        validateparttype(parttype)
        self.id = None
        self.type = parttype
        self._data = data
        self._mandatoryparams = list(mandatoryparams)
        self._advisoryparams = list(advisoryparams)
        # checking for duplicated entries
        self._seenparams = set()
        for pname, __ in self._mandatoryparams + self._advisoryparams:
            if pname in self._seenparams:
                raise error.ProgrammingError('duplicated params: %s' % pname)
            self._seenparams.add(pname)
        # status of the part's generation:
        # - None: not started,
        # - False: currently generated,
        # - True: generation done.
        self._generated = None
        self.mandatory = mandatory

    def __repr__(self):
        cls = "%s.%s" % (self.__class__.__module__, self.__class__.__name__)
        return ('<%s object at %x; id: %s; type: %s; mandatory: %s>'
                % (cls, id(self), self.id, self.type, self.mandatory))

    def copy(self):
        """return a copy of the part

        The new part has the very same content but no partid assigned yet.
        Parts with generated data cannot be copied."""
        assert not util.safehasattr(self.data, 'next')
        return self.__class__(self.type, self._mandatoryparams,
                              self._advisoryparams, self._data, self.mandatory)

    # methods used to define the part content
    @property
    def data(self):
        return self._data

    @data.setter
    def data(self, data):
        if self._generated is not None:
            raise error.ReadOnlyPartError('part is being generated')
        self._data = data

    @property
    def mandatoryparams(self):
        # make it an immutable tuple to force people through ``addparam``
        return tuple(self._mandatoryparams)

    @property
    def advisoryparams(self):
        # make it an immutable tuple to force people through ``addparam``
        return tuple(self._advisoryparams)

    def addparam(self, name, value='', mandatory=True):
        """add a parameter to the part

        If 'mandatory' is set to True, the remote handler must claim support
        for this parameter or the unbundling will be aborted.

        The 'name' and 'value' cannot exceed 255 bytes each.
        """
        if self._generated is not None:
            raise error.ReadOnlyPartError('part is being generated')
        if name in self._seenparams:
            raise ValueError('duplicated params: %s' % name)
        self._seenparams.add(name)
        params = self._advisoryparams
        if mandatory:
            params = self._mandatoryparams
        params.append((name, value))
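
    # A hedged usage sketch (illustration only, not part of the original
    # source): parameters are attached before generation starts; the values
    # '02' and '42' below are made up, while the parameter names mirror the
    # changegroup part built elsewhere in this module.
    #
    #   part = bundlepart('changegroup', data=cg.getchunks())
    #   part.addparam('version', '02')                      # mandatory
    #   part.addparam('nbchanges', '42', mandatory=False)   # advisory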

    # methods used to generate the bundle2 stream
    def getchunks(self, ui):
        if self._generated is not None:
            raise error.ProgrammingError('part can only be consumed once')
        self._generated = False

        if ui.debugflag:
            msg = ['bundle2-output-part: "%s"' % self.type]
            if not self.mandatory:
                msg.append(' (advisory)')
            nbmp = len(self.mandatoryparams)
            nbap = len(self.advisoryparams)
            if nbmp or nbap:
                msg.append(' (params:')
                if nbmp:
                    msg.append(' %i mandatory' % nbmp)
                if nbap:
                    msg.append(' %i advisory' % nbap)
                msg.append(')')
            if not self.data:
                msg.append(' empty payload')
            elif util.safehasattr(self.data, 'next'):
                msg.append(' streamed payload')
            else:
                msg.append(' %i bytes payload' % len(self.data))
            msg.append('\n')
            ui.debug(''.join(msg))

        #### header
        if self.mandatory:
            parttype = self.type.upper()
        else:
            parttype = self.type.lower()
        outdebug(ui, 'part %s: "%s"' % (self.id, parttype))
        ## parttype
        header = [_pack(_fparttypesize, len(parttype)),
                  parttype, _pack(_fpartid, self.id),
                  ]
        ## parameters
        # count
        manpar = self.mandatoryparams
        advpar = self.advisoryparams
        header.append(_pack(_fpartparamcount, len(manpar), len(advpar)))
        # size
        parsizes = []
        for key, value in manpar:
            parsizes.append(len(key))
            parsizes.append(len(value))
        for key, value in advpar:
            parsizes.append(len(key))
            parsizes.append(len(value))
        paramsizes = _pack(_makefpartparamsizes(len(parsizes) // 2), *parsizes)
        header.append(paramsizes)
        # key, value
        for key, value in manpar:
            header.append(key)
            header.append(value)
        for key, value in advpar:
            header.append(key)
            header.append(value)
        ## finalize header
        headerchunk = ''.join(header)
        outdebug(ui, 'header chunk size: %i' % len(headerchunk))
        yield _pack(_fpartheadersize, len(headerchunk))
        yield headerchunk
        ## payload
        try:
            for chunk in self._payloadchunks():
                outdebug(ui, 'payload chunk size: %i' % len(chunk))
                yield _pack(_fpayloadsize, len(chunk))
                yield chunk
        except GeneratorExit:
            # GeneratorExit means that nobody is listening for our
            # results anyway, so just bail quickly rather than trying
            # to produce an error part.
            ui.debug('bundle2-generatorexit\n')
            raise
        except BaseException as exc:
            # backup exception data for later
            ui.debug('bundle2-input-stream-interrupt: encoding exception %s'
                     % exc)
            tb = sys.exc_info()[2]
            msg = 'unexpected error: %s' % exc
            interpart = bundlepart('error:abort', [('message', msg)],
                                   mandatory=False)
            interpart.id = 0
            yield _pack(_fpayloadsize, -1)
            for chunk in interpart.getchunks(ui=ui):
                yield chunk
            outdebug(ui, 'closing payload chunk')
            # abort current part payload
            yield _pack(_fpayloadsize, 0)
            pycompat.raisewithtb(exc, tb)
        # end of payload
        outdebug(ui, 'closing payload chunk')
        yield _pack(_fpayloadsize, 0)
        self._generated = True
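
    # Hedged sketch (illustration only): a part is consumed exactly once by
    # streaming its chunks, typically from a bundler writing to a file-like
    # object; `fh` below is a made-up name.
    #
    #   for chunk in part.getchunks(ui):
    #       fh.write(chunk)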

    def _payloadchunks(self):
        """yield chunks of the part payload

        Exists to handle the different methods to provide data to a part."""
        # we only support fixed size data now.
        # This will be improved in the future.
        if util.safehasattr(self.data, 'next'):
            buff = util.chunkbuffer(self.data)
            chunk = buff.read(preferedchunksize)
            while chunk:
                yield chunk
                chunk = buff.read(preferedchunksize)
        elif len(self.data):
            yield self.data


flaginterrupt = -1

class interrupthandler(unpackermixin):
    """read one part and process it with restricted capability

    This allows transmitting exceptions raised on the producer side during
    part iteration while the consumer is reading a part.

    Parts processed in this manner only have access to a ui object."""

    def __init__(self, ui, fp):
        super(interrupthandler, self).__init__(fp)
        self.ui = ui

    def _readpartheader(self):
        """reads a part header size and returns the bytes blob

        returns None if empty"""
        headersize = self._unpack(_fpartheadersize)[0]
        if headersize < 0:
            raise error.BundleValueError('negative part header size: %i'
                                         % headersize)
        indebug(self.ui, 'part header size: %i\n' % headersize)
        if headersize:
            return self._readexact(headersize)
        return None

    def __call__(self):

        self.ui.debug('bundle2-input-stream-interrupt:'
                      ' opening out of band context\n')
        indebug(self.ui, 'bundle2 stream interruption, looking for a part.')
        headerblock = self._readpartheader()
        if headerblock is None:
            indebug(self.ui, 'no part found during interruption.')
            return
        part = unbundlepart(self.ui, headerblock, self._fp)
        op = interruptoperation(self.ui)
        _processpart(op, part)
        self.ui.debug('bundle2-input-stream-interrupt:'
                      ' closing out of band context\n')

class interruptoperation(object):
    """A limited operation to be used by part handlers during interruption

    It only has access to a ui object.
    """

    def __init__(self, ui):
        self.ui = ui
        self.reply = None
        self.captureoutput = False

    @property
    def repo(self):
        raise error.ProgrammingError('no repo access from stream interruption')

    def gettransaction(self):
        raise TransactionUnavailable('no repo access from stream interruption')

class unbundlepart(unpackermixin):
    """a bundle part read from a bundle"""

    def __init__(self, ui, header, fp):
        super(unbundlepart, self).__init__(fp)
        self._seekable = (util.safehasattr(fp, 'seek') and
                          util.safehasattr(fp, 'tell'))
        self.ui = ui
        # unbundle state attr
        self._headerdata = header
        self._headeroffset = 0
        self._initialized = False
        self.consumed = False
        # part data
        self.id = None
        self.type = None
        self.mandatoryparams = None
        self.advisoryparams = None
        self.params = None
        self.mandatorykeys = ()
        self._payloadstream = None
        self._readheader()
        self._mandatory = None
        self._chunkindex = [] # (payload, file) position tuples for chunk starts
        self._pos = 0

    def _fromheader(self, size):
        """return the next <size> bytes from the header"""
        offset = self._headeroffset
        data = self._headerdata[offset:(offset + size)]
        self._headeroffset = offset + size
        return data

    def _unpackheader(self, format):
        """read given format from header

        This automatically computes the size of the format to read."""
        data = self._fromheader(struct.calcsize(format))
        return _unpack(format, data)

    def _initparams(self, mandatoryparams, advisoryparams):
        """internal function to setup all logic related parameters"""
        # make it read only to prevent people touching it by mistake.
        self.mandatoryparams = tuple(mandatoryparams)
        self.advisoryparams = tuple(advisoryparams)
        # user friendly UI
        self.params = util.sortdict(self.mandatoryparams)
        self.params.update(self.advisoryparams)
        self.mandatorykeys = frozenset(p[0] for p in mandatoryparams)

    def _payloadchunks(self, chunknum=0):
        '''seek to specified chunk and start yielding data'''
        if len(self._chunkindex) == 0:
            assert chunknum == 0, 'Must start with chunk 0'
            self._chunkindex.append((0, self._tellfp()))
        else:
            assert chunknum < len(self._chunkindex), \
                   'Unknown chunk %d' % chunknum
            self._seekfp(self._chunkindex[chunknum][1])

        pos = self._chunkindex[chunknum][0]
        payloadsize = self._unpack(_fpayloadsize)[0]
        indebug(self.ui, 'payload chunk size: %i' % payloadsize)
        while payloadsize:
            if payloadsize == flaginterrupt:
                # interruption detection, the handler will now read a
                # single part and process it.
                interrupthandler(self.ui, self._fp)()
            elif payloadsize < 0:
                msg = 'negative payload chunk size: %i' % payloadsize
                raise error.BundleValueError(msg)
            else:
                result = self._readexact(payloadsize)
                chunknum += 1
                pos += payloadsize
                if chunknum == len(self._chunkindex):
                    self._chunkindex.append((pos, self._tellfp()))
                yield result
            payloadsize = self._unpack(_fpayloadsize)[0]
            indebug(self.ui, 'payload chunk size: %i' % payloadsize)

    def _findchunk(self, pos):
        '''for a given payload position, return a chunk number and offset'''
        for chunk, (ppos, fpos) in enumerate(self._chunkindex):
            if ppos == pos:
                return chunk, 0
            elif ppos > pos:
                return chunk - 1, pos - self._chunkindex[chunk - 1][0]
        raise ValueError('Unknown chunk')

    def _readheader(self):
        """read the header and setup the object"""
        typesize = self._unpackheader(_fparttypesize)[0]
        self.type = self._fromheader(typesize)
        indebug(self.ui, 'part type: "%s"' % self.type)
        self.id = self._unpackheader(_fpartid)[0]
        indebug(self.ui, 'part id: "%s"' % self.id)
        # extract mandatory bit from type
        self.mandatory = (self.type != self.type.lower())
        self.type = self.type.lower()
        ## reading parameters
        # param count
        mancount, advcount = self._unpackheader(_fpartparamcount)
        indebug(self.ui, 'part parameters: %i' % (mancount + advcount))
        # param size
        fparamsizes = _makefpartparamsizes(mancount + advcount)
        paramsizes = self._unpackheader(fparamsizes)
        # make it a list of couples again
        paramsizes = zip(paramsizes[::2], paramsizes[1::2])
        # split mandatory from advisory
        mansizes = paramsizes[:mancount]
        advsizes = paramsizes[mancount:]
        # retrieve param value
        manparams = []
        for key, value in mansizes:
            manparams.append((self._fromheader(key), self._fromheader(value)))
        advparams = []
        for key, value in advsizes:
            advparams.append((self._fromheader(key), self._fromheader(value)))
        self._initparams(manparams, advparams)
        ## part payload
        self._payloadstream = util.chunkbuffer(self._payloadchunks())
        # we read the data, tell it
        self._initialized = True

    def read(self, size=None):
        """read payload data"""
        if not self._initialized:
            self._readheader()
        if size is None:
            data = self._payloadstream.read()
        else:
            data = self._payloadstream.read(size)
        self._pos += len(data)
        if size is None or len(data) < size:
            if not self.consumed and self._pos:
                self.ui.debug('bundle2-input-part: total payload size %i\n'
                              % self._pos)
            self.consumed = True
        return data

    def tell(self):
        return self._pos

    def seek(self, offset, whence=0):
        if whence == 0:
            newpos = offset
        elif whence == 1:
            newpos = self._pos + offset
        elif whence == 2:
            if not self.consumed:
                self.read()
            newpos = self._chunkindex[-1][0] - offset
        else:
            raise ValueError('Unknown whence value: %r' % (whence,))

        if newpos > self._chunkindex[-1][0] and not self.consumed:
            self.read()
        if not 0 <= newpos <= self._chunkindex[-1][0]:
            raise ValueError('Offset out of range')

        if self._pos != newpos:
            chunk, internaloffset = self._findchunk(newpos)
            self._payloadstream = util.chunkbuffer(self._payloadchunks(chunk))
            adjust = self.read(internaloffset)
            if len(adjust) != internaloffset:
                raise error.Abort(_('Seek failed\n'))
            self._pos = newpos
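
    # Hedged sketch (illustration only): the file-like interface above allows
    # random access within an already-read payload, e.g.:
    #
    #   part.read(10)     # consume the first ten bytes
    #   part.seek(0)      # rewind to the payload start
    #   assert part.tell() == 0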

    def _seekfp(self, offset, whence=0):
        """move the underlying file pointer

        This method is meant for internal usage by the bundle2 protocol only.
        It directly manipulates the low level stream, including bundle2 level
        instructions.

        Do not use it to implement higher-level logic or methods."""
        if self._seekable:
            return self._fp.seek(offset, whence)
        else:
            raise NotImplementedError(_('File pointer is not seekable'))

    def _tellfp(self):
        """return the file offset, or None if file is not seekable

        This method is meant for internal usage by the bundle2 protocol only.
        It directly manipulates the low level stream, including bundle2 level
        instructions.

        Do not use it to implement higher-level logic or methods."""
        if self._seekable:
            try:
                return self._fp.tell()
            except IOError as e:
                if e.errno == errno.ESPIPE:
                    self._seekable = False
                else:
                    raise
        return None

# These are only the static capabilities.
# Check the 'getrepocaps' function for the rest.
capabilities = {'HG20': (),
                'error': ('abort', 'unsupportedcontent', 'pushraced',
                          'pushkey'),
                'listkeys': (),
                'pushkey': (),
                'digests': tuple(sorted(util.DIGESTS.keys())),
                'remote-changegroup': ('http', 'https'),
                'hgtagsfnodes': (),
               }

def getrepocaps(repo, allowpushback=False):
    """return the bundle2 capabilities for a given repo

    Exists to allow extensions (like evolution) to mutate the capabilities.
    """
    caps = capabilities.copy()
    caps['changegroup'] = tuple(sorted(
        changegroup.supportedincomingversions(repo)))
    if obsolete.isenabled(repo, obsolete.exchangeopt):
        supportedformat = tuple('V%i' % v for v in obsolete.formats)
        caps['obsmarkers'] = supportedformat
    if allowpushback:
        caps['pushback'] = ()
    if not repo.ui.configbool('experimental', 'checkheads-strict', True):
        caps['checkheads'] = ('related',)
    return caps
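
# Hedged sketch (hypothetical extension code, not part of this module): the
# docstring above says extensions may mutate these capabilities, which would
# typically be done by wrapping getrepocaps; 'myfeature' is a made-up name.
#
#   from mercurial import bundle2, extensions
#
#   def _getrepocaps(orig, repo, **kwargs):
#       caps = orig(repo, **kwargs)
#       caps['myfeature'] = ('v1',)
#       return caps
#
#   def extsetup(ui):
#       extensions.wrapfunction(bundle2, 'getrepocaps', _getrepocaps)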

def bundle2caps(remote):
    """return the bundle capabilities of a peer as a dict"""
    raw = remote.capable('bundle2')
    if not raw and raw != '':
        return {}
    capsblob = urlreq.unquote(remote.capable('bundle2'))
    return decodecaps(capsblob)

def obsmarkersversion(caps):
    """extract the list of supported obsmarkers versions from a bundle2caps dict
    """
    obscaps = caps.get('obsmarkers', ())
    return [int(c[1:]) for c in obscaps if c.startswith('V')]
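
# For instance (illustrative values): a caps dict containing
# {'obsmarkers': ('V0', 'V1')} makes obsmarkersversion() return [0, 1].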

def writenewbundle(ui, repo, source, filename, bundletype, outgoing, opts,
                   vfs=None, compression=None, compopts=None):
    if bundletype.startswith('HG10'):
        cg = changegroup.getchangegroup(repo, source, outgoing, version='01')
        return writebundle(ui, cg, filename, bundletype, vfs=vfs,
                           compression=compression, compopts=compopts)
    elif not bundletype.startswith('HG20'):
        raise error.ProgrammingError('unknown bundle type: %s' % bundletype)

    caps = {}
    if 'obsolescence' in opts:
        caps['obsmarkers'] = ('V1',)
    bundle = bundle20(ui, caps)
    bundle.setcompression(compression, compopts)
    _addpartsfromopts(ui, repo, bundle, source, outgoing, opts)
    chunkiter = bundle.getchunks()

    return changegroup.writechunks(ui, chunkiter, filename, vfs=vfs)

def _addpartsfromopts(ui, repo, bundler, source, outgoing, opts):
    # We should eventually reconcile this logic with the one behind
    # 'exchange.getbundle2partsgenerator'.
    #
    # The types of input from 'getbundle' and 'writenewbundle' are a bit
    # different right now. So we keep them separated for now for the sake of
    # simplicity.

    # we always want a changegroup in such a bundle
    cgversion = opts.get('cg.version')
    if cgversion is None:
        cgversion = changegroup.safeversion(repo)
    cg = changegroup.getchangegroup(repo, source, outgoing,
                                    version=cgversion)
    part = bundler.newpart('changegroup', data=cg.getchunks())
    part.addparam('version', cg.version)
    if 'clcount' in cg.extras:
        part.addparam('nbchanges', str(cg.extras['clcount']),
                      mandatory=False)

    addparttagsfnodescache(repo, bundler, outgoing)

    if opts.get('obsolescence', False):
        obsmarkers = repo.obsstore.relevantmarkers(outgoing.missing)
        buildobsmarkerspart(bundler, obsmarkers)

def addparttagsfnodescache(repo, bundler, outgoing):
    # we include the tags fnode cache for the bundle changesets
    # (as an optional part)
    cache = tags.hgtagsfnodescache(repo.unfiltered())
    chunks = []

    # .hgtags fnodes are only relevant for head changesets. While we could
    # transfer values for all known nodes, there will likely be little to
    # no benefit.
    #
    # We don't bother using a generator to produce output data because
    # a) we only have 40 bytes per head and even esoteric numbers of heads
    # consume little memory (1M heads is 40MB) b) we don't want to send the
    # part if we don't have entries and knowing if we have entries requires
    # cache lookups.
    for node in outgoing.missingheads:
        # Don't compute missing, as this may slow down serving.
        fnode = cache.getfnode(node, computemissing=False)
        if fnode is not None:
            chunks.extend([node, fnode])

    if chunks:
        bundler.newpart('hgtagsfnodes', data=''.join(chunks))

def buildobsmarkerspart(bundler, markers):
    """add an obsmarker part to the bundler with <markers>

    No part is created if markers is empty.
    Raises ValueError if the bundler doesn't support any known obsmarker format.
    """
    if not markers:
        return None

    remoteversions = obsmarkersversion(bundler.capabilities)
    version = obsolete.commonversion(remoteversions)
    if version is None:
        raise ValueError('bundler does not support common obsmarker format')
    stream = obsolete.encodemarkers(markers, True, version=version)
    return bundler.newpart('obsmarkers', data=stream)

def writebundle(ui, cg, filename, bundletype, vfs=None, compression=None,
                compopts=None):
    """Write a bundle file and return its filename.

    Existing files will not be overwritten.
    If no filename is specified, a temporary file is created.
    bz2 compression can be turned off.
    The bundle file will be deleted in case of errors.
    """

    if bundletype == "HG20":
        bundle = bundle20(ui)
        bundle.setcompression(compression, compopts)
        part = bundle.newpart('changegroup', data=cg.getchunks())
        part.addparam('version', cg.version)
        if 'clcount' in cg.extras:
            part.addparam('nbchanges', str(cg.extras['clcount']),
                          mandatory=False)
        chunkiter = bundle.getchunks()
    else:
        # compression argument is only for the bundle2 case
        assert compression is None
        if cg.version != '01':
            raise error.Abort(_('old bundle types only support v1 '
                                'changegroups'))
        header, comp = bundletypes[bundletype]
        if comp not in util.compengines.supportedbundletypes:
            raise error.Abort(_('unknown stream compression type: %s')
                              % comp)
        compengine = util.compengines.forbundletype(comp)
        def chunkiter():
            yield header
            for chunk in compengine.compressstream(cg.getchunks(), compopts):
                yield chunk
        chunkiter = chunkiter()

    # parse the changegroup data, otherwise we will block
    # in case of sshrepo because we don't know the end of the stream
    return changegroup.writechunks(ui, chunkiter, filename, vfs=vfs)

@parthandler('changegroup', ('version', 'nbchanges', 'treemanifest'))
def handlechangegroup(op, inpart):
    """apply a changegroup part on the repo

    This is a very early implementation that will see massive rework before
    being inflicted on any end user.
    """
    # Make sure we trigger a transaction creation
    #
    # The addchangegroup function will get a transaction object by itself, but
    # we need to make sure we trigger the creation of a transaction object used
    # for the whole processing scope.
    op.gettransaction()
    unpackerversion = inpart.params.get('version', '01')
    # We should raise an appropriate exception here
    cg = changegroup.getunbundler(unpackerversion, inpart, None)
    # the source and url passed here are overwritten by the ones contained in
    # the transaction.hookargs argument. So 'bundle2' is a placeholder
    nbchangesets = None
    if 'nbchanges' in inpart.params:
        nbchangesets = int(inpart.params.get('nbchanges'))
    if ('treemanifest' in inpart.params and
        'treemanifest' not in op.repo.requirements):
        if len(op.repo.changelog) != 0:
            raise error.Abort(_(
                "bundle contains tree manifests, but local repo is "
                "non-empty and does not use tree manifests"))
        op.repo.requirements.add('treemanifest')
        op.repo._applyopenerreqs()
        op.repo._writerequirements()
    ret = cg.apply(op.repo, 'bundle2', 'bundle2', expectedtotal=nbchangesets)
    op.records.add('changegroup', {'return': ret})
    if op.reply is not None:
        # This is definitely not the final form of this
        # return. But one needs to start somewhere.
        part = op.reply.newpart('reply:changegroup', mandatory=False)
        part.addparam('in-reply-to', str(inpart.id), mandatory=False)
        part.addparam('return', '%i' % ret, mandatory=False)
    assert not inpart.read()

_remotechangegroupparams = tuple(['url', 'size', 'digests'] +
                                 ['digest:%s' % k for k in util.DIGESTS.keys()])
@parthandler('remote-changegroup', _remotechangegroupparams)
def handleremotechangegroup(op, inpart):
    """apply a bundle10 on the repo, given a url and validation information

    All the information about the remote bundle to import is given as
    parameters. The parameters include:
    - url: the url to the bundle10.
    - size: the bundle10 file size. It is used to validate what was
      retrieved by the client matches the server knowledge about the bundle.
    - digests: a space separated list of the digest types provided as
      parameters.
    - digest:<digest-type>: the hexadecimal representation of the digest with
      that name. Like the size, it is used to validate what was retrieved by
      the client matches what the server knows about the bundle.

    When multiple digest types are given, all of them are checked.
    """
    try:
        raw_url = inpart.params['url']
    except KeyError:
        raise error.Abort(_('remote-changegroup: missing "%s" param') % 'url')
    parsed_url = util.url(raw_url)
    if parsed_url.scheme not in capabilities['remote-changegroup']:
        raise error.Abort(_('remote-changegroup does not support %s urls') %
                          parsed_url.scheme)

    try:
        size = int(inpart.params['size'])
    except ValueError:
        raise error.Abort(_('remote-changegroup: invalid value for param "%s"')
                          % 'size')
    except KeyError:
        raise error.Abort(_('remote-changegroup: missing "%s" param') % 'size')

    digests = {}
    for typ in inpart.params.get('digests', '').split():
        param = 'digest:%s' % typ
        try:
            value = inpart.params[param]
        except KeyError:
            raise error.Abort(_('remote-changegroup: missing "%s" param') %
                              param)
        digests[typ] = value

    real_part = util.digestchecker(url.open(op.ui, raw_url), size, digests)

    # Make sure we trigger a transaction creation
    #
    # The addchangegroup function will get a transaction object by itself, but
    # we need to make sure we trigger the creation of a transaction object used
    # for the whole processing scope.
    op.gettransaction()
    from . import exchange
    cg = exchange.readbundle(op.repo.ui, real_part, raw_url)
    if not isinstance(cg, changegroup.cg1unpacker):
        raise error.Abort(_('%s: not a bundle version 1.0') %
                          util.hidepassword(raw_url))
    ret = cg.apply(op.repo, 'bundle2', 'bundle2')
    op.records.add('changegroup', {'return': ret})
    if op.reply is not None:
        # This is definitely not the final form of this
        # return. But one needs to start somewhere.
        part = op.reply.newpart('reply:changegroup')
        part.addparam('in-reply-to', str(inpart.id), mandatory=False)
        part.addparam('return', '%i' % ret, mandatory=False)
    try:
        real_part.validate()
    except error.Abort as e:
        raise error.Abort(_('bundle at %s is corrupted:\n%s') %
                          (util.hidepassword(raw_url), str(e)))
    assert not inpart.read()

@parthandler('reply:changegroup', ('return', 'in-reply-to'))
def handlereplychangegroup(op, inpart):
    ret = int(inpart.params['return'])
    replyto = int(inpart.params['in-reply-to'])
    op.records.add('changegroup', {'return': ret}, replyto)

@parthandler('check:heads')
def handlecheckheads(op, inpart):
    """check that the heads of the repo did not change

    This is used to detect a push race when using unbundle.
    This replaces the "heads" argument of unbundle."""
    h = inpart.read(20)
    heads = []
    while len(h) == 20:
        heads.append(h)
        h = inpart.read(20)
    assert not h
    # Trigger a transaction so that we are guaranteed to have the lock now.
    if op.ui.configbool('experimental', 'bundle2lazylocking'):
        op.gettransaction()
    if sorted(heads) != sorted(op.repo.heads()):
        raise error.PushRaced('repository changed while pushing - '
                              'please try again')
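
# Hedged sketch (illustration only, modeled on how the push code in
# exchange.py typically emits this part; the exact call site may differ):
# the client records the remote heads it saw and sends them as the payload.
#
#   bundler.newpart('check:heads', data=iter(pushop.remoteheads))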

@parthandler('check:updated-heads')
def handlecheckupdatedheads(op, inpart):
    """check for race on the heads touched by a push

    This is similar to 'check:heads' but focuses on the heads actually updated
    during the push. If other activity happens on unrelated heads, it is
    ignored.

    This allows servers with high traffic to avoid push contention as long as
    only unrelated parts of the graph are involved."""
    h = inpart.read(20)
    heads = []
    while len(h) == 20:
        heads.append(h)
        h = inpart.read(20)
    assert not h
    # trigger a transaction so that we are guaranteed to have the lock now.
    if op.ui.configbool('experimental', 'bundle2lazylocking'):
        op.gettransaction()

    currentheads = set()
    for ls in op.repo.branchmap().itervalues():
        currentheads.update(ls)

    for h in heads:
        if h not in currentheads:
            raise error.PushRaced('repository changed while pushing - '
                                  'please try again')
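
# Hedged sketch (illustration only): a pushing client that knows which heads
# its changesets descend from could emit this part instead of 'check:heads',
# so that concurrent pushes racing on unrelated heads are not rejected;
# 'updatedheads' below is a made-up variable holding 20-byte node ids.
#
#   bundler.newpart('check:updated-heads', data=iter(updatedheads))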

@parthandler('output')
def handleoutput(op, inpart):
    """forward output captured on the server to the client"""
    for line in inpart.read().splitlines():
        op.ui.status(_('remote: %s\n') % line)

@parthandler('replycaps')
def handlereplycaps(op, inpart):
    """Notify that a reply bundle should be created

    The payload contains the capabilities information for the reply"""
    caps = decodecaps(inpart.read())
    if op.reply is None:
        op.reply = bundle20(op.ui, caps)

class AbortFromPart(error.Abort):
    """Sub-class of Abort that denotes an error from a bundle2 part."""

@parthandler('error:abort', ('message', 'hint'))
def handleerrorabort(op, inpart):
    """Used to transmit abort error over the wire"""
    raise AbortFromPart(inpart.params['message'],
                        hint=inpart.params.get('hint'))

@parthandler('error:pushkey', ('namespace', 'key', 'new', 'old', 'ret',
                               'in-reply-to'))
def handleerrorpushkey(op, inpart):
    """Used to transmit failure of a mandatory pushkey over the wire"""
    kwargs = {}
    for name in ('namespace', 'key', 'new', 'old', 'ret'):
        value = inpart.params.get(name)
        if value is not None:
            kwargs[name] = value
    raise error.PushkeyFailed(inpart.params['in-reply-to'], **kwargs)

@parthandler('error:unsupportedcontent', ('parttype', 'params'))
def handleerrorunsupportedcontent(op, inpart):
    """Used to transmit unknown content error over the wire"""
    kwargs = {}
    parttype = inpart.params.get('parttype')
    if parttype is not None:
        kwargs['parttype'] = parttype
    params = inpart.params.get('params')
    if params is not None:
        kwargs['params'] = params.split('\0')

    raise error.BundleUnknownFeatureError(**kwargs)

@parthandler('error:pushraced', ('message',))
def handleerrorpushraced(op, inpart):
    """Used to transmit push race error over the wire"""
    raise error.ResponseError(_('push failed:'), inpart.params['message'])

@parthandler('listkeys', ('namespace',))
def handlelistkeys(op, inpart):
    """retrieve pushkey namespace content stored in a bundle2"""
    namespace = inpart.params['namespace']
    r = pushkey.decodekeys(inpart.read())
    op.records.add('listkeys', (namespace, r))

@parthandler('pushkey', ('namespace', 'key', 'old', 'new'))
def handlepushkey(op, inpart):
    """process a pushkey request"""
    dec = pushkey.decode
1700 dec = pushkey.decode
1670 namespace = dec(inpart.params['namespace'])
1701 namespace = dec(inpart.params['namespace'])
1671 key = dec(inpart.params['key'])
1702 key = dec(inpart.params['key'])
1672 old = dec(inpart.params['old'])
1703 old = dec(inpart.params['old'])
1673 new = dec(inpart.params['new'])
1704 new = dec(inpart.params['new'])
1674 # Grab the transaction to ensure that we have the lock before performing the
1705 # Grab the transaction to ensure that we have the lock before performing the
1675 # pushkey.
1706 # pushkey.
1676 if op.ui.configbool('experimental', 'bundle2lazylocking'):
1707 if op.ui.configbool('experimental', 'bundle2lazylocking'):
1677 op.gettransaction()
1708 op.gettransaction()
1678 ret = op.repo.pushkey(namespace, key, old, new)
1709 ret = op.repo.pushkey(namespace, key, old, new)
1679 record = {'namespace': namespace,
1710 record = {'namespace': namespace,
1680 'key': key,
1711 'key': key,
1681 'old': old,
1712 'old': old,
1682 'new': new}
1713 'new': new}
1683 op.records.add('pushkey', record)
1714 op.records.add('pushkey', record)
1684 if op.reply is not None:
1715 if op.reply is not None:
1685 rpart = op.reply.newpart('reply:pushkey')
1716 rpart = op.reply.newpart('reply:pushkey')
1686 rpart.addparam('in-reply-to', str(inpart.id), mandatory=False)
1717 rpart.addparam('in-reply-to', str(inpart.id), mandatory=False)
1687 rpart.addparam('return', '%i' % ret, mandatory=False)
1718 rpart.addparam('return', '%i' % ret, mandatory=False)
1688 if inpart.mandatory and not ret:
1719 if inpart.mandatory and not ret:
1689 kwargs = {}
1720 kwargs = {}
1690 for key in ('namespace', 'key', 'new', 'old', 'ret'):
1721 for key in ('namespace', 'key', 'new', 'old', 'ret'):
1691 if key in inpart.params:
1722 if key in inpart.params:
1692 kwargs[key] = inpart.params[key]
1723 kwargs[key] = inpart.params[key]
1693 raise error.PushkeyFailed(partid=str(inpart.id), **kwargs)
1724 raise error.PushkeyFailed(partid=str(inpart.id), **kwargs)
1694
1725
1695 @parthandler('reply:pushkey', ('return', 'in-reply-to'))
1726 @parthandler('reply:pushkey', ('return', 'in-reply-to'))
1696 def handlepushkeyreply(op, inpart):
1727 def handlepushkeyreply(op, inpart):
1697 """retrieve the result of a pushkey request"""
1728 """retrieve the result of a pushkey request"""
1698 ret = int(inpart.params['return'])
1729 ret = int(inpart.params['return'])
1699 partid = int(inpart.params['in-reply-to'])
1730 partid = int(inpart.params['in-reply-to'])
1700 op.records.add('pushkey', {'return': ret}, partid)
1731 op.records.add('pushkey', {'return': ret}, partid)
1701
1732
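A hedged sketch of how a push client might inspect the pushkey replies collected above; 'op' is assumed to be a bundleoperation whose records were filled by these handlers:

# each record stored by handlepushkeyreply is {'return': <int>}, keyed to
# the originating part via its part id; a zero return means refusal
for rec in op.records['pushkey']:
    if rec.get('return') == 0:
        op.ui.warn('a pushkey update was refused by the server\n')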
1702 @parthandler('obsmarkers')
1733 @parthandler('obsmarkers')
1703 def handleobsmarker(op, inpart):
1734 def handleobsmarker(op, inpart):
1704 """add a stream of obsmarkers to the repo"""
1735 """add a stream of obsmarkers to the repo"""
1705 tr = op.gettransaction()
1736 tr = op.gettransaction()
1706 markerdata = inpart.read()
1737 markerdata = inpart.read()
1707 if op.ui.config('experimental', 'obsmarkers-exchange-debug', False):
1738 if op.ui.config('experimental', 'obsmarkers-exchange-debug', False):
1708 op.ui.write(('obsmarker-exchange: %i bytes received\n')
1739 op.ui.write(('obsmarker-exchange: %i bytes received\n')
1709 % len(markerdata))
1740 % len(markerdata))
1710 # The mergemarkers call will crash if marker creation is not enabled.
1741 # The mergemarkers call will crash if marker creation is not enabled.
1711 # we want to avoid this if the part is advisory.
1742 # we want to avoid this if the part is advisory.
1712 if not inpart.mandatory and op.repo.obsstore.readonly:
1743 if not inpart.mandatory and op.repo.obsstore.readonly:
1713 op.repo.ui.debug('ignoring obsolescence markers, feature not enabled\n')
1744 op.repo.ui.debug('ignoring obsolescence markers, feature not enabled\n')
1714 return
1745 return
1715 new = op.repo.obsstore.mergemarkers(tr, markerdata)
1746 new = op.repo.obsstore.mergemarkers(tr, markerdata)
1716 op.repo.invalidatevolatilesets()
1747 op.repo.invalidatevolatilesets()
1717 if new:
1748 if new:
1718 op.repo.ui.status(_('%i new obsolescence markers\n') % new)
1749 op.repo.ui.status(_('%i new obsolescence markers\n') % new)
1719 op.records.add('obsmarkers', {'new': new})
1750 op.records.add('obsmarkers', {'new': new})
1720 if op.reply is not None:
1751 if op.reply is not None:
1721 rpart = op.reply.newpart('reply:obsmarkers')
1752 rpart = op.reply.newpart('reply:obsmarkers')
1722 rpart.addparam('in-reply-to', str(inpart.id), mandatory=False)
1753 rpart.addparam('in-reply-to', str(inpart.id), mandatory=False)
1723 rpart.addparam('new', '%i' % new, mandatory=False)
1754 rpart.addparam('new', '%i' % new, mandatory=False)
1724
1755
1725
1756
1726 @parthandler('reply:obsmarkers', ('new', 'in-reply-to'))
1757 @parthandler('reply:obsmarkers', ('new', 'in-reply-to'))
1727 def handleobsmarkerreply(op, inpart):
1758 def handleobsmarkerreply(op, inpart):
1728 """retrieve the result of a pushkey request"""
1759 """retrieve the result of a pushkey request"""
1729 ret = int(inpart.params['new'])
1760 ret = int(inpart.params['new'])
1730 partid = int(inpart.params['in-reply-to'])
1761 partid = int(inpart.params['in-reply-to'])
1731 op.records.add('obsmarkers', {'new': ret}, partid)
1762 op.records.add('obsmarkers', {'new': ret}, partid)
1732
1763
1733 @parthandler('hgtagsfnodes')
1764 @parthandler('hgtagsfnodes')
1734 def handlehgtagsfnodes(op, inpart):
1765 def handlehgtagsfnodes(op, inpart):
1735 """Applies .hgtags fnodes cache entries to the local repo.
1766 """Applies .hgtags fnodes cache entries to the local repo.
1736
1767
1737 Payload is pairs of 20 byte changeset nodes and filenodes.
1768 Payload is pairs of 20 byte changeset nodes and filenodes.
1738 """
1769 """
1739 # Grab the transaction so we ensure that we have the lock at this point.
1770 # Grab the transaction so we ensure that we have the lock at this point.
1740 if op.ui.configbool('experimental', 'bundle2lazylocking'):
1771 if op.ui.configbool('experimental', 'bundle2lazylocking'):
1741 op.gettransaction()
1772 op.gettransaction()
1742 cache = tags.hgtagsfnodescache(op.repo.unfiltered())
1773 cache = tags.hgtagsfnodescache(op.repo.unfiltered())
1743
1774
1744 count = 0
1775 count = 0
1745 while True:
1776 while True:
1746 node = inpart.read(20)
1777 node = inpart.read(20)
1747 fnode = inpart.read(20)
1778 fnode = inpart.read(20)
1748 if len(node) < 20 or len(fnode) < 20:
1779 if len(node) < 20 or len(fnode) < 20:
1749 op.ui.debug('ignoring incomplete received .hgtags fnodes data\n')
1780 op.ui.debug('ignoring incomplete received .hgtags fnodes data\n')
1750 break
1781 break
1751 cache.setfnode(node, fnode)
1782 cache.setfnode(node, fnode)
1752 count += 1
1783 count += 1
1753
1784
1754 cache.write()
1785 cache.write()
1755 op.ui.debug('applied %i hgtags fnodes cache entries\n' % count)
1786 op.ui.debug('applied %i hgtags fnodes cache entries\n' % count)
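The fixed-width payload lends itself to a small generator. An illustrative helper (not part of Mercurial) that yields the (node, fnode) pairs consumed by the handler above:

import io

def iterfnodepairs(fh):
    while True:
        node = fh.read(20)
        fnode = fh.read(20)
        if len(node) < 20 or len(fnode) < 20:
            return  # trailing partial data is ignored, as in the handler
        yield node, fnode

payload = io.BytesIO(b'\x11' * 20 + b'\x22' * 20)
assert list(iterfnodepairs(payload)) == [(b'\x11' * 20, b'\x22' * 20)]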
@@ -1,526 +1,527 @@
1 # discovery.py - protocol changeset discovery functions
1 # discovery.py - protocol changeset discovery functions
2 #
2 #
3 # Copyright 2010 Matt Mackall <mpm@selenic.com>
3 # Copyright 2010 Matt Mackall <mpm@selenic.com>
4 #
4 #
5 # This software may be used and distributed according to the terms of the
5 # This software may be used and distributed according to the terms of the
6 # GNU General Public License version 2 or any later version.
6 # GNU General Public License version 2 or any later version.
7
7
8 from __future__ import absolute_import
8 from __future__ import absolute_import
9
9
10 import functools
10 import functools
11
11
12 from .i18n import _
12 from .i18n import _
13 from .node import (
13 from .node import (
14 hex,
14 hex,
15 nullid,
15 nullid,
16 short,
16 short,
17 )
17 )
18
18
19 from . import (
19 from . import (
20 bookmarks,
20 bookmarks,
21 branchmap,
21 branchmap,
22 error,
22 error,
23 phases,
23 phases,
24 setdiscovery,
24 setdiscovery,
25 treediscovery,
25 treediscovery,
26 util,
26 util,
27 )
27 )
28
28
29 def findcommonincoming(repo, remote, heads=None, force=False):
29 def findcommonincoming(repo, remote, heads=None, force=False):
30 """Return a tuple (common, anyincoming, heads) used to identify the common
30 """Return a tuple (common, anyincoming, heads) used to identify the common
31 subset of nodes between repo and remote.
31 subset of nodes between repo and remote.
32
32
33 "common" is a list of (at least) the heads of the common subset.
33 "common" is a list of (at least) the heads of the common subset.
34 "anyincoming" is testable as a boolean indicating if any nodes are missing
34 "anyincoming" is testable as a boolean indicating if any nodes are missing
35 locally. If remote does not support getbundle, this actually is a list of
35 locally. If remote does not support getbundle, this actually is a list of
36 roots of the nodes that would be incoming, to be supplied to
36 roots of the nodes that would be incoming, to be supplied to
37 changegroupsubset. No code except for pull should be relying on this fact
37 changegroupsubset. No code except for pull should be relying on this fact
38 any longer.
38 any longer.
39 "heads" is either the supplied heads, or else the remote's heads.
39 "heads" is either the supplied heads, or else the remote's heads.
40
40
41 If you pass heads and they are all known locally, the response lists just
41 If you pass heads and they are all known locally, the response lists just
42 these heads in "common" and in "heads".
42 these heads in "common" and in "heads".
43
43
44 Please use findcommonoutgoing to compute the set of outgoing nodes to give
44 Please use findcommonoutgoing to compute the set of outgoing nodes to give
45 extensions a good hook into outgoing.
45 extensions a good hook into outgoing.
46 """
46 """
47
47
48 if not remote.capable('getbundle'):
48 if not remote.capable('getbundle'):
49 return treediscovery.findcommonincoming(repo, remote, heads, force)
49 return treediscovery.findcommonincoming(repo, remote, heads, force)
50
50
51 if heads:
51 if heads:
52 allknown = True
52 allknown = True
53 knownnode = repo.changelog.hasnode # no nodemap until it is filtered
53 knownnode = repo.changelog.hasnode # no nodemap until it is filtered
54 for h in heads:
54 for h in heads:
55 if not knownnode(h):
55 if not knownnode(h):
56 allknown = False
56 allknown = False
57 break
57 break
58 if allknown:
58 if allknown:
59 return (heads, False, heads)
59 return (heads, False, heads)
60
60
61 res = setdiscovery.findcommonheads(repo.ui, repo, remote,
61 res = setdiscovery.findcommonheads(repo.ui, repo, remote,
62 abortwhenunrelated=not force)
62 abortwhenunrelated=not force)
63 common, anyinc, srvheads = res
63 common, anyinc, srvheads = res
64 return (list(common), anyinc, heads or list(srvheads))
64 return (list(common), anyinc, heads or list(srvheads))
65
65
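A hedged usage sketch; 'repo' and 'remote' are assumed to be a local repository and a peer object as used throughout this module:

common, anyinc, heads = findcommonincoming(repo, remote)
if not anyinc:
    repo.ui.status('no changes found\n')
else:
    repo.ui.status('%d common head(s), up to %d head(s) to pull\n'
                   % (len(common), len(heads)))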
66 class outgoing(object):
66 class outgoing(object):
67 '''Represents the set of nodes present in a local repo but not in a
67 '''Represents the set of nodes present in a local repo but not in a
68 (possibly) remote one.
68 (possibly) remote one.
69
69
70 Members:
70 Members:
71
71
72 missing is a list of all nodes present in local but not in remote.
72 missing is a list of all nodes present in local but not in remote.
73 common is a list of all nodes shared between the two repos.
73 common is a list of all nodes shared between the two repos.
74 excluded is the list of missing changesets that shouldn't be sent remotely.
74 excluded is the list of missing changesets that shouldn't be sent remotely.
75 missingheads is the list of heads of missing.
75 missingheads is the list of heads of missing.
76 commonheads is the list of heads of common.
76 commonheads is the list of heads of common.
77
77
78 The sets are computed on demand from the heads, unless provided upfront
78 The sets are computed on demand from the heads, unless provided upfront
79 by discovery.'''
79 by discovery.'''
80
80
81 def __init__(self, repo, commonheads=None, missingheads=None,
81 def __init__(self, repo, commonheads=None, missingheads=None,
82 missingroots=None):
82 missingroots=None):
83 # at least one of them must not be set
83 # at least one of them must not be set
84 assert None in (commonheads, missingroots)
84 assert None in (commonheads, missingroots)
85 cl = repo.changelog
85 cl = repo.changelog
86 if missingheads is None:
86 if missingheads is None:
87 missingheads = cl.heads()
87 missingheads = cl.heads()
88 if missingroots:
88 if missingroots:
89 discbases = []
89 discbases = []
90 for n in missingroots:
90 for n in missingroots:
91 discbases.extend([p for p in cl.parents(n) if p != nullid])
91 discbases.extend([p for p in cl.parents(n) if p != nullid])
92 # TODO remove call to nodesbetween.
92 # TODO remove call to nodesbetween.
93 # TODO populate attributes on outgoing instance instead of setting
93 # TODO populate attributes on outgoing instance instead of setting
94 # discbases.
94 # discbases.
95 csets, roots, heads = cl.nodesbetween(missingroots, missingheads)
95 csets, roots, heads = cl.nodesbetween(missingroots, missingheads)
96 included = set(csets)
96 included = set(csets)
97 missingheads = heads
97 missingheads = heads
98 commonheads = [n for n in discbases if n not in included]
98 commonheads = [n for n in discbases if n not in included]
99 elif not commonheads:
99 elif not commonheads:
100 commonheads = [nullid]
100 commonheads = [nullid]
101 self.commonheads = commonheads
101 self.commonheads = commonheads
102 self.missingheads = missingheads
102 self.missingheads = missingheads
103 self._revlog = cl
103 self._revlog = cl
104 self._common = None
104 self._common = None
105 self._missing = None
105 self._missing = None
106 self.excluded = []
106 self.excluded = []
107
107
108 def _computecommonmissing(self):
108 def _computecommonmissing(self):
109 sets = self._revlog.findcommonmissing(self.commonheads,
109 sets = self._revlog.findcommonmissing(self.commonheads,
110 self.missingheads)
110 self.missingheads)
111 self._common, self._missing = sets
111 self._common, self._missing = sets
112
112
113 @util.propertycache
113 @util.propertycache
114 def common(self):
114 def common(self):
115 if self._common is None:
115 if self._common is None:
116 self._computecommonmissing()
116 self._computecommonmissing()
117 return self._common
117 return self._common
118
118
119 @util.propertycache
119 @util.propertycache
120 def missing(self):
120 def missing(self):
121 if self._missing is None:
121 if self._missing is None:
122 self._computecommonmissing()
122 self._computecommonmissing()
123 return self._missing
123 return self._missing
124
124
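For illustration, passing nullid as the only common head makes every local changeset "missing"; note that the sets really are computed lazily ('repo' is again an assumed local repository):

og = outgoing(repo, commonheads=[nullid], missingheads=None)
# nothing is walked yet; reading og.missing triggers _computecommonmissing()
repo.ui.status('%d changeset(s) not known to be in the remote\n'
               % len(og.missing))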
125 def findcommonoutgoing(repo, other, onlyheads=None, force=False,
125 def findcommonoutgoing(repo, other, onlyheads=None, force=False,
126 commoninc=None, portable=False):
126 commoninc=None, portable=False):
127 '''Return an outgoing instance to identify the nodes present in repo but
127 '''Return an outgoing instance to identify the nodes present in repo but
128 not in other.
128 not in other.
129
129
130 If onlyheads is given, only nodes ancestral to nodes in onlyheads
130 If onlyheads is given, only nodes ancestral to nodes in onlyheads
131 (inclusive) are included. If you already know the local repo's heads,
131 (inclusive) are included. If you already know the local repo's heads,
132 passing them in onlyheads is faster than letting them be recomputed here.
132 passing them in onlyheads is faster than letting them be recomputed here.
133
133
134 If commoninc is given, it must be the result of a prior call to
134 If commoninc is given, it must be the result of a prior call to
135 findcommonincoming(repo, other, force) to avoid recomputing it here.
135 findcommonincoming(repo, other, force) to avoid recomputing it here.
136
136
137 If portable is given, compute more conservative common and missingheads,
137 If portable is given, compute more conservative common and missingheads,
138 to make bundles created from the instance more portable.'''
138 to make bundles created from the instance more portable.'''
139 # declare an empty outgoing object to be filled later
139 # declare an empty outgoing object to be filled later
140 og = outgoing(repo, None, None)
140 og = outgoing(repo, None, None)
141
141
142 # get common set if not provided
142 # get common set if not provided
143 if commoninc is None:
143 if commoninc is None:
144 commoninc = findcommonincoming(repo, other, force=force)
144 commoninc = findcommonincoming(repo, other, force=force)
145 og.commonheads, _any, _hds = commoninc
145 og.commonheads, _any, _hds = commoninc
146
146
147 # compute outgoing
147 # compute outgoing
148 mayexclude = (repo._phasecache.phaseroots[phases.secret] or repo.obsstore)
148 mayexclude = (repo._phasecache.phaseroots[phases.secret] or repo.obsstore)
149 if not mayexclude:
149 if not mayexclude:
150 og.missingheads = onlyheads or repo.heads()
150 og.missingheads = onlyheads or repo.heads()
151 elif onlyheads is None:
151 elif onlyheads is None:
152 # use visible heads as it should be cached
152 # use visible heads as it should be cached
153 og.missingheads = repo.filtered("served").heads()
153 og.missingheads = repo.filtered("served").heads()
154 og.excluded = [ctx.node() for ctx in repo.set('secret() or extinct()')]
154 og.excluded = [ctx.node() for ctx in repo.set('secret() or extinct()')]
155 else:
155 else:
156 # compute common, missing and exclude secret stuff
156 # compute common, missing and exclude secret stuff
157 sets = repo.changelog.findcommonmissing(og.commonheads, onlyheads)
157 sets = repo.changelog.findcommonmissing(og.commonheads, onlyheads)
158 og._common, allmissing = sets
158 og._common, allmissing = sets
159 og._missing = missing = []
159 og._missing = missing = []
160 og.excluded = excluded = []
160 og.excluded = excluded = []
161 for node in allmissing:
161 for node in allmissing:
162 ctx = repo[node]
162 ctx = repo[node]
163 if ctx.phase() >= phases.secret or ctx.extinct():
163 if ctx.phase() >= phases.secret or ctx.extinct():
164 excluded.append(node)
164 excluded.append(node)
165 else:
165 else:
166 missing.append(node)
166 missing.append(node)
167 if len(missing) == len(allmissing):
167 if len(missing) == len(allmissing):
168 missingheads = onlyheads
168 missingheads = onlyheads
169 else: # update missing heads
169 else: # update missing heads
170 missingheads = phases.newheads(repo, onlyheads, excluded)
170 missingheads = phases.newheads(repo, onlyheads, excluded)
171 og.missingheads = missingheads
171 og.missingheads = missingheads
172 if portable:
172 if portable:
173 # recompute common and missingheads as if -r<rev> had been given for
173 # recompute common and missingheads as if -r<rev> had been given for
174 # each head of missing, and --base <rev> for each head of the proper
174 # each head of missing, and --base <rev> for each head of the proper
175 # ancestors of missing
175 # ancestors of missing
176 og._computecommonmissing()
176 og._computecommonmissing()
177 cl = repo.changelog
177 cl = repo.changelog
178 missingrevs = set(cl.rev(n) for n in og._missing)
178 missingrevs = set(cl.rev(n) for n in og._missing)
179 og._common = set(cl.ancestors(missingrevs)) - missingrevs
179 og._common = set(cl.ancestors(missingrevs)) - missingrevs
180 commonheads = set(og.commonheads)
180 commonheads = set(og.commonheads)
181 og.missingheads = [h for h in og.missingheads if h not in commonheads]
181 og.missingheads = [h for h in og.missingheads if h not in commonheads]
182
182
183 return og
183 return og
184
184
185 def _headssummary(pushop):
185 def _headssummary(pushop):
186 """compute a summary of branch and heads status before and after push
186 """compute a summary of branch and heads status before and after push
187
187
188 return {'branch': ([remoteheads], [newheads],
188 return {'branch': ([remoteheads], [newheads],
189 [unsyncedheads], [discardedheads])} mapping
189 [unsyncedheads], [discardedheads])} mapping
190
190
191 - branch: the branch name,
191 - branch: the branch name,
192 - remoteheads: the list of remote heads known locally
192 - remoteheads: the list of remote heads known locally
193 None if the branch is new,
193 None if the branch is new,
194 - newheads: the new remote heads (known locally) with outgoing pushed,
194 - newheads: the new remote heads (known locally) with outgoing pushed,
195 - unsyncedheads: the list of remote heads unknown locally,
195 - unsyncedheads: the list of remote heads unknown locally,
196 - discardedheads: the list of heads made obsolete by the push.
196 - discardedheads: the list of heads made obsolete by the push.
197 """
197 """
198 repo = pushop.repo.unfiltered()
198 repo = pushop.repo.unfiltered()
199 remote = pushop.remote
199 remote = pushop.remote
200 outgoing = pushop.outgoing
200 outgoing = pushop.outgoing
201 cl = repo.changelog
201 cl = repo.changelog
202 headssum = {}
202 headssum = {}
203 # A. Create set of branches involved in the push.
203 # A. Create set of branches involved in the push.
204 branches = set(repo[n].branch() for n in outgoing.missing)
204 branches = set(repo[n].branch() for n in outgoing.missing)
205 remotemap = remote.branchmap()
205 remotemap = remote.branchmap()
206 newbranches = branches - set(remotemap)
206 newbranches = branches - set(remotemap)
207 branches.difference_update(newbranches)
207 branches.difference_update(newbranches)
208
208
209 # B. register remote heads
209 # B. register remote heads
210 remotebranches = set()
210 remotebranches = set()
211 for branch, heads in remote.branchmap().iteritems():
211 for branch, heads in remote.branchmap().iteritems():
212 remotebranches.add(branch)
212 remotebranches.add(branch)
213 known = []
213 known = []
214 unsynced = []
214 unsynced = []
215 knownnode = cl.hasnode # do not use nodemap until it is filtered
215 knownnode = cl.hasnode # do not use nodemap until it is filtered
216 for h in heads:
216 for h in heads:
217 if knownnode(h):
217 if knownnode(h):
218 known.append(h)
218 known.append(h)
219 else:
219 else:
220 unsynced.append(h)
220 unsynced.append(h)
221 headssum[branch] = (known, list(known), unsynced)
221 headssum[branch] = (known, list(known), unsynced)
222 # C. add new branch data
222 # C. add new branch data
223 missingctx = list(repo[n] for n in outgoing.missing)
223 missingctx = list(repo[n] for n in outgoing.missing)
224 touchedbranches = set()
224 touchedbranches = set()
225 for ctx in missingctx:
225 for ctx in missingctx:
226 branch = ctx.branch()
226 branch = ctx.branch()
227 touchedbranches.add(branch)
227 touchedbranches.add(branch)
228 if branch not in headssum:
228 if branch not in headssum:
229 headssum[branch] = (None, [], [])
229 headssum[branch] = (None, [], [])
230
230
231 # D. drop data about untouched branches:
231 # D. drop data about untouched branches:
232 for branch in remotebranches - touchedbranches:
232 for branch in remotebranches - touchedbranches:
233 del headssum[branch]
233 del headssum[branch]
234
234
235 # E. Update newmap with outgoing changes.
235 # E. Update newmap with outgoing changes.
236 # This will possibly add new heads and remove existing ones.
236 # This will possibly add new heads and remove existing ones.
237 newmap = branchmap.branchcache((branch, heads[1])
237 newmap = branchmap.branchcache((branch, heads[1])
238 for branch, heads in headssum.iteritems()
238 for branch, heads in headssum.iteritems()
239 if heads[0] is not None)
239 if heads[0] is not None)
240 newmap.update(repo, (ctx.rev() for ctx in missingctx))
240 newmap.update(repo, (ctx.rev() for ctx in missingctx))
241 for branch, newheads in newmap.iteritems():
241 for branch, newheads in newmap.iteritems():
242 headssum[branch][1][:] = newheads
242 headssum[branch][1][:] = newheads
243 for branch, items in headssum.iteritems():
243 for branch, items in headssum.iteritems():
244 for l in items:
244 for l in items:
245 if l is not None:
245 if l is not None:
246 l.sort()
246 l.sort()
247 headssum[branch] = items + ([],)
247 headssum[branch] = items + ([],)
248
248
249 # If there is no obsstore, no post processing is needed.
249 # If there is no obsstore, no post processing is needed.
250 if repo.obsstore:
250 if repo.obsstore:
251 allmissing = set(outgoing.missing)
251 allmissing = set(outgoing.missing)
252 cctx = repo.set('%ld', outgoing.common)
252 cctx = repo.set('%ld', outgoing.common)
253 allfuturecommon = set(c.node() for c in cctx)
253 allfuturecommon = set(c.node() for c in cctx)
254 allfuturecommon.update(allmissing)
254 allfuturecommon.update(allmissing)
255 for branch, heads in sorted(headssum.iteritems()):
255 for branch, heads in sorted(headssum.iteritems()):
256 remoteheads, newheads, unsyncedheads, placeholder = heads
256 remoteheads, newheads, unsyncedheads, placeholder = heads
257 result = _postprocessobsolete(pushop, allfuturecommon, newheads)
257 result = _postprocessobsolete(pushop, allfuturecommon, newheads)
258 headssum[branch] = (remoteheads, sorted(result[0]), unsyncedheads,
258 headssum[branch] = (remoteheads, sorted(result[0]), unsyncedheads,
259 sorted(result[1]))
259 sorted(result[1]))
260 return headssum
260 return headssum
261
261
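The shape of the returned mapping, with short strings standing in for the 20-byte binary nodes used in practice:

headssum = {
    'default': (
        ['rh1'],         # remoteheads known locally (None for a new branch)
        ['rh1', 'nh2'],  # newheads once the outgoing changesets land
        [],              # unsyncedheads only known remotely
        [],              # discardedheads obsoleted by the push
    ),
}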
262 def _oldheadssummary(repo, remoteheads, outgoing, inc=False):
262 def _oldheadssummary(repo, remoteheads, outgoing, inc=False):
263 """Compute branchmapsummary for repo without branchmap support"""
263 """Compute branchmapsummary for repo without branchmap support"""
264
264
265 # 1-4b. old servers: Check for new topological heads.
265 # 1-4b. old servers: Check for new topological heads.
266 # Construct {old,new}map with branch = None (topological branch).
266 # Construct {old,new}map with branch = None (topological branch).
267 # (code based on update)
267 # (code based on update)
268 knownnode = repo.changelog.hasnode # no nodemap until it is filtered
268 knownnode = repo.changelog.hasnode # no nodemap until it is filtered
269 oldheads = sorted(h for h in remoteheads if knownnode(h))
269 oldheads = sorted(h for h in remoteheads if knownnode(h))
270 # all nodes in outgoing.missing are children of either:
270 # all nodes in outgoing.missing are children of either:
271 # - an element of oldheads
271 # - an element of oldheads
272 # - another element of outgoing.missing
272 # - another element of outgoing.missing
273 # - nullrev
273 # - nullrev
274 # This explains why the new heads are very simple to compute.
274 # This explains why the new heads are very simple to compute.
275 r = repo.set('heads(%ln + %ln)', oldheads, outgoing.missing)
275 r = repo.set('heads(%ln + %ln)', oldheads, outgoing.missing)
276 newheads = sorted(c.node() for c in r)
276 newheads = sorted(c.node() for c in r)
277 # set some unsynced head to issue the "unsynced changes" warning
277 # set some unsynced head to issue the "unsynced changes" warning
278 if inc:
278 if inc:
279 unsynced = [None]
279 unsynced = [None]
280 else:
280 else:
281 unsynced = []
281 unsynced = []
282 return {None: (oldheads, newheads, unsynced, [])}
282 return {None: (oldheads, newheads, unsynced, [])}
283
283
284 def _nowarnheads(pushop):
284 def _nowarnheads(pushop):
285 # Compute newly pushed bookmarks. We don't warn about bookmarked heads.
285 # Compute newly pushed bookmarks. We don't warn about bookmarked heads.
286 repo = pushop.repo.unfiltered()
286 repo = pushop.repo.unfiltered()
287 remote = pushop.remote
287 remote = pushop.remote
288 localbookmarks = repo._bookmarks
288 localbookmarks = repo._bookmarks
289 remotebookmarks = remote.listkeys('bookmarks')
289 remotebookmarks = remote.listkeys('bookmarks')
290 bookmarkedheads = set()
290 bookmarkedheads = set()
291
291
292 # internal config: bookmarks.pushing
292 # internal config: bookmarks.pushing
293 newbookmarks = [localbookmarks.expandname(b)
293 newbookmarks = [localbookmarks.expandname(b)
294 for b in pushop.ui.configlist('bookmarks', 'pushing')]
294 for b in pushop.ui.configlist('bookmarks', 'pushing')]
295
295
296 for bm in localbookmarks:
296 for bm in localbookmarks:
297 rnode = remotebookmarks.get(bm)
297 rnode = remotebookmarks.get(bm)
298 if rnode and rnode in repo:
298 if rnode and rnode in repo:
299 lctx, rctx = repo[bm], repo[rnode]
299 lctx, rctx = repo[bm], repo[rnode]
300 if bookmarks.validdest(repo, rctx, lctx):
300 if bookmarks.validdest(repo, rctx, lctx):
301 bookmarkedheads.add(lctx.node())
301 bookmarkedheads.add(lctx.node())
302 else:
302 else:
303 if bm in newbookmarks and bm not in remotebookmarks:
303 if bm in newbookmarks and bm not in remotebookmarks:
304 bookmarkedheads.add(repo[bm].node())
304 bookmarkedheads.add(repo[bm].node())
305
305
306 return bookmarkedheads
306 return bookmarkedheads
307
307
308 def checkheads(pushop):
308 def checkheads(pushop):
309 """Check that a push won't add any outgoing head
309 """Check that a push won't add any outgoing head
310
310
311 Raise an Abort error and display a ui message as needed.
311 Raise an Abort error and display a ui message as needed.
312 """
312 """
313
313
314 repo = pushop.repo.unfiltered()
314 repo = pushop.repo.unfiltered()
315 remote = pushop.remote
315 remote = pushop.remote
316 outgoing = pushop.outgoing
316 outgoing = pushop.outgoing
317 remoteheads = pushop.remoteheads
317 remoteheads = pushop.remoteheads
318 newbranch = pushop.newbranch
318 newbranch = pushop.newbranch
319 inc = bool(pushop.incoming)
319 inc = bool(pushop.incoming)
320
320
321 # Check for each named branch if we're creating new remote heads.
321 # Check for each named branch if we're creating new remote heads.
322 # To be a remote head after push, node must be either:
322 # To be a remote head after push, node must be either:
323 # - unknown locally
323 # - unknown locally
324 # - a local outgoing head descended from update
324 # - a local outgoing head descended from update
325 # - a remote head that's known locally and not
325 # - a remote head that's known locally and not
326 # ancestral to an outgoing head
326 # ancestral to an outgoing head
327 if remoteheads == [nullid]:
327 if remoteheads == [nullid]:
328 # remote is empty, nothing to check.
328 # remote is empty, nothing to check.
329 return
329 return
330
330
331 if remote.capable('branchmap'):
331 if remote.capable('branchmap'):
332 headssum = _headssummary(pushop)
332 headssum = _headssummary(pushop)
333 else:
333 else:
334 headssum = _oldheadssummary(repo, remoteheads, outgoing, inc)
334 headssum = _oldheadssummary(repo, remoteheads, outgoing, inc)
335 pushop.pushbranchmap = headssum
335 newbranches = [branch for branch, heads in headssum.iteritems()
336 newbranches = [branch for branch, heads in headssum.iteritems()
336 if heads[0] is None]
337 if heads[0] is None]
337 # 1. Check for new branches on the remote.
338 # 1. Check for new branches on the remote.
338 if newbranches and not newbranch: # new branch requires --new-branch
339 if newbranches and not newbranch: # new branch requires --new-branch
339 branchnames = ', '.join(sorted(newbranches))
340 branchnames = ', '.join(sorted(newbranches))
340 raise error.Abort(_("push creates new remote branches: %s!")
341 raise error.Abort(_("push creates new remote branches: %s!")
341 % branchnames,
342 % branchnames,
342 hint=_("use 'hg push --new-branch' to create"
343 hint=_("use 'hg push --new-branch' to create"
343 " new remote branches"))
344 " new remote branches"))
344
345
345 # 2. Find heads that we need not warn about
346 # 2. Find heads that we need not warn about
346 nowarnheads = _nowarnheads(pushop)
347 nowarnheads = _nowarnheads(pushop)
347
348
348 # 3. Check for new heads.
349 # 3. Check for new heads.
349 # If there are more heads after the push than before, a suitable
350 # If there are more heads after the push than before, a suitable
350 # error message, depending on unsynced status, is displayed.
351 # error message, depending on unsynced status, is displayed.
351 errormsg = None
352 errormsg = None
352 for branch, heads in sorted(headssum.iteritems()):
353 for branch, heads in sorted(headssum.iteritems()):
353 remoteheads, newheads, unsyncedheads, discardedheads = heads
354 remoteheads, newheads, unsyncedheads, discardedheads = heads
354 # add unsynced data
355 # add unsynced data
355 if remoteheads is None:
356 if remoteheads is None:
356 oldhs = set()
357 oldhs = set()
357 else:
358 else:
358 oldhs = set(remoteheads)
359 oldhs = set(remoteheads)
359 oldhs.update(unsyncedheads)
360 oldhs.update(unsyncedheads)
360 dhs = None # delta heads, the new heads on branch
361 dhs = None # delta heads, the new heads on branch
361 newhs = set(newheads)
362 newhs = set(newheads)
362 newhs.update(unsyncedheads)
363 newhs.update(unsyncedheads)
363 if unsyncedheads:
364 if unsyncedheads:
364 if None in unsyncedheads:
365 if None in unsyncedheads:
365 # old remote, no heads data
366 # old remote, no heads data
366 heads = None
367 heads = None
367 elif len(unsyncedheads) <= 4 or repo.ui.verbose:
368 elif len(unsyncedheads) <= 4 or repo.ui.verbose:
368 heads = ' '.join(short(h) for h in unsyncedheads)
369 heads = ' '.join(short(h) for h in unsyncedheads)
369 else:
370 else:
370 heads = (' '.join(short(h) for h in unsyncedheads[:4]) +
371 heads = (' '.join(short(h) for h in unsyncedheads[:4]) +
371 ' ' + _("and %s others") % (len(unsyncedheads) - 4))
372 ' ' + _("and %s others") % (len(unsyncedheads) - 4))
372 if heads is None:
373 if heads is None:
373 repo.ui.status(_("remote has heads that are "
374 repo.ui.status(_("remote has heads that are "
374 "not known locally\n"))
375 "not known locally\n"))
375 elif branch is None:
376 elif branch is None:
376 repo.ui.status(_("remote has heads that are "
377 repo.ui.status(_("remote has heads that are "
377 "not known locally: %s\n") % heads)
378 "not known locally: %s\n") % heads)
378 else:
379 else:
379 repo.ui.status(_("remote has heads on branch '%s' that are "
380 repo.ui.status(_("remote has heads on branch '%s' that are "
380 "not known locally: %s\n") % (branch, heads))
381 "not known locally: %s\n") % (branch, heads))
381 if remoteheads is None:
382 if remoteheads is None:
382 if len(newhs) > 1:
383 if len(newhs) > 1:
383 dhs = list(newhs)
384 dhs = list(newhs)
384 if errormsg is None:
385 if errormsg is None:
385 errormsg = (_("push creates new branch '%s' "
386 errormsg = (_("push creates new branch '%s' "
386 "with multiple heads") % (branch))
387 "with multiple heads") % (branch))
387 hint = _("merge or"
388 hint = _("merge or"
388 " see 'hg help push' for details about"
389 " see 'hg help push' for details about"
389 " pushing new heads")
390 " pushing new heads")
390 elif len(newhs) > len(oldhs):
391 elif len(newhs) > len(oldhs):
391 # remove bookmarked or existing remote heads from the new heads list
392 # remove bookmarked or existing remote heads from the new heads list
392 dhs = sorted(newhs - nowarnheads - oldhs)
393 dhs = sorted(newhs - nowarnheads - oldhs)
393 if dhs:
394 if dhs:
394 if errormsg is None:
395 if errormsg is None:
395 if branch not in ('default', None):
396 if branch not in ('default', None):
396 errormsg = _("push creates new remote head %s "
397 errormsg = _("push creates new remote head %s "
397 "on branch '%s'!") % (short(dhs[0]), branch)
398 "on branch '%s'!") % (short(dhs[0]), branch)
398 elif repo[dhs[0]].bookmarks():
399 elif repo[dhs[0]].bookmarks():
399 errormsg = _("push creates new remote head %s "
400 errormsg = _("push creates new remote head %s "
400 "with bookmark '%s'!") % (
401 "with bookmark '%s'!") % (
401 short(dhs[0]), repo[dhs[0]].bookmarks()[0])
402 short(dhs[0]), repo[dhs[0]].bookmarks()[0])
402 else:
403 else:
403 errormsg = _("push creates new remote head %s!"
404 errormsg = _("push creates new remote head %s!"
404 ) % short(dhs[0])
405 ) % short(dhs[0])
405 if unsyncedheads:
406 if unsyncedheads:
406 hint = _("pull and merge or"
407 hint = _("pull and merge or"
407 " see 'hg help push' for details about"
408 " see 'hg help push' for details about"
408 " pushing new heads")
409 " pushing new heads")
409 else:
410 else:
410 hint = _("merge or"
411 hint = _("merge or"
411 " see 'hg help push' for details about"
412 " see 'hg help push' for details about"
412 " pushing new heads")
413 " pushing new heads")
413 if branch is None:
414 if branch is None:
414 repo.ui.note(_("new remote heads:\n"))
415 repo.ui.note(_("new remote heads:\n"))
415 else:
416 else:
416 repo.ui.note(_("new remote heads on branch '%s':\n") % branch)
417 repo.ui.note(_("new remote heads on branch '%s':\n") % branch)
417 for h in dhs:
418 for h in dhs:
418 repo.ui.note((" %s\n") % short(h))
419 repo.ui.note((" %s\n") % short(h))
419 if errormsg:
420 if errormsg:
420 raise error.Abort(errormsg, hint=hint)
421 raise error.Abort(errormsg, hint=hint)
421
422
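The heart of the new-head detection above is plain set arithmetic; a toy illustration with strings instead of nodes:

oldhs = {'a', 'b'}        # remote heads (plus unsynced ones) before the push
newhs = {'a', 'b', 'c'}   # heads on the branch after the push
nowarnheads = set()       # bookmarked heads exempted from the warning
dhs = sorted(newhs - nowarnheads - oldhs)
assert dhs == ['c']       # 'c' would abort: "push creates new remote head"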
422 def _postprocessobsolete(pushop, futurecommon, candidate_newhs):
423 def _postprocessobsolete(pushop, futurecommon, candidate_newhs):
423 """post process the list of new heads with obsolescence information
424 """post process the list of new heads with obsolescence information
424
425
425 Exists as a sub-function to contain the complexity and allow extensions to
426 Exists as a sub-function to contain the complexity and allow extensions to
426 experiment with smarter logic.
427 experiment with smarter logic.
427
428
428 Returns (newheads, discarded_heads) tuple
429 Returns (newheads, discarded_heads) tuple
429 """
430 """
430 # known issue
431 # known issue
431 #
432 #
432 # * We "silently" skip processing on all changeset unknown locally
433 # * We "silently" skip processing on all changeset unknown locally
433 #
434 #
434 # * if <nh> is public on the remote, it won't be affected by obsolescence
435 # * if <nh> is public on the remote, it won't be affected by obsolescence
435 # markers and a new head is created
436 # markers and a new head is created
436
437
437 # define various utilities and containers
438 # define various utilities and containers
438 repo = pushop.repo
439 repo = pushop.repo
439 unfi = repo.unfiltered()
440 unfi = repo.unfiltered()
440 tonode = unfi.changelog.node
441 tonode = unfi.changelog.node
441 torev = unfi.changelog.rev
442 torev = unfi.changelog.rev
442 public = phases.public
443 public = phases.public
443 getphase = unfi._phasecache.phase
444 getphase = unfi._phasecache.phase
444 ispublic = (lambda r: getphase(unfi, r) == public)
445 ispublic = (lambda r: getphase(unfi, r) == public)
445 hasoutmarker = functools.partial(pushingmarkerfor, unfi.obsstore,
446 hasoutmarker = functools.partial(pushingmarkerfor, unfi.obsstore,
446 futurecommon)
447 futurecommon)
447 successorsmarkers = unfi.obsstore.successors
448 successorsmarkers = unfi.obsstore.successors
448 newhs = set() # final set of new heads
449 newhs = set() # final set of new heads
449 discarded = set() # new head of fully replaced branch
450 discarded = set() # new head of fully replaced branch
450
451
451 localcandidate = set() # candidate heads known locally
452 localcandidate = set() # candidate heads known locally
452 unknownheads = set() # candidate heads unknown locally
453 unknownheads = set() # candidate heads unknown locally
453 for h in candidate_newhs:
454 for h in candidate_newhs:
454 if h in unfi:
455 if h in unfi:
455 localcandidate.add(h)
456 localcandidate.add(h)
456 else:
457 else:
457 if successorsmarkers.get(h) is not None:
458 if successorsmarkers.get(h) is not None:
458 msg = ('checkheads: remote head unknown locally has'
459 msg = ('checkheads: remote head unknown locally has'
459 ' local marker: %s\n')
460 ' local marker: %s\n')
460 repo.ui.debug(msg % hex(h))
461 repo.ui.debug(msg % hex(h))
461 unknownheads.add(h)
462 unknownheads.add(h)
462
463
463 # fast path the simple case
464 # fast path the simple case
464 if len(localcandidate) == 1:
465 if len(localcandidate) == 1:
465 return unknownheads | set(candidate_newhs), set()
466 return unknownheads | set(candidate_newhs), set()
466
467
467 # actually process branch replacement
468 # actually process branch replacement
468 while localcandidate:
469 while localcandidate:
469 nh = localcandidate.pop()
470 nh = localcandidate.pop()
470 # run this check early to skip the evaluation of the whole branch
471 # run this check early to skip the evaluation of the whole branch
471 if (nh in futurecommon or ispublic(torev(nh))):
472 if (nh in futurecommon or ispublic(torev(nh))):
472 newhs.add(nh)
473 newhs.add(nh)
473 continue
474 continue
474
475
475 # Get all revs/nodes on the branch exclusive to this head
476 # Get all revs/nodes on the branch exclusive to this head
476 # (already filtered heads are "ignored")
477 # (already filtered heads are "ignored")
477 branchrevs = unfi.revs('only(%n, (%ln+%ln))',
478 branchrevs = unfi.revs('only(%n, (%ln+%ln))',
478 nh, localcandidate, newhs)
479 nh, localcandidate, newhs)
479 branchnodes = [tonode(r) for r in branchrevs]
480 branchnodes = [tonode(r) for r in branchrevs]
480
481
481 # The branch won't be hidden on the remote if
482 # The branch won't be hidden on the remote if
482 # * any part of it is public,
483 # * any part of it is public,
483 # * any part of it is considered part of the result by previous logic,
484 # * any part of it is considered part of the result by previous logic,
484 # * if we have no markers to push to obsolete it.
485 # * if we have no markers to push to obsolete it.
485 if (any(ispublic(r) for r in branchrevs)
486 if (any(ispublic(r) for r in branchrevs)
486 or any(n in futurecommon for n in branchnodes)
487 or any(n in futurecommon for n in branchnodes)
487 or any(not hasoutmarker(n) for n in branchnodes)):
488 or any(not hasoutmarker(n) for n in branchnodes)):
488 newhs.add(nh)
489 newhs.add(nh)
489 else:
490 else:
490 # note: there is a corner case if there is a merge in the branch.
491 # note: there is a corner case if there is a merge in the branch.
491 # we might end up with -more- heads. However, these heads are not
492 # we might end up with -more- heads. However, these heads are not
492 # "added" by the push, but more by the "removal" on the remote so I
493 # "added" by the push, but more by the "removal" on the remote so I
493 # think is a okay to ignore them,
494 # think is a okay to ignore them,
494 discarded.add(nh)
495 discarded.add(nh)
495 newhs |= unknownheads
496 newhs |= unknownheads
496 return newhs, discarded
497 return newhs, discarded
497
498
498 def pushingmarkerfor(obsstore, pushset, node):
499 def pushingmarkerfor(obsstore, pushset, node):
499 """true if some markers are to be pushed for node
500 """true if some markers are to be pushed for node
500
501
501 We cannot just look into the pushed obsmarkers from the pushop because
502 We cannot just look into the pushed obsmarkers from the pushop because
502 discovery might have filtered relevant markers. In addition, listing all
503 discovery might have filtered relevant markers. In addition, listing all
503 markers relevant to all changesets in the pushed set would be too expensive
504 markers relevant to all changesets in the pushed set would be too expensive
504 (O(len(repo)))
505 (O(len(repo)))
505
506
506 (note: there are caching opportunities in this function, but it would
507 (note: there are caching opportunities in this function, but it would
507 require a two-dimensional stack.)
508 require a two-dimensional stack.)
508 """
509 """
509 successorsmarkers = obsstore.successors
510 successorsmarkers = obsstore.successors
510 stack = [node]
511 stack = [node]
511 seen = set(stack)
512 seen = set(stack)
512 while stack:
513 while stack:
513 current = stack.pop()
514 current = stack.pop()
514 if current in pushset:
515 if current in pushset:
515 return True
516 return True
516 markers = successorsmarkers.get(current, ())
517 markers = successorsmarkers.get(current, ())
517 # markers fields = ('prec', 'succs', 'flag', 'meta', 'date', 'parents')
518 # markers fields = ('prec', 'succs', 'flag', 'meta', 'date', 'parents')
518 for m in markers:
519 for m in markers:
519 nexts = m[1] # successors
520 nexts = m[1] # successors
520 if not nexts: # this is a prune marker
521 if not nexts: # this is a prune marker
521 nexts = m[5] or () # parents
522 nexts = m[5] or () # parents
522 for n in nexts:
523 for n in nexts:
523 if n not in seen:
524 if n not in seen:
524 seen.add(n)
525 seen.add(n)
525 stack.append(n)
526 stack.append(n)
526 return False
527 return False
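A toy version of the same traversal over a plain successor mapping (illustrative only; real markers are the 6-tuples described in the comment above):

def reachespushset(successors, pushset, node):
    stack, seen = [node], {node}
    while stack:
        current = stack.pop()
        if current in pushset:
            return True
        for n in successors.get(current, ()):
            if n not in seen:
                seen.add(n)
                stack.append(n)
    return False

assert reachespushset({'a': ['b'], 'b': ['c']}, {'c'}, 'a')
assert not reachespushset({'a': ['b']}, {'z'}, 'a')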
@@ -1,1989 +1,2017 @@
1 # exchange.py - utility to exchange data between repos.
1 # exchange.py - utility to exchange data between repos.
2 #
2 #
3 # Copyright 2005-2007 Matt Mackall <mpm@selenic.com>
3 # Copyright 2005-2007 Matt Mackall <mpm@selenic.com>
4 #
4 #
5 # This software may be used and distributed according to the terms of the
5 # This software may be used and distributed according to the terms of the
6 # GNU General Public License version 2 or any later version.
6 # GNU General Public License version 2 or any later version.
7
7
8 from __future__ import absolute_import
8 from __future__ import absolute_import
9
9
10 import errno
10 import errno
11 import hashlib
11 import hashlib
12
12
13 from .i18n import _
13 from .i18n import _
14 from .node import (
14 from .node import (
15 hex,
15 hex,
16 nullid,
16 nullid,
17 )
17 )
18 from . import (
18 from . import (
19 bookmarks as bookmod,
19 bookmarks as bookmod,
20 bundle2,
20 bundle2,
21 changegroup,
21 changegroup,
22 discovery,
22 discovery,
23 error,
23 error,
24 lock as lockmod,
24 lock as lockmod,
25 obsolete,
25 obsolete,
26 phases,
26 phases,
27 pushkey,
27 pushkey,
28 scmutil,
28 scmutil,
29 sslutil,
29 sslutil,
30 streamclone,
30 streamclone,
31 url as urlmod,
31 url as urlmod,
32 util,
32 util,
33 )
33 )
34
34
35 urlerr = util.urlerr
35 urlerr = util.urlerr
36 urlreq = util.urlreq
36 urlreq = util.urlreq
37
37
38 # Maps bundle version human names to changegroup versions.
38 # Maps bundle version human names to changegroup versions.
39 _bundlespeccgversions = {'v1': '01',
39 _bundlespeccgversions = {'v1': '01',
40 'v2': '02',
40 'v2': '02',
41 'packed1': 's1',
41 'packed1': 's1',
42 'bundle2': '02', #legacy
42 'bundle2': '02', #legacy
43 }
43 }
44
44
45 # Compression engines allowed in version 1. THIS SHOULD NEVER CHANGE.
45 # Compression engines allowed in version 1. THIS SHOULD NEVER CHANGE.
46 _bundlespecv1compengines = {'gzip', 'bzip2', 'none'}
46 _bundlespecv1compengines = {'gzip', 'bzip2', 'none'}
47
47
48 def parsebundlespec(repo, spec, strict=True, externalnames=False):
48 def parsebundlespec(repo, spec, strict=True, externalnames=False):
49 """Parse a bundle string specification into parts.
49 """Parse a bundle string specification into parts.
50
50
51 Bundle specifications denote a well-defined bundle/exchange format.
51 Bundle specifications denote a well-defined bundle/exchange format.
52 The content of a given specification should not change over time in
52 The content of a given specification should not change over time in
53 order to ensure that bundles produced by a newer version of Mercurial are
53 order to ensure that bundles produced by a newer version of Mercurial are
54 readable from an older version.
54 readable from an older version.
55
55
56 The string currently has the form:
56 The string currently has the form:
57
57
58 <compression>-<type>[;<parameter0>[;<parameter1>]]
58 <compression>-<type>[;<parameter0>[;<parameter1>]]
59
59
60 Where <compression> is one of the supported compression formats
60 Where <compression> is one of the supported compression formats
61 and <type> is (currently) a version string. A ";" can follow the type and
61 and <type> is (currently) a version string. A ";" can follow the type and
62 all text afterwards is interpreted as URI encoded, ";" delimited key=value
62 all text afterwards is interpreted as URI encoded, ";" delimited key=value
63 pairs.
63 pairs.
64
64
65 If ``strict`` is True (the default) <compression> is required. Otherwise,
65 If ``strict`` is True (the default) <compression> is required. Otherwise,
66 it is optional.
66 it is optional.
67
67
68 If ``externalnames`` is False (the default), the human-centric names will
68 If ``externalnames`` is False (the default), the human-centric names will
69 be converted to their internal representation.
69 be converted to their internal representation.
70
70
71 Returns a 3-tuple of (compression, version, parameters). Compression will
71 Returns a 3-tuple of (compression, version, parameters). Compression will
72 be ``None`` if not in strict mode and a compression isn't defined.
72 be ``None`` if not in strict mode and a compression isn't defined.
73
73
74 An ``InvalidBundleSpecification`` is raised when the specification is
74 An ``InvalidBundleSpecification`` is raised when the specification is
75 not syntactically well formed.
75 not syntactically well formed.
76
76
77 An ``UnsupportedBundleSpecification`` is raised when the compression or
77 An ``UnsupportedBundleSpecification`` is raised when the compression or
78 bundle type/version is not recognized.
78 bundle type/version is not recognized.
79
79
80 Note: this function will likely eventually return a more complex data
80 Note: this function will likely eventually return a more complex data
81 structure, including bundle2 part information.
81 structure, including bundle2 part information.
82 """
82 """
83 def parseparams(s):
83 def parseparams(s):
84 if ';' not in s:
84 if ';' not in s:
85 return s, {}
85 return s, {}
86
86
87 params = {}
87 params = {}
88 version, paramstr = s.split(';', 1)
88 version, paramstr = s.split(';', 1)
89
89
90 for p in paramstr.split(';'):
90 for p in paramstr.split(';'):
91 if '=' not in p:
91 if '=' not in p:
92 raise error.InvalidBundleSpecification(
92 raise error.InvalidBundleSpecification(
93 _('invalid bundle specification: '
93 _('invalid bundle specification: '
94 'missing "=" in parameter: %s') % p)
94 'missing "=" in parameter: %s') % p)
95
95
96 key, value = p.split('=', 1)
            key, value = p.split('=', 1)
            key = urlreq.unquote(key)
            value = urlreq.unquote(value)
            params[key] = value

        return version, params


    if strict and '-' not in spec:
        raise error.InvalidBundleSpecification(
                _('invalid bundle specification; '
                  'must be prefixed with compression: %s') % spec)

    if '-' in spec:
        compression, version = spec.split('-', 1)

        if compression not in util.compengines.supportedbundlenames:
            raise error.UnsupportedBundleSpecification(
                    _('%s compression is not supported') % compression)

        version, params = parseparams(version)

        if version not in _bundlespeccgversions:
            raise error.UnsupportedBundleSpecification(
                    _('%s is not a recognized bundle version') % version)
    else:
        # Value could be just the compression or just the version, in which
        # case some defaults are assumed (but only when not in strict mode).
        assert not strict

        spec, params = parseparams(spec)

        if spec in util.compengines.supportedbundlenames:
            compression = spec
            version = 'v1'
            # Generaldelta repos require v2.
            if 'generaldelta' in repo.requirements:
                version = 'v2'
            # Modern compression engines require v2.
            if compression not in _bundlespecv1compengines:
                version = 'v2'
        elif spec in _bundlespeccgversions:
            if spec == 'packed1':
                compression = 'none'
            else:
                compression = 'bzip2'
            version = spec
        else:
            raise error.UnsupportedBundleSpecification(
                    _('%s is not a recognized bundle specification') % spec)

    # Bundle version 1 only supports a known set of compression engines.
    if version == 'v1' and compression not in _bundlespecv1compengines:
        raise error.UnsupportedBundleSpecification(
            _('compression engine %s is not supported on v1 bundles') %
            compression)

    # The specification for packed1 can optionally declare the data formats
    # required to apply it. If we see this metadata, compare against what the
    # repo supports and error if the bundle isn't compatible.
    if version == 'packed1' and 'requirements' in params:
        requirements = set(params['requirements'].split(','))
        missingreqs = requirements - repo.supportedformats
        if missingreqs:
            raise error.UnsupportedBundleSpecification(
                _('missing support for repository features: %s') %
                  ', '.join(sorted(missingreqs)))

    if not externalnames:
        engine = util.compengines.forbundlename(compression)
        compression = engine.bundletype()[1]
        version = _bundlespeccgversions[version]
    return compression, version, params

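# Illustrative note (not part of the original module): assuming the stock
# engine names, the parsing above splits a user-facing spec such as
# 'gzip-v2;requirements=generaldelta' into its three components; with
# externalnames=False the names are translated to internal identifiers
# (e.g. 'gzip' -> 'GZ' and 'v2' -> '02') via util.compengines and
# _bundlespeccgversions, so the exact values depend on the registered
# engines:
#
#   parsebundlespec(repo, 'gzip-v2')  # -> ('GZ', '02', {}), roughly
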
def readbundle(ui, fh, fname, vfs=None):
    header = changegroup.readexactly(fh, 4)

    alg = None
    if not fname:
        fname = "stream"
        if not header.startswith('HG') and header.startswith('\0'):
            fh = changegroup.headerlessfixup(fh, header)
            header = "HG10"
            alg = 'UN'
    elif vfs:
        fname = vfs.join(fname)

    magic, version = header[0:2], header[2:4]

    if magic != 'HG':
        raise error.Abort(_('%s: not a Mercurial bundle') % fname)
    if version == '10':
        if alg is None:
            alg = changegroup.readexactly(fh, 2)
        return changegroup.cg1unpacker(fh, alg)
    elif version.startswith('2'):
        return bundle2.getunbundler(ui, fh, magicstring=magic + version)
    elif version == 'S1':
        return streamclone.streamcloneapplier(fh)
    else:
        raise error.Abort(_('%s: unknown bundle version %s') % (fname, version))

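# Quick reference (illustrative) for the 4-byte magics dispatched above:
#   'HG10' + 2-byte compression id ('UN', 'GZ', 'BZ') -> changegroup v1 bundle
#   'HG20'                                            -> bundle2 container
#   'HGS1'                                            -> stream clone bundle
# A headerless stream (first byte '\0') is wrapped up as uncompressed HG10.
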
def getbundlespec(ui, fh):
    """Infer the bundlespec from a bundle file handle.

    The input file handle is seeked and the original seek position is not
    restored.
    """
    def speccompression(alg):
        try:
            return util.compengines.forbundletype(alg).bundletype()[0]
        except KeyError:
            return None

    b = readbundle(ui, fh, None)
    if isinstance(b, changegroup.cg1unpacker):
        alg = b._type
        if alg == '_truncatedBZ':
            alg = 'BZ'
        comp = speccompression(alg)
        if not comp:
            raise error.Abort(_('unknown compression algorithm: %s') % alg)
        return '%s-v1' % comp
    elif isinstance(b, bundle2.unbundle20):
        if 'Compression' in b.params:
            comp = speccompression(b.params['Compression'])
            if not comp:
                raise error.Abort(_('unknown compression algorithm: %s') % comp)
        else:
            comp = 'none'

        version = None
        for part in b.iterparts():
            if part.type == 'changegroup':
                version = part.params['version']
                if version in ('01', '02'):
                    version = 'v2'
                else:
                    raise error.Abort(_('changegroup version %s does not have '
                                        'a known bundlespec') % version,
                                      hint=_('try upgrading your Mercurial '
                                             'client'))

        if not version:
            raise error.Abort(_('could not identify changegroup version in '
                                'bundle'))

        return '%s-%s' % (comp, version)
    elif isinstance(b, streamclone.streamcloneapplier):
        requirements = streamclone.readbundle1header(fh)[2]
        params = 'requirements=%s' % ','.join(sorted(requirements))
        return 'none-packed1;%s' % urlreq.quote(params)
    else:
        raise error.Abort(_('unknown bundle type: %s') % b)

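# Illustrative usage (hypothetical file name): inferring the spec string of
# an on-disk bundle, e.g. to reproduce it later with 'hg bundle -t <spec>':
#
#   with open('changesets.hg', 'rb') as fh:
#       spec = getbundlespec(ui, fh)  # e.g. 'gzip-v2' or 'bzip2-v1'
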
def _computeoutgoing(repo, heads, common):
    """Computes which revs are outgoing given a set of common
    and a set of heads.

    This is a separate function so extensions can have access to
    the logic.

    Returns a discovery.outgoing object.
    """
    cl = repo.changelog
    if common:
        hasnode = cl.hasnode
        common = [n for n in common if hasnode(n)]
    else:
        common = [nullid]
    if not heads:
        heads = cl.heads()
    return discovery.outgoing(repo, common, heads)

def _forcebundle1(op):
    """return true if a pull/push must use bundle1

    This function is used to allow testing of the older bundle version"""
    ui = op.repo.ui
    forcebundle1 = False
    # The goal of this config option is to allow developers to choose the
    # bundle version used during exchange. This is especially handy during
    # tests. The value is a list of bundle versions to be picked from; the
    # highest version should be used.
    #
    # developer config: devel.legacy.exchange
    exchange = ui.configlist('devel', 'legacy.exchange')
    forcebundle1 = 'bundle2' not in exchange and 'bundle1' in exchange
    return forcebundle1 or not op.remote.capable('bundle2')

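# Configuration sketch (illustrative): forcing the legacy protocol from a
# test hgrc, which makes _forcebundle1 return True even for bundle2-capable
# peers:
#
#   [devel]
#   legacy.exchange = bundle1
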
class pushoperation(object):
    """An object that represents a single push operation.

    Its purpose is to carry push related state and very common operations.

    A new pushoperation should be created at the beginning of each push and
    discarded afterward.
    """

    def __init__(self, repo, remote, force=False, revs=None, newbranch=False,
                 bookmarks=()):
        # repo we push from
        self.repo = repo
        self.ui = repo.ui
        # repo we push to
        self.remote = remote
        # force option provided
        self.force = force
        # revs to be pushed (None is "all")
        self.revs = revs
        # bookmarks explicitly pushed
        self.bookmarks = bookmarks
        # allow push of new branch
        self.newbranch = newbranch
        # did a local lock get acquired?
        self.locallocked = None
        # steps already performed
        # (used to check what steps have been already performed through bundle2)
        self.stepsdone = set()
        # Integer version of the changegroup push result
        # - None means nothing to push
        # - 0 means HTTP error
        # - 1 means we pushed and remote head count is unchanged *or*
        #   we have outgoing changesets but refused to push
        # - other values as described by addchangegroup()
        self.cgresult = None
        # Boolean value for the bookmark push
        self.bkresult = None
        # discover.outgoing object (contains common and outgoing data)
        self.outgoing = None
        # all remote topological heads before the push
        self.remoteheads = None
        # Details of the remote branch pre and post push
        #
        # mapping: {'branch': ([remoteheads],
        #                      [newheads],
        #                      [unsyncedheads],
        #                      [discardedheads])}
        # - branch: the branch name
        # - remoteheads: the list of remote heads known locally
        #                None if the branch is new
        # - newheads: the new remote heads (known locally) with outgoing pushed
        # - unsyncedheads: the list of remote heads unknown locally.
        # - discardedheads: the list of remote heads made obsolete by the push
        self.pushbranchmap = None
        # testable as a boolean indicating if any nodes are missing locally.
        self.incoming = None
        # phase changes that must be pushed alongside the changesets
        self.outdatedphases = None
        # phase changes that must be pushed if changeset push fails
        self.fallbackoutdatedphases = None
        # outgoing obsmarkers
        self.outobsmarkers = set()
        # outgoing bookmarks
        self.outbookmarks = []
        # transaction manager
        self.trmanager = None
        # map { pushkey partid -> callback handling failure}
        # used to handle exception from mandatory pushkey part failure
        self.pkfailcb = {}

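# Illustrative example (hypothetical hashes) of the pushbranchmap layout
# documented above, for a push that adds one new head to 'default':
#
#   {'default': (['1111...'],             # remoteheads known locally
#                ['1111...', '2222...'],  # newheads once outgoing is pushed
#                [],                      # unsyncedheads (unknown locally)
#                [])}                     # discardedheads (obsoleted)
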
    @util.propertycache
    def futureheads(self):
        """future remote heads if the changeset push succeeds"""
        return self.outgoing.missingheads

    @util.propertycache
    def fallbackheads(self):
        """future remote heads if the changeset push fails"""
        if self.revs is None:
            # no target to push, all common are relevant
            return self.outgoing.commonheads
        unfi = self.repo.unfiltered()
        # I want cheads = heads(::missingheads and ::commonheads)
        # (missingheads is revs with secret changeset filtered out)
        #
        # This can be expressed as:
        #     cheads = ( (missingheads and ::commonheads)
        #              + (commonheads and ::missingheads))
        #
        # while trying to push we already computed the following:
        #     common = (::commonheads)
        #     missing = ((commonheads::missingheads) - commonheads)
        #
        # We can pick:
        # * missingheads part of common (::commonheads)
        common = self.outgoing.common
        nm = self.repo.changelog.nodemap
        cheads = [node for node in self.revs if nm[node] in common]
        # and
        # * commonheads parents on missing
        revset = unfi.set('%ln and parents(roots(%ln))',
                          self.outgoing.commonheads,
                          self.outgoing.missing)
        cheads.extend(c.node() for c in revset)
        return cheads

    @property
    def commonheads(self):
        """set of all common heads after changeset bundle push"""
        if self.cgresult:
            return self.futureheads
        else:
            return self.fallbackheads

    # mapping of message used when pushing bookmark
    bookmsgmap = {'update': (_("updating bookmark %s\n"),
                             _('updating bookmark %s failed!\n')),
                  'export': (_("exporting bookmark %s\n"),
                             _('exporting bookmark %s failed!\n')),
                  'delete': (_("deleting remote bookmark %s\n"),
                             _('deleting remote bookmark %s failed!\n')),
                  }


def push(repo, remote, force=False, revs=None, newbranch=False, bookmarks=(),
         opargs=None):
    '''Push outgoing changesets (limited by revs) from a local
    repository to remote. Return an integer:
      - None means nothing to push
      - 0 means HTTP error
      - 1 means we pushed and remote head count is unchanged *or*
        we have outgoing changesets but refused to push
      - other values as described by addchangegroup()
    '''
    if opargs is None:
        opargs = {}
    pushop = pushoperation(repo, remote, force, revs, newbranch, bookmarks,
                           **opargs)
    if pushop.remote.local():
        missing = (set(pushop.repo.requirements)
                   - pushop.remote.local().supported)
        if missing:
            msg = _("required features are not"
                    " supported in the destination:"
                    " %s") % (', '.join(sorted(missing)))
            raise error.Abort(msg)

    # there are two ways to push to remote repo:
    #
    # addchangegroup assumes local user can lock remote
    # repo (local filesystem, old ssh servers).
    #
    # unbundle assumes local user cannot lock remote repo (new ssh
    # servers, http servers).

    if not pushop.remote.canpush():
        raise error.Abort(_("destination does not support push"))
    # get local lock as we might write phase data
    localwlock = locallock = None
    try:
        # bundle2 push may receive a reply bundle touching bookmarks or other
        # things requiring the wlock. Take it now to ensure proper ordering.
        maypushback = pushop.ui.configbool('experimental', 'bundle2.pushback')
        if (not _forcebundle1(pushop)) and maypushback:
            localwlock = pushop.repo.wlock()
        locallock = pushop.repo.lock()
        pushop.locallocked = True
    except IOError as err:
        pushop.locallocked = False
        if err.errno != errno.EACCES:
            raise
        # source repo cannot be locked.
        # We do not abort the push, but just disable the local phase
        # synchronisation.
        msg = 'cannot lock source repository: %s\n' % err
        pushop.ui.debug(msg)
    try:
        if pushop.locallocked:
            pushop.trmanager = transactionmanager(pushop.repo,
                                                  'push-response',
                                                  pushop.remote.url())
        pushop.repo.checkpush(pushop)
        lock = None
        unbundle = pushop.remote.capable('unbundle')
        if not unbundle:
            lock = pushop.remote.lock()
        try:
            _pushdiscovery(pushop)
            if not _forcebundle1(pushop):
                _pushbundle2(pushop)
            _pushchangeset(pushop)
            _pushsyncphase(pushop)
            _pushobsolete(pushop)
            _pushbookmark(pushop)
        finally:
            if lock is not None:
                lock.release()
        if pushop.trmanager:
            pushop.trmanager.close()
    finally:
        if pushop.trmanager:
            pushop.trmanager.release()
        if locallock is not None:
            locallock.release()
        if localwlock is not None:
            localwlock.release()

    return pushop

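# Illustrative call (hypothetical URL): pushing a single head to a peer and
# inspecting the changegroup result codes described in the docstring above;
# hg.peer as the way to obtain the remote is an assumption of this sketch:
#
#   remote = hg.peer(repo, {}, 'ssh://example.com/repo')
#   pushop = push(repo, remote, revs=[repo['tip'].node()])
#   if pushop.cgresult == 1:
#       repo.ui.status('remote head count unchanged\n')
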
# list of steps to perform discovery before push
pushdiscoveryorder = []

# Mapping between step name and function
#
# This exists to help extensions wrap steps if necessary
pushdiscoverymapping = {}

def pushdiscovery(stepname):
    """decorator for functions performing discovery before push

    The function is added to the step -> function mapping and appended to the
    list of steps. Beware that decorated functions will be added in order
    (this may matter).

    You can only use this decorator for a new step; if you want to wrap a step
    from an extension, change the pushdiscoverymapping dictionary directly."""
    def dec(func):
        assert stepname not in pushdiscoverymapping
        pushdiscoverymapping[stepname] = func
        pushdiscoveryorder.append(stepname)
        return func
    return dec

def _pushdiscovery(pushop):
    """Run all discovery steps"""
    for stepname in pushdiscoveryorder:
        step = pushdiscoverymapping[stepname]
        step(pushop)

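# Sketch (illustrative, hypothetical wrapper) of how an extension wraps an
# existing discovery step through the mapping, as the docstring suggests:
#
#   origstep = pushdiscoverymapping['changeset']
#   def _loggedchangeset(pushop):
#       pushop.ui.debug('running changeset discovery\n')
#       origstep(pushop)
#   pushdiscoverymapping['changeset'] = _loggedchangeset
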
@pushdiscovery('changeset')
def _pushdiscoverychangeset(pushop):
    """discover the changesets that need to be pushed"""
    fci = discovery.findcommonincoming
    commoninc = fci(pushop.repo, pushop.remote, force=pushop.force)
    common, inc, remoteheads = commoninc
    fco = discovery.findcommonoutgoing
    outgoing = fco(pushop.repo, pushop.remote, onlyheads=pushop.revs,
                   commoninc=commoninc, force=pushop.force)
    pushop.outgoing = outgoing
    pushop.remoteheads = remoteheads
    pushop.incoming = inc

@pushdiscovery('phase')
def _pushdiscoveryphase(pushop):
    """discover the phases that need to be pushed

    (computed for both success and failure case for changesets push)"""
    outgoing = pushop.outgoing
    unfi = pushop.repo.unfiltered()
    remotephases = pushop.remote.listkeys('phases')
    publishing = remotephases.get('publishing', False)
    if (pushop.ui.configbool('ui', '_usedassubrepo', False)
        and remotephases    # server supports phases
        and not pushop.outgoing.missing # no changesets to be pushed
        and publishing):
        # When:
        # - this is a subrepo push
        # - and remote supports phases
        # - and no changesets are to be pushed
        # - and remote is publishing
        # We may be in issue 3871 case!
        # We drop the possible phase synchronisation done by
        # courtesy to publish changesets possibly locally draft
        # on the remote.
        remotephases = {'publishing': 'True'}
    ana = phases.analyzeremotephases(pushop.repo,
                                     pushop.fallbackheads,
                                     remotephases)
    pheads, droots = ana
    extracond = ''
    if not publishing:
        extracond = ' and public()'
    revset = 'heads((%%ln::%%ln) %s)' % extracond
    # Get the list of all revs draft on remote but public here.
    # XXX Beware that this revset breaks if droots is not strictly
    # XXX made of roots; we may want to ensure it is, but that is costly.
    fallback = list(unfi.set(revset, droots, pushop.fallbackheads))
    if not outgoing.missing:
        future = fallback
    else:
        # adds changesets we are going to push as draft
        #
        # should not be necessary for publishing server, but because of an
        # issue fixed in xxxxx we have to do it anyway.
        fdroots = list(unfi.set('roots(%ln + %ln::)',
                                outgoing.missing, droots))
        fdroots = [f.node() for f in fdroots]
        future = list(unfi.set(revset, fdroots, pushop.futureheads))
    pushop.outdatedphases = future
    pushop.fallbackoutdatedphases = fallback

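# Worked example (illustrative): with droots=[D] and fallbackheads=[H], the
# revset above evaluates heads((D::H) and public()), i.e. the topmost
# changesets between the remote draft roots and our heads that are already
# public locally; those are exactly the phase updates the push must send.
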
@pushdiscovery('obsmarker')
def _pushdiscoveryobsmarkers(pushop):
    if (obsolete.isenabled(pushop.repo, obsolete.exchangeopt)
        and pushop.repo.obsstore
        and 'obsolete' in pushop.remote.listkeys('namespaces')):
        repo = pushop.repo
        # very naive computation, that can be quite expensive on big repos.
        # However: evolution is currently slow on them anyway.
        nodes = (c.node() for c in repo.set('::%ln', pushop.futureheads))
        pushop.outobsmarkers = pushop.repo.obsstore.relevantmarkers(nodes)

@pushdiscovery('bookmarks')
def _pushdiscoverybookmarks(pushop):
    ui = pushop.ui
    repo = pushop.repo.unfiltered()
    remote = pushop.remote
    ui.debug("checking for updated bookmarks\n")
    ancestors = ()
    if pushop.revs:
        revnums = map(repo.changelog.rev, pushop.revs)
        ancestors = repo.changelog.ancestors(revnums, inclusive=True)
    remotebookmark = remote.listkeys('bookmarks')

    explicit = set([repo._bookmarks.expandname(bookmark)
                    for bookmark in pushop.bookmarks])

    remotebookmark = bookmod.unhexlifybookmarks(remotebookmark)
    comp = bookmod.comparebookmarks(repo, repo._bookmarks, remotebookmark)

    def safehex(x):
        if x is None:
            return x
        return hex(x)

    def hexifycompbookmarks(bookmarks):
        for b, scid, dcid in bookmarks:
            yield b, safehex(scid), safehex(dcid)

    comp = [hexifycompbookmarks(marks) for marks in comp]
    addsrc, adddst, advsrc, advdst, diverge, differ, invalid, same = comp

    for b, scid, dcid in advsrc:
        if b in explicit:
            explicit.remove(b)
        if not ancestors or repo[scid].rev() in ancestors:
            pushop.outbookmarks.append((b, dcid, scid))
    # search added bookmark
    for b, scid, dcid in addsrc:
        if b in explicit:
            explicit.remove(b)
        pushop.outbookmarks.append((b, '', scid))
    # search for overwritten bookmark
    for b, scid, dcid in list(advdst) + list(diverge) + list(differ):
        if b in explicit:
            explicit.remove(b)
        pushop.outbookmarks.append((b, dcid, scid))
    # search for bookmark to delete
    for b, scid, dcid in adddst:
        if b in explicit:
            explicit.remove(b)
        # treat as "deleted locally"
        pushop.outbookmarks.append((b, dcid, ''))
    # identical bookmarks shouldn't get reported
    for b, scid, dcid in same:
        if b in explicit:
            explicit.remove(b)

    if explicit:
        explicit = sorted(explicit)
        # we should probably list all of them
        ui.warn(_('bookmark %s does not exist on the local '
                  'or remote repository!\n') % explicit[0])
        pushop.bkresult = 2

    pushop.outbookmarks.sort()

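# For reference (illustrative summary): comparebookmarks returns eight
# iterables of (name, source-id, destination-id) tuples -- addsrc, adddst,
# advsrc, advdst, diverge, differ, invalid, same -- which the loops above
# translate into (bookmark, old, new) pushkey triples on pushop.outbookmarks.
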
def _pushcheckoutgoing(pushop):
    outgoing = pushop.outgoing
    unfi = pushop.repo.unfiltered()
    if not outgoing.missing:
        # nothing to push
        scmutil.nochangesfound(unfi.ui, unfi, outgoing.excluded)
        return False
    # something to push
    if not pushop.force:
        # if repo.obsstore == False --> no obsolete
        # then, save the iteration
        if unfi.obsstore:
            # these messages are pulled out for the 80-char line length limit
            mso = _("push includes obsolete changeset: %s!")
            mst = {"unstable": _("push includes unstable changeset: %s!"),
                   "bumped": _("push includes bumped changeset: %s!"),
                   "divergent": _("push includes divergent changeset: %s!")}
            # If we are going to push and there is at least one obsolete or
            # unstable changeset in missing, then at least one of the missing
            # heads will be obsolete or unstable, so checking heads only is ok.
            for node in outgoing.missingheads:
                ctx = unfi[node]
                if ctx.obsolete():
                    raise error.Abort(mso % ctx)
                elif ctx.troubled():
                    raise error.Abort(mst[ctx.troubles()[0]] % ctx)

        discovery.checkheads(pushop)
    return True

# List of names of steps to perform for an outgoing bundle2, order matters.
b2partsgenorder = []

# Mapping between step name and function
#
# This exists to help extensions wrap steps if necessary
b2partsgenmapping = {}

def b2partsgenerator(stepname, idx=None):
    """decorator for functions generating bundle2 parts

    The function is added to the step -> function mapping and appended to the
    list of steps. Beware that decorated functions will be added in order
    (this may matter).

    You can only use this decorator for new steps; if you want to wrap a step
    from an extension, change the b2partsgenmapping dictionary directly."""
    def dec(func):
        assert stepname not in b2partsgenmapping
        b2partsgenmapping[stepname] = func
        if idx is None:
            b2partsgenorder.append(stepname)
        else:
            b2partsgenorder.insert(idx, stepname)
        return func
    return dec

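# Sketch (illustrative, hypothetical step name): an extension inserting its
# own part generator ahead of the standard steps by passing idx:
#
#   @b2partsgenerator('my-audit', idx=0)
#   def _pushb2audit(pushop, bundler):
#       bundler.newpart('output', data='audited\n', mandatory=False)
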
def _pushb2ctxcheckheads(pushop, bundler):
    """Generate race condition checking parts

    Exists as an independent function to aid extensions
    """
    # * 'force' does not check for push races,
    # * if we don't push anything, there is nothing to check.
    if not pushop.force and pushop.outgoing.missingheads:
        allowunrelated = 'related' in bundler.capabilities.get('checkheads', ())
        if not allowunrelated:
            bundler.newpart('check:heads', data=iter(pushop.remoteheads))
        else:
            affected = set()
            for branch, heads in pushop.pushbranchmap.iteritems():
                remoteheads, newheads, unsyncedheads, discardedheads = heads
                if remoteheads is not None:
                    remote = set(remoteheads)
                    affected |= set(discardedheads) & remote
                    affected |= remote - set(newheads)
            if affected:
                data = iter(sorted(affected))
                bundler.newpart('check:updated-heads', data=data)

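# Illustrative contrast between the two parts above: 'check:heads' makes the
# server abort if *any* of its heads changed since discovery, while
# 'check:updated-heads' (negotiated through the 'checkheads=related'
# capability) only lists the heads this push is expected to rewrite, so
# concurrent pushes touching unrelated heads no longer race with each other.
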
@b2partsgenerator('changeset')
def _pushb2ctx(pushop, bundler):
    """handle changegroup push through bundle2

    addchangegroup result is stored in the ``pushop.cgresult`` attribute.
    """
    if 'changesets' in pushop.stepsdone:
        return
    pushop.stepsdone.add('changesets')
    # Send known heads to the server for race detection.
    if not _pushcheckoutgoing(pushop):
        return
    pushop.repo.prepushoutgoinghooks(pushop)

    _pushb2ctxcheckheads(pushop, bundler)

    b2caps = bundle2.bundle2caps(pushop.remote)
    version = '01'
    cgversions = b2caps.get('changegroup')
    if cgversions:  # 3.1 and 3.2 ship with an empty value
        cgversions = [v for v in cgversions
                      if v in changegroup.supportedoutgoingversions(
                          pushop.repo)]
        if not cgversions:
            raise ValueError(_('no common changegroup version'))
        version = max(cgversions)
    cg = changegroup.getlocalchangegroupraw(pushop.repo, 'push',
                                            pushop.outgoing,
                                            version=version)
    cgpart = bundler.newpart('changegroup', data=cg)
    if cgversions:
        cgpart.addparam('version', version)
    if 'treemanifest' in pushop.repo.requirements:
        cgpart.addparam('treemanifest', '1')
    def handlereply(op):
        """extract addchangegroup returns from server reply"""
        cgreplies = op.records.getreplies(cgpart.id)
        assert len(cgreplies['changegroup']) == 1
        pushop.cgresult = cgreplies['changegroup'][0]['return']
    return handlereply

@b2partsgenerator('phase')
def _pushb2phases(pushop, bundler):
    """handle phase push through bundle2"""
    if 'phases' in pushop.stepsdone:
        return
    b2caps = bundle2.bundle2caps(pushop.remote)
    if 'pushkey' not in b2caps:
        return
    pushop.stepsdone.add('phases')
    part2node = []

    def handlefailure(pushop, exc):
        targetid = int(exc.partid)
        for partid, node in part2node:
            if partid == targetid:
                raise error.Abort(_('updating %s to public failed') % node)

    enc = pushkey.encode
    for newremotehead in pushop.outdatedphases:
        part = bundler.newpart('pushkey')
        part.addparam('namespace', enc('phases'))
        part.addparam('key', enc(newremotehead.hex()))
        part.addparam('old', enc(str(phases.draft)))
        part.addparam('new', enc(str(phases.public)))
        part2node.append((part.id, newremotehead))
        pushop.pkfailcb[part.id] = handlefailure

    def handlereply(op):
        for partid, node in part2node:
            partrep = op.records.getreplies(partid)
            results = partrep['pushkey']
            assert len(results) <= 1
            msg = None
            if not results:
                msg = _('server ignored update of %s to public!\n') % node
            elif not int(results[0]['return']):
                msg = _('updating %s to public failed!\n') % node
            if msg is not None:
                pushop.ui.warn(msg)
    return handlereply

801 def _pushb2obsmarkers(pushop, bundler):
829 def _pushb2obsmarkers(pushop, bundler):
802 if 'obsmarkers' in pushop.stepsdone:
830 if 'obsmarkers' in pushop.stepsdone:
803 return
831 return
804 remoteversions = bundle2.obsmarkersversion(bundler.capabilities)
832 remoteversions = bundle2.obsmarkersversion(bundler.capabilities)
805 if obsolete.commonversion(remoteversions) is None:
833 if obsolete.commonversion(remoteversions) is None:
806 return
834 return
807 pushop.stepsdone.add('obsmarkers')
835 pushop.stepsdone.add('obsmarkers')
808 if pushop.outobsmarkers:
836 if pushop.outobsmarkers:
809 markers = sorted(pushop.outobsmarkers)
837 markers = sorted(pushop.outobsmarkers)
810 bundle2.buildobsmarkerspart(bundler, markers)
838 bundle2.buildobsmarkerspart(bundler, markers)
811
839
@b2partsgenerator('bookmarks')
def _pushb2bookmarks(pushop, bundler):
    """handle bookmark push through bundle2"""
    if 'bookmarks' in pushop.stepsdone:
        return
    b2caps = bundle2.bundle2caps(pushop.remote)
    if 'pushkey' not in b2caps:
        return
    pushop.stepsdone.add('bookmarks')
    part2book = []
    enc = pushkey.encode

    def handlefailure(pushop, exc):
        targetid = int(exc.partid)
        for partid, book, action in part2book:
            if partid == targetid:
                raise error.Abort(bookmsgmap[action][1].rstrip() % book)
        # we should not be called for parts we did not generate
        assert False

    for book, old, new in pushop.outbookmarks:
        part = bundler.newpart('pushkey')
        part.addparam('namespace', enc('bookmarks'))
        part.addparam('key', enc(book))
        part.addparam('old', enc(old))
        part.addparam('new', enc(new))
        action = 'update'
        if not old:
            action = 'export'
        elif not new:
            action = 'delete'
        part2book.append((part.id, book, action))
        pushop.pkfailcb[part.id] = handlefailure

    def handlereply(op):
        ui = pushop.ui
        for partid, book, action in part2book:
            partrep = op.records.getreplies(partid)
            results = partrep['pushkey']
            assert len(results) <= 1
            if not results:
                pushop.ui.warn(_('server ignored bookmark %s update\n') % book)
            else:
                ret = int(results[0]['return'])
                if ret:
                    ui.status(bookmsgmap[action][0] % book)
                else:
                    ui.warn(bookmsgmap[action][1] % book)
        if pushop.bkresult is not None:
            pushop.bkresult = 1
    return handlereply


def _pushbundle2(pushop):
    """push data to the remote using bundle2

    The only currently supported type of data is changegroup but this will
    evolve in the future."""
    bundler = bundle2.bundle20(pushop.ui, bundle2.bundle2caps(pushop.remote))
    pushback = (pushop.trmanager
                and pushop.ui.configbool('experimental', 'bundle2.pushback'))

    # create reply capability
    capsblob = bundle2.encodecaps(bundle2.getrepocaps(pushop.repo,
                                                      allowpushback=pushback))
    bundler.newpart('replycaps', data=capsblob)
    replyhandlers = []
    for partgenname in b2partsgenorder:
        partgen = b2partsgenmapping[partgenname]
        ret = partgen(pushop, bundler)
        if callable(ret):
            replyhandlers.append(ret)
    # do not push if nothing to push
    if bundler.nbparts <= 1:
        return
    stream = util.chunkbuffer(bundler.getchunks())
    try:
        try:
            reply = pushop.remote.unbundle(
                stream, ['force'], pushop.remote.url())
        except error.BundleValueError as exc:
            raise error.Abort(_('missing support for %s') % exc)
        try:
            trgetter = None
            if pushback:
                trgetter = pushop.trmanager.transaction
            op = bundle2.processbundle(pushop.repo, reply, trgetter)
        except error.BundleValueError as exc:
            raise error.Abort(_('missing support for %s') % exc)
        except bundle2.AbortFromPart as exc:
            pushop.ui.status(_('remote: %s\n') % exc)
            if exc.hint is not None:
                pushop.ui.status(_('remote: %s\n') % ('(%s)' % exc.hint))
            raise error.Abort(_('push failed on remote'))
    except error.PushkeyFailed as exc:
        partid = int(exc.partid)
        if partid not in pushop.pkfailcb:
            raise
        pushop.pkfailcb[partid](pushop, exc)
    for rephand in replyhandlers:
        rephand(op)

def _pushchangeset(pushop):
    """Make the actual push of changeset bundle to remote repo"""
    if 'changesets' in pushop.stepsdone:
        return
    pushop.stepsdone.add('changesets')
    if not _pushcheckoutgoing(pushop):
        return
    pushop.repo.prepushoutgoinghooks(pushop)
    outgoing = pushop.outgoing
    unbundle = pushop.remote.capable('unbundle')
    # TODO: get bundlecaps from remote
    bundlecaps = None
    # create a changegroup from local
    if pushop.revs is None and not (outgoing.excluded
                                    or pushop.repo.changelog.filteredrevs):
        # push everything,
        # use the fast path, no race possible on push
        bundler = changegroup.cg1packer(pushop.repo, bundlecaps)
        cg = changegroup.getsubset(pushop.repo,
                                   outgoing,
                                   bundler,
                                   'push',
                                   fastpath=True)
    else:
        cg = changegroup.getchangegroup(pushop.repo, 'push', outgoing,
                                        bundlecaps=bundlecaps)

    # apply changegroup to remote
    if unbundle:
        # local repo finds heads on server, finds out what
        # revs it must push. once revs transferred, if server
        # finds it has different heads (someone else won
        # commit/push race), server aborts.
        if pushop.force:
            remoteheads = ['force']
        else:
            remoteheads = pushop.remoteheads
        # ssh: return remote's addchangegroup()
        # http: return remote's addchangegroup() or 0 for error
        pushop.cgresult = pushop.remote.unbundle(cg, remoteheads,
                                                 pushop.repo.url())
    else:
        # we return an integer indicating remote head count
        # change
        pushop.cgresult = pushop.remote.addchangegroup(cg, 'push',
                                                       pushop.repo.url())

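# Editorial sketch (not part of the original module): the transport dispatch
# in _pushchangeset() above boils down to the helper below; the name
# `_applycgsketch` is hypothetical and used only for illustration.

def _applycgsketch(pushop, cg):
    remote = pushop.remote
    if remote.capable('unbundle'):
        # race-checked path: the server aborts if its heads changed while
        # the changegroup was in flight
        heads = ['force'] if pushop.force else pushop.remoteheads
        return remote.unbundle(cg, heads, pushop.repo.url())
    # legacy path: returns the remote head count change, no race detection
    return remote.addchangegroup(cg, 'push', pushop.repo.url())
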
def _pushsyncphase(pushop):
    """synchronise phase information locally and remotely"""
    cheads = pushop.commonheads
    # even when we don't push, exchanging phase data is useful
    remotephases = pushop.remote.listkeys('phases')
    if (pushop.ui.configbool('ui', '_usedassubrepo', False)
        and remotephases    # server supports phases
        and pushop.cgresult is None # nothing was pushed
        and remotephases.get('publishing', False)):
        # When:
        # - this is a subrepo push
        # - and remote supports phases
        # - and no changeset was pushed
        # - and remote is publishing
        # We may be in issue 3871 case!
        # We drop the possible phase synchronisation done by
        # courtesy to publish changesets possibly locally draft
        # on the remote.
        remotephases = {'publishing': 'True'}
    if not remotephases: # old server or public only reply from non-publishing
        _localphasemove(pushop, cheads)
        # don't push any phase data as there is nothing to push
    else:
        ana = phases.analyzeremotephases(pushop.repo, cheads,
                                         remotephases)
        pheads, droots = ana
        ### Apply remote phase on local
        if remotephases.get('publishing', False):
            _localphasemove(pushop, cheads)
        else: # publish = False
            _localphasemove(pushop, pheads)
            _localphasemove(pushop, cheads, phases.draft)
        ### Apply local phase on remote

        if pushop.cgresult:
            if 'phases' in pushop.stepsdone:
                # phases already pushed through bundle2
                return
            outdated = pushop.outdatedphases
        else:
            outdated = pushop.fallbackoutdatedphases

        pushop.stepsdone.add('phases')

        # filter heads already turned public by the push
        outdated = [c for c in outdated if c.node() not in pheads]
        # fallback to independent pushkey command
        for newremotehead in outdated:
            r = pushop.remote.pushkey('phases',
                                      newremotehead.hex(),
                                      str(phases.draft),
                                      str(phases.public))
            if not r:
                pushop.ui.warn(_('updating %s to public failed!\n')
                               % newremotehead)

def _localphasemove(pushop, nodes, phase=phases.public):
    """move <nodes> to <phase> in the local source repo"""
    if pushop.trmanager:
        phases.advanceboundary(pushop.repo,
                               pushop.trmanager.transaction(),
                               phase,
                               nodes)
    else:
        # repo is not locked, do not change any phases!
        # Informs the user that phases should have been moved when
        # applicable.
        actualmoves = [n for n in nodes if phase < pushop.repo[n].phase()]
        phasestr = phases.phasenames[phase]
        if actualmoves:
            pushop.ui.status(_('cannot lock source repo, skipping '
                               'local %s phase update\n') % phasestr)

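# Editorial sketch (assumption, not in the original module): callers that do
# hold a transaction move phases exactly as _localphasemove() does, e.g.:
#
#   tr = pushop.trmanager.transaction()
#   phases.advanceboundary(pushop.repo, tr, phases.public, nodes)
#
# advanceboundary() only ever moves changesets to a more public phase, which
# is why the unlocked branch above can merely warn instead of mutating state.
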
def _pushobsolete(pushop):
    """utility function to push obsolete markers to a remote"""
    if 'obsmarkers' in pushop.stepsdone:
        return
    repo = pushop.repo
    remote = pushop.remote
    pushop.stepsdone.add('obsmarkers')
    if pushop.outobsmarkers:
        pushop.ui.debug('try to push obsolete markers to remote\n')
        rslts = []
        remotedata = obsolete._pushkeyescape(sorted(pushop.outobsmarkers))
        for key in sorted(remotedata, reverse=True):
            # reverse sort to ensure we end with dump0
            data = remotedata[key]
            rslts.append(remote.pushkey('obsolete', key, '', data))
        if [r for r in rslts if not r]:
            msg = _('failed to push some obsolete markers!\n')
            repo.ui.warn(msg)

def _pushbookmark(pushop):
    """Update bookmark position on remote"""
    if pushop.cgresult == 0 or 'bookmarks' in pushop.stepsdone:
        return
    pushop.stepsdone.add('bookmarks')
    ui = pushop.ui
    remote = pushop.remote

    for b, old, new in pushop.outbookmarks:
        action = 'update'
        if not old:
            action = 'export'
        elif not new:
            action = 'delete'
        if remote.pushkey('bookmarks', b, old, new):
            ui.status(bookmsgmap[action][0] % b)
        else:
            ui.warn(bookmsgmap[action][1] % b)
            # discovery can have set the value from an invalid entry
            if pushop.bkresult is not None:
                pushop.bkresult = 1

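# Editorial sketch: on the wire, each bookmark update above is a single
# pushkey call in the 'bookmarks' namespace. Given the action selection in
# _pushbookmark(), the three cases look roughly like this (hex values
# assumed):
#
#   remote.pushkey('bookmarks', 'feature', '', newhex)      # export (no old)
#   remote.pushkey('bookmarks', 'feature', oldhex, newhex)  # update
#   remote.pushkey('bookmarks', 'feature', oldhex, '')      # delete (no new)
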
class pulloperation(object):
    """An object that represents a single pull operation

    Its purpose is to carry pull related state and very common operations.

    A new one should be created at the beginning of each pull and discarded
    afterward.
    """

    def __init__(self, repo, remote, heads=None, force=False, bookmarks=(),
                 remotebookmarks=None, streamclonerequested=None):
        # repo we pull into
        self.repo = repo
        # repo we pull from
        self.remote = remote
        # revision we try to pull (None is "all")
        self.heads = heads
        # bookmarks pulled explicitly
        self.explicitbookmarks = [repo._bookmarks.expandname(bookmark)
                                  for bookmark in bookmarks]
        # do we force pull?
        self.force = force
        # whether a streaming clone was requested
        self.streamclonerequested = streamclonerequested
        # transaction manager
        self.trmanager = None
        # set of common changesets between local and remote before pull
        self.common = None
        # set of pulled heads
        self.rheads = None
        # list of missing changesets to fetch remotely
        self.fetch = None
        # remote bookmarks data
        self.remotebookmarks = remotebookmarks
        # result of changegroup pulling (used as return code by pull)
        self.cgresult = None
        # list of steps already done
        self.stepsdone = set()
        # Whether we attempted a clone from pre-generated bundles.
        self.clonebundleattempted = False

    @util.propertycache
    def pulledsubset(self):
        """heads of the set of changesets targeted by the pull"""
        # compute target subset
        if self.heads is None:
            # We pulled everything possible
            # sync on everything common
            c = set(self.common)
            ret = list(self.common)
            for n in self.rheads:
                if n not in c:
                    ret.append(n)
            return ret
        else:
            # We pulled a specific subset
            # sync on this subset
            return self.heads

    @util.propertycache
    def canusebundle2(self):
        return not _forcebundle1(self)

    @util.propertycache
    def remotebundle2caps(self):
        return bundle2.bundle2caps(self.remote)

    def gettransaction(self):
        # deprecated; talk to trmanager directly
        return self.trmanager.transaction()

class transactionmanager(object):
    """An object to manage the life cycle of a transaction

    It creates the transaction on demand and calls the appropriate hooks when
    closing the transaction."""
    def __init__(self, repo, source, url):
        self.repo = repo
        self.source = source
        self.url = url
        self._tr = None

    def transaction(self):
        """Return an open transaction object, constructing if necessary"""
        if not self._tr:
            trname = '%s\n%s' % (self.source, util.hidepassword(self.url))
            self._tr = self.repo.transaction(trname)
            self._tr.hookargs['source'] = self.source
            self._tr.hookargs['url'] = self.url
        return self._tr

    def close(self):
        """close transaction if created"""
        if self._tr is not None:
            self._tr.close()

    def release(self):
        """release transaction if created"""
        if self._tr is not None:
            self._tr.release()

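# Editorial sketch (mirrors what pull() below does): the expected life cycle
# of a transactionmanager is create -> transaction() on demand -> close() on
# success, with lockmod.release() covering the failure path:
#
#   trmanager = transactionmanager(repo, 'pull', remote.url())
#   try:
#       ...  # steps may call trmanager.transaction() lazily
#       trmanager.close()
#   finally:
#       lockmod.release(trmanager, lock, wlock)
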
def pull(repo, remote, heads=None, force=False, bookmarks=(), opargs=None,
         streamclonerequested=None):
    """Fetch repository data from a remote.

    This is the main function used to retrieve data from a remote repository.

    ``repo`` is the local repository to clone into.
    ``remote`` is a peer instance.
    ``heads`` is an iterable of revisions we want to pull. ``None`` (the
    default) means to pull everything from the remote.
    ``bookmarks`` is an iterable of bookmarks requesting to be pulled. By
    default, all remote bookmarks are pulled.
    ``opargs`` are additional keyword arguments to pass to ``pulloperation``
    initialization.
    ``streamclonerequested`` is a boolean indicating whether a "streaming
    clone" is requested. A "streaming clone" is essentially a raw file copy
    of revlogs from the server. This only works when the local repository is
    empty. The default value of ``None`` means to respect the server
    configuration for preferring stream clones.

    Returns the ``pulloperation`` created for this pull.
    """
    if opargs is None:
        opargs = {}
    pullop = pulloperation(repo, remote, heads, force, bookmarks=bookmarks,
                           streamclonerequested=streamclonerequested, **opargs)
    if pullop.remote.local():
        missing = set(pullop.remote.requirements) - pullop.repo.supported
        if missing:
            msg = _("required features are not"
                    " supported in the destination:"
                    " %s") % (', '.join(sorted(missing)))
            raise error.Abort(msg)

    wlock = lock = None
    try:
        wlock = pullop.repo.wlock()
        lock = pullop.repo.lock()
        pullop.trmanager = transactionmanager(repo, 'pull', remote.url())
        streamclone.maybeperformlegacystreamclone(pullop)
        # This should ideally be in _pullbundle2(). However, it needs to run
        # before discovery to avoid extra work.
        _maybeapplyclonebundle(pullop)
        _pulldiscovery(pullop)
        if pullop.canusebundle2:
            _pullbundle2(pullop)
        _pullchangeset(pullop)
        _pullphase(pullop)
        _pullbookmarks(pullop)
        _pullobsolete(pullop)
        pullop.trmanager.close()
    finally:
        lockmod.release(pullop.trmanager, lock, wlock)

    return pullop

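# Editorial sketch: a minimal caller of pull(), assuming `other` is a peer
# instance (e.g. from hg.peer(), not imported here) and `repo` a local
# repository:
#
#   pullop = pull(repo, other)
#   if pullop.cgresult == 0:
#       repo.ui.status('no changesets were pulled\n')
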
# list of steps to perform discovery before pull
pulldiscoveryorder = []

# Mapping between step name and function
#
# This exists to help extensions wrap steps if necessary
pulldiscoverymapping = {}

def pulldiscovery(stepname):
    """decorator for function performing discovery before pull

    The function is added to the step -> function mapping and appended to the
    list of steps. Beware that decorated functions will be added in order
    (this may matter).

    You can only use this decorator for a new step; if you want to wrap a step
    from an extension, change the pulldiscoverymapping dictionary directly."""
    def dec(func):
        assert stepname not in pulldiscoverymapping
        pulldiscoverymapping[stepname] = func
        pulldiscoveryorder.append(stepname)
        return func
    return dec

def _pulldiscovery(pullop):
    """Run all discovery steps"""
    for stepname in pulldiscoveryorder:
        step = pulldiscoverymapping[stepname]
        step(pullop)

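# Editorial sketch (hypothetical step, not part of the original module): an
# extension would typically add its own discovery step like this. The step
# name 'mydata' and the body are illustrative only; note that registration
# order matters, as the docstring above warns.

@pulldiscovery('mydata')
def _pulldiscoverymydata(pullop):
    """no-op discovery step used only to illustrate registration"""
    pullop.repo.ui.debug('mydata: nothing to discover\n')
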
@pulldiscovery('b1:bookmarks')
def _pullbookmarkbundle1(pullop):
    """fetch bookmark data in bundle1 case

    If not using bundle2, we have to fetch bookmarks before changeset
    discovery to reduce the chance and impact of race conditions."""
    if pullop.remotebookmarks is not None:
        return
    if pullop.canusebundle2 and 'listkeys' in pullop.remotebundle2caps:
        # all known bundle2 servers now support listkeys, but let's be nice
        # with new implementations.
        return
    pullop.remotebookmarks = pullop.remote.listkeys('bookmarks')


@pulldiscovery('changegroup')
def _pulldiscoverychangegroup(pullop):
    """discovery phase for the pull

    Currently handles changeset discovery only; will change to handle all
    discovery at some point."""
    tmp = discovery.findcommonincoming(pullop.repo,
                                       pullop.remote,
                                       heads=pullop.heads,
                                       force=pullop.force)
    common, fetch, rheads = tmp
    nm = pullop.repo.unfiltered().changelog.nodemap
    if fetch and rheads:
        # If a remote head is filtered locally, let's drop it from the unknown
        # remote heads and put it back in common.
        #
        # This is a hackish solution to catch most of the "common but locally
        # hidden" situations. We do not perform discovery on the unfiltered
        # repository because it ends up doing a pathological amount of round
        # trips for a huge amount of changesets we do not care about.
        #
        # If a set of such "common but filtered" changesets exists on the
        # server but does not include a remote head, we'll not be able to
        # detect it.
        scommon = set(common)
        filteredrheads = []
        for n in rheads:
            if n in nm:
                if n not in scommon:
                    common.append(n)
            else:
                filteredrheads.append(n)
        if not filteredrheads:
            fetch = []
        rheads = filteredrheads
    pullop.common = common
    pullop.fetch = fetch
    pullop.rheads = rheads

def _pullbundle2(pullop):
    """pull data using bundle2

    For now, the only supported data are changegroups."""
    kwargs = {'bundlecaps': caps20to10(pullop.repo)}

    # At the moment we don't do stream clones over bundle2. If that is
    # implemented then here's where the check for that will go.
    streaming = False

    # pulling changegroup
    pullop.stepsdone.add('changegroup')

    kwargs['common'] = pullop.common
    kwargs['heads'] = pullop.heads or pullop.rheads
    kwargs['cg'] = pullop.fetch
    if 'listkeys' in pullop.remotebundle2caps:
        kwargs['listkeys'] = ['phases']
        if pullop.remotebookmarks is None:
            # make sure to always include bookmark data when migrating
            # `hg incoming --bundle` to using this function.
            kwargs['listkeys'].append('bookmarks')

    # If this is a full pull / clone and the server supports the clone bundles
    # feature, tell the server whether we attempted a clone bundle. The
    # presence of this flag indicates the client supports clone bundles. This
    # will enable the server to treat clients that support clone bundles
    # differently from those that don't.
    if (pullop.remote.capable('clonebundles')
        and pullop.heads is None and list(pullop.common) == [nullid]):
        kwargs['cbattempted'] = pullop.clonebundleattempted

    if streaming:
        pullop.repo.ui.status(_('streaming all changes\n'))
    elif not pullop.fetch:
        pullop.repo.ui.status(_("no changes found\n"))
        pullop.cgresult = 0
    else:
        if pullop.heads is None and list(pullop.common) == [nullid]:
            pullop.repo.ui.status(_("requesting all changes\n"))
    if obsolete.isenabled(pullop.repo, obsolete.exchangeopt):
        remoteversions = bundle2.obsmarkersversion(pullop.remotebundle2caps)
        if obsolete.commonversion(remoteversions) is not None:
            kwargs['obsmarkers'] = True
            pullop.stepsdone.add('obsmarkers')
    _pullbundle2extraprepare(pullop, kwargs)
    bundle = pullop.remote.getbundle('pull', **kwargs)
    try:
        op = bundle2.processbundle(pullop.repo, bundle, pullop.gettransaction)
    except bundle2.AbortFromPart as exc:
        pullop.repo.ui.status(_('remote: abort: %s\n') % exc)
        raise error.Abort(_('pull failed on remote'), hint=exc.hint)
    except error.BundleValueError as exc:
        raise error.Abort(_('missing support for %s') % exc)

    if pullop.fetch:
        results = [cg['return'] for cg in op.records['changegroup']]
        pullop.cgresult = changegroup.combineresults(results)

    # processing phases change
    for namespace, value in op.records['listkeys']:
        if namespace == 'phases':
            _pullapplyphases(pullop, value)

    # processing bookmark update
    for namespace, value in op.records['listkeys']:
        if namespace == 'bookmarks':
            pullop.remotebookmarks = value

    # bookmark data were either already there or pulled in the bundle
    if pullop.remotebookmarks is not None:
        _pullbookmarks(pullop)

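# Editorial sketch: for a full clone from a bundle2 server advertising
# 'listkeys', the getbundle() call issued above is roughly equivalent to the
# following (argument values assumed):
#
#   pullop.remote.getbundle('pull',
#                           bundlecaps=caps20to10(pullop.repo),
#                           common=[nullid],
#                           heads=pullop.rheads,
#                           cg=True,
#                           listkeys=['phases', 'bookmarks'])
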
def _pullbundle2extraprepare(pullop, kwargs):
    """hook function so that extensions can extend the getbundle call"""
    pass

def _pullchangeset(pullop):
    """pull changeset from unbundle into the local repo"""
    # We delay opening the transaction as late as possible so we don't open
    # a transaction for nothing, or break a future useful rollback call
    if 'changegroup' in pullop.stepsdone:
        return
    pullop.stepsdone.add('changegroup')
    if not pullop.fetch:
        pullop.repo.ui.status(_("no changes found\n"))
        pullop.cgresult = 0
        return
    pullop.gettransaction()
    if pullop.heads is None and list(pullop.common) == [nullid]:
        pullop.repo.ui.status(_("requesting all changes\n"))
    elif pullop.heads is None and pullop.remote.capable('changegroupsubset'):
        # issue1320, avoid a race if remote changed after discovery
        pullop.heads = pullop.rheads

    if pullop.remote.capable('getbundle'):
        # TODO: get bundlecaps from remote
        cg = pullop.remote.getbundle('pull', common=pullop.common,
                                     heads=pullop.heads or pullop.rheads)
    elif pullop.heads is None:
        cg = pullop.remote.changegroup(pullop.fetch, 'pull')
    elif not pullop.remote.capable('changegroupsubset'):
        raise error.Abort(_("partial pull cannot be done because "
                            "other repository doesn't support "
                            "changegroupsubset."))
    else:
        cg = pullop.remote.changegroupsubset(pullop.fetch, pullop.heads, 'pull')
    pullop.cgresult = cg.apply(pullop.repo, 'pull', pullop.remote.url())

def _pullphase(pullop):
    # Get remote phases data from remote
    if 'phases' in pullop.stepsdone:
        return
    remotephases = pullop.remote.listkeys('phases')
    _pullapplyphases(pullop, remotephases)

def _pullapplyphases(pullop, remotephases):
    """apply phase movement from observed remote state"""
    if 'phases' in pullop.stepsdone:
        return
    pullop.stepsdone.add('phases')
    publishing = bool(remotephases.get('publishing', False))
    if remotephases and not publishing:
        # remote is new and non-publishing
        pheads, _dr = phases.analyzeremotephases(pullop.repo,
                                                 pullop.pulledsubset,
                                                 remotephases)
        dheads = pullop.pulledsubset
    else:
        # Remote is old or publishing all common changesets
        # should be seen as public
        pheads = pullop.pulledsubset
        dheads = []
    unfi = pullop.repo.unfiltered()
    phase = unfi._phasecache.phase
    rev = unfi.changelog.nodemap.get
    public = phases.public
    draft = phases.draft

    # exclude changesets already public locally and update the others
    pheads = [pn for pn in pheads if phase(unfi, rev(pn)) > public]
    if pheads:
        tr = pullop.gettransaction()
        phases.advanceboundary(pullop.repo, tr, public, pheads)

    # exclude changesets already draft locally and update the others
    dheads = [pn for pn in dheads if phase(unfi, rev(pn)) > draft]
    if dheads:
        tr = pullop.gettransaction()
        phases.advanceboundary(pullop.repo, tr, draft, dheads)

def _pullbookmarks(pullop):
    """process the remote bookmark information to update the local one"""
    if 'bookmarks' in pullop.stepsdone:
        return
    pullop.stepsdone.add('bookmarks')
    repo = pullop.repo
    remotebookmarks = pullop.remotebookmarks
    remotebookmarks = bookmod.unhexlifybookmarks(remotebookmarks)
    bookmod.updatefromremote(repo.ui, repo, remotebookmarks,
                             pullop.remote.url(),
                             pullop.gettransaction,
                             explicit=pullop.explicitbookmarks)

def _pullobsolete(pullop):
    """utility function to pull obsolete markers from a remote

    `gettransaction` is a function that returns the pull transaction, creating
    one if necessary. We return the transaction to inform the calling code
    that a new transaction has been created (when applicable).

    Exists mostly to allow overriding for experimentation purposes"""
    if 'obsmarkers' in pullop.stepsdone:
        return
    pullop.stepsdone.add('obsmarkers')
    tr = None
    if obsolete.isenabled(pullop.repo, obsolete.exchangeopt):
        pullop.repo.ui.debug('fetching remote obsolete markers\n')
        remoteobs = pullop.remote.listkeys('obsolete')
        if 'dump0' in remoteobs:
            tr = pullop.gettransaction()
            markers = []
            for key in sorted(remoteobs, reverse=True):
                if key.startswith('dump'):
                    data = util.b85decode(remoteobs[key])
                    version, newmarks = obsolete._readmarkers(data)
                    markers += newmarks
            if markers:
                pullop.repo.obsstore.add(tr, markers)
            pullop.repo.invalidatevolatilesets()
    return tr

def caps20to10(repo):
    """return a set with appropriate options to use bundle20 during getbundle"""
    caps = {'HG20'}
    capsblob = bundle2.encodecaps(bundle2.getrepocaps(repo))
    caps.add('bundle2=' + urlreq.quote(capsblob))
    return caps

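# Editorial sketch: the set returned by caps20to10() has roughly the shape
# below; the exact urlquoted blob depends on getrepocaps() and is assumed
# here for illustration only:
#
#   {'HG20', 'bundle2=HG20%0Achangegroup%3D01%2C02'}
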
# List of names of steps to perform for a bundle2 for getbundle, order matters.
getbundle2partsorder = []

# Mapping between step name and function
#
# This exists to help extensions wrap steps if necessary
getbundle2partsmapping = {}

def getbundle2partsgenerator(stepname, idx=None):
    """decorator for function generating bundle2 part for getbundle

    The function is added to the step -> function mapping and appended to the
    list of steps. Beware that decorated functions will be added in order
    (this may matter).

    You can only use this decorator for new steps; if you want to wrap a step
    from an extension, change the getbundle2partsmapping dictionary directly."""
    def dec(func):
        assert stepname not in getbundle2partsmapping
        getbundle2partsmapping[stepname] = func
        if idx is None:
            getbundle2partsorder.append(stepname)
        else:
            getbundle2partsorder.insert(idx, stepname)
        return func
    return dec

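# Editorial sketch (hypothetical part, not in the original module): extensions
# register extra part generators the same way the built-in parts below do;
# passing idx=0 would make the part come first in getbundle2partsorder. The
# generator is a no-op unless its kwarg is explicitly requested.

@getbundle2partsgenerator('myextradata')
def _getbundlemyextradatapart(bundler, repo, source, bundlecaps=None,
                              b2caps=None, **kwargs):
    """illustrative generator emitting an advisory part on request"""
    if not kwargs.get('myextradata', False):
        return
    bundler.newpart('output', data='hello from myextradata\n',
                    mandatory=False)
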
def bundle2requested(bundlecaps):
    if bundlecaps is not None:
        return any(cap.startswith('HG2') for cap in bundlecaps)
    return False

def getbundlechunks(repo, source, heads=None, common=None, bundlecaps=None,
                    **kwargs):
    """Return chunks constituting a bundle's raw data.

    Could be a bundle HG10 or a bundle HG20 depending on bundlecaps
    passed.

    Returns an iterator over raw chunks (of varying sizes).
    """
    usebundle2 = bundle2requested(bundlecaps)
    # bundle10 case
    if not usebundle2:
        if bundlecaps and not kwargs.get('cg', True):
            raise ValueError(_('request for bundle10 must include changegroup'))

        if kwargs:
            raise ValueError(_('unsupported getbundle arguments: %s')
                             % ', '.join(sorted(kwargs.keys())))
        outgoing = _computeoutgoing(repo, heads, common)
        bundler = changegroup.getbundler('01', repo, bundlecaps)
        return changegroup.getsubsetraw(repo, outgoing, bundler, source)

    # bundle20 case
    b2caps = {}
    for bcaps in bundlecaps:
        if bcaps.startswith('bundle2='):
            blob = urlreq.unquote(bcaps[len('bundle2='):])
            b2caps.update(bundle2.decodecaps(blob))
    bundler = bundle2.bundle20(repo.ui, b2caps)

    kwargs['heads'] = heads
    kwargs['common'] = common

    for name in getbundle2partsorder:
        func = getbundle2partsmapping[name]
        func(bundler, repo, source, bundlecaps=bundlecaps, b2caps=b2caps,
             **kwargs)

    return bundler.getchunks()

@getbundle2partsgenerator('changegroup')
def _getbundlechangegrouppart(bundler, repo, source, bundlecaps=None,
                              b2caps=None, heads=None, common=None, **kwargs):
    """add a changegroup part to the requested bundle"""
    cg = None
    if kwargs.get('cg', True):
        # build changegroup bundle here.
        version = '01'
        cgversions = b2caps.get('changegroup')
        if cgversions:  # 3.1 and 3.2 ship with an empty value
            cgversions = [v for v in cgversions
                          if v in changegroup.supportedoutgoingversions(repo)]
            if not cgversions:
                raise ValueError(_('no common changegroup version'))
            version = max(cgversions)
        outgoing = _computeoutgoing(repo, heads, common)
        cg = changegroup.getlocalchangegroupraw(repo, source, outgoing,
                                                bundlecaps=bundlecaps,
                                                version=version)

    if cg:
        part = bundler.newpart('changegroup', data=cg)
        if cgversions:
            part.addparam('version', version)
        part.addparam('nbchanges', str(len(outgoing.missing)), mandatory=False)
        if 'treemanifest' in repo.requirements:
            part.addparam('treemanifest', '1')

@getbundle2partsgenerator('listkeys')
def _getbundlelistkeysparts(bundler, repo, source, bundlecaps=None,
                            b2caps=None, **kwargs):
    """add parts containing listkeys namespaces to the requested bundle"""
    listkeys = kwargs.get('listkeys', ())
    for namespace in listkeys:
        part = bundler.newpart('listkeys')
        part.addparam('namespace', namespace)
        keys = repo.listkeys(namespace).items()
        part.data = pushkey.encodekeys(keys)

@getbundle2partsgenerator('obsmarkers')
def _getbundleobsmarkerpart(bundler, repo, source, bundlecaps=None,
                            b2caps=None, heads=None, **kwargs):
    """add an obsolescence markers part to the requested bundle"""
    if kwargs.get('obsmarkers', False):
        if heads is None:
            heads = repo.heads()
        subset = [c.node() for c in repo.set('::%ln', heads)]
        markers = repo.obsstore.relevantmarkers(subset)
        markers = sorted(markers)
        bundle2.buildobsmarkerspart(bundler, markers)

@getbundle2partsgenerator('hgtagsfnodes')
def _getbundletagsfnodes(bundler, repo, source, bundlecaps=None,
                         b2caps=None, heads=None, common=None,
                         **kwargs):
    """Transfer the .hgtags filenodes mapping.

    Only values for heads in this bundle will be transferred.

    The part data consists of pairs of 20 byte changeset node and .hgtags
    filenodes raw values.
    """
    # Don't send unless:
    # - changesets are being exchanged,
    # - the client supports it.
    if not (kwargs.get('cg', True) and 'hgtagsfnodes' in b2caps):
        return

    outgoing = _computeoutgoing(repo, heads, common)
    bundle2.addparttagsfnodescache(repo, bundler, outgoing)

def _getbookmarks(repo, **kwargs):
    """Returns bookmark to node mapping.

    This function is primarily used to generate `bookmarks` bundle2 part.
    It is a separate function in order to make it easy to wrap it
    in extensions. Passing `kwargs` to the function makes it easy to
    add new parameters in extensions.
    """

    return dict(bookmod.listbinbookmarks(repo))

def check_heads(repo, their_heads, context):
    """check if the heads of a repo have been modified

    Used by peer for unbundling.
    """
    heads = repo.heads()
    heads_hash = hashlib.sha1(''.join(sorted(heads))).digest()
    if not (their_heads == ['force'] or their_heads == heads or
            their_heads == ['hashed', heads_hash]):
        # someone else committed/pushed/unbundled while we
        # were transferring data
        raise error.PushRaced('repository changed while %s - '
                              'please try again' % context)

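# Editorial sketch: the ['hashed', ...] form accepted above is what a client
# would send instead of the full head list; it is computed the same way as
# heads_hash in check_heads() (the actual sending side lives elsewhere, in
# the wire protocol code):
#
#   expected = ['hashed',
#               hashlib.sha1(''.join(sorted(remote_heads))).digest()]
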
1683 def unbundle(repo, cg, heads, source, url):
1711 def unbundle(repo, cg, heads, source, url):
1684 """Apply a bundle to a repo.
1712 """Apply a bundle to a repo.
1685
1713
1686 this function makes sure the repo is locked during the application and have
1714 this function makes sure the repo is locked during the application and have
1687 mechanism to check that no push race occurred between the creation of the
1715 mechanism to check that no push race occurred between the creation of the
1688 bundle and its application.
1716 bundle and its application.
1689
1717
1690 If the push was raced as PushRaced exception is raised."""
1718 If the push was raced as PushRaced exception is raised."""
1691 r = 0
1719 r = 0
1692 # need a transaction when processing a bundle2 stream
1720 # need a transaction when processing a bundle2 stream
1693 # [wlock, lock, tr] - needs to be an array so nested functions can modify it
1721 # [wlock, lock, tr] - needs to be an array so nested functions can modify it
1694 lockandtr = [None, None, None]
1722 lockandtr = [None, None, None]
1695 recordout = None
1723 recordout = None
1696 # quick fix for output mismatch with bundle2 in 3.4
1724 # quick fix for output mismatch with bundle2 in 3.4
1697 captureoutput = repo.ui.configbool('experimental', 'bundle2-output-capture',
1725 captureoutput = repo.ui.configbool('experimental', 'bundle2-output-capture',
1698 False)
1726 False)
1699 if url.startswith('remote:http:') or url.startswith('remote:https:'):
1727 if url.startswith('remote:http:') or url.startswith('remote:https:'):
1700 captureoutput = True
1728 captureoutput = True
1701 try:
1729 try:
1702 # note: outside bundle1, 'heads' is expected to be empty and this
1730 # note: outside bundle1, 'heads' is expected to be empty and this
1703 # 'check_heads' call wil be a no-op
1731 # 'check_heads' call wil be a no-op
1704 check_heads(repo, heads, 'uploading changes')
1732 check_heads(repo, heads, 'uploading changes')
1705 # push can proceed
1733 # push can proceed
1706 if not util.safehasattr(cg, 'params'):
1734 if not util.safehasattr(cg, 'params'):
1707 # legacy case: bundle1 (changegroup 01)
1735 # legacy case: bundle1 (changegroup 01)
1708 lockandtr[1] = repo.lock()
1736 lockandtr[1] = repo.lock()
1709 r = cg.apply(repo, source, url)
1737 r = cg.apply(repo, source, url)
1710 else:
1738 else:
1711 r = None
1739 r = None
1712 try:
1740 try:
1713 def gettransaction():
1741 def gettransaction():
1714 if not lockandtr[2]:
1742 if not lockandtr[2]:
1715 lockandtr[0] = repo.wlock()
1743 lockandtr[0] = repo.wlock()
1716 lockandtr[1] = repo.lock()
1744 lockandtr[1] = repo.lock()
1717 lockandtr[2] = repo.transaction(source)
1745 lockandtr[2] = repo.transaction(source)
1718 lockandtr[2].hookargs['source'] = source
1746 lockandtr[2].hookargs['source'] = source
1719 lockandtr[2].hookargs['url'] = url
1747 lockandtr[2].hookargs['url'] = url
1720 lockandtr[2].hookargs['bundle2'] = '1'
1748 lockandtr[2].hookargs['bundle2'] = '1'
1721 return lockandtr[2]
1749 return lockandtr[2]
1722
1750
1723 # Do greedy locking by default until we're satisfied with lazy
1751 # Do greedy locking by default until we're satisfied with lazy
1724 # locking.
1752 # locking.
1725 if not repo.ui.configbool('experimental', 'bundle2lazylocking'):
1753 if not repo.ui.configbool('experimental', 'bundle2lazylocking'):
1726 gettransaction()
1754 gettransaction()
1727
1755
1728 op = bundle2.bundleoperation(repo, gettransaction,
1756 op = bundle2.bundleoperation(repo, gettransaction,
1729 captureoutput=captureoutput)
1757 captureoutput=captureoutput)
1730 try:
1758 try:
1731 op = bundle2.processbundle(repo, cg, op=op)
1759 op = bundle2.processbundle(repo, cg, op=op)
1732 finally:
1760 finally:
1733 r = op.reply
1761 r = op.reply
1734 if captureoutput and r is not None:
1762 if captureoutput and r is not None:
1735 repo.ui.pushbuffer(error=True, subproc=True)
1763 repo.ui.pushbuffer(error=True, subproc=True)
1736 def recordout(output):
1764 def recordout(output):
1737 r.newpart('output', data=output, mandatory=False)
1765 r.newpart('output', data=output, mandatory=False)
1738 if lockandtr[2] is not None:
1766 if lockandtr[2] is not None:
1739 lockandtr[2].close()
1767 lockandtr[2].close()
1740 except BaseException as exc:
1768 except BaseException as exc:
1741 exc.duringunbundle2 = True
1769 exc.duringunbundle2 = True
1742 if captureoutput and r is not None:
1770 if captureoutput and r is not None:
1743 parts = exc._bundle2salvagedoutput = r.salvageoutput()
1771 parts = exc._bundle2salvagedoutput = r.salvageoutput()
1744 def recordout(output):
1772 def recordout(output):
1745 part = bundle2.bundlepart('output', data=output,
1773 part = bundle2.bundlepart('output', data=output,
1746 mandatory=False)
1774 mandatory=False)
1747 parts.append(part)
1775 parts.append(part)
1748 raise
1776 raise
1749 finally:
1777 finally:
1750 lockmod.release(lockandtr[2], lockandtr[1], lockandtr[0])
1778 lockmod.release(lockandtr[2], lockandtr[1], lockandtr[0])
1751 if recordout is not None:
1779 if recordout is not None:
1752 recordout(repo.ui.popbuffer())
1780 recordout(repo.ui.popbuffer())
1753 return r
1781 return r
1754
1782
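# Illustration only, not part of this module: a minimal sketch of how a
# caller might surface a push race to the user. The wrapper name is
# hypothetical, but error.PushRaced is the real exception raised above.
#
#     def unbundlewithretryhint(repo, cg, heads, source, url):
#         try:
#             return unbundle(repo, cg, heads, source, url)
#         except error.PushRaced as exc:
#             # the client is expected to refresh its view of the remote
#             # repository and try the push again
#             repo.ui.warn('push raced: %s\n' % exc)
#             raise
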
def _maybeapplyclonebundle(pullop):
    """Apply a clone bundle from a remote, if possible."""

    repo = pullop.repo
    remote = pullop.remote

    if not repo.ui.configbool('ui', 'clonebundles', True):
        return

    # Only run if local repo is empty.
    if len(repo):
        return

    if pullop.heads:
        return

    if not remote.capable('clonebundles'):
        return

    res = remote._call('clonebundles')

    # If we call the wire protocol command, that's good enough to record the
    # attempt.
    pullop.clonebundleattempted = True

    entries = parseclonebundlesmanifest(repo, res)
    if not entries:
        repo.ui.note(_('no clone bundles available on remote; '
                       'falling back to regular clone\n'))
        return

    entries = filterclonebundleentries(repo, entries)
    if not entries:
        # There is a thundering herd concern here. However, if a server
        # operator doesn't advertise bundles appropriate for its clients,
        # they deserve what's coming. Furthermore, from a client's
        # perspective, no automatic fallback would mean not being able to
        # clone!
        repo.ui.warn(_('no compatible clone bundles available on server; '
                       'falling back to regular clone\n'))
        repo.ui.warn(_('(you may want to report this to the server '
                       'operator)\n'))
        return

    entries = sortclonebundleentries(repo.ui, entries)

    url = entries[0]['URL']
    repo.ui.status(_('applying clone bundle from %s\n') % url)
    if trypullbundlefromurl(repo.ui, repo, url):
        repo.ui.status(_('finished applying clone bundle\n'))
    # Bundle failed.
    #
    # We abort by default to avoid the thundering herd of
    # clients flooding a server that was expecting expensive
    # clone load to be offloaded.
    elif repo.ui.configbool('ui', 'clonebundlefallback', False):
        repo.ui.warn(_('falling back to normal clone\n'))
    else:
        raise error.Abort(_('error applying bundle'),
                          hint=_('if this error persists, consider contacting '
                                 'the server operator or disable clone '
                                 'bundles via '
                                 '"--config ui.clonebundles=false"'))

def parseclonebundlesmanifest(repo, s):
    """Parses the raw text of a clone bundles manifest.

    Returns a list of dicts. The dicts have a ``URL`` key corresponding
    to the URL and other keys are the attributes for the entry.
    """
    m = []
    for line in s.splitlines():
        fields = line.split()
        if not fields:
            continue
        attrs = {'URL': fields[0]}
        for rawattr in fields[1:]:
            key, value = rawattr.split('=', 1)
            key = urlreq.unquote(key)
            value = urlreq.unquote(value)
            attrs[key] = value

            # Parse BUNDLESPEC into components. This makes client-side
            # preferences easier to specify since you can prefer a single
            # component of the BUNDLESPEC.
            if key == 'BUNDLESPEC':
                try:
                    comp, version, params = parsebundlespec(repo, value,
                                                            externalnames=True)
                    attrs['COMPRESSION'] = comp
                    attrs['VERSION'] = version
                except error.InvalidBundleSpecification:
                    pass
                except error.UnsupportedBundleSpecification:
                    pass

        m.append(attrs)

    return m

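# Illustration only, not part of this module: an example of the manifest
# text this parser accepts (the URLs are hypothetical; any key=value
# attribute is allowed):
#
#     https://hg.example.com/full.hg BUNDLESPEC=gzip-v2
#     https://hg.example.com/old.hg BUNDLESPEC=bzip2-v1 REQUIRESNI=true
#
# The first line would parse to a dict roughly like:
#     {'URL': 'https://hg.example.com/full.hg', 'BUNDLESPEC': 'gzip-v2',
#      'COMPRESSION': 'gzip', 'VERSION': 'v2'}
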
def filterclonebundleentries(repo, entries):
    """Remove incompatible clone bundle manifest entries.

    Accepts a list of entries parsed with ``parseclonebundlesmanifest``
    and returns a new list consisting of only the entries that this client
    should be able to apply.

    There is no guarantee we'll be able to apply all returned entries because
    the metadata we use to filter on may be missing or wrong.
    """
    newentries = []
    for entry in entries:
        spec = entry.get('BUNDLESPEC')
        if spec:
            try:
                parsebundlespec(repo, spec, strict=True)
            except error.InvalidBundleSpecification as e:
                repo.ui.debug(str(e) + '\n')
                continue
            except error.UnsupportedBundleSpecification as e:
                repo.ui.debug('filtering %s because unsupported bundle '
                              'spec: %s\n' % (entry['URL'], str(e)))
                continue

        if 'REQUIRESNI' in entry and not sslutil.hassni:
            repo.ui.debug('filtering %s because SNI not supported\n' %
                          entry['URL'])
            continue

        newentries.append(entry)

    return newentries

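# Illustration only, not part of this module: with hypothetical entries
#     [{'URL': 'https://hg.example.com/a.hg', 'BUNDLESPEC': 'gzip-v2'},
#      {'URL': 'https://hg.example.com/b.hg', 'REQUIRESNI': 'true'}]
# a client whose Python lacks SNI support would keep only the first entry.
# Entries carrying no recognizable metadata are kept, since missing
# metadata is not proof of incompatibility.
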
class clonebundleentry(object):
    """Represents an item in a clone bundles manifest.

    This rich class is needed to support sorting since sorted() in Python 3
    doesn't support ``cmp`` and our comparison is complex enough that ``key=``
    won't work.
    """

    def __init__(self, value, prefers):
        self.value = value
        self.prefers = prefers

    def _cmp(self, other):
        for prefkey, prefvalue in self.prefers:
            avalue = self.value.get(prefkey)
            bvalue = other.value.get(prefkey)

            # Special case for b missing attribute and a matches exactly.
            if avalue is not None and bvalue is None and avalue == prefvalue:
                return -1

            # Special case for a missing attribute and b matches exactly.
            if bvalue is not None and avalue is None and bvalue == prefvalue:
                return 1

            # We can't compare unless attribute present on both.
            if avalue is None or bvalue is None:
                continue

            # Same values should fall back to next attribute.
            if avalue == bvalue:
                continue

            # Exact matches come first.
            if avalue == prefvalue:
                return -1
            if bvalue == prefvalue:
                return 1

            # Fall back to next attribute.
            continue

        # If we got here we couldn't sort by attributes and prefers. Fall
        # back to index order.
        return 0

    def __lt__(self, other):
        return self._cmp(other) < 0

    def __gt__(self, other):
        return self._cmp(other) > 0

    def __eq__(self, other):
        return self._cmp(other) == 0

    def __le__(self, other):
        return self._cmp(other) <= 0

    def __ge__(self, other):
        return self._cmp(other) >= 0

    def __ne__(self, other):
        return self._cmp(other) != 0

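# Illustration only, not what the module does: the same ordering could be
# phrased with functools.cmp_to_key instead of rich comparison methods;
# a sketch for comparison (the helper name is hypothetical):
#
#     import functools
#
#     def _sortedentries(entries, prefers):
#         def compare(a, b):
#             return clonebundleentry(a, prefers)._cmp(
#                 clonebundleentry(b, prefers))
#         return sorted(entries, key=functools.cmp_to_key(compare))
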
def sortclonebundleentries(ui, entries):
    prefers = ui.configlist('ui', 'clonebundleprefers', default=[])
    if not prefers:
        return list(entries)

    prefers = [p.split('=', 1) for p in prefers]

    items = sorted(clonebundleentry(v, prefers) for v in entries)
    return [i.value for i in items]

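# Illustration only, not part of this module: a concrete (hypothetical)
# example of the preference mechanism, matching a config of
# ui.clonebundleprefers = COMPRESSION=zstd, VERSION=v2:
#
#     prefers = [('COMPRESSION', 'zstd'), ('VERSION', 'v2')]
#     entries = [
#         {'URL': 'https://hg.example.com/gzip.hg',
#          'COMPRESSION': 'gzip', 'VERSION': 'v2'},
#         {'URL': 'https://hg.example.com/zstd.hg',
#          'COMPRESSION': 'zstd', 'VERSION': 'v2'},
#     ]
#     items = sorted(clonebundleentry(v, prefers) for v in entries)
#     # the zstd entry wins on the first preference and sorts first:
#     assert items[0].value['URL'] == 'https://hg.example.com/zstd.hg'
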
def trypullbundlefromurl(ui, repo, url):
    """Attempt to apply a bundle from a URL."""
    lock = repo.lock()
    try:
        tr = repo.transaction('bundleurl')
        try:
            try:
                fh = urlmod.open(ui, url)
                cg = readbundle(ui, fh, 'stream')

                if isinstance(cg, bundle2.unbundle20):
                    bundle2.processbundle(repo, cg, lambda: tr)
                elif isinstance(cg, streamclone.streamcloneapplier):
                    cg.apply(repo)
                else:
                    cg.apply(repo, 'clonebundles', url)
                tr.close()
                return True
            except urlerr.httperror as e:
                ui.warn(_('HTTP error fetching bundle: %s\n') % str(e))
            except urlerr.urlerror as e:
                ui.warn(_('error fetching bundle: %s\n') % e.reason)

            return False
        finally:
            tr.release()
    finally:
        lock.release()

@@ -1,1625 +1,1839 b''
============================================================================================
Test cases where there are race conditions between two clients pushing to the same repository
============================================================================================

This file tests cases where two clients push to a server at the same time. The
"raced" client is done preparing its push bundle when the "racing" client
performs its push. The "raced" client starts its actual push after the "racing"
client's push is fully complete.

An extension and a set of shell functions ensure this scheduling.

  $ cat >> delaypush.py << EOF
  > """small extension to orchestrate a push race
  >
  > A client with this extension will create a file when ready and stay stuck
  > until another file is created."""
  >
  > import atexit
  > import errno
  > import os
  > import time
  >
  > from mercurial import (
  >     exchange,
  >     extensions,
  > )
  >
  > def delaypush(orig, pushop):
  >     # notify we are done preparing
  >     readypath = pushop.repo.ui.config('delaypush', 'ready-path', None)
  >     if readypath is not None:
  >         with open(readypath, 'w') as r:
  >             r.write('foo')
  >         pushop.repo.ui.status('wrote ready: %s\n' % readypath)
  >     # now wait for the other process to be done
  >     watchpath = pushop.repo.ui.config('delaypush', 'release-path', None)
  >     if watchpath is not None:
  >         pushop.repo.ui.status('waiting on: %s\n' % watchpath)
  >         limit = 100
  >         while 0 < limit and not os.path.exists(watchpath):
  >             limit -= 1
  >             time.sleep(0.1)
  >         if limit <= 0:
  >             pushop.repo.ui.warn('exiting without watchfile: %s' % watchpath)
  >         else:
  >             # delete the file at the end of the push
  >             def delete():
  >                 try:
  >                     os.unlink(watchpath)
  >                 except OSError as exc:
  >                     if exc.errno != errno.ENOENT:
  >                         raise
  >             atexit.register(delete)
  >     return orig(pushop)
  >
  > def uisetup(ui):
  >     extensions.wrapfunction(exchange, '_pushbundle2', delaypush)
  > EOF

  $ waiton () {
  >     # wait for a file to be created (then delete it)
  >     count=100
  >     while [ ! -f $1 ] ;
  >     do
  >         sleep 0.1;
  >         count=`expr $count - 1`;
  >         if [ $count -lt 0 ];
  >         then
  >             break
  >         fi;
  >     done
  >     [ -f $1 ] || echo "ready file still missing: $1"
  >     rm -f $1
  > }

  $ release () {
  >     # create a file and wait for it to be deleted
  >     count=100
  >     touch $1
  >     while [ -f $1 ] ;
  >     do
  >         sleep 0.1;
  >         count=`expr $count - 1`;
  >         if [ $count -lt 0 ];
  >         then
  >             break
  >         fi;
  >     done
  >     [ ! -f $1 ] || echo "delay file still exist: $1"
  > }

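(The two helpers above implement a simple file-based handshake. For
illustration only, a rough Python equivalent with hypothetical helper
names; the test itself uses the shell functions above.)

    import os
    import time

    def waiton(path, tries=100, delay=0.1):
        # block until `path` appears, then consume it
        while tries > 0 and not os.path.exists(path):
            tries -= 1
            time.sleep(delay)
        assert os.path.exists(path), 'ready file still missing: %s' % path
        os.unlink(path)

    def release(path, tries=100, delay=0.1):
        # create `path` and block until the other side deletes it
        open(path, 'w').close()
        while tries > 0 and os.path.exists(path):
            tries -= 1
            time.sleep(delay)
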
  $ cat >> $HGRCPATH << EOF
  > [ui]
  > ssh = python "$TESTDIR/dummyssh"
  > # simplify output
  > logtemplate = {node|short} {desc} ({branch})
  > [phases]
  > publish = no
  > [experimental]
  > evolution = all
  > [alias]
  > graph = log -G --rev 'sort(all(), "topo")'
  > EOF

We test multiple cases:
* strict: no race detected,
* unrelated: races on unrelated heads are allowed.

#testcases strict unrelated

#if unrelated

  $ cat >> $HGRCPATH << EOF
  > [experimental]
  > checkheads-strict = no
  > EOF

#endif

Setup
-----

create a repo with one root

  $ hg init server
  $ cd server
  $ echo root > root
  $ hg ci -Am "C-ROOT"
  adding root
  $ cd ..

clone it in two clients

  $ hg clone ssh://user@dummy/server client-racy
  requesting all changes
  adding changesets
  adding manifests
  adding file changes
  added 1 changesets with 1 changes to 1 files
  updating to branch default
  1 files updated, 0 files merged, 0 files removed, 0 files unresolved
  $ hg clone ssh://user@dummy/server client-other
  requesting all changes
  adding changesets
  adding manifests
  adding file changes
  added 1 changesets with 1 changes to 1 files
  updating to branch default
  1 files updated, 0 files merged, 0 files removed, 0 files unresolved

setup one to allow race on push

  $ cat >> client-racy/.hg/hgrc << EOF
  > [extensions]
  > delaypush = $TESTTMP/delaypush.py
  > [delaypush]
  > ready-path = $TESTTMP/readyfile
  > release-path = $TESTTMP/watchfile
  > EOF

Simple race, both try to push to the server at the same time
------------------------------------------------------------

Both try to replace the same head

# a
# | b
# |/
# *

Creating changesets

  $ echo b > client-other/a
  $ hg -R client-other/ add client-other/a
  $ hg -R client-other/ commit -m "C-A"
  $ echo b > client-racy/b
  $ hg -R client-racy/ add client-racy/b
  $ hg -R client-racy/ commit -m "C-B"

Pushing

  $ hg -R client-racy push -r 'tip' > ./push-log 2>&1 &

  $ waiton $TESTTMP/readyfile

  $ hg -R client-other push -r 'tip'
  pushing to ssh://user@dummy/server
  searching for changes
  remote: adding changesets
  remote: adding manifests
  remote: adding file changes
  remote: added 1 changesets with 1 changes to 1 files

  $ release $TESTTMP/watchfile

Check the result of the push

  $ cat ./push-log
  pushing to ssh://user@dummy/server
  searching for changes
  wrote ready: $TESTTMP/readyfile
  waiting on: $TESTTMP/watchfile
  abort: push failed:
  'repository changed while pushing - please try again'

  $ hg -R server graph
  o  98217d5a1659 C-A (default)
  |
  @  842e2fac6304 C-ROOT (default)


Pushing on two different heads
------------------------------

Both try to replace a different head

# a b
# | |
# * *
# |/
# *

(resync-all)

  $ hg -R ./server pull ./client-racy
  pulling from ./client-racy
  searching for changes
  adding changesets
  adding manifests
  adding file changes
  added 1 changesets with 1 changes to 1 files (+1 heads)
  (run 'hg heads' to see heads, 'hg merge' to merge)
  $ hg -R ./client-other pull
  pulling from ssh://user@dummy/server
  searching for changes
  adding changesets
  adding manifests
  adding file changes
  added 1 changesets with 1 changes to 1 files (+1 heads)
  (run 'hg heads' to see heads, 'hg merge' to merge)
  $ hg -R ./client-racy pull
  pulling from ssh://user@dummy/server
  searching for changes
  adding changesets
  adding manifests
  adding file changes
  added 1 changesets with 1 changes to 1 files (+1 heads)
  (run 'hg heads' to see heads, 'hg merge' to merge)

  $ hg -R server graph
  o  a9149a1428e2 C-B (default)
  |
  | o  98217d5a1659 C-A (default)
  |/
  @  842e2fac6304 C-ROOT (default)


Creating changesets

  $ echo aa >> client-other/a
  $ hg -R client-other/ commit -m "C-C"
  $ echo bb >> client-racy/b
  $ hg -R client-racy/ commit -m "C-D"

Pushing

  $ hg -R client-racy push -r 'tip' > ./push-log 2>&1 &

  $ waiton $TESTTMP/readyfile

  $ hg -R client-other push -r 'tip'
  pushing to ssh://user@dummy/server
  searching for changes
  remote: adding changesets
  remote: adding manifests
  remote: adding file changes
  remote: added 1 changesets with 1 changes to 1 files

  $ release $TESTTMP/watchfile

Check the result of the push

#if strict
  $ cat ./push-log
  pushing to ssh://user@dummy/server
  searching for changes
  wrote ready: $TESTTMP/readyfile
  waiting on: $TESTTMP/watchfile
  abort: push failed:
  'repository changed while pushing - please try again'

  $ hg -R server graph
  o  51c544a58128 C-C (default)
  |
  o  98217d5a1659 C-A (default)
  |
  | o  a9149a1428e2 C-B (default)
  |/
  @  842e2fac6304 C-ROOT (default)

#endif
#if unrelated

(The two heads are unrelated, push should be allowed)

  $ cat ./push-log
  pushing to ssh://user@dummy/server
  searching for changes
  wrote ready: $TESTTMP/readyfile
  waiting on: $TESTTMP/watchfile
  remote: adding changesets
  remote: adding manifests
  remote: adding file changes
  remote: added 1 changesets with 1 changes to 1 files

  $ hg -R server graph
  o  59e76faf78bd C-D (default)
  |
  o  a9149a1428e2 C-B (default)
  |
  | o  51c544a58128 C-C (default)
  | |
  | o  98217d5a1659 C-A (default)
  |/
  @  842e2fac6304 C-ROOT (default)

#endif

Pushing while someone creates a new head
-----------------------------------------

Pushing a new changeset while someone creates a new branch.

# a (raced)
# |
# * b
# |/
# *

(resync-all)

#if strict

  $ hg -R ./server pull ./client-racy
  pulling from ./client-racy
  searching for changes
  adding changesets
  adding manifests
  adding file changes
  added 1 changesets with 1 changes to 1 files
  (run 'hg update' to get a working copy)

#endif
#if unrelated

  $ hg -R ./server pull ./client-racy
  pulling from ./client-racy
  searching for changes
  no changes found

#endif

  $ hg -R ./client-other pull
  pulling from ssh://user@dummy/server
  searching for changes
  adding changesets
  adding manifests
  adding file changes
  added 1 changesets with 1 changes to 1 files
  (run 'hg update' to get a working copy)
  $ hg -R ./client-racy pull
  pulling from ssh://user@dummy/server
  searching for changes
  adding changesets
  adding manifests
  adding file changes
  added 1 changesets with 1 changes to 1 files
  (run 'hg update' to get a working copy)

  $ hg -R server graph
  o  59e76faf78bd C-D (default)
  |
  o  a9149a1428e2 C-B (default)
  |
  | o  51c544a58128 C-C (default)
  | |
  | o  98217d5a1659 C-A (default)
  |/
  @  842e2fac6304 C-ROOT (default)


Creating changesets

(new head)

  $ hg -R client-other/ up 'desc("C-A")'
  1 files updated, 0 files merged, 0 files removed, 0 files unresolved
  $ echo aaa >> client-other/a
  $ hg -R client-other/ commit -m "C-E"
  created new head

(children of existing head)

  $ hg -R client-racy/ up 'desc("C-C")'
  1 files updated, 0 files merged, 1 files removed, 0 files unresolved
  $ echo bbb >> client-racy/a
  $ hg -R client-racy/ commit -m "C-F"

Pushing

  $ hg -R client-racy push -r 'tip' > ./push-log 2>&1 &

  $ waiton $TESTTMP/readyfile

  $ hg -R client-other push -fr 'tip'
  pushing to ssh://user@dummy/server
  searching for changes
  remote: adding changesets
  remote: adding manifests
  remote: adding file changes
  remote: added 1 changesets with 1 changes to 1 files (+1 heads)

  $ release $TESTTMP/watchfile

Check the result of the push

#if strict

  $ cat ./push-log
  pushing to ssh://user@dummy/server
  searching for changes
  wrote ready: $TESTTMP/readyfile
  waiting on: $TESTTMP/watchfile
  abort: push failed:
  'repository changed while pushing - please try again'

  $ hg -R server graph
  o  d603e2c0cdd7 C-E (default)
  |
  | o  51c544a58128 C-C (default)
  |/
  o  98217d5a1659 C-A (default)
  |
  | o  59e76faf78bd C-D (default)
  | |
  | o  a9149a1428e2 C-B (default)
  |/
  @  842e2fac6304 C-ROOT (default)


#endif

#if unrelated

(The racing new head does not affect existing heads, push should go through)

  $ cat ./push-log
  pushing to ssh://user@dummy/server
  searching for changes
  wrote ready: $TESTTMP/readyfile
  waiting on: $TESTTMP/watchfile
  remote: adding changesets
  remote: adding manifests
  remote: adding file changes
  remote: added 1 changesets with 1 changes to 1 files

  $ hg -R server graph
  o  d9e379a8c432 C-F (default)
  |
  o  51c544a58128 C-C (default)
  |
  | o  d603e2c0cdd7 C-E (default)
  |/
  o  98217d5a1659 C-A (default)
  |
  | o  59e76faf78bd C-D (default)
  | |
  | o  a9149a1428e2 C-B (default)
  |/
  @  842e2fac6304 C-ROOT (default)

#endif

Pushing touching different named branch (same topo): new branch raced
---------------------------------------------------------------------

Pushing two children on the same head, one is a different named branch

# a (raced, branch-a)
# |
# | b (default branch)
# |/
# *

(resync-all)

#if strict

  $ hg -R ./server pull ./client-racy
  pulling from ./client-racy
  searching for changes
  adding changesets
  adding manifests
  adding file changes
  added 1 changesets with 1 changes to 1 files
  (run 'hg update' to get a working copy)

#endif
#if unrelated

  $ hg -R ./server pull ./client-racy
  pulling from ./client-racy
  searching for changes
  no changes found

#endif

  $ hg -R ./client-other pull
  pulling from ssh://user@dummy/server
  searching for changes
  adding changesets
  adding manifests
  adding file changes
  added 1 changesets with 1 changes to 1 files
  (run 'hg update' to get a working copy)
  $ hg -R ./client-racy pull
  pulling from ssh://user@dummy/server
  searching for changes
  adding changesets
  adding manifests
  adding file changes
  added 1 changesets with 1 changes to 1 files (+1 heads)
  (run 'hg heads .' to see heads, 'hg merge' to merge)

  $ hg -R server graph
  o  d9e379a8c432 C-F (default)
  |
  o  51c544a58128 C-C (default)
  |
  | o  d603e2c0cdd7 C-E (default)
  |/
  o  98217d5a1659 C-A (default)
  |
  | o  59e76faf78bd C-D (default)
  | |
  | o  a9149a1428e2 C-B (default)
  |/
  @  842e2fac6304 C-ROOT (default)


Creating changesets

(update existing head)

  $ hg -R client-other/ up 'desc("C-F")'
  1 files updated, 0 files merged, 0 files removed, 0 files unresolved
  $ echo aaa >> client-other/a
  $ hg -R client-other/ commit -m "C-G"

(new named branch from that existing head)

  $ hg -R client-racy/ up 'desc("C-F")'
  0 files updated, 0 files merged, 0 files removed, 0 files unresolved
  $ echo bbb >> client-racy/a
  $ hg -R client-racy/ branch my-first-test-branch
  marked working directory as branch my-first-test-branch
  (branches are permanent and global, did you want a bookmark?)
  $ hg -R client-racy/ commit -m "C-H"

Pushing

  $ hg -R client-racy push -r 'tip' --new-branch > ./push-log 2>&1 &

  $ waiton $TESTTMP/readyfile

  $ hg -R client-other push -fr 'tip'
  pushing to ssh://user@dummy/server
  searching for changes
  remote: adding changesets
  remote: adding manifests
  remote: adding file changes
  remote: added 1 changesets with 1 changes to 1 files

  $ release $TESTTMP/watchfile

Check the result of the push

#if strict
  $ cat ./push-log
  pushing to ssh://user@dummy/server
  searching for changes
  wrote ready: $TESTTMP/readyfile
  waiting on: $TESTTMP/watchfile
  abort: push failed:
  'repository changed while pushing - please try again'

  $ hg -R server graph
  o  75d69cba5402 C-G (default)
  |
  o  d9e379a8c432 C-F (default)
  |
  o  51c544a58128 C-C (default)
  |
  | o  d603e2c0cdd7 C-E (default)
  |/
  o  98217d5a1659 C-A (default)
  |
  | o  59e76faf78bd C-D (default)
  | |
  | o  a9149a1428e2 C-B (default)
  |/
  @  842e2fac6304 C-ROOT (default)

#endif
#if unrelated

(unrelated named branches are unrelated)

  $ cat ./push-log
  pushing to ssh://user@dummy/server
  searching for changes
  wrote ready: $TESTTMP/readyfile
  waiting on: $TESTTMP/watchfile
  remote: adding changesets
  remote: adding manifests
  remote: adding file changes
  remote: added 1 changesets with 1 changes to 1 files (+1 heads)

  $ hg -R server graph
  o  833be552cfe6 C-H (my-first-test-branch)
  |
  | o  75d69cba5402 C-G (default)
  |/
  o  d9e379a8c432 C-F (default)
  |
  o  51c544a58128 C-C (default)
  |
  | o  d603e2c0cdd7 C-E (default)
  |/
  o  98217d5a1659 C-A (default)
  |
  | o  59e76faf78bd C-D (default)
  | |
  | o  a9149a1428e2 C-B (default)
  |/
  @  842e2fac6304 C-ROOT (default)

#endif

The racing new head does not affect existing heads, push should go through

pushing touching different named branch (same topo): old branch raced
---------------------------------------------------------------------

Pushing two children on the same head, one is a different named branch

# a (raced, default-branch)
# |
# | b (new branch)
# |/
# * (default-branch)

(resync-all)

#if strict

  $ hg -R ./server pull ./client-racy
  pulling from ./client-racy
  searching for changes
  adding changesets
  adding manifests
  adding file changes
  added 1 changesets with 1 changes to 1 files (+1 heads)
  (run 'hg heads .' to see heads, 'hg merge' to merge)

#endif
#if unrelated

  $ hg -R ./server pull ./client-racy
  pulling from ./client-racy
  searching for changes
  no changes found

#endif

  $ hg -R ./client-other pull
  pulling from ssh://user@dummy/server
  searching for changes
  adding changesets
  adding manifests
  adding file changes
  added 1 changesets with 1 changes to 1 files (+1 heads)
  (run 'hg heads .' to see heads, 'hg merge' to merge)
  $ hg -R ./client-racy pull
  pulling from ssh://user@dummy/server
  searching for changes
  adding changesets
  adding manifests
  adding file changes
  added 1 changesets with 1 changes to 1 files (+1 heads)
  (run 'hg heads' to see heads)

  $ hg -R server graph
  o  833be552cfe6 C-H (my-first-test-branch)
  |
  | o  75d69cba5402 C-G (default)
  |/
  o  d9e379a8c432 C-F (default)
  |
  o  51c544a58128 C-C (default)
  |
  | o  d603e2c0cdd7 C-E (default)
  |/
  o  98217d5a1659 C-A (default)
  |
  | o  59e76faf78bd C-D (default)
  | |
  | o  a9149a1428e2 C-B (default)
  |/
  @  842e2fac6304 C-ROOT (default)


Creating changesets

(new named branch from one head)

  $ hg -R client-other/ up 'desc("C-G")'
  0 files updated, 0 files merged, 0 files removed, 0 files unresolved
  $ echo aaa >> client-other/a
  $ hg -R client-other/ branch my-second-test-branch
  marked working directory as branch my-second-test-branch
  $ hg -R client-other/ commit -m "C-I"

(children "updating" that same head)

  $ hg -R client-racy/ up 'desc("C-G")'
  1 files updated, 0 files merged, 0 files removed, 0 files unresolved
  $ echo bbb >> client-racy/a
  $ hg -R client-racy/ commit -m "C-J"

Pushing

  $ hg -R client-racy push -r 'tip' > ./push-log 2>&1 &

  $ waiton $TESTTMP/readyfile

  $ hg -R client-other push -fr 'tip' --new-branch
  pushing to ssh://user@dummy/server
  searching for changes
  remote: adding changesets
  remote: adding manifests
  remote: adding file changes
  remote: added 1 changesets with 1 changes to 1 files

  $ release $TESTTMP/watchfile

Check the result of the push

#if strict

  $ cat ./push-log
  pushing to ssh://user@dummy/server
  searching for changes
  wrote ready: $TESTTMP/readyfile
  waiting on: $TESTTMP/watchfile
  abort: push failed:
  'repository changed while pushing - please try again'

  $ hg -R server graph
  o  b35ed749f288 C-I (my-second-test-branch)
  |
  o  75d69cba5402 C-G (default)
  |
  | o  833be552cfe6 C-H (my-first-test-branch)
  |/
  o  d9e379a8c432 C-F (default)
  |
  o  51c544a58128 C-C (default)
  |
  | o  d603e2c0cdd7 C-E (default)
  |/
  o  98217d5a1659 C-A (default)
  |
  | o  59e76faf78bd C-D (default)
  | |
  | o  a9149a1428e2 C-B (default)
  |/
  @  842e2fac6304 C-ROOT (default)


#endif

#if unrelated

(unrelated named branches are unrelated)

  $ cat ./push-log
  pushing to ssh://user@dummy/server
  searching for changes
  wrote ready: $TESTTMP/readyfile
  waiting on: $TESTTMP/watchfile
  remote: adding changesets
  remote: adding manifests
  remote: adding file changes
  remote: added 1 changesets with 1 changes to 1 files (+1 heads)

  $ hg -R server graph
  o  89420bf00fae C-J (default)
  |
  | o  b35ed749f288 C-I (my-second-test-branch)
  |/
  o  75d69cba5402 C-G (default)
  |
  | o  833be552cfe6 C-H (my-first-test-branch)
  |/
  o  d9e379a8c432 C-F (default)
  |
  o  51c544a58128 C-C (default)
  |
  | o  d603e2c0cdd7 C-E (default)
  |/
  o  98217d5a1659 C-A (default)
  |
  | o  59e76faf78bd C-D (default)
  | |
  | o  a9149a1428e2 C-B (default)
  |/
  @  842e2fac6304 C-ROOT (default)


#endif

633 pushing racing push touch multiple heads
833 pushing racing push touch multiple heads
634 ----------------------------------------
834 ----------------------------------------
635
835
636 There are multiple heads, but the racing push touch all of them
836 There are multiple heads, but the racing push touch all of them
637
837
638 # a (raced)
838 # a (raced)
639 # | b
839 # | b
640 # |/|
840 # |/|
641 # * *
841 # * *
642 # |/
842 # |/
643 # *
843 # *
644
844
645 (resync-all)
845 (resync-all)
646
846
847 #if strict
848
647 $ hg -R ./server pull ./client-racy
849 $ hg -R ./server pull ./client-racy
648 pulling from ./client-racy
850 pulling from ./client-racy
649 searching for changes
851 searching for changes
650 adding changesets
852 adding changesets
651 adding manifests
853 adding manifests
652 adding file changes
854 adding file changes
653 added 1 changesets with 1 changes to 1 files (+1 heads)
855 added 1 changesets with 1 changes to 1 files (+1 heads)
654 (run 'hg heads .' to see heads, 'hg merge' to merge)
856 (run 'hg heads .' to see heads, 'hg merge' to merge)
857
858 #endif
859
860 #if unrelated
861
862 $ hg -R ./server pull ./client-racy
863 pulling from ./client-racy
864 searching for changes
865 no changes found
866
867 #endif
868
  $ hg -R ./client-other pull
  pulling from ssh://user@dummy/server
  searching for changes
  adding changesets
  adding manifests
  adding file changes
  added 1 changesets with 1 changes to 1 files (+1 heads)
  (run 'hg heads' to see heads)
  $ hg -R ./client-racy pull
  pulling from ssh://user@dummy/server
  searching for changes
  adding changesets
  adding manifests
  adding file changes
  added 1 changesets with 1 changes to 1 files (+1 heads)
  (run 'hg heads .' to see heads, 'hg merge' to merge)

  $ hg -R server graph
  o 89420bf00fae C-J (default)
  |
  | o b35ed749f288 C-I (my-second-test-branch)
  |/
  o 75d69cba5402 C-G (default)
  |
  | o 833be552cfe6 C-H (my-first-test-branch)
  |/
  o d9e379a8c432 C-F (default)
  |
  o 51c544a58128 C-C (default)
  |
  | o d603e2c0cdd7 C-E (default)
  |/
  o 98217d5a1659 C-A (default)
  |
  | o 59e76faf78bd C-D (default)
  | |
  | o a9149a1428e2 C-B (default)
  |/
  @ 842e2fac6304 C-ROOT (default)


Creating changesets

(merges heads)

  $ hg -R client-other/ up 'desc("C-E")'
  1 files updated, 0 files merged, 0 files removed, 0 files unresolved
  $ hg -R client-other/ merge 'desc("C-D")'
  1 files updated, 0 files merged, 0 files removed, 0 files unresolved
  (branch merge, don't forget to commit)
  $ hg -R client-other/ commit -m "C-K"

(update one head)

  $ hg -R client-racy/ up 'desc("C-D")'
  1 files updated, 0 files merged, 1 files removed, 0 files unresolved
  $ echo bbb >> client-racy/b
  $ hg -R client-racy/ commit -m "C-L"

Pushing

  $ hg -R client-racy push -r 'tip' > ./push-log 2>&1 &

  $ waiton $TESTTMP/readyfile

  $ hg -R client-other push -fr 'tip' --new-branch
  pushing to ssh://user@dummy/server
  searching for changes
  remote: adding changesets
  remote: adding manifests
  remote: adding file changes
  remote: added 1 changesets with 0 changes to 0 files (-1 heads)

  $ release $TESTTMP/watchfile

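(The raced push runs in the background; "waiton" and "release" are
synchronization helpers presumably defined near the top of this test,
outside this hunk. A minimal Python sketch of what they have to do; the
function bodies are assumptions, only the file-based protocol is taken
from the transcript above:)

import os
import time

def waiton(path, timeout=10.0):
    # Poll until the hook fired by the raced push reports readiness.
    deadline = time.time() + timeout
    while not os.path.exists(path):
        if time.time() > deadline:
            raise RuntimeError('timeout waiting for %s' % path)
        time.sleep(0.01)

def release(path):
    # Create the file the blocked hook polls on, letting the raced push
    # resume and reach the server's race detection.
    open(path, 'ab').close()
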
Check the result of the push

  $ cat ./push-log
  pushing to ssh://user@dummy/server
  searching for changes
  wrote ready: $TESTTMP/readyfile
  waiting on: $TESTTMP/watchfile
  abort: push failed:
  'repository changed while pushing - please try again'

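(Why the raced push aborts: the client records the heads it saw on the
server and ships them along, in a bundle2 "check:heads"-style part; once the
server holds the lock it re-checks them. This series relaxes that to a check
on related heads only. A minimal Python paraphrase of the two policies, not
the real implementation:)

def strict_check(heads_seen_by_client, server_heads):
    # Default mode: any head that appeared or vanished fails the push.
    return sorted(heads_seen_by_client) == sorted(server_heads)

def related_check(heads_touched_by_push, server_heads):
    # Relaxed mode: only the heads this push builds upon must still be
    # heads; unrelated heads may change concurrently.
    return all(h in server_heads for h in heads_touched_by_push)

# The racing push above merged both heads into C-K, so C-D is no longer a
# head and even the relaxed policy rejects the raced push.
assert not strict_check({'C-D', 'C-E'}, {'C-K'})
assert not related_check({'C-D'}, {'C-K'})
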
  $ hg -R server graph
  o be705100c623 C-K (default)
  |\
  | o d603e2c0cdd7 C-E (default)
  | |
  o | 59e76faf78bd C-D (default)
  | |
  | | o 89420bf00fae C-J (default)
  | | |
  | | | o b35ed749f288 C-I (my-second-test-branch)
  | | |/
  | | o 75d69cba5402 C-G (default)
  | | |
  | | | o 833be552cfe6 C-H (my-first-test-branch)
  | | |/
  | | o d9e379a8c432 C-F (default)
  | | |
  | | o 51c544a58128 C-C (default)
  | |/
  o | a9149a1428e2 C-B (default)
  | |
  | o 98217d5a1659 C-A (default)
  |/
  @ 842e2fac6304 C-ROOT (default)


raced push touching multiple heads
----------------------------------

There are multiple heads, and the raced push touches all of them.

# b
# | a (raced)
# |/|
# * *
# |/
# *

(resync-all)

  $ hg -R ./server pull ./client-racy
  pulling from ./client-racy
  searching for changes
  adding changesets
  adding manifests
  adding file changes
  added 1 changesets with 1 changes to 1 files (+1 heads)
  (run 'hg heads .' to see heads, 'hg merge' to merge)
  $ hg -R ./client-other pull
  pulling from ssh://user@dummy/server
  searching for changes
  adding changesets
  adding manifests
  adding file changes
  added 1 changesets with 1 changes to 1 files (+1 heads)
  (run 'hg heads .' to see heads, 'hg merge' to merge)
  $ hg -R ./client-racy pull
  pulling from ssh://user@dummy/server
  searching for changes
  adding changesets
  adding manifests
  adding file changes
  added 1 changesets with 0 changes to 0 files
  (run 'hg update' to get a working copy)

  $ hg -R server graph
  o cac2cead0ff0 C-L (default)
  |
  | o be705100c623 C-K (default)
  |/|
  | o d603e2c0cdd7 C-E (default)
  | |
  o | 59e76faf78bd C-D (default)
  | |
  | | o 89420bf00fae C-J (default)
  | | |
  | | | o b35ed749f288 C-I (my-second-test-branch)
  | | |/
  | | o 75d69cba5402 C-G (default)
  | | |
  | | | o 833be552cfe6 C-H (my-first-test-branch)
  | | |/
  | | o d9e379a8c432 C-F (default)
  | | |
  | | o 51c544a58128 C-C (default)
  | |/
  o | a9149a1428e2 C-B (default)
  | |
  | o 98217d5a1659 C-A (default)
  |/
  @ 842e2fac6304 C-ROOT (default)


Creating changesets

(update existing head)

  $ echo aaa >> client-other/a
  $ hg -R client-other/ commit -m "C-M"

(merge heads)

  $ hg -R client-racy/ merge 'desc("C-K")'
  1 files updated, 0 files merged, 0 files removed, 0 files unresolved
  (branch merge, don't forget to commit)
  $ hg -R client-racy/ commit -m "C-N"

Pushing

  $ hg -R client-racy push -r 'tip' > ./push-log 2>&1 &

  $ waiton $TESTTMP/readyfile

  $ hg -R client-other push -fr 'tip' --new-branch
  pushing to ssh://user@dummy/server
  searching for changes
  remote: adding changesets
  remote: adding manifests
  remote: adding file changes
  remote: added 1 changesets with 1 changes to 1 files

  $ release $TESTTMP/watchfile

Check the result of the push

  $ cat ./push-log
  pushing to ssh://user@dummy/server
  searching for changes
  wrote ready: $TESTTMP/readyfile
  waiting on: $TESTTMP/watchfile
  abort: push failed:
  'repository changed while pushing - please try again'

  $ hg -R server graph
  o 6fd3090135df C-M (default)
  |
  o be705100c623 C-K (default)
  |\
  | o d603e2c0cdd7 C-E (default)
  | |
  +---o cac2cead0ff0 C-L (default)
  | |
  o | 59e76faf78bd C-D (default)
  | |
  | | o 89420bf00fae C-J (default)
  | | |
  | | | o b35ed749f288 C-I (my-second-test-branch)
  | | |/
  | | o 75d69cba5402 C-G (default)
  | | |
  | | | o 833be552cfe6 C-H (my-first-test-branch)
  | | |/
  | | o d9e379a8c432 C-F (default)
  | | |
  | | o 51c544a58128 C-C (default)
  | |/
  o | a9149a1428e2 C-B (default)
  | |
  | o 98217d5a1659 C-A (default)
  |/
  @ 842e2fac6304 C-ROOT (default)


racing commit pushing a new head behind another named branch
-------------------------------------------------------------

Non-contiguous branches are a valid case; we test for them.

# b (branch default)
# |
# o (branch foo)
# |
# | a (raced, branch default)
# |/
# * (branch foo)
# |
# * (branch default)

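("Non-contiguous" here means a branch whose changesets are interleaved with
another branch in the DAG, e.g. default resuming on top of a foo changeset.
A minimal Python sketch, on invented toy data, of why such a branch still
has a well-defined head:)

# Toy linear DAG: name -> (branch, parent).  "default" is interrupted by
# "foo" and then resumes, the non-contiguous shape exercised below.
dag = {
    'ROOT': ('default', None),
    'F': ('foo', 'ROOT'),
    'B': ('default', 'F'),
}

def ancestors(dag, node):
    seen = set()
    parent = dag[node][1]
    while parent is not None:
        seen.add(parent)
        parent = dag[parent][1]
    return seen

def branch_heads(dag, branch):
    # A branch head is a changeset of the branch with no descendant on the
    # same branch, even if other branches sit in between in the DAG.
    nodes = {n for n, (b, _) in dag.items() if b == branch}
    covered = set()
    for n in nodes:
        covered |= ancestors(dag, n)
    return nodes - covered

print(branch_heads(dag, 'default'))  # {'B'}: ROOT is covered by B even
                                     # though foo's F sits between them.
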
(resync-all + other branch)

  $ hg -R ./server pull ./client-racy
  pulling from ./client-racy
  searching for changes
  adding changesets
  adding manifests
  adding file changes
  added 1 changesets with 0 changes to 0 files
  (run 'hg update' to get a working copy)

(creates named branch on head)

  $ hg -R ./server/ up 'desc("C-N")'
  2 files updated, 0 files merged, 0 files removed, 0 files unresolved
  $ hg -R ./server/ branch other
  marked working directory as branch other
  $ hg -R ./server/ ci -m "C-Z"
  $ hg -R ./server/ up null
  0 files updated, 0 files merged, 3 files removed, 0 files unresolved

(sync client)

  $ hg -R ./client-other pull
  pulling from ssh://user@dummy/server
  searching for changes
  adding changesets
  adding manifests
  adding file changes
  added 2 changesets with 0 changes to 0 files
  (run 'hg update' to get a working copy)
  $ hg -R ./client-racy pull
  pulling from ssh://user@dummy/server
  searching for changes
  adding changesets
  adding manifests
  adding file changes
  added 2 changesets with 1 changes to 1 files (+1 heads)
  (run 'hg heads .' to see heads, 'hg merge' to merge)

  $ hg -R server graph
  o 55a6f1c01b48 C-Z (other)
  |
  o 866a66e18630 C-N (default)
  |\
  +---o 6fd3090135df C-M (default)
  | |
  | o cac2cead0ff0 C-L (default)
  | |
  o | be705100c623 C-K (default)
  |\|
  o | d603e2c0cdd7 C-E (default)
  | |
  | o 59e76faf78bd C-D (default)
  | |
  | | o 89420bf00fae C-J (default)
  | | |
  | | | o b35ed749f288 C-I (my-second-test-branch)
  | | |/
  | | o 75d69cba5402 C-G (default)
  | | |
  | | | o 833be552cfe6 C-H (my-first-test-branch)
  | | |/
  | | o d9e379a8c432 C-F (default)
  | | |
  +---o 51c544a58128 C-C (default)
  | |
  | o a9149a1428e2 C-B (default)
  | |
  o | 98217d5a1659 C-A (default)
  |/
  o 842e2fac6304 C-ROOT (default)


Creating changesets

(update the default head through another named-branch changeset)

  $ hg -R client-other/ up 'desc("C-Z")'
  2 files updated, 0 files merged, 0 files removed, 0 files unresolved
  $ echo aaa >> client-other/a
  $ hg -R client-other/ commit -m "C-O"
  $ echo aaa >> client-other/a
  $ hg -R client-other/ branch --force default
  marked working directory as branch default
  $ hg -R client-other/ commit -m "C-P"
  created new head

(update default head)

  $ hg -R client-racy/ up 'desc("C-Z")'
  0 files updated, 0 files merged, 0 files removed, 0 files unresolved
  $ echo bbb >> client-other/a
  $ hg -R client-racy/ branch --force default
  marked working directory as branch default
  $ hg -R client-racy/ commit -m "C-Q"
  created new head

Pushing

  $ hg -R client-racy push -r 'tip' > ./push-log 2>&1 &

  $ waiton $TESTTMP/readyfile

  $ hg -R client-other push -fr 'tip' --new-branch
  pushing to ssh://user@dummy/server
  searching for changes
  remote: adding changesets
  remote: adding manifests
  remote: adding file changes
  remote: added 2 changesets with 1 changes to 1 files

  $ release $TESTTMP/watchfile

Check the result of the push

  $ cat ./push-log
  pushing to ssh://user@dummy/server
  searching for changes
  wrote ready: $TESTTMP/readyfile
  waiting on: $TESTTMP/watchfile
  abort: push failed:
  'repository changed while pushing - please try again'

  $ hg -R server graph
  o 1b58ee3f79e5 C-P (default)
  |
  o d0a85b2252a9 C-O (other)
  |
  o 55a6f1c01b48 C-Z (other)
  |
  o 866a66e18630 C-N (default)
  |\
  +---o 6fd3090135df C-M (default)
  | |
  | o cac2cead0ff0 C-L (default)
  | |
  o | be705100c623 C-K (default)
  |\|
  o | d603e2c0cdd7 C-E (default)
  | |
  | o 59e76faf78bd C-D (default)
  | |
  | | o 89420bf00fae C-J (default)
  | | |
  | | | o b35ed749f288 C-I (my-second-test-branch)
  | | |/
  | | o 75d69cba5402 C-G (default)
  | | |
  | | | o 833be552cfe6 C-H (my-first-test-branch)
  | | |/
  | | o d9e379a8c432 C-F (default)
  | | |
  +---o 51c544a58128 C-C (default)
  | |
  | o a9149a1428e2 C-B (default)
  | |
  o | 98217d5a1659 C-A (default)
  |/
  o 842e2fac6304 C-ROOT (default)


raced commit pushing a new head behind another named branch
------------------------------------------------------------

Again, non-contiguous branches are a valid case; we test for them.

# b (raced branch default)
# |
# o (branch foo)
# |
# | a (branch default)
# |/
# * (branch foo)
# |
# * (branch default)

(resync-all)

  $ hg -R ./server pull ./client-racy
  pulling from ./client-racy
  searching for changes
  adding changesets
  adding manifests
  adding file changes
  added 1 changesets with 0 changes to 0 files (+1 heads)
  (run 'hg heads .' to see heads, 'hg merge' to merge)
  $ hg -R ./client-other pull
  pulling from ssh://user@dummy/server
  searching for changes
  adding changesets
  adding manifests
  adding file changes
  added 1 changesets with 0 changes to 0 files (+1 heads)
  (run 'hg heads .' to see heads, 'hg merge' to merge)
  $ hg -R ./client-racy pull
  pulling from ssh://user@dummy/server
  searching for changes
  adding changesets
  adding manifests
  adding file changes
  added 2 changesets with 1 changes to 1 files (+1 heads)
  (run 'hg heads .' to see heads, 'hg merge' to merge)

  $ hg -R server graph
  o b0ee3d6f51bc C-Q (default)
  |
  | o 1b58ee3f79e5 C-P (default)
  | |
  | o d0a85b2252a9 C-O (other)
  |/
  o 55a6f1c01b48 C-Z (other)
  |
  o 866a66e18630 C-N (default)
  |\
  +---o 6fd3090135df C-M (default)
  | |
  | o cac2cead0ff0 C-L (default)
  | |
  o | be705100c623 C-K (default)
  |\|
  o | d603e2c0cdd7 C-E (default)
  | |
  | o 59e76faf78bd C-D (default)
  | |
  | | o 89420bf00fae C-J (default)
  | | |
  | | | o b35ed749f288 C-I (my-second-test-branch)
  | | |/
  | | o 75d69cba5402 C-G (default)
  | | |
  | | | o 833be552cfe6 C-H (my-first-test-branch)
  | | |/
  | | o d9e379a8c432 C-F (default)
  | | |
  +---o 51c544a58128 C-C (default)
  | |
  | o a9149a1428e2 C-B (default)
  | |
  o | 98217d5a1659 C-A (default)
  |/
  o 842e2fac6304 C-ROOT (default)


Creating changesets

(update 'other' named branch head)

  $ hg -R client-other/ up 'desc("C-P")'
  0 files updated, 0 files merged, 0 files removed, 0 files unresolved
  $ echo aaa >> client-other/a
  $ hg -R client-other/ branch --force other
  marked working directory as branch other
  $ hg -R client-other/ commit -m "C-R"
  created new head

(update the 'other' named branch through a 'default' changeset)

  $ hg -R client-racy/ up 'desc("C-P")'
  1 files updated, 0 files merged, 0 files removed, 0 files unresolved
  $ echo bbb >> client-racy/a
  $ hg -R client-racy/ commit -m "C-S"
  $ echo bbb >> client-racy/a
  $ hg -R client-racy/ branch --force other
  marked working directory as branch other
  $ hg -R client-racy/ commit -m "C-T"
  created new head

Pushing

  $ hg -R client-racy push -r 'tip' > ./push-log 2>&1 &

  $ waiton $TESTTMP/readyfile

  $ hg -R client-other push -fr 'tip' --new-branch
  pushing to ssh://user@dummy/server
  searching for changes
  remote: adding changesets
  remote: adding manifests
  remote: adding file changes
  remote: added 1 changesets with 1 changes to 1 files

  $ release $TESTTMP/watchfile

Check the result of the push

  $ cat ./push-log
  pushing to ssh://user@dummy/server
  searching for changes
  wrote ready: $TESTTMP/readyfile
  waiting on: $TESTTMP/watchfile
  abort: push failed:
  'repository changed while pushing - please try again'

  $ hg -R server graph
  o de7b9e2ba3f6 C-R (other)
  |
  o 1b58ee3f79e5 C-P (default)
  |
  o d0a85b2252a9 C-O (other)
  |
  | o b0ee3d6f51bc C-Q (default)
  |/
  o 55a6f1c01b48 C-Z (other)
  |
  o 866a66e18630 C-N (default)
  |\
  +---o 6fd3090135df C-M (default)
  | |
  | o cac2cead0ff0 C-L (default)
  | |
  o | be705100c623 C-K (default)
  |\|
  o | d603e2c0cdd7 C-E (default)
  | |
  | o 59e76faf78bd C-D (default)
  | |
  | | o 89420bf00fae C-J (default)
  | | |
  | | | o b35ed749f288 C-I (my-second-test-branch)
  | | |/
  | | o 75d69cba5402 C-G (default)
  | | |
  | | | o 833be552cfe6 C-H (my-first-test-branch)
  | | |/
  | | o d9e379a8c432 C-F (default)
  | | |
  +---o 51c544a58128 C-C (default)
  | |
  | o a9149a1428e2 C-B (default)
  | |
  o | 98217d5a1659 C-A (default)
  |/
  o 842e2fac6304 C-ROOT (default)


raced commit pushing a new head obsoleting the one touched by the racing push
------------------------------------------------------------------------------

# b (racing)
# |
# ø⇠◔ a (raced)
# |/
# *

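# (legend: "ø" marks the changeset being obsoleted, "◔" its successor, and
#  "⇠" the obsolescence marker relating them)
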
(resync-all)

  $ hg -R ./server pull ./client-racy
  pulling from ./client-racy
  searching for changes
  adding changesets
  adding manifests
  adding file changes
  added 2 changesets with 2 changes to 1 files (+1 heads)
  (run 'hg heads .' to see heads, 'hg merge' to merge)
  $ hg -R ./client-other pull
  pulling from ssh://user@dummy/server
  searching for changes
  adding changesets
  adding manifests
  adding file changes
  added 2 changesets with 2 changes to 1 files (+1 heads)
  (run 'hg heads' to see heads, 'hg merge' to merge)
  $ hg -R ./client-racy pull
  pulling from ssh://user@dummy/server
  searching for changes
  adding changesets
  adding manifests
  adding file changes
  added 1 changesets with 1 changes to 1 files (+1 heads)
  (run 'hg heads' to see heads, 'hg merge' to merge)

  $ hg -R server graph
  o 3d57ed3c1091 C-T (other)
  |
  o 2efd43f7b5ba C-S (default)
  |
  | o de7b9e2ba3f6 C-R (other)
  |/
  o 1b58ee3f79e5 C-P (default)
  |
  o d0a85b2252a9 C-O (other)
  |
  | o b0ee3d6f51bc C-Q (default)
  |/
  o 55a6f1c01b48 C-Z (other)
  |
  o 866a66e18630 C-N (default)
  |\
  +---o 6fd3090135df C-M (default)
  | |
  | o cac2cead0ff0 C-L (default)
  | |
  o | be705100c623 C-K (default)
  |\|
  o | d603e2c0cdd7 C-E (default)
  | |
  | o 59e76faf78bd C-D (default)
  | |
  | | o 89420bf00fae C-J (default)
  | | |
  | | | o b35ed749f288 C-I (my-second-test-branch)
  | | |/
  | | o 75d69cba5402 C-G (default)
  | | |
  | | | o 833be552cfe6 C-H (my-first-test-branch)
  | | |/
  | | o d9e379a8c432 C-F (default)
  | | |
  +---o 51c544a58128 C-C (default)
  | |
  | o a9149a1428e2 C-B (default)
  | |
  o | 98217d5a1659 C-A (default)
  |/
  o 842e2fac6304 C-ROOT (default)


Creating changesets and markers

(continue existing head)

  $ hg -R client-other/ up 'desc("C-Q")'
  1 files updated, 0 files merged, 0 files removed, 0 files unresolved
  $ echo aaa >> client-other/a
  $ hg -R client-other/ commit -m "C-U"

(new topo branch obsoleting that same head)

  $ hg -R client-racy/ up 'desc("C-Z")'
  1 files updated, 0 files merged, 0 files removed, 0 files unresolved
  $ echo bbb >> client-racy/a
  $ hg -R client-racy/ branch --force default
  marked working directory as branch default
  $ hg -R client-racy/ commit -m "C-V"
  created new head
  $ ID_Q=`hg -R client-racy log -T '{node}\n' -r 'desc("C-Q")'`
  $ ID_V=`hg -R client-racy log -T '{node}\n' -r 'desc("C-V")'`
  $ hg -R client-racy debugobsolete $ID_Q $ID_V

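(The debugobsolete call above records a marker stating that C-Q was
superseded by C-V, so C-Q stops being a visible head without anything being
stripped. To double-check the marker locally one could run, for instance,
the following; the commands are illustrative and their output is omitted:)

#  $ hg -R client-racy debugobsolete
#  $ hg -R client-racy log --hidden -r 'obsolete()' -T '{desc}\n'
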
Pushing

  $ hg -R client-racy push -r 'tip' > ./push-log 2>&1 &

  $ waiton $TESTTMP/readyfile

  $ hg -R client-other push -fr 'tip' --new-branch
  pushing to ssh://user@dummy/server
  searching for changes
  remote: adding changesets
  remote: adding manifests
  remote: adding file changes
  remote: added 1 changesets with 0 changes to 0 files

  $ release $TESTTMP/watchfile

Check the result of the push

  $ cat ./push-log
  pushing to ssh://user@dummy/server
  searching for changes
  wrote ready: $TESTTMP/readyfile
  waiting on: $TESTTMP/watchfile
  abort: push failed:
  'repository changed while pushing - please try again'

  $ hg -R server debugobsolete
  $ hg -R server graph
  o a98a47d8b85b C-U (default)
  |
  o b0ee3d6f51bc C-Q (default)
  |
  | o 3d57ed3c1091 C-T (other)
  | |
  | o 2efd43f7b5ba C-S (default)
  | |
  | | o de7b9e2ba3f6 C-R (other)
  | |/
  | o 1b58ee3f79e5 C-P (default)
  | |
  | o d0a85b2252a9 C-O (other)
  |/
  o 55a6f1c01b48 C-Z (other)
  |
  o 866a66e18630 C-N (default)
  |\
  +---o 6fd3090135df C-M (default)
  | |
  | o cac2cead0ff0 C-L (default)
  | |
  o | be705100c623 C-K (default)
  |\|
  o | d603e2c0cdd7 C-E (default)
  | |
  | o 59e76faf78bd C-D (default)
  | |
  | | o 89420bf00fae C-J (default)
  | | |
  | | | o b35ed749f288 C-I (my-second-test-branch)
  | | |/
  | | o 75d69cba5402 C-G (default)
  | | |
  | | | o 833be552cfe6 C-H (my-first-test-branch)
  | | |/
  | | o d9e379a8c432 C-F (default)
  | | |
  +---o 51c544a58128 C-C (default)
  | |
  | o a9149a1428e2 C-B (default)
  | |
  o | 98217d5a1659 C-A (default)
  |/
  o 842e2fac6304 C-ROOT (default)


racing commit pushing a new head obsoleting the one touched by the raced push
------------------------------------------------------------------------------

(mirror test case of the previous one)

# a (raced branch default)
# |
# ø⇠◔ b (racing)
# |/
# *

(resync-all)

  $ hg -R ./server pull ./client-racy
  pulling from ./client-racy
  searching for changes
  adding changesets
  adding manifests
  adding file changes
  added 1 changesets with 1 changes to 1 files (+1 heads)
  1 new obsolescence markers
  (run 'hg heads .' to see heads, 'hg merge' to merge)
  $ hg -R ./client-other pull
  pulling from ssh://user@dummy/server
  searching for changes
  adding changesets
  adding manifests
  adding file changes
  added 1 changesets with 1 changes to 1 files (+1 heads)
  1 new obsolescence markers
  (run 'hg heads .' to see heads, 'hg merge' to merge)
  $ hg -R ./client-racy pull
  pulling from ssh://user@dummy/server
  searching for changes
  adding changesets
  adding manifests
  adding file changes
  added 1 changesets with 0 changes to 0 files
  (run 'hg update' to get a working copy)

  $ hg -R server debugobsolete
  b0ee3d6f51bc4c0ca6d4f2907708027a6c376233 720c5163ecf64dcc6216bee2d62bf3edb1882499 0 (Thu Jan 01 00:00:00 1970 +0000) {'user': 'test'}
  $ hg -R server graph
  o 720c5163ecf6 C-V (default)
  |
  | o a98a47d8b85b C-U (default)
  | |
  | x b0ee3d6f51bc C-Q (default)
  |/
  | o 3d57ed3c1091 C-T (other)
  | |
  | o 2efd43f7b5ba C-S (default)
  | |
  | | o de7b9e2ba3f6 C-R (other)
  | |/
  | o 1b58ee3f79e5 C-P (default)
  | |
  | o d0a85b2252a9 C-O (other)
  |/
  o 55a6f1c01b48 C-Z (other)
  |
  o 866a66e18630 C-N (default)
  |\
  +---o 6fd3090135df C-M (default)
  | |
  | o cac2cead0ff0 C-L (default)
  | |
  o | be705100c623 C-K (default)
  |\|
  o | d603e2c0cdd7 C-E (default)
  | |
  | o 59e76faf78bd C-D (default)
  | |
  | | o 89420bf00fae C-J (default)
  | | |
  | | | o b35ed749f288 C-I (my-second-test-branch)
  | | |/
  | | o 75d69cba5402 C-G (default)
  | | |
  | | | o 833be552cfe6 C-H (my-first-test-branch)
  | | |/
  | | o d9e379a8c432 C-F (default)
  | | |
  +---o 51c544a58128 C-C (default)
  | |
  | o a9149a1428e2 C-B (default)
  | |
  o | 98217d5a1659 C-A (default)
  |/
  o 842e2fac6304 C-ROOT (default)


Creating changesets and markers

(new topo branch obsoleting that same head)

  $ hg -R client-other/ up 'desc("C-Q")'
  1 files updated, 0 files merged, 0 files removed, 0 files unresolved
  $ echo bbb >> client-other/a
  $ hg -R client-other/ branch --force default
  marked working directory as branch default
  $ hg -R client-other/ commit -m "C-W"
  created new head
  $ ID_V=`hg -R client-other log -T '{node}\n' -r 'desc("C-V")'`
  $ ID_W=`hg -R client-other log -T '{node}\n' -r 'desc("C-W")'`
  $ hg -R client-other debugobsolete $ID_V $ID_W

(continue the same head)

  $ echo aaa >> client-racy/a
  $ hg -R client-racy/ commit -m "C-X"

Pushing

  $ hg -R client-racy push -r 'tip' > ./push-log 2>&1 &

  $ waiton $TESTTMP/readyfile

  $ hg -R client-other push -fr 'tip' --new-branch
  pushing to ssh://user@dummy/server
  searching for changes
  remote: adding changesets
  remote: adding manifests
  remote: adding file changes
  remote: added 1 changesets with 0 changes to 1 files (+1 heads)
  remote: 1 new obsolescence markers

  $ release $TESTTMP/watchfile

Check the result of the push

  $ cat ./push-log
  pushing to ssh://user@dummy/server
  searching for changes
  wrote ready: $TESTTMP/readyfile
  waiting on: $TESTTMP/watchfile
  abort: push failed:
  'repository changed while pushing - please try again'

  $ hg -R server debugobsolete
  b0ee3d6f51bc4c0ca6d4f2907708027a6c376233 720c5163ecf64dcc6216bee2d62bf3edb1882499 0 (Thu Jan 01 00:00:00 1970 +0000) {'user': 'test'}
  720c5163ecf64dcc6216bee2d62bf3edb1882499 39bc0598afe90ab18da460bafecc0fa953b77596 0 (Thu Jan 01 00:00:00 1970 +0000) {'user': 'test'}
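
(The two markers above chain up: C-Q was rewritten into C-V, which was in
turn rewritten into C-W, so both show up as "x" in the hidden graph below.
A minimal Python sketch of following such a successor chain; the data is
the abbreviated node pairs from the output above:)

markers = {'b0ee3d6f51bc': '720c5163ecf6',   # C-Q -> C-V
           '720c5163ecf6': '39bc0598afe9'}   # C-V -> C-W

def latest_successor(node, markers):
    # Walk the marker chain until reaching a node that was never rewritten.
    while node in markers:
        node = markers[node]
    return node

print(latest_successor('b0ee3d6f51bc', markers))  # 39bc0598afe9, i.e. C-W
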
  $ hg -R server graph --hidden
  o 39bc0598afe9 C-W (default)
  |
  | o a98a47d8b85b C-U (default)
  |/
  x b0ee3d6f51bc C-Q (default)
  |
  | o 3d57ed3c1091 C-T (other)
  | |
  | o 2efd43f7b5ba C-S (default)
  | |
  | | o de7b9e2ba3f6 C-R (other)
  | |/
  | o 1b58ee3f79e5 C-P (default)
  | |
  | o d0a85b2252a9 C-O (other)
  |/
  | x 720c5163ecf6 C-V (default)
  |/
  o 55a6f1c01b48 C-Z (other)
  |
  o 866a66e18630 C-N (default)
  |\
  +---o 6fd3090135df C-M (default)
  | |
  | o cac2cead0ff0 C-L (default)
  | |
  o | be705100c623 C-K (default)
  |\|
  o | d603e2c0cdd7 C-E (default)
  | |
  | o 59e76faf78bd C-D (default)
  | |
  | | o 89420bf00fae C-J (default)
  | | |
  | | | o b35ed749f288 C-I (my-second-test-branch)
  | | |/
  | | o 75d69cba5402 C-G (default)
  | | |
  | | | o 833be552cfe6 C-H (my-first-test-branch)
  | | |/
  | | o d9e379a8c432 C-F (default)
  | | |
  +---o 51c544a58128 C-C (default)
  | |
  | o a9149a1428e2 C-B (default)
  | |
  o | 98217d5a1659 C-A (default)
  |/
  o 842e2fac6304 C-ROOT (default)
